Extracting Features from Images

Obtaining pixel intensities for an image and transforming them into usable features

Images are a very common input to machine learning models, and an image is typically represented as a matrix. Every cell in this matrix corresponds to a particular pixel in the image, and each pixel holds one or more values depending on the kind of image it is.

For a color image, every pixel is represented using three separate values. These are RGB values, where each value is a number between 0 and 255. For example, pure red is represented by (255, 0, 0), pure green by (0, 255, 0), and pure blue by (0, 0, 255). Color images are called three-channel images because each pixel requires three values to represent its information.

Grayscale images, on the other hand, require just one value per pixel. This value represents the intensity of that pixel and is typically stored as a number between 0 and 255; we usually divide by 255 to rescale it to a number between 0 and 1.
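As a minimal sketch of that rescaling (assuming NumPy is available; the pixel values here are invented for illustration):

import numpy as np

# A few hypothetical 8-bit grayscale intensities
pixels = np.array([0, 64, 128, 255], dtype=np.uint8)

# Dividing by 255.0 rescales the intensities to the range [0, 1]
normalized = pixels / 255.0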

Installation of OpenCV may be required

OpenCV == Open Source Computer Vision Library

In [1]:
!pip install opencv-python
Requirement already satisfied: opencv-python in /Users/kishan/anaconda3/lib/python3.6/site-packages
Requirement already satisfied: numpy>=1.11.1 in /Users/kishan/anaconda3/lib/python3.6/site-packages (from opencv-python)
In [1]:
import cv2

Load an image from which to extract pixel intensities

In [2]:
# Image with dimensions 173x130
imagePath = '../data/dog.jpg'

image = cv2.imread(imagePath)

View the image using matplotlib

In [3]:
%matplotlib inline

import matplotlib
import matplotlib.pyplot as plt

plt.imshow(image)
Out[3]:
<matplotlib.image.AxesImage at 0x20532fde978>
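One caveat worth noting: cv2.imread returns channels in BGR order, while matplotlib's imshow expects RGB, so the colors in the plot above may look swapped. If so, a quick fix is to convert the channel order before plotting:

# OpenCV loads channels as BGR; convert to RGB so matplotlib
# displays the colors correctly
rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(rgb_image)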

The image array contains the color intensity values for each pixel, with shape (height, width, channels). Note that OpenCV stores the channels in BGR order, not RGB

In [4]:
image.shape
Out[4]:
(130, 173, 3)
In [5]:
image
Out[5]:
array([[[199, 171, 164],
        [227, 206, 198],
        [249, 237, 231],
        ...,
        [253, 242, 234],
        [254, 243, 235],
        [254, 243, 235]],

       [[221, 195, 188],
        [228, 209, 201],
        [246, 237, 228],
        ...,
        [254, 243, 235],
        [255, 243, 237],
        [254, 242, 236]],

       [[227, 209, 198],
        [225, 210, 201],
        [238, 231, 222],
        ...,
        [254, 242, 236],
        [254, 242, 238],
        [253, 241, 237]],

       ...,

       [[203, 207, 208],
        [199, 203, 204],
        [200, 202, 203],
        ...,
        [ 98, 112, 118],
        [ 99, 113, 119],
        [ 98, 112, 118]],

       [[201, 202, 206],
        [203, 204, 208],
        [204, 205, 209],
        ...,
        [103, 115, 119],
        [107, 119, 123],
        [104, 116, 120]],

       [[200, 201, 205],
        [201, 202, 206],
        [203, 204, 208],
        ...,
        [106, 118, 122],
        [111, 123, 127],
        [108, 120, 124]]], dtype=uint8)

Each pixel holds three intensity values (in OpenCV's BGR order)

In [6]:
image[0][0]
Out[6]:
array([199, 171, 164], dtype=uint8)
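Since OpenCV stores channels in BGR order, the three values for a pixel can be unpacked like this (a small sketch):

# In OpenCV's BGR layout, the first value is blue, then green, then red
b, g, r = image[0, 0]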

Scale this image to a smaller size

In [7]:
size=(32, 32)
resized_image_feature_vector = cv2.resize(image, size)
In [8]:
plt.imshow(resized_image_feature_vector)
Out[8]:
<matplotlib.image.AxesImage at 0x2053613db38>
In [9]:
resized_image_feature_vector.shape
Out[9]:
(32, 32, 3)
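By default, cv2.resize uses bilinear interpolation (cv2.INTER_LINEAR). When shrinking an image, as here, cv2.INTER_AREA often preserves detail better; a sketch of passing it explicitly:

# INTER_AREA resamples using pixel area relation, which tends to
# work well when downscaling
resized_area = cv2.resize(image, size, interpolation=cv2.INTER_AREA)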
In [10]:
resized_image_feature_vector
Out[10]:
array([[[244, 237, 228],
        [251, 254, 253],
        [255, 250, 242],
        ...,
        [253, 247, 242],
        [255, 245, 238],
        [254, 243, 236]],

       [[126, 118, 108],
        [237, 218, 211],
        [166, 156, 144],
        ...,
        [255, 246, 242],
        [255, 245, 241],
        [254, 241, 233]],

       [[254, 253, 251],
        [255, 254, 252],
        [254, 253, 251],
        ...,
        [250, 239, 230],
        [243, 224, 214],
        [190, 177, 165]],

       ...,

       [[206, 206, 210],
        [208, 209, 213],
        [207, 208, 212],
        ...,
        [119, 131, 138],
        [121, 135, 141],
        [115, 129, 136]],

       [[207, 208, 211],
        [211, 209, 212],
        [213, 211, 214],
        ...,
        [110, 124, 130],
        [104, 118, 124],
        [107, 121, 127]],

       [[202, 204, 206],
        [211, 210, 213],
        [211, 209, 211],
        ...,
        [119, 132, 137],
        [100, 113, 118],
        [ 99, 112, 118]]], dtype=uint8)

The image array can be flattened into a one-dimensional array

So far we have viewed the image as a three-dimensional matrix. In some cases, when you feed an image into a model, you might want to represent it as a 1D array instead. This can be done using the flatten operation on the image matrix. The flatten function collapses all of the dimensions of the array: the original array had three dimensions, while the final array has just one. It is a vector, and the length of that vector is 3072, which is 32 multiplied by 32 multiplied by 3.

In [11]:
resized_flattened_image_feature_vector = resized_image_feature_vector.flatten()
In [12]:
len(resized_flattened_image_feature_vector)
Out[12]:
3072
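Flattening loses no information: NumPy flattens in row-major order, so the vector can be reshaped back into the original (height, width, channels) layout. A quick check:

# Reshape the flat vector back to 32x32x3 and verify it matches
restored = resized_flattened_image_feature_vector.reshape(32, 32, 3)
(restored == resized_image_feature_vector).all()  # True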
In [13]:
image_grayscale = cv2.imread(imagePath, cv2.IMREAD_GRAYSCALE)

Go ahead and view this image using matplotlib, and you can see it's the same dog, but this time in grayscale. The shape of the image will be different, though: it is a 130 by 173 image with no third dimension, because this is a single-channel image. The channel dimension isn't explicitly specified here because each pixel's intensity is represented as a scalar.

In [14]:
plt.imshow(image_grayscale)
Out[14]:
<matplotlib.image.AxesImage at 0x205361aa710>
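Note that matplotlib renders single-channel arrays with its default colormap, so the plot above will not actually look gray unless a grayscale colormap is requested explicitly:

# Pass cmap='gray' so the single-channel intensities are shown
# in actual gray tones rather than the default colormap
plt.imshow(image_grayscale, cmap='gray')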
In [15]:
image_grayscale.shape
Out[15]:
(130, 173)
In [16]:
image_grayscale
Out[16]:
array([[172, 206, 237, ..., 241, 242, 242],
       [196, 209, 235, ..., 242, 243, 242],
       [208, 209, 229, ..., 242, 242, 241],
       ...,
       [207, 203, 202, ..., 112, 113, 112],
       [203, 205, 206, ..., 115, 119, 116],
       [202, 203, 205, ..., 118, 123, 120]], dtype=uint8)
In [17]:
import numpy as np

expanded_image_grayscale = np.expand_dims(image_grayscale, axis=2)
expanded_image_grayscale.shape
Out[17]:
(130, 173, 1)
In [18]:
expanded_image_grayscale
Out[18]:
array([[[172],
        [206],
        [237],
        ...,
        [241],
        [242],
        [242]],

       [[196],
        [209],
        [235],
        ...,
        [242],
        [243],
        [242]],

       [[208],
        [209],
        [229],
        ...,
        [242],
        [242],
        [241]],

       ...,

       [[207],
        [203],
        [202],
        ...,
        [112],
        [113],
        [112]],

       [[203],
        [205],
        [206],
        ...,
        [115],
        [119],
        [116]],

       [[202],
        [203],
        [205],
        ...,
        [118],
        [123],
        [120]]], dtype=uint8)

If you display the actual image matrix, you will see that there is a single intensity value for each pixel, and this intensity is a number between 0 and 255. Here we used the expand_dims function to add a third axis, the axis at index 2, to our image. This adds an explicit dimension to represent the pixel intensity, so the shape of the image becomes 130 by 173 by 1.
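If you ever need to go the other way, np.squeeze reverses this operation; a minimal sketch:

# np.squeeze removes axes of length one, undoing expand_dims
squeezed = np.squeeze(expanded_image_grayscale, axis=2)
squeezed.shape  # (130, 173)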