SVM Model for Image Classification

Using an SVM to classify the MNIST data set - a collection of images of handwritten digits
In [1]:
import pandas as pd

MNIST data set:

Grayscale images of 28x28 pixels, represented as a flattened array of 784 pixels
Each pixel is represented by a pixel intensity value from 0-255

Download Link: https://www.kaggle.com/c/3004/download/train.csv

In [2]:
mnist_data = pd.read_csv("../data/mnist/train.csv")
mnist_data.tail()
Out[2]:
label pixel0 pixel1 pixel2 pixel3 pixel4 pixel5 pixel6 pixel7 pixel8 ... pixel774 pixel775 pixel776 pixel777 pixel778 pixel779 pixel780 pixel781 pixel782 pixel783
41995 0 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
41996 1 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
41997 7 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
41998 6 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
41999 9 0 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0

5 rows × 785 columns
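
As a sanity check of the flattened format, a row can be reshaped back to its 28x28 grid and displayed. This sketch is not part of the original notebook and assumes matplotlib is installed; row index 0 is an arbitrary choice.

In [ ]:
import matplotlib.pyplot as plt

# drop the label column and reshape the 784 pixel values back to 28x28
sample = mnist_data.iloc[0, 1:].values.reshape(28, 28)
plt.imshow(sample, cmap='gray')
plt.title(f"Label: {mnist_data.iloc[0, 0]}")
plt.show()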

Preparing our training and test data

The pixel intensities are divided by 255 so that they're all between 0 and 1

In [4]:
from sklearn.model_selection import train_test_split

features = mnist_data.columns[1:]
X = mnist_data[features]
y = mnist_data['label']

X_train, X_test, y_train, y_test = train_test_split(X/255., y, test_size=0.1, random_state=0)

Create an SVM classifier model

  • penalty can be 'l1' or 'l2'
  • dual is set to False, which is preferred when there are many more samples than features
In [5]:
from sklearn.svm import LinearSVC

clf_svm = LinearSVC(penalty="l2", dual=False, tol=1e-5)
clf_svm.fit(X_train, y_train)
Out[5]:
LinearSVC(C=1.0, class_weight=None, dual=False, fit_intercept=True,
     intercept_scaling=1, loss='squared_hinge', max_iter=1000,
     multi_class='ovr', penalty='l2', random_state=None, tol=1e-05,
     verbose=0)

Calculate accuracy of the model against the test set

In [6]:
from sklearn.metrics import accuracy_score

y_pred_svm = clf_svm.predict(X_test)
acc_svm = accuracy_score(y_test, y_pred_svm)
print('SVM accuracy:', acc_svm)
SVM accuracy: 0.9102380952380953
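
Overall accuracy hides which digits the model confuses with one another. As an optional check (not in the original run), a confusion matrix shows per-digit performance using sklearn's confusion_matrix:

In [ ]:
from sklearn.metrics import confusion_matrix

# rows are true digits 0-9, columns are predicted digits;
# off-diagonal counts show which pairs get confused
print(confusion_matrix(y_test, y_pred_svm))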

When your model has a number of hyperparameters, we've spoken earlier of the need to tune them to find the best possible model for your data set. Scikit-learn offers a specialized tool to perform exactly this tuning: GridSearchCV. It helps you choose the best possible model by trying out the different values of the hyperparameters that you specify.

  • A brute-force way to obtain the best parameters for the ML algorithm
  • Tries out all combinations of the parameters specified in the "grid"
  • Returns the combination of parameters with the highest accuracy score
  • Since it explores all combinations, this can take a long time

When we instantiate the LinearSVC estimator that will be trained with the various combinations of parameters specified in the grid, we can also pass in other arguments, which will remain constant during training.

The second argument to GridSearchCV is the grid containing our hyperparameter values. GridSearchCV will then run training on our data with every possible combination of these parameters.

The cv parameter specifies that we want the model to be cross-validated to mitigate overfitting. cv=3 means that the input data set will be split into three parts; this is threefold cross-validation. In each round, two of the three parts are used as training data and the remaining part as validation data.
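
To make the threefold split concrete, here is a minimal sketch (not part of the original notebook) that uses plain KFold on a toy array of nine samples; GridSearchCV actually uses a stratified variant for classifiers, but the mechanics are the same - each round trains on two thirds of the data and validates on the remaining third.

In [ ]:
import numpy as np
from sklearn.model_selection import KFold

toy = np.arange(9)  # stand-in for nine training samples
for train_idx, val_idx in KFold(n_splits=3).split(toy):
    # each of the three rounds holds out a different third
    print('train:', train_idx, 'validate:', val_idx)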

In [7]:
from sklearn.model_selection import GridSearchCV

penalties = ['l1', 'l2']
tolerances = [1e-3, 1e-4, 1e-5]

param_grid = {'penalty': penalties, 'tol': tolerances}

grid_search = GridSearchCV(LinearSVC(dual=False), param_grid, cv=3)
grid_search.fit(X_train, y_train)

grid_search.best_params_
Out[7]:
{'penalty': 'l1', 'tol': 0.0001}
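
Beyond best_params_, the fitted GridSearchCV object also exposes the mean cross-validated score of the winning combination and a full results table, which is a quick way to compare all six candidates:

In [ ]:
# mean cross-validated accuracy of the best parameter combination
print(grid_search.best_score_)

# per-combination results: parameters with mean and std of the CV scores
pd.DataFrame(grid_search.cv_results_)[['params', 'mean_test_score', 'std_test_score']]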

Plugging in the "best parameters" to redefine the model

In [8]:
clf_svm = LinearSVC(penalty="l1", dual=False, tol=1e-3)
clf_svm.fit(X_train, y_train)
Out[8]:
LinearSVC(C=1.0, class_weight=None, dual=False, fit_intercept=True,
     intercept_scaling=1, loss='squared_hinge', max_iter=1000,
     multi_class='ovr', penalty='l1', random_state=None, tol=0.001,
     verbose=0)
In [9]:
y_pred_svm = clf_svm.predict(X_test)
acc_svm = accuracy_score(y_test, y_pred_svm)
print('SVM accuracy:', acc_svm)
SVM accuracy: 0.9121428571428571
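
Manually redefining the model is optional: with its default refit=True, GridSearchCV refits the best parameter combination on the whole training set and exposes it as best_estimator_, which can be used for prediction directly.

In [ ]:
# equivalent prediction using the estimator refit by GridSearchCV
y_pred_best = grid_search.best_estimator_.predict(X_test)
print('SVM accuracy:', accuracy_score(y_test, y_pred_best))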