Artificial Intelligence Nanodegree

Convolutional Neural Networks


In this notebook, we train an MLP to classify images from the MNIST database.

1. Load MNIST Database

In [1]:
from keras.datasets import mnist

# use Keras to import pre-shuffled MNIST database
(X_train, y_train), (X_test, y_test) = mnist.load_data()

print("The MNIST database has a training set of %d examples." % len(X_train))
print("The MNIST database has a test set of %d examples." % len(X_test))
Using TensorFlow backend.
Downloading data from https://s3.amazonaws.com/img-datasets/mnist.npz
11493376/11490434 [==============================] - 4s 0us/step
The MNIST database has a training set of 60000 examples.
The MNIST database has a test set of 10000 examples.
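
As a quick, optional sanity check (not part of the original notebook), we can confirm the shape and type of the arrays that mnist.load_data() returns before any preprocessing.

In [ ]:
# optional check on the raw data
print('X_train shape:', X_train.shape)   # (60000, 28, 28)
print('X_test shape: ', X_test.shape)    # (10000, 28, 28)
print('pixel dtype:  ', X_train.dtype)   # uint8, values in [0, 255]
print('first labels: ', y_train[:5])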

2. Visualize the First Six Training Images

In [2]:
import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib.cm as cm
import numpy as np

# plot first six training images
fig = plt.figure(figsize=(20,20))
for i in range(6):
    ax = fig.add_subplot(1, 6, i+1, xticks=[], yticks=[])
    ax.imshow(X_train[i], cmap='gray')
    ax.set_title(str(y_train[i]))

3. View an Image in More Detail

In [3]:
def visualize_input(img, ax):
    ax.imshow(img, cmap='gray')
    height, width = img.shape   # img.shape is (rows, cols)
    thresh = img.max()/2.5
    for row in range(height):
        for col in range(width):
            ax.annotate(str(round(img[row][col], 2)), xy=(col, row),
                        horizontalalignment='center',
                        verticalalignment='center',
                        color='white' if img[row][col] < thresh else 'black')

fig = plt.figure(figsize = (12,12)) 
ax = fig.add_subplot(111)
visualize_input(X_train[0], ax)

4. Rescale the Images by Dividing Every Pixel in Every Image by 255

In [4]:
# rescale [0,255] --> [0,1]
X_train = X_train.astype('float32')/255
X_test = X_test.astype('float32')/255 
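
A quick check (added here as an optional sketch) confirms that the pixel values now lie in the range [0, 1].

In [ ]:
# confirm the rescaled pixel values lie in [0, 1]
print('min pixel value:', X_train.min())   # 0.0
print('max pixel value:', X_train.max())   # 1.0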

5. Encode Categorical Integer Labels Using a One-Hot Scheme

In [5]:
from keras.utils import np_utils

# print first ten (integer-valued) training labels
print('Integer-valued labels:')
print(y_train[:10])

# one-hot encode the labels
y_train = np_utils.to_categorical(y_train, 10)
y_test = np_utils.to_categorical(y_test, 10)

# print first ten (one-hot) training labels
print('One-hot labels:')
print(y_train[:10])
Integer-valued labels:
[5 0 4 1 9 2 1 3 1 4]
One-hot labels:
[[0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
 [1. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
 [0. 0. 1. 0. 0. 0. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
 [0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
 [0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]]
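
To make the encoding concrete, the sketch below reproduces the same transformation with plain NumPy for the first five labels printed above (illustrative only; the notebook itself uses np_utils.to_categorical).

In [ ]:
# manual one-hot encoding: row i gets a 1 in the column given by label i
labels = np.array([5, 0, 4, 1, 9])           # first five integer labels from above
one_hot = np.zeros((len(labels), 10))
one_hot[np.arange(len(labels)), labels] = 1
print(one_hot)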

6. Define the Model Architecture

In [30]:
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten

# define the model
model = Sequential()
model.add(Flatten(input_shape=X_train.shape[1:]))
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='softmax'))

# summarize the model
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten_5 (Flatten)          (None, 784)               0         
_________________________________________________________________
dense_14 (Dense)             (None, 512)               401920    
_________________________________________________________________
dropout_10 (Dropout)         (None, 512)               0         
_________________________________________________________________
dense_15 (Dense)             (None, 256)               131328    
_________________________________________________________________
dropout_11 (Dropout)         (None, 256)               0         
_________________________________________________________________
dense_16 (Dense)             (None, 10)                2570      
=================================================================
Total params: 535,818
Trainable params: 535,818
Non-trainable params: 0
_________________________________________________________________
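
The parameter counts above follow from the fully-connected layer formula (inputs × units, plus one bias per unit); the sketch below re-derives them as a check.

In [ ]:
# re-derive the parameter counts reported by model.summary()
print(784 * 512 + 512)    # dense layer with 512 units: 401920
print(512 * 256 + 256)    # dense layer with 256 units: 131328
print(256 * 10 + 10)      # output layer with 10 units:  2570
print(784*512 + 512 + 512*256 + 256 + 256*10 + 10)   # total: 535818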

7. Compile the Model

In [31]:
# compile the model
model.compile(loss='categorical_crossentropy', optimizer='rmsprop', 
              metrics=['accuracy'])
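
The categorical_crossentropy loss compares each one-hot label with the softmax output; for a single example it reduces to the negative log of the probability assigned to the true class. A small hand computation, using a hypothetical softmax output for illustration only:

In [ ]:
# cross-entropy for one example: -log(probability assigned to the true class)
y_true = np.array([0, 0, 0, 0, 0, 1, 0, 0, 0, 0])    # one-hot label for the digit 5
y_pred = np.array([0.05, 0.05, 0.05, 0.05, 0.05,
                   0.55, 0.05, 0.05, 0.05, 0.05])    # hypothetical softmax output
print(-np.sum(y_true * np.log(y_pred)))              # -log(0.55) ≈ 0.60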

8. Calculate the Classification Accuracy on the Test Set (Before Training)

In [32]:
# evaluate test accuracy (an untrained network should score near 10%, i.e., chance level for 10 classes)
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]

# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
Test accuracy: 14.3700%

9. Train the Model

In [33]:
from keras.callbacks import ModelCheckpoint   

# train the model
checkpointer = ModelCheckpoint(filepath='mnist.model.best.hdf5', 
                               verbose=1, save_best_only=True)
hist = model.fit(X_train, y_train, batch_size=128, epochs=10,
          validation_split=0.2, callbacks=[checkpointer],
          verbose=1, shuffle=True)
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
48000/48000 [==============================] - 6s 131us/step - loss: 0.2999 - acc: 0.9094 - val_loss: 0.1210 - val_acc: 0.9645

Epoch 00001: val_loss improved from inf to 0.12098, saving model to mnist.model.best.hdf5
Epoch 2/10
48000/48000 [==============================] - 5s 110us/step - loss: 0.1230 - acc: 0.9634 - val_loss: 0.1116 - val_acc: 0.9658

Epoch 00002: val_loss improved from 0.12098 to 0.11162, saving model to mnist.model.best.hdf5
Epoch 3/10
48000/48000 [==============================] - 5s 109us/step - loss: 0.0876 - acc: 0.9732 - val_loss: 0.0970 - val_acc: 0.9720

Epoch 00003: val_loss improved from 0.11162 to 0.09705, saving model to mnist.model.best.hdf5
Epoch 4/10
48000/48000 [==============================] - 5s 107us/step - loss: 0.0705 - acc: 0.9789 - val_loss: 0.0852 - val_acc: 0.9754

Epoch 00004: val_loss improved from 0.09705 to 0.08525, saving model to mnist.model.best.hdf5
Epoch 5/10
48000/48000 [==============================] - 6s 116us/step - loss: 0.0600 - acc: 0.9811 - val_loss: 0.0900 - val_acc: 0.9773

Epoch 00005: val_loss did not improve from 0.08525
Epoch 6/10
48000/48000 [==============================] - 5s 111us/step - loss: 0.0499 - acc: 0.9841 - val_loss: 0.0823 - val_acc: 0.9788

Epoch 00006: val_loss improved from 0.08525 to 0.08232, saving model to mnist.model.best.hdf5
Epoch 7/10
48000/48000 [==============================] - 5s 110us/step - loss: 0.0438 - acc: 0.9863 - val_loss: 0.0982 - val_acc: 0.9769

Epoch 00007: val_loss did not improve from 0.08232
Epoch 8/10
48000/48000 [==============================] - 5s 112us/step - loss: 0.0399 - acc: 0.9872 - val_loss: 0.0899 - val_acc: 0.9793

Epoch 00008: val_loss did not improve from 0.08232
Epoch 9/10
48000/48000 [==============================] - 5s 105us/step - loss: 0.0361 - acc: 0.9890 - val_loss: 0.0942 - val_acc: 0.9791

Epoch 00009: val_loss did not improve from 0.08232
Epoch 10/10
48000/48000 [==============================] - 6s 116us/step - loss: 0.0337 - acc: 0.9897 - val_loss: 0.0956 - val_acc: 0.9801

Epoch 00010: val_loss did not improve from 0.08232
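
The fit call returns a History object, and the log above shows the validation loss bottoming out around epoch 6 and drifting upward afterwards, a sign of mild overfitting. A minimal sketch for visualizing this (the 'loss'/'val_loss' keys match the quantities in the log):

In [ ]:
# plot training and validation loss recorded by the History object
plt.figure(figsize=(8, 4))
plt.plot(hist.history['loss'], label='training loss')
plt.plot(hist.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('loss')
plt.legend()
plt.show()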

10. Load the Model with the Best Validation Loss

In [34]:
# load the weights that yielded the best validation loss
model.load_weights('mnist.model.best.hdf5')

11. Calculate the Classification Accuracy on the Test Set

In [35]:
# evaluate test accuracy
score = model.evaluate(X_test, y_test, verbose=0)
accuracy = 100*score[1]

# print test accuracy
print('Test accuracy: %.4f%%' % accuracy)
Test accuracy: 97.9100%
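
With the best weights restored, we can also inspect individual predictions; the sketch below compares predicted and true digits for the first five test images (y_test is one-hot encoded at this point, so argmax recovers the integer labels).

In [ ]:
# compare predicted and true digits for the first few test images
predictions = model.predict(X_test[:5])
print('predicted digits:', np.argmax(predictions, axis=1))
print('true digits:     ', np.argmax(y_test[:5], axis=1))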
In [ ]: