Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages; completing each stage is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission.

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run so that reviewers can see the final implementation and output. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

In addition to implementing code, there is a writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a PDF document. There is a writeup template that can be used to guide the writing process. Completing the code template and writeup template will cover all of the rubric points for this project.

The rubric contains "Stand Out Suggestions" for enhancing the project beyond the minimum requirements. The stand out suggestions are optional. If you decide to pursue them, you can include the code in this IPython notebook and also discuss the results in the writeup file.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited, typically by double-clicking the cell to enter edit mode.


Step 0: Load The Data

In [79]:
# Load pickled data
import pickle
import numpy as np
import tensorflow as tf
import pandas as pd
import matplotlib.pyplot as plt
import cv2
import os
from sklearn.utils import shuffle
from sklearn.metrics import confusion_matrix

# Visualizations will be shown in the notebook.
%matplotlib inline

training_file = "data/train.p"
validation_file= "data/valid.p"
testing_file = "data/test.p"
signnames_file = "signnames.csv"

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(validation_file, mode='rb') as f:
    valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
with open(signnames_file) as f:
    f.readline() # Strip the header
    tuples = [line.strip().split(',') for line in f]
    sign_names = {int(t[0]): t[1] for t in tuples}
    
X_train, y_train = shuffle(train['features'], train['labels'])
X_valid, y_valid = valid['features'], valid['labels']
X_test, y_test = test['features'], test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 1D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height), representing the original width and height of the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2), representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES
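
Since the pickled images are resized to 32 by 32, the original bounding-box coordinates would need to be rescaled before they could be used on them; a minimal sketch, using a hypothetical rescale_coords helper (not part of the project code):

def rescale_coords(size, coords, target=32):
    # Hypothetical helper: map a bounding box given in original-image
    # coordinates onto the 32x32 resized image.
    (w, h) = size              # original (width, height) from 'sizes'
    (x1, y1, x2, y2) = coords  # bounding box from 'coords'
    sx, sy = target / w, target / h
    return (int(x1 * sx), int(y1 * sy), int(x2 * sx), int(y2 * sy))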

Complete the basic data summary below. Use python, numpy and/or pandas methods to calculate the data summary rather than hard coding the results. For example, the shape attribute of a numpy array or pandas DataFrame might be useful for calculating some of the summary results.

Provide a Basic Summary of the Data Set Using Python, Numpy and/or Pandas

In [2]:
### Replace each question mark with the appropriate value. 
### Use python, pandas or numpy methods rather than hard coding the results

# Number of training examples
n_train = X_train.shape[0]

# Number of validation examples
n_valid = X_valid.shape[0]

# Number of testing examples.
n_test = X_test.shape[0]

# What's the shape of a traffic sign image?
image_shape = X_train.shape[1:]

# How many unique classes/labels are there in the dataset?
n_classes = len(set(y_train))

print("Number of training examples =", n_train)
print("Number of validation examples =", n_valid)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Number of training examples = 34799
Number of validation examples = 4410
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43

Include an exploratory visualization of the dataset

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended; suggestions include plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.

In [3]:
### Data exploration visualization code goes here.

# Plot Training / Validation / Test summary counts
for (data, name) in [[y_train, "training"], [y_valid, "validation"], [y_test, "test"]]:
    df = pd.DataFrame({'label': data})
    counts = df.groupby(['label']).agg({'label': 'count'})
    counts.plot(kind='bar', title="counts of each sign in %s data" % name, figsize=(15,4), rot=0)
    plt.xlabel("sign")
    plt.show()
    
# Gather 5 example images per label
examples_per_sign = 5
total = 0
example = {}
for (img,label) in zip(X_train, y_train):
    example.setdefault(label, [])
    if len(example[label]) < examples_per_sign:
        example[label].append(img)
        total += 1
    if total == n_classes * examples_per_sign:
        break

for label in sorted(example.keys()):
    fig = plt.figure()
    print(sign_names[label])
    for i in range(examples_per_sign):
        plt.subplot(1,examples_per_sign,i+1)
        plt.imshow(example[label][i])
    plt.show()
Speed limit (20km/h)
Speed limit (30km/h)
Speed limit (50km/h)
Speed limit (60km/h)
Speed limit (70km/h)
Speed limit (80km/h)
End of speed limit (80km/h)
Speed limit (100km/h)
Speed limit (120km/h)
No passing
No passing for vehicles over 3.5 metric tons
Right-of-way at the next intersection
Priority road
Yield
Stop
No vehicles
Vehicles over 3.5 metric tons prohibited
No entry
General caution
Dangerous curve to the left
Dangerous curve to the right
Double curve
Bumpy road
Slippery road
Road narrows on the right
Road work
Traffic signals
Pedestrians
Children crossing
Bicycles crossing
Beware of ice/snow
Wild animals crossing
End of all speed and passing limits
Turn right ahead
Turn left ahead
Ahead only
Go straight or right
Go straight or left
Keep right
Keep left
Roundabout mandatory
End of no passing
End of no passing by vehicles over 3.5 metric tons

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture
  • Play around with preprocessing techniques (normalization, RGB to grayscale, etc.)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required that you be familiar with the approach used in the paper, but it's good practice to try to read papers like these.

NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

Pre-process the Data Set (normalization, grayscale, etc.)

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project.

In [10]:
### Preprocess the data here. Preprocessing steps could include normalization, converting to grayscale, etc.
### Feel free to use as many code cells as needed.
def preprocess(images):
    def denoise(img):
        """Denoising did not seem to improve accuracy"""
        return cv2.fastNlMeansDenoisingColored(img, h=10)
        
    def normalize(img_yuv):
        """Normalization did not seem to improve accuracy"""
        
        # Global equalization
        img = img_yuv.copy()
        img[:,:,0] = cv2.equalizeHist(img[:,:,0])

        # Local equalization
        clahe = cv2.createCLAHE(clipLimit=20.0, tileGridSize=(8,8))
        img[:,:,0] = clahe.apply(img[:,:,0])
        
        return img
    
    def convert_to_yuv(img):
        """Conversion into YUV colorspace is effective"""
        return cv2.cvtColor(img, cv2.COLOR_RGB2YUV)
    
    return [convert_to_yuv(img) for img in images]

X_train_p = preprocess(X_train)
X_valid_p = preprocess(X_valid)
X_test_p = preprocess(X_test)
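
# As noted above, converting to grayscale is another preprocessing option.
# A minimal sketch of a hypothetical alternative (not used in this pipeline;
# the network's input_depth would need to become 1):
def to_grayscale(img):
    gray = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    return gray[..., np.newaxis]  # retain a channel axis for the conv input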


# In "Traffic Sign Recognition with Multi-Scale Convolutional Networks", referenced above
# They augmented the dataset by producing 5x permutations on the original training set to
# aid in training
def permute_image(img):
    (w, h) = img.shape[:2]
    center = (w / 2, h / 2)
    rotation = np.random.random()*30-15;  # +/- 15 degrees rotation
    scale = 1.0+np.random.random()*0.2-0.1  # +/- 10% scaling
    M = cv2.getRotationMatrix2D(center, rotation, scale)
    rotated = cv2.warpAffine(img, M, (w, h))
    return rotated

permutation_factor = 5
# Note: each right-hand side below is fully evaluated before the list is
# extended, so the perturbed copies come from the original training set.
X_train_p += [permute_image(img) for img in X_train_p * permutation_factor]
y_train_p = list(y_train)
y_train_p += list(y_train) * permutation_factor
Model Architecture

In [5]:
### Define your architecture here.

def conv2d(x, output, stride, mu, sigma, name):
    weights = tf.Variable(tf.truncated_normal(output, mu, sigma), name=name+"_weights")
    biases = tf.Variable(tf.zeros(output[3]), name=name+"_biases")
    strides = [1, stride, stride, 1]
    padding = 'VALID'
    return tf.nn.conv2d(x, weights, strides, padding, name=name+"_conv") + biases

def maxpool2d(x, k=2, name=""):
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, k, k, 1], padding='SAME', 
                          name=name+"_pool")

def fullyconnected(x, output, mu, sigma, name):
    weights = tf.Variable(tf.truncated_normal(output, mu, sigma), name=name+"_weights")
    biases = tf.Variable(tf.zeros(output[1]), name=name+"_biases")
    return tf.add(tf.matmul(x, weights), biases)

def LeNet(x, input_depth=1, n_classes=10):
    # Arguments for tf.truncated_normal, which randomly initializes the weights for each layer (biases start at zero)
    mu = 0
    sigma = 0.1
        
    # Layer 1: Convolutional. Input = 32x32xinput_depth. Output = 28x28x6.
    layer1 = conv2d(x, (5,5,input_depth,6), 1, mu, sigma, name="layer1")
    layer1 = tf.nn.relu(layer1, name="layer1_relu")
    layer1 = maxpool2d(layer1, 2, name="layer1")

    # Layer 2: Convolutional. Output = 10x10x16.
    layer2 = conv2d(layer1, (5,5,6,16), 1, mu, sigma, name="layer2")
    layer2 = tf.nn.relu(layer2, name="layer2_relu")
    layer2 = maxpool2d(layer2, 2, name="layer2")
    
    # Flatten. Input = 5x5x16. Output = 400.
    flatten = tf.contrib.layers.flatten(layer2)
    
    # Layer 3: Fully Connected. Input = 400. Output = 120.
    layer3 = fullyconnected(flatten, [400, 120], mu, sigma, name="layer3")
    layer3 = tf.nn.relu(layer3, name="layer3_relu")

    # Layer 4: Fully Connected. Input = 120. Output = 84.
    layer4 = fullyconnected(layer3, [120,84], mu, sigma, name="layer4")
    layer4 = tf.nn.relu(layer4, name="layer4_relu")

    # Layer 5: Fully Connected. Input = 84. Output = n_classes.
    logits = fullyconnected(layer4, [84, n_classes], mu, sigma, name="layer5")
    
    return logits

Train, Validate and Test the Model

A validation set can be used to assess how well the model is performing. A low accuracy on the training and validation sets implies underfitting. A high accuracy on the training set but a low accuracy on the validation set implies overfitting.

In [6]:
### Train your model here.
### Calculate and report the accuracy on the training and validation set.
### Once a final model architecture is selected, 
### the accuracy on the test set should be calculated and reported as well.
### Feel free to use as many code cells as needed.

rate = 0.00001
input_depth = image_shape[2]

x = tf.placeholder(tf.float32, (None, 32, 32, input_depth), name="X")
y = tf.placeholder(tf.int32, (None), name="y")
one_hot_y = tf.one_hot(y, n_classes)

logits = LeNet(x, input_depth, n_classes)
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=one_hot_y)
loss_operation = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate = rate, name="optimizer")
training_operation = optimizer.minimize(loss_operation)

correct_prediction = tf.equal(tf.argmax(logits, 1), tf.argmax(one_hot_y, 1))
accuracy_operation = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

def evaluate(sess, X_data, y_data):
    num_examples = len(X_data)
    total_accuracy = 0
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        accuracy = sess.run(accuracy_operation, feed_dict={x: batch_x, y: batch_y})
        total_accuracy += (accuracy * len(batch_x))
    return total_accuracy / num_examples
In [7]:
saver = tf.train.Saver()
sess = tf.Session()
do_restore = True
try:
    if do_restore:
        print("restoring from save file")
        saver.restore(sess, './lenet_signs')
    else:
        print("initializing variables")
        sess.run(tf.global_variables_initializer())
except Exception as ex:
    print(ex)
    print("initializing variables")
    sess.run(tf.global_variables_initializer())
restoring from save file
In [18]:
# NOTE: reassigning rate here has no effect; the Adam optimizer above was
# already constructed with the earlier learning rate (0.00001).
rate = 0.0001
EPOCHS = 100
BATCH_SIZE = 128
num_examples = len(X_train_p)

print("Training...")
print()
for i in range(EPOCHS):   
    for offset in range(0, num_examples, BATCH_SIZE):
        # Get the batch
        end = offset + BATCH_SIZE
        batch_x, batch_y = X_train_p[offset:end], y_train_p[offset:end]
        
        # Run the training operation
        sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
        
    validation_accuracy = evaluate(sess, X_valid_p, y_valid)
    print("EPOCH {} ...".format(i+1))
    print("Validation Accuracy = {:.3f}".format(validation_accuracy))
    print()
Training...

EPOCH 1 ...
Validation Accuracy = 0.739

EPOCH 2 ...
Validation Accuracy = 0.741

EPOCH 3 ...
Validation Accuracy = 0.741

EPOCH 4 ...
Validation Accuracy = 0.742

EPOCH 5 ...
Validation Accuracy = 0.742

EPOCH 6 ...
Validation Accuracy = 0.746

EPOCH 7 ...
Validation Accuracy = 0.746

EPOCH 8 ...
Validation Accuracy = 0.747

EPOCH 9 ...
Validation Accuracy = 0.748

EPOCH 10 ...
Validation Accuracy = 0.749

EPOCH 11 ...
Validation Accuracy = 0.751

EPOCH 12 ...
Validation Accuracy = 0.750

EPOCH 13 ...
Validation Accuracy = 0.749

EPOCH 14 ...
Validation Accuracy = 0.750

EPOCH 15 ...
Validation Accuracy = 0.752

EPOCH 16 ...
Validation Accuracy = 0.754

EPOCH 17 ...
Validation Accuracy = 0.753

EPOCH 18 ...
Validation Accuracy = 0.756

EPOCH 19 ...
Validation Accuracy = 0.755

EPOCH 20 ...
Validation Accuracy = 0.755

EPOCH 21 ...
Validation Accuracy = 0.757

EPOCH 22 ...
Validation Accuracy = 0.757

EPOCH 23 ...
Validation Accuracy = 0.759

EPOCH 24 ...
Validation Accuracy = 0.761

EPOCH 25 ...
Validation Accuracy = 0.761

EPOCH 26 ...
Validation Accuracy = 0.761

EPOCH 27 ...
Validation Accuracy = 0.763

EPOCH 28 ...
Validation Accuracy = 0.762

EPOCH 29 ...
Validation Accuracy = 0.764

EPOCH 30 ...
Validation Accuracy = 0.763

EPOCH 31 ...
Validation Accuracy = 0.767

EPOCH 32 ...
Validation Accuracy = 0.766

EPOCH 33 ...
Validation Accuracy = 0.768

EPOCH 34 ...
Validation Accuracy = 0.770

EPOCH 35 ...
Validation Accuracy = 0.769

EPOCH 36 ...
Validation Accuracy = 0.773

EPOCH 37 ...
Validation Accuracy = 0.773

EPOCH 38 ...
Validation Accuracy = 0.776

EPOCH 39 ...
Validation Accuracy = 0.774

EPOCH 40 ...
Validation Accuracy = 0.774

EPOCH 41 ...
Validation Accuracy = 0.776

EPOCH 42 ...
Validation Accuracy = 0.776

EPOCH 43 ...
Validation Accuracy = 0.777

EPOCH 44 ...
Validation Accuracy = 0.779

EPOCH 45 ...
Validation Accuracy = 0.778

EPOCH 46 ...
Validation Accuracy = 0.780

EPOCH 47 ...
Validation Accuracy = 0.778

EPOCH 48 ...
Validation Accuracy = 0.781

EPOCH 49 ...
Validation Accuracy = 0.779

EPOCH 50 ...
Validation Accuracy = 0.782

EPOCH 51 ...
Validation Accuracy = 0.783

EPOCH 52 ...
Validation Accuracy = 0.781

EPOCH 53 ...
Validation Accuracy = 0.782

EPOCH 54 ...
Validation Accuracy = 0.785

EPOCH 55 ...
Validation Accuracy = 0.787

EPOCH 56 ...
Validation Accuracy = 0.784

EPOCH 57 ...
Validation Accuracy = 0.785

EPOCH 58 ...
Validation Accuracy = 0.788

EPOCH 59 ...
Validation Accuracy = 0.789

EPOCH 60 ...
Validation Accuracy = 0.788

EPOCH 61 ...
Validation Accuracy = 0.789

EPOCH 62 ...
Validation Accuracy = 0.791

EPOCH 63 ...
Validation Accuracy = 0.789

EPOCH 64 ...
Validation Accuracy = 0.790

EPOCH 65 ...
Validation Accuracy = 0.792

EPOCH 66 ...
Validation Accuracy = 0.792

EPOCH 67 ...
Validation Accuracy = 0.790

EPOCH 68 ...
Validation Accuracy = 0.791

EPOCH 69 ...
Validation Accuracy = 0.790

EPOCH 70 ...
Validation Accuracy = 0.791

EPOCH 71 ...
Validation Accuracy = 0.790

EPOCH 72 ...
Validation Accuracy = 0.794

EPOCH 73 ...
Validation Accuracy = 0.792

EPOCH 74 ...
Validation Accuracy = 0.791

EPOCH 75 ...
Validation Accuracy = 0.793

EPOCH 76 ...
Validation Accuracy = 0.794

EPOCH 77 ...
Validation Accuracy = 0.795

EPOCH 78 ...
Validation Accuracy = 0.795

EPOCH 79 ...
Validation Accuracy = 0.793

EPOCH 80 ...
Validation Accuracy = 0.797

EPOCH 81 ...
Validation Accuracy = 0.796

EPOCH 82 ...
Validation Accuracy = 0.797

EPOCH 83 ...
Validation Accuracy = 0.797

EPOCH 84 ...
Validation Accuracy = 0.798

EPOCH 85 ...
Validation Accuracy = 0.799

EPOCH 86 ...
Validation Accuracy = 0.798

EPOCH 87 ...
Validation Accuracy = 0.797

EPOCH 88 ...
Validation Accuracy = 0.800

EPOCH 89 ...
Validation Accuracy = 0.797

EPOCH 90 ...
Validation Accuracy = 0.797

EPOCH 91 ...
Validation Accuracy = 0.798

EPOCH 92 ...
Validation Accuracy = 0.798

EPOCH 93 ...
Validation Accuracy = 0.800

EPOCH 94 ...
Validation Accuracy = 0.799

EPOCH 95 ...
Validation Accuracy = 0.799

EPOCH 96 ...
Validation Accuracy = 0.799

EPOCH 97 ...
Validation Accuracy = 0.801

EPOCH 98 ...
Validation Accuracy = 0.801

EPOCH 99 ...
Validation Accuracy = 0.800

EPOCH 100 ...
Validation Accuracy = 0.801

In [19]:
saver.save(sess, './lenet_signs')
print("Model saved")
Model saved
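
Rather than saving unconditionally after training, one could checkpoint only when validation accuracy improves. A minimal sketch reusing the training loop, evaluate, and saver from above (the './lenet_signs_best' path is an assumed example):

best_accuracy = 0.0
for i in range(EPOCHS):
    for offset in range(0, num_examples, BATCH_SIZE):
        end = offset + BATCH_SIZE
        batch_x, batch_y = X_train_p[offset:end], y_train_p[offset:end]
        sess.run(training_operation, feed_dict={x: batch_x, y: batch_y})
    validation_accuracy = evaluate(sess, X_valid_p, y_valid)
    if validation_accuracy > best_accuracy:
        # Keep only the best model seen so far
        best_accuracy = validation_accuracy
        saver.save(sess, './lenet_signs_best')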
In [21]:
# Accuracy on the training set.
# Not statistically useful for generalization; however, near-perfect accuracy
# here combined with lower accuracy on the validation and test data is a signal
# that we have likely overfit the training data.
train_accuracy = evaluate(sess, X_train_p, y_train_p)
print("Train Accuracy = {:.3f}".format(train_accuracy))

# Evaluate Validation accuracy
valid_accuracy = evaluate(sess, X_valid_p, y_valid)
print("Validation Accuracy = {:.3f}".format(valid_accuracy))

# Evaluate Test accuracy
test_accuracy = evaluate(sess, X_test_p, y_test)
print("Test Accuracy = {:.3f}".format(test_accuracy))
Train Accuracy = 0.899
Validation Accuracy = 0.801
Test Accuracy = 0.797
In [62]:
### More on accuracy, display a confusion matrix of the test results
prediction = sess.run(tf.argmax(logits, 1), feed_dict={x: X_test_p})
cnf_matrix = confusion_matrix(y_test, prediction)

# Normalize
cnf_matrix = cnf_matrix.astype('float') / cnf_matrix.sum(axis=1)[:, np.newaxis]

# Hack to get the matrix to display nicely by encouraging all values to be single-digit integers
cnf_display = (cnf_matrix*10).astype('int')

np.set_printoptions(threshold=2000, linewidth=500)
print(cnf_display)
np.set_printoptions()
[[3 4 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 7 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 2 0 0 5 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 1 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 6 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 0 0 3 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 1 0 2 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 5 0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 1 0 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 0 4 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 0 1 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 3 2 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 7 0 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 1 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 3 0 1 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 8 0 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 9 0 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 1 4 0 0 0]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 0 0 3 0 0]
 [0 0 0 0 0 0 0 0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 6 1]
 [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 7]]
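
As an alternative to the single-digit display hack above, the normalized confusion matrix can be rendered as a heatmap; a minimal sketch using the cnf_matrix computed above:

plt.figure(figsize=(10, 10))
plt.imshow(cnf_matrix, interpolation='nearest', cmap=plt.cm.Blues)
plt.title("Normalized confusion matrix (test set)")
plt.colorbar()
plt.xlabel("predicted label")
plt.ylabel("true label")
plt.show()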
In [78]:
## Take a closer look at some signs that we did poorly on (< 40% of results correct)
df = pd.DataFrame({'label': y_train})
counts = np.array(df.groupby(['label']).agg({'label': 'count'}))
diag = np.diagonal(cnf_matrix)
poor = [i for (i, v) in enumerate(diag) if v < 0.40]
for offset in poor:
    print(sign_names[offset], "-", counts[offset], "examples in training set")
    classes = [i for (i, v) in enumerate(cnf_matrix[offset]) if v > 0.05]
    for index in classes:
        print("  ", "%4.1f%% -" % (cnf_matrix[offset][index]*100), sign_names[index])
Speed limit (20km/h) - [180] examples in training set
   35.0% - Speed limit (20km/h)
   43.3% - Speed limit (30km/h)
   18.3% - Speed limit (70km/h)
Dangerous curve to the right - [300] examples in training set
   21.1% - Right-of-way at the next intersection
   26.7% - Dangerous curve to the right
    5.6% - Slippery road
   10.0% - Children crossing
   22.2% - Beware of ice/snow
Road narrows on the right - [240] examples in training set
    5.6% - Right-of-way at the next intersection
    5.6% - General caution
   26.7% - Road narrows on the right
   10.0% - Road work
   18.9% - Bicycles crossing
   12.2% - Beware of ice/snow
Bicycles crossing - [240] examples in training set
   13.3% - Road work
   31.1% - Bicycles crossing
   27.8% - Beware of ice/snow
    7.8% - Wild animals crossing
Beware of ice/snow - [390] examples in training set
   32.0% - Right-of-way at the next intersection
    8.0% - Slippery road
    9.3% - Children crossing
   34.0% - Beware of ice/snow
Roundabout mandatory - [300] examples in training set
   43.3% - Go straight or left
   35.6% - Roundabout mandatory

Analysis of the above:

All of the worst-performing sign classes had few examples in the training set, which leads both to a low a priori expectation for those signs and to insufficient training examples to differentiate them well.

For example, "Speed limit (20km/h)" had only 180 examples in the training set, where there were > 1000 examples of all the other speed limit signs, and ~2000 examples each of "Speed limit (30km/h)" and "Speed limit (50km/h)". With close to a 10:1 a priori bias towards "Speed limit (30km/h)" over "Speed limit (20km/h)", it should not be completely unexpected that the misclassification results tend to favor classification as 30km/h over 20km/h.

This overall trend continues with the other significantly misclassified signs: all of them have relatively few examples in the training data compared to the signs with better classification rates.

The primary remedy would be to increase the amount of training data for the classes of signs that are currently significantly under-represented; a sketch of one approach follows.
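
A minimal sketch of that remedy, reusing permute_image from above to oversample each under-represented class up to a target count (the target of 1000 is an assumed example, not a tuned value):

def balance_classes(X, y, target=1000):
    # Oversample each class to at least `target` examples by appending
    # randomly perturbed copies of its existing images.
    X_out, y_out = list(X), list(y)
    by_class = {}
    for (img, label) in zip(X, y):
        by_class.setdefault(label, []).append(img)
    for (label, imgs) in by_class.items():
        for i in range(max(0, target - len(imgs))):
            X_out.append(permute_image(imgs[i % len(imgs)]))
            y_out.append(label)
    return X_out, y_out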


Step 3: Test a Model on New Images

To give yourself more insight into how your model is working, download at least five pictures of German traffic signs from the web and use your model to predict the traffic sign type.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Load and Output the Images

In [95]:
### Load the images and plot them here.
import matplotlib.image as mpimg

# Load new images, all images have been pre-cropped to square images
imgdir = 'new_images'
X_new_images = []
for filename in os.listdir(imgdir):
    if filename.endswith('.jpg'):
        img = mpimg.imread(os.path.join(imgdir, filename))
        img = cv2.resize(img, (32,32), interpolation=cv2.INTER_AREA)
        X_new_images.append(img)

# Plot the new images
vis = np.concatenate(X_new_images[0:6], axis=1)
plt.imshow(vis)
plt.show()
vis = np.concatenate(X_new_images[6:], axis=1)
plt.imshow(vis)
plt.show()

Predict the Sign Type for Each Image

In [144]:
### Run the predictions here and use the model to output the prediction for each image.
### Make sure to pre-process the images with the same pre-processing pipeline used earlier.
### Feel free to use as many code cells as needed.
X_new_images_p = preprocess(X_new_images)
y_new_images = [33, 17, 27, 3, 2, 14, 11, 18, 13, 28, 38, 40]

prediction = sess.run(tf.argmax(logits, 1), feed_dict={x: X_new_images_p})

for (i,v) in enumerate(prediction):
    annotation = "Actual:    %s\nPredicted: %s" % (
        sign_names[y_new_images[i]], sign_names[v])
    fig = plt.figure(figsize=(1,1))
    plt.imshow(X_new_images[i])
    plt.annotate(annotation,xy=(0,0), xytext=(60,25), fontsize=12, family='monospace')
    plt.show()

Analyze Performance

In [129]:
### Calculate the accuracy for these new images.
### For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate on these new images.

accuracy = np.mean(np.array(y_new_images) == prediction)
print("Accuracy = %4.2f%%" % (100*accuracy))
Accuracy = 75.00%

Output Top 5 Softmax Probabilities For Each Image Found on the Web

For each of the new images, print out the model's softmax probabilities to show the certainty of the model's predictions (limit the output to the top 5 probabilities for each image). tf.nn.top_k could prove helpful here.

The example below demonstrates how tf.nn.top_k can be used to find the top k predictions for each image.

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the corresponding class ids.

Take this numpy array as an example. The values in the array represent predictions. The array contains softmax probabilities for five candidate images with six possible classes. tf.nn.top_k is used to choose the three classes with the highest probability:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row, we get [ 0.34763842, 0.24879643, 0.12789202]; you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.

In [161]:
### Print out the top five softmax probabilities for the predictions on the German traffic sign images found on the web. 
k = 5
top_k = tf.nn.top_k(tf.nn.softmax(logits), k=k)
top = sess.run(top_k, feed_dict={x: X_new_images_p})

for i in range(len(y_new_images)):
    print("\n       == Actual ==       -", sign_names[y_new_images[i]])
    for j in range(k):
        # top[0] holds the softmax probabilities (0-1); top[1] holds class ids
        print("  %22.20f - %s" % (top[0][i][j], sign_names[top[1][i][j]]))
       == Actual ==       - Turn right ahead
  1.00000000000000000000 - Turn right ahead
  0.00000000000000938110 - Yield
  0.00000000000000000000 - Ahead only
  0.00000000000000000000 - End of no passing
  0.00000000000000000000 - No passing

       == Actual ==       - No entry
  0.99942445755004882812 - No entry
  0.00057554931845515966 - Stop
  0.00000000000001650239 - Road work
  0.00000000000000000000 - Go straight or right
  0.00000000000000000000 - Yield

       == Actual ==       - Pedestrians
  0.99999964237213134766 - Pedestrians
  0.00000030158270192260 - Right-of-way at the next intersection
  0.00000000000874672557 - General caution
  0.00000000000024022743 - Slippery road
  0.00000000000000000000 - Traffic signals

       == Actual ==       - Speed limit (60km/h)
  0.80218791961669921875 - Speed limit (60km/h)
  0.19507561624050140381 - Speed limit (80km/h)
  0.00186467752791941166 - No passing for vehicles over 3.5 metric tons
  0.00084722309838980436 - Speed limit (50km/h)
  0.00000968348376773065 - Speed limit (100km/h)

       == Actual ==       - Speed limit (50km/h)
  0.95975565910339355469 - Speed limit (50km/h)
  0.03532156720757484436 - Speed limit (30km/h)
  0.00492282304912805557 - Speed limit (80km/h)
  0.00000000096775476521 - Speed limit (120km/h)
  0.00000000039890279968 - Speed limit (100km/h)

       == Actual ==       - Stop
  1.00000000000000000000 - Stop
  0.00000000000907242770 - Road work
  0.00000000000001902495 - No passing
  0.00000000000000922953 - Bicycles crossing
  0.00000000000000781041 - No entry

       == Actual ==       - Right-of-way at the next intersection
  0.99961996078491210938 - Slippery road
  0.00038006567046977580 - Right-of-way at the next intersection
  0.00000000012087592038 - Dangerous curve to the left
  0.00000000000000000000 - General caution
  0.00000000000000000000 - Dangerous curve to the right

       == Actual ==       - General caution
  1.00000000000000000000 - General caution
  0.00000004330680525300 - Pedestrians
  0.00000000000000000003 - Traffic signals
  0.00000000000000000000 - Right-of-way at the next intersection
  0.00000000000000000000 - Slippery road

       == Actual ==       - Yield
  1.00000000000000000000 - Yield
  0.00000000000000000181 - No vehicles
  0.00000000000000000044 - No entry
  0.00000000000000000000 - Priority road
  0.00000000000000000000 - Keep right

       == Actual ==       - Children crossing
  0.51066815853118896484 - Right-of-way at the next intersection
  0.45330968499183654785 - Beware of ice/snow
  0.02962992154061794281 - Children crossing
  0.00588124757632613182 - Dangerous curve to the right
  0.00033603637712076306 - Bicycles crossing

       == Actual ==       - Keep right
  1.00000000000000000000 - Keep right
  0.00000000000000017187 - Go straight or right
  0.00000000000000014774 - Turn left ahead
  0.00000000000000000073 - Ahead only
  0.00000000000000000000 - Speed limit (60km/h)

       == Actual ==       - Roundabout mandatory
  0.64722990989685058594 - Speed limit (70km/h)
  0.34869647026062011719 - Roundabout mandatory
  0.00342189194634556770 - Speed limit (120km/h)
  0.00063805322861298919 - Speed limit (100km/h)
  0.00000541396593689569 - Pedestrians

Note: Once you have completed all of the code implementations, you need to finalize your work by exporting the IPython Notebook as an HTML document. Before exporting the notebook to HTML, all of the code cells need to have been run. You can then export the notebook by using the menu above and navigating to File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.

Project Writeup

Once you have completed the code implementation, document your results in a project writeup using this template as a guide. The writeup can be in a markdown or pdf file.