Digit Classification With RBM Features


This tutorial is adapted from an open-source community example.

Introduction

This lab focuses on using a Bernoulli Restricted Boltzmann Machine (RBM) to classify handwritten digits. The RBM feature extractor is combined with a logistic regression classifier to predict the digits. The dataset consists of greyscale images whose pixel values can be interpreted as degrees of blackness on a white background.

VM Tips

Once the VM has started, click the top-left corner to switch to the Notebook tab and open Jupyter Notebook for practice.

You may sometimes need to wait a few seconds for Jupyter Notebook to finish loading. Validation of operations cannot be automated because of limitations in Jupyter Notebook.

If you run into issues while learning, feel free to ask Labby. Provide feedback after the session, and we will resolve the problem promptly.


Skills Graph

This lab exercises the following scikit-learn skills: sklearn/datasets, sklearn/preprocessing, sklearn/model_selection, sklearn/neural_network, sklearn/linear_model, sklearn/pipeline, sklearn/base, and sklearn/metrics.

Data Preparation

In this step, we prepare the data for training and testing. We use the load_digits function from sklearn.datasets to load the dataset. We then artificially generate more labeled data by perturbing the images with linear shifts of one pixel in each direction, and we scale the pixel values to the range [0, 1].

import numpy as np
from scipy.ndimage import convolve
from sklearn import datasets
from sklearn.preprocessing import minmax_scale
from sklearn.model_selection import train_test_split

def nudge_dataset(X, Y):
    """
    This produces a dataset 5 times bigger than the original one,
    by moving the 8x8 images in X around by 1px to left, right, down, up
    """
    direction_vectors = [
        [[0, 1, 0], [0, 0, 0], [0, 0, 0]],
        [[0, 0, 0], [1, 0, 0], [0, 0, 0]],
        [[0, 0, 0], [0, 0, 1], [0, 0, 0]],
        [[0, 0, 0], [0, 0, 0], [0, 1, 0]],
    ]

    def shift(x, w):
        return convolve(x.reshape((8, 8)), mode="constant", weights=w).ravel()

    X = np.concatenate(
        [X] + [np.apply_along_axis(shift, 1, X, vector) for vector in direction_vectors]
    )
    Y = np.concatenate([Y for _ in range(5)], axis=0)
    return X, Y

X, y = datasets.load_digits(return_X_y=True)
X = np.asarray(X, "float32")
X, Y = nudge_dataset(X, y)
X = minmax_scale(X, feature_range=(0, 1))  ## 0-1 scaling

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
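
As a quick sanity check, you can print the resulting shapes. With the standard load_digits dataset of 1,797 samples, nudging yields 8,985 samples, which the 80/20 split divides into 7,188 training and 1,797 test samples (a small illustrative addition, not part of the original example):

print(X_train.shape, X_test.shape)
## Expected: (7188, 64) (1797, 64)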

Model Definition

In this step, we define the classification pipeline: a BernoulliRBM feature extractor followed by a logistic regression classifier. We use the BernoulliRBM class from sklearn.neural_network and the LogisticRegression class from sklearn.linear_model, and combine the two models into a Pipeline object named rbm_features_classifier.

from sklearn import linear_model
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

logistic = linear_model.LogisticRegression(solver="newton-cg", tol=1)
rbm = BernoulliRBM(random_state=0, verbose=True)

rbm_features_classifier = Pipeline(steps=[("rbm", rbm), ("logistic", logistic)])
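
Because the two models are wrapped in a Pipeline, their hyperparameters can also be set through the pipeline itself using scikit-learn's step__parameter naming convention. The sketch below is equivalent to setting the attributes on rbm and logistic directly, as done in the next step:

rbm_features_classifier.set_params(
    rbm__learning_rate=0.06,
    rbm__n_iter=10,
    rbm__n_components=100,
    logistic__C=6000,
)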

Training

In this step, we train the pipeline model defined in the previous step. We set the hyperparameters of the model (learning rate, hidden layer size, regularization) and then fit the model to the training data.

from sklearn.base import clone

## Hyper-parameters. These were set by cross-validation,
## using a GridSearchCV. Here we are not performing cross-validation to
## save time.
rbm.learning_rate = 0.06
rbm.n_iter = 10

## More components tend to give better prediction performance, but larger
## fitting time
rbm.n_components = 100
logistic.C = 6000

## Training RBM-Logistic Pipeline
rbm_features_classifier.fit(X_train, Y_train)
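
The comment above mentions that these values were found with GridSearchCV. A minimal sketch of such a search is shown below; the grid values are illustrative assumptions, not the grid used for the original tuning, and the fit is left commented out because training one RBM per candidate and fold is slow:

from sklearn.model_selection import GridSearchCV

## Illustrative grid only; the original search space is not documented here.
param_grid = {
    "rbm__learning_rate": [0.01, 0.06, 0.1],
    "rbm__n_components": [50, 100, 200],
    "logistic__C": [1000, 6000, 10000],
}
search = GridSearchCV(rbm_features_classifier, param_grid, cv=3, n_jobs=-1)
## search.fit(X_train, Y_train)
## print(search.best_params_)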

Evaluation

In this step, we evaluate the performance of the model on the test dataset. We use the classification_report function from the sklearn.metrics module to generate classification reports for both the pipeline model and a logistic regression model trained directly on the raw pixel values.

from sklearn import metrics

Y_pred = rbm_features_classifier.predict(X_test)
print(
    "Logistic regression using RBM features:\n%s\n"
    % (metrics.classification_report(Y_test, Y_pred))
)

## Train the logistic regression classifier directly on the raw pixel values
raw_pixel_classifier = clone(logistic)
raw_pixel_classifier.C = 100.0
raw_pixel_classifier.fit(X_train, Y_train)

Y_pred = raw_pixel_classifier.predict(X_test)
print(
    "Logistic regression using raw pixel features:\n%s\n"
    % (metrics.classification_report(Y_test, Y_pred))
)
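
If you want a single-number comparison of the two classifiers, you can also compute their mean accuracy with score (a small addition; the reports above already include per-class precision, recall, and F1-score):

acc_rbm = rbm_features_classifier.score(X_test, Y_test)
acc_raw = raw_pixel_classifier.score(X_test, Y_test)
print("RBM features: %.3f, raw pixels: %.3f" % (acc_rbm, acc_raw))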

Plotting

In this step, we plot the 100 components extracted by the RBM, using the matplotlib.pyplot module to display each component as an 8x8 image.

import matplotlib.pyplot as plt

plt.figure(figsize=(4.2, 4))
for i, comp in enumerate(rbm.components_):
    plt.subplot(10, 10, i + 1)
    plt.imshow(comp.reshape((8, 8)), cmap=plt.cm.gray_r, interpolation="nearest")
    plt.xticks(())
    plt.yticks(())
plt.suptitle("100 components extracted by RBM", fontsize=16)
plt.subplots_adjust(0.08, 0.02, 0.92, 0.85, 0.08, 0.23)

plt.show()
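
The fitted RBM can also be used as a stand-alone feature extractor: its transform method maps each 64-pixel image to the activation probabilities of the 100 hidden units, which is the representation the logistic regression classifier sees inside the pipeline (a short illustration):

hidden = rbm.transform(X_test[:1])
print(hidden.shape)  ## (1, 100): one activation probability per component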

Summary

In this lab, we learned how to use a Bernoulli Restricted Boltzmann Machine (RBM) with logistic regression to classify handwritten digits. We also learned how to evaluate the model's performance with a classification report and how to plot the components extracted by the RBM.
