# Exercise: Self-Taught Learning

### From Ufldl


*Revision as of 18:37, 7 April 2011*


### Overview

In this exercise, we will use the self-taught learning paradigm with the sparse autoencoder to build a classifier for handwritten digits.

In this context, the self-taught learning paradigm can be used as follows. First, the sparse autoencoder is trained on an unlabelled training dataset of images of handwritten digits. This produces feature detectors that correspond to the activation pattern of each hidden unit. The features learned are expected to be pen strokes that resemble segments of digits.

The sparse autoencoder is then used to augment the labelled training dataset, which is then used to train the logistic regression classifier. For each example in the labelled training dataset, forward propagation is used to obtain the activations of the hidden units. Each example, concatenated with its hidden-unit activations, is then used to train the logistic regression model. Finally, the test dataset is augmented with hidden-unit activations in the same manner and run through the model to measure its performance.
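The augmentation step above amounts to concatenating each raw example with the features the autoencoder extracts from it. A minimal Python sketch (the exercise itself is written in MATLAB/Octave; this is purely illustrative):

```python
def augment(x, activations):
    """Concatenate a raw example with its hidden-unit activations;
    the logistic regression classifier is trained on these enlarged vectors."""
    return list(x) + list(activations)

x = [0.2, 0.8, 0.1]   # raw pixel values (toy example)
a2 = [0.9, 0.05]      # hidden-unit activations from the autoencoder
print(augment(x, a2))  # [0.2, 0.8, 0.1, 0.9, 0.05]
```

The same augmentation is applied to both the labelled training set and the test set, so the classifier sees identically shaped inputs at training and test time.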

To demonstrate the effectiveness of this method, we will train the sparse autoencoder on an unlabelled data set comprising only the digits 0 to 4, and then test the system on the digits 5 to 9. The purpose of this is to demonstrate that self-taught learning can be surprisingly effective even if the unlabelled training data set does not contain items from our classification task. In practice, there is no good reason to discard data from our training dataset, so we would choose to use all available images for training the sparse autoencoder.

### Step One: Generate the input and test data sets

Download and decompress `self_taught_assign.zip`, which contains starter code for this exercise. Additionally, you will need to download the datasets from the MNIST Handwritten Digit Database for this project. Download and decompress the files `getLabelledData.m` and `getUnlabelledData.m` to the `mnist/` path of the project folder. The functions to read data from the raw files have already been provided. You will need to write functions to separate the cases into separate sets containing the digits 0 to 4, and 5 to 9. Fill in your code in the function files `getLabelledData.m` and `getUnlabelledData.m`. The results of your functions will be visualized as three rows, corresponding to the unlabelled training set, the labelled training set, and the test set. Ensure that the first row only contains digits 0 to 4, and the second and third only contain digits 5 to 9.

[[File:selfTaughtInput.png]]
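The splitting logic is just a partition of the examples by label. A hedged Python sketch of what `getUnlabelledData.m` and `getLabelledData.m` should accomplish (`split_mnist` is a hypothetical stand-in; the real functions operate on MATLAB matrices):

```python
def split_mnist(images, labels):
    """Partition (image, label) pairs into an unlabelled set of digits 0-4
    (labels are discarded) and a labelled set of digits 5-9."""
    unlabelled = [img for img, lab in zip(images, labels) if lab <= 4]
    labelled = [(img, lab) for img, lab in zip(images, labels) if lab >= 5]
    return unlabelled, labelled

# Toy example with placeholder "images"
imgs = ['a', 'b', 'c', 'd']
labs = [0, 7, 4, 9]
unlabelled, labelled = split_mnist(imgs, labs)
print(unlabelled)  # ['a', 'c']
print(labelled)    # [('b', 7), ('d', 9)]
```

Note that the labels of the 0-4 set are dropped entirely: the autoencoder never sees them, which is what makes this set "unlabelled" from the algorithm's point of view.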

### Step Two: Train the sparse autoencoder

Next we will train the sparse autoencoder on the input data set. Copy the `sparseAutoencoderCost.m` function from the previous assignment and run the training step. Use the frameworks from previous assignments to ensure that your code is working and vectorized, as no testing facilities are provided in this assignment. After doing so, running the training step should take about half an hour (on a reasonably fast computer). When it is completed, a visualization of pen strokes should be displayed.

Hint: This step takes a very long time, so you might want to avoid running it on subsequent trials! To do so, after running this step, run `save('theta.mat', 'theta');` from the command line. Then modify one line near the top of the `trainSelfTaught.m` script to set `loadTheta = true;`. This skips the autoencoder training step and loads the saved weights from disk.

[[File:selfTaughtFeatures.png]]
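This caching pattern — train once, save to disk, then load on later runs — is worth internalizing. A Python analogue using `pickle` (illustrative only; the names `get_theta` and the `.pkl` path are invented for this sketch, and the actual script uses MATLAB's `save`/`load`):

```python
import os
import pickle

def get_theta(train_fn, load_theta=False, path='theta_demo.pkl'):
    """Return cached weights if load_theta is set and a cache exists;
    otherwise run the (expensive) training function and cache its result."""
    if load_theta and os.path.exists(path):
        with open(path, 'rb') as f:
            return pickle.load(f)  # skip training, reuse saved weights
    theta = train_fn()
    with open(path, 'wb') as f:
        pickle.dump(theta, f)      # save for subsequent trials
    return theta

# Stand-in "training" functions to show the flag's effect
theta1 = get_theta(lambda: [1, 2, 3], load_theta=False)
theta2 = get_theta(lambda: [9, 9, 9], load_theta=True)
print(theta2)  # [1, 2, 3] - loaded from cache, second training skipped
os.remove('theta_demo.pkl')  # clean up the demo file
```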

### Step Three: Training the logistic regression model

After the sparse autoencoder is trained, we can use it to detect pen strokes in images. Fill in `featureActivation.m` to use forward propagation to determine the activation of the hidden units for a given input. This is the same as performing forward computation without the output layer. Your function should compute the vector:

$a^{(2)} = f(W^{(1)} x + b^{(1)})$

where $f$ is the sigmoid activation function and $W^{(1)}$, $b^{(1)}$ are the weights and biases of the hidden layer.
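As a rough illustration of this forward-propagation step, here is a pure-Python sketch (`feature_activation` is a hypothetical analogue of `featureActivation.m`, using nested lists in place of MATLAB matrices):

```python
import math

def feature_activation(x, W1, b1):
    """Hidden-unit activations: sigmoid(W1*x + b1) for each hidden unit,
    i.e. forward propagation with the output layer omitted."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    return [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
            for row, b in zip(W1, b1)]

# Toy check: with zero weights and biases, every pre-activation is 0,
# so every hidden unit outputs sigmoid(0) = 0.5
W1 = [[0.0, 0.0], [0.0, 0.0]]
b1 = [0.0, 0.0]
print(feature_activation([1.0, 2.0], W1, b1))  # [0.5, 0.5]
```

In the actual assignment this computation should be vectorized over all examples at once rather than looped per input, or the augmentation step will be very slow.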

### Step Four: Training and testing the logistic regression model

After completing these steps, running the entire script in `trainSelfTaught.m` will use your sparse autoencoder to train the logistic model, then measure how well this system performs on the test set. Statistics about the model will be displayed afterwards. If you've done all the steps correctly, you should get an accuracy of about X percent.
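The headline statistic is classification accuracy on the test set: the fraction of test examples the model labels correctly. A minimal Python sketch of that computation (names invented for illustration):

```python
def accuracy(predictions, labels):
    """Fraction of examples where the predicted digit matches the true label."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

# Toy example: 3 of 4 test digits classified correctly
print(accuracy([5, 6, 7, 8], [5, 6, 7, 9]))  # 0.75
```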