Exercise:Self-Taught Learning

===Overview===
In this exercise, we will use the self-taught learning paradigm with the sparse autoencoder and softmax classifier to build a classifier for handwritten digits.
In this context, the self-taught learning paradigm can be used as follows. First, the sparse autoencoder is trained on an unlabeled training dataset of images of handwritten digits. This produces feature detectors corresponding to the activation pattern of each hidden unit. The features learned are expected to be pen strokes that resemble segments of digits.
The sparse autoencoder is then used to transform the raw labeled training dataset into a representation of features, which is then used to train the softmax classifier. For each example in the labeled training dataset, <math>\textstyle x^{(k)}</math>, forward propagation is used to obtain the activation of the hidden units, <math>\textstyle a^{(k)}</math>. The data point <math>\textstyle a^{(k)}</math> and its corresponding label are then used to train the softmax classifier. Finally, the test dataset is also transformed into hidden unit activations and passed to the softmax classifier to obtain predictions.
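To make the data flow concrete, here is a minimal MATLAB sketch of this pipeline. It assumes the autoencoder's first-layer parameters <tt>W1</tt>, <tt>b1</tt> have already been trained (Step Two), and that the <tt>softmaxTrain</tt> and <tt>softmaxPredict</tt> functions from the earlier softmax exercise are on the path; all variable names here are illustrative rather than taken from the starter code.

<pre>
% Illustrative sketch only -- variable names are assumptions, not starter code.
sigmoid = @(z) 1 ./ (1 + exp(-z));

% Replace each labeled example x^(k) (one column of trainData/testData)
% with its hidden unit activations a^(k).
trainFeatures = sigmoid(W1 * trainData + repmat(b1, 1, size(trainData, 2)));
testFeatures  = sigmoid(W1 * testData  + repmat(b1, 1, size(testData, 2)));

% Train the softmax classifier on the features, then predict on the test set.
options.maxIter = 100;
softmaxModel = softmaxTrain(hiddenSize, numClasses, lambda, ...
                            trainFeatures, trainLabels, options);
pred = softmaxPredict(softmaxModel, testFeatures);
accuracy = mean(testLabels(:) == pred(:));
</pre>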
To demonstrate the effectiveness of this method, we will train the sparse autoencoder on an unlabeled data set comprising all the digits 0 to 9, and then test the classifier on the digits 5 to 9. The purpose of this is to demonstrate that self-taught learning can be surprisingly effective in improving results even if some items in the unlabeled data set do not fall within the classes of our classification task.
===Step One: Generate the input and test data sets===
Download and decompress <tt>stl_exercise.zip</tt>, which contains starter code for this exercise. Additionally, you will need to download the datasets from the MNIST Handwritten Digit Database for this project. You will need to use the functions provided in the starter code to read the data from the raw MNIST files.

To separate the cases, you will need to fill in <tt>filterData.m</tt>, which should return the subset of the data set containing examples with labels within a certain bound. After implementing this function, run this step. The results of your function will be visualized as three rows, corresponding to the unlabeled training set, the labeled training set, and the test set. Ensure that the first row contains only digits 0 to 4, and the second and third rows contain only digits 5 to 9.
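A minimal sketch of one possible <tt>filterData.m</tt> implementation is shown below; the argument names and order are assumptions, so adapt them to whatever signature the starter code declares.

<pre>
% filterData.m -- illustrative sketch; match the signature declared in the
% starter code (argument names and order here are assumptions).
function [filteredImages, filteredLabels] = filterData(images, labels, lowerBound, upperBound)
    % Keep only the examples whose label lies within [lowerBound, upperBound].
    mask = (labels >= lowerBound) & (labels <= upperBound);
    filteredImages = images(:, mask);   % images are stored one example per column
    filteredLabels = labels(mask);
end
</pre>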
[[File:selfTaughtInput.png]]
===Step Two: Train the sparse autoencoder===
Next we will train the sparse autoencoder on the unlabeled input data set, using the same <tt>sparseAutoencoderCost.m</tt> function from the previous assignments. (Use the frameworks from previous assignments to ensure that your code is working and vectorized, as no testing facilities are provided in this assignment.) Running the training step should take about half an hour on a reasonably fast computer. When it is completed, a visualization of pen strokes should be displayed.
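For reference, the training step typically wraps the cost function in an L-BFGS call along the lines of the sketch below, as in the previous sparse autoencoder exercise. The hyperparameter values and the use of <tt>initializeParameters</tt> and <tt>minFunc</tt> here are assumptions carried over from that exercise, not requirements of this starter code.

<pre>
% Illustrative training call (hyperparameter values are examples only).
visibleSize   = 28 * 28;   % MNIST images are 28x28 pixels
hiddenSize    = 200;       % number of hidden units (example value)
sparsityParam = 0.1;       % target average activation of the hidden units
lambda        = 3e-3;      % weight decay parameter
beta          = 3;         % weight of the sparsity penalty term

theta = initializeParameters(hiddenSize, visibleSize);

addpath minFunc/
options.Method  = 'lbfgs';
options.maxIter = 400;
options.display = 'on';
[opttheta, cost] = minFunc(@(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                                    lambda, sparsityParam, beta, unlabeledData), ...
                           theta, options);
</pre>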
Hint: This step takes a very long time, so you might want to avoid re-running it on subsequent trials! To do so, uncomment the lines containing the <tt>save(...);</tt> call, or run the save function from the command line. In future trials, you may then skip the training step and load the saved weights from disk.
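The save/load pattern is just the standard MATLAB one; a small sketch (the file and variable names are illustrative, so use whatever the starter script expects):

<pre>
% After the training step completes:
save('opttheta.mat', 'opttheta');   % write the trained weights to disk

% On later runs, skip the training step and restore the weights instead:
load('opttheta.mat');               % restores opttheta into the workspace
</pre>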
[[File:selfTaughtFeatures.png]]
===Step Three: Extracting features===
After the sparse autoencoder is trained, we can use it to detect pen strokes in images. To do so, you will need to modify the sparse autoencoder function to output its hidden unit activations.

Change the method signature of <tt>sparseAutoencoderCost</tt> from <tt>function [cost, grad] = sparseAutoencoderCost(...)</tt> to <tt>function [cost, grad, activation] = sparseAutoencoderCost(...)</tt>, where <tt>activation</tt> should be a matrix with each column corresponding to the activation of the hidden layer for one example, i.e. the vector <math>a^{(2)}</math> corresponding to the activation of layer <math>L_{2}</math>. The remainder of the function should remain unchanged.
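Each column of <tt>activation</tt> is <math>\textstyle a^{(2)} = f(W^{(1)} x + b^{(1)})</math>, where <math>\textstyle f</math> is the sigmoid function. If your implementation already computes this matrix during the forward pass, the change amounts to returning it as a third output; a sketch follows, with variable names that are assumptions about your own code from the previous exercise.

<pre>
% Inside sparseAutoencoderCost.m (variable names are assumptions about
% your implementation from the previous exercise):
m  = size(data, 2);                  % number of examples (one per column)
z2 = W1 * data + repmat(b1, 1, m);   % pre-activation of the hidden layer L2
a2 = 1 ./ (1 + exp(-z2));            % hidden unit activations

% ... existing cost and gradient computation, unchanged ...

activation = a2;   % third output: one column of activations per example
</pre>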
After doing so, running this step should compute the hidden unit activations (the feature representations) of the labeled training and test sets, which are used in the next step.
===Step Four: Training and testing the softmax classifier===
