# Exercise: Self-Taught Learning

### From Ufldl


## Revision as of 19:39, 21 April 2011


### Overview

In this exercise, we will use the self-taught learning paradigm with the sparse autoencoder and softmax classifier to build a classifier for handwritten digits.

In this context, the self-taught learning paradigm can be used as follows. First, the sparse autoencoder is trained on an unlabeled training dataset of images of handwritten digits. This produces feature detectors corresponding to the activation patterns of the hidden units. The learned features are expected to be pen strokes that resemble segments of digits.

The sparse autoencoder is then used to transform the raw labeled training dataset into a feature representation, which is then used to train the softmax classifier. For each example *x*^{(k)} in the labeled training dataset, forward propagation is used to obtain the activations of the hidden units, *a*^{(k)}. The data point *a*^{(k)} and its corresponding label are then used to train the softmax classifier. Finally, the test dataset is also transformed into hidden unit activations and passed to the softmax classifier to obtain predictions.
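The exercise itself is written in MATLAB, but the feature transformation step can be sketched in NumPy (all names here, such as `feed_forward`, are illustrative and not part of the starter code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def feed_forward(W1, b1, X):
    """Hidden-layer activations a^(2) for each column (example) of X."""
    return sigmoid(W1 @ X + b1[:, np.newaxis])

# Toy shapes: 25 hidden units, 64 input pixels, 10 examples.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((25, 64)) * 0.01
b1 = np.zeros(25)
X = rng.random((64, 10))

A = feed_forward(W1, b1, X)   # these columns are the features fed to softmax
print(A.shape)                # (25, 10): one feature column per example
```

The columns of `A`, paired with the corresponding labels, would then be the training input for the softmax classifier.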

To demonstrate the effectiveness of this method, we will train the sparse autoencoder on an unlabeled data set comprising all the digits 0 to 9, and then test the classifier on the digits 5 to 9. The purpose of this is to demonstrate that self-taught learning can be surprisingly effective in improving results even if some items in the data set do not fall within the classes of our classification task.

### Step One: Generate the input and test data sets

Download and decompress `stl_exercise.zip`, which contains starter code for this exercise. Additionally, you will need to download the datasets from the MNIST Handwritten Digit Database for this project. You will need to use the f

To separate the cases, you will need to fill in `filterData.m`, which should return the subset of the data set containing examples whose labels fall within a certain bound. After implementing this function, run this step. The results of your function will be visualized as three rows, corresponding to the unlabelled training set, the labelled training set, and the test set. Ensure that the first row only contains digits 0 to 4, and that the second and third rows only contain digits 5 to 9.

[[File:selfTaughtInput.png]]
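The filtering logic can be sketched in NumPy (the real `filterData.m` operates on MATLAB matrices; the names below are illustrative):

```python
import numpy as np

def filter_data(images, labels, lo, hi):
    """Return the columns of `images` whose labels lie in [lo, hi]."""
    mask = (labels >= lo) & (labels <= hi)
    return images[:, mask], labels[mask]

# Toy example: 6 "images" (columns) with digit labels.
images = np.arange(12).reshape(2, 6)
labels = np.array([0, 5, 3, 9, 4, 7])

unlabeled_imgs, _ = filter_data(images, labels, 0, 4)     # digits 0-4
test_imgs, test_lbls = filter_data(images, labels, 5, 9)  # digits 5-9
print(unlabeled_imgs.shape[1], test_lbls.tolist())  # 3 [5, 9, 7]
```

Calling it once with bounds 0–4 and once with 5–9 produces the two disjoint subsets the exercise needs.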

### Step Two: Train the sparse autoencoder

Next we will train the sparse autoencoder on the unlabeled data set, using the same `sparseAutoencoderCost.m` function from the previous assignments. (Use the frameworks from previous assignments to ensure that your code is working and vectorized, as no testing facilities are provided in this assignment.) After doing so, running the training step should take about half an hour (on a reasonably fast computer). When it is complete, a visualization of pen strokes should be displayed.

Hint: This step takes a very long time, so you might want to avoid rerunning it on subsequent trials! To do so, uncomment the lines containing the call to `save(...);`, or run the save function from the command line. In future trials, you may skip the training step and load the saved weights from disk.

[[File:selfTaughtFeatures.png]]
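The save-and-reload pattern from the hint looks like this in NumPy (in MATLAB you would use `save`/`load`; the file name, `get_weights` helper, and stand-in trainer are all hypothetical):

```python
import os
import numpy as np

WEIGHTS_FILE = "stl_weights.npz"  # hypothetical cache file

def get_weights(train_fn):
    """Train once, then reuse cached weights on later runs."""
    if os.path.exists(WEIGHTS_FILE):
        cached = np.load(WEIGHTS_FILE)   # skip the slow training step
        return cached["W1"], cached["b1"]
    W1, b1 = train_fn()                  # expensive: roughly half an hour
    np.savez(WEIGHTS_FILE, W1=W1, b1=b1)
    return W1, b1

# Stand-in for the real L-BFGS training of the sparse autoencoder.
dummy_train = lambda: (np.ones((25, 64)), np.zeros(25))
W1, b1 = get_weights(dummy_train)    # first call trains and caches
W1b, _ = get_weights(dummy_train)    # second call loads from disk
print(np.array_equal(W1, W1b))       # True
os.remove(WEIGHTS_FILE)              # clean up the demo cache
```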

### Step Three: Training the logistic regression model

After the sparse autoencoder is trained, we can use it to detect pen strokes in images. To do so, you will need to modify the sparse autoencoder function to output its hidden unit activations.

Change the method signature of `sparseAutoencoderCost` from
`function [cost, grad] = sparseAutoencoderCost(...)` to `function [cost, grad, activation] = sparseAutoencoderCost(...)`, where `activation` should be a matrix with each column corresponding to the activations of the hidden layer, i.e. the vector *a*^{(2)} corresponding to the activation of layer *L*_{2}, for one example. The remainder of the function should remain unchanged.
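In Python the analogous change is simply returning the activations alongside the cost and gradient (a sketch only: the cost and gradient here are placeholders, not the full sparse-autoencoder objective):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_cost(W1, b1, X):
    """Placeholder cost/grad plus the hidden activations a^(2).

    Each column of `activation` is a^(2) for one example, mirroring the
    modified MATLAB signature [cost, grad, activation].
    """
    activation = sigmoid(W1 @ X + b1[:, np.newaxis])
    cost = 0.0          # placeholder: real code computes the sparse AE cost
    grad = np.zeros(0)  # placeholder: real code computes the gradient
    return cost, grad, activation

rng = np.random.default_rng(1)
_, _, A = sparse_autoencoder_cost(rng.standard_normal((5, 8)), np.zeros(5),
                                  rng.random((8, 3)))
print(A.shape)  # (5, 3): 5 hidden units, 3 examples
```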

After doing so, running this step should

### Step Four: Training and testing the logistic regression model

After completing these steps, running the entire script in `trainSelfTaught.m` will use your sparse autoencoder to train the logistic model, then measure how well this system performs on the test set. Statistics about the model will be displayed afterwards. If you've done all the steps correctly, you should get an accuracy of about X percent.
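Measuring test-set performance amounts to comparing predicted labels against the true labels; a minimal sketch:

```python
import numpy as np

def accuracy(predictions, labels):
    """Fraction of examples where the predicted label matches the truth."""
    return np.mean(predictions == labels)

preds = np.array([5, 6, 7, 8, 9, 5])
truth = np.array([5, 6, 7, 8, 8, 5])
print(accuracy(preds, truth))  # 5 of 6 correct
```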