In the previous problem set, we implemented a sparse autoencoder for patches taken from natural images. In this problem set, you will adapt the sparse autoencoder to work on images of handwritten digits. You will be given a working but unvectorized implementation, and your task will be to vectorize a key step to improve its performance.
In the file vec_assign.zip, you will find MATLAB code implementing a sparse autoencoder. To run the code, you will need to download an additional data set from the MNIST handwritten digit database. Download the file train-images-idx3-ubyte.gz and decompress it into the MNIST/ folder in the project path. Once you have the source images, the provided helper functions will load them into MATLAB matrices.
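The assignment ships with MATLAB loaders, but the idx3-ubyte format itself is simple: a big-endian header of four 32-bit integers (a magic number 2051, the image count, rows, and columns), followed by the raw uint8 pixels. As an illustration, here is a hedged Python/NumPy sketch of such a loader (the function name `load_idx3_images` is ours, not part of the assignment code):

```python
import gzip
import struct

import numpy as np

def load_idx3_images(path):
    """Read an MNIST idx3-ubyte image file into an (n, rows, cols) uint8 array."""
    # The header is four big-endian int32s: magic (2051), count, rows, cols.
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        magic, n, rows, cols = struct.unpack(">IIII", f.read(16))
        assert magic == 2051, "not an idx3-ubyte image file"
        pixels = np.frombuffer(f.read(n * rows * cols), dtype=np.uint8)
    return pixels.reshape(n, rows, cols)
```

The sketch also accepts the compressed .gz file directly, so you could skip the manual decompression step if you were working in Python.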
Use the following parameters for the MNIST dataset:
patchSize: 28x28 patches
sparsityParam = 0.1
lambda = 3e-3
beta = 3
normalizeData: linear scaling (patches = patches / 255)
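Note that the normalization here is simpler than for natural images: since MNIST pixels are already bounded in [0, 255], a single linear scaling maps them to [0, 1], with no mean removal or variance truncation. A minimal NumPy sketch of this step (the assignment's own code is MATLAB):

```python
import numpy as np

def normalize_mnist(patches):
    # MNIST pixels are uint8 in [0, 255]; linear scaling maps them to [0, 1].
    # No per-patch mean subtraction or 3-sigma truncation is applied, unlike
    # the natural-image preprocessing from the previous problem set.
    return patches.astype(np.float64) / 255.0
```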
The autoencoder should learn pen strokes as features. These features should start to become apparent after 400 iterations of minFunc, which takes around 20 to 25 minutes on the Corn cluster. Visualised, the features should look like the following image:
If your parameters are improperly tuned, or if your implementation of the autoencoder is buggy, you may get one of the following images instead:
If your image looks like one of the above images, check your code and parameters again. In particular, templates of digits are not very useful as features, since they do not generalise very well to digits written differently.
Use the following parameters for the natural images dataset:
visibleSize = 14*14;
hiddenSize = 196;
sparsityParam = 0.035;
lambda = 0.0003;
beta = 5;
As with the first problem, the autoencoder should learn edge features. Your code should run in under 10 minutes on a reasonably fast machine. If it takes significantly longer, check your code and ensure that it is vectorized.
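To see why vectorization matters, consider the forward pass that computes the hidden-layer activations. An unvectorized implementation loops over training examples one column at a time; the vectorized version replaces the loop with a single matrix product plus a broadcast bias, which is where almost all of the speedup comes from. The following is an illustrative NumPy sketch (the function names and shapes are our assumptions, not the assignment's MATLAB code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def hidden_activations_loop(W1, b1, data):
    # Unvectorized: one forward pass per training example (one column of data).
    m = data.shape[1]
    a2 = np.zeros((W1.shape[0], m))
    for i in range(m):
        a2[:, i] = sigmoid(W1 @ data[:, i] + b1)
    return a2

def hidden_activations_vec(W1, b1, data):
    # Vectorized: a single matrix product handles all examples at once;
    # b1[:, None] broadcasts the bias vector across every column.
    return sigmoid(W1 @ data + b1[:, None])
```

Both functions return the same activations; the vectorized version simply lets the underlying BLAS routines do the per-example work in one call. The same idea carries over to the backpropagation and gradient-accumulation steps.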