Exercise:Vectorization

== Vectorization ==
In the previous problem set, we implemented a sparse autoencoder for patches taken from natural images. In this problem set, you will vectorize your code to make it run much faster, and then adapt your sparse autoencoder to work on images of handwritten digits. The network for learning from handwritten digits will be much larger than the one you trained on the natural images, so the original implementation would have been painfully slow. With a vectorized implementation of the autoencoder, however, you will be able to get it to run in a reasonable amount of time.
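To make the difference concrete, here is a rough, self-contained sketch (not taken from the exercise code) contrasting a per-example loop with the equivalent single matrix operation for computing hidden-layer activations; the shapes and variable names follow the conventions of the earlier sparse autoencoder exercise and are only illustrative:

  % Illustrative shapes only; substitute your own W1, b1, and data.
  visibleSize = 64; hiddenSize = 25; m = 10000;
  W1 = randn(hiddenSize, visibleSize); b1 = zeros(hiddenSize, 1);
  data = rand(visibleSize, m);                  % one training example per column
  sigmoid = @(z) 1 ./ (1 + exp(-z));
  % Unvectorized: loop over the examples one at a time (slow in Matlab).
  a2loop = zeros(hiddenSize, m);
  for i = 1:m
      a2loop(:, i) = sigmoid(W1 * data(:, i) + b1);
  end
  % Vectorized: the same activations for all m examples in one matrix operation.
  a2vec = sigmoid(bsxfun(@plus, W1 * data, b1));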
=== Support Code/Data ===
The following additional files are required for this exercise:
* [http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz MNIST Dataset (Training Images)]
* [http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz MNIST Dataset (Training Labels)]
* [[Using the MNIST Dataset | Support functions for loading MNIST in Matlab ]]
=== Step 2: Learn features for handwritten digits ===
Now that you have vectorized the code, it is easy to learn larger sets of features on medium-sized images. In this part of the exercise, you will use your sparse autoencoder to learn features for handwritten digits from the MNIST dataset.
The MNIST data is available at [http://yann.lecun.com/exdb/mnist/]. Download the file <tt>train-images-idx3-ubyte.gz</tt> and decompress it. After obtaining the source images, you should use [[Using the MNIST Dataset | helper functions that we provide]] to load the data into Matlab as matrices.  While the helper functions that we provide will load both the input examples <math>x</math> and the class labels <math>y</math>, for this assignment, you will only need the input examples <math>x</math> since the sparse autoencoder is an ''unsupervised'' learning algorithm.  (In a later assignment, we will use the labels <math>y</math> as well.)  
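As a concrete illustration, assuming the <tt>loadMNISTImages</tt> helper described on the support page linked above (the function name and return format come from that page, not from this exercise text), loading the training images might look like this:

  % Load the decompressed training images; the support-page helper returns one
  % image per column, unrolled to 28*28 = 784 pixels and scaled to [0,1].
  images = loadMNISTImages('train-images-idx3-ubyte');
  size(images)    % expect [784 60000] for the MNIST training set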
  patches = first 10000 images from the MNIST dataset
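With the images loaded as in the sketch above, selecting those patches is just a column slice (again assuming an <tt>images</tt> matrix with one 784-pixel digit per column):

  patches = images(:, 1:10000);    % first 10000 digits, one per column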
After 400 iterations of updates using minFunc, your autoencoder should have learned features that resemble pen strokes. In other words, it has learned to represent handwritten characters in terms of the pen strokes that appear in them. Our implementation takes around 15-20 minutes on a fast machine. Visualized, the features should look like the following image:
[[File:mnistVectorizationEx.png|400px]]
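For reference, here is a rough sketch of the training and visualization step. The functions <tt>initializeParameters</tt>, <tt>sparseAutoencoderCost</tt>, and <tt>display_network</tt> are assumed to be carried over from the earlier sparse autoencoder exercise, and the hyperparameter values shown are placeholders rather than values specified here:

  visibleSize = 28*28;                 % each MNIST digit is 28x28 pixels
  hiddenSize = 196;                    % placeholder; use the exercise's setting
  lambda = 3e-3; sparsityParam = 0.1; beta = 3;    % placeholders as well
  theta = initializeParameters(hiddenSize, visibleSize);
  options = struct('Method', 'lbfgs', 'maxIter', 400, 'display', 'on');
  [opttheta, cost] = minFunc(@(p) sparseAutoencoderCost(p, visibleSize, hiddenSize, ...
                                 lambda, sparsityParam, beta, patches), theta, options);
  % Assuming the earlier exercise's parameter packing, W1 is stored first in opttheta;
  % reshape it and visualize its rows as 28x28 images.
  W1 = reshape(opttheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
  display_network(W1');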
As with the first problem, the autoencoder should learn edge features. Your code should run in under 10 minutes on a reasonably fast machine. If it takes significantly longer, check your code and ensure that it is vectorized.
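One rough way to check, reusing the assumed <tt>sparseAutoencoderCost</tt> name and the setup from the sketches above: time a single cost/gradient evaluation on all 10000 patches. A fully vectorized implementation should take roughly a few seconds or less per call on this problem size; a loop over individual examples will typically take much longer.

  tic;
  [cost, grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
                                 lambda, sparsityParam, beta, patches);
  fprintf('One cost/gradient evaluation took %.1f seconds\n', toc);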
[[Category:Exercises]]
