Exercise:Vectorization

== Vectorization ==
In the previous problem set, you implemented a sparse autoencoder for patches taken from natural images. In this problem set, you will vectorize your code to make it run much faster, and then adapt your sparse autoencoder to work on images of handwritten digits. Your network for learning from handwritten digits will be much larger than the one you trained on the natural images, so the original implementation would have been painfully slow. With a vectorized implementation of the autoencoder, you will be able to get it to run in a reasonable amount of time.

=== Support Code/Data ===
The following additional files are required for this exercise:
* [http://yann.lecun.com/exdb/mnist/train-images-idx3-ubyte.gz MNIST Dataset (Training Images)]
* [http://yann.lecun.com/exdb/mnist/train-labels-idx1-ubyte.gz MNIST Dataset (Training Labels)]
* [[Using the MNIST Dataset | Support functions for loading MNIST in Matlab ]]
=== Step 1: Vectorize your Sparse Autoencoder Implementation ===
Using the ideas from [[Vectorization]] and [[Neural Network Vectorization]], vectorize your implementation of <tt>sparseAutoencoderCost.m</tt>. In our implementation, we were able to remove all for-loops with the use of matrix operations and <tt>repmat</tt>. (If you want to experiment with more advanced vectorization ideas, also type <tt>help bsxfun</tt>; <tt>bsxfun</tt> provides an alternative to <tt>repmat</tt> for some of the vectorization steps, but is not necessary for this exercise.) A vectorized version of our sparse autoencoder code ran in under one minute on a fast computer (for learning 25 features from 10000 8x8 image patches).

(Note that you do not need to vectorize the code in the other files.)
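
As a rough illustration, the vectorized core of <tt>sparseAutoencoderCost.m</tt> might look like the sketch below. It assumes the variable names (<tt>W1</tt>, <tt>W2</tt>, <tt>b1</tt>, <tt>b2</tt>, <tt>data</tt>) and the sigmoid activation from the earlier exercise's starter code; your own implementation may differ in its details.

  sigmoid = @(z) 1 ./ (1 + exp(-z));      % same activation as the starter code
  m  = size(data, 2);                     % data is visibleSize-by-m
  % Forward pass over all m examples at once
  z2 = W1 * data + repmat(b1, 1, m);      % hiddenSize-by-m
  a2 = sigmoid(z2);
  z3 = W2 * a2 + repmat(b2, 1, m);        % visibleSize-by-m
  a3 = sigmoid(z3);
  % Cost: reconstruction error + weight decay + KL sparsity penalty
  rhoHat = mean(a2, 2);                   % average activation of each hidden unit
  kl     = sum(sparsityParam * log(sparsityParam ./ rhoHat) + ...
               (1 - sparsityParam) * log((1 - sparsityParam) ./ (1 - rhoHat)));
  cost   = sum(sum((a3 - data).^2)) / (2*m) ...
           + (lambda/2) * (sum(W1(:).^2) + sum(W2(:).^2)) + beta * kl;
  % Backward pass, again with no loop over examples
  delta3 = -(data - a3) .* a3 .* (1 - a3);
  sparsityDelta = beta * (-sparsityParam ./ rhoHat ...
                          + (1 - sparsityParam) ./ (1 - rhoHat));
  delta2 = (W2' * delta3 + repmat(sparsityDelta, 1, m)) .* a2 .* (1 - a2);
  W1grad = delta2 * data' / m + lambda * W1;
  W2grad = delta3 * a2'   / m + lambda * W2;
  b1grad = sum(delta2, 2) / m;
  b2grad = sum(delta3, 2) / m;
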
=== Step 2: Learn features for handwritten digits ===
Now that you have vectorized the code, it is easy to learn larger sets of features on medium-sized images. In this part of the exercise, you will use your sparse autoencoder to learn features for handwritten digits from the MNIST dataset.

-
The MNIST data is available at [http://yann.lecun.com/exdb/mnist/]. Download the file <tt>train-images-idx3-ubyte.gz</tt> and decompress it. After obtaining the source images, we have [[Using the MNIST Dataset | provided functions ]] help you load them up as Matlab matrices. While the provided functions allow you to load up both the labels and data, for this assignment, you will only need the data since the training is ''unsupervised''.
+
The MNIST data is available at [http://yann.lecun.com/exdb/mnist/ the MNIST website]. Download the file <tt>train-images-idx3-ubyte.gz</tt> and decompress it. After obtaining the source images, use the [[Using the MNIST Dataset | helper functions that we provide]] to load the data into Matlab as matrices. While the helper functions will load both the input examples <math>x</math> and the class labels <math>y</math>, for this assignment you will only need the input examples <math>x</math>, since the sparse autoencoder is an ''unsupervised'' learning algorithm. (In a later assignment, we will use the labels <math>y</math> as well.)
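
For concreteness, the loading step might look like the following, assuming the <tt>loadMNISTImages</tt> helper from the page linked above (which returns the images as a matrix with one 784-pixel column per example, with values scaled to [0,1]):

  gunzip('train-images-idx3-ubyte.gz');    % or decompress outside Matlab
  images  = loadMNISTImages('train-images-idx3-ubyte');
  patches = images(:, 1:10000);            % first 10000 examples, as used below
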
The following set of parameters worked well for us for learning good features on the MNIST dataset; a runnable Matlab version of these settings appears just after the list:
  visibleSize = 28*28
  hiddenSize = 196
  sparsityParam = 0.1
  lambda = 3e-3
  beta = 3
  patches = first 10000 images from the MNIST dataset
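
In Matlab, these settings amount to something like the following, where <tt>initializeParameters</tt> is assumed to be the random-initialization helper from the earlier sparse autoencoder exercise:

  visibleSize   = 28*28;    % one input unit per pixel
  hiddenSize    = 196;
  sparsityParam = 0.1;      % target average activation of the hidden units
  lambda        = 3e-3;     % weight decay parameter
  beta          = 3;        % weight of the sparsity penalty
  theta = initializeParameters(hiddenSize, visibleSize);
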
After 400 iterations of updates using minFunc, your autoencoder should have learned features that resemble pen strokes; in other words, it has learned to represent handwritten digits in terms of the pen strokes that appear in an image. Our implementation takes around 15-20 minutes on a fast machine.
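
A sketch of the training and visualization calls, following the pattern from the earlier exercise (<tt>display_network</tt> is assumed from that exercise's starter code):

  addpath minFunc/
  options = struct('Method', 'lbfgs', 'maxIter', 400, 'display', 'on');
  [opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, visibleSize, ...
      hiddenSize, lambda, sparsityParam, beta, patches), theta, options);
  % Each row of W1 holds one hidden unit's input weights; display_network
  % renders each column of its argument as an image patch.
  W1 = reshape(opttheta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
  display_network(W1');
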
Visualized, the features should look like the following image:

[[File:mnistVectorizationEx.png|400px]]
If your parameters are improperly tuned, or if your implementation of the autoencoder is buggy, you may get one of the following images instead:
As with the first problem, the autoencoder should learn edge features. Your code should run in under 10 minutes on a reasonably fast machine. If it takes significantly longer, check your code and ensure that it is vectorized.
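
One easy check is to time a single cost/gradient evaluation on the full set of patches; as a rough rule of thumb (ours, not from the original exercise), a vectorized implementation should take no more than a few seconds per evaluation on a modern machine:

  tic;
  [cost, grad] = sparseAutoencoderCost(theta, visibleSize, hiddenSize, ...
      lambda, sparsityParam, beta, patches);
  toc    % a loop-based implementation is typically orders of magnitude slower
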
[[Category:Exercises]]