Exercise: Implement deep networks for digit classification
The code you have already implemented will allow you to stack various layers and perform layer-wise training. However, to perform fine-tuning, you will need to implement back-propagation as well. We will see that fine-tuning significantly improves the model's performance.
In the file [http://ufldl.stanford.edu/wiki/resources/stackedae_exercise.zip stackedae_exercise.zip], we have provided some starter code. You will need to edit <tt>stackedAECost.m</tt>. You should also read <tt>stackedAETrain.m</tt> and ensure that you understand the steps.
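To give a sense of what the fine-tuning step computes, below is a minimal sketch (in Matlab) of a forward pass and back-propagation through a stack of sigmoid layers with a softmax classifier on top. The variable names (<tt>stack</tt>, <tt>stackgrad</tt>, <tt>softmaxTheta</tt>, <tt>data</tt>, <tt>labels</tt>, <tt>numClasses</tt>) mirror the starter code's conventions, but this is an illustration under assumed conventions rather than the official solution, and it omits the weight decay term.

<pre>
% A minimal sketch of the fine-tuning cost/gradient computation.
% NOTE: an illustration under assumed conventions, not the official
% solution; the weight decay (regularization) term is omitted.

depth = numel(stack);
m = size(data, 2);

% Forward pass: a{1} is the input, each stack layer applies a sigmoid.
a = cell(depth + 1, 1);
a{1} = data;
for l = 1:depth
    z = stack{l}.w * a{l} + repmat(stack{l}.b, 1, m);
    a{l+1} = 1 ./ (1 + exp(-z));                 % sigmoid activation
end

% Softmax layer on top of the final hidden activations.
M = softmaxTheta * a{depth+1};
M = bsxfun(@minus, M, max(M, [], 1));            % for numerical stability
p = bsxfun(@rdivide, exp(M), sum(exp(M), 1));    % class probabilities

groundTruth = full(sparse(labels, 1:m, 1, numClasses, m));
cost = -sum(sum(groundTruth .* log(p))) / m;

% Back-propagation: gradient for the softmax weights, then propagate
% the error signal down through every layer of the stack.
softmaxThetaGrad = -(groundTruth - p) * a{depth+1}' / m;
delta = -(softmaxTheta' * (groundTruth - p)) .* a{depth+1} .* (1 - a{depth+1});
stackgrad = cell(depth, 1);
for l = depth:-1:1
    stackgrad{l}.w = delta * a{l}' / m;
    stackgrad{l}.b = sum(delta, 2) / m;
    if l > 1
        delta = (stack{l}.w' * delta) .* a{l} .* (1 - a{l});
    end
end
</pre>

As in the earlier exercises, it is a good idea to check a gradient computed this way against a numerical gradient on a small model before training.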
=== Dependencies ===

The following additional files are required for this exercise:
* [http://yann.lecun.com/exdb/mnist/ MNIST Dataset]
* [[Using the MNIST Dataset|Support functions for loading MNIST in Matlab]]
* [http://ufldl.stanford.edu/wiki/resources/stackedae_exercise.zip Starter Code (stackedae_exercise.zip)]

You will also need your code from the following exercises:
* [[Exercise:Sparse Autoencoder]]
* [[Exercise:Vectorization]]
* [[Exercise:Softmax Regression]]
* [[Exercise:Self-Taught Learning]]

''If you have not completed the exercises listed above, we strongly suggest you complete them first.''
=== Step 0: Initialize constants and parameters ===