Exercise:Sparse Autoencoder
From Ufldl
==Download Related Reading==
* [http://nlp.stanford.edu/~socherr/sparseAutoencoder_2011new.pdf sparseae_reading.pdf]
* [http://www.stanford.edu/class/cs294a/cs294a_2011-assignment.pdf sparseae_exercise.pdf]
==Sparse autoencoder implementation==
should work, but feel free to play with different settings of the parameters as well.

'''Implementational tip:''' Once your backpropagation implementation correctly computes the derivatives (as verified using gradient checking in Step 3), make sure you are not still running gradient checking on every step when you use L-BFGS to optimize <math>J_{\rm sparse}(W,b)</math>. Backpropagation computes the derivatives of <math>J_{\rm sparse}(W,b)</math> fairly efficiently, and numerically re-computing the gradient on every step would slow your program down significantly.
+ | |||
===Step 5: Visualization===
Our implementation took around 5 minutes to run on a fast computer.
In case you end up needing to try out multiple implementations or
different parameter values, be sure to budget enough time for debugging
[[Category:Exercises]]
+ | |||
+ | |||
{{Sparse_Autoencoder}}