Fine-tuning Stacked AEs

=== General Strategy ===
Fortunately, we already have all the tools necessary to implement fine-tuning for stacked autoencoders! To compute the gradients for all the layers of the stacked autoencoder in each iteration, we use the [[Backpropagation Algorithm]], as discussed in the sparse autoencoder section. Since the backpropagation algorithm extends to an arbitrary number of layers, it applies to a stacked autoencoder of arbitrary depth.
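To make this concrete, below is a minimal sketch in Python/NumPy (not the tutorial's own code; the function name <code>finetune_gradients</code> and all layer sizes are illustrative) of one fine-tuning gradient computation: a forward pass through a stack of pretrained sigmoid encoder layers topped with a softmax classifier, followed by backpropagation through every layer.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_gradients(Ws, bs, W_soft, x, y):
    """One backprop pass through the whole stack (illustrative sketch).

    Ws, bs : lists of pretrained encoder weights/biases, one pair per layer
    W_soft : softmax classifier weights on top of the deepest features
    x      : inputs as column vectors, shape (n_input, m)
    y      : one-hot labels, shape (n_classes, m)
    """
    m = x.shape[1]

    # Forward pass: store the activation of every layer.
    activations = [x]
    for W, b in zip(Ws, bs):
        activations.append(sigmoid(W @ activations[-1] + b))

    # Softmax output probabilities (shifted for numerical stability).
    scores = W_soft @ activations[-1]
    scores -= scores.max(axis=0, keepdims=True)
    probs = np.exp(scores) / np.exp(scores).sum(axis=0, keepdims=True)

    # Softmax gradient, and the error signal flowing into the top features.
    grad_soft = (probs - y) @ activations[-1].T / m
    delta = W_soft.T @ (probs - y)

    # Backpropagate through every encoder layer -- works for any depth.
    grads_W, grads_b = [], []
    for l in reversed(range(len(Ws))):
        a = activations[l + 1]
        delta = delta * a * (1.0 - a)          # f'(z) for the sigmoid
        grads_W.insert(0, delta @ activations[l].T / m)
        grads_b.insert(0, delta.mean(axis=1, keepdims=True))
        delta = Ws[l].T @ delta                # pass the error down a layer

    return grads_W, grads_b, grad_soft

# Illustrative usage: 64-dim inputs, two encoder layers, 10 classes.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((25, 64)) * 0.1, rng.standard_normal((16, 25)) * 0.1]
bs = [np.zeros((25, 1)), np.zeros((16, 1))]
W_soft = rng.standard_normal((10, 16)) * 0.1
x = rng.standard_normal((64, 5))
y = np.eye(10)[:, rng.integers(0, 10, 5)]
gW, gb, gs = finetune_gradients(Ws, bs, W_soft, x, y)
</pre>

In practice the per-layer gradients are packed into a single parameter vector and handed to an off-the-shelf optimizer such as L-BFGS, as in the earlier exercises.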
=== Finetuning for Classification Error ===
For your convenience, a summary of the backpropagation algorithm in element-wise notation is given below:
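(A sketch of that summary, following the notation of the sparse autoencoder section, with <math>\bullet</math> denoting the element-wise product and <math>n_l</math> the number of layers:)

# Perform a feedforward pass, computing the activations for layers <math>L_2</math>, <math>L_3</math>, up to the output layer <math>L_{n_l}</math>.
# For the output layer (layer <math>n_l</math>), set <math>\delta^{(n_l)} = - \left( \nabla_{a^{(n_l)}} J \right) \bullet f'(z^{(n_l)})</math>.
# For <math>l = n_l-1, n_l-2, \ldots, 2</math>, set <math>\delta^{(l)} = \left( (W^{(l)})^T \delta^{(l+1)} \right) \bullet f'(z^{(l)})</math>.
# Compute the desired partial derivatives: <math>\nabla_{W^{(l)}} J(W,b;x,y) = \delta^{(l+1)} (a^{(l)})^T</math> and <math>\nabla_{b^{(l)}} J(W,b;x,y) = \delta^{(l+1)}</math>.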
