Fine-tuning Stacked AEs

=== Strategy ===
Conceptually, fine-tuning is quite simple: all layers of the stacked autoencoder are treated as a single model, and in each iteration the gradients for every layer are computed using the [[Backpropagation_Algorithm]], as discussed in the sparse autoencoder section.
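
The sketch below illustrates this idea under some assumptions not stated on this page: sigmoid hidden layers taken from the stacked autoencoder, a softmax classifier on top, and a cross-entropy cost. The function and variable names are purely illustrative; it is not the wiki's reference implementation, only a minimal example of backpropagating through all layers as one model.

<pre>
# Minimal sketch (illustrative assumptions, not the wiki's code):
# the stacked autoencoder's hidden layers plus the final softmax
# classifier are treated as a single network, and backpropagation
# produces gradients for every layer in one pass.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def finetune_gradients(Ws, bs, x, y):
    """Ws, bs: lists of weight matrices / bias column vectors, one per
    layer; the last pair is the softmax classifier stacked on top.
    x: input column vector; y: one-hot label column vector."""
    # Forward pass, keeping every activation for backpropagation.
    activations = [x]
    for W, b in zip(Ws[:-1], bs[:-1]):
        activations.append(sigmoid(W @ activations[-1] + b))

    # Softmax output layer (the classifier on top of the stack).
    scores = Ws[-1] @ activations[-1] + bs[-1]
    scores -= scores.max()
    probs = np.exp(scores) / np.exp(scores).sum()

    # Backward pass: the error signal flows through every layer,
    # so all weights receive a gradient in a single iteration.
    delta = probs - y                      # gradient at the softmax layer
    grads_W, grads_b = [], []
    for i in reversed(range(len(Ws))):
        grads_W.insert(0, delta @ activations[i].T)
        grads_b.insert(0, delta.copy())
        if i > 0:
            a = activations[i]             # sigmoid output of layer i-1
            delta = (Ws[i].T @ delta) * a * (1 - a)
    return grads_W, grads_b
</pre>

In a fine-tuning loop, the returned gradients would be used to update all of the pre-trained weights together (for example with gradient descent or L-BFGS), rather than training each layer in isolation.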
