Stacked Autoencoders

From Ufldl

{{Quote|
If one is only interested in finetuning for the purposes of classification, the common practice is to discard the "decoding" layers of the stacked autoencoder and connect the last hidden layer <math>a^{(n)}</math> to the softmax classifier. The gradients from the (softmax) classification error are then backpropagated into the encoding layers.
}}
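The procedure above can be sketched in a few lines of numpy: the "decoding" weights are assumed to have already been discarded, a softmax layer sits on the last hidden activation <math>a^{(n)}</math>, and one gradient step propagates the classification error back through the encoding layers. The layer sizes, initialization scale, and learning rate here are illustrative assumptions, not values from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these encoder weights came from layer-wise autoencoder
# pretraining; the decoding weights have already been discarded.
sizes = [8, 6, 4]  # input -> hidden 1 -> hidden 2 (= a^{(n)}); illustrative
W = [rng.normal(0, 0.1, (sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
b = [np.zeros(sizes[i + 1]) for i in range(len(sizes) - 1)]

num_classes = 3  # illustrative
Ws = rng.normal(0, 0.1, (num_classes, sizes[-1]))  # softmax weights
bs = np.zeros(num_classes)

def forward(x):
    """Encode x through the stacked layers, then apply softmax."""
    acts = [x]
    a = x
    for Wi, bi in zip(W, b):
        a = sigmoid(Wi @ a + bi)
        acts.append(a)
    logits = Ws @ a + bs
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return acts, p

def finetune_step(x, y, lr=0.5):
    """One step: softmax cross-entropy error backpropagated into the encoder."""
    global Ws, bs
    acts, p = forward(x)
    # Gradient of cross-entropy loss w.r.t. the softmax logits.
    delta_out = p.copy()
    delta_out[y] -= 1.0
    grad_Ws = np.outer(delta_out, acts[-1])
    grad_bs = delta_out
    # Error signal entering the top hidden layer a^{(n)}.
    delta = (Ws.T @ delta_out) * acts[-1] * (1 - acts[-1])
    Ws = Ws - lr * grad_Ws
    bs = bs - lr * grad_bs
    # Propagate the error down through the encoding layers.
    for i in range(len(W) - 1, -1, -1):
        grad_W = np.outer(delta, acts[i])
        grad_b = delta
        if i > 0:  # compute the next delta before updating W[i]
            delta = (W[i].T @ delta) * acts[i] * (1 - acts[i])
        W[i] = W[i] - lr * grad_W
        b[i] = b[i] - lr * grad_b

x = rng.normal(size=sizes[0])
y = 1
_, p_before = forward(x)
for _ in range(20):
    finetune_step(x, y)
_, p_after = forward(x)
```

Because the decoders are gone, only the encoder and softmax weights are updated; after a few steps the predicted probability of the target class rises on the training example.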

Revision as of 04:33, 13 May 2011
