Linear Decoders

== Linear Decoder ==
One easy fix for the aforementioned problem is to use a ''linear decoder'', that is, to set <math>a^{(3)} = z^{(3)}</math>.
For a linear decoder, the activation function of the output unit is effectively the identity function. Formally, to reconstruct the input from the features using a linear decoder, we simply set <math>\hat{x} = a^{(3)} = z^{(3)} = W^{(2)}a + b^{(2)}</math>, without applying the sigmoid function. Now the reconstructed output <math>\hat{x}</math> is a linear function of the activations of the hidden units, which means that by varying <math>W^{(2)}</math> and <math>b^{(2)}</math>, each output unit can be made to produce any real value, rather than being confined to the <math>[0,1]</math> range of a sigmoid output. This allows us to train the sparse autoencoder on inputs that take on arbitrary real values, without any additional pre-processing. (Note that the hidden units are '''still sigmoid units''', that is, <math>a = \sigma(W^{(1)}x + b^{(1)})</math>, where <math>x</math> is the input, and <math>W^{(1)}</math> and <math>b^{(1)}</math> are the weight and bias terms for the hidden units.)
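
To make this concrete, the following is a minimal sketch of the forward pass in Python/NumPy (the dimensions, initialization, and variable names <code>W1</code>, <code>b1</code>, <code>W2</code>, <code>b2</code> are illustrative assumptions, not part of the tutorial):

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical sizes: 64-dimensional input, 25 hidden units.
rng = np.random.default_rng(0)
n_input, n_hidden = 64, 25
W1 = rng.normal(scale=0.01, size=(n_hidden, n_input))  # hidden-layer weights
b1 = np.zeros(n_hidden)                                # hidden-layer bias
W2 = rng.normal(scale=0.01, size=(n_input, n_hidden))  # decoder weights
b2 = np.zeros(n_input)                                 # decoder bias

x = rng.normal(size=n_input)   # real-valued input; no rescaling to [0,1] required

# Hidden units are still sigmoid: a = sigma(W1 x + b1)
a = sigmoid(W1 @ x + b1)

# Linear decoder: the output activation is the identity, xhat = z3 = W2 a + b2
xhat = W2 @ a + b2

# For comparison, a sigmoid decoder would squash the reconstruction into (0, 1):
xhat_sigmoid = sigmoid(W2 @ a + b2)
</pre>

The only change from the sigmoid decoder is dropping the output nonlinearity, so <code>xhat</code> can match inputs of any sign and magnitude.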
