Visualizing a Trained Autoencoder

<math>\begin{align}
a^{(2)}_i = f\left(\sum_{j=1}^{100} W^{(1)}_{ij} x_j  + b^{(1)}_i \right).
\end{align}</math>

<!-- This is the activation function <math>\textstyle g(\cdot)</math> applied to an affine function of the input. -->
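To make this formula concrete, here is a minimal Python sketch (not part of the tutorial) assuming a sigmoid activation <math>\textstyle f</math>; the variables <code>W1</code>, <code>b1</code>, and <code>x</code> are illustrative stand-ins for the trained parameters <math>\textstyle W^{(1)}</math>, <math>\textstyle b^{(1)}</math> and a 100-dimensional input (e.g., a flattened 10×10 image patch):

<pre>
import numpy as np

def sigmoid(z):
    # Logistic activation: f(z) = 1 / (1 + exp(-z)).
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative stand-ins for trained parameters (not the tutorial's actual values):
# 25 hidden units, 100-dimensional input (a flattened 10x10 image patch).
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(25, 100))   # plays the role of W^{(1)}
b1 = np.zeros(25)                            # plays the role of b^{(1)}
x  = rng.random(100)                         # one input image, flattened

# a^{(2)}_i = f( sum_j W^{(1)}_{ij} x_j + b^{(1)}_i ), computed for all i at once.
a2 = sigmoid(W1 @ x + b1)
print(a2.shape)   # (25,)
</pre>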
We will visualize the function computed by hidden unit <math>\textstyle i</math>---which depends on the
parameters <math>\textstyle W^{(1)}_{ij}</math> (ignoring
the bias term for now)---using a 2D image.  In particular, we think of
<math>\textstyle a^{(2)}_i</math> as some non-linear feature of the input <math>\textstyle x</math>.
We ask: what input image <math>\textstyle x</math> would cause <math>\textstyle a^{(2)}_i</math> to be maximally activated?
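If we bound the norm of the input (say <math>\textstyle \|x\|^2 \le 1</math>), then a monotone activation of <math>\textstyle \sum_j W^{(1)}_{ij} x_j</math> is maximized by taking <math>\textstyle x</math> proportional to the unit's weight row, i.e. <math>\textstyle x_j = W^{(1)}_{ij} / \sqrt{\sum_j (W^{(1)}_{ij})^2}</math>. The sketch below (a rough Python illustration with made-up variable names, not the exercise code) displays that maximizing input for each hidden unit as a 10×10 image:

<pre>
import numpy as np
import matplotlib.pyplot as plt

# W1 stands in for the trained weight matrix: one row per hidden unit,
# 100 columns corresponding to the pixels of a 10x10 input image.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(25, 100))

fig, axes = plt.subplots(5, 5, figsize=(6, 6))
for i, ax in enumerate(axes.ravel()):
    w = W1[i]
    # Norm-bounded input that maximally activates unit i:
    # x_j = W_{ij} / sqrt(sum_j W_{ij}^2), shown as a 10x10 grayscale image.
    ax.imshow((w / np.sqrt(np.sum(w ** 2))).reshape(10, 10), cmap="gray")
    ax.axis("off")
plt.tight_layout()
plt.show()
</pre>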
<sup>1</sup> ''The learned features were obtained by training on '''whitened''' natural images.  Whitening is a preprocessing step which removes redundancy in the input, by causing adjacent pixels to become less correlated.''
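Whitening is not derived on this page; purely as a rough illustration, the sketch below applies one common variant (ZCA whitening) to a matrix of image patches. The function name, the synthetic patches, and the smoothing constant <code>eps</code> are assumptions for this example, not the tutorial's code.

<pre>
import numpy as np

def zca_whiten(patches, eps=1e-5):
    # ZCA whitening: rotate and rescale the data so the whitened patches have
    # (approximately) identity covariance, i.e. decorrelated pixels.
    X = patches - patches.mean(axis=0)       # zero-mean each pixel
    cov = X.T @ X / X.shape[0]               # pixel covariance matrix
    U, S, _ = np.linalg.svd(cov)             # eigenvectors/eigenvalues of cov
    return X @ (U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T)

# Stand-in "patches" whose neighbouring pixels are correlated (a crude proxy
# for the redundancy of natural images); one flattened 10x10 patch per row.
rng = np.random.default_rng(0)
base = rng.normal(size=(10000, 100))
patches = base + np.roll(base, 1, axis=1) + np.roll(base, -1, axis=1)

white = zca_whiten(patches)
# After whitening the covariance is close to the identity, so adjacent
# pixels are (nearly) uncorrelated.
print(np.abs(np.cov(white, rowvar=False) - np.eye(100)).max())
</pre>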

{{Sparse_Autoencoder}}

{{Languages|可视化自编码器训练结果|中文}}
