Visualizing a Trained Autoencoder
From Ufldl
We will visualize the function computed by hidden unit <math>\textstyle i</math>---which depends on the parameters <math>\textstyle W^{(1)}_{ij}</math> (ignoring the bias term for now)---using a 2D image. In particular, we think of <math>\textstyle a^{(2)}_i</math> as some non-linear feature of the input <math>\textstyle x</math>. We ask:
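The question posed is which input pixel values make this hidden unit's activation largest. Assuming the standard norm constraint <math>\textstyle ||x||_2 \le 1</math>, the linear response <math>\textstyle \sum_j W^{(1)}_{ij} x_j</math> is maximized by setting each pixel proportional to its weight, <math>\textstyle x_j = W^{(1)}_{ij} / \sqrt{\sum_j (W^{(1)}_{ij})^2}</math>. A minimal sketch (the array shapes and the 8x8 patch size are illustrative assumptions, not taken from this page):

```python
import numpy as np

def max_activating_input(W1, i):
    """Return the norm-constrained input that maximally activates hidden unit i.

    Maximizing W1[i] . x subject to ||x||_2 <= 1 is achieved at
    x = W1[i] / ||W1[i]||_2.  W1 is the (hidden_dim, input_dim) weight
    matrix of the first layer (biases ignored, as in the text).
    """
    w = W1[i]
    return w / np.linalg.norm(w)

# Hypothetical example: 25 hidden units over 8x8 image patches.
W1 = np.random.randn(25, 64)
patch = max_activating_input(W1, 0).reshape(8, 8)  # 2D image to display
```

Reshaping the unit-norm vector back to the patch dimensions gives the 2D image that the article displays for each hidden unit.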
<sup>1</sup> ''The learned features were obtained by training on '''whitened''' natural images. Whitening is a preprocessing step which removes redundancy in the input, by causing adjacent pixels to become less correlated.''
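The footnote does not specify which whitening variant was used; one common choice for images is ZCA whitening, which decorrelates pixels while keeping the result close to the original image space. A sketch under that assumption (the `eps` regularizer is a standard numerical safeguard, not a value from this page):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X (shape: n_samples x n_features).

    Decorrelates features and rescales each principal direction to
    (approximately) unit variance, so adjacent pixels become less
    correlated; eps avoids dividing by near-zero eigenvalues.
    """
    Xc = X - X.mean(axis=0)                      # center each pixel
    cov = Xc.T @ Xc / X.shape[0]                 # empirical covariance
    U, S, _ = np.linalg.svd(cov)                 # cov = U diag(S) U^T
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W                                # whitened patches
```

After this transform the empirical covariance of the data is approximately the identity, which is the redundancy removal the footnote describes.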
+ | |||
+ | |||
+ | {{Sparse_Autoencoder}} | ||
+ | |||
+ | |||
+ | {{Languages|可视化自编码器训练结果|中文}} |