Visualizing a Trained Autoencoder
From Ufldl
Having trained a (sparse) autoencoder, we would now like to visualize the function learned by the algorithm, to try to understand what it has learned. Consider the case of training ...
When we do this for a sparse autoencoder (trained with 100 hidden units on 10x10 pixel inputs) we get the following result:

[[Image:ExampleSparseAutoencoderWeights.png|thumb|400px|center]]
Each square in the figure above shows the (norm bounded) input image <math>\textstyle x</math> that ...

... as audio), this algorithm also learns useful representations/features for those domains too.
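A visualization like the figure above can be sketched in a few lines of code. This is a minimal illustration, assuming <code>W</code> holds the trained first-layer weights with one hidden unit per row and one input pixel per column (a hypothetical layout): each unit's norm-bounded maximally activating input is its weight row rescaled to unit Euclidean length, and the resulting images are tiled into a grid.

```python
import numpy as np

def max_activating_inputs(W):
    """For each hidden unit (one row of W), return the norm-bounded
    input image that maximally activates it: x = W_i / ||W_i||_2."""
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W / norms

def tile_images(X, img_shape=(10, 10), grid=(10, 10)):
    """Arrange each row of X as a small image in a rows-by-cols grid,
    so all hidden units can be inspected in a single picture."""
    h, w = img_shape
    rows, cols = grid
    canvas = np.zeros((rows * h, cols * w))
    for k in range(min(X.shape[0], rows * cols)):
        r, c = divmod(k, cols)
        canvas[r * h:(r + 1) * h, c * w:(c + 1) * w] = X[k].reshape(img_shape)
    return canvas

# Hypothetical trained weights: 100 hidden units on 10x10 = 100 pixel inputs.
W = np.random.randn(100, 100)
display = tile_images(max_activating_inputs(W))
```

The resulting <code>display</code> array can then be shown with any image viewer (e.g. <code>matplotlib.pyplot.imshow</code>) to reproduce a grid of feature images like the one above.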
+ | |||
''The learned features were obtained by training on '''whitened''' natural images. Whitening is a preprocessing step which removes redundancy in the input, by causing adjacent pixels to become less correlated.''
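The whitening step mentioned in the note can be sketched as follows. This is an illustration using ZCA whitening, one standard choice (the exact preprocessing variant is not specified here): a linear transform that subtracts the mean and rescales along the covariance eigendirections, so that pixel covariance of the output is approximately the identity, i.e. adjacent pixels become decorrelated.

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X (one image patch per row).
    eps regularizes small eigenvalues to avoid amplifying noise."""
    X = X - X.mean(axis=0)               # zero-mean each pixel
    cov = X.T @ X / X.shape[0]           # pixel covariance matrix
    U, S, _ = np.linalg.svd(cov)         # eigendecomposition (cov is PSD)
    W_zca = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return X @ W_zca                     # decorrelated, unit-variance pixels

patches = np.random.rand(1000, 100)      # hypothetical 10x10 image patches
white = zca_whiten(patches)
```

Unlike PCA whitening, the ZCA variant rotates back into the original pixel basis, so whitened patches still look like (edge-enhanced) images, which is convenient when the learned features are later visualized in pixel space.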