Feature extraction using convolution

From Ufldl

Indeed, this intuition leads us to the method of '''feature extraction using convolution''' for large images. The idea is to first learn some features on smaller patches (say 8x8 patches) sampled from the large image, and then to '''convolve''' these features with the larger image to get the feature activations at various points in the image. Convolution corresponds precisely to the intuitive notion of translating the features. To give a concrete example, suppose you have learned 100 features on 8x8 patches sampled from a 96x96 image. To get the convolved features, for every 8x8 region of the 96x96 image, that is, the 8x8 regions starting at <math>(1, 1), (1, 2), \ldots (89, 89)</math>, you would extract the 8x8 patch and run it through your trained sparse autoencoder to get the feature activations. This would result in a set of 100 89x89 convolved features. These convolved features can then be '''[[#pooling | pooled]]''' together to produce a smaller set of pooled features, which can then be used for classification.
Formally, given some large <math>r \times c</math> images <math>x_{large}</math>, we first train a sparse autoencoder on small <math>a \times b</math> patches <math>x_{small}</math> sampled from these images, learning <math>k</math> features <math>f = \sigma(W^{(1)}x_{small} + b^{(1)})</math> (where <math>\sigma</math> is the sigmoid function), given by the weights <math>W^{(1)}</math> and biases <math>b^{(1)}</math> from the visible units to the hidden units. For every <math>a \times b</math> patch <math>x_s</math> in the large image, we compute <math>f_s = \sigma(W^{(1)}x_s + b^{(1)})</math>, giving us <math>f_{convolved}</math>, a <math>k \times (r - a + 1) \times (c - b + 1)</math> array of convolved features. These convolved features can then be [[#pooling | pooled]] for classification, as described below.
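The procedure above can be sketched in a few lines of NumPy. This is a minimal, loop-based illustration (not an optimized implementation): it assumes you already have trained autoencoder parameters <math>W^{(1)}</math> and <math>b^{(1)}</math>, here passed in as hypothetical arguments <code>W1</code> (a <math>k \times ab</math> matrix) and <code>b1</code> (a length-<math>k</math> vector), and it slides an <math>a \times b</math> window over the image, applying the hidden layer to each flattened patch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convolve_features(image, W1, b1, a, b):
    """Compute convolved features for one large image.

    image : (r, c) array, the large input image.
    W1    : (k, a*b) weight matrix from a trained sparse autoencoder
            (assumed already learned on a x b patches).
    b1    : (k,) hidden-unit bias vector from the same autoencoder.
    Returns a (k, r - a + 1, c - b + 1) array of feature activations.
    """
    r, c = image.shape
    k = W1.shape[0]
    f_convolved = np.zeros((k, r - a + 1, c - b + 1))
    # Visit every a x b patch, starting at (0, 0) through (r-a, c-b).
    for i in range(r - a + 1):
        for j in range(c - b + 1):
            patch = image[i:i + a, j:j + b].reshape(-1)  # flatten to length a*b
            f_convolved[:, i, j] = sigmoid(W1 @ patch + b1)
    return f_convolved
```

For the 96x96 example with 100 learned 8x8 features, <code>convolve_features(image, W1, b1, 8, 8)</code> would return a 100 x 89 x 89 array. In practice each feature map can also be computed with a 2-D convolution of the image against that feature's weights, which is where the name comes from.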
=== Pooling ===

Revision as of 05:34, 7 May 2011
