Feature extraction using convolution

== Overview ==
In the previous exercises, you worked through problems involving images that were relatively low in resolution, such as small image patches and small images of hand-written digits. In this section, we will develop methods that allow us to scale up to more realistic datasets with larger images.
== Fully Connected Networks ==
In the sparse autoencoder, one design choice we made was to "fully connect" all the hidden units to all the input units. On relatively small images (e.g., 8x8 patches for the sparse autoencoder assignment, 28x28 images for the MNIST dataset), it is computationally feasible to learn features on the entire image. However, with larger images (e.g., 96x96 images), learning features that span the entire image (a fully connected network) is very computationally expensive -- you would have about <math>10^4</math> input units, and assuming you want to learn 100 features, you would have on the order of <math>10^6</math> parameters to learn. The feedforward and backpropagation computations would also be about <math>10^2</math> times slower, compared to 28x28 images.
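As a rough back-of-the-envelope check of these numbers, here is a minimal Python sketch (not part of the original exercise code) that counts the weights and biases of such a fully connected feature layer for the image sizes quoted above; the helper name <tt>fully_connected_params</tt> is introduced only for this illustration.

<pre>
# Minimal sketch: count the parameters of one fully connected feature layer.
# The sizes (96x96 image, 100 features) are the ones quoted in the text above.

def fully_connected_params(image_side, num_features):
    """Weights plus biases when every pixel connects to every hidden unit."""
    num_inputs = image_side * image_side
    return num_inputs * num_features + num_features

print(96 * 96)                            # 9216   -> about 10^4 input units
print(fully_connected_params(96, 100))    # 921700 -> on the order of 10^6 parameters
print(fully_connected_params(28, 100))    # 78500  -> much smaller for MNIST-sized images
</pre>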
== Locally Connected Networks ==
One simple solution to the problem is to restrict the connections between the hidden units and the input units, allowing each hidden unit to connect to only a select number of input units. The selection of connections between the hidden and input units can often be determined based on the input modality -- e.g., for images, we will have hidden units that connect to local contiguous regions of pixels.  
This idea of having locally connected networks also draws inspiration from how the early visual system is wired up. Specifically, neurons in the visual cortex are found to have localized receptive fields (i.e., they respond only to stimuli in a certain location).  
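To make the idea of restricted connectivity concrete, here is a small Python sketch (not from the original tutorial) in which each hidden unit is wired to a single contiguous 8x8 patch of a 96x96 image, so it connects to 64 rather than 9216 input units. The function <tt>receptive_field</tt> is a hypothetical helper used only for this illustration.

<pre>
import numpy as np

# Sketch of local connectivity: one hidden unit sees only a contiguous patch.
# Sizes (96x96 image, 8x8 receptive field) are the ones used on this page.
IMAGE_SIDE, PATCH_SIDE = 96, 8

def receptive_field(top, left):
    """Flattened-image indices of the pixels one hidden unit connects to."""
    rows = np.arange(top, top + PATCH_SIDE)
    cols = np.arange(left, left + PATCH_SIDE)
    return (rows[:, None] * IMAGE_SIDE + cols[None, :]).ravel()

idx = receptive_field(10, 20)             # hidden unit anchored at pixel (10, 20)
print(len(idx), "of", IMAGE_SIDE ** 2)    # 64 of 9216 inputs are connected
</pre>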
== Weight Sharing (Convolution) ==
Natural images have the property of being '''stationary''', meaning that the statistics of one part of the image are the same as any other part. This suggests that the features that we learn at one part of the image can also be applicable to other regions -- i.e., we can have the same features at all locations.  
To capture this idea of learning the same features "everywhere in the image," one option is to add a constraint known as weight sharing (or weight tying) between the hidden units at different locations. If one chooses to have the same hidden unit replicated at every possible location, this turns out to be equivalent to convolving that feature (used as a filter) with the image.

== Fast Feature Learning and Extraction ==
While in principle one can learn features convolutionally over the entire image, the learning procedure becomes more complicated to implement and often takes longer to execute. Hence, in practice, it is faster and easier to learn features on small patches and then extract the features convolutionally afterwards.
More precisely, having learned features over small (say 8x8) patches sampled randomly from the larger image, we can then apply this learned 8x8 feature detector anywhere in the image. Specifically, we can take the learned 8x8 features and '''convolve''' them with the larger image, thus obtaining a different feature activation value at each location in the image.
To give a concrete example, suppose you have learned 100 features on 8x8 patches sampled from a 96x96 image. To get the convolved features, for every 8x8 region of the 96x96 image -- that is, the 8x8 regions starting at <math>(1, 1), (1, 2), \ldots, (89, 89)</math> -- you would extract the 8x8 patch and run it through your trained sparse autoencoder to get the feature activations. This would result in a set of 100 convolved features, each of size 89x89.
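This procedure can be written as a short loop. The following Python sketch is only an illustration, not the exercise's official code: <tt>W1</tt> and <tt>b1</tt> stand in for a hypothetical trained sparse-autoencoder encoder (100 features learned on 8x8 patches) and are filled with random placeholder values here so the snippet runs on its own.

<pre>
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder "trained" encoder: 100 features over 8x8 = 64-pixel patches.
# In the exercise these weights would come from the sparse autoencoder.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((100, 64))
b1 = rng.standard_normal(100)

image = rng.random((96, 96))           # one large 96x96 image
out_side = 96 - 8 + 1                  # 89 valid 8x8 positions per dimension
convolved = np.zeros((100, out_side, out_side))

# For every 8x8 region starting at (i, j), flatten the patch and run it
# through the encoder; the flattening order must match how W1 was trained.
for i in range(out_side):
    for j in range(out_side):
        patch = image[i:i + 8, j:j + 8].reshape(-1)
        convolved[:, i, j] = sigmoid(W1 @ patch + b1)

print(convolved.shape)                 # (100, 89, 89)
</pre>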
[[File:Convolution_schematic.gif]]
Formally, given some large <math>r \times c</math> images <math>x_{large}</math>, we first train a sparse autoencoder on small <math>a \times b</math> patches <math>x_{small}</math> sampled from these images, learning <math>k</math> features <math>f = \sigma(W^{(1)}x_{small} + b^{(1)})</math> (where <math>\sigma</math> is the sigmoid function), given by the weights <math>W^{(1)}</math> and biases <math>b^{(1)}</math> from the visible units to the hidden units. For every <math>a \times b</math> patch <math>x_s</math> in the large image, we compute <math>f_s = \sigma(W^{(1)}x_s + b^{(1)})</math>, giving us <math>f_{convolved}</math>, a <math>k \times (r - a + 1) \times (c - b + 1)</math> array of convolved features.  
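Since the pre-sigmoid activation is linear in the patch, each of the <math>k</math> feature maps can also be computed with a single 2-D filtering operation per feature: cross-correlate the image with that feature's <math>a \times b</math> weight filter, add its bias, and apply the sigmoid. (Whether the filter is flipped first -- true convolution versus cross-correlation -- is only a matter of convention for learned filters.) The sketch below is again a stand-alone illustration using random placeholder weights rather than a trained autoencoder.

<pre>
import numpy as np
from scipy.signal import correlate2d

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Placeholder encoder (k features over a x b patches) and one r x c image.
rng = np.random.default_rng(0)
k, a, b = 100, 8, 8
r, c = 96, 96
W1 = rng.standard_normal((k, a * b))
b1 = rng.standard_normal(k)
image = rng.random((r, c))

# One "valid" cross-correlation per learned feature, then bias and sigmoid.
f_convolved = np.array([
    sigmoid(correlate2d(image, W1[f].reshape(a, b), mode="valid") + b1[f])
    for f in range(k)
])

print(f_convolved.shape)   # (100, 89, 89) = (k, r - a + 1, c - b + 1)
</pre>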
In the next section, we further describe how to "pool" these features together to get even better features for classification.
