# Exercise: PCA and Whitening


## PCA, PCA whitening and ZCA implementation

In this problem set, you will implement PCA, PCA whitening and ZCA whitening, as described in the lecture notes.

You will build on the MATLAB starter code which we have provided in pca_exercise.zip. You need only write code at the places indicated by "YOUR CODE HERE" in the files. The only file you need to modify is pca_gen.m.

### Step 0a: Load data

The starter code contains code to load some natural images and sample 10000 14x14 patches from them. The raw patches sampled from the images will look something like this:

These patches are stored as column vectors $x^{(i)} \in \mathbb{R}^{196}$ in the $196 \times 10000$ matrix x.

### Step 0b: Zero mean the data

You should then zero-mean the data by subtracting the mean image from each image.
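As a sketch of this step, the zero-meaning can be written as follows in NumPy (the exercise itself uses MATLAB, and random values stand in for the sampled patches):

```python
import numpy as np

# Stand-in for the 196 x 10000 patch matrix loaded by the starter code.
rng = np.random.default_rng(0)
x = rng.standard_normal((196, 10000))

# Subtract the mean image (the average over all patches) from every patch.
mean_patch = x.mean(axis=1, keepdims=True)   # 196 x 1 "mean image"
x = x - mean_patch
```

After this step, every row of x averages to zero across the 10000 patches.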

### Step 1: Implement PCA

#### Step 1a: Implement PCA

In this step, you will implement PCA to obtain xrot, the matrix in which the data is "rotated" to the basis comprising the principal components (i.e. the eigenbasis of Σ).
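A minimal NumPy sketch of this computation (random stand-in data; in the exercise you would write the equivalent MATLAB in pca_gen.m):

```python
import numpy as np

# Zero-meaned stand-in data: patches as columns, as in the exercise.
rng = np.random.default_rng(0)
x = rng.standard_normal((196, 1000))
x -= x.mean(axis=1, keepdims=True)

m = x.shape[1]
sigma = x @ x.T / m              # covariance matrix Sigma, 196 x 196
U, S, _ = np.linalg.svd(sigma)   # columns of U are the principal components
xrot = U.T @ x                   # data rotated into the eigenbasis
```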

#### Step 1b: Check covariance

To verify that your implementation of PCA is correct, you should check the covariance matrix for the rotated data. PCA guarantees that the covariance matrix for the rotated data is a diagonal matrix (a matrix with non-zero entries only along the main diagonal). Implement code to compute the covariance matrix and verify this property. One way to do this is to compute the covariance matrix, and visualise it using the MATLAB command imagesc. The image should show a multicoloured diagonal line against a blue background.
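The same check can also be done numerically rather than visually. A sketch (self-contained, with random stand-in data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((196, 1000))
x -= x.mean(axis=1, keepdims=True)
m = x.shape[1]
U, S, _ = np.linalg.svd(x @ x.T / m)
xrot = U.T @ x

covar = xrot @ xrot.T / m                    # covariance of the rotated data
off_diag = covar - np.diag(np.diag(covar))   # should be numerically zero
print(np.abs(off_diag).max())
```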

### Step 2: Find number of components to retain

In the next step, you will find k, the smallest number of components needed to retain at least 99% of the variance. In the step after this, you will discard all but the top k principal components, reducing the dimension of the original data to k.
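Since the eigenvalues from the SVD are sorted in decreasing order, k can be found from their cumulative sum. A NumPy sketch (random stand-in data):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((196, 1000))
x -= x.mean(axis=1, keepdims=True)
# Eigenvalues of the covariance matrix, largest first.
S = np.linalg.svd(x @ x.T / x.shape[1], compute_uv=False)

# Smallest k whose leading eigenvalues capture at least 99% of the variance.
cum = np.cumsum(S) / np.sum(S)
k = int(np.searchsorted(cum, 0.99)) + 1
print(k)
```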

### Step 3: PCA with dimension reduction

Now that you have found k, you can reduce the dimension of the data by discarding the remaining dimensions. In this way, you can represent the data in k dimensions instead of the original 196, which will save you computational time when running learning algorithms on the reduced representation.

To see the effect of dimension reduction, invert the PCA transformation to produce the matrix $\hat{x}$, the dimension-reduced data with respect to the original basis. Visualise $\hat{x}$ and compare it to the raw data, x. You will observe that there is little loss due to throwing away the principal components that correspond to dimensions with low variation. For comparison, you may also wish to generate and visualise $\hat{x}$ for when only 50% of the variance is retained.
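The reduction and the inverse transformation can be sketched as follows (NumPy, random stand-in data, with k fixed to an illustrative value rather than the one computed in Step 2):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((196, 1000))
x -= x.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(x @ x.T / x.shape[1])

k = 50                      # illustrative; use the k found in Step 2
xtilde = U[:, :k].T @ x     # k-dimensional reduced representation
xhat = U[:, :k] @ xtilde    # back in the original 196-dimensional basis

# Fraction of the data's energy lost by discarding the trailing components.
err = np.linalg.norm(x - xhat) ** 2 / np.linalg.norm(x) ** 2
print(err)
```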

*Left to right: PCA dimension-reduced images (99% variance), raw images, PCA dimension-reduced images (50% variance).*

### Step 4: PCA with whitening and regularisation

#### Step 4a: Implement PCA with whitening and regularisation

Now implement PCA with whitening and regularisation to produce the matrix xPCAWhite.

#### Step 4b: Check covariance

As with PCA alone, PCA with whitening produces data with a diagonal covariance matrix. Unlike PCA alone, however, whitening additionally ensures that the diagonal entries are (approximately) equal to 1, i.e. that the covariance matrix is (approximately) the identity matrix. To verify that your implementation of PCA with whitening is correct, implement code to compute the covariance matrix and check this property. As earlier, you can visualise the covariance matrix with imagesc. When visualised as an image, you should see a red line across the diagonal (corresponding to the one entries) against a blue background (corresponding to the zero entries).

### Step 5: ZCA whitening

Now implement ZCA whitening to produce the matrix xZCAWhite. Visualise xZCAWhite and compare it to the raw data, x. You should observe that whitening results in, among other things, enhanced edges.

*Left to right: ZCA whitened images, raw images.*