Implementing PCA/Whitening

In this section, we summarize the PCA, PCA whitening and ZCA whitening algorithms, and also describe how you can implement them using efficient linear algebra libraries.

First, we need to compute \textstyle \Sigma = \frac{1}{m} \sum_{i=1}^m (x^{(i)})(x^{(i)})^T. If you're implementing this in Matlab (or even if you're implementing this in C++, Java, etc., but have access to an efficient linear algebra library), doing it as an explicit sum is inefficient. Instead, we can compute it in one fell swoop as

sigma = x * x' / size(x, 2);

(Check the math yourself for correctness.) Here, we assume that x is a data structure that contains one training example per column (so, x is an \textstyle n-by-\textstyle m matrix).
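
Note that this expression computes the covariance matrix \textstyle \Sigma we want only if the data has zero mean. If x has not already been zero-centered in a preprocessing step, a minimal sketch of per-feature mean subtraction is:

avg = mean(x, 2);               % n-by-1 vector of per-feature means
x = bsxfun(@minus, x, avg);     % subtract the mean from every column
sigma = x * x' / size(x, 2);    % covariance of the centered data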

Next, PCA computes the eigenvectors of \textstyle \Sigma. One could do this using the Matlab eig function. However, because \textstyle \Sigma is a symmetric positive semi-definite matrix, it is more numerically reliable to do so using the svd function. Concretely, if you implement

[U,S,V] = svd(sigma);

then the matrix U will contain the eigenvectors of \textstyle \Sigma (one eigenvector per column, sorted in order of decreasing eigenvalue), and the diagonal entries of the matrix S will contain the corresponding eigenvalues, also sorted in decreasing order. The matrix V will be equal to the transpose of U, and can be safely ignored.

(Note: The svd function actually computes the singular vectors and singular values of a matrix; for the special case of a symmetric positive semi-definite matrix, which is all we're concerned with here, these are equal to its eigenvectors and eigenvalues. A full discussion of singular vectors vs. eigenvectors is beyond the scope of these notes.)
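
If you would like to convince yourself of this numerically, one quick sanity check (a sketch, not part of the algorithm) is that \textstyle \Sigma U = U S should hold up to floating-point error:

lambda = diag(S);           % column vector of eigenvalues, in decreasing order
norm(sigma * U - U * S)     % should be very close to 0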

You can then compute \textstyle x_{\rm rot} and \textstyle \tilde{x} as follows:

xrot = U' * x;
xtilde = U(:,1:k)' * x;

This gives your PCA representation of the data in terms of \textstyle \tilde{x} \in \Re^k. Incidentally, if x is an \textstyle n-by-\textstyle m matrix containing your entire training set, these are vectorized implementations: the expressions above compute xrot and xtilde for the whole training set in one go, with one column of the result corresponding to each training example.
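
If you still need to choose \textstyle k, one common recipe (sketched below; the 99% threshold is an illustrative choice, not something prescribed here) is to keep enough components to retain most of the variance. The reduced representation can then also be mapped back to an approximate reconstruction of the original data:

lambda = diag(S);                                   % eigenvalues of sigma
k = find(cumsum(lambda) / sum(lambda) >= 0.99, 1);  % smallest k retaining 99% of the variance
xtilde = U(:,1:k)' * x;                             % k-by-m reduced representation
xhat = U(:,1:k) * xtilde;                           % n-by-m approximate reconstruction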

To compute the PCA whitened data \textstyle x_{\rm PCAwhite}, use

xPCAwhite = diag(1./sqrt(diag(S) + epsilon)) * U' * x;

Since the diagonal of S contains the eigenvalues \textstyle \lambda_i, this turns out to be a compact way of computing \textstyle x_{{\rm PCAwhite},i} = \frac{x_{{\rm rot},i}}{\sqrt{\lambda_i}} simultaneously for all \textstyle i. The small constant epsilon in the code regularizes this scaling so that directions with very small eigenvalues are not blown up.
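
As a quick sanity check (again a sketch, not part of the algorithm), the covariance of the PCA whitened data should be close to the identity matrix, except in directions whose eigenvalues are comparable to epsilon:

sigmaWhite = xPCAwhite * xPCAwhite' / size(x, 2);   % should be approximately eye(n)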

Finally, you can also compute the ZCA whitened data \textstyle x_{\rm ZCAwhite} as:

xZCAwhite = U * diag(1./sqrt(diag(S) + epsilon)) * U' * x;
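
Since the expression above is just \textstyle U applied to the PCA whitened data, you can equivalently reuse xPCAwhite:

xZCAwhite = U * xPCAwhite;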