Neural Network Vectorization
== Sparse autoencoder ==

The [http://ufldl/wiki/index.php/Autoencoders_and_Sparsity sparse autoencoder] neural network has an additional sparsity penalty that constrains neurons to fire at a target average activation. We take the sparsity penalty into account by computing the following:

:<math>\begin{align}
\delta^{(2)}_i =
\left( \left( \sum_{j=1}^{s_{3}} W^{(2)}_{ji} \delta^{(3)}_j \right)
+ \beta \left( - \frac{\rho}{\hat\rho_i} + \frac{1-\rho}{1-\hat\rho_i} \right) \right) f'(z^{(2)}_i) .
\end{align}</math>

In the ''unvectorized'' case, this is computed as:

<syntaxhighlight>
% Sparsity penalty term (element-wise division over the hidden units)
sparsity_delta = - rho ./ rho_hat + (1 - rho) ./ (1 - rho_hat);
delta2 = (W2'*delta3(:,i) + beta*sparsity_delta) .* fprime(z2(:,i));
</syntaxhighlight>
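
The same update can be vectorized across all <math>m</math> training examples by replicating the sparsity term over the columns with <tt>repmat</tt>. The sketch below assumes <tt>delta3</tt> and <tt>z2</tt> hold one example per column and <tt>rho_hat</tt> is the column vector of average hidden activations (these shapes are assumptions about the surrounding code, not stated above):

<syntaxhighlight>
% Vectorized over all m examples: compute the sparsity term once (a column
% vector), then replicate it across the m columns before adding it in.
sparsity_delta = - rho ./ rho_hat + (1 - rho) ./ (1 - rho_hat);
delta2 = (W2' * delta3 + beta * repmat(sparsity_delta, 1, m)) .* fprime(z2);
</syntaxhighlight>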

* [TODO] Clean up above and add repmat instructions