Neural Network Vectorization

\end{align}
</math>
Here, <math>\bullet</math> denotes element-wise product.  For simplicity, our description here will ignore the derivatives with respect to <math>b^{(l)}</math>, though your implementation of backpropagation will have to compute those derivatives too.
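In Matlab/Octave, this element-wise product corresponds to the <tt>.*</tt> operator, for example:

<pre>
% Element-wise (Hadamard) product of two equal-sized matrices.
C = A .* B;    % C(i,j) = A(i,j) * B(i,j)
</pre>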
Suppose we have already implemented the vectorized forward propagation method, so that the matrix-valued <tt>z2</tt>, <tt>a2</tt>,  <tt>z3</tt> and <tt>h</tt> are computed as described above. We can then implement an ''unvectorized'' version of backpropagation as follows:
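For concreteness, here is a sketch of what such a loop over the <math>m</math> training examples might look like, assuming the inputs are stored column-wise in a matrix <tt>x</tt> and that <tt>fprime</tt> evaluates <math>f'</math> element-wise (these names, along with <tt>gradW1</tt> and <tt>gradW2</tt>, are illustrative); the derivatives with respect to <math>b^{(l)}</math> are again omitted:

<pre>
% Unvectorized backpropagation: process one training example at a time,
% accumulating each example's contribution to the gradients.
gradW1 = zeros(size(W1));
gradW2 = zeros(size(W2));
for i = 1:m,
  % Error term at the output layer for example i.
  delta3 = -(y(:,i) - h(:,i)) .* fprime(z3(:,i));
  % Backpropagate the error term to the hidden layer.
  delta2 = (W2' * delta3) .* fprime(z2(:,i));
  % Accumulate the weight gradients (bias gradients omitted).
  gradW2 = gradW2 + delta3 * a2(:,i)';
  gradW1 = gradW1 + delta2 * x(:,i)';
end;
</pre>

A vectorized implementation replaces this loop with a handful of matrix-matrix operations that process all <math>m</math> columns at once.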
== Sparse autoencoder ==
The [http://ufldl/wiki/index.php/Autoencoders_and_Sparsity sparse autoencoder] neural network has an additional sparsity penalty that constrains neurons' average firing rate to be close to some target activation <math>\rho</math>. We take the sparsity penalty into account by computing the following:
:<math>\begin{align}
\delta^{(2)}_i = \left( \left( \sum_{j} W^{(2)}_{ji} \delta^{(3)}_j \right) + \beta \left( - \frac{\rho}{\hat\rho_i} + \frac{1-\rho}{1-\hat\rho_i} \right) \right) f'(z^{(2)}_i)
\end{align}</math>
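To vectorize this over all <math>m</math> examples, one can compute the average activations <math>\hat\rho</math> once and replicate the sparsity term across columns. A minimal sketch, assuming <tt>a2</tt> holds the hidden activations column-wise and that <tt>rho</tt>, <tt>beta</tt> and <tt>fprime</tt> denote the target activation, the penalty weight and the derivative <math>f'</math> respectively (illustrative names, not fixed by the text above):

<pre>
% Average activation of each hidden unit over all m examples.
rhohat = (1/m) * sum(a2, 2);
% Sparsity term, replicated across m columns so it can be added to the
% error terms of every example simultaneously.
sparsity_delta = repmat(beta * (-rho ./ rhohat + (1-rho) ./ (1-rhohat)), 1, m);
% Vectorized hidden-layer error terms for all examples at once.
delta2 = (W2' * delta3 + sparsity_delta) .* fprime(z2);
</pre>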
