# Sparse Autoencoder Notation Summary

### From UFLDL

Here is a summary of the symbols used in our derivation of the sparse autoencoder:

Symbol | Meaning
---|---
$x$ | Input features for a training example, $x \in \mathbb{R}^{n}$.
$y$ | Output/target values. Here, $y$ can be vector valued. In the case of an autoencoder, $y = x$.
$(x^{(i)}, y^{(i)})$ | The $i$-th training example.
$h_{W,b}(x)$ | Output of our hypothesis on input $x$, using parameters $W, b$. This should be a vector of the same dimension as the target value $y$.
$W^{(l)}_{ij}$ | The parameter associated with the connection between unit $j$ in layer $l$, and unit $i$ in layer $l+1$.
$b^{(l)}_{i}$ | The bias term associated with unit $i$ in layer $l+1$. Can also be thought of as the parameter associated with the connection between the bias unit in layer $l$ and unit $i$ in layer $l+1$.
$\theta$ | Our parameter vector. It is useful to think of this as the result of taking the parameters $W, b$ and "unrolling" them into a long column vector.
$a^{(l)}_{i}$ | Activation (output) of unit $i$ in layer $l$ of the network. In addition, since layer $L_1$ is the input layer, we also have $a^{(1)}_i = x_i$.
$f(\cdot)$ | The activation function. Throughout these notes, we used the sigmoid function $f(z) = \frac{1}{1 + e^{-z}}$.
$z^{(l)}_{i}$ | Total weighted sum of inputs to unit $i$ in layer $l$. Thus, $a^{(l)}_i = f(z^{(l)}_i)$.
$\alpha$ | Learning rate parameter.
$s_l$ | Number of units in layer $l$ (not counting the bias unit).
$n_l$ | Number of layers in the network. Layer $L_1$ is usually the input layer, and layer $L_{n_l}$ the output layer.
$\lambda$ | Weight decay parameter.
$\hat{x}$ | For an autoencoder, its output; i.e., its reconstruction of the input $x$. Same meaning as $h_{W,b}(x)$.
$\rho$ | Sparsity parameter, which specifies our desired level of sparsity.
$\hat{\rho}_i$ | The average activation of hidden unit $i$ (in the sparse autoencoder).
$\beta$ | Weight of the sparsity penalty term (in the sparse autoencoder objective).
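The forward-propagation relations in this notation — $z^{(l)}_i$ as a weighted sum of the previous layer's activations plus a bias, $a^{(l)}_i = f(z^{(l)}_i)$, and the average hidden activation $\hat{\rho}_i$ taken over training examples — can be sketched in plain Python. The layer sizes, weights, and inputs below are illustrative assumptions, not values from the notes:

```python
import math

def sigmoid(z):
    # The activation function f(z) = 1 / (1 + e^{-z}).
    return 1.0 / (1.0 + math.exp(-z))

def forward(W, b, a_prev):
    """One layer of forward propagation.

    W[i][j] plays the role of W^{(l)}_{ij}: the weight between unit j in
    layer l and unit i in layer l+1; b[i] is the bias b^{(l)}_i.
    Computes z_i = sum_j W[i][j] * a_prev[j] + b[i] and returns a_i = f(z_i).
    """
    return [sigmoid(sum(w * a for w, a in zip(row, a_prev)) + b_i)
            for row, b_i in zip(W, b)]

# Tiny illustrative network: 2 inputs, 1 hidden unit (made-up weights).
W1 = [[0.5, -0.5]]
b1 = [0.0]

# a^{(1)} = x, so the hidden activation is a^{(2)} = f(W^{(1)} x + b^{(1)}).
a2 = forward(W1, b1, [1.0, 1.0])

# Average activation rho_hat of the hidden unit over a few training examples.
xs = [[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]]
rho_hat = sum(forward(W1, b1, x)[0] for x in xs) / len(xs)
```

In the sparse autoencoder objective, `rho_hat` is the quantity that the sparsity penalty (weighted by $\beta$) pushes toward the target $\rho$.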

