# Neural Networks

### From Ufldl

[[Image:Sigmoid_Function.png|400px|top|Sigmoid activation function.]]

[[Image:Tanh_Function.png|400px|top|Tanh activation function.]]

The <math>\tanh(z)</math> function is a rescaled version of the sigmoid, and its output range is <math>[-1,1]</math> instead of <math>[0,1]</math>.
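The rescaling relationship can be checked numerically: <math>\tanh(z) = 2\,\sigma(2z) - 1</math>, where <math>\sigma</math> is the sigmoid. A minimal sketch in Python (function names are ours, not from the tutorial):

```python
import math

def sigmoid(z):
    # Logistic sigmoid: maps any real z into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def tanh_via_sigmoid(z):
    # tanh is a rescaled, recentered sigmoid: tanh(z) = 2*sigmoid(2z) - 1,
    # which stretches the (0, 1) output range to (-1, 1).
    return 2.0 * sigmoid(2.0 * z) - 1.0

# The identity should hold to machine precision for any input.
for z in (-2.0, 0.0, 1.5):
    assert abs(tanh_via_sigmoid(z) - math.tanh(z)) < 1e-12
```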


node). The middle layer of nodes is called the '''hidden layer''', because its values are not observed in the training set. We also say that our example neural network has 3 '''input units''' (not counting the bias unit), 3 '''hidden units''', and 1 '''output unit'''.
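A forward pass through such a 3-3-1 network can be sketched in a few lines of Python. The parameter values below are made up purely for illustration; only the shapes (3 inputs, 3 hidden units, 1 output) come from the text:

```python
import math

def sigmoid(z):
    # Logistic sigmoid activation, applied at every unit.
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    # Hidden layer: 3 units, each a weighted sum of the 3 inputs
    # plus a bias term, passed through the activation function.
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    # Output layer: one unit over the 3 hidden activations plus bias.
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)) + b2)

# Illustrative (made-up) parameters for the 3-3-1 example network.
W1 = [[0.1, -0.2, 0.3], [0.4, 0.0, -0.1], [-0.3, 0.2, 0.1]]
b1 = [0.0, 0.1, -0.1]
W2 = [0.5, -0.5, 0.2]
b2 = 0.05

y = forward([1.0, 0.5, -1.0], W1, b1, W2, b2)  # a single scalar in (0, 1)
```

Because the output unit is also a sigmoid, the network's output always lies strictly between 0 and 1.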