# Neural Networks

### From UFLDL

Consider a supervised learning problem where we have access to labeled training
examples (*x*^{(i)},*y*^{(i)}). Neural networks give a way of defining a complex,
non-linear form of hypotheses *h*_{W,b}(*x*), with parameters *W*,*b* that we can
fit to our data.

To describe neural networks, we will begin by describing the simplest possible neural network, one which comprises a single "neuron." We will use the following diagram to denote a single neuron:

This "neuron" is a computational unit that takes as input *x*_{1},*x*_{2},*x*_{3} (and a +1 intercept term), and outputs

$$h_{W,b}(x) = f(W^\top x) = f\left(\sum_{i=1}^{3} W_i x_i + b\right),$$

where $f : \mathbb{R} \to \mathbb{R}$ is
called the **activation function**. In these notes, we will choose
*f*(·) to be the sigmoid function:

$$f(z) = \frac{1}{1 + e^{-z}}.$$

Thus, our single neuron corresponds exactly to the input-output mapping defined by logistic regression.
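As a concrete sketch of this mapping, the forward computation of one sigmoid neuron can be written in a few lines (the weights, bias, and input below are made-up illustrative values, not from the text):

```python
import math

def sigmoid(z):
    """The logistic sigmoid f(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

def neuron_output(W, b, x):
    """h_{W,b}(x) = f(sum_i W_i * x_i + b) for a single neuron."""
    return sigmoid(sum(w_i * x_i for w_i, x_i in zip(W, x)) + b)

# Illustrative parameters and input (made up for this sketch):
W = [0.5, -0.3, 0.8]
b = 0.1
x = [1.0, 2.0, 3.0]

print(neuron_output(W, b, x))  # roughly 0.917, since W.x + b = 2.4
```

With the weighted sum *W*^{T}*x* + *b* = 2.4, the neuron outputs sigmoid(2.4) ≈ 0.917, which is exactly the prediction logistic regression would make with those parameters.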

Although these notes will use the sigmoid function, it is worth noting that
another common choice for *f* is the hyperbolic tangent, or tanh, function:

$$f(z) = \tanh(z) = \frac{e^{z} - e^{-z}}{e^{z} + e^{-z}}.$$
Here are plots of the sigmoid and tanh functions:
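The plots themselves are not reproduced here, but the key difference they show can be checked numerically: the sigmoid squashes its input into the range (0, 1), while tanh squashes it into (−1, 1), and both saturate for inputs of large magnitude. A small sketch:

```python
import math

def sigmoid(z):
    """The logistic sigmoid f(z) = 1 / (1 + e^{-z})."""
    return 1.0 / (1.0 + math.exp(-z))

# Tabulate both activations over a few sample points.
zs = [-5.0, -2.0, 0.0, 2.0, 5.0]
for z in zs:
    print(f"z = {z:+.1f}   sigmoid(z) = {sigmoid(z):.4f}   tanh(z) = {math.tanh(z):+.4f}")

# Note: tanh is a rescaled, shifted sigmoid: tanh(z) = 2*sigmoid(2z) - 1,
# so tanh(0) = 0 while sigmoid(0) = 0.5.
```

The identity tanh(*z*) = 2·sigmoid(2*z*) − 1 makes the relationship between the two choices explicit: they differ only in the scale and offset of their output range.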