# Softmax Regression


## Introduction

Softmax regression, also known as multinomial logistic regression, is a generalization of logistic regression to problems with more than two class labels.

Recall that in logistic regression, our hypothesis was of the form:

\begin{align} h_\theta(x) = \frac{1}{1+\exp(-\theta^Tx)}, \end{align}

where we trained the logistic regression weights to optimize the (conditional) log-likelihood of the dataset using $p(y | x) = h_\theta(x)$. In softmax regression, we are interested in multi-class problems where each example (e.g., an input image) is assigned one of several labels. One example of a multi-class classification problem is classifying digits on the MNIST dataset, where each example is assigned one of 10 possible labels (i.e., the digit 0, 1, ..., or 9).

To extend the logistic regression framework, which outputs only a single probability value, we consider a hypothesis that outputs n values (summing to 1) representing the predicted probability distribution over the classes. Formally, consider a classification problem with m k-dimensional inputs $x^{(1)}, x^{(2)}, \ldots, x^{(m)}$ and corresponding class labels $y^{(1)}, y^{(2)}, \ldots, y^{(m)}$, where $y^{(i)} \in \{1, 2, \ldots, n\}$, with n being the number of classes.

Our hypothesis h(x) returns a vector of probabilities, such that

\begin{align} h(x^{(i)}) = \begin{bmatrix} P(y^{(i)} = 1 | x^{(i)}) \\ P(y^{(i)} = 2 | x^{(i)}) \\ \vdots \\ P(y^{(i)} = n | x^{(i)}) \end{bmatrix} = \frac{1}{ \sum_{j=1}^{n}{e^{ \theta_j^T x^{(i)} }} } \begin{bmatrix} e^{ \theta_1^T x^{(i)} } \\ e^{ \theta_2^T x^{(i)} } \\ \vdots \\ e^{ \theta_n^T x^{(i)} } \\ \end{bmatrix} \end{align}

where $\theta_1, \theta_2, \ldots, \theta_n$ are each k-dimensional column vectors that constitute the parameters of our hypothesis. Notice that $\frac{1}{ \sum_{j=1}^{n}{e^{ \theta_j^T x^{(i)} }} }$ normalizes the distribution so that it sums to one.

Strictly speaking, we only need n − 1 parameter vectors for n classes, but for convenience, we use n parameter vectors in our derivation.
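As a concrete illustration, the hypothesis above can be sketched in NumPy. The function name, the toy numbers, and the max-subtraction trick for numerical stability are our own choices, not part of the derivation:

```python
import numpy as np

def softmax_hypothesis(theta, x):
    """Return the vector of class probabilities h(x) for one input x.

    theta: (n, k) array, one k-dimensional parameter vector per class.
    x:     (k,) input vector.
    """
    logits = theta @ x                    # theta_j^T x for each class j
    logits -= logits.max()                # shift for numerical stability (probabilities unchanged)
    exp_logits = np.exp(logits)
    return exp_logits / exp_logits.sum()  # normalize so the entries sum to 1

# toy example: n = 3 classes, k = 4 input dimensions (hypothetical values)
theta = np.array([[ 0.5, -0.2,  0.1, 0.0],
                  [ 0.1,  0.3, -0.4, 0.2],
                  [-0.3,  0.1,  0.2, 0.1]])
x = np.array([1.0, 2.0, -1.0, 0.5])
p = softmax_hypothesis(theta, x)
```

Subtracting the maximum logit before exponentiating leaves the output unchanged but avoids overflow for large $\theta_j^T x$.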

Now, this hypothesis defines a predicted probability distribution given some x: $P(y^{(i)} | x^{(i)}) = h(x^{(i)})$. Thus, to train the model, a natural choice is to maximize the (conditional) log-likelihood of the data, $\ell(\theta; x, y) = \sum_{i=1}^{m} \ln { P(y^{(i)} | x^{(i)}) }$.

Motivation: One reason for selecting this form of hypotheses comes from connections to linear discriminant analysis. In particular, if one assumes a generative model for the data in the form $p(x,y) = p(y) \times p(x | y)$ and selects for p(x | y) a member of the exponential family (which includes Gaussians, Poissons, etc.) it is possible to show that the conditional probability p(y | x) has the same form as our chosen hypotheses h(x). For more details, see the CS 229 Lecture 2 Notes.

## Optimizing Softmax Regression

Expanding the log-likelihood expression, we find that:

\begin{align} \ell(\theta) &= \ln L(\theta; x, y) \\ &= \ln \prod_{i=1}^{m}{ P(y^{(i)} | x^{(i)}) } \\ &= \sum_{i=1}^{m}{ \ln \frac{ e^{ \theta^T_{y^{(i)}} x^{(i)} } }{ \sum_{j=1}^{n}{e^{ \theta_j^T x^{(i)} }} } } \\ &= \sum_{i=1}^{m}{ \left( \theta^T_{y^{(i)}} x^{(i)} - \ln \sum_{j=1}^{n}{e^{ \theta_j^T x^{(i)} }} \right) } \end{align}

Unfortunately, there is no closed-form solution to this optimization problem (although it is concave), so we usually use an off-the-shelf optimization method (e.g., L-BFGS, stochastic gradient descent) to find the optimal parameters. Using these optimization methods requires computing the gradient of $\ell(\theta)$ w.r.t. $\theta_k$, which can be derived as follows:

\begin{align} \frac{\partial \ell(\theta)}{\partial \theta_k} &= \frac{\partial}{\partial \theta_k} \sum_{i=1}^{m}{ \left( \theta^T_{y^{(i)}} x^{(i)} - \ln \sum_{j=1}^{n}{e^{ \theta_j^T x^{(i)} }} \right) } \\ &= \sum_{i=1}^{m}{ \left( I_{ \{ y^{(i)} = k\} } x^{(i)} - \frac{ e^{ \theta_k^T x^{(i)} } }{ \sum_{j=1}^{n}{e^{ \theta_j^T x^{(i)} }} } \cdot x^{(i)} \right) } \qquad \text{(where } I_{ \{ y^{(i)} = k\} } \text{ is 1 when } y^{(i)} = k \text{ and 0 otherwise)} \\ &= \sum_{i=1}^{m}{ x^{(i)} \left( I_{ \{ y^{(i)} = k\} } - P(y^{(i)} = k | x^{(i)}) \right) } \end{align}

With this, we can now find a set of parameters that maximizes $\ell(\theta)$, for instance by using L-BFGS with minFunc.
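To sanity-check the derivation, one can compare the analytic gradient against a finite-difference approximation of $\ell(\theta)$. A minimal sketch in NumPy (function names and toy data are illustrative; labels are 0-indexed, unlike the 1-indexed classes in the text):

```python
import numpy as np

def log_likelihood(theta, X, y):
    """l(theta) for theta: (n, k), X: (m, k), y: (m,) with labels in 0..n-1."""
    logits = X @ theta.T                                   # (m, n): theta_j^T x^{(i)}
    log_norm = np.log(np.exp(logits).sum(axis=1))          # ln sum_j exp(theta_j^T x^{(i)})
    return np.sum(logits[np.arange(len(y)), y] - log_norm)

def gradient(theta, X, y):
    """(n, k) array whose k-th row is sum_i x^{(i)} (I{y^{(i)}=k} - P(y^{(i)}=k|x^{(i)}))."""
    logits = X @ theta.T
    P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    I = np.zeros_like(P)
    I[np.arange(len(y)), y] = 1.0                          # indicator I{y^{(i)} = k}
    return (I - P).T @ X

rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 4))
X = rng.normal(size=(5, 4))
y = np.array([0, 2, 1, 1, 0])

# forward-difference estimate of one gradient entry
eps = 1e-6
theta_p = theta.copy()
theta_p[1, 2] += eps
numeric = (log_likelihood(theta_p, X, y) - log_likelihood(theta, X, y)) / eps
```

The numeric estimate should agree with `gradient(theta, X, y)[1, 2]` to several decimal places.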

### Weight Regularization

When using softmax regression in practice, it is important to use weight regularization. In particular, if there exists a linear separator that perfectly classifies all the data points, then the softmax objective is unbounded (given any θ that separates the data perfectly, one can always scale θ to be larger and obtain a better objective value). With weight regularization, one penalizes large weights and thus avoids these degenerate situations.

Weight regularization is also important as it often results in models that generalize better. In particular, one can view weight regularization as placing a (Gaussian) prior on θ so as to prefer θ with smaller values.

In practice, we often use an L2 regularization penalty on the weights, where we penalize the squared value of each element of θ. Formally, we use:

\begin{align} w(\theta) = \frac{\lambda}{2} \sum_{i}{ \sum_{j}{ \theta_{ij}^2 } } \end{align}

This regularization term is added to the negative log-likelihood to give a cost function, J(θ), which we want to minimize (minimizing the negative log-likelihood corresponds to maximizing the log-likelihood):

\begin{align} J(\theta) = -\ell(\theta) + \frac{\lambda}{2} \sum_{i}{ \sum_{j}{ \theta_{ij}^2 } } \end{align}

The gradient of the cost function must then account for both the negated log-likelihood and the weight decay term:

\begin{align} \frac{\partial J(\theta)}{\partial \theta_k} &= - \sum_{i=1}^{m}{ x^{(i)} \left( I_{ \{ y^{(i)} = k\} } - P(y^{(i)} = k | x^{(i)}) \right) } + \lambda \theta_k \end{align}

Minimizing J(θ) now performs regularized softmax regression.
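The regularized cost and its gradient can be sketched together as one function, in the form expected by gradient-based minimizers. Names, toy data, and the λ value below are illustrative; labels are 0-indexed:

```python
import numpy as np

def cost_and_grad(theta, X, y, lam):
    """J(theta) = -l(theta) + (lam/2) * sum(theta**2), and its gradient.

    theta: (n, k); X: (m, k); y: (m,) labels in 0..n-1; lam: weight decay strength.
    """
    m = len(y)
    logits = X @ theta.T                                   # (m, n)
    logits -= logits.max(axis=1, keepdims=True)            # stabilize exponentials
    P = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    J = -np.sum(np.log(P[np.arange(m), y])) + 0.5 * lam * np.sum(theta ** 2)
    I = np.zeros_like(P)
    I[np.arange(m), y] = 1.0                               # indicator I{y^{(i)} = k}
    grad = -(I - P).T @ X + lam * theta                    # minus sign from the -l(theta) term
    return J, grad

rng = np.random.default_rng(0)
theta = rng.normal(size=(3, 4))
X = rng.normal(size=(6, 4))
y = np.array([0, 2, 1, 1, 0, 2])
J, grad = cost_and_grad(theta, X, y, lam=0.1)
```

A routine such as SciPy's L-BFGS can then minimize `cost_and_grad` directly, analogous to using minFunc in Matlab.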

## Parameterization

We noted earlier that we actually only need n − 1 parameters to model n classes. To see why this is so, consider our hypothesis again:

\begin{align} h(x^{(i)}) &= \frac{1}{ \sum_{j=1}^{n}{e^{ \theta_j^T x^{(i)} }} } \begin{bmatrix} e^{ \theta_1^T x^{(i)} } \\ e^{ \theta_2^T x^{(i)} } \\ \vdots \\ e^{ \theta_n^T x^{(i)} } \\ \end{bmatrix} \\ &= \frac{e^{ \theta_n^T x^{(i)} } }{ \sum_{j=1}^{n}{e^{ \theta_j^T x^{(i)} }} } \cdot \frac{1}{e^{ \theta_n^T x^{(i)} } } \begin{bmatrix} e^{ \theta_1^T x^{(i)} } \\ e^{ \theta_2^T x^{(i)} } \\ \vdots \\ e^{ \theta_n^T x^{(i)} } \\ \end{bmatrix} \\ &= \frac{1}{ \sum_{j=1}^{n}{e^{ (\theta_j^T - \theta_n^T) x^{(i)} }} } \begin{bmatrix} e^{ (\theta_1^T - \theta_n^T) x^{(i)} } \\ e^{ (\theta_2^T - \theta_n^T) x^{(i)} } \\ \vdots \\ e^{ (\theta_n^T - \theta_n^T) x^{(i)} } \\ \end{bmatrix} \\ \end{align}

Letting $\Theta_j = \theta_j - \theta_n$ for $j = 1, 2, \ldots, n - 1$ (so that $\Theta_n = 0$) gives

\begin{align} h(x^{(i)}) &= \frac{1}{ 1 + \sum_{j=1}^{n-1}{e^{ \Theta_j^T x^{(i)} }} } \begin{bmatrix} e^{ \Theta_1^T x^{(i)} } \\ e^{ \Theta_2^T x^{(i)} } \\ \vdots \\ 1 \\ \end{bmatrix} \\ \end{align}

This shows that only n − 1 parameter vectors are required.

In practice, however, it is often easier to implement the over-parameterized version, although both parameterizations lead to the same classifier.
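This equivalence is easy to check numerically: subtracting $\theta_n$ from every parameter vector leaves the predicted probabilities unchanged. A small sketch (the helper name and random data are illustrative):

```python
import numpy as np

def softmax_probs(theta, x):
    """Predicted class probabilities for one input x, given theta of shape (n, k)."""
    z = theta @ x
    z -= z.max()                 # numerical stability; does not change the result
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(1)
theta = rng.normal(size=(3, 4))  # over-parameterized: n = 3 parameter vectors
x = rng.normal(size=4)

# subtract theta_n from every row; the last row becomes zero
theta_reduced = theta - theta[-1]

p_full = softmax_probs(theta, x)
p_reduced = softmax_probs(theta_reduced, x)
```

`p_full` and `p_reduced` agree to machine precision, reflecting that the hypothesis is invariant to adding a constant vector to every $\theta_j$.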

### Binary Logistic Regression

In the special case where n = 2, one can also show that softmax regression reduces to logistic regression:

\begin{align} h(x^{(i)}) &= \frac{1}{ 1 + e^{ \Theta_1^T x^{(i)} } } \begin{bmatrix} e^{ \Theta_1^T x^{(i)} } \\ 1 \\ \end{bmatrix} \\ &= \frac{e^{ \Theta_1^T x^{(i)} } }{ 1 + e^{ \Theta_1^T x^{(i)} } } \cdot \frac{1}{e^{ \Theta_1^T x^{(i)} } } \begin{bmatrix} e^{ \Theta_1^T x^{(i)} } \\ 1 \\ \end{bmatrix} \\ &= \frac{1}{ e^{ -\Theta_1^T x^{(i)} } + 1 } \begin{bmatrix} 1 \\ e^{ -\Theta_1^T x^{(i)} } \\ \end{bmatrix} \\ \end{align}
Notice that the first entry of this vector is exactly the logistic hypothesis $h_\theta(x) = \frac{1}{1+\exp(-\Theta_1^T x^{(i)})}$, so the two-class softmax model recovers binary logistic regression.
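This reduction can also be verified numerically: a two-class softmax with parameters $(\Theta_1, 0)$ produces the same probability as the logistic sigmoid. The random data below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
Theta1 = rng.normal(size=4)
x = rng.normal(size=4)

# two-class softmax with parameter vectors Theta1 and 0
z = np.array([Theta1 @ x, 0.0])
p = np.exp(z) / np.exp(z).sum()

# binary logistic regression hypothesis
sigmoid = 1.0 / (1.0 + np.exp(-(Theta1 @ x)))
```

Here `p[0]` matches `sigmoid` and `p[1]` matches `1 - sigmoid`, as the derivation above predicts.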