Regularization and Dropout
Resources
Dropout as Regularization
Introduction
One of the important challenges in the use of neural networks is generalization. Since neural networks have a huge hypothesis space, maximum likelihood estimation of the parameters almost always suffers from over-fitting. The most popular workaround to this problem is dropout [1]. Though it is clear that dropout causes the network to fit the training data less closely, it is not at all clear what mechanism lies behind the method and how it is linked to classical techniques such as L-2 norm regularization and the Lasso. To address this theoretical question, Wager et al. [2] present a new view of dropout when applied to Generalized Linear Models. They view dropout as a process that artificially corrupts the data with multiplicative Bernoulli noise, and they prove that dropout applied to a Generalized Linear Model amounts, up to a quadratic approximation, to adding an adaptive L-2 norm regularization term (penalty term) to the negative log likelihood. It turns out that this penalty term favors rare but discriminative features.
Background
Generalized Linear Models and Exponential Family Distributions
The exponential family of distributions is defined as follows, in canonical form with natural parameter $\theta$, sufficient statistic $T$, and log-partition function $A$:
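$$p(y \mid \theta) = h(y)\,\exp\!\left(\theta\, T(y) - A(\theta)\right)$$

(The response variable is written as $y$ here, and $h$ is the base measure.)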
The Generalized Linear Model assumption is to set $\theta = \beta^T x$, where $x$ is an explanatory variable; $\theta$ is called the canonical parameter. The function $A$ is a normalizer, and its derivatives give the moments of the sufficient statistic.
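In particular, the standard exponential-family identities (stated here in the notation above) are

$$A'(\theta) = \mathbb{E}\!\left[T(y)\right], \qquad A''(\theta) = \mathrm{Var}\!\left(T(y)\right).$$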
You can prove this, and it will come in handy later. For those of you who are interested in more details, see Andrew Ng's Notes. Assuming that we have independent samples in the training set, the negative log likelihood decomposes into a sum of per-example losses, and maximum likelihood estimation minimizes this sum over $\beta$.
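Written out in the GLM notation above (and dropping the $\log h(y_i)$ terms, which do not depend on $\beta$), this is

$$-\log L(\beta) = \sum_{i=1}^{n} \ell_i(\beta), \qquad \ell_i(\beta) = A(\beta^T x_i) - T(y_i)\,\beta^T x_i,$$

and the maximum likelihood estimate is $\hat{\beta} = \arg\min_\beta \sum_{i=1}^{n} \ell_i(\beta)$.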
Robustness and Constrained Optimization
It is a terrible idea to fit an $n$th-degree polynomial to $n$ data points. It does a perfect job on the given data points, but it fails to generalize to new data points: this is over-fitting. Putting it in more statistical language, the parameter estimate varies too much with the particular set of realizations we draw from the true distribution. This insight encourages us to constrain the possible values that the parameters can take, so that the estimate does not vary so much with the realizations we happen to observe. L-1 and L-2 norm constraints on the estimated parameter are both common choices for the constraint. This is the trade-off between the bias and the variance of a model. For instance, instead of the ordinary linear regression problem, we can pose an optimization problem constrained by an L-2 norm bound on the parameters.
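A sketch of that constrained formulation (writing $s$ for the bound, as referenced below):

$$\min_{\beta} \; \sum_{i=1}^{n} \left(y_i - \beta^T x_i\right)^2 \quad \text{subject to} \quad \|\beta\|_2^2 \le s.$$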
However, by strong duality of this convex problem, we can instead solve the equivalent penalized (Lagrangian) form, which is exactly ridge regression:
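$$\min_{\beta} \; \sum_{i=1}^{n} \left(y_i - \beta^T x_i\right)^2 + \lambda \|\beta\|_2^2,$$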
where there is a one-to-one relationship between $\lambda$ and $s$.
Adaptive Regularization and Dropout
Vanilla regularization schemes, such as the Lasso and Ridge Regression, penalize big parameters uniformly. Namely, the constraint depends only on the parameter itself. However, it can sometimes be useful to prefer some features over others. For instance, when recognizing a handwritten digit in the MNIST dataset, we might want to look at rare features that are specific to each digit. In such cases, adaptive regularization comes into play. Wager et al. [2] prove that dropout works exactly as such an adaptive regularizer. First, let's think of dropout training as a process that artificially corrupts the data with noise.
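Concretely, with dropout probability $\delta$, each feature of each example is independently zeroed out, and the surviving features are rescaled so that the corrupted features are unbiased, $\mathbb{E}[\tilde{x}_i] = x_i$ (the symbols $\tilde{x}$, $\xi$, and $\delta$ are my notation):

$$\tilde{x}_{ij} = \frac{\xi_{ij}}{1-\delta}\, x_{ij}, \qquad \xi_{ij} \overset{\text{i.i.d.}}{\sim} \mathrm{Bernoulli}(1-\delta).$$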
Then, averaged over the dropout noise, the negative log likelihood (the cost function) becomes the original cost plus a penalty term $R(\beta)$.
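A sketch of this decomposition, which uses $\mathbb{E}_{\xi}[\tilde{x}_i] = x_i$:

$$\sum_{i=1}^{n} \mathbb{E}_{\xi}\!\left[\ell_i(\beta;\tilde{x}_i)\right] = \sum_{i=1}^{n} \ell_i(\beta) + R(\beta), \qquad R(\beta) = \sum_{i=1}^{n} \left( \mathbb{E}_{\xi}\!\left[A(\beta^T \tilde{x}_i)\right] - A(\beta^T x_i) \right),$$

where $\ell_i(\beta;\tilde{x}_i) = A(\beta^T \tilde{x}_i) - T(y_i)\,\beta^T \tilde{x}_i$ is the per-example loss evaluated on the corrupted features. Note that $R(\beta) \ge 0$ by Jensen's inequality, since $A$ is convex.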
We can approximate the penalty term $R$ up to a quadratic (second-order) Taylor expansion of $A$ around $\beta^T x_i$.
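Because the noise is unbiased, the linear term of the expansion vanishes in expectation, leaving

$$R(\beta) \approx R^{q}(\beta) = \frac{1}{2} \sum_{i=1}^{n} A''(\beta^T x_i)\, \mathrm{Var}\!\left[\beta^T \tilde{x}_i\right].$$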
Now recall that $A''(\theta) = \mathrm{Var}\!\left(T(y)\right)$, the variance of the sufficient statistic. So for logistic regression (the Bernoulli distribution), with $A(\theta) = \log\!\left(1 + e^{\theta}\right)$, we get
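$$A''(\beta^T x_i) = \sigma(\beta^T x_i)\left(1 - \sigma(\beta^T x_i)\right),$$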
where $\sigma$ denotes the sigmoid function.
Also, for the dropout noise defined above, the variance of the corrupted linear predictor works out to
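$$\mathrm{Var}\!\left[\beta^T \tilde{x}_i\right] = \frac{\delta}{1-\delta} \sum_{j} x_{ij}^2\, \beta_j^2.$$

Putting the two pieces together (the constant in front depends on the exact rescaling convention used for the dropout noise), the quadratic dropout penalty for logistic regression is

$$R^{q}(\beta) = \frac{\delta}{2\,(1-\delta)} \sum_{i=1}^{n} \sigma(\beta^T x_i)\left(1 - \sigma(\beta^T x_i)\right) \sum_{j} x_{ij}^2\, \beta_j^2.$$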
So, intuitively, the penalty is large when the model is not confident about an example ($\sigma(\beta^T x_i)$ close to $1/2$) and the corresponding features fire often (large $x_{ij}^2$). In other words, dropout training favors rare but discriminative features.
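To make the result concrete, here is a minimal numerical sketch (my own illustration, not from the original references) of the dropout-regularized objective for logistic regression, i.e. the plain negative log likelihood plus the quadratic penalty $R^{q}(\beta)$ derived above:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neg_log_likelihood(beta, X, y):
    """Logistic-regression NLL: sum_i [A(x_i . beta) - y_i x_i . beta] with A(t) = log(1 + e^t)."""
    scores = X @ beta
    return np.sum(np.logaddexp(0.0, scores) - y * scores)

def dropout_penalty(beta, X, delta=0.5):
    """Quadratic dropout penalty:
    R^q(beta) = delta / (2 (1 - delta)) * sum_i sigma(x_i . beta) (1 - sigma(x_i . beta))
                * sum_j x_ij^2 beta_j^2
    (assumes the 1/(1 - delta) rescaling of surviving features used above)."""
    p = sigmoid(X @ beta)
    per_example = (X ** 2) @ (beta ** 2)      # sum_j x_ij^2 beta_j^2 for each example i
    return delta / (2.0 * (1.0 - delta)) * np.sum(p * (1.0 - p) * per_example)

# Toy data: 100 examples, 5 features, labels generated from a sparse "true" beta.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, -1.0, 0.0, 0.0, 0.5])
y = (rng.uniform(size=100) < sigmoid(X @ beta_true)).astype(float)

beta = rng.normal(size=5)
plain = neg_log_likelihood(beta, X, y)
regularized = plain + dropout_penalty(beta, X, delta=0.5)
print(plain, regularized)   # the penalty is what dropout training (approximately) adds
```

Minimizing the second quantity instead of the first is, up to the quadratic approximation, what dropout training of a logistic regression does.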