1. Introduction
In this tutorial, we’ll study weight initialization techniques in artificial neural networks and why they’re important.
Initialization strongly influences both the speed and the quality of the optimization achieved by the training process.
2. Basic Notation
To illustrate the discussion, we’ll refer to classic fully connected feed-forward networks, such as the one illustrated in the following figure:
Each unit of the network performs a non-linear transformation $f$ (activation function) of a weighted sum of the outputs $o_i$ of the units of the previous layer to generate its own output $o$:

$$o = f\left(\sum_{i} w_{i}\, o_{i} + b\right)$$

where the $w_i$ are the unit’s incoming weights and $b$ is its bias.
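As a quick illustration (the function name and the numbers below are our own, not taken from any specific library), this is all a single unit computes:

```python
import numpy as np

def unit_output(inputs, weights, bias, activation=np.tanh):
    """Output of a single unit: an activation function applied to the
    weighted sum of the previous layer's outputs, plus a bias."""
    return activation(np.dot(weights, inputs) + bias)

# A unit with three incoming connections
o_prev = np.array([0.2, -0.5, 0.9])   # outputs of the previous layer
w = np.array([0.1, 0.4, -0.3])        # incoming weights
print(unit_output(o_prev, w, bias=0.0))
```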
3. Breaking the Symmetry
We basically have two possible extreme choices for initializing the weights of a neural network: select a single value for all the weights in the network, or generate them randomly within a certain range.
Best practice recommends initializing the weights randomly, with the biases initially set to zero. The reason lies in the need to “break the symmetry”, that is, the need to make each neuron perform a different computation. In conditions of symmetry, training can be severely penalized or even impossible.
Breaking the symmetry has two different aspects, depending on the scale at which we consider the question: from the point of view of the connections of a single network or the point of view of different networks.
3.1. Breaking the Symmetry Within the Units of a Network
If all units of the net have the same initial parameters, then a deterministic learning algorithm applied to a deterministic cost and model will constantly update all of these units in the same way. Let’s see why.
In our article on nonlinear activation functions, we studied the classic learning mechanism based on the Delta rule (gradient descent), which provides a procedure for updating weights based on the presentation of examples.
Assuming for simplicity a network with a single layer that uses linear activation functions, and taking as a measure of the goodness of the prediction the quadratic error between the network output $y$ and the target $t$, computed over the $N$ records of a dataset of measured data:

$$E = \frac{1}{2} \sum_{k=1}^{N} \left(t_{k} - y_{k}\right)^{2}$$
the Delta rule provides the following expression for updating the weights:

$$\Delta w_{i} = \eta \, (t - y) \, x_{i}$$
where $\eta$ is the learning rate and $x_{i}$ is the $i$-th input of the unit.
Suppose we initialize all weights in the network with the same value. Then, regardless of the functional form chosen for the activation function, the difference $(t - y)$ will be identical for all units, and the new set of weights will all have the same numerical value.
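We can verify this with a small NumPy sketch (a toy one-hidden-layer network of our own, trained on a single example): with a symmetric start, the rows of the hidden weight matrix stay identical after any number of gradient steps, while a random start lets the units differentiate.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_step(W1, W2, x, t, lr=0.1):
    """One gradient-descent step on a tiny one-hidden-layer network
    with tanh hidden units, a linear output, and squared-error loss."""
    h = np.tanh(W1 @ x)                            # hidden activations
    y = W2 @ h                                     # network output
    e = y - t                                      # output error
    dW2 = np.outer(e, h)                           # gradient w.r.t. W2
    dW1 = np.outer((W2.T @ e) * (1 - h**2), x)     # gradient w.r.t. W1
    return W1 - lr * dW1, W2 - lr * dW2

x, t = np.array([0.5, -1.0]), np.array([1.0])

# Symmetric start: every hidden unit has the same incoming weights
W1, W2 = np.full((3, 2), 0.5), np.full((1, 3), 0.5)
for _ in range(10):
    W1, W2 = train_step(W1, W2, x, t)
print(W1)  # all three rows are still identical: the units never differentiate

# Random start: the symmetry is broken
W1, W2 = rng.uniform(-0.5, 0.5, (3, 2)), rng.uniform(-0.5, 0.5, (1, 3))
for _ in range(10):
    W1, W2 = train_step(W1, W2, x, t)
print(W1)  # rows differ: each hidden unit performs a different computation
```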
We can think of this “symmetrical situation” as a constraint. Practice shows that it is harmful and does not allow for optimal training.
3.2. Breaking the Symmetry in Different Networks
Identifying the optimal neural network for a problem requires a test campaign, trying different structures and parameterizations to identify the network that generates the smallest error. The procedure can be automated with, for example, a genetic algorithm, which proposes different solutions and puts them in competition.
Let’s suppose instead that we carry out different trials using the same structure and the same parameters and weights of the network. In this case, all networks will have the same starting point in the error space of the problem.
As we saw in the previous section, many training algorithms follow the gradient of the error with respect to the weights. Starting from the same point means that the gradient’s direction will be the same or very similar between different trials, and the weights will then be updated in the same way.
This is another aspect of a symmetrical situation. Choosing different initial weights allows us to explore the error space in different ways and increases the probability of finding optimal solutions.
4. Random Initialization
We understood from the previous sections the need to initialize the weights randomly, but within what interval? The answer largely depends on the activation functions that our neural network uses.
Let’s consider the hyperbolic tangent, $\tanh(x)$, as an example:
The curve flattens out for extreme values of the argument: when $x$ is too large or too small, variations of $x$ produce only tiny variations of $\tanh(x)$, and therefore very small gradients (the vanishing gradient problem).
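A quick numerical check (our own, using the identity $\tanh'(x) = 1 - \tanh^{2}(x)$) makes the saturation visible:

```python
import numpy as np

def tanh_grad(x):
    """Derivative of tanh: 1 - tanh(x)^2."""
    return 1.0 - np.tanh(x) ** 2

# The gradient is healthy near zero but practically vanishes for large arguments
for x in (0.0, 0.5, 2.0, 5.0, 10.0):
    print(f"x = {x:5.1f}  ->  tanh'(x) = {tanh_grad(x):.8f}")
```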
This fact gives us a criterion for the initialization range of the weights: it should keep the weighted sums in the intermediate, quasi-linear region of the curve. Some authors recommend wider intervals, others much narrower ones; if we use the logistic activation function or the hyperbolic tangent, a small symmetric range centered on zero is adequate for most uses.
5. Advanced Random Initialization Techniques
The random initialization illustrated in the previous section assumes that, within the selected range, all weight values are equally probable. This is equivalent to generating the weights according to a uniform distribution.
Other probability laws can be used, such as the Gaussian distribution. In this case, the weights are not confined to an interval but are normally distributed around zero with a certain variance.
The techniques illustrated below give an estimate of these limits of variability: the half-width $r$ of the interval $[-r, r]$ for the uniform distribution, and the standard deviation $\sigma$ for the Gaussian one.
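In code, the two basic recipes look as follows ($r$ and $\sigma$ play exactly the role described above; the function names and the example values are ours):

```python
import numpy as np

rng = np.random.default_rng(42)

def init_uniform(shape, r):
    """Weights drawn uniformly from the interval [-r, r]."""
    return rng.uniform(-r, r, size=shape)

def init_normal(shape, sigma):
    """Weights drawn from a zero-mean Gaussian with standard deviation sigma."""
    return rng.normal(0.0, sigma, size=shape)

W_uniform = init_uniform((256, 128), r=0.05)
W_gauss = init_normal((256, 128), sigma=0.05)
print(W_uniform.std(), W_gauss.std())  # same order of magnitude, different distributions
```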
5.1. Xavier-Bengio Initialization
Xavier-Bengio initialization, also known as Xavier initialization or Glorot initialization (after Xavier Glorot and Yoshua Bengio), can be used for the logistic activation function and the hyperbolic tangent. The authors derived it under the simplifying assumption of linear activation functions.
The logic of Xavier’s initialization method is to keep the variance of each layer’s outputs equal to the variance of its inputs, in order to avoid the vanishing gradient problem and other aberrations.
We’ll call $r$ the half-width of the variability interval for weights following a uniform distribution (interval $[-r, r]$) and $\sigma$ the standard deviation in the case of a normal distribution with zero mean.
For the logistic function, Young-Man Kwon, Yong-woo Kwon, Dong-Keun Chung, and Myung-Jae Lim give the expressions:

$$\mathbf{W}\in\mathcal{U}\left[-\sqrt{\frac{6}{n_{i}+n_{o}}},\sqrt{\frac{6}{n_{i}+n_{o}}}\right],\quad \mathbf{W}\sim\mathcal{N}\left(0,\sqrt{\frac{6}{n_{i}+n_{o}}}\right)$$
where $\mathbf{W}$ is the weight matrix, and $n_{i}$ and $n_{o}$ are the number of input and output weight connections for a given network layer, also called fan-in and fan-out in the technical literature.
For the hyperbolic tangent, we have:
$$\mathbf{W}\in\mathcal{U}\left[-\sqrt[4]{\frac{6}{n_{i}+n_{o}}},\sqrt[4]{\frac{6}{n_{i}+n_{o}}}\right],\quad \mathbf{W}\sim\mathcal{N}\left(0,\sqrt[4]{\frac{6}{n_{i}+n_{o}}}\right)$$
Note that the $r$ and $\sigma$ parameters act as scale parameters applied to a specific probability distribution.
However, other expressions are more common in the technical literature. In particular, for the normal distribution:

$$\mathbf{W}\sim\mathcal{N}\left(0,\sqrt{\frac{2}{n_{i}+n_{o}}}\right)$$
with a variant given by:

$$\mathbf{W}\sim\mathcal{N}\left(0,\sqrt{\frac{1}{n_{i}}}\right)$$
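A minimal NumPy sketch of the two most commonly used Glorot recipes, assuming the uniform limit $\sqrt{6/(n_i+n_o)}$ and the normal standard deviation $\sqrt{2/(n_i+n_o)}$ discussed above (the function names are ours; Keras exposes equivalent glorot_uniform and glorot_normal initializers):

```python
import numpy as np

rng = np.random.default_rng(0)

def glorot_uniform(n_in, n_out):
    """Xavier/Glorot uniform: limit = sqrt(6 / (n_in + n_out))."""
    limit = np.sqrt(6.0 / (n_in + n_out))
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def glorot_normal(n_in, n_out):
    """Xavier/Glorot normal: std = sqrt(2 / (n_in + n_out))."""
    std = np.sqrt(2.0 / (n_in + n_out))
    return rng.normal(0.0, std, size=(n_in, n_out))

W = glorot_uniform(784, 256)
print(W.min(), W.max())  # weights stay within +/- sqrt(6 / 1040) ~ 0.076
```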
5.2. He Initialization
Also called Kaiming initialization, this method is named after the first author of a famous paper by Kaiming He et al., published in 2015. It is similar to Xavier’s initialization, except that it uses a different scaling factor for the weights.
He et al. derived their initialization method by explicitly modeling the non-linearity of ReLUs, which, if not accounted for, makes extremely deep models (more than 30 layers) difficult to converge. The method is therefore associated with these activation functions.
Young-Man Kwon, Yong-woo Kwon, Dong-Keun Chung, and Myung-Jae Lim give the expressions:

$$\mathbf{W}\in\mathcal{U}\left[-\sqrt{\frac{6}{n_{i}}},\sqrt{\frac{6}{n_{i}}}\right],\quad \mathbf{W}\sim\mathcal{N}\left(0,\sqrt{\frac{6}{n_{i}}}\right)$$
Here, too, it is more common to use the following expression, suitable for the normal distribution:

$$\mathbf{W}\sim\mathcal{N}\left(0,\sqrt{\frac{2}{n_{i}}}\right)$$
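As a sketch, assuming the expressions above (limit $\sqrt{6/n_i}$ in the uniform case and standard deviation $\sqrt{2/n_i}$ in the normal one; the function names are ours, Keras offers the analogous he_uniform and he_normal):

```python
import numpy as np

rng = np.random.default_rng(0)

def he_uniform(n_in, n_out):
    """He/Kaiming uniform: limit = sqrt(6 / n_in)."""
    limit = np.sqrt(6.0 / n_in)
    return rng.uniform(-limit, limit, size=(n_in, n_out))

def he_normal(n_in, n_out):
    """He/Kaiming normal: std = sqrt(2 / n_in), suited to ReLU layers."""
    return rng.normal(0.0, np.sqrt(2.0 / n_in), size=(n_in, n_out))

W = he_normal(512, 512)
print(W.std())  # close to sqrt(2 / 512) ~ 0.0625
```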
There are solid theoretical justifications for this technique. Given that a proper initialization method should avoid reducing or magnifying the magnitudes of input signals exponentially (vanishing gradient problem), He et al. arrived in their work at the following condition to avoid this type of aberration:

$$\frac{1}{2}\, n_{l}\, \mathrm{Var}\left[w_{l}\right] = 1, \quad \forall l$$

where $w_{l}$ denotes the weights of layer $l$ and $n_{l}$ its number of incoming connections. This condition leads to the expression given above.
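We can check this condition empirically with a toy experiment of our own: propagating a random signal through a deep stack of ReLU layers, the He-initialized stack preserves the signal’s magnitude, while a naive small initialization makes it collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(0.0, z)

n, depth = 512, 40
x = rng.normal(size=(1000, n))          # a batch of random input signals
signal_he, signal_naive = x.copy(), x.copy()

for _ in range(depth):
    W_he = rng.normal(0.0, np.sqrt(2.0 / n), size=(n, n))   # satisfies He's condition
    W_naive = rng.normal(0.0, 0.01, size=(n, n))            # arbitrary small weights
    signal_he = relu(signal_he @ W_he)
    signal_naive = relu(signal_naive @ W_naive)

print(signal_he.std())     # stays on the order of the input scale
print(signal_naive.std())  # shrinks towards zero layer after layer
```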
It is still possible to obtain a more general expression, given by:

$$\mathbf{W}\sim\mathcal{N}\left(0,\sqrt{\frac{2}{\left(1+a^{2}\right)\, n_{i}}}\right)$$
where $a$ is the negative slope of the rectifier used after the current layer; $a = 0$ for ReLU by default, which leads back to the expression given above.
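For example, as a small helper of our own (PyTorch’s Kaiming initializers expose the same negative-slope parameter $a$):

```python
import numpy as np

def he_std(n_in, a=0.0):
    """Standard deviation of the generalized He initialization:
    sqrt(2 / ((1 + a^2) * n_in)), where a is the negative slope
    of the rectifier (a = 0 recovers the plain ReLU expression)."""
    return np.sqrt(2.0 / ((1.0 + a ** 2) * n_in))

print(he_std(512))          # ReLU: 0.0625
print(he_std(512, a=0.1))   # leaky ReLU with slope 0.1: slightly smaller
```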
6. Other Forms of Initialization
Many other methods have been proposed. Scientific packages make many of these techniques available. For example, Keras has the following possibilities:
- Zeros: initialization to 0
- Ones: initialization to 1
- Constant: initialization to a constant value
- RandomNormal: initialization with a normal distribution
- RandomUniform: initialization with a uniform distribution
- TruncatedNormal: initialization with a truncated normal distribution
- VarianceScaling: initialization capable of adapting its scale to the shape of weights
- Orthogonal: initialization that generates a random orthogonal matrix
- Identity: initialization that generates the identity matrix
- lecun_uniform: LeCun uniform initializer
- glorot_normal: Xavier normal initializer
- glorot_uniform: Xavier uniform initializer
- he_normal: He normal initializer
- lecun_normal: LeCun normal initializer
- he_uniform: He uniform variance scaling initializer
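As a usage illustration (the layer sizes and activations below are arbitrary choices of ours), initializers can be passed to Keras layers either by their string name or as initializer objects:

```python
from tensorflow import keras

# Initializers can be given by name or as configured objects
model = keras.Sequential([
    keras.Input(shape=(784,)),
    keras.layers.Dense(256, activation="tanh",
                       kernel_initializer="glorot_uniform",  # Xavier uniform
                       bias_initializer="zeros"),
    keras.layers.Dense(128, activation="relu",
                       kernel_initializer=keras.initializers.HeNormal()),
    keras.layers.Dense(10, activation="softmax",
                       kernel_initializer=keras.initializers.RandomUniform(
                           minval=-0.05, maxval=0.05)),
])
model.summary()
```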
7. Conclusion
In this article, we gave an overview of some weight initialization techniques for neural networks. What might seem a secondary topic actually affects the quality of the results and the speed of convergence of the training process.
All these techniques have solid theoretical justifications and are aimed at mitigating or solving highly studied technical problems, such as the vanishing gradient problem.