1. Introduction
In this tutorial, we’ll study the Information Bottleneck Principle (IB). This principle allows for a qualitative understanding, and provides quantitative evidence, of how a deep neural network (DNN) works internally. The result is a bound that can be used as a guiding principle in the training of a DNN.
IB is directly related to another principle that we can consider of an even more qualitative nature: the principle of Minimum Mutual Information (MinMI).
2. MinMI Principle
The basic problem in any predictive system based on neural networks is the identification of an unknown function that realizes an optimal mapping between the input ($X$) and the output ($Y$) of a dataset. The training process consists of the identification of a set of internal parameters of the neural network that allow reaching this optimum. In what follows, we’ll denote by $T_i$ each intermediate or hidden layer of the network.
What happens in the hidden layers of the network, and why this process works so well, is largely unknown. This is the reason why DNNs are called black-box models.
The MinMI principle, or Minimum Information principle, has been applied in the context of neural coding. It considers a basic quantity of interest, relevant for the identification of the optimum: the mutual information $I(X;Y)$ between input and output, which we can define for the discrete and continuous cases, respectively, as:

$$I(X;Y) = \sum_{x \in \mathcal{X}} \sum_{y \in \mathcal{Y}} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}$$

$$I(X;Y) = \int_{\mathcal{X}} \int_{\mathcal{Y}} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)} \, dx \, dy$$
where $X$ and $Y$ are random variables, $p(x,y)$ is their joint probability distribution, and $p(x)$ and $p(y)$ are the marginal distributions. Mutual information is always non-negative.
$I(X;Y)$ is a measure of the mutual dependence between the two variables. More specifically, it quantifies the amount of information that we obtain about one random variable by observing the other.
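As a concrete illustration, here is a minimal Python sketch that computes $I(X;Y)$ in bits for a discrete joint distribution given as a matrix; the function name and the toy distribution are our own illustrative choices:

```python
import numpy as np

def mutual_information(p_xy: np.ndarray) -> float:
    """Mutual information I(X;Y) in bits for a discrete joint distribution.

    p_xy[i, j] = p(X = x_i, Y = y_j); entries are non-negative and sum to 1.
    """
    p_x = p_xy.sum(axis=1, keepdims=True)    # marginal p(x)
    p_y = p_xy.sum(axis=0, keepdims=True)    # marginal p(y)
    mask = p_xy > 0                          # convention: 0 * log(0) = 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])))

# A toy joint distribution in which X and Y are clearly dependent
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
print(mutual_information(p_xy))  # ~0.278 bits
```

Using the natural logarithm instead of $\log_2$ would give the result in nats rather than bits; the choice of unit doesn’t affect any of the arguments that follow.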
Consider a set of DNNs consistent with the observations and compatible with the problem, each characterized by a set of internal parameters that are the subject of the training procedure. MinMI establishes that the optimal structure is the one with the minimum mutual information.
2.1. MinMI Principle. Why?
The question in the title of this section is justified. At first glance, since $I(X;Y)$ is a measure of the dependence between $X$ and $Y$, we might expect to maximize the mutual information, not minimize it.
However, this is not the case. Of all the possible DNNs that we can build compatible with the problem, most realize a mapping between inputs and outputs that contains superstructures in addition to the real relationship in the data. Effects such as noise and collinearity are obstacles to achieving the optimum.
This fact can be further clarified if we refer to the dimensionality of the dataset. Normally, the input has high dimensionality while the output has low dimensionality. This means that, in general, most of the entropy of $X$ is not very informative about $Y$. The relevant features of $X$ are highly distributed and difficult to extract.
These aberrations contribute to $I(X;Y)$: being forms of information (albeit undesired) that bind $X$ and $Y$, they increase its value. Minimizing the mutual information therefore brings our neural network closer to the mapping that contains only the relevant information needed to build an efficient predictive system, that is, the true relationship present in the data.
2.2. Compression
One way of putting these considerations into practice, which also explains the efficiency of DNNs and similar predictive systems, is to put the network in a condition where it has to compress.
Using non-formal language, suppose that the structure of a DNN is, in its hidden layers, somehow “neuron deficient”. Under these conditions, it is generally not possible to transmit all the information present in the data from one hidden layer to the next. The training process leads the DNN to seek a compromise, which is expressed as a compression of the original information.
Compression means loss of information, but it is a “controlled” loss: the control parameter is, in general, a measure of the deviation of the prediction from the measured data, which exerts continuous pressure during the training process. The final result is a decrease in the value of $I(X;Y)$, now realized by a structure that is closer to the true relationship present in the data than the one before compression and in which many of the superstructures have been eliminated. Repeating this process across all hidden layers further calibrates the whole process.
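To make the “neuron deficient” idea tangible, here is a minimal PyTorch sketch of such an architecture. The layer sizes are arbitrary assumptions, chosen only so that the hidden layers are much narrower than the input and are therefore forced to compress:

```python
import torch
import torch.nn as nn

# Hypothetical sizes: a 784-dimensional input squeezed through narrow hidden layers.
# The narrow layers cannot pass on all of the input's information, so the training
# loss decides which features survive the compression.
bottleneck_dnn = nn.Sequential(
    nn.Linear(784, 64),   # strong compression of the input
    nn.ReLU(),
    nn.Linear(64, 16),    # an even narrower hidden layer
    nn.ReLU(),
    nn.Linear(16, 10),    # output layer, e.g. scores for 10 classes
)

x = torch.randn(32, 784)          # a batch of 32 random inputs
print(bottleneck_dnn(x).shape)    # torch.Size([32, 10])
```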
In this discussion, it is implicitly assumed that there’s, of course, some relationship between $X$ and $Y$. In other words, we know that $I(X;Y) > 0$. If $X$ and $Y$ are independent, then $p(x,y) = p(x)\,p(y)$ and we have:

$$I(X;Y) = 0$$
Under these conditions, there’s no relationship to find, and it is not possible to build a predictive system.
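A few self-contained lines of Python confirm this limiting case: for a factorized joint distribution, $p(x,y) = p(x)\,p(y)$, the mutual information vanishes (the marginals below are arbitrary toy values):

```python
import numpy as np

p_x = np.array([0.2, 0.5, 0.3])
p_y = np.array([0.6, 0.4])
p_xy = np.outer(p_x, p_y)             # independence: p(x, y) = p(x) p(y)

ratio = p_xy / np.outer(p_x, p_y)     # equals 1 everywhere
print(np.sum(p_xy * np.log2(ratio)))  # 0.0
```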
The concepts of compression and minimization of $I(X;Y)$ lead us directly to the IB principle.
3. IB Principle
3.1. Data Processing Inequality and Markov Chains
The Data Processing Inequality (DPI) is an information-theoretic concept stating that no processing of data can increase the amount of information it contains. From the point of view of a dataset and of the predictive systems we’re considering, it can be translated as: “post-processing cannot increase information”.
When we have a process in which symbols are chosen according to a set of probabilities, we’re dealing with a stochastic process. When the choice of each symbol depends on the symbols or events chosen immediately before it, we have a Markov process.
If three random variables form a Markov chain, $X \rightarrow Y \rightarrow Z$, then the conditional probability of $Z$ depends only on $Y$ and is conditionally independent of $X$. Under these conditions, no processing of $Y$ can increase the information that $Y$ contains about $X$, and the DPI can be formalized as:

$$I(X;Y) \geq I(X;Z)$$
If we denote by $I(X;Y|Z)$ the residual information between $X$ and $Y$, i.e., the relevant information not captured by $Z$, then the preceding expression becomes an equality for $I(X;Y|Z) = 0$, that is, when $Y$ and $Z$ contain the same amount of information about $X$.
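We can also check the DPI numerically. The following sketch (all probabilities are invented toy values) builds a Markov chain $X \rightarrow Y \rightarrow Z$ from a prior $p(x)$ and two noisy channels, and verifies that $I(X;Y) \geq I(X;Z)$:

```python
import numpy as np

def mi(p_ab):
    """I(A;B) in bits for a joint distribution given as a matrix p(a, b)."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask])))

p_x = np.array([0.5, 0.5])            # prior over X
p_y_given_x = np.array([[0.9, 0.1],   # noisy channel X -> Y
                        [0.2, 0.8]])
p_z_given_y = np.array([[0.7, 0.3],   # noisy channel Y -> Z
                        [0.3, 0.7]])

p_xy = p_x[:, None] * p_y_given_x     # p(x, y) = p(x) p(y|x)
p_xz = p_xy @ p_z_given_y             # p(x, z) = sum_y p(x, y) p(z|y)

print(mi(p_xy), mi(p_xz))             # I(X;Y) >= I(X;Z), as the DPI demands
```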
3.2. Minimal Sufficient Statistics for the Input
The compression of the input $X$ allows capturing the relevant features and eliminating those that are irrelevant for the prediction of $Y$. The MinMI principle states that this process leads to a decrease of $I(X;Y)$. The minimum of this quantity allows us to identify the simplest mapping of $X$, which we’ll call $\hat{X}$, that still captures the mutual information $I(X;Y)$. $\hat{X}$ is the minimal sufficient statistic of $X$ with respect to $Y$.
The DPI allows us to understand qualitatively the reason for the MinMI principle, since:

$$I(\hat{X};Y) \leq I(X;Y)$$
If we denote the output prediction by $\hat{Y}$, the DPI also provides another important relationship:

$$I(\hat{Y};Y) \leq I(\hat{X};Y)$$

with equality if and only if the prediction $\hat{Y}$ is itself a sufficient statistic for $Y$.
We can view the process of identifying $\hat{X}$ and producing the prediction $\hat{Y}$ as a Markov chain:

$$Y \rightarrow X \rightarrow \hat{X} \rightarrow \hat{Y}$$
This approach is problematic: for a generic distribution, an exact minimal sufficient statistic may not exist, so the Markov chain above is not, in general, correct. It is possible, however, to identify $\hat{X}$ in an alternative way.
3.3. Minimum Condition for Minimal Sufficient Statistics
Let us consider the Markov chain:

$$Y \rightarrow X \rightarrow \hat{X}$$
We can consider the search for $\hat{X}$ as the minimization of $I(X;\hat{X})$. This criterion is not sufficient on its own, since we could reduce this quantity simply by discarding relevant information from the input. We need another condition.
On the other hand, even if the identification of $\hat{X}$ is generally achieved through the compression process and the minimization of $I(X;\hat{X})$, it is also true that a sufficient statistic must be as informative as possible, that is, $I(\hat{X};Y)$ must be maximal.
These two conditions allow us to construct the following Lagrangian:

$$\mathcal{L}\left[p(\hat{x}\,|\,x)\right] = I(X;\hat{X}) - \beta\, I(\hat{X};Y)$$
with $\beta \geq 0$ being a problem-dependent Lagrange multiplier that balances the complexity of the representation, $I(X;\hat{X})$, against the amount of preserved relevant information, $I(\hat{X};Y)$. This functional has a minimum that can be found through a variational procedure. If we consider the relevant information not captured by $\hat{X}$, $I(X;Y\,|\,\hat{X})$, it is possible to write the equivalent expression:

$$\tilde{\mathcal{L}}\left[p(\hat{x}\,|\,x)\right] = I(X;\hat{X}) + \beta\, I(X;Y\,|\,\hat{X})$$

which differs from the previous one only by the constant term $\beta\, I(X;Y)$.
We thus have a minimization criterion that can be applied to the optimization of a DNN.
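As an illustration of how this minimization can be carried out on a small discrete problem, the sketch below implements the classical self-consistent iteration for the IB Lagrangian: each pass re-computes $p(\hat{x})$ and $p(y|\hat{x})$ from the current encoder and then updates $p(\hat{x}|x)$ with an exponential rule penalizing the divergence between $p(y|x)$ and $p(y|\hat{x})$. The toy distribution, the cardinality of $\hat{X}$ (`n_t` in the code), the value of $\beta$, and the number of iterations are all arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def mi(p_ab):
    """I(A;B) in nats for a joint distribution given as a matrix p(a, b)."""
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])))

def iterative_ib(p_xy, n_t=2, beta=5.0, n_iter=200):
    """Minimize I(X;T) - beta * I(T;Y) over stochastic encoders p(t|x)."""
    n_x, _ = p_xy.shape
    p_x = p_xy.sum(axis=1)                                  # p(x)
    p_y_given_x = p_xy / p_x[:, None]                       # p(y|x)

    p_t_given_x = rng.random((n_x, n_t))                    # random initial encoder
    p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        p_t = p_x @ p_t_given_x                             # p(t)
        p_xt = p_x[:, None] * p_t_given_x                   # p(x, t)
        p_y_given_t = p_xt.T @ p_y_given_x / p_t[:, None]   # p(y|t)

        # KL divergence D[ p(y|x) || p(y|t) ] for every (x, t) pair, in nats
        kl = np.array([np.sum(p_y_given_x * np.log(p_y_given_x / p_y_given_t[t]), axis=1)
                       for t in range(n_t)]).T

        # Self-consistent update of the encoder p(t|x)
        p_t_given_x = p_t[None, :] * np.exp(-beta * kl)
        p_t_given_x /= p_t_given_x.sum(axis=1, keepdims=True)

    p_xt = p_x[:, None] * p_t_given_x                       # final p(x, t)
    p_ty = p_xt.T @ p_y_given_x                             # final p(t, y)
    return mi(p_xt) - beta * mi(p_ty), p_t_given_x

# Toy joint distribution p(x, y): four input symbols, two output symbols
p_xy = np.array([[0.20, 0.05],
                 [0.15, 0.10],
                 [0.05, 0.20],
                 [0.10, 0.15]])
value, encoder = iterative_ib(p_xy)
print("Lagrangian value:", value)
print("Learned encoder p(t|x):\n", encoder)
```

Larger values of $\beta$ favor preserving relevant information over compression, while smaller values favor a more aggressive compression of the input.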
3.4. IB in the DNNs
The previous discussion deepens our understanding of some well-established heuristics in the training of neural networks, such as the search for architectures that are as compact as possible. In fact, the IB principle teaches us that a DNN learns by extracting the most informative features, approximating the minimal sufficient statistic of $X$ with respect to $Y$.
In a DNN, each layer depends solely on the output of the previous layer. We can therefore study the network as a Markov chain:

$$Y \rightarrow X \rightarrow T_1 \rightarrow T_2 \rightarrow \cdots \rightarrow T_k \rightarrow \hat{Y}$$

with $T_i$ denoting the $i$-th of the $k$ hidden layers.
Since, according to the DPI, passing from one layer to the next cannot increase the information about the output, we can write:

$$I(Y;X) \geq I(Y;T_1) \geq I(Y;T_2) \geq \cdots \geq I(Y;T_k) \geq I(Y;\hat{Y})$$
We achieve equality at each step if each layer is a sufficient statistic of its input.
Each layer, therefore, must convey the greatest possible amount of relevant information while minimizing the complexity of the representation. That is, each layer $T_i$ must maximize $I(T_i;Y)$ while minimizing $I(T_{i-1};T_i)$ (again, the MinMI principle; note that this last quantity is the equivalent of $I(X;\hat{X})$ within a single network layer).
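As a rough illustration of how these layer-wise quantities can be measured in practice, the sketch below estimates mutual information by binning the activations of a single, untrained, randomly initialized layer on synthetic data. Everything here (the sizes, the number of bins, the random “network”) is an arbitrary assumption, and binning is only a crude estimator, but it shows how $I(T;Y)$ and $I(X;T)$ can be tracked for a layer $T$:

```python
import numpy as np

def discrete_mi(a, b):
    """Empirical I(A;B) in bits from two equal-length sequences of discrete symbols."""
    n = len(a)
    joint, count_a, count_b = {}, {}, {}
    for pair in zip(a, b):
        joint[pair] = joint.get(pair, 0) + 1
    for (x, y), c in joint.items():
        count_a[x] = count_a.get(x, 0) + c
        count_b[y] = count_b.get(y, 0) + c
    return sum((c / n) * np.log2(c * n / (count_a[x] * count_b[y]))
               for (x, y), c in joint.items())

def layer_symbols(activations, n_bins=10):
    """Discretize each sample's activation vector into a single hashable symbol."""
    edges = np.linspace(activations.min(), activations.max(), n_bins)
    return [tuple(row) for row in np.digitize(activations, edges)]

# Toy setup: random inputs, random binary labels, one random (untrained) hidden layer
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 12))
y = rng.integers(0, 2, size=1000)
T = np.tanh(X @ rng.normal(size=(12, 4)))     # hidden-layer activations

print("I(T;Y) estimate:", discrete_mi(layer_symbols(T), y.tolist()))
print("I(X;T) estimate:", discrete_mi(layer_symbols(X), layer_symbols(T)))
```

With so few samples and such fine binning these estimates are crude and biased upward; more refined estimators are needed for realistic networks, but the principle of monitoring each layer’s compression and relevant information is the same.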
4. Conclusion
In this tutorial, we presented a brief overview of the fundamental issues underlying the IB principle. It is a formalism with great explanatory potential regarding the internal functioning of DNNs which, at the same time, allows us to quantify what happens during the training process.
The complexity of the topic does not allow a complete discussion. Among the issues that we haven’t considered are the equations relating to the limits of generalization, which can be derived from the formalism, and the analysis of the IB distortion curve, from which it is possible to identify bifurcation points that can be assimilated to phase transitions between different network topologies.
These aspects can be the starting point for further study for the interested reader.