1. Introduction
In this tutorial, we’ll take a look at the concept of entropy and its applications within different branches of computer science.
Entropy is connected to information theory, developed by Shannon, which arose from problems related to communication.
2. Communication
We can define communication as all the processes through which a mechanism actively enters into a relationship with another mechanism.
A communication system contains three basic elements: source, communication channel, and receiver.
Information theory is about answering questions regarding the measurement of information and its transmission, with or without noise.
3. Combinatorial Entropy
The derivation of the expression for information entropy allows us to understand its meaning.
Consider the possible fixed-length sequences of three different symbols: A, B, and C. Suppose that we have two instances of the symbol A, five of the symbol B, and three of the symbol C. Two possible sequences are:
B C B B C C A B A B
B A C B A C B B C B
Each of these combinations is called a permutation. Considering that symbols of the same type are indistinguishable, the number of permutations is given by the multinomial coefficient. For our example, the number of permutations is:

$$P = \frac{10!}{2! \, 5! \, 3!} = 2520$$
In general, for a sequence of length $N$ formed by $k$ distinct symbols of multiplicity $N_i$, we have:

$$P = \frac{N!}{N_1! \, N_2! \cdots N_k!}$$
Suppose we want to uniquely encode each of the $P$ possible permutations using a binary string. A string of length $L$ bits can encode $2^L$ different possibilities.
In order to encode all the permutations, we must have $2^L \geq P$. Taking the base 2 logarithm, we obtain that the length of the binary string must be:

$$L \geq \log_2 P = \log_2 \frac{N!}{N_1! \, N_2! \cdots N_k!}$$
The average length of the binary string per symbol of our sequence is:

$$H_c = \frac{L}{N} = \frac{1}{N} \log_2 \frac{N!}{N_1! \, N_2! \cdots N_k!}$$
$H_c$ is the combinatorial entropy and has units of bits/symbol. It expresses the average length of a binary string needed to encode each symbol of the original sequence.
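As an illustration, the following Python sketch (the function name is ours) computes the combinatorial entropy directly from the symbol counts of the example above:

```python
from math import factorial, log2

def combinatorial_entropy(counts):
    """Average number of bits per symbol needed to index one permutation
    of a sequence with the given symbol counts."""
    n = sum(counts)
    permutations = factorial(n)
    for c in counts:
        permutations //= factorial(c)   # multinomial coefficient
    return log2(permutations) / n

# Example from the text: 2 x A, 5 x B, 3 x C in a sequence of length 10
print(combinatorial_entropy([2, 5, 3]))  # ~1.13 bits/symbol (log2(2520) / 10)
```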
4. Entropy in Information Theory
Entropy in information theory, also called information entropy or Shannon entropy, is denoted by $H$ and can be derived from combinatorial entropy.
Expanding the logarithm in the expression of $H_c$, we can write:

$$H_c = \frac{1}{N} \left( \log_2 N! - \sum_{i=1}^{k} \log_2 N_i! \right)$$
We can approximate the logarithm of the factorial with the Stirling formula for sufficiently large $N$:

$$\ln N! \approx N \ln N - N$$
Considering that $\sum_{i=1}^{k} N_i = N$, we have:

$$H_c \approx -\sum_{i=1}^{k} \frac{N_i}{N} \log_2 \frac{N_i}{N}$$
Taking into account that the ratio $N_i / N$ is nothing more than the probability $p_i$ of occurrence of the $i$-th symbol in the sequence of length $N$, we obtain the final general expression for entropy:

$$H = -\sum_{i=1}^{k} p_i \log_2 p_i$$
Given the approximation that we have introduced, the information entropy is an upper limit to the combinatorial entropy, $H \geq H_c$.
$H$ is often expressed using the natural logarithm:

$$H = -\sum_{i=1}^{k} p_i \ln p_i$$
In this case, the units are of nats/symbol. Both bits and nats are dimensionless units.
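To make the formula concrete, here is a minimal Python sketch (the helper name is ours) that computes the Shannon entropy of a distribution in bits or nats, applied to the relative frequencies of the A/B/C example:

```python
import math

def shannon_entropy(probs, base=2):
    """Entropy of a discrete distribution: base=2 gives bits, base=math.e gives nats."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# Relative frequencies of the A/B/C sequence: 2/10, 5/10, 3/10
print(shannon_entropy([0.2, 0.5, 0.3]))           # ~1.49 bits/symbol
print(shannon_entropy([0.2, 0.5, 0.3], math.e))   # ~1.03 nats/symbol
```

As expected, the result (about 1.49 bits/symbol) is an upper bound for the combinatorial entropy of the same sequence (about 1.13 bits/symbol).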
5. Entropy in the Continuous Case
In the passage from the discrete case to the continuous case, Shannon simply replaced the summation with an integral:

$$h(X) = -\int p(x) \log p(x) \, dx$$
This quantity, generally denoted by the symbol $h$, is called differential entropy. The integral implies the use of a probability density function instead of a probability distribution.
There is a problem here. Consider that $p(x)$ is given by a normal probability density of mean $\mu$ and variance $\sigma^2$:

$$p(x) = \frac{1}{\sigma \sqrt{2 \pi}} e^{-\frac{(x - \mu)^2}{2 \sigma^2}}$$
Then, calculating the integral, the differential entropy is:

$$h = \log \left( \sigma \sqrt{2 \pi e} \right)$$
Note that the above expression is less than zero for $\sigma < 1 / \sqrt{2 \pi e}$. Hence, differential entropy, unlike discrete entropy, can be negative. Moreover, $h$ is not generally invariant under a change of variable.
Care should, therefore, be taken in applying differential entropy. Its uncritical use as a simple generalization of discrete entropy can give rise to unexpected results.
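The following Python sketch (a rough numerical check, not a library routine) estimates the differential entropy of a Gaussian by numerical integration and compares it with the closed form $\log(\sigma \sqrt{2 \pi e})$; for small $\sigma$ both values are negative:

```python
import math

def gaussian_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def differential_entropy(mu, sigma, a=-50.0, b=50.0, steps=200_000):
    """Midpoint-rule estimate of -integral p(x) ln p(x) dx, in nats."""
    dx = (b - a) / steps
    h = 0.0
    for i in range(steps):
        p = gaussian_pdf(a + (i + 0.5) * dx, mu, sigma)
        if p > 0:
            h -= p * math.log(p) * dx
    return h

for sigma in (1.0, 0.1):
    print(differential_entropy(0.0, sigma), math.log(sigma * math.sqrt(2 * math.pi * math.e)))
# sigma = 1.0 -> ~ 1.42 nats; sigma = 0.1 -> ~ -0.88 nats (negative entropy)
```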
Jaynes proposed a corrected version of differential entropy, known as relative entropy, to solve these problems:

$$H = -\int p(x) \log \frac{p(x)}{m(x)} \, dx$$
The expression above is scale-invariant, as long as we measure $p(x)$ and $m(x)$ on the same interval. We can consider $m(x)$ as an a priori probability, often in the form of the uniform probability density.
The problems of differential entropy arise from the fact that an absolute measure of entropy in the continuum is not mathematically justified. Relative entropy measures the differential entropy with respect to an arbitrary reference level, given by $m(x)$.
6. Entropy and Information
Shannon’s work originates, as noted by Von Neumann (who also suggested the name “entropy” to Shannon), from Boltzmann’s observation in his work on statistical physics that entropy is “missing information”, since it is related to the number of possible alternatives for a physical system.
Information in information theory is not related to meaning. Shannon wrote:
“The semantic aspects of communication are irrelevant to the technical ones.”
Information is a measure of the freedom we have when choosing a message. So, it is not about what is being transmitted, but what could be transmitted. It is not related to individual messages.
If we have a possible choice between two messages, we can arbitrarily indicate the choice of one of the two as a unit quantity of information (the binary digit, or bit, a term proposed by John W. Tukey). As we have seen, the bit arises when a message is encoded as a binary string.
6.1. Ergodic Processes
The symbol sequences of a message have, in general, different probabilities. We compose a message choosing among possible symbols in an alphabet. In real processes, the probability of choosing a symbol is not independent of previous choices.
Think of a message written in English, in which we compose the words with the symbols of the usual alphabet. The probability that a given letter is followed by a vowel is much higher than the probability that it is followed by an x, for example.
When we have processes in which we choose symbols according to a set of probabilities, we are dealing with stochastic processes. When the choice of symbols of a stochastic process depends on the symbols or events previously chosen, we have a Markov process. If a Markov process leads to a statistic that is independent of the sample when the number of events is large, then we have an ergodic process.
Entropy is, therefore, a measure of uncertainty, surprise, or information related to a choice between a certain number of possibilities when we consider ergodic processes.
7. Maximum Entropy
$H$ is maximum if the possibilities involved are equally likely. For two options with probabilities $p$ and $1 - p$, we have:

$$H = -p \log_2 p - (1 - p) \log_2 (1 - p)$$
which, plotted as a function of $p$, shows a maximum for $p = 1/2$.
We can generalize this result to cases with more than two probabilities involved. It is possible to demonstrate that this maximum is unique.
If we use concrete functional forms of probability density, we have:
- The entropy of the normal density that we have seen previously, $h = \log(\sigma \sqrt{2 \pi e})$, is maximum among all densities with the same variance. In other words, the maximum entropy distribution under constraints of mean and variance is the Gaussian.
- The entropy of the uniform density is the greatest among all probability densities defined on the same support. Since a uniform density over an unbounded domain is not normalizable (the integral does not exist), its entropy cannot be calculated over the whole real line, but it can be calculated over a finite range $[a, b]$. In this case, the density is $p(x) = \frac{1}{b - a}$, and the differential entropy is:

$$h = \log (b - a)$$
which depends on the extent of the selected range. In the discrete case, the maximum entropy, for $n$ equally likely events and entropy expressed in nats, is:

$$H_{max} = \ln n$$
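Both results are easy to check numerically. The following Python sketch (helper names are ours) shows that the binary entropy peaks at $p = 1/2$ and that an arbitrary distribution over $n$ outcomes never exceeds the entropy of the uniform one:

```python
import math, random

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Binary entropy H(p) peaks at p = 0.5 with a value of 1 bit
for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(p, entropy_bits([p, 1 - p]))

# For n outcomes, no distribution beats the uniform one (log2(n) bits)
n = 8
uniform = entropy_bits([1 / n] * n)                      # = 3 bits
weights = [random.random() for _ in range(n)]
arbitrary = entropy_bits([w / sum(weights) for w in weights])
print(uniform, arbitrary, arbitrary <= uniform)          # ... True
```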
8. The Inevitable Physical Analogy
We have highlighted in the previous sections the influence of thermodynamics in the early stages of the development of information theory. However, information entropy is of enormous importance in other areas of physics as well.
8.1. Thermodynamic Entropy
The thermodynamic entropy for an ideal gas is given by the Boltzmann-Planck equation:

$$S = k_B \ln W$$
where $k_B$ is the Boltzmann constant, and $W$ is the number of real microstates corresponding to the gas macrostate. The Boltzmann formula shows the relationship between entropy and the number of ways the atoms or molecules of a thermodynamic system can be arranged.
Note the similarity between the meaning of the Boltzmann equation and the meaning we gave to combinatorial entropy, starting from the concept of permutation.
8.2. Order, Probability, and Entropy
We know that, in an isolated system, the disorder or entropy increases with each irreversible physical process until it reaches a maximum. The result is also valid for irreversible processes in adiabatic systems, in which there is no heat transfer with the outside. This fact has important macroscopic consequences.
If we inject gas into a container full of air and wait for sufficient time, we can observe that the gas will spontaneously diffuse into the air until it reaches the same concentration at all points.
Another example is the contact between two bodies at different temperatures. In this case, there will be a heat flow between the two bodies until their temperatures, which we can consider a measure of the concentration or density of energy, become equal. If the two bodies have different masses, they will have different amounts of energy at the end of the process, but the energy per unit of volume will be the same.
The second law often manifests itself through physical processes that tend to equalize some property within the system. The result is the zeroing of the gradient of some physical observable. In isolated systems, the processes leading to an increase in entropy are spontaneous. The maximum entropy corresponds to thermodynamic equilibrium.
The universe is an adiabatic and isolated system. When the maximum entropy is reached, there will no longer be any gradient of energy that will allow any spontaneous process. This conjecture is known as heat death or entropic death of the universe.
Note the similarity between maximum thermodynamic entropy, which corresponds to the equality of physical properties at all points of the system, and maximum information entropy, which derives from equality of probabilities.
8.3. Uncertainty and Quantum Mechanics
The uncertainty principle is a fundamental natural limit. It concerns the possibility of measuring with arbitrary precision or certainty pairs of physical observables (conjugate variables).
Since entropy is a measure of uncertainty, it is no coincidence that we can formulate the uncertainty principle in terms of information entropy (entropic formulations).
Starting from an inequality proposed independently by Bourret, Everett, Hirschman, and Leipnik, satisfied by any wave function $\psi(x)$ and its Fourier transform $\varphi(p)$, which describe respectively the coordinate and momentum space, Beckner, and Bialynicki-Birula and Mycielski, demonstrated:

$$S_x + S_p = -\int |\psi(x)|^2 \ln |\psi(x)|^2 \, dx - \int |\varphi(p)|^2 \ln |\varphi(p)|^2 \, dp \geq \ln (\pi e \hbar)$$

where $\hbar$ is the reduced Planck constant.
Since the differential entropy of the normal probability density, as we saw above, is maximum among all distributions with the same variance, for a generic probability density we can write the inequalities:

$$S_x \leq \ln \left( \sigma_x \sqrt{2 \pi e} \right), \qquad S_p \leq \ln \left( \sigma_p \sqrt{2 \pi e} \right)$$
Substituting these into the previous inequality, we have:

$$\sigma_x \sigma_p \geq \frac{\hbar}{2}$$
That is the formulation of the uncertainty principle.
9. Entropy Properties
Some fundamental properties of entropy are:
- Continuity: changing the values of the probabilities by a minimal amount should only change the entropy by a small amount.
- Symmetry: the measure is invariant under re-ordering of the outcomes.
- Additivity: the amount of entropy should be independent of how we consider the process and how we divide it into parts.
10. Other Forms of Entropy
Given two random variables $X$ and $Y$, we can define their marginal probabilities $p(x)$ and $p(y)$, the joint probability $p(x, y)$, and the conditional probabilities $p(x | y)$ and $p(y | x)$, linked by the relationship:

$$p(x, y) = p(x | y) \, p(y) = p(y | x) \, p(x)$$
Since entropy is a function of distributions or probability densities, we can define analogous variants.
Joint entropy is a measure of the uncertainty associated with a set of variables. For the discrete and continuous cases, it takes the form:

$$H(X, Y) = -\sum_{x} \sum_{y} p(x, y) \log p(x, y)$$

$$h(X, Y) = -\int \int p(x, y) \log p(x, y) \, dx \, dy$$
It satisfies the inequality:

$$H(X, Y) \leq H(X) + H(Y)$$
Equality is valid only if $X$ and $Y$ are statistically independent.
Conditional entropy or equivocation quantifies the amount of information needed to describe the outcome of a random variable $Y$ given the value of another random variable $X$:

$$H(Y | X) = -\sum_{x} \sum_{y} p(x, y) \log p(y | x), \qquad h(Y | X) = -\int \int p(x, y) \log p(y | x) \, dx \, dy$$
Some important properties are:

$$h(Y | X) \leq h(Y)$$

$$h(X, Y) = h(X) + h(Y | X) = h(Y) + h(X | Y)$$
with equivalent properties for the discrete case.
A very important quantity in information theory is mutual information (MI), which quantifies the amount of information obtained about one random variable by observing another random variable. It is a measure of the mutual dependence between the two variables:

$$I(X; Y) = \sum_{x} \sum_{y} p(x, y) \log \frac{p(x, y)}{p(x) \, p(y)}$$
MI is non-negative and symmetric, $I(X; Y) = I(Y; X)$, and is linked to the different forms of entropy through the relationship:

$$I(X; Y) = H(X) - H(X | Y) = H(Y) - H(Y | X) = H(X) + H(Y) - H(X, Y)$$
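These identities are easy to verify numerically. Here is a short Python sketch (the joint distribution is made up for illustration) that computes the marginal, joint, and conditional entropies and the mutual information of two binary variables and checks the relationships above:

```python
import math

def H(probs):
    """Shannon entropy (bits) of a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical joint distribution p(x, y) of two binary variables
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

px = {x: sum(p for (a, _), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (_, b), p in joint.items() if b == y) for y in (0, 1)}

H_X, H_Y, H_XY = H(px.values()), H(py.values()), H(joint.values())
H_Y_given_X = H_XY - H_X  # chain rule: H(X, Y) = H(X) + H(Y|X)
MI = sum(p * math.log2(p / (px[x] * py[y])) for (x, y), p in joint.items())

print(H_XY <= H_X + H_Y)                        # subadditivity holds
print(abs(MI - (H_Y - H_Y_given_X)) < 1e-9)     # I(X;Y) = H(Y) - H(Y|X)
print(abs(MI - (H_X + H_Y - H_XY)) < 1e-9)      # I(X;Y) = H(X) + H(Y) - H(X,Y)
```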
11. Communication and Transmission of Information
The ratio between the effective entropy of a source and its maximum entropy is the relative entropy of the source. One minus the relative entropy is the redundancy.
For a communication channel of capacity $C$ bits/s and a source with entropy $H$ bits/symbol, the average transmission speed over an undisturbed (noiseless) channel is bounded above by $C / H$ symbols per second. The closer we get to this limit, the longer the encoding process.
A paradoxical fact is that in the presence of noise in the communication channel, the entropy of the received message is greater than that transmitted by the source. So, the received message has more information than the one sent because the disturbance introduces an entropy added to that of the pure source. The additional information is unwanted (noise), but from a formal point of view, it contributes to the total entropy of the message received.
12. Data Compression
Data compression is a notable example of the application of entropy and information theory concepts.
To transmit a message from a source to a receiver, we use a communication channel. The transmission involves a prior process of coding the message. We can try to reduce the original size (compression) through an algorithm.
Compression can be lossy or lossless. Using audio file compression as an example, MP3 is a lossy compression format, while FLAC is a lossless one.
We will not study any concrete algorithm, but we will see the general laws governing data compression mechanisms.
12.1. Entropy and Coding
In coding, each symbol of the message is identified by a code or codeword, which can be of fixed or variable length. For example, we can encode the first 4 letters of the alphabet with the two-bit codewords $A \rightarrow 00$, $B \rightarrow 01$, $C \rightarrow 10$, $D \rightarrow 11$.
If each symbol $s_i$ is identified by a probability $p_i$ and a codeword of length $l_i$, the average codeword length is:

$$\bar{L} = \sum_{i} p_i l_i$$
The Source Coding Theorem, or Shannon's First Theorem, establishes that it's not possible to represent the outputs of an information source by a source code whose average length is less than the source entropy:

$$\bar{L} \geq H$$
The average codeword length is bounded below by the entropy of the source. The lower $H$ is, the lower the value of $\bar{L}$ that we can theoretically reach. Hence, messages with lower entropy are more compressible.
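As a quick numerical check, the following Python sketch (the source and code are invented for illustration) compares the average codeword length of a simple variable-length code with the source entropy; with dyadic probabilities the bound is attained exactly:

```python
import math

# Hypothetical source with dyadic probabilities and a matching prefix code
source = {"A": (0.5, "0"), "B": (0.25, "10"), "C": (0.125, "110"), "D": (0.125, "111")}

entropy = -sum(p * math.log2(p) for p, _ in source.values())   # H = 1.75 bits/symbol
avg_len = sum(p * len(code) for p, code in source.values())    # L = 1.75 bits/symbol

print(entropy, avg_len, avg_len >= entropy)  # 1.75 1.75 True
```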
12.2. Data Compression
Coding efficiency is:

$$\eta = \frac{H}{\bar{L}}$$
The value $\eta = 1$ is the limit to aim for when developing a compression algorithm.
An intuitive result is that if any encoded string has only one possible source string producing it, then we have unique decodability. An example of this type of encoding is the prefix code.
A prefix code is a code in which no codeword is the prefix of any other codeword. For example, if we use the variable-length code $A \rightarrow 0$, $B \rightarrow 10$, $C \rightarrow 11$, $D \rightarrow 110$, we do not have a prefix code, because the codeword of $C$ is a prefix of the codeword for $D$ (substring 11).
In this context, an important result is the Kraft-McMillan inequality:

$$\sum_{i} 2^{-l_i} \leq 1$$
If codeword lengths satisfy the Kraft-McMillan inequality, then we can construct a prefix code with these codeword lengths. This ensures the uniqueness of the decoding.
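A minimal Python check of the inequality over a set of candidate binary codewords might look like this (the example codes are ours):

```python
def kraft_sum(codewords):
    """Sum of 2^(-l_i) over the lengths of a set of binary codewords."""
    return sum(2 ** -len(c) for c in codewords)

# Lengths 1, 2, 3, 3: the sum equals 1, so a prefix code with these lengths exists
print(kraft_sum(["0", "10", "110", "111"]))  # 1.0
# Lengths 1, 1, 2: the sum exceeds 1, so no uniquely decodable code exists
print(kraft_sum(["0", "1", "10"]))           # 1.25
```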
If, for each symbol $s_i$, it is verified that $l_i = -\log_2 p_i$, then we can prove that $\bar{L} = H$ and $\eta = 1$. When this condition does not occur, we can increase efficiency by extending the source, using a binary Shannon-Fano code.
The Shannon-Fano code for blocks of $n$ symbols will have an expected codeword length, $\bar{L}_n$, less than $H(X_1, \ldots, X_n) + 1 = nH + 1$, the latter equality stemming from the definition of entropy. The expected codeword length per original source symbol will therefore be less than:

$$H + \frac{1}{n}$$
By choosing $n$ to be large enough, we can make this as close to the entropy $H$ as we like, obtaining $\eta \approx 1$. The price to pay is an increase in the complexity of the decoding process.
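The effect of source extension can be illustrated with a short Python sketch. The source probabilities below are made up, and we use the codeword lengths $\lceil -\log_2 p \rceil$, which satisfy the bound above; the expected length per original symbol approaches the entropy as the block size $n$ grows:

```python
import math
from itertools import product

# Hypothetical memoryless binary source
probs = {"A": 0.7, "B": 0.3}
H = -sum(p * math.log2(p) for p in probs.values())   # ~0.881 bits/symbol

for n in (1, 2, 4, 8, 12):
    expected_len = 0.0
    for block in product(probs, repeat=n):
        p_block = math.prod(probs[s] for s in block)
        expected_len += p_block * math.ceil(-math.log2(p_block))  # codeword length per block
    print(n, expected_len / n, H)   # per-symbol length decreases towards H
```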
12.3. A Compression Scheme
As a conclusion of this section, we report a classic compression scheme.
Suppose we have a vector of $n$ symbols $X^n = (X_1, \ldots, X_n)$, each of which has a probability $p(x)$, and consider the problem of encoding it with the binary alphabet $\{0, 1\}$. For an integer $k$ and real numbers $\epsilon > 0$ and $\delta > 0$, a coding scheme, $f$, and a decoding scheme, $g$, can be formalized as follows:

$$f: \mathcal{X}^n \rightarrow \{0, 1\}^k, \qquad g: \{0, 1\}^k \rightarrow \mathcal{X}^n$$
This schema forms a $(k, n)$ compression scheme for the source $X$ if the following conditions are met:

$$\frac{k}{n} \leq H(X) + \epsilon, \qquad P\left[ g(f(X^n)) \neq X^n \right] \leq \delta$$
In practice, from the last condition, coding the symbols according to their probabilities must allow a decoding process that reproduces the original sequence with the desired precision.
13. Entropy and Machine Learning
13.1. Cross-Entropy Error Function
In classifiers such as neural networks, we can use error functions other than the quadratic error.
It is usual to minimize error functions constructed from the negative logarithm of the likelihood function, starting from independent conditional probabilities of the target $\mathbf{t}$ given the input $\mathbf{x}$:

$$E = -\ln \prod_{n} p(\mathbf{t}^n | \mathbf{x}^n) = -\sum_{n} \sum_{k=1}^{c} t_k^n \ln y_k^n$$
where $\{\mathbf{x}^n, \mathbf{t}^n\}$ are the input and target vectors of the records of the dataset, $y_k^n$ are the corresponding network outputs, and $c$ is the number of classes (network output units).
If we consider that the target $t$ belongs to only two mutually exclusive classes $C_1$ and $C_2$, the conditional probability is:

$$p(t | \mathbf{x}) = y^{t} (1 - y)^{1 - t}$$

and the error function becomes:

$$E = -\sum_{n} \left[ t^n \ln y^n + (1 - t^n) \ln (1 - y^n) \right]$$
$E$ is the cross-entropy error function and has a form that we have already seen in the discussion on the maximum value of entropy.
By analyzing the continuous case, we can demonstrate that the minimum of this error function is the same as that for the quadratic error.
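As a minimal sketch (the values and the helper name are illustrative), the binary cross-entropy error for a batch of targets and network outputs can be computed as:

```python
import math

def binary_cross_entropy(targets, outputs, eps=1e-12):
    """Cross-entropy error for targets in {0, 1} and network outputs in (0, 1)."""
    return -sum(
        t * math.log(y + eps) + (1 - t) * math.log(1 - y + eps)
        for t, y in zip(targets, outputs)
    )

targets = [1, 0, 1, 1]
outputs = [0.9, 0.2, 0.8, 0.6]   # hypothetical network predictions
print(binary_cross_entropy(targets, outputs))  # ~1.06
```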
13.2. Principle of Maximum Entropy
The Maximum Entropy or MaxEnt principle, proposed by Jaynes, establishes that the probability distribution that best represents knowledge within a problem is the one with the maximum entropy. Since the output of a neural network is, under certain conditions, an approximation of the conditional probability of the target given the input, the MaxEnt principle potentially constitutes an optimization criterion.
The MaxEnt principle is not without criticism. This led to the proposal of two similar principles:
- The principle of maximum mutual information (MaxMI), in which the optimal probability density is the one with the maximum MI between network outputs and targets.
- The principle of minimum mutual information (MinMI), in which the optimal probability density is the one with the minimum MI between network outputs and inputs.
In the latter case, if we know the marginal probabilities of inputs and targets (reasonable condition), MinMI is equivalent to the MaxEnt principle.
13.3. The Error of an Estimator
Considering a target $t$ and an estimator $\hat{t}(\mathbf{x})$ given, for example, by the output of a neural network, which depends on an input $\mathbf{x}$, it is possible to obtain a lower limit on the expected quadratic error, which depends on the conditional differential entropy $h(t | \mathbf{x})$:

$$E\left[ \left( t - \hat{t}(\mathbf{x}) \right)^2 \right] \geq \frac{1}{2 \pi e} e^{2 h(t | \mathbf{x})}$$
This expression is important in quantum mechanics, which again underlines the importance of the connection between physics and information theory.
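To make the bound concrete, here's a small Monte Carlo sketch (the setup is ours): the target is the input plus Gaussian noise, so $h(t | x) = \ln(\sigma \sqrt{2 \pi e})$ and the bound reduces to the noise variance $\sigma^2$, which the identity estimator attains:

```python
import math, random

sigma = 0.5                          # standard deviation of the noise
samples = 100_000
mse = 0.0
for _ in range(samples):
    x = random.uniform(0, 1)         # input
    t = x + random.gauss(0, sigma)   # target = input + Gaussian noise
    mse += (t - x) ** 2              # estimator t_hat(x) = x
mse /= samples

# h(t|x) = ln(sigma * sqrt(2*pi*e)), so the lower bound equals sigma^2
bound = math.exp(2 * math.log(sigma * math.sqrt(2 * math.pi * math.e))) / (2 * math.pi * math.e)
print(mse, bound)  # both ~0.25: the estimator sits at the theoretical lower bound
```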
14. Conclusion
In this tutorial, we've given an overview of the concept of entropy and the implications of its use in different branches of computer science. Given the historical development of information theory, the analysis of entropy cannot ignore the meaning of thermodynamic entropy.
The topic is extensive and cannot, of course, be dealt with exhaustively in a single article. Nevertheless, the themes dealt with in the text can give ideas for further insights.