1. Introduction
In this tutorial, we’ll discuss the primary methods for feature selection and feature reduction in text classification.
2. Curse of Dimensionality and Blessings of Selection
All machine learning is affected by a curse: the curse of dimensionality. Today, for the first time in history, recording new information has a cost that’s close to zero. Consequently, datasets that originate from the real world, whether they relate to texts or not, tend to include many more features than those that we deem informative.
For this reason, we’d like to reduce the number of features in a dataset and select only those that matter the most. This is particularly important for texts: a typical text corpus can have several thousand unique words, while only a small fraction of them appears in any given document.
Therefore, we’d like to select only the features from a text that lead to the largest increase in classification accuracy and ignore the others. Three simple techniques can help us: the chi-squared test, information gain, and the usage of n-grams instead of uni-grams.
3. Chi-Squared Distributions
The chi-squared test is one of the foundational techniques for selecting features in classification tasks. It relies on very few assumptions, which makes it easy to remember and to implement.
Let’s assume that we’re conducting a classification task. We then have $n$ observations that we want to classify into $k$ independent classes. Each observation belongs to one and only one class, so no multi-label classification here.
Let’s call $x_i$ the number of observations that belong to each class $i$:

$$x_1, x_2, \dots, x_k$$
We can now calculate the probability $p_i$ that a given observation belongs to the $i$-th class of the $k$ available. To do this, we divide the number of observations in a given class by the total number of observations. In this manner, $p_i = \frac{x_i}{n}$ for each class $i$.
The sum of the observations across all classes must amount to $n$. Therefore, we can also write this formula:

$$\sum_{i=1}^{k} x_i = n \quad\Rightarrow\quad \sum_{i=1}^{k} p_i = \sum_{i=1}^{k} \frac{x_i}{n} = 1$$
Because $\sum_{i=1}^{k} p_i = 1$, we can then treat $P = (p_1, p_2, \dots, p_k)$ as the probability distribution that describes the likelihood, for any generic observation, to fall into each of the $k$ classes. Therefore, if $p_i$ is the probability of belonging to class $i$, we can call $m_i = n \cdot p_i$ the expected number of observations that are associated with the $i$-th class.
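To make these definitions concrete, here’s a minimal sketch in Python that computes the class counts $x_i$, the probabilities $p_i$, and the expected counts $m_i$; the list of labels and the variable names are made up for illustration:

```python
from collections import Counter

# Hypothetical class labels for n observations (illustrative only)
labels = ["sports", "politics", "sports", "tech", "sports", "politics"]

n = len(labels)       # total number of observations
x = Counter(labels)   # x_i: number of observations per class

p = {c: x_c / n for c, x_c in x.items()}    # p_i = x_i / n
m = {c: n * p_c for c, p_c in p.items()}    # m_i = n * p_i (expected counts)

print(p)  # the probabilities sum to 1
print(m)  # here each m_i equals x_i, as noted in the next section
```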
4. Chi-Squared Test
We can note that, in the construction we’ve developed so far, $x_i$ and $m_i$ have the same value for each of the $k$ classes, so it may seem redundant to have both. However, if the number $n$ of observations is sufficiently large, we can then assume that the population satisfies this rule:

$$x_i \approx m_i = n \cdot p_i$$
Since $p_i = \frac{x_i}{n}$, the right-hand side of the equation hides a division and a multiplication by $n$, so that the probability distribution $p_i$ emerges again. The advantage of reasoning in this manner is that we can use the expected counts $m_i$, and not the measured observations $x_i$, since the latter don’t figure explicitly on the right-hand side.
We can then compare the real-world distribution of the observed counts $x_i$ with the abstract distribution of the expected counts $m_i$, for which $n$ approaches infinity, and compute a Pearson’s $\chi^2$ value between the two:

$$\chi^2 = \sum_{i=1}^{k} \frac{(x_i - m_i)^2}{m_i}$$

This is indeed the definition of Pearson’s chi-squared test. When we select features, we compute this statistic between each candidate feature and the class labels, and we keep the features with the highest scores.
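In practice, we rarely compute the statistic by hand. Here’s a minimal sketch assuming scikit-learn is available; the tiny corpus, the labels, and the choice of keeping only two features are made up for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# Illustrative corpus and labels (not from the article)
corpus = [
    "the match ended with a late goal",
    "parliament passed the new budget law",
    "the striker scored twice in the match",
    "the senate debated the budget proposal",
]
labels = ["sports", "politics", "sports", "politics"]

# Uni-gram counts: one feature per unique word
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)

# Keep the 2 words with the highest chi-squared score w.r.t. the labels
selector = SelectKBest(chi2, k=2)
X_selected = selector.fit_transform(X, labels)

selected_words = vectorizer.get_feature_names_out()[selector.get_support()]
print(selected_words)
```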
5. Information Gain
Another method for selecting features in a dataset requires us to think in terms of entropy. We can ask ourselves: “By how much would the entropy of the class distribution be reduced if we knew the value assumed by one of the features?”
This suggests that we can calculate the information we gain as the difference between two entropies. One is the entropy of the class distribution, and the other is the entropy of the same distribution under the condition that the value of the feature is known.
Let’s call $H(X)$ the entropy for the random variable $X$. Let’s also call $H(X|Y)$ the conditional entropy for $X$ given the value assumed by $Y$. We can then define the information gain as:

$$IG(X, Y) = H(X) - H(X|Y)$$
We can refer to our other article on this website if we need a refresher on computing the entropy and the conditional entropy for random variables. The information gain is just the difference between these two.
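As a sketch of this computation in Python, we can estimate $H(X)$, $H(X|Y)$, and their difference directly; the class labels, the binary word-presence feature, and the helper functions below are made up for illustration and don’t come from any particular library:

```python
from collections import Counter
from math import log2

def entropy(values):
    """Shannon entropy H(X) of a list of outcomes, in bits."""
    n = len(values)
    return -sum((c / n) * log2(c / n) for c in Counter(values).values())

def conditional_entropy(x_values, y_values):
    """Conditional entropy H(X|Y) = sum over y of P(y) * H(X | Y=y)."""
    n = len(x_values)
    h = 0.0
    for y in set(y_values):
        x_given_y = [x for x, y_ in zip(x_values, y_values) if y_ == y]
        h += (len(x_given_y) / n) * entropy(x_given_y)
    return h

# Illustrative data: the class of each document (X) and whether it
# contains the word "goal" (Y)
classes = ["sports", "sports", "politics", "politics", "sports", "politics"]
has_goal = [True, True, False, False, True, True]

info_gain = entropy(classes) - conditional_entropy(classes, has_goal)
print(round(info_gain, 3))
```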
6. N-Grams and Their Frequency
The final method that we discuss is the simplest, both intellectually and computationally. When we work with texts, we can create a frequency distribution of the words contained therein. These constitute the uni-grams for those texts, and they’re the standard features for text classification.
However, some texts may contain words that are more meaningful when analyzed in pairs rather than individually. Pairs of consecutive words take the name of bi-grams, and we can use them as features to classify texts. In general, an n-gram comprising $n$ consecutive words should be more informative than the individual words taken alone, at the price of increased memory consumption.
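To see this trade-off in practice, here’s a minimal sketch assuming scikit-learn’s CountVectorizer; the three-document corpus is made up for illustration. Switching from uni-grams to uni-grams plus bi-grams enlarges the vocabulary, but it also captures pairs such as “new york”:

```python
from sklearn.feature_extraction.text import CountVectorizer

# Made-up corpus for illustration
corpus = [
    "new york is a big city",
    "the new budget law passed",
    "york had a new mayor",
]

unigrams = CountVectorizer(ngram_range=(1, 1)).fit(corpus)
uni_and_bigrams = CountVectorizer(ngram_range=(1, 2)).fit(corpus)

print(len(unigrams.vocabulary_))          # number of uni-gram features
print(len(uni_and_bigrams.vocabulary_))   # larger vocabulary: uni-grams + bi-grams
print("new york" in uni_and_bigrams.vocabulary_)  # the bi-gram is now a feature
```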
7. Conclusions
In this article, we studied the most common techniques for feature selection and reduction for text classification.