1. Introduction

In this tutorial, we’ll explain co-occurrence matrices in natural language processing (NLP) and their applications.

First, we’ll introduce NLP, and then we’ll explain co-occurrence matrices and their uses.

2. Natural Language Processing (NLP)

Natural Language Processing (NLP) is a branch of computer science and artificial intelligence that focuses on the interaction between computers and human language. The objective of NLP is to develop computer programs that can analyze, understand, and process large amounts of natural language data the way humans do. The field has gained significance in recent years because of the abundance of digital text data and the development of advanced statistical techniques for language processing.

NLP has a wide range of applications, including machine translation, sentiment analysis, text summarization, speech recognition, and chatbots.

Due to the diversity of NLP tasks, different text preprocessing and transformation techniques are necessary to tackle them effectively. One such technique is the co-occurrence matrix.

3. What Are Co-occurrence Matrices?

Co-occurrence matrices are a fundamental concept in NLP, and we can use them to represent the relationships between elements in a text corpus. Usually, in NLP, we work with a collection of texts, or a text corpus. Elements of a text corpus can be sentences, words, phrases, or any other linguistic unit of interest.

With co-occurrence matrices, we can represent these elements using the rows and columns of a matrix. More precisely, each row and each column represents a unique element of the text corpus, while each cell holds the number of times the corresponding two elements appear together in a predefined context. The context can be a document, a sentence, a word window, or any other relevant unit.
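For instance, here’s a minimal Python sketch of co-occurrence counting with a word-window context; the function name and the window size of 2 are our own illustrative choices:

```python
from collections import defaultdict

def window_cooccurrence(tokens, window_size=2):
    """Count how often pairs of tokens appear within window_size positions."""
    counts = defaultdict(int)
    for i, word in enumerate(tokens):
        # Only look backwards so each pair is visited once per occurrence
        for j in range(max(0, i - window_size), i):
            counts[(tokens[j], word)] += 1
            counts[(word, tokens[j])] += 1  # mirror entry keeps the counts symmetric
    return counts

tokens = "apples are green and red".split()
counts = window_cooccurrence(tokens)
print(counts[("apples", "are")])  # 1
print(counts[("apples", "red")])  # 0, since they are more than two positions apart
```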

3.1. Example of Co-occurrence Matrices

As an example, let’s use the three sentences below:

Apples are green and red.

Red apples are sweet.

Green oranges are sour.

Let’s assume that these three sentences are our text corpus, the elements are words, and the context is one sentence. This means that, for each pair of words from the sentences above, we count how many times they appear together in one sentence. For example, the words “apples” and “red” appear together twice, in the first and second sentences, while the words “red” and “sour” never appear in the same sentence.

Following that logic, our co-occurrence matrix will look like this:

|         | apples | are | green | and | red | sweet | oranges | sour |
|---------|--------|-----|-------|-----|-----|-------|---------|------|
| apples  | 2      | 2   | 1     | 1   | 2   | 1     | 0       | 0    |
| are     | 2      | 3   | 2     | 1   | 2   | 1     | 1       | 1    |
| green   | 1      | 2   | 2     | 1   | 1   | 0     | 1       | 1    |
| and     | 1      | 1   | 1     | 1   | 1   | 0     | 0       | 0    |
| red     | 2      | 2   | 1     | 1   | 2   | 1     | 0       | 0    |
| sweet   | 1      | 1   | 0     | 0   | 1   | 1     | 0       | 0    |
| oranges | 0      | 1   | 1     | 0   | 0   | 0     | 1       | 1    |
| sour    | 0      | 1   | 1     | 0   | 0   | 0     | 1       | 1    |

From the table above, we can see that the co-occurrence matrix is symmetric: the value in row X and column Y is the same as the value in row Y and column X. In general, we don’t need to keep all elements of the text corpus in the co-occurrence matrix, only those of interest.

For instance, before creating the matrix, it’s useful to clean the text, remove stop words, apply stemming or lemmatization, and so on.
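To make the example concrete, here’s a minimal Python sketch, using only the standard library, that builds the sentence-level co-occurrence matrix for our three-sentence corpus; the variable names are our own:

```python
import string
from itertools import product

sentences = [
    "Apples are green and red.",
    "Red apples are sweet.",
    "Green oranges are sour.",
]

# Basic cleaning: lowercase and strip punctuation so "Red" matches "red"
tokenized = [
    s.lower().translate(str.maketrans("", "", string.punctuation)).split()
    for s in sentences
]

# Vocabulary and a word -> row/column index mapping
vocab = sorted({word for sent in tokenized for word in sent})
index = {word: i for i, word in enumerate(vocab)}

# Cell (i, j) counts the sentences in which both words appear
matrix = [[0] * len(vocab) for _ in vocab]
for sent in tokenized:
    for w1, w2 in product(set(sent), set(sent)):
        matrix[index[w1]][index[w2]] += 1

print(matrix[index["apples"]][index["red"]])  # 2
print(matrix[index["red"]][index["sour"]])    # 0
```

Printing the full matrix reproduces the table above, including the symmetry we just noted.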

4. What Are the Uses of Co-occurrence Matrices in NLP?

Co-occurrence matrices can be useful for analyzing relationships between elements. From the example above, we see that the words “apples”, “red”, and “sweet” appear in the same context, while the words “red” and “sour” don’t appear together. Of course, with a larger text corpus, the values in the matrix become more reliable, and more meaningful connections between elements emerge. We can also analyze which words our words of interest appear with most often.

Besides text analysis and information retrieval, co-occurrence matrices can be used for element representation, or embeddings. The most common are word embeddings: semantic representations of words expressed as numerical vectors. Basically, we can take any row or column of the co-occurrence matrix and use it as a word embedding. This makes sense because similar words have similar embedding vectors (especially after normalization), since they appear in similar contexts in the text corpus. One problem might be the high dimension of these vectors, but we can solve it with dimensionality reduction techniques such as PCA (principal component analysis) or SVD (singular value decomposition).
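As a rough sketch of this idea, the snippet below reuses the matrix and index variables from the sketch in Section 3.1, treats the matrix rows as raw word embeddings, and reduces them to two dimensions with NumPy’s SVD:

```python
import numpy as np

# Reuse `matrix` and `index` from the sketch in Section 3.1
X = np.array(matrix, dtype=float)

# Truncated SVD: keep only the top-k singular components
k = 2
U, S, Vt = np.linalg.svd(X)
embeddings = U[:, :k] * S[:k]  # one k-dimensional vector per word

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "red" and "apples" have identical rows (identical contexts), so their
# reduced vectors coincide and their cosine similarity is 1.0
print(cosine(embeddings[index["red"]], embeddings[index["apples"]]))
print(cosine(embeddings[index["red"]], embeddings[index["sour"]]))  # lower
```

In practice, we’d reach for a library implementation such as scikit-learn’s TruncatedSVD or PCA, but the principle is the same.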

Once we have embeddings, our range of applications expands significantly, since almost every NLP and machine learning algorithm requires numerical vectors as input. Some applications include text classification, sentiment analysis, machine translation, and named-entity recognition.

Lastly, it’s worth mentioning that co-occurrence matrices are also used in digital image processing for texture analysis. There, they are built from image pixel values and describe features of the image texture.

5. Conclusion

In this article, we’ve explained what co-occurrence matrices are, using a simple explanation and example. Although the concept is very simple, it can be very helpful in NLP. The most important application of co-occurrence matrices is preparing text for other algorithms, as we showed with word embeddings.