1. Introduction

In this tutorial, we’ll examine computational complexity issues within cryptographic algorithms.

The discussion will not focus on any concrete cryptographic algorithm; instead, we'll present the general principles that underlie them.

2. Symmetric and Public Key Cryptosystems

Suppose two people, A and B, want to communicate secretly. A wants to send a message M so that only B can understand its content. To do this, A uses a protocol called a cryptosystem or cipher, which produces a cryptogram or ciphertext C of the original message via an encryption function e(\cdot):

    [C=e(M)]

B uses a decryption function d(\cdot), essentially the inverse of e(\cdot), to recover the original message:

    [M=d(C)=d(e(M))]

Encryption and decryption can be considered components of the coding and decoding subsystems in a classic communication mechanism:

(Figure: Cryptographic system)

Now suppose a third person, Z, is able to intercept the message that A sends to B. In the most unfavorable case, we must assume that Z knows the protocol used by A and B. At that point, Z can decrypt the message by simply applying d(C).

Therefore, we need an additional element to make communication secure. This element is a secret key, or simply a key, known to B and without which Z cannot decrypt the message despite knowing the communication protocol used.
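To make the M to C and back to M round trip concrete, here is a minimal Python sketch using a toy keyed XOR scheme. The scheme, the function names, and the key are illustrative assumptions of ours, not a cipher discussed in this tutorial, and the construction is not secure in practice.

    # Toy symmetric scheme: A and B share a secret key; Z does not know it.
    from itertools import cycle

    def e(message: bytes, key: bytes) -> bytes:
        # C = e(M): combine each message byte with a key byte via XOR
        return bytes(m ^ k for m, k in zip(message, cycle(key)))

    def d(ciphertext: bytes, key: bytes) -> bytes:
        # M = d(C): XOR with the same key is its own inverse
        return bytes(c ^ k for c, k in zip(ciphertext, cycle(key)))

    key = b"secret"                          # shared by A and B, hidden from Z
    C = e(b"meet me at noon", key)           # A produces the cryptogram
    assert d(C, key) == b"meet me at noon"   # B recovers the original message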

But does A have to know B’s secret key? Until the end of the twentieth century, most cryptographers would have answered this question in the affirmative. In that case, we speak of symmetric-key systems, whose security rests on the secrecy of the key.

In 1976, Diffie and Hellman published the article “New Directions in Cryptography”, in which they proposed public-key cryptosystems. In these, A encrypts the message with a key in the public domain, the public key, and B decrypts it with his private key, known only to him. Anyone can send an encrypted message to B, but only B can read its content. This eliminates the problem of secure key exchange, the weak point of symmetric cryptosystems.
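To make the public/private split concrete, here is a minimal textbook-RSA sketch with deliberately tiny parameters. RSA is one later realization of the public-key idea; the tutorial does not single it out, and the numbers below are toy values chosen only for illustration.

    # Textbook RSA with toy primes (insecure, illustration only).
    p, q = 61, 53
    n = p * q                      # public modulus
    phi = (p - 1) * (q - 1)        # Euler's totient of n
    e_pub = 17                     # public exponent, coprime with phi
    d_priv = pow(e_pub, -1, phi)   # private exponent: modular inverse of e_pub

    M = 65                         # a message encoded as an integer smaller than n
    C = pow(M, e_pub, n)           # anyone can encrypt with B's public key (n, e_pub)
    assert pow(C, d_priv, n) == M  # only B, who knows d_priv, can decrypt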

3. Classical vs. Modern Cryptography

In the scenario where A and B suspect a potential attack by Z, we can ask ourselves two fundamental questions:

  1. What can Z do with the message-cryptogram pair it gets?
  2. Which result, from the point of view of security, is satisfactory for A and B?

Depending on how we answer these questions, we have two different approaches: classical vs. modern cryptography.

To explore these issues in more depth, we recommend the excellent text by Talbot and Welsh, “Complexity and Cryptography: An Introduction”.

3.1. Classical Cryptography

Classical cryptography is based on Information Theory and was largely developed by Shannon, which is why it is also known as the information-theoretic approach. Its basic assumption is as follows:

The cryptogram must not reveal any information about the message.

This assumption leads to the concept of perfect-secrecy that we can formalize as follows:

    [\forall m\in M, c\in C: \Pr(M=m\mid C=c)=\Pr(M=m)]

This formula simply says that, given a particular message m from the set of possible messages M and a particular ciphertext c from the set of possible ciphertexts C, the probability of m is independent of c. Even if Z has access to the cryptogram of the message, he cannot learn anything about its content.

One problem with this approach is that a perfect-secrecy system requires a key length at least as large as any message that can be encrypted with it, making it unsuitable for modern communication systems, such as the Internet.
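The classic example of a perfect-secrecy system is the one-time pad, which the text above does not name; we add it here as an illustration. A fresh random key at least as long as the message is XORed with it, which is exactly the key-length requirement just described.

    # One-time pad sketch: the key is random and as long as the message.
    import secrets

    def otp_encrypt(message: bytes) -> tuple[bytes, bytes]:
        key = secrets.token_bytes(len(message))        # fresh key, same length as M
        ciphertext = bytes(m ^ k for m, k in zip(message, key))
        return ciphertext, key

    def otp_decrypt(ciphertext: bytes, key: bytes) -> bytes:
        return bytes(c ^ k for c, k in zip(ciphertext, key))

    C, key = otp_encrypt(b"attack at dawn")
    assert otp_decrypt(C, key) == b"attack at dawn"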

3.2. Modern Cryptography

Modern cryptography takes a completely different approach. The basic assumption is:

It doesn’t matter if a cryptogram reveals information about the message. What matters is whether Z can efficiently extract this information.

If we assume that Z has an unlimited computational capacity, then the previous proposition does not hold. Hence, modern cryptography considers that:

Z has limited computational resources.

But if this is true for Z, it is also true for A and B, which leads to an additional assumption:

There are mathematical functions that are easy to compute but difficult to invert, called one-way functions.

In this scenario, A and B can encrypt and decrypt messages with modest computational resources, while Z can extract information from the cryptogram only with computational capabilities far beyond those we assume it has. The last assumption highlights the importance of the complexity of computational procedures, that is, of how easy or difficult they are to carry out.
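A standard candidate one-way function, added here as our own example, is modular exponentiation: computing g^x mod p is fast even for very large numbers, while recovering the exponent from the result (the discrete logarithm) is believed to require an infeasible search for suitably large p.

    # Easy direction: modular exponentiation is fast even with large operands.
    p = 2_147_483_647        # a large prime (2^31 - 1); real systems use far larger ones
    g, x = 5, 1_234_567      # public base and secret exponent
    y = pow(g, x, p)         # computed almost instantly

    # Hard direction: finding an exponent that maps to y by brute force.
    def discrete_log(y: int, g: int, p: int) -> int:
        acc, exponent = 1, 0
        while acc != y:
            acc = (acc * g) % p
            exponent += 1
        return exponent

    # discrete_log(y, g, p) may need on the order of p multiplications,
    # which is why this function is considered a one-way candidate.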

Modern cryptography is essentially the approach used today in encrypted transactions and communications. However, any technology that provides an exponential increase in computational capabilities, such as the quantum computer, potentially endangers it.

4. Framework of the Complexity Theory

Any computing system, including cryptographic ones, can only use computable functions. What matters is how easy or difficult it is to carry out the computation.

Consider a specific problem, for example, sorting a set of numbers; we'll call it \Pi. Complexity Theory tries to answer questions like these:

  1. Is \Pi inherently easy or difficult to solve?
  2. Given two problems \Pi_{1} and \Pi_{2}, which is easier to solve?

To give an answer, we classify algorithms into different complexity classes, which group computational procedures with common characteristics, according to the following ideas:

  1. We measure the execution time of an algorithm in terms of the number of elementary operations.
  2. The running time of an algorithm depends on the size of the input.

The O-notation provides a symbolism to express these ideas. An algorithm of complexity O(n^{2}), for example, performs a number of elementary operations that grows, up to a constant factor, as the square of the size of the input.
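The sketch below, an illustrative assumption of ours, makes this concrete by counting the elementary comparisons performed by a simple quadratic sorting algorithm for inputs of growing size.

    # Counting elementary operations of a simple quadratic sort.
    def quadratic_sort(values: list[int]) -> int:
        # Sorts the list in place and returns the number of comparisons made.
        comparisons = 0
        n = len(values)
        for i in range(n):
            for j in range(i + 1, n):
                comparisons += 1
                if values[j] < values[i]:
                    values[i], values[j] = values[j], values[i]
        return comparisons

    for n in (10, 100, 1000):
        print(n, quadratic_sort(list(range(n, 0, -1))))   # grows as n*(n-1)/2, i.e. O(n^2)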

5. Complexity Classes