1. Overview
In this tutorial, we’ll talk about what Big O Notation means. Then, we’ll review a few examples to investigate its effect on running time.
2. The Intuition of Big O Notation
Big O notation is a convenient way to describe how an algorithm's running time grows with its input. The study of the performance of algorithms – or algorithmic complexity – falls into the field of algorithm analysis, which estimates the resources (e.g., memory or time) needed to solve a given problem. Here, we'll focus primarily on time: the faster an algorithm can complete a task, the more efficient it is.
Big O notation helps us analyze how the input size affects an algorithm's running time. To understand Big O, it is essential to understand the growth rate: how the required running time grows as a function of the input size.
Next, we’ll study some algorithms and evaluate their time complexity.
3. Constant Time Algorithms – O(1)
First, let’s see a simple algorithm that initializes a variable with the value of 10000 and then prints it:
algorithm initializeAndPrintVariable:
    // INPUT
    //   None
    // OUTPUT
    //   Initialize a variable with a big number and print it.
    n <- 10000
    print n
This code executes in a fixed amount of time regardless of the value of n, so the time complexity of the algorithm is O(1). Alternatively, we can print the variable three times using a for loop:
algorithm initializeAndPrintVariableThrice:
    // INPUT
    //   None
    // OUTPUT
    //   Initialize a variable with a big number and print it several times.
    n <- 10000
    for i <- 1 to 3:
        print n
The above example is also constant time: even though it takes three times as long to run, its runtime doesn't depend on the input size n. We denote algorithms with constant time as O(1). Since Big O ignores constant factors, O(2), O(3), or even O(1000) are the same thing as O(1).
We don’t care about how long it takes to run, only that it takes constant time.
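To see this in practice, here's a toy Python benchmark (timings are machine-dependent and the list sizes are arbitrary): indexing into a list takes essentially the same time whether the list holds ten elements or ten million, which is exactly what O(1) means.

import timeit

small = list(range(10))
large = list(range(10_000_000))

# Both lookups run in constant time: the size of the list doesn't matter.
print(timeit.timeit(lambda: small[5], number=1_000_000))
print(timeit.timeit(lambda: large[5], number=1_000_000))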
4. Logarithmic Time Algorithms – O(log(n))
Asymptotically, constant time algorithms are the quickest. Next come algorithms with logarithmic time complexity. However, their growth is more challenging to visualize.
One typical example of a logarithmic time algorithm is the binary search algorithm:
algorithm binarySearch(A, x):
    // INPUT
    //   A = Sorted array
    //   x = Target value
    // OUTPUT
    //   Index of x in A, or -1 if not found
    low <- 0
    high <- len(A) - 1
    while low <= high:
        mid <- floor((low + high) / 2)
        if A[mid] < x:
            low <- mid + 1
        else if A[mid] > x:
            high <- mid - 1
        else:
            return mid
    return -1
In binary search, the input is an array of size n that the algorithm splits in half on each iteration until it finds the target value, returning -1 if the value is absent. Thus, the running time is proportional to the log(n) function, where n is the number of elements in the array. For example, when n is 8, the while loop iterates roughly log2(8) = 3 times.
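To make the iteration count tangible, here's the same search as a runnable Python sketch (the steps counter is our addition, purely for illustration):

def binary_search(A, x):
    # Classic binary search on a sorted list; returns the index of x or -1.
    low, high = 0, len(A) - 1
    steps = 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2  # integer division keeps mid a valid index
        if A[mid] < x:
            low = mid + 1
        elif A[mid] > x:
            high = mid - 1
        else:
            print(f"found after {steps} iterations")
            return mid
    print(f"not found after {steps} iterations")
    return -1

binary_search(list(range(8)), 5)  # only a couple of iterations for n = 8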
5. Linear Time Algorithms – O(n)
Next, we’ll look at linear time algorithms whose time complexity is proportional to the size of their inputs.
For instance, consider the following pseudocode of an algorithm that enumerates the numbers from 1 to n, with n provided as input:
algorithm numberCounter(n):
    // INPUT
    //   n = Input value
    // OUTPUT
    //   Print numbers from 1 to n
    for i <- 1 to n:
        print i
In this example, the number of iterations is directly proportional to the input size, n. As n increases, the time taken to execute the algorithm increases linearly. Therefore, the algorithm's time complexity is O(n). When denoting the time complexity, we don't discriminate between O(2n) and O(3n): both are simply O(n), since they grow in direct proportion to the input size.
6. N Log N Time Algorithms – O(n log n)
The n log n algorithms perform worse than algorithms with linear time complexity. This is because their running time grows as the product of a linear and a logarithmic factor of the input size. For example, let's see the following algorithm with nested for loops:
algorithm allCombinationsOfTwoNumbers(n):
    // INPUT
    //   n = Input value
    // OUTPUT
    //   Prints all pairs (i, j) with i from 1 to n and j from 1 to log(n)
    for i <- 1 to n:
        for j <- 1 to log(n):
            print(i, j)
In this example, the outer loop runs n times, and the inner loop runs log(n) times. Since the loops are nested, the total number of iterations is n * log(n), and we denote the time complexity of the algorithm as O(n log n). Another example of an n log n time algorithm is the Quicksort algorithm (in the average case).
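For reference, here's a minimal Quicksort sketch in Python. It's the simple out-of-place variant, chosen for brevity; production implementations usually partition in place:

def quicksort(items):
    # Average case O(n log n): each recursion level does O(n) work,
    # and with reasonable pivots there are about log(n) levels.
    # A consistently bad pivot choice degrades this to O(n^2).
    if len(items) <= 1:
        return items
    pivot = items[len(items) // 2]
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]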
7. Polynomial Time Algorithms – O(n^m)
Next, we'll delve into the topic of polynomial-time algorithms, including algorithms with complexities such as O(n^2), O(n^3), and, more generally, O(n^m), where m is an integer greater than 1. It's important to note that, compared to n log n algorithms, polynomial algorithms are relatively slower. Among the polynomial algorithms, O(n^2) is the most efficient, with O(n^3), O(n^4), and so on being successively slower.
Let’s have a look at a simple example of a quadratic time algorithm using for loops:
algorithm allPermutationsOfTwoNumbers(n):
    // INPUT
    //   n = Input value
    // OUTPUT
    //   Prints all pairs of numbers from 1 to n
    for i <- 1 to n:
        for j <- 1 to n:
            print(i, j)
In this example, the outer loop runs n times, and for each of its iterations, the inner loop runs n times. Since the loops are nested, the total number of iterations is n^2.
Another example of a polynomial time algorithm, this time with a complexity of O(n^3), would be the following:
algorithm allPermutationsOfThreeNumbers(n):
    // INPUT
    //   n = Input value
    // OUTPUT
    //   Prints all triplets of numbers from 1 to n
    for i <- 1 to n:
        for j <- 1 to n:
            for k <- 1 to n:
                print(i, j, k)
Here, the total number of iterations is n^3. In this case, there are three nested loops, each running n times. Thus, the computational complexity is O(n^3).
8. Exponential Time Algorithms – O(k^n)
Let's analyze algorithms whose runtime grows exponentially with the input, like O(2^n). Their runtime increases dramatically as the input size grows. Specifically, when k is 2, the algorithm's runtime doubles with each additional unit of input: if n equals 2, the algorithm runs four times; if n equals 3, the algorithm runs eight times. This behavior contrasts with logarithmic time algorithms, whose runtime grows only marginally with each additional unit of input. Similarly, algorithms with a complexity of O(3^n) triple their runtime with each added input unit. In general, algorithms with a complexity of O(k^n) increase their runtime by a factor of k with each additional unit of input.
Let's have a look at a simple example of an O(2^n) time algorithm:
algorithm decimalToBinaryEnumerator(n):
    // INPUT
    //   n = Input value
    // OUTPUT
    //   Print numbers from 1 to 2^n in the binary format
    for i <- 1 to 2^n:
        print binary(i)
In this example, the for loop runs 2^n times, printing every number from 1 to 2^n in binary format. A typical example of an exponential time algorithm is the recursive computation of the Fibonacci sequence.
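As a sketch, here's the naive recursive version in Python; every call spawns two further calls, so the call tree roughly doubles with each increment of n:

def fib(n):
    # Naive recursion: fib(n) calls fib(n - 1) and fib(n - 2),
    # so the number of calls grows exponentially with n.
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55, computed with 177 recursive calls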
9. Factorial Time Algorithms – O(n!)
Finally, let’s analyze algorithms with a factorial runtime, our worst-case scenario. This class of algorithms has a runtime that increases proportionally with the factorial of the input size. A well-known example is solving the traveling salesman problem using a brute-force approach.
In short, the traveling salesman problem involves finding the shortest route that visits each city in a given list exactly once and returns to the starting city. Unfortunately, for a list of n cities, there are n! possible routes to check, and thus the brute-force approach has an O(n!) runtime complexity.
While explaining the solution to this problem is outside the scope of this article, we can demonstrate a simple algorithm whose loop runs n! times, printing all the numbers from 1 to n!:
algorithm simulationOfFactorialTime(n):
    // INPUT
    //   n = Integer
    // OUTPUT
    //   Prints all numbers from 1 to n!
    for i <- 1 to n!:
        print i
In this example, the number of iterations grows with the factorial of the input size, thus resulting in a runtime complexity of O(n!).
10. Asymptotic Functions
The Big O notation belongs to a class of asymptotic functions that we use to study the performance of algorithms. Big O disregards efficiency at small input sizes, as well as constant factors; it is primarily concerned with the behavior of algorithms on large inputs.
Additionally, there are two other asymptotic functions to describe algorithm performance at the limit: Big Θ and Big Ω notations.
For example, the Big O notation gives an upper bound: the algorithm grows no faster than the given rate. In contrast, the Big Ω notation gives a lower bound: the algorithm grows no slower than the given rate. Finally, the Big Θ notation gives a tight bound: the algorithm grows at the same rate as the given function, up to constant factors, which we can read as asymptotic equality.
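Written formally, these are the standard textbook definitions, using f for the algorithm's running time and g for the bounding function:

\[
\begin{aligned}
f(n) \in O(g(n))      &\iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \ \text{ for all } n \ge n_0 \\
f(n) \in \Omega(g(n)) &\iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \ \text{ for all } n \ge n_0 \\
f(n) \in \Theta(g(n)) &\iff f(n) \in O(g(n)) \ \text{ and } \ f(n) \in \Omega(g(n))
\end{aligned}
\]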
Big O, Big Ω, and Big Θ notations all describe the performance of algorithms, with Big O being by far the most common. They help us understand how the input size affects an algorithm's performance and can help us choose the best algorithm for a given input size.
11. Visualizing Different Complexities
All the time complexities presented are easier to compare when plotted against the input size: O(1) and O(log n) stay nearly flat, O(n) and O(n log n) grow moderately, while O(n^m), O(k^n), and O(n!) shoot up even for small inputs. Such a plot makes plain why we aim to reduce the complexity of our algorithms.
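A quick numeric comparison conveys the same picture; the following throwaway Python script (with arbitrarily chosen input sizes) tabulates each growth function:

import math

# Tabulate each complexity class for a few input sizes.
print(f"{'n':>3} {'log n':>6} {'n log n':>8} {'n^2':>6} {'2^n':>8} {'n!':>15}")
for n in [1, 2, 4, 8, 16]:
    print(f"{n:>3} {math.log2(n):>6.1f} {n * math.log2(n):>8.1f} "
          f"{n**2:>6} {2**n:>8} {math.factorial(n):>15}")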
12. Conclusion
In this article, we discussed the importance of understanding time complexity and analyzing algorithm performance using the Big O notation. We also examined time complexities, such as constant, logarithmic, linear, linearithmic, polynomial, exponential, and factorial time algorithms.
A cheat sheet of the time complexities of operations on typical data structures is a handy companion for putting these classes into practice.