1. Overview
In this short tutorial, we’ll look at two common interpretations of Euclid’s algorithm and analyze their time complexity.
2. Greatest Common Divisor
Euclid’s algorithm is a method for calculating the greatest common divisor of two integers.
Let’s start by recalling that the greatest common divisor of two integers is the largest number that divides both of them with a remainder of zero. We’ll use gcd(a, b) to denote the greatest common divisor of integers a and b. So, for example:
- gcd(8, 12) = 4
- gcd(n, 1) = 1, where n is a positive integer
This task might seem trivial for such small numbers. However, it becomes increasingly difficult as our numbers grow. So, if you need some convincing, try calculating gcd(1160718174, 316258250) by hand!
Euclid’s algorithm can make this process a whole lot easier. Whilst this algorithm has a number of variations, each is ultimately derived from the propositions put forth by Euclid of Alexandria in his work Elements.
Today we’ll explore two common ones.
3. Euclid’s Algorithm by Subtraction
The original version of Euclid’s algorithm, presented as Proposition 2 of Book VII of Euclid’s Elements, employs subtraction.
For some positive integers a and b, it works by repeatedly subtracting the smaller number from the larger one until the two become equal. At this point, the value of either term is the greatest common divisor of our inputs.
3.1. The Algorithm
We can define this algorithm in just a few steps:
Step 1: If a = b, then return the value of a
Step 2: Otherwise, if a > b, then let a = a - b and return to Step 1
Step 3: Otherwise, if a < b, then let b = b - a and return to Step 1
Now, let’s step through this algorithm for the example gcd(12, 8):
- a = 12, b = 8: since a > b, let a = 12 - 8 = 4
- a = 4, b = 8: since a < b, let b = 8 - 4 = 4
- a = 4, b = 4: since a = b, we stop
We have reached a = b = 4, which means that gcd(12, 8) = 4.
3.2. Pseudocode
We can also present this algorithm in pseudocode:
```
algorithm gcd(a, b)
    // INPUT
    //   a, b = positive integers
    // OUTPUT
    //   The greatest common divisor of a and b

    if a = b:
        return a
    else if a > b:
        return gcd(a - b, b)
    else:
        return gcd(a, b - a)
```
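As a concrete sketch, the pseudocode above translates almost line-for-line into Python. The name `gcd_subtraction` is ours, and we use a loop in place of the recursion to avoid deep call stacks when the inputs are far apart:

```python
def gcd_subtraction(a: int, b: int) -> int:
    # Repeatedly subtract the smaller value from the larger (Steps 2 and 3)
    # until the two values are equal (Step 1), then return either one.
    while a != b:
        if a > b:
            a = a - b
        else:
            b = b - a
    return a

print(gcd_subtraction(12, 8))  # 4
```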
3.3. Time Complexity
We can estimate the worst-case time complexity of our algorithm by thinking about the sum a + b. If we exclude our base case, this sum decreases at each recursive step. Since the smallest possible deduction at each step is 1, and since all positive integers are guaranteed to have a common divisor of 1, our time complexity has an upper bound of O(a + b).
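We can observe this linear behavior directly by counting subtractions. A worst case pairs a large number with 1, so every step shaves off just 1. The step-counting helper below is ours, for illustration:

```python
def count_subtraction_steps(a: int, b: int) -> int:
    # Count how many subtractions the algorithm performs before a = b.
    steps = 0
    while a != b:
        if a > b:
            a -= b
        else:
            b -= a
        steps += 1
    return steps

print(count_subtraction_steps(12, 8))    # 2 steps: (12, 8) -> (4, 8) -> (4, 4)
print(count_subtraction_steps(1000, 1))  # 999 steps: linear in a + b
```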
4. Euclid’s Algorithm by Division
A modern adaptation of Euclid’s algorithm uses division to calculate the greatest common divisor of two integers a and b, where a ≥ b. It is based upon a few key observations:
- gcd(a, 0) is a, for any positive integer a
- gcd(a, b) is gcd(b, a mod b), for any positive integers a and b
The first observation is quite intuitive; however, the second is less obvious – if you want to examine its proof, check out these slides.
With these observations in mind, our algorithm simply repeats the assignment (a, b) = (b, a mod b) until b becomes zero. At this point, we can simply pluck our result from the final value of a.
4.1. The Algorithm
Euclid’s algorithm by division has three steps:
Step 1: If b = 0, then return the value of a
Step 2: Otherwise, divide a by b and store the remainder in some variable r
Step 3: Let a = b, let b = r, and return to Step 1
Let’s step through the algorithm for the inputs a = 1071 and b = 462:
- a = 1071, b = 462: r = 1071 mod 462 = 147, so a = 462 and b = 147
- a = 462, b = 147: r = 462 mod 147 = 21, so a = 147 and b = 21
- a = 147, b = 21: r = 147 mod 21 = 0, so a = 21 and b = 0
Now that we have reached b = 0, we know that gcd(1071, 462) = 21.
4.2. Pseudocode
We can implement this algorithm as a recursive function gcd(a, b):
```
algorithm gcd(a, b)
    // INPUT
    //   a, b = positive integers
    // OUTPUT
    //   The greatest common divisor of a and b

    if b = 0:
        return a
    else:
        return gcd(b, a mod b)
```
Alternatively, we can use a while loop to capture the same behavior:
```
algorithm gcd(a, b)
    // INPUT
    //   a, b = positive integers
    // OUTPUT
    //   The greatest common divisor of a and b

    while b != 0:
        r <- a mod b
        a <- b
        b <- r
    return a
```
4.3. Time Complexity
It turns out that the number of steps our algorithm will take is maximized when the two inputs are consecutive Fibonacci numbers.
More specifically, if gcd(a, b) with a > b requires N steps, then the smallest possible values of a and b are F(N+2) and F(N+1), respectively, where F(i) denotes the i-th Fibonacci number (F(1) = F(2) = 1). This can be proven by induction.
So, when N steps are required: b ≥ F(N+1)
Next, recall that F(i) ≥ φ^(i-2), where φ is the ‘golden ratio’ and equals (1 + √5)/2 ≈ 1.618. We can then deduce that: b ≥ F(N+1) ≥ φ^(N-1)
Finally, we can solve for N: N ≤ 1 + log_φ(b) = O(log b)
So, the number of steps will always be less than 1 + log_φ(b), where b is the smaller of our two input values. Therefore, O(log b) is an upper bound on the worst-case cost of our algorithm.
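A quick experiment illustrates the bound: consecutive Fibonacci numbers force the maximum number of division steps for their size, while a non-Fibonacci pair of similar magnitude finishes sooner. The step-counting helper below is ours, for illustration:

```python
def count_division_steps(a: int, b: int) -> int:
    # Count iterations of the assignment (a, b) = (b, a mod b).
    steps = 0
    while b != 0:
        a, b = b, a % b
        steps += 1
    return steps

# F(12) = 144 and F(11) = 89 are consecutive Fibonacci numbers.
print(count_division_steps(144, 89))   # 10
print(count_division_steps(144, 100))  # 5
```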
It’s worth noting that the smaller value doesn’t need to be fed as the second argument to our algorithm. That is, we can execute gcd(a, b) where a < b. In this case, our algorithm just performs one additional step at the beginning that effectively swaps the two values. Think about why this might be the case: what is the result of a mod b when a < b? Since this additional step adds only constant time, our worst-case bound is unaffected.
5. Conclusion
In this article, we implemented two variations of Euclid’s Algorithm to find the greatest common divisor of two positive integers. We then analyzed the complexity of each solution.