1. Introduction
In calculus, monotonic functions are those that either never decrease or never increase as their input values increase.
In this tutorial, we’ll define monotonic functions and show how they’re used in computer science.
2. What Is a Monotonic Function?
Let’s define a variable $x \in \mathbb{R}$. A monotonic function $f$, defined as $f: \mathbb{R} \to \mathbb{R}$, is a function whose increment $f(x_2) - f(x_1)$ is either nonnegative or nonpositive for any $x_1$ and $x_2 > x_1$. A function is strictly monotonic if $f(x_2) - f(x_1)$ is either positive or negative. So, a monotonic function can be constant, but a strictly monotonic function can’t.
There are four types of monotonic functions:
If $f(x_2) > f(x_1)$ for all $x_2 > x_1$, the function is strictly increasing. Its values only increase as the input increases.
Likewise, if $f(x_2) < f(x_1)$ for all $x_2 > x_1$, the function is strictly decreasing. This type is the opposite of strictly increasing.
If $f(x_2) \geq f(x_1)$ for $x_2 > x_1$, we say the function is increasing (also called non-decreasing). Similarly, if $f(x_2) \leq f(x_1)$, we say the function is decreasing (also called non-increasing).
Some authors drop the qualifier “strictly” for strictly increasing and strictly decreasing functions and use “non-increasing” and “non-decreasing” for monotonic functions that can be constant.
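To make the four cases concrete, here’s a minimal Python sketch (the function name classify_monotonicity is ours) that classifies a function from its values sampled at increasing inputs. Note that it checks monotonicity only on the sampled points, not over the whole domain:

```python
def classify_monotonicity(values):
    """Classify a sequence of function values sampled at increasing inputs."""
    increments = [b - a for a, b in zip(values, values[1:])]
    if all(d > 0 for d in increments):
        return "strictly increasing"
    if all(d < 0 for d in increments):
        return "strictly decreasing"
    if all(d >= 0 for d in increments):
        return "increasing (non-decreasing)"
    if all(d <= 0 for d in increments):
        return "decreasing (non-increasing)"
    return "not monotonic"

print(classify_monotonicity([1, 2, 2, 3]))  # increasing (non-decreasing)
print(classify_monotonicity([5, 3, 1]))     # strictly decreasing
print(classify_monotonicity([1, 4, 2]))     # not monotonic
```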
3. Applications of Monotonic Functions in Computer Science
3.1. Application in Sorting
Sorting algorithms play a crucial role in computer science, and some of them rely on comparison functions. A monotonic comparison function maps pairs of elements from the data set to a boolean value in a way that is monotonic with respect to the desired ordering relation.
So, if $f$ is our comparison function, monotonicity means that if $f(a, b)$ returns true (i.e., $a \leq b$), then for any other element $c \leq a$, it must hold that $f(c, b)$ is also true. This guarantees the transitivity of the ordering.
Although we defined monotonicity for one-variable functions, a similar property holds for multivariable functions such as two-argument comparison functions.
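As a sketch of this idea (the helper names leq and three_way are ours, not from any standard library), the snippet below wraps a boolean comparison function for Python’s sorted() and spot-checks the monotonicity property on a few values:

```python
from functools import cmp_to_key

def leq(a, b):
    """Boolean comparison function: True exactly when a precedes b."""
    return a <= b

def three_way(a, b):
    """Adapt the boolean comparator to the -1/0/+1 form cmp_to_key expects."""
    if leq(a, b) and leq(b, a):
        return 0
    return -1 if leq(a, b) else 1

data = [4, 1, 3, 2]
print(sorted(data, key=cmp_to_key(three_way)))  # [1, 2, 3, 4]

# Monotonicity spot-check: leq(a, b) and c <= a together imply leq(c, b).
a, b, c = 3, 7, 1
assert leq(a, b) and leq(c, a) and leq(c, b)
```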
3.2. Applications in Optimization
Monotonicity is useful in solving optimization problems. When a function always decreases or always increases with changes in a variable, we know how to change that variable to minimize or maximize the function.
Machine learning offers an example of this with the widely used gradient descent optimization algorithm. This algorithm iteratively adjusts model parameters to minimize a cost (objective) function. The cost function, such as the mean squared error in regression problems, may not be monotonic globally. However, gradient-based algorithms assume that the cost is monotonic on relatively small subspaces of the search space. Gradient descent takes advantage of this by evaluating the direction of steepest descent of the cost function. By following this direction in each iteration, the algorithm moves toward parameter values that minimize the cost function, effectively getting closer to the optimal solution.
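As a minimal sketch of this idea (the data, learning rate, and iteration count are illustrative choices, not canonical values), here’s gradient descent fitting a one-parameter model $y = wx$ by minimizing the mean squared error:

```python
# Toy data generated by the true parameter w = 2.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]

w = 0.0
learning_rate = 0.05
for _ in range(200):
    # Gradient of the mean squared error: d(MSE)/dw = (2/n) * sum((w*x - y) * x).
    grad = 2 * sum((w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # Step against the gradient, i.e., in the direction of steepest descent.
    w -= learning_rate * grad

print(round(w, 4))  # converges to roughly 2.0
```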
4. Complexity Analysis
In computer science, algorithms are our primary focus. We’re particularly interested in functions that describe how an algorithm’s resource consumption (output) scales with the input size ($n$). This property agrees perfectly with increasing monotonic functions, as they formalize how larger inputs result in greater resource demands.
The Big O notation is used to represent common time complexity functions, such as constant ($O(1)$), logarithmic ($O(\log n)$), linear ($O(n)$), linearithmic ($O(n \log n)$), quadratic ($O(n^2)$), and exponential ($O(2^n)$) time. All these functions, except for the constant, are increasing.
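The short Python sketch below samples a few of these functions at growing input sizes and verifies that each resulting sequence is non-decreasing (the dictionary of functions is our illustration):

```python
import math

# Common complexity functions; all except the constant grow with n.
complexities = {
    "O(1)":       lambda n: 1,
    "O(log n)":   lambda n: math.log2(n),
    "O(n)":       lambda n: n,
    "O(n log n)": lambda n: n * math.log2(n),
    "O(n^2)":     lambda n: n ** 2,
    "O(2^n)":     lambda n: 2 ** n,
}

for name, f in complexities.items():
    values = [f(n) for n in (1, 2, 4, 8, 16)]
    # Each sequence is non-decreasing, i.e., the function is increasing.
    assert all(a <= b for a, b in zip(values, values[1:])), name
    print(f"{name:10} {[round(v, 1) for v in values]}")
```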
Space complexity deals with the amount of memory an algorithm uses as the input size increases. We can describe it with the same functions we use for time complexity.
5. Conclusion
In this article, we discussed monotonic functions and their applications in computer science. They help us analyze the time and space complexity of algorithms and are useful in sorting and optimization.