An Intro to Time Complexity & Big-O Notation
A beginner’s guide to assessing and describing algorithm run time.
You’ve started a course/book/class/whatever on data structures and algorithms, and your brain is melting. Before you can even get to the meat — or the mushroom — you’re hit with “big-o notation” this and “time/space complexity” that. Here’s my attempt to break it down into “I-don’t-have-a-computer-science-degree-or-a-math-brain” terms.
What the [insert expletive] is time complexity and big-O notation?
Time complexity is the amount of time taken for an algorithm to run, measured as a function of the number of operations performed to complete the task. The fewer the operations, the more efficient the algorithm in terms of time complexity. The number of operations, in turn, depends on the input data size (n).
Big-O notation describes the run time of an algorithm in terms of how the number of operations grows in relation to the input size (n). The notation is written O(n), where O stands for the order of growth and n is the input size.
Sounds cool, but how do we evaluate how fast an algorithm runs?
Let’s start with an algorithm to perform the following: Given positive integer n, print out all integers from 1 to n.
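In JavaScript, a minimal sketch of this algorithm might look like the following (the function name is my own choice):

```javascript
// Print every integer from 1 to n.
// The loop body runs exactly n times, so the number of
// print statements grows linearly with n.
function printOneToN(n) {
  for (let i = 1; i <= n; i++) {
    console.log(i);
  }
}

printOneToN(3); // prints 1, 2, 3
```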
Consider the input. What happens when n is huge?
If n = 1, 1 number is printed.
If n = 10, 10 numbers are printed.
If n = 100, 100 numbers are printed.
As n increases, so does the number of print statements. In this case, the number of print statements increases linearly with n. In big-O notation, this algorithm runs with O(n) time complexity, where n is the input integer.
Let’s add a print statement before the loop. How does that change the time complexity?
Here’s our updated algorithm:
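A sketch of what that could look like (the greeting text and function name are placeholders of my own):

```javascript
// One extra print statement before the loop: n + 1 prints in total,
// but the constant "+ 1" doesn't change the growth rate.
function printOneToNWithGreeting(n) {
  console.log("Counting up!"); // runs once, regardless of n
  for (let i = 1; i <= n; i++) {
    console.log(i);
  }
}
```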
It’s the same O(n). But why? The new line of code is not dependent on input size. The number of print statements that occur could be expressed as n + 1, but when expressed in big-O notation (remember it’s a generalization), we care only about the most dominant part of the algorithm. In this case, only the loop is dependent on the input size.
Note: BIG-O SHOULD DESCRIBE THE MOST DOMINANT PART OF THE ALGORITHM*
Time to mix it up again.
How does the time complexity change if we add a loop to print number 1 to n a second time?
Given positive integer n, print out all integers from 1 to n. Twice.
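One way to sketch this in JavaScript (function name my own):

```javascript
// Two back-to-back loops: n + n = 2n prints in total.
function printOneToNTwice(n) {
  for (let i = 1; i <= n; i++) {
    console.log(i);
  }
  for (let i = 1; i <= n; i++) {
    console.log(i);
  }
}
```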
Now there are n + n number of print statements, or 2n. The number of print statements is still increasing linearly with n, so the big-O of this algorithm is still O(n).
Note: DROP THE COEFFICIENTS**
What is the time complexity of the following summation algorithm?
Given a positive integer, return the sum from 1 to n.
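A straightforward loop-based version might look like this (a sketch; the function name is my own):

```javascript
// Sum the integers from 1 to n by visiting each one: n additions.
function sumToN(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i;
  }
  return total;
}

sumToN(100); // returns 5050
```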
O(n) again. OK — this is the last time I’ll do that, I promise! So, why does it also have a time complexity of O(n)? The loop runs n times, adding the value of the current number to the total for all numbers from 1 to n.
How do we make it more time-efficient?
The following code also returns the sum from 1 to n.
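One constant-time approach uses the closed-form formula n(n + 1) / 2 (a sketch under that assumption; the function name is my own):

```javascript
// Gauss's formula: 1 + 2 + ... + n = n(n + 1) / 2.
// One multiplication, one addition, one division,
// no matter how large n gets.
function sumToNConstant(n) {
  return (n * (n + 1)) / 2;
}

sumToNConstant(100); // returns 5050
```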
The above algorithm has a constant time complexity of O(1) because it performs the same number of operations regardless of the input size.
Code with O(1) time complexity
1. Declaring a variable
const variable = 100;
2. A print statement
console.log("hello world");
3. A loop that runs a constant number of times
for (let i = 0; i < 3; i++) {
  console.log(i);
}
The last example is coming right up!
Given n that represents the length of a square grid, print all the coordinates of the grid.
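A minimal JavaScript sketch of the nested loops (names my own):

```javascript
// Print every coordinate of an n x n grid.
// The inner loop runs n times for each of the outer loop's
// n iterations: n * n prints in total.
function printGridCoordinates(n) {
  for (let row = 1; row <= n; row++) {
    for (let col = 1; col <= n; col++) {
      console.log(`(${row}, ${col})`);
    }
  }
}
```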
What is the time complexity of this algorithm? We have a loop within a loop which means if n = 4, the print will execute n x n times or 16 times. The big-O of this algorithm is O(n^2). The time complexity is scaling quadratically. In other words, for every iteration of the outer loop, we need to iterate another n number of times (the inner loop).
We’ve looked at constant, O(1), linear, O(n), and quadratic, O(n^2) time complexity (fastest to slowest). Here are some other runtimes, from slowest to fastest:
- O(n!) - factorial
- O(2^n) - exponential
- O(n^2) - quadratic
- O(n log n) - linearithmic
- O(n) - linear
- O(log n) - logarithmic
- O(1) - constant
Maybe we’ll get to those in future posts. Thanks for reading!
The Little Extras
*The most dominant part of the algorithm determines its asymptotic behavior. As the input size grows, the lower-order terms (like the + 1 in n + 1) contribute less and less to the total, so n and n + 1 grow at essentially the same rate.
**Coefficient: the numeric factor of a term, e.g., the coefficient of 3x is 3, the coefficient of 2n is 2.