Even though you have seen many tech interview exercises already, we have not covered algorithmic complexity yet. Therefore, I will give you a straight-to-the-point explanation of the big `O` notation.

Big O is a performance metric measuring the *worst case complexity* of an algorithm. Suppose `N` is the size of the input. Let's see some examples of the complexities considered in this article.

`O(1)`
: constant complexity

`O(N)`
: linear complexity

`O(N ** 2)`
: n-squared, quadratic complexity

`O(2 ** N)`
: exponential complexity
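As a rough illustration of the first three classes, here are some tiny helpers (the names `first`, `sum`, and `countPairs` are my own, chosen for this sketch):

```javascript
// O(1): one step regardless of input size.
function first( array ) {
    return array[0];
}

// O(N): a single pass over the input.
function sum( array ) {
    let total = 0;
    for ( let value of array ) total += value;
    return total;
}

// O(N ** 2): a nested pass visiting every ordered pair.
function countPairs( array ) {
    let count = 0;
    for ( let i = 0; i < array.length; ++i ) {
        for ( let j = i + 1; j < array.length; ++j ) {
            ++count;
        }
    }
    return count;
}
```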

### Defining Big O

Suppose `N` is the length of the input.

`O(N)` complexity means that there exists a finite constant `c` for which the number of steps taken by your solution is less than `c * N`, assuming that `c` is fixed and `N` is arbitrarily large.

`O(N ** 2)` (`N` squared) complexity means that there exists a finite constant `c` for which the number of steps taken by your solution is less than `c * (N ** 2)`, assuming that `c` is fixed and `N` is arbitrarily large.

This is not the formal definition of the big O notation, but it is good enough for now. You may or may not understand why I stressed the “assuming that `c` is fixed and `N` is arbitrarily large” part, so let’s shed some light on this.

We are defining an upper bound for the number of steps we allow the algorithm to take. Regardless of how large `c` is, once `N` becomes large, `c` becomes negligible when it comes to determining the magnitude of the solution. It is nothing but a constant multiplier. What matters is whether the algorithm scales *linearly*, *quadratically*, or *exponentially*.

### How do algorithms scale?

`O(2 ** N)` complexity means that there exists a finite constant `c` for which the number of steps taken by your solution is less than `c * (2 ** N)`, assuming that `c` is fixed and `N` is arbitrarily large.

There is one more interesting complexity: `O( N * log(N) )`. I suggest that you memorize that the best comparison-based sorting algorithms have `N * log( N )` time complexity. So when you call `array.sort();` in JavaScript, you know that your algorithm has at best `N * log( N )` complexity.
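One caveat worth knowing when you reach for `array.sort()`: by default it compares elements as strings, so numeric arrays need a comparator. A small sketch:

```javascript
const values = [ 10, 9, 100, 1 ];

// Default sort is lexicographic: numbers are compared as strings.
const lexicographic = [ ...values ].sort();

// A numeric comparator restores the expected order.
const numeric = [ ...values ].sort( ( a, b ) => a - b );

console.log( lexicographic ); // [ 1, 10, 100, 9 ]
console.log( numeric );       // [ 1, 9, 10, 100 ]
```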

Let’s see how these four complexities scale:

| `N`  | `N*log(N)` | `N**2`  | `2**N`     |
|-----:|-----------:|--------:|-----------:|
| 10   | 23.02585   | 100     | 1024       |
| 100  | 460.51701  | 10000   | 1.267e+30  |
| 1000 | 6907.75527 | 1000000 | 1.071e+301 |
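If you want to reproduce these values yourself, note that the logarithm used here is the natural logarithm, which is what `Math.log` computes:

```javascript
// Reproduce the scaling table row by row.
for ( let n of [ 10, 100, 1000 ] ) {
    console.log( n, n * Math.log( n ), n ** 2, 2 ** n );
}
```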

You should get a feel for why complexity matters. Take just 1000 elements. A linear or an `N * log(N)` solution is quite fast compared to the quadratic `N ** 2` solution. The exponential solution is horrible even for an input of length 100. For an input of length 1000, ten to the power of 301 is a far larger number than anything we can compute within a limited amount of time.

This is why complexity analysis matters. We have to know in advance how good our algorithm is.

### Time and Space Complexity

When it comes to coding exercises, we deal with two types of complexities:

- time complexity,
- space complexity.

The time complexity of an algorithm determines the number of steps taken by the algorithm, measured with respect to `N`, the size of the input.

The space complexity of an algorithm determines the amount of space required by the algorithm to execute, measured with respect to `N`. Note that the input does not count when measuring space complexity. Therefore, if you have an array of length `N` as an input, and you only create two number variables, your solution has constant space complexity. This means there is a fixed constant `c` that acts as an upper bound on the space required by your algorithm, regardless of how large `N` is. Constant complexity is denoted by `O(1)`.

`O(1)` space complexity and `O(N)` time complexity:

```javascript
function getMaximum( array ) {
    let max = -Infinity;
    for ( let value of array ) {
        if ( value > max ) max = value;
    }
    return max;
}
```

The input array can be arbitrarily large. While computing the maximum value, we only created one variable. Therefore, we only used constant space.

The time complexity is `O(N)`, because we performed an operation with each value in the array.

It does not matter how many times we iterate on the array. For instance, consider the following code:

```javascript
function getMinMaxRange( array ) {
    let max = -Infinity;
    let min = Infinity;
    for ( let value of array ) {
        if ( value > max ) max = value;
    }
    for ( let value of array ) {
        if ( value < min ) min = value;
    }
    return max - min;
}
```

Even though we iterated on the array twice, the complexity of our algorithm is still linear.

Remember, `O(N)` complexity means that the number of steps needed for the completion of the algorithm is less than `c * N`, where `c` is finite and fixed, and `N` is arbitrarily large.

As long as we have a constant number of iterations, `c` stays finite regardless of how big our input is.

Note that the above solution is not optimal in terms of coding style, as we could have simply written the contents of the two loops into one loop. We used this example for illustration purposes.
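For completeness, the single-loop version could look like this (the name `getRange` is my own; same `O(N)` time and `O(1)` space bounds):

```javascript
function getRange( array ) {
    let max = -Infinity;
    let min = Infinity;
    // One pass is enough: update both bounds for each element.
    for ( let value of array ) {
        if ( value > max ) max = value;
        if ( value < min ) min = value;
    }
    return max - min;
}
```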

### Polynomial and Exponential Time Complexity

`O( 2 ** N )` is a complexity we avoid unless it is absolutely necessary.

We normally look for algorithms that run in *polynomial time*. Polynomial time means that there is a polynomial that, multiplied by a fixed constant, overestimates the number of steps.

Suppose our polynomial is

```javascript
5 * (N ** 4) + 2 * (N ** 3) - 4 * (N ** 2) + 9 * N + 4
```

Once `N` becomes arbitrarily large, all terms except `5 * (N ** 4)` become negligible, and the constant multiplier `5` is dropped as well. The complexity of the algorithm becomes `O(N ** 4)`. You have to look at the highest-powered term in the polynomial.
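A quick numeric check of this claim: the ratio of the full polynomial to its leading term tends to 1 as `N` grows, so the lower-order terms really do fade away.

```javascript
// The full polynomial divided by its leading term approaches 1.
const p = n => 5 * n ** 4 + 2 * n ** 3 - 4 * n ** 2 + 9 * n + 4;

console.log( p( 10 ) / ( 5 * 10 ** 4 ) );     // ≈ 1.034
console.log( p( 1000 ) / ( 5 * 1000 ** 4 ) ); // ≈ 1.0004
```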

What does an `O(N ** 4)` algorithm look like? Simple. Four nested loops:

```javascript
for ( let i = 0; i < array.length; ++i ) {
    for ( let j = i; j < array.length; ++j ) {
        for ( let k = j; k < array.length; ++k ) {
            for ( let l = k; l < array.length; ++l ) {
                // O(N ** 4) algorithm
            }
        }
    }
}
```

Four nested loops still run a lot faster than an exponential algorithm.

Suppose we would like to create all possible subsets of a set represented by an array:

```javascript
function getSubsets( [head, ...tail] ) {
    // The empty set has exactly one subset: the empty set itself.
    if ( typeof head === 'undefined' ) return [ [] ];
    let tailSubs = getSubsets( tail );
    // Copy each subset instead of mutating it, so tailSubs stays intact.
    let tailSubsWithHead = tailSubs.map( list => [ head, ...list ] );
    return [ ...tailSubsWithHead, ...tailSubs ];
}
```

This algorithm has exponential time and space complexity, because for `N` values, there are `2 ** N` possible combinations of arrays:

- the first element is either in the resulting array or not. That’s 2 possibilities.
- the second element is either in the resulting array or not. That’s 2 possibilities.
- …
- the `N`th element is either in the resulting array or not. That’s 2 possibilities.

In total, we have `2 * 2 * ... * 2` possibilities, where we multiply `2` by itself `N` times. This product is equal to `2 ** N`.
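You can verify the `2 ** N` count with an independent iterative construction (a sketch of my own, not the recursive version above; the name `allSubsets` is hypothetical):

```javascript
// Iterative subset construction: each element doubles the collection,
// once without the element and once with it appended.
function allSubsets( array ) {
    let subsets = [ [] ];
    for ( let value of array ) {
        subsets = [ ...subsets, ...subsets.map( s => [ ...s, value ] ) ];
    }
    return subsets;
}

console.log( allSubsets( [ 1, 2, 3 ] ).length ); // 8, i.e. 2 ** 3
```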

The exponential algorithm is a lot worse than the `O(N ** 4)` algorithm.

In general, polynomial time algorithms are applauded, while exponential algorithms are avoided whenever possible.

In most interviews, even among polynomial time algorithms, it matters a lot whether you deliver an `O(N ** 2)`, an `O(N * log(N))`, or an `O(N)` solution.

`O(N * log(N))` contains a logarithm. Why is it polynomial time?

Because `O(N * log(N))` can be overestimated by `O(N ** 2)`, and the latter is polynomial.
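A quick sanity check of this overestimate: for `N >= 1` we have `log(N) <= N`, so `N * log(N) <= N ** 2`.

```javascript
// log(N) grows much slower than N, so N * log(N) is bounded by N ** 2.
for ( let n = 1; n <= 1e6; n *= 10 ) {
    console.assert( n * Math.log( n ) <= n ** 2 );
}
```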

### Summary

There is nothing hard about the big O notation. All you need to do is run through this article a few times and understand the terminology.

Now that you know algorithmic complexity, you can start solving interview exercises with higher confidence.