# Beginner's guide to Big-O Notation

## Marty Jacobs · Feb 19, 2019 00:00 · 1432 words · 7 minute read

Why learn Big-O? Not only is it important for coding interviews at companies like Google or Microsoft, it also gives you a method to *compare algorithms*. Big-O classifies algorithms by how their running time or space requirements grow as the input grows. Want to know how **scalable** an algorithm is, based on the amount of input it receives? Big-O has your answers.

## What is Big-O?

Big-O is a mathematical notation used to express the upper bound of an algorithm. What the heck is an upper bound? Let’s take an array, for example: if we iterate through the whole array to find an item, in the worst case we will perform `n` operations, and therefore we have an upper bound of `n`. So we can say a linear search (searching through an entire array) has `O(n)` complexity. Depending on the input size of the array, this could take a **very long time.**

Can we do better than `n`? Yeah, maybe… If we use a Hash Table, or Hash Map, the time to search could be reduced to `O(1)` complexity. This makes for a much more scalable solution. With an upper bound of `O(1)`, your algorithm will perform better than `O(n)` as the size of the input becomes larger.
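To make the comparison concrete, here is a minimal sketch (the class name `LookupDemo` and the sample data are made up for illustration) contrasting a linear scan, which is `O(n)`, with a `HashMap` lookup, which is `O(1)` on average:

```java
import java.util.HashMap;
import java.util.Map;

public class LookupDemo {
    public static void main(String[] args) {
        String[] names = { "Ada", "Grace", "Alan" };

        // Linear search: worst case scans every element, so O(n)
        boolean found = false;
        for (String name : names) {
            if (name.equals("Alan")) {
                found = true;
                break;
            }
        }

        // HashMap lookup: average-case constant time, O(1)
        Map<String, Boolean> seen = new HashMap<>();
        for (String name : names) {
            seen.put(name, true);
        }

        System.out.println(found + " " + seen.containsKey("Alan"));
    }
}
```

Both approaches find the item here, but only the linear scan gets slower as the array grows.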

## How to use Big-O

We know that it provides the upper bound of an algorithm. But how is this useful? Well, think about it like diagnosing an algorithm. We first find the Big-O to diagnose its upper bound. Then, we can tweak the algorithm to improve its running time or space requirements. For example, taking an algorithm down from `O(n)` to `O(1)` is a significant scalability improvement.

What are we missing? **How to find the Big-O of an algorithm.** To do this, there is a method we can follow; it is really quite straightforward. Let’s take, for example, the algorithm below:

```
/**
 * Counts the number of items in the array.
 */
public int countArray(String[] array) {
    int counter = 0;                           //1
    for (int i = 0; i < array.length; i++) {   //2
        counter++;                             //3
    }
    return counter;                            //4
}
```

You will notice comments //1 through //4 above; this is how we break down an algorithm to find its Big-O. We count the number of computational operations for each line of execution. Every line that does not depend on the size of the input data is performed in constant time, i.e. `O(1)`. Line //1 simply initialises the variable. This is 1 computational step and can be performed in constant time.

Now we reach the for loop… the performance of the loop depends on the size of the input, i.e. the size of the **array**. The for loop simply counts each item in the array. The array could be really, really large, like millions of items, which would require millions of operations. Therefore, the upper bound for this loop is `O(n)`, where `n` is the number of items.

Lastly, the function returns the total count of the array on line //4. This operation is performed in `O(1)` constant time. So now we can deduce the Big-O of the algorithm by:

- Disregarding all the `O(1)` constant operations, and
- Taking the maximum Big-O of any step

Therefore, in this case, the maximum Big-O and upper bound is `O(n)` (as we had a single for loop performing a linear search).

## Optimising functions with Big-O

The final step… we found previously that the algorithm has an upper bound of `O(n)`. But can we do better? Can tweaking our function improve its performance?

Let’s have a look…

How about we simply return `array.length`? This cuts our operation costs down from `O(n)` to `O(1)`…

```
/**
 * Counts how many items are in the array.
 */
public int countArray(String[] array) {
    return array.length; //1
}
```

In this optimisation, we have removed the entire loop, and with it *the need* to search through the entire array (at every index). `array.length` simply accesses a field of the array, taking constant time `O(1)` to perform the operation. We have taken an algorithm that performed at an upper bound of linear time and made it perform at an upper bound of constant time!

## More Examples

Great! We are now over the hurdle of knowing **what** Big-O actually is. We looked at some examples: common algorithms which perform in `O(1)` constant time and `O(n)` linear time. Now let’s dive deeper and look into a couple more examples using logarithmic and exponential complexities.

### O(log n)

*Quick catch-up* - Logarithmic growth is simply the inverse of exponential growth: lots of growth in the beginning, slowing down over time. Think of a news article covering a current event. On the day, lots of shares and views, but as time goes on, the rate of new views steadily drops off.

```
int i = 1;
while (i < n) {
    i = i * 2; // i doubles every iteration
}
```

The code snippet has an upper bound of `O(log n)`, as the value of `i` doubles on every iteration of the loop. If `n` is `100000000`, the first few iterations barely move `i` towards `n`, but each doubling covers as much ground as all the previous iterations combined: once `i = 50000000`, only 1 more iteration is required to finish the loop. This is a prime example of a scalable solution: as the input grows larger, an `O(log n)` algorithm barely slows down.
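To see just how few iterations this takes, here is a small driver (the class name `LogDemo` is just for illustration) that counts the doublings needed to reach 100 million:

```java
public class LogDemo {
    public static void main(String[] args) {
        long n = 100_000_000L;
        long i = 1;
        int iterations = 0;
        while (i < n) {
            i = i * 2;     // i doubles every pass
            iterations++;
        }
        // log2(100,000,000) is just under 27, so the loop
        // finishes after only 27 doublings.
        System.out.println(iterations);
    }
}
```

Only 27 iterations for an input of 100 million — that is the power of a logarithmic upper bound.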

### O(n log n)

```
/**
 * Finds a pair of elements that sum to zero, or returns null.
 */
public int[] findPair_2(int[] A) {
    Arrays.sort(A); // 1
    for (int i = 0, l = A.length; i < l; i++) {
        int j = Arrays.binarySearch(A, i + 1, l, -A[i]); //2
        if (j > i) return new int[] { A[i], A[j] }; //3
    }
    return null; //4
}
```
```

Line 1 - the Java `Arrays` function `sort()`; from the docs:

“The sorting algorithm is a modified mergesort (in which the merge is omitted if the highest element in the low sublist is less than the lowest element in the high sublist). This algorithm offers guaranteed `O(n log n)` performance.”

Line 2 - We have a binary search, which has an upper bound of `O(log n)`. However, since we are performing a binary search on **every iteration of the for loop**, the overall algorithm complexity is `O(n log n)`, where `n` is the input size of the array and `log n` is the upper bound for the binary search.

Line 3 - We have an operation that can be performed in constant time, `O(1)`.

Line 4 - We have an operation that can be performed in constant time, `O(1)`.

Now we can deduce the Big-O for the algorithm:

- Disregarding all the `O(1)` constant operations, and
- Taking the maximum Big-O of any step

We reach the end result: the algorithm has an upper bound of `O(n log n)`.
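As a quick check that the function behaves as described, here is a small self-contained driver (the class name `PairDemo` and the sample input are made up for illustration) wrapping the `findPair_2` function from above:

```java
import java.util.Arrays;

public class PairDemo {
    // findPair_2 as shown in the article: sort the array (O(n log n)),
    // then binary-search for each element's negation (n searches of O(log n)).
    static int[] findPair_2(int[] A) {
        Arrays.sort(A); // 1
        for (int i = 0, l = A.length; i < l; i++) {
            int j = Arrays.binarySearch(A, i + 1, l, -A[i]); //2
            if (j > i) return new int[] { A[i], A[j] }; //3
        }
        return null; //4
    }

    public static void main(String[] args) {
        // -3 and 3 sum to zero, so the function should find that pair.
        int[] pair = findPair_2(new int[] { 4, -3, 7, 3, 1 });
        System.out.println(Arrays.toString(pair));
    }
}
```

Note that when `binarySearch` does not find the target it returns a negative insertion point, so the `j > i` check quietly rejects those misses.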

### O(n²)

```
public void printBlog(int[] n) {
    for (int i = 0; i < n.length; i++) {
        for (int j = 0; j < n.length; j++) {
            System.out.println("Zero Equals False");
        }
    }
}
```
```

The above solution has an upper bound of O(n²). It has an outer loop and an inner loop, both iterating over the input size `n`. The inner loop runs `n` times for each of the `n` outer iterations, i.e. `n` times, `n` times. Therefore, the algorithm has an upper bound of O(n²).

### O(2ⁿ)

*Quick catch-up* - Exponential growth occurs whenever a quantity grows at a rate proportional to its current value. You can think of it like a movie or song climbing the top-10 charts on release. It starts off steadily growing, then boom! It hits virality and grows *exponentially*; everyone is downloading it.

```
public void solveHanoi(int N, String fromPeg, String toPeg, String sparePeg) {
    if (N < 1) {
        return;
    }
    if (N > 1) {
        solveHanoi(N - 1, fromPeg, sparePeg, toPeg); // first smaller problem
    }
    System.out.println("move from " + fromPeg + " to " + toPeg);
    if (N > 1) {
        solveHanoi(N - 1, sparePeg, toPeg, fromPeg); // second smaller problem
    }
}
```

2ⁿ is often the upper bound of recursive solutions that split an input of size `N` into **two** smaller sub-problems of size `N - 1`, as the Towers of Hanoi solution above does. Often, having an exponential upper bound is undesirable. However, if it’s the only way, then it’s the only way.
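You can count the moves without printing them: each call of size `N` makes two calls of size `N - 1` plus one move, which works out to 2ᴺ − 1 moves in total. A minimal sketch (the class name `HanoiDemo` and the helper `moves` are made up for illustration):

```java
public class HanoiDemo {
    // Total moves to solve Hanoi of size n:
    // two sub-problems of size n - 1, plus one move of the largest disc.
    static long moves(int n) {
        if (n < 1) {
            return 0;
        }
        return 2 * moves(n - 1) + 1; // unrolls to 2^n - 1
    }

    public static void main(String[] args) {
        // 5 discs take 31 moves; 20 discs already take over a million.
        System.out.println(moves(5) + " " + moves(20));
    }
}
```

Going from 5 discs to 20 multiplies the work by more than 30,000 — that is what exponential growth means in practice.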

Hey, thanks for reading! I hope you enjoyed the article, and don’t forget to share it!

Want to get serious about Algorithms and Data structures? We recommend picking up a copy of Introduction to Algorithms (The MIT Press) for a more extensive guide into improving your algorithms!

Happy Coding