Big O

Big O time is the language and metric we use to describe the efficiency of algorithms.

⏱️ Time complexity

O(n): the time to do the task increases linearly with n.
O(1): the time to do the task is constant, regardless of n.
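For instance (an illustrative sketch; the method names are my own): reading the first element of an array takes the same time no matter how long the array is, O(1), while summing every element scans each one, O(n).

```java
class TimeComplexity {
    // O(1): a single operation, independent of the array's length.
    static int first(int[] arr) {
        return arr[0];
    }

    // O(n): touches every element once, so time grows linearly with n.
    static int sumAll(int[] arr) {
        int total = 0;
        for (int x : arr) {
            total += x;
        }
        return total;
    }

    public static void main(String[] args) {
        int[] arr = {3, 1, 4, 1, 5};
        System.out.println(first(arr));   // 3
        System.out.println(sumAll(arr));  // 14
    }
}
```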

💡 Best Case, Worst Case and Expected Case

We can describe a runtime in three ways: the best case, the worst case, and the expected case. For many algorithms the worst and expected cases are the same, and the best case is rarely interesting.

Example: Quick Sort. If the pivot splits the array evenly every time, quick sort runs in O(n log n) (best case). If the pivot is repeatedly the smallest or largest element, each partition only shrinks the problem by one element and the runtime degrades to O(n²) (worst case). On typical input the expected runtime is O(n log n).
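A minimal quick sort sketch (my own illustrative implementation, using the simple "last element as pivot" rule, which is exactly the choice that triggers the worst case on already-sorted input):

```java
import java.util.Arrays;

class QuickSort {
    // Lomuto-style partition: everything smaller than the pivot is
    // swapped to the front, then the pivot lands between the halves.
    static void quickSort(int[] arr, int lo, int hi) {
        if (lo >= hi) return;
        int pivot = arr[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (arr[j] < pivot) {
                int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
                i++;
            }
        }
        int tmp = arr[i]; arr[i] = arr[hi]; arr[hi] = tmp;
        quickSort(arr, lo, i - 1);  // sort left of pivot
        quickSort(arr, i + 1, hi);  // sort right of pivot
    }

    public static void main(String[] args) {
        int[] data = {5, 2, 9, 1, 5, 6};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 2, 5, 5, 6, 9]
    }
}
```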

💾 Space Complexity

As well as time complexity, we might also care about the amount of memory or space required by an algorithm.

Example 1

The following code would take O(n) time and O(n) space.

int sum(int n) { /* Ex 1.*/
 if (n <= 0) {
  return 0;
 }
 return n + sum(n-1);
}

Each call adds a level to the stack.

   sum(4)
	-> sum(3)
		-> sum(2)
			-> sum(1)
				-> sum(0)

Each of these calls is added to the call stack and takes up actual memory.

Example 2

However, just because you have n calls total, doesn't mean it takes O(n) space.

int pairSumSequence(int n) { /* Ex 2.*/
 int sum = 0;
 for (int i = 0; i < n; i++) {
  sum += pairSum(i, i + 1);
 }
 return sum;
}

int pairSum(int a, int b) {
 return a + b;
}

There will be roughly n calls to pairSum. However, those calls do not exist simultaneously on the call stack, so you only need O(1) space.

❌ Drop the constants

Big O just describes the rate of increase, so we drop the constants in the runtime: O(2n) becomes O(n).
Still, one O(n) algorithm might be much faster than another in practice; constant factors matter at runtime, and we often can't predict how the compiler will optimize operations. Big O simply doesn't capture that.

Example
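For instance (a sketch of the classic one-pass vs. two-pass comparison; the method names are my own): both of these find the min and max of an array. The first does "2n" work, the second "n" work, but both are O(n) because the growth rate is the same.

```java
class DropConstants {
    // Two passes over the array: "O(2n)" work, but still O(n).
    static int[] minMaxTwoPasses(int[] arr) {
        int min = arr[0], max = arr[0];
        for (int x : arr) if (x < min) min = x;
        for (int x : arr) if (x > max) max = x;
        return new int[]{min, max};
    }

    // One pass: fewer iterations in practice, same O(n) growth rate.
    static int[] minMaxOnePass(int[] arr) {
        int min = arr[0], max = arr[0];
        for (int x : arr) {
            if (x < min) min = x;
            if (x > max) max = x;
        }
        return new int[]{min, max};
    }

    public static void main(String[] args) {
        int[] data = {3, 7, 1, 9, 4};
        System.out.println(minMaxTwoPasses(data)[0]); // 1
        System.out.println(minMaxOnePass(data)[1]);   // 9
    }
}
```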

Add vs. Multiply

When do you multiply the runtimes, and when do you add them?

Add the Runtimes: O(A+B)

for (int a : arrA) {
	print(a);
}
for (int b: arrB) {
	print(b);
}

Multiply the Runtimes: O(AB)

for (int a : arrA) {
	for (int b : arrB)
		print(a + "," + b);
}

The difference is that in the first example we do A chunks of work, then B chunks of work, while in the second we do B chunks of work for each of the A elements.

Recursive Runtimes

What's the runtime of this code?

int f(int n) {
	if (n <= 1) 
		return 1;
	return f(n - 1) + f(n - 1);
}

Suppose we call f(4), this is the result:

graph TD
		id1(("f(4)"))
		id2(("f(3)"))
		id3(("f(3)"))
		id4(("f(2)"))
		id5(("f(2)"))
		id6(("f(1)"))
		id7(("f(1)"))
		id8(("f(1)"))
		id9(("f(1)"))
		id10(("f(2)"))
		id11(("f(2)"))
		id12(("f(1)"))
		id13(("f(1)"))
		id14(("f(1)"))
		id15(("f(1)"))
		
		id1-->id2
		id1-->id3
		id2-->id4
		id2-->id5
		id3-->id10
		id3-->id11  
		id4-->id6
		id4-->id7
		id5-->id8
		id5-->id9
		id10-->id12
		id11-->id13
		id10-->id14
		id11-->id15

The tree has depth N, and each node has two children, so the tree has O(2^N) nodes and the runtime is O(2^N).

Remember

When you have a recursive function that makes multiple calls, the runtime will often (but not always) look like O(branches^depth), where branches is the number of times each recursive call branches.

The space complexity of this algorithm will be O(N). Although we have O(2^N) nodes in the tree total, only
O(N) exist on the call stack at any given time. Therefore, we would only need to have O(N) memory available.
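Both claims are easy to check by instrumenting f (an illustrative sketch; the counters are my own additions): the total number of calls comes out to 2^N - 1, while the stack never holds more than N frames at once.

```java
class RecursionCost {
    static int totalCalls; // counts every invocation of f
    static int depth;      // current call-stack depth
    static int maxDepth;   // deepest the stack ever gets

    static int f(int n) {
        totalCalls++;
        depth++;
        maxDepth = Math.max(maxDepth, depth);
        int result = (n <= 1) ? 1 : f(n - 1) + f(n - 1);
        depth--;
        return result;
    }

    public static void main(String[] args) {
        f(4);
        System.out.println(totalCalls); // 15 = 2^4 - 1 nodes in the tree
        System.out.println(maxDepth);   // 4 frames at most: O(N) space
    }
}
```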

Another example

What is the runtime of the below code?

void printUnorderedPairs(int[] array) {
	for (int i = 0; i < array.length; i++) {
		for (int j = i + 1; j < array.length; j++) {
			System.out.println(array[i] + "," + array[j]);
		}
	}
}

The first time through, j runs for N - 1 steps. The second time, it's N - 2 steps. Then N - 3 steps. And so on.
So the number of total steps is

(N - 1) + (N - 2) + (N - 3) + ... + 2 + 1 = sum of 1 through (N - 1)

The sum of 1 through (N - 1) is N(N - 1)/2, so the runtime will be O(N^2).
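A quick way to confirm the arithmetic (a sketch; countPairSteps is my own helper): count the inner-loop iterations directly and compare against N(N - 1)/2.

```java
class PairSteps {
    // Counts how many times the inner-loop body of
    // printUnorderedPairs would execute for an array of length n.
    static int countPairSteps(int n) {
        int steps = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                steps++;
            }
        }
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(countPairSteps(5)); // 10
        System.out.println(5 * (5 - 1) / 2);   // 10, matches N(N-1)/2
    }
}
```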