Dynamic programming is tough. These are typically the hardest kinds of questions that get asked in coding interviews.

Turns out, at its heart, it's really not that complicated! This is meant to be a simple breakdown of what dynamic programming is and some common techniques to get you started.

### Motivation

First, we need some motivation. Let's take a look at an example: how can we compute the nth Fibonacci number recursively?

```python
def fibonacci(n):
    if n <= 1:
        return n
    else:
        return fibonacci(n - 1) + fibonacci(n - 2)
```


Looking at the tree of recursive calls, we know that this has a runtime of O(2^n). Check out The Complete Guide to Big O Notation if you want a more in-depth explanation as to why.

We have a lot of wasted calls here! For example, when n = 4, f(1) is computed 3 times. This gets exponentially worse for higher n.

Well, what if we had a way to store, or remember, the results we've already computed? So, instead of recomputing things, we can just reference the answer we remembered. That's exactly what memoization is.

### Memoization

Memoization is the heart of dynamic programming. It's all about remembering (or pre-computing) the results to subproblems.

Knowing this, how can we improve on the fibonacci example from earlier? What if we used a hash table to store the intermediate results, like this:

```python
memo = {}
def fibonacci(n):
    if n not in memo:
        if n <= 1:
            memo[n] = n
        else:
            memo[n] = fibonacci(n - 1) + fibonacci(n - 2)
    return memo[n]
```

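As a side note, Python's standard library can handle this kind of caching for you. `functools.lru_cache` memoizes a function automatically, keyed by its arguments, so you get the same speedup without managing the hash table yourself:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every result, no eviction
def fibonacci(n):
    if n <= 1:
        return n
    return fibonacci(n - 1) + fibonacci(n - 2)

print(fibonacci(50))  # 12586269025
```

This is handy in interviews too: you can write the clean recursive solution first, then slap on the decorator to memoize it.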

This is good, but we can actually improve on this even more by using an iterative approach instead of a recursive one. Let's use an array for our memo instead of a hash table.

```python
def fibonacci(n):
    if n <= 1:
        return n
    memo = [0] * (n + 1)
    memo[1] = 1
    for i in range(2, n + 1):
        memo[i] = memo[i - 1] + memo[i - 2]
    return memo[n]
```


Note: you can actually improve this even more by just using two variables to keep track of the last two numbers. Try it yourself!
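If you want to check your answer afterwards, here's one way the two-variable version might look:

```python
def fibonacci(n):
    # Only the last two Fibonacci numbers are ever needed,
    # so we keep just those instead of a whole memo array.
    prev, curr = 0, 1
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev
```

This drops the space usage from O(n) down to O(1), while keeping the O(n) runtime.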

Now, all we do is fill this memoization array. To do that, we compute each f(i) exactly once, for i from 0 to n. So, this gives us a runtime of O(n).

Memoizing has just brought us from a runtime of O(2^n) to O(n). 🤯 Pretty great, right?

### Greedy

Greedy is a type of algorithm that you can use to solve certain dynamic programming problems. It is exactly how it sounds.

The greedy approach is, to get the best overall solution, pick the best local solution every time. In other words, be greedy! At each step, we want to pick the option that will benefit us the most in the current scope.

Let's say you had the following coin change 💰 problem:

Given an integer amount, return the minimum number of coins needed to create that amount. Assume the coins have values [25, 10, 5, 2, 1].

So, for example, if I wanted to create the amount 19 using the minimum number of coins, I could first use a 10, then a 5, then two 2's. We used 4 coins, so our answer is 4.

What are we actually doing here? We're taking the largest coin at every step. This is the coin that benefits us most in the moment, since it gets us closest to our target amount. We're using the greedy algorithm!

Turns out, for this coin denomination, this algorithm works for any amount. So, the greedy solution would optimally solve this problem! 🎉

Now, Greedy doesn't always give the best answer. What if I changed the coins to [30, 24, 12, 6, 3, 1]? This actually used to be an old British coin system.

Well, if I want to make amount 19 again, then the greedy solution would first take coin 12, then 6, and then 1, which would be correct. ✅

What if the amount was 48? Greedy would first take coin 30, and then 12 followed by 6.

We used 3 coins, but there's a better way - we can just use coin 24 twice!

Because we were greedy and took coin 30 before we saw coin 24, we missed out on the optimal solution. ❌

So, for this coin denomination, greedy doesn't work in all cases!
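A dynamic programming solution, on the other hand, works for any denomination. Here's a sketch of the classic bottom-up approach: fill a memo array where each entry holds the fewest coins needed for that amount, building up from 0 to the target.

```python
def min_coins(amount, coins):
    # memo[a] = fewest coins needed to make amount a
    INF = float("inf")
    memo = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in coins:
            # Using this coin means 1 + the best answer for what's left.
            if coin <= a and memo[a - coin] + 1 < memo[a]:
                memo[a] = memo[a - coin] + 1
    return memo[amount]

print(min_coins(48, [30, 24, 12, 6, 3, 1]))  # 2  (24 + 24)
```

Notice the same memoization idea as the Fibonacci example: each subproblem (each smaller amount) is computed once and reused, for an O(amount × coins) runtime.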

Clearly, greedy won't solve every dynamic programming question you encounter, but for the problems it does solve, it's usually quick and elegant.

### General Tips

• Don't stress. Relax! Jobs aren't everything, life will go on 😀
• It's all about storing. Think about how you can break down the problem so that you can store or precompute results from subproblems, and use that to build your final answer.
• Master the basics. There's no need to spend hours on crazy elaborate dynamic programming problems. These are pretty unlikely to be asked. Work on the fundamentals first.
• Practice. And practice. Oh, and did I mention practice?