A popular problem for introducing dynamic programming is the minimal change problem. Suppose a cashier needs to give me a certain amount of change and wants to do it with the minimal number of coins possible. The input is a set of coin denominations and the amount; the output is the collection of coins.

For example, I may need to receive 45 cents in change, and the available denominations are 1, 5 and 20. Intuitively, a solution is 20 + 20 + 5, but what is the best way to compute it?

A recursive solution may be the first thing to try. A minimal collection of coins totalling 45 cents definitely belongs to the following set:

- the minimal collection of coins totalling 44 cents, plus a 1 cent coin
- the minimal collection of coins totalling 40 cents, plus a 5 cent coin
- the minimal collection of coins totalling 25 cents, plus a 20 cent coin

So at the next step we apply the same logic to the new change amounts: 44, 40 and 25. This looks like a classic recursion problem, which can be illustrated by the following image

**Solving Minimum Change with Recursion**

and described by the following pseudocode (where |coins| is the number of denominations available):

```
RECURSIVECHANGE(money, coins)
    if money = 0
        return 0
    MinNumCoins ← ∞
    for i ← 1 to |coins|
        if money ≥ coin_i
            NumCoins ← RECURSIVECHANGE(money − coin_i, coins)
            if NumCoins + 1 < MinNumCoins
                MinNumCoins ← NumCoins + 1
    return MinNumCoins
```
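As a sketch, the recurrence might look like this in C# (the class and method names here are my own, not from the course):

```csharp
using System;

public static class RecursiveChangeDemo
{
    // Direct translation of the recurrence: try every denomination,
    // recurse on the remainder, keep the smallest count. Exponential time.
    public static int RecursiveChange(int money, int[] coins)
    {
        if (money == 0) return 0;
        int minNumCoins = int.MaxValue;
        foreach (int coin in coins)
        {
            if (money >= coin)
            {
                int numCoins = RecursiveChange(money - coin, coins);
                // Guard against the "unreachable" sentinel before adding 1.
                if (numCoins != int.MaxValue && numCoins + 1 < minNumCoins)
                    minNumCoins = numCoins + 1;
            }
        }
        return minNumCoins;
    }
}
```

For example, `RecursiveChange(45, new[] { 1, 5, 20 })` returns 3.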

This should work, but is there something wrong with this approach? One can see that the recursive algorithm will compute the full solution for 19 cents six times by the third level of recursion alone (once for each ordering in which 1, 5 and 20 are subtracted from 45), and the duplication only gets worse at deeper levels. If the input value is large enough, the time required to compute the solution blows up exponentially. So this is a classic example of the benefits of dynamic programming. I had come across dynamic programming a few times before, but just couldn't properly figure it out.
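To see how cheap it is to eliminate that duplication, here is a minimal memoized sketch (my own naming, not part of the course material): each amount's answer is cached after its first computation, so no subproblem is ever solved twice.

```csharp
using System;
using System.Collections.Generic;

public static class MemoizedChange
{
    // Same recurrence as the recursive version, but results are cached,
    // so each amount between 1 and money is computed at most once.
    public static int Change(int money, int[] coins, Dictionary<int, int> cache)
    {
        if (money == 0) return 0;
        if (cache.TryGetValue(money, out int known)) return known;

        int min = int.MaxValue;
        foreach (int coin in coins)
        {
            if (money >= coin)
            {
                int sub = Change(money - coin, coins, cache);
                if (sub != int.MaxValue && sub + 1 < min)
                    min = sub + 1;
            }
        }
        cache[money] = min;
        return min;
    }
}
```

`MemoizedChange.Change(45, new[] { 1, 5, 20 }, new Dictionary<int, int>())` returns 3 in linear rather than exponential time.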

Finally I found a good explanation. It came as a part of a free course in Bioinformatics Algorithms. Here's how it goes:

The key to dynamic programming is to take a step that may seem counterintuitive. Instead of computing **MinNumCoins**(m) for every value of m from 45 downward toward m = 1 via recursive calls, we will invert our thinking and compute **MinNumCoins**(m) from m = 1 upward toward 45, storing all these values in an array so that we only need to compute **MinNumCoins**(m) once for each value of m. **MinNumCoins**(m) is still computed via the same recurrence relation.

**MinNumCoins**(m) = min{**MinNumCoins**(m − 20) + 1, **MinNumCoins**(m − 5) + 1, **MinNumCoins**(m − 1) + 1}

For example, assuming that we have already computed **MinNumCoins**(m) for m < 6, **MinNumCoins**(6) is equal to one more than the minimum of **MinNumCoins**(6 − 5) = **MinNumCoins**(1) = 1 and **MinNumCoins**(6 − 1) = **MinNumCoins**(5) = 1. Thus, **MinNumCoins**(6) is equal to 1 + 1 = 2.
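The hand computation above can be checked with a short bottom-up sketch (again, my own naming) that fills the array for the first few values of m:

```csharp
using System;

public static class ChangeTable
{
    // Fills MinNumCoins(0..money) bottom-up for the given denominations.
    public static int[] Fill(int money, int[] coins)
    {
        int[] minNumCoins = new int[money + 1];  // minNumCoins[0] = 0
        for (int m = 1; m <= money; m++)
        {
            minNumCoins[m] = int.MaxValue - 1;   // "infinity", safe to add 1 to
            foreach (int coin in coins)
                if (m >= coin && minNumCoins[m - coin] + 1 < minNumCoins[m])
                    minNumCoins[m] = minNumCoins[m - coin] + 1;
        }
        return minNumCoins;
    }
}
```

For coins {1, 5, 20}, `Fill(6, …)` yields [0, 1, 2, 3, 4, 1, 2], matching **MinNumCoins**(6) = 2.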

This translates into the following pseudocode:

```
DPCHANGE(money, coins)
    MinNumCoins(0) ← 0
    for m ← 1 to money
        MinNumCoins(m) ← ∞
        for i ← 1 to |coins|
            if m ≥ coin_i
                if MinNumCoins(m − coin_i) + 1 < MinNumCoins(m)
                    MinNumCoins(m) ← MinNumCoins(m − coin_i) + 1
    output MinNumCoins(money)
```

And a C# implementation, which takes the target amount and a comma-separated string of the available denominations:

```csharp
public static void DPCHANGE(int val, string denoms)
{
    // Parse and sort the denominations, e.g. "1,5,20" -> {1, 5, 20}.
    int[] idenoms = Array.ConvertAll(denoms.Split(','), int.Parse);
    Array.Sort(idenoms);

    int[] minNumCoins = new int[val + 1];
    minNumCoins[0] = 0;
    for (int m = 1; m <= val; m++)
    {
        // "Infinity" that will not overflow when 1 is added below.
        minNumCoins[m] = Int32.MaxValue - 1;
        // Try every denomination, starting from the first (index 0).
        for (int i = 0; i < idenoms.Length; i++)
        {
            if (m >= idenoms[i] && minNumCoins[m - idenoms[i]] + 1 < minNumCoins[m])
            {
                minNumCoins[m] = minNumCoins[m - idenoms[i]] + 1;
            }
        }
    }
    Console.WriteLine(minNumCoins[val]);
}
```

Calling `DPCHANGE(45, "1,5,20")` prints `3` (20 + 20 + 5).

**References**

An Introduction to Dynamic Programming: The Change Problem

by Evgeny. Also posted on my website
