You are reading another tutorial in the Algorithm Techniques multi-part series. As the title suggests, today we are going to briefly present the dynamic programming algorithm technique. Dynamic programming is one of the most discussed techniques in the algorithmic literature, since it underlies a large number of algorithms; theoretically, thousands of problems can be solved by applying it.

Dynamic Programming Algorithm Technique - The Theory, Continued (Page 3 of 4 )

All right, here's the approach that is usually advocated. First of all, we need to think about how we can split the main problem into a number of sub-problems, to all of which, in the end, the optimality principle should apply. Thus, the first step is to define what a solution of a sub-problem looks like and how we can "encode" it (for storage). It is also important to determine how many parameters are required to describe a sub-problem.

The second step is deciding what kind of storage (array) we need to use: one-, two-, or N-dimensional. Once we know the count of individual parameters that represent a sub-problem, we also know how many dimensions are required to store the optimum values of each optimal solution of the sub-problems. Moreover, you need to consider whether it is necessary to store everything. Sometimes you can get away with less (with the Fibonacci series, for example, only the previous two values are required).
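The Fibonacci observation above can be sketched in Python: since each value depends only on the previous two, no array is needed at all, just two variables (the function name and structure here are my own illustration, not from the article).

```python
def fib(n):
    """Return the n-th Fibonacci number, keeping only the previous two values
    instead of a full one-dimensional table."""
    prev, curr = 0, 1          # fib(0) and fib(1)
    for _ in range(n):
        prev, curr = curr, prev + curr
    return prev
```

This is the extreme case of "storing less": the full DP table would be one-dimensional, but only a constant-size window of it is ever consulted.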

Moving on, it's time to formulate the recursive mathematical formula that describes the optimality principle and how the final solution can be calculated by reusing the earlier solutions of the sub-problems. Filling the aforementioned multi-dimensional array happens using a bottom-up approach, since at each layer the optimum values of the previously solved sub-problems must already be available.

The final step is the one that actually solves the problem. Remember, up to this point we have only stored the necessary optimum values of the sub-problem solutions in that array. Now there are two possible strategies, depending on the main problem; the solution can be either a "leaf-root" or a "root-leaf" path. It can never be both!

The optimality principle, expressed by the recursive formula, applies to only one of these two possibilities. Knowing which one, you can completely reconstruct the optimal decision sequence (the right decisions to move through the decision tree, arriving at the solution in the end); this is equivalent to finding the final solution. Voila. Let's see those two possibilities.

First, I'll cover the "leaf-root" route. It can be solved with a single loop: we begin from the root's optimum value in the array and move toward a leaf's optimum value. In this way we fully reconstruct the decision sequence through which the optimal solution is found.

Second, the "root-leaf" route requires a two-pass algorithm, but it is also linear, so you shouldn't worry. We begin with a leaf's optimum value in the array and move toward the root's optimum value, after which we retrace the entire decision sequence recursively, meaning that we move backwards (from the root to the leaf again).

Now let's present a problem with its solution. It is called the Triangle Problem: we are given an NxN matrix in which natural numbers appear on and below the main diagonal. What is the maximum-value path that starts at the top-left index [1, 1] and ends somewhere on the last (n-th) row? The value of a path is calculated by adding up the numbers through which we move (basically, a sum of the numbers).

However, there are restrictions; only two moves are allowed. The first valid move is straight down one step (from [i][j] to [i+1][j]). The second valid move is diagonally one step (from [i][j] to [i+1][j+1]), meaning one step down and one step right. Knowing these are the only possible moves, the problem gets tough.
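To make the problem concrete, the two legal moves can first be tried exhaustively on a small instance; this naive recursive sketch (the function name and the sample triangle are my own, not from the article) is exponential, but it gives us a ground truth to compare the dynamic programming solution against.

```python
def max_path_brute(a, n, i=0, j=0):
    """Exhaustively try the two legal moves (straight down and diagonal)
    from cell (i, j); 0-based indices, last row is i == n - 1."""
    if i == n - 1:                 # reached the last row: path ends here
        return a[i][j]
    return a[i][j] + max(max_path_brute(a, n, i + 1, j),
                         max_path_brute(a, n, i + 1, j + 1))

# Sample triangle (values on and below the main diagonal of a 3x3 matrix):
a = [[1],
     [2, 3],
     [4, 1, 2]]
# The best path is 1 -> 2 -> 4, with sum 7.
```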

Here's the recursive algorithm:

Procedure triangle(c[][], a[][], n, i, j)
    If c[i][j] NOT equals -1
        Then return c[i][j]
    Else
        If i equals n
            Then c[i][j] <- a[i][j]
        Else
            c[i][j] <- a[i][j] + MAX(triangle(c, a, n, i+1, j), triangle(c, a, n, i+1, j+1))
        End if
        Return c[i][j]
    End if
End procedure
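A direct Python translation of this pseudocode might look as follows; note that it uses 0-based indices (so the last row is i = n - 1), and the memo table c starts out filled with -1, the "not yet computed" sentinel from the pseudocode.

```python
def triangle(c, a, n, i=0, j=0):
    """Memoized recursion mirroring the pseudocode: return the best path
    sum from cell (i, j) down to the last row, caching results in c."""
    if c[i][j] != -1:              # sub-problem already solved: reuse it
        return c[i][j]
    if i == n - 1:                 # last row: the cell's value is optimal
        c[i][j] = a[i][j]
    else:                          # optimality principle: best of two moves
        c[i][j] = a[i][j] + max(triangle(c, a, n, i + 1, j),
                                triangle(c, a, n, i + 1, j + 1))
    return c[i][j]

# Sample triangle and its memo table, initialized to -1:
a = [[1],
     [2, 3],
     [4, 1, 2]]
c = [[-1] * len(row) for row in a]
```

After the call, c[0][0] holds the optimum value for the whole problem, and every other cell of c holds the optimum of its sub-problem.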

The above algorithm is based on the following recursive mathematical formula:

c[1][1] = a[1][1]

c[i][1] = a[i][1] + c[i-1][1], where 2 <= i <= n

c[i][j] = a[i][j] + max {c[i-1][j], c[i-1][j-1]}, where 2 <= i <= n; 2 <= j <= i

However, since we have opted for a "leaf-root" type of implementation, the formula must also be "reversed," meaning the following.

c[n][j] = a[n][j], where 1 <= j <= n

c[i][j] = a[i][j] + max {c[i+1][j], c[i+1][j+1]}, where n > i >= 1; 1 <= j <= i
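Putting the reversed formula and the single-loop reconstruction together, a bottom-up sketch in Python could look like this (0-based indices, list-of-lists triangle; the function name is my own). The table is filled from the last row upward, so c[0][0] ends up holding the overall optimum, and one forward loop then recovers the decision sequence.

```python
def solve_triangle(a):
    """Fill c bottom-up using the reversed formula, then reconstruct the
    optimal decision sequence with a single forward loop."""
    n = len(a)
    c = [row[:] for row in a]          # base case: c[n-1][j] = a[n-1][j]
    for i in range(n - 2, -1, -1):     # rows n-2 down to 0
        for j in range(i + 1):
            c[i][j] = a[i][j] + max(c[i + 1][j], c[i + 1][j + 1])
    # Reconstruction: at each row, repeat the choice the max() made above.
    path, j = [a[0][0]], 0
    for i in range(n - 1):
        j = j if c[i + 1][j] >= c[i + 1][j + 1] else j + 1
        path.append(a[i + 1][j])
    return c[0][0], path

# For a = [[1], [2, 3], [4, 1, 2]] this returns (7, [1, 2, 4]).
```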