You are reading another tutorial in the Algorithm Techniques multi-part series. As the title suggests, today we are going to briefly present the dynamic programming technique. Dynamic programming is one of the most discussed techniques in the algorithmic literature, since it underlies a large number of algorithms; theoretically, thousands of problems can be solved with it.

Dynamic Programming Algorithm Technique - The Theory (Page 2 of 4 )

As you may recall, in our earlier articles we always used the binary tree structure as an analogy to help us visualize how an algorithm traverses the structure until it finds a solution. Here we can call these decision trees. The root node is the initial state of the problem; each layer can be considered another state, and the number of children of a node defines the number of possible decisions at that state.

In the end, the solution is represented by a path from the root node to a leaf that satisfies the conditions, taking the decision made at each layer's node into consideration. In a binary decision tree this means that at each node we have two options, of which we can pick only one. The decision tree is the most common model for optimization problems.

Moreover, we can distinguish two types of decision trees. In the first, after each decision the rest of the tree is eliminated, just as with the greedy technique; the problem is then reduced to a smaller tree, and so on. In the second, right after a decision is taken, the tree splits into two (or more) identically structured smaller sub-problems, so the same sub-problems appear over and over. Combining these two types, we end up with the most complex structure.
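To make the second type concrete, here is a small sketch using the Fibonacci numbers (my example, not one from the article): a naive recursive solution splits each call into two smaller sub-trees that repeatedly solve the very same sub-problems.

```python
def fib_naive(n, calls=None):
    """Naive recursion: the call tree splits into two sub-trees
    that solve identical smaller sub-problems again and again.
    The optional `calls` dict counts how often each sub-problem
    is recomputed."""
    if calls is not None:
        calls[n] = calls.get(n, 0) + 1
    if n < 2:
        return n
    return fib_naive(n - 1, calls) + fib_naive(n - 2, calls)

calls = {}
fib_naive(10, calls)
# calls[5] shows fib(5) being solved several times within fib(10)
```

Counting the calls makes the redundancy visible: the same small sub-problem is solved many times, which is exactly the situation dynamic programming is designed to exploit.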

Dynamic programming, by its very nature, is the most appropriate technique for problems that have the aforementioned complex structure. The greedy algorithm wouldn't suffice, because its approach works only until the second type of tree appears; handling this combination is the specialty of dynamic programming.

The dynamic programming technique can be approached in two distinct ways: top-down and bottom-up. The former is recursion combined with memoization: it breaks the problem into its sub-problems, solves them, and stores their solutions, since we expect to re-use them.
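A minimal sketch of the top-down approach, again using Fibonacci as an illustrative example: the recursion is the same as before, but each sub-problem's solution is stored on first computation and re-used afterwards.

```python
def fib_memo(n, memo=None):
    """Top-down dynamic programming: recurse as usual, but store
    each sub-problem's solution (memoization) so it is computed
    only once."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    memo[n] = n if n < 2 else fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]
```

With memoization the exponential call tree collapses to a linear number of distinct sub-problems, so even `fib_memo(50)` returns instantly.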

The latter predicts which sub-problems will be required and solves them in advance, in order of increasing size; it then builds the final solution from the sub-problems' solutions. Being iterative, this approach avoids the recursion overhead (stack space) of the top-down method, although it may solve sub-problems that are never actually needed.
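The same example written bottom-up: the sub-problems are solved in increasing order and stored in a table, with no recursion at all.

```python
def fib_bottom_up(n):
    """Bottom-up dynamic programming: solve the sub-problems in
    order of increasing size, filling a table; no recursion stack
    is needed."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        # Each entry is built from previously solved sub-problems.
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```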

All in all, the dynamic programming technique is based on the optimality principle (Bellman's principle of optimality). It starts from the trivial but optimal solutions of the smallest sub-problems and, moving outward, builds the final solution. It is essential that the algorithm stores the solution of each sub-problem; we use an array for this purpose. The array can be multi-dimensional, depending on how many parameters describe the solution(s).

In practice, the optimality principle takes the form of a recursive mathematical formula (a recurrence). By applying this formula we can find the final solution, relying on previously computed solutions. Because each sub-problem's solution is stored, dynamic programming eliminates the need to repeatedly solve identical sub-problems. Divide and conquer, on the other hand, splits the problem into sub-problems (preorder traversal) and then solves all of them recursively, without exception, even when the same sub-problem appears many times; this is why it can be far less efficient.
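To sketch how a recurrence drives a multi-dimensional array, consider the classic 0/1 knapsack problem (my choice of example, not one from the article). The recurrence is `best[i][c] = max(best[i-1][c], best[i-1][c-w_i] + v_i)`: the best value using the first `i` items with capacity `c` either skips item `i` or takes it.

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack solved bottom-up with a two-dimensional table;
    the two parameters describing a sub-problem (item index and
    remaining capacity) give the table its two dimensions."""
    n = len(weights)
    # best[i][c]: best total value using the first i items with capacity c.
    best = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        w, v = weights[i - 1], values[i - 1]
        for c in range(capacity + 1):
            best[i][c] = best[i - 1][c]          # skip item i
            if w <= c:                            # or take item i, if it fits
                best[i][c] = max(best[i][c], best[i - 1][c - w] + v)
    return best[n][capacity]
```

Here the array is two-dimensional because two parameters (item index and remaining capacity) describe each sub-problem, just as the paragraph above suggests.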

In the next section we will provide a general guideline that can be followed whenever you design a dynamic programming algorithm. Additionally, you will learn how to apply the design principles to a problem: we will examine its requirements, design our algorithm, and present it in pseudocode.