
# Dynamic Programming Algorithm Technique

You are reading another tutorial in the Algorithm Techniques multi-part series. As the title suggests, today we are going to briefly present the dynamic programming algorithm technique. Dynamic programming is one of the most discussed techniques in the algorithmic literature, since it underlies a large number of algorithms; theoretically, thousands of problems can be solved with it.

October 14, 2008

Up to this point in this article series we've covered backtracking, divide and conquer, the greedy strategy, and even genetic programming. By now you should be able to clearly see the differences between these algorithm design patterns and techniques, as well as recognize their advantages and drawbacks.

Gradually, as you become familiar with more techniques, you will find many problems that can be solved with one technique, say divide and conquer, but for which the dynamic programming approach is more elegant and efficient. Likewise, the solution returned by the greedy approach sometimes isn't satisfactory at all, and then dynamic programming saves the day. But these are just examples.

In a nutshell, dynamic programming means breaking the large problem into smaller sub-problems, solving each sub-problem optimally, and storing those solutions. Then, by applying a recurrence formula, you can build up the final solution without ever needing to alter the previously solved sub-problems or re-calculate parts of the algorithm.

Now that you understand that definition, it's time for us to introduce two technical terms. First of all, a problem is said to have overlapping sub-problems if it can be broken down into sub-problems that are reused several times later on. The easiest illustration of this is the Fibonacci sequence. As you know, F(n) = F(n-1) + F(n-2), but also F(n-1) = F(n-2) + F(n-3), so F(n-2) is needed by both. And so on.
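As a minimal sketch of how caching exploits those overlapping sub-problems, here is a memoized Fibonacci in Python (the function name and use of `functools.lru_cache` are our illustration, not part of the original article):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Return the n-th Fibonacci number.

    Each sub-result F(k) is computed once and cached, so the
    overlapping sub-problems cost nothing the second time around.
    """
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```

Without the cache this recursion is exponential; with it, each F(k) is computed exactly once, giving linear time.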

Here is our next definition: a problem is said to have optimal sub-structure if the final solution can be constructed efficiently from previously calculated solutions of the sub-problems, which are themselves also optimal. We saw something similar in our greedy article. However, greedy is a reckless approach that doesn't always give the optimal solution. That's why you need to know more strategies.
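To make the greedy-versus-dynamic-programming contrast concrete, consider the classic coin-change problem. The sketch below (our own illustration; the denominations {1, 3, 4} are chosen to trip up greedy) compares the two on a target of 6: greedy grabs the largest coin first and ends up with three coins, while the DP table finds the optimal two:

```python
def min_coins(coins, amount):
    """Dynamic programming: best[a] = fewest coins that sum to a."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            # optimal sub-structure: an optimal answer for a extends
            # an optimal answer for a - c by one coin
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount]

def greedy_coins(coins, amount):
    """Greedy: always take the largest coin that still fits."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count

print(greedy_coins([1, 3, 4], 6))  # 3  (4 + 1 + 1)
print(min_coins([1, 3, 4], 6))     # 2  (3 + 3)
```

Greedy commits to the 4-coin and can never recover; the DP fills in optimal answers for every smaller amount first, so it can see that 3 + 3 beats 4 + 1 + 1.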

Now we can move on to the theory in the next part of this article.