
Dynamic Programming Algorithm Technique

You are reading another tutorial in the Algorithm Techniques multi-part series. As the title suggests, today we are going to briefly present the dynamic programming algorithm technique. Dynamic programming is one of the most discussed techniques in the algorithmic literature, since it underlies a large family of algorithms; in theory, thousands of problems can be solved by applying it.

Author Info:
By: Barzan "Tony" Antal
October 14, 2008
  1. Dynamic Programming Algorithm Technique
  2. The Theory
  3. The Theory, Continued
  4. Concluding Thoughts

(Page 1 of 4)

Up to this point in this article series we've covered backtracking, divide and conquer, the greedy strategy, and even genetic programming. By now you should be able to see the differences between these algorithm design patterns and techniques clearly, as well as recognize their advantages and drawbacks.

Gradually, as you become familiar with a multitude of techniques, you will notice that many problems can be solved with more than one of them, say divide and conquer, yet the dynamic programming approach is more elegant and efficient. Likewise, sometimes the solution returned by the greedy approach isn't satisfactory at all, and dynamic programming saves the day. But these are just examples.

In a nutshell, dynamic programming means breaking the large problem into smaller sub-problems, solving each sub-problem once and optimally, and storing its solution. A recurrence formula then combines these stored solutions into the final answer, without altering the previously solved sub-problems or re-calculating parts of the algorithm.

Now that you understand that definition, it's time for us to introduce two technical terms. First of all, a problem is said to have overlapping sub-problems if it can be broken down into sub-problems that are reused several times later on. The easiest illustration of this is the Fibonacci sequence: F(n) = F(n-1) + F(n-2), but also F(n-1) = F(n-2) + F(n-3), so F(n-2) is needed by both expansions. And so on.
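To make the overlap concrete, here is a small sketch in Python (my own illustration, not from the original article; the function names are made up) contrasting plain recursion, which re-solves the same sub-problems over and over, with a bottom-up dynamic programming table that solves each sub-problem exactly once:

```python
def fib_naive(n):
    """Plain recursion: F(n-2) is recomputed inside both branches,
    so the running time grows exponentially with n."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

def fib_dp(n):
    """Bottom-up dynamic programming: each sub-problem F(0)..F(n)
    is solved once, stored in a table, and reused by the recurrence."""
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib_naive(10))  # 55
print(fib_dp(10))     # 55, but computed in linear time
```

Both functions return the same values; the difference is purely in how many times each sub-problem is solved.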

Here is our next definition: a problem is said to have optimal sub-structure if the final solution can be constructed efficiently from the previously calculated solutions of the sub-problems, which are themselves optimal. We already saw something like this in our greedy article. However, greedy is a reckless approach that doesn't always give the optimal solution. That's why you need to know more strategies.

With these definitions in place, we can move on to the theory on the next page. Read on.
