
Greedy Strategy as an Algorithm Technique


You are reading the third segment of our algorithm design patterns series. We have already covered the backtracking and divide-and-conquer techniques. In this article we will cover the so-called Greedy Strategy. Greedy algorithms often, but not always, yield the optimal solution quickly, because they approach problem solving heuristically: they commit to the best-looking choice at each step. Thus, it is critical to master greediness.

Author Info:
By: Barzan "Tony" Antal
September 30, 2008
TABLE OF CONTENTS:
  1. Greedy Strategy as an Algorithm Technique
  2. The Theory
  3. Kruskal's Algorithm
  4. Taking a Break


Greedy Strategy as an Algorithm Technique - The Theory
(Page 2 of 4)

As the National Institute of Standards and Technology [NIST] Dictionary of Algorithms and Data Structures defines it, a greedy algorithm always takes the best immediate, or local, solution while finding an answer. Greedy algorithms find the globally optimal solution for some optimization problems, but may find less-than-optimal solutions for some instances of other problems. In general, we use greedy algorithms for optimization problems.

Optimization problems, by their very nature, require an overall solution that is globally optimal. Just any solution doesn't cut it; the best solution is necessary. Imagine the following scenario: a binary decision tree through which a greedy algorithm walks, beginning from the root node, looking around at each node and picking the local optimum. At every node there are two routes to take; it picks the one that currently looks best.

However, walking through the tree with this mentality does not guarantee that you end up with a globally optimal solution, despite the fact that you always picked the "local best" route on each level. You cannot be absolutely sure that this leads to the overall best solution. This is why greedy often gives the optimal solution, but not always. You can't take it for granted; you need to prove it.

But let's stop for a moment and assume that a problem can be solved with the greedy technique. Picture a binary tree with four levels below the root, for a total of 30 nodes (excluding the root node). An exhaustive approach would require an algorithm that visits the entire tree. Consider backtracking for a moment; it would surely take much more time, since it exhausts the entire tree.

Needless to say, the greedy approach picks the best node on each level, and after each decision the rest of the tree is ignored. In our example, since the first decision preferred the left subtree, the root node's entire right subtree is eliminated from the possible solutions. The problem is halved; after each further decision it is halved again, and that is how we arrive at the end.
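To make this walk concrete, here is a small C++ sketch (my own illustration, not from the article) of a greedy descent through a complete binary tree stored in an array: at each node the algorithm moves to the larger child, so the other subtree, half of what remains, is discarded at every step.

    #include <iostream>
    #include <vector>

    // Greedy descent through a complete binary tree stored in an array
    // (the children of node i sit at indices 2i+1 and 2i+2). At every node
    // we step to the larger child and never look at the other subtree again,
    // so half of the remaining tree is thrown away after each decision.
    int greedyPathSum(const std::vector<int>& tree) {
        int i = 0;
        int sum = tree[0];
        while (2 * i + 2 < static_cast<int>(tree.size())) {
            int left = 2 * i + 1;
            int right = 2 * i + 2;
            i = (tree[left] >= tree[right]) ? left : right;  // local optimum
            sum += tree[i];
        }
        return sum;  // sum along the greedy path, not necessarily the best path
    }

    int main() {
        // Root 3; greedy prefers 7 over 1, then 4 over 2, giving 3+7+4 = 14,
        // while the best root-to-leaf path is actually 3+1+20 = 24.
        std::vector<int> tree = {3, 7, 1, 2, 4, 20, 0};
        std::cout << greedyPathSum(tree) << std::endl;  // prints 14
    }

Note that the descent touches only one node per level; that is where the speed comes from, and also why nothing guarantees the result is the global best.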

As a result, greedy algorithms are definitely faster and more resource-efficient. An algorithm like this practically breezes through the decision tree, picking the best choice it can see at that very moment, ignoring any possible alternatives and not thinking about future consequences. It also never steps back in case of failure. Like it or not, it commits to the local best and, in the end, returns the result.

On the other hand, if a problem cannot be solved with a greedy algorithm but we force one anyway, it may either find no solution at all (it picks the local bests, moves on, and in the end realizes it went wrong somewhere down the road) or return a solution that is not globally optimal. Even if it does return a globally optimal solution, chances are that we "lucked into" it. Greedy won't work for all kinds of problems.
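A classic concrete illustration of this failure mode (again my own example, not the article's) is coin change: with the canonical US denominations the greedy pick happens to be optimal, but with the made-up denominations {4, 3, 1} and a target of 6, greedy returns 4+1+1 (three coins) while the true optimum is 3+3 (two coins).

    #include <iostream>
    #include <vector>

    // Greedy coin change: repeatedly take the largest coin that still fits.
    // This is optimal only for so-called canonical coin systems.
    std::vector<int> greedyChange(const std::vector<int>& coins, int amount) {
        std::vector<int> picked;
        for (int c : coins) {          // coins are assumed sorted descending
            while (amount >= c) {
                picked.push_back(c);   // local best: the largest coin that fits
                amount -= c;
            }
        }
        return picked;
    }

    int main() {
        std::vector<int> canonical = {25, 10, 5, 1};
        std::cout << greedyChange(canonical, 63).size()
                  << " coins" << std::endl;  // prints 6, which is optimal

        std::vector<int> tricky = {4, 3, 1};
        std::cout << greedyChange(tricky, 6).size()
                  << " coins" << std::endl;  // prints 3, but the optimum is 2
    }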

Now, let's talk about practicality. In everyday life we may face "problems" that need to be solved in record time and as resource-efficiently as possible, yet do not need a globally optimal solution. You just want them solved! In these cases it may be worth sacrificing optimality in favor of reducing overall complexity.

Sometimes you may know in advance that writing an algorithm that generates the globally optimal (best) solution would also require lots of human resources (time, more people, and so on), not only computational resources. In all of these cases, being greedy is the best bet because, generally speaking, a greedy algorithm is quicker to write and doesn't require as much brain power. Once you have some practice, you can write the code quickly.

Here’s the pseudo-code for a typical greedy algorithm.

procedure greedy (let C be the candidate set)
    let S be the empty solution set
    while NOT solution (S) AND C is NOT empty do
        x := select_best (C)
        C := C - {x}                // remove x from the candidate set
        if feasible (S, x) then
            S := S + {x}            // + stands for merging the sets
        end if
    end loop

    if solution (S) then
        print (S)
    else
        print "There are no solutions."
    end if
end procedure

As you can see from the above pseudo-code, a greedy algorithm usually rests on four or five major pillars. First, there is a candidate set (called C in our case) that contains all of the elements from which a solution can be built. Second, there is a procedure (select_best in our case) that picks the locally best candidate from the candidate set. This procedure is what makes the algorithm "greedy": it doesn't care about the global solution.

Third, a feasible procedure checks whether the locally optimal pick can still contribute to the overall, global solution (this is the one check that does not behave greedily). Fourth, a solution procedure indicates whether we have found a solution. Additionally, if required, a fifth function adds the locally best candidate to the solution set when it is feasible; this function is necessary whenever simple array merging or addition isn't enough.

It is also especially important that, as soon as we have made a locally optimal pick, we immediately remove it from the candidate set. Only then do we check whether it is feasible, meaning that it can lead to (contribute to) the global solution. If it is, we add it to the solution set; if not, we forget about it.
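To show how these pillars map onto real code, here is a minimal C++ sketch (my own illustration, not the article's code) that instantiates the skeleton for scheduling non-overlapping meetings: sorting by finish time plays the role of select_best, and the overlap test plays the role of feasible(S, x).

    #include <algorithm>
    #include <climits>
    #include <iostream>
    #include <vector>

    struct Meeting { int start, end; };

    // Greedy interval scheduling. Sorting by end time means the front of
    // the list is always the local optimum (the earliest-finishing
    // candidate), and the overlap check is feasible(S, x) from the
    // pseudo-code above.
    std::vector<Meeting> schedule(std::vector<Meeting> candidates) {
        std::sort(candidates.begin(), candidates.end(),
                  [](const Meeting& a, const Meeting& b) { return a.end < b.end; });

        std::vector<Meeting> solution;
        int lastEnd = INT_MIN;
        for (const Meeting& x : candidates) {  // x = select_best(C); C loses x
            if (x.start >= lastEnd) {          // feasible(S, x): no overlap
                solution.push_back(x);         // S := S + {x}
                lastEnd = x.end;
            }                                  // infeasible picks are forgotten
        }
        return solution;
    }

    int main() {
        std::vector<Meeting> m = {{1, 4}, {3, 5}, {0, 6}, {5, 7}, {8, 9}};
        std::cout << schedule(m).size() << " meetings fit" << std::endl;  // prints 3
    }

For this particular problem the earliest-finish-time pick is provably optimal; that proof obligation is exactly what the earlier warning about greediness was getting at.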

As a final note, if a problem can be solved with a greedy algorithm, it can almost always be formulated in the following way: its globally optimal solution begins with a first choice of the local optimum, after which the problem reduces to a sub-problem of the same (or a related) type. That sub-problem's globally optimal solution again begins with a locally optimal pick, and so on. This is why a sequence of local optima can add up to a globally optimal solution.
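The fractional knapsack problem is a tidy example of this self-reducing structure (my illustration, with made-up item values): the globally optimal load starts with the locally best pick, the item with the highest value-per-weight ratio, and whatever capacity remains is simply a smaller knapsack instance of the same type.

    #include <algorithm>
    #include <iostream>
    #include <vector>

    struct Item { double value, weight; };

    // Fractional knapsack: after each greedy pick (the best value/weight
    // ratio), the leftover capacity defines a smaller instance of the very
    // same problem. Because items may be taken fractionally, this greedy
    // choice is provably optimal.
    double knapsack(std::vector<Item> items, double capacity) {
        std::sort(items.begin(), items.end(), [](const Item& a, const Item& b) {
            return a.value / a.weight > b.value / b.weight;  // best ratio first
        });

        double total = 0.0;
        for (const Item& it : items) {
            if (capacity <= 0) break;
            double take = std::min(it.weight, capacity);  // whole item or a slice
            total += it.value * (take / it.weight);
            capacity -= take;  // what is left is the same problem, only smaller
        }
        return total;
    }

    int main() {
        std::vector<Item> items = {{60, 10}, {100, 20}, {120, 30}};
        std::cout << knapsack(items, 50) << std::endl;  // prints 240
    }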

