Algorithms Foundations
Greedy Algorithms
The greedy paradigm: make the locally optimal choice at each step and never look back. Matroid structure or an exchange argument is what guarantees global optimality.
Why This Matters
Greedy algorithms are often the first approach you try for an optimization problem. They are fast, easy to implement, and sometimes provably optimal. The hard part is knowing when they work. A greedy algorithm that looks correct can silently produce suboptimal solutions if the problem lacks the right structure.
Mental Model
At each step, pick the option that looks best right now without worrying about future consequences. If the problem has the right combinatorial structure, this myopic strategy assembles a globally optimal solution.
The key insight: greedy works when you can always extend a partial solution by the locally best choice without painting yourself into a corner.
Core Definitions
Greedy Algorithm
An algorithm that builds a solution incrementally, at each step choosing the element that maximizes (or minimizes) some local criterion, and never revising past choices.
Matroid
A matroid is a pair M = (E, I) where E is a finite ground set and I is a family of independent subsets of E satisfying: (1) ∅ ∈ I, (2) if B ∈ I and A ⊆ B, then A ∈ I (hereditary), (3) if A, B ∈ I and |A| < |B|, then there exists x ∈ B \ A such that A ∪ {x} ∈ I (exchange property).
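For a small ground set, the three axioms can be verified by brute force. A minimal Python sketch (the function and variable names are illustrative, not from any library), checked here on the uniform matroid U(2,3), whose independent sets are the subsets of size at most 2:

```python
from itertools import combinations

def is_matroid(ground, independent):
    """Brute-force check of the three matroid axioms.

    ground: finite ground set E (iterable of hashable elements)
    independent: collection of independent sets, each a frozenset
    """
    fam = {frozenset(s) for s in independent}
    # (1) the empty set is independent
    if frozenset() not in fam:
        return False
    # (2) hereditary: every subset of an independent set is independent
    for b in fam:
        for r in range(len(b)):
            for a in combinations(b, r):
                if frozenset(a) not in fam:
                    return False
    # (3) exchange: if |A| < |B|, some x in B \ A extends A
    for a in fam:
        for b in fam:
            if len(a) < len(b):
                if not any(a | {x} in fam for x in b - a):
                    return False
    return True

# Uniform matroid U(2, 3): subsets of {1, 2, 3} of size <= 2
E = {1, 2, 3}
U23 = [frozenset(s) for r in range(3) for s in combinations(E, r)]
print(is_matroid(E, U23))  # True
```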
Main Theorems
Greedy is Optimal on Matroids
Statement
Let M = (E, I) be a matroid with a weight function w : E → ℝ≥0. The greedy algorithm that iteratively adds the maximum-weight element maintaining independence produces a maximum-weight independent set.
Intuition
The exchange property guarantees that if the greedy solution were suboptimal, you could swap an element from a better solution into the greedy solution without breaking independence, contradicting the greedy choice.
Proof Sketch
Suppose greedy produces G = {g_1, ..., g_k} with w(g_1) ≥ ... ≥ w(g_k), and an optimal solution is O = {o_1, ..., o_k} sorted the same way. If w(g_i) < w(o_i) for some first index i, apply the exchange property to {g_1, ..., g_{i-1}} and {o_1, ..., o_i}: it yields an element x from O with weight at least w(o_i) > w(g_i) that keeps {g_1, ..., g_{i-1}} independent. But greedy chose g_i as the heaviest feasible element at that step, a contradiction.
Why It Matters
This theorem tells you exactly when greedy works: when the feasible sets form a matroid. Graphic matroids give you Kruskal's algorithm. Partition matroids give you scheduling problems. Recognizing matroid structure is the principled way to justify greedy correctness.
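The theorem suggests a generic template: sort by weight, then keep any element that preserves independence, with the matroid supplied as an independence oracle. A sketch under that assumption (all names are illustrative), instantiated on a small partition matroid of the kind that models scheduling with category constraints:

```python
def greedy_max_weight(elements, weight, independent):
    """Generic matroid greedy: scan elements by decreasing weight
    and keep each one that preserves independence.

    independent(S) -> bool is an independence oracle for the matroid.
    """
    solution = set()
    for e in sorted(elements, key=weight, reverse=True):
        if independent(solution | {e}):
            solution.add(e)
    return solution

# Partition matroid: at most one element per category
items = {"a1": 5, "a2": 3, "b1": 4}           # element -> weight
category = {"a1": "a", "a2": "a", "b1": "b"}  # element -> block

def indep(s):
    cats = [category[e] for e in s]
    return len(cats) == len(set(cats))        # no category repeated

print(sorted(greedy_max_weight(items, items.get, indep)))  # ['a1', 'b1']
```

The oracle abstraction is the point: swapping in a cycle check instead of `indep` turns the same loop into Kruskal's algorithm.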
Failure Mode
When the independence structure is not a matroid, greedy can fail badly. The 0/1 knapsack problem is the classic example: the feasible sets do not form a matroid, and greedy by value-to-weight ratio can miss the optimal solution entirely.
Canonical Examples
Activity Selection (Interval Scheduling)
Given activities with start and finish times, select the maximum number of non-overlapping activities. Greedy rule: always pick the activity that finishes earliest. The non-overlapping subsets do not form a matroid (the exchange axiom fails: two equal-cardinality independent sets can have incompatible elements). Optimality instead follows from a greedy-stays-ahead or exchange argument: at each step, the earliest-finishing compatible activity leaves at least as much room for future picks as any alternative, so the greedy schedule matches an optimal one prefix by prefix. Time complexity: O(n log n) for sorting.
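A sketch of the earliest-finish-first rule (function name and sample data are illustrative):

```python
def select_activities(intervals):
    """Earliest-finish-first greedy for interval scheduling.

    intervals: list of (start, finish) pairs. Returns a maximum-size
    list of pairwise non-overlapping intervals.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:      # compatible with everything chosen
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))  # [(1, 4), (5, 7), (8, 11)]
```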
Kruskal's MST Algorithm
To find a minimum spanning tree, sort edges by weight and add each edge if it does not create a cycle. The acyclic subsets of a graph form a graphic matroid, so the matroid greedy theorem guarantees optimality. Time complexity: O(E log E) for the sort, with near-constant amortized time per union-find operation.
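A compact sketch of Kruskal's algorithm with a minimal union-find (path compression only; all names and the sample graph are illustrative):

```python
def kruskal(n, edges):
    """Kruskal's MST: sort edges by weight, add each edge whose
    endpoints lie in different union-find components.

    n: number of vertices (0..n-1); edges: (weight, u, v) triples.
    Returns (total weight, list of MST edges).
    """
    parent = list(range(n))

    def find(x):                      # find root, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                  # edge joins two components: no cycle
            parent[ru] = rv
            mst.append((u, v))
            total += w
    return total, mst

edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]
print(kruskal(4, edges)[0])  # 6
```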
Huffman Coding
Build an optimal prefix-free code by repeatedly merging the two lowest-frequency symbols. The greedy choice (merge cheapest pair) is justified by an exchange argument: any optimal tree can be rearranged so the two rarest symbols are siblings at the deepest level.
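A sketch of the merge loop using a binary heap. Rather than build the tree explicitly, this version tracks each symbol's depth, which equals its code length (names are my own; the frequencies are the classic CLRS example):

```python
import heapq

def huffman_code_lengths(freqs):
    """Huffman's greedy: repeatedly merge the two lowest-frequency
    subtrees. Returns {symbol: code length}. A counter breaks ties
    so heap entries are always comparable."""
    heap = [(f, i, {sym: 0}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**d1, **d2}.items()}  # one level deeper
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]

lengths = huffman_code_lengths({"a": 45, "b": 13, "c": 12, "d": 16, "e": 9, "f": 5})
print(lengths["a"], lengths["f"])  # 1 4
```

The two rarest symbols, e and f, end up as siblings at the deepest level, exactly as the exchange argument predicts.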
Fractional Knapsack
Given items with values v_i and weights w_i, and a capacity W, maximize total value. Greedy rule: sort by value-to-weight ratio v_i/w_i and take as much of each item as fits. Because you can take fractions, this greedy strategy is provably optimal. Time complexity: O(n log n).
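A sketch of the ratio-greedy (names are illustrative; the sample instance is the one reused in the 0/1 discussion in the next section):

```python
def fractional_knapsack(items, capacity):
    """Greedy by value-to-weight ratio; take a fraction of the last item.

    items: list of (value, weight) pairs. Returns the maximum total value.
    """
    total = 0.0
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)       # whole item, or the fraction that fits
        total += value * take / weight
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```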
Common Confusions
Fractional knapsack is greedy-optimal but 0/1 knapsack is NOT
In the fractional knapsack, you can take any fraction of an item, so the greedy strategy of maximizing value-to-weight ratio works perfectly. In the 0/1 knapsack, you must take an item entirely or not at all. This all-or-nothing constraint breaks the exchange property. Consider: items with values (60, 100, 120) and weights (10, 20, 30) with capacity 50. Greedy by ratio takes items 1 and 2 (value 160), but optimal is items 2 and 3 (value 220). Use dynamic programming for 0/1 knapsack instead.
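The standard dynamic-programming fix can be sketched in a few lines (names are illustrative), confirming the optimal value of 220 on the instance above:

```python
def knapsack_01(items, capacity):
    """Standard 0/1 knapsack DP over capacities; O(n * capacity).

    items: list of (value, weight) pairs with integer weights.
    """
    best = [0] * (capacity + 1)
    for value, weight in items:
        for c in range(capacity, weight - 1, -1):  # descending: each item used once
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

items = [(60, 10), (100, 20), (120, 30)]
print(knapsack_01(items, 50))  # 220 (items 2 and 3)
# Greedy by ratio would take items 1 and 2 for only 160.
```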
Proving greedy correct requires a formal argument
It is not enough to say a greedy algorithm looks right. You need either: (1) a matroid structure argument, (2) an exchange argument showing you can transform any optimal solution to match greedy choices, or (3) a greedy-stays-ahead argument showing the greedy solution dominates any alternative at every step.
Proof Ideas and Templates Used
The exchange argument is the primary proof technique:
- Assume an optimal solution O differs from the greedy solution G.
- Find the first point of divergence.
- Show you can modify O to agree with G at that point without worsening the objective.
- By induction, O can be transformed into G without loss of quality.
Summary
- Greedy works when the problem has matroid structure or admits an exchange argument
- Always sort by the right criterion (finish time, weight, ratio, frequency)
- Greedy for fractional knapsack, DP for 0/1 knapsack
- Proving greedy correct is as important as designing the algorithm
- Time complexity is usually dominated by the initial sort: O(n log n)
Exercises
Problem
You have n jobs, each taking unit time, with deadlines d_i and profits p_i. You can schedule at most one job per time slot. Describe a greedy algorithm to maximize total profit from jobs completed by their deadlines.
Problem
Give a concrete example showing that greedy by value-to-weight ratio fails for the 0/1 knapsack problem. What is the approximation ratio of this greedy in the worst case?
References
Canonical:
- Cormen, Leiserson, Rivest, Stein, Introduction to Algorithms (CLRS), 3rd ed., MIT Press (2009), Chapters 15-16
- Edmonds, "Matroids and the Greedy Algorithm," Mathematical Programming 1(1):127-136 (1971)
- Rado, "Note on Independence Functions," Proceedings of the London Mathematical Society s3-7(1):300-320 (1957)
- Huffman, "A Method for the Construction of Minimum-Redundancy Codes," Proceedings of the IRE 40(9):1098-1101 (1952)
- Kruskal, "On the Shortest Spanning Subtree of a Graph and the Traveling Salesman Problem," Proceedings of the AMS 7(1):48-50 (1956)
- Oxley, Matroid Theory, 2nd ed., Oxford University Press (2011), Chapter 1
Accessible:
- Kleinberg & Tardos, Algorithm Design, Pearson (2005), Chapter 4
Current:
- Nemhauser, Wolsey, Fisher, "An Analysis of Approximations for Maximizing Submodular Set Functions," Mathematical Programming 14(1):265-294 (1978)
Next Topics
- Dynamic programming: the paradigm for problems where greedy fails
Last reviewed: April 27, 2026
Canonical graph
Required before and derived from this topic
These links come from prerequisite edges in the curriculum graph. Editorial suggestions are shown here only when the target page also cites this page as a prerequisite.
Required prerequisites
No direct prerequisites are declared; this is treated as an entry point.
Derived topics
- Dynamic Programming (layer 0A, tier 1)
- Knapsack Problem (layer 0A, tier 2)
- Tabu Search (layer 2, tier 3)
- Submodular Optimization (layer 3, tier 3)
Graph-backed continuations