Build the optimal solution one step at a time by always choosing the locally best option, without looking back.
A greedy algorithm builds its solution piece by piece, always choosing the option that looks best right now. It never reconsiders a past choice. The magic: for the right class of problems, local optimality leads to global optimality.
At each step, make the choice that best improves the immediate objective (maximum gain or minimum cost). Unlike dynamic programming, you do not explore every possible subproblem. You commit once and move on.
Why does this work? Greedy algorithms require the greedy choice property: there exists an optimal solution that includes the greedy (locally best) choice. They also require optimal substructure: after making the greedy choice, the remaining problem is a smaller instance of the same type. When both hold, greedily choosing at every step assembles the global optimum.
Look for these recognition signals in the problem statement. If you spot one, a greedy algorithm is very likely the intended (or at least a valid) approach.
"Maximum number of events/activities"
"Minimum number of intervals to remove"
"Can you reach the end?" or "Minimum jumps"
"Partition into fewest groups" or "assign greedily"
The key clue: The problem asks for an optimal value (max/min) and you suspect that always picking the locally best option will never box you into a worse global outcome. If a counterexample breaks the greedy, use DP instead. If no counterexample exists, greedy is the way.
Sort intervals by their end time. Greedily pick each interval whose start time is at or after the previous interval's end. By always picking the interval that finishes earliest, you leave the most room for future intervals.
Find the maximum number of non-overlapping intervals.
Why sort by end time? The interval that ends earliest leaves the maximum remaining time for future intervals. If you sorted by start time or by length, you might pick an interval that blocks many shorter ones. Sorting by end time is provably optimal for this class of problems.
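To make the sort-key claim concrete, here is a small self-contained demo (the interval set and method names are illustrative, not from a specific problem): one long interval overlaps three short ones, and the greedy scan keeps three intervals when the input is sorted by end time but only one when it is sorted by start time.

```java
import java.util.Arrays;

public class SortKeyDemo {
    // Greedy scan: count how many intervals we can keep,
    // given an already-sorted order.
    public static int greedyCount(int[][] intervals) {
        int count = 0, prevEnd = Integer.MIN_VALUE;
        for (int[] iv : intervals) {
            if (iv[0] >= prevEnd) { // no overlap with the last kept interval
                count++;
                prevEnd = iv[1];
            }
        }
        return count;
    }

    public static void main(String[] args) {
        // One long interval [1,10] overlapping three short ones.
        int[][] byEnd   = {{1, 10}, {2, 3}, {4, 5}, {6, 7}};
        int[][] byStart = {{1, 10}, {2, 3}, {4, 5}, {6, 7}};

        Arrays.sort(byEnd, (a, b) -> Integer.compare(a[1], b[1]));   // end time
        Arrays.sort(byStart, (a, b) -> Integer.compare(a[0], b[0])); // start time

        System.out.println(greedyCount(byEnd));   // 3: keeps [2,3], [4,5], [6,7]
        System.out.println(greedyCount(byStart)); // 1: commits to [1,10], blocking the rest
    }
}
```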
Scan the array left to right, maintaining the farthest index you can reach. At each position, update the farthest reach using max(farthest, i + nums[i]). If you ever find that i > farthest, you are stuck and cannot reach the end.
Can you reach the last index? Track the farthest reachable position.
Here the greedy approach correctly detects that you are stuck.
Extending to "minimum jumps": For Jump Game II (#45), add a second variable currentEnd that tracks the boundary of the current jump. When i reaches currentEnd, you must make another jump (increment the count) and set currentEnd = farthest. Same greedy scan, one extra variable.
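The extension described above can be sketched as follows (the method name minJumps is mine; like most #45 solutions, it assumes the last index is reachable):

```java
public class JumpGameII {
    // Minimum number of jumps to reach the last index (Jump Game II, #45).
    // Assumes the last index is reachable.
    public static int minJumps(int[] nums) {
        int jumps = 0, currentEnd = 0, farthest = 0;
        for (int i = 0; i < nums.length - 1; i++) { // no jump needed once at the last index
            farthest = Math.max(farthest, i + nums[i]);
            if (i == currentEnd) {     // exhausted the current jump's range
                jumps++;               // take another jump...
                currentEnd = farthest; // ...landing anywhere up to farthest
            }
        }
        return jumps;
    }
}
```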
Here are annotated templates for the two major greedy variants. Both run in linear time after any initial sort.
```java
// Can we reach the last index?
boolean canJump(int[] nums) {
    int farthest = 0;
    for (int i = 0; i < nums.length; i++) {
        if (i > farthest) return false; // stuck: index i is unreachable
        farthest = Math.max(farthest, i + nums[i]);
    }
    return true; // every index was reachable, including the last
}
```
```java
import java.util.Arrays;

// Maximum number of non-overlapping intervals
int maxActivities(int[][] intervals) {
    // Sort by end time; Integer.compare avoids overflow on extreme values
    Arrays.sort(intervals, (a, b) -> Integer.compare(a[1], b[1]));
    int count = 0, prevEnd = Integer.MIN_VALUE;
    for (int[] iv : intervals) {
        if (iv[0] >= prevEnd) { // no overlap with the last kept interval
            count++;
            prevEnd = iv[1];    // commit to this interval
        }
    }
    return count;
}
```
The sorting step is critical. Most greedy algorithms require a preprocessing sort to establish the order in which choices are evaluated. Without it, the locally best choice at each step might not lead to a globally optimal answer. The sort gives the greedy scan the structure it needs.
Let us trace Jump Game II step by step on a five-element array whose first three values are 2, 3, 1, showing the farthest reach and jump count at every stage.
Finding the minimum number of jumps to reach the last index.
| STEP | ACTION | FARTHEST | CUR END | JUMPS |
|---|---|---|---|---|
| init | Initialize variables | 0 | 0 | 0 |
| i=0 (2) | farthest = max(0, 0+2) = 2. i == curEnd, jump! curEnd = 2 | 2 | 2 | 1 |
| i=1 (3) | farthest = max(2, 1+3) = 4. i < curEnd, no jump yet. | 4 | 2 | 1 |
| i=2 (1) | farthest = max(4, 2+1) = 4. i == curEnd, jump! curEnd = 4 | 4 | 4 | 2 |
| done | curEnd (4) ≥ last index (4). Stop. | 4 | 4 | 2 |
Notice the pattern: We never actually "jump" in a specific direction. Instead we track the boundary of the current jump. When we reach the boundary, we must take another jump and extend the boundary to the farthest we discovered so far. This implicit BFS-by-level technique is what keeps it O(n).
These are classic LeetCode problems that use the greedy pattern, listed in rough order of difficulty.
Practice tip: Start with #55 (direct application of the farthest-reach template). Then try #45 which adds the "minimum jumps" layer. Move to #435 for interval scheduling (the flip side of activity selection: minimize removals = maximize kept intervals). #763 Partition Labels is a beautiful greedy problem that combines last-occurrence tracking with interval merging.
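For a taste of the last-occurrence technique mentioned above, here is a sketch of #763 Partition Labels (assuming lowercase ASCII letters, as in the LeetCode problem): precompute each letter's last occurrence, then grow the current partition's boundary until the scan index catches up to it.

```java
import java.util.ArrayList;
import java.util.List;

public class PartitionLabels {
    // Greedy: extend the current partition until it covers the last
    // occurrence of every letter seen so far, then cut.
    public static List<Integer> partitionLabels(String s) {
        int[] last = new int[26];
        for (int i = 0; i < s.length(); i++) {
            last[s.charAt(i) - 'a'] = i; // last occurrence of each letter
        }
        List<Integer> sizes = new ArrayList<>();
        int start = 0, end = 0;
        for (int i = 0; i < s.length(); i++) {
            end = Math.max(end, last[s.charAt(i) - 'a']); // extend boundary
            if (i == end) {                  // partition is self-contained: cut
                sizes.add(end - start + 1);
                start = i + 1;
            }
        }
        return sizes;
    }
}
```

Note the family resemblance to the farthest-reach scan: `end` plays the same role as `farthest`, and a cut happens exactly when the index reaches the boundary.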
The Jump Game variants scan the array exactly once with a single for loop. At each index, we do O(1) work: one comparison and one max() call. Total: O(n).
We only track two or three integer variables (farthest, currentEnd, jumps). No auxiliary arrays or stacks. Total space: O(1).
For interval scheduling: The sort takes O(n log n) and the scan takes O(n), so the total is O(n log n). Space is O(1) beyond the input (or O(n) if the sort is not in-place).
Greedy vs DP trade-off: Many greedy problems can also be solved with DP, but the DP solution is typically O(n^2) time and O(n) space. The greedy approach achieves O(n) time and O(1) space because it collapses the entire DP table into one or two running variables. This is the reward for proving that the greedy choice property holds.
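To make the trade-off concrete, here is a sketch of the O(n^2) DP formulation of Jump Game II (an assumption-laden illustration, not the recommended solution; it assumes the last index is reachable). Each dp entry records the minimum jumps to reach that index, relaxed from every earlier index, which is exactly the table the greedy scan collapses into two running variables.

```java
import java.util.Arrays;

public class MinJumpsDP {
    // O(n^2) time, O(n) space: dp[i] = minimum jumps to reach index i.
    public static int minJumpsDP(int[] nums) {
        int n = nums.length;
        int[] dp = new int[n];
        Arrays.fill(dp, Integer.MAX_VALUE);
        dp[0] = 0; // zero jumps to stand at the start
        for (int i = 0; i < n; i++) {
            if (dp[i] == Integer.MAX_VALUE) continue; // index i unreachable
            for (int j = i + 1; j <= Math.min(i + nums[i], n - 1); j++) {
                dp[j] = Math.min(dp[j], dp[i] + 1); // one more jump from i
            }
        }
        return dp[n - 1];
    }
}
```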
Greedy = local optimality leads to global optimality. At each step, you make the best available choice without reconsidering past decisions. This only works when the problem has the greedy choice property and optimal substructure.
Sort first, then scan. Most greedy problems require sorting by a key metric (end time, deadline, ratio, etc.) before the greedy scan. The sort establishes the order that makes greedy choices safe.
Track one or two running variables. The farthest-reach pattern (Jump Game) collapses the entire state space into farthest and currentEnd. Activity selection just needs prevEnd and count. Greedy solutions are elegant because they are minimal.
If you cannot prove correctness, try a counterexample. The hardest part of greedy is knowing when it works. Try to construct a small input where the greedy choice leads to a worse outcome. If you can, use DP. If you cannot, the greedy is very likely correct.
Interval problems are the bread and butter of greedy. Activity selection, non-overlapping intervals, meeting rooms, and merge intervals all follow the same template: sort by end time, scan left to right, pick or skip. Master this family and you cover a huge chunk of greedy interview questions.