Break complex problems into overlapping subproblems and cache results to avoid redundant computation.
Dynamic Programming (DP) is an optimization technique that solves complex problems by breaking them into smaller subproblems, solving each subproblem only once, and storing the result. A problem is a candidate for DP when it exhibits two key properties: overlapping subproblems (the same subproblems recur many times during the computation) and optimal substructure (an optimal solution can be built from optimal solutions to its subproblems).
Let's see this with Fibonacci. A naive recursion tree for fib(5) recomputes the same values many times:
f(3) is computed twice, f(2) three times. For fib(n), naive recursion does O(2^n) work.
With memoization, we compute each subproblem once. f(3) and f(2) are returned from cache. Total work: O(n).
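To make the cost gap concrete, here is a sketch that counts recursive calls in both versions (the `FibCost` class name and the call counters are my additions, for illustration only):

```java
import java.util.HashMap;
import java.util.Map;

class FibCost {
    static int naiveCalls = 0;
    static int memoCalls = 0;
    static Map<Integer, Integer> memo = new HashMap<>();

    // Naive recursion: every call can branch into two more calls.
    static int fibNaive(int n) {
        naiveCalls++;
        if (n <= 1) return n;
        return fibNaive(n - 1) + fibNaive(n - 2);
    }

    // Memoized recursion: each subproblem is computed at most once,
    // so the call count grows linearly in n.
    static int fibMemo(int n) {
        memoCalls++;
        if (n <= 1) return n;
        if (memo.containsKey(n)) return memo.get(n);
        int res = fibMemo(n - 1) + fibMemo(n - 2);
        memo.put(n, res);
        return res;
    }
}
```

For fib(5), the naive version makes 15 calls; the memoized version makes 9 (one computing call per value, plus a few cache hits).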
DP vs Divide & Conquer: Both break problems into subproblems, but divide & conquer (like merge sort) has non-overlapping subproblems, so caching doesn't help. DP vs Greedy: Greedy makes locally optimal choices and never looks back. DP considers all possible choices at each step, guaranteeing a global optimum.
Look for these recognition signals in the problem statement. If you spot one, DP is very likely the intended approach.
- "Find the minimum/maximum cost to..."
- "Count the number of ways to..."
- "Is it possible to..." (yes/no)
- "Can you define it in terms of smaller subproblems?"
Not DP? If the problem has no overlapping subproblems (each subproblem is unique), use divide & conquer. If a locally optimal choice always leads to a globally optimal solution (e.g., activity selection with greedy), DP is overkill. If the problem asks for all permutations or the actual path, you may need backtracking instead.
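The greedy pitfall is easy to demonstrate with coin change. With coins {1, 3, 4} and amount 6 (my own example instance; the `GreedyVsDp` class name is also mine), greedy's largest-coin-first strategy returns 4+1+1 = three coins, while DP finds 3+3 = two:

```java
import java.util.Arrays;

class GreedyVsDp {
    // Greedy: repeatedly take the largest coin that still fits.
    // Locally optimal, but can miss the global optimum.
    static int greedy(int[] coins, int amount) {
        int[] sorted = coins.clone();
        Arrays.sort(sorted);
        int count = 0;
        for (int i = sorted.length - 1; i >= 0 && amount > 0; i--) {
            count += amount / sorted[i];
            amount %= sorted[i];
        }
        return amount == 0 ? count : -1;
    }

    // DP: considers every coin at every amount, so it cannot miss.
    static int dp(int[] coins, int amount) {
        int[] best = new int[amount + 1];
        Arrays.fill(best, amount + 1);  // "infinity" sentinel
        best[0] = 0;
        for (int i = 1; i <= amount; i++)
            for (int c : coins)
                if (c <= i) best[i] = Math.min(best[i], best[i - c] + 1);
        return best[amount] > amount ? -1 : best[amount];
    }
}
```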
Follow these five steps for every DP problem. This structured approach eliminates guesswork and works universally, from 1D arrays to complex multi-dimensional states.
1. Define the state: what does dp[i] (or dp[i][j]) represent? This is the most critical step. The state must capture enough information to make decisions and compute the answer. Example: for Coin Change, dp[i] = minimum coins needed to make amount i.
2. Write the recurrence: how does dp[i] relate to smaller states? Express the current state in terms of previous states. Example: dp[i] = min(dp[i - coin] + 1) for each valid coin. This is the "transition function."
3. Set the base cases: for Coin Change, dp[0] = 0 (zero coins needed to make amount 0). Base cases anchor the recursion and prevent infinite loops.
4. Choose the fill order: bottom-up guarantees that by the time you compute dp[i], all states it depends on are already computed. Top-down: let recursion + memoization handle the order automatically. Either approach works; bottom-up avoids stack overhead.
5. Locate the answer: usually dp[n] or dp[m][n], but sometimes you need max(dp[...]) across all states. Example: for LIS, the answer is max(dp[0..n-1]), not just dp[n-1].

Example: Climbing Stairs. You can climb 1 or 2 steps. How many distinct ways to reach step n?
- State: dp[i] = number of ways to reach step i
- Recurrence: dp[i] = dp[i-1] + dp[i-2] (arrive from step i-1 or i-2)
- Base cases: dp[0] = 1, dp[1] = 1
- Answer: dp[n]

There are two ways to implement every DP solution. Both produce identical results. Let's compare them side by side using Fibonacci.
```java
// Top-down: memoized recursion
Map<Integer, Integer> memo = new HashMap<>();

int fib(int n) {
    if (n <= 1) return n;
    if (memo.containsKey(n)) return memo.get(n);
    int res = fib(n - 1) + fib(n - 2);
    memo.put(n, res);
    return res;
}
```
```java
// Bottom-up: iterative tabulation
int fib(int n) {
    if (n <= 1) return n;
    int[] dp = new int[n + 1];
    dp[0] = 0;
    dp[1] = 1;
    for (int i = 2; i <= n; i++) dp[i] = dp[i - 1] + dp[i - 2];
    return dp[n];
}
```
Which to choose? Start with top-down (memoization) when figuring out a problem; it maps directly to the recurrence. Then convert to bottom-up for interviews, as it is generally faster in practice and easier to space-optimize. For most LeetCode problems, bottom-up is the expected approach.
Most DP problems fall into one of these four categories. Recognizing which pattern applies immediately tells you the state definition and recurrence shape.
dp[i] depends on dp[i-1] and/or dp[i-2]. Single array, linear scan.
```java
// House Robber: dp[i] = max money robbing houses 0..i
dp[i] = Math.max(dp[i-1], dp[i-2] + nums[i]);
```
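The one-line recurrence expands into a complete method along these lines (a sketch using LeetCode's `rob` signature; the edge-case handling is my addition):

```java
class HouseRobber {
    // dp[i] = max money obtainable from houses 0..i.
    // At each house: skip it (dp[i-1]) or rob it (dp[i-2] + nums[i]).
    static int rob(int[] nums) {
        if (nums.length == 0) return 0;
        if (nums.length == 1) return nums[0];
        int[] dp = new int[nums.length];
        dp[0] = nums[0];
        dp[1] = Math.max(nums[0], nums[1]);
        for (int i = 2; i < nums.length; i++)
            dp[i] = Math.max(dp[i - 1], dp[i - 2] + nums[i]);
        return dp[nums.length - 1];
    }
}
```

For nums = {2, 7, 9, 3, 1} this returns 12 (rob houses 0, 2, and 4).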
dp[i][j] depends on neighboring cells. Grid traversal, fill row by row.
```java
// Unique Paths: dp[i][j] = paths to cell (i,j)
dp[i][j] = dp[i-1][j] + dp[i][j-1];
```
3x5 grid: 15 unique paths to bottom-right
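Filled out, the recurrence becomes (a sketch following the LeetCode `uniquePaths(m, n)` signature):

```java
class UniquePaths {
    // dp[i][j] = number of paths from (0,0) to (i,j), moving only right or down.
    static int uniquePaths(int m, int n) {
        int[][] dp = new int[m][n];
        for (int i = 0; i < m; i++) dp[i][0] = 1; // first column: one way (straight down)
        for (int j = 0; j < n; j++) dp[0][j] = 1; // first row: one way (straight right)
        for (int i = 1; i < m; i++)
            for (int j = 1; j < n; j++)
                dp[i][j] = dp[i - 1][j] + dp[i][j - 1]; // arrive from above or the left
        return dp[m - 1][n - 1];
    }
}
```

uniquePaths(3, 5) returns 15, matching the 3x5 grid count above.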
At each item, choose to include or exclude it. The recurrence branches into two choices.
```java
// 0/1 Knapsack: dp[i][w] = max value using items 0..i with capacity w
dp[i][w] = Math.max(
    dp[i-1][w],                  // skip item i
    dp[i-1][w - wt[i]] + val[i]  // take item i
);
```
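As a runnable version of the include/exclude recurrence (a sketch; the `knapsack` name and the shift to 1-indexed rows are my conventions):

```java
class Knapsack {
    // dp[i][w] = best value achievable using the first i items with capacity w.
    static int knapsack(int[] wt, int[] val, int cap) {
        int n = wt.length;
        int[][] dp = new int[n + 1][cap + 1]; // row 0 = no items = value 0
        for (int i = 1; i <= n; i++) {
            for (int w = 0; w <= cap; w++) {
                dp[i][w] = dp[i - 1][w];          // skip item i-1
                if (wt[i - 1] <= w)
                    dp[i][w] = Math.max(dp[i][w], // or take item i-1
                        dp[i - 1][w - wt[i - 1]] + val[i - 1]);
            }
        }
        return dp[n][cap];
    }
}
```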
Two strings mapped to a 2D grid. Diagonal, left, and up transitions for match/mismatch.
```java
// LCS: dp[i][j] = LCS length of s1's first i chars and s2's first j chars
if (s1.charAt(i-1) == s2.charAt(j-1))
    dp[i][j] = dp[i-1][j-1] + 1;                 // match: diagonal + 1
else
    dp[i][j] = Math.max(dp[i-1][j], dp[i][j-1]); // mismatch: best of skipping either
```
|   |   | a | b | c | d | e |
|---|---|---|---|---|---|---|
|   | 0 | 0 | 0 | 0 | 0 | 0 |
| a | 0 | 1 | 1 | 1 | 1 | 1 |
| c | 0 | 1 | 1 | 2 | 2 | 2 |
| e | 0 | 1 | 1 | 2 | 2 | 3 |
LCS("ace", "abcde") = 3
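Fleshed out into a complete method, the String DP recurrence reads (a sketch; `lcs` is my method name):

```java
class Lcs {
    // dp[i][j] = LCS length of s1's first i chars and s2's first j chars.
    static int lcs(String s1, String s2) {
        int m = s1.length(), n = s2.length();
        int[][] dp = new int[m + 1][n + 1]; // row/col 0 = empty prefix = 0
        for (int i = 1; i <= m; i++)
            for (int j = 1; j <= n; j++)
                if (s1.charAt(i - 1) == s2.charAt(j - 1))
                    dp[i][j] = dp[i - 1][j - 1] + 1;                 // match: extend diagonal
                else
                    dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // mismatch: skip a char
        return dp[m][n];
    }
}
```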
Given coins = [1, 2, 5] and amount = 11, find the minimum number of coins to make that amount. Let's apply the 5-step framework and watch the DP table fill.
- State: dp[i] = minimum coins to make amount i
- Recurrence: dp[i] = min(dp[i - c] + 1) for each coin c where i - c ≥ 0
- Base case: dp[0] = 0 (0 coins for amount 0)
- Answer: dp[11]

Each cell shows the minimum coins needed. The small label shows which coin was used last.
| i | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| dp[i] | 0 | 1 | 1 | 2 | 2 | 1 | 2 | 2 | 3 | 3 | 2 | 3 |
| last coin | – | +1 | +2 | +1 | +2 | +5 | +1 | +2 | +1 | +2 | +5 | +1 |
```java
int coinChange(int[] coins, int amount) {
    int[] dp = new int[amount + 1];
    Arrays.fill(dp, amount + 1);   // "infinity" sentinel
    dp[0] = 0;                     // base case
    for (int i = 1; i <= amount; i++) {
        for (int coin : coins) {
            if (coin <= i) {
                dp[i] = Math.min(dp[i], dp[i - coin] + 1);
            }
        }
    }
    return dp[amount] > amount ? -1 : dp[amount];
}
```
Tracing the solution: dp[11] = 3 comes from dp[10] + 1 (coin 1). dp[10] = 2 comes from dp[5] + 1 (coin 5). dp[5] = 1 comes from dp[0] + 1 (coin 5). So the coins used are: 5 + 5 + 1 = 11.
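That backward trace can be automated: after filling the table, walk back from the target amount, at each step taking any coin c with dp[i - c] == dp[i] - 1. A sketch (the `CoinPath` class and `reconstruct` name are mine):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class CoinPath {
    static List<Integer> reconstruct(int[] coins, int amount) {
        // Fill the dp table exactly as in coinChange.
        int[] dp = new int[amount + 1];
        Arrays.fill(dp, amount + 1);
        dp[0] = 0;
        for (int i = 1; i <= amount; i++)
            for (int c : coins)
                if (c <= i) dp[i] = Math.min(dp[i], dp[i - c] + 1);

        List<Integer> used = new ArrayList<>();
        if (dp[amount] > amount) return used; // amount unreachable: empty list
        // Walk back: some coin must explain each optimal value.
        for (int i = amount; i > 0; ) {
            for (int c : coins) {
                if (c <= i && dp[i - c] == dp[i] - 1) {
                    used.add(c);
                    i -= c;
                    break;
                }
            }
        }
        return used;
    }
}
```

For coins {1, 2, 5} and amount 11 this yields three coins summing to 11, consistent with the 5 + 5 + 1 trace.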
Many DP problems use an entire table but only reference the previous row (or previous few values). You can reduce space by keeping only what you need.
If dp[i] only depends on dp[i-1] and dp[i-2], you don't need the full array. Just keep two variables.
```java
// O(n) space
int[] dp = new int[n + 1];
dp[0] = 1;
dp[1] = 1;
for (int i = 2; i <= n; i++) dp[i] = dp[i - 1] + dp[i - 2];
return dp[n];
```
```java
// O(1) space: only the last two values matter
int prev2 = 1, prev1 = 1;
for (int i = 2; i <= n; i++) {
    int curr = prev1 + prev2;
    prev2 = prev1;
    prev1 = curr;
}
return prev1;
```
For 2D DP where dp[i][j] only depends on row i-1, keep just one row and overwrite in place.
```java
// Unique Paths: O(m*n) space -> O(n) space
int[] dp = new int[n];
Arrays.fill(dp, 1);             // first row: all 1s
for (int i = 1; i < m; i++)
    for (int j = 1; j < n; j++)
        dp[j] += dp[j - 1];     // dp[j] already holds the value from "above"
return dp[n - 1];
```
When can you optimize? Look at which states each cell references. If it only looks at the previous row (or a fixed window of previous values), you can compress. The Coin Change problem already uses 1D because dp[i] references dp[i - coin] in the same array, so it's already optimal at O(amount) space.
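As an example of compressing a 2D table, here is LCS in O(n) space (a sketch; the `LcsRolling` name is mine). Since dp[i][j] only reads row i-1 and the current row, two rows suffice:

```java
class LcsRolling {
    static int lcs(String s1, String s2) {
        int n = s2.length();
        int[] prev = new int[n + 1]; // row i-1
        int[] curr = new int[n + 1]; // row i (every cell is rewritten each pass)
        for (int i = 1; i <= s1.length(); i++) {
            for (int j = 1; j <= n; j++) {
                if (s1.charAt(i - 1) == s2.charAt(j - 1))
                    curr[j] = prev[j - 1] + 1;
                else
                    curr[j] = Math.max(prev[j], curr[j - 1]);
            }
            int[] tmp = prev; prev = curr; curr = tmp; // swap instead of reallocating
        }
        return prev[n];
    }
}
```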
These are the essential LeetCode problems for mastering dynamic programming. They are grouped by sub-pattern and listed in rough order of difficulty.
Practice tip: Start with #70 (Climbing Stairs) to internalize the framework. Then do #198 (House Robber) for the include/exclude pattern. #322 (Coin Change) is the gateway to knapsack-type problems. #1143 (LCS) and #72 (Edit Distance) teach you 2D string DP. Master these and you can handle most DP problems on LeetCode.
DP complexity varies by problem, but a universal formula applies: running time = (number of states) × (work per state).
| Problem | States | Transitions | Time | Space |
|---|---|---|---|---|
| Climbing Stairs | n | 2 | O(n) | O(1)* |
| Coin Change | amount (n) | k coins | O(n*k) | O(n) |
| LCS | m * n | 1 | O(mn) | O(n)* |
| Edit Distance | m * n | 3 | O(mn) | O(n)* |
| LIS | n | n | O(n²) | O(n) |
* with space optimization (rolling array)
Quick estimation: Count the number of unique states (the size of your DP array/table), then multiply by the cost of computing each state. That product bounds the running time; the state count alone bounds the space (before any rolling-array optimization).
The 5-step framework is universal. Define state, write recurrence, set base cases, determine fill order, locate the answer. This works for every single DP problem, from Fibonacci to complex multi-dimensional problems.
Defining the state is the hardest part. If you cannot clearly state what dp[i] represents in plain English, the rest of the solution will not follow. Spend the most time here.
Start top-down, optimize bottom-up. Memoized recursion lets you think in the natural recursive structure of the problem. Once your solution works, convert to tabulation for better constant-factor performance and space optimization.
Recognize the four sub-patterns. 1D DP, 2D grid DP, Knapsack (include/exclude), and String DP cover the vast majority of DP problems. Identifying which one applies narrows your approach immediately.
Space optimization is a follow-up, not a first step. Get the correct O(n) or O(mn) solution first. Then observe which previous states each cell needs and compress to a rolling array. Interviewers appreciate this structured approach.