Answer range queries and perform point or range updates in O(log n) using a binary tree where each node stores an aggregate over a subarray.
A segment tree is a binary tree where each node stores an aggregate value (sum, minimum, maximum, GCD, etc.) over a contiguous range of the original array. Leaf nodes correspond to individual array elements, and each internal node merges the results of its two children.
An array of n elements is represented as a balanced binary tree of height O(log n). Each node covers a range [l, r] and stores the aggregate for that range. Queries and updates both take O(log n) by traversing from the root down to the relevant leaves.
Why not just use prefix sums? Prefix sums give O(1) range queries but O(n) point updates. A segment tree gives O(log n) for both queries and updates. When the array is mutable and you need many queries interleaved with updates, the segment tree is the go-to data structure.
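To see the tradeoff concretely, here is a minimal prefix-sum sketch (the class name is illustrative): queries are O(1), but a single point update forces rebuilding every prefix after the changed index.

```java
class PrefixSum {
    private final int[] nums;
    private final int[] prefix; // prefix[i] = sum of nums[0..i-1]

    PrefixSum(int[] nums) {
        this.nums = nums.clone();
        this.prefix = new int[nums.length + 1];
        for (int i = 0; i < nums.length; i++) prefix[i + 1] = prefix[i] + nums[i];
    }

    // O(1) range sum of nums[l..r]
    int query(int l, int r) {
        return prefix[r + 1] - prefix[l];
    }

    // O(n) point update: every prefix from idx onward must be recomputed
    void update(int idx, int val) {
        nums[idx] = val;
        for (int i = idx; i < nums.length; i++) prefix[i + 1] = prefix[i] + nums[i];
    }
}
```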
Look for these recognition signals in the problem statement. If you spot one, a segment tree is very likely the intended approach.
"Range query" with "update"
"Count of elements" in a range
"Interval scheduling" or "skyline"
"Inversions" or "smaller after self"
The key clue: You need to repeatedly answer aggregate queries (sum, min, max, count) over arbitrary subranges of an array that is being modified between queries. Brute-force recomputation would be O(n) per query. The segment tree reduces each operation to O(log n) by decomposing any range into O(log n) pre-computed segments.
The simplest segment tree variant. Each node stores the sum of its range. Point updates modify a single leaf and propagate the change upward. Range queries decompose [l, r] into O(log n) nodes and sum them.
Build the tree bottom-up, then query sum(1, 3) and update index 2 to 7.
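One compact way to realize this bottom-up build is the iterative segment tree, sketched below. It stores leaves at indices n..2n-1 of a flat array and builds parents in a single backward pass; the class name and layout are illustrative, and the recursive template later in this article is an equivalent alternative.

```java
class IterSegTree {
    private final int n;
    private final int[] tree; // leaves at tree[n..2n-1], internal nodes at tree[1..n-1]

    IterSegTree(int[] nums) {
        n = nums.length;
        tree = new int[2 * n];
        for (int i = 0; i < n; i++) tree[n + i] = nums[i];                   // copy leaves
        for (int i = n - 1; i >= 1; i--) tree[i] = tree[2 * i] + tree[2 * i + 1]; // build parents bottom-up
    }

    // Point update: set nums[idx] = val, then recompute ancestors
    void update(int idx, int val) {
        int i = n + idx;
        tree[i] = val;
        for (i /= 2; i >= 1; i /= 2) tree[i] = tree[2 * i] + tree[2 * i + 1];
    }

    // Range query: sum of nums[l..r]
    int query(int l, int r) {
        int sum = 0;
        for (int lo = n + l, hi = n + r + 1; lo < hi; lo /= 2, hi /= 2) {
            if ((lo & 1) == 1) sum += tree[lo++]; // lo is a right child: take it, step past
            if ((hi & 1) == 1) sum += tree[--hi]; // hi is a right boundary: take the node left of it
        }
        return sum;
    }
}
```

With [1, 3, 5, 7, 9, 2], `query(1, 3)` returns 3 + 5 + 7 = 15; after `update(2, 7)` it returns 3 + 7 + 7 = 17.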
When you need to update an entire range at once (e.g., "add 5 to every element from index 2 to 6"), visiting each leaf would be O(n). Lazy propagation defers updates to child nodes until they are actually needed, keeping both range updates and range queries at O(log n).
Each node gets a "lazy" tag that records a pending update. When we visit a node during a query or update, we first push down any pending lazy value to its children before proceeding.
Choosing the variant: If you only need point updates (modify one element at a time), the basic segment tree without lazy propagation is simpler and sufficient. Use lazy propagation when the problem requires range updates (add a value to all elements in [l, r]) combined with range queries.
Here is the annotated Java template for a segment tree with point update and range sum query. The tree is stored in a flat array of size 4n.
```java
int[] tree;
int n;

// Build the segment tree from array nums
void build(int[] nums) {
    n = nums.length;
    tree = new int[4 * n];
    buildHelper(1, 0, n - 1, nums);
}

void buildHelper(int node, int start, int end, int[] nums) {
    if (start == end) {
        tree[node] = nums[start]; // leaf node
        return;
    }
    int mid = (start + end) / 2;
    buildHelper(2 * node, start, mid, nums);          // left child
    buildHelper(2 * node + 1, mid + 1, end, nums);    // right child
    tree[node] = tree[2 * node] + tree[2 * node + 1]; // merge
}

// Point update: set nums[idx] = val
void update(int node, int start, int end, int idx, int val) {
    if (start == end) {
        tree[node] = val;
        return;
    }
    int mid = (start + end) / 2;
    if (idx <= mid) update(2 * node, start, mid, idx, val);
    else update(2 * node + 1, mid + 1, end, idx, val);
    tree[node] = tree[2 * node] + tree[2 * node + 1]; // re-merge
}

// Range query: sum of nums[l..r]
int query(int node, int start, int end, int l, int r) {
    if (r < start || end < l) return 0;            // no overlap
    if (l <= start && end <= r) return tree[node]; // total overlap
    int mid = (start + end) / 2;
    return query(2 * node, start, mid, l, r)
         + query(2 * node + 1, mid + 1, end, l, r); // partial overlap
}
```
```java
int[] lazy; // initialize to 0

void pushDown(int node, int start, int end) {
    if (lazy[node] != 0) {
        int mid = (start + end) / 2;
        applyLazy(2 * node, start, mid, lazy[node]);
        applyLazy(2 * node + 1, mid + 1, end, lazy[node]);
        lazy[node] = 0;
    }
}

void applyLazy(int node, int start, int end, int val) {
    tree[node] += val * (end - start + 1); // sum aggregate
    lazy[node] += val;                     // accumulate for children
}
```
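The pushDown and applyLazy helpers above are driven by range-update and range-query routines. Here is one self-contained sketch that puts the pieces together as a class (the wrapper and method names are illustrative, not part of the original template):

```java
class LazySegTree {
    private final int n;
    private final int[] tree, lazy;

    LazySegTree(int[] nums) {
        n = nums.length;
        tree = new int[4 * n];
        lazy = new int[4 * n];
        build(1, 0, n - 1, nums);
    }

    private void build(int node, int start, int end, int[] nums) {
        if (start == end) { tree[node] = nums[start]; return; }
        int mid = (start + end) / 2;
        build(2 * node, start, mid, nums);
        build(2 * node + 1, mid + 1, end, nums);
        tree[node] = tree[2 * node] + tree[2 * node + 1];
    }

    private void applyLazy(int node, int start, int end, int val) {
        tree[node] += val * (end - start + 1); // sum aggregate
        lazy[node] += val;                     // accumulate for children
    }

    private void pushDown(int node, int start, int end) {
        if (lazy[node] != 0) {
            int mid = (start + end) / 2;
            applyLazy(2 * node, start, mid, lazy[node]);
            applyLazy(2 * node + 1, mid + 1, end, lazy[node]);
            lazy[node] = 0;
        }
    }

    // Range update: add val to every element of nums[l..r]
    void update(int l, int r, int val) { updateRange(1, 0, n - 1, l, r, val); }

    private void updateRange(int node, int start, int end, int l, int r, int val) {
        if (r < start || end < l) return;                 // no overlap
        if (l <= start && end <= r) {                     // total overlap: tag and stop
            applyLazy(node, start, end, val);
            return;
        }
        pushDown(node, start, end);                       // clear pending tag first
        int mid = (start + end) / 2;
        updateRange(2 * node, start, mid, l, r, val);
        updateRange(2 * node + 1, mid + 1, end, l, r, val);
        tree[node] = tree[2 * node] + tree[2 * node + 1]; // re-merge
    }

    // Range query: sum of nums[l..r]
    int query(int l, int r) { return queryRange(1, 0, n - 1, l, r); }

    private int queryRange(int node, int start, int end, int l, int r) {
        if (r < start || end < l) return 0;
        if (l <= start && end <= r) return tree[node];
        pushDown(node, start, end);
        int mid = (start + end) / 2;
        return queryRange(2 * node, start, mid, l, r)
             + queryRange(2 * node + 1, mid + 1, end, l, r);
    }
}
```

Note that a range update stops at the highest fully-covered node and just tags it; the tag is pushed down lazily by later queries or updates that pass through.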
Why 4n space? The tree array needs at most 4n slots. In the worst case, when n is not a power of 2, the last level of the implicit binary tree may have gaps. Using 4n guarantees no out-of-bounds access. For n that is a power of 2, 2n would suffice.
Let us trace through building a segment tree from [1, 3, 5, 7, 9, 2], performing a range query, and then a point update.
Building a sum segment tree, then querying sum(2, 4) and updating index 3 to 1.
| OPERATION | PATH | NODES VISITED | RESULT |
|---|---|---|---|
| Build | Bottom-up | All 11 nodes | Root = 27 |
| query(2, 4) | root → both children | [0,5] → [0,2]: [2,2]✓ · [3,5]: [3,4]✓ | 5 + 16 = 21 |
| update(3, 1) | root → right → left → leaf | [0,5] → [3,5] → [3,4] → [3,3] | leaf 7→1, propagate -6 up |
| query(2, 4) again | Same path | [2,2]=5 + [3,4]=10 | 5 + 10 = 15 |
Notice the efficiency: The query sum(2, 4) does not visit all 3 leaves individually. It picks up node [2,2] (a single leaf) and node [3,4] (an internal node covering two elements) for a total of just 2 contributions. For larger arrays, the savings are dramatic: a range of 1000 elements might be covered by only ~20 nodes.
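The numbers in this trace can be double-checked by wrapping the earlier template in a runnable class (the wrapper class and public method signatures here are illustrative):

```java
class SumSegTree {
    private final int n;
    private final int[] tree;

    SumSegTree(int[] nums) {
        n = nums.length;
        tree = new int[4 * n];
        build(1, 0, n - 1, nums);
    }

    private void build(int node, int start, int end, int[] nums) {
        if (start == end) { tree[node] = nums[start]; return; }
        int mid = (start + end) / 2;
        build(2 * node, start, mid, nums);
        build(2 * node + 1, mid + 1, end, nums);
        tree[node] = tree[2 * node] + tree[2 * node + 1];
    }

    void update(int idx, int val) { update(1, 0, n - 1, idx, val); }

    private void update(int node, int start, int end, int idx, int val) {
        if (start == end) { tree[node] = val; return; }
        int mid = (start + end) / 2;
        if (idx <= mid) update(2 * node, start, mid, idx, val);
        else update(2 * node + 1, mid + 1, end, idx, val);
        tree[node] = tree[2 * node] + tree[2 * node + 1];
    }

    int query(int l, int r) { return query(1, 0, n - 1, l, r); }

    private int query(int node, int start, int end, int l, int r) {
        if (r < start || end < l) return 0;
        if (l <= start && end <= r) return tree[node];
        int mid = (start + end) / 2;
        return query(2 * node, start, mid, l, r)
             + query(2 * node + 1, mid + 1, end, l, r);
    }
}
```

Building from [1, 3, 5, 7, 9, 2] gives a root of 27; `query(2, 4)` returns 21, and after `update(3, 1)` it returns 15, matching the table.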
These are classic LeetCode problems that use the segment tree pattern, listed in rough order of difficulty:

- #307 Range Sum Query - Mutable
- #315 Count of Smaller Numbers After Self
- #218 The Skyline Problem
- #732 My Calendar III
Practice tip: Start with #307 which is the direct application of the basic segment tree template. Then tackle #315 which requires a creative twist: process the array from right to left and use the segment tree to count how many previously-seen elements are smaller. #218 and #732 use the segment tree for interval/event-based problems.
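The right-to-left counting idea for #315 can be sketched as follows: coordinate-compress the values to ranks, then scan from the right, querying how many smaller ranks have already been inserted before inserting the current one. This sketch uses a compact iterative sum tree over ranks; the class and variable names are illustrative.

```java
import java.util.*;

class CountSmaller {
    // For each element, count strictly smaller elements to its right.
    static List<Integer> countSmaller(int[] nums) {
        // Coordinate-compress values to ranks 0..m-1
        int[] sorted = nums.clone();
        Arrays.sort(sorted);
        Map<Integer, Integer> rank = new HashMap<>();
        for (int v : sorted) rank.putIfAbsent(v, rank.size());

        int m = rank.size();
        int[] tree = new int[2 * m]; // iterative sum tree indexed by rank
        LinkedList<Integer> res = new LinkedList<>();
        for (int i = nums.length - 1; i >= 0; i--) {
            int r = rank.get(nums[i]);
            // Query: how many already-inserted elements have rank in [0, r-1]?
            int count = 0;
            for (int lo = m, hi = m + r; lo < hi; lo /= 2, hi /= 2) {
                if ((lo & 1) == 1) count += tree[lo++];
                if ((hi & 1) == 1) count += tree[--hi];
            }
            res.addFirst(count);
            // Point update: add 1 at rank r (every ancestor's sum grows by 1)
            for (int j = m + r; j >= 1; j /= 2) tree[j] += 1;
        }
        return res;
    }
}
```

For [5, 2, 6, 1] this yields [2, 1, 1, 0]: to the right of 5 are 2 and 1, to the right of 2 is 1, and so on.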
Build O(n): Each of the n leaves is visited once, and there are n-1 internal nodes. The total work at each level is proportional to the number of nodes at that level, which sums to O(n) across all levels.
Query O(log n): At each level of the tree, at most 2 nodes have their ranges partially overlapping with the query range. All other nodes are either fully inside (returned immediately) or fully outside (pruned). This means we visit at most O(log n) nodes total.
Update O(log n): A point update follows a single root-to-leaf path, updating the aggregate at each level. The tree has O(log n) levels.
Space: The tree array uses O(n) space (specifically 4n slots). With lazy propagation, an additional O(n) lazy array is needed.
Comparison with alternatives: A Fenwick tree (Binary Indexed Tree) also achieves O(log n) per operation for prefix sums but is harder to extend to non-invertible operations like min/max. A segment tree is more versatile: it handles any associative merge function and supports lazy propagation for range updates.
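For comparison, a Fenwick tree for sums is considerably shorter; the subtraction in rangeSum below is exactly where the invertibility requirement shows up, which is why the same trick does not work for min or max. A minimal sketch (names are illustrative):

```java
class Fenwick {
    private final int[] bit; // 1-indexed internally

    Fenwick(int n) { bit = new int[n + 1]; }

    // Point update: add delta at 0-based index idx
    void add(int idx, int delta) {
        for (int i = idx + 1; i < bit.length; i += i & -i) bit[i] += delta;
    }

    // Prefix sum of nums[0..idx]
    int prefix(int idx) {
        int s = 0;
        for (int i = idx + 1; i > 0; i -= i & -i) s += bit[i];
        return s;
    }

    // Range sum of nums[l..r]: relies on subtraction, so sum only
    int rangeSum(int l, int r) {
        return prefix(r) - (l > 0 ? prefix(l - 1) : 0);
    }
}
```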
Divide and conquer on ranges. The segment tree recursively splits the array into halves. Each node owns a range [l, r] and stores the aggregate for that range. This divide-and-conquer structure is what enables O(log n) queries and updates.
Three cases in every query. When querying [ql, qr] at a node covering [l, r]: (1) no overlap → return identity, (2) total overlap → return node value, (3) partial overlap → recurse into both children and merge. Master this three-way check and you can implement any segment tree.
Lazy propagation defers work. For range updates, do not visit every leaf. Store a pending update at the highest node that fully covers the update range, and push it down only when needed. This keeps range updates at O(log n).
Works with any associative merge. Sum, min, max, GCD, bitwise OR, count of zeros, matrix products. As long as the merge operation is associative, the segment tree structure applies. Change the merge function and identity element to adapt.
Use 4n space for safety. The flat array representation uses indices 1 to ~4n. Children of node i are at 2i and 2i+1. This implicit representation avoids pointer overhead and is cache-friendly, making it fast in practice.
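The "any associative merge" takeaway can be made concrete by parameterizing the tree over its merge function and identity element. This sketch uses the standard `java.util.function.IntBinaryOperator` and assumes a commutative merge (sum, min, max, GCD, OR all qualify); the class name is illustrative.

```java
import java.util.function.IntBinaryOperator;

class GenericSegTree {
    private final int n;
    private final int[] tree;
    private final IntBinaryOperator merge;
    private final int identity;

    GenericSegTree(int[] nums, IntBinaryOperator merge, int identity) {
        this.n = nums.length;
        this.merge = merge;
        this.identity = identity;
        tree = new int[2 * n];
        for (int i = 0; i < n; i++) tree[n + i] = nums[i];
        for (int i = n - 1; i >= 1; i--)
            tree[i] = merge.applyAsInt(tree[2 * i], tree[2 * i + 1]);
    }

    // Point update: set nums[idx] = val
    void update(int idx, int val) {
        int i = n + idx;
        tree[i] = val;
        for (i /= 2; i >= 1; i /= 2)
            tree[i] = merge.applyAsInt(tree[2 * i], tree[2 * i + 1]);
    }

    // Aggregate of nums[l..r] under the chosen merge (assumed commutative)
    int query(int l, int r) {
        int res = identity;
        for (int lo = n + l, hi = n + r + 1; lo < hi; lo /= 2, hi /= 2) {
            if ((lo & 1) == 1) res = merge.applyAsInt(res, tree[lo++]);
            if ((hi & 1) == 1) res = merge.applyAsInt(res, tree[--hi]);
        }
        return res;
    }
}
```

Usage: `new GenericSegTree(nums, Integer::sum, 0)` for range sums, or `new GenericSegTree(nums, Math::min, Integer.MAX_VALUE)` for range minimums; only the merge function and identity change.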