Move from brute-force thinking to an efficient approach: build a hash map from employee ID to employee record so each lookup is O(1), then a single DFS or BFS traversal sums the importance values.
You have a data structure of employee information, including the employee's unique ID, importance value, and direct subordinates' IDs.
You are given an array of employees employees where:

- employees[i].id is the ID of the ith employee.
- employees[i].importance is the importance value of the ith employee.
- employees[i].subordinates is a list of the IDs of the direct subordinates of the ith employee.

Given an integer id that represents an employee's ID, return the total importance value of this employee and all their direct and indirect subordinates.
Example 1:
Input: employees = [[1,5,[2,3]],[2,3,[]],[3,3,[]]], id = 1
Output: 11
Explanation: Employee 1 has an importance value of 5 and has two direct subordinates: employee 2 and employee 3. They both have an importance value of 3. Thus, the total importance value of employee 1 is 5 + 3 + 3 = 11.
Example 2:
Input: employees = [[1,2,[5]],[5,-3,[]]], id = 5
Output: -3
Explanation: Employee 5 has an importance value of -3 and has no direct subordinates. Thus, the total importance value of employee 5 is -3.
Constraints:
- 1 <= employees.length <= 2000
- 1 <= employees[i].id <= 2000
- employees[i].id are unique.
- -100 <= employees[i].importance <= 100
- employees[i].subordinates are valid IDs.
Start with the most direct exhaustive search. That gives a correctness anchor before optimizing.
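The most direct exhaustive search skips the hash map entirely and re-scans the employee list for every lookup, giving O(n) per lookup and O(n^2) total. A minimal sketch (function and helper names are hypothetical), using the tuple form from the examples:

```python
def total_importance_brute(employees, id):
    # Each employee is (id, importance, subordinates), as in the examples.
    def find(i):
        # Linear scan instead of a hash map -- O(n) per lookup.
        for e in employees:
            if e[0] == i:
                return e

    stack = [id]
    total = 0
    while stack:
        _, imp, subs = find(stack.pop())
        total += imp
        stack.extend(subs)
    return total
```

This is the correctness anchor; the hash-map versions below replace the linear scan with an O(1) dictionary lookup.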
Pattern signal: Array · Hash Map · Tree
Related problem: Nested List Weight Sum. Source-backed implementations are provided below for direct study and interview prep.
// Accepted solution for LeetCode #690: Employee Importance

/*
// Definition for Employee.
class Employee {
    public int id;
    public int importance;
    public List<Integer> subordinates;
};
*/

class Solution {
    private final Map<Integer, Employee> d = new HashMap<>();

    public int getImportance(List<Employee> employees, int id) {
        // Index every employee by ID for O(1) lookups during the DFS.
        for (var e : employees) {
            d.put(e.id, e);
        }
        return dfs(id);
    }

    private int dfs(int i) {
        Employee e = d.get(i);
        int s = e.importance;
        for (int j : e.subordinates) {
            s += dfs(j);
        }
        return s;
    }
}
// Accepted solution for LeetCode #690: Employee Importance

/**
 * Definition for Employee.
 * type Employee struct {
 *     Id int
 *     Importance int
 *     Subordinates []int
 * }
 */

func getImportance(employees []*Employee, id int) int {
    // Index every employee by ID for O(1) lookups during the DFS.
    d := map[int]*Employee{}
    for _, e := range employees {
        d[e.Id] = e
    }
    var dfs func(int) int
    dfs = func(i int) int {
        s := d[i].Importance
        for _, j := range d[i].Subordinates {
            s += dfs(j)
        }
        return s
    }
    return dfs(id)
}
# Accepted solution for LeetCode #690: Employee Importance
from typing import List

"""
# Definition for Employee.
class Employee:
    def __init__(self, id: int, importance: int, subordinates: List[int]):
        self.id = id
        self.importance = importance
        self.subordinates = subordinates
"""

class Solution:
    def getImportance(self, employees: List["Employee"], id: int) -> int:
        def dfs(i: int) -> int:
            return d[i].importance + sum(dfs(j) for j in d[i].subordinates)

        # Index every employee by ID for O(1) lookups during the DFS.
        d = {e.id: e for e in employees}
        return dfs(id)
// Solution for LeetCode #690: Employee Importance, ported from the Java
// reference above. Note: the Employee struct below mirrors the Java
// definition; adjust it to match the exact LeetCode Rust harness if it differs.
use std::collections::HashMap;

struct Employee {
    id: i32,
    importance: i32,
    subordinates: Vec<i32>,
}

struct Solution;

impl Solution {
    pub fn get_importance(employees: Vec<Employee>, id: i32) -> i32 {
        // Index every employee by ID, then sum importance over the subordinate tree.
        let d: HashMap<i32, &Employee> = employees.iter().map(|e| (e.id, e)).collect();
        fn dfs(d: &HashMap<i32, &Employee>, i: i32) -> i32 {
            let e = d[&i];
            e.importance + e.subordinates.iter().map(|&j| dfs(d, j)).sum::<i32>()
        }
        dfs(&d, id)
    }
}
// Accepted solution for LeetCode #690: Employee Importance

/**
 * Definition for Employee.
 * class Employee {
 *     id: number
 *     importance: number
 *     subordinates: number[]
 *     constructor(id: number, importance: number, subordinates: number[]) {
 *         this.id = (id === undefined) ? 0 : id;
 *         this.importance = (importance === undefined) ? 0 : importance;
 *         this.subordinates = (subordinates === undefined) ? [] : subordinates;
 *     }
 * }
 */

function getImportance(employees: Employee[], id: number): number {
    // Index every employee by ID for O(1) lookups during the DFS.
    const d = new Map<number, Employee>();
    for (const e of employees) {
        d.set(e.id, e);
    }
    const dfs = (i: number): number => {
        let s = d.get(i)!.importance;
        for (const j of d.get(i)!.subordinates) {
            s += dfs(j);
        }
        return s;
    };
    return dfs(id);
}
Use the complexity analysis below as a reusable interview workflow for this problem.
Every employee is visited exactly once, giving O(n) time, plus O(n) space for the hash map. Beyond the map, space depends on traversal choice and tree shape: recursive DFS uses O(h) stack space, where h is the height of the management tree (h = log n for a balanced tree, h = n for a single chain), while BFS holds up to a full level in its queue, which for a wide tree can be O(n) nodes.
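For contrast with the recursive DFS solutions above, a queue-based BFS sketch over the same ID-to-employee map (using the tuple form from the examples; the function name is hypothetical):

```python
from collections import deque

def get_importance_bfs(employees, id):
    # Each employee is (id, importance, subordinates), as in the examples.
    d = {e[0]: e for e in employees}
    total = 0
    q = deque([id])
    while q:
        _, imp, subs = d[q.popleft()]
        total += imp
        q.extend(subs)  # enqueue direct subordinates for later processing
    return total
```

Same O(n) time; the trade-off is queue width versus recursion depth.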
Review these before coding to avoid predictable interview regressions.
Wrong move: Loop endpoints miss first/last candidate.
Usually fails on: minimal arrays and exact-boundary answers.
Fix: Re-derive loops from inclusive/exclusive ranges before coding.
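One way to apply the fix: state the invariant for a half-open range [lo, hi) before writing the loop, as in this lower-bound sketch (an illustrative helper, not part of the problem above):

```python
def lower_bound(a, target):
    # Invariant over the half-open search range [lo, hi):
    # everything in a[:lo] is < target, everything in a[hi:] is >= target.
    lo, hi = 0, len(a)
    while lo < hi:
        mid = (lo + hi) // 2
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return lo  # first index whose value is >= target
```

Because both endpoints follow from the stated invariant, single-element and exact-boundary inputs need no special cases.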
Wrong move: Zero-count keys stay in map and break distinct/count constraints.
Usually fails on: Window/map size checks are consistently off by one.
Fix: Delete keys when count reaches zero.
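A sliding-window sketch illustrating that fix: delete a key the moment its count hits zero so len(count) reflects the true number of distinct elements (the problem and names here are illustrative, not part of Employee Importance):

```python
def longest_k_distinct(s, k):
    # Longest substring of s with at most k distinct characters.
    count = {}
    left = best = 0
    for right, ch in enumerate(s):
        count[ch] = count.get(ch, 0) + 1
        while len(count) > k:
            count[s[left]] -= 1
            if count[s[left]] == 0:
                del count[s[left]]  # drop zero-count keys so len(count) stays accurate
            left += 1
        best = max(best, right - left + 1)
    return best
```

Without the del, len(count) keeps counting characters that have left the window, and the distinct-size check silently breaks.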
Wrong move: Recursive traversal assumes children always exist.
Usually fails on: Leaf nodes throw errors or create wrong depth/path values.
Fix: Handle null/base cases before recursive transitions.
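Applied to this problem, the fix is a guard before recursing: an unknown or missing ID contributes zero instead of raising. A sketch over a tuple-based map (names hypothetical):

```python
def safe_importance(d, i):
    # d maps employee ID -> (importance, subordinate IDs).
    e = d.get(i)
    if e is None:  # base case: missing ID contributes nothing
        return 0
    importance, subordinates = e
    return importance + sum(safe_importance(d, j) for j in subordinates)
```

The null check comes first, so the recursive transition only ever runs on records that exist.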