Move from brute-force thinking to an efficient approach built on hash-map counting and a custom sort.
Given an array of strings words and an integer k, return the k most frequent strings.
Return the answer sorted by the frequency from highest to lowest. Sort the words with the same frequency by their lexicographical order.
Example 1:
Input: words = ["i","love","leetcode","i","love","coding"], k = 2
Output: ["i","love"]
Explanation: "i" and "love" are the two most frequent words. Note that "i" comes before "love" due to a lower alphabetical order.
Example 2:
Input: words = ["the","day","is","sunny","the","the","the","sunny","is","is"], k = 4
Output: ["the","is","sunny","day"]
Explanation: "the", "is", "sunny" and "day" are the four most frequent words, with frequencies 4, 3, 2 and 1 respectively.
Constraints:
1 <= words.length <= 500
1 <= words[i].length <= 10
words[i] consists of lowercase English letters.
k is in the range [1, the number of unique words[i]].
Follow-up: Could you solve it in O(n log(k)) time and O(n) extra space?
Problem summary: return the k most frequent strings in words, ordered by descending frequency and breaking frequency ties lexicographically.
Start with the most direct exhaustive search. That gives a correctness anchor before optimizing.
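As a hedged sketch of that exhaustive baseline (the function name and structure are illustrative, not from the source): re-count every candidate by scanning the whole array, then repeatedly pick the best remaining word. This is O(n^2) or worse, but it encodes the ordering rule you must preserve when optimizing.

```python
def top_k_frequent_bruteforce(words, k):
    """O(n^2)-ish baseline: rescan the array to count each candidate."""
    result = []
    chosen = set()
    for _ in range(k):
        best = None
        best_count = -1
        for w in words:
            if w in chosen:
                continue
            # Re-count w by scanning the whole array each time.
            c = sum(1 for x in words if x == w)
            # Higher count wins; on a tie, the lexicographically smaller word wins.
            if c > best_count or (c == best_count and w < best):
                best, best_count = w, c
        result.append(best)
        chosen.add(best)
    return result
```

Anything faster must reproduce exactly this (frequency desc, word asc) ordering.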
Pattern signal: Array · Hash Map · Trie
["i","love","leetcode","i","love","coding"] 2
["the","day","is","sunny","the","the","the","sunny","is","is"] 4
Related problems: top-k-frequent-elements, k-closest-points-to-origin, sort-features-by-popularity, sender-with-largest-word-count, maximum-number-of-pairs-in-array.
Source-backed implementations are provided below for direct study and interview prep.
// Accepted solution for LeetCode #692: Top K Frequent Words
class Solution {
    public List<String> topKFrequent(String[] words, int k) {
        Map<String, Integer> cnt = new HashMap<>();
        for (String w : words) {
            cnt.merge(w, 1, Integer::sum);
        }
        // Sort by frequency descending, then lexicographically ascending.
        Arrays.sort(words, (a, b) -> {
            int c1 = cnt.get(a), c2 = cnt.get(b);
            return c1 == c2 ? a.compareTo(b) : c2 - c1;
        });
        List<String> ans = new ArrayList<>();
        // Skip duplicates in the sorted array until k distinct words are collected.
        for (int i = 0; i < words.length && ans.size() < k; ++i) {
            if (i == 0 || !words[i].equals(words[i - 1])) {
                ans.add(words[i]);
            }
        }
        return ans;
    }
}
// Accepted solution for LeetCode #692: Top K Frequent Words
func topKFrequent(words []string, k int) (ans []string) {
    cnt := map[string]int{}
    for _, w := range words {
        cnt[w]++
    }
    for w := range cnt {
        ans = append(ans, w)
    }
    // Frequency descending, then lexicographic ascending on ties.
    sort.Slice(ans, func(i, j int) bool {
        a, b := ans[i], ans[j]
        return cnt[a] > cnt[b] || cnt[a] == cnt[b] && a < b
    })
    return ans[:k]
}
# Accepted solution for LeetCode #692: Top K Frequent Words
from collections import Counter
from typing import List

class Solution:
    def topKFrequent(self, words: List[str], k: int) -> List[str]:
        cnt = Counter(words)
        # Sort keys by (frequency descending, word ascending), then take the first k.
        return sorted(cnt, key=lambda x: (-cnt[x], x))[:k]
// Accepted solution for LeetCode #692: Top K Frequent Words
use std::cmp::Ordering;
use std::collections::HashMap;

struct Solution;

struct Pair<'a> {
    word: &'a str,
    freq: usize,
}

impl Solution {
    fn top_k_frequent(words: Vec<String>, k: i32) -> Vec<String> {
        let mut hm: HashMap<&str, usize> = HashMap::new();
        let mut v: Vec<Pair> = vec![];
        for w in words.iter() {
            *hm.entry(w).or_default() += 1;
        }
        for (word, freq) in hm {
            v.push(Pair { word, freq });
        }
        // Frequency descending, then lexicographic ascending on ties.
        v.sort_unstable_by(|a, b| match b.freq.cmp(&a.freq) {
            Ordering::Equal => a.word.cmp(b.word),
            e => e,
        });
        v.iter()
            .take(k as usize)
            .map(|x| x.word.to_string())
            .collect()
    }
}

#[test]
fn test() {
    let to_vec = |xs: &[&str]| xs.iter().map(|s| s.to_string()).collect::<Vec<String>>();
    let words = to_vec(&["i", "love", "leetcode", "i", "love", "coding"]);
    let res = to_vec(&["i", "love"]);
    assert_eq!(Solution::top_k_frequent(words, 2), res);
    let words = to_vec(&["the", "day", "is", "sunny", "the", "the", "the", "sunny", "is", "is"]);
    let res = to_vec(&["the", "is", "sunny", "day"]);
    assert_eq!(Solution::top_k_frequent(words, 4), res);
}
// Accepted solution for LeetCode #692: Top K Frequent Words
function topKFrequent(words: string[], k: number): string[] {
    const cnt: Map<string, number> = new Map();
    for (const w of words) {
        cnt.set(w, (cnt.get(w) || 0) + 1);
    }
    const ans: string[] = Array.from(cnt.keys());
    // Frequency descending, then lexicographic ascending on ties.
    ans.sort((a, b) => {
        return cnt.get(a) === cnt.get(b) ? a.localeCompare(b) : cnt.get(b)! - cnt.get(a)!;
    });
    return ans.slice(0, k);
}
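For the follow-up bound, one plausible sketch (not drawn from the solutions above; `Entry` and `top_k_frequent_heap` are illustrative names) keeps a min-heap capped at k entries, ordered so that the "worst" candidate (low frequency, lexicographically large) sits on top and gets evicted first. After O(n) counting, each heap operation costs O(log k), giving roughly O(n log k) overall.

```python
import heapq
from collections import Counter

class Entry:
    """Orders so the worst entry (low freq, lexicographically large word) is smallest."""
    def __init__(self, freq, word):
        self.freq, self.word = freq, word
    def __lt__(self, other):
        if self.freq != other.freq:
            return self.freq < other.freq  # lower frequency is worse
        return self.word > other.word      # larger word is worse on frequency ties

def top_k_frequent_heap(words, k):
    cnt = Counter(words)
    heap = []
    for word, freq in cnt.items():
        heapq.heappush(heap, Entry(freq, word))
        if len(heap) > k:
            heapq.heappop(heap)            # evict the current worst candidate
    # Pop from worst to best, then reverse to get best-first order.
    out = []
    while heap:
        out.append(heapq.heappop(heap).word)
    return out[::-1]
```

The wrapper class is needed because Python strings cannot be negated the way counts can, so a plain `(freq, word)` tuple cannot express "frequency ascending, word descending" in one min-heap key.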
Use this to step through a reusable interview workflow for this problem.
Store all N words in a hash set. Each insert/lookup hashes the entire word of length L, giving O(L) per operation. Prefix queries require checking every stored word against the prefix — O(N × L) per prefix search. Space is O(N × L) for storing all characters.
Each operation (insert, search, prefix) takes O(L) time where L is the word length — one node visited per character. Total space is bounded by the sum of all stored word lengths. Tries win over hash sets when you need prefix matching: O(L) prefix search vs. checking every stored word.
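A minimal trie sketch (illustrative only, not part of the accepted solutions above) showing the O(L) insert and prefix check the paragraph describes:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # char -> TrieNode
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        # O(L): visit or create one node per character.
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def starts_with(self, prefix):
        # O(L) prefix check: walk the path; no scan over stored words.
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return True
```

Contrast with a hash set, where `starts_with` would have to test every stored word against the prefix.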
Review these before coding to avoid predictable interview regressions.
Off-by-one on range boundaries
Wrong move: Loop endpoints miss the first or last candidate.
Usually fails on: minimal arrays and exact-boundary answers.
Fix: Re-derive loops from inclusive/exclusive ranges before coding.
Stale zero-count map keys
Wrong move: Zero-count keys stay in the map and break distinct/count constraints.
Usually fails on: window/map size checks that are consistently off by one.
Fix: Delete keys when their count reaches zero.
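A hedged illustration of that fix (a sliding-window example, not taken from this problem; names are illustrative): delete a key the moment its count hits zero so that `len(window)` truly reflects the number of distinct elements.

```python
from collections import defaultdict

def count_distinct_in_windows(nums, w):
    """Distinct-element count for each length-w window; zero-count keys are deleted."""
    window = defaultdict(int)
    out = []
    for i, x in enumerate(nums):
        window[x] += 1
        if i >= w:
            old = nums[i - w]
            window[old] -= 1
            if window[old] == 0:
                del window[old]  # the fix: drop zero-count keys immediately
        if i >= w - 1:
            out.append(len(window))
    return out
```

Without the `del`, `len(window)` would keep counting elements that have already left the window.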