Mutating counts without cleanup
Wrong move: Zero-count keys stay in the map and break distinct-key or count-based checks.
Usually fails on: window or map size checks that are consistently off by one.
Fix: Delete a key as soon as its count reaches zero.
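The cleanup rule matters whenever the map's size is used as a distinct-count. Here is a minimal Python sketch, using a hypothetical sliding-window helper (not part of this problem) to show exactly where the deletion belongs:

```python
from collections import defaultdict

def distinct_in_windows(s: str, k: int) -> list[int]:
    """Count distinct characters in each window of size k."""
    cnt = defaultdict(int)
    out = []
    for i, c in enumerate(s):
        cnt[c] += 1
        if i >= k:
            left = s[i - k]
            cnt[left] -= 1
            if cnt[left] == 0:
                del cnt[left]  # cleanup: without this, len(cnt) overcounts
        if i >= k - 1:
            out.append(len(cnt))
    return out
```

Skipping the `del` leaves stale zero-count keys in the map, so `len(cnt)` no longer equals the number of distinct characters in the window.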
Move from brute-force thinking to an efficient approach built on a hash map.
Given a string s, sort it in decreasing order based on the frequency of the characters. The frequency of a character is the number of times it appears in the string.
Return the sorted string. If there are multiple answers, return any of them.
Example 1:
Input: s = "tree"
Output: "eert"
Explanation: 'e' appears twice while 'r' and 't' both appear once. So 'e' must appear before both 'r' and 't'. Therefore "eetr" is also a valid answer.
Example 2:
Input: s = "cccaaa"
Output: "aaaccc"
Explanation: Both 'c' and 'a' appear three times, so both "cccaaa" and "aaaccc" are valid answers. Note that "cacaca" is incorrect, as the same characters must be together.
Example 3:
Input: s = "Aabb"
Output: "bbAa"
Explanation: "bbaA" is also a valid answer, but "Aabb" is incorrect. Note that 'A' and 'a' are treated as two different characters.
Constraints:
1 <= s.length <= 5 * 10^5
s consists of uppercase and lowercase English letters and digits.
Start with the most direct exhaustive search. That gives a correctness anchor before optimizing.
Pattern signal: Hash Map
"tree"
"cccaaa"
"Aabb"
Related problems: Top K Frequent Elements, First Unique Character in a String, Sort Array by Increasing Frequency, Percentage of Letter in String, Maximum Number of Pairs in Array.
Source-backed implementations are provided below for direct study and interview prep.
// Accepted solution for LeetCode #451: Sort Characters By Frequency
class Solution {
    public String frequencySort(String s) {
        // At most 62 distinct characters: upper/lowercase letters plus digits.
        Map<Character, Integer> cnt = new HashMap<>(62);
        for (int i = 0; i < s.length(); ++i) {
            cnt.merge(s.charAt(i), 1, Integer::sum);
        }
        List<Character> cs = new ArrayList<>(cnt.keySet());
        cs.sort((a, b) -> cnt.get(b) - cnt.get(a));
        StringBuilder ans = new StringBuilder();
        for (char c : cs) {
            for (int v = cnt.get(c); v > 0; --v) {
                ans.append(c);
            }
        }
        return ans.toString();
    }
}
// Accepted solution for LeetCode #451: Sort Characters By Frequency
func frequencySort(s string) string {
    cnt := map[byte]int{}
    for i := range s {
        cnt[s[i]]++
    }
    cs := make([]byte, 0, len(cnt))
    for c := range cnt {
        cs = append(cs, c)
    }
    sort.Slice(cs, func(i, j int) bool { return cnt[cs[i]] > cnt[cs[j]] })
    ans := make([]byte, 0, len(s))
    for _, c := range cs {
        ans = append(ans, bytes.Repeat([]byte{c}, cnt[c])...)
    }
    return string(ans)
}
# Accepted solution for LeetCode #451: Sort Characters By Frequency
from collections import Counter

class Solution:
    def frequencySort(self, s: str) -> str:
        cnt = Counter(s)
        return ''.join(c * v for c, v in sorted(cnt.items(), key=lambda x: -x[1]))
// Accepted solution for LeetCode #451: Sort Characters By Frequency
use std::collections::HashMap;

impl Solution {
    pub fn frequency_sort(s: String) -> String {
        let mut cnt = HashMap::new();
        for c in s.chars() {
            *cnt.entry(c).or_insert(0) += 1;
        }
        let mut cs = cnt.into_iter().collect::<Vec<(char, i32)>>();
        cs.sort_unstable_by(|(_, a), (_, b)| b.cmp(a));
        cs.into_iter()
            .map(|(c, v)| vec![c; v as usize].into_iter().collect::<String>())
            .collect()
    }
}
// Accepted solution for LeetCode #451: Sort Characters By Frequency
function frequencySort(s: string): string {
    const cnt: Map<string, number> = new Map();
    for (const c of s) {
        cnt.set(c, (cnt.get(c) || 0) + 1);
    }
    const cs = Array.from(cnt.keys()).sort((a, b) => cnt.get(b)! - cnt.get(a)!);
    const ans: string[] = [];
    for (const c of cs) {
        ans.push(c.repeat(cnt.get(c)!));
    }
    return ans.join('');
}
Use the analysis below as a reusable interview workflow for this problem: state the brute force first, then optimize with a hash map.
Brute force: for each distinct character, rescan the whole string to count its occurrences. Two nested loops give O(n²) comparisons, with no extra space beyond the output since we only use loop indices.
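The brute-force pass described above can be sketched as follows; `frequency_sort_bruteforce` is an illustrative name, not one of the reference solutions:

```python
def frequency_sort_bruteforce(s: str) -> str:
    # For each distinct character, rescan the whole string to count it: O(n^2).
    seen = []
    counted = []  # (count, char) pairs in first-seen order
    for c in s:
        if c in seen:
            continue
        seen.append(c)
        counted.append((sum(1 for x in s if x == c), c))
    counted.sort(key=lambda t: -t[0])  # highest frequency first
    return ''.join(c * v for v, c in counted)
```

This is far too slow for the stated limit of 5 * 10^5 characters, but it makes a handy correctness oracle when testing the optimized versions.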
One pass through the input, performing O(1) hash map lookups and insertions at each step. Here the map holds at most 62 entries (letters and digits), so sorting the distinct characters is effectively O(1) and the whole approach runs in O(n) time with O(n) space for the output. This is the classic space-for-time tradeoff: a little extra memory eliminates the inner loop.
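If even the comparison sort should go, characters can be bucketed by frequency instead; this is a common O(n) refinement, sketched here under the hypothetical name `frequency_sort_buckets`:

```python
from collections import Counter

def frequency_sort_buckets(s: str) -> str:
    # Bucket characters by their count to avoid any comparison sort: O(n) time.
    cnt = Counter(s)
    buckets = [[] for _ in range(len(s) + 1)]  # index = frequency
    for c, v in cnt.items():
        buckets[v].append(c)
    out = []
    for v in range(len(s), 0, -1):  # walk frequencies from high to low
        for c in buckets[v]:
            out.append(c * v)
    return ''.join(out)
```

The buckets array trades O(n) extra space for the O(k log k) sort, which only matters when the alphabet is large; with at most 62 distinct characters here, both variants are effectively linear.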