Contains Duplicate

O(n) Time / O(n) Space

Goal: Check if any value appears at least twice.

Technique: Hash Set

Why Hash Set? Problem asks "have we seen this before?" Hash set provides O(1) lookup and insertion, perfect for membership queries. Alternative (sorting) is O(n log n).

Key Insight: Trading space for time. We memorize what we've seen to avoid re-scanning.

```go
if _, ok := seen[n]; ok { return true }
seen[n] = true
```
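A runnable Go sketch of the full function (the name `containsDuplicate` is assumed):

```go
package main

// containsDuplicate reports whether any value occurs more than once.
// Minimal sketch of the hash-set technique: remember values as we scan.
func containsDuplicate(nums []int) bool {
	seen := make(map[int]bool, len(nums))
	for _, n := range nums {
		if seen[n] {
			return true // second sighting: duplicate found
		}
		seen[n] = true
	}
	return false
}
```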

Valid Anagram

O(n) Time / O(1) Space

Goal: Check if strings `s` and `t` have exact same char counts.

Technique: Frequency Array

Why Frequency Counting? Anagrams = same letters, different order. Order doesn't matter, only frequency. Sorting works but is O(n log n). Counting is O(n).

Key Insight: Fixed alphabet size (26) means O(1) space. Increment for first string, decrement for second. If all zeros, they're anagrams.

```go
counts := [26]int{}
for c in s: counts[c-'a']++
for c in t: counts[c-'a']--
// All counts must end at 0
```

Unicode?

Use a generic HashMap `map[rune]int`.
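Putting the frequency-array version together as a Go sketch (assumes lowercase ASCII input; `isAnagram` is an assumed name):

```go
package main

// isAnagram reports whether s and t have identical character counts.
// Sketch for the lowercase a-z case: one fixed [26]int array, O(1) space.
func isAnagram(s, t string) bool {
	if len(s) != len(t) {
		return false
	}
	var counts [26]int
	for i := 0; i < len(s); i++ {
		counts[s[i]-'a']++ // credit from s
		counts[t[i]-'a']-- // debit from t
	}
	for _, c := range counts {
		if c != 0 {
			return false
		}
	}
	return true
}
```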

Two Sum

O(n) Time / O(n) Space

Goal: Find indices of two numbers adding to `target`.

Technique: Complement Map

Why Complement? Brute force is O(n²) checking all pairs. Instead of "what pairs sum to target?", ask "have I seen the complement (target - current)?"

Key Insight: For each number, we instantly know what value would complete the pair. Hash map gives O(1) lookup. Store as we go to avoid using same element twice.

```go
m := map[int]int{} // val -> index
for i, n := range nums:
    diff := target - n
    if idx, ok := m[diff]; ok: return {idx, i}
    m[n] = i
```

Trap:

Don't use the same element twice. The map check handles this automatically: each number is inserted only after the lookup, so the map contains earlier indices only.
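The complement-map idea as a runnable Go sketch (`twoSum` is an assumed name):

```go
package main

// twoSum returns indices of two numbers summing to target, or nil.
// Sketch of the complement-map technique: store values seen so far,
// and for each number ask whether its complement was already seen.
func twoSum(nums []int, target int) []int {
	m := make(map[int]int, len(nums)) // value -> index
	for i, n := range nums {
		if j, ok := m[target-n]; ok {
			return []int{j, i} // earlier index first
		}
		m[n] = i // insert after checking: never pairs with itself
	}
	return nil
}
```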

Group Anagrams

O(n * k) Time (k = max string length)

Goal: Group strings that are anagrams.

Technique: Hash by Count

Why Character Count Key? Need a way to identify anagrams. Sorting each string works (O(n·k log k)) but counting is faster (O(n·k)). Same character frequencies = same key.

Key Insight: Use character frequency as the "signature" of an anagram group. All anagrams share the same signature. Group by this signature using a map.

```go
key := [26]int{} // filled with char counts of s
groups[key] = append(groups[key], s)
```

In Go, arrays `[26]int` are comparable keys. In other langs, convert array to string "1#0#2...".
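A Go sketch of grouping by count signature, exploiting that `[26]int` arrays are comparable and thus valid map keys (`groupAnagrams` is an assumed name; group order is unspecified):

```go
package main

// groupAnagrams buckets strings by their character-count signature.
// All anagrams produce the same [26]int key, so they land in one group.
func groupAnagrams(strs []string) [][]string {
	groups := make(map[[26]int][]string)
	for _, s := range strs {
		var key [26]int
		for i := 0; i < len(s); i++ {
			key[s[i]-'a']++
		}
		groups[key] = append(groups[key], s)
	}
	res := make([][]string, 0, len(groups))
	for _, g := range groups {
		res = append(res, g)
	}
	return res
}
```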

Top K Frequent Elements

O(n) Time

Goal: Find `k` most common numbers.

Technique: Bucket Sort

Why Bucket Sort? Need to rank by frequency. Heap is O(n log k). But frequency range is limited: 1 to n. This bounded range allows bucket sort in O(n).

Key Insight: Index = frequency, Value = numbers with that frequency. Walk backwards from highest frequency. Exploits the constraint that frequencies are bounded.

  1. Count freq of each num: `map[num]count`.
  2. Invert to buckets: `buckets[count] = [list of nums]`.
  3. Iterate buckets `n` down to `1`, collect `k` results.
buckets := make([][]int, len(nums)+1)
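The three steps above as a Go sketch (`topKFrequent` is an assumed name; order among equal frequencies is unspecified, and at least `k` distinct values are assumed):

```go
package main

// topKFrequent returns the k most common values using bucket sort:
// frequencies are bounded by len(nums), so buckets index by frequency.
func topKFrequent(nums []int, k int) []int {
	counts := make(map[int]int)
	for _, n := range nums {
		counts[n]++
	}
	buckets := make([][]int, len(nums)+1) // index = frequency
	for n, c := range counts {
		buckets[c] = append(buckets[c], n)
	}
	res := make([]int, 0, k)
	for f := len(buckets) - 1; f >= 1 && len(res) < k; f-- {
		res = append(res, buckets[f]...) // walk from highest frequency
	}
	return res[:k]
}
```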

Encode & Decode Strings

O(n) Time

Goal: Serialize list of strings to one string.

Technique: Length Prefixing

Why Length Prefix? Simple delimiters (`,`, `|`) break if strings contain them. Escaping gets complex. Length prefix is unambiguous: tells you exactly how many chars to read.

Key Insight: Encode metadata (length) separate from data (string). Decoder knows where one string ends and next begins, regardless of content. Handles any character, even delimiter itself.

Enc: `"4#neet4#code"` (len#str)
Dec: read digits until `'#'`, parse len, read that many chars.

Stateless:

No global state needed, handles any char.
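Both directions as a Go sketch (the names `encode`/`decode` are assumed):

```go
package main

import (
	"strconv"
	"strings"
)

// encode serializes strings as "<len>#<chars>" records.
func encode(strs []string) string {
	var sb strings.Builder
	for _, s := range strs {
		sb.WriteString(strconv.Itoa(len(s)))
		sb.WriteByte('#')
		sb.WriteString(s)
	}
	return sb.String()
}

// decode reads digits up to '#', then exactly that many chars.
// Length prefixing means '#' inside a string is harmless.
func decode(s string) []string {
	var res []string
	for i := 0; i < len(s); {
		j := i
		for s[j] != '#' {
			j++
		}
		length, _ := strconv.Atoi(s[i:j])
		res = append(res, s[j+1:j+1+length])
		i = j + 1 + length
	}
	return res
}
```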

Product Except Self

O(n) Time / O(1) Extra Space

Goal: Output[i] = product of all except nums[i]. No division.

Technique: Prefix & Suffix

Why Prefix/Suffix? Division trick (total product / nums[i]) fails with zeros. Instead, decompose: product except i = (product before i) × (product after i).

Key Insight: Build answer in two passes. First pass: accumulate products from left. Second pass: multiply by products from right. Clever use of output array avoids extra space.

  1. Pass 1 (L->R): `res[i]` = `res[i-1] * nums[i-1]`. (Prefix)
  2. Pass 2 (R->L): Maintain `postfix` var. `res[i] *= postfix`. Update `postfix *= nums[i]`.
```go
res[0] = 1
for i = 1..n-1: res[i] = res[i-1] * nums[i-1]
post = 1
for i = n-1..0: res[i] *= post; post *= nums[i]
```
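The two passes as a runnable Go sketch (`productExceptSelf` is an assumed name; the output slice is not counted as extra space):

```go
package main

// productExceptSelf builds res[i] = (product before i) * (product after i)
// in two passes, reusing the output array for the prefix products.
func productExceptSelf(nums []int) []int {
	n := len(nums)
	res := make([]int, n)
	res[0] = 1
	for i := 1; i < n; i++ {
		res[i] = res[i-1] * nums[i-1] // prefix: product of nums[0..i-1]
	}
	post := 1
	for i := n - 1; i >= 0; i-- {
		res[i] *= post  // multiply in product of nums[i+1..n-1]
		post *= nums[i] // extend the suffix product leftward
	}
	return res
}
```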

Valid Sudoku

O(1) Time (Fixed 9x9)

Goal: Check duplicates in Row, Col, 3x3 Box.

Technique: Composite Keys

Why Composite Keys? Need to track 27 constraints (9 rows, 9 cols, 9 boxes). Could use 27 sets, but encoding position+value into single key is cleaner.

Key Insight: Each cell participates in 3 constraints. Encode each as unique string: "r5:3" = "3 in row 5", "c2:3" = "3 in col 2", "b4:3" = "3 in box 4". One set tracks all.

```go
seen.add("r" + r + ":" + v)
seen.add("c" + c + ":" + v)
seen.add("b" + ((r/3)*3 + c/3) + ":" + v)
```

Box Index: `(row / 3) * 3 + (col / 3)` maps (0..8, 0..8) to 0..8.
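A Go sketch using composite string keys; the board is assumed to arrive as nine 9-character rows with `'.'` for empty cells (a simplified input format):

```go
package main

import "fmt"

// isValidSudoku checks all 27 constraints with a single set by
// encoding each (constraint, value) pair as a unique string key.
func isValidSudoku(rows []string) bool {
	seen := make(map[string]bool)
	for r := 0; r < 9; r++ {
		for c := 0; c < 9; c++ {
			v := rows[r][c]
			if v == '.' {
				continue
			}
			// One key per constraint the cell participates in.
			for _, key := range []string{
				fmt.Sprintf("r%d:%c", r, v),
				fmt.Sprintf("c%d:%c", c, v),
				fmt.Sprintf("b%d:%c", (r/3)*3+c/3, v),
			} {
				if seen[key] {
					return false
				}
				seen[key] = true
			}
		}
	}
	return true
}
```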

Longest Consecutive Seq

O(n) Time

Goal: Length of the longest run of consecutive values (e.g. 1, 2, 3, 4) in an unsorted array.

Technique: Start-of-Sequence Check

Why Start-of-Sequence? Sorting finds sequences easily but is O(n log n). Hash set gives O(1) lookups. Key: only count from sequence starts to avoid redundant work.

Key Insight: If (n-1) exists, n is NOT a sequence start—skip it. Only when (n-1) missing, n begins a sequence. Walk forward counting n+1, n+2... Each element visited at most twice: once in outer loop, once in inner loop. Still O(n)!

  1. Put all nums in Set.
  2. Iterate nums. If `(n-1)` in Set, SKIP (it's not start).
  3. If `(n-1)` NOT in Set, it IS start. Loop `n+1`, `n+2`... while in Set. Count length.
```go
if !set.has(n-1):
    curr = n
    while set.has(curr+1): curr++
```
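The start-of-sequence check as a Go sketch (`longestConsecutive` is an assumed name):

```go
package main

// longestConsecutive finds the longest run of consecutive integers.
// Only sequence starts (values with no n-1 present) trigger the inner
// walk, so each element is visited a constant number of times: O(n).
func longestConsecutive(nums []int) int {
	set := make(map[int]bool, len(nums))
	for _, n := range nums {
		set[n] = true
	}
	best := 0
	for n := range set {
		if set[n-1] {
			continue // n is mid-sequence, not a start: skip
		}
		length := 1
		for set[n+length] {
			length++ // walk forward while n+1, n+2, ... exist
		}
		if length > best {
			best = length
		}
	}
	return best
}
```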