# Design an LRU cache

## Constraints and assumptions

- What are we caching?
    We are caching the results of web queries
- Can we assume inputs are valid or do we have to validate them?
    Assume they're valid
- Can we assume this fits memory?
    Yes

## Solution

A hash map alone gives us `O(1)` key-value lookup, but doesn't track order. An array tracks order, but inserting/removing
from the middle is `O(n)`. We need a data structure that supports `O(1)` insertion, deletion, AND reordering. This
points us toward a structure where we can move items to the front/back instantly. Think of a playlist where songs move
to the top when played:

- When you play a song (access), it jumps to position #1
- When you add a new song and the playlist is full, the song at the bottom (least recently played) gets removed
- You need to find any song by name instantly (hash map), but also know which song is at the bottom (ordering)

This dual requirement - fast lookup by key AND fast reordering - suggests combining two data structures: one for `O(1)`
key access, another for `O(1)` position changes.

### HashMap + Doubly Linked List hybrid

When you need `O(1)` access AND `O(1)` ordering operations (move to front/back, remove), combine a hash map for lookups
with a doubly linked list for order tracking. The hash map stores pointers to list nodes, enabling instant node
location and manipulation.
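
The hybrid can be sketched as a minimal Python implementation. The helper names (`_remove`, `_add_front`) and the
head-is-most-recent convention are illustrative choices, not prescribed by the text:

```python
class Node:
    """Doubly linked list node holding one cache entry."""
    __slots__ = ("key", "value", "prev", "next")

    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None


class LRUCache:
    """Hash map (key -> node) plus a doubly linked list ordered by recency.

    The node right after `head` is the most recently used;
    the node right before `tail` is the least recently used.
    """

    def __init__(self, capacity):
        self.capacity = capacity
        self.map = {}        # key -> Node, for O(1) lookup
        self.head = Node()   # sentinel at the most-recent end
        self.tail = Node()   # sentinel at the least-recent end
        self.head.next = self.tail
        self.tail.prev = self.head

    def _remove(self, node):
        # Reconnect the node's neighbors to each other.
        node.prev.next = node.next
        node.next.prev = node.prev

    def _add_front(self, node):
        # Wire the node in between head and head.next.
        node.prev = self.head
        node.next = self.head.next
        self.head.next.prev = node
        self.head.next = node

    def get(self, key):
        if key not in self.map:
            return None
        node = self.map[key]
        self._remove(node)      # reads update recency too
        self._add_front(node)
        return node.value

    def put(self, key, value):
        if key in self.map:
            self._remove(self.map[key])       # update: no eviction
        elif len(self.map) == self.capacity:
            lru = self.tail.prev
            self._remove(lru)                 # evict least recently used
            del self.map[lru.key]
        node = Node(key, value)
        self.map[key] = node
        self._add_front(node)
```

Storing nodes (not raw values) in the hash map is the key move: given a key, we can jump straight to its list node and
relink it without traversal.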

### Sentinel nodes eliminate edge cases

Use dummy head and tail nodes in your doubly linked list to avoid null checks when adding/removing nodes at boundaries.
This means every real node always has non-null prev/next pointers, dramatically simplifying insertion and deletion logic.
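
A small sketch of the sentinel wiring (helper names are hypothetical). Because `head` and `tail` always exist, neither
helper needs a single `None` check, even when the node being touched is first or last:

```python
class Node:
    def __init__(self, key=None, value=None):
        self.key, self.value = key, value
        self.prev = self.next = None

# Sentinels: the list is never "empty", so every real node always
# has live prev/next neighbors.
head, tail = Node(), Node()
head.next, tail.prev = tail, head

def add_front(node):
    # Wire node between head and head.next -- no boundary cases.
    node.prev, node.next = head, head.next
    head.next.prev = node
    head.next = node

def remove(node):
    # Neighbors are guaranteed to exist, so just reconnect them.
    node.prev.next = node.next
    node.next.prev = node.prev
```

Without sentinels, both helpers would need separate branches for empty lists and for nodes at either end.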

### Access equals update pattern

In an LRU cache, every `get()` operation must update recency by moving the accessed node to the most-recent position
(typically the head or tail). Forgetting this is the most common bug - reads aren't passive in cache implementations.

### Capacity check timing matters

Check whether the key already exists before checking capacity. For updates (key exists), no eviction is needed. For new
insertions at capacity, evict the LRU item first, then add the new node - this handles the edge case where capacity=1
correctly.
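
To illustrate the ordering at capacity=1, here is a sketch using the standard library's `OrderedDict` in place of a
hand-rolled list (the `put` helper and its logic are illustrative):

```python
from collections import OrderedDict

capacity = 1
cache = OrderedDict()  # insertion order doubles as recency order

def put(key, value):
    if key in cache:               # update: no eviction needed
        cache.move_to_end(key)     # refresh recency
        cache[key] = value
        return
    if len(cache) >= capacity:     # new key at capacity:
        cache.popitem(last=False)  # evict the least recently used entry
    cache[key] = value             # then add the new entry

put("a", 1)
put("a", 2)   # update in place, nothing evicted
put("b", 3)   # at capacity: "a" is evicted, then "b" is added
```

Evicting before checking for an existing key would wrongly drop an entry on a simple update; with capacity=1 it could
even evict the very entry being updated.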

### Bidirectional pointer maintenance

When manipulating doubly linked list nodes, always update four pointers in the correct order: the node's prev/next AND
its neighbors' pointers. A common pattern is to extract a node (reconnect its neighbors), then insert it elsewhere
(update new neighbors and the node itself).
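
The extract-then-insert pattern sketched in Python (`move_to_front` is a hypothetical name; it assumes a sentinel
`head` as discussed above):

```python
class Node:
    def __init__(self, value=None):
        self.value = value
        self.prev = self.next = None

def move_to_front(head, node):
    # Step 1: extract -- reconnect the node's old neighbors (2 writes).
    node.prev.next = node.next
    node.next.prev = node.prev
    # Step 2: insert after head -- set the node's own pointers first,
    # then its new neighbors' pointers (4 writes).
    node.prev = head
    node.next = head.next
    head.next.prev = node
    head.next = node

# Build head <-> a <-> b <-> tail.
head, a, b, tail = Node(), Node("a"), Node("b"), Node()
head.next, a.prev = a, head
a.next, b.prev = b, a
b.next, tail.prev = tail, b

move_to_front(head, b)  # list becomes head <-> b <-> a <-> tail
```

Reversing the order inside step 2 (overwriting `head.next` before reading it) is the classic way to lose half the list.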

### Cache eviction policy abstraction

This LRU pattern extends to LFU (Least Frequently Used), MRU (Most Recently Used), and TTL caches. The core insight -
combining a hash map for `O(1)` lookup with an auxiliary structure (list, heap, or multiple lists) for `O(1)` policy
enforcement - applies broadly to cache replacement algorithms.

## Complexity Analysis

### Time Complexity

`O(1)`. Both `get` and `put` involve a hash map lookup (`O(1)`) and linked list node insertion/deletion/movement
(`O(1)` with a doubly linked list). No iteration through the cache is needed.

### Space Complexity

`O(capacity)`. We store at most `capacity` key-value pairs in the hash map, and the same number of nodes in the doubly
linked list. Space grows linearly with capacity.