multicache is a high-performance cache for Go implementing the **S3-FIFO** algorithm from the SOSP'23 paper ["FIFO queues are all you need for cache eviction"](https://s3fifo.com/). It combines **best-in-class hit rates**, **multi-threaded** scalability, and an optional **multi-tier architecture** for persistence.
**Our philosophy**: Hit rate matters most (cache misses are expensive), then throughput (handle load), then single-threaded latency. We aim to excel at all three.
Designed for high-concurrency workloads with dynamic sharding (up to 2048 shards) that scales with `GOMAXPROCS`. At 32 threads, multicache delivers **185M+ QPS** for GetOrSet operations.
### Multi-Tier Architecture
Stack fast in-memory caching with durable persistence:
multicache prioritizes **hit rate** first, **multi-threaded throughput** second, and **single-threaded latency** third, but aims to excel at all three. We have our own built-in `make bench` target that asserts cache dominance:
```
>>> TestLatencyNoEviction: Latency - No Evictions (Set cycles within cache size) (go test -run=TestLatencyNoEviction -v)
```
Want even more comprehensive benchmarks? See https://github.com/tstromberg/gocac
## Implementation Notes
### S3-FIFO Enhancements
multicache implements the S3-FIFO algorithm from the SOSP'23 paper with these optimizations for production use:
1. **Dynamic Sharding** - Up to 2048 shards (capped at 2× GOMAXPROCS) for concurrent workloads
2. **Bloom Filter Ghosts** - Two rotating Bloom filters instead of storing keys, 10-100× less memory
3. **Lazy Ghost Checks** - Only check ghosts at capacity, saving latency during warmup
5. **Fast-path Hashing** - Specialized `int`/`string` hashing via wyhash
6. **Higher Frequency Cap** - Max freq=7 (vs the paper's 3) for better hot/warm discrimination
The core algorithm follows the paper closely: items enter the small queue, get promoted to main after 2+ accesses, and evicted items are tracked in a ghost queue to inform future admissions.