
Commit 111f0b1

recycle entries before eviction
1 parent 250342f commit 111f0b1

File tree

4 files changed: +577 -542 lines


README.md

Lines changed: 9 additions & 9 deletions
@@ -1,8 +1,8 @@
 # multicache

-multicache is an in-memory #golang cache library.
+multicache is an extremely fast multi-threaded, multi-tiered in-memory cache library for Go.

-It's been optimized over hundreds of experiments to be the highest performing cache available - both in terms of hit rates and throughput - and also features an optional multi-tier persistent cache option.
+It offers optional persistence with compression, and has been specifically optimized for cloud compute environments where the process is periodically restarted, such as Kubernetes or Google Cloud Run.

 ## Install

@@ -58,22 +58,22 @@ For maximum efficiency, all backends support S2 or Zstd compression via `pkg/sto

 ## Performance

-multicache has been exhaustively tested for performance using [gocachemark](https://github.com/tstromberg/gocachemark). As of Dec 2025, it's the highest performing cache implementation for Go.
+multicache has been exhaustively tested for performance using [gocachemark](https://github.com/tstromberg/gocachemark).

 Where multicache wins:

-- **Throughput**: 1 billion ints/second at 16 threads or higher (2-3X faster than otter)
-- **Hit rate**: Highest average across datasets (1.6% higher than sieve, 4.4% higher than otter)
-- **Latency**: 8-11ns Gets, zero allocations (3-4X lower latency than otter)
+- **Throughput**: 954M int gets/sec at 16 threads (2.2X faster than otter). 140M string sets/sec (9X faster than otter).
+- **Hit rate**: Wins 7 of 9 workloads. Highest average across all datasets (+2.9% vs otter, +0.9% vs sieve).
+- **Latency**: 8ns int gets, 10ns string gets, zero allocations (4X lower latency than otter)

 Where others win:

-- **Memory**: freelru and otter use less memory per entry
-- **Temporal workload hit rates**: Some caches work marginally better in certain workloads by a very thin margin: clock (+0.006% on thesios-block), sieve (+0.005% on thesios-file)
+- **Memory**: freelru and otter use less memory per entry (73 bytes/item overhead for multicache vs 15 for otter)
+- **Specific workloads**: clock +0.07% on ibm-docker, theine +0.34% on zipf

 Much of the credit for high throughput goes to [puzpuzpuz/xsync](https://github.com/puzpuzpuz/xsync). While highly sharded maps and singleflight groups performed well, you can't beat xsync's lock-free data structures.

-Run `make competive-bench` for full results.
+Run `make benchmark` for full results, or see [benchmarks/gocachemark_results.md](benchmarks/gocachemark_results.md).

 ## Algorithm