src/content/docs/articles/benchmark-analysis-tidesdb-v7-4-4-rocksdb-v10-9-1.md (1 addition, 1 deletion)
@@ -40,7 +40,7 @@ I ran the benchmarks using `tidesdb_rocksdb.sh` within the <a href="https://gith
**Gains from v7.4.3**

- Overall, v7.4.4 delivers consistent and often substantial performance gains across the majority of workloads. Many read-, seek-, and range-oriented workloads show improvements in the 1.2x–1.6x range, with the largest gains exceeding 2.3x.
+ Overall, v7.4.4 delivers consistent and often substantial performance gains across the majority of workloads. Many read-, seek-, and range-oriented workloads show improvements in the 1.2x-1.6x range, with the largest gains exceeding 2.3x.
src/content/docs/articles/benchmark-analysis-tidesdb-v8-2-1-rocksdb-v10-10-1.md (4 additions, 4 deletions)
@@ -41,16 +41,16 @@ The tool used for this analysis is the TidesDB benchtool project, which can be f
**PUT throughput across workloads**

- With a single RocksDB baseline per workload, both TidesDB formats are consistently faster on writes. The largest gain is sequential ingest - block-klog ~5.04x and btree-klog ~4.88x vs RocksDB (≈7.80M / 7.56M vs ≈1.55M ops/s baseline). Random and mixed writes remain strong ~1.57–1.59x on random write and ~1.40–1.46x on mixed random. Zipfian workloads also show robust advantages ~1.82–1.84x on Zipf write and ~1.79–1.86x on Zipf mixed.
+ With a single RocksDB baseline per workload, both TidesDB formats are consistently faster on writes. The largest gain is sequential ingest: block-klog ~5.04x and btree-klog ~4.88x vs RocksDB (≈7.80M / 7.56M vs ≈1.55M ops/s baseline). Random and mixed writes remain strong: ~1.57-1.59x on random write and ~1.40-1.46x on mixed random. Zipfian workloads also show robust advantages: ~1.82-1.84x on Zipf write and ~1.79-1.86x on Zipf mixed.
**GET throughput across workloads**

- Reads are format - and workload-sensitive against the collapsed RocksDB baseline. On pure random read, block-klog is ~1.98x faster (≈3.07M vs ≈1.55M ops/s), while btree-klog is ~1.16x (≈1.80M vs ≈1.55M). On mixed random, block-klog is below baseline (~0.83x), but btree-klog becomes above baseline (~1.1x). On Zipf mixed, both formats are clearly ahead block-klog ~1.69x and btree-klog ~1.73x vs RocksDB baseline (≈3.15–3.23M vs ≈1.86M ops/s).
+ Reads are format- and workload-sensitive against the collapsed RocksDB baseline. On pure random read, block-klog is ~1.98x faster (≈3.07M vs ≈1.55M ops/s), while btree-klog is ~1.16x (≈1.80M vs ≈1.55M). On mixed random, block-klog is below baseline (~0.83x), but btree-klog moves above baseline (~1.1x). On Zipf mixed, both formats are clearly ahead: block-klog ~1.69x and btree-klog ~1.73x vs the RocksDB baseline (≈3.15-3.23M vs ≈1.86M ops/s).
**PUT p99 tail latency across workloads**

- Using the collapsed RocksDB baseline (and noting the two-run range), TidesDB generally improves tail latency on write-heavy workloads, especially sequential and Zipfian cases. For sequential write, TidesDB p99 is ≈1.55–1.62 ms versus RocksDB’s ≈5–6 ms range across the two baseline runs, indicating materially better tail behavior alongside the throughput advantage. Random write tail latency is closer block-klog is lower than RocksDB, while btree-klog is competitive but can be higher depending on workload shape - so the key point is that the major tail-latency win is most pronounced in seq/Zipf patterns, not uniformly in every random-heavy case.
+ Using the collapsed RocksDB baseline (and noting the two-run range), TidesDB generally improves tail latency on write-heavy workloads, especially sequential and Zipfian cases. For sequential write, TidesDB p99 is ≈1.55-1.62 ms versus RocksDB's ≈5-6 ms range across the two baseline runs, indicating materially better tail behavior alongside the throughput advantage. Random write tail latency is closer: block-klog is lower than RocksDB, while btree-klog is competitive but can be higher depending on workload shape. The key point is that the major tail-latency win is most pronounced in seq/Zipf patterns, not uniformly in every random-heavy case.
**On-disk database size after workload**

@@ -60,7 +60,7 @@ For example, on seq write, RocksDB baseline is ≈200.9 MB, block-klog ≈132.8
**Space amplification factor (lower is better)**

- Against the single RocksDB baseline, block-klog is consistently the most space-efficient, while btree-klog incurs high space amplification on uniform workloads. On seq/random/mixed-rand writes, RocksDB baseline is roughly ~0.105–0.18x, block-klog improves further to ~0.08–0.12x, but btree-klog is about ~1.05× (roughly an order of magnitude higher than RocksDB in these cases). On Zipf write, all engines improve, but the ordering remains - RocksDB baseline ≈0.11x, block-klog ≈0.02x, btree-klog ≈0.16x.
+ Against the single RocksDB baseline, block-klog is consistently the most space-efficient, while btree-klog incurs high space amplification on uniform workloads. On seq/random/mixed-rand writes, RocksDB baseline is roughly ~0.105-0.18x, block-klog improves further to ~0.08-0.12x, but btree-klog is about ~1.05x (roughly an order of magnitude higher than RocksDB in these cases). On Zipf write, all engines improve, but the ordering remains: RocksDB baseline ≈0.11x, block-klog ≈0.02x, btree-klog ≈0.16x.
So the results are rather interesting. If your priority is space efficiency and strong read performance on pure random reads, and you can spend some more memory, block-klog is the clear winner (tiny DB sizes, very low space amplification, and top GET throughput on random read). If your priority is mixed-workload GET throughput, and you can tolerate a much larger footprint on uniform workloads, the btree-klog format can be attractive, especially where "mixed random" GET matters.
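As a quick aside on how these factors are read: the throughput multipliers are simply engine ops/s divided by the RocksDB baseline ops/s, and space amplification is the on-disk size after the workload divided by the logical data volume ingested. A minimal sketch of that arithmetic, plugging in the seq-write figures quoted above; the `logical_bytes` value is a hypothetical ingest volume for illustration, not a number reported in the article:

```c
#include <stdio.h>

/* Sketch of how the "x" factors above are derived.
 * The ops/s values are the seq-write figures quoted in the article;
 * logical_bytes is a hypothetical ingest volume, not a reported number. */
int main(void) {
    double rocksdb_ops    = 1.55e6;  /* baseline ops/s */
    double block_klog_ops = 7.80e6;
    double btree_klog_ops = 7.56e6;

    printf("block-klog speedup: %.2fx\n", block_klog_ops / rocksdb_ops);  /* ~5.0x */
    printf("btree-klog speedup: %.2fx\n", btree_klog_ops / rocksdb_ops);  /* ~4.9x */

    /* Space amplification = on-disk size after the workload / logical data size. */
    double logical_bytes = 1.9e9;    /* hypothetical: key count * (key + value bytes) */
    double on_disk_bytes = 200.9e6;  /* RocksDB seq-write footprint quoted above */
    printf("space amp: %.3fx\n", on_disk_bytes / logical_bytes);          /* ~0.106x */
    return 0;
}
```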
src/content/docs/articles/benchmark-analysis-tidesdb-v8-6-0-rocksdb-v10-10-1.md (6 additions, 6 deletions)
@@ -124,19 +124,19 @@ Sequential write p99/p50 ratio is 1.9x for TidesDB vs 4.4x for RocksDB, where th
**Write Amplification**
- TidesDB stays between 1.03–1.21 across all workloads; RocksDB ranges 1.22–1.51. The 15–24% gap translates directly to less SSD wear, less background I/O contention, and tighter tail latencies. Tightest amplification is on large values (1.03 vs 1.22) and the widest gap is on 50M small-value writes (1.21 vs 1.51).
+ TidesDB stays between 1.03-1.21 across all workloads; RocksDB ranges 1.22-1.51. The 15-24% gap translates directly to less SSD wear, less background I/O contention, and tighter tail latencies. Tightest amplification is on large values (1.03 vs 1.22) and the widest gap is on 50M small-value writes (1.21 vs 1.51).
- Sequential 10M keys land at 111 MB vs 208 MB (47% smaller), random 10M at 87 MB vs 142 MB (38%). Space amplification ratios are TidesDB 0.07–0.14 vs RocksDB 0.08–0.19. Sequential writes produce the tightest compaction on our sorted runs.
+ Sequential 10M keys land at 111 MB vs 208 MB (47% smaller), random 10M at 87 MB vs 142 MB (38% smaller). Space amplification ratios are TidesDB 0.07-0.14 vs RocksDB 0.08-0.19. Sequential writes produce the tightest compaction on our sorted runs.
- TidesDB uses ~4x more memory (2,035 MB vs 485 MB peak RSS on sequential writes), an intentional trade-off for speed. We write 18–24% less data to disk. CPU is higher on writes (582% vs 258%) due to more aggressive parallelism across 8 threads. The new `max_memory_usage` cap in v8.6.0 keeps this bounded.
+ TidesDB uses ~4x more memory (2,035 MB vs 485 MB peak RSS on sequential writes), an intentional trade-off for speed. We write 18-24% less data to disk. CPU is higher on writes (582% vs 258%) due to more aggressive parallelism across 8 threads. The new `max_memory_usage` cap in v8.6.0 keeps this bounded.
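To make the amplification and "less data to disk" claims concrete: write amplification is device bytes written divided by user payload bytes, and the percentage reduction falls out of the ratio of the two engines' factors. A minimal sketch, where the payload and device byte counts are illustrative placeholders chosen to land on the endpoints quoted above, not measurements from these runs:

```c
#include <stdio.h>

/* Sketch: write amplification = device bytes written / user payload bytes.
 * The byte counts below are illustrative placeholders chosen to land on the
 * 1.21 vs 1.51 endpoints quoted above, not measurements from these runs. */
int main(void) {
    double payload_bytes = 50e6 * 116.0;   /* e.g. 50M ops, ~116 B of key+value each */
    double tidesdb_device_bytes = 7.02e9;  /* e.g. from iostat/smartctl deltas */
    double rocksdb_device_bytes = 8.76e9;

    double wa_tides = tidesdb_device_bytes / payload_bytes;  /* ~1.21 */
    double wa_rocks = rocksdb_device_bytes / payload_bytes;  /* ~1.51 */

    printf("WA: TidesDB %.2f vs RocksDB %.2f\n", wa_tides, wa_rocks);
    printf("TidesDB writes %.0f%% less to the device for the same ingest\n",
           100.0 * (1.0 - wa_tides / wa_rocks));             /* ~20% for these inputs */
    return 0;
}
```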
@@ -154,7 +154,7 @@ On 4KB values, TidesDB p99/avg ratio is 1.92x (36,427 µs / 18,959 µs) while Ro
**Latency Variability**
- Write CV is TidesDB 25–35% vs RocksDB 200–497%, making us 9–19x more consistent on writes as RocksDB's compaction stalls create huge latency spikes. Read and seek CV reverses with RocksDB's random read CV at 48% vs our 163%. The higher relative variability is spread around much smaller absolute numbers (2 µs vs 4.6 µs), meaning faster reads with a bit more jitter.
+ Write CV is TidesDB 25-35% vs RocksDB 200-497%, making us 9-19x more consistent on writes as RocksDB's compaction stalls create huge latency spikes. The pattern reverses for reads and seeks, with RocksDB's random read CV at 48% vs our 163%. The higher relative variability is spread around much smaller absolute numbers (2 µs vs 4.6 µs), meaning faster reads with a bit more jitter.
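The CV numbers here are the plain coefficient of variation over per-operation latencies (standard deviation divided by mean), which is why a handful of compaction stalls can push a write CV into the hundreds of percent even when the median stays low. A minimal sketch of the computation on made-up samples, not actual benchmark data:

```c
#include <math.h>
#include <stdio.h>

/* Sketch: coefficient of variation (CV) = stddev / mean of per-op latencies.
 * A few stall outliers are enough to push CV into the hundreds of percent
 * even when the median stays low. Sample values are illustrative only. */
static double cv_percent(const double *lat_us, int n) {
    double sum = 0.0, sq = 0.0;
    for (int i = 0; i < n; i++) sum += lat_us[i];
    double mean = sum / n;
    for (int i = 0; i < n; i++) sq += (lat_us[i] - mean) * (lat_us[i] - mean);
    return 100.0 * sqrt(sq / n) / mean;
}

int main(void) {
    /* Illustrative samples: steady ~10 us writes, then the same run with one 5 ms stall. */
    double steady[8]  = {10, 11, 9, 10, 12, 10, 9, 11};
    double stalled[8] = {10, 11, 9, 10, 12, 10, 9, 5000};
    printf("steady CV: %.0f%%\n", cv_percent(steady, 8));    /* ~10% */
    printf("stalled CV: %.0f%%\n", cv_percent(stalled, 8));  /* several hundred percent */
    return 0;
}
```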
@@ -242,7 +242,7 @@ Sequential writes at 8 threads show TidesDB p50 1,194 µs, p99 2,035 µs (ratio
**Write Amplification**
- TidesDB ranges 1.04–1.23 across all workloads while RocksDB ranges 1.23–1.75. The gap is wider here than in Environment 1, with RocksDB's sequential write amplification at 16 threads hitting 1.75 (versus 1.07 for TidesDB). Zipfian remains the tightest at 1.04 for TidesDB.
+ TidesDB ranges 1.04-1.23 across all workloads while RocksDB ranges 1.23-1.75. The gap is wider here than in Environment 1, with RocksDB's sequential write amplification at 16 threads hitting 1.75 (versus 1.07 for TidesDB). Zipfian remains the tightest at 1.04 for TidesDB.
@@ -282,7 +282,7 @@ Write CV for TidesDB sequential is 16.4% vs RocksDB at 8.2%, and interestingly R
TidesDB v8.6.0 outperforms RocksDB v10.10.1 across the vast majority of workloads on both environments. On Environment 1 (i7-11700K, 48 GB, SATA SSD, 8 threads), speedups range from 1.27x to 4.97x with a geometric mean around 2.2x. On Environment 2 (Threadripper 2950X, 128 GB, NVMe, 8 + 16 threads), the same workloads show wider margins, with sequential write speedups reaching 8.32x at 8 threads and 9.90x at 16 threads, and synchronous writes scaling to 14.7x at 16 threads.
- The consistent strengths across both environments include sequential and batched writes (5–10x), range scans (2–3x), seeks with skewed access patterns (3–5x), large-value writes (3–4x with dramatically better tail latency), and low write amplification (1.03–1.23 vs 1.22–1.75). The consistent weaknesses include single-key writes and deletes without batching (RocksDB wins by 1.5–2.5x), read/seek latency variability (RocksDB delivers more uniform timing despite higher absolute latencies), and higher memory usage (~4–6x RSS at 8 threads, narrowing to ~1.7x at 16 threads).
+ The consistent strengths across both environments include sequential and batched writes (5-10x), range scans (2-3x), seeks with skewed access patterns (3-5x), large-value writes (3-4x with dramatically better tail latency), and low write amplification (1.03-1.23 vs 1.22-1.75). The consistent weaknesses include single-key writes and deletes without batching (RocksDB wins by 1.5-2.5x), read/seek latency variability (RocksDB delivers more uniform timing despite higher absolute latencies), and higher memory usage (~4-6x RSS at 8 threads, narrowing to ~1.7x at 16 threads).
Environment 2 also exposed a new weak spot not visible at 8 threads. Under extreme concurrent write pressure at 16 threads, the mixed random GET path drops as the back-pressure systems over-throttle operations.
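For context, the geometric mean quoted for Environment 1 is the nth root of the product of the per-workload speedups (equivalently, the exponential of the mean log speedup), which keeps a single extreme workload from dominating the summary. A minimal sketch with placeholder speedups spanning the reported 1.27x-4.97x range, not the actual per-workload results:

```c
#include <math.h>
#include <stdio.h>

/* Sketch: geometric mean of per-workload speedup factors. The inputs are
 * placeholders spanning the 1.27x-4.97x range reported for Environment 1,
 * not the actual per-workload measurements. */
int main(void) {
    double speedups[] = {1.27, 1.6, 2.1, 2.4, 3.3, 4.97};
    int n = sizeof speedups / sizeof speedups[0];

    double log_sum = 0.0;
    for (int i = 0; i < n; i++) log_sum += log(speedups[i]);
    printf("geometric mean: %.2fx\n", exp(log_sum / n));  /* ~2.3x for these inputs */
    return 0;
}
```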
src/content/docs/articles/benchmark-analysis-tidesdb-v8-7-1-rocksdb-11-0-3.md (4 additions, 4 deletions)
@@ -102,13 +102,13 @@ Sequential write p99/p50 ratio is 1.82x for TidesDB (611/1111 µs) vs 1.48x for
**Write Amplification**
- TidesDB stays between 1.04–1.25 across all workloads; RocksDB ranges 1.05–1.52. The tightest is Zipfian at 1.04 vs 1.05 where hot-key overwrites keep both engines lean. The widest gap is on 50M small-value random writes at 1.25 vs 1.52. Lower write amplification means less SSD wear and less background I/O contention.
+ TidesDB stays between 1.04-1.25 across all workloads; RocksDB ranges 1.05-1.52. The tightest is Zipfian at 1.04 vs 1.05, where hot-key overwrites keep both engines lean. The widest gap is on 50M small-value random writes at 1.25 vs 1.52. Lower write amplification means less SSD wear and less background I/O contention.
- Sequential 10M keys land at 111 MB vs 205 MB (46% smaller), random 10M at 90 MB vs 140 MB (36% smaller). Small-value 50M sits at 522 MB vs 503 MB (4% larger for TidesDB). Large-value 1M is 302 MB vs 348 MB (13% smaller). Space amplification ratios are TidesDB 0.07–0.14 vs RocksDB 0.08–0.19.
+ Sequential 10M keys land at 111 MB vs 205 MB (46% smaller), random 10M at 90 MB vs 140 MB (36% smaller). Small-value 50M sits at 522 MB vs 503 MB (4% larger for TidesDB). Large-value 1M is 302 MB vs 348 MB (13% smaller). Space amplification ratios are TidesDB 0.07-0.14 vs RocksDB 0.08-0.19.
@@ -132,7 +132,7 @@ On 4KB values TidesDB p99/avg is 1.79x (35,326 µs / 19,723 µs) while RocksDB i
**Latency Variability**
- Write CV is TidesDB 11–38% vs RocksDB 11–457% across write workloads. Zipfian writes are the tightest at 11% for both engines. Random write CV is 37% for TidesDB vs 253% for RocksDB, a 6.8x consistency advantage. Read CV shows TidesDB random reads at 187% vs RocksDB at 48%, the same pattern as before where higher relative variability sits around much smaller absolute latencies (2.13 µs vs 5.06 µs). Random seek CV is very high for TidesDB at 22,750% but that's a quirk of sub-microsecond median latency where even tiny absolute jitter produces a large coefficient.
+ Write CV is TidesDB 11-38% vs RocksDB 11-457% across write workloads. Zipfian writes are the tightest at 11% for both engines. Random write CV is 37% for TidesDB vs 253% for RocksDB, a 6.8x consistency advantage. Read CV shows TidesDB random reads at 187% vs RocksDB at 48%, the same pattern as before where higher relative variability sits around much smaller absolute latencies (2.13 µs vs 5.06 µs). Random seek CV is very high for TidesDB at 22,750%, but that's a quirk of sub-microsecond median latency where even tiny absolute jitter produces a large coefficient.
@@ -142,7 +142,7 @@ TidesDB v8.7.1 delivers rather great improvements across the board versus v8.6.x
The backpressure consolidation from per-op to per-column-family-per-commit fixed TidesDB's historical weakness on single-key operations. Batch-1 writes and single-key deletes now favor TidesDB where previously RocksDB won, and the double-sleep elimination means mixed workloads no longer over-throttle under combined L0 and memory pressure.
- The raw byte cache replacing the old block cache improved cache utilization and shows up in the random read improvement from 2.23x to 2.43x. Write amplification remains consistently lower than RocksDB at 1.04–1.25 vs 1.05–1.52, and space efficiency holds with 36–46% smaller on-disk sizes for standard workloads.
+ The raw byte cache replacing the old block cache improved cache utilization and shows up in the random read improvement from 2.23x to 2.43x. Write amplification remains consistently lower than RocksDB at 1.04-1.25 vs 1.05-1.52, and space efficiency holds with 36-46% smaller on-disk sizes for standard workloads.
The reaper's stack-allocated eviction buffer and the pwritev block manager writes are less visible in the headline numbers but contribute to the overall consistency, removing per-cycle mallocs and reducing syscalls on the write path.
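For readers unfamiliar with vectored I/O: `pwritev` lets a block manager land several discontiguous buffers (say, a block header and its payload) at one file offset in a single syscall, instead of issuing one `pwrite` per buffer or copying everything into a staging buffer first. A minimal sketch of that pattern with assumed buffer names; this is not TidesDB's actual block-manager code:

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <sys/uio.h>
#include <unistd.h>

/* Sketch of vectored writes: one pwritev() call lands a header and a payload
 * at a given file offset, where separate pwrite() calls would cost two
 * syscalls or an extra copy. Illustrative only, not TidesDB's code. */
int main(void) {
    int fd = open("blocks.dat", O_CREAT | O_WRONLY, 0644);
    if (fd < 0) return 1;

    char header[16]  = "BLK0";            /* illustrative block header */
    char payload[64] = "value bytes...";  /* illustrative block payload */

    struct iovec iov[2] = {
        { .iov_base = header,  .iov_len = sizeof header  },
        { .iov_base = payload, .iov_len = sizeof payload },
    };

    off_t block_offset = 0;               /* where this block lives in the file */
    ssize_t n = pwritev(fd, iov, 2, block_offset);
    printf("wrote %zd bytes in one syscall\n", n);

    close(fd);
    return 0;
}
```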
src/content/docs/articles/benchmark-analysis-tidesql-v1-0-0-innodb-in-mariadb-v12-1-2.md (6 additions, 6 deletions)
@@ -97,22 +97,22 @@ This plot shows a large and meaningful gap. InnoDB uses roughly 12 MB to store a
That's not noise and it's not tuning; it's a consequence of structure. InnoDB pays for B-trees, pages, free space, and metadata. If storage footprint matters, this result alone is hard to ignore.
At the 95th percentile, TidesDB inserts are dramatically more predictable. InnoDB shows a long tail, with occasional stalls that push p95 close to 0.4 ms, while TidesDB stays well under 0.1 ms.
For reads, the situation reverses. InnoDB's p95 SELECT latency is significantly lower, while TidesDB shows both a higher average and a worse tail latency.
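The p95 figures in both directions are nearest-rank percentiles over the per-operation latency samples: sort the samples and read off the value at the 95% rank. A minimal sketch of that computation on an illustrative sample set, not the actual measurement data:

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Sketch: nearest-rank percentile over per-op latencies (ms).
 * The sample values are illustrative, not measurements from this benchmark. */
static int cmp_double(const void *a, const void *b) {
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

static double percentile(double *lat, int n, double p) {
    qsort(lat, n, sizeof lat[0], cmp_double);
    int rank = (int)ceil(p * n);       /* nearest-rank definition */
    if (rank < 1) rank = 1;
    if (rank > n) rank = n;
    return lat[rank - 1];
}

int main(void) {
    /* Mostly-fast inserts with occasional stalls near 0.4 ms. */
    double lat_ms[] = {0.05, 0.06, 0.05, 0.07, 0.05, 0.06, 0.05, 0.05, 0.06, 0.05,
                       0.07, 0.05, 0.06, 0.05, 0.05, 0.06, 0.05, 0.07, 0.39, 0.41};
    int n = sizeof lat_ms / sizeof lat_ms[0];
    printf("p95 = %.2f ms\n", percentile(lat_ms, n, 0.95));  /* 0.39 ms */
    return 0;
}
```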
src/content/docs/articles/tidesdb-8-optional-lsmb+.md (2 additions, 2 deletions)
@@ -48,7 +48,7 @@ This figure captures the core architectural win. For point lookups and seeks, th
## PUT tail latency (p95 with p99 markers)

- In random, mixed, and populate phases, p95 latency drops by roughly 30–40%, and p99 follows the same trend. The exception is range-populate, where B+tree p95 is worse, consistent with its lower PUT throughput there, but not too concerning.
+ In random, mixed, and populate phases, p95 latency drops by roughly 30-40%, and p99 follows the same trend. The exception is range-populate, where B+tree p95 is worse, consistent with its lower PUT throughput there, but not too concerning.
## Read / seek / range tail latency (log scale)

@@ -58,7 +58,7 @@ On a log scale, this plot highlights how the B+tree collapses read-side tail lat
- The B+tree variant consumes an order of magnitude more disk space than the block layout in several workloads (~1.1–1.2GB vs ~100MB). This makes sense, as the block layout is highly optimized for space efficiency.
+ The B+tree variant consumes an order of magnitude more disk space than the block layout in several workloads (~1.1-1.2GB vs ~100MB). This makes sense, as the block layout is highly optimized for space efficiency.