src/content/docs/reference/cplusplus.md (13 additions, 1 deletion)
@@ -231,6 +231,18 @@ txn.del(cf, "key");
 txn.commit();
 ```
 
+#### Single Delete
+
+`singleDelete` emits a single-delete tombstone. When the tombstone meets exactly one prior put for the same key during compaction, both records are dropped, so the tombstone does not persist past its matching put. Use it only when the caller guarantees at most one put precedes the delete; otherwise prefer `del`.
+
+```cpp
+auto cf = db.getColumnFamily("my_cf");
+
+auto txn = db.beginTransaction();
+txn.singleDelete(cf, "key");
+txn.commit();
+```
+
 #### Multi-Operation Transactions
 
 ```cpp
@@ -1093,7 +1105,7 @@ Use `ObjectStoreConfig::defaultConfig()` for sensible defaults, then override fi
 
 ### Per-CF Object Store Tuning
 
-Column family configurations include three object store tuning fields.
+Column family configurations include two object store tuning fields.
 
 - `objectLazyCompaction` · 1 to compact less aggressively for remote storage (default: 0)
 - `objectPrefetchCompaction` · 1 to download all inputs before compaction merge (default: 1)
+`Transaction.SingleDelete` writes a tombstone with the same read semantics as `Delete`, but carries a caller-provided promise that lets compaction drop the put and the tombstone together as soon as both appear in the same merge input, rather than carrying the tombstone forward until it reaches the largest active level.
+
+Between any two single-deletes on the same key, and between the start of the key's history and its first single-delete, the key has been put **at most once**. The engine does not and cannot verify this at runtime; violating the contract can leave older puts visible after the single-delete and is a bug in the caller.
+
+This is the right choice for workloads that insert each key exactly once and then delete it exactly once (classic insert-benchmark patterns, secondary-index entries on columns that are never updated, log-style tables with scheduled purges). It is **not** safe for tables that issue repeated updates to the same key.
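The at-most-one-put contract can be illustrated with a small, self-contained simulation. This is not TidesDB code: the record layout and merge loop are invented for illustration, but the behavior mirrors the rule described above, where a single-delete tombstone annihilates only the newest put it meets during a merge.

```python
# Toy model of single-delete compaction semantics (illustration only, not
# TidesDB internals). A record is (key, seqno, kind, value).

def compact(records):
    """Merge one key's history, processing records from newest to oldest.

    A SINGLE_DELETE tombstone is dropped together with the first (newest)
    PUT it meets; anything older than that pair survives the merge.
    """
    out = []
    pending_tombstone = False
    for rec in sorted(records, key=lambda r: -r[1]):  # newest seqno first
        kind = rec[2]
        if kind == "SINGLE_DELETE":
            pending_tombstone = True   # remember the tombstone, emit nothing
        elif kind == "PUT" and pending_tombstone:
            pending_tombstone = False  # tombstone + matching put annihilate
        else:
            out.append(rec)            # older history is carried forward
    return out

# Contract respected: one put, then one single-delete -> both vanish.
ok_history = [("k", 1, "PUT", "v1"), ("k", 2, "SINGLE_DELETE", None)]
assert compact(ok_history) == []

# Contract violated: two puts before the single-delete -> the older put
# survives the merge and becomes visible again.
bad_history = [("k", 1, "PUT", "v1"), ("k", 2, "PUT", "v2"),
               ("k", 3, "SINGLE_DELETE", None)]
assert compact(bad_history) == [("k", 1, "PUT", "v1")]
```

The second assertion is exactly the failure mode the contract paragraph warns about: the engine cannot detect the extra put, so the stale value simply reappears.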
src/content/docs/reference/go.md (36 additions, 0 deletions)
@@ -408,6 +408,39 @@ if err != nil {
 }
 ```
 
+#### Single-Delete
+
+`SingleDelete` writes a tombstone with the same read semantics as `Delete`, but carries a caller-provided promise that lets compaction drop the put and the tombstone together as soon as both appear in the same merge input, rather than carrying the tombstone forward until it reaches the largest active level.
+
+Between any two single-deletes on the same key, and between the start of the key's history and its first single-delete, the key has been put **at most once**. The engine does not and cannot verify this at runtime; violating the contract can leave older puts visible after the single-delete and is a bug in the caller.
+
+This is the right choice for workloads that insert each key exactly once and then delete it exactly once (classic insert-benchmark patterns, secondary-index entries on columns that are never updated, log-style tables with scheduled purges). It is **not** safe for tables that issue repeated updates to the same key.
+
+```go
+cf, err := db.GetColumnFamily("my_cf")
+if err != nil {
+    log.Fatal(err)
+}
+
+txn, err := db.BeginTxn()
+if err != nil {
+    log.Fatal(err)
+}
+defer txn.Free()
+
+err = txn.SingleDelete(cf, []byte("key"))
+if err != nil {
+    log.Fatal(err)
+}
+
+err = txn.Commit()
+if err != nil {
+    log.Fatal(err)
+}
+```
+
+Returns `nil` on success or a non-nil error on failure. When in doubt, prefer `Delete`.
+
 #### Multi-Operation Transactions
 
 ```go
@@ -1807,4 +1840,7 @@ go test -v -run TestCfConfigIni
+`txn.singleDelete` writes a tombstone with the same read semantics as `txn.delete`, but carries a caller-provided promise that lets compaction drop the put and the tombstone together as soon as both appear in the same merge input, rather than carrying the tombstone forward until it reaches the largest active level.
+
+Between any two single-deletes on the same key, and between the start of the key's history and its first single-delete, the key has been put **at most once**. The engine does not and cannot verify this at runtime; violating the contract can leave older puts visible after the single-delete and is a bug in the caller.
+
+This is the right choice for workloads that insert each key exactly once and then delete it exactly once (classic insert-benchmark patterns, secondary-index entries on columns that are never updated, log-style tables with scheduled purges). It is **not** safe for tables that issue repeated updates to the same key.
+
+```java
+ColumnFamily cf = db.getColumnFamily("my_cf");
+
+try (Transaction txn = db.beginTransaction()) {
+    txn.singleDelete(cf, "key".getBytes());
+    txn.commit();
+}
+```
+
+When in doubt, prefer `txn.delete`.
+
 #### Transaction Rollback
 
 ```java
@@ -748,7 +767,7 @@ try (TidesDB db = TidesDB.open(config)) {
 
 ### Per-CF Object Store Tuning
 
-Column family configurations include three object store tuning fields:
+Column family configurations include two object store tuning fields:
src/content/docs/reference/lua.md (22 additions, 0 deletions)
@@ -270,6 +270,27 @@ txn:commit()
 txn:free()
 ```
 
+#### Single-Delete
+
+`txn:single_delete(cf, key)` writes a tombstone with the same read semantics as `txn:delete`, but carries a caller-provided promise that lets compaction drop the put and the tombstone together as soon as both appear in the same merge input, rather than carrying the tombstone forward until it reaches the largest active level.
+
+Between any two single-deletes on the same key, and between the start of the key's history and its first single-delete, the key has been put **at most once**. The engine does not and cannot verify this at runtime; violating the contract can leave older puts visible after the single-delete and is a bug in the caller.
+
+This is the right choice for workloads that insert each key exactly once and then delete it exactly once (classic insert-benchmark patterns, secondary-index entries on columns that are never updated, log-style tables with scheduled purges). It is **not** safe for tables that issue repeated updates to the same key.
+
+```lua
+local cf = db:get_column_family("my_cf")
+
+local txn = db:begin_txn()
+
+txn:single_delete(cf, "mykey")
+
+txn:commit()
+txn:free()
+```
+
+When in doubt, prefer `txn:delete`.
+
 #### Multi-Operation Transactions
 
 ```lua
@@ -1401,6 +1422,7 @@ lua test_tidesdb.lua
 |`txn:put(cf, key, value, ttl)`| Put a key-value pair |
 |`txn:get(cf, key)`| Get a value by key |
 |`txn:delete(cf, key)`| Delete a key |
+|`txn:single_delete(cf, key)`| Delete a key with at-most-one-put promise (see Single-Delete) |
 |`txn:commit()`| Commit the transaction |
 |`txn:rollback()`| Rollback the transaction |
 |`txn:reset(isolation)`| Reset transaction for reuse with new isolation level |
src/content/docs/reference/python.md (18 additions, 0 deletions)
@@ -202,6 +202,24 @@ with db.begin_txn() as txn:
     txn.commit()
 ```
 
+### Single-Delete
+
+`single_delete` writes a tombstone with the same read semantics as `delete`, but carries a caller-provided promise that lets compaction drop the put and the tombstone together as soon as both appear in the same merge input, rather than carrying the tombstone forward until it reaches the largest active level.
+
+Between any two single-deletes on the same key, and between the start of the key's history and its first single-delete, the key has been put **at most once**. The engine does not and cannot verify this at runtime; violating the contract can leave older puts visible after the single-delete and is a bug in the caller.
+
+This is the right choice for workloads that insert each key exactly once and then delete it exactly once (classic insert-benchmark patterns, secondary-index entries on columns that are never updated, log-style tables with scheduled purges). It is **not** safe for tables that issue repeated updates to the same key.
+
+```python
+cf = db.get_column_family("my_cf")
+
+with db.begin_txn() as txn:
+    txn.single_delete(cf, b"mykey")
+    txn.commit()
+```
+
+When in doubt, prefer `delete`.
+
 ### Transaction Reset
 
 `reset()` resets a committed or aborted transaction for reuse with a new isolation level. This avoids the overhead of freeing and reallocating transaction resources in hot loops.
src/content/docs/reference/rust.md (41 additions, 6 deletions)
@@ -40,7 +40,7 @@ The easiest way to add TidesDB to your project is via [crates.io](https://crates
 
 ```toml
 [dependencies]
-tidesdb = "0.6"
+tidesdb = "0.7"
 ```
 
 Or using `cargo add`:
@@ -65,22 +65,22 @@ Each crate release defaults to a specific TidesDB C library version. You can sel
 
 ```toml
 [dependencies]
-# Uses the default version (currently v9.0.6)
-tidesdb = "0.6"
+# Uses the default version (currently v9.1.0)
+tidesdb = "0.7"
 
 # Pin to a specific TidesDB version
-tidesdb = { version = "0.6", default-features = false, features = ["v9_0_5"] }
+tidesdb = { version = "0.7", default-features = false, features = ["v9_0_5"] }
 ```
 
-Only one version feature can be enabled at a time. The version feature (e.g., `v9_0_6`) maps directly to the TidesDB C library release tag (e.g., `v9.0.6`).
+Only one version feature can be enabled at a time. The version feature (e.g., `v9_1_0`) maps directly to the TidesDB C library release tag (e.g., `v9.1.0`).
 
 ### Object Store Support
 
 To enable S3 object store support, enable the `objectstore` feature:
 
 ```toml
 [dependencies]
-tidesdb = { version = "0.6", features = ["objectstore"] }
+tidesdb = { version = "0.7", features = ["objectstore"] }
+`Transaction::single_delete` writes a tombstone with the same read semantics as `Transaction::delete`, but carries a caller-provided promise that lets compaction drop the put and the tombstone together as soon as both appear in the same merge input, rather than carrying the tombstone forward until it reaches the largest active level.
+
+Between any two single-deletes on the same key, and between the start of the key's history and its first single-delete, the key has been put **at most once**. The engine does not and cannot verify this at runtime; violating the contract can leave older puts visible after the single-delete and is a bug in the caller.
+
+This is the right choice for workloads that insert each key exactly once and then delete it exactly once (classic insert-benchmark patterns, secondary-index entries on columns that are never updated, log-style tables with scheduled purges). It is **not** safe for tables that issue repeated updates to the same key. When in doubt, prefer `Transaction::delete`.
+
+Requires tidesdb >= 9.1.0 (the `v9_1_0` Cargo feature, enabled by default in `tidesdb` 0.7).
src/content/docs/reference/tidesql.md (5 additions, 5 deletions)
@@ -208,20 +208,20 @@ The engine supports Index Condition Pushdown for secondary index scans. When the
 
 ### Multi-Range Read (MRR)
 
-The engine implements a custom MRR path for point-lookup batches such as `WHERE col IN (v1, v2, ..., vN)` on a primary or full-key unique index. When every range the optimizer hands the engine is a full-key point equality (`UNIQUE_RANGE | EQ_RANGE`) and there are at least two ranges, the engine buffers them, converts each key into comparable bytes, and sorts by those bytes so the LSM sees a monotone stream of seeks — much friendlier to the block cache and the merge-heap than N scattered seeks in user-supplied order. Primary-key lookups bypass the iterator entirely via `fetch_row_by_pk`; secondary-index lookups reuse a single cached iterator and do one seek per entry. Ranges whose rows have been deleted concurrently are silently skipped.
+The engine implements a custom MRR path for point-lookup batches such as `WHERE col IN (v1, v2, ..., vN)` on a primary or full-key unique index. When every range the optimizer hands the engine is a full-key point equality (`UNIQUE_RANGE | EQ_RANGE`) and there are at least two ranges, the engine buffers them, converts each key into comparable bytes, and sorts by those bytes so the LSM sees a monotone stream of seeks - much friendlier to the block cache and the merge-heap than N scattered seeks in user-supplied order. Primary-key lookups bypass the iterator entirely via `fetch_row_by_pk`; secondary-index lookups reuse a single cached iterator and do one seek per entry. Ranges whose rows have been deleted concurrently are silently skipped.
 
 The engine deliberately declines MRR in three cases, falling back to the base handler's default implementation:
 
-- Single-range scans (`count < 2`) — MRR has no sorting win for one key, and the eq_ref path is where pessimistic row locking engages.
-- Non-point ranges — true `BETWEEN`/`<`/`>` scans stay on `read_range_first`.
-- Partitioned tables — `ha_partition` already dispatches MRR across children using its own DS-MRR logic.
+- Single-range scans (`count < 2`) - MRR has no sorting win for one key, and the eq_ref path is where pessimistic row locking engages.
+- Non-point ranges - true `BETWEEN`/`<`/`>` scans stay on `read_range_first`.
+- Partitioned tables - `ha_partition` already dispatches MRR across children using its own DS-MRR logic.
 
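The key-ordering idea behind the MRR path above can be sketched in a few lines. This is an illustration, not engine code: `comparable_bytes` is an invented stand-in for whatever memcmp-comparable key encoding the engine uses, and the bias-by-2^63 trick is just one common way to make signed integers sort bytewise.

```python
# Illustration of the MRR ordering step: convert each point-lookup key to
# memcmp-comparable bytes and sort, so the storage layer sees one monotone
# stream of seeks instead of N scattered seeks in user-supplied order.
import struct

def comparable_bytes(n):
    # Biasing a signed 64-bit int by 2**63 maps it into unsigned range so
    # that bytewise (big-endian) comparison agrees with numeric order.
    # The real engine's encoding is engine-specific; this is a stand-in.
    return struct.pack(">Q", n + (1 << 63))

# WHERE id IN (42, 7, 99, 13) arrives in user-supplied order...
in_list = [42, 7, 99, 13]

# ...but the engine buffers the ranges and sorts by comparable bytes
# before seeking, yielding a monotone seek sequence.
seek_order = sorted(in_list, key=comparable_bytes)
assert seek_order == [7, 13, 42, 99]

# The encoding also orders correctly across the sign boundary.
assert comparable_bytes(-5) < comparable_bytes(3)
```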
 ## Auto-Increment
 
 Auto-increment works in a similar way to InnoDB. The engine calls MariaDB's built-in `update_auto_increment()` mechanism during `write_row()`. Rather than calling `index_last()` on every INSERT (which would create and destroy a TidesDB merge-heap iterator each time), the engine maintains an in-memory atomic counter on the shared table descriptor. The counter is seeded once at table open time by seeking to the last key in the primary key column family, and is atomically incremented via a CAS loop on each INSERT - making auto-increment assignment O(1). When a user inserts an explicit value larger than the current counter, `write_row()` bumps the counter to match.
 
-`TRUNCATE TABLE` and `ALTER TABLE ... AUTO_INCREMENT=N` both reset the counter via the engine's `reset_auto_increment` handler hook — the next generated ID equals `N` (or `1` after a bare `TRUNCATE`). This applies to both user-defined AUTO_INCREMENT columns and hidden-PK tables.
+`TRUNCATE TABLE` and `ALTER TABLE ... AUTO_INCREMENT=N` both reset the counter via the engine's `reset_auto_increment` handler hook - the next generated ID equals `N` (or `1` after a bare `TRUNCATE`). This applies to both user-defined AUTO_INCREMENT columns and hidden-PK tables.
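The counter lifecycle described above (seed once at open, reserve on each INSERT, bump on larger explicit values, reset on TRUNCATE/ALTER) can be modeled with a small sketch. This is not the engine's C++: the class and method names are invented, and a lock stands in for the atomic CAS loop.

```python
# Toy model of the auto-increment counter lifecycle (illustration only).
import threading

class AutoIncCounter:
    def __init__(self, last_stored_id):
        self._next = last_stored_id + 1  # seeded once at table open time
        self._lock = threading.Lock()    # stands in for the atomic CAS loop

    def reserve(self):
        # Each INSERT reserves the next ID in O(1), no iterator needed.
        with self._lock:
            nid = self._next
            self._next += 1
            return nid

    def bump_to(self, explicit_id):
        # An explicit value >= the counter pushes it forward so later
        # generated IDs do not collide with it.
        with self._lock:
            if explicit_id >= self._next:
                self._next = explicit_id + 1

    def reset(self, n=1):
        # TRUNCATE TABLE / ALTER TABLE ... AUTO_INCREMENT=N:
        # the next generated ID equals N (1 after a bare TRUNCATE).
        with self._lock:
            self._next = n

ctr = AutoIncCounter(last_stored_id=10)  # last key seen at open was 10
assert ctr.reserve() == 11
ctr.bump_to(100)                         # explicit INSERT with id=100
assert ctr.reserve() == 101
ctr.reset(1)                             # bare TRUNCATE
assert ctr.reserve() == 1
```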
+`txn.singleDelete()` writes a tombstone with the same read semantics as `txn.delete()`, but carries a caller-provided promise that lets compaction drop the put and the tombstone together as soon as both appear in the same merge input, rather than carrying the tombstone forward until it reaches the largest active level.
+
+Between any two single-deletes on the same key, and between the start of the key's history and its first single-delete, the key has been put **at most once**. The engine does not and cannot verify this at runtime; violating the contract can leave older puts visible after the single-delete and is a bug in the caller.
+
+This is the right choice for workloads that insert each key exactly once and then delete it exactly once (classic insert-benchmark patterns, secondary-index entries on columns that are never updated, log-style tables with scheduled purges). It is **not** safe for tables that issue repeated updates to the same key.