[SPARK-56400][SS] Apply rangeScan API in transformWithState Timer/TTL #55265
HeartSaVioR wants to merge 5 commits into apache:master
Conversation
Only the last commit is related to this PR. Once #55226 is merged, I'll rebase.
bogao007 left a comment:
Got some questions but LGTM overall!
```scala
// The schema of the UnsafeRow returned by this iterator is (expirationMs, elementKey).
private[sql] def ttlEvictionIterator(): Iterator[UnsafeRow] = {
  val ttlIterator = store.iterator(TTL_INDEX)
  val dummyElementKey = ELEMENT_KEY_PROJECTION
```
Is the dummyElementKey here just because TTL_ENCODER.encodeTTLRow requires an elementKey?
```scala
} else {
  None
}
val ttlIterator = store.rangeScan(startKey, endKey, TTL_INDEX)
```
Should we update the method comment to indicate that we are using range scan now?
cc @viirya
Use bounded scan ranges in transformWithState TTL eviction and timer expiry to narrow the iteration scope:

- TTLState.ttlEvictionIterator: use store.scan with startKey from prevBatchTimestampMs+1 and endKey from batchTimestampMs+1 to skip entries already cleaned up in the previous batch.
- TimerStateImpl.getExpiredTimers: use store.scan with startKey from prevExpiryTimestampMs+1 and endKey from expiryTimestampMs+1. Processing-time timers use prevBatchTimestampMs; event-time timers use eventTimeWatermarkForLateEvents.

Thread prevBatchTimestampMs from IncrementalExecution (via prevOffsetSeqMetadata) through TransformWithStateExec -> StatefulProcessorHandleImpl -> TTLState / TimerStateImpl.

Copy UnsafeRow results from encodeTTLRow/UnsafeProjection to avoid the mutable-row-reuse bug where startKey and endKey alias the same internal buffer.
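To illustrate that last point, here is a minimal sketch of the mutable-row-reuse pitfall; the schema, field name, and `boundary` helper are illustrative, not the PR's exact code:

```scala
import org.apache.spark.sql.catalyst.expressions.{GenericInternalRow, UnsafeProjection, UnsafeRow}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

// An UnsafeProjection reuses a single internal buffer across apply() calls,
// so two boundary rows built from the same projection alias the same bytes
// unless each result is copied.
val proj = UnsafeProjection.create(StructType(Seq(StructField("expirationMs", LongType))))
def boundary(ts: Long): UnsafeRow = {
  val row = new GenericInternalRow(1)
  row.setLong(0, ts)
  proj(row).copy() // copy() detaches the result from the projection's reused buffer
}
// Without the copy(), startKey and endKey would both reflect the most recent
// apply() call, collapsing the scan range to a single point.
```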
@anishshri-db @viirya PTAL, thanks!
```scala
val row = new GenericInternalRow(keySchemaForSecIndex.length)
row.setLong(0, tsMs + 1)
```
Is it always valid to fill only the first field of keySchemaForSecIndex?
Unfortunately no - null sorts as "greater" than non-null in the UnsafeRow binary format. I'm going to apply the same approach as #55267, using defaults from Literal.
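A hedged sketch of that direction, reusing the names from the quoted diff; the fill loop is illustrative, not the PR's exact code:

```scala
import org.apache.spark.sql.catalyst.expressions.{GenericInternalRow, Literal}

val row = new GenericInternalRow(keySchemaForSecIndex.length)
row.setLong(0, tsMs + 1)
// Fill the remaining fields with each type's smallest non-null default, so
// the boundary row sorts at or below every real key sharing the timestamp
// prefix (leaving them null would sort the boundary above them instead).
keySchemaForSecIndex.zipWithIndex.drop(1).foreach { case (field, idx) =>
  row.update(idx, Literal.default(field.dataType).value)
}
```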
```scala
val prevWatermark =
  if (prevBatchTimestampMs.isDefined) eventTimeWatermarkForLateEvents else None
```
So if prevBatchTimestampMs is defined, the value of eventTimeWatermarkForLateEvents is always valid to use?
You're spot on.
This has a subtle bug - it does not work with the legacy config (spark.sql.streaming.statefulOperator.allowMultiple=true). I think few users still leverage this config and we would probably want to remove it (we no longer need such a kill switch - it was introduced a long time ago), so we can probably skip the optimization for the legacy path and remove the config sooner.
I indicated #55267 has the same issue, so I'm going to fix that PR as well.
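A rough sketch of the guard being described, under the assumption that the legacy setting can be detected at planning time; `isLegacyMultipleStatefulOpsConfig` is a hypothetical check, not an existing API:

```scala
// Bypass the bounded-start optimization on the legacy path, where
// eventTimeWatermarkForLateEvents cannot be trusted as "the previous bound".
val onLegacyPath: Boolean = isLegacyMultipleStatefulOpsConfig // hypothetical check
val prevWatermark =
  if (!onLegacyPath && prevBatchTimestampMs.isDefined) {
    eventTimeWatermarkForLateEvents
  } else {
    None // unbounded start, matching the pre-PR behavior
  }
```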
```scala
 * Caveats where the "smallest" property does NOT hold and which therefore should
 * not appear in caller key schemas:
 * - `CharType(n)`: default is space-padded (0x20), but real values may legally
 *   contain control bytes (0x00..0x1F).
 * - `VariantType`: the Variant binary layout is not guaranteed minimized by
 *   `castToVariant(0, IntegerType)`.
```
If the key schema includes these types, should we have some runtime detection?
Let's do that - I guess it's unlikely to hit, but better to be safe.
…chemas

Address apache#55265 (comment): Literal.default's recursive "smallest" property does not hold for CharType or VariantType, so unguarded use can silently produce incorrect range-scan boundaries when such types appear in the key schema.

- CharType(n): override with n zero-bytes (U+0000 repeated, UTF-8 encoded), which is the byte-wise smallest legitimate CharType(n) value. This keeps preserveCharVarcharTypeInfo=true users supported.
- VariantType: assert-reject, because the Variant binary spec is @unstable and a hand-encoded minimum would be brittle. Existing Spark analysis already blocks VariantType in grouping / hashing positions, so this is defensive and should never fire in practice.

Add RangeScanBoundaryUtilsSuite covering the happy path, byte-wise sanity check for CharType, nested CharType in struct, VarcharType empty default, and VariantType rejection at top level / nested in struct / nested in array.
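A minimal sketch of the per-type defaults this commit message describes; `smallestBoundaryValue` is a hypothetical helper name, and recursion into nested struct/array types is elided:

```scala
import org.apache.spark.sql.catalyst.expressions.Literal
import org.apache.spark.sql.types.{CharType, DataType, VariantType}
import org.apache.spark.unsafe.types.UTF8String

def smallestBoundaryValue(dt: DataType): Any = dt match {
  case CharType(n) =>
    // n U+0000 code points: the byte-wise smallest legitimate CharType(n)
    // value, unlike Literal.default's space-padded (0x20) string.
    UTF8String.fromString("\u0000" * n)
  case _: VariantType =>
    // The Variant binary spec is @unstable; reject rather than hand-encode a
    // brittle minimum. Analysis should already block VariantType here anyway.
    throw new AssertionError("VariantType is not supported in range-scan key schemas")
  case other =>
    Literal.default(other).value
}
```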
It's unfortunate that a null field sorts above non-null values in the UnsafeRow binary format, so we have to do this heavy lifting. (Otherwise this would just be filling nulls.) Also, technically we should only ask the caller to fill out the range portion of the key when calling rangeScan, not all key fields. Arguably, if we were very explicit about the intention of the scan in the API (e.g. a timestamp parameter), we really wouldn't need heavy work like this.
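Purely illustrative of that last thought - a hypothetical, narrower API shape; this is not an existing StateStore method:

```scala
import org.apache.spark.sql.catalyst.expressions.UnsafeRow

// The caller supplies only the timestamp bounds; the store fills in the rest
// of the boundary key internally, so no "smallest value" machinery leaks into
// caller code.
trait TimestampRangeScan {
  def rangeScanByTimestamp(
      startTsInclusive: Option[Long],
      endTsExclusive: Option[Long],
      colFamilyName: String): Iterator[UnsafeRow]
}
```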
https://github.com/HeartSaVioR/spark/actions/runs/24624331383/job/72002194205
CI only fails on the k8s integration test and SparkR, which are irrelevant.
Thanks! Merging to master.
What changes were proposed in this pull request?
This PR proposes to apply the rangeScan API in transformWithState Timer/TTL, which improves the scanning of expired timers and of entries with a configured TTL.
The main idea is to scan expired timers and TTL entries over [the end timestamp of the previous scan + 1, new end timestamp], instead of the previous [None, new end timestamp]. Previously the scan had to go through tombstones created by prior batches' evictions (until compaction happens); with this change we can skip those tombstones.
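A hedged sketch of the interval change, with `boundary(ts)` standing in for the PR's boundary-row encoding:

```scala
// Before: startKey = None, so the scan walks every tombstone left by earlier
// evictions until compaction reclaims them.
// After: start just past what the previous batch already evicted.
val startKey = prevBatchTimestampMs.map(ts => boundary(ts + 1)) // None on the very first batch
val endKey = Some(boundary(batchTimestampMs + 1))
val iter = store.rangeScan(startKey, endKey, TTL_INDEX)
```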
Why are the changes needed?
This change gives RocksDB a hint about the exact range to scan, greatly reducing the number of tombstones it has to read.
Does this PR introduce any user-facing change?
No.
How was this patch tested?
New UTs and existing UTs.
Was this patch authored or co-authored using generative AI tooling?
Generated-by: Claude 4.6 Opus