
[SPARK-56400][SS] Apply rangeScan API in transformWithState Timer/TTL#55265

Closed
HeartSaVioR wants to merge 5 commits into apache:master from HeartSaVioR:SPARK-56400-on-top-of-SPARK-56369

Conversation

@HeartSaVioR
Contributor

What changes were proposed in this pull request?

This PR proposes to apply the rangeScan API in transformWithState Timer/TTL, which improves the scanning of expired timers and of entries with a configured TTL.

The main idea is to scan expired timers and TTL entries over the range [end timestamp of the previous scan + 1, new end timestamp], where it was previously [None, new end timestamp]. The old scan had to go through tombstones that prior batches produced during earlier evictions (until compaction happens); with this change we can skip those tombstones.
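As a self-contained illustration of the bounded-scan idea, here is a minimal Scala sketch using a sorted map as a stand-in for the state store's TTL secondary index (all names are illustrative, not the actual Spark internals):

import scala.collection.immutable.TreeMap

object BoundedScanSketch {
  def main(args: Array[String]): Unit = {
    // Secondary index keyed by expiration timestamp (ms).
    val ttlIndex = TreeMap(5L -> "a", 12L -> "b", 20L -> "c", 31L -> "d")
    val prevBatchTimestampMs = 12L // end timestamp of the previous scan
    val batchTimestampMs = 30L     // new end timestamp

    // Old behavior: [None, batchTimestampMs] re-walks the range the previous
    // batch already evicted (tombstones, in the real store).
    val unbounded = ttlIndex.rangeTo(batchTimestampMs)
    // New behavior: [prevBatchTimestampMs + 1, batchTimestampMs] skips it.
    val bounded = ttlIndex.range(prevBatchTimestampMs + 1, batchTimestampMs + 1)

    println(unbounded) // TreeMap(5 -> a, 12 -> b, 20 -> c)
    println(bounded)   // TreeMap(20 -> c)
  }
}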

Why are the changes needed?

This change gives RocksDB a hint about the exact range to scan, greatly reducing the chance of reading tombstones.

Does this PR introduce any user-facing change?

No.

How was this patch tested?

New UTs, and existing UTs.

Was this patch authored or co-authored using generative AI tooling?

Generated-by: Claude 4.6 Opus

@HeartSaVioR
Contributor Author

Only the last commit is related to this PR. Once #55226 is merged, I'll rebase.

Contributor

@bogao007 left a comment


Got some questions but LGTM overall!

// The schema of the UnsafeRow returned by this iterator is (expirationMs, elementKey).
private[sql] def ttlEvictionIterator(): Iterator[UnsafeRow] = {
val ttlIterator = store.iterator(TTL_INDEX)
val dummyElementKey = ELEMENT_KEY_PROJECTION
Contributor


Is the dummyElementKey here just because TTL_ENCODER.encodeTTLRow requires an elementKey?

Contributor Author


Yes, correct.

} else {
None
}
val ttlIterator = store.rangeScan(startKey, endKey, TTL_INDEX)
Contributor


Should we update the method comment to indicate that we are using range scan now?

@HeartSaVioR force-pushed the SPARK-56400-on-top-of-SPARK-56369 branch from 3c99f65 to 6025cf0 on April 15, 2026 03:34
@HeartSaVioR
Contributor Author

cc. @viirya

Use bounded scan ranges in transformWithState TTL eviction and timer
expiry to narrow the iteration scope:

- TTLState.ttlEvictionIterator: use store.scan with startKey from
  prevBatchTimestampMs+1 and endKey from batchTimestampMs+1 to skip
  entries already cleaned up in the previous batch.
- TimerStateImpl.getExpiredTimers: use store.scan with startKey from
  prevExpiryTimestampMs+1 and endKey from expiryTimestampMs+1.
  Processing-time timers use prevBatchTimestampMs; event-time timers
  use eventTimeWatermarkForLateEvents.

Thread prevBatchTimestampMs from IncrementalExecution (via
prevOffsetSeqMetadata) through TransformWithStateExec ->
StatefulProcessorHandleImpl -> TTLState / TimerStateImpl.

Copy UnsafeRow results from encodeTTLRow/UnsafeProjection to avoid
the mutable-row-reuse bug where startKey and endKey alias the same
internal buffer.
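
As a sketch of the mutable-row-reuse pitfall the last paragraph describes (assuming Spark's catalyst classes on the classpath; the single-field schema is illustrative):

import org.apache.spark.sql.catalyst.expressions.{GenericInternalRow, UnsafeProjection}
import org.apache.spark.sql.types.{LongType, StructField, StructType}

val schema = StructType(Seq(StructField("expirationMs", LongType)))
val proj = UnsafeProjection.create(schema)

val startRow = new GenericInternalRow(1)
startRow.setLong(0, 100L)
val endRow = new GenericInternalRow(1)
endRow.setLong(0, 200L)

// Bug: UnsafeProjection reuses a single internal buffer, so after the second
// apply both references alias the same row, which now holds 200L.
val startKeyAliased = proj(startRow)
val endKeyAliased = proj(endRow)
assert(startKeyAliased.getLong(0) == 200L) // startKey was clobbered

// Fix: copy() materializes an independent UnsafeRow before the buffer is reused.
val startKey = proj(startRow).copy()
val endKey = proj(endRow).copy()
assert(startKey.getLong(0) == 100L && endKey.getLong(0) == 200L)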
@HeartSaVioR force-pushed the SPARK-56400-on-top-of-SPARK-56369 branch from 6025cf0 to 5520bb6 on April 18, 2026 12:54
@HeartSaVioR
Contributor Author

@anishshri-db @viirya PTAL, thanks!

Comment on lines +128 to +129
val row = new GenericInternalRow(keySchemaForSecIndex.length)
row.setLong(0, tsMs + 1)
Member


Is it always valid to only fill the first field of keySchemaForSecIndex?

Contributor Author


Unfortunately no - null is "greater" than non-null in the UnsafeRow format. I'm going to apply the same approach as #55267, using defaults via Literal.
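
A sketch of that approach (illustrative schema, assuming Spark catalyst on the classpath): since null sorts greater than non-null in the UnsafeRow binary format, the lower bound fills the trailing key fields with each type's smallest default value instead of nulls. (The CharType/VariantType caveats discussed below apply to this default-based construction.)

import org.apache.spark.sql.catalyst.expressions.{GenericInternalRow, Literal}
import org.apache.spark.sql.types.{BinaryType, LongType, StructField, StructType}

val keySchemaForSecIndex = StructType(Seq(
  StructField("expirationMs", LongType),
  StructField("elementKey", BinaryType)))

val tsMs = 1234L
val row = new GenericInternalRow(keySchemaForSecIndex.length)
row.setLong(0, tsMs + 1)
// Fill the remaining fields with per-type defaults so the row sorts as the
// smallest possible key carrying this timestamp prefix.
keySchemaForSecIndex.fields.zipWithIndex.drop(1).foreach { case (field, i) =>
  row.update(i, Literal.default(field.dataType).value)
}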

Comment on lines +268 to +269
val prevWatermark =
if (prevBatchTimestampMs.isDefined) eventTimeWatermarkForLateEvents else None
Member


So if prevBatchTimestampMs is defined, the value of eventTimeWatermarkForLateEvents is always valid to use?

Contributor Author


You're spot on.

This has a subtle bug - it does not work with the legacy config (spark.sql.streaming.statefulOperator.allowMultiple=true). I think few users leverage this config and we would probably want to kick it out (we no longer need such a kill switch - it was introduced a long time ago), so we can probably skip the optimization for the legacy path and remove the config sooner.

I noted that #55267 has the same issue, so I'm going to fix that PR as well.

@HeartSaVioR
Contributor Author

0c5e948

The code change is a bit large - @viirya, would you mind having another look and re-approving if the change looks OK to you with respect to your review comments? Thanks in advance!

@HeartSaVioR requested a review from viirya on April 19, 2026 06:59
Comment on lines +40 to +45
* Caveats where the "smallest" property does NOT hold and which therefore should
* not appear in caller key schemas:
* - `CharType(n)`: default is space-padded (0x20), but real values may legally
* contain control bytes (0x00..0x1F).
* - `VariantType`: the Variant binary layout is not guaranteed minimized by
* `castToVariant(0, IntegerType)`.
Member


If the key schema includes these types, should we have some runtime detection?

Contributor Author


Let's do that - I guess it's unlikely to be hit, but better to be safe.

Contributor Author


291553b

I've put in a bit more work - we have a consistent binary lowest value for CharType, so we can actually address it without rejection. We still reject VariantType, since it's not practical for VariantType to be used in a grouping key anyway.

The same change is applied to the PR here - #55267

HeartSaVioR added a commit to HeartSaVioR/spark that referenced this pull request Apr 19, 2026
…chemas

Address apache#55265 (comment):
Literal.default's recursive "smallest" property does not hold for CharType
or VariantType, so unguarded use can silently produce incorrect range-scan
boundaries when such types appear in the key schema.

- CharType(n): override with n zero-bytes (U+0000 repeated, UTF-8 encoded),
  which is the byte-wise smallest legitimate CharType(n) value. This keeps
  preserveCharVarcharTypeInfo=true users supported.
- VariantType: assert-reject, because the Variant binary spec is @unstable
  and a hand-encoded minimum would be brittle. Existing Spark analysis
  already blocks VariantType in grouping / hashing positions, so this is
  defensive and should never fire in practice.

Add RangeScanBoundaryUtilsSuite covering the happy path, byte-wise sanity
check for CharType, nested CharType in struct, VarcharType empty default,
and VariantType rejection at top level / nested in struct / nested in array.
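
A sketch of the CharType lower bound the commit message describes (hypothetical helper name; the real logic lives in the utilities this PR adds):

import org.apache.spark.unsafe.types.UTF8String

// Byte-wise smallest legitimate CharType(n) value: n copies of U+0000, whose
// UTF-8 encoding is n zero bytes - strictly below the space padding (0x20)
// that Literal.default would otherwise produce.
def smallestCharValue(n: Int): UTF8String =
  UTF8String.fromBytes(Array.fill[Byte](n)(0.toByte))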
@HeartSaVioR requested a review from viirya on April 19, 2026 08:02
@HeartSaVioR
Contributor Author

HeartSaVioR commented Apr 19, 2026

It's unfortunate that a null field is flipped in binary order in UnsafeRow, so we are doing heavy-lifting work here. (Otherwise this would just be a matter of filling nulls.) Also, technically we should only ask the caller to fill out the range portion of the key when calling rangeScan, not all key fields. Arguably, if the API were explicit about the intention of the scan, e.g. taking a timestamp directly, we really wouldn't need heavy work like this.
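
For illustration, the API shape gestured at above might look like the following - purely hypothetical, not actual Spark code:

import org.apache.spark.sql.catalyst.expressions.UnsafeRow

// Hypothetical: the store derives the binary range boundaries internally
// from the timestamp prefix, so callers never build full boundary keys.
trait TimestampRangeScans {
  def rangeScanByTimestamp(
      startTsMs: Option[Long],
      endTsMs: Long,
      colFamilyName: String): Iterator[UnsafeRow]
}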

@HeartSaVioR
Contributor Author

https://github.com/HeartSaVioR/spark/actions/runs/24624331383/job/72002194205

CI only fails on the k8s integration test and SparkR, which are irrelevant.

@HeartSaVioR
Contributor Author

Thanks! Merging to master.
