Commit 0eb4fc1
[SPARK-56687][SQL] Support netChanges for DSv2 CDC streaming reads
### What changes were proposed in this pull request?
This PR completes the DSv2 CDC streaming post-processing surface by implementing `deduplicationMode = netChanges` for streaming reads. The previous PR (#55636 / SPARK-56686) added carry-over removal and update detection for streaming but left netChanges batch-only.
The batch path (`ResolveChangelogTable.injectNetChangeComputation`) uses a Catalyst `Window` partitioned by `rowId` and ordered by `(_commit_version, change_type_rank)` to find the first and last events per row identity, then applies the SPIP collapse matrix on `(existedBefore, existsAfter)`. `Window` is rejected on streaming children (`NON_TIME_WINDOW_NOT_SUPPORTED_IN_STREAMING`), and unlike the row-level passes the netChanges aggregate is keyed by `rowId` only -- there's no commit-version + commit-timestamp grouping that would let us reuse the streaming Aggregate pattern.
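The collapse on `(existedBefore, existsAfter)` can be sketched as a pure function. This is an illustrative assumption only: the change-type labels and the flag semantics below are placeholders, not the exact SPIP matrix (which also handles `computeUpdates` relabeling), but the shape of emitting 0, 1, or 2 rows per row identity is the same:

```scala
// Hypothetical sketch of the netChanges collapse matrix.
// existedBefore: the row identity existed before its first observed event;
// existsAfter:   it still exists after its last observed event.
// Labels are illustrative, not the connector's actual change types.
object NetChangesCollapse {
  def collapse(existedBefore: Boolean, existsAfter: Boolean): Seq[String] =
    (existedBefore, existsAfter) match {
      case (false, false) => Seq.empty          // created and deleted inside the window cancel out
      case (false, true)  => Seq("insert")      // net-new row: one output event
      case (true, false)  => Seq("delete")      // net-removed row: one output event
      case (true, true)   =>                    // net update: up to two output events
        Seq("update_preimage", "update_postimage")
    }
}
```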
This PR adds a streaming-friendly equivalent by delegating per-row-identity state management to a new `CdcNetChangesStatefulProcessor` driven by `TransformWithState`:
- The processor stores the first event ever observed and the most-recent event observed for each row identity in `ValueState[Row]`.
- An event-time timer is armed on each batch to the latest `_commit_timestamp` observed for the key. When the global watermark advances past the timer, `handleExpiredTimer` evaluates the SPIP matrix and emits 0, 1, or 2 output rows -- identical semantics to the batch path.
- Existing per-key timers are deleted before re-arming so that out-of-order events for an earlier commit can't fire a stale timer between batches and produce duplicate output for the same row identity.
The analyzer rule constructs `TransformWithState` directly (no precedent in Catalyst for this; the typed-Dataset DSL is the usual entry point). Encoders for the input/output `Row` and the rowId tuple are built via `ExpressionEncoder(StructType)`. Nested rowId paths (e.g. `payload.id`) are handled by aliasing each rowId expression to a top-level `__spark_cdc_rowid_<i>` helper column before the `TransformWithState`, then dropping the helpers in a final `Project` so the user-visible schema matches the connector's declared changelog schema.
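The per-key state machine the processor implements can be modeled without Spark. Everything here is an illustrative assumption standing in for `CdcNetChangesStatefulProcessor`: the names, the event shape, and in particular the derivation of the two flags from the first/last change types are simplifications, not the real implementation:

```scala
import scala.collection.mutable

// Plain-Scala model of the per-row-identity state: the first event ever
// observed, the most-recent event observed, and one event-time timer per
// key, re-armed to the latest _commit_timestamp seen. Names are hypothetical.
final case class Event(changeType: String, commitVersion: Long, commitTs: Long)

final class NetChangesModel {
  private val first  = mutable.Map.empty[String, Event] // ValueState analogue
  private val latest = mutable.Map.empty[String, Event] // ValueState analogue
  private val timers = mutable.Map.empty[String, Long]  // single timer per key

  def handleInput(key: String, e: Event): Unit = {
    // Tolerate out-of-order events: keep min/max by commit version.
    if (!first.contains(key) || e.commitVersion < first(key).commitVersion) first(key) = e
    if (!latest.contains(key) || e.commitVersion >= latest(key).commitVersion) latest(key) = e
    // Delete-then-re-arm: one timer per key, at the latest timestamp observed.
    timers(key) = math.max(timers.getOrElse(key, Long.MinValue), e.commitTs)
  }

  // Fires every timer the watermark has passed, applies the collapse matrix,
  // emits the net rows, and clears the key's state. Reading existedBefore /
  // existsAfter off the change types is an illustrative simplification.
  def advanceWatermark(wm: Long): Seq[(String, String)] =
    timers.collect { case (k, t) if t < wm => k }.toSeq.sorted.flatMap { k =>
      val existedBefore = first(k).changeType != "insert"
      val existsAfter   = latest(k).changeType != "delete"
      val out = (existedBefore, existsAfter) match {
        case (false, false) => Seq.empty            // insert + delete cancel
        case (false, true)  => Seq("insert")
        case (true, false)  => Seq("delete")
        case (true, true)   => Seq("update_postimage")
      }
      first -= k; latest -= k; timers -= k
      out.map(k -> _)
    }
}
```

Feeding an insert followed by a delete for one key and a lone insert for another, then advancing the watermark past both timers, yields output only for the second key, mirroring the cancel case the end-to-end tests exercise.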
Plan shape:
```
EventTimeWatermark(_commit_timestamp, 0s)
-> Project (alias rowId expressions to flat helper columns)
-> TransformWithState (grouping = rowId helpers, EventTime mode, Append)
-> SerializeFromObject
-> Project (drop rowId helper columns)
```
When carry-over removal / update detection are also requested, the row-level rewrite is applied first; the netChanges `TransformWithState` then sits on top of it and the rule reuses the existing `EventTimeWatermark` rather than stacking another (multi-watermark stacking is rejected unless `STATEFUL_OPERATOR_ALLOW_MULTIPLE` is set).
#### Documented limitation
Row identities touched only in the latest observed commit are held back until a later commit (with strictly greater `_commit_timestamp`) advances the watermark past them, or the source terminates. End-of-input flushes all timers, so bounded streams produce output equivalent to the corresponding batch read. This matches the steady-state behavior of the row-level streaming rewrite.
Also removes the now-obsolete error class `INVALID_CDC_OPTION.STREAMING_NET_CHANGES_NOT_SUPPORTED` introduced in SPARK-56686.
### Why are the changes needed?
Without this PR, `deduplicationMode = netChanges` is unavailable on streaming CDC reads, forcing users with intermediate-state connectors (`containsIntermediateChanges = true`) to fall back to batch reads when they want a deduplicated change feed. With SPARK-56686 already shipping carry-over removal and update detection for streaming, netChanges was the only post-processing pass still gated to batch -- this completes the surface.
### Does this PR introduce _any_ user-facing change?
Yes.
- Streaming `spark.readStream.changes(...)` now supports `deduplicationMode = netChanges`. Previously this threw `INVALID_CDC_OPTION.STREAMING_NET_CHANGES_NOT_SUPPORTED`.
- That error class is removed; the wording in `DataStreamReader.changes()` and `Changelog.java` Scaladoc has been updated to describe the supported behavior and the latest-commit limitation.
Note: the netChanges streaming path uses `TransformWithState`, which requires the RocksDB state store backend (`spark.sql.streaming.stateStore.providerClass = ...RocksDBStateStoreProvider`). Spark surfaces `UNSUPPORTED_FEATURE.STORE_BACKEND_NOT_SUPPORTED_FOR_TWS` if the default HDFS-backed provider is left in place, so this is discoverable.
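For example, enabling the required backend before starting the query might look like the following (a sketch assuming an active `SparkSession` named `spark`; the provider FQN is Spark's stock RocksDB state store provider):

```scala
// Required for any TransformWithState-based query, including streaming
// netChanges; the default HDFS-backed provider is rejected with
// UNSUPPORTED_FEATURE.STORE_BACKEND_NOT_SUPPORTED_FOR_TWS.
spark.conf.set(
  "spark.sql.streaming.stateStore.providerClass",
  "org.apache.spark.sql.execution.streaming.state.RocksDBStateStoreProvider")
```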
### How was this patch tested?
89 tests pass across 4 CDC suites (all green):
- `ResolveChangelogTableStreamingPostProcessingSuite` -- two new plan-shape tests: `netChanges alone injects watermark + TransformWithState` and `netChanges + carry-over removal share a single watermark` (verifies that the netChanges `TransformWithState` reuses the row-level rewrite's `EventTimeWatermark` rather than stacking another).
- `ChangelogResolutionSuite` -- the `netChanges throws` test from SPARK-56686 is flipped to assert that exactly one `TransformWithState` appears in the analyzed plan.
- `ResolveChangelogTablePostProcessingSuite` -- the corresponding netChanges throw test is similarly flipped.
- `ChangelogEndToEndSuite` -- two new end-to-end tests that drive a streaming query against `InMemoryChangelogCatalog` with the RocksDB state store: `streaming netChanges collapses INSERT then DELETE to no output` (confirms the `(false, false)` cancel case and that end-of-input flushes the latest commit's group) and `streaming netChanges with computeUpdates labels persisting rows as updates` (confirms the `(false, true)` case relabels correctly).
Also confirmed `UnsupportedOperationsSuite` (216 tests) still passes.
### Was this patch authored or co-authored using generative AI tooling?
Generated-by: Claude Code (claude-opus-4-7)
Closes #55637 from gengliangwang/streamingCDC-netChanges.
Authored-by: Gengliang Wang <gengliang@apache.org>
Signed-off-by: Gengliang Wang <gengliang@apache.org>
1 parent 2df302d commit 0eb4fc1
12 files changed
Lines changed: 843 additions & 106 deletions
File tree
- common/utils/src/main/resources/error
- sql
- api/src/main/scala/org/apache/spark/sql/streaming
- catalyst/src/main
- java/org/apache/spark/sql/connector/catalog
- scala/org/apache/spark/sql
- catalyst/analysis
- errors
- execution/datasources/v2
- core/src/test/scala/org/apache/spark/sql/connector