
group/tx/stm: limit local snapshot to max_removable_local_log_offset#30071

Merged
bharathv merged 2 commits into redpanda-data:dev from bharathv:fix_group_tx on Apr 6, 2026

Conversation

@bharathv
Contributor

@bharathv bharathv commented Apr 3, 2026

The group_tx_tracker_stm snapshot could capture open transaction state
(begin_offsets/producer_states) at an offset where the corresponding
commit batch had not yet been written. If compaction later removed that
commit batch, which is allowed once max_removable_offset advances past
it, the open transaction could never be resolved on restart, permanently
blocking max_removable_offset and preventing further compaction.

The sequence:

  1. Snapshot taken while a tx is open (fence at F, snapshot offset >= F)
  2. Tx commits at offset C, max_removable advances past C
  3. Compaction removes the commit batch at C
  4. On restart, the snapshot loads the stale open tx at F; replay cannot find
     the commit -> max_removable stuck at prev(F) forever

Fix: snapshot at max_removable_local_log_offset with an empty
transactions map. Since this STM's sole purpose is tracking open
transactions for max_removable_local_log_offset, and closed transactions
leave no state, all meaningful state can be reconstructed from log
replay. Open transactions are re-discovered from fence batches in the
log, which are guaranteed to be present since compaction is bounded by
max_removable while the STM is live.
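
For orientation, here is a minimal sketch of what the new snapshot logic amounts to. The real implementation is in src/v/kafka/server/group_tx_tracker_stm.cc; the return type, the stm_snapshot::create call, and the serde::to_iobuf conversion below are assumptions based on typical persisted_stm code, not the exact diff:

// Sketch only: snapshot at max_removable_local_log_offset() with an empty
// transactions map. Any still-open transaction (its fence and its eventual
// commit/abort batch) sits at or after this offset, is protected from
// compaction, and is re-discovered by replay on restart.
ss::future<raft::stm_snapshot> group_tx_tracker_stm::take_local_snapshot() {
    auto snapshot_offset = max_removable_local_log_offset();
    snapshot data; // transactions map intentionally left empty
    co_return raft::stm_snapshot::create(
      supported_local_snapshot_version,
      snapshot_offset,
      serde::to_iobuf(std::move(data)));
}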

Also adds a regression test that reproduces the scenario by taking a
snapshot during an open tx, committing, compacting, re-persisting the
stale snapshot, and restarting.

To fix existing setups that already have stale snapshots, this commit also bumps
supported_local_snapshot_version. This invalidates saved snapshots on upgrade, so
the full state is rebuilt from log replay and the next snapshot is written with
the new logic.
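
As a rough illustration of why the bump invalidates old snapshots (the real check is inside apply_local_snapshot(); this standalone helper and its name are hypothetical):

// Hypothetical helper, not the actual Redpanda code: a snapshot written by
// the pre-fix logic records version 1, which no longer matches the supported
// version, so it is treated as absent and state is rebuilt from log replay.
bool local_snapshot_is_usable(int8_t header_version) {
    constexpr int8_t supported_local_snapshot_version = 2; // bumped from 1
    return header_version == supported_local_snapshot_version;
}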

Note: the issue is very rare and hard to reproduce because it requires the broker
to restart quickly between steps 3 and 4, before the STM recomputes the local
snapshot. The window is tiny, but it can be hit in a rolling upgrade test with a
tight compaction interval.

Backports Required

  • none - not a bug fix
  • none - this is a backport
  • none - issue does not exist in previous branches
  • none - papercut/not impactful enough to backport
  • v26.1.x
  • v25.3.x
  • v25.2.x

Release Notes

Bug Fixes

  • Fixes a rare race condition between snapshots and compaction of consumer offset partitions resulting in dangling open transactions

Copilot AI review requested due to automatic review settings April 3, 2026 18:21
Contributor

Copilot AI left a comment

Pull request overview

Fixes a compaction/snapshot interaction in group_tx_tracker_stm that could persist “open transaction” state into a local snapshot and later lose the corresponding commit marker to compaction, permanently pinning max_removable_local_log_offset after restart.

Changes:

  • Modify group_tx_tracker_stm::take_local_snapshot() to snapshot at max_removable_local_log_offset() and omit transaction state.
  • Bump supported_local_snapshot_version to invalidate existing on-disk snapshots on upgrade.
  • Add a new regression test covering snapshot/compaction/restart behavior, and update test BUILD deps/timeout.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 2 comments.

File descriptions:

  • src/v/kafka/server/tests/group_tx_compaction_test.cc: Adds a regression scenario around local snapshots, compaction, and restart recovery for consumer offsets partitions.
  • src/v/kafka/server/tests/BUILD: Updates the group_tx_compaction_test target timeout and adds a dependency for ssx::semaphore_units.
  • src/v/kafka/server/group_tx_tracker_stm.h: Bumps the supported local snapshot version.
  • src/v/kafka/server/group_tx_tracker_stm.cc: Changes local snapshot contents/offset to avoid persisting open transaction state.
Comments suppressed due to low confidence (2)

src/v/kafka/server/group_tx_tracker_stm.h:146

  • After bumping supported_local_snapshot_version to 2, snapshots with header version 1 will be rejected in apply_local_snapshot(), so the comment/branch describing reconstruction of a “legacy snapshot from version 1” is now effectively unreachable. Consider updating/removing the legacy wording/branch to avoid confusion for future maintainers.
private:
    static constexpr int8_t supported_local_snapshot_version = 2;
    struct snapshot
      : serde::envelope<snapshot, serde::version<2>, serde::compat_version<0>> {
        all_txs_t transactions;

        // legacy for version 1 RP to decode version 2+ snapshots
        chunked_vector<kafka::group_id> blocked_groups;

src/v/kafka/server/tests/BUILD:742

  • The target’s timeout is reduced from "moderate" to "short" even though this test file includes long-running compaction loops with RPTEST_REQUIRE_EVENTUALLY_CORO(30s, ...) (and the new restart/compaction scenario adds more work). This is likely to cause spurious CI timeouts/flakes; consider keeping this as "moderate" (or reducing the test’s runtime/upper bounds if "short" is required).
redpanda_cc_gtest(
    name = "group_tx_compaction_test",
    timeout = "short",
    srcs = [
        "group_tx_compaction_test.cc",
    ],
    cpu = 1,

Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 4 out of 4 changed files in this pull request and generated 1 comment.

@WillemKauf
Contributor

If compaction...

tx bugs and compaction, what a duo

@bharathv
Contributor Author

bharathv commented Apr 3, 2026

If compaction...

tx bugs and compaction, what a duo

:D

bharathv added 2 commits April 3, 2026 12:54
@vbotbuildovich
Collaborator

Retry command for Build#82752

please wait until all jobs are finished before running the slash command

/ci-repeat 1
skip-redpanda-build
skip-units
skip-rebase
tests/rptest/tests/upgrade_test.py::RedpandaInstallerTest.test_install_by_line

@vbotbuildovich
Collaborator

vbotbuildovich commented Apr 3, 2026

Retry command for Build#82755

please wait until all jobs are finished before running the slash command

/ci-repeat 1
skip-redpanda-build
skip-units
skip-rebase
tests/rptest/tests/upgrade_test.py::RedpandaInstallerTest.test_install_by_line

@bharathv
Contributor Author

bharathv commented Apr 3, 2026

/ci-repeat 3
skip-redpanda-build
skip-units
dt-repeat=30
tests/rptest/transactions/consumer_offsets_test.py

Contributor

@WillemKauf WillemKauf left a comment

I'm wondering why this situation exists if we are already actively clamping the max tx removal offset by the last snapshotted offset here:

// We clamp the offset up to which we can remove transactional control
// batches to the last snapshot taken by the transactional stm. This
// ensures that we do not remove control batches that may be needed to
// reconstruct the state machine during recovery.
model::offset max_tx_remove_offset = std::min(
  max_tx_end_remove_offset, tx_snapshot_offset);

Answered, this only considers rm_stm, not group_tx_tracker_stm! We unconditionally remove group_commit_tx batches here:

bool is_removable_control_batch(
  const model::ntp& ntp,
  const model::record_batch_type batch_type,
  bool remove_user_tx_fence_enabled) {
    // Control batches in consumer offsets are special compared to
    // the ones in data partitions can be safely compacted away.
    // Fence batches can also be immediately removed when seen in the
    // `__consumer_offsets` topic or safely removed from a user topic. However,
    // removal in a user topic is gated by
    // `log_compaction_tx_batch_removal_enabled()`.
    auto is_co_topic = model::is_consumer_offsets_topic(ntp);
    auto tx_fence_removable = batch_type == model::record_batch_type::tx_fence
                              && (is_co_topic || remove_user_tx_fence_enabled);
    return tx_fence_removable
           || batch_type == model::record_batch_type::group_fence_tx
           || batch_type == model::record_batch_type::group_prepare_tx
           || batch_type == model::record_batch_type::group_abort_tx
           || batch_type == model::record_batch_type::group_commit_tx;
}

🤯

Also, is it about time we drop some tx w/ compaction tests in antithesis to try to shake the tree as hard as possible for any remaining bugs?

Contributor

@WillemKauf WillemKauf left a comment

LGTM for reasons we discussed in Slack

@bharathv
Contributor Author

bharathv commented Apr 6, 2026

Also, is it about time we drop some tx w/ compaction tests in antithesis to try to shake the tree as hard as possible for any remaining bugs?

yes good idea.

@bharathv bharathv merged commit 79347cf into redpanda-data:dev Apr 6, 2026
18 checks passed
@bharathv bharathv deleted the fix_group_tx branch April 6, 2026 18:48
@vbotbuildovich
Collaborator

/backport v26.1.x

@vbotbuildovich
Collaborator

/backport v25.3.x

@vbotbuildovich
Collaborator

/backport v25.2.x

@vbotbuildovich
Collaborator

Failed to create a backport PR to v25.2.x branch. I tried:

git remote add upstream https://github.com/redpanda-data/redpanda.git
git fetch --all
git checkout -b backport-pr-30071-v25.2.x-161 remotes/upstream/v25.2.x
git cherry-pick -x ede6621399 45fff0d7b9

Workflow run logs.

@vbotbuildovich
Collaborator

Failed to create a backport PR to v25.3.x branch. I tried:

git remote add upstream https://github.com/redpanda-data/redpanda.git
git fetch --all
git checkout -b backport-pr-30071-v25.3.x-629 remotes/upstream/v25.3.x
git cherry-pick -x ede6621399 45fff0d7b9

Workflow run logs.
