If the partition count or Kafka IO size is large, skip committing offsets that are not changed. Reduce Kafka commit load #37510

Conversation
Summary of Changes

Hello @kishorepola, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request optimizes the Kafka offset commit process within Apache Beam's KafkaIO connector. Previously, the system committed offsets for all topic-partitions, even when some were idle and their offsets had not changed. This behavior could put excessive metadata load on Kafka brokers. The implemented solution introduces in-memory tracking of the last committed offsets and ensures that only partitions with advanced offsets are committed, thereby reducing the commit load and improving overall system efficiency.
Assigning reviewers: R: @ahmedabu98 for label java.

Note: If you would like to opt out of this review, comment with one of the available commands. The PR bot will only process comments in the main thread (not review comments).
Reminder, please take a look at this PR: @ahmedabu98 @sjvanrossum
Hi @tomstepp, can you help review this?
tomstepp left a comment:
Thanks for contributing this!
@kishorepola - please let us know if you need any help with the feedback, thanks!
- Refactor `commitCheckpointMark` to use Java streams (per @johnjcasey): changed from an explicit for-loop to streams-based filtering for better consistency with existing code patterns.
- Add debug logging for idle partitions (per @tomstepp): log the count of idle partitions skipped during each commit to aid in monitoring and debugging the optimization.
- Implement time-based periodic commits (per @tomstepp): track the last commit time per partition and ensure commits happen at least every 10 minutes even for idle partitions. This supports time-lag monitoring use cases where customers track the time since the last commit.
- Add unit test for idle partition behavior (per @tomstepp): the new test `KafkaUnboundedReaderIdlePartitionTest` verifies that idle partitions are not committed repeatedly and that active partitions trigger commits correctly, using a mock consumer to track commit calls.

All changes maintain backward compatibility and follow Apache Beam coding standards (spotless formatting applied).
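The streams-based filtering combined with the 10-minute periodic commit described above might look roughly like the following. This is a self-contained sketch, not KafkaIO's actual code: `CommitFilter`, `offsetsToCommit`, and the field names are illustrative, and partitions are keyed by plain strings instead of Kafka `TopicPartition` objects.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: select only partitions whose offset advanced, or whose last commit
// is older than a fixed interval (so idle partitions still commit periodically,
// which keeps time-lag monitoring meaningful).
public class CommitFilter {
  static final long COMMIT_INTERVAL_MS = 10 * 60 * 1000L; // 10 minutes

  final Map<String, Long> lastCommittedOffsets = new HashMap<>();
  final Map<String, Long> lastCommitTimes = new HashMap<>();

  Map<String, Long> offsetsToCommit(Map<String, Long> current, long nowMs) {
    Map<String, Long> toCommit = current.entrySet().stream()
        .filter(e -> {
          Long prev = lastCommittedOffsets.get(e.getKey());
          Long lastTime = lastCommitTimes.get(e.getKey());
          boolean advanced = prev == null || !prev.equals(e.getValue());
          boolean stale = lastTime == null || nowMs - lastTime >= COMMIT_INTERVAL_MS;
          return advanced || stale;
        })
        .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    // Record what we are about to commit so the next cycle can skip it.
    toCommit.forEach((tp, off) -> {
      lastCommittedOffsets.put(tp, off);
      lastCommitTimes.put(tp, nowMs);
    });
    return toCommit;
  }

  public static void main(String[] args) {
    CommitFilter f = new CommitFilter();
    Map<String, Long> offsets = new HashMap<>();
    offsets.put("t-0", 10L);
    offsets.put("t-1", 20L);
    System.out.println(f.offsetsToCommit(offsets, 0L).size());       // 2: first commit
    System.out.println(f.offsetsToCommit(offsets, 1_000L).size());   // 0: nothing changed
    offsets.put("t-0", 11L);
    System.out.println(f.offsetsToCommit(offsets, 2_000L).size());   // 1: only t-0 advanced
    System.out.println(f.offsetsToCommit(offsets, 700_000L).size()); // 2: periodic commit
  }
}
```

Note the design trade-off the periodic commit addresses: without it, a consumer-lag dashboard watching commit timestamps would see idle partitions as "stalled" forever.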
Force-pushed from fabf88f to 39add93
@tomstepp @johnjcasey Thank you for the thorough review! I've addressed all the feedback. All changes maintain backward compatibility and follow spotless formatting standards. Ready for another review!
Rewrote `KafkaUnboundedReaderIdlePartitionTest` to follow the exact pattern used in `KafkaIOTest.java`:
- Proper `MockConsumer` initialization with partition metadata
- Correct setup of beginning/end offsets
- Consumer records with proper offsets and timestamps
- `schedulePollTask` for record enqueueing based on position
- Override `commitSync` to track commit calls
- Use `reader.start()` before `reader.advance()`

This ensures the test properly initializes the Kafka consumer and doesn't fail with `IllegalStateException` during `source.split()`.
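The commit-tracking trick in that test (overriding `commitSync` to record calls) can be illustrated without the kafka-clients dependency. `FakeConsumer` below is a stand-in for Kafka's `MockConsumer`, and the loop is a hypothetical checkpoint cycle, not the real reader code:

```java
import java.util.HashMap;
import java.util.Map;

// Illustrates the test strategy: subclass the consumer and record every commit,
// so a test can assert that idle partitions are not committed repeatedly.
public class CommitCountingDemo {
  // Minimal stand-in for a consumer that accepts offset commits.
  static class FakeConsumer {
    void commitSync(Map<String, Long> offsets) {}
  }

  static class CommitCountingConsumer extends FakeConsumer {
    int commitCalls = 0;

    @Override
    void commitSync(Map<String, Long> offsets) {
      commitCalls++; // record the call instead of talking to a broker
    }
  }

  public static void main(String[] args) {
    CommitCountingConsumer consumer = new CommitCountingConsumer();
    Map<String, Long> lastCommitted = new HashMap<>();

    // Simulate three checkpoint cycles where the offset changes only once.
    long[] observedOffsets = {42L, 42L, 43L};
    for (long offset : observedOffsets) {
      if (!Long.valueOf(offset).equals(lastCommitted.get("t-0"))) {
        Map<String, Long> toCommit = new HashMap<>();
        toCommit.put("t-0", offset);
        consumer.commitSync(toCommit);
        lastCommitted.put("t-0", offset);
      }
    }
    System.out.println(consumer.commitCalls); // 2: the idle cycle was skipped
  }
}
```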
@Abacn and/or @johnjcasey can you help approve/submit please?
While committing offsets back to Kafka, Beam commits offsets for all the topics and partitions in the KafkaIO. If some topic-partitions are idle, the same old offset is committed again anyway. This puts a lot of metadata pressure on the brokers when the Kafka cluster has many idle partitions or the cluster is reasonably large.

This change adds in-memory tracking of offsets and commits back only those offsets that have changed.
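The core idea can be sketched in plain Java. This is an illustration only; `OffsetCommitTracker` and `shouldCommit` are made-up names, not Beam's actual API, and the real code keys by `TopicPartition` rather than a string:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of skipping commits for idle partitions: keep an in-memory map of
// the last offset committed per partition, and only issue a commit when the
// offset has actually moved.
public class OffsetCommitTracker {
  private final Map<String, Long> lastCommitted = new HashMap<>();

  // Returns true if a commit should be issued for this partition.
  public boolean shouldCommit(String topicPartition, long offset) {
    Long previous = lastCommitted.get(topicPartition);
    if (previous != null && previous == offset) {
      return false; // partition is idle; skip the commit
    }
    lastCommitted.put(topicPartition, offset);
    return true;
  }

  public static void main(String[] args) {
    OffsetCommitTracker tracker = new OffsetCommitTracker();
    System.out.println(tracker.shouldCommit("topic-0", 100L)); // true: first commit
    System.out.println(tracker.shouldCommit("topic-0", 100L)); // false: unchanged
    System.out.println(tracker.shouldCommit("topic-0", 150L)); // true: advanced
  }
}
```

With thousands of partitions where only a few are active, this turns each commit cycle from a full-cluster metadata write into a handful of small ones.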