Add DiskProvisionedIops and DiskProvisionedThroughputMibps pipeline options #37377
bambadiouf1 wants to merge 36 commits into apache:master from bambadiouf1:iops_throughput_pipeline_options
Conversation
Summary of Changes
Hello @bambadiouf1, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances Google Cloud Dataflow's capabilities by allowing users to specify detailed performance characteristics for worker VM root disks. By introducing options for provisioned IOPS and throughput, it provides greater control over the underlying infrastructure, which can be crucial for optimizing the performance and cost-efficiency of data processing pipelines. This change is integrated across all supported SDKs, ensuring a consistent experience for developers.
Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment `assign set of reviewers`.
bambadiouf1 force-pushed from 309f5fd to 6cb13d9
bambadiouf1 force-pushed from efa4d20 to 6cb13d9
…rker pools for java sdk
…am/runners/dataflow/options/DataflowPipelineWorkerPoolOptions.java Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…am/runners/dataflow/options/DataflowPipelineWorkerPoolOptions.java Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…ouf1/beam into iops_throughput_pipeline_options
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
/gemini review
damccorm left a comment:
Could you please take a look at the failing precommits?
```
cloud.google.com/go/datastore v1.21.0
cloud.google.com/go/bigquery v1.74.0
cloud.google.com/go/bigtable v1.42.0
cloud.google.com/go/datastore v1.22.0
```
Do we need all these dependency changes? I think they may be responsible for the precommit failures.
Code Review
This pull request introduces support for provisioned IOPS and throughput for worker disks in Google Cloud Dataflow across the Java, Python, and Go SDKs. The changes include adding new pipeline options, updating the Dataflow job translation logic to include these parameters in worker pool configurations, and adding corresponding unit tests. Additionally, several Go dependencies and the Dataflow API client version were updated. Review feedback highlights potential type mismatches in the Java implementation, specifically regarding the conversion of Long options to the expected types in the worker pool setter and incorrect type comparisons in the test assertions.
```java
if (options.getDiskProvisionedIops() != null && options.getDiskProvisionedIops() > 0) {
  workerPool.setDiskProvisionedIops((long) options.getDiskProvisionedIops());
}
if (options.getDiskProvisionedThroughputMibps() != null
    && options.getDiskProvisionedThroughputMibps() > 0) {
  workerPool.setDiskProvisionedThroughputMibps(
      (long) options.getDiskProvisionedThroughputMibps());
}
```
The cast to (long) is likely incorrect and may cause a compilation error. Based on the Dataflow API definition (and the Python SDK changes in this PR using INT32), these fields are typically 32-bit integers. Since options.getDiskProvisionedIops() returns a Long, you should convert it to an Integer (e.g., using .intValue()) before passing it to the workerPool setter, assuming the generated client library expects an Integer for these fields. Alternatively, consider changing the option type to Integer in DataflowPipelineWorkerPoolOptions for consistency with other fields like diskSizeGb.
Suggested change:
```diff
-if (options.getDiskProvisionedIops() != null && options.getDiskProvisionedIops() > 0) {
-  workerPool.setDiskProvisionedIops((long) options.getDiskProvisionedIops());
-}
-if (options.getDiskProvisionedThroughputMibps() != null && options.getDiskProvisionedThroughputMibps() > 0) {
-  workerPool.setDiskProvisionedThroughputMibps(
-      (long) options.getDiskProvisionedThroughputMibps());
-}
+if (options.getDiskProvisionedIops() != null && options.getDiskProvisionedIops() > 0) {
+  workerPool.setDiskProvisionedIops(options.getDiskProvisionedIops().intValue());
+}
+if (options.getDiskProvisionedThroughputMibps() != null && options.getDiskProvisionedThroughputMibps() > 0) {
+  workerPool.setDiskProvisionedThroughputMibps(
+      options.getDiskProvisionedThroughputMibps().intValue());
+}
```
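As an aside: if the option type stays Long, a small null-safe conversion helper (hypothetical, not part of this PR) would make the narrowing explicit and guard against overflow. Whether the generated WorkerPool setters actually take Integer or Long should be confirmed against the client library version in use:

```java
// Hypothetical helper, assuming the generated setter expects an Integer:
// passes null through unchanged and fails loudly if the value does not
// fit in 32 bits (Math.toIntExact throws ArithmeticException on overflow).
private static Integer toIntegerOrNull(Long value) {
  if (value == null) {
    return null;
  }
  return Math.toIntExact(value);
}
```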
```java
assertEquals(Integer.valueOf(1000), options.getDiskProvisionedIops());
assertEquals(Integer.valueOf(100), options.getDiskProvisionedThroughputMibps());
```
These assertions will fail because options.getDiskProvisionedIops() returns a Long, while Integer.valueOf(1000) creates an Integer. In Java, an Integer object is never equal to a Long object via .equals(). You should use Long.valueOf() or a long literal (e.g., 1000L) to match the return type of the getter.
Suggested change:
```diff
-assertEquals(Integer.valueOf(1000), options.getDiskProvisionedIops());
-assertEquals(Integer.valueOf(100), options.getDiskProvisionedThroughputMibps());
+assertEquals(Long.valueOf(1000), options.getDiskProvisionedIops());
+assertEquals(Long.valueOf(100), options.getDiskProvisionedThroughputMibps());
```
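To see why the original assertions can never pass, here is a tiny standalone demonstration of the boxed-type mismatch (illustrative only, not part of the PR):

```java
public class BoxedEqualityDemo {
  public static void main(String[] args) {
    Integer boxedInt = Integer.valueOf(1000);
    Long boxedLong = Long.valueOf(1000);
    // Integer.equals() checks the argument's runtime type first, so an
    // Integer is never equal to a Long, even for the same numeric value.
    System.out.println(boxedInt.equals(boxedLong)); // false
    System.out.println(boxedLong.equals(Long.valueOf(1000))); // true
  }
}
```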
…g in DataflowPipelineOptions
…throughput in DataflowPipelineTranslator
This pull request introduces two new pipeline options for the Google Cloud Dataflow runner across the Java, Python, and Go SDKs. These options allow users to specify provisioned performance for worker VM root disks:
disk_provisioned_iops: Sets the provisioned IOPS for the root disk. If unspecified, the service chooses a default.
disk_provisioned_throughput_mibps: Sets the provisioned throughput in MiB/s for the root disk.
Added getters and setters to DataflowPipelineWorkerPoolOptions and updated DataflowPipelineTranslator to pass these values to the worker pool configuration.
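For illustration, here is a minimal sketch of how a user might set these options when constructing a pipeline. The flag names are assumed to follow Beam's usual convention of deriving them from the getter names; confirm against the merged option definitions:

```java
import org.apache.beam.runners.dataflow.options.DataflowPipelineWorkerPoolOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class DiskOptionsExample {
  public static void main(String[] args) {
    // Parse the new flags into the worker pool options interface.
    DataflowPipelineWorkerPoolOptions options =
        PipelineOptionsFactory.fromArgs(
                "--diskProvisionedIops=1000", "--diskProvisionedThroughputMibps=100")
            .as(DataflowPipelineWorkerPoolOptions.class);
    System.out.println(options.getDiskProvisionedIops()); // 1000
    System.out.println(options.getDiskProvisionedThroughputMibps()); // 100
  }
}
```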
Will follow up with a PR for the Go and Python SDKs.
Tests have been added/updated to verify that these options are correctly parsed and translated.
More context:
We need to add these pipeline options before submitting this CL: https://critique.corp.google.com/cl/858930428
Issue: #37374
Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:
- Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
- Update CHANGES.md with noteworthy changes.

See the Contributor Guide for more tips on how to make the review process smoother.
To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md
GitHub Actions Tests Status (on master branch)
See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.