
Fix vLLM worker #38008

Merged

damccorm merged 1 commit into apache:master from aIbrahiim:fix-30513-postcommit-python-vLLM on Apr 2, 2026

Conversation

@aIbrahiim
Contributor

aIbrahiim commented Mar 31, 2026

Fixes: #30513
Successful run: https://github.com/apache/beam/actions/runs/23814494318

Fixes Dataflow postcommit vllmTests failures caused by vLLM exiting during engine startup on NVIDIA T4 workers. The failure was a CUDA OOM during vLLM V1 engine initialization. The example now passes memory-aware vLLM server flags via the existing vllm_server_kwargs pattern.

After investigation, the task :sdks:python:test-suites:dataflow:py312:vllmTests failed and the Dataflow job logs showed:

Exception: Failed to start vLLM server, polling process exited with code 1.

Starting service with ['/opt/apache/beam-venv/beam-venv-worker-sdk-0-0/bin/python' '-m'
'vllm.entrypoints.openai.api_server' '--model' 'facebook/opt-125m' '--port' '…']

torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 100.00 MiB.
GPU 0 has a total capacity of 14.58 GiB of which 33.56 MiB is free.
… 13.62 GiB is allocated by PyTorch …

vLLM then raised:

RuntimeError: CUDA out of memory occurred when warming up sampler with 256 dummy requests.
Please try lowering max_num_seqs or gpu_memory_utilization when initializing the engine.

This PR:

  • Uses vllm_server_kwargs (the same pattern as other vLLM examples, e.g. vllm_gemma_batch.py) to pass --max-num-seqs and --gpu-memory-utilization with conservative defaults suited to ~16 GiB GPUs.
  • Adds --vllm_max_num_seqs and --vllm_gpu_memory_utilization so larger GPUs can override those defaults; a sketch of the wiring follows this list.
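
For reviewers, a minimal sketch of how those options can flow from the pipeline arguments into the model handler. The flag names, the vllm_server_kwargs parameter, and VLLMCompletionsModelHandler are taken from this PR; the default values, helper names, and exact key format below are illustrative assumptions, not the merged code.

```python
import argparse

from apache_beam.ml.inference.vllm_inference import VLLMCompletionsModelHandler


def parse_known_args(argv=None):
  # New pipeline options added by this PR. The defaults here are illustrative
  # stand-ins for the "conservative defaults suited to ~16 GiB GPUs" described
  # above, not the exact merged values.
  parser = argparse.ArgumentParser()
  parser.add_argument('--model', default='facebook/opt-125m')
  parser.add_argument('--vllm_max_num_seqs', type=int, default=32)
  parser.add_argument('--vllm_gpu_memory_utilization', type=float, default=0.7)
  return parser.parse_known_args(argv)


def build_model_handler(known_args):
  # Assumption: vllm_server_kwargs keys are vLLM server CLI flag names and are
  # passed through to the server command as --<key> <value>, the same pattern
  # used by vllm_gemma_batch.py.
  vllm_server_kwargs = {
      'max-num-seqs': str(known_args.vllm_max_num_seqs),
      'gpu-memory-utilization': str(known_args.vllm_gpu_memory_utilization),
  }
  return VLLMCompletionsModelHandler(
      model_name=known_args.model,
      vllm_server_kwargs=vllm_server_kwargs)
```

On a larger GPU the defaults can be overridden at submission time, for example by adding --vllm_gpu_memory_utilization=0.9 --vllm_max_num_seqs=256 to the pipeline arguments.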


Thank you for your contribution! Follow this checklist to help us incorporate your contribution quickly and easily:

  • Mention the appropriate issue in your description (for example: addresses #123), if applicable. This will automatically add a link to the pull request in the issue. If you would like the issue to automatically close on merging the pull request, comment fixes #<ISSUE NUMBER> instead.
  • Update CHANGES.md with noteworthy changes.
  • If this contribution is large, please file an Apache Individual Contributor License Agreement.

See the Contributor Guide for more tips on how to make review process smoother.

To check the build health, please visit https://github.com/apache/beam/blob/master/.test-infra/BUILD_STATUS.md

GitHub Actions Tests Status (on master branch)

Build python source distribution and wheels
Python tests
Java tests
Go tests

See CI.md for more information about GitHub Actions CI or the workflows README to see a list of phrases to trigger workflows.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical issue where the vLLM worker in the text completion example could fail to initialize on GPUs with approximately 16GB of memory due to CUDA out-of-memory errors. By introducing configurable parameters for max-num-seqs and gpu-memory-utilization, the changes allow users to fine-tune vLLM's memory footprint, ensuring the example runs reliably on a wider range of hardware. The update includes both the implementation of these configuration options and clear documentation to guide users in optimizing their vLLM deployments.

Highlights

  • Memory Optimization for vLLM: Introduced new command-line arguments --vllm_max_num_seqs and --vllm_gpu_memory_utilization to the vllm_text_completion example to prevent CUDA out-of-memory errors on GPUs with limited memory (e.g., 16GiB NVIDIA T4).
  • Enhanced Example Configuration: The vllm_text_completion.py example now allows users to explicitly configure vLLM server parameters, providing greater control over memory usage during engine startup and inference.
  • Documentation Update: Updated the README.md for the vLLM example to explain the purpose and usage of the new memory configuration parameters, guiding users on how to avoid common OOM issues.
  • Model Handler Parameterization: Modified VLLMCompletionsModelHandler and VLLMChatModelHandler instantiation in the example to accept and apply the new vllm_server_kwargs.


aIbrahiim marked this pull request as ready for review on March 31, 2026, 21:46
@github-actions
Contributor

Checks are failing. Will not request review until checks are succeeding. If you'd like to override that behavior, comment assign set of reviewers

@aIbrahiim
Contributor Author

@damccorm @Abacn

aIbrahiim force-pushed the fix-30513-postcommit-python-vLLM branch from 4019df3 to c43fcb7 on March 31, 2026, 21:56
Amar3tto requested review from Abacn and damccorm on April 1, 2026, 20:17
@Amar3tto
Collaborator

Amar3tto commented Apr 1, 2026

PreCommit checks are unrelated

Contributor

damccorm left a comment

LGTM, thank you!

damccorm merged commit f7bdf51 into apache:master on Apr 2, 2026
98 of 113 checks passed
@jrmccluskey
Contributor

I know a lot of workflows are in various states of failure or flakiness, but the linting workflow is not one of them; please don't ignore the signal from it.

Running pylint...
************* Module apache_beam.examples.inference.vllm_text_completion
apache_beam/examples/inference/vllm_text_completion.py:41:0: C0301: Line too long (81/80) (line-too-long)
apache_beam/examples/inference/vllm_text_completion.py:42:0: C0301: Line too long (81/80) (line-too-long)
apache_beam/examples/inference/vllm_text_completion.py:144:0: C0301: Line too long (91/80) (line-too-long)
************* Module apache_beam.ml.inference.vllm_inference
apache_beam/ml/inference/vllm_inference.py:204:0: C0301: Line too long (81/80) (line-too-long)
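
(C0301 is pylint's line-length check; the Beam Python SDK enforces an 80-column limit, and the lines above exceed it by 1 to 11 characters. A hypothetical illustration of the kind of wrap that clears it; the actual flagged lines are not reproduced here.)

```python
import argparse

# Over the 80-column limit (C0301): the whole call sits on one long line.
bad = argparse.ArgumentParser()
bad.add_argument('--vllm_gpu_memory_utilization', type=float, default=0.7, help='Fraction of GPU memory the vLLM server may use.')

# Wrapped to stay within 80 columns:
good = argparse.ArgumentParser()
good.add_argument(
    '--vllm_gpu_memory_utilization',
    type=float,
    default=0.7,
    help='Fraction of GPU memory the vLLM server may use.')
```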

@damccorm
Contributor

damccorm commented Apr 2, 2026

I know a lot of workflows are in various states of failure or flakiness, but the linting workflow is not one of them; please don't ignore the signal from it.

Actually it was in a state of flakiness when this ran - the errors were buried behind the adk issues. I happened to miss the vllm issues when I took a look.

https://github.com/apache/beam/actions/runs/23821313391/job/69434006422

Regardless - #38051 should fix



Development

Successfully merging this pull request may close these issues.

The PostCommit Python job is flaky

4 participants