
[bugfix]: fix SP deadlock in negative prompt encoding during training#1178

Merged
mergify[bot] merged 6 commits into hao-ai-lab:main from FoundationResearch:bugfix/train/sp
Apr 28, 2026
Conversation

alexzms (Collaborator) commented Mar 22, 2026

Summary

  • Fix NCCL deadlock when training with sp_size > 1
  • Root cause: ensure_negative_conditioning() created an inference pipeline on rank 0 only. That pipeline's post_init() calls warmup_sequence_parallel_communication(), which performs an all-to-all on the SP group; the other ranks never entered the collective, so training hung
  • Fix: every rank now encodes the negative prompt independently using a lightweight tokenizer + T5 encoder, eliminating the rank-0 guard and all broadcast logic (small overhead)
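The root-cause bullet above can be modeled as a toy simulation (plain Python, no NCCL; the rank behavior and op names are illustrative, not fastvideo's actual code):

```python
# Toy model of the hang: a collective completes only if every rank in the
# SP group issues it. Under the old scheme, building the inference pipeline
# on rank 0 triggered post_init() -> warmup_sequence_parallel_communication(),
# so rank 0 issued an all-to-all that the other ranks never joined.
def issued_collectives(rank: int, per_rank_encoding: bool) -> list[str]:
    if per_rank_encoding:
        return []  # new scheme: tokenizer + T5 encoder only, no collectives
    return ["all_to_all"] if rank == 0 else []  # old scheme: rank-0 pipeline

def hangs(sp_group: range, per_rank_encoding: bool) -> bool:
    # Ranks deadlock when they disagree on which collectives to run.
    schedules = {tuple(issued_collectives(r, per_rank_encoding)) for r in sp_group}
    return len(schedules) > 1

print(hangs(range(4), per_rank_encoding=False))  # True  (old path deadlocks)
print(hangs(range(4), per_rank_encoding=True))   # False (fixed path)
```

With per-rank encoding every rank runs the same (empty) collective schedule, which is why the fix removes the hang rather than working around it.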

Tests

  • Tested with sp_size=4 on 4 and 8 GPUs. No hangs, no quality degradation
  • Pre-commit passes

gemini-code-assist (Contributor) commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical deadlock issue encountered during distributed training, specifically when using sequence parallelism. The previous implementation of negative prompt encoding inadvertently created a bottleneck by centralizing the process on a single rank, causing other ranks to wait indefinitely for a collective operation that never fully materialized. The updated approach decentralizes this encoding, allowing each rank to process its negative prompt independently, thereby resolving the deadlock and ensuring smooth distributed training.

Highlights

  • Deadlock Fix: Resolved an NCCL deadlock that occurred during training with sp_size > 1 when encoding negative prompts.
  • Root Cause Identified: The deadlock was caused by ensure_negative_conditioning() creating an inference pipeline only on rank 0, while warmup_sequence_parallel_communication() (called by post_init()) expected all ranks in the Sequence Parallel (SP) group to participate in an all-to-all collective, leading to a hang for non-rank 0 processes.
  • Solution Implemented: Modified the negative prompt encoding mechanism so that every rank independently encodes the negative prompt using a lightweight tokenizer and T5 encoder. This eliminates the need for a rank-0 guard and broadcast logic, preventing the collective communication deadlock.


gemini-code-assist Bot left a comment

Code Review

This pull request effectively resolves a deadlock issue in distributed training with sequence parallelism by refactoring the negative prompt encoding. The previous approach of encoding on a single rank and broadcasting the results is replaced by independent encoding on each rank. This change simplifies the code and eliminates complex collective communication. The implementation is solid, but I have one suggestion to improve robustness by using the configured precision for the text encoder instead of a hardcoded value.

Comment thread fastvideo/train/models/wan/wan.py Outdated
@Eigensystem Eigensystem added the ready PR is ready to merge label Mar 23, 2026
@mergify mergify Bot added type: bugfix Bug fix scope: training Training pipeline, methods, configs labels Mar 30, 2026
mergify Bot commented Mar 30, 2026

Merge Protections

Your pull request matches the following merge protections and will not be merged until they are valid.

🟢 PR merge requirements

Wonderful, this rule succeeded.
  • #approved-reviews-by>=1
  • check-success=fastcheck-passed
  • check-success=full-suite-passed
  • check-success~=pre-commit
  • title~=(?i)^\[(feat|feature|bugfix|fix|refactor|perf|ci|doc|docs|misc|chore|kernel|new.?model)\]

alexzms (Collaborator, Author) commented Apr 5, 2026

/merge

@github-actions github-actions Bot added ready PR is ready to merge and removed ready PR is ready to merge labels Apr 5, 2026
Comment thread fastvideo/train/models/wan/wan.py Outdated
alexzms added a commit to FoundationResearch/FastVideo that referenced this pull request Apr 26, 2026
Addresses review feedback on hao-ai-lab#1178: the previous fix loaded the negative
prompt encoder via transformers' T5EncoderModel, but Wan's text_encoder
is UMT5 (per-layer relative position bias, not shared). Loading UMT5
weights into a T5 architecture silently produces wrong embeddings for
the negative prompt and diverges the training-time CFG from inference.

Switch to TextEncoderLoader so the encoder class is resolved from
pipeline_config (UMT5EncoderModel for Wan) and the postprocess_text
function is reused instead of imported by name. This keeps the
fix to the original SP deadlock (every rank encodes independently;
no full WanPipeline construction, no NCCL collectives) while staying
inside the existing prompt-encoding abstraction.

text_encoder_cpu_offload is forced off for this short-lived load to
avoid initializing an FSDP device mesh, which would re-introduce
collectives.
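The last paragraph's short-lived override can be sketched as a context manager (the config dict here is a stand-in for illustration, not fastvideo's actual config object):

```python
from contextlib import contextmanager

@contextmanager
def force_no_cpu_offload(config: dict):
    """Temporarily disable text_encoder_cpu_offload for a short-lived encoder
    load, so no FSDP device mesh (and hence no collective) is initialized."""
    saved = config.get("text_encoder_cpu_offload", False)
    config["text_encoder_cpu_offload"] = False
    try:
        yield config
    finally:
        config["text_encoder_cpu_offload"] = saved  # restore the user's setting

cfg = {"text_encoder_cpu_offload": True}
with force_no_cpu_offload(cfg):
    assert cfg["text_encoder_cpu_offload"] is False  # load the encoder here
assert cfg["text_encoder_cpu_offload"] is True       # restored afterwards
```

Restoring the flag in `finally` keeps the override scoped to the one encoder load, so the rest of training still honors the user's offload setting.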
alexzms (Collaborator, Author) commented Apr 26, 2026

@SolitaryThinker thanks — both points were correct, sorry for the lazy first pass. Pushed a follow-up commit (61f6b39) that addresses them:

  • UMT5 vs T5. You were right that loading the Wan text encoder via transformers.T5EncoderModel is wrong — Wan uses UMT5 (per-layer relative position bias, not shared), so T5EncoderModel.from_pretrained was silently giving us bad embeddings for the negative prompt. The repo's own tests/encoders/test_t5_encoder.py even uses UMT5EncoderModel as the reference for the Wan checkpoint, which I should have caught.
  • Reusing the prompt-encoding stage. Switched to TextEncoderLoader so the encoder class is resolved from pipeline_config.text_encoder_configs (UMT5 for Wan) and the postprocess function comes from pipeline_config.postprocess_text_funcs[0] instead of being imported by name. Tokenizer is still loaded with AutoTokenizer.from_pretrained since TextEncoderLoader doesn't handle tokenizers.

The original deadlock fix is preserved: every rank loads its own encoder and encodes the negative prompt independently, no full WanPipeline construction, no NCCL collectives. I also force text_encoder_cpu_offload=False for this short-lived load so it doesn't initialize an FSDP device mesh (which would re-introduce collectives).
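The UMT5-vs-T5 point above boils down to resolving the encoder class from config instead of hardcoding it. A minimal sketch (the registry and config shape are made up for this example; fastvideo's TextEncoderLoader resolves the real class from pipeline_config.text_encoder_configs):

```python
ENCODER_REGISTRY = {"t5": "T5EncoderModel", "umt5": "UMT5EncoderModel"}

def resolve_encoder_class(text_encoder_configs: list[dict]) -> str:
    # Hardcoding "T5EncoderModel" here was the bug: Wan's checkpoint is UMT5
    # (per-layer relative position bias, not shared), and loading it into a
    # plain T5 architecture succeeds silently but yields wrong embeddings.
    arch = text_encoder_configs[0]["architecture"]
    return ENCODER_REGISTRY[arch]

print(resolve_encoder_class([{"architecture": "umt5"}]))  # UMT5EncoderModel
```

Driving the class choice from config means any future single-encoder model picks up the right architecture without touching this code path.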

pre-commit run passes locally (yapf / ruff / mypy / codespell). Mind taking another look when you have a minute?

alexzms added 5 commits April 26, 2026 06:42
Per-rank negative prompt encoding was duplicated inline in WanModel and
will be needed for any future single-encoder model. Move it to
fastvideo/train/utils/negative_prompt.py and have WanModel call into it.
Hunyuan keeps its bespoke dual-encoder path for now.

Also moves the in-method imports introduced by the deadlock fix out to
the top of the helper module.
@mergify mergify Bot merged commit 9a8bbe1 into hao-ai-lab:main Apr 28, 2026
15 of 16 checks passed
alexzms added a commit to aryan5v/FastVideo that referenced this pull request Apr 29, 2026
…ncoding

The ensure_negative_conditioning rewrite in 88958db (this PR's first
commit, Apr 21) predates encode_negative_prompt from hao-ai-lab#1178. Now that
hao-ai-lab#1178 is merged, revert the in-method rank-0 + broadcast path to call
the helper instead.

This restores hao-ai-lab#1178's per-rank text-encoder load — the rank-0 +
broadcast pattern removed here is the exact path hao-ai-lab#1178's docstring
identifies as deadlocking on FSDP collectives during text-encoder load.

The _prompt_pipeline_cls ClassVar hook is removed: the helper reads
pipeline_config.text_encoder_configs directly, and LongCat's text
encoder works through its own pipeline_config without any further
override.