
To support LTX2 ComfyUI format#972

Merged
jingyu-ml merged 5 commits into main from jingyux/ltx-2-updates
Mar 6, 2026

Conversation

@jingyu-ml
Contributor

@jingyu-ml jingyu-ml commented Mar 4, 2026

What does this PR do?

Type of change: Bug fix

  • Added a flag merged_base_safetensor_path to the example code so that users can export a ComfyUI-style checkpoint.

Usage

python quantize.py \
    --model ltx-2 --format fp4 --batch-size 1 --calib-size 32 --n-steps 40 \
    --extra-param checkpoint_path=./ltx-2-19b-dev-fp8.safetensors \
    --extra-param distilled_lora_path=./ltx-2-19b-distilled-lora-384.safetensors \
    --extra-param spatial_upsampler_path=./ltx-2-spatial-upscaler-x2-1.0.safetensors \
    --extra-param gemma_root=./gemma-3-12b-it-qat-q4_0-unquantized \
    --extra-param fp8transformer=true \
    --quantized-torch-ckpt-save-path ./ltx-2-transformer.pt \
    --hf-ckpt-dir ./LTX2-NVFP4/ \
    --extra-param merged_base_safetensor_path=./ltx-2-19b-dev-fp8.safetensors
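The command above passes repeated `--extra-param key=value` flags. As a minimal sketch of how such flags could be collected into a dict (the real quantize.py parser may differ; this is only an illustration):

```python
import argparse

# Hypothetical sketch: collect repeated --extra-param key=value flags into a
# dict. The actual argument handling in quantize.py may differ.
parser = argparse.ArgumentParser()
parser.add_argument("--extra-param", action="append", default=[])

args = parser.parse_args([
    "--extra-param", "fp8transformer=true",
    "--extra-param", "merged_base_safetensor_path=./ltx-2-19b-dev-fp8.safetensors",
])

# Split each entry on the first "=" so values may themselves contain "=".
extra_params = dict(p.split("=", 1) for p in args.extra_param)
```

Note that `action="append"` is what allows the flag to be repeated; a single `--extra-param` accepting a list would also work but changes the CLI shape shown above.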

Testing

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, using torch.load(..., weights_only=True), avoiding pickle, etc.).

  • Is this change backward compatible?: ✅ / ❌ / N/A
  • If you copied code from any other source, did you follow IP policy in CONTRIBUTING.md?: ✅ / ❌ / N/A
  • Did you write any new necessary tests?: ✅ / ❌ / N/A
  • Did you update Changelog?: ✅ / ❌ / N/A

Additional Information

Summary by CodeRabbit

  • Documentation

    • Added command examples and descriptions for new quantization options (--hf-ckpt-dir and merged_base_safetensor_path) and clarified outputs for LTX-2 FP4 quantization.
  • Improvements

    • Quantization export now conditionally includes model-specific export behavior for LTX-2 models.
    • Broadened model-filtering patterns to detect additional model blocks during processing.

Signed-off-by: Jingyu Xin <jingyux@nvidia.com>
@jingyu-ml jingyu-ml requested a review from a team as a code owner March 4, 2026 18:34
@jingyu-ml jingyu-ml requested a review from kevalmorabia97 March 4, 2026 18:34
@jingyu-ml jingyu-ml self-assigned this Mar 4, 2026
@coderabbitai
Contributor

coderabbitai Bot commented Mar 4, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 47d1d38f-f41f-4ae2-99fe-273b6bcd9988

📥 Commits

Reviewing files that changed from the base of the PR and between a1af8ea and 58b60c9.

📒 Files selected for processing (2)
  • examples/diffusers/README.md
  • examples/diffusers/quantization/utils.py
🚧 Files skipped from review as they are similar to previous changes (2)
  • examples/diffusers/README.md
  • examples/diffusers/quantization/utils.py

📝 Walkthrough

Walkthrough

Adds documentation and runtime handling for a new merged_base_safetensor_path export option: README shows CLI usage, quantize.py accepts model_config and conditionally passes the safetensor path when exporting HF checkpoints for LTX2 models, and pipeline_manager.py removes the param from extraParams. A regex in utils.py was broadened.

Changes

Cohort / File(s): Summary

  • Documentation (examples/diffusers/README.md): Added --hf-ckpt-dir and --extra-param merged_base_safetensor_path to the LTX-2 FP4 quantization example command and expanded the header/description.
  • Export handling (examples/diffusers/quantization/quantize.py): ExportManager.export_hf_ckpt signature now accepts an optional model_config; when model_config.model_type == ModelType.LTX2, it adds merged_base_safetensor_path to the export kwargs passed to export_hf_checkpoint.
  • Pipeline params cleanup (examples/diffusers/quantization/pipeline_manager.py): _create_ltx2_pipeline now pops merged_base_safetensor_path from extraParams (discarding it if present).
  • Model filtering / regex (examples/diffusers/quantization/utils.py): Expanded the WAN-Video block-name regex to match additional block indices (blocks 0, 1, and 2 in addition to 38 and 39).
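The pop-and-discard pattern described for pipeline_manager.py can be sketched as follows; the function and key names here are illustrative, not the actual implementation:

```python
# Illustrative sketch of the pop-and-discard pattern: an export-only key is
# removed from the extra params before they are forwarded as pipeline
# constructor kwargs. Names are hypothetical.
def build_pipeline_kwargs(extra_params: dict) -> dict:
    params = dict(extra_params)  # avoid mutating the caller's dict
    # merged_base_safetensor_path is consumed only at export time, so it
    # must not reach the pipeline constructor as an unexpected kwarg.
    params.pop("merged_base_safetensor_path", None)
    return params

kwargs = build_pipeline_kwargs({
    "checkpoint_path": "./ltx-2-19b-dev-fp8.safetensors",
    "merged_base_safetensor_path": "./ltx-2-19b-dev-fp8.safetensors",
})
```

The second argument to `pop` makes the removal a no-op when the key is absent, which matches the "discarding it if present" behavior described above.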

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks | ✅ 3 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 60.00%, which is insufficient; the required threshold is 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3 passed)

  • Description Check: ✅ Passed. Check skipped since CodeRabbit's high-level summary is enabled.
  • Title check: ✅ Passed. The title clearly references LTX2 ComfyUI format support, which aligns with the PR's main objective of adding merged_base_safetensor_path to enable ComfyUI-style checkpoint exports.
  • Security Anti-Patterns: ✅ Passed. No security anti-patterns detected. Changes involve safe parameter handling and filter updates without unsafe deserialization, remote code execution, or credential exposure.


@jingyu-ml jingyu-ml requested a review from ynankani March 4, 2026 18:34
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
examples/diffusers/quantization/utils.py (1)

87-87: Tighten WAN block regex to avoid unintended matches.

On Line 87, the pattern duplicates blocks.39 and uses unescaped . in blocks.0\. / blocks.1\. / blocks.2\., which can match unintended names. This may disable quantizers for extra modules.

Proposed regex cleanup
-        r".*(patch_embedding|condition_embedder|proj_out|blocks.0\.|blocks.1\.|blocks.2\.|blocks.39|blocks.38|blocks.39).*"
+        r".*(patch_embedding|condition_embedder|proj_out|blocks\.(0|1|2|38|39)\.).*"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/diffusers/quantization/utils.py` at line 87, The regex pattern
string
r".*(patch_embedding|condition_embedder|proj_out|blocks.0\.|blocks.1\.|blocks.2\.|blocks.39|blocks.38|blocks.39).*"
is too loose and duplicates blocks.39; update the pattern in utils.py to escape
the unescaped dots in the "blocks.N" parts and remove the duplicate entry, and
consolidate the block indices (e.g., use a grouped alternative like
blocks\.(?:0|1|2|38|39)) so only the intended module names are matched and
quantizers aren't accidentally disabled.
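The tightened pattern can be checked directly; a minimal sketch using made-up module names (the real module names in the LTX-2/WAN transformers may differ):

```python
import re

# The consolidated pattern proposed in the review above: dots escaped,
# duplicate blocks.39 removed, block indices grouped, and a trailing
# escaped dot so index prefixes cannot leak.
pattern = re.compile(
    r".*(patch_embedding|condition_embedder|proj_out|blocks\.(0|1|2|38|39)\.).*"
)

# Intended matches (module names here are illustrative):
assert pattern.match("model.blocks.0.attn1.to_q")
assert pattern.match("model.blocks.39.ffn.net")
assert pattern.match("model.patch_embedding.proj")

# Unintended names no longer match: "blocks.10" is not caught by the
# "blocks\.1\." alternative because of the trailing escaped dot, and an
# unescaped "." can no longer act as a wildcard.
assert not pattern.match("model.blocks.10.attn1.to_q")
assert not pattern.match("model.blocks_0_attn")
```

The trailing `\.` after the index group is what distinguishes `blocks.1.` from `blocks.10.`; without it, every two-digit index starting with 0, 1, 2, or 3 would be swept in.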

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 4384aed0-4ef8-468a-88ac-b7d010bfa15b

📥 Commits

Reviewing files that changed from the base of the PR and between e8f9687 and a1af8ea.

📒 Files selected for processing (4)
  • examples/diffusers/README.md
  • examples/diffusers/quantization/pipeline_manager.py
  • examples/diffusers/quantization/quantize.py
  • examples/diffusers/quantization/utils.py

Comment on lines +313 to +318
        if model_config and model_config.model_type == ModelType.LTX2:
            merged_path = model_config.extra_params.get("merged_base_safetensor_path")
            if merged_path:
                self.logger.info(f"Merging base safetensors from {merged_path} for LTX2 export")
                kwargs["merged_base_safetensor_path"] = merged_path
        export_hf_checkpoint(pipe, export_dir=self.config.hf_ckpt_dir, **kwargs)
Contributor


⚠️ Potential issue | 🟡 Minor

Validate merged_base_safetensor_path before forwarding to export.

Current flow forwards raw user-provided value; a bad path will fail deeper in export. Adding a local existence check gives a clearer error and faster fail.

Proposed validation guard
         if model_config and model_config.model_type == ModelType.LTX2:
             merged_path = model_config.extra_params.get("merged_base_safetensor_path")
             if merged_path:
-                self.logger.info(f"Merging base safetensors from {merged_path} for LTX2 export")
-                kwargs["merged_base_safetensor_path"] = merged_path
+                merged_path = Path(merged_path)
+                if not merged_path.is_file():
+                    raise FileNotFoundError(
+                        f"merged_base_safetensor_path does not exist or is not a file: {merged_path}"
+                    )
+                self.logger.info(f"Merging base safetensors from {merged_path} for LTX2 export")
+                kwargs["merged_base_safetensor_path"] = str(merged_path)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@examples/diffusers/quantization/quantize.py` around lines 313 - 318, The code
forwards a raw merged_base_safetensor_path to export_hf_checkpoint which can
fail later; before setting kwargs and calling export_hf_checkpoint, validate
that merged_path exists and is a file (use Path(merged_path).exists() /
is_file() or os.path.exists), and if invalid log a clear error via
self.logger.error and raise a ValueError (or return/exit) so
export_hf_checkpoint is not called with a bad path; update the block around
model_config.model_type == ModelType.LTX2 that assigns
kwargs["merged_base_safetensor_path"] accordingly.
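The fail-fast guard suggested above can be exercised in isolation; a self-contained sketch with the export call stubbed out (the helper name is hypothetical):

```python
from pathlib import Path
import tempfile

# Hypothetical standalone version of the validation guard: check the
# safetensors path before it is forwarded to the export step, so a bad
# path fails here with a clear message instead of deep inside export.
def resolve_merged_path(merged_path: str) -> str:
    path = Path(merged_path)
    if not path.is_file():
        raise FileNotFoundError(
            f"merged_base_safetensor_path does not exist or is not a file: {path}"
        )
    return str(path)

# A real file passes through unchanged.
with tempfile.NamedTemporaryFile(suffix=".safetensors") as f:
    assert resolve_merged_path(f.name) == f.name

# A missing file is rejected up front.
raised = False
try:
    resolve_merged_path("./does-not-exist.safetensors")
except FileNotFoundError:
    raised = True
assert raised
```

Raising before `export_hf_checkpoint` is called keeps the error close to the user-supplied `--extra-param` value, which is the point of the review suggestion.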

@kevalmorabia97 kevalmorabia97 requested review from Edwardf0t1 and removed request for kevalmorabia97 March 4, 2026 18:53
@codecov

codecov Bot commented Mar 4, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 72.10%. Comparing base (42482b1) to head (790aac0).
⚠️ Report is 2 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #972      +/-   ##
==========================================
- Coverage   72.13%   72.10%   -0.03%     
==========================================
  Files         209      209              
  Lines       23631    23631              
==========================================
- Hits        17046    17040       -6     
- Misses       6585     6591       +6     

☔ View full report in Codecov by Sentry.

Contributor

Copilot AI left a comment


Pull request overview

Adds support for exporting an LTX-2 quantized checkpoint in a ComfyUI-compatible single-file safetensors format by plumbing a new merged_base_safetensor_path option through the diffusers quantization example workflow.

Changes:

  • Extend HF checkpoint export in quantize.py to pass LTX2-specific export kwargs (merged_base_safetensor_path).
  • Ignore merged_base_safetensor_path when constructing the LTX2 pipeline so it doesn’t get forwarded as a pipeline kwarg.
  • Update WAN model quantization filter pattern and document the new export parameters in the diffusers README.

Reviewed changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 4 comments.

  • examples/diffusers/quantization/utils.py: Updates the WAN filter regex used to disable/select quantizers for specific module name patterns.
  • examples/diffusers/quantization/quantize.py: Adds LTX2-specific HF export behavior to enable ComfyUI-style merged safetensors export.
  • examples/diffusers/quantization/pipeline_manager.py: Drops the new export-only extra param from LTX2 pipeline construction kwargs.
  • examples/diffusers/README.md: Documents the LTX-2 HF export directory and merged-base safetensors flag in the example command.


Comment thread examples/diffusers/quantization/quantize.py
Comment thread examples/diffusers/README.md Outdated
Comment on lines +119 to +131
-    --quantized-torch-ckpt-save-path ./ltx-2-transformer.pt
+    --quantized-torch-ckpt-save-path ./ltx-2-transformer.pt \
+    --hf-ckpt-dir ./LTX2-NVFP4/ \
+    --extra-param merged_base_safetensor_path=./ltx-2-19b-dev-fp8.safetensors

Copilot AI Mar 5, 2026


This section header says “FP4 (torch checkpoint export)”, but the example now also documents HuggingFace export (--hf-ckpt-dir) and ComfyUI-style merging (merged_base_safetensor_path). Consider updating the heading/nearby text to reflect that this example covers HF/ComfyUI checkpoint export as well, so readers aren’t confused about what gets produced.

Comment thread examples/diffusers/quantization/utils.py
Comment thread examples/diffusers/quantization/quantize.py
Contributor

@ynankani ynankani left a comment


LGTM

Contributor

@Edwardf0t1 Edwardf0t1 left a comment


LGTM. Please check copilot's reviews as well.

Signed-off-by: Jingyu Xin <jingyux@nvidia.com>
@jingyu-ml
Contributor Author

LGTM. Please check copilot's reviews as well.

Thank you. I’ve addressed it, and I found their suggestion genuinely useful.

@jingyu-ml jingyu-ml enabled auto-merge (squash) March 6, 2026 00:56
@jingyu-ml jingyu-ml merged commit 37d3f10 into main Mar 6, 2026
38 checks passed
@jingyu-ml jingyu-ml deleted the jingyux/ltx-2-updates branch March 6, 2026 05:06
