
Commit b4fd384
Fix batch size calculation in Wan VACE pipeline for precomputed embeddings

Derives `batch_size` from `len(prompt_embeds)` after prompt encoding rather than from `len(prompt)` beforehand. This resolves a `TypeError` crash when running inference with only precomputed embeddings (`prompt=None` and `prompt_embeds` provided).
Parent: ad6391a

1 file changed: 1 addition, 1 deletion
src/maxdiffusion/pipelines/wan/wan_vace_pipeline_2_1.py
@@ -562,7 +562,6 @@ def __call__(
     if prompt is not None and isinstance(prompt, str):
       prompt = [prompt]

-    batch_size = len(prompt)
     if num_videos_per_prompt != 1:
       raise ValueError("Generating multiple videos per prompt is not yet supported. This may be supported in the future.")

@@ -573,6 +572,7 @@ def __call__(
         prompt_embeds=prompt_embeds,
         negative_prompt_embeds=negative_prompt_embeds,
     )
+    batch_size = len(prompt_embeds)

     transformer_dtype = self.transformer.proj_out.bias.dtype
     vace_layers = self.transformer.config.vace_layers
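The failure mode is easy to reproduce in isolation. The sketch below is illustrative only, not the actual maxdiffusion pipeline code: the function names (`encode_prompt`, `batch_size_before_fix`, `batch_size_after_fix`) and the list-based stand-in for text-encoder output are assumptions made for the demonstration. It shows why taking `len(prompt)` before encoding crashes on the precomputed-embeddings path, while `len(prompt_embeds)` after encoding works for both paths.

```python
# Illustrative sketch of the bug and fix; names and data structures are
# hypothetical stand-ins, not the real maxdiffusion API.
from typing import List, Optional

def encode_prompt(prompt: Optional[List[str]], prompt_embeds):
    # If embeddings were precomputed, the encoder is skipped and
    # `prompt` may legitimately be None.
    if prompt_embeds is None:
        # Stand-in for the text encoder: one embedding per prompt string.
        prompt_embeds = [[0.0] * 4 for _ in prompt]
    return prompt_embeds

def batch_size_before_fix(prompt, prompt_embeds):
    # Old order: batch size taken from `prompt` before encoding.
    # Raises TypeError when prompt is None (len(None) is invalid).
    batch_size = len(prompt)
    prompt_embeds = encode_prompt(prompt, prompt_embeds)
    return batch_size

def batch_size_after_fix(prompt, prompt_embeds):
    # New order: encode first, then derive batch size from the
    # embeddings, which exist whether or not `prompt` was given.
    prompt_embeds = encode_prompt(prompt, prompt_embeds)
    return len(prompt_embeds)

# Precomputed-embeddings path (prompt=None, prompt_embeds provided):
embeds = [[0.1] * 4, [0.2] * 4]
print(batch_size_after_fix(None, embeds))       # works: batch of 2
try:
    batch_size_before_fix(None, embeds)
except TypeError:
    print("old code path crashes with TypeError")
```

Both orderings behave identically when `prompt` is a list of strings; the fix only changes behavior on the embeddings-only path, which is why it is a one-line move rather than a broader refactor.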
