Commit 7ffdcb7
Fix TypeError in LongCatImageEditPipeline truncation warning
`_encode_prompt` in `LongCatImageEditPipeline` calls `len()` twice on `all_tokens` when logging the truncation warning:

```python
f" {self.tokenizer_max_length} input token nums : {len(len(all_tokens))}"
```

`len(all_tokens)` already returns an `int`, so the outer `len()` raises `TypeError: object of type 'int' has no len()`. The failure triggers in exactly the branch this warning exists for (prompts longer than `tokenizer_max_length`, default 512), turning the intended informational warning into a hard crash. The sibling `LongCatImagePipeline._encode_prompt` has the correct `{len(all_tokens)}` at line 291, so this is a typo local to the edit pipeline. Minimal fix: drop the extra `len()` call.

Reproduces with any prompt whose tokenization exceeds 512 tokens:

```python
pipe = LongCatImageEditPipeline.from_pretrained(...)
pipe(prompt="a very long prompt ..." * 200, image=...)
# TypeError: object of type 'int' has no len()
```
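The failure mode can be shown in isolation, without loading the pipeline. A minimal sketch, using a plain list as a stand-in for the tokenized prompt (the names `all_tokens` and `tokenizer_max_length` mirror the pipeline; the values here are illustrative):

```python
all_tokens = list(range(600))  # stand-in for a tokenized prompt of 600 tokens
tokenizer_max_length = 512

# Buggy form: len(all_tokens) is an int, and len() of an int raises TypeError.
try:
    msg = f" {tokenizer_max_length} input token nums : {len(len(all_tokens))}"
except TypeError as exc:
    print(exc)  # object of type 'int' has no len()

# Fixed form: a single len() call interpolates the token count as intended.
msg = f" {tokenizer_max_length} input token nums : {len(all_tokens)}"
print(msg)  #  512 input token nums : 600
```

This is why the crash only surfaces on long prompts: the f-string is evaluated lazily inside the `if len(all_tokens) > tokenizer_max_length` branch, so short prompts never execute the broken expression.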
1 parent c8c8401 commit 7ffdcb7

1 file changed, 1 addition & 1 deletion:

src/diffusers/pipelines/longcat_image/pipeline_longcat_image_edit.py
```diff
@@ -284,7 +284,7 @@ def _encode_prompt(self, prompt, image):
         if len(all_tokens) > self.tokenizer_max_length:
             logger.warning(
                 "Your input was truncated because `max_sequence_length` is set to "
-                f" {self.tokenizer_max_length} input token nums : {len(len(all_tokens))}"
+                f" {self.tokenizer_max_length} input token nums : {len(all_tokens)}"
             )
             all_tokens = all_tokens[: self.tokenizer_max_length]
```
