
update qwen-image default tokenizer_max_length #1041

Merged

llmc-reviewer merged 1 commit into main from qw on Apr 27, 2026

Conversation

@helloyongyang
Contributor

No description provided.

@llmc-reviewer merged commit c7cf058 into main on Apr 27, 2026
2 checks passed
@llmc-reviewer deleted the qw branch on April 27, 2026 at 03:15

@gemini-code-assist (Bot) left a comment


Code Review

This pull request updates Qwen25_VLForConditionalGeneration_TextEncoder to accept a configurable tokenizer_max_length with a default of 4096, replacing the previously hardcoded value of 1024. Review feedback notes that the new setting is applied only to the text-only encoding path; it should also be extended to the image-to-image (i2i) task so that truncation behavior and memory usage remain consistent across all encoding paths.

```diff
 def __init__(self, config):
     self.config = config
-    self.tokenizer_max_length = 1024
+    self.tokenizer_max_length = config.get("tokenizer_max_length", 4096)
```
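
For context, here is a minimal standalone sketch of the default-with-override pattern this diff introduces. `TextEncoderConfigSketch` is a hypothetical stand-in; the real class also loads the model and tokenizer.

```python
# Hypothetical stand-in for the config-handling part of the encoder's __init__;
# the real Qwen25_VLForConditionalGeneration_TextEncoder does much more setup.
class TextEncoderConfigSketch:
    def __init__(self, config: dict):
        self.config = config
        # Fall back to 4096 when the key is absent (previously hardcoded to 1024).
        self.tokenizer_max_length = config.get("tokenizer_max_length", 4096)

assert TextEncoderConfigSketch({}).tokenizer_max_length == 4096
assert TextEncoderConfigSketch({"tokenizer_max_length": 2048}).tokenizer_max_length == 2048
```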


Severity: medium

The tokenizer_max_length configuration is currently only applied to the text-only encoding path (line 221). The processor call used for the i2i task (line 202) does not utilize this parameter, which leads to inconsistent truncation behavior between tasks. If a user configures a specific max length, it should ideally be respected across all encoding paths to ensure predictable behavior and memory usage.
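
A hedged sketch of what the suggested fix could look like. The function shape and names (`encode_i2i`, the `processor` argument) are assumptions, not code from this PR; it assumes the i2i path uses a Qwen2.5-VL processor (e.g. from `AutoProcessor.from_pretrained`), which forwards tokenizer keyword arguments such as `truncation` and `max_length`.

```python
# Sketch only: function shape and names are assumptions, not code from the PR.
def encode_i2i(processor, prompt, image, tokenizer_max_length=4096):
    """Encode an image+text pair, honoring the configured token limit.

    Assumes `processor` is a Qwen2.5-VL processor that forwards tokenizer
    kwargs like `truncation` and `max_length` to its underlying tokenizer.
    """
    return processor(
        text=[prompt],
        images=[image],
        truncation=True,
        max_length=tokenizer_max_length,  # previously omitted on the i2i path
        return_tensors="pt",
    )
```

Passing the same configured limit on both paths keeps truncation and memory behavior predictable regardless of which task is run.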
