
Commit a187fea

Copilot and lstein committed
fix: load AutoProcessor from tokenizer subdirectory, not root model path
Co-authored-by: lstein <111189+lstein@users.noreply.github.com>
Agent-Logs-Url: https://github.com/lstein/InvokeAI/sessions/174390c9-a290-4a1b-83cc-c55441b2d9b8
1 parent 62aebab commit a187fea

1 file changed

Lines changed: 3 additions & 1 deletion

File tree

invokeai/app/invocations/qwen_image_edit_text_encoder.py

@@ -74,8 +74,10 @@ def _encode(self, context: InvocationContext, images: list[PILImage.Image]) -> t
         # Load the full processor (image_processor + tokenizer) from the tokenizer submodel path.
         # Using AutoProcessor.from_pretrained ensures all components are loaded correctly
         # regardless of whether the model uses Qwen2VLProcessor or Qwen2_5_VLProcessor.
+        # For diffusers models the processor config lives in the `tokenizer` subdirectory,
+        # so we append the submodel directory name to the root model path.
         tokenizer_config = context.models.get_config(self.qwen_vl_encoder.tokenizer)
-        tokenizer_abs_path = context.models.get_absolute_path(tokenizer_config)
+        tokenizer_abs_path = context.models.get_absolute_path(tokenizer_config) / "tokenizer"
         processor = AutoProcessor.from_pretrained(str(tokenizer_abs_path), local_files_only=True)

         text_encoder_info = context.models.load(self.qwen_vl_encoder.text_encoder)
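
The one-line fix amounts to appending the `tokenizer` submodel directory name to the root model path before handing it to AutoProcessor. A minimal sketch of the path construction, using a hypothetical model root for illustration (in InvokeAI the real root comes from `context.models.get_absolute_path(tokenizer_config)`):

```python
from pathlib import PurePosixPath

# Hypothetical model root; in InvokeAI this is returned by
# context.models.get_absolute_path(tokenizer_config).
model_root = PurePosixPath("/models/qwen-image-edit")

# The fix: diffusers-format models keep the processor config in the
# `tokenizer` subdirectory, not at the model root, so append it.
tokenizer_abs_path = model_root / "tokenizer"

print(tokenizer_abs_path)  # /models/qwen-image-edit/tokenizer

# The resulting path would then be passed to the processor loader, e.g.:
# processor = AutoProcessor.from_pretrained(str(tokenizer_abs_path), local_files_only=True)
```

Loading from the model root instead of this subdirectory is what caused AutoProcessor to miss the processor config in diffusers-layout checkpoints.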
