Add support for LLaVA-OneVision-1.5 model#54
Conversation
randydl
commented
Apr 28, 2026
Code Review
This pull request adds support for the llava_onevision1_5 model, including its registration, constants, and a new implementation file. A typo was found in the configuration parser where the model string was missing an underscore, which would prevent the qk_layernorm setting from being correctly applied to this model.
```diff
         rope_scaling = res.get('rope_scaling') or {}
         if llm_model_type in {'qwen3', 'qwen3_moe', 'qwen3_next'} or hf_model_type in {
-            'qwen3_omni_moe', 'qwen3_omni', 'qwen3_vl', 'qwen3_vl_moe', 'qwen3_5', 'qwen3_5_moe'
+            'qwen3_omni_moe', 'qwen3_omni', 'qwen3_vl', 'qwen3_vl_moe', 'qwen3_5', 'qwen3_5_moe', 'llavaonevision1_5'
```
The model type string 'llavaonevision1_5' appears to contain a typo (a missing underscore). It should be 'llava_onevision1_5' to be consistent with the definition in src/mcore_bridge/model/constant.py and the registration in src/mcore_bridge/model/mm_gpts/llava.py. Without the underscore, the qk_layernorm configuration would not be applied to this model.
```diff
-            'qwen3_omni_moe', 'qwen3_omni', 'qwen3_vl', 'qwen3_vl_moe', 'qwen3_5', 'qwen3_5_moe', 'llavaonevision1_5'
+            'qwen3_omni_moe', 'qwen3_omni', 'qwen3_vl', 'qwen3_vl_moe', 'qwen3_5', 'qwen3_5_moe', 'llava_onevision1_5'
```
The expected value is "llavaonevision1_5" (see https://huggingface.co/lmms-lab/LLaVA-OneVision-1.5-4B-Instruct/blob/main/configuration_llavaonevision1_5.py#:~:text=model_type%20%3D%20%22-,llavaonevision1_5,-%22), but note that in Swift it is referred to as "llava_onevision1_5".
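The distinction matters because the check is exact set membership against the Hugging Face config's `model_type`, so the string must match character for character. A minimal sketch of the idea (names like `needs_qk_layernorm` and `QK_LAYERNORM_HF_TYPES` are hypothetical, not the actual identifiers in the parser):

```python
# Hypothetical sketch: qk_layernorm is enabled only when the HF model_type
# string is found in the set, so the spelling must match the value declared
# in the model's configuration file ("llavaonevision1_5", no underscore).
QK_LAYERNORM_HF_TYPES = {
    'qwen3_omni_moe', 'qwen3_omni', 'qwen3_vl', 'qwen3_vl_moe',
    'qwen3_5', 'qwen3_5_moe', 'llavaonevision1_5',
}

def needs_qk_layernorm(hf_model_type: str) -> bool:
    # Exact match: the underscored variant 'llava_onevision1_5' would
    # silently fail this check and leave qk_layernorm unset.
    return hf_model_type in QK_LAYERNORM_HF_TYPES
```

Under this reading, the entry in the parser should track the HF `model_type`, while 'llava_onevision1_5' remains the internal registration name.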
thanks!