Commit a0a6199 (parent 7d852a1)

Regenerate qwen3_omni_moe and qwen3_vl_moe modeling files to propagate return type fix

2 files changed: 2 additions & 2 deletions

src/transformers/models/qwen3_omni_moe/modeling_qwen3_omni_moe.py (1 addition, 1 deletion)

@@ -1415,7 +1415,7 @@ def __init__(self, config: Qwen3OmniMoeThinkerConfig):
         self.experts = Qwen3OmniMoeThinkerTextExperts(config)
         self.gate = Qwen3OmniMoeThinkerTextTopKRouter(config)
 
-    def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
+    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
         batch_size, sequence_length, hidden_dim = hidden_states.shape
         hidden_states_reshaped = hidden_states.view(-1, hidden_dim)
         _, routing_weights, selected_experts = self.gate(hidden_states_reshaped)

src/transformers/models/qwen3_vl_moe/modeling_qwen3_vl_moe.py (1 addition, 1 deletion)

@@ -136,7 +136,7 @@ def __init__(self, config: Qwen3VLMoeTextConfig):
         self.experts = Qwen3VLMoeTextExperts(config)
         self.gate = Qwen3VLMoeTextTopKRouter(config)
 
-    def forward(self, hidden_states: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
+    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
         batch_size, sequence_length, hidden_dim = hidden_states.shape
         hidden_states_reshaped = hidden_states.view(-1, hidden_dim)
         _, routing_weights, selected_experts = self.gate(hidden_states_reshaped)
