
[fix](gpt-oss): fix quark quantized model in moe bias #787

Open

PerryZhang01 wants to merge 1 commit into main from quant_gpt_oss

Conversation

@PerryZhang01 (Contributor) commented May 14, 2026

Motivation

This PR fixes a padding error in the quantized gpt_oss model. The quantized gpt-oss-120b comes from the Quark team (https://huggingface.co/amd/gpt-oss-120b-moe-ori-attn-ptpc); it only quantizes the GEMM weights in attention with the PTPC method. The MoE biases are padded, and allocating them as an empty (uninitialized) tensor introduces dirty data in the padded region, so the bias should be initialized with zeros instead.
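A minimal sketch of the kind of change described above, assuming the padded MoE bias is allocated during weight loading; the tensor names and shapes here are hypothetical and only illustrate why `torch.zeros` is preferred over `torch.empty`:

```python
import torch

# Hypothetical shapes for illustration; the real loader pads the MoE bias
# to a hardware-friendly size.
num_experts, padded_intermediate_size = 32, 2880

# Before: torch.empty leaves the padded region uninitialized, so it holds
# whatever happened to be in memory ("dirty data") and corrupts the output.
# bias = torch.empty(num_experts, padded_intermediate_size, dtype=torch.bfloat16)

# After: torch.zeros guarantees the padded bias entries are exactly 0,
# so they contribute nothing when added to the expert outputs.
bias = torch.zeros(num_experts, padded_intermediate_size, dtype=torch.bfloat16)
```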


