
for qwen3-next: add kernel fused split qkvgate gemma rmsnorm rope #405

Merged
RuixuanZhang06 merged 6 commits into sgl-project:main from McZyWu:splitqkvgate-gemma-rope on Apr 7, 2026

Conversation

@McZyWu (Contributor) commented Mar 23, 2026

For qwen3-next and coder-next, fuse the split of q, gate, k, v, the reshape, Gemma RMSNorm, and RoPE ops into a single kernel. Profiling shows per-op time improving from ~100 µs to ~40 µs.
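
For reviewers unfamiliar with the op sequence being fused, here is a minimal PyTorch sketch of the unfused path as described above; the shapes, weight names, and rotate-half RoPE convention are assumptions for illustration, not the PR's actual code.

```python
import torch

def unfused_reference(qkvg, wq, wk, cos, sin,
                      num_q_heads=32, num_kv_heads=4, head_dim=128, eps=1e-6):
    # Split the packed projection into q, gate, k, v along the last dim
    # (assumed layout: gate has the same width as q, as in gated attention).
    q_sz, kv_sz = num_q_heads * head_dim, num_kv_heads * head_dim
    q, gate, k, v = torch.split(qkvg, [q_sz, q_sz, kv_sz, kv_sz], dim=-1)

    # Reshape the flat projections into per-head views.
    q = q.reshape(*q.shape[:-1], num_q_heads, head_dim)
    k = k.reshape(*k.shape[:-1], num_kv_heads, head_dim)
    v = v.reshape(*v.shape[:-1], num_kv_heads, head_dim)

    def gemma_rmsnorm(x, w):
        # Gemma-style RMSNorm scales by (1 + weight), computed in float32.
        var = x.float().pow(2).mean(-1, keepdim=True)
        return (x.float() * torch.rsqrt(var + eps) * (1.0 + w.float())).to(x.dtype)

    q, k = gemma_rmsnorm(q, wq), gemma_rmsnorm(k, wk)

    def rope(x):
        # Rotate-half RoPE; cos/sin assumed broadcastable to [..., 1, head_dim // 2].
        x1, x2 = x[..., : head_dim // 2], x[..., head_dim // 2:]
        return torch.cat((x1 * cos - x2 * sin, x1 * sin + x2 * cos), dim=-1)

    return rope(q), gate, rope(k), v
```

Running these as separate ops launches several kernels and materializes every intermediate; fusing the whole sequence into one kernel is where the reported 100 µs → 40 µs improvement comes from.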

@gemini-code-assist (Contributor) commented

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a performance optimization by fusing several operations related to attention mechanisms in large language models. The primary goal is to enhance the speed and efficiency of these models, specifically Qwen3-next and Coder, by reducing the overhead associated with individual operations. The fused kernel combines splitting Q, K, V, and gate, applying Gemma RMSNorm, and RoPE (Rotary Position Embedding) into a single operation.

Highlights

  • Kernel Fusion: This PR introduces a kernel-fused implementation for split Q, Gate, K, V, reshape, Gemma RMSNorm, and RoPE operations, targeting improved performance for Qwen3-next and Coder models.
  • Performance Enhancement: The fused kernel reduces per-op time from roughly 100 µs to 40 µs through optimized execution.
  • Triton Implementation: The new functionality is implemented using Triton, a language and compiler for writing high-performance custom deep-learning primitives (see the sketch below).
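
As a rough illustration of what a Triton kernel for one of the fused stages looks like, here is a minimal Gemma-style RMSNorm kernel. It is a generic sketch assuming a contiguous row-major input, not the kernel added by this PR.

```python
import torch
import triton
import triton.language as tl

@triton.jit
def gemma_rmsnorm_kernel(x_ptr, w_ptr, out_ptr, hidden, eps, BLOCK: tl.constexpr):
    # One program per row: load the row, normalize, scale by (1 + weight).
    row = tl.program_id(0)
    cols = tl.arange(0, BLOCK)
    mask = cols < hidden
    x = tl.load(x_ptr + row * hidden + cols, mask=mask, other=0.0).to(tl.float32)
    var = tl.sum(x * x, axis=0) / hidden
    inv = 1.0 / tl.sqrt(var + eps)
    w = tl.load(w_ptr + cols, mask=mask, other=0.0).to(tl.float32)
    tl.store(out_ptr + row * hidden + cols, x * inv * (1.0 + w), mask=mask)

def gemma_rmsnorm(x: torch.Tensor, w: torch.Tensor, eps: float = 1e-6):
    # Flatten leading dims to rows; assumes x is contiguous.
    rows, hidden = x.reshape(-1, x.shape[-1]).shape
    out = torch.empty_like(x)
    BLOCK = triton.next_power_of_2(hidden)
    gemma_rmsnorm_kernel[(rows,)](x, w, out, hidden, eps, BLOCK=BLOCK)
    return out
```

The PR's actual kernel additionally performs the split, reshape, and RoPE within the same program, avoiding the global-memory round trips between stages.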



@gemini-code-assist (Bot) left a comment

Code Review

This pull request introduces a new fused Triton kernel, split_qkvgate_gemma_rmsnorm_rope, aimed at enhancing performance. While the initiative to fuse operations is commendable, the current implementation contains several critical issues within the kernel logic that will lead to incorrect behavior or runtime errors. Additionally, the wrapper function's signature has problematic default arguments that could cause unexpected failures. I've provided specific comments and suggestions to address these issues to ensure the new kernel is both correct and robust.
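
The review does not reproduce the wrapper's signature, but the class of problem it names is common enough to sketch: a Python default evaluated once at import time (for example, a tensor on a fixed device) silently diverges from the caller's device or dtype. A hypothetical illustration:

```python
import torch

# Anti-pattern: the default tensor is built once, at import time, on a fixed
# device, so calls with inputs on another device mix devices or fail outright.
def norm_bad(x, weight=torch.ones(128)):
    return x * weight

# Safer: default to None and construct the tensor at call time, matching the
# input's device, dtype, and width.
def norm_good(x, weight=None):
    if weight is None:
        weight = torch.ones(x.shape[-1], device=x.device, dtype=x.dtype)
    return x * weight
```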

Comment threads on python/sgl_kernel_npu/sgl_kernel_npu/norm/split_qkv_rmsnorm_rope.py (3 resolved as outdated, 1 current).
@McZyWu requested a review from shengzhaotian on Apr 2, 2026.
@RuixuanZhang06 merged commit 7f42644 into sgl-project:main on Apr 7, 2026.
4 of 7 checks passed
