[310P] support shared experts path in fused MoE for qwen3.5 #7674
Tflowers-0129 wants to merge 5 commits into vllm-project:main from
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the fused Mixture-of-Experts (MoE) system to properly integrate and process shared experts, a critical feature for models like Qwen3.5. The changes ensure that the MoE layer can correctly handle the unique gating and activation requirements of shared expert configurations, particularly for the 310P Ascend environment. This improves the model's compatibility and performance with advanced MoE architectures.
3a15471 to 8bec9e9 (Compare)
Code Review
This pull request enhances the FusedMoE layer for Ascend 310P by introducing new methods, _shared_experts_part1 and _shared_experts_part2, to manage the forward pass for shared experts, including specific handling for Qwen3.5/Qwen3-Next models with an expert_gate. It also imports torch.nn.functional and sets the is_internal_router property to False for the 310P Ascend path. A review comment suggests refactoring the repeated tuple unpacking logic within the newly added shared expert methods into a helper function to improve code maintainability and reduce duplication.
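To make the refactoring suggestion concrete, here is a minimal sketch of what such a helper could look like. This is illustrative only: the tuple layout, the helper name _unpack_shared_output, and the surrounding logic are assumptions, not the actual vllm-ascend code.

```python
# Hypothetical sketch of the suggested refactor: both shared-expert methods
# repeat the same tuple unpacking, so it is hoisted into a single helper.
# The tuple layout (hidden_states, optional gate logits) is an assumption.
from typing import Optional, Tuple, Union

import torch


def _unpack_shared_output(
    shared_out: Union[torch.Tensor, Tuple[torch.Tensor, Optional[torch.Tensor]]],
) -> Tuple[torch.Tensor, Optional[torch.Tensor]]:
    """Normalize the shared-expert output to (hidden_states, gate_logits)."""
    if isinstance(shared_out, tuple):
        hidden_states, gate_logits = shared_out
        return hidden_states, gate_logits
    return shared_out, None


def _shared_experts_part1(shared_out) -> torch.Tensor:
    hidden_states, gate_logits = _unpack_shared_output(shared_out)
    # ... part-1 specific work on hidden_states / gate_logits ...
    return hidden_states


def _shared_experts_part2(shared_out) -> torch.Tensor:
    hidden_states, gate_logits = _unpack_shared_output(shared_out)
    # ... part-2 specific work on hidden_states / gate_logits ...
    return hidden_states
```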
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Signed-off-by: Tflowers-0129 <2906339855@qq.com>
1b6091a to 4181676 (Compare)
What this PR does / why we need it?
310P originally supported only the Qwen3 series. The recent Qwen3.5 adaptation introduced a new shared-experts structure that the 310P path did not yet handle; this PR adds that support and aligns the 310P execution flow with the A2/A3 implementation path.
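For reviewers unfamiliar with the shared-experts structure, here is a rough sketch of the idea being ported to the 310P path: the shared expert runs for every token, and its output, scaled by a sigmoid expert gate in Qwen3-Next-style models, is added to the routed-expert output. The function name and tensor shapes below are assumptions for illustration, not the actual FusedMoE code.

```python
# Conceptual sketch only: combine shared-expert and routed-expert outputs.
from typing import Optional

import torch


def combine_shared_and_routed(
    routed_out: torch.Tensor,                    # [num_tokens, hidden_size] from routed experts
    shared_out: torch.Tensor,                    # [num_tokens, hidden_size] from the shared expert
    expert_gate_logits: Optional[torch.Tensor],  # [num_tokens, 1] gate logits, or None
) -> torch.Tensor:
    """Add the (optionally gated) shared-expert output to the routed output."""
    if expert_gate_logits is not None:
        # Qwen3.5 / Qwen3-Next style: a learned sigmoid gate scales the
        # shared-expert contribution per token.
        shared_out = torch.sigmoid(expert_gate_logits) * shared_out
    return routed_out + shared_out
```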
Does this PR introduce any user-facing change?
NO
How was this patch tested?
local e2e test