
[CI] Refresh the CI for DeepSeek-V3.2 #7678

Open
Nagisa125 wants to merge 2 commits into vllm-project:main from Nagisa125:main

Conversation

@Nagisa125 (Contributor) commented Mar 26, 2026

What this PR does / why we need it?

This PR fixes the errors in the multi-node-deepseek-V3_2-W8A8-cp test.
During graph compilation, a roundup operation is performed; if the value of tp is not a multiple of num_speculative_tokens + 1, the dimensions become inconsistent, and the check added in #6856 reports an error, as sketched below.
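
As a rough illustration of that constraint, here is a minimal Python sketch; the function name and error message are hypothetical, not the actual check from #6856:

```python
def check_capture_sizes(capture_sizes, num_speculative_tokens):
    """Hypothetical version of the divisibility check: during graph
    compilation the token dimension is rounded up to a capture size,
    so every size must be a multiple of num_speculative_tokens + 1."""
    step = num_speculative_tokens + 1
    bad = [size for size in capture_sizes if size % step != 0]
    if bad:
        raise ValueError(
            f"capture sizes {bad} are not multiples of "
            f"num_speculative_tokens + 1 = {step}"
        )

# With num_speculative_tokens = 3, every size must be a multiple of 4:
check_capture_sizes([4, 8, 12, 16], num_speculative_tokens=3)  # passes
# check_capture_sizes([3, 6, 9], num_speculative_tokens=3)     # would raise
```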

Does this PR introduce any user-facing change?

no

How was this patch tested?

The test has passed.
https://github.com/vllm-project/vllm-ascend/actions/runs/23587800565/job/68685248957?pr=7678

zhangxinyuehfad and others added 2 commits March 26, 2026 15:19
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Signed-off-by: wyh145 <1987244901@qq.com>
@Nagisa125 requested a review from wangxiyuan as a code owner March 26, 2026 07:35
@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on refreshing and optimizing the Continuous Integration (CI) configurations for the DeepSeek-V3.2 model. The changes involve fine-tuning benchmark parameters, specifically increasing the batch size for a multi-node benchmark, and adjusting model deployment configurations related to speculative decoding and CUDA graph compilation. These updates aim to enhance the efficiency and coverage of the CI testing environment for DeepSeek-V3.2.

Highlights

  • Benchmark Configuration Update: The batch_size for the gsm8k benchmark in the DeepSeek-V3_2-W8A8-A3-dual-nodes.yaml configuration was significantly increased from 1 to 512.
  • Speculative Decoding Tuning: The num_speculative_tokens parameter in the speculative-config for DeepSeek-V3.2 checkpointing tests was adjusted from 2 to 3.
  • CUDA Graph Optimization: The cudagraph_capture_sizes array in the compilation-config for DeepSeek-V3.2 checkpointing tests was extended to include larger sizes (52, 56, 60, 64), optimizing for a broader range of tensor dimensions (see the sketch below).
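
For reference, the extended list quoted in the review comments below can be reproduced with a short Python sketch; the generation rule here is inferred from the values, not taken from the PR itself:

```python
# Multiples of 3 up to 48 (plausibly matching the previous
# num_speculative_tokens = 2, i.e. step K + 1 = 3), followed by the
# newly appended step-4 sizes up to 64.
sizes = list(range(3, 49, 3)) + [52, 56, 60, 64]
print(sizes)
# [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 52, 56, 60, 64]
```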


@github-actions (Contributor)

👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

  • A PR should do only one thing; smaller PRs enable faster reviews.
  • Every PR should include unit tests and end-to-end tests to ensure it works and is not broken by future PRs.
  • Write the commit message and fill out the PR description to help reviewers and future developers understand the change.

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.

@gemini-code-assist bot left a comment


Code Review

This pull request updates configuration files for DeepSeek-V3_2 models. Specifically, the batch_size for the perf_short benchmark was increased from 1 to 512, which requires updating the corresponding baseline value to accurately reflect performance for the new configuration. Additionally, the num_speculative_tokens was changed from 2 to 3, and the cudagraph_capture_sizes list was extended. However, the cudagraph_capture_sizes list needs to be recalculated to ensure consistency with the new num_speculative_tokens value (K=3), as it should follow the n * (K+1) pattern to prevent incorrect graph capturing or suboptimal performance.

- batch_size: 1
+ batch_size: 512
  request_rate: 11.2
  baseline: 148 # after switch vllm to 0.15.0, the baseline reduced significantly, need to confirm if it's a regression or just a more strict measurement


Severity: high

The batch_size for the perf_short benchmark has been significantly increased from 1 to 512. The existing baseline value (148) was likely established with the previous batch_size: 1. To ensure the benchmark accurately reflects performance and can detect regressions for the new configuration, the baseline should be updated to a value appropriate for batch_size: 512.

Comment on lines +39 to +40
--speculative-config '{"num_speculative_tokens": 3, "method":"deepseek_mtp"}'
--compilation-config '{"cudagraph_capture_sizes": [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 52, 56, 60, 64], "cudagraph_mode": "FULL_DECODE_ONLY"}'


Severity: high

The num_speculative_tokens has been updated from 2 to 3. According to the documentation in docs/source/user_guide/feature_guide/speculative_decoding.md (lines 89-90), when num_speculative_tokens is K, the cudagraph_capture_sizes should be calculated as n * (K + 1). With K=3, the sizes should be multiples of (3 + 1) = 4. The current cudagraph_capture_sizes list [3, 6, 9, ..., 48, 52, 56, 60, 64] does not consistently follow this n * 4 pattern (e.g., it starts with 3, 6, 9). This inconsistency could lead to incorrect graph capturing or suboptimal performance. Please recalculate the cudagraph_capture_sizes to be consistent with K=3.

For example, if the intention is to support batch sizes n from 1 to 16, the list should be [4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64].

      --speculative-config '{"num_speculative_tokens": 3, "method":"deepseek_mtp"}'
      --compilation-config '{"cudagraph_capture_sizes": [4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64], "cudagraph_mode": "FULL_DECODE_ONLY"}'
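
The suggested list above is simply n * (K + 1) for n = 1 to 16 with K = 3; a short Python check, illustrative only, confirms the arithmetic:

```python
K = 3  # num_speculative_tokens
assert [(K + 1) * n for n in range(1, 17)] == [
    4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64
]
```

The same reasoning applies to the identical comment on the second hunk below.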

Comment on lines +65 to +66
--speculative-config '{"num_speculative_tokens": 3, "method":"deepseek_mtp"}'
--compilation-config '{"cudagraph_capture_sizes": [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 52, 56, 60, 64], "cudagraph_mode": "FULL_DECODE_ONLY"}'


Severity: high

The num_speculative_tokens has been updated from 2 to 3. According to the documentation in docs/source/user_guide/feature_guide/speculative_decoding.md (lines 89-90), when num_speculative_tokens is K, the cudagraph_capture_sizes should be calculated as n * (K + 1). With K=3, the sizes should be multiples of (3 + 1) = 4. The current cudagraph_capture_sizes list [3, 6, 9, ..., 48, 52, 56, 60, 64] does not consistently follow this n * 4 pattern (e.g., it starts with 3, 6, 9). This inconsistency could lead to incorrect graph capturing or suboptimal performance. Please recalculate the cudagraph_capture_sizes to be consistent with K=3.

For example, if the intention is to support batch sizes n from 1 to 16, the list should be [4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64].

      --speculative-config '{"num_speculative_tokens": 3, "method":"deepseek_mtp"}'
      --compilation-config '{"cudagraph_capture_sizes": [4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64], "cudagraph_mode": "FULL_DECODE_ONLY"}'

@zhangxinyuehfad (Collaborator)

/nightly multi-node-deepseek-V3_2-W8A8-cp

@github-actions (Contributor)

This pull request has conflicts; please resolve them before we can evaluate the pull request.
