[CI] Refresh the CI for DeepSeek-V3.2 #7678
Nagisa125 wants to merge 2 commits into vllm-project:main from
Conversation
Signed-off-by: hfadzxy <starmoon_zhang@163.com>
Signed-off-by: wyh145 <1987244901@qq.com>
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request refreshes the Continuous Integration (CI) configuration for the DeepSeek-V3.2 model: it increases the batch size for a multi-node benchmark and adjusts deployment settings for speculative decoding and CUDA graph compilation. These updates aim to improve the efficiency and coverage of CI testing for DeepSeek-V3.2.
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:
If CI fails, you can run the linting and testing checks locally according to Contributing and Testing.
Code Review
This pull request updates configuration files for DeepSeek-V3_2 models. Specifically, the batch_size for the perf_short benchmark was increased from 1 to 512, which requires updating the corresponding baseline value to accurately reflect performance for the new configuration. Additionally, the num_speculative_tokens was changed from 2 to 3, and the cudagraph_capture_sizes list was extended. However, the cudagraph_capture_sizes list needs to be recalculated to ensure consistency with the new num_speculative_tokens value (K=3), as it should follow the n * (K+1) pattern to prevent incorrect graph capturing or suboptimal performance.
-  batch_size: 1
+  batch_size: 512
   request_rate: 11.2
   baseline: 148  # after switching vllm to 0.15.0, the baseline dropped significantly; need to confirm whether this is a regression or just a stricter measurement
The batch_size for the perf_short benchmark has been significantly increased from 1 to 512. The existing baseline value (148) was likely established with the previous batch_size: 1. To ensure the benchmark accurately reflects performance and can detect regressions for the new configuration, the baseline should be updated to a value appropriate for batch_size: 512.
--speculative-config '{"num_speculative_tokens": 3, "method":"deepseek_mtp"}'
--compilation-config '{"cudagraph_capture_sizes": [3, 6, 9, 12, 15, 18, 21, 24, 27, 30, 33, 36, 39, 42, 45, 48, 52, 56, 60, 64], "cudagraph_mode": "FULL_DECODE_ONLY"}'
The num_speculative_tokens has been updated from 2 to 3. According to the documentation in docs/source/user_guide/feature_guide/speculative_decoding.md (lines 89-90), when num_speculative_tokens is K, the cudagraph_capture_sizes should be calculated as n * (K + 1). With K=3, the sizes should be multiples of (3 + 1) = 4. The current cudagraph_capture_sizes list [3, 6, 9, ..., 48, 52, 56, 60, 64] does not consistently follow this n * 4 pattern (e.g., it starts with 3, 6, 9). This inconsistency could lead to incorrect graph capturing or suboptimal performance. Please recalculate the cudagraph_capture_sizes to be consistent with K=3.
For example, if the intention is to support batch sizes n from 1 to 16, the list should be [4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64].
Suggested change:
--speculative-config '{"num_speculative_tokens": 3, "method":"deepseek_mtp"}'
--compilation-config '{"cudagraph_capture_sizes": [4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64], "cudagraph_mode": "FULL_DECODE_ONLY"}'
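For reference, the n * (K + 1) rule is easy to generate programmatically. A minimal Python sketch (the helper name is illustrative, not part of vLLM's API):

# Minimal sketch: with num_speculative_tokens = K, capture sizes should be
# the multiples n * (K + 1) up to the largest batch size to capture.
def capture_sizes(num_speculative_tokens: int, max_size: int) -> list[int]:
    step = num_speculative_tokens + 1
    return list(range(step, max_size + 1, step))

# K = 3 reproduces the list suggested above:
assert capture_sizes(3, 64) == [4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64]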
/nightly multi-node-deepseek-V3_2-W8A8-cp
This pull request has conflicts, please resolve those before we can evaluate the pull request.
What this PR does / why we need it?
This PR fixes the multi-node-deepseek-V3_2-W8A8-cp test failures.
During graph compilation, a round-up operation is performed; if the value of tp is not a multiple of num_speculative_tokens + 1, the resulting dimension is incorrect, and the check added in #6856 reports an error.
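A hypothetical sketch of that failure mode (function names are illustrative; this is not the actual #6856 implementation):

def padded_size(batch_size: int, capture_sizes: list[int]) -> int:
    # A runtime batch is rounded up to the nearest captured graph size.
    return min(s for s in capture_sizes if s >= batch_size)

def check_capture_sizes(capture_sizes: list[int], num_speculative_tokens: int) -> None:
    # With MTP, every capture size must be a multiple of K + 1, otherwise
    # the token dimension of the captured graph cannot be split evenly.
    step = num_speculative_tokens + 1
    bad = [s for s in capture_sizes if s % step != 0]
    if bad:
        raise ValueError(f"capture sizes {bad} are not multiples of {step}")

print(padded_size(5, [4, 8, 12, 16]))                          # 8
check_capture_sizes([4, 8, 12, 16], num_speculative_tokens=3)  # passes
try:
    check_capture_sizes([3, 6, 9, 12], num_speculative_tokens=3)
except ValueError as e:
    print(e)  # capture sizes [3, 6, 9] are not multiples of 4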
Does this PR introduce any user-facing change?
No.
How was this patch tested?
The test has passed.
https://github.com/vllm-project/vllm-ascend/actions/runs/23587800565/job/68685248957?pr=7678