
[v0.18.0][Misc] Recompute scheduler upgrade to vLLM 0.18.0 #7720

Merged

yiz-liu merged 2 commits into vllm-project:releases/v0.18.0 from Angazenn:recompute_0180 on Mar 27, 2026

Conversation

@Angazenn (Collaborator)

What this PR does / why we need it?

Cherry-pick of #7675.
The current RecomputeScheduler is aligned with the Scheduler in vLLM v0.16.0. Since upstream vLLM has moved to v0.18.0, RecomputeScheduler also needs to be upgraded to pick up the missing changes.

Does this PR introduce any user-facing change?

How was this patch tested?

Signed-off-by: Angazenn <supperccell@163.com>
@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request updates the RecomputeScheduler to maintain compatibility with the vLLM v0.18.0 release. The changes focus on aligning internal scheduling mechanisms with upstream updates, improving the robustness of request handling during asynchronous operations, and enhancing support for advanced KV cache configurations.

Highlights

  • Scheduler Upgrade: Updated the RecomputeScheduler to align with the vLLM v0.18.0 upstream changes.
  • Scheduling Logic Improvements: Refactored the waiting-request scheduling loop to support pausing, improved handling of blocked requests, and added support for multi-group KV cache configurations (see the sketch after this list).
  • KV Cache Management: Added support for zeroing new KV cache blocks and refined the asynchronous KV transfer state machine.
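
A minimal sketch of the skipped-requests pattern with pause support described above. The `PauseState` enum, `can_allocate()` predicate, and queue handling are illustrative assumptions for this sketch, not the actual vllm-ascend implementation:

```python
from collections import deque
from enum import Enum


class PauseState(Enum):
    # Hypothetical states; the real enum in vllm-ascend may differ.
    RUNNING = 0
    PAUSED = 1


def schedule_waiting(waiting: deque, pause_state: PauseState,
                     can_allocate) -> list:
    """Drain the waiting queue, deferring blocked requests.

    Requests that cannot be scheduled yet (e.g. still waiting on a remote
    KV transfer) go into a skipped queue instead of stalling the loop,
    and are re-queued at the front afterwards to preserve arrival order.
    """
    scheduled = []
    skipped = deque()
    while waiting and pause_state is not PauseState.PAUSED:
        request = waiting.popleft()
        if not can_allocate(request):
            skipped.append(request)  # defer, keep draining the queue
            continue
        scheduled.append(request)
    # Put skipped requests back ahead of anything still waiting.
    waiting.extendleft(reversed(skipped))
    return scheduled
```

The point of the skipped queue is that one blocked request no longer head-of-line blocks every request behind it, while `extendleft(reversed(...))` keeps the original arrival order intact for the next scheduling pass.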


@Angazenn added the ready (read for review) and ready-for-test (start test by label for PR) labels on Mar 27, 2026

@gemini-code-assist (bot) left a comment

Code Review

This pull request refactors the RecomputeScheduler in the vllm-ascend module to improve request scheduling and KV cache management. Key updates include the integration of PauseState for scheduling control, a refactored waiting-request queue that uses a skipped-requests mechanism, and improved encoder cache allocation for encoder-decoder models. The changes also ensure accurate token-count tracking during asynchronous KV transfers and add support for zeroing new block IDs. I have no feedback to provide, as no review comments were present.

Suggested PR Title: [vllm-ascend][Scheduler][Misc] Refactor recompute scheduler and enhance KV cache handling

Suggested PR Summary:

```markdown
### What this PR does / why we need it?
This PR refactors the RecomputeScheduler to improve request handling and KV cache management. It introduces PauseState support, refactors the waiting-request scheduling logic using a skipped-requests queue, and updates encoder cache allocation for encoder-decoder models. Additionally, it ensures correct token-count tracking during asynchronous KV transfers and adds support for zeroing new block IDs.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed with existing tests.
```
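
As a rough illustration of the "zeroing new block IDs" point, here is a minimal sketch assuming a block-pool allocator over a paged KV cache tensor. The function name, signature, and zero_new_blocks flag are hypothetical, not the actual vllm-ascend API:

```python
import torch


def allocate_blocks(kv_cache: torch.Tensor, free_block_ids: list[int],
                    num_needed: int, zero_new_blocks: bool) -> list[int]:
    """Pop free blocks for a request, optionally scrubbing them first.

    kv_cache is assumed to be laid out as [num_blocks, block_size, ...];
    zeroing freshly assigned blocks ensures stale KV data left over from a
    previous request can never leak into the new one.
    """
    new_block_ids = [free_block_ids.pop() for _ in range(num_needed)]
    if zero_new_blocks:
        kv_cache[new_block_ids] = 0  # in-place write to just those blocks
    return new_block_ids
```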

@yiz-liu yiz-liu added this to the v0.18.0rc1 milestone Mar 27, 2026
@yiz-liu yiz-liu merged commit 7cca7e6 into vllm-project:releases/v0.18.0 Mar 27, 2026
26 checks passed