[0.18.0][Bugfix][KV Pool]Fix KV transfer put logic #7718

Open
Pz1116 wants to merge 4 commits intovllm-project:releases/v0.18.0from
Pz1116:put_all_keys_v0_18_0

Conversation

@Pz1116 Pz1116 commented Mar 27, 2026

What this PR does / why we need it?

Previously, when putting KV blocks into the KV pool, we found the first non-existing key and put all blocks starting from that index. However, if the prefix-cache blocks come from another request and some of them have been evicted by LRU, we end up putting blocks that still exist in the pool, which causes MooncakeStore to print unnecessary logs in the master service.

What this PR does:

  • Look up all the keys and put only the ones that are missing.
  • Fix lookup_scheduler in pool_worker so it handles GQA correctly.
  • Fix a few existing typos.
  • Add unit tests (written by Codex).
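The changed put path can be sketched as follows. This is a minimal illustration with a dict-backed stand-in for the store; the `FakeStore`, `is_exists`, and `put` names are illustrative, not the actual MooncakeStore API:

```python
class FakeStore:
    """Dict-backed stand-in for the KV pool (illustrative only)."""

    def __init__(self):
        self.data = {}

    def is_exists(self, key):
        return key in self.data

    def put(self, key, block):
        self.data[key] = block


def put_missing_blocks(store, keys, blocks):
    # Look up every key first, then put only the missing ones,
    # instead of putting everything after the first missing key.
    exists_states = [store.is_exists(key) for key in keys]
    for key, block, exists in zip(keys, blocks, exists_states):
        if not exists:
            store.put(key, block)
    return exists_states
```

With keys `["a", "b", "c"]` where only `"b"` is already stored, only `"a"` and `"c"` are written, so the pool never sees a redundant put for `"b"`.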

Does this PR introduce any user-facing change?

How was this patch tested?

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request improves the efficiency of the KV pool transfer mechanism by replacing a sequential block-check approach with a more precise lookup that identifies and transfers only missing blocks. This change prevents redundant data transfers and reduces unnecessary logging in the master service. Additionally, the PR includes minor refactoring to fix naming inconsistencies and adds unit tests to ensure the robustness of the new transfer logic.

Highlights

  • KV Transfer Logic Optimization: Updated the KV pool transfer logic to perform a granular lookup of all keys, ensuring only missing blocks are transferred instead of assuming a sequential range.
  • Codebase Cleanup: Corrected multiple instances of the typo 'LasyerMultiBlockReqMeta' to 'LayerMultiBlockReqMeta' throughout the codebase.
  • Testing: Added comprehensive unit tests to verify that the sending threads correctly identify and transfer only missing keys.


Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Co-authored-by: DreamerLeader <2270923832@qq.com>
Co-authored-by: fems14 <1804143737@qq.com>
@Pz1116 Pz1116 force-pushed the put_all_keys_v0_18_0 branch from f72bc7d to 4136ef1 on March 27, 2026 05:27

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

Suggested PR Title:

[Distributed][BugFix] Fix KV cache lookup and correct LayerMultiBlockReqMeta typo

Suggested PR Summary:

### What this PR does / why we need it?
This PR refactors the KV cache lookup mechanism to return a boolean existence list, allowing for the identification and storage of specific missing blocks instead of stopping at the first missing one. It also corrects a typo in the `LayerMultiBlockReqMeta` class name. Review feedback points out a potential `ImportError` if the class definition itself isn't updated and suggests optimizing list filtering logic using `zip` and `map` for better performance.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
New unit tests were added in `tests/ut/distributed/mooncake/test_kv_transfer.py` to verify the new lookup and storage logic.

```diff
 from vllm_ascend.distributed.kv_transfer.kv_pool.ascend_store.config_data import (
     ChunkedTokenDatabase,
-    LasyerMultiBlockReqMeta,
+    LayerMultiBlockReqMeta,
```

critical

This change corrects a typo in the import (LasyerMultiBlockReqMeta -> LayerMultiBlockReqMeta). However, the corresponding class definition in vllm_ascend/distributed/kv_transfer/kv_pool/ascend_store/config_data.py still has the typo (class LasyerMultiBlockReqMeta). This will lead to an ImportError. Please ensure you also correct the class name at its definition to LayerMultiBlockReqMeta in vllm_ascend/distributed/kv_transfer/kv_pool/ascend_store/config_data.py to fix this.
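One quick way to confirm the rename is complete is to grep the tree for the old spelling (the repo paths in the comment above are the real targets; the snippet below reproduces the check on a throwaway file so it is self-contained):

```shell
# Create a throwaway file that still contains the misspelled class name,
# then grep for it the same way you would check the real tree, e.g.:
#   grep -rn "LasyerMultiBlockReqMeta" vllm_ascend/ tests/
mkdir -p /tmp/typo_demo
printf 'class LasyerMultiBlockReqMeta:\n    pass\n' > /tmp/typo_demo/config_data.py
grep -rn "LasyerMultiBlockReqMeta" /tmp/typo_demo
```

An empty result (non-zero exit status) on the real tree means the old name is fully gone and the import fix cannot raise ImportError.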

Comment on lines +184 to +193
```diff
+missing_indices = [index for index, exists in enumerate(exists_states) if not exists]
+
-if skip_block_num == len(keys):
+if not missing_indices:
     self.dec_stored_request(req_id)
     return

-starts = starts[skip_block_num:]
-ends = ends[skip_block_num:]
-keys = keys[skip_block_num:]
+starts = [starts[index] for index in missing_indices]
+ends = [ends[index] for index in missing_indices]
+keys = [keys[index] for index in missing_indices]
+block_hashes = [block_hashes[index] for index in missing_indices]
```

high

The current implementation first creates a list of missing indices and then iterates over it four times to filter other lists. This can be optimized for performance and readability by creating the filtered lists in a single pass.

Suggested change

```diff
-missing_indices = [index for index, exists in enumerate(exists_states) if not exists]
-
-if not missing_indices:
-    self.dec_stored_request(req_id)
-    return
-
-starts = [starts[index] for index in missing_indices]
-ends = [ends[index] for index in missing_indices]
-keys = [keys[index] for index in missing_indices]
-block_hashes = [block_hashes[index] for index in missing_indices]
+missing_data = [
+    (starts[i], ends[i], keys[i], block_hashes[i])
+    for i, exists in enumerate(exists_states) if not exists
+]
+if not missing_data:
+    self.dec_stored_request(req_id)
+    return
+starts, ends, keys, block_hashes = map(list, zip(*missing_data))
```
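As a sanity check on the suggested pattern, here is a self-contained sketch using dummy data (the real code operates on block keys and hashes from the KV pool):

```python
starts = [0, 16, 32, 48]
ends = [16, 32, 48, 64]
keys = ["k0", "k1", "k2", "k3"]
block_hashes = ["h0", "h1", "h2", "h3"]
exists_states = [True, False, True, False]  # k1 and k3 are missing

# Single pass: gather a tuple per missing block, then unzip the tuples
# back into parallel lists.
missing_data = [
    (starts[i], ends[i], keys[i], block_hashes[i])
    for i, exists in enumerate(exists_states) if not exists
]
starts, ends, keys, block_hashes = map(list, zip(*missing_data))
print(keys)  # ['k1', 'k3']
```

The single comprehension replaces the four separate index-based passes while producing the same filtered lists.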

Comment on lines +339 to +348
```diff
+missing_indices = [index for index, exists in enumerate(exists_states) if not exists]
+
-if skip_block_num == len(key_list):
+if not missing_indices:
     if is_last_chunk and layer_id == self.final_layer_id:
         self.set_finished_request(req_meta.req_id)
     return

-starts = starts[skip_block_num:]
-ends = ends[skip_block_num:]
-key_list = key_list[skip_block_num:]
+starts = [starts[index] for index in missing_indices]
+ends = [ends[index] for index in missing_indices]
+key_list = [key_list[index] for index in missing_indices]
```

high

Similar to the other _handle_request method, this section reconstructs lists by iterating over missing_indices multiple times. This can be refactored for better performance and readability.

Suggested change

```diff
-missing_indices = [index for index, exists in enumerate(exists_states) if not exists]
-
-if not missing_indices:
-    if is_last_chunk and layer_id == self.final_layer_id:
-        self.set_finished_request(req_meta.req_id)
-    return
-
-starts = [starts[index] for index in missing_indices]
-ends = [ends[index] for index in missing_indices]
-key_list = [key_list[index] for index in missing_indices]
+missing_data = [
+    (starts[i], ends[i], key_list[i])
+    for i, exists in enumerate(exists_states) if not exists
+]
+if not missing_data:
+    if is_last_chunk and layer_id == self.final_layer_id:
+        self.set_finished_request(req_meta.req_id)
+    return
+starts, ends, key_list = map(list, zip(*missing_data))
```
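One detail worth keeping from both suggestions: the empty-list early return must stay before the unpacking, because `zip(*missing_data)` on an empty list yields nothing and the tuple unpacking would raise ValueError. A hedged sketch of the pattern (the `filter_missing` helper name is hypothetical, not from the PR):

```python
def filter_missing(starts, ends, key_list, exists_states):
    """Return only the entries whose block is missing, or None if all exist."""
    missing_data = [
        (starts[i], ends[i], key_list[i])
        for i, exists in enumerate(exists_states) if not exists
    ]
    if not missing_data:
        return None  # caller can mark the request as finished
    # Safe to unpack here: missing_data is guaranteed non-empty.
    return tuple(map(list, zip(*missing_data)))
```

For example, `filter_missing([0, 16], [16, 32], ["a", "b"], [True, False])` yields `([16], [32], ["b"])`, while an all-present state returns None instead of attempting the unpack.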

Pz1116 added 3 commits March 27, 2026 13:33
Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Co-authored-by: DreamerLeader <2270923832@qq.com>
Co-authored-by: fems14 <1804143737@qq.com>

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Co-authored-by: DreamerLeader <2270923832@qq.com>
Co-authored-by: fems14 <1804143737@qq.com>

Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>