[Main][Bugfix][KV Pool] Fix KV transfer Put Logic #7717
Pz1116 wants to merge 4 commits into vllm-project:main
Conversation
Summary of Changes

This pull request optimizes the KV pool transfer mechanism by refining how blocks are identified and sent to the store. By shifting from a sequential prefix-based check to an explicit existence check for all keys, the system avoids redundant data transfers and unnecessary log noise. Additionally, the PR includes the refactoring needed for type consistency, plus improved test coverage to ensure robust behavior in the KV transfer pipeline.
Code Review
This pull request refactors the KV cache lookup mechanism in the Ascend store to return a boolean list for key existence, allowing for the filtering of non-contiguous missing blocks. It also fixes a typo in LayerMultiBlockReqMeta and introduces unit tests for the sending threads. Feedback was provided to ensure the keys list is filtered in KVCacheStoreLayerSendingThread to maintain accurate logging.
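The refactored flow can be sketched roughly as follows. This is an illustrative toy, not the actual vllm-ascend API: the `lookup` and `select_missing` helpers, the plain `set` standing in for the store, and the key names are all assumptions; only the idea of returning a boolean per key and filtering by it mirrors the PR.

```python
from typing import List, Set


def lookup(keys: List[str], store: Set[str]) -> List[bool]:
    """Return, for each key, whether it already exists in the store."""
    return [key in store for key in keys]


def select_missing(exists: List[bool]) -> List[int]:
    """Indices of keys that are absent and therefore still need a Put."""
    return [i for i, found in enumerate(exists) if not found]


store = {"blk0", "blk2"}  # blk1 and blk3 are not in the pool
keys = ["blk0", "blk1", "blk2", "blk3"]

exists = lookup(keys, store)
missing = select_missing(exists)
print(exists)   # [True, False, True, False]
print(missing)  # [1, 3] -- non-contiguous, which a prefix scan cannot express
```

Because the result is a per-key boolean list rather than a single "first missing" index, the sending threads can skip any already-present block, not just a contiguous prefix.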
Suggested PR Title:
[vllm-ascend][Distributed][BugFix] Refactor KV cache lookup to support non-contiguous missing blocks

Suggested PR Summary:
### What this PR does / why we need it?
This PR refactors the KV cache lookup mechanism in the Ascend store. It changes the `lookup` method to return a list of booleans indicating the existence of each key, allowing the sending threads to filter and transmit only the specific blocks missing from the cache. This replaces the previous logic which only supported skipping a prefix of blocks. Additionally, it fixes a typo in `LayerMultiBlockReqMeta` and adds unit tests to verify the new filtering logic.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
New unit tests were added in `tests/ut/distributed/mooncake/test_kv_transfer.py` to verify that both `KVCacheStoreSendingThread` and `KVCacheStoreLayerSendingThread` correctly identify and process only missing keys.

```python
starts = [starts[index] for index in missing_indices]
ends = [ends[index] for index in missing_indices]
key_list = [key_list[index] for index in missing_indices]
```
For consistency with KVCacheStoreSendingThread._handle_request and to ensure logging is accurate, the keys list should also be filtered to only include the missing keys. Currently, len(keys) in the subsequent log message refers to the number of keys before filtering, which is misleading as it doesn't reflect the actual number of blocks being stored.
```diff
 starts = [starts[index] for index in missing_indices]
 ends = [ends[index] for index in missing_indices]
 key_list = [key_list[index] for index in missing_indices]
+keys = [keys[index] for index in missing_indices]
```
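The invariant the reviewer is asking for can be written as a small unit-test-style sketch: every parallel list must be filtered by the same indices so that the later `len(keys)` log line matches the blocks actually stored. The helper name, list contents, and indices below are made up for illustration; they are not the real thread code.

```python
def filter_by_missing(starts, ends, key_list, keys, missing_indices):
    # Apply one index filter to all four parallel lists, so that
    # logging len(keys) afterwards reflects the blocks actually put.
    starts = [starts[i] for i in missing_indices]
    ends = [ends[i] for i in missing_indices]
    key_list = [key_list[i] for i in missing_indices]
    keys = [keys[i] for i in missing_indices]
    return starts, ends, key_list, keys


starts, ends = [0, 8, 16], [8, 16, 24]
key_list, keys = ["k0", "k1", "k2"], ["h0", "h1", "h2"]
missing = [0, 2]  # block 1 already exists in the store

starts, ends, key_list, keys = filter_by_missing(starts, ends, key_list,
                                                 keys, missing)
assert len(keys) == len(key_list) == len(starts) == len(ends) == 2
print(keys)  # ['h0', 'h2']
```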
👋 Hi! Thank you for contributing to the vLLM Ascend project. The following points will speed up your PR merge:

If CI fails, you can run linting and testing checks locally according to Contributing and Testing.
Signed-off-by: Pz1116 <zpbzpb123123@gmail.com>
Co-authored-by: DreamerLeader <2270923832@qq.com>
Co-authored-by: fems14 <1804143737@qq.com>
please take a look @LCAIZJ @fems14 @baxingpiaochong
What this PR does / why we need it?
Before, when we did a Put for the KV Pool, we found the first non-existing key and put all the blocks starting from that index. However, if the prefix cache blocks come from another request and some of those blocks have been evicted due to LRU, we end up putting blocks that still exist in the pool, which causes MooncakeStore to print unnecessary logs in the master service.
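The difference between the old and new selection logic can be shown with a toy sketch. The store contents and key names are made up, and the two helper functions are illustrative stand-ins; only the selection strategies mirror the PR.

```python
def prefix_put_candidates(keys, store):
    """Old logic: find the first missing key, then put everything after it."""
    for i, key in enumerate(keys):
        if key not in store:
            return keys[i:]
    return []


def existence_put_candidates(keys, store):
    """New logic: put exactly the keys that are missing, wherever they are."""
    return [key for key in keys if key not in store]


# blk1 was evicted by LRU; blk2 and blk3 still live in the pool.
store = {"blk0", "blk2", "blk3"}
keys = ["blk0", "blk1", "blk2", "blk3"]

print(prefix_put_candidates(keys, store))     # ['blk1', 'blk2', 'blk3']
print(existence_put_candidates(keys, store))  # ['blk1']
```

With the old prefix scan, `blk2` and `blk3` are re-sent even though they still exist in the pool, which is exactly what triggers the redundant transfers and the noisy MooncakeStore logs; the per-key existence check sends only `blk1`.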
What this PR does:
Fix `lookup_scheduler` in `pool_worker` so it handles GQA correctly.

Does this PR introduce any user-facing change?
How was this patch tested?