[Bugfix][Platform] Fix GLM47 tool-call finish backfill #7710
QwertyJack wants to merge 2 commits into vllm-project:releases/v0.18.0
Conversation
Summary of Changes (Gemini Code Assist): This pull request introduces a fix for GLM47 OpenAI-compatible streaming tool-call responses. By monkey-patching the tool-call parser and the chat completion stream generator, it ensures that final chunks are correctly drained and that the finish-backfill logic accurately reflects the arguments sent to the client, preventing dropped string values or malformed final chunks.
Code Review
Suggested PR Title:
[vllm-ascend][Ops][BugFix] Backport GLM tool-call parser and finish-suffix fixes

Suggested PR Summary:
### What this PR does / why we need it?
This pull request introduces monkey-patches for `OpenAIServingChat` and `Glm4MoeModelToolParser` to resolve critical bugs in GLM-4.7/4.5 tool-call streaming. It addresses issues where the parser could leave terminal chunks undrained and ensures the finish-backfill logic correctly tracks argument bytes emitted to the client. Feedback was provided regarding the fragility of using string replacements to patch the `chat_completion_stream_generator` method, suggesting a full method replacement for better maintainability.
### Does this PR introduce _any_ user-facing change?
Yes, it improves the reliability of tool-call streaming for GLM models, ensuring that final JSON arguments are correctly closed and emitted in the final SSE chunks.
### How was this patch tested?
The changes were verified with new unit tests in `tests/ut/patch/platform/test_patch_glm_tool_call_parser.py`, covering delta creation, argument tracking, and the patched streaming extraction logic.
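The finish-backfill idea described above, tracking the argument bytes actually emitted to the client rather than internal parser state, can be sketched as follows. This is a hypothetical illustration; `finish_suffix`, `full_args`, and `streamed_args` are illustrative names, not the actual vllm identifiers.

```python
def finish_suffix(full_args: str, streamed_args: str) -> str:
    """Return the argument bytes still owed to the client at finish time.

    If the backfill were computed from internal parser state instead of the
    bytes actually streamed, a lagging stream could drop the tail of a string
    value or emit malformed JSON in the final chunk.
    """
    if not full_args.startswith(streamed_args):
        # Defensive fallback: resend everything if the streams diverged.
        return full_args
    return full_args[len(streamed_args):]

# The client has seen only a prefix of the arguments so far; the final
# chunk must carry exactly the remainder.
assert finish_suffix('{"city": "Paris"}', '{"city": "Par') == 'is"}'
# Nothing is owed when the full arguments were already streamed.
assert finish_suffix('{"city": "Paris"}', '{"city": "Paris"}') == ""
```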
QwertyJack force-pushed the branch from dfdda1a to 86c610d.
/gemini review
Code Review
This pull request implements a monkey-patch for GLM-4.7 and GLM-4.5 tool-call streaming to resolve two primary issues: undrained terminal chunks in the Glm4MoeModelToolParser and incorrect finish-backfill logic in OpenAIServingChat that relied on internal parser state instead of emitted bytes. The patch introduces logic to track arguments actually sent to the client and ensures the parser drains the buffer completely during terminal state transitions. A new test suite in tests/ut/patch/platform/test_patch_glm_tool_call_parser.py validates these fixes across various JSON formatting scenarios. I have no feedback to provide as no review comments were submitted.

Suggested PR Title:
[Platform][BugFix] Backport GLM tool-call parser and finish-suffix fixes

Suggested PR Summary:
### What this PR does / why we need it?
This PR backports fixes for GLM-4.7 / GLM-4.5 tool-call streaming to address undrained terminal chunks and incorrect finish-backfill logic. These bugs could cause dropped values or malformed JSON in the final SSE chunk.
### Does this PR introduce _any_ user-facing change?
Yes, it improves the reliability of tool-call streaming for GLM models.
### How was this patch tested?
Tested with new unit tests covering delta creation, argument tracking, and suffix computation.
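The "drain the buffer completely during terminal state transitions" fix mentioned in the review can be sketched with a toy streaming parser. This is a minimal illustration under assumed behavior, not the actual Glm4MoeModelToolParser: when the closing `</tool_call>` tag arrives, any text still buffered before it must be flushed as a final delta instead of being discarded.

```python
END_TAG = "</tool_call>"

def stream_tool_args(chunks):
    """Yield argument deltas from streamed chunks, draining on END_TAG.

    Holds back any suffix of the buffer that could be the start of a split
    END_TAG, and flushes everything before the tag once it fully appears.
    """
    buf = ""
    for chunk in chunks:
        buf += chunk
        if END_TAG in buf:
            tail, _, _ = buf.partition(END_TAG)
            if tail:  # drain the remaining argument bytes before the tag
                yield tail
            return
        # Emit everything except a potential partial END_TAG prefix.
        safe = len(buf)
        for k in range(1, len(END_TAG)):
            if buf.endswith(END_TAG[:k]):
                safe = len(buf) - k
        if safe:
            yield buf[:safe]
            buf = buf[safe:]

# The terminal tag arrives split across chunks; the '}' buffered before it
# must still reach the client.
deltas = list(stream_tool_args(['{"a": 1', '}</tool', '_call>']))
assert "".join(deltas) == '{"a": 1}'
```

Without the drain step, the bytes buffered alongside a partial closing tag (here the `}`) would be silently dropped, which is exactly the malformed-final-chunk symptom the PR describes.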
Refresh the GLM47 tool-call parser backport on top of the latest releases/v0.18.0 branch.

- replace the chat stream patching-by-string-rewrite approach with a copied, self-contained chat_completion_stream_generator implementation
- keep the MiniMax usage-accounting integration inside the copied method instead of layering more source rewrites on top
- retain the updated GLM parser pending-delta merge fix and the expanded regression coverage from ~/tmp/glm5fc

Signed-off-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
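The patching approach this commit switches to, replacing a method wholesale with a self-contained copy rather than rewriting upstream source as strings, follows a standard monkey-patch pattern. A minimal sketch (class and method bodies here are placeholders, not the actual vllm OpenAIServingChat code):

```python
import asyncio

class ServingChat:
    """Stand-in for the upstream serving class being patched."""
    async def chat_completion_stream_generator(self, request):
        yield "upstream behavior"

async def patched_stream_generator(self, request):
    # Self-contained copy of the upstream method with the fix applied.
    # Unlike string-rewrite patching, this does not break when upstream
    # reformats the source it was matching against.
    yield "patched behavior"

# Apply the monkey-patch once, at import time of the patch module.
ServingChat.chat_completion_stream_generator = patched_stream_generator

async def _drain(gen):
    return [item async for item in gen]

chunks = asyncio.run(_drain(ServingChat().chat_completion_stream_generator(None)))
assert chunks == ["patched behavior"]
```

The trade-off the commit message alludes to: a copied method must be kept in sync with upstream manually, but it fails loudly and reviewably instead of silently mis-patching when the upstream text drifts.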
QwertyJack force-pushed the branch from 86c610d to 5284c0b.
Annotate the copied GLM chat streaming backport so the Python 3.10 mypy job used by CI no longer fails on raw_output_token_ids. Also add a focused glm47 regression test that ensures terminal tool-call deltas survive exclude_unset serialization instead of collapsing to an empty object. Signed-off-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
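The exclude_unset regression this commit tests for can be illustrated with a stdlib-only stand-in for that serialization mode (the real code path goes through pydantic's exclude_unset; `Delta` and its method here are hypothetical):

```python
_UNSET = object()  # sentinel distinguishing "never set" from an explicit value

class Delta:
    """Toy delta model mimicking exclude_unset-style serialization."""
    def __init__(self, arguments=_UNSET):
        self.arguments = arguments

    def dump_exclude_unset(self):
        # Fields that were never set are omitted entirely, just as
        # exclude_unset serialization omits unset model fields.
        if self.arguments is _UNSET:
            return {}
        return {"arguments": self.arguments}

# The regression: a terminal tool-call delta built without explicitly
# setting its field collapses to an empty object on the wire.
assert Delta().dump_exclude_unset() == {}

# The guarded behavior: explicitly setting the remaining suffix, even a
# short one, keeps the field in the serialized final chunk.
assert Delta(arguments='"}').dump_exclude_unset() == {"arguments": '"}'}
```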
What this PR does / why we need it?
This rebases the GLM47 tool-call parser fix onto releases/v0.18.0 after the MiniMax usage-accounting patch merged upstream on March 27, 2026.

It fixes OpenAI chat tool-call streaming for GLM47 by:
- draining terminal parser chunks completely so argument bytes are not dropped
- computing the finish backfill from the arguments actually emitted to the client, including the closing </tool_call> suffix

Does this PR introduce any user-facing change?
Yes. GLM47 OpenAI-compatible streaming tool-call responses now emit correct final chunks and argument payloads on releases/v0.18.0.

How was this patch tested?
pytest -q tests/ut/patch/platform/test_patch_glm_tool_call_parser.py tests/ut/patch/platform/test_patch_minimax_usage_accounting.py
python -m pre_commit run --files vllm_ascend/patch/platform/patch_glm_tool_call_parser.py tests/ut/patch/platform/test_patch_glm_tool_call_parser.py vllm_ascend/patch/platform/__init__.py vllm_ascend/patch/__init__.py