[Bugfix][Platform] Fix GLM47 tool-call finish backfill#7710

Open
QwertyJack wants to merge 2 commits into vllm-project:releases/v0.18.0 from QwertyJack:pr/glm47-tool-call-parser-fix

Conversation

@QwertyJack
Contributor

What this PR does / why we need it?

This rebases the GLM47 tool-call parser fix onto releases/v0.18.0 after the MiniMax usage-accounting patch merged upstream on March 27, 2026.

It fixes OpenAI chat tool-call streaming for GLM47 by:

  • draining terminal parser chunks that contain both the final argument text and the closing </tool_call> suffix
  • computing finish backfill from the tool argument bytes actually emitted to the client, instead of trusting parser-internal buffered state
  • adding focused regression tests for finish backfill and terminal chunk handling
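
The backfill change described above can be sketched as follows. This is a hedged illustration of the idea, not vLLM's actual implementation; all names (`record_delta`, `finish_backfill`, `emitted_by_tool`) are hypothetical:

```python
# Hedged sketch of the backfill idea: the final chunk's arguments are
# derived from what was actually streamed to the client, not from
# parser-internal buffers. All names here are illustrative.

emitted_by_tool: dict[int, str] = {}  # per-tool argument text already sent

def record_delta(tool_index: int, delta_args: str) -> None:
    """Accumulate the argument bytes emitted to the client for one tool."""
    emitted_by_tool[tool_index] = emitted_by_tool.get(tool_index, "") + delta_args

def finish_backfill(full_arguments: str, emitted: str) -> str:
    """Return the argument suffix the final chunk must still carry."""
    if full_arguments.startswith(emitted):
        return full_arguments[len(emitted):]
    # Emitted text diverged from the final payload; resend everything
    # rather than produce malformed JSON in the closing chunk.
    return full_arguments
```

Computing the suffix from the emitted prefix guarantees that concatenating all streamed deltas with the final backfill reproduces the complete argument JSON, regardless of how the parser buffered internally.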

Does this PR introduce any user-facing change?

Yes. GLM47 OpenAI-compatible streaming tool-call responses now emit correct final chunks and argument payloads on releases/v0.18.0.

How was this patch tested?

  • pytest -q tests/ut/patch/platform/test_patch_glm_tool_call_parser.py tests/ut/patch/platform/test_patch_minimax_usage_accounting.py
  • python -m pre_commit run --files vllm_ascend/patch/platform/patch_glm_tool_call_parser.py tests/ut/patch/platform/test_patch_glm_tool_call_parser.py vllm_ascend/patch/platform/__init__.py vllm_ascend/patch/__init__.py

@QwertyJack QwertyJack requested a review from wangxiyuan as a code owner March 27, 2026 02:56
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a fix for GLM47 OpenAI-compatible streaming tool-call responses. By monkey-patching the tool-call parser and the chat completion stream generator, it ensures that final chunks are correctly drained and that the finish backfill logic accurately reflects the arguments sent to the client, preventing dropped string values or malformed final chunks.

Highlights

  • GLM47 Tool-Call Streaming Fix: Addressed issues where terminal parser chunks were not fully drained and finish backfill logic incorrectly relied on internal parser state instead of actual emitted bytes.
  • Improved Backfill Accuracy: Implemented tracking of per-tool arguments emitted to the client to ensure the final SSE chunk correctly completes the tool-call payload.
  • Regression Testing: Added comprehensive unit tests to verify terminal chunk handling and finish backfill behavior for GLM47 tool calls.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review (/gemini review): Performs a code review for the current pull request in its current state.
  • Pull Request Summary (/gemini summary): Provides a summary of the current pull request in its current state.
  • Comment (@gemini-code-assist): Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help (/gemini help): Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Code Review

Suggested PR Title:

[vllm-ascend][Ops][BugFix] Backport GLM tool-call parser and finish-suffix fixes

Suggested PR Summary:

### What this PR does / why we need it?
This pull request introduces monkey-patches for `OpenAIServingChat` and `Glm4MoeModelToolParser` to resolve critical bugs in GLM-4.7/4.5 tool-call streaming. It addresses issues where the parser could leave terminal chunks undrained and ensures the finish-backfill logic correctly tracks argument bytes emitted to the client. Feedback was provided regarding the fragility of using string replacements to patch the `chat_completion_stream_generator` method, suggesting a full method replacement for better maintainability.

### Does this PR introduce _any_ user-facing change?
Yes, it improves the reliability of tool-call streaming for GLM models, ensuring that final JSON arguments are correctly closed and emitted in the final SSE chunks.

### How was this patch tested?
The changes were verified with new unit tests in `tests/ut/patch/platform/test_patch_glm_tool_call_parser.py`, covering delta creation, argument tracking, and the patched streaming extraction logic.

@gemini-code-assist
Contributor

Warning

Gemini encountered an error creating the review. You can try again by commenting /gemini review.

@QwertyJack QwertyJack force-pushed the pr/glm47-tool-call-parser-fix branch from dfdda1a to 86c610d Compare March 27, 2026 08:23
@QwertyJack
Contributor Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request implements a monkey-patch for GLM-4.7 and GLM-4.5 tool-call streaming to resolve two primary issues: undrained terminal chunks in the Glm4MoeModelToolParser and incorrect finish-backfill logic in OpenAIServingChat that relied on internal parser state instead of emitted bytes. The patch introduces logic to track arguments actually sent to the client and ensures the parser drains the buffer completely during terminal state transitions. A new test suite in tests/ut/patch/platform/test_patch_glm_tool_call_parser.py validates these fixes across various JSON formatting scenarios. I have no feedback to provide as no review comments were submitted.

Suggested PR Title:

[Platform][BugFix] Backport GLM tool-call parser and finish-suffix fixes

Suggested PR Summary:

### What this PR does / why we need it?
This PR backports fixes for GLM-4.7 / GLM-4.5 tool-call streaming to address undrained terminal chunks and incorrect finish-backfill logic. These bugs could cause dropped values or malformed JSON in the final SSE chunk.

### Does this PR introduce _any_ user-facing change?
Yes, it improves the reliability of tool-call streaming for GLM models.

### How was this patch tested?
Tested with new unit tests covering delta creation, argument tracking, and suffix computation.
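
The terminal-chunk drain the review describes can be sketched like this. It is an illustrative simplification, not vLLM's actual Glm4MoeModelToolParser logic; `drain_terminal_chunk` is a hypothetical name:

```python
# Illustrative sketch (not vLLM's actual parser) of draining a terminal
# chunk that carries both the last argument text and the closing
# </tool_call> tag, so the trailing argument bytes are not dropped.

END_TAG = "</tool_call>"

def drain_terminal_chunk(buffer: str) -> tuple[str, bool]:
    """Split a streamed chunk into (argument_text, finished)."""
    idx = buffer.find(END_TAG)
    if idx == -1:
        return buffer, False  # ordinary mid-stream argument delta
    # Emit everything before the tag as the final argument delta,
    # then signal that this tool call has finished.
    return buffer[:idx], True
```

The bug class this guards against is a parser that, on seeing the end tag, transitions to the finished state without first flushing the argument text that arrived in the same chunk.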

@QwertyJack QwertyJack changed the title [v0.18.0][Bugfix][Platform] Fix GLM47 tool-call finish backfill [Bugfix][Platform] Fix GLM47 tool-call finish backfill Mar 27, 2026
Refresh the GLM47 tool-call parser backport on top of the latest releases/v0.18.0 branch.

- replace the chat stream patching-by-string-rewrite approach with a copied, self-contained chat_completion_stream_generator implementation
- keep the MiniMax usage-accounting integration inside the copied method instead of layering more source rewrites on top
- retain the updated GLM parser pending-delta merge fix and the expanded regression coverage from ~/tmp/glm5fc

Signed-off-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
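
The commit's switch from string-rewrite patching to a full method replacement can be illustrated with a minimal sketch. The class and method bodies below are hypothetical stand-ins, not vLLM's actual code:

```python
# Minimal sketch of the patching strategy the commit message describes:
# replace the whole method with a self-contained copy instead of
# rewriting its source as a string. Names are hypothetical.

class ServingChat:
    def chat_completion_stream_generator(self) -> str:
        return "original behavior"

def patched_chat_completion_stream_generator(self) -> str:
    # A full, self-contained reimplementation lives here. Unlike a
    # string rewrite of the original source, it cannot silently break
    # when upstream reformats the method it was matched against.
    return "patched behavior"

ServingChat.chat_completion_stream_generator = patched_chat_completion_stream_generator
```

The trade-off is that the copied method must be kept in sync with upstream changes by hand, but failures become loud (divergent behavior) rather than silent (a string replacement that no longer matches anything).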
@QwertyJack QwertyJack force-pushed the pr/glm47-tool-call-parser-fix branch from 86c610d to 5284c0b Compare March 27, 2026 08:37
Annotate the copied GLM chat streaming backport so the Python 3.10 mypy job used by CI no longer fails on raw_output_token_ids.

Also add a focused glm47 regression test that ensures terminal tool-call deltas survive exclude_unset serialization instead of collapsing to an empty object.

Signed-off-by: QwertyJack <7554089+QwertyJack@users.noreply.github.com>
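
The exclude_unset pitfall this regression test targets can be shown with a small Pydantic v2 example. The model shape here is deliberately simplified and is not vLLM's actual OpenAI schema:

```python
# Hedged illustration of the exclude_unset pitfall: a delta model whose
# field was never explicitly set serializes to an empty object, which
# would collapse a terminal tool-call delta. Simplified model shape,
# not vLLM's real schema.
from pydantic import BaseModel

class ToolCallDelta(BaseModel):
    arguments: str = ""

implicit = ToolCallDelta()              # default applied, field not "set"
explicit = ToolCallDelta(arguments="")  # explicitly set, even if empty

# With exclude_unset=True, only explicitly set fields survive.
assert implicit.model_dump(exclude_unset=True) == {}
assert explicit.model_dump(exclude_unset=True) == {"arguments": ""}
```

This is why a terminal delta must set its fields explicitly: otherwise exclude_unset serialization drops them and the client receives an empty final object.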
@yiz-liu yiz-liu added the ready (read for review) and ready-for-test (start test by label for PR) labels Mar 27, 2026
@yiz-liu yiz-liu added this to the v0.18.0rc1 milestone Mar 27, 2026