[v0.18.0][Refactor] Use forward mapping instead of reverse mapping in AscendMo…#7716

Merged
yiz-liu merged 1 commit into vllm-project:releases/v0.18.0 from Feng-xiaosuo:new_branch on Mar 27, 2026

Conversation

@Feng-xiaosuo
Contributor

…delSlimConfig (#7596)

What this PR does / why we need it?

This PR refactors the `AscendModelSlimConfig` class to use **forward mapping** instead of reverse mapping for quantization config key transformation.

**Changes:**

1. Modified `apply_vllm_mapper()` to apply `hf_to_vllm_mapper.apply_dict()` directly, transforming `quant_description` keys from HF format to vLLM format
2. Simplified `quant_prefix_mapper()` to return the prefix unchanged (no mapping is needed since keys are already in vLLM format)
3. Removed the `QUANT_MODEL_PREFIX_MAPPINGS` dictionary (~50 lines), no longer needed
4. Removed the `get_prefix_mapping()` function, no longer needed
5. Removed the `vllm_to_hf_mapper` attribute, no longer needed
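The forward transformation in step 1 can be sketched as follows. This is a minimal, hypothetical stand-in for vLLM's `WeightsMapper.apply_dict()`, not the actual implementation; only the prefix-rewrite behavior relevant to this PR is shown.

```python
def apply_dict(mapping: dict[str, str], quant_description: dict[str, str]) -> dict[str, str]:
    """Rewrite each key's prefix using the first matching HF -> vLLM rule.

    Simplified sketch: the real vLLM WeightsMapper supports more rule
    kinds than plain prefixes.
    """
    out = {}
    for key, value in quant_description.items():
        for hf_prefix, vllm_prefix in mapping.items():
            if key.startswith(hf_prefix):
                key = vllm_prefix + key[len(hf_prefix):]
                break
        out[key] = value
    return out

# Hypothetical HF -> vLLM prefix rule and quant description entry:
hf_to_vllm = {"model.decoder.": "model.layers."}
desc = {"model.decoder.0.self_attn.weight": "W8A8"}
print(apply_dict(hf_to_vllm, desc))
# {'model.layers.0.self_attn.weight': 'W8A8'}
```

Applying the mapping once, at config-load time, means every later prefix lookup already sees vLLM-format keys.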

**Why this change is needed:**

The previous implementation used reverse mapping (vLLM → HF), which had several issues:

- Some mapping entries are only meaningful in the forward direction, but the inverted mapping would apply them anyway
- Empty values in the forward mapping break when inverted (an empty value becomes an empty key)
- It required maintaining a separate `QUANT_MODEL_PREFIX_MAPPINGS` dict that duplicated information already available in vLLM's model-specific `WeightsMapper`
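The first two pitfalls fall directly out of how dict inversion works. A small illustrative example (the prefixes here are made up, not the actual mapping entries):

```python
# Hypothetical forward (HF -> vLLM) prefix mapping:
forward = {
    "transformer.": "model.",  # a real rename
    "backbone.": "model.",     # a second rule mapping to the same vLLM prefix
    "model.embed.": "",        # a prefix that is dropped in the forward direction
}

# Naive inversion, as the old reverse-mapping approach effectively did:
reverse = {v: k for k, v in forward.items()}

# Two problems are now visible:
# 1. "transformer." and "backbone." both invert to the key "model.",
#    so one rule silently overwrites the other.
# 2. The empty forward value becomes an empty key, which as a prefix
#    matches every weight name.
print(reverse)
# {'model.': 'backbone.', '': 'model.embed.'}
```

Using the forward mapping directly sidesteps both failure modes because nothing is ever inverted.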

The new approach:

- Uses the forward mapping (HF → vLLM) directly from vLLM's `WeightsMapper`
- Eliminates the need for duplicate mapping definitions
- Avoids the pitfalls of reverse mapping (unused keys, empty values)
- Aligns with how `compressed_tensors_config.py` handles the same scenario

- vLLM version: v0.18.0
- vLLM main: vllm-project/vllm@ed359c4

Signed-off-by: Matrix_K <zhangke144@huawei.com>
Signed-off-by: Feng-xiaosuo <tengchang1@huawei.com>
Co-authored-by: Matrix_K <zhangke144@huawei.com>
Co-authored-by: Wang Kunpeng <1289706727@qq.com>
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request refactors the AscendModelSlimConfig class to improve how quantization configuration keys are transformed. By adopting a forward mapping approach consistent with vLLM's internal mechanisms, the changes eliminate the need for manual, error-prone reverse mapping logic and reduce code duplication, leading to a more maintainable and robust configuration handling system.

Highlights

- **Refactored Quantization Mapping:** Transitioned from reverse mapping to forward mapping (HF to vLLM) for quantization configuration keys, simplifying the transformation process.
- **Code Cleanup:** Removed the redundant `QUANT_MODEL_PREFIX_MAPPINGS` dictionary, the `get_prefix_mapping()` function, and the `vllm_to_hf_mapper` attribute, reducing technical debt.
- **Simplified Logic:** Updated `apply_vllm_mapper()` to use `hf_to_vllm_mapper.apply_dict()` directly and streamlined `quant_prefix_mapper()` to return the prefix unchanged.




@gemini-code-assist gemini-code-assist bot left a comment


Code Review

Suggested PR Title:

[Ops][Misc] Refactor ModelSlim quantization mapping to apply HF-to-vLLM transformation directly

Suggested PR Summary:

### What this PR does / why we need it?
This PR refactors the `AscendModelSlimConfig` to simplify weight prefix mapping by removing the static `QUANT_MODEL_PREFIX_MAPPINGS` and reverse mapping logic. It now transforms the `quant_description` keys from HF to vLLM format directly when the mapper is applied. A critical bug was identified in `apply_vllm_mapper` where multiple calls with different mappers could lead to state corruption, and a suggestion was provided to ensure the method is idempotent.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
CI passed with existing tests.

Comment on lines +421 to 422:

```python
if self._mapper_applied and self.hf_to_vllm_mapper is hf_to_vllm_mapper:
    return
```

**critical**

The current implementation has a potential state corruption bug. If apply_vllm_mapper is called a second time with a different hf_to_vllm_mapper instance, it will attempt to apply the new mapping to self.quant_description, which has already been transformed by the first mapper. This will lead to an incorrect quantization configuration.

The method should be made idempotent to prevent this. A safe approach is to prevent any re-application once a mapper has been applied, and log a warning if a different mapper is provided on a subsequent call.

Suggested change:

```diff
-if self._mapper_applied and self.hf_to_vllm_mapper is hf_to_vllm_mapper:
-    return
+if self._mapper_applied:
+    if self.hf_to_vllm_mapper is not hf_to_vllm_mapper:
+        logger.warning(
+            "Attempted to apply a different vLLM mapper. This is not "
+            "supported and will be ignored.")
+    return
```
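To see why the suggested guard makes the method idempotent, here is a runnable sketch. `SlimConfigSketch` and `FakeMapper` are hypothetical stand-ins for illustration, not the actual vllm-ascend classes; only the guard logic mirrors the suggestion above.

```python
import logging

logger = logging.getLogger(__name__)


class SlimConfigSketch:
    """Illustrative stand-in for the guarded config (names hypothetical)."""

    def __init__(self, quant_description):
        self.quant_description = quant_description
        self.hf_to_vllm_mapper = None
        self._mapper_applied = False

    def apply_vllm_mapper(self, hf_to_vllm_mapper):
        # Idempotent: once a mapper has transformed quant_description,
        # never apply another one on top of the already-mapped keys.
        if self._mapper_applied:
            if self.hf_to_vllm_mapper is not hf_to_vllm_mapper:
                logger.warning("Attempted to apply a different vLLM mapper; "
                               "this is not supported and will be ignored.")
            return
        self.quant_description = hf_to_vllm_mapper.apply_dict(self.quant_description)
        self.hf_to_vllm_mapper = hf_to_vllm_mapper
        self._mapper_applied = True


class FakeMapper:
    """Toy mapper exposing the apply_dict interface the config expects."""

    def apply_dict(self, d):
        return {"model." + k: v for k, v in d.items()}


cfg = SlimConfigSketch({"layer.0.weight": "W8A8"})
mapper = FakeMapper()
cfg.apply_vllm_mapper(mapper)
cfg.apply_vllm_mapper(mapper)        # no-op: same mapper instance
cfg.apply_vllm_mapper(FakeMapper())  # different mapper: warned and ignored
print(cfg.quant_description)
# {'model.layer.0.weight': 'W8A8'}
```

Without the guard, the third call would prepend `model.` a second time, corrupting every key.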

@MengqingCao MengqingCao added this to the v0.18.0rc1 milestone Mar 27, 2026
@MengqingCao added the `ready` (read for review) and `ready-for-test` (start test by label for PR) labels on Mar 27, 2026
@yiz-liu yiz-liu merged commit 60e88d9 into vllm-project:releases/v0.18.0 Mar 27, 2026
26 checks passed