# fix(qwen 35): fix qwen35 thd attention mask dtype to bool #3314
Open

li126com wants to merge 1 commit into NVIDIA-NeMo:main from
Conversation
Force-pushed from 075d530 to c63350a.
📝 Walkthrough

Attention mask dtype handling is updated in the Qwen3VL model's packed sequence preprocessing. The model now initializes attention masks with `torch.bool` dtype instead of `torch.int32`.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks: ✅ 4 passed
# 🐛 Bug Fix

## Fix attention_mask dtype causing silent data corruption

- Change attention_mask dtype from `torch.int32` to `torch.bool` in `Qwen3VLModel.forward()` at two call sites where the mask is created via `torch.ones_like`
- Add a defensive bool-cast guard in `preprocess_packed_seqs()` to ensure correct advanced indexing semantics (bool mask-select vs int fancy-index, which silently corrupts data when values are 0/1)

Signed-off-by: root <li126com2@126.com>
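The defensive cast described above can be sketched as follows. This is an illustrative assumption, not the actual NeMo code: the helper name `ensure_bool_mask` is hypothetical, standing in for the guard added inside `preprocess_packed_seqs()`.

```python
import torch

def ensure_bool_mask(attention_mask: torch.Tensor) -> torch.Tensor:
    """Hypothetical guard mirroring the PR's defensive cast: an int 0/1
    mask triggers fancy indexing instead of mask selection, so cast to
    bool before any advanced-indexing use."""
    if attention_mask.dtype != torch.bool:
        attention_mask = attention_mask.bool()
    return attention_mask

# Creating the mask as bool up front avoids the problem entirely, e.g.:
#   attention_mask = torch.ones_like(input_ids, dtype=torch.bool)
```

The guard is cheap (a no-op when the mask is already bool) and makes the function robust to callers that still pass integer masks.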
Force-pushed from c63350a to 3212ba6.
## What does this PR do?

fix(qwen3-vl): use bool dtype for attention_mask

🐛 Bug Fix: fix attention_mask dtype causing silent data corruption.

- Change attention_mask dtype from `torch.int32` to `torch.bool` in `Qwen3VLModel.forward()` at two call sites where the mask is created via `torch.ones_like`
- Add a defensive bool-cast guard in `preprocess_packed_seqs()` to ensure correct advanced indexing semantics (bool mask-select vs int fancy-index, which silently corrupts data when values are 0/1)
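A minimal sketch of why the dtype matters, using a small hypothetical tensor rather than the model's actual data: with an all-ones integer mask, PyTorch performs integer fancy-indexing (each `1` is read as a row index), whereas a bool mask performs the intended mask selection.

```python
import torch

x = torch.arange(12).reshape(4, 3)            # 4 "token rows"
int_mask = torch.ones(4, dtype=torch.int32)   # all ones, as torch.ones_like would produce
bool_mask = int_mask.bool()

# Int fancy-indexing: every 1 is interpreted as the row index 1, so the
# result is row 1 repeated 4 times -- silent corruption, no error raised.
corrupted = x[int_mask]

# Bool mask-select: keeps rows where the mask is True -- the intended behavior.
selected = x[bool_mask]

print(torch.equal(corrupted, x[1].expand(4, 3)))  # True: row 1 gathered 4 times
print(torch.equal(selected, x))                   # True: all rows kept
```

The shapes even match in this all-ones case, which is what makes the bug silent: nothing fails downstream, the data is just wrong.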
GitHub Actions CI
See the CI section in the Contributing doc for how to trigger the CI. An NVIDIA developer will need to approve and trigger the CI for external contributors.
Before your PR is "Ready for review"
Pre checks:
If you haven't finished some of the above items, you can still open a "Draft" PR.
Additional Information