[Enhancement] Support RunAI Model Streamer for diffusion weight loading#2199

Open
NickCao wants to merge 1 commit into vllm-project:main from NickCao:runai-streamer

Conversation


@NickCao NickCao commented Mar 25, 2026

Purpose

Add enable_runai_streamer flag to OmniDiffusionConfig so diffusion models can use the runai_model_streamer library for streaming safetensors weights, matching the support already available in the LLM weight loading path.
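As a rough illustration of the change, here is a minimal sketch of how such a flag might gate the weight-loading path. The `OmniDiffusionConfig` shape and the `select_weight_loader` helper are assumptions for illustration only; the flag name `enable_runai_streamer` is the only part taken from this PR.

```python
# Hedged sketch: hypothetical wiring of the new flag, not the actual
# vllm-omni implementation.
from dataclasses import dataclass


@dataclass
class OmniDiffusionConfig:
    model: str = ""
    # New flag added by this PR (default off, preserving existing behavior).
    enable_runai_streamer: bool = False


def select_weight_loader(config: OmniDiffusionConfig) -> str:
    """Pick the safetensors loading strategy based on the config flag."""
    if config.enable_runai_streamer:
        # Stream tensors via the runai_model_streamer library, mirroring
        # the support already present in the LLM weight-loading path.
        return "runai_streamer"
    # Default: plain safetensors loading from disk.
    return "safetensors"


config = OmniDiffusionConfig(
    model="Tongyi-MAI/Z-Image-Turbo", enable_runai_streamer=True
)
print(select_weight_loader(config))  # -> runai_streamer
```

In practice the flag would be surfaced as the `--enable-runai-streamer` CLI option exercised in the test plan below.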

Test Plan

vllm-omni serve --omni Tongyi-MAI/Z-Image-Turbo --port 8000
vllm-omni serve --omni Tongyi-MAI/Z-Image-Turbo --port 8000 --enable-runai-streamer

Test Result

Both commands serve the model successfully; enabling the streamer reduces model load time from 55.68s to 39.45s.


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan. Please provide the test scripts and test commands, or state the reasons if your change doesn't require additional test scripts. For test file guidelines, please check the test style doc.
  • The test results. Please paste the results comparison before and after, or the e2e results.
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model. Please run mkdocs serve to sync the documentation editions to ./docs.
  • (Optional) Release notes update. If your change is user-facing, please update the release notes draft.


@NickCao NickCao force-pushed the runai-streamer branch 2 times, most recently from 4e245db to ede67f7 on March 25, 2026 19:30
@NickCao NickCao marked this pull request as ready for review March 25, 2026 19:51
@NickCao NickCao requested a review from hsliuustc0106 as a code owner March 25, 2026 19:51

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ede67f76da

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review"

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

Add enable_runai_streamer flag to OmniDiffusionConfig so diffusion
models can use the runai_model_streamer library for streaming
safetensors weights, matching the support already available in
the LLM weight loading path.

Co-authored-by: Claude <noreply@anthropic.com>
Signed-off-by: Nick Cao <ncao@redhat.com>
