# [model] support Qwen3next #1184

Conversation
This PR adds support for the Moore Threads (MUSA) GPU platform, expanding LightLLM's hardware compatibility.

**Notes:**
1. `_fwd_kernel_token_att1` has been slightly updated to ensure compatibility with the installed Triton version (see the version-gating sketch below).
2. `has_mtlink` will be used in upcoming enhancements to enable multi-GPU support.
3. `torch` / `torch_musa` need to be upgraded to their latest versions.
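Note 1 above concerns keeping the kernel source working across Triton releases. A minimal, purely illustrative sketch of version gating (the flag name is hypothetical, not from this PR):

```python
import triton

# Parse the installed Triton version once at import time.
_TRITON_VERSION = tuple(int(p) for p in triton.__version__.split(".")[:2])

# Hypothetical example: if a newer Triton changed a kernel-side API,
# the wrapper can select the compatible code path up front.
USE_NEW_KERNEL_SIGNATURE = _TRITON_VERSION >= (3, 0)

print(f"Triton {triton.__version__}: new signature = {USE_NEW_KERNEL_SIGNATURE}")
```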
### Testing Done
```bash
root@worker3218:/ws# python -m lightllm.server.api_server --model_dir /home/dist/Qwen3-0.6B/ --disable_cudagraph --host 0.0.0.0
WARNING 01-02 12:22:47 [sgl_utils.py:29] sgl_kernel is not installed, or the installed version did not support fa3. Try to upgrade it.
WARNING 01-02 12:22:47 [light_utils.py:13] lightllm_kernel is not installed, you can't use the api of it.
INFO 01-02 12:22:48 [__init__.py:36] Available plugins for group vllm.platform_plugins:
INFO 01-02 12:22:48 [__init__.py:38] - musa -> vllm_musa:register
INFO 01-02 12:22:48 [__init__.py:41] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
INFO 01-02 12:22:48 [__init__.py:232] Platform plugin musa is activated
WARNING 01-02 12:22:48 [vllm_utils.py:18] vllm is not installed, you can't use the api of it. You can solve it by running `pip install vllm`.
INFO 01-02 12:22:48 [communication_op.py:57] deep_ep is not installed, you can't use the api of it.
INFO 01-02 12:22:48 [cache_tensor_manager.py:17] USE_GPU_TENSOR_CACHE is On
WARNING 01-02 12:22:48 [grouped_fused_moe_ep.py:28] no deepep or deep_gemm
WARNING 01-02 12:22:48 [nixl_kv_transporter.py:19] nixl is not installed, which is required for pd disagreggation!!!
INFO 01-02 12:22:48 [shm_size_check.py:21] SHM check: Available=500.00 GB,Recommended=2.32 GB.Sufficient: True
INFO 01-02 12:22:48 [api_start.py:94] zmq mode head: ipc:///tmp/_28765_0_
INFO 01-02 12:22:48 [api_start.py:96] use tgi api: False
INFO 01-02 12:22:48 [api_start.py:233] alloced ports: [10105, 10128, 10009, 10002, 10268, 10173, 10255, 10190, 10225, 10305]
INFO 01-02 12:22:48 [api_start.py:284] all start args:Namespace(run_mode='normal', host='0.0.0.0', port=8000, httpserver_workers=1, zmq_mode='ipc:///tmp/_28765_0_', pd_master_ip='0.0.0.0', pd_master_port=1212, pd_decode_rpyc_port=42000, select_p_d_node_strategy='round_robin', config_server_host=None, config_server_port=None, nixl_pd_kv_page_num=16, nixl_pd_kv_page_size=1024, model_name='default_model_name', model_dir='/home/dist/Qwen3-0.6B/', tokenizer_mode='fast', load_way='HF', max_total_token_num=None, mem_fraction=0.9, batch_max_tokens=8448, eos_id=[151645], tool_call_parser=None, reasoning_parser=None, chat_template=None, running_max_req_size=1000, nnodes=1, node_rank=0, multinode_httpmanager_port=12345, multinode_router_gloo_port=20001, tp=1, dp=1, dp_balancer='bs_balancer', max_req_total_len=16384, nccl_host='127.0.0.1', nccl_port=28765, use_config_server_to_init_nccl=False, mode=[], trust_remote_code=False, disable_log_stats=False, log_stats_interval=10, disable_shm_warning=False, router_token_ratio=0.0, router_max_new_token_len=1024, router_max_wait_tokens=1, disable_aggressive_schedule=False, use_dynamic_prompt_cache=False, disable_dynamic_prompt_cache=False, chunked_prefill_size=4096, disable_chunked_prefill=False, diverse_mode=False, token_healing_mode=False, output_constraint_mode='none', first_token_constraint_mode=False, enable_multimodal=False, enable_multimodal_audio=False, enable_mps=False, disable_custom_allreduce=False, enable_custom_allgather=False, enable_tpsp_mix_mode=False, enable_dp_prefill_balance=False, enable_prefill_microbatch_overlap=False, enable_decode_microbatch_overlap=False, enable_flashinfer_prefill=False, enable_flashinfer_decode=False, enable_fa3=False, cache_capacity=200, embed_cache_storage_size=4, data_type='bfloat16', return_all_prompt_logprobs=False, use_reward_model=False, long_truncation_mode=None, use_tgi_api=False, health_monitor=False, metric_gateway=None, job_name='lightllm', grouping_key=[], push_interval=10, visual_infer_batch_size=1, visual_send_batch_size=1, visual_gpu_ids=[0], visual_tp=1, visual_dp=1, visual_nccl_ports=[29500], enable_monitor_auth=False, disable_cudagraph=True, enable_prefill_cudagraph=False, prefll_cudagraph_max_handle_token=512, graph_max_batch_size=256, graph_split_batch_size=32, graph_grow_step_size=16, graph_max_len_in_batch=16384, quant_type='none', quant_cfg=None, vit_quant_type='none', vit_quant_cfg=None, sampling_backend='triton', penalty_counter_mode='gpu_counter', ep_redundancy_expert_config_path=None, auto_update_redundancy_expert=False, enable_fused_shared_experts=False, mtp_mode=None, mtp_draft_model_dir=None, mtp_step=0, kv_quant_calibration_config_path=None, schedule_time_interval=0.03, enable_cpu_cache=False, cpu_cache_storage_size=2, cpu_cache_token_page_size=256, enable_disk_cache=False, disk_cache_storage_size=10, disk_cache_dir=None, enable_dp_prompt_cache_fetch=False, router_port=10105, detokenization_port=10128, http_server_port=10009, visual_port=10002, audio_port=10268, cache_port=10173, metric_port=10255, multi_level_kv_cache_port=10190, pd_node_infer_rpyc_ports=[10305], pd_node_id=294623010895931863621527973304373176200, pd_p_allowed_port_min=20000, pd_p_allowed_port_max=30000)
WARNING 01-02 12:22:55 [sgl_utils.py:29] sgl_kernel is not installed, or the installed version did not support fa3. Try to upgrade it.
WARNING 01-02 12:22:55 [light_utils.py:13] lightllm_kernel is not installed, you can't use the api of it.
INFO 01-02 12:22:55 [__init__.py:36] Available plugins for group vllm.platform_plugins:
INFO 01-02 12:22:55 [__init__.py:38] - musa -> vllm_musa:register
INFO 01-02 12:22:55 [__init__.py:41] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
INFO 01-02 12:22:55 [__init__.py:232] Platform plugin musa is activated
WARNING 01-02 12:22:55 [vllm_utils.py:18] vllm is not installed, you can't use the api of it. You can solve it by running `pip install vllm`.
INFO 01-02 12:22:55 [communication_op.py:57] deep_ep is not installed, you can't use the api of it.
2026-01-02 12:22:55 | server | 140684395422848 | INFO : server started on [0.0.0.0]:10255
INFO 01-02 12:22:55 [start_utils.py:37] init func start_metric_manager : init ok
WARNING 01-02 12:23:02 [sgl_utils.py:29] sgl_kernel is not installed, or the installed version did not support fa3. Try to upgrade it.
WARNING 01-02 12:23:02 [light_utils.py:13] lightllm_kernel is not installed, you can't use the api of it.
WARNING 01-02 12:23:02 [sgl_utils.py:29] sgl_kernel is not installed, or the installed version did not support fa3. Try to upgrade it.
WARNING 01-02 12:23:02 [light_utils.py:13] lightllm_kernel is not installed, you can't use the api of it.
INFO 01-02 12:23:02 [__init__.py:36] Available plugins for group vllm.platform_plugins:
INFO 01-02 12:23:02 [__init__.py:38] - musa -> vllm_musa:register
INFO 01-02 12:23:02 [__init__.py:41] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
INFO 01-02 12:23:02 [__init__.py:232] Platform plugin musa is activated
WARNING 01-02 12:23:02 [vllm_utils.py:18] vllm is not installed, you can't use the api of it. You can solve it by running `pip install vllm`.
INFO 01-02 12:23:02 [communication_op.py:57] deep_ep is not installed, you can't use the api of it.
INFO 01-02 12:23:02 [cache_tensor_manager.py:17] USE_GPU_TENSOR_CACHE is On
INFO 01-02 12:23:02 [__init__.py:36] Available plugins for group vllm.platform_plugins:
INFO 01-02 12:23:02 [__init__.py:38] - musa -> vllm_musa:register
INFO 01-02 12:23:02 [__init__.py:41] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
INFO 01-02 12:23:02 [__init__.py:232] Platform plugin musa is activated
WARNING 01-02 12:23:02 [vllm_utils.py:18] vllm is not installed, you can't use the api of it. You can solve it by running `pip install vllm`.
INFO 01-02 12:23:02 [communication_op.py:57] deep_ep is not installed, you can't use the api of it.
WARNING 01-02 12:23:02 [grouped_fused_moe_ep.py:28] no deepep or deep_gemm
INFO 01-02 12:23:02 [cache_tensor_manager.py:17] USE_GPU_TENSOR_CACHE is On
WARNING 01-02 12:23:03 [grouped_fused_moe_ep.py:28] no deepep or deep_gemm
INFO 01-02 12:23:03 [manager.py:36] pub_to_httpserver sendhwm 1000
WARNING 01-02 12:23:03 [nixl_kv_transporter.py:19] nixl is not installed, which is required for pd disagreggation!!!
2026-01-02 12:23:03 | server | 140684395422848 | INFO : accepted ('127.0.0.1', 36414) with fd 25
2026-01-02 12:23:03 | server | 140653235951168 | INFO : welcome ('127.0.0.1', 36414)
INFO 01-02 12:23:08 [cache_tensor_manager.py:17] USE_GPU_TENSOR_CACHE is On
WARNING 01-02 12:23:09 [sgl_utils.py:29] sgl_kernel is not installed, or the installed version did not support fa3. Try to upgrade it.
INFO 01-02 12:23:10 [__init__.py:36] Available plugins for group vllm.platform_plugins:
INFO 01-02 12:23:10 [__init__.py:38] - musa -> vllm_musa:register
INFO 01-02 12:23:10 [__init__.py:41] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
INFO 01-02 12:23:10 [__init__.py:232] Platform plugin musa is activated
WARNING 01-02 12:23:10 [vllm_utils.py:18] vllm is not installed, you can't use the api of it. You can solve it by running `pip install vllm`.
WARNING 01-02 12:23:10 [light_utils.py:13] lightllm_kernel is not installed, you can't use the api of it.
WARNING 01-02 12:23:10 [grouped_fused_moe_ep.py:28] no deepep or deep_gemm
INFO 01-02 12:23:10 [communication_op.py:57] deep_ep is not installed, you can't use the api of it.
WARNING 01-02 12:23:10 [nixl_kv_transporter.py:19] nixl is not installed, which is required for pd disagreggation!!!
INFO 01-02 12:23:10 [model_rpc.py:67] Initialized RPC server for rank 0.
INFO 01-02 12:23:10 [model_rpc.py:168] use ChunkedPrefillBackend
INFO 01-02 12:23:11 [basemodel.py:157] Initial quantization. The default quantization method is none
pid 39235 Loading model weights with 1 workers: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 1.01it/s]
INFO 01-02 12:23:12 [mem_utils.py:37] mode setting params: []
INFO 01-02 12:23:12 [mem_utils.py:57] Model kv cache using mode normal
INFO 01-02 12:23:12 [mem_manager.py:84] 69.38735313415528 GB space is available after load the model weight
INFO 01-02 12:23:12 [mem_manager.py:84] 0.109375 MB is the size of one token kv cache
INFO 01-02 12:23:12 [mem_manager.py:84] 649624 is the profiled max_total_token_num with the mem_fraction 0.9
INFO 01-02 12:23:12 [mem_manager.py:84]
warming up: 0%| | 0/12 [00:00<?, ?it/s]WARNING 01-02 12:23:23 [autotuner.py:169] No kernel config for silu_and_mul_fwd:v1 in {N=3072,out_dtype=torch.bfloat16}_MTT_S5000.json,the performance may be suboptimal!You can use LIGHTLLM_TRITON_AUTOTUNE_LEVEL=1 to enable autotune.
WARNING 01-02 12:23:23 [kernel_config.py:40] can not find config_path /ws/lightllm/common/all_kernel_configs/moe_silu_and_mul_kernel/{N=3072,out_dtype=torch.bfloat16}_MTT_S5000.json kernel name moe_silu_and_mul_kernel use default kernel setting
warming up: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 12/12 [00:15<00:00, 1.29s/it]
INFO 01-02 12:23:30 [basemodel.py:812] begin check max_len infer
INFO 01-02 12:23:30 [basemodel.py:849] check max_len 8448 infer ok
INFO 01-02 12:23:45 [base_backend.py:185] loaded model class <class 'lightllm.models.qwen3.model.Qwen3TpPartModel'>
INFO 01-02 12:23:45 [manager.py:196] use req queue ChunkedPrefillQueue
INFO 01-02 12:23:45 [start_utils.py:37] init func start_router_process : init ok
INFO 01-02 12:23:45 [start_utils.py:37] init func start_detokenization_process : init ok
INFO 01-02 12:23:45 [api_start.py:58] start process pid 30307
INFO 01-02 12:23:45 [api_start.py:59] http server pid 54746
[2026-01-02 12:23:45 +0800] [54746] [INFO] Starting gunicorn 23.0.0
[2026-01-02 12:23:45 +0800] [54746] [INFO] Listening at: http://0.0.0.0:8000 (54746)
[2026-01-02 12:23:45 +0800] [54746] [INFO] Using worker: uvicorn.workers.UvicornWorker
[2026-01-02 12:23:45 +0800] [54966] [INFO] Booting worker with pid: 54966
WARNING 01-02 12:23:51 [sgl_utils.py:29] sgl_kernel is not installed, or the installed version did not support fa3. Try to upgrade it.
WARNING 01-02 12:23:51 [light_utils.py:13] lightllm_kernel is not installed, you can't use the api of it.
INFO 01-02 12:23:52 [__init__.py:36] Available plugins for group vllm.platform_plugins:
INFO 01-02 12:23:52 [__init__.py:38] - musa -> vllm_musa:register
INFO 01-02 12:23:52 [__init__.py:41] All plugins in this group will be loaded. Set `VLLM_PLUGINS` to control which plugins to load.
INFO 01-02 12:23:52 [__init__.py:232] Platform plugin musa is activated
WARNING 01-02 12:23:52 [vllm_utils.py:18] vllm is not installed, you can't use the api of it. You can solve it by running `pip install vllm`.
INFO 01-02 12:23:52 [communication_op.py:57] deep_ep is not installed, you can't use the api of it.
INFO 01-02 12:23:52 [cache_tensor_manager.py:17] USE_GPU_TENSOR_CACHE is On
WARNING 01-02 12:23:52 [grouped_fused_moe_ep.py:28] no deepep or deep_gemm
[2026-01-02 12:23:52 +0800] [54966] [INFO] Started server process [54966]
[2026-01-02 12:23:52 +0800] [54966] [INFO] Waiting for application startup.
INFO 01-02 12:23:52 [api_http.py:359] server start up
2026-01-02 12:23:53 | server | 140684395422848 | INFO : accepted ('127.0.0.1', 55128) with fd 26
2026-01-02 12:23:53 | server | 140653227558464 | INFO : welcome ('127.0.0.1', 55128)
2026-01-02 12:23:53 | server | 140684395422848 | INFO : accepted ('127.0.0.1', 55144) with fd 27
2026-01-02 12:23:53 | server | 140653219165760 | INFO : welcome ('127.0.0.1', 55144)
INFO 01-02 12:23:54 [req_id_generator.py:34] ReqIDGenerator init finished
INFO 01-02 12:23:54 [api_http.py:363] server start up ok, loop use is <uvloop.Loop running=True closed=False debug=False>
[2026-01-02 12:23:54 +0800] [54966] [INFO] Application startup complete.
INFO 01-02 12:23:58 [manager.py:417] recieved req X-Request-Id: X-Session-Id: start_time:2026-01-02 12:23:58 lightllm_req_id:8
INFO 01-02 12:23:58 [manager.py:424] router recive req id 8 cost time 0.05271601676940918 s
DEBUG 01-02 12:23:58 [manager.py:322] Prefill Batch: batch_id=-1, time:1767327838.6764812s req_ids:[8]
DEBUG 01-02 12:23:58 [manager.py:322]
INFO 01-02 12:23:58 [manager.py:55] detokenization recv req id 8 cost time 0.0744318962097168 s
INFO 01-02 12:23:59 [manager.py:163] detoken release req id 8
INFO 01-02 12:23:59 [manager.py:611] X-Request-Id: X-Session-Id: start_time:2026-01-02 12:23:58 lightllm_req_id:8 first_token_cost:409.63053703308105ms total_cost_time:907.1474075317383ms,out_token_counter:17 mean_per_token_cost_time: 29.265698264626895ms prompt_token_num:4 gpu cache hit: False gpu_prompt_cache_len:0 gpu_prompt_cache_ratio:0.0 cpu cache hit: False cpu_prompt_cache_len:0 cpu_prompt_cache_ratio:0.0 disk cache hit: False disk_prompt_cache_len:0 disk_prompt_cache_ratio:0.0 mtp_avg_token_per_step:1.0
127.0.0.1:38158 - "POST /generate HTTP/1.1" 200
DEBUG 01-02 12:23:59 [req_manager.py:78] freed all request size 1008
DEBUG 01-02 12:23:59 [infer_batch.py:172] free a batch state:
DEBUG 01-02 12:23:59 [infer_batch.py:172] radix refed token num 0
DEBUG 01-02 12:23:59 [infer_batch.py:172] radix hold token num 21
DEBUG 01-02 12:23:59 [infer_batch.py:172] mem manager can alloc token num 649603
DEBUG 01-02 12:23:59 [infer_batch.py:172] mem manager total size 649624
INFO 01-02 12:23:59 [batch.py:56] router release req id 8
INFO 01-02 12:23:59 [shm_req_manager.py:111] all shm req has been release ok
```
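For reference, the `POST /generate 200` line near the end of the log was produced by a request along these lines (a sketch following LightLLM's documented generate API; the exact prompt and sampling parameters are assumptions):

```python
import requests

# Assumed payload: LightLLM's /generate accepts an "inputs" string plus
# a "parameters" dict of sampling options such as max_new_tokens.
resp = requests.post(
    "http://0.0.0.0:8000/generate",
    json={
        "inputs": "What is AI?",
        "parameters": {"max_new_tokens": 17, "do_sample": False},
    },
    timeout=60,
)
print(resp.status_code, resp.json())
```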
Signed-off-by: Xiaodong Ye <xiaodong.ye@mthreads.com>
Co-authored-by: wangzaijun <wangzaijun@sensetime.com>
Co-authored-by: root <root@DESKTOP-5FJJCPK.localdomain>
**Summary of Changes**

Hello @sufubao, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed.

This pull request integrates the Qwen3Next model, which employs a hybrid attention mechanism combining full attention with Gated Delta Network (GDN) linear attention. It includes substantial enhancements to the Multi-Token Prediction (MTP) system, featuring specialized memory management and optimized Triton kernels for efficient state handling during inference. The changes streamline memory allocation, introduce new inference logic for GDN layers, and provide autotuning for critical kernel operations, aiming to boost performance and memory efficiency for this new model.
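To make the hybrid layout concrete, here is a minimal sketch of how such a stack might dispatch between the two mixer types; the 3:1 ratio and all names below are illustrative assumptions, not code from this PR:

```python
import torch.nn as nn

FULL_ATTN_EVERY = 4  # assumed ratio of GDN to full-attention layers

class HybridDecoderLayer(nn.Module):
    """Illustrative only: picks a mixer per layer index."""

    def __init__(self, layer_idx: int, full_attn: nn.Module, gdn: nn.Module):
        super().__init__()
        self.use_full_attn = (layer_idx + 1) % FULL_ATTN_EVERY == 0
        self.mixer = full_attn if self.use_full_attn else gdn

    def forward(self, hidden_states, infer_state):
        # Full-attention layers read/write the paged KV cache; GDN layers
        # instead update fixed-size conv + recurrent state buffers.
        return self.mixer(hidden_states, infer_state)
```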
Code Review
This pull request adds support for the Qwen3next model, which features a hybrid attention mechanism (standard attention + Gated Delta Networks). The changes are extensive and well-structured, including a new model implementation, a hybrid memory manager for both KV cache and Mamba-style buffers, and several performance optimizations using custom Triton kernels. The core components like ReqManager and MemoryManager have been refactored to be more generic, which is a great improvement for future extensibility. Overall, this is a high-quality contribution that significantly expands the framework's capabilities. I have a few minor suggestions regarding a misleading docstring, a removed validation check, and improving logging for better user experience and debuggability.
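For readers unfamiliar with that hybrid layout, a minimal sketch of the idea behind such a memory manager (illustrative names and shapes, not LightLLM's actual classes):

```python
import torch

class HybridMemManagerSketch:
    """Pairs a token-granular KV pool (full-attention layers) with
    fixed-size per-request state buffers (GDN/Mamba-style layers)."""

    def __init__(self, max_tokens, max_reqs, n_full_attn_layers,
                 kv_heads, head_dim, conv_dim, conv_kernel, state_size):
        # One KV slot per token, as in a standard token-level cache.
        self.kv_pool = torch.empty(
            n_full_attn_layers, max_tokens, 2 * kv_heads, head_dim)
        # GDN state is O(1) per request, independent of sequence length.
        self.conv_state = torch.zeros(max_reqs, conv_dim, conv_kernel - 1)
        self.recurrent_state = torch.zeros(max_reqs, state_size)
        self._free = list(range(max_reqs))

    def alloc_state_slot(self) -> int:
        return self._free.pop()

    def free_state_slot(self, idx: int) -> None:
        self.conv_state[idx].zero_()
        self.recurrent_state[idx].zero_()
        self._free.append(idx)
```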
```python
input_ids.extend(origin_ids[start_idx:])
return input_ids
```
The validation check that ensures the number of image tags in the prompt matches the number of provided images has been removed. This could lead to silent errors or unexpected behavior if there's a mismatch. It's recommended to restore this check to maintain data integrity.
```diff
 input_ids.extend(origin_ids[start_idx:])
+if multimodal_params:
+    image_cnt = len(multimodal_params.images)
+    assert image_cnt == image_id, f"invalid image tag num: {image_cnt} vs {image_id}!"
 return input_ids
```
| """ | ||
| Copy buffers from source indices to destination indices using optimized Triton kernel. | ||
| Args: | ||
| src_buffer_indexes: Source buffer indices (1D tensor) | ||
| dst_buffer_indexes: Destination buffer indices (1D tensor) | ||
| """ |
The docstring here mentions using an "optimized Triton kernel", but the implementation below uses PyTorch's advanced indexing for the copy operation. To avoid confusion, the docstring should be updated to accurately reflect the implementation.
| """ | |
| Copy buffers from source indices to destination indices using optimized Triton kernel. | |
| Args: | |
| src_buffer_indexes: Source buffer indices (1D tensor) | |
| dst_buffer_indexes: Destination buffer indices (1D tensor) | |
| """ | |
| """ | |
| Copy buffers from source indices to destination indices. | |
| Args: | |
| src_buffer_indexes: Source buffer indices (1D tensor) | |
| dst_buffer_indexes: Destination buffer indices (1D tensor) | |
| """ |
`lightllm/server/api_start.py` (outdated)

```python
if args.mtp_draft_model_dir is None:
    args.mtp_draft_model_dir = [args.model_dir] * args.mtp_step
```
When mtp_draft_model_dir is not provided, the code now defaults to using the main model directory. This is a convenient fallback, but it might be surprising to users. It would be helpful to add a log message to inform the user that this default behavior is being applied.
```diff
 if args.mtp_draft_model_dir is None:
+    logger.info(f"'mtp_draft_model_dir' not set, using main model dir '{args.model_dir}' as draft model.")
     args.mtp_draft_model_dir = [args.model_dir] * args.mtp_step
```
`lightllm/server/router/manager.py` (outdated)

```diff
 estimated_peak_token_count = self.shared_token_load.get_estimated_peak_token_count(d_i)
 paused_req_num = self._get_paused_req_num_in_dp_index(dp_index=d_i)
-logger.debug(
+logger.warning(
```
The log level for this message about DP status was changed from debug to warning. While this makes it more visible, this information seems more suited for info or debug levels, as it reflects normal operational state rather than a potential problem. Using warning might cause unnecessary alarm in production logs.
```diff
-logger.warning(
+logger.info(
```
Force-pushed from b09c18e to 2c64777.
**Code review (additional unnecessary changes)**

Found additional file modifications that appear unrelated to supporting qwen3next:

1. **int8kv kernel file renames and refactoring** — kernel files renamed and refactored.
2. **MTP metrics removed** — removes MTP-related metrics from `lightllm/server/metrics/metrics.py`.
3. **Mistral/Qwen3 MoE MTP model refactoring** — changes to the Mistral and Qwen3 MoE MTP model implementations.
4. **Test/benchmark files** — new files such as `test/test_api/test_chat.py`.
5. **CUDA graph tqdm progress bar** (`lightllm/common/basemodel/cuda_graph.py`) — adds a UI enhancement to show progress during CUDA graph warmup; a general improvement unrelated to qwen3next (see the sketch after this list).
6. **`start_args_type.py` default value changes** (`lightllm/server/core/objs/start_args_type.py`) — many default value changes (`tokenizer_mode` slow->fast, `max_req_total_len` 3072->16384, `chunked_prefill_size` 8192->4096, etc.) are bundled configuration changes unrelated to qwen3next.
7. **`api_start.py` SIGHUP handler** (`lightllm/server/api_start.py`) — adds graceful shutdown signal handling, which is infrastructure work unrelated to model support.
Consider splitting these into separate PRs for cleaner review and easier rollback if needed.
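Item 5 above likely amounts to something like the following (an illustrative stand-in, not the PR's code; `capture_graph_for` and the batch-size list are placeholders):

```python
from tqdm import tqdm

def capture_graph_for(batch_size: int) -> None:
    """Placeholder for the actual CUDA graph capture step."""

graph_batch_sizes = [1, 2, 4, 8, 16, 32, 64, 128, 256]
for bs in tqdm(graph_batch_sizes, desc="warming up cuda graphs"):
    capture_graph_for(bs)
```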
Co-authored-by: shihaobai <42648726+shihaobai@users.noreply.github.com>
Co-authored-by: wangzaijun <wangzaijun@sensetime.com>
Co-authored-by: sangchengmeng <sangchengmeng@sensetime.com>
Add autotune kernel configurations for NVIDIA H200: - FLA chunk kernels (chunk_fwd_o, chunk_gated_delta_rule_fwd_h) - Cumsum and dot product kernels - Fused GDN gating and gated RMSNorm kernels - MoE grouped matmul and alignment kernels - SiLU activation kernels Configs provided for both triton 3.4.0 and 3.5.1
Add support for the Qwen3Next architecture, including:
- New model implementation with GDN (Gated Delta Network) attention
- Mamba cache memory manager for the hybrid architecture
- FLA (Flash Linear Attention) Triton kernels
- Custom Triton kernels for causal conv1d, gated RMSNorm, and fused gating
- MTP (Multi-Token Prediction) variant support
- Allocator utilities and parameter weight management
- Hybrid radix cache for dynamic prompt handling
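The core of the GDN mixer is the gated delta rule. A naive per-token PyTorch reference of that recurrence (a sketch of the math the chunked FLA kernels compute efficiently, not the kernels themselves):

```python
import torch

def gated_delta_rule_reference(q, k, v, alpha, beta):
    """S_t = alpha_t * S_{t-1} (I - beta_t k_t k_t^T) + beta_t v_t k_t^T,
    o_t = S_t q_t.
    q, k: (T, d_k) with k rows L2-normalized; v: (T, d_v);
    alpha, beta: (T,) gates in (0, 1)."""
    T, d_k = k.shape
    d_v = v.shape[-1]
    S = q.new_zeros(d_v, d_k)           # recurrent state, O(1) in T
    eye = torch.eye(d_k, dtype=q.dtype)
    outs = []
    for t in range(T):
        # Decay the state, apply the rank-1 delta-rule correction, write v_t.
        S = alpha[t] * S @ (eye - beta[t] * torch.outer(k[t], k[t])) \
            + beta[t] * torch.outer(v[t], k[t])
        outs.append(S @ q[t])
    return torch.stack(outs)
```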