
AMD GPU, ROCm 7.1 #23

@gqyalh

Description


When trying to launch the model, the following exception is raised:
VLLM_USE_MODELSCOPE=true vllm serve deepseek-ai/DeepSeek-R1-Distill-Llama-70B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
INFO 11-20 18:08:10 [__init__.py:220] No platform detected, vLLM is running on UnspecifiedPlatform
WARNING 11-20 18:08:11 [_custom_ops.py:20] Failed to import from vllm._C with ImportError('libcudart.so.12: cannot open shared object file: No such file or directory')
Traceback (most recent call last):
File "/home/gqy/miniconda3/envs/vllm/bin/vllm", line 7, in <module>
sys.exit(main())
^^^^^^
File "/home/gqy/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/cli/main.py", line 46, in main
cmd.subparser_init(subparsers).set_defaults(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gqy/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/cli/serve.py", line 70, in subparser_init
serve_parser = make_arg_parser(serve_parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gqy/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/entrypoints/openai/cli_args.py", line 263, in make_arg_parser
parser = AsyncEngineArgs.add_cli_args(parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gqy/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1714, in add_cli_args
parser = EngineArgs.add_cli_args(parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gqy/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 919, in add_cli_args
vllm_kwargs = get_kwargs(VllmConfig)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/gqy/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 281, in get_kwargs
return copy.deepcopy(_compute_kwargs(cls))
^^^^^^^^^^^^^^^^^^^^
File "/home/gqy/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 182, in _compute_kwargs
default = field.default_factory()
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/gqy/miniconda3/envs/vllm/lib/python3.12/site-packages/pydantic/_internal/_dataclasses.py", line 121, in __init__
s.__pydantic_validator__.validate_python(ArgsKwargs(args, kwargs), self_instance=s)
File "/home/gqy/miniconda3/envs/vllm/lib/python3.12/site-packages/vllm/config/device.py", line 58, in __post_init__
raise RuntimeError(
RuntimeError: Failed to infer device type, please set the environment variable VLLM_LOGGING_LEVEL=DEBUG to turn on verbose logging to help debug the issue.
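The `libcudart.so.12: cannot open shared object file` warning just above the traceback suggests the installed vLLM/PyTorch wheels were built for CUDA, so on a ROCm-only machine neither GPU runtime can be loaded and platform detection fails. A minimal sketch of that check (the library names are assumptions based on the warning; `libamdhip64.so` is the ROCm HIP runtime):

```python
import ctypes

def can_load(libname: str) -> bool:
    """Return True if the shared library can be dlopen'ed on this machine."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# "No platform detected" means neither GPU runtime resolved; on a working
# ROCm box the HIP runtime should load even when the CUDA one does not.
cuda_ok = can_load("libcudart.so.12")   # CUDA runtime (missing here)
rocm_ok = can_load("libamdhip64.so")    # ROCm HIP runtime
print(f"CUDA runtime loadable: {cuda_ok}, ROCm runtime loadable: {rocm_ok}")
```

If the ROCm runtime loads but vLLM still reports `UnspecifiedPlatform`, the likely fix is reinstalling PyTorch and vLLM from their ROCm builds rather than the default CUDA wheels; the vLLM ROCm installation docs give the exact commands for each ROCm version.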
