
[Bug]: Cannot serve Qwen2.5 in OpenVINO #12350

Open
1 task done
cheng358 opened this issue Jan 23, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@cheng358

Your current environment

Collecting environment information...
PyTorch version: 2.5.1+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.5 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.31.4
Libc version: glibc-2.31

Python version: 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.14.0-284.25.1.el9_2.x86_64-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090 D
Nvidia driver version: 550.90.07
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.7.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.7.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 8
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7543 32-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2800.000
CPU max MHz: 3737.8899
CPU min MHz: 1500.0000
BogoMIPS: 5599.84
Virtualization: AMD-V
L1d cache: 2 MiB
L1i cache: 2 MiB
L2 cache: 32 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-7,64-71
NUMA node1 CPU(s): 8-15,72-79
NUMA node2 CPU(s): 16-23,80-87
NUMA node3 CPU(s): 24-31,88-95
NUMA node4 CPU(s): 32-39,96-103
NUMA node5 CPU(s): 40-47,104-111
NUMA node6 CPU(s): 48-55,112-119
NUMA node7 CPU(s): 56-63,120-127
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd amd_ppin brs arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] onnx==1.17.0
[pip3] pyzmq==26.2.0
[pip3] torch==2.5.1+cpu
[pip3] transformers==4.47.1
[conda] No relevant packages
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.6.post2.dev324+g016e3676.d20250123
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X SYS 40-47,104-111 5 N/A
NIC0 SYS X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_bond_0

NVIDIA_VISIBLE_DEVICES=GPU-4842a075-5dcf-0737-7b5f-476d15c8acd1
NVIDIA_REQUIRE_CUDA=cuda>=11.8 brand=tesla,driver>=450,driver<451 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=510,driver<511 brand=unknown,driver>=510,driver<511 brand=nvidia,driver>=510,driver<511 brand=nvidiartx,driver>=510,driver<511 brand=geforce,driver>=510,driver<511 brand=geforcertx,driver>=510,driver<511 brand=quadro,driver>=510,driver<511 brand=quadrortx,driver>=510,driver<511 brand=titan,driver>=510,driver<511 brand=titanrtx,driver>=510,driver<511 brand=tesla,driver>=515,driver<516 brand=unknown,driver>=515,driver<516 brand=nvidia,driver>=515,driver<516 brand=nvidiartx,driver>=515,driver<516 brand=geforce,driver>=515,driver<516 brand=geforcertx,driver>=515,driver<516 brand=quadro,driver>=515,driver<516 brand=quadrortx,driver>=515,driver<516 brand=titan,driver>=515,driver<516 brand=titanrtx,driver>=515,driver<516
NCCL_VERSION=2.16.2-1
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_PRODUCT_NAME=CUDA
CUDA_VERSION=11.8.0
LD_LIBRARY_PATH=/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/cv2/../../lib64:/usr/local/nvidia/lib:/usr/local/nvidia/lib64
NCCL_CUMEM_ENABLE=0
TORCHINDUCTOR_COMPILE_THREADS=1

Model Input Dumps

python -m vllm.entrypoints.openai.api_server --model /mnt/new_afs/demo/models/qwen2.5-RolePlay-Action --served-model-name Qwen25-7B-Instruct --device openvino --dtype float32 --max-model-len 32768 --tensor-parallel-size 1 --host 0.0.0.0 --port 6800 --task generate --gpu-memory-utilization 0.9

🐛 Describe the bug

Run command:
python -m vllm.entrypoints.openai.api_server --model /mnt/new_afs/demo/models/qwen2.5-RolePlay-Action --served-model-name Qwen25-7B-Instruct --device openvino --dtype float32 --max-model-len 32768 --tensor-parallel-size 1 --host 0.0.0.0 --port 6800 --task generate --gpu-memory-utilization 0.9
The server then fails with the following error:
INFO 01-23 16:46:54 init.py:183] Automatically detected platform openvino.
INFO 01-23 16:46:56 api_server.py:768] vLLM API server version 0.6.6.post2.dev324+g016e3676.d20250123
INFO 01-23 16:46:56 api_server.py:769] args: Namespace(host='0.0.0.0', port=6800, uvicorn_log_level='info', allow_credentials=False, allowed_origins=[''], allowed_methods=[''], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='/mnt/new_afs/demo/models/qwen2.5-RolePlay-Action', task='generate', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='float32', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=32768, guided_decoding_backend='xgrammar', logits_processor_pattern=None, distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, 
max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='openvino', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=['Qwen25-7B-Instruct'], qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False)
INFO 01-23 16:46:56 api_server.py:195] Started engine process with PID 9162
INFO 01-23 16:46:56 config.py:2309] Upcasting torch.bfloat16 to torch.float32.
INFO 01-23 16:47:09 init.py:183] Automatically detected platform openvino.
INFO 01-23 16:47:11 config.py:2309] Upcasting torch.bfloat16 to torch.float32.
WARNING 01-23 16:47:12 config.py:656] Async output processing is not supported on the current platform type openvino.
WARNING 01-23 16:47:12 openvino.py:84] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
INFO 01-23 16:47:13 openvino.py:118] OpenVINO CPU optimal block size is 32, overriding currently set 16
WARNING 01-23 16:47:13 openvino.py:133] Environment variable VLLM_OPENVINO_KVCACHE_SPACE (GB) for OpenVINO backend is not set, using 4 by default.
WARNING 01-23 16:47:25 config.py:656] Async output processing is not supported on the current platform type openvino.
WARNING 01-23 16:47:25 openvino.py:84] CUDA graph is not supported on OpenVINO backend, fallback to the eager mode.
INFO 01-23 16:47:25 openvino.py:118] OpenVINO CPU optimal block size is 32, overriding currently set 16
WARNING 01-23 16:47:25 openvino.py:133] Environment variable VLLM_OPENVINO_KVCACHE_SPACE (GB) for OpenVINO backend is not set, using 4 by default.
INFO 01-23 16:47:25 llm_engine.py:232] Initializing an LLM engine (v0.6.6.post2.dev324+g016e3676.d20250123) with config: model='/mnt/new_afs/demo/models/qwen2.5-RolePlay-Action', speculative_config=None, tokenizer='/mnt/new_afs/demo/models/qwen2.5-RolePlay-Action', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float32, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=True, kv_cache_dtype=<Type: 'float16'>, quantization_param_path=None, device_config=cpu, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen25-7B-Instruct, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=False, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"candidate_compile_sizes":[],"compile_sizes":[],"capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True,
INFO 01-23 16:47:30 openvino.py:35] Cannot use None backend on OpenVINO.
INFO 01-23 16:47:30 openvino.py:36] Using OpenVINO Attention backend.
WARNING 01-23 16:47:30 _custom_ops.py:19] Failed to import from vllm._C with ModuleNotFoundError("No module named 'vllm._C'")
WARNING 01-23 16:47:30 config.py:3350] Current VLLM config is not set.
ERROR 01-23 16:47:30 engine.py:381] 'NoneType' object has no attribute 'dtype'
ERROR 01-23 16:47:30 engine.py:381] Traceback (most recent call last):
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 372, in run_mp_engine
ERROR 01-23 16:47:30 engine.py:381] engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 120, in from_engine_args
ERROR 01-23 16:47:30 engine.py:381] return cls(ipc_path=ipc_path,
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 72, in init
ERROR 01-23 16:47:30 engine.py:381] self.engine = LLMEngine(*args, **kwargs)
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 271, in init
ERROR 01-23 16:47:30 engine.py:381] self.model_executor = executor_class(vllm_config=vllm_config, )
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 49, in init
ERROR 01-23 16:47:30 engine.py:381] self._init_executor()
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 40, in _init_executor
ERROR 01-23 16:47:30 engine.py:381] self.collective_rpc("load_model")
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 49, in collective_rpc
ERROR 01-23 16:47:30 engine.py:381] answer = run_method(self.driver_worker, method, args, kwargs)
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/utils.py", line 2208, in run_method
ERROR 01-23 16:47:30 engine.py:381] return func(*args, **kwargs)
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/worker/openvino_worker.py", line 253, in load_model
ERROR 01-23 16:47:30 engine.py:381] self.model_runner.load_model()
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/worker/openvino_model_runner.py", line 82, in load_model
ERROR 01-23 16:47:30 engine.py:381] self.model = get_model(model_config=self.model_config,
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/model_executor/model_loader/openvino.py", line 202, in get_model
ERROR 01-23 16:47:30 engine.py:381] return OpenVINOCausalLM(ov_core, model_config, device_config,
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/model_executor/model_loader/openvino.py", line 108, in init
ERROR 01-23 16:47:30 engine.py:381] self.logits_processor = LogitsProcessor(
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/model_executor/layers/logits_processor.py", line 48, in init
ERROR 01-23 16:47:30 engine.py:381] parallel_config = get_current_vllm_config().parallel_config
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/config.py", line 3352, in get_current_vllm_config
ERROR 01-23 16:47:30 engine.py:381] return VllmConfig()
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] File "", line 19, in init
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/config.py", line 3199, in post_init
ERROR 01-23 16:47:30 engine.py:381] current_platform.check_and_update_config(self)
ERROR 01-23 16:47:30 engine.py:381] File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/platforms/openvino.py", line 78, in check_and_update_config
ERROR 01-23 16:47:30 engine.py:381] if model_config.dtype != torch.float32:
ERROR 01-23 16:47:30 engine.py:381] ^^^^^^^^^^^^^^^^^^
ERROR 01-23 16:47:30 engine.py:381] AttributeError: 'NoneType' object has no attribute 'dtype'
Process SpawnProcess-2:
Traceback (most recent call last):
File "/usr/local/lib/miniconda3/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/local/lib/miniconda3/lib/python3.11/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 383, in run_mp_engine
raise e
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 372, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 120, in from_engine_args
return cls(ipc_path=ipc_path,
^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/engine/multiprocessing/engine.py", line 72, in init
self.engine = LLMEngine(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/engine/llm_engine.py", line 271, in init
self.model_executor = executor_class(vllm_config=vllm_config, )
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/executor/executor_base.py", line 49, in init
self._init_executor()
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 40, in _init_executor
self.collective_rpc("load_model")
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/executor/uniproc_executor.py", line 49, in collective_rpc
answer = run_method(self.driver_worker, method, args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/utils.py", line 2208, in run_method
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/worker/openvino_worker.py", line 253, in load_model
self.model_runner.load_model()
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/worker/openvino_model_runner.py", line 82, in load_model
self.model = get_model(model_config=self.model_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/model_executor/model_loader/openvino.py", line 202, in get_model
return OpenVINOCausalLM(ov_core, model_config, device_config,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/model_executor/model_loader/openvino.py", line 108, in init
self.logits_processor = LogitsProcessor(
^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/model_executor/layers/logits_processor.py", line 48, in init
parallel_config = get_current_vllm_config().parallel_config
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/config.py", line 3352, in get_current_vllm_config
return VllmConfig()
^^^^^^^^^^^^
File "", line 19, in init
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/config.py", line 3199, in post_init
current_platform.check_and_update_config(self)
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/platforms/openvino.py", line 78, in check_and_update_config
if model_config.dtype != torch.float32:
^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'dtype'
Traceback (most recent call last):
File "", line 198, in _run_module_as_main
File "", line 88, in _run_code
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 832, in
uvloop.run(run_server(args))
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/uvloop/init.py", line 105, in run
return runner.run(wrapper())
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/miniconda3/lib/python3.11/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/uvloop/init.py", line 61, in wrapper
return await main
^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 796, in run_server
async with build_async_engine_client(args) as engine_client:
File "/usr/local/lib/miniconda3/lib/python3.11/contextlib.py", line 204, in aenter
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 125, in build_async_engine_client
async with build_async_engine_client_from_engine_args(
File "/usr/local/lib/miniconda3/lib/python3.11/contextlib.py", line 204, in aenter
return await anext(self.gen)
^^^^^^^^^^^^^^^^^^^^^
File "/mnt/new_afs/demo/vllm_vino/vino/lib/python3.11/site-packages/vllm/entrypoints/openai/api_server.py", line 219, in build_async_engine_client_from_engine_args
raise RuntimeError(
RuntimeError: Engine process failed to start. See stack trace for the root cause.
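Reading the traceback, `get_current_vllm_config()` falls back to constructing a bare `VllmConfig()`, whose `model_config` defaults to `None`; the OpenVINO platform hook (`vllm/platforms/openvino.py`, `check_and_update_config`) then dereferences `model_config.dtype` without a `None` check. A minimal self-contained sketch of that failure pattern, using simplified stand-in classes (not vLLM's actual code, and with a plain string in place of `torch.dtype`):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ModelConfig:
    dtype: str = "float32"

@dataclass
class VllmConfig:
    # A bare VllmConfig() (as produced on the fallback path in the
    # traceback) has no model config yet.
    model_config: Optional[ModelConfig] = None

def check_and_update_config_unguarded(config: VllmConfig) -> None:
    # Mirrors the unguarded attribute access seen in the traceback.
    if config.model_config.dtype != "float32":
        raise ValueError("OpenVINO backend only supports float32")

def check_and_update_config_guarded(config: VllmConfig) -> None:
    # A None check avoids the crash when model_config is absent.
    if config.model_config is not None and config.model_config.dtype != "float32":
        raise ValueError("OpenVINO backend only supports float32")

try:
    check_and_update_config_unguarded(VllmConfig())  # model_config is None
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'dtype'

check_and_update_config_guarded(VllmConfig())  # no exception raised
```

This only illustrates why the error message is an `AttributeError` rather than a clear "unsupported dtype" message; the actual fix belongs in vLLM's platform code, not in user configuration.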

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@cheng358 cheng358 added the bug Something isn't working label Jan 23, 2025
@cheng358
Author

cheng358 commented Jan 23, 2025

I installed vLLM following the steps on this page: https://docs.vllm.ai/en/stable/getting_started/openvino-installation.html

@DarkLight1337 DarkLight1337 changed the title [Bug]: AttributeError: 'NoneType' object has no attribute 'dtype' [Bug]: Cannot serve Qwen2.5 in OpenVINO Jan 23, 2025