[Bug]: TypeError: FusedSDPA.forward() takes from 4 to 9 positional arguments but 12 were given #462

Open

Zhenzhong1 opened this issue Nov 6, 2024 · 0 comments
Labels: bug (Something isn't working)
Zhenzhong1 commented Nov 6, 2024

Your current environment

Gaudi2H

Model Input Dumps

Startup issue.

🐛 Describe the bug

vLLM API server version 0.6.3.dev563+ga5136ec1 started up fine on the morning of Nov 6.

By the afternoon it no longer worked, and I noticed the vLLM API server version had changed to 0.6.3.dev588+g1033c3eb.

I suspect this is related to commits merged in the last two days; a quick way to compare the two builds is sketched below.
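This is a sketch only; the `vllm-hpu-extension` distribution name and the `FusedSDPA` import path are my assumptions, inferred from the traceback further down:

```python
# Compare the installed builds between the working (morning) and
# broken (afternoon) containers.
import inspect
from importlib.metadata import version

print("vllm:", version("vllm"))                     # e.g. 0.6.3.dev563+ga5136ec1...
print("extension:", version("vllm-hpu-extension"))  # assumed distribution name

# Print the FusedSDPA signature the installed Habana bridge actually
# ships; the broken build's ops.py passes more arguments than this.
from habana_frameworks.torch.hpex.kernels import FusedSDPA
print(inspect.signature(FusedSDPA.forward))
```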

Correct logs from vLLM API server version 0.6.3.dev563+ga5136ec1:

2024-11-06T03:03:29.155418275Z INFO 11-06 03:03:29 api_server.py:529] args: Namespace(host='0.0.0.0', port=80, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='Intel/neural-chat-7b-v3-3', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', weights_load_device=None, config_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=128, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=True, use_padding_aware_scheduling=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_num_prefill_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=2048, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, override_neuron_config=None, scheduling_policy='fcfs', disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False)
2024-11-06T03:03:29.163998073Z INFO 11-06 03:03:29 api_server.py:166] Multiprocessing frontend to use ipc:///tmp/1ee25630-2597-4160-93d0-cdd1a81947c5 for IPC Path.
2024-11-06T03:03:29.165811090Z INFO 11-06 03:03:29 api_server.py:179] Started engine process with PID 76
2024-11-06T03:03:29.660939574Z INFO 11-06 03:03:29 config.py:1684] For HPU, we cast models to bfloat16 instead ofusing float16 by default. Please specify `dtype` if you want to use float16.
2024-11-06T03:03:29.660961399Z WARNING 11-06 03:03:29 config.py:1710] Casting torch.float16 to torch.bfloat16.
2024-11-06T03:03:32.786212994Z INFO 11-06 03:03:32 config.py:1684] For HPU, we cast models to bfloat16 instead ofusing float16 by default. Please specify `dtype` if you want to use float16.
2024-11-06T03:03:32.786235116Z WARNING 11-06 03:03:32 config.py:1710] Casting torch.float16 to torch.bfloat16.
2024-11-06T03:03:36.178231668Z INFO 11-06 03:03:36 llm_engine.py:238] Initializing an LLM engine (v0.6.3.dev563+ga5136ec1) with config: model='Intel/neural-chat-7b-v3-3', speculative_config=None, tokenizer='Intel/neural-chat-7b-v3-3', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, weights_load_device=hpu, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=hpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Intel/neural-chat-7b-v3-3, use_v2_block_manager=True, num_scheduler_steps=1, chunked_prefill_enabled=False multi_step_stream_outputs=True, enable_prefix_caching=False, use_async_output_proc=True, use_cached_outputs=True, mm_processor_kwargs=None)
2024-11-06T03:03:36.720531727Z WARNING 11-06 03:03:36 utils.py:809] Pin memory is not supported on HPU.
2024-11-06T03:03:36.721573220Z INFO 11-06 03:03:36 selector.py:146] Using HPUAttention backend.
2024-11-06T03:03:36.723618110Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_PROMPT_BS_BUCKET_MIN=1 (default:1)
2024-11-06T03:03:36.723645923Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_PROMPT_BS_BUCKET_STEP=32 (default:32)
2024-11-06T03:03:36.723672481Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_PROMPT_BS_BUCKET_MAX=256 (default:256)
2024-11-06T03:03:36.723702296Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_DECODE_BS_BUCKET_MIN=1 (default:1)
2024-11-06T03:03:36.723728332Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_DECODE_BS_BUCKET_STEP=32 (default:32)
2024-11-06T03:03:36.723749955Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_DECODE_BS_BUCKET_MAX=256 (default:256)
2024-11-06T03:03:36.723778711Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_PROMPT_SEQ_BUCKET_MIN=128 (default:128)
2024-11-06T03:03:36.723793062Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_PROMPT_SEQ_BUCKET_STEP=128 (default:128)
2024-11-06T03:03:36.723815041Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_PROMPT_SEQ_BUCKET_MAX=1024 (default:1024)
2024-11-06T03:03:36.723841828Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_DECODE_BLOCK_BUCKET_MIN=128 (default:128)
2024-11-06T03:03:36.723865656Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_DECODE_BLOCK_BUCKET_STEP=128 (default:128)
2024-11-06T03:03:36.723898476Z INFO 11-06 03:03:36 hpu_model_runner.py:119] VLLM_DECODE_BLOCK_BUCKET_MAX=4096 (default:4096)
2024-11-06T03:03:36.723932389Z INFO 11-06 03:03:36 hpu_model_runner.py:737] Prompt bucket config (min, step, max_warmup) bs:[1, 32, 256], seq:[128, 128, 1024]
2024-11-06T03:03:36.723955300Z INFO 11-06 03:03:36 hpu_model_runner.py:742] Decode bucket config (min, step, max_warmup) bs:[1, 32, 256], block:[128, 128, 4096]
2024-11-06T03:03:39.827954695Z ============================= HABANA PT BRIDGE CONFIGURATION =========================== 
2024-11-06T03:03:39.827982307Z  PT_HPU_LAZY_MODE = 1
2024-11-06T03:03:39.827985476Z  PT_RECIPE_CACHE_PATH = 
2024-11-06T03:03:39.827988196Z  PT_CACHE_FOLDER_DELETE = 0
2024-11-06T03:03:39.827990275Z  PT_HPU_RECIPE_CACHE_CONFIG = 
2024-11-06T03:03:39.827995646Z  PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807
2024-11-06T03:03:39.827997970Z  PT_HPU_LAZY_ACC_PAR_MODE = 1
2024-11-06T03:03:39.828000602Z  PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES = 0
2024-11-06T03:03:39.828003207Z ---------------------------: System Configuration :---------------------------
2024-11-06T03:03:39.828014638Z Num CPU Cores : 160
2024-11-06T03:03:39.828016656Z CPU RAM       : 1056375272 KB
2024-11-06T03:03:39.828018639Z ------------------------------------------------------------------------------
2024-11-06T03:03:40.213686829Z INFO 11-06 03:03:40 selector.py:146] Using HPUAttention backend.
2024-11-06T03:03:40.241571954Z INFO 11-06 03:03:40 loader.py:405] Loading weights on hpu...
2024-11-06T03:03:40.449113011Z INFO 11-06 03:03:40 weight_utils.py:243] Using model weights format ['*.bin']
Loading pt checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]
Loading pt checkpoint shards:  50% Completed | 1/2 [00:03<00:03,  3.54s/it]
Loading pt checkpoint shards: 100% Completed | 2/2 [00:09<00:00,  4.81s/it]
Loading pt checkpoint shards: 100% Completed | 2/2 [00:09<00:00,  4.62s/it]
2024-11-06T03:07:46.308211440Z 
2024-11-06T03:07:46.436228994Z INFO 11-06 03:07:46 hpu_model_runner.py:641] Pre-loading model weights on hpu:0 took 13.51 GiB of device memory (13.51 GiB/94.62 GiB used) and 11.26 GiB of host memory (106.2 GiB/1007 GiB used)
2024-11-06T03:07:46.602223130Z INFO 11-06 03:07:46 hpu_model_runner.py:688] Wrapping in HPU Graph took 0 B of device memory (13.51 GiB/94.62 GiB used) and 0 B of host memory (106.2 GiB/1007 GiB used)
2024-11-06T03:07:46.678575262Z INFO 11-06 03:07:46 hpu_model_runner.py:692] Loading model weights took in total 13.51 GiB of device memory (13.51 GiB/94.62 GiB used) and 11.26 GiB of host memory (106.2 GiB/1007 GiB used)
2024-11-06T03:07:49.744303758Z INFO 11-06 03:07:49 hpu_worker.py:184] Model profiling run took 2.674 GiB of device memory (16.19 GiB/94.62 GiB used) and 190.3 MiB of host memory (106.3 GiB/1007 GiB used)
2024-11-06T03:07:49.745197776Z INFO 11-06 03:07:49 hpu_worker.py:208] Free device memory: 78.44 GiB, 70.59 GiB usable (gpu_memory_utilization=0.9), 7.059 GiB reserved for HPUGraphs (VLLM_GRAPH_RESERVED_MEM=0.1), 63.53 GiB reserved for KV cache
2024-11-06T03:07:49.819872212Z INFO 11-06 03:07:49 hpu_executor.py:85] # HPU blocks: 4066, # CPU blocks: 256
2024-11-06T03:07:50.221380730Z INFO 11-06 03:07:50 hpu_worker.py:239] Initializing cache engine took 63.53 GiB of device memory (79.72 GiB/94.62 GiB used) and 540 KiB of host memory (106.3 GiB/1007 GiB used)
2024-11-06T03:07:50.221510200Z INFO 11-06 03:07:50 hpu_model_runner.py:1534] Generated 60 prompt buckets [bs, seq]:                 [(1, 128), (1, 256), (1, 384), (1, 512), (1, 640), (1, 768), (1, 896), (1, 1024), (2, 128), (2, 256), (2, 384), (2, 512), (2, 640), (2, 768), (2, 896), (2, 1024), (4, 128), (4, 256), (4, 384), (4, 512), (4, 640), (4, 768), (4, 896), (4, 1024), (8, 128), (8, 256), (8, 384), (8, 512), (8, 640), (8, 768), (8, 896), (8, 1024), (16, 128), (16, 256), (16, 384), (16, 512), (16, 640), (16, 768), (16, 896), (16, 1024), (32, 128), (32, 256), (32, 384), (32, 512), (32, 640), (32, 768), (32, 896), (32, 1024), (64, 128), (64, 256), (64, 384), (64, 512), (96, 128), (96, 256), (128, 128), (128, 256), (160, 128), (192, 128), (224, 128), (256, 128)]
2024-11-06T03:07:50.221531088Z INFO 11-06 03:07:50 hpu_model_runner.py:1539] Omitted 44 prompt buckets due to exceeded token budget (max_num_batched_tokens=32768)
2024-11-06T03:07:50.221961143Z INFO 11-06 03:07:50 hpu_model_runner.py:1547] Generated 416 decode buckets [bs, total_blocks]: [(1, 128), (1, 256), (1, 384), (1, 512), (1, 640), (1, 768), (1, 896), (1, 1024), (1, 1152), (1, 1280), (1, 1408), (1, 1536), (1, 1664), (1, 1792), (1, 1920), (1, 2048), (1, 2176), (1, 2304), (1, 2432), (1, 2560), (1, 2688), (1, 2816), (1, 2944), (1, 3072), (1, 3200), (1, 3328), (1, 3456), (1, 3584), (1, 3712), (1, 3840), (1, 3968), (1, 4096), (2, 128), (2, 256), (2, 384), (2, 512), (2, 640), (2, 768), (2, 896), (2, 1024), (2, 1152), (2, 1280), (2, 1408), (2, 1536), (2, 1664), (2, 1792), (2, 1920), (2, 2048), (2, 2176), (2, 2304), (2, 2432), (2, 2560), (2, 2688), (2, 2816), (2, 2944), (2, 3072), (2, 3200), (2, 3328), (2, 3456), (2, 3584), (2, 3712), (2, 3840), (2, 3968), (2, 4096), (4, 128), (4, 256), (4, 384), (4, 512), (4, 640), (4, 768), (4, 896), (4, 1024), (4, 1152), (4, 1280), (4, 1408), (4, 1536), (4, 1664), (4, 1792), (4, 1920), (4, 2048), (4, 2176), (4, 2304), (4, 2432), (4, 2560), (4, 2688), (4, 2816), (4, 2944), (4, 3072), (4, 3200), (4, 3328), (4, 3456), (4, 3584), (4, 3712), (4, 3840), (4, 3968), (4, 4096), (8, 128), (8, 256), (8, 384), (8, 512), (8, 640), (8, 768), (8, 896), (8, 1024), (8, 1152), (8, 1280), (8, 1408), (8, 1536), (8, 1664), (8, 1792), (8, 1920), (8, 2048), (8, 2176), (8, 2304), (8, 2432), (8, 2560), (8, 2688), (8, 2816), (8, 2944), (8, 3072), (8, 3200), (8, 3328), (8, 3456), (8, 3584), (8, 3712), (8, 3840), (8, 3968), (8, 4096), (16, 128), (16, 256), (16, 384), (16, 512), (16, 640), (16, 768), (16, 896), (16, 1024), (16, 1152), (16, 1280), (16, 1408), (16, 1536), (16, 1664), (16, 1792), (16, 1920), (16, 2048), (16, 2176), (16, 2304), (16, 2432), (16, 2560), (16, 2688), (16, 2816), (16, 2944), (16, 3072), (16, 3200), (16, 3328), (16, 3456), (16, 3584), (16, 3712), (16, 3840), (16, 3968), (16, 4096), (32, 128), (32, 256), (32, 384), (32, 512), (32, 640), (32, 768), (32, 896), (32, 1024), (32, 1152), (32, 1280), (32, 1408), (32, 1536), (32, 1664), (32, 1792), (32, 1920), (32, 2048), (32, 2176), (32, 2304), (32, 2432), (32, 2560), (32, 2688), (32, 2816), (32, 2944), (32, 3072), (32, 3200), (32, 3328), (32, 3456), (32, 3584), (32, 3712), (32, 3840), (32, 3968), (32, 4096), (64, 128), (64, 256), (64, 384), (64, 512), (64, 640), (64, 768), (64, 896), (64, 1024), (64, 1152), (64, 1280), (64, 1408), (64, 1536), (64, 1664), (64, 1792), (64, 1920), (64, 2048), (64, 2176), (64, 2304), (64, 2432), (64, 2560), (64, 2688), (64, 2816), (64, 2944), (64, 3072), (64, 3200), (64, 3328), (64, 3456), (64, 3584), (64, 3712), (64, 3840), (64, 3968), (64, 4096), (96, 128), (96, 256), (96, 384), (96, 512), (96, 640), (96, 768), (96, 896), (96, 1024), (96, 1152), (96, 1280), (96, 1408), (96, 1536), (96, 1664), (96, 1792), (96, 1920), (96, 2048), (96, 2176), (96, 2304), (96, 2432), (96, 2560), (96, 2688), (96, 2816), (96, 2944), (96, 3072), (96, 3200), (96, 3328), (96, 3456), (96, 3584), (96, 3712), (96, 3840), (96, 3968), (96, 4096), (128, 128), (128, 256), (128, 384), (128, 512), (128, 640), (128, 768), (128, 896), (128, 1024), (128, 1152), (128, 1280), (128, 1408), (128, 1536), (128, 1664), (128, 1792), (128, 1920), (128, 2048), (128, 2176), (128, 2304), (128, 2432), (128, 2560), (128, 2688), (128, 2816), (128, 2944), (128, 3072), (128, 3200), (128, 3328), (128, 3456), (128, 3584), (128, 3712), (128, 3840), (128, 3968), (128, 4096), (160, 128), (160, 256), (160, 384), (160, 512), (160, 640), (160, 768), (160, 896), (160, 1024), (160, 1152), (160, 1280), (160, 
1408), (160, 1536), (160, 1664), (160, 1792), (160, 1920), (160, 2048), (160, 2176), (160, 2304), (160, 2432), (160, 2560), (160, 2688), (160, 2816), (160, 2944), (160, 3072), (160, 3200), (160, 3328), (160, 3456), (160, 3584), (160, 3712), (160, 3840), (160, 3968), (160, 4096), (192, 128), (192, 256), (192, 384), (192, 512), (192, 640), (192, 768), (192, 896), (192, 1024), (192, 1152), (192, 1280), (192, 1408), (192, 1536), (192, 1664), (192, 1792), (192, 1920), (192, 2048), (192, 2176), (192, 2304), (192, 2432), (192, 2560), (192, 2688), (192, 2816), (192, 2944), (192, 3072), (192, 3200), (192, 3328), (192, 3456), (192, 3584), (192, 3712), (192, 3840), (192, 3968), (192, 4096), (224, 128), (224, 256), (224, 384), (224, 512), (224, 640), (224, 768), (224, 896), (224, 1024), (224, 1152), (224, 1280), (224, 1408), (224, 1536), (224, 1664), (224, 1792), (224, 1920), (224, 2048), (224, 2176), (224, 2304), (224, 2432), (224, 2560), (224, 2688), (224, 2816), (224, 2944), (224, 3072), (224, 3200), (224, 3328), (224, 3456), (224, 3584), (224, 3712), (224, 3840), (224, 3968), (224, 4096), (256, 128), (256, 256), (256, 384), (256, 512), (256, 640), (256, 768), (256, 896), (256, 1024), (256, 1152), (256, 1280), (256, 1408), (256, 1536), (256, 1664), (256, 1792), (256, 1920), (256, 2048), (256, 2176), (256, 2304), (256, 2432), (256, 2560), (256, 2688), (256, 2816), (256, 2944), (256, 3072), (256, 3200), (256, 3328), (256, 3456), (256, 3584), (256, 3712), (256, 3840), (256, 3968), (256, 4096)]
2024-11-06T03:07:50.222778696Z WARNING 11-06 03:07:50 hpu_model_runner.py:1576] Cannot use PT_COMPILE_ONLY_MODE. Warmup time will be negatively impacted. Please update Gaudi Software Suite.
2024-11-06T03:07:50.223512987Z INFO 11-06 03:07:50 hpu_model_runner.py:1442] [Warmup][Prompt][1/60] batch_size:32 seq_len:1024 free_mem:14.9 GiB
2024-11-06T03:07:53.086264418Z INFO 11-06 03:07:53 hpu_model_runner.py:1442] [Warmup][Prompt][2/60] batch_size:64 seq_len:512 free_mem:14.9 GiB
2024-11-06T03:07:55.592932360Z INFO 11-06 03:07:55 hpu_model_runner.py:1442] [Warmup][Prompt][3/60] batch_size:128 seq_len:256 free_mem:14.9 GiB
2024-11-06T03:07:58.045743052Z INFO 11-06 03:07:58 hpu_model_runner.py:1442] [Warmup][Prompt][4/60] batch_size:256 seq_len:128 free_mem:14.9 GiB
2024-11-06T03:08:00.201769490Z INFO 11-06 03:08:00 hpu_model_runner.py:1442] [Warmup][Prompt][5/60] batch_size:32 seq_len:896 free_mem:14.9 GiB
2024-11-06T03:08:03.148196637Z INFO 11-06 03:08:03 hpu_model_runner.py:1442] [Warmup][Prompt][6/60] batch_size:224 seq_len:128 free_mem:14.9 GiB
2024-11-06T03:08:05.091796055Z INFO 11-06 03:08:05 hpu_model_runner.py:1442] [Warmup][Prompt][7/60] batch_size:32 seq_len:768 free_mem:14.9 GiB
2024-11-06T03:08:07.674572544Z INFO 11-06 03:08:07 hpu_model_runner.py:1442] [Warmup][Prompt][8/60] batch_size:64 seq_len:384 free_mem:14.9 GiB
2024-11-06T03:08:09.941175997Z INFO 11-06 03:08:09 hpu_model_runner.py:1442] [Warmup][Prompt][9/60] batch_size:96 seq_len:256 free_mem:14.9 GiB
2024-11-06T03:08:11.961956708Z INFO 11-06 03:08:11 hpu_model_runner.py:1442] [Warmup][Prompt][10/60] batch_size:192 seq_len:128 free_mem:14.9 GiB
2024-11-06T03:08:13.697575370Z INFO 11-06 03:08:13 hpu_model_runner.py:1442] [Warmup][Prompt][11/60] batch_size:32 seq_len:640 free_mem:14.9 GiB
2024-11-06T03:08:15.996277563Z INFO 11-06 03:08:15 hpu_model_runner.py:1442] [Warmup][Prompt][12/60] batch_size:160 seq_len:128 free_mem:14.9 GiB
2024-11-06T03:08:17.500452752Z INFO 11-06 03:08:17 hpu_model_runner.py:1442] [Warmup][Prompt][13/60] batch_size:16 seq_len:1024 free_mem:14.9 GiB
2024-11-06T03:08:19.556252226Z INFO 11-06 03:08:19 hpu_model_runner.py:1442] [Warmup][Prompt][14/60] batch_size:32 seq_len:512 free_mem:14.9 GiB
2024-11-06T03:08:21.126260440Z INFO 11-06 03:08:21 hpu_model_runner.py:1442] [Warmup][Prompt][15/60] batch_size:64 seq_len:256 free_mem:14.9 GiB
2024-11-06T03:08:22.689492442Z INFO 11-06 03:08:22 hpu_model_runner.py:1442] [Warmup][Prompt][16/60] batch_size:128 seq_len:128 free_mem:14.9 GiB
2024-11-06T03:08:23.950776739Z INFO 11-06 03:08:23 hpu_model_runner.py:1442] [Warmup][Prompt][17/60] batch_size:16 seq_len:896 free_mem:14.9 GiB
2024-11-06T03:08:25.812142357Z INFO 11-06 03:08:25 hpu_model_runner.py:1442] [Warmup][Prompt][18/60] batch_size:16 seq_len:768 free_mem:14.9 GiB
2024-11-06T03:08:27.487782974Z INFO 11-06 03:08:27 hpu_model_runner.py:1442] [Warmup][Prompt][19/60] batch_size:32 seq_len:384 free_mem:14.9 GiB
2024-11-06T03:08:28.915072234Z INFO 11-06 03:08:28 hpu_model_runner.py:1442] [Warmup][Prompt][20/60] batch_size:96 seq_len:128 free_mem:14.9 GiB
2024-11-06T03:08:29.960194385Z INFO 11-06 03:08:29 hpu_model_runner.py:1442] [Warmup][Prompt][21/60] batch_size:16 seq_len:640 free_mem:14.9 GiB
2024-11-06T03:08:31.465702255Z INFO 11-06 03:08:31 hpu_model_runner.py:1442] [Warmup][Prompt][22/60] batch_size:8 seq_len:1024 free_mem:14.9 GiB
2024-11-06T03:08:32.759030341Z INFO 11-06 03:08:32 hpu_model_runner.py:1442] [Warmup][Prompt][23/60] batch_size:16 seq_len:512 free_mem:14.9 GiB
2024-11-06T03:08:33.855191071Z INFO 11-06 03:08:33 hpu_model_runner.py:1442] [Warmup][Prompt][24/60] batch_size:32 seq_len:256 free_mem:14.9 GiB
2024-11-06T03:08:34.949415461Z INFO 11-06 03:08:34 hpu_model_runner.py:1442] [Warmup][Prompt][25/60] batch_size:64 seq_len:128 free_mem:14.9 GiB
2024-11-06T03:08:35.759350421Z INFO 11-06 03:08:35 hpu_model_runner.py:1442] [Warmup][Prompt][26/60] batch_size:8 seq_len:896 free_mem:14.9 GiB
2024-11-06T03:08:37.061488081Z INFO 11-06 03:08:37 hpu_model_runner.py:1442] [Warmup][Prompt][27/60] batch_size:8 seq_len:768 free_mem:14.9 GiB
2024-11-06T03:08:38.269617467Z INFO 11-06 03:08:38 hpu_model_runner.py:1442] [Warmup][Prompt][28/60] batch_size:16 seq_len:384 free_mem:14.9 GiB
2024-11-06T03:08:39.288016482Z INFO 11-06 03:08:39 hpu_model_runner.py:1442] [Warmup][Prompt][29/60] batch_size:8 seq_len:640 free_mem:14.9 GiB
2024-11-06T03:08:40.896771394Z INFO 11-06 03:08:40 hpu_model_runner.py:1442] [Warmup][Prompt][30/60] batch_size:4 seq_len:1024 free_mem:14.9 GiB
2024-11-06T03:08:41.843848007Z INFO 11-06 03:08:41 hpu_model_runner.py:1442] [Warmup][Prompt][31/60] batch_size:8 seq_len:512 free_mem:14.9 GiB
2024-11-06T03:08:42.725007973Z INFO 11-06 03:08:42 hpu_model_runner.py:1442] [Warmup][Prompt][32/60] batch_size:16 seq_len:256 free_mem:14.9 GiB


... (remaining warmup log truncated)

Error logs from vLLM API server version 0.6.3.dev588+g1033c3eb:

2024-11-06T06:07:44.937593775Z INFO 11-06 06:07:44 api_server.py:529] args: Namespace(host='0.0.0.0', port=80, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='Intel/neural-chat-7b-v3-3', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='auto', weights_load_device=None, config_format='auto', dtype='auto', kv_cache_dtype='auto', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=128, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=True, use_padding_aware_scheduling=False, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_num_prefill_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=2048, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, override_neuron_config=None, scheduling_policy='fcfs', disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False)
2024-11-06T06:07:44.945568835Z INFO 11-06 06:07:44 api_server.py:166] Multiprocessing frontend to use ipc:///tmp/4a9933a4-7df0-4f34-8fab-7e282d15998b for IPC Path.
2024-11-06T06:07:44.947903590Z INFO 11-06 06:07:44 api_server.py:179] Started engine process with PID 76
2024-11-06T06:07:45.754159186Z INFO 11-06 06:07:45 config.py:1684] For HPU, we cast models to bfloat16 instead ofusing float16 by default. Please specify `dtype` if you want to use float16.
2024-11-06T06:07:45.754172719Z WARNING 11-06 06:07:45 config.py:1710] Casting torch.float16 to torch.bfloat16.
2024-11-06T06:07:48.321337853Z INFO 11-06 06:07:48 config.py:1684] For HPU, we cast models to bfloat16 instead ofusing float16 by default. Please specify `dtype` if you want to use float16.
2024-11-06T06:07:48.321961332Z WARNING 11-06 06:07:48 config.py:1710] Casting torch.float16 to torch.bfloat16.
2024-11-06T06:07:51.591056452Z INFO 11-06 06:07:51 llm_engine.py:238] Initializing an LLM engine (v0.6.3.dev588+g1033c3eb) with config: model='Intel/neural-chat-7b-v3-3', speculative_config=None, tokenizer='Intel/neural-chat-7b-v3-3', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, rope_scaling=None, rope_theta=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.bfloat16, max_seq_len=32768, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=None, weights_load_device=hpu, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=hpu, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Intel/neural-chat-7b-v3-3, use_v2_block_manager=True, num_scheduler_steps=1, chunked_prefill_enabled=False multi_step_stream_outputs=True, enable_prefix_caching=False, use_async_output_proc=True, use_cached_outputs=True, mm_processor_kwargs=None)
2024-11-06T06:07:51.953135882Z WARNING 11-06 06:07:51 utils.py:809] Pin memory is not supported on HPU.
2024-11-06T06:07:51.954068705Z INFO 11-06 06:07:51 selector.py:146] Using HPUAttention backend.
2024-11-06T06:07:51.955959907Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_PROMPT_BS_BUCKET_MIN=1 (default:1)
2024-11-06T06:07:51.955977928Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_PROMPT_BS_BUCKET_STEP=32 (default:32)
2024-11-06T06:07:51.956000940Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_PROMPT_BS_BUCKET_MAX=256 (default:256)
2024-11-06T06:07:51.956030106Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_DECODE_BS_BUCKET_MIN=1 (default:1)
2024-11-06T06:07:51.956048597Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_DECODE_BS_BUCKET_STEP=32 (default:32)
2024-11-06T06:07:51.956063713Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_DECODE_BS_BUCKET_MAX=256 (default:256)
2024-11-06T06:07:51.956092830Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_PROMPT_SEQ_BUCKET_MIN=128 (default:128)
2024-11-06T06:07:51.956107333Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_PROMPT_SEQ_BUCKET_STEP=128 (default:128)
2024-11-06T06:07:51.956126117Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_PROMPT_SEQ_BUCKET_MAX=1024 (default:1024)
2024-11-06T06:07:51.956152364Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_DECODE_BLOCK_BUCKET_MIN=128 (default:128)
2024-11-06T06:07:51.956169377Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_DECODE_BLOCK_BUCKET_STEP=128 (default:128)
2024-11-06T06:07:51.956184949Z INFO 11-06 06:07:51 hpu_model_runner.py:126] VLLM_DECODE_BLOCK_BUCKET_MAX=4096 (default:4096)
2024-11-06T06:07:51.956211634Z INFO 11-06 06:07:51 hpu_model_runner.py:791] Prompt bucket config (min, step, max_warmup) bs:[1, 32, 256], seq:[128, 128, 1024]
2024-11-06T06:07:51.956232964Z INFO 11-06 06:07:51 hpu_model_runner.py:796] Decode bucket config (min, step, max_warmup) bs:[1, 32, 256], block:[128, 128, 4096]
2024-11-06T06:07:54.963275449Z ============================= HABANA PT BRIDGE CONFIGURATION =========================== 
2024-11-06T06:07:54.963298190Z  PT_HPU_LAZY_MODE = 1
2024-11-06T06:07:54.963301336Z  PT_RECIPE_CACHE_PATH = 
2024-11-06T06:07:54.963303909Z  PT_CACHE_FOLDER_DELETE = 0
2024-11-06T06:07:54.963306020Z  PT_HPU_RECIPE_CACHE_CONFIG = 
2024-11-06T06:07:54.963308151Z  PT_HPU_MAX_COMPOUND_OP_SIZE = 9223372036854775807
2024-11-06T06:07:54.963310288Z  PT_HPU_LAZY_ACC_PAR_MODE = 1
2024-11-06T06:07:54.963313898Z  PT_HPU_ENABLE_REFINE_DYNAMIC_SHAPES = 0
2024-11-06T06:07:54.963316883Z ---------------------------: System Configuration :---------------------------
2024-11-06T06:07:54.963334143Z Num CPU Cores : 160
2024-11-06T06:07:54.963342494Z CPU RAM       : 1056375272 KB
2024-11-06T06:07:54.963345045Z ------------------------------------------------------------------------------
2024-11-06T06:07:55.325741089Z INFO 11-06 06:07:55 selector.py:146] Using HPUAttention backend.
2024-11-06T06:07:55.351805323Z INFO 11-06 06:07:55 loader.py:405] Loading weights on hpu...
2024-11-06T06:07:55.523109974Z INFO 11-06 06:07:55 weight_utils.py:243] Using model weights format ['*.bin']
Loading pt checkpoint shards:   0% Completed | 0/2 [00:00<?, ?it/s]
Loading pt checkpoint shards:  50% Completed | 1/2 [00:03<00:03,  3.65s/it]
Loading pt checkpoint shards: 100% Completed | 2/2 [00:09<00:00,  4.96s/it]
Loading pt checkpoint shards: 100% Completed | 2/2 [00:09<00:00,  4.76s/it]
2024-11-06T06:12:01.584748470Z 
2024-11-06T06:12:01.712776811Z INFO 11-06 06:12:01 hpu_model_runner.py:677] Pre-loading model weights on hpu:0 took 13.51 GiB of device memory (13.51 GiB/94.62 GiB used) and 10.13 GiB of host memory (101.3 GiB/1007 GiB used)
2024-11-06T06:12:01.879108760Z INFO 11-06 06:12:01 hpu_model_runner.py:742] Wrapping in HPU Graph took 0 B of device memory (13.51 GiB/94.62 GiB used) and 0 B of host memory (101.3 GiB/1007 GiB used)
2024-11-06T06:12:01.955971188Z INFO 11-06 06:12:01 hpu_model_runner.py:746] Loading model weights took in total 13.51 GiB of device memory (13.51 GiB/94.62 GiB used) and 10.13 GiB of host memory (101.3 GiB/1007 GiB used)
2024-11-06T06:12:02.122943905Z Process SpawnProcess-1:
2024-11-06T06:12:02.124353987Z Traceback (most recent call last):
2024-11-06T06:12:02.124418381Z   File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
2024-11-06T06:12:02.124431776Z     self.run()
2024-11-06T06:12:02.124434528Z   File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
2024-11-06T06:12:02.124437394Z     self._target(*self._args, **self._kwargs)
2024-11-06T06:12:02.124440431Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/engine/multiprocessing/engine.py", line 394, in run_mp_engine
2024-11-06T06:12:02.124443967Z     engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
2024-11-06T06:12:02.124446295Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/engine/multiprocessing/engine.py", line 141, in from_engine_args
2024-11-06T06:12:02.124448424Z     return cls(
2024-11-06T06:12:02.124450915Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/engine/multiprocessing/engine.py", line 78, in __init__
2024-11-06T06:12:02.124452957Z     self.engine = LLMEngine(*args,
2024-11-06T06:12:02.124455140Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/engine/llm_engine.py", line 351, in __init__
2024-11-06T06:12:02.124457352Z     self._initialize_kv_caches()
2024-11-06T06:12:02.124459464Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/engine/llm_engine.py", line 486, in _initialize_kv_caches
2024-11-06T06:12:02.124461680Z     self.model_executor.determine_num_available_blocks())
2024-11-06T06:12:02.124463829Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/executor/hpu_executor.py", line 84, in determine_num_available_blocks
2024-11-06T06:12:02.124465962Z     return self.driver_worker.determine_num_available_blocks()
2024-11-06T06:12:02.124472282Z   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-11-06T06:12:02.124483563Z     return func(*args, **kwargs)
2024-11-06T06:12:02.124485860Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/worker/hpu_worker.py", line 180, in determine_num_available_blocks
2024-11-06T06:12:02.124487890Z     self.model_runner.profile_run()
2024-11-06T06:12:02.124490049Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/worker/hpu_model_runner.py", line 1451, in profile_run
2024-11-06T06:12:02.124492106Z     self.warmup_scenario(max_batch_size, max_seq_len, True, kv_caches,
2024-11-06T06:12:02.124494159Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/worker/hpu_model_runner.py", line 1523, in warmup_scenario
2024-11-06T06:12:02.124496750Z     self.execute_model(inputs, kv_caches, warmup_mode=True)
2024-11-06T06:12:02.124498854Z   File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
2024-11-06T06:12:02.124500918Z     return func(*args, **kwargs)
2024-11-06T06:12:02.124503086Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/worker/hpu_model_runner.py", line 2134, in execute_model
2024-11-06T06:12:02.124505077Z     hidden_states = self.model.forward(
2024-11-06T06:12:02.124506999Z   File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/graphs.py", line 716, in forward
2024-11-06T06:12:02.124508957Z     return wrapped_hpugraph_forward(
2024-11-06T06:12:02.124511111Z   File "/usr/local/lib/python3.10/dist-packages/habana_frameworks/torch/hpu/graphs.py", line 570, in wrapped_hpugraph_forward
2024-11-06T06:12:02.124513092Z     return orig_fwd(*args, **kwargs)
2024-11-06T06:12:02.124518413Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/worker/hpu_model_runner.py", line 387, in forward
2024-11-06T06:12:02.124520552Z     hidden_states = self.model(*args, **kwargs)
2024-11-06T06:12:02.124522575Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1514, in _wrapped_call_impl
2024-11-06T06:12:02.124524603Z     return self._call_impl(*args, **kwargs)
2024-11-06T06:12:02.124526561Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1523, in _call_impl
2024-11-06T06:12:02.124528467Z     return forward_call(*args, **kwargs)
2024-11-06T06:12:02.124530484Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/model_executor/models/llama.py", line 566, in forward
2024-11-06T06:12:02.124532466Z     model_output = self.model(input_ids, positions, kv_caches,
2024-11-06T06:12:02.124534463Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1514, in _wrapped_call_impl
2024-11-06T06:12:02.124536495Z     return self._call_impl(*args, **kwargs)
2024-11-06T06:12:02.124538647Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1564, in _call_impl
2024-11-06T06:12:02.124541079Z     result = forward_call(*args, **kwargs)
2024-11-06T06:12:02.124543216Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/model_executor/models/llama.py", line 352, in forward
2024-11-06T06:12:02.124545224Z     hidden_states, residual = layer(positions, hidden_states,
2024-11-06T06:12:02.124549176Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1514, in _wrapped_call_impl
2024-11-06T06:12:02.124551285Z     return self._call_impl(*args, **kwargs)
2024-11-06T06:12:02.124553290Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1564, in _call_impl
2024-11-06T06:12:02.124558764Z     result = forward_call(*args, **kwargs)
2024-11-06T06:12:02.124560997Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/model_executor/models/llama.py", line 261, in forward
2024-11-06T06:12:02.124564877Z     hidden_states = self.self_attn(positions=positions,
2024-11-06T06:12:02.124566833Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1514, in _wrapped_call_impl
2024-11-06T06:12:02.124568776Z     return self._call_impl(*args, **kwargs)
2024-11-06T06:12:02.124570827Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1564, in _call_impl
2024-11-06T06:12:02.124572827Z     result = forward_call(*args, **kwargs)
2024-11-06T06:12:02.124574956Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/model_executor/models/llama.py", line 191, in forward
2024-11-06T06:12:02.124577015Z     attn_output = self.attn(q, k, v, kv_cache, attn_metadata)
2024-11-06T06:12:02.124579095Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1514, in _wrapped_call_impl
2024-11-06T06:12:02.124581179Z     return self._call_impl(*args, **kwargs)
2024-11-06T06:12:02.124583093Z   File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1564, in _call_impl
2024-11-06T06:12:02.124585070Z     result = forward_call(*args, **kwargs)
2024-11-06T06:12:02.124587455Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/attention/layer.py", line 100, in forward
2024-11-06T06:12:02.124589583Z     return self.impl.forward(query,
2024-11-06T06:12:02.124591560Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/attention/backends/hpu_attn.py", line 208, in forward
2024-11-06T06:12:02.124593825Z     out = ops.prompt_attention(
2024-11-06T06:12:02.124595895Z   File "/usr/local/lib/python3.10/dist-packages/vllm_hpu_extension/ops.py", line 226, in prompt_attention
2024-11-06T06:12:02.124597898Z     attn_weights = FusedSDPA.apply(query, key, value, None, 0.0, True,
2024-11-06T06:12:02.124600173Z   File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 553, in apply
2024-11-06T06:12:02.124602156Z     return super().apply(*args, **kwargs)  # type: ignore[misc]
2024-11-06T06:12:02.124604321Z TypeError: FusedSDPA.forward() takes from 4 to 9 positional arguments but 12 were given
2024-11-06T06:12:10.476259445Z Traceback (most recent call last):
2024-11-06T06:12:10.476285257Z   File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
2024-11-06T06:12:10.476288074Z     return _run_code(code, main_globals, None,
2024-11-06T06:12:10.476290838Z   File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
2024-11-06T06:12:10.476293813Z     exec(code, run_globals)
2024-11-06T06:12:10.476297432Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/entrypoints/openai/api_server.py", line 585, in <module>
2024-11-06T06:12:10.476319911Z     uvloop.run(run_server(args))
2024-11-06T06:12:10.476338742Z   File "/usr/local/lib/python3.10/dist-packages/uvloop/__init__.py", line 82, in run
2024-11-06T06:12:10.476343922Z     return loop.run_until_complete(wrapper())
2024-11-06T06:12:10.476346216Z   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
2024-11-06T06:12:10.476427701Z   File "/usr/local/lib/python3.10/dist-packages/uvloop/__init__.py", line 61, in wrapper
2024-11-06T06:12:10.476454277Z     return await main
2024-11-06T06:12:10.476457891Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/entrypoints/openai/api_server.py", line 552, in run_server
2024-11-06T06:12:10.476523987Z     async with build_async_engine_client(args) as engine_client:
2024-11-06T06:12:10.476531429Z   File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
2024-11-06T06:12:10.476555838Z     return await anext(self.gen)
2024-11-06T06:12:10.476559711Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/entrypoints/openai/api_server.py", line 107, in build_async_engine_client
2024-11-06T06:12:10.476583794Z     async with build_async_engine_client_from_engine_args(
2024-11-06T06:12:10.476589087Z   File "/usr/lib/python3.10/contextlib.py", line 199, in __aenter__
2024-11-06T06:12:10.476613263Z     return await anext(self.gen)
2024-11-06T06:12:10.476627725Z   File "/usr/local/lib/python3.10/dist-packages/vllm-0.6.3.dev588+g1033c3eb.gaudi000-py3.10.egg/vllm/entrypoints/openai/api_server.py", line 194, in build_async_engine_client_from_engine_args
2024-11-06T06:12:10.476633460Z     raise RuntimeError(
2024-11-06T06:12:10.476638090Z RuntimeError: Engine process failed to start
