root@titan:/ws/vLLM_ModelCoverageTest# tail mct-20250122.log
2025-01-22 19:19:41,374 - DEBUG - https://huggingface.co:443 "HEAD /BAAI/Aquila-7B/resolve/main/generation_config.json HTTP/1.1" 200 0
2025-01-22 19:19:41,375 - DEBUG - Attempting to acquire lock 127590362120848 on /root/.cache/huggingface/hub/.locks/models--BAAI--Aquila-7B/684bc56cb1fb502fe6bfecbc2bb6713f2db918d7.lock
2025-01-22 19:19:41,375 - DEBUG - Lock 127590362120848 acquired on /root/.cache/huggingface/hub/.locks/models--BAAI--Aquila-7B/684bc56cb1fb502fe6bfecbc2bb6713f2db918d7.lock
2025-01-22 19:19:41,461 - DEBUG - https://huggingface.co:443 "GET /BAAI/Aquila-7B/resolve/main/generation_config.json HTTP/1.1" 200 132
2025-01-22 19:19:41,462 - DEBUG - Attempting to release lock 127590362120848 on /root/.cache/huggingface/hub/.locks/models--BAAI--Aquila-7B/684bc56cb1fb502fe6bfecbc2bb6713f2db918d7.lock
2025-01-22 19:19:41,462 - DEBUG - Lock 127590362120848 released on /root/.cache/huggingface/hub/.locks/models--BAAI--Aquila-7B/684bc56cb1fb502fe6bfecbc2bb6713f2db918d7.lock
2025-01-22 19:19:41,701 - ERROR - <vLLM-CMT> Error during inference for model BAAI/Aquila-7B: tensor parallel group already initialized, but of unexpected size: get_tensor_model_parallel_world_size()=2 vs. tensor_model_parallel_size=1
2025-01-22 19:19:41,713 - INFO - <vLLM-CMT> Model BAAI/Aquila-7B inference status: FAILED
2025-01-22 19:19:41,715 - INFO - <vLLM-CMT> Intermediate results saved to: dev_results.csv
2025-01-22 19:19:41,718 - INFO - <vLLM-CMT> Model cache directory deleted: /root/.cache/huggingface/hub/
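The error above means the tensor-parallel process group created for a previous model (world size 2) is still alive when the next model asks for `tensor_parallel_size=1`. Below is a minimal sketch of the per-model cleanup that would be expected between runs, assuming the vLLM offline `LLM` API and a version where `destroy_model_parallel` is exposed from `vllm.distributed.parallel_state` (the import path has moved between releases):

```python
import gc

import torch
from vllm import LLM, SamplingParams
# NOTE: version-dependent import; older vLLM releases expose this as
# vllm.model_executor.parallel_utils.parallel_state.destroy_model_parallel.
from vllm.distributed.parallel_state import destroy_model_parallel


def run_one_model(model_id: str, tp_size: int) -> str:
    """Load one model, run a smoke-test prompt, then tear down TP state."""
    llm = LLM(model=model_id, tensor_parallel_size=tp_size)
    out = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
    text = out[0].outputs[0].text

    # Destroy the tensor-parallel group so the next model in the list can
    # initialize with a different tensor_parallel_size.
    destroy_model_parallel()
    del llm
    gc.collect()
    torch.cuda.empty_cache()
    return text
```

Even with this teardown, a TP=2 run spawns worker processes, so the distributed state may not be fully released in the driver process, which would match the size mismatch reported in the log.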
Your current environment
The output of `python collect_env.py`
Model Input Dumps
No response
🐛 Describe the bug
I use my test tool, which you can get here: https://github.com/alexhegit/vLLM_ModelCoverageTest/ (the Python code lives in that repo). The reproduction steps are:
Step 1: set up a multi-TP (MTP) LLM list in a CSV file, i.e. the MTP batch-test CSV file.
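A minimal sketch of such a list, assuming a `model,tensor_parallel_size` column layout (the column names here are hypothetical; the real schema is defined by the test tool in the repo above). Note that the TP value changes between consecutive entries, which is the case that fails:

```csv
model,tensor_parallel_size
BAAI/Aquila-7B,2
BAAI/Aquila-7B,1
```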
Step 2: run the test.
Step 3: check the results (intermediate results are saved to dev_results.csv, as shown in the log).
If the model list is set with TP like that, all of the models run with PASS.
Step 4: check the log. The tail of mct-20250122.log at the top of this report shows the failure for BAAI/Aquila-7B.
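For reference, here is a minimal sketch of what the batch runner appears to reduce to when two consecutive CSV entries request different TP sizes (the model name and sizes are taken from the log above; this is an assumption about the tool's behavior, not its actual code):

```python
from vllm import LLM

# First entry: initializes the tensor-parallel group with world size 2.
llm_a = LLM(model="BAAI/Aquila-7B", tensor_parallel_size=2)
del llm_a

# Second entry, same process, now with tensor_parallel_size=1. This raises:
#   tensor parallel group already initialized, but of unexpected size:
#   get_tensor_model_parallel_world_size()=2 vs. tensor_model_parallel_size=1
llm_b = LLM(model="BAAI/Aquila-7B", tensor_parallel_size=1)
```

One way to sidestep the problem is to launch each model in a fresh subprocess, so that no distributed state leaks from one entry to the next.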