
Issues: vllm-project/vllm-ascend

Issues list

[Doc]: Cannot use TP/PP in this version (labels: bug, question)
#148 opened Feb 24, 2025 by shuowoshishui
Problem with Qwen2.5-VL-7B (labels: bug)
#131 opened Feb 21, 2025 by ffanyt
Failed to infer device type (labels: question)
#130 opened Feb 21, 2025 by Qukka0914
DeepSeek-R1 on 0.7.1-dev: "Torch not compiled with CUDA enabled" (labels: question)
#122 opened Feb 20, 2025 by ColdeZhang
[Bug]: Qwen2-VL-72B-Instruct inference failure (labels: bug)
#115 opened Feb 19, 2025 by invokerbyxv
Add a tutorial for Qwen 2.5-VL (labels: documentation, help wanted)
#75 opened Feb 17, 2025 by Yikun
[main] vllm-ascend Roadmap Q1 2025
#71 opened Feb 17, 2025 by Yikun
24 of 34 tasks
Abnormal first token output on 910B NPU during inference (labels: bug)
#46 opened Feb 11, 2025 by Jozenn
Add documentation for benchmarking and profiling on Ascend NPU (labels: documentation)
#26 opened Feb 10, 2025 by Yikun
[v0.7.1rc1] FAQ & Feedback
#19 opened Feb 8, 2025 by Yikun