
Add support to vLLM inference engine - to possibly gain x10 speedup in inference #2785

Closed
ofirkris opened this issue Jun 20, 2023 · 16 comments
Labels: enhancement (New feature or request), stale

Comments

@ofirkris
Contributor

vLLM is an open-source LLM inference and serving library that claims up to 24x higher throughput than HuggingFace Transformers and powers Vicuna and Chatbot Arena.

Blog post: https://vllm.ai/
Repo: https://github.com/vllm-project/vllm

@ofirkris added the enhancement (New feature or request) label on Jun 20, 2023
@Slug-Cat

If the performance claims aren't overcooked or super situational, this could be huge

@CamiloMM

AI is where you have some of the brightest minds in the world working on some of the most complicated maths and somehow someone just comes and does something like this (assuming it's real).

Are we in an "AI summer"? 😂

@Ph0rk0z
Contributor

Ph0rk0z commented Jun 21, 2023

It's like ExLlama for everything else... a new loader could just be added for it.

@tensiondriven
Contributor

vLLM only gets that ~24x speedup when running full-fat models with massive parallelization, so if you need to run 100 inferences at the same time, it's fast. But for most people, ExLlama is still faster/better. @turboderp has some good insights on the LocalLLaMA subreddit.

Unless someone is feeling ambitious, I think this could be closed. The issue poster probably didn't understand what vLLM is really for.

@Ph0rk0z
Contributor

Ph0rk0z commented Jun 24, 2023

Does tensor parallelism help with multi-GPU? And with the multi-user support, this might actually serve its intended purpose.

@cibernicola

Does anyone know anything about this?
[attached image]

@turboderp
Contributor

I'm not sure how they arrive at those results. Plain HF Transformers can be mighty slow, but you have to really try to make it that slow, I feel. As for vLLM, it's not for quantized models, and as such it's quite a bit slower than ExLlama (or llama.cpp with GPU acceleration, for that matter). If you're deploying a full-precision model to serve inference to multiple clients it might be very useful, though.

@github-actions github-actions bot added the stale label Aug 7, 2023
@github-actions

github-actions bot commented Aug 7, 2023

This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, please leave a comment below.

@yhyu13
Contributor

yhyu13 commented Dec 4, 2023

@oobabooga

#4794 (comment)

Since we're not considering adding new model loaders for single-user mode, we should consider vLLM now: it frequently supports newly released models like Qwen, with both multi-client serving and quantization (AWQ). https://github.com/vllm-project/vllm

@rafa-9
Contributor

rafa-9 commented Jan 8, 2024

@oobabooga is this on the roadmap?

@nonetrix

Seems it's not coming for now at least
#4860

@fblgit

fblgit commented Feb 7, 2024

This should be reconsidered. The concern about plaguing the codebase with CUDA dependencies is valid... but we should address the design constraints to make this happen rather than close the door entirely on something that could benefit ooba's tool.
I guess you could serve an OpenAI-format API externally from a vLLM model and point ooba's side at it (a rough sketch follows below). It could simply be a separate script with its own requirements to hack this together?

@oobabooga, what would the acceptance criteria be? I find it very handy to be able to serve/eval/play at the same time in a friendly ecosystem like ooba's.
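A rough sketch of that idea, assuming a vLLM OpenAI-compatible server is already running locally (the port, model name, and prompt below are placeholders):

```python
# Rough sketch, not an actual integration: talk to an externally running vLLM
# OpenAI-compatible server, e.g. one started separately with
#   python -m vllm.entrypoints.openai.api_server --model <model>
# The base_url, model name, and prompt are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="meta-llama/Llama-2-7b-chat-hf",  # whatever model the vLLM server loaded
    messages=[{"role": "user", "content": "Hello from the webui side!"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```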

@micsama

micsama commented Apr 23, 2024

vLLM has gradually introduced support for GPTQ and AWQ models, with plans to accommodate the as-yet-unmerged QLoRA and QA-LoRA work. Moreover, the acceleration it delivers is now strikingly evident. Given these developments, I propose considering vLLM support (see the sketch below). The project is rapidly evolving and poised for a promising future.
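For reference, a minimal sketch of loading an AWQ-quantized model through vLLM's offline Python API (the model repo, sampling settings, and prompt are only illustrative):

```python
# Minimal sketch: running an AWQ-quantized model with vLLM's offline LLM API.
# The model identifier is just an example AWQ checkpoint; the sampling values
# and prompt are arbitrary.
from vllm import LLM, SamplingParams

llm = LLM(
    model="TheBloke/Mistral-7B-Instruct-v0.2-AWQ",  # example AWQ repo
    quantization="awq",
)

params = SamplingParams(temperature=0.7, max_tokens=128)
outputs = llm.generate(["Explain paged attention in one paragraph."], params)
for out in outputs:
    print(out.outputs[0].text)
```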

@eigen2017

+1 for vLLM.
vLLM has become the first choice when we need to serve LLMs online.
It's not only a distributed, higher-throughput thing; it is also accelerated at batch=1.
It has FlashAttention, PagedAttention...
Somehow I found that someone here has the misunderstanding that "parallelism is only for more TPS, not for batch=1";
being as parallel as you can is good for batch=1 too, given how CUDA is designed.

@eigen2017

For example, vLLM manages all the tokens' KV caches in blocks, so it can be faster even when the batch size is 1 (a conceptual sketch of the idea follows).
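A conceptual sketch of that block-based KV-cache idea (a toy illustration, not vLLM's actual implementation; the block size, class names, and free-list allocator are invented for clarity):

```python
# Toy sketch of PagedAttention-style block-based KV-cache management.
# Simplified for illustration: real vLLM does this on the GPU with far more
# machinery (prefix sharing, copy-on-write, scheduling, etc.).

BLOCK_SIZE = 16  # tokens per physical KV-cache block (illustrative value)

class BlockAllocator:
    """Hands out fixed-size physical blocks from a shared pool."""
    def __init__(self, num_blocks: int):
        self.free_blocks = list(range(num_blocks))

    def allocate(self) -> int:
        return self.free_blocks.pop()

    def free(self, block_id: int) -> None:
        self.free_blocks.append(block_id)

class SequenceKVCache:
    """Maps a sequence's logical token positions onto physical blocks."""
    def __init__(self, allocator: BlockAllocator):
        self.allocator = allocator
        self.block_table: list[int] = []  # logical block index -> physical block id
        self.num_tokens = 0

    def append_token(self) -> None:
        # Grab a new physical block only when the current one is full, so memory
        # is allocated on demand instead of reserved for the maximum length.
        if self.num_tokens % BLOCK_SIZE == 0:
            self.block_table.append(self.allocator.allocate())
        self.num_tokens += 1

    def release(self) -> None:
        for block_id in self.block_table:
            self.allocator.free(block_id)
        self.block_table.clear()

# Even at batch size 1, on-demand block allocation avoids reserving KV memory
# for the full context length up front.
allocator = BlockAllocator(num_blocks=1024)
seq = SequenceKVCache(allocator)
for _ in range(40):        # pretend we generated 40 tokens
    seq.append_token()
print(seq.block_table)     # three physical blocks cover 40 tokens
seq.release()
```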

@KnutJaegersberg

Yeah, vLLM support should be added.
