
[FEATURE] Enables offline /score for embedding models #12021

Status: Open · wants to merge 13 commits into base: main

Conversation

@gmarinho2 commented Jan 13, 2025

Enables LLM.score() for all embedding models. The request_id of a scoring request consists of the request_ids of the two embeddings in the pair, joined by "_". The prompt_token_ids are the concatenation of all the token ids, in order, separated by the padding token when one is available. This PR is the first of two for completing the issue; the second PR will implement the same feature in the OpenAI API.

Issue: [Feature]: Enable /score endpoint for all embedding models (1/2)
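
The combination scheme described above can be sketched as follows (a minimal illustration with a hypothetical helper name, not the actual vLLM code):

    from typing import List, Optional, Tuple

    def combine_score_request(
        req_id_1: str,
        req_id_2: str,
        tokens_1: List[int],
        tokens_2: List[int],
        pad_token_id: Optional[int],
    ) -> Tuple[str, List[int]]:
        # The combined request_id joins the two embedding request_ids with "_".
        request_id = f"{req_id_1}_{req_id_2}"
        # The combined prompt_token_ids concatenates both token lists, inserting
        # the padding token between them when the tokenizer defines one.
        if pad_token_id is not None:
            prompt_token_ids = tokens_1 + [pad_token_id] + tokens_2
        else:
            prompt_token_ids = tokens_1 + tokens_2
        return request_id, prompt_token_ids

    # Example: combine_score_request("a", "b", [1, 2], [3, 4], pad_token_id=0)
    # returns ("a_b", [1, 2, 0, 3, 4])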


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@mergify mergify bot added the frontend label Jan 13, 2025
@gmarinho2 gmarinho2 force-pushed the main branch 5 times, most recently from 96484b7 to d37339e Compare January 14, 2025 17:58
@joerunde (Collaborator) commented Jan 15, 2025

@maxdebayser @gmarinho2 This looks like it only touches the offline entrypoint, but the PR title mentions the /score endpoint, which I had assumed meant the online REST interface.

It's not 100% clear to me from the linked issue what was intended: is there more work planned to support the online interface, or are we only aiming for offline?

@maxdebayser (Contributor) commented:

@joerunde, we're aiming for both. @gmarinho2 started with the offline API first.

mergify bot commented Jan 16, 2025

This pull request has merge conflicts that must be resolved before it can be merged. Please rebase the PR, @gmarinho2.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jan 16, 2025
@gmarinho2 gmarinho2 changed the title [FEATURE] Enables /score endpoint for embedding models [FEATURE] Enables offline /score for embedding models Jan 16, 2025
vllm/entrypoints/llm.py: outdated review comment (resolved)
@maxdebayser (Contributor) left a review comment:
I've left some suggestions, but it looks good to me. I think we can open this as a PR now.

Signed-off-by: Gabriel Marinho <[email protected]>
@mergify mergify bot removed the needs-rebase label Jan 21, 2025
@gmarinho2 gmarinho2 marked this pull request as ready for review January 21, 2025 15:51
@DarkLight1337 (Member) left a review comment:
Some initial comments.

I also suggest splitting out the logic for scoring and general embedding models into separate functions.
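
A rough illustration of that suggested split (hypothetical method names; a sketch, not the code from this PR):

    # score() dispatches to one helper per model type, so each code path
    # stays small and testable.
    class ScorerSketch:
        def __init__(self, is_cross_encoder: bool):
            self.is_cross_encoder = is_cross_encoder

        def score(self, text_1: str, text_2: str) -> float:
            if self.is_cross_encoder:
                return self._cross_encoding_score(text_1, text_2)
            return self._embedding_score(text_1, text_2)

        def _cross_encoding_score(self, text_1: str, text_2: str) -> float:
            # Cross-encoder models consume the pair jointly and output a score.
            raise NotImplementedError

        def _embedding_score(self, text_1: str, text_2: str) -> float:
            # Embedding models embed each text separately and compare the
            # resulting vectors (e.g. cosine similarity).
            raise NotImplementedError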

vllm/entrypoints/llm.py: 3 outdated review comments (resolved)
@DarkLight1337 (Member) commented:

> I also suggest splitting out the logic for scoring and general embedding models into separate functions.

Have you addressed this?

use_tqdm, lora_request,
prompt_adapter_request) -> List[ScoringRequestOutput]:

encoded_output = self.encode(text_1 + text_2)
A Contributor commented on this diff:
Let's pass use_tqdm, lora_request, prompt_adapter_request to self.encode() here
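
Roughly, the suggested fix would forward those options to the underlying call (a sketch based on the parameter names in the diff above):

    # Forward the scoring call's options instead of dropping them.
    encoded_output = self.encode(
        text_1 + text_2,
        use_tqdm=use_tqdm,
        lora_request=lora_request,
        prompt_adapter_request=prompt_adapter_request,
    )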

Comment on lines 1003 to 1004
use_tqdm, lora_request,
prompt_adapter_request) -> List[ScoringRequestOutput]:
A Member commented:
Please add type annotations for the parameters.
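
For example, the signature could be annotated along these lines (a sketch; the helper name is hypothetical and the exact types depend on the surrounding code in llm.py):

    from typing import List, Optional, Union

    from vllm.lora.request import LoRARequest
    from vllm.outputs import ScoringRequestOutput
    from vllm.prompt_adapter.request import PromptAdapterRequest

    def _embedding_score(
        self,
        text_1: List[str],
        text_2: List[str],
        use_tqdm: bool,
        lora_request: Optional[Union[List[LoRARequest], LoRARequest]],
        prompt_adapter_request: Optional[PromptAdapterRequest],
    ) -> List[ScoringRequestOutput]:
        ...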

raise ValueError(
"MistralTokenizer not supported for cross-encoding")
"Score API is only enabled for `--task embed or score`")
A Member suggested a change:

-    "Score API is only enabled for `--task embed or score`")
+    "Score API is only enabled for `--task embed` or `--task score`")

@@ -1032,6 +1133,7 @@ def score(
A list of ``ScoringRequestOutput`` objects containing the
generated scores in the same order as the input prompts.
"""

A Member suggested a change (remove the added blank line):

Avoid unnecessary line changes.

Signed-off-by: Gabriel Marinho <[email protected]>
@joerunde added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jan 23, 2025
@joerunde (Collaborator) commented:

Added the ready tag so the full CI run can run your tests as well. Looks like the only fastcheck failure was a network issue.

Signed-off-by: Gabriel Marinho <[email protected]>
Signed-off-by: Gabriel Marinho <[email protected]>
Labels
frontend, ready
Projects
None yet
Development

Successfully merging this pull request may close these issues.

4 participants