[Fix] Better error message for batched prompts (vllm-project#342)
zhuohan123 authored Jul 3, 2023
1 parent 0bd2a57 commit 0ffded8
Showing 1 changed file with 7 additions and 1 deletion.
vllm/entrypoints/openai/api_server.py (7 additions, 1 deletion)

@@ -358,7 +358,13 @@ async def create_completion(raw_request: Request):
     model_name = request.model
     request_id = f"cmpl-{random_uuid()}"
     if isinstance(request.prompt, list):
-        assert len(request.prompt) == 1
+        if len(request.prompt) == 0:
+            return create_error_response(HTTPStatus.BAD_REQUEST,
+                                         "please provide at least one prompt")
+        if len(request.prompt) > 1:
+            return create_error_response(HTTPStatus.BAD_REQUEST,
+                                         "multiple prompts in a batch is not "
+                                         "currently supported")
         prompt = request.prompt[0]
     else:
         prompt = request.prompt
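
The change replaces the old `assert len(request.prompt) == 1` with explicit HTTP 400 responses when `prompt` is a list that is empty or contains more than one entry. Below is a minimal client-side sketch of the new validation path, assuming a vLLM OpenAI-compatible server is running locally on the default port 8000; the model name is a placeholder and the payload follows the OpenAI completions API shape.

    # Sketch only: exercises the new error handling in /v1/completions.
    # Assumes a local vLLM OpenAI-compatible server; "my-model" is a placeholder.
    import requests

    url = "http://localhost:8000/v1/completions"

    # A single-element list is still accepted: the server unwraps it to request.prompt[0].
    ok = requests.post(url, json={
        "model": "my-model",
        "prompt": ["Hello, world!"],
        "max_tokens": 16,
    })
    print(ok.status_code)  # expected: 200

    # More than one prompt now yields HTTP 400 with a descriptive message
    # instead of failing on the old assert.
    bad = requests.post(url, json={
        "model": "my-model",
        "prompt": ["first prompt", "second prompt"],
        "max_tokens": 16,
    })
    print(bad.status_code)  # expected: 400
    print(bad.json())       # "multiple prompts in a batch is not currently supported"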
