
fix streaming function type #8409

Conversation


Kaushikdkrikhanu commented Feb 9, 2025

Title

Let the streamed tool call's function `type` be `None` to follow the OpenAI style.
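
For context, OpenAI only sets `type: "function"` on the first streamed tool-call delta; the later deltas that carry argument fragments leave it unset. A rough sketch of the two shapes (my own illustration, field values made up, not captured output):

# First streamed tool-call delta: the call is announced here.
first_delta = {
    "index": 0,
    "id": "call_abc123",  # illustrative id
    "type": "function",
    "function": {"name": "web_search", "arguments": ""},
}

# Later deltas carry only argument fragments; `type` stays None.
# LiteLLM was re-emitting "function" here (see #8012), which this PR stops.
later_delta = {
    "index": 0,
    "id": None,
    "type": None,
    "function": {"name": None, "arguments": '{"query"'},
}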

Relevant issues

Fixes #8012

Type

🐛 Bug Fix

Changes

[REQUIRED] Testing - Attach a screenshot of any new tests passing locally

Will add some tests once my query is addressed. Trying to understand a comment.
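
For reference, a test along these lines might look like the sketch below. This is my illustration only, not a test from this PR; it reuses the setup from "How to reproduce" and calls the live API, so it is not a proper unit test:

import litellm

def test_streaming_tool_call_type_is_none_after_first_delta():
    # Hedged sketch: after the first tool-call delta, `type` should be
    # None, matching OpenAI's streaming behavior (see #8012).
    response = litellm.completion(
        model="gpt-4-1106-preview",
        messages=[{"role": "user", "content": "Search for recent AI breakthroughs"}],
        tools=[{
            "type": "function",
            "function": {
                "name": "web_search",
                "description": "Searches the web",
                "parameters": {
                    "type": "object",
                    "properties": {"query": {"type": "string"}},
                    "required": ["query"],
                },
            },
        }],
        stream=True,
    )
    deltas = []
    for chunk in response:
        if chunk.choices and chunk.choices[0].delta.tool_calls:
            deltas.extend(chunk.choices[0].delta.tool_calls)
    assert deltas, "expected the model to stream a tool call"
    assert all(tc.type is None for tc in deltas[1:])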

How to reproduce

OpenAI

import os
from openai import OpenAI

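# Initialize the client (API key left blank in this repro)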
client = OpenAI(api_key="")

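# Request a streamed chat completion with a web_search tool attached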
response = client.chat.completions.create(
    model="gpt-4-1106-preview",
    messages=[{
        "role": "user", 
        "content": "Search for recent AI breakthroughs"
    }],
    tools=[{
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Searches the web",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"}
                },
                "required": ["query"]
            }
        }
    }],
    stream=True
)

print("Stream output:")
for chunk in response:
    print(chunk)
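
To surface just the field in question, one can print only the tool-call `type` per delta (a sketch of mine; it needs a fresh `response` stream, since the loop above already consumes this one):

for chunk in response:
    if not chunk.choices:
        continue
    for tc in chunk.choices[0].delta.tool_calls or []:
        # With OpenAI, expect type='function' once, then type=None.
        print(f"type={tc.type!r} arguments={tc.function.arguments!r}")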

Litellm

import os
from litellm import completion
import json

# Set your API key
os.environ["OPENAI_API_KEY"] = ""

# Create the chat completion request
response = completion(
    model="gpt-4-1106-preview",
    messages=[{
        "role": "user", 
        "content": "Search for recent AI breakthroughs"
    }],
    tools=[{
        "type": "function",
        "function": {
            "name": "web_search",
            "description": "Searches the web",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {"type": "string"}
                },
                "required": ["query"]
            }
        }
    }],
    stream=True
)

print("Stream output:")
for chunk in response:
    # Pretty print the chunk
    print(json.dumps(chunk, indent=2, default=str))
    print("---")  
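
To compare the two runs directly, a small helper can collect just the `type` values from each stream (my sketch; it assumes each snippet above is wrapped to return its stream):

def tool_call_types(stream):
    # Collect the `type` field from every streamed tool-call delta.
    types = []
    for chunk in stream:
        if not chunk.choices:
            continue
        for tc in chunk.choices[0].delta.tool_calls or []:
            types.append(tc.type)
    return types

# Expected (OpenAI):              ['function', None, None, ...]
# Reported for LiteLLM in #8012:  ['function', 'function', ...]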

vercel bot commented Feb 9, 2025

The latest updates on your projects.

Name: litellm
Status: ✅ Ready
Updated (UTC): Feb 10, 2025 4:16am

ishaan-jaff and others added 10 commits February 10, 2025 04:15
* test_bedrock_completion_with_region_name

* test_bedrock_base_model_helper

* test_bedrock_base_model_helper

* fix aws_bedrock_runtime_endpoint

* test_dynamic_aws_params_propagation

* test_dynamic_aws_params_propagation
…BerriAI#6192) (BerriAI#8357)

* [Bug] UI: Newly created key does not display on the View Key Page (BerriAI#8039)

- Fixed issue where all keys appeared blank for admin users.
- Implemented filtering of data via team settings to ensure all keys are displayed correctly.

* Fix:
- Updated the validator to allow model editing when `keyTeam.team_alias === "Default Team"`.
- Ensured other teams still follow the original validation rules.

* - added some classes in global.css
- added text wrap in output of request,response and metadata in index.tsx
- fixed styles of table in table.tsx

* - added full payload when we open single log entry
- added Combined Info Card in index.tsx

* fix: keys not showing on refresh for internal user

* add: search added in teams
* fix(parallel_request_limiter.py): add back parallel request information to max parallel request limiter

Resolves BerriAI#8392

* test: mark flaky test to handle time based tracking issues

* feat(model_management_endpoints.py): expose new patch `/model/{model_id}/update` endpoint

Allows updating specific values of a model in db - makes it easy for admin to know this by calling it a PATCH

* feat(edit_model_modal.tsx): allow user to update llm provider + api key on the ui

* fix: fix linting error
* fix(client_initialization_utils.py): handle custom llm provider set with valid value not from model name

* fix(handle_jwt.py): handle groups not existing in jwt token

if user not in group, this won't exist

* fix(handle_jwt.py): add new `enforce_team_based_model_access` flag to jwt auth

allows proxy admin to enforce user can only call model if team has access

* feat(navbar.tsx): expose new dropdown in navbar - allow org admin to create teams within org context

* fix(navbar.tsx): remove non-functional cogicon

* fix(proxy/utils.py): include user-org memberships in `/user/info` response

return orgs user is a member of and the user role within org

* feat(organization_endpoints.py): allow internal user to query `/organizations/list` and get all orgs they belong to

enables org admin to select org they belong to, to create teams

* fix(navbar.tsx): show change in ui when org switcher clicked

* feat(page.tsx): update user role based on org they're in

allows org admin to create teams in the org context

* feat(teams.tsx): working e2e flow for allowing org admin to add new teams

* style(navbar.tsx): clarify switching orgs on UI is in BETA

* fix(organization_endpoints.py): handle getting but not setting members

* test: fix test

* fix(client_initialization_utils.py): revert custom llm provider handling fix - causing unintended issues

* docs(token_auth.md): cleanup docs
@Kaushikdkrikhanu Kaushikdkrikhanu marked this pull request as ready for review February 10, 2025 04:15
@Kaushikdkrikhanu Kaushikdkrikhanu marked this pull request as draft February 10, 2025 04:17
Successfully merging this pull request may close these issues.

[Bug]: Inconsistent stream output between OpenAI and LiteLLM clients during tool calling