
[Bug]: Invalid metadata when calling OpenAI models #8209

Open
wipash opened this issue Feb 3, 2025 · 0 comments
Labels
bug Something isn't working

Comments

wipash commented Feb 3, 2025

What happened?

When calling OpenAI models, I get this error:

Error occurred while generating model response. Please try again. Error: Error: 400 litellm.BadRequestError: OpenAIException - Error code: 400 - {'error': {'message': "Invalid 'metadata': too many properties. Expected an object with at most 16 properties, but got an object with 28 properties instead.", 'type': 'invalid_request_error', 'param': 'metadata', 'code': 'object_above_max_properties'}} Received Model Group=gpt-4o-mini Available Model Group Fallbacks=None

This happens with gpt-4o, gpt-4o-mini, and o3-mini.
I have the same models configured via Azure OpenAI, and they work fine.
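For context, the 400 response matches OpenAI's documented limit on the `metadata` request parameter: at most 16 key-value pairs. The proxy is apparently forwarding its internal metadata object (28 keys in the log below) straight through. A minimal sketch of that check in plain Python (the function name and structure here are illustrative, not LiteLLM or OpenAI code):

```python
def validate_openai_metadata(metadata: dict) -> None:
    """Raise ValueError if `metadata` would be rejected by the OpenAI API.

    OpenAI documents a limit of at most 16 key-value pairs on the
    `metadata` parameter of a request.
    """
    MAX_PROPERTIES = 16
    if len(metadata) > MAX_PROPERTIES:
        raise ValueError(
            f"Invalid 'metadata': too many properties. Expected an object "
            f"with at most {MAX_PROPERTIES} properties, but got an object "
            f"with {len(metadata)} properties instead."
        )

# A 28-key object, like the proxy metadata in the debug log, trips the check:
internal_metadata = {f"key_{i}": None for i in range(28)}
try:
    validate_openai_metadata(internal_metadata)
except ValueError as exc:
    print(exc)
```

The Azure endpoints presumably don't enforce the same property count, which would explain why the identical config works there.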

I can trigger this error from the LiteLLM playground:

[Screenshot: the same error reproduced in the LiteLLM playground]

Config:

model_list:
  # OpenAI
  - model_name: o3-mini
    litellm_params:
      model: openai/o3-mini
      api_key: os.environ/OPENAI_API_KEY

  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY

  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

  # Azure
  - model_name: azure-gpt-4o-mini
    litellm_params:
      model: azure/gpt-4o-mini
      api_key: os.environ/AZURE_OPENAI_AUSTRALIAEAST_API_KEY
      api_base: os.environ/AZURE_OPENAI_AUSTRALIAEAST_ENDPOINT
      api_version: 2024-08-01-preview

  - model_name: azure-gpt-4o
    litellm_params:
      model: azure/gpt-4o
      api_key: os.environ/AZURE_OPENAI_WESTUS_API_KEY
      api_base: os.environ/AZURE_OPENAI_WESTUS_ENDPOINT
      api_version: 2024-08-01-preview

  - model_name: azure-o1
    litellm_params:
      model: azure/o1
      api_key: os.environ/AZURE_OPENAI_EASTUS2_API_KEY
      api_base: os.environ/AZURE_OPENAI_EASTUS2_ENDPOINT
      api_version: 2024-12-01-preview

  - model_name: azure-o3-mini
    litellm_params:
      model: azure/o3-mini
      api_key: os.environ/AZURE_OPENAI_EASTUS2_API_KEY
      api_base: os.environ/AZURE_OPENAI_EASTUS2_ENDPOINT
      api_version: 2024-12-01-preview


general_settings:
  proxy_batch_write_at: 60
  database_connection_pool_limit: 10

  disable_spend_logs: false
  disable_error_logs: false

  background_health_checks: false
  health_check_interval: 300

  store_model_in_db: true

litellm_settings:
  request_timeout: 600
  json_logs: true
  enable_preview_features: true
  redact_user_api_key_info: true
  turn_off_message_logging: true

Relevant log output

{
  "message": "Inside async function with retries: args - (); kwargs - {'user': 'redacted', 'proxy_server_request': {'url': 'http://litellm.litellm.svc.cluster.local:4000/chat/completions', 'method': 'POST', 'headers': {'content-length': '164', 'accept': 'application/json', 'content-type': 'application/json', 'user-agent': 'OpenAI/JS 4.71.1', 'x-stainless-lang': 'js', 'x-stainless-package-version': '4.71.1', 'x-stainless-os': 'Linux', 'x-stainless-arch': 'x64', 'x-stainless-runtime': 'node', 'x-stainless-runtime-version': 'v20.18.2', 'x-stainless-helper-method': 'stream', 'x-stainless-retry-count': '0', 'accept-encoding': 'gzip,deflate', 'host': 'litellm.litellm.svc.cluster.local:4000', 'connection': 'keep-alive'}, 'body': {'model': 'o3-mini', 'user': '6736c195266216764eecdf43', 'stream': True, 'messages': [{'role': 'user', 'content': 'Hi there!'}]}}, 'metadata': {'requester_metadata': {}, 'user_api_key_hash': 'redacted', 'user_api_key_alias': 'redacted', 'user_api_key_team_id': 'redacted', 'user_api_key_user_id': None, 'user_api_key_org_id': None, 'user_api_key_team_alias': 'redacted', 'user_api_key_end_user_id': 'redacted', 'user_api_key': 'redacted', 'user_api_end_user_max_budget': None, 'litellm_api_version': '1.60.0', 'global_max_parallel_requests': None, 'user_api_key_team_max_budget': None, 'user_api_key_team_spend': 0.001455905, 'user_api_key_spend': 0.001455905, 'user_api_key_max_budget': None, 'user_api_key_model_max_budget': {}, 'user_api_key_metadata': {'service_account_id': 'redacted'}, 'headers': {'content-length': '164', 'accept': 'application/json', 'content-type': 'application/json', 'user-agent': 'OpenAI/JS 4.71.1', 'x-stainless-lang': 'js', 'x-stainless-package-version': '4.71.1', 'x-stainless-os': 'Linux', 'x-stainless-arch': 'x64', 'x-stainless-runtime': 'node', 'x-stainless-runtime-version': 'v20.18.2', 'x-stainless-helper-method': 'stream', 'x-stainless-retry-count': '0', 'accept-encoding': 'gzip,deflate', 'host': 
'litellm.litellm.svc.cluster.local:4000', 'connection': 'keep-alive'}, 'endpoint': 'http://litellm.litellm.svc.cluster.local:4000/chat/completions', 'litellm_parent_otel_span': None, 'requester_ip_address': '', 'model_group': 'o3-mini'}, 'litellm_call_id': '8e10b1f1-f07c-47e4-bcdf-813d1bb22cb5', 'litellm_logging_obj': <litellm.litellm_core_utils.litellm_logging.Logging object at 0x7f3701f4d400>, 'model': 'o3-mini', 'messages': [{'role': 'user', 'content': 'Hi there!'}], 'stream': True, 'original_function': <bound method Router._acompletion of <litellm.router.Router object at 0x7f3701f4fa10>>, 'num_retries': 2, 'litellm_trace_id': '5757f792-6509-4fb9-a626-069fdc20bca8'}",
  "level": "DEBUG",
  "timestamp": "2025-02-03T22:42:09.845082"
}
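The `'metadata'` object in the log above carries 28 proxy-internal keys (`user_api_key_hash`, `headers`, `model_group`, etc.). Until the proxy stops forwarding it, one client-side workaround would be to prune the object to 16 keys before the request goes out. A hypothetical helper (not a LiteLLM API; `prune_metadata` and its parameters are this sketch's own names):

```python
def prune_metadata(metadata: dict, keep: tuple = (), limit: int = 16) -> dict:
    """Return a copy of `metadata` with at most `limit` keys.

    Keys listed in `keep` are retained first; any remaining slots are
    filled in the dict's insertion order.
    """
    pruned = {k: metadata[k] for k in keep if k in metadata}
    for key, value in metadata.items():
        if len(pruned) >= limit:
            break
        pruned.setdefault(key, value)
    return pruned
```

For example, pruning a 28-key object while pinning `model_group` yields a 16-key object that OpenAI would accept.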

I have these environment variables configured:

TZ: Pacific/Auckland
LITELLM_LOG: "DEBUG"
LITELLM_MODE: "PRODUCTION" # This disables loading env vars from .env
LITELLM_DONT_SHOW_FEEDBACK_BOX: "true"
NO_DOCS: "true"

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.60.0

Twitter / LinkedIn details

No response

@wipash wipash added the bug Something isn't working label Feb 3, 2025