What happened?
After upgrading to 1.60, this param no longer works for me.
In my proxy config, I have
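Something along these lines (a minimal sketch, not my exact config; the model alias, model name, and key are placeholders for a GitHub-hosted o-series model):

```yaml
model_list:
  - model_name: github-o1            # placeholder alias
    litellm_params:
      model: github/o1-mini          # placeholder GitHub-hosted o-series model
      api_key: os.environ/GITHUB_API_KEY

litellm_settings:
  drop_params: true                  # expected to strip unsupported params such as max_tokens
```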
Then I send a request that includes max_tokens.
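Roughly like this (a sketch using the OpenAI Python client pointed at the proxy; the URL, key, and model alias are placeholders):

```python
from openai import OpenAI

# Point the OpenAI client at the LiteLLM proxy (placeholder URL and key).
client = OpenAI(base_url="http://localhost:4000", api_key="sk-1234")

# o-series models reject `max_tokens`; with drop_params enabled I expect the
# proxy to drop it (or map it to `max_completion_tokens`) before forwarding.
resp = client.chat.completions.create(
    model="github-o1",  # placeholder alias from the config sketch above
    messages=[{"role": "user", "content": "Hello"}],
    max_tokens=256,     # the parameter the upstream API rejects
)
print(resp.choices[0].message.content)
```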
I get this error back:
{"error":{"message":"litellm.BadRequestError: GithubException - Error code: 400 - {'error': {'message': \"Unsupported parameter: 'max_tokens' is not supported with this model. Use 'max_completion_tokens' instead.\", 'type': 'invalid_request_error', 'param': 'max_tokens', 'code': 'unsupported_parameter'...
I tried downgrading to 1.59.10, but that did not solve it. In an older version I had modified proxy_server.py to manually strip these params, but that no longer works.
Relevant log output
Are you a ML Ops Team?
No
What LiteLLM version are you on?
1.60
Twitter / LinkedIn details
No response