What happened?
I have been working on modifying the Swarm agents library to integrate with LiteLLM, allowing it to use many other LLMs instead of being limited to OpenAI's models. The required modification was minimal; I only needed to change line 70 in core.py from `return self.client.chat.completions.create(**create_params)` to `return completion(**create_params)`.
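Concretely, the change is a one-line swap (sketched here as a diff against swarm/core.py; this assumes `completion` is imported from litellm at the top of the file):

```diff
- return self.client.chat.completions.create(**create_params)
+ return completion(**create_params)
```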
However, I encountered an issue when using streaming to run the agents. The error is as follows:
Traceback (most recent call last):
  File "/Users/vicente/Developer/agents-test/swarm_test/main.py", line 53, in <module>
    for chunk in stream:
  File "/Users/vicente/Developer/agents-test/swarm_test/swarm/core.py", line 215, in run_and_stream
    tool_call_object = ChatCompletionMessageToolCall(
  File "/Users/vicente/Developer/agents-test/.venv/lib/python3.12/site-packages/pydantic/main.py", line 214, in __init__
    validated_self = self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for ChatCompletionMessageToolCall
type
  Input should be 'function' [type=literal_error, input_value='functionfunctionfunction...unctionfunctionfunction', input_type=str]
The issue appears to stem from the fact that the OpenAI client outputs the tool call's type only once in the stream: only the first chunk of the tool call contains that property. In contrast, LiteLLM outputs the tool type in every chunk of the stream, so naive concatenation of the streamed deltas repeats the string (hence the 'functionfunctionfunction...' value in the error above).
This discrepancy was straightforward to address on my end. However, considering that LiteLLM aims to provide an OpenAI-compatible client, resolving this minor compatibility issue could be advantageous.
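For context, the workaround on my end amounts to a guard when merging streamed tool-call deltas. Below is a minimal, self-contained sketch of that idea (the dict shape mimics OpenAI's streaming delta format; the function name and structure are illustrative, not Swarm's actual code):

```python
# Minimal sketch: merge streamed tool-call deltas so that one-shot fields
# like "type" and "name" are taken only from the first chunk that carries
# them, while "arguments" fragments are concatenated. Blindly concatenating
# every string field is what produces 'functionfunctionfunction...' when a
# provider repeats "type" in every chunk.

def merge_tool_call_deltas(deltas):
    merged = {"id": None, "type": None,
              "function": {"name": "", "arguments": ""}}
    for delta in deltas:
        if merged["id"] is None and delta.get("id"):
            merged["id"] = delta["id"]
        # Guard: set "type" once instead of concatenating repeats.
        if merged["type"] is None and delta.get("type"):
            merged["type"] = delta["type"]
        fn = delta.get("function") or {}
        if not merged["function"]["name"] and fn.get("name"):
            merged["function"]["name"] = fn["name"]
        merged["function"]["arguments"] += fn.get("arguments", "")
    return merged

# A LiteLLM-style stream repeats "type" in every chunk:
chunks = [
    {"id": "call_1", "type": "function",
     "function": {"name": "get_weather", "arguments": '{"ci'}},
    {"type": "function", "function": {"arguments": 'ty": "SF"}'}},
]
print(merge_tool_call_deltas(chunks))
```

With this guard, the merged tool call has type 'function' regardless of whether the provider emits the field once (OpenAI) or in every chunk (LiteLLM).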
Interestingly, the role property (with value 'assistant') also appears only in the first chunk of the OpenAI stream, whereas it is present in every chunk of the LiteLLM stream. While this behavior did not cause issues in my specific use case, it may be worth investigating further.
Relevant log output
Are you a ML Ops Team?
Yes
What LiteLLM version are you on ?
v1.59.8
Twitter / LinkedIn details
No response