Issue with API validation and tool calling #34

Open
mger1608 opened this issue Dec 17, 2024 · 3 comments

Comments

@mger1608

I am playing around with a local environment and running into a few issues.

  1. Not every time, but often I get an error telling me that my context window is too large to answer the question, generally in some form of the following:
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "PATH\openbb_agents\agent.py", line 87, in openbb_agent
    answered_subquestion = _fetch_tools_and_answer_subquestion(
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "PATH\\openbb_agents\agent.py", line 221, in _fetch_tools_and_answer_subquestion
    tools = search_tools(
            ^^^^^^^^^^^^^
  File "PATH\\openbb_agents\chains.py", line 199, in search_tools
    tool_names = _search_tools(subquestion.question, answered_subquestions)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "PATH\Lib\site-packages\magentic\prompt_chain.py", line 83, in wrapper
    chat = chat.exec_function_call().submit()
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "PATH\Lib\site-packages\magentic\chat.py", line 95, in submit
    output_message: AssistantMessage[Any] = self.model.complete(
                                            ^^^^^^^^^^^^^^^^^^^^
  File "PATH\Lib\site-packages\magentic\chat_model\openai_chat_model.py", line 451, in complete
    response: Iterator[ChatCompletionChunk] = discard_none_arguments(
                                              ^^^^^^^^^^^^^^^^^^^^^^^
  File "PATH\Lib\site-packages\magentic\chat_model\openai_chat_model.py", line 313, in wrapped
    return func(*args, **non_none_kwargs)  # type: ignore[arg-type]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "PATH\Lib\site-packages\openai\_utils\_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "PATH\Lib\site-packages\openai\resources\chat\completions.py", line 829, in create
    return self._post(
           ^^^^^^^^^^^
  File "PATH\Lib\site-packages\openai\_base_client.py", line 1280, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "PATH\Lib\site-packages\openai\_base_client.py", line 957, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "PATH\Lib\site-packages\openai\_base_client.py", line 1061, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 20851 tokens (20781 in the messages, 70 in the functions). Please reduce the length of the messages or functions.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

My query was pretty straightforward: "Perform a fundamental analysis of AMZN."

I'm not sure where the context length limit is coming from. Don't all of these LLMs have 100K+ token context windows?

  2. Issues with API credentialing. I have a .openbb_platform\user_settings doc in my files with all my current API keys, along with an authenticated personal access token, but I'm still seeing lots of data provider authentication errors in the stream of the agent's response (a quick check for this is sketched further down).

  2. Issues with "limit" in the "Function call: metrics". See command terminal output below:

PATH\Lib\site-packages\pydantic\main.py:398: UserWarning: Pydantic serializer warnings:
  Expected `int` but got `str` - serialized value may not be as expected
  return self.__pydantic_serializer__.to_json(
2024-12-16 19:01:35,276 - INFO - openbb_agents.chains - Function call: metrics({'symbol': 'WMT', 'limit': 'intrinio'})
2024-12-16 19:01:35,277 - ERROR - openbb_agents.chains - Error calling function:
[Error] -> 1 validations error(s)
[Arg] 2 -> input: intrinio -> Input should be a valid integer, unable to parse string as an integer
2024-12-16 19:01:35,322 - INFO - openbb_agents.chains - Function call: metrics({'symbol': 'MSFT', 'limit': 'intrinio'})
2024-12-16 19:01:35,325 - ERROR - openbb_agents.chains - Error calling function:
[Error] -> 1 validations error(s)
[Arg] 2 -> input: intrinio -> Input should be a valid integer, unable to parse string as an integer
2024-12-16 19:01:35,327 - INFO - openbb_agents.chains - Function call: metrics({'symbol': 'GOOGL', 'limit': 'intrinio'})
2024-12-16 19:01:35,329 - ERROR - openbb_agents.chains - Error calling function:
[Error] -> 1 validations error(s)
[Arg] 2 -> input: intrinio -> Input should be a valid integer, unable to parse string as an integer
2024-12-16 19:01:35,331 - INFO - openbb_agents.chains - Function call: metrics({'symbol': 'EBAY', 'limit': 'intrinio'})
2024-12-16 19:01:35,333 - ERROR - openbb_agents.chains - Error calling function:
[Error] -> 1 validations error(s)
[Arg] 2 -> input: intrinio -> Input should be a valid integer, unable to parse string as an integer
2024-12-16 19:01:37,708 - INFO - openbb_agents.agent - Answered subquestion: I encountered issues retrieving the market capitalization data for Amazon's peers using the available tools. The errors were related to incorrect parameter usage and missing credentials for the data providers. Therefore, I am unable to provide the latest market cap information for Amazon's peers at this time.

For some reason, the data provider name is being passed to metrics as the 'limit' value rather than an integer. This could also be related to the second issue, where the data provider is not being passed properly to the function call that needs that specific provider to answer the sub-task query.
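
For reference, a direct call against the OpenBB Platform SDK would keep those two arguments separate. This is only a sketch (the exact function path and parameters depend on the installed OpenBB version, and the limit of 5 is arbitrary), but it shows where each value should go:

from openbb import obb

# Sketch only: 'intrinio' belongs in the provider argument, and limit must be an integer.
# Function path assumed from OpenBB Platform v4 conventions.
metrics = obb.equity.fundamental.metrics(
    symbol="WMT",
    provider="intrinio",  # data provider name goes here, as a string
    limit=5,              # number of periods to return, as an integer
)
print(metrics.to_df().head())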
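
On the credentialing issue (point 2 above), it may be worth confirming that the Platform actually picks up the keys from the .openbb_platform\user_settings file, since the provider key names there have to match what OpenBB expects (e.g. intrinio_api_key, fmp_api_key). A minimal check, assuming OpenBB Platform v4 conventions:

from openbb import obb

# Sketch only: print the credentials the Platform has loaded.
# Any provider whose key shows up empty here will fail authentication
# when the agent routes a sub-question to it.
print(obb.user.credentials)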

@mger1608
Author

Update: I'm newish to the OpenAI API specifically, and I see that the max context length for a call is ~16,000 tokens. Is there a way to batch the API calls from the OpenBB agent to get around this error and/or split up the load?
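
Since the traceback goes through magentic's OpenAI chat model, one thing worth trying (a sketch, assuming openbb_agents relies on magentic's default settings rather than hard-coding a model, and with the call signature taken from the traceback/README-style usage) is pointing magentic at a larger-context model via its environment variables before running the agent:

import os

# Sketch only: magentic reads these settings when it builds its default chat model,
# so they need to be set before the agent makes its first call.
os.environ["MAGENTIC_BACKEND"] = "openai"
os.environ["MAGENTIC_OPENAI_MODEL"] = "gpt-4o"  # 128k-token context window

from openbb_agents.agent import openbb_agent  # import path taken from the traceback above

result = openbb_agent("Perform a fundamental analysis of AMZN.")
print(result)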

@mger1608
Author

It seems like the error is actually based on output length and not input length. The max input for GPT-4o is 128,000 tokens, while the max output is 16,384.
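
One way to see which side of the limit is being hit is to count the tokens in the messages locally before they go out. A minimal sketch with tiktoken (the gpt-3.5-turbo encoding is assumed here because that model family's 16,385-token context window matches the figure in the error message):

import tiktoken

# Sketch only: counts tokens the way the 16k-context gpt-3.5-turbo models do.
# In practice this would run over the full message list the agent builds,
# including the tool/function schemas it attaches.
enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
text = "Perform a fundamental analysis of AMZN."
print(len(enc.encode(text)), "tokens")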

@gianfrancolombardo

+1
