Response from calling tool (call_AEaiVEPwzfvnH5vWDy0wFbRT) ***** Error: list index out of range #25

Open
ELIINH opened this issue Jun 17, 2024 · 0 comments

ELIINH commented Jun 17, 2024

User_Proxy (to Expert_Investor):

With the tools you've been provided, write an annual report based on Microsoft's 2023 10-K report and format it into a PDF.
Pay attention to the following:

  • Explicitly explain your working plan before you kick off.
  • Use tools one by one for clarity, especially when asking for instructions.
  • All your file operations should be done in "../report".
  • Display any image in the chat once generated.
  • All the paragraphs should combine to between 400 and 450 words; don't generate the PDF until this is explicitly fulfilled.

Expert_Investor (to User_Proxy):

To create an annual report based on Microsoft's 2023 10-K report, I will follow a structured approach using the provided tools. Here's the working plan I will adhere to:

  1. Retrieve Key Data: I will gather key financial data for Microsoft's 2023 report to understand its financial performance.
  2. Analyze Income Statement: I will analyze Microsoft's income statement to identify its revenue, expenses, and profitability.
  3. Analyze Segment Statement: I will delve into Microsoft's segment analysis to understand how its different business segments contribute to overall performance.
  4. Income Summarization: I will synthesize the income statement and segment analysis to create a comprehensive income summarization.
  5. Analyze Business Highlights: I will explore the business highlights section to understand Microsoft's key achievements and strategic moves.
  6. Analyze Company Description: I will analyze the company description to grasp Microsoft's industry positioning, strengths, trends, and initiatives.
  7. Get Risk Assessment: I will identify and summarize the top 3 key risks faced by Microsoft.
  8. Generate Share Performance Chart: I will plot Microsoft's stock performance compared to the S&P 500 over the past year.
  9. Generate PE & EPS Performance Chart: I will plot Microsoft's PE ratio and EPS performance over the last few years.
  10. Build Annual Report: Finally, I will aggregate all the gathered insights and analyses into a detailed annual report in PDF format.

Let's start with retrieving the key financial data for Microsoft's 2023 report.
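The ten-step plan above amounts to a strictly sequential tool pipeline. A minimal sketch of that ordering, using hypothetical stand-in functions (not FinRobot's actual tool implementations), looks like this:

```python
# Hypothetical stand-ins for the tools named in the plan above; each stub
# only records its name, where a real tool would write files into ../report.
def make_step(name):
    def step(report):
        report.append(name)
        return report
    return step

# Tool names are illustrative placeholders mirroring the plan's ordering.
steps = [
    "get_key_data", "analyze_income_stmt", "analyze_segment_stmt",
    "income_summarization", "analyze_business_highlights",
    "analyze_company_description", "get_risk_assessment",
    "get_share_performance", "get_pe_eps_performance", "build_annual_report",
]

report = []
for name in steps:
    make_step(name)(report)  # tools are invoked one at a time, in order
```

The point of the one-at-a-time discipline is that each tool's output (and any error) is visible in the chat before the next tool runs, which is exactly how the failure below surfaces early.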
***** Suggested tool call (call_AEaiVEPwzfvnH5vWDy0wFbRT): get_key_data *****
Arguments:
{"ticker_symbol":"MSFT","filing_date":"2023-01-01"}



USING AUTO REPLY...

EXECUTING FUNCTION get_key_data...
FMP api key found successfully.
FMP api key found successfully.
User_Proxy (to Expert_Investor):

User_Proxy (to Expert_Investor):

***** Response from calling tool (call_AEaiVEPwzfvnH5vWDy0wFbRT) *****
Error: list index out of range
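An `Error: list index out of range` from a data-fetching tool like `get_key_data` usually means the upstream API query returned an empty list — plausibly because `filing_date` of `2023-01-01` predates Microsoft's FY2023 10-K filing (filed in late July 2023) — and the tool then indexed `[0]` without a guard. A minimal, hypothetical guard (not FinRobot's actual code):

```python
def first_or_none(records, context=""):
    """Return the first element, or None (with a message) instead of raising IndexError."""
    if not records:
        print(f"No data returned for {context or 'query'}")
        return None
    return records[0]

# An empty API-style response (e.g. no filing matched filing_date="2023-01-01")
# would otherwise crash on records[0] with "list index out of range".
empty = first_or_none([], 'MSFT key data, filing_date="2023-01-01"')
first = first_or_none([{"symbol": "MSFT", "fiscalYear": 2023}])
```

Returning a descriptive message instead of raising also gives the agent something actionable to react to (e.g. retry with a later filing date) rather than a bare traceback string.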



Expert_Investor (to User_Proxy):

I encountered an issue while trying to retrieve the key financial data for Microsoft's 2023 report. Let me attempt the key data retrieval again.
***** Suggested tool call (call_o9K6szRUarxH9AhcHecJx17t): get_key_data *****
Arguments:
{"ticker_symbol":"MSFT","filing_date":"2023-01-01"}



USING AUTO REPLY...

EXECUTING FUNCTION get_key_data...
FMP api key found successfully.
FMP api key found successfully.
User_Proxy (to Expert_Investor):

User_Proxy (to Expert_Investor):

***** Response from calling tool (call_o9K6szRUarxH9AhcHecJx17t) *****
Error: list index out of range



Expert_Investor (to User_Proxy):

It seems there is an issue with retrieving the key financial data for Microsoft's 2023 report. I will proceed with the next step and analyze the income statement. Let's delve into Microsoft's income statement for 2023.
***** Suggested tool call (call_X2QMpohGa3RpE6tYCRPR9UjP): analyze_income_stmt *****
Arguments:
{"ticker_symbol":"MSFT","fyear":"2023","save_path":"../report/income_statement_analysis.txt"}



USING AUTO REPLY...

EXECUTING FUNCTION analyze_income_stmt...
Sec Api initialized
FMP api key found successfully.
[runtime logging] log_function_use: autogen logger is None
User_Proxy (to Expert_Investor):

User_Proxy (to Expert_Investor):

***** Response from calling tool (call_X2QMpohGa3RpE6tYCRPR9UjP) *****
instruction & resources saved to ../report/income_statement_analysis.txt




BadRequestError Traceback (most recent call last)
Cell In[38], line 16
2 fyear = "2023"
4 message = dedent(
5 f"""
6 With the tools you've been provided, write an annual report based on {company}'s {fyear} 10-k report, format it into a pdf.
(...)
13 """
14 )
---> 16 assistant.chat(message, use_cache=True, max_turns=50,
17 summary_method="last_msg")

File F:\Code\FinRobot-master\finrobot\agents\workflow.py:151, in SingleAssistant.chat(self, message, use_cache, **kwargs)
149 def chat(self, message: str, use_cache=False, **kwargs):
150 with Cache.disk() as cache:
--> 151 self.user_proxy.initiate_chat(
152 self.assistant,
153 message=message,
154 cache=cache if use_cache else None,
155 **kwargs,
156 )
158 print("Current chat finished. Resetting agents ...")
159 self.reset()

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:1011, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, cache, max_turns, summary_method, summary_args, message, **kwargs)
1009 if msg2send is None:
1010 break
-> 1011 self.send(msg2send, recipient, request_reply=True, silent=silent)
1012 else:
1013 self._prepare_chat(recipient, clear_history)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:655, in ConversableAgent.send(self, message, recipient, request_reply, silent)
653 valid = self._append_oai_message(message, "assistant", recipient)
654 if valid:
--> 655 recipient.receive(message, self, request_reply, silent)
656 else:
657 raise ValueError(
658 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
659 )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:818, in ConversableAgent.receive(self, message, sender, request_reply, silent)
816 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
817 return
--> 818 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
819 if reply is not None:
820 self.send(reply, sender, silent=silent)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:1972, in ConversableAgent.generate_reply(self, messages, sender, **kwargs)
1970 continue
1971 if self._match_trigger(reply_func_tuple["trigger"], sender):
-> 1972 final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
1973 if logging_enabled():
1974 log_event(
1975 self,
1976 "reply_func_executed",
(...)
1980 reply=reply,
1981 )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:438, in ConversableAgent.register_nested_chats.<locals>.wrapped_reply_func(recipient, messages, sender, config)
437 def wrapped_reply_func(recipient, messages=None, sender=None, config=None):
--> 438 return reply_func_from_nested_chats(chat_queue, recipient, messages, sender, config)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:402, in ConversableAgent._summary_from_nested_chats(chat_queue, recipient, messages, sender, config)
400 if not chat_to_run:
401 return True, None
--> 402 res = initiate_chats(chat_to_run)
403 return True, res[-1].summary

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\chat.py:202, in initiate_chats(chat_queue)
199 __post_carryover_processing(chat_info)
201 sender = chat_info["sender"]
--> 202 chat_res = sender.initiate_chat(**chat_info)
203 finished_chats.append(chat_res)
204 return finished_chats

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:1011, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, cache, max_turns, summary_method, summary_args, message, **kwargs)
1009 if msg2send is None:
1010 break
-> 1011 self.send(msg2send, recipient, request_reply=True, silent=silent)
1012 else:
1013 self._prepare_chat(recipient, clear_history)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:655, in ConversableAgent.send(self, message, recipient, request_reply, silent)
653 valid = self._append_oai_message(message, "assistant", recipient)
654 if valid:
--> 655 recipient.receive(message, self, request_reply, silent)
656 else:
657 raise ValueError(
658 "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
659 )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:818, in ConversableAgent.receive(self, message, sender, request_reply, silent)
816 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
817 return
--> 818 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
819 if reply is not None:
820 self.send(reply, sender, silent=silent)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:1972, in ConversableAgent.generate_reply(self, messages, sender, **kwargs)
1970 continue
1971 if self._match_trigger(reply_func_tuple["trigger"], sender):
-> 1972 final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
1973 if logging_enabled():
1974 log_event(
1975 self,
1976 "reply_func_executed",
(...)
1980 reply=reply,
1981 )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:1340, in ConversableAgent.generate_oai_reply(self, messages, sender, config)
1338 if messages is None:
1339 messages = self._oai_messages[sender]
-> 1340 extracted_response = self._generate_oai_reply_from_client(
1341 client, self._oai_system_message + messages, self.client_cache
1342 )
1343 return (False, None) if extracted_response is None else (True, extracted_response)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\agentchat\conversable_agent.py:1359, in ConversableAgent._generate_oai_reply_from_client(self, llm_client, messages, cache)
1356 all_messages.append(message)
1358 # TODO: #1143 handle token limit exceeded error
-> 1359 response = llm_client.create(
1360 context=messages[-1].pop("context", None), messages=all_messages, cache=cache, agent=self
1361 )
1362 extracted_response = llm_client.extract_text_or_completion_object(response)[0]
1364 if extracted_response is None:

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\oai\client.py:662, in OpenAIWrapper.create(self, **config)
660 try:
661 request_ts = get_current_ts()
--> 662 response = client.create(params)
663 except APITimeoutError as err:
664 logger.debug(f"config {i} timed out", exc_info=True)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\autogen\oai\client.py:285, in OpenAIClient.create(self, params)
283 params = params.copy()
284 params["stream"] = False
--> 285 response = completions.create(**params)
287 return response

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_utils\_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
275 msg = f"Missing required argument: {quote(missing[0])}"
276 raise TypeError(msg)
--> 277 return func(*args, **kwargs)

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\resources\chat\completions.py:606, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, parallel_tool_calls, presence_penalty, response_format, seed, stop, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
573 @required_args(["messages", "model"], ["messages", "model", "stream"])
574 def create(
575 self,
(...)
604 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
605 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 606 return self._post(
607 "/chat/completions",
608 body=maybe_transform(
609 {
610 "messages": messages,
611 "model": model,
612 "frequency_penalty": frequency_penalty,
613 "function_call": function_call,
614 "functions": functions,
615 "logit_bias": logit_bias,
616 "logprobs": logprobs,
617 "max_tokens": max_tokens,
618 "n": n,
619 "parallel_tool_calls": parallel_tool_calls,
620 "presence_penalty": presence_penalty,
621 "response_format": response_format,
622 "seed": seed,
623 "stop": stop,
624 "stream": stream,
625 "stream_options": stream_options,
626 "temperature": temperature,
627 "tool_choice": tool_choice,
628 "tools": tools,
629 "top_logprobs": top_logprobs,
630 "top_p": top_p,
631 "user": user,
632 },
633 completion_create_params.CompletionCreateParams,
634 ),
635 options=make_request_options(
636 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
637 ),
638 cast_to=ChatCompletion,
639 stream=stream or False,
640 stream_cls=Stream[ChatCompletionChunk],
641 )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_base_client.py:1240, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1226 def post(
1227 self,
1228 path: str,
(...)
1235 stream_cls: type[_StreamT] | None = None,
1236 ) -> ResponseT | _StreamT:
1237 opts = FinalRequestOptions.construct(
1238 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1239 )
-> 1240 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
912 def request(
913 self,
914 cast_to: Type[ResponseT],
(...)
919 stream_cls: type[_StreamT] | None = None,
920 ) -> ResponseT | _StreamT:
--> 921 return self._request(
922 cast_to=cast_to,
923 options=options,
924 stream=stream,
925 stream_cls=stream_cls,
926 remaining_retries=remaining_retries,
927 )

File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\openai\_base_client.py:1020, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
1017 err.response.read()
1019 log.debug("Re-raising status error")
-> 1020 raise self._make_status_error_from_response(err.response) from None
1022 return self._process_response(
1023 cast_to=cast_to,
1024 options=options,
(...)
1027 stream_cls=stream_cls,
1028 )

BadRequestError: Error code: 400 - {'error': {'message': "This model's maximum context length is 16385 tokens. However, your messages resulted in 16488 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}
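This final failure is a separate problem from the tool error: the accumulated chat history (16,488 tokens) slightly exceeded the model's 16,385-token context window — the traceback even passes an autogen `TODO: #1143 handle token limit exceeded error` comment on the way down. One common mitigation is to drop the oldest non-system turns before each request. A rough sketch, using a crude 4-characters-per-token estimate instead of a real tokenizer:

```python
def trim_history(messages, max_tokens=16385, chars_per_token=4):
    """Drop the oldest non-system messages until a rough token estimate fits."""
    def estimate(msgs):
        return sum(len(m["content"]) for m in msgs) // chars_per_token

    msgs = list(messages)
    while estimate(msgs) > max_tokens and len(msgs) > 2:
        del msgs[1]  # keep the system prompt at index 0; drop the oldest turn
    return msgs

# Simulated oversized history: one system prompt plus ten very long turns.
history = [{"role": "system", "content": "You are Expert_Investor."}]
history += [{"role": "user", "content": "x" * 20000} for _ in range(10)]
trimmed = trim_history(history)
```

In practice a real tokenizer (e.g. tiktoken) should replace the character heuristic, and summarizing dropped turns preserves more context than discarding them outright; this sketch only shows the trimming shape.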

@BruceYanghy added the bug (Something isn't working) label on Jun 17, 2024