I am using a conversational agent with a RAG tool. My problem is that whenever I ask the agent to explain something or to provide a more extensive final_answer, the model returns a truncated or incomplete answer. This is how I set up the agent:
```python
prompt_memory_node = PromptNode(
    memory_model,
    api_key=self.openai_api_key,
    stop_words=["<|endoftext|>"],
)
memory = ConversationSummaryMemory(prompt_memory_node, summary_frequency=1)
conversation_history = Tool(
    name="conversation_history",
    pipeline_or_node=lambda tool_input, **kwargs: memory.load(),
    description="useful for when you need to remember what you've already discussed.",
    logging_color=Color.MAGENTA,
)
chat_agent = ConversationalAgent(
    prompt_memory_node,
    max_steps=5,
    prompt_template=PromptTemplate(prompt=PROMPT),
    tools=[sentence_transformer_based_search_tool, conversation_history],
    memory=memory,
)
```
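In case it is relevant: as far as I understand, PromptNode also has a max_length parameter that caps the number of generated tokens, and I am not setting it explicitly above. A variant that raises it would look roughly like this (the 512 is just an illustrative value; I have not confirmed that this is related to the truncation):

```python
prompt_memory_node = PromptNode(
    memory_model,
    api_key=self.openai_api_key,
    # max_length caps the number of generated tokens; 512 is an arbitrary
    # example value, not something I have verified fixes the issue
    max_length=512,
    stop_words=["<|endoftext|>"],
)
```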
The prompt here is the same as deepset/conversational-agent in the PromptHub. I have also tried tuning the prompt in many different ways; the only thing that stays consistent is this problem.
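For reference, PROMPT is just the text of that template pasted in as a string; if I understand the API correctly, the template could equally be pulled by name (assuming a Haystack 1.x version with PromptHub support):

```python
from haystack.nodes import PromptTemplate

# Fetches deepset/conversational-agent from PromptHub by name; as far as I
# know this is equivalent to pasting the prompt text into PROMPT.
prompt_template = PromptTemplate("deepset/conversational-agent")
```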
Example 1:
query: what does Celanese do?
Final Answer: Celanese Corporation is a global technology leader that produces differentiated chemistry
Example 2:
query: what does the future hold for Celanese
thought: Thought: The document_search tool provided information about Celanese's future plans. They are focusing on sustainability, evaluating raw materials with lower carbon footprints, and working to understand and reduce the Scope 3 impact of their operations. They also plan to collaborate with key trade associations and leverage governmental programs that encourage sustainable products and operations. However, it's also mentioned that there are risks and uncertainties that could cause actual results to differ from these plans.
Final Answer: Celanese's future plans involve a focus on
Usually the thought is very informative and complete, but the final answer seems to be truncated. I know this is not the model's fault (in my case GPT-4), because I used the same query, prompt, and documents in LangChain and was not able to recreate this bug.