Replies: 2 comments 2 replies
-
The agent uses a large language model to imitate a thought process. During this process, it makes multiple requests to the language model. One of its steps is to execute the generative QA pipeline. As a result, using the agent on your query will always take longer than using the generative QA pipeline tool directly on your query.
You can set max_steps to a lower value to limit the number of tool executions per query, which reduces the overall latency.
We can't provide a score for answers generated by an Agent either. Also, we don't provide document IDs for Agents.
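As a rough illustration of why the agent is always slower, its latency is roughly the per-request LLM latency multiplied by the number of requests it makes. The numbers below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope sketch: an agent issues several LLM requests per query,
# so its latency is roughly a multiple of a single pipeline call's latency.
# All numbers here are illustrative assumptions, not measurements.
per_call_seconds = 2.5   # the generative QA pipeline averages ~2-3 s per call
agent_steps = 5          # assumed: a few reasoning steps plus tool executions
agent_latency = per_call_seconds * agent_steps
print(agent_latency)     # 12.5 -- consistent with the 10-20 s reported below
```

Lowering max_steps shrinks the worst case by capping how many of those requests the agent may issue.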
Can you give more details about this? How do you instruct the Agent to output the answer in a different language? Which models are you using? A code example would be very helpful here.
-
Yes, setting max_steps to 4 lowers the maximum number of tool executions per query, so it should reduce the execution time for queries that would otherwise use more than 4 tool executions. Regarding answering in Korean instead of English: I think you need to experiment a bit with the prompts. One thing you could try is adding an instruction to the agent prompt (defined in the tutorial's "Define the Prompt" section) that the Agent should answer the question in Korean, or in the language the question was asked in.
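A minimal sketch of that prompt tweak, assuming a simple string-based agent prompt; the base prompt text below is a placeholder, not the tutorial's actual prompt:

```python
# Sketch: prepend a language instruction to the agent prompt. The base prompt
# here is a placeholder; in the tutorial you would edit the prompt text from
# the "Define the Prompt" section instead.
base_agent_prompt = (
    "You are a helpful assistant. Answer the question using the tools "
    "available to you.\n"
    "Question: {query}\n"
)
language_instruction = (
    "Give your final answer in Korean, or in whatever language the question "
    "was asked in.\n"
)
agent_prompt = language_instruction + base_agent_prompt
print(agent_prompt.startswith("Give your final answer in Korean"))  # True
```

Where exactly the instruction works best (start of the prompt, or right before the final-answer step) is something to test empirically with your model.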
-
Hello,
This is my third inquiry about the tutorial '25_Customizing_Agent'. Many parts of it look suitable for practical application, which is why I keep asking questions. Here are my current ones:
Getting an answer from Agent chatting always takes a long time: about 10 to 20 seconds per response, much longer than the Generative QA Pipeline, which produces an answer in 2 or 3 seconds on average. Is there any way to reduce the Agent's response time?
I can't get the score, the context, or the file name from the answer produced by the Generative QA Pipeline, as shown below. How can I get that information when I run the print_answers function?
< Question >
response = generative_pipeline.run("What does Rhodes Statue look like?")
print_answers(response, details="minimum")
< Answer >
'Query: What does Rhodes Statue look like?'
'Answers:'
[{'answer': 'The documents do not provide a description or image of what the Rhodes Statue looked like. Answering the question is not possible given the available information.'}]
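For context on where the score and file name could come from: in Haystack v1, the documents retrieved by the pipeline usually travel alongside the answers in the response, and the score and file name live on those documents. The sketch below uses a mocked response dict with that assumed layout rather than a real pipeline run:

```python
# Mocked response illustrating the assumed Haystack v1 result layout; a real
# run would return Answer/Document objects, accessed via attributes rather
# than dict keys (e.g. doc.score, doc.meta["name"]).
response = {
    "query": "What does Rhodes Statue look like?",
    "answers": [{"answer": "The documents do not provide a description ..."}],
    "documents": [
        {"content": "The Colossus of Rhodes was a statue of the sun god ...",
         "score": 0.83,                      # illustrative retriever score
         "meta": {"name": "colossus.txt"}},  # illustrative file name
    ],
}
for doc in response["documents"]:
    print(doc["score"], doc["meta"].get("name"), doc["content"][:30])
```

Printing response["documents"] directly (instead of only print_answers, which shows the answers) is one way to check what your pipeline actually returns.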
I can't get the score, context, file name, or documents from the answer produced by the Agent either. The only things I can find in the response are the observations and the transcript. But information such as the score, context, file name, and documents is more useful to users. Is there any way to display it in the Agent's response?
When I ask a question in my local language (Korean) during Agent chatting, the correct answer comes back in English. But when I ask the Agent to answer in Korean, it fails with KeyError: 'translation'. Similarly, when I ask the Agent to translate the previous answer into Korean ("Input: Translate the previous answer into Korean"), it also raises an error. How can I use Korean for both questions and answers when chatting with the Agent?
Thank you in advance for your help.