Replies: 2 comments
-
Hi @dkbs12, let me try to help you by giving some context about the warnings.
-
Hi @julian-risch,
Thank you in advance for your help.
-
Hi,
I'm trying to build a long-form QA system and a QA chatbot on my own using Haystack, and I finally have prototypes of both after starting a few months ago. However, I ran into some warning messages while running them on Colab, and I'd like to ask you about them below.
QA Chatbot with Agent:

```
/usr/local/lib/python3.10/dist-packages/transformers/generation/configuration_utils.py:418: UserWarning: `num_beams` is set to 1. However, `length_penalty` is set to `2.0` -- this flag is only used in beam-based generation modes. You should set `num_beams>1` or unset `length_penalty`.
  warnings.warn(

/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py:1090: UserWarning: You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset
```
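For reference, here is a minimal sketch of what I think triggers the first warning, using plain transformers rather than my exact Haystack setup (the model name is just a placeholder): the model's generation config carries `length_penalty=2.0` while decoding stays greedy (`num_beams=1`), where the penalty has no effect, so either enabling beam search or resetting the penalty should silence it.

```python
# Minimal sketch: the warning comes from the model's generation_config,
# which sets length_penalty=2.0 while num_beams stays at 1 (greedy
# decoding, where the penalty is ignored).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "google/flan-t5-base"  # placeholder, not my actual model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Option A: actually use beam search (num_beams > 1), so the penalty applies.
# Option B: keep greedy decoding and neutralize the penalty.
model.generation_config.length_penalty = 1.0  # 1.0 means no penalty

inputs = tokenizer("What is Haystack?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```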
Long Form QA:

```
WARNING:haystack.nodes.prompt.invocation_layer.open_ai:The prompt has been truncated from 6285 tokens to 3897 tokens so that the prompt length and answer length (200 tokens) fit within the max token limit (4097 tokens). Reduce the length of the prompt to prevent it from being cut off.
WARNING:haystack.utils.openai_utils:1 out of the 1 completions have been truncated before reaching a natural stopping point. Increase the max_tokens parameter to allow for longer completions.
```
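For these two warnings, here is a minimal sketch of the two knobs I believe are involved, assuming a Haystack v1 `PromptNode` backed by an OpenAI model (the model name and values below are placeholders): the retrieved context plus the 200-token answer budget exceeds the model's 4097-token window, so retrieving fewer documents would shorten the prompt, and raising the answer budget would let completions finish naturally.

```python
# Minimal sketch (Haystack v1 API; model name and numbers are placeholders).
from haystack.nodes import PromptNode

prompt_node = PromptNode(
    model_name_or_path="text-davinci-003",
    api_key="YOUR_OPENAI_API_KEY",
    max_length=500,  # answer budget; raising it addresses the second warning
)

# The first warning means the prompt side is too long: pass fewer
# (or shorter) documents into the template, e.g. via the retriever:
# result = pipeline.run(
#     query="...",
#     params={"Retriever": {"top_k": 3}},  # fewer docs -> shorter prompt
# )
```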
I would appreciate it if you could let me know the reason for the warning messages above and how to avoid them.
Thanks.