When using PromptNode, score is always null in Answers #6351
Unanswered
demongolem-biz2 asked this question in Questions
Replies: 2 comments 2 replies
-
Hey, @demongolem-biz2! Sharing your code would be useful for us to provide help...
-
Yes, I am using the similarity scores from the Document objects with the
highest scores, and I find that very useful in general. The only thing I have
had to work out is that the spread of similarity scores tends to be narrow,
so I have employed techniques to widen the difference between "good"
documents and "irrelevant" documents.
The notion of using the PromptNode would be this: I have a Document that the
Retriever tells me is similar. But which part of that Document is the
relevant part that led to the high similarity score? PromptNode seems to try
to do this, to offer a small snippet of that Document which is the focus.
The use case would be: say I present a user with a list of the top 10
retrieved documents from the Retriever in a UI. The user clicks on the
document they are interested in, that document opens in some sort of external
viewer, and the part identified by the PromptNode is highlighted. That is the
idea I had, anyway.
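
(A minimal sketch of how that idea could be wired up in Haystack v1; the prompt text, model name, and variable names such as `clicked_doc` are assumptions, not from the thread.)

```python
from haystack.nodes import PromptNode, PromptTemplate, AnswerParser

# Hypothetical prompt: ask the model to quote the passage of the clicked
# document that best matches the query, so the UI can highlight it.
snippet_template = PromptTemplate(
    prompt=(
        "Here is a document: {join(documents)}\n"
        "Copy the single passage from the document that is most relevant "
        "to this question: {query}\nPassage:"
    ),
    output_parser=AnswerParser(),
)

prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",  # any supported model
    api_key="YOUR_API_KEY",
    default_prompt_template=snippet_template,
)

# answers = prompt_node.prompt(snippet_template, query=query, documents=[clicked_doc])
# With AnswerParser set, the parsed output should be Answer objects whose
# .answer field holds the snippet to highlight in the external viewer.
```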
On Tue, Nov 21, 2023 at 3:28 AM Stefano Fiorucci wrote:
Thanks for sharing the code.
To find the documents most similar to the query, you simply need a
Retriever; a PromptNode is probably not necessary.
The Retriever also returns similarity scores.
[This part of the documentation](https://docs.haystack.deepset.ai/docs/ready_made_pipelines#documentsearchpipeline)
may help.
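
(A minimal sketch of that suggestion, an editorial illustration rather than part of the quoted message, assuming an already-configured Haystack v1 retriever named `retriever`.)

```python
from haystack.pipelines import DocumentSearchPipeline

# The ready-made DocumentSearchPipeline wraps a single Retriever and returns
# the matching documents together with their similarity scores.
search_pipeline = DocumentSearchPipeline(retriever)
result = search_pipeline.run(
    query="...",
    params={"Retriever": {"top_k": 10}},
)

for doc in result["documents"]:
    print(doc.score, doc.content[:80])
```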
-
Original question (demongolem-biz2): I have a Retriever feeding into a PromptNode as part of a QueryPipeline. Is there any way I can adjust this pipeline to get some sort of score value in the Answers returned when the PromptNode uses the AnswerParser? All my answers have a score of null.
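
(For context, a minimal sketch of the kind of pipeline described above, as it might look in Haystack v1; the model, prompt, and document store choices are assumptions. The PromptNode/AnswerParser combination produces generated Answers without a confidence value, so `score` stays `None`, while the similarity scores live on the Document objects coming out of the Retriever, as the replies above point out.)

```python
from haystack import Pipeline
from haystack.document_stores import InMemoryDocumentStore
from haystack.nodes import BM25Retriever, PromptNode, PromptTemplate, AnswerParser

# Assumed setup: an in-memory store and a BM25 retriever stand in for
# whatever retriever the original pipeline used.
document_store = InMemoryDocumentStore(use_bm25=True)
retriever = BM25Retriever(document_store=document_store, top_k=5)

qa_template = PromptTemplate(
    prompt=(
        "Answer the question using the documents.\n"
        "Documents: {join(documents)}\nQuestion: {query}\nAnswer:"
    ),
    output_parser=AnswerParser(),
)
prompt_node = PromptNode(
    model_name_or_path="gpt-3.5-turbo",
    api_key="YOUR_API_KEY",
    default_prompt_template=qa_template,
)

pipe = Pipeline()
pipe.add_node(component=retriever, name="Retriever", inputs=["Query"])
pipe.add_node(component=prompt_node, name="PromptNode", inputs=["Retriever"])

result = pipe.run(query="Why is the answer score null?")
answer = result["answers"][0]
print(answer.score)  # None: the generated answer carries no confidence score

# Hedged workaround (not an official feature): fetch the Retriever's own
# similarity scores and attach the best one to the answer for display.
docs = retriever.retrieve(query="Why is the answer score null?", top_k=5)
if docs:
    answer.score = docs[0].score
```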