feat: support reranker #1532
Conversation
@Anhui-tqhuang I think the idea is great! I have a general question though: those reranker models can be used through FlagEmbedding (as you proposed) or through SentenceTransformers. The benefit of the latter is that LlamaIndex already contains a SentenceTransformerRerank class. That way we could add the benefits you are proposing without:
I'd love to know your opinion. Maybe you want to give the SentenceTransformer reranker a try using the same model. The rest of the PR looks like an amazing addition to the project. I'd ask you to update the documentation to reflect it (fern/docs/pages...). Thanks!
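For reference, a minimal sketch of how LlamaIndex's SentenceTransformerRerank could be plugged in as a node postprocessor; the import path and the cross-encoder model name are assumptions (they vary across llama-index versions), not what this PR implements:

```python
# Hypothetical sketch: using LlamaIndex's built-in SentenceTransformerRerank.
# The import path differs between llama-index releases; adjust as needed.
from llama_index.postprocessor import SentenceTransformerRerank

# Example cross-encoder model; not necessarily BAAI/bge-reranker-large.
rerank = SentenceTransformerRerank(
    model="cross-encoder/ms-marco-MiniLM-L-2-v2",
    top_n=3,
)

# The reranker would then be passed to a query engine as a node postprocessor,
# assuming an `index` built elsewhere:
# query_engine = index.as_query_engine(similarity_top_k=10, node_postprocessors=[rerank])
```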
Hey, thanks for your suggestion! I will try to use the sentence transformer and update the documentation when I'm back from the Spring Festival!
By the way, LlamaIndex already supports an LLM-based reranker, but it is not very stable since the LLM output is not predictable. For example, I have a parser that expects to read the LLM output in the following format:
Sometimes it gives
Sometimes it gives
It could even give some summaries, which is not expected. No matter what kind of prompts I write for the LLM reranker, it could not pass all the cases. That's the reason I want to use a dedicated reranker model, as its result contains only the similarity score. I might need some time to investigate whether we could use SentenceTransformers to run that dedicated model directly.
@imartinez Hey, I took another pass at the model card: https://huggingface.co/BAAI/bge-reranker-large#usage-for-reranker. It cannot be used with SentenceTransformers. Moreover, as in my previous comments, I still want to avoid using an LLM for reranking because the result is not stable.
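For context, the usage shown on that model card goes through FlagEmbedding's FlagReranker, which returns a relevance score per query/passage pair rather than free-form text that needs parsing. A minimal sketch following the model card:

```python
# Sketch based on the BAAI/bge-reranker-large model card usage.
from FlagEmbedding import FlagReranker

# use_fp16=True speeds up inference with a small accuracy trade-off.
reranker = FlagReranker("BAAI/bge-reranker-large", use_fp16=True)

# A single query/passage pair yields one similarity score (a float).
score = reranker.compute_score(["what is panda?", "The giant panda is a bear species endemic to China."])
print(score)

# A batch of pairs yields a list of scores -- and scores are all the reranker
# returns, so there is no LLM output to parse.
scores = reranker.compute_score([
    ["what is panda?", "hi"],
    ["what is panda?", "The giant panda is a bear species endemic to China."],
])
print(scores)
```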
@imartinez Hey, could you please take a pass at this? It would be appreciated.
Hi Anhui, is bge-reranker the best local reranker we can have? Another thing: I made a PR about query search results, in order to be able to do both semantic search and classic keyword-based search to retrieve more relevant context for the user query. Can you please take a look at it and tell me your opinion on it, its bugs, etc.?
@cloudrage999 Hey, I am still waiting for review from @imartinez.
To be honest, I haven't tried other reranker models, so I cannot tell.
Could you show me the link to the PR, please?
@imartinez Hey, a review would be appreciated.
I want to add support for a reranker as a node postprocessor.
The functionality of the Reranker is as follows (a sketch of this behaviour is shown after the list):
- If a document's score is below the cut_off, the document is excluded from the results.
- If fewer than top_n documents pass the cut_off, the system defaults to providing the top top_n documents, ignoring the cut_off score.
- The hf_model_name parameter allows users to specify the particular FlagReranker model from Hugging Face for the reranking process.
- Use the enabled flag to toggle the Reranker as per requirement for optimized results.
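A minimal sketch of that behaviour, assuming a FlagReranker backend; the function name `rerank_documents` and the default values are illustrative, not the actual code in this PR:

```python
# Illustrative sketch of the Reranker semantics described above:
#   - documents scoring below cut_off are dropped,
#   - but if fewer than top_n documents survive, fall back to the top_n
#     highest-scoring documents, ignoring cut_off,
#   - hf_model_name selects the FlagReranker model,
#   - enabled toggles the whole step.
from FlagEmbedding import FlagReranker


def rerank_documents(
    query: str,
    documents: list[str],
    hf_model_name: str = "BAAI/bge-reranker-large",  # illustrative default
    top_n: int = 2,                                  # illustrative default
    cut_off: float = 0.0,                            # illustrative default
    enabled: bool = True,
) -> list[str]:
    if not enabled:
        # Reranking disabled: return the documents untouched.
        return documents

    reranker = FlagReranker(hf_model_name, use_fp16=True)
    scores = reranker.compute_score([[query, doc] for doc in documents])
    ranked = sorted(zip(scores, documents), key=lambda pair: pair[0], reverse=True)

    kept = [doc for score, doc in ranked if score >= cut_off]
    if len(kept) < top_n:
        # Not enough documents passed the cut_off: fall back to the top_n best.
        kept = [doc for _, doc in ranked[:top_n]]
    return kept
```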