Does it make sense to pre-compute the compressions for each document chunk before indexing in a vector store? #134
-
Is it advisable to pre-compute every document chunk's compression before indexing it into a vector store — i.e. compress each chunk once at ingestion time rather than compressing retrieved context on every query? Please let me know if you foresee any issues with this approach.
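To make the idea concrete (this sketch is not from the original post), here is roughly what such an ingestion step might look like, assuming LLMLingua's `PromptCompressor.compress_prompt` interface and `sentence-transformers` for embeddings; the embedding model, `target_token` value, and in-memory record layout are illustrative placeholders:

```python
from llmlingua import PromptCompressor
from sentence_transformers import SentenceTransformer

compressor = PromptCompressor()                      # default LLMLingua compressor (downloads a large model)
embedder = SentenceTransformer("all-MiniLM-L6-v2")   # placeholder embedding model

def index_chunks(chunks):
    """Pre-compute a compressed version of each chunk at ingestion time.

    The embedding is computed on the ORIGINAL text, and the compressed text is
    stored alongside it as metadata, so compression never has to run at query time.
    """
    records = []
    for chunk in chunks:
        compressed = compressor.compress_prompt(
            chunk, instruction="", question="", target_token=200
        )["compressed_prompt"]
        records.append({
            "embedding": embedder.encode(chunk),   # embed the original, natural-language chunk
            "original": chunk,
            "compressed": compressed,              # pre-computed once, reused at query time
        })
    return records
```

In a real pipeline the `records` would go into whatever vector store you use, with the compressed text carried as chunk metadata.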
Replies: 1 comment
-
Hi @stephenleo, thanks for your support.
I think it's reasonable. However, one thing you might need to consider is that during retrieval, you may still need to use the original document, as the retrieval model is trained on natural language. After that, you can recall the compressed document and concatenate it to the prompt.
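As a rough sketch of the suggested flow (not code from the reply itself): retrieve against the original text, then substitute the pre-computed compressed text into the prompt. This assumes the `records` structure and `embedder` from the ingestion sketch above and uses simple cosine similarity as a stand-in for a real vector store query:

```python
import numpy as np

def retrieve_compressed(query, records, embedder, top_k=3):
    """Retrieve using embeddings of the ORIGINAL chunks (what the retrieval model
    was trained on), then return the pre-computed compressed text for the prompt."""
    q = embedder.encode(query)
    scores = [
        float(np.dot(q, r["embedding"]) /
              (np.linalg.norm(q) * np.linalg.norm(r["embedding"])))
        for r in records
    ]
    top = sorted(range(len(records)), key=lambda i: scores[i], reverse=True)[:top_k]
    # Concatenate the compressed versions of the retrieved chunks for the prompt context
    return "\n\n".join(records[i]["compressed"] for i in top)

# Usage (assumes `records` was built by the ingestion sketch above):
# context = retrieve_compressed("your question here", records, embedder)
# prompt = f"{context}\n\nQuestion: your question here"
```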