Embeddings #19
Hi, I am a little bit confused; maybe you are mixing terms!
The latent space of the encoder output.
I don't think the encoder output is exposed by the binding. But why would you use it?
I was thinking of building an animation that shows the model's encoder output moving inside a 2D or 3D projection of the latent space, against a background of the text-chunk distribution of the document used by the chat_with_document personality. This would let me see how the model is exploring ideas while generating its outputs, compared to the reference texts. So I wanted to use the model that is loaded at that instant, rather than reloading another PyTorch model that is not exactly the same and is not quantized; that eats memory. I figured it would be better to have everything done by the same model. Maybe I'll ask llamacpp for this feature. Thank you anyway.
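The projection idea above (embedding vectors moving in a 2D view of the latent space) can be sketched with a plain PCA over collected embedding vectors. This is only an illustration, assuming the embeddings are already available as a NumPy array; the random data stands in for real model output:

```python
import numpy as np

def pca_project(embeddings: np.ndarray, dims: int = 2) -> np.ndarray:
    """Project high-dimensional embeddings down to `dims` axes via PCA."""
    centered = embeddings - embeddings.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:dims].T

# example: 10 fake 384-dimensional "embeddings" projected to 2D points
rng = np.random.default_rng(0)
points = pca_project(rng.normal(size=(10, 384)))
print(points.shape)  # (10, 2)
```

Each generation step's embedding would become one 2D point, which an animation could then plot against the pre-projected document chunks.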
I thought about it, and maybe just exposing the embed function of llamacpp would already be useful for me.
Yeah, I understand. Nice idea.
I think this is already exposed.
I need to give it text and it gives me the embeddings for the input text. Can you expose that in the model?
Ok, I will try to expose it in the model class.
In the llama-cpp-python binding, they have an embed function in their model. The ctransformers binding also has an embed method. I think they use llamacpp in the background.
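The `embed(text) -> vector` shape those bindings expose can be sketched as a thin wrapper. The backend here is a stub so the snippet runs without a model file; in the real bindings it would call into llamacpp (e.g. the embeddings the library computes after evaluation):

```python
from typing import Callable, List

class EmbeddingModel:
    """Thin wrapper mirroring the `embed(text) -> vector` shape of the
    llama-cpp-python / ctransformers bindings (the backend is a stand-in)."""

    def __init__(self, backend: Callable[[str], List[float]]):
        self._backend = backend

    def embed(self, text: str) -> List[float]:
        if not text:
            raise ValueError("cannot embed empty text")
        return self._backend(text)

# stub backend: a fake fixed-rule vector, only to demonstrate the call shape
def fake_backend(text: str) -> List[float]:
    return [float(ord(c) % 7) for c in text[:4]]

model = EmbeddingModel(fake_backend)
vec = model.embed("hello")
print(len(vec))  # 4
```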
Yeah, if you want it just like what they did, I can add it; they are using the same function under the hood. Anyway, I think I will add the two functions: one to get the last embedding and one to create embeddings from a string as input?
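The two proposed methods could look roughly like this. The method names and the placeholder `_run_model` are assumptions for illustration, not the actual implementation that was shipped:

```python
from typing import List, Optional

class Model:
    """Sketch of the two proposed methods: `embed(text)` computes embeddings
    for a string, and `get_last_embedding()` returns the most recent result."""

    def __init__(self) -> None:
        self._last: Optional[List[float]] = None

    def _run_model(self, text: str) -> List[float]:
        # placeholder for the real llamacpp embedding call
        return [float(len(text)), float(len(text.split()))]

    def embed(self, text: str) -> List[float]:
        """Create embeddings from an input string, caching the result."""
        self._last = self._run_model(text)
        return self._last

    def get_last_embedding(self) -> Optional[List[float]]:
        """Return the embedding from the last `embed` call, if any."""
        return self._last

m = Model()
m.embed("two words")
print(m.get_last_embedding())  # [9.0, 2.0]
```

Caching the last result lets the animation use case read the latest vector without re-evaluating the text.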
Excellent!
If you don't have any other requests, I will push a new version to PyPI?
Thanks. No requests for now. I'll update my binding as soon as you push it to PyPI. Thanks a lot.
You are welcome.
Hi there. I am upgrading my bindings for the lord of llms tool, and I now need to be able to vectorize text into the embedding space of the current model. Is there a way to get access to the latent space of the model, i.e. input a text and get the encoder output in latent space?
Best regards
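The request above boils down to: given a text, return a vector in the model's embedding space so that texts can be compared. A typical downstream use is cosine similarity between an embedded query and embedded document chunks; a minimal sketch with fake vectors standing in for model output:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

q = np.array([1.0, 0.0, 1.0])      # stand-in for an embedded query
chunk = np.array([1.0, 0.0, 1.0])  # identical chunk embedding -> similarity ~1
other = np.array([0.0, 1.0, 0.0])  # orthogonal chunk embedding -> similarity ~0

print(cosine_similarity(q, chunk))
print(cosine_similarity(q, other))
```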