Our local machines may not be able to run such an embedding model.
Besides, it introduces a lot of dependencies, such as torch, transformers, and vllm.
In a real project, this infrastructure should be run independently.
If we removed all of this, the project might be clearer and leaner. What do you think?
We don't really want to eliminate a user's ability to run a local embedding model, since it is more efficient and cheaper for people with a machine that can handle it. However, we will introduce an option to use an OpenAI embedding endpoint instead of the Hugging Face model in the near future.
Yeah, thanks @bernaljg. But as developers, we actually serve embedding models through tools like Xinference, LM Studio, GPUStack, or Ollama. All of these provide an OpenAI-compatible embedding API.
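To illustrate the point about OpenAI-compatible servers: the request shape is the same regardless of which backend serves it. Below is a minimal sketch of building the `/v1/embeddings` request body; the base URL, port, and model name (`nomic-embed-text`) are assumptions for a local Ollama setup, not anything HippoRAG ships with.

```python
import json

# Hypothetical helper: builds the JSON body for an OpenAI-compatible
# /v1/embeddings endpoint (as served by Ollama, LM Studio, Xinference, etc.).
def build_embedding_request(texts, model="nomic-embed-text"):
    # The OpenAI embeddings API accepts a single string or a list of strings
    # under the "input" key, plus the model name.
    return {"model": model, "input": texts}

# Assumed local endpoint; adjust to whatever server you run.
BASE_URL = "http://localhost:11434/v1"

body = json.dumps(build_embedding_request(["hello world"]))
# POST `body` to f"{BASE_URL}/embeddings" with the HTTP client of your choice;
# the response contains a "data" list with one embedding vector per input.
print(body)
```

Because the wire format is identical, pointing a client at any of these servers only requires swapping the base URL and model name.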
We have pushed a version of HippoRAG that can use the text-embedding OpenAI models. It is currently in the develop branch; we will close this issue once it's merged into main.
We are still evaluating whether we will be able to accept PRs, since we are a small academic team, but we will keep you posted. Thanks again for your interest!