Support for Remote Model APIs #480
Sorry, I misunderstood your suggestion. I might add API integration with OpenAI and Gemini; I've been working on a new model manager that might make this possible. If you have any suggestions for other APIs to add, feel free to tell me.
Anthropic support would be amazing. I also think that supporting a generic, OpenAI-compatible API would be better, i.e. respecting OPENAI_BASE_URL, since most model providers have adopted that API schema.
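To make that concrete, here is a minimal sketch (not Alpaca code) of how one OpenAI-style client can talk to any compatible provider just by changing the base URL; the URL, API key, and model name below are placeholder assumptions:

```python
# Minimal sketch (assumptions, not Alpaca's actual code): the same OpenAI-style
# client can reach a hosted provider or a local server, as long as the server
# exposes the /v1/chat/completions schema. URL, key, and model are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # whatever OPENAI_BASE_URL points at
    api_key="not-needed-for-local-servers",
)

response = client.chat.completions.create(
    model="my-model",  # the name the server advertises under /v1/models
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Swapping providers then only means swapping the base URL and key, which is why converging on this schema keeps the client code simple.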
Also adding this one here.
That would be a very useful feature, for example to use vLLM-deployed models instead of Ollama; not all models are available in an Ollama-compatible format.
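For reference, vLLM ships an OpenAI-compatible HTTP frontend, so a request like the following is roughly what a client would need to send; this is an illustration only, and the host, port, and model name are placeholders:

```python
# Rough illustration (assumptions, not Alpaca code): talking to a vLLM server
# that was started with its OpenAI-compatible frontend, e.g.
#   python -m vllm.entrypoints.openai.api_server --model mistralai/Mistral-7B-Instruct-v0.2
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.2",
        "messages": [{"role": "user", "content": "Hello from Alpaca!"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```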
Just so you guys know, this would mean rewriting a big part of the codebase to make it more generic instead of Ollama-specific. I will do it, but I'm a college student, so it will unfortunately take a while.
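Purely as a hypothetical illustration of what "more generic" could look like (this is not the project's actual design, and all names are invented), a small provider interface might let Ollama become just one backend among several:

```python
# Hypothetical sketch only, not Alpaca's real architecture: a minimal backend
# interface so the chat UI doesn't care whether it talks to Ollama, an
# OpenAI-compatible server, or something else.
from abc import ABC, abstractmethod
from typing import Iterator

class ChatProvider(ABC):
    @abstractmethod
    def list_models(self) -> list[str]:
        """Return the model names this backend can serve."""

    @abstractmethod
    def chat(self, model: str, messages: list[dict]) -> Iterator[str]:
        """Yield chunks of the assistant's reply, for streaming into the UI."""

class OllamaProvider(ChatProvider):
    """Would wrap Ollama's /api/chat endpoint (stubbed here)."""
    def __init__(self, host: str = "http://localhost:11434"):
        self.host = host
    def list_models(self) -> list[str]: ...
    def chat(self, model: str, messages: list[dict]) -> Iterator[str]: ...

class OpenAICompatibleProvider(ChatProvider):
    """Would wrap any /v1/chat/completions server: vLLM, OpenAI, etc. (stubbed)."""
    def __init__(self, base_url: str, api_key: str = ""):
        self.base_url, self.api_key = base_url, api_key
    def list_models(self) -> list[str]: ...
    def chat(self, model: str, messages: list[dict]) -> Iterator[str]: ...
```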
First, I want to say that the UI design of Alpaca is great - it's clean, intuitive, and makes interacting with AI models a pleasure. It would be fantastic to extend this great user experience to support remote model APIs.
This would allow users to leverage both local Ollama models and remote APIs within the same polished interface.
Would it be possible to consider adding this capability in a future update?