Replies: 7 comments 3 replies
-
Thanks for your request. You're using an unreleased, incomplete, and completely unsupported feature at this point, subject to much change. We're happy to review PRs at this point in time if you'd like to make a contribution.
-
It should be a pretty simple change since we already have
-
Yes @hunterjm, your suggestion works, as does
-
The ollama provider does not seem to work. It calls /api/generate, which returns a vector but no text description. I was trying moondream:latest as the model (https://ollama.com/library/moondream).
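For context, Ollama's /api/chat endpoint attaches base64-encoded images to the user message and returns the description in `message.content`, whereas /api/generate takes a top-level `prompt`/`images` pair and returns it in `response`. A minimal sketch of building a chat request for a vision model; the helper name is mine, not Frigate's:

```python
import base64


def build_ollama_chat_request(model: str, prompt: str, image_bytes: list[bytes]) -> dict:
    """Build a request body for Ollama's /api/chat endpoint.

    Vision models such as moondream expect base64-encoded images
    attached to the user message; the text description comes back in
    response["message"]["content"].
    """
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": prompt,
                # Ollama expects raw base64 strings, no data: URI prefix.
                "images": [base64.b64encode(b).decode("ascii") for b in image_bytes],
            }
        ],
        "stream": False,
    }
```

POSTing this body to a local Ollama server (default `http://localhost:11434/api/chat`) should yield a text description rather than an embedding vector.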
-
@hunterjm
-
@hunterjm I took a crack at separating them by adding a search_source argument. Thoughts?
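To make the idea concrete, a `search_source` argument could simply select which embedding collection the query runs against. A hedged sketch under that assumption; the collection names and function are illustrative, not Frigate's actual API:

```python
from typing import Literal

SearchSource = Literal["thumbnail", "description"]


def select_embedding_space(search_source: SearchSource) -> str:
    """Map the hypothetical search_source argument to an embedding collection.

    'thumbnail' searches CLIP image embeddings of event thumbnails;
    'description' searches text embeddings of generated descriptions.
    """
    spaces = {
        "thumbnail": "event_thumbnail_clip",
        "description": "event_description_text",
    }
    try:
        return spaces[search_source]
    except KeyError:
        raise ValueError(f"unknown search_source: {search_source!r}") from None
```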
-
@hunterjm I endorse your previous direction: with a good CLIP model we can get quite far. Add the ability to choose whether text search runs against the thumbnail or the description. There are some new CLIP models that are probably superior to what we had previously, such as this one from Google.
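Whichever CLIP model is used, the thumbnail-vs-description choice reduces to which stored vectors the query is ranked against; the ranking itself is just cosine similarity. A minimal numpy sketch, assuming the embeddings are already computed (model loading omitted):

```python
import numpy as np


def rank_by_similarity(query: np.ndarray, corpus: np.ndarray) -> np.ndarray:
    """Return corpus row indices sorted by descending cosine similarity.

    query:  (d,) embedding of the text query.
    corpus: (n, d) embeddings of thumbnails or descriptions,
            depending on which source is being searched.
    """
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    sims = c @ q  # cosine similarity of each row against the query
    return np.argsort(-sims)
```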
-
Describe the problem you are having
Only the openai provider implementation invokes the chat completion API; the ollama provider implementation does not.
Is it possible to broaden the openai implementation to override the default https://api.openai.com/v1 base URL, so that any OpenAI-API-compatible backend also works?
Version
0.15.0
Frigate config file
Allow for an alternate base URL so projects like https://github.com/matatonic/openedai-vision can serve as the backend.
Relevant log output
Frigate stats
No response
Operating system
Debian
Install method
Docker Compose
Object Detector
Coral
Any other information that may be helpful
No response