
[pull] master from Mintplex-Labs:master #10

Open · wants to merge 258 commits into base: master
Conversation

@pull pull bot commented Aug 27, 2024

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

timothycarambat and others added 30 commits August 27, 2024 16:38
bump browser extension commit
handle onblur for embed domains
new voyageai embedding models
update source to ensure swagger.json is openapi 3.0.0 compliant
* Add ability to use Esc keypress to close modal for documents

* move escape close to hook

---------

Co-authored-by: Mr Simon C <[email protected]>
* Add SearchApi to web browsing

* UI modifications for SearchAPI

---------

Co-authored-by: Sebastjan Prachovskij <[email protected]>
(not a big deal, just to avoid someone pointing it out)
* ollama: Switch from parallel to sequential chunk embedding

* throw error on empty embeddings

---------

Co-authored-by: John Blomberg <[email protected]>
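The parallel-to-sequential change above can be sketched as a loop that awaits each chunk before embedding the next, throwing on empty results. `embedChunksSequentially` and `embedFn` are illustrative names for this sketch, not the project's actual Ollama embedder code.

```javascript
// Minimal sketch: embed chunks one at a time instead of in parallel,
// and fail fast if the provider returns an empty embedding.
// `embedFn` stands in for a real embedder call (e.g. an HTTP request).
async function embedChunksSequentially(chunks, embedFn) {
  const results = [];
  for (const chunk of chunks) {
    // Sequential: wait for each embedding before starting the next request.
    const vector = await embedFn(chunk);
    if (!Array.isArray(vector) || vector.length === 0)
      throw new Error(`Empty embedding returned for chunk: "${chunk.slice(0, 32)}"`);
    results.push(vector);
  }
  return results;
}
```

Sequential requests trade throughput for stability: a single local Ollama instance is far less likely to stall or drop requests than when it is hit with many parallel embedding calls at once.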
remove Jazzicons
update pfps
update docs showing no need for manual port forwarding of Server in GH Codespaces (#2247)
* patch no text results for milvus chunks

* wrap addDocumentToNamespace in try catch for handling milvus errors

* lint

* revert milvus db changes

* add try catch to handle grpc error from milvus
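The try/catch hardening around `addDocumentToNamespace` can be sketched as a generic wrapper that converts a thrown client error (e.g. a gRPC failure from Milvus) into a structured result instead of crashing the request. The wrapper name and result shape here are assumptions for illustration.

```javascript
// Sketch of wrapping a vector-DB insert so client errors (e.g. a gRPC
// failure from Milvus) surface as a structured result instead of crashing.
async function safeAddDocument(addFn, namespace, document) {
  try {
    await addFn(namespace, document);
    return { vectorized: true, error: null };
  } catch (e) {
    // Return the message so the caller can report which document failed.
    return { vectorized: false, error: e.message };
  }
}
```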
* fix ui for slash cmd presets

* hide scroll

---------

Co-authored-by: timothycarambat <[email protected]>
* Add support for custom agent skills via plugins
Update Admin.systemPreferences to use updated endpoint (legacy has deprecation notice)

* lint

* dev build

* patch safeJson
patch label loading

* allow plugins with no config options

* lint

* catch invalid setupArgs in frontend

* update link to docs page for agent skills

* remove unneeded files

---------

Co-authored-by: shatfield4 <[email protected]>
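The `safeJson` patch in the plugin work above suggests tolerant parsing of plugin-supplied config. A minimal sketch, where the function name and fallback behavior are assumptions:

```javascript
// Minimal sketch of a tolerant JSON helper: return a fallback instead of
// throwing when plugin-supplied config is malformed or not a string.
function safeJsonParse(input, fallback = null) {
  if (typeof input !== "string") return fallback;
  try {
    return JSON.parse(input);
  } catch {
    return fallback;
  }
}
```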
Add Gemini models
resolves #2263
* Update OpenAI models

* Sort OpenAI models by created timestamp in ascending order

* Update OpenAI models price

* uncheck fallback listing (even if old)
closes #2261

* linting

---------

Co-authored-by: Yaner <[email protected]>
Remove FineTuningBanner
remove AgentAlert for first time users
* Support `@agent` custom skills

* move import
* Patch UI bug with agent skill

* wrap call in try/catch for failures
res?. optional call for settings since null is default

* uncheck
* Patch 11Labs selection UI bug

* remove log
shatfield4 and others added 30 commits January 24, 2025 11:06
* implement dynamic fetching of togetherai models

* implement caching for togetherai models

* update gitignore for togetherai model caching

* Remove models.json from git tracking

* Remove .cached_at from git tracking

* lint

* revert unneeded change

---------

Co-authored-by: Timothy Carambat <[email protected]>
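The TogetherAI caching commits (a `models.json` cache plus a `.cached_at` timestamp, both removed from git tracking) imply a staleness check before refetching the remote model list. A sketch of that check; the one-week TTL is an assumption, not the project's actual value:

```javascript
// Sketch of the staleness check implied by a `.cached_at` timestamp file:
// refetch the remote model list once the cache is older than the TTL.
const ONE_WEEK_MS = 7 * 24 * 60 * 60 * 1000; // assumed TTL

function cacheIsStale(cachedAtMs, nowMs = Date.now(), ttlMs = ONE_WEEK_MS) {
  if (!Number.isFinite(cachedAtMs)) return true; // missing/corrupt timestamp
  return nowMs - cachedAtMs > ttlMs;
}
```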
* wip change workspace llm settings

* allow editing of workspace llm and agent config inside workspace settings

* lint + put back deleted comment

---------

Co-authored-by: Timothy Carambat <[email protected]>
Add better data-handling for unknown providers
* Breakout LaTeX plugin for modification

* backport regular markdown link
* remove native llm

* remove node-llama-cpp from dockerfile

* remove unneeded items from dockerfile

---------

Co-authored-by: Timothy Carambat <[email protected]>
* Add ability to disable default agent skills

* debug build
* wip agent ui animation

* WIP agent ui revision

* linting

* simplify css

* memoize agent responses

* patch hook memo issue

* dev build

---------

Co-authored-by: shatfield4 <[email protected]>
* Add tokenizer improvements via Singleton class
linting

* dev build

* Estimation fallback when string exceeds a fixed byte size

* Add notice to tiktoken on backend
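The Singleton-plus-estimation-fallback commits above can be sketched as below. The byte threshold, the 4-bytes-per-token heuristic, and the naive word counter are all assumptions standing in for the real tiktoken call.

```javascript
// Sketch: one shared tokenizer instance per process, with an estimation
// fallback when the input is too large to tokenize exactly.
let _instance = null;

class Tokenizer {
  constructor() {
    this.maxExactBytes = 500 * 1024; // assumed threshold for exact counting
    this.avgBytesPerToken = 4;       // rough bytes-per-token heuristic
  }

  countTokens(text) {
    const bytes = Buffer.byteLength(text, "utf8");
    if (bytes > this.maxExactBytes)
      return Math.ceil(bytes / this.avgBytesPerToken); // estimation fallback
    // Stand-in for an exact tokenizer (e.g. tiktoken) on small inputs.
    return text.split(/\s+/).filter(Boolean).length;
  }
}

function getTokenizer() {
  if (!_instance) _instance = new Tokenizer(); // construct once, reuse
  return _instance;
}
```

Constructing the tokenizer once avoids repeatedly paying its setup cost, and the byte-size cutoff keeps a pathological multi-megabyte string from blocking the event loop on an exact count.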
connect #3023
Note: depends on user naming the deployment correctly.

* Enable `num_ctx` to match defined chunk length in ollama embedder

* remove console
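The `num_ctx` commit above passes the embedder's configured chunk length through as Ollama's context window, since Ollama reads `num_ctx` from the request's `options` object. A sketch of building those request options; the function name and default model are illustrative assumptions:

```javascript
// Sketch: make Ollama's context window (`num_ctx`) at least as large as the
// text-splitter chunk length so chunks are not silently truncated.
function buildOllamaEmbedOptions(chunkLength, model = "nomic-embed-text") {
  return {
    model,
    options: {
      // Ollama reads `num_ctx` from the request's `options` object.
      num_ctx: chunkLength,
    },
  };
}
```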
Add in-text citations as well for PPLX token streaming
handle timeouts for stream/buffer hanging
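The hanging-stream fix can be sketched as a race between the real work and a timer, so a stalled stream or buffer rejects instead of hanging the request forever. `withTimeout` is an illustrative helper, not the project's actual code.

```javascript
// Sketch: reject a hanging stream/buffer operation after `ms` milliseconds
// instead of letting the request hang forever.
function withTimeout(promise, ms, message = "Stream timed out") {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error(message)), ms);
  });
  // Whichever settles first wins; always clear the timer afterwards.
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}
```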