Releases
v0.5.0 (2024-04-02)
Features
code: improve concat of strings in ui (#1785) (bac818a)
docker: set default Docker to use Ollama (#1812) (f83abff)
docs: Add guide Llama-CPP Linux AMD GPU support (#1782) (8a836e4)
docs: Feature/upgrade docs (#1741) (5725181)
docs: upgrade fern (#1596) (84ad16a)
ingest: Created a faster ingestion mode - pipeline (#1750) (134fc54)
llm - embed: Add support for Azure OpenAI (#1698) (1efac6a)
llm: adds several settings for llamacpp and ollama (#1703) (02dc83e)
llm: Ollama LLM-Embeddings decouple + longer keep_alive settings (#1800) (b3b0140)
llm: Ollama timeout setting (#1773) (6f6c785)
local: tiktoken cache within repo for offline (#1467) (821bca3)
nodestore: add Postgres for the doc and index store (#1706) (68b3a34)
rag: expose similarity_top_k and similarity_score to settings (#1771) (087cb0b)
RAG: Introduce SentenceTransformer Reranker (#1810) (83adc12)
scripts: Wipe qdrant and obtain db Stats command (#1783) (ea153fb)
ui: Add Model Information to ChatInterface label (f0b174c)
ui: add sources check to not repeat identical sources (#1705) (290b9fb)
UI: Faster startup and document listing (#1763) (348df78)
ui: maintain score order when curating sources (#1643) (410bf7a)
unify settings for vector and nodestore connections to PostgreSQL (#1730) (63de7e4)
wipe per storage type (#1772) (c2d6948)
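Several of the entries above surface as new configuration keys rather than code changes. The fragment below is a minimal sketch of how such a settings file might look; the exact key names (keep_alive, request_timeout, similarity_top_k, similarity_value) are assumptions inferred from the PR titles, not verified against the shipped settings schema, so check the project docs before use.

```yaml
# Hypothetical settings fragment illustrating the new knobs in this release;
# key names and defaults are assumptions, not the verified schema.
ollama:
  keep_alive: 5m          # longer keep_alive setting (#1800)
  request_timeout: 120.0  # Ollama timeout setting (#1773)

rag:
  similarity_top_k: 2     # expose similarity_top_k to settings (#1771)
  similarity_value: 0.45  # expose similarity_score to settings (#1771)
```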
Bug Fixes