
Releases: run-llama/llama_index

2024-08-22 (v0.11.0)

22 Aug 23:36
78f9a11

llama-index-core [0.11.0]

  • removed the deprecated ServiceContext -- using it now prints an error with a link to the migration guide
  • removed the deprecated LLMPredictor -- using it now prints an error; any existing LLM is a drop-in replacement
  • made pandas an optional dependency
  • officially moved to Pydantic v2, with full support

Everything Else

  • bumped the minor version of every package to account for the new version of llama-index-core

2024-08-21 (v0.10.68)

21 Aug 23:18
ef9a21c

llama-index-core [0.10.68]

  • remove nested progress bars in base element node parser (#15550)
  • Adding exhaustive docs for workflows (#15556)
  • Adding multi-strategy workflow with reflection notebook example (#15445)
  • remove openai dep from core (#15527)
  • Improve token counter to handle more response types (#15501)
  • feat: Allow using step decorator without parentheses (#15540)
  • feat: workflow services (aka nested workflows) (#15325)
  • Remove requirement to specify "allowed_query_fields" parameter when using "cypher_validator" in TextToCypher retriever (#15506)

llama-index-embeddings-mistralai [0.1.6]

  • fix mistral embeddings usage (#15508)

llama-index-embeddings-ollama [0.2.0]

  • use ollama client for embeddings (#15478)

llama-index-embeddings-openvino [0.2.1]

  • support static input shape for openvino embedding and reranker (#15521)

llama-index-graph-stores-neptune [0.1.8]

  • Added code to expose structured schema for Neptune (#15507)

llama-index-llms-ai21 [0.3.2]

  • Integration: AI21 Tools support (#15518)

llama-index-llms-bedrock [0.1.13]

  • Support token counting for llama-index integration with bedrock (#15491)

llama-index-llms-cohere [0.2.2]

  • feat: add tool calling support for achat cohere (#15539)

llama-index-llms-gigachat [0.1.0]

  • Adding gigachat LLM support (#15313)

llama-index-llms-openai [0.1.31]

  • Fix incorrect type in OpenAI token usage report (#15524)
  • allow streaming token counts for openai (#15548)

llama-index-postprocessor-nvidia-rerank [0.2.1]

  • add truncate support (#15490)
  • Update to 0.2.0, remove old code (#15533)
  • update default model to nvidia/nv-rerankqa-mistral-4b-v3 (#15543)

llama-index-readers-bitbucket [0.1.4]

  • Fix issues loading file paths from Bitbucket (#15311)

llama-index-readers-google [0.3.1]

  • enhance google drive reader for improved functionality and usability (#15512)

llama-index-readers-remote [0.1.6]

  • check and sanitize remote reader urls (#15494)

llama-index-vector-stores-qdrant [0.2.17]

  • fix: setting IDF modifier in QdrantVectorStore for sparse vectors (#15538)

v0.10.67.post1

19 Aug 21:51
729d5f2

v0.10.65

12 Aug 14:47

v0.10.64

09 Aug 23:25

v0.10.63

09 Aug 13:23
8a48fdc

v0.10.62

07 Aug 04:33
497ec04

v0.10.61

06 Aug 00:00
da2f3fa

2024-07-31 (v0.10.59)

01 Aug 04:46
821ca7c

llama-index-core [0.10.59]

  • Introduce Workflows for event-driven orchestration (#15067)
  • Added feature to context chat engine allowing previous chunks to be inserted into the current context window (#14889)
  • MLflow Integration added to docs (#14977)
  • docs(literalai): add Literal AI integration to documentation (#15023)
  • expand span coverage for query pipeline (#14997)
  • make re-raising error skip constructor during asyncio_run() (#14970)

llama-index-embeddings-ollama [0.1.3]

  • Add proper async embedding support

llama-index-embeddings-textembed [0.0.1]

  • add support for textembed embedding (#14968)

llama-index-graph-stores-falkordb [0.1.5]

  • initial implementation of FalkorDBPropertyGraphStore (#14936)

llama-index-llms-azure-inference [0.1.1]

  • Fix: Azure AI inference integration support for tools (#15044)

llama-index-llms-fireworks [0.1.7]

  • Update default model to support function calling (#15046)

llama-index-llms-ollama [0.2.2]

  • toggle for ollama function calling (#14972)
  • Add function calling for Ollama (#14948)

llama-index-llms-openllm [0.2.0]

  • update to OpenLLM 0.6 (#14935)

llama-index-packs-longrag [0.1.0]

  • Adds a LlamaPack that implements LongRAG (#14916)

llama-index-postprocessor-tei-rerank [0.1.0]

  • Support for re-ranking via Text Embeddings Inference (TEI) (#15063)

llama-index-readers-confluence [0.1.7]

  • confluence reader sort auth parameters priority (#14905)

llama-index-readers-file [0.1.31]

  • UnstructuredReader use filename as ID (#14946)

llama-index-readers-gitlab [0.1.0]

  • Add GitLab reader integration (#15030)

llama-index-readers-google [0.2.11]

  • Fix issue with average ratings being a float vs an int (#15070)

llama-index-retrievers-bm25 [0.2.2]

  • use proper stemmer in bm25 tokenize (#14965)

llama-index-vector-stores-azureaisearch [0.1.13]

  • Fix issue with deleting non-existent index (#14949)

llama-index-vector-stores-elasticsearch [0.2.5]

  • disable embeddings for sparse strategy (#15032)

llama-index-vector-stores-kdbai [0.2.0]

  • Update default sparse encoder for Hybrid search (#15019)

llama-index-vector-stores-milvus [0.1.22]

  • Enhance MilvusVectorStore with flexible index management for overwriting (#15058)

llama-index-vector-stores-postgres [0.1.13]

  • Adds option to construct PGVectorStore with a HNSW index (#15024)

v0.10.58

24 Jul 19:43
d94e0b0