
chore(deps): update dependency llama-index to v0.10.13 [security] #1296

Open · plural-renovate[bot] wants to merge 1 commit into master from renovate/pypi-llama-index-vulnerability

Conversation

plural-renovate[bot] (Contributor) commented on Feb 10, 2024

This PR contains the following updates:

Package | Update | Change
llama-index (source) | minor | ==0.7.4 -> ==0.10.13

GitHub Vulnerability Alerts

CVE-2023-39662

An issue in llama_index v0.7.13 and earlier allows a remote attacker to execute arbitrary code via the exec parameter in the PandasQueryEngine function.

CVE-2024-4181

A command injection vulnerability exists in the RunGptLLM class of the llama_index library, version 0.9.47, used by the RunGpt framework from JinaAI to connect to Large Language Models (LLMs). The vulnerability arises from improper use of the eval function, allowing a malicious or compromised LLM hosting provider to execute arbitrary commands on the client's machine. This issue was fixed in version 0.10.13; exploitation could give a hosting provider full control over client machines.
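Both advisories are resolved by the version bump in this PR (the pin moves from ==0.7.4 to ==0.10.13). As an extra guard, a consumer can also assert the installed version at runtime; the sketch below is illustrative only, not part of this PR, and uses only the Python standard library.

```python
# Illustrative runtime guard (not part of this PR): refuse to run against a
# llama-index release older than 0.10.13, the version that resolves both CVEs.
from importlib.metadata import PackageNotFoundError, version

PATCHED = (0, 10, 13)

def llama_index_is_patched() -> bool:
    try:
        installed = version("llama-index")
    except PackageNotFoundError:
        return False
    # Keep only the leading numeric release segments, e.g. "0.10.13" -> (0, 10, 13).
    parts = []
    for piece in installed.split("."):
        if not piece.isdigit():
            break
        parts.append(int(piece))
    return tuple(parts) >= PATCHED

if not llama_index_is_patched():
    raise RuntimeError(
        "llama-index < 0.10.13 is affected by CVE-2023-39662 and CVE-2024-4181"
    )
```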


Release Notes

run-llama/llama_index (llama-index)

v0.10.13

Compare Source

New Features
  • Added a llama-pack for KodaRetriever, for on-the-fly alpha tuning (#​11311)
  • Added support for mistral-large (#​11398)
  • Last token pooling mode for huggingface embeddings models like SFR-Embedding-Mistral (#​11373)
  • Added fsspec support to SimpleDirectoryReader (#​11303)
Bug Fixes / Nits
  • Fixed an issue with context window + prompt helper (#​11379)
  • Moved OpenSearch vector store to BasePydanticVectorStore (#​11400)
  • Fixed function calling in fireworks LLM (#​11363)
  • Made cohere embedding types more automatic (#​11288)
  • Improve function calling in react agent (#​11280)
  • Fixed MockLLM imports (#​11376)

v0.10.12

Compare Source

New Features
Bug Fixes / Nits
  • Fixed string formatting in weaviate (#​11294)
  • Fixed off-by-one error in semantic splitter (#​11295)
  • Fixed download_llama_pack for multiple files (#​11272)
  • Removed BUILD files from packages (#​11267)
  • Loosened python version reqs for all packages (#​11267)
  • Fixed args issue with chromadb (#​11104)

v0.10.11

Compare Source

Bug Fixes / Nits
  • Fixed multi-modal LLM for async acomplete (#​11064)
  • Fixed issue with llamaindex-cli imports (#​11068)

v0.10.10

Compare Source

Our publishing process is still a bit wonky -- apologies. This is just a version
bump to ensure the changes that were supposed to happen in 0.10.9 actually
did get published. (AF)

v0.10.9

Compare Source

  • add llama-index-cli dependency

v0.10.8

Compare Source

v0.10.7

Compare Source

New Features
Bug Fixes / Nits
  • Fixed linting in CICD (#​10945)
  • Fixed using remote graph stores (#​10971)
  • Added missing LLM kwarg in NoText response synthesizer (#​10971)
  • Fixed openai import in rankgpt (#​10971)
  • Fixed resolving model name to string in openai embeddings (#​10971)
  • Off by one error in sentence window node parser (#​10971)

v0.10.6

Compare Source

First, apologies for missing the changelog the last few versions. Trying to figure out the best process with 400+ packages.

At some point, each package will have a dedicated changelog.

But for now, onto the "master" changelog.

New Features
Bug Fixes / Nits
  • Various fixes for clickhouse vector store (#​10799)
  • Fix index name in neo4j vector store (#​10749)
  • Fixes to sagemaker embeddings (#​10778)
  • Fixed performance issues when splitting nodes (#​10766)
  • Fix non-float values in reranker + bm25 (#10930)
  • OpenAI-agent should be a dep of openai program (#​10930)
  • Add missing shortcut imports for query pipeline components (#​10930)
  • Fix NLTK and tiktoken not being bundled properly with core (#​10930)
  • Add back llama_index.core.__version__ (#​10930)

v0.10.5

  • added dead simple FnAgentWorker for custom agents (#​14329)
  • Pass the kwargs on when build_index_from_nodes (#​14341)
  • make async utils a bit more robust to nested async (#​14356)

v0.10.4

  • Added PropertyGraphIndex and other supporting abstractions. See the full guide for more details (#​13747)
  • Updated AutoPrevNextNodePostprocessor to allow passing in response mode and LLM (#​13771)
  • fix type handling with return direct (#​13776)
  • Correct the method name to _aget_retrieved_ids_and_texts in retrieval evaluator (#13765)
  • fix: QueryTransformComponent incorrectly calling self._query_transform (#13756)
  • implement more filters for SimpleVectorStoreIndex (#​13365)

v0.10.3

Compare Source

Bug Fixes / Nits
  • Fixed passing in LLM to as_chat_engine (#​10605)
  • Fixed system prompt formatting for anthropic (#​10603)
  • Fixed elasticsearch vector store error on __version__ (#​10656)
  • Fixed import on openai pydantic program (#​10657)
  • Added client back to opensearch vector store exports (#​10660)
  • Fixed bug in SimpleDirectoryReader not using file loaders properly (#​10655)
  • Added lazy LLM initialization to RankGPT (#​10648)
  • Fixed bedrock embedding from_credentials passing in the model name (#10640)
  • Added back recent changes to TelegramReader (#​10625)

v0.10.1

Compare Source

New Features
Bug Fixes / Nits
  • Ensure order in async embeddings generation (#​11562)
  • Fixed empty metadata for csv reader (#​11563)
  • Serializable fix for composable retrievers (#​11617)
  • Fixed milvus metadata filter support (#​11566)
  • Fixed pydantic import in clickhouse vector store (#11631)
  • Fixed system prompts for gemini/vertex-gemini (#11511)

v0.10.0

Compare Source

Breaking Changes
  • Several changes are introduced; see the full blog post for complete details. A sketch of the resulting import changes is shown below.
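For reference, a minimal sketch of what the 0.10 migration looks like in practice: core classes move into llama_index.core and each integration becomes its own pip package. The OpenAI integration package and the ./data path below are illustrative assumptions, not taken from this PR.

```python
# Minimal 0.10.x-style usage sketch (illustrative; not from this PR).
# pre-0.10 the imports were roughly:
#   from llama_index import VectorStoreIndex, SimpleDirectoryReader
#   from llama_index.llms import OpenAI
# 0.10.x splits core from integrations, e.g. `pip install llama-index-llms-openai`.
from llama_index.core import Settings, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.openai import OpenAI

Settings.llm = OpenAI(model="gpt-3.5-turbo")  # Settings replaces the old ServiceContext

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
print(index.as_query_engine().query("What do these documents cover?"))
```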

v0.9.48

Compare Source

Bug Fixes / Nits
  • Add back deprecated API for BedrockEmbedding (#10581)

v0.9.47

Compare Source

Last patch before v0.10!

New Features
  • add conditional links to query pipeline (#​10520)
  • refactor conditional links + add to cookbook (#​10544)
  • agent + query pipeline cleanups (#​10563)
Bug Fixes / Nits
  • Add sleep to fix lag in chat stream (#​10339)
  • OllamaMultiModal kwargs (#​10541)
  • Update Ingestion Pipeline to handle empty documents (#​10543)
  • Fixing minor spelling error (#​10516)
  • fix elasticsearch async check (#​10549)
  • Docs/update slack demo colab (#​10534)
  • Adding the possibility to use the IN operator for PGVectorStore (#​10547)
  • fix agent reset (#​10562)
  • Fix MD duplicated Node id from multiple docs (#​10564)

v0.9.46

Compare Source

New Features
  • Update pooling strategy for embedding models (#​10536)
  • Add Multimodal Video RAG example (#​10530)
  • Add SECURITY.md (#​10531)
  • Move agent module guide up one-level (#​10519)
Bug Fixes / Nits

v0.9.45.post1

Compare Source

New Features
  • Upgraded deeplake vector database to use BasePydanticVectorStore (#​10504)
Bug Fixes / Nits
  • Fix MD parser for inconsistent tables (#10488)
  • Fix ImportError for pypdf in MetadataExtractionSEC.ipynb (#​10491)

v0.9.45

Compare Source

New Features
  • Upgraded deeplake vector database to use BasePydanticVectorStore (#​10504)
Bug Fixes / Nits
  • Fix MD parser for inconsistent tables (#10488)
  • Fix ImportError for pypdf in MetadataExtractionSEC.ipynb (#​10491)

v0.9.44

Compare Source

New Features

v0.9.43

Compare Source

New Features
Bug Fixes / Nits

v0.9.42.post2

Compare Source

v0.9.42.post1

Compare Source

New Features
  • Add Async support for Base nodes parser (#​10418)

v0.9.42

Compare Source

New Features
  • Add Async support for Base nodes parser (#​10418)

v0.9.41

Compare Source

New Features
Bug Fixes / Nits
  • fix full node content in KeywordExtractor (#​10398)

v0.9.40

Compare Source

New Features
  • Improve and fix bugs for MarkdownElementNodeParser (#​10340)
  • Fixed and improved Perplexity support for new models (#10319)
  • Ensure system_prompt is passed to Perplexity LLM (#​10326)
  • Extended BaseRetrievalEvaluator to include an optional PostProcessor (#​10321)

v0.9.39

Compare Source

New Features
  • Support for new GPT Turbo Models (#​10291)
  • Support multiple docs for Sentence Transformer fine-tuning (#10297)
Bug Fixes / Nits

v0.9.38

Compare Source

New Features
  • Support for new OpenAI v3 embedding models (#​10279)
Bug Fixes / Nits
  • Extra checks on sparse embeddings for qdrant (#​10275)

v0.9.37.post1

Compare Source

v0.9.37

Compare Source

New Features
  • Added a RAG CLI utility (#​10193)
  • Added a txtai vector store (#10240)
  • Added a Postgresql based docstore and index store (#​10233)
  • specify tool spec in tool specs (#​10263)
Bug Fixes / Nits
  • Fixed serialization error in ollama chat (#​10230)
  • Added missing fields to SentenceTransformerRerank (#​10225)
  • Fixed title extraction (#​10209, #​10226)
  • nit: make chainable output parser more exposed in library/docs (#​10262)
  • 🐛 summary index not carrying over excluded metadata keys (#​10259)

v0.9.36

Compare Source

New Features
  • Added support for SageMakerEmbedding (#​10207)
Bug Fixes / Nits
  • Fix duplicated file_id on openai assistant (#​10223)
  • Fix circular dependencies for programs (#​10222)
  • Run TitleExtractor on groups of nodes from the same parent document (#​10209)
  • Improve vectara auto-retrieval (#​10195)

v0.9.35

Compare Source

New Features
  • Moved beautifulsoup4 dependency to a new optional extra, html (#10156)
  • make BaseNode.hash an @property (#​10163)
  • Neutrino (#​10150)
  • feat: JSONalyze Query Engine (#​10067)
  • [wip] add custom hybrid retriever notebook (#​10164)
  • add from_collection method to ChromaVectorStore class (#​10167)
  • CLI experiment v0: ask (#​10168)
  • make react agent prompts more editable (#​10154)
  • Add agent query pipeline (#​10180)
Bug Fixes / Nits
  • Update supabase vecs metadata filter function to support multiple fields (#​10133)
  • Bugfix/code improvement for LanceDB integration (#​10144)
  • beautifulsoup4 optional dependency (#​10156)
  • Fix qdrant aquery hybrid search (#​10159)
  • make hash a @​property (#​10163)
  • fix: bug on poetry install of llama-index[postgres] (#​10171)
  • [doc] update jaguar vector store documentation (#​10179)
  • Remove use of not-launched finish_message (#​10188)
  • Updates to Lantern vector stores docs (#​10192)
  • fix typo in multi_document_agents.ipynb (#​10196)

v0.9.34

Compare Source

New Features
Bug Fixes / Nits
  • Update bedrock utils for Claude 2.1 (#10139)
  • BugFix: deadlocks using multiprocessing (#​10125)

v0.9.33

Compare Source

New Features
  • Added RankGPT as a postprocessor (#​10054)
  • Ensure backwards compatibility with new Pinecone client version bifurcation (#9995)
  • Recursive retriever all the things (#​10019)
Bug Fixes / Nits
  • BugFix: using the markdown element parser on a table containing a comma (#9926)
  • extend auto-retrieval notebook (#​10065)
  • Updated the Attribute name in llm_generators (#​10070)
  • jaguar vector store add text_tag to add_kwargs in add() (#​10057)

v0.9.32

Compare Source

New Features
  • added query-time row retrieval + fix nits with query pipeline over structured data (#​10061)
  • ReActive Agents w/ Context + updated stale link (#​10058)

v0.9.31

Compare Source

New Features
  • Added selectors and routers to query pipeline (#​9979)
  • Added sparse-only search to qdrant vector store (#​10041)
  • Added Tonic evaluators (#​10000)
  • Adding async support to firestore docstore (#​9983)
  • Implement mongodb docstore put_all method (#​10014)
Bug Fixes / Nits
  • Properly truncate sql results based on max_string_length (#​10015)
  • Fixed node.resolve_image() for base64 strings (#​10026)
  • Fixed cohere system prompt role (#​10020)
  • Remove redundant token counting operation in SentenceSplitter (#​10053)

v0.9.30

Compare Source

New Features
  • Implements a Node Parser using embeddings for Semantic Splitting (#​9988)
  • Add Anyscale Embedding model support (#​9470)
Bug Fixes / Nits
  • nit: fix pandas get prompt (#​10001)
  • Fix: Token counting bug (#​9912)
  • Bump jinja2 from 3.1.2 to 3.1.3 (#​9997)
  • Fix corner case for qdrant hybrid search (#​9993)
  • Bugfix: sphinx generation errors (#​9944)
  • Fix: language used before assignment in CodeSplitter (#​9987)
  • fix inconsistent name "text_parser" in section "Use a Text Splitter… (#​9980)
  • 🐛 fixing batch size (#​9982)
  • add auto-async execution to query pipelines (#​9967)
  • 🐛 fixing init (#​9977)
  • Parallel Loading with SimpleDirectoryReader (#​9965)
  • do not force delete an index in milvus (#​9974)

v0.9.29

Compare Source

New Features
  • Added support for together.ai models (#​9962)
  • Added support for batch redis/firestore kvstores, async firestore kvstore (#​9827)
  • Parallelize IngestionPipeline.run() (#​9920)
  • Added new query pipeline components: function, argpack, kwargpack (#​9952)
Bug Fixes / Nits
  • Updated optional langchain imports to avoid warnings (#​9964)
  • Raise an error if empty nodes are embedded (#​9953)

v0.9.28.post2

Compare Source

v0.9.28.post1

Compare Source

v0.9.28

Compare Source

New Features
  • Added support for Nvidia TensorRT LLM (#9842)
  • Allow tool_choice to be set during agent construction (#​9924)
  • Added streaming support for QueryPipeline (#​9919)
Bug Fixes / Nits
  • Set consistent doc-ids for llama-index readers (#​9923, #​9916)
  • Remove unneeded model inputs for HuggingFaceEmbedding (#​9922)
  • Propagate tool_choice flag to downstream APIs (#​9901)
  • Add chat_store_key to chat memory from_defaults() (#​9928)

v0.9.27

Compare Source

New Features
Bug Fixes / Nits / Smaller Features
  • Propagate tool_choice flag to downstream APIs (#​9901)
  • filter out negative indexes from faiss query (#​9907)
  • added NE filter for qdrant payloads (#​9897)
  • Fix incorrect id assignment in MyScale query result (#​9900)
  • Qdrant Text Match Filter (#​9895)
  • Fusion top k for hybrid search (#​9894)
  • Fix (#​9867) sync_to_async to avoid blocking during asynchronous calls (#​9869)
  • A single node passed into compute_scores returns as a float (#​9866)
  • Remove extra linting steps (#​9878)
  • add vectara links (#​9886)

v0.9.26

Compare Source

New Features
  • Added a BaseChatStore and SimpleChatStore abstraction for dedicated chat memory storage (#​9863)
  • Enable custom tree_sitter parser to be passed into CodeSplitter (#​9845)
  • Created a BaseAutoRetriever base class, to allow other retrievers to extend to auto modes (#​9846)
  • Added support for Nvidia Triton LLM (#​9488)
  • Added DeepEval one-click observability (#​9801)
Bug Fixes / Nits
  • Updated the guidance integration to work with the latest version (#​9830)
  • Made text storage optional for docstores/ingestion pipeline (#9847)
  • Added missing sphinx-automodapi dependency for docs (#​9852)
  • Return actual node ids in weaviate query results (#​9854)
  • Added prompt formatting to LangChainLLM (#​9844)

v0.9.25.post1

Compare Source

v0.9.25

Compare Source

New Features
  • Added concurrency limits for dataset generation (#9779)
  • New deepeval one-click observability handler (#​9801)
  • Added jaguar vector store (#​9754)
  • Add beta multimodal ReAct agent (#​9807)
Bug Fixes / Nits
  • Changed default batch size for OpenAI embeddings to 100 (#​9805)
  • Use batch size properly for qdrant upserts (#​9814)
  • _verify_source_safety uses AST, not regexes, for proper safety checks (#​9789)
  • use provided LLM in element node parsers (#​9776)
  • updated legacy vectordb loading function to be more robust (#​9773)
  • Use provided http client in AzureOpenAI (#​9772)

v0.9.24

Compare Source

New Features
  • Add reranker for BEIR evaluation (#​9743)
  • Add Pathway integration. (#​9719)
  • custom agents implementation + notebook (#​9746)
Bug Fixes / Nits
  • fix beam search for vllm: add missing parameter (#​9741)
  • Fix alpha for hybrid search (#9742)
  • fix token counter (#​9744)
  • BM25 tokenizer lowercase (#​9745)

v0.9.23

Compare Source

Bug Fixes / Nits
  • docs: fixes qdrant_hybrid.ipynb typos (#​9729)
  • make llm completion program more general (#​9731)
  • Refactor MM Vector store and Index for empty collection (#​9717)
  • Adding IF statement to check for Schema using "Select" (#​9712)
  • allow skipping module loading in download_module and download_llama_pack (#​9734)

v0.9.22

Compare Source

New Features
  • Added .iter_data() method to SimpleDirectoryReader (#​9658)
  • Added async support to Ollama LLM (#​9689)
  • Expanding pinecone filter support for in and not in (#​9683)
Bug Fixes / Nits
  • Improve BM25Retriever performance (#​9675)
  • Improved qdrant hybrid search error handling (#​9707)
  • Fixed None handling in ChromaVectorStore (#​9697)
  • Fixed postgres schema creation if not existing (#​9712)

v0.9.21

Compare Source

New Features
  • Added zilliz cloud as a managed index (#​9605)
Bug Fixes / Nits

v0.9.20

Compare Source

New Features
  • Added insert_batch_size to limit the number of embeddings held in memory when creating an index, defaults to 2048 (#9630); see the usage sketch after this version's notes
  • Improve auto-retrieval (#​9647)
  • Configurable Node ID Generating Function (#​9574)
  • Introduced action input parser (#​9575)
  • qdrant sparse vector support (#​9644)
  • Introduced upserts and delete in ingestion pipeline (#​9643)
  • Add Zilliz Cloud Pipeline as a Managed Index (#​9605)
  • Add support for Google Gemini models via VertexAI (#​9624)
  • support allowing additional metadata filters on autoretriever (#​9662)
Bug Fixes / Nits
  • Fix pip install commands in LM Format Enforcer notebooks (#​9648)
  • Fixing some more links and documentations (#​9633)
  • some bedrock nits and fixes (#​9646)
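A brief sketch of the insert_batch_size option noted above. The option name and default come from the release note; passing it through VectorStoreIndex.from_documents and the 0.9.x import path are assumptions for illustration.

```python
# Sketch only: cap how many embeddings are held in memory while building an index.
# insert_batch_size is named in the 0.9.20 release note; the call pattern below is assumed.
from llama_index import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(
    documents,
    insert_batch_size=512,  # release-note default is 2048
)
```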

v0.9.19

Compare Source

New Features
  • new llama datasets LabelledEvaluatorDataset & LabelledPairwiseEvaluatorDataset (#​9531)

v0.9.18

Compare Source

New Features
  • multi-doc auto-retrieval guide (#​9631)
Bug Fixes / Nits
  • fix(vllm): make Vllm's 'complete' method behave the same as other LLM class (#​9634)
  • Fix doc links and other documentation issues (#9632)

v0.9.17

Compare Source

New Features
  • [example] adding user feedback (#​9601)
  • FEATURE: Cohere ReRank Relevancy Metric for Retrieval Eval (#​9495)
Bug Fixes / Nits
  • Fix Gemini Chat Mode (#​9599)
  • Fixed types-protobuf from being a primary dependency (#​9595)
  • Adding an optional auth token to the TextEmbeddingInference class (#​9606)
  • fix: out of index get latest tool call (#​9608)
  • fix(azure_openai.py): add missing return to subclass override (#​9598)
  • fix mix up b/w 'formatted' and 'format' params for ollama api call (#​9594)

v0.9.16.post1

Compare Source

v0.9.16

Compare Source

New Features
  • agent refactor: step-wise execution (#​9584)
  • Add OpenRouter, with Mixtral demo (#​9464)
  • Add hybrid search to neo4j vector store (#​9530)
  • Add support for auth service accounts for Google Semantic Retriever (#​9545)
Bug Fixes / Nits
  • Fixed missing default=None for LLM.system_prompt (#​9504)
  • Fix #​9580 : Incorporate metadata properly (#​9582)
  • Integrations: Gradient[Embeddings,LLM] - sdk-upgrade (#​9528)
  • Add mixtral 8x7b model to anyscale available models (#​9573)
  • Gemini Model Checks (#​9563)
  • Update OpenAI fine-tuning with latest changes (#​9564)
  • fix/Reintroduce WHERE filter to the Sparse Query for PgVectorStore (#​9529)
  • Update Ollama API to ollama v0.1.16 (#​9558)
  • ollama: strip invalid formatted option (#​9555)
  • add a device in optimum push #​9541 (#​9554)
  • Title vs content difference for Gemini Embedding (#​9547)
  • fix pydantic fields to float (#​9542)

v0.9.15.post2

Compare Source

v0.9.15.post1

Compare Source

v0.9.15

Compare Source

New Features
  • Added full support for Google Gemini text+vision models (#​9452)
  • Added new Google Semantic Retriever (#​9440)
  • added from_existing() method + async support to OpenAI assistants (#​9367)
Bug Fixes / Nits
  • Fixed huggingface LLM system prompt and messages to prompt (#​9463)
  • Fixed ollama additional kwargs usage (#​9455)

v0.9.14.post3

Compare Source

v0.9.14.post2

Compare Source

v0.9.14.post1

Compare Source

v0.9.14

Compare Source

New Features
  • Add MistralAI LLM (#​9444)
  • Add MistralAI Embeddings (#​9441)
  • Add Ollama Embedding class (#​9341)
  • Add FlagEmbeddingReranker for reranking (#​9285)
  • feat: PgVectorStore support advanced metadata filtering (#​9377)
  • Added sql_only parameter to SQL query engines to avoid executing SQL (#​9422)
Bug Fixes / Nits
  • Feat/PgVector Support custom hnsw.ef_search and ivfflat.probes (#​9420)
  • fix F1 score definition, update copyright year (#​9424)
  • Change more than one image input for Replicate Multi-modal models from error to warning (#​9360)
  • Removed GPL-licensed aiostream dependency (#9403)
  • Fix result of BedrockEmbedding with Cohere model (#​9396)
  • Only capture valid tool names in react agent (#​9412)
  • Fixed top_k being multiplied by 10 in azure cosmos (#​9438)
  • Fixed hybrid search for OpenSearch (#​9430)
Breaking Changes
  • Updated the base LLM interface to match LLMPredictor (#​9388)
  • Deprecated LLMPredictor (#​9388)

v0.9.13

Compare Source

New Features
  • Added batch prediction support for LabelledRagDataset (#​9332)
Bug Fixes / Nits
  • Fixed save and load for faiss vector store (#​9330)

v0.9.12

Compare Source

New Features
  • Added an option reuse_client to openai/azure to help with async timeouts; set to False to see improvements (#9301). A usage sketch follows this version's notes
  • Added support for vLLM llm (#​9257)
  • Add support for python 3.12 (#​9304)
  • Support for claude-2.1 model name (#​9275)
Bug Fixes / Nits
  • Fix embedding format for bedrock cohere embeddings (#​9265)
  • Use delete_kwargs for filtering in weaviate vector store (#​9300)
  • Fixed automatic qdrant client construction (#​9267)
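A brief sketch of the reuse_client option noted above for the 0.9.x OpenAI integration. The flag name comes from the release note; which classes accept it (LLM vs. embeddings) and the 0.9.x import paths are assumptions for illustration.

```python
# Sketch only: opt out of HTTP client reuse to work around async timeout issues,
# per the 0.9.12 release note. Flag placement on these classes is assumed.
from llama_index.llms import OpenAI
from llama_index.embeddings import OpenAIEmbedding

llm = OpenAI(model="gpt-3.5-turbo", reuse_client=False)
embed_model = OpenAIEmbedding(reuse_client=False)

print(llm.complete("Reply with a single word."))
```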

v0.9.11.post1

Compare Source

v0.9.11

Compare Source

New Features
  • Make reference_contexts optional in LabelledRagDataset (#​9266)
  • Re-organize download module (#​9253)
  • Added document management to ingestion pipeline (#​9135)
  • Add docs for LabelledRagDataset (#​9228)
  • Add submission template notebook and other doc updates for LabelledRagDataset (#​9273)
Bug Fixes / Nits
  • Convert numpy to list for InstructorEmbedding (#​9255)

v0.9.10

Compare Source

New Features
  • Advanced Metadata filter for vector stores (#​9216)
  • Amazon Bedrock Embeddings New models (#​9222)
  • Added PromptLayer callback integration (#​9190)
  • Reuse file ids for OpenAIAssistant (#​9125)
Breaking Changes / Deprecations

v0.9.9

Compare Source

New Features
  • Add new abstractions for LlamaDatasets (#9165)
  • Add metadata filtering and MMR mode support for AstraDBVectorStore (#​9193)
  • Allowing newest scikit-learn versions (#​9213)
Breaking Changes / Deprecations
  • Added LocalAI demo and began deprecation cycle (#​9151)
  • Deprecate QueryResponseDataset and DatasetGenerator of evaluation module (#​9165)
Bug Fixes / Nits

Configuration

📅 Schedule: Branch creation - "" (UTC), Automerge - At any time (no schedule defined).

🚦 Automerge: Disabled by config. Please merge this manually once you are satisfied.

Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.

🔕 Ignore: Close this PR and you won't be reminded about this update again.


  • If you want to rebase/retry this PR, check this box

This PR has been generated by Renovate Bot.

@plural-renovate plural-renovate bot added the dependencies Pull requests that update a dependency file label Feb 10, 2024

stoat-app bot commented Feb 10, 2024

Easy and customizable dashboards for your build system. Learn more about Stoat ↗︎

Static Hosting

Name | Link | Commit
api-coverage | Visit | e4268c9
rtc-coverage | Visit | e4268c9
core-coverage | Visit | e4268c9
cron-coverage | Visit | e4268c9
email-coverage | Visit | e4268c9
worker-coverage | Visit | e4268c9
api-test-results | Visit | e4268c9
graphql-coverage | Visit | e4268c9
rtc-test-results | Visit | e4268c9
core-test-results | Visit | e4268c9
cron-test-results | Visit | e4268c9
email-test-results | Visit | e4268c9
worker-test-results | Visit | e4268c9
graphql-test-results | Visit | e4268c9

Job Runtime

job runtime chart


@plural-renovate plural-renovate bot force-pushed the renovate/pypi-llama-index-vulnerability branch 2 times, most recently from fa41eda to 7c222df Compare March 7, 2024 17:14
@plural-renovate plural-renovate bot force-pushed the renovate/pypi-llama-index-vulnerability branch 2 times, most recently from ccb0794 to bd7358a Compare May 7, 2024 21:43
@plural-renovate plural-renovate bot force-pushed the renovate/pypi-llama-index-vulnerability branch from bd7358a to ae26a24 Compare May 10, 2024 23:05
@plural-renovate plural-renovate bot force-pushed the renovate/pypi-llama-index-vulnerability branch from ae26a24 to b85f456 Compare May 24, 2024 20:36
@plural-renovate plural-renovate bot changed the title chore(deps): update dependency llama-index to v0.9.14 [security] chore(deps): update dependency llama-index to v0.10.13 [security] May 24, 2024
@plural-renovate plural-renovate bot force-pushed the renovate/pypi-llama-index-vulnerability branch from b85f456 to adc7ebb Compare June 5, 2024 16:51
@plural-renovate plural-renovate bot force-pushed the renovate/pypi-llama-index-vulnerability branch 4 times, most recently from bea36d9 to d8f0e40 Compare June 24, 2024 22:12
@plural-renovate plural-renovate bot force-pushed the renovate/pypi-llama-index-vulnerability branch 2 times, most recently from 624ceeb to 0019e2c Compare July 3, 2024 16:01
@plural-renovate plural-renovate bot force-pushed the renovate/pypi-llama-index-vulnerability branch from 0019e2c to e4268c9 Compare July 4, 2024 00:58