Commit cbec0ca: DEFAULT_DOCUMENT_PROMPT

jacopo-chevallard committed Oct 23, 2024
2 parents 93257d5 + 973c678
Showing 19 changed files with 120 additions and 854 deletions.
2 changes: 1 addition & 1 deletion .release-please-manifest.json

@@ -1,3 +1,3 @@
 {
-  "core": "0.0.21"
+  "core": "0.0.22"
 }
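The manifest above is how release-please tracks package versions: a JSON object mapping each package directory (here `core`) to its last released version string. A minimal sketch of parsing such a manifest (inline JSON stands in for the real file):

```python
import json

# Parse a release-please-style manifest; each key is a package path,
# each value its current released version.
manifest = json.loads('{"core": "0.0.22"}')
version = manifest["core"]
print(version)  # prints: 0.0.22
```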
6 changes: 3 additions & 3 deletions README.md
@@ -20,7 +20,7 @@ Quivr, helps you build your second brain, utilizes the power of GenerativeAI to

 >We take care of the RAG so you can focus on your product. Simply install quivr-core and add it to your project. You can now ingest your files and ask questions.*
-**We will be improving the RAG and adding more features everything, stay tuned!**
+**We will be improving the RAG and adding more features, stay tuned!**


This is the core of Quivr, the brain of Quivr.com.
@@ -53,7 +53,7 @@ Ensure you have the following installed:

- **Step 2**: Create a RAG with 5 lines of code

```python
import tempfile

from quivr_core import Brain
@@ -72,7 +72,7 @@
"what is gold? asnwer in french"
)
print("answer:", answer)
```

## Examples

7 changes: 7 additions & 0 deletions core/CHANGELOG.md
@@ -1,5 +1,12 @@
 # Changelog

+## [0.0.22](https://github.com/QuivrHQ/quivr/compare/core-0.0.21...core-0.0.22) (2024-10-21)
+
+### Features
+
+* **ask:** non-streaming now calls streaming ([#3409](https://github.com/QuivrHQ/quivr/issues/3409)) ([e71e46b](https://github.com/QuivrHQ/quivr/commit/e71e46bcdfbab0d583aef015604278343fd46c6f))
+
## [0.0.21](https://github.com/QuivrHQ/quivr/compare/core-0.0.20...core-0.0.21) (2024-10-21)


2 changes: 1 addition & 1 deletion core/pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "quivr-core"
-version = "0.0.21"
+version = "0.0.22"
description = "Quivr core RAG package"
authors = [
{ name = "Stan Girard", email = "[email protected]" }
43 changes: 19 additions & 24 deletions core/quivr_core/brain/brain.py
@@ -545,34 +545,28 @@ def ask(
print(answer.answer)
```
"""
-        llm = self.llm
-
-        # If you passed a different llm model we'll override the brain one
-        if retrieval_config:
-            if retrieval_config.llm_config != self.llm.get_config():
-                llm = LLMEndpoint.from_config(config=retrieval_config.llm_config)
-        else:
-            retrieval_config = RetrievalConfig(llm_config=self.llm.get_config())
-
-        if rag_pipeline is None:
-            rag_pipeline = QuivrQARAGLangGraph
-
-        rag_instance = rag_pipeline(
-            retrieval_config=retrieval_config, llm=llm, vector_store=self.vector_db
-        )
+        async def collect_streamed_response():
+            full_answer = ""
+            async for response in self.ask_streaming(
+                question=question,
+                retrieval_config=retrieval_config,
+                rag_pipeline=rag_pipeline,
+                list_files=list_files,
+                chat_history=chat_history
+            ):
+                full_answer += response.answer
+            return full_answer
+
+        # Run the async function in the event loop
+        loop = asyncio.get_event_loop()
+        full_answer = loop.run_until_complete(collect_streamed_response())

         chat_history = self.default_chat if chat_history is None else chat_history
         list_files = [] if list_files is None else list_files

-        parsed_response = rag_instance.answer(
-            question=question, history=chat_history, list_files=list_files
-        )
-
         chat_history.append(HumanMessage(content=question))
-        chat_history.append(AIMessage(content=parsed_response.answer))
+        chat_history.append(AIMessage(content=full_answer))

-        # Save answer to the chat history
-        return parsed_response
+        # Return the final response
+        return ParsedRAGResponse(answer=full_answer)

async def ask_streaming(
self,
@@ -635,3 +629,4 @@ async def ask_streaming(
         chat_history.append(HumanMessage(content=question))
         chat_history.append(AIMessage(content=full_answer))
         yield response
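The core of this commit is that the synchronous `ask` now delegates to `ask_streaming` and drains the stream. The general pattern, running an async generator to completion from synchronous code, can be sketched like this (the `Chunk` type and `ask_streaming` body here are simplified stand-ins for quivr-core's response objects, and `asyncio.run` is used where the commit uses `loop.run_until_complete`):

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Chunk:
    answer: str


async def ask_streaming(question: str):
    # Stand-in for Brain.ask_streaming: yields partial answers as they arrive.
    for part in ("Gold ", "is ", "a metal."):
        yield Chunk(answer=part)


def ask(question: str) -> str:
    # Synchronous wrapper: drain the async generator and return the full text.
    async def collect() -> str:
        full_answer = ""
        async for chunk in ask_streaming(question):
            full_answer += chunk.answer
        return full_answer

    return asyncio.run(collect())


print(ask("what is gold?"))  # prints: Gold is a metal.
```

Routing the non-streaming path through the streaming one keeps a single code path for answer generation, which is why the diff deletes the whole `rag_instance.answer(...)` branch.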

4 changes: 2 additions & 2 deletions core/quivr_core/config.py
@@ -233,7 +233,7 @@ class LLMEndpointConfig(QuivrBaseConfig):
     Attributes:
         supplier (DefaultModelSuppliers): The LLM provider (default: OPENAI).
-        model (str): The specific model to use (default: "gpt-3.5-turbo-0125").
+        model (str): The specific model to use (default: "gpt-4o").
         context_length (int | None): The maximum context length for the model.
         tokenizer_hub (str | None): The tokenizer to use for this model.
         llm_base_url (str | None): Base URL for the LLM API.
@@ -247,7 +247,7 @@ class LLMEndpointConfig(QuivrBaseConfig):
"""

     supplier: DefaultModelSuppliers = DefaultModelSuppliers.OPENAI
-    model: str = "gpt-3.5-turbo-0125"
+    model: str = "gpt-4o"
     context_length: int | None = None
     tokenizer_hub: str | None = None
     llm_base_url: str | None = None
2 changes: 1 addition & 1 deletion core/quivr_core/utils.py
@@ -26,7 +26,7 @@ def model_supports_function_calling(model_name: str):
         "gpt-4",
         "gpt-4-1106-preview",
         "gpt-4-0613",
-        "gpt-3.5-turbo-0125",
+        "gpt-4o",
         "gpt-3.5-turbo-1106",
         "gpt-3.5-turbo-0613",
         "gpt-4-0125-preview",
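The function being edited here is a simple allow-list membership check; swapping `gpt-3.5-turbo-0125` for `gpt-4o` changes which models are reported as supporting function calling. A sketch of the pattern (the list is abridged; the real one in `quivr_core.utils` contains more entries):

```python
# Abridged allow-list matching the entries visible in this diff.
FUNCTION_CALLING_MODELS = {
    "gpt-4",
    "gpt-4-1106-preview",
    "gpt-4-0613",
    "gpt-4o",
    "gpt-3.5-turbo-1106",
    "gpt-3.5-turbo-0613",
    "gpt-4-0125-preview",
}


def model_supports_function_calling(model_name: str) -> bool:
    # Plain membership test: the model either is or is not on the allow-list.
    return model_name in FUNCTION_CALLING_MODELS


print(model_supports_function_calling("gpt-4o"))             # prints: True
print(model_supports_function_calling("gpt-3.5-turbo-0125"))  # prints: False
```

Note the side effect of this commit: `gpt-3.5-turbo-0125` is no longer on the list, so it is now reported as not supporting function calling.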
2 changes: 1 addition & 1 deletion core/tests/rag_config.yaml
@@ -27,7 +27,7 @@ retrieval_config:
     supplier: "openai"

     # The model to use for the LLM for the given supplier
-    model: "gpt-3.5-turbo-0125"
+    model: "gpt-4o"

     max_input_tokens: 2000

2 changes: 1 addition & 1 deletion core/tests/rag_config_workflow.yaml
@@ -40,7 +40,7 @@ retrieval_config:
     supplier: "openai"

     # The model to use for the LLM for the given supplier
-    model: "gpt-3.5-turbo-0125"
+    model: "gpt-4o"

     max_input_tokens: 2000

2 changes: 1 addition & 1 deletion core/tests/test_config.py
@@ -5,7 +5,7 @@ def test_default_llm_config():
     config = LLMEndpointConfig()

     assert config.model_dump(exclude={"llm_api_key"}) == LLMEndpointConfig(
-        model="gpt-3.5-turbo-0125",
+        model="gpt-4o",
         llm_base_url=None,
         llm_api_key=None,
         max_input_tokens=2000,
79 changes: 53 additions & 26 deletions docs/docs/index.md
@@ -1,41 +1,68 @@
 # Welcome to Quivr Documentation

-Welcome to the documentation of Quivr! This is the place where you'll find help, guidance and support for collaborative software development. Whether you're involved in an open-source community or a large software team, these resources should get you up and running quickly!
+Quivr, helps you build your second brain, utilizes the power of GenerativeAI to be your personal assistant !

-[Quivr](https://quivr.app) is your **Second Brain** that can act as your **personal assistant**. Quivr is a platform that enables the creation of AI assistants, referred to as "Brain". These assistants are designed with specialized capabilities. Some can connect to specific data sources, allowing users to interact directly with the data. Others serve as specialized tools for particular use cases, powered by Rag technology. These tools process specific inputs to generate practical outputs, such as summaries, translations, and more.
-
-## Quick Links
+## Key Features 🎯

-- [Video Installation](https://dub.sh/quivr-demo)
+- **Opiniated RAG**: We created a RAG that is opinionated, fast and efficient so you can focus on your product
+- **LLMs**: Quivr works with any LLM, you can use it with OpenAI, Anthropic, Mistral, Gemma, etc.
+- **Any File**: Quivr works with any file, you can use it with PDF, TXT, Markdown, etc and even add your own parsers.
+- **Customize your RAG**: Quivr allows you to customize your RAG, add internet search, add tools, etc.
+- **Integrations with Megaparse**: Quivr works with [Megaparse](https://github.com/quivrhq/megaparse), so you can ingest your files with Megaparse and use the RAG with Quivr.

-!!! note
-    **Our goal** is to make Quivr the **best personal assistant** that is powered by your knowledge and your applications 🔥
+>We take care of the RAG so you can focus on your product. Simply install quivr-core and add it to your project. You can now ingest your files and ask questions.*
+**We will be improving the RAG and adding more features everything, stay tuned!**

-## What does it do?
-
-<div style="text-align: center;">
-  <video width="640" height="480" controls>
-    <source src="https://quivr-cms.s3.eu-west-3.amazonaws.com/singlestore_demo_quivr_232893659c.mp4" type="video/mp4">
-    Your browser does not support the video tag.
-  </video>
-</div>
+This is the core of Quivr, the brain of Quivr.com.

-## How to get started? 👀
-
-!!! tip
-    It takes less than **5 seconds** to get started with Quivr. You can even use your Google account to sign up.
+<!-- ## Demo Highlight 🎥
+https://github.com/quivrhq/quivr/assets/19614572/a6463b73-76c7-4bc0-978d-70562dca71f5 -->

-1. **Create an account**: Go to [Quivr](https://quivr.app) and create an account.
-2. **Create a Brain**: Let us guide you to create your first brain!
-3. **Feed this Brain**: Add documentation and/or URLs to feed your brain.
-4. **Ask Questions to your Brain**: Ask your Brain questions about the knowledge that you provide.
+## Getting Started 🚀

-## Empowering Innovation with Foundation Models & Generative AI
+You can find everything on the [documentation](https://core.quivr.app/).

-As a Leader in AI, Quivr leverages Foundation Models and Generative AI to empower businesses to achieve gains through Innovation.
+### Prerequisites 📋

-- 50k+ users
-- 6k+ companies
-- 35k+ github stars
-- Top 100 open-source
+Ensure you have the following installed:
+
+- Python 3.10 or newer
+
+### 30 seconds Installation 💽
+
+- **Step 1**: Install the package
+
+```bash
+pip install quivr-core # Check that the installation worked
+```
+
+- **Step 2**: Create a RAG with 5 lines of code
+
+```python
+import tempfile
+
+from quivr_core import Brain
+
+if __name__ == "__main__":
+    with tempfile.NamedTemporaryFile(mode="w", suffix=".txt") as temp_file:
+        temp_file.write("Gold is a liquid of blue-like colour.")
+        temp_file.flush()
+
+        brain = Brain.from_files(
+            name="test_brain",
+            file_paths=[temp_file.name],
+        )
+
+        answer = brain.ask(
+            "what is gold? asnwer in french"
+        )
+        print("answer:", answer)
+```
5 changes: 0 additions & 5 deletions docs/docs/installation.md

This file was deleted.

4 changes: 2 additions & 2 deletions docs/docs/quickstart.md
@@ -70,13 +70,13 @@ brain = Brain.from_files(name = "my smart brain",

Note : [Embeddings](https://python.langchain.com/docs/integrations/text_embedding/) is a langchain class that lets you chose from a large variety of embedding models. Please check out the following docs to know the panel of models you can try.

-## Launch with Streamlit
+## Launch with Chainlit

 If you want to quickly launch an interface with streamlit, you can simply do at the root of the project :

 ```bash
 cd examples/chatbot
 rye sync
-rye run streamlit run examples/chatbot/main.py
+rye run chainlit run chainlit.py
 ```
For more detail, go in [examples/chatbot/chainlit.md](https://github.com/QuivrHQ/quivr/tree/main/examples/chatbot)

2 changes: 1 addition & 1 deletion docs/mkdocs.yml
@@ -45,6 +45,7 @@ theme:
         name: Switch to system preference

 plugins:
+  - search
   - mkdocstrings:
       default_handler: python
       handlers:
@@ -58,7 +59,6 @@ plugins:
 nav:
   - Home:
     - index.md
-    - installation.md
     - quickstart.md
   - Brain:
     - brain/index.md