feat(chatbot): add basic RAG workflow configuration
- Added a new file `basic_rag_workflow.yaml` to the `examples/chatbot` directory.
- Configured a standard RAG workflow with multiple nodes and edges.
- Set the maximum number of previous conversation iterations to include in the answer context.
- Added configuration for the LLM (Large Language Model) with maximum output tokens and temperature.
1 parent 8801237 · commit 0b147d8 · Showing 2 changed files with 62 additions and 5 deletions.
examples/chatbot/basic_rag_workflow.yaml (new file, +41 lines)
workflow_config:
  name: "standard RAG"
  nodes:
    - name: "START"
      edges: ["filter_history"]

    - name: "filter_history"
      edges: ["rewrite"]

    - name: "rewrite"
      edges: ["retrieve"]

    - name: "retrieve"
      edges: ["generate_rag"]

    - name: "generate_rag" # the name of the last node, from which we want to stream the answer to the user
      edges: ["END"]

# Maximum number of previous conversation iterations
# to include in the context of the answer
max_history: 10

# Reranker configuration
# reranker_config:
#   # The reranker supplier to use
#   supplier: "cohere"
#
#   # The model to use for the reranker for the given supplier
#   model: "rerank-multilingual-v3.0"
#
#   # Number of chunks returned by the reranker
#   top_n: 5

# Configuration for the LLM
llm_config:

  # maximum number of tokens passed to the LLM to generate the answer
  max_output_tokens: 4000

  # temperature for the LLM
  temperature: 0.7
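The node/edge layout above defines a linear pipeline from `START` to `END`. As a minimal sketch of how an engine might resolve that layout into an execution order, the Python below mirrors the YAML as a dict and follows each node's first edge; the `walk_workflow` helper is illustrative only and not part of the repository's API.

```python
# Illustrative sketch: the dict mirrors the workflow_config YAML above,
# and walk_workflow is a hypothetical helper, not the repository's engine.

workflow_config = {
    "name": "standard RAG",
    "nodes": [
        {"name": "START", "edges": ["filter_history"]},
        {"name": "filter_history", "edges": ["rewrite"]},
        {"name": "rewrite", "edges": ["retrieve"]},
        {"name": "retrieve", "edges": ["generate_rag"]},
        {"name": "generate_rag", "edges": ["END"]},
    ],
}

def walk_workflow(config):
    """Follow each node's first edge from START until END is reached."""
    edges = {node["name"]: node["edges"] for node in config["nodes"]}
    order, current = [], "START"
    while current != "END":
        order.append(current)
        current = edges[current][0]  # linear workflow: one outgoing edge per node
    order.append("END")
    return order

print(walk_workflow(workflow_config))
```

For this configuration the traversal yields `START → filter_history → rewrite → retrieve → generate_rag → END`, matching the commented note that `generate_rag` is the last node before the answer is streamed to the user.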