Addressed location of docs comment
gianniacquisto committed Nov 26, 2023
1 parent 630813a commit d4235f7
Showing 2 changed files with 15 additions and 15 deletions.
15 changes: 15 additions & 0 deletions fern/docs/pages/installation/installation.mdx
@@ -89,6 +89,21 @@ Currently, not all the parameters of `llama.cpp` and `llama-cpp-python` are available
In case you need to customize parameters such as the number of layers loaded into the GPU, you might change
these at the `llm_component.py` file under the `private_gpt/components/llm/llm_component.py`.
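
As an illustration, here is a minimal, hypothetical sketch of what such a change could look like, assuming the component builds the model through llama-index's `LlamaCPP` wrapper (the import path and constructor arguments may differ across versions, and the model path and values below are placeholders, not the project's actual defaults):

```python
# Hypothetical excerpt in the spirit of llm_component.py: extra llama.cpp
# options (such as the number of GPU-offloaded layers) are passed via
# model_kwargs. All values below are placeholders.
from llama_index.llms import LlamaCPP

llm = LlamaCPP(
    model_path="models/your-model.gguf",  # placeholder path to a local GGUF model
    max_new_tokens=256,                   # mirrors the `max_new_tokens` setting described below
    context_window=3900,                  # prompt + generated tokens must fit in this window
    model_kwargs={"n_gpu_layers": -1},    # llama.cpp option: offload all layers to the GPU
    verbose=True,
)
```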

##### Available LLM config options

The `llm` section of the settings allows for the following configurations:

- `mode`: how to run your LLM
- `max_new_tokens`: the maximum number of new tokens the LLM will generate and add to the context window (Llama.cpp defaults to `256`)

Example:

```yaml
llm:
mode: local
max_new_tokens: 256
```
If you get an out-of-memory error, you might also try a smaller model or stick to the recommended models instead of tuning the parameters yourself.
15 changes: 0 additions & 15 deletions fern/docs/pages/manual/settings.mdx
@@ -77,19 +77,4 @@ Missing variables with no default will produce an error.
```yaml
server:
port: ${PORT:8001}
```
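For illustration, a variable referenced without a default (the no-default form is assumed here from the description above) would fail to load whenever the environment variable is not set:

```yaml
server:
  # Assumed no-default form: produces an error at startup if PORT is unset.
  port: ${PORT}
```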
## LLM config options
The `llm` section of the settings allows for the following configurations:

- `mode`: how to run your LLM
- `max_new_tokens`: the maximum number of new tokens the LLM will generate and add to the context window (Llama.cpp defaults to `256`)

Example:

```yaml
llm:
mode: local
max_new_tokens: 256
```
