
add: add documentation to self-hosting ollama instance
namwoam committed Aug 1, 2024
1 parent 77e5ee1 commit 1acca91
Showing 3 changed files with 26 additions and 1 deletion.
13 changes: 13 additions & 0 deletions ai/ollama/v0/.compogen/setup-hosting.mdx
@@ -0,0 +1,13 @@
#### Local Ollama Instance

To set up an Ollama instance on your local machine, follow the instructions below:

> Note: These instructions only apply to Instill Core CE.
1. Follow the tutorial on the official [GitHub repository](https://github.com/ollama/ollama) to install Ollama on your machine.
2. Follow the instructions in the [FAQ section](https://github.com/ollama/ollama/blob/main/docs/faq.md) to set the environment variable `OLLAMA_HOST` to `0.0.0.0` so that Ollama listens on all network interfaces, then restart Ollama.
3. Get the IP address of your machine on the local network.
- On Linux and macOS, open the terminal and type `ifconfig`.
- On Windows, open the command prompt and type `ipconfig`.
4. If the IP address is, for example, `192.168.178.88`, the Ollama endpoint is `192.168.178.88:11434` (Ollama listens on port `11434` by default); a quick reachability check is sketched after this list.
5. Use this endpoint in the component configuration to enjoy fast LLM inference on your local machine through VDP.
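
The following Go sketch checks that the endpoint is reachable before it is wired into a pipeline. The address `192.168.178.88:11434` is the example value from step 4 and should be replaced with your own; a running Ollama server answers a plain `GET /` with `Ollama is running`.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Example address from step 4; replace with your machine's IP on the local network.
	endpoint := "http://192.168.178.88:11434"

	client := &http.Client{Timeout: 5 * time.Second}

	// A running Ollama server responds to GET / with "Ollama is running".
	resp, err := client.Get(endpoint)
	if err != nil {
		fmt.Println("Ollama endpoint not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status: %s, body: %s\n", resp.Status, string(body))
}
```
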
12 changes: 12 additions & 0 deletions ai/ollama/v0/README.mdx
@@ -64,7 +64,19 @@ Provide text outputs in response to text/image inputs.
| Text | `text` | string | Model Output |


#### Local Ollama Instance

To set up an Ollama instance on your local machine, follow the instructions below:

> Note: These instructions only apply to Instill Core CE.
1. Follow the tutorial on the official [GitHub repository](https://github.com/ollama/ollama) to install Ollama on your machine.
2. Follow the instructions in the [FAQ section](https://github.com/ollama/ollama/blob/main/docs/faq.md) to set the environment variable `OLLAMA_HOST` to `0.0.0.0` so that Ollama listens on all network interfaces, then restart Ollama.
3. Get the IP address of your machine on the local network.
- On Linux and macOS, open the terminal and type `ifconfig`.
- On Windows, open the command prompt and type `ipconfig`.
4. If the IP address is, for example, `192.168.178.88`, the Ollama endpoint is `192.168.178.88:11434` (Ollama listens on port `11434` by default).
5. Use this endpoint in the component configuration to enjoy fast LLM inference on your local machine through VDP; a quick end-to-end check against the endpoint is sketched below.
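
Once the endpoint is reachable, a quick chat request confirms that inference works end to end. The sketch below is a minimal example, assuming the example address from step 4 and that a model such as `llama3` has already been pulled locally with `ollama pull`; it calls Ollama's `/api/chat` REST endpoint directly rather than going through the component.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Example address from step 4 and an assumed model name; adjust both for your setup.
	endpoint := "http://192.168.178.88:11434/api/chat"

	// Non-streaming chat request against the Ollama REST API.
	reqBody, _ := json.Marshal(map[string]any{
		"model":  "llama3",
		"stream": false,
		"messages": []map[string]string{
			{"role": "user", "content": "Say hello in one sentence."},
		},
	})

	resp, err := http.Post(endpoint, "application/json", bytes.NewReader(reqBody))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()

	// The non-streaming response carries the assistant reply under "message.content".
	var out struct {
		Message struct {
			Content string `json:"content"`
		} `json:"message"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(out.Message.Content)
}
```
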



2 changes: 1 addition & 1 deletion ai/ollama/v0/main.go
@@ -1,4 +1,4 @@
-//go:generate compogen readme ./config ./README.mdx
+//go:generate compogen readme ./config ./README.mdx --extraContents TASK_TEXT_GENERATION_CHAT=.compogen/setup-hosting.mdx
package ollama

import (
