Commit fff5f1e: Update index.md
tjbck committed Apr 21, 2024 (1 parent: c0494a3)
Showing 1 changed file: docs/getting-started/index.md (89 additions, 71 deletions)

@@ -25,9 +25,11 @@ title: "🚀 Getting Started"
- Follow the installation instructions provided on the website. After installation, open Docker Desktop to ensure it's running properly.

#### For Ubuntu Users:

1. **Open your terminal.**

2. **Set up Docker's apt repository:**

- Update your package index:
```bash
sudo apt-get update
```

@@ -55,6 +57,7 @@

Note: If you're using an Ubuntu derivative distro, such as Linux Mint, you might need to use `UBUNTU_CODENAME` instead of `VERSION_CODENAME`.
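
For reference, the repository entry this step produces typically looks like the following; on a derivative distro such as Linux Mint, swap `VERSION_CODENAME` for `UBUNTU_CODENAME`:

```bash
# Sketch based on Docker's upstream instructions; adjust the keyring path
# to match the one used when you added Docker's GPG key.
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
```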

3. **Install Docker Engine:**

- Update your package index again:
```bash
sudo apt-get update
```

@@ -85,6 +88,91 @@

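For reference, Docker's upstream instructions then install the engine packages with a command along these lines (the exact plugin list may change over time):

```bash
# Package set from Docker's official install docs at the time of writing.
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
```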

</details>
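
Whichever installation route you take, a quick way to confirm that Docker is installed and can run containers:

```bash
docker --version        # prints the installed Docker version
docker run hello-world  # pulls a tiny test image and prints a confirmation message
                        # (prefix with sudo on Linux if your user isn't in the docker group)
```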

## Quick Start with Docker 🐳

:::info
When using Docker to install Open WebUI, make sure to include the `-v open-webui:/app/backend/data` volume mount in your Docker command. This step is crucial, as it ensures your database is persisted and prevents any loss of data when the container is recreated.
:::
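
If you want to double-check that the volume exists or see where Docker keeps it on the host, the standard volume commands work (this assumes the default volume name `open-webui` from the commands below):

```bash
docker volume ls                  # the open-webui volume appears after the first run
docker volume inspect open-webui  # shows the host mountpoint that holds your data
```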

- **If Ollama is on your computer**, use this command:

```bash
docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

- **If Ollama is on a different server**, use this command:

To connect to Ollama on another server, change the `OLLAMA_BASE_URL` to the server's URL:
```bash
docker run -d -p 3000:8080 -e OLLAMA_BASE_URL=https://example.com -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```

### Installing Both Open WebUI and Ollama Together

- **With GPU support**, use this command:

```bash
docker run -d -p 3000:8080 --gpus=all -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```

- **For CPU only**, use this command:

```bash
docker run -d -p 3000:8080 -v ollama:/root/.ollama -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:ollama
```

- After installation, you can access Open WebUI at [http://localhost:3000](http://localhost:3000). Enjoy! 😄
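
If the page doesn't come up immediately, the container may still be starting; the logs usually make this obvious. With the bundled `:ollama` image you can also pull a model straight away (this assumes the container name `open-webui` used above and that the `ollama` CLI is available inside that image; the model name is only an example):

```bash
docker logs -f open-webui                      # follow startup output (Ctrl+C to stop following)
docker exec -it open-webui ollama pull llama3  # example model; any model from the Ollama library works
```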

### GPU Support

#### Nvidia CUDA

To run Ollama with Nvidia GPU support, use the NVIDIA Container Toolkit (the successor to the nvidia-docker tool) for GPU access and include the `--gpus all` flag in the command:
```bash
docker run -d -p 3000:8080 \
--gpus all \
--add-host=host.docker.internal:host-gateway \
--volume open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
```
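
If the GPU doesn't seem to be picked up, a common sanity check is to run `nvidia-smi` in a throwaway container; this assumes the NVIDIA Container Toolkit is installed and configured on the host:

```bash
docker run --rm --gpus all ubuntu nvidia-smi  # should print the same GPU table you see on the host
```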

#### AMD ROCm

To run Ollama with AMD GPU support, set the `HSA_OVERRIDE_GFX_VERSION` environment variable and ensure the Docker container can access the GPU:

```bash
docker run -d -p 3000:8080 \
-e HSA_OVERRIDE_GFX_VERSION=11.0.0 \
--device /dev/kfd \
--device /dev/dri \
--group-add video \
--add-host=host.docker.internal:host-gateway \
--volume open-webui:/app/backend/data \
--name open-webui \
--restart always \
ghcr.io/open-webui/open-webui:main
```

Replace `HSA_OVERRIDE_GFX_VERSION=11.0.0` with the value appropriate for your AMD GPU model, as described in the AMD GPU guidance elsewhere in this guide. Using the correct override ensures compatibility with AMD GPUs.
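
If you're unsure which value your card needs, the ROCm tools on the host can report the GPU's gfx target, which maps onto the override version (this assumes `rocminfo` is installed on the host):

```bash
rocminfo | grep -i gfx  # e.g. gfx1030 corresponds to HSA_OVERRIDE_GFX_VERSION=10.3.0
```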

#### Open WebUI: Server Connection Error

Encountering connection issues between the Open WebUI Docker container and the Ollama server? This problem often arises because distro-packaged versions of Docker—like those from the Ubuntu repository—do not support the `host.docker.internal` alias for reaching the host directly. Inside a container, referring to `localhost` or `127.0.0.1` typically points back to the container itself, not the host machine.

To address this, we recommend using the `--network=host` flag in your Docker command. This flag allows the container to use the host's networking stack, effectively making `localhost` or `127.0.0.1` in the container refer to the host machine. As a result, the WebUI can successfully connect to the Ollama server at `127.0.0.1:11434`. Please note, with `--network=host`, the container's port configuration aligns directly with the host, changing the access link to `http://localhost:8080`.

**Here's how you can modify your Docker command**:

```bash
docker run -d --network=host -v open-webui:/app/backend/data -e OLLAMA_BASE_URL=http://127.0.0.1:11434 --name open-webui --restart always ghcr.io/open-webui/open-webui:main
```
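
It's also worth confirming that Ollama itself is reachable from the host before digging further; Ollama listens on port 11434 by default and answers with a short status message on its root endpoint:

```bash
curl http://127.0.0.1:11434  # should print: Ollama is running
```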

For more details on networking in Docker and addressing common connectivity issues, visit our [FAQ page](/faq/). This page provides additional context and solutions for frequently encountered problems, ensuring a smoother operation of Open WebUI in various environments.

## One-line Command to Install Ollama and Open WebUI Together

#### Using Docker Compose
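
As a rough sketch, assuming you have cloned the Open WebUI repository (which ships its own compose files; treat those and the README as the source of truth), the typical invocation is:

```bash
git clone https://github.com/open-webui/open-webui.git
cd open-webui
docker compose up -d --build  # add the GPU/AMD override files shown below when needed
```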

@@ -119,6 +207,7 @@

Be sure to replace `<version>` with the version number appropriate for your GPU model, following the guidelines above. For a detailed list of compatible versions and more in-depth instructions, refer to the [ROCm documentation](https://rocm.docs.amd.com) and the [openSUSE Wiki on AMD GPGPU](https://en.opensuse.org/SDB:AMD_GPGPU).

Example command for RDNA1 & RDNA2 GPUs:

```bash
HSA_OVERRIDE_GFX_VERSION=10.3.0 docker compose -f docker-compose.yaml -f docker-compose.amdgpu.yaml up -d --build
```

@@ -157,77 +246,6 @@

```bash
./run-compose.sh --enable-gpu --build
```

## Installing with Podman
<details>
