Commit 247f100: ADD TGI docs

philschmid authored Sep 2, 2024 · 1 parent ad4a8b8

Showing 1 changed file with 16 additions and 2 deletions: README.md
@@ -48,15 +48,29 @@ For detailed installation instructions and requirements, see the [Installation G

### Quick Start

#### 1a. Start an OpenAI Compatible Server (vLLM)

GuideLLM requires an OpenAI-compatible server to run evaluations. [vLLM](https://github.com/vllm-project/vllm) is recommended for this purpose. To start a vLLM server with a Llama 3.1 8B quantized model, run the following command:

```bash
vllm serve "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16"
```

For more information on starting a vLLM server, see the [vLLM Documentation](https://docs.vllm.ai/en/latest/serving/openai_compatible_server.html).

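Once the server is running, you can sanity-check the endpoint before pointing GuideLLM at it. A minimal sketch, assuming vLLM's default port of 8000 and the model name used above:

```shell
# Send one chat completion request to the OpenAI-compatible endpoint
# (vLLM listens on port 8000 by default).
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w4a16",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 32
  }'
```

A JSON response with a `choices` array indicates the server is ready for an evaluation run.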
#### 1b. Start an OpenAI Compatible Server (Hugging Face TGI)

GuideLLM requires an OpenAI-compatible server to run evaluations. [Text Generation Inference](https://github.com/huggingface/text-generation-inference) (TGI) can be used for this purpose as well. To start a TGI server serving a Llama 3.1 8B model with Docker, run the following command:

```bash
docker run --gpus 1 -ti --shm-size 1g --ipc=host --rm -p 8080:80 \
  -e MODEL_ID=https://huggingface.co/llhf/Meta-Llama-3.1-8B-Instruct \
  -e NUM_SHARD=1 \
  -e MAX_INPUT_TOKENS=4096 \
  -e MAX_TOTAL_TOKENS=6000 \
  -e HF_TOKEN=$(cat ~/.cache/huggingface/token) \
  ghcr.io/huggingface/text-generation-inference:2.2.0
```

For more information on starting a TGI server, see the [TGI Documentation](https://huggingface.co/docs/text-generation-inference/index).
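As with vLLM, you can verify the TGI endpoint before running an evaluation. A minimal sketch, assuming the port mapping above (container port 80 published on 8080) and TGI's OpenAI-compatible Messages API; the `"tgi"` model name is the placeholder TGI accepts for the single loaded model:

```shell
# Send one chat completion request to TGI's OpenAI-compatible endpoint
# (mapped to localhost:8080 by the docker run command above).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "tgi",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 32
  }'
```

If this returns a chat completion, GuideLLM can target `http://localhost:8080` for its evaluation run.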

#### 2. Run a GuideLLM Evaluation

