To build this Docker image you need to be in the root of the repository (i.e. the parent directory of this one).

The following command will build the image for the (full) Reginald model, which contains the model plus the Slack bot, and tag it as `reginald:latest`:

```
docker build . -t reginald:latest -f docker/reginald/Dockerfile
```
The following environment variables can be used by this image:

- `REGINALD_MODEL`: name of model to use
- `REGINALD_MODEL_NAME`: name of sub-model to use with the one requested if not using the `hello` model.
    - For `llama-index-llama-cpp` and `llama-index-hf` models, this specifies the LLM (or path to that model) which we would like to use
    - For `chat-completion-azure` and `llama-index-gpt-azure`, this refers to the deployment name on Azure
    - For `chat-completion-openai` and `llama-index-gpt-openai`, this refers to the model/engine name on OpenAI
- `LLAMA_INDEX_MODE`: mode to use ("query" or "chat") if using a `llama-index` model
- `LLAMA_INDEX_DATA_DIR`: data directory if using a `llama-index` model
- `LLAMA_INDEX_WHICH_INDEX`: index to use ("handbook", "wikis", "public", "reg" or "all_data") if using a `llama-index` model
- `LLAMA_INDEX_FORCE_NEW_INDEX`: whether to force a new index if using a `llama-index` model
- `LLAMA_INDEX_MAX_INPUT_SIZE`: max input size if using the `llama-index-llama-cpp` or `llama-index-hf` model
- `LLAMA_INDEX_IS_PATH`: whether to treat `REGINALD_MODEL_NAME` as a path if using the `llama-index-llama-cpp` model
- `LLAMA_INDEX_N_GPU_LAYERS`: number of GPU layers if using the `llama-index-llama-cpp` model
- `LLAMA_INDEX_DEVICE`: device to use if using the `llama-index-hf` model
- `OPENAI_API_KEY`: API key for OpenAI if using the `chat-completion-openai` or `llama-index-gpt-openai` models
- `OPENAI_AZURE_API_BASE`: API base for Azure OpenAI if using the `chat-completion-azure` or `llama-index-gpt-azure` models
- `OPENAI_AZURE_API_KEY`: API key for Azure OpenAI if using the `chat-completion-azure` or `llama-index-gpt-azure` models
- `SLACK_APP_TOKEN`: app token for Slack
- `SLACK_BOT_TOKEN`: bot token for Slack
To run, you can use the following command (to run the `hello` model):

```
docker run -e REGINALD_MODEL=hello -e SLACK_APP_TOKEN=<slack-app-token> -e SLACK_BOT_TOKEN=<slack-bot-token> reginald:latest
```
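By way of illustration, an analogous invocation for the `chat-completion-azure` model would combine the Azure-related variables from the list above. The angle-bracket values are placeholders, and this sketch assumes no further variables are needed for that model:

```
docker run \
  -e REGINALD_MODEL=chat-completion-azure \
  -e REGINALD_MODEL_NAME=<azure-deployment-name> \
  -e OPENAI_AZURE_API_BASE=<azure-api-base> \
  -e OPENAI_AZURE_API_KEY=<azure-api-key> \
  -e SLACK_APP_TOKEN=<slack-app-token> \
  -e SLACK_BOT_TOKEN=<slack-bot-token> \
  reginald:latest
```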
The following command will build the image for the Slack bot only and tag it as `reginald-slack-bot:latest`:

```
docker build . -t reginald-slack-bot:latest -f docker/slack_bot/Dockerfile
```
The following environment variables will be used by this image:

- `REGINALD_EMOJI`: emoji to use for bot
- `SLACK_APP_TOKEN`: app token for Slack
- `SLACK_BOT_TOKEN`: bot token for Slack
To run, you can use the following command:

```
docker run -e REGINALD_EMOJI=wave -e SLACK_APP_TOKEN=<slack-app-token> -e SLACK_BOT_TOKEN=<slack-bot-token> reginald-slack-bot:latest
```
Rather than passing in the environment variables on the command line using the `-e` flag to `docker run`, you can use an environment file:

```
docker run --env-file .env reginald:latest
```

where `.env` is a file containing the environment variables, e.g. for running the `llama-index-llama-cpp` model using the `handbook` index:
```
REGINALD_MODEL=llama-index-llama-cpp
REGINALD_MODEL_NAME=https://huggingface.co/TheBloke/Llama-2-7B-chat-GGUF/resolve/main/llama-2-7b-chat.Q4_K_M.gguf
LLAMA_INDEX_MODE=chat
LLAMA_INDEX_DATA_DIR=data
LLAMA_INDEX_WHICH_INDEX=handbook
LLAMA_INDEX_MAX_INPUT_SIZE=2048
SLACK_APP_TOKEN=<slack-app-token>
SLACK_BOT_TOKEN=<slack-bot-token>
```
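A container started with a missing token will only fail once it is already running, so it can be handy to check the variables up front. The sketch below is a hypothetical pre-flight check, not part of the repository; the variable names come from the lists above:

```shell
# Hypothetical pre-flight check (an illustration, not a script from this
# repo): verify that required variables are set before calling `docker run`.
check_vars() {
    for var in "$@"; do
        eval "value=\${$var:-}"   # indirect lookup, POSIX-compatible
        if [ -z "$value" ]; then
            echo "missing: $var"
            return 1
        fi
    done
    return 0
}

# Example: the hello model only needs the two Slack tokens.
export SLACK_APP_TOKEN="xapp-example" SLACK_BOT_TOKEN="xoxb-example"
if check_vars SLACK_APP_TOKEN SLACK_BOT_TOKEN; then
    echo "ok: ready to run"
fi
```

The same function works for the longer variable lists of the `llama-index` or Azure models; just pass the extra names as additional arguments.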