From 9ab94a3e8f34da9e7acc23d6fd937fc512026056 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Sebasti=C3=A1n=20Est=C3=A9vez?=
Date: Thu, 18 Jul 2024 00:06:17 -0400
Subject: [PATCH] Update README.md

---
 README.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/README.md b/README.md
index b3d9c00..e8b4e40 100644
--- a/README.md
+++ b/README.md
@@ -178,9 +178,7 @@ you need to pull the model you want to ollama before using it
 
 curl http://localhost:11434/api/pull -d '{ "name": "deepseek-coder-v2" }'
 
-your assistants client should route to the ollama container by passing the llm-param-base-url header:
-
-    client = patch(OpenAI(default_headers={"LLM-PARAM-base-url": "http://ollama:11434"}))
+your assistants client should route to the ollama container by setting OLLAMA_API_BASE_URL. Set it to http://ollama:11434 if you are using docker-compose, or to http://localhost:11434 if ollama is running on your localhost.
 
 ## Feedback / Help
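The environment-variable approach this patch introduces might look like the following in practice. This is a sketch, not part of the patch itself: the `OLLAMA_API_BASE_URL` variable name and the `/api/pull` endpoint come from the diff above, while the assumption that the assistants service reads the variable from its environment follows from the patched README text.

```shell
# Inside docker-compose, the ollama service name resolves to the container:
export OLLAMA_API_BASE_URL="http://ollama:11434"

# If ollama runs directly on your host instead, use:
# export OLLAMA_API_BASE_URL="http://localhost:11434"

# Pull the model before first use (endpoint from the README):
# curl "$OLLAMA_API_BASE_URL/api/pull" -d '{ "name": "deepseek-coder-v2" }'

echo "$OLLAMA_API_BASE_URL"
```

Putting the URL in the environment (rather than a per-request header, as in the removed `LLM-PARAM-base-url` approach) means the client code needs no ollama-specific configuration.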