
functions support with local LLMs #672

Open
beyondszine opened this issue Feb 17, 2025 · 1 comment

@beyondszine

Hi there,
First of all: awesome project, I really like it :)

It may be my lack of knowledge, but I cannot figure out how to get my functions running with local LLMs. I am running both Ollama and LM Studio and have tried to get it working via either path.

With LM Studio and Mistral models, I get an error about the 'system' role not being supported.
With Ollama and mistral:7b-instruct, I don't get an error per se, but the function is not executed either.

So I'd like to ask where I can learn more about this, and whether function execution even works with local LLMs.

Thanks,
saurabh

@robscurity

robscurity commented Feb 19, 2025

Maybe this will help:

Setting up sgpt on macOS to run with Ollama locally

pip install "shell-gpt[litellm]"

Create the config file at ~/.config/shell_gpt/.sgptrc:

vi ~/.config/shell_gpt/.sgptrc
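
If the directory does not exist yet, create it first:

mkdir -p ~/.config/shell_gpt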

Add the following:
CHAT_CACHE_PATH=/tmp/chat_cache
CACHE_PATH=/tmp/cache
CHAT_CACHE_LENGTH=100
CACHE_LENGTH=100
REQUEST_TIMEOUT=60
DEFAULT_MODEL=ollama/mistral:7b-instruct
DEFAULT_COLOR=magenta
ROLE_STORAGE_PATH=/.config/shell_gpt/roles
DEFAULT_EXECUTE_SHELL_CMD=false
DISABLE_STREAMING=false
CODE_THEME=dracula
OPENAI_FUNCTIONS_PATH=/.config/shell_gpt/functions
OPENAI_USE_FUNCTIONS=false
SHOW_FUNCTIONS_OUTPUT=false
API_BASE_URL=http://127.0.0.1:11434
PRETTIFY_MARKDOWN=true
USE_LITELLM=true
SHELL_INTERACTION=true
OS_NAME=auto
SHELL_NAME=auto
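
To the original question about functions: sgpt only sends function definitions to the model when OPENAI_USE_FUNCTIONS=true (it is false in the config above), and it loads them from OPENAI_FUNCTIONS_PATH. A function is a plain Python file exposing a Function class. The sketch below follows the execute_shell_command example from the shell-gpt README; I haven't checked the exact class layout against every version, so treat it as a starting point rather than a guaranteed API:

# ~/.config/shell_gpt/functions/execute_shell_command.py
import subprocess

from instructor import OpenAISchema
from pydantic import Field


class Function(OpenAISchema):
    """
    Executes a shell command and returns its output.
    """

    shell_command: str = Field(..., description="Shell command to execute.")

    class Config:
        # The name the model sees when deciding whether to call this function.
        title = "execute_shell_command"

    @classmethod
    def execute(cls, shell_command: str) -> str:
        # Run the command and hand the result back to the model.
        result = subprocess.run(shell_command.split(), capture_output=True, text=True)
        return f"Exit code: {result.returncode}, Output:\n{result.stdout}"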

Download a model with Ollama, e.g. mistral:7b-instruct, which is the DEFAULT_MODEL (see the config file above):

ollama pull mistral:7b-instruct

List the downloaded Ollama models:

ollama list

Example of output:

NAME                        ID            SIZE    MODIFIED
mistral:7b-instruct         f974a74358d6  4.1 GB  About an hour ago
llama3.1:8b-instruct-q8_0   b158ded76fa0  8.5 GB  10 days ago
deepseek-r1:14b             ea35dfe18182  9.0 GB  4 weeks ago
llama3.2-vision:latest      085a1fdae525  7.9 GB  4 weeks ago
llama3.2:latest             a80c4f17acd5  2.0 GB  4 weeks ago
phi4:latest                 ac896e5b8b34  9.1 GB  4 weeks ago

Start the Ollama server:

ollama serve
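
To sanity-check that the server is reachable at the API_BASE_URL from the config, you can hit its root endpoint (Ollama replies with a short status string):

curl http://127.0.0.1:11434
# should print: Ollama is running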

Change the model name to one of the names above, e.g. phi4:latest, if you don't want to use DEFAULT_MODEL=ollama/mistral:7b-instruct:

sgpt --model ollama/phi4:latest "What is the fibonacci sequence"
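
For functions specifically, you would also need OPENAI_USE_FUNCTIONS=true in the config. One caveat: the function definitions are forwarded as OpenAI-style tool definitions via litellm, and a model that was not trained for tool calling (which includes many small local models) may simply never emit a function call. That would match the "no error, but nothing executes" behaviour described above. To test, use a prompt that clearly requires a function, for example:

sgpt --model ollama/mistral:7b-instruct "Execute the shell command: ls -la"

If the model does emit a tool call, sgpt should run the matching function, and with SHOW_FUNCTIONS_OUTPUT=true it will also print the function's output.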
