🛠️ Are Copilots Local Yet?

Current trends and state of the art for using open & local LLM models as copilots to complete code, generate projects, act as shell assistants, automatically fix bugs, and more.

📝 Help keep this list relevant and up-to-date by making edits!


📋 Summary

Local copilots are now fully functional, though their output quality is still not on par with cloud-based services like GitHub Copilot.

This document is a curated list of local copilots, shell assistants, and related projects. It is intended as a resource for those surveying the existing tools, and to help developers discover the state of the art in this space.

📚 Background

In 2021, GitHub released Copilot, which quickly became popular among developers. Since then, amid the flurry of AI developments around LLMs, local models that can run on consumer machines have become available, and it has seemed only a matter of time until copilots go local.

Many perceived limitations of GitHub's Copilot are related to its closed and cloud-hosted nature.

As an alternative, local Copilots enable:

  • 🌐 Offline & private use
  • ⚡ Improved responsiveness
  • 📚 Better project/context awareness
  • 🎯 The ability to run models specialized for a particular language/task
  • 🔒 Constraining the LLM output to fit a particular format/syntax (see the sketch below)
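
To illustrate the last point: constrained decoding masks the model's next-token choices so that only outputs matching a target format survive. A toy sketch of the idea in Python (real tools use grammar constraints such as llama.cpp's GBNF files or JSON schemas; the vocabulary and allowed outputs here are illustrative):

```python
# Toy constrained decoding: keep only tokens that extend a prefix of some
# allowed output string. Real implementations apply this as a logit mask.
def allowed_next_tokens(generated: str, choices: list[str], vocab: list[str]) -> set[str]:
    allowed = set()
    for token in vocab:
        candidate = generated + token
        # A token is valid if the result is still a prefix of a valid output.
        if any(choice.startswith(candidate) for choice in choices):
            allowed.add(token)
    return allowed

vocab = ["y", "yes", "n", "no", "maybe", "es", "o"]
print(allowed_next_tokens("", ["yes", "no"], vocab))   # {'y', 'yes', 'n', 'no'}
print(allowed_next_tokens("y", ["yes", "no"], vocab))  # {'es'}
```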

🧩 Editor Extensions

Editor extensions that use LLMs to complete code:

| Name | Editor | Stars | Released | Notes |
|------|--------|-------|----------|-------|
| GitHub Copilot | VSCode, vim | 9125 | 2021-6-29 | The GitHub original; not local or open-source |
| Cursor | VSCode | 27112 | 2023-3-14 | Fork of VSCode; not open-source |
| Fauxpilot | VSCode | 14645 | 2022-9-3 | Early local PoC. Stale? |
| Tabby | VSCode, vim, IntelliJ | 29074 | 2023-9-30 | Completes the cursor selection |
| turbopilot | VSCode | 3818 | 2023-4-10 | Completions with FIM support, inspired by fauxpilot |
| HuggingFace-vscode | VSCode | 1255 | 2023-6-19 | Fork of Tabnine; supports Starcoder |
| localpilot | VSCode | 3369 | 2023-10-2 | Utility for easily hosting models locally, for use with the official Copilot extension via a custom API endpoint |
| StarcoderEx | VSCode | 101 | 2023-5-5 | Completes the cursor selection |
| WizardCoder-VSC | VSCode | 145 | 2023-6-19 | PoC; article available |
| KoboldAIConnect | VSCode | | 2023-10-7 | Copilot clone using a local KoboldAI backend |
| gen.nvim | vim | 1323 | 2023-10-1 | Edit selection using custom prompts |
| uniteai | VSCode, emacs, LSP | 309 | 2023-8-27 | |
| Privy | VSCode | 916 | 2024-1-8 | A privacy-first coding assistant |
| twinny | VSCode | 3279 | 2024-1-24 | "The most no-nonsense locally hosted AI code completion plugin for VS Code" |
| continue | VSCode | 21966 | 2023-5-24 | Chat, autocomplete, and actions |

🛠️ Tools

Tools that attempt to generate projects/features from a specification:

| Name | Stars | Released | Notes |
|------|-------|----------|-------|
| gpt-engineer | 52940 | 2023-6-6 | Specify what you want it to build, the AI asks for clarification, and then builds it |
| gpt-pilot | 32250 | 2023-7-18 | Very similar to gpt-engineer |
| aider | 25618 | 2023-6-8 | AI pair programming in your terminal; works well with pre-existing, larger codebases |
| rift | 3051 | 2023-6-20 | VSCode extension that lets you write code by chatting; makes your IDE agentic, an AI engineer that works alongside you |
| mentat | 2583 | 2023-7-25 | Coordinates edits across multiple locations and files |
| clippinator | 364 | 2023-4-15 | Uses a team of agents to plan, write, debug, and test |
| Refact.AI | 1660 | 2023-10-06 | Fully self-hostable code completion, chat, and training service, complete with a VSCode extension |
| LocalCompletion | 27 | 2023-11-15 | Inline completion with support for any OpenAI-compatible backend |

🗨️ Chat Interfaces

Chat interfaces with shell/REPL/notebook access, similar to/inspired by ChatGPT's "Advanced Data Analysis" feature (previously "Code Interpreter"). A minimal sketch of the underlying pattern follows the table.

| Name | Stars | Notes |
|------|-------|-------|
| open-interpreter | 57982 | Open-source, locally running implementation of OpenAI's Code Interpreter |
| gptme | 3131 | Supports open models. Developed by me, @ErikBjare |
| octogen | 256 | Local Code Interpreter executing in a Docker environment |
| terminal-x | 34 | Very early prototype that converts natural language into shell commands; unmaintained since Sept. 2021 |
| DODA | >50 | Electron-based GUI for a local OpenAI dev assistant |
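
The core pattern behind these tools is straightforward: send the conversation to a model, extract any code from the reply, and execute it, ideally sandboxed and behind a confirmation prompt. A minimal sketch of the loop, assuming a local OpenAI-compatible server and the `openai` Python package (endpoint and model name are placeholders):

```python
# Minimal "code interpreter" loop against a local OpenAI-compatible server.
# WARNING: executes model output directly; real tools sandbox (e.g. Docker)
# and ask for confirmation first.
import re
import subprocess
import sys

from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="not-needed")

def chat_and_run(prompt: str) -> None:
    reply = client.chat.completions.create(
        model="qwen2.5-coder-7b-instruct",  # placeholder; use any local model
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    print(reply)
    # Execute the first fenced Python block in the reply, if any.
    match = re.search(r"```(?:python)?\n(.*?)```", reply, re.DOTALL)
    if match:
        subprocess.run([sys.executable, "-c", match.group(1)])

chat_and_run("Write Python that prints the current working directory.")
```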

🤖 Models

Models relevant for local copilot use, ordered most recent first.

| Name | Size | Languages | Stars | Released | Notes |
|------|------|-----------|-------|----------|-------|
| Deepseek R1 | 671B | Many | 3052 | 2025-01 | |
| Qwen 2.5 Coder | 32B | 92 | 3998 | 2024-11 | |
| Phind CodeLlama v2 | 34B | Many | 829 | 2023-8-27 | |
| WizardCoder-Python | 7/13/34B | Python | 765 | 2023-8 | |
| CodeLlama | 7/13/34B | Many | 16165 | 2023-8 | |
| replit-glaive | 3B | 1? | 88 | 2023-7 | Small model fine-tuned on high-quality data with impressive performance |
| WizardCoder | 15B | 80+ | 750 | 2023-6 | Fine-tuning of Starcoder |
| Starcoder | 15B | 80+ | 7351 | 2023-5 | |
| replit-v1-3b | 3B | 20+ | 724 | 2023-5 | |
| SantaCoder | 1.1B | Python, Java, JavaScript | 331 | 2023-4 | Tiny model selectively trained on 3 languages from 'The Stack' |

Note: due to the pace of new model releases, this section is doomed to be out of date.

📚 Datasets

Datasets relevant for training models.

| Name | Size | Languages | Stars | Released | Notes |
|------|------|-----------|-------|----------|-------|
| The Stack | 3TB/6TB | 358 | 760 | 2022-10 | Excludes weak-copyleft licenses (MPL, LGPL, EPL) since v1.1 |

🔧 Misc Tools

Miscellaneous other useful tools.

| Name | Stars | Released | Notes |
|------|-------|----------|-------|
| ollama | 111009 | 2023-8-27 | Easily get up and running with large language models locally |

Suggested setup

As you can see above, there are many options for models and editor extensions. If you use VS Code or JetBrains and want to get started straight away, you can use the following setup:

  1. Install LM Studio.
  2. Install the Continue.dev extension.
  3. Download one or several models in LM Studio. As of January 2025, Qwen 2.5 Coder is a good choice for autocomplete and Deepseek R1 is a good choice for chat. Depending on your hardware you'll have to experiment with which model size and quantization level gives you sufficient speed. For example on a Macbook Pro M2 with 32GB RAM, Qwen2.5-Coder-7B-Instruct-Q4_K_M works well for autocomplete and DeepSeek-R1-Distill-Qwen-14B-Q4_0 works well for chat.
  4. Go to the Developer tab in LM Studio and start the server.
  5. Configure the Continue.dev extension by adding your selected models to its config. For example:
    {
        "models": [
            {
                "apiBase": "http://localhost:1234/v1/",
                "title": "Deepseek R1",
                "model": "bartowski/deepseek-r1-distill-qwen-14b",
                "provider": "lmstudio"
            }
        ],
        "tabAutocompleteModel": {
            "provider": "lmstudio",
            "apiBase": "http://localhost:1234/v1/",
            "title": "Qwen 2.5 Coder",
            "model": "qwen2.5-coder-7b-instruct"
        }
    }
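
To verify the setup end-to-end, you can hit the server from outside the editor. A minimal smoke test, assuming LM Studio's OpenAI-compatible server on its default port 1234 and the `openai` Python package (the model name must match one you have downloaded):

```python
# Smoke test for LM Studio's local OpenAI-compatible server.
from openai import OpenAI

# LM Studio ignores the API key, but the client requires one to be set.
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="qwen2.5-coder-7b-instruct",  # must match a model loaded in LM Studio
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(resp.choices[0].message.content)
```

If this prints a reply, the editor extension should work with the same settings.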
    

📰 History

📈 Stats

Stargazers over time: [star-history chart]
