Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
A VS Code extension providing AI code completion locally on your computer.
https://github.com/plazide/local-ai-code-completion
- Host: GitHub
- URL: https://github.com/plazide/local-ai-code-completion
- Owner: Plazide
- License: MIT
- Created: 2023-10-22T18:02:07.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-02-24T17:00:59.000Z (12 months ago)
- Last Synced: 2024-10-12T13:25:08.291Z (4 months ago)
- Topics: artificial-intelligence, developer-tools, large-language-models, typescript, vscode-extension
- Language: TypeScript
- Homepage:
- Size: 7.52 MB
- Stars: 17
- Watchers: 1
- Forks: 4
- Open Issues: 2
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
README
# local-ai-code-completion README
Enables AI-assisted code completion, similar to GitHub Copilot, completely locally. No code leaves your machine. This has two major benefits:
- **Cost**. This extension is completely free to use.
- **Privacy**. No data is shared with third parties; everything stays on your computer.

## Features
AI-assisted code completion.

- Trigger a completion by pressing Ctrl+Alt+C.
- Accept a completion by pressing Tab.
- Cancel an ongoing completion by pressing Escape.
- Delete a non-accepted completion by pressing Escape.
GIF is sped up.
![usage example](./assets/lacc-example.gif)
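For context, VS Code extensions expose key-driven features like these as commands registered through the standard `vscode` API, with the keybindings declared in `package.json`. A minimal sketch of how such a trigger/accept/cancel flow could be wired up (the command IDs and handler bodies here are hypothetical, not taken from this extension's source):

```typescript
import * as vscode from "vscode";

// Hypothetical command IDs; the real extension may use different ones.
export function activate(context: vscode.ExtensionContext) {
  context.subscriptions.push(
    // Bound to Ctrl+Alt+C via a "keybindings" entry in package.json.
    vscode.commands.registerCommand("lacc.triggerCompletion", async () => {
      // Ask the local model for a completion and show it as ghost text.
    }),
    // Bound to Tab while a completion is pending.
    vscode.commands.registerCommand("lacc.acceptCompletion", () => {
      // Insert the pending completion at the cursor.
    }),
    // Bound to Escape: abort generation or discard the pending text.
    vscode.commands.registerCommand("lacc.cancelCompletion", () => {
      // Cancel the in-flight request and clear the preview.
    })
  );
}
```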
The extension uses codellama 7B under the hood, which supports many languages including Python, C++, Java, PHP, TypeScript (JavaScript), C#, and Bash.
According to evaluation results from Meta, codellama 7B is almost on par with Codex, the model used by GitHub Copilot.
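An Ollama-backed completion boils down to an HTTP call against the local Ollama server's `/api/generate` endpoint. A minimal sketch of such a request, assuming Ollama's default address and a `codellama` model tag (this is illustrative, not the extension's actual code):

```typescript
// Sketch: request a completion from a local Ollama server.
async function complete(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "codellama:7b-code", // codellama 7B, as described above
      prompt,
      stream: false, // return one JSON object instead of a token stream
    }),
  });
  const data = await res.json();
  return data.response; // the generated completion text
}
```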
## Requirements
This extension requires an [Ollama](https://ollama.ai/) installation to run the language model locally. Ollama does not currently support Windows, so this extension is not compatible with Windows either.
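One way to verify the Ollama installation is to query its local API: `GET /api/tags` is Ollama's standard endpoint for listing locally available models. A small sketch of such a check (the error handling is illustrative, not the extension's actual startup logic):

```typescript
// Sketch: confirm Ollama is running and the model has been pulled.
async function checkOllama(baseUrl = "http://localhost:11434"): Promise<void> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error("Ollama is not reachable");
  const { models } = await res.json();
  if (!models.some((m: { name: string }) => m.name.startsWith("codellama"))) {
    throw new Error("Run `ollama pull codellama` to download the model");
  }
}
```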
## Known Issues
- Time to start generating can be very long. This is inherent to running the model locally on your computer.
- Inference is slow. This is also a consequence of running the model locally, though speed depends on your system.

## Release Notes
### 1.2.0
#### Added
- Config option for generation timeout
- Config option for the baseUrl of the Ollama API (enables use of the extension with a remote or local Ollama server; see the sketch after these notes)

#### Changed
- Improved logging
#### Fixed
- Bug where aborting generation would not work
Thanks to [@johnnyasantoss](https://github.com/johnnyasantoss) for making these changes.
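Options like these would typically be read through VS Code's configuration API. A minimal sketch, assuming hypothetical setting keys (the actual key names in this extension may differ):

```typescript
import * as vscode from "vscode";

// Hypothetical setting keys, for illustration only.
async function generateWithConfig(prompt: string): Promise<Response> {
  const config = vscode.workspace.getConfiguration("localAiCodeCompletion");
  const baseUrl = config.get<string>("baseUrl", "http://localhost:11434");
  const timeoutMs = config.get<number>("generationTimeout", 30_000);

  // Abort the request if generation exceeds the configured timeout.
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    return await fetch(`${baseUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model: "codellama:7b-code", prompt, stream: false }),
      signal: controller.signal,
    });
  } finally {
    clearTimeout(timer);
  }
}
```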
---
### 1.1.0
Added options for changing the model, temperature, and top_p parameters.
---
### 1.0.0
Initial release.
---