Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/srikanth235/privy
An open-source alternative to GitHub copilot that runs locally.
- Host: GitHub
- URL: https://github.com/srikanth235/privy
- Owner: srikanth235
- License: mit
- Created: 2023-12-18T06:20:08.000Z (11 months ago)
- Default Branch: master
- Last Pushed: 2024-05-14T11:43:13.000Z (6 months ago)
- Last Synced: 2024-10-10T18:43:15.895Z (about 1 month ago)
- Topics: ai, code-completion, codellama, copilot, developer-tools, gen-ai, ide, ollama, privacy, self-hosting
- Language: TypeScript
- Homepage:
- Size: 11.7 MB
- Stars: 869
- Watchers: 8
- Forks: 43
- Open Issues: 20
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
README
## 👀 See it in action
#### Real-time code completion
#### Chat with AI about your code
## 🛠️ Pre-requisites
If you haven't already, please pick one of the following platforms to run the LLM of your choice on your system **locally**.
- [Ollama](https://github.com/jmorganca/ollama) (Highly Recommended)
- [llamafile](https://github.com/Mozilla-Ocho/llamafile) (Experimental)
- [llama.cpp](https://github.com/ggerganov/llama.cpp) (Experimental)

## 👍 LLM Recommendations
Please note that you need to configure an LLM for the code completion and chat features **separately**. Some popular LLMs that we recommend are listed below. Please pick the model size (i.e. 1.3b, 7b, 13b, or 34b) based on your hardware capabilities.
| Code Completion | Chat | Links |
| ------------------------------------------ | --------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
| deepseek-coder:{1.3b or 6.7b or 33b}-base  | deepseek-coder:{1.3b or 6.7b or 33b}-instruct | [Ollama Tags](https://ollama.com/library/deepseek-coder), [Home](https://github.com/deepseek-ai/DeepSeek-Coder) |
| codellama:{7b or 13b or 34b}-code | codellama:{7b or 13b or 34b}-instruct | [Ollama Tags](https://ollama.com/library/codellama), [Home](https://github.com/facebookresearch/codellama) |
|                                            | mistral:{7b}-instruct                         | [Ollama Tags](https://ollama.com/library/mistral), [Home](https://mistral.ai/)                                  |

You can also pick a model by evaluating your local LLMs using [Benchllama](https://github.com/srikanth235/benchllama).
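With Ollama as the platform, the recommended models can be downloaded ahead of time. Below is a minimal sketch, assuming the smallest DeepSeek Coder variants from the table above (adjust the sizes to your hardware):

```shell
# Illustrative model choices from the table above; pick sizes for your hardware.
COMPLETION_MODEL="deepseek-coder:1.3b-base"
CHAT_MODEL="deepseek-coder:1.3b-instruct"

# Only attempt the downloads when the Ollama CLI is actually installed.
if command -v ollama >/dev/null 2>&1; then
  ollama pull "$COMPLETION_MODEL"
  ollama pull "$CHAT_MODEL"
else
  echo "Ollama not found; install it first (see Pre-requisites above)."
fi
```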
## 🚀 Quick Install
You can install the Privy extension from the Visual Studio Code Marketplace or the Open VSX Registry.
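If you prefer the terminal, the extension can also be installed through VS Code's own CLI. A sketch, assuming the `code` command is on your PATH (the extension ID is taken from the Marketplace link below):

```shell
# Marketplace identifier of the extension (publisher.name).
EXTENSION_ID="privy.privy-vscode"

# Install via the VS Code CLI when it is available.
if command -v code >/dev/null 2>&1; then
  code --install-extension "$EXTENSION_ID"
else
  echo "VS Code CLI not found; use the Marketplace links below instead."
fi
```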
- [Visual Studio Code Marketplace](https://marketplace.visualstudio.com/items?itemName=privy.privy-vscode)
- [Open VSX Registry](https://open-vsx.org/extension/Privy/privy-vscode)

## ⚙️ Configuration Options
Please set the following options in the **settings** for the Privy extension.
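Taken together, the documented defaults can be expressed as a `settings.json` sketch (the values below mirror the defaults described in this section; the autocomplete model is one example from the recommendations table and should be adjusted to your hardware):

```json
{
  "privy.provider": "Ollama",
  "privy.providerUrl": "http://localhost:11434",
  "privy.autocomplete.model": "deepseek-coder:1.3b-base",
  "privy.autocomplete.debounceWait": 300
}
```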
- **privy.provider** (`required`): Pick the platform being used to run LLMs locally. OpenAI is also supported, but using it affects the privacy aspects of the solution. The default is `Ollama`.
- **privy.providerUrl** (`required`): The URL of the platform being used to run LLMs locally. The default is `http://localhost:11434`.
- **privy.autocomplete.mode**: Use this setting to enable or disable the autocompletion feature.
- **privy.autocomplete.model**: The name of the local Ollama model that you want to use for autocompletion. The supported model families are DeepSeek Coder, LLama, and Stable Code. We chose `deepseek-coder:1.3b-base` as the default because it requires the least amount of VRAM; you can customize this based on your hardware setup.
- **privy.autocomplete.debounceWait**: The time gap, in milliseconds, before the next completion is triggered. The default is 300 ms.
- **privy.model**: Select the LLM that you want to chat with. DeepSeek, Mistral, and CodeLlama are currently supported. If you want to use another LLM, select `custom` and configure `privy.customModel` accordingly.
- **privy.customModel**: If you want to use any other model running on your Ollama instance, enter its name here.

## ✨ Key Features
- 👍 Open Source
- 🔐 Privacy first
- 🚀 Auto code completion
- 🤖 Copilot style chat
- 💬 Threaded conversations
- 💻 Support for code explanation, unit tests, finding bugs, diagnosing errors, etc.

## ⌨️ Keyboard shortcuts
| Shortcut                                                          | Description                    |
| ----------------------------------------------------------------- | ------------------------------ |
| `Alt + \` (for Windows/Linux) or `Cmd + \` (for Mac) | Trigger inline code completion |
| `Ctrl + Alt + c` (for Windows/Linux) or `Ctrl + Cmd + c` (for Mac) | Start Chat                     |

## 💡 Tips and Tricks
Understanding these concepts will help you get the most out of Privy.
- **Be specific**.
When you ask for, e.g., code changes, include concrete names and describe the desired outcome. Avoid vague references.
- **Provide context**.
You can include the programming language ("in Rust") or other relevant contexts for basic questions.
You can select a meaningful code snippet for code explanations and error diagnosis.
- **Do not trust answers blindly**.
It's a big step for Privy to be able to respond to your questions.
It might respond with inaccurate answers, especially when talking about
less well-known topics or when the conversation gets too detailed.
- **Use different chat threads for different topics**.
Shorter threads with specific topics will help Privy respond more accurately.

## 🤝 Credits
- [RubberDuck AI](https://github.com/rubberduck-ai/rubberduck-vscode): This project is heavily inspired by RubberDuck AI's work, and we're indebted to them, as Privy builds on top of it. The following is the list of contributors to that project, and we extend our sincere gratitude to all of them.
- Lars Grammel: 🤔 💻 📖 👀 💬 🐛
- Iain Majer: 🐛 💻
- Nicolas Carlo: 💻 📖 🐛
- RatoGBM: 🐛
- Lionel Okpeicha: 🐛
- MercerK: 🐛
- Lundeen.Bryan: 🤔
- DucoG: 🤔
- sbstn87: 🤔
- Manuel: 🤔
- alessandro-newzoo: 🤔
- Void&Null: 🤔
- WittyDingo: 🤔
- Eva: 🤔
- AlexeyLavrentev: 🤔
- linshu123: 📖
- Michael Adams: 💻 🐛
- restlessronin: 💻
## 🎉 Code Contributions
### [Contributing Guide][contributing]
Read our [contributing guide][contributing] to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes.
### [Good First Issues][good-first-issues]
To help you get your feet wet and become familiar with our contribution process, we have a list of [good first issues][good-first-issues] that contains things with a relatively limited scope. This is a great place to get started!
[contributing]: https://github.com/srikanth235/privy/blob/master/CONTRIBUTING.md
[good-first-issues]: https://github.com/srikanth235/privy/labels/good%20first%20issue

## :star: Star History