Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ParthSareen/ducky
Local AI pair programming tool
- Host: GitHub
- URL: https://github.com/ParthSareen/ducky
- Owner: ParthSareen
- License: MIT
- Created: 2023-12-14T17:07:47.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2023-12-17T20:03:44.000Z (about 1 year ago)
- Last Synced: 2024-10-22T22:36:03.782Z (3 months ago)
- Topics: langchain, llama, llm, ollama, pair-programming
- Language: Python
- Homepage:
- Size: 331 KB
- Stars: 15
- Watchers: 2
- Forks: 1
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome_ai_agents - Ducky - Local AI pair programming tool (Building / Tools)
README
# rubber ducky
## tl;dr
- `pip install rubber-ducky`
- Install [Ollama](https://ollama.com)
- `ollama pull codellama` (first time only; after that Ollama can just run in the background)
- There may be other dependencies missing from `setup.py`; sorry in advance.
- Run with `ducky` or `ducky <file>` (a consolidated example follows below)
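Putting the tl;dr together, a minimal first-time setup might look like this (the filename is a hypothetical placeholder, not part of the project):

```bash
# Install the CLI from PyPI
pip install rubber-ducky

# One-time model download; Ollama caches it locally
ollama pull codellama

# Run ducky against a file (hypothetical filename)
ducky -f my_script.py
```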
## Dependencies

You will need Ollama installed on your machine. The model I use for this project is `codellama`.
For the first installation, run `ollama pull codellama`; it should download the model files for you.
Ollama is also great because it spins up a server that runs in the background and can even switch models automatically, as long as you have it installed.
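As a quick sketch of that workflow, assuming a standard Ollama install, the following pulls the model and confirms the background server can see it:

```bash
# Download the codellama model (first run only)
ollama pull codellama

# Start the server manually if it is not already running;
# on many installs it starts automatically in the background
ollama serve &

# List the models available to the local server
ollama list
```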
## Usage
Install through [PyPI](https://pypi.org/project/rubber-ducky/):

`pip install rubber-ducky`
### Simple run
`ducky` or `ducky <file>` or `ducky -f <file>`
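For instance (the filename here is a made-up example):

```bash
# Start an interactive session
ducky

# Review a specific file, positionally or via the flag (hypothetical file)
ducky my_script.py
ducky -f my_script.py
```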
### All options
`ducky --file <file> --prompt <prompt> --directory <directory> --chain --model <model>`

Where:
- `--prompt` or `-p`: Custom prompt to be used
- `--file` or `-f`: The file to be processed
- `--directory` or `-d`: The directory to be processed
- `--chain` or `-c`: Chain the output of the previous command to the next command
- `--model` or `-m`: The model to be used (default is "codellama")
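A combined invocation, with hypothetical values for the placeholders (whether `--chain` takes an argument is not spelled out above, so it is omitted here):

```bash
# Custom prompt against one file, explicit model choice (hypothetical values)
ducky --file my_script.py --prompt "Explain this function" --model codellama

# Process a whole directory instead of a single file
ducky --directory src/
```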
## Example output

![Screenshot of ducky](image.png)