Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/pluja/maestro
Turn natural language into commands. Your CLI tasks, now as easy as a conversation. Run it 100% offline, or use OpenAI's models.
- Host: GitHub
- URL: https://github.com/pluja/maestro
- Owner: pluja
- Created: 2023-11-25T21:43:34.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-06-26T05:11:52.000Z (7 months ago)
- Last Synced: 2024-10-30T01:01:48.298Z (3 months ago)
- Topics: ai, assistant, bash, cli, golang, llama, llm, openai, terminal
- Language: Go
- Homepage:
- Size: 2.1 MB
- Stars: 53
- Watchers: 3
- Forks: 8
- Open Issues: 1
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
![maestro banner](banner.png)
`maestro` converts natural language instructions into CLI commands. It's designed for both offline use with Ollama and online use with the ChatGPT API.
![](maestro.svg)
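A typical session might look like the following (the prompt strings are illustrative examples, not from the project docs):

```shell
# Print the suggested command for a plain-English request:
./maestro "list all PDF files modified in the last week"

# With the -e flag, maestro offers to run the suggested command
# after a confirmation prompt:
./maestro -e "compress the logs directory into logs.tar.gz"
```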
## Key Features
- **Ease of Use**: Simply type your instructions and press enter.
- **Direct Execution**: Use the `-e` flag to directly execute commands with a confirmation prompt for safety.
- **Context Awareness**: Maestro understands your system's context, including the current directory, system, and user.
- **Support for Multiple LLM Models**: Choose from a variety of models for offline and online usage.
- Offline: [Ollama](https://ollama.ai) with [over 40 models available](https://ollama.ai/library).
  - Online: GPT-4 Turbo and GPT-3.5 Turbo.
- **Lightweight**: Maestro is a single small binary with no dependencies.

## Installation
1. Download the latest binary from the [releases page](https://github.com/pluja/maestro/releases).
2. Execute `./maestro -h` to start.

> Tip: Place the binary in a directory within your `$PATH` and rename it to `maestro` for global access, e.g., `sudo mv ./maestro /usr/local/bin/maestro`.
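Taken together, an install on Linux might look like this (the release asset name below is an assumption; pick the file matching your platform from the releases page):

```shell
# Download the binary (asset name is a guess; check the releases page):
curl -L -o maestro \
  https://github.com/pluja/maestro/releases/latest/download/maestro-linux-amd64
chmod +x maestro

# Move it onto your $PATH for global access:
sudo mv maestro /usr/local/bin/maestro

# Verify the install:
maestro -h
```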
## Offline Usage with [Ollama](https://ollama.ai)
> [!IMPORTANT]
> You need Ollama v0.1.24 or greater.

1. Install Ollama from [here](https://ollama.ai/download) (or use [Ollama's Docker image](https://hub.docker.com/r/ollama/ollama)).
2. Download models using `ollama pull <model>`.
- **Note**: If you haven't changed it, you will need to pull the default model: `ollama pull dolphin-mistral:latest`
3. Start the Ollama server with `ollama serve`.
4. Configure Maestro to use Ollama with `./maestro -set-ollama-url <url>`, for example, `./maestro -set-ollama-url http://localhost:8080`.

## Online Usage with OpenAI's API
1. Obtain an API token from [OpenAI](https://platform.openai.com/).
2. Set the token using `./maestro -set-openai-token <token>`.
3. Choose between GPT-4 Turbo with the `-4` flag and GPT-3.5 Turbo with the `-3` flag.
   - Example: `./maestro -4 <prompt>`
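Putting the online setup together (the token and prompt values below are placeholders):

```shell
# One-time: store your OpenAI API token (placeholder shown here).
./maestro -set-openai-token "sk-..."

# Ask GPT-4 Turbo:
./maestro -4 "show the 10 largest files under /var/log"

# Or use GPT-3.5 Turbo:
./maestro -3 "show the 10 largest files under /var/log"
```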