Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/dh1011/llm-term
A Rust-based CLI tool that generates and executes terminal commands using OpenAI's language models.
- Host: GitHub
- URL: https://github.com/dh1011/llm-term
- Owner: dh1011
- License: MIT
- Created: 2024-08-29T07:17:14.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2024-10-30T04:04:57.000Z (3 months ago)
- Last Synced: 2024-10-30T07:13:14.415Z (3 months ago)
- Language: Rust
- Homepage:
- Size: 1.15 MB
- Stars: 90
- Watchers: 1
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-cli-apps - llm-term - A Rust-based CLI tool that generates and executes terminal commands using OpenAI's language models. (AI / ChatGPT)
- awesome-cli-apps-in-a-csv - llm-term - A Rust-based CLI tool that generates and executes terminal commands using OpenAI's language models. (AI / ChatGPT)
- awesome_ai_agents - Llm-Term - A Rust-based CLI tool that generates and executes terminal commands using OpenAI's language models. (Building / Tools)
README
# 🖥️ LLM-Term
A Rust-based CLI tool that generates and executes terminal commands using OpenAI's language models or local Ollama models.
## Features
- Configurable model and token limit (gpt-4o-mini, gpt-4o, or Ollama)
- Generate and execute terminal commands based on user prompts
- Works on both PowerShell and Unix-like shells (automatically detected)

## Demo
![LLM-Term Demo](vhs-video/demo.gif)
## Installation
- Download the binary from the [Releases](https://github.com/dh1011/llm-term/releases) page
- Add the directory containing the binary to your `PATH`
- macOS/Linux:
```
export PATH="$PATH:/path/to/llm-term"
```
- To set it permanently, add `export PATH="$PATH:/path/to/llm-term"` to your shell configuration file (e.g., `.bashrc`, `.zshrc`)
- Windows:
```
set PATH=%PATH%;C:\path\to\llm-term
```
- To set it permanently, add `$env:Path += ";C:\path\to\llm-term"` to your PowerShell profile (`$PROFILE`)

## Development
1. Clone the repository
2. Build the project using Cargo: `cargo build --release`
3. The executable will be available in the `target/release` directory

## Usage
1. Set your OpenAI API key (if using OpenAI models):
- macOS/Linux:
```
export OPENAI_API_KEY="sk-..."
```
- Windows:
```
set OPENAI_API_KEY="sk-..."
```
2. If using Ollama, make sure it's running locally on the default port (11434)
3. Run the application with a prompt:
```
./llm-term "your prompt here"
```
4. The app will generate a command based on your prompt and ask for confirmation before execution.
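If you're unsure whether the Ollama server from step 2 is up, one quick way to check is to query its list-models endpoint on the default port (this assumes Ollama's standard REST API; it is not part of llm-term itself):

```
# Prints the locally available models if the Ollama server is reachable;
# prints a fallback message otherwise
curl -s http://localhost:11434/api/tags || echo "Ollama is not reachable on port 11434"
```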
## Configuration
A `config.json` file will be created in the same directory as the binary on first run. You can modify this file to change the default model and token limit.
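The README doesn't document the file's schema, but given the configurable options above (model and token limit), a plausible sketch might look like the following. The key names here are assumptions for illustration, not the actual schema; check the generated file for the real field names:

```
{
  "model": "gpt-4o-mini",
  "max_tokens": 100
}
```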
## Options
- `-c, --config`: Specify a custom config file path
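For example, to point the tool at a config file stored outside the binary's directory (the path and prompt below are illustrative, and this assumes the binary has already been built and is on your `PATH` or in the current directory):

```
./llm-term -c /path/to/custom-config.json "show disk usage for the current directory"
```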
## Supported Models
- OpenAI GPT-4o (gpt-4o)
- OpenAI GPT-4o Mini (gpt-4o-mini)
- Ollama (local models, default: llama3.1)