# inkspect

[![Rust](https://github.com/4383/inkspect/actions/workflows/rust.yml/badge.svg)](https://github.com/4383/inkspect/actions/workflows/rust.yml)
![Crates.io Version](https://img.shields.io/crates/v/inkspect)
![Crates.io Total Downloads](https://img.shields.io/crates/d/inkspect)

Inkspect is a command-line interface (CLI) tool that streamlines your workflow with Large Language Models (LLMs) by integrating prompt refinement and generation directly into your favorite code editor and your terminal pipelines. One of its main strengths is its embedded prompts: pre-defined query templates that help you get precise results without writing complex instructions every time.

For example, instead of writing a lengthy description to generate code, you can use a style like `code-spec` to quickly create a detailed, TDD-focused specification, or `code-debug` to get debugging suggestions for a generic bug. These pre-defined prompts save time, keep results consistent, and make LLMs more efficient for specific development tasks.

## The Motivation: Stop Juggling, Start Creating

Does this workflow sound familiar?

1. You start drafting a prompt in a web-based chatbot.
2. You copy the text and paste it into a local file in your favorite editor for serious refinement.
3. You tweak and iterate, getting it just right.
4. Finally, you copy the finished prompt and paste it *again* into its final destination—a script, an application, or another AI agent.

This constant context-switching between your browser, your editor, and your terminal is inefficient and breaks your creative flow.

`inkspect` was built to solve this. It brings the power of LLMs directly to your command line, allowing you to work on prompts in your editor of choice and use them immediately, all without leaving your development environment.

## Features

* **Multiple LLM Backends:** Supports Gemini (default) and Claude.
* **Flexible Input:** Provide input via a command-line flag, from an existing file, or through your favorite text editor.
* **File Processing:** Read from existing files and optionally update them in-place with optimized output.
* **Powerful Prompt Styles:** Use pre-defined prompt styles to get the exact output you need, from refining text to generating code.
* **Intelligent Prompting:** A global system prompt ensures LLM outputs are direct and clean, with an option to disable it for full control.
* **Customizable:** Configure API keys, default providers, and custom prompt styles in a simple TOML file.
* **Verbose Logging:** A `--verbose` mode for debugging and inspecting the full interaction with the LLM.
* **Easy Setup:** A `setup` command to get you started in seconds.

## Installation

### Recommended: From Crates.io

The easiest way to install `inkspect` is directly from `crates.io` using `cargo`.

```bash
cargo install inkspect
```

This will download, compile, and install the `inkspect` binary in your Cargo home directory (`~/.cargo/bin`), making it available from anywhere in your terminal.

### From Source

If you want the latest development version, you can build it from source.

1. **Install Rust:** If you don't already have it, install the Rust toolchain from [rust-lang.org](https://www.rust-lang.org/tools/install).
2. **Clone and Build:**
```bash
git clone https://github.com/4383/inkspect.git
cd inkspect
cargo build --release
```
3. **Install (Optional):** For easy access, you can copy the executable to a directory in your system's `PATH`.
```bash
sudo cp target/release/inkspect /usr/local/bin/
```

## Configuration

The easiest way to get started is with the interactive `setup` command. It will create the configuration file for you at the correct location (`~/.config/inkspect/inkspect.toml`) and prompt you for your API keys.

```bash
inkspect setup
```

You can also provide a custom configuration path for any command using the `--config` flag.
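As a rough sketch, a generated configuration might look like the following. The exact key names (`default_provider`, the `[providers.*]` tables, and the `[prompts.*]` style tables) are assumptions for illustration only; run `inkspect setup` or inspect the generated `~/.config/inkspect/inkspect.toml` to see the canonical layout.

```toml
# ~/.config/inkspect/inkspect.toml — hypothetical layout for illustration
default_provider = "gemini"   # or "claude"

[providers.gemini]
api_key = "YOUR_GEMINI_API_KEY"

[providers.claude]
api_key = "YOUR_CLAUDE_API_KEY"

# Custom prompt styles can live alongside the built-in ones.
[prompts.shorten]
description = "Condense text while preserving meaning"
template = "Rewrite the following text to be as concise as possible:"
```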

## Usage

The basic command structure is `inkspect [OPTIONS] <COMMAND>`.

### Commands

#### `optimize`

This is the main command for processing text with an LLM.

* **From an Editor (Default):**
```bash
inkspect optimize
```
This will open your default text editor (`$EDITOR`). Write your prompt, save, and close the file.

* **From an Inline String:**
```bash
inkspect optimize --input "your prompt here"
```

* **From an Existing File:**
```bash
inkspect optimize --file my_prompt.txt
```
This reads the content of `my_prompt.txt` as input and outputs the optimized result to stdout.

* **Update a File In-Place:**
```bash
inkspect optimize --file my_document.txt --in-place
```
This reads from `my_document.txt`, processes it with the LLM, and updates the same file with the optimized output.

* **Saving to a File:**
```bash
inkspect optimize --input "your prompt here" --output my_file.txt
# or from a file
inkspect optimize --file input.txt --output output.txt
```

#### `list-prompts`

Lists all the available prompt styles from your configuration file.

```bash
inkspect list-prompts
```

#### `list-models`

Lists the available models from a specific provider.

```bash
inkspect list-models --provider gemini
```

#### `setup`

Runs the interactive setup to create your configuration file.

```bash
inkspect setup
```

### File Input and Output Options

The `optimize` command supports several options for handling file input and output:

#### Input Options

- `--input <TEXT>`: Provide text directly as a command-line argument
- `--file <PATH>`: Read input from an existing file
- No input flag: Opens your default editor for input (uses `$EDITOR` environment variable)

**Note:** You cannot use both `--input` and `--file` at the same time.

#### Output Options

- `--output <PATH>`: Save the optimized result to a specific file
- `--in-place`: Update the input file with the optimized output (requires `--file`)
- No output flag: Display the result on stdout

#### Priority Order

When multiple output options are specified, they are handled in this order:
1. `--output` takes highest priority (saves to specified file)
2. `--in-place` updates the input file (only works with `--file`)
3. Default: output to stdout
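The dispatch logic above can be sketched as follows. This is a hypothetical illustration mirroring the documented priority, not inkspect's actual implementation (which is written in Rust):

```python
def resolve_output(output=None, in_place=False, input_file=None):
    """Return (destination_kind, path) per the documented priority:
    --output > --in-place > stdout."""
    if in_place and input_file is None:
        # --in-place only works together with --file
        raise ValueError("--in-place requires --file")
    if output is not None:
        return ("file", output)          # highest priority
    if in_place:
        return ("file", input_file)      # overwrite the input file
    return ("stdout", None)              # default
```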

#### Examples

```bash
# Read from a file and display result
inkspect optimize --file my_draft.md

# Read from a file and update it in-place
inkspect optimize --file my_draft.md --in-place

# Read from a file and save to another file
inkspect optimize --file draft.md --output final.md

# Use with different prompt styles
inkspect optimize --file code_request.txt --style code-spec --in-place

# Combine with other options
inkspect optimize --file my_prompt.txt --provider claude --style refine --in-place
```

#### Error Handling

The tool provides helpful error messages for invalid combinations:
- Using both `--input` and `--file` will show an error
- Using `--in-place` without `--file` will show an error
- If the input file doesn't exist or can't be read, a descriptive error is shown
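The input-side rules can be sketched the same way. Again, this is a hypothetical illustration of the documented behavior, not the tool's actual code:

```python
def resolve_input(input_text=None, input_file=None):
    """Return (source_kind, value) per the documented rules:
    --input and --file are mutually exclusive; with neither,
    fall back to the $EDITOR workflow."""
    if input_text is not None and input_file is not None:
        raise ValueError("use either --input or --file, not both")
    if input_text is not None:
        return ("inline", input_text)
    if input_file is not None:
        return ("file", input_file)
    return ("editor", None)
```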

## Piping and Integration

`inkspect` is a standard command-line application, which means it can be seamlessly integrated into your existing scripts and workflows using pipes. You can chain the output of `inkspect` into other tools for further processing.

This is especially powerful when using the `code-spec` or `code-gen` styles. You can generate a detailed specification or a block of code and pipe it directly to another AI agent or a file.

### Example: Generating a Spec and Piping it to a Coding Agent

In this example, we'll use `inkspect` with the `code-spec` style to generate a detailed TDD specification for a Python script.
Then, we'll pipe that specification directly to the official Google `gemini-cli` tool ([https://github.com/google-gemini/gemini-cli](https://github.com/google-gemini/gemini-cli)) to generate the final code.

```bash
# Generate the spec with inkspect and pipe it directly to the gemini-cli agent
inkspect optimize --style code-spec --input "create a python script to fetch and display the weather for a given city" -o specs.md && gemini -p "execute the @specs.md file"
```

You can also work with existing files and create iterative workflows:

```bash
# Start with a rough idea in a file, refine it iteratively
echo "Create a web scraper for news articles" > idea.txt
inkspect optimize --file idea.txt --style code-spec --in-place
inkspect optimize --file idea.txt --style code-gen --output implementation.py
```

This workflow allows you to use `inkspect` as a powerful "front-end" for generating high-quality, structured prompts for other automated systems, all from the comfort of your terminal.
The above example would work the same with [Claude Code](https://www.anthropic.com/claude-code).

### The System Prompt

To ensure that the LLM's output is clean and direct, `inkspect` uses a **system prompt** by default. This is a set of instructions that is automatically prepended to every request sent to the LLM. It tells the model to avoid conversational filler like "Of course, here is..." and concluding remarks.

You can see the full system prompt in the configuration file created by the `setup` command.

#### Disabling the System Prompt

In some cases, you might want the LLM to be more conversational. You can disable the system prompt for a single run by using the `--no-system-prompt` flag.

```bash
inkspect optimize --input "Tell me a short story." --no-system-prompt
```
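Conceptually, the system prompt and the `--no-system-prompt` flag work like the sketch below. The instruction wording and request format here are assumptions for illustration; the real system prompt text lives in inkspect's configuration file:

```python
SYSTEM = "Answer directly. No conversational filler or concluding remarks."

def build_request(user_prompt, use_system_prompt=True):
    # Hypothetical sketch: the system instructions are prepended
    # to every request unless --no-system-prompt is passed.
    if use_system_prompt:
        return f"{SYSTEM}\n\n{user_prompt}"
    return user_prompt
```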

### Prompt Styles (`--style`)

This is where the power of `inkspect` comes in. Use the `--style` flag with the `optimize` command to transform your input in different ways. You can also provide a custom prompt directly with the `--prompt` flag.

#### `code-spec` (Default)

Transforms a high-level feature request into a detailed, TDD-focused specification for an AI coding agent. **This does not write code, just specs.**

* **Example:**
```bash
inkspect optimize --style code-spec --input "I want to create a simple command-line todo list app in Rust"
```
* **Sample Output:**
> ### 1. High-Level Goal
> Create a command-line interface (CLI) application in Rust for managing a todo list.
>
> ### 2. Key Features
> - Add a new task.
> - List all tasks.
> - Mark a task as complete.
> - Persist tasks to a local file.
>
> ### 3. Proposed Architecture & File Structure
> A single `main.rs` file will contain all the logic. Tasks will be stored in a `todos.json` file.
>
> ### 4. Step-by-Step TDD Implementation Plan
> **Feature: Add a Task**
> - **Test:** Write a failing test `test_add_task` that checks if a new task is added to the list.
> - **Implementation:** Create the `add_task` function and the `Task` struct.
> ...

#### `code-gen`

Generates a complete, single-file application from a high-level description.

* **Example:**
```bash
inkspect optimize --style code-gen --input "create a simple command-line todo list app in Rust"
```
* **Sample Output:**
> ```rust
> use clap::{Parser, Subcommand};
> use serde::{Deserialize, Serialize};
> use std::fs;
> // ... (complete, working Rust code for a todo app) ...
> ```
>
> ### How to Build and Run
> 1. **Save the code** to `src/main.rs`.
> 2. **Add dependencies** to `Cargo.toml`: `clap`, `serde`, `serde_json`.
> 3. **Run** `cargo build --release`.
> 4. **Execute** `./target/release/todo-cli add "My first task"`.

#### `code-debug`

Crafts a prompt for a coding AI agent to help debug a generic bug.

* **Example:**
```bash
inkspect optimize --style code-debug --input "My python script is failing with a 'KeyError' when I try to access a dictionary key that should exist."
```
* **Sample Output:**
> **Prompt for AI Agent:**
>
> I'm encountering a `KeyError` in my Python script when accessing a dictionary key that I expect to be present. Here is the relevant code snippet:
>
> ```python
> # ... (user would paste their code here) ...
> ```
>
> Please perform the following tasks:
>
> 1. **Diagnose Potential Causes:**
> * Is the key being misspelled?
> * Is the dictionary being modified or overwritten before the key is accessed?
> * Is there a case sensitivity issue (e.g., 'key' vs 'Key')?
> * Could the data source for the dictionary be missing the key?
>
> 2. **Propose Fixes:**
> * Show how to print the dictionary keys right before the access attempt to verify its contents.
> * Suggest using the `.get()` method with a default value to avoid the `KeyError`.
> * Provide a code snippet demonstrating a check for the key's existence before accessing it.
>
> 3. **Explain the Problem:**
> * Briefly explain what a `KeyError` is and why it occurs in Python.

#### `prompt-improver`

Transforms a raw task prompt into a robust, unambiguous, production-ready prompt for a Claude-like model.

* **Example:**
```bash
inkspect optimize --style prompt-improver --input "Write a python script that takes a {{url}} and returns the links on the page"
```
* **Sample Output:**
>
> **Task Goal:** Write a Python script to extract all hyperlink URLs from a given web page URL.
>
> **Inputs:**
> - `{{url}}`: The URL of the web page to scrape.
>
> **Constraints:**
> - The script must be written in Python 3.
> - It must not make external network requests other than to the specified `{{url}}`.
> - It should handle potential HTTP errors gracefully (e.g., 404 Not Found, 500 Server Error).
> - The output should only be a list of URLs, one per line.
>
> **Method:**
> 1. Import the `requests` and `BeautifulSoup` libraries.
> 2. Fetch the content of the `{{url}}`.
> 3. Parse the HTML content.
> 4. Find all `<a>` tags with an `href` attribute.
> 5. Print each `href` value to standard output.
>
> **Output Format:**
> A plain text list of URLs, one per line.
> ```
> http://example.com/link1
> http://example.com/link2
> ```
>
> **Pitfalls to Avoid:**
> - Do not return relative URLs; resolve them to absolute URLs.
> - Do not include non-HTTP links (e.g., `mailto:`, `ftp:`).
> - Do not print any text other than the extracted URLs.
>
> - The user's prompt is clear. No TODOs needed.