{"id":13877952,"url":"https://github.com/MadBomber/aia","last_synced_at":"2025-07-16T14:30:36.259Z","repository":{"id":208914635,"uuid":"722774172","full_name":"MadBomber/aia","owner":"MadBomber","description":"AI Assistant (aia) a Ruby Gem for using genAI on the CLI","archived":false,"fork":false,"pushed_at":"2024-08-21T14:47:49.000Z","size":357,"stargazers_count":17,"open_issues_count":3,"forks_count":0,"subscribers_count":3,"default_branch":"main","last_synced_at":"2024-10-06T03:09:11.998Z","etag":null,"topics":["gem","genai","prompt","prompt-engineering","ruby"],"latest_commit_sha":null,"homepage":"","language":"Ruby","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/MadBomber.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-11-23T23:22:40.000Z","updated_at":"2024-07-01T17:12:18.000Z","dependencies_parsed_at":"2023-12-13T04:21:04.102Z","dependency_job_id":"bd6d37fe-03a1-4e91-96c4-1bf6fcc386f5","html_url":"https://github.com/MadBomber/aia","commit_stats":null,"previous_names":["madbomber/aia"],"tags_count":35,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MadBomber%2Faia","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MadBomber%2Faia/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MadBomber%2Faia/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MadBomber%2Faia/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/MadBomber","downl
oad_url":"https://codeload.github.com/MadBomber/aia/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":226134226,"owners_count":17578778,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["gem","genai","prompt","prompt-engineering","ruby"],"created_at":"2024-08-06T08:01:35.827Z","updated_at":"2025-07-16T14:30:36.221Z","avatar_url":"https://github.com/MadBomber.png","language":"Ruby","readme":"\u003cdiv align=\"center\"\u003e\n  \u003ch1\u003eAI Assistant (AIA)\u003c/h1\u003e\n  \u003cimg src=\"images/aia.png\" alt=\"Robots waiter ready to take your order.\"\u003e\u003cbr /\u003e\n  **The Prompt is the Code**\n\u003c/div\u003e\n\nAIA is a command-line utility that facilitates interaction with AI models through dynamic prompt management. It automates the management of pre-compositional prompts and executes generative AI commands with enhanced features including embedded directives, shell integration, embedded Ruby, history management, interactive chat, and prompt workflows.\n\nAIA leverages the [prompt_manager gem](https://github.com/madbomber/prompt_manager) to manage prompts, utilizes the [CLI tool fzf](https://github.com/junegunn/fzf) for prompt selection, and can use the [shared_tools gem](https://github.com/madbomber/shared_tools) which provides a collection of common ready-to-use functions for use with LLMs that support tools.\n\n**Wiki**: [Checkout the AIA Wiki](https://github.com/MadBomber/aia/wiki)\n\n## Quick Start\n\n1. **Install AIA:**\n   ```bash\n   gem install aia\n   ```\n\n2. 
**Install dependencies:**\n   ```bash\n   brew install fzf\n   ```\n\n3. **Create your first prompt:**\n   ```bash\n   mkdir -p ~/.prompts\n   echo \"What is [TOPIC]?\" \u003e ~/.prompts/what_is.txt\n   ```\n\n4. **Run your prompt:**\n   ```bash\n   aia what_is\n   ```\n   You'll be prompted to enter a value for `[TOPIC]`, then AIA will send your question to the AI model.\n\n5. **Start an interactive chat:**\n   ```bash\n   aia --chat\n   ```\n\n```plain\n\n       ,      ,\n       (\\____/) AI Assistant (v0.9.7) is Online\n        (_oo_)   gpt-4o-mini\n         (O)       using ruby_llm (v1.3.1)\n       __||__    \\) model db was last refreshed on\n     [/______\\]  /    2025-06-18\n    / \\__AI__/ \\/      You can share my tools\n   /    /__\\\n  (\\   /____\\\n\n```\n\n\u003c!-- Tocer[start]: Auto-generated, don't remove. --\u003e\n\n## Table of Contents\n\n  - [Installation \u0026 Prerequisites](#installation--prerequisites)\n    - [Requirements](#requirements)\n    - [Installation](#installation)\n    - [Setup Shell Completion](#setup-shell-completion)\n  - [Basic Usage](#basic-usage)\n    - [Command Line Interface](#command-line-interface)\n    - [Key Command-Line Options](#key-command-line-options)\n    - [Directory Structure](#directory-structure)\n  - [Configuration](#configuration)\n    - [Essential Configuration Options](#essential-configuration-options)\n    - [Configuration Precedence](#configuration-precedence)\n    - [Configuration Methods](#configuration-methods)\n    - [Complete Configuration Reference](#complete-configuration-reference)\n  - [Advanced Features](#advanced-features)\n    - [Prompt Directives](#prompt-directives)\n      - [Configuration Directive Examples](#configuration-directive-examples)\n      - [Dynamic Content Examples](#dynamic-content-examples)\n    - [Shell Integration](#shell-integration)\n    - [Embedded Ruby (ERB)](#embedded-ruby-erb)\n    - [Prompt Sequences](#prompt-sequences)\n      - [Using --next](#using---next)\n      
- [Using --pipeline](#using---pipeline)\n      - [Example Workflow](#example-workflow)\n    - [Roles and System Prompts](#roles-and-system-prompts)\n    - [RubyLLM::Tool Support](#rubyllmtool-support)\n  - [Examples \u0026 Tips](#examples--tips)\n    - [Practical Examples](#practical-examples)\n      - [Code Review Prompt](#code-review-prompt)\n      - [Meeting Notes Processor](#meeting-notes-processor)\n      - [Documentation Generator](#documentation-generator)\n    - [Executable Prompts](#executable-prompts)\n    - [Tips from the Author](#tips-from-the-author)\n      - [The run Prompt](#the-run-prompt)\n      - [The Ad Hoc One-shot Prompt](#the-ad-hoc-one-shot-prompt)\n      - [Recommended Shell Setup](#recommended-shell-setup)\n      - [Prompt Directory Organization](#prompt-directory-organization)\n  - [Security Considerations](#security-considerations)\n    - [Shell Command Execution](#shell-command-execution)\n    - [Safe Practices](#safe-practices)\n    - [Recommended Security Setup](#recommended-security-setup)\n  - [Troubleshooting](#troubleshooting)\n    - [Common Issues](#common-issues)\n    - [Error Messages](#error-messages)\n    - [Debug Mode](#debug-mode)\n    - [Performance Issues](#performance-issues)\n  - [Development](#development)\n    - [Testing](#testing)\n    - [Building](#building)\n    - [Architecture Notes](#architecture-notes)\n  - [Contributing](#contributing)\n    - [Reporting Issues](#reporting-issues)\n    - [Development Setup](#development-setup)\n    - [Areas for Improvement](#areas-for-improvement)\n  - [Roadmap](#roadmap)\n  - [License](#license)\n\n\u003c!-- Tocer[finish]: Auto-generated, don't remove. 
--\u003e\n\n## Installation \u0026 Prerequisites\n\n### Requirements\n\n- **Ruby**: \u003e= 3.2.0\n- **External Tools**:\n  - [fzf](https://github.com/junegunn/fzf) - Command-line fuzzy finder\n\n### Installation\n\n```bash\n# Install AIA gem\ngem install aia\n\n# Install required external tools (macOS)\nbrew install fzf\n\n# Install required external tools (Linux)\n# Ubuntu/Debian\nsudo apt install fzf\n\n# Arch Linux\nsudo pacman -S fzf\n```\n\n### Setup Shell Completion\n\nGet completion functions for your shell:\n\n```bash\n# For bash users\naia --completion bash \u003e\u003e ~/.bashrc\n\n# For zsh users\naia --completion zsh \u003e\u003e ~/.zshrc\n\n# For fish users\naia --completion fish \u003e\u003e ~/.config/fish/config.fish\n```\n\n## Basic Usage\n\n### Command Line Interface\n\n```bash\n# Basic usage\naia [OPTIONS] PROMPT_ID [CONTEXT_FILES...]\n\n# Interactive chat session\naia --chat [--role ROLE] [--model MODEL]\n\n# Use a specific model\naia --model gpt-4 my_prompt\n\n# Specify output file\naia --out_file result.md my_prompt\n\n# Use a role/system prompt\naia --role expert my_prompt\n\n# Enable fuzzy search for prompts\naia --fuzzy\n```\n\n### Key Command-Line Options\n\n| Option | Description | Example |\n|--------|-------------|---------|\n| `--chat` | Start interactive chat session | `aia --chat` |\n| `--model MODEL` | Specify AI model to use | `aia --model gpt-4` |\n| `--role ROLE` | Use a role/system prompt | `aia --role expert` |\n| `--out_file FILE` | Specify output file | `aia --out_file results.md` |\n| `--fuzzy` | Use fuzzy search for prompts | `aia --fuzzy` |\n| `--help` | Show complete help | `aia --help` |\n\n### Directory Structure\n\n```\n~/.prompts/              # Default prompts directory\n├── ask.txt             # Simple question prompt\n├── code_review.txt     # Code review prompt\n├── roles/              # Role/system prompts\n│   ├── expert.txt      # Expert role\n│   └── teacher.txt     # Teaching role\n└── _prompts.log        # 
History log\n```\n\n## Configuration\n\n### Essential Configuration Options\n\nThe most commonly used configuration options:\n\n| Option | Default | Description |\n|--------|---------|-------------|\n| `model` | `gpt-4o-mini` | AI model to use |\n| `prompts_dir` | `~/.prompts` | Directory containing prompts |\n| `out_file` | `temp.md` | Default output file |\n| `temperature` | `0.7` | Model creativity (0.0-1.0) |\n| `chat` | `false` | Start in chat mode |\n\n### Configuration Precedence\n\nAIA determines configuration settings using this order (highest to lowest priority):\n\n1. **Embedded config directives** (in prompt files): `//config model = gpt-4`\n2. **Command-line arguments**: `--model gpt-4`\n3. **Environment variables**: `export AIA_MODEL=gpt-4`\n4. **Configuration files**: `~/.aia/config.yml`\n5. **Default values**\n\n### Configuration Methods\n\n**Environment Variables:**\n```bash\nexport AIA_MODEL=gpt-4\nexport AIA_PROMPTS_DIR=~/my-prompts\nexport AIA_TEMPERATURE=0.8\n```\n\n**Configuration File** (`~/.aia/config.yml`):\n```yaml\nmodel: gpt-4\nprompts_dir: ~/my-prompts\ntemperature: 0.8\nchat: false\n```\n\n**Embedded Directives** (in prompt files):\n```\n//config model = gpt-4\n//config temperature = 0.8\n\nYour prompt content here...\n```\n\n### Complete Configuration Reference\n\n\u003cdetails\u003e\n\u003csummary\u003eClick to view all configuration options\u003c/summary\u003e\n\n| Config Item Name | CLI Options | Default Value | Environment Variable |\n|------------------|-------------|---------------|---------------------|\n| adapter | --adapter | ruby_llm | AIA_ADAPTER |\n| aia_dir | | ~/.aia | AIA_DIR |\n| append | -a, --append | false | AIA_APPEND |\n| chat | --chat | false | AIA_CHAT |\n| clear | --clear | false | AIA_CLEAR |\n| config_file | -c, --config_file | ~/.aia/config.yml | AIA_CONFIG_FILE |\n| debug | -d, --debug | false | AIA_DEBUG |\n| embedding_model | --em, --embedding_model | text-embedding-ada-002 | AIA_EMBEDDING_MODEL |\n| erb 
| | true | AIA_ERB |\n| frequency_penalty | --frequency_penalty | 0.0 | AIA_FREQUENCY_PENALTY |\n| fuzzy | -f, --fuzzy | false | AIA_FUZZY |\n| image_quality | --iq, --image_quality | standard | AIA_IMAGE_QUALITY |\n| image_size | --is, --image_size | 1024x1024 | AIA_IMAGE_SIZE |\n| image_style | --style, --image_style | vivid | AIA_IMAGE_STYLE |\n| log_file | -l, --log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |\n| markdown | --md, --markdown | true | AIA_MARKDOWN |\n| max_tokens | --max_tokens | 2048 | AIA_MAX_TOKENS |\n| model | -m, --model | gpt-4o-mini | AIA_MODEL |\n| next | -n, --next | nil | AIA_NEXT |\n| out_file | -o, --out_file | temp.md | AIA_OUT_FILE |\n| parameter_regex | --regex | '(?-mix:(\\[[A-Z _\\|]+\\]))' | AIA_PARAMETER_REGEX |\n| pipeline | --pipeline | [] | AIA_PIPELINE |\n| presence_penalty | --presence_penalty | 0.0 | AIA_PRESENCE_PENALTY |\n| prompt_extname | | .txt | AIA_PROMPT_EXTNAME |\n| prompts_dir | -p, --prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |\n| refresh | --refresh | 7 (days) | AIA_REFRESH |\n| require_libs | --rq --require | [] | AIA_REQUIRE_LIBS |\n| role | -r, --role | | AIA_ROLE |\n| roles_dir | | ~/.prompts/roles | AIA_ROLES_DIR |\n| roles_prefix | --roles_prefix | roles | AIA_ROLES_PREFIX |\n| shell | | true | AIA_SHELL |\n| speak | --speak | false | AIA_SPEAK |\n| speak_command | | afplay | AIA_SPEAK_COMMAND |\n| speech_model | --sm, --speech_model | tts-1 | AIA_SPEECH_MODEL |\n| system_prompt | --system_prompt | | AIA_SYSTEM_PROMPT |\n| temperature | -t, --temperature | 0.7 | AIA_TEMPERATURE |\n| terse | --terse | false | AIA_TERSE |\n| tool_paths | --tools | [] | AIA_TOOL_PATHS |\n| allowed_tools | --at --allowed_tools | nil | AIA_ALLOWED_TOOLS |\n| rejected_tools | --rt --rejected_tools | nil | AIA_REJECTED_TOOLS |\n| top_p | --top_p | 1.0 | AIA_TOP_P |\n| transcription_model | --tm, --transcription_model | whisper-1 | AIA_TRANSCRIPTION_MODEL |\n| verbose | -v, --verbose | false | AIA_VERBOSE |\n| voice | --voice 
| alloy | AIA_VOICE |\n\n\u003c/details\u003e\n\n## Advanced Features\n\n### Prompt Directives\n\nDirectives are special commands in prompt files that begin with `//` and provide dynamic functionality:\n\n| Directive | Description | Example |\n|-----------|-------------|---------|\n| `//config` | Set configuration values | `//config model = gpt-4` |\n| `//context` | Show context for this conversation | `//context` |\n| `//include` | Insert file contents | `//include path/to/file.txt` |\n| `//shell` | Execute shell commands | `//shell ls -la` |\n| `//robot` | Show the pet robot ASCII art w/versions | `//robot` |\n| `//ruby` | Execute Ruby code | `//ruby puts \"Hello World\"` |\n| `//next` | Set next prompt in sequence | `//next summary` |\n| `//pipeline` | Set prompt workflow | `//pipeline analyze,summarize,report` |\n| `//clear` | Clear conversation history | `//clear` |\n| `//help` | Show available directives | `//help` |\n| `//available_models` | List available models | `//available_models` |\n| `//tools` | Show a list of available tools and their descriptions | `//tools` |\n| `//review` | Review current context | `//review` |\n\nDirectives can also be used in interactive chat sessions.\n\n#### Configuration Directive Examples\n\n```bash\n# Set model and temperature for this prompt\n//config model = gpt-4\n//config temperature = 0.9\n\n# Enable chat mode and terse responses\n//config chat = true\n//config terse = true\n\nYour prompt content here...\n```\n\n#### Dynamic Content Examples\n\n```bash\n# Include file contents\n//include ~/project/README.md\n\n# Execute shell commands\n//shell git log --oneline -10\n\n# Run Ruby code\n//ruby require 'json'; puts JSON.pretty_generate({status: \"ready\"})\n\nAnalyze the above information and provide insights.\n```\n\n#### Custom Directive Examples\n\nYou can extend AIA with custom directives by creating Ruby files that define new directive methods:\n\n```ruby\n# examples/directives/ask.rb\nmodule AIA\n  class 
DirectiveProcessor\n    private\n    desc \"A meta-prompt to LLM making its response available as part of the primary prompt\"\n    def ask(args, context_manager=nil)\n      meta_prompt = args.empty? ? \"What is meta-prompting?\" : args.join(' ')\n      AIA.config.client.chat(meta_prompt)\n    end\n  end\nend\n```\n\n**Usage:** Use the `--tools` option to specify a directive file or a directory of directive files:\n```bash\n# Load custom directive\naia --tools examples/directives/ask.rb --chat\n\n# Use the results of the custom directive as input to a prompt\n//ask gather the latest closing data for the DOW, NASDAQ, and S\u0026P 500\n```\n\n### Shell Integration\n\nAIA automatically processes shell patterns in prompts:\n\n- **Environment variables**: `$HOME`, `${USER}`\n- **Command substitution**: `$(date)`, `$(git branch --show-current)`\n\n**Examples:**\n\n```bash\n# Dynamic system information\nAs a system administrator on a $(uname -s) platform, how do I optimize performance?\n\n# Include file contents via shell\nHere's my current configuration: $(cat ~/.bashrc | head -20)\n\n# Use environment variables\nMy home directory is $HOME and I'm user $USER.\n```\n\n**Security Note**: Be cautious with shell integration. 
Review prompts before execution as they can run arbitrary commands.\n\n### Embedded Ruby (ERB)\n\nAIA supports full ERB processing in prompts for dynamic content generation:\n\n```erb\n\u003c%# ERB example in prompt file %\u003e\nCurrent time: \u003c%= Time.now %\u003e\nRandom number: \u003c%= rand(100) %\u003e\n\n\u003c% if ENV['USER'] == 'admin' %\u003e\nYou have admin privileges.\n\u003c% else %\u003e\nYou have standard user privileges.\n\u003c% end %\u003e\n\n\u003c%= AIA.config.model %\u003e is the current model.\n```\n\n### Prompt Sequences\n\nChain multiple prompts for complex workflows:\n\n#### Using --next\n\n```bash\n# Command line\naia analyze --next summarize --next report\n\n# In prompt files\n# analyze.txt contains: //next summarize\n# summarize.txt contains: //next report\n```\n\n#### Using --pipeline\n\n```bash\n# Command line\naia research --pipeline analyze,summarize,report,present\n\n# In prompt file\n//pipeline analyze,summarize,report,present\n```\n\n#### Example Workflow\n\n**research.txt:**\n```\n//config model = gpt-4\n//next analyze\n\nResearch the topic: [RESEARCH_TOPIC]\nProvide comprehensive background information.\n```\n\n**analyze.txt:**\n```\n//config out_file = analysis.md\n//next summarize\n\nAnalyze the research data and identify key insights.\n```\n\n**summarize.txt:**\n```\n//config out_file = summary.md\n\nCreate a concise summary of the analysis with actionable recommendations.\n```\n\n### Roles and System Prompts\n\nRoles define the context and personality for AI responses:\n\n```bash\n# Use a predefined role\naia --role expert analyze_code.rb\n\n# Roles are stored in ~/.prompts/roles/\n# expert.txt might contain:\n# \"You are a senior software engineer with 15 years of experience...\"\n```\n\n**Creating Custom Roles:**\n\n```bash\n# Create a code reviewer role\ncat \u003e ~/.prompts/roles/code_reviewer.txt \u003c\u003c EOF\nYou are an experienced code reviewer. 
Focus on:\n- Code quality and best practices\n- Security vulnerabilities\n- Performance optimizations\n- Maintainability issues\n\nProvide specific, actionable feedback.\nEOF\n```\n\n### RubyLLM::Tool Support\n\nAIA supports function calling through RubyLLM tools for extended capabilities:\n\n```bash\n# Load tools from directory\naia --tools ~/my-tools/ --chat\n\n# Load specific tool files\naia --tools weather.rb,calculator.rb --chat\n\n# Filter tools\naia --tools ~/tools/ --allowed_tools weather,calc\naia --tools ~/tools/ --rejected_tools deprecated\n```\n\n**Tool Examples** (see `examples/tools/` directory):\n- File operations (read, write, list)\n- Shell command execution\n- API integrations\n- Data processing utilities\n\n**MCP Client Examples** (see `examples/tools/mcp/` directory):\n\nAIA supports Model Context Protocol (MCP) clients for extended functionality:\n\n```bash\n# GitHub MCP Server (requires: brew install github-mcp-server)\n# Set GITHUB_PERSONAL_ACCESS_TOKEN environment variable\naia --tools examples/tools/mcp/github_mcp_server.rb --chat\n\n# iMCP for macOS (requires: brew install --cask loopwork/tap/iMCP)\n# Provides access to Notes, Calendar, Contacts, etc.\naia --tools examples/tools/mcp/imcp.rb --chat\n```\n\nThese MCP clients require the `ruby_llm-mcp` gem and provide access to external services and data sources through the Model Context Protocol.\n\n**Shared Tools Collection:**\nAIA can use the [shared_tools gem](https://github.com/madbomber/shared_tools) which provides a curated collection of commonly-used  tools (aka functions) via the --require option.\n\n```bash\n# Access shared tools automatically (included with AIA)\naia --require shared_tools/ruby_llm --chat\n\n# To access just one specific shared tool\naia --require shared_tools/ruby_llm/edit_file --chat\n\n# Combine with your own local custom RubyLLM-based tools\naia --require shared_tools/ruby_llm --tools ~/my-tools/ --chat\n```\n\nThe above examples show the shared_tools being 
used within an interactive chat session.  They are also available in batch prompts via the same `--require` option.  You can also load the shared_tools with a `//ruby` directive or with a `require` statement inside an ERB block.\n\n## Examples \u0026 Tips\n\n### Practical Examples\n\n#### Code Review Prompt\n```bash\n# ~/.prompts/code_review.txt\n//config model = gpt-4o-mini\n//config temperature = 0.3\n\nReview this code for:\n- Best practices adherence\n- Security vulnerabilities\n- Performance issues\n- Maintainability concerns\n\nCode to review:\n```\n\nUsage: `aia code_review mycode.rb`\n\n#### Meeting Notes Processor\n```bash\n# ~/.prompts/meeting_notes.txt\n//config model = gpt-4o-mini\n//pipeline format,action_items\n\nRaw meeting notes:\n//include [NOTES_FILE]\n\nPlease clean up and structure these meeting notes.\n```\n\n#### Documentation Generator\n```bash\n# ~/.prompts/document.txt\n//config model = gpt-4o-mini\n//shell find [PROJECT_DIR] -name \"*.rb\" | head -10\n\nGenerate documentation for the Ruby project shown above.\nInclude: API references, usage examples, and setup instructions.\n```\n\n### Executable Prompts\n\nThe `--exec` flag is used to create executable prompts.  If it is not present on the shebang line, the prompt file is treated like any other context file: it is included as context in the prompt, but no dynamic content integration or directives are processed.  All other AIA options are optional; all you need is an initial prompt ID and the `--exec` flag.\n\nIn the example below the option `--no-out_file` is used to direct the output from the LLM processing of the prompt to STDOUT.  
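As a minimal sketch of the mechanics before the full `weather_report` example, the following scaffolds a one-line executable prompt (the file name and question are illustrative; actually running it requires AIA to be installed and configured):

```shell
# Scaffold a minimal executable prompt (illustrative name and content).
# The shebang reuses the configuration-only "run" prompt described in the
# Tips section; --no-out_file sends the LLM response to STDOUT.
cat > ./quick_question <<'EOF'
#!/usr/bin/env aia run --no-out_file --exec
What is the capital of France?
EOF
chmod +x ./quick_question
head -1 ./quick_question   # the shebang line the kernel hands to aia
```

Invoking `./quick_question` (or piping it, e.g. `./quick_question | glow`) then behaves like any other command-line filter.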
This way, executable prompts can be good citizens on the *nix command line, receiving piped input via STDIN and sending their output to STDOUT.\n\nCreate executable prompts:\n\n**weather_report** (make executable with `chmod +x`):\n```bash\n#!/usr/bin/env aia run --no-out_file --exec\n# Get current storm activity for the east and south coast of the US\n\nSummarize the tropical storm outlook for the Atlantic, Caribbean Sea, and Gulf of America.\n\n//webpage https://www.nhc.noaa.gov/text/refresh/MIATWOAT+shtml/201724_MIATWOAT.shtml\n```\n\nUsage:\n```bash\n./weather_report\n./weather_report | glow  # Render the markdown with glow\n```\n\n### Tips from the Author\n\n#### The run Prompt\n```bash\n# ~/.prompts/run.txt\n# Desc: A configuration-only prompt file for use with executable prompts\n#       Put whatever you want here to set up the desired configuration.\n#       You could also add a system prompt to preface your intended prompt\n```\n\nUsage: `echo \"What is the meaning of life?\" | aia run`\n\n#### The Ad Hoc One-shot Prompt\n```bash\n# ~/.prompts/ad_hoc.txt\n[WHAT_NOW_HUMAN]\n```\nUsage: `aia ad_hoc` - perfect for any quick one-shot question without cluttering shell history.\n\n#### Recommended Shell Setup\n```bash\n# ~/.bashrc_aia\nexport AIA_PROMPTS_DIR=~/.prompts\nexport AIA_OUT_FILE=./temp.md\nexport AIA_MODEL=gpt-4o-mini\nexport AIA_VERBOSE=true  # Shows spinner while waiting for LLM response\n\nalias chat='aia --chat --terse'\nask() { echo \"$1\" | aia run --no-out_file; }\n```\n\nThe `chat` alias and the `ask` function (shown above) are two powerful tools for interacting with the AI assistant. The `chat` alias engages you in an interactive conversation with the AI assistant, while the `ask` function asks a single question and prints the response. The `run` prompt ID was discussed above.  
Besides being used with `ask` here, the `run` prompt ID is also used in the shebang line of executable prompt files.\n\n#### Prompt Directory Organization\n```\n~/.prompts/\n├── daily/           # Daily workflow prompts\n├── development/     # Coding and review prompts\n├── research/        # Research and analysis\n├── roles/           # System prompts\n└── workflows/       # Multi-step pipelines\n```\n\n## Security Considerations\n\n### Shell Command Execution\n\n**⚠️ Important Security Warning**\n\nAIA executes shell commands and Ruby code embedded in prompts. This provides powerful functionality but requires caution:\n\n- **Review prompts before execution**, especially from untrusted sources\n- **Avoid storing sensitive data** in prompts (API keys, passwords)\n- **Use parameterized prompts** instead of hardcoding sensitive values\n- **Limit file permissions** on prompt directories on shared systems\n\n### Safe Practices\n\n```bash\n# ✅ Good: Use parameters for sensitive data\n//config api_key = [API_KEY]\n\n# ❌ Bad: Hardcode secrets\n//config api_key = sk-1234567890abcdef\n\n# ✅ Good: Validate shell commands\n//shell ls -la /safe/directory\n\n# ❌ Bad: Dangerous shell commands\n//shell rm -rf / # Never do this!\n```\n\n### Recommended Security Setup\n\n```bash\n# Set restrictive permissions on prompts directory\nchmod 700 ~/.prompts\nchmod 600 ~/.prompts/*.txt\n```\n\n## Troubleshooting\n\n### Common Issues\n\n**Prompt not found:**\n```bash\n# Check prompts directory\nls $AIA_PROMPTS_DIR\n\n# Verify prompt file exists\nls ~/.prompts/my_prompt.txt\n\n# Use fuzzy search\naia --fuzzy\n```\n\n**Model errors:**\n```bash\n# List available models\naia --available_models\n\n# Check model name spelling\naia --model gpt-4o  # Correct\naia --model gpt4    # Incorrect\n```\n\n**Shell integration not working:**\n```bash\n# Verify shell patterns\necho \"Test: $(date)\"  # Should show current date\necho \"Home: $HOME\"    # Should show home directory\n```\n\n**Configuration issues:**\n```bash\n# Check current 
configuration\naia --config\n\n# Debug configuration loading\naia --debug --config\n```\n\n### Error Messages\n\n| Error | Cause | Solution |\n|-------|-------|----------|\n| \"Prompt not found\" | Missing prompt file | Check file exists and spelling |\n| \"Model not available\" | Invalid model name | Use `--available_models` to list valid models |\n| \"Shell command failed\" | Invalid shell syntax | Test shell commands separately first |\n| \"Configuration error\" | Invalid config syntax | Check config file YAML syntax |\n\n### Debug Mode\n\nEnable debug output for troubleshooting:\n\n```bash\n# Enable debug mode\naia --debug my_prompt\n\n# Combine with verbose for maximum output\naia --debug --verbose my_prompt\n```\n\n### Performance Issues\n\n**Slow model responses:**\n- Try smaller/faster models: `--model gpt-4o-mini`\n- Reduce max_tokens: `--max_tokens 1000`\n- Use lower temperature for faster responses: `--temperature 0.1`\n\n**Large prompt processing:**\n- Break into smaller prompts using `--pipeline`\n- Use `//include` selectively instead of large files\n- Consider model context limits\n\n## Development\n\n### Testing\n\n```bash\n# Run unit tests\nrake test\n\n# Run integration tests\nrake integration\n\n# Run all tests with coverage\nrake all_tests\nopen coverage/index.html\n```\n\n### Building\n\n```bash\n# Install locally with documentation\njust install\n\n# Generate documentation\njust gen_doc\n\n# Static code analysis\njust flay\n```\n\n### Architecture Notes\n\n**ShellCommandExecutor Refactor:**\nThe `ShellCommandExecutor` is now a class (previously a module) with instance variables for cleaner encapsulation. 
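As a hedged sketch of that refactor pattern (simplified names, not the gem's actual source): instance methods carry the state, while a class-level wrapper keeps old module-style call sites working.

```ruby
# Simplified illustration of a module-to-class refactor (not AIA's actual code).
class ShellCommandExecutor
  def initialize(command)
    @command = command          # state now lives in instance variables
  end

  def execute
    `#{@command}`.strip         # run the command, return trimmed output
  end

  # Class-level entry point preserved so old module-style callers keep working.
  def self.execute_command(command)
    new(command).execute
  end
end
```

Existing callers can keep writing `ShellCommandExecutor.execute_command(cmd)` while new code holds an instance.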
Class-level methods remain for backward compatibility.\n\n**Prompt Variable Fallback:**\nVariables are always parsed from prompt text when no `.json` history file exists, ensuring parameter prompting works correctly.\n\n## Contributing\n\nBug reports and pull requests are welcome on GitHub at https://github.com/MadBomber/aia.\n\n### Reporting Issues\n\nWhen reporting issues, please include:\n- AIA version: `aia --version`\n- Ruby version: `ruby --version`\n- Operating system\n- Minimal reproduction example\n- Error messages and debug output\n\n### Development Setup\n\n```bash\ngit clone https://github.com/MadBomber/aia.git\ncd aia\nbundle install\nrake test\n```\n\n### Areas for Improvement\n\n- Configuration UI for complex setups\n- Better error handling and user feedback\n- Performance optimization for large prompt libraries\n- Enhanced security controls for shell integration\n\n## Roadmap\n\n- **Enhanced Search**: Restore full-text search within prompt files\n- **UI Improvements**: Better configuration management for fzf and rg tools\n- **Logging**: Enhanced logging using Ruby Logger class; integration with RubyLLM and RubyLLM::MCP logging\n\n## License\n\nThe gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).\n\n## Articles on AIA\n\n1. [The Philosophy of Prompt-Driven Development with AIA](https://madbomber.github.io/blog/engineering/AIA-Philosophy/)\n2. [Mastering AIA's Batch Mode: From Simple Questions to Complex Workflows](https://madbomber.github.io/blog/engineering/AIA-Batch-Mode/)\n3. [Building AI Workflows: AIA's Prompt Sequencing and Pipelines](https://madbomber.github.io/blog/engineering/AIA-Workflows/)\n4. [Interactive AI Sessions: Mastering AIA's Chat Mode](https://madbomber.github.io/blog/engineering/AIA-Chat-Mode/)\n5. 
[From Dynamic Prompts to Advanced Tool Integration](https://madbomber.github.io/blog/engineering/AIA-Advanced-Tool-Integration/)\n","funding_links":[],"categories":["Ruby"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FMadBomber%2Faia","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FMadBomber%2Faia","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FMadBomber%2Faia/lists"}