{"id":23785296,"url":"https://github.com/meain/esa","last_synced_at":"2026-03-10T07:31:55.765Z","repository":{"id":268659535,"uuid":"905062296","full_name":"meain/esa","owner":"meain","description":"Fastest way to create personalized AI agents","archived":false,"fork":false,"pushed_at":"2026-03-04T14:48:04.000Z","size":276,"stargazers_count":36,"open_issues_count":20,"forks_count":5,"subscribers_count":1,"default_branch":"master","last_synced_at":"2026-03-04T20:58:59.656Z","etag":null,"topics":["function-calling","llm"],"latest_commit_sha":null,"homepage":"https://blog.meain.io/2025/building-personalized-micro-agents/","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/meain.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2024-12-18T04:53:37.000Z","updated_at":"2026-03-04T14:49:26.000Z","dependencies_parsed_at":"2025-01-19T13:29:53.816Z","dependency_job_id":"7ffb4c31-5ee0-4b55-9477-e7366ffa536b","html_url":"https://github.com/meain/esa","commit_stats":null,"previous_names":["meain/esa"],"tags_count":5,"template":false,"template_full_name":null,"purl":"pkg:github/meain/esa","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meain%2Fesa","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meain%2Fesa/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meain%2Fesa/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/r
epositories/meain%2Fesa/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/meain","download_url":"https://codeload.github.com/meain/esa/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/meain%2Fesa/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30326909,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-10T05:25:20.737Z","status":"ssl_error","status_checked_at":"2026-03-10T05:25:17.430Z","response_time":106,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["function-calling","llm"],"created_at":"2025-01-01T13:15:51.867Z","updated_at":"2026-03-10T07:31:55.725Z","avatar_url":"https://github.com/meain.png","language":"Go","readme":"# ESA\n\n![Screencast GIF](https://github.com/user-attachments/assets/99abe3a1-c620-4909-a503-22b4d70a5cac)\n\n\u003cimg src=\"https://github.com/user-attachments/assets/5c2915ab-4a8e-4b49-b3b6-394d5644dac2\" alt=\"Mascot\" width=\"300\" align=\"right\"/\u003e\n\n**ESA** is an AI-powered command-line tool that lets you create powerful personalized small agents. 
By connecting Large Language Models (LLMs) with shell scripts as functions, ESA lets you control your system, automate tasks, and query information using plain English commands.\n\n## ✨ Features\n\n- **Natural Language Interface**: Execute system commands using conversational language\n- **Multi-Provider LLM Support**: Works with OpenAI, Groq, Ollama, OpenRouter, GitHub Models, and custom providers\n- **Extensible Agent System**: Create specialized agents for different domains (DevOps, Git, coding, etc.)\n- **Function-Based Architecture**: Define custom commands via TOML configuration files\n- **MCP Server Integration**: Connect with Model Context Protocol servers for enhanced capabilities\n- **Conversation History**: Continue and retry conversations with full context preservation\n- **Safety Controls**: Built-in confirmation levels and safe/unsafe command classification\n- **Flexible Output**: Support for text, markdown, and JSON output formats\n\n| \u003cvideo src=\"https://github.com/user-attachments/assets/cda852f3-edc2-4612-9920-4c53dc76a9a8\"\u003e\u003c/video\u003e |\n| ----------------------------------------------------------------------------------------------------- |\n| **Tree-Sitter agent**                                                                                 |\n\n| \u003cvideo src=\"https://github.com/user-attachments/assets/ce584248-c50d-456d-aa35-4a080790c2a4\"\u003e\u003c/video\u003e |\n| ----------------------------------------------------------------------------------------------------- |\n| **Coder agent**                                                                                       |\n\n\u003e See [meain/esa#4](https://github.com/meain/esa/issues/4) for more demos\n\n## 🚀 Quick Start\n\n### 1. Installation\n\n**Option A: Using Go**\n\n```bash\ngo install github.com/meain/esa@latest\n```\n\n**Option B: Clone and Build**\n\n```bash\ngit clone https://github.com/meain/esa.git\ncd esa\ngo build -o esa\n```\n\n### 2. 
Setup API Key\n\nESA works with multiple LLM providers. Set up at least one:\n\n```bash\n# OpenAI\nexport OPENAI_API_KEY=\"your-openai-key\"\n\n# Or use other providers\nexport GROQ_API_KEY=\"your-groq-key\"\nexport OLLAMA_API_KEY=\"\"  # Leave empty for local Ollama\n```\n\n### 3. Try Your First Commands\n\n```bash\n# Get help and see all available options\nesa --help\n\n# Basic queries\nesa what time is it\nesa what files are in the current directory\nesa \"calculate 15% tip on $47.50\"\n\n# More complex tasks\nesa will it rain today\nesa set an alarm for 2 hours from now\n\n# Use esa's investigations to learn\nesa --show-history 7 | esa convert this interaction into a doc\n```\n\nThe operations you can perform and the questions you can ask depend on the agent config. While the real power of ESA comes from creating custom agents, the default built-in agent can handle basic tasks.\n\n\u003e 💡 **Tip**: The default agent provides basic system functions. See the [Agent Creation Guide](./docs/agents.md) to create specialized agents.\n\n### Built-in Agents\n\nESA comes with several built-in agents that are always available:\n\n| Agent        | Description                                                   | Usage                          |\n| ------------ | ------------------------------------------------------------- | ------------------------------ |\n| **+default** | Basic system operations like file management and calculations | `esa +default what time is it` |\n| **+new**     | Creates new custom agents                                     | `esa +new create a git agent`  |\n| **+auto**    | Automatically selects the right agent based on your query     | `esa +auto analyze this code`  |\n\n\u003e 💡 **Tip**: You can override built-in agents by creating your own agent with the same name in `~/.config/esa/agents/`. 
When a name conflict occurs, your custom agent will be used instead of the built-in one.\n\n### Using Specialized Agents\n\nESA becomes powerful when you use specialized agents. **See the [Agent Creation Guide](./docs/agents.md) for detailed instructions on:**\n\n- Writing agent configuration files\n- Defining custom functions\n- Parameter handling and validation\n- Advanced templating features\n- Best practices and examples\n\nThe following are things you can do with the example agents provided in `./examples`. Use the `+agent-name` syntax:\n\n```bash\n# Kubernetes operations\nesa +k8s what is the secret value that starts with AUD_\nesa +k8s how to get latest cronjob pod logs\n\n# JIRA integration\nesa +jira list all open issues assigned to me\nesa +jira pending issues related to authentication\n\n# Git operations with the commit agent\ngit diff --staged | esa +commit\n```\n\n\u003e You can see my personal list of custom agents at [esa/agents](https://github.com/meain/dotfiles/tree/master/esa/.config/esa/agents).\n\n### Conversation Features\n\n```bash\n# Continue the last conversation\nesa -c \"and what about yesterday's weather\"\n\n# Continue specific conversations using custom IDs\nesa -C my-project \"continue our discussion about the design\"\nesa -C debugging-session \"what was the error we found?\"\n\n# Continue conversations by index (1 = most recent)\nesa -C 1 \"follow up question\"\nesa -C 2 \"continue the second most recent conversation\"\n\n# Retry the last command with modifications\nesa -r make it more detailed\n\n# View conversation history (shows custom IDs when available)\nesa --list-history\nesa --show-history 3\nesa --show-history my-project        # View by custom ID\nesa --show-history 1 --output json\n\n# Show last output of a previous interaction\nesa --show-output 1\nesa --show-output my-project        # View output by custom ID\n\n# Display agent and model statistics\nesa --show-stats\n```\n\n### REPL Mode (Interactive Sessions)\n\nESA 
supports REPL (Read-Eval-Print Loop) mode for interactive conversations. This is perfect for extended sessions where you want to have back-and-forth conversations with your AI assistant.\n\n```bash\n# Start REPL mode\nesa --repl\n\n# Start REPL with an initial query\nesa --repl what time is it\n\n# Start REPL with a specific agent\nesa --repl +k8s show me all pods\n```\n\n#### REPL Commands\n\nOnce in REPL mode, you can use special commands:\n\n```bash\n# Get help\nyou\u003e /help\n\n# Exit the session\nyou\u003e /exit\nyou\u003e /quit\n\n# Show current configuration\nyou\u003e /config\n\n# View or change the model\nyou\u003e /model                    # Show current model\nyou\u003e /model openai/gpt-4o     # Switch to a different model\nyou\u003e /model mini              # Use a model alias\n```\n\n#### REPL Features\n\n- **Persistent Context**: The conversation continues across multiple inputs\n- **Agent Selection**: Use `+agent` syntax in your initial query or when starting REPL\n- **Model Switching**: Change models mid-conversation with `/model`\n- **Configuration Display**: View current settings with `/config`\n- **Multi-line Input**: Press enter twice to send your message\n- **History Preservation**: All REPL conversations are saved and can be viewed later\n\n#### Example REPL Session\n\n```bash\n$ esa --repl \"+k8s\"\n[REPL] Starting interactive mode\n- '/exit' or '/quit' to end the session\n- '/help' for available commands\n- Press enter twice to send your message.\n\nyou\u003e show me all pods in the default namespace\n\nesa\u003e Here are all the pods in the default namespace:\n[... pod listing ...]\n\nyou\u003e what about in the kube-system namespace\n\nesa\u003e Here are the pods in the kube-system namespace:\n[... pod listing ...]\n\nyou\u003e /model openai/gpt-4o\n[REPL] Model updated to: openai/gpt-4o\n\nyou\u003e can you explain what each of these pods does\n\nesa\u003e [... 
detailed explanation ...]\n\nyou\u003e /exit\n[REPL] Goodbye!\n```\n\n### Working with Different Models\n\n```bash\n# Use a specific model\nesa --model \"openai/o3\" \"complex reasoning task\"\nesa --model \"groq/llama3-70b\" \"quick question\"\n\n# Use model aliases (defined in config)\nesa --model \"mini\" \"your command\"\n```\n\n## 🛠️ Configuration\n\n### Global Configuration\n\nCreate `~/.config/esa/config.toml` for global settings:\n\n```toml\n[settings]\nshow_commands = true                     # Show executed commands\ndefault_model = \"openai/gpt-4o-mini\"    # Default model\n\n[model_aliases]\n# Create shortcuts for frequently used models\n4o = \"openai/gpt-4o\"\nmini = \"openai/gpt-4o-mini\"\ngroq = \"groq/llama3-70b-8192\"\nlocal = \"ollama/llama3.2\"\n\n[providers.localai]\n# Add custom OpenAI-compatible providers\nbase_url = \"http://localhost:8080/v1\"\napi_key_env = \"LOCALAI_API_KEY\"\n```\n\n### Agent Management\n\n```bash\n# List available agents\nesa --list-agents\n\n# View agent details and available functions\nesa --show-agent +k8s\nesa --show-agent +commit\nesa --show-agent ~/.config/esa/agents/custom.toml\n\n# Agents are stored in ~/.config/esa/agents/\n# Each agent is a .toml file defining its capabilities\n```\n\n## 🎯 Available Agents\n\n### Built-in Agents\n\nESA includes several built-in agents that are always available:\n\n| Agent        | Purpose                                                         | Example Usage                    |\n| ------------ | --------------------------------------------------------------- | -------------------------------- |\n| **+default** | Basic system operations (files, calculations, weather)          | `esa +default \"what time is it\"` |\n| **+new**     | Creates new custom agent configurations                         | `esa +new \"create a git agent\"`  |\n| **+auto**    | Automatically selects the appropriate agent based on your query | `esa +auto \"analyze this code\"`  |\n\n\u003e 💡 **Note**: You 
can override built-in agents by creating your own agent with the same name in `~/.config/esa/agents/`. Your custom agent will take precedence over the built-in one.\n\n### Example Agents\n\nESA also includes several example agents you can use or customize:\n\n| Agent      | Purpose                       | Example Usage                         |\n| ---------- | ----------------------------- | ------------------------------------- |\n| **commit** | Git commit message generation | `esa +commit \"create commit message\"` |\n| **k8s**    | Kubernetes cluster operations | `esa +k8s \"show pod status\"`          |\n| **jira**   | JIRA issue management         | `esa +jira \"list open issues\"`        |\n| **web**    | Web development tasks         | `esa +web \"what is an agent?\"`        |\n\nEach agent can specify a preferred model optimized for its tasks. For\nexample, lightweight agents might use `gpt-4.1-nano` for quick\nresponses, while complex analysis agents might use `o3` for better\nreasoning.\n\nSee the [`examples/`](examples/) directory for more agent configurations.\n\n## 🔧 Command-Line Options\n\n```bash\n# Core options\n--model, -m \u003cmodel\u003e      # Specify model (e.g., \"openai/gpt-4\")\n--agent \u003cpath\u003e           # Path to agent config file\n--config \u003cpath\u003e          # Path to config file\n--debug                  # Enable debug output\n--ask \u003clevel\u003e            # Confirmation level: none/unsafe/all\n--repl                   # Start interactive REPL mode\n\n# Conversation management\n-c, --continue           # Continue last conversation\n-C, --conversation \u003cid\u003e  # Continue/retry specific conversation by ID or index\n-r, --retry              # Retry last command (optionally with new text)\n\n# Output and display\n--show-commands          # Show executed commands\n--show-tool-calls        # Show LLM tool call requests and responses\n--hide-progress          # Disable progress indicators\n--output \u003cformat\u003e 
       # Output format for --show-history: text/markdown/json\n\n# Information commands\n--list-agents            # Show all available agents\n--list-history           # Show conversation history\n--show-history \u003cindex\u003e   # Display specific conversation (e.g., --show-history 1)\n--show-output \u003cindex\u003e    # Display only last output from conversation (e.g., --show-output 1)\n--show-agent \u003cagent\u003e     # Show agent details (e.g., --show-agent +coder)\n--show-stats             # Display agent and model statistics\n--pretty, -p             # Pretty print markdown output (disables streaming)\n```\n\n### Examples\n\n```bash\n# Basic usage\nesa \"What is the weather like?\"\nesa +coder \"How do I write a function in Go?\"\nesa --agent ~/.config/esa/agents/custom.toml \"analyze this code\"\n\n# Agent and history management\nesa --list-agents\nesa --show-agent +coder\nesa --show-agent ~/.config/esa/agents/custom.toml\nesa --list-history\nesa --show-history 1\nesa --show-history 1 --output json\nesa --show-output 1\nesa --show-output 1 --pretty   # Pretty print markdown output\n\n# Conversation flow\nesa --continue \"tell me more about that\"\nesa --conversation my-session \"continue our previous discussion\"\nesa --conversation 1 \"follow up on the most recent conversation\"\nesa --retry \"make it shorter\"\n\n# REPL mode\nesa --repl                                    # Start interactive mode\nesa --repl \"what time is it\"                  # Start with initial query\nesa --repl \"+k8s show me all pods\"            # Start with specific agent\n\n# Display options\nesa --show-commands \"list files\"              # Show command executions\nesa --show-tool-calls \"read README.md\"        # Show tool calls and results\n```\n\n## 📋 Safety and Security\n\nESA includes several safety mechanisms:\n\n### Confirmation Levels\n\n- **`--ask none`** (default): No confirmation required\n- **`--ask unsafe`**: Confirm potentially dangerous commands\n- **`--ask 
all`**: Confirm every command execution\n\n### Function Safety Classification\n\nFunctions in agent configurations can be marked as:\n\n- **`safe = true`**: Commands that only read data or perform safe operations\n- **`safe = false`**: Commands that modify system state or could be dangerous\n\n### Example: Safe vs Unsafe\n\n```toml\n[[functions]]\nname = \"list_files\"\ncommand = \"ls {{path}}\"\nsafe = true              # Reading directory contents is safe\n\n[[functions]]\nname = \"delete_file\"\ncommand = \"rm {{file}}\"\nsafe = false             # File deletion requires confirmation\n```\n\n## 🌐 Supported LLM Providers\n\n| Provider       | Models                 | API Key Environment         |\n| -------------- | ---------------------- | --------------------------- |\n| **OpenAI**     | GPT-4, GPT-3.5, etc.   | `OPENAI_API_KEY`            |\n| **Groq**       | Llama, Mixtral models  | `GROQ_API_KEY`              |\n| **OpenRouter** | Various models         | `OPENROUTER_API_KEY`        |\n| **GitHub**     | Azure-hosted models    | `GITHUB_MODELS_API_KEY`     |\n| **Ollama**     | Local models           | `OLLAMA_API_KEY` (optional) |\n| **Custom**     | OpenAI-compatible APIs | Configurable                |\n\n## 🔌 MCP Server Support\n\nESA supports [Model Context Protocol (MCP)](https://github.com/modelcontextprotocol/spec) servers, allowing you to integrate with external tools and services that implement the MCP specification.\n\n### What is MCP?\n\nMCP is a protocol that allows AI assistants to securely connect with external data sources and tools. 
MCP servers can provide:\n\n- **File system access** (read/write files, directory operations)\n- **Database connectivity** (query databases, execute operations)\n- **Web services** (fetch URLs, API integrations)\n- **Custom tools** (domain-specific functionality)\n\n### Configuration\n\nAdd MCP servers to your agent configuration alongside regular functions:\n\n```toml\nname = \"Filesystem Agent\"\ndescription = \"Agent with file system access via MCP\"\n\n# MCP Servers with security and function filtering\n[mcp_servers.filesystem]\ncommand = \"npx\"\nargs = [\n    \"-y\", \"@modelcontextprotocol/server-filesystem\",\n    \"/Users/username/Documents\"\n]\nsafe = false  # File operations are potentially unsafe\nsafe_functions = [\"read_file\", \"list_directory\"]  # These specific functions are considered safe\nallowed_functions = [\"read_file\", \"write_file\", \"list_directory\"]  # Only expose these functions to the LLM\n\n[mcp_servers.database]\ncommand = \"uvx\"\nargs = [\"mcp-server-postgres\", \"postgresql://localhost/mydb\"]\nsafe = false  # Database operations are unsafe by default\nsafe_functions = [\"select\", \"show\"]  # Only SELECT and SHOW queries are safe\n\n# Regular functions work alongside MCP servers\n[[functions]]\nname = \"list_files\"\ndescription = \"List files using shell command\"\ncommand = \"ls -la {{path}}\"\nsafe = true\n```\n\n### Security and Function Control\n\nMCP servers support the same security model as regular functions:\n\n- **Server-level Safety**: Set `safe = true/false` to mark all functions from a server as safe by default\n- **Function-level Safety**: Use `safe_functions = [\"func1\", \"func2\"]` to override safety for specific functions\n- **Function Filtering**: Use `allowed_functions = [\"func1\", \"func2\"]` to limit which functions are exposed to the LLM\n\n### Command and Tool Call Display\n\nUse `--show-commands` to see MCP tool executions:\n\n```bash\nesa --show-commands +filesystem \"list files in current 
directory\"\n# Shows: # filesystem:list_directory({\"path\": \".\"})\n```\n\nUse `--show-tool-calls` to see the raw tool call requests and responses:\n\n```bash\nesa --show-tool-calls +filesystem \"read the first 10 lines of README.md\"\n# Shows detailed JSON of tool call request and response\n```\n\n### Usage\n\nMCP tools are automatically discovered and integrated with your agent:\n\n```bash\n# MCP tools are prefixed with 'mcp_{server_name}_{tool_name}'\nesa +filesystem \"read the contents of config.json\"\nesa +database \"show me all users in the database\"\n\n# View available MCP tools and their security settings\nesa --show-agent examples/mcp.toml\n\n# Use with confirmation and tool visibility\nesa --ask unsafe --show-commands +filesystem \"write a file\"  # See command execution\nesa --ask unsafe --show-tool-calls +filesystem \"write a file\"  # See command and output\n```\n\n### Benefits\n\n- **Security**: MCP servers run in isolation with defined permissions and granular safety controls\n- **Extensibility**: Easy integration with existing MCP-compatible tools\n- **Flexibility**: Combine MCP tools with regular shell functions\n- **Standardization**: Use any MCP server implementation\n- **Function Control**: Filter and control which MCP functions are available to the LLM\n- **Command Visibility**: Full transparency with `--show-commands` flag support\n\nSee [`examples/mcp.toml`](examples/mcp.toml) for a complete example.\n\n## FAQ\n\n\u003cdetails\u003e\n\u003csummary\u003eWhat agents do I have?\u003c/summary\u003e\nI have quite a few personal agents. The ones that I can make public are available in my [dotfiles](https://github.com/meain/dotfiles/tree/master/esa/.config/esa/agents).\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eHow to set up GitHub Copilot\u003c/summary\u003e\n1. The easiest way to get the Copilot token is to sign in to Copilot from any JetBrains IDE (PyCharm, GoLand, etc).\n\n2. 
After authentication, locate the configuration file:\n\n   - Linux/macOS: `~/.config/github-copilot/apps.json`\n   - Windows: `~\\AppData\\Local\\github-copilot\\apps.json`\n\n3. Copy the `oauth_token` value from this file.\n\n4. Set the token as your `COPILOT_API_KEY`:\n\n   ```bash\n   export COPILOT_API_KEY=your_oauth_token_here\n   ```\n\nImportant Note: Tokens created by the Neovim copilot.lua plugin (old `hosts.json`) sometimes lack the needed scopes. If you see \"access to this endpoint is forbidden\", regenerate the token with a JetBrains IDE.\n\n\u003c/details\u003e\n","funding_links":[],"categories":["Go"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmeain%2Fesa","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmeain%2Fesa","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmeain%2Fesa/lists"}