{"id":46192526,"url":"https://github.com/jrswab/axe","last_synced_at":"2026-04-02T21:57:52.341Z","repository":{"id":340914055,"uuid":"1168117547","full_name":"jrswab/axe","owner":"jrswab","description":"A lightweight CLI for running single-purpose AI agents. Define focused agents in TOML, trigger them from anywhere: pipes, git hooks, cron, or the terminal.","archived":false,"fork":false,"pushed_at":"2026-03-06T14:59:02.000Z","size":3899,"stargazers_count":10,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"master","last_synced_at":"2026-03-06T17:34:21.364Z","etag":null,"topics":["ai-agents","automation","cli","command-line","developer-tools","golang","llm"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jrswab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-02-27T03:04:23.000Z","updated_at":"2026-03-06T14:59:06.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/jrswab/axe","commit_stats":null,"previous_names":["jrswab/axe"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/jrswab/axe","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jrswab%2Faxe","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jrswab%2Faxe/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jrswab%2Faxe/releases","manifests_ur
l":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jrswab%2Faxe/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jrswab","download_url":"https://codeload.github.com/jrswab/axe/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jrswab%2Faxe/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30348851,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-10T15:55:29.454Z","status":"ssl_error","status_checked_at":"2026-03-10T15:54:58.440Z","response_time":106,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-agents","automation","cli","command-line","developer-tools","golang","llm"],"created_at":"2026-03-03T01:06:25.344Z","updated_at":"2026-04-02T21:57:52.325Z","avatar_url":"https://github.com/jrswab.png","language":"Go","readme":"# Axe\n\n![axe banner](banner.png)\n\nA CLI tool for managing and running LLM-powered agents.\n\n## Why Axe?\n\nMost AI tooling assumes you want a chatbot. A long-running session with a massive context window doing everything at once. But that's not how good software works. Good software is small, focused, and composable.\n\nAxe treats LLM agents the same way Unix treats programs. Each agent does one thing well. 
You define it in a TOML file, give it a focused skill, and run it from the command line. Pipe data in, get results out. Chain agents together. Trigger them from cron, git hooks, or CI. Whatever you already use. No daemon, no GUI, no framework to buy into. Just a binary and your configs.\n\n## Overview\n\nAxe orchestrates LLM-powered agents defined via TOML configuration files. Each agent has its own system prompt, model selection, skill files, context files, working directory, persistent memory, and the ability to delegate to sub-agents.\n\nAxe is the executor, not the scheduler. It is designed to be composed with standard Unix tools — cron, git hooks, pipes, file watchers — rather than reinventing scheduling or workflow orchestration.\n\n## Features\n\n- **Multi-provider support** — Anthropic, OpenAI, Ollama (local models), OpenCode, and AWS Bedrock\n- **TOML-based agent configuration** — declarative, version-controllable agent definitions\n- **Sub-agent delegation** — agents can call other agents via LLM tool use, with depth limiting and parallel execution\n- **Persistent memory** — timestamped markdown logs that carry context across runs\n- **Memory garbage collection** — LLM-assisted pattern analysis and trimming\n- **Skill system** — reusable instruction sets that can be shared across agents\n- **Stdin piping** — pipe any output directly into an agent (`git diff | axe run reviewer`)\n- **Local agent directories** — auto-discovers agents from `\u003ccwd\u003e/axe/agents/` before the global config, or use `--agents-dir` to point anywhere\n- **Dry-run mode** — inspect resolved context without calling the LLM\n- **JSON output** — structured output with metadata for scripting\n- **Built-in tools** — file operations (read, write, edit, list) sandboxed to working directory; shell command execution; URL fetching; web search\n- **Output allowlist** — restrict `url_fetch` and `web_search` to specific hostnames; private/reserved IPs are always blocked (SSRF 
protection)\n- **Token budgets** — cap cumulative token usage per agent run via `[budget]` config or `--max-tokens` flag\n- **MCP tool support** — connect to external MCP servers for additional tools via SSE or streamable-HTTP transport\n- **Configurable retry** — exponential, linear, or fixed backoff for transient provider errors (429, 5xx, timeouts)\n- **Minimal dependencies** — four direct dependencies (cobra, toml, mcp-go-sdk, x/net); all LLM calls use the standard library\n\n## Installation\n\nRequires Go 1.25+.\n\n**Pre-built binaries** (no Go required) are available for Linux, macOS, and Windows on the [GitHub Releases page](https://github.com/jrswab/axe/releases/latest).\n\nInstall via Go:\n\n```bash\ngo install github.com/jrswab/axe@latest\n```\n\n\u003e If this fails with `invalid go version`, your Go toolchain is older than 1.25. Upgrade from [go.dev/dl](https://go.dev/dl/) or download a pre-built binary instead.\n\nOr build from source:\n\n```bash\ngit clone https://github.com/jrswab/axe.git\ncd axe\ngo build .\n```\n\n## Quick Start\n\nInitialize the configuration directory:\n\n```bash\naxe config init\n```\n\nThis creates the directory structure at `$XDG_CONFIG_HOME/axe/` with a sample skill and a default `config.toml` for provider credentials.\n\nScaffold a new agent:\n\n```bash\naxe agents init my-agent\n```\n\nEdit its configuration:\n\n```bash\naxe agents edit my-agent\n```\n\nRun the agent:\n\n```bash\naxe run my-agent\n```\n\nPipe input from other tools:\n\n```bash\ngit diff --cached | axe run pr-reviewer\ncat error.log | axe run log-analyzer\n```\n\n## Examples\n\nThe [`examples/`](examples/) directory contains ready-to-run agents you can copy into your config and use immediately. 
Includes a code reviewer, commit message generator, and text summarizer — each with a focused SKILL.md.\n\n```bash\n# Copy an example agent into your config\ncp examples/code-reviewer/code-reviewer.toml \"$(axe config path)/agents/\"\ncp -r examples/code-reviewer/skills/ \"$(axe config path)/skills/\"\n\n# Set your API key and run\nexport ANTHROPIC_API_KEY=\"your-key-here\"\ngit diff | axe run code-reviewer\n```\n\nSee [`examples/README.md`](examples/README.md) for full setup instructions.\n\n## Docker\n\nAxe provides a Docker image for running agents in an isolated, hardened container.\n\n### Build the Image\n\n```bash\ndocker build -t axe .\n```\n\nMulti-architecture builds (linux/amd64, linux/arm64) are supported via buildx:\n\n```bash\ndocker buildx build --platform linux/amd64,linux/arm64 -t axe:latest .\n```\n\n### Run an Agent\n\nMount your config directory and pass API keys as environment variables:\n\n```bash\ndocker run --rm \\\n  -v ./my-config:/home/axe/.config/axe \\\n  -e ANTHROPIC_API_KEY \\\n  axe run my-agent\n```\n\nPipe stdin with the `-i` flag:\n\n```bash\ngit diff | docker run --rm -i \\\n  -v ./my-config:/home/axe/.config/axe \\\n  -e ANTHROPIC_API_KEY \\\n  axe run pr-reviewer\n```\n\nWithout a config volume mounted, axe exits with code 2 (config error) because no agent TOML files exist.\n\n### Running a Single Agent\n\nThe examples above mount the entire config directory. If you only need to run one agent with one skill, mount just those files to their expected XDG paths inside the container. 
No `config.toml` is needed when API keys are passed via environment variables.\n\n```bash\ndocker run --rm -i \\\n  -e ANTHROPIC_API_KEY \\\n  -v ./agents/reviewer.toml:/home/axe/.config/axe/agents/reviewer.toml:ro \\\n  -v ./skills/code-review/:/home/axe/.config/axe/skills/code-review/:ro \\\n  axe run reviewer\n```\n\nThe agent's `skill` field resolves automatically against the XDG config path inside the container, so no `--skill` flag is needed.\n\nTo use a **different skill** than the one declared in the agent's TOML, use the `--skill` flag to override it. In this case you only mount the replacement skill — the original skill declared in the TOML is ignored entirely:\n\n```bash\ndocker run --rm -i \\\n  -e ANTHROPIC_API_KEY \\\n  -v ./agents/reviewer.toml:/home/axe/.config/axe/agents/reviewer.toml:ro \\\n  -v ./alt-review.md:/home/axe/alt-review.md:ro \\\n  axe run reviewer --skill /home/axe/alt-review.md\n```\n\nIf the agent declares `sub_agents`, all referenced agent TOMLs and their skills must also be mounted.\n\n### Persistent Data\n\nAgent memory persists across runs when you mount a data volume:\n\n```bash\ndocker run --rm \\\n  -v ./my-config:/home/axe/.config/axe \\\n  -v axe-data:/home/axe/.local/share/axe \\\n  -e ANTHROPIC_API_KEY \\\n  axe run my-agent\n```\n\n### Docker Compose\n\nA `docker-compose.yml` is included for running axe alongside a local Ollama instance.\n\n**Cloud provider only (no Ollama):**\n\n```bash\ndocker compose run --rm axe run my-agent\n```\n\n**With Ollama sidecar:**\n\n```bash\ndocker compose --profile ollama up -d ollama\ndocker compose --profile cli run --rm axe run my-agent\n```\n\n**Pull an Ollama model:**\n\n```bash\ndocker compose --profile ollama exec ollama ollama pull llama3\n```\n\n\u003e **Note:** The compose `axe` service declares `depends_on: ollama`. Docker Compose will attempt to start the Ollama service whenever axe is started via compose, even for cloud-only runs. 
For cloud-only usage without Ollama, use `docker run` directly instead of `docker compose run`.\n\n### Ollama on the Host\n\nIf Ollama runs directly on the host (not via compose), point to it with:\n\n- **Linux:** `--add-host=host.docker.internal:host-gateway -e AXE_OLLAMA_BASE_URL=http://host.docker.internal:11434`\n- **macOS / Windows (Docker Desktop):** `-e AXE_OLLAMA_BASE_URL=http://host.docker.internal:11434`\n\n### Security\n\nThe container runs with the following hardening by default (via compose):\n\n- **Non-root user** — UID 10001\n- **Read-only root filesystem** — writable locations are the config mount, data mount, and `/tmp/axe` tmpfs\n- **All capabilities dropped** — `cap_drop: ALL`\n- **No privilege escalation** — `no-new-privileges:true`\n\nThese settings do not restrict outbound network access. To isolate an agent that only talks to a local Ollama instance, add `--network=none` and connect it to the shared Docker network manually.\n\n### Volume Mounts\n\n| Container Path | Purpose | Default Access |\n|---|---|---|\n| `/home/axe/.config/axe/` | Agent TOML files, skills, `config.toml` | Read-write |\n| `/home/axe/.local/share/axe/` | Persistent memory files | Read-write |\n\nConfig is read-write because `axe config init` and `axe agents init` write into it. 
Mount as `:ro` if you only run agents.\n\n### Environment Variables\n\n| Variable | Required | Purpose |\n|---|---|---|\n| `ANTHROPIC_API_KEY` | If using Anthropic | API authentication |\n| `OPENAI_API_KEY` | If using OpenAI | API authentication |\n| `AXE_OLLAMA_BASE_URL` | If using Ollama | Ollama endpoint (default in compose: `http://ollama:11434`) |\n| `AXE_ANTHROPIC_BASE_URL` | No | Override Anthropic API endpoint |\n| `AXE_OPENAI_BASE_URL` | No | Override OpenAI API endpoint |\n| `AXE_OPENCODE_BASE_URL` | No | Override OpenCode API endpoint |\n| `TAVILY_API_KEY` | If using web_search | Tavily web search API key |\n| `AXE_WEB_SEARCH_BASE_URL` | No | Override web search endpoint |\n\n## CLI Reference\n\n### Commands\n\n| Command | Description |\n|---|---|\n| `axe run \u003cagent\u003e` | Run an agent |\n| `axe agents list` | List all configured agents |\n| `axe agents show \u003cagent\u003e` | Display an agent's full configuration |\n| `axe agents init \u003cagent\u003e` | Scaffold a new agent TOML file |\n| `axe agents edit \u003cagent\u003e` | Open an agent TOML in `$EDITOR` |\n| `axe config path` | Print the configuration directory path |\n| `axe config init` | Initialize the config directory with defaults |\n| `axe gc \u003cagent\u003e` | Run memory garbage collection for an agent |\n| `axe gc --all` | Run GC on all memory-enabled agents |\n| `axe version` | Print the current version |\n\n### Run Flags\n\n| Flag | Default | Description |\n|---|---|---|\n| `--model \u003cprovider/model\u003e` | from TOML | Override the model (e.g. 
`anthropic/claude-sonnet-4-20250514`) |\n| `--skill \u003cpath\u003e` | from TOML | Override the skill file path |\n| `--workdir \u003cpath\u003e` | from TOML or cwd | Override the working directory |\n| `--timeout \u003cseconds\u003e` | 120 | Request timeout |\n| `--max-tokens \u003cint\u003e` | 0 (unlimited) | Cap cumulative token usage for the run (exit code 4 if exceeded) |\n| `--dry-run` | false | Show resolved context without calling the LLM |\n| `--verbose` / `-v` | false | Print debug info (model, timing, tokens, retries) to stderr |\n| `--json` | false | Wrap output in a JSON envelope with metadata |\n| `-p` / `--prompt \u003cstring\u003e` | (none) | Inline prompt used as the user message; takes precedence over stdin |\n| `--agents-dir \u003cpath\u003e` | (auto-discover) | Override agent search directory |\n\n#### User Message Precedence\n\nThe user message sent to the LLM is resolved in this order:\n\n1. **`-p` / `--prompt` flag** — If provided with a non-empty, non-whitespace value, it is used as the user message.\n2. **Piped stdin** — If `-p` is absent or empty/whitespace-only, piped stdin is used.\n3. **Built-in default** — If neither `-p` nor stdin provides content, the default message `\"Execute the task described in your instructions.\"` is used.\n\nWhen `-p` is provided alongside piped stdin, the piped stdin is silently ignored (no warning is emitted). 
An empty or whitespace-only `-p` value is treated as absent and falls through to stdin, then the default.\n\n**Example:**\n```bash\naxe run my-agent -p \"Summarize the README\"\n```\n\n### Exit Codes\n\n| Code | Meaning |\n|---|---|\n| 0 | Success |\n| 1 | Runtime error |\n| 2 | Configuration error |\n| 3 | Provider/network error |\n| 4 | Token budget exceeded |\n\n## Agent Configuration\n\nAgents are defined as TOML files in `$XDG_CONFIG_HOME/axe/agents/`.\n\n```toml\nname = \"pr-reviewer\"\ndescription = \"Reviews pull requests for issues and improvements\"\nmodel = \"anthropic/claude-sonnet-4-20250514\"\nsystem_prompt = \"You are a senior code reviewer. Be concise and actionable.\"\nskill = \"skills/code-review/SKILL.md\"\nfiles = [\"src/**/*.go\", \"CONTRIBUTING.md\"]\nworkdir = \"/home/user/projects/myapp\"\ntools = [\"read_file\", \"list_directory\", \"run_command\"]\nsub_agents = [\"test-runner\", \"lint-checker\"]\nallowed_hosts = [\"api.example.com\", \"docs.example.com\"]\n\n[sub_agents_config]\nmax_depth = 3       # maximum nesting depth (hard max: 5)\nparallel = true     # run sub-agents concurrently\ntimeout = 120       # per sub-agent timeout in seconds\n\n[memory]\nenabled = true\nlast_n = 10         # load last N entries into context\nmax_entries = 100   # warn when exceeded\n\n[[mcp_servers]]\nname = \"my-tools\"\nurl = \"https://my-mcp-server.example.com/sse\"\ntransport = \"sse\"\nheaders = { Authorization = \"Bearer ${MY_TOKEN}\" }\n\n[params]\ntemperature = 0.3\nmax_tokens = 4096\n\n[budget]\nmax_tokens = 50000    # 0 = unlimited (default)\n\n[retry]\nmax_retries = 3           # retry up to 3 times on transient errors\nbackoff = \"exponential\"   # \"exponential\", \"linear\", or \"fixed\"\ninitial_delay_ms = 500    # base delay before first retry\nmax_delay_ms = 30000      # maximum delay cap\n\n[[mcp_servers]]\nname = \"filesystem\"\ntransport = \"stdio\"\ncommand = \"/usr/local/bin/mcp-server-filesystem\"\nargs = [\"--root\", 
\"/home/user/projects\"]\n```\n\nAll fields except `name` and `model` are optional.\n\n### Retry\n\nAgents can retry on transient LLM provider errors — rate limits (429), server\nerrors (5xx), and timeouts. Retry is opt-in and disabled by default.\n\n| Field | Default | Description |\n|---|---|---|\n| `max_retries` | 0 | Number of retry attempts after the initial request. 0 disables retry. |\n| `backoff` | `\"exponential\"` | Strategy: `\"exponential\"` (with jitter), `\"linear\"`, or `\"fixed\"` |\n| `initial_delay_ms` | 500 | Base delay in milliseconds before the first retry |\n| `max_delay_ms` | 30000 | Maximum delay cap in milliseconds |\n\nOnly transient errors are retried. Authentication errors (401/403) and bad\nrequests (400) are never retried. When `--verbose` is enabled, each retry\nattempt is logged to stderr. The `--json` envelope includes a `retry_attempts`\nfield for observability.\n\n### Output Allowlist\n\nAgents that use `url_fetch` or `web_search` can be restricted to specific hostnames with the `allowed_hosts` field:\n\n```toml\nallowed_hosts = [\"api.example.com\", \"docs.example.com\"]\n```\n\n| Behavior | Detail |\n|---|---|\n| Empty or absent | All public hostnames allowed |\n| Non-empty list | Only exact hostname matches permitted (case-insensitive, no wildcard subdomains) |\n| Private IPs | Always blocked regardless of allowlist — loopback, link-local, RFC 1918, CGNAT, IPv6 private |\n| Redirects | Each redirect destination is re-validated against the allowlist and private IP check |\n| Sub-agents | Inherit the parent's `allowed_hosts` unless the sub-agent TOML explicitly sets its own |\n\n### Token Budget\n\nCap cumulative token usage (input + output, across all turns and sub-agent calls) for a single run:\n\n```toml\n[budget]\nmax_tokens = 50000   # 0 = unlimited (default)\n```\n\nOr override via flag:\n\n```bash\naxe run my-agent --max-tokens 10000\n```\n\nThe flag takes precedence over TOML when set to a value greater than zero.\n\nWhen 
the budget is exceeded, the current response is returned but no further tool calls execute. The process exits with **code 4**. Memory is not appended on a budget-exceeded run.\n\nWith `--verbose`, each turn logs cumulative usage to stderr. With `--json`, the output envelope includes `budget_max_tokens`, `budget_used_tokens`, and `budget_exceeded` fields (omitted when unlimited).\n\n## Tools\n\nAgents can use built-in tools to interact with the filesystem and run commands. When tools are enabled, the agent enters a conversation loop — the LLM can make tool calls, receive results, and continue reasoning for up to 50 turns.\n\n### Built-in Tools\n\n| Tool | Description |\n|---|---|\n| `list_directory` | List contents of a directory relative to the working directory |\n| `read_file` | Read file contents with line-numbered output and optional pagination (offset/limit) |\n| `write_file` | Create or overwrite a file, creating parent directories as needed |\n| `edit_file` | Find and replace exact text in a file, with optional replace-all mode |\n| `run_command` | Execute a shell command via `sh -c` and return combined output |\n| `url_fetch` | Fetch URL content with HTML stripping and truncation |\n| `web_search` | Search the web and return results |\n| `call_agent` | Delegate a task to a sub-agent (controlled via `sub_agents`, not `tools`) |\n\nEnable tools by adding them to the agent's `tools` field:\n\n```toml\ntools = [\"read_file\", \"list_directory\", \"run_command\"]\n```\n\nThe `call_agent` tool is not listed in `tools` — it is automatically available when `sub_agents` is configured and the depth limit has not been reached.\n\n### Path Security\n\nAll file tools (`list_directory`, `read_file`, `write_file`, `edit_file`) are sandboxed to the agent's working directory. Absolute paths, `..` traversal, and symlink escapes are rejected.\n\n### Parallel Execution\n\nWhen an LLM returns multiple tool calls in a single turn, they run concurrently by default. 
This applies to both built-in tools and sub-agent calls. Disable with `parallel = false` in `[sub_agents_config]`.\n\n### MCP Tools\n\nAgents can use tools from external [MCP](https://modelcontextprotocol.io/)\nservers. Declare servers in the agent TOML with `[[mcp_servers]]`:\n\n```toml\n[[mcp_servers]]\nname = \"my-tools\"\nurl = \"https://my-mcp-server.example.com/sse\"\ntransport = \"sse\"\nheaders = { Authorization = \"Bearer ${MY_TOKEN}\" }\n```\n\nAt startup, axe connects to each declared server, discovers available tools via\n`tools/list`, and makes them available to the LLM alongside built-in tools.\n\n| Field | Required | Description |\n|---|---|---|\n| `name` | Yes | Human-readable identifier for the server |\n| `url` | Yes | MCP server endpoint URL |\n| `transport` | Yes | `\"sse\"` or `\"streamable-http\"` |\n| `headers` | No | HTTP headers; values support `${ENV_VAR}` interpolation |\n\nMCP tools are controlled entirely by `[[mcp_servers]]` — they are not listed in\nthe `tools` field. If an MCP tool has the same name as an enabled built-in tool,\nthe built-in takes precedence.\n\n## Skills\n\nSkills are reusable instruction sets that provide an agent with domain-specific knowledge and workflows. They are defined as `SKILL.md` files following the community SKILL.md format.\n\n### Skill Resolution\n\nThe `skill` field in an agent TOML is resolved in order:\n\n1. **Absolute path** — used as-is (e.g. `/home/user/skills/SKILL.md`)\n2. **Relative to config dir** — e.g. `skills/code-review/SKILL.md` resolves to `$XDG_CONFIG_HOME/axe/skills/code-review/SKILL.md`\n3. **Bare name** — e.g. `code-review` resolves to `$XDG_CONFIG_HOME/axe/skills/code-review/SKILL.md`\n\n### Script Paths\n\nSkills often reference helper scripts. Since `run_command` executes in the agent's `workdir` (not the skill directory), **script paths in SKILL.md must be absolute**. 
Relative paths will fail because the scripts don't exist in the agent's working directory.\n\n```\n# Correct — absolute path\n/home/user/.config/axe/skills/my-skill/scripts/fetch.sh \u003cargs\u003e\n\n# Wrong — relative path won't resolve from the agent's workdir\nscripts/fetch.sh \u003cargs\u003e\n```\n\n### Directory Structure\n\n```\n$XDG_CONFIG_HOME/axe/\n├── config.toml\n├── agents/\n│   └── my-agent.toml\n└── skills/\n    └── my-skill/\n        ├── SKILL.md\n        └── scripts/\n            └── fetch.sh\n```\n\n## Local Agent Directories\n\nBy default, agents are loaded from `$XDG_CONFIG_HOME/axe/agents/`. Axe also supports project-local agent directories for per-repo agent definitions.\n\n### Auto-Discovery\n\nIf `\u003ccwd\u003e/axe/agents/` exists, axe searches it before the global config directory. A local agent with the same name as a global agent shadows the global one.\n\n```\nmy-project/\n└── axe/\n    └── agents/\n        └── my-agent.toml   ← found automatically\n```\n\n### Explicit Override\n\nUse `--agents-dir` to point to any directory:\n\n```bash\naxe run my-agent --agents-dir ./custom/agents\n```\n\nThis flag is available on all commands: `run`, `agents list`, `agents show`, `agents init`, `agents edit`, and `gc`.\n\n### Resolution Order\n\n1. `--agents-dir` (if provided)\n2. `\u003ccwd\u003e/axe/agents/` (auto-discovered)\n3. 
`$XDG_CONFIG_HOME/axe/agents/` (global fallback)\n\nThe first directory containing a matching `\u003cname\u003e.toml` wins.\n\n### Smart Scaffolding\n\n`axe agents init \u003cname\u003e` writes to `\u003ccwd\u003e/axe/agents/` if that directory already exists, otherwise falls back to the global config directory.\n\n## Providers\n\n| Provider | API Key Env Var | Default Base URL |\n|---|---|---|\n| Anthropic | `ANTHROPIC_API_KEY` | `https://api.anthropic.com` |\n| OpenAI | `OPENAI_API_KEY` | `https://api.openai.com` |\n| Ollama | (none required) | `http://localhost:11434` |\n| OpenCode | `OPENCODE_API_KEY` | Configurable |\n| AWS Bedrock | (uses AWS credentials) | Region-based |\n\n**AWS Bedrock Configuration:**\n- Region: Set via `AWS_REGION` environment variable or `[providers.bedrock] region = \"us-east-1\"` in config.toml\n- Credentials: Uses environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`) or `~/.aws/credentials` file (supports `AWS_PROFILE` and `AWS_SHARED_CREDENTIALS_FILE`)\n- Model IDs: Use full Bedrock model IDs (e.g., `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0`)\n\nBase URLs can be overridden with `AXE_\u003cPROVIDER\u003e_BASE_URL` environment variables or in `config.toml`.\n\n## License\n\nApache-2.0. See [LICENSE](LICENSE).\n","funding_links":[],"categories":["Agent Applications"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjrswab%2Faxe","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjrswab%2Faxe","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjrswab%2Faxe/lists"}