{"id":47700031,"url":"https://github.com/iyaki/ralph","last_synced_at":"2026-04-02T17:05:18.572Z","repository":{"id":341333933,"uuid":"1169611078","full_name":"iyaki/ralph","owner":"iyaki","description":"Ralph POSIX-compliant implementation. Agentic loop runner for spec-driven development, with configurable prompts","archived":false,"fork":false,"pushed_at":"2026-03-27T20:48:07.000Z","size":443,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-03-28T03:12:50.997Z","etag":null,"topics":["ai-developer-tools","software-development","spec-driven-development"],"latest_commit_sha":null,"homepage":"","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/iyaki.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-02-28T23:56:43.000Z","updated_at":"2026-03-25T13:56:41.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/iyaki/ralph","commit_stats":null,"previous_names":["iyaki/ralph"],"tags_count":2,"template":false,"template_full_name":null,"purl":"pkg:github/iyaki/ralph","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iyaki%2Fralph","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iyaki%2Fralph/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iyaki%2Fralph/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositori
es/iyaki%2Fralph/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/iyaki","download_url":"https://codeload.github.com/iyaki/ralph/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/iyaki%2Fralph/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31311087,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-02T12:59:32.332Z","status":"ssl_error","status_checked_at":"2026-04-02T12:54:48.875Z","response_time":89,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-developer-tools","software-development","spec-driven-development"],"created_at":"2026-04-02T17:05:13.968Z","updated_at":"2026-04-02T17:05:18.556Z","avatar_url":"https://github.com/iyaki.png","language":"Shell","readme":"# Ralph Wiggum Agentic Loop Runner\n\nA cross-platform AI agentic loop runner for spec-driven development workflows.\n\n## What Ralph Does\n\n- Runs iterative prompt loops against supported AI CLIs until a completion signal is produced.\n- Loads prompts from built-ins, prompt files, stdin, or inline text.\n- Applies deterministic configuration precedence across flags, environment variables, config files, local overlays, and prompt front matter.\n\n## Supported Agents\n\n- `opencode` (default)\n- `claude`\n- `cursor`\n- **Codex, Copilot, Gemini, 
and more agents coming soon**\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAdding support for new agents\u003c/strong\u003e\u003c/summary\u003e\n\n### Adding Support for New Agents\n\nPull requests adding agent support are always welcome. To add or update agent integrations, follow the workflow in [`CONTRIBUTING.md` (\"Adding Support for a New Agent\")](CONTRIBUTING.md#adding-support-for-a-new-agent):\n\n1. `agent-spec-creation` for spec definition\n2. `agent-implementation` for TDD-based code changes\n\n\u003c/details\u003e\n\n## Installation\n\n### Prerequisites\n\n- A supported agent CLI available in `PATH` (`opencode`, `claude`, or `cursor`)\n\n### Pre-built Binaries\n\nInstall the latest pre-built binary from [GitHub Releases](https://github.com/iyaki/ralph/releases):\n\n```bash\n# 1) Download the archive for your OS/architecture from:\n#    https://github.com/iyaki/ralph/releases/latest\n\n# 2) Extract and install the `ralph` binary\ntar -xzf ralph_\u003cversion\u003e_\u003cos\u003e_\u003carch\u003e.tar.gz\nsudo install -m 0755 ralph /usr/local/bin/ralph\n```\n\n### Dev Containers\n\nInstall Ralph in a [devcontainer](https://containers.dev/) using the feature `ghcr.io/iyaki/devcontainer-features/ralph:1`:\n\n```json\n{\n\t\"features\": {\n\t\t\"ghcr.io/iyaki/devcontainer-features/ralph:1\": {}\n\t}\n}\n```\n\nAdd this to your `.devcontainer/devcontainer.json`, then rebuild the container.\n\n### From Source\n\nRequires Go `1.25` (see `go.mod`).\n\n```bash\nmake build\n```\n\nBinary output path defaults to `bin/ralph`.\n\n### Install System-Wide\n\n```bash\nmake install\n```\n\nOr manually:\n\n```bash\nsudo install -m 0755 bin/ralph /usr/local/bin/ralph\n```\n\n### Initialize Config File\n\n```bash\nralph init\n```\n\n## Quick Start\n\nIf you are already doing spec-driven development:\n\n1. Run `ralph plan my-feature` to generate an implementation plan.\n2. 
Run `ralph` (defaults to `run build`) to start implementing the feature.\n\n## About the Ralph Wiggum Methodology\n\nThis implementation is based on the [Ralph Wiggum methodology](https://ghuntley.com/ralph/) pioneered by [Geoffrey Huntley](https://ghuntley.com/).\n\n**Core Principles:**\n\n- **Spec-driven development** - Requirements defined upfront in markdown specs\n- **Monolithic operation** - One agent, one task, one loop iteration at a time\n- **Fresh context** - Each iteration starts with a clean context window\n- **Backpressure** - Tests and validation provide immediate feedback (the architectural constraints of [Harness engineering](https://martinfowler.com/articles/exploring-gen-ai/harness-engineering.html))\n- **Let Ralph Ralph** - Trust the agent to self-correct through iteration\n- **Disposable plans** - Regenerate implementation plans when they go stale\n- **Simple loops** - Minimal Bash loops feeding prompts to AI agents\n\nThe methodology works in three phases:\n\n1. Define requirements through human and LLM conversations to create specs\n2. Run a gap analysis to generate or update implementation plans\n3. Build loops that implement one task at a time, commit, update the plan, and repeat until completion\n\nFor a comprehensive guide, see the [Ralph Playbook](https://github.com/ClaytonFarr/ralph-playbook).\n\nThis tool implements what has worked well for me, inspired by how Geoffrey worked on [Loom](https://github.com/ghuntley/loom).\n\nThis CLI tool alone is not enough to achieve good results. 
The quality of the prompts and specs, and the backpressure you set, will greatly influence the outcomes.\n\nIf you don't know where to start implementing backpressure, [lefthook](https://github.com/evilmartians/lefthook) is a great tool for setting pre-commit hooks.\n\n## Command Reference\n\nRalph exposes the root command and a `run` subcommand with shared behavior.\n\n```bash\nralph [subcommand] [options] [prompt] [scope]\n```\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eShow command examples\u003c/strong\u003e\u003c/summary\u003e\n\nExamples:\n\n```bash\n# Run with default build prompt\nralph # Equivalent to `ralph run build`\n\n# Run with plan prompt\nralph plan my-feature\n\n# Use custom max iterations\nralph --max-iterations 10 build\n\n# Use inline prompt\nralph --prompt \"Custom prompt text\"\n\n# Read prompt from stdin\necho \"prompt from stdin\" | ralph -\n\n# Use a specific agent and agent mode\nralph --agent claude --agent-mode planner\n\n# Override child agent environment variables\nralph --env OPENAI_API_KEY=\u003credacted\u003e --env HTTP_PROXY=http://127.0.0.1:8080 build\n\n# Show help\nralph --help\n```\n\n\u003c/details\u003e\n\n## Prompt Sources\n\nRalph resolves prompt content in this precedence order:\n\n- `--prompt` inline text (highest for prompt content)\n- `--prompt-file \u003cpath\u003e` (or `-` to read from stdin)\n- named prompt resolution (for example `build`, `plan`) from prompt directories\n\nFor markdown prompt files, YAML front matter supports runtime overrides for:\n\n- `model`\n- `agent-mode`\n\nFront matter is stripped from the prompt body before sending text to the agent process.\n\n## Configuration\n\nRalph supports flags, environment variables, and TOML config files.\n\n### General Precedence\n\n`flags \u003e env vars \u003e config file values \u003e defaults`\n\n### `model` / `agent-mode` Effective Precedence\n\n1. `--model` / `--agent-mode`\n2. `RALPH_MODEL` / `RALPH_AGENT_MODE`\n3. 
prompt file front matter (`model`, `agent-mode`)\n4. `[prompt-overrides.\u003cprompt\u003e]` from config\n5. global `model` / `agent-mode` in config\n6. defaults (empty)\n\n### Agent Process Environment Precedence\n\n1. inherited process environment (`os.Environ()`)\n2. config `[env]`\n3. repeated `--env KEY=VALUE` flags (highest)\n\nNotes:\n\n- `--env` splits on the first `=`.\n- Empty values are valid (`KEY=`).\n- Duplicate keys resolve by command-line order (last value wins).\n\n### Config File Selection and Local Overlay\n\nBase config selection order:\n\n1. `--config \u003cpath\u003e`\n2. `RALPH_CONFIG=\u003cpath\u003e`\n3. auto-discovery of `ralph.toml` in the current directory.\n\nIf a base config is selected and a sibling `ralph-local.toml` exists, it is merged over the base config.\n\n## Flags, Env Vars, and TOML Keys\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eShow full settings reference table\u003c/strong\u003e\u003c/summary\u003e\n\n| Setting                  | Flag                               | Env var                          | TOML key                   | Default                               |\n| ------------------------ | ---------------------------------- | -------------------------------- | -------------------------- | ------------------------------------- |\n| Config path              | `--config`, `-c`                   | `RALPH_CONFIG`                   | n/a                        | Auto-discover `ralph.toml` in cwd     |\n| Max iterations           | `--max-iterations`, `-m`           | `RALPH_MAX_ITERATIONS`           | `max-iterations`           | `25`                                  |\n| Prompt file path         | `--prompt-file`, `-p`              | n/a                              | `prompt-file`              | unset                                 |\n| Specs dir                | `--specs-dir`, `-s`                | `RALPH_SPECS_DIR`                | `specs-dir`                | `specs`                               
|\n| Specs index file         | `--specs-index`, `-i`              | `RALPH_SPECS_INDEX_FILE`         | `specs-index-file`         | `README.md`                           |\n| Disable specs index      | `--no-specs-index`                 | n/a                              | `no-specs-index`           | `false`                               |\n| Implementation plan name | `--implementation-plan-name`, `-n` | `RALPH_IMPLEMENTATION_PLAN_NAME` | `implementation-plan-name` | `IMPLEMENTATION_PLAN.md`              |\n| Inline custom prompt     | `--prompt`                         | `RALPH_CUSTOM_PROMPT`            | `custom-prompt`            | unset                                 |\n| Log file path            | `--log-file`, `-l`                 | `RALPH_LOG_FILE`                 | `log-file`                 | `\u003ccwd\u003e/ralph.log`                     |\n| Disable logging          | `--no-log`                         | `RALPH_LOG_ENABLED=0`            | `no-log`                   | disabled by default (`no-log = true`) |\n| Truncate log file        | `--log-truncate`                   | `RALPH_LOG_APPEND=0`             | `log-truncate`             | append mode (`log-truncate = false`)  |\n| Prompt templates dir     | none                               | `RALPH_PROMPTS_DIR`              | `prompts-dir`              | `$HOME/.ralph`                        |\n| Agent                    | `--agent`, `-a`                    | `RALPH_AGENT`                    | `agent`                    | `opencode`                            |\n| Model                    | `--model`                          | `RALPH_MODEL`                    | `model`                    | unset                                 |\n| Agent mode               | `--agent-mode`                     | `RALPH_AGENT_MODE`               | `agent-mode`               | unset                                 |\n| Agent env overrides      | `--env KEY=VALUE` (repeatable)     | n/a                              | 
`[env]`                    | inherited process env only            |\n\n\u003c/details\u003e\n\n## Configuration Examples\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eRepository defaults\u003c/strong\u003e\u003c/summary\u003e\n\nRepository baseline:\n\n```toml\n# ralph.toml\nagent = \"opencode\"\nmodel = \"gpt-5\"\nagent-mode = \"builder\"\n\nmax-iterations = 30\nspecs-dir = \"specs\"\nspecs-index-file = \"README.md\"\nimplementation-plan-name = \"IMPLEMENTATION_PLAN.md\"\n\nprompts-dir = \".ralph/prompts\"\n\nlog-file = \"logs/ralph.log\"\nno-log = true\nlog-truncate = false\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003ePer-prompt overrides\u003c/strong\u003e\u003c/summary\u003e\n\nPer-prompt overrides:\n\n```toml\n# ralph.toml\nmodel = \"gpt-5\"\n\n[prompt-overrides.plan]\nagent-mode = \"planner\"\n\n[prompt-overrides.custom-prompt-name]\nmodel = \"gpt-5.3-codex\"\nagent-mode = \"reviewer\"\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eLocal overlay (keep untracked)\u003c/strong\u003e\u003c/summary\u003e\n\nLocal overlay (keep untracked):\n\n```toml\n# ralph-local.toml\n[prompt-overrides.build]\nagent-mode = \"architect\"\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eChild agent env overrides\u003c/strong\u003e\u003c/summary\u003e\n\nChild agent env overrides:\n\n```toml\n# ralph.toml or ralph-local.toml\n[env]\nOPENAI_API_KEY = \"\u003credacted\u003e\"\nANTHROPIC_API_KEY = \"\u003credacted\u003e\"\nHTTP_PROXY = \"http://127.0.0.1:8080\"\n```\n\n```bash\nralph --config ./ralph.toml --env OPENAI_API_KEY=\u003credacted\u003e --env HTTP_PROXY=http://127.0.0.1:8080 build\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003ePrompt front matter override\u003c/strong\u003e\u003c/summary\u003e\n\nPrompt front matter override:\n\n```md\n---\nmodel: claude-sonnet-4\nagent-mode: 
planner\n---\n```\n\n\u003c/details\u003e\n\n## Spec Creator Skill\n\nThis repo includes the `spec-creator` [skill](https://agentskills.io/home) (see [.agents/skills/spec-creator/SKILL.md](.agents/skills/spec-creator/SKILL.md)) for use in the first phase of the Ralph Wiggum methodology (see [Ralph Methodology section](#about-the-ralph-wiggum-methodology)).\n\nTo install it using Vercel's skills CLI, run:\n\n```sh\nnpx skills add https://github.com/iyaki/ralph/ --skill spec-creator\n```\n\n## Contributing\n\n[See CONTRIBUTING.md](CONTRIBUTING.md)\n\n## License\n\n[MIT License](LICENSE.md)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fiyaki%2Fralph","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fiyaki%2Fralph","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fiyaki%2Fralph/lists"}