https://github.com/federiconeri/wiggum-cli
AI agent that plugs into any codebase — scans your stack, generates specs through AI interviews, and runs Ralph loops.
- Host: GitHub
- URL: https://github.com/federiconeri/wiggum-cli
- Owner: federiconeri
- License: other
- Created: 2026-01-19T21:01:50.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2026-03-03T16:23:50.000Z (21 days ago)
- Last Synced: 2026-03-03T18:30:17.217Z (21 days ago)
- Topics: ai, ai-agent, autonomous-coding, claude-code, cli, codex, developer-tools, gemini-cli, loops, open-source, opencode, ralph-loop, spec-generation, typescript
- Language: TypeScript
- Homepage: https://wiggum.app
- Size: 2.34 MB
- Stars: 3
- Watchers: 0
- Forks: 0
- Open Issues: 52
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
# README
Plug into any codebase. Generate specs. Ship features while you sleep.
Quick Start · How It Works · Website · Blog · Pricing · Issues
---
## What is Wiggum?
Wiggum is an **AI agent** that plugs into any codebase and makes it ready for autonomous feature development — no configuration, no boilerplate.
It works in two phases. First, **Wiggum itself is the agent**: it scans your project, detects your stack, and runs an AI-guided interview to produce detailed specs, prompts, and scripts — all tailored to your codebase. Then it delegates the actual coding to [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or any CLI-based coding agent, running an autonomous **implement → test → fix** loop until the feature ships.
Plug & play. Point it at a repo. It figures out the rest.
```
Wiggum (agent) Coding Agent
┌────────────────────────────┐ ┌────────────────────┐
│ │ │ │
│ Scan ──▶ Interview ──▶ Spec ──▶ Run loops │
│ detect AI-guided .ralph/ implement │
│ 80+ tech questions specs test + fix │
│ plug&play prompts guides until done │
│ │ │ │
└────────────────────────────┘ └────────────────────┘
runs in your terminal Claude Code / any agent
```
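The second phase's implement → test → fix cycle can be sketched as a plain shell loop. This is a conceptual illustration, not Wiggum's actual `feature-loop.sh`; `agent_step` and `run_tests` are placeholder functions standing in for the coding agent and your test command:

```shell
MAX_ITERATIONS=10

agent_step() {      # placeholder: invoke the coding agent with the spec
  echo "agent: implementing iteration $1"
}

run_tests() {       # placeholder: your project's test command;
  [ "$1" -ge 3 ]    # here we pretend tests go green on the 3rd iteration
}

i=1
while [ "$i" -le "$MAX_ITERATIONS" ]; do
  agent_step "$i"
  if run_tests "$i"; then
    echo "tests green after $i iteration(s)"
    break
  fi
  i=$((i + 1))      # tests failed: loop again so the agent can fix them
done
```

The iteration cap is what keeps the loop autonomous but bounded: the agent either converges on passing tests or stops after `MAX_ITERATIONS` attempts.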
---
## 🚀 Quick Start
```bash
npm install -g wiggum-cli
```
Then, in your project:
```bash
wiggum init # Scan project, configure AI provider
wiggum new user-auth # AI interview → feature spec
wiggum run user-auth # Autonomous coding loop
```
Or skip the global install:
```bash
npx wiggum-cli init
```
---
## ⚡ Features
🔍 **Smart Detection** — Auto-detects 80+ technologies: frameworks, databases, ORMs, testing tools, deployment targets, MCP servers, and more.
🎙️ **AI-Guided Interviews** — Generates detailed, project-aware feature specs through a structured 4-phase interview. No more blank-page problem.
🔁 **Autonomous Coding Loops** — Hands specs to Claude Code (or any agent) and runs implement → test → fix cycles with git worktree isolation.
✨ **Spec Autocomplete** — AI pre-fills spec names from your codebase context when running `/run`.
📥 **Action Inbox** — Review AI decisions inline without breaking your flow. The loop pauses, you approve or redirect, it continues.
📊 **Run Summaries** — See exactly what changed and why after each loop completes, with activity feed and diff stats.
📋 **Tailored Prompts** — Generates prompts, guides, and scripts specific to your stack. Not generic templates — actual context about *your* project.
🔌 **BYOK** — Bring your own API keys. Works with Anthropic, OpenAI, or OpenRouter. Keys stay local, never leave your machine.
🖥️ **Interactive TUI** — Full terminal interface with persistent session state. No flags to remember.
---
## 🎯 How It Works
### 1. Scan
```bash
wiggum init
```
Wiggum reads your `package.json`, config files, source tree, and directory structure. A multi-agent AI system then analyzes the results:
1. **Planning Orchestrator** — creates an analysis plan based on detected stack
2. **Parallel Workers** — Context Enricher explores code while Tech Researchers gather best practices
3. **Synthesis** — merges results, detects relevant MCP servers
4. **Evaluator-Optimizer** — QA loop that validates and refines the output
Output: a `.ralph/` directory with configuration, prompts, guides, and scripts — all tuned to your project.
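The fan-out in steps 1–3 can be sketched with plain shell job control. Function names here are illustrative, not Wiggum internals, and the evaluator-optimizer QA pass is omitted for brevity; the point is that the two workers run concurrently and synthesis waits for both:

```shell
# Conceptual pipeline: plan, fan out to parallel workers, then synthesize.
plan()       { echo "plan: stack detected, analysis plan ready"; }
enrich()     { echo "worker: context enrichment done"; }
research()   { echo "worker: tech research done"; }
synthesize() { echo "synthesis: merged worker output"; }

plan
enrich &          # Context Enricher runs in the background...
research &        # ...in parallel with the Tech Researchers
wait              # Synthesis starts only after both workers finish
synthesize
```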
### 2. Spec
```bash
wiggum new payment-flow
```
An AI-guided interview walks you through:
| Phase | What happens |
|-------|-------------|
| **Context** | Share reference URLs, docs, or files |
| **Goals** | Describe what you want to build |
| **Interview** | AI asks 3–5 clarifying questions |
| **Generation** | Produces a detailed feature spec in `.ralph/specs/` |
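The exact layout of a generated spec isn't shown here, but as an illustrative sketch, `wiggum new payment-flow` might leave something like this in `.ralph/specs/` (the section names are hypothetical):

```markdown
<!-- .ralph/specs/payment-flow.md (illustrative skeleton, not Wiggum's exact output) -->
# Feature: Payment Flow

## Context
- Reference shared during the Context phase: payment provider docs

## Goal
Let users pay for a subscription with a card.

## Requirements
- [ ] Checkout page wired to the existing router
- [ ] Webhook handler for payment events

## Acceptance Criteria
- All existing tests pass; new tests cover the webhook handler
```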
### 3. Loop
```bash
wiggum run payment-flow
```
Wiggum hands the spec + prompts + project context to your coding agent and runs an autonomous loop:
```
implement → run tests → fix failures → repeat
```
Supports git worktree isolation (`--worktree`) for running multiple features in parallel.
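The isolation itself is standard `git worktree`: each feature gets its own working directory on its own branch, so parallel loops never touch each other's files. A minimal standalone demonstration (the repo, paths, and branch names below are illustrative, not Wiggum conventions):

```shell
set -e
# Throwaway repo just to demonstrate the mechanism
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.email=a@b.c -c user.name=demo commit -q --allow-empty -m init

# One worktree per feature: separate directory, separate branch
git worktree add ../demo-payment-flow -b feature/payment-flow >/dev/null
git worktree add ../demo-user-auth -b feature/user-auth >/dev/null

git worktree list   # main checkout plus the two isolated feature trees
wt_count=$(git worktree list | wc -l | tr -d ' ')
```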
---
## 🖥️ Interactive Mode
Running `wiggum` with no arguments opens the TUI — the recommended way to use Wiggum:
```bash
$ wiggum
```
| Command | Alias | Description |
|---------|-------|-------------|
| `/init` | `/i` | Scan project, configure AI provider |
| `/new <feature>` | `/n` | AI interview → feature spec |
| `/run <feature>` | `/r` | Run autonomous coding loop |
| `/monitor <feature>` | `/m` | Monitor a running feature |
| `/sync` | `/s` | Re-scan project, update context |
| `/help` | `/h` | Show commands |
| `/exit` | `/q` | Exit |
---
## 📁 Generated Files
```
.ralph/
├── ralph.config.cjs # Stack detection results + loop config
├── prompts/
│ ├── PROMPT.md # Implementation prompt
│ ├── PROMPT_feature.md # Feature planning
│ ├── PROMPT_e2e.md # E2E testing
│ ├── PROMPT_verify.md # Verification
│ ├── PROMPT_review_manual.md # PR review (manual - stop at PR)
│ ├── PROMPT_review_auto.md # PR review (auto - review, no merge)
│ └── PROMPT_review_merge.md # PR review (merge - review + auto-merge)
├── guides/
│ ├── AGENTS.md # Agent instructions (CLAUDE.md)
│ ├── FRONTEND.md # Frontend patterns
│ ├── SECURITY.md # Security guidelines
│ └── PERFORMANCE.md # Performance patterns
├── scripts/
│ └── feature-loop.sh # Main loop script
├── specs/
│ └── _example.md # Example spec template
└── LEARNINGS.md # Accumulated project learnings
```
---
## 🔧 CLI Reference
### wiggum init [options]
Scan the project, detect the tech stack, generate configuration.
| Flag | Description |
|------|-------------|
| `--provider <provider>` | AI provider: `anthropic`, `openai`, `openrouter` (default: `anthropic`) |
| `-i, --interactive` | Stay in interactive mode after init |
| `-y, --yes` | Accept defaults, skip confirmations |
### wiggum new <feature> [options]
Create a feature specification via AI-powered interview.
| Flag | Description |
|------|-------------|
| `--ai` | Use AI interview (default in TUI mode) |
| `--provider <provider>` | AI provider for spec generation |
| `--model <model>` | Model to use |
| `-e, --edit` | Open in editor after creation |
| `-f, --force` | Overwrite existing spec |
### wiggum run <feature> [options]
Run the autonomous development loop.
| Flag | Description |
|------|-------------|
| `--worktree` | Git worktree isolation (parallel features) |
| `--resume` | Resume an interrupted loop |
| `--model <model>` | Claude model (`opus`, `sonnet`) |
| `--max-iterations <n>` | Max iterations (default: 10) |
| `--max-e2e-attempts <n>` | Max E2E retries (default: 5) |
| `--review-mode <mode>` | `manual` (stop at PR), `auto` (review, no merge), or `merge` (review + merge). Default: `manual` |
### wiggum monitor <feature> [options]
Track feature development progress in real-time.
| Flag | Description |
|------|-------------|
| `--interval <n>` | Refresh interval (default: 5) |
| `--bash` | Use bash monitor script |
---
## 🔌 AI Providers
Wiggum requires an API key from one of these providers:
| Provider | Environment Variable |
|----------|---------------------|
| Anthropic | `ANTHROPIC_API_KEY` |
| OpenAI | `OPENAI_API_KEY` |
| OpenRouter | `OPENROUTER_API_KEY` |
Optional services for deeper analysis:
| Service | Variable | Purpose |
|---------|----------|---------|
| [Tavily](https://tavily.com) | `TAVILY_API_KEY` | Web search for current best practices |
| [Context7](https://context7.com) | `CONTEXT7_API_KEY` | Up-to-date documentation lookup |
Keys are stored in `.ralph/.env.local` and never leave your machine.
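As a sketch, a populated `.ralph/.env.local` might look like the fragment below. The values are placeholders; never commit real keys:

```shell
# .ralph/.env.local (illustrative; values are placeholders)
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx   # or OPENAI_API_KEY / OPENROUTER_API_KEY

# Optional services for deeper analysis
TAVILY_API_KEY=tvly-xxxxxxxx
CONTEXT7_API_KEY=xxxxxxxx
```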
---
## 🔍 Detection Capabilities (80+ technologies)
| Category | Technologies |
|----------|-------------|
| **Frameworks** | Next.js (App/Pages Router), React, Vue, Nuxt, Svelte, SvelteKit, Remix, Astro |
| **Package Managers** | npm, yarn, pnpm, bun |
| **Testing** | Jest, Vitest, Playwright, Cypress |
| **Styling** | Tailwind CSS, CSS Modules, Styled Components, Emotion, Sass |
| **Databases** | PostgreSQL, MySQL, SQLite, MongoDB, Redis |
| **ORMs** | Prisma, Drizzle, TypeORM, Mongoose, Kysely |
| **APIs** | REST, GraphQL, tRPC, OpenAPI |
| **State** | Zustand, Jotai, Redux, Pinia, Recoil, MobX, Valtio |
| **UI Libraries** | shadcn/ui, Radix, Material UI, Chakra UI, Ant Design, Headless UI |
| **Auth** | NextAuth.js, Clerk, Auth0, Supabase Auth, Lucia, Better Auth |
| **Analytics** | PostHog, Mixpanel, Amplitude, Google Analytics, Plausible |
| **Payments** | Stripe, Paddle, LemonSqueezy |
| **Email** | Resend, SendGrid, Postmark, Mailgun |
| **Deployment** | Vercel, Netlify, Railway, Fly.io, Docker, AWS |
| **Monorepos** | Turborepo, Nx, Lerna, pnpm workspaces |
| **MCP** | Detects MCP server/client configs, recommends servers based on stack |
---
## 📋 Requirements
- **Node.js** >= 18.0.0
- **Git** (for worktree features)
- An AI provider API key (Anthropic, OpenAI, or OpenRouter)
- [Claude Code](https://docs.anthropic.com/en/docs/claude-code) or another coding agent (for `wiggum run`)
---
## 🤝 Contributing
Contributions welcome! See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines.
```bash
git clone https://github.com/federiconeri/wiggum-cli.git
cd wiggum-cli
npm install
npm run build
npm test
```
---
## 📖 Learn More
- [What Is Wiggum CLI?](https://wiggum.app/blog/what-is-wiggum-cli) — Overview of the autonomous coding agent
- [What Is the Ralph Loop?](https://wiggum.app/blog/what-is-the-ralph-loop) — Deep dive into the Ralph loop methodology
- [Wiggum vs Bash Scripts](https://wiggum.app/blog/wiggum-vs-ralph-wiggum-scripts) — Why spec generation matters
- [Roadmap](https://wiggum.app/roadmap) — What's coming next
- [Changelog](https://wiggum.app/changelog) — Release history
---
## 📄 License
**MIT + Commons Clause** — see [LICENSE](LICENSE).
You can use, modify, and distribute Wiggum freely. You may **not** sell the software or a service whose value derives substantially from Wiggum's functionality.
---
Built on the Ralph loop technique by Geoffrey Huntley