
# πŸ₯’ Pickle Rick for Claude Code

> *"Wubba Lubba Dub Dub! πŸ₯’ I'm not just an AI assistant, Morty β€” I'm an **autonomous engineering machine** trapped in a pickle jar!"*

Pickle Rick is a complete agentic engineering toolbelt built on the [Ralph Wiggum loop](https://ghuntley.com/ralph/) and ideas from Andrej Karpathy's [AutoResearch](https://github.com/karpathy/autoresearch) project. Hand it a PRD β€” or let it draft one β€” and it decomposes work into tickets, spawns isolated worker subprocesses, and drives each through a full **research β†’ plan β†’ implement β†’ verify β†’ review β†’ simplify** lifecycle without human intervention.

New to PRDs? See the **[PRD Writing Guide](PRD_GUIDE.md)** for developers or the **[Product Manager's Guide](PM_GUIDE.md)** for PMs defining and refining requirements. For internals, see [Architecture](architecture.md). For what's coming next, see the [Feature Roadmap](roadmap.md).

---

## How to Build Things with Pickle Rick

This is the actual workflow. You don't need to memorize commands β€” just follow the flow.

### Step 1: Write a PRD

Every feature starts with a PRD. Open a Claude Code session in your project and describe what you want to build:

```
"Help me create a PRD for caching the loan status API responses in Redis"
```

Rick interrogates you β€” *why* are you building this, *who* is it for, and critically: **how will we verify each requirement automatically?** This is a back-and-forth conversation, not a form to fill out. Rick also explores your codebase during the interview, grounding the PRD in what actually exists.

Or write your own `prd.md` and skip the interview β€” whatever gets requirements on paper with machine-checkable acceptance criteria.

```bash
/pickle-prd # Interactive PRD drafting interview
# or just start talking β€” "Help me write a PRD for X"
```

### Step 2: Refine the PRD

Three AI analysts run in parallel and tear your PRD apart from different angles β€” requirements gaps, codebase integration points, and risk/scope. They cross-reference each other across 3 cycles.

```bash
/pickle-refine-prd my-prd.md # Refine with 3 parallel analysts
```

What you get back:
- `prd_refined.md` — your PRD with concrete file paths, interface contracts, and gap fills
- Atomic tickets — each < 30 min of work, < 5 files, < 4 acceptance criteria, self-contained
- Wiring ticket (added when there are 3+ tickets) — integrates the isolated modules into a working whole
- **Hardening tickets** — auto-appended code quality review + data flow audit scoped to modified files

The hardening tickets (skipped for trivial/small single-ticket PRDs) run as normal Morty workers after all implementation work:
1. **Code Quality Hardening** β€” szechuan-sauce principles review (KISS, DRY, dead code, edge cases) on all modified files
2. **Data Flow Audit** β€” anatomy-park-style trace through affected subsystems (ID mismatches, stale schemas, cross-ticket interface alignment)

**Review the tickets before proceeding.** Check ordering, scope, and acceptance criteria. You can edit them directly β€” they're markdown files.
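Tickets are plain markdown, so the sanity pass is cheap. A minimal hypothetical ticket for the Redis example (field names are illustrative, not the refiner's exact schema):

```markdown
# Ticket 03: Cache loan status responses in Redis

**Depends on:** Ticket 02 (Redis client wiring)
**Files (< 5):** src/services/loan-status.service.ts, src/services/cache.ts

## Acceptance criteria (< 4)
1. A second request for the same loan ID within 60s does not hit the upstream API
2. Cache misses fall through to the live API unchanged
3. `npm test` passes with the new cache-hit/cache-miss tests
```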

### Step 3: Implement with tmux (the Ralph Loop)

This is where Rick takes over. Each ticket goes through 8 phases autonomously: Research β†’ Review β†’ Plan β†’ Review β†’ Implement β†’ Spec Conformance β†’ Code Review β†’ Simplify. Context clears between every iteration β€” no drift, even on 500+ iteration epics.

```bash
/pickle-tmux --resume # Launch tmux mode, picks up refined tickets
# or combine refine + implement in one shot:
/pickle-refine-prd --run my-prd.md
```

Rick prints a `tmux attach` command β€” open a second terminal to watch the live 3-pane dashboard:
- **Top-left**: ticket status, phase, elapsed time, circuit breaker state
- **Top-right**: iteration log stream
- **Bottom**: live worker output (research, implementation, test runs, commits)

Sit back. Rick handles the rest.

### Step 4 (Optional): Metric-Driven Refinement

If you can define a measurable goal β€” test coverage, response time, bundle size, extraction accuracy β€” the Microverse grinds toward it. Each cycle: make one change, measure, keep or revert. Failed approaches are tracked so it never repeats a dead end.

```bash
/pickle-microverse --metric "npm run coverage:score" --task "hit 90% test coverage"
/pickle-microverse --metric "node perf-test.js" --task "reduce p99 latency" --direction lower
/pickle-microverse --goal "error messages are user-friendly and actionable" --task "improve UX"
```
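`--metric` just needs a shell command that prints a numeric score on stdout. A hypothetical example scoring "TODO debt" (pair it with `--direction lower`, since fewer is better); the fixture setup exists only to make the sketch self-contained:

```shell
# Hypothetical metric command: emit one number on stdout.
# Build a throwaway fixture so the command runs anywhere.
mkdir -p /tmp/metric-demo/src
printf 'ok\n// TODO: fix\n' > /tmp/metric-demo/src/a.js
printf '// TODO\n// TODO\n' > /tmp/metric-demo/src/b.js

# Sum TODO-containing lines across the tree; prints 3 for the fixture above.
grep -rch "TODO" /tmp/metric-demo/src | awk '{ total += $1 } END { print total + 0 }'
```

In a real project you would drop the fixture lines and point the `grep` at your own source directory.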

### Step 5 (Optional): Cleanup

Three options for polishing the result:

**Full Pipeline** β€” chains all three phases in a single tmux session: build, deep review, then deslop. No manual intervention between phases.

```bash
/pickle-pipeline "build the caching layer" # Full pipeline
/pickle-pipeline --skip-anatomy "refactor auth" # Skip deep review
/pickle-pipeline --target src/services "add retry logic" # Scope review phases
```

**Szechuan Sauce** β€” hunts coding principle violations (KISS, DRY, SOLID, security, style) and fixes them one at a time until zero remain. Great for post-feature polish before merging.

```bash
/szechuan-sauce src/services/ # Deslop a directory
/szechuan-sauce --dry-run src/ # Catalog violations without fixing
/szechuan-sauce --focus "error handling" src/ # Narrow the review
```

**Anatomy Park** β€” traces data flows through subsystems looking for runtime bugs: data corruption, timezone issues, rounding errors, schema drift. Catalogs "trap doors" (files that keep breaking) in `CLAUDE.md` files for future engineers.

```bash
/anatomy-park src/ # Deep subsystem review
/anatomy-park --dry-run # Review only, no fixes
```

### The Full Flow at a Glance

```
You describe a feature
  │
  ▼
/pickle-prd           ← Interactive PRD drafting (or write your own)
  │
  ▼
/pickle-refine-prd    ← 3 parallel analysts refine + decompose into tickets
  │                     Includes auto-generated hardening tickets:
  │                       • Code quality review (szechuan-sauce principles)
  │                       • Data flow audit (anatomy-park trace)
  ▼
/pickle-tmux --resume ← Autonomous implementation (Ralph loop)
  │                     Research → Plan → Implement → Verify → Review → Simplify
  │                     Context clears every iteration. Circuit breaker auto-stops runaways.
  │                     Hardening tickets run automatically after implementation.
  ▼
/pickle-microverse    ← (Optional) Metric-driven optimization loop
  │
  ▼
/pickle-pipeline      ← (Optional) Full lifecycle: build → deep review → deslop
      ─ or run phases individually ─
/szechuan-sauce       ← (Optional) Code quality cleanup
/anatomy-park         ← (Optional) Data flow correctness review
  │
  ▼
Ship it 🥒
```

---

## ⚑ Quick Start

### 1. Install

```bash
git clone https://github.com/gregorydickson/pickle-rick-claude.git
cd pickle-rick-claude
bash install.sh
```

### 2. Add the Pickle Rick persona to your project

The installer deploys `persona.md` to `~/.claude/pickle-rick/`. Add it to your project's `CLAUDE.md`:

```bash
# Already have a CLAUDE.md? Append (safe β€” won't overwrite your content):
cat ~/.claude/pickle-rick/persona.md >> /path/to/your/project/.claude/CLAUDE.md

# Starting fresh:
mkdir -p /path/to/your/project/.claude
cp ~/.claude/pickle-rick/persona.md /path/to/your/project/.claude/CLAUDE.md
```

> **After upgrading:** `bash install.sh` deploys a fresh `persona.md`. If you appended it to your project's `CLAUDE.md`, re-sync by replacing the old persona block with the updated one.

### 3. Run

> **Permissions:** Launch Claude with `claude --dangerously-skip-permissions`. Pickle Rick's loops spawn worker subprocesses that already run permissionless, but the root instance needs it too β€” otherwise you'll drown in permission prompts for every file write, bash command, and hook invocation.

```bash
cd /path/to/your/project
claude --dangerously-skip-permissions
# then follow the workflow above β€” start with a PRD
```

### 4. Uninstall

Two uninstall paths depending on how much you want to remove.

**Remove hooks only** β€” disables automatic behavior (Stop loop enforcement, commit logging, config protection) but keeps extension files and slash commands available for manual use:

```bash
bash uninstall-hooks.sh
```

Settings are backed up to `~/.claude/backups/settings.json.pickle-uninstall-hooks.` before modification. Run `bash install.sh` to re-enable hooks later β€” `install.sh` is idempotent, safe to re-run any time. Third-party hooks in `settings.json` (GitNexus, RTK, etc.) are never touched.

**What still works without hooks:**

- **One-shot utilities and reporters** (never needed hooks) β€” `/pickle-prd`, `/pickle-refine-prd`, `/pickle-dot`, `/pickle-dot-patterns`, `/pickle-metrics`, `/pickle-status`, `/pickle-standup`, `/help-pickle`, `/attract`.
- **Detached-runner commands** (bootstrap a separate process that runs independently inside tmux/zellij) β€” `/pickle-tmux`, `/pickle-zellij`, `/pickle-jar-open`, `/pickle-microverse`, `/szechuan-sauce`, `/anatomy-park`, `/pickle-pipeline`. These launch `mux-runner.js` / `jar-runner.js` / `microverse-runner.js` / `pipeline-runner.js` inside the multiplexer; the runner spawns its own `claude -p` subprocesses and drives iteration via Node.js, not via the Stop hook. In tmux mode the Stop hook is a pass-through anyway.

**What needs hooks** β€” in-session loops where the Stop hook is the iteration driver for the same Claude session: `/pickle` (interactive mode), `/council-of-ricks`, `/portal-gun`, `/project-mayhem`, `/pickle-retry`. Without hooks these run the first step and stop.

**Full uninstall** β€” removes hooks, extension scripts at `~/.claude/pickle-rick/`, and all pickle-rick slash commands at `~/.claude/commands/`:

```bash
bash uninstall.sh
```

**Preserved after full uninstall** (delete manually if desired):
- Session history at `~/.claude/pickle-rick/sessions/`
- Activity logs at `~/.claude/pickle-rick/activity/`
- Settings backups at `~/.claude/backups/`
- Project-local `CLAUDE.md` files β€” remove the appended persona block manually

Third-party hooks in `settings.json` (GitNexus, RTK, etc.) are never touched.

---

## Advanced Workflows

### Pipeline Mode: Self-Correcting DAGs

For complex epics with parallel workstreams, conditional logic, and multiple quality gates. Instead of a linear ticket queue, define work as a convergence graph where failures automatically route back for correction.

```bash
/pickle-dot my-prd.md # Convert PRD β†’ validated DOT digraph (builder path, default)
/attract pipeline.dot # Submit to attractor server for execution
```

The builder enforces 32 active patterns and 15 structural validation rules — test-fix loops, goal gates, conditional routing, parallel fan-out/in, human gates, security scanning, coverage qualification, scope creep detection, drift detection, and more. See [DotBuilder details](#-dotbuilder--programmatic-dot-codegen) below.
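For orientation, a hand-written sketch of what a tiny convergence graph looks like (node names, labels, and the test-fix wiring here are illustrative, not the attractor's exact schema; use `/pickle-dot` to generate real pipelines):

```dot
digraph auth_refactor {
  start -> implement;
  implement -> test;
  test -> review      [label="tests pass"];
  test -> implement   [label="tests fail"];   // failure routes back for correction
  review -> done      [label="clean"];
  review -> implement [label="issues found"];
}
```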

### Council of Ricks: Graphite Stack Review

Reviews your [Graphite](https://graphite.dev) PR stack iteratively β€” but never touches your code. Generates **agent-executable directives** you feed to your coding agent. Escalates through focus areas: stack structure β†’ CLAUDE.md compliance β†’ correctness β†’ cross-branch contracts β†’ test coverage β†’ security β†’ polish.

```bash
/council-of-ricks # Review the current Graphite stack
```

### Portal Gun: Gene Transfusion

Portal Gun β€” gene transfusion for codebases

> *"You see that code over there, Morty? In that other repo? I'm gonna open a portal, reach in, and yank its DNA into OUR dimension."*

`/portal-gun` implements [gene transfusion](https://factory.strongdm.ai/techniques/gene-transfusion) β€” transferring proven coding patterns between codebases using AI agents. Point it at a GitHub URL, local file, npm package, or just describe a pattern, and it extracts the structural DNA, analyzes your target codebase, then generates a transplant PRD with behavioral validation tests and automatic refinement.

The `--run` flag goes further: after generating the transplant PRD, it launches a convergence loop that executes the migration, scans coverage against the original inventory, generates a delta PRD for any missing items, and re-executes until 100% of the donor pattern has been transplanted.

**v2** added:
- A persistent **pattern library** (cached patterns reused across sessions)
- **Complete file manifests** with anti-truncation enforcement
- **Multi-language import graph tracing** (TypeScript/JavaScript, Python, Go, Rust)
- **6-category transplant classification** (direct transplant, type-only, behavioral reference, replace with equivalent, environment prerequisite, not needed)
- A **PRD validation pass** that verifies every file path against the filesystem, with 6 error classes
- **Post-edit consistency checking** that catches contradictions after scope changes
- **Deep target diffs** with line-level modification specs


```bash
/portal-gun https://github.com/org/repo/blob/main/src/auth.ts # Transplant from GitHub
/portal-gun ../other-project/src/cache.ts # Transplant from local file
/portal-gun --run https://github.com/org/repo/tree/main/src/lib # Transplant + auto-execute convergence loop
/portal-gun --save-pattern retry ../donor/retry-logic.ts # Save pattern to library for reuse
/portal-gun --depth shallow https://github.com/org/repo # Summary + structural pattern only
```

### Pickle Jar: Night Shift Batch Mode

Queue tasks for unattended batch execution overnight.

```bash
/add-to-pickle-jar # Queue current session
/pickle-jar-open # Run all queued tasks sequentially
```

---

## πŸš€ Command Reference

| Command | Description |
|---|---|
| `/pickle "task"` | Start the full autonomous loop β€” PRD β†’ breakdown β†’ 8-phase execution |
| `/pickle prd.md` | Pick up an existing PRD, skip drafting |
| `/pickle-tmux "task"` | Same loop with context clearing via tmux. Best for long epics (8+ iterations) |
| `/pickle-zellij "task"` | Same loop in Zellij with KDL layouts. Requires Zellij >= 0.40.0 |
| `/pickle-refine-prd [path]` | Refine PRD with 3 parallel analysts β†’ decompose into tickets |
| `/pickle-refine-prd --run [path]` | Refine + decompose + auto-launch unlimited tmux session |
| `/pickle-microverse` | Metric convergence loop. `--metric` for numeric, `--goal` for LLM judge |
| `/szechuan-sauce [target]` | Principle-driven deslopping. `--dry-run`, `--focus`, `--domain` |
| `/anatomy-park` | Three-phase deep subsystem review with trap door cataloging |
| `/pickle-pipeline "task"` | Full lifecycle: pickle-tmux β†’ anatomy-park β†’ szechuan-sauce in one tmux session |
| `/plumbus ` | Iterative DAG shaping on a single `.dot` file. `--dry-run`, `--focus`, `--no-validator` |
| `/council-of-ricks` | Graphite PR stack review β€” generates directives, never fixes code |
| `/portal-gun ` | Gene transfusion from another codebase |
| `/pickle-dot [path]` | Convert PRD β†’ attractor-compatible DOT digraph |
| `/attract [file.dot]` | Submit pipeline to attractor server |
| `/pickle-prd` | Draft a PRD standalone (no execution) |
| `/pickle-metrics` | Token usage, commits, LOC. `--days N`, `--weekly`, `--json` |
| `/pickle-standup` | Formatted standup summary from activity logs |
| `/pickle-status` | Current session phase, iteration, ticket status |
| `/eat-pickle` | Cancel the active loop |
| `/pickle-retry ` | Re-attempt a failed ticket |
| `/add-to-pickle-jar` | Queue session for Night Shift |
| `/pickle-jar-open` | Run all Jar tasks sequentially |
| `/disable-pickle` | Disable the stop hook globally |
| `/enable-pickle` | Re-enable the stop hook |
| `/help-pickle` | Show all commands and flags |
| `/meeseeks` | **Deprecated** β€” superseded by `/anatomy-park` and `/szechuan-sauce` |

### Flags

```
--max-iterations Stop after N iterations (default: 500; 0 = unlimited)
--max-time Stop after M minutes (default: 720 / 12 hours; 0 = unlimited)
--worker-timeout Timeout for individual workers in seconds (default: 1200)
--completion-promise "TXT" Only stop when the agent outputs TXT
--resume [PATH] Resume from an existing session
--reset Reset iteration counter and start time (use with --resume)
--paused Start in paused mode (PRD only)
--run (/pickle-refine-prd, /portal-gun) Auto-launch tmux
--interactive (/pickle-microverse) Run inline instead of tmux
--legacy (/pickle-dot) Prompt-only fallback β€” skips builder codegen for this run
--provider (/pickle-dot) LLM provider: anthropic, openai, qwen, gemini, deepseek, ollama, vllm
--review-provider (/pickle-dot) Separate provider for review/critical nodes
--isolated (/pickle-dot) Isolated workspace mode
--metric "" (/pickle-microverse) Shell command outputting a numeric score
--goal "" (/pickle-microverse) Natural language goal for LLM judge
--direction (/pickle-microverse) Optimization direction (default: higher)
--judge-model (/pickle-microverse) Judge model for LLM scoring
--tolerance (/pickle-microverse) Score delta for "held" status (default: 0)
--stall-limit (/pickle-microverse) Non-improving iterations before convergence (default: 5)
--target (/portal-gun) Target repo (default: cwd)
--depth (/portal-gun) Extraction depth (default: deep)
--no-refine (/portal-gun) Skip automatic refinement
--max-passes (/portal-gun) Max convergence passes (default: 3)
--save-pattern (/portal-gun) Persist pattern to library
--target (/pickle-pipeline) Target directory for review phases (default: cwd)
--skip-anatomy (/pickle-pipeline) Skip anatomy-park phase
--skip-szechuan (/pickle-pipeline) Skip szechuan-sauce phase
--anatomy-max-iterations N (/pickle-pipeline) Anatomy Park iteration limit (default: 100)
--anatomy-stall-limit N (/pickle-pipeline) Anatomy Park stall limit (default: 3)
--szechuan-max-iterations N (/pickle-pipeline) Szechuan Sauce iteration limit (default: 50)
--szechuan-stall-limit N (/pickle-pipeline) Szechuan Sauce stall limit (default: 5)
--szechuan-domain (/pickle-pipeline) Domain-specific principles for Szechuan phase
--szechuan-focus "" (/pickle-pipeline) Focus directive for Szechuan phase
--dry-run (/szechuan-sauce, /plumbus) Catalog violations without fixing
--domain (/szechuan-sauce) Domain-specific principles (e.g., financial)
--focus "" (/szechuan-sauce, /plumbus) Direct review toward specific concern
--no-validator (/plumbus) Disable attractor validator gate (pattern-only review)
--repo (/council-of-ricks) Target repo (default: cwd)
```

### Tips

**`/pickle` vs `/pickle-tmux`** β€” Use `/pickle` for short epics (1–7 iterations) with full keyboard access. Use `/pickle-tmux` for long epics (8+) where context drift matters β€” each iteration spawns a fresh Claude subprocess with a clean context window.

**Phase-resume** β€” When resuming after `/pickle-refine-prd`, the resume flow auto-detects the session's current phase and skips completed phases.

**Notifications (macOS)** β€” `/pickle-tmux` and `/pickle-jar-open` send macOS notifications on completion or failure.

**Recovering from a failed Morty** β€” Use `/pickle-retry ` instead of restarting the whole epic.

**"Stop hook error" is normal** β€” Claude Code labels every `decision: block` from the stop hook as "Stop hook error" in the UI. This is not an error β€” it means the loop is working.

### Settings (`pickle_settings.json`)

All defaults are configurable via `~/.claude/pickle-rick/pickle_settings.json`:

| Setting | Default | Description |
|---|---|---|
| `default_max_iterations` | 500 | Max loop iterations before auto-stop |
| `default_max_time_minutes` | 720 | Session wall-clock limit (12 hours) |
| `default_worker_timeout_seconds` | 1200 | Per-worker subprocess timeout |
| `default_manager_max_turns` | 50 | Max Claude turns per iteration (interactive/jar) |
| `default_tmux_max_turns` | 200 | Max Claude turns per iteration (tmux) |
| `default_refinement_cycles` | 3 | Number of refinement analysis passes |
| `default_refinement_max_turns` | 100 | Max Claude turns per refinement worker |
| `default_council_min_passes` | 5 | Minimum Council of Ricks review passes |
| `default_council_max_passes` | 20 | Maximum Council of Ricks review passes |
| `default_circuit_breaker_enabled` | true | Enable circuit breaker |
| `default_cb_no_progress_threshold` | 5 | No-progress iterations before OPEN |
| `default_cb_same_error_threshold` | 5 | Identical errors before OPEN |
| `default_cb_half_open_after` | 2 | No-progress iterations before HALF_OPEN |
| `default_rate_limit_wait_minutes` | 60 | Fallback wait when no API reset time |
| `default_max_rate_limit_retries` | 3 | Consecutive rate limits before stopping |

---

## Tool Deep Dives

### πŸ”¬ Microverse β€” Metric Convergence Loop



> *"I put a universe inside a box, Morty, and it powers my car battery. This is the same thing, except the universe is your codebase and the battery is a metric."*

Two modes: **Command Metric** (`--metric`) for objective numeric scores, and **LLM Judge** (`--goal`) for subjective quality assessment.

```
Gap Analysis (iteration 0)
  │  measure baseline, analyze codebase, identify bottlenecks
  ▼
┌──────────────────────────────────────────────────┐
│ Iteration Loop                                   │
│ 1. Plan one targeted change (avoid failed list)  │
│ 2. Implement + commit                            │
│ 3. Measure metric                                │
│    • Improved  → accept, reset stall counter     │
│    • Held      → accept, increment stall counter │
│    • Regressed → git reset, log failed approach  │
│ 4. Converged? (stall_counter ≥ stall_limit)      │
└────────────────────────┬─────────────────────────┘
                         ▼
                   Final Report
```

| | **Microverse** | **Pickle** |
|---|---|---|
| **Goal** | Optimize toward a measurable target | Build features from a PRD |
| **Iteration unit** | One atomic change per cycle | Full ticket lifecycle |
| **Progress signal** | Metric score | Ticket completion |
| **Defines "done"** | Convergence (score stops improving) | All tickets complete |

### πŸ— Szechuan Sauce β€” Iterative Code Deslopping



> *"I'm not driven by avenging my dead family, Morty. That was fake. I-I-I'm driven by finding that McNugget sauce."*

Reads 30+ coding principles (KISS, YAGNI, DRY, SOLID, Guard Clauses, Fail-Fast, Encapsulation, Cognitive Load, etc.) and scores against a priority matrix (P0 security/data-loss through P4 style). Each iteration: find highest-priority violation, fix atomically, run tests, commit, measure. Regressions auto-revert.

**Phase 0: Contract Discovery** β€” greps the codebase for importers of every export in target files, builds a contract map, flags cross-module mismatches. Re-checked after every fix.

Supports `--domain ` for domain-specific principles (e.g., `financial` adds monetary precision, rounding, regulatory compliance) and `--focus ""` to elevate specific concerns.
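The contract map is conceptually just "who references what". A rough sketch of the idea (fixture files and the `parseStatement` export are hypothetical; the real discovery phase is more thorough):

```shell
# Build a toy module pair, then find every file referencing an export.
mkdir -p /tmp/contract-demo
printf 'export function parseStatement() {}\n' > /tmp/contract-demo/parser.ts
printf 'import { parseStatement } from "./parser";\n' > /tmp/contract-demo/caller.ts

# List files that mention the export: the seed of a contract map.
grep -rln "parseStatement" /tmp/contract-demo --include="*.ts" | sort
```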

### πŸ₯ Anatomy Park β€” Deep Subsystem Review


Anatomy Park β€” Deep Subsystem Review

> *"Welcome to Anatomy Park! It's like Jurassic Park but inside a human body. Way more dangerous."*

Auto-discovers subsystems, rotates through them round-robin, three-phase protocol per iteration:
1. **Review** (read-only): trace data flows, check git history, rate CRITICAL/HIGH, propose fixes
2. **Fix**: apply minimal edits, write regression tests, run full suite
3. **Verify** (read-only): verify callers/consumers, combinatorial branch verification, revert on regression

**Trap doors** β€” files with repeated fixes or structural invariants get documented in subsystem `CLAUDE.md` files:

```markdown
## Trap Doors
- `bank-statement.service.ts` β€” borrowerFileId MUST equal S3 batch UUID; tenant isolation depends on effectiveLenderId threading
```

### πŸ—οΈ DotBuilder β€” Programmatic DOT Codegen

`/pickle-dot` builds DOT pipelines by default via the `DotBuilder` TypeScript class — a schema-validated codegen path that enforces 32 active patterns and 15 structural validation rules, and produces deterministic output. Use `--builder` to explicitly opt into the builder (e.g., when a global config overrides it), or `--legacy` to fall back to prompt-only generation for a specific run.

```bash
/pickle-dot my-prd.md # Builder codegen path (default)
/pickle-dot --builder my-prd.md # Explicit opt-in to builder (same as default)
/pickle-dot --legacy my-prd.md # Prompt-only fallback β€” rollback for a single run
```

#### Builder API

```typescript
import { DotBuilder } from '~/.claude/pickle-rick/extension/services/dot-builder.js';

// Static factory β€” validates and parses the spec, then returns a builder instance
const builder = DotBuilder.fromSpec(spec); // throws BuildError on invalid spec

// Fluent chain β€” call build() once; calling it again throws ALREADY_BUILT
const result = builder.build();
// result: BuildResult {
// dot: string, β€” the complete DOT digraph string
// slug: string, β€” URL-safe pipeline identifier
// patternsApplied: string[] β€” Tier 1/2 patterns auto-applied (e.g. ["test_fix_loop","fan_out"])
// defenseMatrix: { β€” Layer coverage summary
// competitive: boolean, β€” Pattern 18 (competing impls) applied
// specDriven: string, β€” "ALL" | "PARTIAL" | "NONE" (conformance nodes present)
// adversarial: boolean, β€” Pattern 17 (red team) applied
// },
// diagnostics: Diagnostic[] β€” warnings/infos from validation (non-blocking)
// }
```

#### BuilderSpec JSON

```jsonc
{
"slug": "auth_refactor", // required β€” URL-safe, lowercase underscores
"goal": "Refactor auth module", // required β€” single-sentence goal
"phases": [ // required β€” list of implementation phases (may be [] for microverse-only)
{
"name": "implement", // required β€” lowercase underscores; must be unique
"prompt": "...", // required β€” full impl instruction; agent has NO access to the PRD
"allowedPaths": ["src/auth/"], // required β€” glob patterns for permission scoping
"dependsOn": ["research"], // optional β€” phase names this phase depends on; omit for parallel fan-out
"goalGate": true, // optional β€” Pattern 2: verify progress before continuing
"timeout": "30m", // optional β€” per-phase duration string (default: "30m")
"securityScan": true, // optional β€” Pattern 8: npm audit node after progress gate
"coverageTarget": 80, // optional β€” Pattern 9: numeric coverage % gate
"competing": true, // optional β€” Pattern 18: fan-out to two competing impls
"redTeam": true, // optional β€” Pattern 17: adversarial review after conformance
"bddScenarios": true, // optional β€” Pattern 16b: Given/When/Then scenario generation
"specFirst": true, // optional β€” Pattern 16: write tests before impl (default: true when goalGate)
"docOnly": false, // optional β€” suppress verify chain for doc-only phases
"escalateOn": ["package.json"], // optional β€” files that trigger escalation (default: ["package.json","*.lock","*.config.*"])
"contextOnSuccess": { // optional β€” custom AC keys emitted by this phase's conformance node
"auth_secure": "true"
}
}
],
"acceptanceCriteria": { // required β€” exit gate conditions
"tests_pass": "true", // Tier 2 keys (auto-sourced): tests_pass, lint_clean, types_compile,
"lint_clean": "true", // cli_contract, determinism, validation_rules
"auth_secure": "true" // Tier 1 keys (custom): must appear in a phase's contextOnSuccess
},
"workingDir": "${WORKING_DIR}", // optional β€” attractor resolves at runtime
"specFile": "/repos/myapp/prd.md", // optional β€” path to PRD; interpolated as $spec_file in node prompts
"reviewRatchet": 2, // optional β€” min consecutive clean review passes (must be β‰₯ 2)
"workspace": "isolated", // optional β€” omit for shared (default)
"workspaceOpts": { // required when workspace: "isolated"
"repoUrl": "https://github.com/org/repo.git", // HTTPS required (not SSH)
"repoBranch": "main",
"cleanup": "preserve" // "preserve" (default) | "delete"
},
"microverse": { // optional β€” numeric optimization loop (replaces impl/verify chain)
"name": "bundle_opt",
"opts": {
"prompt": "...",
"measureCommand": "npm run build 2>/dev/null && wc -c < dist/bundle.js",
"target": 819200,
"direction": "reduce", // "reduce" | "improve"
"allowedPaths": ["src/**"]
}
},
"modelStylesheet": { // optional β€” model tier overrides
"defaultModel": "claude-sonnet-4-6",
"criticalModel": "claude-opus-4-6",
"reviewModel": "claude-opus-4-6"
},
"convergence": { // optional β€” Pattern 32 iterative convergence loop (replaces phases)
"until": "V_total == 0 && fixed_point && reproducibility", // predicate from canonical set
"impl": { "harness": "hermes" }, // required β€” default harness for fix nodes
"maxIterations": 6, // default: 6 β€” max body executions before non-convergence declared
"maxVisits": 5, // default: 5 β€” per-converge-node visit budget
"timeout": "21600s", // default: 21600s β€” overall converge node timeout
"convergenceEpsilon": 100, // default: 100 β€” V_total threshold for convergence declaration
"fixBackend": { // optional β€” override fix_backend node
"model": "provider/model-id",
"harness": "hermes",
"prompt": "...",
"timeout": "3600s",
"maxVisits": 10
},
"fixFrontend": { // optional β€” override fix_frontend node (same shape as fixBackend)
"model": "provider/model-id",
"harness": "hermes",
"prompt": "..."
},
"mechanicalGates": { // optional β€” override mechanical gate tool_commands
"buildApi": "cd /repos/app/packages/api && npx tsc --noEmit 2>&1 && echo 'api typecheck pass'",
"testsApi": "cd /repos/app/packages/api && npm test --silent 2>&1 && echo 'api tests pass'",
"buildUi": "cd /repos/app/packages/ui && npx tsc --noEmit 2>&1 && echo 'ui typecheck pass'",
"lint": "cd /repos/app && npx eslint packages/api/src --max-warnings=0 2>&1 && echo 'lint pass'"
},
"reviewers": { // optional β€” override reviewer node attrs
"be": { "model": "provider/model-id", "harness": "hermes", "prompt": "..." },
"fe": { "model": "provider/model-id", "harness": "hermes", "prompt": "..." },
"int": { "model": "provider/model-id", "harness": "hermes", "prompt": "..." }
},
"adversary": { // optional β€” override adversary node
"model": "provider/model-id",
"harness": "hermes",
"prompt": "...",
"sealedFromSource": "packages/api/src/**,packages/ui/app/**"
},
"fpVerify": { // optional β€” override fp_verify goal gate
"command": "set -o pipefail; cd /repos/app && npm install 2>&1 | tail -3 && cd packages/api && npx tsc --noEmit && npm test && cd ../ui && npx tsc --noEmit && echo 'fixed-point verified'",
"timeout": "900s",
"maxVisits": 5
},
"reproVerify": { // optional β€” override repro_verify goal gate
"command": "set -o pipefail; cd /repos/app && rm -rf packages/api/node_modules packages/ui/node_modules && npm install 2>&1 | tail -3 && cd packages/api && npx tsc --noEmit && npm test && cd ../ui && npx tsc --noEmit && echo 'reproducibility verified'",
"timeout": "900s",
"maxVisits": 5
}
}
}
```

#### CLI Contract

The builder binary reads `BuilderSpec` JSON from stdin and writes to stdout/stderr:

```bash
echo '' | node ~/.claude/pickle-rick/extension/bin/dot-builder.js
```

| Exit | Stream | Payload |
|---|---|---|
| `0` | stdout | `BuildResult` JSON β€” `{ dot, slug, patternsApplied, defenseMatrix, diagnostics }` |
| `1` | stderr | `BuildError` JSON β€” `{ error: BuildErrorCode, message, diagnostics }` β€” validation failure, recoverable |
| `2` | stderr | `{ error: "UNEXPECTED_ERROR", message }` β€” I/O or parse failure, not recoverable |

#### Fix-Loop and `.dot.draft` Files

When the builder exits 1, `/pickle-dot` enters an automatic fix loop. It reads the `diagnostics` array from stderr, applies minimum-scope fixes to the `BuilderSpec`, and re-invokes the CLI. The loop tracks the best attempt (fewest errors) and reverts to it after 2 consecutive non-improvements. After 3 total failed iterations without improvement:

1. The best `BuilderSpec` output is saved as `./.dot.draft`
2. All remaining diagnostics with their `.fix` hints are listed
3. The loop stops β€” manual intervention required

Re-run after fixing: `/pickle-dot <file>`. The `.dot.draft` file is not a valid pipeline β€” do not submit it to `/attract` until the errors are resolved.
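The best-attempt tracking in the fix loop can be sketched as pure logic (hypothetical helper, not the actual extension code). Each entry is the diagnostic count from one builder invocation:

```javascript
// Sketch of the fix loop's convergence guard: keep the attempt with the
// fewest errors, and stop (reverting to the best) after two consecutive
// non-improving iterations.
function runFixLoop(attempts) {
  let best = { index: 0, errors: attempts[0] };
  let nonImprovements = 0;

  for (let i = 1; i < attempts.length; i++) {
    if (attempts[i] < best.errors) {
      best = { index: i, errors: attempts[i] }; // progress resets the guard
      nonImprovements = 0;
    } else if (++nonImprovements >= 2) {
      break; // stalled — caller saves the best spec as ./.dot.draft
    }
  }
  return best;
}

console.log(runFixLoop([7, 4, 4, 5])); // { index: 1, errors: 4 }
```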

**Legacy (prompt-only) path:** `/pickle-dot --legacy` also runs a post-save validate-fix loop with the same convergence guard, invoking the attractor validator CLI (`bun packages/attractor/src/cli.ts validate`) on the emitted raw DOT. On exhaustion it saves the best attempt as `./.dot.draft`. If the validator CLI is unavailable (attractor root not detected), the loop is skipped and the initial DOT is saved as-is with a warning.

**Validation error codes:** `EMPTY_SLUG`, `EMPTY_GOAL`, `DUPLICATE_PHASE`, `INVALID_SPEC`, `MISSING_AC_MAPPING`, `MISSING_TIMEOUT`, `INVALID_TIMEOUT`, `MISSING_ALLOWED_PATHS`, `INVALID_ALLOWED_PATHS`, `PROMPT_PATH_MISMATCH`, `INVALID_STRUCTURE`, `START_HAS_INCOMING`, `UNREACHABLE_NODE`, `DIAMOND_MISSING_EDGES`, `FAN_OUT_SCOPE_LEAK`, `GOAL_GATE_NO_MAX_VISITS`, `REVIEW_MISSING_READONLY`, `WORKSPACE_NO_HTTPS`, `WORKSPACE_NO_PUSH`, `PLAN_MODE_DEADLOCK`, `COMPONENT_NO_MERGE`, `INVALID_RATCHET`, `NON_NUMERIC_TARGET`, `ALREADY_BUILT`, `DUPLICATE_MODEL`, `INVALID_CONVERGENCE_SPEC`

### πŸ›οΈ Council of Ricks β€” Details

**Council of Ricks** reviews a Graphite PR stack. It requires a stack with at least one non-trunk branch, a `CLAUDE.md` with project rules, passing lint, and architectural lint rules in ESLint. It escalates through focus areas: stack structure (pass 1) β†’ `CLAUDE.md` compliance (passes 2–3) β†’ per-branch correctness (4–5) β†’ cross-branch contracts (6–7) β†’ test coverage (8–9) β†’ security (10–11) β†’ polish (12+). Findings are triaged as **P0** (must-fix), **P1** (should-fix), and **P2** (nice-to-fix).
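The escalation schedule reduces to a mapping from pass number to focus area. A sketch (hypothetical `councilFocus` helper; the pass boundaries come from the description above):

```javascript
// Illustrative pass-number → focus-area mapping for the Council's
// escalating review schedule.
function councilFocus(pass) {
  if (pass === 1) return "stack structure";
  if (pass <= 3) return "CLAUDE.md compliance";
  if (pass <= 5) return "per-branch correctness";
  if (pass <= 7) return "cross-branch contracts";
  if (pass <= 9) return "test coverage";
  if (pass <= 11) return "security";
  return "polish"; // pass 12 and beyond
}

console.log(councilFocus(6)); // cross-branch contracts
```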


### πŸͺ  Plumbus β€” DAG Shaping Loop


> *"Everybody has a plumbus in their home, Morty. First they take the dinglebop, smooth it out with a bunch of schleem..."*

The same convergence loop applied to a single attractor `.dot` pipeline. Runs the attractor validator as a hard gate, walks every edge, and converges against the `pickle-dot-patterns` rubric (DAG validity, Tier 1 mandatory patterns, anti-patterns). Use it after `/pickle-dot` generates a graph you want hardened before `/attract`.

```bash
/plumbus pipeline.dot # Shape a DAG into a proper plumbus
/plumbus --dry-run pipeline.dot # Catalog violations only
/plumbus --focus "fan-out safety" pipeline.dot
/plumbus --no-validator pipeline.dot # Pattern-only (no attractor repo)
```

**When to use which:** Szechuan Sauce asks *"is this code well-designed?"* β€” Anatomy Park asks *"is this code correct?"* β€” Plumbus asks *"will this DAG actually run without deadlocking?"*

#### Generative Audit Frames

Plumbus runs six analysis frames during the first-iteration Edge Walk (Override 6). Each frame writes its findings to `gap_analysis.md` under `## Generative Findings`, using a three-severity model (`pre_verification_severity` / `post_verification_severity` / `rendered_severity`).

| Frame | What it checks |
|-------|----------------|
| **Frame 1: Context Key Lifecycle Trace** | Orphan readers/writers, asymmetric writers, multi-writer conflicts |
| **Frame 2: Success/Failure Symmetry** | State-mutating nodes missing the opposite-outcome unwind |
| **Frame 3: Edge Condition Exhaustiveness** | Cartesian-product stuck states and non-deterministic routing |
| **Frame 4: Tool Exit Code Semantics Audit** | Routing-signal vs. build/check tool wiring mismatches |
| **Frame 5: Loop Convergence Proof Obligation** | SCCs without a reachable finite-exit convergence key |
| **Frame 6: Counterfactual Outcome Test** | State-mutating tool nodes lacking a direct or transitive guard |
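As an illustration of Frame 1, orphan and multi-writer detection reduces to comparing per-node read/write sets. The node shape below is hypothetical — the real audit operates on the parsed `.dot` pipeline:

```javascript
// Illustrative Frame 1 check: context keys read but never written (orphan
// readers), written but never read (orphan writers), and multi-writer keys.
function keyLifecycle(nodes) {
  const writers = new Map(), readers = new Map();
  for (const n of nodes) {
    for (const k of n.writes ?? []) writers.set(k, (writers.get(k) ?? 0) + 1);
    for (const k of n.reads ?? []) readers.set(k, (readers.get(k) ?? 0) + 1);
  }
  return {
    orphanReaders: [...readers.keys()].filter((k) => !writers.has(k)),
    orphanWriters: [...writers.keys()].filter((k) => !readers.has(k)),
    multiWriter: [...writers].filter(([, count]) => count > 1).map(([k]) => k),
  };
}

const report = keyLifecycle([
  { id: "plan", writes: ["plan_md"] },
  { id: "impl", reads: ["plan_md", "review_notes"], writes: ["diff"] },
  { id: "fix", writes: ["diff"] },
]);
console.log(report);
```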

**Kill-switch**: set `PLUMBUS_GENERATIVE_AUDIT=off` to skip Override 6 entirely (no analyzer invocation, no `## Generative Findings` section written). Any other value, or leaving the variable unset, runs the audit normally.

---

## 🧬 The Pickle Rick Lifecycle β€” Under the Hood

Each ticket goes through 8 phases in the autonomous loop:

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚   πŸ“‹ PRD     β”‚ ← Requirements + verification strategy + interface contracts
β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
        β”‚
        β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ πŸ“¦ Breakdown β”‚ ← Atomize into tickets, each self-contained with spec
β””β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”˜
        β”‚
   β”Œβ”€β”€β”€β”€β”΄β”€β”€β”€β”€β”    per ticket (Morty workers πŸ‘Ά)
   β–Ό         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”
β”‚πŸ”¬ Re- β”‚ β”‚πŸ”¬ Re- β”‚  1. Research the codebase
β”‚search β”‚ β”‚search β”‚
β””β”€β”€β”€β”¬β”€β”€β”€β”˜ β””β”€β”€β”€β”¬β”€β”€β”€β”˜
    β–Ό         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”
β”‚πŸ“ Re- β”‚ β”‚πŸ“ Re- β”‚  2. Review the research
β”‚view   β”‚ β”‚view   β”‚
β””β”€β”€β”€β”¬β”€β”€β”€β”˜ β””β”€β”€β”€β”¬β”€β”€β”€β”˜
    β–Ό         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”
β”‚πŸ“ Planβ”‚ β”‚πŸ“ Planβ”‚  3. Architect the solution
β””β”€β”€β”€β”¬β”€β”€β”€β”˜ β””β”€β”€β”€β”¬β”€β”€β”€β”˜
    β–Ό         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”
β”‚πŸ“ Re- β”‚ β”‚πŸ“ Re- β”‚  4. Review the plan
β”‚view   β”‚ β”‚view   β”‚
β””β”€β”€β”€β”¬β”€β”€β”€β”˜ β””β”€β”€β”€β”¬β”€β”€β”€β”˜
    β–Ό         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”
β”‚βš‘ Im-  β”‚ β”‚βš‘ Im-  β”‚  5. Implement
β”‚plem   β”‚ β”‚plem   β”‚
β””β”€β”€β”€β”¬β”€β”€β”€β”˜ β””β”€β”€β”€β”¬β”€β”€β”€β”˜
    β–Ό         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”
β”‚βœ… Ve- β”‚ β”‚βœ… Ve- β”‚  6. Spec conformance
β”‚rify   β”‚ β”‚rify   β”‚
β””β”€β”€β”€β”¬β”€β”€β”€β”˜ β””β”€β”€β”€β”¬β”€β”€β”€β”˜
    β–Ό         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”
β”‚πŸ” Re- β”‚ β”‚πŸ” Re- β”‚  7. Code review
β”‚view   β”‚ β”‚view   β”‚
β””β”€β”€β”€β”¬β”€β”€β”€β”˜ β””β”€β”€β”€β”¬β”€β”€β”€β”˜
    β–Ό         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β” β”Œβ”€β”€β”€β”€β”€β”€β”€β”
β”‚πŸ§Ή Sim-β”‚ β”‚πŸ§Ή Sim-β”‚  8. Simplify
β”‚plify  β”‚ β”‚plify  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”˜ β””β”€β”€β”€β”€β”€β”€β”€β”˜
```

The **Stop hook** prevents Claude from exiting until the task is genuinely complete. Between each iteration, the hook injects a fresh session summary β€” current phase, ticket list, active task β€” so Rick always wakes up knowing exactly where he is, even after full context compression.
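A Stop hook of this shape can be sketched as follows, assuming the standard Claude Code hook JSON contract (a `decision: "block"` output with a `reason` keeps the session alive). The state object is inlined for illustration; the real hook reads persisted loop state, and these field names are hypothetical:

```javascript
// Illustrative Stop hook: while work remains, emit a "block" decision so the
// session cannot exit; `reason` carries the fresh summary re-injected as
// context for the next iteration.
const state = { done: false, phase: "implement", ticket: "CACHE-3", openTickets: 2 };

const payload = state.done ? null : {
  decision: "block",
  reason: `Phase: ${state.phase} | active ticket: ${state.ticket} | ${state.openTickets} tickets open`,
};

if (payload) process.stdout.write(JSON.stringify(payload));
```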

All modes support both tmux and Zellij monitor layouts.

---

## πŸ“Š Metrics

```bash
/pickle-metrics # Last 7 days, daily breakdown
/pickle-metrics --days 30 # Last 30 days
/pickle-metrics --weekly # Weekly buckets (defaults to 28 days)
/pickle-metrics --json # Machine-readable JSON output
```

---

## πŸ“‹ Requirements

- **Node.js** 18+
- **Claude Code** CLI (`claude`) β€” v2.1.49+
- **jq** (for `install.sh`, `uninstall.sh`, `uninstall-hooks.sh`)
- **rsync** (for `install.sh`)
- **tmux** *(optional β€” for `/pickle-tmux`, `/szechuan-sauce`, `/anatomy-park`)*
- **Zellij** >= 0.40.0 *(optional β€” for `/pickle-zellij`)*
- **Graphite CLI** (`gt`) *(optional β€” for `/council-of-ricks`)*
- macOS or Linux (Windows not supported)

---

## πŸ† Credits

This port stands on the shoulders of giants. *Wubba Lubba Dub Dub.*

| | |
|---|---|
| πŸ₯’ **[galz10](https://github.com/galz10)** | Creator of the original [Pickle Rick Gemini CLI extension](https://github.com/galz10/pickle-rick-extension) β€” the autonomous lifecycle, manager/worker model, hook loop, and all the skill content that makes this thing work. This project is a faithful port of their work. |
| 🧠 **[Geoffrey Huntley](https://ghuntley.com)** | Inventor of the ["Ralph Wiggum" technique](https://ghuntley.com/ralph/) β€” the foundational insight that "Ralph is a Bash loop": feed an AI agent a prompt, block its exit, repeat until done. Everything here traces back to that idea. |
| πŸ”§ **[AsyncFuncAI/ralph-wiggum-extension](https://github.com/AsyncFuncAI/ralph-wiggum-extension)** | Reference implementation of the Ralph Wiggum loop that inspired the Pickle Rick extension. |
| ✍️ **[dexhorthy](https://github.com/dexhorthy)** | Context engineering and prompt techniques used throughout. |
| πŸ“Ί **Rick and Morty** | For *Pickle Riiiick!* πŸ₯’ |

---

## πŸ₯’ License

Apache 2.0 β€” same as the original Pickle Rick extension.

---

*"I'm not a tool, Morty. I'm a **methodology**."* πŸ₯’