https://github.com/breezewish/codexpotter
Ralph loop for codex – continuously reconciles your codebase toward your instructed state
- Host: GitHub
- URL: https://github.com/breezewish/codexpotter
- Owner: breezewish
- License: apache-2.0
- Created: 2026-01-28T17:13:14.000Z (3 months ago)
- Default Branch: main
- Last Pushed: 2026-03-07T14:48:25.000Z (about 2 months ago)
- Last Synced: 2026-03-07T14:58:23.055Z (about 2 months ago)
- Topics: codex, codex-cli, gpt, openai, ralph, ralph-loop, ralph-wiggum
- Language: Rust
- Homepage:
- Size: 1.33 MB
- Stars: 9
- Watchers: 0
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Agents: AGENTS.md
README
## Why CodexPotter
[npm](https://www.npmjs.com/package/codex-potter) · [CI](https://github.com/breezewish/CodexPotter/actions/workflows/ci.yml) · [License](./LICENSE) · [linux.do](https://linux.do)
**CodexPotter** continuously **reconciles** your codebase toward your instructed state ([Ralph Wiggum pattern](https://ghuntley.com/ralph/)):
- **Codex-first** – a Codex subscription is all you need; no extra LLM required.
- **Auto-review / reconcile** – reviews and polishes over multiple rounds until fully aligned with your instruction.
- **Clean-room** – uses a clean context in each round to avoid context poisoning and maximize IQ.
- **Attention is all you need** – keeps you focused on _crafting_ tasks instead of _cleaning up_ unfinished work.
- **Never worse than Codex** – drives Codex and nothing more; no extra baked-in prompts that may not suit you.
- **Seamless integration** – AGENTS.md, skills & MCPs just work™; opt in to improve plan / review.
- **File system as memory** – stores instructions in files to resist compaction and preserve every detail.
- **Tiny footprint** – the system prompt uses [<1k tokens](./cli/prompts/developer_prompt.md), so the LLM context fully serves your business logic.
- **Built-in knowledge base** – keeps a local KB as an index so Codex learns the project fast in clean contexts.
## How does it work
```plain
Your prompt:
  Simplify the query engine by following ...
                                                     │
                                                     ▼
     codex: Work or review according to MAIN.md      │
   ┌───────────────────────────┐                     │
   │                           ▼                     ▼
┌──┴──────────┐        ┌─────────────┐        ┌─────────────┐
│ CodexPotter │        │    codex    │───────▶│   MAIN.md   │
└──▲──────────┘        └──────┬──────┘        └─────────────┘
   │                          │
   │      Work finished       │
   └──────────────────────────┘
```
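As a rough sketch, the reconcile loop behaves like the shell snippet below. This is purely illustrative: CodexPotter's actual loop is implemented in Rust, and `run_round` here is a hypothetical stand-in for one clean-context codex invocation.

```shell
# Illustrative sketch only; CodexPotter's real loop is implemented in
# Rust. `run_round` is a hypothetical stand-in for one clean-context
# codex run ("Work or review according to MAIN.md").
run_round() {
  echo "round $1: reconciling toward the state instructed in MAIN.md"
}

round=1
max_rounds=3   # the real loop runs until review passes, not a fixed count
while [ "$round" -le "$max_rounds" ]; do
  run_round "$round"       # each round starts with no memory of the last
  round=$((round + 1))
done
```

Since each round starts fresh, everything that must survive between rounds lives in MAIN.md rather than in the model's context.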
## Getting started
**1. Prerequisites:** ensure you have the [codex CLI](https://developers.openai.com/codex/quickstart?setup=cli) installed locally. CodexPotter drives your local codex to perform tasks.
**2. Install CodexPotter via npm or bun:**
```shell
# Install via npm
npm install -g codex-potter
```
```shell
# Install via bun
bun install -g codex-potter
```
**3. Run:** Start CodexPotter in your project directory, just like Codex:
```sh
# --yolo is recommended for fully autonomous runs
codex-potter --yolo
```
⚠️ **Note:** unlike Codex, every follow-up prompt becomes a **new** task that does **not share previous context**. Assign tasks to CodexPotter instead of chatting with it.
⚠️ **Note:** CodexPotter is **not a replacement** for codex: it is a loop executor that runs tasks rather than chatting with you. See below for details.
## Tips
### Prompt Examples
**✅ Tasks with clear goals or scopes:**
- "port upstream codex's /resume into this project, keep code aligned"
**✅ Tasks that persist results to review in later rounds:**
- "create a design doc for ... **in DESIGN.md**"
**❌ Interactive tasks with human feedback loops:**
CodexPotter is not suited to such tasks; use codex instead:
- Front-end development with human UI feedback
- Question-answering
- Brainstorming sessions
### Howto
**Ask follow-up questions in codex**
Just pass the project file to codex, like:
```plain
based on .codexpotter/projects/2026/03/18/1/MAIN.md,
please explain more about the root cause of the issue
```
**Plan and execute**
Simply queue two tasks in CodexPotter, one to plan and one to implement; CodexPotter will execute them one by one. For example:
Task prompt 1 (CodexPotter):
```plain
Analyze the codebase, research and design a solution for introducing subscription system.
Output plan to docs/subscription_design.md.
Your solution should meet the following requirements: ...
Do not implement the plan, just design a good and simple solution.
```
Your existing facilities for writing good plans will be used, including skills, plan-doc principles in AGENTS.md, etc. **Writing the plan to a file is CRITICAL** so that the plan can be iterated over multiple rounds and task 2 can pick it up.
Task prompt 2 (CodexPotter):
```plain
Implement according to docs/subscription_design.md
Make sure all user journeys are properly covered by e2e tests and pass.
```
If you don't yet know what you are designing, first discuss with **codex** to draft a basic plan, then use **CodexPotter** to continuously polish and implement it.
## Configuration
- [Config File](./docs/config.md)
- [Hooks](./docs/hooks.md)
## Other Features
- `--xmodel` (experimental): use gpt-5.2 first, then have gpt-5.5 cross-review gpt-5.2's work in later rounds. On well-defined coding tasks this may produce better results than using only gpt-5.2 or only gpt-5.5.
- `/yolo`: Toggle whether YOLO (no sandbox) is enabled by default for all sessions.
- `/list` or `ctrl+l`: View all projects (tasks) and their results.
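The `--xmodel` schedule above can be roughly sketched as follows (illustrative only; the model names come from the flag description, and the actual round scheduling is internal to CodexPotter):

```shell
# Rough sketch of the --xmodel schedule: the first round does the
# work with one model, later rounds cross-review with the other.
model="gpt-5.2"          # round 1: do the work
for round in 1 2 3; do
  echo "round $round: $model"
  model="gpt-5.5"        # rounds 2+: cross-review the previous output
done
```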
## Roadmap
- [x] Skill popup
- [x] Resume (history replay + continue iterating)
- [x] Better handling of stream disconnect / similar network issues
- [x] Agent-call friendly (non-interactive exec and resume)
- [x] Interoperability with codex CLI sessions (for follow-up prompts)
- [ ] Better plan / user selection support
- [ ] Better sandbox support
## Development
```sh
# Formatting
cargo fmt
# Lints
cargo clippy
# Tests
cargo nextest run
# Build
cargo build
```
## Community & License
- This project is a community-driven fork of the [openai/codex](https://github.com/openai/codex) repository, licensed under the same Apache-2.0 License.