https://github.com/breezewish/codexpotter
Ralph loop for codex: continuously reconciles your codebase toward your instructed state
- Host: GitHub
- URL: https://github.com/breezewish/codexpotter
- Owner: breezewish
- License: apache-2.0
- Created: 2026-01-28T17:13:14.000Z (about 2 months ago)
- Default Branch: main
- Last Pushed: 2026-03-07T14:48:25.000Z (20 days ago)
- Last Synced: 2026-03-07T14:58:23.055Z (20 days ago)
- Topics: codex, codex-cli, gpt, openai, ralph, ralph-loop, ralph-wiggum
- Language: Rust
- Homepage:
- Size: 1.33 MB
- Stars: 9
- Watchers: 0
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
- Agents: AGENTS.md
README
## Why CodexPotter
**CodexPotter** continuously **reconciles** your codebase toward your instructed state (the [Ralph Wiggum pattern](https://ghuntley.com/ralph/)):
- **Codex-first:** a Codex subscription is all you need; no extra LLM required.
- **Auto-review / reconcile:** reviews and polishes over multiple rounds until the result is fully aligned with your instruction.
- **Clean-room:** uses a fresh context in each round, avoiding context poisoning and keeping the model at full capability.
- **Attention is all you need:** keeps you focused on _crafting_ tasks instead of _cleaning up_ unfinished work.
- **Never worse than Codex:** drives Codex and nothing more; no opinionated extra prompts that may not suit your project.
- **Seamless integration:** AGENTS.md, skills, and MCPs just work™; opt in to improve planning / review.
- **File system as memory:** stores instructions in files to resist compaction and preserve every detail.
- **Tiny footprint:** uses [<1k tokens](./cli/prompts/developer_prompt.md) of its own, so the LLM context fully serves your business logic.
- **Built-in knowledge base:** keeps a local knowledge base as an index so Codex learns the project fast in clean contexts.
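The reconcile loop described above can be sketched in plain shell. This is a simplified illustration, not CodexPotter's actual implementation: `codex exec` is the Codex CLI's non-interactive mode, while the round handling and the `reconcile_loop` function name are hypothetical.

```shell
# Hypothetical sketch of a Ralph-style reconcile loop: each round re-reads
# the full instruction from a file and runs Codex with a clean context.
reconcile_loop() {
  local prompt_file="$1" max_rounds="${2:-5}" round
  for round in $(seq 1 "$max_rounds"); do
    echo "round $round: reconciling toward $prompt_file"
    # A fresh codex invocation per round avoids context poisoning; the
    # file system (the prompt file and the repo itself) is the only memory.
    codex exec "$(cat "$prompt_file")" || return 1
  done
}
```

The point of the sketch is that no conversational state carries over between rounds: everything Codex needs to know must live in files, which is why the instruction is read from `prompt_file` on every iteration.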
## Getting started
**1. Prerequisites:** ensure you have the [codex CLI](https://developers.openai.com/codex/quickstart?setup=cli) installed locally. CodexPotter drives your local codex to perform tasks.
**2. Install CodexPotter via npm or bun:**
```shell
# Install via npm
npm install -g codex-potter
```
```shell
# Install via bun
bun install -g codex-potter
```
**3. Run:** Start CodexPotter in your project directory, just like Codex:
```sh
# --yolo is recommended for fully autonomous operation
codex-potter --yolo
```
⚠️ **Note:** Unlike Codex, every follow-up prompt becomes a **new** task that does **not** share previous context. Assign tasks to CodexPotter instead of chatting with it.
### Prompting tips
**✅ Tasks with a clear goal or scope:**
- "port upstream codex's /resume into this project, keep code aligned"
**✅ Tasks that persist results for review in later rounds:**
- "create a design doc for ... **in DESIGN.md**"
**❌ Interactive tasks with human feedback loops:**
CodexPotter is not suitable for tasks such as:
- Front-end development with human UI feedback
- Question-answering
- Brainstorming sessions
## Roadmap
- [x] Skill popup
- [x] Resume (history replay + continue iterating)
- [x] Better handling of stream disconnect / similar network issues
- [x] Agent-call friendly (non-interactive exec and resume)
- [ ] Better plan / user selection support
- [ ] Better sandbox support
- [ ] Interoperability with codex CLI sessions (for follow-up prompts)
- [ ] Allow opting out of the knowledge base
- [ ] Recommended skills for PRD and code review
## Development
```sh
# Formatting
cargo fmt
# Lints
cargo clippy
# Tests
cargo nextest run
# Build
cargo build
```
## License
This project is a community-driven fork of the [openai/codex](https://github.com/openai/codex) repository, licensed under the same Apache-2.0 license.