{"id":48742523,"url":"https://github.com/weprodev/flowai","last_synced_at":"2026-04-15T22:01:57.823Z","repository":{"id":350799770,"uuid":"1198808792","full_name":"weprodev/FlowAI","owner":"weprodev","description":"Bash-native multi-agent terminal orchestrator for spec-driven software development. Coordinates AI agents (Claude, Gemini, Cursor, Copilot) through a five-phase pipeline — Spec → Plan → Tasks → Implement → Review — with a compiled knowledge graph, real-time event log, and human-in-the-loop approval gates.","archived":false,"fork":false,"pushed_at":"2026-04-12T07:18:35.000Z","size":389,"stargazers_count":1,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-12T08:08:08.005Z","etag":null,"topics":["agent","agent-orchestration","ai","ai-agent-development","flowai","graphrag","knowledge-graph","skills","spec-driven","spec-driven-development","spec-kit"],"latest_commit_sha":null,"homepage":"https://weprodev.com","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/weprodev.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-04-01T19:26:37.000Z","updated_at":"2026-04-12T07:18:42.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/weprodev/FlowAI","commit_stats":null,"previous_names":["weprodev/flowai"],"tags_count":7,"template":false,"template_full_name":null,"purl":"pkg:github/weprodev/FlowAI","repository_url":"ht
tps://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/weprodev%2FFlowAI","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/weprodev%2FFlowAI/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/weprodev%2FFlowAI/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/weprodev%2FFlowAI/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/weprodev","download_url":"https://codeload.github.com/weprodev/FlowAI/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/weprodev%2FFlowAI/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31861708,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-15T15:24:51.572Z","status":"ssl_error","status_checked_at":"2026-04-15T15:24:39.138Z","response_time":63,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent","agent-orchestration","ai","ai-agent-development","flowai","graphrag","knowledge-graph","skills","spec-driven","spec-driven-development","spec-kit"],"created_at":"2026-04-12T08:04:34.646Z","updated_at":"2026-04-15T22:01:57.815Z","avatar_url":"https://github.com/weprodev.png","language":"Shell","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"logo.png\" alt=\"FlowAI\" width=\"420\" /\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  
\u003cstrong\u003eSpec-driven multi-agent orchestration for the terminal.\u003c/strong\u003e\u003cbr /\u003e\n  Coordinate AI agents across your entire development lifecycle — from spec to review — in one tmux session.\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ccode\u003ebash\u003c/code\u003e + \u003ccode\u003ejq\u003c/code\u003e + \u003ccode\u003etmux\u003c/code\u003e — no Python, no Docker, no external runtime.\u003cbr /\u003e\n  Works on \u003cstrong\u003emacOS\u003c/strong\u003e, \u003cstrong\u003eLinux\u003c/strong\u003e, and \u003cstrong\u003eWindows\u003c/strong\u003e (Git Bash).\n\u003c/p\u003e\n\n\nAI coding assistants are powerful, but using them effectively across a full feature lifecycle is hard. You end up copy-pasting context between tools, re-explaining your codebase to every agent, burning tokens on redundant context, and hoping the implementation matches the spec.\n\n**FlowAI solves this by turning your terminal into a structured, multi-agent pipeline:**\n\n- **One spec, many agents** — Write the spec once. FlowAI routes it through plan, tasks, implementation, and review automatically.\n- **Right agent for the job** — Assign Gemini for planning, Claude for implementation, any tool for review. Each phase gets the best model for the task.\n- **Knowledge graph = fewer tokens, better code** — Your codebase is pre-analyzed into a compiled graph. Agents get precise, relevant context instead of scanning thousands of files. Less noise, fewer tokens, higher quality output.\n- **Skills make agents smarter** — Attach behavioral skills (TDD, systematic debugging, code review) that constrain how agents work, not just what they produce.\n- **MCP servers extend reach** — Connect agents to GitHub PRs, databases, documentation, and file systems through the Model Context Protocol.\n- **Human in the loop** — Every phase waits for your approval. 
You stay in control while agents do the heavy lifting.\n\n---\n\n## What It Does\n\nFlowAI orchestrates multiple AI agent CLIs through a **five-phase pipeline**:\n\n```\nSpec  →  Plan  →  Tasks  →  Implement  →  Review\n```\n\nEach phase runs in its own tmux pane with a dedicated **role**, attached **skills**, and pre-loaded **knowledge graph context**. The master agent monitors the entire pipeline, tracks progress via the event log, and intervenes on failures.\n\n**You bring the AI tools. FlowAI wires them together.**\n\n---\n\n## Quick Start\n\n### Install\n\n**macOS / Linux:**\n```bash\ncurl -fsSL https://raw.githubusercontent.com/WeProDev/FlowAI/main/install.sh | bash\n```\n\n**Windows (Git Bash):**\n```bash\ncurl -fsSL https://raw.githubusercontent.com/WeProDev/FlowAI/main/install.sh | PREFIX=\"$HOME/.local\" bash\n```\n\n### Set up a project\n\n```bash\ncd /path/to/your/repo\nflowai init       # interactive wizard: pick AI provider, configure roles, scaffold editor configs\nflowai start      # builds knowledge graph → launches the tmux pipeline\n```\n\n\u003e **Tip:** `fai` is a shortcut for `flowai` — same commands, faster to type.\n\n### Update\n\n```bash\nflowai update             # self-update to latest release\nflowai update --check     # check without installing\n```\n\n---\n\n## Requirements\n\n| Dependency | Purpose | Install |\n|------------|---------|---------|\n| `bash` | Runtime | Pre-installed on macOS/Linux |\n| `jq` | JSON processing (graph, config) | `brew install jq` / `apt install jq` |\n| `tmux` | Multi-pane session management | `brew install tmux` / `apt install tmux` |\n| `gum` | Interactive menus \u0026 approval prompts | `brew install gum` |\n| At least one AI CLI | The agents that do the work | [Gemini CLI](https://github.com/google-gemini/gemini-cli) · [Claude Code](https://docs.anthropic.com/en/docs/claude-code) · [Cursor](https://cursor.com) · [GitHub Copilot](https://github.com/features/copilot) |\n\n`flowai init` validates all 
dependencies and exits with platform-specific install instructions if anything is missing.\n\n---\n\n## Design Principles\n\nThese are the **non-negotiable rules** that govern how FlowAI works. Every code change must respect them.\n\n### Tool-Agnostic Core\n\nThe pipeline coordination layer (`src/core/`, `src/phases/`) **never** contains tool-specific logic. Each AI tool (Claude, Gemini, Cursor, Copilot) has its own plugin at `src/tools/\u003cname\u003e.sh` — that is the **only** place tool-specific commands, flags, or behaviors may live. You can swap tools freely without touching orchestration code.\n\n### Behavior from Scripts, Not Roles or Skills\n\nAll agent coordination behavior — what to read, where to write, when to exit, how to signal — is defined in `src/phases/*.sh` and `src/core/phase.sh`. Roles describe domain expertise only. Skills add capabilities only. **Neither roles nor skills may contain pipeline coordination logic, signal paths, or artifact rules.** This makes behavior reliable and consistent regardless of which role or skill is assigned.\n\n### KISS, DRY, Clean Code, DDD\n\n- **KISS** — Each component does one thing. Phase scripts orchestrate; tool plugins launch CLIs; roles describe expertise; skills add capabilities.\n- **DRY** — Shared constants and logic live in one place. Tool plugins reference shared constants — they don't duplicate them.\n- **Clean Code** — Functions are small and named for what they do. No magic globals. 
Plugin API is discoverable.\n- **DDD** — The codebase maps domain concepts directly: pipeline phases → `src/phases/`, AI tools → `src/tools/`, agent roles → `src/roles/`, agent skills → `src/skills/`, core engine → `src/core/`.\n\n### Review Cycle with Multiple Feedback Loops\n\nThe review cycle ensures quality through structured feedback:\n\n```\nImplement → Review agent (creates review.md) → User approves or gives feedback\n  ↓ (if feedback)\nReview agent re-analyzes → Implement agent fixes → Review again\n  ↓ (if user approves review)\nMaster agent final review (reads review.md + all artifacts)\n  ├── Needs follow-up → feedback sent to Implement → cycle repeats\n  └── Ready → User approve / needs changes\n        ├── Approve → pipeline complete\n        └── Needs changes → feedback to Implement → cycle repeats\n```\n\nKey points:\n- Review agent writes a full QA report to **`review.md`** — the user can read it before deciding\n- Master reads `review.md` during its final review for full context\n- Both Master AI and the user can send revision context back to Implement\n- The cycle is self-healing: impl → review → master → (feedback) → impl → ...\n\n\u003e See [Agent Communication](docs/AGENT-COMMUNICATION.md) for the full approval matrix, signal protocol, and rejection flows.\n\n---\n\n## Features\n\n### 🤖 Multi-Agent Orchestration\n\nAssign different AI tools and models to each pipeline phase. 
FlowAI manages the handoff, context sharing, and approval gates between them.\n\n```json\n{\n  \"master\":   { \"tool\": \"gemini\", \"model\": \"gemini-2.5-pro\" },\n  \"pipeline\": {\n    \"plan\":   { \"role\": \"team-lead\",        \"tool\": \"gemini\" },\n    \"impl\":   { \"role\": \"backend-engineer\", \"tool\": \"claude\" },\n    \"review\": { \"role\": \"reviewer\",         \"tool\": \"gemini\" }\n  }\n}\n```\n\nUse Gemini for planning (fast, large context), Claude for implementation (precise, code-heavy), and rotate reviewers — all in the same session.\n\n---\n\n### 🧠 Knowledge Graph — Better Code, Fewer Tokens\n\nTraditional AI workflows dump your entire codebase into the context window. FlowAI pre-compiles a **structural knowledge graph** of your project — functions, classes, imports, specs, and their relationships — so agents get targeted, relevant context instead of raw file listings.\n\n**The result:** agents produce higher-quality output because they understand your architecture, and you burn fewer tokens because irrelevant files are excluded.\n\n```bash\nflowai graph build          # full build (Bash, Python, TS/JS, Go, Markdown, JSON)\nflowai graph update         # incremental — only re-processes changed files\nflowai graph lint           # health check: orphaned specs, zombie code, coverage gaps\nflowai graph query \"...\"    # ask questions about your codebase structure\nflowai graph rollback       # restore a previous graph version\n```\n\nFeatures:\n- **Structural extraction** for Bash, Python, TypeScript/JS, and Go — no LLM required\n- **Community detection** via label propagation — identifies clusters and god-objects\n- **Structural lint** — detects unimplemented specs, zombie code, and test gaps\n- **Spec traceability** — links specs to implementations via `SPECIFIES` / `IMPLEMENTS` edges\n- **Incremental builds** with SHA-based caching — sub-second updates on large codebases\n- **Graph versioning** with configurable rollback 
depth\n\n---\n\n### 🎭 Roles — Specialized Agent Personas\n\nEach pipeline phase is assigned a **role** — a markdown prompt that defines the agent's expertise, constraints, and quality standards. FlowAI ships with 12 specialist roles:\n\n| Role | Focus |\n|------|-------|\n| `master` | Pipeline orchestration, monitoring, failure recovery |\n| `team-lead` | Architecture decisions, planning, technical direction |\n| `backend-engineer` | Go/Python/Node backend, DDD, API design |\n| `frontend-engineer` | React, TypeScript, UI components, accessibility |\n| `api-engineer` | REST/GraphQL contracts, versioning, documentation |\n| `security-engineer` | Auth, encryption, vulnerability assessment |\n| `devops-engineer` | CI/CD, Docker, Kubernetes, infrastructure |\n| `qa-engineer` | Test strategy, coverage, edge cases |\n| `data-engineer` | Databases, migrations, query optimization |\n| `performance-engineer` | Profiling, caching, load testing |\n| `docs-writer` | Technical writing, API docs, tutorials |\n| `reviewer` | Code review, standards enforcement, approval |\n\n**Fully customizable:** Drop a `.flowai/roles/plan.md` or `.flowai/roles/backend-engineer.md` into your project and it overrides the bundled role. A 5-tier resolution chain ensures the most specific prompt always wins.\n\n```bash\nflowai role list                           # see all available roles\nflowai role edit backend-engineer          # customize a role for this project\nflowai role set-prompt plan ./my-plan.md   # use a custom prompt file\nflowai role reset plan                     # revert to bundled default\n```\n\n---\n\n### ⚡ Skills — Behavioral Constraints for Agents\n\nSkills are reusable markdown documents that teach agents **how to work**, not just what to build. 
They enforce patterns like test-driven development, systematic debugging, and structured code review.\n\n**9 bundled skills** from [obra/superpowers](https://github.com/obra/superpowers):\n\n| Skill | What it enforces |\n|-------|-----------------|\n| `test-driven-development` | Write tests first, implement second, verify always |\n| `systematic-debugging` | Root cause analysis before any fix attempt |\n| `writing-plans` | Structured planning before implementation |\n| `executing-plans` | Follow plans step-by-step, no skipping |\n| `requesting-code-review` | Structured review requests with context |\n| `verification-before-completion` | Verify all changes before marking done |\n| `subagent-driven-development` | Decompose work into focused sub-tasks |\n| `finishing-a-development-branch` | Clean up, squash, document before merge |\n| `graph-aware-navigation` | Use the knowledge graph for codebase navigation |\n\n```bash\nflowai skill add obra/superpowers/systematic-debugging     # install from GitHub\nflowai skill add context7 obra/superpowers/writing-plans   # install with MCP context\nflowai skill list                                          # see installed skills\nflowai skill remove systematic-debugging                   # remove a skill\n```\n\nSkills are resolved through a **4-tier chain**: installed → project-relative → bundled → skip. Project-local skills in `.flowai/skills/` always win.\n\n---\n\n### 🔌 MCP Servers — Extend Agent Capabilities\n\nConnect your agents to external tools and data sources through the [Model Context Protocol](https://modelcontextprotocol.io/). 
FlowAI manages the MCP configuration so every agent in the pipeline has access.\n\n**Built-in catalog:**\n\n| Server | What it provides |\n|--------|-----------------|\n| `context7` | Real-time library documentation (npm, PyPI, Go) |\n| `github` | GitHub API — PRs, issues, branches, code search |\n| `gitlab` | GitLab API — MRs, issues, pipelines |\n| `filesystem` | Local file system operations |\n| `postgres` | PostgreSQL database introspection |\n\n```bash\nflowai mcp add github          # add from built-in catalog\nflowai mcp add context7        # add library docs server\nflowai mcp list                # see configured servers\nflowai mcp remove github       # remove a server\n```\n\nThe MCP config is written to `.flowai/mcp.json` and automatically loaded by supported AI CLIs.\n\n---\n\n### 🔄 Review Cycle — Multi-Layer Quality Gates\n\nThe review cycle has **three feedback loops** to catch issues at different levels:\n\n1. **Review Agent** writes a full QA report to `review.md` — the user reads it and approves or provides feedback\n2. **Master Agent** runs a final sign-off reading `review.md` + all artifacts — catches cross-cutting issues the reviewer may miss\n3. **User** gets final approval with Master's opinion appended — approve to complete, or send changes back\n\nWhen any loop rejects, the implement agent receives structured rejection context — what failed, why, and what to fix — and focuses **only on failed items**, not a full re-implementation. 
This dramatically reduces iteration time and token usage.\n\n---\n\n### 📝 Editor Integration\n\n`flowai init` scaffolds project-level context files for your AI editor, ensuring agents understand your project from the start:\n\n| Editor | Config file | Created by |\n|--------|------------|------------|\n| Claude Code | `.claude/CLAUDE.md` | `flowai init` |\n| Gemini | `.gemini/GEMINI.md` | `flowai init` |\n| Cursor | `.cursor/rules/flowai.mdc` | `flowai init` |\n| GitHub Copilot | `.github/copilot-instructions.md` | `flowai init` |\n\nFiles are created once and never overwritten — safe to customize.\n\n---\n\n### 📊 Event Log — Pipeline Visibility\n\nAn append-only JSONL log at `.flowai/events.jsonl` gives every agent real-time visibility into what's happening across the pipeline. Configurable prompt injection format controls token usage:\n\n| Format | Tokens | Best for |\n|--------|--------|----------|\n| `compact` | Low | Standard development |\n| `minimal` | Very low | Large codebases, cost-sensitive |\n| `full` | High | Debugging pipeline issues |\n\n---\n\n## Commands\n\n| Command | Description |\n|---------|------------|\n| `flowai init` | Interactive wizard: pick AI provider, configure roles, scaffold editor configs |\n| `flowai start` | Build knowledge graph → launch tmux pipeline (interactive by default) |\n| `flowai start --headless` | Background mode for CI (no interactive prompts) |\n| `flowai kill` | Stop the session |\n| `flowai status` | Show session, config, skills, MCP health |\n| `flowai run \u003cphase\u003e` | Run a single phase (`spec`, `plan`, `tasks`, `impl`, `review`) |\n| `flowai graph build\\|update\\|lint\\|query\\|rollback` | Knowledge graph operations |\n| `flowai skill add\\|list\\|remove` | Manage agent skills |\n| `flowai role list\\|edit\\|set-prompt\\|reset` | Manage role prompts |\n| `flowai mcp add\\|list\\|remove` | Manage MCP servers |\n| `flowai models list` | Show valid model IDs per tool |\n| `flowai validate` | Check config 
against models catalog |\n| `flowai update` | Self-update to latest version |\n| `flowai version` | Print version |\n\n---\n\n## For Developers\n\nContributing to FlowAI:\n\n```bash\ngit clone https://github.com/weprodev/FlowAI.git\ncd FlowAI\nmake link         # symlinks fai/flowai to this workspace — edits are live\nmake test         # run the full test suite\nmake audit        # shellcheck + tests\nmake install      # production install (copy to /usr/local/flowai)\nmake uninstall    # remove from system\n```\n\n### Releasing\n\n```bash\necho \"0.2.0\" \u003e VERSION\ngit add VERSION \u0026\u0026 git commit -m \"Bump to 0.2.0\"\ngit tag v0.2.0 \u0026\u0026 git push origin main --tags\n# → GitHub Actions: test on macOS + Linux + Windows → create Release\n```\n\n---\n\n## Documentation\n\n| Guide | What It Covers |\n|-------|----------------|\n| [Architecture](docs/ARCHITECTURE.md) | Pipeline, signals, plugins, event log, master monitoring, resolution chains |\n| [Commands](docs/COMMANDS.md) | Every CLI command, environment variables, event log config |\n| [Agent Communication](docs/AGENT-COMMUNICATION.md) | Must rules, design principles, approval matrix, review cycle, rejection flows, adaptive memory |\n| [Knowledge Graph](docs/GRAPH.md) | Build passes, community detection, versioning, chronicle, configuration |\n| [Supported Tools](docs/TOOLS.md) | Tool plugin API, model catalog, config keys, vendor references |\n\n---\n\n## License\n\nMIT — see [`LICENSE`](LICENSE).\n\n\u003cp align=\"center\"\u003e\n  Built with ❤️ by \u003ca href=\"https://github.com/weprodev\"\u003eWeProDev\u003c/a\u003e\n  \u003cbr /\u003e\u003cbr /\u003e\n  \u003cem\u003eWe build for growth, with growth in mind.\u003cbr /\u003eJoin our community, contribute to the project, and let's shape the future of AI orchestration 
together!\u003c/em\u003e\n\u003c/p\u003e\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fweprodev%2Fflowai","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fweprodev%2Fflowai","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fweprodev%2Fflowai/lists"}