{"id":47594579,"url":"https://github.com/roberto-mello/lavra","last_synced_at":"2026-04-08T04:01:10.185Z","repository":{"id":337344087,"uuid":"1153107623","full_name":"roberto-mello/lavra","owner":"roberto-mello","description":"A plugin with compound engineering workflows and memory for AI coding agents","archived":false,"fork":false,"pushed_at":"2026-03-30T02:16:13.000Z","size":9403,"stargazers_count":34,"open_issues_count":2,"forks_count":1,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-01T21:31:21.245Z","etag":null,"topics":["agents","ai","beads","claude-code","coding","cortex-code","gemini-cli","multi-agent","opencode","workflows"],"latest_commit_sha":null,"homepage":"https://lavra.dev","language":"Shell","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/roberto-mello.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2026-02-08T22:43:27.000Z","updated_at":"2026-04-01T16:55:51.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/roberto-mello/lavra","commit_stats":null,"previous_names":["roberto-mello/beads-compound-plugin"],"tags_count":14,"template":false,"template_full_name":null,"purl":"pkg:github/roberto-mello/lavra","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/roberto-mello%2Flavra","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/roberto-mello%2Flavra/tags","releases_url":"https://repos.ecosyste
.ms/api/v1/hosts/GitHub/repositories/roberto-mello%2Flavra/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/roberto-mello%2Flavra/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/roberto-mello","download_url":"https://codeload.github.com/roberto-mello/lavra/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/roberto-mello%2Flavra/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31539229,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-07T16:28:08.000Z","status":"online","status_checked_at":"2026-04-08T02:00:06.127Z","response_time":54,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agents","ai","beads","claude-code","coding","cortex-code","gemini-cli","multi-agent","opencode","workflows"],"created_at":"2026-04-01T17:57:26.553Z","updated_at":"2026-04-08T04:01:10.179Z","avatar_url":"https://github.com/roberto-mello.png","language":"Shell","readme":"# Lavra (/ˈla.vɾɐ/ — Portuguese for \"harvest\")\n\n[![License](https://img.shields.io/github/license/roberto-mello/lavra)](LICENSE)\n[![Release](https://img.shields.io/github/v/release/roberto-mello/lavra)](https://github.com/roberto-mello/lavra/releases)\n[![Beads CLI](https://img.shields.io/badge/requires-beads%20CLI-blue)](https://github.com/steveyegge/beads)\n\n**Lavra turns your AI coding agent into a development team 
that gets smarter with every task.**\n\nA plugin for coding agents that orchestrates the full development lifecycle -- from brainstorming to shipping -- while automatically capturing and recalling knowledge so each unit of work makes the next one easier.\n\n[![Claude Code](https://img.shields.io/badge/Claude_Code-cc785c?logo=anthropic\u0026logoColor=white)](https://docs.anthropic.com/en/docs/claude-code)\n[![OpenCode](https://img.shields.io/badge/OpenCode-000000?logo=data:image/svg+xml;base64,PHN2ZyB4bWxucz0iaHR0cDovL3d3dy53My5vcmcvMjAwMC9zdmciIHdpZHRoPSIyNCIgaGVpZ2h0PSIyNCIgdmlld0JveD0iMCAwIDI0IDI0IiBmaWxsPSJub25lIiBzdHJva2U9IndoaXRlIiBzdHJva2Utd2lkdGg9IjIiPjxwb2x5bGluZSBwb2ludHM9IjE2IDE4IDIyIDEyIDE2IDYiLz48cG9seWxpbmUgcG9pbnRzPSI4IDYgMiAxMiA4IDE4Ii8+PC9zdmc+\u0026logoColor=white)](https://opencode.ai/)\n[![Gemini CLI](https://img.shields.io/badge/Gemini_CLI-4285F4?logo=google\u0026logoColor=white)](https://github.com/google-gemini/gemini-cli)\n[![Cortex Code](https://img.shields.io/badge/Cortex_Code-29B5E8?logo=snowflake\u0026logoColor=white)](https://www.snowflake.com/en/product/features/cortex-code/)\n\n**[Quick Start](https://lavra.dev/docs/quickstart)** | **[Full Catalog](https://lavra.dev/docs/catalog)** | **[Architecture](https://lavra.dev/docs/architecture)** | **[Security](https://lavra.dev/docs/security)** | **[Command Map](https://lavra.dev/command-map)** | **[v0.7.1 Release Notes](https://lavra.dev/docs/releases/v0.7.1)**\n\n\u003ctable\u003e\n\u003ctr\u003e\n\u003ctd width=\"65%\"\u003e\n\n### Without Lavra\n\n- The agent forgets everything between sessions -- you re-explain context every time\n- Planning is shallow: it jumps to code before thinking through the problem\n- Review is inconsistent: sometimes thorough, sometimes a rubber stamp\n- Knowledge stays in your head. 
When a teammate hits the same bug, they start from zero\n- Shipping is manual: you run tests, create the PR, close tickets, push -- every time\n\n\u003c/td\u003e\n\u003ctd width=\"35%\" align=\"center\"\u003e\n\u003cimg src=\"site/src/assets/forgeftul-agent.jpg\" alt=\"Forgetful agent surrounded by scattered documents\" width=\"280\"\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/table\u003e\n\n\u003ctable\u003e\n\u003ctr\u003e\n\u003ctd width=\"65%\"\u003e\n\n### With Lavra\n\n- **Automatic memory.** Knowledge is captured inline during work and recalled automatically at the start of every session. Hit the same OAuth bug next month? The agent already knows the fix.\n- **Structured planning.** Brainstorm with scope sharpening, research with domain-matched agents, adversarial plan review -- all before a single line of code is written.\n- **Disciplined execution.** Agents follow deviation rules (what to auto-fix vs. escalate), commit per task with traceable bead IDs, and verify every success criterion before marking work done.\n- **Built-in quality gates.** Every implementation runs through a review-fix-learn loop. Knowledge capture is mandatory, not optional.\n- **Team-shareable knowledge.** Memory lives in `.lavra/memory/knowledge.jsonl`, tracked by git. Your team compounds knowledge together.\n\n\u003c/td\u003e\n\u003ctd width=\"35%\" align=\"center\"\u003e\n\u003cimg src=\"site/src/assets/agent-with-memory.jpg\" alt=\"Agent with memory climbing stairs of compounding knowledge\" width=\"280\"\u003e\n\u003c/td\u003e\n\u003c/tr\u003e\n\u003c/table\u003e\n\n## The workflow\n\nMost of the time, you type three commands:\n\n```\n/lavra-design \"I want users to upload photos for listings\"\n```\n\nThis runs the full planning pipeline as a single command: interactive brainstorm with scope sharpening, structured plan with phased beads, domain-matched research agents, plan revision, and adversarial review. 
The output is detailed enough that implementation is mechanical.\n\n```\n/lavra-work\n```\n\nPicks up the approved plan and implements it. Auto-routes between single and multi-bead parallel execution. Includes mandatory review, fix loop, and knowledge curation -- all automatic.\n\n```\n/lavra-ship\n```\n\nRebases on main, runs tests, scans for secrets and debug leftovers, creates the PR, closes beads, and pushes the backup. One command to land the plane.\n\nAdd `/lavra-qa` between work and ship when building web apps -- it maps changed files to routes and runs browser-based verification with screenshots.\n\n## Who this is for\n\nAnyone using coding agents who wants consistent, high-quality output instead of hoping the agent gets it right this time.\n\n- **Non-technical users:** `/lavra-design \"build me X\"` handles the brainstorming, planning, and research. `/lavra-work` handles the implementation with built-in quality gates. You get working software without needing to know how to code.\n- **Solo developers:** The memory system acts as a second brain. Past decisions, patterns, and gotchas surface automatically when they're relevant.\n- **Teams:** Knowledge compounds across contributors. 
One person's hard-won insight becomes everyone's starting context.\n\n## Install\n\n**Requires:** [beads CLI](https://github.com/steveyegge/beads), `jq`, `sqlite3`\n\n```bash\nnpx @lavralabs/lavra@latest\n```\n\nOr manual:\n\n```bash\ngit clone https://github.com/roberto-mello/lavra.git\ncd lavra\n./install.sh               # Claude Code (default)\n./install.sh --opencode    # OpenCode\n./install.sh --gemini      # Gemini CLI\n./install.sh --cortex      # Cortex Code\n```\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAll commands\u003c/strong\u003e\u003c/summary\u003e\n\n**Pipeline (4):** `/lavra-design`, `/lavra-work`, `/lavra-qa`, `/lavra-ship`\n\n**Supporting (9):** `/lavra-quick` (fast path with escalation), `/lavra-learn` (knowledge curation), `/lavra-recall` (mid-session search), `/lavra-checkpoint` (save progress), `/lavra-retro` (weekly analytics), `/lavra-import`, `/lavra-triage`, `/changelog`, `/test-browser`\n\n**Power-user (6):** `/lavra-plan`, `/lavra-research`, `/lavra-eng-review`, `/lavra-review` (15 specialized review agents), `/lavra-work-ralph` (autonomous retry), `/lavra-work-teams` (persistent workers)\n\n**30 specialized agents** across review, research, design, workflow, and docs. Each runs at the right model tier to keep costs 60-70% lower than running everything on Opus.\n\nSee [docs/CATALOG.md](docs/CATALOG.md) for the full listing.\n\n\u003c/details\u003e\n\n## How knowledge compounds\n\n```\nbrainstorm  --DECISION--\u003e  design\ndesign      \u003c--LEARNED/PATTERN--  auto-recall from prior work\nresearch    --FACT/INVESTIGATION--\u003e  plan revision\nwork        --LEARNED (inline)--\u003e  mandatory knowledge gate\nreview      --LEARNED--\u003e  issues become future recall\nretro       synthesizes patterns, surfaces gaps\n```\n\nSix knowledge types (LEARNED, DECISION, FACT, PATTERN, INVESTIGATION, DEVIATION) are captured inline during work and stored in `.lavra/memory/knowledge.jsonl`. 
At session start, relevant entries are recalled automatically based on your current beads and git branch. The system gets smarter over time -- not just for you, but for your whole team.\n\n### Configuration\n\n**`.lavra/config/lavra.json`** can be created manually or by the `/lavra-setup` command.\nIt lets users toggle workflow phases and adjust planning and execution behavior:\n\n```jsonc\n{\n  \"workflow\": {\n    \"research\": true,             // run research agents in /lavra-design\n    \"plan_review\": true,          // run plan review phase in /lavra-design\n    \"goal_verification\": true,    // verify completion criteria in /lavra-work and /lavra-ship\n    \"testing_scope\": \"targeted\"   // \"targeted\" (hooks, API routes, complex logic only) or \"full\" (all tests)\n  },\n  \"execution\": {\n    \"max_parallel_agents\": 3,     // max subagents running at once\n    \"commit_granularity\": \"task\"  // \"task\" (atomic, default) or \"wave\" (legacy)\n  },\n  \"model_profile\": \"balanced\"     // \"balanced\" (default) or \"quality\" (opus for review/verification agents)\n}\n```\n\n**`/lavra-setup`**: run this to generate a codebase profile (stack, architecture, conventions) that informs planning.\n\n## Acknowledgments\n\nBuilt by [Roberto Mello](https://github.com/roberto-mello), extending [compound-engineering](https://github.com/EveryInc/compound-engineering-plugin) by [Every](https://every.to). Task tracking by [Beads](https://github.com/steveyegge/beads).\n\nInspired by Every's writing on [compound engineering](https://every.to/chain-of-thought/compound-engineering-how-every-codes-with-agents), with ideas from [Mario Zechner](https://x.com/badlogicgames/), [Simon Willison](https://simonwillison.net/), [Get Shit Done](https://github.com/gsd-build/get-shit-done), [gstack by Garry Tan](https://github.com/garrytan/gstack) and many others. 
Thanks to my friend Dan for rubber-ducking Lavra.\n\n---\n\n_Lavra (/ˈla.vɾɐ/) is the Portuguese word for \"harvest\" — the idea that every session plants knowledge that the next one reaps. Named by Roberto Mello, who is Brazilian._\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Froberto-mello%2Flavra","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Froberto-mello%2Flavra","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Froberto-mello%2Flavra/lists"}