{"id":47683603,"url":"https://github.com/vb-nattamai/agent-ready","last_synced_at":"2026-04-22T10:00:52.579Z","repository":{"id":346847220,"uuid":"1191866120","full_name":"vb-nattamai/agent-ready","owner":"vb-nattamai","description":"AgentReady — Transform any legacy repo into an AI-agent-ready codebase. Generates agent-context.json, AGENTS.md, CLAUDE.md, MCP config, and tool scaffolds via a single GitHub issue or Actions workflow.","archived":false,"fork":false,"pushed_at":"2026-04-17T13:28:26.000Z","size":407,"stargazers_count":2,"open_issues_count":0,"forks_count":2,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-04-17T13:36:46.307Z","etag":null,"topics":["ai-agents","claude","code-generation","devops","github-actions","legacy-modernization","llm","mcp"],"latest_commit_sha":null,"homepage":"https://github.com/vb-nattamai/agent-ready","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/vb-nattamai.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-03-25T16:58:40.000Z","updated_at":"2026-04-17T13:28:23.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/vb-nattamai/agent-ready","commit_stats":null,"previous_names":["vb-nattamai/legacy-to-agentic-ready"],"tags_count":36,"template":false,"template_full_name":null,"purl":"pkg:github/vb-nattamai/agent-ready","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/reposi
tories/vb-nattamai%2Fagent-ready","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vb-nattamai%2Fagent-ready/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vb-nattamai%2Fagent-ready/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vb-nattamai%2Fagent-ready/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/vb-nattamai","download_url":"https://codeload.github.com/vb-nattamai/agent-ready/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vb-nattamai%2Fagent-ready/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32130776,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-22T08:34:57.708Z","status":"ssl_error","status_checked_at":"2026-04-22T08:34:55.583Z","response_time":58,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-agents","claude","code-generation","devops","github-actions","legacy-modernization","llm","mcp"],"created_at":"2026-04-02T14:22:28.573Z","updated_at":"2026-04-22T10:00:52.573Z","avatar_url":"https://github.com/vb-nattamai.png","language":"Python","readme":"# 
AgentReady\n\n[![Version](https://img.shields.io/github/v/release/vb-nattamai/agent-ready)](https://github.com/vb-nattamai/agent-ready/releases)\n[![CI](https://github.com/vb-nattamai/agent-ready/actions/workflows/ci.yml/badge.svg)](https://github.com/vb-nattamai/agent-ready/actions/workflows/ci.yml)\n[![CodeQL](https://github.com/vb-nattamai/agent-ready/actions/workflows/codeql.yml/badge.svg)](https://github.com/vb-nattamai/agent-ready/actions/workflows/codeql.yml)\n[![License](https://img.shields.io/github/license/vb-nattamai/agent-ready)](LICENSE)\n\n**Transform any legacy repository into an AI-agent-ready codebase.**\n\nAI agents fail on unfamiliar codebases — they invent file paths, guess commands, miss domain concepts, and make dangerous mistakes. AgentReady fixes this by generating up to 12 scaffolding files grounded in your actual code, giving agents verified knowledge before they touch a single line.\n\n---\n\n## What You Get\n\nOne command. Up to 12 files generated from your real code. 
No templates, no placeholders.\n\n| File | What it does |\n|------|-------------|\n| `agent-context.json` | Machine-readable repo map — entry point, test command, domain concepts, restricted paths, env vars |\n| `AGENTS.md` | Operating contract for GitHub Copilot, OpenAI agents, and any agentic workflow |\n| `CLAUDE.md` | Auto-loaded by Claude Code at every session start |\n| `.github/copilot-instructions.md` | GitHub Copilot workspace instructions |\n| `system_prompt.md` | Universal system prompt — paste as `system:` in any LLM API call |\n| `mcp.json` | MCP server configuration for Claude Desktop, Cursor, Continue |\n| `memory/schema.md` | Agent memory and state contract |\n| `tools/refresh_context.py` | Script to refresh `agent-context.json` on demand |\n| `.github/dependabot.yml` | Dependency update schedule |\n| `.github/CODEOWNERS` | Code ownership for PR routing |\n| `openapi.yaml` | OpenAPI stub (generated for REST API repos) |\n| `.agent-ready/custom_questions.json` | Hook to add repo-specific eval questions |\n\n---\n\n## Quick Start (Local)\n\n### 1. Install\n\n```bash\npip install \"agent-ready[ai] @ git+https://github.com/vb-nattamai/agent-ready.git\"\n```\n\n### 2. Set your API key\n\n```bash\nexport ANTHROPIC_API_KEY=\"sk-ant-...\"   # Anthropic (default)\n# or\nexport OPENAI_API_KEY=\"sk-...\"          # OpenAI\nexport GOOGLE_API_KEY=\"...\"             # Google\nexport GROQ_API_KEY=\"gsk_...\"           # Groq (free tier available)\n```\n\n### 3. Run\n\n```bash\n# Transform your repo (generates all scaffolding files)\nagent-ready --target /path/to/your/repo\n\n# Transform + measure how much the context improves AI responses\nagent-ready --target /path/to/your/repo --eval\n\n# Preview what would be generated without writing anything\nagent-ready --target /path/to/your/repo --dry-run\n```\n\nThat's it. 
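For orientation, here is a minimal hypothetical sketch of what the `static` section of a generated `agent-context.json` can hold. The field names `restricted_write_paths`, `environment_variables`, and `domain_concepts` come from the readiness checklist below; the remaining names and every value are illustrative placeholders, and the tool's actual schema may differ:

```json
{
  "static": {
    "entry_point": "src/app/main.py",
    "test_command": "python -m pytest tests/ -q",
    "restricted_write_paths": [".github/workflows/", "migrations/"],
    "environment_variables": ["DATABASE_URL", "ANTHROPIC_API_KEY"],
    "domain_concepts": {
      "Order": "a customer purchase moving through the fulfilment pipeline"
    }
  },
  "dynamic": {
    "generated_at": "2026-04-17T00:00:00Z"
  }
}
```

Run `agent-ready --target . --dry-run` to see what would actually be generated for your repo.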
Review the generated files, commit them, and your repo is agent-ready.\n\n---\n\n## GitHub Actions — Transform via Issue (Recommended for Teams)\n\nThe zero-install path. Open an issue, get a PR with up to 12 files.\n\n### Step 1 — Install the trigger workflow into your target repo\n\nGo to the [AgentReady Actions tab](https://github.com/vb-nattamai/agent-ready/actions/workflows/install-to-target-repo.yml) → **Run workflow**:\n\n| Input | Value |\n|-------|-------|\n| `target_repo` | `your-org/your-repo` |\n| `provider` | `anthropic` (or `openai`, `google`, `groq`, `mistral`, `together`) |\n| `eval` | ✅ (recommended — runs quality measurement after transform) |\n\nThis pushes **5 files** (4 workflows plus an issue template) into your repo:\n- `.github/workflows/agentic-ready.yml` — issue-triggered transformer\n- `.github/workflows/context-drift-detector.yml` — weekly staleness check\n- `.github/workflows/pr-review.yml` — AI-powered PR review on every PR\n- `.github/workflows/agentic-ready-eval.yml` — eval-only (runs on every push to main)\n- `.github/ISSUE_TEMPLATE/agentic-ready.yml` — pre-filled issue template\n\n### Step 2 — Add secrets to your target repo\n\nGo to your repo → **Settings → Secrets and variables → Actions** and add:\n\n```\nANTHROPIC_API_KEY = sk-ant-...    # (or your provider's key)\nINSTALL_TOKEN     = ghp_...       
# GitHub PAT with repo + workflow scopes\n```\n\n\u003e Only collaborators with `write`, `maintain`, or `admin` access can trigger the workflow.\n\n### Step 3 — Open an issue\n\nIssues → New Issue → **\"🤖 AgentReady — Transform this repo\"** → Submit.\n\n```\nYou open the issue\n    │\n    ├── AgentReady checks you're a collaborator\n    ├── Runs LLM analysis of your codebase (~60s)\n    ├── Generates up to 12 scaffolding files\n    ├── (Optional) Runs 19-question eval\n    ├── Opens a PR: \"🤖 Add agentic-ready scaffolding\"\n    ├── Posts the PR link as an issue comment\n    └── Closes the issue ✅\n```\n\n### Step 4 — Review and merge the PR\n\nRead through the generated files — especially `AGENTS.md` and `agent-context.json`. Edit the `static` section of `agent-context.json` if anything is wrong. The `dynamic` section is auto-refreshed on every run.\n\n---\n\n## How the Pipeline Works\n\n```\nPhase 1 — Collect   reads your file tree, source files, config files, CI, README\nPhase 2 — Analyse   LLM reads your code → structured JSON (architecture, domain, pitfalls)\nPhase 3 — Generate  LLM writes all scaffolding files from the analysis JSON\nPhase 4 — Score     100-point readiness score based on what was captured\nPhase 5 — Evaluate  19-question golden set measures whether context improves AI responses\n```\n\n**Every phase is LLM-first** — content is written from what the model read in your code, not from templates.\n\n**Provider strategy — use the best model where it counts, the fastest where it doesn't:**\n\n| Provider | Analysis model | Generation model | Evaluation model | Key |\n|---|---|---|---|---|\n| `anthropic` | claude-opus-4-6 | claude-sonnet-4-6 | claude-haiku-4-5 | `ANTHROPIC_API_KEY` |\n| `openai` | gpt-5.4 | gpt-5.4-mini | gpt-5.4-nano | `OPENAI_API_KEY` |\n| `google` | gemini-2.5-pro | gemini-2.5-pro | gemini-2.5-flash-lite | `GOOGLE_API_KEY` |\n| `groq` | llama-3.3-70b | llama-3.3-70b | llama-3.1-8b-instant | `GROQ_API_KEY` |\n| `mistral` | 
mistral-large | mistral-large | mistral-small | `MISTRAL_API_KEY` |\n| `together` | Qwen3.5-397B | Llama-3.3-70B | Qwen3.5-9B | `TOGETHER_API_KEY` |\n| `ollama` | llama3.3 | llama3.3 | llama3.2 | _(local — no key)_ |\n\n---\n\n## Understanding Your Results\n\n### Agentic Readiness Score (0–100)\n\nEvery run prints a score. It's an **actionable checklist**, not a grade.\n\n| Criterion | Points | What it means if missing |\n|-----------|--------|--------------------------|\n| `agent-context.json` exists | 10 | Core machine-readable context is absent |\n| `CLAUDE.md` exists | 10 | Claude Code won't auto-load any context |\n| `AGENTS.md` exists | 10 | No operating contract for agentic workflows |\n| `system_prompt.md` exists | 5 | No universal LLM system prompt |\n| `tools/` has ≥1 file | 10 | No refresh script for context maintenance |\n| Entry point file verified | 10 | Agents don't know where execution starts |\n| Test command set | 10 | Agents will guess/hallucinate test commands |\n| `restricted_write_paths` populated | 10 | Agents may overwrite protected files |\n| `environment_variables` populated | 10 | Agents won't know which env vars to set |\n| `domain_concepts` has ≥3 entries | 5 | Domain knowledge missing from context |\n| OpenAPI spec exists | 5 | REST API shape not documented for agents |\n| CI config exists | 5 | No CI signals for the analyser to read |\n\nA score of **75–80** is typical for a clean repo with tests and CI. 
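The checklist above can be sketched as a simple weighted sum. The criterion identifiers here are illustrative, not the tool's internal names, and the real scorer inside agent-ready may work differently:

```python
# Hypothetical sketch of the 100-point readiness checklist from the table
# above. Weights are copied from the README; identifier names are invented.
CRITERIA = {
    'agent_context_json_exists': 10,
    'claude_md_exists': 10,
    'agents_md_exists': 10,
    'system_prompt_exists': 5,
    'tools_dir_nonempty': 10,
    'entry_point_verified': 10,
    'test_command_set': 10,
    'restricted_write_paths_populated': 10,
    'environment_variables_populated': 10,
    'domain_concepts_min3': 5,
    'openapi_spec_exists': 5,
    'ci_config_exists': 5,
}

def readiness_score(checks: dict) -> int:
    # Sum the weights of every criterion the repo satisfies.
    return sum(points for name, points in CRITERIA.items() if checks.get(name))

# The weights are chosen so a fully scaffolded repo scores exactly 100.
assert sum(CRITERIA.values()) == 100
```

Under this sketch, a repo missing only the OpenAPI stub and a CI config would score 90, which matches the "75–80 is typical" band for repos that also lack env vars or restricted paths.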
**85+** requires explicit env vars, an entry point, and populated restricted paths.\n\n### Eval Results (Pass Rate + Hallucination Rate)\n\nThe eval runs 19 questions (13 base + 6 Python/JS overlay) from a versioned golden set and measures:\n- **Pass rate** — what % of questions the context answers correctly\n- **Hallucination rate** — what % of responses contain invented facts\n\n**Benchmark on `ar-test-python-complex`** (multi-module Python inventory service, pytest, CI, pyproject.toml):\n\n| Category | Score (no ctx) | Score (with ctx) | Delta | Pass rate |\n|---|---|---|---|---|\n| **Overall** | 2.0/10 | **6.3/10** | +4.3 pts | 47% |\n| commands | 2.6/10 | 5.5/10 | +2.9 pts | 40% |\n| safety | 2.5/10 | 5.6/10 | +3.1 pts | 25% |\n| architecture | 1.8/10 | 6.8/10 | +5.0 pts | 60% |\n| **domain** | **0.0/10** | **9.0/10** | **+9.0 pts** | **100%** |\n| adversarial | 2.0/10 | 6.1/10 | +4.1 pts | 33% |\n\n\u003e **Domain knowledge is perfect.** Commands and safety questions are harder — they require the context files to explicitly capture runtime version, test flags, and restricted paths. Richer repos (with `.env.example`, explicit restricted paths, and a clearly runnable entry point) score higher across all categories.\n\n**How to interpret your hallucination rate:**\n\n| Rate | Meaning |\n|------|---------|\n| \u003c 20% | Excellent — context is grounded and specific |\n| 20–40% | Good — a few gaps; check which categories failed |\n| 40–60% | Fair — context files exist but miss key specifics (runtime version, commands) |\n| \u003e 60% | Poor — the generator may have sparse source to work with |\n\n**Improving a low score:** Check which questions failed in `AGENTIC_EVAL.md`. 
The most common fixes:\n- Add an `.env.example` listing your real env vars → fixes `environment_variables`\n- Add a `Makefile` or `pyproject.toml` with explicit `[tool.pytest.ini_options]` → fixes `test_command`\n- Add a `SECURITY.md` or explicit `.github/CODEOWNERS` → fixes `restricted_write_paths`\n- Add repo-specific questions in `.agent-ready/custom_questions.json`\n\n---\n\n## CLI Reference\n\n```bash\n# Basic transformation\nagent-ready --target /path/to/repo\n\n# Choose provider\nagent-ready --target /path/to/repo --provider openai\nagent-ready --target /path/to/repo --provider groq\nagent-ready --target /path/to/repo --model ollama/llama3.3   # local, free\n\n# Transform + run eval in one shot\nagent-ready --target /path/to/repo --eval\n\n# Eval only (context files already exist — no re-transformation)\nagent-ready --target /path/to/repo --eval-only\n\n# CI gate — exit 1 if pass rate falls below 50%\nagent-ready --target /path/to/repo --eval-only --fail-level 0.5\n\n# Preview without writing any files (dry run)\nagent-ready --target /path/to/repo --dry-run\n\n# Force overwrite existing generated files\nagent-ready --target /path/to/repo --force\n\n# Regenerate only specific file groups\nagent-ready --target /path/to/repo --only agents    # AGENTS.md, CLAUDE.md, copilot-instructions.md, system_prompt.md\nagent-ready --target /path/to/repo --only context   # agent-context.json, tools/refresh_context.py\nagent-ready --target /path/to/repo --only memory    # memory/schema.md\n\n# Suppress output (CI-friendly)\nagent-ready --target /path/to/repo --quiet\n\n# Install pre-commit hook for automatic context refresh\nagent-ready --target /path/to/repo --install-hooks\n\n# Review a pull request (posts APPROVE/REQUEST_CHANGES to GitHub)\nagent-ready --target /path/to/repo --review-pr 42\nagent-ready --target /path/to/repo --review-pr 42 --dry-run   # print review without posting\n```\n\n**Required env vars — set the one for your provider:**\n\n```bash\nexport 
ANTHROPIC_API_KEY=\"sk-ant-...\"   # anthropic (default)\nexport OPENAI_API_KEY=\"sk-...\"          # openai\nexport GOOGLE_API_KEY=\"...\"             # google\nexport GROQ_API_KEY=\"gsk_...\"           # groq\nexport MISTRAL_API_KEY=\"...\"            # mistral\nexport TOGETHER_API_KEY=\"...\"           # together\n# ollama: no key — runs locally\n```\n\n---\n\n## Customising the Eval\n\n### Add repo-specific questions\n\nDrop a `.agent-ready/custom_questions.json` in your target repo (generated automatically on every transform):\n\n```json\n[\n  {\n    \"id\": \"custom_001\",\n    \"category\": \"commands\",\n    \"question\": \"What is the exact command to run database migrations in this project?\",\n    \"hint\": \"Check the Makefile and README\"\n  }\n]\n```\n\nThese questions are included in every eval run alongside the standard golden set.\n\n### Adjust the CI pass threshold\n\nThe installed `agentic-ready-eval.yml` workflow runs eval on every push to `main`. Set `fail_level` to exit 1 and fail the workflow if quality drops:\n\n```yaml\n# In your target repo's .github/workflows/agentic-ready-eval.yml\nwith:\n  fail_level: \"0.6\"   # fail if pass rate \u003c 60%\n```\n\n---\n\n## MCP Server\n\nAgentReady ships an MCP server that exposes its core capabilities as tools for Claude Desktop, Cursor, Continue, and any other MCP client.\n\n```bash\npip install \"agent-ready[ai] @ git+https://github.com/vb-nattamai/agent-ready.git\"\nagent-ready-mcp\n```\n\n**Available tools:**\n\n| Tool | What it does |\n|------|-------------|\n| `transform` | Run the full 5-phase pipeline on a target repo |\n| `score` | Compute the agentic readiness score for an existing repo |\n| `evaluate` | Run the golden-set eval against existing context files |\n| `review_pr` | Review a pull request and return structured feedback |\n\n**Configure in Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json` on macOS):\n\n```json\n{\n  \"mcpServers\": {\n    \"agent-ready\": {\n      \"command\": 
\"agent-ready-mcp\",\n      \"env\": { \"ANTHROPIC_API_KEY\": \"sk-ant-...\" }\n    }\n  }\n}\n```\n\n---\n\n## After Merging the PR\n\n| Tool | File | How |\n|------|------|-----|\n| Claude Code | `CLAUDE.md` | Auto-loaded at every session start |\n| GitHub Copilot | `.github/copilot-instructions.md` | Loaded as workspace instructions |\n| Any LLM | `system_prompt.md` | Paste as the `system:` parameter |\n| MCP clients | `mcp.json` | Loaded by the MCP host |\n| Agentic workflows | `AGENTS.md` | Drop into any agent that reads workspace files |\n| Any script | `agent-context.json` | Parse as JSON for programmatic access |\n\n---\n\n## Keeping Context Fresh\n\nGenerated files go stale as code evolves. Three mechanisms keep them current:\n\n**Weekly CI drift detection** (installed automatically as `.github/workflows/context-drift-detector.yml`):\n```\nEvery Monday 09:00 UTC → re-analyses codebase → opens a PR if drift detected\n```\n\n**Pre-commit hook** (optional — for active development):\n```bash\nagent-ready --target /path/to/repo --install-hooks\n```\n\n**Manual refresh** (any time):\n```bash\nagent-ready --target /path/to/repo --only context --force\n# or run the generated script:\npython tools/refresh_context.py\n```\n\n---\n\n## Workflows Reference\n\n### `reusable-transformer.yml` — Core transformer\n\n| Input | Default | Purpose |\n|---|---|---|\n| `target_repo` | required | Target repo in `owner/repo` format |\n| `target_branch` | `main` | Branch the PR is opened against |\n| `provider` | `anthropic` | LLM provider |\n| `eval` | `true` | Run eval after transformation |\n| `fail_level` | `0.5` | Exit 1 if eval pass rate below threshold |\n| `only` | _(all)_ | Limit to: `agents`, `context`, `memory` |\n| `force` | `false` | Overwrite existing generated files |\n| `issue_number` | _(none)_ | Issue to close after PR is opened |\n\n### `reusable-eval.yml` — Standalone evaluator\n\n| Input | Default | Purpose |\n|---|---|---|\n| `target_repo` | required | Target 
repo |\n| `provider` | `anthropic` | LLM provider |\n| `fail_level` | `0.5` | Exit 1 if pass rate below threshold |\n\nSaves `AGENTIC_EVAL.md` as a workflow artifact (retained 30 days).\n\n### `install-to-target-repo.yml` — Installer\n\nPushes trigger workflows into a target repo. Requires `INSTALL_TOKEN` (PAT with `repo` + `workflow` scopes).\n\n### `context-drift-detector.yml` — Weekly drift check\n\nRuns every Monday. Opens a PR if `agent-context.json` has drifted from the current codebase.\n\n### `pr-review.yml` — AI-powered PR review\n\nInstalled in target repos. Posts APPROVE or REQUEST_CHANGES on every PR, grounded in `agent-context.json`.\n\n**Security:** Uses `pull_request_target` — review script runs from base branch, keeping secrets inaccessible to PR authors. PR diffs are sent to the LLM API (Anthropic by default) — do not use if your diffs may contain secrets or confidential material.\n\n---\n\n## Supported Languages\n\n| Language | Frameworks detected |\n|---|---|\n| Python | Django, Flask, FastAPI, setuptools, pytest |\n| TypeScript / JavaScript | React, Next.js, Node.js, Express, Jest, Vitest |\n| Java | Spring Boot, Maven, Gradle |\n| Kotlin | Spring Boot, Gradle |\n| Go | standard library, Gin, Echo |\n| Rust | Cargo |\n| C# / .NET | ASP.NET |\n| Ruby | Rails, Bundler |\n\n---\n\n## Security Model\n\n- **No `${{ inputs.* }}` in `run:` blocks** — all user-controlled values go through `env:` variables first\n- **Provider allowlist** — validated against `^(anthropic|openai|google|groq|mistral|together|ollama)$` before any shell command is built\n- **Bash arrays for command construction** — `CMD=(...)` / `\"${CMD[@]}\"`, never string concatenation\n- **LiteLLM logging suppressed** — `litellm.suppress_debug_info = True` prevents prompts and responses from appearing in workflow logs\n- **PR diffs are untrusted** — forwarded to LLM API but never executed locally; all subprocess calls use argument lists, never `shell=True`\n\n---\n\n## CI \u0026 
Release\n\n### `ci.yml` — Continuous integration\n\nRuns on every push and PR: lint (`ruff`), format check (`ruff format --check`), tests (`pytest` with coverage). **Coverage gate: ≥ 50%.**\n\n### `codeql.yml` — Static security analysis\n\nRuns CodeQL on every push and PR (`security-and-quality` suite). Flags CWE-78 (injection), CWE-312 (clear-text logging), and related issues.\n\n### `release.yml` — Semantic versioning\n\n| Commit prefix | Version bump |\n|---|---|\n| `feat:` | minor |\n| `fix:` | patch |\n| `BREAKING CHANGE:` | major |\n| `docs:`, `chore:`, `style:` | none |\n\n---\n\n## Philosophy\n\n1. **LLM-first** — your chosen LLM reads your actual code and writes every file from scratch\n2. **Measurable** — the eval framework proves whether the context files actually improve AI responses\n3. **Never modify existing code** — only additive changes, always\n4. **Non-circular eval** — ground truth is extracted from raw source, not from the generated files themselves\n5. **Platform-agnostic** — works with Claude, OpenAI, Gemini, Groq, or any LLM via `system_prompt.md`\n6. **Idempotent** — safe to run multiple times; the `static` section of `agent-context.json` is always preserved\n\n---\n\n## Contributing\n\nContributions are very welcome. 
AgentReady is an early-stage open-source project.\n\n**Good first issues:**\n- Add support for a new language or framework in the analyser\n- Improve golden set questions for specific tech stacks (Django, Rails, Spring Boot)\n- Write tests for `analyser.py` and `generator.py` to improve coverage\n\n**Bigger contributions:**\n- Monorepo support — detect and handle multiple modules with per-module context files\n- VS Code extension — surface the readiness score inline\n- Improve hallucination rate on sparse repos (single file, no tests, no CI)\n\n**How to contribute:**\n\n```bash\ngit checkout -b feat/my-improvement\n# make changes\nruff format src tests\nruff check src tests\npython -m pytest tests/ -q --cov=src/agent_ready --cov-fail-under=50\ngit push\n# open a Pull Request — all CI gates must pass\n```\n\n---\n\n## License\n\nMIT — see [LICENSE](LICENSE) for details.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvb-nattamai%2Fagent-ready","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fvb-nattamai%2Fagent-ready","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvb-nattamai%2Fagent-ready/lists"}