{"id":47619398,"url":"https://github.com/caliber-ai-org/ai-setup","last_synced_at":"2026-04-13T08:01:42.560Z","repository":{"id":344574086,"uuid":"1178198291","full_name":"caliber-ai-org/ai-setup","owner":"caliber-ai-org","description":"Continuously sync your AI setups with one command. Codebase-tailored agent skills, MCPs, and config files for Claude Code, Cursor, and Codex.","archived":false,"fork":false,"pushed_at":"2026-04-08T09:20:51.000Z","size":122159,"stargazers_count":632,"open_issues_count":24,"forks_count":81,"subscribers_count":10,"default_branch":"master","last_synced_at":"2026-04-08T11:29:52.955Z","etag":null,"topics":["agent-config","ai-agents","anthropic","claude-code","claude-md","cli","codex","cursor","cursorrules","developer-tools","llm","mcp","openai","openai-codex","skills"],"latest_commit_sha":null,"homepage":"https://caliber-ai.dev","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/caliber-ai-org.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2026-03-10T19:39:38.000Z","updated_at":"2026-04-08T10:34:33.000Z","dependencies_parsed_at":"2026-03-17T02:01:20.441Z","dependency_job_id":null,"html_url":"https://github.com/caliber-ai-org/ai-setup","commit_stats":null,"previous_names":["rely-ai-org/caliber","caliber-ai-org/ai-setup"],"tags_count":162,"template":false,"template_full_name":null,"purl":"pkg:github/caliber-ai-org/ai-setup","repo
sitory_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/caliber-ai-org%2Fai-setup","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/caliber-ai-org%2Fai-setup/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/caliber-ai-org%2Fai-setup/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/caliber-ai-org%2Fai-setup/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/caliber-ai-org","download_url":"https://codeload.github.com/caliber-ai-org/ai-setup/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/caliber-ai-org%2Fai-setup/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31744404,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-13T06:26:45.479Z","status":"ssl_error","status_checked_at":"2026-04-13T06:26:44.645Z","response_time":93,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent-config","ai-agents","anthropic","claude-code","claude-md","cli","codex","cursor","cursorrules","developer-tools","llm","mcp","openai","openai-codex","skills"],"created_at":"2026-04-01T21:55:24.188Z","updated_at":"2026-04-13T08:01:42.530Z","avatar_url":"https://github.com/caliber-ai-org.png","language":"TypeScript","readme":"# Caliber\n\n**Hand-written `CLAUDE.md` files go stale the moment 
you refactor.** Your AI agent hallucinates paths that no longer exist, misses new dependencies, and gives advice based on yesterday's architecture. Caliber generates and maintains your AI context files (`CLAUDE.md`, `.cursor/rules/`, `AGENTS.md`, `copilot-instructions.md`) so they stay accurate as your code evolves — and keeps every agent on your team in sync, whether they use Claude Code, Cursor, Codex, OpenCode, or GitHub Copilot.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"assets/demo-header.gif\" alt=\"Caliber product demo\" width=\"900\"\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://www.npmjs.com/package/@rely-ai/caliber\"\u003e\u003cimg src=\"https://img.shields.io/npm/v/@rely-ai/caliber\" alt=\"npm version\"\u003e\u003c/a\u003e\n  \u003ca href=\"./LICENSE\"\u003e\u003cimg src=\"https://img.shields.io/npm/l/@rely-ai/caliber\" alt=\"license\"\u003e\u003c/a\u003e\n  \u003ca href=\"https://nodejs.org\"\u003e\u003cimg src=\"https://img.shields.io/node/v/@rely-ai/caliber\" alt=\"node\"\u003e\u003c/a\u003e\n  \u003cimg src=\"https://img.shields.io/badge/caliber-94%2F100-brightgreen\" alt=\"Caliber Score\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/Claude_Code-supported-blue\" alt=\"Claude Code\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/Cursor-supported-blue\" alt=\"Cursor\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/Codex-supported-blue\" alt=\"Codex\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/OpenCode-supported-blue\" alt=\"OpenCode\"\u003e\n  \u003cimg src=\"https://img.shields.io/badge/GitHub_Copilot-supported-blue\" alt=\"GitHub Copilot\"\u003e\n\u003c/p\u003e\n\n## Before / After\n\nMost repos start with a hand-written `CLAUDE.md` and nothing else. 
Here's what Caliber finds — and fixes:\n\n```\n  Before                                    After /setup-caliber\n  ──────────────────────────────            ──────────────────────────────\n\n  Agent Config Score    35 / 100            Agent Config Score    94 / 100\n  Grade D                                   Grade A\n\n  FILES \u0026 SETUP           6 / 25            FILES \u0026 SETUP          24 / 25\n  QUALITY                12 / 25            QUALITY                22 / 25\n  GROUNDING               7 / 20            GROUNDING              19 / 20\n  ACCURACY                5 / 15            ACCURACY               13 / 15\n  FRESHNESS               5 / 10            FRESHNESS              10 / 10\n  BONUS                   0 / 5             BONUS                   5 / 5\n```\n\nScoring is deterministic — no LLM, no API calls. It cross-references your config files against your actual project filesystem: do referenced paths exist? Are code blocks present? Is there config drift since your last commit?\n\n```bash\ncaliber score --compare main    # See how your branch changed the score\n```\n\n## Get Started\n\nRequires **Node.js \u003e= 20**.\n\n```bash\nnpx @rely-ai/caliber bootstrap\n```\n\nThen, in your terminal (not the IDE chat), start a Claude Code or Cursor CLI session and type:\n\n\u003e **/setup-caliber**\n\nYour agent detects your stack, generates tailored configs for every platform your team uses, sets up pre-commit hooks, and enables continuous sync — all from inside your normal workflow.\n\n**Don't use Claude Code or Cursor?** Run `caliber init` instead — it's the same setup as a CLI wizard. Works with any LLM provider: bring your own Anthropic, OpenAI, or Vertex AI key.\n\n\u003e **Your code stays on your machine.** Bootstrap is 100% local — no LLM calls, no code sent anywhere. Generation uses your own AI subscription or API key. 
Caliber never sees your code.\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eWindows Users\u003c/strong\u003e\u003c/summary\u003e\n\nCaliber works on Windows with a few notes:\n\n- **Run from your terminal** (PowerShell, CMD, or Git Bash) — not from inside an IDE chat window. Open a terminal, `cd` into your project folder, then run `npx @rely-ai/caliber bootstrap`.\n- **Git Bash is recommended.** Caliber's pre-commit hooks and auto-sync scripts use shell syntax. Git for Windows includes Git Bash, which handles this automatically. If you only use PowerShell, hooks may be skipped silently.\n- **Cursor Agent CLI:** If prompted to install it, download from [cursor.com/downloads](https://www.cursor.com/downloads) instead of the `curl | bash` command shown on macOS/Linux. Then run `agent login` in your terminal to authenticate.\n- **One terminal at a time.** Avoid running Caliber from multiple terminals simultaneously — this can cause conflicting state and unexpected provider detection.\n\n\u003c/details\u003e\n\n## Audits first, writes second\n\nCaliber never overwrites your existing configs without asking. The workflow mirrors code review:\n\n1. **Score** — read-only audit of your current setup\n2. **Propose** — generate or improve configs, shown as a diff\n3. **Review** — accept, refine via chat, or decline each change\n4. **Backup** — originals saved to `.caliber/backups/` before every write\n5. **Undo** — `caliber undo` restores everything to its previous state\n\nIf your existing config scores **95+**, Caliber skips full regeneration and applies targeted fixes to the specific checks that are failing.\n\n## How It Works\n\nBootstrap gives your agent the `/setup-caliber` skill. Your agent analyzes your project — languages, frameworks, dependencies, architecture — generates configs, and installs hooks. 
From there, it's a loop:\n\n```\n  npx @rely-ai/caliber bootstrap       ← one-time, 2 seconds\n              │\n              ▼\n  agent runs /setup-caliber             ← agent handles everything\n              │\n              ▼\n  ┌──── configs generated ◄────────────┐\n  │           │                        │\n  │           ▼                        │\n  │     your code evolves              │\n  │     (new deps, renamed files,      │\n  │      changed architecture)         │\n  │           │                        │\n  │           ▼                        │\n  └──► caliber refresh ──────────────►─┘\n       (auto, on every commit)\n```\n\nPre-commit hooks run the refresh loop automatically. New team members get nudged to bootstrap on their first session.\n\n### What It Generates\n\n**Claude Code**\n- `CLAUDE.md` — Project context, build/test commands, architecture, conventions\n- `CALIBER_LEARNINGS.md` — Patterns learned from your AI coding sessions\n- `.claude/skills/*/SKILL.md` — Reusable skills ([OpenSkills](https://agentskills.io) format)\n- `.mcp.json` — Auto-discovered MCP server configurations\n- `.claude/settings.json` — Permissions and hooks\n\n**Cursor**\n- `.cursor/rules/*.mdc` — Modern rules with frontmatter (description, globs, alwaysApply)\n- `.cursor/skills/*/SKILL.md` — Skills for Cursor\n- `.cursor/mcp.json` — MCP server configurations\n\n**OpenAI Codex**\n- `AGENTS.md` — Project context for Codex\n- `.agents/skills/*/SKILL.md` — Skills for Codex\n\n**OpenCode**\n- `AGENTS.md` — Project context (shared with Codex when both are targeted)\n- `.opencode/skills/*/SKILL.md` — Skills for OpenCode\n\n**GitHub Copilot**\n- `.github/copilot-instructions.md` — Project context for Copilot\n\n## Key Features\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAny Codebase\u003c/strong\u003e\u003c/summary\u003e\n\nTypeScript, Python, Go, Rust, Java, Ruby, Terraform, and more. 
Language and framework detection is fully LLM-driven — no hardcoded mappings. Caliber works on any project.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAny AI Tool\u003c/strong\u003e\u003c/summary\u003e\n\n`caliber bootstrap` auto-detects which agents you have installed. For manual control:\n```bash\ncaliber init --agent claude        # Claude Code only\ncaliber init --agent cursor        # Cursor only\ncaliber init --agent codex         # Codex only\ncaliber init --agent opencode        # OpenCode only\ncaliber init --agent github-copilot  # GitHub Copilot only\ncaliber init --agent all             # All platforms\ncaliber init --agent claude,cursor   # Comma-separated\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eChat-Based Refinement\u003c/strong\u003e\u003c/summary\u003e\n\nNot happy with the generated output? During review, refine via natural language — describe what you want changed and Caliber iterates until you're satisfied.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eMCP Server Discovery\u003c/strong\u003e\u003c/summary\u003e\n\nCaliber detects the tools your project uses (databases, APIs, services) and auto-configures matching MCP servers for Claude Code and Cursor.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDeterministic Scoring\u003c/strong\u003e\u003c/summary\u003e\n\n`caliber score` evaluates your config quality without any LLM calls — purely by cross-referencing config files against your actual project filesystem.\n\n| Category | Points | What it checks |\n|---|---|---|\n| **Files \u0026 Setup** | 25 | Config files exist, skills present, MCP servers, cross-platform parity |\n| **Quality** | 25 | Code blocks, concise token budget, concrete instructions, structured headings |\n| **Grounding** | 20 | Config references actual project directories and files |\n| **Accuracy** | 15 | 
Referenced paths exist on disk, config freshness vs. git history |\n| **Freshness \u0026 Safety** | 10 | Recently updated, no leaked secrets, permissions configured |\n| **Bonus** | 5 | Auto-refresh hooks, AGENTS.md, OpenSkills format |\n\nEvery failing check includes structured fix data — when `caliber init` runs, the LLM receives exactly what's wrong and how to fix it.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eSession Learning\u003c/strong\u003e\u003c/summary\u003e\n\nCaliber watches your AI coding sessions and learns from them. Hooks capture tool usage, failures, and your corrections — then an LLM distills operational patterns into `CALIBER_LEARNINGS.md`.\n\n```bash\ncaliber learn install      # Install hooks for Claude Code and Cursor\ncaliber learn status       # View hook status, event count, and ROI summary\ncaliber learn finalize     # Manually trigger analysis (auto-runs on session end)\ncaliber learn remove       # Remove hooks\n```\n\nLearned items are categorized by type — **[correction]**, **[gotcha]**, **[fix]**, **[pattern]**, **[env]**, **[convention]** — and automatically deduplicated.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eAuto-Refresh\u003c/strong\u003e\u003c/summary\u003e\n\nKeep configs in sync with your codebase automatically:\n\n| Hook | Trigger | What it does |\n|---|---|---|\n| **Git pre-commit** | Before each commit | Refreshes docs and stages updated files |\n| **Claude Code session end** | End of each session | Runs `caliber refresh` and updates docs |\n| **Learning hooks** | During each session | Captures events for session learning |\n\n```bash\ncaliber hooks --install    # Enable refresh hooks\ncaliber hooks --remove     # Disable refresh hooks\n```\n\nThe `refresh` command analyzes your git diff (committed, staged, and unstaged changes) and updates config files to reflect what 
changed.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eTeam Onboarding\u003c/strong\u003e\u003c/summary\u003e\n\nWhen Caliber is set up in a repo, it automatically nudges new team members to configure it on their machine. A lightweight session hook checks whether the pre-commit hook is installed and prompts setup if not — no manual coordination needed.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eFully Reversible\u003c/strong\u003e\u003c/summary\u003e\n\n- **Automatic backups** — originals saved to `.caliber/backups/` before every write\n- **Score regression guard** — if a regeneration produces a lower score, changes are auto-reverted\n- **Full undo** — `caliber undo` restores everything to its previous state\n- **Clean uninstall** — `caliber uninstall` removes everything Caliber added (hooks, generated sections, skills, learnings) while preserving your own content\n- **Dry run** — preview changes with `--dry-run` before applying\n\n\u003c/details\u003e\n\n## Commands\n\n| Command | Description |\n|---|---|\n| `caliber bootstrap` | Install agent skills — the fastest way to get started |\n| `caliber init` | Full setup wizard — analyze, generate, review, install hooks |\n| `caliber score` | Score config quality (deterministic, no LLM) |\n| `caliber score --compare \u003cref\u003e` | Compare current score against a git ref |\n| `caliber regenerate` | Re-analyze and regenerate configs (aliases: `regen`, `re`) |\n| `caliber refresh` | Update docs based on recent code changes |\n| `caliber skills` | Discover and install community skills |\n| `caliber learn` | Session learning — install hooks, view status, finalize analysis |\n| `caliber hooks` | Manage auto-refresh hooks |\n| `caliber config` | Configure LLM provider, API key, and model |\n| `caliber status` | Show current setup status |\n| `caliber uninstall` | Remove all Caliber resources from a project |\n| `caliber undo` | Revert all 
changes made by Caliber |\n\n## FAQ\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDoes it overwrite my existing configs?\u003c/strong\u003e\u003c/summary\u003e\n\nNo. Caliber shows you a diff of every proposed change. You accept, refine, or decline each one. Originals are backed up automatically.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDoes it need an API key?\u003c/strong\u003e\u003c/summary\u003e\n\n**Bootstrap \u0026 scoring:** No. Both run 100% locally with no LLM.\n\n**Generation** (via `/setup-caliber` or `caliber init`): Uses your existing Claude Code or Cursor subscription (no API key needed), or bring your own key for Anthropic, OpenAI, or Vertex AI.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eWhat's the difference between bootstrap and init?\u003c/strong\u003e\u003c/summary\u003e\n\n`caliber bootstrap` installs agent skills in 2 seconds — your agent then runs `/setup-caliber` to handle the rest from inside your session. `caliber init` is the full interactive wizard for users who prefer a CLI-driven setup. Both end up in the same place.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eWhat if I don't like what it generates?\u003c/strong\u003e\u003c/summary\u003e\n\nRefine it via chat during review, or decline the changes entirely. If you already accepted, `caliber undo` restores everything. You can also preview with `--dry-run`.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDoes it work with monorepos?\u003c/strong\u003e\u003c/summary\u003e\n\nYes. Run `caliber init` from any directory. `caliber refresh` can update configs across multiple repos when run from a parent directory.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDoes it send my code anywhere?\u003c/strong\u003e\u003c/summary\u003e\n\nScoring is fully local. 
Generation sends a project summary (languages, structure, dependencies — not source code) to whatever LLM provider you configure — the same provider your AI editor already uses. Anonymous usage analytics (no code, no file contents) can be disabled via `caliber config`.\n\n\u003c/details\u003e\n\n## LLM Providers\n\nNo API key? No problem. Caliber works with your existing AI tool subscription:\n\n| Provider | Setup | Default Model |\n|---|---|---|\n| **Claude Code** (your seat) | `caliber config` → Claude Code | Inherited from Claude Code |\n| **Cursor** (your seat) | `caliber config` → Cursor | Inherited from Cursor |\n| **Anthropic** | `export ANTHROPIC_API_KEY=sk-ant-...` | `claude-sonnet-4-6` |\n| **OpenAI** | `export OPENAI_API_KEY=sk-...` | `gpt-5.4-mini` |\n| **Vertex AI** | `export VERTEX_PROJECT_ID=my-project` | `claude-sonnet-4-6` |\n| **Custom endpoint** | `OPENAI_API_KEY` + `OPENAI_BASE_URL` | `gpt-5.4-mini` |\n\nOverride the model for any provider: `export CALIBER_MODEL=\u003cmodel-name\u003e` or use `caliber config`.\n\nCaliber uses a **two-tier model system** — lightweight tasks (classification, scoring) auto-use a faster model, while heavy tasks (generation, refinement) use the default. This keeps costs low and speed high.\n\nConfiguration is stored in `~/.caliber/config.json` with restricted permissions (`0600`). 
API keys are never written to project files.\n\n\u003cdetails\u003e\n\u003csummary\u003eVertex AI advanced setup\u003c/summary\u003e\n\n```bash\n# Custom region\nexport VERTEX_PROJECT_ID=my-gcp-project\nexport VERTEX_REGION=europe-west1\n\n# Service account credentials (inline JSON)\nexport VERTEX_PROJECT_ID=my-gcp-project\nexport VERTEX_SA_CREDENTIALS='{\"type\":\"service_account\",...}'\n\n# Service account credentials (file path)\nexport VERTEX_PROJECT_ID=my-gcp-project\nexport GOOGLE_APPLICATION_CREDENTIALS=/path/to/service-account.json\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003eEnvironment variables reference\u003c/summary\u003e\n\n| Variable | Purpose |\n|---|---|\n| `ANTHROPIC_API_KEY` | Anthropic API key |\n| `OPENAI_API_KEY` | OpenAI API key |\n| `OPENAI_BASE_URL` | Custom OpenAI-compatible endpoint |\n| `VERTEX_PROJECT_ID` | GCP project ID for Vertex AI |\n| `VERTEX_REGION` | Vertex AI region (default: `us-east5`) |\n| `VERTEX_SA_CREDENTIALS` | Service account JSON (inline) |\n| `GOOGLE_APPLICATION_CREDENTIALS` | Service account JSON file path |\n| `CALIBER_USE_CLAUDE_CLI` | Use Claude Code CLI (`1` to enable) |\n| `CALIBER_USE_CURSOR_SEAT` | Use Cursor subscription (`1` to enable) |\n| `CALIBER_MODEL` | Override model for any provider |\n| `CALIBER_FAST_MODEL` | Override fast model for any provider |\n\n\u003c/details\u003e\n\n## Contributing\n\nSee [CONTRIBUTING.md](./CONTRIBUTING.md) for detailed guidelines.\n\n```bash\ngit clone https://github.com/caliber-ai-org/ai-setup.git\ncd ai-setup\nnpm install\nnpm run dev      # Watch mode\nnpm run test     # Run tests\nnpm run build    # Compile\n```\n\nUses [conventional commits](https://www.conventionalcommits.org/) — `feat:` for features, `fix:` for bug fixes.\n\n## Add a Caliber badge to your repo\n\nAfter scoring your project, add a badge to your README:\n\n![Caliber Score](https://img.shields.io/badge/caliber-94%2F100-brightgreen)\n\nCopy this markdown and replace `94` with
your actual score:\n\n```\n![Caliber Score](https://img.shields.io/badge/caliber-SCORE%2F100-COLOR)\n```\n\nColor guide: `brightgreen` (90+), `green` (70-89), `yellow` (40-69), `red` (\u003c40).\n\n## License\n\nMIT\n","funding_links":[],"categories":["🖥 Coding Agents","Tools","CLI \u0026 Terminal Tools","Ecosystem"],"sub_categories":["Terminal and CLI Agents","IDE \u0026 Editor Assistants","AI Coding CLIs","Quick Setup with cc-safe-setup"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcaliber-ai-org%2Fai-setup","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcaliber-ai-org%2Fai-setup","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcaliber-ai-org%2Fai-setup/lists"}