{"id":48237115,"url":"https://github.com/ramtinj95/opencode-tokenscope","last_synced_at":"2026-04-04T20:01:47.217Z","repository":{"id":323568027,"uuid":"1086270865","full_name":"ramtinJ95/opencode-tokenscope","owner":"ramtinJ95","description":"Comprehensive token usage analysis and cost tracking for opencode sessions","archived":false,"fork":false,"pushed_at":"2026-04-04T08:22:01.000Z","size":297,"stargazers_count":113,"open_issues_count":6,"forks_count":3,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-04T08:27:22.772Z","etag":null,"topics":["ai-tools","cost-tracking","developer-tools","opencode","opencode-plugins"],"latest_commit_sha":null,"homepage":"","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ramtinJ95.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-10-30T07:27:21.000Z","updated_at":"2026-04-04T08:22:06.000Z","dependencies_parsed_at":"2026-04-04T20:00:55.227Z","dependency_job_id":null,"html_url":"https://github.com/ramtinJ95/opencode-tokenscope","commit_stats":null,"previous_names":["ramtinj95/opencode-tokenscope"],"tags_count":13,"template":false,"template_full_name":null,"purl":"pkg:github/ramtinJ95/opencode-tokenscope","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ramtinJ95%2Fopencode-tokenscope","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ramtinJ95%2Fopencode-tokenscope/tags","releases_url":"https://r
epos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ramtinJ95%2Fopencode-tokenscope/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ramtinJ95%2Fopencode-tokenscope/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ramtinJ95","download_url":"https://codeload.github.com/ramtinJ95/opencode-tokenscope/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ramtinJ95%2Fopencode-tokenscope/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31411659,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-04T19:29:44.979Z","status":"ssl_error","status_checked_at":"2026-04-04T19:29:11.535Z","response_time":60,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-tools","cost-tracking","developer-tools","opencode","opencode-plugins"],"created_at":"2026-04-04T20:00:48.177Z","updated_at":"2026-04-04T20:01:42.354Z","avatar_url":"https://github.com/ramtinJ95.png","language":"TypeScript","readme":"# OpenCode-Tokenscope, Token Analyzer Plugin\n\n[![npm version](https://img.shields.io/npm/v/@ramtinj95/opencode-tokenscope.svg)](https://www.npmjs.com/package/@ramtinj95/opencode-tokenscope)\n\n\u003e Comprehensive token usage analysis and cost tracking for OpenCode AI sessions\n\nTrack and optimize your token 
usage across system prompts, user messages, tool outputs, and more. Get detailed breakdowns, accurate cost estimates, and visual insights for your AI development workflow.\n\n## Installation\n\n### Option 1: npm (Recommended)\n\n1. **Install globally:**\n   ```bash\n   npm install -g @ramtinj95/opencode-tokenscope\n   ```\n\n2. **Add to your `opencode.json`** (create one in your project root or `~/.config/opencode/opencode.json` for global config):\n   ```json\n   {\n     \"$schema\": \"https://opencode.ai/config.json\",\n     \"plugin\": [\"@ramtinj95/opencode-tokenscope\"]\n   }\n   ```\n\n3. **Create the `/tokenscope` command** by adding `~/.config/opencode/command/tokenscope.md`:\n\n```bash\nmkdir -p ~/.config/opencode/command\ncat \u003e ~/.config/opencode/command/tokenscope.md \u003c\u003c 'EOF'\n---\ndescription: Analyze token usage across the current session with detailed breakdowns by category\n---\n\nCall the tokenscope tool directly without delegating to other agents.\nThen cat the token-usage-output.txt. DON'T DO ANYTHING ELSE WITH THE OUTPUT.\nEOF\n```\n\n4. **Restart OpenCode** and run `/tokenscope`.\n\nTo always get the latest version automatically, use `@latest`:\n```json\n{\n  \"plugin\": [\"@ramtinj95/opencode-tokenscope@latest\"]\n}\n```\n\n### Option 2: Install Script\n\n```bash\ncurl -sSL https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/plugin/install.sh | bash\n```\n\nThen restart OpenCode and run `/tokenscope`.\n\n## Updating\n\n### If installed via npm:\n\n| Config in `opencode.json` | Behavior |\n|---------------------------|----------|\n| `\"@ramtinj95/opencode-tokenscope\"` | Uses the version installed at install time. **Never auto-updates.** |\n| `\"@ramtinj95/opencode-tokenscope@latest\"` | Fetches latest version **every time OpenCode starts**. |\n| `\"@ramtinj95/opencode-tokenscope@1.6.1\"` | Pins to exact version 1.6.1. Never updates. 
|\n\nTo manually update:\n```bash\nnpm update -g @ramtinj95/opencode-tokenscope\n```\n\nOr use `@latest` in your `opencode.json` to auto-update on OpenCode restart.\n\n### If installed via script:\n\n**Option 1: Local script** (if you have the plugin installed)\n```bash\nbash ~/.config/opencode/plugin/install.sh --update\n```\n\n**Option 2: Remote script** (always works)\n```bash\ncurl -sSL https://raw.githubusercontent.com/ramtinJ95/opencode-tokenscope/main/plugin/install.sh | bash -s -- --update\n```\n\nThe `--update` flag skips dependency installation for faster updates.\n\n## Usage\n\nSimply type in OpenCode:\n```\n/tokenscope\n```\n\nThe plugin will:\n1. Analyze the current session\n2. Count tokens across all categories\n3. Analyze all subagent (Task tool) child sessions recursively\n4. Calculate costs based on API telemetry\n5. Save detailed report to `token-usage-output.txt`\n\n### Options\n\n- **sessionID**: Analyze a specific session instead of the current one\n- **limitMessages**: Limit entries shown per category (1-10, default: 3)\n- **includeSubagents**: Include subagent child session costs (default: true)\n\n### Reading the Full Report\n\n```bash\ncat token-usage-output.txt\n```\n\n## Features\n\n### Comprehensive Token Analysis\n- **5 Category Breakdown**: System prompts, user messages, assistant responses, tool outputs, and reasoning traces\n- **Visual Charts**: Easy-to-read ASCII bar charts with percentages and token counts\n- **Smart Inference**: Automatically infers system prompts from API telemetry (since they're not exposed in session messages)\n\n### Context Breakdown Analysis\n- **System Prompt Components**: See token distribution across base prompt, tool definitions, environment context, project tree, and custom instructions\n- **Automatic Estimation**: Estimates breakdown from `cache_write` tokens when system prompt content isn't directly available\n- **Tool Count**: Shows how many tools are loaded and their combined token cost\n\n### Tool 
Definition Cost Estimates\n- **Per-Tool Estimates**: Lists all enabled tools with estimated schema token costs\n- **Argument Analysis**: Infers argument count and complexity from actual tool calls in the session\n- **Complexity Detection**: Distinguishes between simple arguments and complex ones (arrays/objects)\n\n### Cache Efficiency Metrics\n- **Cache Hit Rate**: Visual display of cache read vs fresh input token distribution\n- **Cost Savings**: Calculates actual savings from prompt caching\n- **Effective Rate**: Shows what you're actually paying per token vs standard rates\n\n### Accurate Cost Tracking\n- **Models.dev Pricing Database**: Pricing data synced from models.dev across thousands of provider/model entries\n- **Cache-Aware Pricing**: Properly handles cache read/write tokens with discounted rates\n- **Per-Call Step Telemetry**: Reads stored `step-finish` records so multi-step assistant turns and tool loops count every API call, not just the final step saved on the assistant message\n- **Session-Wide Billing**: Aggregates costs across all API calls in your session\n\n### Subagent Cost Tracking\n- **Child Session Analysis**: Recursively analyzes all subagent sessions spawned by the Task tool\n- **Aggregated Totals**: Shows combined tokens, costs, and API calls across main session and all subagents\n- **Per-Agent Breakdown**: Lists each subagent with its type, token usage, cost, and API call count\n- **Optional Toggle**: Enable/disable subagent analysis with the `includeSubagents` parameter\n\n### Advanced Features\n- **Tool Usage Stats**: Track which tools consume the most tokens and how many times each is called\n- **API Call Tracking**: See total API calls for main session and subagents\n- **Top Contributors**: Identify the biggest token consumers\n- **Model Normalization**: Handles `provider/model` format automatically\n- **Multi-Tokenizer Support**: Uses official tokenizers (tiktoken for OpenAI, transformers for others)\n- **Configurable Sections**: 
Enable/disable analysis features via `tokenscope-config.json`\n\n### Skill Analysis\n- **Available Skills**: Shows the always-available skill catalog token cost (including the verbose system-prompt catalog OpenCode injects on every API call)\n- **Available Subagents**: Shows all subagents listed in the Task tool definition with their token cost\n- **Loaded Skills**: Tracks skills loaded during the session with call counts\n- **Cumulative Token Tracking**: Accurately counts token cost when skills are called multiple times\n\n## Understanding OpenCode Skill Behavior\n\nThis section explains how OpenCode handles skills and why the token counting works the way it does.\n\n### How Skills Work\n\nSkills are on-demand instructions that agents can load via the `skill` tool. They have multiple token consumption points:\n\n1. **Always-Available Skill Catalog**: Current OpenCode versions inject a verbose XML skill catalog into the system prompt on **every API call**.\n\n2. **Skill Tool Description**: The `skill` tool also includes a compact markdown list of available skills in its tool description, which likewise consumes tokens on every API call.\n\n3. **Loaded Skill Content**: When an agent calls `skill({ name: \"my-skill\" })`, the full SKILL.md content is loaded and returned as a tool result.\n\n### Why Multiple Skill Calls Multiply Token Cost\n\n**Important**: OpenCode does **not** deduplicate skill content. Each time the same skill is called, the full content is added to context again as a new tool result.\n\nThis means if you call `skill({ name: \"git-release\" })` 3 times and it contains 500 tokens:\n- Total context cost = 500 × 3 = **1,500 tokens**\n\nThis behavior is by design in OpenCode. 
You can verify this in the source code:\n\n| Component | Source Link |\n|-----------|-------------|\n| Skill tool execution | [packages/opencode/src/tool/skill.ts](https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/tool/skill.ts) |\n| Tool result handling | [packages/opencode/src/session/message-v2.ts](https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/message-v2.ts) |\n| Skill pruning protection | [packages/opencode/src/session/compaction.ts](https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/compaction.ts) |\n\n### Skill Content is Protected from Pruning\n\nOpenCode protects skill tool results from being pruned during context management. From the [compaction.ts source](https://github.com/anomalyco/opencode/blob/dev/packages/opencode/src/session/compaction.ts):\n\n```typescript\nconst PRUNE_PROTECTED_TOOLS = [\"skill\"]\n```\n\nThis means loaded skill content stays in context for the duration of the session (unless full session compaction/summarization occurs).\n\n### Recommendations\n\n- **Call skills sparingly**: Since each call adds full content, avoid calling the same skill multiple times\n- **Monitor skill token usage**: Use TokenScope to see which skills consume the most tokens\n- **Consider skill size**: Large skills (1000+ tokens) can quickly inflate context when called repeatedly\n\n## Example Output\n\n```\n═══════════════════════════════════════════════════════════════════════════\nToken Analysis: Session ses_50c712089ffeshuuuJPmOoXCPX\nModel: claude-opus-4-5\n═══════════════════════════════════════════════════════════════════════════\n\nTOKEN BREAKDOWN BY CATEGORY\n─────────────────────────────────────────────────────────────────────────\nEstimated using tokenizer analysis of message content:\n\nInput Categories:\n  SYSTEM    ██████████████░░░░░░░░░░░░░░░░    45.8% (22,367)\n  USER      ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░        0.8% (375)\n  TOOLS     ████████████████░░░░░░░░░░░░░░    
53.5% (26,146)\n\n  Subtotal: 48,888 estimated input tokens\n\nOutput Categories:\n  ASSISTANT ██████████████████████████████     100.0% (1,806)\n\n  Subtotal: 1,806 estimated output tokens\n\nLocal Total: 50,694 tokens (estimated)\n\nTOOL USAGE BREAKDOWN\n─────────────────────────────────────────────────────────────────────────\nbash                 ██████████░░░░░░░░░░░░░░░░░░░░     34.0% (8,886)    4x\nread                 ██████████░░░░░░░░░░░░░░░░░░░░     33.1% (8,643)    3x\ntask                 ████████░░░░░░░░░░░░░░░░░░░░░░     27.7% (7,245)    4x\nwebfetch             █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░      4.9% (1,286)    1x\ntokenscope           ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░         0.3% (75)    2x\nbatch                ░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░         0.0% (11)    1x\n\nTOP CONTRIBUTORS\n─────────────────────────────────────────────────────────────────────────\n• System (inferred from API)   22,367 tokens (44.1%)\n• bash                         8,886 tokens (17.5%)\n• read                         8,643 tokens (17.0%)\n• task                         7,245 tokens (14.3%)\n• webfetch                     1,286 tokens (2.5%)\n\n═══════════════════════════════════════════════════════════════════════════\nMOST RECENT API CALL\n─────────────────────────────────────────────────────────────────────────\n\nRaw telemetry from last API response:\n  Input (fresh):              2 tokens\n  Cache read:            48,886 tokens\n  Cache write:               54 tokens\n  Output:                   391 tokens\n  ───────────────────────────────────\n  Total:                 49,333 tokens\n\n═══════════════════════════════════════════════════════════════════════════\nSESSION TOTALS (All 15 API calls)\n─────────────────────────────────────────────────────────────────────────\n\nTotal tokens processed across the entire session (for cost calculation):\n\n  Input tokens:              10 (fresh tokens across all calls)\n  Cache read:           320,479 (cached tokens across all 
calls)\n  Cache write:           51,866 (tokens written to cache)\n  Output tokens:          3,331 (all model responses)\n  ───────────────────────────────────\n  Session Total:        375,686 tokens (for billing)\n\n═══════════════════════════════════════════════════════════════════════════\nESTIMATED SESSION COST (API Key Pricing)\n─────────────────────────────────────────────────────────────────────────\n\nYou appear to be on a subscription plan (API cost is $0).\nHere's what this session would cost with direct API access:\n\n  Input tokens:              10 × $5.00/M  = $0.0001\n  Output tokens:          3,331 × $25.00/M  = $0.0833\n  Cache read:           320,479 × $0.50/M  = $0.1602\n  Cache write:           51,866 × $6.25/M  = $0.3242\n─────────────────────────────────────────────────────────────────────────\nESTIMATED TOTAL: $0.5677\n\nNote: This estimate uses standard API pricing from models.json.\nActual API costs may vary based on provider and context size.\n\n═══════════════════════════════════════════════════════════════════════════\nCONTEXT BREAKDOWN (Estimated from cache_write tokens)\n─────────────────────────────────────────────────────────────────────────\n\n  Base System Prompt   ████████████░░░░░░░░░░░░░░░░░░   ~42,816 tokens\n  Tool Definitions (14)██████░░░░░░░░░░░░░░░░░░░░░░░░    ~4,900 tokens\n  Environment Context  █░░░░░░░░░░░░░░░░░░░░░░░░░░░░░      ~150 tokens\n  Project Tree         ████░░░░░░░░░░░░░░░░░░░░░░░░░░    ~4,000 tokens\n  ───────────────────────────────────────────────────────────────────────\n  Total Cached Context:                                  ~51,866 tokens\n\n  Note: Breakdown estimated from first cache_write. Actual distribution may vary.\n\n═══════════════════════════════════════════════════════════════════════════\nTOOL DEFINITION COSTS (Estimated from argument analysis)\n─────────────────────────────────────────────────────────────────────────\n\n  Tool                Est. 
Tokens   Args   Complexity\n  ───────────────────────────────────────────────────────────────────────\n  task                       ~480      3   complex (arrays/objects)\n  batch                      ~410      1   complex (arrays/objects)\n  edit                       ~370      4   simple\n  read                       ~340      3   simple\n  bash                       ~340      3   simple\n  ───────────────────────────────────────────────────────────────────────\n  Total:                   ~4,520 tokens (14 enabled tools)\n\n  Note: Estimates inferred from tool call arguments in this session.\n        Actual schema tokens may vary +/-20%.\n\n═══════════════════════════════════════════════════════════════════════════\nCACHE EFFICIENCY\n─────────────────────────────────────────────────────────────────────────\n\n  Token Distribution:\n    Cache Read:           320,479 tokens   ████████████████████████████░░  86.2%\n    Fresh Input:           51,320 tokens   ████░░░░░░░░░░░░░░░░░░░░░░░░░░  13.8%\n  ───────────────────────────────────────────────────────────────────────\n  Cache Hit Rate:      86.2%\n\n  Cost Analysis (claude-opus-4-5 @ $5.00/M input, $0.50/M cache read):\n    Without caching:   $1.8590  (371,799 tokens x $5.00/M)\n    With caching:      $0.4169  (fresh x $5.00/M + cached x $0.50/M)\n  ───────────────────────────────────────────────────────────────────────\n  Cost Savings:        $1.4421  (77.6% reduction)\n  Effective Rate:      $1.12/M tokens  (vs. 
$5.00/M standard)\n\n═══════════════════════════════════════════════════════════════════════════\nSUBAGENT COSTS (4 child sessions, 23 API calls)\n─────────────────────────────────────────────────────────────────────────\n\n  docs                         $0.3190  (194,701 tokens, 8 calls)\n  general                      $0.2957  (104,794 tokens, 4 calls)\n  docs                         $0.2736  (69,411 tokens, 4 calls)\n  general                      $0.5006  (197,568 tokens, 7 calls)\n─────────────────────────────────────────────────────────────────────────\nSubagent Total:            $1.3888  (566,474 tokens, 23 calls)\n\n═══════════════════════════════════════════════════════════════════════════\nSUMMARY\n─────────────────────────────────────────────────────────────────────────\n\n                          Cost        Tokens          API Calls\n  Main session:      $    0.5677       375,686            15\n  Subagents:         $    1.3888       566,474            23\n─────────────────────────────────────────────────────────────────────────\n  TOTAL:             $    1.9565       942,160            38\n\n═══════════════════════════════════════════════════════════════════════════\n\n```\n\n## Supported Models\n\n**Pricing data is synced from models.dev across thousands of provider/model entries:**\n\n### Claude Models\n- Claude Opus 4.5, 4.1, 4\n- Claude Sonnet 4, 4-5, 3.7, 3.5, 3\n- Claude Haiku 4-5, 3.5, 3\n\n### OpenAI Models\n- GPT-4, GPT-4 Turbo, GPT-4o, GPT-4o Mini\n- GPT-3.5 Turbo\n- GPT-5 and all its variations\n\n### Other Models\n- DeepSeek (R1, V2, V3)\n- Llama (3.1, 3.2, 3.3)\n- Mistral (Large, Small)\n- Qwen, Kimi, GLM, Grok\n- And more...\n\n**Free/Open models** are marked with zero pricing.\n\n## Configuration\n\nTokenScope now supports a stable user override file at:\n\n```bash\n~/.config/opencode/tokenscope-config.json\n```\n\nOn startup, the plugin loads config in this order:\n1. `~/.config/opencode/tokenscope-config.json`\n2. 
bundled package config: `tokenscope-config.json`\n3. in-code defaults\n\nAny missing keys are filled from the built-in defaults, so you can override only the flags you care about.\nThis is safer than editing the file inside the global npm package directory, because `npm update -g` can replace that installed package and overwrite local changes.\n\nDefault flags:\n\n```json\n{\n  \"enableContextBreakdown\": true,\n  \"enableToolSchemaEstimation\": true,\n  \"enableCacheEfficiency\": true,\n  \"enableSubagentAnalysis\": true,\n  \"enableSkillAnalysis\": true\n}\n```\n\nExample user override:\n\n```json\n{\n  \"enableSubagentAnalysis\": false,\n  \"enableSkillAnalysis\": false\n}\n```\n\nSet any option to `false` to hide that section from the output.\n\n## Troubleshooting\n\n### Command `/tokenscope` Not Appearing\n\n1. Verify `tokenscope.md` exists:\n   ```bash\n   ls ~/.config/opencode/command/tokenscope.md\n   ```\n2. If missing, create it (see Installation step 3)\n3. Restart OpenCode completely\n\n### Wrong Token Counts\n\nThe plugin uses API telemetry (ground truth). If counts seem off:\n- **Expected ~2K difference from TUI**: Plugin analyzes before its own response is added\n- **Approximate fallback warning**: If the report says token counting fell back to approximate mode, reinstall the plugin (`npm install -g @ramtinj95/opencode-tokenscope@latest`) or rerun `~/.config/opencode/plugin/install.sh`\n- **Model detection**: Check that the model name is recognized in the output\n\n## Privacy \u0026 Security\n\n- **All processing is local**: No session data sent to external services\n- **Open source**: Audit the code yourself\n\n## Contributing\n\nContributions welcome! 
Ideas for enhancement:\n\n- Historical trend analysis\n- Export to CSV/JSON/PDF\n- Optimization suggestions\n- Custom categorization rules\n- Real-time monitoring with alerts\n- Compare sessions\n- Token burn rate calculation\n\n## Support\n\n- **Issues**: [GitHub Issues](https://github.com/ramtinJ95/opencode-tokenscope/issues)\n- **Discussions**: [GitHub Discussions](https://github.com/ramtinJ95/opencode-tokenscope/discussions)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Framtinj95%2Fopencode-tokenscope","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Framtinj95%2Fopencode-tokenscope","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Framtinj95%2Fopencode-tokenscope/lists"}