{"id":35582843,"url":"https://github.com/tobilg/ai-observer","last_synced_at":"2026-04-02T18:04:41.388Z","repository":{"id":332409694,"uuid":"1121732381","full_name":"tobilg/ai-observer","owner":"tobilg","description":"Unified local observability for AI coding assistants","archived":false,"fork":false,"pushed_at":"2026-01-05T12:25:23.000Z","size":2261,"stargazers_count":144,"open_issues_count":0,"forks_count":9,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-01-13T19:48:39.349Z","etag":null,"topics":["ai","claude-code","codex-cli","duckdb","gemini-cli","observability","opentelemetry"],"latest_commit_sha":null,"homepage":"","language":"Go","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tobilg.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-12-23T13:13:06.000Z","updated_at":"2026-01-13T05:52:59.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/tobilg/ai-observer","commit_stats":null,"previous_names":["tobilg/ai-observer"],"tags_count":2,"template":false,"template_full_name":null,"purl":"pkg:github/tobilg/ai-observer","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tobilg%2Fai-observer","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tobilg%2Fai-observer/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tobilg%2Fai-observer/releases","manifests_url"
:"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tobilg%2Fai-observer/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tobilg","download_url":"https://codeload.github.com/tobilg/ai-observer/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tobilg%2Fai-observer/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29738081,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-23T04:36:46.119Z","status":"ssl_error","status_checked_at":"2026-02-23T04:36:25.794Z","response_time":90,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","claude-code","codex-cli","duckdb","gemini-cli","observability","opentelemetry"],"created_at":"2026-01-04T21:11:02.226Z","updated_at":"2026-04-02T18:04:41.374Z","avatar_url":"https://github.com/tobilg.png","language":"Go","readme":"# AI Observer\n\n\u003e Unified local observability for AI coding assistants\n\n**AI Observer** is a self-hosted, single-binary, OpenTelemetry-compatible observability backend designed specifically for monitoring local AI coding tools like Claude Code, Gemini CLI, and OpenAI Codex CLI.\n\nTrack token usage, costs, API latency, error rates, and session activity across all your AI coding assistants in one unified dashboard—with real-time updates and zero external dependencies.\n\n## Why AI Observer?\n\nAI coding 
assistants are becoming essential development tools, but understanding their behavior and costs remains a challenge:\n\n- **Visibility**: See exactly how your AI tools are performing across sessions\n- **Cost tracking**: Monitor token usage and API calls to understand spending\n- **Debugging**: Trace errors and slow responses back to specific interactions\n- **Privacy**: Keep your telemetry data local—no third-party services required\n\n## Features\n\n- **Multi-tool support** — Works with Claude Code, Gemini CLI, and OpenAI Codex CLI\n- **Real-time dashboard** — Live updates via WebSocket as telemetry arrives\n- **Customizable widgets** — Drag-and-drop dashboard builder with multiple widget types\n- **Historical import** — Import past sessions from local JSONL/JSON files with cost calculation\n- **Cost tracking** — Embedded pricing data for 67+ models across Claude, Codex, and Gemini\n- **Fast analytics** — DuckDB-powered storage for instant queries on large datasets\n- **Single binary** — One ~54MB executable with embedded frontend—no external dependencies\n- **Multi-arch Docker** — Ready-to-run ~97MB images for `linux/amd64` and `linux/arm64`\n- **OTLP-native** — Standard OpenTelemetry Protocol ingestion (HTTP/JSON and HTTP/Protobuf)\n\n## Documentation\n\n- [Import Command](docs/import.md) — Import historical session data from local AI tool files\n- [Export Command](docs/export.md) — Export telemetry data to Parquet files for archiving and sharing\n- [Pricing System](docs/pricing.md) — Cost calculation for Claude, Codex, and Gemini models\n\n## Screenshots\n\n### Dashboard\n\n![AI Observer Dashboard](docs/images/dashboard.png)\n\n### Metrics View\n\n![AI Observer Metrics](docs/images/metrics.png)\n\n### Logs View\n\n![AI Observer Logs](docs/images/logs.png)\n\n### Traces View\n\n![AI Observer traces](docs/images/traces.png)\n\n## Quick Start\n\n### Using Docker (Recommended)\n\n```bash\ndocker run -d \\\n  -p 8080:8080 \\\n  -p 4318:4318 \\\n  -v 
ai-observer-data:/app/data \\\n  --name ai-observer \\\n  tobilg/ai-observer:latest\n```\n\nDashboard: http://localhost:8080\n\n**Using a local directory for data persistence:**\n\n```bash\n# Create a local data directory\nmkdir -p ./ai-observer-data\n\n# Run with local volume mount\ndocker run -d \\\n  -p 8080:8080 \\\n  -p 4318:4318 \\\n  -v $(pwd)/ai-observer-data:/app/data \\\n  -e AI_OBSERVER_DATABASE_PATH=/app/data/ai-observer.duckdb \\\n  --name ai-observer \\\n  tobilg/ai-observer:latest\n```\n\nThis stores the DuckDB database in your local `./ai-observer-data` directory, making it easy to back up or inspect.\n\n### Using Homebrew (macOS Apple Silicon)\n\n```bash\nbrew tap tobilg/ai-observer\nbrew install ai-observer\nai-observer\n```\n\n### Using Binary\n\nDownload the latest release for your platform from [Releases](https://github.com/tobilg/ai-observer/releases), then:\n\n```bash\n./ai-observer\n```\n\n### Building from Source\n\n```bash\ngit clone https://github.com/tobilg/ai-observer.git\ncd ai-observer\nmake setup   # Install dependencies\nmake all     # Build single binary with embedded frontend\n./bin/ai-observer\n```\n\n## Configuration\n\n### Environment Variables\n\n| Variable | Default | Description |\n|----------|---------|-------------|\n| `AI_OBSERVER_API_PORT` | `8080` | HTTP server port (dashboard + API) |\n| `AI_OBSERVER_OTLP_PORT` | `4318` | OTLP ingestion port |\n| `AI_OBSERVER_DATABASE_PATH` | `./data/ai-observer.duckdb` (binary) or `/app/data/ai-observer.duckdb` (Docker) | DuckDB database file path |\n| `AI_OBSERVER_FRONTEND_URL` | `http://localhost:5173` | Allowed CORS origin (dev mode) |\n| `AI_OBSERVER_LOG_LEVEL` | `INFO` | Log level: `DEBUG`, `INFO`, `WARN`, `ERROR` |\n\nCORS and WebSocket origins allow `AI_OBSERVER_FRONTEND_URL` plus `http://localhost:5173` and `http://localhost:8080`; set `AI_OBSERVER_FRONTEND_URL` when serving a custom UI origin.\n\n### CLI Options\n\n```bash\nai-observer [command] 
[options]\n```\n\n**Commands:**\n| Command | Description |\n|---------|-------------|\n| `import` | Import local sessions from AI tool files |\n| `export` | Export telemetry data to Parquet files |\n| `delete` | Delete telemetry data from database |\n| `setup` | Show setup instructions for AI tools |\n| `serve` | Start the OTLP server (default if no command) |\n\n**Global Options:**\n| Option | Description |\n|--------|-------------|\n| `-h`, `--help` | Show help message and exit |\n| `-v`, `--version` | Show version information and exit |\n\n**Examples:**\n\n```bash\n# Start the server (default, no command needed)\nai-observer\n\n# Show version\nai-observer --version\n\n# Show setup instructions for Claude Code\nai-observer setup claude-code\n\n# Import data from all AI tools\nai-observer import all\n\n# Export data to Parquet files\nai-observer export all --output ./export\n\n# Delete data in a date range\nai-observer delete all --from 2025-01-01 --to 2025-01-31\n```\n\n### Import Command\n\nImport historical session data from local AI coding tool files into AI Observer.\n\n```bash\nai-observer import [claude-code|codex|gemini|all] [options]\n```\n\n| Option | Description |\n|--------|-------------|\n| `--from DATE` | Only import sessions from DATE (YYYY-MM-DD) |\n| `--to DATE` | Only import sessions up to DATE (YYYY-MM-DD) |\n| `--force` | Re-import already imported files |\n| `--dry-run` | Show what would be imported without making changes |\n| `--skip-confirm` | Skip confirmation prompt |\n| `--purge` | Delete existing data in time range before importing |\n| `--pricing-mode MODE` | Cost calculation mode for Claude: `auto` (default), `calculate`, `display` |\n| `--verbose` | Show detailed progress |\n\n**File locations:**\n\n| Tool | Default Location |\n|------|------------------|\n| Claude Code | `~/.claude/projects/**/*.jsonl` |\n| Codex CLI | `~/.codex/sessions/*.jsonl` |\n| Gemini CLI | `~/.gemini/tmp/**/session-*.json` |\n\nOverride with environment 
variables: `AI_OBSERVER_CLAUDE_PATH`, `AI_OBSERVER_CODEX_PATH`, `AI_OBSERVER_GEMINI_PATH`\n\n**Examples:**\n\n```bash\n# Import from all tools\nai-observer import all\n\n# Import Claude data from specific date range\nai-observer import claude-code --from 2025-01-01 --to 2025-12-31\n\n# Dry run to see what would be imported\nai-observer import all --dry-run\n\n# Force re-import and recalculate costs\nai-observer import claude-code --force --pricing-mode calculate\n```\n\nSee [docs/import.md](docs/import.md) for detailed documentation and [docs/pricing.md](docs/pricing.md) for pricing calculation details.\n\n### Export Command\n\nExport telemetry data to portable Parquet files with an optional DuckDB views database.\n\n```bash\nai-observer export [claude-code|codex|gemini|all] --output \u003cdirectory\u003e [options]\n```\n\n| Option | Description |\n|--------|-------------|\n| `--output DIR` | Output directory (required) |\n| `--from DATE` | Start date filter (YYYY-MM-DD) |\n| `--to DATE` | End date filter (YYYY-MM-DD) |\n| `--from-files` | Read from raw JSON/JSONL files instead of database |\n| `--zip` | Create single ZIP archive of exported files |\n| `--dry-run` | Preview what would be exported |\n| `--verbose` | Show detailed progress |\n| `--yes` | Skip confirmation prompt |\n\n**Output files:**\n- `traces.parquet` — All trace/span data\n- `logs.parquet` — All log records\n- `metrics.parquet` — All metric data points\n- `ai-observer-export-{SOURCE}-{RANGE}.duckdb` — Views database with relative paths\n\n**Examples:**\n\n```bash\n# Export all data from database\nai-observer export all --output ./export\n\n# Export Claude data with date filter\nai-observer export claude-code --output ./export --from 2025-01-01 --to 2025-01-15\n\n# Export to ZIP archive\nai-observer export all --output ./export --zip\n\n# Export directly from raw files (without prior import)\nai-observer export claude-code --output ./export --from-files\n\n# Dry run to preview export\nai-observer 
export all --output ./export --dry-run\n```\n\nSee [docs/export.md](docs/export.md) for detailed documentation.\n\n### Delete Command\n\nDelete telemetry data from the database by time range.\n\n```bash\nai-observer delete [logs|metrics|traces|all] --from DATE --to DATE [options]\n```\n\n| Option | Description |\n|--------|-------------|\n| `--from DATE` | Start date (YYYY-MM-DD, required) |\n| `--to DATE` | End date (YYYY-MM-DD, required) |\n| `--service NAME` | Only delete data for specific service |\n| `--yes` | Skip confirmation prompt |\n\n**Examples:**\n\n```bash\n# Delete all data in a date range\nai-observer delete all --from 2025-01-01 --to 2025-01-31\n\n# Delete only logs in a date range\nai-observer delete logs --from 2025-01-01 --to 2025-01-31\n\n# Delete only Claude Code data\nai-observer delete all --from 2025-01-01 --to 2025-01-31 --service claude-code\n\n# Skip confirmation prompt\nai-observer delete all --from 2025-01-01 --to 2025-01-31 --yes\n```\n\n### AI Tool Setup\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eClaude Code\u003c/strong\u003e\u003c/summary\u003e\n\nConfigure the following environment variables:\n\n```bash\n# Enable telemetry (required)\nexport CLAUDE_CODE_ENABLE_TELEMETRY=1\n\n# Configure exporters\nexport OTEL_METRICS_EXPORTER=otlp\nexport OTEL_LOGS_EXPORTER=otlp\n\n# Set OTLP endpoint (HTTP)\nexport OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf\nexport OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318\n\n# Set shorter intervals\nexport OTEL_METRIC_EXPORT_INTERVAL=10000  # 10 seconds (default: 60000ms)\nexport OTEL_LOGS_EXPORT_INTERVAL=5000     # 5 seconds (default: 5000ms)\n```\n\nAdd these to your `~/.bashrc`, `~/.zshrc`, or shell profile to persist across sessions.\n\nClaude Code will then automatically send metrics and events to AI Observer.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eGemini CLI\u003c/strong\u003e\u003c/summary\u003e\n\nWe assume you have at least Gemini CLI 
in version `v0.34.0`, because all versions before it had a bug in OTLP publishing.\n\nAdd to `~/.gemini/settings.json`:\n\n```json\n{\n  \"telemetry\": {\n    \"enabled\": true,\n    \"target\": \"local\",\n    \"useCollector\": true,\n    \"otlpEndpoint\": \"http://localhost:4318\",\n    \"otlpProtocol\": \"http\",\n    \"logPrompts\": true\n  }\n}\n```\n\n**Required environment variables** (workaround for Gemini CLI timing issues):\n\n```bash\nexport OTEL_METRIC_EXPORT_TIMEOUT=10000\nexport OTEL_LOGS_EXPORT_TIMEOUT=5000\n```\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eOpenAI Codex CLI\u003c/strong\u003e\u003c/summary\u003e\n\nAdd to `~/.codex/config.toml`:\n\n```toml\n[otel]\nlog_user_prompt = true  # set to false to redact prompts\nexporter = { otlp-http = { endpoint = \"http://localhost:4318/v1/logs\", protocol = \"binary\" } }\ntrace_exporter = { otlp-http = { endpoint = \"http://localhost:4318/v1/traces\", protocol = \"binary\" } }\n```\n\n\u003e **Note**: Codex CLI exports logs and traces (no metrics). 
The `trace_exporter` option is undocumented but available—if omitted, traces are sent to the same endpoint as logs.\n\n\u003c/details\u003e\n\n## Architecture\n\n```\n┌─────────────────┐     ┌─────────────────┐     ┌─────────────────┐\n│   Claude Code   │     │   Gemini CLI    │     │   Codex CLI     │\n└────────┬────────┘     └────────┬────────┘     └────────┬────────┘\n         │                       │                       │\n         │ OTLP/HTTP             │ OTLP/HTTP             │ OTLP/HTTP\n         │ (traces, metrics,     │ (traces, metrics,     │ (logs)\n         │  logs)                │  logs)                │\n         └───────────────────────┼───────────────────────┘\n                                 │\n                                 ▼\n                    ┌────────────────────────┐\n                    │     AI Observer        │\n                    │  ┌──────────────────┐  │\n                    │  │   OTLP Ingestion │  │  ← Port 4318\n                    │  │   (HTTP/Proto)   │  │\n                    │  └────────┬─────────┘  │\n                    │           │            │\n                    │  ┌────────▼─────────┐  │\n                    │  │     DuckDB       │  │\n                    │  │   (Analytics)    │  │\n                    │  └────────┬─────────┘  │\n                    │           │            │\n                    │  ┌────────▼─────────┐  │\n                    │  │   REST API +     │  │  ← Port 8080\n                    │  │   WebSocket Hub  │  │\n                    │  └────────┬─────────┘  │\n                    │           │            │\n                    │  ┌────────▼─────────┐  │\n                    │  │  React Dashboard │  │\n                    │  │   (embedded)     │  │\n                    │  └──────────────────┘  │\n                    └────────────────────────┘\n```\n\n**Tech Stack**:\n- **Backend**: Go 1.24+, chi router, DuckDB, gorilla/websocket\n- **Frontend**: React 19, TypeScript, Vite, Tailwind CSS v4, 
Zustand, Recharts\n\n## API Reference\n\nAI Observer exposes two HTTP servers:\n\n### OTLP Ingestion (Port 4318)\n\nStandard OpenTelemetry Protocol endpoints for receiving telemetry data.\n- Transport is HTTP/1.1 + h2c (no gRPC listener exposed); `Content-Encoding: gzip` is supported for compressed payloads.\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| `POST` | `/v1/traces` | Ingest trace spans (protobuf or JSON) |\n| `POST` | `/v1/metrics` | Ingest metrics (protobuf or JSON) |\n| `POST` | `/v1/logs` | Ingest logs (protobuf or JSON) |\n| `GET` | `/health` | Health check |\n\n### Query API (Port 8080)\n\nREST API for querying stored telemetry data. Unless otherwise specified, `from`/`to` default to the last 24 hours.\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eTraces\u003c/strong\u003e\u003c/summary\u003e\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| `GET` | `/api/traces` | List traces with filtering and pagination |\n| `GET` | `/api/traces/recent` | Get most recent traces |\n| `GET` | `/api/traces/{traceId}` | Get a specific trace |\n| `GET` | `/api/traces/{traceId}/spans` | Get all spans for a trace |\n\n**Query parameters for `/api/traces`:**\n- `service` — Filter by service name\n- `search` — Full-text search\n- `from`, `to` — Time range (ISO 8601)\n- `limit`, `offset` — Pagination\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eMetrics\u003c/strong\u003e\u003c/summary\u003e\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| `GET` | `/api/metrics` | List metrics with filtering |\n| `GET` | `/api/metrics/names` | List all metric names |\n| `GET` | `/api/metrics/series` | Get time series data for a metric |\n| `POST` | `/api/metrics/batch-series` | Get multiple time series in one request |\n\n**Query parameters for `/api/metrics/series`:**\n- `name` — Metric name (required)\n- `service` — Filter by service\n- `from`, 
`to` — Time range (ISO 8601)\n- `interval` — Aggregation interval (e.g., `1 minute`, `1 hour`)\n- `aggregate` — Aggregate all series into one (default: `false`)\n\n**Batch series (`POST /api/metrics/batch-series`) request body:**\n- Each query requires `id` and `name`; optional `service`, `aggregate`, `interval`.\n- Maximum 50 queries per request.\n- `from`/`to` in the body also default to the last 24 hours if omitted.\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eLogs\u003c/strong\u003e\u003c/summary\u003e\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| `GET` | `/api/logs` | List logs with filtering and pagination |\n| `GET` | `/api/logs/levels` | Get log counts by severity level |\n\n**Query parameters for `/api/logs`:**\n- `service` — Filter by service name\n- `severity` — Filter by severity (TRACE, DEBUG, INFO, WARN, ERROR, FATAL)\n- `traceId` — Filter logs linked to a specific trace\n- `search` — Full-text search\n- `from`, `to` — Time range (ISO 8601)\n- `limit`, `offset` — Pagination\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eDashboards\u003c/strong\u003e\u003c/summary\u003e\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| `GET` | `/api/dashboards` | List all dashboards |\n| `POST` | `/api/dashboards` | Create a new dashboard |\n| `GET` | `/api/dashboards/default` | Get the default dashboard with widgets |\n| `GET` | `/api/dashboards/{id}` | Get a dashboard by ID |\n| `PUT` | `/api/dashboards/{id}` | Update a dashboard |\n| `DELETE` | `/api/dashboards/{id}` | Delete a dashboard |\n| `PUT` | `/api/dashboards/{id}/default` | Set as default dashboard |\n| `POST` | `/api/dashboards/{id}/widgets` | Add a widget |\n| `PUT` | `/api/dashboards/{id}/widgets/positions` | Update widget positions |\n| `PUT` | `/api/dashboards/{id}/widgets/{widgetId}` | Update a widget |\n| `DELETE` | `/api/dashboards/{id}/widgets/{widgetId}` | 
Delete a widget |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eOther\u003c/strong\u003e\u003c/summary\u003e\n\n| Method | Endpoint | Description |\n|--------|----------|-------------|\n| `GET` | `/api/services` | List all services sending telemetry |\n| `GET` | `/api/stats` | Get aggregate statistics |\n| `GET` | `/ws` | WebSocket for real-time updates |\n| `GET` | `/health` | Health check |\n\n\u003c/details\u003e\n\n## Data Collected\n\nAI Observer receives standard OpenTelemetry data:\n\n| Signal | Description | Example Data |\n|--------|-------------|--------------|\n| **Traces** | Distributed tracing spans | API calls, tool executions, session timelines |\n| **Metrics** | Numeric measurements | Token counts, latency histograms, request rates |\n| **Logs** | Structured log records | Errors, prompts (if enabled), system events |\n\nAll data is stored locally in DuckDB. Nothing is sent to external services.\n\n## Telemetry Reference\n\nEach AI coding tool exports different telemetry signals. 
Here's what you can observe:\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eClaude Code Metrics \u0026 Events\u003c/strong\u003e\u003c/summary\u003e\n\n### Metrics\n\n| Metric | Display Name | Type | Description |\n|--------|--------------|------|-------------|\n| `claude_code.session.count` | Sessions | Counter | CLI sessions started |\n| `claude_code.token.usage` | Token Usage | Counter | Tokens used (by type: input/output/cache) |\n| `claude_code.cost.usage` | Cost | Counter | Session cost in USD |\n| `claude_code.lines_of_code.count` | Lines of Code | Counter | Lines of code modified (added/removed) |\n| `claude_code.pull_request.count` | Pull Requests | Counter | Pull requests created |\n| `claude_code.commit.count` | Commits | Counter | Git commits created |\n| `claude_code.code_edit_tool.decision` | Edit Decisions | Counter | Tool permission decisions (accept/reject) |\n| `claude_code.active_time.total` | Active Time | Counter | Active time in seconds |\n\n**Common attributes**: `session.id`, `organization.id`, `user.account_uuid`, `terminal.type`, `model`\n\n### Derived Metrics\n\nAI Observer computes user-facing metrics that filter out tool-routing API calls (which have no cache tokens). These metrics match the token counts shown by tools like [ccusage](https://github.com/ryoppippi/ccusage):\n\n| Metric | Display Name | Description |\n|--------|--------------|-------------|\n| `claude_code.token.usage_user_facing` | Token Usage (User-Facing) | Tokens from user-facing API calls only (excludes tool-routing) |\n| `claude_code.cost.usage_user_facing` | Cost (User-Facing) | Cost from user-facing API calls only (excludes tool-routing) |\n\n\u003e **Note**: Claude Code makes internal API calls for tool routing that don't involve user interaction. These calls have no cache tokens. 
The user-facing metrics exclude these calls to provide counts that match what users see in their billing and usage reports.\n\n### Events (Logs)\n\n| Event | Display Name | Description | Key Attributes |\n|-------|--------------|-------------|----------------|\n| `claude_code.user_prompt` | User Prompt | User submits a prompt | `prompt_length`, `prompt` (if enabled) |\n| `claude_code.api_request` | API Request | API request to Claude | `model`, `cost_usd`, `duration_ms`, `input_tokens`, `output_tokens` |\n| `claude_code.api_error` | API Error | Failed API request | `error`, `status_code`, `attempt` |\n| `claude_code.tool_result` | Tool Result | Tool execution completes | `tool_name`, `success`, `duration_ms`, `decision` |\n| `claude_code.tool_decision` | Tool Decision | Permission decision made | `tool_name`, `decision`, `source` |\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\n\u003csummary\u003e\u003cstrong\u003eGemini CLI Metrics \u0026 Logs\u003c/strong\u003e\u003c/summary\u003e\n\n### Metrics\n\n| Metric | Display Name | Type | Description |\n|--------|--------------|------|-------------|\n| `gemini_cli.session.count` | Sessions (Cumulative) | Counter | Sessions started (cumulative) |\n| `gemini_cli.token.usage` | Token Usage (Cumulative) | Counter | Tokens by type (cumulative) |\n| `gemini_cli.cost.usage` | Cost | Counter | Session cost in USD |\n| `gemini_cli.api.request.count` | API Requests (Cumulative) | Counter | API requests by model and status (cumulative) |\n| `gemini_cli.api.request.latency` | API Latency | Histogram | API request duration (ms) |\n| `gemini_cli.api.request.breakdown` | API Request Breakdown | Histogram | Request phase analysis (ms) |\n| `gemini_cli.tool.call.count` | Tool Calls | Counter | Tool invocations with success/decision |\n| `gemini_cli.tool.call.latency` | Tool Latency | Histogram | Tool execution duration (ms) |\n| `gemini_cli.tool.queue.depth` | Tool Queue Depth | Histogram | Number of pending tools in queue |\n| 
`gemini_cli.tool.execution.breakdown` | Tool Execution Breakdown | Histogram | Phase-level tool execution durations (ms) |\n| `gemini_cli.file.operation.count` | File Operations (Cumulative) | Counter | File operations by type and language (cumulative) |\n| `gemini_cli.lines.changed` | Lines Changed | Counter | Lines added/removed |\n| `gemini_cli.agent.run.count` | Agent Runs | Counter | Agent executions |\n| `gemini_cli.agent.duration` | Agent Duration | Histogram | Agent run duration (ms) |\n| `gemini_cli.agent.turns` | Agent Turns | Histogram | Interaction iterations per agent run |\n| `gemini_cli.startup.duration` | Startup Duration | Histogram | Initialization time by phase (ms) |\n| `gemini_cli.memory.usage` | Memory Usage | Histogram | Memory consumption (bytes) |\n| `gemini_cli.cpu.usage` | CPU Usage | Histogram | Processor utilization (%) |\n| `gemini_cli.chat_compression` | Chat Compression | Counter | Context compression events |\n| `gemini_cli.chat.invalid_chunk.count` | Invalid Chunks | Counter | Malformed stream data count |\n| `gemini_cli.chat.content_retry.count` | Content Retries | Counter | Recovery attempt count |\n| `gemini_cli.chat.content_retry_failure.count` | Retry Failures | Counter | Exhausted retry attempts count |\n| `gemini_cli.slash_command.model.call_count` | Model Commands | Counter | Model selections via slash commands |\n| `gemini_cli.model_routing.latency` | Routing Latency | Histogram | Router decision timing (ms) |\n| `gemini_cli.model_routing.failure.count` | Routing Failures | Counter | Model routing failure count |\n| `gemini_cli.ui.flicker.count` | UI Flicker | Counter | Rendering instability events |\n| `gemini_cli.token.efficiency` | Token Efficiency | Histogram | Output quality metrics ratio |\n| `gemini_cli.performance.score` | Performance Score | Histogram | Composite performance benchmark |\n| `gemini_cli.performance.regression` | Performance Regressions | Counter | Performance degradation count |\n| 
`gemini_cli.performance.regression.percentage_change` | Regression Percentage | Histogram | Performance variance magnitude (%) |\n| `gemini_cli.performance.baseline.comparison` | Baseline Comparison | Histogram | Performance baseline drift (%) |\n| `gen_ai.client.token.usage` | GenAI Token Usage (Cumulative) | Histogram | Token consumption (OTel semantic convention) |\n| `gen_ai.client.operation.duration` | GenAI Operation Duration | Histogram | Operation timing in seconds (OTel semantic convention) |\n\n### Derived Metrics\n\nAI Observer computes delta metrics from cumulative counters to show per-interval changes:\n\n| Metric | Display Name | Description |\n|--------|--------------|-------------|\n| `gemini_cli.session.count.delta` | Sessions | Sessions per interval |\n| `gemini_cli.token.usage.delta` | Token Usage | Tokens consumed per interval |\n| `gemini_cli.api.request.count.delta` | API Requests | API requests per interval |\n| `gemini_cli.file.operation.count.delta` | File Operations | File operations per interval |\n| `gen_ai.client.token.usage.delta` | GenAI Token Usage | Token consumption per interval (OTel semantic convention) |\n\n### Logs\n\n| Log | Display Name | Description |\n|-----|--------------|-------------|\n| `gemini_cli.config` | Config | Startup configuration (model, sandbox, tools, extensions) |\n| `gemini_cli.user_prompt` | User Prompt | User prompt with length and auth type |\n| `gemini_cli.api_request` | API Request | API request details |\n| `gemini_cli.api_response` | API Response | Response with token counts and finish reason |\n| `gemini_cli.api_error` | API Error | Failed requests with error details |\n| `gemini_cli.tool_call` | Tool Call | Tool execution with duration and arguments |\n| `gemini_cli.file_operation` | File Operation | File create/read/update operations |\n| `gemini_cli.agent.start` / `agent.finish` | Agent Start/Finish | Agent lifecycle events |\n| `gemini_cli.model_routing` | Model Routing | Routing decisions with 
latency |
| `gemini_cli.chat_compression` | Chat Compression | Context compression events |
| `gemini_cli.conversation_finished` | Conversation Finished | Session completion with turn count |

</details>

<details>
<summary><strong>OpenAI Codex CLI Metrics & Events</strong></summary>

Codex CLI exports logs and traces directly. AI Observer derives metrics from these log events.

### Derived Metrics

AI Observer computes these metrics from Codex CLI log events:

| Metric | Display Name | Type | Description |
|--------|--------------|------|-------------|
| `codex_cli_rs.token.usage` | Token Usage | Counter | Tokens by type (input/output/cache/reasoning/tool) |
| `codex_cli_rs.cost.usage` | Cost | Counter | Session cost in USD |

### Events (Logs)

| Event | Display Name | Description | Key Attributes |
|-------|--------------|-------------|----------------|
| `codex.conversation_starts` | Sessions | Session initialization | Model, reasoning config, sandbox mode |
| `codex.api_request` | API Requests | API request to OpenAI | Duration, HTTP status, token counts |
| `codex.sse_event` | SSE Events | Streamed response chunk (filtered out / not stored) | Response metrics |
| `codex.user_prompt` | User Prompts | User prompt submitted | Character length (content redacted by default) |
| `codex.tool_decision` | Tool Decisions | Tool permission decision | Approval/denial status, decision source |
| `codex.tool_result` | Tool Results | Tool execution result | Duration, success status, output preview |

> **Note**: `codex.sse_event` events are filtered out by AI Observer to reduce noise, since one is emitted for every SSE streaming chunk from the API.

### Traces

Codex CLI uses a **single trace per session**: all operations within a CLI session share the same trace ID, with spans nested hierarchically:

```
TraceID (session-level)
└── run_task
    ├── run_turn (conversation turn 1)
    │   ├── try_run_turn
    │   ├── receiving_stream
    │   │   ├── reasoning / function_call
    │   │   └── receiving
    │   └── ...
    ├── run_turn (conversation turn 2)
    └── ...
```

This means a long CLI session produces one trace with thousands of spans stretching over hours, rather than many short traces.

**AI Observer Trace Handling**: To improve usability, AI Observer treats each first-level child span (a direct child of the session root) as a separate "virtual trace" in the dashboard. This splits long sessions into manageable units. However, since spans may arrive out of order, you may briefly see intermediate states where a span appears as its own trace before its parent arrives; once the parent span is received, the child automatically merges into the parent's trace on the next query refresh.

</details>

## Understanding Token Metrics: OTLP vs Local Files

When comparing token usage from AI Observer's OTLP ingestion with tools like [ccusage](https://github.com/ryoppippi/ccusage) that parse local session files, you may notice significant differences in reported values. This is expected: the two sources use different counting semantics.

### Example Comparison

Here's a real comparison from a single day of Claude Code usage:

| Token Type | ccusage | OTLP | OTLP/ccusage |
|------------|---------|------|--------------|
| **Input** | 84,103 | 681,669 | **8.1x** |
| **Output** | 5,073 | 445,143 | **87.8x** |
| **Cache Create** | 3,856,624 | 4,854,456 | 1.26x |
| **Cache Read** | 59,803,276 | 62,460,204 | 1.04x |
| **Total** | 63,749,076 | 68,441,472 | 1.07x |
| **Cost** | $48.35 | $65.94 | 1.36x |

### Why This Happens

The discrepancy is most pronounced for **input** and **output** tokens:

1. **Claude Code OTLP metrics** appear to report tokens differently than the API response's `usage` object that gets written to JSONL files.

2. **Local JSONL files** store the exact `usage.input_tokens` and `usage.output_tokens` values from Claude's API response, which ccusage reads directly.

3. **Cache tokens** (creation and read) are much closer between the two sources, suggesting these are counted consistently.

### Token Type Comparison

| Token Type | OTLP vs Local File Ratio | Notes |
|------------|-------------------------|-------|
| **Input** | ~8x higher in OTLP | Largest discrepancy |
| **Output** | ~80-90x higher in OTLP | Significant discrepancy |
| **Cache Creation** | ~1.2-1.3x (similar) | Minor difference |
| **Cache Read** | ~1.0x (nearly identical) | Consistent counting |

### Which Data Source Should I Use?

| Use Case | Recommended Source |
|----------|-------------------|
| **Billing verification** | Local files / ccusage (matches API billing) |
| **Understanding API load** | OTLP metrics (shows actual tokens transmitted) |
| **Cost tracking** | Either (both calculate costs correctly) |
| **Historical analysis** | Import command (`ai-observer import`) for ccusage-compatible data |

### Reconciling the Data

If you need ccusage-compatible metrics in AI Observer:

```bash
# Import from local files instead of relying on OTLP
ai-observer import claude-code --from 2025-01-01 --to 2025-12-31
```

Imported data uses the same token counting as ccusage and will show matching values.

### Technical Details

- OTLP metrics arrive with `aggregationTemporality: 1` (DELTA), meaning each data point is a per-request value
- The `type` attribute distinguishes token types: `input`, `output`, `cacheCreation`, `cacheRead`
- Imported metrics include an `import_source: local_jsonl` attribute to distinguish them from OTLP data
- OTLP metrics have no `import_source` attribute (or it's null)

## Development

### Developer quickstart

```bash
make setup          # install Go + frontend deps
make backend-dev    # terminal 1: run API/OTLP server on 8080/4318
make frontend-dev   # terminal 2: Vite dev server on http://localhost:5173
# browse http://localhost:5173 (API + /ws proxied to :8080)
```

### Prerequisites

- Go 1.24+
- Node.js 20+
- pnpm
- Make

### Commands

```bash
make setup        # Install all dependencies
make dev          # Run backend + frontend in dev mode
make test         # Run all tests
make lint         # Run linters
make clean        # Clean build artifacts
```

### Project Structure

```
ai-observer/
├── backend/
│   ├── cmd/server/       # Main entry point
│   ├── internal/
│   │   ├── api/          # API types and helpers
│   │   ├── deleter/      # Data deletion logic
│   │   ├── exporter/     # Parquet export and views database
│   │   ├── handlers/     # HTTP handlers
│   │   ├── importer/     # Historical data import (Claude, Codex, Gemini)
│   │   ├── otlp/         # OTLP decoders (proto/JSON)
│   │   ├── pricing/      # Embedded pricing data and cost calculation
│   │   ├── server/       # Server setup and routing
│   │   ├── storage/      # DuckDB storage layer
│   │   └── websocket/    # Real-time updates
│   └── pkg/compression/  # GZIP decompression
├── frontend/
│   ├── src/
│   │   ├── components/   # React components
│   │   ├── pages/        # Page components
│   │   ├── stores/       # Zustand stores
│   │   └── lib/          # Utilities
│   └── ...
├── docs/                 # Documentation
└── Makefile
```

## CI/CD

GitHub Actions automatically:

| Trigger | Actions |
|---------|---------|
| Push/PR | Run tests (Go + frontend) |
| Push | Build binaries (linux/amd64, darwin/arm64, windows/amd64) |
| Tag `v*` | Create GitHub Release with archives |
| Tag `v*` | Push multi-arch Docker images |
| Release published | Update Homebrew formula in [ai-observer-homebrew](https://github.com/tobilg/ai-observer-homebrew) tap |

### Creating a Release

```bash
git tag v1.0.0
git push origin v1.0.0
```

## Troubleshooting

<details>
<summary><strong>Port already in use</strong></summary>

Change the ports using environment variables:

```bash
AI_OBSERVER_API_PORT=9090 AI_OBSERVER_OTLP_PORT=4319 ./ai-observer
```

</details>

<details>
<summary><strong>No data appearing in dashboard</strong></summary>

1. Verify your AI tool is configured correctly
2. Check that the OTLP endpoint is reachable: `curl http://localhost:4318/health`
3. Look for errors in the AI Observer logs

</details>

<details>
<summary><strong>CORS errors in browser console</strong></summary>

Set the `AI_OBSERVER_FRONTEND_URL` environment variable to match your frontend origin:

```bash
AI_OBSERVER_FRONTEND_URL=http://localhost:3000 ./ai-observer
```

</details>

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.

## Acknowledgments

- Built with [OpenTelemetry](https://opentelemetry.io/) standards
- Powered by [DuckDB](https://duckdb.org/) for fast analytics
- UI components from [shadcn/ui](https://ui.shadcn.com/)