{"id":49122455,"url":"https://github.com/settinghead/voxlert","last_synced_at":"2026-04-21T12:36:23.079Z","repository":{"id":341531956,"uuid":"1170362396","full_name":"settinghead/voxlert","owner":"settinghead","description":"LLM-generated voice notifications for Claude Code, Cursor, OpenAI Codex, pi, and OpenClaw, spoken by game characters like the StarCraft Adjutant, Kerrigan, C\u0026C EVA, SHODAN, and more.","archived":false,"fork":false,"pushed_at":"2026-04-07T05:56:27.000Z","size":30566,"stargazers_count":8,"open_issues_count":4,"forks_count":1,"subscribers_count":0,"default_branch":"main","last_synced_at":"2026-04-07T07:25:11.002Z","etag":null,"topics":["ai-agents","audio-processing","claude-code","cli","coding-assistant","cursor","cursor-ide","developer-tools","gaming-voices","llm","local-first","nodejs","notifications","openai-codex","pi-package","starcraft","text-to-speech","tts","voice-cloning","voice-notifications"],"latest_commit_sha":null,"homepage":"https://www.npmjs.com/package/@settinghead/voiceforge","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/settinghead.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-03-02T03:18:02.000Z","updated_at":"2026-04-07T05:56:31.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/settinghead/voxlert","commit_stats":null,"previous_names":["settinghead/sc-commander","settinghead/voiceforge","settinghead/voxler
t"],"tags_count":1,"template":false,"template_full_name":null,"purl":"pkg:github/settinghead/voxlert","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/settinghead%2Fvoxlert","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/settinghead%2Fvoxlert/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/settinghead%2Fvoxlert/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/settinghead%2Fvoxlert/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/settinghead","download_url":"https://codeload.github.com/settinghead/voxlert/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/settinghead%2Fvoxlert/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":32092185,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-21T11:25:29.218Z","status":"ssl_error","status_checked_at":"2026-04-21T11:25:28.499Z","response_time":128,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-agents","audio-processing","claude-code","cli","coding-assistant","cursor","cursor-ide","developer-tools","gaming-voices","llm","local-first","nodejs","notifications","openai-codex","pi-package","starcraft","text-to-speech","tts","voice-cloning","voice-notifications"],"created_at":"2026-04-21T12:36:22.402Z","updated_at":"2026-04-21T12:36:23.065Z","avatar_url":"https://github.com/settinghead.png","language":"JavaScript","funding_links":[],"categories":[],"sub_categories":[],"readme":"\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://youtu.be/5xFXGijwJuk?utm_source=github\u0026utm_medium=readme\u0026utm_campaign=phase1\"\u003e\n    \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/demo-thumbnail.png\" alt=\"Voxlert Demo\" width=\"100%\" /\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://github.com/settinghead/voxlert/actions/workflows/cli-integration.yml\"\u003e\n    \u003cimg src=\"https://github.com/settinghead/voxlert/actions/workflows/cli-integration.yml/badge.svg\" alt=\"CLI Integration\" /\u003e\n  \u003c/a\u003e\n  \u003ca href=\"https://fazier.com/launches/voxlert\"\u003e\n    \u003cimg src=\"https://fazier.com/api/v1//public/badges/launch_badges.svg?badge_type=launched\u0026theme=light\" alt=\"Launched on Fazier\" /\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\n# Voxlert\n\nLLM-generated voice notifications for [Claude Code](https://docs.anthropic.com/en/docs/claude-code), 
[Cursor](https://cursor.com/docs/agent/hooks), [OpenAI Codex](https://developers.openai.com/codex/), [pi](https://github.com/badlogic/pi-mono), and [OpenClaw](https://openclaw.dev), spoken by game characters like the StarCraft Adjutant, Kerrigan, C\u0026C EVA, SHODAN, and more.\n\n## Why Voxlert?\n\nExisting notification chimes (like [peon-ping](https://github.com/PeonPing/peon-ping)) do a great job of telling you *when* something happened, but not *what* happened or *which* agent needs your attention. If you have several agent sessions running at once, you end up alt-tabbing through windows just to find the one waiting on you.\n\nVoxlert makes each session speak in a distinct character voice with its own tone and vocabulary. You hear *\"Query efficiency restored to nominal\"* from the HEV Suit in one window and *\"Pathetic test suite for code validation processed\"* from SHODAN in another, and you know immediately what changed. Because phrases are generated by an LLM instead of picked from a tiny fixed set, they stay varied instead of becoming wallpaper.\n\n## Who this is for\n\nVoxlert is for users who:\n\n- Run two or more AI coding agent sessions concurrently (Claude Code, Cursor, Codex, pi, OpenClaw)\n- Get interrupted by notification chimes but can't tell which window needs attention\n- Want ambient audio feedback that doesn't require looking at a screen\n- Are comfortable installing local tooling (Node.js, optionally Python for TTS)\n\nIf you run a single agent session and it's always in focus, Voxlert adds personality but not much utility. If you run several at once and context-switch between them, it's meaningfully useful.\n\n## Quick Start\n\n### 1. Install prerequisites\n\n**Minimum:** Node.js 18+ and `afplay` (macOS built-in) or [FFmpeg](docs/installing-ffmpeg.md) (Windows/Linux). 
That's enough to get started; TTS and SoX are optional.\n\n| Aspect | macOS | Windows | Linux |\n|--------|-------|---------|-------|\n| **Node.js 18+** | [nodejs.org](https://nodejs.org) or `brew install node` | [nodejs.org](https://nodejs.org) or `winget install OpenJS.NodeJS` | [nodejs.org](https://nodejs.org) or distro package (for example `sudo apt install nodejs`) |\n| **Audio playback** | Built-in (`afplay`) | [FFmpeg](docs/installing-ffmpeg.md) so `ffplay` is on PATH | [FFmpeg](docs/installing-ffmpeg.md) so `ffplay` is on PATH |\n| **Audio effects** | [SoX](docs/installing-sox.md) (optional) | [SoX](docs/installing-sox.md) (optional) | [SoX](docs/installing-sox.md) (optional) |\n\nSee [Installing FFmpeg](docs/installing-ffmpeg.md) and [Installing SoX](docs/installing-sox.md) for platform-specific commands.\n\nYou will also want:\n\n- An **LLM API key** from [OpenRouter](https://openrouter.ai) (recommended), [OpenAI](https://platform.openai.com/api-keys), [Google Gemini](https://aistudio.google.com/apikey), or [Anthropic](https://console.anthropic.com/settings/keys). You can skip this and use fallback phrases only.\n- At least one **TTS backend** if you want spoken output instead of notifications only.\n\n| Backend | Best for | Requirements |\n|---|---|---|\n| [**Qwen3-TTS**](qwen3-tts-server/README.md) (recommended) | Apple Silicon or NVIDIA GPU | Python 3.13+, 16 GB RAM, ~8 GB disk |\n| [**Chatterbox**](docs/chatterbox-tts.md) | Any platform with GPU | Python 3.10+, CUDA or MPS |\n\nThe setup wizard auto-detects running TTS backends. If none are running yet, setup still completes, but you will only get text notifications and fallback phrases until you start one and rerun setup.\n\n\u003e **Can't run local TTS?** Both backends require a GPU or Apple Silicon. Voxlert still works without TTS — you'll get text notifications and fallback phrases. Need help? 
[Post in Setup help \u0026 troubleshooting](https://github.com/settinghead/voxlert/discussions/6).\n\n### 2. Install and run setup\n\n```bash\nnpx voxlert --onboard\n```\n\nThe setup wizard configures:\n\n- LLM provider and API key\n- Voice pack downloads\n- Active voice pack\n- TTS backend\n- Platform hooks for Claude Code, Cursor, Codex, and pi\n\nFor OpenClaw, install the separate [OpenClaw plugin](docs/openclaw.md).\n\n### 3. Start a TTS backend for spoken voice\n\nStart [Qwen3-TTS](qwen3-tts-server/README.md) or [Chatterbox](docs/chatterbox-tts.md), then run:\n\n```bash\nvoxlert setup\n```\n\nThis lets the wizard detect the backend and store it in config.\n\n### 4. Verify\n\n```bash\nvoxlert test \"Hello\"\n```\n\nYou should hear a phrase and see a notification. If you do not hear speech, check that:\n\n- A TTS server is running\n- `voxlert config` shows the expected `tts_backend`\n\n\u003e **Visual notifications**: Voxlert shows a popup with each phrase without extra install. On macOS you can use the custom overlay or system Notification Center. On Windows and Linux you get system toasts. Change it anytime with:\n\u003e ```bash\n\u003e voxlert notification\n\u003e ```\n\n### From a git clone\n\nRun `npm install` inside `cli/`, then use `node src/cli.js` or link it globally if you prefer. 
Config and cache live in `~/.voxlert` (Windows: `%USERPROFILE%\\.voxlert`).\n\n## Development\n\nRun tests locally with:\n\n```bash\nnpm test\n```\n\n## Supported Voices\n\nThe `sc1-adjutant` preview below uses the animated in-game portrait GIF from `assets/sc1-adjutant.gif`.\n\n| | Pack ID | Voice | Source | Status |\n|---|---------|-------|--------|--------|\n| \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/sc1-adjutant.gif\" width=\"48\" height=\"48\" /\u003e | `sc1-adjutant` | **SC1 Adjutant** | StarCraft | ✅ Available |\n| \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/sc2-adjutant.jpg\" width=\"48\" height=\"48\" /\u003e | `sc2-adjutant` | **SC2 Adjutant** | StarCraft II | ✅ Available |\n| \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/red-alert-eva.png\" width=\"48\" height=\"48\" /\u003e | `red-alert-eva` | **EVA** | Command \u0026 Conquer: Red Alert | ✅ Available |\n| \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/sc1-kerrigan.gif\" width=\"48\" height=\"48\" /\u003e | `sc1-kerrigan` | **SC1 Kerrigan** | StarCraft | ✅ Available |\n| \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/sc1-kerrigan-infested.jpg\" width=\"48\" height=\"48\" /\u003e | `sc1-kerrigan-infested` | **SC1 Infested Kerrigan** | StarCraft | ✅ Available |\n| \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/sc2-kerrigan-infested.jpg\" width=\"48\" height=\"48\" /\u003e | `sc2-kerrigan-infested` | **SC2 Infested Kerrigan** | StarCraft II | ✅ Available |\n| \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/sc1-protoss-advisor.jpg\" width=\"48\" height=\"48\" /\u003e | `sc1-protoss-advisor` | **Protoss Advisor** | StarCraft | ✅ Available |\n| \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/ss1-shodan.png\" width=\"48\" 
height=\"48\" /\u003e | `ss1-shodan` | **SHODAN** | System Shock | ✅ Available |\n| \u003cimg src=\"https://raw.githubusercontent.com/settinghead/voxlert/main/assets/hl-hev-suit.png\" width=\"48\" height=\"48\" /\u003e | `hl-hev-suit` | **HEV Suit** | Half-Life | ✅ Available |\n\nMore coming soon: [Request a voice](https://github.com/settinghead/voxlert/issues/new?title=Voice+request%3A+%5BCharacter+Name%5D\u0026body=**Character%3A**+%0A**Game%2FSource%3A**+%0A**Why%3A**+)\n\nPick a voice pack interactively with:\n\n```bash\nvoxlert voice\n```\n\n## Integrations\n\n### Claude Code\n\nInstalled through `voxlert setup`. Claude Code hook events are processed by:\n\n```bash\nvoxlert hook\n```\n\n### Cursor\n\nInstalled through `voxlert setup`, or add hooks manually in `~/.cursor/hooks.json`:\n\n```json\n{\n  \"version\": 1,\n  \"hooks\": {\n    \"sessionStart\": [{ \"command\": \"voxlert cursor-hook\", \"timeout\": 10 }],\n    \"sessionEnd\": [{ \"command\": \"voxlert cursor-hook\", \"timeout\": 10 }],\n    \"stop\": [{ \"command\": \"voxlert cursor-hook\", \"timeout\": 10 }],\n    \"postToolUseFailure\": [{ \"command\": \"voxlert cursor-hook\", \"timeout\": 10 }],\n    \"preCompact\": [{ \"command\": \"voxlert cursor-hook\", \"timeout\": 10 }]\n  }\n}\n```\n\n| Cursor Hook Event | Voxlert Event | Category |\n|---|---|---|\n| `sessionStart` | SessionStart | `session.start` |\n| `sessionEnd` | SessionEnd | `session.end` |\n| `stop` | Stop | `task.complete` |\n| `postToolUseFailure` | PostToolUseFailure | `task.error` |\n| `preCompact` | PreCompact | `resource.limit` |\n\nRestart Cursor after installing or changing hooks. See [Cursor integration](docs/cursor.md) for details.\n\n### Codex\n\nVoxlert uses Codex's `notify` config so that completed agent turns call:\n\n```bash\nvoxlert codex-notify\n```\n\n`voxlert setup` can install or update the `notify` entry in `~/.codex/config.toml`. 
See [Codex integration](docs/codex.md).\n\n### pi\n\nInstalled through `voxlert setup`, which copies a TypeScript extension to `~/.pi/agent/extensions/voxlert.ts`. Alternatively, install the pi package directly:\n\n```bash\npi install npm:@settinghead/pi-voxlert\n```\n\nThe extension hooks into pi's lifecycle events and pipes them through `voxlert hook`:\n\n| pi Event | Voxlert Event | Category |\n|---|---|---|\n| `agent_end` | Stop | `task.complete` |\n| `tool_result` (error) | PostToolUseFailure | `task.error` |\n| `session_shutdown` | SessionEnd | `session.end` |\n| `session_before_compact` | PreCompact | `resource.limit` |\n\nThe extension also registers a `/voxlert` command (test, status) and a `voxlert_speak` tool that lets the LLM speak phrases on demand.\n\nRun `/reload` in pi or start a new session after installing. See the [pi-voxlert README](pi-package/README.md) for details.\n\n### OpenClaw\n\nOpenClaw uses a separate plugin. See [OpenClaw integration](docs/openclaw.md) for installation, config, and troubleshooting.\n\n## Common Commands\n\n```bash\nvoxlert setup                  # Interactive setup wizard\nvoxlert voice                  # Interactive voice pack picker\nvoxlert pack list              # List available voice packs\nvoxlert pack show              # Show active pack details\nvoxlert pack use \u003cpack-id\u003e     # Switch active voice pack\nvoxlert config                 # Show current configuration\nvoxlert config set \u003ckey\u003e \u003cval\u003e # Set a config value\nvoxlert volume                 # Show or change playback volume\nvoxlert notification           # Choose popup / system / off\nvoxlert test \"\u003ctext\u003e\"          # Run the full pipeline\nvoxlert log                    # Stream activity log\nvoxlert uninstall              # Remove installed integrations\nvoxlert help                   # Show full help\n```\n\n## How It Works\n\n```mermaid\nflowchart TD\n    A1[Claude Code Hook] --\u003e B[voxlert.sh]\n    
A2[OpenClaw Plugin] --\u003e B\n    A3[Cursor Hook] --\u003e B\n    A4[Codex notify] --\u003e B\n    A5[pi Extension] --\u003e B\n    B --\u003e C[src/voxlert.js]\n    C --\u003e D{Event type?}\n    D -- \"Contextual (e.g. Stop)\" --\u003e E[LLM\u003cbr\u003e\u003ci\u003egenerate in-character phrase\u003c/i\u003e]\n    D -- \"Other events\" --\u003e F[Fallback phrases\u003cbr\u003e\u003ci\u003efrom voice pack\u003c/i\u003e]\n    E --\u003e G{TTS backend?}\n    F --\u003e G\n    G -- Chatterbox --\u003e G1[Chatterbox TTS\u003cbr\u003e\u003ci\u003elocal speech synthesis\u003c/i\u003e]\n    G -- Qwen3 --\u003e G2[Qwen3-TTS\u003cbr\u003e\u003ci\u003elocal speech synthesis\u003c/i\u003e]\n    G1 --\u003e H[Audio processing\u003cbr\u003e\u003ci\u003eecho · normalize · post-process\u003c/i\u003e]\n    G2 --\u003e H\n    H --\u003e I[(Cache\u003cbr\u003e\u003ci\u003eLRU, keyed by phrase + params\u003c/i\u003e)]\n    I --\u003e J[Playback queue\u003cbr\u003e\u003ci\u003eserial via file lock\u003c/i\u003e]\n    J --\u003e K[afplay / ffplay]\n```\n\n1. A hook or notify event fires from Claude Code, Cursor, Codex, pi, or OpenClaw.\n2. Voxlert maps it to an event category and loads the active voice pack.\n3. Contextual events such as task completion or tool failure can use the configured LLM to generate a short in-character phrase.\n4. Other events use predefined fallback phrases from the pack.\n5. The chosen phrase is synthesized by the configured TTS backend.\n6. Audio is optionally post-processed, cached, then played through a serialized queue.\n\n### What does it cost?\n\nThe LLM step (turning events into in-character phrases) uses a small, cheap model — not Claude. Each notification costs a fraction of a cent via OpenRouter, or **zero** if you use a local LLM. TTS and audio run entirely on your machine at zero cost. 
You can also skip the LLM entirely and use only fallback phrases from the voice pack (no API key needed).\n\n### Fully local mode (no cloud at all)\n\nVoxlert supports local LLM servers for the phrase generation step. Run `voxlert setup` and choose **\"Local LLM (Ollama / LM Studio / llama.cpp)\"**. Any OpenAI-compatible local server works:\n\n| Server | Default URL |\n|--------|------------|\n| [Ollama](https://ollama.ai) | `http://localhost:11434/v1` |\n| [LM Studio](https://lmstudio.ai) | `http://localhost:1234/v1` |\n| [llama.cpp server](https://github.com/ggerganov/llama.cpp) | `http://localhost:8080/v1` |\n\nCombined with local TTS (Qwen3-TTS), this gives you a completely offline setup — no API keys, no cloud, no cost.\n\n## Configuration\n\nRun `voxlert config path` to find `config.json`. You can edit it directly or use `voxlert setup` and `voxlert config set`.\n\n| Field | Type | Default | Description |\n|---|---|---|---|\n| `enabled` | boolean | `true` | Master on/off switch |\n| `llm_backend` | string | `\"openrouter\"` | LLM provider: `openrouter`, `openai`, `gemini`, `anthropic`, or `local` |\n| `llm_api_key` | string \\| null | `null` | API key for the chosen LLM provider |\n| `llm_model` | string \\| null | `null` | Model ID (`null` = provider default) |\n| `openrouter_api_key` | string \\| null | `null` | Legacy alias used when `llm_backend` is `openrouter` and `llm_api_key` is empty |\n| `openrouter_model` | string \\| null | `null` | Legacy alias used when `llm_model` is empty and backend is `openrouter` |\n| `chatterbox_url` | string | `\"http://localhost:8004\"` | Chatterbox TTS server URL |\n| `tts_backend` | string | `\"qwen\"` | TTS backend: `qwen` or `chatterbox` |\n| `active_pack` | string | `\"sc1-kerrigan-infested\"` | Active voice pack ID |\n| `volume` | number | `1.0` | Playback volume (0.0-1.0) |\n| `categories` | object | — | Per-category enable/disable settings |\n| `logging` | boolean | `true` | Activity log in 
`~/.voxlert/voxlert.log` |\n| `error_log` | boolean | `false` | Fallback/error log in `~/.voxlert/fallback.log` |\n\n### Event categories\n\nEvent categories apply across Claude Code, Cursor, Codex, pi, and OpenClaw where the corresponding event exists.\n\n| Category | Hook Event | Description | Default |\n|---|---|---|---|\n| `session.start` | SessionStart | New session begins | on |\n| `session.end` | SessionEnd | Session ends | on |\n| `task.complete` | Stop | Agent finishes a task | on |\n| `task.acknowledge` | UserPromptSubmit | User sends a prompt | off |\n| `task.error` | PostToolUseFailure | A tool call fails | on |\n| `input.required` | PermissionRequest | Agent needs user approval | on |\n| `resource.limit` | PreCompact | Context window nearing limit | on |\n| `notification` | Notification | General notification | on |\n\nOmitted categories default to enabled. Set a category to `false` to disable it, or to `true` to re-enable it:\n\n```bash\nvoxlert config set categories.task.complete true\nvoxlert config set categories.task.acknowledge false\nvoxlert config set categories.session.start true\n```\n\n### Logging\n\n- Activity logging is on by default and writes one line per event to `~/.voxlert/voxlert.log`.\n- Error logging is off by default and records fallback situations in `~/.voxlert/fallback.log`.\n- Debug logging for hook sources is written to `~/.voxlert/hook-debug.log`.\n\nUseful commands:\n\n```bash\nvoxlert log\nvoxlert log on\nvoxlert log off\nvoxlert log path\nvoxlert log error on\nvoxlert log error off\nvoxlert log error-path\n```\n\nYou can also manage configuration interactively with the `/voxlert-config` slash command in Claude Code.\n\n### Integration behavior\n\n- `voxlert setup` installs hooks for Claude Code, Cursor, Codex, and pi.\n- Re-run setup anytime to add a platform you skipped earlier.\n- `voxlert uninstall` removes Claude Code, Cursor, Codex, and pi integration.\n- OpenClaw is managed separately through its plugin.\n- The global `enabled` flag disables 
processing everywhere; there is no separate per-integration toggle in `config.json`.\n\n## Full CLI Reference\n\n```bash\nvoxlert setup                  # Interactive setup wizard (LLM, voice, TTS, hooks)\nvoxlert hook                   # Process hook event from stdin (Claude Code)\nvoxlert cursor-hook            # Process hook event from stdin (Cursor)\nvoxlert codex-notify           # Process notify payload from argv (Codex)\nvoxlert config                 # Show current configuration\nvoxlert config show            # Show current configuration\nvoxlert config set \u003ck\u003e \u003cv\u003e     # Set a config value (supports categories.X dot notation)\nvoxlert config path            # Print config file path\nvoxlert log                    # Stream activity log (tail -f style)\nvoxlert log path               # Print activity log file path\nvoxlert log error-path         # Print error/fallback log file path\nvoxlert log on | off           # Enable or disable activity logging\nvoxlert log error on | off     # Enable or disable error logging\nvoxlert voice                  # Interactive voice pack picker\nvoxlert pack list              # List available voice packs\nvoxlert pack show              # Show active pack details\nvoxlert pack use \u003cpack-id\u003e     # Switch active voice pack\nvoxlert volume                 # Show current volume and prompt for new value\nvoxlert volume \u003c0-100\u003e         # Set playback volume (0 = mute, 100 = max)\nvoxlert notification           # Choose notification style (popup / system / off)\nvoxlert test \"\u003ctext\u003e\"          # Run full pipeline: LLM -\u003e TTS -\u003e audio playback\nvoxlert cost                   # Show accumulated token usage and estimated cost\nvoxlert cost reset             # Clear the usage log\nvoxlert uninstall              # Remove hooks from Claude Code, Cursor, Codex, and pi, optionally config/cache\nvoxlert help                   # Show help\nvoxlert --version              # Show 
version\n```\n\n## Platform Notes\n\n- **Windows**: Install [Node.js](https://nodejs.org) and [FFmpeg](docs/installing-ffmpeg.md). Ensure the npm global bin directory is on PATH so hooks can find `voxlert` or `voxlert.cmd`.\n- **Linux**: Install Node and [FFmpeg](docs/installing-ffmpeg.md) so `ffplay` is on PATH.\n- **macOS**: Playback uses the built-in `afplay`; install [SoX](docs/installing-sox.md) if you want optional effects and processing.\n\n## Uninstall\n\n```bash\nvoxlert uninstall\nnpm uninstall -g @settinghead/voxlert\n```\n\nThis removes Voxlert hooks from Claude Code, Cursor, Codex, and pi, the `voxlert-config` skill, and optionally your local config and cache in `~/.voxlert`.\n\n## Advanced\n\nSee [Creating Voice Packs](docs/creating-voice-packs.md) for building your own character voice packs.\n\n## Credits\n\n- **Protoss Advisor** voice pack inspired by [openclaw/protoss-voice](https://playbooks.com/skills/openclaw/skills/protoss-voice)\n\n## Need help?\n\nHaving trouble with setup? Post in the [Setup help \u0026 troubleshooting Discussion](https://github.com/settinghead/voxlert/discussions/6).\n\n## License\n\nMIT - see [LICENSE](LICENSE).\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsettinghead%2Fvoxlert","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsettinghead%2Fvoxlert","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsettinghead%2Fvoxlert/lists"}