https://github.com/settinghead/voxlert
LLM-generated voice notifications for Claude Code, Cursor, OpenAI Codex, pi, and OpenClaw, spoken by game characters like the StarCraft Adjutant, Kerrigan, C&C EVA, SHODAN, and more.
- Host: GitHub
- URL: https://github.com/settinghead/voxlert
- Owner: settinghead
- License: mit
- Created: 2026-03-02T03:18:02.000Z (2 months ago)
- Default Branch: main
- Last Pushed: 2026-04-07T05:56:27.000Z (about 1 month ago)
- Last Synced: 2026-04-07T07:25:11.002Z (about 1 month ago)
- Topics: ai-agents, audio-processing, claude-code, cli, coding-assistant, cursor, cursor-ide, developer-tools, gaming-voices, llm, local-first, nodejs, notifications, openai-codex, pi-package, starcraft, text-to-speech, tts, voice-cloning, voice-notifications
- Language: JavaScript
- Homepage: https://www.npmjs.com/package/@settinghead/voiceforge
- Size: 29.2 MB
- Stars: 8
- Watchers: 0
- Forks: 1
- Open Issues: 4
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
# Voxlert
LLM-generated voice notifications for [Claude Code](https://docs.anthropic.com/en/docs/claude-code), [Cursor](https://cursor.com/docs/agent/hooks), [OpenAI Codex](https://developers.openai.com/codex/), [pi](https://github.com/badlogic/pi-mono), and [OpenClaw](https://openclaw.dev), spoken by game characters like the StarCraft Adjutant, Kerrigan, C&C EVA, SHODAN, and more.
## Why Voxlert?
Existing notification chimes (like [peon-ping](https://github.com/PeonPing/peon-ping)) do a great job of telling you *when* something happened, but not *what* happened or *which* agent needs your attention. If you have several agent sessions running at once, you end up alt-tabbing through windows just to find the one waiting on you.
Voxlert makes each session speak in a distinct character voice with its own tone and vocabulary. You hear *"Query efficiency restored to nominal"* from the HEV Suit in one window and *"Pathetic test suite for code validation processed"* from SHODAN in another, and you know immediately what changed. Because phrases are generated by an LLM instead of picked from a tiny fixed set, they stay varied instead of becoming wallpaper.
## Who this is for
Voxlert is for users who:
- Run two or more AI coding agent sessions concurrently (Claude Code, Cursor, Codex, pi, OpenClaw)
- Get interrupted by notification chimes but can't tell which window needs attention
- Want ambient audio feedback that doesn't require looking at a screen
- Are comfortable installing local tooling (Node.js, optionally Python for TTS)
If you run a single agent session and it's always in focus, Voxlert adds personality but not much utility. If you run several at once and context-switch between them, it's meaningfully useful.
## Quick Start
### 1. Install prerequisites
**Minimum:** Node.js 18+ and `afplay` (macOS built-in) or [FFmpeg](docs/installing-ffmpeg.md) (Windows/Linux). That's enough to get started; TTS and SoX are optional.
| Aspect | macOS | Windows | Linux |
|--------|-------|---------|-------|
| **Node.js 18+** | [nodejs.org](https://nodejs.org) or `brew install node` | [nodejs.org](https://nodejs.org) or `winget install OpenJS.NodeJS` | [nodejs.org](https://nodejs.org) or distro package (for example `sudo apt install nodejs`) |
| **Audio playback** | Built-in (`afplay`) | [FFmpeg](docs/installing-ffmpeg.md) so `ffplay` is on PATH | [FFmpeg](docs/installing-ffmpeg.md) so `ffplay` is on PATH |
| **Audio effects** | [SoX](docs/installing-sox.md) (optional) | [SoX](docs/installing-sox.md) (optional) | [SoX](docs/installing-sox.md) (optional) |
See [Installing FFmpeg](docs/installing-ffmpeg.md) and [Installing SoX](docs/installing-sox.md) for platform-specific commands.
You will also want:
- An **LLM API key** from [OpenRouter](https://openrouter.ai) (recommended), [OpenAI](https://platform.openai.com/api-keys), [Google Gemini](https://aistudio.google.com/apikey), or [Anthropic](https://console.anthropic.com/settings/keys). You can skip this and use fallback phrases only.
- At least one **TTS backend** if you want spoken output instead of notifications only.
| Backend | Best for | Requirements |
|---|---|---|
| [**Qwen3-TTS**](qwen3-tts-server/README.md) (recommended) | Apple Silicon or NVIDIA GPU | Python 3.13+, 16 GB RAM, ~8 GB disk |
| [**Chatterbox**](docs/chatterbox-tts.md) | Any platform with GPU | Python 3.10+, CUDA or MPS |
The setup wizard auto-detects running TTS backends. If none are running yet, setup still completes, but you will only get text notifications and fallback phrases until you start one and rerun setup.
> **Can't run local TTS?** Both backends require a GPU or Apple Silicon. Voxlert still works without TTS — you'll get text notifications and fallback phrases. Need help? [Post in Setup help & troubleshooting](https://github.com/settinghead/voxlert/discussions/6).
### 2. Install and run setup
```bash
npx voxlert --onboard
```
The setup wizard configures:
- LLM provider and API key
- Voice pack downloads
- Active voice pack
- TTS backend
- Platform hooks for Claude Code, Cursor, Codex, and pi
For OpenClaw, install the separate [OpenClaw plugin](docs/openclaw.md).
### 3. Start a TTS backend for spoken voice
Start [Qwen3-TTS](qwen3-tts-server/README.md) or [Chatterbox](docs/chatterbox-tts.md), then run:
```bash
voxlert setup
```
This lets the wizard detect the backend and store it in config.
### 4. Verify
```bash
voxlert test "Hello"
```
You should hear a phrase and see a notification. If you do not hear speech, check that:
- A TTS server is running
- `voxlert config` shows the expected `tts_backend`
> **Visual notifications**: Voxlert shows a popup with each phrase, with no extra installation required. On macOS you can use the custom overlay or the system Notification Center. On Windows and Linux you get system toasts. Change it anytime with:
> ```bash
> voxlert notification
> ```
### From a git clone
Run `npm install` inside `cli/`, then use `node src/cli.js` or link it globally if you prefer. Config and cache live in `~/.voxlert` (Windows: `%USERPROFILE%\.voxlert`).
## Development
Run tests locally with:
```bash
npm test
```
## Supported Voices
An animated in-game portrait preview for the SC1 Adjutant is available at `assets/sc1-adjutant.gif`.
| Pack ID | Voice | Source | Status |
|---------|-------|--------|--------|
| `sc1-adjutant` | **SC1 Adjutant** | StarCraft | ✅ Available |
| `sc2-adjutant` | **SC2 Adjutant** | StarCraft II | ✅ Available |
| `red-alert-eva` | **EVA** | Command & Conquer: Red Alert | ✅ Available |
| `sc1-kerrigan` | **SC1 Kerrigan** | StarCraft | ✅ Available |
| `sc1-kerrigan-infested` | **SC1 Infested Kerrigan** | StarCraft | ✅ Available |
| `sc2-kerrigan-infested` | **SC2 Infested Kerrigan** | StarCraft II | ✅ Available |
| `sc1-protoss-advisor` | **Protoss Advisor** | StarCraft | ✅ Available |
| `ss1-shodan` | **SHODAN** | System Shock | ✅ Available |
| `hl-hev-suit` | **HEV Suit** | Half-Life | ✅ Available |
More coming soon: [Request a voice](https://github.com/settinghead/voxlert/issues/new?title=Voice+request%3A+%5BCharacter+Name%5D&body=**Character%3A**+%0A**Game%2FSource%3A**+%0A**Why%3A**+)
Preview and switch voice packs interactively with:
```bash
voxlert voice
```
## Integrations
### Claude Code
Installed through `voxlert setup`. Claude Code hook events are processed by:
```bash
voxlert hook
```
### Cursor
Installed through `voxlert setup`, or add hooks manually in `~/.cursor/hooks.json`:
```json
{
"version": 1,
"hooks": {
"sessionStart": [{ "command": "voxlert cursor-hook", "timeout": 10 }],
"sessionEnd": [{ "command": "voxlert cursor-hook", "timeout": 10 }],
"stop": [{ "command": "voxlert cursor-hook", "timeout": 10 }],
"postToolUseFailure": [{ "command": "voxlert cursor-hook", "timeout": 10 }],
"preCompact": [{ "command": "voxlert cursor-hook", "timeout": 10 }]
}
}
```
| Cursor Hook Event | Voxlert Event | Category |
|---|---|---|
| `sessionStart` | SessionStart | `session.start` |
| `sessionEnd` | SessionEnd | `session.end` |
| `stop` | Stop | `task.complete` |
| `postToolUseFailure` | PostToolUseFailure | `task.error` |
| `preCompact` | PreCompact | `resource.limit` |
Restart Cursor after installing or changing hooks. See [Cursor integration](docs/cursor.md) for details.
### Codex
Voxlert uses Codex's `notify` config so that completed agent turns call:
```bash
voxlert codex-notify
```
`voxlert setup` can install or update the `notify` entry in `~/.codex/config.toml`. See [Codex integration](docs/codex.md).
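For reference, the installed entry takes roughly this shape (a sketch; `voxlert setup` writes the exact value, and the command may be a full path on your system):

```toml
# ~/.codex/config.toml
# Codex invokes this program when an agent turn completes.
notify = ["voxlert", "codex-notify"]
```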
### pi
Installed through `voxlert setup`, which copies a TypeScript extension to `~/.pi/agent/extensions/voxlert.ts`. Alternatively, install the pi package directly:
```bash
pi install npm:@settinghead/pi-voxlert
```
The extension hooks into pi's lifecycle events and pipes them through `voxlert hook`:
| pi Event | Voxlert Event | Category |
|---|---|---|
| `agent_end` | Stop | `task.complete` |
| `tool_result` (error) | PostToolUseFailure | `task.error` |
| `session_shutdown` | SessionEnd | `session.end` |
| `session_before_compact` | PreCompact | `resource.limit` |
The extension also registers a `/voxlert` command (test, status) and a `voxlert_speak` tool that lets the LLM speak phrases on demand.
Run `/reload` in pi or start a new session after installing. See the [pi-voxlert README](pi-package/README.md) for details.
### OpenClaw
OpenClaw uses a separate plugin. See [OpenClaw integration](docs/openclaw.md) for installation, config, and troubleshooting.
## Common Commands
```bash
voxlert setup # Interactive setup wizard
voxlert voice # Interactive voice pack picker
voxlert pack list # List available voice packs
voxlert pack show # Show active pack details
voxlert pack use # Switch active voice pack
voxlert config # Show current configuration
voxlert config set # Set a config value
voxlert volume # Show or change playback volume
voxlert notification # Choose popup / system / off
voxlert test "<phrase>"  # Run the full pipeline
voxlert log # Stream activity log
voxlert uninstall # Remove installed integrations
voxlert help # Show full help
```
## How It Works
```mermaid
flowchart TD
    A1[Claude Code Hook] --> B[voxlert.sh]
    A2[OpenClaw Plugin] --> B
    A3[Cursor Hook] --> B
    A4[Codex notify] --> B
    A5[pi Extension] --> B
    B --> C[src/voxlert.js]
    C --> D{Event type?}
    D -- "Contextual (e.g. Stop)" --> E[LLM<br/>generate in-character phrase]
    D -- "Other events" --> F[Fallback phrases<br/>from voice pack]
    E --> G{TTS backend?}
    F --> G
    G -- Chatterbox --> G1[Chatterbox TTS<br/>local speech synthesis]
    G -- Qwen3 --> G2[Qwen3-TTS<br/>local speech synthesis]
    G1 --> H[Audio processing<br/>echo · normalize · post-process]
    G2 --> H
    H --> I[(Cache<br/>LRU, keyed by phrase + params)]
    I --> J[Playback queue<br/>serial via file lock]
    J --> K[afplay / ffplay]
```
1. A hook or notify event fires from Claude Code, Cursor, Codex, pi, or OpenClaw.
2. Voxlert maps it to an event category and loads the active voice pack.
3. Contextual events such as task completion or tool failure can use the configured LLM to generate a short in-character phrase.
4. Other events use predefined fallback phrases from the pack.
5. The chosen phrase is synthesized by the configured TTS backend.
6. Audio is optionally post-processed, cached, then played through a serialized queue.
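Steps 2-4 can be sketched as follows. This is illustrative only, not Voxlert's actual source: the event names and categories come from the tables in this README, but which categories count as "contextual" here is an assumption based on the diagram.

```javascript
// Map hook events to Voxlert categories (from the Event categories table).
const EVENT_CATEGORIES = {
  SessionStart: "session.start",
  SessionEnd: "session.end",
  Stop: "task.complete",
  UserPromptSubmit: "task.acknowledge",
  PostToolUseFailure: "task.error",
  PermissionRequest: "input.required",
  PreCompact: "resource.limit",
  Notification: "notification",
};

// Assumption: task completion and tool failure are the "contextual" events.
const CONTEXTUAL = new Set(["task.complete", "task.error"]);

function choosePhraseSource(hookEvent) {
  const category = EVENT_CATEGORIES[hookEvent];
  if (!category) return { category: null, source: "ignored" };
  // Contextual events may call the configured LLM; everything else
  // falls back to predefined phrases from the active voice pack.
  return { category, source: CONTEXTUAL.has(category) ? "llm" : "fallback" };
}

console.log(choosePhraseSource("Stop"));         // → { category: 'task.complete', source: 'llm' }
console.log(choosePhraseSource("SessionStart")); // → { category: 'session.start', source: 'fallback' }
```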
### What does it cost?
The LLM step (turning events into in-character phrases) uses a small, cheap model — not Claude. Each notification costs a fraction of a cent via OpenRouter, or **zero** if you use a local LLM. TTS and audio run entirely on your machine at zero cost. You can also skip the LLM entirely and use only fallback phrases from the voice pack (no API key needed).
### Fully local mode (no cloud at all)
Voxlert supports local LLM servers for the phrase generation step. Run `voxlert setup` and choose **"Local LLM (Ollama / LM Studio / llama.cpp)"**. Any OpenAI-compatible local server works:
| Server | Default URL |
|--------|------------|
| [Ollama](https://ollama.ai) | `http://localhost:11434/v1` |
| [LM Studio](https://lmstudio.ai) | `http://localhost:1234/v1` |
| [llama.cpp server](https://github.com/ggerganov/llama.cpp) | `http://localhost:8080/v1` |
Combined with local TTS (Qwen3-TTS), this gives you a completely offline setup — no API keys, no cloud, no cost.
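In `config.json` terms, local mode just means setting `llm_backend` to `local` (see the Configuration table below). A sketch of the relevant fragment, with a hypothetical Ollama model name:

```json
{
  "llm_backend": "local",
  "llm_model": "llama3.2"
}
```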
## Configuration
Run `voxlert config path` to find `config.json`. You can edit it directly or use `voxlert setup` and `voxlert config set`.
| Field | Type | Default | Description |
|---|---|---|---|
| `enabled` | boolean | `true` | Master on/off switch |
| `llm_backend` | string | `"openrouter"` | LLM provider: `openrouter`, `openai`, `gemini`, `anthropic`, or `local` |
| `llm_api_key` | string \| null | `null` | API key for the chosen LLM provider |
| `llm_model` | string \| null | `null` | Model ID (`null` = provider default) |
| `openrouter_api_key` | string \| null | `null` | Legacy alias used when `llm_backend` is `openrouter` and `llm_api_key` is empty |
| `openrouter_model` | string \| null | `null` | Legacy alias used when `llm_model` is empty and backend is `openrouter` |
| `chatterbox_url` | string | `"http://localhost:8004"` | Chatterbox TTS server URL |
| `tts_backend` | string | `"qwen"` | TTS backend: `qwen` or `chatterbox` |
| `active_pack` | string | `"sc1-kerrigan-infested"` | Active voice pack ID |
| `volume` | number | `1.0` | Playback volume (0.0-1.0 in `config.json`; the `voxlert volume` CLI uses a 0-100 scale) |
| `categories` | object | — | Per-category enable/disable settings |
| `logging` | boolean | `true` | Activity log in `~/.voxlert/voxlert.log` |
| `error_log` | boolean | `false` | Fallback/error log in `~/.voxlert/fallback.log` |
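For orientation, a minimal `config.json` using the defaults above might look like this (a sketch assembled from the field table, not a verbatim file from the project; legacy aliases and `categories` omitted):

```json
{
  "enabled": true,
  "llm_backend": "openrouter",
  "llm_api_key": null,
  "llm_model": null,
  "tts_backend": "qwen",
  "chatterbox_url": "http://localhost:8004",
  "active_pack": "sc1-kerrigan-infested",
  "volume": 1.0,
  "logging": true,
  "error_log": false
}
```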
### Event categories
Event categories apply across Claude Code, Cursor, Codex, pi, and OpenClaw where the corresponding event exists.
| Category | Hook Event | Description | Default |
|---|---|---|---|
| `session.start` | SessionStart | New session begins | on |
| `session.end` | SessionEnd | Session ends | on |
| `task.complete` | Stop | Agent finishes a task | on |
| `task.acknowledge` | UserPromptSubmit | User sends a prompt | off |
| `task.error` | PostToolUseFailure | A tool call fails | on |
| `input.required` | PermissionRequest | Agent needs user approval | on |
| `resource.limit` | PreCompact | Context window nearing limit | on |
| `notification` | Notification | General notification | on |
Omitted categories default to enabled. Use `voxlert config set` with dot notation to enable (`true`) or disable (`false`) a category:
```bash
voxlert config set categories.task.complete true
voxlert config set categories.task.acknowledge false
voxlert config set categories.session.start true
```
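Note that category names themselves contain dots, so `categories.task.complete` addresses the key `task.complete` inside the `categories` object rather than a deeply nested path. A sketch of how such a setter might behave (illustrative, not Voxlert's implementation):

```javascript
// Everything after the "categories." prefix is treated as a single key,
// because category names like "task.complete" contain dots themselves.
function setConfigValue(config, path, value) {
  if (path.startsWith("categories.")) {
    config.categories = config.categories || {};
    config.categories[path.slice("categories.".length)] = value;
    return config;
  }
  config[path] = value;
  return config;
}

const cfg = setConfigValue({}, "categories.task.acknowledge", false);
console.log(cfg); // { categories: { 'task.acknowledge': false } }
```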
### Logging
- Activity logging is on by default and writes one line per event to `~/.voxlert/voxlert.log`.
- Error logging is off by default and records fallback situations in `~/.voxlert/fallback.log`.
- Debug logging for hook sources is written to `~/.voxlert/hook-debug.log`.
Useful commands:
```bash
voxlert log
voxlert log on
voxlert log off
voxlert log path
voxlert log error on
voxlert log error off
voxlert log error-path
```
You can also manage configuration interactively with the `/voxlert-config` slash command in Claude Code.
### Integration behavior
- `voxlert setup` installs hooks for Claude Code, Cursor, Codex, and pi.
- Re-run setup anytime to add a platform you skipped earlier.
- `voxlert uninstall` removes Claude Code, Cursor, Codex, and pi integration.
- OpenClaw is managed separately through its plugin.
- The global `enabled` flag disables processing everywhere; there is no separate per-integration toggle in `config.json`.
## Full CLI Reference
```bash
voxlert setup # Interactive setup wizard (LLM, voice, TTS, hooks)
voxlert hook # Process hook event from stdin (Claude Code)
voxlert cursor-hook # Process hook event from stdin (Cursor)
voxlert codex-notify # Process notify payload from argv (Codex)
voxlert config # Show current configuration
voxlert config show # Show current configuration
voxlert config set # Set a config value (supports categories.X dot notation)
voxlert config path # Print config file path
voxlert log # Stream activity log (tail -f style)
voxlert log path # Print activity log file path
voxlert log error-path # Print error/fallback log file path
voxlert log on | off # Enable or disable activity logging
voxlert log error on | off # Enable or disable error logging
voxlert voice # Interactive voice pack picker
voxlert pack list # List available voice packs
voxlert pack show # Show active pack details
voxlert pack use # Switch active voice pack
voxlert volume # Show current volume and prompt for new value
voxlert volume <0-100> # Set playback volume (0 = mute, 100 = max)
voxlert notification # Choose notification style (popup / system / off)
voxlert test "<phrase>"      # Run full pipeline: LLM -> TTS -> audio playback
voxlert cost # Show accumulated token usage and estimated cost
voxlert cost reset # Clear the usage log
voxlert uninstall # Remove hooks from Claude Code, Cursor, Codex, and pi, optionally config/cache
voxlert help # Show help
voxlert --version # Show version
```
## Platform Notes
- **Windows**: Install [Node.js](https://nodejs.org) and [FFmpeg](docs/installing-ffmpeg.md). Ensure the npm global bin directory is on PATH so hooks can find `voxlert` or `voxlert.cmd`.
- **Linux**: Install Node and [FFmpeg](docs/installing-ffmpeg.md) so `ffplay` is on PATH.
- **macOS**: Playback uses the built-in `afplay`; install [SoX](docs/installing-sox.md) if you want optional effects and processing.
## Uninstall
```bash
voxlert uninstall
npm uninstall -g @settinghead/voxlert
```
This removes Voxlert hooks from Claude Code, Cursor, Codex, and pi, the `voxlert-config` skill, and optionally your local config and cache in `~/.voxlert`.
## Advanced
See [Creating Voice Packs](docs/creating-voice-packs.md) for building your own character voice packs.
## Credits
- **Protoss Advisor** voice pack inspired by [openclaw/protoss-voice](https://playbooks.com/skills/openclaw/skills/protoss-voice)
## Need help?
Having trouble with setup? Post in the [Setup help & troubleshooting Discussion](https://github.com/settinghead/voxlert/discussions/6).
## License
MIT - see [LICENSE](LICENSE).