# TypeWhisper for Mac
[![License: GPL v3](https://img.shields.io/badge/License-GPLv3-blue.svg)](https://www.gnu.org/licenses/gpl-3.0)
[![macOS](https://img.shields.io/badge/macOS-14.0%2B-black.svg)](https://www.apple.com/macos/)
[![Swift](https://img.shields.io/badge/Swift-6-orange.svg)](https://swift.org)

Speech-to-text and AI text processing for macOS. Transcribe audio with on-device AI models or cloud APIs (Groq, OpenAI), then process the result with custom LLM prompts. With local models your voice data never leaves your Mac; cloud APIs are available when you want faster processing.

TypeWhisper `1.2` is the current stable direct-download release for macOS. It includes system-wide dictation, file transcription, prompt processing, rules, history, dictionary, snippets, and bundled integrations. Advanced surfaces such as the HTTP API, CLI, widgets, watch folders, and the plugin SDK remain supported for power users and automation.

See the [release readiness guide](docs/release-readiness.md), [support matrix](docs/support-matrix.md), and [release checklist](docs/release-checklist.md) for the current release definition and ship gates.

<p align="center">
  <video src="https://github.com/user-attachments/assets/22fe922d-4a4c-47d1-805e-684a148ebd03" autoplay loop muted playsinline width="270"></video>
</p>

## Screenshots

<p align="center">
  <a href=".github/screenshots/home.png"><img src=".github/screenshots/home.png" width="270" alt="Home Dashboard"></a>
  <a href=".github/screenshots/recording.png"><img src=".github/screenshots/recording.png" width="270" alt="Recording &amp; Hotkeys"></a>
  <a href=".github/screenshots/prompts.png"><img src=".github/screenshots/prompts.png" width="270" alt="Custom Prompts"></a>
</p>

<p align="center">
  <a href=".github/screenshots/history.png"><img src=".github/screenshots/history.png" width="270" alt="Transcription History"></a>
  <a href=".github/screenshots/dictionary.png"><img src=".github/screenshots/dictionary.png" width="270" alt="Dictionary"></a>
  <a href=".github/screenshots/profiles.png"><img src=".github/screenshots/profiles.png" width="270" alt="Rules"></a>
</p>

<p align="center">
  <a href=".github/screenshots/general.png"><img src=".github/screenshots/general.png" width="270" alt="General Settings"></a>
  <a href=".github/screenshots/plugins.png"><img src=".github/screenshots/plugins.png" width="270" alt="Integrations"></a>
  <a href=".github/screenshots/file-transcription.png"><img src=".github/screenshots/file-transcription.png" width="270" alt="File Transcription"></a>
</p>

<p align="center">
  <a href=".github/screenshots/snippets.png"><img src=".github/screenshots/snippets.png" width="270" alt="Snippets"></a>
  <a href=".github/screenshots/advanced.png"><img src=".github/screenshots/advanced.png" width="270" alt="Advanced Settings"></a>
</p>

## What's New in 1.2

- **Minimal indicator** - A compact power-user status view alongside the existing Notch and Overlay styles
- **Transcript preview toggle** - Live preview can now be disabled for the Notch and Overlay indicators
- **Faster dictation start** - Metadata capture and URL resolution move off the critical start path
- **Short-clip improvements** - Better handling for brief utterances, especially with streaming preview and Parakeet
- **Audio recovery fixes** - More resilient recording and preview after device switches, AirPods profile changes, and `AVAudioEngine` reconfiguration
- **MLX plugin setup** - Qwen3, Granite, and Voxtral now support an optional HuggingFace token in settings for higher download limits
- **Localized term packs** - Built-in term pack metadata now renders in English and German

## Features

### Transcription

- **Nine engines** - WhisperKit (99+ languages, streaming, translation), Parakeet TDT v3 (25 European languages, extremely fast), Apple SpeechAnalyzer (macOS 26+, no model download needed), Granite Speech (MLX-based), Qwen3 ASR (MLX-based), Voxtral (local Voxtral Mini 4B, MLX-based), Groq Whisper, OpenAI Whisper, and OpenAI Compatible (any OpenAI-compatible API)
- **On-device or cloud** - Keep all processing local on your Mac, or use the Groq/OpenAI Whisper APIs for faster processing
- **Streaming preview** - See partial transcription in real time while speaking (WhisperKit)
- **Short-clip handling** - Better retention of brief utterances and fewer false no-speech discards
- **File transcription** - Batch-process multiple audio/video files with drag & drop
- **Subtitle export** - Export transcriptions as SRT or WebVTT with timestamps

### Dictation

- **System-wide** - Push-to-talk, toggle, or hybrid mode via global hotkey; auto-pastes into any app
- **Modifier-key hotkeys** - Use a single modifier key (Command, Shift, Option, Control) as your hotkey
- **Indicator styles** - Choose Notch, Overlay, or Minimal, with optional live transcript preview where supported
- **Sound feedback** - Audio cues for recording start, transcription success, and errors
- **Microphone selection** - Choose a specific input device with live preview and improved recovery after route changes
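The three dictation modes differ only in how hotkey events map to recording state. A minimal sketch of that mapping, assuming hybrid means a quick tap toggles while a long hold behaves like push-to-talk (the threshold value and function names here are illustrative, not TypeWhisper's internals):

```python
from enum import Enum

class Mode(Enum):
    PUSH_TO_TALK = "push-to-talk"  # record only while the hotkey is held
    TOGGLE = "toggle"              # one press starts, the next press stops
    HYBRID = "hybrid"              # quick tap toggles, long hold is push-to-talk

HOLD_THRESHOLD_S = 0.3  # hypothetical tap-vs-hold cutoff

def next_recording_state(mode: Mode, recording: bool, event: str, held_for: float = 0.0) -> bool:
    """Return whether recording is active after a hotkey 'down' or 'up' event."""
    if mode is Mode.PUSH_TO_TALK:
        return event == "down"
    if mode is Mode.TOGGLE:
        return not recording if event == "down" else recording
    # HYBRID: a press always flips the latched state; a release only stops
    # recording when the key was held long enough to count as push-to-talk.
    if event == "down":
        return not recording
    return recording and held_for < HOLD_THRESHOLD_S
```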
### AI Processing

- **Custom prompts** - Process transcriptions (or any text) with LLM prompts. Eight presets are included (Translate, Formal, Summarize, Fix Grammar, Email, List, Shorter, Explain), plus a standalone Prompt Palette via global hotkey: a floating panel for AI text processing independent of dictation
- **LLM providers** - Apple Intelligence (macOS 26+), Groq, OpenAI / ChatGPT, Gemini, and OpenAI Compatible, with per-prompt provider and model override
- **Local prompt processing** - Gemma 4 via MLX runs on-device on Apple Silicon; the current verified release path is limited to the E2B/E4B 4-bit models
- **Translation** - Translate transcriptions on-device using Apple Translate

### Personalization

- **Rules** - Per-app and per-website overrides for language, task, engine, prompt, hotkey, and auto-submit. Match by app (bundle ID) and/or domain with subdomain support
- **Dictionary** - Terms improve cloud recognition accuracy, corrections fix common transcription mistakes automatically, and the app auto-learns from manual corrections. Includes importable term packs
- **Localized term packs** - Built-in term pack names and descriptions are localized in English and German
- **Snippets** - Text shortcuts with trigger/replacement. Supports placeholders like `{{DATE}}`, `{{TIME}}`, and `{{CLIPBOARD}}`
- **History** - Searchable transcription history with inline editing, correction detection, app context tracking, timeline grouping, filters, bulk delete, multi-select export, auto-retention, and a standalone window accessible from the tray menu
### Integration & Extensibility

- **Plugin system** - Extend TypeWhisper with custom LLM providers, transcription engines, post-processors, and action plugins. Granite, Groq, OpenAI / ChatGPT, OpenAI Compatible, Gemini, Linear, Qwen3, Voxtral, and Webhook ship as bundled plugins, alongside the local engine plugins. The Linear plugin enables voice-to-issue creation. See [Plugins/README.md](Plugins/README.md)
- **MLX download controls** - The bundled Qwen3, Granite, and Voxtral plugins support an optional HuggingFace token for higher rate limits and clearer download errors
- **HTTP API** - Local REST API for integration with external tools and scripts
- **CLI tool** - Shell-friendly transcription via the command line
- **Discord claim service** - Optional external service for Polar supporter and GitHub Sponsors Discord role claims

### General

- **Home dashboard** - Usage statistics, activity chart, and onboarding tutorial
- **Auto-update** - Built-in updates via Sparkle with stable, release-candidate, and daily channels
- **Universal binary** - Runs natively on Apple Silicon and Intel Macs
- **Widgets** - Desktop widgets for usage stats, last transcription, activity chart, and transcription history
- **Multilingual UI** - English and German
- **Launch at Login** - Start automatically with macOS

## Install

### Homebrew

```bash
brew install --cask typewhisper/tap/typewhisper
```

### Direct Download

Download the latest DMG from [GitHub Releases](https://github.com/TypeWhisper/typewhisper-mac/releases/latest).

Stable direct-download releases use the default Sparkle channel. Release candidates such as `1.2.0-rc*` and daily builds are published as GitHub prereleases, update the shared Sparkle appcast on their own channels, and are excluded from Homebrew. Installed builds can switch channels in `Settings > About` via the `Update Channel` picker.

## Quick Start

1. Install TypeWhisper from Homebrew or the latest DMG.
2. Open Settings and grant Microphone and Accessibility access.
3. Pick an engine and, if needed, download a local model.
4. Trigger the global hotkey and complete your first dictation.
## Manual Uninstall (macOS)

These steps apply to official TypeWhisper release builds on macOS. They remove the app itself, its local state, widget data, and stored secrets so you can reinstall from a clean slate.

If you installed via Homebrew, you can optionally start with:

```bash
brew uninstall --cask typewhisper
```

That removes the app bundle, but it does not reliably remove all files in `~/Library` or the TypeWhisper entries in Keychain.

If `~/Library` is hidden in Finder, use `Go > Go to Folder...` and paste the paths below.

1. Quit TypeWhisper if it is running.
2. Delete the app bundle:
   ```bash
   rm -rf /Applications/TypeWhisper.app
   ```
3. Delete app data and plugins:
   ```bash
   rm -rf ~/Library/Application\ Support/TypeWhisper
   ```
4. Delete preferences:
   ```bash
   rm -f ~/Library/Preferences/com.typewhisper.mac.plist
   ```
5. Delete widget and app group data used by official releases:
   ```bash
   rm -rf ~/Library/Group\ Containers/2D8ALY3LCL.com.typewhisper.mac
   ```
6. Remove TypeWhisper secrets from Keychain:
   - In Keychain Access, search for `com.typewhisper.mac.apikey` and delete the matching items.
   - This covers API and plugin secrets stored under the `com.typewhisper.mac.apikey.*` service prefix.
   - Also remove the license items stored under the service `com.typewhisper.mac.apikey.license`, especially the `polar-license` and `polar-supporter` accounts.
7. If you installed the CLI tool from Settings > Advanced, remove it too:
   ```bash
   rm -f /usr/local/bin/typewhisper
   ```
8. Optional: to remove exported user files as well, delete:
   ```bash
   rm -rf ~/Documents/TypeWhisper\ Recordings
   ```
9. Restart your Mac, then install the latest build again.
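For convenience, the file-removal steps above can be collected into a single shell function. This is a sketch: the `prefix` argument exists only so the function can be exercised safely against a scratch directory (leave it empty for a real uninstall), and Keychain items still have to be removed by hand in Keychain Access.

```shell
# Sketch of the file-removal steps; pass a scratch prefix to test safely.
typewhisper_remove_files() {
  prefix="${1:-}"
  rm -rf "$prefix/Applications/TypeWhisper.app"
  rm -rf "$prefix$HOME/Library/Application Support/TypeWhisper"
  rm -f  "$prefix$HOME/Library/Preferences/com.typewhisper.mac.plist"
  rm -rf "$prefix$HOME/Library/Group Containers/2D8ALY3LCL.com.typewhisper.mac"
  rm -f  "$prefix/usr/local/bin/typewhisper"
  rm -rf "$prefix$HOME/Documents/TypeWhisper Recordings"
  echo "Files removed; delete com.typewhisper.mac.apikey items in Keychain Access."
}
```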
If a fresh install still crashes immediately after these steps, please open an issue and include your macOS version, how you installed TypeWhisper, and whether the crash happens on first launch or after granting permissions.

## System Requirements

- macOS 14.0 (Sonoma) or later
- Apple Silicon (M1 or later) recommended
- 8 GB RAM minimum, 16 GB+ recommended for larger models
- Some features (Apple Translate, improved Settings UI) require macOS 15+. Apple Intelligence and SpeechAnalyzer require macOS 26+.

## Gemma 4 Support

TypeWhisper includes a bundled local Gemma 4 plugin powered by MLX for on-device prompt processing on Apple Silicon. In the current verified release path, Gemma 4 support is limited to the dense `E2B 4-bit` and `E4B 4-bit` variants; larger or unverified variants stay visible in the UI but remain disabled until they are validated end to end.

## Model Recommendations

| RAM | Recommended Models |
|-----|--------------------|
| < 8 GB | Whisper Tiny, Whisper Base |
| 8-16 GB | Whisper Small, Whisper Large v3 Turbo, Parakeet TDT v3, Voxtral Mini 4B |
| > 16 GB | Whisper Large v3 |

## Build

1. Clone the repository:
   ```bash
   git clone https://github.com/TypeWhisper/typewhisper-mac.git
   cd typewhisper-mac
   ```

2. Open the project in Xcode 16+:
   ```bash
   open TypeWhisper.xcodeproj
   ```

3. Select the TypeWhisper scheme and build (Cmd+B). Swift Package dependencies (WhisperKit, FluidAudio, Sparkle, TypeWhisperPluginSDK) resolve automatically.

4. Run the app. It appears as a menu bar icon; open Settings to download a model.
5. Run the automated checks before shipping changes:
   ```bash
   xcodebuild test -project TypeWhisper.xcodeproj -scheme TypeWhisper -destination 'platform=macOS,arch=arm64' -parallel-testing-enabled NO CODE_SIGN_IDENTITY='-' CODE_SIGNING_REQUIRED=NO CODE_SIGNING_ALLOWED=NO
   swift test --package-path TypeWhisperPluginSDK
   ```

## HTTP API

The HTTP API is an advanced local automation surface. It binds to `127.0.0.1` only, is disabled by default, and is intended for local tools and scripts.

Enable the API server in Settings > Advanced (default port: `8978`).

### Check Status

```bash
curl http://localhost:8978/v1/status
```

```json
{
  "status": "ready",
  "engine": "whisper",
  "model": "openai_whisper-large-v3_turbo",
  "supports_streaming": true,
  "supports_translation": true
}
```

### Transcribe Audio

```bash
curl -X POST http://localhost:8978/v1/transcribe \
  -F "file=@recording.wav" \
  -F "language=en"

curl -X POST http://localhost:8978/v1/transcribe \
  -F "file=@recording.wav" \
  -F "language_hint=de" \
  -F "language_hint=en"
```

```json
{
  "text": "Hello, world!",
  "language": "en",
  "duration": 2.5,
  "processing_time": 0.8,
  "engine": "whisper",
  "model": "openai_whisper-large-v3_turbo"
}
```

Optional parameters:
- `language` - ISO 639-1 code (e.g., `en`, `de`). Omit for full auto-detection.
- `language_hint` - Repeatable language hint for restricted auto-detection. Do not combine with `language`.
- `task` - `transcribe` (default) or `translate` (translates to English, WhisperKit only).
- `target_language` - ISO 639-1 code for the translation target language (e.g., `es`, `fr`). Uses Apple Translate.
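A small client-side helper can encode the parameter rules above before calling `/v1/transcribe`. This is an illustrative sketch (the helper name is ours, not part of the API):

```python
def transcribe_fields(language=None, language_hints=(), task="transcribe", target_language=None):
    """Build multipart form fields for POST /v1/transcribe, enforcing the
    documented rule that `language` and `language_hint` are mutually exclusive."""
    if language and language_hints:
        raise ValueError("use either 'language' or 'language_hint', not both")
    if task not in ("transcribe", "translate"):
        raise ValueError("task must be 'transcribe' or 'translate'")
    fields = [("task", task)]
    if language:
        fields.append(("language", language))
    fields += [("language_hint", hint) for hint in language_hints]
    if target_language:
        fields.append(("target_language", target_language))
    return fields
```

For example, `transcribe_fields(language_hints=("de", "en"))` produces the repeated `language_hint` fields shown in the second curl example above.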
### List Models

```bash
curl http://localhost:8978/v1/models
```

```json
{
  "models": [
    {
      "id": "openai_whisper-large-v3_turbo",
      "engine": "whisper",
      "ready": true
    }
  ]
}
```

### History

```bash
# Search history
curl "http://localhost:8978/v1/history?q=meeting&limit=10&offset=0"

# Delete entry
curl -X DELETE "http://localhost:8978/v1/history?id=<uuid>"
```

### Rules

```bash
# List all rules
curl http://localhost:8978/v1/rules

# Toggle a rule on/off
curl -X PUT "http://localhost:8978/v1/rules/toggle?id=<uuid>"
```

### Dictation Control

```bash
# Start dictation (returns session id)
curl -X POST http://localhost:8978/v1/dictation/start

# Stop dictation (returns same session id)
curl -X POST http://localhost:8978/v1/dictation/stop

# Check whether dictation is currently recording
curl http://localhost:8978/v1/dictation/status

# Fetch status/result for a specific dictation session
curl "http://localhost:8978/v1/dictation/transcription?id=<uuid>"
```

## CLI Tool

TypeWhisper includes a command-line tool for shell-friendly transcription. It is part of the advanced automation surface and connects to the running local API server.

### Installation

Install via Settings > Advanced > CLI Tool > Install. This places the `typewhisper` binary in `/usr/local/bin`.

### Commands

```bash
typewhisper status              # Show server status
typewhisper models              # List available models
typewhisper transcribe file.wav # Transcribe an audio file
```
### Options

| Option | Description |
|--------|-------------|
| `--port <N>` | Server port (default: auto-detect) |
| `--json` | Output as JSON |
| `--language <code>` | Source language (e.g. `en`, `de`) |
| `--language-hint <code>` | Repeatable language hint for restricted auto-detection |
| `--task <task>` | `transcribe` (default) or `translate` |
| `--translate-to <code>` | Target language for translation |

### Examples

```bash
# Transcribe with a fixed language and JSON output
typewhisper transcribe recording.wav --language de --json

# Restrict auto-detection to a shortlist
typewhisper transcribe recording.wav --language-hint de --language-hint en

# Pipe audio from stdin
cat audio.wav | typewhisper transcribe -

# Use in a script
typewhisper transcribe meeting.m4a --json | jq -r '.text'
```

The CLI requires the API server to be running (Settings > Advanced) and follows the documented command and flag surface for the current stable release.

## Rules

Rules let you configure transcription settings per application or website. For example:

- **Mail** - German language, Whisper Large v3
- **Slack** - English language, Parakeet TDT v3
- **Terminal** - English language, auto-submit enabled
- **github.com** - English language (matches in any browser)
- **docs.google.com** - German language, translate to English

Create rules in Settings > Rules. Assign apps and/or URL patterns, set language/task/engine overrides, assign a custom prompt for automatic post-processing, optionally configure a manual rule shortcut, enable auto-submit (which automatically sends text in chat apps), and adjust priority. Spoken language can be left on full auto-detect, fixed to one exact language, or restricted to a shortlist of likely languages for better detection accuracy. URL patterns support subdomain matching; e.g. `google.com` also matches `docs.google.com`. The domain autocomplete suggests domains from your transcription history.
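The subdomain matching and app/URL specificity ordering described in this section can be sketched as follows (the dict-based rule records and function names are illustrative, not TypeWhisper's internal types):

```python
def domain_matches(pattern: str, host: str) -> bool:
    """Subdomain-aware match: 'google.com' also matches 'docs.google.com'."""
    return host == pattern or host.endswith("." + pattern)

def best_rule(rules, app=None, host=None):
    """Pick the matching rule with the highest specificity:
    app+URL beats URL-only, which beats app-only."""
    def specificity(rule):
        r_app, r_domain = rule.get("app"), rule.get("domain")
        if r_app and r_app != app:
            return 0
        if r_domain and (host is None or not domain_matches(r_domain, host)):
            return 0
        return (1 if r_app else 0) + (2 if r_domain else 0)
    candidates = [(specificity(r), r) for r in rules]
    candidates = [(s, r) for s, r in candidates if s > 0]
    return max(candidates, key=lambda pair: pair[0])[1] if candidates else None
```

Real rules also carry a user-adjustable priority for tie-breaking between equally specific matches, which this sketch omits.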
When you start dictating, TypeWhisper matches the active app and browser URL against your rules with the following priority:

1. **App + URL match** - highest specificity (e.g. Chrome + github.com)
2. **URL-only match** - cross-browser rules (e.g. github.com in any browser)
3. **App-only match** - generic app rules (e.g. all of Chrome)

The active rule name is shown as a badge in the notch indicator, together with a short explanation of why it matched.

Multiple engines can be loaded simultaneously for instant switching between profiles. Note that loading multiple local models increases memory usage. Cloud engines (Groq, OpenAI) have negligible memory overhead.

## Plugins

TypeWhisper supports plugins for adding custom LLM providers, transcription engines, post-processors, and action plugins. Plugins are macOS `.bundle` files placed in `~/Library/Application Support/TypeWhisper/Plugins/`.

All 12 engines and integrations (WhisperKit, Parakeet, SpeechAnalyzer, Granite, Qwen3, Voxtral, Groq, OpenAI, OpenAI Compatible, Gemini, Linear, Webhook) are implemented as bundled plugins and serve as reference implementations.

See [Plugins/README.md](Plugins/README.md) for the full plugin development guide, including the event bus, host services API, and manifest format.
## Architecture

```
TypeWhisper/
├── typewhisper-cli/            # Command-line tool (status, models, transcribe)
├── Plugins/                    # Bundled plugins (WhisperKit, Parakeet, SpeechAnalyzer, Granite,
│                               #   Qwen3, Voxtral, Groq, OpenAI, OpenAI Compatible, Gemini, Linear, Webhook)
├── TypeWhisperPluginSDK/       # Plugin SDK (Swift package)
├── TypeWhisperWidgetExtension/ # WidgetKit widgets (stats, activity, history)
├── TypeWhisperWidgetShared/    # Shared widget data models
├── App/                        # App entry point, dependency injection
├── Models/                     # Data models (TranscriptionResult, Profile, PromptAction, etc.)
├── Services/
│   ├── Cloud/                  # KeychainService, WavEncoder (shared cloud utilities)
│   ├── LLM/                    # Apple Intelligence provider (cloud LLM providers are plugins)
│   ├── HTTPServer/             # Local REST API (HTTPServer, APIRouter, APIHandlers)
│   ├── ModelManagerService     # Transcription dispatch (delegates to plugins)
│   ├── AudioRecordingService
│   ├── AudioFileService        # Audio/video to 16 kHz PCM conversion
│   ├── HotkeyService
│   ├── TextInsertionService
│   ├── ProfileService          # Per-app profile matching and persistence
│   ├── HistoryService          # Transcription history persistence (SwiftData)
│   ├── DictionaryService       # Custom term corrections
│   ├── SnippetService          # Text snippets with placeholders
│   ├── PromptActionService     # Custom prompt management (SwiftData)
│   ├── PromptProcessingService # LLM orchestration for prompt execution
│   ├── PluginManager           # Plugin discovery, loading, and lifecycle
│   ├── PluginRegistryService   # Plugin marketplace (download, install, update)
│   ├── PostProcessingPipeline  # Priority-based text processing chain
│   ├── EventBus                # Typed publish/subscribe event system
│   ├── TranslationService      # On-device translation via Apple Translate
│   ├── SubtitleExporter        # SRT/VTT export
│   └── SoundService            # Audio feedback for recording events
├── ViewModels/                 # MVVM view models with Combine
├── Views/                      # SwiftUI views
└── Resources/                  # Info.plist, entitlements, localization, sounds
```

**Patterns:** MVVM with a `ServiceContainer` singleton for dependency injection. ViewModels use a static `_shared` pattern. Localization via `String(localized:)` with `Localizable.xcstrings`.
## License

GPLv3 - see [LICENSE](LICENSE) for details. Commercial licensing available - see [LICENSE-COMMERCIAL.md](LICENSE-COMMERCIAL.md).