{"id":31041718,"url":"https://github.com/fetchte/ispeak","last_synced_at":"2026-01-20T17:32:28.970Z","repository":{"id":314128545,"uuid":"1050061754","full_name":"fetchTe/ispeak","owner":"fetchTe","description":"Keyboard-centric inline speech-to-text whisper tool that works wherever you can type; vim, emacs, firefox, and CLI/AIs like aider, codex, claude, or any/all","archived":false,"fork":false,"pushed_at":"2025-09-10T16:25:35.000Z","size":701,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":0,"default_branch":"master","last_synced_at":"2025-09-10T20:21:39.076Z","etag":null,"topics":["aider","claude","cli","clipboard","codex","firefox","inline","keyboard","speech-to-text","stt","whisper"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/fetchTe.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2025-09-03T22:27:29.000Z","updated_at":"2025-09-10T16:25:38.000Z","dependencies_parsed_at":"2025-09-10T20:24:30.453Z","dependency_job_id":"02f4f3b4-a3f2-4310-8df2-f082a6883d21","html_url":"https://github.com/fetchTe/ispeak","commit_stats":null,"previous_names":["fetchte/ispeak"],"tags_count":3,"template":false,"template_full_name":null,"purl":"pkg:github/fetchTe/ispeak","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fetchTe%2Fispeak","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fetchTe%2Fispea
k/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fetchTe%2Fispeak/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fetchTe%2Fispeak/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/fetchTe","download_url":"https://codeload.github.com/fetchTe/ispeak/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fetchTe%2Fispeak/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":275094398,"owners_count":25404446,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-14T02:00:10.474Z","response_time":75,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["aider","claude","cli","clipboard","codex","firefox","inline","keyboard","speech-to-text","stt","whisper"],"created_at":"2025-09-14T10:40:29.111Z","updated_at":"2026-01-20T17:32:28.958Z","avatar_url":"https://github.com/fetchTe.png","language":"Python","readme":"\u003ch1\u003e\nispeak\n\u003cimg align=\"right\" src=\"https://img.shields.io/badge/Python-3.11%2B-3776AB?logo=python\u0026logoColor=white\" /\u003e\n\u0026nbsp;\n\u003ca href=\"https://mibecode.com\"\u003e\n  \u003cimg align=\"right\" title=\"\u0026#8805;95% Human Code\" alt=\"\u0026#8805;95% Human Code\" src=\"https://mibecode.com/badge.svg\" /\u003e\n\u003c/a\u003e\n\u003c/h1\u003e\n\n\n\nAn inline 
speech-to-text tool that works wherever you can type; [`vim`](https://www.vim.org/), [`emacs`](https://www.gnu.org/software/emacs/), [`firefox`](https://www.firefox.com), and CLI/AI tools like [`aider`](https://github.com/paul-gauthier/aider), [`codex`](https://github.com/openai/codex), [`claude`](https://claude.ai/code), or whatever you fancy\n\n\u003cimg align=\"right\"  width=\"188\" height=\"204\" alt=\"ispeak logo\" src=\"https://raw.githubusercontent.com/fetchTe/ispeak/master/docs/ispeak-logo.png\" /\u003e\n\n+ **Multilingual, Local, Fast** - Powered by [faster-whisper](https://github.com/SYSTRAN/faster-whisper) \n+ **Transcribed Speech** - As keyboard (type) or clipboard (copy) events\n+ **Inline UX** - Recording indicator displayed in the active buffer \u0026 self-deletes\n+ **Hotkey-Driven \u0026 Configurable** - Tune the operation/model to your liking\n+ **Post-Transcribe Plugin Pipeline** - [Replace](#-replace), [text2num](#-text2num), and [num2text](#-num2text)\n+ **Cross-Platform** - Works on [Linux](#linux)/[macOS](#macos)/[Windows](#windows) with GPU or CPU\n\n\u003cbr /\u003e\n\n\u003cimg align=\"center\" alt=\"ispeak-demo-short\" src=\"https://raw.githubusercontent.com/fetchTe/ispeak/master/docs/ispeak-demo-short.gif\" /\u003e\n\n\n## Quick Start\n\n\n1. **Run**: `ispeak` (add `-b \u003cprogram\u003e` to target a specific executable)\n2. **Activate**: Press the hotkey (default `shift_l`) - the 'recording indicator' is text-based (default `;`)\n3. **Record**: Speak freely; no automatic timeout or voice activity cutoff\n4. **Complete**: Press the hotkey again to delete the indicator and transcribe your speech (abort via `escape`)\n5. **Output**: Your words appear as typed text at your cursor's location\n\n\n\u003e **IMPORTANT**: The output goes to the application that currently has keyboard focus, which allows you to use the same `ispeak` instance between applications. 
This may be a feature or a bug.\n\n\n### ▎Install\n\n```sh\n#\u003e copy'n'paste system/global install\npip install ispeak\nuv tool install ispeak\n# cpu-only + plugins; it's better to simply clone \u0026 run: uv tool install \".[cpu,plugin]\"\nuv pip install --system \"ispeak[plugin]\" --torch-backend=cpu\n```\n\u003e [`uv`](https://docs.astral.sh/uv/) is a Python package installer\n\n\n```sh\n#\u003e clone'n'install\ngit clone https://github.com/fetchTe/ispeak \u0026\u0026 cd ispeak\n\n# global install (extra: cpu, cu118, cu128, plugin)\nuv tool install \".[plugin]\"      # CUDA + plugins\nuv tool install \".[cpu,plugin]\"  # CPU-only (no CUDA) + plugins\n\n# local install (extra: cpu, cu118, cu128, plugin)\nuv sync --group dev                # CUDA (default) + dev (ruff, pyright, pytest)\nuv sync --extra cpu --extra plugin # CPU-only (no CUDA) + plugins\n\n# pip install + plugins\npip install RealtimeSTT pynput pyperclip num2words text2num\n```\n\n\n### ▎Usage\n\n```crystal\n# USAGE\n  ispeak [options...]\n\n# OPTIONS\n  -b, --binary      Executable to launch with voice input (default: none)\n  -c, --config      Path to configuration file\n  -l, --log-file    Path to voice transcription append log file\n  -n, --no-output   Disables all output/actions - typing, copying, and record indicator\n  -p, --copy        Use the 'clipboard' to copy instead of the 'keyboard' to type the output\n  -s, --setup       Configure voice settings\n  -t, --test        Test voice input functionality\n  --config-show     Show current configuration\n\n# EXAMPLES\nispeak --setup         # Interactive configuration wizard\nispeak --copy          # Start with the output mode set to 'clipboard'\nispeak -l words.log    # Log transcriptions to file\n\n# DEV/LOCAL USAGE\nuv run ispeak --setup  # via uv\n```\n\u003cbr /\u003e\n\n\n\n## Configuration\nThe config can be defined via [JSON](https://en.wikipedia.org/wiki/JSON) or [TOML](https://en.wikipedia.org/wiki/TOML), and the lookup is performed in the 
following order:\n\n1. **Environment Variable**: the path set in the `ISPEAK_CONFIG` environment variable\n2. **Platform-Specific Config**\n   - **macOS**: `~/Library/Preferences/ispeak/ispeak.{json,toml}`\n   - **Windows**: `%APPDATA%\\ispeak\\ispeak.{json,toml}` (or `~/AppData/Roaming/ispeak/ispeak.{json,toml}`)\n   - **Linux**: `$XDG_CONFIG_HOME/ispeak/ispeak.{json,toml}` (or `~/.config/ispeak/ispeak.{json,toml}`)\n3. **Local**: `./ispeak.{json,toml}` in the current working directory\n4. **Default**: built-in defaults\n\n```json\n{\n  \"ispeak\": {\n    \"binary\": null,\n    \"delete_key\": null,\n    \"delete_keyword\": [\"delete\", \"undo\"],\n    \"escape_key\": \"esc\",\n    \"keyboard_interval\": 0,\n    \"log_file\": null,\n    \"output\": \"keyboard\",\n    \"push_to_talk_key\": \"shift_l\",\n    \"push_to_talk_key_delay\": 0.3,\n    \"recording_indicator\": \";\",\n    \"strip_whitespace\": true\n  },\n  \"stt\": {\n    \"model\": \"tiny\",\n    \"language\": \"auto\",\n    \"beam_size\": 5,\n    \"compute_type\": \"auto\",\n    \"download_root\": null,\n    \"enable_realtime_transcription\": false,\n    \"ensure_sentence_ends_with_period\": true,\n    \"ensure_sentence_starting_uppercase\": true,\n    \"initial_prompt\": null,\n    \"no_log_file\": true,\n    \"normalize_audio\": true,\n    \"spinner\": false\n  },\n  \"plugin\": {}\n}\n```\n\u003e **NOTE**: Highly recommend using `ispeak --setup` for initial setup\n\n\n\u003cbr /\u003e\n\n\n### ▎ `ispeak`\n\n- `binary` (str/null): Default executable to launch with voice input\n- `delete_key` (str/null): Key to trigger deletion of previous input via backspace\n- `delete_keyword` (list/bool): Words that trigger deletion of previous input via backspace (must be exact)\n- `escape_key` (str/null): Key to cancel current recording without transcription\n- `keyboard_interval` (float/null): Delay applied after each 'keyboard' character\n- `log_file` (str/null): Path to file for logging voice 
transcriptions\n- `output` (str/false): Mode of output; 'keyboard' (type), 'clipboard' (copy), or false for none\n  - For all languages aside from English, using 'clipboard' is recommended\n- `push_to_talk_key_delay` (float): Brief delay after hotkey press to prevent input conflicts\n- `push_to_talk_key` (str/null): Hotkey to start/stop recording sessions\n- `recording_indicator` (str/null): Visual indicator typed when recording starts (**must be a typeable character**)\n- `strip_whitespace` (bool): Remove extra whitespace from transcribed text\n\n\u003e Hotkeys work via [pynput](https://github.com/moses-palmer/pynput) and support: \u003cbr /\u003e\n\u003e ╸ Simple characters: `a`, `b`, `c`, `1`, etc. \u003cbr /\u003e\n\u003e ╸ Special keys: `end`, `alt_l`, `ctrl_l` - (see [pynput Key class](https://github.com/moses-palmer/pynput/blob/74c5220a61fecf9eec0734abdbca23389001ea6b/lib/pynput/keyboard/_base.py#L162)) \u003cbr /\u003e\n\u003e ╸ Key combinations: `\u003cctrl\u003e+\u003calt\u003e+h`, `\u003cshift\u003e+\u003cf1\u003e`\u003cbr /\u003e\n\u003cbr /\u003e\n\n\n### ▎`stt`\n\u003e A full config reference can be found in [`./docs/stt-options.md`](https://github.com/fetchTe/ispeak/blob/master/docs/stt-options.md) \u003cbr /\u003e\n\u003e ╸ [`RealtimeSTT`](https://github.com/KoljaB/RealtimeSTT) handles the input/mic setup and processing \u003cbr /\u003e\n\u003e ╸ [`faster-whisper`](https://github.com/SYSTRAN/faster-whisper) is the actual speech-to-text engine implementation\n\n- `model` (str): Model size or path to local CTranslate2 model (for English variants append `.en`)\n    - `tiny`: Ultra fast, workable accuracy (~39MB, CPU/GPU)\n    - `base`: Respectable accuracy/speed (~74MB, CPU/GPU ~1GB/VRAM)\n    - `small`: Decent accuracy (~244MB, CPU+/GPU ~2GB/VRAM)\n    - `medium`: Good accuracy (~769MB, GPU ~3GB/VRAM)\n    - `large-v1`/`large-v2`: Superb accuracy (~1550MB, GPU ~4GB/VRAM) \n- `language` (str): Language code (`en`, `es`, `fr`, `de`, etc.) or `\"auto\"` for 
automatic detection\n- `beam_size` (int): Size to use for beam search decoding (worth bumping up)\n- `download_root` (str/null): Root path where the models are downloaded/loaded from\n- `enable_realtime_transcription` (bool): Enable continuous transcription (2x computation)\n- `ensure_sentence_ends_with_period` (bool): Add periods to sentences without punctuation\n- `ensure_sentence_starting_uppercase` (bool): Ensure sentences start with uppercase letters\n- `initial_prompt` (null/str): Initial prompt to be fed to the main transcription model\n- `no_log_file` (bool): Skip debug log file creation\n- `normalize_audio` (bool): Normalize audio range before processing for better transcription quality\n- `spinner` (bool): Show spinner animation (set to `false` to avoid terminal conflicts)\n\n\n\u003e Apart from using [faster-distil-whisper-large-v3](https://huggingface.co/Systran/faster-distil-whisper-large-v3), I've had good results with the following:\n\n```json\n{\n  \"model\": \"Systran/faster-distil-whisper-medium.en\",\n  \"initial_prompt\": \"In this session, we'll discuss concise expression.\",\n  \"beam_size\": 8,\n  \"post_speech_silence_duration\": 0.4\n}\n```\n\u003e **NOTE**: `initial_prompt` defines style and/or spelling, not instructions [cookbook](https://cookbook.openai.com/examples/whisper_prompting_guide#comparison-with-gpt-prompting)/[ref](https://platform.openai.com/docs/guides/speech-to-text/improving-reliability)\n\n\n\u003cbr /\u003e\n\n\n\n## Plugin\n\nThe plugin system processes transcribed text through a configurable pipeline of text transformation plugins. 
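For instance, the ordering contract can be sketched as follows (a hypothetical Python illustration of the behavior, not ispeak's internal API):\n\n```python\n# hypothetical sketch: enabled plugins run in ascending `order`\nconfig = {\n    \"text2num\": {\"use\": True, \"order\": 2},\n    \"replace\": {\"use\": True, \"order\": 1},\n    \"num2text\": {\"use\": False, \"order\": 3},  # disabled: skipped\n}\nenabled = sorted(\n    (name for name, cfg in config.items() if cfg.get(\"use\", True)),\n    key=lambda name: config[name].get(\"order\", 999),\n)\nprint(enabled)  # -\u003e ['replace', 'text2num']\n```\n\n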
Plugins are loaded and executed in order based on their configuration, and each can be configured with the following fields:\n\n- `use` (bool): Enable/disable the plugin (default: `true`)\n- `order` (int): Execution order - plugins run in ascending order (default: `999`)\n- `settings` (dict): Plugin-specific configuration options\n\n\n### ▎ `replace`\nRegex-based text replacement, mainly for simple string replacements, but also capable of handling Regex patterns with capture groups and flags.\n\n```json5\n{\n  \"plugin\": {\n    \"replace\": {\n      \"use\": true,\n      \"order\": 1,\n      \"settings\": {\n        // simple string replacements\n        \"iSpeak\": \"ispeak\",\n        \" one \": \" 1 \",\n        \"read me\": \"README\",\n\n        // regex with capture groups\n        \"(\\\\s+)(semi)(\\\\s+)\": \";\\\\g\u003c3\u003e\",\n        \"(\\\\s+)(comma)(\\\\s+)\": \",\\\\g\u003c3\u003e\",\n\n        // common voice transcription cleanup\n        \"\\\\s+question\\\\s*mark\\\\.?\": \"?\",\n        \"\\\\s+exclamation\\\\s*mark\\\\.?\": \"!\",\n        \n        // code-specific replacements\n        \"\\\\s+open\\\\s*paren\\\\s*\": \"(\",\n        \"\\\\s+close\\\\s*paren\\\\s*\": \")\",\n        \"\\\\s+open\\\\s*brace\\\\s*\": \"{\",\n        \"\\\\s+close\\\\s*brace\\\\s*\": \"}\",\n\n        // regex patterns with flags (/pattern/flags format)\n        \"/hello/i\": \"HI\",           // case insensitive\n        \"/^start/m\": \"BEGIN\",       // multiline\n        \"/comma/gmi\": \",\"           // global, multiline, case insensitive\n      }\n    }\n  }\n}\n```\n\u003e **Flags**: Use `/pattern/flags` format (supports `i`, `m`, `s`, `x` flags) \u003cbr /\u003e\n\u003e **Substitution**: Use `\\1`, `\\2` or `\\g\u003c1\u003e`, `\\g\u003c2\u003e` syntax \u003cbr /\u003e\n\u003e **Tests**: [`./tests/test_plugin_replace.py`](https://github.com/fetchTe/ispeak/blob/master/tests/test_plugin_replace.py) \u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\n### ▎ 
`num2text` \nConvert digits to text numbers, like \"42\" into \"forty-two\" via [`num2words`](https://github.com/savoirfairelinux/num2words)\n\n```json5\n{\n  \"plugin\": {\n    \"num2text\": {\n      \"use\": true,\n      \"order\": 3,\n      \"settings\": {\n        \"lang\": \"en\",         // language code\n        \"to\": \"cardinal\",     // cardinal, ordinal, ordinal_num, currency, year\n        \"min\": null,          // minimum value to convert\n        \"max\": null,          // maximum value to convert\n        \"currency\": \"USD\",    // currency code for currency conversion\n        \"cents\": true,        // include cents in currency\n        \"percent\": \"percent\"  // suffix for percentage conversion\n      }\n    }\n  }\n}\n```\n\u003e **Tests**: [`./tests/test_plugin_num2text.py`](https://github.com/fetchTe/ispeak/blob/master/tests/test_plugin_num2text.py)  \u003cbr /\u003e\n\u003e **Dependency**: [`num2words`](https://github.com/savoirfairelinux/num2words) -\u003e `uv pip install num2words` \u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\n### ▎ `text2num`\nConvert text numbers to digits, like \"forty-two\" into \"42\" via [`text_to_num`](https://github.com/allo-media/text2num)\n\n\n```json\n{\n  \"plugin\": {\n    \"text2num\": {\n      \"use\": true,\n      \"order\": 2,\n      \"settings\": {\n        \"lang\": \"en\",\n        \"threshold\": 0\n      }\n    }\n  }\n}\n```\n\u003e **Tests**: [`./tests/test_plugin_text2num.py`](https://github.com/fetchTe/ispeak/blob/master/tests/test_plugin_text2num.py)  \u003cbr /\u003e\n\u003e **Dependency**: [`text_to_num`](https://github.com/allo-media/text2num) -\u003e `uv pip install text_to_num` \u003cbr /\u003e\n\u003e **IMPORTANT**: the `threshold` may or may not apply to cardinal numbers; see the `TestWishyWashyThreshold` test for details\u003cbr /\u003e\n\n\u003cbr /\u003e\n\n\n\n## Troubleshooting\n\n+ **Hotkey Issues**: Check/grant permissions; see [Linux](#linux), [macOS](#macos), [Windows](#windows)\n+ 
**Recording Indicator Misfire(s)**: Increase `push_to_talk_key_delay` (try 0.2-1.0)\n+ **Typing/Character Issues**: Try using `\"output\": \"clipboard\"`\n  + If ASCII characters are missing/skipped, try `\"keyboard_interval\": 0.1`\n+ **Transcription Issues**: Try the CPU-only install and/or the following minimal test code to isolate the problem:\n\n```python\n# test_audio.py -\u003e uv run ./test_audio.py\nfrom RealtimeSTT import AudioToTextRecorder\n\ndef process_text(text):\n    print(f\"Transcribed: {text}\")\n\nif __name__ == '__main__':\n    print(\"Testing RealtimeSTT - speak after you see 'Listening...'\")\n    try:\n        recorder = AudioToTextRecorder()\n        while True:\n            recorder.text(process_text)\n    except KeyboardInterrupt:\n        print(\"\\nTest completed.\")\n    except Exception as e:\n        print(f\"Error: {e}\")\n```\n\n\u003cbr /\u003e\n\n\n\n## Platform Limitations\n\u003e These limitations/quirks come from the `pynput` [docs](https://pynput.readthedocs.io/en/latest/limitations.html)\n\n\n### ▎Linux\nWhen running under *X*, the following must be true:\n- An *X server* must be running\n- The environment variable `$DISPLAY` must be set\n\nWhen running under *uinput*, the following must be true:\n- You must run your script as root, so that it has the required permissions for *uinput*\n\nThe latter requirement for *X* means that running *pynput* over *SSH* generally will not work. To work around that, make sure to set `$DISPLAY`:\n\n``` sh\n$ DISPLAY=:0 python -c 'import pynput'\n```\n\nPlease note that the value `DISPLAY=:0` is just an example. To find the\nactual value, please launch a terminal application from your desktop\nenvironment and issue the command `echo $DISPLAY`.\n\nWhen running under *Wayland*, the *X server* emulator `Xwayland` will usually run, providing limited functionality. 
Notably, you will only receive input events from applications running under this emulator.\n\n\n### ▎macOS\nRecent versions of *macOS* restrict monitoring of the keyboard for security reasons. For that reason, one of the following must be true:\n\n- The process must run as root.\n- Your application must be white listed under *Enable access for assistive devices*. Note that this might require that you package your application, since otherwise the entire *Python* installation must be white listed.\n- On versions after *Mojave*, you may also need to whitelist your terminal application if running your script from a terminal.\n\nAll listener classes have the additional attribute `IS_TRUSTED`, which is `True` if no permissions are lacking.\n\n\n### ▎Windows\nVirtual events sent by *other* processes may not be received. This library takes precautions, however, to dispatch any virtual events generated to all currently running listeners of the current process.\n\n\u003cbr /\u003e\n\n\n\n## Development\n\n```\n# USAGE (ispeak)\n   make [flags...] 
\u003ctarget\u003e\n\n# TARGET\n  -------------------\n   run                   execute entry-point -\u003e uv run main.py\n   build                 build wheel/source distributions -\u003e hatch build\n   clean                 delete build artifacts, cache files, and temporary files\n  -------------------\n   publish               publish to pypi.org -\u003e twine upload\n   publish_test          publish to test.pypi.org -\u003e twine upload --repository testpypi\n   publish_check         check distributions -\u003e twine check\n   release               clean, format, lint, test, build, check, and optionally publish\n  -------------------\n   install               install dependencies -\u003e uv sync\n   install_cpu           install dependencies -\u003e uv sync --extra cpu\n   install_dev           install dev dependencies -\u003e uv sync --group dev --extra plugin\n   install_plugin        install plugin dependencies -\u003e uv sync --extra plugin\n   update                update dependencies -\u003e uv lock --upgrade \u0026\u0026 uv sync\n   update_dry            show outdated dependencies  -\u003e uv tree --outdated\n   venv                  setup virtual environment if needed -\u003e uv venv -p 3.11\n  -------------------\n   check                 run all checks: lint, type, and format\n   format                format check -\u003e ruff format --check\n   lint                  lint check -\u003e ruff check\n   type                  type check -\u003e pyright\n   format_fix            auto-fix format -\u003e ruff format\n   lint_fix              auto-fix lint -\u003e ruff check --fix\n  -------------------\n   test                  test -\u003e pytest\n   test_fast             test \u0026 fail-fast -\u003e pytest -x -q\n  -------------------\n   help                  displays (this) help screen\n\n# FLAGS\n  -------------------\n   UV                    [? 
] uv build flag(s) (e.g: make UV=\"--no-build-isolation\")\n  -------------------\n   BAIL                  [?1] fail fast (bail) on the first test or lint error\n   PUBLISH               [?0] publishes to PyPI after build (requires twine config)\n  -------------------\n   DEBUG                 [?0] enables verbose logging for tools (uv, pytest, ruff)\n   QUIET                 [?0] disables pretty-printed/log target (INIT/DONE) info\n   NO_COLOR              [?0] disables color logging/ANSI codes\n```\n\n\n\u003cbr /\u003e\n\n\n\n## Contributing\n\n1. Fork the repository\n2. Create a feature branch: `git checkout -b feature/amazing-feature`\n3. Install development dependencies: `uv sync --group dev`\n4. Make your changes following the existing code style\n5. Run quality checks \u0026 test:\n   ```sh\n   make format_fix  # auto-fix format -\u003e ruff format\n   make check       # run all checks: lint, type, and format\n   make test        # run all tests\n   ```\n6. Commit your changes: `git commit -m 'feat: add amazing feature'`\n7. Push to your branch: `git push origin feature/amazing-feature`\n8. 
Open a Pull Request with a clear description of your changes\n\n\u003cbr /\u003e\n\n\n\n## Respects\n\n- **[`RealtimeSTT`](https://github.com/KoljaB/RealtimeSTT)** - A swell wrapper around [`faster-whisper`](https://github.com/SYSTRAN/faster-whisper) that powers the speech-to-text engine\n- **[`pynput`](https://github.com/moses-palmer/pynput)** - Cross-platform keyboard controller and monitor\n- **[`pyperclip`](https://github.com/asweigart/pyperclip)** - Cross-platform clipboard\n- **[`whisper`](https://github.com/openai/whisper)** - The foundational speech-to-text recognition model\n\n\n\u003cbr /\u003e\n\n\n\n## License\n\n```\nMIT License\n\nCopyright (c) 2025 te \u003clegal@fetchTe.com\u003e\n\nPermission is hereby granted, free of charge, to any person obtaining a copy\nof this software and associated documentation files (the \"Software\"), to deal\nin the Software without restriction, including without limitation the rights\nto use, copy, modify, merge, publish, distribute, sublicense, and/or sell\ncopies of the Software, and to permit persons to whom the Software is\nfurnished to do so, subject to the following conditions:\n\nThe above copyright notice and this permission notice shall be included in all\ncopies or substantial portions of the Software.\n\nTHE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\nIMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\nFITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. 
IN NO EVENT SHALL THE\nAUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\nLIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,\nOUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE\nSOFTWARE.\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffetchte%2Fispeak","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffetchte%2Fispeak","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffetchte%2Fispeak/lists"}