{"id":48518983,"url":"https://github.com/soniqo/speech-swift","last_synced_at":"2026-04-11T17:33:27.116Z","repository":{"id":336412287,"uuid":"1149307294","full_name":"soniqo/speech-swift","owner":"soniqo","description":"AI speech toolkit for Apple Silicon — ASR, TTS, speech-to-speech, VAD, and diarization powered by MLX and CoreML","archived":false,"fork":false,"pushed_at":"2026-03-30T20:57:05.000Z","size":1476,"stargazers_count":496,"open_issues_count":12,"forks_count":57,"subscribers_count":7,"default_branch":"main","last_synced_at":"2026-03-30T21:06:43.672Z","etag":null,"topics":["apple-silicon","asr","coreml","ios","macos","mlx","neural-engine","on-device","speaker-diarization","speech-enhancement","speech-recognition","speech-to-speech","swift","text-to-speech","tts","voice-activity-detection"],"latest_commit_sha":null,"homepage":"https://soniqo.audio","language":"Swift","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/soniqo.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":"AGENTS.md","dco":null,"cla":null}},"created_at":"2026-02-04T00:52:46.000Z","updated_at":"2026-03-30T20:48:38.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/soniqo/speech-swift","commit_stats":null,"previous_names":["ivan-digital/qwen3-asr-swift","soniqo/speech-swift"],"tags_count":7,"template":false,"template_full_name":null,"purl":"pkg:github/soniqo/speech-swift","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/soniqo%2Fspeech-swift","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/soniqo%2Fspeech-swift/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/soniqo%2Fspeech-swift/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/soniqo%2Fspeech-swift/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/soniqo","download_url":"https://codeload.github.com/soniqo/speech-swift/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/soniqo%2Fspeech-swift/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31526676,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-07T16:28:08.000Z","status":"ssl_error","status_checked_at":"2026-04-07T16:28:06.951Z","response_time":105,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while 
reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["apple-silicon","asr","coreml","ios","macos","mlx","neural-engine","on-device","speaker-diarization","speech-enhancement","speech-recognition","speech-to-speech","swift","text-to-speech","tts","voice-activity-detection"],"created_at":"2026-04-07T20:04:34.748Z","updated_at":"2026-04-11T17:33:27.109Z","avatar_url":"https://github.com/soniqo.png","language":"Swift","readme":"# Speech Swift\n\nAI speech models for Apple Silicon, powered by MLX Swift and CoreML.\n\n📖 Read in: [English](README.md) · [中文](README_zh.md) · [日本語](README_ja.md) · [한국어](README_ko.md) · [Español](README_es.md) · [Deutsch](README_de.md) · [Français](README_fr.md) · [हिन्दी](README_hi.md) · [Português](README_pt.md) · [Русский](README_ru.md)\n\nOn-device speech recognition, synthesis, and understanding for Mac and iOS. Runs locally on Apple Silicon — no cloud, no API keys, no data leaves your device.\n\n**[📚 Full Documentation →](https://soniqo.audio)** · **[🤗 HuggingFace Models](https://huggingface.co/aufklarer)** · **[📝 Blog](https://blog.ivan.digital)**\n\n- **[Qwen3-ASR](https://soniqo.audio/guides/transcribe)** — Speech-to-text (automatic speech recognition, 52 languages, MLX + CoreML)\n- **[Parakeet TDT](https://soniqo.audio/guides/parakeet)** — Speech-to-text via CoreML (Neural Engine, NVIDIA FastConformer + TDT decoder, 25 languages)\n- **[Omnilingual ASR](https://soniqo.audio/guides/omnilingual)** — Speech-to-text (Meta wav2vec2 + CTC, **1,672 languages** across 32 scripts, CoreML 300M + MLX 300M/1B/3B/7B)\n- **[Streaming Dictation](https://soniqo.audio/guides/dictate)** — Real-time dictation with partials and end-of-utterance detection (Parakeet-EOU-120M)\n- **[Qwen3-ForcedAligner](https://soniqo.audio/guides/align)** — Word-level timestamp alignment (audio + text → timestamps)\n- **[Qwen3-TTS](https://soniqo.audio/guides/speak)** — Text-to-speech (highest quality, streaming, custom speakers, 10 languages)\n- **[CosyVoice TTS](https://soniqo.audio/guides/cosyvoice)** — Streaming TTS with voice cloning, multi-speaker dialogue, emotion tags (9 languages)\n- **[Kokoro TTS](https://soniqo.audio/guides/kokoro)** — On-device TTS (82M, CoreML/Neural Engine, 54 voices, iOS-ready, 10 languages)\n- **[Qwen3.5-Chat](https://soniqo.audio/guides/chat)** — On-device LLM chat (0.8B, MLX INT4 + CoreML INT8, DeltaNet hybrid, streaming tokens)\n- **[PersonaPlex](https://soniqo.audio/guides/respond)** — Full-duplex speech-to-speech (7B, audio in → audio out, 18 voice presets)\n- **[DeepFilterNet3](https://soniqo.audio/guides/denoise)** — Real-time noise suppression (2.1M params, 48 kHz)\n- **[VAD](https://soniqo.audio/guides/vad)** — Voice activity detection (Silero streaming, Pyannote offline, FireRedVAD 100+ languages)\n- **[Speaker Diarization](https://soniqo.audio/guides/diarize)** — Who spoke when (Pyannote pipeline, Sortformer end-to-end on Neural Engine)\n- **[Speaker Embeddings](https://soniqo.audio/guides/embed-speaker)** — WeSpeaker ResNet34 (256-dim), CAM++ (192-dim)\n\nPapers: [Qwen3-ASR](https://arxiv.org/abs/2601.21337) (Alibaba) · 
Papers: [Qwen3-ASR](https://arxiv.org/abs/2601.21337) (Alibaba) · [Qwen3-TTS](https://arxiv.org/abs/2601.15621) (Alibaba) · [Omnilingual ASR](https://arxiv.org/abs/2511.09690) (Meta) · [Parakeet TDT](https://arxiv.org/abs/2304.06795) (NVIDIA) · [CosyVoice 3](https://arxiv.org/abs/2505.17589) (Alibaba) · [Kokoro](https://arxiv.org/abs/2301.01695) (StyleTTS 2) · [PersonaPlex](https://arxiv.org/abs/2602.06053) (NVIDIA) · [Mimi](https://arxiv.org/abs/2410.00037) (Kyutai) · [Sortformer](https://arxiv.org/abs/2409.06656) (NVIDIA)

## News

- **20 Mar 2026** — [We Beat Whisper Large v3 with a 600M Model Running Entirely on Your Mac](https://blog.ivan.digital/we-beat-whisper-large-v3-with-a-600m-model-running-entirely-on-your-mac-20e6ce191174)
- **26 Feb 2026** — [Speaker Diarization and Voice Activity Detection on Apple Silicon — Native Swift with MLX](https://blog.ivan.digital/speaker-diarization-and-voice-activity-detection-on-apple-silicon-native-swift-with-mlx-92ea0c9aca0f)
- **23 Feb 2026** — [NVIDIA PersonaPlex 7B on Apple Silicon — Full-Duplex Speech-to-Speech in Native Swift with MLX](https://blog.ivan.digital/nvidia-personaplex-7b-on-apple-silicon-full-duplex-speech-to-speech-in-native-swift-with-mlx-0aa5276f2e23)
- **12 Feb 2026** — [Qwen3-ASR Swift: On-Device ASR + TTS for Apple Silicon — Architecture and Benchmarks](https://blog.ivan.digital/qwen3-asr-swift-on-device-asr-tts-for-apple-silicon-architecture-and-benchmarks-27cbf1e4463f)

## Quick start

Add the package to your `Package.swift`:

```swift
.package(url: "https://github.com/soniqo/speech-swift", from: "0.0.9")
```

Import only the modules you need — every model is its own SPM library, so you don't pay for what you don't use:

```swift
.product(name: "ParakeetStreamingASR", package: "speech-swift"),
.product(name: "SpeechUI",             package: "speech-swift"),  // optional SwiftUI views
```

**Transcribe an audio buffer in 3 lines:**

```swift
import ParakeetStreamingASR

let model = try await ParakeetStreamingASRModel.fromPretrained()
let text = try model.transcribeAudio(audioSamples, sampleRate: 16000)
```

**Live streaming with partials:**

```swift
for await partial in model.transcribeStream(audio: samples, sampleRate: 16000) {
    print(partial.isFinal ? "FINAL: \(partial.text)" : "... \(partial.text)")
}
```

**SwiftUI dictation view in ~10 lines:**

```swift
import SwiftUI
import ParakeetStreamingASR
import SpeechUI

@MainActor
struct DictateView: View {
    @State private var store = TranscriptionStore()

    var body: some View {
        TranscriptionView(finals: store.finalLines, currentPartial: store.currentPartial)
            .task {
                let model = try? await ParakeetStreamingASRModel.fromPretrained()
                guard let model else { return }
                for await p in model.transcribeStream(audio: samples, sampleRate: 16000) {
                    store.apply(text: p.text, isFinal: p.isFinal)
                }
            }
    }
}
```

`SpeechUI` ships only `TranscriptionView` (finals + partials) and `TranscriptionStore` (streaming ASR adapter). Use AVFoundation for audio visualization and playback.

Available SPM products: `Qwen3ASR`, `Qwen3TTS`, `Qwen3TTSCoreML`, `ParakeetASR`, `ParakeetStreamingASR`, `OmnilingualASR`, `KokoroTTS`, `CosyVoiceTTS`, `PersonaPlex`, `SpeechVAD`, `SpeechEnhancement`, `Qwen3Chat`, `SpeechCore`, `SpeechUI`, `AudioCommon`.
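The snippets above assume you already have `audioSamples` as a `[Float]` buffer. If you're starting from a file, one dependency-free route is plain AVFoundation. The sketch below is not part of this package (`AudioCommon` also ships audio I/O helpers); it decodes a file and resamples it to 16 kHz mono Float32. Pass a different `targetRate` where a model expects one (the forced-aligner example later in this README uses 24 kHz, for instance).

```swift
import AVFoundation

/// Read an audio file and convert it to 16 kHz mono Float32,
/// the format the ASR snippets above expect.
func loadSamples(url: URL, targetRate: Double = 16_000) throws -> [Float] {
    let file = try AVAudioFile(forReading: url)
    let outFormat = AVAudioFormat(commonFormat: .pcmFormatFloat32,
                                  sampleRate: targetRate,
                                  channels: 1,
                                  interleaved: false)!

    // Pull the whole file into memory in its native format.
    let inBuffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat,
                                    frameCapacity: AVAudioFrameCount(file.length))!
    try file.read(into: inBuffer)

    // Resample / downmix to the target format.
    let converter = AVAudioConverter(from: file.processingFormat, to: outFormat)!
    let ratio = targetRate / file.processingFormat.sampleRate
    let capacity = AVAudioFrameCount(Double(inBuffer.frameLength) * ratio) + 1
    let outBuffer = AVAudioPCMBuffer(pcmFormat: outFormat, frameCapacity: capacity)!

    var consumed = false
    var conversionError: NSError?
    converter.convert(to: outBuffer, error: &conversionError) { _, status in
        if consumed { status.pointee = .endOfStream; return nil }
        consumed = true
        status.pointee = .haveData
        return inBuffer
    }
    if let conversionError { throw conversionError }

    return Array(UnsafeBufferPointer(start: outBuffer.floatChannelData![0],
                                     count: Int(outBuffer.frameLength)))
}
```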
## Models

Compact view below. **[Full model catalogue with sizes, quantisations, download URLs, and memory tables → soniqo.audio/architecture](https://soniqo.audio/architecture)**.

| Model | Task | Backends | Sizes | Languages |
|-------|------|----------|-------|-----------|
| [Qwen3-ASR](https://soniqo.audio/guides/transcribe) | Speech → Text | MLX, CoreML (hybrid) | 0.6B, 1.7B | 52 |
| [Parakeet TDT](https://soniqo.audio/guides/parakeet) | Speech → Text | CoreML (ANE) | 0.6B | 25 European |
| [Parakeet EOU](https://soniqo.audio/guides/dictate) | Speech → Text (streaming) | CoreML (ANE) | 120M | 25 European |
| [Omnilingual ASR](https://soniqo.audio/guides/omnilingual) | Speech → Text | CoreML (ANE), MLX | 300M / 1B / 3B / 7B | **[1,672](https://github.com/facebookresearch/omnilingual-asr/blob/main/src/omnilingual_asr/models/wav2vec2_llama/lang_ids.py)** |
| [Qwen3-ForcedAligner](https://soniqo.audio/guides/align) | Audio + Text → Timestamps | MLX, CoreML | 0.6B | Multi |
| [Qwen3-TTS](https://soniqo.audio/guides/speak) | Text → Speech | MLX, CoreML | 0.6B, 1.7B | 10 |
| [CosyVoice3](https://soniqo.audio/guides/cosyvoice) | Text → Speech | MLX | 0.5B | 9 |
| [Kokoro-82M](https://soniqo.audio/guides/kokoro) | Text → Speech | CoreML (ANE) | 82M | 10 |
| [Qwen3.5-Chat](https://soniqo.audio/guides/chat) | Text → Text (LLM) | MLX, CoreML | 0.8B | Multi |
| [PersonaPlex](https://soniqo.audio/guides/respond) | Speech → Speech | MLX | 7B | EN |
| [Silero VAD](https://soniqo.audio/guides/vad) | Voice Activity Detection | MLX, CoreML | 309K | Agnostic |
| [Pyannote](https://soniqo.audio/guides/diarize) | VAD + Diarization | MLX | 1.5M | Agnostic |
| [Sortformer](https://soniqo.audio/guides/diarize) | Diarization (E2E) | CoreML (ANE) | — | Agnostic |
| [DeepFilterNet3](https://soniqo.audio/guides/denoise) | Speech Enhancement | CoreML | 2.1M | Agnostic |
| [WeSpeaker](https://soniqo.audio/guides/embed-speaker) | Speaker Embedding | MLX, CoreML | 6.6M | Agnostic |
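Every model in the table loads through the same `fromPretrained()` pattern. As an illustration of the lighter end of the table, here is a hypothetical Kokoro sketch: it assumes `KokoroTTSModel` mirrors the `Qwen3TTSModel` API shown under Code examples below, so check the [Kokoro guide](https://soniqo.audio/guides/kokoro) for the actual names and parameters.

```swift
import KokoroTTS
import AudioCommon

// Hypothetical sketch: assumes KokoroTTS mirrors the Qwen3TTS API shape
// (fromPretrained + synthesize) and takes a voice name from the upstream
// Kokoro voice list. See the Kokoro guide for the real interface.
let kokoro = try await KokoroTTSModel.fromPretrained()
let audio = kokoro.synthesize(text: "Hello from the Neural Engine", voice: "af_heart")
try WAVWriter.write(samples: audio, sampleRate: 24_000, to: outputURL)
```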
## Installation

### Homebrew

Requires native ARM Homebrew (`/opt/homebrew`). Rosetta/x86_64 Homebrew is not supported.

```bash
brew tap soniqo/speech https://github.com/soniqo/speech-swift
brew install speech
```

Then:

```bash
audio transcribe recording.wav
audio speak "Hello world"
audio respond --input question.wav --transcript
```

**[Full CLI reference →](https://soniqo.audio/cli)**

### Swift Package Manager

```swift
dependencies: [
    .package(url: "https://github.com/soniqo/speech-swift", from: "0.0.9")
]
```

Import only what you need — every model is its own SPM target:

```swift
import Qwen3ASR             // Speech recognition (MLX)
import ParakeetASR          // Speech recognition (CoreML, batch)
import ParakeetStreamingASR // Streaming dictation with partials + EOU
import OmnilingualASR       // 1,672 languages (CoreML + MLX)
import Qwen3TTS             // Text-to-speech
import CosyVoiceTTS         // Text-to-speech with voice cloning
import KokoroTTS            // Text-to-speech (iOS-ready)
import Qwen3Chat            // On-device LLM chat
import PersonaPlex          // Full-duplex speech-to-speech
import SpeechVAD            // VAD + speaker diarization + embeddings
import SpeechEnhancement    // Noise suppression
import SpeechUI             // SwiftUI components for streaming transcripts
import AudioCommon          // Shared protocols and utilities
```
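Putting the manifest pieces together, a minimal self-contained `Package.swift` for an executable target looks like this (the package and target names are placeholders):

```swift
// swift-tools-version:5.9
import PackageDescription

let package = Package(
    name: "DictationTool",                  // placeholder name
    platforms: [.macOS(.v14), .iOS(.v17)],  // per the requirements below
    dependencies: [
        .package(url: "https://github.com/soniqo/speech-swift", from: "0.0.9")
    ],
    targets: [
        .executableTarget(
            name: "DictationTool",
            dependencies: [
                .product(name: "ParakeetStreamingASR", package: "speech-swift"),
                .product(name: "SpeechUI", package: "speech-swift")  // optional SwiftUI views
            ]
        )
    ]
)
```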
### Requirements

- Swift 5.9+, Xcode 15+ (with Metal Toolchain)
- macOS 14+ or iOS 17+, Apple Silicon (M1/M2/M3/M4)

### Build from source

```bash
git clone https://github.com/soniqo/speech-swift
cd speech-swift
make build
```

`make build` compiles the Swift package **and** the MLX Metal shader library. The Metal library is required for GPU inference — without it you'll see `Failed to load the default metallib` at runtime. Use `make debug` for debug builds and `make test` for the test suite.

**[Full build and install guide →](https://soniqo.audio/getting-started)**

## Demo apps

- **[DictateDemo](Examples/DictateDemo/)** ([docs](https://soniqo.audio/guides/dictate)) — macOS menu-bar streaming dictation with live partials, VAD-driven end-of-utterance detection, and one-click copy. Runs as a background agent (Parakeet-EOU-120M + Silero VAD).
- **[iOSEchoDemo](Examples/iOSEchoDemo/)** — iOS echo demo (Parakeet ASR + Kokoro TTS). Device and simulator.
- **[PersonaPlexDemo](Examples/PersonaPlexDemo/)** — Conversational voice assistant with mic input, VAD, and multi-turn context. macOS. RTF ~0.94 on M2 Max (faster than real-time).
- **[SpeechDemo](Examples/SpeechDemo/)** — Dictation and TTS synthesis in a tabbed interface. macOS.

Each demo's README has build instructions.

## Code examples

The snippets below show the minimal path for each domain. Every section links to a full guide on [soniqo.audio](https://soniqo.audio) with configuration options, multiple backends, streaming patterns, and CLI recipes.

### Speech-to-Text — [full guide →](https://soniqo.audio/guides/transcribe)

```swift
import Qwen3ASR

let model = try await Qwen3ASRModel.fromPretrained()
let text = model.transcribe(audio: audioSamples, sampleRate: 16000)
```

Alternative backends: [Parakeet TDT](https://soniqo.audio/guides/parakeet) (CoreML, 32× realtime), [Omnilingual ASR](https://soniqo.audio/guides/omnilingual) (1,672 languages, CoreML or MLX), [Streaming dictation](https://soniqo.audio/guides/dictate) (live partials).

### Forced Alignment — [full guide →](https://soniqo.audio/guides/align)

```swift
import Qwen3ASR

let aligner = try await Qwen3ForcedAligner.fromPretrained()
let aligned = aligner.align(
    audio: audioSamples,
    text: "Can you guarantee that the replacement part will be shipped tomorrow?",
    sampleRate: 24000
)
for word in aligned {
    print("[\(word.startTime)s - \(word.endTime)s] \(word.text)")
}
```

### Text-to-Speech — [full guide →](https://soniqo.audio/guides/speak)

```swift
import Qwen3TTS
import AudioCommon

let model = try await Qwen3TTSModel.fromPretrained()
let audio = model.synthesize(text: "Hello world", language: "english")
try WAVWriter.write(samples: audio, sampleRate: 24000, to: outputURL)
```

Alternative TTS engines: [CosyVoice3](https://soniqo.audio/guides/cosyvoice) (streaming + voice cloning + emotion tags), [Kokoro-82M](https://soniqo.audio/guides/kokoro) (iOS-ready, 54 voices), [Voice cloning](https://soniqo.audio/guides/voice-cloning).

### Speech-to-Speech — [full guide →](https://soniqo.audio/guides/respond)

```swift
import PersonaPlex

let model = try await PersonaPlexModel.fromPretrained()
let responseAudio = model.respond(userAudio: userSamples)
// 24 kHz mono Float32 output ready for playback
```

### LLM Chat — [full guide →](https://soniqo.audio/guides/chat)

```swift
import Qwen3Chat

let chat = try await Qwen35MLXChat.fromPretrained()
chat.chat(messages: [(.user, "Explain MLX in one sentence")]) { token, isFinal in
    print(token, terminator: "")
}
```

### Voice Activity Detection — [full guide →](https://soniqo.audio/guides/vad)

```swift
import SpeechVAD

let vad = try await SileroVADModel.fromPretrained()
let segments = vad.detectSpeech(audio: samples, sampleRate: 16000)
for s in segments { print("\(s.startTime)s → \(s.endTime)s") }
```

### Speaker Diarization — [full guide →](https://soniqo.audio/guides/diarize)

```swift
import SpeechVAD

let diarizer = try await DiarizationPipeline.fromPretrained()
let segments = diarizer.diarize(audio: samples, sampleRate: 16000)
for s in segments { print("Speaker \(s.speakerId): \(s.startTime)s - \(s.endTime)s") }
```

### Speech Enhancement — [full guide →](https://soniqo.audio/guides/denoise)

```swift
import SpeechEnhancement

let denoiser = try await DeepFilterNet3Model.fromPretrained()
let clean = try denoiser.enhance(audio: noisySamples, sampleRate: 48000)
```
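These snippets compose. For instance, you can gate ASR with VAD so only regions that actually contain speech get transcribed. The sketch below combines the two examples above; it assumes segment times are in seconds (as the printed output suggests), so they convert to sample indices via the sample rate:

```swift
import Qwen3ASR
import SpeechVAD

let vad = try await SileroVADModel.fromPretrained()
let asr = try await Qwen3ASRModel.fromPretrained()

let sampleRate = 16_000
for segment in vad.detectSpeech(audio: samples, sampleRate: sampleRate) {
    // Convert segment times (assumed seconds) into sample indices.
    let lo = max(0, Int(Double(segment.startTime) * Double(sampleRate)))
    let hi = min(samples.count, Int(Double(segment.endTime) * Double(sampleRate)))
    guard lo < hi else { continue }
    let text = asr.transcribe(audio: Array(samples[lo..<hi]), sampleRate: sampleRate)
    print("[\(segment.startTime)s - \(segment.endTime)s] \(text)")
}
```

For real-time turn-taking rather than offline segmentation, use the `VoicePipeline` in the next section.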
### Voice Pipeline (ASR → LLM → TTS) — [full guide →](https://soniqo.audio/api)

```swift
import SpeechCore

let pipeline = VoicePipeline(
    stt: parakeetASR,
    tts: qwen3TTS,
    vad: sileroVAD,
    config: .init(mode: .voicePipeline),
    onEvent: { event in print(event) }
)
pipeline.start()
pipeline.pushAudio(micSamples)
```

`VoicePipeline` is the real-time voice-agent state machine (powered by [speech-core](https://github.com/soniqo/speech-core)) with VAD-driven turn detection, interruption handling, and eager STT. It connects any `SpeechRecognitionModel` + `SpeechGenerationModel` + `StreamingVADProvider`.

### HTTP API server

```bash
audio-server --port 8080
```

Exposes every model via HTTP REST + WebSocket endpoints, including an OpenAI Realtime API-compatible WebSocket at `/v1/realtime`. See [`Sources/AudioServer/`](Sources/AudioServer/).

## Architecture

speech-swift is split into one SPM target per model so consumers only pay for what they import. Shared infrastructure lives in `AudioCommon` (protocols, audio I/O, HuggingFace downloader, `SentencePieceModel`) and `MLXCommon` (weight loading, `QuantizedLinear` helpers, `SDPA` multi-head attention helper).

**[Full architecture diagram with backends, memory tables, and module map → soniqo.audio/architecture](https://soniqo.audio/architecture)** · **[API reference → soniqo.audio/api](https://soniqo.audio/api)** · **[Benchmarks → soniqo.audio/benchmarks](https://soniqo.audio/benchmarks)**

Local docs (repo):
- **Models:** [Qwen3-ASR](docs/models/asr-model.md) · [Qwen3-TTS](docs/models/tts-model.md) · [CosyVoice](docs/models/cosyvoice-tts.md) · [Kokoro](docs/models/kokoro-tts.md) · [Parakeet TDT](docs/models/parakeet-asr.md) · [Parakeet Streaming](docs/models/parakeet-streaming-asr.md) · [Omnilingual ASR](docs/models/omnilingual-asr.md) · [PersonaPlex](docs/models/personaplex.md) · [FireRedVAD](docs/models/fireredvad.md)
- **Inference:** [Qwen3-ASR](docs/inference/qwen3-asr-inference.md) · [Parakeet TDT](docs/inference/parakeet-asr-inference.md) · [Parakeet Streaming](docs/inference/parakeet-streaming-asr-inference.md) · [Omnilingual ASR](docs/inference/omnilingual-asr-inference.md) · [TTS](docs/inference/qwen3-tts-inference.md) · [Forced Aligner](docs/inference/forced-aligner.md) · [Silero VAD](docs/inference/silero-vad.md) · [Speaker Diarization](docs/inference/speaker-diarization.md) · [Speech Enhancement](docs/inference/speech-enhancement.md)
- **Reference:** [Shared Protocols](docs/shared-protocols.md)

## Cache configuration

Model weights download from HuggingFace on first use and cache to `~/Library/Caches/qwen3-speech/`. Override with `QWEN3_CACHE_DIR` (CLI) or `cacheDir:` (Swift API). All `fromPretrained()` entry points also accept `offlineMode: true` to skip the network when weights are already cached.

See [`docs/inference/cache-and-offline.md`](docs/inference/cache-and-offline.md) for full details, including sandboxed iOS container paths.
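In Swift that looks roughly like the sketch below; `cacheDir:` and `offlineMode:` are the documented labels, but the exact parameter types and overloads may vary per module, so treat this as illustrative. On the CLI, the equivalent is `QWEN3_CACHE_DIR=/path/to/cache audio transcribe recording.wav`.

```swift
import Foundation
import Qwen3ASR

// Sketch: custom cache location plus offline load (parameter types assumed).
let cache = URL(fileURLWithPath: NSHomeDirectory())
    .appendingPathComponent("Models/speech-cache")
let model = try await Qwen3ASRModel.fromPretrained(
    cacheDir: cache,
    offlineMode: true   // skip the network; weights must already be cached
)
```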
## MLX Metal library

If you see `Failed to load the default metallib` at runtime, the Metal shader library is missing. Run `make build` or `./scripts/build_mlx_metallib.sh release` after a manual `swift build`. If the Metal Toolchain is missing, install it first:

```bash
xcodebuild -downloadComponent MetalToolchain
```

## Testing

```bash
make test                            # full suite (unit + E2E with model downloads)
swift test --skip E2E                # unit only (CI-safe, no downloads)
swift test --filter Qwen3ASRTests    # specific module
```

E2E test classes use the `E2E` prefix so CI can filter them out with `--skip E2E`. See [CLAUDE.md](CLAUDE.md#testing) for the full testing convention.

## Contributing

PRs welcome — bug fixes, new model integrations, documentation. Fork, create a feature branch, run `make build && make test`, and open a PR against `main`.

## License

Apache 2.0