{"id":48671458,"url":"https://github.com/itsdevcoffee/mojo-audio","last_synced_at":"2026-04-10T12:30:33.977Z","repository":{"id":331839279,"uuid":"1130131146","full_name":"itsdevcoffee/mojo-audio","owner":"itsdevcoffee","description":"Mojo audio library: FFI-enabled, pure Mojo DSP.","archived":false,"fork":false,"pushed_at":"2026-04-09T11:12:45.000Z","size":17993,"stargazers_count":5,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2026-04-09T11:25:30.387Z","etag":null,"topics":["ai","audio","audio-library","audio-processing","deep-learning","dsp","fft","machine-learning","mel-spectrogram","mojo","mojo-lang","openai-whisper","python","signal-processing","simd-optimizations","speech-recognition","whisper"],"latest_commit_sha":null,"homepage":"https://devcoffee.io/demo/mojo-audio/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/itsdevcoffee.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2026-01-08T04:34:09.000Z","updated_at":"2026-04-09T11:13:02.000Z","dependencies_parsed_at":null,"dependency_job_id":null,"html_url":"https://github.com/itsdevcoffee/mojo-audio","commit_stats":null,"previous_names":["itsdevcoffee/mojo-audio"],"tags_count":3,"template":false,"template_full_name":null,"purl":"pkg:github/itsdevcoffee/mojo-audio","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/itsdevcoffee%2Fmojo-audio","tags_url":"https://repos.ecosyste.ms/api/
v1/hosts/GitHub/repositories/itsdevcoffee%2Fmojo-audio/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/itsdevcoffee%2Fmojo-audio/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/itsdevcoffee%2Fmojo-audio/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/itsdevcoffee","download_url":"https://codeload.github.com/itsdevcoffee/mojo-audio/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/itsdevcoffee%2Fmojo-audio/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31642613,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-10T07:40:12.752Z","status":"ssl_error","status_checked_at":"2026-04-10T07:40:11.664Z","response_time":98,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","audio","audio-library","audio-processing","deep-learning","dsp","fft","machine-learning","mel-spectrogram","mojo","mojo-lang","openai-whisper","python","signal-processing","simd-optimizations","speech-recognition","whisper"],"created_at":"2026-04-10T12:30:33.039Z","updated_at":"2026-04-10T12:30:33.958Z","avatar_url":"https://github.com/itsdevcoffee.png","language":"Python","readme":"# mojo-audio\n\nHigh-performance audio DSP and ML inference library for voice conversion — built in Mojo and Python, runs on NVIDIA DGX Spark ARM64 with 
zero PyTorch CUDA dependency.\n\n[![Mojo](https://img.shields.io/badge/Mojo-0.26.1-orange?logo=fire)](https://docs.modular.com/mojo/)\n[![License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)\n[![Performance](https://img.shields.io/badge/vs_librosa-20--40%25_faster-brightgreen)](benchmarks/)\n\n---\n\n## What it is\n\nmojo-audio has two layers:\n\n**DSP layer (Mojo)** — low-level audio processing: FFT, mel spectrogram, resampling, VAD, pitch shifting, iSTFT. 20–40% faster than librosa through SIMD vectorization and multi-core parallelization.\n\n**ML inference layer (Python + MAX Graph)** — GPU-accelerated neural network inference without PyTorch CUDA. Runs natively on DGX Spark SM_121 ARM64 via [MAX Engine](https://www.modular.com/max):\n\n| Model | Purpose | Backend |\n|-------|---------|---------|\n| `AudioEncoder` | HuBERT / ContentVec content features | MAX Graph GPU |\n| `PitchExtractor` | RMVPE pitch (F0) estimation | MAX Graph GPU + numpy BiGRU |\n\nTogether these form the core of a voice conversion pipeline (content extraction → pitch extraction → synthesis) that runs fully on Spark without cloud or PyTorch.\n\n---\n\n## ML Inference\n\n### AudioEncoder — HuBERT / ContentVec\n\nExtracts content feature vectors from raw audio. Supports `facebook/hubert-base-ls960` and `lengyue233/content-vec-best`. Automatically uses GPU if available.\n\n```python\nfrom models import AudioEncoder\n\nmodel = AudioEncoder.from_pretrained(\"facebook/hubert-base-ls960\")\nfeatures = model.encode(audio_np)  # [1, N] float32 @16kHz → [1, T, 768]\n```\n\nGPU pipeline: CNN feature extractor + positional conv (numpy, avoids MAX conv2d groups bug) + 12× transformer blocks.\n\n### PitchExtractor — RMVPE\n\nExtracts F0 (fundamental frequency) per 10ms frame. 
No PyTorch CUDA needed — runs on DGX Spark ARM64.\n\n```python\nfrom models import PitchExtractor\n\nmodel = PitchExtractor.from_pretrained()  # downloads lj1995/VoiceConversionWebUI/rmvpe.pt\nf0_hz = model.extract(audio_np)  # [1, N] float32 @16kHz → [T] float32 Hz, 0=unvoiced\n```\n\nArchitecture: U-Net MAX Graph (5-level encoder + bottleneck + 5-level decoder) → numpy BiGRU → pitch salience bins → Hz per frame.\n\n### Running the models\n\n```bash\n# Fast tests (no download)\npixi run test-models\npixi run test-pitch-extractor\n\n# Full correctness tests (downloads model weights ~180–360MB)\npixi run test-models-full\npixi run test-pitch-extractor-full\n\n# GPU benchmark\npixi run bench-models\n```\n\n---\n\n## DSP Layer\n\n### Mel Spectrogram (Mojo)\n\nWhisper-compatible mel spectrogram preprocessing — 20–40% faster than librosa.\n\n```mojo\nfrom audio import mel_spectrogram\n\nvar mel = mel_spectrogram(audio)  # (80, 2998) for 30s @16kHz, ~12ms with -O3\n```\n\n**Performance:**\n```\n30-second audio @16kHz:\n\nlibrosa (Python):   15ms  (1993x realtime)\nmojo-audio (-O3):   12ms  (2457x realtime)  ← 20–40% faster\n```\n\nOptimization journey: 476ms (naive) → 12ms (-O3) = **40x total speedup** through iterative FFT, RFFT, twiddle caching, sparse mel filterbank, SIMD float32, radix-4 butterflies, and multi-core parallelization.\n\n### Other DSP components\n\n| Module | What it does |\n|--------|-------------|\n| `resample.mojo` | Lanczos resampler (48kHz → 16kHz) |\n| `vad.mojo` | Voice activity detection / silence trimming |\n| `pitch.mojo` | Phase vocoder pitch shifting |\n| `wav_io.mojo` | WAV file I/O |\n| `ffi/` | C-compatible shared library (`libmojo_audio.so`) |\n\n### Running DSP tests\n\n```bash\n# All Mojo DSP tests\npixi run test\n\n# Individual\npixi run test-pitch\npixi run bench-optimized   # mel spectrogram benchmark\npixi run bench-python      # librosa baseline comparison\n```\n\n---\n\n## Installation\n\n**Requirements:** 
[pixi](https://pixi.sh), Mojo 0.26+, Linux x86_64 or aarch64\n\n```bash\ngit clone https://github.com/itsdevcoffee/mojo-audio.git\ncd mojo-audio\npixi install\n```\n\n**Build FFI shared library** (for C/Rust/Python DSP integration):\n```bash\npixi run build-ffi-optimized   # → libmojo_audio.so (Linux) or .dylib (macOS)\n```\n\nSee [macOS Build Guide](docs/guides/02-04-2026-macos-build-guide.md) for macOS-specific setup.\n\n---\n\n## Project Structure\n\n```\nmojo-audio/\n├── src/\n│   ├── audio.mojo              # Mel spectrogram, FFT, STFT, windowing\n│   ├── pitch.mojo              # Phase vocoder pitch shifting\n│   ├── resample.mojo           # Lanczos resampler\n│   ├── vad.mojo                # Voice activity detection\n│   ├── wav_io.mojo             # WAV I/O\n│   ├── ffi/                    # C-compatible shared library exports\n│   └── models/                 # MAX Graph ML inference (Python)\n│       ├── audio_encoder.py    # HuBERT / ContentVec via MAX Graph\n│       ├── pitch_extractor.py  # RMVPE pitch extraction via MAX Graph\n│       ├── _rmvpe.py           # U-Net graph + numpy BiGRU\n│       ├── _rmvpe_weight_loader.py\n│       └── _weight_loader.py   # HuBERT/ContentVec weight loader\n├── tests/\n│   ├── test_audio_encoder.py   # AudioEncoder tests (pytest)\n│   ├── test_pitch_extractor.py # PitchExtractor tests (pytest)\n│   ├── test_fft.mojo           # FFT correctness\n│   ├── test_mel.mojo           # Mel spectrogram\n│   └── ...                     
# Other Mojo DSP tests\n├── experiments/\n│   ├── hubert-max/             # HuBERT MAX Graph experiments\n│   ├── contentvec-max/         # ContentVec benchmarks\n│   └── max-bug-repro/          # MAX Engine bug reproductions\n├── docs/\n│   ├── plans/                  # Implementation plans\n│   ├── context/                # Architecture reference\n│   └── project/                # Roadmap\n└── pixi.toml\n```\n\n---\n\n## Platform Support\n\n| Platform | DSP | ML Inference |\n|----------|-----|-------------|\n| Linux x86_64 (NVIDIA RTX) | ✅ | ✅ GPU |\n| Linux aarch64 (DGX Spark SM_121) | ✅ | ✅ GPU |\n| macOS Apple Silicon | ✅ | ✅ CPU |\n| macOS Intel | ✅ | ✅ CPU |\n\n---\n\n## Roadmap\n\nThe next steps are tracked in [docs/project/03-06-2026-roadmap.md](docs/project/03-06-2026-roadmap.md):\n\n- **Sprint 2:** Full GPU AudioEncoder (remove numpy bridge once MAX conv2d groups bug is fixed), phase-locked phase vocoder\n- **Sprint 3:** HiFiGAN vocoder in MAX Graph\n- **Sprint 4:** Full VITS synthesis — end-to-end voice conversion on Spark\n- **Sprint 5:** Shade integration and demo\n\n---\n\n## Comparison\n\n| Feature | mojo-audio | librosa | torchaudio | RVC / Applio | pyworld |\n|---------|:----------:|:-------:|:----------:|:------------:|:-------:|\n| **DSP** | | | | | |\n| Mel spectrogram | ✅ | ✅ | ✅ | via librosa | ❌ |\n| FFT / STFT | ✅ | ✅ | ✅ | via librosa | partial |\n| Resampling | ✅ | ✅ | ✅ | via librosa | ❌ |\n| Voice activity detection | ✅ | ❌ | ❌ | via silero | ❌ |\n| Phase vocoder pitch shift | ✅ | ✅ | ❌ | ✅ | ❌ |\n| iSTFT / Griffin-Lim | ✅ | ✅ | ✅ | ❌ | ❌ |\n| WAV I/O | ✅ | ✅ | ✅ | ✅ | ❌ |\n| C FFI / shared library | ✅ | ❌ | ❌ | ❌ | ❌ |\n| **ML Inference** | | | | | |\n| HuBERT content features | ✅ MAX Graph | ❌ | ❌ | ✅ PyTorch | ❌ |\n| ContentVec content features | ✅ MAX Graph | ❌ | ❌ | ✅ PyTorch | ❌ |\n| RMVPE pitch extraction | ✅ MAX Graph | ❌ | ❌ | ✅ PyTorch | ❌ |\n| WORLD pitch extraction | ❌ | ❌ | ❌ | via pyworld | ✅ |\n| GPU inference | ✅ MAX 
Engine | ❌ | ✅ CUDA | ✅ CUDA | ❌ |\n| **Platform** | | | | | |\n| Linux x86_64 | ✅ | ✅ | ✅ | ✅ | ✅ |\n| DGX Spark ARM64 | ✅ | ✅ | ❌ | ❌ | ❌ |\n| macOS Apple Silicon | ✅ | ✅ | ✅ | partial | ✅ |\n| PyTorch CUDA required | ❌ | ❌ | ✅ | ✅ | ❌ |\n| **Performance** | | | | | |\n| Mel spec vs librosa | **+20–40%** | baseline | ~parity | baseline | — |\n| GPU inference without CUDA | ✅ | ❌ | ❌ | ❌ | ❌ |\n\n---\n\n## Known Issues\n\n**MAX Engine conv2d groups bug (v26.1):** `ops.conv2d` returns incorrect results when `groups \u003e 1` and kernel size is large (K≥128). Filed as [modular/modular#6129](https://github.com/modular/modular/issues/6129). Workaround: HuBERT's `pos_conv` layer runs outside the MAX Graph via numpy.\n\n---\n\n## Citation\n\n```bibtex\n@software{mojo_audio_2026,\n  author = {Dev Coffee},\n  title = {mojo-audio: Audio DSP and ML inference for voice conversion},\n  year = {2026},\n  url = {https://github.com/itsdevcoffee/mojo-audio}\n}\n```\n\n---\n\n**[GitHub](https://github.com/itsdevcoffee/mojo-audio)** | **[Issues](https://github.com/itsdevcoffee/mojo-audio/issues)** | **[Roadmap](docs/project/03-06-2026-roadmap.md)**\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fitsdevcoffee%2Fmojo-audio","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fitsdevcoffee%2Fmojo-audio","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fitsdevcoffee%2Fmojo-audio/lists"}