{"id":23722889,"url":"https://github.com/tky823/dnn-based_source_separation","last_synced_at":"2025-04-07T18:15:26.751Z","repository":{"id":42664521,"uuid":"291297151","full_name":"tky823/DNN-based_source_separation","owner":"tky823","description":"A PyTorch implementation of DNN-based source separation.","archived":false,"fork":false,"pushed_at":"2022-03-29T09:16:12.000Z","size":307540,"stargazers_count":297,"open_issues_count":7,"forks_count":51,"subscribers_count":7,"default_branch":"main","last_synced_at":"2025-03-31T16:15:11.638Z","etag":null,"topics":["audio-separation","conv-tasnet","pytorch","source-separation","speech-separation","tasnet"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tky823.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2020-08-29T15:27:58.000Z","updated_at":"2025-03-28T22:40:34.000Z","dependencies_parsed_at":"2022-08-31T15:02:30.421Z","dependency_job_id":null,"html_url":"https://github.com/tky823/DNN-based_source_separation","commit_stats":null,"previous_names":[],"tags_count":30,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tky823%2FDNN-based_source_separation","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tky823%2FDNN-based_source_separation/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tky823%2FDNN-based_source_separation/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tky823%2FDNN-based_source_separation/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tky823","download_url":"https://codeload.github.com/tky823/DNN-based_source_separation/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247704571,"owners_count":20982298,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["audio-separation","conv-tasnet","pytorch","source-separation","speech-separation","tasnet"],"created_at":"2024-12-30T23:58:13.215Z","updated_at":"2025-04-07T18:15:26.715Z","avatar_url":"https://github.com/tky823.png","language":"Python","readme":"# DNN-based source separation\nA PyTorch implementation of DNN-based source separation.\n\n## New information\n- v0.7.2\n  - Update jupyter notebooks.\n\n## Model\n| Model | Reference | Done |\n| :---: | :---: | :---: |\n| WaveNet | [WaveNet: A Generative Model for Raw Audio](https://arxiv.org/abs/1609.03499) | ✔ |\n| Wave-U-Net | [Wave-U-Net: A Multi-Scale Neural Network for End-to-End Audio Source Separation](https://arxiv.org/abs/1806.03185) |  |\n| Deep Clustering | [Deep Clustering: Discriminative Embeddings for Segmentation and Separation](https://arxiv.org/abs/1508.04306) | ✔ |\n| Deep Clustering++ 
| Deep Clustering++ | [Single-Channel Multi-Speaker Separation using Deep Clustering](https://arxiv.org/abs/1607.02173) |  |
| Chimera | [Alternative Objective Functions for Deep Clustering](https://www.merl.com/publications/docs/TR2018-005.pdf) |  |
| DANet | [Deep Attractor Network for Single-Microphone Speaker Separation](https://arxiv.org/abs/1611.08930) | ✔ |
| ADANet | [Speaker-independent Speech Separation with Deep Attractor Network](https://arxiv.org/abs/1707.03634) | ✔ |
| TasNet | [TasNet: Time-domain Audio Separation Network for Real-time, Single-channel Speech Separation](https://arxiv.org/abs/1711.00541) | ✔ |
| Conv-TasNet | [Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation](https://arxiv.org/abs/1809.07454) | ✔ |
| DPRNN-TasNet | [Dual-path RNN: Efficient Long Sequence Modeling for Time-domain Single-channel Speech Separation](https://arxiv.org/abs/1910.06379) | ✔ |
| Gated DPRNN-TasNet | [Voice Separation with an Unknown Number of Multiple Speakers](https://arxiv.org/abs/2003.01531) |  |
| FurcaNet | [FurcaNet: An End-to-End Deep Gated Convolutional, Long Short-term Memory, Deep Neural Networks for Single Channel Speech Separation](https://arxiv.org/abs/1902.00651) |  |
| FurcaNeXt | [FurcaNeXt: End-to-End Monaural Speech Separation with Dynamic Gated Dilated Temporal Convolutional Networks](https://arxiv.org/abs/1902.04891) |  |
| DeepCASA | [Divide and Conquer: A Deep CASA Approach to Talker-independent Monaural Speaker Separation](https://arxiv.org/abs/1904.11148) |  |
| Conditioned-U-Net | [Conditioned-U-Net: Introducing a Control Mechanism in the U-Net for Multiple Source Separations](https://arxiv.org/abs/1907.01277) | ✔ |
| MMDenseNet | [Multi-scale Multi-band DenseNets for Audio Source Separation](https://arxiv.org/abs/1706.09588) | ✔ |
| MMDenseLSTM | [MMDenseLSTM: An Efficient Combination of Convolutional and Recurrent Neural Networks for Audio Source Separation](https://arxiv.org/abs/1805.02410) | ✔ |
| Open-Unmix (UMX) | [Open-Unmix - A Reference Implementation for Music Source Separation](https://hal.inria.fr/hal-02293689/document) | ✔ |
| Wavesplit | [Wavesplit: End-to-End Speech Separation by Speaker Clustering](https://arxiv.org/abs/2002.08933) |  |
| Hydranet | [Hydranet: A Real-Time Waveform Separation Network](https://ieeexplore.ieee.org/document/9053357) |  |
| Dual-Path Transformer Network (DPTNet) | [Dual-Path Transformer Network: Direct Context-Aware Modeling for End-to-End Monaural Speech Separation](https://arxiv.org/abs/2007.13975) | ✔ |
| CrossNet-Open-Unmix (X-UMX) | [All for One and One for All: Improving Music Separation by Bridging Networks](https://arxiv.org/abs/2010.04228) | ✔ |
| D3Net | [D3Net: Densely connected multidilated DenseNet for music source separation](https://arxiv.org/abs/2010.01733) | ✔ |
| LaSAFT | [LaSAFT: Latent Source Attentive Frequency Transformation for Conditioned Source Separation](https://arxiv.org/abs/2010.11631) |  |
| SepFormer | [Attention is All You Need in Speech Separation](https://arxiv.org/abs/2010.13154) | ✔ |
| GALR | [Effective Low-Cost Time-Domain Audio Separation Using Globally Attentive Locally Recurrent Networks](https://arxiv.org/abs/2101.05014) | ✔ |
| HRNet | [Vocal Melody Extraction via HRNet-Based Singing Voice Separation and Encoder-Decoder-Based F0 Estimation](https://www.mdpi.com/2079-9292/10/3/298) | ✔ |
| MRX | [The Cocktail Fork Problem: Three-Stem Audio Separation for Real-World Soundtracks](https://arxiv.org/abs/2110.09958) |  |

## Modules
| Module | Reference | Done |
| :---: | :---: | :---: |
| Depthwise-separable convolution | [Xception: Deep Learning with Depthwise Separable Convolutions](https://arxiv.org/abs/1610.02357) | ✔ |
| Gated Linear Units (GLU) | [Language Modeling with Gated Convolutional Networks](https://arxiv.org/abs/1612.08083) | ✔ |
| Sigmoid Linear Units (SiLU) | [Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning](https://arxiv.org/abs/1702.03118) | ✔ |
| Feature-wise Linear Modulation (FiLM) | [FiLM: Visual Reasoning with a General Conditioning Layer](https://arxiv.org/abs/1709.07871) | ✔ |
| Point-wise Convolutional Modulation (PoCM) | [LaSAFT: Latent Source Attentive Frequency Transformation for Conditioned Source Separation](https://arxiv.org/abs/2010.11631) | ✔ |

## Methods related to training
| Method | Reference | Done |
| :---: | :---: | :---: |
| Permutation invariant training (PIT) | [Multi-talker Speech Separation with Utterance-level Permutation Invariant Training of Deep Recurrent Neural Networks](https://arxiv.org/abs/1703.06284) | ✔ |
| One-and-rest PIT | [Recursive Speech Separation for Unknown Number of Speakers](https://arxiv.org/abs/1904.03065) | ✔ |
| Probabilistic PIT | [Probabilistic Permutation Invariant Training for Speech Separation](https://arxiv.org/abs/1908.01768) |  |
| Sinkhorn PIT | [Towards Listening to 10 People Simultaneously: An Efficient Permutation Invariant Training of Audio Source Separation Using Sinkhorn's Algorithm](https://arxiv.org/abs/2010.11871) | ✔ |
| Combination Loss | [All for One and One for All: Improving Music Separation by Bridging Networks](https://arxiv.org/abs/2010.04228) | ✔ |
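To make the PIT idea in the table above concrete, here is a minimal, self-contained sketch of utterance-level PIT with a negative SI-SDR objective. This is not code from this repository; `si_sdr` and `pit_loss` are hypothetical helper names, written only to illustrate the technique.

```py
import itertools

import torch


def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant SDR in dB between (..., time) signals."""
    target = target - target.mean(dim=-1, keepdim=True)
    estimate = estimate - estimate.mean(dim=-1, keepdim=True)
    # Project the estimate onto the target, then compare signal vs. residual power.
    scale = (estimate * target).sum(dim=-1, keepdim=True) \
        / (target.pow(2).sum(dim=-1, keepdim=True) + eps)
    projection = scale * target
    noise = estimate - projection
    ratio = projection.pow(2).sum(dim=-1) / (noise.pow(2).sum(dim=-1) + eps)
    return 10 * torch.log10(ratio + eps)


def pit_loss(estimates, targets):
    """Negative SI-SDR minimized over all source permutations.

    estimates, targets: (batch, n_sources, time)
    """
    n_sources = estimates.size(1)
    per_perm = []
    for perm in itertools.permutations(range(n_sources)):
        reordered = estimates[:, list(perm)]             # (batch, n_sources, time)
        per_perm.append(-si_sdr(reordered, targets).mean(dim=-1))  # (batch,)
    per_perm = torch.stack(per_perm, dim=-1)             # (batch, n_perms)
    return per_perm.min(dim=-1).values.mean()            # best permutation per utterance
```

In training code, this would be used as `loss = pit_loss(model(mixture), targets)`; enumerating permutations is only practical for a small number of sources, which is what variants like Sinkhorn PIT address.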
## Example
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/conv-tasnet/train_librispeech.ipynb)

A LibriSpeech example using [Conv-TasNet](https://arxiv.org/abs/1809.07454).

You can find other tutorials in `<REPOSITORY_ROOT>/egs/tutorials/`.

### 0. Preparation
```sh
cd <REPOSITORY_ROOT>/egs/tutorials/common/
. ./prepare_librispeech.sh \
--librispeech_root <LIBRISPEECH_ROOT> \
--n_sources <#SPEAKERS>
```

### 1. Training
```sh
cd <REPOSITORY_ROOT>/egs/tutorials/conv-tasnet/
. ./train.sh \
--exp_dir <OUTPUT_DIR>
```

If you want to resume training:
```sh
. ./train.sh \
--exp_dir <OUTPUT_DIR> \
--continue_from <MODEL_PATH>
```

### 2. Evaluation
```sh
cd <REPOSITORY_ROOT>/egs/tutorials/conv-tasnet/
. ./test.sh \
--exp_dir <OUTPUT_DIR>
```

### 3. Demo
```sh
cd <REPOSITORY_ROOT>/egs/tutorials/conv-tasnet/
. ./demo.sh
```

## Pretrained Models
You need `gdown` to download pretrained models:
```sh
pip install gdown
```

You can load pretrained models like this:
```py
from models.conv_tasnet import ConvTasNet

model = ConvTasNet.build_from_pretrained(task="musdb18", sample_rate=44100, target="vocals")
```
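As a rough sketch of running separation with a loaded model: the input file, the batching, and the exact forward signature below are assumptions, not guaranteed by this README; check the repository code (or the notebooks linked next) if your model expects a different input layout.

```py
# Hedged sketch only: tensor layouts are assumptions flagged inline.
import torch
import torchaudio

from models.conv_tasnet import ConvTasNet

model = ConvTasNet.build_from_pretrained(task="musdb18", sample_rate=44100, target="vocals")
model.eval()

# "mixture.wav" is a hypothetical input file, assumed to be at 44.1 kHz.
waveform, sample_rate = torchaudio.load("mixture.wav")  # (channels, time)
assert sample_rate == 44100, "resample first if the rates differ"

# Assumed input layout: a (batch, channels, time) float tensor.
mixture = waveform.unsqueeze(0)

with torch.no_grad():
    estimate = model(mixture)  # assumed to return the separated target waveform

torchaudio.save("vocals_estimate.wav", estimate.squeeze(0).cpu(), 44100)
```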
See `PRETRAINED.md`, `egs/tutorials/hub/pretrained.ipynb` or click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/hub/pretrained.ipynb) for details.

### Time Domain Wrappers for Time-Frequency Domain Models
See `egs/tutorials/hub/time-domain_wrapper.ipynb` or click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/hub/time-domain_wrapper.ipynb).

### Speech Separation by Pretrained Models
See `egs/tutorials/hub/speech-separation.ipynb` or click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/hub/speech-separation.ipynb).

### Music Source Separation by Pretrained Models
See `egs/tutorials/hub/music-source-separation.ipynb` or click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/hub/music-source-separation.ipynb).

If you want to separate your own music file, see below:
- MMDenseLSTM: See `egs/tutorials/mm-dense-lstm/separate_music.ipynb` or click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/mm-dense-lstm/separate_music.ipynb).
- Conv-TasNet: See `egs/tutorials/conv-tasnet/separate_music.ipynb` or click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/conv-tasnet/separate_music.ipynb).
- UMX: See `egs/tutorials/umx/separate_music.ipynb` or click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/umx/separate_music.ipynb).
- X-UMX: See `egs/tutorials/x-umx/separate_music.ipynb` or click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/x-umx/separate_music.ipynb).
- D3Net: See `egs/tutorials/d3net/separate_music.ipynb` or click [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tky823/DNN-based_source_separation/blob/main/egs/tutorials/d3net/separate_music.ipynb).