{"id":13544559,"url":"https://github.com/bshall/soft-vc","last_synced_at":"2025-04-05T21:07:40.261Z","repository":{"id":38405910,"uuid":"414909163","full_name":"bshall/soft-vc","owner":"bshall","description":"Soft speech units for voice conversion","archived":false,"fork":false,"pushed_at":"2024-03-14T11:24:46.000Z","size":362,"stargazers_count":425,"open_issues_count":9,"forks_count":31,"subscribers_count":12,"default_branch":"main","last_synced_at":"2025-03-29T20:05:29.387Z","etag":null,"topics":["self-supervised-learning","speech-synthesis","voice-conversion"],"latest_commit_sha":null,"homepage":"https://bshall.github.io/soft-vc/","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bshall.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2021-10-08T08:38:05.000Z","updated_at":"2025-03-26T12:30:21.000Z","dependencies_parsed_at":"2024-03-14T12:45:03.064Z","dependency_job_id":null,"html_url":"https://github.com/bshall/soft-vc","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bshall%2Fsoft-vc","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bshall%2Fsoft-vc/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bshall%2Fsoft-vc/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bshall%2Fsoft-vc/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bshall","download_url":"https://codeload.github.com/bshall/soft-vc/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247399877,"owners_count":20932876,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["self-supervised-learning","speech-synthesis","voice-conversion"],"created_at":"2024-08-01T11:00:50.958Z","updated_at":"2025-04-05T21:07:40.238Z","avatar_url":"https://github.com/bshall.png","language":"Jupyter Notebook","readme":"# Soft Speech Units for Improved Voice Conversion\n\n[![arXiv](https://img.shields.io/badge/arXiv-Paper-\u003cCOLOR\u003e.svg)](https://arxiv.org/abs/2111.02392)\n[![demo](https://img.shields.io/static/v1?message=Audio%20Samples\u0026logo=Github\u0026labelColor=grey\u0026color=blue\u0026logoColor=white\u0026label=%20\u0026style=flat)](https://bshall.github.io/soft-vc/)\n[![colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/bshall/soft-vc/blob/main/soft-vc-demo.ipynb)\n\nOfficial repository for [A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion](https://ieeexplore.ieee.org/abstract/document/9746484). 
Audio samples are available [here](https://bshall.github.io/soft-vc/), and a Colab demo is available [here](https://colab.research.google.com/github/bshall/soft-vc/blob/main/soft-vc-demo.ipynb).

**Abstract:** The goal of voice conversion is to transform source speech into a target voice, keeping the content unchanged. In this paper, we focus on self-supervised representation learning for voice conversion. Specifically, we compare discrete and soft speech units as input features. We find that discrete representations effectively remove speaker information but discard some linguistic content, leading to mispronunciations. As a solution, we propose soft speech units learned by predicting a distribution over the discrete units. By modeling uncertainty, soft units capture more content information, improving the intelligibility and naturalness of converted speech.

For modularity, each component of the system is housed in a separate repository. Please visit the following links for more details:

- [HuBERT content encoders](https://github.com/bshall/hubert)
- [Acoustic Models](https://github.com/bshall/acoustic-model)
- [HiFiGAN vocoder](https://github.com/bshall/hifigan)

<div align="center">
    <img width="100%" alt="Soft-VC"
      src="https://raw.githubusercontent.com/bshall/soft-vc/main/soft-vc.png">
</div>
<div>
  <sup>
    <strong>Fig 1:</strong> Architecture of the voice conversion system. a) The <strong>discrete</strong> content encoder clusters audio features to produce a sequence of discrete speech units. b) The <strong>soft</strong> content encoder is trained to predict the discrete units. The acoustic model transforms the discrete/soft speech units into a target spectrogram. The vocoder converts the spectrogram into an audio waveform.
  </sup>
</div>
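To make the distinction in Fig 1 concrete, here is a minimal sketch of the two unit types, not the repository's actual API. All names (`features`, `centroids`, `proj`) and the sizes (K = 100 codes, 768-dim features, 256-dim soft units) are hypothetical placeholders: the discrete encoder snaps each frame to its nearest k-means centroid, while the soft encoder keeps a continuous projection that is trained to predict a distribution over those codes.

```python
import torch
import torch.nn.functional as F

K, D, UNITS = 100, 768, 256  # hypothetical sizes: codebook, feature dim, soft-unit dim

# Frame-level features from a self-supervised backbone (e.g. HuBERT), shape (T, D).
features = torch.randn(50, D)

# a) Discrete units: assign each frame to its nearest k-means centroid.
centroids = torch.randn(K, D)  # stand-in for centroids learned offline with k-means
discrete_units = torch.cdist(features, centroids).argmin(dim=-1)  # (T,) integer codes

# b) Soft units: a linear projection trained (via cross-entropy against the
# discrete labels) to predict a distribution over the K codes; the continuous
# projection itself is kept as the soft unit.
proj = torch.nn.Linear(D, UNITS)
soft_units = proj(features)                   # (T, UNITS) continuous units
logits = soft_units @ torch.randn(UNITS, K)   # stand-in classifier head
distribution = F.softmax(logits, dim=-1)      # predicted distribution over codes
```

Because the soft units stay continuous, a frame that falls between two clusters retains that ambiguity instead of being forced onto a single code, which is how the soft encoder recovers linguistic detail that quantization discards.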
## Example Usage

### Programmatic Usage

```python
import torch, torchaudio

# Load the content encoder (either hubert_soft or hubert_discrete)
hubert = torch.hub.load("bshall/hubert:main", "hubert_soft", trust_repo=True).cuda()

# Load the acoustic model (either hubert_soft or hubert_discrete)
acoustic = torch.hub.load("bshall/acoustic-model:main", "hubert_soft", trust_repo=True).cuda()

# Load the vocoder (either hifigan_hubert_soft or hifigan_hubert_discrete)
hifigan = torch.hub.load("bshall/hifigan:main", "hifigan_hubert_soft", trust_repo=True).cuda()

# Load the source audio (must be sampled at 16 kHz)
source, sr = torchaudio.load("path/to/wav")
assert sr == 16000
source = source.unsqueeze(0).cuda()

# Convert to the target speaker
with torch.inference_mode():
    # Extract speech units
    units = hubert.units(source)
    # Generate target spectrogram
    mel = acoustic.generate(units).transpose(1, 2)
    # Generate audio waveform
    target = hifigan(mel)
```

## Citation

If you find this work helpful, please consider citing our paper:

```bibtex
@inproceedings{soft-vc-2022,
    author={van Niekerk, Benjamin and Carbonneau, Marc-André and Zaïdi, Julian and Baas, Matthew and Seuté, Hugo and Kamper, Herman},
    booktitle={ICASSP},
    title={A Comparison of Discrete and Soft Speech Units for Improved Voice Conversion},
    year={2022}
}
```
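As a closing note on the usage example above: the content encoder expects 16 kHz mono input, and the vocoder returns a batched waveform tensor. The sketch below (file names hypothetical, reusing the `hubert`, `acoustic`, and `hifigan` models loaded above) shows one way to resample an arbitrary recording before conversion and write the result to disk, assuming the output shares the 16 kHz rate.

```python
import torch, torchaudio

# Hypothetical input path; hubert, acoustic and hifigan as loaded above.
wav, sr = torchaudio.load("recording.wav")

# Downmix to mono and resample to the 16 kHz rate the content encoder expects.
wav = wav.mean(dim=0, keepdim=True)
if sr != 16000:
    wav = torchaudio.functional.resample(wav, sr, 16000)
source = wav.unsqueeze(0).cuda()

with torch.inference_mode():
    units = hubert.units(source)
    mel = acoustic.generate(units).transpose(1, 2)
    target = hifigan(mel)

# target is (batch, channels, samples); drop the batch dim before saving.
torchaudio.save("converted.wav", target.squeeze(0).cpu(), 16000)
```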