{"id":13545397,"url":"https://github.com/jaywalnut310/vits","last_synced_at":"2025-05-14T08:05:19.477Z","repository":{"id":37406761,"uuid":"371194369","full_name":"jaywalnut310/vits","owner":"jaywalnut310","description":"VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech","archived":false,"fork":false,"pushed_at":"2023-12-06T01:29:50.000Z","size":3423,"stargazers_count":7328,"open_issues_count":161,"forks_count":1324,"subscribers_count":54,"default_branch":"main","last_synced_at":"2025-04-11T02:51:47.506Z","etag":null,"topics":["deep-learning","pytorch","speech-synthesis","text-to-speech","tts"],"latest_commit_sha":null,"homepage":"https://jaywalnut310.github.io/vits-demo/index.html","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jaywalnut310.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2021-05-26T23:38:12.000Z","updated_at":"2025-04-10T14:52:50.000Z","dependencies_parsed_at":"2023-02-10T23:00:53.639Z","dependency_job_id":"9af94ef8-5385-45b0-8fd7-2dd343192244","html_url":"https://github.com/jaywalnut310/vits","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jaywalnut310%2Fvits","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jaywalnut310%2Fvits/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jaywalnut310%2Fvits/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jaywalnut310%2Fvits/m
anifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jaywalnut310","download_url":"https://codeload.github.com/jaywalnut310/vits/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254101588,"owners_count":22014907,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","pytorch","speech-synthesis","text-to-speech","tts"],"created_at":"2024-08-01T11:01:02.003Z","updated_at":"2025-05-14T08:05:19.454Z","avatar_url":"https://github.com/jaywalnut310.png","language":"Python","readme":"# VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech\n\n### Jaehyeon Kim, Jungil Kong, and Juhee Son\n\nIn our recent [paper](https://arxiv.org/abs/2106.06103), we propose VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech.\n\nSeveral recent end-to-end text-to-speech (TTS) models enabling single-stage training and parallel sampling have been proposed, but their sample quality does not match that of two-stage TTS systems. In this work, we present a parallel end-to-end TTS method that generates more natural sounding audio than current two-stage models. Our method adopts variational inference augmented with normalizing flows and an adversarial training process, which improves the expressive power of generative modeling. We also propose a stochastic duration predictor to synthesize speech with diverse rhythms from input text. 
With the uncertainty modeling over latent variables and the stochastic duration predictor, our method expresses the natural one-to-many relationship in which a text input can be spoken in multiple ways with different pitches and rhythms. A subjective human evaluation (mean opinion score, or MOS) on LJ Speech, a single-speaker dataset, shows that our method outperforms the best publicly available TTS systems and achieves a MOS comparable to ground truth.\n\nVisit our [demo](https://jaywalnut310.github.io/vits-demo/index.html) for audio samples.\n\nWe also provide the [pretrained models](https://drive.google.com/drive/folders/1ksarh-cJf3F5eKJjLVWY0X1j1qsQqiS2?usp=sharing).\n\n**Update note:** Thanks to [Rishikesh (ऋषिकेश)](https://github.com/jaywalnut310/vits/issues/1), our interactive TTS demo is now available on a [Colab Notebook](https://colab.research.google.com/drive/1CO61pZizDj7en71NQG_aqqKdGaA_SaBf?usp=sharing).\n\n\u003ctable style=\"width:100%\"\u003e\n  \u003ctr\u003e\n    \u003cth\u003eVITS at training\u003c/th\u003e\n    \u003cth\u003eVITS at inference\u003c/th\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003cimg src=\"resources/fig_1a.png\" alt=\"VITS at training\" height=\"400\"\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003cimg src=\"resources/fig_1b.png\" alt=\"VITS at inference\" height=\"400\"\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/table\u003e\n\n\n## Prerequisites\n0. Python \u003e= 3.6\n0. Clone this repository\n0. Install Python requirements. Please refer to [requirements.txt](requirements.txt)\n    1. You may need to install espeak first: `apt-get install espeak`\n0. Download datasets\n    1. Download and extract the LJ Speech dataset, then rename or create a link to the dataset folder: `ln -s /path/to/LJSpeech-1.1/wavs DUMMY1`\n    1. For the multi-speaker setting, download and extract the VCTK dataset, and downsample the wav files to 22050 Hz. 
Then rename or create a link to the dataset folder: `ln -s /path/to/VCTK-Corpus/downsampled_wavs DUMMY2`\n0. Build Monotonic Alignment Search and run preprocessing if you use your own datasets.\n```sh\n# Cython version of Monotonic Alignment Search\ncd monotonic_align\npython setup.py build_ext --inplace\n\n# Preprocessing (g2p) for your own datasets. Preprocessed phonemes for LJ Speech and VCTK have already been provided.\n# python preprocess.py --text_index 1 --filelists filelists/ljs_audio_text_train_filelist.txt filelists/ljs_audio_text_val_filelist.txt filelists/ljs_audio_text_test_filelist.txt \n# python preprocess.py --text_index 2 --filelists filelists/vctk_audio_sid_text_train_filelist.txt filelists/vctk_audio_sid_text_val_filelist.txt filelists/vctk_audio_sid_text_test_filelist.txt\n```\n\n\n## Training Example\n```sh\n# LJ Speech\npython train.py -c configs/ljs_base.json -m ljs_base\n\n# VCTK\npython train_ms.py -c configs/vctk_base.json -m vctk_base\n```\n\n\n## Inference Example\nSee [inference.ipynb](inference.ipynb)\n","funding_links":[],"categories":["Python","Original","语音合成","Projects","Tools \u0026 Frameworks","Repos","二、开源库（按数据类型分类，附实用场景）"],"sub_categories":["网络服务_其他","Global AI Projects","Open-source projects","3. 音频（音乐、语音生成）"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjaywalnut310%2Fvits","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjaywalnut310%2Fvits","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjaywalnut310%2Fvits/lists"}