{"id":13456471,"url":"https://github.com/FunAudioLLM/CosyVoice","last_synced_at":"2025-03-24T10:32:46.020Z","repository":{"id":246935196,"uuid":"823430322","full_name":"FunAudioLLM/CosyVoice","owner":"FunAudioLLM","description":"Multi-lingual large voice generation model, providing inference, training and deployment full-stack ability.","archived":false,"fork":false,"pushed_at":"2024-10-22T05:54:01.000Z","size":1322,"stargazers_count":5818,"open_issues_count":296,"forks_count":623,"subscribers_count":57,"default_branch":"main","last_synced_at":"2024-10-26T09:13:42.136Z","etag":null,"topics":["audio-generation","cantonese","chatbot","chatgpt","chinese","cosyvoice","cross-lingual","english","fine-grained","fine-tuning","gpt-4o","japanese","korean","multi-lingual","natural-language-generation","python","text-to-speech","tts","voice-cloning"],"latest_commit_sha":null,"homepage":"https://funaudiollm.github.io/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/FunAudioLLM.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-07-03T02:59:22.000Z","updated_at":"2024-10-26T08:29:12.000Z","dependencies_parsed_at":"2024-08-19T03:23:27.104Z","dependency_job_id":"886911ec-b49c-41b6-8104-4e704f3cd182","html_url":"https://github.com/FunAudioLLM/CosyVoice","commit_stats":null,"previous_names":["funaudiollm/cosyvoice"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FunAudioLLM%2FCosyVoice","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FunAudioLLM%2FCosyVoice/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FunAudioLLM%2FCosyVoice/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/FunAudioLLM%2FCosyVoice/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/FunAudioLLM","download_url":"https://codeload.github.com/FunAudioLLM/CosyVoice/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221962389,"owners_count":16908332,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["audio-generation","cantonese","chatbot","chatgpt","chinese","cosyvoice","cross-lingual","english","fine-grained","fine-tuning","gpt-4o","japanese","korean","multi-lingual","natural-language-generation","python","text-to-speech","tts","voice-cloning"],"created_at":"2024-07-31T08:01:22.659Z","updated_at":"2025-03-24T10:32:46.002Z","avatar_url":"https://github.com/FunAudioLLM.png","language":"Python","readme":"[![SVG 
[![SVG Banners](https://svg-banners.vercel.app/api?type=origin&text1=CosyVoice🤠&text2=Text-to-Speech%20💖%20Large%20Language%20Model&width=800&height=210)](https://github.com/Akshay090/svg-banners)

## 👉🏻 CosyVoice 👈🏻
**CosyVoice 2.0**: [Demos](https://funaudiollm.github.io/cosyvoice2/); [Paper](https://arxiv.org/abs/2412.10117); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice2-0.5B); [HuggingFace](https://huggingface.co/spaces/FunAudioLLM/CosyVoice2-0.5B)

**CosyVoice 1.0**: [Demos](https://fun-audio-llm.github.io); [Paper](https://funaudiollm.github.io/pdf/CosyVoice_v1.pdf); [Modelscope](https://www.modelscope.cn/studios/iic/CosyVoice-300M)

## Highlight🔥

**CosyVoice 2.0** has been released! Compared to version 1.0, the new version offers more accurate, more stable, faster, and higher-quality speech generation.
### Multilingual
- **Supported Languages**: Chinese, English, Japanese, Korean, Chinese dialects (Cantonese, Sichuanese, Shanghainese, Tianjinese, Wuhanese, etc.)
- **Cross-lingual & Mixed-lingual**: Supports zero-shot voice cloning for cross-lingual and code-switching scenarios.
### Ultra-Low Latency
- **Bidirectional Streaming Support**: CosyVoice 2.0 integrates offline and streaming modeling technologies.
- **Rapid First Packet Synthesis**: Achieves latency as low as 150ms while maintaining high-quality audio output.
### High Accuracy
- **Improved Pronunciation**: Reduces pronunciation errors by 30% to 50% compared to CosyVoice 1.0.
- **Benchmark Achievements**: Attains the lowest character error rate on the hard test set of the Seed-TTS evaluation set.
### Strong Stability
- **Consistency in Timbre**: Ensures reliable voice consistency for zero-shot and cross-language speech synthesis.
- **Cross-language Synthesis**: Marked improvements compared to version 1.0.
### Natural Experience
- **Enhanced Prosody and Sound Quality**: Improved alignment of synthesized audio, raising MOS evaluation scores from 5.4 to 5.53.
- **Emotional and Dialectal Flexibility**: Now supports more granular emotional controls and accent adjustments.

## Roadmap

- [x] 2024/12

    - [x] 25Hz CosyVoice 2.0 released

- [x] 2024/09

    - [x] 25Hz CosyVoice base model
    - [x] 25Hz CosyVoice voice conversion model

- [x] 2024/08

    - [x] Repetition Aware Sampling (RAS) inference for LLM stability
    - [x] Streaming inference mode support, including KV cache and SDPA for RTF optimization

- [x] 2024/07

    - [x] Flow matching training support
    - [x] WeTextProcessing support when ttsfrd is not available
    - [x] FastAPI server and client


## Install

**Clone and install**

- Clone the repo
``` sh
git clone --recursive https://github.com/FunAudioLLM/CosyVoice.git
# If you failed to clone the submodule due to network failures, please run the following command until it succeeds
cd CosyVoice
git submodule update --init --recursive
```

- Install Conda: please see https://docs.conda.io/en/latest/miniconda.html
- Create a Conda env:

``` sh
conda create -n cosyvoice -y python=3.10
conda activate cosyvoice
# pynini is required by WeTextProcessing; use conda to install it as it can be executed on all platforms.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com

# If you encounter sox compatibility issues
# ubuntu
sudo apt-get install sox libsox-dev
# centos
sudo yum install sox sox-devel
```
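After installing, a quick import check can save time before downloading the pretrained models. The snippet below is a minimal sanity-check sketch (not part of the repository); it only assumes the packages installed above (`torch`, `torchaudio`, `pynini`).

``` python
# Hypothetical post-install sanity check (not part of the CosyVoice repo):
# confirms the core dependencies installed above can be imported and reports
# whether a CUDA device is visible.
import torch
import torchaudio
import pynini

print('torch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('torchaudio:', torchaudio.__version__)
print('pynini imported OK')
```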
**Model download**

We strongly recommend that you download our pretrained `CosyVoice2-0.5B`, `CosyVoice-300M`, `CosyVoice-300M-SFT`, and `CosyVoice-300M-Instruct` models, along with the `CosyVoice-ttsfrd` resource.

``` python
# Download the models via the ModelScope SDK
from modelscope import snapshot_download
snapshot_download('iic/CosyVoice2-0.5B', local_dir='pretrained_models/CosyVoice2-0.5B')
snapshot_download('iic/CosyVoice-300M', local_dir='pretrained_models/CosyVoice-300M')
snapshot_download('iic/CosyVoice-300M-25Hz', local_dir='pretrained_models/CosyVoice-300M-25Hz')
snapshot_download('iic/CosyVoice-300M-SFT', local_dir='pretrained_models/CosyVoice-300M-SFT')
snapshot_download('iic/CosyVoice-300M-Instruct', local_dir='pretrained_models/CosyVoice-300M-Instruct')
snapshot_download('iic/CosyVoice-ttsfrd', local_dir='pretrained_models/CosyVoice-ttsfrd')
```

``` sh
# Download the models via git; make sure git lfs is installed
mkdir -p pretrained_models
git clone https://www.modelscope.cn/iic/CosyVoice2-0.5B.git pretrained_models/CosyVoice2-0.5B
git clone https://www.modelscope.cn/iic/CosyVoice-300M.git pretrained_models/CosyVoice-300M
git clone https://www.modelscope.cn/iic/CosyVoice-300M-25Hz.git pretrained_models/CosyVoice-300M-25Hz
git clone https://www.modelscope.cn/iic/CosyVoice-300M-SFT.git pretrained_models/CosyVoice-300M-SFT
git clone https://www.modelscope.cn/iic/CosyVoice-300M-Instruct.git pretrained_models/CosyVoice-300M-Instruct
git clone https://www.modelscope.cn/iic/CosyVoice-ttsfrd.git pretrained_models/CosyVoice-ttsfrd
```

Optionally, you can unzip the `ttsfrd` resource and install the `ttsfrd` package for better text normalization performance.

Note that this step is not necessary. If you do not install the `ttsfrd` package, WeTextProcessing will be used by default.

``` sh
cd pretrained_models/CosyVoice-ttsfrd/
unzip resource.zip -d .
pip install ttsfrd_dependency-0.1-py3-none-any.whl
pip install ttsfrd-0.4.2-cp310-cp310-linux_x86_64.whl
```
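Before moving on, it can be worth verifying that the model directories actually contain files; an incomplete `git lfs` pull is a common cause of load errors later. The helper below is a hypothetical convenience sketch, not part of CosyVoice; the paths match the `local_dir` values used above, everything else is illustrative.

``` python
# Hypothetical helper (not part of CosyVoice): check that each pretrained model
# directory exists and is non-empty before trying to load it.
from pathlib import Path

expected_dirs = [
    'pretrained_models/CosyVoice2-0.5B',
    'pretrained_models/CosyVoice-300M',
    'pretrained_models/CosyVoice-300M-SFT',
    'pretrained_models/CosyVoice-300M-Instruct',
    'pretrained_models/CosyVoice-ttsfrd',
]

for d in expected_dirs:
    p = Path(d)
    n_files = sum(1 for f in p.rglob('*') if f.is_file()) if p.is_dir() else 0
    print(f"{d}: {'ok' if n_files else 'MISSING or empty'} ({n_files} files)")
```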
**Basic Usage**

We strongly recommend using `CosyVoice2-0.5B` for better performance.
Follow the code below for detailed usage of each model.

``` python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio
```

**CosyVoice2 Usage**
```python
cosyvoice = CosyVoice2('pretrained_models/CosyVoice2-0.5B', load_jit=False, load_trt=False, fp16=False)

# NOTE if you want to reproduce the results on https://funaudiollm.github.io/cosyvoice2, please add text_frontend=False during inference
# zero_shot usage
prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物，那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐，笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# fine-grained control; for the supported controls, check cosyvoice/tokenizer/tokenizer.py#L248
for i, j in enumerate(cosyvoice.inference_cross_lingual('在他讲述那个荒诞故事的过程中，他突然[laughter]停下来，因为他自己也被逗笑了[laughter]。', prompt_speech_16k, stream=False)):
    torchaudio.save('fine_grained_control_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# instruct usage
for i, j in enumerate(cosyvoice.inference_instruct2('收到好友从远方寄来的生日礼物，那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐，笑容如花儿般绽放。', '用四川话说这句话', prompt_speech_16k, stream=False)):
    torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

# bistream usage: you can use a generator as input; this is useful when a text LLM produces the input text
# NOTE you should still have some basic sentence-split logic because the LLM cannot handle arbitrary sentence lengths
def text_generator():
    yield '收到好友从远方寄来的生日礼物，'
    yield '那份意外的惊喜与深深的祝福'
    yield '让我心中充满了甜蜜的快乐，'
    yield '笑容如花儿般绽放。'
for i, j in enumerate(cosyvoice.inference_zero_shot(text_generator(), '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
```
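The examples above use `stream=False` and yield complete utterances. To try the low-latency streaming path highlighted earlier, the same calls can be made with `stream=True`, in which case each yielded item carries a chunk of audio. The sketch below is illustrative rather than official: it reuses the `cosyvoice` (CosyVoice2) instance and `prompt_speech_16k` from the block above, and assumes each result's `'tts_speech'` tensor has shape `(1, num_samples)` so chunks can be concatenated along `dim=1`.

``` python
# Illustrative streaming sketch (not from the official docs); reuses
# `cosyvoice` and `prompt_speech_16k` defined in the block above.
import time
import torch
import torchaudio

chunks = []
start = time.time()
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物，那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐，笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=True)):
    if i == 0:
        # rough first-packet latency, excluding model load time
        print('first chunk after {:.3f}s'.format(time.time() - start))
    chunks.append(j['tts_speech'])

# stitch the streamed chunks into a single waveform and save it
torchaudio.save('zero_shot_stream.wav', torch.cat(chunks, dim=1), cosyvoice.sample_rate)
```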
**CosyVoice Usage**
```python
cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-SFT', load_jit=False, load_trt=False, fp16=False)
# sft usage
print(cosyvoice.list_available_spks())
# set stream=True for chunked streaming inference
for i, j in enumerate(cosyvoice.inference_sft('你好，我是通义生成式语音大模型，请问有什么可以帮您的吗？', '中文女', stream=False)):
    torchaudio.save('sft_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M') # or change to pretrained_models/CosyVoice-300M-25Hz for 25Hz inference
# zero_shot usage, <|zh|><|en|><|jp|><|yue|><|ko|> for Chinese/English/Japanese/Cantonese/Korean
prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_zero_shot('收到好友从远方寄来的生日礼物，那份意外的惊喜与深深的祝福让我心中充满了甜蜜的快乐，笑容如花儿般绽放。', '希望你以后能够做的比我还好呦。', prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
# cross_lingual usage
prompt_speech_16k = load_wav('./asset/cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_cross_lingual('<|en|>And then later on, fully acquiring that company. So keeping management in line, interest in line with the asset that\'s coming into the family is a reason why sometimes we don\'t buy the whole thing.', prompt_speech_16k, stream=False)):
    torchaudio.save('cross_lingual_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
# vc usage
prompt_speech_16k = load_wav('./asset/zero_shot_prompt.wav', 16000)
source_speech_16k = load_wav('./asset/cross_lingual_prompt.wav', 16000)
for i, j in enumerate(cosyvoice.inference_vc(source_speech_16k, prompt_speech_16k, stream=False)):
    torchaudio.save('vc_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)

cosyvoice = CosyVoice('pretrained_models/CosyVoice-300M-Instruct')
# instruct usage, supports <laughter></laughter>, <strong></strong>, [laughter], [breath]
for i, j in enumerate(cosyvoice.inference_instruct('在面对挑战时，他展现了非凡的<strong>勇气</strong>与<strong>智慧</strong>。', '中文男', 'Theo \'Crimson\', is a fiery, passionate rebel leader. Fights with fervor for justice, but struggles with impulsiveness.', stream=False)):
    torchaudio.save('instruct_{}.wav'.format(i), j['tts_speech'], cosyvoice.sample_rate)
```

**Start web demo**

You can use our web demo page to get familiar with CosyVoice quickly.

Please see the demo website for details.

``` sh
# change to iic/CosyVoice-300M-SFT for sft inference, or iic/CosyVoice-300M-Instruct for instruct inference
python3 webui.py --port 50000 --model_dir pretrained_models/CosyVoice-300M
```

**Advanced Usage**

For advanced users, we have provided training and inference scripts in `examples/libritts/cosyvoice/run.sh`.

**Build for deployment**

Optionally, if you want service deployment, you can run the following steps.

``` sh
cd runtime/python
docker build -t cosyvoice:v1.0 .
# change iic/CosyVoice-300M to iic/CosyVoice-300M-Instruct if you want to use instruct inference
# for grpc usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/grpc && python3 server.py --port 50000 --max_conc 4 --model_dir iic/CosyVoice-300M && sleep infinity"
cd grpc && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
# for fastapi usage
docker run -d --runtime=nvidia -p 50000:50000 cosyvoice:v1.0 /bin/bash -c "cd /opt/CosyVoice/CosyVoice/runtime/python/fastapi && python3 server.py --port 50000 --model_dir iic/CosyVoice-300M && sleep infinity"
cd fastapi && python3 client.py --port 50000 --mode <sft|zero_shot|cross_lingual|instruct>
```

## Discussion & Communication

You can discuss directly on [GitHub Issues](https://github.com/FunAudioLLM/CosyVoice/issues).

You can also scan the QR code to join our official Dingding chat group.

<img src="./asset/dingding.png" width="250px">

## Acknowledgements

1. We borrowed a lot of code from [FunASR](https://github.com/modelscope/FunASR).
2. We borrowed a lot of code from [FunCodec](https://github.com/modelscope/FunCodec).
3. We borrowed a lot of code from [Matcha-TTS](https://github.com/shivammehta25/Matcha-TTS).
4. We borrowed a lot of code from [AcademiCodec](https://github.com/yangdongchao/AcademiCodec).
5. We borrowed a lot of code from [WeNet](https://github.com/wenet-e2e/wenet).

## Disclaimer
The content provided above is for academic purposes only and is intended to demonstrate technical capabilities. Some examples are sourced from the internet. If any content infringes on your rights, please contact us to request its removal.