<p align="center"><img src="docs/icon.png" width="50%"></p>

# **SongBloom**: *Coherent Song Generation via Interleaved Autoregressive Sketching and Diffusion Refinement*

<div align="center">

[![Paper](https://img.shields.io/badge/arXiv-2506.07634-b31b1b.svg)](https://arxiv.org/abs/2506.07634)
[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-yellow)](https://huggingface.co/CypressYang/SongBloom)
[![Demo Page](https://img.shields.io/badge/Demo-Audio%20Samples-green)](https://cypress-yang.github.io/SongBloom_demo)

</div>

We propose **SongBloom**, a novel framework for full-length song generation that leverages an interleaved paradigm of autoregressive sketching and diffusion-based refinement. SongBloom employs an autoregressive diffusion model that combines the high fidelity of diffusion models with the scalability of language models. Specifically, it gradually extends a musical sketch from short to long and refines the details from coarse to fine-grained. This interleaved generation paradigm effectively integrates prior semantic and acoustic context to guide the generation process. Experimental results demonstrate that SongBloom outperforms existing methods on both subjective and objective metrics and achieves performance comparable to state-of-the-art commercial music generation platforms.

![Architecture](docs/architecture.png)

## Models

| Name                        | Size | Max Length | Prompt type | 🤗 |
| --------------------------- | ---- | ---------- | ----------- | -- |
| songbloom_full_150s         | 2B   | 2m30s      | 10s wav     | [link](https://huggingface.co/CypressYang/SongBloom) |
| songbloom_full_150s_dpo     | 2B   | 2m30s      | 10s wav     | [link](https://huggingface.co/CypressYang/SongBloom) |
| songbloom_full_240s$^{[1]}$ | 2B   | 4m         | 10s wav     | [link](https://huggingface.co/CypressYang/SongBloom_long) |
| ...                         |      |            |             |    |
- [1] For the **_150s** series models, each `[intro]`, `[outro]`, and `[inst]` token corresponds to an expected duration of 1 second; for the **_240s** series models, each token corresponds to 5 seconds (details in [docs/lyric_format.md](docs/lyric_format.md)).

## Updates

- **Oct 2025**: Released songbloom_full_240s; fixed bugs in half-precision inference; reduced GPU memory consumption during the VAE stage.
- **Sep 2025**: Released the songbloom_full_150s model with DPO post-training.
- **Jun 2025**: Released the songbloom_full_150s model and the inference script.

## Getting Started

### Prepare Environments

```bash
conda create -n SongBloom python==3.8.12
conda activate SongBloom

# yum install libsndfile
# For a different CUDA version:
# pip install torch==2.2.0 torchaudio==2.2.0 --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
```

### Data Preparation

Prepare a .jsonl file, where each line is a JSON object:

```json
{
	"idx": "The index of each sample",
	"lyrics": "The lyrics to be generated",
	"prompt_wav": "The path of the style prompt audio"
}
```

An example is provided at [example/test.jsonl](example/test.jsonl).

The prompt wav should be a 10-second, 48 kHz audio clip.

For details on lyric formatting, see [docs/lyric_format.md](docs/lyric_format.md).

### Inference

```bash
source set_env.sh

python3 infer.py --input-jsonl example/test.jsonl

# For GPUs with low VRAM (e.g., RTX 4090), set the dtype to bfloat16:
python3 infer.py --input-jsonl example/test.jsonl --dtype bfloat16

# SongBloom also supports flash-attn (optional).
# To enable it, install flash-attn manually (v2.6.3 was used during training)
# and set os.environ['DISABLE_FLASH_ATTN'] = "0" in infer.py:8.
```

Flags:

- `--model-name`: model version; see the model table above (e.g., `songbloom_full_150s`, `songbloom_full_150s_dpo`)
- `--local-dir`: directory where the weights and config files are downloaded
- `--input-jsonl`: input raw data
- `--output-dir`: directory where the output audio is saved
- `--n-samples`: number of audio samples generated per input item

## Apple Silicon (Mac)

Set these environment variables before running:

```bash
export PYTORCH_ENABLE_MPS_FALLBACK=1
export DISABLE_FLASH_ATTN=1
```

When loading the model, explicitly pass the MPS device and use float32, not bfloat16:

```python
import torch

device = torch.device('mps')
model = SongBloom_Sampler.build_from_trainer(cfg, strict=False, dtype=torch.float32, device=device)
```

## Citation

```bibtex
@article{yang2025songbloom,
  title={SongBloom: Coherent Song Generation via Interleaved Autoregressive Sketching and Diffusion Refinement},
  author={Yang, Chenyu and Wang, Shuai and Chen, Hangting and Tan, Wei and Yu, Jianwei and Li, Haizhou},
  journal={arXiv preprint arXiv:2506.07634},
  year={2025}
}
```

## License

SongBloom (code and weights) is released under the [LICENSE](LICENSE).
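## Appendix: Building an Input .jsonl

The per-line input format described under Data Preparation can also be generated programmatically. Below is a minimal sketch using only Python's standard library; the sample ids, lyrics, and prompt paths are hypothetical placeholders, not files shipped with the repository.

```python
import json
from pathlib import Path

def write_input_jsonl(path, samples):
    """Write one JSON object per line, matching the idx/lyrics/prompt_wav schema."""
    with open(path, "w", encoding="utf-8") as f:
        for sample in samples:
            # Each record needs exactly these three keys.
            record = {
                "idx": sample["idx"],
                "lyrics": sample["lyrics"],
                "prompt_wav": sample["prompt_wav"],
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical entries; prompt_wav must point to a real 10-second, 48 kHz clip.
samples = [
    {"idx": "demo_001", "lyrics": "[intro] la la la", "prompt_wav": "prompts/demo_001.wav"},
]
write_input_jsonl("test.jsonl", samples)

# Read it back to confirm each line parses as standalone JSON.
lines = Path("test.jsonl").read_text(encoding="utf-8").splitlines()
parsed = [json.loads(line) for line in lines]
print(parsed[0]["idx"])
```

The resulting file can be passed directly to `infer.py --input-jsonl test.jsonl` once the prompt paths point at real audio.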
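Since the style prompt must be a 10-second, 48 kHz clip, a quick sanity check can catch mismatched inputs before inference. This sketch uses only the standard library's `wave` module, so it handles uncompressed PCM .wav files only; the function name and tolerance are my own choices, not part of the SongBloom API.

```python
import wave

def check_prompt_wav(path, expected_rate=48000, expected_seconds=10.0, tolerance=0.1):
    """Return (sample_rate, duration_s); raise ValueError on a mismatched clip."""
    with wave.open(path, "rb") as wf:
        rate = wf.getframerate()
        duration = wf.getnframes() / rate
    if rate != expected_rate:
        raise ValueError(f"expected {expected_rate} Hz, got {rate} Hz")
    if abs(duration - expected_seconds) > tolerance:
        raise ValueError(f"expected ~{expected_seconds}s, got {duration:.2f}s")
    return rate, duration

# Demo on a synthetic 10-second, 48 kHz, 16-bit mono clip of silence.
with wave.open("prompt_demo.wav", "wb") as wf:
    wf.setnchannels(1)
    wf.setsampwidth(2)  # 16-bit PCM
    wf.setframerate(48000)
    wf.writeframes(b"\x00\x00" * 48000 * 10)  # 480000 frames = 10 s

rate, duration = check_prompt_wav("prompt_demo.wav")
print(rate, round(duration, 2))
```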