https://github.com/affige/genmusic_demo_list

a list of demo websites for automatic music generation research
- Host: GitHub
- URL: https://github.com/affige/genmusic_demo_list
- Owner: affige
- Created: 2018-12-26T09:44:17.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2025-03-07T05:54:31.000Z (8 months ago)
- Last Synced: 2025-03-07T06:26:17.530Z (8 months ago)
- Topics: artificial-intelligence, music-generation
- Size: 403 KB
- Stars: 673
- Watchers: 37
- Forks: 45
- Open Issues: 2
Metadata Files:
- Readme: README.md
README
### lyrics-to-song (vocal+backing)
* [DiffRhythm](https://arxiv.org/pdf/2503.01183) (diffusion; ning25arxiv): https://nzqian.github.io/DiffRhythm/
* [SongGen](https://arxiv.org/abs/2502.13128) (transformer; liu25arxiv): https://liuzh-19.github.io/SongGen/
* [YuE](https://arxiv.org/pdf/2503.08638) (transformer; yuan25arxiv): https://map-yue.github.io/

### text-to-music/audio
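
Several systems in this section release open-source inference code. As a quick orientation, here is a minimal text-to-music sketch using MusicGen (listed below), assuming the `audiocraft` package and the `facebook/musicgen-small` checkpoint:

```python
# pip install audiocraft  (assumed; see the MusicGen entry below)
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of audio per sample

# One waveform per text prompt; returns a [batch, channels, samples] tensor.
wavs = model.generate(["lo-fi hip hop beat with mellow piano"])

for i, wav in enumerate(wavs):
    # Writes out_0.wav with loudness normalization.
    audio_write(f"out_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```
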
* [InspireMusic](https://arxiv.org/pdf/2503.00084) (diffusion; 25arxiv): https://huggingface.co/spaces/FunAudioLLM/InspireMusic
* [stable-audio-control](https://arxiv.org/pdf/2410.05151) (diffusion; hou25icassp): https://stable-audio-control.github.io/
* [improved-diff-a-riff](https://arxiv.org/pdf/2410.23005) (diffusion; nistal24neuripsworkshop): https://sonycslparis.github.io/improved_dar/
* [Multi-Aspect Conditioning](https://benadar293.github.io/multi-aspect-conditioning/static/pdfs/Multi_Aspect_Conditioning.pdf) (diffusion; maman24): https://benadar293.github.io/multi-aspect-conditioning/
* [Presto](https://arxiv.org/abs/2410.05167) (diffusion; novack25iclr): https://presto-music.github.io/web/
* [MMGen](https://arxiv.org/abs/2409.20196) (diffusion; wei24arxiv): https://awesome-mmgen.github.io/
* [Seed-Music](https://arxiv.org/pdf/2409.09214) (diffusion+transformer; bai24arxiv): https://team.doubao.com/en/special/seed-music
* [SongCreator](https://arxiv.org/pdf/2409.06029) (diffusion; lei24arxiv): https://songcreator.github.io/
* [MSLDM](https://arxiv.org/abs/2409.06190) (diffusion; xu24arxiv): https://xzwy.github.io/MSLDMDemo/
* [Multi-Track MusicLDM](https://arxiv.org/abs/2409.02845) (diffusion; karchkhadze24arxiv): https://mt-musicldm.github.io/
* [FluxMusic](https://arxiv.org/pdf/2409.00587) (diffusion; fei24arxiv): https://github.com/feizc/FluxMusic
* [control-transfer-diffusion](https://arxiv.org/pdf/2408.00196) (diffusion; demerlé24ismir): https://nilsdem.github.io/control-transfer-diffusion/
* [AP-adapter](https://arxiv.org/abs/2407.16564) (diffusion; tsai24arxiv): https://rebrand.ly/AP-adapter
* [MusiConGen](https://arxiv.org/abs/2407.15060) (transformer; lan24arxiv): https://musicongen.github.io/musicongen_demo/
* [Stable audio Open](https://arxiv.org/abs/2407.14358) (diffusion; evans24arxiv): https://stability-ai.github.io/stable-audio-open-demo/
* [MEDIC](https://arxiv.org/abs/2407.13220) (diffusion; liu24arxiv): https://medic-zero.github.io/
* [MusicGenStyle](https://arxiv.org/pdf/2407.12563) (transformer; rouard24ismir): https://musicgenstyle.github.io/
* [MelodyFlow](https://arxiv.org/abs/2407.03648) (transformer+diffusion; lelan24arxiv): https://melodyflow.github.io/
* [MelodyLM](https://www.arxiv.org/abs/2407.02049) (transformer+diffusion; li24arxiv): https://melodylm666.github.io/
* [JASCO](https://arxiv.org/pdf/2406.10970) (flow; tal24ismir): https://pages.cs.huji.ac.il/adiyoss-lab/JASCO/
* [MusicFlow](https://openreview.net/pdf?id=kOczKjmYum) (diffusion; prajwal24icml): N/A
* [Diff-A-Riff](https://arxiv.org/abs/2406.08384) (diffusion; nistal24ismir): https://sonycslparis.github.io/diffariff-companion/
* [DITTO-2](https://arxiv.org/abs/2405.20289) (diffusion; novack24ismir): https://ditto-music.github.io/ditto2/
* [SoundCTM](https://arxiv.org/abs/2405.18503) (diffusion; saito24arxiv): N/A
* [Instruct-MusicGen](https://arxiv.org/abs/2405.18386) (transformer; zhang24arxiv): https://foul-ice-5ea.notion.site/Instruct-MusicGen-Demo-Page-Under-construction-a1e7d8d474f74df18bda9539d96687ab
* [QA-MDT](https://arxiv.org/pdf/2405.15863) (diffusion; li24arxiv): https://qa-mdt.github.io/
* [Stable audio 2](https://arxiv.org/abs/2404.10301) (diffusion; evans24ismir): https://stability-ai.github.io/stable-audio-2-demo/
* [Melodist](https://arxiv.org/abs/2404.09313) (transformer; hong24arxiv): https://text2songmelodist.github.io/Sample/
* [SMITIN](https://arxiv.org/abs/2404.02252) (transformer; koo24arxiv): https://wide-wood-512.notion.site/SMITIN-Self-Monitored-Inference-Time-INtervention-for-Generative-Music-Transformers-Demo-Page-983723e6e9ac4f008298f3c427a23241
* [Stable audio](https://arxiv.org/abs/2402.04825) (diffusion; evans24arxiv): https://stability-ai.github.io/stable-audio-demo/
* [MusicMagus](https://arxiv.org/abs/2402.06178) (diffusion; zhang24ijcai): https://wry-neighbor-173.notion.site/MusicMagus-Zero-Shot-Text-to-Music-Editing-via-Diffusion-Models-8f55a82f34944eb9a4028ca56c546d9d
* [DITTO](https://arxiv.org/abs/2401.12179) (diffusion; novack24arxiv): https://ditto-music.github.io/web/
* [MAGNeT](https://arxiv.org/abs/2401.04577) (transformer; ziv24arxiv): https://pages.cs.huji.ac.il/adiyoss-lab/MAGNeT/
* [Mustango](https://arxiv.org/abs/2311.08355) (diffusion; melechovsky24naacl): https://github.com/AMAAI-Lab/mustango
* [Music ControlNet](https://arxiv.org/abs/2311.07069) (diffusion; wu24taslp): https://musiccontrolnet.github.io/web/
* [InstrumentGen](https://arxiv.org/abs/2311.04339) (transformer; nercessian23ml4audio): https://instrumentgen.netlify.app/
* [Coco-Mulla](https://arxiv.org/pdf/2310.17162.pdf) (transformer; lin23arxiv): https://kikyo-16.github.io/coco-mulla/
* [JEN-1 Composer](https://arxiv.org/pdf/2310.19180.pdf) (diffusion; yao23arxiv): https://www.jenmusic.ai/audio-demos
* [UniAudio](https://arxiv.org/abs/2310.00704) (transformer; yang23arxiv): http://dongchaoyang.top/UniAudio_demo/
* [MusicLDM](https://arxiv.org/abs/2308.01546) (diffusion; chen23arxiv): https://musicldm.github.io/
* [InstructME](https://arxiv.org/abs/2308.14360) (diffusion; han23arxiv): https://musicedit.github.io/
* [JEN-1](https://arxiv.org/abs/2308.04729) (diffusion; li23arxiv): https://www.futureverse.com/research/jen/demos/jen1
* [MusicGen](https://arxiv.org/abs/2306.05284) (Transformer; copet23arxiv): https://ai.honu.io/papers/musicgen/
* [MeLoDy](https://arxiv.org/abs/2305.15719) (Transformer+diffusion; lam23arxiv): https://efficient-melody.github.io/
* [MSDM](https://arxiv.org/pdf/2302.02257) (diffusion; mariani24iclr): https://gladia-research-group.github.io/multi-source-diffusion-models/
* [MusicLM](https://arxiv.org/abs/2301.11325) (Transformer; agostinelli23arxiv): https://google-research.github.io/seanet/musiclm/examples/
* [Noise2Music](https://arxiv.org/abs/2302.03917) (diffusion; huang23arxiv): https://noise2music.github.io/
* [ERNIE-Music](https://arxiv.org/pdf/2302.04456.pdf) (diffusion; zhu23arxiv): N/A
* [Riffusion]() (diffusion;): https://www.riffusion.com/

### text-to-audio
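
Most entries here follow the same text-conditioned pipeline. A minimal sketch with AudioLDM 2 (listed below), assuming the `diffusers` package and the `cvssp/audioldm2` checkpoint:

```python
# pip install diffusers transformers soundfile  (assumed: a diffusers version with AudioLDM 2 support)
import torch
import soundfile as sf
from diffusers import AudioLDM2Pipeline

pipe = AudioLDM2Pipeline.from_pretrained("cvssp/audioldm2", torch_dtype=torch.float16).to("cuda")

# Text prompt in, mono waveform out.
audio = pipe(
    "water drops on a tin roof",
    num_inference_steps=200,
    audio_length_in_s=10.0,
).audios[0]

sf.write("out.wav", audio, samplerate=16000)  # AudioLDM 2 generates 16 kHz audio
```
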
* [MambaFoley](https://arxiv.org/pdf/2409.09162) (mamba; xie24arxiv): n/a
* [PicoAudio](https://arxiv.org/pdf/2407.02869) (diffusion; xie24arxiv): https://zeyuxie29.github.io/PicoAudio.github.io/
* [AudioLCM](https://arxiv.org/abs/2406.00356) (diffusion; liu24arxiv): https://audiolcm.github.io/
* [UniAudio 1.5](https://arxiv.org/pdf/2406.10056) (transformer; yang24arxiv): https://github.com/yangdongchao/LLM-Codec
* [Tango 2](https://arxiv.org/pdf/2404.09956) (diffusion; majumder24mm): https://tango2-web.github.io/
* [Baton](https://arxiv.org/abs/2402.00744) (diffusion; liao24arxiv): https://baton2024.github.io/
* [T-FOLEY](https://arxiv.org/abs/2401.09294) (diffusion; chung24icassp): https://yoonjinxd.github.io/Event-guided_FSS_Demo.github.io/
* [Audiobox](https://arxiv.org/abs/2312.15821) (diffusion; vyas23arxiv): https://audiobox.metademolab.com/
* [Amphion](https://arxiv.org/abs/2312.09911) (zhang23arxiv): https://github.com/open-mmlab/Amphion
* [VoiceLDM](https://arxiv.org/abs/2309.13664) (diffusion; lee23arxiv): https://voiceldm.github.io/
* [AudioLDM 2](https://arxiv.org/abs/2308.05734) (diffusion; liu23arxiv): https://audioldm.github.io/audioldm2/
* [WavJourney](https://arxiv.org/abs/2307.14335) (; liu23arxiv): https://audio-agi.github.io/WavJourney_demopage/
* [CLIPSynth](https://sightsound.org/papers/2023/Dong_CLIPSynth_Learning_Text-to-audio_Synthesis_from_Videos.pdf) (diffusion; dong23cvprw): https://salu133445.github.io/clipsynth/
* [CLIPSonic](https://arxiv.org/abs/2306.09635) (diffusion; dong23waspaa): https://salu133445.github.io/clipsonic/
* [SoundStorm](https://arxiv.org/abs/2305.09636) (Transformer; borsos23arxiv): https://google-research.github.io/seanet/soundstorm/examples/
* [AUDIT](https://arxiv.org/abs/2304.00830) (diffusion; wang23arxiv): https://audit-demo.github.io/
* [VALL-E](https://arxiv.org/abs/2301.02111) (Transformer; wang23arxiv): https://www.microsoft.com/en-us/research/project/vall-e/ (for speech)
* [multi-source-diffusion-models](https://arxiv.org/abs/2302.02257) (diffusion; 23arxiv): https://gladia-research-group.github.io/multi-source-diffusion-models/
* [Make-An-Audio](https://text-to-audio.github.io/paper.pdf) (diffusion; huang23arxiv): https://text-to-audio.github.io/ (for general sounds)
* [AudioLDM](https://arxiv.org/pdf/2301.12503.pdf) (diffusion; liu23arxiv): https://audioldm.github.io/ (for general sounds)
* [AudioGen](https://arxiv.org/abs/2209.15352) (Transformer; kreuk23iclr): https://felixkreuk.github.io/audiogen/ (for general sounds)
* [AudioLM](https://arxiv.org/abs/2209.03143) (Transformer; borsos23taslp): https://google-research.github.io/seanet/audiolm/examples/ (for general sounds)

### text-to-midi
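
Unlike the audio-domain systems above, these models emit symbolic MIDI. For readers new to the format, a toy sketch of building and saving a MIDI file with `pretty_midi` (purely illustrative; not the API of any listed model):

```python
# pip install pretty_midi  (assumed; illustrative only)
import pretty_midi

pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)  # General MIDI program 0 = Acoustic Grand Piano

# A C-major arpeggio, half a second per note.
for i, pitch in enumerate([60, 64, 67, 72]):
    start = 0.5 * i
    piano.notes.append(pretty_midi.Note(velocity=100, pitch=pitch, start=start, end=start + 0.5))

pm.instruments.append(piano)
pm.write("melody.mid")
```
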
* [XMusic](https://arxiv.org/abs/2501.08809) (Transformer; tian25tmm): https://xmusic-project.github.io/
* [text2midi](https://arxiv.org/pdf/2412.16526) (Transformer; bhandari25aaai): https://huggingface.co/spaces/amaai-lab/text2midi
* [MuseCoco](https://arxiv.org/abs/2306.00110) (Transformer; lu23arxiv): https://ai-muzic.github.io/musecoco/

### audio-domain music generation
* [VampNet](https://arxiv.org/abs/2307.04686) (transformer; garcia23ismir): https://hugo-does-things.notion.site/VampNet-Music-Generation-via-Masked-Acoustic-Token-Modeling-e37aabd0d5f1493aa42c5711d0764b33
* [fast JukeBox](https://www.mdpi.com/2076-3417/13/9/5630) (Jukebox + knowledge distillation; pezzat-morales23mdpi): https://soundcloud.com/michel-pezzat-615988723
* [DAG](https://arxiv.org/abs/2210.14661) (diffusion; pascual23icassp): https://diffusionaudiosynthesis.github.io/
* [musika!](https://arxiv.org/pdf/2208.08706.pdf) (GAN; pasini22ismir): https://huggingface.co/spaces/marcop/musika
* [Jukebox](https://arxiv.org/abs/2005.00341) (VQVAE+Transformer; dhariwal20arxiv): https://openai.com/blog/jukebox/
* [UNAGAN](https://arxiv.org/abs/2005.08526) (GAN; liu20arxiv): https://github.com/ciaua/unagan
* [dadabots](https://arxiv.org/abs/1811.06633) (sampleRNN; carr18mume): http://dadabots.com/music.php

### given singing, generate accompaniments
* [Llambada](https://arxiv.org/pdf/2411.01661) (transformer; trinh24arxiv): https://songgen-ai.github.io/llambada-demo/
* [FastSAG](https://arxiv.org/pdf/2405.07682) (diffusion; chen24arxiv): https://fastsag.github.io/
* [SingSong](https://arxiv.org/abs/2301.12662) (VQVAE+Transformer; donahue23arxiv): https://storage.googleapis.com/sing-song/index.html

### given drumless audio, generate drum accompaniments
* [JukeDrummer](https://arxiv.org/pdf/2210.06007.pdf) (VQVAE+Transformer; wu22ismir): https://legoodmanner.github.io/jukedrummer-demo/

### audio-domain singing voice synthesis (SVS)
* [TechSinger](https://arxiv.org/abs/2502.12572) (flow; dai25aaai): https://tech-singer.github.io/
* [Everyone-Can-Sing](https://arxiv.org/abs/2501.13870) (diffusion; dai25arxiv): https://everyone-can-sing.github.io/
* [ExpressiveSinger](https://dl.acm.org/doi/pdf/10.1145/3664647.3681642) (diffusion; dai24mm): https://expressivesinger.github.io/ExpressiveSinger/
* [InstructSing](https://arxiv.org/abs/2409.06330) (ddsp; zeng24slt): https://wavelandspeech.github.io/instructsing/
* [Freestyler](https://www.arxiv.org/abs/2408.15474) (transformer; ning24arxiv): https://nzqian.github.io/Freestyler/
* [Prompt-Singer](https://arxiv.org/pdf/2403.11780.pdf) (transformer; wang24naacl): https://prompt-singer.github.io/
* [StyleSinger](https://arxiv.org/abs/2312.10741) (diffusion; zhang24aaai): https://stylesinger.github.io/
* [BiSinger](https://arxiv.org/abs/2309.14089) (transformer; zhou23asru): https://bisinger-svs.github.io/
* [HiddenSinger](https://arxiv.org/abs/2306.06814) (diffusion; hwang23arxiv): https://jisang93.github.io/hiddensinger-demo/
* [Make-A-Voice](https://arxiv.org/abs/2305.19269) (transformer; huang23arxiv): https://make-a-voice.github.io/
* [RMSSinger](https://arxiv.org/abs/2305.10686) (diffusion; he23aclf): https://rmssinger.github.io/
* [NaturalSpeech 2](https://arxiv.org/abs/2304.09116) (diffusion; shen23arxiv): https://speechresearch.github.io/naturalspeech2/
* [NANSY++](https://arxiv.org/abs/2211.09407) (Transformer; choi23iclr): https://bald-lifeboat-9af.notion.site/Demo-Page-For-NANSY-67d92406f62b4630906282117c7f0c39
* [UniSyn](https://arxiv.org/abs/2212.01546) (; lei23aaai): https://leiyi420.github.io/UniSyn/
* [VISinger 2](https://arxiv.org/abs/2211.02903) (zhang22arxiv): https://zhangyongmao.github.io/VISinger2/
* [xiaoicesing 2](https://arxiv.org/abs/2210.14666) (Transformer+GAN; wang22arxiv): https://wavelandspeech.github.io/xiaoice2/
* [WeSinger 2](https://arxiv.org/pdf/2207.01886.pdf) (Transformer+GAN; zhang22arxiv): https://zzw922cn.github.io/wesinger2/
* [U-Singer](https://arxiv.org/pdf/2203.00931.pdf) (Transformer; kim22arxiv): https://u-singer.github.io/
* [Singing-Tacotron](https://arxiv.org/pdf/2202.07907.pdf) (Transformer; wang22arxiv): https://hairuo55.github.io/SingingTacotron/
* [KaraSinger](https://arxiv.org/abs/2110.04005) (GRU/Transformer; liao22icassp): https://jerrygood0703.github.io/KaraSinger/
* [VISinger](https://arxiv.org/abs/2110.08813) (flow; zhang2): https://zhangyongmao.github.io/VISinger/
* [MLP singer](https://arxiv.org/abs/2106.07886) (mixer blocks; tae21arxiv): https://github.com/neosapience/mlp-singer
* [LiteSing](https://ieeexplore.ieee.org/document/9414043) (wavenet; zhuang21icassp): https://auzxb.github.io/LiteSing/
* [DiffSinger](https://arxiv.org/abs/2105.02446) (diffusion; liu22aaai)[no duration modeling]: https://diffsinger.github.io/
* [HiFiSinger](https://arxiv.org/abs/2009.01776) (Transformer; chen20arxiv): https://speechresearch.github.io/hifisinger/
* [DeepSinger](https://arxiv.org/abs/2007.04590) (Transformer; ren20kdd): https://speechresearch.github.io/deepsinger/
* [xiaoice-multi-singer](https://arxiv.org/pdf/2006.10317.pdf): https://jiewu-demo.github.io/INTERSPEECH2020/
* [xiaoicesing](https://arxiv.org/pdf/2006.06261.pdf): https://xiaoicesing.github.io/
* [bytesing](https://arxiv.org/pdf/2004.11012.pdf): https://bytesings.github.io/
* [mellotron](https://arxiv.org/abs/1910.11997): https://nv-adlr.github.io/Mellotron
* [lee's model](https://arxiv.org/pdf/1908.01919.pdf) (lee19arxiv): http://ksinging.mystrikingly.com/
* http://home.ustc.edu.cn/~yiyh/interspeech2019/

### audio-domain singing style transfer / singing voice conversion (SVC)
* [Everyone-Can-Sing](https://arxiv.org/abs/2501.13870) (diffusion; dai25arxiv): https://everyone-can-sing.github.io/
* [ROSVC](https://arxiv.org/abs/2210.11096) (; takahashi22arxiv): https://t-naoya.github.io/rosvc/
* [DiffSVC](https://arxiv.org/abs/2105.13871) (diffusion; liu21asru): https://liusongxiang.github.io/diffsvc/
* [FastSVC](https://arxiv.org/abs/2011.05731) (CNN; liu21icme): https://nobody996.github.io/FastSVC/
* [SoftVC VITS]() (): https://github.com/svc-develop-team/so-vits-svc
* [Assem-VC](https://neuripscreativityworkshop.github.io/2021/accepted/ncw_58.pdf) (; kim21nipsw): https://mindslab-ai.github.io/assem-vc/singer/
* [iZotope-SVC](https://program.ismir2020.net/poster_1-08.html) (conv-encoder/decoder; nercessian20ismir): https://sites.google.com/izotope.com/ismir2020-audio-demo
* [VAW-GAN](https://arxiv.org/pdf/2008.03992.pdf) (GAN; lu20arxiv): https://kunzhou9646.github.io/singvaw-gan/
* [polyak20interspeech](https://arxiv.org/pdf/2008.02830.pdf) (GAN; polyak20interspeech): https://singing-conversion.github.io/
* [SINGAN](https://www.researchgate.net/publication/336058156_SINGAN_Singing_Voice_Conversion_with_Generative_Adversarial_Networks) (GAN; sisman19apsipa): N/A
* [MSVC-GAN]() (GAN): https://hujinsen.github.io/
* https://mtg.github.io/singing-synthesis-demos/voice-cloning/
* https://enk100.github.io/Unsupervised_Singing_Voice_Conversion/
* [Yong&Nam](https://seyong92.github.io/publications/yong_ICASSP_2018.pdf) (DSP; yong18icassp): https://seyong92.github.io/singing-expression-transfer/
* [cybegan](https://arxiv.org/pdf/1807.02254.pdf) (CNN+GAN; wu18faim): http://mirlab.org/users/haley.wu/cybegan/

### audio-domain speech-to-singing conversion
* [AlignSTS](https://arxiv.org/abs/2305.04476) (encoder/adaptor/aligner/diff-decoder; li23facl): https://alignsts.github.io/
* [speech2sing2](https://arxiv.org/pdf/2005.13835.pdf) (GAN; wu20interspeech): https://ericwudayi.github.io/Speech2Singing-DEMO/
* [speech2sing](https://arxiv.org/pdf/2002.06595.pdf) (encoder/decoder; parekh20icassp): https://jayneelparekh.github.io/icassp20/

### audio-domain singing correction
* [deep-autotuner](https://arxiv.org/abs/1902.00956) (CGRU; wagner19icassp): http://homes.sice.indiana.edu/scwager/deepautotuner.html

### audio-domain style transfer (general)
* [WaveTransfer](https://arxiv.org/abs/2409.15321) (diffusion; baoueb24mlsp): https://wavetransfer.github.io/
* [MusicTI](https://arxiv.org/abs/2402.13763) (diffusion; li24aaai): https://lsfhuihuiff.github.io/MusicTI/
* [DiffTransfer](https://ismir2023program.ismir.net/poster_197.html) (diffusion; comanducci23ismir): https://lucacoma.github.io/DiffTransfer/
* [RAVE-Latent Diffusion]() (diffusion;): https://github.com/moiseshorta/RAVE-Latent-Diffusion
* [RAVE](https://arxiv.org/abs/2111.05011) (VAE;caillon21arxiv): https://anonymous84654.github.io/RAVE_anonymous/; https://github.com/acids-ircam/RAVE
* [VAE-GAN](https://arxiv.org/abs/2109.02096) (VAE-GAN; bonnici22ijcnn): https://github.com/RussellSB/tt-vae-gan
* [VQ-VAE](https://arxiv.org/pdf/2102.05749.pdf) (VQ-VAE; cifka21icassp): https://adasp.telecom-paris.fr/rc/demos_companion-pages/cifka-ss-vq-vae/
* [MelGAN-VC](https://arxiv.org/pdf/1910.03713.pdf) (GAN; pasini19arxiv): https://www.youtube.com/watch?v=3BN577LK62Y&feature=youtu.be
* [RaGAN](https://www.aaai.org/Papers/AAAI/2019/AAAI-LuC.2259.pdf) (GAN; lu19aaai): https://github.com/ChienYuLu/Play-As-You-Like-Timbre-Enhanced-Multi-modal-Music-Style-Transfer
* [TimbreTron](http://www.cs.toronto.edu/~huang/TimbreTron/pdf/TImbreTron_arxiv.pdf) (GAN; huang19iclr): https://www.cs.toronto.edu/~huang/TimbreTron/samples_page.html
* [string2woodwind](https://minjekim.com/papers/icassp2017_swager.pdf) (DSP; wagner17icassp): http://homes.sice.indiana.edu/scwager/css.html

### TTS
* [NaturalSpeech 3](https://arxiv.org/abs/2403.03100) (diffusion; ju24arxiv): https://speechresearch.github.io/naturalspeech3/
* [VITS](https://arxiv.org/abs/2106.06103) (transformer+flow+GAN; kim21icml): https://github.com/jaywalnut310/vits

### speech voice conversion / voice cloning
* [Applio]() (): https://github.com/IAHispano/Applio

### vocoder (general)
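
For context, the classical DSP baseline that these neural vocoders replace is Griffin-Lim phase recovery from a (mel) spectrogram; a minimal sketch with `torchaudio` (assumed package; parameters are illustrative):

```python
import torchaudio

# Mel spectrogram of the kind most neural vocoders below take as input.
wav, sr = torchaudio.load("input.wav")
mel = torchaudio.transforms.MelSpectrogram(sample_rate=sr, n_fft=1024, hop_length=256, n_mels=80)(wav)

# Classical baseline: invert mel back to a linear magnitude spectrogram,
# then recover phase with Griffin-Lim. Neural vocoders do this far better.
inv_mel = torchaudio.transforms.InverseMelScale(n_stft=1024 // 2 + 1, n_mels=80, sample_rate=sr)(mel)
recon = torchaudio.transforms.GriffinLim(n_fft=1024, hop_length=256)(inv_mel)
torchaudio.save("griffin_lim.wav", recon, sr)
```
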
* [MusicHiFi](https://arxiv.org/abs/2403.10493) (GAN+diffusion; zhu24arxiv): https://musichifi.github.io/web/
* [BigVGAN](https://arxiv.org/abs/2206.04658) (GAN; lee23iclr): https://bigvgan-demo.github.io/
* [HifiGAN](https://arxiv.org/abs/2010.05646) (GAN; kong20neurips): https://jik876.github.io/hifi-gan-demo/
* [DiffWave](https://arxiv.org/abs/2009.09761) (diffusion; kong21iclr): https://diffwave-demo.github.io/
* [Parallel WaveGAN](https://arxiv.org/abs/1910.11480) (GAN; yamamoto20icassp): https://r9y9.github.io/projects/pwg/
* [MelGAN](https://arxiv.org/abs/1910.06711) (GAN; kumar19neurips): https://melgan-neurips.github.io/

### vocoder (singing)
* [GOLF](https://arxiv.org/abs/2306.17252) (DDSP; yu23ismir): https://yoyololicon.github.io/golf-demo/
* [DSPGAN](https://arxiv.org/abs/2211.01087) (GAN; song23icassp): https://kunsung.github.io/DSPGAN/
* [Sifi-GAN](https://arxiv.org/abs/2210.15533) (GAN; yoneyama23icassp): https://chomeyama.github.io/SiFiGAN-Demo/
* [SawSing](https://arxiv.org/pdf/2208.04756.pdf) (DDSP; wu22ismir): https://ddspvocoder.github.io/ismir-demo/
* [Multi-Singer](https://dl.acm.org/doi/abs/10.1145/3474085.3475437) (wavenet; huang21mm): https://multi-singer.github.io/
* [SingGAN](https://arxiv.org/pdf/2110.07468.pdf) (GAN; chen21arxiv): https://singgan.github.io/

### audio tokenizer
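
These tokenizers turn waveforms into discrete codes that the transformer-based generators above model autoregressively. A minimal encoding sketch with EnCodec (listed below), assuming the `encodec` package:

```python
# pip install encodec  (assumed; see the EnCodec entry below)
import torch
import torchaudio
from encodec import EncodecModel
from encodec.utils import convert_audio

model = EncodecModel.encodec_model_24khz()
model.set_target_bandwidth(6.0)  # kbps; determines how many codebooks are used

wav, sr = torchaudio.load("input.wav")
wav = convert_audio(wav, sr, model.sample_rate, model.channels).unsqueeze(0)

with torch.no_grad():
    frames = model.encode(wav)  # list of (codes, scale) pairs, one per ~1 s chunk
codes = torch.cat([c for c, _ in frames], dim=-1)  # [batch, n_codebooks, time]
```
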
* [Improved RVQGAN](https://arxiv.org/abs/2306.06546) (VQ; kumar23arxiv): https://descript.notion.site/Descript-Audio-Codec-11389fce0ce2419891d6591a68f814d5
* [HiFi-Codec](https://arxiv.org/abs/2305.02765) (VQ; yang23arxiv): https://github.com/yangdongchao/AcademiCodec
* [EnCodec](https://arxiv.org/abs/2210.13438) (VQ; défossez22arxiv): https://github.com/facebookresearch/encodec
* [SoundStream](https://arxiv.org/abs/2107.03312) (VQ; zeghidour21arxiv): https://google-research.github.io/seanet/soundstream/examples/

### audio super-resolution
* [AudioSR](https://arxiv.org/abs/2309.07314) (diffusion; liu23arxiv): https://audioldm.github.io/audiosr/

### audio-domain loop generation
* [PJLoopGAN](https://arxiv.org/pdf/2209.01751.pdf) (GAN; yeh22ismir): https://arthurddd.github.io/PjLoopGAN/
* [LoopGen](https://archives.ismir.net/ismir2021/paper/000038.pdf) (GAN; hung21ismir): https://loopgen.github.io/

### given score, generate musical audio (performance): Piano only
* [TTS-based MIDI-to-audio](https://arxiv.org/abs/2211.13868) (Transformer-TTS; shi23icassp): https://nii-yamagishilab.github.io/sample-midi-to-audio/
* [Wave2Midi2Wave](https://arxiv.org/abs/1810.12247) (transformer+wavenet; hawthorne19iclr): https://magenta.tensorflow.org/maestro-wave2midi2wave
* [BasisMixer](https://wp.nyu.edu/ismir2016/wp-content/uploads/sites/2294/2016/08/cancino-basis.pdf) (RNN+FFNN; chacon16ismir-lbd): https://www.youtube.com/watch?v=zdU8C6Su3TI

### given score, generate musical audio (performance): Not limited to Piano [a.k.a. MIDI-to-audio]
* [TokenSynth](https://arxiv.org/pdf/2502.08939) (Transformer; kim25icassp): https://kyungsukim.notion.site/TokenSynth-A-Token-based-Neural-Synthesizer-for-Instrument-Cloning-and-Text-to-Instrument-2c4f5c0850dc4006971b33ad0e580842
* [M2A](https://arxiv.org/pdf/2501.10222) (Transformer; tang25icassp): https://tangjjbetsy.github.io/S2A/
* [Deep Performer](https://arxiv.org/pdf/2202.06034.pdf) (Transformer; dong22icassp): https://salu133445.github.io/deepperformer/
* [PerformanceNet](https://arxiv.org/abs/1811.04357) (CNN+GAN; wang19aaai): https://github.com/bwang514/PerformanceNet
* [Conditioned Wavenet](https://archives.ismir.net/ismir2018/paper/000192.pdf) (Wavenet; manzelli18ismir): http://people.bu.edu/bkulis/projects/music/index.html

### audio/timbre synthesis
* [gen-inst]() (transformer; nercessian24ismir): https://gen-inst.netlify.app/
* [GANStrument](https://arxiv.org/abs/2211.05385) (narita22arxiv): https://ganstrument.github.io/ganstrument-demo/
* [NEWT](https://archives.ismir.net/ismir2021/paper/000031.pdf) (DDSP; hayes21ismir): https://benhayes.net/projects/nws/
* [CRASH](https://archives.ismir.net/ismir2021/paper/000072.pdf) (diffusion; rouard21ismir): https://crash-diffusion.github.io/crash/
* [DarkGAN](https://archives.ismir.net/ismir2021/paper/000060.pdf) (GAN; nistal21ismir): https://an-1673.github.io/DarkGAN.io/
* [MP3net](https://arxiv.org/abs/2101.04785) (GAN; broek21arxiv): https://korneelvdbroek.github.io/mp3net/
* [Michelashvili](https://program.ismir2020.net/poster_6-19.html) (dsp-inspired; michelashvili20iclr): https://github.com/mosheman5/timbre_painting
* [GAAE](https://arxiv.org/abs/2006.00877) (GAN+AAE; haque20arxiv): https://drive.google.com/drive/folders/1et_BuZ_XDMrdsYzZDprLvEpmmuZrJ7jk
* [MANNe](https://arxiv.org/abs/2001.11296) (): https://github.com/JTColonel/manne
* [DDSP](https://openreview.net/forum?id=B1x1ma4tDr) (dsp-inspired; lamtharn20iclr): https://storage.googleapis.com/ddsp/index.html
* [MelNet](https://arxiv.org/pdf/1906.01083.pdf) (auto-regressive; vasquez19arxiv): https://audio-samples.github.io/
* [AdVoc](https://arxiv.org/abs/1904.07944) (; neekhara19arxiv): http://chrisdonahue.com/advoc_examples/
* [GANSynth](https://arxiv.org/abs/1902.08710) (CNN+GAN; engel19iclr): https://magenta.tensorflow.org/gansynth
* [SynthNet](https://www.ijcai.org/proceedings/2019/467) (schimbinschi19ijcai): https://www.dropbox.com/sh/hkp3o5xjyexp2x0/AADvrfXTbHBXs9W7GN6Yeorua?dl=0
* [TiFGAN](https://arxiv.org/abs/1902.04072) (CNN+GAN; marafioti19arxiv): https://tifgan.github.io/
* [SING](https://arxiv.org/abs/1810.09785) (defossez18nips): https://research.fb.com/wp-content/themes/fb-research/research/sing-paper/
* [WaveGAN](https://arxiv.org/abs/1802.04208) (CNN+GAN; donahue19iclr): https://github.com/chrisdonahue/wavegan
* [WaveNet autoencoder](https://arxiv.org/abs/1704.01279) (WaveNet; engel17arxiv): https://magenta.tensorflow.org/nsynth

### image-to-music/audio
* [Art2Mus](https://arxiv.org/abs/2410.04906) (diffusion; rinaldi24ai4va): https://drive.google.com/drive/u/1/folders/1dHBxLWnyBqhVMJgUkTk0hKnFbGDVhw__
* [MeLFusion](https://openaccess.thecvf.com/content/CVPR2024/papers/Chowdhury_MeLFusion_Synthesizing_Music_from_Image_and_Language_Cues_using_Diffusion_CVPR_2024_paper.pdf) (diffusion; chowdhury24cvpr): https://schowdhury671.github.io/melfusion_cvpr2024/
* [Vis2Mus](https://arxiv.org/abs/2211.05543) (encoder/decoder; zhang22arxiv): https://github.com/ldzhangyx/vis2mus
* [ConchShell](https://arxiv.org/abs/2210.05076) (encoder/decoder; fan22arxiv): n/a

### video-to-music/audio
* [GVMGen](https://arxiv.org/pdf/2501.09972) (diffusion; zuo25aaai): https://chouliuzuo.github.io/GVMGen/
* [SONIQUE](https://arxiv.org/pdf/2410.03879) (diffusion; zhang24arxiv): https://github.com/zxxwxyyy/sonique
* [Herrmann-1](https://ieeexplore.ieee.org/document/10447950) (LLM+transformer; haseeb24icassp): https://audiomatic-research.github.io/herrmann-1/
* [Diff-BGM](https://openaccess.thecvf.com/content/CVPR2024/papers/Li_Diff-BGM_A_Diffusion_Model_for_Video_Background_Music_Generation_CVPR_2024_paper.pdf) (diffusion; li24cvpr): https://github.com/sizhelee/Diff-BGM
* [Frieren](https://arxiv.org/abs/2406.00320) (diffusion; wang24arxiv): https://frieren-v2a.github.io/
* [Video2Music](https://arxiv.org/abs/2311.00968) (transformer; kang23arxiv): https://github.com/AMAAI-Lab/Video2Music
* [LORIS](https://arxiv.org/pdf/2305.01319) (diffusion; yu23icml): https://justinyuu.github.io/LORIS/

### interactive multi-track music composition
* [Jamming with Yating](http://mac.citi.sinica.edu.tw/~yang/pub/ailabs19ismirlbd_2.pdf) (RNN; hsiao19ismir-lbd): https://www.youtube.com/watch?v=9ZIJrr6lmHg

### interactive piano composition
* [Piano Genie](https://nips2018creativity.github.io/doc/pianogenie.pdf) (RNN; donahue18nips-creativity): https://piano-genie.glitch.me/
* [AI duet](https://nips.cc/Conferences/2016/Schedule?showEvent=6307) (RNN; roberts16nips-demo): https://experiments.withgoogle.com/ai/ai-duet/view/

### interactive monoaural music composition
* [musicalspeech]() (Transformer; d'Eon20nips-demo): https://jasondeon.github.io/musicalSpeech/

### compose melody
* [Yin-Yang](https://arxiv.org/pdf/2501.17759) (transformer; bhandari25evomusart): https://github.com/keshavbhandari/yinyang
* [MelodyT5](https://arxiv.org/abs/2407.02277) (transformer; wu24ismir): https://github.com/sanderwood/melodyt5
* [MelodyGLM](https://arxiv.org/pdf/2309.10738.pdf) (transformer; wu23arxiv): https://nextlab-zju.github.io/melodyglm/
* [TunesFormer](https://arxiv.org/abs/2301.02884) (transformer; wu23arxiv): https://github.com/sander-wood/tunesformer
* [MeloForm](https://arxiv.org/pdf/2208.14345.pdf) (transformer; lu22arxiv): https://ai-muzic.github.io/meloform/
* [parkR](https://transactions.ismir.net/articles/10.5334/tismir.87/) (markov; frieler22tismir): https://github.com/klausfrieler/parkR
* [xai-lsr](https://xai4debugging.github.io/files/papers/exploring_xai_for_the_arts_exp.pdf) (VAE; bryankinns21nipsw): https://xai-lsr-ui.vercel.app/
* [Trans-LSTM](https://archives.ismir.net/ismir2021/paper/000017.pdf) (Transformer+LSTM; dai21ismir): N/A...
* [diffusion](https://archives.ismir.net/ismir2021/paper/000058.pdf) (diffusion+musicVAE; mittal21ismir): https://storage.googleapis.com/magentadata/papers/symbolic-music-diffusion/index.html
* [MELONS](https://arxiv.org/pdf/2110.05020.pdf) (Transformer; zhou21arxiv): https://yiathena.github.io/MELONS/
* [Sketchnet](https://program.ismir2020.net/poster_1-09.html) (VAE+GRU; chen20ismir): https://github.com/RetroCirce/Music-SketchNet
* [SSMGAN](https://drive.google.com/file/d/1Ol4Ym3KqUkjcfL_Yeu0It3BP7NFS2mor/view) (VAE+LSTM+GAN; jhamtani19ml4md): https://drive.google.com/drive/folders/1TlOrbYAm7vGUvRrxa-uiH17bP-4N4e9z
* [StructureNet](http://ismir2018.ircam.fr/doc/pdfs/126_Paper.pdf) (LSTM; medeot18ismir): https://www.dropbox.com/sh/yxkxlnzi913ba50/AAA_mDbhdmaGJC9qj0zSlqCea?dl=0
* [MusicVAE](https://arxiv.org/abs/1803.05428) (LSTM+VAE; roberts18icml): https://magenta.tensorflow.org/music-vae
* [MidiNet](https://arxiv.org/abs/1703.10847) (CNN+GAN; yang17ismir): https://richardyang40148.github.io/TheBlog/midinet_arxiv_demo.html
* [C-RNN-GAN](https://mogren.one/publications/2016/c-rnn-gan/mogren2016crnngan.pdf) (LSTM+GAN; mogren16cml): http://mogren.one/publications/2016/c-rnn-gan/
* [folkRNN](https://github.com/IraKorshunova/folk-rnn) (LSTM): https://folkrnn.org/

### compose single-track piano music
* [RenderBox](https://arxiv.org/pdf/2502.07711) (transformer; zhang25arxiv): https://renderbox-page.vercel.app/
* [ImprovNet](https://arxiv.org/pdf/2502.04522) (transformer; bhandari25arxiv): https://keshavbhandari.github.io/portfolio/improvnet.html
* [MusicMamba](https://arxiv.org/abs/2409.02421) (mamba; chen25icassp): https://moersxm.github.io/MusicMamba_Demo/
* [EMO-Disentanger](https://arxiv.org/abs/2407.20955) (transformer; huang24ismir): https://emo-disentanger.github.io/
* [MuseBarControl](https://arxiv.org/pdf/2407.04331) (transformer; shu24arxiv): https://ganperf.github.io/musebarcontrol.github.io/musebarcontrol/
* [WholeSong](https://openreview.net/forum?id=sn7CYWyavh&noteId=3X6BSBDIPB) (diffusion; 24iclr): https://wholesonggen.github.io/
* [MGM](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10269036) (transformer; 24tmm): https://github.com/hu-music/MGM
* [Polyffusion](https://arxiv.org/abs/2307.10304) (diffusion; min23ismir): https://polyffusion.github.io/
* [EmoGen](https://arxiv.org/abs/2307.01229) (Transformer; kang23arxiv): https://ai-muzic.github.io/emogen/
* [Compose & Embellish](https://arxiv.org/abs/2209.08212) (Transformer; wu22arxiv): https://drive.google.com/drive/folders/1Y7HfExAz3PpPbFl0OnccxYDNF1KZUP-3
* [Theme Transformer](https://arxiv.org/abs/2111.04093) (Transformer; shih21arxiv): https://atosystem.github.io/ThemeTransformer/
* [EMOPIA](https://archives.ismir.net/ismir2021/paper/000039.pdf) (Transformer; hung21ismir): https://annahung31.github.io/EMOPIA/
* [dadagp](https://archives.ismir.net/ismir2021/paper/000076.pdf) (Transformer; sarmento21ismir): https://drive.google.com/drive/folders/1USNH8olG9uy6vodslM3iXInBT725zult
* [CP Transformer](https://arxiv.org/abs/2101.02402) (Transformer; hsiao21aaai): https://ailabs.tw/human-interaction/compound-word-transformer-generate-pop-piano-music-of-full-song-length/
* [PIANOTREE VAE](https://arxiv.org/abs/2008.07118) (VAE+GRU; wang20ismir): https://github.com/ZZWaang/PianoTree-VAE
* [Guitar Transformer](https://arxiv.org/abs/2008.01431) (Transformer; chen20ismir): https://ss12f32v.github.io/Guitar-Transformer-Demo/
* [Pop Music Transformer](https://arxiv.org/abs/2002.00212) (Transformer; huang20mm): https://github.com/YatingMusic/remi
* [Conditional Music Transformer](https://arxiv.org/abs/1912.05537) (Transformer; choi19arxiv): https://storage.googleapis.com/magentadata/papers/music-transformer-autoencoder/index.html; and https://magenta.tensorflow.org/transformer-autoencoder
* [PopRNN](http://mac.citi.sinica.edu.tw/~yang/pub/ailabs19ismirlbd_1.pdf) (RNN; yeh19ismir-lbd): https://soundcloud.com/yating_ai/sets/ismir-2019-submission/
* [VGMIDI](http://www.lucasnferreira.com/papers/2019/ismir-learning.pdf) (LSTM; ferreira19ismir): https://github.com/lucasnfe/music-sentneuron
* [Amadeus](https://arxiv.org/pdf/1902.01973.pdf) (LSTM+RL; kumar19arxiv): https://goo.gl/ogVMSq
* [Modularized VAE](https://arxiv.org/pdf/1811.00162.pdf) (GRU+VAE; wang19icassp): https://github.com/MiuLab/MVAE_Music
* [BachProp](https://arxiv.org/abs/1812.06669) (GRU; colombo18arxiv): https://sites.google.com/view/bachprop
* [Music Transformer](https://arxiv.org/abs/1809.04281) (Transformer; huang19iclr): https://magenta.tensorflow.org/music-transformer

### Rearrangement (e.g., pop2piano)
* [PiCoGen2](https://arxiv.org/abs/2408.01551) (transformer; tan24ismir): https://tanchihpin0517.github.io/PiCoGen/
* [PiCoGen](https://arxiv.org/abs/2407.20883) (transformer; tan24icmr): https://tanchihpin0517.github.io/PiCoGen/
* [Pop2Piano](https://arxiv.org/abs/2211.00895) (transformer; choi23icassp): https://sweetcocoa.github.io/pop2piano_samples/
* [audio2midi](https://arxiv.org/abs/2112.15110) (GRU; wang21arxiv): https://github.com/ZZWaang/audio2midi
* [InverseMV](https://arxiv.org/abs/2112.15320) (GRU; lin21arxiv): https://github.com/linchintung/VMT

### compose single-track polyphonic music by combining existing ones
* [CollageNet](https://archives.ismir.net/ismir2021/paper/000098.pdf) (VAE; wuerkaixi21ismir): https://github.com/urkax/CollageNet

### compose multi-track music
* [NotaGen](https://arxiv.org/abs/2502.18008) (transformer; wang25arxiv): https://electricalexis.github.io/notagen-demo/
* [MIDI-GPT](https://arxiv.org/abs/2501.17011) (transformer; pasquier25aaai): https://www.metacreation.net/projects/midi-gpt
* [Cadenza](https://arxiv.org/abs/2410.02060) (transformer; lenz24ismir): https://lemo123.notion.site/Cadenza-A-Generative-Framework-for-Expressive-Ideas-Variations-7028ad6ac0ed41ac814b44928261cb68
* [SymPAC](https://arxiv.org/abs/2409.03055) (transformer; chen24ismir): n/a
* [MMT-BERT](https://arxiv.org/abs/2409.00919) (transformer; zhu24ismir): n/a
* [Nested Music Transformer](https://arxiv.org/abs/2408.01180) (transformer; ryu24ismir): https://github.com/JudeJiwoo/nmt
* [MET]() (transformer; ): https://github.com/SkyTNT/midi-model
* [MuPT](https://arxiv.org/abs/2404.06393) (transformer; qu25iclr): https://map-mupt.github.io/
* [MMT-GI](https://arxiv.org/abs/2311.12257) (transformer; xu23arxiv): https://goatlazy.github.io/MUSICAI/
* [MorpheuS](https://arxiv.org/abs/1812.04832): https://dorienherremans.com/morpheus
* [Anticipatory Music Transformer](https://arxiv.org/abs/2306.08620) (; thickstun23arxiv): https://crfm.stanford.edu/2023/06/16/anticipatory-music-transformer.html
* [SCHmUBERT](https://arxiv.org/abs/2305.09489) (diffusion; plasser23ijcai): https://github.com/plassma/symbolic-music-discrete-diffusion
* [DiffuseRoll](https://arxiv.org/abs/2303.07794) (diffusion; wang23arxiv): n/a
* [Museformer](https://arxiv.org/abs/2210.10349) (Transformer; yu22neurips): https://ai-muzic.github.io/museformer/
* [SymphonyNet](https://arxiv.org/pdf/2205.05448.pdf) (Transformer; liu22ismir): https://symphonynet.github.io/
* [CMT](https://arxiv.org/abs/2111.08380) (Transformer; di21mm): https://wzk1015.github.io/cmt/
* [CONLON](https://program.ismir2020.net/poster_6-14.html) (GAN; angioloni20ismir): https://paolo-f.github.io/CONLON/
* [MMM](https://arxiv.org/pdf/2008.06048.pdf) (Transformer; ens20arxiv): https://jeffreyjohnens.github.io/MMM/
* [MahlerNet](http://www.mahlernet.se/files/SMC2019.pdf) (RNN+VAE; lousseief19smc): https://github.com/fast-reflexes/MahlerNet
* [Measure-by-Measure](https://openreview.net/forum?id=Hklk6xrYPB) (RNN): https://sites.google.com/view/pjgbjzom
* [JazzRNN](http://mac.citi.sinica.edu.tw/~yang/pub/ailabs19ismirlbd_1.pdf) (RNN; yeh19ismir-lbd): https://soundcloud.com/yating_ai/sets/ismir-2019-submission/
* [MIDI-Sandwich2](https://arxiv.org/pdf/1909.03522.pdf) (RNN+VAE; liang19arxiv): https://github.com/LiangHsia/MIDI-S2
* [LakhNES](https://arxiv.org/abs/1907.04868) (Transformer; donahue19ismir): https://chrisdonahue.com/LakhNES/
* [MuseNet](https://openai.com/blog/musenet/) (Transformer): https://openai.com/blog/musenet/
* [MIDI-VAE](https://arxiv.org/abs/1809.07600) (GRU+VAE; brunner18ismir): https://www.youtube.com/channel/UCCkFzSvCae8ySmKCCWM5Mpg
* [Multitrack MusicVAE](https://arxiv.org/abs/1806.00195) (LSTM+VAE; simon18ismir): https://magenta.tensorflow.org/multitrack
* [MuseGAN](https://arxiv.org/abs/1709.06298) (CNN+GAN; dong18aaai): https://salu133445.github.io/musegan/

### compose multitrack covers (cover generation; need reference MIDI)
* [FIGARO](https://arxiv.org/abs/2201.10936) (Transformer; rütte22arxiv): https://github.com/dvruette/figaro

### given chord, compose melody
* [MelodyDiffusion](https://www.mdpi.com/2227-7390/11/8/1915) (diffusion; li23mathematics): https://www.mdpi.com/article/10.3390/math11081915/s1
* [H-EC2-VAE](https://archives.ismir.net/ismir2021/paper/000092.pdf) (GRU+VAE; wei21ismir): N/A...
* [MINGUS](https://archives.ismir.net/ismir2021/paper/000051.pdf) (Transformer; madaghiele21ismir): https://github.com/vincenzomadaghiele/MINGUS
* [BebopNet](https://program.ismir2020.net/poster_6-08.html) (LSTM): https://shunithaviv.github.io/bebopnet/
* [JazzGAN](http://musicalmetacreation.org/mume2018/proceedings/Trieu.pdf) (GAN; trieu18mume): https://www.cs.hmc.edu/~keller/jazz/improvisor/
* [XiaoIce Band](http://staff.ustc.edu.cn/~qiliuql/files/Publications/Hongyuan-Zhu-KDD2018.pdf) (GRU; zhu18kdd): http://tv.cctv.com/2017/11/24/VIDEo7JWp0u0oWRmPbM4uCBt171124.shtml

### given melody, compose chord (melody harmonization)
* [ReaLchords](https://icml.cc/virtual/2024/poster/33172) (RL; wu24icml): https://storage.googleapis.com/realchords/index.html
* [EMO-Harmonizer]() (transformer): https://yuer867.github.io/emo_harmonizer/
* [LHVAE](https://arxiv.org/pdf/2306.03718.pdf) (VAE+LSTM; ji23arxiv): n/a
* [DeepChoir](https://arxiv.org/abs/2202.08423) (transformer; wu23icassp): https://github.com/sander-wood/deepchoir
* [DAT-CVAE](https://arxiv.org/pdf/2209.07144.pdf) (transformer-vae; zhao22ismir): https://zhaojw1998.github.io/DAT_CVAE
* [SurpriseNet](https://arxiv.org/pdf/2108.00378.pdf) (VAE; chen21ismir): https://github.com/scmvp301135/SurpriseNet
* [MTHarmonizer](https://arxiv.org/pdf/2001.02360.pdf) (RNN; yeh21jnmr)

### given lyrics, compose melody
* [CSL-L2M](https://arxiv.org/pdf/2412.09887) (LLM; wang25aaai): https://lichaiustc.github.io/CSL-L2M/
* [MuDiT/MuSiT](https://arxiv.org/pdf/2407.03188) (LLM; wang24arxiv): N/A
* [SongComposer](https://arxiv.org/abs/2402.17645) (LLM; ding24arxiv): https://pjlab-songcomposer.github.io/
* [ROC](https://arxiv.org/pdf/2208.05697.pdf) (transformer; lv22arxiv): https://ai-muzic.github.io/roc/
* [pop-melody](https://archives.ismir.net/ismir2022/paper/000016.pdf) (transformer; zhang22ismir): N/A
* [ReLyMe](https://arxiv.org/abs/2207.05688) (transformer; chen22mm): https://ai-muzic.github.io/relyme/
* [TeleMelody](https://arxiv.org/pdf/2109.09617) (transformer; ju21arxiv): https://github.com/microsoft/muzic
* [Conditional LSTM-GAN](https://arxiv.org/pdf/1908.05551.pdf) (LSTM+GAN; yu19arxiv): https://github.com/yy1lab/Lyrics-Conditioned-Neural-Melody-Generation
* [iComposer](https://www.aclweb.org/anthology/N19-4015) (LSTM; lee19acl): https://www.youtube.com/watch?v=Gstzqls2f4A
* [SongWriter](https://arxiv.org/pdf/1809.04318.pdf) (GRU; bao18arxiv): N/A

### compose drum MIDI
* [Conditional drum generation by Makris](https://arxiv.org/abs/2202.04464) (BiLSTM/Transformer): https://github.com/melkor169/CP_Drums_Generation
* [Nuttall's model](https://nime.pubpub.org/pub/8947fhly/release/1?readingCollection=71dd0131) (Transformer; nuttall21nime): https://nime.pubpub.org/pub/8947fhly/release/1?readingCollection=71dd0131
* [Wei's model](https://drive.google.com/file/d/1149HnGliYtl45Cjp9XwJadL_YHRLvq5F/view) (VAE+GAN; wei19ismir): https://github.com/Sma1033/drum_generation_with_ssm
* [DrumNet](https://arxiv.org/pdf/1908.00948.pdf) (GAE; lattner19waspaa): https://sites.google.com/view/drum-generation
* [DrumVAE](https://arxiv.org/abs/1902.03722) (GRU+VAE; thio19milc): http://vibertthio.com/drum-vae-client

### compose melody+chords (two tracks)
* [Emotional Lead Sheet Generation](https://arxiv.org/abs/2104.13056) (seq2seq): https://github.com/melkor169/LeadSheetGen_Valence
* [EmoMusicTV](https://ieeexplore.ieee.org/document/10124351) (Transformer; ji23tmm): https://github.com/Tayjsl97/EmoMusicTV
* [Jazz Transformer](https://arxiv.org/abs/2008.01307) (Transformer; wu20ismir): https://drive.google.com/drive/folders/1-09SoxumYPdYetsUWHIHSugK99E2tNYD
* [Transformer VAE](https://ieeexplore.ieee.org/document/9054554) (Transformer+VAE; jiang20icassp): https://drive.google.com/drive/folders/1Su-8qrK__28mAesSCJdjo6QZf9zEgIx6
* [Two-stage RNN](https://arxiv.org/abs/2002.10266) (RNN; deboom20arxiv): https://users.ugent.be/~cdboom/music/
* [LeadsheetGAN](https://arxiv.org/abs/1807.11161) (CRNN+GAN; liu18icmla): https://liuhaumin.github.io/LeadsheetArrangement/results
* [LeadsheetVAE](https://drive.google.com/file/d/10uGRGEI9IOfu_LyzDSG393fGhwUrEOi4/view) (RNN+VAE; liu18ismir-lbd): https://liuhaumin.github.io/LeadsheetArrangement/results

### given any MIDI tracks, compose other MIDI tracks
* [GETMusic](https://openreview.net/forum?id=z80CwkWXmq) (discrete diffusion): https://getmusicdemo.github.io/

### given melody or lead sheet, compose arrangement
* [AccoMontage3](https://arxiv.org/pdf/2310.16334.pdf) (; zhao23arxiv): https://zhaojw1998.github.io/AccoMontage-3
* [GETMusic](https://openreview.net/forum?id=z80CwkWXmq) (discrete diffusion): https://getmusicdemo.github.io/
* [SongDriver](https://arxiv.org/pdf/2209.06054.pdf) (Transformer-CRF; wang22mm):
* [AccoMontage2](https://arxiv.org/pdf/2209.00353.pdf): https://billyyi.top/accomontage2/
* [AccoMontage](https://archives.ismir.net/ismir2021/paper/000104.pdf) (template-based; zhao21ismir): https://github.com/zhaojw1998/AccoMontage
* [CP Transformer](https://arxiv.org/abs/2101.02402) (Transformer; hsiao21aaai): https://ailabs.tw/human-interaction/compound-word-transformer-generate-pop-piano-music-of-full-song-length/
* [PopMAG](https://arxiv.org/abs/2008.07703) (transformer; ren20mm): https://music-popmag.github.io/popmag/
* LeadsheetGAN: see above
* LeadsheetVAE: see above
* XiaoIce Band (the "multi-instrument co-arrangement model"): N/A

### given mix (audio), compose bass
* [latent diffusion](https://arxiv.org/abs/2402.01412) (diffusion; pasini24arxiv): https://sonycslparis.github.io/bass_accompaniment_demo/
* [BassNet](https://www.mdpi.com/2076-3417/10/18/6627) (GAE+CNN; grachten20mdpi): https://sonycslparis.github.io/bassnet/

### given prime melody, compose melody+chords
* [local_conv_music_generation](http://ouyangzhihao.com/wp-content/uploads/2018/12/MUSIC-GENERATION-WITH-LOCAL-CONNECTED-CONVOLUTIONAL-NEURAL-NETWORK.pdf) (CNN; ouyang18arxiv): https://somedaywilldo.github.io/local_conv_music_generation/

### given prime melody, compose melody+chords+bass
* [BandNet](https://arxiv.org/abs/1812.07126) (RNN; zhou18arxiv): https://soundcloud.com/yichao-zhou-555747812/sets/bandnet-sound-samples-1

### given piano score, compose an orchestration
* [LOP](https://qsdfo.github.io/LOP/index.html) (RBM; crestel17smc): https://qsdfo.github.io/LOP/results.html

### piano infilling
* [Polyffusion](https://arxiv.org/abs/2307.10304) (diffusion; min23ismir): https://polyffusion.github.io/
* [structure-aware infilling](https://arxiv.org/pdf/2210.02829.pdf): https://tanchihpin0517.github.io/structure-aware_infilling
* [VLI](https://arxiv.org/pdf/2108.05064.pdf) (Transformer; chang21ismir): https://jackyhsiung.github.io/piano-infilling-demo/
* [The Piano Inpainting Application](https://arxiv.org/pdf/2107.05944.pdf) (): https://ghadjeres.github.io/piano-inpainting-application/

### melody infilling
* [CLSM](https://archives.ismir.net/ismir2021/paper/000002.pdf) (Transformer+LSTM; akama21ismir): https://contextual-latent-space-model.github.io/demo/

### symbolic-domain genre style transfer
* [Pop2Jazz](http://mac.citi.sinica.edu.tw/~yang/pub/ailabs19ismirlbd_1.pdf) (RNN; yeh19ismir-lbd): https://soundcloud.com/yating_ai/sets/ismir-2019-submission/
* [Groove2Groove](https://hal.archives-ouvertes.fr/hal-02923548/document) (RNN; cífka19ismir, cífka20taslp): https://groove2groove.telecom-paris.fr/
* [CycleGAN2](https://tik-old.ee.ethz.ch/file/0d41d7d657f1a65f65373c4797caaeac/Music_Genre_Transfer___ECML_MML_Workshop_CR.pdf) (CNN+GAN; brunner19mml): https://drive.google.com/drive/folders/1Jr_p6pnKvhA2YW9sp-ABChiFgV3gY1aT
* [CycleGAN](https://arxiv.org/pdf/1809.07575.pdf) (CNN+GAN; brunner18ictai): https://github.com/sumuzhao/CycleGAN-Music-Style-Transfer
* [FusionGAN](https://dac.cs.vt.edu/wp-content/uploads/2017/11/learning-to-fuse.pdf) (GAN; chen17icdm): http://people.cs.vt.edu/czq/publication/fusiongan/

### symbolic-domain arrangement style transfer
* [UnetED](https://arxiv.org/abs/1905.13567) (CNN+Unet; hung19ijcai): https://biboamy.github.io/disentangle_demo/result/index.html

### symbolic-domain emotion/rhythm/pitch style transfer
* [MuseMorphose](https://arxiv.org/abs/2105.04090) (Transformer+VAE; wu21arxiv): https://slseanwu.github.io/site-musemorphose/
* [Kawai](https://program.ismir2020.net/poster_5-06.html) (VAE+GRU+adversarial; kawai20ismir): https://lisakawai.github.io/music_transformation/
* [Wang](https://program.ismir2020.net/poster_5-05.html) (VAE+GRU; wang20ismir): https://github.com/ZZWaang/polyphonic-chord-texture-disentanglement
* [Music FaderNets](https://program.ismir2020.net/poster_1-13.html) (VAE; tan20ismir): https://music-fadernets.github.io/
* [deep-music-analogy](https://arxiv.org/pdf/1906.03626.pdf) (yang19ismir): https://github.com/cdyrhjohn/Deep-Music-Analogy-Demos

### performance generation (given MIDI, generate human-like MIDI): Piano only
* [ScorePerformer](https://ismir2023program.ismir.net/poster_183.html) (transformer; borovik23ismir): https://github.com/ilya16/scoreperformer
* [CVRNN](http://archives.ismir.net/ismir2019/paper/000105.pdf) (CVRNN; maezawa19ismir): https://sites.google.com/view/cvrnn-performance-render
* [GGNN](http://proceedings.mlr.press/v97/jeong19a/jeong19a.pdf) (graph NN + hierarchical attention RNN; jeong19icml)
* [VirtuosoNet](https://nips2018creativity.github.io/doc/virtuosonet.pdf) (LSTM+hierarchical attention network; jeong18nipsw): https://www.youtube.com/playlist?list=PLkIVXCxCZ08rD1PXbrb0KNOSYVh5Pvg-c
* [PerformanceRNN](https://magenta.tensorflow.org/performance-rnn) (RNN): https://magenta.tensorflow.org/performance-rnn

### given MIDI, generate human-like MIDI: Drum only
* [GrooVAE](https://magenta.tensorflow.org/groovae) (seq2seq+VAE; gillick19icml): https://magenta.tensorflow.org/groovae

### compose ABC MIDI by LLM
* [ComposerX](https://arxiv.org/abs/2404.18081) (LLM; deng24arxiv): https://lllindsey0615.github.io/ComposerX_demo/
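
The LLM-based composers in this last section emit ABC notation as plain text. A minimal sketch of rendering such output to MIDI, assuming the `music21` package (a common downstream step, not part of any listed system):

```python
# pip install music21  (assumed; illustrative only)
from music21 import converter

abc_text = """X:1
T:Toy tune
M:4/4
L:1/8
K:C
CDEF GABc | c2 G2 E2 C2 |]
"""

score = converter.parse(abc_text, format="abc")
score.write("midi", fp="toy_tune.mid")
```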