Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Curated-Awesome-Lists/awesome-ai-music-generation
A curated compilation of AI-driven generative music resources and projects. Explore the blend of machine learning algorithms and musical creativity.
List: awesome-ai-music-generation
ai awesome awesome-list generative-ai music-generation
Last synced: 16 days ago
- Host: GitHub
- URL: https://github.com/Curated-Awesome-Lists/awesome-ai-music-generation
- Owner: Curated-Awesome-Lists
- License: apache-2.0
- Created: 2023-11-03T17:57:26.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2023-11-03T17:57:43.000Z (about 1 year ago)
- Last Synced: 2024-11-21T12:02:14.962Z (about 1 month ago)
- Topics: ai, awesome, awesome-list, generative-ai, music-generation
- Homepage:
- Size: 15.6 KB
- Stars: 197
- Watchers: 2
- Forks: 14
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- ultimate-awesome - awesome-ai-music-generation - A curated compilation of AI-driven generative music resources and projects. Explore the blend of machine learning algorithms and musical creativity. (Other Lists / Monkey C Lists)
README
# Awesome Music Generation with AI
Welcome to the Awesome Music Generation with AI list, a curated collection of resources, projects, and frameworks at the intersection of artificial intelligence and music creation. Over the years, the field of generative music has witnessed a significant evolution, propelled by advancements in machine learning and deep learning technologies. From algorithmic composition to real-time music generation, AI has opened new horizons, enabling a blend of creativity and automation that was once unimaginable.
This list aims to be a comprehensive hub for enthusiasts, researchers, and professionals, bringing together the pioneering projects, influential research papers, and state-of-the-art frameworks that are shaping the future of music generation through AI. Whether you are a musician exploring the digital frontier, a researcher pushing the boundaries of what's possible, or a developer aiming to integrate AI-driven music capabilities into applications, this collection will provide a rich source of inspiration and knowledge.
## Table of Contents
- [GitHub projects](#github-projects)
- [Articles & Blogs](#articles--blogs)
- [Online Courses](#online-courses)
- [Books](#books)
- [Research Papers](#research-papers)
- [Videos](#videos)
- [Tools & Software](#tools--software)
- [Conferences & Events](#conferences--events)
- [Slides & Presentations](#slides--presentations)

## GitHub projects
- [Magenta](https://github.com/magenta/magenta): Music and Art Generation with Machine Intelligence (18712 stars)
- [Audiocraft](https://github.com/facebookresearch/audiocraft): A library for audio processing and generation with deep learning, including MusicGen, a controllable music generation LM with textual and melodic conditioning (17044 stars; a minimal usage sketch follows this list)
- [Muzic](https://github.com/microsoft/muzic): Music Understanding and Generation with Artificial Intelligence (3765 stars)
- [musiclm-pytorch](https://github.com/lucidrains/musiclm-pytorch): PyTorch implementation of MusicLM, Google's state-of-the-art model for music generation using attention networks (2763 stars)
- [riffusion](https://github.com/riffusion/riffusion): Stable diffusion for real-time music generation (2727 stars)
- [Mubert-Text-to-Music](https://github.com/MubertAI/Mubert-Text-to-Music): A notebook demonstrating prompt-based music generation using the Mubert API (2674 stars)
- [riffusion-app](https://github.com/riffusion/riffusion-app): Stable diffusion for real-time music generation in a web app (2474 stars)
- [Magenta.js](https://github.com/magenta/magenta-js): Music and Art Generation with Machine Learning in the browser (1899 stars)
- [AudioLDM2](https://github.com/haoheliu/AudioLDM2): Text-to-Audio/Music Generation (1733 stars)
- [musegan](https://github.com/salu133445/musegan): An AI for Music Generation (1602 stars)
- [**Radium**](https://github.com/kmatheussen/radium): A graphical music editor and next generation tracker. (805 stars)
- [**GRUV**](https://github.com/MattVitelli/GRUV): A Python project for algorithmic music generation. (798 stars)
- [**DeepJ**](https://github.com/calclavia/DeepJ): A deep learning model for style-specific music generation. (717 stars)
- [**Music Generation with Deep Learning**](https://github.com/umbrellabeach/music-generation-with-DL): Resources on music generation using deep learning. (700 stars)
- [**Musika**](https://github.com/marcoppasini/musika): Fast infinite waveform music generation. (646 stars)
- [**Music Generation Research**](https://github.com/AI-Guru/music-generation-research): A collection of music generation research resources. (516 stars)
- [**MusPy**](https://github.com/salu133445/muspy): A toolkit for symbolic music generation. (387 stars)
- [**MusicGenerator**](https://github.com/Conchylicultor/MusicGenerator): Experiment with diverse deep learning models for music generation with TensorFlow. (309 stars)
- [**MuseTree**](https://github.com/stevenwaterman/musetree): AI music generation for the real world. (215 stars)
- [**VampNET**](https://github.com/hugofloresgarcia/vampnet): Music generation with masked transformers! (204 stars)
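To make the Audiocraft entry above more concrete, here is a minimal text-to-music sketch using MusicGen via the `audiocraft` library. It follows the API documented in the audiocraft README (model names, defaults, and signatures may differ between releases), and the prompts and output filenames are illustrative only.

```python
# Minimal text-to-music sketch with Audiocraft's MusicGen.
# Assumes: pip install audiocraft (pulls in PyTorch); API as documented for audiocraft 1.x.
from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

model = MusicGen.get_pretrained("facebook/musicgen-small")  # smallest public checkpoint
model.set_generation_params(duration=8)                     # seconds of audio per prompt

prompts = ["lo-fi hip hop beat with mellow piano", "upbeat synthwave with driving bass"]
wavs = model.generate(prompts)  # tensor shaped [batch, channels, samples]

for i, wav in enumerate(wavs):
    # Writes clip_0.wav, clip_1.wav at the model's native sample rate.
    audio_write(f"clip_{i}", wav.cpu(), model.sample_rate, strategy="loudness")
```

Swapping in a larger checkpoint (e.g. a medium or melody variant, if available in your installed version) trades generation speed for quality; the calling pattern stays the same.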
## Articles & Blogs

- [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284): A single Language Model (LM) called MusicGen that operates over compressed discrete music representation, allowing better control over the generated output. Music samples, code, and models are available at the provided link.
- [AI-Based Affective Music Generation Systems: A Review of Methods](https://arxiv.org/abs/2301.06890): A comprehensive review of AI-AMG systems, discussing their building blocks, categorizing existing systems based on core algorithms, and exploring AI-based approaches for composing affective music.
- [Music FaderNets: Controllable Music Generation Based On High](https://arxiv.org/abs/2007.15474): A framework (Music FaderNets) for learning high-level feature representations by manipulating low-level attributes through feature disentanglement and latent regularization techniques.
- [Music Generation by Deep Learning-Challenges and Directions](https://arxiv.org/abs/1712.04371): An overview of deep learning approaches for music generation, discussing their limitations in terms of creativity and control.
- [MusPy: A Toolkit for Symbolic Music Generation](https://arxiv.org/abs/2008.01951): Introduction of MusPy, an open-source Python library for symbolic music generation, providing tools for dataset management, data preprocessing, and model evaluation. Statistical analysis of supported datasets is also included.
- [Music generation with variational recurrent autoencoder supported](https://arxiv.org/abs/1705.05458): Introduction of a new network architecture, variational autoencoder supported by history, for generating longer melodic patterns. Filtering heuristics are used to enhance the generated music.
- [Symbolic Music Generation with Diffusion Models](https://arxiv.org/abs/2103.16091): Application of diffusion models to modeling symbolic music, demonstrating strong generation and conditional infilling results.
- [Magenta](https://research.google/teams/brain/magenta/): A research project exploring the role of machine learning in the creation of art and music.
- [How to generate music with Python: The Basics](https://medium.com/@stevehiehn/how-to-generate-music-with-python-the-basics-62e8ea9b99a5): An article discussing the basics of generating music with Python, highlighting its use in procedural MIDI generation (see the MIDI sketch after this list).
- [MidiNet: A Convolutional Generative Adversarial Network for](https://arxiv.org/abs/1703.10847): Investigation of using convolutional neural networks (CNNs) for generating melody in the symbolic domain, introducing conditional mechanisms and expanding to multiple MIDI channels.
- [A Survey on Artificial Intelligence for Music Generation: Agents ...](https://arxiv.org/abs/2210.13944) :octocat:
- This survey paper explores the field of music generation with artificial intelligence (AI), discussing the composition techniques and advances in AI systems imitating the music generation process. It also highlights the role of datasets, models, interfaces, and users in the music generation process, along with potential applications and future research directions.
- [Generating Music With Artificial Intelligence](https://towardsdatascience.com/generating-music-with-artificial-intelligence-9ce3c9eef806) :musical_note:
- This article provides insights into how recurrent neural networks (RNNs) can be used for music generation with machine learning. It serves as a refresher for RNN-based text generation techniques.
- [From Artificial Neural Networks to Deep Learning for Music ...](https://arxiv.org/abs/2004.03586)
- This paper explores the application of deep learning techniques in music generation. It offers a tutorial on how deep learning can be used to automatically learn musical styles and generate music samples.
- [Noise2Music: Text-conditioned Music Generation with Diffusion ...](https://arxiv.org/abs/2302.03917) :sound:
- This research introduces Noise2Music, a system that utilizes diffusion models to generate high-quality music clips from text prompts. It demonstrates how the generated audio can capture the genre, tempo, instruments, mood, and era specified in the text.
- [A Classifying Variational Autoencoder with Application to ...](https://arxiv.org/abs/1711.07050) :musical_keyboard:
- This paper presents a model based on the variational autoencoder (VAE) framework for algorithmic music generation. The model incorporates a classifier to infer the discrete class of the modeled data, allowing for the generation of musical sequences in different keys.
- [Generating Ambient Music from WaveNet](https://medium.com/@rachelchen_49210/generating-ambient-noise-from-wavenet-95aa7f0a8f77)
- This post discusses the motivation and approach for generating ambient music using Google DeepMind's WaveNet, an audio-generative model.
- [Generating Music using an LSTM Neural Network](https://david-exiga.medium.com/music-generation-using-lstm-neural-networks-44f6780a4c5)
- This blog post presents the use of a long short-term memory (LSTM) neural network for music generation. It covers improvements made to an existing LSTM model.
- [Discrete Diffusion Probabilistic Models for Symbolic Music Generation](https://arxiv.org/abs/2305.09489) :notes:
- This work introduces the generation of polyphonic symbolic music using Discrete Diffusion Probabilistic Models (DDPMs). The models exhibit high-quality sample generation and allow for flexible infilling at the note level. The paper also discusses the evaluation of music sample quality and the possible applications of these models.
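As a companion to the procedural-MIDI article referenced above, here is a small illustrative sketch of rule-based MIDI generation. It is not code from that article; it assumes the `pretty_midi` package is installed (`pip install pretty_midi`), and the scale, note length, and output filename are arbitrary choices.

```python
# Procedural MIDI sketch: a random walk over the C major scale.
import random
import pretty_midi

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major, MIDI note numbers (illustrative)

pm = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)  # General MIDI: Acoustic Grand Piano

time, idx = 0.0, 0
for _ in range(32):
    # Step up or down the scale by at most two degrees, staying in range.
    idx = max(0, min(len(SCALE) - 1, idx + random.choice([-2, -1, 0, 1, 2])))
    note = pretty_midi.Note(velocity=90, pitch=SCALE[idx], start=time, end=time + 0.5)
    piano.notes.append(note)
    time += 0.5

pm.instruments.append(piano)
pm.write("random_walk.mid")  # open in any DAW or MIDI player
```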
## Online Courses

- [Generative AI Courses & Certifications](https://www.coursera.org/courses?query=generative%20ai): Take the next step in your professional journey and enroll in a Generative AI course today! Browse Generative AI Courses offered from top universities and industry leaders.
- [Complete A.I. Art Generation Course - Beginner 2 MASTER](https://www.udemy.com/course/complete-ai-art-generation/): Learn how to generate everything from Language, Art, Music & much more using cutting-edge A.I. algorithms.
- [Andrew Ng: Announcing My New Deep Learning Specialization](https://blog.coursera.org/andrew-ng-announcing-new-deep-learning-specialization-coursera/): Dive into deep learning with Andrew Ng, a renowned AI expert, and learn the foundations of this exciting field.
- [Best Deep Learning Courses & Certifications](https://www.coursera.org/courses?query=deep%20learning) (Coursera): Enhance your deep learning skills and knowledge by enrolling in a wide range of courses offered by top universities and industry leaders.

## Books
- [Deep Learning Techniques for Music Generation](https://link.springer.com/book/10.1007/978-3-319-70163-9): This book presents a survey and analysis of how deep learning can be utilized to generate musical content, providing insights for students, practitioners, and researchers.
- [Algorithmic Composition: Paradigms of Automated Music Generation](https://link.springer.com/book/10.1007/978-3-211-75540-2): Offering a detailed overview of algorithmic composition, this book focuses on prominent procedures and principles in a practical manner.
- [Hands-On Music Generation with Magenta](https://www.amazon.com/Hands-Music-Generation-Magenta-composition/dp/1838824413): Explore the role of deep learning in music generation and assisted composition with Magenta. This hands-on guide integrates ML models into existing music production tools.
- [Machine Learning and Music Generation](https://www.amazon.com/Machine-Learning-Music-Generation-I%C3%B1esta/dp/0367892855): Delve into the intersection of machine learning and music generation with this comprehensive book, covering the use of ML techniques in creating music.

## Research Papers
- [Simple and Controllable Music Generation](https://arxiv.org/abs/2306.05284) : This paper introduces MusicGen, a single Language Model (LM) that generates high-quality music samples conditioned on textual description or melodic features, allowing better control over the generated output. It showcases superior performance compared to baselines on a standard text-to-music benchmark.
- [Efficient Neural Music Generation](https://arxiv.org/abs/2305.15719) : MeLoDy (M for music; L for LM; D for diffusion) is proposed as an LM-guided diffusion model that generates music audios of state-of-the-art quality while reducing forward passes in the sampling process, making it computationally efficient.
- [Noise2Music: Text-conditioned Music Generation with Diffusion](https://arxiv.org/abs/2302.03917) : This paper presents Noise2Music, a series of diffusion models trained to generate high-quality 30-second music clips from text prompts. It explores different options for intermediate representations and demonstrates the ability to faithfully reflect key elements of the text prompt.
- [VampNet: Music Generation via Masked Acoustic Token Modeling](https://arxiv.org/abs/2307.04686) : VampNet leverages a bidirectional transformer architecture and a variable masking schedule during training to generate coherent high-fidelity musical waveforms. It showcases capabilities in music synthesis, compression, inpainting, and variation.
- [MuseGAN: Multi-track Sequential Generative Adversarial Networks](https://arxiv.org/abs/1709.06298) : This paper proposes three models for symbolic multi-track music generation using generative adversarial networks (GANs), taking into account temporal dynamics and interdependencies between tracks.
- [JEN-1: Text-Guided Universal Music Generation with Diffusion](https://arxiv.org/abs/2308.04729) : JEN-1 is introduced as a universal high-fidelity model for text-to-music generation, incorporating both autoregressive and non-autoregressive training. It demonstrates superior performance in text-music alignment and music quality.
- [Museformer: Transformer with Fine-and Coarse-Grained Attention](https://arxiv.org/abs/2210.10349) : Museformer is a Transformer-based approach for music generation that addresses challenges related to long music sequences and musical repetition structures. It introduces fine- and coarse-grained attention mechanisms to capture relevant music structures efficiently.
- [A Comprehensive Survey on Deep Music Generation: Multi-level Perspectives](https://arxiv.org/pdf/2011.06801) : This survey provides an overview of deep learning techniques in music generation, covering various composition tasks under different music generation levels (score generation, performance generation, and audio generation).
- [Quantized GAN for Complex Music Generation from Dance Videos](https://arxiv.org/abs/2204.00604) : Dance2Music-GAN (D2M-GAN) is an adversarial multi-modal framework that generates complex musical samples conditioned on dance videos. It uses Vector Quantized (VQ) audio representation to generate diverse dance music styles.
- [Musika! Fast Infinite Waveform Music Generation](https://arxiv.org/abs/2208.08706): Fast and user-controllable music generation system that allows for much faster than real-time generation of music of arbitrary length on a consumer CPU.
- [A systematic review of artificial intelligence-based music generation](https://www.sciencedirect.com/science/article/pii/S0957417422013537): Provides a wide range of publications and explores the interest of both musicians and computer scientists in AI-based automatic music generation.
- [MidiNet: A Convolutional Generative Adversarial Network for Music Generation](https://arxiv.org/abs/1703.10847): Introduces the use of convolutional neural networks (CNNs) for generating melodies in the symbolic domain.
- [Music Generation by Deep Learning-Challenges and Directions](https://arxiv.org/abs/1712.04371): Explores the limitations of deep learning for music generation and the need for control, structure, creativity, and interactivity.
- [What is missing in deep music generation? A study of repetition and structure](https://arxiv.org/abs/2209.00182): Investigates the understanding of music structure and repetition in the context of music generation and suggests new formal music criteria and evaluation methods.
- [Symbolic Music Generation with Diffusion Models](https://arxiv.org/abs/2103.16091): Presents a technique for training diffusion models on sequential data to generate symbolic music with strong unconditional generation and post-hoc conditional infilling results.
- [Discrete Diffusion Probabilistic Models for Symbolic Music Generation](https://arxiv.org/abs/2305.09489): Explores the application of Discrete Diffusion Probabilistic Models (D3PMs) for generating polyphonic symbolic music with high sample quality and flexible infilling.
- [MMM: Exploring Conditional Multi-Track Music Generation with the Transformer](https://arxiv.org/abs/2008.06048): Introduces a generative system based on the Transformer architecture for generating multi-track music with greater control and handling of long-term dependencies (a toy Transformer sketch follows this list).
- [Deep Learning Techniques for Music Generation - A Survey](https://arxiv.org/abs/1709.01620): Analyses the different ways of using deep learning for generating musical content, covering objectives, representations, architectures, challenges, and evaluation.
- [Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion](https://arxiv.org/abs/2301.11757): Bridges the connection between text and music with a highly efficient text-to-music generation model that can generate multiple minutes of high-quality stereo music from textual descriptions.
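Several of the papers above (e.g., MMM and Museformer) treat symbolic music as a sequence of discrete event tokens modeled by a Transformer. The sketch below is a toy illustration of that general idea, not a reimplementation of any specific paper: a tiny causal Transformer language model over arbitrary token IDs. It assumes PyTorch is installed, and the vocabulary size, model dimensions, and random input batch are placeholders.

```python
# Toy causal Transformer over symbolic music tokens (e.g. REMI/MIDI-like events).
import torch
import torch.nn as nn

class TinyMusicLM(nn.Module):
    def __init__(self, vocab_size=512, d_model=128, nhead=4, num_layers=2, max_len=256):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        b, t = tokens.shape
        pos = torch.arange(t, device=tokens.device).unsqueeze(0)
        x = self.tok(tokens) + self.pos(pos)
        # Causal mask so each position only attends to earlier tokens.
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(tokens.device)
        h = self.encoder(x, mask=mask)
        return self.head(h)  # next-token logits per position

model = TinyMusicLM()
fake_batch = torch.randint(0, 512, (2, 64))  # stand-in for tokenized music
logits = model(fake_batch)
print(logits.shape)  # torch.Size([2, 64, 512])
```

Training such a model on tokenized MIDI with a cross-entropy loss, then sampling autoregressively, is the basic recipe the Transformer-based papers above build on and refine.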
## Videos

- [Music Generation with Magenta: Using Machine Learning in Arts](https://www.youtube.com/watch?v=O4uBa0KMeNY) - Nov 7, 2019. Composing music is hard, and the lack of inspiration can be daunting. This video explores how machine learning can be used in music generation.
- [How to code a music generation genetic algorithm?](https://www.youtube.com/watch?v=nypJ3b4rMhE) - Apr 3, 2021. This video discusses coding a genetic algorithm for generating music, building upon the concepts presented in a previous video (a toy version is sketched after this list).
- [Deep Learning for Music Generation](https://www.youtube.com/watch?v=4bCrNl4Bx1M) - Feb 8, 2018. In this episode of the AI show, Erika explains how to create deep learning models using music as input, delving into the technical aspects of music generation using deep learning.
- [Composing Heavy Metal with GPT - HuggingFace for Music](https://www.youtube.com/watch?v=rpp5hQDnWtc) - Jan 26, 2022. This video showcases the usage of HuggingFace for music generation, specifically focusing on composing heavy metal music.
- [MusicGen: Simple and Controllable Music Generation Explained](https://www.youtube.com/watch?v=5xqUoseyffw) - Jun 25, 2023. This video provides an explanation of MusicGen, a framework for simple and controllable music generation.
- [Jawlove - Everything Will Be Alright - YouTube](https://www.youtube.com/watch?v=Wr59PK8U-dI) - A music video featuring the song "Everything Will Be Alright" by Jawlove.
- [Musical Beginnings with Karen #7 Slippery Fish - YouTube](https://m.youtube.com/watch?v=FK0Be7OvFPk) - A video from the Music Generation Waterford program showcasing a music education performance.
- [Cybernetic Celebration | EDM | Loudly AI Music Generator - YouTube](https://www.youtube.com/watch?v=zSlnctOOChY) - A video demonstrating the use of the Loudly AI Music Generator to create EDM music.
- [Music Generation Cork City - YouTube](https://m.youtube.com/playlist?list=PLaYlsrBdxcGSisQvTWbPfb2HHYVcPIR2V) - A playlist of videos showcasing performances by Music Generation Cork City.
- [Top AI music generating tools (publicly available tools) - video](https://www.dailymotion.com/video/x8hwvyb) - A video exploring the top AI music generating tools available, including Mubert AI, AIVA, Soundraw, Beatoven AI, Boomy, and Amper Music.
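The genetic-algorithm video referenced above walks through evolving melodies; the sketch below illustrates the same idea in miniature and is not the code from the video. The fitness function (reward in-scale notes and small leaps), pitch range, and hyperparameters are arbitrary assumptions.

```python
# Toy genetic algorithm that evolves a 16-note melody (MIDI pitch numbers).
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]       # C major reference pitches
LENGTH, POP_SIZE, GENERATIONS = 16, 30, 200

def random_melody():
    return [random.randint(48, 84) for _ in range(LENGTH)]

def fitness(melody):
    # Reward notes that fall in the scale and transitions with small leaps.
    pitch_classes = {p % 12 for p in SCALE}
    in_scale = sum(1 for n in melody if n % 12 in pitch_classes)
    smooth = sum(1 for a, b in zip(melody, melody[1:]) if abs(a - b) <= 4)
    return in_scale + smooth

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

def mutate(melody, rate=0.1):
    return [random.randint(48, 84) if random.random() < rate else n for n in melody]

population = [random_melody() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]            # keep the fitter half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("best melody (MIDI pitches):", best)
```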
## Tools & Software

- [Stability AI unveils 'Stable Audio'](https://alternativeto.net/news/2023/9/stability-ai-unveils-stable-audio--a-versatile-platform-for-ai-music-generation/): A versatile platform for AI Music Generation. Stability AI has launched a new AI platform, Stable Audio, which offers a novel latent diffusion model for generating audio conditioned on metadata and timing, providing faster inference times and creative control.
- [SuperCollider](https://sourceforge.net/directory/?q=algorithmic%20music%20composition): An audio server, programming language, and IDE for sound synthesis. SuperCollider is a platform for audio synthesis and algorithmic composition.
- [Best Open Source AI Music Generators](https://sourceforge.net/directory/ai-music-generators/): A SourceForge directory of open source AI music generators, including a PyTorch implementation of AudioLM, a language modeling approach to audio generation with conditioning mechanisms for more control over generated music.
- [Soundful](https://www.producthunt.com/products/soundful): An AI Music Generator that allows creators to generate royalty-free tracks instantly. Soundful generates high-quality music using AI technology, making it easy for anyone to create professional-sounding music.
- [Strasheela](https://sourceforge.net/projects/strasheela/): A constraint-based music composition system. Users define music theories with sets of compositional rules, and the system generates music that complies with these theories.
- [Best AI Music Generators - 2023 Reviews & Comparison](https://sourceforge.net/software/ai-music-generators/): A SourceForge directory comparing AI music generator software, including tools for creating song covers by searching for songs, uploading audio files, or recording directly.
- [What Is AI Generated Music? Best Music Tools for 2023](https://www.g2.com/articles/ai-generated-music): A G2 article explaining AI-generated music and surveying tools that let businesses explore it as a cheaper alternative, including options with free trials, unlimited music projects, and monthly song downloads.
- [Best Audio Editing Software in 2023: Compare Reviews on 100+ | G2](https://www.g2.com/categories/audio-editing): A comprehensive list of audio editing software commonly used by audio engineers and music producers, with real-time product reviews from verified users.
- [Psycle Modular Music Creation Studio Reviews - 2023](https://sourceforge.net/projects/psycle/reviews/): User reviews and ratings of the Psycle Modular Music Creation Studio free open-source software project.

## Conferences & Events
- [Neuton.AI Events](https://www.eventbrite.com/o/neutonai-63873737103) - Neuton.AI is hosting various events, including an ARM Tech Talk about the Next Generation Smart Toothbrush and showcasing their unique neural network framework for building compact models with optimal size and accuracy.
- [FUTURE DEAD ARTISTS Events](https://www.eventbrite.com/o/future-dead-artists-16784330599) - Stay updated on upcoming events by FUTURE DEAD ARTISTS, including the FDA 2023 Freshman Class: FUTURE GENERATION Artists Talk.
- [Generative AI, Apps, and DevOps | AI/ML Talks](https://www.eventbrite.com/e/generative-ai-apps-and-devops-aiml-talks-tickets-726386941897) - Pulumi presents a talk on Generative AI, Apps, and DevOps in the field of AI/ML Talks on October 19, 2023, in Seattle, WA.
- [Women in Tech & Entrepreneurship - Fort Lauderdale Chapter Happy Hour](https://www.eventbrite.com/e/women-in-tech-entrepreneurship-fort-lauderdale-chapter-happy-hour-tickets-707928020767) - Fort Lauderdale Chapter Happy Hour event for Women in Tech & Entrepreneurship.

## Slides & Presentations
- [Algorithmic music generation | PPT](https://www.slideshare.net/sunitabhagwat/algorithmic-music-generation): Slides discussing algorithmic music generation, available for free as a PDF or online view.
- [Music Generation with Deep Learning | PPT](https://www.slideshare.net/JinxiLeviGuo/music-generation-with-deep-learning): Presentation exploring music generation using deep learning, downloadable as a PDF or for online viewing.
- [Video Background Music Generation with Controllable Music Transformer](https://www.slideshare.net/ivaderivader/video-background-music-generation-with-controllable-music-transformer): Slides discussing the generation of video background music using a controllable music transformer, available as a PDF or for online viewing.
- [Automatic Music Generation Using Deep Learning | PDF](https://www.slideshare.net/DanielWachtel4/automatic-music-generation-using-deep-learning-259642436): Slides explaining the process of automatic music generation using deep learning, downloadable as a PDF or for online viewing.
- [MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment (AAAI 2018)](https://www.slideshare.net/HermanDong/musegan-multitrack-sequential-generative-adversarial-networks-for-symbolic-music-generation-and-accompaniment-aaai-2018): Slides presenting MuseGAN, a framework for multi-track sequential generative adversarial networks for symbolic music generation and accompaniment, available as a PDF or for online viewing.
- [Automatic Music Composition with Transformers, Jan 2021 | PPT](https://www.slideshare.net/affige/automatic-music-composition-with-transformers-jan-2021): Presentation introducing ongoing projects on automatic music composition using Transformers, downloadable as a PDF or for online viewing.
- [ISMIR 2019 tutorial: Generating music with generative adversarial networks (GANs)](https://www.slideshare.net/affige/ismir2019tutorialgan4music): Slides from the ISMIR 2019 tutorial on generating music with generative adversarial networks (GANs), available as a PDF or for online viewing.
- [PopMAG: Pop Music Accompaniment Generation | PPT](https://www.slideshare.net/ivaderivader/popmag-pop-music-accompaniment-generation): Slides discussing PopMAG, a framework for pop music accompaniment generation, available as a PDF or for online viewing.
- [Artificial intelligence and Music | PPT](https://www.slideshare.net/leonardjessesuccesslord/artificial-intelligence-and-music): Slides exploring the application of recurrent neural networks paired with LSTM for music generation, downloadable as a PDF or for online viewing.
- [Machine learning for creative AI applications in music (2018 nov) | PPT](https://www.slideshare.net/affige/machine-learning-for-creative-ai-applications-in-music-2018-nov): Presentation on machine learning for creative AI applications in music, available as a PDF or for online viewing.

---
This initial version of the Awesome List was generated with the help of the [Awesome List Generator](https://github.com/alialsaeedi19/GPT-Awesome-List-Maker). It's an open-source Python package that uses the power of GPT models to automatically curate and generate starting points for resource lists related to a specific topic.