https://github.com/zotroneneis/deep_music
Recurrent neural network in TensorFlow for generating novel monophonic melodies.
neural-network python3 recurrent-neural-network recurrent-neural-networks tensorflow variational-autoencoder variational-autoencoders
- Host: GitHub
- URL: https://github.com/zotroneneis/deep_music
- Owner: zotroneneis
- Created: 2017-07-30T07:24:46.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2019-02-10T08:14:39.000Z (over 6 years ago)
- Last Synced: 2025-03-30T19:41:30.078Z (7 months ago)
- Topics: neural-network, python3, recurrent-neural-network, recurrent-neural-networks, tensorflow, variational-autoencoder, variational-autoencoders
- Language: Jupyter Notebook
- Homepage:
- Size: 82.8 MB
- Stars: 11
- Watchers: 5
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
deepmusic
==============================

The main part of this project explores the ability of different recurrent Long Short-Term Memory (LSTM) architectures to generate novel monophonic melodies. All details about this part of the project can be found in the corresponding [report](https://github.com/zotroneneis/deep_music/blob/master/reports/report_deepmusic.pdf).
In addition, we trained and tested a variational autoencoder on the same task.
All code is written in Python and uses TensorFlow.
Project authors: Anna-Lena Popkes, Pascal Wenker
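Monophonic means the melody contains at most one active note at a time, so it can be represented as a flat sequence of discrete events that an LSTM can model step by step. A rough illustration of such a representation (the event values and pitch offset below are assumptions for this sketch, not the project's actual encoding):

```python
# Illustrative sketch: encoding a monophonic melody as integer event IDs,
# the kind of sequence an LSTM next-event model trains on.
# Assumed convention: 0 = rest/note-off, and 2+ maps to MIDI pitches
# starting at A0 (MIDI pitch 21).
NOTE_OFF = 0
PITCH_OFFSET = 2
LOWEST_PITCH = 21  # MIDI pitch of A0

def encode_melody(pitches):
    """Map a list of MIDI pitches (None = rest) to integer event IDs."""
    return [NOTE_OFF if p is None else p - LOWEST_PITCH + PITCH_OFFSET
            for p in pitches]

def decode_melody(events):
    """Inverse mapping: integer event IDs back to MIDI pitches."""
    return [None if e == NOTE_OFF else e - PITCH_OFFSET + LOWEST_PITCH
            for e in events]
```

With a vocabulary like this, melody generation reduces to categorical next-token prediction over a small, fixed set of event IDs.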
Project Organization
------------

├── Makefile <- Makefile with commands like `make data` or `make train`
├── README.md <- The top-level README for developers using this project.
├── data
│ ├── midis <- Original MIDI files
│ ├── notesequences <- Computed notesequence protocols
│ └── sequence_examples <- Sequence examples used to train the model
│
├── models <- Trained and serialized models, model predictions, or model summaries
│
├── references <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports <- Final latex report for the project
│
├── requirements.txt <- The requirements file for reproducing the analysis environment, e.g.
│ generated with `pip freeze > requirements.txt`
│
├── src <- Source code for use in this project.
│ ├── __init__.py <- Makes src a Python module
│ │
│ ├── main.py <- Main method
│ ├── config.yaml <- Config file, storing all network parameters
│ │
│ ├── data <- Scripts to transform MIDI files to notesequences and
│ │ │ notesequences to sequence examples
│ │ │
│ │ ├── 01_create_notesequences
│ │ └── 02_create_sequenceExampes
│ │
│ │
│ ├── models <- Scripts to train models and then use trained models to make
│ │ │ predictions
│ │ ├── basic_model.py
│ │ ├── attention_model.py
│ │
│ ├── helper <- Scripts that contain helper functions used by the model
│ │ ├── misc.py
│ │ └── visualization.py
│ │
│ ├── scripts <- Scripts to create exploratory and results oriented visualizations
│ ├── create_debug_midis.py
│ ├── midi_to_melody.py
│ └── tensorboardify.py
│
│
├── vae <- Additional project training a variational autoencoder
│ │ to generate music
│ │
│ ├── models <- Trained and serialized models, model predictions,
│ │ │ or model summaries
│ │ ├── checkpoints
│ │ ├── generated_midis
│ │ └── tensorboard
│ │
│ ├── src
│ │
│ ├── main.py <- Main method
│ ├── config.yaml <- Config file, storing all network parameters
│ ├── models <- Model definition and code to train the model
│ └── helper <- Helper functions used to train the model
--------
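The data pipeline above goes MIDI files → notesequences → sequence examples. For next-event prediction with an RNN, a sequence example typically pairs an input window with the same window shifted one step ahead. A minimal sketch of that idea (the function name and windowing details are illustrative, not taken from the repository):

```python
def make_sequence_examples(events, window):
    """Split an event sequence into (input, target) pairs where the target
    is the input shifted one step ahead -- the usual setup for training an
    RNN to predict the next melody event."""
    examples = []
    for start in range(len(events) - window):
        inputs = events[start:start + window]
        targets = events[start + 1:start + window + 1]
        examples.append((inputs, targets))
    return examples

# Example: a toy event sequence split into windows of length 3.
pairs = make_sequence_examples([1, 2, 3, 4, 5], window=3)
```

Each pair trains the network to emit, at every timestep, the event that follows the one it just saw.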
Project based on the cookiecutter data science project template. #cookiecutterdatascience
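The `vae` subproject trains a variational autoencoder on the same task. The core mechanism that makes a VAE trainable by gradient descent is the reparameterization trick: the latent code is sampled as a deterministic function of the encoder outputs plus independent noise. A minimal, framework-free sketch (the TensorFlow implementation in the repository will differ):

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1).

    Because the randomness is isolated in eps, gradients can flow through
    mu and log_var -- the reparameterization trick used in VAEs.
    """
    return [m + math.exp(0.5 * lv) * rng.gauss(0.0, 1.0)
            for m, lv in zip(mu, log_var)]
```

As the variance collapses (`log_var` → −∞), the sample converges to the mean, which is a handy sanity check when debugging a VAE's sampling layer.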