https://github.com/Ludwiggle/GRUMIDI
Recurrent Neural Network for generative MIDI music
- Host: GitHub
- URL: https://github.com/Ludwiggle/GRUMIDI
- Owner: Ludwiggle
- Created: 2018-08-08T14:30:02.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2018-08-21T11:14:25.000Z (over 6 years ago)
- Last Synced: 2024-11-03T09:31:34.089Z (6 months ago)
- Topics: algorave, electronic-music, gated-recurrent-units, generative-art, generative-music, machine-learning, mathematica, midi, midi-sequencer, music, recurrent-neural-networks, wolfram-language, wolfram-mathematica, wolframlanguage, wolframscript
- Language: Mathematica
- Homepage:
- Size: 8.79 KB
- Stars: 4
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
# GRUMIDI
A GRU<sup>1</sup>-based RNN<sup>2</sup> for rhythmic pattern generation.
The RNN model is a
[char-rnn](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)
trained on an input MIDI<sup>3</sup> file encoded as a sequence of
[unit vectors](https://en.wikipedia.org/wiki/Unit_vector).
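
The repo doesn't spell out the architecture beyond "GRU-based char-rnn", but a minimal Wolfram Language sketch of such a next-step predictor might look like this (the hidden size 128 and `n = 12` are illustrative assumptions, not values taken from the scripts):

```mathematica
(* Sketch of a char-rnn-style next-step predictor over n-dimensional
   unit vectors; hidden size and n are assumptions, not the repo's values. *)
n = 12;  (* e.g. one dimension per note of the chromatic scale *)
net = NetChain[
  {
    GatedRecurrentLayer[128],        (* GRU run over the whole input sequence *)
    NetMapOperator[LinearLayer[n]],  (* project each hidden state to n scores *)
    SoftmaxLayer[]                   (* distribution over the next unit vector *)
  },
  "Input" -> {"Varying", n}
]
```
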
## Prerequisites

- [WolframKernel](https://www.wolfram.com/cdf-player)
- [Wolframscript](https://www.wolfram.com/wolframscript)

Run `$ wolframscript -configure` and set the variable `WOLFRAMSCRIPT_KERNELPATH` to the path of your local `WolframKernel` executable.
## Usage
1. Run `$ wolframscript -f encodeAndTrain.wl`
   Type the input filename<sup>4</sup> (`*.mid`) when prompted.
   The trained net and decoding parameters are saved in `data/`.
2. Run `$ wolframscript -f generateAndDecode.wl`
   The generated `*.mid` file is saved in `data/`.
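
The README doesn't describe how `generateAndDecode.wl` samples from the trained net; a hypothetical sampling loop for a next-step predictor like the one sketched above (all function names here are illustrative, not the script's actual API) could be:

```mathematica
(* Hypothetical sampling loop: run the net on the sequence so far, take
   the distribution for the next step, sample an index, append its unit
   vector. `net` is a next-step predictor as sketched above. *)
sampleNext[net_, seq_] := Module[{probs, n, k},
  probs = Last[net[seq]];               (* next-step distribution *)
  n = Length[probs];
  k = RandomChoice[probs -> Range[n]];  (* sample a note index *)
  Append[seq, UnitVector[n, k]]
];
generate[net_, seed_, steps_] := Nest[sampleNext[net, #] &, seed, steps];
```
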
## Discussion
In general, a MIDI file is not defined on a time grid; MIDI event times may be specified to machine precision.
The first script takes care of time quantization by snapping every MIDI event onto a time grid whose resolution equals the minimum distance between two consecutive events found in the input MIDI file.
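
As a rough sketch of that rule (assuming `times` is the list of MIDI event onsets, in whatever time unit the file uses):

```mathematica
(* Sketch of the quantization rule: the smallest nonzero gap between
   consecutive events sets the grid resolution, and every event time
   is rounded onto that grid. *)
quantize[times_List] := Module[{res},
  res = Min@DeleteCases[Differences@Sort@times, 0];
  res*Round[times/res]
]
```
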
The generated MIDI inherits this time quantization.

The dimension of the [unit vectors](http://reference.wolfram.com/language/ref/UnitVector.html) equals the number of distinct "notes" found in the input MIDI; e.g. the chromatic scale would be encoded with 12-dimensional unit vectors. Polyphony is encoded by vector addition of simultaneous events.
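
As an illustration (the pitch list and helper name are made up for the example):

```mathematica
(* Illustrative encoding: one unit vector per distinct pitch found in
   the input; simultaneous notes are summed into a single vector. *)
pitches = {60, 64, 67};  (* distinct MIDI note numbers found in the file *)
n = Length[pitches];
encodeStep[notes_List] :=
  Total[UnitVector[n, First@FirstPosition[pitches, #]] & /@ notes]

encodeStep[{60, 67}]  (* two simultaneous notes -> {1, 0, 1} *)
```
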
Similarly to [LSTMetallica](https://github.com/keunwoochoi/LSTMetallica), the encoded input MIDI is riffled with a "BAR" marker every 16 unit vectors for *segmentation of measures*. These "BAR" markers are deleted once the neural net output is decoded back to MIDI format.
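
In Wolfram Language terms, that riffling and its inverse could be as simple as (function names are illustrative):

```mathematica
(* Illustrative segmentation: make every 17th element a "BAR" marker,
   so 16 unit vectors sit between markers; strip the markers again
   after decoding. *)
addBars[seq_List] := Riffle[seq, "BAR", 17]
stripBars[seq_List] := DeleteCases[seq, "BAR"]
```
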
--------------------------------
<sup>1</sup> Gated Recurrent Unit
<sup>2</sup> Recurrent Neural Network
<sup>3</sup> Musical Instrument Digital Interface
<sup>4</sup> Full address or local address.