{"id":26547151,"url":"https://github.com/cpmpercussion/empi","last_synced_at":"2025-10-26T12:14:25.031Z","repository":{"id":39736472,"uuid":"93524477","full_name":"cpmpercussion/empi","owner":"cpmpercussion","description":"An Embodied Musical Predictive Interface using Mixture Density Networks","archived":false,"fork":false,"pushed_at":"2024-03-05T12:33:27.000Z","size":20374,"stargazers_count":6,"open_issues_count":4,"forks_count":0,"subscribers_count":4,"default_branch":"master","last_synced_at":"2024-03-06T06:34:49.318Z","etag":null,"topics":["creativity","deep-learning","interaction","mixture-density-networks","music","nime"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/cpmpercussion.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2017-06-06T13:56:59.000Z","updated_at":"2024-03-06T06:34:49.319Z","dependencies_parsed_at":"2024-02-06T22:37:00.726Z","dependency_job_id":null,"html_url":"https://github.com/cpmpercussion/empi","commit_stats":null,"previous_names":[],"tags_count":3,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cpmpercussion%2Fempi","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cpmpercussion%2Fempi/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cpmpercussion%2Fempi/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cpmpercussion%2Fempi/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/cpmpercussion","downlo
ad_url":"https://codeload.github.com/cpmpercussion/empi/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":244912800,"owners_count":20530764,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["creativity","deep-learning","interaction","mixture-density-networks","music","nime"],"created_at":"2025-03-22T05:30:10.144Z","updated_at":"2025-10-26T12:14:24.944Z","avatar_url":"https://github.com/cpmpercussion.png","language":"Python","readme":"# EMPI: Embodied Musical Predictive Interface\n\n[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.3451730.svg)](https://doi.org/10.5281/zenodo.3451730)\n\nAn embedded musical instrument for studying musical prediction and embodied interaction.\n\n![](https://media.giphy.com/media/KFoOINQn0moVJB8uUe/giphy.gif)\n\n![Musical MDN Example](https://github.com/cpmpercussion/empi/raw/master/images/model_output/human32_model_output.png)\n\nIn this work, musical data is considered to consist of a time-series of continuous-valued events. We seek to model the values of the events as well as the time in between each one. 
That means that these networks model data of at least two dimensions (event value and time).\n\nMultiple implementations of a mixture density recurrent neural network are included for comparison.\n\n## MDN Interaction Interface\n\nPart of this work is concerned with using these networks in a Raspberry Pi-based musical interface.\n\n\u003c!-- ![Musical Interface](https://github.com/cpmpercussion/creative-mdns/raw/master/images/rnn-interface.jpg) --\u003e\n\n### Installing on Raspberry Pi\n\n### Assembly\n\n![](https://media.giphy.com/media/KeKzvZvpjpWcKgXFzR/giphy.gif)\n\nThe EMPI consists of a Raspberry Pi, audio amplifier and speaker, input lever, and output lever, in a 3D-printed enclosure. The assembly materials and plans are below.\n\n### Raspberry Pi\n\n- Raspberry Pi 3B+\n- Seeed Studios Grove Base Hat\n- Alternatively, an Arduino Pro Micro MIDI interface over USB.\n\n### Training\n\n1. Download or generate some human data, then run `train_human_empi_mdn.py` to train the human model.\n2. 
Use the `train_synthetic_mdn_data` notebook to generate synthetic data and train the synthetic model.\n\n### SSH Access\n\nYou'll need ssh access to install EMPI: `ssh pi@rp1802.local`, `ssh pi@epecpi.local`\n\nIt's helpful to follow the [headless install hints](https://www.raspberrypi.org/documentation/configuration/wireless/headless.md)\n\n### Raspbian Packages\n\n\tsudo apt-get install -y python3-numpy python3-pandas python3-pip puredata git\n\n### Python Packages\n\n- [Tensorflow 1.14.0](https://github.com/PINTO0309/Tensorflow-bin) - the install instructions are slightly unusual while there is no piwheels build for TF 1.14.0 on Python 3.7:\n\n\tsudo apt-get install -y libhdf5-dev libc-ares-dev libeigen3-dev\n\tsudo pip3 install keras_applications==1.0.7 --no-deps\n\tsudo pip3 install keras_preprocessing==1.0.9 --no-deps\n\tsudo pip3 install h5py==2.9.0\n\tsudo apt-get install -y openmpi-bin libopenmpi-dev\n\tsudo apt-get install -y libatlas-base-dev\n\tpip3 install -U --user six wheel mock\n\twget https://github.com/PINTO0309/Tensorflow-bin/raw/master/tensorflow-1.14.0-cp37-cp37m-linux_armv7l.whl\n\tsudo pip3 install tensorflow-1.14.0-cp37-cp37m-linux_armv7l.whl\n\n- Tensorflow Probability 0.7.0:\n\n\tpip3 install -U tensorflow-probability==0.7.0\n\n- Keras-MDN-Layer: `pip3 install -U keras-mdn-layer`\n- Python-OSC: `pip3 install -U python-osc`\n- Keras: `pip3 install -U keras`\n\n### Install the service\n\nEMPI's startup script uses a [systemd service](https://www.raspberrypi.org/documentation/linux/usage/systemd.md) to start automatically.\n\nTo install, type:\n\n\tsudo cp empistartup.service /etc/systemd/system/empistartup.service\n\tsudo systemctl enable empistartup\n\nTo start manually, type:\n\n\tsudo systemctl start empistartup\n\nTo stop manually (for studies etc.), run:\n\n\tsudo systemctl stop empistartup\n\nThe service file simply runs the script `empi_2_run.sh` with default arguments.\n\nTo follow the stdout from the service, run:\n\n\tsudo journalctl -f -u empistartup\n\n#### Start just 
Pd:\n\n\t./start_pd.sh\n\n\tpd -nogui synth/lever_synthesis.pd \u0026\n\n#### Start just the prediction server:\n\n\tpython3 predictive_music_model.py -d=2 --modelfile=\"models/musicMDRNN-dim2-layers2-units32-mixtures5-scale10-human.h5\" --modelsize xs --call --log --verbose \u0026\n\n#### Stop Pd and Python:\n\n\tpkill -u pi pd\n\tpkill -u pi python3\n\n#### Start prediction server on local system:\n\n\tpython3 predictive_music_model.py -d=2 --modelfile=\"models/musicMDRNN-dim2-layers2-units32-mixtures5-scale10-human.h5\" --modelsize xs --call --log --verbose --clientip=\"rp1802.local\"\n\n\tpython3 predictive_music_model.py -d=2 --modelfile=\"models/musicMDRNN-dim2-layers2-units32-mixtures5-scale10-synth.h5\" --modelsize xs --call --log --verbose --clientip=\"rp1802.local\"\n\n\tpython3 predictive_music_model.py -d=2 --modelfile=\"models/musicMDRNN-dim2-layers2-units32-mixtures5-scale10-noise.h5\" --modelsize xs --call --log --verbose --clientip=\"rp1802.local\" --serverip=\"voyager.local\"\n\n#### Start RNN run loop to send/receive from local system:\n\n\tpython3 empi_2_runloop.py --synthip=\"127.0.0.1\" --serverip=\"rp1802.local\" -v\n\n\tpython3 empi_2_runloop.py --synthip=\"127.0.0.1\" --serverip=\"rp1802.local\" --predictorip=\"voyager.local\" -v\n\n### Study Procedure:\n\n1. Connect to EMPI over ssh: `ssh pi@rp1802.local`\n2. Stop any existing EMPI process with: `sudo systemctl stop empistartup.service`\n3. Go to the EMPI directory: `cd empi`\n4. Choose the command for the required option from the list below.\n5. 
It takes 25s to load the system on the RPi 3B+.\n\n- Human, no servo: `./empi_2_run.sh --human --noservo`\n- Synth, no servo: `./empi_2_run.sh --synth --noservo`\n- Noise, no servo: `./empi_2_run.sh --noise --noservo`\n- Human, servo: `./empi_2_run.sh --human --servo`\n- Synth, servo: `./empi_2_run.sh --synth --servo`\n- Noise, servo: `./empi_2_run.sh --noise --servo`\n\nFinally, shut down before turning off the power: `sudo shutdown -h now`\n\n# Training the models\n\n## Generate Data\n\n1. Generate the synthetic datasets by running the Jupyter Notebook: `notebooks/generate_synthetic_data.ipynb`\n2. Generate the human dataset by running `empi_generate_human_dataset.py`\n3. Train the neural network models by running `python3 train_empi_mdn.py -p`, `python3 train_empi_mdn.py -s`, and `python3 train_empi_mdn.py -n`.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcpmpercussion%2Fempi","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcpmpercussion%2Fempi","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcpmpercussion%2Fempi/lists"}