Sequential Learning for Dance generation
- Host: GitHub
- URL: https://github.com/audiofhrozen/motion_dance
- Owner: audiofhrozen
- Created: 2018-11-14T10:13:02.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2021-01-13T11:50:40.000Z (about 4 years ago)
- Last Synced: 2023-06-02T13:34:01.569Z (over 1 year ago)
- Topics: chainer, dance-generation, deep-learning, motion-dance, sequential-learning
- Language: Python
- Homepage: https://audiofhrozen.github.io/motion_dance/
- Size: 65.8 MB
- Stars: 20
- Watchers: 3
- Forks: 4
- Open Issues: 3
Metadata Files:
- Readme: README.md
README
# Sequential Learning for Dance generation
[![Build Status](https://travis-ci.com/Fhrozen/motion_dance.svg?branch=master)](https://travis-ci.com/Fhrozen/motion_dance)
Generating dance using deep learning techniques.
The proposed model is shown in the following image:
![Proposed Model](images/seq2seq_mc.png?raw=true "Title")
The joints of the skeleton employed in the experiment are shown in the following image:
![Skeleton](images/skeleton.png?raw=true "Title")
### Use of GPU
If you use a GPU in your experiment, set the `--gpu` option of `run.sh` appropriately, e.g.:
```sh
$ ./run.sh --gpu 0
```
The default setup uses GPU 0 (`--gpu 0`). For CPU execution, set `--gpu` to `-1` (i.e., `./run.sh --gpu -1`).

## Execution
The main routine is executed by:
```sh
$ ./run.sh --net $net --exp $exp --sequence $sequence --epoch $epochs --stage $stage
```
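For concreteness, here is a hypothetical invocation. The values below are illustrative placeholders, not settings taken from the repository; substitute the networks, experiment names, and stages your checkout actually defines:

```sh
# Illustrative values only; the valid choices for each option are defined by run.sh itself.
$ ./run.sh --net seq2seq --exp motion --sequence 150 --epoch 50 --stage 0
```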
Different types of datasets can be trained by changing `$exp`. To run inside a Docker container, use `run_in_docker.sh` instead of `run.sh`.
## Unreal Engine 4 Visualization
For demonstration from evaluation files, or for testing training files, use `local/ue4_send_osc.py`.
For real-time emulation, execute `run_realtime.sh`.
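A minimal sketch of the two workflows, assuming both scripts can be launched without arguments (an assumption; check each script for required options):

```sh
# Stream motion data to Unreal Engine 4 over OSC (assumed invocation; see the script for options):
$ python local/ue4_send_osc.py

# Emulate real-time generation:
$ ./run_realtime.sh
```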
## Requirements

For training and evaluation, the following Python libraries are required:
- [chainer>=3.1.0](https://github.com/chainer/chainer)
- [chainerui](https://github.com/chainer/chainerui)
- [cupy>=2.1.0](https://github.com/cupy/cupy)
- [madmom](https://github.com/CPJKU/madmom/)
- [Beat Tracking Evaluation toolbox](https://github.com/Fhrozen/Beat-Tracking-Evaluation-Toolbox). The original code is found [here](https://github.com/adamstark/Beat-Tracking-Evaluation-Toolbox)
- [mir_eval](https://github.com/craffel/mir_eval)
- [transforms3d](https://github.com/matthew-brett/transforms3d)
- h5py, numpy, soundfile, scipy, scikit-learn, pandas

Install the following system package to convert the audio files:
```sh
$ sudo apt-get install libsox-fmt-mp3
```

Additionally, you may require [Marsyas](http://marsyas.info/doc/manual/marsyas-user/Building-latest-Marsyas-on-Debian_002fUbuntu.html#Building-latest-Marsyas-on-Debian_002fUbuntu) to extract the beat reference information.
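Most of the Python dependencies above are published on PyPI, so they can typically be installed in one step; a sketch (version pins follow the list above), with the Beat Tracking Evaluation toolbox fetched separately since it is not on PyPI:

```sh
# PyPI packages from the list above; pick a cupy build matching your CUDA version.
$ pip install "chainer>=3.1.0" chainerui "cupy>=2.1.0" madmom mir_eval \
    transforms3d h5py numpy soundfile scipy scikit-learn pandas

# The Beat Tracking Evaluation toolbox comes from GitHub (add it to your PYTHONPATH).
$ git clone https://github.com/Fhrozen/Beat-Tracking-Evaluation-Toolbox.git
```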
For real-time emulation:
- pyOSC (for python v2)
- python-osc (for python v3)
- vlc (optional)

## ToDo
- New dataset
- Detailed audio information
- Virtual environment release

## Acknowledgement
- Thanks to Johnson Lai for the comments.

## References
[1] Nelson Yalta, Shinji Watanabe, Kazuhiro Nakadai, Tetsuya Ogata, "Weakly Supervised Deep Recurrent Neural Networks for Basic Dance Step Generation", [arXiv](https://arxiv.org/abs/1807.01126)
[2] Nelson Yalta, Kazuhiro Nakadai, Tetsuya Ogata, "Sequential Deep Learning for Dancing Motion Generation", [SIG-Challenge 2016](http://www.osaka-kyoiku.ac.jp/~challeng/SIG-Challenge-046/SIG-Challenge-046-08.pdf)