Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
DanceNet - Dance generator using Autoencoder, LSTM and Mixture Density Network. (Keras)
https://github.com/jsn5/dancenet
autoencoder computer-vision generative-model keras lstm mixture-density-networks
Last synced: 2 months ago
- Host: GitHub
- URL: https://github.com/jsn5/dancenet
- Owner: jsn5
- License: mit
- Created: 2018-08-06T06:25:22.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2019-09-15T18:09:41.000Z (over 5 years ago)
- Last Synced: 2024-08-08T23:22:53.573Z (6 months ago)
- Topics: autoencoder, computer-vision, generative-model, keras, lstm, mixture-density-networks
- Language: Python
- Homepage:
- Size: 1.67 MB
- Stars: 513
- Watchers: 30
- Forks: 82
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# DanceNet - Dance generator using Variational Autoencoder, LSTM and Mixture Density Network. (Keras)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://github.com/jsn5/dancenet/blob/master/LICENSE) [![Run on FloydHub](https://static.floydhub.com/button/button-small.svg)](https://floydhub.com/run)
[![DOI](https://zenodo.org/badge/143685321.svg)](https://zenodo.org/badge/latestdoi/143685321)

![](https://github.com/jsn5/dancenet/blob/master/demo.gif) ![](https://github.com/jsn5/dancenet/blob/master/demo2.gif)
## Main components:
* Variational autoencoder
* LSTM + Mixture Density Layer

## Requirements:
* Python version = 3.5.2
### Packages
* keras==2.2.0
* sklearn==0.19.1
* numpy==1.14.3
* opencv-python==3.4.1

## Dataset
https://www.youtube.com/watch?v=NdSqAAT28v0
This is the video used for training.

## How to run locally
* Download the trained weights from [here](https://drive.google.com/file/d/1LWtERyPAzYeZjL816gBoLyQdC2MDK961/view?usp=sharing) and extract them into the dancenet directory.
* Run dancegen.ipynb

## How to run in your browser
[![Run on FloydHub](https://static.floydhub.com/button/button-small.svg)](https://floydhub.com/run)
* Click the button above to open this code in a FloydHub workspace (the [trained weights dataset](https://www.floydhub.com/whatrocks/datasets/dancenet-weights) will be automatically attached to the environment)
* Run dancegen.ipynb

## Training from scratch
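The steps below assume `imgs/` holds frames numbered consecutively as `1.jpg`, `2.jpg`, and so on. This small helper (hypothetical, not part of the repo) collects the frames in order and stops at the first gap, which is a quick sanity check to run before training:

```python
import os

def ordered_frames(img_dir):
    """Collect 1.jpg, 2.jpg, ... from img_dir in order, stopping
    at the first missing index (the numbering the scripts expect)."""
    frames = []
    i = 1
    while True:
        name = "{}.jpg".format(i)
        if not os.path.exists(os.path.join(img_dir, name)):
            break
        frames.append(name)
        i += 1
    return frames
```

The frames themselves can be dumped from a video with OpenCV, e.g. reading via `cv2.VideoCapture` and writing each frame with `cv2.imwrite`.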
* Place dance sequence frames labeled `1.jpg`, `2.jpg`, ... in the `imgs/` folder
* Run `model.py`
* Run `gen_lv.py` to encode the images
* Run `video_from_lv.py` to test the decoded video
* Run the Jupyter notebook `dancegen.ipynb` to train DanceNet and generate new video.

## References
* [Does my AI have better dance moves than me?](https://www.youtube.com/watch?v=Sc7RiNgHHaE&t=9s) by Cary Huang
* [Generative Choreography using Deep Learning (Chor-RNN)](https://arxiv.org/abs/1605.06921)
* [Building autoencoders in keras](https://blog.keras.io/building-autoencoders-in-keras.html) by [Francois Chollet](https://twitter.com/fchollet)
* [Time Series Prediction with LSTM Recurrent Neural Networks in Python with Keras](https://machinelearningmastery.com/time-series-prediction-lstm-recurrent-neural-networks-python-keras/)
* [Mixture Density Networks](http://blog.otoro.net/2015/06/14/mixture-density-networks/) by [David Ha](https://twitter.com/hardmaru)
* [Mixture Density Layer for Keras](https://github.com/cpmpercussion/keras-mdn-layer) by [Charles Martin](https://github.com/cpmpercussion/)
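For intuition on the mixture-density idea referenced above: at generation time the network emits mixture weights, means, and standard deviations, and the next value is drawn from the resulting Gaussian mixture. A minimal one-dimensional sketch (illustrative only, not the repo's implementation):

```python
import random

def sample_mixture(weights, means, sigmas, rng=random):
    """Draw one sample from a 1-D Gaussian mixture: pick a component
    with probability equal to its mixture weight, then sample from
    that component's Gaussian."""
    r = rng.random()
    acc = 0.0
    for w, mu, sigma in zip(weights, means, sigmas):
        acc += w
        if r <= acc:
            return rng.gauss(mu, sigma)
    # Guard against floating-point rounding in the weights.
    return rng.gauss(means[-1], sigmas[-1])
```

In DanceNet's setting the same idea is applied in the autoencoder's latent space, with the LSTM producing the mixture parameters at each timestep.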