Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/GV1028/videogan
Implementation of "Generating Videos with Scene Dynamics" in Tensorflow
- Host: GitHub
- URL: https://github.com/GV1028/videogan
- Owner: GV1028
- Created: 2018-03-02T00:59:51.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2018-03-13T05:04:20.000Z (almost 7 years ago)
- Last Synced: 2024-08-08T23:23:10.571Z (5 months ago)
- Topics: generative-adversarial-network, tensorflow, video, video-generation, video-representation-learning
- Language: Python
- Size: 731 KB
- Stars: 76
- Watchers: 3
- Forks: 20
- Open Issues: 8
- Metadata Files:
  - Readme: README.md
README
# Generating Videos with Scene Dynamics
## Introduction
This repository contains an implementation of "Generating Videos with Scene Dynamics" in TensorFlow. The paper can be found [here](http://carlvondrick.com/tinyvideo/paper.pdf). The model learns to generate a video by upsampling from a latent space, using adversarial training.
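The adversarial objective can be sketched in plain NumPy: the generator maps latent vectors to samples, and two binary cross-entropy losses pull the discriminator and generator in opposite directions. This is a minimal illustration of the losses only, not the repository's actual code; all names here are hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def discriminator_loss(real_logits, fake_logits):
    # The discriminator wants real samples scored as 1 and fakes as 0.
    real_p = sigmoid(real_logits)
    fake_p = sigmoid(fake_logits)
    return -np.mean(np.log(real_p + 1e-8)) - np.mean(np.log(1.0 - fake_p + 1e-8))

def generator_loss(fake_logits):
    # The generator wants the discriminator to score its samples as real.
    fake_p = sigmoid(fake_logits)
    return -np.mean(np.log(fake_p + 1e-8))

# When the discriminator is fooled (high logit on a fake), the
# generator loss is small; when it is not fooled, the loss is large.
print(generator_loss(np.array([5.0])), generator_loss(np.array([-5.0])))
```

In the actual model these losses are minimized alternately with gradient descent on the discriminator and generator parameters.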
To run this code and reproduce the results, you need the following packages. Python 2.7 was used.

Packages:
* TensorFlow
* NumPy
* cv2
* scikit-video
* scikit-image

## VideoGAN - Architecture and Working
Attached below is the architecture used in the [paper](http://carlvondrick.com/tinyvideo/paper.pdf).
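In the paper's two-stream generator, a 3D-deconvolution stream produces a moving foreground and a per-pixel mask, while a 2D stream produces a static background image replicated across time; the output video blends the two streams through the mask. Below is a minimal NumPy sketch of that combination step only (shapes and names are illustrative, not the repository's tensors):

```python
import numpy as np

def combine_streams(foreground, mask, background):
    """Two-stream combination from the paper:
    video = mask * foreground + (1 - mask) * background

    foreground: (T, H, W, C) moving foreground stream
    mask:       (T, H, W, 1) per-pixel, per-frame weights in [0, 1]
    background: (H, W, C) static image, replicated across time
    """
    background = np.broadcast_to(background, foreground.shape)
    return mask * foreground + (1.0 - mask) * background

T, H, W, C = 32, 64, 64, 3
fg = np.random.rand(T, H, W, C)
m = np.random.rand(T, H, W, 1)
bg = np.random.rand(H, W, C)
video = combine_streams(fg, m, bg)
print(video.shape)  # (32, 64, 64, 3)
```

Because the background has no time axis, every frame shares the same static backdrop, which is what lets the model separate scene from motion.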
![Video_GAN](images/videogan.png)

## Usage
Place the videos inside a folder called "trainvideos".
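A hypothetical invocation might look like the following (the flag names here are assumptions for illustration only; the actual flags are defined in `main.py`):

```shell
# Train on videos placed in ./trainvideos (flag names are illustrative)
python main.py --batch_size 32 --epochs 50 --zdim 100
```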
Run `main.py` with the required values for each flag variable.

## Results
Below are some results from the model trained on the MPII Cooking Activities dataset.

Real videos:

Generated videos:
## Acknowledgements
* [Generating Videos With Scene Dynamics](http://carlvondrick.com/tinyvideo/paper.pdf) - Carl Vondrick et al.