Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/cvondrick/videogan
Generating Videos with Scene Dynamics. NIPS 2016.
- Host: GitHub
- URL: https://github.com/cvondrick/videogan
- Owner: cvondrick
- Created: 2016-08-31T21:37:50.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2018-05-03T01:24:08.000Z (over 6 years ago)
- Last Synced: 2024-08-01T19:33:50.572Z (3 months ago)
- Topics: computer-vision, deep-learning, generative-adversarial-network, video
- Language: Lua
- Homepage: http://web.mit.edu/vondrick/tinyvideo/
- Size: 19.5 KB
- Stars: 711
- Watchers: 39
- Forks: 143
- Open Issues: 9
- Metadata Files:
  - Readme: README.md
README
Generating Videos with Scene Dynamics
=====================================

This repository contains an implementation of [Generating Videos with Scene Dynamics](http://carlvondrick.com/tinyvideo/) by Carl Vondrick, Hamed Pirsiavash, and Antonio Torralba, to appear at NIPS 2016. The model learns to generate tiny videos using adversarial networks.
Example Generations
-------------------
Below are some selected videos generated by our model. These videos are not real; they are hallucinated by a generative video model. While they are not photo-realistic, the motions are fairly reasonable for the scene category they are trained on.

*(Example videos on the project page cover four scene categories: Beach, Golf, Train Station, and Baby.)*
Training
--------

The code requires a Torch7 installation.
To train a generator for video, see main.lua. This file will construct the networks, start many threads to load data, and train the networks.
For the conditional version, see main_conditional.lua. This is similar to main.lua, except the input to the model is a static image.
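
For orientation, the adversarial update at the heart of training looks roughly like the DCGAN-style sketch below. The tiny linear networks, sizes, and learning rate are illustrative assumptions, not this repository's actual definitions; the real generator and discriminator use volumetric convolutions over 32-frame clips.

```lua
-- Minimal DCGAN-style update sketch (toy stand-in networks; assumptions
-- throughout). The real model outputs 3 x 32 x 64 x 64 video clips.
require 'nn'
require 'optim'

local noise_dim, vid_dim, batch = 100, 512, 8

local netG = nn.Sequential():add(nn.Linear(noise_dim, vid_dim)):add(nn.Tanh())
local netD = nn.Sequential():add(nn.Linear(vid_dim, 1)):add(nn.Sigmoid())
local crit = nn.BCECriterion()

local pD, gD = netD:getParameters()
local pG, gG = netG:getParameters()
local ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)
local real = torch.rand(batch, vid_dim)  -- placeholder for a real data batch

-- Discriminator step: real clips labeled 1, generated clips labeled 0.
local function fD()
  gD:zero()
  local errReal = crit:forward(netD:forward(real), ones)
  netD:backward(real, crit:backward(netD.output, ones))
  local fake = netG:forward(torch.randn(batch, noise_dim))
  local errFake = crit:forward(netD:forward(fake), zeros)
  netD:backward(fake, crit:backward(netD.output, zeros))
  return errReal + errFake, gD
end

-- Generator step: push the discriminator's score on fakes toward 1.
local function fG()
  gG:zero()
  local z = torch.randn(batch, noise_dim)
  local fake = netG:forward(z)
  local err = crit:forward(netD:forward(fake), ones)
  local dFake = netD:updateGradInput(fake, crit:backward(netD.output, ones))
  netG:backward(z, dFake)
  return err, gG
end

optim.adam(fD, pD, {learningRate = 2e-4})
optim.adam(fG, pG, {learningRate = 2e-4})
```

The real script wraps this pattern with the multi-threaded data loading described above.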
To generate videos, see generate.lua. This file will also output intermediate layers,
such as the mask and background image, which you can inspect manually.
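
Concretely, sampling from a saved generator might look like the sketch below. The checkpoint file name is a placeholder; the 100-dimensional noise vector follows the paper, but the exact input shape depends on the saved network's first layer.

```lua
require 'nn'
-- If the checkpoint was saved from a GPU run, you may also need:
-- require 'cunn'

local netG = torch.load('generator.t7')  -- placeholder name for an unzipped model file
netG:evaluate()

local z = torch.randn(8, 100)   -- 8 samples of 100-d noise (shape may need
                                -- adjusting to match the network's first layer)
local videos = netG:forward(z)  -- expect roughly 8 x 3 x 32 x 64 x 64
print(videos:size())
```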

Data
----
The data loading is designed assuming videos have been stabilized and flattened
into JPEG images. We do this for efficiency. Stabilization is computationally slow and
must be done offline, and reading one file per video is more efficient on NFS.

For our stabilization code, see the `extra` directory.
Essentially, this will convert each video into an image of vertically
concatenated frames. After doing this, you create a text file listing
all the frames, which you pass into the data loader.
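
To make the on-disk format concrete, here is a sketch of slicing one flattened JPEG back into a 3 x T x H x W video tensor. The 64x64 frame size and 32-frame cap are assumptions taken from the paper, not read from the repository's loader.

```lua
require 'image'

local frame_size, max_frames = 64, 32  -- assumptions taken from the paper

-- Slice a vertically concatenated JPEG back into a 3 x T x H x W tensor.
local function load_flattened(path)
  local flat = image.load(path, 3, 'float')  -- 3 x (T*frame_size) x frame_size
  local T = math.min(max_frames, math.floor(flat:size(2) / frame_size))
  local video = torch.FloatTensor(3, T, frame_size, frame_size)
  for t = 1, T do
    video[{{}, t}] = flat:narrow(2, (t - 1) * frame_size + 1, frame_size)
  end
  return video
end

-- 'path' would be one line from the text file listing all flattened videos.
local vid = load_flattened('stabilized/beach/00001.jpg')  -- hypothetical path
```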

Models
------
You can download our pre-trained models [here](https://drive.google.com/file/d/0B-xMJ5CYz_F9QS1BTE5yWl9aUWs/view?usp=sharing) (1 GB ZIP file).

Notes
-----
The code is based on [DCGAN](https://github.com/soumith/dcgan.torch) and our [starter code](https://github.com/cvondrick/torch-starter) in [Torch7](https://github.com/torch/torch7).

If you find this useful for your research, please consider citing our NIPS paper.

License
-------
MIT