https://github.com/bernhard2202/improved-video-gan
GitHub repository for "Improving Video Generation for Multi-functional Applications"
- Host: GitHub
- URL: https://github.com/bernhard2202/improved-video-gan
- Owner: bernhard2202
- Created: 2017-09-13T07:24:02.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2019-03-08T12:33:28.000Z (over 6 years ago)
- Last Synced: 2025-04-09T15:08:33.983Z (7 months ago)
- Topics: gan, video-generation
- Language: Python
- Homepage:
- Size: 1.11 MB
- Stars: 329
- Watchers: 9
- Forks: 33
- Open Issues: 4
Metadata Files:
- Readme: README.md
README
Improving Video Generation for Multi-functional Applications
==================================================================

GitHub repository for "Improving Video Generation for Multi-functional Applications"
[Paper Link](https://arxiv.org/abs/1711.11453)
For more information please refer to [our homepage](https://bernhard2202.github.io/ivgan/index.html).
Requirements
------------
* Tensorflow 1.2.1
* Python 2.7
* ffmpeg

Data Format
-----------
Videos are stored as JPEGs of vertically stacked frames. Every frame needs to be at least 64x64 pixels; videos contain between 16 and 32 frames.
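The stacked-frame layout described above can be sketched with NumPy. Note that the helper `stack_frames` below is illustrative only and is not part of this repository:

```python
import numpy as np

def stack_frames(frames):
    """Stack a clip's frames vertically into one tall image array.

    frames: list of (H, W, 3) uint8 arrays
    returns: one (len(frames) * H, W, 3) uint8 array, ready to save as a JPEG
    """
    assert 16 <= len(frames) <= 32, "videos contain between 16 and 32 frames"
    h, w, _ = frames[0].shape
    assert h >= 64 and w >= 64, "every frame must be at least 64x64 pixels"
    # Concatenate along the height axis: frame i occupies rows [i*h, (i+1)*h)
    return np.concatenate(frames, axis=0)

# Example: a 16-frame clip of 64x64 RGB frames
clip = [np.zeros((64, 64, 3), dtype=np.uint8) for _ in range(16)]
print(stack_frames(clip).shape)  # -> (1024, 64, 3)
```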
For an example dataset see: http://carlvondrick.com/tinyvideo/#data

Training
--------

python main_train.py
Important Parameters:
* mode: one of 'generate', 'predict', 'bw2rgb', 'inpaint', depending on whether you want to generate videos, predict future frames, colorize videos, or do inpainting.
* batch_size: 64 is recommended; for colorization use 32 to avoid running out of memory.
* root_dir: root directory of the dataset.
* index_file: a file listing all training clips, with paths relative to root_dir; the file itself must be located in root_dir.
* experiment_name: name of the experiment.
* output_every: output the loss to stdout and write a TensorBoard summary every xx steps.
* sample_every: generate a visual sample every xx steps.
* save_model_every: save the model every xx steps.
* recover_model: if true, recover the model and continue training.
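Putting the flags above together, a training run might look like the following sketch (the dataset path, index file, and experiment name are placeholders; check main_train.py for the exact flag spellings and defaults):

```
python main_train.py \
  --mode generate \
  --batch_size 64 \
  --root_dir /path/to/dataset \
  --index_file index.txt \
  --experiment_name my_experiment \
  --output_every 100 \
  --sample_every 1000 \
  --save_model_every 5000
```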