Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/LavieC/VideoImagination
- Host: GitHub
- URL: https://github.com/LavieC/VideoImagination
- Owner: LavieC
- Created: 2017-04-20T16:00:22.000Z (almost 8 years ago)
- Default Branch: master
- Last Pushed: 2018-02-25T14:18:51.000Z (almost 7 years ago)
- Last Synced: 2024-08-01T19:51:32.512Z (7 months ago)
- Language: Python
- Size: 4.81 MB
- Stars: 100
- Watchers: 2
- Forks: 19
- Open Issues: 5
Metadata Files:
- Readme: README.md
# Video Imagination from a Single Image with Transformation Generation
This repository contains an implementation of [Video Imagination from a Single Image with Transformation Generation](https://dl.acm.org/citation.cfm?id=3126737). The framework can synthesize multiple imaginary videos from a single image.

## Imaginary Video Example
The imaginary videos below were picked at random from those synthesized by our framework. The input is a single image from the UCF101 dataset, and each output imaginary video contains five frames.
The following GIF demonstrates a synthesized imaginary video. It may take a moment to load; please wait a while for the demonstration.
> Imaginary Video
>
> > Input image

## Data
The framework can be trained on three datasets: moving MNIST, 2D shape, and UCF101. No pre-processing is needed except normalizing images to the range [0, 1].
The videos (or image tuples) need to be converted to tfrecords first.
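As a concrete illustration of the only pre-processing step, the normalization to [0, 1] can be sketched as below. This is a minimal example, not code from this repository; the helper name and the clip shape (five 64x64 RGB frames) are assumptions.

```python
import numpy as np

def normalize_frames(frames):
    """Scale uint8 image frames to float32 values in [0, 1].

    `frames` is assumed to be a uint8 array of shape
    (num_frames, height, width, channels). This helper is an
    illustration, not part of the original repository.
    """
    return frames.astype(np.float32) / 255.0

# Example: a fake five-frame clip of 64x64 RGB images.
clip = np.random.randint(0, 256, size=(5, 64, 64, 3), dtype=np.uint8)
normalized = normalize_frames(clip)
```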
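The conversion to tfrecords can be sketched as follows, assuming each training example is one clip serialized as raw bytes. The feature name and serialization scheme are assumptions, not the repository's actual format; note also that under TensorFlow r1.0 the writer class was `tf.python_io.TFRecordWriter`, while the `tf.io.TFRecordWriter` name below is from later releases.

```python
import os
import tempfile

import numpy as np
import tensorflow as tf

def _bytes_feature(value):
    """Wrap raw bytes in a tf.train.Feature."""
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def write_clip(writer, frames):
    """Serialize one clip of shape (num_frames, H, W, C) as a single Example.

    The feature key "frames" is hypothetical; the real converter in this
    repository may use different keys and store shape metadata as well.
    """
    example = tf.train.Example(features=tf.train.Features(feature={
        "frames": _bytes_feature(frames.tobytes()),
    }))
    writer.write(example.SerializeToString())

# Write one fake clip to a temporary tfrecords file.
path = os.path.join(tempfile.mkdtemp(), "clips.tfrecords")
clip = np.random.randint(0, 256, size=(5, 64, 64, 3), dtype=np.uint8)
with tf.io.TFRecordWriter(path) as writer:
    write_clip(writer, clip)
```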
## Training
The code requires a TensorFlow r1.0 installation. To train the framework, prepare the tfrecords and then run main.py. This file builds the model and graph and trains the networks.
## Notes
The code is modified from [A Tensorflow Implementation of DCGAN](https://github.com/bamos/dcgan-completion.tensorflow). The on-the-fly 2D shape dataset generation code is modified from [the author of the dataset](https://github.com/tensorflow/models/tree/master/next_frame_prediction).