Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Deep Video Frame Interpolation using Cyclic Frame Generation
https://github.com/alex04072000/cyclicgen
- Host: GitHub
- URL: https://github.com/alex04072000/cyclicgen
- Owner: alex04072000
- Created: 2018-11-05T14:05:40.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2019-06-18T15:32:58.000Z (over 5 years ago)
- Last Synced: 2024-10-10T11:01:06.821Z (about 1 month ago)
- Topics: aaai, cycle-consistency-loss, deep-learning, dvf, frame, interpolation, motion, motion-linearity-loss, tensorflow, video, video-frame-interpolation
- Language: Python
- Size: 492 KB
- Stars: 158
- Watchers: 7
- Forks: 25
- Open Issues: 7
- Metadata Files:
- Readme: README.md
README
# Deep Video Frame Interpolation using Cyclic Frame Generation
Video frame interpolation algorithms predict intermediate frames to produce videos with higher frame rates and smooth view transitions, given two consecutive frames as inputs. We propose that synthesized frames are more reliable if they can be used to reconstruct the input frames with high quality. Based on this idea, we introduce a new loss term, the cycle consistency loss, which makes better use of the training data: it not only enhances the interpolation results but also better maintains performance when less training data is available. It can be integrated into any frame interpolation network and trained in an end-to-end manner. In addition to the cycle consistency loss, we propose two extensions: a motion linearity loss and edge-guided training. The motion linearity loss assumes the motion between the two input frames is approximately linear and regularizes the training accordingly. Edge-guided training further improves results by integrating edge information into training. Both qualitative and quantitative experiments demonstrate that our model outperforms state-of-the-art methods. [[Project]](https://www.cmlab.csie.ntu.edu.tw/~yulunliu/CyclicGen)
Paper
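As a minimal sketch of how these two loss terms fit together (assuming a generic interpolation network; `interp` below is a plain-average stand-in, not the actual ```Voxel_flow_model```, and the motion linearity form is one plausible reading of the paper, not the authors' code):

``` python
import numpy as np

def interp(frame_a, frame_b):
    # Stand-in for the interpolation network G(a, b) -> midpoint frame.
    # Plain averaging here, only to make the loss structure concrete.
    return 0.5 * (frame_a + frame_b)

def l1(a, b):
    return np.abs(a - b).mean()

def cycle_consistency_loss(f0, f1, f2):
    # From three consecutive frames, synthesize the two midpoint frames,
    # then interpolate between those midpoints; a reliable interpolator
    # should reconstruct the middle input frame f1.
    mid_01 = interp(f0, f1)          # ~ frame at t = 0.5
    mid_12 = interp(f1, f2)          # ~ frame at t = 1.5
    cycle = interp(mid_01, mid_12)   # ~ frame at t = 1.0 again
    return l1(cycle, f1)

def motion_linearity_loss(flow_02, flow_01):
    # Hedged reading: if motion is approximately linear, the flow across
    # two time steps should be about twice the one-step flow.
    return l1(flow_02, 2.0 * flow_01)

# Toy example with random (H, W, C) frames in [0, 1].
f0, f1, f2 = (np.random.rand(64, 64, 3) for _ in range(3))
print("cycle loss:", cycle_consistency_loss(f0, f1, f2))
```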
## Overview
This is the authors' reference implementation, in TensorFlow, of the video frame interpolation method described in:
"Deep Video Frame Interpolation using Cyclic Frame Generation"
[Yu-Lun Liu](http://www.cmlab.csie.ntu.edu.tw/~yulunliu/), [Yi-Tung Liao](http://www.cmlab.csie.ntu.edu.tw/~queenieliaw/), [Yen-Yu Lin](https://www.citi.sinica.edu.tw/pages/yylin/), [Yung-Yu Chuang](https://www.csie.ntu.edu.tw/~cyy/) (Academia Sinica & National Taiwan University & MediaTek)
in the 33rd AAAI Conference on Artificial Intelligence (AAAI) 2019, Oral Presentation.
Should you be making use of our work, please cite our paper [1]. Some code is forked from [Deep Voxel Flow (DVF)](https://github.com/liuziwei7/voxel-flow) [2] and [Holistically-Nested Edge Detection (HED)](https://github.com/moabitcoin/holy-edge) [3]. For further information, please contact [Yu-Lun Liu](http://www.cmlab.csie.ntu.edu.tw/~yulunliu/).
## Requirements setup
* [TensorFlow](https://www.tensorflow.org/)
* To download the pre-trained CyclicGen models and the HED model:
  * [ckpt_and_hed_model](https://drive.google.com/open?id=1X7PWDY2nAx8ZeSLso5qeypRUCDokNFms)
## Data Preparation
* [Deep Voxel Flow (DVF)](https://github.com/liuziwei7/voxel-flow)

## Usage
* Run the training script for stage 1:
``` bash
python3 CyclicGen_train_stage1.py --subset=train
```
* Run the training script for stage 2:
``` bash
python3 CyclicGen_train_stage2.py --subset=train --pretrained_model_checkpoint_path=./ckpt/CyclicGen/model
```
* Run the testing and evaluation script:
``` bash
python3 CyclicGen_train_stage1.py --pretrained_model_checkpoint_path=./ckpt/CyclicGen/model --subset=test --batch_size=1
```
* Run your own pair of frames:
``` bash
python3 run.py --pretrained_model_checkpoint_path=./ckpt/CyclicGen/model --first=./first.png --second=./second.png --out=./out.png
```
Note that we provide two baseline models: 1) the original DVF (```CyclicGen_model.py```), and 2) DVF with more layers to increase the receptive field (```CyclicGen_model_large.py```). When testing on the UCF-101 dataset, we use the ```CyclicGen_model.py``` network. The motions in the Middlebury dataset are much larger than in UCF-101, and some even exceed the receptive field of the DVF network, so we use ```CyclicGen_model_large.py``` for fine-tuning and testing. You can switch between the two models by changing the line ```from CyclicGen_model import Voxel_flow_model```
to
```from CyclicGen_model_large import Voxel_flow_model```.
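If you would rather not edit the import by hand, a small helper along these lines (hypothetical, not part of this repo) selects the backbone with a flag:

``` python
def load_voxel_flow_model(large=False):
    # Hypothetical convenience wrapper: pick the backbone class without
    # editing the import line in each script.
    if large:
        from CyclicGen_model_large import Voxel_flow_model
    else:
        from CyclicGen_model import Voxel_flow_model
    return Voxel_flow_model
```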
## [Video](https://www.youtube.com/watch?v=R8vQjgAtPOE)
## Citation
```
[1] @inproceedings{liu2019cyclicgen,
author = {Yu-Lun Liu and Yi-Tung Liao and Yen-Yu Lin and Yung-Yu Chuang},
title = {Deep Video Frame Interpolation using Cyclic Frame Generation},
booktitle = {Proceedings of the 33rd AAAI Conference on Artificial Intelligence (AAAI)},
year = {2019}
}
```
```
[2] @inproceedings{liu2017voxelflow,
author = {Ziwei Liu and Raymond Yeh and Xiaoou Tang and Yiming Liu and Aseem Agarwala},
title = {Video Frame Synthesis using Deep Voxel Flow},
booktitle = {Proceedings of International Conference on Computer Vision (ICCV)},
month = {October},
year = {2017}
}
```
```
[3] @inproceedings{xie15hed,
author = {Saining Xie and Zhuowen Tu},
title = {Holistically-Nested Edge Detection},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
year = {2015}
}
```

## Acknowledgment
This work was supported in part by the Ministry of Science and Technology (MOST) under grants MOST 107-2628-E-001-005-MY3 and MOST 107-2221-E-002-147-MY3, and by the MOST Joint Research Center for AI Technology and All Vista Healthcare under grant 107-2634-F-002-007.