https://github.com/liuzhen03/awesome-video-enhancement
Paper list for video enhancement, including video super-resolution, interpolation, denoising, deblurring and inpainting.
# Awesome Video Enhancement
Paper list for video enhancement, including video super-resolution, interpolation, denoising, deblurring and inpainting.
By Zhen Liu. If you have any suggestions, please email me ([email protected]).
## 1. Video Super Resolution
### CVPR 2024
* Kai Xu et al., **Enhancing Video Super-Resolution via Implicit Resampling-based Alignment**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Xu_Enhancing_Video_Super-Resolution_via_Implicit_Resampling-based_Alignment_CVPR_2024_paper.pdf) [[PyTorch]](https://github.com/kai422/IART)
* Zhikai Chen et al., **Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Chen_Learning_Spatial_Adaptation_and_Temporal_Coherence_in_Diffusion_Models_for_CVPR_2024_paper.pdf)
* Xingyu Zhou et al., **Video Super-Resolution Transformer with Masked Inter&Intra-Frame Attention**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_Video_Super-Resolution_Transformer_with_Masked_InterIntra-Frame_Attention_CVPR_2024_paper.pdf) [[PyTorch]](https://github.com/LabShuHangGU/MIA-VSR)
* Geunhyuk Youk et al., **FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Youk_FMA-Net_Flow-Guided_Dynamic_Filtering_and_Iterative_Feature_Refinement_with_Multi-Attention_CVPR_2024_paper.pdf) [[PyTorch]](https://github.com/KAIST-VICLab/FMA-Net)
* Shangchen Zhou et al., **Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Zhou_Upscale-A-Video_Temporal-Consistent_Diffusion_Model_for_Real-World_Video_Super-Resolution_CVPR_2024_paper.pdf)
### ECCV 2024
* Wei Shang et al., **Arbitrary-Scale Video Super-Resolution with Structural and Textural Priors**, [[pdf]](https://arxiv.org/abs/2407.09919), [[PyTorch]](https://github.com/shangwei5/ST-AVSR)
* Claudio Rota et al., **Enhancing Perceptual Quality in Video Super-Resolution through Temporally-Consistent Detail Synthesis using Diffusion Models**, [[pdf]](https://arxiv.org/abs/2311.15908), [[Soon]](https://github.com/claudiom4sir/StableVSR)
* Ruicheng Feng et al., **Kalman-Inspired Feature Propagation for Video Face Super-Resolution**, [[pdf]](https://arxiv.org/abs/2408.05205), [[PyTorch(test only)]](https://github.com/jnjaby/KEEP)
* Xi Yang et al., **Motion-Guided Latent Diffusion for Temporally Consistent Real-world Video Super-resolution**, [[pdf]](https://arxiv.org/abs/2312.00853), [[MMengine]](https://github.com/IanYeung/MGLD-VSR)
* Yuehan Zhang et al., **RealViformer: Investigating Attention for Real-World Video Super-Resolution**, [[pdf]](https://arxiv.org/abs/2407.13987), [[Soon]](https://github.com/Yuehan717/RealViformer)
* Yuan Shen et al., **SuperGaussian: Repurposing Video Models for 3D Super Resolution**, [[pdf]](https://arxiv.org/abs/2406.00609), [[Soon]](https://github.com/yshen47/SuperGaussian_ECCV24)
### CVPR 2023
* Gen Li et al., **Towards High-Quality and Efficient Video Super-Resolution via Spatial-Temporal Data Overfitting**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Li_Towards_High-Quality_and_Efficient_Video_Super-Resolution_via_Spatial-Temporal_Data_Overfitting_CVPR_2023_paper.pdf) [[PyTorch]](https://github.com/coulsonlee/STDO-CVPR2023)
* Yingwei Wang et al., **Compression-Aware Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Wang_Compression-Aware_Video_Super-Resolution_CVPR_2023_paper.pdf) [[PyTorch(test only)]](https://github.com/aprBlue/CAVSR/tree/master)
* Bin Xia et al., **Structured Sparsity Learning for Efficient Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Xia_Structured_Sparsity_Learning_for_Efficient_Video_Super-Resolution_CVPR_2023_paper.pdf) [[PyTorch]](https://github.com/Zj-BinXia/SSL)
* Yunfan Lu et al., **Learning Spatial-Temporal Implicit Neural Representations for Event-Guided Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Lu_Learning_Spatial-Temporal_Implicit_Neural_Representations_for_Event-Guided_Video_Super-Resolution_CVPR_2023_paper.pdf) [[PyTorch]](https://github.com/yunfanLu/INR-Event-VSR/tree/main)
### ICCV 2023
* Yi-Hsin Chen et al., **MoTIF: Learning Motion Trajectories with Local Implicit Neural Functions for Continuous Space-Time Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2023/papers/Chen_MoTIF_Learning_Motion_Trajectories_with_Local_Implicit_Neural_Functions_for_ICCV_2023_paper.pdf), [[PyTorch(test only)]](https://github.com/sichun233746/MoTIF)
* Zixi Tuo et al., **Learning Data-Driven Vector-Quantized Degradation Model for Animation Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2023/papers/Tuo_Learning_Data-Driven_Vector-Quantized_Degradation_Model_for_Animation_Video_Super-Resolution_ICCV_2023_paper.pdf), [[PyTorch]](https://github.com/researchmm/VQD-SR)
### CVPR 2022
* Jiyang Yu et al., **Memory-Augmented Non-Local Attention for Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Yu_Memory-Augmented_Non-Local_Attention_for_Video_Super-Resolution_CVPR_2022_paper.pdf) [[PyTorch]](https://github.com/jiy173/MANA)
* Zeyuan Chen et al., **VideoINR: Learning Video Implicit Neural Representation for Continuous Space-Time Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Chen_VideoINR_Learning_Video_Implicit_Neural_Representation_for_Continuous_Space-Time_Super-Resolution_CVPR_2022_paper.pdf) [[PyTorch]](https://github.com/Picsart-AI-Research/VideoINR-Continuous-Space-Time-Super-Resolution)
* Kelvin C.K. Chan et al., **BasicVSR++: Improving Video Super-Resolution With Enhanced Propagation and Alignment**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Chan_BasicVSR_Improving_Video_Super-Resolution_With_Enhanced_Propagation_and_Alignment_CVPR_2022_paper.pdf) [[MMengine]](https://github.com/ckkelvinchan/BasicVSR_PlusPlus/tree/master)
* Zhicheng Geng et al., **RSTT: Real-Time Spatial Temporal Transformer for Space-Time Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Geng_RSTT_Real-Time_Spatial_Temporal_Transformer_for_Space-Time_Video_Super-Resolution_CVPR_2022_paper.pdf) [[PyTorch]](https://github.com/llmpass/RSTT)
* Chengxu Liu et al., **Learning Trajectory-Aware Transformer for Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Liu_Learning_Trajectory-Aware_Transformer_for_Video_Super-Resolution_CVPR_2022_paper.pdf) [[MMengine]](https://github.com/researchmm/TTVSR)
* Junyong Lee et al., **Reference-Based Video Super-Resolution Using Multi-Camera Video Triplets**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Lee_Reference-Based_Video_Super-Resolution_Using_Multi-Camera_Video_Triplets_CVPR_2022_paper.pdf) [[PyTorch]](https://github.com/codeslake/RefVSR)
* Kelvin C.K. Chan et al., **Investigating Tradeoffs in Real-World Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Chan_Investigating_Tradeoffs_in_Real-World_Video_Super-Resolution_CVPR_2022_paper.pdf) [[MMengine]](https://github.com/ckkelvinchan/RealBasicVSR)
### ECCV 2022
* Zhongwei Qiu et al., **Learning Spatiotemporal Frequency-Transformer for Compressed Video Super-Resolution**, [[pdf]](https://arxiv.org/abs/2208.03012), [[PyTorch]](https://github.com/researchmm/FTVSR)
* Huanjing Yue et al., **Real-RawVSR: Real-World Raw Video Super-Resolution with a Benchmark Dataset**, [[pdf]](https://arxiv.org/abs/2209.12475), [[PyTorch]](https://github.com/zmzhang1998/Real-RawVSR)
* Jiezhang Cao et al., **Towards Interpretable Video Super-Resolution via Alternating Optimization**, [[pdf]](https://arxiv.org/abs/2207.10765), [[PyTorch]](https://github.com/caojiezhang/DAVSR)
### ICCV 2021
* Peng Yi et al., **Omniscient Video Super-Resolution**, [[pdf]](https://arxiv.org/abs/2103.15683) [[PyTorch]](https://github.com/psychopa4/OVSR).
* Yinxiao Li et al., **COMISR: Compression-Informed Video Super-Resolution**, [[pdf]](https://arxiv.org/abs/2105.01237) [[Tensorflow]](https://github.com/google-research/google-research/tree/master/comisr).
* Jinshan Pan et al., **Deep Blind Video Super-Resolution**, [[pdf]](https://arxiv.org/abs/2003.04716) [[PyTorch]](https://github.com/csbhr/Deep-Blind-VSR).
* Xi Yang et al., **Real-World Video Super-Resolution: A Benchmark Dataset and a Decomposition Based Learning Scheme**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2021/papers/Yang_Real-World_Video_Super-Resolution_A_Benchmark_Dataset_and_a_Decomposition_Based_ICCV_2021_paper.pdf) [[PyTorch]](https://github.com/IanYeung/RealVSR).
### CVPR 2021
* Kelvin C.K. Chan et al., **BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond**, [[pdf]](https://arxiv.org/abs/2012.02181) [[PyTorch]](https://github.com/ckkelvinchan/BasicVSR-IconVSR).
* Gang Xu et al., **Temporal Modulation Network for Controllable Space-Time Video Super-Resolution**, [[pdf]](https://arxiv.org/abs/2104.10642) [[PyTorch]](https://github.com/CS-GangXu/TMNet).
* Zebu Xiao et al., **Space-Time Distillation for Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2021/papers/Xiao_Space-Time_Distillation_for_Video_Super-Resolution_CVPR_2021_paper.pdf).
* Yongcheng Jing et al., **Turning Frequency to Resolution: Video Super-Resolution via Event Cameras**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2021/papers/Jing_Turning_Frequency_to_Resolution_Video_Super-Resolution_via_Event_Cameras_CVPR_2021_paper.pdf).
### ECCV 2020
* Takashi Isobe et al., **Video Super-Resolution with Recurrent Structure-Detail Network**, [[pdf]](https://arxiv.org/pdf/2008.00455).
* Wenbo Li et al., **MuCAN: Multi-Correspondence Aggregation Network for Video Super-Resolution**, [[pdf]](https://arxiv.org/pdf/2007.11803).
### CVPR 2020
* Xiaoyu Xiang et al., **Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Xiang_Zooming_Slow-Mo_Fast_and_Accurate_One-Stage_Space-Time_Video_Super-Resolution_CVPR_2020_paper.pdf) [[PyTorch]]().
* Takashi Isobe et al., **Video Super-Resolution With Temporal Group Attention**, [[pdf]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Isobe_Video_Super-Resolution_With_Temporal_Group_Attention_CVPR_2020_paper.pdf).
* Yapeng Tian et al., **TDAN: Temporally-Deformable Alignment Network for Video Super-Resolution**, [[pdf]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Tian_TDAN_Temporally-Deformable_Alignment_Network_for_Video_Super-Resolution_CVPR_2020_paper.pdf)
### CVPR 2019
* Muhammad Haris et al., **Recurrent Back-Projection Network for Video Super-Resolution**, [[pdf]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Haris_Recurrent_Back-Projection_Network_for_Video_Super-Resolution_CVPR_2019_paper.pdf) [[PyTorch]]().
* Sheng Li et al., **Fast Spatio-Temporal Residual Network for Video Super-Resolution**, [[pdf]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Fast_Spatio-Temporal_Residual_Network_for_Video_Super-Resolution_CVPR_2019_paper.pdf).
### CVPRW 2019
* Xintao Wang et al., **EDVR: Video Restoration with Enhanced Deformable Convolutional Networks**, [[pdf]]() [[PyTorch]]()
### ICCV 2019
* Peng Yi et al., **Progressive Fusion Video Super-Resolution Network via Exploiting Non-Local Spatio-Temporal Correlations**, [[pdf]](http://openaccess.thecvf.com/content_ICCV_2019/papers/Yi_Progressive_Fusion_Video_Super-Resolution_Network_via_Exploiting_Non-Local_Spatio-Temporal_Correlations_ICCV_2019_paper.pdf) [[Tensorflow]]().
* Haochen Zhang et al., **Two-Stream Action Recognition-Oriented Video Super-Resolution**, [[pdf]](http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhang_Two-Stream_Action_Recognition-Oriented_Video_Super-Resolution_ICCV_2019_paper.pdf) [[Tensorflow & PyTorch]]().
### CVPR 2018
* Younghyun Jo et al., **Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation**, [[pdf]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Jo_Deep_Video_Super-Resolution_CVPR_2018_paper.pdf) [[PyTorch (only test code)]]().
* Mehdi S. M. Sajjadi et al., **Frame-Recurrent Video Super-Resolution**, [[pdf]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Sajjadi_Frame-Recurrent_Video_Super-Resolution_CVPR_2018_paper.pdf).
### CVPR 2017
* Jose Caballero et al., **Real-Time Video Super-Resolution With Spatio-Temporal Networks and Motion Compensation**, [[pdf]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Caballero_Real-Time_Video_Super-Resolution_CVPR_2017_paper.pdf).
### ICCV 2017
* Ding Liu et al., **Robust Video Super-Resolution With Learned Temporal Dynamics**, [[pdf]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Liu_Robust_Video_Super-Resolution_ICCV_2017_paper.pdf).
* Xin Tao et al., **Detail-Revealing Deep Video Super-Resolution**, [[pdf]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Tao_Detail-Revealing_Deep_Video_ICCV_2017_paper.pdf).
### CVPR 2016
* Wenzhe Shi et al., **Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network**, [[pdf]](http://openaccess.thecvf.com/content_cvpr_2016/papers/Shi_Real-Time_Single_Image_CVPR_2016_paper.pdf).
### ICCV 2015
* Renjie Liao et al., **Video Super-Resolution via Deep Draft-Ensemble Learning**, [[pdf]](http://openaccess.thecvf.com/content_iccv_2015/papers/Liao_Video_Super-Resolution_via_ICCV_2015_paper.pdf).
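A building block shared by many of the networks above is the sub-pixel convolution (pixel shuffle) upsampling layer from ESPCN (Shi et al., CVPR 2016, listed above): convolutions run at low resolution, and the final layer merely rearranges channels into space. Below is a minimal NumPy sketch of that rearrangement; the function name is ours (PyTorch ships the same operation as `torch.nn.PixelShuffle`):

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange a (C*r*r, H, W) feature map into (C, H*r, W*r):
    each group of r*r channels becomes one r-by-r spatial block."""
    c2, h, w = x.shape
    assert c2 % (r * r) == 0, "channel count must be divisible by r^2"
    c = c2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    # interleave the two r-axes into the spatial axes
    x = x.transpose(0, 3, 1, 4, 2)  # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)
```

Because only this cheap rearrangement happens at the target scale, the upsampling itself adds essentially no compute, which is why the layer persists in later VSR backbones.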
## 2. Video Frame Interpolation
### CVPR 2024
* Guangyang Wu et al., **Perception-Oriented Video Frame Interpolation via Asymmetric Blending**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Perception-Oriented_Video_Frame_Interpolation_via_Asymmetric_Blending_CVPR_2024_paper.pdf) [[PyTorch]](https://github.com/mulns/PerVFI/tree/main)
* Chunxu Liu et al., **Sparse Global Matching for Video Frame Interpolation with Large Motion**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Liu_Sparse_Global_Matching_for_Video_Frame_Interpolation_with_Large_Motion_CVPR_2024_paper.pdf) [[PyTorch]](https://github.com/MCG-NJU/SGM-VFI)
### ECCV 2024
* Zhihang Zhong et al., **Clearer Frames, Anytime: Resolving Velocity Ambiguity in Video Frame Interpolation**, [[pdf]](https://arxiv.org/abs/2311.08007), [[PyTorch]](https://github.com/zzh-tech/InterpAny-Clearer)
### CVPR 2023
* Guozhen Zhang et al., **Extracting Motion and Appearance via Inter-Frame Attention for Efficient Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Zhang_Extracting_Motion_and_Appearance_via_Inter-Frame_Attention_for_Efficient_Video_CVPR_2023_paper.pdf) [[PyTorch]](https://github.com/MCG-NJU/EMA-VFI)
* Xin Jin et al., **A Unified Pyramid Recurrent Network for Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Jin_A_Unified_Pyramid_Recurrent_Network_for_Video_Frame_Interpolation_CVPR_2023_paper.pdf) [[PyTorch]](https://github.com/srcn-ivl/UPR-Net)
* Taewoo Kim et al., **Event-Based Video Frame Interpolation With Cross-Modal Asymmetric Bidirectional Motion Fields**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Kim_Event-Based_Video_Frame_Interpolation_With_Cross-Modal_Asymmetric_Bidirectional_Motion_Fields_CVPR_2023_paper.pdf) [[PyTorch]](https://github.com/intelpro/CBMNet)
* Sangjin Lee et al., **Exploring Discontinuity for Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Lee_Exploring_Discontinuity_for_Video_Frame_Interpolation_CVPR_2023_paper.pdf) [[PyTorch(test only)]](https://github.com/pandatimo/Exploring-Discontinuity-for-VFI/tree/main)
* Wei Shang et al., **Joint Video Multi-Frame Interpolation and Deblurring Under Unknown Exposure Time**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Shang_Joint_Video_Multi-Frame_Interpolation_and_Deblurring_Under_Unknown_Exposure_Time_CVPR_2023_paper.pdf), [[PyTorch]](https://github.com/shangwei5/VIDUE)
* Junheum Park et al., **BiFormer: Learning Bilateral Motion Estimation via Bilateral Transformer for 4K Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Park_BiFormer_Learning_Bilateral_Motion_Estimation_via_Bilateral_Transformer_for_4K_CVPR_2023_paper.pdf), [[PyTorch]](https://github.com/JunHeum/BiFormer)
### ICCV 2023
* Xiang Ji et al., **Rethinking Video Frame Interpolation from Shutter Mode Induced Degradation**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2023/papers/Ji_Rethinking_Video_Frame_Interpolation_from_Shutter_Mode_Induced_Degradation_ICCV_2023_paper.pdf), [[PyTorch]](https://github.com/jixiang2016/PMBNet)
* Jun-Sang Yoo et al., **Video Object Segmentation-aware Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2023/papers/Yoo_Video_Object_Segmentation-aware_Video_Frame_Interpolation_ICCV_2023_paper.pdf), [[PyTorch]](https://github.com/junsang7777/VOS-VFI)
### CVPR 2022
* Xiao Lu et al., **Video Shadow Detection via Spatio-Temporal Interpolation Consistency Training**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Lu_Video_Shadow_Detection_via_Spatio-Temporal_Interpolation_Consistency_Training_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/yihong-97/STICT)
* Zhihao Shi et al., **Video Frame Interpolation Transformer**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Shi_Video_Frame_Interpolation_Transformer_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/zhshi0816/Video-Frame-Interpolation-Transformer)
* Liying Lu et al., **Video Frame Interpolation with Transformer**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Lu_Video_Frame_Interpolation_With_Transformer_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/dvlab-research/VFIformer)
* Yue Wu et al., **Optimizing Video Prediction via Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Wu_Optimizing_Video_Prediction_via_Video_Frame_Interpolation_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/YueWuHKUST/CVPR2022-Optimizing-Video-Prediction-via-Video-Frame-Interpolation)
* Ping Hu et al., **Many-to-Many Splatting for Efficient Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Hu_Many-to-Many_Splatting_for_Efficient_Video_Frame_Interpolation_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/feinanshan/M2M_VFI)
### ECCV 2022
* Zhewei Huang et al., **Real-Time Intermediate Flow Estimation for Video Frame Interpolation**, [[pdf]](https://arxiv.org/abs/2011.06294), [[PyTorch]](https://github.com/hzwer/ECCV2022-RIFE)
* Fitsum Reda et al., **FILM: Frame Interpolation for Large Motion**, [[pdf]](https://arxiv.org/abs/2202.04901), [[PyTorch]](https://github.com/google-research/frame-interpolation)
* Zhiyang Yu et al., **Deep Bayesian Video Frame Interpolation**, [[pdf]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/html/1287_ECCV_2022_paper.php), [[PyTorch]](https://github.com/Oceanlib/DBVI)
* Qiqi Hou et al., **A Perceptual Quality Metric for Video Frame Interpolation**, [[pdf]](https://arxiv.org/abs/2210.01879), [[PyTorch]](https://github.com/hqqxyy/VFIPS)
* Jihyong Oh et al., **DeMFI: Deep Joint Deblurring and Multi-Frame Interpolation with Flow-Guided Attentive Correlation and Recursive Boosting**, [[pdf]](https://arxiv.org/abs/2111.09985), [[PyTorch]](https://github.com/JihyongOh/DeMFI)
### ICCV 2021
* Zhiyang Yu et al., **Training Weakly Supervised Video Frame Interpolation with Events**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2021/html/Yu_Training_Weakly_Supervised_Video_Frame_Interpolation_With_Events_ICCV_2021_paper.html) [[PyTorch]](https://github.com/YU-Zhiyang/WEVI).
* Junheum Park et al., **Asymmetric Bilateral Motion Estimation for Video Frame Interpolation**, [[pdf]](https://arxiv.org/abs/2108.06815) [[PyTorch]](https://github.com/JunHeum/ABME).
* Hyeonjun Sim et al., **XVFI: eXtreme Video Frame Interpolation**, [[pdf]](http://arxiv.org/abs/2103.16206) [[PyTorch]](https://github.com/JihyongOh/XVFI).
### CVPR 2021
* Tianyu Ding et al., **CDFI: Compression-Driven Network Design for Frame Interpolation**, [[pdf]](https://arxiv.org/abs/2103.10559) [[PyTorch]](https://github.com/tding1/CDFI).
* Stepan Tulyakov et al., **Time Lens: Event-Based Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2021/papers/Tulyakov_Time_Lens_Event-Based_Video_Frame_Interpolation_CVPR_2021_paper.pdf) [[PyTorch]](https://github.com/uzh-rpg/rpg_timelens)
### ECCV 2020
* Junheum Park et al., **BMBC: Bilateral Motion Estimation with Bilateral Cost Volume for Video Interpolation**, [[pdf]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123590103.pdf)
### CVPR 2020
* Simon Niklaus et al., **Softmax Splatting for Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Niklaus_Softmax_Splatting_for_Video_Frame_Interpolation_CVPR_2020_paper.pdf)
* Hyeongmin Lee et al., **AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Lee_AdaCoF_Adaptive_Collaboration_of_Flows_for_Video_Frame_Interpolation_CVPR_2020_paper.pdf)
* Wang Shen et al., **Blurry Video Frame Interpolation**, [[pdf]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Shen_Blurry_Video_Frame_Interpolation_CVPR_2020_paper.pdf)
* Shurui Gui et al., **FeatureFlow: Robust Video Interpolation via Structure-to-Texture Generation**, [[pdf]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Gui_FeatureFlow_Robust_Video_Interpolation_via_Structure-to-Texture_Generation_CVPR_2020_paper.pdf)
* Myungsub Choi et al., **Scene-Adaptive Video Frame Interpolation via Meta-Learning**, [[pdf]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Choi_Scene-Adaptive_Video_Frame_Interpolation_via_Meta-Learning_CVPR_2020_paper.pdf)
### CVPR 2019
* Tomer Peleg et al., **IM-Net for High Resolution Video Frame Interpolation**, [[pdf]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Peleg_IM-Net_for_High_Resolution_Video_Frame_Interpolation_CVPR_2019_paper.pdf).
* Wenbo Bao et al., **Depth-Aware Video Frame Interpolation**, [[pdf]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Bao_Depth-Aware_Video_Frame_Interpolation_CVPR_2019_paper.pdf) [[PyTorch]]().
* Liangzhe Yuan et al., **Zoom-In-To-Check: Boosting Video Interpolation via Instance-Level Discrimination**, [[pdf]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Yuan_Zoom-In-To-Check_Boosting_Video_Interpolation_via_Instance-Level_Discrimination_CVPR_2019_paper.pdf).
### ICCV 2019
* Fitsum A. Reda et al., **Unsupervised Video Interpolation Using Cycle Consistency**, [[pdf]](http://openaccess.thecvf.com/content_ICCV_2019/papers/Reda_Unsupervised_Video_Interpolation_Using_Cycle_Consistency_ICCV_2019_paper.pdf).
### CVPR 2018
* Simone Meyer et al., **PhaseNet for Video Frame Interpolation**, [[pdf]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Meyer_PhaseNet_for_Video_CVPR_2018_paper.pdf).
* Simon Niklaus et al., **Context-Aware Synthesis for Video Frame Interpolation**, [[pdf]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Niklaus_Context-Aware_Synthesis_for_CVPR_2018_paper.pdf).
* Huaizu Jiang et al., **Super SloMo: High Quality Estimation of Multiple Intermediate Frames for Video Interpolation**, [[pdf]](http://openaccess.thecvf.com/content_cvpr_2018/papers/Jiang_Super_SloMo_High_CVPR_2018_paper.pdf) [[PyTorch]]().
### ECCV 2018
* Chao-Yuan Wu et al., **Video Compression through Image Interpolation**, [[pdf]](http://openaccess.thecvf.com/content_ECCV_2018/papers/Chao-Yuan_Wu_Video_Compression_through_ECCV_2018_paper.pdf).
### CVPR 2017
* Simon Niklaus et al., **Video Frame Interpolation via Adaptive Convolution**, [[pdf]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Niklaus_Video_Frame_Interpolation_CVPR_2017_paper.pdf).
### ICCV 2017
* Simon Niklaus et al., **Video Frame Interpolation via Adaptive Separable Convolution**, [[pdf]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Niklaus_Video_Frame_Interpolation_ICCV_2017_paper.pdf) [[PyTorch]]().
* Ziwei Liu et al., **Video Frame Synthesis using Deep Voxel Flow**, [[pdf]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Liu_Video_Frame_Synthesis_ICCV_2017_paper.pdf).
### CVPR 2015
* Simone Meyer et al., **Phase-Based Frame Interpolation for Video**, [[pdf]](http://openaccess.thecvf.com/content_cvpr_2015/papers/Meyer_Phase-Based_Frame_Interpolation_2015_CVPR_paper.pdf).
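Most flow-based interpolators in this section (e.g. Super SloMo, RIFE) share one primitive: backward-warp each input frame along estimated optical flow toward the target time, then blend the two warped frames. A minimal NumPy sketch of bilinear backward warping follows; all names are ours, and real implementations typically use `torch.nn.functional.grid_sample` instead:

```python
import numpy as np

def backward_warp(img, flow):
    """Sample img at (x + dx, y + dy) with bilinear interpolation.
    img: (H, W) grayscale frame; flow: (H, W, 2) per-pixel (dx, dy)."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float64)
    sx = np.clip(xs + flow[..., 0], 0, w - 1)  # clamp samples to the border
    sy = np.clip(ys + flow[..., 1], 0, h - 1)
    x0 = np.floor(sx).astype(int)
    y0 = np.floor(sy).astype(int)
    x1 = np.minimum(x0 + 1, w - 1)
    y1 = np.minimum(y0 + 1, h - 1)
    wx, wy = sx - x0, sy - y0
    top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
    bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
    return top * (1 - wy) + bot * wy

# An intermediate frame at time t is then approximated by blending, e.g.
# (1 - t) * backward_warp(frame0, t * flow_0to1)
#     + t * backward_warp(frame1, (1 - t) * flow_1to0)
```

The papers above differ mainly in how the flows are obtained and how occlusions are handled around this step; Niklaus et al.'s softmax splatting replaces the backward warp with a forward (scatter) warp.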
## 3. Video Deblurring
### CVPR 2024
* Huicong Zhang et al., **Blur-aware Spatio-temporal Sparse Transformer for Video Deblurring**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Zhang_Blur-aware_Spatio-temporal_Sparse_Transformer_for_Video_Deblurring_CVPR_2024_paper.pdf), [[MMengine]](https://github.com/huicongzhang/BSSTNet)
* Geunhyuk Youk et al., **FMA-Net: Flow-Guided Dynamic Filtering and Iterative Feature Refinement with Multi-Attention for Joint Video Super-Resolution and Deblurring**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Youk_FMA-Net_Flow-Guided_Dynamic_Filtering_and_Iterative_Feature_Refinement_with_Multi-Attention_CVPR_2024_paper.pdf) [[PyTorch]](https://github.com/KAIST-VICLab/FMA-Net)
### ECCV 2024
* Jin-Ting He et al., **Domain-adaptive Video Deblurring via Test-time Blurring**, [[pdf]](https://arxiv.org/abs/2407.09059), [[PyTorch(test only)]](https://github.com/Jin-Ting-He/DADeblur)
* Taewoo Kim et al., **Towards Real-world Event-guided Low-light Video Enhancement and Deblurring**, [[pdf]](https://arxiv.org/abs/2408.14916), [[Soon]](https://github.com/intelpro/ELEDNet)
### CVPR 2023
* Jinshan Pan et al., **Deep Discriminative Spatial and Temporal Network for Efficient Video Deblurring**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Pan_Deep_Discriminative_Spatial_and_Temporal_Network_for_Efficient_Video_Deblurring_CVPR_2023_paper.pdf), [[PyTorch]](https://github.com/xuboming8/DSTNet)
* Wei Shang et al., **Joint Video Multi-Frame Interpolation and Deblurring Under Unknown Exposure Time**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2023/papers/Shang_Joint_Video_Multi-Frame_Interpolation_and_Deblurring_Under_Unknown_Exposure_Time_CVPR_2023_paper.pdf), [[PyTorch]](https://github.com/shangwei5/VIDUE)
### CVPR 2022
* Bo Ji et al., **Multi-Scale Memory-Based Video Deblurring**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Ji_Multi-Scale_Memory-Based_Video_Deblurring_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/jibo27/MemDeblur)
### ECCV 2022
* Huicong Zhang et al., **Spatio-Temporal Deformable Attention Network for Video Deblurring**, [[pdf]](https://arxiv.org/abs/2207.10852), [[PyTorch]](https://github.com/huicongzhang/STDAN)
* Yusheng Wang et al., **Efficient Video Deblurring Guided by Motion Magnitude**, [[pdf]](https://arxiv.org/abs/2207.13374), [[PyTorch]](https://github.com/sollynoay/MMP-RNN)
* Bangrui Jiang et al., **ERDN: Equivalent Receptive Field Deformable Network for Video Deblurring**, [[pdf]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/html/4085_ECCV_2022_paper.php), [[PyTorch(test only)]](https://github.com/TencentCloud/ERDN)
* Jihyong Oh et al., **DeMFI: Deep Joint Deblurring and Multi-Frame Interpolation with Flow-Guided Attentive Correlation and Recursive Boosting**, [[pdf]](https://arxiv.org/abs/2111.09985), [[PyTorch]](https://github.com/JihyongOh/DeMFI)
### ICCV 2021
* Wei Shang et al., **Bringing Events Into Video Deblurring With Non-Consecutively Blurry Frames**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2021/papers/Shang_Bringing_Events_Into_Video_Deblurring_With_Non-Consecutively_Blurry_Frames_ICCV_2021_paper.pdf) [[PyTorch]](https://github.com/shangwei5/D2Net).
* Senyou Deng et al., **Multi-Scale Separable Network for Ultra-High-Definition Video Deblurring**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2021/papers/Deng_Multi-Scale_Separable_Network_for_Ultra-High-Definition_Video_Deblurring_ICCV_2021_paper.pdf).
### CVPR 2021
* Maitreya Suin et al., **Gated Spatio-Temporal Attention-Guided Video Deblurring**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2021/papers/Suin_Gated_Spatio-Temporal_Attention-Guided_Video_Deblurring_CVPR_2021_paper.pdf).
* Dongxu Li et al., **ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring**, [[pdf]](https://arxiv.org/abs/2103.04260).
### ECCV 2020
* Zhihang Zhong et al., **Efficient Spatio-Temporal Recurrent Neural Network for Video Deblurring**, [[pdf]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123510188.pdf)
* Songnan Lin et al., **Learning Event-Driven Video Deblurring and Interpolation**, [[pdf]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123530681.pdf)
### CVPR 2020
* Jinshan Pan et al., **Cascaded Deep Video Deblurring Using Temporal Sharpness Prior**, [[pdf]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Pan_Cascaded_Deep_Video_Deblurring_Using_Temporal_Sharpness_Prior_CVPR_2020_paper.pdf)
### CVPR 2019
* Seungjun Nah et al., **Recurrent Neural Networks With Intra-Frame Iterations for Video Deblurring**, [[pdf]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Nah_Recurrent_Neural_Networks_With_Intra-Frame_Iterations_for_Video_Deblurring_CVPR_2019_paper.pdf).
### ICCV 2019
* Shangchen Zhou et al., **Spatio-Temporal Filter Adaptive Network for Video Deblurring**, [[pdf\]](http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhou_Spatio-Temporal_Filter_Adaptive_Network_for_Video_Deblurring_ICCV_2019_paper.pdf).
* Wenqi Ren et al., **Face Video Deblurring Using 3D Facial Priors**, [[pdf\]](http://openaccess.thecvf.com/content_ICCV_2019/papers/Ren_Face_Video_Deblurring_Using_3D_Facial_Priors_ICCV_2019_paper.pdf).### CVPR 2017
* Shuochen Su et al., **Deep Video Deblurring for Hand-Held Cameras**, [[pdf\]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Su_Deep_Video_Deblurring_CVPR_2017_paper.pdf).
* Liyuan Pan et al., **Simultaneous Stereo Video Deblurring and Scene Flow Estimation**, [[pdf\]](http://openaccess.thecvf.com/content_cvpr_2017/papers/Pan_Simultaneous_Stereo_Video_CVPR_2017_paper.pdf).### ICCV 2017
* Wenqi Ren et al., **Video Deblurring via Semantic Segmentation and Pixel-Wise Non-Linear Kernel**, [[pdf\]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Ren_Video_Deblurring_via_ICCV_2017_paper.pdf).
* Tae Hyun Kim et al., **Online Video Deblurring via Dynamic Temporal Blending Network**, [[pdf\]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Kim_Online_Video_Deblurring_ICCV_2017_paper.pdf).
### ECCV 2016
* Anita Sellent et al., **Stereo Video Deblurring**, [[pdf\]]().
### CVPR 2015
* Tae Hyun Kim et al., **Generalized Video Deblurring for Dynamic Scenes**, [[pdf\]](http://openaccess.thecvf.com/content_cvpr_2015/papers/Kim_Generalized_Video_Deblurring_2015_CVPR_paper.pdf).
### ECCV 2014
* Jonas Wulff et al., **Modeling Blurred Video with Layers**, [[pdf\]]().
## 4. Video Inpainting
### CVPR 2024
* Jianzong Wu et al., **Towards Language-Driven Video Inpainting via Multimodal Large Language Models**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2024/papers/Wu_Towards_Language-Driven_Video_Inpainting_via_Multimodal_Large_Language_Models_CVPR_2024_paper.pdf), [[PyTorch(test only)]](https://github.com/jianzongwu/Language-Driven-Video-Inpainting)
### ECCV 2024
* Fu-Yun Wang et al., **Be-Your-Outpainter: Mastering Video Outpainting through Input-Specific Adaptation**, [[pdf]](https://arxiv.org/abs/2403.13745), [[PyTorch(test only)]](https://github.com/G-U-N/Be-Your-Outpainter)
### ICCV 2023
* Shangchen Zhou et al., **ProPainter: Improving Propagation and Transformer for Video Inpainting**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2023/papers/Zhou_ProPainter_Improving_Propagation_and_Transformer_for_Video_Inpainting_ICCV_2023_paper.pdf), [[PyTorch]](https://github.com/sczhou/ProPainter)
### CVPR 2022
* Ryan Szeto et al., **The DEVIL Is in the Details: A Diagnostic Evaluation Benchmark for Video Inpainting**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Szeto_The_DEVIL_Is_in_the_Details_A_Diagnostic_Evaluation_Benchmark_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/MichiganCOG/devil/tree/public)
* Zhen Li et al., **Towards An End-to-End Framework for Flow-Guided Video Inpainting**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Li_Towards_an_End-to-End_Framework_for_Flow-Guided_Video_Inpainting_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/MCG-NKU/E2FGVI)
* Jingjing Ren et al., **DLFormer: Discrete Latent Transformer for Video Inpainting**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Ren_DLFormer_Discrete_Latent_Transformer_for_Video_Inpainting_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/JingjingRenabc/dlformer)
* Kaidong Zhang et al., **Inertia-Guided Flow Completion and Style Fusion for Video Inpainting**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Zhang_Inertia-Guided_Flow_Completion_and_Style_Fusion_for_Video_Inpainting_CVPR_2022_paper.pdf), [[PyTorch]](https://github.com/hitachinsk/ISVI)
### ECCV 2022
* Kaidong Zhang et al., **Flow-Guided Transformer for Video Inpainting**, [[pdf]](https://arxiv.org/abs/2208.06768), [[PyTorch]](https://github.com/hitachinsk/FGT)
### ICCV 2021
* Rui Liu et al., **FuseFormer: Fusing Fine-Grained Information in Transformers for Video Inpainting**, [[pdf]](https://arxiv.org/abs/2109.02974) [[PyTorch]](https://github.com/ruiliu-ai/FuseFormer).
* Dong Lao et al., **Flow-Guided Video Inpainting With Scene Templates**, [[pdf]](https://arxiv.org/abs/2108.12845).
* Bingyao Yu et al., **Frequency-Aware Spatiotemporal Transformers for Video Inpainting Detection**, [[pdf]](https://openaccess.thecvf.com/content/ICCV2021/papers/Yu_Frequency-Aware_Spatiotemporal_Transformers_for_Video_Inpainting_Detection_ICCV_2021_paper.pdf).
* Hao Ouyang et al., **Internal Video Inpainting by Implicit Long-Range Propagation**, [[pdf]](https://arxiv.org/abs/2108.01912) [[Tensorflow]](https://github.com/Tengfei-Wang/Implicit-Internal-Video-Inpainting).
### CVPR 2021
* Xueyan Zou et al., **Progressive Temporal Feature Alignment Network for Video Inpainting**, [[pdf]](https://arxiv.org/abs/2104.03507) [[PyTorch]](https://github.com/MaureenZOU/TSAM).
### ECCV 2020
* Ang Li et al., **Short-Term and Long-Term Context Aggregation Network for Video Inpainting**, [[pdf\]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123490698.pdf)
* Yanhong Zeng et al., **Learning Joint Spatial-Temporal Transformations for Video Inpainting**, [[pdf\]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123610511.pdf)
* Miao Liao et al., **DVI: Depth Guided Video Inpainting for Autonomous Driving**, [[pdf\]](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123660001.pdf)
### CVPR 2019
* Rui Xu et al., **Deep Flow-Guided Video Inpainting**, [[pdf\]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Xu_Deep_Flow-Guided_Video_Inpainting_CVPR_2019_paper.pdf).
* Dahun Kim et al., **Deep Video Inpainting**, [[pdf\]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Kim_Deep_Video_Inpainting_CVPR_2019_paper.pdf).
### ICCV 2019
* Haotian Zhang et al., **An Internal Learning Approach to Video Inpainting**, [[pdf\]](http://openaccess.thecvf.com/content_ICCV_2019/papers/Zhang_An_Internal_Learning_Approach_to_Video_Inpainting_ICCV_2019_paper.pdf).
* Sungho Lee et al., **Copy-and-Paste Networks for Deep Video Inpainting**, [[pdf\]](http://openaccess.thecvf.com/content_ICCV_2019/papers/Lee_Copy-and-Paste_Networks_for_Deep_Video_Inpainting_ICCV_2019_paper.pdf).
* Ya-Liang Chang et al., **Free-Form Video Inpainting With 3D Gated Convolution and Temporal PatchGAN**, [[pdf\]](http://openaccess.thecvf.com/content_ICCV_2019/papers/Chang_Free-Form_Video_Inpainting_With_3D_Gated_Convolution_and_Temporal_PatchGAN_ICCV_2019_paper.pdf).
## 5. Video Denoising
### CVPR 2022
* Wenwen Pan et al., **Wnet: Audio-Guided Video Object Segmentation via Wavelet-Based Cross-Modal Denoising Networks**, [[pdf]](https://openaccess.thecvf.com/content/CVPR2022/papers/Pan_Wnet_Audio-Guided_Video_Object_Segmentation_via_Wavelet-Based_Cross-Modal_Denoising_Networks_CVPR_2022_paper.pdf), [[PyTorch(test only)]](https://github.com/asudahkzj/Wnet)
### ECCV 2022
* Junyi Li et al., **Unidirectional Video Denoising by Mimicking Backward Recurrent Modules with Look-Ahead Forward Ones**, [[pdf]](https://www.ecva.net/papers/eccv_2022/papers_ECCV/html/4024_ECCV_2022_paper.php), [[PyTorch]](https://github.com/nagejacob/FloRNN)
### ICCV 2021
* Gregory Vaksman et al., **Patch Craft: Video Denoising by Deep Modeling and Patch Matching**, [[pdf]](https://arxiv.org/abs/2103.13767).
* Dev Yashpal Sheth et al., **Unsupervised Deep Video Denoising**, [[pdf]](https://arxiv.org/abs/2011.15045) [[PyTorch]](https://github.com/sreyas-mohan/udvd).
### CVPR 2021
* Matteo Maggioni et al., **Efficient Multi-Stage Video Denoising with Recurrent Spatio-Temporal Fusion**, [[pdf]](https://arxiv.org/abs/2103.05407) [[PyTorch]](https://github.com/Baymax-chen/EMVD).
### CVPR 2020
* Huanjing Yue et al., **Supervised Raw Video Denoising With a Benchmark Dataset on Dynamic Scenes**, [[pdf\]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Yue_Supervised_Raw_Video_Denoising_With_a_Benchmark_Dataset_on_Dynamic_CVPR_2020_paper.pdf)
* Matias Tassano et al., **FastDVDnet: Towards Real-Time Deep Video Denoising Without Flow Estimation**, [[pdf\]](https://openaccess.thecvf.com/content_CVPR_2020/papers/Tassano_FastDVDnet_Towards_Real-Time_Deep_Video_Denoising_Without_Flow_Estimation_CVPR_2020_paper.pdf)
### CVPR 2019
* Thibaud Ehret et al., **Model-Blind Video Denoising via Frame-To-Frame Training**, [[pdf\]](http://openaccess.thecvf.com/content_CVPR_2019/papers/Ehret_Model-Blind_Video_Denoising_via_Frame-To-Frame_Training_CVPR_2019_paper.pdf).
### ICCV 2017
* Bihan Wen et al., **Joint Adaptive Sparsity and Low-Rankness on the Fly: An Online Tensor Reconstruction Scheme for Video Denoising**, [[pdf\]](http://openaccess.thecvf.com/content_ICCV_2017/papers/Wen_Joint_Adaptive_Sparsity_ICCV_2017_paper.pdf).
## 6. Video HDR (Inverse Tone-Mapping)
### ICCV 2019
* Soo Ye Kim et al., **Deep SR-ITM: Joint Learning of Super-Resolution and Inverse Tone-Mapping for 4K UHD HDR Applications**, [[pdf\]](https://arxiv.org/pdf/1904.11176) [[Matlab\]](https://github.com/sooyekim/Deep-SR-ITM)
### AAAI 2019
* Soo Ye Kim et al., **JSI-GAN: GAN-Based Joint Super-Resolution and Inverse Tone-Mapping with Pixel-Wise Task-Specific Filters for UHD HDR Video**, [[pdf\]](https://arxiv.org/abs/1909.04391) [[Tensorflow\]](https://github.com/JihyongOh/JSI-GAN)