Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jnjaby/KEEP
[ECCV'24] Kalman-Inspired Feature Propagation for Video Face Super-Resolution
- Host: GitHub
- URL: https://github.com/jnjaby/KEEP
- Owner: jnjaby
- License: other
- Created: 2024-08-09T06:26:22.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2024-08-29T02:58:04.000Z (5 months ago)
- Last Synced: 2024-08-29T06:24:26.420Z (5 months ago)
- Language: Python
- Homepage:
- Size: 11.5 MB
- Stars: 236
- Watchers: 30
- Forks: 11
- Open Issues: 7
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-diffusion-categorized
README
KEEP: Kalman-Inspired Feature Propagation for Video Face Super-Resolution
S-Lab, Nanyang Technological University
ECCV 2024
🔥 For more results, visit our project page 🔥
⭐ If you find this project helpful, please help star this repo. Thanks! 🤗

# Update
- **2024.08**: We released the initial version of the inference code and models. Stay tuned for continuous updates!
- **2024.07**: This repo is created!

# Getting Started
## Dependencies and Installation
- PyTorch >= 1.7.1
- CUDA >= 10.1
- Other required packages in `requirements.txt`
```
# git clone this repository. Don't forget to add --recursive!!
git clone --recursive https://github.com/jnjaby/KEEP
cd KEEP

# create new anaconda env
conda create -n keep python=3.8 -y
conda activate keep

# install python dependencies
pip3 install -r requirements.txt
python basicsr/setup.py develop
conda install -c conda-forge dlib # only for face detection or cropping with dlib
conda install -c conda-forge ffmpeg
```
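
Before running anything, a quick sanity check that the `keep` environment can import PyTorch and see the GPU may save time; this is only a suggested check, not part of the official setup.

```python
# Minimal sanity check for the `keep` environment:
# confirms the installed PyTorch version and whether CUDA is visible.
import torch

print("torch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
```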

[Optional] If you forget to clone the repo with `--recursive`, you can update the submodule by

```
git submodule init
git submodule update
```

## Quick Inference
### Download Pre-trained Models
All pretrained models can be downloaded automatically during the first inference.
You can also download our pretrained models manually from [Releases V0.1.0](https://github.com/jnjaby/KEEP/releases/tag/v0.1.0) and place them in the `weights` folder.
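
If you would rather fetch the weights up front, a sketch along these lines works with `torch.hub`; the file name below is only a placeholder, so substitute the actual asset names listed on the release page.

```python
import os
from torch.hub import download_url_to_file

# Placeholder asset name; replace it with a real file listed under Releases V0.1.0.
url = "https://github.com/jnjaby/KEEP/releases/download/v0.1.0/MODEL_NAME.pth"

os.makedirs("weights", exist_ok=True)
download_url_to_file(url, os.path.join("weights", os.path.basename(url)))
```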
### Prepare Testing Data
We provide both synthetic (VFHQ) and real (collected) examples in the `assets/examples` folder. If you would like to test your own face videos, place them in the same folder.
You can also download the full synthetic and real test data from [[Google Drive](https://drive.google.com/drive/folders/16yqGKQnjCzrdVK_SQSzFhULEfhSxMUH_?usp=sharing)].
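
If you test your own clips, it can help to confirm they decode cleanly before dropping them into `assets/examples`; the snippet below is a hypothetical check that assumes OpenCV (`cv2`) is available in the environment, and the path is just an example.

```python
import cv2

# Example path; point this at your own video.
path = "assets/examples/my_face_video.mp4"

cap = cv2.VideoCapture(path)
if not cap.isOpened():
    raise RuntimeError(f"Could not open {path}")

num_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fps = cap.get(cv2.CAP_PROP_FPS)
cap.release()

print(f"{path}: {num_frames} frames, {width}x{height} @ {fps:.1f} fps")
```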
### Inference
**[Note]** If you want to compare against KEEP in your paper, please make sure the face alignment is consistent, and run the following command with `--has_aligned` to indicate that the faces are already cropped and aligned. The results will be saved in the `results` folder.

🧑🏻 Video Face Restoration for synthetic data (cropped and aligned face)
```
# For cropped and aligned faces
python inference_keep.py -i=./assets/examples/synthetic_1.mp4 -o=results/ --has_aligned --save_video -s=1
```

🎬 Video Face Restoration for real data (in the wild)
```
# For whole video
# Add '--bg_upsampler realesrgan' to enhance the background regions with Real-ESRGAN
# Add '--face_upsample' to further upsample restored faces with Real-ESRGAN
# Add '--draw_box' to show the bounding box of detected faces.
python inference_keep.py -i=./assets/examples/real_1.mp4 -o=results/ --draw_box --save_video -s=1 --bg_upsampler=realesrgan
```
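
To run the same settings over every clip in a folder, a small wrapper around the CLI above can loop through the files; this is only an illustrative sketch that reuses the flags shown in the commands above.

```python
import subprocess
from pathlib import Path

# Illustrative batch wrapper around inference_keep.py; adjust the directory and flags as needed.
input_dir = Path("assets/examples")
output_dir = Path("results")

for video in sorted(input_dir.glob("*.mp4")):
    subprocess.run(
        [
            "python", "inference_keep.py",
            "-i", str(video),
            "-o", str(output_dir),
            "--save_video",
            "-s", "1",
            "--bg_upsampler", "realesrgan",
        ],
        check=True,
    )
```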
## Citation
If you find our repo useful for your research, please consider citing our paper:
```bibtex
@InProceedings{feng2024keep,
title = {Kalman-Inspired FEaturE Propagation for Video Face Super-Resolution},
author = {Feng, Ruicheng and Li, Chongyi and Loy, Chen Change},
booktitle = {European Conference on Computer Vision (ECCV)},
year = {2024}
}
```

## License and Acknowledgement
This project is open sourced under [NTU S-Lab License 1.0](https://github.com/jnjaby/KEEP/blob/main/LICENSE). Redistribution and use should follow this license.
The code framework is mainly modified from [CodeFormer](https://github.com/sczhou/CodeFormer/). Please refer to the original repo for more usage details and documentation.

## Contact
If you have any questions, please feel free to contact us via `[email protected]`.