Official implementation of SVFR.
- Host: GitHub
- URL: https://github.com/wangzhiyaoo/SVFR
- Owner: wangzhiyaoo
- Created: 2024-12-17T12:01:14.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2025-01-03T03:19:30.000Z (10 months ago)
- Last Synced: 2025-01-03T03:30:42.025Z (10 months ago)
- Language: Python
- Homepage: https://wangzhiyaoo.github.io/SVFR/
- Size: 1.95 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
Awesome Lists containing this project
- awesome-diffusion-categorized
README
# SVFR: A Unified Framework for Generalized Video Face Restoration
[Paper](https://arxiv.org/pdf/2501.01235) | [Project Page](https://wangzhiyaoo.github.io/SVFR/) | [HuggingFace Demo](https://huggingface.co/spaces/fffiloni/SVFR-demo)
## 🔥 Overview
SVFR is a unified framework for face video restoration that supports tasks such as **BFR, Colorization, Inpainting**, and **their combinations** within one cohesive system.

## 🎬 Demo
### BFR
*Demo videos: Case 1 and Case 2 (see the project page).*
### BFR+Colorization
*Demo videos: Case 3 and Case 4 (see the project page).*
### BFR+Colorization+Inpainting
*Demo videos: Case 5 and Case 6 (see the project page).*
## 🎙️ News
- **[2025.01.17]**: HuggingFace demo [Hub](https://huggingface.co/spaces/fffiloni/SVFR-demo) is available now! 
- **[2025.01.02]**: We released the initial version of the [inference code](#inference) and [models](#download-checkpoints). Stay tuned for continuous updates!
- **[2024.12.17]**: This repo is created!
## 🚀 Getting Started
> Note: It is recommended to use a GPU with 16GB or more VRAM.
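If you are unsure how much VRAM your GPU has, a quick check with `nvidia-smi` (assuming an NVIDIA GPU with drivers installed) looks like this:
```bash
# List each visible GPU with its total and currently free memory.
nvidia-smi --query-gpu=name,memory.total,memory.free --format=csv
```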
## Setup
Use the following commands to create a conda environment for SVFR from scratch:
```bash
conda create -n svfr python=3.9 -y
conda activate svfr
```
Install PyTorch. Make sure to select the appropriate CUDA version for your hardware; for example:
```bash
pip install torch==2.2.2 torchvision==0.17.2 torchaudio==2.2.2
```
Install Dependencies:
```bash
pip install -r requirements.txt
```
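As an optional sanity check, you can confirm that PyTorch sees your GPU before moving on:
```bash
# Prints the installed torch version and whether CUDA is available.
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```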
## Download checkpoints
```bash
conda install git-lfs
git lfs install
git clone https://huggingface.co/stabilityai/stable-video-diffusion-img2vid-xt models/stable-video-diffusion-img2vid-xt
```
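If you prefer not to use git-lfs, the same weights can usually be fetched with the Hugging Face CLI instead (install it with `pip install -U huggingface_hub` if it is not already present); this is an alternative to the clone above, not part of the original instructions:
```bash
# Download the SVD base model into the expected local directory.
huggingface-cli download stabilityai/stable-video-diffusion-img2vid-xt \
    --local-dir models/stable-video-diffusion-img2vid-xt
```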
You can also download the checkpoints manually via the link on [Google Drive](https://drive.google.com/drive/folders/1nzy9Vk-yA_DwXm1Pm4dyE2o0r7V6_5mn?usp=share_link).
Place the checkpoints as follows:
```
└── models
    ├── face_align
    │   ├── yoloface_v5m.pt
    ├── face_restoration
    │   ├── unet.pth
    │   ├── id_linear.pth
    │   ├── insightface_glint360k.pth
    └── stable-video-diffusion-img2vid-xt
        ├── vae
        ├── scheduler
        └── ...
```
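Before running inference, it can help to confirm that the checkpoints landed where `infer.py` expects them. A minimal sketch that mirrors the tree above (the SVD subfolders are only spot-checked):
```bash
# Report any missing checkpoint files or folders.
for f in \
    models/face_align/yoloface_v5m.pt \
    models/face_restoration/unet.pth \
    models/face_restoration/id_linear.pth \
    models/face_restoration/insightface_glint360k.pth \
    models/stable-video-diffusion-img2vid-xt/vae \
    models/stable-video-diffusion-img2vid-xt/scheduler
do
    [ -e "$f" ] && echo "OK      $f" || echo "MISSING $f"
done
```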
## Inference
### Single-task or multi-task inference
```bash
# Make sure the input face video has equal width and height,
# or enable the --crop_face_region flag.
python3 infer.py \
 --config config/infer.yaml \
 --task_ids 0 \
 --input_path ./assert/lq/lq1.mp4 \
 --output_dir ./results/ \
 --crop_face_region
```
> 0 -- BFR  
> 1 -- colorization  
> 2 -- inpainting  
> 0,1 -- BFR and colorization  
> 0,1,2 -- BFR, colorization, and inpainting  
> ...
>
> Add the `--crop_face_region` flag at the end of the command to preprocess the input video by cropping the face region. This focuses processing on the facial area and improves results. A combined-task invocation is sketched after this note.
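For example, a combined BFR + colorization run would look like this (reusing the sample clip from `./assert/lq/`; swap in your own input as needed):
```bash
# BFR + colorization in a single pass (task_ids 0,1).
python3 infer.py \
 --config config/infer.yaml \
 --task_ids 0,1 \
 --input_path ./assert/lq/lq1.mp4 \
 --output_dir ./results/ \
 --crop_face_region
```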
### Inference with an additional inpainting mask
```bash
# For Inference with Inpainting
# Add '--mask_path' if you need to specify the mask file.
python3 infer.py \
 --config config/infer.yaml \
 --task_ids 0,1,2 \
 --input_path ./assert/lq/lq3.mp4 \
 --output_dir ./results/ \
 --mask_path ./assert/mask/lq3.png \
 --crop_face_region
```
## Gradio Demo
A web demo is available [here](https://huggingface.co/spaces/fffiloni/SVFR-demo). You can also run the Gradio demo locally: install Gradio with `pip install gradio`, then run
```bash
python3 demo.py
```
## License
The code of SVFR is released under the MIT License, with no restriction on academic or commercial usage.
**The pretrained models we provided with this library are available for non-commercial research purposes only, including both auto-downloading models and manual-downloading models.**
## Acknowledgments
- This work is built on the architecture of [Sonic](https://github.com/jixiaozhong/Sonic)🌟.
- Thanks to community contributor [@fffiloni](https://huggingface.co/fffiloni) for supporting the online demo.
## BibTex
```
@misc{wang2025svfrunifiedframeworkgeneralized,
      title={SVFR: A Unified Framework for Generalized Video Face Restoration}, 
      author={Zhiyao Wang and Xu Chen and Chengming Xu and Junwei Zhu and Xiaobin Hu and Jiangning Zhang and Chengjie Wang and Yuqi Liu and Yiyi Zhou and Rongrong Ji},
      year={2025},
      eprint={2501.01235},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2501.01235}, 
}
```