Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Learning Facial Representations from the Cycle-consistency of Face (ICCV 2021)
https://github.com/jiarenchang/facecycle
- Host: GitHub
- URL: https://github.com/jiarenchang/facecycle
- Owner: JiaRenChang
- Created: 2021-07-28T12:47:54.000Z (over 3 years ago)
- Default Branch: master
- Last Pushed: 2021-08-10T02:42:27.000Z (over 3 years ago)
- Last Synced: 2023-03-05T18:10:28.888Z (almost 2 years ago)
- Language: Python
- Size: 2.41 MB
- Stars: 42
- Watchers: 5
- Forks: 6
- Open Issues: 2
Metadata Files:
- Readme: README.md
README
# Learning Facial Representations from the Cycle-consistency of Face (ICCV 2021)
This repository contains the code for our ICCV 2021 paper by Jia-Ren Chang, Yong-Sheng Chen, and Wei-Chen Chiu.
[Paper Arxiv Link](https://arxiv.org/pdf/2108.03427.pdf)
## Contents
1. [Introduction](#introduction)
2. [Results](#results)
3. [Usage](#usage)
4. [Contacts](#contacts)

## Introduction
In this work, we introduce cycle-consistency in facial characteristics as a free supervisory signal for learning facial representations from unlabeled facial images. The learning is realized by superimposing the facial motion cycle-consistency and identity cycle-consistency constraints. The main idea of the facial motion cycle-consistency is that, given a face with an expression, we can perform de-expression to obtain a neutral face by removing the facial motion, and then perform re-expression to reconstruct the original face. The main idea of the identity cycle-consistency is to exploit both de-identity, which maps a given neutral face to the mean face by depriving it of its identity via feature re-normalization, and re-identity, which recovers the neutral face by adding the personal attributes back to the mean face.
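To make the two cycles concrete, below is a minimal PyTorch sketch of the facial motion cycle-consistency, assuming toy placeholder networks; all module names and architectures here are illustrative, not the repository's actual components.

```
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    """Illustrative stand-in for the encoders/decoders in the paper."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

# Facial motion cycle: face -> de-expression (neutral) -> re-expression (face).
motion_enc = TinyNet(3, 3)  # estimates facial motion from an expressive face
de_expr    = TinyNet(6, 3)  # removes the motion, yielding a neutral face
re_expr    = TinyNet(6, 3)  # adds the motion back to reconstruct the input

face    = torch.rand(2, 3, 64, 64)                      # unlabeled face batch
motion  = motion_enc(face)
neutral = de_expr(torch.cat([face, motion], dim=1))     # de-expression
recon   = re_expr(torch.cat([neutral, motion], dim=1))  # re-expression
loss_motion_cycle = F.l1_loss(recon, face)              # closing the cycle

# The identity cycle is analogous: de-identity maps the neutral face to a mean
# face via feature re-normalization, and re-identity adds the personal
# attributes back, again supervised by reconstructing the starting point.
```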
## Results

#### More visualizations
#### Emotion recognition
We use a linear protocol to evaluate the learnt representations for emotion recognition. We report accuracy (%) on two datasets (a minimal probe sketch follows the table).
| Method | FER-2013 | RAF-DB |
|---|---|---|
| Ours | 48.76 % | 71.01 % |
| [FAb-Net](https://arxiv.org/abs/1808.06882) | 46.98 % | 66.72 % |
| [TCAE](https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Self-Supervised_Representation_Learning_From_Videos_for_Facial_Action_Unit_Detection_CVPR_2019_paper.pdf) | 45.05 % | 65.32 % |
| [BMVC’20](https://www.bmvc2020-conference.com/assets/papers/0861.pdf) | 47.61 % | 58.86 % |
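As a rough illustration of the linear protocol, the sketch below freezes hypothetical encoder features and fits a linear classifier on top; the feature dimension, data, and names are placeholders, not the actual evaluation pipeline.

```
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
train_feats  = rng.normal(size=(1000, 256))   # frozen pretrained features
train_labels = rng.integers(0, 7, size=1000)  # 7 basic emotion categories
test_feats   = rng.normal(size=(200, 256))
test_labels  = rng.integers(0, 7, size=200)

# Only the linear classifier is trained; the representation stays fixed.
clf = LogisticRegression(max_iter=1000).fit(train_feats, train_labels)
acc = 100.0 * (clf.predict(test_feats) == test_labels).mean()  # accuracy (%)
```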
#### Head pose regression

We use linear regression to evaluate the learnt representations for head pose regression. We report the error for yaw, pitch, and roll (lower is better; a sketch follows the table).
| Method | Yaw | Pitch | Roll |
|---|---|---|---|
| Ours | 11.70 | 12.76 | 12.94 |
| [FAb-Net](https://arxiv.org/abs/1808.06882) | 13.92 | 13.25 | 14.51 |
| [TCAE](https://openaccess.thecvf.com/content_CVPR_2019/papers/Li_Self-Supervised_Representation_Learning_From_Videos_for_Facial_Action_Unit_Detection_CVPR_2019_paper.pdf) | 21.75 | 14.57 | 14.83 |
| [BMVC’20](https://www.bmvc2020-conference.com/assets/papers/0861.pdf) | 22.06 | 13.50 | 15.14 |
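In the same spirit, a linear regressor can be fit from frozen features to the three angles; again, everything below is a placeholder sketch, not the repository's code.

```
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
feats = rng.normal(size=(500, 256))  # frozen pretrained features
poses = rng.normal(size=(500, 3))    # yaw, pitch, roll targets

reg = LinearRegression().fit(feats, poses)
err = np.abs(reg.predict(feats) - poses).mean(axis=0)  # per-angle error
```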
#### Person recognition

We directly adopt the learnt representations for person recognition and report accuracy (%) on LFW and CPLFW (see the verification sketch after the table).
| Method | LFW | CPLFW |
|---|---|---|
| Ours | 73.72 % | 58.52 % |
| [VGG-like](https://arxiv.org/abs/1803.01260) | 71.48 % | - |
| LBP | 56.90 % | 51.50 % |
| HoG | 62.73 % | 51.73 % |
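Using the representation directly typically means scoring a face pair by feature similarity and thresholding it; the sketch below illustrates this with cosine similarity, with the feature vectors and threshold as assumptions.

```
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
feat_a = rng.normal(size=256)  # identity features of face A
feat_b = rng.normal(size=256)  # identity features of face B
same_person = cosine_similarity(feat_a, feat_b) > 0.5  # tuned threshold
```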
#### Frontalization

Frontalization results on the LFW dataset.
#### Image-to-image Translation
The image-to-image translation results.
## Usage
### From Others
Thanks to all the authors of these awesome repositories.
[SSIM](https://github.com/Po-Hsun-Su/pytorch-ssim)
[Optical Flow Visualization](https://github.com/tomrunia/OpticalFlow_Visualization)

### Download Pretrained Model
[Google Drive](https://drive.google.com/file/d/1dDuXLyn3AFclGos-Ku2geMBWE2v4a2Y9/view?usp=sharing)
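After downloading, it can help to inspect the checkpoint before use; this is a hedged sketch, and the filename below is an assumption, not the actual name of the file on the Drive link.

```
import torch

# Load the downloaded checkpoint on CPU and inspect its structure;
# the filename here is hypothetical.
state = torch.load("facecycle_pretrained.tar", map_location="cpu")
print(type(state), list(state)[:5] if isinstance(state, dict) else state)
```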
### Test translation
```
python test_translation.py --loadmodel (pretrained model)
```
and you can get translation results like the examples shown below.
### Replicate RAF-DB results
Download the pretrained model and [RAF-DB](http://www.whdeng.cn/RAF/model1.html).
```
python RAF_classify.py --loadmodel (pretrained model) \
--datapath (your RAF dataset path) \
--savemodel (your path for saving)
```
You can get 70-71% accuracy on basic emotion classification (7 categories) using the linear protocol.
## Contacts
[email protected]

Any discussions or concerns are welcome!