# Matching in GAN-Space

Code for using GANs to aid in matching, accompanying the paper "Overcoming confounding in face datasets via GAN-based matching" ([arXiv](https://arxiv.org/abs/2103.13455))


Projection and manipulation • Matching and benchmarking • Reproducibility • Reference

Quickstart demo (manipulate and interpolate your own face images!): [Colab notebook](https://colab.research.google.com/drive/1zevDVuqXc_ARcbirJAfEzsk1SClzBqXf)

# Projection and manipulation
This code projects images into the GAN latent space, where they can then be modified along certain attributes (e.g. age, gender, hair length) and mixed with other faces (e.g. other people, or older/younger versions of the same person). The `projection_manipulation/project_and_manipulate.sh` script handles the whole pipeline. The easiest way to get started is the [Colab notebook](https://colab.research.google.com/drive/1zevDVuqXc_ARcbirJAfEzsk1SClzBqXf), where you can upload your own images and have them automatically cropped, aligned, projected, manipulated, and interpolated.
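As a rough illustration of the editing step (not the repo's exact API; the names, shapes, and `alpha` scale below are hypothetical stand-ins), manipulation amounts to shifting a projected latent along a precomputed attribute direction:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for a projected StyleGAN2 latent code and a precomputed
# attribute direction (shapes are illustrative, not the repo's exact format).
w = rng.standard_normal((18, 512))          # projected latent code
direction = rng.standard_normal((18, 512))  # latent direction for e.g. age
direction /= np.linalg.norm(direction)

# Moving along the direction edits the attribute; the sign and magnitude
# of alpha control the direction and strength of the edit.
edited_latents = [w + alpha * direction for alpha in (-2, -1, 0, 1, 2)]
# Each edited latent would then be rendered with the StyleGAN2 synthesis
# network to produce the manipulated face.
```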

Start with 2 real images (higher-resolution photos work better, as do photos where the face is front-facing and not obstructed by things like hats or scarves):




Interpolating between the images:



Manipulating an image along pre-specified attributes:



You can do a lot more, like blending together many faces or interpolating between different photos of the same person!
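Interpolation itself is conceptually simple: convex combinations of two projected latents trace out intermediate faces. A minimal sketch, with synthetic stand-ins for the latents (not the repo's exact code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two projected latents (illustrative shapes).
w_a = rng.standard_normal((18, 512))
w_b = rng.standard_normal((18, 512))

# Linear interpolation in latent space: t=0 gives the first face,
# t=1 the second, and values in between blend the two.
frames = [(1 - t) * w_a + t * w_b for t in np.linspace(0.0, 1.0, 7)]
# Rendering each frame with the synthesis network yields an
# interpolation strip like the one shown above.
```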

# Matching and benchmarking
The matching code [here](matching_benchmarking) finds images that match across a certain attribute (e.g. perceived gender). This is useful for removing confounding factors in downstream analyses of things like gender bias in facial recognition. Matching can also be performed with other methods, such as propensity scores, with the GAN latent space serving as the covariates. Some example matches:
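As a toy sketch of the distance-based flavor of matching (the repo caches real pairwise-distance matrices such as `dists_pairwise_gan.npy`; the synthetic distances, attribute labels, and greedy pairing below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100
# Synthetic stand-ins: a symmetric pairwise distance matrix and a
# binary attribute (e.g. perceived gender, binarized).
dists = rng.random((n, n))
dists = (dists + dists.T) / 2
attr = rng.integers(0, 2, size=n)

group0 = np.where(attr == 0)[0]
group1 = np.where(attr == 1)[0]

# Greedy nearest-neighbor matching across the attribute: each image in
# group0 is paired with its closest unused image in group1.
used = set()
matches = []
for i in group0:
    candidates = [j for j in group1 if j not in used]
    if not candidates:
        break
    j = min(candidates, key=lambda c: dists[i, c])
    used.add(j)
    matches.append((i, j))
```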





Note: these annotations do not necessarily reflect the *gender identity* of the person; rather, they refer to *binarized gender as perceived by a casual observer*.

After matching, confounding on CelebA-HQ is much lower: the mean values of several key (binary) attributes become much closer between the matched groups.
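A sketch of that balance check (fully synthetic data; the real analysis uses the CelebA-HQ annotations and the computed matches):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 1000
# Synthetic binary attributes (e.g. smiling, eyeglasses) and a group
# label standing in for the matched attribute.
attrs = rng.integers(0, 2, size=(n, 3))
group = rng.integers(0, 2, size=n)
matched = rng.choice(n, size=400, replace=False)  # indices kept by matching

def mean_gap(idx):
    """Absolute difference in attribute means between the two groups."""
    a = attrs[idx][group[idx] == 0]
    b = attrs[idx][group[idx] == 1]
    return np.abs(a.mean(axis=0) - b.mean(axis=0))

print("before matching:", mean_gap(np.arange(n)))
print("after matching: ", mean_gap(matched))
```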



# Reproducibility

## Dependencies ![Python 3.6](https://img.shields.io/badge/python-3.6-blue.svg) ![](https://img.shields.io/badge/tensorflow-1.14.0-blue)

- uses tensorflow-gpu 1.14.0 (the GPU dependencies are only required for the projection/manipulation code, which uses [StyleGAN2](https://github.com/NVlabs/stylegan2))
- the required dependencies can be set up on AWS by selecting a deep learning AMI, running `source activate python3`, and then running `pip install tensorflow-gpu==1.14.0`

## Data/cached outputs for reproducing the pipeline in [this gdrive folder](https://drive.google.com/drive/folders/1YO_GZ48o30jTnME-z7d8LlcZoJejcNsk?usp=sharing)

- `data/celeba-hq/ims` folder
  - unzip the images in the CelebA-HQ dataset at 1024 x 1024 resolution into this folder
- `data/processed` folder
  - distances: `dists_pairwise_gan.npy`, `dists_pairwise_vgg.npy`, `dists_pairwise_facial.npy`, `dists_pairwise_facial_facenet.npy`, `dists_pairwise_facial_facenet_casia.npy`, `dists_pairwise_facial_vgg2.npy` - (30k x 30k) matrices storing the pairwise distances between all the images in CelebA-HQ under different distance measures
- `data/processed/gen/generated_images_0.1`
  - latents: `celeba_hq_latents_stylegan2.zip` - these are used in downstream analysis and are required for the propensity-score analysis
- (already present in repo) annotations (e.g. gender, smiling, eyeglasses), predicted metrics (e.g. predicted yaw, roll, pitch, quality, race) for each image, latent StyleGAN2 directions for different attributes, and precomputed match numbers
- (optional) the raw annotations and annotated images can also be downloaded
- (optional) all these paths can be changed in the `config.py` file
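Once downloaded, the cached matrices are plain NumPy arrays; for example, a quick sanity check of one distance matrix (the path assumes the layout above):

```python
import numpy as np

# Path assumes the gdrive contents were unpacked into data/processed.
dists = np.load("data/processed/dists_pairwise_gan.npy")
print(dists.shape)  # expect roughly (30000, 30000) for CelebA-HQ
```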

## Scripts

Both the [matching_benchmarking](matching_benchmarking) folder and the [projection_manipulation](projection_manipulation) folder contain two types of files:
- `.py` files in the `scripts` subdirectories - these scripts calculate the cached outputs in the gdrive folder. They do not need to be rerun, but they show how the cached outputs were generated and can be rerun on new datasets.
- `.ipynb` notebooks - these reproduce the results from the cached outputs in the gdrive folder. Notebooks beginning with `eda` are for exploratory analysis, which can be useful but is not required to generate the final results in the paper.

# Reference
- this project builds on many wonderful open-source projects (see the readmes in the [lib](lib) subfolders for more details), including:
  - stylegan: [stylegan2](https://github.com/NVlabs/stylegan2) and [stylegan2 encoder](https://github.com/rolux/stylegan2encoder)
  - facial recognition: [dlib](https://github.com/davisking/dlib), python [face_recognition](https://face-recognition.readthedocs.io/en/latest/face_recognition.html), [facenet](https://github.com/davidsandberg/facenet)
  - gender/race prediction: [fairface](https://github.com/joojs/fairface)
  - pose/background prediction: [deep_head_pose](https://github.com/shahroudy/deep-head-pose), [face_segmentation](https://github.com/nasir6/face-segmentation), and [faceQnet](https://github.com/uam-biometrics/FaceQnet)

```bibtex
@article{singh2021matched,
  title={Matched sample selection with GANs for mitigating attribute confounding},
  author={Chandan Singh and Guha Balakrishnan and Pietro Perona},
  journal={arXiv preprint arXiv:2103.13455},
  year={2021}
}
```