Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/astorfi/gan-evaluation
- Host: GitHub
- URL: https://github.com/astorfi/gan-evaluation
- Owner: astorfi
- License: mit
- Created: 2019-12-09T22:23:40.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2020-11-28T01:01:57.000Z (almost 4 years ago)
- Last Synced: 2024-10-08T13:33:03.710Z (about 1 month ago)
- Language: Python
- Size: 69.3 KB
- Stars: 4
- Watchers: 4
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE.md
Awesome Lists containing this project
README
# On the Evaluation of Generative Adversarial Networks By Discriminative Models
[![Name](https://img.shields.io/github/license/astorfi/gan-evaluation)](https://github.com/astorfi/gan-evaluation/blob/master/LICENSE.md)
[![arXiv](https://img.shields.io/badge/arXiv-2010.03549-b31b1b.svg)](https://arxiv.org/abs/2010.03549)

This repository contains an implementation of "On the Evaluation of Generative Adversarial Networks By Discriminative Models".
For a detailed description of the architecture, please read [our paper](https://arxiv.org/abs/2010.03549). Use of the code in this repository is allowed with **proper attribution**: please cite the paper if you use it in your work.
## BibTeX

    @article{torfi2020evaluation,
      title={On the Evaluation of Generative Adversarial Networks By Discriminative Models},
      author={Torfi, Amirsina and Beyki, Mohammadreza and Fox, Edward A},
      journal={arXiv preprint arXiv:2010.03549},
      year={2020}
    }

Table of contents
=================

* [Paper Summary](#paper-summary)
* [Running the Code](#Running-the-Code)
* [Prerequisites](#Prerequisites)
* [Datasets](#Datasets)
* [Collaborators](#Collaborators)

## Paper Summary
Abstract
*Generative Adversarial Networks (GANs) can accurately model complex multi-dimensional data and generate realistic samples. However, due to their implicit estimation of data distributions, their evaluation is a challenging task. The majority of research efforts associated with tackling this issue were validated by qualitative visual evaluation. Such approaches do not generalize well beyond the image domain. Since many of those evaluation metrics are proposed and bound to the vision domain, they are difficult to apply to other domains. Quantitative measures are necessary to better guide the training and comparison of different GANs models. In this work, we leverage Siamese neural networks to propose a domain-agnostic evaluation metric: (1) with a qualitative evaluation that is consistent with human evaluation, (2) that is robust relative to common GAN issues such as mode dropping and invention, and (3) does not require any pretrained classifier. The empirical results in this paper demonstrate the superiority of this method compared to the popular Inception Score and are competitive with the FID score.*
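To illustrate the general idea of a Siamese-network-based, distance-driven score (this is only a minimal sketch; the layer sizes, embedding dimension, and scoring rule are placeholder assumptions, not the architecture described in the paper), the following PyTorch snippet embeds real and generated samples with a shared encoder and reports the mean pairwise embedding distance:

```python
# Illustrative sketch only: a shared (Siamese) encoder plus a simple
# distance-based score between real and generated samples.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseEncoder(nn.Module):
    """Shared encoder mapping flattened images to an embedding space."""

    def __init__(self, in_dim=784, emb_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x):
        # L2-normalise embeddings so pairwise distances are comparable.
        return F.normalize(self.net(x.flatten(1)), dim=1)


def pairwise_score(encoder, real, fake):
    """Mean Euclidean distance between real and generated embeddings
    (lower means generated samples lie closer to the real data)."""
    with torch.no_grad():
        d = torch.cdist(encoder(real), encoder(fake))
    return d.mean().item()


if __name__ == "__main__":
    enc = SiameseEncoder()
    real = torch.rand(32, 1, 28, 28)   # stand-in for real MNIST images
    fake = torch.rand(32, 1, 28, 28)   # stand-in for GAN samples
    print(f"distance-based score: {pairwise_score(enc, real, fake):.4f}")
```

In practice the encoder would first be trained on real/fake pairs so that the distance reflects sample quality; the random tensors above only stand in for batches of real and generated images.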
## Running the Code
### Prerequisites
* PyTorch
* CUDA [strongly recommended]

**NOTE:** PyTorch does a good job of installing its required packages, but you should install CUDA yourself according to the PyTorch requirements. Please refer to [this link](https://pytorch.org/) for further information. A quick sanity check is sketched below.
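The following snippet (assuming PyTorch is already installed) verifies that the CUDA toolkit is visible to PyTorch before you run any training scripts:

```python
# Minimal sanity check: confirm PyTorch and CUDA are set up correctly.
import torch

print("PyTorch version:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```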
### Datasets

You need to download and process the datasets mentioned in the paper. **The code in this repository supports the MNIST and [CelebA](http://mmlab.ie.cuhk.edu.hk/projects/CelebA.html) datasets only**.
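One way to fetch both datasets is through `torchvision` (a sketch only; the root directory, split, and transform are placeholder choices, and the repository's own preprocessing may differ):

```python
# Hedged sketch of downloading the two supported datasets with torchvision.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

mnist = datasets.MNIST(root="./data", train=True, download=True, transform=to_tensor)
# CelebA is hosted on Google Drive; the automatic download can fail when the
# quota is exceeded, in which case download it manually from the project page.
celeba = datasets.CelebA(root="./data", split="train", download=True, transform=to_tensor)

print(len(mnist), "MNIST images,", len(celeba), "CelebA images")
```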
## Collaborators
| [Amirsina Torfi](https://github.com/astorfi) | [Mohammadreza Beyki](https://github.com/mohibeyki) |
| --- | --- |