Official adversarial mixup resynthesis repository
- Host: GitHub
- URL: https://github.com/christopher-beckham/amr
- Owner: christopher-beckham
- License: BSD-3-Clause
- Created: 2019-03-20T18:47:27.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2020-02-14T15:06:12.000Z (almost 5 years ago)
- Last Synced: 2024-08-03T23:13:24.749Z (5 months ago)
- Topics: acai, adversarial-mixup-resynthesis, autoencoders, celeba-dataset, kmnist, mixup, mnist-dataset, representation-learning, svhn-dataset, unsupervised-learning
- Language: Python
- Size: 13.9 MB
- Stars: 33
- Watchers: 4
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-Mixup
README
# Adversarial mixup resynthesis
_Christopher Beckham, Sina Honari, Vikas Verma, Alex Lamb, Farnoosh Ghadiri, R Devon Hjelm, Yoshua Bengio, Christopher Pal_
[[paper]](https://arxiv.org/abs/1903.02709) [[video]](https://www.youtube.com/watch?v=ezbC3_VZeNY) [[poster]](https://postersession.ai/poster/on-adversarial-mixup-resynthesis/)
In this paper, we explore new approaches to combining information encoded within the learned representations of autoencoders. We explore models that are capable of combining the attributes of multiple inputs such that a resynthesised output is trained to fool an adversarial discriminator for real versus synthesised data. Furthermore, we explore the use of such an architecture in the context of semi-supervised learning, where we learn a mixing function whose objective is to produce interpolations of hidden states, or masked combinations of latent representations that are consistent with a conditioned class label. We show quantitative and qualitative evidence that such a formulation is an interesting avenue of research.
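To make the mixing functions concrete, here is a minimal PyTorch sketch of the two variants described above -- interpolation of latent codes and random binary masking -- where `z1` and `z2` stand for latent codes from a hypothetical encoder (an illustration, not the repository's code):

```python
import torch

def mix_interp(z1, z2, alpha):
    # Convex combination of two latent codes (mixup in latent space).
    return alpha * z1 + (1 - alpha) * z2

def mix_mask(z1, z2, p=0.5):
    # A random binary mask picks each latent unit from one input or the other.
    m = torch.bernoulli(torch.full_like(z1, p))
    return m * z1 + (1 - m) * z2

# Hypothetical usage with an autoencoder:
#   z1, z2 = encoder(x1), encoder(x2)
#   x_mix = decoder(mix_interp(z1, z2, alpha=torch.rand(z1.size(0), 1, 1, 1)))
# The discriminator is then trained to distinguish x_mix from real data,
# while the autoencoder is trained to make x_mix indistinguishable.
```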
## Setting up the project
### Cloning the repository:
`$ git clone https://github.com/christopher-beckham/amr.git`

### Environment setup
1. Install Anaconda, if not already done, by following these instructions: https://docs.anaconda.com/anaconda/install/linux/
2. Create a conda environment from the `environment.yaml` file to install the dependencies: `$ conda env create -f environment.yaml`
3. Activate the new conda environment: `$ conda activate amr`

(Note: this was my dev environment exported directly to yaml, and it may contain a lot of unnecessary dependencies. If you want a cleaner environment, it shouldn't be too hard to start from scratch -- all of the dependencies can be easily installed with either `pip` or `conda`.)
### Getting the data
For most of the experiments, there is no need to download external datasets, since they are already provided by `torchvision` (namely, MNIST and SVHN). The exception is the dSprites dataset (used for the disentanglement experiments). To download it, simply do:
```
cd iterators
wget https://github.com/deepmind/dsprites-dataset/raw/master/dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz
```
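After downloading, you can sanity-check the archive with NumPy. The key names below are the ones documented in the dSprites release, not something this repository defines:

```python
import numpy as np

# Load the dSprites archive downloaded above.
data = np.load("iterators/dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz")
imgs = data["imgs"]                # binary 64x64 sprite images
latents = data["latents_values"]   # ground-truth generative factors
print(imgs.shape, latents.shape)   # expected: (737280, 64, 64) (737280, 6)
```

## Running experiments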
The experiment scripts can be found in the `exps` folder. Simply `cd` into this folder and run `bash <experiment_folder>/<script>.sh`. Experiments for Table 1 in the paper correspond to the folders `mnist_downstream`, `kmnist_downstream`, and `svhn32_downstream`. For Table 3, consult `svhn256_downstream`.

### Training the models
In order to launch experiments, we use the `task_launcher.py` script. This is a bit hefty at this point and contains a lot of argument options, so it's recommended that you get familiar with them by running `python task_launcher.py --help`. You can also see various examples of its usage by looking at the experiment scripts in the `exps` folder.

### Evaluating samples
This is also easy! Simply add `--mode=interp_train` (or `--mode=interp_valid`) to the script. This changes the mode in the task launcher from training (the default) to interpolation mode. In this mode, interpolations between samples will be produced and written to the results folder. The number of samples used for interpolation depends on `--val_batch_size`.
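Conceptually, interpolation mode decodes convex combinations of two latent codes swept from one endpoint to the other; here is a minimal sketch of the idea (`encoder` and `decoder` are hypothetical stand-ins, not the task launcher's actual API):

```python
import torch

# Schematic of what interpolation mode produces: decodings of convex
# combinations between two encoded samples.
def interp_strip(encoder, decoder, x1, x2, steps=8):
    z1, z2 = encoder(x1), encoder(x2)
    alphas = torch.linspace(0.0, 1.0, steps)
    return torch.stack([decoder(a * z1 + (1 - a) * z2) for a in alphas])
```

## Notes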
- The main architecture we use here is one derived from a PyTorch reimplementation of ACAI, courtesy of Kyle McDonald, whose implementation can be found here: https://gist.github.com/kylemcdonald/e8ca989584b3b0e6526c0a737ed412f0
- The main changes we make are that we add spectral norm to the discriminator and instance norm to the generator, both to stabilise GAN training (a minimal sketch follows this list).
- Generator code: https://github.com/christopher-beckham/amr/blob/dev/architectures/arch_kyle.py#L21-L96
- Discriminator code: https://github.com/christopher-beckham/amr/blob/dev/architectures/arch_kyle.py#L98-L108
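For reference, here is what those two tricks look like with PyTorch built-ins; the channel sizes are illustrative, not the repository's actual architecture:

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# Spectral norm constrains the Lipschitz constant of a discriminator layer,
# which helps stabilise adversarial training.
disc_layer = spectral_norm(nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1))

# Instance norm in the generator normalises each feature map per sample.
gen_block = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1),
    nn.InstanceNorm2d(64),
    nn.ReLU(),
)
```

## Troubleshooting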
If you are experiencing any issues, please file a ticket in the Issues section.