{"id":15034709,"url":"https://github.com/clementchadebec/benchmark_vae","last_synced_at":"2025-05-14T15:10:33.952Z","repository":{"id":37292583,"uuid":"412850368","full_name":"clementchadebec/benchmark_VAE","owner":"clementchadebec","description":"Unifying Variational Autoencoder (VAE) implementations in Pytorch (NeurIPS 2022)","archived":false,"fork":false,"pushed_at":"2024-07-31T12:13:28.000Z","size":44563,"stargazers_count":1905,"open_issues_count":32,"forks_count":174,"subscribers_count":17,"default_branch":"main","last_synced_at":"2025-04-29T13:39:16.603Z","etag":null,"topics":["benchmarking","beta-vae","comparison","normalizing-flows","pixel-cnn","pytorch","reproducibility","reproducible-research","vae","vae-gan","vae-implementation","vae-pytorch","variational-autoencoder","vq-vae","wasserstein-autoencoder"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/clementchadebec.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2021-10-02T16:26:24.000Z","updated_at":"2025-04-28T13:48:33.000Z","dependencies_parsed_at":"2023-02-19T10:16:05.682Z","dependency_job_id":"1d01c62c-7575-486e-8420-f5b219b84f98","html_url":"https://github.com/clementchadebec/benchmark_VAE","commit_stats":{"total_commits":345,"total_committers":20,"mean_commits":17.25,"dds":"0.42028985507246375","last_synced_commit":"6419e21558f2a6abc2da99944bddda846ded30f4"},"previous_names":[],"tags_count":11,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/clementchadebec%2Fbenchmark_VAE","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/clementchadebec%2Fbenchmark_VAE/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/clementchadebec%2Fbenchmark_VAE/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/clementchadebec%2Fbenchmark_VAE/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/clementchadebec","download_url":"https://codeload.github.com/clementchadebec/benchmark_VAE/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254170054,"owners_count":22026219,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmarking","beta-vae","comparison","normalizing-flows","pixel-cnn","pytorch","reproducibility","reproducible-research","vae","vae-gan","vae-implementation","vae-pytorch","variational-autoencoder","vq-vae","wasserstein-autoencoder"],"created_at":"2024-09-24T20:26:05.112Z","updated_at":"2025-05-14T15:10:33.935Z","avatar_url":"https://github.com/clementchadebec.png","langu
age":"Python","readme":"\u003cp align=\"center\"\u003e\n\t\u003ca href=\"https://pypi.org/project/pythae/\"\u003e\n\t    \u003cimg src='https://badge.fury.io/py/pythae.svg' alt='Python' /\u003e\n\t\u003c/a\u003e\n    \u003ca\u003e\n\t    \u003cimg src='https://img.shields.io/badge/python-3.7%7C3.8%7C3.9%2B-blueviolet' alt='Python' /\u003e\n\t\u003c/a\u003e\n\t\u003ca href='https://pythae.readthedocs.io/en/latest/?badge=latest'\u003e\n    \t\u003cimg src='https://readthedocs.org/projects/pythae/badge/?version=latest' alt='Documentation Status' /\u003e\n\t\u003c/a\u003e\n\t\u003ca href='https://opensource.org/licenses/Apache-2.0'\u003e\n\t    \u003cimg src='https://img.shields.io/github/license/clementchadebec/benchmark_VAE?color=blue' /\u003e\n\t\u003c/a\u003e\u003cbr\u003e\n    \u003ca\u003e\n\t    \u003cimg src='https://img.shields.io/badge/code%20style-black-black' /\u003e\n\t\u003c/a\u003e\n\t\u003ca href=\"https://codecov.io/gh/clementchadebec/benchmark_VAE\"\u003e\n  \t\t\u003cimg src=\"https://codecov.io/gh/clementchadebec/benchmark_VAE/branch/main/graph/badge.svg?token=KEM7KKISXJ\"/\u003e\n\t\u003c/a\u003e\n\t\u003ca href=\"https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/overview_notebook.ipynb\"\u003e\n  \t\t\u003cimg src=\"https://colab.research.google.com/assets/colab-badge.svg\"/\u003e\n\t\u003c/a\u003e\n\t\u003c/a\u003e\n\u003c/p\u003e\n\n\u003c/p\u003e\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://pythae.readthedocs.io/en/latest/\"\u003eDocumentation\u003c/a\u003e\n\u003c/p\u003e\n\t\n    \n# pythae \n\nThis library implements some of the most common (Variational) Autoencoder models under a unified implementation. In particular, it \nprovides the possibility to perform benchmark experiments and comparisons by training \nthe models with the same autoencoding neural network architecture. The feature *make your own autoencoder* \nallows you to train any of these models with your own data and own Encoder and Decoder neural networks. It integrates experiment monitoring tools such [wandb](https://wandb.ai/), [mlflow](https://mlflow.org/) or [comet-ml](https://www.comet.com/signup?utm_source=pythae\u0026utm_medium=partner\u0026utm_campaign=AMS_US_EN_SNUP_Pythae_Comet_Integration) 🧪 and allows model sharing and loading from the [HuggingFace Hub](https://huggingface.co/models) 🤗 in a few lines of code.\n\n**News** 📢\n\nAs of v0.1.0, `Pythae` now supports distributed training using PyTorch's [DDP](https://pytorch.org/docs/stable/notes/ddp.html). 
You can now train your favorite VAE faster and on larger datasets, still with a few lines of code.
See our speed-up [benchmark](#benchmark).

## Quick access:
- [Installation](#installation)
- [Implemented models](#available-models) / [Implemented samplers](#available-samplers)
- [Reproducibility statement](#reproducibility) / [Results flavor](#results)
- [Model training](#launching-a-model-training) / [Data generation](#launching-data-generation) / [Custom network architectures](#define-your-own-autoencoder-architecture) / [Distributed training](#distributed-training-with-pythae)
- [Model sharing with 🤗 Hub](#sharing-your-models-with-the-huggingface-hub-) / [Experiment tracking with `wandb`](#monitoring-your-experiments-with-wandb-) / [Experiment tracking with `mlflow`](#monitoring-your-experiments-with-mlflow-) / [Experiment tracking with `comet_ml`](#monitoring-your-experiments-with-comet_ml-)
- [Tutorials](#getting-your-hands-on-the-code) / [Documentation](https://pythae.readthedocs.io/en/latest/)
- [Contributing 🚀](#contributing-) / [Issues 🛠️](#dealing-with-issues-%EF%B8%8F)
- [Citing this repository](#citation)

# Installation

To install the latest stable release of this library, run the following using ``pip``:

```bash
$ pip install pythae
```

To install the latest GitHub version of this library, run the following using ``pip``:

```bash
$ pip install git+https://github.com/clementchadebec/benchmark_VAE.git
```

Alternatively, you can clone the GitHub repo to access the tests, tutorials and scripts:
```bash
$ git clone https://github.com/clementchadebec/benchmark_VAE.git
```
and install the library:
```bash
$ cd benchmark_VAE
$ pip install -e .
```

## Available Models

Below is the list of the models currently implemented in the library.

| Models | Training example | Paper | Official Implementation |
|:---:|:---:|:---:|:---:|
| Autoencoder (AE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/ae_training.ipynb) | | |
| Variational Autoencoder (VAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vae_training.ipynb) | [link](https://arxiv.org/abs/1312.6114) | |
| Beta Variational Autoencoder (BetaVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/beta_vae_training.ipynb) | [link](https://openreview.net/pdf?id=Sy2fzU9gl) | |
| VAE with Linear Normalizing Flows (VAE_LinNF) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vae_lin_nf_training.ipynb) | [link](https://arxiv.org/abs/1505.05770) | |
| VAE with Inverse Autoregressive Flows (VAE_IAF) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vae_iaf_training.ipynb) | [link](https://arxiv.org/abs/1606.04934) | [link](https://github.com/openai/iaf) |
| Disentangled Beta Variational Autoencoder (DisentangledBetaVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/disentangled_beta_vae_training.ipynb) | [link](https://arxiv.org/abs/1804.03599) | |
| Disentangling by Factorising (FactorVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/factor_vae_training.ipynb) | [link](https://arxiv.org/abs/1802.05983) | |
| Beta-TC-VAE (BetaTCVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/beta_tc_vae_training.ipynb) | [link](https://arxiv.org/abs/1802.04942) | [link](https://github.com/rtqichen/beta-tcvae) |
| Importance Weighted Autoencoder (IWAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/iwae_training.ipynb) | [link](https://arxiv.org/abs/1509.00519v4) | [link](https://github.com/yburda/iwae) |
| Multiply Importance Weighted Autoencoder (MIWAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/miwae_training.ipynb) | [link](https://arxiv.org/abs/1802.04537) | |
| Partially Importance Weighted Autoencoder (PIWAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/piwae_training.ipynb) | [link](https://arxiv.org/abs/1802.04537) | |
| Combination Importance Weighted Autoencoder (CIWAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/ciwae_training.ipynb) | [link](https://arxiv.org/abs/1802.04537) | |
| VAE with perceptual metric similarity (MSSSIM_VAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/ms_ssim_vae_training.ipynb) | [link](https://arxiv.org/abs/1511.06409) | |
| Wasserstein Autoencoder (WAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/wae_training.ipynb) | [link](https://arxiv.org/abs/1711.01558) | [link](https://github.com/tolstikhin/wae) |
| Info Variational Autoencoder (INFOVAE_MMD) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/info_vae_training.ipynb) | [link](https://arxiv.org/abs/1706.02262) | |
| VAMP Autoencoder (VAMP) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vamp_training.ipynb) | [link](https://arxiv.org/abs/1705.07120) | [link](https://github.com/jmtomczak/vae_vampprior) |
| Hyperspherical VAE (SVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/svae_training.ipynb) | [link](https://arxiv.org/abs/1804.00891) | [link](https://github.com/nicola-decao/s-vae-pytorch) |
| Poincaré Disk VAE (PoincareVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/pvae_training.ipynb) | [link](https://arxiv.org/abs/1901.06033) | [link](https://github.com/emilemathieu/pvae) |
| Adversarial Autoencoder (Adversarial_AE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/adversarial_ae_training.ipynb) | [link](https://arxiv.org/abs/1511.05644) | |
| Variational Autoencoder GAN (VAEGAN) 🥗 | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vaegan_training.ipynb) | [link](https://arxiv.org/abs/1512.09300) | [link](https://github.com/andersbll/autoencoding_beyond_pixels) |
| Vector Quantized VAE (VQVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/vqvae_training.ipynb) | [link](https://arxiv.org/abs/1711.00937) | [link](https://github.com/deepmind/sonnet/blob/v2/sonnet/) |
| Hamiltonian VAE (HVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/hvae_training.ipynb) | [link](https://arxiv.org/abs/1805.11328) | [link](https://github.com/anthonycaterini/hvae-nips) |
| Regularized AE with L2 decoder param (RAE_L2) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/rae_l2_training.ipynb) | [link](https://arxiv.org/abs/1903.12436) | [link](https://github.com/ParthaEth/Regularized_autoencoders-RAE-/tree/master/) |
| Regularized AE with gradient penalty (RAE_GP) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/rae_gp_training.ipynb) | [link](https://arxiv.org/abs/1903.12436) | [link](https://github.com/ParthaEth/Regularized_autoencoders-RAE-/tree/master/) |
| Riemannian Hamiltonian VAE (RHVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/rhvae_training.ipynb) | [link](https://arxiv.org/abs/2105.00026) | [link](https://github.com/clementchadebec/pyraug) |
| Hierarchical Residual Quantization (HRQVAE) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/models_training/hrqvae_training.ipynb) | [link](https://aclanthology.org/2022.acl-long.178/) | [link](https://github.com/tomhosking/hrq-vae) |

**See [reconstruction](#reconstruction) and [generation](#generation) results for all aforementioned models**

## Available Samplers

Below is the list of the samplers currently implemented in the library.

| Samplers | Models | Paper | Official Implementation |
|:---:|:---:|:---:|:---:|
| Normal prior (NormalSampler) | all models | [link](https://arxiv.org/abs/1312.6114) | |
| Gaussian mixture (GaussianMixtureSampler) | all models | [link](https://arxiv.org/abs/1903.12436) | [link](https://github.com/ParthaEth/Regularized_autoencoders-RAE-/tree/master/models/rae) |
| Two stage VAE sampler (TwoStageVAESampler) | all VAE based models | [link](https://openreview.net/pdf?id=B1e0X3C9tQ) | [link](https://github.com/daib13/TwoStageVAE/) |
| Unit sphere uniform sampler (HypersphereUniformSampler) | SVAE | [link](https://arxiv.org/abs/1804.00891) | [link](https://github.com/nicola-decao/s-vae-pytorch) |
| Poincaré Disk sampler (PoincareDiskSampler) | PoincareVAE | [link](https://arxiv.org/abs/1901.06033) | [link](https://github.com/emilemathieu/pvae) |
| VAMP prior sampler (VAMPSampler) | VAMP | [link](https://arxiv.org/abs/1705.07120) | [link](https://github.com/jmtomczak/vae_vampprior) |
| Manifold sampler (RHVAESampler) | RHVAE | [link](https://arxiv.org/abs/2105.00026) | [link](https://github.com/clementchadebec/pyraug) |
| Masked Autoregressive Flow Sampler (MAFSampler) | all models | [link](https://arxiv.org/abs/1705.07057v4) | [link](https://github.com/gpapamak/maf) |
| Inverse Autoregressive Flow Sampler (IAFSampler) | all models | [link](https://arxiv.org/abs/1606.04934) | [link](https://github.com/openai/iaf) |
| PixelCNN (PixelCNNSampler) | VQVAE | [link](https://arxiv.org/abs/1606.05328) | |
## Reproducibility

We validate the implementations by reproducing some results presented in the original publications when the official code has been released or when enough details about the experimental section of the papers were available. See [reproducibility](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts/reproducibility) for more details.

## Launching a model training

To launch a model training, you only need to call a `TrainingPipeline` instance.

```python
>>> from pythae.pipelines import TrainingPipeline
>>> from pythae.models import VAE, VAEConfig
>>> from pythae.trainers import BaseTrainerConfig

>>> # Set up the training configuration
>>> my_training_config = BaseTrainerConfig(
...     output_dir='my_model',
...     num_epochs=50,
...     learning_rate=1e-3,
...     per_device_train_batch_size=200,
...     per_device_eval_batch_size=200,
...     train_dataloader_num_workers=2,
...     eval_dataloader_num_workers=2,
...     steps_saving=20,
...     optimizer_cls="AdamW",
...     optimizer_params={"weight_decay": 0.05, "betas": (0.91, 0.995)},
...     scheduler_cls="ReduceLROnPlateau",
...     scheduler_params={"patience": 5, "factor": 0.5}
... )
>>> # Set up the model configuration
>>> my_vae_config = VAEConfig(
...     input_dim=(1, 28, 28),
...     latent_dim=10
... )
>>> # Build the model
>>> my_vae_model = VAE(
...     model_config=my_vae_config
... )
>>> # Build the Pipeline
>>> pipeline = TrainingPipeline(
...     training_config=my_training_config,
...     model=my_vae_model
... )
>>> # Launch the Pipeline
>>> pipeline(
...     train_data=your_train_data, # must be torch.Tensor, np.array or torch datasets
...     eval_data=your_eval_data # must be torch.Tensor, np.array or torch datasets
... )
```

At the end of training, the best model weights, model configuration and training configuration are stored in a `final_model` folder available in `my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss` (with `my_model` being the `output_dir` argument of the `BaseTrainerConfig`). If you further set the `steps_saving` argument to a certain value, folders named `checkpoint_epoch_k` containing the best model weights, optimizer, scheduler, configuration and training configuration at epoch *k* will also appear in `my_model/MODEL_NAME_training_YYYY-MM-DD_hh-mm-ss`.
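Once the run has finished, the stored weights can be reloaded with `AutoModel` (a minimal sketch; the timestamped folder name is illustrative and should be replaced by the one actually created under `my_model`):

```python
>>> from pythae.models import AutoModel
>>> # Reload the best model saved by the TrainingPipeline above
>>> my_trained_vae = AutoModel.load_from_folder(
...     'my_model/VAE_training_YYYY-MM-DD_hh-mm-ss/final_model'
... )
```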
## Launching a training on benchmark datasets
We also provide a training script example [here](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts/training.py) that can be used to train the models on benchmark datasets (mnist, cifar10, celeba ...). The script can be launched with the following command line:

```bash
python training.py --dataset mnist --model_name ae --model_config 'configs/ae_config.json' --training_config 'configs/base_training_config.json'
```

See [README.md](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts/README.md) for further details on this script.

## Launching data generation

### Using the `GenerationPipeline`

The easiest way to launch a data generation from a trained model consists in using the built-in `GenerationPipeline` provided in Pythae. Say you want to generate 100 samples using a `MAFSampler`; all you have to do is 1) reload the trained model, 2) define the sampler's configuration and 3) create and launch the `GenerationPipeline` as follows:

```python
>>> from pythae.models import AutoModel
>>> from pythae.samplers import MAFSamplerConfig
>>> from pythae.pipelines import GenerationPipeline
>>> from pythae.trainers import BaseTrainerConfig
>>> # Retrieve the trained model
>>> my_trained_vae = AutoModel.load_from_folder(
...     'path/to/your/trained/model'
... )
>>> my_sampler_config = MAFSamplerConfig(
...     n_made_blocks=2,
...     n_hidden_in_made=3,
...     hidden_size=128
... )
>>> # Build the pipeline
>>> pipe = GenerationPipeline(
...     model=my_trained_vae,
...     sampler_config=my_sampler_config
... )
>>> # Launch data generation
>>> generated_samples = pipe(
...     num_samples=100,
...     return_gen=True, # If False, returns nothing
...     train_data=train_data, # Needed to fit the sampler
...     eval_data=eval_data, # Needed to fit the sampler
...     training_config=BaseTrainerConfig(num_epochs=200) # TrainingConfig to use to fit the sampler
... )
```

### Using the Samplers

Alternatively, you can launch the data generation process from a trained model directly with the sampler. For instance, to generate new data with your sampler, run the following:

```python
>>> from pythae.models import AutoModel
>>> from pythae.samplers import NormalSampler
>>> # Retrieve the trained model
>>> my_trained_vae = AutoModel.load_from_folder(
...     'path/to/your/trained/model'
... )
>>> # Define your sampler
>>> my_sampler = NormalSampler(
...     model=my_trained_vae
... )
>>> # Generate samples
>>> gen_data = my_sampler.sample(
...     num_samples=50,
...     batch_size=10,
...     output_dir=None,
...     return_gen=True
... )
```

If you set `output_dir` to a specific path, the generated images will be saved as `.png` files named `00000000.png`, `00000001.png` ...
The samplers can be used with any model as long as it is suited. For instance, a `GaussianMixtureSampler` instance can be used to generate from any model, but a `VAMPSampler` will only be usable with a `VAMP` model. Check [here](#available-samplers) to see which ones apply to your model. Be careful that some samplers, such as the `GaussianMixtureSampler`, may need to be fitted by calling the `fit` method before use. Below is an example for the `GaussianMixtureSampler`.

```python
>>> from pythae.models import AutoModel
>>> from pythae.samplers import GaussianMixtureSampler, GaussianMixtureSamplerConfig
>>> # Retrieve the trained model
>>> my_trained_vae = AutoModel.load_from_folder(
...     'path/to/your/trained/model'
... )
>>> # Define your sampler
>>> gmm_sampler_config = GaussianMixtureSamplerConfig(
...     n_components=10
... )
>>> gmm_sampler = GaussianMixtureSampler(
...     sampler_config=gmm_sampler_config,
...     model=my_trained_vae
... )
>>> # Fit the sampler
>>> gmm_sampler.fit(train_dataset)
>>> # Generate samples
>>> gen_data = gmm_sampler.sample(
...     num_samples=50,
...     batch_size=10,
...     output_dir=None,
...     return_gen=True
... )
```
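If you would rather write the generated images to disk than keep them in memory, the same `sample` call can point to an output folder (a minimal sketch reusing the fitted `gmm_sampler` above; the folder name is illustrative):

```python
>>> # Images are written as 00000000.png, 00000001.png, ... in the given folder
>>> gmm_sampler.sample(
...     num_samples=50,
...     batch_size=10,
...     output_dir='my_generated_images', # illustrative output folder
...     return_gen=False # nothing is returned, samples are only saved to disk
... )
```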
## Define your own Autoencoder architecture

Pythae provides you with the possibility to define your own neural networks within the VAE models. For instance, say you want to train a Wasserstein AE with a specific encoder and decoder, you can do the following:

```python
>>> import torch
>>> from pythae.models.nn import BaseEncoder, BaseDecoder
>>> from pythae.models.base.base_utils import ModelOutput
>>> class My_Encoder(BaseEncoder):
...     def __init__(self, args=None): # Args is a ModelConfig instance
...         BaseEncoder.__init__(self)
...         self.layers = my_nn_layers()
...
...     def forward(self, x: torch.Tensor) -> ModelOutput:
...         out = self.layers(x)
...         output = ModelOutput(
...             embedding=out # Set the output from the encoder in a ModelOutput instance
...         )
...         return output
...
... class My_Decoder(BaseDecoder):
...     def __init__(self, args=None):
...         BaseDecoder.__init__(self)
...         self.layers = my_nn_layers()
...
...     def forward(self, x: torch.Tensor) -> ModelOutput:
...         out = self.layers(x)
...         output = ModelOutput(
...             reconstruction=out # Set the output from the decoder in a ModelOutput instance
...         )
...         return output
...
>>> my_encoder = My_Encoder()
>>> my_decoder = My_Decoder()
```

And now build the model:

```python
>>> from pythae.models import WAE_MMD, WAE_MMD_Config
>>> # Set up the model configuration
>>> my_wae_config = WAE_MMD_Config(
...     input_dim=(1, 28, 28),
...     latent_dim=10
... )
>>> # Build the model
>>> my_wae_model = WAE_MMD(
...     model_config=my_wae_config,
...     encoder=my_encoder, # pass your encoder as argument when building the model
...     decoder=my_decoder # pass your decoder as argument when building the model
... )
```

**important note 1**: For all AE-based models (AE, WAE, RAE_L2, RAE_GP), both the encoder and decoder must return a `ModelOutput` instance. For the encoder, the `ModelOutput` instance must contain the embeddings under the key `embedding`. For the decoder, the `ModelOutput` instance must contain the reconstructions under the key `reconstruction`.

**important note 2**: For all VAE-based models (VAE, BetaVAE, IWAE, HVAE, VAMP, RHVAE), both the encoder and decoder must return a `ModelOutput` instance. For the encoder, the `ModelOutput` instance must contain the embeddings and **log**-covariance matrices (of shape batch_size x latent_space_dim) under the keys `embedding` and `log_covariance`, respectively. For the decoder, the `ModelOutput` instance must contain the reconstructions under the key `reconstruction`.

## Using benchmark neural nets
You can also find predefined neural network architectures for the most common data sets (*i.e.* MNIST, CIFAR, CELEBA ...) that can be loaded as follows:

```python
>>> from pythae.models.nn.benchmark.mnist import (
...     Encoder_Conv_AE_MNIST, # For AE based model (only returns embeddings)
...     Encoder_Conv_VAE_MNIST, # For VAE based model (returns embeddings and log_covariances)
...     Decoder_Conv_AE_MNIST
... )
```
Replace *mnist* by *cifar* or *celeba* to access the other neural nets.
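These predefined networks can be passed to a model in the same way as the custom networks above. The sketch below assumes, as the custom-network example suggests, that the benchmark encoders/decoders take the model configuration as their constructor argument:

```python
>>> from pythae.models import VAE, VAEConfig
>>> from pythae.models.nn.benchmark.mnist import (
...     Encoder_Conv_VAE_MNIST,
...     Decoder_Conv_AE_MNIST
... )
>>> model_config = VAEConfig(
...     input_dim=(1, 28, 28),
...     latent_dim=10
... )
>>> # Build a VAE using the predefined MNIST convolutional networks
>>> my_vae_model = VAE(
...     model_config=model_config,
...     encoder=Encoder_Conv_VAE_MNIST(model_config),
...     decoder=Decoder_Conv_AE_MNIST(model_config)
... )
```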
## Distributed Training with `Pythae`
As of `v0.1.0`, Pythae now supports distributed training using PyTorch's [DDP](https://pytorch.org/docs/stable/notes/ddp.html). It allows you to train your favorite VAE faster and on larger datasets using multi-GPU and/or multi-node training.

To do so, you can build a Python script that will then be launched by a launcher (such as `srun` on a cluster). The only thing needed in the script is to specify some elements relative to the distributed environment (such as the number of nodes/GPUs) directly in the training configuration, as follows:

```python
>>> training_config = BaseTrainerConfig(
...     num_epochs=10,
...     learning_rate=1e-3,
...     per_device_train_batch_size=64,
...     per_device_eval_batch_size=64,
...     train_dataloader_num_workers=8,
...     eval_dataloader_num_workers=8,
...     dist_backend="nccl", # distributed backend
...     world_size=8, # number of gpus to use (n_nodes x n_gpus_per_node)
...     rank=5, # global gpu id
...     local_rank=1, # gpu id within a node
...     master_addr="localhost", # master address
...     master_port="12345" # master port
... )
```

See this [example script](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/scripts/distributed_training_imagenet.py) that defines a multi-GPU VQVAE training on the ImageNet dataset. Please note that the way the distributed environment variables (`world_size`, `rank` ...) are recovered may be specific to the cluster and launcher you use.
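For reference, here is one way a script can recover these values when it is launched with `torchrun`, which exports `RANK`, `LOCAL_RANK`, `WORLD_SIZE`, `MASTER_ADDR` and `MASTER_PORT` as environment variables (a minimal sketch, not part of the library; adapt the variable names to whatever your launcher provides):

```python
import os

from pythae.trainers import BaseTrainerConfig

# Fill the distributed fields from the environment set up by the launcher (here: torchrun)
training_config = BaseTrainerConfig(
    num_epochs=10,
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    dist_backend="nccl",
    world_size=int(os.environ["WORLD_SIZE"]),
    rank=int(os.environ["RANK"]),
    local_rank=int(os.environ["LOCAL_RANK"]),
    master_addr=os.environ["MASTER_ADDR"],
    master_port=os.environ["MASTER_PORT"],
)
```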
### Benchmark

Below are indicated the training times for a Vector Quantized VAE (VQ-VAE) with `Pythae` for 100 epochs on MNIST on V100 16GB GPU(s), for 50 epochs on [FFHQ](https://github.com/NVlabs/ffhq-dataset) (1024x1024 images) and for 20 epochs on [ImageNet-1k](https://huggingface.co/datasets/imagenet-1k) on V100 32GB GPU(s).

|  | Train Data | 1 GPU | 4 GPUs | 2x4 GPUs |
|:---:|:---:|:---:|:---:|:---:|
| MNIST (VQ-VAE) | 28x28 images (50k) | 235.18 s | 62.00 s | 35.86 s |
| FFHQ 1024x1024 (VQVAE) | 1024x1024 RGB images (60k) | 19h 1min | 5h 6min | 2h 37min |
| ImageNet-1k 128x128 (VQVAE) | 128x128 RGB images (~ 1.2M) | 6h 25min | 1h 41min | 51min 26s |

For each dataset, we provide the benchmarking scripts [here](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts).

## Sharing your models with the HuggingFace Hub 🤗
Pythae also allows you to share your models on the [HuggingFace Hub](https://huggingface.co/models). To do so you need:
- a valid HuggingFace account
- the package `huggingface_hub` installed in your virtual env. If not, you can install it with
```
$ python -m pip install huggingface_hub
```
- to be logged in to your HuggingFace account using
```
$ huggingface-cli login
```

### Uploading a model to the Hub
Any pythae model can be easily uploaded using the method `push_to_hf_hub`:
```python
>>> my_vae_model.push_to_hf_hub(hf_hub_path="your_hf_username/your_hf_hub_repo")
```
**Note:** If `your_hf_hub_repo` already exists and is not empty, files will be overridden. In case the repo `your_hf_hub_repo` does not exist, a repo with the same name will be created.

### Downloading models from the Hub
Equivalently, you can download or reload any Pythae model directly from the Hub using the method `load_from_hf_hub`:
```python
>>> from pythae.models import AutoModel
>>> my_downloaded_vae = AutoModel.load_from_hf_hub(hf_hub_path="path_to_hf_repo")
```

## Monitoring your experiments with `wandb` 🧪
Pythae also integrates the experiment tracking tool [wandb](https://wandb.ai/), allowing users to store their configs, monitor their trainings and compare runs through a graphic interface. To be able to use this feature you will need:
- a valid wandb account
- the package `wandb` installed in your virtual env. If not, you can install it with
```
$ pip install wandb
```
- to be logged in to your wandb account using
```
$ wandb login
```

### Creating a `WandbCallback`
Launching an experiment monitoring with `wandb` in pythae is pretty simple. The only thing a user needs to do is create a `WandbCallback` instance...

```python
>>> # Create your callback
>>> from pythae.trainers.training_callbacks import WandbCallback
>>> callbacks = [] # the TrainingPipeline expects a list of callbacks
>>> wandb_cb = WandbCallback() # Build the callback
>>> # Set up the callback
>>> wandb_cb.setup(
...     training_config=your_training_config, # training config
...     model_config=your_model_config, # model config
...     project_name="your_wandb_project", # specify your wandb project
...     entity_name="your_wandb_entity", # specify your wandb entity
... )
>>> callbacks.append(wandb_cb) # Add it to the callbacks list
```
...and then pass it to the `TrainingPipeline`.
```python
>>> pipeline = TrainingPipeline(
...     training_config=config,
...     model=model
... )
>>> pipeline(
...     train_data=train_dataset,
...     eval_data=eval_dataset,
...     callbacks=callbacks # pass the callbacks to the TrainingPipeline and you are done!
... )
>>> # You can log to https://wandb.ai/your_wandb_entity/your_wandb_project to monitor your training
```
See the detailed tutorial [wandb_experiment_monitoring.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) for a full example.
## Monitoring your experiments with `mlflow` 🧪
Pythae also integrates the experiment tracking tool [mlflow](https://mlflow.org/), allowing users to store their configs, monitor their trainings and compare runs through a graphic interface. To be able to use this feature you will need:
- the package `mlflow` installed in your virtual env. If not, you can install it with
```
$ pip install mlflow
```

### Creating a `MLFlowCallback`
Launching an experiment monitoring with `mlflow` in pythae is pretty simple. The only thing a user needs to do is create a `MLFlowCallback` instance...

```python
>>> # Create your callback
>>> from pythae.trainers.training_callbacks import MLFlowCallback
>>> callbacks = [] # the TrainingPipeline expects a list of callbacks
>>> mlflow_cb = MLFlowCallback() # Build the callback
>>> # Set up the callback
>>> mlflow_cb.setup(
...     training_config=your_training_config, # training config
...     model_config=your_model_config, # model config
...     run_name="mlflow_cb_example", # specify your mlflow run
... )
>>> callbacks.append(mlflow_cb) # Add it to the callbacks list
```
...and then pass it to the `TrainingPipeline`.
```python
>>> pipeline = TrainingPipeline(
...     training_config=config,
...     model=model
... )
>>> pipeline(
...     train_data=train_dataset,
...     eval_data=eval_dataset,
...     callbacks=callbacks # pass the callbacks to the TrainingPipeline and you are done!
... )
```
You can then visualize your metrics by running the following command in the directory containing the `./mlruns` folder:
```bash
$ mlflow ui
```
See the detailed tutorial [mlflow_experiment_monitoring.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) for a full example.

## Monitoring your experiments with `comet_ml` 🧪
Pythae also integrates the experiment tracking tool [comet_ml](https://www.comet.com/signup?utm_source=pythae&utm_medium=partner&utm_campaign=AMS_US_EN_SNUP_Pythae_Comet_Integration), allowing users to store their configs, monitor their trainings and compare runs through a graphic interface. To be able to use this feature you will need:
- the package `comet_ml` installed in your virtual env. If not, you can install it with
```
$ pip install comet_ml
```

### Creating a `CometCallback`
Launching an experiment monitoring with `comet_ml` in pythae is pretty simple. The only thing a user needs to do is create a `CometCallback` instance...

```python
>>> # Create your callback
>>> from pythae.trainers.training_callbacks import CometCallback
>>> callbacks = [] # the TrainingPipeline expects a list of callbacks
>>> comet_cb = CometCallback() # Build the callback
>>> # Set up the callback
>>> comet_cb.setup(
...     training_config=training_config, # training config
...     model_config=model_config, # model config
...     api_key="your_comet_api_key", # specify your comet api-key
...     project_name="your_comet_project", # specify your comet project
...     #offline_run=True, # run in offline mode
...     #offline_directory='my_offline_runs' # set the directory to store the offline runs
... )
>>> callbacks.append(comet_cb) # Add it to the callbacks list
```
...and then pass it to the `TrainingPipeline`.
```python
>>> pipeline = TrainingPipeline(
...     training_config=config,
...     model=model
... )
>>> pipeline(
...     train_data=train_dataset,
...     eval_data=eval_dataset,
...     callbacks=callbacks # pass the callbacks to the TrainingPipeline and you are done!
... )
>>> # You can log to https://comet.com/your_comet_username/your_comet_project to monitor your training
```
See the detailed tutorial [comet_experiment_monitoring.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) for a full example.
## Getting your hands on the code

To help you understand how pythae works and how you can train your models with this library, we also provide tutorials:

- [making_your_own_autoencoder.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to pass your own networks to the models implemented in pythae [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/making_your_own_autoencoder.ipynb)

- [custom_dataset.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to use custom datasets with any of the models implemented in pythae [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/custom_dataset.ipynb)

- [hf_hub_models_sharing.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to upload models to and download models from the HuggingFace Hub [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/hf_hub_models_sharing.ipynb)

- [wandb_experiment_monitoring.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to monitor your experiments using `wandb` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/wandb_experiment_monitoring.ipynb)

- [mlflow_experiment_monitoring.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to monitor your experiments using `mlflow` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/mlflow_experiment_monitoring.ipynb)

- [comet_experiment_monitoring.ipynb](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks) shows you how to monitor your experiments using `comet_ml` [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/clementchadebec/benchmark_VAE/blob/main/examples/notebooks/comet_experiment_monitoring.ipynb)

- the [models_training](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/notebooks/models_training) folder provides notebooks showing how to train each implemented model and how to sample from it using `pythae.samplers`.

- the [scripts](https://github.com/clementchadebec/benchmark_VAE/tree/main/examples/scripts) folder provides, in particular, an example of a training script to train the models on benchmark data sets (mnist, cifar10, celeba ...)

## Dealing with issues 🛠️

If you are experiencing any issues while running the code or want to request new features/models to be implemented, please [open an issue on GitHub](https://github.com/clementchadebec/benchmark_VAE/issues).

## Contributing 🚀

You want to contribute to this library by adding a model, a sampler or simply fixing a bug? That's awesome! Thank you! Please see [CONTRIBUTING.md](https://github.com/clementchadebec/benchmark_VAE/tree/main/CONTRIBUTING.md) to follow the main contributing guidelines.
## Results

### Reconstruction
First, let's have a look at the reconstructed samples taken from the evaluation set.

| Models | MNIST | CELEBA |
|:---:|:---:|:---:|
| Eval data | ![Eval](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/eval_reconstruction_mnist.png) | ![Eval](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/eval_reconstruction_celeba.png) |
| AE | ![AE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/ae_reconstruction_mnist.png) | ![AE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/ae_reconstruction_celeba.png) |
| VAE | ![VAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_reconstruction_mnist.png) | ![VAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_reconstruction_celeba.png) |
| Beta-VAE | ![Beta](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/beta_vae_reconstruction_mnist.png) | ![Beta](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/beta_vae_reconstruction_celeba.png) |
| VAE Lin NF | ![VAE_LinNF](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_lin_nf_reconstruction_mnist.png) | ![VAE_LinNF](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_lin_nf_reconstruction_celeba.png) |
| VAE IAF | ![VAE_IAF](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_iaf_reconstruction_mnist.png) | ![VAE_IAF](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_iaf_reconstruction_celeba.png) |
| Disentangled Beta-VAE | ![Disentangled Beta](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/disentangled_beta_vae_reconstruction_mnist.png) | ![Disentangled Beta](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/disentangled_beta_vae_reconstruction_celeba.png) |
| FactorVAE | ![FactorVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/factor_vae_reconstruction_mnist.png) | ![FactorVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/factor_vae_reconstruction_celeba.png) |
| BetaTCVAE | ![BetaTCVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/beta_tc_vae_reconstruction_mnist.png) | ![BetaTCVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/beta_tc_vae_reconstruction_celeba.png) |
| IWAE | ![IWAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/iwae_reconstruction_mnist.png) | ![IWAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/iwae_reconstruction_celeba.png) |
| MSSSIM_VAE | ![MSSSIM VAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/msssim_vae_reconstruction_mnist.png) | ![MSSSIM VAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/msssim_vae_reconstruction_celeba.png) |
| WAE | ![WAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/wae_reconstruction_mnist.png) | ![WAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/wae_reconstruction_celeba.png) |
| INFO VAE | ![INFO](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/infovae_reconstruction_mnist.png) | ![INFO](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/infovae_reconstruction_celeba.png) |
| VAMP | ![VAMP](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vamp_reconstruction_mnist.png) | ![VAMP](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vamp_reconstruction_celeba.png) |
| SVAE | ![SVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/svae_reconstruction_mnist.png) | ![SVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/svae_reconstruction_celeba.png) |
| Adversarial_AE | ![AAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/aae_reconstruction_mnist.png) | ![AAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/aae_reconstruction_celeba.png) |
| VAE_GAN | ![VAEGAN](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vaegan_reconstruction_mnist.png) | ![VAEGAN](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vaegan_reconstruction_celeba.png) |
| VQVAE | ![VQVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vqvae_reconstruction_mnist.png) | ![VQVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vqvae_reconstruction_celeba.png) |
| HVAE | ![HVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/hvae_reconstruction_mnist.png) | ![HVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/hvae_reconstruction_celeba.png) |
| RAE_L2 | ![RAE L2](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rae_l2_reconstruction_mnist.png) | ![RAE L2](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rae_l2_reconstruction_celeba.png) |
| RAE_GP | ![RAE GP](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rae_gp_reconstruction_mnist.png) | ![RAE GP](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rae_gp_reconstruction_celeba.png) |
| Riemannian Hamiltonian VAE (RHVAE) | ![RHVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rhvae_reconstruction_mnist.png) | ![RHVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rhvae_reconstruction_celeba.png) |

----------------------------

### Generation

Here, we show the generated samples using each model implemented in the library and different samplers.

| Models | MNIST | CELEBA |
|:---:|:---:|:---:|
| AE + GaussianMixtureSampler | ![AE GMM](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/ae_gmm_sampling_mnist.png) | ![AE GMM](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/ae_gmm_sampling_celeba.png) |
| VAE + NormalSampler | ![VAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_normal_sampling_mnist.png) | ![VAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_normal_sampling_celeba.png) |
| VAE + GaussianMixtureSampler | ![VAE GMM](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_gmm_sampling_mnist.png) | ![VAE GMM](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_gmm_sampling_celeba.png) |
| VAE + TwoStageVAESampler | ![VAE 2 stage](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_second_stage_sampling_mnist.png) | ![VAE 2 stage](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_second_stage_sampling_celeba.png) |
| VAE + MAFSampler | ![VAE MAF](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_maf_sampling_mnist.png) | ![VAE MAF](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_maf_sampling_celeba.png) |
| Beta-VAE + NormalSampler | ![Beta Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/beta_vae_normal_sampling_mnist.png) | ![Beta Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/beta_vae_normal_sampling_celeba.png) |
| VAE Lin NF + NormalSampler | ![VAE_LinNF Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_lin_nf_normal_sampling_mnist.png) | ![VAE_LinNF Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_lin_nf_normal_sampling_celeba.png) |
| VAE IAF + NormalSampler | ![VAE_IAF Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_iaf_normal_sampling_mnist.png) | ![VAE IAF Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vae_iaf_normal_sampling_celeba.png) |
| Disentangled Beta-VAE + NormalSampler | ![Disentangled Beta Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/disentangled_beta_vae_normal_sampling_mnist.png) | ![Disentangled Beta Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/disentangled_beta_vae_normal_sampling_celeba.png) |
| FactorVAE + NormalSampler | ![FactorVAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/factor_vae_normal_sampling_mnist.png) | ![FactorVAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/factor_vae_normal_sampling_celeba.png) |
| BetaTCVAE + NormalSampler | ![BetaTCVAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/beta_tc_vae_normal_sampling_mnist.png) | ![BetaTCVAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/beta_tc_vae_normal_sampling_celeba.png) |
| IWAE + NormalSampler | ![IWAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/iwae_normal_sampling_mnist.png) | ![IWAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/iwae_normal_sampling_celeba.png) |
| MSSSIM_VAE + NormalSampler | ![MSSSIM_VAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/msssim_vae_normal_sampling_mnist.png) | ![MSSSIM_VAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/msssim_vae_normal_sampling_celeba.png) |
| WAE + NormalSampler | ![WAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/wae_normal_sampling_mnist.png) | ![WAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/wae_normal_sampling_celeba.png) |
| INFO VAE + NormalSampler | ![INFO Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/infovae_normal_sampling_mnist.png) | ![INFO Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/infovae_normal_sampling_celeba.png) |
| SVAE + HypersphereUniformSampler | ![SVAE Sphere](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/svae_hypersphere_uniform_sampling_mnist.png) | ![SVAE Sphere](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/svae_hypersphere_uniform_sampling_celeba.png) |
| VAMP + VAMPSampler | ![VAMP Vamp](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vamp_vamp_sampling_mnist.png) | ![VAMP Vamp](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vamp_vamp_sampling_celeba.png) |
| Adversarial_AE + NormalSampler | ![AAE_Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/aae_normal_sampling_mnist.png) | ![AAE_Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/aae_normal_sampling_celeba.png) |
| VAEGAN + NormalSampler | ![VAEGAN_Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vaegan_normal_sampling_mnist.png) | ![VAEGAN_Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vaegan_normal_sampling_celeba.png) |
| VQVAE + MAFSampler | ![VQVAE_MAF](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vqvae_maf_sampling_mnist.png) | ![VQVAE_MAF](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/vqvae_maf_sampling_celeba.png) |
| HVAE + NormalSampler | ![HVAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/hvae_normal_sampling_mnist.png) | ![HVAE Normal](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/hvae_normal_sampling_celeba.png) |
| RAE_L2 + GaussianMixtureSampler | ![RAE L2 GMM](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rae_l2_gmm_sampling_mnist.png) | ![RAE L2 GMM](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rae_l2_gmm_sampling_celeba.png) |
| RAE_GP + GaussianMixtureSampler | ![RAE GP GMM](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rae_gp_gmm_sampling_mnist.png) | ![RAE GP GMM](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rae_gp_gmm_sampling_celeba.png) |
| Riemannian Hamiltonian VAE (RHVAE) + RHVAE Sampler | ![RHVAE RHVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rhvae_rhvae_sampling_mnist.png) | ![RHVAE RHVAE](https://github.com/clementchadebec/benchmark_VAE/blob/main/examples/showcases/rhvae_rhvae_sampling_celeba.png) |
# Citation

If you find this work useful or use it in your research, please consider citing us:

```bibtex
@inproceedings{chadebec2022pythae,
 author = {Chadebec, Cl\'{e}ment and Vincent, Louis and Allassonniere, Stephanie},
 booktitle = {Advances in Neural Information Processing Systems},
 editor = {S. Koyejo and S. Mohamed and A. Agarwal and D. Belgrave and K. Cho and A. Oh},
 pages = {21575--21589},
 publisher = {Curran Associates, Inc.},
 title = {Pythae: Unifying Generative Autoencoders in Python - A Benchmarking Use Case},
 volume = {35},
 year = {2022}
}
```