.. image:: https://readthedocs.org/projects/odin/badge/
   :target: http://odin0.readthedocs.org/en/latest/

O.D.I.N
=======
Organized Digital Intelligent Network (O.D.I.N)

O.D.I.N is a framework for building "Organized Digital Intelligent Networks".

End-to-end design: versatile, plug-and-play, and built to minimize repetitive work.

This repo contains a comprehensive collection of variational autoencoder implementations and a benchmark for disentangled representation learning.

.. code-block:: python

    from odin.fuel import MNIST
    from odin.networks import get_networks
    from odin.bay.vi import VariationalAutoencoder

    ds = MNIST()
    train = ds.create_dataset(partition='train')
    # optimized architectures for MNIST
    networks = get_networks(ds, is_hierarchical=False, is_semi_supervised=False)

    # create the VAE
    vae = VariationalAutoencoder(**networks)
    vae.build(ds.full_shape)
    vae.fit(train, max_iter=10000)
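
Every model in the tables below optimizes some variant of the evidence lower bound (ELBO). For orientation, here is a minimal, self-contained sketch of the vanilla ELBO in TensorFlow Probability; it is independent of O.D.I.N's internals, and the network sizes are illustrative only.

.. code-block:: python

    import tensorflow as tf
    import tensorflow_probability as tfp

    tfd = tfp.distributions
    latent_dim = 16

    # Toy networks standing in for the ones `get_networks` would build.
    enc_net = tf.keras.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(2 * latent_dim),  # mean and raw scale of q(z|x)
    ])
    dec_net = tf.keras.Sequential([
        tf.keras.layers.Dense(28 * 28),
        tf.keras.layers.Reshape((28, 28, 1)),
    ])
    prior = tfd.MultivariateNormalDiag(loc=tf.zeros(latent_dim))  # p(z)

    def elbo(x):
        loc, raw_scale = tf.split(enc_net(x), 2, axis=-1)
        qz_x = tfd.MultivariateNormalDiag(loc, tf.nn.softplus(raw_scale) + 1e-5)
        z = qz_x.sample()                          # reparameterized sample
        px_z = tfd.Independent(tfd.Bernoulli(logits=dec_net(z)),
                               reinterpreted_batch_ndims=3)
        log_px = px_z.log_prob(x)                  # reconstruction term
        kl = qz_x.kl_divergence(prior)             # analytic KL(q(z|x) || p(z))
        return log_px - kl                         # maximize; train on -elbo(x)

    x = tf.random.uniform((8, 28, 28, 1))          # stand-in batch of images
    print(elbo(x).shape)                           # (8,): per-example ELBO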

TOC
---

1. `VAE`__
2. `Hierarchical VAE`__
3. `Semi-supervised VAE`__
4. `Disentanglement Gym`__
5. `Fast API for Classical ML`__ (automatically selects a GPU implementation when available)

.. __: #variational-autoencoder-vae
.. __: #hierarchical-vae
.. __: #semi-supervised-vae
.. __: #disentanglement-gym
.. __: #fast-api-for-classical-ml

Variational Autoencoder (VAE)
-----------------------------

.. list-table::
   :widths: 30 80 25
   :header-rows: 1

   * - Model
     - Reference/Description
     - Implementation
   * - 1. Vanilla VAE
     - (Kingma et al. 2014) "Auto-Encoding Variational Bayes" [`Paper `_]
     - [`Code `_][`Example `_]
   * - 2. Beta-VAE
     - (Higgins et al. 2016) "beta-VAE: Learning Basic Visual Concepts with a Constrained Variational Framework" [`Paper `_]
     - [`Code `_][`Example `_]
   * - 3. BetaGamma-VAE
     - Customized version of Beta-VAE that supports re-weighting both reconstruction and regularization: :math:`\mathrm{ELBO}=\gamma \cdot \mathbb{E}_q[\log p(x|z)] - \beta \cdot \mathrm{KL}(q(z|x) \| p(z))` (see the sketch after this table)
     - [`Code `_][`Example `_]
   * - 4. Annealing VAE
     - (Sønderby et al. 2016) "Ladder Variational Autoencoders"
     - [`Code `_][`Example `_]
   * - 5. CyclicalAnnealing VAE
     - (Fu et al. 2019) "Cyclical Annealing Schedule: A Simple Approach to Mitigating KL Vanishing"
     - [`Code `_][`Example `_]
   * - 6. BetaTC-VAE
     - (Chen et al. 2018) "Isolating Sources of Disentanglement in Variational Autoencoders" (regularizes the latents' Total Correlation)
     - [`Code `_][`Example `_]
   * - 7. Controlled Capacity Beta-VAE
     - (Burgess et al. 2018) "Understanding disentangling in beta-VAE"
     - [`Code `_][`Example `_]
   * - 8. FactorVAE
     - (Kim et al. 2018) "Disentangling by Factorising"
     - [`Code `_][`Example `_]
   * - 9. AuxiliaryVAE
     - (Maaløe et al. 2016) "Auxiliary Deep Generative Models"
     - [`Code `_][`Example `_]
   * - 10. HypersphericalVAE
     - (Davidson et al. 2018) "Hyperspherical Variational Auto-Encoders"
     - [`Code `_][`Example `_]
   * - 11. PowersphericalVAE
     - (De Cao et al. 2020) "The Power Spherical distribution"
     - [`Code `_][`Example `_]
   * - 12. DIPVAE
     - (Kumar et al. 2018) "Variational Inference of Disentangled Latent Concepts from Unlabeled Observations" (I - ``only_mean=True``; II - ``only_mean=False``)
     - [`Code `_][`Example `_]
   * - 13. InfoVAE
     - (Zhao et al. 2018) "InfoVAE: Balancing Learning and Inference in Variational Autoencoders"
     - [`Code `_][`Example `_]
   * - 14. MIVAE
     - (Ducau et al. 2017) "Mutual Information in Variational Autoencoders" (maximizes the mutual information I(X;Z))
     - [`Code `_][`Example `_]
   * - 15. irmVAE
     - (Jing et al. 2020) "Implicit Rank-Minimizing Autoencoder" (Implicit Rank Minimizer)
     - [`Code `_][`Example `_]
   * - 16. ALDA
     - (Figurnov et al. 2018) "Implicit Reparameterization Gradients" (Amortized Latent Dirichlet Allocation - VAE with Dirichlet latents for topic modeling)
     - [`Code `_][`Example `_]
   * - 17. TwoStageVAE
     - (Dai et al. 2019) "Diagnosing and Enhancing VAE Models"
     - [`Code `_][`Example `_]
   * - 18. VampriorVAE
     - (Tomczak et al. 2018) "VAE with a VampPrior"
     - [`Code `_][`Example `_]
   * - 19. VQVAE
     - (Oord et al. 2017) "Neural Discrete Representation Learning"
     - [`Code `_][`Example `_]
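
The BetaGamma re-weighting in row 3 above amounts to a one-line change to the vanilla ELBO. A minimal sketch, with ``log_px`` and ``kl`` computed as in the quickstart sketch:

.. code-block:: python

    def beta_gamma_elbo(log_px, kl, gamma=1.0, beta=1.0):
        """ELBO = gamma * E_q[log p(x|z)] - beta * KL(q(z|x) || p(z)).

        gamma re-weights reconstruction, beta re-weights regularization;
        gamma = beta = 1 recovers the vanilla ELBO, and gamma = 1 with
        beta > 1 recovers Beta-VAE.
        """
        return gamma * log_px - beta * kl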

Hierarchical VAE
----------------

.. list-table::
   :widths: 30 80 25
   :header-rows: 1

   * - Model
     - Reference/Description
     - Implementation
   * - 20. LadderVAE
     - (Sønderby et al. 2016) "Ladder Variational Autoencoders"
     - [`Code `_][`Example `_]
   * - 21. BidirectionalVAE
     - (Kingma et al. 2016) "Improved Variational Inference with Inverse Autoregressive Flow" (bidirectional-inference hierarchical VAE)
     - [`Code `_][`Example `_]
   * - 22. ParallelVAE
     - (Zhao et al. 2017) "Learning Hierarchical Features from Generative Models" (multiple latents connect encoder and decoder in parallel, from bottom to top)
     - [`Code `_][`Example `_]
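
All three hierarchical variants share the same overall objective shape: one reconstruction term plus one KL term per latent layer. A schematic sketch (not O.D.I.N's code), where ``qs`` and ``ps`` are per-layer posterior and prior distribution objects, e.g. from TensorFlow Probability; in practice each prior may be conditioned on the layers above it:

.. code-block:: python

    def hierarchical_elbo(log_px, qs, ps):
        """ELBO with one KL term per latent layer.

        log_px: reconstruction log-likelihood log p(x|z1..zL)
        qs: posteriors [q(z1|...), ..., q(zL|...)]
        ps: matching priors [p(z1|...), ..., p(zL)] (conditioning, if any,
            is folded into how each distribution object was constructed)
        """
        kl_total = sum(q.kl_divergence(p) for q, p in zip(qs, ps))
        return log_px - kl_total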

Semi-supervised VAE
-------------------

.. list-table::
   :widths: 30 80 25
   :header-rows: 1

   * - Model
     - Reference/Description
     - Implementation
   * - 23. Semi-supervised FactorVAE
     - Same as FactorVAE, but the discriminator also estimates the density of the labels and unlabeled data (as in semi-supervised GANs)
     - [`Code `_][`Example `_]
   * - 24. MultiheadVAE
     - A VAE with multiple decoders for different tasks
     - [`Code `_][`Example `_]
   * - 25. SkiptaskVAE
     - A VAE whose multiple tasks directly constrain the latents
     - [`Code `_][`Example `_]
   * - 26. ConditionalM2VAE
     - (Kingma et al. 2014) "Semi-supervised Learning with Deep Generative Models" [`Paper `_]
     - [`Code `_][`Example `_]
   * - 27. CCVAE (characteristic-capturing VAE)
     - (Joy et al. 2021) "Capturing Label Characteristics in VAEs" [`Paper `_]
     - [`Code `_][`Example `_]
   * - 28. SemafoVAE
     - (Trung et al. 2021) "The transitive information theory and its application to deep generative models" [`Paper `_]
     - [`Code `_][`Example `_]
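
For orientation, the classic M2 objective (row 26) augments the ELBO with a classifier term so that label information shapes the latents. A schematic sketch reusing ``elbo`` from the quickstart sketch; ``alpha`` and the classifier network are illustrative, not O.D.I.N's API:

.. code-block:: python

    import tensorflow as tf

    classifier = tf.keras.Sequential([      # q(y|x); sizes illustrative only
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10),
    ])

    def m2_labeled_loss(x, y, alpha=0.1):
        """Negative ELBO plus cross-entropy on labeled pairs (y one-hot)."""
        ce = tf.nn.softmax_cross_entropy_with_logits(labels=y,
                                                     logits=classifier(x))
        return -elbo(x) + alpha * ce  # `elbo` as in the quickstart sketch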

Disentanglement Gym
-------------------

`DisentanglementGym `_: a fast API for benchmarking on popular datasets with renowned disentanglement metrics.

Supported datasets: ``['shapes3d', 'dsprites', 'celeba', 'fashionmnist', 'mnist', 'cifar10', 'cifar100', 'svhn', 'cortex', 'pbmc', 'halfmoons']``

Supported metrics:

* Correlation: 'spearman', 'pearson', 'lasso'
* BetaVAE score
* FactorVAE score
* Estimated Mutual Information
* MIG (Mutual Information Gap); see the sketch after this list
* SAP (Separated Attribute Prediction)
* RDS (relative disentanglement strength)
* DCI (Disentanglement, Completeness, Informativeness)
* FID (Frechet Inception Distance)
* Total Correlation
* Clustering scores: Adjusted Rand Index, Adjusted Mutual Info, Normalized Mutual Info, Silhouette score.
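
As a concrete example of one metric, MIG measures, for each ground-truth factor, the gap between the two latent dimensions most informative about it. A minimal sketch using scikit-learn on discretized latents (O.D.I.N's own implementation may differ in binning and estimators):

.. code-block:: python

    import numpy as np
    from sklearn.metrics import mutual_info_score

    def mig(latents, factors, n_bins=20):
        """Mutual Information Gap.

        latents: (n_samples, n_latents) float array of latent codes
        factors: (n_samples, n_factors) integer array of true factors
        """
        # Discretize each latent dimension into equal-width bins.
        binned = [np.digitize(z, np.histogram(z, n_bins)[1][:-1])
                  for z in latents.T]
        gaps = []
        for v in factors.T:
            mi = np.sort([mutual_info_score(v, z) for z in binned])
            h_v = mutual_info_score(v, v)          # = H(v), factor entropy
            gaps.append((mi[-1] - mi[-2]) / h_v)   # top-2 gap, normalized
        return float(np.mean(gaps))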

Fast API for classical ML
-------------------------

Automatically accelerated by RAPIDS.ai: a GPU implementation is selected automatically whenever one is available.
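
A sketch of the dispatch idea (not O.D.I.N's actual code): try the RAPIDS cuML import first and fall back to scikit-learn, so the same script runs on both CPU-only and GPU machines:

.. code-block:: python

    import numpy as np

    try:
        # GPU implementations (require a RAPIDS installation)
        from cuml import PCA, KMeans
    except ImportError:
        # CPU fallbacks with (near-)identical interfaces
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

    X = np.random.rand(1000, 64).astype('float32')
    X_reduced = PCA(n_components=8).fit_transform(X)
    labels = KMeans(n_clusters=10).fit_predict(X_reduced)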

Dimension Reduction
~~~~~~~~~~~~~~~~~~~

* t-SNE [`Code `_]
* UMAP [`Code `_]
* PCA, Probabilistic PCA, Supervised Probabilistic PCA, MiniBatch PCA, Randomized PCA [`Code `_]
* Probabilistic Linear Discriminant Analysis (PLDA) [`Code `_]
* iVector (GPU accelerated) [`Code `_]

GMM
~~~

* GMM classifier (see the sketch after this list) [`Code `_]
* Probabilistic embedding with GMM [`Code `_]
* Universal Background Model (GMM-Tmatrix) [`Code `_]
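
The GMM classifier above follows a standard recipe: fit one mixture per class, then classify by the highest class log-likelihood plus log-prior. A minimal sketch with scikit-learn (not O.D.I.N's implementation):

.. code-block:: python

    import numpy as np
    from sklearn.mixture import GaussianMixture

    class GMMClassifier:
        """One GaussianMixture per class; predict by max log p(x|c) + log p(c)."""

        def __init__(self, n_components=4):
            self.n_components = n_components

        def fit(self, X, y):
            self.classes_ = np.unique(y)
            self.gmms_ = [GaussianMixture(self.n_components).fit(X[y == c])
                          for c in self.classes_]
            self.log_priors_ = np.log([np.mean(y == c) for c in self.classes_])
            return self

        def predict(self, X):
            # (n_samples, n_classes) per-class log-likelihoods
            scores = np.stack([g.score_samples(X) for g in self.gmms_], axis=1)
            return self.classes_[np.argmax(scores + self.log_priors_, axis=1)]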

Clustering
~~~~~~~~~~

* KNN [`Code `_]
* KMeans [`Code `_]
* DBSCAN [`Code `_]