https://github.com/anindya-prithvi/mnist_caelatentspace
Convolutional AutoEncoders (MNIST) and their generative capabilities (kind of amazing)
- Host: GitHub
- URL: https://github.com/anindya-prithvi/mnist_caelatentspace
- Owner: Anindya-Prithvi
- Created: 2022-05-20T13:05:19.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2022-05-23T17:04:13.000Z (about 3 years ago)
- Last Synced: 2025-01-16T05:12:30.296Z (5 months ago)
- Topics: autoencoder-mnist, linear-discriminant-analysis, machine-learning, mnist, onnx, onnxruntime, onnxruntime-web, plt, python
- Language: Jupyter Notebook
- Homepage: https://anindya-prithvi.github.io/MNIST_CAELatentSpace/
- Size: 6.92 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Objective
Observe the latent space of a Convolutional AutoEncoder trained on a classification dataset (MNIST).

# Architecture of the Auto Encoder
The encoder consists only of convolutions and batch normalizations, compressing each 28x28 image down to a 7x1 latent vector.
The decoder mirrors it with transposed convolutions and batch normalizations, expanding the 7x1 vector back to 28x28.
The optimizer, learning rate, and decay schedule have not been hyperparameter-tuned yet, so the loss is currently about 0.02 after 50 epochs.
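The README does not pin down the framework or exact layer sizes; a minimal PyTorch sketch of such a convolution + batchnorm encoder/decoder pair (all layer shapes below are illustrative assumptions, not the repo's actual architecture) could look like:

```python
import torch
import torch.nn as nn

# Hypothetical layer sizes: the README only states conv + batchnorm
# layers mapping 28x28 -> a 7x1 latent vector and back.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.BatchNorm2d(16), nn.ReLU(),   # 28x28 -> 14x14
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),  # 14x14 -> 7x7
    nn.Conv2d(32, 7, 7),                                                       # 7x7 -> 1x1, 7 channels
    nn.Flatten(),                                                              # -> 7-dim latent vector
)
decoder = nn.Sequential(
    nn.Unflatten(1, (7, 1, 1)),
    nn.ConvTranspose2d(7, 32, 7), nn.BatchNorm2d(32), nn.ReLU(),               # 1x1 -> 7x7
    nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.BatchNorm2d(16), nn.ReLU(),  # -> 14x14
    nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),           # -> 28x28
)

x = torch.randn(8, 1, 28, 28)   # a batch of fake MNIST-sized images
z = encoder(x)                  # latent codes, shape (8, 7)
recon = decoder(z)              # reconstructions, shape (8, 1, 28, 28)
```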
# To Observe
We encode the training images into the latent space as usual, then apply an LDA transformation with `n_components=2`. Plotting the transformed data reveals well-formed clusters (refer to the notebook). Some clusters inevitably overlapped; to resolve them, a 3D LDA transformation was computed, and the resulting clusters were remarkably distinguishable with almost zero overlap.
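The same pipeline can be sketched with scikit-learn; here the encoded training set is replaced by synthetic stand-in latent vectors, since running the actual encoder is outside this snippet:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Stand-ins for the encoded training set: 1000 seven-dimensional
# latent vectors with class-dependent means, plus their digit labels.
labels = rng.integers(0, 10, size=1000)
latents = rng.normal(size=(1000, 7)) + labels[:, None] * 0.8

lda2 = LinearDiscriminantAnalysis(n_components=2)
proj2 = lda2.fit_transform(latents, labels)   # (1000, 2) -> 2D scatter plot

lda3 = LinearDiscriminantAnalysis(n_components=3)
proj3 = lda3.fit_transform(latents, labels)   # (1000, 3) -> 3D plot
```

For MNIST (10 classes), LDA supports up to `min(n_features, n_classes - 1)` components, so both 2 and 3 components are valid for a 7-dimensional latent space.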
## 2D LDA result

## 3D LDA plot © [Aflah](https://github.com/aflah02/)
[3D LDA result - Interactive (Click)](https://anindya-prithvi.github.io/filehost/plotlypage.html)
# Observations
1. We can generate multiple fake samples by drawing latent codes from a multivariate Gaussian whose parameters are MLE estimates computed from the training data of a specific class in latent space.
2. The latent space was initially designed with a 7-segment decoder in mind. But guess what (HINT: look at the means)
3. All of the following is synthetic data:

```py
array([
3, 2, 2, 2, 7, 6, 6, 2, 2, 2, 2, 1, 2, 6, 2, 2,
2, 2, 6, 2, 2, 2, 2, 2, 4, 2, 2, 2, 2, 2, 1, 2,
8, 2, 2, 2, 2, 6, 2, 2, 6, 0, 7, 2, 0, 2, 2, 9,
5, 2, 2, 2, 5, 2, 2, 2, 6, 6, 2, 2, 2, 2, 2, 2,
2, 2, 2, 2, 4, 8, 2, 2, 2, 2, 3, 2, 2, 2, 2, 2,
6, 2, 8, 2, 2, 2, 6, 2, 2, 2, 2, 2, 2, 8, 3, 2,
2, 2, 6, 2, 8, 2, 2, 2, 2, 2, 2, 5, 2, 2, 2, 2,
2, 6, 2, 2, 2, 2, 2, 2, 2, 8, 2, 6, 8, 2, 2, 2], dtype=int64)
```
4. A demo is now live at [ghpages](https://anindya-prithvi.github.io/MNIST_CAELatentSpace/). Drag the sliders to move around the latent space.
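The sampling step in observation 1 can be sketched in NumPy; the latent codes below are hypothetical stand-ins for the encoder's output on one digit class:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-in: 500 latent codes of one digit class,
# as the trained encoder would produce them.
codes = rng.normal(loc=1.5, scale=0.3, size=(500, 7))

mu = codes.mean(axis=0)                        # MLE of the class mean
cov = np.cov(codes, rowvar=False, bias=True)   # MLE (biased) covariance
fakes = rng.multivariate_normal(mu, cov, size=16)  # 16 synthetic latent codes
# Feeding `fakes` through the decoder would yield synthetic digit images.
```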