Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/wilberquito/mnist-autoencoder-classification
Autoencoder Feature Extraction for Classification. SSL
- Host: GitHub
- URL: https://github.com/wilberquito/mnist-autoencoder-classification
- Owner: wilberquito
- Created: 2024-04-30T19:32:59.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2024-05-01T21:16:06.000Z (7 months ago)
- Last Synced: 2024-05-02T13:27:07.564Z (7 months ago)
- Language: Jupyter Notebook
- Homepage:
- Size: 2 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Autoencoder Feature Extraction for Classification
Autoencoders are a type of neural network that generates an "n-layer" coding
of a given input and attempts to reconstruct the input from the generated
code. The autoencoder architecture is divided into the encoder structure, the
decoder structure, and the latent space, also known as the "bottleneck".

## Encoder

The encoder $E$ maps the input $x$ to the latent code $h$:
$$h = E(x)$$
## Decoder

The decoder $D$ reconstructs an approximation $x'$ of the input from the code $h$:
$$x' = D(h)$$
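To make the two mappings concrete, here is a minimal sketch of $E$ and $D$ as single linear layers in numpy. The dimensions (784-dimensional input, 32-dimensional code) are assumptions matching a flattened 28×28 MNIST image; the repository's actual layer sizes and framework may differ, and the random weights stand in for learned ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a flattened 28x28 MNIST image and a 32-dim latent code.
IN_DIM, LATENT_DIM = 784, 32

# Single linear layers; in practice these weights are learned by training.
W_enc = rng.normal(0, 0.01, (IN_DIM, LATENT_DIM))
W_dec = rng.normal(0, 0.01, (LATENT_DIM, IN_DIM))

def E(x):
    """Encoder: map the input x to the latent code h."""
    return x @ W_enc

def D(h):
    """Decoder: reconstruct x' from the latent code h."""
    return h @ W_dec

x = rng.random((1, IN_DIM))   # one flattened image
h = E(x)                      # the "bottleneck" representation
x_rec = D(h)                  # reconstruction x'
print(h.shape, x_rec.shape)   # (1, 32) (1, 784)
```

Note that the code $h$ is much smaller than the input, which is what forces the network to learn a compressed representation.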
## Latent space

This is the low-dimensional, compressed representation of the model's input,
denoted $h$. The decoder structure uses this low-dimensional form of the data
to reconstruct the input.

## Self-Supervised Learning
### Pretext task
For both models to learn, we need a metric: a loss $L$ that measures how well
the decoder $D$ reconstructs the original data from the code produced by the
encoder $E$:

$$L = \text{Loss}(x, x')$$
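A common choice for this reconstruction loss is the mean squared error between the input and its reconstruction; this is a sketch of that choice (the repository may use a different criterion), with small made-up vectors in place of real images.

```python
import numpy as np

def reconstruction_loss(x, x_rec):
    """Mean squared error between the input x and its reconstruction x'."""
    return np.mean((x - x_rec) ** 2)

# Toy example: a 4-dim "input" and an imperfect reconstruction.
x = np.array([0.0, 1.0, 1.0, 0.0])
x_rec = np.array([0.1, 0.9, 0.8, 0.0])
L = reconstruction_loss(x, x_rec)
print(round(L, 4))  # 0.015
```

Minimizing $L$ over a dataset trains $E$ and $D$ jointly without any labels, which is what makes reconstruction a pretext task.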
### Downstream task

The representation learned by the encoder $E$ can be reused and fine-tuned.
Since the encoder $E$ "knows" important features from the SSL problem, we can
use it for transfer learning in a classification or regression task.
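One way to use the pretrained encoder downstream can be sketched as follows: freeze the encoder weights and train only a small classification head on its features. The dimensions and the linear-softmax head are illustrative assumptions, not the repository's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
IN_DIM, LATENT_DIM, N_CLASSES = 784, 32, 10  # hypothetical MNIST setup

# Frozen encoder weights, pretrained on the SSL pretext task (random here).
W_enc = rng.normal(0, 0.01, (IN_DIM, LATENT_DIM))

# New classification head, trained (or fine-tuned) on labelled data.
W_head = rng.normal(0, 0.01, (LATENT_DIM, N_CLASSES))

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def classify(x):
    h = x @ W_enc               # frozen encoder E extracts features
    return softmax(h @ W_head)  # linear head maps features to class probs

x = rng.random((5, IN_DIM))     # a batch of 5 flattened images
probs = classify(x)
print(probs.shape)              # (5, 10)
```

Because only `W_head` is trained, the downstream task needs far fewer labelled examples than training the whole network from scratch.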