Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/reyhaneh-saffar/autoencoder-reconstruction-of-mixed-mnist-and-cifar-10-images
Exploring the use of an autoencoder neural network for data compression and reconstruction
- Host: GitHub
- URL: https://github.com/reyhaneh-saffar/autoencoder-reconstruction-of-mixed-mnist-and-cifar-10-images
- Owner: reyhaneh-saffar
- Created: 2025-01-11T00:14:12.000Z (20 days ago)
- Default Branch: main
- Last Pushed: 2025-01-11T00:18:19.000Z (20 days ago)
- Last Synced: 2025-01-11T01:21:46.573Z (20 days ago)
- Language: Jupyter Notebook
- Size: 88.9 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
This project explores the use of an autoencoder neural network for data compression and reconstruction. The datasets CIFAR-10 and MNIST were preprocessed, combined, and used to train the model, with promising results in reducing reconstruction loss.
### Data Processing
- **Dataset Preparation**:
- CIFAR-10 and MNIST datasets were transformed and normalized to ensure compatibility.
- MNIST images were converted to RGB and resized to match CIFAR-10 dimensions.
- Both datasets were standardized in terms of image size and pixel value distribution.
- **Mean Image Computation**:
- Corresponding images from CIFAR-10 and MNIST were averaged to create new "mean images."
- This process enriched the dataset by combining features from both datasets, producing unique training and testing data.
- **Dataset Structuring**:
- The mean images were reshaped to fit the input format required for model training and evaluation.
- Structured training and testing datasets were created to ensure consistency and diversity for the model (a preprocessing sketch follows below).
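A minimal sketch of the preprocessing described above, assuming PyTorch/torchvision, MNIST converted to 3-channel 32×32 images to match CIFAR-10, and a simple pixel-wise average of paired images; the helper name `build_mean_images` and the sample count are illustrative, not taken from the repository.

```python
import torch
from torchvision import datasets, transforms

# MNIST: grayscale 28x28 -> 3-channel 32x32 so it matches CIFAR-10; both scaled to [0, 1].
mnist_tf = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate the single channel into RGB
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
])
cifar_tf = transforms.ToTensor()

mnist_train = datasets.MNIST("data", train=True, download=True, transform=mnist_tf)
cifar_train = datasets.CIFAR10("data", train=True, download=True, transform=cifar_tf)

def build_mean_images(ds_a, ds_b, n):
    """Average corresponding images from the two datasets into 'mean images'."""
    imgs = []
    for i in range(n):
        a, _ = ds_a[i]                      # image tensors; labels are ignored
        b, _ = ds_b[i]
        imgs.append((a + b) / 2.0)          # pixel-wise mean of the paired images
    return torch.stack(imgs)                # shape (n, 3, 32, 32), ready for the model

train_images = build_mean_images(mnist_train, cifar_train, n=10_000)  # sample count is illustrative
```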
### Training the Autoencoder
- **Model Construction**:
- The autoencoder was composed of two main parts (sketched in code after this section):
- **Encoder**: Compressed input data into a latent space representation using convolutional layers with non-linear activation functions.
- **Decoder**: Reconstructed input data from the latent space using transposed convolutional layers, restoring spatial dimensions.
- A loss function measured reconstruction accuracy, and the model's parameters were optimized via backpropagation to minimize it.
- **Training Process**:
- **Data Handling**: Training and testing datasets were loaded into data loaders for efficient batching and shuffling.
- **Training Loop**:
- Over 50 epochs, the autoencoder iteratively learned to compress and reconstruct input data.
- The optimizer adjusted model parameters based on reconstruction loss.
- **Evaluation Loop**:
- After each epoch, the model was tested on unseen data to monitor its generalization ability.
- Reconstruction loss was recorded for both the training and testing datasets (a combined model and training-loop sketch follows below).
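A minimal sketch of the architecture and the training/evaluation loops described above, assuming PyTorch, a small convolutional encoder/decoder pair, MSE reconstruction loss, and the Adam optimizer; layer sizes and the names `ConvAutoencoder` and `run_training` are illustrative assumptions, not the repository's exact configuration.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: compress 3x32x32 images into a smaller latent representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
        )
        # Decoder: transposed convolutions restore the original spatial dimensions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=3, stride=2, padding=1, output_padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(16, 3, kernel_size=3, stride=2, padding=1, output_padding=1),   # 16x16 -> 32x32
            nn.Sigmoid(),  # outputs in [0, 1] to match the normalized pixel values
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def run_training(train_images, test_images, epochs=50, batch_size=128, lr=1e-3):
    """train_images/test_images: float tensors of shape (N, 3, 32, 32) in [0, 1]."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = ConvAutoencoder().to(device)
    criterion = nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)

    # Data handling: data loaders provide efficient batching and shuffling.
    train_loader = DataLoader(TensorDataset(train_images), batch_size=batch_size, shuffle=True)
    test_loader = DataLoader(TensorDataset(test_images), batch_size=batch_size)

    train_losses, test_losses = [], []
    for epoch in range(epochs):
        # Training loop: learn to compress and reconstruct the input images.
        model.train()
        running = 0.0
        for (batch,) in train_loader:
            batch = batch.to(device)
            loss = criterion(model(batch), batch)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            running += loss.item() * batch.size(0)
        train_losses.append(running / len(train_loader.dataset))

        # Evaluation loop: monitor generalization on unseen data after each epoch.
        model.eval()
        running = 0.0
        with torch.no_grad():
            for (batch,) in test_loader:
                batch = batch.to(device)
                running += criterion(model(batch), batch).item() * batch.size(0)
        test_losses.append(running / len(test_loader.dataset))
        print(f"epoch {epoch + 1}: train {train_losses[-1]:.4f}, test {test_losses[-1]:.4f}")

    return train_losses, test_losses
```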
### Results and Insights
- **Performance Metrics**:
- Training Loss: Decreased from **0.9747** in the first epoch to **0.2641** in the final epoch.
- Testing Loss: Reduced from **0.9674** to **0.2609**.
- **Visualization**:
- A plot of training and testing losses over the 50 epochs showed a steady decrease, indicating effective learning and generalization (a plotting sketch follows below).
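A minimal plotting sketch, assuming matplotlib and the per-epoch loss lists returned by the training loop above; the function name `plot_losses` is illustrative.

```python
import matplotlib.pyplot as plt

def plot_losses(train_losses, test_losses):
    """Plot per-epoch training and testing reconstruction losses on one figure."""
    epochs = range(1, len(train_losses) + 1)
    plt.plot(epochs, train_losses, label="training loss")
    plt.plot(epochs, test_losses, label="testing loss")
    plt.xlabel("epoch")
    plt.ylabel("reconstruction loss (MSE)")
    plt.legend()
    plt.show()
```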