https://github.com/elifirinci/cifar-10
This project explores the concept of transfer learning using the CIFAR-10 dataset. The work demonstrates how to reuse a convolutional neural network trained on a subset of image classes and then fine-tune it on a different set of classes. This approach is common in real-world deep learning applications where labeled data is limited.
- Host: GitHub
- URL: https://github.com/elifirinci/cifar-10
- Owner: elifirinci
- License: mit
- Created: 2025-05-23T16:49:11.000Z (5 months ago)
- Default Branch: main
- Last Pushed: 2025-05-28T11:42:54.000Z (5 months ago)
- Last Synced: 2025-06-06T08:43:45.433Z (4 months ago)
- Topics: cifar10, classification, deep-learning, fine-tuning, googlenet, inceptionv3
- Language: Jupyter Notebook
- Homepage:
- Size: 512 KB
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
## 📚 Dataset

The CIFAR-10 dataset contains 60,000 color images (32x32 pixels) divided into 10 classes:
- airplane
- automobile
- bird
- cat
- deer
- dog
- frog
- horse
- ship
- truck

In this project:
- The model is first trained on 5 classes: airplane, automobile, bird, cat, and deer.
- Then, the last layer(s) of the network are retrained to classify the other 5 classes: dog, frog, horse, ship, and truck (the class split is sketched below).
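
A minimal sketch of this class split using `tf.keras.datasets.cifar10`. The helper name `subset` and the choice to remap labels to 0–4 are illustrative assumptions, not taken from the notebook:

```python
# Hedged sketch: split CIFAR-10 into the two 5-class subsets used in this project.
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()

first_half = [0, 1, 2, 3, 4]   # airplane, automobile, bird, cat, deer
second_half = [5, 6, 7, 8, 9]  # dog, frog, horse, ship, truck

def subset(x, y, classes):
    """Keep only the requested classes and remap their labels to 0..4."""
    mask = np.isin(y.flatten(), classes)
    x_sub, y_sub = x[mask], y[mask].flatten()
    remap = {c: i for i, c in enumerate(classes)}
    return x_sub, np.array([remap[c] for c in y_sub])

x_a, y_a = subset(x_train, y_train, first_half)   # initial training set
x_b, y_b = subset(x_train, y_train, second_half)  # fine-tuning set
```
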
## 🧠 Method

- A pre-trained InceptionV3 model is used for transfer learning.
- Two strategies were tested (see the sketch after this list):
1. Freeze all layers except the output layer.
2. Freeze all layers except the fully connected and output layers.
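
A hedged Keras sketch of the two strategies. The 96×96 resize, the 128-unit dense head, and the function name `build_model` are illustrative assumptions and will not reproduce the exact parameter counts reported below:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(freeze_dense=True):
    # ImageNet-pretrained InceptionV3 without its classifier head.
    base = tf.keras.applications.InceptionV3(
        include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False  # freeze every convolutional layer

    inputs = layers.Input(shape=(32, 32, 3))
    x = layers.Resizing(96, 96)(inputs)  # InceptionV3 expects inputs of at least 75x75
    x = tf.keras.applications.inception_v3.preprocess_input(x)  # expects [0, 255] pixels
    x = base(x, training=False)

    # Strategy 1 freezes this fully connected layer; Strategy 2 trains it.
    x = layers.Dense(128, activation="relu", trainable=not freeze_dense)(x)
    outputs = layers.Dense(5, activation="softmax")(x)  # output layer, always trainable

    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

strategy_1 = build_model(freeze_dense=True)   # only the output layer trains
strategy_2 = build_model(freeze_dense=False)  # dense + output layers train
```

With either model, fine-tuning on the second subset would then be a standard `fit` call, e.g. `strategy_2.fit(x_b, y_b, validation_split=0.1, epochs=10)` (hyperparameters illustrative).
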
### Results

- **Trainable Parameters**:
  - Strategy 1: ~262,917
  - Strategy 2: ~263,109
- **Accuracy**:
  - Strategy 1: ~93% (train and validation)
  - Strategy 2: ~94% (train), ~94.5% (validation)
- Strategy 2 performed better due to more layers being fine-tuned, allowing better adaptation to the new classes.
## Why Fine-Tuning is Faster
Fine-tuning is faster than full training because most layers are frozen and do not require gradient calculations. This reduces the computational load significantly.
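
As a quick illustration (assuming the `strategy_1` model from the sketch above), the trainable/frozen split can be inspected directly; only the small trainable fraction receives gradients during backpropagation:

```python
import tensorflow as tf

# `strategy_1` is the model from the earlier sketch (illustrative name).
trainable = sum(tf.keras.backend.count_params(w) for w in strategy_1.trainable_weights)
frozen = sum(tf.keras.backend.count_params(w) for w in strategy_1.non_trainable_weights)
print(f"trainable: {trainable:,}  frozen: {frozen:,}")
# Gradients are computed and stored only for the trainable weights; the frozen
# InceptionV3 weights are used in the forward pass only.
```
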
## Libraries Used
- TensorFlow / Keras
- NumPy
- Matplotlib