
# CNN Architectures Benchmark

![GitHub repo size](https://img.shields.io/github/repo-size/Avijit-Jana/cnn-architectures-benchmark?style=plastic)
![GitHub language count](https://img.shields.io/github/languages/count/Avijit-Jana/cnn-architectures-benchmark?style=plastic)
![GitHub top language](https://img.shields.io/github/languages/top/Avijit-Jana/cnn-architectures-benchmark?style=plastic)
![GitHub last commit](https://img.shields.io/github/last-commit/Avijit-Jana/cnn-architectures-benchmark?color=red&style=plastic)

## Table of Contents

- [📖 **Project Description**](#project-description)
- [🧑‍💼 **Business Use Cases**](#business-use-cases)
- [📁 **Data Set Explanation**](#data-set-explanation)
- [📊 **Project Evaluation Metrics**](#project-evaluation-metrics)
- [🚩 **How to Approach this Project**](#how-to-approach-this-project)

## 📖Project Description

The goal of this project is to compare the performance of different CNN architectures on
various datasets. Specifically, we evaluate LeNet-5, AlexNet, GoogLeNet, VGGNet,
ResNet, Xception, and SENet on the MNIST, FMNIST, and CIFAR-10 datasets. The comparison
is based on loss curves, accuracy, precision, recall, and F1-score. The models are
implemented with TensorFlow and PyTorch.
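As a flavor of what such a benchmark entails, here is a minimal PyTorch sketch of LeNet-5, the smallest architecture in the comparison. This is a common modern variant (ReLU activations and max pooling instead of the original tanh and average pooling) and assumes 32×32 single-channel input, e.g. MNIST padded from 28×28; it is an illustration, not the repository's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LeNet5(nn.Module):
    """LeNet-5 variant: two conv+pool stages followed by three fully connected layers.
    Expects input of shape (N, 1, 32, 32), e.g. MNIST zero-padded from 28x28."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, kernel_size=5)    # -> (6, 28, 28)
        self.conv2 = nn.Conv2d(6, 16, kernel_size=5)   # -> (16, 10, 10)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, num_classes)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)     # (6, 28, 28) -> (6, 14, 14)
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)     # (16, 10, 10) -> (16, 5, 5)
        x = torch.flatten(x, 1)                        # -> (N, 400)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        return self.fc3(x)                             # raw logits, (N, num_classes)
```

With these layer sizes the model has roughly 62K parameters, which is why LeNet-5 serves as the lightweight baseline against much larger architectures such as AlexNet or ResNet.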

## 🧑‍💼Business Use Cases

The insights from this project can be applied in various business scenarios, including:

- Choosing the appropriate CNN architecture for specific computer vision tasks.
- Understanding the impact of dataset characteristics on model performance.
- Identifying trade-offs between model complexity and performance.
- Optimizing resource allocation by selecting models that offer the best trade-off between performance and computational cost.

## 📁Data Set Explanation

**The datasets used in this project are:**

- **MNIST**: Handwritten digits dataset consisting of 60,000 training images and 10,000 testing images. Each image is 28x28 pixels in grayscale.
- **FMNIST**: Fashion MNIST dataset consisting of 60,000 training images and 10,000 testing images of fashion products. Each image is 28x28 pixels in grayscale.
- **CIFAR-10**: Dataset consisting of 60,000 32x32 color images in 10 classes, with 50,000 training images and 10,000 testing images.

**The datasets are chosen to cover a variety of image classification tasks:**

- **MNIST and FMNIST** provide simpler tasks with grayscale images, allowing for the evaluation of basic image recognition capabilities.
- **CIFAR-10** offers a more complex task with color images, testing the models' ability to handle more detailed and varied data.
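The dataset properties above can be captured in a small registry, which is handy when wiring up model input layers. The numbers come directly from the descriptions above; the `input_dim` helper is a hypothetical convenience, and the names mirror the corresponding `torchvision.datasets` classes.

```python
# Image size, channel count, and train/test split for each benchmark dataset.
DATASETS = {
    "MNIST":    {"size": 28, "channels": 1, "train": 60_000, "test": 10_000},
    "FMNIST":   {"size": 28, "channels": 1, "train": 60_000, "test": 10_000},
    "CIFAR-10": {"size": 32, "channels": 3, "train": 50_000, "test": 10_000},
}

def input_dim(name: str) -> int:
    """Flattened input dimension a model's first fully connected layer would need."""
    d = DATASETS[name]
    return d["channels"] * d["size"] ** 2
```

For example, `input_dim("MNIST")` is 784 (28×28 grayscale) while `input_dim("CIFAR-10")` is 3072 (32×32×3 color), which is one concrete reason the same architecture behaves differently across the three datasets.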

## 📊Project Evaluation Metrics

The success and effectiveness of the project will be evaluated using the following metrics:

- **Accuracy:** The proportion of correct predictions out of the total predictions made.
- **Precision:** The proportion of true positive predictions out of all positive predictions made.
- **Recall:** The proportion of true positive predictions out of all actual positives.
- **F1-score:** The harmonic mean of precision and recall.
- **Loss:** The value of the loss function during training and testing.
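For multi-class problems like these, precision, recall, and F1 are computed per class and then averaged. The sketch below shows one common choice, macro averaging, in plain Python; whether the repository uses macro or another averaging scheme is not stated here, so treat this as an illustration of the metric definitions above.

```python
def classification_metrics(y_true, y_pred, num_classes):
    """Accuracy plus macro-averaged precision, recall, and F1-score."""
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precisions, recalls, f1s = [], [], []
    for c in range(num_classes):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0   # correct positives / predicted positives
        rec = tp / (tp + fn) if tp + fn else 0.0    # correct positives / actual positives
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec)
        recalls.append(rec)
        f1s.append(f1)
    n = num_classes
    return accuracy, sum(precisions) / n, sum(recalls) / n, sum(f1s) / n
```

In practice a library such as scikit-learn (`precision_recall_fscore_support`) would be used instead, but the hand-rolled version makes the definitions explicit.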

## 🚩How to Approach this Project

- To understand the project, check out the [**Approach File**](https://github.com/Avijit-Jana/cnn-architectures-benchmark/blob/main/Approach.md).

- You can download all the dependencies by running the [**requirements.txt**](https://github.com/Avijit-Jana/cnn-architectures-benchmark/blob/main/requirements.txt) file using the following command:
```bash
pip install -r requirements.txt
```

- Also check out the [**Final Report**](https://github.com/Avijit-Jana/cnn-architectures-benchmark/blob/main/Final_Result.md) for more details about the outcome of the project.

![Badge](https://img.shields.io/badge/Developed%20By-Avijit_Jana-blueviolet?style=for-the-badge)