# 🔍 Vision Model Evaluation Framework (PyTorch)

A scalable framework for benchmarking deep convolutional architectures on classification tasks. This project evaluates the impact of optimizers, learning rates, and batch sizes across multiple CNN backbones, providing a reproducible experimental setup aligned with research and production best practices.

---

## 🚀 Key Highlights

- Plug-and-play support for multiple CNN architectures
- Scalable benchmarking with grid search over:
  - Optimizers: `SGD`, `Adam`
  - Learning Rates: `0.01`, `0.1`
  - Batch Sizes: `32`, `64`
- Data augmentation, normalization, and stratified validation split (see the data-pipeline sketch after this list)
- Detailed metrics logging + real-time visualization support
- Model weights saved automatically for top-performing configs
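As a rough illustration of the grid and data pipeline above, here is a minimal sketch assuming a CIFAR-10 setup (suggested by the project's topic tags); the normalization statistics, 90/10 split, and the `GRID`/`make_loaders` names are illustrative placeholders rather than the repository's actual code.

```python
import itertools

from sklearn.model_selection import train_test_split
from torch.utils.data import DataLoader, Subset
from torchvision import datasets, transforms

# 2 optimizers x 2 learning rates x 2 batch sizes = 8 configs per architecture.
OPTIMIZERS = ["SGD", "Adam"]
LEARNING_RATES = [0.01, 0.1]
BATCH_SIZES = [32, 64]
GRID = list(itertools.product(OPTIMIZERS, LEARNING_RATES, BATCH_SIZES))

# Augmentation + normalization for training; normalization only for evaluation.
# (These CIFAR-10 mean/std values are the commonly used ones, not the repo's.)
_MEAN, _STD = (0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)
train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(_MEAN, _STD),
])
eval_tf = transforms.Compose([transforms.ToTensor(), transforms.Normalize(_MEAN, _STD)])

train_data = datasets.CIFAR10("data", train=True, download=True, transform=train_tf)
val_data = datasets.CIFAR10("data", train=True, download=True, transform=eval_tf)

# Stratified split: the validation set keeps the training set's class balance.
train_idx, val_idx = train_test_split(
    list(range(len(train_data))),
    test_size=0.1,
    stratify=train_data.targets,
    random_state=42,
)

def make_loaders(batch_size: int):
    """Build train/val loaders for one grid configuration."""
    return (
        DataLoader(Subset(train_data, train_idx), batch_size=batch_size, shuffle=True),
        DataLoader(Subset(val_data, val_idx), batch_size=batch_size, shuffle=False),
    )
```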

---

## 🧠 Architectures Supported

- **Custom Lightweight CNN** (baseline)
- **ResNet-18**
- **MobileNetV2**
- **GoogLeNet**
- **AlexNet** *(included for completeness; not recommended for production)*

> 🛡️ Modular design allows easy plug-in of ViT, EfficientNet, ConvNeXt, etc. (a hypothetical factory is sketched below)
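What such a plug-in point might look like, as a minimal sketch built on standard `torchvision.models` constructors; the `build_model` name, `NUM_CLASSES` constant, and head-replacement details are assumptions, not the repository's actual code:

```python
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 10  # CIFAR-10

def build_model(name: str) -> nn.Module:
    """Return a backbone with its classifier head resized to NUM_CLASSES.

    Note: AlexNet and GoogLeNet expect ImageNet-sized inputs, so 32x32
    CIFAR images are typically upsampled first (e.g. transforms.Resize(224)).
    """
    if name == "resnet18":
        model = models.resnet18(weights=None)
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    elif name == "mobilenet_v2":
        model = models.mobilenet_v2(weights=None)
        model.classifier[1] = nn.Linear(model.classifier[1].in_features, NUM_CLASSES)
    elif name == "googlenet":
        model = models.googlenet(weights=None, aux_logits=False, init_weights=True)
        model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
    elif name == "alexnet":
        model = models.alexnet(weights=None)
        model.classifier[6] = nn.Linear(model.classifier[6].in_features, NUM_CLASSES)
    else:
        # The custom lightweight baseline CNN would be defined in the repo itself.
        raise ValueError(f"Unknown architecture: {name}")
    return model
```

Replacing only the final layer keeps the rest of each backbone untouched, which is what makes swapping in ViT or EfficientNet a one-branch change.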

---

## 📊 Experiments & Logging

- Validation and test performance tracked across all combinations
- Key metrics:
  - Train & Val Accuracy/Loss (per epoch)
  - Best Validation Accuracy (per config)
  - Final Test Accuracy (per model)
- Automatic selection of best model per architecture (see the training-loop sketch below)
- Visual analytics:
  - Accuracy trends
  - Impact of learning rate
  - Batch size comparison
  - Model vs Optimizer performance
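A condensed sketch of how the loop over configurations, per-epoch logging, and best-checkpoint saving could fit together, reusing the hypothetical `GRID`, `make_loaders`, and `build_model` helpers from the sketches above (the epoch budget, architecture list, and checkpoint filenames are placeholders):

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
criterion = nn.CrossEntropyLoss()

def evaluate(model, loader):
    """Return (average loss, accuracy) over a loader."""
    model.eval()
    loss_sum, correct, total = 0.0, 0, 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            logits = model(x)
            loss_sum += criterion(logits, y).item() * y.size(0)
            correct += (logits.argmax(dim=1) == y).sum().item()
            total += y.size(0)
    return loss_sum / total, correct / total

results = {}  # (arch, optimizer, lr, batch size) -> best validation accuracy

for arch in ["resnet18", "mobilenet_v2"]:              # extend as needed
    for opt_name, lr, bs in GRID:
        model = build_model(arch).to(device)
        opt_cls = torch.optim.SGD if opt_name == "SGD" else torch.optim.Adam
        optimizer = opt_cls(model.parameters(), lr=lr)
        train_loader, val_loader = make_loaders(bs)

        best_val_acc = 0.0
        for epoch in range(10):                        # placeholder epoch budget
            model.train()
            for x, y in train_loader:
                x, y = x.to(device), y.to(device)
                optimizer.zero_grad()
                loss = criterion(model(x), y)
                loss.backward()
                optimizer.step()

            val_loss, val_acc = evaluate(model, val_loader)
            print(f"{arch} {opt_name} lr={lr} bs={bs} epoch={epoch} "
                  f"val_loss={val_loss:.4f} val_acc={val_acc:.4f}")
            if val_acc > best_val_acc:                 # keep the best weights
                best_val_acc = val_acc
                torch.save(model.state_dict(), f"best_{arch}_{opt_name}_{lr}_{bs}.pt")

        results[(arch, opt_name, lr, bs)] = best_val_acc
```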

---

## 🔁 Use Cases

- Model architecture benchmarking
- Optimizer sensitivity studies
- Lightweight deployment model search
- Academic reproducibility experiments