https://github.com/sameetasadullah/cnn-benchmark-suite
A modular deep learning evaluation framework for benchmarking multiple CNN architectures across varied optimization strategies and training configurations. Built for scalable experimentation and transferability to real-world image classification tasks.
- Host: GitHub
- URL: https://github.com/sameetasadullah/cnn-benchmark-suite
- Owner: SameetAsadullah
- Created: 2024-11-10T06:36:42.000Z (11 months ago)
- Default Branch: main
- Last Pushed: 2025-06-19T11:21:46.000Z (4 months ago)
- Last Synced: 2025-06-19T12:29:51.164Z (4 months ago)
- Topics: alexnet, cifar10, cnn, computer-vision, deep-learning, googlenet, hyperparameter-tuning, image-classification, mobilenet, model-benchmarking, model-evaluation, pytorch, resnet, torchvision
- Language: Jupyter Notebook
- Homepage:
- Size: 806 KB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
- Readme: README.md
README
# 🔍 Vision Model Evaluation Framework (PyTorch)
A scalable framework for benchmarking deep convolutional architectures on classification tasks. This project evaluates the impact of optimizers, learning rates, and batch sizes across multiple CNN backbones — providing a reproducible experimental setup aligned with research and production best practices.
---
## 🚀 Key Highlights
- Plug-and-play support for multiple CNN architectures
- Scalable benchmarking with grid search over:
  - Optimizers: `SGD`, `Adam`
  - Learning Rates: `0.01`, `0.1`
  - Batch Sizes: `32`, `64`
- Data augmentation, normalization, and stratified validation split
- Detailed metrics logging + real-time visualization support
- Model weights saved automatically for top-performing configs (see the sketch below)
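
In code, that grid search could look roughly like the sketch below: a plain PyTorch loop over optimizer, learning rate, and batch size on CIFAR-10. The split sizes, epoch budget, checkpoint naming, and the use of a random rather than stratified validation split are simplifications for illustration, not the repo's actual setup.

```python
import itertools

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader, random_split

device = "cuda" if torch.cuda.is_available() else "cpu"

# Basic augmentation + normalization; a random (not stratified) split is used
# here purely to keep the sketch short.
transform = T.Compose([
    T.RandomHorizontalFlip(),
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
full_train = torchvision.datasets.CIFAR10("data", train=True, download=True,
                                          transform=transform)
train_set, val_set = random_split(full_train, [45000, 5000])

def accuracy(model, loader):
    """Top-1 accuracy of `model` over `loader`."""
    model.eval()
    correct = 0
    with torch.no_grad():
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(dim=1) == y).sum().item()
    return correct / len(loader.dataset)

optimizers = {"SGD": optim.SGD, "Adam": optim.Adam}
results = {}
for opt_name, lr, bs in itertools.product(optimizers, [0.01, 0.1], [32, 64]):
    model = torchvision.models.resnet18(weights=None, num_classes=10).to(device)
    optimizer = optimizers[opt_name](model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    train_loader = DataLoader(train_set, batch_size=bs, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=256)

    for _ in range(5):                     # short epoch budget for the sketch
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            optimizer.zero_grad()
            criterion(model(x), y).backward()
            optimizer.step()

    acc = accuracy(model, val_loader)
    results[(opt_name, lr, bs)] = acc
    if acc >= max(results.values()):       # checkpoint each new best config
        torch.save(model.state_dict(), f"best_{opt_name}_lr{lr}_bs{bs}.pt")

best = max(results, key=results.get)
print("best config:", best, "val acc:", results[best])
```

---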
## 🧠 Architectures Supported
- **Custom Lightweight CNN** (baseline)
- **ResNet-18**
- **MobileNetV2**
- **GoogLeNet**
- **AlexNet** *(included for completeness; not recommended for production)*

> 🛡️ Modular design allows easy plug-in of ViT, EfficientNet, ConvNeXt, etc.
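
As one illustration of that plug-in point, the supported backbones can be wrapped behind a small factory that resizes each torchvision model's classifier head for CIFAR-10's 10 classes. The `build_model` name and branching below are assumptions for the sketch, not the repo's actual API.

```python
import torch.nn as nn
from torchvision import models

def build_model(name: str, num_classes: int = 10) -> nn.Module:
    """Illustrative backbone factory (name and layout are assumptions).
    ImageNet-sized backbones such as AlexNet expect larger inputs, so
    CIFAR-10 images may need a Resize(224) transform upstream."""
    if name == "resnet18":
        m = models.resnet18(weights=None)
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "mobilenet_v2":
        m = models.mobilenet_v2(weights=None)
        m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, num_classes)
    elif name == "googlenet":
        m = models.googlenet(weights=None, aux_logits=False, init_weights=True)
        m.fc = nn.Linear(m.fc.in_features, num_classes)
    elif name == "alexnet":
        m = models.alexnet(weights=None)
        m.classifier[-1] = nn.Linear(m.classifier[-1].in_features, num_classes)
    else:  # new backbones (ViT, EfficientNet, ...) would slot in here
        raise ValueError(f"Unsupported architecture: {name}")
    return m
```

Adding a new architecture then only means adding another branch (or a registry entry) that returns an `nn.Module` with a `num_classes`-way head.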
---
## 📊 Experiments & Logging
- Validation and test performance tracked across all combinations
- Key metrics:
  - Train & Val Accuracy/Loss (per epoch)
  - Best Validation Accuracy (per config)
  - Final Test Accuracy (per model)
- Automatic selection of best model per architecture
- Visual analytics:
  - Accuracy trends
  - Impact of learning rate
  - Batch size comparison
  - Model vs Optimizer performance (see the plotting sketch below)
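
As a rough sketch of the visual analytics above, per-epoch validation accuracy curves for several configurations could be overlaid with a small matplotlib helper; the `history` layout is an assumed convention, not the repo's logging schema.

```python
import matplotlib.pyplot as plt

def plot_accuracy_trends(history: dict[str, list[float]], title: str) -> None:
    """Overlay per-epoch validation accuracy for several configurations.
    `history` maps a config label (e.g. "Adam lr=0.01 bs=64") to a list of
    per-epoch accuracies -- an assumed layout for this sketch."""
    for label, accs in history.items():
        plt.plot(range(1, len(accs) + 1), accs, marker="o", label=label)
    plt.xlabel("Epoch")
    plt.ylabel("Validation accuracy")
    plt.title(title)
    plt.legend()
    plt.tight_layout()
    plt.show()
```

---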
## 🔁 Use Cases
- Model architecture benchmarking
- Optimizer sensitivity studies
- Lightweight deployment model search
- Academic reproducibility experiments