Model Quantization for Federated Learning in edge devices
- Host: GitHub
- URL: https://github.com/vigneshs10/model-quantization-for-diabetes-classification-in-edge-devices
- Owner: VigneshS10
- Created: 2022-10-28T12:12:25.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2022-10-29T07:05:07.000Z (over 2 years ago)
- Last Synced: 2025-03-06T07:19:06.963Z (3 months ago)
- Topics: alexnet, convolutional-neural-networks, federated-learning, mobilenet, pytorch, resnet-50
- Language: Python
- Homepage:
- Size: 30.3 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Model Quantization to Optimize AlexNet, MobileNet and ResNet-50 for Diabetes Classification

Quantization for deep learning is the process of approximating a neural network that uses floating-point numbers with a neural network that uses low-bit-width numbers. This dramatically reduces both the memory requirement and the computational cost of running neural networks.

In this repository, I implement a **static quantization** strategy for AlexNet, MobileNet and ResNet-50 trained on a retina-based diabetes classification dataset, and I demonstrate the resulting benefits with relevant statistics and evaluation metrics.
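The quant scripts implement this end to end; purely as illustration (not the repository's exact code), eager-mode post-training static quantization in PyTorch typically follows a prepare → calibrate → convert flow, roughly as in the sketch below. The model, calibration loader and backend choice are placeholders; architectures with skip connections, such as ResNet-50, generally also need quantization-friendly variants of their modules (e.g. `torch.nn.quantized.FloatFunctional` for the residual additions).

```python
# A minimal sketch of eager-mode post-training static quantization in PyTorch.
# The model, calibration loader and backend are placeholders, not the exact
# code used by the quant scripts in this repository.
import torch
import torch.nn as nn

def quantize_static(model: nn.Module, calibration_loader) -> nn.Module:
    # Wrap the float model so inputs are quantized and outputs dequantized
    # at the model boundary.
    wrapped = torch.quantization.QuantWrapper(model)
    wrapped.eval()
    # 'fbgemm' targets x86 CPUs; 'qnnpack' targets ARM-based edge devices.
    wrapped.qconfig = torch.quantization.get_default_qconfig('fbgemm')
    # Insert observers that record activation ranges.
    torch.quantization.prepare(wrapped, inplace=True)
    # Calibrate on a few batches so the observers can estimate scales
    # and zero points.
    with torch.no_grad():
        for images, _ in calibration_loader:
            wrapped(images)
    # Replace the observed float modules with their int8 equivalents.
    torch.quantization.convert(wrapped, inplace=True)
    return wrapped
```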
---
## Requirements

To install requirements:
```setup
pip install -r requirements.txt
```
## Dataset
Download the dataset from the links given below.

[Train](https://drive.google.com/drive/folders/1LI9RDJRTOKUKfwC_dXKMBwHa4Y-OA6eX?usp=share_link)\
[Train2](https://drive.google.com/drive/folders/1--4A1O_T2FdijAa48877YyIxZ-Mt5HtK?usp=share_link)\
[Val](https://drive.google.com/drive/folders/144XCsP-3U1ld0SUIXp568o-9Ie6I7Dnc?usp=share_link)

## Training and Evaluation
1. Download the dataset from the links above.
2. Change the file directory paths in all the required places to match your setup.
3. Run the quant Python files to train the models on the diabetes classification dataset.
4. Both the float model and the quantized model are saved to the events folder, predictions to the predictions folder, and event logs to the events folder.
5. Statistics such as accuracy, model size and CPU inference latency are built into the quant Python files for evaluation (a rough measurement sketch follows this list).
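The exact measurement code lives in the quant scripts; as a rough sketch (the function names and temporary path below are illustrative, not from the repository), model size and per-sample CPU latency can be estimated like this:

```python
# A minimal sketch of how on-disk model size and per-sample CPU latency
# can be measured; names and paths are illustrative, not from the repo.
import os
import time
import torch

def model_size_mb(model: torch.nn.Module, path: str = "tmp_weights.pt") -> float:
    # Serialize the state dict to disk and report the file size in MB.
    torch.save(model.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

def cpu_latency_ms(model: torch.nn.Module, sample: torch.Tensor, runs: int = 50) -> float:
    # Average wall-clock time per forward pass on a single sample (CPU).
    model.eval()
    with torch.no_grad():
        for _ in range(5):  # warm-up passes
            model(sample)
        start = time.perf_counter()
        for _ in range(runs):
            model(sample)
    return (time.perf_counter() - start) / runs * 1e3
```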
## Results

As the statistics below show, my quantization strategy reduced the model size and the CPU inference latency severalfold while maintaining accuracy with no noticeable decline.
| Model                 | Top-1 Acc (%) | Model Size  | CPU Inference Latency |
|-----------------------|---------------|-------------|------------------------|
| AlexNet               | 42.537        | 90 MB       | 37.94 ms/sample        |
| **Quantized AlexNet** | **42.537**    | **23.2 MB** | **29.65 ms/sample**    |

---
| Model                   | Top-1 Acc (%) | Model Size  | CPU Inference Latency |
|-------------------------|---------------|-------------|------------------------|
| MobileNet               | 58.46         | 8.7 MB      | 10.23 ms/sample        |
| **Quantized MobileNet** | **58.41**     | **2.8 MB**  | **6.27 ms/sample**     |

---
| Model                   | Top-1 Acc (%) | Model Size  | CPU Inference Latency |
|-------------------------|---------------|-------------|------------------------|
| ResNet-50               | 62.44         | 90 MB       | 162.19 ms/sample       |
| **Quantized ResNet-50** | **63.41**     | **23.2 MB** | **101.07 ms/sample**   |

## Pretrained models
Pretrained models for all of the architectures mentioned above can be downloaded from this [link](https://drive.google.com/drive/folders/1cyvzv0cl4PxqV_DTJZf0EwkBZvwEJFEr?usp=share_link) for inference.
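How these checkpoints were serialized is not documented here; assuming they store full model objects (as `torch.save(model)` would produce), a minimal inference sketch might look like the following, where the file names, input resolution and preprocessing are assumptions:

```python
# A minimal sketch of running inference with a downloaded checkpoint, assuming
# it stores a full (already quantized) model object; file names, input size
# and preprocessing below are placeholders.
import torch
from PIL import Image
from torchvision import transforms

# On recent PyTorch versions, loading a full-model checkpoint may require
# torch.load(..., weights_only=False).
model = torch.load("quantized_mobilenet.pt", map_location="cpu")
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

image = preprocess(Image.open("retina_sample.jpg")).unsqueeze(0)
with torch.no_grad():
    predicted_class = model(image).argmax(dim=1).item()
print(predicted_class)
```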
## Contact

For any queries, feel free to contact me at [email protected].