Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/omidghadami95/efficientnetv2_quantization_ck
EfficientNetV2 (EfficientNetV2-B2) with int8 and fp32 quantization (QAT and PTQ) on the CK+ dataset: fine-tuning, augmentation, handling the imbalanced dataset, etc.
ckplus efficientnet efficientnetv2 efficientnetv2-b2 emotion-recognition facial-emotion-recognition googlecolab imbalanced-dataset keras post-training-quantization ptq python qat quantization quantization-aware-training real-time-emotion-classification real-time-emotion-detection scale-down tensorflow
Last synced: 7 days ago
- Host: GitHub
- URL: https://github.com/omidghadami95/efficientnetv2_quantization_ck
- Owner: OmidGhadami95
- Created: 2023-07-14T17:37:13.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-05-04T12:15:14.000Z (8 months ago)
- Last Synced: 2024-05-04T13:30:05.192Z (8 months ago)
- Topics: ckplus, efficientnet, efficientnetv2, efficientnetv2-b2, emotion-recognition, facial-emotion-recognition, googlecolab, imbalanced-dataset, keras, post-training-quantization, ptq, python, qat, quantization, quantization-aware-training, real-time-emotion-classification, real-time-emotion-detection, scale-down, tensorflow
- Language: Jupyter Notebook
- Homepage:
- Size: 344 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# EfficientNetV2_Quantization_CKplus (TensorFlow/Keras)
Real-time facial emotion recognition using EfficientNetV2 (EfficientNetV2-B2) with int8 and fp32 quantization (QAT and PTQ) on the CK+ dataset, covering fine-tuning, augmentation, handling the imbalanced dataset, and more. This code includes:
1- Data loading (download and split the dataset).
2- Preprocessing of the CK+ dataset (normalization, resizing, augmentation, and handling the class-imbalance problem).
3- Fine-tuning (using ImageNet pre-trained weights as the initialization for training).
4- Quantization to int8 and fp32, with fine-tuning after quantization: quantization-aware training in int8 (QAT) and post-training quantization in float32 (PTQ).
5- Macro-, micro-, and weighted-averaged precision, recall, and F1-score.
6- Confusion matrix.

Note that int8 quantization reduces inference time and model size, but it can also lower accuracy, particularly with PTQ. To compensate for that loss we use a quantization-aware training approach, i.e. fine-tuning after quantization to recover the lost accuracy. Finally, we compare int8 QAT and fp32 PTQ in terms of accuracy, model size, and inference time. Sketches of the main steps follow below.
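A minimal sketch of the imbalance-handling and augmentation steps (item 2), assuming integer-encoded emotion labels in a `train_labels` array and Keras preprocessing layers; the exact transforms and label encoding in the notebook may differ:

```python
import numpy as np
import tensorflow as tf
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical placeholder: in the notebook these come from the CK+ split.
train_labels = np.array([0, 1, 1, 2, 3, 4, 5, 6, 6, 6])

# Class weights counteract the CK+ label imbalance during training.
weights = compute_class_weight("balanced",
                               classes=np.unique(train_labels),
                               y=train_labels)
class_weights = dict(enumerate(weights))

# Light on-the-fly augmentation applied to the training images.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),
    tf.keras.layers.RandomZoom(0.1),
])
```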
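Fine-tuning with ImageNet weights (item 3) might look like the following; `NUM_CLASSES`, the input size, and the classification head are assumptions for illustration, not necessarily what the notebook uses:

```python
import tensorflow as tf

NUM_CLASSES = 7   # assumption: 7 CK+ emotion classes
IMG_SIZE = 224    # assumption: EfficientNetV2-B2's native size is 260x260

# EfficientNetV2-B2 backbone with ImageNet weights and no classification head.
base = tf.keras.applications.EfficientNetV2B2(
    include_top=False,
    weights="imagenet",
    input_shape=(IMG_SIZE, IMG_SIZE, 3),
    pooling="avg",
)

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=20, class_weight=class_weights)
```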
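The two quantization paths (item 4) could be outlined as below with the TensorFlow Model Optimization toolkit and the TFLite converter, reusing the `model` from the fine-tuning sketch. Note that `tfmot.quantization.keras.quantize_model` may need custom quantize configs for some EfficientNetV2 layers, so treat this as a simplified outline rather than the notebook's exact procedure:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# --- PTQ path (fp32): plain TFLite conversion without weight quantization ---
converter = tf.lite.TFLiteConverter.from_keras_model(model)
fp32_tflite = converter.convert()

# --- QAT path (int8): insert fake-quant ops, fine-tune, then convert ---
qat_model = tfmot.quantization.keras.quantize_model(model)
qat_model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# qat_model.fit(train_ds, validation_data=val_ds, epochs=5)  # recover lost accuracy

converter = tf.lite.TFLiteConverter.from_keras_model(qat_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
int8_tflite = converter.convert()

with open("model_int8.tflite", "wb") as f:
    f.write(int8_tflite)
```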
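The averaged metrics and confusion matrix (items 5 and 6) can be computed with scikit-learn; the `y_true`/`y_pred` arrays here are placeholders, whereas in the notebook they come from the test split and the model's predictions:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, precision_recall_fscore_support

# Hypothetical labels and predictions for illustration only.
y_true = np.array([0, 1, 2, 2, 3, 4, 5, 6])
y_pred = np.array([0, 1, 2, 1, 3, 4, 5, 6])

# Macro, micro, and weighted averaging of precision, recall, and F1-score.
for avg in ("macro", "micro", "weighted"):
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average=avg,
                                                  zero_division=0)
    print(f"{avg:>8}: precision={p:.3f}  recall={r:.3f}  f1={f1:.3f}")

print(confusion_matrix(y_true, y_pred))
```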