Knowledge Distillation for Skin Lesion Classification
- Host: GitHub
- URL: https://github.com/reshalfahsi/knowledge-distillation
- Owner: reshalfahsi
- Created: 2024-01-13T15:10:54.000Z (10 months ago)
- Default Branch: master
- Last Pushed: 2024-08-24T10:47:33.000Z (3 months ago)
- Last Synced: 2024-08-24T11:50:06.793Z (3 months ago)
- Topics: efficientnet, image-processing, knowledge-distillation, medical-image-processing, medmnist, pytorch, pytorch-lightning, skin-lesion-classification, squeezenet
- Language: Jupyter Notebook
- Homepage:
- Size: 4.17 MB
- Stars: 0
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Knowledge Distillation for Skin Lesion Classification
The goal of knowledge distillation is to improve the performance of a smaller, less capable model, which usually has fewer parameters, by letting it learn from a more competent model, the teacher. The smaller model, or the student, extracts knowledge from the teacher by matching its class distribution to the teacher's. To make the distributions softer (they are used during training as part of the loss function), we apply a temperature _T_ by dividing the logits by _T_ before the softmax. This project designates EfficientNet-B0 as the teacher and SqueezeNet v1.1 as the student. The models are trained and evaluated on the DermaMNIST dataset of MedMNIST. The Result section compares the performance of the teacher, the student without knowledge distillation, and the student with knowledge distillation.
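
As a minimal sketch of the softened-distribution loss described above (the temperature `T = 4.0` and the soft/hard weighting `alpha = 0.5` are illustrative assumptions, not values taken from the notebook), the distillation objective can be written in PyTorch as:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soften both class distributions by dividing the logits by the
    # temperature T before the softmax.
    soft_student = F.log_softmax(student_logits / T, dim=1)
    soft_teacher = F.softmax(teacher_logits / T, dim=1)

    # KL divergence between the softened distributions; the T**2 factor
    # keeps gradient magnitudes comparable across temperatures
    # (Hinton et al., 2015).
    soft_loss = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T ** 2)

    # Ordinary cross-entropy against the ground-truth lesion labels.
    hard_loss = F.cross_entropy(student_logits, labels)

    return alpha * soft_loss + (1.0 - alpha) * hard_loss
```

During training, `teacher_logits` would come from the frozen teacher run in `eval()` mode under `torch.no_grad()`, so only the student receives gradients.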
## Experiment
To witness the distillation in action, please refer to the notebook at the following [link](https://github.com/reshalfahsi/knowledge-distillation/blob/master/Knowledge_Distillation_for_Skin_Lesion_Classification.ipynb).
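
For context, the teacher, the student, and the dataset can be assembled along the following lines. This is a sketch under stated assumptions rather than the notebook's exact code: the use of ImageNet-pretrained weights and the omission of resizing/normalization transforms are assumptions made here for brevity.

```python
import torch.nn as nn
from torchvision.models import efficientnet_b0, squeezenet1_1
from medmnist import DermaMNIST  # pip install medmnist

NUM_CLASSES = 7  # DermaMNIST covers 7 skin-lesion categories

# Teacher: EfficientNet-B0, final linear layer resized to 7 classes.
teacher = efficientnet_b0(weights="IMAGENET1K_V1")
teacher.classifier[1] = nn.Linear(teacher.classifier[1].in_features, NUM_CLASSES)

# Student: SqueezeNet v1.1, whose classifier is a 1x1 convolution.
student = squeezenet1_1(weights="IMAGENET1K_V1")
student.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)

# DermaMNIST split from MedMNIST (transforms omitted for brevity).
train_set = DermaMNIST(split="train", download=True)
```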
## Result
## Quantitative Result
The quantitative results are presented in the table below.
Model | Loss | Accuracy |
------------ | ------------- | ------------- |
Teacher | 1.935 | 71.61% |
Student | 1.932 | 69.02% |
Distilled | 1.918 | 73.44% |

## Accuracy and Loss Curve
### Teacher
The loss curve on the train set and the validation set of the teacher model.
The accuracy curve on the train set and the validation set of the teacher model.

### Student
The loss curve on the train set and the validation set of the student model.
The accuracy curve on the train set and the validation set of the student model.

### Distilled
The loss curve on the train set and the validation set of the distilled model.
The accuracy curve on the train set and the validation set of the distilled model.

### Overall Validation Curve
Comparison of loss curves between the teacher model, the student model, and the distilled model on the validation set.
Comparison of accuracy curves between the teacher model, the student model, and the distilled model on the validation set.

## Qualitative Result
The qualitative results of the models on the test set are presented below.
### Teacher
The qualitative result of the teacher model.

### Student
The qualitative result of the student model.

### Distilled
The qualitative result of the distilled model.

## Credit
- [Knowledge Distillation](https://keras.io/examples/vision/knowledge_distillation/)
- [Distilling the Knowledge in a Neural Network](https://arxiv.org/pdf/1503.02531.pdf)
- [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)
- [SqueezeNet v1.1](https://github.com/forresti/SqueezeNet/tree/master/SqueezeNet_v1.1)
- [MedMNIST v2 - A large-scale lightweight benchmark for 2D and 3D biomedical image classification](https://medmnist.com/)
- [PyTorch Lightning](https://lightning.ai/docs/pytorch/latest/)