# Knowledge Distillation for Skin Lesion Classification




The goal of knowledge distillation is to improve the performance of a weaker model, which usually has fewer parameters, by letting it learn from a more competent model: the teacher. The weaker model, or the student, extracts knowledge from the teacher by matching its class distribution to the teacher's. To soften the distributions used in the distillation term of the training loss, a temperature _T_ is applied by dividing the logits by _T_ before the softmax. This project designates EfficientNet-B0 as the teacher and SqueezeNet v1.1 as the student. Both models are trained and evaluated on the DermaMNIST dataset of MedMNIST. The Result section compares the performance of the teacher, the student trained without knowledge distillation, and the student trained with knowledge distillation.
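
Below is a minimal sketch of the temperature-softened distillation loss described above, assuming PyTorch. The function name, the temperature `T`, and the weighting factor `alpha` are illustrative defaults, not necessarily the values used in this project's notebook.

```python
import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Combine hard-label cross-entropy with a soft-label distillation term.

    The student and teacher logits are divided by the temperature T before the
    softmax so the class distributions become softer. The soft term is scaled
    by T**2 to keep its gradient magnitude comparable across temperatures.
    """
    # Standard cross-entropy against the ground-truth labels (hard targets).
    hard_loss = F.cross_entropy(student_logits, labels)

    # KL divergence between the softened student and teacher distributions (soft targets).
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

    return alpha * hard_loss + (1.0 - alpha) * soft_loss
```

At training time the teacher's logits are computed with gradients disabled, so only the student's parameters are updated by this loss.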

## Experiment

To witness the distillation in action, please refer to the notebook at the following [link](https://github.com/reshalfahsi/knowledge-distillation/blob/master/Knowledge_Distillation_for_Skin_Lesion_Classification.ipynb).
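
For orientation, the sketch below shows one way the teacher, the student, and the DermaMNIST splits could be set up with torchvision and the medmnist package. The pretrained weights and the replaced classifier heads are assumptions for illustration and may differ from the notebook's exact setup.

```python
import torch.nn as nn
from torchvision import models
from medmnist import DermaMNIST

NUM_CLASSES = 7  # DermaMNIST contains 7 skin-lesion classes

# Teacher: EfficientNet-B0 with its final linear layer replaced.
teacher = models.efficientnet_b0(weights="IMAGENET1K_V1")
teacher.classifier[1] = nn.Linear(teacher.classifier[1].in_features, NUM_CLASSES)

# Student: SqueezeNet v1.1 with its final 1x1 convolution replaced.
student = models.squeezenet1_1(weights="IMAGENET1K_V1")
student.classifier[1] = nn.Conv2d(512, NUM_CLASSES, kernel_size=1)

# DermaMNIST splits from the medmnist package (28x28 RGB dermatoscopy images).
train_set = DermaMNIST(split="train", download=True)
val_set = DermaMNIST(split="val", download=True)
test_set = DermaMNIST(split="test", download=True)
```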

## Result

### Quantitative Result

The quantitative results are summarized in the table below.

Model | Loss | Accuracy |
------------ | ------------- | ------------- |
Teacher | 1.935 | 71.61% |
Student | 1.932 | 69.02% |
Distilled | 1.918 | 73.44% |

### Accuracy and Loss Curve

#### Teacher

Figure: the loss curve on the train set and the validation set of the teacher model.

Figure: the accuracy curve on the train set and the validation set of the teacher model.

#### Student

Figure: the loss curve on the train set and the validation set of the student model.

Figure: the accuracy curve on the train set and the validation set of the student model.

#### Distilled

Figure: the loss curve on the train set and the validation set of the distilled model.

Figure: the accuracy curve on the train set and the validation set of the distilled model.

#### Overall Validation Curve

Figure: comparison of the loss curves of the teacher, student, and distilled models on the validation set.

Figure: comparison of the accuracy curves of the teacher, student, and distilled models on the validation set.

### Qualitative Result

The qualitative results of the models on the test set are shown below.

#### Teacher

Figure: the qualitative result of the teacher model.

#### Student

Figure: the qualitative result of the student model.

#### Distilled

Figure: the qualitative result of the distilled model.

## Credit

- [Knowledge Distillation](https://keras.io/examples/vision/knowledge_distillation/)
- [Distilling the Knowledge in a Neural Network](https://arxiv.org/pdf/1503.02531.pdf)
- [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)
- [SqueezeNet v1.1](https://github.com/forresti/SqueezeNet/tree/master/SqueezeNet_v1.1)
- [MedMNIST v2 - A large-scale lightweight benchmark for 2D and 3D biomedical image classification](https://medmnist.com/)
- [PyTorch Lightning](https://lightning.ai/docs/pytorch/latest/)