# Self-Supervised Contrastive Learning for Colon Pathology Classification




Self-supervised learning (SSL) has become a popular way to learn the hidden representation of data. A dataset is not always provided with labels marking each data point's category or value; SSL mitigates this by projecting a data point into an embedding vector that captures its underlying information. SSL can be trained contrastively, i.e., by measuring the similarity between two projected embeddings (one from the original view and one from an augmented view) with a metric such as cosine similarity, Euclidean distance, or Manhattan distance. Once the latent representation is learned, the SSL model can serve as a pre-trained model and be fine-tuned as needed.

The SSL model here is divided into three parts: a backbone feature extractor, an embedding projection head, and a classification head. The backbone is ResNet-18, the projection head produces the embedding vector, and the classification head produces the classification result. Two other models are introduced for comparison: a baseline model and a fine-tuned pre-trained SSL model. Both consist of a backbone feature extractor and a classification head, but the latter reuses the trained SSL model's backbone as its own. To evaluate the models, the PathMNIST subset of the MedMNIST dataset is used.

During batched training, for a given positive pair (a sample and its augmented view), every other pair in the batch is treated as a negative pair. This notion underlies the contrastive loss used here: NT-Xent, also known as InfoNCE.
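To make the setup concrete, here is a minimal PyTorch sketch of the three-part model and the NT-Xent loss with in-batch negatives. It assumes torchvision's ResNet-18; the names `SSLModel` and `nt_xent_loss` are illustrative and not taken from this repository, whose actual implementation is built on PyTorch Lightning and may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18


class SSLModel(nn.Module):
    """Backbone feature extractor + embedding projection head + classification head."""

    def __init__(self, embed_dim: int = 128, num_classes: int = 9):  # PathMNIST has 9 classes
        super().__init__()
        backbone = resnet18(weights=None)            # random init (torchvision >= 0.13 API)
        feat_dim = backbone.fc.in_features           # 512 for ResNet-18
        backbone.fc = nn.Identity()                  # drop the ImageNet classifier
        self.backbone = backbone
        self.projection_head = nn.Sequential(        # produces the embedding vector
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feat_dim, embed_dim),
        )
        self.classification_head = nn.Linear(feat_dim, num_classes)

    def embed(self, x):                              # used during contrastive pre-training
        return self.projection_head(self.backbone(x))

    def forward(self, x):                            # used for classification
        return self.classification_head(self.backbone(x))


def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent/InfoNCE: each (original, augmented) embedding pair is a positive;
    all other samples in the batch serve as negatives."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # 2n x d, unit-norm rows
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # a sample is not its own positive
    # The positive for row i is its other view, located n rows away.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

During pre-training, only `embed` and `nt_xent_loss` are applied to pairs of augmented views; labels are never touched. The classification head only comes into play for the baseline and fine-tuned models.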

## Experiment

Click [here](https://github.com/reshalfahsi/contrastive-ssl-pathology/blob/master/Self_Supervised_Contrastive_Learning_for_Colon_Pathology_Classification.ipynb) to run the experiments on the baseline model, the SSL model, and the fine-tuned pre-trained SSL model.
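As a rough illustration of how the fine-tuned model differs from the baseline, continuing the hypothetical `SSLModel` sketch above (the notebook's actual code may differ):

```python
# Hypothetical continuation of the SSLModel sketch above.
ssl_model = SSLModel()
# ... pre-train ssl_model contrastively with nt_xent_loss on augmented pairs ...

finetuned = SSLModel(num_classes=9)
finetuned.backbone.load_state_dict(ssl_model.backbone.state_dict())
# Training finetuned(x) against labels now starts from the SSL features,
# whereas the baseline trains the same architecture from scratch.
```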

## Results

### Quantitative Results

The table below presents the quantitative results of the models on the test set.

| Model | Loss | Accuracy |
| ---------- | ----- | -------- |
| Baseline | 0.367 | 91.89% |
| SSL | 0.480 | 86.32% |
| Fine-tuned | 0.438 | 91.05% |

### Validation Accuracy and Loss Curves

Figure: Comparison of accuracy curves between the baseline model, the SSL model, and the fine-tuned pre-trained SSL model on the validation set.

Figure: Comparison of loss curves between the baseline model, the SSL model, and the fine-tuned pre-trained SSL model on the validation set.

### Qualitative Results

The qualitative results of the models on the inference set are shown below.

Figure: The qualitative result of the baseline model.

Figure: The qualitative result of the SSL model.

Figure: The qualitative result of the fine-tuned pre-trained SSL model.

## Credit

- [Semi-supervised image classification using contrastive pretraining with SimCLR](https://keras.io/examples/vision/semisupervised_simclr/)
- [MedMNIST v2 - A large-scale lightweight benchmark for 2D and 3D biomedical image classification](https://medmnist.com/)
- [PyTorch Lightning](https://lightning.ai/docs/pytorch/latest/)