Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Self-Supervised Contrastive Learning for Colon Pathology Classification
- Host: GitHub
- URL: https://github.com/reshalfahsi/contrastive-ssl-pathology
- Owner: reshalfahsi
- Created: 2023-07-20T09:18:15.000Z (over 1 year ago)
- Default Branch: master
- Last Pushed: 2024-02-28T10:50:54.000Z (9 months ago)
- Last Synced: 2024-02-28T11:49:23.030Z (9 months ago)
- Topics: biomedical-engineering, biomedical-image-processing, contrastive-learning, image-classification, medical-image-analysis, pathology-image, self-supervised-learning
- Language: Jupyter Notebook
- Homepage:
- Size: 1.07 MB
- Stars: 1
- Watchers: 3
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Self-Supervised Contrastive Learning for Colon Pathology Classification
Self-supervised learning (SSL) has become a popular way to learn hidden representations of data points. A dataset does not always come with labels that mark each data point's category or value. SSL mitigates this issue by projecting a data point into an embedding vector that captures the information beneath it. SSL can be trained contrastively, i.e., by measuring the similarity between two projected embeddings (of the original and an augmented view) with a metric such as cosine similarity, Euclidean distance, or Manhattan distance. Having learned the latent representation, the SSL model can serve as a pre-trained model and be fine-tuned as needed.

The SSL model is divided into three parts: the backbone feature extractor, the embedding projection head, and the classification head. The backbone feature extractor leverages ResNet-18. The projection head produces the embedding vector, and the classification head yields the classification result. Two other models are also introduced: the baseline model and the fine-tuned pre-trained SSL model. Both consist of a backbone feature extractor and a classification head, but the latter reuses the trained SSL model's backbone as its own. To evaluate the performance of the models, the PathMNIST subset of the MedMNIST dataset is used. During batched training, every pair in the batch other than a given (positive) pair is treated as a negative pair. This notion underlies the computation of the contrastive loss: NT-Xent/InfoNCE.
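A minimal PyTorch sketch of the three-part model described above is shown below, together with an NT-Xent/InfoNCE loss over a batch of augmented pairs. The class name `SSLModel`, the helper `nt_xent_loss`, and the `projection_dim`/`temperature` values are illustrative assumptions, not the repository's actual identifiers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class SSLModel(nn.Module):
    """ResNet-18 backbone + projection head (contrastive pre-training)
    + classification head (downstream pathology task)."""

    def __init__(self, projection_dim: int = 128, num_classes: int = 9):  # PathMNIST has 9 tissue classes
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        feature_dim = backbone.fc.in_features        # 512 for ResNet-18
        backbone.fc = nn.Identity()                  # strip the ImageNet classifier
        self.backbone = backbone
        self.projection_head = nn.Sequential(
            nn.Linear(feature_dim, feature_dim),
            nn.ReLU(inplace=True),
            nn.Linear(feature_dim, projection_dim),
        )
        self.classification_head = nn.Linear(feature_dim, num_classes)

    def embed(self, x):
        """Embedding branch used during contrastive pre-training."""
        return self.projection_head(self.backbone(x))

    def classify(self, x):
        """Classification branch used for the downstream task."""
        return self.classification_head(self.backbone(x))


def nt_xent_loss(z_i, z_j, temperature: float = 0.1):
    """NT-Xent / InfoNCE: each (i, j) augmented pair is positive, and every
    other sample in the 2N-sized batch acts as a negative."""
    z = F.normalize(torch.cat([z_i, z_j], dim=0), dim=1)   # (2N, D), unit-norm
    sim = z @ z.t() / temperature                           # scaled cosine similarities
    n = z_i.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))                   # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```

For the fine-tuned variant, the trained SSL backbone can be reused before supervised training, e.g. `finetuned.backbone.load_state_dict(ssl_model.backbone.state_dict())`, so only the classification head starts from scratch.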
## Experiment
Click [here](https://github.com/reshalfahsi/contrastive-ssl-pathology/blob/master/Self_Supervised_Contrastive_Learning_for_Colon_Pathology_Classification.ipynb) to carry out the experiments on the baseline model, the SSL model, and the fine-tuned pre-trained SSL model.
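The experiments use PathMNIST; a minimal sketch of loading it with the `medmnist` PyPI package follows. The augmentations, batch size, and the `TwoViews` wrapper are illustrative assumptions, not necessarily how the notebook sets things up.

```python
from torch.utils.data import DataLoader, Dataset
from torchvision import transforms
from medmnist import PathMNIST

# Stochastic augmentation pipeline; PathMNIST images are 28x28 RGB patches.
augment = transforms.Compose([
    transforms.RandomResizedCrop(28, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
    transforms.ToTensor(),
])


class TwoViews(Dataset):
    """Yields two independently augmented views per image, which is what the
    contrastive (NT-Xent) loss compares."""

    def __init__(self, base, transform):
        self.base, self.transform = base, transform

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        img, _ = self.base[idx]              # labels are unused during SSL pre-training
        return self.transform(img), self.transform(img)


train_base = PathMNIST(split="train", download=True)   # returns PIL images when no transform is given
ssl_loader = DataLoader(TwoViews(train_base, augment), batch_size=256, shuffle=True)

test_set = PathMNIST(split="test", transform=transforms.ToTensor(), download=True)
test_loader = DataLoader(test_set, batch_size=256)
```

Each batch from `ssl_loader` can then be fed as two views through `SSLModel.embed` and into `nt_xent_loss` from the sketch above.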
## Result
### Quantitative Result
The table below presents the quantitative result of the models on the test set.
| Model      | Loss  | Accuracy |
| ---------- | ----- | -------- |
| Baseline   | 0.367 | 91.89%   |
| SSL        | 0.480 | 86.32%   |
| Fine-tuned | 0.438 | 91.05%   |

### Validation Accuracy and Loss Curve
Comparison of accuracy curves between the baseline model, the SSL model, and the fine-tuned pre-trained SSL model on the validation set.

Comparison of loss curves between the baseline model, the SSL model, and the fine-tuned pre-trained SSL model on the validation set.

### Qualitative Result
The qualitative results of the model on the inference set are shown below.
The qualitative result of the baseline model.
The qualitative result of the SSL model.
The qualitative result of the fine-tuned pre-trained SSL model.

## Credit
- [Semi-supervised image classification using contrastive pretraining with SimCLR](https://keras.io/examples/vision/semisupervised_simclr/)
- [MedMNIST v2 - A large-scale lightweight benchmark for 2D and 3D biomedical image classification](https://medmnist.com/)
- [PyTorch Lightning](https://lightning.ai/docs/pytorch/latest/)