https://github.com/deepmancer/byol-pytorch

BYOL (Bootstrap Your Own Latent), implemented from scratch in Pytorch

bootstrap byol contrastive-learning from-scratch pytorch

# 📈 BYOL (Bootstrap Your Own Latent) – From Scratch Implementation in PyTorch


Welcome to the **BYOL (Bootstrap Your Own Latent)** repository! Dive into our comprehensive, from-scratch PyTorch implementation of BYOL, a self-supervised representation learning algorithm that learns strong visual features without the negative pairs required by contrastive methods.

---

## 🔧 Requirements

Before running the code, ensure you have the following dependencies:

- **Python 3**: The language used for implementation.
- **PyTorch**: The deep learning framework powering our model training and evaluation.

## 🧠 Model Overview

BYOL represents a breakthrough in self-supervised learning. Unlike traditional contrastive methods that rely on negative samples, BYOL trains only on positive pairs: two differently augmented views of the same image. An online network is trained to predict the representation a target network produces for the other view, where the target's weights are an exponential moving average (EMA) of the online weights; this asymmetry is what prevents the representations from collapsing. The approach simplifies training, reduces the need for large batches of negatives, and achieves remarkable results.

### Key Features:
- **No Negative Samples Required**: Efficient training by focusing exclusively on positive pairs.
- **State-of-the-Art Results**: Achieves impressive performance on various image classification benchmarks.
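As a sketch of the core mechanism (illustrative, not the repository's exact code; the names `byol_loss` and `ema_update` and the decay value are assumptions here): the online network's prediction is regressed onto the target network's projection with a normalized MSE, which equals `2 − 2·cos(p, z)`, and the target weights slowly track the online weights by EMA.

```python
import torch
import torch.nn.functional as F


def byol_loss(p_online: torch.Tensor, z_target: torch.Tensor) -> torch.Tensor:
    """Normalized MSE between the online prediction and the target projection.

    After L2-normalization this equals 2 - 2 * cosine_similarity(p, z).
    """
    p = F.normalize(p_online, dim=-1)
    z = F.normalize(z_target, dim=-1)
    return (2 - 2 * (p * z).sum(dim=-1)).mean()


@torch.no_grad()
def ema_update(target: torch.nn.Module, online: torch.nn.Module,
               tau: float = 0.996) -> None:
    """Target weights track the online weights: t <- tau * t + (1 - tau) * o."""
    for t_param, o_param in zip(target.parameters(), online.parameters()):
        t_param.mul_(tau).add_(o_param, alpha=1 - tau)
```

In training the loss is symmetrized over the two views, and gradients flow only through the online branch (the target's outputs are detached).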

*Figure: BYOL model overview*

## 📁 Dataset

### STL10 Dataset

We use the STL10 dataset to evaluate our BYOL implementation. This dataset is tailored for developing and testing unsupervised feature learning and self-supervised learning models.

- **Overview**: 10 classes with 500 training and 800 test images each, plus 100,000 unlabeled images intended for unsupervised learning.
- **Source**: [STL10 Dataset](https://cs.stanford.edu/~acoates/stl10/)
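BYOL consumes two independently augmented views of each image, so the dataset loader has to return a pair per sample. A minimal wrapper (a sketch; `TwoViewDataset` is a name assumed here, not necessarily the repository's loader) could look like:

```python
import torch
from torch.utils.data import Dataset


class TwoViewDataset(Dataset):
    """Wraps a (image, label) dataset and returns two augmented views per image."""

    def __init__(self, base, transform):
        self.base = base            # e.g. torchvision.datasets.STL10(split="unlabeled")
        self.transform = transform  # random augmentation, applied independently twice

    def __len__(self):
        return len(self.base)

    def __getitem__(self, idx):
        x, _ = self.base[idx]       # the label is ignored in self-supervised training
        return self.transform(x), self.transform(x)
```

With torchvision, one would pass STL10's unlabeled split and a stochastic pipeline (random resized crop, horizontal flip, color jitter, grayscale, Gaussian blur) of the kind used in the BYOL paper.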

*Figure: STL10 dataset sample*

## 📊 Results

Our experiments highlight the impact of BYOL pretraining on the STL10 dataset:

- **Without Pretraining**: Baseline accuracy of 84.58%.
- **With BYOL Pretraining**: Accuracy improved to 87.61% after 10 epochs, demonstrating BYOL’s effectiveness.

### Implementation Insights

This repository features a complete, from-scratch implementation of BYOL. For our experiments, we used a ResNet18 model pretrained on ImageNet as the encoder, showcasing how leveraging pretrained models can further enhance BYOL’s capabilities.