Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/deepmancer/byol-pytorch
BYOL (Bootstrap Your Own Latent), implemented from scratch in Pytorch
- Host: GitHub
- URL: https://github.com/deepmancer/byol-pytorch
- Owner: deepmancer
- Created: 2023-08-17T21:28:05.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-08-16T11:23:33.000Z (3 months ago)
- Last Synced: 2024-10-11T20:06:19.592Z (about 1 month ago)
- Topics: bootstrap, byol, contrastive-learning, from-scratch, pytorch
- Language: Jupyter Notebook
- Homepage:
- Size: 252 KB
- Stars: 4
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# 📈 BYOL (Bootstrap Your Own Latent) – From Scratch Implementation in PyTorch
Welcome to the **BYOL (Bootstrap Your Own Latent)** repository! Dive into our comprehensive, from-scratch implementation of BYOL, a self-supervised contrastive learning algorithm that is transforming how we approach unsupervised feature learning.

---
## 🔧 Requirements
Before running the code, ensure you have the following dependencies:
- **Python 3**: The language used for implementation.
- **PyTorch**: The deep learning framework powering our model training and evaluation.

## 🧠 Model Overview
BYOL represents a breakthrough in self-supervised learning. Unlike traditional methods that rely on negative samples for contrastive learning, BYOL focuses solely on positive pairs—images of the same instance with different augmentations. This unique approach simplifies training and minimizes computational requirements, achieving remarkable results.
### Key Features:
- **No Negative Samples Required**: Efficient training by focusing exclusively on positive pairs.
- **State-of-the-Art Results**: Achieves impressive performance on various image classification benchmarks.

*(Figure: BYOL model architecture.)*
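The positive-pair scheme described above can be sketched in PyTorch. This is a minimal illustration, not the repository's actual code: the MLP sizes, the EMA decay `tau`, and the class layout are assumptions. The essentials are an online branch (encoder → projector → predictor) trained by gradient descent, and a target branch that is a slowly moving average of the online weights; each view's online prediction is regressed onto the other view's target projection, with no negative pairs anywhere.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def mlp(in_dim, hidden_dim, out_dim):
    # Projector/predictor head (sizes here are illustrative).
    return nn.Sequential(
        nn.Linear(in_dim, hidden_dim),
        nn.BatchNorm1d(hidden_dim),
        nn.ReLU(inplace=True),
        nn.Linear(hidden_dim, out_dim),
    )

class BYOL(nn.Module):
    def __init__(self, encoder, feat_dim, proj_dim=64, hidden_dim=128, tau=0.996):
        super().__init__()
        self.tau = tau  # EMA decay for the target network
        # Online branch: encoder -> projector -> predictor (trained by SGD).
        self.online_encoder = encoder
        self.online_projector = mlp(feat_dim, hidden_dim, proj_dim)
        self.predictor = mlp(proj_dim, hidden_dim, proj_dim)
        # Target branch: an EMA copy of the online encoder/projector,
        # never updated by gradients.
        self.target_encoder = copy.deepcopy(encoder)
        self.target_projector = copy.deepcopy(self.online_projector)
        for p in self.target_encoder.parameters():
            p.requires_grad_(False)
        for p in self.target_projector.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update_target(self):
        # target <- tau * target + (1 - tau) * online
        pairs = list(zip(self.online_encoder.parameters(),
                         self.target_encoder.parameters()))
        pairs += list(zip(self.online_projector.parameters(),
                          self.target_projector.parameters()))
        for online_p, target_p in pairs:
            target_p.mul_(self.tau).add_(online_p, alpha=1.0 - self.tau)

    def forward(self, view1, view2):
        # Symmetrized loss: each view's online prediction is regressed onto
        # the other view's target projection; no negative pairs are used.
        def regression_loss(a, b):
            pred = self.predictor(self.online_projector(self.online_encoder(a)))
            with torch.no_grad():
                target = self.target_projector(self.target_encoder(b))
            return 2.0 - 2.0 * F.cosine_similarity(pred, target, dim=-1).mean()
        return regression_loss(view1, view2) + regression_loss(view2, view1)
```

In training, `update_target()` is called after each optimizer step so the target network tracks the online network slowly; this asymmetry (plus the predictor) is what prevents the representations from collapsing despite the absence of negatives.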
## 📁 Dataset
### STL10 Dataset
We use the STL10 dataset to evaluate our BYOL implementation. This dataset is tailored for developing and testing unsupervised feature learning and self-supervised learning models.
- **Overview**: Contains 10 classes, each with 500 labeled training images and 800 test images, plus a large unlabeled split intended for unsupervised pretraining.
- **Source**: [STL10 Dataset](https://cs.stanford.edu/~acoates/stl10/)

*(Figure: STL10 dataset example images.)*
## 📊 Results
Our experiments highlight the impact of BYOL pretraining on the STL10 dataset:
- **Without Pretraining**: Baseline accuracy of 84.58%.
- **With BYOL Pretraining**: Accuracy improved to 87.61% after 10 epochs, demonstrating BYOL’s effectiveness.

### Implementation Insights
This repository features a complete, from-scratch implementation of BYOL. For our experiments, we used a ResNet18 model pretrained on ImageNet as the encoder, showcasing how leveraging pretrained models can further enhance BYOL’s capabilities.