https://github.com/facebookresearch/asym-siam
PyTorch implementation of Asymmetric Siamese (https://arxiv.org/abs/2204.00613)
- Host: GitHub
- URL: https://github.com/facebookresearch/asym-siam
- Owner: facebookresearch
- License: other
- Archived: true
- Created: 2022-01-14T21:03:35.000Z (almost 3 years ago)
- Default Branch: main
- Last Pushed: 2022-05-02T07:05:28.000Z (over 2 years ago)
- Last Synced: 2024-04-20T04:34:12.496Z (7 months ago)
- Language: Python
- Homepage:
- Size: 224 KB
- Stars: 99
- Watchers: 11
- Forks: 5
- Open Issues: 0
- Metadata Files:
  - Readme: README.md
  - Contributing: CONTRIBUTING.md
  - License: LICENSE
  - Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- Awesome-Mixup
README
# Asym-Siam: On the Importance of Asymmetry for Siamese Representation Learning
This is a PyTorch implementation of the [Asym-Siam paper](https://arxiv.org/abs/2204.00613), CVPR 2022:
```
@inproceedings{wang2022asym,
title = {On the Importance of Asymmetry for Siamese Representation Learning},
author = {Xiao Wang and Haoqi Fan and Yuandong Tian and Daisuke Kihara and Xinlei Chen},
booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022}
}
```
The pre-training code is built on [MoCo](https://github.com/facebookresearch/moco), with additional designs described and analyzed in the paper. The linear classification code is from [SimSiam](https://github.com/facebookresearch/simsiam), which uses the LARS optimizer.
## Installation
1. [Install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git)
2. Install PyTorch and prepare the ImageNet dataset following the [official PyTorch ImageNet training code](https://github.com/pytorch/examples/tree/master/imagenet).
3. Install [apex](https://github.com/NVIDIA/apex) for the LARS optimizer used in linear classification. If you find it hard to install apex, it suffices to just copy the [code](https://github.com/NVIDIA/apex/blob/master/apex/parallel/LARC.py) directly for use.
4. Clone the repository:
```
git clone https://github.com/facebookresearch/asym-siam && cd asym-siam
```
## 1 Unsupervised Training
This implementation only supports **multi-gpu**, **DistributedDataParallel** training, which is faster and simpler; single-gpu or DataParallel training is not supported.
### 1.1 Our MoCo Baseline (BN in projector MLP)
To do unsupervised pre-training of a ResNet-50 model on ImageNet in an 8-gpu machine, run:
```
python main_moco.py \
-a resnet50 \
--lr 0.03 \
--batch-size 256 \
--dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
[your imagenet-folder with train and val folders]
```
This script uses all the default hyper-parameters as described in the MoCo v2 paper; we only upgrade the projector to an MLP with a BN layer.
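For reference, a minimal sketch of such a projector head (a two-layer MLP with a BN layer; the dimensions below are illustrative, see `main_moco.py` for the head actually used in this repo):

```
import torch.nn as nn

# Illustrative dimensions only; the exact projector is defined in this repo's code.
dim_in, dim_hidden, dim_out = 2048, 2048, 128

projector = nn.Sequential(
    nn.Linear(dim_in, dim_hidden),
    nn.BatchNorm1d(dim_hidden),   # the BN layer added on top of the MoCo v2 MLP head
    nn.ReLU(inplace=True),
    nn.Linear(dim_hidden, dim_out),
)
```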
### 1.2 MoCo + MultiCrop
```
python main_moco.py \
-a resnet50 \
--lr 0.03 \
--batch-size 256 \
--dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
[your imagenet-folder with train and val folders] --enable-multicrop
```
By simply setting **--enable-multicrop**, we enable asymmetric MultiCrop on the source side.
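As a rough illustration of what source-side MultiCrop amounts to (torchvision-style transforms; the crop sizes, scale ranges, and number of crops below are assumptions, not the exact settings behind `--enable-multicrop`):

```
from torchvision import transforms

# Standard-resolution view for the source encoder.
global_crop = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

# Several additional low-resolution crops, applied only on the source side.
small_crop = transforms.Compose([
    transforms.RandomResizedCrop(96, scale=(0.05, 0.14)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def multicrop_views(img, num_small=6):
    """Return one global view plus several small views of the same image."""
    return [global_crop(img)] + [small_crop(img) for _ in range(num_small)]
```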
### 1.3 MoCo + ScaleMix
```
python main_moco.py \
-a resnet50 \
--lr 0.03 \
--batch-size 256 \
--dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
[your imagenet-folder with train and val folders] --enable-scalemix
```
By simply setting **--enable-scalemix**, we enable asymmetric ScaleMix on the source side.
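ScaleMix generates a new source view by mixing two views (of potentially different scales) of the same image. A minimal sketch of the idea, using CutMix-style box pasting (an illustration, not this repo's exact implementation):

```
import random
import torch

def scalemix(view1, view2):
    """Paste a random rectangular region of view2 into view1.

    Both inputs are augmented crops (C, H, W) of the same image, possibly taken
    at different scales; the mixed result is used as the source-side view.
    """
    _, h, w = view1.shape
    # Sample a random box whose area is proportional to a random ratio.
    ratio = random.uniform(0.0, 1.0)
    cut_h, cut_w = int(h * ratio ** 0.5), int(w * ratio ** 0.5)
    cy, cx = random.randint(0, h - 1), random.randint(0, w - 1)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)

    mixed = view1.clone()
    mixed[:, y1:y2, x1:x2] = view2[:, y1:y2, x1:x2]
    return mixed
```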
### 1.4 MoCo + AsymAug
```
python main_moco.py \
-a resnet50 \
--lr 0.03 \
--batch-size 256 \
--dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
[your imagenet-folder with train and val folders] --enable-asymm-aug
```
By simply setting **--enable-asymm-aug**, we apply stronger augmentation on the source side and weaker augmentation on the target side.
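Concretely, the source view gets a rich augmentation recipe while the target view gets a weak one. A hedged sketch with torchvision transforms (the specific recipes below are illustrative; the exact ones are defined in this repo's code):

```
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])

# Stronger, MoCo v2-style recipe for the source view.
stronger_aug = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.RandomApply([transforms.GaussianBlur(23, sigma=(0.1, 2.0))], p=0.5),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])

# Weaker recipe (crop + flip only) for the target view.
weaker_aug = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])
```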
### 1.5 MoCo + AsymBN
```
python main_moco.py \
-a resnet50 \
--lr 0.03 \
--batch-size 256 \
--dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
[your imagenet-folder with train and val folders] --enable-asym-bn
```
By simply setting **--enable-asym-bn**, we enable asymmetric BN on the target side (sync BN for the target encoder).
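In PyTorch terms, this corresponds to converting the BN layers of the target (momentum) encoder to synchronized BN, while the source encoder keeps regular per-GPU BN. A minimal sketch (the function and encoder names are placeholders, not this repo's API):

```
import torch.nn as nn
import torchvision.models as models

def convert_target_to_sync_bn(target_encoder: nn.Module) -> nn.Module:
    """Replace the target (momentum) encoder's BN layers with SyncBatchNorm so its
    statistics are aggregated across all GPUs; the source encoder is left untouched.
    Forward passes then require torch.distributed to be initialized."""
    return nn.SyncBatchNorm.convert_sync_batchnorm(target_encoder)

# Illustration only; in this repo the momentum encoder is built inside main_moco.py.
target_encoder = convert_target_to_sync_bn(models.resnet50())
```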
### 1.6 MoCo + MeanEnc
```
python main_moco.py \
-a resnet50 \
--lr 0.03 \
--batch-size 256 \
--dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
[your imagenet-folder with train and val folders] --enable-mean-encoding
```
By simply setting **--enable-mean-encoding**, we enable MeanEnc on the target side.
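MeanEnc builds the target representation by encoding several augmented views of the same image with the target encoder and averaging their embeddings. A minimal sketch of the idea (the normalization details are assumptions; see the repo's code for the exact form):

```
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_encode(target_encoder, views):
    """Encode K augmented views with the target (momentum) encoder and average
    the L2-normalized embeddings; `views` is a list of (N, C, H, W) tensors."""
    feats = [F.normalize(target_encoder(v), dim=1) for v in views]
    mean_feat = torch.stack(feats, dim=0).mean(dim=0)
    return F.normalize(mean_feat, dim=1)  # re-normalize the averaged embedding
```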
## 2 Linear Classification
With a pre-trained model, to train a supervised linear classifier on frozen features/weights, run:
```
python main_lincls.py \
-a resnet50 \
--lars \
--dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \
--pretrained [your checkpoint path] \
[your imagenet-folder with train and val folders]
```
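The `--lars` flag relies on the LARC implementation from apex mentioned in the installation steps. As a rough sketch of how LARC wraps a standard SGD optimizer (the hyper-parameters here are illustrative, not the exact values used in `main_lincls.py`):

```
import torch
import torchvision.models as models
from apex.parallel.LARC import LARC  # or the copied LARC.py

model = models.resnet50()
# Base SGD optimizer; LARC adds layer-wise adaptive rate scaling on top of it.
base_optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                                 momentum=0.9, weight_decay=0.0)
optimizer = LARC(optimizer=base_optimizer, trust_coefficient=0.001, clip=False)
```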
Linear classification results on ImageNet using this repo with 8 NVIDIA V100 GPUs:

| Method | pre-train epochs | pre-train time | top-1 acc. (%) | model | md5 |
|---|---|---|---|---|---|
| Our MoCo | 100 | 23.6h | 65.8 | download | e82ede |
| MoCo +MultiCrop | 100 | 50.8h | 69.9 | download | 892916 |
| MoCo +ScaleMix | 100 | 30.7h | 67.6 | download | 3f5d79 |
| MoCo +AsymAug | 100 | 24.0h | 67.2 | download | d94e24 |
| MoCo +AsymBN | 100 | 23.8h | 66.3 | download | 2bf912 |
| MoCo +MeanEnc | 100 | 32.2h | 67.7 | download | 599801 |

### License
This project is under the CC-BY-NC 4.0 license. See [LICENSE](LICENSE) for details.