{"id":19932189,"url":"https://github.com/amazon-science/exponential-moving-average-normalization","last_synced_at":"2025-05-03T11:31:30.842Z","repository":{"id":139011590,"uuid":"369926761","full_name":"amazon-science/exponential-moving-average-normalization","owner":"amazon-science","description":"PyTorch implementation of EMAN for self-supervised and semi-supervised learning: https://arxiv.org/abs/2101.08482","archived":false,"fork":false,"pushed_at":"2021-06-19T00:51:40.000Z","size":73,"stargazers_count":103,"open_issues_count":2,"forks_count":13,"subscribers_count":10,"default_branch":"main","last_synced_at":"2025-04-07T15:04:45.347Z","etag":null,"topics":["computer-vision","normalization","self-supervised-learning","semi-supervised-learning"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/amazon-science.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2021-05-22T23:48:47.000Z","updated_at":"2025-03-13T02:02:50.000Z","dependencies_parsed_at":null,"dependency_job_id":"16fe01bf-e30e-4c41-a265-0f3a3d556449","html_url":"https://github.com/amazon-science/exponential-moving-average-normalization","commit_stats":null,"previous_names":["amazon-research/exponential-moving-average-normalization"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fexponential-moving-average-normalization","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fexponential-moving-average-normalization/tags","relea
ses_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fexponential-moving-average-normalization/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/amazon-science%2Fexponential-moving-average-normalization/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/amazon-science","download_url":"https://codeload.github.com/amazon-science/exponential-moving-average-normalization/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252184234,"owners_count":21707912,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","normalization","self-supervised-learning","semi-supervised-learning"],"created_at":"2024-11-12T23:09:20.510Z","updated_at":"2025-05-03T11:31:30.337Z","avatar_url":"https://github.com/amazon-science.png","language":"Python","readme":"## EMAN: Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning\n\nThis is a PyTorch implementation of the [EMAN paper](https://arxiv.org/abs/2101.08482). 
It supports three popular self-supervised and semi-supervised learning techniques: [MoCo](https://arxiv.org/abs/1911.05722), [BYOL](https://arxiv.org/abs/2006.07733), and [FixMatch](https://arxiv.org/abs/2001.07685).\n\nIf you use the code, models, or results of this repository, please cite:\n```\n@inproceedings{cai21eman,\n  author  = {Zhaowei Cai and Avinash Ravichandran and Subhransu Maji and Charless Fowlkes and Zhuowen Tu and Stefano Soatto},\n  title   = {Exponential Moving Average Normalization for Self-supervised and Semi-supervised Learning},\n  booktitle = {CVPR},\n  year  = {2021}\n}\n```\n\n\n### Install\n\nFirst, [install PyTorch](https://pytorch.org/get-started/locally/) and torchvision. We have tested with version 1.7.1, but other versions (e.g., 1.5.1) should also work.\n\n```bash\n$ conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 -c pytorch\n```\n\nThen install the other dependencies:\n\n```bash\n$ pip install pandas faiss-gpu\n```\n\n\n### Data Preparation\n\nSet up the ImageNet dataset following the [official PyTorch ImageNet training code](https://github.com/pytorch/examples/tree/master/imagenet), using the standard folder structure expected by the torchvision ``datasets.ImageFolder``. For the semi-supervised learning experiments, also download the ImageNet [index files](https://eman-cvpr.s3.amazonaws.com/imagenet_indexes.zip). 
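The training recipes in this README all rely on the EMAN rule from the paper: the teacher's BatchNorm statistics are updated with the same exponential moving average as its weights, instead of being recomputed from batches passed through the teacher. A minimal plain-Python sketch of that update (hypothetical names, not this repository's actual API):

```python
# Sketch of the EMAN update rule (hypothetical names, plain Python).
# Weights AND BatchNorm buffers (running_mean/running_var) are all
# updated by the same exponential moving average with momentum m.

def eman_update(student, teacher, m=0.999):
    # student/teacher: name -> value dicts covering weights and BN buffers.
    for name, s in student.items():
        teacher[name] = m * teacher[name] + (1.0 - m) * s
    return teacher

student = {'conv.weight': 1.0, 'bn.running_mean': 0.5, 'bn.running_var': 2.0}
teacher = {'conv.weight': 0.0, 'bn.running_mean': 0.0, 'bn.running_var': 1.0}
eman_update(student, teacher, m=0.9)
# teacher['bn.running_mean'] is now ~0.05: BN statistics track the student too.
```

In the actual code each value is a tensor and the loop runs over the teacher network's parameters and buffers, but the update rule is the same.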
The file structure should look like:\n\n  ```bash\n  $ tree imagenet\n  imagenet\n  ├── train\n  │   ├── class1\n  │   │   └── *.jpeg\n  │   ├── class2\n  │   │   └── *.jpeg\n  │   └── ...\n  ├── val\n  │   ├── class1\n  │   │   └── *.jpeg\n  │   ├── class2\n  │   │   └── *.jpeg\n  │   └── ...\n  └── indexes\n      └── *_index.csv\n  ```\n\n### Training\n\nTo do self-supervised pre-training of MoCo-v2 with EMAN for 200 epochs, run:\n```\npython main_moco.py \\\n  --arch MoCoEMAN --backbone resnet50_encoder \\\n  --epochs 200 --warmup-epoch 10 \\\n  --moco-t 0.2 --cos \\\n  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \\\n  /path/to/imagenet\n```\n\nTo do self-supervised pre-training of BYOL with EMAN for 200 epochs, run:\n```\npython main_byol.py \\\n  --arch BYOLEMAN --backbone resnet50_encoder \\\n  --lr 1.8 -b 512 --wd 0.000001 \\\n  --byol-m 0.98 \\\n  --epochs 200 --cos --warmup-epoch 10 \\\n  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \\\n  /path/to/imagenet\n```\n\nTo do semi-supervised training of FixMatch with EMAN for 100 epochs, run:\n```\npython main_fixmatch.py \\\n  --arch FixMatch --backbone resnet50_encoder \\\n  --eman \\\n  --lr 0.03 \\\n  --epochs 100 --schedule 60 80 \\\n  --warmup-epoch 5 \\\n  --trainindex_x train_10p_index.csv --trainindex_u train_90p_index.csv \\\n  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \\\n  /path/to/imagenet\n```\n\n### Linear Classification and Finetuning\n\nTo train a supervised linear classifier on the frozen features of a pre-trained model (e.g. 
MoCo) on 10% of ImageNet, run:\n```\npython main_lincls.py \\\n  -a resnet50 \\\n  --lr 30.0 \\\n  --epochs 50 --schedule 30 40 \\\n  --eval-freq 5 \\\n  --trainindex train_10p_index.csv \\\n  --model-prefix encoder_q \\\n  --pretrained /path/to/model_best.pth.tar \\\n  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \\\n  /path/to/imagenet\n```\n\nTo finetune the self-supervised pre-trained model on 10% of ImageNet, with different learning rates for the pre-trained backbone and the final classification layer, run:\n```\npython main_cls.py \\\n  -a resnet50 \\\n  --lr 0.001 --lr-classifier 0.1 \\\n  --epochs 50 --schedule 30 40 \\\n  --eval-freq 5 \\\n  --trainindex train_10p_index.csv \\\n  --model-prefix encoder_q \\\n  --self-pretrained /path/to/model_best.pth.tar \\\n  --dist-url 'tcp://localhost:10001' --multiprocessing-distributed --world-size 1 --rank 0 \\\n  /path/to/imagenet\n```\n\nFor BYOL, change to ``--model-prefix online_net.backbone``. For the best performance, follow the learning rate settings in Section 5.2 of the paper.\n\n\n### Models\n\nOur pre-trained ResNet-50 models can be downloaded as follows:\n\n| name | epochs | acc@1% IN | acc@10% IN | acc@100% IN | model |\n| :---: | :---: | :---: | :---: | :---: | :---: |\n| MoCo-EMAN | 200 | 48.9 | 60.5 | 67.7 | [download](https://eman-cvpr.s3.amazonaws.com/models/res50_moco_eman_200ep.pth.tar) |\n| MoCo-EMAN | 800 | 55.4 | 64.0 | 70.1 | [download](https://eman-cvpr.s3.amazonaws.com/models/res50_moco_eman_800ep.pth.tar) |\n| MoCo-2X-EMAN | 200 | 56.8 | 65.7 | 72.3 | [download](https://eman-cvpr.s3.amazonaws.com/models/res50_2x_moco_eman_200ep.pth.tar) |\n| BYOL-EMAN | 200 | 55.1 | 66.7 | 72.2 | [download](https://eman-cvpr.s3.amazonaws.com/models/res50_byol_eman_200ep.pth.tar) |\n\n### License\n\nThis project is under the CC-BY-NC 4.0 license. 
See [LICENSE](LICENSE) for details.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Famazon-science%2Fexponential-moving-average-normalization","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Famazon-science%2Fexponential-moving-average-normalization","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Famazon-science%2Fexponential-moving-average-normalization/lists"}