https://github.com/zeyademam/active_learning
Code for Active Learning at The ImageNet Scale. This repository implements many popular active learning algorithms and allows training with torch's DDP.
- Host: GitHub
- URL: https://github.com/zeyademam/active_learning
- Owner: zeyademam
- License: mit
- Created: 2021-11-25T00:44:08.000Z (over 3 years ago)
- Default Branch: master
- Last Pushed: 2021-11-29T02:37:39.000Z (over 3 years ago)
- Last Synced: 2024-11-15T07:34:05.494Z (6 months ago)
- Topics: active-learning, distributed-data-parallel, pytorch-implementation
- Language: Python
- Homepage:
- Size: 790 KB
- Stars: 52
- Watchers: 4
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Active Learning at the ImageNet Scale
This repo contains code for the following paper:
```
@misc{emam2021active,
title={Active Learning at the ImageNet Scale},
author={Zeyad Ali Sami Emam and Hong-Min Chu and Ping-Yeh Chiang and Wojciech Czaja and Richard Leapman and Micah Goldblum and Tom Goldstein},
year={2021},
eprint={2111.12880},
archivePrefix={arXiv},
primaryClass={cs.CV}}
```
Please cite our work if you use this code.
## Requirements
`pip install -r requirements.txt`
## Comet and Logging
This project uses Comet ML to log all experiments. You must
install [comet_ml](https://www.comet.ml) (it is included in requirements.txt); however, the code does not
require a Comet ML account, and Comet logging can be left disabled entirely. If you choose to use
Comet ML, place your API key in `~/.comet.config` in your home directory
(see the Comet ML documentation for details) and pass the flag `--enable_comet`.

Logs and network weights are stored according to the command line arguments `--log_dir`
and `--ckpt_path`.
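For reference, a minimal `~/.comet.config` follows Comet's documented INI-style configuration format; the value below is only a placeholder, not a real credential:

```
# ~/.comet.config (placeholder value; see the Comet ML documentation)
[comet]
api_key = YOUR_COMET_API_KEY
```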
## Loading SSP checkpoints
Self-supervised pretrained checkpoints must be obtained separately and specified
in `./src/arg_pools` for each arg pool, under the key `"init_pretrained_ckpt_path"` (a sketch of such an entry follows the links below).
To access the checkpoints used in our experiments, please use the following links:
- [ResNet-18 checkpoint for CIFAR-10](https://drive.google.com/file/d/1jN0A9SDj_bvwyDGPwvPPvcc-iIpfdEJf/view?usp=sharing)
- [ResNet-18 checkpoint for imbalanced CIFAR-10](https://drive.google.com/file/d/1QzJV0C4kkGqXNPkn6ifySEFKueKBiwXi/view?usp=sharing)
- [ResNet-50 checkpoint for ImageNet](https://drive.google.com/file/d/17px0_0syO3QNmuQGlLQuw00rvSG3ypuH/view?usp=sharing)
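The layout of the files in `./src/arg_pools` is not spelled out here, so the snippet below is only a hypothetical illustration of where a downloaded checkpoint path would go, assuming an arg pool is expressed as a Python dictionary of arguments; consult the actual files in the repository for the real structure:

```
# Hypothetical sketch only; the real files under ./src/arg_pools may be structured
# differently. Only the key name "init_pretrained_ckpt_path" comes from this README,
# and the path below is a placeholder for a downloaded checkpoint.
ssp_linear_evaluation = {
    # ... other experiment arguments ...
    "init_pretrained_ckpt_path": "/path/to/ssl_resnet50_imagenet_checkpoint.pth",
}
```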
## Sample Commands to Reproduce the Results in the Paper
Each ImageNet experiment was conducted on a cluster node with a single V100-SXM2 GPU (32 GB VRAM),
64 GB of RAM, and sixteen 2.3 GHz Intel Xeon Gold 6140 CPUs. If more than one GPU is available on the node,
the code will automatically distribute batches across all GPUs using DistributedDataParallel
training.
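The repository handles this multi-GPU setup automatically. Purely to illustrate the general DistributedDataParallel pattern being referred to, and not this repository's actual training code, a minimal generic sketch (assuming a `torchrun` launch) might look like:

```
# Generic PyTorch DistributedDataParallel pattern, shown only to illustrate what
# "distribute batches across all GPUs" means; this is NOT the repository's training
# loop. Assumes a launch such as: torchrun --nproc_per_node=<num_gpus> this_script.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset


def main():
    # One process per GPU; torchrun sets LOCAL_RANK and the rendezvous env vars.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(10, 2).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])      # wraps model; syncs gradients

    data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
    sampler = DistributedSampler(data)               # shards the dataset per process
    loader = DataLoader(data, batch_size=8, sampler=sampler)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = torch.nn.CrossEntropyLoss()
    for epoch in range(2):
        sampler.set_epoch(epoch)                     # reshuffle shards each epoch
        for x, y in loader:
            x, y = x.cuda(local_rank), y.cuda(local_rank)
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()          # gradients all-reduced across GPUs
            optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```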
Below is a sample command for running an experiment. The full list of command line arguments can be
found in `src/utils/parser.py`.

```
python main_al.py --dataset_dir --exp_name RandomSampler_arg_ssp_linear_evaluation_imagenet_b10000 --dataset imagenet --arg_pool ssp_linear_evaluation --model SSLResNet50 --strategy RandomSampler --rounds 8 --round_budget 10000 --init_pool_size 30000 --subset_labeled 50000 --subset_unlabeled 80000 --freeze_feature --partitions 10 --init_pool_type random
```

The full list of commands to reproduce all plots in the paper can be obtained by
running `python src/gen_jobs.py`.