Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/rahulvigneswaran/Class-Balanced-Distillation-for-Long-Tailed-Visual-Recognition.pytorch
Unofficial PyTorch implementation of the "Class-Balanced Distillation for Long-Tailed Visual Recognition" paper.
- Host: GitHub
- URL: https://github.com/rahulvigneswaran/Class-Balanced-Distillation-for-Long-Tailed-Visual-Recognition.pytorch
- Owner: rahulvigneswaran
- Created: 2021-06-08T19:44:33.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2021-10-31T17:38:22.000Z (about 3 years ago)
- Last Synced: 2024-08-02T15:36:37.942Z (3 months ago)
- Language: Python
- Size: 779 KB
- Stars: 16
- Watchers: 2
- Forks: 4
- Open Issues: 0
Metadata Files:
- Readme: Readme.md
README
# PyTorch Implementation of [Class-Balanced Distillation for Long-Tailed Visual Recognition](https://arxiv.org/abs/2104.05279) by [Ahmet Iscen](https://cmp.felk.cvut.cz/~iscenahm/), André Araujo, Boqing Gong, Cordelia Schmid
---
### Note:
- Implemented only for ImageNet-LT
- `normal_teachers` is the `Standard model` from the paper
- `aug_teachers` is the `Data Augmentation model` from the paper

## Things to do before you run:
- Change the `data_root` for your dataset in `main.py`.
- If you are using wandb logging ([Weights & Biases](https://docs.wandb.ai/quickstart)), make sure to change the `wandb.init` call in `main.py` accordingly (see the sketch below).
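For reference, here is a minimal sketch of what that `wandb.init` call might look like; the project, entity, and run names below are placeholders, not the repo's actual values:

```
import wandb

# Placeholder project/entity -- replace with your own W&B account details.
wandb.init(
    project="class-balanced-distillation",  # your W&B project name
    entity="your-username",                 # your W&B user or team
    name="cbd_imagenet_lt_seed1",           # human-readable run name
    config={"alpha": 0.4, "beta": 100, "seed": 1},  # hyperparameters to track
)
```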
## How to use?
- Easiest way to get started: check the script `multi_runs.sh`
- Train the normal teachers:
```
python main.py --experiment=0.1 --seed=1 --gpu="0,1" --train --log_offline
```
- Train the augmentation teachers:
```
python main.py --experiment=0.2 --seed=1 --gpu="0,1" --train --log_offline
```
- Train the Class-Balanced Distilled Student (a sketch of the combined loss follows the command):
```
python main.py --experiment=0.3 --alpha=0.4 --beta=100 --seed=1 --gpu="0,1" --train --log_offline --normal_teacher="10,20" --aug_teacher="20,30"
```
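To make `--alpha` and `--beta` (described under Arguments below) concrete, here is a minimal PyTorch sketch of the combined objective: a classifier loss plus a feature-distillation term that pulls the student's features toward the concatenated, L2-normalized features of the teacher ensemble via cosine similarity, as in the paper. The function name, variable names, and the exact alpha-weighting are illustrative assumptions, not the repo's actual API:

```
import torch
import torch.nn.functional as F

def cbd_loss(student_logits, student_feats, teacher_feats_list, targets,
             alpha=0.4, beta=100.0):
    # Assumes the student's feature dimension equals the sum of the
    # teachers' feature dimensions (e.g., via a projection layer).
    cls_loss = F.cross_entropy(student_logits, targets)  # classification term

    # Ensemble target: concatenate each teacher's L2-normalized features.
    teacher_feats = torch.cat(
        [F.normalize(t, dim=1) for t in teacher_feats_list], dim=1)

    # Feature distillation via cosine similarity, scaled by beta.
    distill_loss = beta * (
        1.0 - F.cosine_similarity(student_feats, teacher_feats, dim=1)
    ).mean()

    # One plausible alpha-weighting; see the repo/paper for the exact form.
    return (1 - alpha) * cls_loss + alpha * distill_loss
```

With `--alpha=0.4 --beta=100` as in the command above, roughly 40% of the weight would fall on the distillation term under this illustrative weighting.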
### Arguments:
(General)
- `--seed`: Seed of your current run
- `--gpu`: GPUs to be used
- `--experiment`: Experiment number (Check `libs/utils/experiment_maker.py` for more details)
- `--wandb_logger`: Enables wandb logging
- `--log_offline`: Enables offline logging
- `--resume`: Resumes training if a run crashes (specific to distillation and the student's training)
- `--alpha`: Weighting between the classifier loss and the distillation loss
- `--beta`: Weighting for the cosine similarity between the teachers and the student
- `--normal_teachers`: Seeds of the normal teachers to use. To use only augmentation teachers, simply omit this argument; it is `None` by default.
- `--aug_teachers`: Seeds of the augmentation teachers to use. To use only normal teachers, simply omit this argument; it is `None` by default (see the sketch below for how seed lists might map to checkpoints).
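For intuition about the seed-list arguments above, here is a hypothetical sketch of how comma-separated seed strings could be parsed and mapped to teacher checkpoints; the helper name and checkpoint path pattern are invented for illustration and are not the repo's actual layout:

```
import torch

def load_teachers(normal_seeds=None, aug_seeds=None):
    # Parse comma-separated seed strings (e.g. "10,20") and load one
    # teacher checkpoint per seed. Omitting a flag skips that teacher type.
    teachers = []
    for kind, seeds in [("normal", normal_seeds), ("aug", aug_seeds)]:
        if seeds is None:
            continue
        for seed in seeds.split(","):
            # Hypothetical checkpoint layout, not the repo's actual one.
            ckpt = f"logs/{kind}_teacher_seed{int(seed)}.pt"
            teachers.append(torch.load(ckpt, map_location="cpu"))
    return teachers

# e.g. --normal_teacher="10,20" --aug_teacher="20,30" loads four teachers
teachers = load_teachers(normal_seeds="10,20", aug_seeds="20,30")
```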
## Raise an issue:
If something is not clear or you found a bug, raise an issue!