Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/fistyee/MixPro
🔥 MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer [Official, ICLR 2023]
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/fistyee/MixPro
- Owner: fistyee
- License: mit
- Created: 2023-04-15T04:15:54.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-11-03T06:19:25.000Z (over 1 year ago)
- Last Synced: 2024-08-03T23:13:38.737Z (6 months ago)
- Language: Python
- Homepage:
- Size: 4.34 MB
- Stars: 18
- Watchers: 2
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-Mixup
README
## MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer [Official, ICLR 2023, [paper](https://arxiv.org/pdf/2304.12043.pdf)] 🔥
### [Qihao Zhao](https://scholar.google.com/citations?hl=zh-CN&user=sECb19EAAAAJ)¹, [Yangyu Huang](https://scholar.google.com/citations?hl=zh-CN&user=ycNodL0AAAAJ)², [Wei Hu](https://scholar.google.com/citations?user=ACJickwAAAAJ&hl=zh-CN)¹, [Fan Zhang](https://scholar.google.com/citations?user=CujOi1kAAAAJ&hl=zh-CN)¹, [Jun Liu](https://scholar.google.com/citations?hl=zh-CN&user=Q5Ild8UAAAAJ)³

¹ Beijing University of Chemical Technology
² Microsoft Research Asia
³ Singapore University of Technology and Design
![MixPro](./fig.png)
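The figure above illustrates the pipeline. As a rough, unofficial illustration of the MaskMix idea named in the title, a patch-aligned, CutMix-style mix of two images whose label weight follows the masked area, here is a minimal sketch; the 56x56 cell size, the beta sampling, and the helper name are assumptions for illustration, not the repository's implementation:

```python
# Minimal, unofficial sketch of patch-aligned mask mixing in the spirit of
# MaskMix. The cell size, beta sampling, and helper name are illustrative
# assumptions, not the repository's implementation.
import numpy as np

def maskmix(img_a, img_b, mask_cell=56, alpha=1.0, rng=np.random):
    """Mix two HxWxC images with a random binary mask aligned to
    mask_cell x mask_cell cells; also return the effective label weight."""
    h, w = img_a.shape[:2]
    gh, gw = h // mask_cell, w // mask_cell
    lam = rng.beta(alpha, alpha)            # target fraction taken from img_b
    cells = rng.random((gh, gw)) < lam      # one keep/replace decision per cell
    mask = np.kron(cells.astype(np.uint8),
                   np.ones((mask_cell, mask_cell), np.uint8)).astype(bool)
    mixed = np.where(mask[..., None], img_b, img_a)
    lam_eff = mask.mean()                   # actual mixed-area ratio for the label
    return mixed, lam_eff
```

The mixed label would then be `lam_eff * y_b + (1 - lam_eff) * y_a`; the progressive attention labeling step, which re-weights this target using the model's attention, is not sketched here.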
### Citation
If you find our work inspiring or use our codebase in your research, please consider giving a star ⭐️ and a citation.

```
@inproceedings{
zhao2023mixpro,
title={MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer},
author={Qihao Zhao and Yangyu Huang and Wei Hu and Fan Zhang and Jun Liu},
booktitle={The Eleventh International Conference on Learning Representations},
year={2023},
url={https://openreview.net/forum?id=dRjWsd3gwsm}
}
```

## Install
We recommend using the PyTorch docker `nvcr>=21.05` provided by NVIDIA: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch.

- Create a conda virtual environment and activate it:
```bash
conda create -n mixpro python=3.7 -y
conda activate mixpro
```

- Install `CUDA>=10.2` with `cudnn>=7` following
the [official installation instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html)
- Install `PyTorch>=1.8.0` and `torchvision>=0.9.0` with `CUDA>=10.2`:

```bash
conda install pytorch==1.8.0 torchvision==0.9.0 cudatoolkit=10.2 -c pytorch
```

- Install `timm==0.4.12`:
```bash
pip install timm==0.4.12
```

- Install other requirements:
```bash
pip install opencv-python==4.4.0.46 termcolor==1.1.0 yacs==0.1.8
```

### Data preparation
For the ImageNet-1K dataset, you can download it from http://image-net.org/. We provide the following two ways to load data:

- For the standard folder dataset, move validation images into labeled sub-folders. The file structure should look like:
```bash
$ tree data
imagenet
├── train
│   ├── class1
│   │   ├── img1.jpeg
│   │   ├── img2.jpeg
│   │   └── ...
│   ├── class2
│   │   ├── img3.jpeg
│   │   └── ...
│   └── ...
└── val
    ├── class1
    │   ├── img4.jpeg
    │   ├── img5.jpeg
    │   └── ...
    ├── class2
    │   ├── img6.jpeg
    │   └── ...
    └── ...
```
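As a quick sanity check of this layout, here is a small stdlib-only helper (illustrative, not part of the repository) that verifies every class folder under `train` also exists under `val`:

```python
# Illustrative helper (not part of this repository): verify that a folder
# dataset laid out as above has matching class sub-folders in train/ and val/.
from pathlib import Path

def check_imagenet_layout(root):
    """Return (train class names, class names missing from val)."""
    root = Path(root)
    train_classes = {p.name for p in (root / "train").iterdir() if p.is_dir()}
    val_classes = {p.name for p in (root / "val").iterdir() if p.is_dir()}
    return train_classes, train_classes - val_classes
```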
- To boost the slow speed when reading images from massive small files, we also support zipped ImageNet, which includes four files:
  - `train.zip`, `val.zip`: the zipped folders for the train and validation splits.
  - `train_map.txt`, `val_map.txt`: the relative path in the corresponding zip file and the ground-truth label for each image.

  Make sure the data folder looks like this:

```bash
$ tree data
data
└── ImageNet-Zip
    ├── train_map.txt
    ├── train.zip
    ├── val_map.txt
    └── val.zip

$ head -n 5 data/ImageNet-Zip/val_map.txt
ILSVRC2012_val_00000001.JPEG 65
ILSVRC2012_val_00000002.JPEG 970
ILSVRC2012_val_00000003.JPEG 230
ILSVRC2012_val_00000004.JPEG 809
ILSVRC2012_val_00000005.JPEG 516

$ head -n 5 data/ImageNet-Zip/train_map.txt
n01440764/n01440764_10026.JPEG 0
n01440764/n01440764_10027.JPEG 0
n01440764/n01440764_10029.JPEG 0
n01440764/n01440764_10040.JPEG 0
n01440764/n01440764_10042.JPEG 0
```
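Each map-file line is simply a relative path followed by an integer label, so loading it reduces to a one-line split. A minimal sketch (the helper name is an assumption for illustration, not the repository's loader):

```python
# Illustrative parser for the map files shown above; each line is
# "<relative-path-in-zip> <integer-label>". Not the repository's loader.
def parse_map_file(text):
    samples = []
    for line in text.strip().splitlines():
        path, label = line.rsplit(" ", 1)   # split on the last space only
        samples.append((path, int(label)))
    return samples

val_map = """ILSVRC2012_val_00000001.JPEG 65
ILSVRC2012_val_00000002.JPEG 970"""
print(parse_map_file(val_map)[0])  # -> ('ILSVRC2012_val_00000001.JPEG', 65)
```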
### Training from scratch on ImageNet-1K

To train MixPro with a `Vision Transformer` backbone on ImageNet from scratch, run:
```bash
python -m torch.distributed.launch --nproc_per_node <num-of-gpus> --master_port 12345 main.py \
--cfg <config-file> --data-path <imagenet-path> [--batch-size <batch-size-per-gpu> --output <output-directory> --tag <job-tag>]
```

**Notes**:
- To use zipped ImageNet instead of folder dataset, add `--zip` to the parameters.
- To cache the dataset in memory instead of reading from files every time, add `--cache-mode part`, which will shard the dataset into non-overlapping pieces across GPUs, with each GPU loading only its own shard.
- When GPU memory is not enough, you can try the following suggestions:
- Use gradient accumulation by adding `--accumulation-steps <steps>`; set an appropriate value for `<steps>` according to your needs.
- Use gradient checkpointing by adding `--use-checkpoint`, e.g., it saves about 60% memory when training `DeiT-B`.
Please refer to [this page](https://pytorch.org/docs/stable/checkpoint.html) for more details.
- We recommend using multiple nodes with more GPUs for training very large models; a tutorial can be found on [this page](https://pytorch.org/tutorials/intermediate/dist_tuto.html).
- To change config options in general, you can use `--opts KEY1 VALUE1 KEY2 VALUE2`, e.g.,
`--opts TRAIN.EPOCHS 100 TRAIN.WARMUP_EPOCHS 5` will change total epochs to 100 and warm-up epochs to 5.
- For additional options, see [config](config.py) and run `python main.py --help` for detailed messages.

For example, to train `MixPro DeiT` models with 8 GPUs on a single node for 300 epochs, run:
`MixPro DeiT-T`:
```bash
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 main.py \
--cfg configs/deit/deit_tiny_patch14_mask56_224_alpha1.yaml --data-path <imagenet-path> --batch-size 128
```

`MixPro DeiT-S`:
```bash
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 main.py \
--cfg configs/deit/deit_small_patch14_mask56_224_alpha1.yaml --data-path <imagenet-path> --batch-size 128
```

`MixPro DeiT-B`:
```bash
python -m torch.distributed.launch --nproc_per_node 8 --master_port 12345 main.py \
--cfg configs/deit_base_patch14_mask112_224_alpha1_attn_all.yaml --data-path <imagenet-path> --batch-size 64
```
### Config
```python
# In config.py
# Probability of switching to MixPro when both Mixup and MixPro are enabled
_C.AUG.MASKMIX_PROB = 0.5
# MaskMix alpha; MaskMix is enabled if > 0
_C.AUG.MASKMIX_ALPHA = 1.0
# Progressive Attention Labeling (PAL)
_C.AUG.PAL_ATTN = True
```

### Acknowledgements

The project is based on [Swin](https://github.com/microsoft/Swin-Transformer) and [ViT (unofficial)](https://github.com/lucidrains/vit-pytorch).

### License

The project is released under the MIT License.