https://github.com/Bai-YT/AdaptiveSmoothing
Implementation of the paper "Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing".
- Host: GitHub
- URL: https://github.com/Bai-YT/AdaptiveSmoothing
- Owner: Bai-YT
- License: MIT
- Created: 2023-01-26T00:25:55.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-02-06T20:36:56.000Z (over 1 year ago)
- Last Synced: 2024-11-05T10:44:38.528Z (7 months ago)
- Topics: adversarial-attacks, adversarial-defense, adversarial-machine-learning, adversarial-robustness, robust-machine-learning
- Language: Jupyter Notebook
- Homepage: https://arxiv.org/abs/2301.12554
- Size: 1.6 MB
- Stars: 10
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: readme.md
- License: LICENSE
README
## Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing
This repository is the official code base for the paper [Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing](https://arxiv.org/abs/2301.12554).
We publicly share one CIFAR-10 model and two CIFAR-100 models that aim to defend against the $\ell_\infty$ attack. Each proposed model relies on an accurate base classifier, a robust base classifier, and an optional "mixing network". The two CIFAR-100 models share the same accurate base classifier but use different robust base models and mixing networks. The results are the following:
| Model | Clean Accuracy | $\ell_\infty$ AutoAttacked Accuracy ($\epsilon = 8/255$) |
|-------------------|----------------|----------------------------------------------------------|
| CIFAR-10 | 95.23 % | 68.06 % |
| CIFAR-100 Model 1 | 85.21 % | 38.72 % |
| CIFAR-100 Model 2 | 80.18 % | 35.15 % |

These results are also verified and listed on [RobustBench](https://robustbench.github.io).
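The mixing mechanism can be illustrated with a minimal PyTorch sketch. This is a simplified illustration of the convex-combination idea, not the repository's exact architecture; `AdaptivelySmoothedClassifier` and all names in it are hypothetical:

```python
import torch
import torch.nn as nn

class AdaptivelySmoothedClassifier(nn.Module):
    """Combines an accurate and a robust classifier with a per-input
    mixing weight alpha(x), trading off clean accuracy and robustness."""

    def __init__(self, accurate_model: nn.Module, robust_model: nn.Module,
                 mixing_network: nn.Module):
        super().__init__()
        self.accurate = accurate_model  # e.g., a BiT ResNet-152
        self.robust = robust_model      # e.g., an adversarially trained WRN-70-16
        self.mixer = mixing_network     # predicts one mixing weight per input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out_acc = self.accurate(x)            # outputs of the accurate model
        out_rob = self.robust(x)              # outputs of the robust model
        alpha = torch.sigmoid(self.mixer(x))  # shape (N, 1), values in (0, 1)
        # alpha near 0 favors the accurate model; alpha near 1 favors the robust one.
        return (1 - alpha) * out_acc + alpha * out_rob
```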
#### Citing our work (BibTeX)
```bibtex
@article{bai2023improving,
title={Improving the Accuracy-Robustness Trade-off of Classifiers via Adaptive Smoothing},
author={Bai, Yatong and Anderson, Brendon G and Kim, Aerin and Sojoudi, Somayeh},
journal={arXiv preprint arXiv:2301.12554},
year={2023}
}
```

### Running RobustBench to replicate the results
Running the [RobustBench](https://github.com/RobustBench/robustbench) benchmark should only require the `pytorch`, `torchvision`, `numpy`, `click`, and `robustbench` packages.
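For a quick sanity check outside of `run_robustbench.py`, the `robustbench` package also exposes a `benchmark` helper that runs AutoAttack directly. A minimal sketch, assuming `model` holds the assembled composite classifier:

```python
import torch
from robustbench.eval import benchmark

model = ...  # assumed: the composite classifier assembled from the checkpoints below
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Evaluates clean accuracy and l-infinity AutoAttack accuracy at eps = 8/255,
# matching the setting reported in the table above.
clean_acc, robust_acc = benchmark(
    model.to(device).eval(),
    dataset="cifar100",   # or "cifar10"
    threat_model="Linf",
    eps=8 / 255,
    n_examples=1000,      # subsample for a quick check; 10000 covers the full test set
    batch_size=100,
    device=device,
)
print(f"clean: {clean_acc:.2%}, robust: {robust_acc:.2%}")
```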
Make a directory at a desired path to store the model checkpoints; it is referred to below as the checkpoint directory. Then, download the following models:
- Accurate base classifier: [Big Transfer (BiT)](https://github.com/google-research/big_transfer) ResNet-152 model finetuned on CIFAR-100 -- [download](http://172.233.227.28/base_models/cifar100/cifar100_std_rn152.pt).
- Robust base classifier 1: WideResNet-70-16 model from [this repo](https://github.com/wzekai99/DM-Improves-AT) -- [download](https://huggingface.co/wzekai99/DM-Improves-AT/resolve/main/checkpoint/cifar100_linf_wrn70-16.pt) and rename as `cifar100_linf_edm_wrn70-16.pt`.
  - This model was trained on additional images generated by an EDM diffusion model.
- Robust base classifier 2: WideResNet-70-16 model from [this repo](https://github.com/deepmind/deepmind-research/tree/master/adversarial_robustness) -- [download](https://storage.googleapis.com/dm-adversarial-robustness/cifar100_linf_wrn70-16_with.pt) and rename as `cifar100_linf_trades_wrn70-16.pt`.
- Mixing network to be coupled with robust base classifier 1 -- [download](https://drive.google.com/uc?export=download&id=15FHXj7lmAgKT4Miu6S1CONufFtAwlWyT).
- Mixing network to be coupled with robust base classifier 2 -- [download](https://drive.google.com/uc?export=download&id=1_Lh0XLlo3mX0B9o2jGebG8L6h_NpwFea).

**Edited on August 3, 2023:**
**We have added a CIFAR-10 model to our results.**
- The accurate base classifier is a [Big Transfer (BiT)](https://github.com/google-research/big_transfer) ResNet-152 model finetuned on CIFAR-10 -- [download](http://172.233.227.28/base_models/cifar10/cifar10_std_rn152.pt).
- The robust base classifier is a WideResNet-70-16 model from [this repo](https://github.com/wzekai99/DM-Improves-AT) -- [download](https://huggingface.co/wzekai99/DM-Improves-AT/resolve/main/checkpoint/cifar10_linf_wrn70-16.pt) and rename as `cifar10_linf_edm_wrn70-16.pt`.
- The corresponding mixing network -- [download](https://drive.google.com/uc?export=download&id=1SE19EHy6WFDqpNs2_exQ9iotV2sF0CZ9).

Now, organize the checkpoint directory following the structure below:
```
<checkpoint_dir>
│
└───Base
│ │ cifar100_linf_edm_wrn70-16.pt
│ │ cifar100_linf_trades_wrn70-16.pt
│ │ cifar10_linf_edm_wrn70-16.pt
│ │ cifar100_bit_rn152.tar
│ │ cifar10_bit_rn152.tar
│
└───CompModel
│ cifar100_edm_best.pt
│ cifar100_trades_best.pt
│ cifar10_edm_best.pt
```

To benchmark existing models with RobustBench, run the following:
```
python run_robustbench.py --root_dir <checkpoint_dir> --dataset {cifar10,cifar100} --model_name {edm,trades}
```
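For example, to benchmark CIFAR-100 Model 1 (the EDM-based robust classifier), assuming the checkpoints were organized under a hypothetical `./models` directory:

```
python run_robustbench.py --root_dir ./models --dataset cifar100 --model_name edm
```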
Note that while the base classifiers may require additional (collected or synthesized) training data, the provided mixing networks were only trained on CIFAR training data.

### Training a new model
To train a new model with the provided code, install the full environment. We require the following packages: `pytorch torchvision tensorboard pytorch_warmup numpy scipy matplotlib jupyter notebook ipykernel ipywidgets tqdm click PyYAML`.
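A typical way to install these with pip might look as follows; note that the PyPI names can differ from the names above (e.g., `torch` for `pytorch` and `pytorch-warmup` for `pytorch_warmup`):

```
pip install torch torchvision tensorboard pytorch-warmup numpy scipy matplotlib jupyter notebook ipykernel ipywidgets tqdm click PyYAML
```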
To train, run the following:
```
python run.py --training --config configs/xxx.yaml
```

To evaluate, run the following:
```
python run.py --eval --config configs/xxx.yaml
```