# AdvFlow
*Hadi M. Dolatabadi, Sarah Erfani, and Christopher Leckie 2020*
[![arXiv](http://img.shields.io/badge/arXiv-2007.07435-B31B1B.svg)](https://arxiv.org/abs/2007.07435)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

This is the official implementation of the NeurIPS 2020 paper [_AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows_](https://arxiv.org/abs/2007.07435).
A small part of this work, the Greedy AdvFlow, has been published in [ICML Workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models](https://invertibleworkshop.github.io/accepted_papers/pdfs/36.pdf). A blog post explaining our approach can be found [here](https://hmdolatabadi.github.io/posts/2020/10/advflow/).
## Requirements
To install requirements:
```setup
pip install -r requirements.txt
```

## Training Normalizing Flows
To train a flow-based model, first set `mode = 'pre_training'` and specify all relevant variables in `config.py`. Once specified, run this command:
```train
python train.py
```
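Under the hood, the flow models are built with [FrEIA](https://github.com/VLL-HD/FrEIA) (see the Acknowledgement section below). As a hedged illustration only, and not the architecture used in this repository, here is how a small invertible network can be assembled with a recent FrEIA release:

```python
import torch
import torch.nn as nn
import FrEIA.framework as Ff
import FrEIA.modules as Fm

# Subnetwork used inside each coupling block.
def subnet_fc(dims_in, dims_out):
    return nn.Sequential(nn.Linear(dims_in, 512), nn.ReLU(),
                         nn.Linear(512, dims_out))

# A toy fully connected normalizing flow over 2-dimensional data.
inn = Ff.SequenceINN(2)
for _ in range(8):
    inn.append(Fm.AllInOneBlock, subnet_constructor=subnet_fc, permute_soft=True)

x = torch.randn(16, 2)
z, log_jac_det = inn(x)       # forward pass: data -> latent, with log|det J|
x_rec, _ = inn(z, rev=True)   # inverse pass: latent -> data
```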
## Attack Evaluation

To perform the AdvFlow black-box adversarial attack, first set `mode = 'attack'` in `config.py`.
Also, specify the dataset, target model architecture, and weight path by setting the `dataset`, `target_arch`, and `target_weight_path` variables in `config.py`, respectively (a sketch of these settings is given after the command below). Once specified, run the following for CIFAR-10, SVHN, and CelebA:

```eval
python attack.py
```
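As a rough sketch of the attack-mode settings (only the variable names above are taken from this README; every value shown is an illustrative assumption, so check `config.py` for the options it actually supports):

```python
# config.py -- illustrative values only; consult the repository's config.py for the real options.
mode = 'attack'                               # 'pre_training' trains the flow; 'attack' runs AdvFlow
dataset = 'cifar10'                           # assumed identifier (CIFAR-10, SVHN, CelebA, ...)
target_arch = 'resnet'                        # assumed name of the target classifier architecture
target_weight_path = './weights/target.pth'   # assumed path to the target classifier checkpoint
```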
For ImageNet, however, you need to run:

```eval
python attack_imagenet.py
```

Finally, you can run Greedy AdvFlow with:
```eval
python attack_greedy.py
```

## Pre-trained Models
Pre-trained flow-based models as well as some target classifiers can be found [here](https://drive.google.com/file/d/18J8eh-KLaPq9vUe_TwhuQMBW4WKBVX0L/view?usp=sharing).
## Results
### Fooling Adversarial Example Detectors
The primary assumption of adversarial example detectors is that the adversaries come from a different distribution than the data.
Here, we attack CIFAR-10 and SVHN classifiers defended by well-known adversarial example detectors, and show that the adversaries generated by our model mislead these detectors more often than those of the closely related NATTACK method. This suggests that we have come up with adversaries whose distribution is similar to that of the data.
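For reference, the two metrics in the table below can be computed as in the following generic sketch (not code from this repository; it assumes scikit-learn and hypothetical detector scores `scores_clean` and `scores_adv`):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical detector scores: higher means "more likely adversarial".
scores_clean = np.array([0.05, 0.20, 0.10, 0.30])  # scores on clean test inputs
scores_adv   = np.array([0.40, 0.80, 0.25, 0.90])  # scores on adversarial inputs

labels = np.concatenate([np.zeros_like(scores_clean), np.ones_like(scores_adv)])
scores = np.concatenate([scores_clean, scores_adv])

# AUROC: threshold-independent separability of clean vs. adversarial scores.
auroc = roc_auc_score(labels, scores)

# Detection accuracy at a fixed threshold (0.5 here, purely illustrative).
preds = (scores >= 0.5).astype(float)
detection_acc = (preds == labels).mean()

print(f"AUROC: {auroc:.4f}, detection accuracy: {detection_acc:.4f}")
```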
Table: Area under the receiver operating characteristic curve (AUROC) and accuracy of detecting adversarial examples generated by NATTACK and AdvFlow (un. for un-trained and tr. for pre-trained NF) using the LID, Mahalanobis, and Res-Flow adversarial attack detectors.

| Data | Detector | AUROC (%) 𝒩Attack | AUROC (%) AdvFlow (un.) | AUROC (%) AdvFlow (tr.) | Detection Acc. (%) 𝒩Attack | Detection Acc. (%) AdvFlow (un.) | Detection Acc. (%) AdvFlow (tr.) |
|---|---|---|---|---|---|---|---|
| CIFAR-10 | LID | 78.69 | 84.39 | 57.59 | 72.12 | 77.11 | 55.74 |
| CIFAR-10 | Mahalanobis | 97.95 | 99.50 | 66.85 | 95.59 | 97.46 | 62.21 |
| CIFAR-10 | Res-Flow | 97.90 | 99.40 | 67.03 | 94.55 | 97.21 | 62.60 |
| SVHN | LID | 57.70 | 58.92 | 61.11 | 55.60 | 56.43 | 58.21 |
| SVHN | Mahalanobis | 73.17 | 74.67 | 64.72 | 68.20 | 69.46 | 60.88 |
| SVHN | Res-Flow | 69.70 | 74.86 | 64.68 | 64.53 | 68.41 | 61.13 |

## Acknowledgement
This repository is mainly built upon [FrEIA, the Framework for Easily Invertible Architectures](https://github.com/VLL-HD/FrEIA), and [NATTACK](https://github.com/Cold-Winter/Nattack).
We thank the authors of these two repositories.

## Citation
If you have found our code or paper beneficial to your research, please consider citing them as:
```bibtex
@inproceedings{dolatabadi2020advflow,
title={AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows},
author={Hadi Mohaghegh Dolatabadi and Sarah Erfani and Christopher Leckie},
booktitle = {Proceedings of the Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems ({NeurIPS})},
year={2020}
}
```