Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/borealisai/advertorch
A Toolbox for Adversarial Robustness Research
- Host: GitHub
- URL: https://github.com/borealisai/advertorch
- Owner: BorealisAI
- License: lgpl-3.0
- Created: 2018-11-29T22:17:33.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2023-09-14T02:51:02.000Z (over 1 year ago)
- Last Synced: 2024-12-20T02:04:04.273Z (5 days ago)
- Topics: adversarial-attacks, adversarial-example, adversarial-examples, adversarial-learning, adversarial-machine-learning, adversarial-perturbations, benchmarking, machine-learning, pytorch, robustness, security, toolbox
- Language: Jupyter Notebook
- Size: 8.19 MB
- Stars: 1,312
- Watchers: 27
- Forks: 198
- Open Issues: 27
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
README
[![Build Status](https://travis-ci.org/BorealisAI/advertorch.svg?branch=master)](https://travis-ci.org/BorealisAI/advertorch)
AdverTorch is a Python toolbox for adversarial robustness research. The primary functionalities are implemented in PyTorch. Specifically, AdverTorch contains modules for generating adversarial perturbations and defending against adversarial examples, as well as scripts for adversarial training.
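To illustrate what "generating adversarial perturbations" means in practice, here is a minimal sketch of the Fast Gradient Sign Method (FGSM) in plain PyTorch. This is not AdverTorch's API; the model and data below are toy stand-ins used only to make the sketch self-contained.

```python
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.3):
    """Fast Gradient Sign Method: one signed-gradient step of size eps,
    clamped back into the valid input range [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

# toy model and data, illustrative only
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(2, 1, 28, 28)
y = torch.randint(0, 10, (2,))
x_adv = fgsm(model, x, y)
```

The resulting `x_adv` differs from `x` by at most `eps` in each pixel, which is the L-infinity constraint that most of the attacks in the toolbox enforce.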
#### Latest version (v0.2)
## Installation
### Installing AdverTorch itself
We developed AdverTorch under Python 3.6 and PyTorch 1.0.0 & 0.4.1. To install AdverTorch, simply run
```
pip install advertorch
```
or clone the repo and run
```
python setup.py install
```
To install the package in "editable" mode:
```
pip install -e .
```
### Setting up the testing environments
Some attacks are tested against implementations in [Foolbox](https://github.com/bethgelab/foolbox) or [CleverHans](https://github.com/tensorflow/cleverhans) to ensure correctness. Currently, they are tested under the following versions of related libraries.
```
conda install -c anaconda tensorflow-gpu==1.11.0
pip install git+https://github.com/tensorflow/cleverhans.git@336b9f4ed95dccc7f0d12d338c2038c53786ab70
pip install Keras==2.2.2
pip install foolbox==1.3.2
```
## Examples
```python
import torch
import torch.nn as nn

from advertorch.attacks import LinfPGDAttack

# prepare your pytorch model as "model"
# prepare a batch of data and label as "cln_data" and "true_label"
# ...

adversary = LinfPGDAttack(
    model, loss_fn=nn.CrossEntropyLoss(reduction="sum"), eps=0.3,
    nb_iter=40, eps_iter=0.01, rand_init=True, clip_min=0.0, clip_max=1.0,
    targeted=False)

adv_untargeted = adversary.perturb(cln_data, true_label)

target = torch.ones_like(true_label) * 3
adversary.targeted = True
adv_targeted = adversary.perturb(cln_data, target)
```
For runnable examples, see [`advertorch_examples/tutorial_attack_defense_bpda_mnist.ipynb`](https://github.com/BorealisAI/advertorch/blob/master/advertorch_examples/tutorial_attack_defense_bpda_mnist.ipynb) for how to attack and defend, and [`advertorch_examples/tutorial_train_mnist.py`](https://github.com/BorealisAI/advertorch/blob/master/advertorch_examples/tutorial_train_mnist.py) for how to adversarially train a robust model on MNIST.
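The projected gradient descent loop behind an L-infinity PGD attack can also be sketched by hand. The version below mirrors the parameters used above (`eps=0.3`, `nb_iter=40`, `eps_iter=0.01`, random init) but is a self-contained approximation in plain PyTorch, not AdverTorch's implementation; the toy model and random data are stand-ins.

```python
import torch
import torch.nn as nn

def pgd_linf(model, x, y, eps=0.3, eps_iter=0.01, nb_iter=40,
             clip_min=0.0, clip_max=1.0):
    """Untargeted L-inf PGD: iteratively ascend the loss, projecting back
    into the eps-ball around x and the valid input range after each step."""
    loss_fn = nn.CrossEntropyLoss(reduction="sum")
    # random start inside the eps-ball (the rand_init=True behavior)
    delta = torch.empty_like(x).uniform_(-eps, eps)
    adv = torch.clamp(x + delta, clip_min, clip_max)
    for _ in range(nb_iter):
        adv = adv.detach().requires_grad_(True)
        loss = loss_fn(model(adv), y)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv + eps_iter * grad.sign()                  # ascend the loss
        adv = torch.min(torch.max(adv, x - eps), x + eps)   # project to eps-ball
        adv = torch.clamp(adv, clip_min, clip_max)          # stay in valid range
    return adv.detach()

# toy setup, illustrative only
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
cln_data = torch.rand(4, 1, 28, 28)
true_label = torch.randint(0, 10, (4,))
adv = pgd_linf(model, cln_data, true_label)
```

Swapping the loss for a targeted objective (minimizing the loss toward a chosen `target` label) gives the targeted variant shown in the snippet above.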
## Documentation
The documentation is hosted on Read the Docs: https://advertorch.readthedocs.io.
## Coming Soon
AdverTorch is still under active development. We will add the following features/items down the road:
* more examples
* support for other machine learning frameworks, e.g. TensorFlow
* more attacks, defenses and other related functionalities
* support for other Python versions and future PyTorch versions
* contributing guidelines
* ...

## Known issues
`FastFeatureAttack` and `JacobianSaliencyMapAttack` do not pass the tests against the version of CleverHans used. (They used to pass the tests on a previous version of CleverHans.) This issue is being investigated. In the file `test_attacks_on_cleverhans.py`, they are marked as "skipped" in the `pytest` tests.
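The skip mechanism referred to above is standard `pytest`; a minimal sketch of what such a marker looks like (the test name and reason here are illustrative, not AdverTorch's actual test code):

```python
import pytest

@pytest.mark.skip(reason="does not pass against the pinned CleverHans commit")
def test_fast_feature_attack_matches_cleverhans():
    # never executed while the skip mark is in place
    raise AssertionError("unreachable under pytest")
```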
## License
This project is licensed under the LGPL. The terms and conditions can be found in the LICENSE and LICENSE.GPL files.
## Citation
If you use AdverTorch in your research, we kindly ask that you cite the following [technical report](https://arxiv.org/abs/1902.07623):
```
@article{ding2019advertorch,
title={{AdverTorch} v0.1: An Adversarial Robustness Toolbox based on PyTorch},
author={Ding, Gavin Weiguang and Wang, Luyu and Jin, Xiaomeng},
journal={arXiv preprint arXiv:1902.07623},
year={2019}
}
```
## Contributors
* [Gavin Weiguang Ding](https://gwding.github.io/)
* Luyu Wang
* Xiaomeng Jin
* Laurent Meunier
* Alexandre Araujo
* Jérôme Rony
* Ben Feinstein
* Francesco Croce
* Taro Kiritani