Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/mzweilin/EvadeML-Zoo
Benchmarking and Visualization Tool for Adversarial Machine Learning
- Host: GitHub
- URL: https://github.com/mzweilin/EvadeML-Zoo
- Owner: mzweilin
- License: MIT
- Created: 2017-05-29T17:57:22.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2023-04-04T12:38:38.000Z (over 1 year ago)
- Last Synced: 2024-07-18T02:49:18.172Z (4 months ago)
- Language: Python
- Homepage: https://evadeML.org/zoo
- Size: 26.9 MB
- Stars: 182
- Watchers: 10
- Forks: 63
- Open Issues: 10
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-production-machine-learning - EvadeML-Zoo: benchmarking and visualization tool for adversarial ML, maintained by Weilin Xu, a PhD student at the University of Virginia working with David Evans. Includes a tutorial re-implementing one of the most important adversarial defense papers, [feature squeezing](https://arxiv.org/abs/1704.01155), by the same team. (Adversarial Robustness Libraries)
- Awesome-AIML-Data-Ops - EvadeML-Zoo: benchmarking and visualization tool for adversarial ML, maintained by Weilin Xu, a PhD student at the University of Virginia working with David Evans. Includes a tutorial re-implementing one of the most important adversarial defense papers, [feature squeezing](https://arxiv.org/abs/1704.01155), by the same team. (Adversarial Robustness Libraries)
README
# EvadeML-Zoo
The goals of this project:
* Several datasets ready to use: MNIST, CIFAR-10, ImageNet-ILSVRC and more.
* Pre-trained state-of-the-art models to attack. [[See details]](models/README.md).
* Existing attack methods: FGSM, BIM, JSMA, DeepFool, Universal Perturbations, Carlini/Wagner-L2/Li/L0 and more. [[See details]](attacks/README.md).
* Visualization of adversarial examples.
* Existing defense methods as baselines.

The code was developed on Python 2, but should be runnable on Python 3 with minor modifications.
> Please follow [the instructions](Reproduce_FeatureSqueezing.md) to reproduce the _**Feature Squeezing**_ results.
## 1. Install dependencies.
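If you prefer an isolated environment, here is a minimal sketch (the environment name `evademl` is arbitrary; recall that the code targets Python 2 but should run on Python 3 with minor changes):

```bash
# Hypothetical setup: create and activate a virtual environment
# before installing the pinned requirements.
virtualenv evademl
source evademl/bin/activate
```

Then install the pinned CPU requirements: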
```bash
pip install -r requirements_cpu.txt
```

If you are going to run the code on GPU, install this list instead:
```bash
pip install -r requirements_gpu.txt
```

## 2. Fetch submodules.
```bash
git submodule update --init --recursive
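# Optional sanity check (not in the original instructions): each
# submodule should now report a pinned commit.
git submodule status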
```

## 3. Download pre-trained models.
```bash
mkdir downloads; curl -sL https://github.com/mzweilin/EvadeML-Zoo/releases/download/v0.1/downloads.tar.gz | tar xzv -C downloads
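# Optional sanity check (not in the original instructions): the archive
# should have unpacked the pre-trained models into ./downloads.
ls downloads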
```

## 4. (Optional) Download the SVHN dataset and pre-trained model.
```bash
python datasets/svhn_dataset/download_svhn_data.py
curl -sL https://github.com/mzweilin/EvadeML-Zoo/releases/download/v0.1/svhn_model_weights.tar.gz | tar xzv
```

## 5. Usage of `python main.py`
```
usage: python main.py [-h] [--dataset_name DATASET_NAME] [--model_name MODEL_NAME]
[--select [SELECT]] [--noselect] [--nb_examples NB_EXAMPLES]
[--balance_sampling [BALANCE_SAMPLING]] [--nobalance_sampling]
[--test_mode [TEST_MODE]] [--notest_mode] [--attacks ATTACKS]
[--clip CLIP] [--visualize [VISUALIZE]] [--novisualize]
[--robustness ROBUSTNESS] [--detection DETECTION]
[--detection_train_test_mode [DETECTION_TRAIN_TEST_MODE]]
[--nodetection_train_test_mode] [--result_folder RESULT_FOLDER]
[--verbose [VERBOSE]] [--noverbose]

optional arguments:
-h, --help show this help message and exit
--dataset_name DATASET_NAME
Supported: MNIST, CIFAR-10, ImageNet, SVHN.
--model_name MODEL_NAME
Supported: cleverhans, cleverhans_adv_trained and
carlini for MNIST; carlini and DenseNet for CIFAR-10;
ResNet50, VGG19, Inceptionv3 and MobileNet for
ImageNet; tohinz for SVHN.
--select [SELECT] Select correctly classified examples for the
experiment.
--noselect
--nb_examples NB_EXAMPLES
The number of examples selected for attacks.
--balance_sampling [BALANCE_SAMPLING]
Select the same number of examples for each class.
--nobalance_sampling
--test_mode [TEST_MODE]
Only select one sample for each class.
--notest_mode
--attacks ATTACKS Attack name and parameters in URL style, separated by
semicolons.
--clip CLIP L-infinity clip on the adversarial perturbations.
--visualize [VISUALIZE]
Output the image examples for each attack, enabled by
default.
--novisualize
--robustness ROBUSTNESS
Supported: FeatureSqueezing.
--detection DETECTION
Supported: feature_squeezing.
--detection_train_test_mode [DETECTION_TRAIN_TEST_MODE]
Split into train/test datasets.
--nodetection_train_test_mode
--result_folder RESULT_FOLDER
The output folder for results.
--verbose [VERBOSE] Stdout level. The hidden content will be saved to log
files anyway.
--noverbose
```

### Example.
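This run attacks the Carlini MNIST model with FGSM (eps=0.1), evaluates robustness with and without the bit-depth-1 squeezer, and trains a feature-squeezing detector that combines two squeezers with an L1 distance at a 5% false-positive rate: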
```bash
python main.py --dataset_name MNIST --model_name carlini \
--nb_examples 2000 --balance_sampling \
--attacks "FGSM?eps=0.1;" \
--robustness "none;FeatureSqueezing?squeezer=bit_depth_1;" \
--detection "FeatureSqueezing?squeezers=bit_depth_1,median_filter_2_2&distance_measure=l1&fpr=0.05;"
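# Note: --attacks accepts several attacks at once in the same URL-query
# style, separated by semicolons, e.g.:
#   --attacks "FGSM?eps=0.1;FGSM?eps=0.3;"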
```

## Cite this work
You are encouraged to cite the following paper if you use `EvadeML-Zoo` for academic research.
```
@inproceedings{xu2018feature,
title={{Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks}},
author={Xu, Weilin and Evans, David and Qi, Yanjun},
booktitle={Proceedings of the 2018 Network and Distributed Systems Security Symposium (NDSS)},
year={2018}
}
```