Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
A pytorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"
https://github.com/harry24k/pgd-pytorch
adversarial-attacks deep-learning pytorch
- Host: GitHub
- URL: https://github.com/harry24k/pgd-pytorch
- Owner: Harry24k
- License: mit
- Created: 2019-04-17T04:41:07.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2019-09-04T14:46:07.000Z (over 5 years ago)
- Last Synced: 2023-03-04T02:24:36.724Z (almost 2 years ago)
- Topics: adversarial-attacks, deep-learning, pytorch
- Language: Jupyter Notebook
- Size: 621 KB
- Stars: 96
- Watchers: 2
- Forks: 26
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# PGD-pytorch
**A pytorch implementation of "[Towards Deep Learning Models Resistant to Adversarial Attacks](https://arxiv.org/abs/1706.06083)"**

## Summary
This code is a pytorch implementation of the **PGD attack**.
In this code, I used the above method to fool [Inception v3](https://arxiv.org/abs/1512.00567).
A '[Giant Panda](http://www.image-net.org/)' image is used as the example.
You can add other pictures by placing them in a folder named after their label under 'data/imagenet'.
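
For orientation before opening the notebook, below is a minimal sketch of an L-infinity PGD attack in PyTorch. The function name `pgd_attack`, the hyperparameters (`eps`, `alpha`, `steps`), and the assumption that images are scaled to [0, 1] with any normalization folded into the model are illustrative choices, not necessarily what the notebook in this repository uses.

```python
import torch
import torch.nn as nn
import torchvision.models as models

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=40):
    """L-infinity PGD sketch: repeatedly step along the sign of the gradient,
    then project back into the eps-ball around the original image."""
    loss_fn = nn.CrossEntropyLoss()
    ori_images = images.clone().detach()
    adv_images = images.clone().detach()

    for _ in range(steps):
        adv_images.requires_grad_(True)
        loss = loss_fn(model(adv_images), labels)
        grad = torch.autograd.grad(loss, adv_images)[0]

        # Ascend the loss, then project back into the eps-ball and the valid image range.
        adv_images = adv_images.detach() + alpha * grad.sign()
        delta = torch.clamp(adv_images - ori_images, min=-eps, max=eps)
        adv_images = torch.clamp(ori_images + delta, min=0, max=1).detach()

    return adv_images

# Illustrative usage against a pretrained Inception v3 (older torchvision API,
# matching the pinned versions below). `images` would be a (N, 3, 299, 299)
# batch scaled to [0, 1]; `labels` the true class indices.
model = models.inception_v3(pretrained=True).eval()
# adv_images = pgd_attack(model, images, labels)
```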
## Requirements
* python==3.6
* numpy==1.14.2
* pytorch==1.0.1
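
A quick way to confirm your environment matches these pins is the check below; note that the "pytorch" requirement corresponds to the `torch` package.

```python
# Sanity check that the installed versions match the pins above.
import sys
import numpy
import torch  # the "pytorch" requirement is provided by the `torch` package

print(sys.version.split()[0])  # expected: 3.6.x
print(numpy.__version__)       # expected: 1.14.2
print(torch.__version__)       # expected: 1.0.1
```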
## Important results not in the code
- Capacity (the size of the network) plays an important role in adversarial training. (pp. 9-10)
    - When training on natural examples only, larger capacity increases robustness against one-step perturbations.
    - With PGD adversarial training, small-capacity networks fail.
    - As capacity increases, the model can fit the adversarial examples increasingly well.
    - More capacity and stronger adversaries decrease transferability. (Section B)
- FGSM adversaries don't increase robustness for large epsilon (=8). (pp. 9-10)
    - The network overfits to the FGSM adversarial examples.
- Adversarial training with PGD shows sufficiently good defense results (pp. 12-13); a minimal training-loop sketch follows below.
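
For context on what "PGD adversarial training" means above, here is a hedged sketch of the recipe from the paper: craft PGD adversarial examples on the fly for each batch and train on them. `model`, `optimizer`, `train_loader`, and `device` are assumed placeholders, and `pgd_attack` refers to the illustrative function sketched earlier; this training loop is not part of this repository's code.

```python
import torch.nn as nn

def adversarial_training_epoch(model, train_loader, optimizer, device):
    """One epoch of PGD adversarial training: every batch is replaced by
    adversarial examples crafted against the current model parameters."""
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)

        # Inner maximization: craft adversarial examples with PGD.
        adv_images = pgd_attack(model, images, labels)

        # Outer minimization: an ordinary optimizer step on the adversarial batch.
        optimizer.zero_grad()
        loss = loss_fn(model(adv_images), labels)
        loss.backward()
        optimizer.step()
```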
## Notice
- This repository will no longer be updated.
- Please check [the package of adversarial attacks in pytorch](https://github.com/Harry24k/adversairal-attacks-pytorch) instead.