Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.


https://github.com/harry24k/pgd-pytorch

A pytorch implementation of "Towards Deep Learning Models Resistant to Adversarial Attacks"

adversarial-attacks deep-learning pytorch


README


# PGD-pytorch
**A pytorch implementation of "[Towards Deep Learning Models Resistant to Adversarial Attacks](https://arxiv.org/abs/1706.06083)"**

## Summary
This code is a PyTorch implementation of the **PGD attack**.
In this code, I used the above method to fool [Inception v3](https://arxiv.org/abs/1512.00567).
A '[Giant Panda](http://www.image-net.org/)' image is used as an example.
You can add other pictures by putting them in a folder named after their label under 'data/imagenet'.
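The core of the attack is a projected gradient descent loop on the input image. Below is a minimal sketch of that idea (not the repository's exact code; the function name, defaults, and the assumption that images lie in the [0, 1] pixel range are illustrative):

```python
import torch
import torch.nn as nn

def pgd_attack(model, images, labels, eps=8/255, alpha=2/255, steps=40):
    """Craft adversarial examples with the PGD attack (Madry et al., 2017)."""
    loss_fn = nn.CrossEntropyLoss()
    ori_images = images.clone().detach()
    adv_images = images.clone().detach()

    for _ in range(steps):
        adv_images.requires_grad_(True)
        loss = loss_fn(model(adv_images), labels)
        grad = torch.autograd.grad(loss, adv_images)[0]

        # Take a signed-gradient ascent step, then project back into the
        # L-infinity eps-ball around the original image and the valid pixel range.
        adv_images = adv_images.detach() + alpha * grad.sign()
        delta = torch.clamp(adv_images - ori_images, min=-eps, max=eps)
        adv_images = torch.clamp(ori_images + delta, min=0, max=1).detach()

    return adv_images
```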

## Requirements
* python==3.6
* numpy==1.14.2
* pytorch==1.0.1

## Important results not in the code
- Capacity (the size of the network) plays an important role in adversarial training. (pp. 9-10)
  - When training on natural examples only, larger capacity increases robustness against one-step perturbations.
  - For PGD adversarial training, small-capacity networks fail.
  - As capacity increases, the model can fit the adversarial examples increasingly well.
  - More capacity and stronger adversaries decrease transferability. (Section B)
- FGSM adversaries don't increase robustness for large epsilon (=8). (pp. 9-10)
  - The network overfits to FGSM adversarial examples.
- Adversarial training with PGD provides sufficiently strong defense results. (pp. 12-13)

## Notice
- This repository won't be updated.
- Please check [the package of adversarial attacks in PyTorch](https://github.com/Harry24k/adversarial-attacks-pytorch) instead; a short usage sketch follows below.
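That package exposes PGD through a class-based API. A minimal usage sketch, assuming it is installed as `torchattacks` and that `model`, `images`, and `labels` are already defined (defaults may differ between versions):

```python
import torchattacks

# PGD with an L-infinity budget of 8/255, step size 2/255, and 10 iterations.
attack = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10)
adv_images = attack(images, labels)  # adversarial counterparts of `images`
```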