Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/jaypmorgan/adversarial.jl
Adversarial attacks for Neural Networks written with FluxML
adversarial-attacks adversarial-machine-learning flux julia julialang machine-learning
- Host: GitHub
- URL: https://github.com/jaypmorgan/adversarial.jl
- Owner: jaypmorgan
- Created: 2019-08-28T06:56:33.000Z (about 5 years ago)
- Default Branch: master
- Last Pushed: 2021-01-20T12:09:48.000Z (almost 4 years ago)
- Last Synced: 2024-10-15T16:43:48.960Z (about 1 month ago)
- Topics: adversarial-attacks, adversarial-machine-learning, flux, julia, julialang, machine-learning
- Language: Julia
- Homepage: https://jaypmorgan.github.io/Adversarial.jl/dev/
- Size: 228 KB
- Stars: 15
- Watchers: 3
- Forks: 1
- Open Issues: 1
- Metadata Files:
  - Readme: README.md
README
# Adversarial.jl
[![](https://img.shields.io/badge/docs-dev-blue.svg)](https://jaypmorgan.github.io/Adversarial.jl/dev)
Adversarial attacks for Neural Networks written with FluxML.
Adversarial examples are inputs to Neural Networks (NNs) that result in a misclassification. By exploiting the fact that NNs are susceptible to slight changes (perturbations) of the input space, adversarial examples can be crafted that are often indistinguishable from the original input from a human's point of view.
A common example of this phenomenon comes from the Fast Gradient Sign Method (FGSM) proposed by Goodfellow et al. 2014 (https://arxiv.org/abs/1412.6572), in which the network's gradient information is used to move the pixels of an image in the direction of the gradient, thereby increasing the loss for the resulting image. Despite the very small shift in pixel values, this is enough for the NN to misclassify the image: https://pytorch.org/tutorials/beginner/fgsm_tutorial.html
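To make the method concrete, here is a minimal FGSM sketch written with Flux. This is an illustration of the technique rather than this package's implementation, and the `[0, 1]` pixel range is an assumption:

```julia
using Flux

# Minimal FGSM sketch: move each input element by ϵ in the direction of
# the sign of the loss gradient, then clamp back to a valid pixel range.
function fgsm_sketch(model, x, y; ϵ = 0.07)
    g = gradient(x_ -> Flux.crossentropy(model(x_), y), x)[1]
    clamp.(x .+ ϵ .* sign.(g), 0, 1)
end
```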
We have included some of the common methods for creating adversarial examples:
- Fast Gradient Sign Method (FGSM)
- Projected Gradient Descent (PGD), sketched below
- Jacobian-based Saliency Map Attack (JSMA)
- Carlini & Wagner (CW)
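PGD can be seen as an iterated form of the FGSM step above. Here is a minimal sketch under the same assumptions (an ℓ∞ ball of radius ϵ and a `[0, 1]` pixel range); the step size `α` and iteration count are illustrative defaults, and this is not the package's own implementation:

```julia
using Flux

# Minimal PGD sketch: repeat small gradient-sign steps, projecting the
# perturbed input back into an ℓ∞ ball of radius ϵ around the original x.
function pgd_sketch(model, x, y; ϵ = 0.07, α = 0.01, steps = 10)
    x_adv = copy(x)
    for _ in 1:steps
        g = gradient(x_ -> Flux.crossentropy(model(x_), y), x_adv)[1]
        x_adv = clamp.(x_adv .+ α .* sign.(g), x .- ϵ, x .+ ϵ)  # project into the ϵ-ball
        x_adv = clamp.(x_adv, 0, 1)                             # keep a valid pixel range
    end
    return x_adv
end
```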
## Installation

You can install this package through Julia's package manager in the REPL:
```julia
] add Adversarial
```
or via a script:
```julia
using Pkg; Pkg.add("Adversarial")
```

## Quick Start Guide
As an example, we can create an adversarial image using the FGSM method:
```julia
x_adv = FGSM(model, loss, x, y; ϵ = 0.07)
```
where `model` is the FluxML model and `loss` is some loss function that uses the model's prediction, for example `crossentropy(model(x), y)`; `x` is the original input, `y` is the true class label, and `ϵ` is a parameter that determines how much each pixel is changed.
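For a fuller picture, here is a hypothetical end-to-end usage sketch. The toy model, random input, and one-hot label are stand-ins for illustration, and the two-argument `loss(x, y)` form follows the example above:

```julia
using Flux, Adversarial

# Toy classifier standing in for a real trained model.
model = Chain(Dense(784, 32, relu), Dense(32, 10), softmax)
loss(x, y) = Flux.crossentropy(model(x), y)

x = rand(Float32, 784)      # stand-in "image"; real data would go here
y = Flux.onehot(3, 1:10)    # assumed one-hot encoding of the true class

x_adv = FGSM(model, loss, x, y; ϵ = 0.07)
```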
More in-depth examples and uses of the different methods can be found in the [examples folder](examples/markdown).