Generating Adversarial examples for ConvNeXt
https://github.com/shreyansh26/convnext-adversarial-examples

adversarial-attacks adversarial-machine-learning convnext deeplearning image-classification


# Adversarial examples generation for ConvNeXt

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1c7EiO59cmVbxdCFZRDn82I-gBi6aTPDN?usp=sharing)

**This project is a PyTorch implementation of [@stanislavfort's project](https://twitter.com/stanislavfort/status/1481263565998805002?s=20).**

The notebook looks at generating adversarial images to "fool" the image-classification capabilities of [ConvNeXt](https://arxiv.org/abs/2201.03545), a model released by Meta AI in early 2022.

FGSM (Fast Gradient Sign Method) is a simple yet effective algorithm for attacking models in a *white-box* fashion with the goal of misclassification. Rather than adding random noise, it perturbs the input image in the direction of the sign of the gradient of the loss function with respect to the input.

Since this notebook is just the implementation, you may refer to these resources if you want to know more about FGSM -

1. https://www.tensorflow.org/tutorials/generative/adversarial_fgsm
2. https://pytorch.org/tutorials/beginner/fgsm_tutorial.html
3. https://arxiv.org/abs/1412.6572
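The FGSM step described above can be sketched in a few lines of PyTorch. This is a generic, minimal sketch (not the notebook's exact code); the `fgsm_attack` helper and the tiny linear stand-in for ConvNeXt are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """One-step FGSM: perturb x along the sign of the gradient of the
    loss w.r.t. the input, then clamp back to the valid image range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Move each pixel by epsilon in the direction that increases the loss
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Tiny demo with a hypothetical linear classifier standing in for ConvNeXt
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 8 * 8, 10))
x = torch.rand(1, 3, 8, 8)          # a fake 8x8 RGB "image" in [0, 1]
y = torch.tensor([3])               # its (assumed) true label
x_adv = fgsm_attack(model, x, y, epsilon=8 / 255)
print(x_adv.shape, (x_adv - x).abs().max().item())
```

By construction the perturbation is bounded by `epsilon` per pixel, which is what makes the adversarial image visually indistinguishable from the original while still shifting the model's prediction.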

The following figure summarizes the goal of this notebook -