https://github.com/soskek/interval-bound-propagation-chainer
Sven Gowal et al., Scalable Verified Training for Provably Robust Image Classification, ICCV 2019
- Host: GitHub
- URL: https://github.com/soskek/interval-bound-propagation-chainer
- Owner: soskek
- Created: 2019-09-23T05:42:33.000Z (about 6 years ago)
- Default Branch: master
- Last Pushed: 2019-11-17T15:36:37.000Z (almost 6 years ago)
- Last Synced: 2025-04-15T12:47:21.367Z (6 months ago)
- Topics: adversarial-attacks, chainer, deep-learning, deepmind, iccv2019, interval-bound-propagation, neural-networks, verification
- Language: Jupyter Notebook
- Homepage:
- Size: 246 KB
- Stars: 9
- Watchers: 3
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Interval Bound Propagation
This is an easy-to-follow Chainer implementation of Interval Bound Propagation (IBP). MNIST experiments with small and medium models are implemented.
Paper: Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, Pushmeet Kohli, [On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models (Scalable Verified Training for Provably Robust Image Classification), ICCV 2019](https://arxiv.org/abs/1810.12715)
Authors' TensorFlow Code: [https://github.com/deepmind/interval-bound-propagation](https://github.com/deepmind/interval-bound-propagation)
To train on MNIST:
```
python train_mnist.py -d 0 --model-class small
```
See `layers.py` and `models.py` for the core of the algorithm.
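The core idea of IBP is to propagate an element-wise interval `[lower, upper]` around each activation through the network: an affine layer maps the interval center and radius separately, and monotone activations such as ReLU are applied to both bounds directly. The snippet below is a minimal NumPy sketch of that propagation rule, written for illustration only; it is not the repository's `layers.py`.
```
import numpy as np

def ibp_affine(W, b, lower, upper):
    """Propagate the interval [lower, upper] through y = x @ W.T + b."""
    center = (upper + lower) / 2.0        # interval midpoint
    radius = (upper - lower) / 2.0        # interval half-width
    out_center = center @ W.T + b
    out_radius = radius @ np.abs(W).T     # |W| maps radii to radii
    return out_center - out_radius, out_center + out_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so it can be applied to each bound directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Example: bounds for an L-infinity ball of radius eps around an input x.
rng = np.random.default_rng(0)
x = rng.random((1, 784)).astype(np.float32)
eps = 0.1
lb, ub = x - eps, x + eps
W1, b1 = rng.standard_normal((256, 784)) * 0.01, np.zeros(256)
lb, ub = ibp_relu(*ibp_affine(W1, b1, lb, ub))
assert np.all(lb <= ub)
```
Verified training, as described in the paper, then uses the worst-case logits implied by these bounds in the training loss.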
## Visualize

The left is produced by a baseline model and the right by an IBP-trained model.
Each shows the activation feature maps of the `normal`, `upper`, and `lower` bounds at each layer.
The `range` image shows the `upper - normal` and `normal - lower` differences in red and blue, respectively.
The redder a pixel, the looser the upper bound; the bluer, the looser the lower bound. We can see that the IBP-trained model produces noise-robust features.
Its classification logits (`y`), shown at the bottom, are also robust and consistently predict 7 as the label, while the baseline fails.
The visualization notebook is `visualize_interval_bound_propagation.ipynb`.
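
As a rough sketch, a `range` panel of this kind can be built by mapping the upper-side slack to the red channel and the lower-side slack to the blue channel. The function below is only illustrative; names and normalization are assumptions, not the notebook's actual code.
```
import numpy as np

def range_image(normal, upper, lower):
    """Encode bound looseness as colors: red = upper slack, blue = lower slack."""
    up_slack = upper - normal                 # how loose the upper bound is
    low_slack = normal - lower                # how loose the lower bound is
    scale = max(up_slack.max(), low_slack.max(), 1e-8)
    rgb = np.zeros(normal.shape + (3,))
    rgb[..., 0] = up_slack / scale            # red channel
    rgb[..., 2] = low_slack / scale           # blue channel
    return rgb

# feat_normal, feat_upper, feat_lower: 2-D feature maps from one layer, e.g.
# import matplotlib.pyplot as plt
# plt.imshow(range_image(feat_normal, feat_upper, feat_lower)); plt.show()
```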