https://github.com/hummat/saliency
PyTorch implementation of 'Vanilla' Gradient, Grad-CAM, Guided backprop, Integrated Gradients and their SmoothGrad variants.
- Host: GitHub
- URL: https://github.com/hummat/saliency
- Owner: hummat
- License: gpl-3.0
- Created: 2019-12-21T15:53:36.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2024-12-09T13:59:48.000Z (10 months ago)
- Last Synced: 2025-06-06T16:11:38.168Z (4 months ago)
- Topics: deep-learning, deep-learning-algorithms, deep-neural-networks, grad-cam, guided-backpropagation, integrated-gradients, machine-learning, machine-learning-algorithms, pytorch, saliency, smoothgrad, xrai
- Language: Jupyter Notebook
- Homepage: https://hummat.github.io/saliency/
- Size: 9.83 MB
- Stars: 19
- Watchers: 0
- Forks: 3
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Saliency Methods
## Overview
This repository contains code for the following saliency techniques:
### 1. Vanilla Gradients

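The vanilla gradient is simply the gradient of the target class score with respect to the input pixels. A minimal PyTorch sketch of the idea (the function name is illustrative, not this repository's API):

```python
import torch

def vanilla_gradient(model, image, target_class):
    """Gradient of the target class score w.r.t. the input pixels."""
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]  # scalar score of the target class
    score.backward()
    return image.grad.detach()
```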
### 2. Guided Backpropagation (with SmoothGrad)

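SmoothGrad reduces visual noise by averaging the saliency map over several copies of the input perturbed with Gaussian noise. A library-agnostic sketch of the averaging step (names are illustrative, not this repository's API):

```python
import random

def smoothgrad(grad_fn, x, noise_std=0.15, n_samples=25, seed=0):
    """Average gradients over Gaussian-perturbed copies of the input.

    grad_fn: any function returning the gradient at a given input.
    """
    rng = random.Random(seed)
    avg = [0.0] * len(x)
    for _ in range(n_samples):
        noisy = [xi + rng.gauss(0.0, noise_std) for xi in x]
        g = grad_fn(noisy)
        for i in range(len(x)):
            avg[i] += g[i] / n_samples
    return avg
```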
### 3. Integrated Gradients

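Integrated Gradients averages the gradient along a straight-line path from a baseline input to the actual input, scaled by the input difference. A toy pure-Python sketch of the Riemann-sum approximation (not this repository's API), which also illustrates the completeness property — attributions sum to `f(x) - f(baseline)`:

```python
def integrated_gradients(grad_fn, x, baseline, steps=100):
    """Approximate integrated gradients with a midpoint Riemann sum.

    grad_fn: gradient of the model output w.r.t. its input.
    """
    attrs = [0.0] * len(x)
    for k in range(steps):
        a = (k + 0.5) / steps  # midpoint of the k-th interval in (0, 1)
        point = [b + a * (xi - b) for xi, b in zip(x, baseline)]
        g = grad_fn(point)
        for i in range(len(x)):
            attrs[i] += (x[i] - baseline[i]) * g[i] / steps
    return attrs
```

For `f(x) = sum(x**2)` with a zero baseline, the attributions sum to `f(x)` exactly.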
### 4. (Guided) Grad-CAM

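Grad-CAM weights each feature map of a chosen convolutional layer by its spatially averaged gradient and passes the weighted sum through a ReLU. A minimal PyTorch sketch using forward/backward hooks (illustrative only, not this repository's API):

```python
import torch

def grad_cam(model, target_layer, image, target_class):
    """Class activation map from the target layer's activations and gradients."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda mod, inp, out: acts.update(a=out))
    h2 = target_layer.register_full_backward_hook(
        lambda mod, gin, gout: grads.update(g=gout[0]))
    score = model(image)[0, target_class]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    # Global-average-pool the gradients over the spatial dimensions.
    weights = grads['g'].mean(dim=(2, 3), keepdim=True)
    # Weighted sum of feature maps, then ReLU.
    return torch.relu((weights * acts['a']).sum(dim=1))
```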
### 5. XRAI
## Remarks
The methods should work with all models from the [torchvision](https://github.com/pytorch/vision) package. Tested models so far are:
* VGG variants
* ResNet variants
* DenseNet variants
* Inception/GoogLeNet

In order for *Guided Backpropagation* and *Grad-CAM* to work properly with the *Inception* and *GoogLeNet* models, they need to be modified slightly, such that all *ReLUs* are modules of the model rather than function calls.
```python
# This class can be found at the very end of inception.py and googlenet.py respectively.
class BasicConv2d(nn.Module):
    def __init__(self, in_channels, out_channels, **kwargs):
        super(BasicConv2d, self).__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, bias=False, **kwargs)
        self.bn = nn.BatchNorm2d(out_channels, eps=0.001)
        self.relu = nn.ReLU(inplace=True)  # Add this line

    def forward(self, x):
        x = self.conv(x)
        x = self.bn(x)
        return self.relu(x)  # Replaces F.relu(x, inplace=True)
```
## Examples
For a brief overview of how to use the package, please have a look at this short [tutorial notebook](https://github.com/hummat/saliency/blob/master/tutorial.ipynb). The bare minimum is summarized below.

```python
# Standard imports
import torchvision

# Import desired utils and methods
from ml_utils import load_image, show_mask
from guided_backprop import GuidedBackprop

# Load model and image
model = torchvision.models.resnet50(pretrained=True)
doberman = load_image('images/doberman.png', size=224)

# Construct a saliency object and compute the saliency mask
guided_backprop = GuidedBackprop(model)
rgb_mask = guided_backprop.get_mask(image_tensor=doberman)

# Visualize the result
show_mask(rgb_mask, title='Guided Backprop')
```

## Credits
The implementation closely follows that of the corresponding [TensorFlow saliency](https://github.com/PAIR-code/saliency) repository, reusing its code where applicable (mostly for the XRAI method). Further inspiration has been taken from [this](https://github.com/utkuozbulak/pytorch-cnn-visualizations) repository.