Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/raghakot/keras-vis
Neural network visualization toolkit for keras
Neural network visualization toolkit for keras
- Host: GitHub
- URL: https://github.com/raghakot/keras-vis
- Owner: raghakot
- License: MIT
- Created: 2016-11-11T23:27:34.000Z (about 8 years ago)
- Default Branch: master
- Last Pushed: 2022-02-07T16:06:07.000Z (almost 3 years ago)
- Last Synced: 2024-10-29T15:17:28.060Z (3 months ago)
- Topics: deep-learning, keras, machine-learning, neural-networks, tensorflow, theano, visualization
- Language: Python
- Homepage: https://raghakot.github.io/keras-vis
- Size: 147 MB
- Stars: 2,979
- Watchers: 71
- Forks: 661
- Open Issues: 117
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- awesome-meteo - keras-vis
- awesome-keras - keras-vis - A neural network visualization toolkit for keras. (Network Visualisation)
- Awesome-AIML-Data-Ops - keras-vis - keras-vis is a high-level toolkit for visualizing and debugging your trained keras neural net models. Currently supported visualizations include: Activation maximization, Saliency maps, Class activation maps. (Explaining Black Box Models and Datasets)
- awesome-production-machine-learning - keras-vis - keras-vis is a high-level toolkit for visualizing and debugging your trained keras neural net models. Currently supported visualizations include: Activation maximization, Saliency maps, Class activation maps. (Explainability and Fairness)
- Awesome-explainable-AI - https://github.com/raghakot/keras-vis (Python Libraries (sorted in alphabetical order) / Evaluation methods)
- awesome-python-machine-learning-resources - GitHub (52% open issues · last updated 20.04.2020) (Model Interpretability)
- StarryDivineSky - raghakot/keras-vis - keras-vis is a high-level toolkit for visualizing and debugging trained Keras neural network models. It supports visualizations such as activation maximization, saliency maps, and class activation maps, and works with N-dimensional image inputs. The toolkit frames all visualization methods as energy minimization problems, provides a clean and easy-to-use interface, is compatible with both Theano and TensorFlow backends, and supports the 'channels_first' and 'channels_last' data formats. You can generate natural-looking images by defining a weighted loss function and configuring an optimizer to minimize the loss. The project also provides a variety of examples to help you get started quickly. (A01_Machine Learning Tutorials)
README
# Keras Visualization Toolkit
[![Build Status](https://travis-ci.org/raghakot/keras-vis.svg?branch=master)](https://travis-ci.org/raghakot/keras-vis)
[![license](https://img.shields.io/github/license/mashape/apistatus.svg?maxAge=2592000)](https://github.com/raghakot/keras-vis/blob/master/LICENSE)
[![Slack](https://img.shields.io/badge/slack-discussion-E01563.svg)](https://keras-vis.herokuapp.com/)

keras-vis is a high-level toolkit for visualizing and debugging your trained keras neural net models. Currently
supported visualizations include:

- Activation maximization
- Saliency maps
- Class activation maps

All visualizations by default support N-dimensional image inputs, i.e., they generalize to N-dim image inputs to your
model. The toolkit frames all of the above as energy minimization problems with a clean, easy-to-use, and extendable
interface. It is compatible with both theano and tensorflow backends and with 'channels_first' and 'channels_last'
data formats.

## Quick links
* Read the documentation at [https://raghakot.github.io/keras-vis](https://raghakot.github.io/keras-vis).
* The Japanese edition is available at [https://keisen.github.io/keras-vis-docs-ja](https://keisen.github.io/keras-vis-docs-ja).
* Join the slack [channel](https://keras-vis.herokuapp.com/) for questions/discussions.
* We are tracking new features/tasks on [waffle.io](https://waffle.io/raghakot/keras-vis). We'd love it if you lend us
a hand and submit PRs.

## Getting Started
In image backprop problems, the goal is to generate an input image that minimizes some loss function.
Setting up an image backprop problem is easy.

**Define weighted loss function**
Various useful loss functions are defined in [losses](https://raghakot.github.io/keras-vis/vis.losses).
A custom loss function can be defined by implementing [Loss.build_loss](https://raghakot.github.io/keras-vis/vis.losses/#lossbuild_loss).

```python
from vis.losses import ActivationMaximization
from vis.regularizers import TotalVariation, LPNorm

filter_indices = [1, 2, 3]
# Tuple consists of (loss_function, weight)
# Add regularizers as needed.
losses = [
    (ActivationMaximization(keras_layer, filter_indices), 1),
    (LPNorm(model.input), 10),
    (TotalVariation(model.input), 10)
]
```
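As mentioned above, a custom loss only needs to implement `Loss.build_loss`. Here is a minimal sketch, assuming the `Loss` base class in `vis.losses` takes no required constructor arguments and that `build_loss` returns a scalar backend tensor; the `MeanActivation` class is hypothetical and purely illustrative:

```python
import keras.backend as K

from vis.losses import Loss


class MeanActivation(Loss):
    """Hypothetical loss that rewards high mean activation of a layer's output."""

    def __init__(self, layer_output):
        super(MeanActivation, self).__init__()
        self.name = 'Mean Activation Loss'
        self.layer_output = layer_output

    def build_loss(self):
        # The Optimizer minimizes the weighted sum of losses, so return the
        # negated mean activation in order to maximize it.
        return -K.mean(self.layer_output)
```

An instance could then be appended to the `losses` list above with a weight of its own, e.g. `(MeanActivation(keras_layer.output), 1)`.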
**Configure optimizer to minimize weighted loss**

In order to generate natural looking images, the image search space is constrained using regularization penalties.
Some common regularizers are defined in [regularizers](https://raghakot.github.io/keras-vis/vis.regularizers).
Like loss functions, a custom regularizer can be defined by implementing
[Loss.build_loss](https://raghakot.github.io/keras-vis/vis.losses/#lossbuild_loss).

```python
from vis.optimizer import Optimizer

optimizer = Optimizer(model.input, losses)
opt_img, grads, _ = optimizer.minimize()
```
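To inspect the result, the returned `opt_img` can be plotted directly. A minimal sketch, assuming `opt_img` is an image-shaped NumPy array in 'channels_last' format (this may vary by version and backend):

```python
import matplotlib.pyplot as plt

# Visualize the generated input image returned by optimizer.minimize().
plt.imshow(opt_img)
plt.axis('off')
plt.show()
```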
Concrete examples of various supported visualizations can be found in the
[examples folder](https://github.com/raghakot/keras-vis/tree/master/examples).

## Installation
1) Install [keras](https://github.com/fchollet/keras/blob/master/README.md#installation)
with theano or tensorflow backend. Note that this library requires Keras > 2.0

2) Install keras-vis
> From sources
```bash
sudo python setup.py install
```

> PyPI package
```bash
sudo pip install keras-vis
```
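After installing, a quick sanity check (assuming a working keras install) is to confirm that both packages import and that the expected backend is active:

```python
# Sanity check: both imports should succeed after installation.
import vis
from keras import backend as K

print(vis.__file__)  # installed location of keras-vis
print(K.backend())   # 'tensorflow' or 'theano'
```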
## Visualizations

**NOTE: The links are currently broken and the entire documentation is being reworked.
Please see examples/ for samples.**

Neural nets are black boxes. In recent years, several approaches for understanding and visualizing Convolutional
Networks have been developed in the literature. They give us a way to peer into the black boxes,
diagnose mis-classifications, and assess whether the network is over/under fitting.

Guided backprop can also be used to create [trippy art](https://deepdreamgenerator.com/gallery) and neural/texture
[style transfer](https://github.com/jcjohnson/neural-style), among a growing list of other applications.

Various visualizations, documented in their own pages, are summarized here.
### [Conv filter visualization](https://raghakot.github.io/keras-vis/visualizations/conv_filters)
*Convolutional filters learn 'template matching' patterns that maximize the output when a similar
pattern is found in the input image. Visualize those templates via Activation Maximization.*
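For a quick start, the library also ships a higher-level helper, `visualize_activation`, which wraps the loss/optimizer setup shown earlier. A minimal sketch (the layer name is one choice among VGG16's conv layers, and exact signatures may differ slightly across versions):

```python
from keras.applications import VGG16

from vis.utils import utils
from vis.visualization import visualize_activation

# Build the VGG16 network with ImageNet weights.
model = VGG16(weights='imagenet', include_top=True)

# Synthesize an input that maximally activates filter 20 of a late conv layer.
layer_idx = utils.find_layer_idx(model, 'block5_conv1')
img = visualize_activation(model, layer_idx, filter_indices=20, max_iter=500)
```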
### [Dense layer visualization](https://raghakot.github.io/keras-vis/visualizations/dense)
*How can we assess whether a network is over/under fitting or generalizing well?*
### [Attention Maps](https://raghakot.github.io/keras-vis/visualizations/attention)
*How can we assess whether a network is attending to correct parts of the image in order to generate a decision?*
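Attention-style visualizations are exposed through helpers such as `visualize_saliency` (and `visualize_cam` for class activation maps). A minimal sketch, reusing the VGG16 `model` from the sketch above and assuming a 'channels_last' input; the random `seed_img` is a stand-in for a real image:

```python
import numpy as np

from vis.utils import utils
from vis.visualization import visualize_saliency

# Saliency of the 'ouzel' class (index 20) with respect to the input pixels.
layer_idx = utils.find_layer_idx(model, 'predictions')
seed_img = np.random.random((224, 224, 3))  # stand-in for a real input image
heatmap = visualize_saliency(model, layer_idx, filter_indices=20,
                             seed_input=seed_img)
```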
### Generating animated gif of optimization progress
It is possible to generate an animated gif of optimization progress by leveraging
[callbacks](https://raghakot.github.io/keras-vis/vis.callbacks). The following example shows how to visualize the
activation maximization for the 'ouzel' class (output_index: 20).

```python
from keras.applications import VGG16

from vis.losses import ActivationMaximization
from vis.regularizers import TotalVariation, LPNorm
from vis.input_modifiers import Jitter
from vis.optimizer import Optimizer
from vis.callbacks import GifGenerator

# Build the VGG16 network with ImageNet weights
model = VGG16(weights='imagenet', include_top=True)
print('Model loaded.')

# The name of the layer we want to visualize
# (see model definition in vggnet.py)
layer_name = 'predictions'
layer_dict = dict([(layer.name, layer) for layer in model.layers[1:]])
output_class = [20]

losses = [
    (ActivationMaximization(layer_dict[layer_name], output_class), 2),
    (LPNorm(model.input), 10),
    (TotalVariation(model.input), 10)
]
opt = Optimizer(model.input, losses)
opt.minimize(max_iter=500, verbose=True, input_modifiers=[Jitter()], callbacks=[GifGenerator('opt_progress')])
```
Notice how the output jitters around? This is because we used [Jitter](https://raghakot.github.io/keras-vis/vis.modifiers/#jitter),
a kind of [ImageModifier](https://raghakot.github.io/keras-vis/vis.modifiers/#imagemodifier) that is known to produce
crisper activation maximization images. As an exercise, try:

- Without Jitter
- Varying the loss weights

![opt_progress](https://raw.githubusercontent.com/raghakot/keras-vis/master/images/opt_progress.gif?raw=true "Optimization progress")
## Citation
Please cite keras-vis in your publications if it helped your research. Here is an example BibTeX entry:
```
@misc{raghakotkerasvis,
title={keras-vis},
author={Kotikalapudi, Raghavendra and contributors},
year={2017},
publisher={GitHub},
howpublished={\url{https://github.com/raghakot/keras-vis}},
}
```