Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/vduchauffour/transformers-visualizer
Explain your 🤗 transformers without effort! Plot the internal behavior of your model.
- Host: GitHub
- URL: https://github.com/vduchauffour/transformers-visualizer
- Owner: VDuchauffour
- License: apache-2.0
- Created: 2022-11-02T12:57:06.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2022-12-29T16:55:03.000Z (about 2 years ago)
- Last Synced: 2024-11-10T05:11:32.052Z (2 months ago)
- Topics: ai, explainability, explainable-ai, huggingface, huggingface-transformers, nlp, transformer, transformers
- Language: Python
- Homepage:
- Size: 2.31 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# Transformers visualizer
Explain your 🤗 transformers without effort!
Transformers visualizer is a python package designed to work with the [🤗 transformers](https://huggingface.co/docs/transformers/index) package. Given a `model` and a `tokenizer`, this package supports multiple ways to explain your model by plotting its internal behavior.
This package is mostly based on the [Captum][Captum] tutorials [[1]][captum_part1] [[2]][Captum_part2].
## Installation
```shell
pip install transformers-visualizer
```

## Quickstart
Let's define a model, a tokenizer, and a text input for the following examples.
```python
from transformers import AutoModel, AutoTokenizer

model_name = "bert-base-uncased"
model = AutoModel.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
text = "The dominant sequence transduction models are based on complex recurrent or convolutional neural networks that include an encoder and a decoder."
```

### Visualizers
#### Attention matrices of a specific layer
```python
from transformers_visualizer import TokenToTokenAttentions

visualizer = TokenToTokenAttentions(model, tokenizer)
visualizer(text)
```

Instead of the `__call__` function, you can use the `compute` method. Both work in place; `compute` additionally allows method chaining.
The `plot` method accepts a layer index as a parameter, specifying which layer of your model to plot. By default, the last layer is plotted.
```python
import matplotlib.pyplot as plt

visualizer.plot(layer_index=6)
plt.savefig("token_to_token.jpg")
```
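Conceptually, `layer_index` just selects one layer out of the stack of per-layer attention maps. A minimal numpy sketch of that indexing (the shapes, sizes, and helper name are illustrative assumptions, not the package's internals):

```python
import numpy as np

# Hypothetical stack of per-layer attention maps, BERT-base-like sizes:
# shape (num_layers, num_heads, seq_len, seq_len).
num_layers, num_heads, seq_len = 12, 12, 8
attentions = np.random.rand(num_layers, num_heads, seq_len, seq_len)

def select_layer(attentions: np.ndarray, layer_index: int = -1) -> np.ndarray:
    """Pick one layer's attention maps; the last layer by default."""
    return attentions[layer_index]

last = select_layer(attentions)      # default: last layer
sixth = select_layer(attentions, 6)  # an explicit layer index
```

Each selected layer is a `(num_heads, seq_len, seq_len)` array, i.e. one token-to-token matrix per head.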
#### Attention matrices normalized across the head axis
You can specify the `order` used by `torch.linalg.norm` in the `__call__` and `compute` methods. By default, an L2 norm is applied.
```python
from transformers_visualizer import TokenToTokenNormalizedAttentions

visualizer = TokenToTokenNormalizedAttentions(model, tokenizer)
visualizer.compute(text).plot()
```
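"Normalized across the head axis" means the per-head matrices are collapsed into a single matrix with a vector norm over the head dimension. A small numpy sketch of that reduction (numpy's `ord` plays the role of the `order` mentioned above; shapes and names are assumptions, not the package's code):

```python
import numpy as np

# Hypothetical attention maps for one layer: (num_heads, seq_len, seq_len).
num_heads, seq_len = 12, 8
attn = np.random.rand(num_heads, seq_len, seq_len)

# Collapse the head axis with a vector norm. The default is an L2 norm,
# matching the package's stated default.
l2 = np.linalg.norm(attn, axis=0)         # L2 norm over heads
l1 = np.linalg.norm(attn, ord=1, axis=0)  # an alternative order
```

Either way, the result is a single `(seq_len, seq_len)` token-to-token matrix that can be plotted like any single head's map.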
## Plotting
The `plot` method can skip special tokens via the `skip_special_tokens` parameter, which defaults to `False`.
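Conceptually, skipping special tokens amounts to dropping their rows and columns from the attention matrix before plotting. A minimal numpy sketch of that filtering (the helper and token list are hypothetical, not the package's implementation):

```python
import numpy as np

# Hypothetical BERT-style tokenized input with special tokens.
tokens = ["[CLS]", "the", "model", "works", ".", "[SEP]"]
special = {"[CLS]", "[SEP]", "[PAD]"}
attn = np.random.rand(len(tokens), len(tokens))  # one token-to-token map

def drop_special(attn, tokens, special):
    """Remove rows/columns corresponding to special tokens."""
    keep = [i for i, t in enumerate(tokens) if t not in special]
    return attn[np.ix_(keep, keep)], [tokens[i] for i in keep]

filtered, kept_tokens = drop_special(attn, tokens, special)
```

The filtered matrix and token list are then what the tick labels and heatmap would show.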
You can use the following imports to use plotting functions directly.
```python
from transformers_visualizer.plotting import plot_token_to_token, plot_token_to_token_specific_dimension
```

These functions, as well as the `plot` method of a visualizer, accept the following parameters.
- `figsize (Tuple[int, int])`: Figsize of the plot. Defaults to (20, 20).
- `ticks_fontsize (int)`: Ticks fontsize. Defaults to 7.
- `title_fontsize (int)`: Title fontsize. Defaults to 9.
- `cmap (str)`: Colormap. Defaults to "viridis".
- `colorbar (bool)`: Display colorbars. Defaults to `True`.

## Upcoming features
- [x] Add an option to mask special tokens.
- [ ] Add an option to specify head/layer indices to plot.
- [ ] Add other plotting backends such as Plotly, Bokeh, Altair.
- [ ] Implement other visualizers such as [vector norm](https://arxiv.org/pdf/2004.10102.pdf).

## References
- [[1]][captum_part1] Captum's BERT tutorial (part 1)
- [[2]][captum_part2] Captum's BERT tutorial (part 2)

## Acknowledgements
- [Transformers Interpret](https://github.com/cdpierse/transformers-interpret), which inspired this project.
[Captum]: https://captum.ai/
[captum_part1]: https://captum.ai/tutorials/Bert_SQUAD_Interpret
[Captum_part2]: https://captum.ai/tutorials/Bert_SQUAD_Interpret2