https://github.com/fork123aniket/model-agnostic-graph-explainability-from-scratch
Implementation of Model-Agnostic Graph Explainability Technique from Scratch in PyTorch
- Host: GitHub
- URL: https://github.com/fork123aniket/model-agnostic-graph-explainability-from-scratch
- Owner: fork123aniket
- License: mit
- Created: 2023-03-31T07:38:33.000Z (about 2 years ago)
- Default Branch: main
- Last Pushed: 2023-04-30T07:20:48.000Z (about 2 years ago)
- Last Synced: 2025-01-16T06:25:25.753Z (4 months ago)
- Topics: explainability, explainable-ai, explainable-machine-learning, explainable-ml, explainer, explanations, graph-neural-networks, pytorch, pytorch-geometric, pytorch-implementation
- Language: Python
- Homepage: https://pytorch-geometric.readthedocs.io/en/latest/modules/contrib.html#torch_geometric.contrib.explain.GraphMaskExplainer
- Size: 3.08 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Model-agnostic Graph Explainability from Scratch
This repository holds a graph explainability solution that extends the [***GraphMask Explainer***](https://arxiv.org/pdf/2010.00577.pdf) approach to both heterogeneous and homogeneous graphs, making the functionality ***model-agnostic***. Moreover, this implementation provides both a ***node feature-level*** and an ***edge-level attribute mask (explanation subgraph)***, each of which is a ***binary-valued*** vector. All `0` values of a ***mask vector*** mark the features (or edges) of the graph that do not affect the corresponding predictions, whereas features (or edges) associated with `1` values are considered to strongly influence the predictions produced by the original Graph Neural Network (GNN) model.
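As a concrete illustration of how such masks are applied, the snippet below builds a toy graph and filters it with made-up ***binary-valued*** feature and edge masks; the tensors are purely illustrative and are not produced by the repository's code.

```python
import torch

# Toy graph: 4 nodes with 3 features each and 4 directed edges.
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])

# Hypothetical masks of the kind described above (illustrative values only).
node_feat_mask = torch.tensor([1., 0., 1.])   # feature 1 deemed irrelevant
edge_mask = torch.tensor([0., 1., 1., 0.])    # edges 0 and 3 deemed irrelevant

x_explained = x * node_feat_mask                        # zero out irrelevant features
explanation_subgraph = edge_index[:, edge_mask.bool()]  # keep only influential edges
```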
## Requirements
- `PyTorch Geometric`
- `PyTorch`
- `numpy`
- `scikit-learn`
- `tqdm`
## Usage
### Data
This implementation of ***GraphMask Explainer*** demonstrates explainability examples for the ***GCN***, ***GAT***, and ***RGCN*** *layer types* on ***Node Classification (NC)***, ***Graph Classification (GC)***, and ***Link Prediction (LP)*** tasks:
| Layer Type | Task | Dataset|
| ---------- |:----:|:------:|
| GCN | NC | Cora |
| GCN | GC | Enzymes |
| GAT | NC | Cora |
| GAT | GC | Enzymes |
| GAT | LP | Cora |
| RGCN | NC | AIFB |
| RGCN | GC | Enzymes |
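The datasets above map onto standard PyTorch Geometric loaders. The snippet below is a hedged sketch of how they could be loaded; the `root` paths are illustrative and not taken from the repository.

```python
from torch_geometric.datasets import Planetoid, TUDataset, Entities

cora = Planetoid(root='data/Planetoid', name='Cora')        # NC / LP with GCN or GAT
enzymes = TUDataset(root='data/TUDataset', name='ENZYMES')  # GC with GCN, GAT, or RGCN
aifb = Entities(root='data/Entities', name='AIFB')          # NC with RGCN (relational graph)
```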
### Training and Testing
- To see the ***model-agnostic*** explainability layer's implementation, check `graphmask_explainer.py`.
- To train the ***GraphMask Explainer*** and generate explanations for any of the aforementioned tasks, run `graphmask_explainer_example.py` (a sketch of the equivalent PyTorch Geometric interface follows this list).
- All hyperparameter settings can be tweaked as needed by altering the corresponding values in `graphmask_explainer.py` and `graphmask_explainer_example.py`.
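For reference, the project homepage links to the `GraphMaskExplainer` that later landed in PyTorch Geometric's `contrib` module. The sketch below shows how that PyG interface can be invoked for the GCN/Cora node-classification case; it is a minimal illustration assuming a recent PyTorch Geometric version that ships `torch_geometric.contrib.explain.GraphMaskExplainer`, not the repository's own training script.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Planetoid
from torch_geometric.nn import GCNConv
from torch_geometric.explain import Explainer
from torch_geometric.contrib.explain import GraphMaskExplainer

dataset = Planetoid(root='data/Planetoid', name='Cora')
data = dataset[0]

class GCN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(dataset.num_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return F.log_softmax(self.conv2(x, edge_index), dim=-1)

model = GCN()  # in practice, train the model before explaining it

explainer = Explainer(
    model=model,
    algorithm=GraphMaskExplainer(2, epochs=5),  # 2 = number of GNN layers; epochs kept small here
    explanation_type='model',
    node_mask_type='attributes',
    edge_mask_type='object',
    model_config=dict(
        mode='multiclass_classification',
        task_level='node',
        return_type='log_probs',
    ),
)

# Explain the prediction for node 20 (the node index used in the Results section).
explanation = explainer(data.x, data.edge_index, index=20)
print(explanation.node_mask.shape, explanation.edge_mask.shape)
```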
## Results

| NC Task Explanation Subgraph (AIFB Dataset) | GC Task Explanation Subgraph (Enzymes Dataset) |
| ------------------------------------------- |:----------------------------------------------:|
| *(figure)* | *(figure)* |

These figures show the ***output subgraphs***, in which all irrelevant edges (those with `0` values in the ***binary-valued mask***) are colored `grey`, whereas all relevant edges (those with `1` values in the generated ***binary-valued mask***) are drawn in `black`. Note that for the ***NC*** task, the ***output subgraph*** contains only those nodes that lie within the ***3-hop neighborhood*** of the parent node with index `20` and share the parent node's relation type, whereas for the ***GC*** task, the ***output subgraph*** explains the graph with index `10` in the ***Enzymes*** dataset.
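A rough sketch of how such a grey/black edge coloring could be reproduced with `networkx` is shown below; the edge list and mask are placeholders, not the actual AIFB or Enzymes explanation subgraphs.

```python
import matplotlib.pyplot as plt
import networkx as nx
import torch

# Placeholder explanation subgraph and its binary edge mask.
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 0]])
edge_mask = torch.tensor([0., 1., 1., 0.])

G = nx.DiGraph()
G.add_edges_from(edge_index.t().tolist())

# Map each edge to grey (mask == 0, irrelevant) or black (mask == 1, relevant).
edge_colors = {tuple(e): ('black' if m > 0 else 'grey')
               for e, m in zip(edge_index.t().tolist(), edge_mask.tolist())}

nx.draw(G, with_labels=True,
        edge_color=[edge_colors[e] for e in G.edges()],
        node_color='lightblue')
plt.show()
```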