[![Awesome](fig/awesome.svg)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI)
[![Maintenance](https://img.shields.io/badge/Maintained%3F-YES-green.svg)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/graphs/commit-activity)
![](https://img.shields.io/github/license/wangyongjie-ntu/Awesome-explainable-AI)
[![GitHub stars](https://img.shields.io/github/stars/wangyongjie-ntu/Awesome-explainable-AI?color=blue&style=plastic)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/stargazers)
[![GitHub watchers](https://img.shields.io/github/watchers/wangyongjie-ntu/Awesome-explainable-AI?color=yellow&style=plastic)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/watchers)
[![GitHub forks](https://img.shields.io/github/forks/wangyongjie-ntu/Awesome-explainable-AI?color=red&style=plastic)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/network/members)
[![GitHub Pull Requests](https://img.shields.io/github/issues-pr/wangyongjie-ntu/Awesome-explainable-AI?color=green&style=plastic)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/pulls)
[![GitHub Contributors](https://img.shields.io/github/contributors/wangyongjie-ntu/Awesome-explainable-AI?color=green&style=plastic)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/graphs/contributors)

# Awesome-explainable-AI

This repository collects frontier research on explainable AI (XAI), a rapidly growing topic. The figure below shows the publication trend for interpretable/explainable AI: work in this area is booming.
![Trends](https://github.com/iversonicter/awesome-explainable-AI/blob/master/fig/Trend.png)

The figure below illustrates several use cases of XAI. We also divide the publications into several categories based on this figure. Organising these papers well is challenging, so any suggestions are welcome!

![Use cases](https://github.com/iversonicter/awesome-explainable-AI/blob/master/fig/use_cases.png)

## Survey Papers

[Benchmarking and Survey of Explanation Methods for Black Box Models](https://link.springer.com/article/10.1007/s10618-023-00933-9), DMKD 2023

[Post-hoc Interpretability for Neural NLP: A Survey](https://dl.acm.org/doi/full/10.1145/3546577), ACM Computing Surveys 2022

[Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation](https://arxiv.org/pdf/2305.02231.pdf), Arxiv 2023

[Explainable Artificial Intelligence (XAI): What we know and what is left to attain Trustworthy Artificial Intelligence](https://www.sciencedirect.com/science/article/pii/S1566253523001148), Information Fusion 2023

[Explainable Biometrics in the Age of Deep Learning](https://arxiv.org/abs/2208.09500), Arxiv preprint 2022

[Explainable AI (XAI): Core Ideas, Techniques and Solutions](https://dl.acm.org/doi/abs/10.1145/3561048), ACM Computing Surveys 2022

[A Review of Taxonomies of Explainable Artificial Intelligence (XAI) Methods](https://facctconference.org/static/pdfs_2022/facct22-173.pdf), FAccT 2022

[From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI](https://arxiv.org/pdf/2201.08164.pdf), ArXiv preprint 2022. [Corresponding website with collection of XAI methods](https://utwente-dmb.github.io/xai-papers/)

[Interpretable machine learning: Fundamental principles and 10 grand challenges](https://projecteuclid.org/journals/statistics-surveys/volume-16/issue-none/Interpretable-machine-learning-Fundamental-principles-and-10-grand-challenges/10.1214/21-SS133.full), Statistics Surveys 2022

[Teach Me to Explain: A Review of Datasets for Explainable Natural Language Processing](https://arxiv.org/abs/2102.12060), NeurIPS 2021

[Pitfalls of Explainable ML: An Industry Perspective](https://arxiv.org/pdf/2106.07758.pdf), Arxiv preprint 2021

[Explainable Machine Learning in Deployment](https://dl.acm.org/doi/pdf/10.1145/3351095.3375624), FAT 2020

[The elephant in the interpretability room: Why use attention as explanation when we have saliency methods](https://arxiv.org/abs/2010.05607), EMNLP Workshop 2020

[A Survey of the State of Explainable AI for Natural Language Processing](https://arxiv.org/abs/2010.00711), AACL-IJCNLP 2020

[Interpretable Machine Learning – A Brief History, State-of-the-Art and Challenges](https://link.springer.com/chapter/10.1007/978-3-030-65965-3_28), Communications in Computer and Information Science 2020

[A brief survey of visualization methods for deep learning models from the perspective of Explainable AI](https://www.macs.hw.ac.uk/~ic14/IoannisChalkiadakis_RRR.pdf), Information Visualization 2020

[Explaining Explanations in AI](https://arxiv.org/pdf/1811.01439.pdf), ACM FAT 2019

[Machine learning interpretability: A survey on methods and metrics](https://www.mdpi.com/2079-9292/8/8/832), Electronics, 2019

[A Survey on Explainable Artificial Intelligence (XAI): Towards Medical XAI](http://arxiv.org/abs/1907.07374), IEEE TNNLS 2020

[Interpretable machine learning: definitions, methods, and applications](https://arxiv.org/pdf/1901.04592.pdf), Arxiv preprint 2019

[Visual Analytics in Deep Learning: An Interrogative Survey for the Next Frontiers](https://ieeexplore.ieee.org/document/8371286), IEEE Transactions on Visualization and Computer Graphics, 2019

[Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI](http://arxiv.org/abs/1910.10045), Information Fusion, 2019

[Explanation in artificial intelligence: Insights from the social sciences](https://www.sciencedirect.com/science/article/abs/pii/S0004370218305988), Artificial Intelligence 2019

[Evaluating Explanation Without Ground Truth in Interpretable Machine Learning](https://arxiv.org/pdf/1907.06831v1.pdf), Arxiv preprint 2019

[Explanation in Human-AI Systems: A Literature Meta-Review Synopsis of Key Ideas and Publications and Bibliography for Explainable AI](https://arxiv.org/pdf/1902.01876.pdf), DARPA XAI Literature Review 2019

[A survey of methods for explaining black box models](http://arxiv.org/abs/1802.01933), ACM Computing Surveys, 2018

[Explaining Explanations: An Overview of Interpretability of Machine Learning](https://arxiv.org/abs/1806.00069), IEEE DSAA, 2018

[Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)](https://ieeexplore.ieee.org/document/8466590/), IEEE Access, 2018

[Explainable artificial intelligence: A survey](https://ieeexplore.ieee.org/document/8400040/), MIPRO, 2018

[The Mythos of Model Interpretability: In machine learning, the concept of interpretability is both important and slippery](https://arxiv.org/abs/1606.03490v3), ACM Queue 2018

[How Convolutional Neural Networks See the World — A Survey of Convolutional Neural Network Visualization Methods](https://arxiv.org/pdf/1804.11191.pdf), Mathematical Foundations of Computing 2018

[Explainable Artificial Intelligence: Understanding, Visualizing and Interpreting Deep Learning Models](https://arxiv.org/abs/1708.08296), Arxiv 2017

[Towards A Rigorous Science of Interpretable Machine Learning](https://arxiv.org/pdf/1702.08608.pdf), Arxiv preprint 2017

[Explaining Explanation, Part 1: Theoretical Foundations](https://ieeexplore.ieee.org/abstract/document/7933919), IEEE Intelligent System 2017

[Explaining Explanation, Part 2: Empirical Foundations](https://ieeexplore.ieee.org/abstract/document/8012316), IEEE Intelligent System 2017

[Explaining Explanation, Part 3: The Causal Landscape](https://ieeexplore.ieee.org/abstract/document/8378482), IEEE Intelligent System 2017

[Explaining Explanation, Part 4: A Deep Dive on Deep Nets](https://ieeexplore.ieee.org/abstract/document/8423529), IEEE Intelligent System 2017

[An accurate comparison of methods for quantifying variable importance in artificial neural networks using simulated data](https://depts.washington.edu/oldenlab/wordpress/wp-content/uploads/2013/03/EcologicalModelling_2004.pdf), Ecological Modelling 2004

[Review and comparison of methods to study the contribution of variables in artificial neural network models](http://sovan.lek.free.fr/publi/160-3%20Gevrey.pdf), Ecological Modelling 2003

## Books

[Explainable Artificial Intelligence (xAI) Approaches and Deep Meta-Learning Models](https://www.intechopen.com/online-first/explainable-artificial-intelligence-xai-approaches-and-deep-meta-learning-models), Advances in Deep Learning Chapter 2020

[Explainable AI: Interpreting, Explaining and Visualizing Deep Learning](http://link.springer.com/10.1007/978-3-030-28954-6), Springer 2019

[Explanation in Artificial Intelligence: Insights from the Social Sciences](https://arxiv.org/pdf/1706.07269.pdf), Arxiv preprint 2017

[Visualizations of Deep Neural Networks in Computer Vision: A Survey](https://link.springer.com/chapter/10.1007/978-3-319-54024-5_6), Springer Transparent Data Mining for Big and Small Data 2017

[Explanatory Model Analysis: Explore, Explain and Examine Predictive Models](https://pbiecek.github.io/ema/)

[Interpretable Machine Learning: A Guide for Making Black Box Models Explainable](https://christophm.github.io/interpretable-ml-book/)

[Limitations of Interpretable Machine Learning Methods](https://compstat-lmu.github.io/iml_methods_limitations/index.html)

[An Introduction to Machine Learning Interpretability: An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI](https://www.h2o.ai/wp-content/uploads/2019/08/An-Introduction-to-Machine-Learning-Interpretability-Second-Edition.pdf)

## Open Courses

[Interpretability and Explainability in Machine Learning, Harvard University](https://interpretable-ml-class.github.io/)

## Papers

We mainly follow the taxonomy of the [survey paper](http://arxiv.org/abs/1802.01933) and divide the XAI/XML papers into the following branches.

* [1. Transparent Model Design](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/tree/master/transparent_model)
* [2. Post-Explanation](https://github.com/wangyongjie-ntu/Awesome-explainable-AI)
    * [2.1 Model Explanation (Model-level)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/tree/master/model_explanation)
    * [2.2 Model Inspection](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/tree/master/model_inspection)
    * [2.3 Outcome Explanation](https://github.com/wangyongjie-ntu/Awesome-explainable-AI)
        * [2.3.1 Feature Attribution/Importance (Saliency Map)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/tree/master/feature_attribution)
    * [2.4 Neuron Importance](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/tree/master/neuron_importance)
    * [2.5 Example-based Explanations](https://github.com/wangyongjie-ntu/Awesome-explainable-AI)
        * [2.5.1 Counterfactual Explanations (Recourse)](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/tree/master/counterfactuals)
        * [2.5.2 Influential Instances](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/tree/master/influential_instances)
        * [2.5.3 Prototypes & Criticisms](https://github.com/wangyongjie-ntu/Awesome-explainable-AI/tree/master/prototype_criticisms)

### Evaluation Methods

[Faithfulness Tests for Natural Language Explanations](https://aclanthology.org/2023.acl-short.25.pdf), ACL 2023

[OpenXAI: Towards a Transparent Evaluation of Model Explanations](https://arxiv.org/pdf/2206.11104.pdf), Arxiv 2022

[When Can Models Learn From Explanations? A Formal Framework for Understanding the Roles of Explanation Data](https://aclanthology.org/2022.lnls-1.4/), ACL 2022

[From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI](https://arxiv.org/pdf/2201.08164.pdf), ArXiv preprint 2022. [Corresponding website with collection of XAI methods](https://utwente-dmb.github.io/xai-papers/)

[Towards Better Understanding Attribution Methods](https://openaccess.thecvf.com/content/CVPR2022/html/Rao_Towards_Better_Understanding_Attribution_Methods_CVPR_2022_paper.html), CVPR 2022

[What Do You See? Evaluation of Explainable Artificial Intelligence (XAI) Interpretability through Neural Backdoors](https://arxiv.org/pdf/2009.10639.pdf), KDD 2021

[Evaluations and Methods for Explanation through Robustness Analysis](https://arxiv.org/pdf/2006.00442.pdf), arxiv preprint 2020

[Evaluating and Aggregating Feature-based Model Explanations](https://arxiv.org/abs/2005.00631), IJCAI 2020

[Sanity Checks for Saliency Metrics](https://aaai.org/ojs/index.php/AAAI/article/view/6064), AAAI 2020

[A benchmark for interpretability methods in deep neural networks](https://papers.nips.cc/paper/9167-a-benchmark-for-interpretability-methods-in-deep-neural-networks.pdf), NeurIPS 2019

[What Do Different Evaluation Metrics Tell Us About Saliency Models?](https://ieeexplore.ieee.org/document/8315047), TPAMI 2018

[Methods for interpreting and understanding deep neural networks](https://www.sciencedirect.com/science/article/pii/S1051200417302385), Digital Signal Processing 2017

[Evaluating the visualization of what a Deep Neural Network has learned](http://arxiv.org/abs/1509.06321), IEEE Transactions on Neural Networks and Learning Systems 2015

## Python Libraries (sorted in alphabetical order)

AIF360: [https://github.com/Trusted-AI/AIF360](https://github.com/Trusted-AI/AIF360), ![](https://img.shields.io/github/stars/Trusted-AI/AIF360.svg?style=social)

AIX360: [https://github.com/IBM/AIX360](https://github.com/IBM/AIX360), ![](https://img.shields.io/github/stars/IBM/AIX360.svg?style=social)

Anchor: [https://github.com/marcotcr/anchor](https://github.com/marcotcr/anchor), scikit-learn ![](https://img.shields.io/github/stars/marcotcr/anchor?style=social)

Alibi: [https://github.com/SeldonIO/alibi](https://github.com/SeldonIO/alibi) ![](https://img.shields.io/github/stars/SeldonIO/alibi.svg?style=social)
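
For a quick sense of the Alibi API, here is a minimal anchor-explanation sketch around a scikit-learn classifier (the dataset and model below are arbitrary placeholders, not part of this list):

```python
# Sketch (assumed setup): anchor rules for a scikit-learn classifier with Alibi.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from alibi.explainers import AnchorTabular

data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

# The explainer only needs a prediction function and feature names.
explainer = AnchorTabular(clf.predict, feature_names=data.feature_names)
explainer.fit(data.data, disc_perc=(25, 50, 75))  # discretize numerical features

explanation = explainer.explain(data.data[0], threshold=0.95)
print("Anchor:", " AND ".join(explanation.anchor))  # human-readable rule
print("Precision:", explanation.precision)          # how reliably the rule holds
```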

Alibi-detect: [https://github.com/SeldonIO/alibi-detect](https://github.com/SeldonIO/alibi-detect) ![](https://img.shields.io/github/stars/SeldonIO/alibi-detect?style=social)

BlackBoxAuditing: [https://github.com/algofairness/BlackBoxAuditing](https://github.com/algofairness/BlackBoxAuditing), scikit-learn ![](https://img.shields.io/github/stars/algofairness/BlackBoxAuditing?style=social)

Brain2020: [https://github.com/vkola-lab/brain2020](https://github.com/vkola-lab/brain2020), Pytorch, 3D Brain MRI ![](https://img.shields.io/github/stars/vkola-lab/brain2020?style=social)

Boruta-Shap: [https://github.com/Ekeany/Boruta-Shap](https://github.com/Ekeany/Boruta-Shap), scikit-learn ![](https://img.shields.io/github/stars/Ekeany/Boruta-Shap?style=social)

casme: [https://github.com/kondiz/casme](https://github.com/kondiz/casme), Pytorch ![](https://img.shields.io/github/stars/kondiz/casme?style=social)

Captum: [https://github.com/pytorch/captum](https://github.com/pytorch/captum), Pytorch, ![](https://img.shields.io/github/stars/pytorch/captum.svg?style=social)
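
To give a feel for Captum's attribution API, below is a minimal Integrated Gradients sketch on a toy PyTorch model (the model and inputs are placeholders):

```python
# Sketch (assumed setup): Integrated Gradients attributions with Captum.
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

inputs = torch.rand(5, 4, requires_grad=True)  # 5 samples, 4 features
baselines = torch.zeros_like(inputs)           # reference point for the path integral

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    inputs, baselines=baselines, target=0, return_convergence_delta=True
)
print(attributions.shape)  # per-feature attributions, same shape as inputs
print(delta)               # completeness-check error per sample
```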

cnn-exposed: [https://github.com/idealo/cnn-exposed](https://github.com/idealo/cnn-exposed), Tensorflow ![](https://img.shields.io/github/stars/idealo/cnn-exposed?style=social)

ClusterShapley: [https://github.com/wilsonjr/ClusterShapley](https://github.com/wilsonjr/ClusterShapley), Sklearn ![](https://img.shields.io/github/stars/wilsonjr/ClusterShapley.svg?style=social)

DALEX: [https://github.com/ModelOriented/DALEX](https://github.com/ModelOriented/DALEX), ![](https://img.shields.io/github/stars/ModelOriented/DALEX.svg?style=social)

Deeplift: [https://github.com/kundajelab/deeplift](https://github.com/kundajelab/deeplift), Tensorflow, Keras ![](https://img.shields.io/github/stars/kundajelab/deeplift.svg?style=social)

DeepExplain: [https://github.com/marcoancona/DeepExplain](https://github.com/marcoancona/DeepExplain), Tensorflow, Keras ![](https://img.shields.io/github/stars/marcoancona/DeepExplain?style=social)

Deep Visualization Toolbox: [https://github.com/yosinski/deep-visualization-toolbox](https://github.com/yosinski/deep-visualization-toolbox), Caffe, ![](https://img.shields.io/github/stars/yosinski/deep-visualization-toolbox?style=social)

dianna: [https://github.com/dianna-ai/dianna](https://github.com/dianna-ai/dianna), ONNX, ![](https://img.shields.io/github/stars/dianna-ai/dianna?style=social)

Eli5: [https://github.com/TeamHG-Memex/eli5](https://github.com/TeamHG-Memex/eli5), Scikit-learn, Keras, xgboost, lightGBM, catboost, etc. ![](https://img.shields.io/github/stars/TeamHG-Memex/eli5.svg?style=social)

explabox: [https://github.com/MarcelRobeer/explabox](https://github.com/MarcelRobeer/explabox), ONNX, Scikit-learn, Pytorch, Keras, Tensorflow, Huggingface ![](https://img.shields.io/github/stars/MarcelRobeer/explabox?style=social)

explainx: [https://github.com/explainX/explainx](https://github.com/explainX/explainx), xgboost, catboost ![](https://img.shields.io/github/stars/explainX/explainx?style=social)

ExplainaBoard: [https://github.com/neulab/ExplainaBoard](https://github.com/neulab/ExplainaBoard), ![](https://img.shields.io/github/stars/neulab/ExplainaBoard?style=social)

ExKMC: [https://github.com/navefr/ExKMC](https://github.com/navefr/ExKMC), Python, ![](https://img.shields.io/github/stars/navefr/ExKMC?style=social)

Facet: [https://github.com/BCG-Gamma/facet](https://github.com/BCG-Gamma/facet), sklearn, ![](https://img.shields.io/github/stars/BCG-Gamma/facet?style=social)

Grad-cam-Tensorflow: [https://github.com/insikk/Grad-CAM-tensorflow](https://github.com/insikk/Grad-CAM-tensorflow), Tensorflow ![](https://img.shields.io/github/stars/insikk/Grad-CAM-tensorflow?style=social)

GRACE: [https://github.com/lethaiq/GRACE_KDD20](https://github.com/lethaiq/GRACE_KDD20), Pytorch

Innvestigate: [https://github.com/albermax/innvestigate](https://github.com/albermax/innvestigate), tensorflow, theano, cntk, Keras ![](https://img.shields.io/github/stars/albermax/innvestigate.svg?style=social)

imodels: [https://github.com/csinva/imodels](https://github.com/csinva/imodels), ![](https://img.shields.io/github/stars/csinva/imodels.svg?style=social)

InterpretML: [https://github.com/interpretml/interpret](https://github.com/interpretml/interpret) ![](https://img.shields.io/github/stars/InterpretML/interpret.svg?style=social)
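
A minimal sketch of InterpretML's glassbox workflow, training an Explainable Boosting Machine on a scikit-learn toy dataset (the dataset choice is an arbitrary assumption):

```python
# Sketch (assumed setup): glassbox model with InterpretML's EBM.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier().fit(X_train, y_train)

show(ebm.explain_global())                       # per-feature shape functions
show(ebm.explain_local(X_test[:5], y_test[:5]))  # per-prediction explanations
```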

interpret-community: [https://github.com/interpretml/interpret-community](https://github.com/interpretml/interpret-community) ![](https://img.shields.io/github/stars/InterpretML/interpret-community.svg?style=social)

Integrated-Gradients: [https://github.com/ankurtaly/Integrated-Gradients](https://github.com/ankurtaly/Integrated-Gradients), Tensorflow ![](https://img.shields.io/github/stars/ankurtaly/Integrated-Gradients?style=social)

Keras-grad-cam: [https://github.com/jacobgil/keras-grad-cam](https://github.com/jacobgil/keras-grad-cam), Keras ![](https://img.shields.io/github/stars/jacobgil/keras-grad-cam?style=social)

Keras-vis: [https://github.com/raghakot/keras-vis](https://github.com/raghakot/keras-vis), Keras ![](https://img.shields.io/github/stars/raghakot/keras-vis?style=social)

keract: [https://github.com/philipperemy/keract](https://github.com/philipperemy/keract), Keras ![](https://img.shields.io/github/stars/philipperemy/keract?style=social)

Lucid: [https://github.com/tensorflow/lucid](https://github.com/tensorflow/lucid), Tensorflow ![](https://img.shields.io/github/stars/tensorflow/lucid.svg?style=social)

LIT: [https://github.com/PAIR-code/lit](https://github.com/PAIR-code/lit), Tensorflow, specified for NLP Task ![](https://img.shields.io/github/stars/PAIR-code/lit?style=social)

Lime: [https://github.com/marcotcr/lime](https://github.com/marcotcr/lime), model-agnostic (works with nearly any Python ML framework) ![](https://img.shields.io/github/stars/marcotcr/lime.svg?style=social)
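
A minimal LIME tabular sketch, explaining a single prediction of a scikit-learn classifier (model and data are placeholders):

```python
# Sketch (assumed setup): local explanation of one prediction with LIME.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
exp = explainer.explain_instance(data.data[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, local weight) pairs
```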

LOFO: [https://github.com/aerdem4/lofo-importance](https://github.com/aerdem4/lofo-importance), scikit-learn ![](https://img.shields.io/github/stars/aerdem4/lofo-importance?style=social)

modelStudio: [https://github.com/ModelOriented/modelStudio](https://github.com/ModelOriented/modelStudio), Keras, Tensorflow, xgboost, lightgbm, h2o ![](https://img.shields.io/github/stars/ModelOriented/modelStudio?style=social)

M3d-Cam: [https://github.com/MECLabTUDA/M3d-Cam](https://github.com/MECLabTUDA/M3d-Cam), PyTorch, ![](https://img.shields.io/github/stars/MECLabTUDA/M3d-Cam?style=social)

NeuroX: [https://github.com/fdalvi/NeuroX](https://github.com/fdalvi/NeuroX), PyTorch, ![](https://img.shields.io/github/stars/fdalvi/NeuroX?style=social)

neural-backed-decision-trees: [https://github.com/alvinwan/neural-backed-decision-trees](https://github.com/alvinwan/neural-backed-decision-trees), Pytorch ![](https://img.shields.io/github/stars/alvinwan/neural-backed-decision-trees?style=social)

Outliertree: [https://github.com/david-cortes/outliertree](https://github.com/david-cortes/outliertree), (Python, R, C++), ![](https://img.shields.io/github/stars/david-cortes/outliertree?style=social)

InterpretDL: [https://github.com/PaddlePaddle/InterpretDL](https://github.com/PaddlePaddle/InterpretDL), (Python PaddlePaddle), ![](https://img.shields.io/github/stars/PaddlePaddle/InterpretDL?style=social)

polyjuice: [https://github.com/tongshuangwu/polyjuice](https://github.com/tongshuangwu/polyjuice), (Pytorch), ![](https://img.shields.io/github/stars/tongshuangwu/polyjuice?style=social)

pytorch-cnn-visualizations: [https://github.com/utkuozbulak/pytorch-cnn-visualizations](https://github.com/utkuozbulak/pytorch-cnn-visualizations), Pytorch ![](https://img.shields.io/github/stars/utkuozbulak/pytorch-cnn-visualizations?style=social)

Pytorch-grad-cam: [https://github.com/jacobgil/pytorch-grad-cam](https://github.com/jacobgil/pytorch-grad-cam), Pytorch ![](https://img.shields.io/github/stars/jacobgil/pytorch-grad-cam?style=social)

PDPbox: [https://github.com/SauceCat/PDPbox](https://github.com/SauceCat/PDPbox), Scikit-learn ![](https://img.shields.io/github/stars/SauceCat/PDPbox?style=social)

py-ciu: [https://github.com/TimKam/py-ciu/](https://github.com/TimKam/py-ciu/), ![](https://img.shields.io/github/stars/TimKam/py-ciu/?style=social)

PyCEbox: [https://github.com/AustinRochford/PyCEbox](https://github.com/AustinRochford/PyCEbox) ![](https://img.shields.io/github/stars/AustinRochford/PyCEbox?style=social)

path_explain: [https://github.com/suinleelab/path_explain](https://github.com/suinleelab/path_explain), Tensorflow ![](https://img.shields.io/github/stars/suinleelab/path_explain?style=social)

Quantus: [https://github.com/understandable-machine-intelligence-lab/Quantus](https://github.com/understandable-machine-intelligence-lab/Quantus), Tensorflow, Pytorch ![](https://img.shields.io/github/stars/understandable-machine-intelligence-lab/Quantus?style=social)

rulefit: [https://github.com/christophM/rulefit](https://github.com/christophM/rulefit), ![](https://img.shields.io/github/stars/christophM/rulefit?style=social)

rulematrix: [https://github.com/rulematrix/rule-matrix-py](https://github.com/rulematrix/rule-matrix-py), ![](https://img.shields.io/github/stars/rulematrix/rule-matrix-py?style=social)

Saliency: [https://github.com/PAIR-code/saliency](https://github.com/PAIR-code/saliency), Tensorflow ![](https://img.shields.io/github/stars/PAIR-code/saliency?style=social)

SHAP: [https://github.com/slundberg/shap](https://github.com/slundberg/shap), model-agnostic (works with nearly any Python ML framework) ![](https://img.shields.io/github/stars/slundberg/shap.svg?style=social)
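
A minimal SHAP sketch on a tree ensemble; `TreeExplainer` is the fast path for tree models (the regression dataset below is an arbitrary assumption):

```python
# Sketch (assumed setup): SHAP values for a random forest regressor.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])  # (samples, features) attribution matrix

shap.summary_plot(shap_values, X.iloc[:100])  # global overview; requires matplotlib
```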

Shapley: [https://github.com/benedekrozemberczki/shapley](https://github.com/benedekrozemberczki/shapley), ![](https://img.shields.io/github/stars/benedekrozemberczki/shapley.svg?style=social)

Skater: [https://github.com/oracle/Skater](https://github.com/oracle/Skater) ![](https://img.shields.io/github/stars/oracle/Skater.svg?style=social)

TCAV: [https://github.com/tensorflow/tcav](https://github.com/tensorflow/tcav), Tensorflow, scikit-learn ![](https://img.shields.io/github/stars/tensorflow/tcav?style=social)

skope-rules: [https://github.com/scikit-learn-contrib/skope-rules](https://github.com/scikit-learn-contrib/skope-rules), Scikit-learn ![](https://img.shields.io/github/stars/scikit-learn-contrib/skope-rules?style=social)

TensorWatch: [https://github.com/microsoft/tensorwatch.git](https://github.com/microsoft/tensorwatch.git), Tensorflow ![](https://img.shields.io/github/stars/microsoft/tensorwatch?style=social)

tf-explain: [https://github.com/sicara/tf-explain](https://github.com/sicara/tf-explain), Tensorflow ![](https://img.shields.io/github/stars/sicara/tf-explain?style=social)

Treeinterpreter: [https://github.com/andosa/treeinterpreter](https://github.com/andosa/treeinterpreter), scikit-learn, ![](https://img.shields.io/github/stars/andosa/treeinterpreter?style=social)
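
A minimal treeinterpreter sketch, decomposing each tree-ensemble prediction into a bias term plus per-feature contributions (model and data are placeholders):

```python
# Sketch (assumed setup): decomposing random forest predictions with treeinterpreter.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from treeinterpreter import treeinterpreter as ti

X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

prediction, bias, contributions = ti.predict(model, X[:5])
# For each sample: prediction ≈ bias + sum of per-feature contributions.
print(prediction[0], bias[0] + contributions[0].sum())
```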

torch-cam: [https://github.com/frgfm/torch-cam](https://github.com/frgfm/torch-cam), Pytorch, ![](https://img.shields.io/github/stars/frgfm/torch-cam?style=social)

WeightWatcher: [https://github.com/CalculatedContent/WeightWatcher](https://github.com/CalculatedContent/WeightWatcher), Keras, Pytorch ![](https://img.shields.io/github/stars/CalculatedContent/WeightWatcher?style=social)

What-if-tool: [https://github.com/PAIR-code/what-if-tool](https://github.com/PAIR-code/what-if-tool), Tensorflow![](https://img.shields.io/github/stars/PAIR-code/what-if-tool?style=social)

XAI: [https://github.com/EthicalML/xai](https://github.com/EthicalML/xai), scikit-learn ![](https://img.shields.io/github/stars/EthicalML/xai?style=social)

Xplique: [https://github.com/deel-ai/xplique](https://github.com/deel-ai/xplique), Tensorflow, ![](https://img.shields.io/github/stars/deel-ai/xplique?style=social)

## Related Repositories

[https://github.com/jphall663/awesome-machine-learning-interpretability](https://github.com/jphall663/awesome-machine-learning-interpretability), ![](https://img.shields.io/github/stars/jphall663/awesome-machine-learning-interpretability?style=social)

[https://github.com/lopusz/awesome-interpretable-machine-learning](https://github.com/lopusz/awesome-interpretable-machine-learning), ![](https://img.shields.io/github/stars/lopusz/awesome-interpretable-machine-learning?style=social)

[https://github.com/pbiecek/xai_resources](https://github.com/pbiecek/xai_resources), ![](https://img.shields.io/github/stars/pbiecek/xai_resources?style=social)

[https://github.com/h2oai/mli-resources](https://github.com/h2oai/mli-resources), ![](https://img.shields.io/github/stars/h2oai/mli-resources?style=social)

[https://github.com/AstraZeneca/awesome-explainable-graph-reasoning](https://github.com/AstraZeneca/awesome-explainable-graph-reasoning), ![](https://img.shields.io/github/stars/AstraZeneca/awesome-explainable-graph-reasoning?style=social)

[https://github.com/utwente-dmb/xai-papers](https://github.com/utwente-dmb/xai-papers), ![](https://img.shields.io/github/stars/utwente-dmb/xai-papers?style=social)

[https://github.com/samzabdiel/XAI](https://github.com/samzabdiel/XAI), ![](https://img.shields.io/github/stars/samzabdiel/XAI?style=social)

## Acknowledgements

We need your help to re-organize and refine the current taxonomy. Thank you very much!

Contributions are very welcome: adding more works related to XAI/XML, categorizing the uncategorized papers, or anything else that enriches this repo.

If you have any questions, feel free to drop me an email ([email protected]). I am always happy to discuss.

## Stargazers over time

[![Stargazers over time](https://starchart.cc/wangyongjie-ntu/Awesome-explainable-AI.svg)](https://starchart.cc/wangyongjie-ntu/Awesome-explainable-AI)