Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/lopusz/awesome-interpretable-machine-learning
List: awesome-interpretable-machine-learning
data-science explainable-ai interpretable-ai interpretable-machine-learning interpretable-ml machine-learning xai
Last synced: 14 days ago
- Host: GitHub
- URL: https://github.com/lopusz/awesome-interpretable-machine-learning
- Owner: lopusz
- Created: 2017-12-27T16:21:24.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2023-03-19T09:28:48.000Z (over 1 year ago)
- Last Synced: 2024-05-22T04:10:42.644Z (6 months ago)
- Topics: data-science, explainable-ai, interpretable-ai, interpretable-machine-learning, interpretable-ml, machine-learning, xai
- Language: Python
- Size: 1.47 MB
- Stars: 901
- Watchers: 55
- Forks: 141
- Open Issues: 1
Metadata Files:
- Readme: README.org
Awesome Lists containing this project
- awesome-artificial-intelligence-research (Interpretability 2)
- awesome-trustworthy-deep-learning - Awesome Interpretable Machine Learning (Interpretability Lists)
- awesome-AI-kubernetes - Awesome Interpretable Machine Learning
- Awesome-explainable-AI - https://github.com/lopusz/awesome-interpretable-machine-learning (Related Repositories / Evaluation methods)
README
* Awesome Interpretable Machine Learning [[https://awesome.re][https://awesome.re/badge.svg]]
Opinionated list of resources facilitating model interpretability
(introspection, simplification, visualization, explanation).
** Interpretable Models
+ Interpretable models
+ Simple decision trees
+ Rules
+ (Regularized) linear regression
+ k-NN
+ (2008) Predictive learning via rule ensembles by Jerome H. Friedman, Bogdan E. Popescu
+ https://dx.doi.org/10.1214/07-AOAS148
+ (2014) Comprehensible classification models by Alex A. Freitas
+ https://dx.doi.org/10.1145/2594473.2594475
+ http://www.kdd.org/exploration_files/V15-01-01-Freitas.pdf
+ Interesting discussion of interpretability for a few classification models
(decision trees, classification rules, decision tables, nearest neighbors and Bayesian network classifiers)
+ (2015) Interpretable classifiers using rules and Bayesian analysis: Building a better stroke prediction model by Benjamin Letham, Cynthia Rudin, Tyler H. McCormick, David Madigan
+ https://arxiv.org/pdf/1511.01644
+ https://dx.doi.org/10.1214/15-AOAS848
+ (2017) Learning Explanatory Rules from Noisy Data by Richard Evans, Edward Grefenstette
+ https://arxiv.org/pdf/1711.04574
+ (2019) Transparent Classification with Multilayer Logical Perceptrons and Random Binarization by Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang
+ https://arxiv.org/pdf/1912.04695
+ Code: https://github.com/12wang3/mllp
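A minimal sketch of the first group above (assuming scikit-learn; dataset and depth limit are illustrative): a shallow decision tree is interpretable in the direct sense that its learned rules can be printed and read.
#+BEGIN_SRC python
# A shallow decision tree whose learned rules can be read directly.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
# export_text prints the tree as nested if/else rules over the input features.
print(export_text(tree, feature_names=list(data.feature_names)))
#+END_SRC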
** Feature Importance
+ Models offering feature importance measures
+ Random forest
+ Boosted trees
+ Extremely randomized trees
+ (2006) Extremely randomized trees by Pierre Geurts, Damien Ernst, Louis Wehenkel
+ https://dx.doi.org/10.1007/s10994-006-6226-1
+ Random ferns
+ (2015) rFerns: An Implementation of the Random Ferns Method for General-Purpose Machine Learning by Miron B. Kursa
+ https://dx.doi.org/10.18637/jss.v061.i10
+ https://cran.r-project.org/web/packages/rFerns
+ https://notabug.org/mbq/rFerns
+ Linear regression (with a grain of salt)
+ (2007) Bias in random forest variable importance measures: Illustrations, sources and a solution by Carolin Strobl, Anne-Laure Boulesteix, Achim Zeileis, Torsten Hothorn
+ https://dx.doi.org/10.1186/1471-2105-8-25
+ (2008) Conditional Variable Importance for Random Forests by Carolin Strobl, Anne-Laure Boulesteix, Thomas Kneib, Thomas Augustin, Achim Zeileis
+ https://dx.doi.org/10.1186/1471-2105-9-307
+ (2018) Model Class Reliance: Variable Importance Measures for any Machine Learning Model Class, from the "Rashomon" Perspective by Aaron Fisher, Cynthia Rudin, Francesca Dominici
+ https://arxiv.org/pdf/1801.01489
+ https://github.com/aaronjfisher/mcr
+ Universal (model agnostic) variable importance measure
+ (2019) Please Stop Permuting Features: An Explanation and Alternatives by Giles Hooker, Lucas Mentch
+ https://arxiv.org/pdf/1905.03151
+ Paper advocating against feature permutation for importance
+ (2018) Visualizing the Feature Importance for Black Box Models by Giuseppe Casalicchio, Christoph Molnar, Bernd Bischl
+ https://arxiv.org/pdf/1804.06620
+ https://github.com/giuseppec/featureImportance
+ Global and local (model agnostic) variable importance measure (based on Model Reliance)
+ Very good blog post describing deficiencies of random forest feature importance and the permutation importance
+ http://explained.ai/rf-importance/index.html
+ Permutation importance - a simple model-agnostic approach described in the Eli5 documentation (see the sketch below)
+ https://eli5.readthedocs.io/en/latest/blackbox/permutation_importance.html
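A minimal sketch of the permutation idea, here via scikit-learn's implementation of the same approach the Eli5 documentation describes (dataset and model are illustrative; see the Hooker & Mentch paper above for caveats).
#+BEGIN_SRC python
# Permutation importance: shuffle one feature at a time on held-out data
# and measure the resulting drop in the model's score.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
res = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for i in res.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {res.importances_mean[i]:.4f} +/- {res.importances_std[i]:.4f}")
#+END_SRC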
** Feature Selection
+ Classification of feature selection methods
+ Filters
+ Wrappers
+ Embedded methods
+ (2003) An Introduction to Variable and Feature Selection by Isabelle Guyon, André Elisseeff
+ http://www.jmlr.org/papers/volume3/guyon03a/guyon03a.pdf
+ Be sure to read this very illustrative introduction to feature selection
+ Filter Methods
+ (2006) On the Use of Variable Complementarity for Feature Selection in Cancer Classification by Patrick Meyer, Gianluca Bontempi
+ https://dx.doi.org/10.1007/11732242_9
+ https://pdfs.semanticscholar.org/d72f/f5063520ce4542d6d9b9e6a4f12aafab6091.pdf
+ Introduces information theoretic methods - double input symmetrical relevance (DISR)
+ (2012) Conditional Likelihood Maximisation: A Unifying Framework for Information Theoretic Feature Selection by Gavin Brown, Adam Pocock, Ming-Jie Zhao, Mikel Luján
+ http://www.jmlr.org/papers/volume13/brown12a/brown12a.pdf
+ Code: https://github.com/Craigacp/FEAST
+ Discusses various approaches based on mutual information (MIM, mRMR, MIFS, CMIM, JMI, DISR, ICAP, CIFE, CMI); see the sketch below
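A minimal sketch of the simplest filter criterion (MIM: rank features by mutual information with the target), here via scikit-learn's estimator; criteria such as mRMR or JMI from the paper above additionally penalize redundancy among the selected features.
#+BEGIN_SRC python
# MIM-style filter: score each feature by its mutual information with the target.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)
mi = mutual_info_classif(X, y, random_state=0)
top = mi.argsort()[::-1][:5]
print("top features by mutual information:", top, mi[top])
#+END_SRC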
+ (2012) Feature selection via joint likelihood by Adam Pocock
+ http://www.cs.man.ac.uk/~gbrown/publications/pocockPhDthesis.pdf
+ (2017) Relief-Based Feature Selection: Introduction and Review by Ryan J. Urbanowicz, Melissa Meeker, William LaCava, Randal S. Olson, Jason H. Moore
+ https://arxiv.org/pdf/1711.08421
+ (2017) Benchmarking Relief-Based Feature Selection Methods for Bioinformatics Data Mining by Ryan J. Urbanowicz, Randal S. Olson, Peter Schmitt, Melissa Meeker, Jason H. Moore
+ https://arxiv.org/pdf/1711.08477
+ Wrapper methods
+ (2015) Feature Selection with the Boruta Package by Miron B. Kursa, Witold R. Rudnicki
+ https://dx.doi.org/10.18637/jss.v036.i11
+ https://cran.r-project.org/web/packages/Boruta/
+ Code (official, R): https://notabug.org/mbq/Boruta/
+ Code (Python): https://github.com/scikit-learn-contrib/boruta_py (see the sketch below)
+ Boruta for those in a hurry
+ https://cran.r-project.org/web/packages/Boruta/vignettes/inahurry.pdf
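A minimal usage sketch for boruta_py (assuming the package is installed via pip install Boruta; the synthetic dataset is illustrative). Boruta keeps every feature whose importance consistently beats that of its shuffled "shadow" copies.
#+BEGIN_SRC python
# All-relevant feature selection with Boruta: compare real features against
# shuffled "shadow" copies and confirm those that consistently win.
import numpy as np
from boruta import BorutaPy
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=0)
forest = RandomForestClassifier(max_depth=5, n_jobs=-1, random_state=0)
selector = BorutaPy(forest, n_estimators='auto', random_state=0)
selector.fit(X, y)  # expects numpy arrays
print("confirmed features:", np.where(selector.support_)[0])
#+END_SRC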
+ General
+ (1994) Irrelevant Features and the Subset Selection Problem by George John, Ron Kohavi, Karl Pfleger
+ https://pdfs.semanticscholar.org/a83b/ddb34618cc68f1014ca12eef7f537825d104.pdf
+ Classic paper discussing weakly relevant features, irrelevant features, and strongly relevant features
+ (2003) Special issue of JMLR on feature selection - oldish (2003)
+ http://www.jmlr.org/papers/special/feature03.html
+ (2004) Result Analysis of the NIPS 2003 Feature Selection Challenge by Isabelle Guyon, Steve Gunn, Asa Ben-Hur, Gideon Dror
+ Paper: https://papers.nips.cc/paper/2728-result-analysis-of-the-nips-2003-feature-selection-challenge.pdf
+ Website: http://clopinet.com/isabelle/Projects/NIPS2003/
+ (2007) Consistent Feature Selection for Pattern Recognition in Polynomial Time by Roland Nilsson, José Peña, Johan Björkegren, Jesper Tegnér
+ http://www.jmlr.org/papers/volume8/nilsson07a/nilsson07a.pdf
+ Discusses minimal-optimal vs all-relevant approaches to feature selection
+ Feature Engineering and Selection by Kuhn & Johnson
+ Slightly off-topic, but very interesting book
+ http://www.feat.engineering/index.html
+ https://bookdown.org/max/FES/
+ https://github.com/topepo/FES
+ Feature Engineering presentation by H. J. van Veen
+ Slightly off-topic, but very interesting deck of slides
+ Slides: https://www.slideshare.net/HJvanVeen/feature-engineering-72376750
** Model Explanations
*** Philosophy
+ Magnets by R. P. Feynman
+ https://www.youtube.com/watch?v=wMFPe-DwULM
+ (2002) Looking Inside the Black Box, presentation by Leo Breiman
+ https://www.stat.berkeley.edu/users/breiman/wald2002-2.pdf
+ (2011) To Explain or to Predict? by Galit Shmueli
+ https://arxiv.org/pdf/1101.0891
+ https://dx.doi.org/10.1214/10-STS330
+ (2016) The Mythos of Model Interpretability by Zachary C. Lipton
+ https://arxiv.org/pdf/1606.03490
+ https://www.youtube.com/watch?v=mvzBQci04qA
+ (2017) Towards A Rigorous Science of Interpretable Machine Learning by Finale Doshi-Velez, Been Kim
+ https://arxiv.org/pdf/1702.08608
+ (2017) The Promise and Peril of Human Evaluation for Model Interpretability by Bernease Herman
+ https://arxiv.org/pdf/1711.07414
+ (2018) [[http://bayes.cs.ucla.edu/WHY/why-intro.pdf][The Book of Why: The New Science of Cause and Effect]] by Judea Pearl
+ (2018) Please Stop Doing "Explainable" ML by Cynthia Rudin
+ Video (starts 17:30, lasts 10 min): https://zoom.us/recording/play/0y-iI9HamgyDzzP2k_jiTu6jB7JgVVXnjWZKDMbnyRTn3FsxTDZy6Wkrj3_ekx4J
+ Linked at: https://users.cs.duke.edu/~cynthia/mediatalks.html
+ (2018) Explaining Explanations: An Approach to Evaluating Interpretability of Machine Learning by Leilani H. Gilpin, David Bau, Ben Z. Yuan, Ayesha Bajwa, Michael Specter, Lalana Kagal
+ https://arxiv.org/pdf/1806.00069
+ (2019) Interpretable machine learning: definitions, methods, and applications by W. James Murdoch, Chandan Singh, Karl Kumbier, Reza Abbasi-Asl, Bin Yu
+ https://arxiv.org/pdf/1901.04592
+ (2019) On Explainable Machine Learning Misconceptions A More Human-Centered Machine Learning by Patrick Hall
+ https://github.com/jphall663/xai_misconceptions/blob/master/xai_misconceptions.pdf
+ https://github.com/jphall663/xai_misconceptions
+ (2019) An Introduction to Machine Learning Interpretability. An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI by Patrick Hall and Navdeep Gill
+ https://www.h2o.ai/wp-content/uploads/2019/08/An-Introduction-to-Machine-Learning-Interpretability-Second-Edition.pdf
*** Model Agnostic Explanations
+ (2009) How to Explain Individual Classification Decisions by David Baehrens, Timon Schroeter, Stefan Harmeling, Motoaki Kawanabe, Katja Hansen, Klaus-Robert Mueller
+ https://arxiv.org/pdf/0912.1128
+ (2013) Peeking Inside the Black Box: Visualizing Statistical Learning with Plots of Individual Conditional Expectation by Alex Goldstein, Adam Kapelner, Justin Bleich, Emil Pitkin
+ https://arxiv.org/pdf/1309.6392
+ (2016) "Why Should I Trust You?": Explaining the Predictions of Any Classifier by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
+ https://arxiv.org/pdf/1602.04938
+ Code: https://github.com/marcotcr/lime
+ https://github.com/marcotcr/lime-experiments
+ https://www.youtube.com/watch?v=bCgEP2zuYxI
+ Introduces the LIME method (Local Interpretable Model-agnostic Explanations); see the sketch below
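A minimal usage sketch for the lime package (dataset and model are illustrative): fit any black-box classifier, then ask for a local explanation of a single prediction.
#+BEGIN_SRC python
# LIME: fit a local, sparse surrogate model around one prediction.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)
explainer = LimeTabularExplainer(data.data, feature_names=data.feature_names,
                                 class_names=list(data.target_names),
                                 mode="classification")
exp = explainer.explain_instance(data.data[0], model.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, local weight) pairs
#+END_SRC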
+ (2016) A Model Explanation System: Latest Updates and Extensions by Ryan Turner
+ https://arxiv.org/pdf/1606.09517
+ http://www.blackboxworkshop.org/pdf/Turner2015_MES.pdf
+ (2017) Understanding Black-box Predictions via Influence Functions by Pang Wei Koh, Percy Liang
+ https://arxiv.org/pdf/1703.04730
+ (2017) A Unified Approach to Interpreting Model Predictions by Scott Lundberg, Su-In Lee
+ https://arxiv.org/pdf/1705.07874
+ Code: https://github.com/slundberg/shap
+ Introduces the SHAP method (SHapley Additive exPlanations), generalizing LIME; see the sketch below
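A minimal usage sketch for the shap package (model and data are illustrative): TreeExplainer computes Shapley-value attributions efficiently for tree ensembles.
#+BEGIN_SRC python
# SHAP: additive Shapley-value attributions for each feature of each prediction.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
# Aggregate the per-prediction attributions into a global summary plot.
shap.summary_plot(shap_values, X[:100])
#+END_SRC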
+ (2018) Anchors: High-Precision Model-Agnostic Explanations by Marco Ribeiro, Sameer Singh, Carlos Guestrin
+ https://homes.cs.washington.edu/~marcotcr/aaai18.pdf
+ Code: https://github.com/marcotcr/anchor-experiments
+ (2018) Learning to Explain: An Information-Theoretic Perspective on Model Interpretation by Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan
+ https://arxiv.org/pdf/1802.07814
+ (2018) Explanations of model predictions with live and breakDown packages by Mateusz Staniak, Przemyslaw Biecek
+ https://arxiv.org/pdf/1804.01955
+ Docs: https://mi2datalab.github.io/live/
+ Code: https://github.com/MI2DataLab/live
+ Docs: https://pbiecek.github.io/breakDown
+ Code: https://github.com/pbiecek/breakDown
+ (2018) A review book - Interpretable Machine Learning. A Guide for Making Black Box Models Explainable by Christoph Molnar
+ https://christophm.github.io/interpretable-ml-book/
+ (2018) Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead by Cynthia Rudin
+ https://arxiv.org/pdf/1811.10154
+ (2019) Quantifying Interpretability of Arbitrary Machine Learning Models Through Functional Decomposition by Christoph Molnar, Giuseppe Casalicchio, Bernd Bischl
+ https://arxiv.org/pdf/1904.03867
*** Model Specific Explanations - Neural Networks
+ (2013) Visualizing and Understanding Convolutional Networks by Matthew D Zeiler, Rob Fergus
+ https://arxiv.org/pdf/1311.2901
+ (2013) Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps by Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
+ https://arxiv.org/pdf/1312.6034
+ (2015) Understanding Neural Networks Through Deep Visualization by Jason Yosinski, Jeff Clune, Anh Nguyen, Thomas Fuchs, Hod Lipson
+ https://arxiv.org/pdf/1506.06579
+ https://github.com/yosinski/deep-visualization-toolbox
+ (2016) Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization by Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
+ https://arxiv.org/pdf/1610.02391
+ (2016) Generating Visual Explanations by Lisa Anne Hendricks, Zeynep Akata, Marcus Rohrbach, Jeff Donahue, Bernt Schiele, Trevor Darrell
+ https://arxiv.org/pdf/1603.08507
+ (2016) Rationalizing Neural Predictions by Tao Lei, Regina Barzilay, Tommi Jaakkola
+ https://arxiv.org/pdf/1606.04155
+ https://people.csail.mit.edu/taolei/papers/emnlp16_rationale_slides.pdf
+ Code: https://github.com/taolei87/rcnn/tree/master/code/rationale
+ (2016) Gradients of Counterfactuals by Mukund Sundararajan, Ankur Taly, Qiqi Yan
+ https://arxiv.org/pdf/1611.02639
+ Pixel entropy can be used to detect relevant picture regions (for ConvNets)
+ See Visualization section and Fig. 5 of the paper
+ (2017) High-Resolution Breast Cancer Screening with Multi-View Deep Convolutional Neural Networks by Krzysztof J. Geras, Stacey Wolfson, Yiqiu Shen, Nan Wu, S. Gene Kim, Eric Kim, Laura Heacock, Ujas Parikh, Linda Moy, Kyunghyun Cho
+ https://arxiv.org/pdf/1703.07047
+ (2017) SVCCA: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability by Maithra Raghu, Justin Gilmer, Jason Yosinski, Jascha Sohl-Dickstein
+ https://arxiv.org/pdf/1706.05806
+ https://research.googleblog.com/2017/11/interpreting-deep-neural-networks-with.html
+ (2017) Visual Explanation by Interpretation: Improving Visual Feedback Capabilities of Deep Neural Networks by Jose Oramas, Kaili Wang, Tinne Tuytelaars
+ https://arxiv.org/pdf/1712.06302
+ (2017) Axiomatic Attribution for Deep Networks by Mukund Sundararajan, Ankur Taly, Qiqi Yan
+ https://arxiv.org/pdf/1703.01365
+ Code: https://github.com/ankurtaly/Integrated-Gradients
+ Proposes the Integrated Gradients method (see the sketch below)
+ See also: Gradients of Counterfactuals https://arxiv.org/pdf/1611.02639.pdf
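A toy, framework-free sketch of the Integrated Gradients idea on a hand-written logistic model whose gradient is known in closed form (all names and values here are illustrative, not taken from the authors' code): attributions are the input-minus-baseline difference times the gradient averaged along the straight path from baseline to input.
#+BEGIN_SRC python
import numpy as np

w = np.array([0.5, -1.2, 2.0])              # toy model weights (illustrative)

def f(x):                                    # model output: sigmoid(w . x)
    return 1.0 / (1.0 + np.exp(-x @ w))

def grad_f(x):                               # closed-form gradient of f w.r.t. x
    p = f(x)
    return p * (1.0 - p) * w

x = np.array([1.0, 2.0, -0.5])               # input to explain
baseline = np.zeros_like(x)                  # reference point
alphas = np.linspace(0.0, 1.0, 51)[1:]       # Riemann-sum steps along the path
avg_grad = np.mean([grad_f(baseline + a * (x - baseline)) for a in alphas], axis=0)
ig = (x - baseline) * avg_grad               # integrated gradients attribution
# Completeness axiom: attributions sum (approximately) to f(x) - f(baseline).
print(ig, ig.sum(), f(x) - f(baseline))
#+END_SRC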
+ (2017) Learning Important Features Through Propagating Activation Differences by Avanti Shrikumar, Peyton Greenside, Anshul Kundaje
+ https://arxiv.org/pdf/1704.02685
+ Proposes the DeepLIFT method
+ Code: https://github.com/kundajelab/deeplift
+ Videos: https://www.youtube.com/playlist?list=PLJLjQOkqSRTP3cLB2cOOi_bQFw6KPGKML
+ (2017) The (Un)reliability of saliency methods by Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, Dumitru Erhan, Been Kim
+ https://arxiv.org/pdf/1711.00867
+ Review of failures for methods extracting most important pixels for prediction
+ (2018) Classifier-agnostic saliency map extraction by Konrad Zolna, Krzysztof J. Geras, Kyunghyun Cho
+ https://arxiv.org/pdf/1805.08249
+ Code: https://github.com/kondiz/casme
+ (2018) A Benchmark for Interpretability Methods in Deep Neural Networks by Sara Hooker, Dumitru Erhan, Pieter-Jan Kindermans, Been Kim
+ https://arxiv.org/pdf/1806.10758
+ (2018) The Building Blocks of Interpretability by Chris Olah, Arvind Satyanarayan, Ian Johnson, Shan Carter, Ludwig Schubert, Katherine Ye, Alexander Mordvintsev
+ https://dx.doi.org/10.23915/distill.00010
+ Has some embedded links to notebooks
+ Uses the Lucid library https://github.com/tensorflow/lucid
+ (2018) Hierarchical interpretations for neural network predictions by Chandan Singh, W. James Murdoch, Bin Yu
+ https://arxiv.org/pdf/1806.05337
+ Code: https://github.com/csinva/hierarchical_dnn_interpretations
+ (2018) iNNvestigate neural networks! by Maximilian Alber, Sebastian Lapuschkin, Philipp Seegerer, Miriam Hägele, Kristof T. Schütt, Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller, Sven Dähne, Pieter-Jan Kindermans
+ https://arxiv.org/pdf/1808.04260
+ Code: https://github.com/albermax/innvestigate
+ (2018) YASENN: Explaining Neural Networks via Partitioning Activation Sequences by Yaroslav Zharov, Denis Korzhenkov, Pavel Shvechikov, Alexander Tuzhilin
+ https://arxiv.org/pdf/1811.02783
+ (2019) Attention is not Explanation by Sarthak Jain, Byron C. Wallace
+ https://arxiv.org/pdf/1902.10186
+ (2019) Attention Interpretability Across NLP Tasks by Shikhar Vashishth, Shyam Upadhyay, Gaurav Singh Tomar, Manaal Faruqui
+ https://arxiv.org/pdf/1909.11218
+ (2019) GRACE: Generating Concise and Informative Contrastive Sample to Explain Neural Network Model's Prediction by Thai Le, Suhang Wang, Dongwon Lee
+ https://arxiv.org/pdf/1911.02042
+ Code: https://github.com/lethaiq/GRACE_KDD20
** Extracting Interpretable Models From Complex Ones
+ (2017) Extracting Automata from Recurrent Neural Networks Using Queries and Counterexamples by Gail Weiss, Yoav Goldberg, Eran Yahav
+ https://arxiv.org/pdf/1711.09576
+ (2017) Distilling a Neural Network Into a Soft Decision Tree by Nicholas Frosst, Geoffrey Hinton
+ https://arxiv.org/pdf/1711.09784
+ (2017) Detecting Bias in Black-Box Models Using Transparent Model Distillation by Sarah Tan, Rich Caruana, Giles Hooker, Yin Lou
+ http://www.aies-conference.com/2018/contents/papers/main/AIES_2018_paper_96.pdf
** Model Visualization
+ Visualizing Statistical Models: Removing the blindfold
+ http://had.co.nz/stat645/model-vis.pdf
+ Partial dependence plots (see the sketch below)
+ http://scikit-learn.org/stable/auto_examples/ensemble/plot_partial_dependence.html
+ pdp: An R Package for Constructing Partial Dependence Plots
+ https://journal.r-project.org/archive/2017/RJ-2017-016/RJ-2017-016.pdf
+ https://cran.r-project.org/web/packages/pdp/index.html
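A minimal partial dependence sketch (assuming a recent scikit-learn where PartialDependenceDisplay is available; dataset and model are illustrative): the plot shows the model's average prediction as one feature is swept over its range.
#+BEGIN_SRC python
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)
# Average the prediction over the data while sweeping features 2 and 3.
PartialDependenceDisplay.from_estimator(model, X, features=[2, 3])
plt.show()
#+END_SRC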
+ ggfortify: Unified Interface to Visualize Statistical Results of Popular R Packages
+ https://journal.r-project.org/archive/2016-2/tang-horikoshi-li.pdf
+ CRAN https://cran.r-project.org/web/packages/ggfortify/index.html
+ RandomForestExplainer
+ Master thesis https://rawgit.com/geneticsMiNIng/BlackBoxOpener/master/randomForestExplainer_Master_thesis.pdf
+ R code
+ CRAN https://cran.r-project.org/web/packages/randomForestExplainer/index.html
+ Code: https://github.com/MI2DataLab/randomForestExplainer
+ ggRandomForests
+ Paper (vignette) https://github.com/ehrlinger/ggRandomForests/raw/master/vignettes/randomForestSRC-Survival.pdf
+ R code
+ CRAN https://cran.r-project.org/web/packages/ggRandomForests/index.html
+ Code: https://github.com/ehrlinger/ggRandomForests
** Selected Review Talks and Tutorials
+ Tutorial on Interpretable machine learning at ICML 2017
+ Slides: http://people.csail.mit.edu/beenkim/papers/BeenK_FinaleDV_ICML2017_tutorial.pdf
+ P. Biecek, Show Me Your Model - Tools for Visualisation of Statistical Models
+ Video: https://channel9.msdn.com/Events/useR-international-R-User-conferences/useR-International-R-User-2017-Conference/Show-Me-Your-Model-tools-for-visualisation-of-statistical-models
+ S. Ritchie, Just-So Stories of AI
+ Video: https://www.youtube.com/watch?v=DiWkKqZChF0
+ Slides: https://speakerdeck.com/sritchie/just-so-stories-for-ai-explaining-black-box-predictions
+ C. Jarmul, Towards Interpretable Accountable Models
+ Video: https://www.youtube.com/watch?v=B3PtcF-6Dtc
+ Slides: https://docs.google.com/presentation/d/e/2PACX-1vR05kpagAbL5qo1QThxwu44TI5SQAws_UFVg3nUAmKp39uNG0xdBjcMA-VyEeqZRGGQtt0CS5h2DMTS/embed?start=false&loop=false&delayms=3000
+ I. Ozsvald, Machine Learning Libraries You'd Wish You'd Known About
+ A large part of the talk covers model explanation and visualization
+ Video: https://www.youtube.com/watch?v=nDF7_8FOhpI
+ Associated notebook on explaining regression predictions: https://github.com/ianozsvald/data_science_delivered/blob/master/ml_explain_regression_prediction.ipynb
+ G. Varoquaux, Understanding and diagnosing your machine-learning models (covers PDP and LIME among others)
+ Video: https://www.youtube.com/watch?v=kbj3llSbaVA
+ Slides: http://gael-varoquaux.info/interpreting_ml_tuto/
** Venues
+ Interpretable ML Symposium (NIPS 2017) (contains links to *papers*, *slides* and *videos*)
+ http://interpretable.ml/
+ Debate, Interpretability is necessary in machine learning
+ https://www.youtube.com/watch?v=2hW05ZfsUUo
+ Workshop on Human Interpretability in Machine Learning (WHI), organised in conjunction with ICML
+ 2018 (contains links to *papers* and *slides*)
+ https://sites.google.com/view/whi2018
+ Proceedings https://arxiv.org/html/1807.01308
+ 2017 (contains links to *papers* and *slides*)
+ https://sites.google.com/view/whi2017/home
+ Proceedings https://arxiv.org/html/1708.02666
+ 2016 (contains links to *papers*)
+ https://sites.google.com/site/2016whi/
+ Proceedings https://arxiv.org/html/1607.02531 or [[https://drive.google.com/open?id=0B9mGJ4F63iKGZWk0cXZraTNjRVU][here]]
+ Analyzing and interpreting neural networks for NLP (BlackboxNLP), organised in conjunction with EMNLP
+ 2019 (links below may get prefixed by 2019 later on)
+ https://blackboxnlp.github.io/
+ https://blackboxnlp.github.io/program.html
+ Papers should be available on arXiv
+ 2018
+ https://blackboxnlp.github.io/2018
+ https://blackboxnlp.github.io/program.html
+ [[https://arxiv.org/search/advanced?advanced=&terms-0-operator=AND&terms-0-term=BlackboxNLP&terms-0-field=comments&terms-1-operator=OR&terms-1-term=Analyzing+interpreting+neural+networks+NLP&terms-1-field=comments&classification-physics_archives=all&date-filter_by=all_dates&date-year=&date-from_date=&date-to_date=&date-date_type=submitted_date&abstracts=show&size=200&order=-announced_date_first][List of papers]]
+ FAT/ML Fairness, Accountability, and Transparency in Machine Learning [[https://www.fatml.org/]]
+ 2018
+ https://www.fatml.org/schedule/2018
+ 2017
+ https://www.fatml.org/schedule/2017
+ 2016
+ https://www.fatml.org/schedule/2016
+ 2015
+ https://www.fatml.org/schedule/2015
+ 2014
+ https://www.fatml.org/schedule/2014
+ AAAI/ACM Annual Conference on AI, Ethics, and Society
+ 2019 (links below may get prefixed by 2019 later on)
+ http://www.aies-conference.com/accepted-papers/
+ 2018
+ http://www.aies-conference.com/2018/accepted-papers/
+ http://www.aies-conference.com/2018/accepted-student-papers/
** Software
Software related to papers is mentioned along with each publication.
Here only standalone software is included.
+ DALEX - R package, Descriptive mAchine Learning EXplanations
+ CRAN https://cran.r-project.org/web/packages/DALEX/DALEX.pdf
+ Code: https://github.com/pbiecek/DALEX
+ ELI5 - Python package dedicated to debugging machine learning classifiers and explaining their predictions
+ Code: https://github.com/TeamHG-Memex/eli5
+ https://eli5.readthedocs.io/en/latest/
+ forestmodel - R package visualizing coefficients of different models with the so-called forest plot
+ CRAN https://cran.r-project.org/web/packages/forestmodel/index.html
+ Code: https://github.com/NikNakk/forestmodel
+ fscaret - R package with automated Feature Selection from 'caret'
+ CRAN https://cran.r-project.org/web/packages/fscaret/
+ Tutorial: https://cran.r-project.org/web/packages/fscaret/vignettes/fscaret.pdf
+ iml - R package for Interpretable Machine Learning
+ CRAN https://cran.r-project.org/web/packages/iml/
+ Code: https://github.com/christophM/iml
+ Publication: http://joss.theoj.org/papers/10.21105/joss.00786
+ interpret - Python package for training interpretable models and explaining black-box systems, by Microsoft (see the sketch below)
+ Code: https://github.com/microsoft/interpret
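A minimal usage sketch for interpret's glassbox side (assuming pip install interpret; data is illustrative): an Explainable Boosting Machine is a GAM-style model whose per-feature shape functions can be inspected directly.
#+BEGIN_SRC python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
ebm = ExplainableBoostingClassifier().fit(X, y)
# Opens an interactive view of each feature's learned shape function.
show(ebm.explain_global())
#+END_SRC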
+ lime - R package implementing LIME
+ https://github.com/thomasp85/lime
+ lofo-importance - Python package computing feature importance with the Leave One Feature Out method
+ Code: https://github.com/aerdem4/lofo-importance
+ Lucid - a collection of infrastructure and tools for research in neural network interpretability
+ Code: https://github.com/tensorflow/lucid
+ praznik - R package with a collection of feature selection filters performing greedy optimisation of mutual information-based usefulness criteria, see JMLR 13, 27-66 (2012)
+ CRAN https://cran.r-project.org/web/packages/praznik/index.html
+ Code: https://notabug.org/mbq/praznik
+ yellowbrick - Python package offering visual analysis and diagnostic tools to facilitate machine learning model selection (see the sketch below)
+ Code: https://github.com/DistrictDataLabs/yellowbrick
+ http://www.scikit-yb.org/en/latest/
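A minimal usage sketch for yellowbrick (model and data are illustrative): visualizers wrap an estimator and render a diagnostic plot from the familiar fit/score calls.
#+BEGIN_SRC python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from yellowbrick.classifier import ConfusionMatrix

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
viz = ConfusionMatrix(LogisticRegression(max_iter=5000))
viz.fit(X_tr, y_tr)     # fits the wrapped estimator
viz.score(X_te, y_te)   # computes the confusion matrix on the test split
viz.show()              # renders the plot
#+END_SRC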
** Other Resources
+ *Awesome* list of resources by Patrick Hall
+ https://github.com/jphall663/awesome-machine-learning-interpretability
+ *Awesome* XAI resources by Przemysław Biecek
+ https://github.com/pbiecek/xai_resources