Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-xai
Awesome Explainable AI (XAI) and Interpretable ML Papers and Resources
https://github.com/altamiracorp/awesome-xai
Last synced: 3 days ago
Papers
XAI Methods
- Ada-SISE - Adaptive semantic input sampling for explanation.
- ALE - Accumulated local effects plot.
- ALIME - Autoencoder Based Approach for Local Interpretability.
- Anchors - High-Precision Model-Agnostic Explanations.
- Auditing - Auditing black-box models.
- BayLIME - Bayesian local interpretable model-agnostic explanations.
- Break Down - Break down plots for additive attributions.
- CAM - Class activation mapping.
- CDT - Confident interpretation of Bayesian decision tree ensembles.
- CICE - Centered ICE plot.
- CMM - Combined multiple models metalearner.
- Conj Rules - Using sampling and queries to extract rules from trained neural networks.
- CP - Contribution propagation.
- DecText - Extracting decision trees from trained neural networks.
- DeepLIFT - Learning important features through propagating activation differences.
- DTD - Deep Taylor decomposition.
- ExplainD - Explanations of evidence in additive classifiers.
- FIRM - Feature importance ranking measure.
- Fong, et. al. - Meaningful perturbations model.
- G-REX - Rule extraction using genetic algorithms.
- Gibbons, et. al. - Explain random forest using decision tree.
- GoldenEye - Exploring classifiers by randomization.
- GPD - Gaussian process decisions.
- GPDT - Genetic program to evolve decision trees.
- GradCAM - Gradient-weighted Class Activation Mapping.
- GradCAM++ - Generalized gradient-based visual explanations.
- Hara, et. al. - Making tree ensembles interpretable.
- ICE - Individual conditional expectation plots.
- IG - Integrated gradients.
- inTrees - Interpreting tree ensembles with inTrees.
- IOFP - Iterative orthogonal feature projection.
- IP - Information plane visualization.
- KL-LIME - Kullback-Leibler Projections based LIME.
- Krishnan, et. al. - Extracting decision trees from trained neural networks.
- Lei, et. al. - Rationalizing neural predictions with generator and encoder.
- LIME - Local Interpretable Model-Agnostic Explanations.
- LOCO - Leave-one covariate out.
- LORE - Local rule-based explanations.
- Lou, et. al. - Accurate intelligible models with pairwise interactions.
- LRP - Layer-wise relevance propagation.
- MCR - Model class reliance.
- MES - Model explanation system.
- MFI - Feature importance measure for non-linear algorithms.
- NID - Neural interpretation diagram.
- OptiLIME - Optimized LIME.
- PALM - Partition aware local model.
- PDA - Prediction Difference Analysis: Visualize deep neural network decisions.
- PDP - Partial dependence plots (see the sketch after this list).
- POIMs - Positional oligomer importance matrices for understanding SVM signal detectors.
- ProfWeight - Transfer information from deep network to simpler model.
- Prospector - Interactive partial dependence diagnostics.
- QII - Quantitative input influence.
- REFNE - Extracting symbolic rules from trained neural network ensembles.
- RETAIN - Reverse time attention model.
- RISE - Randomized input sampling for explanation.
- RxREN - Reverse engineering neural networks for rule extraction.
- SHAP - A unified approach to interpreting model predictions.
- SIDU - Similarity, difference, and uniqueness input perturbation.
- Simonyan, et. al. - Visualizing CNN classes.
- Singh, et. al. - Programs as black-box explanations.
- STA - Interpreting models via Single Tree Approximation.
- Strumbelj, et. al. - Explanation of individual classifications using game theory.
- SVM+P - Rule extraction from support vector machines.
- TCAV - Testing with concept activation vectors.
- Tolomei, et. al. - Interpretable predictions of tree-ensembles via actionable feature tweaking.
- Tree Metrics - Making sense of a forest of trees.
- TreeSHAP - Consistent feature attribution for tree ensembles.
- TreeView - Feature-space partitioning.
- TREPAN - Extracting tree-structured representations of trained networks.
- TSP - Tree space prototypes.
- VBP - Visual back-propagation.
- VEC - Variable effect characteristic curve.
- VIN - Variable interaction network.
- X-TREPAN - Adapted extraction of comprehensible decision trees in ANNs.
- Xu, et. al. - Show, attend, tell attention model.
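The PDP and ICE entries above describe averaged and per-instance feature-effect curves. As a point of reference, here is a minimal, hedged sketch of how such plots are commonly produced with scikit-learn's inspection module; the synthetic dataset and gradient-boosting model are placeholders, not part of any paper listed above.

```python
# Minimal PDP/ICE sketch using scikit-learn; data and model are placeholders.
import matplotlib.pyplot as plt
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_friedman1(n_samples=500, n_features=5, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays per-sample ICE curves on the averaged PDP curve.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1], kind="both")
plt.show()
```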
Critiques
- Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead - The author presents a number of issues with explainable ML and three challenges for interpretable ML: (1) constructing optimal logical models, (2) constructing optimal sparse scoring systems, and (3) defining interpretability for specific domains and creating methods accordingly. The paper also argues that interpretable models may exist in many different domains.
- The (Un)reliability of Saliency Methods - The authors demonstrate how saliency methods vary attribution when a constant shift is added to the input data. They argue that methods should satisfy *input invariance*: a saliency method should mirror the sensitivity of the model with respect to transformations of the input.
- Attention is not Explanation - The authors perform a series of NLP experiments arguing that attention does not provide meaningful explanations. They also demonstrate that different attention distributions can yield similar model outputs.
- Attention is not *not* Explanation - A rebuttal to the paper above. The authors argue that multiple explanations can be valid and that attention can produce *a* valid explanation, if not *the* valid explanation.
- Do Not Trust Additive Explanations - The authors argue that additive explanations (e.g., LIME, SHAP, Break Down) fail to account for feature interactions and are therefore unreliable.
- Please Stop Permuting Features: An Explanation and Alternatives - The authors demonstrate why permuting features is misleading, especially under strong feature dependence, and point to several previously described alternatives (see the sketch after this list).
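To make the permutation critique concrete, below is a small, hedged sketch of the standard permutation-importance procedure applied to two nearly duplicated features; the random-forest model and synthetic data are illustrative only.

```python
# Permutation importance on two strongly correlated features, the setting
# criticized in "Please Stop Permuting Features". Data and model are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
x1 = rng.normal(size=1000)
x2 = x1 + rng.normal(scale=0.05, size=1000)   # near-duplicate of x1
y = x1 + rng.normal(scale=0.1, size=1000)
X = np.column_stack([x1, x2])

model = RandomForestRegressor(random_state=0).fit(X, y)

# Permuting one of two nearly identical features creates unrealistic rows,
# so the reported importances can understate how important the pair really is.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(result.importances_mean)
```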
Interpretable Models
- Decision List - Like a decision tree with no branches.
- Decision Trees - The tree itself provides the interpretation (see the sketch after this list).
- Explainable Boosting Machine - A generalized additive model that predicts from learned per-feature graphs, optionally with pairwise interactions.
- k-Nearest Neighbors - The prototypical instance-based method; predictions are explained by the nearest training examples.
- Linear Regression - Easily plottable and understandable regression.
- Logistic Regression - Easily plottable and understandable classification.
- Naive Bayes - Classification using conditional probabilities; good at classification, poor at probability estimation.
- RuleFit - Sparse linear model as decision rules including feature interactions.
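As a minimal illustration of the "the model is the explanation" idea behind decision trees, the sketch below trains a shallow tree and prints its learned rules; the iris dataset and depth limit are arbitrary choices, not requirements.

```python
# A shallow decision tree whose printed rules serve as the explanation.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

# The printed rules are the model itself, so no post-hoc explainer is needed.
print(export_text(tree, feature_names=list(data.feature_names)))
```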
Landmarks
- Explanation in Artificial Intelligence: Insights from the Social Sciences - This paper provides an introduction to the social science research into explanations. The author provides four major findings: (1) explanations are contrastive, (2) explanations are selected, (3) probabilities probably don't matter, (4) explanations are social. These fit into the general theme that explanations are *contextual*.
- Sanity Checks for Saliency Maps - An important read for anyone using saliency maps. This paper proposes two experiments to determine whether saliency maps are useful: (1) the model parameter randomization test compares maps from trained and untrained models, (2) the data randomization test compares maps from models trained on the original dataset and models trained on the same dataset with randomized labels. They find that "some widely deployed saliency methods are independent of both the data the model was trained on, and the model parameters". A sketch of the first test appears after this list.
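For readers who want to try the first sanity check themselves, here is a rough sketch of the model parameter randomization test using plain input gradients in PyTorch; the tiny untrained network and random input are stand-ins for a real trained model and image, so treat it as an illustration of the procedure rather than a faithful reproduction of the paper.

```python
# Sketch of the model parameter randomization test: compare gradient saliency
# before and after randomizing the model's weights. Model and input are toy stand-ins.
import torch
from scipy.stats import spearmanr

def gradient_saliency(model, x):
    # Gradient of the output with respect to the input, used as a saliency map.
    x = x.clone().requires_grad_(True)
    model(x).sum().backward()
    return x.grad.abs().flatten()

model = torch.nn.Sequential(torch.nn.Linear(20, 16), torch.nn.ReLU(), torch.nn.Linear(16, 1))
x = torch.randn(1, 20)
saliency_original = gradient_saliency(model, x)

# Randomize the parameters; a saliency method that truly depends on the model
# should now produce a very different map (low rank correlation).
with torch.no_grad():
    for p in model.parameters():
        p.copy_(torch.randn_like(p))
saliency_randomized = gradient_saliency(model, x)

print(spearmanr(saliency_original.numpy(), saliency_randomized.numpy()))
```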
Surveys
- Explainable Deep Learning: A Field Guide for the Uninitiated - An in-depth description of XAI focused on techniques for deep learning.
Evaluations
- Quantifying Explainability of Saliency Methods in Deep Neural Networks - An analysis of how different heatmap-based saliency methods perform based on experimentation with a generated dataset.
Repositories
- slundberg/shap - A Python module for using Shapley Additive Explanations (see the sketch after this list).
- PAIR-code/what-if-tool - A tool for TensorBoard or notebooks that allows investigating model performance and fairness.
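Below is a short, hedged sketch of typical slundberg/shap usage with its TreeExplainer; the random-forest model and dataset are placeholders, and details may differ across shap releases.

```python
# Hedged sketch of typical shap usage; model, data, and plot choice are illustrative.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)              # TreeSHAP for tree ensembles
shap_values = explainer.shap_values(data.data[:100])

# Summarize which features push individual predictions up or down.
shap.summary_plot(shap_values, data.data[:100], feature_names=data.feature_names)
```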
Videos
- Debate: Interpretability is necessary for ML - A debate on whether interpretability is necessary for ML, with Rich Caruana and Patrice Simard arguing for and Kilian Weinberger and Yann LeCun arguing against.
Follow
- The Institute for Ethical AI & Machine Learning - A UK-based research center that performs research into ethical AI/ML, which frequently involves XAI.
- Tim Miller - One of the preeminent researchers in XAI.
- Rich Caruana - The man behind Explainable Boosting Machines.