Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.


awesome-xai

Awesome Explainable AI (XAI) and Interpretable ML Papers and Resources
https://github.com/altamiracorp/awesome-xai

Last synced: 3 days ago

  • Papers

    • XAI Methods

      • VEC - Variable effect characteristic curve.
      • VIN - Variable interaction network.
      • Xu, et al. - Show, attend and tell attention model.
      • Ada-SISE - Adaptive semantic input sampling for explanation.
      • ALE - Accumulated local effects plot.
      • ALIME - Autoencoder Based Approach for Local Interpretability.
      • Anchors - High-Precision Model-Agnostic Explanations.
      • Auditing - Auditing black-box models.
      • BayLIME - Bayesian local interpretable model-agnostic explanations.
      • Break Down - Break down plots for additive attributions.
      • CAM - Class activation mapping.
      • CDT - Confident interpretation of Bayesian decision tree ensembles.
      • CICE - Centered ICE plot.
      • CMM - Combined multiple models metalearner.
      • Conj Rules - Using sampling and queries to extract rules from trained neural networks.
      • CP - Contribution propagation.
      • DecText - Extracting decision trees from trained neural networks.
      • DeepLIFT - Learning important features through propagating activation differences.
      • DTD - Deep Taylor decomposition.
      • ExplainD - Explanations of evidence in additive classifiers.
      • FIRM - Feature importance ranking measure.
      • Fong, et al. - Meaningful perturbations model.
      • G-REX - Rule extraction using genetic algorithms.
      • Gibbons, et al. - Explain random forest using decision tree.
      • GoldenEye - Exploring classifiers by randomization.
      • GPD - Gaussian process decisions.
      • GPDT - Genetic program to evolve decision trees.
      • GradCAM - Gradient-weighted Class Activation Mapping.
      • GradCAM++ - Generalized gradient-based visual explanations.
      • Hara, et al. - Making tree ensembles interpretable.
      • ICE - Individual conditional expectation plots.
      • IG - Integrated gradients.
      • inTrees - Interpreting tree ensembles with inTrees.
      • IOFP - Iterative orthogonal feature projection.
      • IP - Information plane visualization.
      • KL-LIME - Kullback-Leibler Projections based LIME.
      • Krishnan, et al. - Extracting decision trees from trained neural networks.
      • Lei, et al. - Rationalizing neural predictions with generator and encoder.
      • LIME - Local Interpretable Model-Agnostic Explanations.
      • LOCO - Leave-one covariate out.
      • LORE - Local rule-based explanations.
      • Lou, et al. - Accurate intelligible models with pairwise interactions.
      • LRP - Layer-wise relevance propagation.
      • MCR - Model class reliance.
      • MES - Model explanation system.
      • MFI - Feature importance measure for non-linear algorithms.
      • NID - Neural interpretation diagram.
      • OptiLIME - Optimized LIME.
      • PALM - Partition aware local model.
      • PDA - Prediction Difference Analysis: Visualize deep neural network decisions.
      • PDP - Partial dependence plots.
      • POIMs - Positional oligomer importance matrices for understanding SVM signal detectors.
      • ProfWeight - Transfer information from deep network to simpler model.
      • Prospector - Interactive partial dependence diagnostics.
      • QII - Quantitative input influence.
      • REFNE - Extracting symbolic rules from trained neural network ensembles.
      • RETAIN - Reverse time attention model.
      • RISE - Randomized input sampling for explanation.
      • RxREN - Reverse engineering neural networks for rule extraction.
      • SHAP - A unified approach to interpreting model predictions.
      • SIDU - Similarity, difference, and uniqueness input perturbation.
      • Simonyan, et al. - Visualizing CNN classes.
      • Singh, et al. - Programs as black-box explanations.
      • STA - Interpreting models via Single Tree Approximation.
      • Strumbelj, et al. - Explanation of individual classifications using game theory.
      • SVM+P - Rule extraction from support vector machines.
      • TCAV - Testing with concept activation vectors.
      • Tolomei, et al. - Interpretable predictions of tree-ensembles via actionable feature tweaking.
      • Tree Metrics - Making sense of a forest of trees.
      • TreeSHAP - Consistent feature attribution for tree ensembles.
      • TreeView - Feature-space partitioning.
      • TREPAN - Extracting tree-structured representations of trained networks.
      • TSP - Tree space prototypes.
      • VBP - Visual back-propagation.
      • X-TREPAN - Adapted extraction of comprehensible decision trees in ANNs.
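Many of the methods above reduce to the same recipe: probe a black-box model with modified inputs and summarize its responses. As a concrete illustration, here is a minimal sketch of the PDP (partial dependence plot) computation listed above; all function and variable names are illustrative, not from any particular library.

```python
# Minimal partial dependence computation: pin one feature to each grid value
# and average the model's output over the rest of the data.
import numpy as np

def partial_dependence(model_fn, X, feature, grid):
    """Return the average prediction at each grid value of the chosen feature."""
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value                 # pin the feature of interest
        pd_values.append(model_fn(X_mod).mean())  # marginalize over the others
    return np.array(pd_values)

# Toy model f(x) = 2*x0 + x1: the PDP for feature 0 should be linear with slope 2.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
model = lambda X: 2 * X[:, 0] + X[:, 1]
grid = np.linspace(-1.0, 1.0, 5)
pd = partial_dependence(model, X, feature=0, grid=grid)
```

Production code would normally use an optimized implementation (e.g. the one shipped with scikit-learn) rather than this loop, but the averaging idea is the same.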
    • Critiques

    • Interpretable Models

    • Landmarks

      • Explanation in Artificial Intelligence: Insights from the Social Sciences - This paper provides an introduction to the social science research into explanations. The author provides four major findings: (1) explanations are contrastive, (2) explanations are selected, (3) probabilities probably don't matter, (4) explanations are social. These fit into the general theme that explanations are contextual.
      • Sanity Checks for Saliency Maps - An important read for anyone using saliency maps. This paper proposes two experiments to determine whether saliency maps are useful: (1) model parameter randomization test compares maps from trained and untrained models, (2) data randomization test compares maps from models trained on the original dataset and models trained on the same dataset with randomized labels. They find that "some widely deployed saliency methods are independent of both the data the model was trained on, and the model parameters".
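The model parameter randomization test from the Sanity Checks paper can be sketched in a few lines. This is an illustrative toy, not the paper's code: for a linear model f(x) = w·x, the input-gradient saliency map is just |w|, so randomizing the weights should visibly change the map.

```python
# Toy model parameter randomization test: a saliency method that passes
# should produce a different map once the model's weights are randomized.
import numpy as np

def gradient_saliency(w, x):
    # For a linear model f(x) = w @ x, the input gradient is w itself.
    return np.abs(w)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
w_trained = np.arange(1.0, 9.0)   # stand-in for trained weights
w_random = rng.normal(size=8)     # randomized ("untrained") weights

sal_trained = gradient_saliency(w_trained, x)
sal_random = gradient_saliency(w_random, x)
changed = not np.allclose(sal_trained, sal_random)
```

The paper's finding is that some popular saliency methods fail this check: their maps barely change after randomization, meaning they cannot be explaining what the model learned.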
    • Surveys

    • Evaluations

  • Repositories

    • Critiques

      • slundberg/shap - A Python module for using Shapley Additive Explanations.
      • PAIR-code/what-if-tool - A tool for TensorBoard or notebooks that allows investigating model performance and fairness.
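The shap library above estimates Shapley values efficiently; the quantity it approximates can be computed exactly for tiny problems by brute force. This sketch uses only the standard library, and the names are illustrative rather than shap's API:

```python
# Exact Shapley values: each feature's weighted marginal contribution,
# averaged over every possible coalition of the other features.
from itertools import combinations
from math import factorial

def shapley_values(value_fn, n_features):
    phi = [0.0] * n_features
    players = range(n_features)
    for i in players:
        for size in range(n_features):
            for coalition in combinations([j for j in players if j != i], size):
                s = len(coalition)
                # Standard Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(s) * factorial(n_features - s - 1) / factorial(n_features)
                phi[i] += weight * (value_fn(set(coalition) | {i}) - value_fn(set(coalition)))
    return phi

# Additive game v(S) = sum of payoffs in S: Shapley values recover the payoffs.
payoffs = [3.0, 1.0, 2.0]
v = lambda S: sum(payoffs[j] for j in S)
phi = shapley_values(v, 3)
```

This enumeration is exponential in the number of features, which is precisely why shap's sampling and tree-specific (TreeSHAP) estimators exist.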
  • Videos

  • Follow