awesome-efficient-xai
https://github.com/ynchuang/awesome-efficient-xai
Review and General Papers
- Interpretable machine learning
- Techniques for Interpretable Machine Learning
- Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
- Explainable Recommendation: A Survey and New Perspectives
Acceleration of Feature Interaction Detection
- Neural Interaction Transparency (NIT): Disentangling Learned Interactions for Improved Interpretability
- Detecting Statistical Interactions from Neural Network Weights
- Towards Interaction Detection Using Topological Analysis on Neural Networks
- Feature Interaction Interpretability: A Case for Explaining Ad-Recommendation Systems via Neural Interaction Detection
- Faith-Shap: The Faithful Shapley Interaction Index
- The Shapley Taylor Interaction Index

Open-source Libraries and Resources
- XDeep: an Open-source Python Library for Interpretable Machine Learning
- OmniXAI: A Library for Explainable AI
- Alibi Explain: An Open-source Python Library for Machine Learning Model Inspection and Interpretation
- OpenXAI: Towards a Transparent Evaluation of Model Explanations
- Captum: Model Interpretability for PyTorch
- AIX 360: An Open-source Toolkit of Explainable Machine Learning
- InterpretML: A Toolkit to Help Understand Models and Enable Responsible Machine Learning
- SHAP: A Game-Theoretic Package to Explain Machine Learning Models
- Awesome Interpretable Machine Learning
- Awesome Machine Learning Interpretability
- Awesome Fairness in AI
- A Guide for Making Black Box Models Explainable
- A Chinese Open Course of Interpretable Machine Learning
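Several of the interaction-detection entries in this list (e.g., the Shapley Taylor Interaction Index) build on discrete mixed differences: a pairwise interaction between features i and j exists exactly when f is non-additive in them. A minimal sketch of that test with a toy model (`f` and `x` here are illustrative, not from any of the papers):

```python
def interaction_strength(f, x, i, j, eps=1.0):
    """Discrete mixed difference: zero when f is additive in features i and j
    around x, nonzero when the pair genuinely interacts."""
    def bump(features):
        z = list(x)
        for k in features:
            z[k] += eps
        return f(z)
    return bump([i, j]) - bump([i]) - bump([j]) + bump([])

# Toy model with a single multiplicative interaction between features 0 and 1.
f = lambda z: z[0] * z[1] + z[2]
x = [0.0, 0.0, 0.0]
```

Here `interaction_strength(f, x, 0, 1)` is 1.0 while pairs involving feature 2 score 0.0; accelerated detectors aim to surface such pairs without probing all O(d^2) combinations.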
Non-amortized acceleration
Data-centric Acceleration
- Accelerating Shapley Explanation via Contributive Cooperator Selection
- GroupShapley: Efficient Prediction Explanation with Shapley Values for Feature Groups
- Antithetic and Monte Carlo Kernel Estimators for Partial Rankings
- Sampling Permutations for Shapley Value Estimation
- Scaling Guarantees for Nearest Counterfactual Explanations
- Optimal Counterfactual Explanations in Tree Ensembles
- Counterfactual Explanations for Oblique Decision Trees: Exact, Efficient Algorithms
- Efficient Search for Diverse Coherent Explanations
- Polynomial Calculation of the Shapley Value based on Sampling
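The sampling papers above (e.g., Sampling Permutations for Shapley Value Estimation; Polynomial Calculation of the Shapley Value based on Sampling) refine one basic estimator: average each feature's marginal contribution over random feature orderings. A minimal sketch, assuming absent features are filled in from a fixed baseline point (`f`, `x`, and `baseline` are illustrative):

```python
import random

def permutation_shapley(f, x, baseline, n_samples=2000, seed=0):
    """Monte Carlo Shapley estimate: average each feature's marginal
    contribution over random orderings; features not yet switched on
    take their values from the baseline point."""
    rng = random.Random(seed)
    d = len(x)
    phi = [0.0] * d
    for _ in range(n_samples):
        order = list(range(d))
        rng.shuffle(order)
        z = list(baseline)          # start from the baseline point
        prev = f(z)
        for i in order:             # switch features on one by one
            z[i] = x[i]
            cur = f(z)
            phi[i] += cur - prev    # marginal contribution of feature i
            prev = cur
    return [p / n_samples for p in phi]

# For an additive model every ordering gives the same marginals,
# so the estimate recovers each term's coefficient exactly.
f = lambda z: 3.0 * z[0] + 2.0 * z[1] + z[2]
phi = permutation_shapley(f, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
```

The estimator satisfies the efficiency axiom in expectation: the attributions sum to `f(x) - f(baseline)`. The papers in this subsection reduce how many permutations (or which ones) must be drawn.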
Model-centric Acceleration
- DACE: Distribution-Aware Counterfactual Explanation by Mixed-Integer Linear Optimization
- Multi-Objective Counterfactual Explanations
- Influence-Driven Explanations for Bayesian Network Classifiers
- Interpretable Counterfactual Explanations Guided by Prototypes
- Counterfactual Shapley Additive Explanations
- Explaining Deep Neural Networks with a Polynomial Time Algorithm for Shapley Values Approximation
- Efficient Computation and Analysis of Distributional Shapley Values
- Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression
- Improved Feature Importance Computations for Tree Models: Shapley vs. Banzhaf
- Fast TreeSHAP: Accelerating SHAP Value Computation for Trees
- L-Shapley and C-Shapley: Efficient Model Interpretation for Structured Data
- Efficient computation of counterfactual explanations and counterfactual metrics of prototype-based classifiers
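The Shapley-oriented entries above (KernelSHAP, Fast TreeSHAP, L-Shapley/C-Shapley) all attack the same bottleneck: the exact Shapley value is a sum over all 2^d feature coalitions. A stdlib sketch of that exponential baseline, using the common convention of filling absent features from a baseline point (the toy model is illustrative):

```python
from itertools import combinations
from math import factorial

def exact_shapley(f, x, baseline):
    """Exact Shapley values via the classical coalition sum.
    Costs O(2^d) model evaluations -- the bottleneck that model-centric
    methods reduce for specific model classes (trees, deep nets, ...)."""
    d = len(x)

    def value(S):
        # Features in coalition S take values from x; the rest from baseline.
        z = [x[i] if i in S else baseline[i] for i in range(d)]
        return f(z)

    phi = [0.0] * d
    for i in range(d):
        rest = [j for j in range(d) if j != i]
        for k in range(d):
            weight = factorial(k) * factorial(d - k - 1) / factorial(d)
            for S in combinations(rest, k):
                phi[i] += weight * (value(set(S) | {i}) - value(set(S)))
    return phi

# Interaction model: features 0 and 1 split their joint effect equally.
phi = exact_shapley(lambda z: z[0] * z[1] + z[2], [1.0, 1.0, 1.0], [0.0, 0.0, 0.0])
```

This returns approximately [0.5, 0.5, 1.0]: the multiplicative pair splits its joint effect evenly, and the attributions sum to `f(x) - f(baseline)` (efficiency axiom).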
Amortized acceleration
Predictive-driven Method
- FastSHAP: Real-Time Shapley Value Estimation
- CoRTX: Contrastive Framework for Real-time Explanation
- Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
- CXPlain: Causal Explanations for Model Interpretation under Uncertainty
- Investigating Causal Relations by Econometric Models and Cross-spectral Methods
- Shapley Explanation Networks
- Learning to Explain with Complemental Examples
- Attention-like feature explanation for tabular data
- Efficient Explanations from Empirical Explainers
- Fast Axiomatic Attribution for Neural Networks
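What unites FastSHAP, CoRTX, and the other predictive-driven papers is amortization: a slow attribution method is run offline to supervise an explainer model, after which explaining a new input costs one forward pass. A deliberately tiny sketch of that training loop on a linear toy model whose exact attributions are known; all names, weights, and hyperparameters are illustrative, not any paper's actual setup:

```python
import random

# "Slow" explainer, used only offline to generate training targets.
# For a linear model f(z) = sum(w_i * z_i) with a zero baseline, the exact
# attribution of feature i is simply w_i * x_i (toy ground truth).
w = [3.0, -2.0, 1.0]
slow_explain = lambda x: [wi * xi for wi, xi in zip(w, x)]

# Amortized explainer: one parameter per feature, attribution_i = a_i * x_i.
# Once trained, explaining a new point is a single cheap pass (no sampling).
a = [0.0, 0.0, 0.0]
rng = random.Random(0)
lr = 0.05
for step in range(2000):
    x = [rng.uniform(-1, 1) for _ in range(3)]
    target = slow_explain(x)                      # offline supervision
    pred = [ai * xi for ai, xi in zip(a, x)]
    # SGD on squared error between predicted and target attributions
    for i in range(3):
        a[i] -= lr * 2 * (pred[i] - target[i]) * x[i]

fast_explain = lambda x: [ai * xi for ai, xi in zip(a, x)]  # real-time pass
```

The explainer's parameters converge to the true coefficients, so `fast_explain` reproduces the slow explainer's output at a fraction of the inference cost; the papers above scale this idea to neural explainers and Shapley or attention-style targets.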
Generative-driven Method
- Model-Based Counterfactual Synthesizer for Interpretation
- Explanation by Progressive Exaggeration
- Explaining the Black-box Smoothly- A Counterfactual Approach
- Getting a CLUE: A Method for Explaining Uncertainty Estimates
- Cycle-Consistent Counterfactuals by Latent Transformations
- Learning Model-Agnostic Counterfactual Explanations for Tabular Data
- Towards Realistic Individual Recourse and Actionable Explanations in Black-Box Decision Making Systems
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations
- VCNet: A Self-explaining Model for Realistic Counterfactual Generation
- CRUDS: Counterfactual Recourse Using Disentangled Subspaces
- Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers
- xGEMs: Generating Examplars to Explain Black-Box Models
- Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation
- CLEAR: Generative Counterfactual Explanations on Graphs
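The generative methods above amortize what was previously a per-instance search: gradient descent on a trade-off between changing the model's prediction and staying close to the input (the Wachter-style counterfactual objective). A minimal sketch of that baseline search on a toy logistic model; the weights, target, and step sizes are illustrative:

```python
import math

# Toy differentiable classifier (hypothetical weights): p(y=1|x) = sigmoid(w.x + b)
w, b = [2.0, -1.0], -0.5
sigmoid = lambda t: 1.0 / (1.0 + math.exp(-t))
predict = lambda x: sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def counterfactual(x, target=0.9, lam=0.1, lr=0.1, steps=500):
    """Gradient-descent counterfactual search (Wachter-style objective):
    minimize (f(x') - target)^2 + lam * ||x' - x||^2."""
    xp = list(x)
    for _ in range(steps):
        p = predict(xp)
        dp = p * (1 - p)                      # derivative of the sigmoid
        for i in range(len(xp)):
            grad = 2 * (p - target) * dp * w[i] + 2 * lam * (xp[i] - x[i])
            xp[i] -= lr * grad
    return xp

x = [0.0, 0.0]          # predict(x) = sigmoid(-0.5), about 0.38
cf = counterfactual(x)  # nudged until the model's output approaches the target
```

Because of the distance penalty the search stops short of the exact target, balancing prediction change against proximity. The generative approaches replace this per-query optimization with a model that emits counterfactuals in one shot.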
Reinforcement Method