# Interpretable Machine Learning
##### **A collection of code, notebooks, and resources for training interpretable machine learning (ML) models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.**
##### **Want to contribute your own examples/code/resources?** Just make a pull request.
## Setup
```
cd interpretable-ml
virtualenv -p python3.6 env
source env/bin/activate
pip install -r python/jupyter-notebooks/requirements.txt
```

**Note:** if using Ubuntu, you may have to manually install gcc first. Try the following:

```
sudo apt-get update
sudo apt-get install gcc
sudo apt-get install --reinstall build-essential
```
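Once the requirements are installed, a quick smoke test along these lines should run end to end. This is only a sketch, not code from the repo: the synthetic dataset and model settings are made up for illustration, and it assumes the requirements file provides `xgboost` and `shap`, which the notebooks rely on.

```
# Smoke test: train a tiny XGBoost model and compute SHAP values,
# mirroring what the credit notebooks do at larger scale.
# (Illustrative only; data and parameters are not from the notebooks.)
import shap
import xgboost
from sklearn.datasets import make_classification

# Synthetic stand-in data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)
model = xgboost.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)   # fast, exact SHAP for tree models
shap_values = explainer.shap_values(X)  # one contribution per feature per row
print(shap_values.shape)                # expected (200, 5) for this binary model
```

If this prints without errors, the environment is ready for the notebooks below.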
## Contents
* Presentations
* [Responsible Machine Learning](https://github.com/navdeep-G/interpretable-ml/tree/master/responsible_ml_tex/rml.pdf)
* [Overview of Interpretable Machine Learning Techniques (incomplete list)](https://github.com/navdeep-G/interpretable-ml/tree/master/iml_tex/interpretable_ml.pdf)
* [Discrimination in Machine Learning](https://github.com/navdeep-G/interpretable-ml/tree/master/fair_ml_tex/fair_mli.pdf)
* [Secure Machine Learning](https://github.com/navdeep-G/interpretable-ml/tree/master/secure_ml_tex/secure_ml.pdf)
* Jupyter Notebooks
- [Binary Classification](https://github.com/navdeep-G/interpretable-ml/tree/master/python/jupyter-notebooks/credit/binomial)
- [Shapley, PDP, & ICE](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/xgb_credit_binary_shap_pdp_ice.ipynb)
- [Decision Tree Surrogate and Leave One Covariate Out](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/dt_surrogate_loco.ipynb) (see the surrogate sketch after this list)
- [LIME](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/lime.ipynb)
- [Residual Analysis](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/debugging_resid_analysis_redux.ipynb)
- [Sensitivity Analysis](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/debugging_sens_analysis_redux.ipynb)
- [Disparate Impact Analysis](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/dia.ipynb)
- [Disparate Impact Analysis w/ Python `datatable`](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/dia_with_datatable.ipynb)
- [Multinomial Classification](https://github.com/navdeep-G/interpretable-ml/tree/master/python/jupyter-notebooks/credit/multinomial)
- [Shapley, PDP, & ICE](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/multinomial/xgb_credit_multinomial_shap_pdp_ice.ipynb)
- [Disparate Impact Analysis](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/multinomial/dia_multinomial.ipynb)
- Simulated Data for Testing Purposes
- [Binary Classification](https://github.com/navdeep-G/interpretable-ml/tree/master/python/jupyter-notebooks/simulated/binomial)
- [Shapley, PDP, & ICE](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/simulated/binomial/xgb_simulated_binomial_shap_pdp_ice.ipynb)
- [Multinomial Classification](https://github.com/navdeep-G/interpretable-ml/tree/master/python/jupyter-notebooks/simulated/multinomial)
- [Shapley, PDP, & ICE](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/simulated/multinomial/xgb_simulated_multinomial_shap_pdp_ice.ipynb)
- [Shapley, PDP, ICE, & Decision Tree Surrogate](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/simulated/multinomial/xgb_simulated_multinomial_shap_pdp_ice_DT.ipynb)
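For orientation, here is the general shape of the decision tree surrogate technique used in the notebooks above. This is a hedged sketch on synthetic data with scikit-learn, not the notebooks' code (they apply the idea to credit data with XGBoost); all settings below are hypothetical.

```
# Decision tree surrogate (sketch): fit a shallow, human-readable tree
# to a complex model's *predictions*, then read the tree's splits as an
# approximate global explanation of the complex model.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeRegressor, export_text

# Synthetic stand-in data (illustrative only).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
complex_model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Surrogate target: the complex model's predicted probabilities, not y.
p_hat = complex_model.predict_proba(X)[:, 1]
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, p_hat)

print(export_text(surrogate))     # split rules a human can read
print(surrogate.score(X, p_hat))  # R^2 fidelity to the complex model
```

A high R² here means the surrogate's simple rules are a faithful summary of the complex model; a low one means the surrogate's explanation should not be trusted.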
## Further reading

* Books/Articles
* [*Responsible Machine Learning: Actionable Strategies for Mitigating Risks & Driving Adoption*](https://www.h2o.ai/resources/ebook/responsible-machine-learning/)
* [*An Introduction to Machine Learning Interpretability, 2nd Edition*](https://www.h2o.ai/wp-content/uploads/2019/08/An-Introduction-to-Machine-Learning-Interpretability-Second-Edition.pdf)
* [*A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing*](https://www.mdpi.com/2078-2489/11/3/137)
* [*On the Art and Science of Explainable Machine Learning*](https://arxiv.org/pdf/1810.02909.pdf)
* [*Proposals for model vulnerability and security*](https://www.oreilly.com/ideas/proposals-for-model-vulnerability-and-security)
* [*Proposed Guidelines for the Responsible Use of Explainable Machine Learning*](https://arxiv.org/pdf/1906.03533.pdf)
* [*Real-World Strategies for Model Debugging*](https://medium.com/@jphall_22520/strategies-for-model-debugging-aa822f1097ce)
* [*Warning Signs: Security and Privacy in an Age of Machine Learning*](https://fpf.org/wp-content/uploads/2019/09/FPF_WarningSigns_Report.pdf)
* [*Why you should care about debugging machine learning models*](https://www.oreilly.com/radar/why-you-should-care-about-debugging-machine-learning-models/)

## Resources
* [*Awesome Machine Learning Interpretability*](https://github.com/jphall663/awesome-machine-learning-interpretability)