{"id":18414826,"url":"https://github.com/navdeep-g/interpretable-ml","last_synced_at":"2026-03-12T06:01:46.030Z","repository":{"id":40984125,"uuid":"145178548","full_name":"navdeep-G/interpretable-ml","owner":"navdeep-G","description":"Techniques \u0026 resources for training interpretable ML models, explaining ML models, and debugging ML models.","archived":false,"fork":false,"pushed_at":"2022-06-21T22:01:05.000Z","size":88465,"stargazers_count":21,"open_issues_count":13,"forks_count":8,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-10-07T09:53:39.025Z","etag":null,"topics":["accountability","data-mining","data-science","decision-trees","fairness","fatml","gradient-boosting-machine","iml","interpretability","interpretable","interpretable-ai","interpretable-machine-learning","interpretable-ml","lime","machine-learning","machine-learning-interpretability","python","transparency","xai"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/navdeep-G.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-08-18T00:51:42.000Z","updated_at":"2024-11-06T14:38:30.000Z","dependencies_parsed_at":"2022-09-18T14:31:29.100Z","dependency_job_id":null,"html_url":"https://github.com/navdeep-G/interpretable-ml","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/navdeep-G/interpretable-ml","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/navdeep-G%2Finterpretable-ml","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/navdeep-G%2Finterpret
able-ml/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/navdeep-G%2Finterpretable-ml/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/navdeep-G%2Finterpretable-ml/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/navdeep-G","download_url":"https://codeload.github.com/navdeep-G/interpretable-ml/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/navdeep-G%2Finterpretable-ml/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30416733,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-12T04:41:02.746Z","status":"ssl_error","status_checked_at":"2026-03-12T04:40:12.571Z","response_time":114,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.6:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["accountability","data-mining","data-science","decision-trees","fairness","fatml","gradient-boosting-machine","iml","interpretability","interpretable","interpretable-ai","interpretable-machine-learning","interpretable-ml","lime","machine-learning","machine-learning-interpretability","python","transparency","xai"],"created_at":"2024-11-06T03:52:27.096Z","updated_at":"2026-03-12T06:01:46.014Z","avatar_url":"https://github.com/navdeep-G.png","language":"Jupyter Notebook","readme":"# Interpretable Machine 
Learning\n\n##### **A collection of code, notebooks, and resources for training interpretable machine learning (ML) models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.**\n\n##### **Want to contribute your own examples/code/resources?** Just make a pull request.\n\n## Setup\n```\ncd interpretable-ml\nvirtualenv -p python3.6 env\nsource env/bin/activate\npip install -r python/jupyter-notebooks/requirements.txt\n```\n\n**Note:** if using Ubuntu, you may need to install gcc manually:\n```\nsudo apt-get update\nsudo apt-get install gcc\nsudo apt-get install --reinstall build-essential\n```\n\n## Contents\n* Presentations\n\t* [Responsible Machine Learning](https://github.com/navdeep-G/interpretable-ml/tree/master/responsible_ml_tex/rml.pdf)\n\t* [Overview of Interpretable Machine Learning Techniques (incomplete list)](https://github.com/navdeep-G/interpretable-ml/tree/master/iml_tex/interpretable_ml.pdf)\n\t* [Discrimination in Machine Learning](https://github.com/navdeep-G/interpretable-ml/tree/master/fair_ml_tex/fair_mli.pdf)\n\t* [Secure Machine Learning](https://github.com/navdeep-G/interpretable-ml/tree/master/secure_ml_tex/secure_ml.pdf)\n* Jupyter Notebooks\n\t- [Binary Classification](https://github.com/navdeep-G/interpretable-ml/tree/master/python/jupyter-notebooks/credit/binomial)\n\t\t- [Shapley, PDP, \u0026 ICE](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/xgb_credit_binary_shap_pdp_ice.ipynb)\n\t\t- [Decision Tree Surrogate and Leave One Covariate Out](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/dt_surrogate_loco.ipynb)\n\t\t- [LIME](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/lime.ipynb)\n\t\t- [Residual Analysis](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/debugging_resid_analysis_redux.ipynb)\n\t\t- [Sensitivity Analysis](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/debugging_sens_analysis_redux.ipynb)\n\t\t- [Disparate Impact Analysis](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/dia.ipynb)\n\t\t- [Disparate Impact Analysis w/ Python `datatable`](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/binomial/dia_with_datatable.ipynb)\n\t- [Multinomial Classification](https://github.com/navdeep-G/interpretable-ml/tree/master/python/jupyter-notebooks/credit/multinomial)\n\t\t- [Shapley, PDP, \u0026 ICE](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/multinomial/xgb_credit_multinomial_shap_pdp_ice.ipynb)\n\t\t- [Disparate Impact Analysis](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/credit/multinomial/dia_multinomial.ipynb)\n* Simulated Data for Testing Purposes\n\t- [Binary Classification](https://github.com/navdeep-G/interpretable-ml/tree/master/python/jupyter-notebooks/simulated/binomial)\n\t\t- [Shapley, PDP, \u0026 ICE](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/simulated/binomial/xgb_simulated_binomial_shap_pdp_ice.ipynb)\n\t- [Multinomial Classification]()\n\t\t- [Shapley, PDP, \u0026 ICE](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/simulated/multinomial/xgb_simulated_multinomial_shap_pdp_ice.ipynb)\n\t\t- [Shapley, PDP, ICE, \u0026 Decision Tree Surrogate](https://github.com/navdeep-G/interpretable-ml/blob/master/python/jupyter-notebooks/simulated/multinomial/xgb_simulated_multinomial_shap_pdp_ice_DT.ipynb)\n\n## Further reading\n* Books/Articles\n\t* [*Responsible Machine Learning:\n\tActionable Strategies for Mitigating Risks \u0026 Driving Adoption*](https://www.h2o.ai/resources/ebook/responsible-machine-learning/)\n\t* [*An Introduction to Machine Learning Interpretability, 2nd Edition*](https://www.h2o.ai/wp-content/uploads/2019/08/An-Introduction-to-Machine-Learning-Interpretability-Second-Edition.pdf)\n\t* [*A Responsible Machine Learning Workflow with Focus on Interpretable Models, Post-hoc Explanation, and Discrimination Testing*](https://www.mdpi.com/2078-2489/11/3/137)\n\t* [*On the Art and Science of Explainable Machine Learning*](https://arxiv.org/pdf/1810.02909.pdf)\n\t* [*Proposals for model vulnerability and security*](https://www.oreilly.com/ideas/proposals-for-model-vulnerability-and-security)\n\t* [*Proposed Guidelines for the Responsible Use of Explainable Machine Learning*](https://arxiv.org/pdf/1906.03533.pdf)\n\t* [*Real-World Strategies for Model Debugging*](https://medium.com/@jphall_22520/strategies-for-model-debugging-aa822f1097ce)\n\t* [*Warning Signs: Security and Privacy in an Age of Machine Learning*](https://fpf.org/wp-content/uploads/2019/09/FPF_WarningSigns_Report.pdf)\n\t* [*Why you should care about debugging machine learning models*](https://www.oreilly.com/radar/why-you-should-care-about-debugging-machine-learning-models/)\n\n## Resources\n* [*Awesome Machine Learning Interpretability*](https://github.com/jphall663/awesome-machine-learning-interpretability)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnavdeep-g%2Finterpretable-ml","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fnavdeep-g%2Finterpretable-ml","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fnavdeep-g%2Finterpretable-ml/lists"}