https://github.com/solegalli/machine-learning-interpretability
Code repository for the online course Machine Learning Interpretability
- Host: GitHub
- URL: https://github.com/solegalli/machine-learning-interpretability
- Owner: solegalli
- License: other
- Created: 2023-06-08T09:26:11.000Z (over 2 years ago)
- Default Branch: main
- Last Pushed: 2024-10-12T17:49:22.000Z (over 1 year ago)
- Last Synced: 2025-03-29T03:22:44.062Z (11 months ago)
- Topics: explainable-ai, explainable-machine-learning, interpretable-machine-learning, machine-learning, machine-learning-algorithms
- Language: Jupyter Notebook
- Homepage: https://www.trainindata.com/courses/enrolled/2106490
- Size: 23.9 MB
- Stars: 25
- Watchers: 1
- Forks: 18
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE

## Machine Learning Interpretability - Code Repository
Code repository for the online course [Machine Learning Interpretability](https://www.trainindata.com/p/machine-learning-interpretability)
**Course launch: 30th November, 2023**
Actively maintained.
## Table of Contents
1. **Machine Learning Interpretability**
1. Interpretability in the context of Machine Learning
2. Local vs Global Interpretability
3. Intrinsically explainable models
4. Post-hoc explainability methods
5. Challenges to interpretability
6. How to make models more explainable
2. **Intrinsically Explainable Models**
1. Linear and Logistic Regression
2. Decision trees
3. Random forests
4. Gradient boosting machines
5. Global and local interpretation
3. **Post-hoc methods - Global explainability**
1. Permutation Feature Importance
2. Partial dependence plots
3. Accumulated local effects
4. **Post-hoc methods - Local explainability**
1. LIME
2. SHAP
3. Individual conditional expectation
5. **Featuring the following Python interpretability libraries**
1. Scikit-learn
2. treeinterpreter
3. Eli5
4. Dalex
5. Alibi
6. pdpbox
7. Lime
8. Shap
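To illustrate the "global and local interpretation" idea from the intrinsically explainable models section: a linear regression is globally interpretable through its coefficients, and locally interpretable because every prediction decomposes exactly into the intercept plus one contribution per feature. A minimal sketch with scikit-learn on the built-in diabetes dataset (not taken from the course notebooks):

```python
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression

X, y = load_diabetes(return_X_y=True)
model = LinearRegression().fit(X, y)

# Global interpretation: one coefficient per feature describes the
# model's behaviour everywhere in the feature space.
print(model.coef_)

# Local interpretation: a single prediction decomposes exactly into
# the intercept plus one contribution per feature.
x = X[0]
contributions = model.coef_ * x
prediction = model.intercept_ + contributions.sum()
assert np.isclose(prediction, model.predict(x.reshape(1, -1))[0])
```

The same additive decomposition is what `treeinterpreter` provides for random forests, splitting each prediction into a bias term plus per-feature contributions along the decision path.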
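Of the global post-hoc methods listed above, permutation feature importance is the simplest to sketch: shuffle one feature at a time and measure how much the model's test score drops. Scikit-learn ships this directly; the synthetic dataset below is a hypothetical stand-in for whatever data the course uses:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data: with shuffle=False, only the first 3 of 6 features
# carry signal, the rest are noise.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permute each feature in turn on held-out data and record the drop
# in accuracy: a large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Computing importance on a held-out set, as here, measures what the model actually uses for generalisation rather than what it memorised during training.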
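Partial dependence plots, the second global method in the outline, average the model's predictions over the data while sweeping one feature across a grid. A minimal sketch using scikit-learn's `partial_dependence` on the synthetic Friedman #1 regression problem (a hypothetical stand-in dataset, not necessarily the one used in the course):

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

# Friedman #1: the target depends non-linearly on the first five features.
X, y = make_friedman1(n_samples=500, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Sweep feature 0 across a 20-point grid; at each grid value, replace
# the feature in every sample and average the predictions.
pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)
print(pd_result["average"].shape)  # one curve with 20 grid points
```

Because PDPs average over the whole dataset, they can mislead when features are correlated; that limitation motivates accumulated local effects (ALE), the next item in the outline.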
## Links
- [Online Course](https://www.trainindata.com/p/machine-learning-interpretability)