https://github.com/hsma-programme/h6_4g_explainable_ai

# HSMA Session 4G - Explainable AI

## Slides

Google Slides - Click here to view slides for this session

## Lecture Recording

An Introduction to Explainable AI + Correlation vs Causality: YouTube - Click here to watch the lecture

Feature Importance in Logistic Regression & Odds/Log Odds/Probability: YouTube - Click here to watch the lecture

Feature Importance with MDI + PFI: YouTube - Click here to watch the lecture

Partial Dependence Plots and Individual Conditional Expectation Plots: YouTube - Click here to watch the lecture

Explainable AI with SHAP: YouTube - Click here to watch the lecture

Prediction Uncertainty: YouTube - Click here to watch the lecture

## Exercises

The notebooks in the `exercises` folder can be downloaded and run locally if you have Python installed.

Alternatively, you can run each exercise on **Google Colab**, a free online platform for running notebooks in the browser. You will need to be logged in to a Google account in your browser.

Using the links below will open a fresh copy of the notebook to work on - your changes will not be visible to anyone else. However, if you want to be able to refer back to your version of the notebook in future, make sure you click **'File --> Save to Drive'**.
Your changes will then be saved to your own account, and you can access your edited copy of the notebook from https://colab.research.google.com/.

Open Exercise 1 in Google Colab:
Open In Colab

Open Exercise 2a in Google Colab:
Open In Colab

Open Exercise 2b in Google Colab:
Open In Colab

## Learning Objectives

Students should be able to:

- Give examples to demonstrate why explainable AI is important
- Explain the difference between model-agnostic and model-specific solutions
- Explain the difference between global and local explainability
- Explain the benefits of model-agnostic explainable AI techniques
- Name some of the techniques available for model-agnostic explainable AI
- Explain how to interpret feature importance coefficients from logistic regression models
- Explain the theory of mean decrease in impurity (MDI) for feature importance in tree-based models
- Explain the theory of permutation feature importance (PFI) for model-agnostic calculation of feature importance
- Calculate MDI and PFI and visualise the outputs
- Explain what partial dependence plots (PDPs) are
- Create partial dependence plots (PDPs)
- Explain what Individual Conditional Expectation (ICE) plots are
- Create Individual Conditional Expectation (ICE) plots
- Explain the history and theory behind SHAP
- Interpret SHAP plots
- Use the `shap` Python library to create plots for both global and local explainability
- List some of the criticisms of different explainability techniques
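To make the coefficient-interpretation objective concrete, here is a minimal sketch of converting logistic regression coefficients from log odds to odds ratios. It uses scikit-learn with the built-in breast cancer dataset purely as a stand-in, since the session's own data isn't bundled here:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in binary classification dataset (not the session's own data)
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Scale features so the coefficient magnitudes are comparable
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X, y)

# Each coefficient is the change in log odds per one standard deviation
# increase in that feature; exponentiating turns it into an odds ratio
log_odds = model.named_steps["logisticregression"].coef_[0]
odds_ratios = np.exp(log_odds)

# Show the five features with the largest odds ratios
for name, ratio in sorted(zip(X.columns, odds_ratios), key=lambda t: -t[1])[:5]:
    print(f"{name}: odds ratio = {ratio:.2f}")
```

An odds ratio above 1 means the feature pushes the predicted odds up; below 1 means it pushes them down. Note that with standardised features the "per unit" change is per standard deviation, not per raw unit.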
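The MDI and PFI objectives can be illustrated side by side. A sketch, again using a stand-in scikit-learn dataset: MDI comes for free from a fitted tree ensemble, while PFI is model-agnostic and computed by shuffling one feature at a time on held-out data:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

rf = RandomForestClassifier(n_estimators=100, random_state=42)
rf.fit(X_train, y_train)

# MDI: built into tree models, computed on training data; known to
# inflate the importance of high-cardinality features
mdi = rf.feature_importances_

# PFI: shuffle each feature on held-out data and measure the score drop
pfi = permutation_importance(rf, X_test, y_test, n_repeats=5,
                             random_state=42)

# Compare the two measures for the top features by PFI
ranked = sorted(zip(X.columns, mdi, pfi.importances_mean),
                key=lambda t: -t[2])
for name, m, p in ranked[:5]:
    print(f"{name}: MDI={m:.3f}, PFI={p:.3f}")
```

The two rankings often disagree, which is itself a useful teaching point: MDI reflects how the trees happened to split, while PFI reflects what the model actually relies on to score unseen data.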
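For the PDP/ICE objectives, scikit-learn's `partial_dependence` returns the underlying values directly, which makes the relationship between the two plots explicit: a PDP is just the average of the ICE curves. A sketch on the same stand-in dataset (`method="brute"` is forced so both calls use the same prediction path):

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import partial_dependence

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=42).fit(X, y)

# PDP: predicted probability averaged over all rows while one feature
# is swept across a grid of values
pdp = partial_dependence(model, X, features=["mean radius"],
                         kind="average", method="brute")

# ICE: the same sweep, but one curve per individual row
ice = partial_dependence(model, X, features=["mean radius"],
                         kind="individual", method="brute")

print(pdp["average"].shape)     # (n_outputs, n_grid_points)
print(ice["individual"].shape)  # (n_outputs, n_samples, n_grid_points)
```

For the plots themselves, `sklearn.inspection.PartialDependenceDisplay.from_estimator` draws both kinds; the arrays above are what those plots are made of.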
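Finally, a minimal sketch of the SHAP workflow covered in the session, using `shap.TreeExplainer` on a random forest fitted to the same stand-in dataset. The reshaping step is defensive: depending on the `shap` version, a binary classifier returns either a list of per-class arrays or a single 3-D array:

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(n_estimators=50, random_state=42).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles
explainer = shap.TreeExplainer(model)
sv = explainer.shap_values(X.iloc[:100])

# Keep the positive-class values, whichever shape this shap version uses
if isinstance(sv, list):
    sv = sv[1]
elif sv.ndim == 3:
    sv = sv[:, :, 1]

# Global explainability: mean absolute SHAP value per feature
global_importance = np.abs(sv).mean(axis=0)

# Local explainability: one row's SHAP values decompose that single
# prediction into per-feature contributions
local = sv[0]
print(X.columns[np.argmax(global_importance)])
```

From here, `shap.summary_plot(sv, X.iloc[:100])` gives the global beeswarm plot used in the lecture, and the per-row values feed the local plots (force/waterfall).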

## Credits

Original versions of the SHAP session (Titanic teaching notebook) and Exercise 1 (formerly the Iris dataset) were created by [Elliot Coyne](https://github.com/ElliottHSMA) for the [HSMA 5 masterclass](https://github.com/hsma-programme/h5_masterclass_shap/tree/main).