Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/sayakpaul/benchmarking-and-mli-experiments-on-the-adult-dataset
Contains benchmarking and interpretability experiments on the Adult dataset using several libraries
- Host: GitHub
- URL: https://github.com/sayakpaul/benchmarking-and-mli-experiments-on-the-adult-dataset
- Owner: sayakpaul
- Created: 2017-09-17T07:25:44.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2019-05-19T01:03:19.000Z (over 5 years ago)
- Last Synced: 2024-10-03T12:38:20.665Z (about 1 month ago)
- Topics: data-science, fastai, h2oai, interpretable-machine-learning, machine-learning, microsoft-interpret, tensorflow
- Language: Jupyter Notebook
- Homepage:
- Size: 869 KB
- Stars: 34
- Watchers: 3
- Forks: 12
- Open Issues: 1
Metadata Files:
- Readme: README.md
README
The initial experiments were part of an assignment given at TCS ILP Innovations' Lab. Later, as my appetite for the wonderful field of machine learning grew, I decided to revisit the problem and try out newer libraries.
It includes benchmarking and interpretability experiments on the [Adult Data Set](https://archive.ics.uci.edu/ml/datasets/adult) using libraries like [`fastai`](https://docs.fast.ai), [`h2o`](http://docs.h2o.ai), and [`interpret`](https://github.com/Microsoft/interpret). Along with these, I show how the `interpret` library can be used to construct explanations for `sklearn` models. **Note** that `keras` models can be wrapped as `sklearn` estimators, which lets `interpret` work on them as well.
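The notebooks themselves carry the `interpret`-specific code; as a library-agnostic sketch of the same idea (not taken from this repo, and using a toy dataset in place of Adult), any estimator exposing the standard `fit`/`predict` interface, including a wrapped `keras` model, can be handed to a model-agnostic explainer. Here, permutation importance from scikit-learn stands in for that role:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy stand-in for the Adult dataset (the real data lives at the UCI URL above).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any estimator with the usual sklearn interface works here -- including a
# keras model wrapped as an sklearn-compatible classifier.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Model-agnostic explanation: shuffle each feature in turn and measure
# the resulting drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```

This is why the sklearn wrapping matters: once a model speaks the sklearn interface, the whole ecosystem of explainers applies to it unchanged.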
I show you how easy it is to interpret a black-box machine learning model with `interpret`; I think the library really lives up to its name. Along with this, I also show how to use a decision-tree surrogate to explain models in `h2o`.
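The repo demonstrates surrogate trees through `h2o`; the sketch below illustrates the underlying technique with scikit-learn only (the names and data are illustrative, not from the notebooks). The key move is training a simple, readable tree on the black box's *predictions* rather than on the true labels, then checking how faithfully it mimics the black box:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# The "black box" whose behaviour we want to approximate.
blackbox = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Surrogate: a shallow, human-readable tree trained on the black box's
# predictions, not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, blackbox.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(blackbox.predict(X), surrogate.predict(X))
print(f"surrogate fidelity: {fidelity:.2f}")
print(export_text(surrogate))
```

A high fidelity score means the printed tree is a trustworthy summary of the black box's decision logic; a low one means the surrogate is too simple to explain it.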
To do:
- **Annotate the notebooks in plain English and include short explanations of the various interpretability methods used.**