{"id":13570136,"url":"https://github.com/oegedijk/explainerdashboard","last_synced_at":"2026-01-29T22:05:22.620Z","repository":{"id":35951628,"uuid":"218478064","full_name":"oegedijk/explainerdashboard","owner":"oegedijk","description":"Quickly build Explainable AI dashboards that show the inner workings of so-called \"blackbox\" machine learning models.","archived":false,"fork":false,"pushed_at":"2026-01-25T21:45:50.000Z","size":86304,"stargazers_count":2469,"open_issues_count":41,"forks_count":346,"subscribers_count":21,"default_branch":"master","last_synced_at":"2026-01-26T08:43:34.563Z","etag":null,"topics":["dash","dashboard","data-scientists","explainer","inner-workings","interactive-dashboards","interactive-plots","model-predictions","permutation-importances","plotly","shap","shap-values","xai","xai-library"],"latest_commit_sha":null,"homepage":"http://explainerdashboard.readthedocs.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/oegedijk.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":".github/FUNDING.yml","license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null},"funding":{"github":["oegedijk"],"patreon":null,"open_collective":null,"ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"otechie":null,"custom":null}},"created_at":"2019-10-30T08:26:16.000Z","updated_at":"2026-01-25T21:43:59.000Z","dependencies_parsed_at":"2024-01-16T12:47:34.636Z","dependency_job_id":"bed2af36-e347-43ad-996c-52d04153c006","html_url":"https://github
.com/oegedijk/explainerdashboard","commit_stats":{"total_commits":1254,"total_committers":21,"mean_commits":"59.714285714285715","dds":"0.13476874003189787","last_synced_commit":"78cb10f63223604f132fae7247643b1468aa116b"},"previous_names":[],"tags_count":90,"template":false,"template_full_name":null,"purl":"pkg:github/oegedijk/explainerdashboard","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oegedijk%2Fexplainerdashboard","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oegedijk%2Fexplainerdashboard/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oegedijk%2Fexplainerdashboard/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oegedijk%2Fexplainerdashboard/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/oegedijk","download_url":"https://codeload.github.com/oegedijk/explainerdashboard/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oegedijk%2Fexplainerdashboard/sbom","scorecard":{"id":702803,"data":{"date":"2025-08-11","repo":{"name":"github.com/oegedijk/explainerdashboard","commit":"e7e2c3d4a505244615fac511db5c4e466f50c0cd"},"scorecard":{"version":"v5.2.1-40-gf6ed084d","commit":"f6ed084d17c9236477efd66e5b258b9d4cc7b389"},"score":4.7,"checks":[{"name":"Maintained","score":9,"reason":"11 commit(s) and 0 issue activity found in the last 90 days -- score normalized to 9","details":null,"documentation":{"short":"Determines if the project is \"actively maintained\".","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#maintained"}},{"name":"Code-Review","score":1,"reason":"Found 3/30 approved changesets -- score normalized to 1","details":null,"documentation":{"short":"Determines if the project requires human code review before pull requests (aka merge requests) are 
merged.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#code-review"}},{"name":"Dangerous-Workflow","score":10,"reason":"no dangerous workflow patterns detected","details":null,"documentation":{"short":"Determines if the project's GitHub Action workflows avoid dangerous patterns.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#dangerous-workflow"}},{"name":"CII-Best-Practices","score":0,"reason":"no effort to earn an OpenSSF best practices badge detected","details":null,"documentation":{"short":"Determines if the project has an OpenSSF (formerly CII) Best Practices Badge.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#cii-best-practices"}},{"name":"Binary-Artifacts","score":10,"reason":"no binaries found in the repo","details":null,"documentation":{"short":"Determines if the project has generated executable (binary) artifacts in the source repository.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#binary-artifacts"}},{"name":"Token-Permissions","score":0,"reason":"detected GitHub workflow tokens with excessive permissions","details":["Warn: no topLevel permission defined: .github/workflows/codecov.yml:1","Warn: no topLevel permission defined: .github/workflows/explainerdashboard.yml:1","Warn: no topLevel permission defined: .github/workflows/upload_to_pypi.yml:1","Info: no jobLevel write permissions found"],"documentation":{"short":"Determines if the project's workflows follow the principle of least privilege.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#token-permissions"}},{"name":"Security-Policy","score":0,"reason":"security policy file not detected","details":["Warn: no security policy file detected","Warn: no security file to analyze","Warn: no security file to analyze","Warn: no 
security file to analyze"],"documentation":{"short":"Determines if the project has published a security policy.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#security-policy"}},{"name":"License","score":10,"reason":"license file detected","details":["Info: project has a license file: LICENSE.txt:0","Info: FSF or OSI recognized license: MIT License: LICENSE.txt:0"],"documentation":{"short":"Determines if the project has defined a license.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#license"}},{"name":"Vulnerabilities","score":10,"reason":"0 existing vulnerabilities detected","details":null,"documentation":{"short":"Determines if the project has open, known unfixed vulnerabilities.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#vulnerabilities"}},{"name":"Packaging","score":10,"reason":"packaging workflow detected","details":["Info: Project packages its releases by way of GitHub Actions.: .github/workflows/upload_to_pypi.yml:6"],"documentation":{"short":"Determines if the project is published as a package that others can easily download, install, easily update, and uninstall.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#packaging"}},{"name":"Signed-Releases","score":-1,"reason":"no releases found","details":null,"documentation":{"short":"Determines if the project cryptographically signs release artifacts.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#signed-releases"}},{"name":"Branch-Protection","score":0,"reason":"branch protection not enabled on development/release branches","details":["Warn: branch protection not enabled for branch 'master'"],"documentation":{"short":"Determines if the default and release branches are protected with GitHub's branch protection 
settings.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#branch-protection"}},{"name":"Fuzzing","score":0,"reason":"project is not fuzzed","details":["Warn: no fuzzer integrations found"],"documentation":{"short":"Determines if the project uses fuzzing.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#fuzzing"}},{"name":"SAST","score":0,"reason":"SAST tool is not run on all commits -- score normalized to 0","details":["Warn: 0 commits out of 6 are checked with a SAST tool"],"documentation":{"short":"Determines if the project uses static code analysis.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#sast"}},{"name":"Pinned-Dependencies","score":0,"reason":"dependency not pinned by hash detected -- score normalized to 0","details":["Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codecov.yml:12: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/codecov.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codecov.yml:14: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/codecov.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/codecov.yml:19: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/codecov.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/codecov.yml:23: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/codecov.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/codecov.yml:35: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/codecov.yml/master?enable=pin","Warn: 
GitHub-owned GitHubAction not pinned by hash: .github/workflows/explainerdashboard.yml:25: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/explainerdashboard.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/explainerdashboard.yml:27: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/explainerdashboard.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/explainerdashboard.yml:31: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/explainerdashboard.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/explainerdashboard.yml:35: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/explainerdashboard.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/upload_to_pypi.yml:10: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/upload_to_pypi.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/upload_to_pypi.yml:14: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/upload_to_pypi.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/upload_to_pypi.yml:18: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/upload_to_pypi.yml/master?enable=pin","Warn: GitHub-owned GitHubAction not pinned by hash: .github/workflows/upload_to_pypi.yml:22: update your workflow using https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/upload_to_pypi.yml/master?enable=pin","Warn: third-party GitHubAction not pinned by hash: .github/workflows/upload_to_pypi.yml:33: update your workflow using 
https://app.stepsecurity.io/secureworkflow/oegedijk/explainerdashboard/upload_to_pypi.yml/master?enable=pin","Info:   0 out of   9 GitHub-owned GitHubAction dependencies pinned","Info:   0 out of   5 third-party GitHubAction dependencies pinned"],"documentation":{"short":"Determines if the project has declared and pinned the dependencies of its build process.","url":"https://github.com/ossf/scorecard/blob/f6ed084d17c9236477efd66e5b258b9d4cc7b389/docs/checks.md#pinned-dependencies"}}]},"last_synced_at":"2025-08-22T05:36:02.220Z","repository_id":35951628,"created_at":"2025-08-22T05:36:02.221Z","updated_at":"2025-08-22T05:36:02.221Z"},"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28886883,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-29T21:06:44.224Z","status":"ssl_error","status_checked_at":"2026-01-29T21:06:42.160Z","response_time":59,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["dash","dashboard","data-scientists","explainer","inner-workings","interactive-dashboards","interactive-plots","model-predictions","permutation-importances","plotly","shap","shap-values","xai","xai-library"],"created_at":"2024-08-01T14:00:48.672Z","updated_at":"2026-01-29T22:05:22.614Z","avatar_url":"https://github.com/oegedijk.png","language":"Python","readme":"![GitHub Workflow Status (with 
event)](https://img.shields.io/github/actions/workflow/status/oegedijk/explainerdashboard/explainerdashboard.yml)\n![https://pypi.python.org/pypi/explainerdashboard/](https://img.shields.io/pypi/v/explainerdashboard.svg)\n![https://anaconda.org/conda-forge/explainerdashboard/](https://anaconda.org/conda-forge/explainerdashboard/badges/version.svg)\n[![codecov](https://codecov.io/gh/oegedijk/explainerdashboard/branch/master/graph/badge.svg?token=0XU6HNEGBK)](undefined)\n[![Downloads](https://static.pepy.tech/badge/explainerdashboard)](https://pepy.tech/project/explainerdashboard)\n\n# explainerdashboard\nby: Oege Dijk\n\nThis package makes it convenient to quickly deploy a dashboard web app\nthat explains the workings of a (scikit-learn compatible) machine\nlearning model. The dashboard provides interactive plots on model performance,\nfeature importances, feature contributions to individual predictions,\n\"what if\" analysis,\npartial dependence plots, SHAP (interaction) values, visualization of individual\ndecision trees, etc.\n\nYou can also interactively explore components of the dashboard in a\nnotebook/colab environment (or just launch a dashboard straight from there).\nOr design a dashboard with your own [custom layout](https://explainerdashboard.readthedocs.io/en/latest/buildcustom.html)\nand explanations (thanks to the modular design of the library). 
And you can combine multiple dashboards into\na single [ExplainerHub](https://explainerdashboard.readthedocs.io/en/latest/hub.html).\n\nDashboards can be exported to static html directly from a running dashboard, or\nprogrammatically as an artifact as part of an automated CI/CD deployment process.\n\n Examples deployed at: [titanicexplainer.herokuapp.com](http://titanicexplainer.herokuapp.com),\n detailed documentation at [explainerdashboard.readthedocs.io](http://explainerdashboard.readthedocs.io),\n example notebook on how to launch dashboard for different models [here](notebooks/dashboard_examples.ipynb), and an example notebook on how to interact with the explainer object [here](notebooks/explainer_examples.ipynb).\n\n Works with `scikit-learn`, `xgboost`, `catboost`, `lightgbm`, and `skorch`\n (sklearn wrapper for tabular PyTorch models) and others.\n\n## Installation\n\nYou can install the package through pip:\n\n`pip install explainerdashboard`\n\nor conda-forge:\n\n`conda install -c conda-forge explainerdashboard`\n\n## SageMaker Studio\n\nSageMaker Studio runs notebooks and terminals in separate apps, so a common workflow\nis to export a dashboard config to disk and run it from the JupyterServer terminal.\nWhen running inside Studio, `explainerdashboard` can auto-detect SageMaker and apply\nthe correct proxy prefixes, or you can set them explicitly.\n\nNotebook example (export dashboard to disk):\n\n```python\ndb = ExplainerDashboard(\n    explainer,\n    mode=\"dash\",\n    port=8051,\n    sagemaker=True,\n)\ndb.to_yaml(\"dashboard.yaml\", explainerfile=\"dashboard.joblib\", dump_explainer=True)\n```\n\nTerminal example (run from the JupyterServer app):\n\n```bash\nexplainerdashboard run dashboard.yaml --sagemaker --port 8051 --no-browser\n```\n\nAccess the dashboard via the Studio proxy URL:\n\n```text\n\u003cSTUDIO_URL\u003e/jupyter/default/proxy/8051/\n```\n\nIf your Studio proxy path differs, you can override the 
prefixes:\n\n```bash\nexplainerdashboard run dashboard.yaml \\\n  --routes-pathname-prefix=\"/\" \\\n  --requests-pathname-prefix=\"/jupyter/default/proxy/8051/\"\n```\n\nAuto-detection uses the presence of `/opt/ml/metadata/resource-metadata.json`.\n\n## Demonstration:\n\n![explainerdashboard.gif](explainerdashboard.gif)\n\n\u003c!-- [![Dashboard Screenshot](https://i.postimg.cc/Gm8RnKVb/Screenshot-2020-07-01-at-13-25-19.png)](https://postimg.cc/PCj9mWd7) --\u003e\n(for live demonstration see [titanicexplainer.herokuapp.com](http://titanicexplainer.herokuapp.com))\n\n## Background\n\nIn a lot of organizations, especially governmental ones, but with the GDPR also increasingly in the private sector, it is becoming more and more important to be able to explain the inner workings of your machine learning algorithms. Customers have, to some extent, a right to an explanation of why they received a certain prediction, and more and more internal and external regulators require it. With recent innovations in explainable AI (e.g. SHAP values) the old black box trope is no longer valid, but it can still take quite a bit of data wrangling and plot manipulation to get the explanations out of a model. This library aims to make this easy.\n\nThe goal is manifold:\n- Make it easy for data scientists to quickly inspect the workings and performance of their model in a few lines of code\n- Make it possible for non-data-scientist stakeholders such as managers, directors, and internal and external watchdogs to interactively inspect the inner workings of the model without having to depend on a data scientist to generate every plot and table\n- Make it easy to build an application that explains individual predictions of your model for customers that ask for an explanation\n- Explain the inner workings of the model to the people working with it (human-in-the-loop) so that they gain an understanding of what the model does and doesn't do. 
This is important so that they can gain an intuition for when the model is likely missing information and may have to be overruled.\n\n\nThe library includes:\n- *Shap values* (i.e. what are the contributions of each feature to each individual prediction?)\n- *Permutation importances* (how much does the model metric deteriorate when you shuffle a feature?)\n- *Partial dependence plots* (how does the model prediction change when you vary a single feature?)\n- *Shap interaction values* (decompose the shap value into a direct effect and interaction effects)\n- For Random Forests and xgboost models: visualisation of individual decision trees\n- Plus for classifiers: precision plots, confusion matrix, ROC AUC plot, PR AUC plot, etc.\n- For regression models: goodness-of-fit plots, residual plots, etc.\n\nThe library is designed to be modular so that it should be easy to design your own interactive dashboards with plotly dash, with most of the work of calculating and formatting data, and rendering plots and tables, handled by `explainerdashboard`, so that you can focus on the layout\nand project-specific textual explanations (i.e. design it so that it will be interpretable for business users in your organization, not just data scientists).\n\nAlternatively, there is a built-in standard dashboard with pre-built tabs (that you can switch off individually).\n\n## Examples of use\n\nFitting a model, building the explainer object, building the dashboard, and then running it can be as simple as:\n\n```python\nExplainerDashboard(ClassifierExplainer(RandomForestClassifier().fit(X_train, y_train), X_test, y_test)).run()\n```\n\nBelow is a multi-line example, adding a few extra parameters.\nYou can group onehot-encoded categorical variables together using the `cats`\nparameter. 
You can either pass a dict specifying a list of onehot cols per\ncategorical feature, or if you encode using e.g.\n`pd.get_dummies(df.Name, prefix=['Name'])` (resulting in column names `'Name_Adam', 'Name_Bob'`)\nyou can simply pass the prefix `'Name'`:\n\n```python\nfrom sklearn.ensemble import RandomForestClassifier\nfrom explainerdashboard import ClassifierExplainer, ExplainerDashboard\nfrom explainerdashboard.datasets import titanic_survive, titanic_names\n\nfeature_descriptions = {\n    \"Sex\": \"Gender of passenger\",\n    \"Gender\": \"Gender of passenger\",\n    \"Deck\": \"The deck the passenger had their cabin on\",\n    \"PassengerClass\": \"The class of the ticket: 1st, 2nd or 3rd class\",\n    \"Fare\": \"The amount of money people paid\",\n    \"Embarked\": \"the port where the passenger boarded the Titanic. Either Southampton, Cherbourg or Queenstown\",\n    \"Age\": \"Age of the passenger\",\n    \"No_of_siblings_plus_spouses_on_board\": \"The sum of the number of siblings plus the number of spouses on board\",\n    \"No_of_parents_plus_children_on_board\" : \"The sum of the number of parents plus the number of children on board\",\n}\n\nX_train, y_train, X_test, y_test = titanic_survive()\ntrain_names, test_names = titanic_names()\nmodel = RandomForestClassifier(n_estimators=50, max_depth=5)\nmodel.fit(X_train, y_train)\n\nexplainer = ClassifierExplainer(model, X_test, y_test,\n                                cats=['Deck', 'Embarked',\n                                    {'Gender': ['Sex_male', 'Sex_female', 'Sex_nan']}],\n                                cats_notencoded={'Embarked': 'Stowaway'}, # defaults to 'NOT_ENCODED'\n                                descriptions=feature_descriptions, # adds a table and hover labels to dashboard\n                                labels=['Not survived', 'Survived'], # defaults to ['0', '1', etc]\n                                idxs = test_names, # defaults to X.index\n                                index_name 
= \"Passenger\", # defaults to X.index.name\n                                target = \"Survival\", # defaults to y.name\n                                )\n\ndb = ExplainerDashboard(explainer,\n                        title=\"Titanic Explainer\", # defaults to \"Model Explainer\"\n                        shap_interaction=False, # you can switch off tabs with bools\n                        )\ndb.run(port=8050)\n```\n\nFor a regression model you can also pass the units of the target variable (e.g.\ndollars):\n\n```python\nX_train, y_train, X_test, y_test = titanic_fare()\nmodel = RandomForestRegressor().fit(X_train, y_train)\n\nexplainer = RegressionExplainer(model, X_test, y_test,\n                                cats=['Deck', 'Embarked', 'Sex'],\n                                descriptions=feature_descriptions,\n                                units = \"$\", # defaults to \"\"\n                                )\n\nExplainerDashboard(explainer).run()\n```\n\n`y_test` is actually optional, although some parts of the dashboard like performance\nmetrics will obviously not be available: `ExplainerDashboard(ClassifierExplainer(model, X_test)).run()`.\n\nYou can export a dashboard to static html with `db.save_html('dashboard.html')`.\n\n\n\u003cdetails\u003e\n\u003csummary\u003eYou can pass a specific index for the static dashboard to display\u003c/summary\u003e\n\u003cp\u003e\n\n```\nExplainerDashboard(explainer, index=0).save_html('dashboard.html')\n```\n\nor\n\n\n```\nExplainerDashboard(explainer, index='Cumings, Mrs. 
John Bradley (Florence Briggs Thayer)').save_html('dashboard.html')\n```\n\u003c/p\u003e\n\u003c/details\u003e\n\nFor a simplified single page dashboard try `ExplainerDashboard(explainer, simple=True)`.\n\n\u003cdetails\u003e\u003csummary\u003eShow simplified dashboard screenshot\u003c/summary\u003e\n\u003cp\u003e\n\n\n![docs/source/screenshots/simple_classifier_dashboard.png](docs/source/screenshots/simple_classifier_dashboard.png)\n\n\u003c/p\u003e\n\u003c/details\u003e\n\u003cp\u003e\u003c/p\u003e\n\n### ExplainerHub\n\nYou can combine multiple dashboards and host them in a single place using\n[ExplainerHub](https://explainerdashboard.readthedocs.io/en/latest/hub.html):\n\n```python\ndb1 = ExplainerDashboard(explainer1, title=\"Classifier Explainer\",\n         description=\"Model predicting survival on H.M.S. Titanic\")\ndb2 = ExplainerDashboard(explainer2, title=\"Regression Explainer\",\n         description=\"Model predicting ticket price on H.M.S. Titanic\")\nhub = ExplainerHub([db1, db2])\nhub.run()\n```\n\nYou can adjust titles and descriptions, manage users and logins, store and load\nfrom config, manage the hub through a CLI and more. See the\n[ExplainerHub documentation](https://explainerdashboard.readthedocs.io/en/latest/hub.html).\n\n\u003cdetails\u003e\u003csummary\u003eShow ExplainerHub screenshot\u003c/summary\u003e\n\u003cp\u003e\n\n\n![docs/source/screenshots/explainerhub.png](docs/source/screenshots/explainerhub.png)\n\n\u003c/p\u003e\n\u003c/details\u003e\n\u003cp\u003e\u003c/p\u003e\n\n\n### Dealing with slow calculations\n\nSome of the calculations for the dashboard such as calculating SHAP (interaction) values\nand permutation importances can be slow for large datasets and complicated models.\nThere are a few tricks to make this less painful:\n\n1. Switching off the interactions tab (`shap_interaction=False`) and disabling\n    permutation importances (`no_permutations=True`). 
Especially SHAP interaction\n    values can be very slow to calculate, and often are not needed for analysis.\n    For permutation importances you can set the `n_jobs` parameter to speed up\n    the calculation in parallel.\n2. Calculate approximate shap values. You can pass `approximate=True` as a shap parameter by\n   passing `shap_kwargs=dict(approximate=True)` to the explainer initialization.\n3. Use GPU Tree SHAP by passing `shap='gputree'` when your model supports it.\n   This requires an NVIDIA GPU and a CUDA-enabled SHAP build (see the SHAP docs).\n4. Storing the explainer. The calculated properties are only calculated once\n    for each instance; however, each time you instantiate a new explainer\n    instance they will have to be recalculated. You can store them with\n    `explainer.dump(\"explainer.joblib\")` and load with e.g.\n    `ClassifierExplainer.from_file(\"explainer.joblib\")`. All calculated properties\n    are stored along with the explainer.\n5. Using a smaller (test) dataset, or using smaller decision trees.\n    TreeShap computational complexity is `O(TLD^2)`, where `T` is the\n    number of trees, `L` is the maximum number of leaves in any tree, and\n    `D` is the maximal depth of any tree. So reducing the number of leaves or average\n    depth in the decision trees can really speed up SHAP calculations.\n6. Pre-computing shap values. Perhaps you have already calculated the shap values\n    somewhere, or you can calculate them on a giant cluster somewhere, or\n    your model supports [GPU generated shap values](https://github.com/rapidsai/gputreeshap).\n    You can simply add these pre-calculated shap values to the explainer\n    with the `explainer.set_shap_values()` and `explainer.set_shap_interaction_values()` methods.\n7. Plotting only a random sample of points. When you have a lot of observations,\n    simply rendering the plots may get slow as well. 
You can pass the `plot_sample`\n    parameter to render a (different each time) random sample of observations\n    for the various scatter plots in the dashboard. E.g.:\n    `ExplainerDashboard(explainer, plot_sample=1000).run()`\n\n## Launching from within a notebook\n\nWhen working inside Jupyter or Google Colab you can use\n`ExplainerDashboard(mode='inline')`, `ExplainerDashboard(mode='external')` or\n`ExplainerDashboard(mode='jupyterlab')` to run the dashboard inline in the notebook,\nor in a separate tab while keeping the notebook interactive. (`db.run(mode='inline')`\nnow also works.)\n\nThere is also a specific interface for quickly displaying interactive components\ninline in your notebook: `InlineExplainer()`. For example, you can use\n`InlineExplainer(explainer).shap.dependence()` to display the shap dependence\ncomponent interactively in your notebook output cell.\n\n## Command line tool\n\nYou can store explainers to disk with `explainer.dump(\"explainer.joblib\")`\nand then run them from the command line:\n\n```bash\n$ explainerdashboard run explainer.joblib\n```\n\nOr store the full configuration of a dashboard to `.yaml` with e.g.\n`dashboard.to_yaml(\"dashboard.yaml\", explainerfile=\"explainer.joblib\", dump_explainer=True)` and run it with:\n\n```bash\n$ explainerdashboard run dashboard.yaml\n```\n\nYou can also build explainers from the command line with `explainerdashboard build`.\nSee the [explainerdashboard CLI documentation](https://explainerdashboard.readthedocs.io/en/latest/cli.html)\nfor details.\n\n## Customizing your dashboard\n\nThe dashboard is highly modular and customizable so that you can adjust it to your\nown needs and project.\n\n### Changing bootstrap theme\n\nYou can change the bootstrap theme by passing a link to the appropriate css\nfile. 
You can use the convenient [themes](https://dash-bootstrap-components.opensource.faculty.ai/docs/themes/) module of\n[dash_bootstrap_components](https://dash-bootstrap-components.opensource.faculty.ai/docs/) to generate\nthe css url for you:\n\n```python\nimport dash_bootstrap_components as dbc\n\nExplainerDashboard(explainer, bootstrap=dbc.themes.FLATLY).run()\n```\n\nSee the [dbc themes documentation](https://dash-bootstrap-components.opensource.faculty.ai/docs/themes/)\nand the [bootswatch website](https://bootswatch.com/) for the different themes that are supported.\n\n### Switching off tabs\n\nYou can switch off individual tabs using boolean flags. This also makes sure\nthat expensive calculations for that tab don't get executed:\n\n```python\nExplainerDashboard(explainer,\n                    importances=False,\n                    model_summary=True,\n                    contributions=True,\n                    whatif=True,\n                    shap_dependence=True,\n                    shap_interaction=False,\n                    decision_trees=True)\n```\n\n### Hiding components\n\nYou can also hide individual components on the various tabs:\n\n```python\n    ExplainerDashboard(explainer,\n        # importances tab:\n        hide_importances=True,\n        # classification stats tab:\n        hide_globalcutoff=True, hide_modelsummary=True,\n        hide_confusionmatrix=True, hide_precision=True,\n        hide_classification=True, hide_rocauc=True,\n        hide_prauc=True, hide_liftcurve=True, hide_cumprecision=True,\n        # regression stats tab:\n        # hide_modelsummary=True,\n        hide_predsvsactual=True, hide_residuals=True,\n        hide_regvscol=True,\n        # individual predictions tab:\n        hide_predindexselector=True, hide_predictionsummary=True,\n        hide_contributiongraph=True, hide_pdp=True,\n        hide_contributiontable=True,\n        # whatif tab:\n        hide_whatifindexselector=True, hide_whatifprediction=True,\n        
hide_inputeditor=True, hide_whatifcontributiongraph=True,\n        hide_whatifcontributiontable=True, hide_whatifpdp=True,\n        # shap dependence tab:\n        hide_shapsummary=True, hide_shapdependence=True,\n        # shap interactions tab:\n        hide_interactionsummary=True, hide_interactiondependence=True,\n        # decisiontrees tab:\n        hide_treeindexselector=True, hide_treesgraph=True,\n        hide_treepathtable=True, hide_treepathgraph=True,\n        ).run()\n```\n\n### Hiding toggles and dropdowns inside components\n\nYou can also hide individual toggles and dropdowns using `**kwargs`. However they\nare not individually targeted, so if you pass `hide_cats=True` then the group\ncats toggle will be hidden on every component that has one:\n\n```python\nExplainerDashboard(explainer,\n                    no_permutations=True, # do not show or calculate permutation importances\n                    hide_poweredby=True, # hide the poweredby:explainerdashboard footer\n                    hide_popout=True, # hide the 'popout' button from each graph\n                    hide_depth=True, # hide the depth (no of features) dropdown\n                    hide_sort=True, # hide sort type dropdown in contributions graph/table\n                    hide_orientation=True, # hide orientation dropdown in contributions graph/table\n                    hide_type=True, # hide shap/permutation toggle on ImportancesComponent\n                    hide_dropna=True, # hide dropna toggle on pdp component\n                    hide_sample=True, # hide sample size input on pdp component\n                    hide_gridlines=True, # hide gridlines on pdp component\n                    hide_gridpoints=True, # hide gridpoints input on pdp component\n                    hide_cats_sort=True, # hide the sorting option for categorical features\n                    hide_cutoff=True, # hide cutoff selector on classification components\n                    hide_percentage=True, # hide 
percentage toggle on classificaiton components\n                    hide_log_x=True, # hide x-axis logs toggle on regression plots\n                    hide_log_y=True, # hide y-axis logs toggle on regression plots\n                    hide_ratio=True, # hide the residuals type dropdown\n                    hide_points=True, # hide the show violin scatter markers toggle\n                    hide_winsor=True, # hide the winsorize input\n                    hide_wizard=True, # hide the wizard toggle in lift curve component\n                    hide_range=True, # hide the range subscript on feature input\n                    hide_star_explanation=True, # hide the '* indicates observed label` text\n)\n```\n\n### Setting default values\n\nYou can also set default values for the various dropdowns and toggles.\nAll the components with their parameters can be found [in the documentation](https://explainerdashboard.readthedocs.io/en/latest/components.html).\nSome examples of useful parameters to pass:\n\n```python\nExplainerDashboard(explainer,\n                    higher_is_better=False, # flip green and red in contributions graph\n                    n_input_cols=3, # divide feature inputs into 3 columns on what if tab\n                    col='Fare', # initial feature in shap graphs\n                    color_col='Age', # color feature in shap dependence graph\n                    interact_col='Age', # interaction feature in shap interaction\n                    depth=5, # only show top 5 features\n                    sort = 'low-to-high', # sort features from lowest shap to highest in contributions graph/table\n                    cats_topx=3, # show only the top 3 categories for categorical features\n                    cats_sort='alphabet', # short categorical features alphabetically\n                    orientation='horizontal', # horizontal bars in contributions graph\n                    index='Rugg, Miss. 
Emily', # initial index to display\n                    pdp_col='Fare', # initial pdp feature\n                    cutoff=0.8, # cutoff for classification plots\n                    round=2 # rounding to apply to floats\n                    show_metrics=['accuracy', 'f1', custom_metric] # only show certain metrics\n                    plot_sample=1000, # only display a 1000 random markers in scatter plots\n                    )\n```\n\n\n### Designing your own layout\n\nAll the components in the dashboard are modular and re-usable, which means that\nyou can build your own custom [dash](https://dash.plotly.com/) dashboards\naround them.\n\nBy using the built-in `ExplainerComponent` class it is easy to build your\nown layouts, with just a bare minimum of knowledge of HTML and [bootstrap](https://dash-bootstrap-components.opensource.faculty.ai/docs/quickstart/). For\nexample if you only wanted to display the `ConfusionMatrixComponent` and\n`ShapContributionsGraphComponent`, but hide\na few toggles:\n\n```python\nfrom explainerdashboard.custom import *\n\nclass CustomDashboard(ExplainerComponent):\n    def __init__(self, explainer, name=None):\n        super().__init__(explainer, title=\"Custom Dashboard\")\n        self.confusion = ConfusionMatrixComponent(explainer, name=self.name+\"cm\",\n                            hide_selector=True, hide_percentage=True,\n                            cutoff=0.75)\n        self.contrib = ShapContributionsGraphComponent(explainer, name=self.name+\"contrib\",\n                            hide_selector=True, hide_cats=True,\n                            hide_depth=True, hide_sort=True,\n                            index='Rugg, Miss. 
Emily')\n\n    def layout(self):\n        return dbc.Container([\n            dbc.Row([\n                dbc.Col([\n                    html.H1(\"Custom Demonstration:\"),\n                    html.H3(\"How to build your own layout using ExplainerComponents.\")\n                ])\n            ]),\n            dbc.Row([\n                dbc.Col([\n                    self.confusion.layout(),\n                ]),\n                dbc.Col([\n                    self.contrib.layout(),\n                ])\n            ])\n        ])\n\ndb = ExplainerDashboard(explainer, CustomDashboard, hide_header=True).run()\n```\n\n\u003cdetails\u003e\u003csummary\u003eShow example custom dashboard screenshot\u003c/summary\u003e\n\u003cp\u003e\n\n\n![docs/source/screenshots/custom_dashboard.png](docs/source/screenshots/custom_dashboard.png)\n\n\u003c/p\u003e\n\n\u003c/details\u003e\n\u003cp\u003e\u003c/p\u003e\n\n\nYou can use this to define your own layouts, specifically tailored to your\nown model, project and needs. You can use the [ExplainerComposites](https://github.com/oegedijk/explainerdashboard/blob/master/explainerdashboard/dashboard_components/composites.py) that\nare used for the tabs of the default dashboard as a starting point, and edit\nthem to reorganize components, add text, etc.\nSee [custom dashboard documentation](https://explainerdashboard.readthedocs.io/en/latest/custom.html)\nfor more details. A deployed custom dashboard can be found [here](http://titanicexplainer.herokuapp.com/custom/)([source code](https://github.com/oegedijk/explainingtitanic/blob/master/buildcustom.py)).\n\n## Deployment\n\nIf you wish to use e.g. `gunicorn` or `waitress` to deploy the dashboard you should add\n`app = db.flask_server()` to your code to expose the Flask server. You can then\nstart the server with e.g. 
`gunicorn dashboard:app`
(assuming the file you defined the dashboard in was called `dashboard.py`).
See also the [ExplainerDashboard section](https://explainerdashboard.readthedocs.io/en/latest/dashboards.html)
and the [deployment section of the documentation](https://explainerdashboard.readthedocs.io/en/latest/deployment.html).

It can be helpful to store your `explainer` and dashboard layout to disk, and
then reload them, e.g.:

**generate_dashboard.py**:
```python
from explainerdashboard import ClassifierExplainer, ExplainerDashboard
from explainerdashboard.custom import *

explainer = ClassifierExplainer(model, X_test, y_test)

# building an ExplainerDashboard ensures that all necessary properties
# get calculated:
db = ExplainerDashboard(explainer, [ShapDependenceComposite, WhatIfComposite],
                        title='Awesome Dashboard', hide_whatifpdp=True)

# store both the explainer and the dashboard configuration:
db.to_yaml("dashboard.yaml", explainerfile="explainer.joblib", dump_explainer=True)
```

You can then reload it in **dashboard.py**:
```python
from explainerdashboard import ClassifierExplainer, ExplainerDashboard

# you can override params during load from_config:
db = ExplainerDashboard.from_config("dashboard.yaml", title="Awesomer Title")

app = db.flask_server()
```

And then run it with:

```sh
$ gunicorn dashboard:app
```

or with waitress (which also works on Windows):

```sh
$ waitress-serve dashboard:app
```

### Minimizing memory usage

When you deploy a dashboard with a dataset with a large number of rows (`n`) and columns (`m`),
the memory usage of the dashboard can be substantial. You can check the (approximate)
memory usage with `explainer.memory_usage()`. (As a side note: if you have lots
of rows, you probably want to set the `plot_sample` parameter as well.)

In order to reduce the memory footprint there are a number of things you can do:

1. Not including the shap interactions tab: shap interaction values have shape (`n*m*m`),
    so they can take a substantial amount of memory.
2. Setting a lower precision. By default shap values are stored as `'float64'`,
    but you can store them as `'float32'` instead and save half the space:
    `ClassifierExplainer(model, X_test, y_test, precision='float32')`. You
    can also set a lower precision on your `X_test` dataset yourself, of course.
3. For multiclass classifiers, by default `ClassifierExplainer` calculates
    shap values for all classes. If you're only interested in a single class,
    you can drop the other shap values: `explainer.keep_shap_pos_label_only(pos_label)`.
4. Storing data externally. You can for example only store a subset of 10,000 rows in
    the explainer itself (enough to generate importance and dependence plots),
    and store the rest of your millions of rows of input data in an external file
    or database:
    - with `explainer.set_X_row_func()` you can set a function that takes
        an `index` as argument and returns a single row dataframe with model
        compatible input data for that index. This function can include a query
        to a database or a file read.
    - with `explainer.set_y_func()` you can set a function that takes
        an `index` as argument and returns the observed outcome `y` for
        that index.
    - with `explainer.set_index_list_func()` you can set a function
        that returns a list of available indexes that can be queried. It only gets
        called upon start of the dashboard.

    If you have a very large number of indexes and the user is able to look
    them up elsewhere, you can also replace the index dropdowns with a simple free
    text field with `index_dropdown=False`. Only valid indexes (i.e. those in the
    `get_index_list()` list) get propagated
    to other components by default, but this can be overridden with `index_check=False`.
    Instead of an `index_list_func` you can also set an
    `explainer.set_index_check_func(func)`, which should return a bool indicating whether
    the `index` exists or not.

    Important: these functions can be called multiple times by multiple independent
    components, so it is probably best to implement some kind of caching functionality.
    The functions you pass can also be methods, so you have access to all of the
    internals of the explainer.


## Documentation

Documentation can be found at [explainerdashboard.readthedocs.io](https://explainerdashboard.readthedocs.io/en/latest/).

Example notebook on how to launch dashboards for different model types here: [dashboard_examples.ipynb](notebooks/dashboard_examples.ipynb).

Example notebook on how to interact with the explainer object here: [explainer_examples.ipynb](notebooks/explainer_examples.ipynb).

Example notebook on how to design a custom dashboard: [custom_examples.ipynb](notebooks/custom_examples.ipynb).



## Deployed example:

You can find an example dashboard at [titanicexplainer.herokuapp.com](http://titanicexplainer.herokuapp.com)

(source code at [https://github.com/oegedijk/explainingtitanic](https://github.com/oegedijk/explainingtitanic))

## Citation:

A doi can be found at [zenodo](https://zenodo.org/record/7633294)