{"id":13704382,"url":"https://github.com/maif/shapash","last_synced_at":"2026-01-30T12:05:39.367Z","repository":{"id":37980700,"uuid":"259856594","full_name":"MAIF/shapash","owner":"MAIF","description":"🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models","archived":false,"fork":false,"pushed_at":"2025-05-06T07:55:20.000Z","size":64743,"stargazers_count":2869,"open_issues_count":45,"forks_count":347,"subscribers_count":36,"default_branch":"master","last_synced_at":"2025-05-06T08:51:14.326Z","etag":null,"topics":["ethical-artificial-intelligence","explainability","explainable-ml","interpretability","lime","machine-learning","python","shap","transparency"],"latest_commit_sha":null,"homepage":"https://maif.github.io/shapash/","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/MAIF.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2020-04-29T07:34:23.000Z","updated_at":"2025-05-06T07:55:24.000Z","dependencies_parsed_at":"2022-07-12T19:40:53.740Z","dependency_job_id":"7566d2a3-00f1-4674-8b77-c667f8d51efb","html_url":"https://github.com/MAIF/shapash","commit_stats":{"total_commits":1023,"total_committers":39,"mean_commits":26.23076923076923,"dds":0.6842619745845553,"last_synced_commit":"bfaa6aae743164a7945399da88ad155b5e129652"},"previous_names":[],"tags_count":50,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MAIF%2Fshapash","tags_url":"ht
tps://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MAIF%2Fshapash/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MAIF%2Fshapash/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/MAIF%2Fshapash/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/MAIF","download_url":"https://codeload.github.com/MAIF/shapash/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253764933,"owners_count":21960658,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ethical-artificial-intelligence","explainability","explainable-ml","interpretability","lime","machine-learning","python","shap","transparency"],"created_at":"2024-08-02T21:01:08.557Z","updated_at":"2026-01-30T12:05:39.356Z","avatar_url":"https://github.com/MAIF.png","language":"Jupyter Notebook","readme":"\u003cp align=\"center\"\u003e\n\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/shapash-resize.png\" width=\"300\" title=\"shapash-logo\"\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003c!-- Tests --\u003e\n  \u003ca href=\"https://github.com/MAIF/shapash/workflows/Build%20%26%20Test/badge.svg\"\u003e\n    \u003cimg src=\"https://github.com/MAIF/shapash/workflows/Build%20%26%20Test/badge.svg\" alt=\"tests\"\u003e\n  \u003c/a\u003e\n  \u003c!-- PyPi --\u003e\n  \u003ca href=\"https://img.shields.io/pypi/v/shapash\"\u003e\n    \u003cimg src=\"https://img.shields.io/pypi/v/shapash\" alt=\"pypi\"\u003e\n  \u003c/a\u003e\n  \u003c!-- 
Downloads --\u003e\n  \u003ca href=\"https://static.pepy.tech/personalized-badge/shapash?period=total\u0026units=international_system\u0026left_color=grey\u0026right_color=orange\u0026left_text=Downloads\"\u003e\n    \u003cimg src=\"https://static.pepy.tech/personalized-badge/shapash?period=total\u0026units=international_system\u0026left_color=grey\u0026right_color=orange\u0026left_text=Downloads\" alt=\"downloads\"\u003e\n  \u003c/a\u003e\n  \u003c!-- Python Version --\u003e\n  \u003ca href=\"https://img.shields.io/pypi/pyversions/shapash\"\u003e\n    \u003cimg src=\"https://img.shields.io/pypi/pyversions/shapash\" alt=\"pyversion\"\u003e\n  \u003c/a\u003e\n  \u003c!-- License --\u003e\n  \u003ca href=\"https://img.shields.io/pypi/l/shapash\"\u003e\n    \u003cimg src=\"https://img.shields.io/pypi/l/shapash\" alt=\"license\"\u003e\n  \u003c/a\u003e\n  \u003c!-- Doc --\u003e\n  \u003ca href=\"https://shapash.readthedocs.io/en/latest/\"\u003e\n    \u003cimg src=\"https://readthedocs.org/projects/shapash/badge/?version=latest\" alt=\"doc\"\u003e\n  \u003c/a\u003e\n\u003c/p\u003e\n\n## 🔍 Overview\n\nShapash is a Python library designed to **make machine learning interpretable and comprehensible for everyone**. It offers various visualizations with clear and explicit labels that are easily understood by all.\n\nWith Shapash, you can generate a **Webapp** that simplifies the comprehension of **interactions between the model's features**, and allows **seamless navigation between local and global explainability**. This Webapp enables Data Scientists to effortlessly understand their models and **share their results with both data scientists and non-data experts**.\n\nAdditionally, Shapash contributes to data science auditing by **presenting valuable information** about any model and data **in a comprehensive report**.\n\nShapash is suitable for Regression, Binary Classification and Multiclass problems. 
It is **compatible with numerous models**, including Catboost, Xgboost, LightGBM, Sklearn Ensemble, Linear models, and SVM. For other models, solutions to integrate Shapash are available; more details can be found [here](#how_shapash_works).\n\n\u003e [!NOTE]\n\u003e If you want to give us feedback : [Feedback form](https://framaforms.org/shapash-collecting-your-feedback-and-use-cases-1687456776)\n\n[Shapash App Demo](https://shapash-demo.ossbymaif.fr/)\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/shapash_global.gif\" width=\"800\"\u003e\n\u003c/p\u003e\n\n## 🌱 Documentation and resources\n\n- Readthedocs: [![documentation badge](https://readthedocs.org/projects/shapash/badge/?version=latest)](https://shapash.readthedocs.io/en/latest/)\n- [Video presentation for french speakers](https://www.youtube.com/watch?v=r1R_A9B9apk)\n- Medium:\n  - [Understand your model with Shapash - Towards AI](https://pub.towardsai.net/shapash-making-ml-models-understandable-by-everyone-8f96ad469eb3)\n  - [Model auditability - Towards DS](https://towardsdatascience.com/shapash-1-3-2-announcing-new-features-for-more-auditable-ai-64a6db71c919)\n  - [Group of features - Towards AI](https://pub.towardsai.net/machine-learning-6011d5d9a444)\n  - [Building confidence on explainability - Towards DS](https://towardsdatascience.com/building-confidence-on-explainability-methods-66b9ee575514)\n  - [Picking Examples to Understand Machine Learning Model](https://www.kdnuggets.com/2022/11/picking-examples-understand-machine-learning-model.html)\n  - [Enhancing Webapp Built-In Features for Comprehensive Machine Learning Model Interpretation](https://pub.towardsai.net/shapash-2-3-0-comprehensive-model-interpretation-40b50157c2fb)\n\n\n## 🎉 What's new ?\n\n| Version       | New Feature                                                                           | Description                                                              
                                                              | Tutorial |\n|:-------------:|:-------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------------------------------------:|:--------:|\n| 2.3.x         |  Additional dataset columns \u003cbr\u003e [New demo](https://shapash-demo.ossbymaif.fr/) \u003cbr\u003e [Article](https://pub.towardsai.net/shapash-2-3-0-comprehensive-model-interpretation-40b50157c2fb)                                                                | In Webapp: Target and error columns added to dataset and possibility to add features outside the model for more filtering options            |  [\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/add_column_icon.png\" width=\"50\" title=\"add_column\"\u003e](https://github.com/MAIF/shapash/blob/master/tutorial/generate_webapp/tuto-webapp01-additional-data.ipynb)\n| 2.3.x         |  Identity card \u003cbr\u003e [New demo](https://shapash-demo.ossbymaif.fr/) \u003cbr\u003e [Article](https://pub.towardsai.net/shapash-2-3-0-comprehensive-model-interpretation-40b50157c2fb)                                                                  | In Webapp: New identity card to summarize the information of the selected sample                  |  [\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/identity_card.png\" width=\"50\" title=\"identity\"\u003e](https://github.com/MAIF/shapash/blob/master/tutorial/generate_webapp/tuto-webapp01-additional-data.ipynb)\n| 2.2.x         |  Picking samples \u003cbr\u003e [Article](https://www.kdnuggets.com/2022/11/picking-examples-understand-machine-learning-model.html)                                                                | New tab in the webapp for picking samples. 
The graph represents the \"True Values Vs Predicted Values\"            |  [\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/picking.png\" width=\"50\" title=\"picking\"\u003e](https://github.com/MAIF/shapash/blob/master/tutorial/plots_and_charts/tuto-plot06-prediction_plot.ipynb)\n| 2.2.x         |  Dataset Filter \u003cbr\u003e                                                              | New tab in the webapp to filter data. And several improvements in the webapp: subtitles, labels, screen adjustments                   |  [\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/webapp.png\" width=\"50\" title=\"webapp\"\u003e](https://github.com/MAIF/shapash/blob/master/tutorial/tutorial01-Shapash-Overview-Launch-WebApp.ipynb)\n| 2.0.x         |  Refactoring Shapash \u003cbr\u003e                                                                   | Refactoring attributes of compile methods and init. Refactoring implementation for new backends                   |  [\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/modular.png\" width=\"50\" title=\"modular\"\u003e](https://github.com/MAIF/shapash/blob/master/tutorial/explainer_and_backend/tuto-expl06-Shapash-custom-backend.ipynb)\n| 1.7.x         |  Variabilize Colors \u003cbr\u003e                                                                   | Giving possibility to have your own colour palette for outputs adapted to your design                   |  [\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/variabilize-colors.png\" width=\"50\" title=\"variabilize-colors\"\u003e](https://github.com/MAIF/shapash/blob/master/tutorial/common/tuto-common02-colors.ipynb)\n| 1.6.x         |  Explainability Quality Metrics \u003cbr\u003e [Article](https://towardsdatascience.com/building-confidence-on-explainability-methods-66b9ee575514)                                                                
   | To help increase confidence in explainability methods, you can evaluate the relevance of your explainability using 3 metrics: **Stability**, **Consistency** and **Compacity**                   |  [\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/quality-metrics.png\" width=\"50\" title=\"quality-metrics\"\u003e](https://github.com/MAIF/shapash/blob/master/tutorial/explainability_quality/tuto-quality01-Builing-confidence-explainability.ipynb)\n| 1.4.x         |  Groups of features \u003cbr\u003e [Demo](https://shapash-demo2.ossbymaif.fr/)                  | You can now regroup features that share common properties together. \u003cbr\u003eThis option can be useful if your model has a lot of features. |  [\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/groups_features.gif\" width=\"120\" title=\"groups-features\"\u003e](https://github.com/MAIF/shapash/blob/master/tutorial/common/tuto-common01-groups_of_features.ipynb)    |\n| 1.3.x         |  Shapash Report \u003cbr\u003e [Demo](https://shapash.readthedocs.io/en/latest/report.html)     | A standalone HTML report that constitutes a basis of an audit document.                                                                
|  [\u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/report-icon.png\" width=\"50\" title=\"shapash-report\"\u003e](https://github.com/MAIF/shapash/blob/master/tutorial/generate_report/tuto-shapash-report01.ipynb)    |\n\n## 🔥 Features\n\n- Display clear and understandable results: plots and outputs use **explicit labels** for each feature and its values\n\n\u003cp align=\"center\"\u003e\n  \u003cimg align=\"left\" src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/shapash-grid-images-02.png?raw=true\" width=\"28%\"/\u003e\n  \u003cimg src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/shapash-grid-images-06.png?raw=true\" width=\"28%\" /\u003e\n  \u003cimg align=\"right\" src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/shapash-grid-images-04.png?raw=true\" width=\"28%\" /\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg align=\"left\" src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/shapash-grid-images-01.png?raw=true\" width=\"28%\" /\u003e\n  \u003cimg src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/shapash-resize.png?raw=true\" width=\"18%\" /\u003e\n  \u003cimg align=\"right\" src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/shapash-grid-images-13.png?raw=true\" width=\"28%\" /\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003cimg align=\"left\" src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/shapash-grid-images-12.png?raw=true\" width=\"33%\" /\u003e\n  \u003cimg src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/shapash-grid-images-03.png?raw=true\" width=\"28%\" /\u003e\n  \u003cimg align=\"right\" src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/shapash-grid-images-10.png?raw=true\" width=\"25%\" /\u003e\n\u003c/p\u003e\n\n\n- Allow Data Scientists to quickly understand their models using a **webapp** to easily navigate between global and local explainability, and 
understand how the different features contribute: [Live Demo Shapash-Monitor](https://shapash-demo.ossbymaif.fr/)\n\n- **Summarize and export** local explanation\n\u003e **Shapash** provides concise and clear local explanations, enabling users of any data background to understand a local prediction of a supervised model through a summarized and explicit explanation\n\n\n- **Evaluate** the quality of your explainability with various metrics\n\n- Effortlessly share and discuss results with non-Data users\n\n- Select subsets for in-depth analysis of explainability by filtering based on explanatory and additional features, as well as correct or wrong predictions. [Picking Examples to Understand Machine Learning Model](https://www.kdnuggets.com/2022/11/picking-examples-understand-machine-learning-model.html)\n\n- Deploy the interpretability part of your project: from model training to deployment (API or Batch Mode)\n\n- Contribute to the **auditability of your model** by generating a **standalone HTML report** of your project. [Report Example](https://shapash.readthedocs.io/en/latest/report.html)\n\u003eWe believe that this report will offer valuable support for auditing models and data, leading to improved AI governance.\nData Scientists can now provide anyone interested in their project with **a document that captures various aspects of their work as the foundation for an audit report**.\nThis document can be easily shared among teams (internal audit, DPO, risk, compliance...).\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/shapash-report-demo.gif\" width=\"800\"\u003e\n\u003c/p\u003e\n\n\u003ca name=\"how_shapash_works\"\u003e\u003c/a\u003e\n## ⚙️ How Shapash works\n**Shapash** is an overlay package for libraries focused on model interpretability. 
It uses a Shap or Lime backend\nto compute contributions.\n**Shapash** builds upon the various steps required to create a machine learning model, making the results more understandable.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/shapash-diagram.png\" width=\"700\" title=\"diagram\"\u003e\n\u003c/p\u003e\n\n**Shapash** is suitable for Regression, Binary Classification or Multiclass problems. \u003cbr /\u003e\nIt is compatible with numerous models: *Catboost*, *Xgboost*, *LightGBM*, *Sklearn Ensemble*, *Linear models*, *SVM*. \u003cbr /\u003e\n\nIf your model is not in the list of compatible models, it is possible to provide Shapash with local contributions calculated with shap or another method. [Here's](https://github.com/MAIF/shapash/blob/master/tutorial/explainer_and_backend/tuto-expl05-Shapash-using-Fasttreeshap.ipynb) an example of how to provide contributions to Shapash. An [issue](https://github.com/MAIF/shapash/issues/488) has been created to enhance this use case.\n\nShapash can use a category-encoders object, a sklearn ColumnTransformer or simply a features dictionary. \u003cbr /\u003e\n- Category_encoder: *OneHotEncoder*, *OrdinalEncoder*, *BaseNEncoder*, *BinaryEncoder*, *TargetEncoder*\n- Sklearn ColumnTransformer: *OneHotEncoder*, *OrdinalEncoder*, *StandardScaler*, *QuantileTransformer*, *PowerTransformer*\n\n## 🛠 Installation\n\nShapash is intended to work with Python versions 3.9 to 3.12. 
Installation can be done with pip:\n\n```bash\npip install shapash\n```\n\nIn order to generate the Shapash Report, some extra requirements are needed.\nYou can install these using the following command:\n```bash\npip install shapash[report]\n```\n\nIf you encounter **compatibility issues**, you may check the corresponding section in the Shapash documentation [here](https://shapash.readthedocs.io/en/latest/installation-instructions/index.html).\n\n## 🕐 Quickstart\n\nThe 4 steps to display results:\n\n- Step 1: Declare SmartExplainer Object\n  \u003e There is one mandatory parameter when declaring the SmartExplainer: the model\n  \u003e You can declare a features dict here to specify the labels to display\n\n```python\nfrom shapash import SmartExplainer\n\nxpl = SmartExplainer(\n    model=regressor,\n    features_dict=house_dict,  # Optional parameter\n    preprocessing=encoder,  # Optional: compile step can use inverse_transform method\n    postprocessing=postprocess,  # Optional: see tutorial postprocessing\n)\n```\n\n- Step 2: Compile Dataset\n  \u003e There is one mandatory parameter in the compile method: the dataset\n\n```python\nxpl.compile(\n    x=xtest,\n    y_pred=y_pred,  # Optional: for your own prediction (by default: model.predict)\n    y_target=yTest,  # Optional: allows to display True Values vs Predicted Values\n    additional_data=xadditional,  # Optional: additional dataset of features for Webapp\n    additional_features_dict=features_dict_additional,  # Optional: dict additional data\n)\n```\n\n- Step 3: Display output\n  \u003e There are several outputs and plots available. 
For example, you can launch the web app:\n\n```python\napp = xpl.run_app()\n```\n\n[Live Demo Shapash-Monitor](https://shapash-demo.ossbymaif.fr/)\n\n- Step 4: Generate the Shapash Report\n  \u003e This step generates a standalone HTML report of your project using the different splits\n  of your dataset and also the metrics you used:\n\n```python\nxpl.generate_report(\n    output_file=\"path/to/output/report.html\",\n    project_info_file=\"path/to/project_info.yml\",\n    x_train=xtrain,\n    y_train=ytrain,\n    y_test=ytest,\n    title_story=\"House prices report\",\n    title_description=\"\"\"This document is a data science report of the kaggle house prices tutorial project.\n        It was generated using the Shapash library.\"\"\",\n    metrics=[{\"name\": \"MSE\", \"path\": \"sklearn.metrics.mean_squared_error\"}],\n)\n```\n\n[Report Example](https://shapash.readthedocs.io/en/latest/report.html)\n\n- Step 5: From training to deployment: SmartPredictor Object\n  \u003e Shapash provides a SmartPredictor object to deploy the summary of local explanations for operational needs.\n  It is an object dedicated to deployment, lighter than SmartExplainer, with additional consistency checks.\n  SmartPredictor can be used with an API or in batch mode. 
It provides predictions, detailed or summarized local\n  explainability using appropriate wording.\n\n```python\npredictor = xpl.to_smartpredictor()\n```\nSee the tutorials to learn how to use the SmartPredictor object.\n\n## 📖  Tutorials\nThis GitHub repository offers many tutorials to help you easily get started with Shapash.\n\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eOverview\u003c/b\u003e \u003c/summary\u003e\n\n- [Launch the webapp with a concrete use case](tutorial/tutorial01-Shapash-Overview-Launch-WebApp.ipynb)\n- [Jupyter Overviews - The main outputs and methods available with the SmartExplainer object](tutorial/tutorial02-Shapash-overview-in-Jupyter.ipynb)\n- [Shapash in production: From model training to deployment (API or Batch Mode)](tutorial/tutorial03-Shapash-overview-model-in-production.ipynb)\n- [Use groups of features](tutorial/common/tuto-common01-groups_of_features.ipynb)\n- [Deploy local explainability in production with SmartPredictor](tutorial/predictor_to_production/tuto-smartpredictor-introduction-to-SmartPredictor.ipynb)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eCharts and plots\u003c/b\u003e \u003c/summary\u003e\n\n- [**Shapash** Features Importance](tutorial/plots_and_charts/tuto-plot03-features-importance.ipynb)\n- [Contribution plot to understand how one feature affects a prediction](tutorial/plots_and_charts/tuto-plot02-contribution_plot.ipynb)\n- [Summarize, display and export local contribution using filter and local_plot method](tutorial/plots_and_charts/tuto-plot01-local_plot-and-to_pandas.ipynb)\n- [Contributions Comparing plot to understand why predictions on several individuals are different](tutorial/plots_and_charts/tuto-plot04-compare_plot.ipynb)\n- [Visualize interactions between pairs of variables](tutorial/plots_and_charts/tuto-plot05-interactions-plot.ipynb)\n- [Display True Values Vs Predicted Values](tutorial/plots_and_charts/tuto-plot06-prediction_plot.ipynb)\n- 
[Customize colors in Webapp, plots and report](tutorial/common/tuto-common02-colors.ipynb)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eDifferent ways to use Encoders and Dictionaries\u003c/b\u003e \u003c/summary\u003e\n\n- [Use Category_Encoder \u0026 inverse transformation](tutorial/use_encoders/tuto-encoder01-using-category_encoder.ipynb)\n- [Use ColumnTransformers](tutorial/use_encoders/tuto-encoder02-using-columntransformer.ipynb)\n- [Use Simple Python Dictionaries](tutorial/use_encoders/tuto-encoder03-using-dict.ipynb)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eDisplaying data with postprocessing\u003c/b\u003e \u003c/summary\u003e\n\n[Using postprocessing parameter in compile method](tutorial/postprocess/tuto-postprocess01.ipynb)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eUsing different backends\u003c/b\u003e \u003c/summary\u003e\n\n- [Compute Shapley Contributions using **Shap**](tutorial/explainer_and_backend/tuto-expl01-Shapash-Viz-using-Shap-contributions.ipynb)\n- [Use **Lime** to compute local explanation, summarize it with **Shapash**](tutorial/explainer_and_backend/tuto-expl02-Shapash-Viz-using-Lime-contributions.ipynb)\n- [Compile Lime faster and check consistency of contributions](tutorial/explainer_and_backend/tuto-expl04-Shapash-compute-Lime-faster.ipynb)\n- [Use **FastTreeSHAP** or add contributions from another backend](tutorial/explainer_and_backend/tuto-expl05-Shapash-using-Fasttreeshap.ipynb)\n- [Use Class Shapash Backend](tutorial/explainer_and_backend/tuto-expl06-Shapash-custom-backend.ipynb)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eEvaluating the quality of your explainability\u003c/b\u003e \u003c/summary\u003e\n\n- [Building confidence on explainability methods using **Stability**, **Consistency** and **Compacity** 
metrics](tutorial/explainability_quality/tuto-quality01-Builing-confidence-explainability.ipynb)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eGenerate a report of your project\u003c/b\u003e \u003c/summary\u003e\n\n- [Generate a standalone HTML report of your project with generate_report](tutorial/generate_report/tuto-shapash-report01.ipynb)\n\n\u003c/details\u003e\n\n\u003cdetails\u003e\u003csummary\u003e\u003cb\u003eAnalysing your model via Shapash WebApp\u003c/b\u003e \u003c/summary\u003e\n\n- [Add features outside of the model for more exploration options](tutorial/generate_webapp/tuto-webapp01-additional-data.ipynb)\n\n\u003c/details\u003e\n\n## 🤝 Contributors\n\n\u003cdiv align=\"left\"\u003e\n  \u003cdiv style=\"display: flex; align-items: flex-start;\"\u003e\n    \u003ca href=\"https://maif.github.io/projets.html\" \u003e\n      \u003cimg align=middle src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/logo_maif.png\" width=\"18%\"/\u003e\n    \u003c/a\u003e\n    \u003ca href=\"https://www.quantmetry.com/\" \u003e\n      \u003cimg align=middle src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/logo_quantmetry.png\" width=\"18%\"/\u003e\n    \u003c/a\u003e\n    \u003cimg align=middle src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/logo_societe_generale.png\" width=\"18%\" /\u003e\n    \u003cimg align=middle src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/logo_groupe_vyv.png\" width=\"18%\" /\u003e\n    \u003ca href=\"https://www.sixfoissept.com/en/\" \u003e\n      \u003cimg align=middle src=\"https://github.com/MAIF/shapash/blob/master/docs/_static/logo_SixfoisSept.png\" width=\"18%\"/\u003e\n    \u003c/a\u003e\n  \u003c/div\u003e\n\u003c/div\u003e\n\n\n## 🏆 Awards\n\n\u003ca href=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/awards-argus-or.png\"\u003e\n  \u003cimg align=\"left\" 
src=\"https://raw.githubusercontent.com/MAIF/shapash/master/docs/_static/awards-argus-or.png\" width=\"180\" /\u003e\n\u003c/a\u003e\n\n\u003ca href=\"https://www.kdnuggets.com/2021/04/shapash-machine-learning-models-understandable.html\"\u003e\n  \u003cimg src=\"https://www.kdnuggets.com/images/tkb-2104-g.png?raw=true\" width=\"65\" /\u003e\n\u003c/a\u003e\n","funding_links":[],"categories":["Tools"],"sub_categories":["Interpretability/Explicability"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmaif%2Fshapash","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmaif%2Fshapash","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmaif%2Fshapash/lists"}