{"id":15029212,"url":"https://github.com/seldonio/alibi","last_synced_at":"2025-05-14T05:10:37.151Z","repository":{"id":37257534,"uuid":"172687028","full_name":"SeldonIO/alibi","owner":"SeldonIO","description":"Algorithms for explaining machine learning models","archived":false,"fork":false,"pushed_at":"2025-04-04T00:44:46.000Z","size":31810,"stargazers_count":2484,"open_issues_count":153,"forks_count":255,"subscribers_count":53,"default_branch":"master","last_synced_at":"2025-04-12T11:26:53.509Z","etag":null,"topics":["counterfactual","explanations","interpretability","machine-learning","xai"],"latest_commit_sha":null,"homepage":"https://docs.seldon.io/projects/alibi/en/stable/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/SeldonIO.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2019-02-26T10:10:56.000Z","updated_at":"2025-04-11T10:36:27.000Z","dependencies_parsed_at":"2024-03-08T15:51:44.418Z","dependency_job_id":"85d1d9d4-ec61-49dc-ae87-6aecbb058011","html_url":"https://github.com/SeldonIO/alibi","commit_stats":{"total_commits":581,"total_committers":22,"mean_commits":26.40909090909091,"dds":0.53184165232358,"last_synced_commit":"f2bcd4edc0bfb51a2cb82388b5057c93976ba132"},"previous_names":[],"tags_count":33,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SeldonIO%2Falibi","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SeldonIO%2Falibi/tags","releases_url":"https:/
/repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SeldonIO%2Falibi/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/SeldonIO%2Falibi/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/SeldonIO","download_url":"https://codeload.github.com/SeldonIO/alibi/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254076850,"owners_count":22010611,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["counterfactual","explanations","interpretability","machine-learning","xai"],"created_at":"2024-09-24T20:09:58.273Z","updated_at":"2025-05-14T05:10:37.125Z","avatar_url":"https://github.com/SeldonIO.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://raw.githubusercontent.com/SeldonIO/alibi/master/doc/source/_static/Alibi_Explain_Logo_rgb.png\" alt=\"Alibi Logo\" width=\"50%\"\u003e\n\u003c/p\u003e\n\n\u003c!--- BADGES: START ---\u003e\n\n[![Build Status](https://github.com/SeldonIO/alibi/workflows/CI/badge.svg?branch=master)][#build-status]\n[![Documentation Status](https://readthedocs.org/projects/alibi/badge/?version=latest)][#docs-package]\n[![codecov](https://codecov.io/gh/SeldonIO/alibi/branch/master/graph/badge.svg)](https://codecov.io/gh/SeldonIO/alibi)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/alibi?logo=pypi\u0026style=flat\u0026color=blue)][#pypi-package]\n[![PyPI - Package Version](https://img.shields.io/pypi/v/alibi?logo=pypi\u0026style=flat\u0026color=orange)][#pypi-package]\n[![Conda 
(channel only)](https://img.shields.io/conda/vn/conda-forge/alibi?logo=anaconda\u0026style=flat\u0026color=orange)][#conda-forge-package]\n[![GitHub - License](https://img.shields.io/github/license/SeldonIO/alibi?logo=github\u0026style=flat\u0026color=green)][#github-license]\n[![Slack channel](https://img.shields.io/badge/chat-on%20slack-e51670.svg)][#slack-channel]\n\n\u003c!--- Hide platform for now as platform agnostic ---\u003e\n\u003c!--- [![Conda - Platform](https://img.shields.io/conda/pn/conda-forge/alibi?logo=anaconda\u0026style=flat)][#conda-forge-package]---\u003e\n\n[#github-license]: https://github.com/SeldonIO/alibi/blob/master/LICENSE\n[#pypi-package]: https://pypi.org/project/alibi/\n[#conda-forge-package]: https://anaconda.org/conda-forge/alibi\n[#docs-package]: https://docs.seldon.io/projects/alibi/en/stable/\n[#build-status]: https://github.com/SeldonIO/alibi/actions?query=workflow%3A%22CI%22\n[#slack-channel]: https://join.slack.com/t/seldondev/shared_invite/zt-vejg6ttd-ksZiQs3O_HOtPQsen_labg\n\u003c!--- BADGES: END ---\u003e\n---\n\n[Alibi](https://docs.seldon.io/projects/alibi) is a source-available Python library aimed at machine learning model inspection and interpretation.\nThe focus of the library is to provide high-quality implementations of black-box, white-box, local and global\nexplanation methods for classification and regression models.\n*  [Documentation](https://docs.seldon.io/projects/alibi/en/stable/)\n\nIf you're interested in outlier detection, concept drift or adversarial instance detection, check out our sister project [alibi-detect](https://github.com/SeldonIO/alibi-detect).\n\n\u003ctable\u003e\n  \u003ctr valign=\"top\"\u003e\n    \u003ctd width=\"50%\" \u003e\n        \u003ca href=\"https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_imagenet.html\"\u003e\n            \u003cbr\u003e\n            \u003cb\u003eAnchor explanations for images\u003c/b\u003e\n            \u003cbr\u003e\n            
\u003cbr\u003e\n            \u003cimg src=\"https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/anchor_image.png\"\u003e\n        \u003c/a\u003e\n    \u003c/td\u003e\n    \u003ctd width=\"50%\"\u003e\n        \u003ca href=\"https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imdb.html\"\u003e\n            \u003cbr\u003e\n            \u003cb\u003eIntegrated Gradients for text\u003c/b\u003e\n            \u003cbr\u003e\n            \u003cbr\u003e\n            \u003cimg src=\"https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ig_text.png\"\u003e\n        \u003c/a\u003e\n    \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr valign=\"top\"\u003e\n    \u003ctd width=\"50%\"\u003e\n        \u003ca href=\"https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html\"\u003e\n            \u003cbr\u003e\n            \u003cb\u003eCounterfactual examples\u003c/b\u003e\n            \u003cbr\u003e\n            \u003cbr\u003e\n            \u003cimg src=\"https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/cf.png\"\u003e\n        \u003c/a\u003e\n    \u003c/td\u003e\n    \u003ctd width=\"50%\"\u003e\n        \u003ca href=\"https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html\"\u003e\n            \u003cbr\u003e\n            \u003cb\u003eAccumulated Local Effects\u003c/b\u003e\n            \u003cbr\u003e\n            \u003cbr\u003e\n            \u003cimg src=\"https://github.com/SeldonIO/alibi/raw/master/doc/source/_static/ale.png\"\u003e\n        \u003c/a\u003e\n    \u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/table\u003e\n\n## Table of Contents\n\n* [Installation and Usage](#installation-and-usage)\n* [Supported Methods](#supported-methods)\n  * [Model Explanations](#model-explanations)\n  * [Model Confidence](#model-confidence)\n  * [Prototypes](#prototypes)\n  * [References and Examples](#references-and-examples)\n* [Citations](#citations)\n\n## Installation and Usage\nAlibi can be installed from:\n\n- 
PyPI or GitHub source (with `pip`)\n- Anaconda (with `conda`/`mamba`)\n\n### With pip\n\n- Alibi can be installed from [PyPI](https://pypi.org/project/alibi):\n\n  ```bash\n  pip install alibi\n  ```\n  \n- Alternatively, the development version can be installed:\n  ```bash\n  pip install git+https://github.com/SeldonIO/alibi.git\n  ```\n\n- To take advantage of distributed computation of explanations, install `alibi` with `ray`:\n  ```bash\n  pip install alibi[ray]\n  ```\n\n- For SHAP support, install `alibi` as follows:\n  ```bash\n  pip install alibi[shap]\n  ```\n\n### With conda\n\nTo install from [conda-forge](https://conda-forge.org/) it is recommended to use [mamba](https://mamba.readthedocs.io/en/stable/),\nwhich can be installed to the *base* conda environment with:\n\n```bash\nconda install mamba -n base -c conda-forge\n```\n\n- For the standard Alibi install:\n  ```bash\n  mamba install -c conda-forge alibi\n  ```\n\n- For distributed computing support:\n  ```bash\n  mamba install -c conda-forge alibi ray\n  ```\n\n- For SHAP support:\n  ```bash\n  mamba install -c conda-forge alibi shap\n  ```\n\n### Usage\nThe alibi explanation API takes inspiration from `scikit-learn`, consisting of distinct initialize,\nfit and explain steps. We will use the [AnchorTabular](https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html)\nexplainer to illustrate the API:\n\n```python\nfrom alibi.explainers import AnchorTabular\n\n# initialize and fit explainer by passing a prediction function and any other required arguments\nexplainer = AnchorTabular(predict_fn, feature_names=feature_names, category_map=category_map)\nexplainer.fit(X_train)\n\n# explain an instance\nexplanation = explainer.explain(x)\n```\n\nThe explanation returned is an `Explanation` object with attributes `meta` and `data`. 
`meta` is a dictionary\ncontaining the explainer metadata and any hyperparameters, and `data` is a dictionary containing everything\nrelated to the computed explanation. For example, for the Anchor algorithm the explanation can be accessed\nvia `explanation.data['anchor']` (or `explanation.anchor`). The exact details of the available fields vary\nfrom method to method, so we encourage the reader to become familiar with the\n[types of methods supported](https://docs.seldon.io/projects/alibi/en/stable/overview/algorithms.html).\n\n## Supported Methods\nThe following tables summarize the possible use cases for each method.\n\n### Model Explanations\n| Method                                                                                                       |    Models    |     Explanations      | Classification | Regression | Tabular | Text | Images | Categorical features | Train set required | Distributed |\n|:-------------------------------------------------------------------------------------------------------------|:------------:|:---------------------:|:--------------:|:----------:|:-------:|:----:|:------:|:--------------------:|:------------------:|:-----------:|\n| [ALE](https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html)                                      |      BB      |        global         |       ✔        |     ✔      |    ✔    |      |        |                      |                    |             |\n| [Partial Dependence](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependence.html)         |    BB WB     |        global         |       ✔        |     ✔      |    ✔    |      |        |          ✔           |                    |             |\n| [PD Variance](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependenceVariance.html)        |    BB WB     |        global         |       ✔        |     ✔      |    ✔    |      |        |          ✔           |                    |             |\n| 
[Permutation Importance](https://docs.seldon.io/projects/alibi/en/stable/methods/PermutationImportance.html) |      BB      |        global         |       ✔        |     ✔      |    ✔    |      |        |          ✔           |                    |             |\n| [Anchors](https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html)                              |      BB      |         local         |       ✔        |            |    ✔    |  ✔   |   ✔    |          ✔           |    For Tabular     |             |\n| [CEM](https://docs.seldon.io/projects/alibi/en/stable/methods/CEM.html)                                      | BB* TF/Keras |         local         |       ✔        |            |    ✔    |      |   ✔    |                      |      Optional      |             |\n| [Counterfactuals](https://docs.seldon.io/projects/alibi/en/stable/methods/CF.html)                           | BB* TF/Keras |         local         |       ✔        |            |    ✔    |      |   ✔    |                      |         No         |             |\n| [Prototype Counterfactuals](https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html)            | BB* TF/Keras |         local         |       ✔        |            |    ✔    |      |   ✔    |          ✔           |      Optional      |             |\n| [Counterfactuals with RL](https://docs.seldon.io/projects/alibi/en/stable/methods/CFRL.html)                 |      BB      |         local         |       ✔        |            |    ✔    |      |   ✔    |          ✔           |         ✔          |             |\n| [Integrated Gradients](https://docs.seldon.io/projects/alibi/en/stable/methods/IntegratedGradients.html)     |   TF/Keras   |         local         |       ✔        |     ✔      |    ✔    |  ✔   |   ✔    |          ✔           |      Optional      |             |\n| [Kernel SHAP](https://docs.seldon.io/projects/alibi/en/stable/methods/KernelSHAP.html)                       |      BB      | 
local \u003cbr\u003eglobal |       ✔        |     ✔      |    ✔    |      |        |          ✔           |         ✔          |      ✔      |\n| [Tree SHAP](https://docs.seldon.io/projects/alibi/en/stable/methods/TreeSHAP.html)                           |      WB      | local \u003cbr\u003eglobal |       ✔        |     ✔      |    ✔    |      |        |          ✔           |      Optional      |             |\n| [Similarity explanations](https://docs.seldon.io/projects/alibi/en/stable/methods/Similarity.html)           |      WB      |         local         |       ✔        |     ✔      |    ✔    |  ✔   |   ✔    |          ✔           |         ✔          |             |\n\n### Model Confidence\nThese algorithms provide **instance-specific** scores measuring the model confidence for making a\nparticular prediction.\n\n|Method|Models|Classification|Regression|Tabular|Text|Images|Categorical Features|Train set required|\n|:---|:---|:---:|:---:|:---:|:---:|:---:|:---:|:---|\n|[Trust Scores](https://docs.seldon.io/projects/alibi/en/stable/methods/TrustScores.html)|BB|✔| |✔|✔(1)|✔(2)| |Yes|\n|[Linearity Measure](https://docs.seldon.io/projects/alibi/en/stable/methods/LinearityMeasure.html)|BB|✔|✔|✔| |✔| |Optional|\n\nKey:\n - **BB** - black-box (only requires a prediction function)\n - **BB\\*** - black-box but assumes the model is differentiable\n - **WB** - requires white-box model access. 
There may be limitations on models supported\n - **TF/Keras** - TensorFlow models via the Keras API\n - **Local** - instance-specific explanation: why was this prediction made?\n - **Global** - explains the model with respect to a set of instances\n - **(1)** - depending on the model\n - **(2)** - may require dimensionality reduction\n\n### Prototypes\nThese algorithms provide a **distilled** view of the dataset and help construct an **interpretable** 1-KNN classifier.\n\n|Method|Classification|Regression|Tabular|Text|Images|Categorical Features|Train set labels|\n|:-----|:-------------|:---------|:------|:---|:-----|:-------------------|:---------------|\n|[ProtoSelect](https://docs.seldon.io/projects/alibi/en/latest/methods/ProtoSelect.html)|✔| |✔|✔|✔|✔| Optional       |\n\n\n## References and Examples\n- Accumulated Local Effects (ALE, [Apley and Zhu, 2016](https://arxiv.org/abs/1612.08468))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/ALE.html)\n  - Examples:\n    [California housing dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/ale_regression_california.html),\n    [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/ale_classification.html)\n\n- Partial Dependence ([J.H. 
Friedman, 2001](https://projecteuclid.org/journals/annals-of-statistics/volume-29/issue-5/Greedy-function-approximation-A-gradient-boostingmachine/10.1214/aos/1013203451.full))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependence.html)\n  - Examples:\n    [Bike rental](https://docs.seldon.io/projects/alibi/en/stable/examples/pdp_regression_bike.html)\n\n- Partial Dependence Variance ([Greenwell et al., 2018](https://arxiv.org/abs/1805.04755))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/PartialDependenceVariance.html)\n  - Examples:\n    [Friedman’s regression problem](https://docs.seldon.io/projects/alibi/en/stable/examples/pd_variance_regression_friedman.html)\n\n- Permutation Importance ([Breiman, 2001](https://link.springer.com/article/10.1023/A:1010933404324); [Fisher et al., 2018](https://arxiv.org/abs/1801.01489))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/PermutationImportance.html)\n  - Examples:\n    [Who's Going to Leave Next?](https://docs.seldon.io/projects/alibi/en/stable/examples/permutation_importance_classification_leave.html)\n\n- Anchor explanations ([Ribeiro et al., 2018](https://homes.cs.washington.edu/~marcotcr/aaai18.pdf))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/Anchors.html)\n  - Examples:\n    [income prediction](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_tabular_adult.html),\n    [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_tabular_iris.html),\n    [movie sentiment classification](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_text_movie.html),\n    [ImageNet](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_imagenet.html),\n    [fashion MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/anchor_image_fashion_mnist.html)\n\n- Contrastive Explanation Method (CEM, [Dhurandhar et al., 
2018](https://papers.nips.cc/paper/7340-explanations-based-on-the-missing-towards-contrastive-explanations-with-pertinent-negatives))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CEM.html)\n  - Examples: [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cem_mnist.html),\n    [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/cem_iris.html)\n\n- Counterfactual Explanations (extension of\n  [Wachter et al., 2017](https://arxiv.org/abs/1711.00399))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CF.html)\n  - Examples:\n    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cf_mnist.html)\n\n- Counterfactual Explanations Guided by Prototypes ([Van Looveren and Klaise, 2019](https://arxiv.org/abs/1907.02584))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CFProto.html)\n  - Examples:\n    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_mnist.html),\n    [California housing dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_housing.html),\n    [Adult income (one-hot)](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_cat_adult_ohe.html),\n    [Adult income (ordinal)](https://docs.seldon.io/projects/alibi/en/stable/examples/cfproto_cat_adult_ord.html)\n\n- Model-agnostic Counterfactual Explanations via RL ([Samoilescu et al., 2021](https://arxiv.org/abs/2106.02597))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/CFRL.html)\n  - Examples:\n    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/cfrl_mnist.html),\n    [Adult income](https://docs.seldon.io/projects/alibi/en/stable/examples/cfrl_adult.html)\n\n- Integrated Gradients ([Sundararajan et al., 2017](https://arxiv.org/abs/1703.01365))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/IntegratedGradients.html)\n  - Examples:\n    [MNIST 
example](https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_mnist.html),\n    [ImageNet example](https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imagenet.html),\n    [IMDB example](https://docs.seldon.io/projects/alibi/en/stable/examples/integrated_gradients_imdb.html)\n\n- Kernel Shapley Additive Explanations ([Lundberg et al., 2017](https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/KernelSHAP.html)\n  - Examples:\n    [SVM with continuous data](https://docs.seldon.io/projects/alibi/en/stable/examples/kernel_shap_wine_intro.html),\n    [multinomial logistic regression with continuous data](https://docs.seldon.io/projects/alibi/en/stable/examples/kernel_shap_wine_lr.html),\n    [handling categorical variables](https://docs.seldon.io/projects/alibi/en/stable/examples/kernel_shap_adult_lr.html)\n\n- Tree Shapley Additive Explanations ([Lundberg et al., 2020](https://www.nature.com/articles/s42256-019-0138-9))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/TreeSHAP.html)\n  - Examples:\n    [Interventional (adult income, xgboost)](https://docs.seldon.io/projects/alibi/en/stable/examples/interventional_tree_shap_adult_xgb.html),\n    [Path-dependent (adult income, xgboost)](https://docs.seldon.io/projects/alibi/en/stable/examples/path_dependent_tree_shap_adult_xgb.html)\n\n- Trust Scores ([Jiang et al., 2018](https://arxiv.org/abs/1805.11783))\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/TrustScores.html)\n  - Examples:\n    [MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/trustscore_mnist.html),\n    [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/trustscore_iris.html)\n\n- Linearity Measure\n  - 
[Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/LinearityMeasure.html)\n  - Examples:\n    [Iris dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/linearity_measure_iris.html),\n    [fashion MNIST](https://docs.seldon.io/projects/alibi/en/stable/examples/linearity_measure_fashion_mnist.html)\n\n- ProtoSelect\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/latest/methods/ProtoSelect.html)\n  - Examples:\n    [Adult Census \u0026 CIFAR10](https://docs.seldon.io/projects/alibi/en/latest/examples/protoselect_adult_cifar10.html)\n\n- Similarity explanations\n  - [Documentation](https://docs.seldon.io/projects/alibi/en/stable/methods/Similarity.html)\n  - Examples:\n    [20 news groups dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/similarity_explanations_20ng.html),\n    [ImageNet dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/similarity_explanations_imagenet.html),\n    [MNIST dataset](https://docs.seldon.io/projects/alibi/en/stable/examples/similarity_explanations_mnist.html)\n\n## Citations\nIf you use alibi in your research, please consider citing it.\n\nBibTeX entry:\n\n```\n@article{JMLR:v22:21-0017,\n  author  = {Janis Klaise and Arnaud Van Looveren and Giovanni Vacanti and Alexandru Coca},\n  title   = {Alibi Explain: Algorithms for Explaining Machine Learning Models},\n  journal = {Journal of Machine Learning Research},\n  year    = {2021},\n  volume  = {22},\n  number  = {181},\n  pages   = {1-7},\n  url     = {http://jmlr.org/papers/v22/21-0017.html}\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fseldonio%2Falibi","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fseldonio%2Falibi","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fseldonio%2Falibi/lists"}