{"id":25081759,"url":"https://github.com/understandable-machine-intelligence-lab/quantus","last_synced_at":"2025-05-14T16:02:06.808Z","repository":{"id":37755872,"uuid":"349116466","full_name":"understandable-machine-intelligence-lab/Quantus","owner":"understandable-machine-intelligence-lab","description":"Quantus is an eXplainable AI toolkit for responsible evaluation of neural network explanations","archived":false,"fork":false,"pushed_at":"2025-02-05T11:04:30.000Z","size":154326,"stargazers_count":580,"open_issues_count":52,"forks_count":75,"subscribers_count":9,"default_branch":"main","last_synced_at":"2025-02-06T10:39:38.076Z","etag":null,"topics":["deep-learning","explainable-ai","interpretability","machine-learning","pytorch","quantification-evaluation-methods","reproducibility","tensorflow","xai"],"latest_commit_sha":null,"homepage":"https://quantus.readthedocs.io/","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/understandable-machine-intelligence-lab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"COPYING","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-03-18T15:04:58.000Z","updated_at":"2025-02-06T09:59:06.000Z","dependencies_parsed_at":"2022-07-12T16:44:52.370Z","dependency_job_id":"a65c3657-202b-4129-bfd4-87d07629f42f","html_url":"https://github.com/understandable-machine-intelligence-lab/Quantus","commit_stats":{"total_commits":1108,"total_committers":21,"mean_commits":52.76190476190476,"dds":0.48014440433213,"last_synced_commit":"1a71a600e4fcbca31e9865b0f293f9ab985b2d37"},"previous_names":[],"tags_count":26,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/understandable-machine-intelligence-lab%2FQuantus","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/understandable-machine-intelligence-lab%2FQuantus/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/understandable-machine-intelligence-lab%2FQuantus/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/understandable-machine-intelligence-lab%2FQuantus/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/understandable-machine-intelligence-lab","download_url":"https://codeload.github.com/understandable-machine-intelligence-lab/Quantus/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248631651,"owners_count":21136554,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","explainable-ai","interpretability","machine-learning","pytorch","quantification-evaluation-methods","reproducibility","tensorflow","xai"],"created_at":"2025-02-07T05:18:26.321Z","updated
_at":"2025-04-12T20:38:06.826Z","avatar_url":"https://github.com/understandable-machine-intelligence-lab.png","language":"Jupyter Notebook","readme":"\u003cp align=\"center\"\u003e\n  \u003cimg width=\"350\" src=\"https://raw.githubusercontent.com/understandable-machine-intelligence-lab/Quantus/main/quantus_logo.png\"\u003e\n\u003c/p\u003e\n\u003c!--\u003ch1 align=\"center\"\u003e\u003cb\u003eQuantus\u003c/b\u003e\u003c/h1\u003e--\u003e\n\u003ch3 align=\"center\"\u003e\u003cb\u003eA toolkit to evaluate neural network explanations\u003c/b\u003e\u003c/h3\u003e\n\u003cp align=\"center\"\u003e\n  PyTorch and TensorFlow\n\n[![Getting started!](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/understandable-machine-intelligence-lab/Quantus/blob/main/tutorials/Tutorial_ImageNet_Example_All_Metrics.ipynb)\n[![Launch Tutorials](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/understandable-machine-intelligence-lab/Quantus/HEAD?labpath=tutorials)\n![Python version](https://img.shields.io/badge/python-3.8%20%7C%203.9%20%7C%203.10%20%7C%203.11-blue.svg)\n[![PyPI version](https://badge.fury.io/py/quantus.svg)](https://badge.fury.io/py/quantus)\n[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)\n[![Documentation Status](https://readthedocs.org/projects/quantus/badge/?version=latest)](https://quantus.readthedocs.io/en/latest/?badge=latest)\n[![codecov.io](https://codecov.io/github/understandable-machine-intelligence-lab/Quantus/coverage.svg?branch=master)](https://codecov.io/github/understandable-machine-intelligence-lab/Quantus?branch=master)\n[![Downloads](https://static.pepy.tech/badge/quantus)](https://pepy.tech/project/quantus)\n\u003c!--[![Python package](https://github.com/understandable-machine-intelligence-lab/Quantus/actions/workflows/python-package.yml/badge.svg)](https://github.com/understandable-machine-intelligence-lab/Quantus/actions/workflows/python-package.yml)\n[![Code coverage](https://github.com/understandable-machine-intelligence-lab/Quantus/actions/workflows/codecov.yml/badge.svg)](https://github.com/understandable-machine-intelligence-lab/Quantus/actions/workflows/codecov.yml)\n--\u003e\n\n_Quantus is currently under active development so carefully note the Quantus release version to ensure reproducibility of your work._\n\n[📑 Shortcut to paper!](https://jmlr.org/papers/volume24/22-0142/22-0142.pdf)\n\nIf you want to contribute/ improve/ extend Quantus, join our [Discord](https://discord.gg/HB77krUE)!\n## News and Highlights! :rocket:\n\n- 🐼 For **training data attribution** evaluation, check out [quanda](https://github.com/dilyabareeva/quanda)!\n- New [batch implementation](https://github.com/understandable-machine-intelligence-lab/Quantus/pull/351) for 12X speedup of existing faithfulness metrics (!)\n- New metrics added: [EfficientMPRT](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/quantus/metrics/randomisation/efficient_mprt.py) and [SmoothMPRT](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/quantus/metrics/randomisation/smooth_mprt.py) by [Hedström et al., (2023)](https://openreview.net/pdf?id=vVpefYmnsG)\n- Accepted to Journal of Machine Learning Research (MLOSS), read the [paper](https://jmlr.org/papers/v24/22-0142.html)\n- Offers more than **35+ metrics in 6 categories** for XAI evaluation\n- Supports different data types (image, time-series, tabular, NLP next up!) 
- Extended built-in support for explanation methods ([captum](https://captum.ai/), [tf-explain](https://tf-explain.readthedocs.io/en/latest/) and [zennit](https://github.com/chr5tphr/zennit))
<!--- Released a new version [here](https://github.com/understandable-machine-intelligence-lab/Quantus/releases) with [Python 3.7 discontinued](https://devguide.python.org/versions/)-->

## Citation

If you find this toolkit or its companion paper
[**Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond**](https://jmlr.org/papers/v24/22-0142.html)
interesting or useful in your research, use the following BibTeX entry to cite us:

```bibtex
@article{hedstrom2023quantus,
  author  = {Anna Hedstr{\"{o}}m and Leander Weber and Daniel Krakowczyk and Dilyara Bareeva and Franz Motzkus and Wojciech Samek and Sebastian Lapuschkin and Marina M.{-}C. H{\"{o}}hne},
  title   = {Quantus: An Explainable AI Toolkit for Responsible Evaluation of Neural Network Explanations and Beyond},
  journal = {Journal of Machine Learning Research},
  year    = {2023},
  volume  = {24},
  number  = {34},
  pages   = {1--11},
  url     = {http://jmlr.org/papers/v24/22-0142.html}
}
```

When applying the individual metrics of Quantus, please make sure to also properly cite the work of the original authors (as linked below).

## Table of contents

* [Library overview](#library-overview)
* [Installation](#installation)
* [Getting started](#getting-started)
* [Tutorials](#tutorials)
* [Contributing](#contributing)
<!--* [Citation](#citation)-->

## Library overview

A simple visual comparison of eXplainable Artificial Intelligence (XAI) methods is often not sufficient to decide which explanation method works best, as shown exemplarily in Figure a) for four gradient-based methods: Saliency ([Mørch et al., 1995](https://ieeexplore.ieee.org/document/488997); [Baehrens et al., 2010](https://www.jmlr.org/papers/volume11/baehrens10a/baehrens10a.pdf)), Integrated Gradients ([Sundararajan et al., 2017](http://proceedings.mlr.press/v70/sundararajan17a/sundararajan17a.pdf)), GradientShap ([Lundberg and Lee, 2017](https://arxiv.org/abs/1705.07874)) and FusionGrad ([Bykov et al., 2021](https://arxiv.org/abs/2106.10185)). Yet such visual inspection is common practice for evaluating XAI methods in the absence of ground-truth data. Therefore, we developed Quantus, an easy-to-use yet comprehensive toolbox for the quantitative evaluation of explanations, including 30+ different metrics.

<p align="center">
  <img width="800" src="https://raw.githubusercontent.com/understandable-machine-intelligence-lab/Quantus/main/viz.png">
</p>

With Quantus, we can obtain richer insights into how the methods compare, e.g., b) by holistic quantification across several evaluation criteria and c) by sensitivity analysis of how a single parameter, e.g., the pixel-replacement strategy of a faithfulness test, influences the ranking of the XAI methods.

### Metrics

This project started with the goal of collecting existing evaluation metrics that have been introduced in the context of XAI research, to help automate the task of _XAI quantification_. Along the way, it became clear that XAI metrics most often belong to one of six categories: 1) faithfulness, 2) robustness, 3) localisation, 4) complexity, 5) randomisation (sensitivity) or 6) axiomatic metrics.
The library contains implementations of the following evaluation metrics (a minimal, framework-agnostic sketch of the faithfulness idea follows after the category list):

<details>
  <summary><b>Faithfulness</b></summary>
quantifies to what extent explanations follow the predictive behaviour of the model (asserting that more important features play a larger role in model outcomes)
 <br><br>
  <ul>
    <li><b>Faithfulness Correlation </b><a href="https://www.ijcai.org/Proceedings/2020/0417.pdf">(Bhatt et al., 2020)</a>: iteratively replaces a random subset of given attributions with a baseline value and then measures the correlation between the sum of this attribution subset and the difference in function output
    <li><b>Faithfulness Estimate </b><a href="https://arxiv.org/pdf/1806.07538.pdf">(Alvarez-Melis et al., 2018)</a>: computes the correlation between probability drops and attribution scores on various points
    <li><b>Monotonicity Metric </b><a href="https://arxiv.org/abs/1909.03012">(Arya et al., 2019)</a>: starts from a reference baseline and then incrementally replaces each feature in a sorted attribution vector, measuring the effect on model performance
    <li><b>Monotonicity Metric </b><a href="https://arxiv.org/pdf/2007.07584.pdf">(Nguyen et al., 2020)</a>: measures the Spearman rank correlation between the absolute values of the attribution and the uncertainty in the probability estimation
    <li><b>Pixel Flipping </b><a href="https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140">(Bach et al., 2015)</a>: captures the impact on the classification score of perturbing pixels in descending order of their attributed value
    <li><b>Region Perturbation </b><a href="https://arxiv.org/pdf/1509.06321.pdf">(Samek et al., 2015)</a>: an extension of Pixel Flipping that flips an area rather than a single pixel
    <li><b>Selectivity </b><a href="https://arxiv.org/pdf/1706.07979.pdf">(Montavon et al., 2018)</a>: measures how quickly an evaluated prediction function starts to drop when removing features with the highest attributed values
    <li><b>SensitivityN </b><a href="https://arxiv.org/pdf/1711.06104.pdf">(Ancona et al., 2019)</a>: computes the correlation between the sum of the attributions and the variation in the target output while varying the fraction of the total number of features, averaged over several test samples
    <li><b>IROF </b><a href="https://arxiv.org/pdf/2003.08747.pdf">(Rieger et al., 2020)</a>: computes the area over the curve per class for sorted mean importances of feature segments (superpixels) as they are iteratively removed (and prediction scores are collected), averaged over several test samples
    <li><b>Infidelity </b><a href="https://arxiv.org/pdf/1901.09392.pdf">(Yeh et al., 2019)</a>: represents the expected mean squared error between 1) the dot product of an attribution and an input perturbation and 2) the difference in model output after a significant perturbation
    <li><b>ROAD </b><a href="https://arxiv.org/pdf/2202.00449.pdf">(Rong, Leemann, et al., 2022)</a>: measures the accuracy of the model on the test set while iteratively removing the k most important pixels; at each step, the k most relevant pixels (MoRF order) are replaced with noisy linear imputations
    <li><b>Sufficiency </b><a href="https://arxiv.org/abs/2202.00734">(Dasgupta et al., 2022)</a>: measures the extent to which similar explanations have the same prediction label
</ul>
</details>

<details>
<summary><b>Robustness</b></summary>
measures to what extent explanations are stable when subject to slight perturbations of the input, assuming that the model output stays approximately the same
     <br><br>
<ul>
    <li><b>Local Lipschitz Estimate </b><a href="https://arxiv.org/pdf/1806.08049.pdf">(Alvarez-Melis et al., 2018)</a>: tests the consistency of the explanation between adjacent examples
    <li><b>Max-Sensitivity </b><a href="https://arxiv.org/pdf/1901.09392.pdf">(Yeh et al., 2019)</a>: measures the maximum sensitivity of an explanation using a Monte Carlo sampling-based approximation
    <li><b>Avg-Sensitivity </b><a href="https://arxiv.org/pdf/1901.09392.pdf">(Yeh et al., 2019)</a>: measures the average sensitivity of an explanation using a Monte Carlo sampling-based approximation
    <li><b>Continuity </b><a href="https://arxiv.org/pdf/1706.07979.pdf">(Montavon et al., 2018)</a>: captures the strongest variation in explanation between an input and its perturbed version
    <li><b>Consistency </b><a href="https://arxiv.org/abs/2202.00734">(Dasgupta et al., 2022)</a>: measures the probability that inputs with the same explanation have the same prediction label
    <li><b>Relative Input Stability (RIS) </b><a href="https://arxiv.org/pdf/2203.06877.pdf">(Agarwal et al., 2022)</a>: measures the relative distance between explanations e_x and e_x' with respect to the distance between the two inputs x and x'
    <li><b>Relative Representation Stability (RRS) </b><a href="https://arxiv.org/pdf/2203.06877.pdf">(Agarwal et al., 2022)</a>: measures the relative distance between explanations e_x and e_x' with respect to the distance between the internal model representations L_x and L_x' of x and x', respectively
    <li><b>Relative Output Stability (ROS) </b><a href="https://arxiv.org/pdf/2203.06877.pdf">(Agarwal et al., 2022)</a>: measures the relative distance between explanations e_x and e_x' with respect to the distance between the output logits h(x) and h(x') of x and x', respectively
</ul>
</details>

<details>
<summary><b>Localisation</b></summary>
tests if the explainable evidence is centred around a region of interest (RoI), which may be defined around an object by a bounding box, a segmentation mask or a cell within a grid
     <br><br>
<ul>
    <li><b>Pointing Game </b><a href="https://arxiv.org/abs/1608.00507">(Zhang et al., 2018)</a>: checks whether the attribution with the highest score is located within the targeted object
    <li><b>Attribution Localization </b><a href="https://arxiv.org/abs/1910.09840">(Kohlbrenner et al., 2020)</a>: measures the ratio of positive attributions within the targeted object to the total positive attributions
    <li><b>Top-K Intersection </b><a href="https://arxiv.org/abs/2104.14995">(Theiner et al., 2021)</a>: computes the intersection between a ground-truth mask and the binarized explanation at the top k feature locations
    <li><b>Relevance Rank Accuracy </b><a href="https://arxiv.org/abs/2003.07258">(Arras et al., 2021)</a>: measures the ratio of highly attributed pixels within a ground-truth mask to the size of the ground-truth mask
    <li><b>Relevance Mass Accuracy </b><a href="https://arxiv.org/abs/2003.07258">(Arras et al., 2021)</a>: measures the ratio of positive attributions inside the ground-truth mask to the overall positive attributions
    <li><b>AUC </b><a href="https://doi.org/10.1016/j.patrec.2005.10.010">(Fawcett et al., 2006)</a>: compares the ranking between attributions and a given ground-truth mask
    <li><b>Focus </b><a href="https://arxiv.org/abs/2109.15035">(Arias et al., 2022)</a>: quantifies the precision of the explanation by creating mosaics of data instances from different classes
</ul>
</details>

<details>
<summary><b>Complexity</b></summary>
captures to what extent explanations are concise, i.e., that only few features are used to explain a model prediction
     <br><br>
<ul>
    <li><b>Sparseness </b><a href="https://arxiv.org/abs/1810.06583">(Chalasani et al., 2020)</a>: uses the Gini index to measure whether only highly attributed features are truly predictive of the model output
    <li><b>Complexity </b><a href="https://arxiv.org/abs/2005.00631">(Bhatt et al., 2020)</a>: computes the entropy of each feature's fractional contribution to the total magnitude of the attribution
    <li><b>Effective Complexity </b><a href="https://arxiv.org/abs/2007.07584">(Nguyen et al., 2020)</a>: measures how many attributions exceed a certain threshold in absolute value
</ul>
</details>

<details>
<summary><b>Randomisation (Sensitivity)</b></summary>
tests to what extent explanations deteriorate as the inputs to the evaluation problem, e.g., the model parameters, are increasingly randomised
     <br><br>
<ul>
    <li><b>MPRT (Model Parameter Randomisation Test) </b><a href="https://arxiv.org/abs/1810.03292">(Adebayo et al., 2018)</a>: randomises the parameters of single model layers in a cascading or independent way and measures the distance of the respective explanation to the original explanation
    <li><b>Smooth MPRT </b><a href="https://openreview.net/pdf?id=vVpefYmnsG">(Hedström et al., 2023)</a>: adds a "denoising" preprocessing step to the original MPRT, where the explanations are averaged over N noisy samples before the similarity between the original and the fully random model's explanations is measured
    <li><b>Efficient MPRT </b><a href="https://openreview.net/pdf?id=vVpefYmnsG">(Hedström et al., 2023)</a>: reinterprets MPRT by evaluating the rise in explanation complexity (discrete entropy) before and after full model randomisation, asking for increased explanation complexity post-randomisation
    <li><b>Random Logit Test </b><a href="https://arxiv.org/abs/1912.09818">(Sixt et al., 2020)</a>: computes the distance between the original explanation and the explanation for a random other class
</ul>
</details>

<details>
<summary><b>Axiomatic</b></summary>
  assesses if explanations fulfil certain axiomatic properties
     <br><br>
<ul>
    <li><b>Completeness </b><a href="https://arxiv.org/abs/1703.01365">(Sundararajan et al., 2017)</a>: evaluates whether the sum of attributions is equal to the difference between the function values at the input x and the baseline x' (also referred to as Summation to Delta (Shrikumar et al., 2017), Sensitivity-n (slight variation, Ancona et al., 2018) and Conservation (Montavon et al., 2018))
    <li><b>Non-Sensitivity </b><a href="https://arxiv.org/abs/2007.07584">(Nguyen et al., 2020)</a>: measures whether the total attribution is proportional to the explainable evidence at the model output
    <li><b>Input Invariance </b><a href="https://arxiv.org/abs/1711.00867">(Kindermans et al., 2017)</a>: adds a shift to the input, asking that attributions should not change in response (assuming the model does not)
</ul>
</details>
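To make concrete what a faithfulness-style evaluation operationalises, below is a minimal, framework-agnostic sketch of a Pixel-Flipping-style curve: features are removed in descending order of attribution and the drop in the model's score is recorded. This is an illustration only, not Quantus' implementation; `predict_fn`, the baseline value and the toy linear model are placeholders you would supply.

```python
import numpy as np

def pixel_flipping_curve(predict_fn, x, attribution, baseline=0.0, steps=20):
    """Illustrative Pixel-Flipping-style faithfulness curve (not Quantus' implementation).

    predict_fn: callable mapping a flat feature vector to the class score of interest.
    x: 1D array of input features; attribution: 1D array of the same shape.
    Returns the model scores after removing an increasing fraction of the
    most highly attributed features (MoRF order).
    """
    order = np.argsort(attribution)[::-1]                     # most relevant features first
    x_perturbed = x.astype(float)
    scores = [predict_fn(x_perturbed)]
    chunk = max(1, len(order) // steps)
    for start in range(0, len(order), chunk):
        x_perturbed[order[start:start + chunk]] = baseline    # "flip" the next chunk of features
        scores.append(predict_fn(x_perturbed))
    return np.array(scores)

# Toy usage: a linear "model" whose score should drop fastest when the truly
# important features are removed first.
rng = np.random.default_rng(0)
weights = rng.normal(size=100)
x = rng.normal(size=100)
scores = pixel_flipping_curve(lambda v: float(weights @ v), x, attribution=weights * x)
print(scores[:5])  # a faithful attribution yields a quickly decreasing curve
```

A faithful explanation produces a curve that drops quickly (small area under the curve); metrics such as Pixel Flipping, Region Perturbation and Selectivity formalise variations of this idea with different perturbation and aggregation choices.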
Additional metrics will be included in future releases. Please [open an issue](https://github.com/understandable-machine-intelligence-lab/Quantus/issues/new/choose) if you have a metric you believe should be a part of Quantus.

**Disclaimers.** It is worth noting that the implementations of the metrics in this library have not been verified by the original authors. Thus, any metric implementation in this library may differ from the original authors' implementation. Further, bear in mind that evaluation metrics for XAI methods are often empirical interpretations (or translations) of qualities that some researcher(s) claimed were important for explanations to fulfil, so there may be a discrepancy between what the author claims to measure with the proposed metric and what is actually measured, e.g., using entropy as an operationalisation of explanation complexity. Please read the [user guidelines](https://quantus.readthedocs.io/en/latest/guidelines/guidelines_and_disclaimers.html) for further guidance on how to best use the library.

## Installation

If you already have [PyTorch](https://pytorch.org/) or [TensorFlow](https://www.TensorFlow.org) installed on your machine,
the most light-weight version of Quantus can be obtained from [PyPI](https://pypi.org/project/quantus/) as follows (no additional explainability functionality or deep learning framework will be included):

```setup
pip install quantus
```

Alternatively, you can simply add the desired deep learning framework (in brackets) to have it installed together with Quantus.
To install Quantus with PyTorch, please run:

```setup
pip install "quantus[torch]"
```

For TensorFlow, please run:

```setup
pip install "quantus[tensorflow]"
```

### Package requirements

The package requirements are as follows:

```
python>=3.8.0
torch>=1.11.0
tensorflow>=2.5.0
```

Please note that the exact [PyTorch](https://pytorch.org/) and/or [TensorFlow](https://www.TensorFlow.org) versions
to be installed depend on your Python version (3.8-3.11) and platform (`darwin`, `linux`, …).
See the `[project.optional-dependencies]` section in the `pyproject.toml` file.
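To check that the installation worked, you can import the package and list the explanation methods that the built-in `quantus.explain` supports; the `quantus.available_methods()` helper is the one referenced in the Getting started section below.

```python
import quantus

# Sanity check: list the explanation methods built into quantus.explain
# (see "Getting started" below for how these are used).
print(quantus.available_methods())
```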
## Getting started

The following will give a short introduction to how to get started with Quantus. Note that this example is based on the [PyTorch](https://pytorch.org/) framework, but we also support
[TensorFlow](https://www.tensorflow.org), which would differ only in the loading of the model, data and explanations. To get started with Quantus, you need:
* A model (`model`), inputs (`x_batch`) and labels (`y_batch`)
* Some explanations you want to evaluate (`a_batch`)

<details>
<summary><b><big>Step 1. Load data and model</big></b></summary>

Let's first load the data and model. In this example, a pre-trained LeNet available from Quantus
for the purpose of this tutorial is loaded, but generally, you might use any PyTorch (or TensorFlow) model instead.
To follow this example, you need to have quantus and torch installed, e.g., via `pip install 'quantus[torch]'`.

```python
import quantus
from quantus.helpers.model.models import LeNet
import torch
import torchvision
from torchvision import transforms

# Enable GPU if available.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Load a pre-trained LeNet classification model (architecture at quantus/helpers/models).
model = LeNet()
if device.type == "cpu":
    model.load_state_dict(torch.load("tests/assets/mnist", map_location=torch.device('cpu')))
else:
    model.load_state_dict(torch.load("tests/assets/mnist"))

# Move the model to the chosen device.
model.to(device)

# Load datasets and make loaders.
test_set = torchvision.datasets.MNIST(root='./sample_data', download=True, transform=transforms.Compose([transforms.ToTensor()]))
test_loader = torch.utils.data.DataLoader(test_set, batch_size=24)

# Load a batch of inputs and outputs to use for XAI evaluation.
x_batch, y_batch = next(iter(test_loader))
x_batch, y_batch = x_batch.cpu().numpy(), y_batch.cpu().numpy()
```
</details>

<details>
<summary><b><big>Step 2. Load explanations</big></b></summary>

We still need some explanations to evaluate.
For this, there are two possibilities in Quantus. You can provide either:
1. a set of pre-computed attributions (`np.ndarray`)
2. any arbitrary explanation function (`callable`), e.g., the built-in method `quantus.explain` or your own customised function

We show the different options below.

#### Using pre-computed explanations

Quantus allows you to evaluate explanations that you have pre-computed,
assuming that they match the data you provide in `x_batch`. Let's say you have explanations
for [Saliency](https://arxiv.org/abs/1312.6034) and [Integrated Gradients](https://arxiv.org/abs/1703.01365)
already pre-computed.

In that case, you can simply load these into the corresponding variables `a_batch_saliency`
and `a_batch_intgrad` (here, `load` stands for whatever loading routine matches how you stored them, e.g., `np.load`):

```python
a_batch_saliency = load("path/to/precomputed/saliency/explanations")
a_batch_intgrad = load("path/to/precomputed/intgrad/explanations")
```

Another option is to simply obtain the attributions using one of the many XAI frameworks out there,
such as [Captum](https://captum.ai/),
[Zennit](https://github.com/chr5tphr/zennit),
[tf.explain](https://github.com/sicara/tf-explain),
or [iNNvestigate](https://github.com/albermax/innvestigate).
The following code example shows how to obtain explanations ([Saliency](https://arxiv.org/abs/1312.6034)
and [Integrated Gradients](https://arxiv.org/abs/1703.01365), to be specific)
using [Captum](https://captum.ai/):

```python
import numpy as np
import torch
import captum
from captum.attr import Saliency, IntegratedGradients

# Captum expects torch tensors, so re-wrap the numpy arrays from Step 1.
x_tensor = torch.tensor(x_batch, device=device)
y_tensor = torch.tensor(y_batch, device=device)

# Generate Saliency and Integrated Gradients attributions of the first batch of the test set.
a_batch_saliency = Saliency(model).attribute(inputs=x_tensor, target=y_tensor, abs=True).sum(axis=1).cpu().numpy()
a_batch_intgrad = IntegratedGradients(model).attribute(inputs=x_tensor, target=y_tensor, baselines=torch.zeros_like(x_tensor)).sum(axis=1).cpu().numpy()

# Quick assert: the metric calls below expect numpy arrays.
assert all(isinstance(obj, np.ndarray) for obj in [x_batch, y_batch, a_batch_saliency, a_batch_intgrad])
```

#### Passing an explanation function

If you don't have a pre-computed set of explanations but rather want to pass an arbitrary explanation function
that you wish to evaluate with Quantus, this option also exists.

For this, you can for example rely on the built-in `quantus.explain` function to get started, which includes some popular explanation methods
(please run `quantus.available_methods()` to see which ones). Examples of how to use `quantus.explain`
or your own customised explanation function are included in the next section.

<img class="center" width="500" alt="drawing" src="tutorials/assets/mnist_example.png"/>

As seen in the image above, the qualitative aspects of explanations
may look fairly uninterpretable; since we lack ground truth of what the explanations
should look like, it is hard to draw conclusions about the explainable evidence. To gather quantitative evidence for the quality of the different explanation methods, we can apply Quantus.
</details>

<details>
<summary><b><big>Step 3. Evaluate with Quantus</big></b></summary>

Quantus implements XAI evaluation metrics from different categories,
e.g., Faithfulness, Localisation and Robustness, which all inherit from the base `quantus.Metric` class.
To apply a metric to your setting (e.g., [Max-Sensitivity](https://arxiv.org/abs/1901.09392)),
it first needs to be instantiated:

```python
metric = quantus.MaxSensitivity(nr_samples=10,
                                lower_bound=0.2,
                                norm_numerator=quantus.fro_norm,
                                norm_denominator=quantus.fro_norm,
                                perturb_func=quantus.uniform_noise,
                                similarity_func=quantus.difference,
                                abs=True,
                                normalise=True)
```

and then applied to your model, data, and (pre-computed) explanations:

```python
scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    a_batch=a_batch_saliency,
    device=device,
    explain_func=quantus.explain,
    explain_func_kwargs={"method": "Saliency"},
)
```

#### Use quantus.explain

Since a re-computation of the explanations is necessary for robustness evaluation, we also pass an explanation function (`explain_func`) to the metric call in this example. Here, we rely on the built-in `quantus.explain` function to recompute the explanations. The hyperparameters are set with the `explain_func_kwargs` dictionary. Please find more details on how to use `quantus.explain` in the [API documentation](https://quantus.readthedocs.io/en/latest/docs_api/quantus.functions.explanation_func.html).
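If you prefer to generate the attributions up front rather than inside the metric call, the same built-in function can be called directly. A minimal sketch, assuming `quantus.explain` follows the `(model, inputs, targets, **kwargs)` interface that Quantus expects of any `explain_func`:

```python
# Recompute Saliency explanations for the whole batch up front
# (same method name as passed via explain_func_kwargs above).
a_batch = quantus.explain(model, x_batch, y_batch, method="Saliency")
```

The resulting array can then be passed to the metric call via `a_batch=a_batch`, in addition to (or instead of) `explain_func`.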
#### Employ customised functions

You can alternatively use your own customised explanation function
(assuming it returns an `np.ndarray` in a shape that matches the input `x_batch`). This is done as follows:

```python
def your_own_callable(model, inputs, targets, **kwargs) -> np.ndarray:
    """Logic goes here to compute the attributions and return an
    explanation in the same shape as x_batch (np.ndarray);
    flatten the channel axis if necessary."""
    return explanation(model, inputs, targets)

scores = metric(
    model=model,
    x_batch=x_batch,
    y_batch=y_batch,
    device=device,
    explain_func=your_own_callable
)
```

#### Run large-scale evaluation

Quantus also provides high-level functionality to support large-scale evaluations,
e.g., multiple XAI methods, multifaceted evaluation through several metrics, or a combination thereof. To utilise `quantus.evaluate()`, you simply need to define two things:

1. The **Metrics** you would like to use for evaluation (each `__init__` parameter configuration counts as its own metric):
    ```python
    metrics = {
        "max-sensitivity-10": quantus.MaxSensitivity(nr_samples=10),
        "max-sensitivity-20": quantus.MaxSensitivity(nr_samples=20),
        "region-perturbation": quantus.RegionPerturbation(),
    }
    ```

2. The **XAI methods** you would like to evaluate, e.g., a `dict` with pre-computed attributions:
    ```python
    xai_methods = {
        "Saliency": a_batch_saliency,
        "IntegratedGradients": a_batch_intgrad
    }
    ```

You can then simply run a large-scale evaluation as follows (this aggregates the results by `np.mean` averaging):

```python
import numpy as np

results = quantus.evaluate(
      metrics=metrics,
      xai_methods=xai_methods,
      agg_func=np.mean,
      model=model,
      x_batch=x_batch,
      y_batch=y_batch,
      **{"softmax": False,}
)
```
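The returned `results` can then be compared across methods and metrics. A minimal sketch, assuming `results` is a nested dictionary keyed first by XAI method and then by metric name (as configured above) and that you have pandas installed:

```python
import pandas as pd

# Tabulate the aggregated scores: one row per XAI method, one column per metric.
# Assumes `results` is a nested dict keyed by XAI method and then by metric name,
# with scalar values produced by agg_func=np.mean in the call above.
df = pd.DataFrame(results).T
print(df)
```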
</details>

Please see the [Getting started tutorial](https://github.com/understandable-machine-intelligence-lab/quantus/blob/main/tutorials/Tutorial_Getting_Started.ipynb) to run code similar to this example.
For more information on how to customise metrics and extend Quantus' functionality, please see the [Getting started guide](https://quantus.readthedocs.io/en/latest/getting_started/getting_started_example.html).

## Tutorials

Further tutorials are available that showcase the many types of analysis that can be done using Quantus.
For this purpose, please see the notebooks in the [tutorials](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/tutorials/) folder, which include examples such as:
* [All Metrics ImageNet Example](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/tutorials/Tutorial_ImageNet_Example_All_Metrics.ipynb): shows how to instantiate the different metrics for the ImageNet dataset
* [Metric Parameterisation Analysis](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/tutorials/Tutorial_Metric_Parameterisation_Analysis.ipynb): explores how sensitive a metric can be to its hyperparameters
* [Robustness Analysis Model Training](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/tutorials/Tutorial_XAI_Sensitivity_Model_Training.ipynb): measures the robustness of explanations as model accuracy increases
* [Full Quantification with Quantus](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/tutorials/Tutorial_ImageNet_Quantification_with_Quantus.ipynb): example of benchmarking explanation methods
* [Tabular Data Example](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/tutorials/Tutorial_Getting_Started_with_Tabular_Data.ipynb): example of how to use Quantus with tabular data
* [Quantus and TensorFlow Data Example](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/tutorials/Tutorial_Getting_Started_with_Tensorflow.ipynb): showcases how to use Quantus with TensorFlow

... and more.

## Contributing

We welcome any sort of contribution to Quantus! For a detailed contribution guide, please refer to the [Contributing](https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/CONTRIBUTING.md) documentation first.

If you have any developer-related questions, please [open an issue](https://github.com/understandable-machine-intelligence-lab/Quantus/issues/new/choose)
or write to us at [hedstroem.anna@gmail.com](mailto:hedstroem.anna@gmail.com).