{"id":13435384,"url":"https://github.com/sicara/tf-explain","last_synced_at":"2025-05-15T04:06:10.734Z","repository":{"id":39613743,"uuid":"196956879","full_name":"sicara/tf-explain","owner":"sicara","description":"Interpretability Methods for tf.keras models with Tensorflow 2.x","archived":false,"fork":false,"pushed_at":"2024-06-03T10:38:45.000Z","size":953,"stargazers_count":1026,"open_issues_count":47,"forks_count":110,"subscribers_count":48,"default_branch":"master","last_synced_at":"2025-05-13T00:09:25.615Z","etag":null,"topics":["deep-learning","interpretability","keras","machine-learning","tensorflow","tf2","visualization"],"latest_commit_sha":null,"homepage":"https://tf-explain.readthedocs.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/sicara.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2019-07-15T08:26:24.000Z","updated_at":"2025-05-12T05:38:34.000Z","dependencies_parsed_at":"2024-11-15T03:59:17.107Z","dependency_job_id":"933d8f75-a280-43cd-a502-7868729caccc","html_url":"https://github.com/sicara/tf-explain","commit_stats":{"total_commits":171,"total_committers":18,"mean_commits":9.5,"dds":"0.14035087719298245","last_synced_commit":"9d7d1e900ec3e3e4b5338fbc43dfb93539acecc2"},"previous_names":[],"tags_count":8,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sicara%2Ftf-explain","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sicara%2Ftf-explain/tags","releases_url":"
https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sicara%2Ftf-explain/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/sicara%2Ftf-explain/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/sicara","download_url":"https://codeload.github.com/sicara/tf-explain/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254270646,"owners_count":22042859,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deep-learning","interpretability","keras","machine-learning","tensorflow","tf2","visualization"],"created_at":"2024-07-31T03:00:35.299Z","updated_at":"2025-05-15T04:06:05.712Z","avatar_url":"https://github.com/sicara.png","language":"Python","readme":"# tf-explain\n\n[![Pypi Version](https://img.shields.io/pypi/v/tf-explain.svg)](https://pypi.org/project/tf-explain/)\n[![DOI](https://zenodo.org/badge/196956879.svg)](https://zenodo.org/badge/latestdoi/196956879)\n[![Build Status](https://github.com/sicara/tf-explain/actions/workflows/ci.yml/badge.svg)](https://github.com/sicara/tf-explain/actions)\n[![Documentation Status](https://readthedocs.org/projects/tf-explain/badge/?version=latest)](https://tf-explain.readthedocs.io/en/latest/?badge=latest)\n![Python Versions](https://img.shields.io/badge/python-3.6%20%7C%203.7%20%7C%203.8-%23EBBD68.svg)\n![Tensorflow Versions](https://img.shields.io/badge/tensorflow-2.x-blue.svg)\n\n__tf-explain__ implements interpretability methods as Tensorflow 2.x callbacks to __ease the understanding of neural networks__.\nSee 
[Introducing tf-explain, Interpretability for Tensorflow 2.0](https://blog.sicara.com/tf-explain-interpretability-tensorflow-2-9438b5846e35)\n\n__Documentation__: https://tf-explain.readthedocs.io\n\n## Installation\n\n__tf-explain__ is available on PyPI. To install it:\n\n```bash\nvirtualenv venv -p python3.8\nsource venv/bin/activate\npip install tf-explain\n```\n\ntf-explain is compatible with Tensorflow 2.x. Tensorflow is not declared as a dependency,\nso that you can choose between the full and CPU-only versions. In addition to the previous install, run:\n\n```bash\n# For CPU or GPU\npip install tensorflow==2.6.0\n```\n\nOpenCV is also a dependency. To install it, run:\n\n```bash\npip install opencv-python\n```\n\n## Quickstart\n\ntf-explain offers two ways to apply interpretability methods. The full list of methods is in the [Available Methods](#available-methods) section.\n\n### On a trained model\n\nThe best option is probably to load a trained model and apply the methods on it.\n\n```python\nimport tensorflow as tf\n\nfrom tf_explain.core.grad_cam import GradCAM\n\n# Load a pretrained model or your own\nmodel = tf.keras.applications.vgg16.VGG16(weights=\"imagenet\", include_top=True)\n\n# Load a sample image (or multiple ones)\nimg = tf.keras.preprocessing.image.load_img(IMAGE_PATH, target_size=(224, 224))\nimg = tf.keras.preprocessing.image.img_to_array(img)\ndata = ([img], None)\n\n# Start explainer\nexplainer = GradCAM()\ngrid = explainer.explain(data, model, class_index=281)  # 281 is the tabby cat index in ImageNet\n\nexplainer.save(grid, \".\", \"grad_cam.png\")\n```\n\n### During training\n\nIf you want to follow your model during training, you can also use each method as a Keras callback\nand see the results directly in [TensorBoard](https://www.tensorflow.org/tensorboard/).\n\n```python\nfrom tf_explain.callbacks.grad_cam import GradCAMCallback\n\nmodel = [...]\n\ncallbacks = [\n    GradCAMCallback(\n        validation_data=(x_val, y_val),\n        class_index=0,\n        output_dir=output_dir,\n    )\n]\n\nmodel.fit(x_train, y_train, batch_size=2, 
epochs=2, callbacks=callbacks)\n```\n\n\n## Available Methods\n\n1. [Activations Visualization](#activations-visualization)\n1. [Vanilla Gradients](#vanilla-gradients)\n1. [Gradients*Inputs](#gradients-inputs)\n1. [Occlusion Sensitivity](#occlusion-sensitivity)\n1. [Grad CAM (Class Activation Maps)](#grad-cam)\n1. [SmoothGrad](#smoothgrad)\n1. [Integrated Gradients](#integrated-gradients)\n\n### Activations Visualization\n\n\u003e Visualize how a given input comes out of a specific activation layer\n\n```python\nfrom tf_explain.callbacks.activations_visualization import ActivationsVisualizationCallback\n\nmodel = [...]\n\ncallbacks = [\n    ActivationsVisualizationCallback(\n        validation_data=(x_val, y_val),\n        layers_name=[\"activation_1\"],\n        output_dir=output_dir,\n    ),\n]\n\nmodel.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)\n```\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"./docs/assets/activations_visualisation.png\" width=\"400\" /\u003e\n\u003c/p\u003e\n\n\n### Vanilla Gradients\n\n\u003e Visualize gradient importance on the input image\n\n```python\nfrom tf_explain.callbacks.vanilla_gradients import VanillaGradientsCallback\n\nmodel = [...]\n\ncallbacks = [\n    VanillaGradientsCallback(\n        validation_data=(x_val, y_val),\n        class_index=0,\n        output_dir=output_dir,\n    ),\n]\n\nmodel.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)\n```\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"./docs/assets/vanilla_gradients.png\" width=\"200\" /\u003e\n\u003c/p\u003e\n\n\n### Gradients*Inputs\n\n\u003e Variant of [Vanilla Gradients](#vanilla-gradients) weighting gradients by input values\n\n```python\nfrom tf_explain.callbacks.gradients_inputs import GradientsInputsCallback\n\nmodel = [...]\n\ncallbacks = [\n    GradientsInputsCallback(\n        validation_data=(x_val, y_val),\n        class_index=0,\n        output_dir=output_dir,\n    ),\n]\n\nmodel.fit(x_train, 
y_train, batch_size=2, epochs=2, callbacks=callbacks)\n```\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"./docs/assets/gradients_inputs.png\" width=\"200\" /\u003e\n\u003c/p\u003e\n\n\n### Occlusion Sensitivity\n\n\u003e Visualize how parts of the image affect the neural network's confidence by iteratively occluding patches\n\n```python\nfrom tf_explain.callbacks.occlusion_sensitivity import OcclusionSensitivityCallback\n\nmodel = [...]\n\ncallbacks = [\n    OcclusionSensitivityCallback(\n        validation_data=(x_val, y_val),\n        class_index=0,\n        patch_size=4,\n        output_dir=output_dir,\n    ),\n]\n\nmodel.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)\n```\n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"./docs/assets/occlusion_sensitivity.png\" width=\"200\" /\u003e\n    \u003cp style=\"color: grey; font-size:small; width:350px;\"\u003eOcclusion Sensitivity for Tabby class (stripes differentiate tabby cat from other ImageNet cat classes)\u003c/p\u003e\n\u003c/div\u003e\n\n### Grad CAM\n\n\u003e Visualize how parts of the image affect the neural network's output by looking at the activation maps\n\nFrom [Grad-CAM: Visual Explanations from Deep Networks\nvia Gradient-based Localization](https://arxiv.org/abs/1610.02391)\n\n```python\nfrom tf_explain.callbacks.grad_cam import GradCAMCallback\n\nmodel = [...]\n\ncallbacks = [\n    GradCAMCallback(\n        validation_data=(x_val, y_val),\n        class_index=0,\n        output_dir=output_dir,\n    )\n]\n\nmodel.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)\n```\n\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"./docs/assets/grad_cam.png\" width=\"200\" /\u003e\n\u003c/p\u003e\n\n### SmoothGrad\n\n\u003e Visualize gradients on the input, stabilized by averaging over noisy samples\n\nFrom [SmoothGrad: removing noise by adding noise](https://arxiv.org/abs/1706.03825)\n\n```python\nfrom tf_explain.callbacks.smoothgrad import SmoothGradCallback\n\nmodel = 
[...]\n\ncallbacks = [\n    SmoothGradCallback(\n        validation_data=(x_val, y_val),\n        class_index=0,\n        num_samples=20,\n        noise=1.,\n        output_dir=output_dir,\n    )\n]\n\nmodel.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)\n```\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"./docs/assets/smoothgrad.png\" width=\"200\" /\u003e\n\u003c/p\u003e\n\n### Integrated Gradients\n\n\u003e Visualize the average of gradients accumulated along a path from a baseline to the input\n\nFrom [Axiomatic Attribution for Deep Networks](https://arxiv.org/pdf/1703.01365.pdf)\n\n```python\nfrom tf_explain.callbacks.integrated_gradients import IntegratedGradientsCallback\n\nmodel = [...]\n\ncallbacks = [\n    IntegratedGradientsCallback(\n        validation_data=(x_val, y_val),\n        class_index=0,\n        n_steps=20,\n        output_dir=output_dir,\n    )\n]\n\nmodel.fit(x_train, y_train, batch_size=2, epochs=2, callbacks=callbacks)\n```\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"./docs/assets/integrated_gradients.png\" width=\"200\" /\u003e\n\u003c/p\u003e\n\n\n## Roadmap\n\n- [ ] Subclassing API Support\n- [ ] Additional Methods\n  - [ ] [GradCAM++](https://arxiv.org/abs/1710.11063)\n  - [x] [Integrated Gradients](https://arxiv.org/abs/1703.01365)\n  - [x] [Guided SmoothGrad](https://arxiv.org/abs/1706.03825)\n  - [ ] [LRP](https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0130140)\n- [ ] Auto-generated API Documentation \u0026 Documentation Testing\n\n## Contributing\n\nTo contribute to the project, please read the [dedicated section](./CONTRIBUTING.md).\n\n## Citation\n\nA [citation file](./CITATION.cff) is available for citing this work. 
Click the \"Cite this repository\" button on the right-side panel of GitHub to get a BibTeX-ready citation.\n","funding_links":[],"categories":["Python","Model Interpretability","Sample Codes / Projects \u003ca name=\"sample\" /\u003e ⛏️📐📁","Python Libraries (sorted in alphabetical order)","Technical Resources","Other 💛💛💛💛💛\u003ca name=\"Other\" /\u003e","Uncategorized"],"sub_categories":["General 🚧 \u003ca name=\"GeneralCode\" /\u003e","Evaluation methods","Open Source/Access Responsible AI Software Packages","Interpretability Tools","Uncategorized"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsicara%2Ftf-explain","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fsicara%2Ftf-explain","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fsicara%2Ftf-explain/lists"}