{"id":13736628,"url":"https://github.com/PAIR-code/saliency","last_synced_at":"2025-05-08T12:33:30.992Z","repository":{"id":21733616,"uuid":"93900269","full_name":"PAIR-code/saliency","owner":"PAIR-code","description":"Framework-agnostic implementation for state-of-the-art saliency methods (XRAI, BlurIG, SmoothGrad, and more).","archived":false,"fork":false,"pushed_at":"2024-03-20T19:44:13.000Z","size":329482,"stargazers_count":934,"open_issues_count":12,"forks_count":190,"subscribers_count":24,"default_branch":"master","last_synced_at":"2024-05-22T07:52:45.779Z","etag":null,"topics":["convolutional-neural-networks","deep-learning","deep-neural-networks","ig-saliency","image-recognition","machine-learning","object-detection","saliency","saliency-map","smoothgrad","tensorflow"],"latest_commit_sha":null,"homepage":"https://pair-code.github.io/saliency/","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/PAIR-code.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-06-09T22:07:35.000Z","updated_at":"2024-06-18T15:14:22.608Z","dependencies_parsed_at":"2023-01-14T08:00:49.126Z","dependency_job_id":"2ab768ad-9cf7-4944-805d-b48f8a21d124","html_url":"https://github.com/PAIR-code/saliency","commit_stats":{"total_commits":70,"total_committers":18,"mean_commits":3.888888888888889,"dds":0.8571428571428572,"last_synced_commit":"fc90418fadff32285620331e8c385cef852793a5"},"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/
hosts/GitHub/repositories/PAIR-code%2Fsaliency","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PAIR-code%2Fsaliency/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PAIR-code%2Fsaliency/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PAIR-code%2Fsaliency/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/PAIR-code","download_url":"https://codeload.github.com/PAIR-code/saliency/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":224732129,"owners_count":17360416,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["convolutional-neural-networks","deep-learning","deep-neural-networks","ig-saliency","image-recognition","machine-learning","object-detection","saliency","saliency-map","smoothgrad","tensorflow"],"created_at":"2024-08-03T03:01:25.324Z","updated_at":"2025-05-08T12:33:30.984Z","avatar_url":"https://github.com/PAIR-code.png","language":"Jupyter Notebook","readme":"# Saliency Library\n## Updates\n\n\u0026#x1F534;\u0026nbsp;\u0026nbsp; Now framework-agnostic! [(Example core notebook)](Examples_core.ipynb) \u0026nbsp;\u0026#x1F534;\n\n\u0026#x1F517;\u0026nbsp;\u0026nbsp; For further explanation of the methods and more examples of the resulting maps, see our [Github Pages website](https://pair-code.github.io/saliency)  \u0026nbsp;\u0026#x1F517;\n\nIf upgrading from an older version, update old imports to `import saliency.tf1 as saliency`. 
We provide wrappers to make the framework-agnostic version compatible with TF1 models. [(Example TF1 notebook)](Examples_tf1.ipynb)\n\n\u0026#x1F534;\u0026nbsp;\u0026nbsp; Added Performance Information Curve (PIC) - a human-independent\nmetric for evaluating the quality of saliency methods.\n([Example notebook](https://github.com/PAIR-code/saliency/blob/master/pic_metrics.ipynb)) \u0026nbsp;\u0026#x1F534;\n\n## Saliency Methods\n\nThis repository contains code for the following saliency techniques:\n\n*   Guided Integrated Gradients* ([paper](https://arxiv.org/abs/2106.09788), [poster](https://github.com/PAIR-code/saliency/blob/master/docs/CVPR_Guided_IG_Poster.pdf))\n*   XRAI* ([paper](https://arxiv.org/abs/1906.02825), [poster](https://github.com/PAIR-code/saliency/blob/master/docs/ICCV_XRAI_Poster.pdf))\n*   SmoothGrad* ([paper](https://arxiv.org/abs/1706.03825))\n*   Vanilla Gradients\n    ([paper](https://scholar.google.com/scholar?q=Visualizing+higher-layer+features+of+a+deep+network\u0026btnG=\u0026hl=en\u0026as_sdt=0%2C22),\n    [paper](https://arxiv.org/abs/1312.6034))\n*   Guided Backpropagation ([paper](https://arxiv.org/abs/1412.6806))\n*   Integrated Gradients ([paper](https://arxiv.org/abs/1703.01365))\n*   Occlusion\n*   Grad-CAM ([paper](https://arxiv.org/abs/1610.02391))\n*   Blur IG ([paper](https://arxiv.org/abs/2004.03383))\n\n\\*Developed by PAIR.\n\nThis list is by no means comprehensive. 
We are accepting pull requests to add\nnew methods!\n\n## Evaluation of Saliency Methods\n\nThe repository provides an implementation of Performance Information Curve (PIC) -\na human-independent metric for evaluating the quality of saliency methods\n([paper](https://arxiv.org/abs/1906.02825),\n[poster](https://github.com/PAIR-code/saliency/blob/master/docs/ICCV_XRAI_Poster.pdf),\n[code](https://github.com/PAIR-code/saliency/blob/master/saliency/metrics/pic.py),\n[notebook](https://github.com/PAIR-code/saliency/blob/master/pic_metrics.ipynb)).\n\n\n## Download\n\n```\n# To install the core subpackage:\npip install saliency\n\n# To install core and tf1 subpackages:\npip install saliency[tf1]\n\n```\n\nor for the development version:\n```\ngit clone https://github.com/pair-code/saliency\ncd saliency\n```\n\n\n## Usage\n\nThe saliency library has two subpackages:\n*\t`core` uses a generic `call_model_function` which can be used with any ML \n\tframework.\n*\t`tf1` accepts input/output tensors directly, and sets up the necessary \n\tgraph operations for each method.\n\n### Core\n\nEach saliency mask class extends from the `CoreSaliency` base class. This class\ncontains the following methods:\n\n*   `GetMask(x_value, call_model_function, call_model_args=None)`: Returns a mask of\n    the shape of non-batched `x_value` given by the saliency technique.\n*   `GetSmoothedMask(x_value, call_model_function, call_model_args=None, stdev_spread=.15, nsamples=25, magnitude=True)`: \n    Returns a smoothed mask of the shape of non-batched `x_value` with the \n    SmoothGrad technique.\n\n\nThe visualization module contains two methods for saliency visualization:\n\n* ```VisualizeImageGrayscale(image_3d, percentile)```: Marginalizes across the\n  absolute value of each channel to create a 2D single channel image, and clips\n  the image at the given percentile of the distribution. 
This method returns a\n  2D tensor normalized between 0 and 1.\n* ```VisualizeImageDiverging(image_3d, percentile)```: Marginalizes across the\n  value of each channel to create a 2D single channel image, and clips the\n  image at the given percentile of the distribution. This method returns a\n  2D tensor normalized between -1 and 1 where zero remains unchanged.\n\nIf the sign of the value given by the saliency mask is not important, then use\n```VisualizeImageGrayscale```, otherwise use ```VisualizeImageDiverging```. See\nthe SmoothGrad paper for more details on which visualization method to use.\n\n##### call_model_function\n`call_model_function` is how we pass inputs to a given model and receive the outputs\nnecessary to compute saliency masks. The description of this method and its expected \noutput format is in the `CoreSaliency` description, as well as separately for each method.\n\n##### Examples\n\n[This example iPython notebook](http://github.com/pair-code/saliency/blob/master/Examples_core.ipynb)\nshowing these techniques is a good starting place.\n\nHere is a condensed example of using IG+SmoothGrad with TensorFlow 2:\n\n```\nimport numpy as np\nimport saliency.core as saliency\nimport tensorflow as tf\n\n...\n\n# call_model_function construction here; `model` is defined in the elided setup.\ndef call_model_function(x_value_batched, call_model_args, expected_keys):\n    images = tf.convert_to_tensor(x_value_batched)\n    with tf.GradientTape() as tape:\n        # Watch the input so gradients w.r.t. the images are recorded.\n        tape.watch(images)\n        output_layer = model(images)\n    grads = np.array(tape.gradient(output_layer, images))\n    return {saliency.INPUT_OUTPUT_GRADIENTS: grads}\n\n...\n\n# Load data.\nimage = GetImagePNG(...)\n\n# Compute IG+SmoothGrad.\nig_saliency = saliency.IntegratedGradients()\nsmoothgrad_ig = ig_saliency.GetSmoothedMask(\n    image, call_model_function, call_model_args=None)\n\n# Compute a 2D tensor for visualization.\ngrayscale_visualization = saliency.VisualizeImageGrayscale(\n    smoothgrad_ig)\n```\n\n### TF1\n\nEach saliency mask class extends from the `TF1Saliency` base class. 
This class\ncontains the following methods:\n\n*   `__init__(graph, session, y, x)`: Constructor of the SaliencyMask. This can\n    modify the graph, or sometimes create a new graph. Often this will add nodes\n    to the graph, so this shouldn't be called repeatedly. `y` is the output\n    tensor to compute saliency masks with respect to, and `x` is the input tensor\n    with the outermost dimension being batch size.\n*   `GetMask(x_value, feed_dict)`: Returns a mask of the shape of non-batched\n    `x_value` given by the saliency technique.\n*   `GetSmoothedMask(x_value, feed_dict)`: Returns a smoothed mask of the shape\n    of non-batched `x_value` with the SmoothGrad technique.\n\nThe visualization module contains two visualization methods:\n\n* ```VisualizeImageGrayscale(image_3d, percentile)```: Marginalizes across the\n  absolute value of each channel to create a 2D single channel image, and clips\n  the image at the given percentile of the distribution. This method returns a\n  2D tensor normalized between 0 and 1.\n* ```VisualizeImageDiverging(image_3d, percentile)```: Marginalizes across the\n  value of each channel to create a 2D single channel image, and clips the\n  image at the given percentile of the distribution. This method returns a\n  2D tensor normalized between -1 and 1 where zero remains unchanged.\n\nIf the sign of the value given by the saliency mask is not important, then use\n```VisualizeImageGrayscale```, otherwise use ```VisualizeImageDiverging```. 
See\nthe SmoothGrad paper for more details on which visualization method to use.\n\n##### Examples\n\n[This example iPython notebook](http://github.com/pair-code/saliency/blob/master/Examples_tf1.ipynb) showing\nthese techniques is a good starting place.\n\nHere is another example, using GuidedBackprop with SmoothGrad in TensorFlow 1:\n\n```\nfrom saliency.tf1 import GuidedBackprop\nfrom saliency.tf1 import VisualizeImageGrayscale\nimport tensorflow.compat.v1 as tf\n\n...\n# TensorFlow graph construction here.\ny = logits[5]\nx = tf.placeholder(...)\n...\n\n# Compute guided backprop.\n# NOTE: This creates another graph that gets cached; try to avoid creating many\n# of these.\nguided_backprop_saliency = GuidedBackprop(graph, session, y, x)\n\n...\n# Load data.\nimage = GetImagePNG(...)\n...\n\nsmoothgrad_guided_backprop = guided_backprop_saliency.GetSmoothedMask(\n    image, feed_dict={...})\n\n# Compute a 2D tensor for visualization.\ngrayscale_visualization = VisualizeImageGrayscale(\n    smoothgrad_guided_backprop)\n```\n\n## Conclusion/Disclaimer\n\nIf you have any questions or suggestions for improvements to this library,\nplease contact the owners of the `PAIR-code/saliency` repository.\n\nThis is not an official Google product.","funding_links":[],"categories":["Other_Machine Learning and Deep Learning","Jupyter Notebook","TensorFlow Utilities","Python Libraries (sorted in alphabetical order)"],"sub_categories":["Evaluation methods"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FPAIR-code%2Fsaliency","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FPAIR-code%2Fsaliency","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FPAIR-code%2Fsaliency/lists"}