<div align="center">

# torch-influence

![GitHub release (latest SemVer)](https://img.shields.io/github/v/release/alstonlo/torch-influence)
[![Read the Docs](https://img.shields.io/readthedocs/torch-influence)](https://torch-influence.readthedocs.io/en/latest/)
[![License: Apache 2.0](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](LICENSE.txt)

</div>

![](examples/dogfish_influences.png)

torch-influence is a PyTorch implementation of influence functions,
a classical technique from robust statistics that estimates the effect of removing a single
training data point on a model's learned parameters. In their seminal paper
_Understanding Black-box Predictions via Influence Functions_
([paper](https://arxiv.org/abs/1703.04730)),
Koh & Liang (2017) first brought influence functions into the domain of machine learning. Since then,
influence functions have been applied to a variety of machine learning tasks,
including explaining model predictions, dataset relabelling and reweighting,
data poisoning, increasing model fairness, and data augmentation.

This library aims to be simple and minimal. In addition, it fixes a few errors found in some existing
implementations of influence functions.

This code is a supplement to the paper [If Influence Functions are the Answer, Then What is the Question?](https://arxiv.org/abs/2209.05364). A companion Jax implementation can be found [here](https://github.com/pomonam/jax-influence).

______________________________________________________________________

## Installation

Install from source with pip:

```bash
git clone https://github.com/alstonlo/torch-influence
cd torch-influence
pip install -e .
```

______________________________________________________________________

## Quickstart

### Overview

To use torch-influence, the first step is to subclass its `BaseInfluenceModule` class and implement its
single abstract method, `BaseInfluenceModule.inverse_hvp()`. This method computes inverse Hessian-vector products (iHVPs),
an important but costly step in influence function computation.
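Concretely, the influence of a training point $z$ on a test point $z_{\mathrm{test}}$ is estimated as $\mathcal{I}(z, z_{\mathrm{test}}) = -\nabla_\theta L(z_{\mathrm{test}})^{\top} H^{-1} \nabla_\theta L(z)$, where $H$ is the Hessian of the training objective at the learned parameters (Koh & Liang, 2017); the iHVP $H^{-1} \nabla_\theta L(z)$ is the expensive term. As a rough illustration, the sketch below forms the Hessian densely and solves the system directly on a toy logistic-regression problem. All names, data, and sizes here are illustrative (not part of the library's API), and this dense strategy only scales to very small models:

```python
import torch

torch.manual_seed(0)

# Toy data: 20 points, 3 features, binary labels (illustrative only)
X = torch.randn(20, 3)
y = (X[:, 0] > 0).float()


def loss_fn(w):
    # L2-regularized logistic regression loss over the full training set
    logits = X @ w
    return torch.nn.functional.binary_cross_entropy_with_logits(logits, y) + 0.01 * w.dot(w)


w = torch.zeros(3, requires_grad=True)

# Dense Hessian of the training objective at w (feasible only for tiny models)
H = torch.autograd.functional.hessian(loss_fn, w)  # shape (3, 3)

# Gradient of the loss on a single training point z = (X[0], y[0])
z_loss = torch.nn.functional.binary_cross_entropy_with_logits(X[0] @ w, y[0])
grad_z = torch.autograd.grad(z_loss, w)[0]

# The iHVP: solve H x = grad_z instead of explicitly inverting H
ihvp = torch.linalg.solve(H, grad_z)

# Influence of z on a test point (another point stands in for test data here)
test_loss = torch.nn.functional.binary_cross_entropy_with_logits(X[1] @ w, y[1])
grad_test = torch.autograd.grad(test_loss, w)[0]
influence = -grad_test.dot(ihvp)
```

For models with many parameters, materializing $H$ is out of the question, which is why iHVPs are instead approximated iteratively.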
Conveniently, torch-influence provides three subclasses out-of-the-box:

<div align="center">

| Subclass | Method of iHVP computation |
| ------------- | ------------- |
| `AutogradInfluenceModule` | Direct computation and inversion of the Hessian with `torch.autograd` |
| `CGInfluenceModule` | Truncated Conjugate Gradients (Martens, 2010) ([paper](https://www.cs.toronto.edu/~jmartens/docs/Deep_HessianFree.pdf)) |
| `LiSSAInfluenceModule` | Linear-time Stochastic Second-Order Algorithm (Agarwal et al., 2016) ([paper](https://arxiv.org/abs/1602.03943)) |

</div>

The next step is to subclass `BaseObjective` and implement its four abstract methods.
The `BaseObjective` class serves as an adapter that holds project-specific information about how
training and test losses are computed.
`BaseInfluenceModule` and its three subclasses all require an implementation of `BaseObjective` to be passed through their constructors.
The following is a sample subclass for an $L_2$-regularized classification model:

```python
import torch
import torch.nn.functional as F
from torch_influence import BaseObjective


class MyObjective(BaseObjective):

    def train_outputs(self, model, batch):
        return model(batch[0])

    def train_loss_on_outputs(self, outputs, batch):
        return F.cross_entropy(outputs, batch[1])  # mean reduction required

    def train_regularization(self, params):
        return 0.01 * torch.square(params.norm())

    # the training loss is taken by default to be
    # train_loss_on_outputs + train_regularization

    def test_loss(self, model, params, batch):
        return F.cross_entropy(model(batch[0]), batch[1])  # no regularization in test loss
```

Finally, all that is left is to piece everything together.
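The snippet that follows assumes `model`, `train_loader`, `test_loader`, and `device` are already defined. For a self-contained run, a toy setup compatible with `MyObjective` above might look like the following (the shapes, sizes, and synthetic data here are arbitrary choices, not prescribed by the library):

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cpu")

# A small linear classifier; its 2-class output matches MyObjective's
# cross-entropy losses
model = nn.Linear(10, 2).to(device)

# Synthetic (inputs, labels) datasets, batched as MyObjective expects
X_train, y_train = torch.randn(100, 10), torch.randint(0, 2, (100,))
X_test, y_test = torch.randn(20, 10), torch.randint(0, 2, (20,))

train_loader = DataLoader(TensorDataset(X_train, y_train), batch_size=32)
test_loader = DataLoader(TensorDataset(X_test, y_test), batch_size=32)
```

Note that influence functions are derived at a minimum of the training loss, so in practice `model` should be trained to convergence before influence scores are computed.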
After instantiating a subclass of `BaseInfluenceModule`,
influence scores can be computed through the `BaseInfluenceModule.influences()` method.
For example:

```python
from torch_influence import AutogradInfluenceModule


module = AutogradInfluenceModule(
    model=model,
    objective=MyObjective(),
    train_loader=train_loader,
    test_loader=test_loader,
    device=device,
    damp=0.001
)

# influence scores of training points 1, 2, and 3 on test point 0
scores = module.influences([1, 2, 3], [0])
```

For more details, we refer users to the [API Reference](https://torch-influence.readthedocs.io/en/latest/).

### Dogfish

The `examples/` directory contains a more complete example, which finetunes the topmost
layer of a pretrained Inceptionv3 network on the Dogfish dataset (Koh & Liang, 2017). It then
uses influence functions to find the most helpful and most harmful training images
with respect to a couple of test images. To run the example, download and extract
the Dogfish dataset ([CodaLab](https://worksheets.codalab.org/bundles/0x550cd344825049bdbb865b887381823c))
into the `examples/` folder and execute the following:

```bash
# install dependencies
pip install -e .[dev]

cd examples/

# train model and analyze influence scores
python analyze_dogfish.py
```

______________________________________________________________________

## Contributors

- [Alston Lo](https://github.com/alstonlo)
- [Juhan Bae](https://www.juhanbae.com/)