{"id":34095555,"url":"https://github.com/basics-lab/spectral-explain","last_synced_at":"2026-03-17T20:38:30.869Z","repository":{"id":278450285,"uuid":"884415529","full_name":"basics-lab/spectral-explain","owner":"basics-lab","description":"Fast XAI with interactions at large scale. SPEX can help you understand the output of your LLM, even if you have a long context!","archived":false,"fork":false,"pushed_at":"2026-03-10T23:21:29.000Z","size":5672,"stargazers_count":12,"open_issues_count":0,"forks_count":0,"subscribers_count":4,"default_branch":"main","last_synced_at":"2026-03-11T04:42:19.523Z","etag":null,"topics":["explainability","explainable-ai","llm-interpretability","shap","sparse-transformer","xai"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2502.13870","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/basics-lab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-11-06T17:55:19.000Z","updated_at":"2026-03-10T23:21:33.000Z","dependencies_parsed_at":"2025-02-19T21:22:39.895Z","dependency_job_id":"b6d1b4ad-9b14-4023-8a03-ccb3abc15e27","html_url":"https://github.com/basics-lab/spectral-explain","commit_stats":null,"previous_names":["basics-lab/spectral-explain"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/basics-lab/spectral-explain","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/basics-lab%2Fspectral-explain","tags_url":"ht
tps://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/basics-lab%2Fspectral-explain/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/basics-lab%2Fspectral-explain/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/basics-lab%2Fspectral-explain/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/basics-lab","download_url":"https://codeload.github.com/basics-lab/spectral-explain/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/basics-lab%2Fspectral-explain/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":30631403,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-03-17T17:32:55.572Z","status":"ssl_error","status_checked_at":"2026-03-17T17:32:38.732Z","response_time":56,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["explainability","explainable-ai","llm-interpretability","shap","sparse-transformer","xai"],"created_at":"2025-12-14T15:15:00.411Z","updated_at":"2026-03-17T20:38:30.864Z","avatar_url":"https://github.com/basics-lab.png","language":"Jupyter Notebook","readme":"\u003cp align=\"center\"\u003e\n  \u003cb\u003e⚠️ NOTE: We encourage using the implementation of SPEX and ProxySPEX within \u003ca href=\"https://github.com/mmschlk/shapiq\"\u003eshapiq\u003c/a\u003e, which receives much more frequent 
maintenance. ⚠️\u003c/b\u003e\n\u003c/p\u003e\n\u003ch1 align=\"center\"\u003e\n\u003cp align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/landonbutler/landonbutler.github.io/blob/master/imgs/spex.png?raw=True\" width=\"200\" style=\"vertical-align: middle;\"\u003e\n  \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\n  \u003cimg src=\"https://github.com/landonbutler/landonbutler.github.io/blob/master/imgs/ProxySPEX.png?raw=True\" width=\"260\" style=\"vertical-align: middle;\"\u003e\n\u003c/p\u003e\n\n\u003ch4 align=\"center\"\u003eSpectral Explainer: Scalable Feature Interaction Attribution\u003c/h4\u003e\n\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"#installation\"\u003eInstallation\u003c/a\u003e •\n  \u003ca href=\"#quickstart\"\u003eQuickstart\u003c/a\u003e •\n  \u003ca href=\"#examples\"\u003eExamples\u003c/a\u003e •\n  \u003ca href=\"#citation\"\u003eCitation\u003c/a\u003e\n\u003c/p\u003e\n\n\u003ch2 id=\"installation\"\u003eInstallation\u003c/h2\u003e\n\nTo install the core `spectralexplain` package via PyPI, run:\n```\npip install spectralexplain\n```\n\n### Requirements\nTo replicate the experiments in this repository, you need to install additional dependencies. To install `spectralexplain` with these optional dependencies, run:\n```\ngit clone git@github.com:basics-lab/spectral-explain.git\ncd spectral-explain\npip install -e .[dev]\n```\n\nTo use the `ExactSolver` for optimizing the value function, you will additionally need a valid [Gurobi License](https://www.gurobi.com/) configured on your machine.\n\nFor Hugging Face models, you must install `transformers` and `torch`, and have your Hugging Face API token configured as an environment variable in your terminal:\n```bash\nexport HF_TOKEN=\"your_hf_token_here\"\n```\n\n\u003ch2 id=\"quickstart\"\u003eQuickstart\u003c/h2\u003e\n\n`spectralexplain` can be used to quickly compute feature interactions for your models and datasets. 
Simply define a `value_function` which takes in a matrix of masking patterns and returns the model's outputs to masked inputs.\n\nUpon passing this function to the `Explainer` class, alongside the number of features in your dataset, `spectralexplain` will discover feature interactions. You can specify `algorithm=\"proxyspex\"` to use the recent [ProxySPEX](https://openreview.net/forum?id=KI8qan2EA7) algorithm, or use the default [SPEX](https://openreview.net/forum?id=pRlKbAwczl) algorithm.\n\nCalling `explainer.interactions`, alongside a choice of interaction index, will return an `Interactions` object for any of the following interaction types:\n\n\u003cdiv align=\"center\"\u003e\n  \n| Index | Full Name | Citation |\n| :--- | :--- | :--- | \n| **`fourier`** | Fourier Interactions | [Ahmed et al. (1975)](https://www.researchgate.net/publication/3115888_Orthogonal_Transform_for_Digital_Signal_Processing) | \n| **`mobius`** | Möbius Interactions (Harsanyi Dividends) | [Harsanyi (1959)](https://doi.org/10.2307/2525487), [Grabisch et al. (2000)](https://www.jstor.org/stable/3690575)|\n| **`bii`** | Banzhaf Interaction Index | [Grabisch et al. (2000)](https://www.jstor.org/stable/3690575) |\n| **`sii`** | Shapley Interaction Index | [Grabisch et al. (2000)](https://www.jstor.org/stable/3690575) |\n| **`fbii`** | Faith-Banzhaf Interaction Index | [Tsai et al. (2023)](https://jmlr.org/papers/v24/22-0202.html) |\n| **`fsii`** | Faith-Shapley Interaction Index | [Tsai et al. (2023)](https://jmlr.org/papers/v24/22-0202.html) | \n| **`stii`** | Shapley-Taylor Interaction Index | [Sundararajan et al. 
(2020)](https://proceedings.mlr.press/v119/sundararajan20a.html) |\n\n\n\u003c/div\u003e\n\n```python\nimport spectralexplain as spex\n\n# X is a (num_samples x num_features) binary masking matrix\ndef value_function(X):\n    return ...\n\nexplainer = spex.Explainer(\n    value_function=value_function,\n    features=num_features,\n    algorithm=\"proxyspex\", # Optional: defaults to \"spex\"\n    max_order=5 # Optional: caps the interaction order\n)\n\nprint(explainer.interactions(index=\"fbii\"))\n```\n\nFirst, a sparse Fourier representation is learned. Then, the representation is converted to your index of choice using the conversions in [Appendix C](https://openreview.net/forum?id=pRlKbAwczl) of our paper.\n\u003ch2 id=\"examples\"\u003eExamples\u003c/h2\u003e\n\u003ch3\u003eTabular\u003c/h3\u003e\n\n```python\nimport spectralexplain as spex\nimport numpy as np\nfrom sklearn.ensemble import RandomForestRegressor\nfrom sklearn.datasets import load_breast_cancer\n\ndata, target = load_breast_cancer(return_X_y=True)\ntest_point, data, target = data[0], data[1:], target[1:]\n\nmodel = RandomForestRegressor().fit(data, target)\n\ndef tabular_masking(X):\n    return model.predict(np.where(X, test_point, data.mean(axis=0)))\n\nexplainer = spex.Explainer(\n    value_function=tabular_masking,\n    features=range(len(test_point)),\n    sample_budget=1000,\n    algorithm=\"proxyspex\"\n)\n\nprint(explainer.interactions(index=\"fbii\"))\n\n\u003e\u003e Interactions(\n\u003e\u003e   index=FBII, max_order=4, baseline_value=0.626\n\u003e\u003e   sample_budget=1000, num_features=30,\n\u003e\u003e   Top Interactions:\n\u003e\u003e     (27,): -0.295\n\u003e\u003e     (22,): -0.189\n\u003e\u003e     (3, 6, 8, 22): 0.188\n\u003e\u003e     (6, 10, 14, 28): 0.176\n\u003e\u003e     (23,): -0.145\n\u003e\u003e )\n```\n\u003ch3\u003eSentiment Analysis\u003c/h3\u003e\n\n```python\nimport spectralexplain as spex\nfrom transformers import pipeline\n\nreview = \"Her acting never fails to 
impress\".split()\nsentiment_pipeline = pipeline(\"sentiment-analysis\")\n\ndef sentiment_masking(X):\n    masked_reviews = [\" \".join([review[i] if x[i] == 1 else \"[MASK]\" for i in range(len(review))]) for x in X]\n    return [outputs['score'] if outputs['label'] == 'POSITIVE' else 1-outputs['score'] for outputs in sentiment_pipeline(masked_reviews)]\n\nexplainer = spex.Explainer(value_function=sentiment_masking,\n                           features=review,\n                           sample_budget=1000)\n\nprint(explainer.interactions(index=\"stii\"))\n\n\u003e\u003e Interactions(\n\u003e\u003e   index=STII, max_order=5, baseline_value=-0.63\n\u003e\u003e   sample_budget=1000, num_features=6,\n\u003e\u003e   Top Interactions:\n\u003e\u003e     ('never', 'fails'): 2.173\n\u003e\u003e     ('fails', 'impress'): -1.615\n\u003e\u003e     ('never', 'fails', 'impress'): 1.592\n\u003e\u003e     ('fails', 'to'): -1.505\n\u003e\u003e     ('impress',): 1.436\n\u003e\u003e )\n```\n\n\u003ch3\u003eOptimizing the Value Function\u003c/h3\u003e\n\n```python\nimport spectralexplain as spex\n\n# A basic example of finding the optimal feature perturbations to maximize the value function\n# given a sparse Fourier interaction representation.\n\nsolver = spex.utils.ExactSolver(\n    fourier_dictionary=explainer.fourier_transform,\n    maximize=True, \n    exact_solution_order=5 # Optional: specify exact number of features to select\n)\noptimal_features = solver.solve()\nprint(\"Optimal feature selection:\", optimal_features)\n```\n\n\u003ch2 id=\"citation\"\u003eCitation\u003c/h2\u003e\n\n```bibtex\n@inproceedings{\n  kang2025spex,\n  title={{SPEX}: Scaling Feature Interaction Explanations for {LLM}s},\n  author={Justin Singh Kang and Landon Butler and Abhineet Agarwal and Yigit Efe Erginbas and Ramtin Pedarsani and Bin Yu and Kannan Ramchandran},\n  booktitle={Forty-second International Conference on Machine Learning},\n  year={2025},\n  
url={https://openreview.net/forum?id=pRlKbAwczl}\n}\n\n@inproceedings{\n  butler2025proxyspex,\n  title={ProxySPEX: Inference-Efficient Interpretability via Sparse Feature Interactions in LLMs},\n  author={Landon Butler and Abhineet Agarwal and Justin Singh Kang and Yigit Efe Erginbas and Bin Yu and Kannan Ramchandran},\n  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},\n  year={2025},\n  url={https://openreview.net/forum?id=KI8qan2EA7}\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbasics-lab%2Fspectral-explain","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbasics-lab%2Fspectral-explain","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbasics-lab%2Fspectral-explain/lists"}