{"id":13472239,"url":"https://pair-code.github.io/lit/","last_synced_at":"2025-03-26T15:31:41.438Z","repository":{"id":37051248,"uuid":"283215238","full_name":"PAIR-code/lit","owner":"PAIR-code","description":"The Learning Interpretability Tool: Interactively analyze ML models to understand their behavior in an extensible and framework agnostic interface.","archived":false,"fork":false,"pushed_at":"2024-10-29T11:46:49.000Z","size":211800,"stargazers_count":3482,"open_issues_count":116,"forks_count":355,"subscribers_count":67,"default_branch":"main","last_synced_at":"2024-10-29T14:51:41.548Z","etag":null,"topics":["machine-learning","natural-language-processing","visualization"],"latest_commit_sha":null,"homepage":"https://pair-code.github.io/lit","language":"TypeScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/PAIR-code.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-07-28T13:07:26.000Z","updated_at":"2024-10-25T20:44:54.000Z","dependencies_parsed_at":"2024-01-06T07:56:46.905Z","dependency_job_id":"5067b7a6-4b7a-43ca-8eea-d53c943d3996","html_url":"https://github.com/PAIR-code/lit","commit_stats":{"total_commits":1379,"total_committers":41,"mean_commits":33.63414634146341,"dds":0.7766497461928934,"last_synced_commit":"61faeb66127e6ec33b16d4ef1fa3c412e14f5b82"},"previous_names":[],"tags_count":13,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PAIR-code%2Flit","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/P
AIR-code%2Flit/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PAIR-code%2Flit/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PAIR-code%2Flit/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/PAIR-code","download_url":"https://codeload.github.com/PAIR-code/lit/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245535493,"owners_count":20631297,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["machine-learning","natural-language-processing","visualization"],"created_at":"2024-07-31T16:00:53.213Z","updated_at":"2025-03-26T15:31:41.416Z","avatar_url":"https://github.com/PAIR-code.png","language":"TypeScript","readme":"# 🔥 Learning Interpretability Tool (LIT)\n\n\u003c!--* freshness: { owner: 'lit-dev' reviewed: '2024-06-25' } *--\u003e\n\nThe Learning Interpretability Tool (🔥LIT, formerly known as the Language\nInterpretability Tool) is a visual, interactive ML model-understanding tool that\nsupports text, image, and tabular data. 
It can be run as a standalone server, or\ninside of notebook environments such as Colab, Jupyter, and Google Cloud Vertex\nAI notebooks.\n\nLIT is built to answer questions such as:\n\n*   **What kind of examples** does my model perform poorly on?\n*   **Why did my model make this prediction?** Can this prediction be attributed\n    to adversarial behavior, or to undesirable priors in the training set?\n*   **Does my model behave consistently** if I change things like textual style,\n    verb tense, or pronoun gender?\n\n![Example of LIT UI](https://pair-code.github.io/lit/assets/images/readme-fig-1.png)\n\nLIT supports a variety of debugging workflows through a browser-based UI.\nFeatures include:\n\n*   **Local explanations** via salience maps and rich visualization of model\n    predictions.\n*   **Aggregate analysis** including custom metrics, slicing and binning, and\n    visualization of embedding spaces.\n*   **Counterfactual generation** via manual edits or generator plug-ins to\n    dynamically create and evaluate new examples.\n*   **Side-by-side mode** to compare two or more models, or one model on a pair\n    of examples.\n*   **Highly extensible** to new model types, including classification,\n    regression, span labeling, seq2seq, and language modeling. 
Supports\n    multi-head models and multiple input features out of the box.\n*   **Framework-agnostic** and compatible with TensorFlow, PyTorch, and more.\n\nLIT has a [website](https://pair-code.github.io/lit) with live demos, tutorials,\na setup guide and more.\n\nStay up to date on LIT by joining the\n[lit-announcements mailing list](https://groups.google.com/g/lit-annoucements).\n\nFor a broader overview, check out [our paper](https://arxiv.org/abs/2008.05122) and the\n[user guide](https://pair-code.github.io/lit/documentation/ui_guide).\n\n## Documentation\n\n*   [Documentation index](https://pair-code.github.io/lit/documentation/)\n*   [FAQ](https://pair-code.github.io/lit/documentation/faq)\n*   [Release notes](./RELEASE.md)\n\n## Download and Installation\n\nLIT can be installed via `pip` or built from source. Building from source is\nnecessary if you want to make code changes.\n\n### Install from PyPI with pip\n\n```sh\npip install lit-nlp\n```\n\nThe default `pip` installation will install all required packages to use the LIT\nPython API, built-in interpretability components, and web application. To\ninstall dependencies for the provided demos or test suite, install LIT with the\nappropriate optional dependencies.\n\n```sh\n# To install dependencies for the discriminative AI examples (GLUE, Penguin)\npip install 'lit-nlp[examples-discriminative-ai]'\n\n# To install dependencies for the generative AI examples (Prompt Debugging)\npip install 'lit-nlp[examples-generative-ai]'\n\n# To install dependencies for all examples plus the test suite\npip install 'lit-nlp[test]'\n```\n\n### Install from source\n\nClone the repo:\n\n```sh\ngit clone https://github.com/PAIR-code/lit.git\ncd lit\n```\n\nNote: be sure you are running Python 3.9+. 
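You can check with, for example (assuming a `python3` launcher on your PATH):\n\n```sh\n# Print the active interpreter version; it should report 3.9 or newer.\npython3 --version\n```\n\n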
If you have a different version on\nyour system, use an environment manager such as `conda` to set up a Python 3.9\nenvironment.\n\nSet up a Python environment with `venv` (or your preferred environment manager).\nNote that these instructions assume you will be making code changes to LIT and\ninclude the full requirements for all examples and the test suite. See the\npip installation section above for the other optional dependency sets.\n\n```sh\npython -m venv .venv\nsource .venv/bin/activate\npython -m pip install -e '.[test]'\n```\n\nThe LIT repo does not include a distributable version of the LIT app. You must\nbuild it from source.\n\n```sh\n(cd lit_nlp; yarn \u0026\u0026 yarn build)\n```\n\nNote: if you see [an error](https://github.com/yarnpkg/yarn/issues/2821)\nrunning `yarn` on Ubuntu/Debian, be sure you have the\n[correct version installed](https://yarnpkg.com/en/docs/install#linux-tab).\n\n## Running LIT\n\nExplore a collection of hosted demos on the\n[demos page](https://pair-code.github.io/lit/demos).\n\n### Using container images\n\nSee the [containerization guide](https://pair-code.github.io/lit/documentation/docker) for instructions on using LIT\nlocally in Docker, Podman, etc.\n\nLIT also provides pre-built images that can take advantage of accelerators,\nmaking Generative AI and LLM use cases easier to work with. 
Check out the\n[LIT on GCP docs](https://codelabs.developers.google.com/codelabs/responsible-ai/lit-on-gcp)\nfor more.\n\n### Quick-start: classification and regression\n\nTo explore classification and regression models on tasks from the popular\n[GLUE benchmark](https://gluebenchmark.com/):\n\n```sh\npython -m lit_nlp.examples.glue.demo --port=5432 --quickstart\n```\n\nNavigate to http://localhost:5432 to access the LIT UI.\n\nYour default view will be a\n[small BERT-based model](https://arxiv.org/abs/1908.08962) fine-tuned on the\n[Stanford Sentiment Treebank](https://nlp.stanford.edu/sentiment/treebank.html),\nbut you can switch to\n[STS-B](http://ixa2.si.ehu.es/stswiki/index.php/STSbenchmark) or\n[MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) using the toolbar or the\ngear icon in the upper right.\n\n### Notebook usage\n\nColab notebooks showing how to use LIT inside notebook environments can be found at\n[lit_nlp/examples/notebooks](./lit_nlp/examples/notebooks).\n\nWe provide a simple\n[Colab demo](https://colab.research.google.com/github/PAIR-code/lit/blob/main/lit_nlp/examples/notebooks/LIT_sentiment_classifier.ipynb).\nRun all the cells to see LIT on an example classification model in the notebook.\n\n### More Examples\n\nSee [lit_nlp/examples](./lit_nlp/examples). Most are run similarly to the\nquickstart example above:\n\n```sh\npython -m lit_nlp.examples.\u003cexample_name\u003e.demo --port=5432 [optional --args]\n```\n\n## User Guide\n\nTo learn about LIT's features, check out the [user guide](https://pair-code.github.io/lit/documentation/ui_guide), or\nwatch this [video](https://www.youtube.com/watch?v=CuRI_VK83dU).\n\n## Adding your own models or data\n\nYou can easily run LIT with your own model by creating a custom `demo.py`\nlauncher, similar to those in [lit_nlp/examples](./lit_nlp/examples). 
The\nbasic steps are:\n\n*   Write a data loader which follows the [`Dataset` API](https://pair-code.github.io/lit/documentation/api#datasets)\n*   Write a model wrapper which follows the [`Model` API](https://pair-code.github.io/lit/documentation/api#models)\n*   Pass models, datasets, and any additional\n    [components](https://pair-code.github.io/lit/documentation/api#interpretation-components) to the LIT server class\n\nFor a full walkthrough, see\n[adding models and data](https://pair-code.github.io/lit/documentation/api#adding-models-and-data).\n\n## Extending LIT with new components\n\nLIT is easy to extend with new interpretability components, generators, and\nmore, both on the frontend or the backend. See our [documentation](https://pair-code.github.io/lit/documentation/) to get\nstarted.\n\n## Pull Request Process\n\nTo make code changes to LIT, please work off of the `dev` branch and\n[create pull requests](https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request)\n(PRs) against that branch. The `main` branch is for stable releases, and it is\nexpected that the `dev` branch will always be ahead of `main`.\n\n[Draft PRs](https://github.blog/2019-02-14-introducing-draft-pull-requests/) are\nencouraged, especially for first-time contributors or contributors working on\ncomplex tasks (e.g., Google Summer of Code contributors). 
Please use these to\nshare ideas and implementations with the LIT team, in addition to issues.\n\nPrior to sending your PR or marking a Draft PR as \"Ready for Review\", please run\nthe Python and TypeScript linters on your code to ensure compliance with\nGoogle's [Python](https://google.github.io/styleguide/pyguide.html) and\n[TypeScript](https://google.github.io/styleguide/tsguide.html) Style Guides.\n\n```sh\n# Run Pylint on your code using the following command from the root of this repo\n(cd lit_nlp; pylint)\n\n# Run ESLint on your code using the following command from the root of this repo\n(cd lit_nlp; yarn lint)\n```\n\n## Citing LIT\n\nIf you use LIT as part of your work, please cite the\n[EMNLP paper](https://arxiv.org/abs/2008.05122) or the\n[Sequence Salience paper](https://arxiv.org/abs/2404.07498).\n\n```BibTeX\n@misc{tenney2020language,\n    title={The Language Interpretability Tool: Extensible, Interactive Visualizations and Analysis for {NLP} Models},\n    author={Ian Tenney and James Wexler and Jasmijn Bastings and Tolga Bolukbasi and Andy Coenen and Sebastian Gehrmann and Ellen Jiang and Mahima Pushkarna and Carey Radebaugh and Emily Reif and Ann Yuan},\n    booktitle = \"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations\",\n    year = \"2020\",\n    publisher = \"Association for Computational Linguistics\",\n    pages = \"107--118\",\n    url = \"https://www.aclweb.org/anthology/2020.emnlp-demos.15\",\n}\n```\n\n```BibTeX\n@article{tenney2024interactive,\n  title={Interactive prompt debugging with sequence salience},\n  author={Tenney, Ian and Mullins, Ryan and Du, Bin and Pandya, Shree and Kahng, Minsuk and Dixon, Lucas},\n  journal={arXiv preprint arXiv:2404.07498},\n  year={2024}\n}\n```\n\n## Disclaimer\n\nThis is not an official Google product.\n\nLIT is a research project and under active development by a small team. 
We want\nLIT to be an open platform, not a walled garden, and would love your suggestions\nand feedback \u0026ndash; please\n[report any bugs](https://github.com/pair-code/lit/issues) and reach out on the\n[Discussions page](https://github.com/PAIR-code/lit/discussions/landing).\n\n","funding_links":[],"categories":["Table of Contents","Datasets-or-Benchmark","Tools"],"sub_categories":["LLM Interpretability Tools","General","Interpretability/Explicability"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/pair-code.github.io%2Flit%2F","html_url":"https://awesome.ecosyste.ms/projects/pair-code.github.io%2Flit%2F","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/pair-code.github.io%2Flit%2F/lists"}