{"id":13564487,"url":"https://github.com/facebookresearch/vizseq","last_synced_at":"2025-05-15T07:03:28.560Z","repository":{"id":35777147,"uuid":"204480061","full_name":"facebookresearch/vizseq","owner":"facebookresearch","description":"An Analysis Toolkit for Natural Language Generation (Translation, Captioning, Summarization, etc.)","archived":false,"fork":false,"pushed_at":"2025-03-07T17:59:22.000Z","size":19895,"stargazers_count":445,"open_issues_count":7,"forks_count":58,"subscribers_count":15,"default_branch":"main","last_synced_at":"2025-04-14T10:42:42.969Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/1909.05424","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/facebookresearch.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-08-26T13:19:38.000Z","updated_at":"2025-04-12T21:10:13.000Z","dependencies_parsed_at":"2025-03-16T22:20:41.457Z","dependency_job_id":null,"html_url":"https://github.com/facebookresearch/vizseq","commit_stats":{"total_commits":75,"total_committers":3,"mean_commits":25.0,"dds":0.48,"last_synced_commit":"65cc7bd6949b6ce03e0efc4c9e8bf6f87c5dae6d"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/facebookresearch%2Fvizseq","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/facebookresearch%2Fvizseq/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/facebookresearch%
2Fvizseq/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/facebookresearch%2Fvizseq/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/facebookresearch","download_url":"https://codeload.github.com/facebookresearch/vizseq/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254291961,"owners_count":22046424,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T13:01:32.073Z","updated_at":"2025-05-15T07:03:28.521Z","avatar_url":"https://github.com/facebookresearch.png","language":"Python","readme":"[![PyPI](https://img.shields.io/pypi/v/vizseq?style=flat-square)](https://pypi.org/project/vizseq/)\n[![CircleCI](https://img.shields.io/circleci/build/github/facebookresearch/vizseq?style=flat-square)](https://circleci.com/gh/facebookresearch/vizseq)\n![PyPI - License](https://img.shields.io/pypi/l/vizseq?style=flat-square)\n![PyPI - Python Version](https://img.shields.io/pypi/pyversions/vizseq?style=flat-square)\n\n# \u003cimg src=\"logo.png\" alt=\"VizSeq\" width=\"160\"\u003e\nVizSeq is a Python toolkit for visual analysis on text generation tasks like machine translation, summarization,\nimage captioning, speech translation and video description. 
It takes multi-modal sources,\ntext references as well as text predictions as inputs, and analyzes them visually\nin [Jupyter Notebook](https://facebookresearch.github.io/vizseq/docs/getting_started/ipynb_example) or a\nbuilt-in [Web App](https://facebookresearch.github.io/vizseq/docs/getting_started/web_app_example)\n(the former has [Fairseq integration](https://facebookresearch.github.io/vizseq/docs/getting_started/fairseq_example)).\nVizSeq also provides a collection of [multi-process scorers](https://facebookresearch.github.io/vizseq/docs/features/metrics) as\na normal Python package.\n\n[[Paper]](https://arxiv.org/pdf/1909.05424.pdf)\n[[Documentation]](https://facebookresearch.github.io/vizseq)\n[[Blog]](https://ai.facebook.com/blog/vizseq-a-visual-analysis-toolkit-for-accelerating-text-generation-research)\n\n\u003cp align=\"center\"\u003e\n\u003cimg src=\"overview.png\" alt=\"VizSeq Overview\" width=\"480\"\u003e\n\u003cimg src=\"teaser.gif\" alt=\"VizSeq Teaser\" width=\"480\"\u003e\n\u003c/p\u003e\n\n### Task Coverage\n\n| Source | Example Tasks |\n| :--- | :--- |\n| Text | Machine translation, text summarization, dialog generation, grammatical error correction, open-domain question answering |\n| Image | Image captioning, image question answering, optical character recognition                                                |\n| Audio | Speech recognition, speech translation                                                                                   |\n| Video | Video description                                                                                                        |\n| Multimodal | Multimodal machine translation\n\n### Metric Coverage\n**Accelerated with multi-processing/multi-threading.**\n\n| Type | Metrics |\n| :--- | :--- |\n| N-gram-based | BLEU ([Papineni et al., 2002](https://www.aclweb.org/anthology/P02-1040)), NIST ([Doddington, 2002](http://www.mt-archive.info/HLT-2002-Doddington.pdf)), METEOR ([Banerjee et al., 
2005](https://www.aclweb.org/anthology/W05-0909)), TER ([Snover et al., 2006](http://mt-archive.info/AMTA-2006-Snover.pdf)), RIBES ([Isozaki et al., 2010](https://www.aclweb.org/anthology/D10-1092)), chrF ([Popović et al., 2015](https://www.aclweb.org/anthology/W15-3049)), GLEU ([Wu et al., 2016](https://arxiv.org/pdf/1609.08144.pdf)), ROUGE ([Lin, 2004](https://www.aclweb.org/anthology/W04-1013)), CIDEr ([Vedantam et al., 2015](https://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Vedantam_CIDEr_Consensus-Based_Image_2015_CVPR_paper.pdf)), WER |\n| Embedding-based | LASER ([Artetxe and Schwenk, 2018](https://arxiv.org/pdf/1812.10464.pdf)), BERTScore ([Zhang et al., 2019](https://arxiv.org/pdf/1904.09675.pdf)) |\n\n\n## Getting Started\n\n### Installation\nVizSeq requires **Python 3.6+** and currently runs on **Unix/Linux** and **macOS/OS X**. It will support **Windows** as well in the future.\n\nYou can install VizSeq from PyPI repository:\n```bash\n$ pip install vizseq\n```\n\nOr install it from source:\n```bash\n$ git clone https://github.com/facebookresearch/vizseq\n$ cd vizseq\n$ pip install -e .\n```\n\n### [Documentation](https://facebookresearch.github.io/vizseq)\n\n### Jupyter Notebook Examples\n- [Basic example](https://facebookresearch.github.io/vizseq/docs/getting_started/ipynb_example)\n- [Multimodal Machine Translation](examples/multimodal_machine_translation.ipynb)\n- [Multilingual Machine Translation](examples/multilingual_machine_translation.ipynb)\n- [Speech Translation](examples/speech_translation.ipynb)\n\n### [Fairseq integration](https://facebookresearch.github.io/vizseq/docs/getting_started/fairseq_example)\n\n### [Web App Example](https://facebookresearch.github.io/vizseq/docs/getting_started/web_app_example)\nDownload example data:\n```bash\n$ git clone https://github.com/facebookresearch/vizseq\n$ cd vizseq\n$ bash get_example_data.sh\n```\nLaunch the web server:\n```bash\n$ python -m vizseq.server --port 9001 --data-root 
./examples/data\n```\nAnd then, navigate to the following URL in your web browser:\n```text\nhttp://localhost:9001\n```\n\n## License\nVizSeq is licensed under [MIT](https://github.com/facebookresearch/vizseq/blob/main/LICENSE). See the [LICENSE](https://github.com/facebookresearch/vizseq/blob/main/LICENSE) file for details.\n\n## Citation\nPlease cite as:\n```\n@inproceedings{wang2019vizseq,\n  title = {VizSeq: A Visual Analysis Toolkit for Text Generation Tasks},\n  author = {Changhan Wang and Anirudh Jain and Danlu Chen and Jiatao Gu},\n  booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},\n  year = {2019},\n}\n```\n\n## Contact\nChanghan Wang ([changhan@fb.com](mailto:changhan@fb.com)), Jiatao Gu ([jgu@fb.com](mailto:jgu@fb.com))\n","funding_links":[],"categories":["Python","Text Data and NLP","Evaluation","Programming (learning)"],"sub_categories":["Developer's Tools"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffacebookresearch%2Fvizseq","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ffacebookresearch%2Fvizseq","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ffacebookresearch%2Fvizseq/lists"}