{"id":13585239,"url":"https://github.com/open-mmlab/mmaction2","last_synced_at":"2025-05-13T19:11:48.487Z","repository":{"id":37297390,"uuid":"278810244","full_name":"open-mmlab/mmaction2","owner":"open-mmlab","description":"OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark","archived":false,"fork":false,"pushed_at":"2024-08-14T07:45:53.000Z","size":71467,"stargazers_count":4567,"open_issues_count":308,"forks_count":1275,"subscribers_count":40,"default_branch":"main","last_synced_at":"2025-04-28T00:52:25.164Z","etag":null,"topics":["action-recognition","ava","benchmark","deep-learning","i3d","non-local","openmmlab","posec3d","pytorch","slowfast","spatial-temporal-action-detection","temporal-action-localization","tsm","tsn","uniformerv2","video-classification","video-understanding","x3d"],"latest_commit_sha":null,"homepage":"https://mmaction2.readthedocs.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/open-mmlab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":".github/CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2020-07-11T07:19:10.000Z","updated_at":"2025-04-27T09:08:55.000Z","dependencies_parsed_at":"2024-01-07T18:11:46.146Z","dependency_job_id":"4cfc8602-5291-4ac8-81f6-7c859bcf2202","html_url":"https://github.com/open-mmlab/mmaction2","commit_stats":{"total_commits":1623,"total_committers":90,"mean_commits":"18.033333333333335","dds":0.8471965495995071,"last_synced_commit":"4d6c93474730cad2f25e51109adcf96824efc7a3"},"previous_names":[],"tags_count":28,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmaction2","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmaction2/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmaction2/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/open-mmlab%2Fmmaction2/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/open-mmlab","download_url":"https://codeload.github.com/open-mmlab/mmaction2/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254010811,"owners_count":21998993,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["action-recognition","ava","benchmark","deep-learning","i3d","non-local","openmmlab","posec3d","pytorch","slowfast","spatial-temporal-action-detection","temporal-action-localization","tsm","tsn","uniformerv2","video-classification","video-understanding","x3d"],"created_at":"2024-08-01T15:04:49.441Z","updated_at":"2025-05-13T19:11:48.469Z","avatar_url":"https://github.com/open-mmlab.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/open-mmlab/mmaction2/raw/main/resources/mmaction2_logo.png\" width=\"600\"/\u003e\n  \u003cdiv\u003e\u0026nbsp;\u003c/div\u003e\n  \u003cdiv align=\"center\"\u003e\n    \u003cb\u003e\u003cfont size=\"5\"\u003eOpenMMLab website\u003c/font\u003e\u003c/b\u003e\n    \u003csup\u003e\n      \u003ca href=\"https://openmmlab.com\"\u003e\n        \u003ci\u003e\u003cfont size=\"4\"\u003eHOT\u003c/font\u003e\u003c/i\u003e\n      \u003c/a\u003e\n    \u003c/sup\u003e\n    \u0026nbsp;\u0026nbsp;\u0026nbsp;\u0026nbsp;\n    \u003cb\u003e\u003cfont size=\"5\"\u003eOpenMMLab platform\u003c/font\u003e\u003c/b\u003e\n    \u003csup\u003e\n      \u003ca href=\"https://platform.openmmlab.com\"\u003e\n        \u003ci\u003e\u003cfont size=\"4\"\u003eTRY IT OUT\u003c/font\u003e\u003c/i\u003e\n      \u003c/a\u003e\n    \u003c/sup\u003e\n  \u003c/div\u003e\n\n[![Documentation](https://readthedocs.org/projects/mmaction2/badge/?version=latest)](https://mmaction2.readthedocs.io/en/latest/)\n[![actions](https://github.com/open-mmlab/mmaction2/workflows/build/badge.svg)](https://github.com/open-mmlab/mmaction2/actions)\n[![codecov](https://codecov.io/gh/open-mmlab/mmaction2/branch/main/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmaction2)\n[![PyPI](https://img.shields.io/pypi/v/mmaction2)](https://pypi.org/project/mmaction2/)\n[![LICENSE](https://img.shields.io/github/license/open-mmlab/mmaction2.svg)](https://github.com/open-mmlab/mmaction2/blob/main/LICENSE)\n[![Average time to resolve an issue](https://isitmaintained.com/badge/resolution/open-mmlab/mmaction2.svg)](https://github.com/open-mmlab/mmaction2/issues)\n[![Percentage of issues still open](https://isitmaintained.com/badge/open/open-mmlab/mmaction2.svg)](https://github.com/open-mmlab/mmaction2/issues)\n\n[📘Documentation](https://mmaction2.readthedocs.io/en/latest/) |\n[🛠️Installation](https://mmaction2.readthedocs.io/en/latest/get_started/installation.html) |\n[👀Model Zoo](https://mmaction2.readthedocs.io/en/latest/modelzoo_statistics.html) |\n[🆕Update News](https://mmaction2.readthedocs.io/en/latest/notes/changelog.html) |\n[🚀Ongoing Projects](https://github.com/open-mmlab/mmaction2/projects) |\n[🤔Reporting Issues](https://github.com/open-mmlab/mmaction2/issues/new/choose)\n\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003ca href=\"https://openmmlab.medium.com/\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/219255827-67c1a27f-f8c5-46a9-811d-5e57448c61d1.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://discord.com/channels/1037617289144569886/1046608014234370059\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218347213-c080267f-cbb6-443e-8532-8e1ed9a58ea9.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://twitter.com/OpenMMLab\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346637-d30c8a0f-3eba-4699-8131-512fb06d46db.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://www.youtube.com/openmmlab\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346691-ceb2116a-465a-40af-8424-9f30d2348ca9.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://space.bilibili.com/1293512903\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/219026751-d7d14cce-a7c9-4e82-9942-8375fca65b99.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/25839884/218346358-56cc8e2f-a2b8-487f-9088-32480cceabcf.png\" width=\"3%\" alt=\"\" /\u003e\n  \u003ca href=\"https://www.zhihu.com/people/openmmlab\" style=\"text-decoration:none;\"\u003e\n    \u003cimg src=\"https://user-images.githubusercontent.com/25839884/219026120-ba71e48b-6e94-4bd4-b4e9-b7d175b5e362.png\" width=\"3%\" alt=\"\" /\u003e\u003c/a\u003e\n\u003c/div\u003e\n\nEnglish | [简体中文](/README_zh-CN.md)\n\n## 📄 Table of Contents\n\n- [📄 Table of Contents](#-table-of-contents)\n- [🥳 🚀 What's New](#--whats-new-)\n- [📖 Introduction](#-introduction-)\n- [🎁 Major Features](#-major-features-)\n- [🛠️ Installation](#️-installation-)\n- [👀 Model Zoo](#-model-zoo-)\n- [👨‍🏫 Get Started](#-get-started-)\n- [🎫 License](#-license-)\n- [🖊️ Citation](#️-citation-)\n- [🙌 Contributing](#-contributing-)\n- [🤝 Acknowledgement](#-acknowledgement-)\n- [🏗️ Projects in OpenMMLab](#️-projects-in-openmmlab-)\n\n## 🥳 🚀 What's New [🔝](#-table-of-contents)\n\n**The default branch has been switched to `main`(previous `1.x`) from `master`(current `0.x`), and we encourage users to migrate to the latest version with more supported models, stronger pre-training checkpoints and simpler coding. Please refer to [Migration Guide](https://mmaction2.readthedocs.io/en/latest/migration.html) for more details.**\n\n**Release (2023.10.12)**: v1.2.0 with the following new features:\n\n- Support VindLU multi-modality algorithm and the Training of ActionClip\n- Support lightweight model MobileOne TSN/TSM\n- Support video retrieval dataset MSVD\n- Support SlowOnly K700 feature to train localization models\n- Support Video and Audio Demos\n\n## 📖 Introduction [🔝](#-table-of-contents)\n\nMMAction2 is an open-source toolbox for video understanding based on PyTorch.\nIt is a part of the [OpenMMLab](http://openmmlab.com/) project.\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/open-mmlab/mmaction2/raw/main/resources/mmaction2_overview.gif\" width=\"380px\"\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/34324155/123989146-2ecae680-d9fb-11eb-916b-b9db5563a9e5.gif\" width=\"380px\"\u003e\n  \u003cp style=\"font-size:1.5vw;\"\u003e Action Recognition on Kinetics-400 (left) and Skeleton-based Action Recognition on NTU-RGB+D-120 (right)\u003c/p\u003e\n\u003c/div\u003e\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"https://user-images.githubusercontent.com/30782254/155710881-bb26863e-fcb4-458e-b0c4-33cd79f96901.gif\" width=\"580px\"/\u003e\u003cbr\u003e\n    \u003cp style=\"font-size:1.5vw;\"\u003eSkeleton-based Spatio-Temporal Action Detection and Action Recognition Results on Kinetics-400\u003c/p\u003e\n\u003c/div\u003e\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"https://github.com/open-mmlab/mmaction2/raw/main/resources/spatio-temporal-det.gif\" width=\"800px\"/\u003e\u003cbr\u003e\n    \u003cp style=\"font-size:1.5vw;\"\u003eSpatio-Temporal Action Detection Results on AVA-2.1\u003c/p\u003e\n\u003c/div\u003e\n\n## 🎁 Major Features [🔝](#-table-of-contents)\n\n- **Modular design**: We decompose a video understanding framework into different components. One can easily construct a customized video understanding framework by combining different modules.\n\n- **Support five major video understanding tasks**: MMAction2 implements various algorithms for multiple video understanding tasks, including action recognition, action localization, spatio-temporal action detection, skeleton-based action detection and video retrieval.\n\n- **Well tested and documented**: We provide detailed documentation and API reference, as well as unit tests.\n\n## 🛠️ Installation [🔝](#-table-of-contents)\n\nMMAction2 depends on [PyTorch](https://pytorch.org/), [MMCV](https://github.com/open-mmlab/mmcv), [MMEngine](https://github.com/open-mmlab/mmengine), [MMDetection](https://github.com/open-mmlab/mmdetection) (optional) and [MMPose](https://github.com/open-mmlab/mmpose) (optional).\n\nPlease refer to [install.md](https://mmaction2.readthedocs.io/en/latest/get_started/installation.html) for detailed instructions.\n\n\u003cdetails close\u003e\n\u003csummary\u003eQuick instructions\u003c/summary\u003e\n\n```shell\nconda create --name openmmlab python=3.8 -y\nconda activate openmmlab\nconda install pytorch torchvision -c pytorch  # This command will automatically install the latest version PyTorch and cudatoolkit, please check whether they match your environment.\npip install -U openmim\nmim install mmengine\nmim install mmcv\nmim install mmdet  # optional\nmim install mmpose  # optional\ngit clone https://github.com/open-mmlab/mmaction2.git\ncd mmaction2\npip install -v -e .\n```\n\n\u003c/details\u003e\n\n## 👀 Model Zoo [🔝](#-table-of-contents)\n\nResults and models are available in the [model zoo](https://mmaction2.readthedocs.io/en/latest/model_zoo/modelzoo.html).\n\n\u003cdetails close\u003e\n\n\u003csummary\u003eSupported model\u003c/summary\u003e\n\n\u003ctable style=\"margin-left:auto;margin-right:auto;font-size:1.3vw;padding:3px 5px;text-align:center;vertical-align:center;\"\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"5\" style=\"font-weight:bold;\"\u003eAction Recognition\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/c3d/README.md\"\u003eC3D\u003c/a\u003e (CVPR'2014)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/tsn/README.md\"\u003eTSN\u003c/a\u003e (ECCV'2016)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/i3d/README.md\"\u003eI3D\u003c/a\u003e (CVPR'2017)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/c2d/README.md\"\u003eC2D\u003c/a\u003e (CVPR'2018)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/i3d/README.md\"\u003eI3D Non-Local\u003c/a\u003e (CVPR'2018)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/r2plus1d/README.md\"\u003eR(2+1)D\u003c/a\u003e (CVPR'2018)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/trn/README.md\"\u003eTRN\u003c/a\u003e (ECCV'2018)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/tsm/README.md\"\u003eTSM\u003c/a\u003e (ICCV'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/tsm/README.md\"\u003eTSM Non-Local\u003c/a\u003e (ICCV'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/slowonly/README.md\"\u003eSlowOnly\u003c/a\u003e (ICCV'2019)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/slowfast/README.md\"\u003eSlowFast\u003c/a\u003e (ICCV'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/csn/README.md\"\u003eCSN\u003c/a\u003e (ICCV'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/tin/README.md\"\u003eTIN\u003c/a\u003e (AAAI'2020)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/tpn/README.md\"\u003eTPN\u003c/a\u003e (CVPR'2020)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/x3d/README.md\"\u003eX3D\u003c/a\u003e (CVPR'2020)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition_audio/resnet/README.md\"\u003eMultiModality: Audio\u003c/a\u003e (ArXiv'2020)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/tanet/README.md\"\u003eTANet\u003c/a\u003e (ArXiv'2020)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/timesformer/README.md\"\u003eTimeSformer\u003c/a\u003e (ICML'2021)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/projects/actionclip/README.md\"\u003eActionCLIP\u003c/a\u003e (ArXiv'2021)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/swin/README.md\"\u003eVideoSwin\u003c/a\u003e (CVPR'2022)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/videomae/README.md\"\u003eVideoMAE\u003c/a\u003e (NeurIPS'2022)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/mvit/README.md\"\u003eMViT V2\u003c/a\u003e (CVPR'2022)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/uniformer/README.md\"\u003eUniFormer V1\u003c/a\u003e (ICLR'2022)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/uniformerv2/README.md\"\u003eUniFormer V2\u003c/a\u003e (Arxiv'2022)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/videomaev2/README.md\"\u003eVideoMAE V2\u003c/a\u003e (CVPR'2023)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"5\" style=\"font-weight:bold;\"\u003eAction Localization\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/localization/bsn/README.md\"\u003eBSN\u003c/a\u003e (ECCV'2018)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/localization/bmn/README.md\"\u003eBMN\u003c/a\u003e (ICCV'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/localization/tcanet/README.md\"\u003eTCANet\u003c/a\u003e (CVPR'2021)\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"5\" style=\"font-weight:bold;\"\u003eSpatio-Temporal Action Detection\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/detection/acrn/README.md\"\u003eACRN\u003c/a\u003e (ECCV'2018)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/detection/slowonly/README.md\"\u003eSlowOnly+Fast R-CNN\u003c/a\u003e (ICCV'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/detection/slowfast/README.md\"\u003eSlowFast+Fast R-CNN\u003c/a\u003e (ICCV'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/detection/lfb/README.md\"\u003eLFB\u003c/a\u003e (CVPR'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/recognition/videomae/README.md\"\u003eVideoMAE\u003c/a\u003e (NeurIPS'2022)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"5\" style=\"font-weight:bold;\"\u003eSkeleton-based Action Recognition\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/skeleton/stgcn/README.md\"\u003eST-GCN\u003c/a\u003e (AAAI'2018)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/skeleton/2s-agcn/README.md\"\u003e2s-AGCN\u003c/a\u003e (CVPR'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/skeleton/posec3d/README.md\"\u003ePoseC3D\u003c/a\u003e (CVPR'2022)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/skeleton/stgcnpp/README.md\"\u003eSTGCN++\u003c/a\u003e (ArXiv'2022)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/projects/ctrgcn/README.md\"\u003eCTRGCN\u003c/a\u003e (CVPR'2021)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/projects/msg3d/README.md\"\u003eMSG3D\u003c/a\u003e (CVPR'2020)\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"5\" style=\"font-weight:bold;\"\u003eVideo Retrieval\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/configs/retrieval/clip4clip/README.md\"\u003eCLIP4Clip\u003c/a\u003e (ArXiv'2022)\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n\n\u003c/table\u003e\n\n\u003c/details\u003e\n\n\u003cdetails close\u003e\n\n\u003csummary\u003eSupported dataset\u003c/summary\u003e\n\n\u003ctable style=\"margin-left:auto;margin-right:auto;font-size:1.3vw;padding:3px 5px;text-align:center;vertical-align:center;\"\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"4\" style=\"font-weight:bold;\"\u003eAction Recognition\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/hmdb51/README.md\"\u003eHMDB51\u003c/a\u003e (\u003ca href=\"https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database/\"\u003eHomepage\u003c/a\u003e) (ICCV'2011)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/ucf101/README.md\"\u003eUCF101\u003c/a\u003e (\u003ca href=\"https://www.crcv.ucf.edu/research/data-sets/ucf101/\"\u003eHomepage\u003c/a\u003e) (CRCV-IR-12-01)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/activitynet/README.md\"\u003eActivityNet\u003c/a\u003e (\u003ca href=\"http://activity-net.org/\"\u003eHomepage\u003c/a\u003e) (CVPR'2015)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/kinetics/README.md\"\u003eKinetics-[400/600/700]\u003c/a\u003e (\u003ca href=\"https://deepmind.com/research/open-source/kinetics/\"\u003eHomepage\u003c/a\u003e) (CVPR'2017)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/sthv1/README.md\"\u003eSthV1\u003c/a\u003e  (ICCV'2017)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/sthv2/README.md\"\u003eSthV2\u003c/a\u003e (\u003ca href=\"https://developer.qualcomm.com/software/ai-datasets/something-something\"\u003eHomepage\u003c/a\u003e) (ICCV'2017)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/diving48/README.md\"\u003eDiving48\u003c/a\u003e (\u003ca href=\"http://www.svcl.ucsd.edu/projects/resound/dataset.html\"\u003eHomepage\u003c/a\u003e) (ECCV'2018)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/jester/README.md\"\u003eJester\u003c/a\u003e (\u003ca href=\"https://developer.qualcomm.com/software/ai-datasets/jester\"\u003eHomepage\u003c/a\u003e) (ICCV'2019)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/mit/README.md\"\u003eMoments in Time\u003c/a\u003e (\u003ca href=\"http://moments.csail.mit.edu/\"\u003eHomepage\u003c/a\u003e) (TPAMI'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/mmit/README.md\"\u003eMulti-Moments in Time\u003c/a\u003e (\u003ca href=\"http://moments.csail.mit.edu/challenge_iccv_2019.html\"\u003eHomepage\u003c/a\u003e) (ArXiv'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/hvu/README.md\"\u003eHVU\u003c/a\u003e (\u003ca href=\"https://github.com/holistic-video-understanding/HVU-Dataset\"\u003eHomepage\u003c/a\u003e) (ECCV'2020)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/omnisource/README.md\"\u003eOmniSource\u003c/a\u003e (\u003ca href=\"https://kennymckormick.github.io/omnisource/\"\u003eHomepage\u003c/a\u003e) (ECCV'2020)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/gym/README.md\"\u003eFineGYM\u003c/a\u003e (\u003ca href=\"https://sdolivia.github.io/FineGym/\"\u003eHomepage\u003c/a\u003e) (CVPR'2020)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/kinetics710/README.md\"\u003eKinetics-710\u003c/a\u003e (\u003ca href=\"https://arxiv.org/pdf/2211.09552.pdf\"\u003eHomepage\u003c/a\u003e) (Arxiv'2022)\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"4\" style=\"font-weight:bold;\"\u003eAction Localization\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/thumos14/README.md\"\u003eTHUMOS14\u003c/a\u003e (\u003ca href=\"https://www.crcv.ucf.edu/THUMOS14/download.html\"\u003eHomepage\u003c/a\u003e) (THUMOS Challenge 2014)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/activitynet/README.md\"\u003eActivityNet\u003c/a\u003e (\u003ca href=\"http://activity-net.org/\"\u003eHomepage\u003c/a\u003e) (CVPR'2015)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/hacs/README.md\"\u003eHACS\u003c/a\u003e (\u003ca href=\"https://github.com/hangzhaomit/HACS-dataset\"\u003eHomepage\u003c/a\u003e) (ICCV'2019)\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"4\" style=\"font-weight:bold;\"\u003eSpatio-Temporal Action Detection\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/ucf101_24/README.md\"\u003eUCF101-24*\u003c/a\u003e (\u003ca href=\"http://www.thumos.info/download.html\"\u003eHomepage\u003c/a\u003e) (CRCV-IR-12-01)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/jhmdb/README.md\"\u003eJHMDB*\u003c/a\u003e (\u003ca href=\"http://jhmdb.is.tue.mpg.de/\"\u003eHomepage\u003c/a\u003e) (ICCV'2015)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/ava/README.md\"\u003eAVA\u003c/a\u003e (\u003ca href=\"https://research.google.com/ava/index.html\"\u003eHomepage\u003c/a\u003e) (CVPR'2018)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/ava_kinetics/README.md\"\u003eAVA-Kinetics\u003c/a\u003e (\u003ca href=\"https://research.google.com/ava/index.html\"\u003eHomepage\u003c/a\u003e) (Arxiv'2020)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/multisports/README.md\"\u003eMultiSports\u003c/a\u003e (\u003ca href=\"https://deeperaction.github.io/datasets/multisports.html\"\u003eHomepage\u003c/a\u003e) (ICCV'2021)\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"4\" style=\"font-weight:bold;\"\u003eSkeleton-based Action Recognition\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/skeleton/README.md\"\u003ePoseC3D-FineGYM\u003c/a\u003e (\u003ca href=\"https://kennymckormick.github.io/posec3d/\"\u003eHomepage\u003c/a\u003e) (ArXiv'2021)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/skeleton/README.md\"\u003ePoseC3D-NTURGB+D\u003c/a\u003e (\u003ca href=\"https://kennymckormick.github.io/posec3d/\"\u003eHomepage\u003c/a\u003e) (ArXiv'2021)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/skeleton/README.md\"\u003ePoseC3D-UCF101\u003c/a\u003e (\u003ca href=\"https://kennymckormick.github.io/posec3d/\"\u003eHomepage\u003c/a\u003e) (ArXiv'2021)\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/skeleton/README.md\"\u003ePoseC3D-HMDB51\u003c/a\u003e (\u003ca href=\"https://kennymckormick.github.io/posec3d/\"\u003eHomepage\u003c/a\u003e) (ArXiv'2021)\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd colspan=\"4\" style=\"font-weight:bold;\"\u003eVideo Retrieval\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"https://github.com/open-mmlab/mmaction2/blob/main/tools/data/video_retrieval/README.md\"\u003eMSRVTT\u003c/a\u003e (\u003ca href=\"https://www.microsoft.com/en-us/research/publication/msr-vtt-a-large-video-description-dataset-for-bridging-video-and-language/\"\u003eHomepage\u003c/a\u003e) (CVPR'2016)\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n\n\u003c/table\u003e\n\n\u003c/details\u003e\n\n## 👨‍🏫 Get Started [🔝](#-table-of-contents)\n\nFor tutorials, we provide the following user guides for basic usage:\n\n- [Migration from MMAction2 0.X](https://mmaction2.readthedocs.io/en/latest/migration.html)\n- [Learn about Configs](https://mmaction2.readthedocs.io/en/latest/user_guides/config.html)\n- [Prepare Datasets](https://mmaction2.readthedocs.io/en/latest/user_guides/prepare_dataset.html)\n- [Inference with Existing Models](https://mmaction2.readthedocs.io/en/latest/user_guides/inference.html)\n- [Training and Testing](https://mmaction2.readthedocs.io/en/latest/user_guides/train_test.html)\n\n\u003cdetails close\u003e\n\u003csummary\u003eResearch works built on MMAction2 by users from community\u003c/summary\u003e\n\n- Video Swin Transformer. [\\[paper\\]](https://arxiv.org/abs/2106.13230)[\\[github\\]](https://github.com/SwinTransformer/Video-Swin-Transformer)\n- Evidential Deep Learning for Open Set Action Recognition, ICCV 2021 **Oral**. [\\[paper\\]](https://arxiv.org/abs/2107.10161)[\\[github\\]](https://github.com/Cogito2012/DEAR)\n- Rethinking Self-supervised Correspondence Learning: A Video Frame-level Similarity Perspective, ICCV 2021 **Oral**. [\\[paper\\]](https://arxiv.org/abs/2103.17263)[\\[github\\]](https://github.com/xvjiarui/VFS)\n\n\u003c/details\u003e\n\n## 🎫 License [🔝](#-table-of-contents)\n\nThis project is released under the [Apache 2.0 license](LICENSE).\n\n## 🖊️ Citation [🔝](#-table-of-contents)\n\nIf you find this project useful in your research, please consider cite:\n\n```BibTeX\n@misc{2020mmaction2,\n    title={OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark},\n    author={MMAction2 Contributors},\n    howpublished = {\\url{https://github.com/open-mmlab/mmaction2}},\n    year={2020}\n}\n```\n\n## 🙌 Contributing [🔝](#-table-of-contents)\n\nWe appreciate all contributions to improve MMAction2. Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmcv/blob/2.x/CONTRIBUTING.md) in MMCV for more details about the contributing guideline.\n\n## 🤝 Acknowledgement [🔝](#-table-of-contents)\n\nMMAction2 is an open-source project that is contributed by researchers and engineers from various colleges and companies.\nWe appreciate all the contributors who implement their methods or add new features and users who give valuable feedback.\nWe wish that the toolbox and benchmark could serve the growing research community by providing a flexible toolkit to reimplement existing methods and develop their new models.\n\n## 🏗️ Projects in OpenMMLab [🔝](#-table-of-contents)\n\n- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.\n- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.\n- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.\n- [MMEval](https://github.com/open-mmlab/mmeval): A unified evaluation library for multiple machine learning libraries.\n- [MMPreTrain](https://github.com/open-mmlab/mmpretrain): OpenMMLab pre-training toolbox and benchmark.\n- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.\n- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.\n- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.\n- [MMYOLO](https://github.com/open-mmlab/mmyolo): OpenMMLab YOLO series toolbox and benchmark.\n- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.\n- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.\n- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.\n- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.\n- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.\n- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.\n- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.\n- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.\n- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.\n- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.\n- [MMagic](https://github.com/open-mmlab/mmagic): Open**MM**Lab **A**dvanced, **G**enerative and **I**ntelligent **C**reation toolbox.\n- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.\n- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.\n- [Playground](https://github.com/open-mmlab/playground): A central hub for gathering and showcasing amazing projects built upon OpenMMLab.\n","funding_links":[],"categories":["Topics","Python","Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","Pytorch \u0026 related libraries","Frameworks and Libraries","Action Recognition and Video Understanding"],"sub_categories":["Perception","CV｜计算机视觉:","CV:","Video Action Recognition","Video Representation"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2Fmmaction2","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopen-mmlab%2Fmmaction2","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopen-mmlab%2Fmmaction2/lists"}