Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/open-mmlab/mmaction
An open-source toolbox for action understanding based on PyTorch
action-detection action-recognition pytorch spatial-temporal-action-detection temporal-action-detection temporal-action-localization video-understanding
- Host: GitHub
- URL: https://github.com/open-mmlab/mmaction
- Owner: open-mmlab
- License: apache-2.0
- Created: 2019-06-13T14:26:23.000Z (over 5 years ago)
- Default Branch: master
- Last Pushed: 2022-04-08T03:44:50.000Z (over 2 years ago)
- Last Synced: 2024-08-01T03:45:46.944Z (3 months ago)
- Topics: action-detection, action-recognition, pytorch, spatial-temporal-action-detection, temporal-action-detection, temporal-action-localization, video-understanding
- Language: Python
- Homepage: https://open-mmlab.github.io/
- Size: 3.95 MB
- Stars: 1,851
- Watchers: 40
- Forks: 352
- Open Issues: 56
- Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
Awesome Lists containing this project
- awesome-cv - mmaction - An open-source toolbox for action understanding based on PyTorch | (Repos / Tags: Object Classification `[ObjCls]`, Object Detection `[ObjDet]`, Object Segmentation `[ObjSeg]`, General Library `[GenLib]`, Text Reading / Object Character Recognition `[OCR]`, Action Recognition `[ActRec]`, Object Tracking `[ObjTrk]`, Data Augmentation `[DatAug]`, Simultaneous Localization and Mapping `[SLAM]`, Outlier/Anomaly/Novelty Detection `[NvlDet]`, Content-based Image Retrieval `[CBIR]`, Image Enhancement `[ImgEnh]`, Aesthetic Assessment `[AesAss]`, Explainable Artificial Intelligence `[XAI]`, Text-to-Image Generation `[TexImg]`, Pose Estimation `[PosEst]`, Video Matting `[VidMat]`, Eye Tracking `[EyeTrk]`)
README
# MMAction
## Introduction
MMAction is an open source toolbox for action understanding based on PyTorch.
It is a part of the [open-mmlab](https://github.com/open-mmlab) project developed by [Multimedia Laboratory, CUHK](http://mmlab.ie.cuhk.edu.hk/).

### Major Features
- MMAction is capable of dealing with all of the tasks below:
  - action recognition from trimmed videos
  - temporal action detection (also known as action localization) in untrimmed videos
  - spatial-temporal action detection in untrimmed videos
- Support for various datasets
  Video datasets have been emerging in recent years and have greatly fostered the development of this field.
  MMAction provides tools to deal with various datasets.
- Support for multiple action understanding frameworks
  MMAction implements popular frameworks for action understanding:
  - For action recognition, various algorithms are implemented, including TSN, I3D, SlowFast, R(2+1)D, and CSN.
  - For temporal action detection, we implement SSN.
  - For spatial-temporal atomic action detection, a Fast-RCNN baseline is provided.
- Modular design
  The tasks in human action understanding share some common aspects, such as backbones and long-term and short-term sampling schemes.
  Also, tasks can benefit from each other. For example, a better backbone for action recognition will bring a performance gain for action detection.
  Modular design enables us to view action understanding from a more integrated perspective.
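To give a flavor of this modular design, here is a minimal, hypothetical sketch of an mmcv-style configuration in which the recognizer, backbone, and head are declared as separate, swappable components. The field names (`type`, `backbone`, `cls_head`, and their options) are illustrative assumptions, not the exact MMAction config schema; see the configs in the repository for the real format.

```python
# Hypothetical mmcv-style config sketch: each component is declared by
# name and built from a registry, so swapping the backbone only touches
# this dict. Field names are illustrative, not the actual MMAction schema.
model = dict(
    type='TSN2D',                      # recognizer framework (e.g. TSN)
    backbone=dict(
        type='ResNet',                 # shared backbone; swap here for a
        depth=50,                      # deeper ResNet without touching
        pretrained='torchvision://resnet50',  # the rest of the model
    ),
    cls_head=dict(
        type='ClsHead',                # task-specific classification head
        num_classes=400,               # e.g. Kinetics-400
        in_channels=2048,
        dropout_ratio=0.4,
    ),
)
```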
## License

The project is released under the [Apache 2.0 license](https://github.com/open-mmlab/mmaction/blob/master/LICENSE).

## Updates
[OmniSource](https://arxiv.org/abs/2003.13042) Model Release (22/08/2020)
- We release several models from our work [OmniSource](https://arxiv.org/abs/2003.13042). These models are jointly trained on Kinetics-400 and OmniSourced web data. They achieve good performance (Top-1 accuracy on Kinetics-400 val: **75.7%** for 3-segment TSN and **80.4%** for SlowOnly), and the learned representations transfer well to other tasks.

v0.2.0 (15/03/2020)
- We build a diversified model zoo for action recognition, which includes popular algorithms (TSN, I3D, SlowFast, R(2+1)D, CSN). The performance is aligned with or better than the original papers.

v0.1.0 (19/06/2019)
- MMAction is online!

## Model zoo
Results and reference models are available in the [model zoo](https://github.com/open-mmlab/mmaction/blob/master/MODEL_ZOO.md).

## Installation
Please refer to [INSTALL.md](https://github.com/open-mmlab/mmaction/blob/master/INSTALL.md) for installation.

Update: for Docker installation, please refer to [DOCKER.md](https://github.com/open-mmlab/mmaction/blob/master/DOCKER.md) for using Docker with this project.
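Since MMAction is built on PyTorch, it can help to verify that a CUDA-enabled PyTorch build is available before installing. This small sanity check uses only standard PyTorch calls:

```python
# Quick sanity check of the PyTorch environment before installing MMAction.
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
if torch.cuda.is_available():
    print(f"GPU device:      {torch.cuda.get_device_name(0)}")
```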
## Data preparation
Please refer to [DATASET.md](https://github.com/open-mmlab/mmaction/blob/master/DATASET.md) for general guidance on data preparation.
Detailed documents for the supported datasets are available in `data_tools/`.

## Get started
Please refer to [GETTING_STARTED.md](https://github.com/open-mmlab/mmaction/blob/master/GETTING_STARTED.md) for detailed examples and basic usage.
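As a taste of what getting started might look like, below is a hypothetical sketch of single-video inference. The `init_recognizer` / `inference_recognizer` names are modeled on the later MMAction2 API and are assumptions here, not the documented interface of this repository, and the config and checkpoint paths are illustrative; consult GETTING_STARTED.md for the actual usage.

```python
# Hypothetical inference sketch; the function names below are modeled on
# the later MMAction2 API and are NOT guaranteed to exist in this repo.
from mmaction.apis import init_recognizer, inference_recognizer  # assumed API

config_file = 'configs/tsn_r50_kinetics400.py'   # illustrative path
checkpoint_file = 'checkpoints/tsn_r50.pth'      # illustrative path

# Build the model from a config and load trained weights onto the GPU.
model = init_recognizer(config_file, checkpoint_file, device='cuda:0')

# Run recognition on a single video and print the predicted classes.
results = inference_recognizer(model, 'demo/demo.mp4')
print(results)
```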
## Contributing

We appreciate all contributions to improve MMAction.
Please refer to [CONTRIBUTING.md](https://github.com/open-mmlab/mmaction/blob/master/CONTRIBUTING.md) for the contributing guideline.

## Citation
If you use our codebase or models in your research, please cite this work.
We will release a technical report later.
```
@misc{mmaction2019,
  author       = {Yue Zhao and Yuanjun Xiong and Dahua Lin},
  title        = {MMAction},
  howpublished = {\url{https://github.com/open-mmlab/mmaction}},
  year         = {2019}
}
```

## Contact
If you have any questions, please file an issue or contact the author:
```
Yue Zhao: [email protected]
```