Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/amazon-science/long-short-term-transformer
[NeurIPS 2021 Spotlight] Official implementation of Long Short-Term Transformer for Online Action Detection
- Host: GitHub
- URL: https://github.com/amazon-science/long-short-term-transformer
- Owner: amazon-science
- License: apache-2.0
- Created: 2021-11-18T16:52:43.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2024-07-25T10:47:38.000Z (5 months ago)
- Last Synced: 2024-12-05T16:22:23.794Z (17 days ago)
- Topics: online-action-detection, video-analysis, video-transformer
- Language: Python
- Homepage:
- Size: 142 KB
- Stars: 129
- Watchers: 7
- Forks: 19
- Open Issues: 13
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
README
# Long Short-Term Transformer for Online Action Detection
## Introduction
This is a PyTorch implementation for our NeurIPS 2021 Spotlight paper "[`Long Short-Term Transformer for Online Action Detection`](https://arxiv.org/pdf/2107.03377.pdf)".
![network](demo/network.png?raw=true)
## Environment
- The code is developed with CUDA 10.2, ***Python >= 3.7.7***, and ***PyTorch >= 1.7.1***.
0. [Optional but recommended] Create a new conda environment.
```
conda create -n lstr python=3.7.7
```
And activate the environment.
```
conda activate lstr
```
1. Install the requirements.
```
pip install -r requirements.txt
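# Optionally verify the installation (the second value prints True on a
# machine with a working CUDA setup):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"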
```
## Data Preparation
#### Option 1: Prepare the features and targets yourself.
1. Download the [`THUMOS'14`](https://www.crcv.ucf.edu/THUMOS14/) and [`TVSeries`](https://homes.esat.kuleuven.be/psi-archive/rdegeest/TVSeries.html) datasets.
2. Extract feature representations for video frames (a format-only sketch of the expected output is given at the end of this option).
* For **ActivityNet** pretrained features, we use the [`ResNet-50`](https://arxiv.org/pdf/1512.03385.pdf) model for the RGB and optical flow inputs. We recommend using this [`checkpoint`](https://github.com/open-mmlab/mmaction2/blob/master/configs/recognition/tsn/README.md#activitynet-v13) in [`MMAction2`](https://github.com/open-mmlab/mmaction2).
* For **Kinetics** pretrained features, we use the [`ResNet-50`](https://arxiv.org/pdf/1512.03385.pdf) model for the RGB inputs. We recommend using this [`checkpoint`](https://github.com/open-mmlab/mmaction2/blob/master/configs/recognition/tsn/tsn_r50_320p_1x1x8_100e_kinetics400_rgb.py) in [`MMAction2`](https://github.com/open-mmlab/mmaction2). We use the [`BN-Inception`](https://arxiv.org/pdf/1502.03167.pdf) model for the optical flow inputs. We recommend using the model [`here`](https://drive.google.com/drive/folders/1Q8yf2u8YWkva-apAxW_9_TzvLGuWZaix?usp=sharing).
***Note:*** We compute the optical flow using [`DenseFlow`](https://github.com/xumingze0308/denseflow).
3. If you want to use our [dataloaders](src/rekognition_online_action_detection/datasets), please make sure to organize the files in the following structure:
* THUMOS'14 dataset:
```
$YOUR_PATH_TO_THUMOS_DATASET
├── rgb_kinetics_resnet50/
| ├── video_validation_0000051.npy (of size L x 2048)
│ ├── ...
├── flow_kinetics_bninception/
| ├── video_validation_0000051.npy (of size L x 1024)
| ├── ...
├── target_perframe/
| ├── video_validation_0000051.npy (of size L x 22)
| ├── ...
```
* TVSeries dataset:
```
$YOUR_PATH_TO_TVSERIES_DATASET
├── rgb_kinetics_resnet50/
| ├── Breaking_Bad_ep1.npy (of size L x 2048)
│ ├── ...
├── flow_kinetics_bninception/
| ├── Breaking_Bad_ep1.npy (of size L x 1024)
| ├── ...
├── target_perframe/
| ├── Breaking_Bad_ep1.npy (of size L x 31)
| ├── ...
```
4. Create softlinks of datasets:
```
cd long-short-term-transformer
ln -s $YOUR_PATH_TO_THUMOS_DATASET data/THUMOS
ln -s $YOUR_PATH_TO_TVSERIES_DATASET data/TVSeries
```
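If you prepare the features yourself, the sketch below is a format-only illustration of step 2 above: it runs a plain torchvision ResNet-50 (ImageNet weights) over pre-extracted RGB frames and saves one `L x 2048` array per video. It is *not* the ActivityNet/Kinetics-pretrained MMAction2 backbone used in the paper and it does not apply the paper's frame sampling, so treat it only as a reference for the expected file layout; note that the feature files and the `target_perframe` files share the same number of rows `L` in the directory layout above.
```
# Format-only sketch: per-frame RGB features with a torchvision ResNet-50.
# Paths and weights here are illustrative, not the paper's feature extractor.
import numpy as np
import torch
import torchvision.transforms as T
from PIL import Image
from pathlib import Path
from torchvision.models import resnet50

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone = resnet50(pretrained=True)       # ImageNet weights, for illustration only
backbone.fc = torch.nn.Identity()          # keep the 2048-d pooled features
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_rgb_features(frame_dir):
    """Return an L x 2048 array with one row per extracted frame."""
    feats = []
    for frame_path in sorted(Path(frame_dir).glob("*.jpg")):
        x = preprocess(Image.open(frame_path).convert("RGB")).unsqueeze(0).to(device)
        feats.append(backbone(x).squeeze(0).cpu().numpy())
    return np.stack(feats)

# Example (hypothetical paths): one .npy file per video, named after the video.
# np.save("data/THUMOS/rgb_kinetics_resnet50/video_validation_0000051.npy",
#         extract_rgb_features("frames/video_validation_0000051"))
```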
#### Option 2: Directly download the pre-extracted features and targets from [`TeSTra`](https://github.com/zhaoyue-zephyrus/TeSTra).
If you want to skip the data preprocessing and quickly try LSTR, please refer to [`TeSTra`](https://github.com/zhaoyue-zephyrus/TeSTra). The features and targets there ***exactly*** follow LSTR's data structure and should be able to reproduce LSTR's performance. However, if you have any questions about the processing of these features and targets, please contact the authors of [`TeSTra`](https://github.com/zhaoyue-zephyrus/TeSTra) directly.
## Training
Training LSTR with 512 seconds of long-term memory and 8 seconds of short-term memory requires less than 3 GB of GPU memory. The commands are as follows.
```
cd long-short-term-transformer
# Training from scratch
python tools/train_net.py --config_file $PATH_TO_CONFIG_FILE --gpu $CUDA_VISIBLE_DEVICES
# Finetuning from a pretrained model
python tools/train_net.py --config_file $PATH_TO_CONFIG_FILE --gpu $CUDA_VISIBLE_DEVICES \
MODEL.CHECKPOINT $PATH_TO_CHECKPOINT
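# Concrete example invocation (hypothetical config path; use a real YAML file
# from the repository's configs/ directory and your own GPU id):
# python tools/train_net.py --config_file configs/THUMOS/LSTR/lstr_long_512_work_8_kinetics_1x.yaml --gpu 0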
```
## Online Inference
There are *three kinds* of evaluation methods in our code.
* First, you can use the config `SOLVER.PHASES "['train', 'test']"` during training. This process divides each test video into non-overlapping samples and makes predictions on all the frames in the short-term memory as if they were the latest frame. Note that this evaluation result is ***not*** the final performance, since (1) for most of the frames, their short-term memory is not fully utilized and (2) for simplicity, samples at the boundaries are mostly ignored.
```
cd long-short-term-transformer
# Inference along with training
python tools/train_net.py --config_file $PATH_TO_CONFIG_FILE --gpu $CUDA_VISIBLE_DEVICES \
SOLVER.PHASES "['train', 'test']"
```
* Second, you could run the online inference in `batch mode`. This process evaluates all video frames by considering each of them as the latest frame and filling the long- and short-term memories by tracing back in time. Note that this evaluation result matches the numbers reported in the paper, but `batch mode` cannot be further accelerated as described in the paper's Sec. 3.6. On the other hand, this mode runs faster when you use a large batch size, and we recommend using it for performance benchmarking.
```
cd long-short-term-transformer
# Online inference in batch mode
python tools/test_net.py --config_file $PATH_TO_CONFIG_FILE --gpu $CUDA_VISIBLE_DEVICES \
MODEL.CHECKPOINT $PATH_TO_CHECKPOINT MODEL.LSTR.INFERENCE_MODE batch
```
* Third, you could run the online inference in `stream mode`. This process tests frame by frame along the entire video, from the beginning to the end. Note that this evaluation result matches both LSTR's performance and the runtime reported in the paper. It processes the entire video the way LSTR would be applied in real-world scenarios. However, it currently only supports testing one video at a time.
```
cd long-short-term-transformer
# Online inference in stream mode
python tools/test_net.py --config_file $PATH_TO_CONFIG_FILE --gpu $CUDA_VISIBLE_DEVICES \
MODEL.CHECKPOINT $PATH_TO_CHECKPOINT MODEL.LSTR.INFERENCE_MODE stream DATA.TEST_SESSION_SET "['$VIDEO_NAME']"
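# Note: $VIDEO_NAME is the base name (without the .npy extension) of one test
# video's feature/target files, e.g. an entry under target_perframe/ in the
# data layout above.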
```
## Evaluation
Evaluate LSTR's performance for online action detection using per-frame mAP or mcAP.
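As a rough illustration of what per-frame mAP measures, here is a minimal sketch, assuming the prediction scores and one-hot targets are available as `L x K` NumPy arrays (L frames, K classes, with class 0 treated as background); this is not the repository's `eval_perframe` implementation.
```
# Minimal per-frame mAP sketch (not the repository's eval_perframe script).
# Assumes scores and targets are L x K arrays with class 0 as background.
import numpy as np
from sklearn.metrics import average_precision_score

def perframe_map(scores, targets):
    """Average the per-class AP over all action classes present in the targets."""
    aps = []
    for k in range(1, scores.shape[1]):      # skip the background class
        if targets[:, k].sum() > 0:          # only score classes that actually occur
            aps.append(average_precision_score(targets[:, k], scores[:, k]))
    return float(np.mean(aps))
```
The repository's own scripts compute the reported metrics directly from a saved prediction-scores file: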
```
cd long-short-term-transformer
python tools/eval/eval_perframe --pred_scores_file $PRED_SCORES_FILE
```
Evaluate LSTR's performance at different action stages by evaluating each decile (ten-percent interval) of the video frames separately.
```
cd long-short-term-transformer
python tools/eval/eval_perstage --pred_scores_file $PRED_SCORES_FILE
```
## Citations
If you are using the data/code/model provided here in a publication, please cite our paper:
```
@inproceedings{xu2021long,
    title={Long Short-Term Transformer for Online Action Detection},
    author={Xu, Mingze and Xiong, Yuanjun and Chen, Hao and Li, Xinyu and Xia, Wei and Tu, Zhuowen and Soatto, Stefano},
    booktitle={Conference on Neural Information Processing Systems (NeurIPS)},
    year={2021}
}
```
## Security
See [CONTRIBUTING](CONTRIBUTING.md#security-issue-notifications) for more information.
## License
This project is licensed under the Apache-2.0 License.