PoseFlow: Efficient Online Pose Tracking (BMVC'18)
- Host: GitHub
- URL: https://github.com/yuliangxiu/poseflow
- Owner: YuliangXiu
- Created: 2018-04-14T18:28:08.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2024-05-03T19:40:12.000Z (12 months ago)
- Last Synced: 2025-03-29T02:05:55.900Z (27 days ago)
- Topics: deep-learning, machine-learning, opencv, pose-estimation, posetrack, posetracking, real-time, realtime, tracking, video
- Language: C++
- Homepage: https://xiuyuliang.cn
- Size: 9.3 MB
- Stars: 431
- Watchers: 19
- Forks: 90
- Open Issues: 9
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
README
# Pose Flow
Official implementation of [Pose Flow: Efficient Online Pose Tracking](https://arxiv.org/abs/1802.00977).
Results on the PoseTrack Challenge validation set:
1. Task2: Multi-Person Pose Estimation (mAP)
| Method | Head mAP | Shoulder mAP | Elbow mAP | Wrist mAP | Hip mAP | Knee mAP | Ankle mAP | Total mAP |
|:-------|:-----:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| Detect-and-Track(FAIR) | **67.5** | 70.2 | 62.0 | 51.7 | 60.7 | 58.7 | 49.8 | 60.6 |
| **AlphaPose** | 66.7 | **73.3** | **68.3** | **61.1** | **67.5** | **67.0** | **61.3** | **66.5** |

2. Task3: Pose Tracking (MOTA)
| Method | Head MOTA | Shoulder MOTA | Elbow MOTA | Wrist MOTA | Hip MOTA | Knee MOTA | Ankle MOTA | Total MOTA | Total MOTP| Speed(FPS) |
|:-------|:-----:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|
| Detect-and-Track(FAIR) | **61.7** | 65.5 | 57.3 | 45.7 | 54.3 | 53.1 | 45.7 | 55.2 | 61.5 | Unknown |
| **PoseFlow(DeepMatch)** | 59.8 | **67.0** | 59.8 | 51.6 | **60.0** | **58.4** | **50.5** | **58.3** | **67.8** | 8 |
| **PoseFlow(OrbMatch)** | 59.0 | 66.8 | **60.0** | **51.8** | 59.4 | **58.4** | 50.3 | 58.0 | 62.2 | 24 |
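For reference, the CLEAR MOT metrics above trade off three error types: MOTA penalizes missed keypoints (FN), false positives (FP), and identity switches (IDSW) relative to the number of ground-truth annotations (GT), while MOTP measures how precisely matched keypoints are localized. Higher is better for both:

    MOTA = 1 - (FN + FP + IDSW) / GT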
## Latest Features

- Dec 2018: PoseFlow (General Version) released! Supports any dataset and visualization of pose tracking results.
- Oct 2018: Support for generating correspondence files with ORB (OpenCV), 3X faster and with no need to compile the DeepMatching library.

## Requirements
- Python 2.7.13
- OpenCV 3.4.2.16
- OpenCV-contrib 3.4.2.16
- tqdm 4.19.8

## Installation
1. Download the PoseTrack dataset from [PoseTrack](https://posetrack.net/) to `AlphaPose/PoseFlow/posetrack_data/`.
2. (Optional) Use [DeepMatching](http://lear.inrialpes.fr/src/deepmatching/) to extract dense correspondences between adjacent frames in every video. Please refer to [DeepMatching Compile Error](https://github.com/MVIG-SJTU/AlphaPose/issues/97) to compile DeepMatching correctly.

```shell
pip install -r requirements.txt
cd deepmatching
make clean all
make
cd ..
```

## For Any Datasets (General Version)
1. Use [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose) to generate multi-person pose estimation results; a sketch for inspecting the output JSON follows the commands below.
```shell
# pytorch version
python demo.py --indir ${image_dir}$ --outdir ${results_dir}$

# torch version
./run.sh --indir ${image_dir}$ --outdir ${results_dir}$
```
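Before tracking, it can help to sanity-check the estimation output. A minimal sketch, assuming the PyTorch branch writes `alphapose-results.json` as a list of per-detection records with `image_id` and flattened COCO-style `keypoints` triples (the exact schema depends on your AlphaPose version):

```python
import json
from collections import Counter

# Load the AlphaPose detections produced by the demo.py command above
# (the path is an assumption; adjust to your ${results_dir}).
with open("results/alphapose-results.json") as f:
    detections = json.load(f)

# Count how many person detections landed on each frame.
per_frame = Counter(d["image_id"] for d in detections)
print(f"{len(detections)} detections across {len(per_frame)} frames")

# Keypoints come as flattened (x, y, score) triples per joint.
first = detections[0]
print(f"first detection: {len(first['keypoints']) // 3} joints")
```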
2. Run pose tracking; a sketch for reading the tracked output follows the commands.

```shell
# pytorch version
python tracker-general.py --imgdir ${image_dir}$ \
    --in_json ${results_dir}$/alphapose-results.json \
    --out_json ${results_dir}$/alphapose-results-forvis-tracked.json \
    --visdir ${render_dir}$

# torch version
python tracker-general.py --imgdir ${image_dir}$ \
    --in_json ${results_dir}$/POSE/alpha-pose-results-forvis.json \
    --out_json ${results_dir}$/POSE/alpha-pose-results-forvis-tracked.json \
    --visdir ${render_dir}$
```
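The `-forvis-tracked` file adds a persistent identity to every detection. A minimal sketch for reading it, assuming it maps image names to lists of person records carrying an `idx` track id (the field names are an assumption; print one entry to confirm on your version):

```python
import json
from collections import defaultdict

with open("results/alphapose-results-forvis-tracked.json") as f:
    tracked = json.load(f)  # assumed shape: {image_name: [person, ...]}

# Group frames by track id to see how long each person is followed.
frames_per_track = defaultdict(list)
for image_name, people in tracked.items():
    for person in people:
        frames_per_track[person["idx"]].append(image_name)

for track_id, frames in sorted(frames_per_track.items()):
    print(f"track {track_id}: visible in {len(frames)} frames")
```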
## For PoseTrack Dataset Evaluation (Paper Baseline)

1. Use [AlphaPose](https://github.com/MVIG-SJTU/AlphaPose) to generate multi-person pose estimation results on videos, in a format like `alpha-pose-results-sample.json`.
2. Use DeepMatching/ORB to generate correspondence files; a sketch of the ORB matching idea follows the commands below.

```shell
# Generate correspondences by DeepMatching
# (More Robust but Slower)
python matching.py --orb=0

# or

# Generate correspondences by ORB
# (Faster but Less Robust)
python matching.py --orb=1
```
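For intuition, `--orb=1` swaps DeepMatching's dense correspondences for sparse ORB feature matches between consecutive frames; the matched point pairs then serve the same role when scoring how well boxes in adjacent frames correspond. A minimal OpenCV sketch of that idea (a simplification, not the exact logic of `matching.py`):

```python
import cv2

def orb_correspondences(img_path_a, img_path_b, n_features=5000):
    """Match ORB keypoints between two frames; return (x1, y1, x2, y2) pairs."""
    img_a = cv2.imread(img_path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(img_path_b, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=n_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Hamming distance suits ORB's binary descriptors; cross-checking
    # keeps only mutually-best matches, discarding ambiguous ones.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    return [(*kp_a[m.queryIdx].pt, *kp_b[m.trainIdx].pt) for m in matches]
```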
3. Run pose tracking

```shell
# choose one value per flag: --dataset=val or test; --orb=1 (ORB) or 0 (DeepMatching)
python tracker-baseline.py --dataset=val/test --orb=1/0
```
4. Evaluation

The original [poseval](https://github.com/leonid-pishchulin/poseval) has some instructions on how to convert annotation files from MAT to JSON.
Evaluate pose tracking results on the validation dataset:
```shell
git clone https://github.com/leonid-pishchulin/poseval.git --recursive
cd poseval/py && export PYTHONPATH=$PWD/../py-motmetrics:$PYTHONPATH
cd ../../
python poseval/py/evaluate.py --groundTruth=./posetrack_data/annotations/val \
--predictions=./${track_result_dir}/ \
--evalPoseTracking --evalPoseEstimation
```

## Citation
Please cite this paper in your publications if it helps your research:
    @inproceedings{xiu2018poseflow,
      author    = {Xiu, Yuliang and Li, Jiefeng and Wang, Haoyu and Fang, Yinghong and Lu, Cewu},
      title     = {{Pose Flow}: Efficient Online Pose Tracking},
      booktitle = {BMVC},
      year      = {2018}
    }