Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/naver/dope
- Host: GitHub
- URL: https://github.com/naver/dope
- Owner: naver
- License: other
- Created: 2020-08-18T15:53:52.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2020-11-23T09:54:41.000Z (about 4 years ago)
- Last Synced: 2024-10-11T00:38:30.672Z (4 months ago)
- Language: Python
- Size: 969 KB
- Stars: 92
- Watchers: 8
- Forks: 22
- Open Issues: 0
Metadata Files:
- Readme: README.MD
- License: LICENSE
Awesome Lists containing this project
- awesome-human-pose-estimation - code - ECCV, DOPE (only Keypoint) (3D Whole-Body Mesh Recovery / 2020)
README
# DOPE: Distillation Of Part Experts for whole-body 3D pose estimation in the wild
This repository contains the code for running our DOPE model.
We only provide code for testing, not for training.
If you use our code, please cite our [ECCV'20 paper](https://www.ecva.net/papers/eccv_2020/papers_ECCV/papers/123710375.pdf):

```bibtex
@inproceedings{dope,
title={{DOPE: Distillation Of Part Experts for whole-body 3D pose estimation in the wild}},
  author={Weinzaepfel, Philippe and Br\'egier, Romain and Combaluzier, Hadrien and Leroy, Vincent and Rogez, Gr\'egory},
booktitle={{ECCV}},
year={2020}
}
```

## License
DOPE is distributed under the CC BY-NC-SA 4.0 License. See [LICENSE](LICENSE) for more information.
## Getting started
Our python3 code requires the following packages:
* pytorch
* torchvision
* opencv (for drawing the results)
* numpy/scipy

Our code has been tested on Linux, with pytorch 1.5 and torchvision 0.6.
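For reference, here is a minimal sketch (not part of the repository) to check that these dependencies are importable and print the installed versions:

```python
# Sanity check (not part of the repo): verify that DOPE's dependencies
# are importable and print the installed versions.
import torch
import torchvision
import cv2
import numpy
import scipy

for name, module in [("pytorch", torch), ("torchvision", torchvision),
                     ("opencv", cv2), ("numpy", numpy), ("scipy", scipy)]:
    print(f"{name}: {module.__version__}")
```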
We do not provide support for installation.

### Download the models
First create a folder `models/` in which you should place the downloaded pretrained models.
The list of models includes:
* [DOPE_v1_0_0](http://download.europe.naverlabs.com/ComputerVision/DOPE_models/DOPE_v1_0_0.pth.tgz) as used in our ECCV'20 paper
* [DOPErealtime_v1_0_0](http://download.europe.naverlabs.com/ComputerVision/DOPE_models/DOPErealtime_v1_0_0.pth.tgz), which is its real-time version
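If you prefer to script the download, here is a minimal Python sketch using only the standard library; the URL matches the list above, while the archive's exact contents and extraction layout are assumptions:

```python
# Sketch: download a pretrained DOPE model archive into models/ and extract it.
import tarfile
import urllib.request
from pathlib import Path

url = "http://download.europe.naverlabs.com/ComputerVision/DOPE_models/DOPE_v1_0_0.pth.tgz"
models_dir = Path("models")
models_dir.mkdir(exist_ok=True)

archive = models_dir / url.rsplit("/", 1)[-1]
urllib.request.urlretrieve(url, archive)

# Assumption: the .tgz contains the .pth checkpoint at its top level.
with tarfile.open(archive, "r:gz") as tar:
    tar.extractall(models_dir)
```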
### Post-processing with a modified version of LCR-Net++

Our post-processing relies on a modified version of the pose proposals integration proposed in the [LCR-Net++ code](https://thoth.inrialpes.fr/src/LCR-Net/).
To get this code, once in the DOPE folder, please clone our modified LCR-Net++ repository:
```
git clone https://github.com/naver/lcrnet-v2-improved-ppi.git
```

Alternatively, you can use a more naive post-processing based on non-maximum suppression by adding `--postprocess nms` to the command lines below, which results in a slight decrease in performance.
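For intuition about this fallback, non-maximum suppression greedily keeps the highest-scoring detection and discards detections that overlap it too much. Below is a generic box-based NMS sketch in numpy; it illustrates the general technique, not the pose-proposal variant used by this repository:

```python
# Generic box-based non-maximum suppression in numpy (illustration only;
# DOPE applies NMS to pose proposals rather than plain boxes).
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Return indices of boxes (x1, y1, x2, y2) kept after greedy NMS."""
    order = np.argsort(scores)[::-1]  # candidates sorted by decreasing score
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        # Intersection area between the best box and the remaining ones.
        x1 = np.maximum(boxes[best, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[best, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[best, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[best, 3], boxes[rest, 3])
        inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
        area_best = (boxes[best, 2] - boxes[best, 0]) * (boxes[best, 3] - boxes[best, 1])
        area_rest = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_best + area_rest - inter)
        # Drop candidates overlapping the kept box above the threshold.
        order = rest[iou <= iou_threshold]
    return keep
```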
## Using the code
To use our code on an image, use the following command:
```
python dope.py --model <modelname> --image <imagename>
```

with
* `<modelname>`: name of the model to use (e.g., DOPE_v1_0_0)
* `<imagename>`: name of the image to test

For instance, you can run
```
python dope.py --model DOPErealtime_v1_0_0 --image 015994080.jpg
```

The command will create an image `<imagename>_<modelname>.jpg` that shows the 2D poses output by our DOPE model.
We also provide code to visualize the results not only in 2D but also in 3D; just add the argument `--visu3d` to the previous command line.
The 3D visualization uses OpenGL and the visvis python package; we do not provide support for installation and OpenGL issues.
Running the command with the `--visu3d` option should create another file with the 3D visualization, named `<imagename>_<modelname>_visu3d.jpg`.
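The repository's viewer is based on visvis and OpenGL. Purely as an illustration of what plotting predicted 3D keypoints looks like, here is a sketch using matplotlib instead (matplotlib is not a dependency of this code, and the keypoint array below is random):

```python
# Illustration only: scatter-plot 3D keypoints with matplotlib
# (DOPE's own viewer uses visvis/OpenGL; the keypoints here are random).
import numpy as np
import matplotlib.pyplot as plt

keypoints_3d = np.random.randn(13, 3)  # fake (x, y, z) body keypoints

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(keypoints_3d[:, 0], keypoints_3d[:, 1], keypoints_3d[:, 2])
ax.set_xlabel("x")
ax.set_ylabel("y")
ax.set_zlabel("z")
plt.show()
```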
Note that DOPE predicts 3D poses for each body part in their relative coordinate systems, e.g., centered on the body center for bodies.
For better visualization, we approximate 3D scene coordinates by finding the offsets that minimize the reprojection error based on a scaled orthographic projection model.
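To make that last step concrete: under a scaled orthographic model, a 3D point (X, Y, Z) projects to s * (X, Y) + t for a scale s and 2D offset t, both of which have a closed-form least-squares solution. The following sketch illustrates the idea; it is not the repository's actual implementation:

```python
# Sketch (not the repo's code): fit a scaled orthographic projection
# s * (X, Y) + t to 2D detections by least squares.
import numpy as np

def fit_scaled_orthographic(points3d, points2d):
    """Return scale s and offset t minimizing ||s * points3d[:, :2] + t - points2d||^2."""
    xy = points3d[:, :2]
    c3 = xy.mean(axis=0)        # centroid of the orthographically projected 3D points
    c2 = points2d.mean(axis=0)  # centroid of the 2D detections
    d3 = xy - c3
    d2 = points2d - c2
    s = (d3 * d2).sum() / (d3 ** 2).sum()  # closed-form least-squares scale
    t = c2 - s * c3                        # offset aligning the centroids
    return s, t
```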
Here is one example resulting image:

![example result](example_result.jpg)

Our real-time models use half-precision computation. In case your device cannot handle it, please uncomment the line `#ckpt['half'] = False` in `dope.py`.
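For context, half-precision computation means weights and inputs are stored as 16-bit floats. Conceptually, the flag mentioned above lives in the loaded checkpoint dictionary; in this sketch the checkpoint filename is an assumption based on the `models/` layout above:

```python
# Sketch: force full precision on devices without float16 support.
# The checkpoint filename is an assumption based on the models/ layout above.
import torch

ckpt = torch.load("models/DOPErealtime_v1_0_0.pth", map_location="cpu")
ckpt['half'] = False  # mirrors the line to uncomment in dope.py
```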