Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/stefanopini/simple-higherhrnet
Multi-person Human Pose Estimation with HigherHRNet in PyTorch, with TensorRT support
- Host: GitHub
- URL: https://github.com/stefanopini/simple-higherhrnet
- Owner: stefanopini
- License: gpl-3.0
- Created: 2020-03-20T22:22:05.000Z (over 4 years ago)
- Default Branch: master
- Last Pushed: 2022-12-28T23:23:58.000Z (almost 2 years ago)
- Last Synced: 2023-11-07T18:03:36.362Z (about 1 year ago)
- Topics: coco-dataset, deep-learning, higher-hrnet, hrnet, human-pose-estimation, keypoint-detection, mscoco-keypoint, pytorch
- Language: Python
- Homepage:
- Size: 15.2 MB
- Stars: 144
- Watchers: 8
- Forks: 18
- Open Issues: 2
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Multi-person Human Pose Estimation with HigherHRNet in PyTorch
This is an unofficial implementation of the paper
[*HigherHRNet: Scale-Aware Representation Learning for Bottom-Up Human Pose Estimation*](https://openaccess.thecvf.com/content_CVPR_2020/papers/Cheng_HigherHRNet_Scale-Aware_Representation_Learning_for_Bottom-Up_Human_Pose_Estimation_CVPR_2020_paper.pdf).
The code is a simplified version of the [official code](https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation),
written with ease of use in mind. It is fully compatible with the
[official pre-trained weights](https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation)
and supports both Windows and Linux.

This repository currently provides:
- A slightly simpler implementation of ``HigherHRNet`` in PyTorch (>=1.0) - compatible with official weights
(``pose_higher_hrnet_*``).
- A simple class (``SimpleHigherHRNet``) that loads the HigherHRNet network for bottom-up human pose
estimation, loads the pre-trained weights, and makes predictions on a single image or a batch of images.
- Support for multi-GPU inference.
- Multi-person support by design (HigherHRNet is a bottom-up approach).
- A reference code that runs a live demo reading frames from a webcam or a video file.
- **NEW** Support for TensorRT (thanks to [@gpastal24](https://github.com/gpastal24), see [#14](https://github.com/stefanopini/simple-HigherHRNet/pull/14) and [#15](https://github.com/stefanopini/simple-HigherHRNet/pull/15)).
- **NEW** A [Jupyter Notebook](https://github.com/stefanopini/simple-HigherHRNet/blob/master/SimpleHigherHRNet_notebook.ipynb) compatible with Google Colab showcasing how to use this repository.
- [Click here](https://colab.research.google.com/github/stefanopini/simple-HigherHRNet/blob/master/SimpleHigherHRNet_notebook.ipynb) to open the notebook on Colab!

This repository is built along the lines of the repository
[*simple-HRNet*](https://github.com/stefanopini/simple-HRNet).
Unfortunately, compared to HRNet, the results and performance of HigherHRNet are somewhat disappointing: the network and
the required post-processing are slower, and the predictions do not look more precise.
Moreover, multiple skeletons are often predicted for the same person, requiring additional steps to filter out the
redundant poses.
On the other hand, being a bottom-up approach, HigherHRNet does not rely on a person detection algorithm such as YOLOv3
and can be used for person detection too.
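The redundant-pose filtering mentioned above can be sketched in plain Python. The function names, the pose representation (a list of `(x, y)` keypoints per person), and the distance threshold below are all assumptions made for illustration; they are not the repository's actual implementation.

```
def mean_keypoint_distance(pose_a, pose_b):
    """Average Euclidean distance between corresponding keypoints."""
    dists = [((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
             for (ax, ay), (bx, by) in zip(pose_a, pose_b)]
    return sum(dists) / len(dists)

def filter_redundant_poses(poses, threshold=5.0):
    """Keep a pose only if it is not almost identical to one already kept."""
    kept = []
    for pose in poses:
        if all(mean_keypoint_distance(pose, k) > threshold for k in kept):
            kept.append(pose)
    return kept

# Two near-identical skeletons and one distinct skeleton (toy 3-keypoint poses).
poses = [
    [(10.0, 10.0), (20.0, 20.0), (30.0, 30.0)],
    [(11.0, 10.5), (21.0, 20.5), (31.0, 30.5)],  # near-duplicate of the first
    [(100.0, 100.0), (120.0, 120.0), (140.0, 140.0)],
]
print(len(filter_redundant_poses(poses)))  # → 2
```

A greedy pass like this keeps the first pose of each near-duplicate cluster; the threshold would need tuning to the image resolution.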
### Examples
### Class usage
```
import cv2
from SimpleHigherHRNet import SimpleHigherHRNet

model = SimpleHigherHRNet(32, 17, "./weights/pose_higher_hrnet_w32_512.pth")
image = cv2.imread("image.png", cv2.IMREAD_COLOR)
joints = model.predict(image)
```
The most useful parameters of the `__init__` function are:
- `c`: number of channels (HRNet: 32, 48)
- `nof_joints`: number of joints (COCO: 17, CrowdPose: 14)
- `checkpoint_path`: path of the (official) weights to be loaded
- `resolution`: image resolution (min side); it depends on the loaded weights
- `return_heatmaps`: the `predict` method also returns the heatmaps
- `return_bounding_boxes`: the `predict` method also returns the bounding boxes
- `filter_redundant_poses`: redundant poses (poses that are almost identical) are filtered out
- `max_nof_people`: maximum number of people in the scene
- `max_batch_size`: maximum batch size used in HRNet inference
- `device`: device (`cpu` or `cuda`)
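In the simple-HRNet family of repositories, `predict` typically returns an array of shape `(num_people, nof_joints, 3)` with a `(y, x, confidence)` triple per joint; treat that layout as an assumption of this sketch, which shows how such an output might be post-processed in plain Python.

```
def confident_joints(person, min_conf=0.5):
    """Return indices of joints whose confidence exceeds min_conf.

    `person` is a list of (y, x, confidence) triples, one per joint --
    the layout assumed above, not guaranteed by this repository.
    """
    return [i for i, (_, _, conf) in enumerate(person) if conf > min_conf]

# Toy output for one detected person with three joints.
person = [(120.0, 60.0, 0.92), (130.0, 64.0, 0.31), (150.0, 70.0, 0.88)]
print(confident_joints(person))  # → [0, 2]
```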
### Running the live demo
From a connected camera:
```
python scripts/live-demo.py --camera_id 0
```
From a saved video:
```
python scripts/live-demo.py --filename video.mp4
```
For help:
```
python scripts/live-demo.py --help
```
### Extracting keypoints
From a saved video:
```
python scripts/extract-keypoints.py --format csv --filename video.mp4
```
For help:
```
python scripts/extract-keypoints.py --help
```
### Converting the model to TensorRT
Warning: requires the installation of TensorRT (see the Nvidia website) and ONNX.
On some platforms, they can be installed with
```
pip install tensorrt onnx
```
Converting to FP16:
```
python scripts/export-tensorrt-model.py --device 0 --half
```
For help:
```
python scripts/export-tensorrt-model.py --help
```
### Installation instructions
- Clone the repository
``git clone https://github.com/stefanopini/simple-HigherHRNet.git``
- Install the required packages
``pip install -r requirements.txt``
- Download the official pre-trained weights from
[https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation](https://github.com/HRNet/HigherHRNet-Human-Pose-Estimation)
Direct links, COCO ([official Drive folder](https://drive.google.com/drive/folders/1X9-TzWpwbX2zQf2To8lB-ZQHMYviYYh6)):
- w48 640 (more accurate, but slower)
[pose_higher_hrnet_w48_640.pth.tar](https://drive.google.com/file/d/10j9Wx_I2H6qaw-prAdlJ44fLryDtA-ah/view)
- w32 640 (less accurate, but faster)
[pose_higher_hrnet_w32_640.pth.tar](https://drive.google.com/file/d/1uEcQlm1rjV-JRgVbaP79Y5sMLqUX2ciD/view)
- w32 512 (even less accurate, but even faster) - Used as default in `live-demo.py`
[pose_higher_hrnet_w32_512.pth](https://drive.google.com/file/d/1V9Iz0ZYy9m8VeaspfKECDW0NKlGsYmO1/view)
Remember to set the parameters of SimpleHigherHRNet accordingly (in particular `c` and `resolution`).
- Your folders should look like:
```
simple-HigherHRNet
├── gifs (preview in README.md)
├── misc (misc)
├── models (pytorch models)
├── scripts (scripts)
└── weights (HigherHRnet weights)
```
### ToDos
- [x] Add keypoint extraction script (thanks to [@wuyenlin](https://github.com/wuyenlin))
- [ ] Optimize the post-processing steps
- [ ] Add COCO dataset and evaluation
- [ ] Add Train/Test scripts
- [x] Add TensorRT support
- [x] Add notebook compatible with Colab
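The keypoint-extraction step above (`scripts/extract-keypoints.py --format csv`) can be sketched with the standard `csv` module. The column layout below (frame, person, joint, y, x, confidence) and the function name are assumptions for illustration, not the script's actual output format.

```
import csv
import io

def write_keypoints_csv(fileobj, frames):
    """Write one CSV row per joint: frame, person, joint, y, x, confidence.

    `frames` is a list of frames; each frame is a list of people; each
    person is a list of (y, x, confidence) joint triples.
    """
    writer = csv.writer(fileobj)
    writer.writerow(["frame", "person", "joint", "y", "x", "confidence"])
    for f, people in enumerate(frames):
        for p, person in enumerate(people):
            for j, (y, x, conf) in enumerate(person):
                writer.writerow([f, p, j, y, x, conf])

# One frame with one person and two joints.
buf = io.StringIO()
write_keypoints_csv(buf, [[[(120.0, 60.0, 0.9), (130.0, 64.0, 0.8)]]])
print(buf.getvalue().splitlines()[1])  # → 0,0,0,120.0,60.0,0.9
```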