{"id":13722025,"url":"https://github.com/tensorlayer/hyperpose","last_synced_at":"2026-01-12T07:37:18.482Z","repository":{"id":37279529,"uuid":"146084605","full_name":"tensorlayer/HyperPose","owner":"tensorlayer","description":"Library for Fast and Flexible Human Pose Estimation","archived":false,"fork":false,"pushed_at":"2023-03-25T01:16:28.000Z","size":10134,"stargazers_count":1261,"open_issues_count":33,"forks_count":274,"subscribers_count":57,"default_branch":"master","last_synced_at":"2025-05-12T00:02:14.156Z","etag":null,"topics":["computer-vision","distributed-training","mobilenet","neural-networks","openpose","pose-estimation","tensorflow","tensorlayer","tensorrt"],"latest_commit_sha":null,"homepage":"https://hyperpose.readthedocs.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorlayer.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2018-08-25T09:59:09.000Z","updated_at":"2025-04-19T22:31:33.000Z","dependencies_parsed_at":"2024-02-03T13:51:14.704Z","dependency_job_id":null,"html_url":"https://github.com/tensorlayer/HyperPose","commit_stats":{"total_commits":444,"total_committers":20,"mean_commits":22.2,"dds":0.7117117117117118,"last_synced_commit":"e34c6acb91144e1d090466324f99c521fbf47cdb"},"previous_names":["tensorlayer/openpose"],"tags_count":6,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2FHyperPose","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2FHyperPose/tags","releases_url":"https://repos.ecosyste.ms/api/v1
/hosts/GitHub/repositories/tensorlayer%2FHyperPose/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorlayer%2FHyperPose/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorlayer","download_url":"https://codeload.github.com/tensorlayer/HyperPose/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254337612,"owners_count":22054253,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","distributed-training","mobilenet","neural-networks","openpose","pose-estimation","tensorflow","tensorlayer","tensorrt"],"created_at":"2024-08-03T01:01:23.813Z","updated_at":"2026-01-12T07:37:18.450Z","avatar_url":"https://github.com/tensorlayer.png","language":"Python","readme":"\u003c/a\u003e\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"./docs/markdown/images/logo.png\", width=\"600\"\u003e\n\u003c/p\u003e\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"https://readthedocs.org/projects/hyperpose/badge/?version=latest\" title=\"Docs Building\"\u003e\u003cimg src=\"https://readthedocs.org/projects/hyperpose/badge/?version=latest\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/tensorlayer/hyperpose/actions?query=workflow%3ACI\" title=\"Build Status\"\u003e\u003cimg src=\"https://github.com/tensorlayer/hyperpose/workflows/CI/badge.svg\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://hub.docker.com/r/tensorlayer/hyperpose\" title=\"Docker\"\u003e\u003cimg 
src=\"https://img.shields.io/docker/image-size/tensorlayer/hyperpose\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/tensorlayer/hyperpose/releases\" title=\"Github Release\"\u003e\u003cimg src=\"https://img.shields.io/github/v/release/tensorlayer/hyperpose?include_prereleases\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://drive.google.com/drive/folders/1w9EjMkrjxOmMw3Rf6fXXkiv_ge7M99jR?usp=sharing\" title=\"PreTrainedModels\"\u003e\u003cimg src=\"https://img.shields.io/badge/ModelZoo-GoogleDrive-brightgreen.svg\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://en.cppreference.com/w/cpp/17\" title=\"CppStandard\"\u003e\u003cimg src=\"https://img.shields.io/badge/C++-17-blue.svg?style=flat\u0026logo=c%2B%2B\"\u003e\u003c/a\u003e\n    \u003ca href=\"https://github.com/tensorlayer/tensorlayer/blob/master/LICENSE.rst\" title=\"TensorLayer\"\u003e\u003cimg src=\"https://img.shields.io/badge/License-Apache%202.0-blue.svg\"\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n\n---\n\n\u003cp align=\"center\"\u003e\n    \u003ca href=\"#Features\"\u003eFeatures\u003c/a\u003e •\n    \u003ca href=\"#Documentation\"\u003eDocumentation\u003c/a\u003e •\n    \u003ca href=\"#Quick-Start\"\u003eQuick Start\u003c/a\u003e •\n    \u003ca href=\"#Performance\"\u003ePerformance\u003c/a\u003e •\n    \u003ca href=\"#Accuracy\"\u003eAccuracy\u003c/a\u003e •\n    \u003ca href=\"#Cite-Us\"\u003eCite Us\u003c/a\u003e •\n    \u003ca href=\"#License\"\u003eLicense\u003c/a\u003e\n\u003c/p\u003e\n\n# HyperPose\n\nHyperPose is a library for building high-performance custom pose estimation applications.\n\n## Features\n\nHyperPose has two key features:\n\n- **High-performance pose estimation with CPUs/GPUs**: HyperPose achieves real-time pose estimation through a high-performance inference engine. This engine implements numerous system optimisations: pipeline parallelism, model inference with TensorRT, CPU/GPU hybrid scheduling, and many others. 
These optimisations deliver up to 10x higher FPS than OpenPose, TF-Pose and OpenPifPaf.\n- **Flexibility for developing custom pose estimation models**: HyperPose provides high-level Python APIs to develop pose estimation models. HyperPose users can:\n    * Customise training, evaluation, visualisation, pre-processing and post-processing in pose estimation.\n    * Customise model architectures (e.g., OpenPose, PifPaf, PoseProposal Network) and training datasets.\n    * Speed up training with multiple GPUs.\n\n## Demo\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\"./docs/markdown/images/demo-xbd.gif\" width=\"600\"\u003e\n\u003c/p\u003e\n\n\u003cp align=\"center\"\u003e\n    新宝岛 with HyperPose (Lightweight OpenPose model)\n\u003c/p\u003e\n\n## Quick Start\n\nThe HyperPose library contains two parts:\n* A C++ library for high-performance pose estimation model inference.\n* A Python library for developing custom pose estimation models.\n\n### C++ inference library\n\nThe easiest way to use the inference library is through a [Docker image](https://hub.docker.com/r/tensorlayer/hyperpose). 
Pre-requisites for this image:\n\n- [CUDA Driver \u003e= 418.81.07](https://www.tensorflow.org/install/gpu) (For the default CUDA 10.0 image)\n- [NVIDIA Docker \u003e= 2.0](https://github.com/NVIDIA/nvidia-docker)\n- [Docker CE Engine \u003e= 19.03](https://docs.docker.com/engine/install/)\n\nRun this command to check if the pre-requisites are ready:\n\n```bash\nwget https://raw.githubusercontent.com/tensorlayer/hyperpose/master/scripts/test_docker.py -qO- | python\n```\n\nOnce the pre-requisites are ready, pull the HyperPose docker image:\n\n```bash\ndocker pull tensorlayer/hyperpose\n```\n\nWe provide 4 examples within this image (the following commands have been tested on Ubuntu 18.04):\n\n```bash\n# [Example 1]: Run inference on a given video, then copy the resulting output.avi to the local path.\ndocker run --name quick-start --gpus all tensorlayer/hyperpose --runtime=stream\ndocker cp quick-start:/hyperpose/build/output.avi .\ndocker rm quick-start\n\n\n# [Example 2] (X11 server required to see the imshow window): Real-time inference.\n# You may need to install an X11 server locally:\n# sudo apt install xorg openbox xauth\nxhost +; docker run --rm --gpus all -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix tensorlayer/hyperpose --imshow\n\n\n# [Example 3]: Camera + imshow window\nxhost +; docker run --name pose-camera --rm --gpus all -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --device=/dev/video0:/dev/video0 tensorlayer/hyperpose --source=camera --imshow\n# To quit this container, run `docker kill pose-camera` in another terminal.\n\n\n# [Dive into the image]\nxhost +; docker run --rm --gpus all -it -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --device=/dev/video0:/dev/video0 --entrypoint /bin/bash tensorlayer/hyperpose\n# If you cannot access a camera or an X11 server, you may also use:\n# docker run --rm --gpus all -it --entrypoint /bin/bash tensorlayer/hyperpose\n```\n\nFor more details about the command-line flags, please visit [here](https://hyperpose.readthedocs.io/en/latest/markdown/quick_start/prediction.html#table-of-flags-for-hyperpose-cli).\n\n### Python training library\n\nWe recommend using the Python training library within an [Anaconda](https://www.anaconda.com/products/individual) environment. The quick-start below has been tested with these environments:\n\n| OS           | NVIDIA Driver | CUDA Toolkit | GPU            |\n| ------------ | ------------- | ------------ | -------------- |\n| Ubuntu 18.04 | 410.79        | 10.0         | Tesla V100-DGX |\n| Ubuntu 18.04 | 440.33.01     | 10.2         | Tesla V100-DGX |\n| Ubuntu 18.04 | 430.64        | 10.1         | TITAN RTX      |\n| Ubuntu 18.04 | 430.26        | 10.2         | TITAN XP      |\n| Ubuntu 16.04 | 430.50        | 10.1         | RTX 2080Ti     |\n\nOnce Anaconda is installed, run the Bash commands below to create a virtual environment:\n\n```bash\n# Create the virtual environment (choose yes)\nconda create -n hyperpose python=3.7\n# Activate the virtual environment, then start the installation\nconda activate hyperpose\n# Install the cudatoolkit and cudnn libraries using conda\nconda install cudatoolkit=10.0.130\nconda install cudnn=7.6.0\n```\n\nWe then clone the repository and install the dependencies listed in [requirements.txt](https://github.com/tensorlayer/hyperpose/blob/master/requirements.txt):\n\n```bash\ngit clone https://github.com/tensorlayer/hyperpose.git \u0026\u0026 cd hyperpose\npip install -r requirements.txt\n```\n\nWe demonstrate how to train a custom pose estimation model with HyperPose. 
HyperPose APIs contain three key modules: *Config*, *Model* and *Dataset*; their basic usage is shown below.\n\n```python\nfrom hyperpose import Config, Model, Dataset\n\n# Set the model name to distinguish models (required)\nConfig.set_model_name(\"MyLightweightOpenPose\")\n\n# Set the model type, model backbone and dataset\nConfig.set_model_type(Config.MODEL.LightweightOpenpose)\nConfig.set_model_backbone(Config.BACKBONE.Vggtiny)\nConfig.set_dataset_type(Config.DATA.MSCOCO)\n\n# Choose single-node or parallel training\nConfig.set_train_type(Config.TRAIN.Single_train)\n\nconfig = Config.get_config()\nmodel = Model.get_model(config)\ndataset = Dataset.get_dataset(config)\n\n# Start the training process\nModel.get_train(config)(model, dataset)\n```\n\nThe full training program is listed [here](https://github.com/tensorlayer/hyperpose/blob/master/train.py). To evaluate the trained model, you can use the evaluation program [here](https://github.com/tensorlayer/hyperpose/blob/master/eval.py). More information about the training library can be found [here](https://hyperpose.readthedocs.io/en/latest/markdown/quick_start/training.html).\n\n\n## Documentation\n\nThe APIs of the HyperPose training library and the inference library are described in the [Documentation](https://hyperpose.readthedocs.io/en/latest/).\n\n## Performance\n\nWe compare the prediction performance of HyperPose with [OpenPose 1.6](https://github.com/CMU-Perceptual-Computing-Lab/openpose), [TF-Pose](https://github.com/ildoonet/tf-pose-estimation) and [OpenPifPaf 0.12](https://github.com/openpifpaf/openpifpaf). 
The test-bed runs Ubuntu 18.04 with a 1070 Ti GPU and an Intel i7 CPU (12 logical cores).\n\n| HyperPose Configuration  | DNN Size | Input Size | HyperPose | Baseline |\n| --------------- | ------------- | ------------------ | ------------------ | --------------------- |\n| OpenPose (VGG)   | 209.3 MB      | 656 x 368            | **27.32 FPS**           | 8 FPS (OpenPose)          |\n| OpenPose (TinyVGG)  | 34.7 MB       | 384 x 256          | **124.925 FPS**         | N/A                   |\n| OpenPose (MobileNet) | 17.9 MB       | 432 x 368          | **84.32 FPS**           | 8.5 FPS (TF-Pose)         |\n| OpenPose (ResNet18)  | 45.0 MB       | 432 x 368          | **62.52 FPS**           | N/A                  |\n| OpenPifPaf (ResNet50)  | 97.6 MB       | 432 x 368          | **44.16 FPS**           | 14.5 FPS (OpenPifPaf)    |\n\n## Accuracy\n\nWe evaluate the accuracy of pose estimation models developed with HyperPose. The environment is Ubuntu 16.04, with 4 V100-DGXs and 24 Intel Xeon CPUs. Training each model takes 1-2 weeks on one V100-DGX. (If you don't want to train from scratch, you can use our pre-trained backbone models.)\n\n| HyperPose Configuration | DNN Size | Input Size | Evaluation Dataset | HyperPose Accuracy (IoU=0.50:0.95) | Original Accuracy (IoU=0.50:0.95) |\n| -------------------- | ---------- | ------------- | ---------------- | --------------------- | ----------------------- |\n| OpenPose (VGG19)   | 199 MB | 432 x 368 | MSCOCO2014 (random 1160 images) | 57.0 mAP | 58.4 mAP  |\n| LightweightOpenPose (Dilated MobileNet)   | 17.7 MB | 432 x 368 | MSCOCO2017 (all 5000 img.) | 46.1 mAP | 42.8 mAP |\n| LightweightOpenPose (MobileNet-Thin)   | 17.4 MB | 432 x 368 | MSCOCO2017 (all 5000 img.) | 44.2 mAP | 28.06 mAP (MSCOCO2014) |\n| LightweightOpenPose (tiny VGG)   | 23.6 MB | 432 x 368 | MSCOCO2017 (all 5000 img.) | 47.3 mAP | - |\n| LightweightOpenPose (ResNet50)   | 42.7 MB | 432 x 368 | MSCOCO2017 (all 5000 img.) | 48.2 mAP | - |\n| PoseProposal (ResNet18)   | 45.2 MB | 384 x 384 | MPII (all 2729 img.) | 54.9 PCKh | 72.8 PCKh |\n\n## Cite Us\n\nIf you find HyperPose helpful for your project, please cite our paper:\n\n```\n@article{hyperpose2021,\n    author  = {Guo, Yixiao and Liu, Jiawei and Li, Guo and Mai, Luo and Dong, Hao},\n    journal = {ACM Multimedia},\n    title   = {{Fast and Flexible Human Pose Estimation with HyperPose}},\n    url     = {https://github.com/tensorlayer/hyperpose},\n    year    = {2021}\n}\n```\n\n## License\n\nHyperPose is open-sourced under the [Apache 2.0 license](https://github.com/tensorlayer/tensorlayer/blob/master/LICENSE.rst).\n\n","funding_links":[],"categories":["Sensor Processing"],"sub_categories":["Image Processing"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorlayer%2Fhyperpose","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorlayer%2Fhyperpose","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorlayer%2Fhyperpose/lists"}