{"id":13419537,"url":"https://github.com/CMU-Perceptual-Computing-Lab/openpose","last_synced_at":"2025-03-15T05:31:33.306Z","repository":{"id":37319282,"uuid":"89247811","full_name":"CMU-Perceptual-Computing-Lab/openpose","owner":"CMU-Perceptual-Computing-Lab","description":"OpenPose: Real-time multi-person keypoint detection library for body, face, hands, and foot estimation","archived":false,"fork":false,"pushed_at":"2024-08-03T01:59:11.000Z","size":86496,"stargazers_count":31872,"open_issues_count":344,"forks_count":7913,"subscribers_count":923,"default_branch":"master","last_synced_at":"2025-02-18T03:01:51.269Z","etag":null,"topics":["caffe","computer-vision","cpp","cvpr-2017","deep-learning","face","foot-estimation","hand-estimation","human-behavior-understanding","human-pose","human-pose-estimation","keypoint-detection","keypoints","machine-learning","multi-person","opencv","openpose","pose","pose-estimation","real-time"],"latest_commit_sha":null,"homepage":"https://cmu-perceptual-computing-lab.github.io/openpose","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/CMU-Perceptual-Computing-Lab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-04-24T14:06:31.000Z","updated_at":"2025-02-18T02:42:14.000Z","dependencies_parsed_at":"2022-07-09T06:46:23.483Z","dependency_job_id":"9dd6650e-784e-45af-95b2-32fdc166c3ab","html_url":"https://github.com/CMU-Perceptual-Computing-Lab/openpose","commit_stats":null,"previous_names":[],"tags_count":15,"template":false,"template_full_name":null,"repository_url":"https://repos
.ecosyste.ms/api/v1/hosts/GitHub/repositories/CMU-Perceptual-Computing-Lab%2Fopenpose","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CMU-Perceptual-Computing-Lab%2Fopenpose/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CMU-Perceptual-Computing-Lab%2Fopenpose/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/CMU-Perceptual-Computing-Lab%2Fopenpose/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/CMU-Perceptual-Computing-Lab","download_url":"https://codeload.github.com/CMU-Perceptual-Computing-Lab/openpose/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243690111,"owners_count":20331726,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["caffe","computer-vision","cpp","cvpr-2017","deep-learning","face","foot-estimation","hand-estimation","human-behavior-understanding","human-pose","human-pose-estimation","keypoint-detection","keypoints","machine-learning","multi-person","opencv","openpose","pose","pose-estimation","real-time"],"created_at":"2024-07-30T22:01:17.374Z","updated_at":"2025-03-15T05:31:33.299Z","avatar_url":"https://github.com/CMU-Perceptual-Computing-Lab.png","language":"C++","readme":"\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\".github/Logo_main_black.png\" width=\"300\"\u003e\n\u003c/div\u003e\n\n-----------------\n\n| **Build Type**   |`Linux`           |`MacOS`           |`Windows`         |\n| :---:            | :---:            | :---:            | :---:            
|\n| **Build Status** | [![Status](https://github.com/CMU-Perceptual-Computing-Lab/openpose/workflows/CI/badge.svg)](https://github.com/CMU-Perceptual-Computing-Lab/openpose/actions) | [![Status](https://github.com/CMU-Perceptual-Computing-Lab/openpose/workflows/CI/badge.svg)](https://github.com/CMU-Perceptual-Computing-Lab/openpose/actions) | [![Status](https://ci.appveyor.com/api/projects/status/5leescxxdwen77kg/branch/master?svg=true)](https://ci.appveyor.com/project/gineshidalgo99/openpose/branch/master) |\n\n[**OpenPose**](https://github.com/CMU-Perceptual-Computing-Lab/openpose) was the **first real-time multi-person system to jointly detect human body, hand, facial, and foot keypoints (135 keypoints in total) on single images**.\n\nIt is **authored by** [**Ginés Hidalgo**](https://www.gineshidalgo.com), [**Zhe Cao**](https://people.eecs.berkeley.edu/~zhecao), [**Tomas Simon**](http://www.cs.cmu.edu/~tsimon), [**Shih-En Wei**](https://scholar.google.com/citations?user=sFQD3k4AAAAJ\u0026hl=en), [**Yaadhav Raaj**](https://www.raaj.tech), [**Hanbyul Joo**](https://jhugestar.github.io), **and** [**Yaser Sheikh**](http://www.cs.cmu.edu/~yaser). It is **maintained by** [**Ginés Hidalgo**](https://www.gineshidalgo.com) **and** [**Yaadhav Raaj**](https://www.raaj.tech). OpenPose would not be possible without the [**CMU Panoptic Studio dataset**](http://domedb.perception.cs.cmu.edu). 
We would also like to thank all the people who [have helped OpenPose in any way](doc/09_authors_and_contributors.md).\n\n\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\".github/media/pose_face_hands.gif\" width=\"480\"\u003e\n    \u003cbr\u003e\n    \u003csup\u003eAuthors \u003ca href=\"https://www.gineshidalgo.com\" target=\"_blank\"\u003eGinés Hidalgo\u003c/a\u003e (left) and \u003ca href=\"https://jhugestar.github.io\" target=\"_blank\"\u003eHanbyul Joo\u003c/a\u003e (right) in front of the \u003ca href=\"http://domedb.perception.cs.cmu.edu\" target=\"_blank\"\u003eCMU Panoptic Studio\u003c/a\u003e\u003c/sup\u003e\n\u003c/p\u003e\n\n\n\n## Contents\n1. [Results](#results)\n2. [Features](#features)\n3. [Related Work](#related-work)\n4. [Installation](#installation)\n5. [Quick Start Overview](#quick-start-overview)\n6. [Send Us Feedback!](#send-us-feedback)\n7. [Citation](#citation)\n8. [License](#license)\n\n\n\n## Results\n### Whole-body (Body, Foot, Face, and Hands) 2D Pose Estimation\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\".github/media/dance_foot.gif\" width=\"300\"\u003e\n    \u003cimg src=\".github/media/pose_face.gif\" width=\"300\"\u003e\n    \u003cimg src=\".github/media/pose_hands.gif\" width=\"300\"\u003e\n    \u003cbr\u003e\n    \u003csup\u003eTesting OpenPose: (Left) \u003ca href=\"https://www.youtube.com/watch?v=2DiQUX11YaY\" target=\"_blank\"\u003e\u003ci\u003eCrazy Uptown Funk flashmob in Sydney\u003c/i\u003e\u003c/a\u003e video sequence. 
(Center and right) Authors \u003ca href=\"https://www.gineshidalgo.com\" target=\"_blank\"\u003eGinés Hidalgo\u003c/a\u003e and \u003ca href=\"http://www.cs.cmu.edu/~tsimon\" target=\"_blank\"\u003eTomas Simon\u003c/a\u003e testing face and hands\u003c/sup\u003e\n\u003c/p\u003e\n\n### Whole-body 3D Pose Reconstruction and Estimation\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\".github/media/openpose3d.gif\" width=\"360\"\u003e\n    \u003cbr\u003e\n    \u003csup\u003e\u003ca href=\"https://ziutinyat.github.io/\" target=\"_blank\"\u003eTianyi Zhao\u003c/a\u003e testing the OpenPose 3D Module\u003c/sup\u003e\n\u003c/p\u003e\n\n### Unity Plugin\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\".github/media/unity_main.png\" width=\"300\"\u003e\n    \u003cimg src=\".github/media/unity_body_foot.png\" width=\"300\"\u003e\n    \u003cimg src=\".github/media/unity_hand_face.png\" width=\"300\"\u003e\n    \u003cbr\u003e\n    \u003csup\u003e\u003ca href=\"https://ziutinyat.github.io/\" target=\"_blank\"\u003eTianyi Zhao\u003c/a\u003e and \u003ca href=\"https://www.gineshidalgo.com\" target=\"_blank\"\u003eGinés Hidalgo\u003c/a\u003e testing the \u003ca href=\"https://github.com/CMU-Perceptual-Computing-Lab/openpose_unity_plugin\" target=\"_blank\"\u003eOpenPose Unity Plugin\u003c/a\u003e\u003c/sup\u003e\n\u003c/p\u003e\n\n### Runtime Analysis\nWe show an inference-time comparison between three pose estimation libraries under the same hardware and conditions: OpenPose, Alpha-Pose (fast PyTorch version), and Mask R-CNN. The OpenPose runtime is constant, while the runtimes of Alpha-Pose and Mask R-CNN grow linearly with the number of people. 
More details [**here**](https://arxiv.org/abs/1812.08008).\n\n\u003cp align=\"center\"\u003e\n    \u003cimg src=\".github/media/openpose_vs_competition.png\" width=\"360\"\u003e\n\u003c/p\u003e\n\n\n\n## Features\n**Main Functionality**:\n- **2D real-time multi-person keypoint detection**:\n    - 15, 18 or **25-keypoint body/foot keypoint estimation**, including **6 foot keypoints**. **Runtime invariant to number of detected people**.\n    - **2x21-keypoint hand keypoint estimation**. **Runtime depends on number of detected people**. See [**OpenPose Training**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train) for a runtime-invariant alternative.\n    - **70-keypoint face keypoint estimation**. **Runtime depends on number of detected people**. See [**OpenPose Training**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train) for a runtime-invariant alternative.\n- [**3D real-time single-person keypoint detection**](doc/advanced/3d_reconstruction_module.md):\n    - 3D triangulation from multiple single views.\n    - Synchronization of Flir cameras is handled.\n    - Compatible with Flir/Point Grey cameras.\n- [**Calibration toolbox**](doc/advanced/calibration_module.md): Estimation of distortion, intrinsic, and extrinsic camera parameters.\n- **Single-person tracking** for further speedup or visual smoothing.\n\n**Input**: Image, video, webcam, Flir/Point Grey, IP camera, with support for adding your own custom input source (e.g., a depth camera).\n\n**Output**: Basic image + keypoint display/saving (PNG, JPG, AVI, ...), keypoint saving (JSON, XML, YML, ...), keypoints as an array class, with support for adding your own custom output code (e.g., some fancy UI).\n\n**OS**: Ubuntu (20, 18, 16, 14), Windows (10, 8), macOS, Nvidia TX2.\n\n**Hardware compatibility**: CUDA (Nvidia GPU), OpenCL (AMD GPU), and non-GPU (CPU-only) versions.\n\n**Usage Alternatives**:\n- [**Command-line demo**](doc/01_demo.md) for built-in functionality.\n- [**C++ 
API**](doc/04_cpp_api.md) and [**Python API**](doc/03_python_api.md) for custom functionality. E.g., adding your custom inputs, pre-processing, post-processing, and output steps.\n\nFor further details, check the [major released features](doc/07_major_released_features.md) and [release notes](doc/08_release_notes.md) docs.\n\n\n\n## Related Work\n- [**OpenPose training code**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_train)\n- [**OpenPose foot dataset**](https://cmu-perceptual-computing-lab.github.io/foot_keypoint_dataset/)\n- [**OpenPose Unity Plugin**](https://github.com/CMU-Perceptual-Computing-Lab/openpose_unity_plugin)\n- OpenPose papers published in **IEEE TPAMI and CVPR**. Cite them in your publications if OpenPose helps your research! (Links and more details in the [Citation](#citation) section below).\n\n\n\n## Installation\nIf you want to use OpenPose without installing or writing any code, simply [download and use the latest Windows portable version of OpenPose](doc/installation/0_index.md#windows-portable-demo)!\n\nOtherwise, you can [build OpenPose from source](doc/installation/0_index.md#compiling-and-running-openpose-from-source). See the [installation doc](doc/installation/0_index.md) for all the alternatives.\n\n\n\n## Quick Start Overview\nSimply use the OpenPose Demo from your favorite command-line tool (e.g., Windows PowerShell or Ubuntu Terminal). E.g., this example runs OpenPose on your webcam and displays the body keypoints:\n```\n# Ubuntu\n./build/examples/openpose/openpose.bin\n```\n```\n:: Windows - Portable Demo\nbin\OpenPoseDemo.exe\n```\n\nYou can also add any of the available flags in any order. 
E.g., the following example runs on a video (`--video {PATH}`), enables face (`--face`) and hands (`--hand`), and saves the output keypoints as JSON files on disk (`--write_json {PATH}`).\n```\n# Ubuntu\n./build/examples/openpose/openpose.bin --video examples/media/video.avi --face --hand --write_json output_json_folder/\n```\n```\n:: Windows - Portable Demo\nbin\OpenPoseDemo.exe --video examples\media\video.avi --face --hand --write_json output_json_folder/\n```\n\nOptionally, you can also extend OpenPose's functionality from its Python and C++ APIs. After [installing](doc/installation/0_index.md) OpenPose, check its [official doc](doc/00_index.md) for a quick overview of all the alternatives and tutorials.\n\n\n\n## Send Us Feedback!\nOur library is open source for research purposes, and we want to improve it! So let us know (create a new GitHub issue or pull request, email us, etc.) if you...\n1. Find/fix any bug (in functionality or speed) or know how to speed up or improve any part of OpenPose.\n2. Want to add/show some cool functionality/demo/project made on top of OpenPose. We can add your project link to our [Community-based Projects](doc/10_community_projects.md) section or even integrate it with OpenPose!\n\n\n\n## Citation\nPlease cite these papers in your publications if OpenPose helps your research. All of OpenPose is based on [OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields](https://arxiv.org/abs/1812.08008), while the hand and face detectors also use [Hand Keypoint Detection in Single Images using Multiview Bootstrapping](https://arxiv.org/abs/1704.07809) (the face detector was trained using the same procedure as the hand detector).\n\n    @article{8765346,\n      author = {Z. {Cao} and G. {Hidalgo Martinez} and T. {Simon} and S. {Wei} and Y. A. 
{Sheikh}},\n      journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},\n      title = {OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},\n      year = {2019}\n    }\n\n    @inproceedings{simon2017hand,\n      author = {Tomas Simon and Hanbyul Joo and Iain Matthews and Yaser Sheikh},\n      booktitle = {CVPR},\n      title = {Hand Keypoint Detection in Single Images using Multiview Bootstrapping},\n      year = {2017}\n    }\n\n    @inproceedings{cao2017realtime,\n      author = {Zhe Cao and Tomas Simon and Shih-En Wei and Yaser Sheikh},\n      booktitle = {CVPR},\n      title = {Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields},\n      year = {2017}\n    }\n\n    @inproceedings{wei2016cpm,\n      author = {Shih-En Wei and Varun Ramakrishna and Takeo Kanade and Yaser Sheikh},\n      booktitle = {CVPR},\n      title = {Convolutional pose machines},\n      year = {2016}\n    }\n\nPaper links:\n- OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields:\n    - [IEEE TPAMI](https://ieeexplore.ieee.org/document/8765346)\n    - [ArXiv](https://arxiv.org/abs/1812.08008)\n- [Hand Keypoint Detection in Single Images using Multiview Bootstrapping](https://arxiv.org/abs/1704.07809)\n- [Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields](https://arxiv.org/abs/1611.08050)\n- [Convolutional Pose Machines](https://arxiv.org/abs/1602.00134)\n\n\n\n## License\nOpenPose is freely available for non-commercial use and may be redistributed under these conditions. Please see the [license](./LICENSE) for further details. Interested in a commercial license? Check this [FlintBox link](https://cmu.flintbox.com/#technologies/b820c21d-8443-4aa2-a49f-8919d93a8740). 
For commercial queries, use the `Contact` section from the [FlintBox link](https://cmu.flintbox.com/#technologies/b820c21d-8443-4aa2-a49f-8919d93a8740) and also send a copy of that message to [Yaser Sheikh](mailto:yaser@cs.cmu.edu).\n","funding_links":[],"categories":["C++","TODO scan for Android support in followings","Popular implementations","🧍 2D Human Pose Estimation","\u003ca name=\"cpp\"\u003e\u003c/a\u003eC++","Pose Estimation","Please find below the links to awesome cheat-sheet and resources:","Getting Started \u003ca name=\"start\"\u003e\u003c/a\u003e","C++ (70)","Computer Vision","Image \u0026 Vision","Portrait\\Pose\\3D Face","Uncategorized","opencv","Model Deployment library","Tools and Applications","List of Most Starred Github Projects related to Deep Learning","GameProgramming","Repos","Tools and Utilities","Libraries and Frameworks"],"sub_categories":["Others","Bottom-Up Methods","[Tools](#tools-1)","Pose Estimation","Machine-Learning/Data Science/AI/DL:","Classification \u0026 Detection \u0026 Tracking","Web Services_Other","Notable Projects","Uncategorized","Caffe \u003ca name=\"caffe\"/\u003e","Surveys","Autopilots"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCMU-Perceptual-Computing-Lab%2Fopenpose","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FCMU-Perceptual-Computing-Lab%2Fopenpose","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCMU-Perceptual-Computing-Lab%2Fopenpose/lists"}