https://github.com/v-sekai/v-sekai.mediapipe-labeler
- Host: GitHub
- URL: https://github.com/v-sekai/v-sekai.mediapipe-labeler
- Owner: V-Sekai
- License: MIT
- Created: 2023-09-02T21:23:18.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2025-01-10T21:40:08.000Z (4 months ago)
- Last Synced: 2025-03-30T06:31:57.305Z (about 1 month ago)
- Language: Python
- Size: 5.79 MB
- Stars: 2
- Watchers: 4
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Mediapipe Skeleton to COCO JSON
**WORK IN PROGRESS**
This workflow writes the Mediapipe skeleton, including hands, to a COCO-style JSON format. It is built on the Mediapipe framework, which provides a flexible and efficient infrastructure for building multimodal applied machine learning pipelines.
[Now live on Replicate.](https://replicate.com/fire/v-sekai.mediapipe-labeler)
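The hosted model can presumably also be called from Python with the Replicate client. Below is a minimal sketch, assuming the model name from the link above and that the `image_path` input matches the `cog predict` example further down; the exact version hash and the shape of the returned JSON should be checked on the model page.

```python
# pip install replicate; requires REPLICATE_API_TOKEN in the environment.
import replicate

# Model name taken from the Replicate link above (assumption); pin an explicit
# ":<version-hash>" from the model page for reproducible results.
with open("thirdparty/image.jpg", "rb") as image:
    output = replicate.run(
        "fire/v-sekai.mediapipe-labeler",
        input={"image_path": image},
    )

# Expected (assumption): a COCO-style JSON document with pose and hand keypoints.
print(output)
```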
**Model Card**: [Model Card Blendshape V2.pdf](https://storage.googleapis.com/mediapipe-assets/Model%20Card%20Blendshape%20V2.pdf)
**COCO Keypoint Format**: https://roboflow.com/formats/coco-keypoint
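To illustrate the conversion the workflow performs, here is a minimal sketch using MediaPipe's legacy Holistic solution: it runs on a single image and packs the detected pose landmarks into COCO-style `[x, y, visibility]` triplets. This is not the repository's actual predictor; a faithful converter would also remap MediaPipe's 33 pose landmarks (plus 21 per hand) onto the skeleton defined in its COCO `categories` entry.

```python
import json

import cv2
import mediapipe as mp


def image_to_coco_keypoints(image_path: str) -> dict:
    """Sketch: MediaPipe Holistic landmarks -> COCO-style keypoints JSON."""
    image = cv2.imread(image_path)
    height, width = image.shape[:2]
    with mp.solutions.holistic.Holistic(static_image_mode=True) as holistic:
        results = holistic.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

    keypoints = []
    if results.pose_landmarks:
        for lm in results.pose_landmarks.landmark:
            # MediaPipe returns normalized coordinates; COCO expects pixels.
            # COCO visibility: 2 = labeled and visible, 1 = labeled but occluded.
            keypoints += [lm.x * width, lm.y * height, 2 if lm.visibility > 0.5 else 1]
    # Hand landmarks (results.left_hand_landmarks / right_hand_landmarks)
    # would be appended the same way in the full workflow.

    return {
        "images": [{"id": 1, "file_name": image_path, "width": width, "height": height}],
        "annotations": [{
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "keypoints": keypoints,
            "num_keypoints": len(keypoints) // 3,
        }],
        "categories": [{"id": 1, "name": "person", "supercategory": "person"}],
    }


print(json.dumps(image_to_coco_keypoints("thirdparty/image.jpg"), indent=2))
```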
# Install
```zsh
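# Install Replicate's Cog CLI, which builds and runs the predictor container.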
sudo curl -o /usr/local/bin/cog -L https://github.com/replicate/cog/releases/latest/download/cog_`uname -s`_`uname -m`
sudo chmod +x /usr/local/bin/cog
# Generate COCO-style JSON with Mediapipe skeleton and hands.
cog predict -i image_path=@./thirdparty/image.jpg
```

## Contributors ✨
Thanks go to these wonderful people ([emoji key](https://allcontributors.org/docs/en/emoji-key)):
This project follows the [all-contributors](https://github.com/all-contributors/all-contributors) specification. Contributions of any kind are welcome!