Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ahmed-ai-01/nao_mimc
The Humanoid Robot Pose Mimicry project merges computer vision and robotics. It uses the Mediapipe library to capture human upper body poses from images and transmits them to a NAO humanoid robot. This interactive project explores seamless human-robot interaction through the replication of detected poses.
- Host: GitHub
- URL: https://github.com/ahmed-ai-01/nao_mimc
- Owner: Ahmed-AI-01
- License: MIT
- Created: 2024-01-08T18:12:10.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-06-19T07:49:54.000Z (8 months ago)
- Last Synced: 2024-06-19T16:03:48.588Z (8 months ago)
- Topics: computer-vision, cv2, mediapipe, mimc, nao, naoqi-api, naoqi-python, naoqi-robot, opencv, openpose-estimation, pose-estimation, pyhton, python3, robotics
- Language: Python
- Homepage:
- Size: 10.7 KB
- Stars: 2
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# Robotics Project: Humanoid Robot Pose Mimicry
## Overview
This project involves capturing and processing human upper body pose information using the Mediapipe library and transmitting the relevant joint angles to a humanoid robot (NAO) using the NAOqi framework. The goal is to enable the robot to mimic the detected upper body pose of a human.

## Components
1. **Human Pose Detection (Python 3):**
- Uses the Mediapipe library to detect and extract key landmarks of the upper body from an input image.
- Landmark coordinates are stored in a text file for further use.
2. **Robot Motion Control (Python 2.7):**
- Uses the NAOqi framework to control the motion of a humanoid robot based on the extracted pose information.
- Joint angles are read from the text file and interpolated to mimic the detected upper body pose.

## Requirements
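Bridging Mediapipe's landmark positions to the joint angles the motion component consumes needs only basic vector math, so nothing beyond the list below is required for that step. A minimal, hypothetical sketch of computing the angle at a joint from three landmark positions:

```python
import math

def joint_angle(a, b, c):
    """Angle (radians) at point b, formed by segments b->a and b->c.
    Each point is an (x, y, z) tuple, e.g. a Mediapipe landmark position."""
    v1 = tuple(p - q for p, q in zip(a, b))
    v2 = tuple(p - q for p, q in zip(c, b))
    dot = sum(p * q for p, q in zip(v1, v2))
    norm = math.sqrt(sum(p * p for p in v1)) * math.sqrt(sum(p * p for p in v2))
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, dot / norm)))
```

For example, the elbow angle would use the shoulder, elbow, and wrist landmarks as `a`, `b`, and `c`.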
- OpenCV
- Mediapipe
- NAOqi (Python SDK for controlling NAO humanoid robots)

## Setup
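Because the two components target different interpreters (pose detection on Python 3, motion control on Python 2.7), a small guard at the top of each script can catch mix-ups early. A minimal sketch; the component names here are assumptions, not names from the codebase:

```python
import sys

# Required Python major version per component (per this README's split).
REQUIRED = {"pose_detection": 3, "motion_control": 2}

def interpreter_matches(component):
    """True if the running interpreter's major version suits the component."""
    return sys.version_info[0] == REQUIRED[component]
```

For example, the detection script could start with `assert interpreter_matches("pose_detection")`.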
1. Ensure all required libraries are installed (`cv2`, `mediapipe`, and `naoqi`).
2. Update all file paths in the code to match your environment.
3. Run the pose detection component to capture and store upper body pose information; make sure to use Python 3.
4. Run the robot motion control component to make the humanoid robot mimic the detected upper body pose; make sure to use Python 2.7.

## Documentation
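The documentation referenced in this section covers the Denavit-Hartenberg (DH) model of the NAO robot. For orientation, the standard DH homogeneous transform for a single joint can be sketched generically; this is the textbook form with parameters (θ, d, a, α), not NAO-specific values:

```python
import math

def dh_transform(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform (4x4, as row lists)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]
```

Chaining one such transform per joint, with the robot's actual DH parameters, yields the forward kinematics of a limb.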
For detailed documentation, including the Denavit-Hartenberg (DH) model of the NAO robot, please refer to [email protected]

## Usage
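The paths and IP addresses mentioned in step 1 below are hard-coded in the scripts; they typically look like the placeholders here (all values are assumptions to replace with your own; 9559 is the default NAOqi port):

```python
# Placeholder configuration values -- adjust for your robot and machine.
NAO_IP = "192.168.1.10"           # your robot's IP address (example value)
NAO_PORT = 9559                   # default NAOqi port
LANDMARKS_FILE = "landmarks.txt"  # handoff file between the two scripts (assumed name)
```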
1. Adjust file paths and IP addresses in the code according to your setup.
2. Fine-tune time durations in the robot motion code for smoother movements.

## Note
1. You can make the pose detection capture landmarks from a video or even a live feed.

## Contribution
Feel free to contribute by opening issues or creating pull requests. Your feedback and enhancements are welcome!
---