https://github.com/m4yh3m-dev/6-dof-voice-controlled-robotic-arm
An AI-powered, voice-controlled 6-DOF robotic arm built on Raspberry Pi 5. It features YOLOv8 object detection, smooth servo motion, real-time camera preview, custom motion presets, and hands-free operation using offline speech recognition. Designed for precision manipulation and interactive robotics applications with a 3D printable frame.
- Host: GitHub
- URL: https://github.com/m4yh3m-dev/6-dof-voice-controlled-robotic-arm
- Owner: M4YH3M-DEV
- License: apache-2.0
- Created: 2025-04-17T10:09:35.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2025-04-26T19:31:15.000Z (6 months ago)
- Last Synced: 2025-05-17T14:11:30.311Z (5 months ago)
- Topics: 6-dof, comupter-vision, opencv, robotic-arm, robotic-arm-3d-model, robotics, yolov8
- Language: Python
- Homepage:
- Size: 1.88 MB
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# 6-DOF Voice-Controlled Robotic Arm with Vision Integration
A highly interactive 6-DOF robotic arm controlled via voice commands and enhanced with object detection using a camera and YOLOv8. This project integrates speech recognition, computer vision, and robotic control using Raspberry Pi 5 and Python, aiming to create a smart and intuitive robotic system for real-world tasks.
## Features

**Voice Command Integration**
- Uses Vosk for offline speech recognition.
- Custom command parsing: `"pick up red and put over blue"`, `"rotate red 90 degrees"`, `"place red parallel to blue"`, and more.
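As a rough illustration of the offline recognition loop, the sketch below streams microphone audio into Vosk and prints each recognized phrase. The model path, sample rate, and buffer sizes are assumptions for illustration, not values taken from `main.py`.

```python
import json

import pyaudio
from vosk import KaldiRecognizer, Model

MODEL_PATH = "vosk-model-small-en-us-0.15"  # assumed location of the unpacked model

model = Model(MODEL_PATH)
recognizer = KaldiRecognizer(model, 16000)

# 16 kHz mono PCM is the format the small English Vosk model expects.
audio = pyaudio.PyAudio()
stream = audio.open(format=pyaudio.paInt16, channels=1, rate=16000,
                    input=True, frames_per_buffer=8000)

while True:
    data = stream.read(4000, exception_on_overflow=False)
    if recognizer.AcceptWaveform(data):
        text = json.loads(recognizer.Result()).get("text", "")
        if text:
            print("Heard:", text)  # hand the phrase to the command parser here
```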
**Live Camera Preview**
- Real-time video feed using `cv2.imshow()` to monitor object positions and arm movements.
- Runs on a separate thread to avoid blocking other operations.
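A minimal sketch of such a threaded preview, assuming camera index 0; the project's actual thread layout may differ, and some OpenCV GUI backends prefer `imshow()` on the main thread.

```python
import threading
import time

import cv2

def preview_loop(stop_event):
    """Show the camera feed in a window until stop_event is set."""
    cap = cv2.VideoCapture(0)  # assumed camera index
    while not stop_event.is_set():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("Arm Camera", frame)
        cv2.waitKey(1)  # keeps the preview window responsive
    cap.release()
    cv2.destroyAllWindows()

stop_event = threading.Event()
threading.Thread(target=preview_loop, args=(stop_event,), daemon=True).start()

time.sleep(10)     # the main program would do other work here
stop_event.set()   # ask the preview thread to shut down
```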
**YOLOv8 Object Detection**
- Detects and localizes colored cubes (e.g., red, blue) using YOLOv8 and OpenCV.
- Extracts object positions to guide arm movements intelligently.
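The sketch below shows one way detections could be reduced to pixel coordinates with the `ultralytics` API; the weight file `yolov8n.pt` and the returned class names are placeholders, since the repository presumably ships or trains a cube-specific model.

```python
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # placeholder weights; a cube-trained model is assumed

def detect_centers(frame):
    """Return a {label: (cx, cy)} mapping of detected object centers in pixels."""
    result = model(frame, verbose=False)[0]
    centers = {}
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        label = model.names[int(box.cls[0])]
        centers[label] = ((x1 + x2) / 2, (y1 + y2) / 2)
    return centers

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(detect_centers(frame))
cap.release()
```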
**Smooth & Precise Servo Control**
- Controlled via `adafruit_servokit`, ensuring smooth transitions for all 6 joints.
- Movement functions are incremental and smooth for precise control.
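One way to get incremental motion with `adafruit-circuitpython-servokit` is to step a joint a degree at a time, as in this sketch; the channel count, step size, and delay are assumed values, and running it requires a PCA9685 servo driver on the I2C bus.

```python
import time

from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)  # assumes a 16-channel PCA9685 board

def move_smoothly(channel, target, step=1.0, delay=0.02):
    """Step one servo toward `target` in small increments instead of jumping."""
    current = kit.servo[channel].angle or 90  # fall back if the angle is unknown
    direction = 1 if target > current else -1
    while abs(target - current) > step:
        current += direction * step
        kit.servo[channel].angle = current
        time.sleep(delay)
    kit.servo[channel].angle = target

move_smoothly(0, 45)  # example: sweep the base joint to 45 degrees
```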
**Motion Presets with JSON Config**
- Save and load custom arm positions using `"save preset <name>"` and `"load preset <name>"` voice commands.
- All presets are saved in a `config.json` file for persistence.
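A minimal sketch of JSON-backed presets, assuming a preset is simply a list of six joint angles and that `config.json` maps preset names to those angles; the real file layout may differ.

```python
import json
import os

CONFIG_PATH = "config.json"

def save_preset(name, angles):
    """Store the given joint angles under a preset name."""
    presets = {}
    if os.path.exists(CONFIG_PATH):
        with open(CONFIG_PATH) as f:
            presets = json.load(f)
    presets[name] = angles
    with open(CONFIG_PATH, "w") as f:
        json.dump(presets, f, indent=2)

def load_preset(name):
    """Return the stored joint angles for a preset name."""
    with open(CONFIG_PATH) as f:
        return json.load(f)[name]

save_preset("home", [90, 90, 90, 90, 90, 90])
print(load_preset("home"))
```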
**Pause & Resume (Safety Override)**
- Voice-controlled `"stop"` and `"resume"` commands to pause/resume all arm actions.
- Prevents unexpected movements for safety and debugging.
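A common way to implement such an override is a shared `threading.Event` that every motion loop waits on, sketched below; this illustrates the idea rather than the repository's exact mechanism.

```python
import threading
import time

run_flag = threading.Event()
run_flag.set()  # set = running; cleared on "stop", set again on "resume"

def motion_step(step_index):
    """Perform one small servo movement; blocks while the arm is paused."""
    run_flag.wait()
    print(f"executing step {step_index}")
    time.sleep(0.05)

def handle_command(text):
    if "stop" in text:
        run_flag.clear()   # pause all motion loops
    elif "resume" in text:
        run_flag.set()     # let them continue

for i in range(3):
    motion_step(i)
```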
**Multithreaded Design**
- Camera feed, servo control, and voice recognition run concurrently using Python threads.
**Text-to-Speech Feedback**
- The system speaks back using `pyttsx3`, confirming commands and task status for full interactivity.
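Spoken feedback with `pyttsx3` takes only a few lines; the speaking rate below is an arbitrary example value.

```python
import pyttsx3

engine = pyttsx3.init()          # uses the espeak backend on Raspberry Pi OS
engine.setProperty("rate", 150)  # example speaking rate

def speak(message):
    """Speak a confirmation message and block until it finishes."""
    engine.say(message)
    engine.runAndWait()

speak("Picked up the red cube")
```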
---

## System Requirements
- Raspberry Pi 5 (or any Linux-capable SBC with GPIO)
- Python 3.7+
- USB Camera
- Microphone
- 6-DOF Robotic Arm (PWM-compatible servos)

---
## 3D Printed Robotic Arm Model
You can download the STL file for the robotic arm design below:
[Download 3D Model (.stl)](Robotic%20Arm%203D%20Model%20v4.stl)
This model can be 3D printed and assembled for use with the Raspberry Pi-controlled voice-command system.
---
## Setup Instructions
1. **Clone the Repository**

   ```bash
   git clone https://github.com/M4YH3M-DEV/6-DOF-Voice-Controlled-Robotic-Arm.git
   cd 6-DOF-Voice-Controlled-Robotic-Arm
   ```

2. **Install Dependencies**

   ```bash
   sudo apt-get update
   sudo apt-get install python3-pyaudio python3-pip portaudio19-dev espeak
   pip3 install -r requirements.txt
   ```

3. **Download Vosk Model**

   ```bash
   wget https://alphacephei.com/vosk/models/vosk-model-small-en-us-0.15.zip
   unzip vosk-model-small-en-us-0.15.zip
   ```

4. **Run the Project**

   ```bash
   python3 main.py
   ```

---
## Supported Voice Commands
| Command Example | Action |
|--------------------------------------------|------------------------------|
| `"pick up red and put over blue"` | Pick and place cubes |
| `"rotate red 90 degrees"` | Rotate a detected cube |
| `"place red parallel to blue"` | Align cubes side-by-side |
| `"save preset "` | Save current position |
| `"load preset "` | Load saved position |
| `"stop"` | Pause arm operations |
| `"resume"` | Resume arm operations |
| `"exit"` | Shutdown the system |---
---

## File Structure

    6-DOF-Voice-Controlled-Robotic-Arm/
    ├── config.json                      # Preset storage
    ├── vosk-model-small-en-us-0.15/     # Vosk voice recognition model
    ├── main.py                          # Core logic and loop
    ├── requirements.txt                 # Python dependencies
    └── README.md                        # You're reading it!
---
## Dependencies
- `vosk`, `pyaudio`, `pyttsx3` – Voice recognition & TTS
- `ultralytics` – YOLOv8 object detection
- `opencv-python` – Camera input & processing
- `adafruit-circuitpython-servokit` – Servo control
- `RPi.GPIO` – Raspberry Pi GPIO handling

---
## Coming Soon
- Hand gesture control integration
- Dynamic environment calibration
- Autonomous operation mode (no voice needed)
- Mobile app for manual override

---
## Author
Made by [M4YH3M-DEV](https://github.com/M4YH3M-DEV)
---
## License
Licensed under the [Apache License 2.0](LICENSE).