
# 6-DOF Voice-Controlled Robotic Arm with Vision Integration

![Banner](./6DOF-Banner(1)(1).png)

A highly interactive 6-DOF robotic arm controlled via voice commands and enhanced with object detection using a camera and YOLOv8. This project integrates speech recognition, computer vision, and robotic control using Raspberry Pi 5 and Python, aiming to create a smart and intuitive robotic system for real-world tasks.

## ✨ Features

🗣️ **Voice Command Integration**
- Uses Vosk for offline speech recognition.
- Custom command parsing: `"pick up red and put over blue"`, `"rotate red 90 degrees"`, `"place red parallel to blue"`, and more.
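
  As a rough illustration of the recognition loop (a minimal sketch assuming the small English Vosk model from the setup steps below sits next to `main.py`; names are not taken from the project code):

  ```python
  import json

  import pyaudio
  from vosk import Model, KaldiRecognizer

  # Offline model downloaded in the setup steps below (path is an assumption).
  model = Model("vosk-model-small-en-us-0.15")
  recognizer = KaldiRecognizer(model, 16000)

  # 16 kHz mono microphone stream, matching the recognizer's sample rate.
  mic = pyaudio.PyAudio()
  stream = mic.open(format=pyaudio.paInt16, channels=1, rate=16000,
                    input=True, frames_per_buffer=8000)

  while True:
      data = stream.read(4000, exception_on_overflow=False)
      if recognizer.AcceptWaveform(data):
          text = json.loads(recognizer.Result()).get("text", "")
          if text:
              print("Heard:", text)  # hand the phrase to the command parser
  ```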

📷 **Live Camera Preview**
- Real-time video feed using `cv2.imshow()` to monitor object positions and arm movements.
- Runs on a separate thread to avoid blocking other operations.
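
  A minimal sketch of such a preview thread, assuming the USB camera enumerates as device 0 (the function name is illustrative):

  ```python
  import threading

  import cv2

  def camera_preview(stop_event):
      """Show the live feed until the stop event is set."""
      cap = cv2.VideoCapture(0)  # USB camera index is an assumption
      while not stop_event.is_set():
          ok, frame = cap.read()
          if not ok:
              break
          cv2.imshow("Arm Camera", frame)
          cv2.waitKey(1)  # keeps the preview window responsive
      cap.release()
      cv2.destroyAllWindows()

  stop_preview = threading.Event()
  threading.Thread(target=camera_preview, args=(stop_preview,), daemon=True).start()
  ```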

🧠 **YOLOv8 Object Detection**
- Detects and localizes colored cubes (e.g., red, blue) using YOLOv8 and OpenCV.
- Extracts object positions to guide arm movements intelligently.
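
  One way to turn detections into pixel coordinates with the `ultralytics` API (the weight file and labels here are assumptions, not the project's trained model):

  ```python
  from ultralytics import YOLO

  model = YOLO("yolov8n.pt")  # placeholder weights; a custom cube model would be loaded here

  def locate_objects(frame):
      """Return a {label: (cx, cy)} dict of detection centre points."""
      result = model(frame, verbose=False)[0]
      centres = {}
      for box in result.boxes:
          label = result.names[int(box.cls[0])]
          x1, y1, x2, y2 = box.xyxy[0].tolist()
          centres[label] = ((x1 + x2) / 2, (y1 + y2) / 2)
      return centres
  ```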

🎯 **Smooth & Precise Servo Control**
- Servos are driven through `adafruit_servokit`, giving smooth transitions across all 6 joints.
- Movement functions step each joint incrementally toward its target for precise control.
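
  A sketch of what an incremental move can look like with `adafruit_servokit` (channel count, step size, and delay are assumptions):

  ```python
  import time

  from adafruit_servokit import ServoKit

  kit = ServoKit(channels=16)  # 16-channel PWM driver board assumed

  def move_smooth(channel, target, step=1.0, delay=0.02):
      """Step one servo toward the target angle instead of jumping to it."""
      current = kit.servo[channel].angle
      if current is None:          # angle is unknown until first written
          current = target
      direction = 1 if target >= current else -1
      while abs(target - current) > step:
          current += direction * step
          kit.servo[channel].angle = current
          time.sleep(delay)
      kit.servo[channel].angle = target
  ```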

💾 **Motion Presets with JSON Config**
- Save and load custom arm positions using the `"save preset <name>"` and `"load preset <name>"` voice commands.
- All presets are saved in a `config.json` file for persistence.
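
  Persistence can be as simple as a name-to-angles mapping in `config.json`; the key layout below is a guess, not the project's actual schema:

  ```python
  import json
  import os

  CONFIG_FILE = "config.json"

  def save_preset(name, angles):
      """Store a list of six joint angles under the given preset name."""
      presets = {}
      if os.path.exists(CONFIG_FILE):
          with open(CONFIG_FILE) as f:
              presets = json.load(f)
      presets[name] = angles
      with open(CONFIG_FILE, "w") as f:
          json.dump(presets, f, indent=2)

  def load_preset(name):
      """Return the stored angles for a preset, or None if it doesn't exist."""
      if not os.path.exists(CONFIG_FILE):
          return None
      with open(CONFIG_FILE) as f:
          return json.load(f).get(name)
  ```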

🛑 **Pause & Resume (Safety Override)**
- Voice-controlled `"stop"` and `"resume"` commands to pause/resume all arm actions.
- Prevents unexpected movements for safety and debugging.
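
  A common way to implement this kind of override is a shared `threading.Event` that every movement loop checks between steps; this is a sketch, not necessarily how `main.py` does it:

  ```python
  import threading
  import time

  paused = threading.Event()

  def on_voice_command(text):
      if "stop" in text:
          paused.set()      # motion loops see this and hold position
      elif "resume" in text:
          paused.clear()

  def wait_if_paused():
      """Call between servo steps so a 'stop' takes effect mid-motion."""
      while paused.is_set():
          time.sleep(0.1)
  ```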

🧵 **Multithreaded Design**
- Camera feed, servo control, and voice recognition run concurrently using Python threads.
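
  Schematically, the workers can be started as daemon threads so they exit with the main program (the loop bodies below are placeholders for the real camera and voice loops):

  ```python
  import threading
  import time

  def camera_loop():   # placeholder: the real loop shows the cv2 preview
      while True:
          time.sleep(1)

  def voice_loop():    # placeholder: the real loop feeds microphone audio to Vosk
      while True:
          time.sleep(1)

  for worker in (camera_loop, voice_loop):
      threading.Thread(target=worker, daemon=True).start()

  # The main thread stays free for servo control and command dispatch.
  ```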

🔊 **Text-to-Speech Feedback**
- The system speaks back using `pyttsx3`, confirming commands and task status for full interactivity.
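
  Spoken feedback with `pyttsx3` takes only a few lines; on the Pi it is backed by the `espeak` package installed in the setup steps (the rate value is an assumption):

  ```python
  import pyttsx3

  engine = pyttsx3.init()          # uses espeak on Raspberry Pi OS
  engine.setProperty("rate", 150)  # assumed speaking rate

  def speak(text):
      engine.say(text)
      engine.runAndWait()

  speak("Picking up the red cube")
  ```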

---
## 🧠 System Requirements

- Raspberry Pi 5 (or any Linux-capable SBC with GPIO)
- Python 3.7+
- USB Camera
- Microphone
- 6-DOF Robotic Arm (PWM-compatible servos)

---

## 🧩 3D Printed Robotic Arm Model

You can download the STL file for the robotic arm design below:

[🧩 Download 3D Model (.stl)](Robotic%20Arm%203D%20Model%20v4.stl)

This model can be 3D printed and assembled for use with the Raspberry Pi-controlled voice-command system.

---

## 🛠️ Setup Instructions

1. **Clone the Repository**
```bash
git clone https://github.com/M4YH3M-DEV/6-DOF-Voice-Controlled-Robotic-Arm.git
cd 6-DOF-Voice-Controlled-Robotic-Arm
```

2. **Install Dependencies**
```bash
sudo apt-get update
sudo apt-get install python3-pyaudio python3-pip portaudio19-dev espeak
pip3 install -r requirements.txt
```

3. **Download Vosk Model**
```bash
wget https://alphacephei.com/vosk/models/vosk-model-small-en-us-0.15.zip
unzip vosk-model-small-en-us-0.15.zip
```

4. **Run the Project**
```bash
python3 main.py
```

---

## 🔤 Supported Voice Commands

| Command Example | Action |
|--------------------------------------------|------------------------------|
| `"pick up red and put over blue"` | Pick and place cubes |
| `"rotate red 90 degrees"` | Rotate a detected cube |
| `"place red parallel to blue"` | Align cubes side-by-side |
| `"save preset "` | Save current position |
| `"load preset "` | Load saved position |
| `"stop"` | Pause arm operations |
| `"resume"` | Resume arm operations |
| `"exit"` | Shutdown the system |

---

## 📁 File Structure

```
6-DOF-Voice-Controlled-Robotic-Arm/
├── config.json                    # Preset storage
├── vosk-model-small-en-us-0.15/   # Vosk voice recognition model
├── main.py                        # Core logic and loop
├── requirements.txt               # Python dependencies
└── README.md                      # You're reading it!
```

---

## 📦 Dependencies

- `vosk`, `pyaudio`, `pyttsx3` – Voice recognition & TTS
- `ultralytics` – YOLOv8 object detection
- `opencv-python` – Camera input & processing
- `adafruit-circuitpython-servokit` – Servo control
- `RPi.GPIO` – Raspberry Pi GPIO handling

---

## 🧪 Coming Soon

- Hand gesture control integration
- Dynamic environment calibration
- Autonomous operation mode (no voice needed)
- Mobile app for manual override

---

## 👨‍💻 Author

Made with 💡 && 🧠 by [M4YH3M-DEV](https://github.com/M4YH3M-DEV)

---

## 📜 License

Licensed under the [Apache License 2.0](LICENSE).