https://github.com/rhythmusbyte/sign-language-to-speech
Real-time ASL interpreter using OpenCV and TensorFlow/Keras for hand gesture recognition. Features custom hand tracking, image preprocessing, and gesture classification to translate American Sign Language into text and speech output. Built with accessibility in mind.
- Host: GitHub
- URL: https://github.com/rhythmusbyte/sign-language-to-speech
- Owner: RhythmusByte
- License: bsd-3-clause
- Created: 2025-02-06T04:32:57.000Z (8 months ago)
- Default Branch: main
- Last Pushed: 2025-03-30T08:31:01.000Z (6 months ago)
- Last Synced: 2025-04-04T04:46:39.893Z (6 months ago)
- Topics: asl, assistive-technology, cv2, hand-tracking, handdetection, image-classification, image-processing, keras, machine-learning, mediapipe, numpy, opencv, python, real-time-detection, real-time-recognition, sign-language, tensorflow
- Language: Python
- Homepage:
- Size: 1.19 GB
- Stars: 5
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
README
[GitHub Repository](https://github.com/RhythmusByte/Sign-Language-to-Speech)
[BSD-3-Clause License](https://opensource.org/licenses/BSD-3-Clause)
---
## 🎯 Project Overview
**Sign Language to Speech Conversion** is a real-time **American Sign Language (ASL) recognition system** powered by **computer vision** and **deep learning**. It translates ASL hand gestures into **both text and speech output**, enhancing accessibility and communication.

📖 For installation, architecture, usage, and contribution guidelines, visit the **[Project Wiki](https://github.com/RhythmusByte/Sign-Language-to-Speech/wiki)**.
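As a minimal sketch of the capture-and-track loop such a system runs: the example below assumes MediaPipe for hand detection (MediaPipe is listed in the repo topics); the camera index, window name, and confidence threshold are illustrative choices, not values taken from Application.py.

```python
# Hedged sketch: real-time hand tracking with OpenCV + MediaPipe.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
drawer = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # assumed default webcam
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                drawer.draw_landmarks(frame, hand, mp_hands.HAND_CONNECTIONS)
        cv2.imshow("ASL interpreter", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```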
---
## ✨ Key Features
- 🔮 **Real-time** hand detection & gesture tracking
- 🧠 **CNN-based** classification using TensorFlow/Keras (see the sketch after this list)
- 🔊 Simultaneous **text & speech** output
- 📢 Designed for **accessibility & inclusivity**
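The classification and speech bullets combine roughly as sketched below, assuming the shipped trainedModel.h5 is a Keras image classifier over A–Z gestures and using pyttsx3 for offline text-to-speech; the 64×64 RGB input shape, the label set, and the TTS library are assumptions, not details read from the repository.

```python
# Hedged sketch: classify a cropped hand image and voice the result.
import cv2
import numpy as np
import pyttsx3
from tensorflow.keras.models import load_model

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # assumed A-Z classes
model = load_model("trainedModel.h5")  # model file shipped in the repo
engine = pyttsx3.init()

def classify_and_speak(hand_roi_bgr):
    """Preprocess a cropped hand region, predict a letter, and speak it."""
    img = cv2.resize(hand_roi_bgr, (64, 64)).astype("float32") / 255.0  # assumed input size
    probs = model.predict(img[np.newaxis, ...], verbose=0)[0]
    letter = LABELS[int(np.argmax(probs))]
    engine.say(letter)
    engine.runAndWait()
    return letter
```

---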
## 📊 System Architecture
*Level 0, Level 1, and Level 2 Data Flow Diagrams illustrate the system (diagram images not reproduced here).*

For details on **Data Flow Diagrams (DFD), Use Case Diagrams, and System Design**, check the **[Architecture Section](https://github.com/RhythmusByte/Sign-Language-to-Speech/wiki/System-Architecture-&-Design)** in the Wiki.
---
## 🛠 Tech Stack
### **Core Technologies**


### **Supporting Libraries**


---
## 📂 Repository Structure
```text
Sign-Language-to-Speech/
├── data/
├── Application.py
├── trainedModel.h5
├── requirements.txt
└── white.jpg
```

For a **detailed breakdown of modules and system design**, refer to the **[Project Documentation](https://github.com/RhythmusByte/Sign-Language-to-Speech/wiki)**.
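As one illustration of how the `data/` directory might be populated, the hedged script below saves labeled webcam frames for later training. The per-letter folder layout and the `s`/`q` key bindings are assumptions about the workflow, not documented behavior.

```python
# Hedged sketch: collect labeled gesture frames into data/<label>/.
import os
import cv2

label = "A"  # gesture currently being recorded (hypothetical label)
out_dir = os.path.join("data", label)
os.makedirs(out_dir, exist_ok=True)

cap = cv2.VideoCapture(0)
count = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    cv2.imshow("collect", frame)
    key = cv2.waitKey(1) & 0xFF
    if key == ord("s"):  # press 's' to save the current frame
        cv2.imwrite(os.path.join(out_dir, f"{label}_{count}.jpg"), frame)
        count += 1
    elif key == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```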
---
## 📢 Contributing
We welcome contributions! Before submitting a pull request, please check out the **[Contributing Guide](https://github.com/RhythmusByte/Sign-Language-to-Speech/wiki/Contributions)**.
---
## 📜 License
This project is licensed under the **BSD 3-Clause License**. See the full details in the [LICENSE](LICENSE) file.
---
📌 **For all documentation, including installation, setup, and FAQs, visit the** 👉 **[Project Wiki](https://github.com/RhythmusByte/Sign-Language-to-Speech/wiki)**.