https://github.com/prasukj7-arch/visual_text
The project uses computer vision and machine learning to translate sign language gestures into text: MediaPipe detects hand landmarks, and a Random Forest classifier predicts characters in real time. It recognizes all 26 letters of the English alphabet and provides an intuitive interface to improve communication for sign language users.
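A minimal sketch of that pipeline, assuming the standard MediaPipe Hands and scikit-learn APIs; the placeholder training data, webcam loop, and window name below are illustrative, not the repository's actual code:

```python
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder model trained on random 42-dimensional vectors (x, y for each of
# the 21 hand landmarks) labelled 'A'..'Z', so the sketch runs end to end.
# The real project would load its own trained classifier instead.
rng = np.random.default_rng(0)
clf = RandomForestClassifier(n_estimators=50)
clf.fit(rng.random((260, 42)), np.repeat(list("ABCDEFGHIJKLMNOPQRSTUVWXYZ"), 10))

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # Flatten the 21 normalized (x, y) landmarks into one feature vector.
        features = np.array([[p.x, p.y] for p in lm]).flatten().reshape(1, -1)
        letter = clf.predict(features)[0]  # predicted alphabet character
        cv2.putText(frame, str(letter), (30, 60),
                    cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
    cv2.imshow("sign-to-text", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```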
- Host: GitHub
- URL: https://github.com/prasukj7-arch/visual_text
- Owner: Prasukj7-arch
- Created: 2024-12-21T15:39:36.000Z (10 months ago)
- Default Branch: master
- Last Pushed: 2024-12-26T16:56:52.000Z (9 months ago)
- Last Synced: 2025-03-25T05:12:39.551Z (7 months ago)
- Topics: computer-vision, english-alphabet-recognition, hand-gesture-recognition, human-computer-interaction, machine-learning, mediapipe, random-forest-classifier, real-time-prediction, sign-language-translation
- Language: Python
- Homepage:
- Size: 81.4 MB
- Stars: 2
- Watchers: 1
- Forks: 0
- Open Issues: 0