Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/monzerdev/sign-language-detection
Real-time sign language detection using CNN and MediaPipe for hand landmark recognition. Implements deep learning models to classify sign language gestures from live video input.
- Host: GitHub
- URL: https://github.com/monzerdev/sign-language-detection
- Owner: MonzerDev
- Created: 2024-08-14T22:47:11.000Z (3 months ago)
- Default Branch: master
- Last Pushed: 2024-08-18T10:21:03.000Z (3 months ago)
- Last Synced: 2024-10-10T08:01:07.521Z (about 1 month ago)
- Topics: cnn, computer-vision, deep-learning, machine-learning, mediapipe, opencv, pandas, python, pytorch, real-time, signlanguagedetection
- Language: Python
- Homepage:
- Size: 7.36 MB
- Stars: 2
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
Real-Time Sign Language Detection
This project implements a real-time sign language detection system using a Convolutional Neural Network (CNN) and MediaPipe for hand landmark detection. The system captures live video input, processes hand gestures, and classifies them into corresponding sign language alphabets.
Project Structure:
- `CNNModel.py`: Defines the Convolutional Neural Network (CNN) architecture used for classifying hand gestures.
- `handLandMarks.py`: Handles the detection of hand landmarks using MediaPipe and processes them for use by the CNN model.
- `mediapipeHandDetection.py`: Integrates MediaPipe to perform real-time hand detection through the webcam.
- `realTime.py`: The main script that ties everything together, using the CNN model and MediaPipe for real-time sign language detection.
- `training.py`: Script used for training the CNN model on a dataset of hand gestures.
- `testCNN.py`: Script for testing the performance of the trained CNN model on a test dataset.
- `CNN_model_alphabet_SIBI.pth`: Pre-trained CNN model weights used for classification.

How to Run the Project:
1. Install Dependencies
Make sure you have Python installed on your system. You can install the required Python packages using pip:
```shell
pip install -r requirements.txt
```
If you don't have a `requirements.txt` file, you can manually install the necessary packages:
```shell
pip install opencv-python mediapipe torch numpy pandas
```
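Equivalently, a minimal `requirements.txt` covering just the packages listed above would be:

```text
opencv-python
mediapipe
torch
numpy
pandas
```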
2. Running Real-Time Detection
To start the real-time sign language detection, run the following command:
```shell
python realTime.py
```
This will activate your webcam and begin detecting and classifying hand gestures in real time.
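The loop in `realTime.py` presumably looks something like the sketch below. This is not the repository's actual code: the helper name `landmarks_to_features`, the wrist-centered normalization, the model's flat 63-feature input, and the class-index-to-letter mapping are all assumptions.

```python
import numpy as np

def landmarks_to_features(landmarks):
    """Flatten 21 (x, y, z) hand landmarks into a 63-element feature vector,
    translated so the wrist (landmark 0) sits at the origin.
    (Assumed normalization; the project's actual preprocessing may differ.)"""
    pts = np.asarray(landmarks, dtype=np.float32).reshape(21, 3)
    pts = pts - pts[0]  # make the features translation-invariant
    return pts.flatten()

def run_realtime(model):
    # Lazy imports so the pure helper above stays usable without a webcam.
    import cv2
    import mediapipe as mp
    import torch

    hands = mp.solutions.hands.Hands(max_num_hands=1)
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lms = results.multi_hand_landmarks[0].landmark
            feats = landmarks_to_features([(p.x, p.y, p.z) for p in lms])
            with torch.no_grad():
                pred = model(torch.from_numpy(feats).unsqueeze(0)).argmax(1).item()
            letter = chr(ord("A") + pred)  # assumes classes 0..25 map to A..Z
            cv2.putText(frame, letter, (30, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("Sign Language Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()
```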
3. Training the Model (Optional)
If you want to train the CNN model from scratch, you can run:
```shell
python training.py
```
This script will use a dataset of hand gestures to train the model.
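A training loop for such a classifier could be sketched as follows. The layer sizes, 63-feature input, and 26 output classes are assumptions (the real architecture lives in `CNNModel.py` and is a CNN, not the tiny fully connected stand-in used here), and the random tensors merely stand in for the gesture dataset.

```python
import torch
import torch.nn as nn

class TinySignNet(nn.Module):
    """Stand-in for the architecture defined in CNNModel.py."""
    def __init__(self, n_features=63, n_classes=26):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def train(model, features, labels, epochs=20, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    losses = []
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(features), labels)
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return losses

torch.manual_seed(0)
X = torch.randn(256, 63)           # placeholder for the gesture dataset
y = torch.randint(0, 26, (256,))   # placeholder A-Z labels
model = TinySignNet()
losses = train(model, X, y)
# The repository ships its trained weights under this filename.
torch.save(model.state_dict(), "CNN_model_alphabet_SIBI.pth")
print(f"loss {losses[0]:.3f} -> {losses[-1]:.3f}")
```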
4. Testing the Model (Optional)
To test the performance of the trained CNN model on a test dataset, you can run:
```shell
python testCNN.py
```
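An evaluation step like the one in `testCNN.py` amounts to computing top-1 accuracy on held-out data; a minimal sketch, again assuming a flat 63-feature input and 26 classes:

```python
import torch
import torch.nn as nn

def evaluate(model, features, labels):
    """Top-1 accuracy of the classifier on a held-out test set."""
    model.eval()
    with torch.no_grad():
        preds = model(features).argmax(dim=1)
    return (preds == labels).float().mean().item()

# Placeholder model and random "test set"; the real script would instead load
# the shipped weights via model.load_state_dict(torch.load(...)).
torch.manual_seed(0)
model = nn.Linear(63, 26)
X_test = torch.randn(100, 63)
y_test = torch.randint(0, 26, (100,))
acc = evaluate(model, X_test, y_test)
print(f"test accuracy: {acc:.1%}")
```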
How It Works:
1. Hand Landmark Detection:
   - The system uses MediaPipe to detect and track hand landmarks in real time from the webcam feed.
2. Feature Extraction:
   - The detected hand landmarks are processed and used as input features for the CNN model.
3. Gesture Classification:
   - The CNN model classifies the input features into one of the predefined sign language alphabets (A-Z).
4. Real-Time Feedback:
   - The classified gesture is displayed in real time, providing immediate feedback to the user.

Requirements:
- Python 3.x
- OpenCV
- MediaPipe
- PyTorch
- Pandas

Contributing:
Contributions are welcome! If you have any ideas, suggestions, or improvements, feel free to open an issue or submit a pull request.
Contact:
For any questions or suggestions, please feel free to contact me at [[email protected]].