:v: :ok_hand: :fist: :camera: Sign Language Recognition using Python
- Host: GitHub
- URL: https://github.com/anmol-singh-jaggi/sign-language-recognition
- Owner: Anmol-Singh-Jaggi
- License: mit
- Created: 2016-06-21T16:07:37.000Z (about 9 years ago)
- Default Branch: master
- Last Pushed: 2021-10-12T22:59:31.000Z (over 3 years ago)
- Last Synced: 2025-04-14T10:04:13.986Z (2 months ago)
- Topics: image-processing, logistic-regression, machine-learning, nearest-neighbor, opencv, python, sign-language, support-vector-machine
- Language: Python
- Size: 1.71 MB
- Stars: 132
- Watchers: 6
- Forks: 42
- Open Issues: 3
Metadata Files:
- Readme: README.md
- License: LICENSE
# Sign Language Recognition

[Say Thanks!](https://saythanks.io/to/Anmol-Singh-Jaggi)

Recognize [American Sign Language (ASL)](https://en.wikipedia.org/wiki/American_Sign_Language) using Machine Learning.
Currently, the following algorithms are supported:
- [K-Nearest-Neighbours](https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm)
- [Logistic Regression](https://en.wikipedia.org/wiki/Logistic_regression)
- [Support Vector Machines](https://en.wikipedia.org/wiki/Support_vector_machine)

The [training images](https://drive.google.com/drive/folders/0Bw239KLrN7zoNkU5elZMRkc4TU0?resourcekey=0-mkP6SVgwEJFmIYd2xotVug) were extracted from a video filmed at `640x480` resolution with a smartphone camera.
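For reference, all three algorithms have standard implementations. The snippet below is a minimal sketch of how they might be instantiated with scikit-learn (an assumption; the repository's own training code may be organized differently), using flattened image pixels as features:

```python
# Minimal sketch: the three supported classifiers as scikit-learn
# estimators (an assumption; the repo's internals may differ).
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

# Hypothetical model registry keyed by the names used in this README.
MODELS = {
    "knn": KNeighborsClassifier(n_neighbors=5),
    "logistic": LogisticRegression(max_iter=1000),
    "svm": SVC(kernel="rbf", probability=True),
}

def train(model_name, X_train, y_train):
    """Fit the chosen classifier on flattened image vectors."""
    model = MODELS[model_name]
    model.fit(X_train, y_train)
    return model
```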
**Setup:**
- Install **`Python 3`** (last tested on Python 3.7).
- Install [pipenv](https://pipenv.readthedocs.io/en/latest/).
- In the project root directory, execute `pipenv sync`.

**Usage:**
You can start classifying new images right away using the pre-trained models (the `.pkl` files in `data/generated/output/<model-name>/`) trained on the dataset above:
```
python predict_from_file.py
```
Note that the pre-trained `knn` model file is not included due to its large size.
If you want to use `knn`, then download it separately from [here](https://drive.google.com/drive/folders/0Bw239KLrN7zoZ0dNZHFhdlI5ZFU?resourcekey=0-bME2qAFVHS_lKv7WgO-tLQ) and place it in `data/generated/output/knn/`.
The models available by default are `svm` and `logistic`.

The above workflow can be executed using *`run_quick.sh`*.
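As a rough illustration of what classifying an image with a pre-trained model involves, here is a minimal sketch; the model file name, image size, and preprocessing are illustrative assumptions, not necessarily what `predict_from_file.py` does:

```python
# Minimal sketch of classifying one image with a pickled model.
# Paths, image size, and preprocessing are illustrative assumptions.
import pickle

import cv2

with open("data/generated/output/svm/model.pkl", "rb") as f:  # hypothetical file name
    model = pickle.load(f)

image = cv2.imread("hand_sign.jpg", cv2.IMREAD_GRAYSCALE)
image = cv2.resize(image, (50, 50))          # normalize to a fixed size
features = image.flatten().reshape(1, -1)    # one flattened feature vector
print(model.predict(features)[0])            # predicted ASL label
```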
---------------------------------------------------------
However, if you wish to use your own dataset, follow these steps:
1. Put all the training and testing images in a directory and update their paths in the config file *`code/common/config.py`* (or skip this step to use the default paths, which should also work).
   Optionally, you can capture the images in real time from a webcam - `python capture_from_camera.py`.
2. Generate image-vs-label mappings for all the training images - `python generate_images_labels.py train`.
3. Apply the image-transformation algorithms to the training images - `python transform_images.py` (a sketch of this kind of preprocessing is shown after this list).
4. Train the model - `python train_model.py <model-name>`, where `<model-name>` can be `svm`, `knn` or `logistic`.
5. Generate the image-vs-label mapping for all the test images - `python generate_images_labels.py test`.
6. Test the model - `python predict_from_file.py <model-name>`.
Optionally, you can test the model on a live video stream from a webcam - `python predict_from_camera.py`.
   (If recording, make sure to have the same background and hand alignment as in the training images.)

All the Python commands above have to be executed from the `code/` directory.
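For context on step 3, image transformation in this kind of pipeline typically reduces each photo to a clean binary silhouette of the hand. The sketch below shows one plausible version using OpenCV; the exact steps and thresholds are assumptions, not necessarily what `transform_images.py` implements:

```python
# Plausible sketch of a hand-image transformation step (assumed,
# not necessarily what transform_images.py implements).
import cv2

def transform(path):
    """Reduce a hand photo to a small binary silhouette image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # suppress sensor noise
    # Otsu's method picks a global threshold separating hand from background.
    _, binary = cv2.threshold(
        blurred, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU
    )
    return cv2.resize(binary, (50, 50))          # fixed-size output
```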
The above workflow can be executed using *`run.sh`*.

**To-Do:**
- Improve the command-line-arguments input mechanism.
- ~~Add progress bar while transforming images.~~
- ~~Add logger.~~