{"id":14958319,"url":"https://github.com/drgfreeman/rps-cv","last_synced_at":"2025-09-09T14:09:10.061Z","repository":{"id":130571707,"uuid":"106104459","full_name":"DrGFreeman/rps-cv","owner":"DrGFreeman","description":"A Rock-Paper-Scissors game using computer vision and machine learning on Raspberry Pi","archived":false,"fork":false,"pushed_at":"2019-08-16T18:46:07.000Z","size":18880,"stargazers_count":121,"open_issues_count":0,"forks_count":32,"subscribers_count":6,"default_branch":"master","last_synced_at":"2025-04-07T01:41:43.583Z","etag":null,"topics":["computer-vision","game","image-classification","image-recognition","machine-learning","opencv","opencv-python","raspberry-pi","raspberry-pi-camera","rock-paper-scissors","skimage","sklearn"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/DrGFreeman.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2017-10-07T14:32:51.000Z","updated_at":"2025-03-17T09:08:44.000Z","dependencies_parsed_at":null,"dependency_job_id":"62c9a1f1-b095-47a4-9334-b7a02c05be04","html_url":"https://github.com/DrGFreeman/rps-cv","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/DrGFreeman/rps-cv","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DrGFreeman%2Frps-cv","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DrGFreeman%2Frps-cv/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DrGFreeman%2Frps-cv/rel
eases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DrGFreeman%2Frps-cv/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/DrGFreeman","download_url":"https://codeload.github.com/DrGFreeman/rps-cv/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DrGFreeman%2Frps-cv/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":271078587,"owners_count":24695473,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-18T02:00:08.743Z","response_time":89,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","game","image-classification","image-recognition","machine-learning","opencv","opencv-python","raspberry-pi","raspberry-pi-camera","rock-paper-scissors","skimage","sklearn"],"created_at":"2024-09-24T13:16:44.464Z","updated_at":"2025-08-19T00:11:47.887Z","avatar_url":"https://github.com/DrGFreeman.png","language":"Python","readme":"# rps-cv\nA Rock-Paper-Scissors game using computer vision and machine learning on Raspberry Pi.\n\nBy Julien de la Bruère-Terreault (drgfreeman@tuta.io)\n\n[![Animated screenshot](img/doc/rps.gif)](https://www.youtube.com/watch?v=ozo0-lx_PMA)  \nClick on image to access [video on YouTube](https://www.youtube.com/watch?v=ozo0-lx_PMA).\n\n#### This project is 
[showcased](https://www.raspberrypi.org/magpi-issues/MagPi74.pdf#%5B%7B%22num%22%3A272%2C%22gen%22%3A0%7D%2C%7B%22name%22%3A%22FitH%22%7D%2C787%5D) in [issue 74](https://www.raspberrypi.org/magpi/issues/74/) of the MagPi, the official magazine of the [Raspberry Pi Foundation](https://www.raspberrypi.org/).\n\n* See my [DrGFreeman/rps-cv-data-science](https://github.com/DrGFreeman/rps-cv-data-science) repository where I posted different notebooks demonstrating some cool data science analysis on the image dataset resulting from this project.\n\n* [MagPi article in Simplified Chinese](https://github.com/TommyZihao/MagPi_Chinese/blob/master/MagPi74_18-19%E7%94%A8%E6%A0%91%E8%8E%93%E6%B4%BE%E8%B7%9F%E4%BA%BA%E5%B7%A5%E6%99%BA%E8%83%BD%E7%8E%A9%E7%8C%9C%E6%8B%B3.md) contributed by [Tommy Zihao](https://github.com/TommyZihao)\n\n## Summary\n\n### Project origin\n\nThis project results from a challenge my son gave me when I was teaching him the basics of computer programming by making a simple text-based Rock-Paper-Scissors game in Python. At that time I was starting to experiment with computer vision using a Raspberry Pi and an old USB webcam, so my son naively asked me:\n\n*\"Could you make a Rock-Paper-Scissors game that uses the camera to detect hand gestures?\"*\n\nI accepted the challenge and, about a year and a lot of learning later, completed it with a functional game.\n\n### Overview of the game\n\nThe game uses a Raspberry Pi computer and Raspberry Pi camera installed on a 3D printed support with LED strips to achieve consistent images.\n\nThe pictures taken by the camera are processed and fed to an image classifier that determines whether the gesture corresponds to \"Rock\", \"Paper\" or \"Scissors\".\n\nThe image classifier uses a [Support Vector Machine](https://en.wikipedia.org/wiki/Support_vector_machine), a type of [machine learning](https://en.wikipedia.org/wiki/Machine_learning) algorithm. 
The image classifier has been previously \"trained\" with a bank of labeled images corresponding to the \"Rock\", \"Paper\" and \"Scissors\" gestures captured with the Raspberry Pi camera.\n\n### How it works\n\nThe image below shows the processing pipeline for the training of the image classifier (top portion) and the prediction of the gesture for new images captured by the camera during the game (bottom portion). Click [here](https://raw.githubusercontent.com/DrGFreeman/rps-cv/master/img/doc/rps-pipeline.png) for the full-size image.\n![Rock-Paper-Scissors computer vision \u0026 machine learning pipeline](img/doc/rps-pipeline.png)\n\n## Dependencies\n\nThe project depends on and has been tested with the following libraries:\n\n* OpenCV \u003e= 3.3.0 with bindings for Python 3*\n* Python \u003e= 3.4\n* NumPy \u003e= 1.13.0\n* Scikit-Learn \u003e= 0.18.2\n* Scikit-Image \u003e= 0.13.0\n* Pygame \u003e= 1.9.3\n* Picamera\n\n\\* Follow [this guide](https://www.pyimagesearch.com/2016/04/18/install-guide-raspberry-pi-3-raspbian-jessie-opencv-3/) for the installation of OpenCV on the Raspberry Pi. Install the Python libraries within the same virtual environment as OpenCV using the `pip install \u003cpackage_name\u003e` command. Picamera is installed by default on [Raspbian](https://www.raspberrypi.org/downloads/raspbian/) images.\n\n### Hardware:\n\n* [Raspberry Pi 3 Model B or B+ computer](https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/)\n* [Raspberry Pi Camera Module V2](https://www.raspberrypi.org/products/camera-module-v2/)\n* A green background to allow background subtraction in the captured images.\n* A physical setup for the camera to ensure consistent lighting and camera position. 
The 3D models I used are available [on Thingiverse](https://www.thingiverse.com/thing:2598378).\n\n\n![Camera \u0026 lighting setup](img/doc/hardware_front.jpg)\n![Camera \u0026 lighting setup](img/doc/hardware_rear.jpg)\n![Camera \u0026 lighting setup](img/doc/hardware_top.jpg)\n\n## Program files\n\n* *capture.py*  \nThis file opens the camera in \"capture mode\" to capture and label images that will later be used to train the image classifier. The captured images are automatically named and stored in a folder structure.\n\n* *train.py*  \nThis script reads and processes the training images in preparation for training the image classifier. The processed image data is then used to train the support vector machine image classifier. The trained classifier is stored in the `clf.pkl` file read by `play.py`.\n\n* *playgui.py*  \nThis file runs the actual Rock-Paper-Scissors game using the camera and the trained image classifier in a graphical user interface (GUI). Images from each play are captured and added to the image bank, creating additional images to train the classifier.\n\n* *play.py*  \nThis file runs the actual Rock-Paper-Scissors game similarly to *playgui.py* except that the game output is displayed in the terminal and an OpenCV window (no GUI).\n\n\\* Note that, due to memory limitations, the *train.py* script may not run properly on the Raspberry Pi with training sets of more than a few hundred images. Consequently, it is recommended to run this script on a more powerful computer. 
This computer must also have OpenCV, Python 3.4+ and the numpy, scikit-learn and scikit-image Python libraries installed.\n\n## Library modules\n\n* *rpscv.gui*  \nThis module defines the RPSGUI class and associated methods to manage the game's graphical user interface (GUI).\n\n* *rpscv.imgproc*  \nThis module provides the image processing functions used by the various other Python files.\n\n* *rpscv.utils*  \nThis module provides functions and constants used by the various other Python files.\n\n* *rpscv.camera*  \nThis module defines the Camera class, a wrapper around the picamera library, with specific methods for the project such as white balance calibration.\n\n## Output \u0026 Screenshots\n\n### Training mode\n\nTypical output from *train.py* (on a PC with an Intel Core i7-6700 @ 3.4 GHz, 16 GB RAM, Anaconda distribution):\n```\n(rps-cv) jul@rosalind:~/pi/git/rps-cv$ python train.py\n+0.0: Importing libraries\n+3.75: Generating image data\nCompleted processing 1708 images\n  rock: 562 images\n  paper: 568 images\n  scissors: 578 images\n+99.51: Generating test set\n+99.64: Defining pipeline\n+99.64: Defining cross-validation\n+99.64: Defining grid search\nGrid search parameters:\nGridSearchCV(cv=StratifiedKFold(n_splits=5, random_state=42, shuffle=True),\n       error_score='raise',\n       estimator=Pipeline(steps=[('pca', PCA(copy=True, iterated_power='auto', n_components=None,\n       random_state=None, svd_solver='auto', tol=0.0, whiten=False)), ('clf', SVC(C=1.0,\n       cache_size=200, class_weight=None, coef0=0.0, decision_function_shape=None, degree=3,\n       gamma='auto', kernel='rbf', max_iter=-1, probability=False, random_state=None, shrinking=True,\n       tol=0.001, verbose=False))]),\n       fit_params={}, iid=True, n_jobs=4,\n       param_grid={'clf__C': array([   1.     ,    3.16228,   10.     ,   31.62278,  100.     
]),\n                   'clf__gamma': array([ 0.0001 ,  0.00032,  0.001  ,  0.00316,  0.01   ]),\n                   'pca__n_components': [60]},\n       pre_dispatch='2*n_jobs', refit=True, return_train_score=True,\n       scoring='f1_micro', verbose=1)\n+99.64: Fitting classifier\nFitting 5 folds for each of 25 candidates, totalling 125 fits\n[Parallel(n_jobs=4)]: Done  42 tasks      | elapsed:  2.1min\n[Parallel(n_jobs=4)]: Done 125 out of 125 | elapsed:  5.9min finished\nGrid search best score: 0.9910406616126809\nGrid search best parameters:\n  pca__n_components: 60\n  clf__C: 10.0\n  clf__gamma: 0.00031622776601683794\n+458.66: Validating classifier on test set\nClassifier f1-score on test set: 0.9922178988326849\nConfusion matrix:\n[[84  1  0]\n [ 1 84  0]\n [ 0  0 87]]\nClassification report:\n             precision    recall  f1-score   support\n\n       rock       0.99      0.99      0.99        85\n      paper       0.99      0.99      0.99        85\n   scissors       1.00      1.00      1.00        87\n\navg / total       0.99      0.99      0.99       257\n\n+458.72: Writing classifier to clf.pkl\n+467.25: Done!\n```\n\n### Play mode with Graphical User Interface (*playgui.py*)\n\nInitial screen:  \n![Initial screen](img/doc/screen-0-0.png)\n\nComputer wins the play:  \n![Computer wins play](img/doc/screen-1-0.png)\n\nPlayer wins the play:  \n![Player wins play](img/doc/screen-2-3.png)\n\nTie:  \n![Tie](img/doc/screen-4-4-tie.png)\n\nGame over, player wins the game:  \n![Game over - player wins](img/doc/screen-3-5-game-over.png)\n\n### Image capture mode (*capture.py*)\n![Capture mode](img/doc/screen-capture.py.png)\n\n### Play mode without GUI (*play.py*)\n![Play no 
GUI](img/doc/screen-play.py.png)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdrgfreeman%2Frps-cv","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdrgfreeman%2Frps-cv","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdrgfreeman%2Frps-cv/lists"}