# GYM Counting Device (GYMcD)

University of Malaya, WID3005 Intelligent Robotics, Semester 2 2022/2023

Lecturer: Dr. Zati Hakim Binti Azizul Hasan

Group Members:

| NAME                           | MATRIC NUMBER  | FAVOURITE EXERCISE  |
|--------------------------------|----------------|---------------------|
| Lawrence Leroy Chieng Tze Yao  | S2018935       | Sit-ups             |
| Pang Chong Wen                 | U2005402       | Pilates             |
| Tan Jia Xuan                   | U2005407       | Hiking              |
| Ho Zhi Yi                      | U2005261       | Weight Training     |

## Project Details

Posture is a prominent factor that makes or breaks an effective gym workout, be it cardio or hypertrophy. When training solo, it is often hard to judge your own form, yet not everyone can afford a personal trainer.

GYMcD is our proposed solution to this predicament: a personalised, efficient, and tech-savvy experience for solo trainers. The device covers the duties of a gym assistant by capturing the user's exercises in real time and reporting the current rep count for each workout, both on screen and through audio. Incorrectly performed reps are not counted, enforcing the same integrity a human trainer would.

The project detects three kinds of posture - curl, push, and squat - using a customised posture-detection model built with an attention-based LSTM architecture and MediaPipe Pose. For each correct rep, Juno counts it for the user and reads the count out loud.

From a technical viewpoint, frames captured by the Juno lens are published to a rostopic, which the machine-learning node subscribes to; it then publishes its results to the display node and the text-to-speech node. We trained the model largely ourselves, performing a minimum of 150 reps per exercise in front of the camera. Under the hood, the pose-estimation model determines whether the detected posture is correct.

We hope GYMcD can be a worthwhile alternative for budget-conscious individuals who want a proper physique without hiring a trainer. With that inspiration in mind, we hope GYMcD can be one of the small steps of man in the giant leap of robotics.

## Project Demo

Video link: https://www.youtube.com/watch?v=yrsDZUI_-h4

## Getting Started

### Prerequisites
- Have the Juno bot with you! Our project requires only Juno's vision and speech functions.

### Clone this repo
```shell
cd [workspace]/src
git clone https://github.com/hozhiyi/juno_bot.git
```

### Model
- Create a folder named "models" inside the src folder.
- Download the model from our Google Drive - [LSTM Attention.h5](https://drive.google.com/file/d/1V4iLpShTlPDDALkrWmv_q0v3R33gB5kg/view?usp=sharing).
- Place LSTM Attention.h5 in the models folder.

### Installation

1. To create a virtual environment, we'll use [pyenv](https://github.com/pyenv/pyenv), a tool for managing multiple versions of Python on one system; our project requires both Python 2 and Python 3. Here's a guide for Ubuntu/Debian.
- Build pyenv dependencies.
    ```shell
    sudo apt-get install -y make build-essential libssl-dev zlib1g-dev \
    libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm libncurses5-dev \
    libncursesw5-dev xz-utils tk-dev libffi-dev liblzma-dev python-openssl
    ```
- Install pyenv.
    ```shell
    curl https://pyenv.run | bash
    ```
- Install Python.
    ```shell
    pyenv install -v 3.9.2
    ```
- Create a virtual environment with pyenv.
    ```shell
    pyenv virtualenv <python_version> <environment_name>
    ```
- Set the local environment for the project.
    ```shell
    pyenv local <environment_name>
    ```
- Activate the virtual environment.
    ```shell
    pyenv activate <environment_name>
    ```
- Install the required packages.
    ```shell
    pip install -r requirements.txt
    ```

### To build CVBridge
To let the Juno robot read CV2 images, we need cv_bridge, a ROS package that bridges ROS image messages and OpenCV image formats. We build it manually because catkin_make only compiles Python 2 scripts. Please follow these steps (the paths below match our machine - adjust the pyenv environment, Python include directory, and library path to your own setup):
```shell
mkdir -p ~/cvbridge_build_ws/src
cd ~/cvbridge_build_ws/src
# Fetch the cv_bridge sources (cv_bridge lives in the vision_opencv repo;
# check out the branch matching your ROS distro, e.g. melodic)
git clone https://github.com/ros-perception/vision_opencv.git

cd ~/cvbridge_build_ws
catkin config -DPYTHON_EXECUTABLE=/home/mustar/.pyenv/versions/env-w15/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/home/mustar/.pyenv/versions/env-w15/lib/python3.9/site-packages
catkin config --install --cmake-args -DCMAKE_BUILD_TYPE=Release -DSETUPTOOLS_DEB_LAYOUT=OFF

catkin build cv_bridge

# Change the path below to your own workspace
source /home/mustar/cvbridge_build_ws/install/setup.bash --extend
```

### To Start GYMcD

- Four terminals are required to run four sets of commands in parallel.

1. Terminal 1
    - Build the project with Catkin and start roscore.
    ```shell
    cd [workspace]
    catkin_make
    cd src/juno_bot/src
    chmod +x *.py
    roscore
    ```
2. Terminal 2
    - Start Juno's vision and capture your exercise posture.
    ```shell
    rosrun juno_bot camera_node.py
    ```
3. Terminal 3
    - Start the text-to-speech node so that Juno can read the counts out loud.
    ```shell
    rosrun juno_bot gtts_node.py
    ```
4. Terminal 4
    - Launch the posture-detection task.
    ```shell
    cd [workspace]/src/juno_bot/
    rosrun juno_bot exercise_detection.py
    ```