{"id":13585206,"url":"https://github.com/cvzone/cvzone","last_synced_at":"2025-05-14T23:07:36.100Z","repository":{"id":40523342,"uuid":"365531232","full_name":"cvzone/cvzone","owner":"cvzone","description":"This is a Computer vision package that makes its easy to run Image processing and AI functions. At the core it uses OpenCV and Mediapipe libraries.","archived":false,"fork":false,"pushed_at":"2024-05-10T12:42:26.000Z","size":3330,"stargazers_count":1244,"open_issues_count":49,"forks_count":267,"subscribers_count":27,"default_branch":"master","last_synced_at":"2025-05-14T03:44:23.932Z","etag":null,"topics":["computervision","mediapipe","opencv","opencv-python"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/cvzone.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-05-08T14:12:58.000Z","updated_at":"2025-05-12T01:17:50.000Z","dependencies_parsed_at":"2024-03-31T20:25:08.378Z","dependency_job_id":"84dfeafe-e72d-49e8-a1d7-d96bfe48a50d","html_url":"https://github.com/cvzone/cvzone","commit_stats":{"total_commits":46,"total_committers":7,"mean_commits":6.571428571428571,"dds":0.4782608695652174,"last_synced_commit":"a6d0d6d4219956fde1f3f1af66a12434500130ba"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cvzone%2Fcvzone","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cvzone%2Fcvzone/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hos
ts/GitHub/repositories/cvzone%2Fcvzone/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/cvzone%2Fcvzone/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/cvzone","download_url":"https://codeload.github.com/cvzone/cvzone/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254104878,"owners_count":22015554,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computervision","mediapipe","opencv","opencv-python"],"created_at":"2024-08-01T15:04:48.287Z","updated_at":"2025-05-14T23:07:31.085Z","avatar_url":"https://github.com/cvzone.png","language":"Python","readme":"# CVZone\n\n[![Downloads](https://pepy.tech/badge/cvzone)](https://pepy.tech/project/cvzone)\n[![Downloads](https://pepy.tech/badge/cvzone/month)](https://pepy.tech/project/cvzone)\n[![Downloads](https://pepy.tech/badge/cvzone/week)](https://pepy.tech/project/cvzone)\n\n\n\nThis is a computer vision package that makes it easy to run image processing and AI functions. At its core, it uses the [OpenCV](https://github.com/opencv/opencv) and [Mediapipe](https://github.com/google/mediapipe) libraries.\n\n## Installation\nYou can simply use pip to install the latest version of cvzone.\n\n`pip install cvzone`\n\n\u003chr\u003e\n\n## Examples\nFor sample usage and examples, please refer to the Examples folder in this repository. 
This folder contains various examples to help you understand how to make the most out of cvzone's features.\n\n## Video Documentation\n\n[![YouTube Video](https://img.youtube.com/vi/ieXQTtQgyo0/0.jpg)](https://youtu.be/ieXQTtQgyo0)\n\n\n## Table of Contents\n\n1. [Installations](#installations)\n2. [Corner Rectangle](#corner-rectangle)\n3. [PutTextRect](#puttextrect)\n4. [Download Image from URL](#download-image-from-url)\n5. [Overlay PNG](#overlay-png)\n6. [Rotate Image](#rotate-image)\n7. [Stack Images](#stack-images)\n8. [FPS](#fps)\n9. [Finding Contours](#finding-contours)\n10. [Color Module](#color-module)\n11. [Classification Module](#classification-module)\n12. [Face Detection](#face-detection)\n13. [Face Mesh Module](#face-mesh-module)\n14. [Selfie Segmentation Module](#selfie-segmentation-module)\n15. [Hand Tracking Module](#hand-tracking-module)\n16. [Pose Module](#pose-module)\n17. [Serial Module](#serial-module)\n18. [Plot Module](#plot-module)\n\n---\n\n### Installations\n\nTo install the cvzone package, run the following command:\n\n```bash\npip install cvzone\n```\n\n\n### Corner Rectangle\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"Results/cornerRect2.jpg\" alt=\"Corner Rectangle CVZone\"\u003e\n\u003c/div\u003e\n\n```python\n\nimport cv2\nimport cvzone  # Importing the cvzone library\n\n# Initialize the webcam\ncap = cv2.VideoCapture(2)  # Capture video from the third webcam (0-based index)\n\n# Main loop to continuously capture frames\nwhile True:\n    # Capture a single frame from the webcam\n    success, img = cap.read()  # 'success' is a boolean that indicates if the frame was captured successfully, and 'img' contains the frame itself\n\n    # Add a rectangle with styled corners to the image\n    img = cvzone.cornerRect(\n        img,  # The image to draw on\n        (200, 200, 300, 200),  # The position and dimensions of the rectangle (x, y, width, height)\n        l=30,  # Length of the corner edges\n        t=5,  # Thickness of 
the corner edges\n        rt=1,  # Thickness of the rectangle\n        colorR=(255, 0, 255),  # Color of the rectangle\n        colorC=(0, 255, 0)  # Color of the corner edges\n    )\n\n    # Show the modified image\n    cv2.imshow(\"Image\", img)  # Display the image in a window named \"Image\"\n\n    # Wait for 1 millisecond between frames\n    cv2.waitKey(1)  # Waits 1 ms for a key event (not being used here)\n```\n\n### PutTextRect\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"Results/putTextRect.jpg\" alt=\"putTextRect CVZone\"\u003e\n\u003c/div\u003e\n\n```python\n\nimport cv2\nimport cvzone  # Importing the cvzone library\n\n# Initialize the webcam\ncap = cv2.VideoCapture(2)  # Capture video from the third webcam (0-based index)\n\n# Main loop to continuously capture frames\nwhile True:\n    # Capture a single frame from the webcam\n    success, img = cap.read()  # 'success' is a boolean that indicates if the frame was captured successfully, and 'img' contains the frame itself\n\n    # Add a rectangle and put text inside it on the image\n    img, bbox = cvzone.putTextRect(\n        img, \"CVZone\", (50, 50),  # Image and starting position of the rectangle\n        scale=3, thickness=3,  # Font scale and thickness\n        colorT=(255, 255, 255), colorR=(255, 0, 255),  # Text color and Rectangle color\n        font=cv2.FONT_HERSHEY_PLAIN,  # Font type\n        offset=10,  # Offset of text inside the rectangle\n        border=5, colorB=(0, 255, 0)  # Border thickness and color\n    )\n\n    # Show the modified image\n    cv2.imshow(\"Image\", img)  # Display the image in a window named \"Image\"\n\n    # Wait for 1 millisecond between frames\n    cv2.waitKey(1)  # Waits 1 ms for a key event (not being used here)\n```\n\n### Download Image from URL\n\n```python\nimport cv2\nimport cvzone\n\nimgNormal = cvzone.downloadImageFromUrl(\n    url='https://github.com/cvzone/cvzone/blob/master/Results/shapes.png?raw=true')\n\nimgPNG = 
cvzone.downloadImageFromUrl(\n    url='https://github.com/cvzone/cvzone/blob/master/Results/cvzoneLogo.png?raw=true',\n    keepTransparency=True)\nimgPNG =cv2.resize(imgPNG,(0,0),None,3,3)\n\ncv2.imshow(\"Image Normal\", imgNormal)\ncv2.imshow(\"Transparent Image\", imgPNG)\ncv2.waitKey(0)\n\n```\n\n### Overlay PNG\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"Results/overlayPNG.jpg\" alt=\"overlayPNG CVZone\"\u003e\n\u003c/div\u003e\n\n```python\nimport cv2\nimport cvzone\n\n# Initialize camera capture\ncap = cv2.VideoCapture(2)\n\n# imgPNG = cvzone.downloadImageFromUrl(\n#     url='https://github.com/cvzone/cvzone/blob/master/Results/cvzoneLogo.png?raw=true',\n#     keepTransparency=True)\n\nimgPNG = cv2.imread(\"cvzoneLogo.png\",cv2.IMREAD_UNCHANGED)\n\n\nwhile True:\n    # Read image frame from camera\n    success, img = cap.read()\n\n    imgOverlay = cvzone.overlayPNG(img, imgPNG, pos=[-30, 50])\n    imgOverlay = cvzone.overlayPNG(img, imgPNG, pos=[200, 200])\n    imgOverlay = cvzone.overlayPNG(img, imgPNG, pos=[500, 400])\n\n    cv2.imshow(\"imgOverlay\", imgOverlay)\n    cv2.waitKey(1)\n```\n### Rotate Image\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"Results/rotateImage.jpg\" alt=\"rotateImage CVZone\"\u003e\n\u003c/div\u003e\n\n```python\nimport cv2\nfrom cvzone.Utils import rotateImage  # Import rotateImage function from cvzone.Utils\n\n# Initialize the video capture\ncap = cv2.VideoCapture(2)  # Capture video from the third webcam (index starts at 0)\n\n# Start the loop to continuously get frames from the webcam\nwhile True:\n    # Read a frame from the webcam\n    success, img = cap.read()  # 'success' will be True if the frame is read successfully, 'img' will contain the frame\n\n    # Rotate the image by 60 degrees without keeping the size\n    imgRotated60 = rotateImage(img, 60, scale=1,\n                               keepSize=False)  # Rotate image 60 degrees, scale it by 1, and don't keep original size\n\n    # Rotate the image 
by 60 degrees while keeping the size\n    imgRotated60KeepSize = rotateImage(img, 60, scale=1,\n                                       keepSize=True)  # Rotate image 60 degrees, scale it by 1, and keep the original size\n\n    # Display the rotated images\n    cv2.imshow(\"imgRotated60\", imgRotated60)  # Show the 60-degree rotated image without keeping the size\n    cv2.imshow(\"imgRotated60KeepSize\", imgRotated60KeepSize)  # Show the 60-degree rotated image while keeping the size\n\n    # Wait for 1 millisecond between frames\n    cv2.waitKey(1)  # Wait for 1 ms, during which any key press can be detected (not being used here)\n\n```\n### Stack Images\n\u003cdiv align=\"center\"\u003e\n  \u003cimg src=\"Results/stackImages.jpg\" alt=\"stackImages CVZone\"\u003e\n\u003c/div\u003e\n\n```python\nimport cv2\nimport cvzone\n\n# Initialize camera capture\ncap = cv2.VideoCapture(2)\n\n# Start an infinite loop to continually capture frames\nwhile True:\n    # Read image frame from camera\n    success, img = cap.read()\n\n    # Convert the image to grayscale\n    imgGray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)\n\n    # Resize the image to be smaller (0.1x of original size)\n    imgSmall = cv2.resize(img, (0, 0), None, 0.1, 0.1)\n\n    # Resize the image to be larger (3x of original size)\n    imgBig = cv2.resize(img, (0, 0), None, 3, 3)\n\n    # Apply Canny edge detection on the grayscale image\n    imgCanny = cv2.Canny(imgGray, 50, 150)\n\n    # Convert the image to HSV color space\n    imgHSV = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)\n\n    # Create a list of all processed images\n    imgList = [img, imgGray, imgCanny, imgSmall, imgBig, imgHSV]\n\n    # Stack the images together using cvzone's stackImages function\n    stackedImg = cvzone.stackImages(imgList, 3, 0.7)\n\n    # Display the stacked images\n    cv2.imshow(\"stackedImg\", stackedImg)\n\n    # Wait for 1 millisecond; this also allows for keyboard inputs\n    cv2.waitKey(1)\n```\n### FPS\n\n```python\nimport 
cvzone\nimport cv2\n\n# Initialize the FPS class with an average count of 30 frames for smoothing\nfpsReader = cvzone.FPS(avgCount=30)\n\n# Initialize the webcam and set it to capture\ncap = cv2.VideoCapture(0)\ncap.set(cv2.CAP_PROP_FPS, 30)  # Set the frames per second to 30\n\n# Main loop to capture frames and display FPS\nwhile True:\n    # Read a frame from the webcam\n    success, img = cap.read()\n\n    # Update the FPS counter and draw the FPS on the image\n    # fpsReader.update returns the current FPS and the updated image\n    fps, img = fpsReader.update(img, pos=(20, 50),\n                                bgColor=(255, 0, 255), textColor=(255, 255, 255),\n                                scale=3, thickness=3)\n\n    # Display the image with the FPS counter\n    cv2.imshow(\"Image\", img)\n\n    # Wait for 1 ms to show this frame, then continue to the next frame\n    cv2.waitKey(1)\n```\n### Finding Contours\n\n```python\nimport cv2           # Importing the OpenCV library for computer vision tasks\nimport cvzone       # Importing the cvzone library for additional functionalities\nimport numpy as np  # Importing NumPy library for numerical operations\n\n# Download an image containing shapes from a given URL\nimgShapes = cvzone.downloadImageFromUrl(\n    url='https://github.com/cvzone/cvzone/blob/master/Results/shapes.png?raw=true')\n\n# Perform edge detection using the Canny algorithm\nimgCanny = cv2.Canny(imgShapes, 50, 150)\n\n# Dilate the edges to strengthen the detected contours\nimgDilated = cv2.dilate(imgCanny, np.ones((5, 5), np.uint8), iterations=1)\n\n# Find contours in the image without any corner filtering\nimgContours, conFound = cvzone.findContours(\n    imgShapes, imgDilated, minArea=1000, sort=True,\n    filter=None, drawCon=True, c=(255, 0, 0), ct=(255, 0, 255),\n    retrType=cv2.RETR_EXTERNAL, approxType=cv2.CHAIN_APPROX_NONE)\n\n# Find contours in the image and filter them based on corner points (either 3 or 4 
corners)\nimgContoursFiltered, conFoundFiltered = cvzone.findContours(\n    imgShapes, imgDilated, minArea=1000, sort=True,\n    filter=[3, 4], drawCon=True, c=(255, 0, 0), ct=(255, 0, 255),\n    retrType=cv2.RETR_EXTERNAL, approxType=cv2.CHAIN_APPROX_NONE)\n\n# Display the image with all found contours\ncv2.imshow(\"imgContours\", imgContours)\n\n# Display the image with filtered contours (either 3 or 4 corners)\ncv2.imshow(\"imgContoursFiltered\", imgContoursFiltered)\n\n# Wait until a key is pressed to close the windows\ncv2.waitKey(0)\n\n```\n### Color Module\n\n```python\nimport cvzone\nimport cv2\n\n# Create an instance of the ColorFinder class with trackBar set to True.\nmyColorFinder = cvzone.ColorFinder(trackBar=True)\n\n# Initialize the video capture using OpenCV.\n# Using the third camera (index 2). Adjust index if you have multiple cameras.\ncap = cv2.VideoCapture(2)\n\n# Set the dimensions of the camera feed to 640x480.\ncap.set(3, 640)\ncap.set(4, 480)\n\n# Custom color values for detecting orange.\n# 'hmin', 'smin', 'vmin' are the minimum values for Hue, Saturation, and Value.\n# 'hmax', 'smax', 'vmax' are the maximum values for Hue, Saturation, and Value.\nhsvVals = {'hmin': 10, 'smin': 55, 'vmin': 215, 'hmax': 42, 'smax': 255, 'vmax': 255}\n\n# Main loop to continuously get frames from the camera.\nwhile True:\n    # Read the current frame from the camera.\n    success, img = cap.read()\n\n    # Use the update method from the ColorFinder class to detect the color.\n    # It returns the masked color image and a binary mask.\n    imgOrange, mask = myColorFinder.update(img, hsvVals)\n\n    # Stack the original image, the masked color image, and the binary mask.\n    imgStack = cvzone.stackImages([img, imgOrange, mask], 3, 1)\n\n    # Show the stacked images.\n    cv2.imshow(\"Image Stack\", imgStack)\n\n    # Break the loop if the 'q' key is pressed.\n    if cv2.waitKey(1) \u0026 0xFF == ord('q'):\n        break\n\n```\n### Classification 
Module\n\n```python\nfrom cvzone.ClassificationModule import Classifier\nimport cv2\n\ncap = cv2.VideoCapture(2)  # Initialize video capture\npath = \"C:/Users/USER/Documents/maskModel/\"\nmaskClassifier = Classifier(f'{path}/keras_model.h5', f'{path}/labels.txt')\n\nwhile True:\n    _, img = cap.read()  # Capture frame-by-frame\n    prediction = maskClassifier.getPrediction(img)\n    print(prediction)  # Print prediction result\n    cv2.imshow(\"Image\", img)\n    cv2.waitKey(1)  # Wait for a key press\n```\n### Face Detection\n\n```python\nimport cvzone\nfrom cvzone.FaceDetectionModule import FaceDetector\nimport cv2\n\n# Initialize the webcam\n# '2' means the third camera connected to the computer, usually 0 refers to the built-in webcam\ncap = cv2.VideoCapture(2)\n\n# Initialize the FaceDetector object\n# minDetectionCon: Minimum detection confidence threshold\n# modelSelection: 0 for short-range detection (2 meters), 1 for long-range detection (5 meters)\ndetector = FaceDetector(minDetectionCon=0.5, modelSelection=0)\n\n# Run the loop to continually get frames from the webcam\nwhile True:\n    # Read the current frame from the webcam\n    # success: Boolean, whether the frame was successfully grabbed\n    # img: the captured frame\n    success, img = cap.read()\n\n    # Detect faces in the image\n    # img: Updated image\n    # bboxs: List of bounding boxes around detected faces\n    img, bboxs = detector.findFaces(img, draw=False)\n\n    # Check if any face is detected\n    if bboxs:\n        # Loop through each bounding box\n        for bbox in bboxs:\n            # bbox contains 'id', 'bbox', 'score', 'center'\n\n            # ---- Get Data  ---- #\n            center = bbox[\"center\"]\n            x, y, w, h = bbox['bbox']\n            score = int(bbox['score'][0] * 100)\n\n            # ---- Draw Data  ---- #\n            cv2.circle(img, center, 5, (255, 0, 255), cv2.FILLED)\n            cvzone.putTextRect(img, f'{score}%', (x, y - 10))\n            
cvzone.cornerRect(img, (x, y, w, h))\n\n    # Display the image in a window named 'Image'\n    cv2.imshow(\"Image\", img)\n    # Wait for 1 millisecond, and keep the window open\n    cv2.waitKey(1)\n```\n### Face Mesh Module\n\n```python\nfrom cvzone.FaceMeshModule import FaceMeshDetector\nimport cv2\n\n# Initialize the webcam\n# '2' indicates the third camera connected to the computer, '0' would usually refer to the built-in webcam\ncap = cv2.VideoCapture(2)\n\n# Initialize FaceMeshDetector object\n# staticMode: If True, the detection happens only once, else every frame\n# maxFaces: Maximum number of faces to detect\n# minDetectionCon: Minimum detection confidence threshold\n# minTrackCon: Minimum tracking confidence threshold\ndetector = FaceMeshDetector(staticMode=False, maxFaces=2, minDetectionCon=0.5, minTrackCon=0.5)\n\n# Start the loop to continually get frames from the webcam\nwhile True:\n    # Read the current frame from the webcam\n    # success: Boolean, whether the frame was successfully grabbed\n    # img: The current frame\n    success, img = cap.read()\n\n    # Find face mesh in the image\n    # img: Updated image with the face mesh if draw=True\n    # faces: Detected face information\n    img, faces = detector.findFaceMesh(img, draw=True)\n\n    # Check if any faces are detected\n    if faces:\n        # Loop through each detected face\n        for face in faces:\n            # Get specific points for the eye\n            # leftEyeUpPoint: Point above the left eye\n            # leftEyeDownPoint: Point below the left eye\n            leftEyeUpPoint = face[159]\n            leftEyeDownPoint = face[23]\n\n            # Calculate the vertical distance between the eye points\n            # leftEyeVerticalDistance: Distance between points above and below the left eye\n            # info: Additional information (like coordinates)\n            leftEyeVerticalDistance, info = detector.findDistance(leftEyeUpPoint, leftEyeDownPoint)\n\n            # Print 
the vertical distance for debugging or information\n            print(leftEyeVerticalDistance)\n\n    # Display the image in a window named 'Image'\n    cv2.imshow(\"Image\", img)\n\n    # Wait for 1 millisecond to check for any user input, keeping the window open\n    cv2.waitKey(1)\n\n```\n### Selfie Segmentation Module\n\n```python\nimport cvzone\nfrom cvzone.SelfiSegmentationModule import SelfiSegmentation\nimport cv2\n\n# Initialize the webcam. '2' indicates the third camera connected to the computer.\n# '0' usually refers to the built-in camera.\ncap = cv2.VideoCapture(2)\n\n# Set the frame width to 640 pixels\ncap.set(3, 640)\n# Set the frame height to 480 pixels\ncap.set(4, 480)\n\n# Initialize the SelfiSegmentation class. It will be used for background removal.\n# model is 0 or 1: 0 is general, 1 is landscape (faster)\nsegmentor = SelfiSegmentation(model=0)\n\n# Infinite loop to keep capturing frames from the webcam\nwhile True:\n    # Capture a single frame\n    success, img = cap.read()\n\n    # Use the SelfiSegmentation class to remove the background\n    # Replace it with a magenta background (255, 0, 255)\n    # imgBg can be a color or an image as well. 
It must be the same size as the original frame if an image is used.\n    # 'cutThreshold' is the sensitivity of the segmentation.\n    imgOut = segmentor.removeBG(img, imgBg=(255, 0, 255), cutThreshold=0.1)\n\n    # Stack the original image and the image with background removed side by side\n    imgStacked = cvzone.stackImages([img, imgOut], cols=2, scale=1)\n\n    # Display the stacked images\n    cv2.imshow(\"Image\", imgStacked)\n\n    # Check for 'q' key press to break the loop and close the window\n    if cv2.waitKey(1) \u0026 0xFF == ord('q'):\n        break\n\n```\n### Hand Tracking Module\n\n```python\nfrom cvzone.HandTrackingModule import HandDetector\nimport cv2\n\n# Initialize the webcam to capture video\n# The '2' indicates the third camera connected to your computer; '0' would usually refer to the built-in camera\ncap = cv2.VideoCapture(2)\n\n# Initialize the HandDetector class with the given parameters\ndetector = HandDetector(staticMode=False, maxHands=2, modelComplexity=1, detectionCon=0.5, minTrackCon=0.5)\n\n# Continuously get frames from the webcam\nwhile True:\n    # Capture each frame from the webcam\n    # 'success' will be True if the frame is successfully captured, 'img' will contain the frame\n    success, img = cap.read()\n\n    # Find hands in the current frame\n    # The 'draw' parameter draws landmarks and hand outlines on the image if set to True\n    # The 'flipType' parameter flips the image, making it easier for some detections\n    hands, img = detector.findHands(img, draw=True, flipType=True)\n\n    # Check if any hands are detected\n    if hands:\n        # Information for the first hand detected\n        hand1 = hands[0]  # Get the first hand detected\n        lmList1 = hand1[\"lmList\"]  # List of 21 landmarks for the first hand\n        bbox1 = hand1[\"bbox\"]  # Bounding box around the first hand (x,y,w,h coordinates)\n        center1 = hand1['center']  # Center coordinates of the first hand\n        handType1 = hand1[\"type\"]  # Type of the first 
hand (\"Left\" or \"Right\")\n\n        # Count the number of fingers up for the first hand\n        fingers1 = detector.fingersUp(hand1)\n        print(f'H1 = {fingers1.count(1)}', end=\" \")  # Print the count of fingers that are up\n\n        # Calculate distance between specific landmarks on the first hand and draw it on the image\n        length, info, img = detector.findDistance(lmList1[8][0:2], lmList1[12][0:2], img, color=(255, 0, 255),\n                                                  scale=10)\n\n        # Check if a second hand is detected\n        if len(hands) == 2:\n            # Information for the second hand\n            hand2 = hands[1]\n            lmList2 = hand2[\"lmList\"]\n            bbox2 = hand2[\"bbox\"]\n            center2 = hand2['center']\n            handType2 = hand2[\"type\"]\n\n            # Count the number of fingers up for the second hand\n            fingers2 = detector.fingersUp(hand2)\n            print(f'H2 = {fingers2.count(1)}', end=\" \")\n\n            # Calculate distance between the index fingers of both hands and draw it on the image\n            length, info, img = detector.findDistance(lmList1[8][0:2], lmList2[8][0:2], img, color=(255, 0, 0),\n                                                      scale=10)\n\n        print(\" \")  # New line for better readability of the printed output\n\n    # Display the image in a window\n    cv2.imshow(\"Image\", img)\n\n    # Keep the window open and update it for each frame; wait for 1 millisecond between frames\n    cv2.waitKey(1)\n```\n### Pose Module\n\n```python\nfrom cvzone.PoseModule import PoseDetector\nimport cv2\n\n# Initialize the webcam and set it to the third camera (index 2)\ncap = cv2.VideoCapture(2)\n\n# Initialize the PoseDetector class with the given parameters\ndetector = PoseDetector(staticMode=False,\n                        modelComplexity=1,\n                        smoothLandmarks=True,\n                        enableSegmentation=False,\n               
         smoothSegmentation=True,\n                        detectionCon=0.5,\n                        trackCon=0.5)\n\n# Loop to continuously get frames from the webcam\nwhile True:\n    # Capture each frame from the webcam\n    success, img = cap.read()\n\n    # Find the human pose in the frame\n    img = detector.findPose(img)\n\n    # Find the landmarks, bounding box, and center of the body in the frame\n    # Set draw=True to draw the landmarks and bounding box on the image\n    lmList, bboxInfo = detector.findPosition(img, draw=True, bboxWithHands=False)\n\n    # Check if any body landmarks are detected\n    if lmList:\n        # Get the center of the bounding box around the body\n        center = bboxInfo[\"center\"]\n\n        # Draw a circle at the center of the bounding box\n        cv2.circle(img, center, 5, (255, 0, 255), cv2.FILLED)\n\n        # Calculate the distance between landmarks 11 and 15 and draw it on the image\n        length, img, info = detector.findDistance(lmList[11][0:2],\n                                                  lmList[15][0:2],\n                                                  img=img,\n                                                  color=(255, 0, 0),\n                                                  scale=10)\n\n        # Calculate the angle between landmarks 11, 13, and 15 and draw it on the image\n        angle, img = detector.findAngle(lmList[11][0:2],\n                                        lmList[13][0:2],\n                                        lmList[15][0:2],\n                                        img=img,\n                                        color=(0, 0, 255),\n                                        scale=10)\n\n        # Check if the angle is close to 50 degrees with an offset of 10\n        isCloseAngle50 = detector.angleCheck(myAngle=angle,\n                                             targetAngle=50,\n                                             offset=10)\n\n        # Print the result of the angle 
check\n        print(isCloseAngle50)\n\n    # Display the frame in a window\n    cv2.imshow(\"Image\", img)\n\n    # Wait for 1 millisecond between each frame\n    cv2.waitKey(1)\n```\n### Serial Module\n\n```python\nfrom cvzone.SerialModule import SerialObject\n\n# Initialize the Arduino SerialObject with optional parameters\n# baudRate = 9600, digits = 1, max_retries = 5\narduino = SerialObject(portNo=None, baudRate=9600, digits=1, max_retries=5)\n\n# Initialize a counter to keep track of iterations\ncount = 0\n\n# Start an infinite loop\nwhile True:\n    # Increment the counter on each iteration\n    count += 1\n\n    # Print data received from the Arduino\n    # getData method returns the list of data received from the Arduino\n    print(arduino.getData())\n\n    # If the count is less than 100\n    if count \u003c 100:\n        # Send a list containing [1] to the Arduino\n        arduino.sendData([1])\n    else:\n        # If the count is 100 or greater, send a list containing [0] to the Arduino\n        arduino.sendData([0])\n\n    # Reset the count back to 0 once it reaches 200\n    # This will make the cycle repeat\n    if count == 200:\n        count = 0\n\n```\n```cpp\n#include \u003ccvzone.h\u003e\n\nSerialData serialData(1,1); //(numOfValsRec,digitsPerValRec)\n/*0 or 1 - 1 digit\n0 to 99 - 2 digits\n0 to 999 - 3 digits\n */\n//SerialData serialData;   // if not receiving, only sending\n\n\nint sendVals[2]; // min size of 2 even when sending 1\nint valsRec[1];\n\nint x = 0;\n\nvoid setup() {\n\nserialData.begin(9600);\npinMode(13,OUTPUT);\n}\n\nvoid loop() {\n\n  // ------- To Send --------\n  x += 1;\n  if (x==100){x=0;}\n  sendVals[0] = x;\n  serialData.Send(sendVals);\n\n  // ------- To Receive --------\n  serialData.Get(valsRec);\n  digitalWrite(13,valsRec[0]);\n\n}\n```\n\n\n\n### Plot Module\n#### Sine Example \n```python\nfrom cvzone.PlotModule import LivePlot\nimport cv2\nimport math\n\n\nsinPlot = LivePlot(w=1200, yLimit=[-100, 100], 
interval=0.01)\nxSin=0\n\nwhile True:\n\n    xSin += 1\n    if xSin == 360: xSin = 0\n    imgPlotSin = sinPlot.update(int(math.sin(math.radians(xSin)) * 100))\n\n    cv2.imshow(\"Image Sin Plot\", imgPlotSin)\n\n    if cv2.waitKey(1) \u0026 0xFF == ord('q'):\n        break\n\n```\n\n#### Face Detection X Value Example\n```python\nfrom cvzone.PlotModule import LivePlot\nfrom cvzone.FaceDetectionModule import FaceDetector\nimport cv2\nimport cvzone\n\ncap = cv2.VideoCapture(1)\ndetector = FaceDetector(minDetectionCon=0.85, modelSelection=0)\n\nxPlot = LivePlot(w=1200, yLimit=[0, 500], interval=0.01)\n\nwhile True:\n    success, img = cap.read()\n\n    img, bboxs = detector.findFaces(img, draw=False)\n    val = 0\n    if bboxs:\n        # Loop through each bounding box\n        for bbox in bboxs:\n            # bbox contains 'id', 'bbox', 'score', 'center'\n            # ---- Get Data  ---- #\n            center = bbox[\"center\"]\n            x, y, w, h = bbox['bbox']\n            score = int(bbox['score'][0] * 100)\n            val = center[0]\n            # ---- Draw Data  ---- #\n            cv2.circle(img, center, 5, (255, 0, 255), cv2.FILLED)\n            cvzone.putTextRect(img, f'{score}%', (x, y - 10))\n            cvzone.cornerRect(img, (x, y, w, h))\n\n    imgPlot = xPlot.update(val)\n\n    cv2.imshow(\"Image Plot\", imgPlot)\n    cv2.imshow(\"Image\", img)\n\n    if cv2.waitKey(1) \u0026 0xFF == ord('q'):\n        break\n\n```","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcvzone%2Fcvzone","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fcvzone%2Fcvzone","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fcvzone%2Fcvzone/lists"}