Project developed by Pietro Bologna (@bolognapietro) and Christian Sassi for the Computer Vision course.
- Host: GitHub
- URL: https://github.com/christiansassi/computer-vision-project
- Owner: christiansassi
- Created: 2024-04-10T07:00:55.000Z (10 months ago)
- Default Branch: main
- Last Pushed: 2025-01-23T17:37:00.000Z (19 days ago)
- Last Synced: 2025-01-23T18:29:59.612Z (19 days ago)
- Topics: computer-vision, detection, stitching, tracking, volleyball
- Language: Python
- Size: 201 MB
- Stars: 0
- Watchers: 2
- Forks: 1
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Top view stitching and tracking (tracking and geometry)
# Table of contents
- [Project Overview](#project-overview)
- [Code Overview](#code-overview)
- [Top-View Court Stitching](#top-view-court-stitching)
- [Object Detection on Top-View Images](#object-detection-on-top-view-images)
- [Object Tracking](#object-tracking)
- [Ball Detection and Tracking](#ball-detection-and-tracking)
- [Color-based team identification](#color-based-team-identification)
- [Project structure](#project-structure)
- [Getting Started](#getting-started)
- [Resources](#resources)
- [Contacts](#contacts)

# Project Overview
This project focuses on processing video footage captured at the Sanbapolis facility in Trento. The main objectives are:
- **Top-View Court Stitching**: The facility has three distinct camera views—top, center, and bottom—each captured by four cameras. Initially, the images from cameras within the same view are stitched together. Then, these stitched views (top, center, and bottom) are merged to create a cohesive top-down view of the entire volleyball court.
- **Object Detection on Top-View Images**: Various object detection algorithms were applied to the stitched top-view images, including frame subtraction, background subtraction, adaptive background subtraction, and Gaussian averaging. After evaluating these methods, background subtraction was selected as the most effective for detecting objects on the court.
- **Object Tracking**: Particle filtering was implemented for tracking detected objects (bounding boxes). Given its performance, no further methods were explored.
- **Ball Detection and Tracking**: YOLO (You Only Look Once) was used for ball detection and tracking. YOLO's efficiency proved to be particularly effective due to the ball's high velocity and potential distortion in certain frames.
- **Color-based Team Identification**: Since the footage is volleyball, the net was used to separate the two teams. Color-based identification proved challenging because players from both teams wear uniforms of similar colors.

# Code Overview
## Top-View Court Stitching
In the stitching phase, the images from each view (top, center, bottom) are stitched together. This process involves initially stitching the images from cameras within the same view, followed by combining the views to create a seamless top-down representation of the entire court. Due to the complexity of this task, a filtering algorithm was developed to discard low-quality matches based on the inclination of the line connecting two features.
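The slope-based filter can be sketched as follows (a minimal illustration, not the project's exact code; the match format and the near-horizontal inclination assumed here are hypothetical and depend on the camera layout):

```python
import math

def filter_matches_by_slope(matches, max_angle_deg=10.0):
    """Discard matches whose connecting line deviates too far from the
    expected inclination (assumed near-horizontal here).

    `matches` is a list of ((x1, y1), (x2, y2)) keypoint pairs, the
    second point being in the neighbouring image's coordinate frame.
    """
    kept = []
    for (x1, y1), (x2, y2) in matches:
        # Inclination of the line connecting the two matched features.
        angle = math.degrees(math.atan2(y2 - y1, x2 - x1))
        if abs(angle) <= max_angle_deg:
            kept.append(((x1, y1), (x2, y2)))
    return kept
```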
The final result of the stitching process is shown below:
*Stitched image*

One key consideration is that objects positioned higher in the frame are more likely to be cut off at the stitching seams due to the camera angles. For example:
*Example of a player being cut off due to stitching artifacts*

To improve performance, the stitching parameters are cached so they are not recalculated on every iteration.
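Caching can be as simple as persisting the estimated homography to disk and reloading it on later runs (a sketch; the file name and the `compute_fn` hook are hypothetical, not the project's actual interface):

```python
import os
import numpy as np

CACHE_FILE = "stitching_params.npy"  # hypothetical cache location

def get_homography(compute_fn, cache_file=CACHE_FILE):
    """Return the 3x3 homography, recomputing it only on a cache miss."""
    if os.path.exists(cache_file):
        return np.load(cache_file)
    H = compute_fn()        # expensive feature matching + RANSAC
    np.save(cache_file, H)  # reused on every subsequent frame/run
    return H
```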
## Object Detection on Top-View Images
Various object detection algorithms were tested on the stitched top-view images. The methods considered include frame subtraction, background subtraction, adaptive background subtraction, and Gaussian averaging. Background subtraction was found to be the most effective.
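A minimal background-subtraction pipeline over grayscale NumPy frames might look like this (illustrative only; the project's background model and threshold values may differ):

```python
import numpy as np

def build_background(frames):
    """Median over an initial batch of grayscale frames approximates the
    static background (moving players average out)."""
    return np.median(np.stack(frames), axis=0)

def subtract_background(frame, background, threshold=30):
    """Binary foreground mask: pixels differing from the background by
    more than `threshold` grey levels are flagged as motion."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```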
The detection process begins with thresholding the image to highlight the most relevant areas, followed by dilation to account for any stitching errors. Small areas are discarded to focus on significant objects:
*Thresholded and dilated image*

Next, contours are filtered against the volleyball court's boundaries: objects that overlap the court area by at least 25% are retained:
*Volleyball field mask*

Combining these steps produces the final motion detection output.
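The court-boundary filter can be sketched as follows, assuming a binary court mask and `(x, y, w, h)` boxes (the 25% threshold comes from the text; everything else is an illustrative assumption):

```python
import numpy as np

def filter_by_court(boxes, court_mask, min_overlap=0.25):
    """Keep bounding boxes whose area overlaps the binary court mask by
    at least `min_overlap` (25% in this project)."""
    kept = []
    for (x, y, w, h) in boxes:
        region = court_mask[y:y + h, x:x + w]
        # Mean of a binary mask = fraction of the box lying on the court.
        if region.size and region.mean() >= min_overlap:
            kept.append((x, y, w, h))
    return kept
```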
*Motion detection*

## Object Tracking
Particle filtering was chosen for object tracking: a new particle system is initialized for each detected bounding box, and over several iterations each system refines its position estimate. Initially, the particles exhibit chaotic behavior:
*Initial particle system*

As iterations proceed, the particle system becomes more accurate:
*Particle system after some iterations*

Finally, the particle system is used to predict the direction of the moving object. While it performs well overall, it struggles with sudden, fast movements and needs several iterations to readjust.
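A minimal particle filter over a bounding-box centre might look like this (an illustrative sketch, not the project's implementation; the particle count and noise scale are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

class ParticleTracker:
    """Toy particle filter tracking a 2D bounding-box centre."""

    def __init__(self, center, n_particles=200, noise=5.0):
        self.noise = noise
        # Seed the particle cloud around the first detection.
        self.particles = rng.normal(center, noise, size=(n_particles, 2))

    def step(self, measurement):
        # Predict: diffuse particles with Gaussian motion noise.
        self.particles += rng.normal(0.0, self.noise, self.particles.shape)
        # Update: weight particles by proximity to the new detection.
        d2 = np.sum((self.particles - measurement) ** 2, axis=1)
        weights = np.exp(-d2 / (2 * self.noise ** 2))
        weights /= weights.sum()
        # Resample: draw particles proportionally to their weights.
        idx = rng.choice(len(self.particles), len(self.particles), p=weights)
        self.particles = self.particles[idx]
        return self.particles.mean(axis=0)  # state estimate
```

Because the cloud needs a few predict/update/resample cycles to migrate, a sudden jump in the measurement (a fast-moving object) takes several steps to catch up with, which matches the behavior described above.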
*Motion tracking*

> [!NOTE]
> While particle systems may not be the best option for all tracking scenarios, they performed well for this project. Other methods might be more appropriate for rapid changes in object movement.

## Ball Detection and Tracking
For ball detection and tracking, YOLO (You Only Look Once) was used. Given the high velocity of the ball, traditional methods often resulted in distortion, making it difficult to detect. To overcome this, a custom dataset was created by manually extracting 1,000 images from the video, each with a labeled bounding box around the ball.
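Each annotation in such a dataset is typically stored in YOLO's label format: one `class cx cy w h` line per box, all values normalised to [0, 1]. A small converter (a hypothetical helper, not taken from the project):

```python
def to_yolo_label(box, img_w, img_h, class_id=0):
    """Convert a pixel-space box (x, y, w, h) into a YOLO annotation
    line: `class cx cy w h`, normalised by the image dimensions."""
    x, y, w, h = box
    cx = (x + w / 2) / img_w
    cy = (y + h / 2) / img_h
    return f"{class_id} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"
```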
A YOLOv11 model was then trained on this dataset, enabling accurate detection. The same particle-system technique used for player tracking was applied to track the ball's movement:
*Ball detection and tracking*

As with player tracking, the particle system may need a few iterations to adapt to rapid movements, which can lead to inaccurate predictions in the meantime.
> [!NOTE]
> Although particle systems can face challenges in tracking fast-moving objects, the ball's movement is more predictable, making the technique more effective in this case.

## Color-based Team Identification
Given that this project processes volleyball videos, the optimal method for team identification was to use the net to separate the two teams. This approach is particularly effective since players from different teams in volleyball are generally positioned on opposite sides of the net.
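With the net's position known in the stitched top view, team assignment reduces to checking which side of the net a detection's centre falls on (a sketch; the horizontal-net convention and the `net_y` parameter are assumptions, not the project's exact code):

```python
def assign_team(box, net_y):
    """Assign a detection to a team by which side of the net its centre
    lies on, with the net assumed horizontal at row `net_y` in the
    stitched top view."""
    x, y, w, h = box
    cy = y + h / 2
    return "team_a" if cy < net_y else "team_b"
```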
However, color-based team identification faced challenges due to the similarity in uniform colors between the two teams.
*Color-based team identification applied to distinct colors*

As the histograms show, the two teams' uniform colors were highly similar, making them difficult to distinguish by color alone. Had the uniforms differed more, color-based identification would have been far more effective.
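The histogram comparison underlying this observation can be sketched with plain NumPy (illustrative; the project may bin and compare colours differently):

```python
import numpy as np

def hue_histogram(hues, bins=32):
    """Normalised hue histogram over the pixels inside a player's box
    (hue range 0-180, as in OpenCV's HSV convention)."""
    hist, _ = np.histogram(hues, bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)

def histogram_similarity(h1, h2):
    """Cosine similarity between two histograms; values near 1 mean the
    two uniforms are hard to separate by colour alone."""
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom else 0.0
```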
While this method is fast and efficient, it has some limitations. For instance, when players from both teams are near the net, they may be merged into a single bounding box, leading to misclassification of one team. Using YOLO for more precise detection could mitigate this issue.
*Two bounding boxes near the net merged into a single bounding box, resulting in misclassification*

# Project structure
```text
.
├── assets # Images
├── models # YOLO11 model
├── libs # Source files
└── videos
├── cut # Cut videos (private)
├── original # Original videos (private)
└── processed # Processed videos (private)
```

# Getting Started
1. Set up the workspace:
```bash
git clone https://github.com/christiansassi/computer-vision-project
cd computer-vision-project
pip install -r requirements.txt
```

2. Run the [main.py](main.py) script:
```bash
python3 main.py
python3 main.py -live # Run in live mode
```

> [!WARNING]
> Due to privacy reasons, the video files cannot be shared.
*Demo*

*Tracking plot*

# Resources
- [Report](https://github.com/christiansassi/computer-vision-project/blob/d302cc1fa7fd1fcb81e19c02c43ca008ac82afa0/report/Top_view_stitching_and_tracking__tracking_and_geometry_.pdf)
- [Presentation](https://www.canva.com/design/DAGXGWPdaG0/6xN45w81QcofHTlDwvXiJA/edit?utm_content=DAGXGWPdaG0&utm_campaign=designshare&utm_medium=link2&utm_source=sharebutton)
- [Video](https://www.youtube.com/watch?v=kBeKlCkUAaY)
# Contacts
Pietro Bologna - [[email protected]](mailto:[email protected])
Christian Sassi - [[email protected]](mailto:[email protected])