Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/scorleos773/snap-scan
Last synced: 25 days ago
- Host: GitHub
- URL: https://github.com/scorleos773/snap-scan
- Owner: SCORLEOs773
- Created: 2025-01-11T03:04:20.000Z (about 1 month ago)
- Default Branch: main
- Last Pushed: 2025-01-11T03:37:48.000Z (about 1 month ago)
- Last Synced: 2025-01-11T04:29:14.782Z (about 1 month ago)
- Language: Python
- Size: 1.95 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# Snap-Scan: Object Detection with YOLOv8
This project demonstrates object detection using the YOLOv8 model. The program detects objects in an input image, identifies sub-objects (e.g., helmets for people, tires for cars, etc.), and saves the results, including cropped sub-object images and annotated visuals.
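`detect_objects.py` is not reproduced in this README, so the snippet below is only a minimal sketch of the core detection step, assuming the `ultralytics` package and the file names listed under Folder Structure (`yolov8n.pt`, `sample.jpeg`, `annotated_sample.jpg`):

```python
from ultralytics import YOLO
import cv2

model = YOLO("yolov8n.pt")          # pre-trained YOLOv8 nano weights
results = model("sample.jpeg")      # run inference on the input image

# Save an annotated copy of the input with bounding boxes drawn.
annotated = results[0].plot()       # BGR numpy array with boxes rendered
cv2.imwrite("annotated_sample.jpg", annotated)

# Inspect each detection: class name and [x1, y1, x2, y2] pixel coordinates.
for box in results[0].boxes:
    name = model.names[int(box.cls)]
    x1, y1, x2, y2 = box.xyxy[0].tolist()
    print(name, [round(v, 2) for v in (x1, y1, x2, y2)])
```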
---
## Features
- Detect objects in an image using YOLOv8.
- Identify sub-objects for specific detected objects (see the sketch after this list):
  - Helmet for "person"
  - Tire for "car"
  - "Unknown" for other objects
- Save annotated images with bounding boxes.
- Generate a JSON file containing object and sub-object details.
- Save cropped images of detected sub-objects.

---
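The sub-object labels are assigned by fixed rules rather than a second detector (see Notes). A minimal sketch of such a rule table, assuming a plain dictionary lookup (the exact rules in `detect_objects.py` may differ):

```python
# Hypothetical rule table mapping a detected object class to the sub-object
# label reported for it; any other class falls back to "Unknown".
SUB_OBJECT_RULES = {
    "person": "Helmet",
    "car": "Tire",
}

def sub_object_for(object_name: str) -> str:
    """Return the sub-object label associated with a detected object class."""
    return SUB_OBJECT_RULES.get(object_name, "Unknown")

print(sub_object_for("person"))  # Helmet
print(sub_object_for("dog"))     # Unknown
```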
## Folder Structure
```
ENV/
├── Include/
├── Lib/
├── Scripts/
├── share/
├── sub_objects/           # Cropped sub-object images are saved here
├── annotated_sample.jpg   # Annotated image with bounding boxes
├── detect_objects.py      # Main Python script for object detection
├── output.json            # JSON file with object and sub-object details
├── README.md              # Project documentation
├── sample.jpeg            # Input image for object detection
└── yolov8n.pt             # Pre-trained YOLOv8 model weights
```

---
## Setup Instructions
1. **Clone the Repository**
```bash
git clone https://github.com/scorleos773/snap-scan.git
cd snap-scan
```

2. **Set Up the Python Environment**
```bash
python -m venv ENV
ENV\Scripts\activate      # On Windows
source ENV/bin/activate # On macOS/Linux
pip install -r requirements.txt
```
3. **Place Input Image**
Replace ```sample.jpeg``` with your input image.

## Usage
1. Run the Script
```bash
python detect_objects.py
```

2. Results
a. Annotated Image: ```annotated_sample.jpg```
b. JSON Output: ```output.json```
c. Cropped Sub-Objects: Saved in the ```sub_objects/``` folder.
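How the crops in ```sub_objects/``` are produced is not shown in this README; a minimal sketch, assuming OpenCV and the `[x1, y1, x2, y2]` bounding-box format from the sample output below (the helper name and file naming scheme are hypothetical):

```python
import os
import cv2

def save_sub_object_crop(image_path, bbox, parent_name, sub_name, object_id,
                         out_dir="sub_objects"):
    """Crop a sub-object region from the input image and save it as a JPEG."""
    os.makedirs(out_dir, exist_ok=True)
    image = cv2.imread(image_path)
    x1, y1, x2, y2 = (int(round(v)) for v in bbox)   # pixel coordinates
    crop = image[y1:y2, x1:x2]
    out_path = os.path.join(out_dir, f"{parent_name}_{object_id}_{sub_name}.jpg")
    cv2.imwrite(out_path, crop)
    return out_path

# Example with the sub-object bounding box from the sample JSON output below.
save_sub_object_crop("sample.jpeg", [60.34, 110.56, 190.45, 290.67],
                     "person", "Helmet", 1)
```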

## Sample JSON Output
```json
[
  {
    "object": "person",
    "id": 1,
    "bbox": [50.34, 100.56, 200.45, 300.67],
    "subobject": {
      "object": "Helmet",
      "id": 1,
      "bbox": [60.34, 110.56, 190.45, 290.67]
    }
  }
]
```
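Downstream code can read this structure back with the standard `json` module; a short sketch:

```python
import json

# Load the detections written by detect_objects.py.
with open("output.json") as f:
    detections = json.load(f)

for det in detections:
    sub = det["subobject"]
    print(f'{det["object"]} #{det["id"]}: bbox={det["bbox"]}, '
          f'sub-object: {sub["object"]} at {sub["bbox"]}')
```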

## Notes
1. The yolov8n.pt model is pre-trained on the COCO dataset.
2. Sub-object detection is simulated and depends on predefined rules.
3. The JSON output includes bounding box coordinates for both objects and sub-objects.

## Acknowledgments
1. Ultralytics YOLO
2. COCO Dataset