Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/spirizeon/neutron-hacks
Last synced: about 7 hours ago
JSON representation
- Host: GitHub
- URL: https://github.com/spirizeon/neutron-hacks
- Owner: Spirizeon
- License: mit
- Created: 2024-11-06T04:39:24.000Z (2 days ago)
- Default Branch: main
- Last Pushed: 2024-11-06T10:13:13.000Z (2 days ago)
- Last Synced: 2024-11-06T11:20:30.377Z (2 days ago)
- Size: 1000 Bytes
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
README
# README
## Overview
This project uses Detectron2 and Flask to serve a dashboard that streams a processed video feed along with simulated sensor data. It employs a Faster R-CNN model to detect objects in each frame of a video, highlights them with bounding boxes, and applies a simple "restoration" effect to the detected areas. The app runs a real-time dashboard that displays both the original and processed video frames along with simulated sensor readings for light, temperature, and humidity.

## Model Specifications
- **Model Type**: Faster R-CNN (Region-based Convolutional Neural Network)
- **Base Architecture**: ResNet-50 with Feature Pyramid Network (FPN)
- **Framework**: Detectron2 (built on PyTorch)
- **Dataset**: Pre-trained on COCO Dataset (common objects in context)
- **Configuration**: `faster_rcnn_R_50_FPN_3x.yaml`
- **Confidence Threshold**: 0.5
- **Device**: CPU (configurable to GPU if available; see the initialization sketch below)
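With these specifications, the model setup might look roughly like the following minimal sketch (the function name matches `initialize_frcnn_model()` from the code overview below, but the exact implementation in `app.py` may differ):

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

def initialize_frcnn_model(device="cpu"):
    # Load the COCO-pretrained Faster R-CNN (ResNet-50 + FPN) configuration.
    cfg = get_cfg()
    cfg.merge_from_file(
        model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
    )
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
        "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"
    )
    # Keep only detections scoring at or above the 0.5 confidence threshold.
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5
    # Run on CPU by default; pass device="cuda" when a GPU is available.
    cfg.MODEL.DEVICE = device
    return DefaultPredictor(cfg)
```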
## Features

- **Object Detection**: Uses Faster R-CNN to detect objects in the video feed.
- **Frame Processing**: Frames are processed to include bounding boxes and enhanced contrast for detected areas (see the sketch after this list).
- **Sensor Data Simulation**: Light, temperature, and humidity data are simulated and displayed alongside the video stream.
- **Dashboard Interface**: Displays the original and restored video frames, annotated with real-time sensor data.
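The frame-processing step could be sketched as follows. This assumes the predictor from the initialization sketch above; the exact drawing and contrast values used in `app.py` may differ:

```python
import cv2

def process_frame(frame, predictor):
    # Run Faster R-CNN on the BGR frame; Detectron2 returns an "instances" field.
    outputs = predictor(frame)
    boxes = outputs["instances"].to("cpu").pred_boxes.tensor.numpy().astype(int)

    processed = frame.copy()
    for x1, y1, x2, y2 in boxes:
        # Simple "restoration" effect: boost contrast inside the detected region.
        roi = processed[y1:y2, x1:x2]
        processed[y1:y2, x1:x2] = cv2.convertScaleAbs(roi, alpha=1.3, beta=10)
        # Draw the bounding box around the detected object.
        cv2.rectangle(processed, (x1, y1), (x2, y2), (0, 255, 0), 2)
    return processed
```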
## Setup and Installation

### Prerequisites
- Python 3.7 or higher
- [Detectron2](https://github.com/facebookresearch/detectron2) and OpenCV libraries

### Install dependencies
1. Clone the repository and navigate to the project directory:
```bash
git clone git@github.com:spirizeon/neutron-hacks
cd neutron-hacks
```
2. Set up a virtual environment (recommended):
```bash
python3 -m venv venv
source venv/bin/activate # On Windows, use `venv\Scripts\activate`
```
3. Install required packages:
```bash
pip install -r requirements.txt
```

### Install Detectron2
For Detectron2, follow the [official installation instructions](https://detectron2.readthedocs.io/en/latest/tutorials/install.html), or use the following commands:
```bash
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
```

### Usage
1. **Place the Video**: Ensure that a video file (e.g., `stock.webm`) is in the same directory as the script or update the path in the `video_feed` route if needed.
2. **Run the Application**:
```bash
python app.py
```
3. **Access the Dashboard**: Open a browser and go to `http://127.0.0.1:5000/` to view the real-time dashboard.

## Code Overview
- `initialize_frcnn_model()`: Initializes the Faster R-CNN model with a COCO-trained configuration.
- `process_frame(frame)`: Processes each video frame, drawing bounding boxes and applying contrast enhancement to detected objects.
- `generate_sensor_data()`: Simulates sensor data for light, temperature, and humidity readings.
- **Routes**:
  - `/`: Renders the dashboard page.
  - `/video_feed`: Streams the processed video feed with the original and restored frames.
  - `/sensor_data`: Returns simulated sensor data in JSON format (see the route sketch below).
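Putting the routes together, the Flask side might be wired up roughly as below. This sketch builds on the `initialize_frcnn_model()` and `process_frame()` sketches above; the template name `dashboard.html` and the simulated sensor ranges are illustrative assumptions, not taken from the project:

```python
import random

import cv2
from flask import Flask, Response, jsonify, render_template

app = Flask(__name__)
predictor = initialize_frcnn_model()  # see the sketch under "Model Specifications"

def generate_sensor_data():
    # Simulated readings; the ranges here are illustrative.
    return {
        "light": round(random.uniform(100, 1000), 1),
        "temperature": round(random.uniform(18, 32), 1),
        "humidity": round(random.uniform(30, 80), 1),
    }

@app.route("/")
def index():
    # Render the dashboard page (template name assumed).
    return render_template("dashboard.html")

@app.route("/video_feed")
def video_feed():
    def stream():
        cap = cv2.VideoCapture("stock.webm")  # update this path for your own video
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            # Show the original and restored frames side by side.
            combined = cv2.hconcat([frame, process_frame(frame, predictor)])
            _, buffer = cv2.imencode(".jpg", combined)
            yield (b"--frame\r\n"
                   b"Content-Type: image/jpeg\r\n\r\n" + buffer.tobytes() + b"\r\n")
        cap.release()

    return Response(stream(), mimetype="multipart/x-mixed-replace; boundary=frame")

@app.route("/sensor_data")
def sensor_data():
    return jsonify(generate_sensor_data())

if __name__ == "__main__":
    app.run(debug=True)
```

The `multipart/x-mixed-replace` response is a common way to push an MJPEG stream to the browser without WebSockets, which fits the simple dashboard described above.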
## Notes

- Ensure that your video file path is correct.
- For faster processing, use a GPU-enabled environment if available by setting `cfg.MODEL.DEVICE` to `"cuda"`, as sketched below.
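For instance, with the initialization sketch above, the device could be chosen at startup (a hypothetical snippet, not code from the repository):

```python
import torch

# Prefer the GPU when one is available; fall back to CPU otherwise.
device = "cuda" if torch.cuda.is_available() else "cpu"
predictor = initialize_frcnn_model(device=device)
```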