https://github.com/anshlulla/image-segmentation-for-self-driving-cars
Segmenting various features of the road to make it easier for self-driving cars to make decisions, using image processing techniques and a U-Net neural network
- Host: GitHub
- URL: https://github.com/anshlulla/image-segmentation-for-self-driving-cars
- Owner: Anshlulla
- Created: 2024-05-12T21:09:54.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-05-15T08:33:31.000Z (over 1 year ago)
- Last Synced: 2025-06-07T09:44:21.700Z (4 months ago)
- Topics: image-processing, image-segmentation, python, self-driving-cars, tensorflow
- Language: Jupyter Notebook
- Homepage:
- Size: 2.31 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
# Image-Segmentation-for-Self-Driving-Cars
## Overview
This project focuses on image segmentation for self-driving cars using the U-Net architecture. U-Net is a convolutional neural network designed for image segmentation tasks, making it well suited to identifying and localizing objects in real time. The model is trained on a dataset of labeled images containing objects relevant to autonomous driving, such as vehicles, pedestrians, and traffic signs. The goal is to develop a robust model that segments different features of the road scene to enhance the safety and performance of self-driving cars.

## Workflow
### Data Collection:
Obtain a dataset of labeled images with annotations for objects of interest (e.g., vehicles, pedestrians, traffic signs) relevant to self-driving cars.

### Data Pre-processing:
Resize images to a consistent resolution suitable for model input.
Normalize pixel values to a standardized range (e.g., 0 to 1).
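
A minimal sketch of these two steps, assuming image/mask pairs are already loaded as tensors; the target resolution is an assumption, not taken from the notebook:

```python
import tensorflow as tf

IMG_SIZE = (256, 256)  # assumed target resolution; the notebook's exact value may differ

def preprocess(image, mask):
    """Resize an image/mask pair and scale pixel values to the [0, 1] range."""
    image = tf.image.resize(image, IMG_SIZE)
    # Nearest-neighbour resizing keeps mask values as valid integer class labels
    mask = tf.image.resize(mask, IMG_SIZE, method="nearest")
    image = tf.cast(image, tf.float32) / 255.0
    return image, mask
```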

### Data Augmentation:
Apply augmentation techniques such as rotation, flipping, scaling, and random cropping to increase the diversity of the training dataset and improve model generalization and robustness.
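
A rough sketch of this step using `tf.image` ops; the transforms and parameters used in the notebook may differ, and the mask is assumed to have a single channel. Stateless ops with a shared seed keep the image and its mask aligned:

```python
import tensorflow as tf

IMG_SIZE = 256  # assumed training resolution

def augment(image, mask, seed=(1, 2)):
    """Apply the same random flip, scale + crop, and 90-degree rotation to an image/mask pair."""
    # Horizontal flip: stateless ops with a shared seed transform image and mask identically
    image = tf.image.stateless_random_flip_left_right(image, seed)
    mask = tf.image.stateless_random_flip_left_right(mask, seed)

    # Scale up slightly, then take a random crop back to the training resolution
    scaled = int(IMG_SIZE * 1.15)
    image = tf.image.resize(image, (scaled, scaled))
    mask = tf.image.resize(mask, (scaled, scaled), method="nearest")
    image = tf.image.stateless_random_crop(image, [IMG_SIZE, IMG_SIZE, 3], seed=seed)
    mask = tf.image.stateless_random_crop(mask, [IMG_SIZE, IMG_SIZE, 1], seed=seed)

    # Rotation in 90-degree steps as a simple approximation of arbitrary-angle rotation
    k = tf.random.uniform([], 0, 4, dtype=tf.int32)
    return tf.image.rot90(image, k), tf.image.rot90(mask, k)
```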

### Model Training:
Train the U-Net model using the augmented training dataset.
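
A minimal training sketch, assuming `train_ds` and `val_ds` are `tf.data` pipelines built from the steps above and `unet_model` is the builder sketched under Model Architecture below; all names, the class count, and the hyperparameters are illustrative:

```python
NUM_CLASSES = 13  # placeholder: set to the number of labels in the chosen dataset

model = unet_model(num_classes=NUM_CLASSES)
model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",  # assumes integer-encoded masks
    metrics=["accuracy"],
)
history = model.fit(train_ds, validation_data=val_ds, epochs=20)
```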

### Model Evaluation:
Evaluate the trained model's performance using metrics such as Intersection over Union (IoU) and Mean Average Precision (mAP) on a validation dataset.
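
A short sketch of the IoU part of this step with `tf.keras.metrics.MeanIoU`, reusing the illustrative names from the training sketch above:

```python
import tensorflow as tf

# Reuses `model`, `val_ds`, and NUM_CLASSES from the training sketch above.
miou = tf.keras.metrics.MeanIoU(num_classes=NUM_CLASSES)
for images, masks in val_ds:
    # Convert per-pixel class probabilities to label maps before updating the metric
    preds = tf.argmax(model.predict(images, verbose=0), axis=-1)
    miou.update_state(masks, preds)
print("Mean IoU:", float(miou.result()))
```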

## Model Architecture
The U-Net architecture consists of an encoder-decoder structure with skip connections. It efficiently captures both local and global features, making it well suited to pixel-level segmentation tasks such as road-scene parsing.
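
A compact Keras sketch of this encoder-decoder-with-skip-connections design; the depth and filter counts are illustrative and may not match the notebook's exact model:

```python
import tensorflow as tf
from tensorflow.keras import layers

def unet_model(num_classes, input_shape=(256, 256, 3)):
    """Small U-Net: downsampling encoder, upsampling decoder, skip connections."""
    inputs = tf.keras.Input(shape=input_shape)

    # Encoder: conv blocks followed by max-pooling, keeping outputs for the skips
    skips, x = [], inputs
    for filters in (64, 128, 256):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D(2)(x)

    # Bottleneck
    x = layers.Conv2D(512, 3, padding="same", activation="relu")(x)

    # Decoder: upsample, concatenate the matching skip, then convolve
    for filters, skip in zip((256, 128, 64), reversed(skips)):
        x = layers.Conv2DTranspose(filters, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip])
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

    # Per-pixel class scores
    outputs = layers.Conv2D(num_classes, 1, activation="softmax")(x)
    return tf.keras.Model(inputs, outputs)
```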