Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/hemanthh17/ct5129-thesis-image-dehazing
ClearFlow: An Enhanced Image Dehazing Workflow Using Model Ensembling and CNN Postprocessing. This project aims to train a neural network to handle the task of image dehazing.
- Host: GitHub
- URL: https://github.com/hemanthh17/ct5129-thesis-image-dehazing
- Owner: hemanthh17
- Created: 2024-05-28T12:16:32.000Z (7 months ago)
- Default Branch: main
- Last Pushed: 2024-08-21T22:40:07.000Z (4 months ago)
- Last Synced: 2024-08-22T14:30:22.896Z (4 months ago)
- Topics: computer-vision, deep-learning, deep-neural-networks, torch, torchvision
- Language: Jupyter Notebook
- Homepage:
- Size: 54.2 MB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
README
# CT5129 - Artificial Intelligence Project
## Topic: ClearFlow: An Enhanced Image Dehazing Workflow Using Model Ensembling and CNN Postprocessing 📸 🌫
### Name: Hemanth Harikrishnan
### ID: 23105030

The method uses deep convolutional networks with Channel and Pixel Attention to perform the dehazing process. Model training was tracked with Weights & Biases, and the logs can be found here:
[Weights & Biases report](https://api.wandb.ai/links/hemanthh17/k66xssih)
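The exact network definition lives in the training notebooks; purely as an illustrative sketch, channel and pixel attention blocks in the style popularised by dehazing networks such as FFA-Net could look like the following (class names, channel counts, and the reduction factor are assumptions, not taken from this repository):

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Reweights feature channels using a squeeze-and-excitation style gate."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average over the spatial dimensions
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(self.pool(x))  # per-channel rescaling


class PixelAttention(nn.Module):
    """Reweights every spatial location with a single-channel attention map."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gate(x)  # per-pixel rescaling, broadcast over channels
```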
#### Dataset 🖥
The datasets used are:
- RESIDE (ITS and SOTS)
- I-Haze (NTIRE-2018)
- O-Haze (NTIRE-2018)
- Dense-Haze (NTIRE-2019)
- NH-Haze (NTIRE-2020)

The datasets can be found on Kaggle:

[![Kaggle](https://img.shields.io/badge/Kaggle-blue)](https://www.kaggle.com/datasets/hemanthhari/dehazing-dataset-thesis)
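For orientation only, the snippet below sketches how paired hazy/clear images could be loaded with `torch` and `torchvision`; the folder layout, file extension, and matching-filename convention are illustrative and may not match the notebooks:

```python
from pathlib import Path

from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms


class HazyClearDataset(Dataset):
    """Yields (hazy, clear) tensors for image pairs that share a filename."""

    def __init__(self, hazy_dir: str, clear_dir: str, size: int = 256):
        self.hazy_paths = sorted(Path(hazy_dir).glob("*.png"))  # assumed extension
        self.clear_dir = Path(clear_dir)
        self.tf = transforms.Compose([
            transforms.Resize((size, size)),
            transforms.ToTensor(),
        ])

    def __len__(self) -> int:
        return len(self.hazy_paths)

    def __getitem__(self, idx: int):
        hazy_path = self.hazy_paths[idx]
        clear_path = self.clear_dir / hazy_path.name  # assumes matching filenames
        hazy = self.tf(Image.open(hazy_path).convert("RGB"))
        clear = self.tf(Image.open(clear_path).convert("RGB"))
        return hazy, clear
```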
#### Training the model
The following steps need to be completed to train the model:
- Make sure the requirements are satisfied.
```sh
pip install -r requirements.txt
```
- Download the dataset from the Kaggle URL mentioned above.
- Check for the presence of a GPU (a PyTorch-based check is also sketched after this list)
```sh
nvidia-smi
```
- Train the model using the `DWT+DehazerNet-MSE-Perceptual Training Script.ipynb` notebook
- Save the model
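If `nvidia-smi` is not available, the same GPU check can be done from Python with PyTorch (a minimal sketch, not one of the repository's scripts):

```python
import torch

# Use the GPU when PyTorch can see one, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"Training device: {device}")
if device.type == "cuda":
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```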
#### Testing the model
The model can be tested using `inference-script.ipynb`. The results will be saved in a separate folder, and the process can easily be run on a CPU.
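The actual pipeline lives in `inference-script.ipynb`; the loop below is only a rough sketch of the idea, and the checkpoint name, input folder, and output folder are placeholders rather than the repository's real identifiers:

```python
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms
from torchvision.utils import save_image

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")  # CPU works fine here

# Placeholder: assumes the training notebook saved the whole model object with torch.save(model, ...).
model = torch.load("dehazer.pth", map_location=device)
model.eval()

to_tensor = transforms.ToTensor()
out_dir = Path("results")
out_dir.mkdir(exist_ok=True)

with torch.no_grad():
    for img_path in sorted(Path("hazy_inputs").glob("*.png")):  # placeholder input folder
        hazy = to_tensor(Image.open(img_path).convert("RGB")).unsqueeze(0).to(device)
        dehazed = model(hazy).clamp(0, 1)  # assumes the model maps a hazy image to a dehazed image
        save_image(dehazed, out_dir / img_path.name)  # results go to a separate folder
```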
#### Note 📝
The paths should be changed according to where your copy of each dataset is located; the paths in the code follow the Kaggle dataset structure.