# EnlightenGAN: Deep Light Enhancement without Paired Supervision
[Yifan Jiang](https://yifanjiang19.github.io/), Xinyu Gong, Ding Liu, Yu Cheng, Chen Fang, Xiaohui Shen, Jianchao Yang, Pan Zhou, Zhangyang Wang

[[Paper]](https://arxiv.org/abs/1906.06972) [[Supplementary Materials]](https://yifanjiang.net/files/EnlightenGAN_Supplementary.pdf)
### Representative Results
![representative_results](/assets/show_3.png)

### Overall Architecture
![architecture](/assets/arch.png)

## Environment Preparation
```
python3.5
```
You should prepare at least three 1080 Ti GPUs, or reduce the batch size accordingly.

```
pip install -r requirement.txt
```
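If you want to check your setup first, a quick PyTorch sanity check (an illustration, not part of the repository) can confirm how many GPUs are visible:

```python
# Sanity check: the default batch size assumes at least three 1080 Ti GPUs
# (see above). Requires the PyTorch install from requirement.txt.
import torch

n = torch.cuda.device_count()
print(f"{n} CUDA device(s) visible")
if n < 3:
    print("Fewer than 3 GPUs detected: reduce the batch size in the training options.")
```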
Create a directory for the pretrained weights:

```
mkdir model
```
Download the VGG pretrained model from [[Google Drive 1]](https://drive.google.com/file/d/1IfCeihmPqGWJ0KHmH-mTMi_pn3z3Zo-P/view?usp=sharing), and then put it into the `model` directory.
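As a quick check that the download succeeded, you can try deserializing the weights with `torch.load`; the filename `vgg16.weight` below is an assumption, so substitute whatever name the downloaded file actually has:

```python
# Illustration only: verify the downloaded VGG weights deserialize.
# `model/vgg16.weight` is an assumed path/filename.
import torch

state = torch.load("model/vgg16.weight", map_location="cpu")
print("Loaded object of type:", type(state))
```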
### Training process

Before starting the training process, launch the `visdom.server` for visualization:

```
nohup python -m visdom.server -port=8097
```
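Optionally, you can confirm the server is reachable before training (this snippet is an illustration and assumes the `visdom` Python client is installed, since the training code uses it):

```python
# Optional check that the visdom server launched above is up on port 8097.
from visdom import Visdom

viz = Visdom(port=8097)
assert viz.check_connection(), "visdom server not reachable on port 8097"
print("visdom server is up")
```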
Then run the following command:
```
python scripts/script.py --train
```
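For intuition about what this trains, here is a minimal, heavily simplified sketch of one unpaired adversarial update. It is not the repository's actual loop (EnlightenGAN additionally uses a global-local discriminator design, relativistic losses, and a self feature preserving loss); all names in it are placeholders:

```python
# Conceptual sketch of one unpaired adversarial training step.
# NOT the repository's actual code; `generator`, `discriminator`,
# `low`, and `normal` are placeholders.
import torch
import torch.nn.functional as F

def unpaired_gan_step(generator, discriminator, g_opt, d_opt, low, normal):
    # Generator step: the enhanced output should fool the discriminator,
    # which only ever sees *unpaired* normal-light photos as "real".
    enhanced = generator(low)
    logits_fake = discriminator(enhanced)
    g_loss = F.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

    # Discriminator step: real normal-light images vs. generated ones.
    logits_real = discriminator(normal)
    logits_fake = discriminator(enhanced.detach())
    d_loss = (
        F.binary_cross_entropy_with_logits(logits_real, torch.ones_like(logits_real))
        + F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    )
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    return g_loss.item(), d_loss.item()
```

The key point is that `low` and `normal` come from different, unaligned image sets, so no paired ground truth is ever required.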
### Testing process
Download the [pretrained model](https://drive.google.com/file/d/1AkV-n2MdyfuZTFvcon8Z4leyVb0i7x63/view?usp=sharing) and put it into `./checkpoints/enlightening`.
Create directories `../test_dataset/testA` and `../test_dataset/testB`. Put your test images in `../test_dataset/testA` (and keep at least one image in `../test_dataset/testB` so the program can start).
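A small helper like the following (an illustration, assuming Pillow is available) sets up that layout, including the required placeholder image in `testB`:

```python
# Create the test layout described above. Pillow (PIL) is assumed for
# writing a placeholder image; any existing image works equally well.
from pathlib import Path
from PIL import Image

root = Path("../test_dataset")
(root / "testA").mkdir(parents=True, exist_ok=True)  # put your test images here
(root / "testB").mkdir(parents=True, exist_ok=True)  # must contain >= 1 image

placeholder = root / "testB" / "placeholder.png"
if not placeholder.exists():
    Image.new("RGB", (256, 256)).save(placeholder)   # plain black image
```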
Run:

```
python scripts/script.py --predict
```
### Dataset preparation
Training data [[Google Drive]](https://drive.google.com/drive/folders/1KivxOm79VidSJnJrMV9osr751UD68pCu?usp=sharing) (unpaired images collected from multiple datasets)
Testing data [[Google Drive]](https://drive.google.com/drive/folders/1PrvL8jShZ7zj2IC3fVdDxBY1oJR72iDf?usp=sharing) (including LIME, MEF, NPE, VV, DICP)
A [[BaiduYun]](https://github.com/TAMU-VITA/EnlightenGAN/issues/28) mirror is also available, thanks to @YHLelaine!
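Since training is unpaired, the low-light and normal-light folders do not need to correspond image-for-image. A sketch of checking the unpacked layout follows; the directory names here are assumptions following the `testA`/`testB` convention above, so check the repository's data options for the exact expected paths:

```python
# Illustration only: count images in an assumed unpaired training layout.
from pathlib import Path

root = Path("../final_dataset")          # hypothetical dataset root
for split in ("trainA", "trainB"):       # assumed low-light / normal-light dirs
    n = len(list((root / split).glob("*")))
    print(split, n, "files")
# The counts may differ: unpaired training needs no one-to-one matching.
```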
### Faster Inference
A faster inference implementation by @arsenyinfo is available at https://github.com/arsenyinfo/EnlightenGAN-inference.
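That project wraps the generator as an ONNX model; usage along the following lines is shown in its README (verify the current API there, since names like `EnlightenOnnxModel` come from that project, not this repository):

```python
# Based on the EnlightenGAN-inference README at the time of writing;
# assumes onnxruntime plus the enlighten_inference package are installed,
# and OpenCV for image I/O.
import cv2
from enlighten_inference import EnlightenOnnxModel

model = EnlightenOnnxModel()             # loads the exported ONNX generator
img = cv2.imread("dark_photo.jpg")       # HWC uint8 image
enhanced = model.predict(img)
cv2.imwrite("enhanced.jpg", enhanced)
```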
If you find this work useful, please cite:

```
@article{jiang2021enlightengan,
title={Enlightengan: Deep light enhancement without paired supervision},
author={Jiang, Yifan and Gong, Xinyu and Liu, Ding and Cheng, Yu and Fang, Chen and Shen, Xiaohui and Yang, Jianchao and Zhou, Pan and Wang, Zhangyang},
journal={IEEE Transactions on Image Processing},
volume={30},
pages={2340--2349},
year={2021},
publisher={IEEE}
}
```