# LLVIP: A Visible-infrared Paired Dataset for Low-light Vision ![visitors](https://visitor-badge.glitch.me/badge?page_id=bupt-ai-cz.LLVIP)
[Project](https://bupt-ai-cz.github.io/LLVIP/) | [Arxiv](https://arxiv.org/abs/2108.10831) | [Kaggle](https://www.kaggle.com/c/find-person-in-the-dark) | [![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/llvip-a-visible-infrared-paired-dataset-for/pedestrian-detection-on-llvip)](https://paperswithcode.com/sota/pedestrian-detection-on-llvip?p=llvip-a-visible-infrared-paired-dataset-for) | [![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Codes%20and%20Data%20for%20Our%20Paper:%20"LLVIP:%20A%20Visible-infrared%20Paired%20Dataset%20for%20Low-light%20Vision"%20&url=https://github.com/bupt-ai-cz/LLVIP)

## News
- ⚡(2024-1-8): The pre-trained pix2pixGAN on LLVIP is released [Here.](https://github.com/bupt-ai-cz/LLVIP/releases)
- ⚡(2023-2-21): The annotations of a small part of images have been corrected and updated, including the annotation of some missing pedestrians, and the optimization of some imprecise annotations. The updated dataset is now available from the [homepage](https://bupt-ai-cz.github.io/LLVIP/) or [here](https://github.com/bupt-ai-cz/LLVIP/blob/main/download_dataset.md). If you need the previous version of the annotations, please refer to [here](https://github.com/bupt-ai-cz/LLVIP/blob/main/previous%20annotations.md).
- ⚡(2022-5-24): We provide a [toolbox](https://github.com/bupt-ai-cz/LLVIP/blob/main/toolbox/toolbox_readme.md) for various format conversions (xml to yolov5, xml to yolov3, xml to coco).
- ⚡(2022-3-27): We released some raw data (unregistered image pairs and videos) for further research, including image registration. Please visit the [homepage](https://bupt-ai-cz.github.io/LLVIP/) to get the update. (2022-3-28: We have updated the Baidu Yun link for the LLVIP raw data; the data downloaded from the new link can be decompressed under `windows` and `macos`, while the original link only supports `windows`.)
- ⚡(2021-12-25): We released a Kaggle Community Competition "[Find Person in the Dark!](https://www.kaggle.com/c/find-person-in-the-dark)" based on part of the LLVIP dataset. Welcome to play and have fun! Attention: only the visible-image data we uploaded to the Kaggle platform may be used (the infrared images in LLVIP and other external data are forbidden).
- ⚡(2021-11-24): Pedestrian detection models were released
- ⚡(2021-09-01): We have released the dataset, please visit [homepage](https://bupt-ai-cz.github.io/LLVIP/) or [here](https://github.com/bupt-ai-cz/LLVIP/blob/main/download_dataset.md) to get the dataset. (Note that we removed some low-quality images from the original dataset, and for this version there are 30976 images.)

---

![figure1-LR](imgs/figure1-LR.png)

---

## Dataset Download

### [Homepage](https://bupt-ai-cz.github.io/LLVIP/) or [DatasetDownload](https://github.com/bupt-ai-cz/LLVIP/blob/main/download_dataset.md)

---

## Citation
If you use this data for your research, please cite our paper [LLVIP: A Visible-infrared Paired Dataset for Low-light Vision](https://arxiv.org/abs/2108.10831):

```
@inproceedings{jia2021llvip,
title={LLVIP: A visible-infrared paired dataset for low-light vision},
author={Jia, Xinyu and Zhu, Chuang and Li, Minzhen and Tang, Wenqi and Zhou, Wenli},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={3496--3504},
year={2021}
}
```
or

```
@misc{https://doi.org/10.48550/arxiv.2108.10831,
doi = {10.48550/ARXIV.2108.10831},
url = {https://arxiv.org/abs/2108.10831},
author = {Jia, Xinyu and Zhu, Chuang and Li, Minzhen and Tang, Wenqi and Liu, Shengjie and Zhou, Wenli},
keywords = {Computer Vision and Pattern Recognition (cs.CV), Artificial Intelligence (cs.AI), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {LLVIP: A Visible-infrared Paired Dataset for Low-light Vision},
publisher = {arXiv},
year = {2021},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```

# Image Fusion

## Baselines
- [GTF](https://github.com/jiayi-ma/GTF)
- [FusionGAN](https://github.com/jiayi-ma/FusionGAN)
- [Densefuse](https://github.com/hli1221/imagefusion_densefuse)
- [IFCNN](https://github.com/uzeful/IFCNN)

## FusionGAN
### Preparation
- Install requirements
```bash
git clone https://github.com/bupt-ai-cz/LLVIP.git
cd LLVIP/FusionGAN
# Create your virtual environment using anaconda
conda create -n FusionGAN python=3.7
conda activate FusionGAN

conda install matplotlib scipy==1.2.1 tensorflow-gpu==1.14.0
pip install opencv-python
sudo apt install libgl1-mesa-glx
```
- File structure
```
FusionGAN
├── ...
├── Test_LLVIP_ir
|   ├── 190001.jpg
|   ├── 190002.jpg
|   └── ...
├── Test_LLVIP_vi
|   ├── 190001.jpg
|   ├── 190002.jpg
|   └── ...
├── Train_LLVIP_ir
|   ├── 010001.jpg
|   ├── 010002.jpg
|   └── ...
└── Train_LLVIP_vi
    ├── 010001.jpg
    ├── 010002.jpg
    └── ...
```
### Train
```bash
python main.py --epoch 10 --batch_size 32
```
See more training options in `main.py`.
### Test
```bash
python test_one_image.py
```
Remember to put the pretrained model in your `checkpoint` folder and change the corresponding model name in `test_one_image.py`.
To acquire the complete LLVIP dataset, please visit https://bupt-ai-cz.github.io/LLVIP/.

## Densefuse
### Preparation
- Install requirements
```bash
git clone https://github.com/bupt-ai-cz/LLVIP
cd LLVIP/imagefusion_densefuse

# Create your virtual environment using anaconda
conda create -n Densefuse python=3.7
conda activate Densefuse

conda install scikit-image scipy==1.2.1 tensorflow-gpu==1.14.0
```
- File structure
```
imagefusion_densefuse
├── ...
├── datasets
|   ├── 010001_ir.jpg
|   ├── 010001_vi.jpg
|   └── ...
├── test
|   ├── 190001_ir.jpg
|   ├── 190001_vi.jpg
|   └── ...
└── LLVIP
    ├── infrared
    |   ├── train
    |   |   ├── 010001.jpg
    |   |   ├── 010002.jpg
    |   |   └── ...
    |   └── test
    |       ├── 190001.jpg
    |       ├── 190002.jpg
    |       └── ...
    └── visible
        ├── train
        |   ├── 010001.jpg
        |   ├── 010002.jpg
        |   └── ...
        └── test
            ├── 190001.jpg
            ├── 190002.jpg
            └── ...
```

### Train & Test
```bash
python main.py
```
Check and modify the training/testing options in `main.py`. Before training or testing, you need to rename the images in the LLVIP dataset and put them in the designated folders; we provide a script named `rename.py` that renames the images and saves them in the `datasets` or `test` folder. Checkpoints are saved in `./models/densefuse_gray/`. To acquire the complete LLVIP dataset, please visit https://bupt-ai-cz.github.io/LLVIP/.
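
Below is a minimal sketch of what this renaming step amounts to; the shipped `rename.py` is the authoritative script, and the `LLVIP/infrared|visible/train|test` source paths are only an assumption based on the layout shown above.

```python
import os
import shutil

# Copy LLVIP images into the layout Densefuse expects:
# training pairs go to ./datasets as 010001_ir.jpg / 010001_vi.jpg,
# test pairs go to ./test as 190001_ir.jpg / 190001_vi.jpg.
for split, out_dir in [("train", "datasets"), ("test", "test")]:
    os.makedirs(out_dir, exist_ok=True)
    for modality, suffix in [("infrared", "ir"), ("visible", "vi")]:
        src_dir = os.path.join("LLVIP", modality, split)  # assumed unpack location
        for name in os.listdir(src_dir):
            stem, ext = os.path.splitext(name)            # e.g. "010001", ".jpg"
            shutil.copy(os.path.join(src_dir, name),
                        os.path.join(out_dir, f"{stem}_{suffix}{ext}"))
```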

## IFCNN
Please visit https://github.com/uzeful/IFCNN.

# Pedestrian Detection

## Baselines
- [Yolov5](https://github.com/ultralytics/yolov5)
- [Yolov3](https://github.com/ultralytics/yolov3)
## Yolov5
### Preparation
#### Linux and Python>=3.6.0
- Install requirements
```bash
git clone https://github.com/bupt-ai-cz/LLVIP.git
cd LLVIP/yolov5
pip install -r requirements.txt
```
- File structure

The LLVIP training set is used to train the yolov5 model, and the LLVIP testing set is used for validation.
```
yolov5
├── ...
└── LLVIP
    ├── labels
    |   ├── train
    |   |   ├── 010001.txt
    |   |   ├── 010002.txt
    |   |   └── ...
    |   └── val
    |       ├── 190001.txt
    |       ├── 190002.txt
    |       └── ...
    └── images
        ├── train
        |   ├── 010001.jpg
        |   ├── 010002.jpg
        |   └── ...
        └── val
            ├── 190001.jpg
            ├── 190002.jpg
            └── ...
```
We provide a [toolbox](https://github.com/bupt-ai-cz/LLVIP/blob/main/toolbox/toolbox_readme.md) for converting annotation files to txt files in yolov5 format.
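
For reference, here is a minimal sketch of the idea behind the xml-to-yolov5 conversion. The toolbox script is the authoritative version; the example file names and the presence of a standard Pascal VOC `<size>` element are assumptions.

```python
import xml.etree.ElementTree as ET

def voc_xml_to_yolo(xml_path, txt_path, class_id=0):
    """Convert one Pascal-VOC-style annotation file to a yolov5 txt label."""
    root = ET.parse(xml_path).getroot()
    size = root.find("size")  # assumes the standard <size> element is present
    img_w, img_h = float(size.find("width").text), float(size.find("height").text)
    lines = []
    for obj in root.iter("object"):
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        cx, cy = (xmin + xmax) / 2 / img_w, (ymin + ymax) / 2 / img_h
        w, h = (xmax - xmin) / img_w, (ymax - ymin) / img_h
        lines.append(f"{class_id} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")  # class 0 = person
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))

# Hypothetical file names, matching the layout above:
voc_xml_to_yolo("Annotations/010001.xml", "LLVIP/labels/train/010001.txt")
```
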
### Train
```bash
python train.py --img 1280 --batch 8 --epochs 200 --data LLVIP.yaml --weights yolov5l.pt --name LLVIP_export
```
See more training options in `train.py`. The pretrained model `yolov5l.pt` can be downloaded from [here](https://github.com/ultralytics/yolov5/releases). The trained model will be saved in the `./runs/train/LLVIP_export/weights` folder.
### Test
```bash
python val.py --img 1280 --weights last.pt --data LLVIP.yaml
```
Remember to put the trained model in the same folder as `val.py`.

Our trained model can be downloaded from here: [Google-Drive-Yolov5-model](https://drive.google.com/file/d/1SPbr0PDiItape602-g-bstkX0P7NZo0q/view?usp=sharing) or [BaiduYun-Yolov5-model](https://pan.baidu.com/s/1q3mGhQzT_D3uiqdfAHVqCA) (code: qepr)
- Click [Here](yolov3/README.md) for the tutorial of Yolov3 (Our trained Yolov3 model can be downloaded from here: [Google-Drive-Yolov3-model](https://drive.google.com/file/d/1-BYauAZGXhw7PjKp8M4CHlnMNYqm8n7J/view?usp=sharing) or [BaiduYun-Yolov3-model](https://pan.baidu.com/s/1eZcyugmpo_3VZjd3wpwcow) (code: ine5)).
### Results
We retrained and tested Yolov5l and Yolov3 on the updated dataset (30976 images).



Here AP denotes the average of the AP values at IoU thresholds from 0.5 to 0.95, with an interval of 0.05.
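
In other words, the reported AP is a plain mean over the ten per-threshold AP values; a small sketch (the numbers in the example are made up purely to show the computation):

```python
import numpy as np

def coco_style_ap(ap_per_threshold):
    """Average the per-threshold APs computed at IoU = 0.50, 0.55, ..., 0.95."""
    assert len(ap_per_threshold) == 10
    return float(np.mean(ap_per_threshold))

print(coco_style_ap([0.95, 0.93, 0.90, 0.86, 0.80, 0.72, 0.60, 0.45, 0.25, 0.08]))  # 0.654
```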




The figure above shows how AP changes under different IoU thresholds. When the IoU threshold is higher than 0.7, the AP value drops rapidly. Besides, the infrared image highlights pedestrians and gives better detection results than the visible image, which not only proves the necessity of infrared images but also indicates that the performance of visible-image pedestrian detection algorithms is not good enough under low-light conditions.

We also calculated the log-average miss rate based on the test results and plotted the miss rate vs. FPPI curve.
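
For readers unfamiliar with the metric, here is a hedged sketch of the usual Caltech-style log-average miss rate, which averages the miss rate at nine FPPI reference points evenly spaced in log space over [1e-2, 1]; this is only an illustration of the metric, not necessarily the exact evaluation script behind the results below.

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """fppi must be sorted ascending; miss_rate holds the matching values
    obtained by sweeping the detection score threshold."""
    refs = np.logspace(-2.0, 0.0, num=9)        # 9 reference FPPI points in [1e-2, 1]
    sampled = []
    for r in refs:
        idx = np.where(fppi <= r)[0]
        sampled.append(miss_rate[idx[-1]] if idx.size else 1.0)
    return float(np.exp(np.mean(np.log(np.maximum(sampled, 1e-10)))))
```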





# Image-to-Image Translation

## Baseline
- [pix2pixGAN](https://github.com/phillipi/pix2pix)
## pix2pixGAN
### Preparation
- Install requirements
```bash
cd pix2pixGAN
pip install -r requirements.txt
```
- [Prepare dataset](https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/blob/master/docs/datasets.md) (see the pairing sketch after the file structure below)
- File structure
```
pix2pixGAN
├── ...
└── datasets
    ├── ...
    └── LLVIP
        ├── train
        |   ├── 010001.jpg
        |   ├── 010002.jpg
        |   ├── 010003.jpg
        |   └── ...
        └── test
            ├── 190001.jpg
            ├── 190002.jpg
            ├── 190003.jpg
            └── ...
```
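
As a reference for the dataset-preparation step linked above, here is a minimal sketch of combining each visible/infrared pair into a single side-by-side image in the pix2pix aligned format. The A/B direction (A = visible, B = infrared) and the source paths are assumptions; the linked pix2pix documentation describes the official tooling.

```python
import os
from PIL import Image

vis_dir, ir_dir = "LLVIP/visible/train", "LLVIP/infrared/train"  # assumed source layout
out_dir = "datasets/LLVIP/train"
os.makedirs(out_dir, exist_ok=True)

for name in sorted(os.listdir(vis_dir)):
    a = Image.open(os.path.join(vis_dir, name)).convert("RGB")   # visible image (A)
    b = Image.open(os.path.join(ir_dir, name)).convert("RGB")    # paired infrared image (B)
    ab = Image.new("RGB", (a.width + b.width, a.height))         # A and B side by side
    ab.paste(a, (0, 0))
    ab.paste(b, (a.width, 0))
    ab.save(os.path.join(out_dir, name))
```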

### Train
```bash
python train.py --dataroot ./datasets/LLVIP --name LLVIP --model pix2pix --direction AtoB --batch_size 8 --preprocess scale_width_and_crop --load_size 320 --crop_size 256 --gpu_ids 0 --n_epochs 100 --n_epochs_decay 100
```
### Test
```bash
python test.py --dataroot ./datasets/LLVIP --name LLVIP --model pix2pix --direction AtoB --gpu_ids 0 --preprocess scale_width_and_crop --load_size 320 --crop_size 256
```
See `./pix2pixGAN/options` for more train and test options.



### Results
We retrained and tested pix2pixGAN on the updated dataset (30976 images). The generator uses the unet256 architecture, and the discriminator is the default basic PatchGAN.





## License
This LLVIP Dataset is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, scientific publications, or personal experimentation. Permission is granted to use the data given that you agree to our [license terms](Term%20of%20Use%20and%20License.md).

## Call For Contributions

You are welcome to point out errors in the data annotations. If you want to modify a label, please refer to the [annotation tutorial](https://github.com/SantJay/LLVIP-1/blob/main/annotation%20tutorial.md) and email us the corrected label file.

Additional annotation forms (such as segmentation) are also welcome; please contact us.

## Acknowledgments
Thanks to [XueZ-phd](https://github.com/XueZ-phd) for his contribution to the LLVIP dataset: he corrected imperfect annotations in the dataset.

## Contact

email: [email protected], [email protected], [email protected], [email protected]