# YOLOv2

[CVPR 2017] YOLO9000: Better, Faster, Stronger

https://github.com/zjykzj/yolov2

Β«YOLOv2Β» reproduces the paper "YOLO9000: Better, Faster, Stronger".

* Train on the `VOC07+12 trainval` dataset and test on the `VOC2007 Test` dataset with an input size of `640x640`. The results are as follows:



|                       | Original (darknet)     | tztztztztz/yolov2.pytorch | zjykzj/YOLOv2 (This)   | zjykzj/YOLOv2 (This)   |
|:----------------------|:-----------------------|:--------------------------|:-----------------------|:-----------------------|
| ARCH                  | YOLOv2                 | YOLOv2                    | YOLOv2                 | YOLOv2-Fast            |
| GFLOPs                | /                      | /                         | 69.5                   | 48.5                   |
| DATASET(TRAIN)        | VOC TRAINVAL 2007+2012 | VOC TRAINVAL 2007+2012    | VOC TRAINVAL 2007+2012 | VOC TRAINVAL 2007+2012 |
| DATASET(VAL)          | VOC TEST 2007          | VOC TEST 2007             | VOC TEST 2007          | VOC TEST 2007          |
| INPUT_SIZE            | 416x416                | 416x416                   | 640x640                | 640x640                |
| PRETRAINED            | TRUE                   | TRUE                      | FALSE                  | FALSE                  |
| VOC AP[IoU=0.50:0.95] | /                      | /                         | 44.3                   | 29.8                   |
| VOC AP[IoU=0.50]      | 76.8                   | 72.7                      | 75.1                   | 62.6                   |

* Train on the `COCO train2017` dataset and test on the `COCO val2017` dataset with an input size of `640x640`. The results are as follows (*Note: the original paper's results were evaluated on the `COCO test-dev2015` dataset*):



|                        | Original (darknet) | zjykzj/YOLOv2 (This) | zjykzj/YOLOv2 (This) |
|:-----------------------|:-------------------|:---------------------|:---------------------|
| ARCH                   | YOLOv2             | YOLOv2               | YOLOv2-Fast          |
| GFLOPs                 | /                  | 69.7                 | 48.8                 |
| DATASET(TRAIN)         | /                  | COCO TRAIN2017       | COCO TRAIN2017       |
| DATASET(VAL)           | /                  | COCO VAL2017         | COCO VAL2017         |
| INPUT_SIZE             | 416x416            | 640x640              | 640x640              |
| PRETRAINED             | TRUE               | FALSE                | FALSE                |
| COCO AP[IoU=0.50:0.95] | 21.6               | 28.6                 | 20.1                 |
| COCO AP[IoU=0.50]      | 44.0               | 50.7                 | 41.2                 |

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Latest News](#latest-news)
- [Background](#background)
- [Installation](#installation)
- [Usage](#usage)
  - [Train](#train)
  - [Eval](#eval)
  - [Predict](#predict)
- [Maintainers](#maintainers)
- [Thanks](#thanks)
- [Contributing](#contributing)
- [License](#license)

## Latest News

* ***[2024/05/04][v1.0.0](https://github.com/zjykzj/YOLOv2/releases/tag/v1.0.0). Refactoring YOLOv2 project, integrating yolov5 v7.0, reimplementing YOLOv2/YOLOv2-fast and YOLOv2Loss.***
* ***[2023/07/16][v0.3.0](https://github.com/zjykzj/YOLOv2/releases/tag/v0.3.0). Add [ultralytics/yolov5](https://github.com/ultralytics/yolov5)([485da42](https://github.com/ultralytics/yolov5/commit/485da42273839d20ea6bdaf142fd02c1027aba61)) transforms.***
* ***[2023/06/28][v0.2.1](https://github.com/zjykzj/YOLOv2/releases/tag/v0.2.1). Refactor data module.***
* ***[2023/05/21][v0.2.0](https://github.com/zjykzj/YOLOv2/releases/tag/v0.2.0). Reconstructed the loss function and added Darknet53 as a backbone.***
* ***[2023/05/09][v0.1.2](https://github.com/zjykzj/YOLOv2/releases/tag/v0.1.2). Add COCO dataset result and update VOC dataset training results.***
* ***[2023/05/03][v0.1.1](https://github.com/zjykzj/YOLOv2/releases/tag/v0.1.1). Fix target transform and update `yolov2_voc.cfg` and `yolov2-tiny_voc.cfg` training results for VOC2007 Test.***
* ***[2023/05/02][v0.1.0](https://github.com/zjykzj/YOLOv2/releases/tag/v0.1.0). Complete YOLOv2 training/evaluation/prediction, while providing the evaluation results of VOC2007 Test.***

## Background

YOLOv2 builds on YOLOv1 with several innovations: it introduces the Darknet-19 backbone, and its loss function adds anchor boxes so the network can train on more fine-grained features. Compared with YOLOv1, YOLOv2 is both more modern and higher-performing.

This repository draws on several reference implementations, including [tztztztztz/yolov2.pytorch](https://github.com/tztztztztz/yolov2.pytorch) and [yjh0410/yolov2-yolov3_PyTorch](https://github.com/yjh0410/yolov2-yolov3_PyTorch), as well as [zjykzj/YOLOv3](https://github.com/zjykzj/YOLOv3).

Note: the latest implementation of YOLOv2 in this repository is built entirely on [ultralytics/yolov5 v7.0](https://github.com/ultralytics/yolov5/releases/tag/v7.0).
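At the heart of the anchor-box mechanism described above is shape matching by IoU: each ground-truth box is assigned to the anchor prior whose shape overlaps it best. A minimal, framework-free sketch of that idea (box format and helper names are illustrative, not taken from this repository's code):

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def best_anchor(gt, anchors):
    """Index of the anchor (w, h) whose shape best matches the ground-truth box."""
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    # Compare shapes only: place every box at the origin, as in
    # YOLOv2's dimension-cluster (k-means over box shapes) step.
    scores = [iou((0, 0, gw, gh), (0, 0, aw, ah)) for aw, ah in anchors]
    return max(range(len(scores)), key=scores.__getitem__)
```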

## Installation

```shell
pip3 install -r requirements.txt
```

Or use a Docker container:

```shell
docker run -it --runtime nvidia --gpus=all --shm-size=16g -v /etc/localtime:/etc/localtime -v $(pwd):/workdir --workdir=/workdir --name yolov2 ultralytics/yolov5:v7.0
```

## Usage

### Train

```shell
python3 train.py --data VOC.yaml --weights "" --cfg yolov2_voc.yaml --img 640 --device 0 --yolov2loss
python3 train.py --data VOC.yaml --weights "" --cfg yolov2-fast_voc.yaml --img 640 --device 0 --yolov2loss
python3 train.py --data coco.yaml --weights "" --cfg yolov2.yaml --img 640 --device 0 --yolov2loss
python3 train.py --data coco.yaml --weights "" --cfg yolov2-fast.yaml --img 640 --device 0 --yolov2loss
```
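The `--yolov2loss` flag trains with the reimplemented YOLOv2Loss. The YOLO9000 paper decodes each raw prediction `(tx, ty, tw, th)` relative to its grid cell `(cx, cy)` and anchor prior `(pw, ph)` as `bx = Οƒ(tx) + cx`, `by = Οƒ(ty) + cy`, `bw = pwΒ·e^tw`, `bh = phΒ·e^th`. A standalone sketch of that decoding (variable and function names are mine, not this repository's):

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode a raw YOLOv2 prediction into a box in grid-cell units.

    (cx, cy) is the top-left corner of the responsible grid cell,
    (pw, ph) the anchor's width/height prior.
    """
    bx = sigmoid(tx) + cx   # sigmoid keeps the center inside the cell
    by = sigmoid(ty) + cy
    bw = pw * math.exp(tw)  # anchor scaled by exp of the raw offset
    bh = ph * math.exp(th)
    return bx, by, bw, bh
```

With all raw offsets at zero, the decoded box sits at the cell center with exactly the anchor's shape — the anchors act as shape priors that the network only has to refine.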

### Eval

```shell
# python3 val.py --weights runs/train/voc/exp/weights/best.pt --data VOC.yaml --img 640 --device 0
yolov2_voc summary: 53 layers, 50645053 parameters, 0 gradients, 69.5 GFLOPs
Class Images Instances P R mAP50 mAP50-95: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 155/155 00:41
all 4952 12032 0.735 0.711 0.751 0.443
Speed: 0.1ms pre-process, 3.1ms inference, 1.3ms NMS per image at shape (32, 3, 640, 640)
# python3 val.py --weights runs/train/voc/exp3/weights/best.pt --data VOC.yaml --img 640 --device 0
yolov2-fast_voc summary: 33 layers, 42367485 parameters, 0 gradients, 48.5 GFLOPs
Class Images Instances P R mAP50 mAP50-95: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 155/155 00:37
all 4952 12032 0.626 0.612 0.626 0.298
Speed: 0.1ms pre-process, 2.3ms inference, 1.5ms NMS per image at shape (32, 3, 640, 640)
# python3 val.py --weights runs/train/coco/exp/weights/best.pt --data coco.yaml --img 640 --device 0
yolov2 summary: 53 layers, 50952553 parameters, 0 gradients, 69.7 GFLOPs
Class Images Instances P R mAP50 mAP50-95: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 157/157 00:57
all 5000 36335 0.627 0.48 0.507 0.286
Speed: 0.1ms pre-process, 3.1ms inference, 2.0ms NMS per image at shape (32, 3, 640, 640)
# python3 val.py --weights runs/train/coco/exp2/weights/best.pt --data coco.yaml --img 640 --device 0
yolov2-fast summary: 33 layers, 42674985 parameters, 0 gradients, 48.8 GFLOPs
Class Images Instances P R mAP50 mAP50-95: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 157/157 00:53
all 5000 36335 0.549 0.402 0.412 0.201
Speed: 0.1ms pre-process, 2.4ms inference, 2.1ms NMS per image at shape (32, 3, 640, 640)
```
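The `Speed:` lines above report per-image NMS time; the greedy non-maximum suppression that evaluation and inference run (via yolov5's utilities) can be sketched as follows, assuming detections as `(x1, y1, x2, y2, score)` tuples (helper names are mine):

```python
def nms(dets, iou_thresh=0.45):
    """Greedy NMS over (x1, y1, x2, y2, score) tuples; returns kept detections."""
    def iou(a, b):
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union

    keep = []
    # Visit detections from highest to lowest confidence; keep one only if it
    # does not overlap any already-kept detection above the threshold.
    for det in sorted(dets, key=lambda d: d[4], reverse=True):
        if all(iou(det, k) <= iou_thresh for k in keep):
            keep.append(det)
    return keep
```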

### Predict

```shell
python3 detect.py --weights runs/yolov2_voc.pt --source ./assets/voc2007-test/
```

```shell
python3 detect.py --weights runs/yolov2_coco.pt --source ./assets/coco/
```

## Maintainers

* zhujian - *Initial work* - [zjykzj](https://github.com/zjykzj)

## Thanks

* [zjykzj/vocdev](https://github.com/zjykzj/vocdev)
* [zjykzj/YOLOv3](https://github.com/zjykzj/YOLOv3)
* [zjykzj/anchor-boxes](https://github.com/zjykzj/anchor-boxes)
* [ultralytics/yolov5](https://github.com/ultralytics/yolov5)
* [AlexeyAB/darknet](https://github.com/AlexeyAB/darknet)
* [tztztztztz/yolov2.pytorch](https://github.com/tztztztztz/yolov2.pytorch)
* [yjh0410/yolov2-yolov3_PyTorch](https://github.com/yjh0410/yolov2-yolov3_PyTorch)

## Contributing

Anyone's participation is welcome! Open an [issue](https://github.com/zjykzj/YOLOv2/issues) or submit PRs.

Small note:

* Commit messages should follow the [Conventional Commits](https://www.conventionalcommits.org/en/v1.0.0-beta.4/) specification.
* If versioned, please conform to the [Semantic Versioning 2.0.0](https://semver.org) specification.
* If editing the README, please conform to the [standard-readme](https://github.com/RichardLitt/standard-readme) specification.

## License

[Apache License 2.0](LICENSE) Β© 2023 zjykzj