https://github.com/BICLab/EMS-YOLO
Official implementation of "Deep Directly-Trained Spiking Neural Networks for Object Detection" (ICCV 2023)
spiking-neural-network spiking-yolo
- Host: GitHub
- URL: https://github.com/BICLab/EMS-YOLO
- Owner: BICLab
- License: gpl-3.0
- Created: 2023-07-19T09:33:15.000Z (almost 2 years ago)
- Default Branch: main
- Last Pushed: 2023-10-15T11:59:31.000Z (over 1 year ago)
- Last Synced: 2024-10-28T05:12:34.201Z (6 months ago)
- Topics: spiking-neural-network, spiking-yolo
- Language: Python
- Homepage: https://arxiv.org/abs/2307.11411
- Size: 369 KB
- Stars: 146
- Watchers: 4
- Forks: 13
- Open Issues: 16
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-yolo-object-detection - [EMS-YOLO](https://github.com/BICLab/EMS-YOLO): Official implementation of "Deep Directly-Trained Spiking Neural Networks for Object Detection" (**[ICCV 2023](https://openaccess.thecvf.com/content/ICCV2023/html/Su_Deep_Directly-Trained_Spiking_Neural_Networks_for_Object_Detection_ICCV_2023_paper.html)**) (Object Detection Applications)
- awesome-yolo-object-detection - [EMS-YOLO](https://github.com/BICLab/EMS-YOLO): Official implementation of "Deep Directly-Trained Spiking Neural Networks for Object Detection" (**[ICCV 2023](https://openaccess.thecvf.com/content/ICCV2023/html/Su_Deep_Directly-Trained_Spiking_Neural_Networks_for_Object_Detection_ICCV_2023_paper.html)**) (Applications)
README
## Deep Directly-Trained Spiking Neural Networks for Object Detection [(ICCV 2023)](https://openaccess.thecvf.com/content/ICCV2023/html/Su_Deep_Directly-Trained_Spiking_Neural_Networks_for_Object_Detection_ICCV_2023_paper.html)
### Requirements
The code has been tested with PyTorch 1.10.1, Python 3.8, CUDA 11.3, and cuDNN 8.2.0. The conda environment can be reproduced directly from `environment.yml`; some additional dependencies are listed in `environment.txt`.
### Install
```bash
$ git clone https://github.com/BICLab/EMS-YOLO.git
$ cd EMS-YOLO
$ pip install -r requirements.txt
```
### Pretrained Checkpoints
We provide the best and the last trained models based on EMS-ResNet34 on the COCO dataset.
`detect.py` runs inference on a variety of sources, downloading models automatically from
[COCO_EMS-ResNet34](https://drive.google.com/drive/folders/1mry8sdED6ncqxajmQROKBECpcrmXStpB?usp=sharing). The relevant parameter files are in `runs/train`.
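For intuition, a spiking detector runs its network over several discrete time steps and aggregates the per-step outputs into a final prediction. A minimal rate-coding sketch of that loop in plain numpy (the shapes, weights, and Bernoulli spike encoder here are illustrative assumptions, not the repo's actual `detect.py` pipeline):

```python
import numpy as np

# Toy sketch of rate-coded SNN inference: run for T time steps on
# binary spike inputs and average the per-step outputs.
# All shapes and weights are made up for illustration.
rng = np.random.default_rng(0)
T = 4                                  # number of simulation time steps
x = rng.random((1, 8))                 # dummy input intensities in [0, 1]
w = rng.standard_normal((8, 3))        # dummy detection-head weights

outputs = []
for _ in range(T):
    # Encode the input as stochastic binary spikes (Bernoulli rate coding).
    spikes = (rng.random(x.shape) < x).astype(np.float32)
    outputs.append(spikes @ w)         # per-time-step head output

# The final prediction averages the per-step outputs over T time steps.
logits = np.mean(outputs, axis=0)
print(logits.shape)                    # prints (1, 3)
```

Fewer time steps mean lower latency and energy at some cost in accuracy, which is why the number of time steps is a key hyperparameter for directly-trained SNNs.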
### Training
The relevant code for the Gen1 dataset is in `/g1-resnet`; it needs to be replaced or added to the appropriate root folder.
For the Gen1 dataset:
```bash
python path/to/train_g1.py --weights ***.pt --img 640
```
For the COCO dataset:
```bash
python train.py
```
Calculating the spiking rate:
Dependencies can be downloaded from [Visualizer](https://github.com/luo3300612/Visualizer).
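For reference, the average firing rate that such a script measures is simply the fraction of neuron/time-step pairs that emit a spike. A minimal sketch (assuming binary spike tensors of shape `[T, N]` — hypothetical data, not the repo's actual API):

```python
import numpy as np

# Toy spike train: shape (T, N) = (time steps, neurons), binary entries.
# Hypothetical data, not produced by the repo's models.
spikes = np.array([
    [1, 0, 0, 1],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
], dtype=np.float32)

# Overall firing rate: fraction of (time step, neuron) pairs that fired.
overall_rate = spikes.mean()          # 6 spikes / 12 entries = 0.5

# Per-neuron firing rate: average over the T time steps.
per_neuron = spikes.mean(axis=0)      # e.g. neuron 3 fires every step -> 1.0
print(overall_rate, per_neuron)
```

A low average firing rate is what gives spiking networks their energy advantage, since synaptic operations are only triggered by spikes.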
```bash
python calculate_fr.py
```
### Citation
```bibtex
@inproceedings{su2023deep,
title={Deep Directly-Trained Spiking Neural Networks for Object Detection},
author={Su, Qiaoyi and Chou, Yuhong and Hu, Yifan and Li, Jianing and Mei, Shijie and Zhang, Ziyang and Li, Guoqi},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={6555--6565},
year={2023}
}
```
Our code is implemented in the YOLOv3 framework. YOLOv3 is a family of object detection architectures and models pretrained on the COCO dataset, and represents Ultralytics' open-source research into future vision AI methods, incorporating lessons learned and best practices evolved over thousands of hours of research and development. Please remember to cite their work as well.