https://github.com/gjy3035/gcc-sfcn
This is the official code of spatial FCN in the paper Learning from Synthetic Data for Crowd Counting in the Wild [CVPR2019].
- Host: GitHub
- URL: https://github.com/gjy3035/gcc-sfcn
- Owner: gjy3035
- License: mit
- Created: 2018-12-22T02:51:49.000Z (over 6 years ago)
- Default Branch: master
- Last Pushed: 2019-09-04T21:06:49.000Z (over 5 years ago)
- Last Synced: 2025-03-24T21:38:51.042Z (about 2 months ago)
- Topics: computer-vision, crowd-analysis, crowd-counting, cvpr2019
- Language: Python
- Homepage: https://gjy3035.github.io/GCC-CL/
- Size: 1.14 MB
- Stars: 162
- Watchers: 11
- Forks: 39
- Open Issues: 22
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# SFCN in "Learning from Synthetic Data for Crowd Counting in the Wild"
This is the official implementation of SFCN+ in the paper **Learning from Synthetic Data for Crowd Counting in the Wild**. More detailed information is available on the [project homepage](https://gjy3035.github.io/GCC-CL/).
## Requirements
- Python 2.7
- Pytorch 0.4.0
- TensorboardX (pip)
- torchvision (pip)
- easydict (pip)
- pandas (pip)

## Data preparation
1. Download the original UCF-QNRF Dataset [Link: [Dropbox](http://crcv.ucf.edu/data/ucf-qnrf/)].
2. Resize the images and rescale the key-point locations accordingly.
3. Generate the density maps using this [code](https://github.com/aachenhang/crowdcount-mcnn/tree/master/data_preparation) (see the sketch below).
4. Generate the segmentation maps.

The pre-trained resSFCN on GCC and the processed QNRF dataset: [[Link](https://mailnwpueducn-my.sharepoint.com/:f:/g/personal/gjy3035_mail_nwpu_edu_cn/EjgL9bSXYO1GvgdLIigURQUBPZ2GMDmPpF71JZTBtWj_jA?e=VAWhFB)]
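For orientation only, here is a minimal Python sketch of what steps 2-3 amount to: rescaling head annotations together with the image, then smoothing unit impulses into a density map. The function names, the fixed Gaussian sigma, and the target size are illustrative assumptions; the actual pipeline is the MCNN data_preparation code linked above.

```python
# Minimal sketch of steps 2-3, assuming numpy, scipy and Pillow are available.
# Function names and the fixed-sigma kernel are illustrative, not the repo's code.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

def resize_with_points(img, points, target_w, target_h):
    """Resize an image and rescale its (N, 2) array of (x, y) head points to match."""
    w, h = img.size
    resized = img.resize((target_w, target_h), Image.BILINEAR)
    scaled = points * np.array([target_w / float(w), target_h / float(h)])
    return resized, scaled

def density_map(points, w, h, sigma=4.0):
    """Drop a unit impulse at each head location, then blur with a Gaussian,
    so the map integrates (approximately) to the crowd count."""
    dmap = np.zeros((h, w), dtype=np.float32)
    for x, y in points:
        dmap[min(int(y), h - 1), min(int(x), w - 1)] += 1.0
    return gaussian_filter(dmap, sigma)
```

`density_map(...).sum()` then approximates the number of annotated heads, which is the quantity that the counting loss and the reported error metrics ultimately compare.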
## Training model
1. Run the training script: ```python train.py```.
2. Watch the training outputs: ```tensorboard --logdir=exp --port=6006```.

## Testing the pretrained model
1. Download the pretrained resSFCN on QNRF from this [link](https://mailnwpueducn-my.sharepoint.com/:f:/g/personal/gjy3035_mail_nwpu_edu_cn/EjgL9bSXYO1GvgdLIigURQUBPZ2GMDmPpF71JZTBtWj_jA) (two versions: one matching the results reported at CVPR, one matching the screenshots in this repo).
2. Run ```python test.py```.
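To make the inference step concrete, below is a hedged sketch of the core of what `test.py` does: restore weights, run a forward pass, and read the estimated count off the density map. `TinyCounter` is a self-contained stand-in with the same input/output contract as SFCN (RGB image in, single-channel density map out), not the repo's actual network, and the checkpoint file name is illustrative.

```python
# Hedged sketch of the core of test.py: forward pass + count readout.
import torch
import torch.nn as nn

class TinyCounter(nn.Module):
    """Stand-in for SFCN with the same contract; NOT the repo's network."""
    def __init__(self):
        super(TinyCounter, self).__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 1))

    def forward(self, x):
        return self.net(x)

model = TinyCounter()
# With the real network you would restore the downloaded weights here, e.g.
# model.load_state_dict(torch.load('resSFCN_qnrf.pth'))  # file name illustrative
model.eval()

img = torch.rand(1, 3, 384, 512)   # stand-in for a normalized test image
with torch.no_grad():
    density = model(img)           # (1, 1, H, W) predicted density map
print('estimated count: %.1f' % density.sum().item())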
## Experimental results

### Quantitative results
Errors on the test set:

Note: the blue curve is the result of pre-training on the GCC dataset; the red curve is the result of pre-training on ImageNet.
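For reference, the two reported metrics are the standard per-image counting errors: MAE, and the rooted squared error that the crowd-counting literature conventionally reports as "MSE". A minimal sketch, with placeholder numbers rather than results from the paper:

```python
# Standard crowd-counting metrics over per-image counts; inputs are placeholders.
import numpy as np

def counting_errors(pred_counts, gt_counts):
    pred = np.asarray(pred_counts, dtype=np.float64)
    gt = np.asarray(gt_counts, dtype=np.float64)
    mae = np.abs(pred - gt).mean()                 # mean absolute error
    mse = np.sqrt(((pred - gt) ** 2).mean())       # reported as "MSE" in the literature
    return mae, mse

mae, mse = counting_errors([512.3, 98.1, 47.0], [500.0, 110.0, 45.0])
print('MAE %.2f  MSE %.2f' % (mae, mse))
```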
### Visualization results

|  | Pre-trained ImageNet | Pre-trained GCC |
|------|:------:|:------:|
| epoch 1,6 |  |  |
| epoch 11,16 |  |  |
| epoch 379,380 |  |  |

Column 1: input image; Column 2: density map GT; Column 3: density map prediction.
### Tips
In this code, validation is performed directly on the test set. Strictly speaking, it should be done on a validation set randomly drawn from the training set, as in the paper. Here, to make reproductions comparable (namely, with fixed splits), the code validates on the test set directly, which is why its results are better than those in the paper (MAE: 99 vs. 102).

## One More Thing
We reproduce some classic networks (MCNN, CSRNet, SANet, etc.) and some solid baseline networks (AlexNet, VGG, ResNet, etc.) on the GCC dataset. Welcome to visit this [link](https://github.com/gjy3035/C-3-Framework). It is under development; we will release it as soon as possible.
## Citation
If you find this project useful for your research, please cite:
```
@inproceedings{wang2019learning,
  title={Learning from Synthetic Data for Crowd Counting in the Wild},
  author={Wang, Qi and Gao, Junyu and Lin, Wei and Yuan, Yuan},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={8198--8207},
  year={2019}
}
```