Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/liruilong940607/ochumanapi
API for the dataset proposed in "Pose2Seg: Detection Free Human Instance Segmentation" @ CVPR2019.
- Host: GitHub
- URL: https://github.com/liruilong940607/ochumanapi
- Owner: liruilong940607
- License: MIT
- Created: 2019-03-15T07:03:08.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2019-06-14T04:25:37.000Z (over 5 years ago)
- Last Synced: 2024-12-27T01:07:08.674Z (7 days ago)
- Topics: cvpr2019, dataset, detection, pose-estimation, segmentation
- Language: Python
- Homepage: http://www.liruilong.cn/projects/pose2seg/index.html
- Size: 932 KB
- Stars: 256
- Watchers: 8
- Forks: 43
- Open Issues: 6
Metadata Files:
- Readme: README.md
- License: LICENSE
# OCHuman (Occluded Human) Dataset API
Dataset proposed in "Pose2Seg: Detection Free Human Instance Segmentation" [[ProjectPage]](http://www.liruilong.cn/projects/pose2seg/index.html) [[arXiv]](https://arxiv.org/abs/1803.10683) @ CVPR2019.
- **News! 2019.06.14** Bug fixed: the val/test annotation split now matches our paper; please update!
- **News! 2019.04.08** [Code](https://github.com/liruilong940607/Pose2Seg) for our paper is now available!
*Samples of the OCHuman dataset*
This dataset focuses on heavily occluded humans, with comprehensive annotations including bounding boxes, human poses, and instance masks. It contains 13360 elaborately annotated human instances within 5081 images. With an average MaxIoU of 0.573 per person, OCHuman is the most complex and challenging dataset related to humans. Through this dataset, we want to emphasize occlusion as a challenging problem for researchers to study.
## Statistics
All instances in this dataset are annotated with a bounding box, but not all of them have keypoint/mask annotations. If you want to compare your results with ours in the paper, please use the subset that contains both keypoint and mask annotations (4731 images, 8110 persons).

| | bbox | keypoint | mask | keypoint & mask | bbox & keypoint & mask |
| ------ | ----- | ----- | ----- | ----- | ----- |
| #Images | 5081 | 5081 | 4731 | 4731 | 4731 |
| #Persons | 13360 | 10375 | 8110 | 8110 | 8110 |
| mMaxIoU | 0.573 | 0.670 | 0.669 | 0.669 | 0.669 |

**Note**:
- *MaxIoU* measures how severely an object is occluded: the maximum IoU of an instance with any other object of the same category in the same image.
- All instances in OCHuman with keypoint/mask annotations suffer heavy occlusion (MaxIoU > 0.5).

## Download Links
- [Images (667MB) & Annotations](https://cg.cs.tsinghua.edu.cn/dataset/form.html?dataset=ochuman)
Through the link above we also provide COCO-style annotations (*val* and *test* subsets) so that you can run evaluation using the cocoEval toolbox.
*Update at 2019.06.14: Please download annotation files (\*json) again to match the val/test split used in our paper.*
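For reference, the *MaxIoU* statistic reported in the table above can be computed from bounding boxes alone. A minimal sketch, where the `(x1, y1, x2, y2)` box format and the helper names are our own (the paper's mMaxIoU additionally averages over instances):

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); returns intersection-over-union.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def max_iou(boxes, i):
    # MaxIoU of instance i: the largest IoU it has with any other
    # instance of the same category in the same image.
    return max((iou(boxes[i], boxes[j])
                for j in range(len(boxes)) if j != i), default=0.0)

# Two heavily overlapping people plus one isolated person.
boxes = [(0, 0, 100, 100), (50, 0, 150, 100), (300, 300, 320, 320)]
print(max_iou(boxes, 0))  # -> 0.333..., i.e. the overlap with the second box
```

An instance would count as heavily occluded under the note above when this value exceeds 0.5.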
## Install API
```
git clone https://github.com/liruilong940607/OCHumanApi
cd OCHumanApi
make install
```

## How to use
See [Demo.ipynb](Demo.ipynb)
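As a quick orientation before opening the notebook, a minimal sketch of loading the dataset with this API, assuming an annotation file downloaded from the link above; the `OCHuman` loader and its `AnnoFile`/`Filter` arguments follow the demo and may differ in your version:

```python
from ochumanApi.ochuman import OCHuman

# Keep only instances that carry both keypoint and mask annotations,
# matching the evaluation subset used in the paper.
ochuman = OCHuman(AnnoFile='ochuman.json', Filter='kpt&segm')

image_ids = ochuman.getImgIds()
print('Total images: %d' % len(image_ids))

# Load the record for the first image; its annotations are expected to
# include bounding boxes, keypoints, and instance masks.
data = ochuman.loadImgs(imgIds=[image_ids[0]])
```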