https://github.com/fkodom/simple-cocotools
A simple, modern alternative to 'pycocotools'.
- Host: GitHub
- URL: https://github.com/fkodom/simple-cocotools
- Owner: fkodom
- License: mit
- Created: 2022-05-15T02:00:12.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2023-03-03T19:19:15.000Z (over 2 years ago)
- Last Synced: 2025-02-28T23:10:42.034Z (4 months ago)
- Language: Python
- Homepage:
- Size: 57.6 KB
- Stars: 3
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# simple-cocotools
A simple, modern alternative to `pycocotools`.
## About
Why not just use [Pycocotools](https://github.com/cocodataset/cocoapi/tree/master/PythonAPI/pycocotools)?
* Code is more readable and hackable.
* Metrics are more transparent and understandable.
* Evaluation is fast.
* Only dependencies are `numpy` and `scipy`. No `cython` extensions.
* Code is more modern (type annotations, linting, etc.).

## Install
### From PyPI
```bash
pip install simple-cocotools
```

### From Repo
```bash
pip install "simple-cocotools @ git+ssh://[email protected]/fkodom/simple-cocotools.git"
```

### For Contributors
```bash
# Clone this repository
gh repo clone fkodom/simple-cocotools
cd simple-cocotools
# Install all dev dependencies (tests etc.)
pip install -e .[all]
# Setup pre-commit hooks
pre-commit install
```

## Usage
`simple-cocotools` expects target annotations to have the same format as model predictions (the format used by all `torchvision` detection models). You may already have code to convert annotations into this format, since it's required to train many detection models. If not, use ['AnnotationsToDetectionFormat' from this repo](./simple_cocotools/utils/coco.py#L83) as an example of how to do that.
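Roughly, each image's target (and prediction) is a dict of tensors. The sketch below follows the `torchvision` detection convention; the shapes and key names are illustrative, not a spec from this repo:

```python
import torch

# Illustrative per-image target in the torchvision detection format.
# Predictions have the same keys, plus a "scores" tensor of shape (N,).
target = {
    "boxes": torch.tensor([[10.0, 20.0, 110.0, 220.0]]),  # (N, 4), xyxy pixel coords
    "labels": torch.tensor([3]),                           # (N,), integer class ids
    "masks": torch.zeros(1, 480, 640, dtype=torch.uint8),  # (N, H, W), only for mask metrics
}
```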
A minimal example:
```python
from torchvision.models.detection import maskrcnn_resnet50_fpn

from simple_cocotools import CocoEvaluator

evaluator = CocoEvaluator()
model = maskrcnn_resnet50_fpn(pretrained=True).eval()

for images, targets in data_loader:
    predictions = model(images)
    evaluator.update(predictions, targets)

metrics = evaluator.summarize()
```
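The `data_loader` above is assumed to yield `(images, targets)` pairs of lists, as `torchvision` detection models expect. One common way to build it (a sketch, with `dataset` as a placeholder) is an identity-style `collate_fn`:

```python
from torch.utils.data import DataLoader

# Assumes `dataset[i]` returns an (image, target) pair like the one sketched above.
# Detection models take lists of variable-sized images, so the default
# tensor-stacking collate function won't work.
def collate_fn(batch):
    return tuple(zip(*batch))

data_loader = DataLoader(dataset, batch_size=4, collate_fn=collate_fn)
```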
`metrics` will be a dictionary with format:
```json
{
    "box": {
        "mAP": 0.40,
        "mAR": 0.41,
        "class_AP": {
            "cat": 0.39,
            "dog": 0.42,
            ...
        },
        "class_AR": {
            # Same as 'class_AP' above.
        }
    },
    "mask": {
        # Same as 'box' above.
    }
}
```

For a more complete example, see [`scripts/mask_rcnn_example.py`](./scripts/mask_rcnn_example.py).
## Benchmarks
I benchmarked against several `torchvision` detection models, which have [mAP scores reported on the PyTorch website](https://pytorch.org/vision/stable/models.html#object-detection-instance-segmentation-and-person-keypoint-detection).
Using a default score threshold of 0.5:
Model        | Backbone          | box mAP (official) | box mAP | box mAR | mask mAP (official) | mask mAP | mask mAR
-------------|-------------------|--------------------|---------|---------|---------------------|----------|----------
Mask R-CNN   | ResNet50          | 37.9               | 36.9    | 43.2    | 34.6                | 34.1     | 40.0
Faster R-CNN | ResNet50          | 37.0               | 36.3    | 42.0    | -                   | -        | -
Faster R-CNN | MobileNetV3-Large | 32.8               | 39.9    | 35.0    | -                   | -        | -

Notice that the mAP for `MobileNetV3-Large` is artificially high, since it has a much lower mAR at that score threshold. After tuning the score threshold so that mAP and mAR are more balanced:
Model | Backbone | Threshold | box mAP | box mAR | mask mAP | mask mAR
-------------|-------------------|-----------|---------|---------|----------|----------
Mask R-CNN | ResNet50 | 0.6 | 41.1 | 41.3 | 38.2 | 38.5
Faster R-CNN | ResNet50 | 0.6 | 40.8 | 40.4 | - | -
Faster R-CNN | MobileNetV3-Large | 0.425     | 36.2    | 36.2    | -        | -

These scores are more reflective of model performance, in my opinion. Mask R-CNN slightly outperforms Faster R-CNN, and there is a noticeable (but not horrible) gap between ResNet50 and MobileNetV3 backbones. PyTorch docs don't mention what score thresholds were used for each model benchmark. ¯\\_(ツ)_/¯
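If you want to reproduce that kind of threshold tuning, one option is to filter each prediction dict by score before handing it to the evaluator. This is a sketch that assumes predictions in the `torchvision` format; the evaluator itself may or may not expose a threshold option:

```python
def filter_by_score(prediction: dict, threshold: float) -> dict:
    # Keep only detections whose confidence is at or above the threshold.
    keep = prediction["scores"] >= threshold
    return {key: value[keep] for key, value in prediction.items()}

# Inside the evaluation loop:
#     predictions = [filter_by_score(p, threshold=0.6) for p in model(images)]
#     evaluator.update(predictions, targets)
```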
Ignoring the time spent getting predictions from the model, evaluation is very fast.
* **Bbox:** ~400 samples/second
* **Bbox + mask:** ~100 samples/second
* Using a Google Cloud `n1-standard-4` VM (4 vCPUs, 16 GB RAM).

**Note:** Speeds depend on the number of detections per image, and therefore on the model and score threshold.
## Keypoints Usage
Keypoint mAP and mAR normally use pre-computed "sigmas" to determine the "correctness" of each keypoint prediction. Unfortunately, those sigmas are tailored specifically for human pose (as in the COCO dataset), and are not applicable to other keypoint datasets.
> **NOTE:** Sigmas are actually computed using the predictions of a specific model trained on COCO. To make this applicable to other datasets, you would need to train a model on that dataset, and then use the sigmas from that model. The logic is somewhat circular -- you need to train a model to get the sigmas, but you need the sigmas to compute mAP / mAR.
>
> There's no way around this, unless a large body of pretrained models is already available for the dataset you're using. For most real-world problems, that is not the case. So, the open-source mAP / mAR keypoints metrics are not generally extensible to other datasets.

`simple-cocotools` does not use sigmas, and instead computes the average distance between each keypoint prediction and its ground truth. This is a much simpler approach, and is more applicable to other datasets. It's roughly how the [sigmas for COCO were originally computed](https://cocodataset.org/#keypoints-eval). The downside is that it's not directly comparable to the official COCO keypoints mAP / mAR.
Some keypoints are more ambiguous than others. For example, "left hip" is much more ambiguous than "left eye" -- the exact location of "left eye" should be obvious, while "left hip" is hidden by the torso and clothing. The average distance for "left hip" will be much larger than "left eye", even if the predictions are correct. (This is how sigmas were used in the official COCO keypoints mAP / mAR.) For that reason, keypoint distances should be interpreted with some knowledge about the specific dataset at hand.
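As a rough illustration of the distance metric described above (not necessarily the exact implementation in `simple-cocotools`), the per-class keypoint distance reduces to an average Euclidean distance between matched keypoints:

```python
import numpy as np

def mean_keypoint_distance(pred_keypoints: np.ndarray, true_keypoints: np.ndarray) -> float:
    """Average Euclidean distance between matched keypoints.

    Both arrays have shape (K, 2) -- an (x, y) location for each of K keypoints.
    Visibility flags and any normalization (e.g. by box size) are omitted here.
    """
    return float(np.linalg.norm(pred_keypoints - true_keypoints, axis=-1).mean())
```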
`metrics` will be a dictionary with format:
```json
{
    "box": {
        "mAP": 0.40,
        "mAR": 0.41,
        "class_AP": {
            "cat": 0.39,
            "dog": 0.42,
            ...
        },
        "class_AR": {
            # Same as 'class_AP' above.
        }
    },
    "keypoints": {
        "distance": 0.10,
        "class_distance": {
            "cat": {
                "distance": 0.11,
                "keypoint_distance": {
                    "left_eye": 0.12,
                    "right_eye": 0.13,
                    ...
                }
            },
            ...
        }
    }
}
```

## How It Works
**TODO:** Blog post on how `simple-cocotools` works.
1. Match the predictions/labels together, maximizing the IoU between pairs with the same object class. SciPy's `linear_sum_assignment` method does most of the heavy lifting here. (See the sketch after this list.)
2. For each IoU threshold, determine the number of "correct" predictions from the assignments above. Pairs with IoU < threshold are incorrect.
3. For each image, count the number of total predictions, correct predictions, and ground truth labels for each object class and IoU threshold.
4. Compute AP/AR for each class from the prediction counts above. Then compute mAP and mAR by averaging over all object classes.
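A minimal sketch of the matching step (steps 1-2), assuming a hypothetical `box_iou(a, b)` helper that returns a pairwise IoU matrix; the actual code in this repo may differ:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def count_correct(pred_boxes: np.ndarray, true_boxes: np.ndarray, iou_threshold: float) -> int:
    """Match predictions one-to-one with ground truths of the same class, maximizing IoU,
    then count matched pairs whose IoU clears the threshold."""
    if len(pred_boxes) == 0 or len(true_boxes) == 0:
        return 0
    iou = box_iou(pred_boxes, true_boxes)  # hypothetical helper -> (num_preds, num_truths)
    # linear_sum_assignment minimizes total cost, so negate IoU to maximize it.
    pred_idx, true_idx = linear_sum_assignment(-iou)
    return int((iou[pred_idx, true_idx] >= iou_threshold).sum())
```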