Detectorch - detectron for PyTorch
https://github.com/ignacio-rocco/detectorch
- Host: GitHub
- URL: https://github.com/ignacio-rocco/detectorch
- Owner: ignacio-rocco
- License: other
- Created: 2018-03-07T17:08:40.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2018-10-30T12:41:48.000Z (over 6 years ago)
- Last Synced: 2024-11-14T04:34:46.698Z (7 months ago)
- Topics: detectron, instance-segmentation, object-detection, object-segmentation, python, pytorch
- Language: Jupyter Notebook
- Homepage:
- Size: 2.42 MB
- Stars: 558
- Watchers: 28
- Forks: 72
- Open Issues: 11
Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- Awesome-pytorch-list-CNVersion - detectorch - A PyTorch version of the Detectron framework; currently it only provides Detectron's inference and evaluation functionality, with no training support. (Pytorch & related libraries / CV: computer vision)
- Awesome-pytorch-list - detectorch - detectron for PyTorch (Pytorch & related libraries / CV:)
README
# Detectorch - detectron for PyTorch
(Disclaimer: this is a work in progress and does not feature all the functionality of Detectron. Currently only inference and evaluation are supported -- no training.)
(News: now supporting FPN and ResNet-101!)

This code allows you to use some of the [Detectron models for object detection from Facebook AI Research](https://github.com/facebookresearch/Detectron/) with PyTorch.
It currently supports:
- Fast R-CNN
- Faster R-CNN
- Mask R-CNN

It supports ResNet-50/101 models with or without FPN. The pre-trained caffe2 models can be imported and used in PyTorch.
*Example Mask R-CNN with ResNet-101 and FPN.*
## Evaluation
Both bounding box evaluation and instance segmentation evaluation were tested, yielding the same results as the Detectron caffe2 models. The results below have been computed using the PyTorch code:

| Model | box AP | mask AP | model id |
| --- | --- | --- | --- |
| [fast_rcnn_R-50-C4_2x](https://s3-us-west-2.amazonaws.com/detectron/36224046/12_2017_baselines/fast_rcnn_R-50-C4_2x.yaml.08_22_57.XFxNqEnL/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl) | 35.6 | | 36224046 |
| [fast_rcnn_R-50-FPN_2x](https://s3-us-west-2.amazonaws.com/detectron/36225249/12_2017_baselines/fast_rcnn_R-50-FPN_2x.yaml.08_40_18.zoChak1f/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl) | 36.8 | | 36225249 |
| [e2e_faster_rcnn_R-50-C4_2x](https://s3-us-west-2.amazonaws.com/detectron/35857281/12_2017_baselines/e2e_faster_rcnn_R-50-C4_2x.yaml.01_34_56.ScPH0Z4r/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl) | 36.5 | | 35857281 |
| [e2e_faster_rcnn_R-50-FPN_2x](https://s3-us-west-2.amazonaws.com/detectron/35857389/12_2017_baselines/e2e_faster_rcnn_R-50-FPN_2x.yaml.01_37_22.KSeq0b5q/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl) | 37.9 | | 35857389 |
| [e2e_mask_rcnn_R-50-C4_2x](https://s3-us-west-2.amazonaws.com/detectron/35858828/12_2017_baselines/e2e_mask_rcnn_R-50-C4_2x.yaml.01_46_47.HBThTerB/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl) | 37.8 | 32.8 | 35858828 |
| [e2e_mask_rcnn_R-50-FPN_2x](https://s3-us-west-2.amazonaws.com/detectron/35859007/12_2017_baselines/e2e_mask_rcnn_R-50-FPN_2x.yaml.01_49_07.By8nQcCH/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl)| 38.6 | 34.5 | 35859007 |
| [e2e_mask_rcnn_R-101-FPN_2x](https://s3-us-west-2.amazonaws.com/detectron/35861858/12_2017_baselines/e2e_mask_rcnn_R-101-FPN_2x.yaml.02_32_51.SgT4y1cO/output/train/coco_2014_train:coco_2014_valminusminival/generalized_rcnn/model_final.pkl) | 40.9 | 36.4 | 35861858 |
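The linked checkpoints in the table can be downloaded directly from the Detectron S3 bucket. A minimal sketch for one of them (the destination directory is an arbitrary choice, not a path required by this repo):

```
# Download the e2e_mask_rcnn_R-50-C4_2x checkpoint (model id 35858828) linked above.
# The target directory below is only an example.
mkdir -p pretrained
wget -O pretrained/e2e_mask_rcnn_R-50-C4_2x_model_final.pkl \
  "https://s3-us-west-2.amazonaws.com/detectron/35858828/12_2017_baselines/e2e_mask_rcnn_R-50-C4_2x.yaml.01_46_47.HBThTerB/output/train/coco_2014_train%3Acoco_2014_valminusminival/generalized_rcnn/model_final.pkl"
```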
## Training
Training code is experimental. See `train_fast.py` for training Fast R-CNN; it seems to work, but is slow.
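A hedged example of launching it (the bare invocation is an assumption; check `train_fast.py` itself for the options and dataset paths it actually expects):

```
# Experimental Fast R-CNN training. The plain invocation is an assumption;
# see train_fast.py for the configuration and data paths it expects.
python train_fast.py
```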
## Installation
First, clone the repo with `git clone --recursive https://github.com/ignacio-rocco/detectorch` so that you also clone the Coco API.

The code can be used with PyTorch 0.3.1 or PyTorch 0.4 (master) under Python 3. Anaconda is recommended. Other required packages:
- torchvision (`conda install torchvision -c soumith`)
- opencv (`conda install -c conda-forge opencv`)
- cython (`conda install cython`)
- matplotlib (`conda install matplotlib`)
- scikit-image (`conda install scikit-image`)
- ninja (`conda install ninja`) *(required for PyTorch 0.4 only)*
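For convenience, the clone and the conda packages listed above can be set up in one pass (a sketch assuming an existing Anaconda environment; channels and versions may need adjusting):

```
# Clone with submodules so that lib/cocoapi is included
git clone --recursive https://github.com/ignacio-rocco/detectorch
cd detectorch

# Install the required packages listed above
conda install torchvision -c soumith
conda install -c conda-forge opencv
conda install cython matplotlib scikit-image
conda install ninja  # only needed for the PyTorch 0.4 (JIT) build path
```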
Additionally, you need to build the Coco API and RoIAlign layer. See below.

#### Compiling the Coco API
If you cloned this repo with `git clone --recursive` you should have also cloned the cocoapi in `lib/cocoapi`. Compile this with:
```
cd lib/cocoapi/PythonAPI
make install
```
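Optionally, you can check that the API built and installed correctly (the `pycocotools.coco` import path is the standard one provided by the Coco API):

```
# Should print nothing and exit cleanly if pycocotools is importable
python -c "from pycocotools.coco import COCO"
```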
#### Compiling RoIAlign
The RoIAlign layer was converted from the caffe2 version. There are two different implementations, one for each supported PyTorch version:

- PyTorch 0.4: RoIAlign using the ATen library (`lib/cppcuda`). Compiled JIT when loaded.
- PyTorch 0.3.1: RoIAlign using TH/THC and cffi (`lib/cppcuda_cffi`). Needs to be compiled with:

```
cd lib/cppcuda_cffi
./make.sh
```

## Quick Start
Check the demo notebook.