https://github.com/wkentaro/labelme
Image Polygonal Annotation with Python (polygon, rectangle, circle, line, point and image-level flag annotation).
- Host: GitHub
- URL: https://github.com/wkentaro/labelme
- Owner: wkentaro
- License: other
- Created: 2016-05-09T12:30:26.000Z (almost 9 years ago)
- Default Branch: main
- Last Pushed: 2024-11-19T15:01:51.000Z (5 months ago)
- Last Synced: 2024-11-23T18:39:54.520Z (5 months ago)
- Topics: annotations, classification, computer-vision, deep-learning, image-annotation, instance-segmentation, python, semantic-segmentation, video-annotation
- Language: Python
- Homepage: https://labelme.io
- Size: 44.5 MB
- Stars: 13,582
- Watchers: 147
- Forks: 3,414
- Open Issues: 148
- Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
- Citation: CITATION.cff
Awesome Lists containing this project
- awesome-cv (Annotation Tools)
- awesome-object-detection-datasets (Summary)
- awesome-robotic-tooling (Data Visualization and Mission Control / Annotation)
- awesome-data-annotation (Image / video / Open source)
- awesome-dataset-tools (Labeling Tools / Images)
- StarryDivineSky
- awesome-yolo-object-detection (Object Detection Applications; Applications)
- awesome-image-tools
README

# labelme

Image Polygonal Annotation with Python
## Description
Labelme is a graphical image annotation tool inspired by MIT's LabelMe.
It is written in Python and uses Qt for its graphical interface.
VOC dataset example of instance segmentation.
Other examples (semantic segmentation, bbox detection, and classification).
Various primitives (polygon, rectangle, circle, line, and point).

## Features
- [x] Image annotation for polygon, rectangle, circle, line and point. ([tutorial](examples/tutorial))
- [x] Image flag annotation for classification and cleaning. ([#166](https://github.com/wkentaro/labelme/pull/166))
- [x] Video annotation. ([video annotation](examples/video_annotation))
- [x] GUI customization (predefined labels / flags, auto-saving, label validation, etc). ([#144](https://github.com/wkentaro/labelme/pull/144))
- [x] Exporting VOC-format dataset for semantic/instance segmentation. ([semantic segmentation](examples/semantic_segmentation), [instance segmentation](examples/instance_segmentation))
- [x] Exporting COCO-format dataset for instance segmentation. ([instance segmentation](examples/instance_segmentation))

## Installation
There are two options to install labelme:
### Option 1: Using pip
For more detail, check ["Install Labelme using Pip"](https://www.labelme.io/docs/install-labelme-pip).
```bash
pip install labelme
```

### Option 2: Using standalone executable (Easiest)
If you prefer the convenience of a simple installation without any dependencies (Python, Qt),
you can download the standalone executable from ["Install Labelme as App"](https://www.labelme.io/docs/install-labelme-app). It's a one-time payment for lifetime access, and it helps us maintain this project.
## Usage
Run `labelme --help` for details.
The annotations are saved as a [JSON](http://www.json.org/) file.

```bash
labelme  # just open gui

# tutorial (single image example)
cd examples/tutorial
labelme apc2016_obj3.jpg # specify image file
labelme apc2016_obj3.jpg -O apc2016_obj3.json  # close window after saving
labelme apc2016_obj3.jpg --nodata  # store a relative image path in the JSON file instead of embedding image data
labelme apc2016_obj3.jpg \
  --labels highland_6539_self_stick_notes,mead_index_cards,kong_air_dog_squeakair_tennis_ball  # specify label list

# semantic segmentation example
cd examples/semantic_segmentation
labelme data_annotated/ # Open directory to annotate all images in it
labelme data_annotated/ --labels labels.txt # specify label list with a file
```

### Command Line Arguments
- `--output` specifies the location that annotations will be written to. If the location ends with `.json`, a single annotation will be written to this file; in that case only one image can be annotated. Otherwise the location is treated as a directory, and each annotation is stored there under a name that corresponds to the image it was made on.
- The first time you run labelme, it will create a config file in `~/.labelmerc`. You can edit this file and the changes will be applied the next time that you launch labelme. If you would prefer to use a config file from another location, you can specify this file with the `--config` flag.
- Without the `--nosortlabels` flag, the program will list labels in alphabetical order. When the program is run with this flag, it will display labels in the order that they are provided.
- Flags are assigned to an entire image. [Example](examples/classification)
- Labels are assigned to a single polygon. [Example](examples/bbox_detection)

### FAQ
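For quick reference, the JSON files labelme writes can be parsed with nothing but the standard library. A minimal sketch: the annotation dict below is a hypothetical hand-written sample, but the keys shown (`shapes`, `label`, `points`, `shape_type`, `imagePath`) follow labelme's standard output format.

```python
import json

# A hypothetical labelme-style annotation, written inline here; normally you
# would json.load() a .json file produced by labelme.
annotation_text = json.dumps({
    "version": "5.0.0",
    "flags": {},
    "shapes": [
        {
            "label": "dog",
            "points": [[10.0, 20.0], [50.0, 20.0], [50.0, 60.0], [10.0, 60.0]],
            "shape_type": "polygon",
        }
    ],
    "imagePath": "apc2016_obj3.jpg",
})

data = json.loads(annotation_text)  # or: json.load(open("apc2016_obj3.json"))
labels = [shape["label"] for shape in data["shapes"]]
polygons = {s["label"]: s["points"]
            for s in data["shapes"] if s["shape_type"] == "polygon"}
print(labels)                 # ['dog']
print(len(polygons["dog"]))   # 4 vertices
```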
- **How to convert JSON file to numpy array?** See [examples/tutorial](examples/tutorial#convert-to-dataset).
- **How to load label PNG file?** See [examples/tutorial](examples/tutorial#how-to-load-label-png-file).
- **How to get annotations for semantic segmentation?** See [examples/semantic_segmentation](examples/semantic_segmentation).
- **How to get annotations for instance segmentation?** See [examples/instance_segmentation](examples/instance_segmentation).

## Examples
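The segmentation examples below ultimately convert annotated polygons into label masks (labelme's own utilities rely on PIL for this step). As an illustration of the underlying idea only, here is a pure-Python even-odd point-in-polygon test that rasterizes a small triangle; the functions are illustrative stand-ins, not labelme's API.

```python
def point_in_polygon(x, y, polygon):
    """Even-odd ray-casting test: count edge crossings of a ray going right."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the horizontal line through (x, y)?
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses that horizontal line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def polygon_to_mask(polygon, height, width):
    """Rasterize a polygon into a row-major 0/1 mask (list of lists)."""
    return [[1 if point_in_polygon(c + 0.5, r + 0.5, polygon) else 0
             for c in range(width)]
            for r in range(height)]

# A small triangle, stored as labelme stores points: [[x, y], ...]
mask = polygon_to_mask([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0]], 4, 4)
print(sum(sum(row) for row in mask))  # 6 pixels inside
```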
* [Image Classification](examples/classification)
* [Bounding Box Detection](examples/bbox_detection)
* [Semantic Segmentation](examples/semantic_segmentation)
* [Instance Segmentation](examples/instance_segmentation)
* [Video Annotation](examples/video_annotation)

## How to build standalone executable
```bash
LABELME_PATH=./labelme
OSAM_PATH=$(python -c 'import os, osam; print(os.path.dirname(osam.__file__))')
pyinstaller ${LABELME_PATH}/__main__.py \
  --name=Labelme \
  --windowed \
  --noconfirm \
  --specpath=build \
  --add-data=${OSAM_PATH}/_models/yoloworld/clip/bpe_simple_vocab_16e6.txt.gz:osam/_models/yoloworld/clip \
  --add-data=${LABELME_PATH}/config/default_config.yaml:labelme/config \
  --add-data=${LABELME_PATH}/icons/*:labelme/icons \
  --add-data=${LABELME_PATH}/translate/*:translate \
  --icon=${LABELME_PATH}/icons/icon.png \
  --onedir
```

## Acknowledgement
This repo is a fork of [mpitid/pylabelme](https://github.com/mpitid/pylabelme).