{"id":13645146,"url":"https://github.com/Cartucho/OpenLabeling","last_synced_at":"2025-04-21T13:31:53.040Z","repository":{"id":37602861,"uuid":"118135147","full_name":"Cartucho/OpenLabeling","owner":"Cartucho","description":"Label images and video for Computer Vision applications","archived":false,"fork":false,"pushed_at":"2022-07-06T19:58:48.000Z","size":7725,"stargazers_count":936,"open_issues_count":16,"forks_count":270,"subscribers_count":32,"default_branch":"master","last_synced_at":"2025-04-09T10:04:02.989Z","etag":null,"topics":["bounding-boxes","darkflow","darknet","gui","labeling-tool","object-detection","opencv","pascal-voc","training-yolo","yolo"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Cartucho.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-01-19T14:30:09.000Z","updated_at":"2025-04-06T23:42:13.000Z","dependencies_parsed_at":"2022-07-18T05:46:13.214Z","dependency_job_id":null,"html_url":"https://github.com/Cartucho/OpenLabeling","commit_stats":null,"previous_names":[],"tags_count":4,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Cartucho%2FOpenLabeling","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Cartucho%2FOpenLabeling/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Cartucho%2FOpenLabeling/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Cartucho%2FOpenLabeling/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Cartucho","download_url":"ht
tps://codeload.github.com/Cartucho/OpenLabeling/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250064703,"owners_count":21368952,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["bounding-boxes","darkflow","darknet","gui","labeling-tool","object-detection","opencv","pascal-voc","training-yolo","yolo"],"created_at":"2024-08-02T01:02:29.294Z","updated_at":"2025-04-21T13:31:52.132Z","avatar_url":"https://github.com/Cartucho.png","language":"Python","readme":"# OpenLabeling: open-source image and video labeler\n\n[![GitHub stars](https://img.shields.io/github/stars/Cartucho/OpenLabeling.svg?style=social\u0026label=Stars)](https://github.com/Cartucho/OpenLabeling)\n\nImage labeling in multiple annotation formats:\n- PASCAL VOC (= [darkflow](https://github.com/thtrieu/darkflow))\n- [YOLO darknet](https://github.com/pjreddie/darknet)\n- ask for more (create a new issue)...\n\n\u003cimg src=\"https://media.giphy.com/media/l49JDgDSygJN369vW/giphy.gif\" width=\"40%\"\u003e\u003cimg src=\"https://media.giphy.com/media/3ohc1csRs9PoDgCeuk/giphy.gif\" width=\"40%\"\u003e\n\u003cimg src=\"https://media.giphy.com/media/3o752fXKwTJJkhXP32/giphy.gif\" width=\"40%\"\u003e\u003cimg src=\"https://media.giphy.com/media/3ohc11t9auzSo6fwLS/giphy.gif\" width=\"40%\"\u003e\n\n## Citation\n\nThis project was developed for the following paper, please consider citing it:\n\n```bibtex\n@INPROCEEDINGS{8594067,\n  author={J. {Cartucho} and R. {Ventura} and M. 
{Veloso}},\n  booktitle={2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, \n  title={Robust Object Recognition Through Symbiotic Deep Learning In Mobile Robots}, \n  year={2018},\n  pages={2336-2341},\n}\n```\n\n## Latest Features\n\n- Jun 2019: Deep Learning Object Detection Model\n- May 2019: [ECCV2018] Distractor-aware Siamese Networks for Visual Object Tracking\n- Jan 2019: easy and quick bounding-box resizing!\n- Jan 2019: video object tracking with OpenCV trackers!\n- TODO: Label photos via Google Drive to allow \"team online labeling\".\n[New Features Discussion](https://github.com/Cartucho/OpenLabeling/issues/3)\n\n## Table of contents\n\n- [Quick start](#quick-start)\n- [Prerequisites](#prerequisites)\n- [Run project](#run-project)\n- [GUI usage](#gui-usage)\n- [Authors](#authors)\n\n## Quick start\n\nTo start using the YOLO Bounding Box Tool, you need to [download the latest release](https://github.com/Cartucho/OpenLabeling/archive/v1.3.zip) or clone the repo:\n\n```\ngit clone --recurse-submodules git@github.com:Cartucho/OpenLabeling.git\n```\n\n### Prerequisites\n\nYou need to install:\n\n- [Python](https://www.python.org/downloads/)\n- [OpenCV](https://opencv.org/) version \u003e= 3.0\n    1. `python -mpip install -U pip`\n    1. `python -mpip install -U opencv-python`\n    1. `python -mpip install -U opencv-contrib-python`\n- numpy, tqdm and lxml:\n    1. `python -mpip install -U numpy`\n    1. `python -mpip install -U tqdm`\n    1. `python -mpip install -U lxml`\n\nAlternatively, you can install everything at once by simply running:\n\n```\npython -mpip install -U pip\npython -mpip install -U -r requirements.txt\n```\n- [PyTorch](https://pytorch.org/get-started/locally/) \n    Visit the link for a configurator for your setup.\n    \n### Run project\n\nStep by step:\n\n  1. Open the `main/` directory\n  2. Insert the input images and videos in the folder **input/**\n  3. 
Insert the classes in the file **class_list.txt** (one class name per line)\n  4. Run the code:\n  5. You can find the annotations in the folder **output/**\n\n         python main.py [-h] [-i] [-o] [-t] [--tracker TRACKER_TYPE] [-n N_FRAMES]\n\n         optional arguments:\n          -h, --help                Show this help message and exit\n          -i, --input               Path to images and videos input folder | Default: input/\n          -o, --output              Path to output folder (if using the PASCAL VOC format it's important to set this path correctly) | Default: output/\n          -t, --thickness           Bounding box and cross line thickness (int) | Default: -t 1\n          --tracker tracker_type    tracker_type being used: ['CSRT', 'KCF','MOSSE', 'MIL', 'BOOSTING', 'MEDIANFLOW', 'TLD', 'GOTURN', 'DASIAMRPN']\n          -n N_FRAMES               number of frames to track the object for\n  To use the DASIAMRPN tracker:\n  1. Install the [DaSiamRPN](https://github.com/foolwood/DaSiamRPN) submodule and download the model (VOT) from [google drive](https://drive.google.com/drive/folders/1BtIkp5pB6aqePQGlMb2_Z7bfPy6XEj6H)\n  2. Copy it into `DaSiamRPN/code/`\n  3. Set the default tracker in `main.py` or run it with `--tracker DASIAMRPN`\n\n\n#### How to use the deep learning feature\n\n- Download one or more deep learning models from https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md\n  and put them into the `object_detection/models` directory (you need to create the `models` folder yourself). The outline of `object_detection` looks like this:\n  + `tf_object_detection.py`\n  + `utils.py`\n  + `models/ssdlite_mobilenet_v2_coco_2018_05_09`\n\nFor example, download the pre-trained model from http://download.tensorflow.org/models/object_detection/ssdlite_mobilenet_v2_coco_2018_05_09.tar.gz and put it into `object_detection/models`. 
Make sure to extract the model.\n\n  **Note**: The default model used in `main_auto.py` is `ssdlite_mobilenet_v2_coco_2018_05_09`. You can\n  set `graph_model_path` in `main_auto.py` to change the pre-trained model.\n- Use `main_auto.py` to automatically label the data first\n\n  TODO: explain how the user can \n\n### GUI usage\n\nKeyboard, press:\n\n\u003cimg src=\"https://github.com/Cartucho/OpenLabeling/blob/master/keyboard_usage.jpg\"\u003e\n\n| Key | Description |\n| --- | --- |\n| a/d | previous/next image |\n| s/w | previous/next class |\n| e | edges |\n| h | help |\n| q | quit |\n\nVideo:\n\n| Key | Description |\n| --- | --- |\n| p | predict the next frames' labels |\n\nMouse:\n  - Use two separate left clicks to draw each bounding box\n  - **Right-click** -\u003e **quick delete**!\n  - Use the middle mouse to zoom in and out\n  - Use double click to select a bounding box\n\n## Authors\n\n* **João Cartucho**\n\n    Feel free to contribute\n\n    [![GitHub contributors](https://img.shields.io/github/contributors/Cartucho/OpenLabeling.svg)](https://github.com/Cartucho/OpenLabeling/graphs/contributors)\n","funding_links":[],"categories":["Summary","Data Labelling and Synthesis","Labeling Tools","Python","Object Detection Applications","Data Labelling Tools and Frameworks"],"sub_categories":["Images"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCartucho%2FOpenLabeling","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FCartucho%2FOpenLabeling","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FCartucho%2FOpenLabeling/lists"}