{"id":20399849,"url":"https://github.com/kitware/dive","last_synced_at":"2025-04-05T05:05:25.807Z","repository":{"id":37537889,"uuid":"219046251","full_name":"Kitware/dive","owner":"Kitware","description":"Media annotation and analysis tools for web and desktop.  Get started at https://viame.kitware.com","archived":false,"fork":false,"pushed_at":"2025-03-28T15:54:15.000Z","size":67087,"stargazers_count":87,"open_issues_count":167,"forks_count":21,"subscribers_count":10,"default_branch":"main","last_synced_at":"2025-03-29T04:04:55.045Z","etag":null,"topics":["annotation","computer-vision","docker","image-annotation","machine-learning","marine-biology","object-detection","video","video-analytics","video-annotation"],"latest_commit_sha":null,"homepage":"https://kitware.github.io/dive","language":"Vue","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Kitware.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":"docs/Support.md","governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-11-01T19:03:12.000Z","updated_at":"2025-03-14T15:07:07.000Z","dependencies_parsed_at":"2024-03-19T14:48:41.899Z","dependency_job_id":"1362f3cf-2502-4b06-839a-a1171d1b94e9","html_url":"https://github.com/Kitware/dive","commit_stats":{"total_commits":852,"total_committers":20,"mean_commits":42.6,"dds":0.7852112676056338,"last_synced_commit":"6b66a28ffe6500e50e8989fa773840ed084c9bd9"},"previous_names":[],"tags_count":58,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Kitware%2Fdive","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/rep
ositories/Kitware%2Fdive/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Kitware%2Fdive/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Kitware%2Fdive/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Kitware","download_url":"https://codeload.github.com/Kitware/dive/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247289426,"owners_count":20914464,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["annotation","computer-vision","docker","image-annotation","machine-learning","marine-biology","object-detection","video","video-analytics","video-annotation"],"created_at":"2024-11-15T04:34:31.003Z","updated_at":"2025-04-05T05:05:25.791Z","avatar_url":"https://github.com/Kitware.png","language":"Vue","readme":"\u003cimg src=\"http://www.viametoolkit.org/wp-content/uploads/2016/08/viami_logo.png\" alt=\"VIAME Logo\" width=\"200\" height=\"78\"\u003e\n\nDIVE is a web interface for performing data management, video annotation, and running a portion of the algorithms stored within the [VIAME](https://github.com/VIAME/VIAME) repository. When compiled, Docker instances of DIVE can be run either as local servers or online as web services. 
A sample instance of DIVE is running on a public server at [viame.kitware.com](https://viame.kitware.com).\n\n![docs/images/Banner.png](docs/images/Banner.png)\n\n## Features\n\n* Video annotation\n* Still image (and image sequence) annotation\n* Deep integration with [VIAME](https://github.com/VIAME/VIAME) computer vision analysis tools\n* Single-frame boxes, polygons, and lines\n* Multi-frame bounding box tracks with interpolation\n* Automatic transcoding to support most video formats\n* Customizable labeling with text, numeric, and multiple-choice attributes\n\n## Documentation\n\n* [Client User Guide](https://kitware.github.io/dive/)\n* [Client Development Docs](client/README.md)\n* [Server Development Docs](server/README.md)\n* [Deployment Overview](https://kitware.github.io/dive/Deployment-Overview/)\n* [Running with Docker Compose](https://kitware.github.io/dive/Deployment-Docker-Compose/)\n\n## Technologies Used\n\nDIVE uses [Girder](https://girder.readthedocs.io/en/stable/) for data management and follows a typical Girder + Girder Worker + Docker architecture. See the docker scripts for additional details.\n\n* The client application is a standard [@vue/cli](https://cli.vuejs.org/) application.\n* The job runner is built on Celery and [Girder Worker](https://girder-worker.readthedocs.io/en/latest/). Command-line executables for VIAME and FFmpeg are built inside the worker docker image.\n\n## Example Data\n\n### Input\n\nDIVE takes two kinds of input data: either a video file (e.g. .mpg) or an image sequence. Both types can\noptionally be accompanied by a CSV file containing video annotations. Example input sequences are available at\nhttps://viame.kitware.com/girder#collections.\n\n### Output\n\nWhen running an algorithmic pipeline or performing manual video annotation (and saving the annotations with the save\nbutton), output CSV files are produced containing the detections. 
Simultaneously, a detection plot of the results\nis shown underneath each video sequence.\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkitware%2Fdive","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fkitware%2Fdive","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fkitware%2Fdive/lists"}