{"id":13577431,"url":"https://github.com/jakowenko/double-take","last_synced_at":"2025-05-16T01:05:24.673Z","repository":{"id":37103437,"uuid":"346734534","full_name":"jakowenko/double-take","owner":"jakowenko","description":"Unified UI and API for processing and training images for facial recognition.","archived":false,"fork":false,"pushed_at":"2024-02-17T11:02:02.000Z","size":13132,"stargazers_count":1330,"open_issues_count":143,"forks_count":118,"subscribers_count":24,"default_branch":"master","last_synced_at":"2025-04-08T11:15:29.866Z","etag":null,"topics":["compreface","deepstack","face-recognition","facebox","frigate","home-assistant","home-automation","mqtt","rekognition","room-presence"],"latest_commit_sha":null,"homepage":"https://hub.docker.com/r/jakowenko/double-take","language":"JavaScript","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jakowenko.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":"jakowenko"}},"created_at":"2021-03-11T14:44:36.000Z","updated_at":"2025-04-07T19:06:49.000Z","dependencies_parsed_at":"2024-03-17T04:38:33.720Z","dependency_job_id":"aeb10c09-62f3-4d81-aba5-f3b348a48ace","html_url":"https://github.com/jakowenko/double-take","commit_stats":{"total_commits":1046,"total_committers":8,"mean_commits":130.75,"dds":0.007648183556405397,"last_synced_commit":"8e2728d283b3901d688c2454086fd0b512739b53"},"previous_names":[],"tags_count":53,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jakowenko%2Fdouble
-take","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jakowenko%2Fdouble-take/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jakowenko%2Fdouble-take/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jakowenko%2Fdouble-take/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jakowenko","download_url":"https://codeload.github.com/jakowenko/double-take/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254448579,"owners_count":22072764,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["compreface","deepstack","face-recognition","facebox","frigate","home-assistant","home-automation","mqtt","rekognition","room-presence"],"created_at":"2024-08-01T15:01:21.398Z","updated_at":"2025-05-16T01:05:19.657Z","avatar_url":"https://github.com/jakowenko.png","language":"JavaScript","readme":"[![Double Take](https://badgen.net/github/release/jakowenko/double-take/stable)](https://github.com/jakowenko/double-take) [![Double Take](https://badgen.net/github/stars/jakowenko/double-take)](https://github.com/jakowenko/double-take/stargazers) [![Docker Pulls](https://flat.badgen.net/docker/pulls/jakowenko/double-take)](https://hub.docker.com/r/jakowenko/double-take) [![Discord](https://flat.badgen.net/discord/members/3pumsskdN5?label=Discord)](https://discord.gg/3pumsskdN5)\n\n# Double Take\n\nUnified UI and API for processing and training images for facial recognition.\n\n\u003cp align=\"center\"\u003e\n  \u003cimg 
src=\"https://user-images.githubusercontent.com/1081811/126434926-cf2275f7-f3a8-43eb-adc2-903c0071f7d1.jpg\" width=\"100%\"\u003e\n\u003c/p\u003e\n\n## Why?\n\nThere's a lot of great open source software to perform facial recognition, but each of them behaves differently. Double Take was created to abstract the complexities of the detection services and combine them into an easy-to-use UI and API.\n\n## Features\n\n- Responsive UI and API bundled into a single [Docker image](https://hub.docker.com/r/jakowenko/double-take)\n- Ability to [password protect](#authentication) UI and API\n- Support for [multiple detectors](#supported-detectors)\n- Train and untrain images for subjects\n- Process images from [NVRs](#supported-nvrs)\n- Publish results to [MQTT topics](#mqtt-1)\n- [REST API](#api) can be invoked by other applications\n- Disable detection based on a [schedule](#schedule)\n- [Home Assistant Add-ons](https://github.com/jakowenko/double-take-hassio-addons)\n- Preprocess images with [OpenCV](https://docs.opencv.org/4.6.0/d1/de5/classcv_1_1CascadeClassifier.html)\n\n### Supported Architecture\n\n- amd64\n- arm64\n- arm/v7\n\n### Supported Detectors\n\n- [CompreFace](https://github.com/exadel-inc/CompreFace)\n- [Amazon Rekognition](https://aws.amazon.com/rekognition)\n- [DeepStack](https://deepstack.cc)\n- [Facebox](https://machinebox.io)\n\n### Supported NVRs\n\n- [Frigate](https://github.com/blakeblackshear/frigate)\n\n## Integrations\n\n### [Frigate](https://github.com/blakeblackshear/frigate)\n\nSubscribe to Frigate's MQTT topics and process images for analysis.\n\n```yaml\nmqtt:\n  host: localhost\n\nfrigate:\n  url: http://localhost:5000\n```\n\nWhen the `frigate/events` topic is updated, the API begins to process the [`snapshot.jpg`](https://blakeblackshear.github.io/frigate/usage/api/#apieventsidsnapshotjpg) and [`latest.jpg`](https://blakeblackshear.github.io/frigate/usage/api/#apicamera_namelatestjpgh300) images from Frigate's API. 
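This loop (fetch an image, run the configured detectors, retry until a match is found or the attempts are exhausted) can be sketched as follows; the helper names `fetchImage` and `runDetectors` are hypothetical stand-ins, not Double Take's actual code:

```javascript
// Simplified sketch of the detection retry loop (hypothetical names).
// fetchImage stands in for requesting latest.jpg/snapshot.jpg from Frigate;
// runDetectors stands in for submitting the image to each configured detector.
async function processEvent(fetchImage, runDetectors, maxAttempts) {
  for (let attempt = 1; attempt <= maxAttempts; attempt += 1) {
    const image = await fetchImage();
    const results = await runDetectors(image);
    const match = results.find((result) => result.match);
    if (match) {
      return { attempt, match }; // stop as soon as a match is found
    }
  }
  return { attempt: maxAttempts, match: null }; // retries exhausted
}
```

With `stop_on_match: false` (see the `frigate` configuration section), the real loop instead processes every attempt before determining the best match.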
These images are passed from the API to the configured detector(s) until a match is found that meets the configured requirements. To improve the chances of finding a match, the processing of the images will repeat until the number of retries is exhausted or a match is found.\n\nWhen the `frigate/+/person/snapshot` topic is updated, the API will process that image with the configured detector(s). It is recommended to increase the MQTT snapshot size in the [Frigate camera config](https://docs.frigate.video/configuration/index).\n\n```yaml\ncameras:\n  front-door:\n    mqtt:\n      timestamp: False\n      bounding_box: False\n      crop: True\n      quality: 100\n      height: 500\n```\n\nIf a match is found, the image is saved to `/.storage/matches/\u003cfilename\u003e`.\n\n### [Home Assistant](https://www.home-assistant.io)\n\nTrigger automations / notifications when images are processed.\n\nIf the MQTT integration is configured within Home Assistant, then sensors will automatically be created.\n\n#### Notification Automation\n\nThis notification will work for both matches and unknown results. 
The message can be customized with any of the attributes from the entity.\n\n```yaml\nalias: Notify\ntrigger:\n  - platform: state\n    entity_id: sensor.double_take_david\n  - platform: state\n    entity_id: sensor.double_take_unknown\ncondition:\n  - condition: template\n    value_template: '{{ trigger.to_state.state != trigger.from_state.state }}'\naction:\n  - service: notify.mobile_app\n    data:\n      message: |-\n        {% if trigger.to_state.attributes.match is defined %}\n          {{trigger.to_state.attributes.friendly_name}} is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.match.confidence}}% by {{trigger.to_state.attributes.match.detector}}:{{trigger.to_state.attributes.match.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec\n        {% elif trigger.to_state.attributes.unknown is defined %}\n          unknown is near the {{trigger.to_state.state}} @ {{trigger.to_state.attributes.unknown.confidence}}% by {{trigger.to_state.attributes.unknown.detector}}:{{trigger.to_state.attributes.unknown.type}} taking {{trigger.to_state.attributes.attempts}} attempt(s) @ {{trigger.to_state.attributes.duration}} sec\n        {% endif %}\n      data:\n        attachment:\n          url: |-\n            {% if trigger.to_state.attributes.match is defined %}\n              http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true\u0026token={{trigger.to_state.attributes.token}}\n            {% elif trigger.to_state.attributes.unknown is defined %}\n               http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true\u0026token={{trigger.to_state.attributes.token}}\n            {% endif %}\n        actions:\n          - action: URI\n            title: View Image\n            uri: |-\n              {% if trigger.to_state.attributes.match is defined %}\n                
http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.match.filename}}?box=true\u0026token={{trigger.to_state.attributes.token}}\n              {% elif trigger.to_state.attributes.unknown is defined %}\n                 http://localhost:3000/api/storage/matches/{{trigger.to_state.attributes.unknown.filename}}?box=true\u0026token={{trigger.to_state.attributes.token}}\n              {% endif %}\nmode: parallel\nmax: 10\n```\n\n### MQTT\n\nPublish results to `double-take/matches/\u003cname\u003e` and `double-take/cameras/\u003ccamera\u003e`. The number of results will also be published to `double-take/cameras/\u003ccamera\u003e/person` and will reset back to `0` after 30 seconds.\n\nErrors from the API will be published to `double-take/errors`.\n\n```yaml\nmqtt:\n  host: localhost\n```\n\n#### double-take/matches/david\n\n```json\n{\n  \"id\": \"1623906078.684285-5l9hw6\",\n  \"duration\": 1.26,\n  \"timestamp\": \"2021-06-17T05:01:36.030Z\",\n  \"attempts\": 3,\n  \"camera\": \"living-room\",\n  \"zones\": [],\n  \"match\": {\n    \"name\": \"david\",\n    \"confidence\": 66.07,\n    \"match\": true,\n    \"box\": { \"top\": 308, \"left\": 1018, \"width\": 164, \"height\": 177 },\n    \"type\": \"latest\",\n    \"duration\": 0.28,\n    \"detector\": \"compreface\",\n    \"filename\": \"2f07d1ad-9252-43fd-9233-2786a36a15a9.jpg\",\n    \"base64\": null\n  }\n}\n```\n\n#### double-take/cameras/back-door\n\n```json\n{\n  \"id\": \"ff894ff3-2215-4cea-befa-43fe00898b65\",\n  \"duration\": 4.25,\n  \"timestamp\": \"2021-06-17T03:19:55.695Z\",\n  \"attempts\": 5,\n  \"camera\": \"back-door\",\n  \"zones\": [],\n  \"matches\": [\n    {\n      \"name\": \"david\",\n      \"confidence\": 100,\n      \"match\": true,\n      \"box\": { \"top\": 286, \"left\": 744, \"width\": 319, \"height\": 397 },\n      \"type\": \"manual\",\n      \"duration\": 0.8,\n      \"detector\": \"compreface\",\n      \"filename\": \"dcb772de-d8e8-4074-9bce-15dbba5955c5.jpg\",\n      
\"base64\": null\n    }\n  ],\n  \"misses\": [],\n  \"unknowns\": [],\n  \"counts\": { \"person\": 1, \"match\": 1, \"miss\": 0, \"unknown\": 0 }\n}\n```\n\n## Notify Services\n\n### [Gotify](https://gotify.net)\n\n```yaml\nnotify:\n  gotify:\n    url: http://localhost:8080\n    token:\n```\n\n## API Images\n\nMatch images are saved to `/.storage/matches` and can be accessed via `http://localhost:3000/api/storage/matches/\u003cfilename\u003e`.\n\nTraining images are saved to `/.storage/train` and can be accessed via `http://localhost:3000/api/storage/train/\u003cname\u003e/\u003cfilename\u003e`.\n\nLatest images are saved to `/.storage/latest` and can be accessed via `http://localhost:3000/api/storage/latest/\u003cname|camera\u003e.jpg`.\n\n| Query Parameters | Description                    | Default |\n| ---------------- | ------------------------------ | ------- |\n| `box`            | Show bounding box around faces | `false` |\n| `token`          | Access token                   |         |\n\n## UI\n\nThe UI is accessible via `http://localhost:3000`.\n\n- Matches: `/`\n- Train: `/train`\n- Config: `/config`\n- Access Tokens: `/tokens` (_if authentication is enabled_)\n\n## Authentication\n\nEnable authentication to password protect the UI. This is recommended if running Double Take behind a reverse proxy which is exposed to the internet.\n\n```yaml\nauth: true\n```\n\n## API\n\nDocumentation can be viewed on [Postman](https://documenter.getpostman.com/view/1013188/TzsWuAa8).\n\n## Usage\n\n### Docker Compose\n\n```yaml\nversion: '3.7'\n\nvolumes:\n  double-take:\n\nservices:\n  double-take:\n    container_name: double-take\n    image: jakowenko/double-take\n    restart: unless-stopped\n    volumes:\n      - double-take:/.storage\n    ports:\n      - 3000:3000\n```\n\n## Configuration\n\nConfigurable options are saved to `/.storage/config/config.yml` and are editable via the UI at `http://localhost:3000/config`. 
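Conceptually, the values you set are layered over the defaults documented in the sections below. A minimal sketch of that deep-merge behavior (illustrative only, not Double Take's actual implementation):

```javascript
// Illustrative deep merge of user config over defaults (not Double Take's
// actual implementation): nested objects merge, scalars and arrays replace.
function mergeConfig(defaults, overrides) {
  const merged = { ...defaults };
  for (const key of Object.keys(overrides)) {
    const base = defaults[key];
    const value = overrides[key];
    if (base && value && typeof base === 'object' && typeof value === 'object' && !Array.isArray(value)) {
      merged[key] = mergeConfig(base, value);
    } else {
      merged[key] = value;
    }
  }
  return merged;
}

// Overriding only detect.match.confidence keeps the default min_area.
const defaults = { detect: { match: { confidence: 60, min_area: 10000 } } };
const overrides = { detect: { match: { confidence: 75 } } };
const merged = mergeConfig(defaults, overrides);
```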
_Default values do not need to be specified in configuration unless they need to be overwritten._\n\n### `auth`\n\n```yaml\n# enable authentication for ui and api (default: shown below)\nauth: false\n```\n\n### `token`\n\n```yaml\n# if authentication is enabled\n# age of access token in api response and mqtt topics (default: shown below)\n# expressed in seconds or a string describing a time span zeit/ms\n# https://github.com/vercel/ms\ntoken:\n  image: 24h\n```\n\n### `mqtt`\n\n```yaml\n# enable mqtt subscribing and publishing (default: shown below)\nmqtt:\n  host:\n  username:\n  password:\n  client_id:\n\n  tls:\n    # cert chains in PEM format: /path/to/client.crt\n    cert:\n    # private keys in PEM format: /path/to/client.key\n    key:\n    # optionally override the trusted CA certificates: /path/to/ca.crt\n    ca:\n    # if true the server will reject any connection which is not authorized with the list of supplied CAs\n    reject_unauthorized: false\n\n  topics:\n    # mqtt topic for frigate message subscription\n    frigate: frigate/events\n    #  mqtt topic for home assistant discovery subscription\n    homeassistant: homeassistant\n    # mqtt topic where matches are published by name\n    matches: double-take/matches\n    # mqtt topic where matches are published by camera name\n    cameras: double-take/cameras\n```\n\n### `detect`\n\n```yaml\n# global detect settings (default: shown below)\ndetect:\n  match:\n    # save match images\n    save: true\n    # include base64 encoded string in api results and mqtt messages\n    # options: true, false, box\n    base64: false\n    # minimum confidence needed to consider a result a match\n    confidence: 60\n    # hours to keep match images until they are deleted\n    purge: 168\n    # minimum area in pixels to consider a result a match\n    min_area: 10000\n\n  unknown:\n    # save unknown images\n    save: true\n    # include base64 encoded string in api results and mqtt messages\n    # options: true, false, 
box\n    base64: false\n    # minimum confidence needed before classifying a name as unknown\n    confidence: 40\n    # hours to keep unknown images until they are deleted\n    purge: 8\n    # minimum area in pixels to keep an unknown result\n    min_area: 0\n```\n\n### `frigate`\n\n```yaml\n# frigate settings (default: shown below)\nfrigate:\n  url:\n\n  # if double take should send matches back to frigate as a sub label\n  # NOTE: requires frigate 0.11.0+\n  update_sub_labels: false\n\n  # stop the processing loop if a match is found\n  # if set to false all image attempts will be processed before determining the best match\n  stop_on_match: true\n\n  # ignore detected areas so small that face recognition would be difficult\n  # quadrupling the min_area of the detector is a good start\n  # does not apply to MQTT events\n  min_area: 0\n\n  # object labels that are allowed for facial recognition\n  labels:\n    - person\n\n  attempts:\n    # number of times double take will request a frigate latest.jpg for facial recognition\n    latest: 10\n    # number of times double take will request a frigate snapshot.jpg for facial recognition\n    snapshot: 10\n    # process frigate images from frigate/+/person/snapshot topics\n    mqtt: true\n    # add a delay expressed in seconds between each detection loop\n    delay: 0\n\n  image:\n    # height of frigate image passed for facial recognition\n    height: 500\n\n  # only process images from specific cameras\n  cameras:\n    # - front-door\n    # - garage\n\n  # only process images from specific zones\n  zones:\n    # - camera: garage\n    #   zone: driveway\n\n  # override frigate attempts and image per camera\n  events:\n    # front-door:\n    #   attempts:\n    #     # number of times double take will request a frigate latest.jpg for facial recognition\n    #     latest: 5\n    #     # number of times double take will request a frigate snapshot.jpg for facial recognition\n    #     snapshot: 5\n    #     # process 
frigate images from frigate/\u003ccamera-name\u003e/person/snapshot topic\n    #     mqtt: false\n    #     # add a delay expressed in seconds between each detection loop\n    #     delay: 1\n\n    #   image:\n    #     # height of frigate image passed for facial recognition (only if using default latest.jpg and snapshot.jpg)\n    #     height: 1000\n    #     # custom image that will be used in place of latest.jpg\n    #     latest: http://camera-url.com/image.jpg\n    #     # custom image that will be used in place of snapshot.jpg\n    #     snapshot: http://camera-url.com/image.jpg\n```\n\n### `cameras`\n\n```yaml\n# camera settings (default: shown below)\ncameras:\n  front-door:\n    # apply masks before processing image\n    # masks:\n    #   # list of x,y coordinates to define the polygon of the zone\n    #   coordinates:\n    #     - 1920,0,1920,328,1638,305,1646,0\n    #   # show the mask on the final saved image (helpful for debugging)\n    #   visible: false\n    #   # size of camera stream used in resizing masks\n    #   size: 1920x1080\n\n    # override global detect variables per camera\n    # detect:\n    #   match:\n    #     # save match images\n    #     save: true\n    #     # include base64 encoded string in api results and mqtt messages\n    #     # options: true, false, box\n    #     base64: false\n    #     # minimum confidence needed to consider a result a match\n    #     confidence: 60\n    #     # minimum area in pixels to consider a result a match\n    #     min_area: 10000\n\n    #   unknown:\n    #     # save unknown images\n    #     save: true\n    #     # include base64 encoded string in api results and mqtt messages\n    #     # options: true, false, box\n    #     base64: false\n    #     # minimum confidence needed before classifying a match name as unknown\n    #     confidence: 40\n    #     # minimum area in pixels to keep an unknown result\n    #     min_area: 0\n\n    # snapshot:\n    #   # process any jpeg encoded mqtt 
topic for facial recognition\n    #   topic:\n    #   # process any http image for facial recognition\n    #   url:\n```\n\n### `detectors`\n\n```yaml\n# detector settings (default: shown below)\ndetectors:\n  compreface:\n    url:\n    # recognition api key\n    key:\n    # number of seconds before the request times out and is aborted\n    timeout: 15\n    # minimum required confidence that a recognized face is actually a face\n    # value is between 0.0 and 1.0\n    det_prob_threshold: 0.8\n    # require opencv to find a face before processing with detector\n    opencv_face_required: false\n    # comma-separated slugs of face plugins\n    # https://github.com/exadel-inc/CompreFace/blob/master/docs/Face-services-and-plugins.md\n    # face_plugins: mask,gender,age\n    # only process images from specific cameras, if omitted then all cameras will be processed\n    # cameras:\n    #   - front-door\n    #   - garage\n\n  rekognition:\n    aws_access_key_id: !secret aws_access_key_id\n    aws_secret_access_key: !secret aws_secret_access_key\n    aws_region:\n    collection_id: double-take\n    # require opencv to find a face before processing with detector\n    opencv_face_required: true\n    # only process images from specific cameras, if omitted then all cameras will be processed\n    # cameras:\n    #   - front-door\n    #   - garage\n\n  deepstack:\n    url:\n    key:\n    # number of seconds before the request times out and is aborted\n    timeout: 15\n    # require opencv to find a face before processing with detector\n    opencv_face_required: false\n    # only process images from specific cameras, if omitted then all cameras will be processed\n    # cameras:\n    #   - front-door\n    #   - garage\n\n  facebox:\n    url:\n    # number of seconds before the request times out and is aborted\n    timeout: 15\n    # require opencv to find a face before processing with detector\n    opencv_face_required: false\n    # only process images from specific cameras, if 
omitted then all cameras will be processed\n    # cameras:\n    #   - front-door\n    #   - garage\n```\n\n### `opencv`\n\n```yaml\n# opencv settings (default: shown below)\n# docs: https://docs.opencv.org/4.6.0/d1/de5/classcv_1_1CascadeClassifier.html\nopencv:\n  scale_factor: 1.05\n  min_neighbors: 4.5\n  min_size_width: 30\n  min_size_height: 30\n```\n\n### `schedule`\n\n```yaml\n# schedule settings (default: shown below)\nschedule:\n  # disable recognition if conditions are met\n  disable:\n    # - days:\n    #     - monday\n    #     - tuesday\n    #   times:\n    #     - 20:00-23:59\n    #   cameras:\n    #     - office\n    # - days:\n    #     - tuesday\n    #     - wednesday\n    #   times:\n    #     - 13:00-15:00\n    #     - 18:00-20:00\n    #   cameras:\n    #     - living-room\n```\n\n### `notify`\n\n```yaml\n# notify settings (default: shown below)\nnotify:\n  gotify:\n    url:\n    token:\n    priority: 5\n\n    # only notify from specific cameras\n    # cameras:\n    #   - front-door\n    #   - garage\n\n    # only notify from specific zones\n    # zones:\n    #   - camera: garage\n    #     zone: driveway\n```\n\n### `time`\n\n```yaml\n# time settings (default: shown below)\ntime:\n  # defaults to iso 8601 format with support for token-based formatting\n  # https://github.com/moment/luxon/blob/master/docs/formatting.md#table-of-tokens\n  format:\n  # time zone used in logs\n  timezone: UTC\n```\n\n### `logs`\n\n```yaml\n# log settings (default: shown below)\n# options: silent, error, warn, info, http, verbose, debug, silly\nlogs:\n  level: info\n```\n\n### `ui`\n\n```yaml\n# ui settings (default: shown below)\nui:\n  # base path of ui\n  path:\n\n  pagination:\n    # number of results per page\n    limit: 50\n\n  thumbnails:\n    # value between 0-100\n    quality: 95\n    # value in pixels\n    width: 500\n\n  logs:\n    # number of lines displayed\n    lines: 500\n```\n\n### `telemetry`\n\n```yaml\n# telemetry settings (default: shown below)\n# 
self-hosted version of plausible.io\n# 100% anonymous, used to help improve the project\n# no cookies and fully compliant with GDPR, CCPA and PECR\ntelemetry: true\n```\n\n## Storing Secrets\n\n**Note:** If using one of the [Home Assistant Add-ons](https://github.com/jakowenko/double-take-hassio-addons), then the default Home Assistant `/config/secrets.yaml` file is used.\n\n```yaml\nmqtt:\n  host: localhost\n  username: mqtt\n  password: !secret mqtt_password\n\ndetectors:\n  compreface:\n    url: localhost:8000\n    key: !secret compreface_key\n```\n\nThe `secrets.yml` file maps each identifier to its corresponding value.\n\n```yaml\nmqtt_password: \u003cpassword\u003e\ncompreface_key: \u003capi-key\u003e\n```\n\n## Development\n\n### Run Local Containers\n\n| Service | Address          |\n| ------- | ---------------- |\n| UI      | `localhost:8080` |\n| API     | `localhost:3000` |\n| MQTT    | `localhost:1883` |\n\n```bash\n# start development containers\n./.develop/docker up\n\n# remove development containers\n./.develop/docker down\n```\n\n### Build Local Image\n\n```bash\n./.develop/build\n```\n\n## Donations\n\nIf you would like to make a donation to support development, please use [GitHub Sponsors](https://github.com/sponsors/jakowenko).\n","funding_links":["https://github.com/sponsors/jakowenko"],"categories":["JavaScript"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjakowenko%2Fdouble-take","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjakowenko%2Fdouble-take","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjakowenko%2Fdouble-take/lists"}