{"id":13582603,"url":"https://github.com/vladmandic/human","last_synced_at":"2025-05-13T00:33:59.723Z","repository":{"id":37236230,"uuid":"303229610","full_name":"vladmandic/human","owner":"vladmandic","description":"Human: AI-powered 3D Face Detection \u0026 Rotation Tracking, Face Description \u0026 Recognition, Body Pose Tracking, 3D Hand \u0026 Finger Tracking, Iris Analysis, Age \u0026 Gender \u0026 Emotion Prediction, Gaze Tracking, Gesture Recognition","archived":false,"fork":false,"pushed_at":"2025-02-05T15:11:20.000Z","size":595438,"stargazers_count":2585,"open_issues_count":3,"forks_count":342,"subscribers_count":45,"default_branch":"main","last_synced_at":"2025-04-23T18:52:54.674Z","etag":null,"topics":["age-estimation","body-segmentation","body-tracking","emotion-detection","face-detection","face-matching","face-mesh","face-position","face-recognition","faceid","gaze-tracking","gender-prediction","gesture-recognition","hand-tracking","iris-tracking","tensorflowjs","tfjs"],"latest_commit_sha":null,"homepage":"https://vladmandic.github.io/human/demo/index.html","language":"HTML","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/vladmandic.png","metadata":{"files":{"readme":"README.md","changelog":"CHANGELOG.md","contributing":"CONTRIBUTING","funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null},"funding":{"github":["vladmandic"],"patreon":null,"open_collective":null,"ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"otechie":null,"lfx_crowdfunding":null,"custom":null}},"created_at":"2020-10-11T23:14:19.000Z","updated_at":"2025-04-23T01:09:18.000Z","depe
ndencies_parsed_at":"2022-07-13T18:20:28.836Z","dependency_job_id":"9b84ed23-68b9-401b-a229-3140dd3a6115","html_url":"https://github.com/vladmandic/human","commit_stats":{"total_commits":1258,"total_committers":9,"mean_commits":"139.77777777777777","dds":"0.011128775834658211","last_synced_commit":"745fd626a391611709371c4b9be17fc905244630"},"previous_names":[],"tags_count":22,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vladmandic%2Fhuman","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vladmandic%2Fhuman/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vladmandic%2Fhuman/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/vladmandic%2Fhuman/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/vladmandic","download_url":"https://codeload.github.com/vladmandic/human/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253850289,"owners_count":21973661,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["age-estimation","body-segmentation","body-tracking","emotion-detection","face-detection","face-matching","face-mesh","face-position","face-recognition","faceid","gaze-tracking","gender-prediction","gesture-recognition","hand-tracking","iris-tracking","tensorflowjs","tfjs"],"created_at":"2024-08-01T15:02:52.392Z","updated_at":"2025-05-13T00:33:59.665Z","avatar_url":"https://github.com/vladmandic.png","language":"HTML","readme":"[![](https:
//img.shields.io/static/v1?label=Sponsor\u0026message=%E2%9D%A4\u0026logo=GitHub\u0026color=%23fe8e86)](https://github.com/sponsors/vladmandic)\n![Git Version](https://img.shields.io/github/package-json/v/vladmandic/human?style=flat-square\u0026svg=true\u0026label=git)\n![NPM Version](https://img.shields.io/npm/v/@vladmandic/human.png?style=flat-square)\n![Last Commit](https://img.shields.io/github/last-commit/vladmandic/human?style=flat-square\u0026svg=true)\n![License](https://img.shields.io/github/license/vladmandic/human?style=flat-square\u0026svg=true)\n![GitHub Status Checks](https://img.shields.io/github/checks-status/vladmandic/human/main?style=flat-square\u0026svg=true)\n\n# Human Library\n\n**AI-powered 3D Face Detection \u0026 Rotation Tracking, Face Description \u0026 Recognition,**  \n**Body Pose Tracking, 3D Hand \u0026 Finger Tracking, Iris Analysis,**  \n**Age \u0026 Gender \u0026 Emotion Prediction, Gaze Tracking, Gesture Recognition, Body Segmentation**  \n\n\u003cbr\u003e\n\n## Highlights\n\n- Compatible with most server-side and client-side environments and frameworks  \n- Combines multiple machine learning models which can be switched on-demand depending on the use-case  \n- Related models are executed in an attention pipeline to provide details when needed  \n- Optimized input pre-processing that can enhance image quality of any type of inputs  \n- Detection of frame changes to trigger only required models for improved performance  \n- Intelligent temporal interpolation to provide smooth results regardless of processing performance  \n- Simple unified API  \n- Built-in Image, Video and WebCam handling\n\n[*Jump to Quick Start*](#quick-start)\n\n\u003cbr\u003e\n\n## Compatibility\n\n**Browser**:  \n  - Compatible with both desktop and mobile platforms  \n  - Compatible with *WebGPU*, *WebGL*, *WASM*, *CPU* backends  \n  - Compatible with *WebWorker* execution  \n  - Compatible with *WebView*  \n  - Primary platform: *Chromium*-based browsers  
\n  - Secondary platform: *Firefox*, *Safari*\n\n**NodeJS**:  \n  - Compatible with *WASM* backend for executions on architectures where *tensorflow* binaries are not available  \n  - Compatible with *tfjs-node* using software execution via *tensorflow* shared libraries  \n  - Compatible with *tfjs-node* using GPU-accelerated execution via *tensorflow* shared libraries and nVidia CUDA  \n  - Supported versions are from **14.x** to **22.x**  \n  - NodeJS version **23.x** is not supported due to breaking changes and issues with `@tensorflow/tfjs`  \n\n\u003cbr\u003e\n\n## Releases\n- [Release Notes](https://github.com/vladmandic/human/releases)\n- [NPM Link](https://www.npmjs.com/package/@vladmandic/human)\n\n## Demos\n\n*Check out [**Simple Live Demo**](https://vladmandic.github.io/human/demo/typescript/index.html) fully annotated app as a good starting point ([html](https://github.com/vladmandic/human/blob/main/demo/typescript/index.html))([code](https://github.com/vladmandic/human/blob/main/demo/typescript/index.ts))*  \n\n*Check out [**Main Live Demo**](https://vladmandic.github.io/human/demo/index.html) app for advanced processing of webcam, video stream, or static images with all possible tunable options*  \n\n- To start video detection, simply press *Play*  \n- To process images, simply drag \u0026 drop in your Browser window  \n- Note: For optimal performance, select only models you'd like to use\n- Note: If you have a modern GPU, *WebGL* (default) backend is preferred, otherwise select *WASM* backend\n\n\u003cbr\u003e\n\n\n- [**List of all Demo applications**](https://github.com/vladmandic/human/wiki/Demos)\n- [**Live Examples gallery**](https://vladmandic.github.io/human/samples/index.html)\n\n### Browser Demos\n\n*All browser demos are self-contained without any external dependencies*\n\n- **Full** [[*Live*]](https://vladmandic.github.io/human/demo/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo): Main browser demo 
app that showcases all Human capabilities\n- **Simple** [[*Live*]](https://vladmandic.github.io/human/demo/typescript/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/typescript): Simple WebCam processing demo in TypeScript\n- **Embedded** [[*Live*]](https://vladmandic.github.io/human/demo/video/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/video/index.html): Even simpler demo with tiny code embedded in HTML file\n- **Face Detect** [[*Live*]](https://vladmandic.github.io/human/demo/facedetect/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facedetect): Extracts faces from images and processes details\n- **Face Match** [[*Live*]](https://vladmandic.github.io/human/demo/facematch/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch): Extracts faces from images, calculates face descriptors and similarities, and matches them to a known database\n- **Face ID** [[*Live*]](https://vladmandic.github.io/human/demo/faceid/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/faceid): Runs multiple checks to validate webcam input before performing face match to faces in IndexedDB\n- **Multi-thread** [[*Live*]](https://vladmandic.github.io/human/demo/multithread/index.html) [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread): Runs each Human module in a separate web worker for highest possible performance  \n- **NextJS** [[*Live*]](https://vladmandic.github.io/human-next/out/index.html) [[*Details*]](https://github.com/vladmandic/human-next): Use Human with TypeScript, NextJS and ReactJS\n- **ElectronJS** [[*Details*]](https://github.com/vladmandic/human-electron): Use Human with TypeScript and ElectronJS to create standalone cross-platform apps\n- **3D Analysis with BabylonJS** [[*Live*]](https://vladmandic.github.io/human-motion/src/index.html) [[*Details*]](https://github.com/vladmandic/human-motion): 3D 
tracking and visualization of head, face, eye, body and hand\n- **VRM Virtual Model Tracking with Three.JS** [[*Live*]](https://vladmandic.github.io/human-three-vrm/src/human-vrm.html) [[*Details*]](https://github.com/vladmandic/human-three-vrm): VR model with head, face, eye, body and hand tracking  \n- **VRM Virtual Model Tracking with BabylonJS** [[*Live*]](https://vladmandic.github.io/human-bjs-vrm/src/index.html) [[*Details*]](https://github.com/vladmandic/human-bjs-vrm): VR model with head, face, eye, body and hand tracking  \n\n### NodeJS Demos\n\n*NodeJS demos may require extra dependencies which are used to decode inputs*  \n*See the header of each demo for its dependencies, as they are not automatically installed with `Human`*\n\n- **Main** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node.js): Process images from files, folders or URLs using native methods  \n- **Canvas** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-canvas.js): Process image from file or URL and draw results to a new image file using `node-canvas`  \n- **Video** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-video.js): Processing of video input using `ffmpeg`  \n- **WebCam** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-webcam.js): Processing of webcam screenshots using `fswebcam`  \n- **Events** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-event.js): Showcases usage of `Human` eventing to get notifications on processing\n- **Similarity** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs/node-similarity.js): Compares two input images for similarity of detected faces\n- **Face Match** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/facematch/node-match.js): Parallel processing of face **match** in multiple child worker threads\n- **Multiple Workers** 
[[*Details*]](https://github.com/vladmandic/human/tree/main/demo/multithread/node-multiprocess.js): Runs multiple parallel `human` instances by dispatching them to a pool of pre-created worker processes  \n- **Dynamic Load** [[*Details*]](https://github.com/vladmandic/human/tree/main/demo/nodejs): Loads Human dynamically with multiple different desired backends  \n\n## Project pages\n\n- [**Code Repository**](https://github.com/vladmandic/human)\n- [**NPM Package**](https://www.npmjs.com/package/@vladmandic/human)\n- [**Issues Tracker**](https://github.com/vladmandic/human/issues)\n- [**TypeDoc API Specification - Main class**](https://vladmandic.github.io/human/typedoc/classes/Human.html)\n- [**TypeDoc API Specification - Full**](https://vladmandic.github.io/human/typedoc/)\n- [**Change Log**](https://github.com/vladmandic/human/blob/main/CHANGELOG.md)\n- [**Current To-do List**](https://github.com/vladmandic/human/blob/main/TODO.md)\n\n## Wiki pages\n\n- [**Home**](https://github.com/vladmandic/human/wiki)\n- [**Installation**](https://github.com/vladmandic/human/wiki/Install)\n- [**Usage \u0026 Functions**](https://github.com/vladmandic/human/wiki/Usage)\n- [**Configuration Details**](https://github.com/vladmandic/human/wiki/Config)\n- [**Result Details**](https://github.com/vladmandic/human/wiki/Result)\n- [**Customizing Draw Methods**](https://github.com/vladmandic/human/wiki/Draw)\n- [**Caching \u0026 Smoothing**](https://github.com/vladmandic/human/wiki/Caching)\n- [**Input Processing**](https://github.com/vladmandic/human/wiki/Image)\n- [**Face Recognition \u0026 Face Description**](https://github.com/vladmandic/human/wiki/Embedding)\n- [**Gesture Recognition**](https://github.com/vladmandic/human/wiki/Gesture)\n- [**Common Issues**](https://github.com/vladmandic/human/wiki/Issues)\n- [**Background and Benchmarks**](https://github.com/vladmandic/human/wiki/Background)\n\n## Additional notes\n\n- [**Comparing 
Backends**](https://github.com/vladmandic/human/wiki/Backends)\n- [**Development Server**](https://github.com/vladmandic/human/wiki/Development-Server)\n- [**Build Process**](https://github.com/vladmandic/human/wiki/Build-Process)\n- [**Adding Custom Modules**](https://github.com/vladmandic/human/wiki/Module)\n- [**Performance Notes**](https://github.com/vladmandic/human/wiki/Performance)\n- [**Performance Profiling**](https://github.com/vladmandic/human/wiki/Profiling)\n- [**Platform Support**](https://github.com/vladmandic/human/wiki/Platforms)\n- [**Diagnostic and Performance trace information**](https://github.com/vladmandic/human/wiki/Diag)\n- [**Dockerize Human applications**](https://github.com/vladmandic/human/wiki/Docker)\n- [**List of Models \u0026 Credits**](https://github.com/vladmandic/human/wiki/Models)\n- [**Models Download Repository**](https://github.com/vladmandic/human-models)\n- [**Security \u0026 Privacy Policy**](https://github.com/vladmandic/human/blob/main/SECURITY.md)\n- [**License \u0026 Usage Restrictions**](https://github.com/vladmandic/human/blob/main/LICENSE)\n\n\u003cbr\u003e\n\n*See [**issues**](https://github.com/vladmandic/human/issues?q=) and [**discussions**](https://github.com/vladmandic/human/discussions) for a list of known limitations and planned enhancements*  \n\n*Suggestions are welcome!*  \n\n\u003chr\u003e\u003cbr\u003e\n\n## App Examples\n\nVisit [Examples gallery](https://vladmandic.github.io/human/samples/index.html) for more examples  \n[\u003cimg src=\"assets/samples.jpg\" width=\"640\"/\u003e](assets/samples.jpg)\n\n\u003cbr\u003e\n\n## Options\n\nAll options as presented in the demo application...  
\n[demo/index.html](demo/index.html)  \n[\u003cimg src=\"assets/screenshot-menu.png\"/\u003e](assets/screenshot-menu.png)\n\n\u003cbr\u003e\n\n**Results Browser:**  \n[ *Demo -\u003e Display -\u003e Show Results* ]\u003cbr\u003e\n[\u003cimg src=\"assets/screenshot-results.png\"/\u003e](assets/screenshot-results.png)\n\n\u003cbr\u003e\n\n## Advanced Examples\n\n1. **Face Similarity Matching:**  \nExtracts all faces from provided input images,  \nsorts them by similarity to selected face  \nand optionally matches detected face with database of known people to guess their names\n\u003e [demo/facematch](demo/facematch/index.html)  \n\n[\u003cimg src=\"assets/screenshot-facematch.jpg\" width=\"640\"/\u003e](assets/screenshot-facematch.jpg)\n\n2. **Face Detect:**  \nExtracts all detected faces from loaded images on-demand and highlights face details on a selected face  \n\u003e [demo/facedetect](demo/facedetect/index.html)  \n\n[\u003cimg src=\"assets/screenshot-facedetect.jpg\" width=\"640\"/\u003e](assets/screenshot-facedetect.jpg)\n\n3. **Face ID:**  \nPerforms validation check on a webcam input to detect a real face and matches it to known faces stored in database\n\u003e [demo/faceid](demo/faceid/index.html)  \n\n[\u003cimg src=\"assets/screenshot-faceid.jpg\" width=\"640\"/\u003e](assets/screenshot-faceid.jpg)\n\n\u003cbr\u003e\n\n4. 
**3D Rendering:**  \n\u003e [human-motion](https://github.com/vladmandic/human-motion)\n\n[\u003cimg src=\"https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-face.jpg\" width=\"640\"/\u003e](https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-face.jpg)\n[\u003cimg src=\"https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-body.jpg\" width=\"640\"/\u003e](https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-body.jpg)\n[\u003cimg src=\"https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-hand.jpg\" width=\"640\"/\u003e](https://github.com/vladmandic/human-motion/raw/main/assets/screenshot-hand.jpg)\n\n\u003cbr\u003e\n\n5. **VR Model Tracking:**  \n\u003e [human-three-vrm](https://github.com/vladmandic/human-three-vrm)  \n\u003e [human-bjs-vrm](https://github.com/vladmandic/human-bjs-vrm)  \n\n[\u003cimg src=\"https://github.com/vladmandic/human-three-vrm/raw/main/assets/human-vrm-screenshot.jpg\" width=\"640\"/\u003e](https://github.com/vladmandic/human-three-vrm/raw/main/assets/human-vrm-screenshot.jpg)\n\n\n6. 
**Human as OS native application:**\n\u003e [human-electron](https://github.com/vladmandic/human-electron)\n\n\u003cbr\u003e\n\n**468-Point Face Mesh Details:**  \n(view in full resolution to see keypoints)  \n\n[\u003cimg src=\"assets/facemesh.png\" width=\"400\"/\u003e](assets/facemesh.png)\n\n\u003cbr\u003e\u003chr\u003e\u003cbr\u003e\n\n## Quick Start\n\nSimply load `Human` (*IIFE version*) directly from a cloud CDN in your HTML file:  \n(pick one: `jsdelivr`, `unpkg` or `cdnjs`)\n\n```html\n\u003c!DOCTYPE HTML\u003e\n\u003cscript src=\"https://cdn.jsdelivr.net/npm/@vladmandic/human/dist/human.js\"\u003e\u003c/script\u003e\n\u003cscript src=\"https://unpkg.dev/@vladmandic/human/dist/human.js\"\u003e\u003c/script\u003e\n\u003cscript src=\"https://cdnjs.cloudflare.com/ajax/libs/human/3.0.0/human.js\"\u003e\u003c/script\u003e\n```\n\nFor details, including how to use `Browser ESM` version or `NodeJS` version of `Human`, see [**Installation**](https://github.com/vladmandic/human/wiki/Install)\n\n\u003cbr\u003e\n\n## Code Examples\n\nSimple app that uses Human to process video input and  \ndraw output on screen using internal draw helper functions\n\n```js\n// create instance of human with simple configuration using default values\nconst config = { backend: 'webgl' };\nconst human = new Human.Human(config);\n// select input HTMLVideoElement and output HTMLCanvasElement from page\nconst inputVideo = document.getElementById('video-id');\nconst outputCanvas = document.getElementById('canvas-id');\n\nfunction detectVideo() {\n  // perform processing using default configuration\n  human.detect(inputVideo).then((result) =\u003e {\n    // result object will contain detected details\n    // as well as the processed canvas itself\n    // so let's first draw processed frame on canvas\n    human.draw.canvas(result.canvas, outputCanvas);\n    // then draw results on the same canvas\n    human.draw.face(outputCanvas, result.face);\n    human.draw.body(outputCanvas, 
result.body);\n    human.draw.hand(outputCanvas, result.hand);\n    human.draw.gesture(outputCanvas, result.gesture);\n    // and loop immediately to the next frame\n    requestAnimationFrame(detectVideo);\n    return result;\n  });\n}\n\ndetectVideo();\n```\n\nor using `async/await`:\n\n```js\n// create instance of human with simple configuration using default values\nconst config = { backend: 'webgl' };\nconst human = new Human(config); // create instance of Human\nconst inputVideo = document.getElementById('video-id');\nconst outputCanvas = document.getElementById('canvas-id');\n\nasync function detectVideo() {\n  const result = await human.detect(inputVideo); // run detection\n  human.draw.all(outputCanvas, result); // draw all results\n  requestAnimationFrame(detectVideo); // run loop\n}\n\ndetectVideo(); // start loop\n```\n\nor using `Events`:\n\n```js\n// create instance of human with simple configuration using default values\nconst config = { backend: 'webgl' };\nconst human = new Human(config); // create instance of Human\nconst inputVideo = document.getElementById('video-id');\nconst outputCanvas = document.getElementById('canvas-id');\n\nhuman.events.addEventListener('detect', () =\u003e { // event gets triggered when detect is complete\n  human.draw.all(outputCanvas, human.result); // draw all results\n});\n\nfunction detectVideo() {\n  human.detect(inputVideo) // run detection\n    .then(() =\u003e requestAnimationFrame(detectVideo)); // once detection completes, start processing the next frame\n}\n\ndetectVideo(); // start loop\n```\n\nor using interpolated results for smooth video processing by separating detection and drawing loops:\n\n```js\nconst human = new Human(); // create instance of Human\nconst inputVideo = document.getElementById('video-id');\nconst outputCanvas = document.getElementById('canvas-id');\nlet result;\n\nasync function detectVideo() {\n  result = await human.detect(inputVideo); // run detection\n  
requestAnimationFrame(detectVideo); // run detect loop\n}\n\nasync function drawVideo() {\n  if (result) { // check if result is available\n    const interpolated = human.next(result); // get smoothed result using last-known results\n    human.draw.all(outputCanvas, interpolated); // draw the frame\n  }\n  requestAnimationFrame(drawVideo); // run draw loop\n}\n\ndetectVideo(); // start detection loop\ndrawVideo(); // start draw loop\n```\n\nor same, but using built-in full video processing instead of running a manual frame-by-frame loop:\n\n```js\nconst human = new Human(); // create instance of Human\nconst inputVideo = document.getElementById('video-id');\nconst outputCanvas = document.getElementById('canvas-id');\n\nasync function drawResults() {\n  const interpolated = human.next(); // get smoothed result using last-known results\n  human.draw.all(outputCanvas, interpolated); // draw the frame\n  requestAnimationFrame(drawResults); // run draw loop\n}\n\nhuman.video(inputVideo); // start detection loop which continuously updates results\ndrawResults(); // start draw loop\n```\n\nor using built-in webcam helper methods that take care of video handling completely:\n\n```js\nconst human = new Human(); // create instance of Human\nconst outputCanvas = document.getElementById('canvas-id');\n\nasync function drawResults() {\n  const interpolated = human.next(); // get smoothed result using last-known results\n  human.draw.canvas(outputCanvas, human.webcam.element); // draw current webcam frame\n  human.draw.all(outputCanvas, interpolated); // draw frame detection results\n  requestAnimationFrame(drawResults); // run draw loop\n}\n\nawait human.webcam.start({ crop: true });\nhuman.video(human.webcam.element); // start detection loop which continuously updates results\ndrawResults(); // start draw loop\n```\n\nAnd for even better results, you can run detection in a separate web worker thread\n\n\u003cbr\u003e\u003chr\u003e\u003cbr\u003e\n\n## Inputs\n\n`Human` 
library can process all known input types:  \n\n- `Image`, `ImageData`, `ImageBitmap`, `Canvas`, `OffscreenCanvas`, `Tensor`,  \n- `HTMLImageElement`, `HTMLCanvasElement`, `HTMLVideoElement`, `HTMLMediaElement`\n\nAdditionally, `HTMLVideoElement`, `HTMLMediaElement` can be a standard `\u003cvideo\u003e` tag that links to:\n\n- WebCam on user's system\n- Any supported video type  \n  e.g. `.mp4`, `.avi`, etc.\n- Additional video types supported via *HTML5 Media Source Extensions*  \n  e.g.: **HLS** (*HTTP Live Streaming*) using `hls.js` or **DASH** (*Dynamic Adaptive Streaming over HTTP*) using `dash.js`\n- **WebRTC** media track using built-in support  \n\n\u003cbr\u003e\u003chr\u003e\u003cbr\u003e\n\n## Detailed Usage\n\n- [**Wiki Home**](https://github.com/vladmandic/human/wiki)\n- [**List of all available methods, properties and namespaces**](https://github.com/vladmandic/human/wiki/Usage)\n- [**TypeDoc API Specification - Main class**](https://vladmandic.github.io/human/typedoc/classes/Human.html)\n- [**TypeDoc API Specification - Full**](https://vladmandic.github.io/human/typedoc/)\n\n    ![typedoc](assets/screenshot-typedoc.png)\n\n\u003cbr\u003e\u003chr\u003e\u003cbr\u003e\n\n## TypeDefs\n\n`Human` is written using TypeScript strong typing and ships with full **TypeDefs** for all classes defined by the library bundled in `types/human.d.ts` and enabled by default  \n\n*Note*: This does not include embedded `tfjs`  \nIf you want to use embedded `tfjs` inside `Human` (`human.tf` namespace) and still have full **typedefs**, add this code:\n\n\u003e import type * as tfjs from '@vladmandic/human/dist/tfjs.esm';  \n\u003e const tf = human.tf as typeof tfjs;\n\nThis is not enabled by default as `Human` does not ship with full **TFJS TypeDefs** due to size considerations  \nEnabling `tfjs` TypeDefs as above creates additional project (dev-only as only types are required) dependencies as defined in `@vladmandic/human/dist/tfjs.esm.d.ts`:\n\n\u003e @tensorflow/tfjs-core, 
@tensorflow/tfjs-converter, @tensorflow/tfjs-backend-wasm, @tensorflow/tfjs-backend-webgl\n\n\n\u003cbr\u003e\u003chr\u003e\u003cbr\u003e\n\n## Default models\n\nDefault models in Human library are:\n\n- **Face Detection**: *MediaPipe BlazeFace Back variation*\n- **Face Mesh**: *MediaPipe FaceMesh*\n- **Face Iris Analysis**: *MediaPipe Iris*\n- **Face Description**: *HSE FaceRes*\n- **Emotion Detection**: *Oarriaga Emotion*\n- **Body Analysis**: *MoveNet Lightning variation*\n- **Hand Analysis**: *HandTrack \u0026 MediaPipe HandLandmarks*\n- **Body Segmentation**: *Google Selfie*\n- **Object Detection**: *CenterNet with MobileNet v3*\n\nNote that alternative models are provided and can be enabled via configuration  \nFor example, body pose detection by default uses *MoveNet Lightning*, but can be switched to *MoveNet Thunder* for higher precision or *MoveNet MultiPose* for multi-person detection or even *PoseNet*, *BlazePose* or *EfficientPose* depending on the use case  \n\nFor more info, see [**Configuration Details**](https://github.com/vladmandic/human/wiki/Configuration) and [**List of Models**](https://github.com/vladmandic/human/wiki/Models)\n\n\u003cbr\u003e\u003chr\u003e\u003cbr\u003e\n\n## Diagnostics\n\n- [How to get diagnostic information or performance trace information](https://github.com/vladmandic/human/wiki/Diag)\n\n\u003cbr\u003e\u003chr\u003e\u003cbr\u003e\n\n`Human` library is written in [TypeScript](https://www.typescriptlang.org/docs/handbook/intro.html) **5.1** using [TensorFlow/JS](https://www.tensorflow.org/js/) **4.10** and conforming to the latest `JavaScript` [ECMAScript version 2022](https://262.ecma-international.org/) standard  \n\nBuild target for distributables is `JavaScript` [ECMAScript version 2018](https://262.ecma-international.org/9.0/)  \n\n\u003cbr\u003e\n\nFor details see [**Wiki Pages**](https://github.com/vladmandic/human/wiki)  \nand [**API 
Specification**](https://vladmandic.github.io/human/typedoc/classes/Human.html)\n\n\u003cbr\u003e\n\n[![](https://img.shields.io/static/v1?label=Sponsor\u0026message=%E2%9D%A4\u0026logo=GitHub\u0026color=%23fe8e86)](https://github.com/sponsors/vladmandic)\n![Stars](https://img.shields.io/github/stars/vladmandic/human?style=flat-square\u0026svg=true)\n![Forks](https://badgen.net/github/forks/vladmandic/human)\n![Code Size](https://img.shields.io/github/languages/code-size/vladmandic/human?style=flat-square\u0026svg=true)\n![CDN](https://data.jsdelivr.com/v1/package/npm/@vladmandic/human/badge)\u003cbr\u003e\n![Downloads](https://img.shields.io/npm/dw/@vladmandic/human.png?style=flat-square)\n![Downloads](https://img.shields.io/npm/dm/@vladmandic/human.png?style=flat-square)\n![Downloads](https://img.shields.io/npm/dy/@vladmandic/human.png?style=flat-square)\n","funding_links":["https://github.com/sponsors/vladmandic"],"categories":["HTML"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvladmandic%2Fhuman","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fvladmandic%2Fhuman","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fvladmandic%2Fhuman/lists"}