{"id":13433102,"url":"https://github.com/tensorflow/tfjs-models","last_synced_at":"2025-05-12T16:20:00.986Z","repository":{"id":37490545,"uuid":"127968704","full_name":"tensorflow/tfjs-models","owner":"tensorflow","description":"Pretrained models for TensorFlow.js","archived":false,"fork":false,"pushed_at":"2025-02-13T03:24:51.000Z","size":173804,"stargazers_count":14445,"open_issues_count":211,"forks_count":4388,"subscribers_count":276,"default_branch":"master","last_synced_at":"2025-05-05T04:05:57.372Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://js.tensorflow.org","language":"TypeScript","has_issues":false,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorflow.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2018-04-03T21:04:35.000Z","updated_at":"2025-05-05T00:22:02.000Z","dependencies_parsed_at":"2023-02-15T05:46:59.387Z","dependency_job_id":"2d5e73ec-3e07-4e0b-b60a-5ade654a54f7","html_url":"https://github.com/tensorflow/tfjs-models","commit_stats":{"total_commits":915,"total_committers":109,"mean_commits":8.394495412844037,"dds":0.8699453551912568,"last_synced_commit":"5b38bedd951cf7ba8cb2f98796db5c91ee75adf2"},"previous_names":[],"tags_count":110,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftfjs-models","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftfjs-models/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftfjs-
models/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Ftfjs-models/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorflow","download_url":"https://codeload.github.com/tensorflow/tfjs-models/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252448528,"owners_count":21749491,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T02:01:21.048Z","updated_at":"2025-05-05T12:52:47.766Z","avatar_url":"https://github.com/tensorflow.png","language":"TypeScript","readme":"# Pre-trained TensorFlow.js models\n\nThis repository hosts a set of pre-trained models that have been ported to\nTensorFlow.js.\n\nThe models are hosted on NPM and unpkg so they can be used in any project out of the box. They can be used directly or used in a transfer learning\nsetting with TensorFlow.js.\n\nTo find out about APIs for models, look at the README in each of the respective\ndirectories. In general, we try to hide tensors so the API can be used by\nnon-machine learning experts.\n\nFor those interested in contributing a model, please file a [GitHub issue on tfjs](https://github.com/tensorflow/tfjs/issues) to gauge\ninterest. 
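\n\nTo give a flavor of these tensor-free APIs, here is a minimal sketch driving the MobileNet package from a browser module script (the `img` element id is illustrative):\n\n```js\nimport * as mobilenet from '@tensorflow-models/mobilenet';\n\n// load() fetches the model; classify() accepts an image, video or canvas\n// element and resolves to plain {className, probability} objects.\nconst model = await mobilenet.load();\nconst predictions = await model.classify(document.getElementById('img'));\n```\n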
We are trying to add models that complement the existing set of models\nand can be used as building blocks in other apps.\n\n## Models\n\n\u003ctable style=\"max-width:100%;table-layout:auto;\"\u003e\n  \u003ctr style=\"text-align:center;\"\u003e\n    \u003cth\u003eType\u003c/th\u003e\n    \u003cth\u003eModel\u003c/th\u003e\n    \u003cth\u003eDemo\u003c/th\u003e\n    \u003cth\u003eDetails\u003c/th\u003e\n    \u003cth\u003eInstall\u003c/th\u003e\n  \u003c/tr\u003e\n  \u003c!-- Images --\u003e\n  \u003c!-- ** MobileNet --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"12\"\u003e\u003cb\u003eImages\u003c/b\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./mobilenet\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003eMobileNet\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://storage.googleapis.com/tfjs-models/demos/mobilenet/index.html\"\u003elive\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eClassify images with labels from the \u003ca href=\"http://www.image-net.org/\"\u003eImageNet database\u003c/a\u003e.\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/mobilenet\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./mobilenet/demo/index.html\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003c!-- ** Hand --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./hand-pose-detection\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003eHand\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://storage.googleapis.com/tfjs-models/demos/hand-pose-detection/index.html?model=mediapipe_hands\"\u003elive\u003c/a\u003e\u003c/td\u003e\n    \u003ctd 
rowspan=\"2\"\u003eReal-time hand pose detection in the browser using TensorFlow.js.\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/hand-pose-detection\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./hand-pose-detection/demos/live_video/index.html\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n    \u003c!-- ** Pose --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./pose-detection\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003ePose\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://storage.googleapis.com/tfjs-models/demos/pose-detection/index.html?model=movenet\"\u003elive\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eAn API for real-time human pose detection in the browser.\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/pose-detection\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./pose-detection/demos/live_video/index.html\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003c!-- ** Coco SSD --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./coco-ssd\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003eCoco SSD\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"\"\u003e\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eObject detection model that aims to localize and identify multiple objects in a single image. 
Based on the \u003ca href=\"https://github.com/tensorflow/models/blob/master/research/object_detection/README.md\"\u003eTensorFlow object detection API\u003c/a\u003e.\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/coco-ssd\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./coco-ssd/demo\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003c!-- ** DeepLab --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./deeplab\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003eDeepLab v3\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"\"\u003e\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eSemantic segmentation\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/deeplab\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./deeplab/demo\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n    \u003c!-- ** Face Landmark Detection --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./face-landmarks-detection\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003eFace Landmark Detection\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://storage.googleapis.com/tfjs-models/demos/face-landmarks-detection/index.html?model=mediapipe_face_mesh\"\u003elive\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eReal-time 3D facial landmark detection to infer the approximate surface geometry of a human face.\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/face-landmarks-detection\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca 
href=\"./face-landmarks-detection/demos\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n\n  \u003c!-- * Audio --\u003e\n  \u003c!-- ** Speech Commands --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003eAudio\u003c/b\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./speech-commands\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003eSpeech Commands\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://storage.googleapis.com/tfjs-speech-model-test/2019-01-03a/dist/index.html\"\u003elive\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eClassify 1 second audio snippets from the \u003ca href=\"https://www.tensorflow.org/tutorials/audio/simple_audio\"\u003espeech commands dataset\u003c/a\u003e.\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/speech-commands\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./speech-commands/demo/index.html\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003c!-- * Text --\u003e\n  \u003c!-- ** Universal Sentence Encoder --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"4\"\u003e\u003cb\u003eText\u003c/b\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./universal-sentence-encoder\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003eUniversal Sentence Encoder\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"\"\u003e\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eEncode text into a 512-dimensional embedding to be used as inputs to natural language processing tasks such as sentiment classification and textual similarity.\u003c/td\u003e\n    \u003ctd 
rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/universal-sentence-encoder\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./universal-sentence-encoder/demo\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003c!-- ** Text Toxicity --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./toxicity\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003eText Toxicity\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://storage.googleapis.com/tfjs-models/demos/toxicity/index.html\"\u003elive\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eScore the perceived impact a comment might have on a conversation, from \"Very toxic\" to \"Very healthy\".\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/toxicity\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./toxicity/demo/index.html\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003c!-- * Depth Estimation --\u003e\n  \u003c!-- ** Portrait Depth --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003eDepth Estimation\u003c/b\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./depth-estimation\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003ePortrait Depth\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"https://storage.googleapis.com/tfjs-models/demos/3dphoto/index.html\"\u003elive\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eEstimate per-pixel depth (the distance to the camera center) for a single portrait image, which can be further used for creative applications such as \u003ca 
href=\"https://blog.tensorflow.org/2022/05/portrait-depth-api-turning-single-image.html?linkId=8063793\"\u003e3D photo\u003c/a\u003e and \u003ca href=\"https://storage.googleapis.com/tfjs-models/demos/relighting/index.html\"\u003erelighting\u003c/a\u003e.\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/depth-estimation\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./depth-estimation/demos/3d_photo/index.html\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003c!-- * General Utilities --\u003e\n  \u003ctr\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003eGeneral Utilities\u003c/b\u003e\u003c/td\u003e\n  \u003c!-- ** KNN Classifier --\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003cb\u003e\u003ca style=\"white-space:nowrap; display:inline-block;\" href=\"./knn-classifier\"\u003e\u003cdiv style='vertical-align:middle; display:inline;'\u003eKNN Classifier\u003c/div\u003e\u003c/a\u003e\u003c/b\u003e\u003c/td\u003e\n    \u003ctd\u003e\u003ca href=\"\"\u003e\u003c/a\u003e\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003eThis package provides a utility for creating a classifier using the K-Nearest Neighbors algorithm. 
It can be used for transfer learning.\u003c/td\u003e\n    \u003ctd rowspan=\"2\"\u003e\u003ccode\u003enpm i @tensorflow-models/knn-classifier\u003c/code\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n    \u003ctd\u003e\u003ca href=\"./knn-classifier/demo\"\u003esource\u003c/a\u003e\u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/table\u003e\n\n## Development\n\nYou can run the unit tests for any of the models by running the following\ninside that model's directory:\n\n`yarn test`\n\nNew models should have a test NPM script (see [this `package.json`](./mobilenet/package.json) and [this `run_tests.ts` helper](./mobilenet/run_tests.ts) for reference).\n\nTo run all of the tests, run the following command from the root of this\nrepo:\n\n`yarn presubmit`\n","funding_links":[],"categories":["TypeScript","others","Tools","📖 Learn","语言资源库","Uncategorized"],"sub_categories":["🤖 Models/Projects","typescript","Uncategorized"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Ftfjs-models","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorflow%2Ftfjs-models","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Ftfjs-models/lists"}