{"id":13405565,"url":"https://github.com/pytorch/vision","last_synced_at":"2025-05-12T15:13:05.127Z","repository":{"id":37444307,"uuid":"73328905","full_name":"pytorch/vision","owner":"pytorch","description":"Datasets, Transforms and Models specific to Computer Vision","archived":false,"fork":false,"pushed_at":"2025-05-04T11:34:51.000Z","size":1227492,"stargazers_count":16799,"open_issues_count":1110,"forks_count":7049,"subscribers_count":473,"default_branch":"main","last_synced_at":"2025-05-05T11:14:20.543Z","etag":null,"topics":["computer-vision","machine-learning"],"latest_commit_sha":null,"homepage":"https://pytorch.org/vision","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/pytorch.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2016-11-09T23:11:43.000Z","updated_at":"2025-05-04T15:30:14.000Z","dependencies_parsed_at":"2023-12-20T13:04:24.600Z","dependency_job_id":"28b4c50b-41ef-48c8-bdd0-822d5d59ca37","html_url":"https://github.com/pytorch/vision","commit_stats":{"total_commits":3904,"total_committers":634,"mean_commits":6.157728706624606,"dds":0.8322233606557377,"last_synced_commit":"6d7851bd5e2bedc294e40e90532f0e375fcfee04"},"previous_names":[],"tags_count":187,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Fvision","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Fvision/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/
GitHub/repositories/pytorch%2Fvision/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/pytorch%2Fvision/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/pytorch","download_url":"https://codeload.github.com/pytorch/vision/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252498521,"owners_count":21757818,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["computer-vision","machine-learning"],"created_at":"2024-07-30T19:02:05.538Z","updated_at":"2025-05-05T12:37:42.654Z","avatar_url":"https://github.com/pytorch.png","language":"Python","readme":"# torchvision\n\n[![total torchvision downloads](https://pepy.tech/badge/torchvision)](https://pepy.tech/project/torchvision)\n[![documentation](https://img.shields.io/badge/dynamic/json.svg?label=docs\u0026url=https%3A%2F%2Fpypi.org%2Fpypi%2Ftorchvision%2Fjson\u0026query=%24.info.version\u0026colorB=brightgreen\u0026prefix=v)](https://pytorch.org/vision/stable/index.html)\n\nThe torchvision package consists of popular datasets, model architectures, and common image transformations for computer\nvision.\n\n## Installation\n\nPlease refer to the [official\ninstructions](https://pytorch.org/get-started/locally/) to install the stable\nversions of `torch` and `torchvision` on your system.\n\nTo build from source, refer to our [contributing\npage](https://github.com/pytorch/vision/blob/main/CONTRIBUTING.md#development-installation).\n\nThe following table lists the corresponding `torchvision` versions and supported\n
Python\nversions.\n\n| `torch`            | `torchvision`      | Python              |\n| ------------------ | ------------------ | ------------------- |\n| `main` / `nightly` | `main` / `nightly` | `\u003e=3.9`, `\u003c=3.12`   |\n| `2.5`              | `0.20`             | `\u003e=3.9`, `\u003c=3.12`   |\n| `2.4`              | `0.19`             | `\u003e=3.8`, `\u003c=3.12`   |\n| `2.3`              | `0.18`             | `\u003e=3.8`, `\u003c=3.12`   |\n| `2.2`              | `0.17`             | `\u003e=3.8`, `\u003c=3.11`   |\n| `2.1`              | `0.16`             | `\u003e=3.8`, `\u003c=3.11`   |\n| `2.0`              | `0.15`             | `\u003e=3.8`, `\u003c=3.11`   |\n\n\u003cdetails\u003e\n    \u003csummary\u003eolder versions\u003c/summary\u003e\n\n| `torch` | `torchvision`     | Python                    |\n|---------|-------------------|---------------------------|\n| `1.13`  | `0.14`            | `\u003e=3.7.2`, `\u003c=3.10`       |\n| `1.12`  | `0.13`            | `\u003e=3.7`, `\u003c=3.10`         |\n| `1.11`  | `0.12`            | `\u003e=3.7`, `\u003c=3.10`         |\n| `1.10`  | `0.11`            | `\u003e=3.6`, `\u003c=3.9`          |\n| `1.9`   | `0.10`            | `\u003e=3.6`, `\u003c=3.9`          |\n| `1.8`   | `0.9`             | `\u003e=3.6`, `\u003c=3.9`          |\n| `1.7`   | `0.8`             | `\u003e=3.6`, `\u003c=3.9`          |\n| `1.6`   | `0.7`             | `\u003e=3.6`, `\u003c=3.8`          |\n| `1.5`   | `0.6`             | `\u003e=3.5`, `\u003c=3.8`          |\n| `1.4`   | `0.5`             | `==2.7`, `\u003e=3.5`, `\u003c=3.8` |\n| `1.3`   | `0.4.2` / `0.4.3` | `==2.7`, `\u003e=3.5`, `\u003c=3.7` |\n| `1.2`   | `0.4.1`           | `==2.7`, `\u003e=3.5`, `\u003c=3.7` |\n| `1.1`   | `0.3`             | `==2.7`, `\u003e=3.5`, `\u003c=3.7` |\n| `\u003c=1.0` | `0.2`             | `==2.7`, `\u003e=3.5`, `\u003c=3.7` |\n\n\u003c/details\u003e\n\n## Image Backends\n\nTorchvision currently supports the following image 
backends:\n\n- torch tensors\n- PIL images:\n    - [Pillow](https://python-pillow.org/)\n    - [Pillow-SIMD](https://github.com/uploadcare/pillow-simd) - a **much faster** drop-in replacement for Pillow with SIMD.\n\nRead more in our [docs](https://pytorch.org/vision/stable/transforms.html).\n\n## [UNSTABLE] Video Backend\n\nTorchvision currently supports the following video backends:\n\n- [pyav](https://github.com/PyAV-Org/PyAV) (default) - Pythonic binding for ffmpeg libraries.\n- video_reader - This needs ffmpeg to be installed and torchvision to be built from source. There shouldn't be any\n  conflicting version of ffmpeg installed. Currently, this is only supported on Linux.\n\n```bash\nconda install -c conda-forge 'ffmpeg\u003c4.3'\npython setup.py install\n```\n\n## Using the models in C++\n\nRefer to [example/cpp](https://github.com/pytorch/vision/tree/main/examples/cpp).\n\n**DISCLAIMER**: the `libtorchvision` library includes the torchvision\ncustom ops as well as most of the C++ torchvision APIs. Those APIs do not come\nwith any backward-compatibility guarantees and may change from one version to\nthe next. Only the Python APIs are stable and come with backward-compatibility\nguarantees. So, if you need stability within a C++ environment, your best bet is\nto export the Python APIs via TorchScript.\n\n## Documentation\n\nYou can find the API documentation on the PyTorch website: \u003chttps://pytorch.org/vision/stable/index.html\u003e\n\n## Contributing\n\nSee the [CONTRIBUTING](CONTRIBUTING.md) file for how to help out.\n\n## Disclaimer on Datasets\n\nThis is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets,\nvouch for their quality or fairness, or claim that you have a license to use the dataset.\n
It is your responsibility to\ndetermine whether you have permission to use the dataset under the dataset's license.\n\nIf you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset\nto be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML\ncommunity!\n\n## Pre-trained Model License\n\nThe pre-trained models provided in this library may have their own licenses or terms and conditions derived from the\ndataset used for training. It is your responsibility to determine whether you have permission to use the models for your\nuse case.\n\nMore specifically, SWAG models are released under the CC-BY-NC 4.0 license. See\n[SWAG LICENSE](https://github.com/facebookresearch/SWAG/blob/main/LICENSE) for additional details.\n\n## Citing TorchVision\n\nIf you find TorchVision useful in your work, please consider citing the following BibTeX entry:\n\n```bibtex\n@software{torchvision2016,\n    title        = {TorchVision: PyTorch's Computer Vision library},\n    author       = {TorchVision maintainers and contributors},\n    year         = 2016,\n    journal      = {GitHub repository},\n    publisher    = {GitHub},\n    howpublished = {\\url{https://github.com/pytorch/vision}}\n}\n```\n","funding_links":[],"categories":["Python","HarmonyOS","Frameworks \u0026 libraries","The Data Science Toolbox","Computer Vision","Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","Deep Learning Framework","Pytorch \u0026 related libraries","CV","Deep Learning","其他_机器视觉","计算机视觉 (CV)","CV\u0026PyTorch实战","图像数据与CV","Deep Learning Tools","📚 Skill Development \u0026 Career"],"sub_categories":["Windows Manager","Machine learning","Deep Learning Packages","Others","CV｜计算机视觉:","High-Level DL APIs","CV:","PyTorch","网络服务_其他","Data Sources \u0026 
Datasets"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpytorch%2Fvision","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fpytorch%2Fvision","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fpytorch%2Fvision/lists"}