{"id":13443988,"url":"https://github.com/tensorflow/graphics","last_synced_at":"2025-05-12T13:21:06.550Z","repository":{"id":37451121,"uuid":"164626274","full_name":"tensorflow/graphics","owner":"tensorflow","description":"TensorFlow Graphics: Differentiable Graphics Layers for TensorFlow","archived":false,"fork":false,"pushed_at":"2025-05-07T10:02:17.000Z","size":7457,"stargazers_count":2767,"open_issues_count":144,"forks_count":367,"subscribers_count":77,"default_branch":"master","last_synced_at":"2025-05-08T00:09:45.672Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorflow.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2019-01-08T10:39:44.000Z","updated_at":"2025-05-07T08:52:01.000Z","dependencies_parsed_at":"2023-02-11T16:30:54.810Z","dependency_job_id":"d7319bd5-4657-4530-9d11-b63bd1c4fd6e","html_url":"https://github.com/tensorflow/graphics","commit_stats":{"total_commits":724,"total_committers":38,"mean_commits":19.05263157894737,"dds":0.68646408839779,"last_synced_commit":"8f60a9305ce3bd8ee271b3128b01326f20c5ad26"},"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fgraphics","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fgraphics/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fgraphics/releases","manifests
_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fgraphics/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorflow","download_url":"https://codeload.github.com/tensorflow/graphics/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253745197,"owners_count":21957320,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T03:02:15.897Z","updated_at":"2025-05-12T13:21:06.525Z","avatar_url":"https://github.com/tensorflow.png","language":"Python","readme":"# TensorFlow Graphics\n\n[![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://opensource.org/licenses/Apache-2.0)\n[![Build](https://github.com/tensorflow/graphics/workflows/Build/badge.svg?branch=master)](https://github.com/tensorflow/graphics/actions)\n[![Code coverage](https://img.shields.io/coveralls/github/tensorflow/graphics.svg)](https://coveralls.io/github/tensorflow/graphics)\n[![PyPI project status](https://img.shields.io/pypi/status/tensorflow-graphics.svg)](https://pypi.org/project/tensorflow-graphics/)\n[![Supported Python version](https://img.shields.io/pypi/pyversions/tensorflow-graphics.svg)](https://pypi.org/project/tensorflow-graphics/)\n[![PyPI release version](https://img.shields.io/pypi/v/tensorflow-graphics.svg)](https://pypi.org/project/tensorflow-graphics/)\n[![Downloads](https://pepy.tech/badge/tensorflow-graphics)](https://pepy.tech/project/tensorflow-graphics)\n\nThe last few years have seen a rise in novel differentiable graphics 
layers\nwhich can be inserted in neural network architectures. From spatial transformers\nto differentiable graphics renderers, these new layers leverage the knowledge\nacquired over years of computer vision and graphics research to build new and\nmore efficient network architectures. Explicitly modeling geometric priors and\nconstraints into neural networks opens the door to architectures that can be\ntrained robustly, efficiently, and more importantly, in a self-supervised\nfashion.\n\n## Overview\n\nAt a high level, a computer graphics pipeline requires a representation of 3D\nobjects and their absolute positioning in the scene, a description of the\nmaterials they are made of, the lights, and a camera. This scene description is then\ninterpreted by a renderer to generate a synthetic rendering.\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg border=\"0\"  src=\"https://storage.googleapis.com/tensorflow-graphics/git/readme/graphics.jpg\" width=\"600\"\u003e\n\u003c/div\u003e\n\nIn comparison, a computer vision system would start from an image and try to\ninfer the parameters of the scene. This allows the prediction of which objects\nare in the scene, what materials they are made of, and their three-dimensional\nposition and orientation.\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg border=\"0\"  src=\"https://storage.googleapis.com/tensorflow-graphics/git/readme/cv.jpg\" width=\"600\"\u003e\n\u003c/div\u003e\n\nTraining machine learning systems capable of solving these complex 3D vision\ntasks most often requires large quantities of data. As labelling data is a\ncostly and complex process, it is important to have mechanisms to design machine\nlearning models that can comprehend the three-dimensional world while being\ntrained without much supervision. Combining computer vision and computer\ngraphics techniques provides a unique opportunity to leverage the vast amounts\nof readily available unlabelled data. 
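As a toy, self-contained illustration of how such a combined vision-plus-graphics loop can learn without labels, the sketch below pairs a one-parameter "renderer" with a gradient-descent "vision system" that recovers the parameter purely from the re-rendering error. Everything here (the `render` and `fit_scene` helpers) is illustrative, not TensorFlow Graphics API.

```python
# Toy analysis-by-synthesis loop: a "renderer" maps a scene parameter
# (here a single brightness value) to an "image", and the "vision system"
# recovers that parameter by minimizing the re-rendering error with
# gradient descent. Purely illustrative; TensorFlow Graphics provides the
# real differentiable cameras, materials, and meshes for this.

def render(brightness, pixels=4):
    """A trivially differentiable renderer: every pixel equals brightness."""
    return [brightness] * pixels

def fit_scene(target_image, lr=0.1, steps=200):
    """Recover the brightness that re-renders the target (no labels needed)."""
    b = 0.0  # initial guess for the scene parameter
    n = len(target_image)
    for _ in range(steps):
        rendered = render(b, pixels=n)
        # d/db of mean((rendered - target)^2) = mean(2 * (b - target_i))
        grad = sum(2.0 * (r - t) for r, t in zip(rendered, target_image)) / n
        b -= lr * grad
    return b

target = render(0.75)            # "observed" image with an unknown parameter
estimate = fit_scene(target)     # self-supervised recovery of that parameter
assert abs(estimate - 0.75) < 1e-3
```

Because the loss compares the re-rendered image against the input image itself, no ground-truth scene parameters are ever needed, which is exactly the autoencoder-like property described here.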
As illustrated in the image below, this\ncan, for instance, be achieved using analysis by synthesis, where the vision\nsystem extracts the scene parameters and the graphics system renders back an\nimage based on them. If the rendering matches the original image, the vision\nsystem has accurately extracted the scene parameters. In this setup, computer\nvision and computer graphics go hand in hand, forming a single machine learning\nsystem similar to an autoencoder, which can be trained in a self-supervised\nmanner.\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg border=\"0\"  src=\"https://storage.googleapis.com/tensorflow-graphics/git/readme/cv_graphics.jpg\" width=\"600\"\u003e\n\u003c/div\u003e\n\nTensorFlow Graphics is being developed to help tackle these types of challenges.\nTo do so, it provides a set of differentiable graphics and geometry layers\n(e.g. cameras, reflectance models, spatial transformations, mesh convolutions)\nand 3D viewer functionalities (e.g. 3D TensorBoard) that can be used to train\nand debug your machine learning models of choice.\n\n## Installing TensorFlow Graphics\n\nSee the [install](https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/g3doc/install.md)\ndocumentation for instructions on how to install TensorFlow Graphics.\n\n## API Documentation\n\nYou can find the API documentation\n[here](https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/g3doc/api_docs/python/tfg.md).\n\n## Compatibility\n\nTensorFlow Graphics is fully compatible with the latest stable release of\nTensorFlow, tf-nightly, and tf-nightly-2.0-preview. All the functions are\ncompatible with graph and eager execution.\n\n## Debugging\n\nTensorFlow Graphics relies heavily on L2-normalized tensors, as well as on the\ninputs to specific functions being in a pre-defined range. Checking for all of\nthis takes cycles, and hence is not activated by default. 
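The kind of invariant these checks enforce can be sketched in a few lines of plain Python; the `assert_l2_normalized` helper below is hypothetical and only illustrates the idea, not the library's actual debug API.

```python
import math

def assert_l2_normalized(vectors, name="input", eps=1e-6):
    """Raise if any vector is not unit length -- a toy version of the
    normalization/range checks described in this section."""
    for v in vectors:
        norm = math.sqrt(sum(x * x for x in v))
        if abs(norm - 1.0) > eps:
            raise ValueError(
                f"{name} vector {v} has L2 norm {norm:.6f}, expected 1.0")

# Unit-length directions pass silently; an unnormalized one would raise.
assert_l2_normalized([[1.0, 0.0, 0.0], [0.0, 0.6, 0.8]])
```
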
It is recommended to\nturn these checks on during a couple of epochs of training to make sure that\neverything behaves as expected. This\n[page](https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/g3doc/debug_mode.md)\nprovides the instructions to enable these checks.\n\n## Colab tutorials\n\nTo help you get started with some of the functionalities provided by TF\nGraphics, some Colab notebooks are available below, roughly ordered by\ndifficulty. These Colabs touch upon a large range of topics including object\npose estimation, interpolation, object materials, lighting, non-rigid surface\ndeformation, spherical harmonics, and mesh convolutions.\n\nNOTE: the tutorials are maintained carefully. However, they are not considered\npart of the API and they can change at any time without warning. It is not\nadvised to write code that takes a dependency on them.\n\n### Beginner\n\n\u003cdiv align=\"center\"\u003e\n  \u003ctable\u003e\n    \u003ctr\u003e\n      \u003cth style=\"text-align:center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb\"\u003eObject pose estimation\u003c/a\u003e\u003c/th\u003e\n      \u003cth style=\"text-align:center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/intrinsics_optimization.ipynb\"\u003eCamera intrinsics optimization\u003c/a\u003e\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003e\n        \u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb\"\u003e\u003cimg border=\"0\"  src=\"https://storage.googleapis.com/tensorflow-graphics/notebooks/6dof_pose/thumbnail.jpg\" width=\"200\" height=\"200\"\u003e\n        \u003c/a\u003e\n      \u003c/td\u003e\n      \u003ctd align=\"center\"\u003e\n              \u003ca 
href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/intrinsics_optimization.ipynb\"\u003e\u003cimg border=\"0\" src=\"https://storage.googleapis.com/tensorflow-graphics/notebooks/intrinsics/intrinsics_thumbnail.png\" width=\"200\" height=\"200\"\u003e\n        \u003c/a\u003e\n      \u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/table\u003e\n\u003c/div\u003e\n\n### Intermediate\n\n\u003cdiv align=\"center\"\u003e\n  \u003ctable\u003e\n    \u003ctr\u003e\n      \u003cth style=\"text-align:center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/interpolation.ipynb\"\u003eB-spline and slerp interpolation\u003c/a\u003e\u003c/th\u003e\n      \u003cth style=\"text-align:center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/reflectance.ipynb\"\u003eReflectance\u003c/a\u003e\u003c/th\u003e\n      \u003cth style=\"text-align:center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/non_rigid_deformation.ipynb\"\u003eNon-rigid surface deformation\u003c/a\u003e\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/interpolation.ipynb\"\u003e\u003cimg border=\"0\" src=\"https://storage.googleapis.com/tensorflow-graphics/notebooks/interpolation/thumbnail.png\" width=\"200\" height=\"200\"\u003e \u003c/td\u003e\n      \u003ctd align=\"center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/reflectance.ipynb\"\u003e\u003cimg border=\"0\" src=\"https://storage.googleapis.com/tensorflow-graphics/notebooks/reflectance/thumbnail.png\" width=\"200\" 
height=\"200\"\u003e\u003c/td\u003e\n      \u003ctd align=\"center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/non_rigid_deformation.ipynb\"\u003e\u003cimg border=\"0\" src=\"https://storage.googleapis.com/tensorflow-graphics/notebooks/non_rigid_deformation/thumbnail.jpg\" width=\"200\" height=\"200\"\u003e\n      \u003c/a\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/table\u003e\n\u003c/div\u003e\n\n### Advanced\n\n\u003cdiv align=\"center\"\u003e\n  \u003ctable\u003e\n    \u003ctr\u003e\n      \u003cth style=\"text-align:center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_approximation.ipynb\"\u003eSpherical harmonics rendering\u003c/a\u003e\u003c/th\u003e\n      \u003cth style=\"text-align:center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_optimization.ipynb\"\u003eEnvironment map optimization\u003c/a\u003e\u003c/th\u003e\n      \u003cth style=\"text-align:center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/mesh_segmentation_demo.ipynb\"\u003eSemantic mesh segmentation\u003c/a\u003e\u003c/th\u003e\n    \u003c/tr\u003e\n    \u003ctr\u003e\n      \u003ctd align=\"center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_approximation.ipynb\"\u003e\u003cimg border=\"0\" src=\"https://storage.googleapis.com/tensorflow-graphics/notebooks/sh_rendering/thumbnail.png\" width=\"200\" height=\"200\"\u003e\n      \u003c/a\u003e\u003c/td\u003e\n      \u003ctd align=\"center\"\u003e\u003ca 
href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_optimization.ipynb\"\u003e\u003cimg border=\"0\" src=\"https://storage.googleapis.com/tensorflow-graphics/notebooks/environment_lighting/thumbnail.png\" width=\"200\" height=\"200\"\u003e\n      \u003c/a\u003e\u003c/td\u003e\n      \u003ctd align=\"center\"\u003e\u003ca href=\"https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/mesh_segmentation_demo.ipynb\"\u003e\u003cimg border=\"0\" src=\"https://storage.googleapis.com/tensorflow-graphics/notebooks/mesh_segmentation/thumbnail.jpg\" width=\"200\" height=\"200\"\u003e\n      \u003c/a\u003e\u003c/td\u003e\n    \u003c/tr\u003e\n  \u003c/table\u003e\n\u003c/div\u003e\n\n## TensorBoard 3D\n\nVisual debugging is a great way to assess whether an experiment is going in the\nright direction. To this end, TensorFlow Graphics comes with a TensorBoard\nplugin to interactively visualize 3D meshes and point clouds.\n[This demo](https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/tensorboard/plugins/mesh/Mesh_Plugin_Tensorboard.ipynb)\nshows how to use the plugin. Follow\n[these instructions](https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/g3doc/tensorboard.md)\nto install and configure TensorBoard 3D. 
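As a rough sketch of the data such a viewer consumes, a triangle mesh is typically passed as a float vertex array of shape `[N, 3]` plus integer face-index triples of shape `[M, 3]` (the exact summary-op signature is described in the linked instructions). The snippet below builds a one-triangle mesh in that layout and derives a per-face normal from it; the layout and helper are illustrative, not the plugin's API.

```python
import math

# A minimal triangle mesh in the layout 3D viewers such as the TensorBoard
# mesh plugin typically expect: vertices as [N, 3] floats and faces as
# [M, 3] integer indices into the vertex list.
vertices = [
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
]
faces = [[0, 1, 2]]  # one triangle referencing the three vertices above

def face_normal(mesh_vertices, face):
    """Unit normal of a triangular face via the cross product of two edges."""
    a, b, c = (mesh_vertices[i] for i in face)
    u = [b[k] - a[k] for k in range(3)]  # edge a -> b
    v = [c[k] - a[k] for k in range(3)]  # edge a -> c
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = math.sqrt(sum(x * x for x in n))
    return [x / length for x in n]

# The triangle lies in the z = 0 plane, so its normal points along +z.
assert face_normal(vertices, faces[0]) == [0.0, 0.0, 1.0]
```
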
Note that TensorBoard 3D is currently\nnot compatible with eager execution or TensorFlow 2.\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg border=\"0\"  src=\"https://storage.googleapis.com/tensorflow-graphics/git/readme/tensorboard_plugin.jpg\" width=\"1280\"\u003e\n\u003c/div\u003e\n\n## Coming next...\n\nAmong many things, we are hoping to release resamplers, additional 3D\nconvolution and pooling operators, and a differentiable rasterizer!\n\nFollow us on [Twitter](https://twitter.com/_TFGraphics_) to hear about the\nlatest updates!\n\n## Additional Information\n\nYou may use this software under the\n[Apache 2.0 License](https://github.com/tensorflow/graphics/blob/master/LICENSE).\n\n## Community\n\nAs part of TensorFlow, we're committed to fostering an open and welcoming\nenvironment.\n\n*   [Stack Overflow](https://stackoverflow.com/questions/tagged/tensorflow): Ask\n    or answer technical questions.\n*   [GitHub](https://github.com/tensorflow/graphics/issues): Report bugs or make\n    feature requests.\n*   [TensorFlow Blog](https://blog.tensorflow.org/): Stay up to date on content\n    from the TensorFlow team and best articles from the community.\n*   [YouTube Channel](http://youtube.com/tensorflow/): Follow TensorFlow shows.\n\n## References\n\nIf you use TensorFlow Graphics in your research, please reference it as:\n\n    @inproceedings{TensorflowGraphicsIO2019,\n       author = {Oztireli, Cengiz and Valentin, Julien and Keskin, Cem and Pidlypenskyi, Pavel and Makadia, Ameesh and Sud, Avneesh and Bouaziz, Sofien},\n       title = {TensorFlow Graphics: Computer Graphics Meets Deep Learning},\n       year = {2019}\n    }\n\n### Contact\n\nWant to reach out? 
E-mail us at tf-graphics-contact@google.com!\n\n### Contributors - in alphabetical order\n\n-   Sofien Bouaziz (sofien@google.com)\n-   Jay Busch\n-   Forrester Cole\n-   Ambrus Csaszar\n-   Boyang Deng\n-   Ariel Gordon\n-   Christian Häne\n-   Cem Keskin\n-   Ameesh Makadia\n-   Cengiz Öztireli\n-   Rohit Pandey\n-   Romain Prévost\n-   Pavel Pidlypenskyi\n-   Stefan Popov\n-   Konstantinos Rematas\n-   Omar Sanseviero\n-   Aviv Segal\n-   Avneesh Sud\n-   Andrea Tagliasacchi\n-   Anastasia Tkach\n-   Julien Valentin\n-   He Wang\n-   Yinda Zhang\n","funding_links":[],"categories":["Python","图像数据与CV","Graph","Technologies"],"sub_categories":["Others"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Fgraphics","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorflow%2Fgraphics","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Fgraphics/lists"}