{"id":13699551,"url":"https://github.com/tensorflow/runtime","last_synced_at":"2025-05-14T22:08:54.044Z","repository":{"id":37622541,"uuid":"258604114","full_name":"tensorflow/runtime","owner":"tensorflow","description":"A performant and modular runtime for TensorFlow","archived":false,"fork":false,"pushed_at":"2025-04-18T16:34:30.000Z","size":25138,"stargazers_count":761,"open_issues_count":44,"forks_count":122,"subscribers_count":45,"default_branch":"master","last_synced_at":"2025-05-08T00:09:45.195Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorflow.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":"CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":"AUTHORS","dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-04-24T19:25:39.000Z","updated_at":"2025-04-30T11:09:28.000Z","dependencies_parsed_at":"2023-10-12T12:34:13.038Z","dependency_job_id":"8ead7bf1-d783-4dad-992b-f1ba2dcfeaad","html_url":"https://github.com/tensorflow/runtime","commit_stats":{"total_commits":4114,"total_committers":105,"mean_commits":"39.180952380952384","dds":0.6307729703451628,"last_synced_commit":"07992d7c1ead60f610c17b7c1f9e50b6898adc87"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fruntime","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fruntime/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fruntime/releases","manifests_url":"https://repos.ec
osyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Fruntime/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorflow","download_url":"https://codeload.github.com/tensorflow/runtime/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254235700,"owners_count":22036964,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-02T20:00:36.151Z","updated_at":"2025-05-14T22:08:49.026Z","avatar_url":"https://github.com/tensorflow.png","language":"C++","readme":"# TFRT: A New TensorFlow Runtime\n\nTFRT is a new TensorFlow runtime. It aims to provide a unified, extensible\ninfrastructure layer with best-in-class performance across a wide variety of\ndomain specific hardware. 
It provides efficient use of multithreaded host CPUs,\nsupports fully asynchronous programming models, and focuses on low-level\nefficiency.\n\nTFRT will benefit a broad range of users, but it will be of particular interest\nto you if you are a:\n\n*   Researcher looking to experiment with complex new models and add custom\n    operations to TensorFlow\n*   Application developer looking for improved performance when serving models\n    in production\n*   Hardware maker looking to plug hardware into TensorFlow, including edge and\n    datacenter devices\n\n...or you are simply curious about cool ML infrastructure and low-level runtime\ntechnology!\n\nTo learn more about TFRT’s early progress and wins, check out our\n[TensorFlow Dev Summit 2020 presentation](https://www.youtube.com/watch?v=15tiQoPpuZ8)\nwhere we provided a performance benchmark for small-batch GPU inference on\nResNet 50, and our\n[MLIR Open Design Deep Dive presentation](https://drive.google.com/drive/folders/1fkLJuVP-tIk4GENBu2AgemF3oXYGr2PB)\nwhere we provided a detailed overview of TFRT’s core components, low-level\nabstractions, and general design principles.\n\n**Note:** TFRT is an early-stage project and is not yet ready for general use.\n\n## Getting started\n\n**TLDR:** This section describes how to set up a development environment for\nTFRT, as well as instructions to build and test TFRT components.\n\nTFRT currently supports Ubuntu 16.04. Future supported platforms include macOS,\nWindows, etc. Bazel and clang are required to build and test TFRT. 
NVIDIA's CUDA\nToolkit and cuDNN libraries are required for the GPU backend.\n\nTo describe the TFRT build and test workflows, we will build and run the\nfollowing binaries for graph execution.\n\nRecall from our Dev Summit presentation that for graph execution, a TensorFlow\nuser passes into TFRT a TensorFlow graph created via high-level TensorFlow APIs,\nand TFRT then calls the [MLIR](https://www.tensorflow.org/mlir)-based graph\ncompiler to optimize and lower the graph into\n[BEF](documents/binary_executable_format.md), a Binary Executable Format for\nTFRT graph execution (MLIR is the compiler infrastructure that we use to\nrepresent TFRT host programs). The blue arrows in the simplified TensorFlow\ntraining stack diagram below show this flow.\n\n\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"documents/img/TFRT_overview.svg\" alt=\"TFRT Overview\" width=\"800\"\u003e\n\u003c/div\u003e\n\nThe two binaries introduced next focus on the backend of the graph execution\nworkflow. After the graph compiler has optimized the TensorFlow graph and\nproduced a low-level TFRT Host Program represented in MLIR, `tfrt_translate`\ngenerates a `BEF` file from that host program and `bef_executor` runs the `BEF`\nfile. The progression from TFRT Host Program to `bef_executor` via\n`tfrt_translate` is depicted in the expanded TensorFlow training stack diagram\nbelow. Note that the blue arrow between TFRT Host Program and `BEF` file\nrepresents `tfrt_translate`. Both programs are built in the `tools` directory.\n\n\u003cdiv align=\"center\"\u003e\n\u003cimg src=\"documents/img/BEF_conversion.svg\" alt=\"BEF Conversion\" width=\"480\"\u003e\n\u003c/div\u003e\n\n#### tfrt_translate\n\nThe `tfrt_translate` program does round trip translation between MLIR and BEF,\nsimilar to an assembler and disassembler.\n\n#### bef_executor\n\nThe `bef_executor` program is the execution driver of `BEF` files. 
It reads in a\n`BEF` file, sets up the runtime, and asynchronously executes function(s) in that\nfile.\n\n### Prerequisites\n\n#### Install Bazel\n\nTo build TFRT, you need to install Bazel. TFRT is built and verified with Bazel\n4.0. Follow\n[the Bazel installation instructions](https://docs.bazel.build/versions/master/install-ubuntu.html)\nto install Bazel. Verify the installation with\n\n```shell\n$ bazel --version\nbazel 4.0.0\n```\n\n#### Install clang\n\nFollow [the clang installation instructions](https://apt.llvm.org/) to install\nclang. The automatic installation script, which installs clang, lldb, and lld, is\nrecommended. TFRT is built and verified with clang 11.1.\n\nIf you have multiple versions of clang installed, ensure that the right version\nof clang is the default. On Ubuntu-based systems, you can use\n`update-alternatives` to select the default version. The following example\ncommands assume you installed clang-11:\n\n```shell\n$ sudo update-alternatives --install /usr/bin/clang clang /usr/bin/clang-11 11\n$ sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/bin/clang++-11 11\n```\n\nVerify the installation with\n\n```shell\n$ clang --version\nclang version 11.1.0\n```\n\n#### Install libstdc++\n\nTFRT requires libstdc++8 or greater. Check clang's selected version with\n\n```shell\n$ clang++ -v |\u0026 grep \"Selected GCC\"\nSelected GCC installation: /usr/bin/../lib/gcc/x86_64-linux-gnu/10\n```\n\nIn the example above, the *10* at the end of the path indicates that clang will\nuse libstdc++\u003cem\u003e10\u003c/em\u003e, which is compatible with TFRT.\n\nIf you need to upgrade, the easiest way is to install gcc-8. 
Run the following\ncommand to install:\n\n```shell\n$ sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test\n$ sudo apt-get update\n$ sudo apt-get install -y gcc-8 g++-8\n```\n\nTo verify installation, re-run the `clang++ -v` check above.\n\n#### GPU prerequisites\n\n**Note:** You can skip this section if you don't want to build the GPU backend.\nRemember to exclude `//backends/gpu/...` from your Bazel target patterns though.\n\nBuilding and running the GPU backend requires installing additional components.\n\nInstall clang Python bindings using pip with\n\n```shell\n$ pip install libclang\n```\n\nInstall NVIDIA's CUDA Toolkit v11.2 (see\n[installation guide](https://docs.nvidia.com/cuda/cuda-installation-guide-linux)\nfor details) in a single directory from NVIDIA’s `.run` package with\n\n```shell\n$ wget http://developer.download.nvidia.com/compute/cuda/11.2.2/local_installers/cuda_11.2.2_460.32.03_linux.run\n$ sudo sh cuda_11.2.2_460.32.03_linux.run --toolkit --installpath=\u003cpath\u003e\n```\n\nRegister the path to CUDA shared objects with\n\n```shell\n$ echo '\u003cpath\u003e/lib64' | sudo tee /etc/ld.so.conf.d/cuda.conf\n$ sudo ldconfig\n```\n\nInstall NVIDIA's cuDNN libraries (see\n[installation guide](http://docs.nvidia.com/deeplearning/sdk/cudnn-install) for\ndetails) with\n\n```shell\n$ wget http://developer.download.nvidia.com/compute/machine-learning/repos/ubuntu1604/x86_64/libcudnn8_8.0.4.30-1+cuda11.1_amd64.deb\n$ sudo apt install ./libcudnn8_8.0.4.30-1+cuda11.1_amd64.deb\n```\n\n**Note:** The above package is intended for CUDA 11.1, but is compatible with\nCUDA 11.2. TFRT is built and verified with cuDNN 8.1 for CUDA 11.2. Access to\nthat package requires a (free) NVIDIA developer account.\n\n### Building and running TFRT\n\nTo build TFRT, `cd` to the root directory (where the `WORKSPACE` file is located) of\nthe TFRT workspace. A set of build configurations is in the `.bazelrc` file. 
You can\ncreate a `user.bazelrc` in the repository root with extra Bazel configs that may\nbe useful. Build `tfrt_translate` and `bef_executor` with the following\ncommands:\n\n```shell\n$ bazel build //tools:bef_executor\n$ bazel build //tools:tfrt_translate\n```\n\nThe above commands build the binaries in the `opt` compilation mode. Check\n[Bazel's documentation](https://docs.bazel.build/versions/master/command-line-reference.html#build-options)\nfor more build options. Bazel reports the output location at the end of a\nsuccessful build (default is `bazel-bin`).\n\nAfter `tfrt_translate` and `bef_executor` are built, run an `.mlir` program with\nthe following command:\n\n```shell\n$ bazel-bin/tools/tfrt_translate -mlir-to-bef path/to/program.mlir | bazel-bin/tools/bef_executor\n```\n\nTFRT provides a series of `.mlir` test programs. For example:\n\n```shell\n$ bazel-bin/tools/tfrt_translate -mlir-to-bef mlir_tests/bef_executor/async.mlir | bazel-bin/tools/bef_executor\n```\n\nAny output is printed to the terminal.\n\n### Adding GPU support\n\nAdd `--config=cuda` to the Bazel command to link the GPU backend to the above\ntargets.\n\nCustom CUDA Toolkit locations can be specified with\n`--repo_env=CUDA_PATH=\u003cpath\u003e`. The default is `/usr/local/cuda`.\n\n### Testing\n\nTFRT utilizes LLVM’s [LIT](https://llvm.org/docs/CommandGuide/lit.html)\ninfrastructure and\n[FileCheck](https://llvm.org/docs/CommandGuide/FileCheck.html) utility to\nconstruct MLIR-based check tests. These tests verify that some set of string\ntags appear in the test’s output. An introduction and guidelines for testing\ncan be found\n[here](https://mlir.llvm.org/getting_started/TestingGuide/#check-tests). 
An\nexample test is shown below:\n\n```c++\n// RUN: tfrt_translate -mlir-to-bef %s | bef_executor | FileCheck %s\n// RUN: tfrt_opt %s | tfrt_opt\n\n// CHECK-LABEL: --- Running 'basic_tensor'\nfunc @basic_tensor() {\n  %c0 = tfrt.new.chain\n\n  %a = dht.create_uninitialized_tensor.i32.2 [3 : i64, 2 : i64]\n  %c1 = dht.fill_tensor_with_constant.i32 %a, %c0 0 : i32\n\n  // CHECK: shape = [3, 2], values = [0, 0, 0, 0, 0, 0]\n  %c2 = dht.print_tensor %a, %c1\n\n  tfrt.return\n}\n```\n\nTo run a test, simply invoke `bazel test`:\n\n```shell\n$ bazel test //mlir_tests/bef_executor:basics.mlir.test\n```\n\nMost tests under `//backends/gpu/...` need to be built with `--config=cuda` so\nthat the GPU backend is linked to the bef_executor:\n\n```shell\n$ bazel test --config=cuda //backends/gpu/mlir_tests/core_runtime:get_device.mlir.test\n```\n\nUse Bazel\n[target patterns](https://docs.bazel.build/versions/master/guide.html#specifying-targets-to-build)\nto run multiple tests:\n\n```shell\n$ bazel test -- //... -//third_party/... -//backends/gpu/...  # All CPU tests.\n$ bazel test --config=cuda //backends/gpu/...                 
# All GPU tests.\n```\n\n### Next Steps\n\nTry our [tutorial](documents/tutorial.md) for some hands-on experience with\nTFRT.\n\nSee [host runtime design](documents/tfrt_host_runtime_design.md) for more\ndetails on TFRT's design.\n\n## Repository Overview\n\nThe three key directories under the TFRT root directory are\n\n*   `lib/`: Contains core TFRT infrastructure code\n*   `backends/`: Contains device specific infrastructure and op/kernel\n    implementations\n*   `include/`: Contains public header files for core TFRT infrastructure\n\n\u003ctable\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\u003cstrong\u003eTop level directory\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003cstrong\u003eSub-directory\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003cstrong\u003eDescription\u003c/strong\u003e\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd colspan=\"2\" \u003e\u003cstrong\u003e\u003ccode\u003einclude/\u003c/code\u003e\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eTFRT infrastructure public headers\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd colspan=\"2\" \u003e\u003cstrong\u003e\u003ccode\u003elib/\u003c/code\u003e\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eTFRT infrastructure common for host runtime and all device runtime\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003ebasic_kernels/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eCommon infrastructure kernels, e.g. 
control flow kernels\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003ebef_executor/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eBEFFile and BEFExecutor implementation\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003ebef_executor_driver/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eDriver code for running BEFExecutor for an input MLIR file\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003ebef_converter/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eConverter between MLIR and BEF (bef_to_mlir and mlir_to_bef)\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003ecore_runtime/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eTFRT Core Runtime infrastructure\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003edistributed_runtime/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eTFRT Distributed Runtime infrastructure\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003edata/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eTFRT infrastructure for TF input pipelines\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003ehost_context/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eHost TFRT data structure, e.g. 
HostContext, AsyncValue, ConcurrentWorkQueue\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003emetrics/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eML metric integration\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003esupport/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eBasic utilities, e.g. hash_util, string_util\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003etensor/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eBase Tensor class and host tensor implementations\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003etest_kernels/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eTesting kernel implementations\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003etracing/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eTracing/profiling support\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd colspan=\"2\" \u003e\u003cstrong\u003e\u003ccode\u003ecpp_tests/\u003c/code\u003e\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eC++ unit tests\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd colspan=\"2\" \u003e\u003cstrong\u003e\u003ccode\u003emlir_tests/\u003c/code\u003e\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eMLIR-based unit tests\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd colspan=\"2\" \u003e\u003cstrong\u003e\u003ccode\u003eutils/\u003c/code\u003e\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eMiscellaneous utilities, such as scripts for generating test ML models.\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd 
colspan=\"2\" \u003e\u003cstrong\u003e\u003ccode\u003etools/\u003c/code\u003e\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eBinaries including bef_executor, tfrt_translate etc.\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd colspan=\"2\" \u003e\u003cstrong\u003e\u003ccode\u003ebackends/common/\u003c/code\u003e\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eLibrary shared for different backends, e.g. eigen, dnn_op_utils.h\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003eops/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eShared library for op implementations across devices, e.g. metadata functions\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003ecompat/eigen/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eAdapter library for eigen, used by multiple backends\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003eutils/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eMiscellaneous utilities, such as scripts for generating MLIR test code.\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd colspan=\"2\" \u003e\u003cstrong\u003e\u003ccode\u003ebackends/cpu/\u003c/code\u003e\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eCPU device infra and CPU ops and kernels\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003einclude/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eCPU related public headers\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/core_runtime/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eCPU core_runtime infra, e.g. 
cpu_device\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/ops\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eCPU ops\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/kernels\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eCPU kernels\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003ecpp_tests/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eCPU infra unit tests\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003emlir_tests/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eCPU mlir based tests\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctd colspan=\"2\" \u003e\u003cstrong\u003e\u003ccode\u003ebackends/gpu/\u003c/code\u003e\u003c/strong\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU infra and op/kernel implementations. 
We might split this directory into a separate repository at some point after the interface with the rest of TFRT infra becomes stable.\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003einclude/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU related public headers\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/core_runtime/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU Core runtime infra\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/memory\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU memory abstraction\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/stream\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU stream abstraction and wrappers\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/tensor\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU tensor\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/ops\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU ops\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/kernels\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU kernels\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003elib/data\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU kernels for input pipeline infrastructure\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   
\u003ctd\u003e\u003ccode\u003ecpp_tests/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU infra unit tests\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003emlir_tests/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eGPU mlir based tests\n   \u003c/td\u003e\n  \u003c/tr\u003e\n  \u003ctr\u003e\n   \u003ctd\u003e\n   \u003c/td\u003e\n   \u003ctd\u003e\u003ccode\u003etools/\u003c/code\u003e\n   \u003c/td\u003e\n   \u003ctd\u003eMiscellaneous utilities\n   \u003c/td\u003e\n  \u003c/tr\u003e\n\u003c/table\u003e\n\n## Contribution guidelines\n\nIf you want to contribute to TFRT, be sure to review the\n[contribution guidelines](CONTRIBUTING.md). This project adheres to TensorFlow's\n[code of conduct](https://github.com/tensorflow/tensorflow/blob/master/CODE_OF_CONDUCT.md).\nBy participating, you are expected to uphold this code of conduct.\n\n**Note:** TFRT is currently not open to contributions. TFRT developers are\ncurrently developing workflows and continuous integration for accepting\ncontributions. 
Once we are ready, we will update this page.\n\n## Continuous build status\n\n[![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/tf_runtime/ubuntu-clang.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/tf_runtime/ubuntu-clang.html)\n[![Status](https://storage.googleapis.com/tensorflow-kokoro-build-badges/tf_runtime/ubuntu-gcc.svg)](https://storage.googleapis.com/tensorflow-kokoro-build-badges/tf_runtime/ubuntu-gcc.html)\n\n## Contact\n\nSubscribe to the\n[TFRT mailing list](https://groups.google.com/a/tensorflow.org/d/forum/tfrt) for\ngeneral discussions about the runtime.\n\nWe use GitHub [issues](https://github.com/tensorflow/runtime/issues) to track\nbugs and feature requests.\n\n## License\n\n[Apache License 2.0](LICENSE)\n","funding_links":[],"categories":["C++"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Fruntime","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorflow%2Fruntime","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Fruntime/lists"}