{"id":15158621,"url":"https://github.com/tensorflow/recommenders-addons","last_synced_at":"2025-04-14T05:18:48.073Z","repository":{"id":37012417,"uuid":"314416740","full_name":"tensorflow/recommenders-addons","owner":"tensorflow","description":"Additional utils and helpers to extend TensorFlow when build recommendation systems, contributed and maintained by SIG Recommenders.","archived":false,"fork":false,"pushed_at":"2025-03-26T21:42:56.000Z","size":13507,"stargazers_count":613,"open_issues_count":24,"forks_count":141,"subscribers_count":31,"default_branch":"master","last_synced_at":"2025-04-14T05:18:30.627Z","etag":null,"topics":["dynamic-embedding","recommender-system","sig-recommenders","tensorflow","tensorflow-recommenders-addons"],"latest_commit_sha":null,"homepage":"","language":"Cuda","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tensorflow.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":".github/CODEOWNERS","security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2020-11-20T01:44:08.000Z","updated_at":"2025-04-08T08:24:17.000Z","dependencies_parsed_at":"2024-04-18T17:50:40.603Z","dependency_job_id":"22c5c826-1216-4177-b909-eb00f9bfb679","html_url":"https://github.com/tensorflow/recommenders-addons","commit_stats":{"total_commits":333,"total_committers":35,"mean_commits":9.514285714285714,"dds":0.7147147147147147,"last_synced_commit":"6f750cc8c648af65e86d3548052b887595ca3399"},"previous_names":[],"tags_count":15,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Frecommenders-addons"
,"tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Frecommenders-addons/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Frecommenders-addons/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tensorflow%2Frecommenders-addons/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tensorflow","download_url":"https://codeload.github.com/tensorflow/recommenders-addons/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248824693,"owners_count":21167345,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["dynamic-embedding","recommender-system","sig-recommenders","tensorflow","tensorflow-recommenders-addons"],"created_at":"2024-09-26T21:00:46.549Z","updated_at":"2025-04-14T05:18:48.050Z","avatar_url":"https://github.com/tensorflow.png","language":"Cuda","readme":"# TensorFlow Recommenders Addons\n-----------------\n![TensorFlow Recommenders logo](assets/SIGRecommendersAddons.png)\n[![PyPI Status Badge](https://badge.fury.io/py/tensorflow-recommenders-addons.svg)](https://pypi.org/project/tensorflow-recommenders-addons/)\n[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/tensorflow-recommenders-addons)](https://pypi.org/project/tensorflow-recommenders-addons/)\n[![Documentation](https://img.shields.io/badge/api-reference-blue.svg)](docs/api_docs/)\n\nTensorFlow Recommenders Addons(TFRA) are a collection of projects related to large-scale recommendation systems \nbuilt upon 
TensorFlow by introducing the **Dynamic Embedding Technology** to TensorFlow \nthat makes TensorFlow more suitable for training models for **Search, Recommendations, and Advertising**, and \nmakes building, evaluating, and serving sophisticated recommender models easy. \nSee the approved TensorFlow RFC #[313](https://github.com/tensorflow/community/pull/313).\nThese contributions are complementary to TensorFlow Core, TensorFlow Recommenders, etc. \n\nFor Apple silicon (M1), please refer to [Apple Silicon Support](#apple-silicon-support).\n\n## Main Features\n\n- Make key-value data structures (dynamic embeddings) trainable in TensorFlow\n- Achieve better recommendation quality than the static embedding mechanism, with no hash conflicts\n- Compatible with all native TensorFlow optimizers and initializers\n- Compatible with the native TensorFlow Checkpoint and SavedModel formats\n- Full support for training and inference of recommender models on GPUs\n- Support [TF serving](https://github.com/tensorflow/serving) and [Triton Inference Server](https://github.com/triton-inference-server/server) as inference frameworks\n- Support various key-value implementations as dynamic embedding storage, and easy to extend\n  - [cuckoohash_map](https://github.com/efficient/libcuckoo) (from Efficient Computing at Carnegie Mellon, on CPU)\n  - [HierarchicalKV](https://github.com/NVIDIA-Merlin/HierarchicalKV) (from NVIDIA, on GPU)\n  - [Redis](https://github.com/redis/redis)\n- Support half-synchronous training based on Horovod\n  - Synchronous training for dense weights\n  - Asynchronous training for sparse weights\n\n## Subpackages\n\n* [tfra.dynamic_embedding](docs/api_docs/tfra/dynamic_embedding.md), [RFC](rfcs/20200424-sparse-domain-isolation.md)\n* [tfra.embedding_variable](https://github.com/tensorflow/recommenders-addons/blob/master/docs/tutorials/embedding_variable_tutorial.ipynb), [RFC](https://docs.google.com/document/d/1odez6-69YH-eFcp8rKndDHTNGxZgdFFRJufsW94_gl4)\n\n## Contributors\n\nTensorFlow 
Recommenders-Addons depends on public contributions, bug fixes, and documentation.\nThis project exists thanks to all the people and organizations who contribute. [[Contribute](CONTRIBUTING.md)]\n\n\u003ca href=\"https://github.com/tensorflow/recommenders-addons/graphs/contributors\"\u003e\n  \u003cimg src=\"https://contrib.rocks/image?repo=tensorflow/recommenders-addons\" /\u003e\n\u003c/a\u003e\n\n\n\\\n\u003ca href=\"https://github.com/tencent\"\u003e\n  \u003ckbd\u003e \u003cimg src=\"./assets/tencent.png\" height=\"70\" /\u003e \u003c/kbd\u003e\n\u003c/a\u003e\u003ca href=\"https://github.com/alibaba\"\u003e\n  \u003ckbd\u003e \u003cimg src=\"./assets/alibaba.jpg\" height=\"70\" /\u003e \u003c/kbd\u003e\n\u003c/a\u003e\u003ca href=\"https://vip.com/\"\u003e \n  \u003ckbd\u003e \u003cimg src=\"./assets/vips.jpg\" height=\"70\" /\u003e \u003c/kbd\u003e\n\u003c/a\u003e\u003ca href=\"https://www.zhipin.com//\"\u003e\n  \u003ckbd\u003e \u003cimg src=\"./assets/boss.svg\" height=\"70\" /\u003e \u003c/kbd\u003e\n\u003c/a\u003e\n\n\\\nA special thanks to the [NVIDIA Merlin Team](https://github.com/NVIDIA-Merlin) and the NVIDIA China DevTech Team, \nwho have provided GPU acceleration technology support and code contributions.\n\n\u003ca href=\"https://github.com/NVIDIA-Merlin\"\u003e\n  \u003ckbd\u003e \u003cimg src=\"./assets/merilin.png\" height=\"70\" /\u003e \u003c/kbd\u003e\n\u003c/a\u003e\n\n\n## Tutorials \u0026 Demos\nSee [tutorials](docs/tutorials/) and [demo](demo/) for end-to-end examples of each subpackage.\n\n## Installation\n#### Stable Builds\nTensorFlow Recommenders-Addons is available on PyPI for Linux and macOS. 
To install the latest version, \nrun the following:\n```\npip install tensorflow-recommenders-addons\n```\nBefore version 0.8, to install the GPU version, run the following:\n```\npip install tensorflow-recommenders-addons-gpu\n```\n\nTo use TensorFlow Recommenders-Addons:\n\n```python\nimport tensorflow as tf\nimport tensorflow_recommenders_addons as tfra\n```\n\n### Compatibility with TensorFlow\nTensorFlow C++ APIs are not stable, and thus we can only guarantee compatibility with the \nversion TensorFlow Recommenders-Addons (TFRA) was built against. It is possible TFRA will work with \nmultiple versions of TensorFlow, but there is also a chance for segmentation faults or other problematic \ncrashes. Warnings will be emitted if your TensorFlow version does not match what it was built against.\n\nAdditionally, TFRA custom ops registration does not have a stable ABI interface, so it is \nrequired that users have a compatible installation of TensorFlow even if the versions \nmatch what we built against. A simplification of this is that **TensorFlow Recommenders-Addons \ncustom ops will work with `pip`-installed TensorFlow** but will have issues when TensorFlow \nis compiled differently. 
A typical example of this would be `conda`-installed TensorFlow.\n[RFC #133](https://github.com/tensorflow/community/pull/133) aims to fix this.\n\n\n#### Compatibility Matrix\n*GPU is supported by version `0.2.0` and later.*\n\n| TFRA  | TensorFlow | Compiler   | CUDA | CUDNN | Compute Capability           | CPU      |\n|:------|:-----------|:-----------|:-----|:------|:-----------------------------|:---------|\n| 0.8.0 | 2.16.2     | GCC 8.2.1  | 12.3 | 8.9   | 7.0, 7.5, 8.0, 8.6, 8.9, 9.0 | x86      |\n| 0.8.0 | 2.16.2     | Xcode 13.1 | -    | -     | -                            | Apple M1 |\n| 0.7.0 | 2.15.1     | GCC 8.2.1  | 12.2 | 8.9   | 7.0, 7.5, 8.0, 8.6, 8.9, 9.0 | x86      |\n| 0.7.0 | 2.15.1     | Xcode 13.1 | -    | -     | -                            | Apple M1 |\n| 0.6.0 | 2.8.3      | GCC 7.3.1  | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 | x86      |\n| 0.6.0 | 2.6.0      | Xcode 13.1 | -    | -     | -                            | Apple M1 |\n| 0.5.1 | 2.8.3      | GCC 7.3.1  | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 | x86      |\n| 0.5.1 | 2.6.0      | Xcode 13.1 | -    | -     | -                            | Apple M1 |\n| 0.5.0 | 2.8.3      | GCC 7.3.1  | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 | x86      |\n| 0.5.0 | 2.6.0      | Xcode 13.1 | -    | -     | -                            | Apple M1 |\n| 0.4.0 | 2.5.1      | GCC 7.3.1  | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 | x86      |\n| 0.4.0 | 2.5.0      | Xcode 13.1 | -    | -     | -                            | Apple M1 |\n| 0.3.1 | 2.5.1      | GCC 7.3.1  | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 | x86      |\n| 0.2.0 | 2.4.1      | GCC 7.3.1  | 11.0 | 8.0   | 6.0, 6.1, 7.0, 7.5, 8.0      | x86      |\n| 0.2.0 | 1.15.2     | GCC 7.3.1  | 10.0 | 7.6   | 6.0, 6.1, 7.0, 7.5           | x86      |\n| 0.1.0 | 2.4.1      | GCC 7.3.1  | -    | -     | -                            | x86      |\n\nCheck 
[nvidia-support-matrix](https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html) for more details.\n\n**NOTICE**\n\n- The release packages have a strict version binding relationship with TensorFlow. \n- Due to the significant changes in the TensorFlow API, we can only ensure version 0.2.0 compatibility with TF1.15.2 on CPU \u0026 GPU, \n  but **there are no official releases**; you can only get it by compiling as follows:\n```sh\nPY_VERSION=\"3.9\" \\\nTF_VERSION=\"2.15.1\" \\\nTF_NEED_CUDA=1 \\\nsh .github/workflows/make_wheel_Linux_x86.sh\n\n# .whl file will be created in ./wheelhouse/\n```\n\n- If you need to work with TensorFlow 1.14.x or an older version, we suggest you give up,\nbut maybe this doc can help you: [Extract headers from TensorFlow compiling directory](./build_deps/tf_header/README.md).\nAt the same time, we find some OPs used by TFRA have better performance, so we highly recommend you update TensorFlow to 2.x.\n\n### Installing from Source\n\nFor all developers, we recommend you use the development Docker containers, which are all GPU-enabled:\n```sh\ndocker pull tfra/dev_container:latest-tf2.15.1-python3.9  # Available TensorFlow and Python combinations: https://www.tensorflow.org/install/source#linux\ndocker run --privileged --gpus all -it --rm -v $(pwd):$(pwd) tfra/dev_container:latest-tf2.15.1-python3.9\n```\n\n#### CPU Only\nYou can also install from source. This requires the [Bazel](https://bazel.build/) build system (version == 5.1.1).\nPlease install TensorFlow on the compiling machine first; the build needs to know the version and \nheaders of the installed TensorFlow. 
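\n\nAs a quick sanity check (a convenience step, not part of the official instructions), you can print the version and header directory of the installed TensorFlow that the build will compile against:\n\n```sh\npython -c 'import tensorflow as tf; print(tf.__version__); print(tf.sysconfig.get_include())'\n```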
\n\n```sh\nexport TF_VERSION=\"2.15.1\"  # \"2.11.0\" is well tested.\npip install tensorflow==$TF_VERSION\n\ngit clone https://github.com/tensorflow/recommenders-addons.git\ncd recommenders-addons\n\n# This script links the project with the TensorFlow dependency\npython configure.py\n\nbazel build --enable_runfiles build_pip_pkg\nbazel-bin/build_pip_pkg artifacts\n\npip install artifacts/tensorflow_recommenders_addons-*.whl\n```\n#### GPU Support\nOnly `TF_NEED_CUDA=1` is required; the other environment variables are optional:\n```sh\nexport TF_VERSION=\"2.15.1\"  # \"2.11.0\" is well tested.\nexport PY_VERSION=\"3.9\" \nexport TF_NEED_CUDA=1\nexport TF_CUDA_VERSION=12.2 # nvcc --version to check version\nexport TF_CUDNN_VERSION=8.9 # print(\"cuDNN version:\", tf.sysconfig.get_build_info()[\"cudnn_version\"])\nexport CUDA_TOOLKIT_PATH=\"/usr/local/cuda\"\nexport CUDNN_INSTALL_PATH=\"/usr/lib/x86_64-linux-gnu\"\n\npython configure.py\n```\nThen build and install the pip package:\n```sh\nbazel build --enable_runfiles build_pip_pkg\nbazel-bin/build_pip_pkg artifacts\npip install artifacts/tensorflow_recommenders_addons_gpu-*.whl\n```\nTo run the unit tests:\n```sh\ncp -f ./bazel-bin/tensorflow_recommenders_addons/dynamic_embedding/core/*.so ./tensorflow_recommenders_addons/dynamic_embedding/core/\npip install pytest\npython tensorflow_recommenders_addons/tests/run_all_test.py\n# or run individual tests with pytest, e.g.\npytest -s tensorflow_recommenders_addons/dynamic_embedding/python/kernel_tests/hkv_hashtable_ops_test.py\n```\n\n#### Apple Silicon Support\nRequirements:\n\n- macOS 12.0.0+\n- tensorflow 2.15.1\n- bazel 5.1.1\n\n\n\n**Install TFRA on Apple Silicon via PyPI**\n```sh\npython -m pip install tensorflow-recommenders-addons --no-deps\n```\n\n**Build TFRA on Apple Silicon from Source**\n\n```sh\n# Install bazelisk\nbrew install bazelisk\n\n# Build wheel from source\nTF_VERSION=2.15.1 TF_NEED_CUDA=\"0\" sh .github/workflows/make_wheel_macOS_arm64.sh\n\n# Install the wheel\npython -m 
pip install --no-deps ./artifacts/*.whl\n```\n\n**Known Issues:**\n\nThe Apple silicon version of TFRA doesn't support: \n\n* Data type **float16**\n* Synchronous training based on **Horovod**\n* HierarchicalKV (HKV)\n* `save_to_file_system`\n* `load_from_file_system` \n* `warm_start_util`\n\n`save_to_file_system` and `load_from_file_system` are not supported because TFIO is not supported on Apple silicon devices. Horovod and `warm_start_util` are not supported because the natively supported tensorflow-macos doesn't support TensorFlow V1 networks.\n\nThese issues may be fixed in a future release.\n\n\n##### Data Type Matrix for `tfra.dynamic_embedding.Variable` \n\n| Values \\\\ Keys |  int64   |  int32   | string |\n|:--------------:|:--------:|:--------:|:------:| \n|     float      | CPU, GPU | CPU, GPU |  CPU   |\n|    bfloat16    | CPU, GPU |   CPU    |  CPU   |\n|      half      | CPU, GPU |    -     |  CPU   |\n|     int32      | CPU, GPU |   CPU    |  CPU   |\n|      int8      | CPU, GPU |   -      |  CPU   |\n|     int64      |   CPU    |    -     |  CPU   |\n|     double     | CPU, GPU |   CPU    |  CPU   |\n|      bool      |    -     |    -     |  CPU   |\n|     string     |   CPU    |    -     |   -    |\n\n##### To use GPUs with `tfra.dynamic_embedding.Variable`\n`tfra.dynamic_embedding.Variable` ignores the device placement mechanism of TensorFlow, so \nyou should explicitly place it on GPUs via the `devices` argument.\n\n```python\nimport tensorflow as tf\nimport tensorflow_recommenders_addons as tfra\n\nde = tfra.dynamic_embedding.get_variable(\"VariableOnGpu\",\n                                         devices=[\"/job:ps/task:0/GPU:0\", ],\n                                         # ...\n                                         )\n```\n\n**Usage restrictions on GPU**\n- Only works on NVIDIA GPUs with CUDA compute capability 6.0 or higher.\n- Considering the size of the .whl file, currently `dim` only supports values less than or equal to 200; if you need 
longer `dim`, please submit an issue.\n- Only `dynamic_embedding` APIs and related OPs support running on GPU.\n- Because GPU HashTables manage GPU memory independently, TensorFlow should be configured to allow GPU memory growth, e.g.:\n```python\nsess_config.gpu_options.allow_growth = True\n```\n\n## Inference \n\n### With TensorFlow Serving\n\n#### Compatibility Matrix\n| TFRA  | TensorFlow | Serving branch | Compiler  | CUDA | CUDNN | Compute Capability           |\n|:------|:-----------|:---------------|:----------|:-----|:------|:-----------------------------|\n| 0.8.0 | 2.16.2     | r2.16          | GCC 8.2.1 | 12.3 | 8.9   | 7.0, 7.5, 8.0, 8.6, 8.9, 9.0 |\n| 0.7.0 | 2.15.1     | r2.15          | GCC 8.2.1 | 12.2 | 8.9   | 7.0, 7.5, 8.0, 8.6, 8.9, 9.0 |\n| 0.6.0 | 2.8.3      | r2.8           | GCC 7.3.1 | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 |\n| 0.5.1 | 2.8.3      | r2.8           | GCC 7.3.1 | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 |\n| 0.5.0 | 2.8.3      | r2.8           | GCC 7.3.1 | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 |\n| 0.4.0 | 2.5.1      | r2.5           | GCC 7.3.1 | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 |\n| 0.3.1 | 2.5.1      | r2.5           | GCC 7.3.1 | 11.2 | 8.1   | 6.0, 6.1, 7.0, 7.5, 8.0, 8.6 |\n| 0.2.0 | 2.4.1      | r2.4           | GCC 7.3.1 | 11.0 | 8.0   | 6.0, 6.1, 7.0, 7.5, 8.0      |\n| 0.2.0 | 1.15.2     | r1.15          | GCC 7.3.1 | 10.0 | 7.6   | 6.0, 6.1, 7.0, 7.5           |\n| 0.1.0 | 2.4.1      | r2.4           | GCC 7.3.1 | -    | -     | -                            |\n\nServe TFRA-enabled models via custom ops in TensorFlow Serving. 
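\n\nBefore building, you can inspect the signatures of an exported SavedModel with the standard `saved_model_cli` tool that ships with TensorFlow (the model path below is a placeholder):\n\n```sh\nsaved_model_cli show --dir /path/to/exported_model/1 --all\n```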
\n \n```sh\n## If enabling GPU OPs\nexport SERVING_WITH_GPU=1 \n\n## Specify the branch of TFRA\nexport TFRA_BRANCH=\"master\" # The `master` and `r0.6` are available.\n\n## Create the workspace; modify the directory as you prefer.\nexport TFRA_SERVING_WORKSPACE=~/tfra_serving_workspace/\nmkdir -p $TFRA_SERVING_WORKSPACE \u0026\u0026 cd $TFRA_SERVING_WORKSPACE\n\n## Clone the release branches of serving and TFRA according to the `Compatibility Matrix`.\ngit clone -b r2.8 https://github.com/tensorflow/serving.git\ngit clone -b $TFRA_BRANCH https://github.com/tensorflow/recommenders-addons.git\n\n## Run the config shell script\ncd $TFRA_SERVING_WORKSPACE/recommenders-addons/tools\nbash config_tfserving.sh $TFRA_BRANCH $TFRA_SERVING_WORKSPACE/serving $SERVING_WITH_GPU\n\n## Build serving with TFRA OPs.\ncd $TFRA_SERVING_WORKSPACE/serving\n./tools/run_in_docker.sh bazel build tensorflow_serving/model_servers:tensorflow_model_server\n\n```\n\nFor more detail, please refer to the shell script `./tools/config_tfserving.sh`.\n\n**NOTICE**\n- Distributed inference is only supported when using Redis as Key-Value storage. \n- Reference documents: https://www.tensorflow.org/tfx/serving/custom_op\n\n### With Triton\nWhen building the custom operations shared library it is important to\nuse the same version of TensorFlow as is being used in Triton. You can\nfind the TensorFlow version in the [Triton Release\nNotes](https://docs.nvidia.com/deeplearning/triton-inference-server/release-notes/index.html). A\nsimple way to ensure you are using the correct version of TensorFlow\nis to use the [NGC TensorFlow\ncontainer](https://ngc.nvidia.com/catalog/containers/nvidia:tensorflow)\ncorresponding to the Triton container. 
For example, if you are using\nthe 22.05 version of Triton, use the 22.05 version of the TensorFlow\ncontainer.\n```bash\ndocker pull nvcr.io/nvidia/tritonserver:22.05-py3\n\nexport TFRA_BRANCH=\"master\"\ngit clone -b $TFRA_BRANCH https://github.com/tensorflow/recommenders-addons.git\ncd recommenders-addons\n\npython configure.py\nbazel build //tensorflow_recommenders_addons/dynamic_embedding/core:_cuckoo_hashtable_ops.so  # bazel 5.1.1 is well tested\nmkdir /tmp/so\n# Alternatively, use the .so file from the pip-installed package at \"(PYTHONPATH)/site-packages/tensorflow_recommenders_addons/dynamic_embedding/core/_cuckoo_hashtable_ops.so\"\ncp bazel-bin/tensorflow_recommenders_addons/dynamic_embedding/core/_cuckoo_hashtable_ops.so /tmp/so\n\n# TFRA saved_model directory: \"/models/model_repository\"\ndocker run --net=host -v /models/model_repository:/models nvcr.io/nvidia/tritonserver:22.05-py3 bash -c \\\n  \"LD_PRELOAD=/tmp/so/_cuckoo_hashtable_ops.so:${LD_PRELOAD} tritonserver --model-repository=/models/ --backend-config=tensorflow,version=2 --strict-model-config=false\"\n```\n\n**NOTICE**\n- The above LD_PRELOAD and backend-config must be set because the default TensorFlow backend version in Triton is 1.\n\n\n## Community\n\n* SIG Recommenders mailing list:\n[recommenders@tensorflow.org](https://groups.google.com/a/tensorflow.org/g/recommenders)\n\n## Acknowledgment\nWe are very grateful to the maintainers of [tensorflow/addons](https://github.com/tensorflow/addons), from which we borrowed a lot of code to build our workflow and documentation system.\nWe also want to extend a thank you to the Google team members who have helped with CI setup and reviews!\n\n## License\nApache License 
2.0\n\n\n\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Frecommenders-addons","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftensorflow%2Frecommenders-addons","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftensorflow%2Frecommenders-addons/lists"}