{"id":30318213,"url":"https://github.com/deepmind/trfl","last_synced_at":"2025-08-17T20:09:41.459Z","repository":{"id":33272659,"uuid":"144027556","full_name":"google-deepmind/trfl","owner":"google-deepmind","description":"TensorFlow Reinforcement Learning","archived":false,"fork":false,"pushed_at":"2022-12-08T18:07:05.000Z","size":291,"stargazers_count":3134,"open_issues_count":6,"forks_count":387,"subscribers_count":200,"default_branch":"master","last_synced_at":"2025-08-15T00:11:10.494Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/google-deepmind.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-08-08T14:44:11.000Z","updated_at":"2025-08-09T15:36:00.000Z","dependencies_parsed_at":"2023-01-15T00:16:04.682Z","dependency_job_id":null,"html_url":"https://github.com/google-deepmind/trfl","commit_stats":null,"previous_names":["google-deepmind/trfl","deepmind/trfl"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/google-deepmind/trfl","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Ftrfl","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Ftrfl/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Ftrfl/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Ftrfl/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/google-deepmind","download_url":"https://codeload.github.com/google-deepmind/tr
fl/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/google-deepmind%2Ftrfl/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":270899582,"owners_count":24664720,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-08-17T02:00:09.016Z","response_time":129,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-08-17T20:04:11.918Z","updated_at":"2025-08-17T20:09:41.450Z","avatar_url":"https://github.com/google-deepmind.png","language":"Python","funding_links":[],"categories":["Reinforcement Learning (RL) and Deep Reinforcement Learning (DRL)","Reinforcement Learning","TensorFlow Models","The Data Science Toolbox","Python","强化学习","Table of Contents","Libraries","General benchmark frameworks"],"sub_categories":["RL/DRL Algorithm Implementations and Software Frameworks","NLP","Others","Reinforcement Learning","Deep Learning Packages"],"readme":"# TRFL\n\nTRFL (pronounced \"truffle\") is a library built on top of TensorFlow that exposes\nseveral useful building blocks for implementing Reinforcement Learning agents.\n\n\n## Installation\n\nTRFL can be installed from pip with the following command:\n`pip install trfl`\n\nTRFL will work with both the CPU and GPU versions of TensorFlow, but to allow\nfor that it does not list TensorFlow as a requirement, so you need to install\nTensorFlow and 
TensorFlow Probability separately if you haven't already done so.\n\n## Usage Example\n\n```python\nimport tensorflow as tf\nimport trfl\n\n# Q-values for the previous and next timesteps, shape [batch_size, num_actions].\nq_tm1 = tf.get_variable(\n    \"q_tm1\", initializer=[[1., 1., 0.], [1., 2., 0.]], dtype=tf.float32)\nq_t = tf.get_variable(\n    \"q_t\", initializer=[[0., 1., 0.], [1., 2., 0.]], dtype=tf.float32)\n\n# Action indices, discounts and rewards, shape [batch_size].\na_tm1 = tf.constant([0, 1], dtype=tf.int32)\nr_t = tf.constant([1, 1], dtype=tf.float32)\npcont_t = tf.constant([0, 1], dtype=tf.float32)  # the discount factor\n\n# Q-learning loss, and auxiliary data.\nloss, q_learning = trfl.qlearning(q_tm1, a_tm1, r_t, pcont_t, q_t)\n```\n\n`loss` is the tensor representing the loss. For Q-learning, it is half the\nsquared difference between the predicted Q-values and the TD targets, shape\n`[batch_size]`. Extra information is in the `q_learning` namedtuple, including\n`q_learning.td_error` and `q_learning.target`.\n\nThe `loss` tensor can be differentiated to derive the corresponding RL update.\n\n```python\nreduced_loss = tf.reduce_mean(loss)\noptimizer = tf.train.AdamOptimizer(learning_rate=0.1)\ntrain_op = optimizer.minimize(reduced_loss)\n```\n\nAll loss functions in the package return both a loss tensor and a namedtuple\nwith extra information, using the above convention, but different functions\nmay have different `extra` fields. Check the documentation of each function\nbelow for more information.\n\n# Documentation\n\nCheck out the full documentation page\n[here](https://github.com/deepmind/trfl/blob/master/docs/index.md).\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepmind%2Ftrfl","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeepmind%2Ftrfl","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeepmind%2Ftrfl/lists"}