{"id":13492886,"url":"https://github.com/PaddlePaddle/PARL","last_synced_at":"2025-03-28T11:31:07.200Z","repository":{"id":37471151,"uuid":"131044128","full_name":"PaddlePaddle/PARL","owner":"PaddlePaddle","description":"A high-performance distributed training framework for Reinforcement Learning ","archived":false,"fork":false,"pushed_at":"2024-07-30T10:50:08.000Z","size":48161,"stargazers_count":3262,"open_issues_count":132,"forks_count":822,"subscribers_count":62,"default_branch":"develop","last_synced_at":"2024-10-29T11:44:37.628Z","etag":null,"topics":["large-scale","parallelization","reinforcement-learning"],"latest_commit_sha":null,"homepage":"https://parl.readthedocs.io/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/PaddlePaddle.png","metadata":{"files":{"readme":"README.cn.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-04-25T17:54:22.000Z","updated_at":"2024-10-26T16:20:24.000Z","dependencies_parsed_at":"2024-11-19T05:31:50.265Z","dependency_job_id":null,"html_url":"https://github.com/PaddlePaddle/PARL","commit_stats":{"total_commits":501,"total_committers":42,"mean_commits":"11.928571428571429","dds":0.7365269461077844,"last_synced_commit":"2b4b253e1320662d72b33626c4213c8fa3536a70"},"previous_names":[],"tags_count":9,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPARL","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPARL/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Padd
lePaddle%2FPARL/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/PaddlePaddle%2FPARL/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/PaddlePaddle","download_url":"https://codeload.github.com/PaddlePaddle/PARL/tar.gz/refs/heads/develop","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":245562956,"owners_count":20635907,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["large-scale","parallelization","reinforcement-learning"],"created_at":"2024-07-31T19:01:10.187Z","updated_at":"2025-03-28T11:31:07.193Z","avatar_url":"https://github.com/PaddlePaddle.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n\u003cimg src=\".github/PARL-logo.png\" alt=\"PARL\" width=\"500\"/\u003e\n\u003c/p\u003e\n\n[English](./README.md) | 简体中文\n\n[![Documentation Status](https://img.shields.io/badge/docs-latest-brightgreen.svg?style=flat)](https://parl.readthedocs.io/en/latest/index.html) [![Documentation Status](https://img.shields.io/badge/中文文档-最新-brightgreen.svg)](https://parl.readthedocs.io/zh_CN/latest/) [![Documentation Status](https://img.shields.io/badge/手册-中文-brightgreen.svg)](./docs/zh_CN/Overview.md) [![Release](https://img.shields.io/badge/release-v2.2.1-blue.svg)](https://github.com/PaddlePaddle/PARL/releases)\n\n\u003e PARL is a high-performance, flexible reinforcement learning framework.\n\n\u003c!-- toc --\u003e\n\n- [Overview](#overview)\n\t- [Features](#features)\n\t- [Framework Structure](#framework-structure)\n\t\t- [Model](#model)\n\t\t- [Algorithm](#algorithm)\n\t\t- [Agent](#agent)\n\t- [Simple and Efficient Parallel Interface](#simple-and-efficient-parallel-interface)\n- [Installation](#installation)\n\t- [Dependencies](#dependencies)\n- [Quick Start](#quick-start)\n- [Example Algorithms](#example-algorithms)\n- [xparl Security Notes](#xparl-security-notes)\n\t- [Security Considerations](#security-considerations)\n\n# Overview\n## Features\n**Guaranteed reproducibility.** We provide high-quality implementations of mainstream reinforcement learning algorithms that strictly reproduce the results reported in the corresponding papers.\n\n**Large-scale parallelism.** The framework supports concurrent computation on up to tens of thousands of CPUs, as well as multi-GPU training of reinforcement learning models.\n\n**High reusability.** Users do not need to re-implement algorithms themselves; by reusing the algorithms provided by the framework, they can easily apply classic reinforcement learning algorithms to concrete scenarios.\n\n**Good extensibility.** Users who want to investigate a new algorithm can quickly implement their own reinforcement learning algorithm by inheriting the base classes we provide.\n\n\n## Framework Structure\n\u003cimg src=\".github/abstractions.png\" alt=\"abstractions\" width=\"400\"/\u003e  \nPARL aims to build an agent that can accomplish complex tasks. The following are the abstractions a user needs to understand while building such an agent step by step:\n\n### Model\n`Model` defines the forward network, which is typically a policy network or a value function network whose input is the current environment state.\n\n### Algorithm\n`Algorithm` defines how the forward network (`Model`) is updated, i.e., it updates the `Model` by defining a loss function. An `Algorithm` contains at least one `Model`.\n\n### Agent\n`Agent` is responsible for the interaction between the algorithm and the environment. During the interaction it feeds the generated data to the `Algorithm` to update the `Model`; data preprocessing is usually defined here as well.\n\nTip: please visit the [tutorial](https://parl.readthedocs.io/zh_CN/latest/tutorial/getting_started.html) and the [API documentation](https://parl.readthedocs.io/zh_CN/latest/apis/model.html) for more information about these base classes.\n\n## Simple and Efficient Parallel Interface\nIn PARL, a single **decorator** (`parl.remote_class`) is all users need to implement their own parallel algorithms.\nThe following `Hello World` example shows how easily PARL can schedule external computation resources for parallel computing. Please visit our [tutorial documentation](https://parl.readthedocs.io/zh_CN/latest/parallel_training/setup.html) for more information on parallel training.\n```python\n#============Agent.py=================\nimport parl\n\n@parl.remote_class\nclass Agent(object):\n\n\tdef say_hello(self):\n\t\tprint(\"Hello World!\")\n\n\tdef sum(self, a, b):\n\t\treturn a + b\n\nparl.connect('localhost:8037')\nagent = Agent()\nagent.say_hello()\nans = agent.sum(1, 5)  # runs remotely and does not consume any local computation resources\n```\nTwo steps to schedule external computation resources:\n1. Decorate a class with `parl.remote_class`; the class is then turned into one that can run on other CPUs or machines.\n2. Call `parl.connect` to initialize parallel communication. An instance obtained this way has the same methods as the original class, but because it runs on other computation resources, executing its methods **no longer consumes the computation resources of the current thread**.\n\n\u003cimg src=\".github/decorator.png\" alt=\"PARL\" width=\"450\"/\u003e\n\nAs shown above, the real actors (orange circles) run on a CPU cluster, while the learner (blue circle) and the remote actors (yellow circles) run on the local GPU. Users can implement parallel algorithms just as if they were writing multi-threaded code, which is quite simple, yet these multi-threaded computations use external computation resources. We also provide examples of parallel algorithms; see [IMPALA](benchmark/fluid/IMPALA/) and [A2C](examples/A2C/) for more details.\n\n\n# Installation\n## Dependencies\n- Python 3.6+ (Python 3.8+ is recommended for parallel training)\n- [paddlepaddle\u003e=2.3.1](https://github.com/PaddlePaddle/Paddle) (**optional**; not required if you only use the parallel interface)\n\n\n```\npip install parl\n```\n\n[Detailed installation guide (continuously updated)](docs/installation_guide_cn.md)\n\n# Quick Start\nCheck out the following tutorials to get started with PARL quickly:\n- [Tutorial](https://parl.readthedocs.io/zh_CN/latest/tutorial/getting_started.html): solve the classic CartPole problem.\n- [xparl usage](https://parl.readthedocs.io/zh_CN/latest/parallel_training/setup.html): how to set up a cluster with `xparl` for parallel computing.\n- [Advanced tutorial](https://parl.readthedocs.io/zh_CN/latest/implementations/new_alg.html): customize a new algorithm.\n- [API documentation](https://parl.readthedocs.io/zh_CN/latest/apis/model.html)\n\nWe also provide an introductory reinforcement learning course for beginners: ([videos](https://www.bilibili.com/video/BV1yv411i7xd) | [code](examples/tutorials/))\n\n# Example Algorithms\n- [QuickStart](examples/QuickStart/)\n- [DQN](examples/DQN/)\n- [ES](examples/ES/)\n- [DDPG](examples/DDPG/)\n- [A2C](examples/A2C/)\n- [TD3](examples/TD3/)\n- [SAC](examples/SAC/)\n- [QMIX](examples/QMIX/)\n- [MADDPG](examples/MADDPG/)\n- [PPO](examples/PPO/)\n- [CQL](examples/CQL/)\n- [IMPALA](examples/IMPALA)\n- [Winning solution: NeurIPS 2018 AI for Prosthetics Challenge](examples/NeurIPS2018-AI-for-Prosthetics-Challenge/)\n- [Winning solution: NeurIPS 2019 Learn to Move Challenge](examples/NeurIPS2019-Learn-to-Move-Challenge/)\n- [Winning solution: NeurIPS 2020 Learning to Run a Power Network Challenge](examples/NeurIPS2020-Learning-to-Run-a-Power-Network-Challenge/)\n\n\u003cimg src=\"examples/NeurIPS2019-Learn-to-Move-Challenge/image/performance.gif\" width=\"300\" height=\"200\" alt=\"NeurIPS2018\"/\u003e \u003cimg src=\".github/Half-Cheetah.gif\" width=\"300\" height=\"200\" alt=\"Half-Cheetah\"/\u003e \u003cimg src=\".github/Breakout.gif\" width=\"200\" height=\"200\" alt=\"Breakout\"/\u003e\n\u003cbr\u003e\n\u003cimg src=\".github/Aircraft.gif\" width=\"808\" height=\"300\" alt=\"NeurIPS2018\"/\u003e\n\n# xparl Security Notes\n\n`xparl` provides multi-process parallelism across a multi-machine cluster, similar to Python's built-in single-machine multiprocessing. This means that once code is written on a client, arbitrary code can be executed on any machine in the cluster, e.g., fetching data from other machines or adding and deleting files.\n\nThis is by design: reinforcement learning environments vary widely, and an `env_wrapper` needs to be able to perform all kinds of operations. `xparl` implements this with `pickle` (similar to `ray`). Unlike most contexts, in which `pickle` is treated as a code-injection vulnerability, here the use of `pickle` is a feature.\n\n## Security Considerations\n\nBecause arbitrary code execution is supported, users must make sure the cluster environment is secure:\n\n- **Do not allow untrusted machines to join the cluster.**\n- **Do not give untrusted users access to the cluster; for example, do not expose `xparl` ports to the public internet.**\n- **Do not run untrusted code on the cluster.**\n\n","funding_links":[],"categories":["Uncategorized","Deep Learning Framework","Industry Strength Reinforcement Learning","强化学习","General benchmark frameworks"],"sub_categories":["Uncategorized","High-Level DL APIs"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FPaddlePaddle%2FPARL","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FPaddlePaddle%2FPARL","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FPaddlePaddle%2FPARL/lists"}