{"id":15030696,"url":"https://github.com/openrlhf/openrlhf","last_synced_at":"2026-03-08T15:03:43.613Z","repository":{"id":184754818,"uuid":"672415139","full_name":"OpenRLHF/OpenRLHF","owner":"OpenRLHF","description":"An Easy-to-use, Scalable and High-performance RLHF Framework based on Ray (PPO \u0026 GRPO \u0026 REINFORCE++ \u0026 LoRA \u0026 vLLM \u0026 RFT)","archived":false,"fork":false,"pushed_at":"2025-05-01T01:55:49.000Z","size":2695,"stargazers_count":6511,"open_issues_count":238,"forks_count":636,"subscribers_count":39,"default_branch":"main","last_synced_at":"2025-05-01T02:42:04.276Z","etag":null,"topics":["large-language-models","openai-o1","proximal-policy-optimization","raylib","reinforcement-learning","reinforcement-learning-from-human-feedback","transformers","vllm"],"latest_commit_sha":null,"homepage":"https://openrlhf.readthedocs.io/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OpenRLHF.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2023-07-30T02:20:13.000Z","updated_at":"2025-05-01T01:55:53.000Z","dependencies_parsed_at":"2024-05-05T08:25:00.096Z","dependency_job_id":"45efc570-b18d-4853-aaba-093a9f897e43","html_url":"https://github.com/OpenRLHF/OpenRLHF","commit_stats":{"total_commits":852,"total_committers":41,"mean_commits":20.78048780487805,"dds":0.5140845070422535,"last_synced_commit":"43297e5e89324b5e095dab825798b4859a9b39b4"},"previous_names":["openllmai/openllama2","openllmai/openrlhf","openrlhf/openrlhf"],"tags_count":71,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenRLHF%2FOpenRLHF","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenRLHF%2FOpenRLHF/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenRLHF%2FOpenRLHF/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenRLHF%2FOpenRLHF/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/OpenRLHF","download_url":"https://codeload.github.com/OpenRLHF/OpenRLHF/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252735745,"owners_count":21796259,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["large-language-models","openai-o1","proximal-policy-optimization","raylib","reinforcement-learning","reinforcement-learning-from-human-feedback","transformers","vllm"],"created_at":"2024-09-24T20:14:08.109Z","updated_at":"2026-03-08T15:03:43.606Z","avatar_url":"https://github.com/OpenRLHF.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n    \u003cimg alt=\"OpenRLHF logo\" src=\"./docs/logo.png\" style=\"height: 140px;\" /\u003e\n\u003c/div\u003e\n\u003cdiv align=\"center\"\u003e\n\u003cp align=\"center\"\u003e\n      \u003ca href=\"https://github.com/OpenRLHF/OpenRLHF/graphs/contributors\"\u003e\n        \u003cimg alt=\"GitHub Contributors\" src=\"https://img.shields.io/github/contributors/OpenRLHF/OpenRLHF\" /\u003e\n      \u003c/a\u003e\n      \u003ca href=\"https://github.com/OpenRLHF/OpenRLHF/issues\"\u003e\n        \u003cimg alt=\"Issues\" src=\"https://img.shields.io/github/issues/OpenRLHF/OpenRLHF?color=0088ff\" /\u003e\n      \u003c/a\u003e\n      \u003ca href=\"https://github.com/OpenRLHF/OpenRLHF/discussions\"\u003e\n        \u003cimg alt=\"Issues\" src=\"https://img.shields.io/github/discussions/OpenRLHF/OpenRLHF?color=0088ff\" /\u003e\n      \u003c/a\u003e\n      \u003ca href=\"https://github.com/OpenRLHF/OpenRLHF/pulls\"\u003e\n        \u003cimg alt=\"GitHub pull requests\" src=\"https://img.shields.io/github/issues-pr/OpenRLHF/OpenRLHF?color=0088ff\" /\u003e\n      \u003c/a\u003e\n      \u003ca href=\"https://github.com/OpenRLHF/OpenRLHF/stargazers\"\u003e\n        \u003cimg alt=\"GitHub stars\" src=\"https://img.shields.io/github/stars/OpenRLHF/OpenRLHF?color=ccf\" /\u003e\n      \u003c/a\u003e\n      \u003ca href=\"https://deepwiki.com/OpenRLHF/OpenRLHF\"\u003e\u003cimg src=\"https://deepwiki.com/badge.svg\" alt=\"Ask DeepWiki\"\u003e\u003c/a\u003e\n      \u003cbr\u003e\n      \u003cem\u003eOpen-source / Comprehensive / Lightweight / Easy-to-use\u003c/em\u003e\n    \u003c/p\u003e\n\u003c/div\u003e\n\n\u003chr\u003e\n\n\u003cspan\u003e[ English | \u003ca href=\"README_zh.md\"\u003e中文\u003c/a\u003e | \u003ca href=\"README_ja.md\"\u003e日本語\u003c/a\u003e ]\u003c/span\u003e\n\nOpenRLHF is **the first** high-performance, production-ready open-source RLHF framework that combines **Ray + vLLM distributed architecture** with a **unified agent-based design paradigm** for scalable and extensible reinforcement learning from human feedback.\n\n📚 **Learn More**: [Documentation](https://openrlhf.readthedocs.io/) | [Slides](https://docs.google.com/presentation/d/1JRhB1d7csofx0PIZBmfyBdMluxNd5JLPpUHrrvVhGnk/edit?usp=sharing) | [Technical Report](https://www.researchgate.net/publication/393414548_OpenRLHF_An_Easy-to-use_Scalable_and_High-performance_RLHF_Framework) | [Video](https://www.bilibili.com/video/BV1dv2jBxEQG/)\n\n## 📖 Table of Contents\n\n- [🗞️ News](#news)\n- [🏗️ Architecture Foundation](#architecture-foundation-ray--vllm-distribution) - Ray + vLLM + DeepSpeed distributed infrastructure\n- [🎯 Design Paradigm](#design-paradigm-agent-based-execution) - Unified agent-based execution pipeline\n- [🚀 RL Algorithms](#state-of-the-art-rl-algorithms) - PPO, REINFORCE++, GRPO, RLOO\n- [📋 Features Overview](#comprehensive-features) - Complete RLHF pipeline capabilities\n- [🎬 Quick Start](#quick-start) - Installation and typical workflow\n- [🎓 Training Guide](#supervised-fine-tuning) - SFT, Reward Model, RL Training\n- [🎯 Single-Turn Agent](#single-turn-agent-reinforced-fine-tuning-with-custom-rewards) - Custom reward functions\n- [🤖 Multi-Turn Agent](#multi-turn-agent-complex-environment-interactions) - Complex environments\n- [🔧 Advanced Topics](#advanced-topics) - LoRA, performance tuning\n\n---\n\n\u003ca id=\"news\"\u003e\u003c/a\u003e\n## News\n\n\u003cdetails\u003e\n\u003csummary\u003eShow News\u003c/summary\u003e\n\n- [2026/2] [ProRL V2](https://developer.nvidia.com/blog/scaling-llm-reinforcement-learning-with-prolonged-training-using-prorl-v2/) uses REINFORCE++-baseline to train a state-of-the-art 1.5B reasoning model with prolonged RL training. Training script: [train_prorlv2_math_hybrid_engine.sh](./examples/scripts/train_prorlv2_math_hybrid_engine.sh)\n- [2025/10] [ScaleRL](https://arxiv.org/abs/2510.13786) validates the effectiveness of REINFORCE++-baseline in large-scale training scenarios. Releases [REINFORCE++ slides](https://docs.google.com/presentation/d/1stieP_3PM1z4Hq1YWR3GywFkxcHEAlstXMaS23KlGN4)\n- [2025/6] [Magistral](https://mistral.ai/static/research/magistral.pdf) uses the method quite similar to REINFORCE++-baseline to train the reasoning models.\n- [2025/5] [MARTI](https://github.com/TsinghuaC3I/MARTI) has been released as a fork of OpenRLHF. It is designed to train LLM-based multi-agent systems using RL, by integrating centralized multi-agent interactions with distributed policy training.\n- [2025/5] OpenRLHF 0.8.0 supports async RLHF training via `--async_train` and async agent RLHF via `--agent_func_path`. See [train_reinforce_baseline_ray_agent_async.sh](./examples/scripts/train_reinforce_baseline_ray_agent_async.sh) for a runnable example.\n- [2025/4] Post the blog [Accelerating RLHF with vLLM, Best Practice from OpenRLHF](https://blog.vllm.ai/2025/04/23/openrlhf-vllm.html)\n- [2025/4] Clean OpenRLHF: Refactored the source code based on Single Controller and Unified Packing Samples\n- [2025/3] The CMU [Advanced Natural Language Processing Spring 2025](https://cmu-l3.github.io/anlp-spring2025/) course uses OpenRLHF as the RLHF framework teaching case.\n- [2025/2] [Logic-RL](https://arxiv.org/abs/2502.14768) and [PRIME](https://arxiv.org/abs/2502.01456) demonstrate that REINFORCE++ is more stable in training compared to GRPO and faster than PPO.\n- [2025/2] [LMM-R1](https://github.com/TideDra/lmm-r1) is a fork of OpenRLHF, aimed at providing high-performance RL infrastructure for reproduction of DeepSeek-R1 on multimodal tasks.\n- [2025/2] MIT \u0026 Microsoft proposed the [On the Emergence of Thinking in LLMs I: Searching for the Right Intuition](https://arxiv.org/pdf/2502.06773) using OpenRLHF\n- [2025/1] HKUST reproduced the [DeepSeek-R1-Zero and DeepSeek-R1 training on small models using OpenRLHF](https://github.com/hkust-nlp/simpleRL-reason)\n- [2024/12] We \"proposed\" 😊 the [REINFORCE++: A Simple and Efficient Approach for Aligning Large Language Models](https://www.researchgate.net/publication/387487679_REINFORCE_An_Efficient_RLHF_Algorithm_with_Robustnessto_Both_Prompt_and_Reward_Models).\n- [2024/12] We analyzed the PPO, REINFORCE++, GRPO and RLOO in the [Notion Blogpost](https://hijkzzz.notion.site/unraveling-rlhf-and-its-variants-engineering-insights#147d9a33ecc9806090f3d5c749d31f05).\n- [2023/8] OpenRLHF was open-sourced.\n\n\u003c/details\u003e\n\n---\n\n\u003ca id=\"architecture-foundation-ray--vllm-distribution\"\u003e\u003c/a\u003e\n## 🏗️ Architecture Foundation: Ray + vLLM Distribution\n\nOpenRLHF is **the first RLHF framework** built on Ray + vLLM distributed architecture, orchestrating multiple components across GPUs efficiently:\n\n\u003cdiv align=\"center\"\u003e\n  \u003cimg alt=\"OpenRLHF Architecture (Ray + vLLM)\" src=\"./docs/openrlhf_architecture.svg\" style=\"max-width: 100%; height: auto;\" /\u003e\n\u003c/div\u003e\n\n### Core Infrastructure Components\n\n**Ray - Distributed Scheduler and Controller**  \nOpenRLHF leverages [Ray](https://github.com/ray-project/ray) for efficient distributed scheduling. It separates the Actor, Reward, Reference, and Critic models across different GPUs, enabling scalable training for models up to **70B+ parameters**.\n\n**Hybrid Engine Scheduling**: All models and vLLM engines can share GPU resources—minimizing idle time and maximizing GPU utilization. This allows running full RLHF pipelines on limited hardware.\n\n**vLLM - High-Performance Inference Engine**  \nRLHF training spends **80% of the time on sample generation**. Powered by [vLLM](https://github.com/vllm-project/vllm) with Auto Tensor Parallelism (AutoTP) and Pipeline Parallelism (PP), OpenRLHF delivers high-throughput, memory-efficient generation.\n\n**DeepSpeed - Memory-Efficient Training**  \nBuilt on [DeepSpeed](https://github.com/deepspeedai/DeepSpeed) ZeRO-3, [deepcompile](https://github.com/deepspeedai/DeepSpeed/blob/master/blogs/deepcompile/README.md), [AutoTP](https://github.com/deepspeedai/DeepSpeed/blob/master/blogs/huggingface-tp/README.md), and RingAttention. Enables large model training without heavyweight frameworks while working directly with HuggingFace models.\n\n**Transformers - Model Interface**  \nNative integration with HuggingFace Transformers for seamless model loading, state management, and fine-tuning of pretrained models.\n\n**NCCL / CUDA IPC - High-Speed Communication**  \nEfficient inter-GPU communication for distributed training and inference.\n\n---\n\n\u003ca id=\"design-paradigm-agent-based-execution\"\u003e\u003c/a\u003e\n## 🎯 Design Paradigm: Agent-Based Execution\n\n**On top of the Ray distributed architecture**, OpenRLHF is **the first RLHF framework** to implement a **unified agent-based paradigm**. Every training run—whether standard PPO or complex multi-turn reasoning—follows a consistent agent execution pipeline.\n\n### Why Agent-Based?\n\nOpenRLHF **unifies generation and training through token-in-token-out agent execution**, ensuring perfect consistency, easy single/multi-turn extension, and zero text-level mismatches.\n\n### Agent Architecture\n\n```\n                 ┌─────────────────────────────┐\n                 │    AgentExecutorBase        │\n                 │  (Token-in-Token-out Core)  │\n                 └─────────────────────────────┘\n                              │\n                 ┌────────────┴────────────┐\n                 ↓                         ↓\n         SingleTurnExecutor        MultiTurnExecutor\n                 │                         │\n      ┌──────────┴──────────┐   ┌─────────┴──────────┐\n      ↓                     ↓   ↓                    ↓\n  Standard RLHF      Custom Reward   Multi-Step    External Env\n  (One-shot gen)     Function      Reasoning     (NeMo Gym)\n      ↓                     ↓           ↓                ↓\n      └─────────────────────┴───────────┴────────────────┘\n                              │\n                    Consistent Token Trajectories\n                              │\n                    ┌─────────┴─────────┐\n                    │  RL Algorithms    │\n                    │  (Decoupled)      │\n                    │                   │\n                    │  PPO, REINFORCE++ │\n                    │  GRPO, RLOO, etc. │\n                    └───────────────────┘\n```\n\n### Core Design Principles\n\n\u003cdetails\u003e\n\u003csummary\u003eShow core design principles\u003c/summary\u003e\n\n| Principle | Description | Benefit |\n|-----------|-------------|---------|\n| **Token-in-Token-out** | All sampling produces token-level trajectories | Zero text-level mismatch |\n| **Unified Interface** | Same `AgentExecutorBase` API for all modes | Switch modes with one flag |\n| **Algorithm-Agnostic** | RL algorithms (PPO, REINFORCE++, etc.) are decoupled from agent executors | Any algorithm works with any mode |\n| **Extensible** | Plug in custom rewards/environments easily | Rapid experimentation |\n| **Production-Ready** | Sync/Async/Hybrid Engine support | From research to deployment |\n\n\u003c/details\u003e\n\n### Two Execution Modes (Orthogonal to RL Algorithms)\n\nThe agent execution mode is **independent** of the RL algorithm you choose. You can use **any algorithm** (PPO, REINFORCE++, GRPO, etc.) with **any execution mode**:\n\n| Mode | Use Cases | Interface | Complexity |\n|------|-----------|-----------|------------|\n| **Single-Turn** | Standard RLHF, custom reward functions | Optional `reward_func()` | ⭐ Default (99% use cases) |\n| **Multi-Turn** | Multi-step reasoning, interactive environments | `reset()` + `step()` | ⭐⭐ Advanced |\n\n---\n\n\u003ca id=\"state-of-the-art-rl-algorithms\"\u003e\u003c/a\u003e\n## 🚀 State-of-the-Art RL Algorithms\n\nOpenRLHF implements **PPO, REINFORCE++, REINFORCE++-baseline, GRPO, RLOO** with advanced optimization tricks inspired by practical guides and community best practices. \n\n**Key Design**: RL algorithms are **decoupled from agent execution modes**. All algorithms work seamlessly with both single-turn and multi-turn agent executors, running through the unified token-in-token-out pipeline for consistent behavior.\n\n\u003cdetails\u003e\n\u003csummary\u003eShow algorithm comparison table\u003c/summary\u003e\n\n| Algorithm | `--advantage_estimator` | Key Feature | Best Use Case |\n|-----------|------------------------|-------------|---------------|\n| **PPO** | (default) | Full critic network | Stable training, proven results |\n| **REINFORCE++** | `reinforce` | PPO tricks without critic | Efficient training, less memory |\n| **REINFORCE++-baseline** | `reinforce_baseline` | Mean reward baseline | Reasoning tasks (RLVR), robust to reward scales |\n| **RLOO** | `rloo` | Per-token KL + PPO-clip | Multi-sample training |\n| **GRPO** | `group_norm` | Group normalization | Batch-based training |\n| **Dr. GRPO** | `dr_grpo` | Simplified GRPO | Removes local `/std` norm |\n\n\u003c/details\u003e\n\nReferences: [Zhihu article](https://zhuanlan.zhihu.com/p/622134699) | [Notion best practices](https://hijkzzz.notion.site/rlhf-implementation-tricks?v=158d9a33ecc98132bf9e000c39227361)\n\n---\n\n\u003ca id=\"comprehensive-features\"\u003e\u003c/a\u003e\n## 📋 Comprehensive Features\n\nOpenRLHF provides a complete RLHF pipeline with agent-based flexibility:\n\n### 🎯 Agent-Based RL Training (Core Innovation)\n\n\u003cdetails\u003e\n\u003csummary\u003eShow agent-based RL training details\u003c/summary\u003e\n\n**Single-Turn Mode** (Default - 99% of use cases)\n- One-shot generation per prompt\n- Works with all RL algorithms: [PPO](./examples/scripts/train_ppo_ray_hybrid_engine.sh), [REINFORCE++/baseline/GRPO/RLOO](./examples/scripts/train_reinforce_baseline_hybrid_engine.sh)\n- [Custom reward functions](./examples/scripts/train_ppo_with_reward_fn.sh) (`--remote_rm_url`)\n- [Hybrid Engine](./examples/scripts/train_ppo_ray_hybrid_engine.sh) for maximum GPU utilization\n\n**Multi-Turn Mode** (Advanced - Interactive tasks)\n- Multi-step interactions with environment feedback\n- Works with all RL algorithms\n- [Custom agent functions](./examples/scripts/train_reinforce_baseline_ray_agent_async.sh) (`--agent_func_path`)\n- NeMo Gym integration: see `examples/python/agent_func_nemogym_executor.py` for an agent executor that integrates NeMo Gym rollouts\n- Async pipeline (`--async_train`) for higher throughput: [train_reinforce_baseline_ray_agent_async.sh](./examples/scripts/train_reinforce_baseline_ray_agent_async.sh)\n\n\u003c/details\u003e\n\n### 🎓 Supervised Training \u0026 Preference Learning\n\n\u003cdetails\u003e\n\u003csummary\u003eShow supervised training \u0026 preference learning table\u003c/summary\u003e\n\n| Method | Script | Description |\n|--------|--------|-------------|\n| **SFT** | [train_sft.sh](./examples/scripts/train_sft.sh) | Supervised fine-tuning with packing |\n| **DPO/IPO/cDPO** | [train_dpo_llama.sh](./examples/scripts/train_dpo_llama.sh) | Direct preference optimization |\n| **KTO** | [train_kto_llama.sh](./examples/scripts/train_kto_llama.sh) | Kahneman-Tversky optimization |\n| **Iterative DPO** | [train_iterative_dpo.sh](./examples/scripts/train_iterative_dpo.sh) | Online preference learning |\n| **Reward Model** | [train_rm.sh](./examples/scripts/train_rm.sh) | Train reward models |\n| **Process RM** | [train_prm_mistral.sh](./examples/scripts/train_prm_mistral.sh) | Step-by-step reward models |\n| **Rejection Sampling** | [train_rejection_sampling_llama.sh](./examples/scripts/train_rejection_sampling_llama.sh) | Best-of-N sampling |\n| **Conditional SFT** | [train_conditional.sh](./examples/scripts/train_conditional.sh) | Quality-conditioned training |\n| **Distillation** | [train_knowledge_distillation.sh](./examples/scripts/train_knowledge_distillation.sh) | Knowledge transfer |\n\n\u003c/details\u003e\n\n### ⚡ Advanced Capabilities\n\n\u003cdetails\u003e\n\u003csummary\u003eShow advanced capabilities\u003c/summary\u003e\n\n**Efficiency Optimizations**\n- Sample packing (`--packing_samples`) for all training modes\n- vLLM acceleration (`--vllm_num_engines`) for fast generation\n- DAPO [dynamic filtering](./examples/scripts/train_dapo_ray_hybrid_engine.sh) (`--dynamic_filtering`)\n  - 🎲 Dynamic Sampling: for each prompt, generate multiple responses and **filter** them by your reward / agent **0–1 `scores`** signal\n    - Enable: `--dynamic_filtering`\n    - Score range: `--dynamic_filtering_reward_range 0.0 1.0`\n    - Requires: `--n_samples_per_prompt \u003e 1` and either `--remote_rm_url` or `--agent_func_path`\n    - Example: `./examples/scripts/train_dapo_ray_hybrid_engine.sh`\n\n**Scalability**\n- DeepSpeed AutoTP for tensor parallelism (see `--ds_tensor_parallel_size` in training scripts)\n- [RingAttention](./examples/test_scripts/train_dpo_ring_llama.sh) for long context (`--ring_attn_size`)\n- Multi-node training with [SLURM](./examples/scripts/train_ppo_ray_slurm.sh)\n\n**Model Support**\n- [LoRA/QLoRA](./examples/scripts/train_sft_mixtral_lora.sh) (`--lora_rank`, `--load_in_4bit`)\n- [Mixture of Experts (MoE)](./examples/test_scripts/train_sft_moe.sh) (`--aux_loss_coef`)\n- FlashAttention (`--attn_implementation`)\n- HuggingFace chat templates (`--apply_chat_template`)\n\n**Production Features**\n- Wandb (`--use_wandb`) and TensorBoard (`--use_tensorboard`) logging\n- Checkpoint recovery (`--load_checkpoint`, `--save_steps`)\n- Evaluation datasets (`--eval_dataset`)\n\n\u003c/details\u003e\n\n---\n\n\u003ca id=\"quick-start\"\u003e\u003c/a\u003e\n## 🎬 Quick Start\n\n### Installation\n\n**Recommended**: Use Docker for hassle-free setup\n\n```bash\n# 1. Launch Docker container\ndocker run --runtime=nvidia -it --rm --shm-size=\"10g\" --cap-add=SYS_ADMIN \\\n  -v $PWD:/openrlhf nvcr.io/nvidia/pytorch:25.11-py3 bash\n\n# 2. Clean conflicting packages\nsudo pip uninstall xgboost transformer_engine flash_attn pynvml -y\n\n# 3. Install OpenRLHF (choose one)\npip install openrlhf                    # Basic\npip install openrlhf[vllm]              # + vLLM 0.17.0 (recommended)\npip install openrlhf[vllm_latest]       # + Latest vLLM\npip install openrlhf[vllm,ring,liger]   # + All optimizations\n```\n\n**Alternative: Install from source**\n\n```bash\ngit clone https://github.com/OpenRLHF/OpenRLHF.git\ncd OpenRLHF\npip install -e .\n```\n\n\u003e [!TIP]\n\u003e We recommend **vLLM 0.17.0+** for best performance. See [Dockerfiles](./dockerfile/) and [Nvidia-Docker Install Script](./examples/scripts/nvidia_docker_install.sh).\n\n### Prepare Datasets\n\nOpenRLHF provides flexible data processing methods:\n\n**Key Parameters**:\n- `--input_key`: Specify JSON key name for input data\n- `--apply_chat_template`: Use HuggingFace tokenizer's [chat template](https://huggingface.co/docs/transformers/main/en/chat_templating)\n- `--input_template`: Custom template string (alternative to chat template)\n- `--prompt_data_probs` / `--dataset_probs`: Mix multiple datasets (e.g., `0.1,0.4,0.5`)\n- `--eval_dataset`: Specify evaluation dataset path\n\n**Chat Template Example**:\n\n```python\ndataset = [{\"input_key\": [\n  {\"role\": \"user\", \"content\": \"Hello, how are you?\"},\n  {\"role\": \"assistant\", \"content\": \"I'm doing great. How can I help you today?\"},\n  {\"role\": \"user\", \"content\": \"I'd like to show off how chat templating works!\"},\n]}]\n\ntokenizer.apply_chat_template(dataset[0][\"input_key\"], tokenize=False)\n# Output: \"\u003cs\u003e[INST] Hello, how are you? [/INST]I'm doing great...\u003c/s\u003e [INST] I'd like to show off... [/INST]\"\n```\n\n\u003e [!NOTE]\n\u003e JSON key options vary by dataset type. See [Reward Dataset](https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/datasets/reward_dataset.py#L10), [SFT Dataset](https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/datasets/sft_dataset.py#L9), and [Prompt Dataset](https://github.com/OpenRLHF/OpenRLHF/blob/main/openrlhf/datasets/prompts_dataset.py#L6)\n\n\u003ca id=\"supervised-fine-tuning\"\u003e\u003c/a\u003e\n### Supervised Fine-tuning\n\nOpenRLHF's model checkpoint is fully compatible with HuggingFace models. You can specify the model name or path using `--pretrain  {name or path}`, `--reward_pretrain  {name or path}` and `--critic_pretrain  {name or path}`. We have provided some pre-trained checkpoints and datasets on [HuggingFace OpenRLHF](https://huggingface.co/OpenRLHF).\n\nThen you can use the startup scripts we provide in the [examples/scripts](./examples/scripts/) directory, or start the training using the following commands.\n\n\u003cdetails\u003e\n\u003csummary\u003eSFT command\u003c/summary\u003e\n\n```bash\ndeepspeed --module openrlhf.cli.train_sft \\\n   --max_len 4096 \\\n   --dataset Open-Orca/OpenOrca \\\n   --input_key question \\\n   --output_key response \\\n   --input_template $'User: {}\\nAssistant: ' \\\n   --train_batch_size 256 \\\n   --micro_train_batch_size 2 \\\n   --max_samples 500000 \\\n   --pretrain meta-llama/Meta-Llama-3-8B \\\n   --save_path ./checkpoint/llama3-8b-sft \\\n   --save_steps -1 \\\n   --logging_steps 1 \\\n   --eval_steps -1 \\\n   --zero_stage 2 \\\n   --max_epochs 1 \\\n   --packing_samples \\\n   --param_dtype bf16 \\\n   --learning_rate 5e-6 \\\n   --gradient_checkpointing \\\n   --use_wandb {wandb_token}\n\n# Additional options:\n# --apply_chat_template                # Use HF tokenizer chat template\n# --ring_attn_size 2                   # Enable RingAttention (install ring_flash_attn first)\n# --multiturn                          # Multi-turn fine-tuning loss\n# --pretrain_mode                      # Continued pre-training mode\n```\n\n\u003c/details\u003e\n\n\n### Reward Model Training\n\n\u003cdetails\u003e\n\u003csummary\u003eReward model training command\u003c/summary\u003e\n\n```bash\ndeepspeed --module openrlhf.cli.train_rm \\\n   --save_path ./checkpoint/llama3-8b-rm \\\n   --save_steps -1 \\\n   --logging_steps 1 \\\n   --eval_steps -1 \\\n   --train_batch_size 256 \\\n   --micro_train_batch_size 1 \\\n   --pretrain OpenRLHF/Llama-3-8b-sft-mixture \\\n   --param_dtype bf16 \\\n   --max_epochs 1 \\\n   --max_len 8192 \\\n   --zero_stage 3 \\\n   --learning_rate 9e-6 \\\n   --dataset OpenRLHF/preference_dataset_mixture2_and_safe_pku \\\n   --apply_chat_template \\\n   --chosen_key chosen \\\n   --rejected_key rejected \\\n   --packing_samples \\\n   --gradient_checkpointing \\\n   --use_wandb {wandb_token}\n\n```\n\n\u003c/details\u003e\n\nIt is recommended to set the `--value_prefix_head` option of the Reward Model to `score`, so that we can load the model using `AutoModelForSequenceClassification`:\n\n```python\nreward_model = AutoModelForSequenceClassification.from_pretrained(\n              reward_model_path,\n              num_labels=1,\n              torch_dtype=torch.bfloat16,\n              attn_implementation=\"flash_attention_2\",\n              use_cache=False,\n          )\ninputs = xxxx (Left Padding Input Tokens)\nreward = reward_model.model(*inputs).last_hidden_state\nreward = reward_model.score(reward)[:, -1]\n```\n\n### RL Training: PPO/REINFORCE++ with Ray and vLLM\n\nAll RL training in OpenRLHF runs through the **agent execution pipeline**. The following example shows single-turn agent execution (default mode) with Hybrid Engine for optimal performance:\n\n```bash\n# launch the master node of ray in container\nray start --head --node-ip-address 0.0.0.0 --num-gpus 8\n\n# if you want to launch ray on more nodes, use\nray start --address {MASTER-NODE-ADDRESS}:6379  --num-gpus 8\n\nray job submit --address=\"http://127.0.0.1:8265\" \\\n   --runtime-env-json='{\"working_dir\": \"/openrlhf\"}' \\\n   -- python3 -m openrlhf.cli.train_ppo_ray \\\n   --ref_num_nodes 1 \\\n   --ref_num_gpus_per_node 8 \\\n   --reward_num_nodes 1 \\\n   --reward_num_gpus_per_node 8 \\\n   --critic_num_nodes 1 \\\n   --critic_num_gpus_per_node 8 \\\n   --actor_num_nodes 1 \\\n   --actor_num_gpus_per_node 8 \\\n   --vllm_num_engines 4 \\\n   --vllm_tensor_parallel_size 2 \\\n   --colocate_all_models \\\n   --vllm_gpu_memory_utilization 0.5 \\\n   --pretrain OpenRLHF/Llama-3-8b-sft-mixture \\\n   --reward_pretrain OpenRLHF/Llama-3-8b-rm-700k \\\n   --save_path /openrlhf/examples/test_scripts/final/llama3-8b-rlhf \\\n   --ckpt_path /openrlhf/examples/test_scripts/ckpt/llama3-8b-rlhf \\\n   --save_hf_ckpt \\\n   --train_batch_size 128 \\\n   --rollout_batch_size 1024 \\\n   --use_dynamic_batch \\\n   --n_samples_per_prompt 1 \\\n   --max_epochs 1 \\\n   --prompt_max_len 1024 \\\n   --max_samples 100000 \\\n   --generate_max_len 1024 \\\n   --zero_stage 3 \\\n   --param_dtype bf16 \\\n   --actor_learning_rate 5e-7 \\\n   --critic_learning_rate 9e-6 \\\n   --init_kl_coef 0.01 \\\n   --prompt_data OpenRLHF/prompt-collection-v0.1 \\\n   --input_key context_messages \\\n   --apply_chat_template \\\n   --normalize_reward \\\n   --gradient_checkpointing \\\n   --packing_samples \\\n   --vllm_sync_backend nccl \\\n   --enforce_eager \\\n   --vllm_enable_sleep \\\n   --deepspeed_enable_sleep \\\n   --use_wandb {wandb_token}\n\n# Algorithm Variants (all use single-turn agent execution):\n# --advantage_estimator reinforce        # REINFORCE++\n# --advantage_estimator rloo             # RLOO\n# --advantage_estimator reinforce_baseline  # REINFORCE++-baseline (best for RLVR)\n# --advantage_estimator group_norm       # GRPO\n# --advantage_estimator dr_grpo          # Dr. GRPO\n\n# Advanced Options:\n# --init_kl_coef 0                      # No reference model\n# --remote_rm_url http://host:5000/get_reward  # HTTP reward model\n# --n_samples_per_prompt 4              # Multiple samples per prompt\n# --enable_vllm_is_correction           # TIS (vLLM importance sampling correction) for off-policy rollouts (PPO only)\n# --vllm_is_truncated_threshold 0.5 5.0 # TIS truncation interval: [low, high]\n# --use_icepop                          # ICEPOP: set coefficients outside [low, high] to 0 (instead of clamp)\n```\n\n\u003e [!TIP]\n\u003e **For reasoning tasks (RLVR)**: Use `--advantage_estimator reinforce_baseline` for REINFORCE++-baseline—it's robust to different reward scales.\n\n\u003e [!NOTE]\n\u003e **Ray Environment Setup**: Let Ray auto-deploy with `--runtime-env-json='{\"setup_commands\": [\"pip install openrlhf[vllm]\"]}'`\n\n\u003e [!NOTE]\n\u003e **Troubleshooting GPU index errors**: Set `export RAY_EXPERIMENTAL_NOSET_CUDA_VISIBLE_DEVICES=1` if you encounter DeepSpeed GPU device setup issues.\n\n📚 **More Examples**: See [examples/scripts](./examples/scripts/) and [Documentation](https://openrlhf.readthedocs.io/en/latest/usage.html)\n\n---\n\n\u003ca id=\"single-turn-agent-reinforced-fine-tuning-with-custom-rewards\"\u003e\u003c/a\u003e\n## 🎯 Single-Turn Agent: Reinforced Fine-tuning with Custom Rewards\n\nThe **single-turn agent execution** (default mode) supports custom reward functions—perfect for reinforced fine-tuning without a trained reward model. Instead of using a pre-trained reward model, you provide a Python function that computes rewards on-the-fly.\n\n**Ideal for**:\n- Rule-based rewards (length, format, code execution, math verification)\n- External API rewards (judge models, compilers, test suites)\n- Hybrid rewards (combining multiple signals)\n\n### Example: Custom Reward Function\n\n```python\n# reward_func.py\nimport torch\n\ndef reward_func(queries, prompts, labels):\n    \"\"\"\n    Compute custom rewards for generated responses.\n    \n    Args:\n        queries: List[str] - Full text (prompt + response)\n        prompts: List[str] - Original prompts only\n        labels: List[str] - Ground truth labels (from --label_key)\n    \n    Returns:\n        dict with:\n            - rewards: Tensor for advantage calculation\n            - scores: Tensor for dynamic filtering (0-1 range)\n            - extra_logs: Dict for wandb logging\n    \"\"\"\n    batch_size = len(queries)\n    \n    # Example: Random rewards (replace with your logic)\n    # Real examples: code execution, math verification, format checking\n    reward = torch.randint(0, 2, (batch_size,)).float()\n\n    return {\n        \"rewards\": reward,           # Used in RL advantage calculation\n        \"scores\": reward,            # Used for dynamic filtering (--dynamic_filtering)\n        \"extra_logs\": {              # Logged to wandb\n            \"custom_metric\": reward.mean().item(),\n        },\n    }\n```\n\n### Usage\n\n```bash\nray job submit --address=\"http://127.0.0.1:8265\" \\\n  --runtime-env-json='{\"working_dir\": \"/openrlhf\"}' \\\n  -- python3 -m openrlhf.cli.train_ppo_ray \\\n  --pretrain meta-llama/Meta-Llama-3-8B \\\n  --use_dynamic_batch \\\n  --remote_rm_url /path/to/reward_func.py \\\n  --label_key answer \\\n  --prompt_data your_prompt_dataset \\\n  ... # other training args\n```\n\n**Key Parameter**: `--label_key answer` passes the \"answer\" field from your dataset to `reward_func` as `labels`.\n\n\u003e [!TIP]\n\u003e **Use Cases**: Code generation (execute tests), Math (verify solutions), Formatting (check structure), Multi-objective (combine multiple signals)\n\n📖 **Full Example**: [examples/scripts/train_ppo_with_reward_fn.sh](./examples/scripts/train_ppo_with_reward_fn.sh)\n\n---\n\n\u003ca id=\"multi-turn-agent-complex-environment-interactions\"\u003e\u003c/a\u003e\n## 🤖 Multi-Turn Agent: Complex Environment Interactions\n\nFor tasks requiring **multi-step interactions** (reasoning chains, coding with feedback, game playing), OpenRLHF provides the **Multi-Turn Agent Execution** mode.\n\n### Building Custom Multi-Turn Agents\n\nImplement `AgentInstanceBase` with `reset/step` methods:\n\n```python\n# agent_func.py\nimport random\nfrom typing import Any, Dict\n\nimport torch\nfrom openrlhf.utils.agent import AgentInstanceBase, MultiTurnAgentExecutor\n\n\n# A simple n-step random environment\nclass AgentInstance(AgentInstanceBase):\n    async def __init__(self, *args, **kwargs):\n        self.step_idx = 0\n        self.max_steps = random.randint(1, 3)  # 1-3 steps\n\n    async def reset(self, states: dict, **kwargs):\n        return {\"observation\": states[\"observation\"]}  # Return original text observation\n\n    async def step(self, states: dict, **kwargs) -\u003e Dict[str, Any]:\n        print(f\"step_idx: {self.step_idx}, max_steps: {self.max_steps}\")\n\n        observation_text = states[\"observation_text\"]\n        action_text = states[\"action_text\"]\n        label = states[\"label\"]\n\n        # Check if episode is done\n        done = self.step_idx \u003e= self.max_steps\n        reward = torch.randint(0, 2, (1,)).float() if done else torch.tensor(0)\n\n        # Generate environment feedback based on whether episode is done\n        environment_feedback = (\n            \"\\n\\nHuman: [CORRECT]\\n\u003c/s\u003e\"\n            if done\n            else \"\\n\\nHuman: [INCORRECT]\\nPlease analyze the issues and try again.\\n\u003c/s\u003e\\n\\nAssistant: \"\n        )\n\n        self.step_idx += 1\n\n        return {\n            \"rewards\": reward,  # Rewards for advantage calculation\n            \"scores\": reward,  # Scores for dynamic filtering (0-1 reward)\n            \"environment_feedback\": environment_feedback,  # Environment feedback text\n            \"done\": done,  # Boolean indicating if the episode is complete\n            \"sampling_params\": states.get(\"sampling_params\", None),  # Parameters for vLLM sampling in next step\n            \"extra_logs\": {\"dummy_scores\": reward},  # Additional logging information\n        }\n\n\nclass AgentExecutor(MultiTurnAgentExecutor):\n    def __init__(self):\n        super().__init__(AgentInstance)\n```\n\nThen launch with:\n\n```bash\nray job submit --address=\"http://127.0.0.1:8265\" \\\n  --runtime-env-json='{\"working_dir\": \"/openrlhf\"}' \\\n  -- python3 -m openrlhf.cli.train_ppo_ray \\\n  ...\n  --use_dynamic_batch \\\n  --agent_func_path /path/to/agent_func.py \\\n  --async_train  # Optional: enable async pipeline\n```\n\n### Configuration Options\n\n**Async Pipeline** (for higher throughput):\n- Enable: `--async_train`\n- Buffer size: `--async_queue_size 1` (larger = more off-policy, default 1)\n\n**Training Modes**:\n- **Synchronous**: Default, better stability\n- **Asynchronous**: Higher throughput, may affect convergence\n- **Hybrid Engine**: Best GPU utilization with `--colocate_all_models` (remove `--async_train`)\n\n\u003e [!NOTE]\n\u003e For fully custom token-level execution, inherit `AgentExecutorBase` and implement `execute()`. This design enforces the **token-in-token-out principle** to keep sampling and training consistent.\n\n\u003e [!WARNING] \n\u003e Asynchronous training may affect training stability. Use it only when throughput is critical and convergence is validated.\n\n📚 **Examples**:\n- Single-turn: [train_ppo_ray_hybrid_engine.sh](./examples/scripts/train_ppo_ray_hybrid_engine.sh)\n- Custom reward: [train_ppo_with_reward_fn.sh](./examples/scripts/train_ppo_with_reward_fn.sh)\n- Multi-turn: [train_reinforce_baseline_ray_agent_async.sh](./examples/scripts/train_reinforce_baseline_ray_agent_async.sh)\n- NeMo Gym: `examples/python/agent_func_nemogym_executor.py`\n\n---\n\n\u003ca id=\"advanced-topics\"\u003e\u003c/a\u003e\n## 🔧 Advanced Topics\n\n### LoRA: Merging Adapters\n\nWhen using LoRA/QLoRA, OpenRLHF saves only the adapter weights. To deploy or continue training, merge the adapter with the base model:\n\n```bash\npython -m openrlhf.cli.lora_combiner \\\n    --model_path meta-llama/Meta-Llama-3-8B \\\n    --lora_path ./checkpoint/llama3-8b-rm \\\n    --output_path ./checkpoint/llama-3-8b-rm-combined \\\n    --is_rm \\\n    --param_dtype bf16\n```\n\n### Performance Tuning Guide\n\nOptimize OpenRLHF for your hardware and workload with these recommendations:\n\n#### 🎯 Resource Allocation (Distributed Mode)\n\n**Recommended ratio**: `vLLM : Actor : Critic = 1:1:1`\n\n```bash\n# Example: 70B model on 48 A100 GPUs\n# - 16 GPUs → vLLM Engine\n# - 16 GPUs → Actor\n# - 16 GPUs → Critic\n```\n\n#### ⚡ Speed Optimizations\n\n| Optimization | Flag | When to Use |\n|--------------|------|-------------|\n| **Hybrid Engine** | `--colocate_all_models`\u003cbr\u003e`--vllm_enable_sleep`\u003cbr\u003e`--deepspeed_enable_sleep` | Sufficient GPU memory |\n| **Async Training** | `--async_train` | Convergence validated, need throughput |\n| **Sample Packing** | `--packing_samples` | Always (especially training) |\n| **DeepCompile** | `--deepcompile` | PyTorch 2.0+ |\n| **Overlap Comm** | `--overlap_comm` | Sufficient GPU memory |\n| **Dynamic Batch** | `--use_dynamic_batch` | Variable sequence lengths |\n| **Prefix Caching** | vLLM config | `n_samples_per_prompt` \u003e 1 |\n\n#### 💾 Memory Management\n\n**When you have enough memory**:\n- ✅ Disable `--adam_offload`\n- ✅ Enable `--overlap_comm`\n- ✅ Use `--colocate_critic_reward` and `--colocate_actor_ref`\n\n**When hitting OOM**:\n- ❌ Disable all `--colocate_*` options\n- ✅ Reduce batch sizes\n- ✅ Enable gradient checkpointing\n\n#### 🎮 Batch Size Tuning\n\n1. **Generation Phase**: Maximize `--micro_rollout_batch_size`, minimize vLLM TP size\n2. **Training Phase**: Maximize `--micro_train_batch_size`, enable `--packing_samples`\n3. **vLLM**: Always use `--vllm_sync_backend nccl`\n\n\u003e [!TIP]\n\u003e **Quick Start Template**: For 8x A100 (80GB), try Hybrid Engine + `--vllm_gpu_memory_utilization 0.5` + `--colocate_all_models`\n\n📖 **More Details**: [Performance Tuning Documentation](https://openrlhf.readthedocs.io/en/latest/performance.html)\n\n\n## Companies and Organizations using OpenRLHF\n\n- Google\n- ByteDance\n- Tencent\n- Alibaba\n- Baidu\n- China Telecom\n- Vivo\n- Allen AI\n- NexusFlow\n- Jülich Supercomputing Centre (JSC)\n- Berkeley Starling Team\n- M-A-P\n- ...\n\n## Join Us\n\n**How to Join?**\n\n1. Email us at janhu9527@gmail.com or join [GitHub Organization](https://github.com/OpenRLHF). Please include the following details:\n   - Your name\n   - Your GitHub username\n   - Your areas of interest\n   - Your skills and experience related to NLP and/or AI\n1. You can also join us through the official GitHub [OpenRLHF ↗](https://github.com/OpenRLHF/OpenRLHF) project page. Just create an issue about your interest to contribute and we will get back to you.\n\n**What can you do?**\n\n1. Join the team and participate in the development of the OpenRLHF project.\n1. Contribute to the project by submitting pull requests.\n1. Help improve documentation, fix bugs, or create new features.\n1. Share the project and help us grow the community.\n\n## Sponsor Us\n\nYour sponsorship can help us maintain and improve OpenRLHF. If you find this project useful, please consider sponsoring us. You can sponsor us on [Open Collective ↗](https://opencollective.com/OpenRLHF).\n\n## Starchart\n\n[![Star History Chart](https://api.star-history.com/svg?repos=OpenRLHF/OpenRLHF\u0026type=Date)](https://star-history.com/#OpenRLHF/OpenRLHF\u0026Date)\n\n## Contributors\n\nA big thank you to all our contributors! If you want to contribute, feel free to make a pull request or create an issue.\n\n\u003ca href=\"https://github.com/OpenRLHF/OpenRLHF/graphs/contributors\"\u003e\n  \u003cimg src=\"https://contrib.rocks/image?repo=OpenRLHF/OpenRLHF\" /\u003e\n\u003c/a\u003e\n\n## References \u0026 Acknowledgements\n\nWe would like to express our gratitude to the following projects and organizations for their contributions to the field of AI and NLP:\n\n- [Hugging Face Transformers ↗](https://github.com/huggingface/transformers)\n- [OpenAI GPT ↗](https://github.com/openai/gpt-3)\n- [LLaMA ↗](https://llama.meta.com/)\n- [DeepSpeed ↗](https://github.com/microsoft/DeepSpeed)\n- [Ray ↗](https://github.com/ray-project/ray)\n\nOur project would also like to thank [ColossalChat](https://github.com/hpcaitech/ColossalAI/tree/main/applications/ColossalChat) and [DeepSpeedChat](https://github.com/microsoft/DeepSpeedExamples/tree/master/applications/DeepSpeed-Chat). In the early stages of the project, we referred to their code design. \nOur project would like to thank [Netmind.AI](https://www.netmind.ai/) for the GPU support of developing ring attention.\n\n(2024/7) Our GitHub organization has changed from OpenLLMAI to OpenRLHF.\n\n## Citation\nOpenRLHF\n\n```\n@article{hu2024openrlhf,\n  title={OpenRLHF: An Easy-to-use, Scalable and High-performance RLHF Framework},\n  author={Jian Hu and Xibin Wu and Zilin Zhu and Xianyu and Weixun Wang and Dehao Zhang and Yu Cao},\n  journal={arXiv preprint arXiv:2405.11143},\n  year={2024}\n}\n```\nREINFORCE++-baseline\n```\n@article{hu2025reinforce++,\n  title={Reinforce++: A simple and efficient approach for aligning large language models},\n  author={Hu, Jian},\n  journal={arXiv preprint arXiv:2501.03262},\n  year={2025}\n}\n```\n\n______________________________________________________________________\n\n*OpenRLHF © 2025 OpenRLHF. All Rights Reserved.*\n","funding_links":["https://opencollective.com/OpenRLHF"],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenrlhf%2Fopenrlhf","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopenrlhf%2Fopenrlhf","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopenrlhf%2Fopenrlhf/lists"}