{"id":14701428,"url":"https://github.com/alibaba/ChatLearn","last_synced_at":"2025-09-10T09:31:03.157Z","repository":{"id":255133967,"uuid":"679077548","full_name":"alibaba/ChatLearn","owner":"alibaba","description":"A flexible and efficient training framework for large-scale alignment tasks","archived":false,"fork":false,"pushed_at":"2025-09-02T10:15:14.000Z","size":5450,"stargazers_count":416,"open_issues_count":23,"forks_count":35,"subscribers_count":19,"default_branch":"main","last_synced_at":"2025-09-02T12:18:55.116Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/alibaba.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2023-08-16T03:51:28.000Z","updated_at":"2025-09-02T10:15:18.000Z","dependencies_parsed_at":"2024-08-28T07:40:09.009Z","dependency_job_id":"6d7f03b6-9d3b-49c5-b94f-5c04fb24e367","html_url":"https://github.com/alibaba/ChatLearn","commit_stats":null,"previous_names":["alibaba/chatlearn"],"tags_count":7,"template":false,"template_full_name":null,"purl":"pkg:github/alibaba/ChatLearn","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/alibaba%2FChatLearn","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/alibaba%2FChatLearn/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/alibaba%2FChatLearn/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/alibaba%2FChatLearn/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/alibaba","download_url":"https://codeload.github.com/alibaba/ChatLearn/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/alibaba%2FChatLearn/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":274440639,"owners_count":25285735,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-10T02:00:12.551Z","response_time":83,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-09-13T12:00:48.147Z","updated_at":"2025-09-10T09:31:03.139Z","avatar_url":"https://github.com/alibaba.png","language":"Python","readme":"\u003cp align=\"center\"\u003e\n  \u003cpicture\u003e\n    \u003cimg alt=\"ChatLearn\" src=\"docs/images/logo.jpg\" width=30%\u003e\n  \u003c/picture\u003e\n\u003c/p\u003e\n\n\u003ch3 align=\"center\"\u003e\nA flexible and 
efficient reinforcement learning framework for large language models (LLMs).
</h3>

<p align="center">
  <a href="https://chatlearn.readthedocs.io/en/latest/">
    <img src="https://img.shields.io/badge/docs-latest-brightgreen.svg" alt="docs">
  </a>
  <a href="https://github.com/alibaba/ChatLearn/blob/main/LICENSE">
    <img src="https://img.shields.io/badge/License-Apache%202.0-blue.svg" alt="License">
  </a>
</p>

<p align="center">
  &nbsp;English&nbsp; | <a href="README_CN.md"> 中文 </a>&nbsp;
</p>

---

*Latest News* 🔥
- [2025/8] We support GSPO on [Mcore](scripts/train_mcore_vllm_qwen3_30b_gspo.sh)! 🔥
- [2025/7] We provide a reinforcement learning training example for DeepSeek-V3-671B based on [Mcore](scripts/train_mcore_vllm_deepseek_v3_671b_grpo.sh)! 🔥
- [2025/7] We provide reinforcement learning training examples for Qwen3-235B-A22B based on [Mcore](scripts/train_mcore_vllm_qwen3_235b_grpo.sh) and [FSDP2](scripts/train_fsdp_vllm_qwen3_235b_a22b_grpo.sh)! 🔥
- [2025/7] Training now supports the FSDP2 framework! We support sequence packing, sequence parallelism, and group GEMM for efficient and user-friendly reinforcement learning training! 🔥
- [2025/5] We support the Mcore framework for training! Using Mcore and vLLM, we provide a [tutorial](docs/en/tutorial/tutorial_grpo_mcore.md) on end-to-end GRPO training for Qwen3!
- [2025/5] We support the FSDP framework for training! Using FSDP and vLLM, we provide a [tutorial](docs/en/tutorial/tutorial_grpo_fsdp.md) on end-to-end GRPO training for Qwen3!
- [2024/8] We officially released ChatLearn! Check out our [documentation](docs/en/chatlearn.md).

---

ChatLearn is a large-scale reinforcement learning training framework for LLMs developed by the Alibaba Cloud PAI platform.

![RLHF Flow](docs/images/rlhf.png)

ChatLearn has the following advantages:
1. 🚀**User-friendly programming interface**: Users can focus on programming individual models by wrapping a few functions, while the system takes care of resource scheduling, data and control flow transmission, and distributed execution.
2. 🔧**Highly scalable training methodology**: ChatLearn supports user-defined model execution flows, making customized training processes more flexible and convenient.
3. 🔄**Diverse distributed acceleration engines**: ChatLearn supports industry-leading SOTA training engines (FSDP2, Megatron) and inference engines (vLLM, SGLang), delivering exceptional training throughput.
4. 🎯**Flexible parallel strategies and resource allocation**: ChatLearn supports a distinct parallel strategy per model, tailored to each model's computational, memory, and communication characteristics. It also features a flexible resource scheduling mechanism that allows resources to be used exclusively by, or shared across, models; its scheduling policies enable efficient serial/parallel execution and optimized GPU memory sharing, improving overall performance and efficiency.
5. ⚡**High performance**: Compared to current SOTA systems, ChatLearn achieves a 52% performance improvement at the 7B+7B (Policy+Reward) scale and a 137% performance improvement at the 70B+70B scale. ChatLearn also supports reinforcement learning training at scales exceeding 600B parameters.
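
Most of the tutorials and scripts referenced in this README train with GRPO. Its central idea is to replace a learned value model with a group baseline: each prompt is sampled several times, and every rollout is scored relative to the other rollouts in its group. Below is a minimal sketch of that advantage computation — an illustration of the published algorithm, not ChatLearn's actual code:

```python
def grpo_advantages(group_rewards):
    """Group-relative advantages: normalize each rollout's scalar reward by
    the mean and standard deviation of its own group, i.e. all rollouts
    sampled for the same prompt. No value model is needed."""
    n = len(group_rewards)
    mean = sum(group_rewards) / n
    var = sum((r - mean) ** 2 for r in group_rewards) / n
    std = var ** 0.5
    # Small epsilon guards against a group where every reward is identical.
    return [(r - mean) / (std + 1e-6) for r in group_rewards]

# Four rollouts for one prompt, rewarded 1/0/0/1 by a verifier:
print(grpo_advantages([1.0, 0.0, 0.0, 1.0]))  # ~[1.0, -1.0, -1.0, 1.0]
```

GSPO, also supported, keeps the same group-relative reward but computes the policy importance ratio at the sequence level rather than per token.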

# Quick Start

Please refer to the [documentation](https://chatlearn.readthedocs.io/zh-cn/latest/) for a quick start.

1. [Environment and Code Setup](docs/en/installation.md)
2. [End-to-End GRPO Training Pipeline for Qwen3 Models Using FSDP + vLLM](docs/en/tutorial/tutorial_grpo_fsdp.md)
3. [End-to-End GRPO Training Pipeline for Qwen3 Models Using Megatron + vLLM](docs/en/tutorial/tutorial_grpo_mcore.md)

## Feature List

- Supports training engines such as [Megatron](https://github.com/alibaba/ChatLearn/blob/main/scripts/train_mcore_vllm_qwen3_8b_grpo.sh) and [FSDP](https://github.com/alibaba/ChatLearn/blob/main/scripts/train_fsdp_vllm_qwen3_8b_grpo.sh)
- Supports inference engines including vLLM and SGLang, selected via the `runtime_args.rollout_engine` parameter
- Supports reinforcement learning algorithms such as GRPO and [GSPO](https://github.com/alibaba/ChatLearn/blob/main/scripts/train_mcore_vllm_qwen3_30b_gspo.sh)
- Supports experiment monitoring with wandb and TensorBoard
- Supports training acceleration techniques such as [sequence packing](https://github.com/alibaba/ChatLearn/blob/main/scripts/train_fsdp_vllm_qwen3_8b_grpo.sh) (a minimal sketch of the idea appears in the appendix at the end of this README), Ulysses sequence parallelism, and [Group GEMM](https://github.com/alibaba/ChatLearn/blob/main/scripts/train_fsdp_vllm_qwen3_30b_a3b_grpo.sh)

# Performance

We compared the RLHF training throughput of models at different parameter scales, using an N+N configuration in which the Policy model and the Reward model have the same number of parameters. We benchmarked against DeepSpeed-Chat and OpenRLHF with 7B and 70B model configurations. For the 8-GPU setup at the 7B+7B scale, we achieved a 115% speedup; for the 32-GPU setup at the 70B+70B scale, the speedup was 208%. The larger the scale, the more pronounced the acceleration. Additionally, ChatLearn can support even larger-scale reinforcement learning, such as at the 600B scale.

![Compare Performance](docs/images/perf.png)

Note: The performance of DeepSpeed-Chat and OpenRLHF has already been optimized.

# Roadmap

Upcoming features for ChatLearn include:
- [x] Simplified configuration settings
- [x] Tutorials for RL training of MoE (Mixture of Experts) models
- [ ] Support for more models
- [ ] Performance optimization
- [ ] Support for more RL algorithms


We are continuously hiring; feel free to contact us or submit your resume via [email](mailto:huangjun.hj@alibaba-inc.com).
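
# Appendix: Sequence Packing Sketch

For readers curious about the sequence packing feature referenced in the feature list: the idea is to concatenate several variable-length samples into one fixed-capacity token buffer so that little compute is wasted on padding. Here is a minimal, framework-agnostic sketch of the batching step (greedy first-fit-decreasing bin packing); it illustrates the technique only and is not ChatLearn's actual implementation.

```python
from typing import List

def pack_sequences(lengths: List[int], capacity: int) -> List[List[int]]:
    """Greedy first-fit-decreasing packing of variable-length samples into
    fixed-capacity token buffers. Returns one list of sample indices per
    buffer; every buffer's total length stays within `capacity`."""
    order = sorted(range(len(lengths)), key=lambda i: -lengths[i])
    bins: List[List[int]] = []   # sample indices per buffer
    free: List[int] = []         # remaining token budget per buffer
    for i in order:
        for b, space in enumerate(free):
            if lengths[i] <= space:   # fits in an existing buffer
                bins[b].append(i)
                free[b] -= lengths[i]
                break
        else:                         # no buffer fits: open a new one
            bins.append([i])
            free.append(capacity - lengths[i])
    return bins

# Pack samples of 900/700/300/100 tokens into 1024-token buffers:
print(pack_sequences([900, 700, 300, 100], capacity=1024))  # [[0, 3], [1, 2]]
```

A real implementation must additionally reset position ids at each sample boundary and mask attention so that packed samples cannot attend to one another within a buffer.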