{"id":28416006,"url":"https://github.com/internlm/oreal","last_synced_at":"2025-06-26T10:30:57.754Z","repository":{"id":278913442,"uuid":"930451618","full_name":"InternLM/OREAL","owner":"InternLM","description":"Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning","archived":false,"fork":false,"pushed_at":"2025-03-20T02:51:58.000Z","size":783,"stargazers_count":184,"open_issues_count":2,"forks_count":6,"subscribers_count":8,"default_branch":"main","last_synced_at":"2025-06-14T14:02:33.722Z","etag":null,"topics":["llm","mathematics","o1","reasoning","rl"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/InternLM.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2025-02-10T16:51:54.000Z","updated_at":"2025-06-14T03:21:23.000Z","dependencies_parsed_at":"2025-02-22T14:39:59.000Z","dependency_job_id":null,"html_url":"https://github.com/InternLM/OREAL","commit_stats":null,"previous_names":["internlm/oreal"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/InternLM/OREAL","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FOREAL","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FOREAL/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FOREAL/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FOREAL/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/InternLM","download_url":"https://codeload.github.com/InternLM/OREAL/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/InternLM%2FOREAL/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":262047774,"owners_count":23250418,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["llm","mathematics","o1","reasoning","rl"],"created_at":"2025-06-03T20:07:15.319Z","updated_at":"2025-06-26T10:30:57.748Z","avatar_url":"https://github.com/InternLM.png","language":"Python","readme":"# OREAL: Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning\n\n\n[![license](https://img.shields.io/github/license/InternLM/opencompass.svg)](./LICENSE)\n[![arXiv](https://img.shields.io/badge/arXiv-2502.06781-b31b1b.svg)](https://arxiv.org/abs/2502.06781)\n[![huggingface](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-OREAL-ffc107?color=ffc107\u0026logoColor=white)](https://huggingface.co/collections/internlm/oreal-67aaccf5a8192c1ba3cff018)\n\n\n## ✨ Introduction\n\n![main_fig](./figures/main_fig.jpg)\n\nReasoning abilities, especially those for solving complex math problems, are crucial components of general intelligence.\nRecent advances by proprietary companies, such as o-series models of OpenAI, have made remarkable progress on reasoning tasks. However, the complete technical details remain unrevealed, and the techniques that are believed certainly to be adopted are only reinforcement learning (RL) and the long chain of thoughts.\n\nWe proposes a new RL framework, termed OREAL, to pursue the performance limit that can be achieved through **O**utcome **RE**w**A**rd-based reinforcement **L**earning for mathematical reasoning tasks, where only binary outcome rewards are easily accessible.\n\n+ We theoretically prove that behavior cloning on positive trajectories from best-of-N (BoN) sampling is sufficient to learn the KL-regularized optimal policy in binary feedback environments.\n+ This formulation further implies that the rewards of negative samples should be reshaped to ensure the gradient consistency between positive and negative samples.\n+ To alleviate the long-existing difficulties brought by sparse rewards in RL, which are even exacerbated by the partial correctness of the long chain of thought for reasoning tasks, we further apply a token-level reward model to sample important tokens in reasoning trajectories for learning.\n\nThe OREAL implementation pseudocode is as follows:\n\n![algo](./figures/algo.png)\n\n\n## 📃 Key Results\n\nWith OREAL, for the first time, a 7B model can obtain 94.0 pass@1 accuracy on MATH-500 through RL, being on par with 32B models. OREAL-32B also surpasses previous 32B models trained by distillation with 95.0 pass@1 accuracy on MATH-500.\n\n![main_table](./figures/main_table.png)\n\n## 🤗 HuggingFace\n\n### Model\n\nOur OREAL models are available on Hugging Face 🤗:\n\n| Model    | Huggingface Repo |\n|----------|------------------|\n| OREAL-DeepSeek-R1-Distill-Qwen-7B  | [Model Link](https://huggingface.co/internlm/OREAL-DeepSeek-R1-Distill-Qwen-7B) |\n| OREAL-7B  | [Model Link](https://huggingface.co/internlm/OREAL-7B)  |\n| OREAL-32B  | [Model Link](https://huggingface.co/internlm/OREAL-32B)  |\n\nWe also release the models of SFT version. You can construct your own RL pipeline on them:)\n\n| Model    | Huggingface Repo |\n|----------|------------------|\n| OREAL-7B-SFT  | [Model Link](https://huggingface.co/internlm/OREAL-7B-SFT)  |\n| OREAL-32B-SFT  | [Model Link](https://huggingface.co/internlm/OREAL-32B-SFT)  |\n\n### Data\n\nWe release the prompts utilzed in our RL training phase.\n\n| Dataset    | Huggingface Repo |\n|----------|------------------|\n| RL Prompts  | [Model Link](https://huggingface.co/datasets/internlm/OREAL-RL-Prompts)  |\n\n## 🚄 Training Tutorial\n\n### 1. Install Dependencies\n\nOREAL utilizes [XTuner](https://github.com/InternLM/xtuner/tree/main) as the training engine. \n\n```bash\npip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu124\npip install flash-attn --no-build-isolation\npip install -r requirements.txt\n```\n\n### 2. Prepare Data (Optional)\n\nThe training data can be found at [HERE](https://huggingface.co/datasets/internlm/OREAL-RL-Prompts). The training script will automatically download the data from huggingface.\n\n### 3. Start LLM Verifier Service\n\nOREAL requires a language model as a verifier to evaluate the correctness of the generated solutions along with a rule based verificy function (see the [source code](oreal/judgers/math_judger.py)). We use Qwen2.5-72B-Instruct as the verifier in our experiments. You can start the verifier service with [lmdeploy](https://github.com/InternLM/lmdeploy) by running the following command:\n\n```bash\nlmdeploy serve api_server Qwen/Qwen2.5-72B-Instruct --tp 4 --chat-template qwen --log-level INFO --server-port 10003\n```\n\nOr you can use any other inference engine such as [sglang](https://github.com/sgl-project/sglang) or [vllm](https://github.com/vllm-project/vllm) or [ollama](https://ollama.com/). Just make sure the verifier service can be reached by OpenAI-compatible API.\n\nFill in the verifier service address in the [config file](./oreal/configs) before training.\n\n```python\njudgers_config = dict(\n    math_judger=dict(  # math judger related settings\n        hosts=[\"x.x.x.x:xxxx\", \"x.x.x.x:xxxx\"],  # verifier service addresses\n        stop_word=stop_word,\n        thinking_finish_words=[\"\u003cconclude\u003e\", \"**Final Answer**\", \"\u003c/think\u003e\"],\n        num_processes=8,\n        concurrency_per_proc=(8, 8),\n    )\n)\n```\n\n### 4. Train OREAL\n\n**OREAL-7B**\n\n7B requires 32 GPUs to train. You can use the following command to train the model with [OREAL-7B-SFT](https://huggingface.co/internlm/OREAL-7B-SFT) as the initial policy:\n\n```bash\ntorchrun --nnodes 4 --nproc_per_node 8 --master_addr $MASTER_ADDR --node_rank $RANK --master_port $MASTER_PORT train_oreal.py oreal/configs/oreal_w_tokenrm_OREAL-7B-SFT_seqlen16k.py --total_steps 90 --work_dir ./work_dir/oreal_w_tokenrm_OREAL-7B-SFT_seqlen16k\n```\n\nIt takes about 9 hours to train the model 90 steps with 32xA100.\n\n**OREAL-32B**\n\n32B requires 128 GPUs to train. You can use the following command to train the model with [OREAL-32B-SFT](https://huggingface.co/internlm/OREAL-32B-SFT) as the initial policy:\n\n```bash\ntorchrun --nnodes 16 --nproc_per_node 8 --master_addr $MASTER_ADDR --node_rank $RANK --master_port $MASTER_PORT train_oreal.py oreal/configs/oreal_w_tokenrm_OREAL-32B-SFT_seqlen16k.py --total_steps 90 --work_dir ./work_dir/oreal_w_tokenrm_OREAL-32B-SFT_seqlen16k\n```\n\nMore detailed training settings can be found in the [oreal/configs](./oreal/configs) folder.\n\n**Note**:\n\n+ The best checkpoint may not be the last one. Consider evaluating during training and early stopping when the performance is saturated.\n\n\n## 🖊️ Citation\n\n```\n@article{lyu2025exploring,\n  title={Exploring the Limit of Outcome Reward for Learning Mathematical Reasoning},\n  author={Lyu, Chengqi and Gao, Songyang and Gu, Yuzhe and Zhang, Wenwei and Gao, Jianfei and Liu, Kuikun and Wang, Ziyi and Li, Shuaibin and Zhao, Qian and Huang, Haian and others},\n  journal={arXiv preprint arXiv:2502.06781},\n  year={2025}\n}\n```\n\n## 💳 License\n\nThis project is released under the Apache 2.0 [license](./LICENSE).\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Finternlm%2Foreal","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Finternlm%2Foreal","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Finternlm%2Foreal/lists"}