Revisiting Mid-training in the Era of RL Scaling
- Host: GitHub
- URL: https://github.com/gair-nlp/octothinker
- Owner: GAIR-NLP
- License: apache-2.0
- Created: 2025-04-17T07:30:51.000Z (6 months ago)
- Default Branch: main
- Last Pushed: 2025-06-26T11:53:57.000Z (4 months ago)
- Last Synced: 2025-06-26T12:43:30.497Z (4 months ago)
- Topics: llama, llm, mid-training, post-training, pre-training, qwen, reasoning, rl, verl
- Language: Jupyter Notebook
- Homepage: http://arxiv.org/abs/2506.20512
- Size: 16.3 MB
- Stars: 68
- Watchers: 3
- Forks: 1
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# 🐙 OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling

[**Paper (arXiv)**](https://arxiv.org/abs/2506.20512) | [**Blog (Notion)**](https://tinyurl.com/OctoThinker) | [**Models & Data (Hugging Face)**](https://huggingface.co/OctoThinker)

> *Revisiting Mid-training in the Era of RL Scaling*
## 🔥 News
- **[2025-06-26]** 🎉🎉🎉 We release our detailed technical report on [**arXiv**](https://arxiv.org/abs/2506.20512) and the MegaMath-Web-Pro-Max corpus on [**HuggingFace**](https://huggingface.co/datasets/OctoThinker/MegaMath-Web-Pro-Max).
- **[2025-04-24]** 🎉🎉🎉 We release our first progress blog on [**Notion**](https://tinyurl.com/OctoThinker), together with the first version of our base and RL models on [**HuggingFace**](https://huggingface.co/collections/GAIR/octothinker-68035e416813f9833a8060f3), which are trained on the Llama-3 series.

## Introduction

> **Note:** We are still exploring more possibilities and expanding to different model families, but we are eager to share some findings from our empirical results with the community in an open-source manner!
We explore how different early pre-training (mid-training) strategies affect post-training stages, especially reinforcement learning (RL). We hope to reshape the pre-training stage of LLMs in the era of RL scaling. **🐙 OctoThinker** is our initial attempt to explore this direction.
**We go through a thorough pipeline of pre-training, RL, and evaluation to investigate in-depth insights.**

### What does 🐙 OctoThinker mean?
"Octo" is from the word "octopus", representing our base model families which are branched and trained via different strategies.
"Thinker" means the model is finally trained to think and reason at RL stage, which is expected to show frequent self-reflection behaviors and strong reasoning abilities.## Usage
Currently, our repo contains 3 main parts:
- Pre-training code based on [Nanotron](https://github.com/huggingface/nanotron)
- RL code based on [verl](https://github.com/volcengine/verl)
- Evaluation code which is refined from [DeepSeekMath](https://github.com/deepseek-ai/deepseek-math) and [MegaMath](https://github.com/LLM360/MegaMath)

### Pre-training
Pre-training Environment Setup
```bash
conda create -n nanotron python=3.10
conda activate nanotron
cd nanotron
pip install -r requirements.txt
```

To Submit Pre-training Jobs
```bash
#TODO: add pre-training scripts
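# Unofficial sketch until the official scripts above are released: Nanotron jobs
# are typically launched with torchrun plus a YAML config (the config path below
# comes from the Nanotron examples and is only a placeholder, not our recipe):
CUDA_DEVICE_MAX_CONNECTIONS=1 torchrun --nproc_per_node=8 run_train.py \
    --config-file examples/config_tiny_llama.yaml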
```

### RL
RL Environment Setup
```bash
#TODO: add RL environment setup
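# Unofficial sketch until the official setup is released: verl is commonly
# installed from source into its own environment (this is not necessarily the
# exact OctoThinker environment):
conda create -n verl python=3.10
conda activate verl
git clone https://github.com/volcengine/verl
cd verl && pip install -e .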
```

To Submit RL Jobs
```bash
#TODO: add RL scripts
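# Unofficial sketch until the official launch scripts land: a verl PPO run is
# typically started through its Hydra-style entry point, roughly as below
# (the model/data paths are placeholders, not the OctoThinker recipe):
python3 -m verl.trainer.main_ppo \
    actor_rollout_ref.model.path=<path/to/base_model> \
    data.train_files=<path/to/train.parquet> \
    data.val_files=<path/to/test.parquet> \
    trainer.n_gpus_per_node=8 trainer.nnodes=1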
```

### Evaluation
Evaluation Environment Setup
```bash
conda create -n matheval python=3.10
conda activate matheval
cd eval
pip install -r requirements.txt
```

To Submit Evaluation Jobs
```bash
cd eval
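# Judging by its name, this script presumably evaluates the last four checkpoint
# directories on English math chain-of-thought benchmarks; check the script for
# the exact checkpoint and dataset paths it expects.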
bash scripts/en_math_cot_eval_last4dir.sh
```

### Visualization
We also provide visualization code for the pre-training and RL process. All visualization scripts are in the [plot](./plot/) directory to ensure reproducibility.

## Acknowledgements
For the training framework and inference engine, we use [**verl**](https://github.com/volcengine/verl) and [**vLLM**](https://github.com/vllm-project/vllm). We thank the Hugging Face **[open-r1 team](https://huggingface.co/open-r1)**, the [**a-m-team**](https://huggingface.co/a-m-team), and the [**SimpleRL**](https://github.com/hkust-nlp/simpleRL-reason) project for open-sourcing their datasets and training recipes. In fact, we are deeply grateful to the entire open-source community for their tireless efforts in making our exploration possible.
If you find this work useful, please cite:
```
@article{wang2025octothinker,
  title={OctoThinker: Mid-training Incentivizes Reinforcement Learning Scaling},
  author={Wang, Zengzhi and Zhou, Fan and Li, Xuefeng and Liu, Pengfei},
  journal={arXiv preprint arXiv:2506.20512},
  year={2025},
  note={Preprint}
}
```