# Let LRMs Break Free from Overthinking via Self-Braking Tuning

Haoran Zhao<sup>1,2*</sup>, Yuchen Yan<sup>1*</sup>, Yongliang Shen<sup>1†</sup>, Haolei Xu<sup>1</sup>, Wenqi Zhang<sup>1</sup>, Kaitao Song<sup>3</sup>, Jian Shao<sup>1</sup>, Weiming Lu<sup>1</sup>, Jun Xiao<sup>1</sup>, Yueting Zhuang<sup>1</sup>

<sup>1</sup>Zhejiang University, <sup>2</sup>Tianjin University, <sup>3</sup>Microsoft Research Asia

NeurIPS 2025

<sup>*</sup>Equal Contribution, <sup>†</sup>Corresponding Author

*Overview of Self-Braking Tuning: through a specialized data construction method and training strategy, our self-braking model is able to spontaneously halt overthinking.*

## News 🔥🔥
- **2025.09.18:** Our paper has been accepted by **NeurIPS 2025**.
- **2025.05.20:** We release our paper.

## 📝 About
Self-Braking Tuning is a novel framework that unlocks the potential of large reasoning models to autonomously identify and terminate redundant reasoning, enabling the models to regulate their own reasoning processes without relying on external control mechanisms.
During fine-tuning, we use the Megatron-LM framework, with related parameters specified in [`configs/train.yaml`](configs/train.yaml); for evaluation, we employ the vLLM framework as the inference engine, with corresponding parameters located in [`configs/evaluation.yaml`](configs/evaluation.yaml).
Here, we provide a complete data construction framework that can be applied to nearly any long-chain reasoning dataset to generate the corresponding self-braking data.

## 🛠️ Preparation Steps Before Starting
In *Let LLMs Break Free from Overthinking via Self-Braking Tuning*, we perform self-braking tuning on the OpenR1-Math dataset. In fact, the approach is applicable to any long-chain reasoning dataset, as long as the reasoning segments are wrapped in `<think>` and `</think>` tags. Note that before training we recommend keeping the model's `max_position_embeddings` at 32,768; in addition, to extend the context length from 4k to 32k, we increase the RoPE base frequency to 300k.
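
Below is a minimal sketch of the context-length adjustment described above, assuming a Hugging Face-style model config; the model path is a placeholder, and the attribute name `rope_theta` (the RoPE base frequency in Llama/Qwen-style configs) is an assumption rather than something defined in this repository:

```python
# Sketch: raise the context window to 32k and the RoPE base frequency to 300k
# before fine-tuning. Verify the attribute names against your model's config.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("models/your-base-model")  # placeholder path
config.max_position_embeddings = 32768   # keep the 32k context window
config.rope_theta = 300_000              # raise the RoPE frequency to 300k
config.save_pretrained("models/your-base-model")
```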

Our method requires access to an LLM, and the recommended way to provide this is by setting:

```bash
export APIKEY=
```
**Tip**: As a convenient default, we use the OpenAI API. For large-scale datasets, however, we recommend deploying open-source models locally with vLLM or another framework and leveraging efficient methods such as batch processing for better scalability and cost efficiency.
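
As a rough sketch of how the `APIKEY` variable might be consumed (the model name and the local endpoint below are illustrative assumptions; a locally deployed vLLM server exposes an OpenAI-compatible endpoint):

```python
# Sketch: query either the OpenAI API or a local OpenAI-compatible vLLM server.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["APIKEY"],
    # base_url="http://localhost:8000/v1",  # uncomment when serving a local model with vLLM
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Does this reasoning segment repeat an earlier step?"}],
)
print(resp.choices[0].message.content)
```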

## 🚀 Quick Start

### 1. Install Dependencies

```bash
pip install -r requirements.txt
```

### 2. Download Models and Benchmarks

```bash
python models/model_download.py
python data/datasets/download_benchmarks.py
```

### 3. Get the Baseline Dataset

```bash
python data/datasets/download_OpenR1-Math.py
```
### 4. Preprocess Data

```bash
python data/preprocessing/build_sbt-e.py
python data/preprocessing/build_sbt-d.py
```

### 5. Configure and Run Training / Evaluation

Refer to the configuration settings in the following files:

* `train.yaml`: Training settings
* `evaluation.yaml`: Evaluation settings
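
As one possible way to wire the evaluation config into vLLM (the YAML keys and default values below are assumptions for illustration, not the actual schema of `configs/evaluation.yaml`):

```python
# Sketch: load evaluation settings and run vLLM inference. The keys used here
# ("model_path", "temperature", "max_tokens") are illustrative, not the repo's schema.
import yaml
from vllm import LLM, SamplingParams

with open("configs/evaluation.yaml") as f:
    cfg = yaml.safe_load(f)

llm = LLM(model=cfg.get("model_path", "models/sbt-model"))
params = SamplingParams(
    temperature=cfg.get("temperature", 0.6),
    max_tokens=cfg.get("max_tokens", 32768),
)
outputs = llm.generate(["Solve: what is 12 * 7?"], params)
print(outputs[0].outputs[0].text)
```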

## 📖 Citation

If you find our work helpful, please consider citing our paper.

```bibtex
@misc{zhao2025letllmsbreakfree,
title={Let LLMs Break Free from Overthinking via Self-Braking Tuning},
author={Haoran Zhao and Yuchen Yan and Yongliang Shen and Haolei Xu and Wenqi Zhang and Kaitao Song and Jian Shao and Weiming Lu and Jun Xiao and Yueting Zhuang},
year={2025},
eprint={2505.14604},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.14604},
}
```

## 📬 Contact Us
If you have any questions, please contact us by email:
ran159753@tju.edu.cn