https://github.com/sileod/reasoning-core
Procedural symbolic reasoning data generators suite for synthetic pretraining
- Host: GitHub
- URL: https://github.com/sileod/reasoning-core
- Owner: sileod
- License: MIT
- Created: 2025-03-10T10:54:32.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2026-03-26T10:23:44.000Z (about 1 month ago)
- Last Synced: 2026-03-27T03:52:35.207Z (about 1 month ago)
- Topics: data-generators, dataset, dataset-generation, grpo, llm, logic, pre-pre-training, pre-training, procedural, procedural-dataset, procedural-generation, reasoning, rlvr, symbolic, verifiers
- Language: Python
- Size: 368 KB
- Stars: 35
- Watchers: 2
- Forks: 2
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
# Reasoning Core ◉
reasoning-core is a suite of procedural data generators for LLM pre-training and post-training.
It is centered on expressive symbolic tasks, including full-fledged first-order logic, formal mathematics with TPTP, planning, and CFG syntax tasks.
We release pre-generated data scaled to more than 10B tokens:
🤗 [https://hf.co/collections/reasoning-core/datasets](https://huggingface.co/collections/reasoning-core/datasets)
# Standalone
```python
# pip install reasoning_core
from reasoning_core import list_tasks, get_task, score_answer

T = get_task('arithmetics')()  # instantiate a task by name
x = T.generate_example()
assert score_answer(x.answer, x) == 1  # the gold answer scores 1
```
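The generate-then-verify pattern behind each task can be illustrated with a minimal, self-contained sketch (pure Python with hypothetical names like `ToyArithmeticTask`; the real tasks are far richer and live in the library itself):

```python
import random
from dataclasses import dataclass

@dataclass
class Example:
    prompt: str
    answer: str

class ToyArithmeticTask:
    """Minimal stand-in for a procedural task: generate an example, verify an answer."""
    def __init__(self, seed=0):
        self.rng = random.Random(seed)

    def generate_example(self):
        a, b = self.rng.randint(1, 99), self.rng.randint(1, 99)
        return Example(prompt=f"What is {a} + {b}?", answer=str(a + b))

def score_answer(candidate, example):
    # Exact-match verifier: 1 if the candidate matches the gold answer, else 0
    return int(candidate.strip() == example.answer)

T = ToyArithmeticTask(seed=42)
x = T.generate_example()
assert score_answer(x.answer, x) == 1
assert score_answer("not a number", x) == 0
```

Because every example carries a programmatic verifier, the same data works both for pre-training corpora and for RL with verifiable rewards (RLVR).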
# Parallel generation script
Run `bash run_generate.sh` for multi-threaded generation to JSON files (readable by Hugging Face Datasets).
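The core of such a script can be sketched in pure Python (a hedged illustration with a stand-in `make_example`; the real script shards the installed reasoning_core tasks):

```python
import json
import random
from concurrent.futures import ThreadPoolExecutor

def make_example(seed):
    # Stand-in for a reasoning_core task's generate_example()
    rng = random.Random(seed)
    a, b = rng.randint(1, 99), rng.randint(1, 99)
    return {"prompt": f"What is {a} + {b}?", "answer": str(a + b)}

def generate_shard(path, seeds):
    # One JSON object per line (JSONL), loadable with datasets.load_dataset("json", data_files=...)
    with open(path, "w") as f:
        for s in seeds:
            f.write(json.dumps(make_example(s)) + "\n")

# Each thread writes its own shard, so no locking is needed
with ThreadPoolExecutor(max_workers=4) as pool:
    for i in range(4):
        pool.submit(generate_shard, f"shard_{i}.jsonl", range(i * 100, (i + 1) * 100))
```

Writing one file per worker sidesteps contention and keeps shards independently loadable.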
# Task examples and task authoring guide
[GALLERY](https://github.com/sileod/reasoning_core/blob/main/GALLERY.md) (names link to task code)
[TASK_AUTHORING_GUIDE](https://github.com/sileod/reasoning_core/blob/main/TASK_AUTHORING_GUIDE.md)
# Integrations
### Prime Environment Hub
```python
#!pip install uv  # install uv if needed
!uv tool install prime --with openai -q
!uv tool run prime -- env install sileod/reasoning-core-env

import os
from openai import OpenAI
from verifiers import load_environment

env = load_environment("reasoning-core-env")
client = OpenAI(base_url="https://openrouter.ai/api/v1", api_key=os.getenv("OPENROUTER_API_KEY"))  # 🔑
results = env.evaluate(client=client, model="gpt-4.1-mini", num_examples=20, rollouts_per_example=1)
df = env.make_dataset(results).to_pandas()
```
### Reasoning gym integration
We use a custom but compatible interface. Our tasks, which are mostly orthogonal to Reasoning Gym's, can be imported into it.
```python
import reasoning_gym, reasoning_core
from reasoning_gym.composite import DatasetSpec
reasoning_core.register_to_reasoning_gym() # registers RC tasks into RG
specs = [
    DatasetSpec(name='leg_counting', weight=1, config={}),  # from reasoning_gym 🏋
    DatasetSpec(name='arithmetics', weight=1, config={}),   # from reasoning_core ◉
]
D = reasoning_gym.create_dataset('composite', size=10, seed=42, datasets=specs)
```
And the other way around:
```python
from reasoning_core import get_task
t = get_task('reasoning_gym')
t.generate_example(level=1, rg_task='lcm')  # omit rg_task for a random task
```
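A composite dataset like the one above draws each example from one of its component tasks in proportion to its weight. A stdlib sketch of that sampling logic (a hypothetical illustration, not Reasoning Gym's actual implementation):

```python
import random

def composite_sample(specs, size, seed):
    """specs: list of (task_name, weight); pick a source task per example, weight-proportionally."""
    rng = random.Random(seed)
    names = [name for name, _ in specs]
    weights = [weight for _, weight in specs]
    return [rng.choices(names, weights=weights)[0] for _ in range(size)]

# A task with weight 3 supplies roughly 3x as many examples as a weight-1 task
picks = composite_sample([("leg_counting", 1), ("arithmetics", 3)], size=1000, seed=42)
```

Seeding the sampler keeps the task mixture reproducible across runs, mirroring the `seed` argument of `create_dataset`.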
## Citation and paper
https://arxiv.org/abs/2603.02208
```bibtex
@article{reasoningcore2026,
  title={Reasoning Core: A Scalable Procedural Data Generation Suite for Symbolic Pre-training and Post-Training},
  author={Lacombe, Valentin and Quesnel, Valentin and Sileo, Damien},
  journal={arXiv preprint arXiv:2603.02208},
  year={2026},
  url={https://arxiv.org/abs/2603.02208}
}
```