
# ZeroGUI: Automating Online GUI Learning at Zero Human Cost

[![Static Badge](https://img.shields.io/badge/arXiv-2505.23762-green)](https://arxiv.org/abs/2505.23762)
[![Static Badge](https://img.shields.io/badge/🤗%20-HuggingFace-blue)](https://huggingface.co/collections/OpenGVLab/zerogui-68388cb7dbf608133c4b5fb2)

We propose [**ZeroGUI**](https://arxiv.org/abs/2505.23762), a fully automated online reinforcement learning framework that enables GUI agents to train and adapt in interactive environments at zero human cost.

**🔥 Updates**

- [x] **2025/07/05**: Release the training scripts on OSWorld.

- [x] **2025/06/15**: Release the evaluation code and scripts on OSWorld.

- [x] **2025/05/30**: Release the task generation code on OSWorld.

- [x] **2025/05/30**: Release our paper and model checkpoints.

## 🚀 Highlights

* 🚫 **Zero Human Cost:** Requires no handcrafted task annotations or rule-based reward designs.

* 🧠 **VLM-based Automation:** Both training tasks and rewards are generated by powerful VLMs.

* ♻️ **Online Learning:** Agents continuously learn from interacting with GUI environments.

* 📈 **Significant Gains:** +63% (Aguvis) and +14% (UI-TARS) relative improvements on OSWorld.

![overview](./assets/overview.png)

## 📖 Summary

* 🧠 **Automatic Task Generation:** Automatically proposes diverse, executable GUI tasks.

* ✅ **Automatic Reward Estimation:** Assigns binary task rewards based on trajectory screenshots and employs a voting mechanism to avoid hallucinated success (a minimal sketch follows the framework figure below).

* ♻️ **Two-Stage Online RL:** Combines training on generated tasks with test-time adaptation to continually improve the agent's performance.

![framework](./assets/framework.png)
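
The reward-estimation idea can be pictured with a minimal sketch. The judge callable, verdict strings, and vote threshold below are illustrative placeholders, not the repository's actual implementation; the sketch only shows the core mechanism of querying a VLM judge several times on the trajectory screenshots and granting a binary reward by majority vote.

```python
from collections import Counter
from typing import Callable, List


def estimate_reward(
    screenshots: List[bytes],
    task_instruction: str,
    query_vlm_judge: Callable[[List[bytes], str], str],  # hypothetical judge call
    num_votes: int = 5,
) -> float:
    """Binary task reward with majority voting to reduce hallucinated success."""
    votes = []
    for _ in range(num_votes):
        # Each call asks the VLM judge whether the trajectory completed the task,
        # based only on the task instruction and the trajectory screenshots.
        verdict = query_vlm_judge(screenshots, task_instruction)  # "success" / "failure"
        votes.append(verdict.strip().lower())
    tally = Counter(votes)
    # Grant reward 1.0 only when a strict majority of judgments agree on success.
    return 1.0 if tally["success"] > num_votes // 2 else 0.0
```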

## 📈 Results

### 💻 OSWorld

![results_osworld](./assets/results_osworld.png)

### 📱 AndroidLab



## 📦 Checkpoints

| base model | env | 🤗 link |
| :--: | :--: | :--: |
| UI-TARS-7B-DPO | OSWorld | [ZeroGUI-OSWorld-7B](https://huggingface.co/OpenGVLab/ZeroGUI-OSWorld-7B) |
| UI-TARS-7B-DPO | AndroidLab | [ZeroGUI-AndroidLab-7B](https://huggingface.co/OpenGVLab/ZeroGUI-AndroidLab-7B) |

### Model Deployment

The prompts and parsing functions are provided in [`openrlhf/agent/uitars.py`](./openrlhf/agent/uitars.py). You can refer to [UI-TARS](https://github.com/bytedance/UI-TARS/blob/main/README_v1.md) for more details.
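
As a rough illustration only: one common way to serve such checkpoints is behind an OpenAI-compatible endpoint (e.g., with vLLM) and query it with a screenshot plus the UI-TARS-style prompt. The endpoint URL, model name registered by the server, and instruction text below are placeholders; use the actual prompt templates and action-parsing functions from [`openrlhf/agent/uitars.py`](./openrlhf/agent/uitars.py) in practice.

```python
import base64

from openai import OpenAI  # pip install openai

# Placeholder endpoint: point this at wherever you serve the checkpoint
# (e.g., an OpenAI-compatible server such as vLLM).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

response = client.chat.completions.create(
    model="OpenGVLab/ZeroGUI-OSWorld-7B",  # name registered by your serving setup
    messages=[{
        "role": "user",
        "content": [
            # Illustrative instruction only; use the real prompt template
            # from openrlhf/agent/uitars.py.
            {"type": "text", "text": "Task: open the Settings app. Output the next GUI action."},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
        ],
    }],
    max_tokens=256,
)
# Parse the raw output with the repo's parsing functions.
print(response.choices[0].message.content)
```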

## 🛠️ Usage

### Setup

1. Set up the Python environment: `pip install -r requirements.txt`. Please use `python>=3.10`.

2. Set up the GUI environment: [OSWorld](./osworld/README.md).

### Evaluation

Use the following command to evaluate the model on OSWorld:

```bash
bash scripts/eval/eval_osworld.sh OpenGVLab/ZeroGUI-OSWorld-7B
```

where `ENV_URL` and `ENV_MANAGER_PORT` are the URL and port of the API manager launched [here](./osworld/env_api_manager.py).

### Training

#### Test-Time Training

[`scripts/train_osworld_test-time`](./scripts/train_osworld_test-time) contains an example script for test-time training on OSWorld.

1. Modify [`train.sh`](./scripts/train_osworld_test-time/train.sh) according to your server setup. `ENV_URL` and `ENV_MANAGER_PORT` are the URL and port of the environment API manager. `API_BASE_URL` and `API_KEY` are used for VLM-based reward estimation; you can deploy the VLM locally or use online APIs. A minimal sketch of querying such an endpoint is given after the launch command below.

2. Modify the Slurm settings in [`slurm_launch.sh`](./scripts/train_osworld_test-time/slurm_launch.sh), and adjust the environment variables provided by your Slurm configuration in [`ray_launch.sh`](./scripts/train_osworld_test-time/ray_launch.sh).

3. Run the following command:

```bash
bash scripts/train_osworld_test-time/srun_launch.sh
```
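
For concreteness, here is a minimal sketch of how the reward VLM configured via `API_BASE_URL` and `API_KEY` might be queried with trajectory screenshots, assuming an OpenAI-compatible endpoint. The judge prompt, model name, and helper are illustrative placeholders rather than the repository's actual reward-estimation code; in the full pipeline the verdicts would feed a voting step as sketched in the Summary section.

```python
import base64
import os

from openai import OpenAI  # pip install openai

# Mirrors the variables configured in train.sh (step 1 above).
judge = OpenAI(base_url=os.environ["API_BASE_URL"], api_key=os.environ["API_KEY"])


def _encode(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()


def judge_trajectory(task: str, screenshot_paths: list[str], model: str) -> str:
    """Ask the reward VLM for a binary verdict on one trajectory (illustrative prompt)."""
    content = [{
        "type": "text",
        "text": f"Task: {task}\nGiven the screenshots, did the trajectory complete the task? "
                "Answer 'success' or 'failure'.",
    }]
    for path in screenshot_paths:
        content.append({
            "type": "image_url",
            "image_url": {"url": f"data:image/png;base64,{_encode(path)}"},
        })
    resp = judge.chat.completions.create(
        model=model,  # name of your locally deployed or online reward VLM
        messages=[{"role": "user", "content": content}],
        max_tokens=8,
    )
    return resp.choices[0].message.content.strip().lower()
```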

#### Generated Task Training

1. Run task generation for [OSWorld](./osworld/README.md), then organize the task metas like [`data/osworld_test_all.jsonl`](./data/osworld_test_all.jsonl) (an illustrative sketch follows this list).

2. Launch training similarly to test-time training.
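
For illustration only: task metas go into a JSONL file with one task per line. The exact schema is defined by [`data/osworld_test_all.jsonl`](./data/osworld_test_all.jsonl) in the repository; the field names below are hypothetical placeholders, so copy the keys from that file rather than from this sketch.

```python
import json

# Hypothetical task metas; mirror the keys used in data/osworld_test_all.jsonl,
# not the illustrative ones shown here.
generated_tasks = [
    {"task_id": "gen_0001", "domain": "chrome", "instruction": "Open a new incognito window."},
    {"task_id": "gen_0002", "domain": "os", "instruction": "Create a folder named 'reports' on the desktop."},
]

with open("data/osworld_generated_tasks.jsonl", "w") as f:
    for task in generated_tasks:
        f.write(json.dumps(task) + "\n")  # one JSON object per line
```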

## 📚 Citation

If you find this work helpful in your research, please consider citing:

```bibtex
@article{yang2025zerogui,
  title={ZeroGUI: Automating Online GUI Learning at Zero Human Cost},
  author={Yang, Chenyu and Su, Shiqian and Liu, Shi and Dong, Xuan and Yu, Yue and Su, Weijie and Wang, Xuehui and Liu, Zhaoyang and Zhu, Jinguo and Li, Hao and Wang, Wenhai and Qiao, Yu and Zhu, Xizhou and Dai, Jifeng},
  journal={arXiv preprint arXiv:2505.23762},
  year={2025}
}
```

## Acknowledgements
Our code is built with reference to the following projects: [OpenRLHF](https://github.com/OpenRLHF/OpenRLHF), [UI-TARS](https://github.com/bytedance/UI-TARS), [AGUVIS](https://github.com/xlang-ai/aguvis), [OSWorld](https://github.com/xlang-ai/OSWorld), and [AndroidLab](https://github.com/THUDM/Android-Lab).