https://github.com/IMNearth/CoAT
Android in the Zoo: Chain-of-Action-Thought for GUI Agents
- Host: GitHub
- URL: https://github.com/IMNearth/CoAT
- Owner: IMNearth
- Created: 2024-01-12T10:10:41.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-07-20T13:08:47.000Z (9 months ago)
- Last Synced: 2024-07-20T14:27:46.571Z (9 months ago)
- Language: Python
- Homepage:
- Size: 2.08 MB
- Stars: 19
- Watchers: 1
- Forks: 0
- Open Issues: 1
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-ui-agents - code
- acu - Code
README
Android in the Zoo:
Chain-of-Action-Thought for GUI Agents

Jiwen Zhang<sup>1,2</sup>, Jihao Wu<sup>2</sup>, Yihua Teng<sup>2</sup>, Minghui Liao<sup>2</sup>, Nuo Xu<sup>2</sup>, Xiao Xiao<sup>2</sup>, Zhongyu Wei<sup>1</sup>, Duyu Tang<sup>2</sup>

<sup>1</sup>Fudan University, <sup>2</sup>Huawei Inc.
--------------
This work presents **Chain-of-Action-Thought** (dubbed **CoAT**), which conditions each decision on a description of the previous actions, the current screen, and, more importantly, explicit reasoning about which action should be performed next and the outcome the chosen action leads to. To enable adaptive learning of the CoAT process, we construct a benchmark, **Android-In-The-Zoo** (AiTZ), which contains 18,643 screen-action pairs together with CoAT annotations.
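As a rough illustration of what these components look like at inference time, the sketch below assembles a CoAT-style prompt from the previous-action history and a screen description. The class, field names, and prompt wording are simplified assumptions for illustration, not the exact templates used in the paper.

```python
# Illustrative only: a minimal container for the CoAT context of one step.
# The prompt wording below is an assumption, not the paper's exact template.
from dataclasses import dataclass, field
from typing import List


@dataclass
class CoATStep:
    """One decision step, holding the context used to prompt a vision-language model."""
    screen_description: str                       # what is currently on screen ("[observe]")
    previous_actions: List[str] = field(default_factory=list)

    def build_prompt(self, instruction: str) -> str:
        history = "\n".join(f"- {a}" for a in self.previous_actions) or "- (none yet)"
        return (
            f"Goal: {instruction}\n"
            f"Previous actions:\n{history}\n"
            f"Current screen: {self.screen_description}\n"
            "Think about which action should be performed next and what outcome "
            "it should lead to, then output the next action."
        )


step = CoATStep(
    screen_description="Google Play search page with an empty search box.",
    previous_actions=["opened the Google Play app"],
)
print(step.build_prompt("Install the Gmail app"))
```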
## 📣 Update
- **[2024-10-15]** Evaluation code has been released!
- **[2024-09-20]** Our work has been accepted to EMNLP 2024 Findings!
- **[2024-07-16]** We add demo code for using CoAT with proprietary models (GPT-4V, Gemini-Pro, and Qwen-VL-Max)!
- **[2024-03-31]** We release the first version of our AiTZ dataset!
- **[2024-03-05]** Our paper is now on arXiv; you can access it by clicking [here](https://arxiv.org/abs/2403.02713)!
## Android-in-the-Zoo
AiTZ contains 18,643 screens together with 2,500+ instructions, all annotated with CoAT-driven semantic labels. The sample format for each time step is:
```json
{
"episode_id": "523638528775825151",
"episode_length": 4,
"step_id": 0,
"coat_screen_desc": "[observe]",
"coat_action_think": "[action think]",
"coat_action_desc": "[next action description]",
"coat_action_result": "[action result]",
...
}
```
You can refer to the `data-example` folder for a more specific example.
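If you want to iterate over the annotations programmatically, a minimal loader might look like the sketch below. The file path and the assumption that each annotation file stores a list of per-step records are illustrative; only the field names mirror the sample above, so check `data-example` for the real layout.

```python
# Illustrative loader for AiTZ-style annotations; the path below is a placeholder.
import json
from pathlib import Path


def iter_steps(annotation_file: str):
    """Yield step records, assuming the file stores a list of per-step dicts
    shaped like the sample shown above."""
    steps = json.loads(Path(annotation_file).read_text(encoding="utf-8"))
    for step in steps:
        yield step


for step in iter_steps("data-example/episode.json"):  # hypothetical file name
    print(step["episode_id"], step["step_id"], step["coat_action_desc"])
```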
### Download
Our dataset ([GoogleDrive](https://drive.google.com/file/d/12xOV2m62fBUFLhMcWIsFiC6zwV7a2RhI/view?usp=sharing) or [BaiduNetdisk](https://pan.baidu.com/s/1dHG-4L0RE1aYINzMSA4dCw?pwd=7g82)) contains both the screens (.png) and the annotations (.json), taking about 2.6 GB of disk space.
### Statistics
| Subset      | Train #Episodes | Train #Screens | Test #Episodes | Test #Screens |
| ----------- | --------------- | -------------- | -------------- | ------------- |
| General     | 323             | 2405           | 156            | 1202          |
| Install     | 286             | 2519           | 134            | 1108          |
| GoogleApps  | 166             | 1268           | 76             | 621           |
| Single      | 844             | 2594           | 0              | 0             |
| WebShopping | 379             | 5133           | 140            | 1793          |
| **Total**   | **1998**        | **13919**      | **506**        | **4724**      |

## Chain-of-Action-Thought
### Comparison with other context modeling methods
We validate the effectiveness of CoAT by conducting a preliminary experiment on 50 episodes randomly sampled from the AITW dataset.
The compared baselines are [Chain-of-Thought](https://arxiv.org/abs/2201.11903) (CoT) and [Chain-of-Actions](https://arxiv.org/abs/2309.11436) (CoA).
| Prompt | Metric | QwenVL | Gemini-PV | GPT-4V |
| ------ | ------ | ------ | --------- | ------ |
| CoA | hit | 94.5 | 99.8 | 99.3 |
| | acc | 44.4 | 47.7 | 62.8 |
| CoT | hit | 95.6 | 97.5 | 97.1 |
| | acc | 49.4 | 52.0 | 64.1 |
| CoAT | hit | 96.3 | 96.4 | 98.2 |
|        | acc    | 52.4   | 54.5      | 73.5   |

Here, "hit" means format hit rate and "acc" means action type prediction accuracy. (One can refer to Table 8 in our paper for more details.)
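For intuition, the snippet below shows one simple way such metrics could be computed from raw model responses. The output format it parses (`ACTION: <type> ...`) and the decision to count unparsable responses as wrong are simplifying assumptions for illustration, not the exact evaluation protocol from the paper.

```python
# Illustrative metric computation: "hit" = fraction of responses in the
# expected format, "acc" = fraction of steps whose predicted action type
# matches the ground truth. The "ACTION:" format is an assumption.
from typing import List, Optional, Tuple


def parse_action_type(response: str) -> Optional[str]:
    """Return the predicted action type, or None if the response does not
    follow the assumed 'ACTION: <type> ...' format."""
    for line in response.splitlines():
        if line.upper().startswith("ACTION:"):
            rest = line.split(":", 1)[1].strip()
            return rest.split()[0].lower() if rest else None
    return None


def hit_and_acc(responses: List[str], gold_types: List[str]) -> Tuple[float, float]:
    parsed = [parse_action_type(r) for r in responses]
    hit = sum(p is not None for p in parsed) / len(parsed)
    acc = sum(p == g for p, g in zip(parsed, gold_types)) / len(gold_types)
    return hit, acc


hit, acc = hit_and_acc(["ACTION: click the search bar", "open settings"],
                       ["click", "open"])
print(f"hit={hit:.2f}, acc={acc:.2f}")
```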
### CoAT demo usage
Here we provide demo code for anyone who wants to try CoAT on GPT-4V, Qwen-VL-Max, and Gemini-1.0-Pro-Vision.
First, go to `coat/config.yaml` and add your own API keys and URLs.
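The exact schema of `coat/config.yaml` is defined by this repo; purely as an illustration of the kind of entries expected, such a config could be read as in the hypothetical sketch below, where all key names are assumptions.

```python
# Hypothetical example only: the key names ("openai", "api_key", "base_url")
# are assumptions about what coat/config.yaml might contain, not the repo's
# actual schema. Requires PyYAML (`pip install pyyaml`).
import yaml

with open("coat/config.yaml", "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

# Pick the credentials for one backend before calling its API.
openai_cfg = cfg.get("openai", {})
print(openai_cfg.get("api_key"), openai_cfg.get("base_url"))
```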
Next, run the following command to generate the semantic components of the CoAT framework:
```shell
python run_coat.py --task "flow" --DEMO_MODE "COAT" --MODEL.NAME "openai/gemini/qwenvl" --num-threads 3
```
Then, you can obtain the action prediction results by:
```shell
python run_coat.py --task "predict" --DEMO_MODE "COAT" --MODEL.NAME "openai/gemini/qwenvl" --num-threads 3
```

## Citation
If you find our work helpful, please consider citing our paper.
```
@misc{zhang2024android,
title={Android in the Zoo: Chain-of-Action-Thought for GUI Agents},
author={Jiwen Zhang and Jihao Wu and Yihua Teng and Minghui Liao and Nuo Xu and Xiao Xiao and Zhongyu Wei and Duyu Tang},
year={2024},
eprint={2403.02713},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```