{"id":28676509,"url":"https://github.com/zjunlp/synworld","last_synced_at":"2026-02-22T18:37:29.304Z","repository":{"id":287849887,"uuid":"905193978","full_name":"zjunlp/SynWorld","owner":"zjunlp","description":"[ACL 2025] SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement","archived":false,"fork":false,"pushed_at":"2025-04-14T08:41:43.000Z","size":354,"stargazers_count":7,"open_issues_count":0,"forks_count":1,"subscribers_count":4,"default_branch":"main","last_synced_at":"2025-07-21T12:53:48.739Z","etag":null,"topics":["agent","agent-planning","agentic","artificial-intelligence","knowledge","large-language-models","natural-language-processing","synworld"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/zjunlp.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-12-18T10:47:06.000Z","updated_at":"2025-05-24T07:38:51.000Z","dependencies_parsed_at":"2025-04-14T09:44:20.679Z","dependency_job_id":null,"html_url":"https://github.com/zjunlp/SynWorld","commit_stats":null,"previous_names":["zjunlp/synworld"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/zjunlp/SynWorld","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zjunlp%2FSynWorld","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zjunlp%2FSynWorld/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zjunlp%2FSynWorld/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/host
s/GitHub/repositories/zjunlp%2FSynWorld/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/zjunlp","download_url":"https://codeload.github.com/zjunlp/SynWorld/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zjunlp%2FSynWorld/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":29722029,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-02-22T15:10:41.462Z","status":"ssl_error","status_checked_at":"2026-02-22T15:10:04.636Z","response_time":110,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["agent","agent-planning","agentic","artificial-intelligence","knowledge","large-language-models","natural-language-processing","synworld"],"created_at":"2025-06-13T23:04:58.402Z","updated_at":"2026-02-22T18:37:29.276Z","avatar_url":"https://github.com/zjunlp.png","language":"Python","readme":"\u003ch1 align=\"center\"\u003e SynWorld \u003c/h1\u003e\n\u003ch3 align=\"center\"\u003eVirtual Scenario Synthesis for Agentic Action Knowledge Refinement\u003c/h3\u003e\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://arxiv.org/pdf/2504.03561\" target=\"_blank\"\u003e📄arXiv\u003c/a\u003e •\n  \u003ca href=\"https://huggingface.co/papers/2504.03561\" target=\"_blank\"\u003e🤗HFPaper\u003c/a\u003e \n\u003c/p\u003e\n\n\u003c!-- 
[![Awesome](https://awesome.re/badge.svg)](https://github.com/zjunlp/WorFBench)  --\u003e\n[![License: MIT](https://img.shields.io/badge/License-MIT-green.svg)](https://opensource.org/licenses/MIT)\n![](https://img.shields.io/github/last-commit/zjunlp/SynWorld?color=green) \n\n## Table of Contents\n\n- 🌻[Acknowledgement](#acknowledgement)\n- 🌟[Overview](#overview)\n- 🔧[Installation](#installation)\n- ✏️[ToolBench-API-Deploy](#toolbench-api-deploy)\n- 📝[MCTS-refine](#mcts-refine)\n- 🤔[ActionKnowledge-Evaluation](#actionknowledge-evaluation)\n- 🚩[Citation](#citation)\n\u003c!-- - 🎉[Contributors](#🎉contributors) --\u003e\n\n---\n\n## 🌻Acknowledgement\n\nOur training module code is adapted from [StableToolBench](https://github.com/THUNLP-MT/StableToolBench), [AFlow](https://github.com/FoundationAgents/AFlow), and [DRAFT](https://github.com/quchangle1/DRAFT). The dataset is collected from [ToolBench](https://github.com/openbmb/toolbench?tab=readme-ov-file) and [HotpotQA](https://github.com/hotpotqa/hotpot). Our end-to-end evaluation module is based on [StableToolBench](https://github.com/THUNLP-MT/StableToolBench) and [HotpotQA](https://github.com/hotpotqa/hotpot). Thanks for their great contributions!\n\n\n\n## 🌟Overview\n\nIn the interaction between agents and their environments, agents expand their capabilities by planning and executing actions. However, LLM-based agents face substantial challenges when deployed in novel environments or required to navigate unconventional action spaces. To empower agents to autonomously explore environments, optimize workflows, and enhance their understanding of actions, we propose SynWorld, a framework that allows agents to synthesize possible scenarios with multi-step action invocation within the action space and perform Monte Carlo Tree Search (MCTS) exploration to effectively refine their action knowledge in the current environment. 
Our experiments demonstrate that SynWorld is an effective and general approach to learning action knowledge in new environments.\n\n## 🔧Installation\n\n```bash\ngit clone https://github.com/zjunlp/SynWorld\ncd SynWorld\npip install -r requirements.txt\n```\n\n\n\n## ✏️ToolBench-API-Deploy\n```bash\ngit clone https://github.com/THUNLP-MT/StableToolBench.git\ncd StableToolBench\npip install -r requirements.txt\ncd server\npython main.py\n```\n\n\n\n\n## 📝MCTS-refine\nGenerate workflows with a local LLM API:\n```bash\ncd SynWorld/src\npython PlanAlign.py \\\n    --model qwen-max \\\n    --max_rounds 15 \\\n    --dataset Toolbench \\\n    --validate_nums 100 \\\n    --max_workers 16 \\\n    --mode dev \\\n    --overwrite\n```\n\n\n\n## 🤔ActionKnowledge-Evaluation\n\nEvaluate the workflow of node n (set `--test_round` to the round number you select, e.g., 100 in our experiment):\n```bash\ncd SynWorld/src\npython PlanAlign.py \\\n    --model qwen-max \\\n    --dataset Toolbench \\\n    --validate_nums 100 \\\n    --max_workers 16 \\\n    --mode test \\\n    --test_round the_round_num_you_select \\\n    --overwrite\n```\n\n\n\n## 🚩Citation\n\nIf this work is helpful, please kindly cite as:\n\n```bibtex\n@article{fang2025synworld,\n  title={SynWorld: Virtual Scenario Synthesis for Agentic Action Knowledge Refinement},\n  author={Fang, Runnan and Wang, Xiaobin and Liang, Yuan and Qiao, Shuofei and Wu, Jialong and Xi, Zekun and Zhang, Ningyu and Jiang, Yong and Xie, Pengjun and Huang, Fei and others},\n  journal={arXiv preprint arXiv:2504.03561},\n  year={2025}\n}\n```\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzjunlp%2Fsynworld","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fzjunlp%2Fsynworld","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fzjunlp%2Fsynworld/lists"}