{"id":28447813,"url":"https://github.com/opengvlab/diffagent","last_synced_at":"2026-01-30T14:34:53.134Z","repository":{"id":232543692,"uuid":"773324685","full_name":"OpenGVLab/DiffAgent","owner":"OpenGVLab","description":"[CVPR 2024] DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model","archived":false,"fork":false,"pushed_at":"2024-04-16T03:57:45.000Z","size":11,"stargazers_count":17,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-06-30T13:44:45.233Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OpenGVLab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2024-03-17T11:11:43.000Z","updated_at":"2024-12-26T08:34:03.000Z","dependencies_parsed_at":"2025-06-30T13:56:13.245Z","dependency_job_id":null,"html_url":"https://github.com/OpenGVLab/DiffAgent","commit_stats":null,"previous_names":["opengvlab/diffagent"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/OpenGVLab/DiffAgent","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenGVLab%2FDiffAgent","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenGVLab%2FDiffAgent/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenGVLab%2FDiffAgent/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenGVLab%2FDiffAgent/manifests","owner_url":"https://repos.ecosyste.ms/
api/v1/hosts/GitHub/owners/OpenGVLab","download_url":"https://codeload.github.com/OpenGVLab/DiffAgent/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenGVLab%2FDiffAgent/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":28914335,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-01-30T12:13:43.263Z","status":"ssl_error","status_checked_at":"2026-01-30T12:13:22.389Z","response_time":66,"last_error":"SSL_read: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-06-06T12:07:17.898Z","updated_at":"2026-01-30T14:34:53.107Z","avatar_url":"https://github.com/OpenGVLab.png","language":null,"funding_links":[],"categories":[],"sub_categories":[],"readme":"# DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model\n\n\n[![MIT license](https://img.shields.io/badge/License-MIT-blue.svg)](https://lbesson.mit-license.org/)  [![arXiv](https://img.shields.io/badge/arXiv-2404.01342-red)](https://arxiv.org/abs/2404.01342) \n\n## Abstract\n\n\u003cdetails\u003e\u003csummary\u003eCLICK for the full abstract\u003c/summary\u003e\n\n\u003e Text-to-image (T2I) generative models have attracted significant attention and found extensive applications within and beyond academic research. 
For example, the Civitai community, a platform for T2I innovation, currently hosts an impressive array of 74,492 distinct models. However, this diversity presents a formidable challenge in selecting the most appropriate model and parameters, a process that typically requires numerous trials. Drawing inspiration from the tool usage research of large language models (LLMs), we introduce DiffAgent, an LLM agent designed to screen the accurate selection in seconds via API calls. DiffAgent leverages a novel two-stage training framework, SFTA, enabling it to accurately align T2I API responses with user input in accordance with human preferences. To train and evaluate DiffAgent's capabilities, we present DABench, a comprehensive dataset encompassing an extensive range of T2I APIs from the community. Our evaluations reveal that DiffAgent not only excels in identifying the appropriate T2I API but also underscores the effectiveness of the SFTA training framework.\n\u003e \u003c/details\u003e\n\nWe are open to suggestions and discussions; feel free to contact us at [liruizhao@stu.xmu.edu.cn](mailto:liruizhao@stu.xmu.edu.cn).\n\n\n## TODO\n\n- [x] dataset\n- [ ] data collection script\n- [ ] pretrained model\n- [ ] training code\n\n## News\n\n- 2024/04/15 - Our dataset DABench is now publicly accessible and can be retrieved from [Google Drive](https://drive.google.com/file/d/1-zqkHbuD1Di5eqLUspE3mzkRAmOCZYtZ/view?usp=sharing)!\n\n## Contents\n\n- [Install](#install)\n- [Dataset](#dataset)\n- [Usage](#usage)\n- [Citation](#citation)\n\n## Install\n\n```\nconda create -n diffagent python=3.9.17\nconda activate diffagent\ngit clone https://github.com/OpenGVLab/DiffAgent.git\ncd diffagent\npip install -r requirements.txt\n```\n\n## Dataset\n\nOur research introduces a high-quality dataset, DABench, accessible via [Google Drive](https://drive.google.com/file/d/1-zqkHbuD1Di5eqLUspE3mzkRAmOCZYtZ/view?usp=sharing), encompassing Instruction-API pairs from SD 1.5 and SD XL (a 
total of 50,482).\nAdditionally, we furnish the corresponding mapping dictionaries to facilitate subsequent model downloads or API information reconstruction.\n\n\nThe dataset DABench proposed in our work is collected from Civitai ([license](https://github.com/civitai/civitai/blob/main/LICENSE)). The stipulations of the license highlight potential legal implications if this dataset is employed for commercial objectives. Therefore, any entity intending to use this data for commercial ends should seek explicit authorization from the relevant website or author.\n\n\n## Usage\n\n\n## Citation\n\nIf you use our work or the dataset in this repo, or find them helpful, please consider citing our paper.\n\n```\n@article{zhao2024diffagent,\n  title={DiffAgent: Fast and Accurate Text-to-Image API Selection with Large Language Model},\n  author={Zhao, Lirui and Yang, Yue and Zhang, Kaipeng and Shao, Wenqi and Zhang, Yuxin and Qiao, Yu and Luo, Ping and Ji, Rongrong},\n  journal={arXiv preprint arXiv:2404.01342},\n  year={2024}\n}\n```\n\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopengvlab%2Fdiffagent","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fopengvlab%2Fdiffagent","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fopengvlab%2Fdiffagent/lists"}