{"id":13488758,"url":"https://github.com/OpenGVLab/Diffree","last_synced_at":"2025-03-28T01:37:44.036Z","repository":{"id":250093842,"uuid":"833440526","full_name":"OpenGVLab/Diffree","owner":"OpenGVLab","description":"Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model","archived":false,"fork":false,"pushed_at":"2024-08-06T02:24:30.000Z","size":69888,"stargazers_count":147,"open_issues_count":0,"forks_count":10,"subscribers_count":4,"default_branch":"main","last_synced_at":"2024-08-06T13:58:33.641Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/OpenGVLab.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-07-25T04:13:32.000Z","updated_at":"2024-08-06T09:57:40.000Z","dependencies_parsed_at":"2024-08-03T05:37:58.922Z","dependency_job_id":null,"html_url":"https://github.com/OpenGVLab/Diffree","commit_stats":null,"previous_names":["opengvlab/diffree"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenGVLab%2FDiffree","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenGVLab%2FDiffree/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenGVLab%2FDiffree/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/OpenGVLab%2FDiffree/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/OpenGVLab","download_url":"https://codeload.github.
com/OpenGVLab/Diffree/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":222333976,"owners_count":16968058,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-31T18:01:21.327Z","updated_at":"2025-03-28T01:37:44.028Z","avatar_url":"https://github.com/OpenGVLab.png","language":"Python","funding_links":[],"categories":["SD-inpaint","Python"],"sub_categories":[],"readme":"# Diffree\nOfficial PyTorch implementation of the paper \"Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model\"\n\n\u003cp align=\"center\"\u003e\n  \u003ca href=\"https://opengvlab.github.io/Diffree/\"\u003e\u003cu\u003e[🌐 Project Page]\u003c/u\u003e\u003c/a\u003e\n  \u0026nbsp;\u0026nbsp;\n  \u003ca href=\"https://huggingface.co/datasets/LiruiZhao/OABench\"\u003e\u003cu\u003e[🗞️ Dataset]\u003c/u\u003e\u003c/a\u003e\n  \u0026nbsp;\u0026nbsp;\n  \u003ca href=\"https://drive.google.com/file/d/1AdIPA5TK5LB1tnqqZuZ9GsJ6Zzqo2ua6/view\"\u003e\u003cu\u003e[🎥 Video]\u003c/u\u003e\u003c/a\u003e\n  \u0026nbsp;\u0026nbsp;\n  \u003ca href=\"https://arxiv.org/pdf/2407.16982\"\u003e\u003cu\u003e[📜 arXiv]\u003c/u\u003e\u003c/a\u003e\n  \u0026nbsp;\u0026nbsp;\n  \u003ca href=\"https://huggingface.co/spaces/LiruiZhao/Diffree\"\u003e\u003cu\u003e[🤗 Hugging Face Demo]\u003c/u\u003e\u003c/a\u003e\n\u003c/p\u003e\n\n## Abstract\n\n\u003cdetails\u003e\u003csummary\u003eCLICK for the full abstract\u003c/summary\u003e\n\n\u003e This paper addresses an important problem of object addition for images with only text guidance. 
It is challenging because the new object must be integrated seamlessly into the image with consistent visual context, such as lighting, texture, and spatial location. While existing text-guided image inpainting methods can add objects, they either fail to preserve the background consistency or involve cumbersome human intervention in specifying bounding boxes or user-scribbled masks. To tackle this challenge, we introduce Diffree, a Text-to-Image (T2I) model that facilitates text-guided object addition with only text control. To this end, we curate OABench, an exquisite synthetic dataset by removing objects with advanced image inpainting techniques. OABench comprises 74K real-world tuples of an original image, an inpainted image with the object removed, an object mask, and object descriptions. Trained on OABench using the Stable Diffusion model with an additional mask prediction module, Diffree uniquely predicts the position of the new object and achieves object addition with guidance from only text. Extensive experiments demonstrate that Diffree excels in adding new objects with a high success rate while maintaining background consistency, spatial appropriateness, and object relevance and quality.\n\u003e \u003c/details\u003e\n\nWe are open to any suggestions and discussions; feel free to contact us at [liruizhao@stu.xmu.edu.cn](mailto:liruizhao@stu.xmu.edu.cn).\n\n## News\n- [2024/07] Release inference code and \u003ca href=\"https://huggingface.co/LiruiZhao/Diffree\"\u003echeckpoint\u003c/a\u003e\n- [2024/07] Release \u003ca href=\"https://huggingface.co/spaces/LiruiZhao/Diffree\"\u003e🤗 Hugging Face Demo\u003c/a\u003e\n- [2024/08] Release ComfyUI demo. 
Thanks [smthemex](https://github.com/smthemex) ([ComfyUI_Diffree](https://github.com/smthemex/ComfyUI_Diffree)) for helping!\n- [2024/08] Release [training dataset OABench](https://huggingface.co/datasets/LiruiZhao/OABench) on Hugging Face\n- [2024/08] Release training code\n- [2024/08] Update \u003ca href=\"https://huggingface.co/spaces/LiruiZhao/Diffree\"\u003e🤗 Demo\u003c/a\u003e, which now supports iterative generation through a text list\n\n## Contents\n- [Install](#install)\n- [Inference](#inference)\n- [Data Download](#data-download)\n- [Training](#training)\n- [Citation](#citation)\n\n## Install\n1. Clone this repository and navigate to the Diffree folder\n```\ngit clone https://github.com/OpenGVLab/Diffree.git\n\ncd Diffree\n```\n\n2. Install packages\n```\nconda create -n diffree python=3.8.5\n\nconda activate diffree\n\npip install -r requirements.txt\n```\n\n## Inference\n\n1. Download the Diffree model from Hugging Face.\n```\npip install huggingface_hub\n\nhuggingface-cli download LiruiZhao/Diffree --local-dir ./checkpoints\n```\n\n2. You can run inference with the script:\n\n```\npython app.py\n```\n\nSpecifically, `--resolution` defines the maximum size of both the resized input image and the output image. For our \u003ca href=\"https://huggingface.co/spaces/LiruiZhao/Diffree\"\u003eHugging Face Demo\u003c/a\u003e, we set `--resolution` to `512` to provide higher-resolution results. During training, however, Diffree used a `--resolution` of `256`, so reducing `--resolution` might improve results (e.g., consider trying `320`).\n\n## Data Download\n\nYou can download OABench here, which is used for training Diffree.\n\n1. Download the OABench dataset from Hugging Face.\n\n```\nhuggingface-cli download --repo-type dataset LiruiZhao/OABench --local-dir ./dataset --local-dir-use-symlinks False\n```\n\n2. 
Find and extract all compressed files in the dataset directory\n\n```\ncd dataset\n\nls *.tar.gz | xargs -n1 tar xvf\n```\n\nThe data structure should look like this:\n\n```\n|-- dataset\n    |-- original_images\n        |-- 58134.jpg\n        |-- 235791.jpg\n        |-- ...\n    |-- inpainted_images\n        |-- 58134\n          |-- 634757.jpg\n          |-- 634761.jpg\n          |-- ...\n        |-- 235791\n        |-- ...\n    |-- mask_images\n        |-- 58134\n          |-- 634757.png\n          |-- 634761.png\n          |-- ...\n        |-- 235791\n        |-- ...\n    |-- annotations.json\n```\n\nIn the `inpainted_images` and `mask_images` directories, the top-level folders correspond to the original images, and the contents of each folder are the inpainted images and masks for those images.\n\n## Training\nDiffree is trained by fine-tuning from an initial Stable Diffusion checkpoint. \n\n1. Download a Stable Diffusion checkpoint and move it to the `checkpoints` directory. For our trained models, we used [the v1.5 checkpoint](https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt) as the starting point. You can also use the following command:\n\n```\ncurl -L https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned-emaonly.ckpt -o checkpoints/v1-5-pruned-emaonly.ckpt\n```\n\n\n2. Next, you can start training.\n\n```\npython main.py --name diffree --base config/train.yaml --train --gpus 0,1,2,3\n```\n\nAll configurations are stored in the YAML file. 
If you need custom configuration settings, you can modify `--base` to point to your own config file.\n\n\n## Citation\nIf you find this work useful, please consider citing:\n\n```\n@article{zhao2024diffree,\n  title={Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model},\n  author={Zhao, Lirui and Yang, Tianshuo and Shao, Wenqi and Zhang, Yuxin and Qiao, Yu and Luo, Ping and Zhang, Kaipeng and Ji, Rongrong},\n  journal={arXiv preprint arXiv:2407.16982},\n  year={2024}\n}\n```\n","project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FOpenGVLab%2FDiffree","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FOpenGVLab%2FDiffree","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FOpenGVLab%2FDiffree/lists"}