{"id":13775290,"url":"https://github.com/design-edit/DesignEdit","last_synced_at":"2025-05-11T07:32:29.070Z","repository":{"id":229174439,"uuid":"776010193","full_name":"design-edit/DesignEdit","owner":"design-edit","description":"DesignEdit: Unify Spatial-Aware Image Editing via Training-free Inpainting with a Multi-Layered Latent Diffusion Framework","archived":false,"fork":false,"pushed_at":"2024-12-10T03:26:18.000Z","size":21678,"stargazers_count":309,"open_issues_count":8,"forks_count":22,"subscribers_count":9,"default_branch":"master","last_synced_at":"2024-12-10T04:19:07.102Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://design-edit.github.io/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/design-edit.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-03-22T13:54:39.000Z","updated_at":"2024-12-10T03:26:21.000Z","dependencies_parsed_at":"2024-07-21T02:50:59.920Z","dependency_job_id":null,"html_url":"https://github.com/design-edit/DesignEdit","commit_stats":null,"previous_names":["design-edit/designedit"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/design-edit%2FDesignEdit","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/design-edit%2FDesignEdit/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/design-edit%2FDesignEdit/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/design-edit%2FDesignEdit/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/design-edit","download_url":"https://codeload.github.com/design-edit/DesignEdit/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253533990,"owners_count":21923515,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-03T17:01:36.502Z","updated_at":"2025-05-11T07:32:29.056Z","avatar_url":"https://github.com/design-edit.png","language":"Python","readme":"# DesignEdit: Unify Spatial-Aware Image Editing via Training-free Inpainting with a Multi-Layered Latent Diffusion Framework\n\u003e *Stable Diffusion XL 1.0* Implementation\n\n![teaser](docs/teaser.jpg)\n### [Project Page](https://design-edit.github.io/)\u0026ensp;\u0026ensp;\u0026ensp;[Paper](https://arxiv.org/abs/2403.14487)\u0026ensp;\u0026ensp;\u0026ensp;[Hugging Face Demo](https://huggingface.co/spaces/YuhuiYuan/DesignEdit)\n\n## ✨ News ✨\n\n- [2024/12/10] 🎉 DesignEdit has been accepted to AAAI 2025! 🎉\n- [2024/4/4] We have supported the Gradio Application on Hugging Face 🤗, encouraging you to design online without the need for local deployment.\n- [2024/3/28] We release the code for DesignEdit! Let's design together! 😍\n\n## Setup\n\nThe required Python version is 3.10.12. , and the [Pytorch](https://pytorch.org/) version is 2.0.1.\nThe code's framework is built on [Prompt-to-prompt](https://github.com/google/prompt-to-prompt/) and  [Stable Diffusion](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0).\n\nAdditional required packages are listed in the requirements file.\n```bash\nconda create -n DesignEdit python=3.10.12\nconda activate DesignEdit\npip install -r requirements.txt\n```\nNotice that our model is entirely **training-free**💪!!! The base model is the Stable Diffusion XL-1.0.\n\n## Demo\nWe have created an interactive interface using Gradio, as shown below. You only need to simply run the following command in the environment we previously set up:\n```bash\npython design_app.py\n```\n![page_1](docs/page01.png)\n\n### 🖱️Usage\n\n- We have 5 function pages for different editing operations.\n\n💡**Object Removal**\n\n💡**Zooming Out**\n\n💡**Camera Panning**\n\n💡**Object Moving, Resizing and Flipping**\n\n💡**Multi-Layered Editing**  \n\n- You can follow the \"Usage\" instructions within each page.  \n\n![page_4](docs/page04.png)  \n\n- For each page, we also provide some interesting examples for you to try.  \n\n![page_2](docs/page02.png)  \n\n- Notice that the **Multi-Layered Editing** page, which uses a multi-layered representation for multiple editing tasks, can achieve the same results as those of Object Removal and Object Moving, Resizing, and Flipping in a general representation.  \n\n- Moreover, we have added the \"Mask Preparation\" page for you to utilize SAM or sketching to combine several masks together. This may be useful when you are on the **Multi-Layered Editing** page.  \n\n![page_3](docs/page03.png)\n\n## More Details  \n\nIf you are interested in exploring more details about the model implementation, we recommend checking out [`model.py`](design_copy/src/demo/model.py). Pay special attention to the `register_attention_control()` function and the `LayerFusion` class.  \n\n\n## Applications  \n\nFor more applications, we kindly invite you to explore our [project page](https://design-edit.github.io/) and refer to our [paper](https://arxiv.org/abs/2403.14487).\n\n### 💡Object Removal  \n\nYou can choose more than one object to remove on the **Object Removal** page, and it is also possible to mask irregular regions for removal.\n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"docs/removal.jpg\" width=\"700\"/\u003e\n\u003c/div\u003e\n\n### 💡Object Removal with \u003cspan style=\"color:red;\"\u003eRefine Mask\u003c/span\u003e  \n\nUsing remove mask directly may cause artifacts, the refine mask indicates regions that may cause artifacts. You can turn to **Object Removal** page to explore.  \n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"docs/refine.jpg\"  width=\"700\"/\u003e\n\u003c/div\u003e\n\n### 💡Camera Panning and Zooming Out  \n\nYou can use the **Camera Panning** and **Zooming Out** page to achieve editing with different scales and directions.\n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"docs/pan.jpg\" width=\"700\"/\u003e\n\u003c/div\u003e\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"docs/zoom.jpg\" width=\"700\"/\u003e\n\u003c/div\u003e\n\nThe illustration of image adjustment and mask preparation is shown below.  \n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"docs/pan+zoom.jpg\" width=\"700\"/\u003e\n\u003c/div\u003e\n\n### 💡Multi-Object Editing with Moving, Resizing, Flipping\n\nYou can achieve single object moving, resizing, flipping in **Object Moving, Resizing and Flipping** page, \nfor multi-object editing like swapping and addition, you can turn to **Multi-Layered Editing** page.  \n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"docs/multi.jpg\" width=\"700\"/\u003e\n\u003c/div\u003e\n\n### 💡Cross-Image Composition  \n\nBy choosing one image as the background and specifying the position, size, and placement order of the foreground images, we can achieve cross-image composition. You can try examples on the **Multi-Layered Editing** page.\n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"docs/cross.jpg\" width=\"700\"/\u003e\n\u003c/div\u003e\n\n### 💡Typography Retyping  \n\nTypography retyping refers to the specific use of design elements, which you can achieve on the **Multi-Layered Editing** page.  \n\n\u003cdiv align=\"center\"\u003e\n    \u003cimg src=\"docs/retype.jpg\" width=\"700\"/\u003e\n\u003c/div\u003e\n\n## Acknowledgements  \n\nOur project benefits from the contributions of several outstanding projects and techniques. We express our gratitude to:\n\n- [**Prompt-to-Prompt**](https://github.com/google/prompt-to-prompt.git): For innovative approaches in prompt engineering. \n\n- [**Proximal-Guidance**](https://github.com/phymhan/prompt-to-prompt.git): For their cutting-edge inversion technique, significantly improving our model's performance. \n\n- [**DragonDiffusion**](https://github.com/MC-E/DragonDiffusion.git): For inspiration on Gradio interface and efficient SAM API integration.\n\nEach of these projects has played a crucial role in the development of our work. We thank their contributors for sharing their expertise and resources with the community.\n\n## BibTeX\n\n```bibtex\n@misc{jia2024designedit,\n  title={DesignEdit: Multi-Layered Latent Decomposition and Fusion for Unified \u0026 Accurate Image Editing},\n  author={Yueru Jia and Yuhui Yuan and Aosong Cheng and Chuke Wang and Ji Li and Huizhu Jia and Shanghang Zhang},\n  year={2024},\n  eprint={2403.14487},\n  archivePrefix={arXiv},\n  primaryClass={cs.CV}\n}\n```\n\n\n\n","funding_links":[],"categories":["Projekte"],"sub_categories":["🌄 Image"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdesign-edit%2FDesignEdit","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdesign-edit%2FDesignEdit","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdesign-edit%2FDesignEdit/lists"}