# MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation (ICML 2023)
## [<a href="https://multidiffusion.github.io/" target="_blank">Project Page</a>]

[![arXiv](https://img.shields.io/badge/arXiv-MultiDiffusion-b31b1b.svg)](https://arxiv.org/abs/2302.08113)
![Pytorch](https://img.shields.io/badge/PyTorch->=1.10.0-Red?logo=pytorch)
[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/weizmannscience/MultiDiffusion)
[![Replicate](https://replicate.com/cjwbw/multidiffusion/badge)](https://replicate.com/cjwbw/multidiffusion)

![teaser](imgs/teaser.jpg)

**MultiDiffusion** is a unified framework that enables versatile and controllable image generation using a pre-trained text-to-image diffusion model, without any further training or fine-tuning, as described in the <a href="https://arxiv.org/abs/2302.08113" target="_blank">paper</a>.

>Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image and fast adaptation to new tasks still remain an open challenge, currently addressed mostly by costly and long re-training and fine-tuning or by ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or fine-tuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints.
We show that MultiDiffusion can be readily applied to generate high-quality and diverse images that adhere to user-provided controls, such as a desired aspect ratio (e.g., panorama) and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.

For more, see the [project webpage](https://multidiffusion.github.io).

## Diffusers Integration [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/omerbt/MultiDiffusion/blob/master/MultiDiffusion_Panorama.ipynb)
MultiDiffusion Text2Panorama is integrated into [diffusers](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/panorama) and can be run as follows:
```python
import torch
from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

model_ckpt = "stabilityai/stable-diffusion-2-base"
scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
pipe = StableDiffusionPanoramaPipeline.from_pretrained(
    model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

prompt = "a photo of the dolomites"
image = pipe(prompt).images[0]
```

## Gradio Demo
We provide a Gradio UI for our method.
Running the following command in a terminal will launch the demo:
```shell
python app_gradio.py
```
This demo is also hosted on Hugging Face [here](https://huggingface.co/spaces/weizmannscience/MultiDiffusion).

## Spatial Controls

A web demo for the spatial controls is hosted on Hugging Face [here](https://huggingface.co/spaces/weizmannscience/multidiffusion-region-based).

## Citation
```bibtex
@article{bar2023multidiffusion,
  title={MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation},
  author={Bar-Tal, Omer and Yariv, Lior and Lipman, Yaron and Dekel, Tali},
  journal={arXiv preprint arXiv:2302.08113},
  year={2023}
}
```
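## Method Sketch

At the heart of MultiDiffusion, several diffusion denoising processes (one per crop or region) are bound together by an optimization problem, whose solution in the simplest case amounts to averaging the overlapping per-window predictions at each position. The toy 1-D sketch below illustrates that fusion step in plain Python; the window/stride values and the stand-in "denoiser" are hypothetical choices for illustration only, not the repository's actual implementation.

```python
# Toy 1-D sketch of MultiDiffusion's fusion step: every window proposes a
# denoised update for its region, and overlapping proposals are fused by
# per-element averaging. Window size, stride, and the stand-in "denoiser"
# below are hypothetical, chosen only for illustration.

def fuse_windows(latent, denoise_window, window=4, stride=2):
    n = len(latent)
    acc = [0.0] * n  # running sum of per-window proposals
    cnt = [0] * n    # number of windows covering each element
    for start in range(0, n - window + 1, stride):
        proposal = denoise_window(latent[start:start + window])
        for i, v in enumerate(proposal):
            acc[start + i] += v
            cnt[start + i] += 1
    # average overlapping proposals; leave uncovered elements unchanged
    return [a / c if c else x for a, c, x in zip(acc, cnt, latent)]

latent = [float(i) for i in range(12)]
# stand-in "denoiser" that simply halves each value in its window
fused = fuse_windows(latent, lambda w: [v * 0.5 for v in w])
```

Because every window applies the same update here, the fused result coincides with applying that update globally; with window-dependent predictions, the averaging blends them smoothly in the overlap regions, which is what lets one fixed-size diffusion model produce a coherent image larger than its native resolution.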