{"id":23425335,"url":"https://github.com/wooyeolbaek/attention-map-diffusers","last_synced_at":"2025-04-08T12:09:09.556Z","repository":{"id":210625828,"uuid":"726478743","full_name":"wooyeolbaek/attention-map-diffusers","owner":"wooyeolbaek","description":"🚀 Cross attention map tools for huggingface/diffusers","archived":false,"fork":false,"pushed_at":"2025-01-18T18:24:38.000Z","size":8272,"stargazers_count":247,"open_issues_count":8,"forks_count":19,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-04-08T12:09:04.360Z","etag":null,"topics":["attention-map","cross-attention","cross-attention-diffusers","cross-attention-map","diffusers","huggingface","stable-diffusion","text-to-image","visualization"],"latest_commit_sha":null,"homepage":"https://huggingface.co/spaces/We-Want-GPU/diffusers-cross-attention-map-SDXL-t2i","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/wooyeolbaek.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-12-02T14:20:51.000Z","updated_at":"2025-04-08T08:40:14.000Z","dependencies_parsed_at":"2024-05-07T17:45:29.597Z","dependency_job_id":"c68e0993-bdca-4f4c-a9a8-8abc027cd805","html_url":"https://github.com/wooyeolbaek/attention-map-diffusers","commit_stats":null,"previous_names":["wooyeolbaek/attention-map","wooyeolbaek/attention-map-diffusers"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wooyeolbaek%2Fattention-map-diffusers","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub
/repositories/wooyeolbaek%2Fattention-map-diffusers/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wooyeolbaek%2Fattention-map-diffusers/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wooyeolbaek%2Fattention-map-diffusers/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/wooyeolbaek","download_url":"https://codeload.github.com/wooyeolbaek/attention-map-diffusers/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247838444,"owners_count":21004580,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["attention-map","cross-attention","cross-attention-diffusers","cross-attention-map","diffusers","huggingface","stable-diffusion","text-to-image","visualization"],"created_at":"2024-12-23T05:11:24.805Z","updated_at":"2025-04-08T12:09:09.520Z","avatar_url":"https://github.com/wooyeolbaek.png","language":"Python","readme":"# Cross Attention Map Visualization\n\n[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/We-Want-GPU/diffusers-cross-attention-map-SDXL-t2i)\n\nThanks to the HuggingFace [Diffusers](https://github.com/huggingface/diffusers) team for the GPU sponsorship!\n\nThis repository is for extracting and visualizing cross attention maps, based on the latest [Diffusers](https://github.com/huggingface/diffusers) code (`v0.32.0`).\n\nFor error reports or feature requests, feel free to raise an issue.\n\n## Update Log.\n[2024-12-22] It is now compatible 
with _\"Stable Diffusion 3.5\"_, _\"Flux-dev\"_ and _\"Flux-schnell\"_! (\"Sana\" will be the focus of the next update.)\n\n[2024-12-17] Refactored and added setup.py.\n\n[2024-11-12] _\"Stable Diffusion 3\"_ is compatible and supports _batch operations_! (Flux and \"Stable Diffusion 3.5\" are not compatible yet.)\n\n[2024-07-04] Added features for _saving attention maps based on timesteps and layers_.\n\n\n## Compatible models.\n\u003c!-- Compatible with various models, including both UNet/DiT based models listed below. --\u003e\nCompatible with various models listed below.\n- [black-forest-labs/FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell)\n- [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev)\n- [stabilityai/stable-diffusion-3.5-medium](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium)\n- [stabilityai/stable-diffusion-3-medium-diffusers](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers)\n- [stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)\n- [stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)\n- ...\n\n\u003c!-- - [sdxl-turbo](https://huggingface.co/stabilityai/sdxl-turbo) --\u003e\n\u003c!-- - [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) --\u003e\n\n\n## Example.\n\n\n\u003cdiv style=\"text-align: center;\"\u003e\n    \u003cimg src=\"./assets/sd3.png\" alt=\"Image 1\" width=\"400\" height=\"400\"\u003e\n    \u003cimg src=\"./assets/4--bara\u003e.png\" alt=\"Image 2\" width=\"400\" height=\"400\"\u003e\n\u003c/div\u003e\n\n\n\n\u003cdetails\u003e\n\u003csummary\u003ecap-\u003c/summary\u003e\n\u003cdiv markdown=\"1\"\u003e\n\n\u003cdiv style=\"text-align: center;\"\u003e\n    \u003cimg src=\"./assets/sd3.png\" alt=\"Image 1\" width=\"400\" height=\"400\"\u003e\n    \u003cimg src=\"./assets/2-\u003ccap-.png\" alt=\"\u003ccap-\" width=\"400\" 
height=\"400\"\u003e\n\u003c/div\u003e\n\n\u003c/div\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003e-y-\u003c/summary\u003e\n\u003cdiv markdown=\"1\"\u003e\n\n\u003cdiv style=\"text-align: center;\"\u003e\n    \u003cimg src=\"./assets/sd3.png\" alt=\"Image 1\" width=\"400\" height=\"400\"\u003e\n    \u003cimg src=\"./assets/3--y-.png\" alt=\"-y-\" width=\"400\" height=\"400\"\u003e\n\u003c/div\u003e\n\n\u003c/div\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003e-bara\u003c/summary\u003e\n\u003cdiv markdown=\"1\"\u003e\n\n\u003cdiv style=\"text-align: center;\"\u003e\n    \u003cimg src=\"./assets/sd3.png\" alt=\"Image 1\" width=\"400\" height=\"400\"\u003e\n    \u003cimg src=\"./assets/4--bara\u003e.png\" alt=\"-bara\u003e\" width=\"400\" height=\"400\"\u003e\n\u003c/div\u003e\n\n\u003c/div\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003ehello\u003c/summary\u003e\n\u003cdiv markdown=\"1\"\u003e\n\n\u003cdiv style=\"text-align: center;\"\u003e\n    \u003cimg src=\"./assets/sd3.png\" alt=\"Image 1\" width=\"400\" height=\"400\"\u003e\n    \u003cimg src=\"./assets/10-\u003chello\u003e.png\" alt=\"\u003chello\u003e\" width=\"400\" height=\"400\"\u003e\n\u003c/div\u003e\n\n\u003c/div\u003e\n\u003c/details\u003e\n\n\n\u003cdetails\u003e\n\u003csummary\u003eworld\u003c/summary\u003e\n\u003cdiv markdown=\"1\"\u003e\n\n\u003cdiv style=\"text-align: center;\"\u003e\n    \u003cimg src=\"./assets/sd3.png\" alt=\"Image 1\" width=\"400\" height=\"400\"\u003e\n    \u003cimg src=\"./assets/11-\u003cworld\u003e.png\" alt=\"\u003cworld\u003e\" width=\"400\" height=\"400\"\u003e\n\u003c/div\u003e\n\n\u003c/div\u003e\n\u003c/details\u003e\n\n\n\n## Demo.\n```bash\ngit clone https://github.com/wooyeolBaek/attention-map-diffusers.git\ncd attention-map-diffusers\npip install -e .\n```\nor\n```bash\npip install attention_map_diffusers\n```\n\n### Flux-dev\n```python\nimport torch\nfrom diffusers import 
FluxPipeline\nfrom attention_map_diffusers import (\n    attn_maps,\n    init_pipeline,\n    save_attention_maps\n)\n\npipe = FluxPipeline.from_pretrained(\n    \"black-forest-labs/FLUX.1-dev\",\n    torch_dtype=torch.bfloat16\n)\n# pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to the CPU; remove this if you have enough GPU memory\npipe.to('cuda')\n\n##### 1. Replace modules and Register hook #####\npipe = init_pipeline(pipe)\n################################################\n\n# recommend not using batch operations for flux, as cpu memory could be exceeded.\nprompts = [\n    # \"A photo of a puppy wearing a hat.\",\n    \"A capybara holding a sign that reads Hello World.\",\n]\n\nimages = pipe(\n    prompts,\n    num_inference_steps=15,\n    guidance_scale=4.5,\n).images\n\nfor batch, image in enumerate(images):\n    image.save(f'{batch}-flux-dev.png')\n\n##### 2. Process and Save attention map #####\nsave_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps-flux-dev', unconditional=False)\n#############################################\n```\n\n### Flux-schnell\n```python\nimport torch\nfrom diffusers import FluxPipeline\nfrom attention_map_diffusers import (\n    attn_maps,\n    init_pipeline,\n    save_attention_maps\n)\n\npipe = FluxPipeline.from_pretrained(\n    \"black-forest-labs/FLUX.1-schnell\",\n    torch_dtype=torch.bfloat16\n)\n# pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to the CPU; remove this if you have enough GPU memory\npipe.to('cuda')\n\n##### 1. 
Replace modules and Register hook #####\npipe = init_pipeline(pipe)\n################################################\n\n# recommend not using batch operations for flux, as cpu memory could be exceeded.\nprompts = [\n    # \"A photo of a puppy wearing a hat.\",\n    \"A capybara holding a sign that reads Hello World.\",\n]\n\nimages = pipe(\n    prompts,\n    num_inference_steps=15,\n    guidance_scale=4.5,\n).images\n\nfor batch, image in enumerate(images):\n    image.save(f'{batch}-flux-schnell.png')\n\n##### 2. Process and Save attention map #####\nsave_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps-flux-schnell', unconditional=False)\n#############################################\n```\n\n### Stable Diffusion 3.5\n```python\nimport torch\nfrom diffusers import StableDiffusion3Pipeline\nfrom attention_map_diffusers import (\n    attn_maps,\n    init_pipeline,\n    save_attention_maps\n)\n\npipe = StableDiffusion3Pipeline.from_pretrained(\n    \"stabilityai/stable-diffusion-3.5-medium\",\n    torch_dtype=torch.bfloat16\n)\npipe = pipe.to(\"cuda\")\n\n##### 1. Replace modules and Register hook #####\npipe = init_pipeline(pipe)\n################################################\n\n# recommend not using batch operations for sd3.5, as cpu memory could be exceeded.\nprompts = [\n    # \"A photo of a puppy wearing a hat.\",\n    \"A capybara holding a sign that reads Hello World.\",\n]\n\nimages = pipe(\n    prompts,\n    num_inference_steps=15,\n    guidance_scale=4.5,\n).images\n\nfor batch, image in enumerate(images):\n    image.save(f'{batch}-sd3-5.png')\n\n##### 2. 
Process and Save attention map #####\nsave_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps-sd3-5', unconditional=True)\n#############################################\n```\n\n### Stable Diffusion 3.0\n```python\nimport torch\nfrom diffusers import StableDiffusion3Pipeline\nfrom attention_map_diffusers import (\n    attn_maps,\n    init_pipeline,\n    save_attention_maps\n)\n\n\npipe = StableDiffusion3Pipeline.from_pretrained(\n    \"stabilityai/stable-diffusion-3-medium-diffusers\",\n    torch_dtype=torch.bfloat16\n)\npipe = pipe.to(\"cuda\")\n\n##### 1. Replace modules and Register hook #####\npipe = init_pipeline(pipe)\n################################################\n\n# recommend not using batch operations for sd3, as cpu memory could be exceeded.\nprompts = [\n    # \"A photo of a puppy wearing a hat.\",\n    \"A capybara holding a sign that reads Hello World.\",\n]\n\nimages = pipe(\n    prompts,\n    num_inference_steps=15,\n    guidance_scale=4.5,\n).images\n\nfor batch, image in enumerate(images):\n    image.save(f'{batch}-sd3.png')\n\n##### 2. Process and Save attention map #####\nsave_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps', unconditional=True)\n#############################################\n```\n\n### Stable Diffusion XL\n```python\nimport torch\nfrom diffusers import DiffusionPipeline\nfrom attention_map_diffusers import (\n    attn_maps,\n    init_pipeline,\n    save_attention_maps\n)\n\n\npipe = DiffusionPipeline.from_pretrained(\n    \"stabilityai/stable-diffusion-xl-base-1.0\",\n    torch_dtype=torch.float16,\n)\npipe = pipe.to(\"cuda\")\n\n##### 1. 
Replace modules and Register hook #####\npipe = init_pipeline(pipe)\n################################################\n\nprompts = [\n    \"A photo of a puppy wearing a hat.\",\n    \"A capybara holding a sign that reads Hello World.\",\n]\n\nimages = pipe(\n    prompts,\n    num_inference_steps=15,\n).images\n\nfor batch, image in enumerate(images):\n    image.save(f'{batch}-sdxl.png')\n\n##### 2. Process and Save attention map #####\nsave_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps', unconditional=True)\n#############################################\n```\n\n### Stable Diffusion 2.1\n```python\nimport torch\nfrom diffusers import DiffusionPipeline\nfrom attention_map_diffusers import (\n    attn_maps,\n    init_pipeline,\n    save_attention_maps\n)\n\n\npipe = DiffusionPipeline.from_pretrained(\n    \"stabilityai/stable-diffusion-2-1\",\n    torch_dtype=torch.float16,\n)\npipe = pipe.to(\"cuda\")\n\n##### 1. Replace modules and Register hook #####\npipe = init_pipeline(pipe)\n################################################\n\nprompts = [\n    \"A photo of a puppy wearing a hat.\",\n    \"A capybara holding a sign that reads Hello World.\",\n]\n\nimages = pipe(\n    prompts,\n    num_inference_steps=15,\n).images\n\nfor batch, image in enumerate(images):\n    image.save(f'{batch}-sd2-1.png')\n\n##### 2. Process and Save attention map #####\nsave_attention_maps(attn_maps, pipe.tokenizer, prompts, base_dir='attn_maps', unconditional=True)\n#############################################\n\n```","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwooyeolbaek%2Fattention-map-diffusers","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fwooyeolbaek%2Fattention-map-diffusers","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwooyeolbaek%2Fattention-map-diffusers/lists"}