# guided-diffusion

This is the codebase for [Diffusion Models Beat GANs on Image Synthesis](http://arxiv.org/abs/2105.05233).

This repository is based on [openai/improved-diffusion](https://github.com/openai/improved-diffusion), with modifications for classifier conditioning and architecture improvements.

# Download pre-trained models

We have released checkpoints for the main models in the paper.
Before using these models, please review the corresponding [model card](model-card.md) to understand the intended use and limitations of these models.

Here are the download links for each model checkpoint:

 * 64x64 classifier: [64x64_classifier.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/64x64_classifier.pt)
 * 64x64 diffusion: [64x64_diffusion.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/64x64_diffusion.pt)
 * 128x128 classifier: [128x128_classifier.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/128x128_classifier.pt)
 * 128x128 diffusion: [128x128_diffusion.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/128x128_diffusion.pt)
 * 256x256 classifier: [256x256_classifier.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_classifier.pt)
 * 256x256 diffusion: [256x256_diffusion.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion.pt)
 * 256x256 diffusion (not class conditional): [256x256_diffusion_uncond.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt)
 * 512x512 classifier: [512x512_classifier.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/512x512_classifier.pt)
 * 512x512 diffusion: [512x512_diffusion.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/512x512_diffusion.pt)
 * 64x64 -> 256x256 upsampler: [64_256_upsampler.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/64_256_upsampler.pt)
 * 128x128 -> 512x512 upsampler: [128_512_upsampler.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/128_512_upsampler.pt)
 * LSUN bedroom: [lsun_bedroom.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/lsun_bedroom.pt)
 * LSUN cat: [lsun_cat.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/lsun_cat.pt)
 * LSUN horse: [lsun_horse.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/lsun_horse.pt)
 * LSUN horse (no dropout): [lsun_horse_nodropout.pt](https://openaipublic.blob.core.windows.net/diffusion/jul-2021/lsun_horse_nodropout.pt)

# Sampling from pre-trained models

To sample from these models, you can use the `classifier_sample.py`, `image_sample.py`, and `super_res_sample.py` scripts.
Here, we provide flags for sampling from all of these models.
We assume that you have downloaded the relevant model checkpoints into a folder called `models/`.

For these examples, we will generate 100 samples with batch size 4. Feel free to change these values.

```
SAMPLE_FLAGS="--batch_size 4 --num_samples 100 --timestep_respacing 250"
```

## Classifier guidance

Note that for these sampling runs you can set `--classifier_scale 0` to sample from the base diffusion model.
You may also use the `image_sample.py` script instead of `classifier_sample.py` in that case.

 * 64x64 model:

```
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --dropout 0.1 --image_size 64 --learn_sigma True --noise_schedule cosine --num_channels 192 --num_head_channels 64 --num_res_blocks 3 --resblock_updown True --use_new_attention_order True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 1.0 --classifier_path models/64x64_classifier.pt --classifier_depth 4 --model_path models/64x64_diffusion.pt $SAMPLE_FLAGS
```

 * 128x128 model:

```
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 128 --learn_sigma True --noise_schedule linear --num_channels 256 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 0.5 --classifier_path models/128x128_classifier.pt --model_path models/128x128_diffusion.pt $SAMPLE_FLAGS
```
 * 256x256 model:

```
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 1.0 --classifier_path models/256x256_classifier.pt --model_path models/256x256_diffusion.pt $SAMPLE_FLAGS
```

 * 256x256 model (unconditional):

```
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 10.0 --classifier_path models/256x256_classifier.pt --model_path models/256x256_diffusion_uncond.pt $SAMPLE_FLAGS
```

 * 512x512 model:

```
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --image_size 512 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 False --use_scale_shift_norm True"
python classifier_sample.py $MODEL_FLAGS --classifier_scale 4.0 --classifier_path models/512x512_classifier.pt --model_path models/512x512_diffusion.pt $SAMPLE_FLAGS
```

## Upsampling

For these runs, we assume you have some base samples in a file `64_samples.npz` or `128_samples.npz` for the two respective models.

 * 64 -> 256:

```
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --diffusion_steps 1000 --large_size 256 --small_size 64 --learn_sigma True --noise_schedule linear --num_channels 192 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python super_res_sample.py $MODEL_FLAGS --model_path models/64_256_upsampler.pt --base_samples 64_samples.npz $SAMPLE_FLAGS
```

 * 128 -> 512:

```
MODEL_FLAGS="--attention_resolutions 32,16 --class_cond True --diffusion_steps 1000 --large_size 512 --small_size 128 --learn_sigma True --noise_schedule linear --num_channels 192 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python super_res_sample.py $MODEL_FLAGS --model_path models/128_512_upsampler.pt $SAMPLE_FLAGS --base_samples 128_samples.npz
```

## LSUN models

These models are class-unconditional and correspond to a single LSUN class. Here, we show how to sample from `lsun_bedroom.pt`, but the other two LSUN checkpoints should work as well:

```
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --dropout 0.1 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python image_sample.py $MODEL_FLAGS --model_path models/lsun_bedroom.pt $SAMPLE_FLAGS
```

You can sample from `lsun_horse_nodropout.pt` by changing the dropout flag:

```
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond False --diffusion_steps 1000 --dropout 0.0 --image_size 256 --learn_sigma True --noise_schedule linear --num_channels 256 --num_head_channels 64 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
python image_sample.py $MODEL_FLAGS --model_path models/lsun_horse_nodropout.pt $SAMPLE_FLAGS
```

Note that for these models, the best samples result from using 1000 timesteps:

```
SAMPLE_FLAGS="--batch_size 4 --num_samples 100 --timestep_respacing 1000"
```

# Results

This table summarizes our ImageNet results for pure guided diffusion models:

| Dataset          | FID  | Precision | Recall |
|------------------|------|-----------|--------|
| ImageNet 64x64   | 2.07 | 0.74      | 0.63   |
| ImageNet 128x128 | 2.97 | 0.78      | 0.59   |
| ImageNet 256x256 | 4.59 | 0.82      | 0.52   |
| ImageNet 512x512 | 7.72 | 0.87      | 0.42   |

This table shows the best results for high resolutions when using upsampling and guidance together:

| Dataset          | FID  | Precision | Recall |
|------------------|------|-----------|--------|
| ImageNet 256x256 | 3.94 | 0.83      | 0.53   |
| ImageNet 512x512 | 3.85 | 0.84      | 0.53   |

Finally, here are the unguided results on individual LSUN classes:

| Dataset      | FID  | Precision | Recall |
|--------------|------|-----------|--------|
| LSUN Bedroom | 1.90 | 0.66      | 0.51   |
| LSUN Cat     | 5.57 | 0.63      | 0.52   |
| LSUN Horse   | 2.57 | 0.71      | 0.55   |

# Training models

Training diffusion models is described in the [parent repository](https://github.com/openai/improved-diffusion). Training a classifier is similar. We assume you have put training hyperparameters into a `TRAIN_FLAGS` variable, and classifier hyperparameters into a `CLASSIFIER_FLAGS` variable. Then you can run:

```
mpiexec -n N python scripts/classifier_train.py --data_dir path/to/imagenet $TRAIN_FLAGS $CLASSIFIER_FLAGS
```

Make sure to divide the batch size in `TRAIN_FLAGS` by the number of MPI processes you are using.

Here are flags for training the 128x128 classifier.
You can modify these for training classifiers at other resolutions:

```sh
TRAIN_FLAGS="--iterations 300000 --anneal_lr True --batch_size 256 --lr 3e-4 --save_interval 10000 --weight_decay 0.05"
CLASSIFIER_FLAGS="--image_size 128 --classifier_attention_resolutions 32,16,8 --classifier_depth 2 --classifier_width 128 --classifier_pool attention --classifier_resblock_updown True --classifier_use_scale_shift_norm True"
```

For sampling from a 128x128 classifier-guided model with 25-step DDIM:

```sh
MODEL_FLAGS="--attention_resolutions 32,16,8 --class_cond True --image_size 128 --learn_sigma True --num_channels 256 --num_heads 4 --num_res_blocks 2 --resblock_updown True --use_fp16 True --use_scale_shift_norm True"
CLASSIFIER_FLAGS="--image_size 128 --classifier_attention_resolutions 32,16,8 --classifier_depth 2 --classifier_width 128 --classifier_pool attention --classifier_resblock_updown True --classifier_use_scale_shift_norm True --classifier_scale 1.0 --classifier_use_fp16 True"
SAMPLE_FLAGS="--batch_size 4 --num_samples 50000 --timestep_respacing ddim25 --use_ddim True"
mpiexec -n N python scripts/classifier_sample.py \
    --model_path /path/to/model.pt \
    --classifier_path path/to/classifier.pt \
    $MODEL_FLAGS $CLASSIFIER_FLAGS $SAMPLE_FLAGS
```

To sample for 250 timesteps without DDIM, replace `--timestep_respacing ddim25` with `--timestep_respacing 250`, and replace `--use_ddim True` with `--use_ddim False`.
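The upsampling commands above read their low-resolution inputs from `.npz` files such as `64_samples.npz`, and the sampling scripts write their outputs in the same format. A minimal sketch of the assumed layout: `arr_0` holds a uint8 `(N, H, W, C)` image batch, and class-conditional runs also store integer labels as `arr_1`. These key names are numpy's default `savez` naming and an assumption here, so verify them against your own script output:

```python
import numpy as np

# Assumed layout of a base-samples file such as 64_samples.npz:
# arr_0 = uint8 image batch in (N, H, W, C) order, and class-conditional
# runs also store integer class labels as arr_1 (assumption -- verify).
# Build a tiny stand-in file, then read it back the way the upsampler
# would consume it.
images = np.random.randint(0, 256, size=(4, 64, 64, 3), dtype=np.uint8)
labels = np.random.randint(0, 1000, size=(4,), dtype=np.int64)
np.savez("64_samples.npz", images, labels)  # positional args save as arr_0, arr_1

data = np.load("64_samples.npz")
print(data["arr_0"].shape, data["arr_0"].dtype)  # (4, 64, 64, 3) uint8
print(data["arr_1"].shape)                       # (4,)
```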