{"id":48468221,"url":"https://github.com/dyedd/deepspeed-diffusers","last_synced_at":"2026-04-07T05:30:27.934Z","repository":{"id":230350647,"uuid":"778694805","full_name":"dyedd/deepspeed-diffusers","owner":"dyedd","description":"🚀 原生使用 Deepspeed 训练 Diffusers | Native Training of Diffusers with Deepspeed","archived":false,"fork":false,"pushed_at":"2025-01-19T11:13:22.000Z","size":61,"stargazers_count":12,"open_issues_count":1,"forks_count":1,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-01-19T12:25:04.630Z","etag":null,"topics":["deepspeed","diffusers","diffusion","model"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/dyedd.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-03-28T08:09:04.000Z","updated_at":"2025-01-19T11:13:24.000Z","dependencies_parsed_at":"2024-06-06T14:37:11.594Z","dependency_job_id":"e498623e-c367-480a-af77-50cda12ae0ee","html_url":"https://github.com/dyedd/deepspeed-diffusers","commit_stats":null,"previous_names":["dyedd/deepspeed-diffusers"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/dyedd/deepspeed-diffusers","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dyedd%2Fdeepspeed-diffusers","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dyedd%2Fdeepspeed-diffusers/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dyedd%2Fdeepspeed-diffusers/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repos
itories/dyedd%2Fdeepspeed-diffusers/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/dyedd","download_url":"https://codeload.github.com/dyedd/deepspeed-diffusers/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/dyedd%2Fdeepspeed-diffusers/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":286080680,"owners_count":31501903,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2026-04-07T03:10:19.677Z","status":"ssl_error","status_checked_at":"2026-04-07T03:10:13.982Z","response_time":105,"last_error":"SSL_connect returned=1 errno=0 peeraddr=140.82.121.5:443 state=error: unexpected eof while reading","robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":false,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["deepspeed","diffusers","diffusion","model"],"created_at":"2026-04-07T05:30:25.947Z","updated_at":"2026-04-07T05:30:27.919Z","avatar_url":"https://github.com/dyedd.png","language":"Python","readme":"# 🧩 Deepspeed-diffusers\n\n*Read this in [English](README_en.md).*\n\n`deepspeed-diffusers` is a project that natively combines the `Deepspeed`, `Diffusers`, and `Peft` libraries to train diffusion models.\n\n## Motivation\nAnyone who has trained with `Diffusers` knows that `Huggingface` already offers another product, `Accelerate`, for parallel training of diffusion models.\n\nSo why did I still open-source this project?\n- An amusing fact: I initially failed to read carefully enough to notice that the `diffusers` training examples partially support `Zero3`, rather than not supporting it at all. As it turned out, this project's `Zero3` implementation ended up taking the very same approach.\n- One consolation: although integrated frameworks are now plentiful and convenient, they bundle so much that secondary development becomes difficult!\n- At the very least, I also provide scripts for the `Slurm` scheduler, using and fixing the `Slurm` launcher that `Deepspeed` nominally supports but that never quite worked.\n\n\u003e 
This project was born to let `Deepspeed` fully exercise its capabilities on its own.\n\nWhile wiring things together natively, this project also borrowed code from [OvJat](https://github.com/OvJat/DeepSpeedTutorial), [afiaka87](https://github.com/afiaka87/latent-diffusion-deepspeed), and [liucongg](https://github.com/liucongg/ChatGLM-Finetuning); many thanks to these projects for their work!\n\nCompared with those projects, what are this project's shortcomings?\n\n1. In my implementation, `ZeRO-1,2` only supports `UNet2DConditionModel`. I have not yet pinpointed exactly where this falls behind the others; more thorough testing is needed.\n2. Checkpoints cannot be saved during LoRA training, which is partly due to tight coupling with the `Peft` library; the workaround would be writing custom LoRA fine-tuning logic.\n\nBeyond that, this project offers the features `Accelerate` has, with simplicity and practicality as its selling point 🥹\n\n## Recent updates 🔥\n\n- [2024/05/30] As in the `Accelerate` example, the `stable diffusion` class is no longer fully wrapped in DeepSpeed; only the `Unet` is loaded into it, saving roughly 3GB+ of GPU memory\n- [2024/05/25] Support custom datasets; support EMA\n- [2024/04/18] Fix mismatched training text; add inference validation\n- [2024/04/10] Support `slurm` multi-node and single-node training.\n- [2024/04/01] Support `wandb`.\n- [2024/03/31] Support `lora` fine-tuning, fix generated images always coming out black after full fine-tuning, add a `slurm` single-node training script\n- [2024/03/29] **`deepspeed-diffusers`** released, supporting full `Unet` fine-tuning.\n\n## Demo\n\n### Install dependencies\n\nBefore running the scripts, make sure all dependencies are installed:\n\n- Make sure the source is up to date: `git clone https://github.com/dyedd/deepspeed-diffusers`\n- Then `cd` into the folder and run: `pip install -r requirements.txt`\n\n### [Pokémon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) example\n\n\u003e It is strongly recommended to download the dataset locally rather than let the script auto-download it into the cache. Otherwise, losing network access one day also means losing access to `Huggingface`, and auto-download is not very friendly for users in mainland China.\n\nSet `dataset_dir` in cfg.json to the directory you downloaded to.\n\n### Download weights\n\nThe example results below were all produced with the [stable-diffusion-1.5](https://huggingface.co/runwayml/stable-diffusion-v1-5) weights.\n\nNote: if you use the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768\nmodel, change `resolution` in `cfg/default.json` to 768.\n\nHere too, it is strongly recommended to `git clone` the weights yourself rather than auto-download them from huggingface; then update `pretrained_model_name_or_path` in `cfg/default.json`.\n\n\n### Configuration file explained\nAll parameters the project needs are in `cfg/default.json`.\nEach field is explained below:\n- \"num_epochs\": total number of training epochs.\n- \"validation_epochs\": run validation every this many epochs.\n- \"max_train_steps\": cap on the number of training steps; 0 means no limit.\n- \"lr_warmup_steps\": number of learning-rate warmup steps.\n- \"save_interval\": number of steps between model saves.\n- \"seed\": random seed, for reproducibility of experiments.\n- \"validation_prompts\": prompts used during validation.\n- \"pretrained_model_name_or_path\": name or path of the pretrained model.\n- \"dataset_dir\": dataset directory.\n- \"imagefolder\": whether to use a folder of **images** as the data source.\n- \"checkpoint_dir\": 
directory where checkpoints (model states) are saved.\n- \"output_dir\": directory for outputs (training logs, generated images, etc.).\n- \"use_lora\": whether to use LoRA (Low-Rank Adaptation), plus its parameters.\n  - \"action\": whether to enable LoRA.\n  - \"rank\": the LoRA rank.\n  - \"alpha\": the LoRA alpha parameter.\n- \"dataloader_num_workers\": number of data-loader worker threads.\n- \"resume_from_checkpoint\": whether to resume training from a checkpoint.\n- \"use_ema\": whether to use an Exponential Moving Average.\n- \"offline\": whether to run `wandb` in offline mode.\n- \"resolution\": image resolution.\n- \"center_crop\": whether to center-crop images.\n- \"random_flip\": whether to randomly flip images.\n- The remaining fields are common `DeepSpeed` options; consult the official documentation if anything is unclear. If you add fields, remember to update the `deepspeed_config_from_args` function in `utils.py` as well.\n\n### Training\n\nThis project supports two training modes.\n\n- Full fine-tuning of the unet: with mixed-precision FP16 and Zero disabled, a per-GPU batch_size of 4 uses roughly 11.61 to 23.32GB of GPU memory.\n- Lora+unet: with mixed-precision FP16 and Zero disabled, a per-GPU batch_size of 4 uses roughly 2 to 13GB.\n\n\u003e With gradient accumulation and Zero enabled, even full fine-tuning becomes feasible on GPUs with less than 16GB!\n\n\nOn a local machine, run the script directly with `bash scripts/train_text_to_image.sh`;\n\non a slurm system, after editing a few details, submit it with `sbatch scripts/train_text_to_image.slurm`.\n\n### Inference/sampling\nBoth local and Slurm scheduling are supported; the commands are, respectively,\n```\nbash scripts/test_text_to_image.sh\nsbatch scripts/test_text_to_image.slurm\n```\n\n## Possible issues\n\n\u003e [!NOTE]\n\u003e 1. The generated images are all black, or you get the error `RuntimeWarning: invalid value encountered in cast images = (images * 255).round().astype(\"uint8\")`\n\u003e\n\u003e Note that the optimizer and learning-rate settings in this project's configuration file are tuned only for the Pokémon dataset. This problem occurs when the optimizer and learning rate do not suit your training set, so the training loss stays NaN.\n\u003e 2. 
The submitted `slurm` script exits after running for 0 seconds\n\u003e This is because the `slurm` script writes its logs to a `log` folder; if that folder does not exist yet, the job cannot run.\n\n## Citation\nIf you use `Deepspeed-diffusers` in a paper or project, please cite it with the following `BibTeX`.\n```\n@Misc{Deepspeed-diffusers,\n  title =        {Deepspeed-diffusers: training diffusers with deepspeed.},\n  author =       {Ximiao Dong},\n  howpublished = {\\url{https://github.com/dyedd/deepspeed-diffusers}},\n  year =         {2024}\n}\n```","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdyedd%2Fdeepspeed-diffusers","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdyedd%2Fdeepspeed-diffusers","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdyedd%2Fdeepspeed-diffusers/lists"}