{"id":28412930,"url":"https://github.com/teriks/dgenerate","last_synced_at":"2025-06-24T18:31:39.775Z","repository":{"id":175233932,"uuid":"653408376","full_name":"Teriks/dgenerate","owner":"Teriks","description":"dgenerate is a scriptable command line tool (and library) for generating images and animation sequences using stable diffusion and related techniques, with an accompanying GUI scripting environment.","archived":false,"fork":false,"pushed_at":"2025-06-20T06:47:49.000Z","size":164476,"stargazers_count":36,"open_issues_count":17,"forks_count":1,"subscribers_count":5,"default_branch":"master","last_synced_at":"2025-06-20T07:39:22.457Z","etag":null,"topics":["ai","ai-upscale","command-line","command-line-tool","cross-platform","gui-application","image-editing","image-generation","image-processing","scriptable","stable-diffusion","upscaling","video-editing","video-generation","video-processing"],"latest_commit_sha":null,"homepage":"https://dgenerate.readthedocs.io","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"bsd-3-clause","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Teriks.png","metadata":{"files":{"readme":"README.rst","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"github":"Teriks","patreon":null,"open_collective":null,"ko_fi":"teriks","tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"lfx_crowdfunding":null,"polar":null,"buy_me_a_coffee":null,"custom":null}},"created_at":"2023-06-14T02:10:53.000Z","updated_at":"2025-06-14T19:10:33.000Z","dependencies_parsed_at":"2023-09-26T13:41:14.729Z","dependency_job_id":"b35dd17d-99c8-4a90-b5d0-0052714317f0","html_url":"https://github.com/Teriks/dgenerate","commit_stats":null,"previous_names":["teriks/dgenerate"],"tags_count":87,"template":false,"template_full_name":null,"purl":"pkg:github/Teriks/dgenerate","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Teriks%2Fdgenerate","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Teriks%2Fdgenerate/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Teriks%2Fdgenerate/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Teriks%2Fdgenerate/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Teriks","download_url":"https://codeload.github.com/Teriks/dgenerate/tar.gz/refs/heads/master","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Teriks%2Fdgenerate/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":261733634,"owners_count":23201690,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai","ai-upscale","command-line","command-line-tool","cross-platform","gui-application","image-editing","image-generation"
,"image-processing","scriptable","stable-diffusion","upscaling","video-editing","video-generation","video-processing"],"created_at":"2025-06-03T02:16:55.464Z","updated_at":"2025-06-24T18:31:39.724Z","avatar_url":"https://github.com/Teriks.png","language":"Python","readme":".. _homebrew_1: https://brew.sh/\n\n.. _optimum-quanto_library_1: https://github.com/huggingface/optimum-quanto\n.. _vermeer_canny_edged.png_1: https://raw.githubusercontent.com/Teriks/dgenerate/v4.5.1/examples/media/vermeer_canny_edged.png\n\n.. _spandrel_1: https://github.com/chaiNNer-org/spandrel\n.. _ncnn_1: https://github.com/Tencent/ncnn\n\n.. _Stable_Diffusion_Web_UI_1: https://github.com/AUTOMATIC1111/stable-diffusion-webui\n.. _CivitAI_1: https://civitai.com/\n.. _chaiNNer_1: https://github.com/chaiNNer-org/chaiNNer\n\n.. |Documentation| image:: https://readthedocs.org/projects/dgenerate/badge/?version=v4.5.1\n   :target: http://dgenerate.readthedocs.io/en/v4.5.1/\n\n.. |Latest Release| image:: https://img.shields.io/github/v/release/Teriks/dgenerate\n   :target: https://github.com/Teriks/dgenerate/releases/latest\n   :alt: GitHub Latest Release\n\n.. |Support Dgenerate| image:: https://img.shields.io/badge/Ko–fi-support%20dgenerate%20-hotpink?logo=kofi\u0026logoColor=white\n   :target: https://ko-fi.com/teriks\n   :alt: ko-fi\n\nOverview\n========\n\n**See here for v5.0.0 dev branch:** https://github.com/Teriks/dgenerate/tree/version_5.0.0\n\n**See here for v5.0.0 nightlys:** https://github.com/Teriks/dgenerate/releases/tag/pre-release\n\n----\n\n|Documentation| |Latest Release| |Support Dgenerate|\n\n``dgenerate`` is a cross-platform command line tool and library for generating images\nand animation sequences using Stable Diffusion and related models.\n\nAlongside the command line tool, this project features a syntax-highlighting\nREPL `Console UI`_ for the dgenerate configuration / scripting language, which is built on\nTkinter to be lightweight and portable. This GUI serves as an interface to dgenerate running\nin the background via the ``--shell`` option.\n\nYou can use dgenerate to generate multiple images or animated outputs using multiple\ncombinations of diffusion input parameters in batch, so that the differences in\ngenerated output can be compared / curated easily.  This can be accomplished via a single command,\nor through more advanced scripting with the built-in interpreted shell-like language if needed.\n\nAnimated output can be produced by processing every frame of a Video, GIF, WebP, or APNG through\nvarious implementations of diffusion in img2img or inpainting mode, as well as with ControlNets and\ncontrol guidance images, in any combination thereof. MP4 (h264) video can be written without memory\nconstraints related to frame count. 
Animated output can be produced by processing every frame of a video, GIF, WebP, or APNG
through various implementations of diffusion in img2img or inpainting mode, as well as
with ControlNets and control guidance images, in any combination thereof. MP4 (h264) video
can be written without memory constraints related to frame count. GIF, WebP, and PNG/APNG
output is subject to a memory constraint: all frames must exist in memory at once before
being written.

Video input of any runtime can be processed without memory constraints related to the
video size. Many video formats are supported through the use of PyAV (ffmpeg).

Animated image input such as GIF, APNG (the extension must be .apng), and WebP can also be
processed, subject to a memory constraint: all frames exist in memory at once after the
animated image is read.
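As a sketch of this animated workflow, assuming an input file ``my-video.mp4``; per the
`Help Output`_ below, ``-is`` supplies image seeds (here a video), ``-iss`` the img2img
strengths to try, and ``-af`` the animation output format:

.. code-block:: bash

    # Illustrative img2img pass over every frame of an input video.
    # Each frame is diffused at strength 0.4 and the result is written as MP4.
    dgenerate stabilityai/stable-diffusion-2-1 \
        -p "an oil painting" \
        -is "my-video.mp4" \
        -iss 0.4 \
        -af mp4 \
        -o output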
PNG, JPEG, JPEG-2000, TGA (Targa), BMP, and PSD (Photoshop) are supported as static image
inputs.

In addition to diffusion, dgenerate also supports processing any supported image, video,
or animated image with its built-in image processors, which include various edge
detectors, depth detectors, segment generation, normal map generation, pose detection,
non-diffusion AI upscaling, and more. dgenerate's image processors may be used to
pre-process image and video input to diffusion, post-process diffusion output, or process
images and video directly.

dgenerate brings many major features of the Hugging Face ``diffusers`` library directly to
the command line in a very flexible way, with a near one-to-one mapping akin to ffmpeg,
allowing for creative uses as powerful as a direct implementation in Python, with less
effort and environmental setup.

dgenerate is compatible with Hugging Face hosted models as well as typical CivitAI-hosted
models; prompt weighting and many other useful generation features are supported.

dgenerate can be easily installed on Windows via a Windows Installer MSI containing a
frozen Python environment, making setup easy for Windows users and likely to "just work"
without any dependency issues. The installer can be found in the release artifacts of each
release on the `github releases page <https://github.com/Teriks/dgenerate/releases>`_.

This software requires an Nvidia GPU supporting CUDA 12.1+, an AMD GPU supporting ROCm
(Linux only), or macOS on Apple Silicon, and supports ``python>=3.10,<3.13``. CPU
rendering is possible for some operations, but is extraordinarily slow.

For library documentation, and a better README reading experience with proper syntax
highlighting for examples and side panel navigation, please visit
`readthedocs <http://dgenerate.readthedocs.io/en/v4.5.1/>`_.

----

* `Help Output`_
* `Diffusion Feature Table <https://github.com/Teriks/dgenerate/blob/v4.5.1/FEATURE_TABLE.rst>`_

* How to install
    * `Windows Install`_
    * `Linux or WSL Install`_
    * `Linux with ROCm (AMD Cards)`_
    * `MacOS Install (Apple Silicon Only)`_
    * `Google Colab Install`_

* Usage Manual
    * `Basic Usage`_
    * `Negative Prompt`_
    * `Multiple Prompts`_
    * `Image Seeds`_
    * `Inpainting`_
    * `Per Image Seed Resizing`_
    * `Animated Output`_
    * `Animation Slicing`_
    * `Inpainting Animations`_
    * `Deterministic Output`_
    * `Specifying a specific GPU for CUDA`_
    * `Specifying a Scheduler (sampler)`_
    * `Specifying a VAE`_
    * `VAE Tiling and Slicing`_
    * `Specifying a UNet`_
    * `Specifying a Transformer (SD3 and Flux)`_
    * `Specifying an SDXL Refiner`_
    * `Specifying a Stable Cascade Decoder`_
    * `Specifying LoRAs`_
    * `Specifying IP Adapters`_
        * `basic --image-seeds specification`_
        * `img2img --image-seeds specification`_
        * `inpainting --image-seeds specification`_
        * `quoting IP Adapter image URLs with plus symbols`_
        * `animated inputs & combinatorics`_
    * `Specifying Textual Inversions (embeddings)`_
    * `Specifying Control Nets`_
        * `Flux Control Net Union Mode`_
    * `Specifying T2I Adapters`_
    * `Specifying Text Encoders`_
    * `Prompt Weighting and Enhancement`_
        * `The compel prompt weighter`_
        * `The sd-embed prompt weighter`_
    * `Utilizing CivitAI links and Other Hosted Models`_
    * `Specifying Generation Batch Size`_
    * `Batching Input Images and Inpaint Masks`_
    * `Image Processors`_
        * `Image processor arguments`_
        * `Multiple control net images, and input image batching`_
    * `Sub Commands`_
        * `Sub Command: image-process`_
        * `Sub Command: civitai-links`_
    * `Upscaling`_
        * `Upscaling with Diffusion Upscaler Models`_
        * `Upscaling with chaiNNer Compatible Torch Upscaler Models`_
        * `Upscaling with NCNN Upscaler Models`_
    * `Writing and Running Configs`_ (see the sketch following this list)
        * `Basic config syntax`_
        * `Built in template variables and functions`_
        * `Directives, and applying templating`_
        * `Setting template variables, in depth`_
        * `Setting environmental variables, in depth`_
        * `Globbing and path manipulation`_
        * `The \\print and \\echo directive`_
        * `The \\image_process directive`_
        * `The \\exec directive`_
        * `The \\download directive`_
        * `The download() template function`_
        * `The \\exit directive`_
        * `Running configs from the command line`_
        * `Config argument injection`_
    * `Writing Plugins`_
        * `Image processor plugins`_
        * `Config directive and template function plugins`_
        * `Sub-command plugins`_
        * `Prompt weighter plugins`_
    * `Console UI`_
    * `File Cache Control`_
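Below is a minimal sketch of the configuration / scripting language covered under
`Writing and Running Configs`_. The ``\set`` and ``\print`` directives come from the
section list above, while the exact quoting and templating syntax shown here
(``{{ ... }}``, Jinja-style) is illustrative rather than verbatim:

.. code-block:: text

    # A minimal, illustrative dgenerate config script.
    # \set stores a template variable, \print writes to the console,
    # and a bare argument line invokes dgenerate itself.

    \set prompt "an astronaut riding a horse"

    \print Rendering with prompt: {{ prompt }}

    stabilityai/stable-diffusion-2-1 -p {{ prompt }} -gse 2 -o output

Such a script can be read from a file with the ``--file`` option, or piped to dgenerate
on STDIN, as described in the `Help Output`_ below.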
code-block:: text\n\n    usage: dgenerate [-h] [-v] [--version] [--file | --shell | --no-stdin | --console]\n                     [--plugin-modules PATH [PATH ...]] [--sub-command SUB_COMMAND]\n                     [--sub-command-help [SUB_COMMAND ...]] [-ofm] [--templates-help [VARIABLE_NAME ...]]\n                     [--directives-help [DIRECTIVE_NAME ...]] [--functions-help [FUNCTION_NAME ...]]\n                     [-mt MODEL_TYPE] [-rev BRANCH] [-var VARIANT] [-sbf SUBFOLDER] [-atk TOKEN] [-bs INTEGER]\n                     [-bgs SIZE] [-te TEXT_ENCODER_URIS [TEXT_ENCODER_URIS ...]]\n                     [-te2 TEXT_ENCODER_URIS [TEXT_ENCODER_URIS ...]] [-un UNET_URI] [-un2 UNET_URI]\n                     [-tf TRANSFORMER_URI] [-vae VAE_URI] [-vt] [-vs] [-lra LORA_URI [LORA_URI ...]]\n                     [-lrfs LORA_FUSE_SCALE] [-ie IMAGE_ENCODER_URI] [-ipa IP_ADAPTER_URI [IP_ADAPTER_URI ...]]\n                     [-ti URI [URI ...]] [-cn CONTROLNET_URI [CONTROLNET_URI ...] | -t2i T2I_ADAPTER_URI\n                     [T2I_ADAPTER_URI ...]] [-sch SCHEDULER_URI [SCHEDULER_URI ...]] [-pag]\n                     [-pags FLOAT [FLOAT ...]] [-pagas FLOAT [FLOAT ...]] [-rpag] [-rpags FLOAT [FLOAT ...]]\n                     [-rpagas FLOAT [FLOAT ...]] [-mqo | -mco] [--s-cascade-decoder MODEL_URI] [-dqo] [-dco]\n                     [--s-cascade-decoder-prompts PROMPT [PROMPT ...]]\n                     [--s-cascade-decoder-inference-steps INTEGER [INTEGER ...]]\n                     [--s-cascade-decoder-guidance-scales INTEGER [INTEGER ...]]\n                     [--s-cascade-decoder-scheduler SCHEDULER_URI [SCHEDULER_URI ...]]\n                     [--sdxl-refiner MODEL_URI] [-rqo] [-rco]\n                     [--sdxl-refiner-scheduler SCHEDULER_URI [SCHEDULER_URI ...]] [--sdxl-refiner-edit]\n                     [--sdxl-second-prompts PROMPT [PROMPT ...]] [--sdxl-t2i-adapter-factors FLOAT [FLOAT ...]]\n                     [--sdxl-aesthetic-scores FLOAT [FLOAT ...]]\n                     [--sdxl-crops-coords-top-left COORD [COORD ...]] [--sdxl-original-size SIZE [SIZE ...]]\n                     [--sdxl-target-size SIZE [SIZE ...]] [--sdxl-negative-aesthetic-scores FLOAT [FLOAT ...]]\n                     [--sdxl-negative-original-sizes SIZE [SIZE ...]]\n                     [--sdxl-negative-target-sizes SIZE [SIZE ...]]\n                     [--sdxl-negative-crops-coords-top-left COORD [COORD ...]]\n                     [--sdxl-refiner-prompts PROMPT [PROMPT ...]]\n                     [--sdxl-refiner-clip-skips INTEGER [INTEGER ...]]\n                     [--sdxl-refiner-second-prompts PROMPT [PROMPT ...]]\n                     [--sdxl-refiner-aesthetic-scores FLOAT [FLOAT ...]]\n                     [--sdxl-refiner-crops-coords-top-left COORD [COORD ...]]\n                     [--sdxl-refiner-original-sizes SIZE [SIZE ...]]\n                     [--sdxl-refiner-target-sizes SIZE [SIZE ...]]\n                     [--sdxl-refiner-negative-aesthetic-scores FLOAT [FLOAT ...]]\n                     [--sdxl-refiner-negative-original-sizes SIZE [SIZE ...]]\n                     [--sdxl-refiner-negative-target-sizes SIZE [SIZE ...]]\n                     [--sdxl-refiner-negative-crops-coords-top-left COORD [COORD ...]] [-hnf FLOAT [FLOAT ...]]\n                     [-ri INT [INT ...]] [-rg FLOAT [FLOAT ...]] [-rgr FLOAT [FLOAT ...]] [-sc] [-d DEVICE]\n                     [-t DTYPE] [-s SIZE] [-na] [-o PATH] [-op PREFIX] [-ox] [-oc] [-om]\n                     [-pw PROMPT_WEIGHTER_URI] 
[--prompt-weighter-help [PROMPT_WEIGHTER_NAMES ...]]\n                     [-p PROMPT [PROMPT ...]] [--sd3-max-sequence-length INTEGER]\n                     [--sd3-second-prompts PROMPT [PROMPT ...]] [--sd3-third-prompts PROMPT [PROMPT ...]]\n                     [--flux-second-prompts PROMPT [PROMPT ...]] [--flux-max-sequence-length INTEGER]\n                     [-cs INTEGER [INTEGER ...]] [-se SEED [SEED ...]] [-sei] [-gse COUNT] [-af FORMAT]\n                     [-if FORMAT] [-nf] [-fs FRAME_NUMBER] [-fe FRAME_NUMBER] [-is SEED [SEED ...]]\n                     [-sip PROCESSOR_URI [PROCESSOR_URI ...]] [-mip PROCESSOR_URI [PROCESSOR_URI ...]]\n                     [-cip PROCESSOR_URI [PROCESSOR_URI ...]] [--image-processor-help [PROCESSOR_NAME ...]]\n                     [-pp PROCESSOR_URI [PROCESSOR_URI ...]] [-iss FLOAT [FLOAT ...] | -uns INTEGER\n                     [INTEGER ...]] [-gs FLOAT [FLOAT ...]] [-igs FLOAT [FLOAT ...]] [-gr FLOAT [FLOAT ...]]\n                     [-ifs INTEGER [INTEGER ...]] [-mc EXPR [EXPR ...]] [-pmc EXPR [EXPR ...]]\n                     [-umc EXPR [EXPR ...]] [-vmc EXPR [EXPR ...]] [-cmc EXPR [EXPR ...]] [-tmc EXPR [EXPR ...]]\n                     [-iemc EXPR [EXPR ...]] [-amc EXPR [EXPR ...]] [-tfmc EXPR [EXPR ...]]\n                     [-ipmc EXPR [EXPR ...]] [-ipcc EXPR [EXPR ...]]\n                     model_path\n\n    Batch image generation and manipulation tool supporting Stable Diffusion and related techniques /\n    algorithms, with support for video and animated image processing.\n\n    positional arguments:\n      model_path            Hugging Face model repository slug, Hugging Face blob link to a model file, path to\n                            folder on disk, or path to a .pt, .pth, .bin, .ckpt, or .safetensors file.\n                            --------------------------------------------------------------------------\n\n    options:\n      -h, --help            show this help message and exit\n                            -------------------------------\n      -v, --verbose         Output information useful for debugging, such as pipeline call and model load\n                            parameters.\n                            -----------\n      --version             Show dgenerate's version and exit\n                            ---------------------------------\n      --file                Convenience argument for reading a configuration script from a file instead of using\n                            a pipe. This is a meta argument which can not be used within a configuration script\n                            and is only valid from the command line or during a popen invocation of dgenerate.\n                            ----------------------------------------------------------------------------------\n      --shell               When reading configuration from STDIN (a pipe), read forever, even when\n                            configuration errors occur. This allows dgenerate to run in the background and be\n                            controlled by another process sending commands. Launching dgenerate with this option\n                            and not piping it input will attach it to the terminal like a shell. Entering\n                            configuration into this shell requires two newlines to submit a command due to\n                            parsing lookahead. IE: two presses of the enter key. 
This is a meta argument which\n                            can not be used within a configuration script and is only valid from the command\n                            line or during a popen invocation of dgenerate.\n                            -----------------------------------------------\n      --no-stdin            Can be used to indicate to dgenerate that it will not receive any piped in input.\n                            This is useful for running dgenerate via popen from Python or another application\n                            using normal arguments, where it would otherwise try to read from STDIN and block\n                            forever because it is not attached to a terminal. This is a meta argument which can\n                            not be used within a configuration script and is only valid from the command line or\n                            during a popen invocation of dgenerate.\n                            ---------------------------------------\n      --console             Launch a terminal-like Tkinter GUI that interacts with an instance of dgenerate\n                            running in the background. This allows you to interactively write dgenerate config\n                            scripts as if dgenerate were a shell / REPL. This is a meta argument which can not\n                            be used within a configuration script and is only valid from the command line or\n                            during a popen invocation of dgenerate.\n                            ---------------------------------------\n      --plugin-modules PATH [PATH ...]\n                            Specify one or more plugin module folder paths (folder containing __init__.py) or\n                            Python .py file paths, or Python module names to load as plugins. Plugin modules can\n                            currently implement image processors, config directives, config template functions,\n                            prompt weighters, and sub-commands.\n                            -----------------------------------\n      --sub-command SUB_COMMAND\n                            Specify the name a sub-command to invoke. dgenerate exposes some extra image\n                            processing functionality through the use of sub-commands. Sub commands essentially\n                            replace the entire set of accepted arguments with those of a sub-command which\n                            implements additional functionality. See --sub-command-help for a list of sub-\n                            commands and help.\n                            ------------------\n      --sub-command-help [SUB_COMMAND ...]\n                            Use this option alone (or with --plugin-modules) and no model specification in order\n                            to list available sub-command names. Calling a sub-command with \"--sub-command name\n                            --help\" will produce argument help output for that sub-command. When used with\n                            --plugin-modules, sub-commands implemented by the specified plugins will also be\n                            listed.\n                            -------\n      -ofm, --offline-mode  Whether dgenerate should try to download Hugging Face models that do not exist in\n                            the disk cache, or only use what is available in the cache. 
Referencing a model on\n                            Hugging Face that has not been cached because it was not previously downloaded will\n                            result in a failure when using this option.\n                            -------------------------------------------\n      --templates-help [VARIABLE_NAME ...]\n                            Print a list of template variables available in the interpreter environment used for\n                            dgenerate config scripts, particularly the variables set after a dgenerate\n                            invocation occurs. When used as a command line option, their values are not\n                            presented, just their names and types. Specifying names will print type information\n                            for those variable names.\n                            -------------------------\n      --directives-help [DIRECTIVE_NAME ...]\n                            Use this option alone (or with --plugin-modules) and no model specification in order\n                            to list available config directive names. Providing names will print documentation\n                            for the specified directive names. When used with --plugin-modules, directives\n                            implemented by the specified plugins will also be listed.\n                            ---------------------------------------------------------\n      --functions-help [FUNCTION_NAME ...]\n                            Use this option alone (or with --plugin-modules) and no model specification in order\n                            to list available config template function names. Providing names will print\n                            documentation for the specified function names. When used with --plugin-modules,\n                            functions implemented by the specified plugins will also be listed.\n                            -------------------------------------------------------------------\n      -mt MODEL_TYPE, --model-type MODEL_TYPE\n                            Use when loading different model types. Currently supported: torch, torch-pix2pix,\n                            torch-sdxl, torch-sdxl-pix2pix, torch-upscaler-x2, torch-upscaler-x4, torch-if,\n                            torch-ifs, torch-ifs-img2img, torch-s-cascade, torch-sd3, torch-flux, or torch-flux-\n                            fill. (default: torch)\n                            ----------------------\n      -rev BRANCH, --revision BRANCH\n                            The model revision to use when loading from a Hugging Face repository, (The Git\n                            branch / tag, default is \"main\")\n                            --------------------------------\n      -var VARIANT, --variant VARIANT\n                            If specified when loading from a Hugging Face repository or folder, load weights\n                            from \"variant\" filename, e.g. \"pytorch_model.\u003cvariant\u003e.safetensors\". Defaults to\n                            automatic selection.\n                            --------------------\n      -sbf SUBFOLDER, --subfolder SUBFOLDER\n                            Main model subfolder. If specified when loading from a Hugging Face repository or\n                            folder, load weights from the specified subfolder.\n                            --------------------------------------------------\n      -atk TOKEN, --auth-token TOKEN\n                            Huggingface auth token. 
Required to download restricted repositories that have\n                            access permissions granted to your Hugging Face account.\n                            --------------------------------------------------------\n      -bs INTEGER, --batch-size INTEGER\n                            The number of image variations to produce per set of individual diffusion parameters\n                            in one rendering step simultaneously on a single GPU.\n\n                            When generating animations with a --batch-size greater than one, a separate\n                            animation (with the filename suffix \"animation_N\") will be written to for each image\n                            in the batch.\n\n                            If --batch-grid-size is specified when producing an animation then the image grid is\n                            used for the output frames.\n\n                            During animation rendering each image in the batch will still be written to the\n                            output directory along side the produced animation as either suffixed files or image\n                            grids depending on the options you choose. (Default: 1)\n                            -------------------------------------------------------\n      -bgs SIZE, --batch-grid-size SIZE\n                            Produce a single image containing a grid of images with the number of COLUMNSxROWS\n                            given to this argument when --batch-size is greater than 1. If not specified with a\n                            --batch-size greater than 1, images will be written individually with an image\n                            number suffix (image_N) in the filename signifying which image in the batch they\n                            are.\n                            ----\n      -te TEXT_ENCODER_URIS [TEXT_ENCODER_URIS ...], --text-encoders TEXT_ENCODER_URIS [TEXT_ENCODER_URIS ...]\n                            Specify Text Encoders for the main model using URIs, main models may use one or more\n                            text encoders depending on the --model-type value and other dgenerate arguments.\n                            See: --text-encoders help for information about what text encoders are needed for\n                            your invocation.\n\n                            Examples: \"CLIPTextModel;model=huggingface/text_encoder\",\n                            \"CLIPTextModelWithProjection;model=huggingface/text_encoder;revision=main\",\n                            \"T5EncoderModel;model=text_encoder_folder_on_disk\".\n\n                            For main models which require multiple text encoders, the + symbol may be used to\n                            indicate that a default value should be used for a particular text encoder, for\n                            example: --text-encoders + + huggingface/encoder3. 
Any trailing text encoders which\n                            are not specified are given their default value.\n\n                            The value \"null\" may be used to indicate that a specific text encoder should not be\n                            loaded.\n\n                            Blob links / single file loads are not supported for Text Encoders.\n\n                            The \"revision\" argument specifies the model revision to use for the Text Encoder\n                            when loading from Hugging Face repository, (The Git branch / tag, default is\n                            \"main\").\n\n                            The \"variant\" argument specifies the Text Encoder model variant. If \"variant\" is\n                            specified when loading from a Hugging Face repository or folder, weights will be\n                            loaded from \"variant\" filename, e.g. \"pytorch_model.\u003cvariant\u003e.safetensors\". For this\n                            argument, \"variant\" defaults to the value of --variant if it is not specified in the\n                            URI.\n\n                            The \"subfolder\" argument specifies the UNet model subfolder, if specified when\n                            loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"dtype\" argument specifies the Text Encoder model precision, it defaults to the\n                            value of -t/--dtype and should be one of: auto, bfloat16, float16, or float32.\n\n                            The \"quantize\" argument specifies whether or not to use optimum-quanto to quantize\n                            the text encoder weights, and may be passed the values \"qint2\", \"qint4\", \"qint8\",\n                            \"qfloat8_e4m3fn\", \"qfloat8_e4m3fnuz\", \"qfloat8_e5m2\", or \"qfloat8\" to specify the\n                            quantization datatype, this can be utilized to run Flux models with much less GPU\n                            memory.\n\n                            If you wish to load weights directly from a path on disk, you must point this\n                            argument at the folder they exist in, which should also contain the config.json file\n                            for the Text Encoder. For example, a downloaded repository folder from Hugging Face.\n                            ------------------------------------------------------------------------------------\n      -te2 TEXT_ENCODER_URIS [TEXT_ENCODER_URIS ...], --text-encoders2 TEXT_ENCODER_URIS [TEXT_ENCODER_URIS ...]\n                            --text-encoders but for the SDXL refiner or Stable Cascade decoder model.\n                            -------------------------------------------------------------------------\n      -un UNET_URI, --unet UNET_URI\n                            Specify a UNet using a URI.\n\n                            Examples: \"huggingface/unet\", \"huggingface/unet;revision=main\",\n                            \"unet_folder_on_disk\".\n\n                            Blob links / single file loads are not supported for UNets.\n\n                            The \"revision\" argument specifies the model revision to use for the UNet when\n                            loading from Hugging Face repository, (The Git branch / tag, default is \"main\").\n\n                            The \"variant\" argument specifies the UNet model variant. 
If \"variant\" is specified\n                            when loading from a Hugging Face repository or folder, weights will be loaded from\n                            \"variant\" filename, e.g. \"pytorch_model.\u003cvariant\u003e.safetensors. For this argument,\n                            \"variant\" defaults to the value of --variant if it is not specified in the URI.\n\n                            The \"subfolder\" argument specifies the UNet model subfolder, if specified when\n                            loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"dtype\" argument specifies the UNet model precision, it defaults to the value of\n                            -t/--dtype and should be one of: auto, bfloat16, float16, or float32.\n\n                            If you wish to load weights directly from a path on disk, you must point this\n                            argument at the folder they exist in, which should also contain the config.json file\n                            for the UNet. For example, a downloaded repository folder from Hugging Face.\n                            ----------------------------------------------------------------------------\n      -un2 UNET_URI, --unet2 UNET_URI\n                            Specify a second UNet, this is only valid when using SDXL or Stable Cascade model\n                            types. This UNet will be used for the SDXL refiner, or Stable Cascade decoder model.\n                            ------------------------------------------------------------------------------------\n      -tf TRANSFORMER_URI, --transformer TRANSFORMER_URI\n                            Specify a Stable Diffusion 3 or Flux Transformer model using a URI.\n\n                            Examples: \"huggingface/transformer\", \"huggingface/transformer;revision=main\",\n                            \"transformer_folder_on_disk\".\n\n                            Blob links / single file loads are supported for SD3 Transformers.\n\n                            The \"revision\" argument specifies the model revision to use for the Transformer when\n                            loading from Hugging Face repository or blob link, (The Git branch / tag, default is\n                            \"main\").\n\n                            The \"variant\" argument specifies the Transformer model variant. If \"variant\" is\n                            specified when loading from a Hugging Face repository or folder, weights will be\n                            loaded from \"variant\" filename, e.g. \"pytorch_model.\u003cvariant\u003e.safetensors. 
For this\n                            argument, \"variant\" defaults to the value of --variant if it is not specified in the\n                            URI.\n\n                            The \"subfolder\" argument specifies the Transformer model subfolder, if specified\n                            when loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"dtype\" argument specifies the Transformer model precision, it defaults to the\n                            value of -t/--dtype and should be one of: auto, bfloat16, float16, or float32.\n\n                            The \"quantize\" argument specifies whether or not to use optimum-quanto to quantize\n                            the transformer weights, and may be passed the values \"qint2\", \"qint4\", \"qint8\",\n                            \"qfloat8_e4m3fn\", \"qfloat8_e4m3fnuz\", \"qfloat8_e5m2\", or \"qfloat8\" to specify the\n                            quantization datatype, this can be utilized to run Flux models with much less GPU\n                            memory.\n\n                            If you wish to load a weights file directly from disk, the simplest way is:\n                            --transformer \"transformer.safetensors\", or with a dtype\n                            \"transformer.safetensors;dtype=float16\". All loading arguments except \"dtype\" and\n                            \"quantize\" are unused in this case and may produce an error message if used.\n\n                            If you wish to load a specific weight file from a Hugging Face repository, use the\n                            blob link loading syntax: --transformer\n                            \"AutoencoderKL;https://huggingface.co/UserName/repository-\n                            name/blob/main/transformer.safetensors\", the \"revision\" argument may be used with\n                            this syntax.\n                            ------------\n      -vae VAE_URI, --vae VAE_URI\n                            Specify a VAE using a URI, the URI syntax is: \"AutoEncoderClass;model=(Hugging Face\n                            repository slug/blob link or file/folder path)\".\n\n                            Examples: \"AutoencoderKL;model=vae.pt\",\n                            \"AsymmetricAutoencoderKL;model=huggingface/vae\",\n                            \"AutoencoderTiny;model=huggingface/vae\",\n                            \"ConsistencyDecoderVAE;model=huggingface/vae\".\n\n                            The AutoencoderKL encoder class accepts Hugging Face repository slugs/blob links,\n                            .pt, .pth, .bin, .ckpt, and .safetensors files.\n\n                            Other encoders can only accept Hugging Face repository slugs/blob links, or a path\n                            to a folder on disk with the model configuration and model file(s).\n\n                            If an AutoencoderKL VAE model file exists at a URL which serves the file as a raw\n                            download, you may provide an http/https link to it and it will be downloaded to\n                            dgenerates web cache.\n\n                            Aside from the \"model\" argument, there are four other optional arguments that can be\n                            specified, these are: \"revision\", \"variant\", \"subfolder\", \"dtype\".\n\n                            They can be specified as so in any order, they are not 
positional: \"AutoencoderKL;mo\n                            del=huggingface/vae;revision=main;variant=fp16;subfolder=sub_folder;dtype=float16\".\n\n                            The \"revision\" argument specifies the model revision to use for the VAE when loading\n                            from Hugging Face repository or blob link, (The Git branch / tag, default is\n                            \"main\").\n\n                            The \"variant\" argument specifies the VAE model variant. If \"variant\" is specified\n                            when loading from a Hugging Face repository or folder, weights will be loaded from\n                            \"variant\" filename, e.g. \"pytorch_model.\u003cvariant\u003e.safetensors. \"variant\" in the case\n                            of --vae does not default to the value of --variant to prevent failures during\n                            common use cases.\n\n                            The \"subfolder\" argument specifies the VAE model subfolder, if specified when\n                            loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"dtype\" argument specifies the VAE model precision, it defaults to the value of\n                            -t/--dtype and should be one of: auto, bfloat16, float16, or float32.\n\n                            If you wish to load a weights file directly from disk, the simplest way is: --vae\n                            \"AutoencoderKL;my_vae.safetensors\", or with a dtype\n                            \"AutoencoderKL;my_vae.safetensors;dtype=float16\". All loading arguments except\n                            \"dtype\" are unused in this case and may produce an error message if used.\n\n                            If you wish to load a specific weight file from a Hugging Face repository, use the\n                            blob link loading syntax: --vae\n                            \"AutoencoderKL;https://huggingface.co/UserName/repository-\n                            name/blob/main/vae_model.safetensors\", the \"revision\" argument may be used with this\n                            syntax.\n                            -------\n      -vt, --vae-tiling     Enable VAE tiling. Assists in the generation of large images with lower memory\n                            overhead. The VAE will split the input tensor into tiles to compute decoding and\n                            encoding in several steps. This is useful for saving a large amount of memory and to\n                            allow processing larger images. Note that if you are using --control-nets you may\n                            still run into memory issues generating large images, or with --batch-size greater\n                            than 1.\n                            -------\n      -vs, --vae-slicing    Enable VAE slicing. Assists in the generation of large images with lower memory\n                            overhead. The VAE will split the input tensor in slices to compute decoding in\n                            several steps. This is useful to save some memory, especially when --batch-size is\n                            greater than 1. 
Note that if you are using --control-nets you may still run into\n                            memory issues generating large images.\n                            --------------------------------------\n      -lra LORA_URI [LORA_URI ...], --loras LORA_URI [LORA_URI ...]\n                            Specify one or more LoRA models using URIs. These should be a Hugging Face\n                            repository slug, path to model file on disk (for example, a .pt, .pth, .bin, .ckpt,\n                            or .safetensors file), or model folder containing model files.\n\n                            If a LoRA model file exists at a URL which serves the file as a raw download, you\n                            may provide an http/https link to it and it will be downloaded to dgenerates web\n                            cache.\n\n                            Hugging Face blob links are not supported, see \"subfolder\" and \"weight-name\" below\n                            instead.\n\n                            Optional arguments can be provided after a LoRA model specification, these are:\n                            \"scale\", \"revision\", \"subfolder\", and \"weight-name\".\n\n                            They can be specified as so in any order, they are not positional:\n                            \"huggingface/lora;scale=1.0;revision=main;subfolder=repo_subfolder;weight-\n                            name=lora.safetensors\".\n\n                            The \"scale\" argument indicates the scale factor of the LoRA.\n\n                            The \"revision\" argument specifies the model revision to use for the LoRA when\n                            loading from Hugging Face repository, (The Git branch / tag, default is \"main\").\n\n                            The \"subfolder\" argument specifies the LoRA model subfolder, if specified when\n                            loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"weight-name\" argument indicates the name of the weights file to be loaded when\n                            loading from a Hugging Face repository or folder on disk.\n\n                            If you wish to load a weights file directly from disk, the simplest way is: --loras\n                            \"my_lora.safetensors\", or with a scale \"my_lora.safetensors;scale=1.0\", all other\n                            loading arguments are unused in this case and may produce an error message if used.\n                            -----------------------------------------------------------------------------------\n      -lrfs LORA_FUSE_SCALE, --lora-fuse-scale LORA_FUSE_SCALE\n                            LoRA weights are merged into the main model at this scale. When specifying multiple\n                            LoRA models, they are fused together into one set of weights using their individual\n                            scale values, after which they are fused into the main model at this scale value.\n                            (default: 1.0).\n                            ---------------\n      -ie IMAGE_ENCODER_URI, --image-encoder IMAGE_ENCODER_URI\n                            Specify an Image Encoder using a URI.\n\n                            Image Encoders are used with --ip-adapters models, and must be specified if none of\n                            the loaded --ip-adapters contain one. 
An error will be produced in this situation,\n                            which requires you to use this argument.\n\n                            An image encoder can also be manually specified for Stable Cascade models.\n\n                            Examples: \"huggingface/image_encoder\", \"huggingface/image_encoder;revision=main\",\n                            \"image_encoder_folder_on_disk\".\n\n                            Blob links / single file loads are not supported for Image Encoders.\n\n                            The \"revision\" argument specifies the model revision to use for the Image Encoder\n                            when loading from Hugging Face repository or blob link, (The Git branch / tag,\n                            default is \"main\").\n\n                            The \"variant\" argument specifies the Image Encoder model variant. If \"variant\" is\n                            specified when loading from a Hugging Face repository or folder, weights will be\n                            loaded from \"variant\" filename, e.g. \"pytorch_model.\u003cvariant\u003e.safetensors.\n\n                            Similar to --vae, \"variant\" does not default to the value of --variant in order to\n                            prevent errors with common use cases. If you specify multiple IP Adapters, they must\n                            all have the same \"variant\" value or you will receive a usage error.\n\n                            The \"subfolder\" argument specifies the Image Encoder model subfolder, if specified\n                            when loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"dtype\" argument specifies the Image Encoder model precision, it defaults to the\n                            value of -t/--dtype and should be one of: auto, bfloat16, float16, or float32.\n\n                            If you wish to load weights directly from a path on disk, you must point this\n                            argument at the folder they exist in, which should also contain the config.json file\n                            for the Image Encoder. For example, a downloaded repository folder from Hugging\n                            Face.\n                            -----\n      -ipa IP_ADAPTER_URI [IP_ADAPTER_URI ...], --ip-adapters IP_ADAPTER_URI [IP_ADAPTER_URI ...]\n                            Specify one or more IP Adapter models using URIs. 
These should be a Hugging Face\n                            repository slug, path to model file on disk (for example, a .pt, .pth, .bin, .ckpt,\n                            or .safetensors file), or model folder containing model files.\n\n                            If an IP Adapter model file exists at a URL which serves the file as a raw download,\n                            you may provide an http/https link to it and it will be downloaded to dgenerates web\n                            cache.\n\n                            Hugging Face blob links are not supported, see \"subfolder\" and \"weight-name\" below\n                            instead.\n\n                            Optional arguments can be provided after an IP Adapter model specification, these\n                            are: \"scale\", \"revision\", \"subfolder\", and \"weight-name\".\n\n                            They can be specified as so in any order, they are not positional: \"huggingface/ip-\n                            adapter;scale=1.0;revision=main;subfolder=repo_subfolder;weight-\n                            name=ip_adapter.safetensors\".\n\n                            The \"scale\" argument indicates the scale factor of the IP Adapter.\n\n                            The \"revision\" argument specifies the model revision to use for the IP Adapter when\n                            loading from Hugging Face repository, (The Git branch / tag, default is \"main\").\n\n                            The \"subfolder\" argument specifies the IP Adapter model subfolder, if specified when\n                            loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"weight-name\" argument indicates the name of the weights file to be loaded when\n                            loading from a Hugging Face repository or folder on disk.\n\n                            If you wish to load a weights file directly from disk, the simplest way is: --ip-\n                            adapters \"ip_adapter.safetensors\", or with a scale\n                            \"ip_adapter.safetensors;scale=1.0\", all other loading arguments are unused in this\n                            case and may produce an error message if used.\n                            ----------------------------------------------\n      -ti URI [URI ...], --textual-inversions URI [URI ...]\n                            Specify one or more Textual Inversion models using URIs. 
These should be a Hugging\n                            Face repository slug, path to model file on disk (for example, a .pt, .pth, .bin,\n                            .ckpt, or .safetensors file), or model folder containing model files.\n\n                            If a Textual Inversion model file exists at a URL which serves the file as a raw\n                            download, you may provide an http/https link to it and it will be downloaded to\n                            dgenerates web cache.\n\n                            Hugging Face blob links are not supported, see \"subfolder\" and \"weight-name\" below\n                            instead.\n\n                            Optional arguments can be provided after the Textual Inversion model specification,\n                            these are: \"token\", \"revision\", \"subfolder\", and \"weight-name\".\n\n                            They can be specified as so in any order, they are not positional:\n                            \"huggingface/ti_model;revision=main;subfolder=repo_subfolder;weight-\n                            name=ti_model.safetensors\".\n\n                            The \"token\" argument can be used to override the prompt token used for the textual\n                            inversion prompt embedding. For normal Stable Diffusion the default token value is\n                            provided by the model itself, but for Stable Diffusion XL and Flux the default token\n                            value is equal to the model file name with no extension and all spaces replaced by\n                            underscores.\n\n                            The \"revision\" argument specifies the model revision to use for the Textual\n                            Inversion model when loading from Hugging Face repository, (The Git branch / tag,\n                            default is \"main\").\n\n                            The \"subfolder\" argument specifies the Textual Inversion model subfolder, if\n                            specified when loading from a Hugging Face repository or folder, weights from the\n                            specified subfolder.\n\n                            The \"weight-name\" argument indicates the name of the weights file to be loaded when\n                            loading from a Hugging Face repository or folder on disk.\n\n                            If you wish to load a weights file directly from disk, the simplest way is:\n                            --textual-inversions \"my_ti_model.safetensors\", all other loading arguments are\n                            unused in this case and may produce an error message if used.\n                            -------------------------------------------------------------\n      -cn CONTROLNET_URI [CONTROLNET_URI ...], --control-nets CONTROLNET_URI [CONTROLNET_URI ...]\n                            Specify one or more ControlNet models using URIs. 
This should be a Hugging Face\n                            repository slug / blob link, path to model file on disk (for example, a .pt, .pth,\n                            .bin, .ckpt, or .safetensors file), or model folder containing model files.\n\n                            If a ControlNet model file exists at a URL which serves the file as a raw download,\n                            you may provide an http/https link to it and it will be downloaded to dgenerates web\n                            cache.\n\n                            Optional arguments can be provided after the ControlNet model specification, these\n                            are: \"scale\", \"start\", \"end\", \"revision\", \"variant\", \"subfolder\", and \"dtype\".\n\n                            They can be specified as so in any order, they are not positional: \"huggingface/cont\n                            rolnet;scale=1.0;start=0.0;end=1.0;revision=main;variant=fp16;subfolder=repo_subfold\n                            er;dtype=float16\".\n\n                            The \"scale\" argument specifies the scaling factor applied to the ControlNet model,\n                            the default value is 1.0.\n\n                            The \"start\" argument specifies at what fraction of the total inference steps to\n                            begin applying the ControlNet, defaults to 0.0, IE: the very beginning.\n\n                            The \"end\" argument specifies at what fraction of the total inference steps to stop\n                            applying the ControlNet, defaults to 1.0, IE: the very end.\n\n                            The \"mode\" argument can be used when using --model-type torch-flux and ControlNet\n                            Union to specify the ControlNet mode. Acceptable values are: \"canny\", \"tile\",\n                            \"depth\", \"blur\", \"pose\", \"gray\", \"lq\". This value may also be an integer between 0\n                            and 6, inclusive.\n\n                            The \"revision\" argument specifies the model revision to use for the ControlNet model\n                            when loading from Hugging Face repository, (The Git branch / tag, default is\n                            \"main\").\n\n                            The \"variant\" argument specifies the ControlNet model variant, if \"variant\" is\n                            specified when loading from a Hugging Face repository or folder, weights will be\n                            loaded from \"variant\" filename, e.g. \"pytorch_model.\u003cvariant\u003e.safetensors. \"variant\"\n                            defaults to automatic selection. 
\"variant\" in the case of --control-nets does not\n                            default to the value of --variant to prevent failures during common use cases.\n\n                            The \"subfolder\" argument specifies the ControlNet model subfolder, if specified when\n                            loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"dtype\" argument specifies the ControlNet model precision, it defaults to the\n                            value of -t/--dtype and should be one of: auto, bfloat16, float16, or float32.\n\n                            If you wish to load a weights file directly from disk, the simplest way is:\n                            --control-nets \"my_controlnet.safetensors\" or --control-nets\n                            \"my_controlnet.safetensors;scale=1.0;dtype=float16\", all other loading arguments\n                            aside from \"scale\", \"start\", \"end\", and \"dtype\" are unused in this case and may\n                            produce an error message if used.\n\n                            If you wish to load a specific weight file from a Hugging Face repository, use the\n                            blob link loading syntax: --control-nets\n                            \"https://huggingface.co/UserName/repository-name/blob/main/controlnet.safetensors\",\n                            the \"revision\" argument may be used with this syntax.\n                            -----------------------------------------------------\n      -t2i T2I_ADAPTER_URI [T2I_ADAPTER_URI ...], --t2i-adapters T2I_ADAPTER_URI [T2I_ADAPTER_URI ...]\n                            Specify one or more T2IAdapter models using URIs. This should be a Hugging Face\n                            repository slug / blob link, path to model file on disk (for example, a .pt, .pth,\n                            .bin, .ckpt, or .safetensors file), or model folder containing model files.\n\n                            If a T2IAdapter model file exists at a URL which serves the file as a raw download,\n                            you may provide an http/https link to it and it will be downloaded to dgenerates web\n                            cache.\n\n                            Optional arguments can be provided after the T2IAdapter model specification, these\n                            are: \"scale\", \"revision\", \"variant\", \"subfolder\", and \"dtype\".\n\n                            They can be specified as so in any order, they are not positional: \"huggingface/t2ia\n                            dapter;scale=1.0;revision=main;variant=fp16;subfolder=repo_subfolder;dtype=float16\".\n\n                            The \"scale\" argument specifies the scaling factor applied to the T2IAdapter model,\n                            the default value is 1.0.\n\n                            The \"revision\" argument specifies the model revision to use for the T2IAdapter model\n                            when loading from Hugging Face repository, (The Git branch / tag, default is\n                            \"main\").\n\n                            The \"variant\" argument specifies the T2IAdapter model variant, if \"variant\" is\n                            specified when loading from a Hugging Face repository or folder, weights will be\n                            loaded from \"variant\" filename, e.g. \"pytorch_model.\u003cvariant\u003e.safetensors. 
\"variant\"\n                            defaults to automatic selection. \"variant\" in the case of --t2i-adapters does not\n                            default to the value of --variant to prevent failures during common use cases.\n\n                            The \"subfolder\" argument specifies the ControlNet model subfolder, if specified when\n                            loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"dtype\" argument specifies the T2IAdapter model precision, it defaults to the\n                            value of -t/--dtype and should be one of: auto, bfloat16, float16, or float32.\n\n                            If you wish to load a weights file directly from disk, the simplest way is:\n                            --t2i-adapters \"my_t2i_adapter.safetensors\" or --t2i-adapters\n                            \"my_t2i_adapter.safetensors;scale=1.0;dtype=float16\", all other loading arguments\n                            aside from \"scale\" and \"dtype\" are unused in this case and may produce an error\n                            message if used.\n\n                            If you wish to load a specific weight file from a Hugging Face repository, use the\n                            blob link loading syntax: --t2i-adapters\n                            \"https://huggingface.co/UserName/repository-name/blob/main/t2i_adapter.safetensors\",\n                            the \"revision\" argument may be used with this syntax.\n                            -----------------------------------------------------\n      -sch SCHEDULER_URI [SCHEDULER_URI ...], --scheduler SCHEDULER_URI [SCHEDULER_URI ...], --schedulers SCHEDULER_URI [SCHEDULER_URI ...]\n                            Specify a scheduler (sampler) by URI. Passing \"help\" to this argument will print the\n                            compatible schedulers for a model without generating any images. Passing \"helpargs\"\n                            will yield a help message with a list of overridable arguments for each scheduler\n                            and their typical defaults. Arguments listed by \"helpargs\" can be overridden using\n                            the URI syntax typical to other dgenerate URI arguments.\n\n                            You may pass multiple scheduler URIs to this argument, each URI will be tried in\n                            turn.\n                            -----\n      -pag, --pag           Use perturbed attention guidance? This is supported for --model-type torch, torch-\n                            sdxl, and torch-sd3 for most use cases. This enables PAG for the main model using\n                            default scale values.\n                            ---------------------\n      -pags FLOAT [FLOAT ...], --pag-scales FLOAT [FLOAT ...]\n                            One or more perturbed attention guidance scales to try. Specifying values enables\n                            PAG for the main model. (default: [3.0])\n                            ----------------------------------------\n      -pagas FLOAT [FLOAT ...], --pag-adaptive-scales FLOAT [FLOAT ...]\n                            One or more adaptive perturbed attention guidance scales to try. Specifying values\n                            enables PAG for the main model. 
      -pag, --pag           Use perturbed attention guidance? This is supported for --model-type torch, torch-\n                            sdxl, and torch-sd3 for most use cases. This enables PAG for the main model using\n                            default scale values.\n                            ---------------------\n      -pags FLOAT [FLOAT ...], --pag-scales FLOAT [FLOAT ...]\n                            One or more perturbed attention guidance scales to try. Specifying values enables\n                            PAG for the main model. (default: [3.0])\n                            ----------------------------------------\n      -pagas FLOAT [FLOAT ...], --pag-adaptive-scales FLOAT [FLOAT ...]\n                            One or more adaptive perturbed attention guidance scales to try. Specifying values\n                            enables PAG for the main model. (default: [0.0])\n                            ------------------------------------------------\n      -rpag, --sdxl-refiner-pag\n                            Use perturbed attention guidance in the SDXL refiner? This is supported for --model-\n                            type torch-sdxl for most use cases. This enables PAG for the SDXL refiner model\n                            using default scale values.\n                            ---------------------------\n      -rpags FLOAT [FLOAT ...], --sdxl-refiner-pag-scales FLOAT [FLOAT ...]\n                            One or more perturbed attention guidance scales to try with the SDXL refiner pass.\n                            Specifying values enables PAG for the refiner. (default: [3.0])\n                            ---------------------------------------------------------------\n      -rpagas FLOAT [FLOAT ...], --sdxl-refiner-pag-adaptive-scales FLOAT [FLOAT ...]\n                            One or more adaptive perturbed attention guidance scales to try with the SDXL\n                            refiner pass. Specifying values enables PAG for the refiner. (default: [0.0])\n                            -----------------------------------------------------------------------------\n      -mqo, --model-sequential-offload\n                            Force sequential model offloading for the main pipeline, this may drastically reduce\n                            memory consumption and allow large models to run when they would otherwise not fit\n                            in your GPU's VRAM. Inference will be much slower. Mutually exclusive with --model-\n                            cpu-offload\n                            -----------\n      -mco, --model-cpu-offload\n                            Force model cpu offloading for the main pipeline, this may reduce memory consumption\n                            and allow large models to run when they would otherwise not fit in your GPU's VRAM.\n                            Inference will be slower. Mutually exclusive with --model-sequential-offload\n                            ----------------------------------------------------------------------------\n      --s-cascade-decoder MODEL_URI\n                            Specify a Stable Cascade (torch-s-cascade) decoder model path using a URI. This\n                            should be a Hugging Face repository slug / blob link, path to model file on disk\n                            (for example, a .pt, .pth, .bin, .ckpt, or .safetensors file), or model folder\n                            containing model files.\n\n                            Optional arguments can be provided after the decoder model specification, these are:\n                            \"revision\", \"variant\", \"subfolder\", and \"dtype\".\n\n                            They can be specified in any order; they are not positional: \"huggingface/deco\n                            der_model;revision=main;variant=fp16;subfolder=repo_subfolder;dtype=float16\".\n\n                            The \"revision\" argument specifies the model revision to use for the decoder model\n                            when loading from a Hugging Face repository (the Git branch / tag, default is\n                            \"main\").\n\n                            The \"variant\" argument specifies the decoder model variant and defaults to the value\n                            of --variant. 
When \"variant\" is specified when loading from a Hugging Face\n                            repository or folder, weights will be loaded from \"variant\" filename, e.g.\n                            \"pytorch_model.\u003cvariant\u003e.safetensors.\n\n                            The \"subfolder\" argument specifies the decoder model subfolder, if specified when\n                            loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"dtype\" argument specifies the Stable Cascade decoder model precision, it\n                            defaults to the value of -t/--dtype and should be one of: auto, bfloat16, float16,\n                            or float32.\n\n                            If you wish to load a weights file directly from disk, the simplest way is: --sdxl-\n                            refiner \"my_decoder.safetensors\" or --sdxl-refiner\n                            \"my_decoder.safetensors;dtype=float16\", all other loading arguments aside from\n                            \"dtype\" are unused in this case and may produce an error message if used.\n\n                            If you wish to load a specific weight file from a Hugging Face repository, use the\n                            blob link loading syntax: --s-cascade-decoder\n                            \"https://huggingface.co/UserName/repository-name/blob/main/decoder.safetensors\", the\n                            \"revision\" argument may be used with this syntax.\n                            -------------------------------------------------\n      -dqo, --s-cascade-decoder-sequential-offload\n                            Force sequential model offloading for the Stable Cascade decoder pipeline, this may\n                            drastically reduce memory consumption and allow large models to run when they would\n                            otherwise not fit in your GPUs VRAM. Inference will be much slower. Mutually\n                            exclusive with --s-cascade-decoder-cpu-offload\n                            ----------------------------------------------\n      -dco, --s-cascade-decoder-cpu-offload\n                            Force model cpu offloading for the Stable Cascade decoder pipeline, this may reduce\n                            memory consumption and allow large models to run when they would otherwise not fit\n                            in your GPUs VRAM. Inference will be slower. Mutually exclusive with --s-cascade-\n                            decoder-sequential-offload\n                            --------------------------\n      --s-cascade-decoder-prompts PROMPT [PROMPT ...]\n                            One or more prompts to try with the Stable Cascade decoder model, by default the\n                            decoder model gets the primary prompt, this argument overrides that with a prompt of\n                            your choosing. The negative prompt component can be specified with the same syntax\n                            as --prompts\n                            ------------\n      --s-cascade-decoder-inference-steps INTEGER [INTEGER ...]\n                            One or more inference steps values to try with the Stable Cascade decoder. 
      --s-cascade-decoder-inference-steps INTEGER [INTEGER ...]\n                            One or more inference steps values to try with the Stable Cascade decoder. (default:\n                            [10])\n                            -----\n      --s-cascade-decoder-guidance-scales INTEGER [INTEGER ...]\n                            One or more guidance scale values to try with the Stable Cascade decoder. (default:\n                            [0])\n                            ----\n      --s-cascade-decoder-scheduler SCHEDULER_URI [SCHEDULER_URI ...], --s-cascade-decoder-schedulers SCHEDULER_URI [SCHEDULER_URI ...]\n                            Specify a scheduler (sampler) by URI for the Stable Cascade decoder pass. Operates\n                            the exact same way as --scheduler including the \"help\" option. Passing \"helpargs\"\n                            will yield a help message with a list of overridable arguments for each scheduler\n                            and their typical defaults. Defaults to the value of --scheduler.\n\n                            You may pass multiple scheduler URIs to this argument, each URI will be tried in\n                            turn.\n                            -----\n      --sdxl-refiner MODEL_URI\n                            Specify a Stable Diffusion XL (torch-sdxl) refiner model path using a URI. This\n                            should be a Hugging Face repository slug / blob link, path to model file on disk\n                            (for example, a .pt, .pth, .bin, .ckpt, or .safetensors file), or model folder\n                            containing model files.\n\n                            Optional arguments can be provided after the SDXL refiner model specification, these\n                            are: \"revision\", \"variant\", \"subfolder\", and \"dtype\".\n\n                            They can be specified in any order; they are not positional: \"huggingface/refi\n                            ner_model_xl;revision=main;variant=fp16;subfolder=repo_subfolder;dtype=float16\".\n\n                            The \"revision\" argument specifies the model revision to use for the refiner model\n                            when loading from a Hugging Face repository (the Git branch / tag, default is\n                            \"main\").\n\n                            The \"variant\" argument specifies the SDXL refiner model variant and defaults to the\n                            value of --variant. 
When \"variant\" is specified when loading from a Hugging Face\n                            repository or folder, weights will be loaded from \"variant\" filename, e.g.\n                            \"pytorch_model.\u003cvariant\u003e.safetensors.\n\n                            The \"subfolder\" argument specifies the SDXL refiner model subfolder, if specified\n                            when loading from a Hugging Face repository or folder, weights from the specified\n                            subfolder.\n\n                            The \"dtype\" argument specifies the SDXL refiner model precision, it defaults to the\n                            value of -t/--dtype and should be one of: auto, bfloat16, float16, or float32.\n\n                            If you wish to load a weights file directly from disk, the simplest way is: --sdxl-\n                            refiner \"my_sdxl_refiner.safetensors\" or --sdxl-refiner\n                            \"my_sdxl_refiner.safetensors;dtype=float16\", all other loading arguments aside from\n                            \"dtype\" are unused in this case and may produce an error message if used.\n\n                            If you wish to load a specific weight file from a Hugging Face repository, use the\n                            blob link loading syntax: --sdxl-refiner\n                            \"https://huggingface.co/UserName/repository-\n                            name/blob/main/refiner_model.safetensors\", the \"revision\" argument may be used with\n                            this syntax.\n                            ------------\n      -rqo, --sdxl-refiner-sequential-offload\n                            Force sequential model offloading for the SDXL refiner pipeline, this may\n                            drastically reduce memory consumption and allow large models to run when they would\n                            otherwise not fit in your GPUs VRAM. Inference will be much slower. Mutually\n                            exclusive with --refiner-cpu-offload\n                            ------------------------------------\n      -rco, --sdxl-refiner-cpu-offload\n                            Force model cpu offloading for the SDXL refiner pipeline, this may reduce memory\n                            consumption and allow large models to run when they would otherwise not fit in your\n                            GPUs VRAM. Inference will be slower. Mutually exclusive with --refiner-sequential-\n                            offload\n                            -------\n      --sdxl-refiner-scheduler SCHEDULER_URI [SCHEDULER_URI ...], --sdxl-refiner-schedulers SCHEDULER_URI [SCHEDULER_URI ...]\n                            Specify a scheduler (sampler) by URI for the SDXL refiner pass. Operates the exact\n                            same way as --scheduler including the \"help\" option. Passing 'helpargs' will yield a\n                            help message with a list of overridable arguments for each scheduler and their\n                            typical defaults. Defaults to the value of --scheduler.\n\n                            You may pass multiple scheduler URIs to this argument, each URI will be tried in\n                            turn.\n                            -----\n      --sdxl-refiner-edit   Force the SDXL refiner to operate in edit mode instead of cooperative denoising mode\n                            as it would normally do for inpainting and ControlNet usage. 
      --sdxl-refiner-edit   Force the SDXL refiner to operate in edit mode instead of cooperative denoising mode\n                            as it would normally do for inpainting and ControlNet usage. The main model will\n                            perform the full amount of inference steps requested by --inference-steps. The\n                            output of the main model will be passed to the refiner model and processed with an\n                            image seed strength in img2img mode determined by (1.0 - high-noise-fraction)\n                            -----------------------------------------------------------------------------\n      --sdxl-second-prompts PROMPT [PROMPT ...]\n                            One or more secondary prompts to try using SDXL's secondary text encoder. By default\n                            the model is passed the primary prompt for this value, this option allows you to\n                            choose a different prompt. The negative prompt component can be specified with the\n                            same syntax as --prompts\n                            ------------------------\n      --sdxl-t2i-adapter-factors FLOAT [FLOAT ...]\n                            One or more SDXL-specific T2I adapter factors to try, this controls the amount of\n                            time-steps for which a T2I adapter applies guidance to an image, this is a value\n                            between 0.0 and 1.0. A value of 0.5 for example indicates that the T2I adapter is\n                            only active for half the amount of time-steps it takes to completely render an\n                            image.\n                            ------\n      --sdxl-aesthetic-scores FLOAT [FLOAT ...]\n                            One or more Stable Diffusion XL (torch-sdxl) \"aesthetic-score\" micro-conditioning\n                            parameters. Used to simulate an aesthetic score of the generated image by\n                            influencing the positive text condition. Part of SDXL's micro-conditioning as\n                            explained in section 2.2 of [https://huggingface.co/papers/2307.01952].\n                            -----------------------------------------------------------------------\n      --sdxl-crops-coords-top-left COORD [COORD ...]\n                            One or more Stable Diffusion XL (torch-sdxl) \"crops-coords-top-left\" micro-\n                            conditioning parameters in the format \"0,0\". --sdxl-crops-coords-top-left can be\n                            used to generate an image that appears to be \"cropped\" from the position --sdxl-\n                            crops-coords-top-left downwards. Favorable, well-centered images are usually\n                            achieved by setting --sdxl-crops-coords-top-left to \"0,0\". Part of SDXL's micro-\n                            conditioning as explained in section 2.2 of\n                            [https://huggingface.co/papers/2307.01952].\n                            -------------------------------------------\n      --sdxl-original-size SIZE [SIZE ...], --sdxl-original-sizes SIZE [SIZE ...]\n                            One or more Stable Diffusion XL (torch-sdxl) \"original-size\" micro-conditioning\n                            parameters in the format (WIDTH)x(HEIGHT). If not the same as --sdxl-target-size the\n                            image will appear to be down or up-sampled. --sdxl-original-size defaults to\n                            --output-size or the size of any input images if not specified. 
Part of SDXL's\n                            micro-conditioning as explained in section 2.2 of\n                            [https://huggingface.co/papers/2307.01952]\n                            ------------------------------------------\n      --sdxl-target-size SIZE [SIZE ...], --sdxl-target-sizes SIZE [SIZE ...]\n                            One or more Stable Diffusion XL (torch-sdxl) \"target-size\" micro-conditioning\n                            parameters in the format (WIDTH)x(HEIGHT). For most cases, --sdxl-target-size should\n                            be set to the desired height and width of the generated image. If not specified it\n                            will default to --output-size or the size of any input images. Part of SDXL's micro-\n                            conditioning as explained in section 2.2 of\n                            [https://huggingface.co/papers/2307.01952]\n                            ------------------------------------------\n      --sdxl-negative-aesthetic-scores FLOAT [FLOAT ...]\n                            One or more Stable Diffusion XL (torch-sdxl) \"negative-aesthetic-score\" micro-\n                            conditioning parameters. Part of SDXL's micro-conditioning as explained in section\n                            2.2 of [https://huggingface.co/papers/2307.01952]. Can be used to simulate an\n                            aesthetic score of the generated image by influencing the negative text condition.\n                            ----------------------------------------------------------------------------------\n      --sdxl-negative-original-sizes SIZE [SIZE ...]\n                            One or more Stable Diffusion XL (torch-sdxl) \"negative-original-sizes\" micro-\n                            conditioning parameters. Negatively condition the generation process based on a\n                            specific image resolution. Part of SDXL's micro-conditioning as explained in section\n                            2.2 of [https://huggingface.co/papers/2307.01952]. For more information, refer to\n                            this issue thread: https://github.com/huggingface/diffusers/issues/4208\n                            -----------------------------------------------------------------------\n      --sdxl-negative-target-sizes SIZE [SIZE ...]\n                            One or more Stable Diffusion XL (torch-sdxl) \"negative-target-sizes\" micro-\n                            conditioning parameters. Negatively condition the generation process based on a\n                            target image resolution. It should be the same as \"--sdxl-target-size\" for most\n                            cases. Part of SDXL's micro-conditioning as explained in section 2.2 of\n                            [https://huggingface.co/papers/2307.01952]. For more information, refer to this\n                            issue thread: https://github.com/huggingface/diffusers/issues/4208.\n                            -------------------------------------------------------------------\n      --sdxl-negative-crops-coords-top-left COORD [COORD ...]\n                            One or more Stable Diffusion XL (torch-sdxl) \"negative-crops-coords-top-left\" micro-\n                            conditioning parameters in the format \"0,0\". Negatively condition the generation\n                            process based on specific crop coordinates. 
Part of SDXL's micro-conditioning as\n                            explained in section 2.2 of [https://huggingface.co/papers/2307.01952]. For more\n                            information, refer to this issue thread:\n                            https://github.com/huggingface/diffusers/issues/4208.\n                            -----------------------------------------------------\n      --sdxl-refiner-prompts PROMPT [PROMPT ...]\n                            One or more prompts to try with the SDXL refiner model, by default the refiner model\n                            gets the primary prompt, this argument overrides that with a prompt of your\n                            choosing. The negative prompt component can be specified with the same syntax as\n                            --prompts\n                            ---------\n      --sdxl-refiner-clip-skips INTEGER [INTEGER ...]\n                            One or more clip skip override values to try for the SDXL refiner, which normally\n                            uses the clip skip value for the main model when it is defined by --clip-skips.\n                            -------------------------------------------------------------------------------\n      --sdxl-refiner-second-prompts PROMPT [PROMPT ...]\n                            One or more prompts to try with the SDXL refiner model's secondary text encoder, by\n                            default the refiner model gets the primary prompt passed to its second text encoder,\n                            this argument overrides that with a prompt of your choosing. The negative prompt\n                            component can be specified with the same syntax as --prompts\n                            ------------------------------------------------------------\n      --sdxl-refiner-aesthetic-scores FLOAT [FLOAT ...]\n                            See: --sdxl-aesthetic-scores, applied to SDXL refiner pass.\n                            -----------------------------------------------------------\n      --sdxl-refiner-crops-coords-top-left COORD [COORD ...]\n                            See: --sdxl-crops-coords-top-left, applied to SDXL refiner pass.\n                            ----------------------------------------------------------------\n      --sdxl-refiner-original-sizes SIZE [SIZE ...]\n                            See: --sdxl-original-sizes, applied to SDXL refiner pass.\n                            ----------------------------------------------------------\n      --sdxl-refiner-target-sizes SIZE [SIZE ...]\n                            See: --sdxl-target-sizes, applied to SDXL refiner pass.\n                            --------------------------------------------------------\n      --sdxl-refiner-negative-aesthetic-scores FLOAT [FLOAT ...]\n                            See: --sdxl-negative-aesthetic-scores, applied to SDXL refiner pass.\n                            --------------------------------------------------------------------\n      --sdxl-refiner-negative-original-sizes SIZE [SIZE ...]\n                            See: --sdxl-negative-original-sizes, applied to SDXL refiner pass.\n                            ------------------------------------------------------------------\n      --sdxl-refiner-negative-target-sizes SIZE [SIZE ...]\n                            See: --sdxl-negative-target-sizes, applied to SDXL refiner pass.\n                            ----------------------------------------------------------------\n
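                            For illustration only, the micro-conditioning options above might be combined like\n                            this (all values are arbitrary examples, the model slug is a placeholder):\n\n                                dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl --sdxl-original-sizes 1024x1024 --sdxl-target-sizes 1024x1024 --sdxl-crops-coords-top-left 0,0 --sdxl-aesthetic-scores 6.0 --prompts \"a portrait photo\"\n      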
--sdxl-refiner-negative-crops-coords-top-left COORD [COORD ...]\n                            See: --sdxl-negative-crops-coords-top-left, applied to SDXL refiner pass.\n                            -------------------------------------------------------------------------\n      -hnf FLOAT [FLOAT ...], --sdxl-high-noise-fractions FLOAT [FLOAT ...]\n                            One or more high-noise-fraction values for Stable Diffusion XL (torch-sdxl), this\n                            fraction of inference steps will be processed by the base model, while the rest will\n                            be processed by the refiner model. Multiple values to this argument will result in\n                            additional generation steps for each value. In certain situations when the mixture\n                            of denoisers algorithm is not supported, such as when using --control-nets and\n                            inpainting with SDXL, the inverse proportion of this value IE: (1.0 - high-noise-\n                            fraction) becomes the --image-seed-strengths input to the SDXL refiner. (default:\n                            [0.8])\n                            ------\n      -ri INT [INT ...], --sdxl-refiner-inference-steps INT [INT ...]\n                            One or more inference steps values for the SDXL refiner when in use. Override the\n                            number of inference steps used by the SDXL refiner, which defaults to the value\n                            taken from --inference-steps.\n                            -----------------------------\n      -rg FLOAT [FLOAT ...], --sdxl-refiner-guidance-scales FLOAT [FLOAT ...]\n                            One or more guidance scale values for the SDXL refiner when in use. Override the\n                            guidance scale value used by the SDXL refiner, which defaults to the value taken\n                            from --guidance-scales.\n                            -----------------------\n      -rgr FLOAT [FLOAT ...], --sdxl-refiner-guidance-rescales FLOAT [FLOAT ...]\n                            One or more guidance rescale values for the SDXL refiner when in use. Override the\n                            guidance rescale value used by the SDXL refiner, which defaults to the value taken\n                            from --guidance-rescales.\n                            -------------------------\n      -sc, --safety-checker\n                            Enable safety checker loading, this is off by default. When turned on, images with\n                            NSFW content detected may result in solid black output. Some pretrained models have\n                            no safety checker model present, in that case this option has no effect.\n                            ------------------------------------------------------------------------\n      -d DEVICE, --device DEVICE\n                            cuda / cpu, or other device supported by torch, for example mps on macOS. (default:\n                            cuda, mps on macOS). Use: cuda:0, cuda:1, cuda:2, etc. to specify a specific CUDA\n                            supporting GPU.\n                            ---------------\n      -t DTYPE, --dtype DTYPE\n                            Model precision: auto, bfloat16, float16, or float32. 
(default: auto)\n                            ---------------------------------------------------------------------\n      -s SIZE, --output-size SIZE\n                            Image output size, for txt2img generation this is the exact output size. The\n                            dimensions specified for this value must be aligned by 8 or you will receive an\n                            error message. If an --image-seeds URI is used, its Seed, Mask, and/or Control\n                            component image sources will be resized to this dimension with aspect ratio\n                            maintained before being used for generation by default, except in the case of Stable\n                            Cascade where the images are used as a style prompt (not a noised seed), and can be\n                            of varying dimensions.\n\n                            If --no-aspect is not specified, width will be fixed and a new height (aligned by 8)\n                            will be calculated for the input images. In most cases resizing the image inputs\n                            will result in an image output of an equal size to the inputs, except for upscalers\n                            and Deep Floyd --model-type values (torch-if*).\n\n                            If only one integer value is provided, that is the value for both dimensions. X/Y\n                            dimension values should be separated by \"x\".\n\n                            This value defaults to 512x512 for Stable Diffusion when no --image-seeds are\n                            specified (IE txt2img mode), 1024x1024 for Stable Cascade and Stable Diffusion 3/XL\n                            or Flux model types, and 64x64 for --model-type torch-if (Deep Floyd stage 1).\n\n                            Deep Floyd stage 1 images passed to superscaler models (--model-type torch-ifs*)\n                            that are specified with the 'floyd' keyword argument in an --image-seeds definition\n                            are never resized or processed in any way.\n                            ------------------------------------------\n      -na, --no-aspect      This option disables aspect-correct resizing of images provided to --image-seeds\n                            globally. Seed, Mask, and Control guidance images will be resized to the closest\n                            dimension specified by --output-size that is aligned by 8 pixels with no\n                            consideration of the source aspect ratio. This can be overridden at the --image-\n                            seeds level with the image seed keyword argument 'aspect=true/false'.\n                            ----------------------------------------------------------------------\n      -o PATH, --output-path PATH\n                            Output path for generated images and files. This directory will be created if it\n                            does not exist. (default: ./output)\n                            -----------------------------------\n      -op PREFIX, --output-prefix PREFIX\n                            Name prefix for generated images and files. This prefix will be added to the\n                            beginning of every generated file, followed by an underscore.\n                            -------------------------------------------------------------\n
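                            For example, a hypothetical invocation writing 768x768 images to a custom directory\n                            with a custom file prefix (the model slug is a placeholder):\n\n                                dgenerate stabilityai/stable-diffusion-2-1 -s 768x768 -o ./renders -op test --prompts \"a lighthouse at dusk\"\n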
      -ox, --output-overwrite\n                            Enable overwrites of files in the output directory that already exist. The default\n                            behavior is not to do this, and instead append a filename suffix:\n                            \"_duplicate_(number)\" when it is detected that the generated file name already\n                            exists.\n                            -------\n      -oc, --output-configs\n                            Write a configuration text file for every output image or animation. The text file\n                            can be used to reproduce that particular output image or animation by piping it to\n                            dgenerate STDIN or by using the --file option, for example \"dgenerate \u003c config.dgen\"\n                            or \"dgenerate --file config.dgen\". These files will be written to --output-path and\n                            are affected by --output-prefix and --output-overwrite as well. The files will be\n                            named after their corresponding image or animation file. Configuration files\n                            produced for animation frame images will utilize --frame-start and --frame-end to\n                            specify the frame number.\n                            -------------------------\n      -om, --output-metadata\n                            Write the information produced by --output-configs to the PNG metadata of each\n                            image. Metadata will not be written to animated files (yet). The data is written to\n                            a PNG metadata property named DgenerateConfig and can be read using ImageMagick like\n                            so: magick identify -format \"%[Property:DgenerateConfig]\" generated_file.png.\n                            ------------------------------------------------------------------------------\n      -pw PROMPT_WEIGHTER_URI, --prompt-weighter PROMPT_WEIGHTER_URI\n                            Specify a prompt weighter implementation by URI, for example: --prompt-weighter\n                            compel, or --prompt-weighter sd-embed. By default, no prompt weighting syntax is\n                            enabled, meaning that you cannot adjust token weights as you may be able to do in\n                            software such as ComfyUI, Automatic1111, CivitAI, etc., and in some cases the length\n                            of your prompt is limited. Prompt weighters support these special token weighting\n                            syntaxes and long prompts; currently there are two implementations: \"compel\" and\n                            \"sd-embed\". See: --prompt-weighter-help for a list of implementation names. You may\n                            also use --prompt-weighter-help \"name\" to see comprehensive documentation for a\n                            specific prompt weighter implementation.\n                            ----------------------------------------\n      --prompt-weighter-help [PROMPT_WEIGHTER_NAMES ...]\n                            Use this option alone (or with --plugin-modules) and no model specification in order\n                            to list available prompt weighter names. Specifying one or more prompt weighter\n                            names after this option will cause usage documentation for the specified prompt\n                            weighters to be printed. 
When used with --plugin-modules, prompt weighters\n                            implemented by the specified plugins will also be listed.\n                            ---------------------------------------------------------\n      -p PROMPT [PROMPT ...], --prompts PROMPT [PROMPT ...]\n                            One or more prompts to try, an image group is generated for each prompt, prompt data\n                            is split by ; (semi-colon). The first value is the positive text influence, things\n                            you want to see. The second value is the negative influence, IE: things you don't\n                            want to see. Example: --prompts \"photo of a horse in a field; artwork, painting,\n                            rain\". (default: [(empty string)])\n                            ----------------------------------\n      --sd3-max-sequence-length INTEGER\n                            The maximum number of prompt tokens that the T5EncoderModel (third text encoder) of\n                            Stable Diffusion 3 can handle. This should be an integer value between 1 and 512\n                            inclusive. The higher the value the more resources and time are required for\n                            processing. (default: 256)\n                            --------------------------\n      --sd3-second-prompts PROMPT [PROMPT ...]\n                            One or more secondary prompts to try using the torch-sd3 (Stable Diffusion 3)\n                            secondary text encoder. By default the model is passed the primary prompt for this\n                            value, this option allows you to choose a different prompt. The negative prompt\n                            component can be specified with the same syntax as --prompts\n                            ------------------------------------------------------------\n      --sd3-third-prompts PROMPT [PROMPT ...]\n                            One or more tertiary prompts to try using the torch-sd3 (Stable Diffusion 3)\n                            tertiary (T5) text encoder. By default the model is passed the primary prompt for\n                            this value, this option allows you to choose a different prompt. The negative prompt\n                            component can be specified with the same syntax as --prompts\n                            ------------------------------------------------------------\n      --flux-second-prompts PROMPT [PROMPT ...]\n                            One or more secondary prompts to try using the torch-flux (Flux) secondary (T5) text\n                            encoder. By default the model is passed the primary prompt for this value, this\n                            option allows you to choose a different prompt.\n                            -----------------------------------------------\n      --flux-max-sequence-length INTEGER\n                            The maximum number of prompt tokens that the T5EncoderModel (second text encoder) of\n                            Flux can handle. This should be an integer value between 1 and 512 inclusive. The\n                            higher the value the more resources and time are required for processing. (default:\n                            512)\n                            ----\n
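                            For example, two prompt variations may be tried in one run, the second with a\n                            negative component (all values are illustrative, the model slug is a placeholder):\n\n                                dgenerate stabilityai/stable-diffusion-2-1 --prompts \"a red barn in a field\" \"a red barn in a field; cartoon, painting\"\n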
      -cs INTEGER [INTEGER ...], --clip-skips INTEGER [INTEGER ...]\n                            One or more clip skip values to try. Clip skip is the number of layers to be skipped\n                            from CLIP while computing the prompt embeddings, it must be a value greater than or\n                            equal to zero. A value of 1 means that the output of the pre-final layer will be\n                            used for computing the prompt embeddings. This is only supported for --model-type\n                            values \"torch\", \"torch-sdxl\", and \"torch-sd3\".\n                            ----------------------------------------------\n      -se SEED [SEED ...], --seeds SEED [SEED ...]\n                            One or more seeds to try, define fixed seeds to achieve deterministic output. This\n                            argument may not be used when --gse/--gen-seeds is used. (default: [randint(0,\n                            99999999999999)])\n                            -----------------\n      -sei, --seeds-to-images\n                            When this option is enabled, each provided --seeds value or value generated by\n                            --gen-seeds is used for the corresponding image input given by --image-seeds. If the\n                            number of --seeds given is not identical to the number of --image-seeds given, the\n                            seed is determined as: seed = seeds[image_seed_index % len(seeds)], IE: it wraps\n                            around.\n                            -------\n      -gse COUNT, --gen-seeds COUNT\n                            Auto-generate N random seeds to try. This argument may not be used when -se/--seeds\n                            is used.\n                            --------\n      -af FORMAT, --animation-format FORMAT\n                            Output format when generating an animation from an input video / gif / webp etc.\n                            Value must be one of: mp4, png, apng, gif, or webp. You may also specify \"frames\" to\n                            indicate that only frames should be output and no coalesced animation file should be\n                            rendered. (default: mp4)\n                            ------------------------\n      -if FORMAT, --image-format FORMAT\n                            Output format when writing static images. Any selection other than \"png\" is not\n                            compatible with --output-metadata. Value must be one of: png, apng, blp, bmp, dib,\n                            bufr, pcx, dds, ps, eps, gif, grib, h5, hdf, jp2, j2k, jpc, jpf, jpx, j2c, icns,\n                            ico, im, jfif, jpe, jpg, jpeg, tif, tiff, mpo, msp, palm, pdf, pbm, pgm, ppm, pnm,\n                            pfm, bw, rgb, rgba, sgi, tga, icb, vda, vst, webp, wmf, emf, or xbm. (default: png)\n                            -----------------------------------------------------------------------------------\n      -nf, --no-frames      Do not write frame images individually when rendering an animation, only write the\n                            animation file. This option is incompatible with --animation-format frames.\n                            ---------------------------------------------------------------------------\n      -fs FRAME_NUMBER, --frame-start FRAME_NUMBER\n                            Starting frame slice point for animated files (zero-indexed), the specified frame\n                            will be included. 
(default: 0)\n                            ------------------------------\n      -fe FRAME_NUMBER, --frame-end FRAME_NUMBER\n                            Ending frame slice point for animated files (zero-indexed), the specified frame will\n                            be included.\n                            ------------\n      -is SEED [SEED ...], --image-seeds SEED [SEED ...]\n                            One or more image seed URIs to process, these may consist of URLs or file paths.\n                            Videos / GIFs / WEBP files will result in frames being rendered as well as an\n                            animated output file being generated if more than one frame is available in the\n                            input file. Inpainting for static images can be achieved by specifying a black and\n                            white mask image in each image seed string using a semicolon as the separating\n                            character, like so: \"my-seed-image.png;my-image-mask.png\", white areas of the mask\n                            indicate where generated content is to be placed in your seed image.\n\n                            Output dimensions specific to the image seed can be specified by placing the\n                            dimension at the end of the string following a semicolon like so: \"my-seed-\n                            image.png;512x512\" or \"my-seed-image.png;my-image-mask.png;512x512\". When using\n                            --control-nets, a singular image specification is interpreted as the control\n                            guidance image, and you can specify multiple control image sources by separating\n                            them with commas in the case where multiple ControlNets are specified, IE: (--image-\n                            seeds \"control-image1.png, control-image2.png\") OR (--image-seeds\n                            \"seed.png;control=control-image1.png, control-image2.png\").\n\n                            Using --control-nets with img2img or inpainting can be accomplished with the syntax:\n                            \"my-seed-image.png;mask=my-image-mask.png;control=my-control-\n                            image.png;resize=512x512\". The \"mask\" and \"resize\" arguments are optional when using\n                            --control-nets. Videos, GIFs, and WEBP are also supported as inputs when using\n                            --control-nets, even for the \"control\" argument.\n\n                            --image-seeds is capable of reading from multiple animated files at once or any\n                            combination of animated files and images, the animated file with the fewest frames\n                            dictates how many frames are generated and static images are duplicated over the\n                            total number of frames. The keyword argument \"aspect\" can be used to determine\n                            resizing behavior when the global argument --output-size or the local keyword\n                            argument \"resize\" is specified, it is a boolean argument indicating whether aspect\n                            ratio of the input image should be respected or ignored.\n\n                            The keyword argument \"floyd\" can be used to specify images from a previous Deep\n                            Floyd stage when using --model-type torch-ifs*. When keyword arguments are present,\n                            all applicable images such as \"mask\", \"control\", etc. must also be defined with\n                            keyword arguments instead of with the short syntax.\n                            ---------------------------------------------------\n
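                            For illustration only, a hypothetical inpainting invocation with a per-seed resize\n                            might look like this (file names and the model slug are placeholders):\n\n                                dgenerate stabilityai/stable-diffusion-2-inpainting --image-seeds \"my-photo.png;my-mask.png;512x512\" --image-seed-strengths 0.6 --prompts \"a wooden boat on a lake\"\n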
      -sip PROCESSOR_URI [PROCESSOR_URI ...], --seed-image-processors PROCESSOR_URI [PROCESSOR_URI ...]\n                            Specify one or more image processor actions to perform on the primary image(s)\n                            specified by --image-seeds.\n\n                            For example: --seed-image-processors \"flip\" \"mirror\" \"grayscale\".\n\n                            If you have multiple images specified for batching, for example (--image-seeds\n                            \"images: img2img-1.png, img2img-2.png\"), you may use the delimiter \"+\" to separate\n                            image processor chains, so that a certain chain affects a certain seed image, the\n                            plus symbol may also be used to represent a null processor.\n\n                            For example: (--seed-image-processors affect-img-1 + affect-img-2), or (--seed-\n                            image-processors + affect-img-2), or (--seed-image-processors affect-img-1 +).\n\n                            The amount of processors / processor chains must not exceed the amount of input\n                            images, or you will receive a syntax error message. To obtain more information about\n                            what image processors are available and how to use them, see: --image-processor-\n                            help.\n                            -----\n      -mip PROCESSOR_URI [PROCESSOR_URI ...], --mask-image-processors PROCESSOR_URI [PROCESSOR_URI ...]\n                            Specify one or more image processor actions to perform on the inpaint mask image(s)\n                            specified by --image-seeds.\n\n                            For example: --mask-image-processors \"invert\".\n\n                            If you have multiple masks specified for batching, for example --image-seeds\n                            (\"images: img2img-1.png, img2img-2.png; mask-1.png, mask-2.png\"), you may use the\n                            delimiter \"+\" to separate image processor chains, so that a certain chain affects a\n                            certain mask image, the plus symbol may also be used to represent a null processor.\n\n                            For example: (--mask-image-processors affect-mask-1 + affect-mask-2), or (--mask-\n                            image-processors + affect-mask-2), or (--mask-image-processors affect-mask-1 +).\n\n                            The amount of processors / processor chains must not exceed the amount of input mask\n                            images, or you will receive a syntax error message. 
To obtain more information about\n                            what image processors are available and how to use them, see: --image-processor-\n                            help.\n                            -----\n      -cip PROCESSOR_URI [PROCESSOR_URI ...], --control-image-processors PROCESSOR_URI [PROCESSOR_URI ...]\n                            Specify one or more image processor actions to perform on the control image\n                            specified by --image-seeds, this option is meant to be used with --control-nets.\n\n                            Example: --control-image-processors \"canny;lower=50;upper=100\".\n\n                            The delimiter \"+\" can be used to specify a different processor group for each image\n                            when using multiple control images with --control-nets.\n\n                            For example, if you have --image-seeds \"img1.png, img2.png\" or --image-seeds\n                            \"...;control=img1.png, img2.png\" specified and multiple ControlNet models specified\n                            with --control-nets, you can specify processors for those control images with the\n                            syntax: (--control-image-processors \"processes-img1\" + \"processes-img2\").\n\n                            This syntax also supports chaining of processors, for example: (--control-image-\n                            processors \"first-process-img1\" \"second-process-img1\" + \"process-img2\").\n\n                            The amount of specified processors must not exceed the amount of specified control\n                            images, or you will receive a syntax error message.\n\n                            Images which do not have a processor defined for them will not be processed, and the\n                            plus character can be used to indicate an image is not to be processed and instead\n                            skipped over when that image is a leading element, for example (--control-image-\n                            processors + \"process-second\") would indicate that the first control guidance image\n                            is not to be processed, only the second.\n\n                            To obtain more information about what image processors are available and how to use\n                            them, see: --image-processor-help.\n                            ----------------------------------\n      --image-processor-help [PROCESSOR_NAME ...]\n                            Use this option alone (or with --plugin-modules) and no model specification in order\n                            to list available image processor names. Specifying one or more image processor\n                            names after this option will cause usage documentation for the specified image\n                            processors to be printed. When used with --plugin-modules, image processors\n                            implemented by the specified plugins will also be listed.\n                            ---------------------------------------------------------\n
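                            For illustration only, a hypothetical two-ControlNet invocation where only the first\n                            control image is processed with canny edge detection (all names are placeholders):\n\n                                dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl --control-nets \"controlnet-one.safetensors\" \"controlnet-two.safetensors\" --image-seeds \"control-1.png, control-2.png\" --control-image-processors \"canny;lower=50;upper=100\" --prompts \"a city street\"\n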
      -pp PROCESSOR_URI [PROCESSOR_URI ...], --post-processors PROCESSOR_URI [PROCESSOR_URI ...]\n                            Specify one or more image processor actions to perform on generated output before it\n                            is saved. For example: --post-processors \"upscaler;model=4x_ESRGAN.pth\". To obtain\n                            more information about what processors are available and how to use them, see:\n                            --image-processor-help.\n                            -----------------------\n      -iss FLOAT [FLOAT ...], --image-seed-strengths FLOAT [FLOAT ...]\n                            One or more image strength values to try when using --image-seeds for img2img or\n                            inpaint mode. Closer to 0 means high usage of the seed image (less noise\n                            convolution), 1 effectively means no usage (high noise convolution). Low values will\n                            produce something closer or more relevant to the input image, high values will give\n                            the AI more creative freedom. This value must be greater than 0 and less than or\n                            equal to 1. (default: [0.8])\n                            ----------------------------\n      -uns INTEGER [INTEGER ...], --upscaler-noise-levels INTEGER [INTEGER ...]\n                            One or more upscaler noise level values to try when using the super resolution\n                            upscaler --model-type torch-upscaler-x4 or torch-ifs. Specifying this option for\n                            --model-type torch-upscaler-x2 will produce an error message. The higher this value\n                            the more noise is added to the image before upscaling (similar to --image-seed-\n                            strengths). (default: [20 for x4, 250 for torch-ifs/torch-ifs-img2img, 0 for torch-\n                            ifs inpainting mode])\n                            ---------------------\n      -gs FLOAT [FLOAT ...], --guidance-scales FLOAT [FLOAT ...]\n                            One or more guidance scale values to try. Guidance scale affects how much your text\n                            prompt is considered. Low values draw more data from images unrelated to the text\n                            prompt. (default: [5])\n                            ----------------------\n      -igs FLOAT [FLOAT ...], --image-guidance-scales FLOAT [FLOAT ...]\n                            One or more image guidance scale values to try. This can push the generated image\n                            towards the initial image when using --model-type *-pix2pix models, it is\n                            unsupported for other model types. Use in conjunction with --image-seeds; inpainting\n                            (masks) and --control-nets are not supported. Image guidance scale is enabled by\n                            setting image-guidance-scale \u003e 1. Higher image guidance scale encourages generated\n                            images that are closely linked to the source image, usually at the expense of lower\n                            image quality. Requires a value of at least 1. (default: [1.5])\n                            ---------------------------------------------------------------\n
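                            For example, the values above can be combined into a small parameter sweep, each\n                            combination of --image-seed-strengths and --guidance-scales producing its own\n                            outputs (the file name and model slug are placeholders):\n\n                                dgenerate stabilityai/stable-diffusion-2-1 --image-seeds \"my-photo.png\" --image-seed-strengths 0.4 0.8 --guidance-scales 5 7 --prompts \"a watercolor landscape\"\n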
of [Common Diffusion Noise\n                            Schedules and Sample Steps are Flawed] (https://arxiv.org/pdf/2305.08891.pdf).\n                            Guidance rescale factor should fix overexposure when using zero terminal SNR. This\n                            is supported for basic text to image generation when using --model-type \"torch\" but\n                            not inpainting, img2img, or --control-nets. When using --model-type \"torch-sdxl\" it\n                            is supported for basic generation, inpainting, and img2img, unless --control-nets is\n                            specified in which case only inpainting is supported. It is supported for --model-\n                            type \"torch-sdxl-pix2pix\" but not --model-type \"torch-pix2pix\". (default: [0.0])\n                            --------------------------------------------------------------------------------\n      -ifs INTEGER [INTEGER ...], --inference-steps INTEGER [INTEGER ...]\n                            One or more inference steps values to try. The amount of inference (de-noising)\n                            steps effects image clarity to a degree, higher values bring the image closer to\n                            what the AI is targeting for the content of the image. Values between 30-40 produce\n                            good results, higher values may improve image quality and or change image content.\n                            (default: [30])\n                            ---------------\n      -mc EXPR [EXPR ...], --cache-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to clear all model caches automatically\n                            (DiffusionPipeline, UNet, VAE, ControlNet, and Text Encoder) considering current\n                            memory usage. If any of these constraint expressions are met all models cached in\n                            memory will be cleared. Example, and default value: \"used_percent \u003e 70\" For Syntax\n                            See: [https://dgenerate.readthedocs.io/en/v4.5.1/dgenerate_submodules.html#dgenerate\n                            .pipelinewrapper.CACHE_MEMORY_CONSTRAINTS]\n                            ------------------------------------------\n      -pmc EXPR [EXPR ...], --pipeline-cache-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the in memory\n                            DiffusionPipeline cache considering current memory usage, and estimated memory usage\n                            of new models that are about to enter memory. If any of these constraint expressions\n                            are met all DiffusionPipeline objects cached in memory will be cleared. 
Example, and\n                            default value: \"pipeline_size \u003e (available * 0.75)\" For Syntax See: [https://dgenera\n                            te.readthedocs.io/en/v4.5.1/dgenerate_submodules.html#dgenerate.pipelinewrapper.PIPE\n                            LINE_CACHE_MEMORY_CONSTRAINTS]\n                            ------------------------------\n      -umc EXPR [EXPR ...], --unet-cache-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the in memory\n                            UNet cache considering current memory usage, and estimated memory usage of new UNet\n                            models that are about to enter memory. If any of these constraint expressions are\n                            met all UNet models cached in memory will be cleared. Example, and default value:\n                            \"unet_size \u003e (available * 0.75)\" For Syntax See: [https://dgenerate.readthedocs.io/e\n                            n/v4.5.1/dgenerate_submodules.html#dgenerate.pipelinewrapper.UNET_CACHE_MEMORY_CONST\n                            RAINTS]\n                            -------\n      -vmc EXPR [EXPR ...], --vae-cache-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the in memory\n                            VAE cache considering current memory usage, and estimated memory usage of new VAE\n                            models that are about to enter memory. If any of these constraint expressions are\n                            met all VAE models cached in memory will be cleared. Example, and default value:\n                            \"vae_size \u003e (available * 0.75)\" For Syntax See: [https://dgenerate.readthedocs.io/en\n                            /v4.5.1/dgenerate_submodules.html#dgenerate.pipelinewrapper.VAE_CACHE_MEMORY_CONSTRA\n                            INTS]\n                            -----\n      -cmc EXPR [EXPR ...], --control-net-cache-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the in memory\n                            ControlNet cache considering current memory usage, and estimated memory usage of new\n                            ControlNet models that are about to enter memory. If any of these constraint\n                            expressions are met all ControlNet models cached in memory will be cleared. Example,\n                            and default value: \"controlnet_size \u003e (available * 0.75)\" For Syntax See: [https://d\n                            generate.readthedocs.io/en/v4.5.1/dgenerate_submodules.html#dgenerate.pipelinewrappe\n                            r.CONTROLNET_CACHE_MEMORY_CONSTRAINTS]\n                            --------------------------------------\n      -tmc EXPR [EXPR ...], --text-encoder-cache-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the in memory\n                            Text Encoder cache considering current memory usage, and estimated memory usage of\n                            new Text Encoder models that are about to enter memory. 
If any of these constraint\n                            expressions are met all Text Encoder models cached in memory will be cleared.\n                            Example, and default value: \"text_encoder_size \u003e (available * 0.75)\" For Syntax See:\n                            [https://dgenerate.readthedocs.io/en/v4.5.1/dgenerate_submodules.html#dgenerate.pipe\n                            linewrapper.TEXT_ENCODER_CACHE_MEMORY_CONSTRAINTS]\n                            --------------------------------------------------\n      -iemc EXPR [EXPR ...], --image-encoder-cache-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the in memory\n                            Image Encoder cache considering current memory usage, and estimated memory usage of\n                            new Image Encoder models that are about to enter memory. If any of these constraint\n                            expressions are met all Image Encoder models cached in memory will be cleared.\n                            Example, and default value: \"image_encoder_size \u003e (available * 0.75)\" For Syntax\n                            See: [https://dgenerate.readthedocs.io/en/v4.5.1/dgenerate_submodules.html#dgenerate\n                            .pipelinewrapper.IMAGE_ENCODER_CACHE_MEMORY_CONSTRAINTS]\n                            --------------------------------------------------------\n      -amc EXPR [EXPR ...], --adapter-cache-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the in memory\n                            T2I Adapter cache considering current memory usage, and estimated memory usage of\n                            new T2I Adapter models that are about to enter memory. If any of these constraint\n                            expressions are met all T2I Adapter models cached in memory will be cleared.\n                            Example, and default value: \"adapter_size \u003e (available * 0.75)\" For Syntax See: [htt\n                            ps://dgenerate.readthedocs.io/en/v4.5.1/dgenerate_submodules.html#dgenerate.pipeline\n                            wrapper.ADAPTER_CACHE_MEMORY_CONSTRAINTS]\n                            -----------------------------------------\n      -tfmc EXPR [EXPR ...], --transformer-cache-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the in memory\n                            Transformer cache considering current memory usage, and estimated memory usage of\n                            new Transformer models that are about to enter memory. 
If any of these constraint\n                            expressions are met all Transformer models cached in memory will be cleared.\n                            Example, and default value: \"transformer_size \u003e (available * 0.75)\" For Syntax See:\n                            [https://dgenerate.readthedocs.io/en/v4.5.1/dgenerate_submodules.html#dgenerate.pipe\n                            linewrapper.TRANSFORMER_CACHE_MEMORY_CONSTRAINTS]\n                            -------------------------------------------------\n      -ipmc EXPR [EXPR ...], --image-processor-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the entire in\n                            memory diffusion model cache considering current memory usage, and estimated memory\n                            usage of new image processor models that are about to enter memory. If any of these\n                            constraint expressions are met all diffusion related models cached in memory will be\n                            cleared. Example, and default value: \"processor_size \u003e (available * 0.70)\" For\n                            Syntax See: [https://dgenerate.readthedocs.io/en/v4.5.1/dgenerate_submodules.html#dg\n                            enerate.imageprocessors.IMAGE_PROCESSOR_MEMORY_CONSTRAINTS]\n                            -----------------------------------------------------------\n      -ipcc EXPR [EXPR ...], --image-processor-cuda-memory-constraints EXPR [EXPR ...]\n                            Cache constraint expressions describing when to automatically clear the last active\n                            diffusion model from VRAM considering current GPU memory usage, and estimated GPU\n                            memory usage of new image processor models that are about to enter VRAM. If any of\n                            these constraint expressions are met the last active diffusion model in VRAM will be\n                            destroyed. Example, and default value: \"processor_size \u003e (available * 0.70)\" For\n                            Syntax See: [https://dgenerate.readthedocs.io/en/v4.5.1/dgenerate_submodules.html#dg\n                            enerate.imageprocessors.IMAGE_PROCESSOR_CUDA_MEMORY_CONSTRAINTS]\n                            ----------------------------------------------------------------\n\n\n\nWindows Install\n===============\n\nYou can install using the Windows installer provided with each release on the\n`Releases Page \u003chttps://github.com/Teriks/dgenerate/releases\u003e`_, or you can manually\ninstall with pipx (or pip if you want) as described below.\n\n\nManual Install\n--------------\n\nInstall Visual Studio (Community or another edition); make sure \"Desktop development with C++\" is selected, and unselect anything you do not need.\n\nhttps://visualstudio.microsoft.com/downloads/\n\n\nInstall the Rust compiler using rustup-init.exe (x64); use the default install options.\n\nhttps://www.rust-lang.org/tools/install\n\nInstall Python:\n\nhttps://www.python.org/ftp/python/3.12.3/python-3.12.3-amd64.exe\n\nMake sure you select the option \"Add to PATH\" in the Python installer;\notherwise, invoke Python directly using its full path while installing the tool.\n\nInstall Git for Windows:\n\nhttps://gitforwindows.org/\n\n\nInstall dgenerate\n-----------------\n\nUsing Windows CMD:\n\nInstall pipx:\n\n.. 
code-block:: bash\n\n    pip install pipx\n    pipx ensurepath\n\n    # Log out and log back in so PATH takes effect\n\nInstall dgenerate:\n\n.. code-block:: bash\n\n    pipx install dgenerate ^\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/cu124/\"\n\n    # with NCNN upscaler support\n\n    pipx install dgenerate[ncnn] ^\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/cu124/\"\n\n    # If you want a specific version\n\n    pipx install dgenerate==4.5.1 ^\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/cu124/\"\n\n    # with NCNN upscaler support and a specific version\n\n    pipx install dgenerate[ncnn]==4.5.1 ^\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/cu124/\"\n\n    # You can install without pipx into your own environment like so\n\n    pip install dgenerate==4.5.1 --extra-index-url https://download.pytorch.org/whl/cu124/\n\n    # Or with NCNN\n\n    pip install dgenerate[ncnn]==4.5.1 --extra-index-url https://download.pytorch.org/whl/cu124/\n\n\nIt is recommended to install dgenerate with pipx if you are just intending\nto use it as a command line program. If you want to develop, you can install it from\na cloned repository like this:\n\n.. code-block:: bash\n\n    # in the top of the repo make\n    # an environment and activate it\n\n    python -m venv venv\n    venv\\Scripts\\activate\n\n    # Install with pip into the environment\n\n    pip install --editable .[dev] --extra-index-url https://download.pytorch.org/whl/cu124/\n\n    # Install with pip into the environment, include NCNN\n\n    pip install --editable .[dev,ncnn] --extra-index-url https://download.pytorch.org/whl/cu124/\n\n\nRun ``dgenerate`` to generate images:\n\n.. code-block:: bash\n\n    # Images are output to the \"output\" folder\n    # in the current working directory by default\n\n    dgenerate --help\n\n    dgenerate stabilityai/stable-diffusion-2-1 ^\n    --prompts \"an astronaut riding a horse\" ^\n    --output-path output ^\n    --inference-steps 40 ^\n    --guidance-scales 10\n\nLinux or WSL Install\n====================\n\nFirst update your system and install build-essential:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    sudo apt update \u0026\u0026 sudo apt upgrade\n    sudo apt install build-essential\n\nInstall CUDA Toolkit 12.*: https://developer.nvidia.com/cuda-downloads\n\nI recommend using the runfile option.\n\nDo not attempt to install a driver from the prompts if using WSL.\n\nAdd libraries to linker path:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Add to ~/.bashrc\n\n    # For Linux add the following\n    export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH\n\n    # For WSL add the following\n    export LD_LIBRARY_PATH=/usr/lib/wsl/lib:/usr/local/cuda/lib64:$LD_LIBRARY_PATH\n\n    # Add this in both cases as well\n    export PATH=/usr/local/cuda/bin:$PATH\n\n\nWhen done editing ``~/.bashrc`` do:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    source ~/.bashrc\n\n\nInstall Python \u003e=3.10,\u003c3.13 (Debian / Ubuntu) and pipx\n------------------------------------------------------\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    sudo apt install python3 python3-pip pipx python3-venv python3-wheel\n    pipx ensurepath\n\n    source ~/.bashrc\n\n\nInstall dgenerate\n-----------------\n\n.. 
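code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # (optional) sanity check, assuming you installed the CUDA toolkit\n    # as described above; nvcc ships with the CUDA toolkit and should\n    # now be on your PATH\n\n    nvcc --version\n\nThe check above is optional. Then install dgenerate:\n\n.. 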
code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # install with just support for torch\n\n    pipx install dgenerate \\\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/cu124/\"\n\n    # With NCNN upscaler support\n\n    # be aware that the ncnn python package depends on\n    # the non-headless version of opencv-python, which may\n    # cause issues on headless systems without a window manager,\n    # such as not being able to find the native library libGL;\n    # in addition, you will probably have to do some work\n    # to get Vulkan driver support\n\n    pipx install dgenerate[ncnn] \\\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/cu124/\"\n\n    # If you want a specific version\n\n    pipx install dgenerate==4.5.1 \\\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/cu124/\"\n\n    # You can install without pipx into your own environment like so\n\n    pip3 install dgenerate==4.5.1 --extra-index-url https://download.pytorch.org/whl/cu124/\n\n    # Or with NCNN\n\n    pip3 install dgenerate[ncnn]==4.5.1 --extra-index-url https://download.pytorch.org/whl/cu124/\n\n\nIt is recommended to install dgenerate with pipx if you are just intending\nto use it as a command line program. If you want to develop, you can install it from\na cloned repository like this:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # in the top of the repo make\n    # an environment and activate it\n\n    python3 -m venv venv\n    source venv/bin/activate\n\n    # Install with pip into the environment\n\n    pip3 install --editable .[dev] --extra-index-url https://download.pytorch.org/whl/cu124/\n\n\nRun ``dgenerate`` to generate images:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Images are output to the \"output\" folder\n    # in the current working directory by default\n\n    dgenerate --help\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \\\n    --output-path output \\\n    --inference-steps 40 \\\n    --guidance-scales 10\n\n\nLinux with ROCm (AMD Cards)\n===========================\n\nOn Linux you can use the ROCm torch backend with AMD cards. This is only supported on Linux, as\ntorch does not distribute this backend for Windows.\n\nROCm has been minimally verified to work with dgenerate using a rented\nMI300X AMD GPU instance / space, and has not been tested extensively.\n\nWhen specifying any ``--device`` value, use ``cuda``, ``cuda:1``, etc. as you would for Nvidia GPUs.\n\nYou need to first install ROCm support; follow: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/quick-start.html\n\nThen use: ``--extra-index-url https://download.pytorch.org/whl/rocm6.2/`` when installing via ``pip`` or ``pipx``.\n\nInstall Python \u003e=3.10,\u003c3.13 (Debian / Ubuntu) and pipx\n------------------------------------------------------\n\n.. 
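code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # (optional) if python3 is already present, confirm the version;\n    # dgenerate requires Python \u003e=3.10,\u003c3.13\n\n    python3 --version\n\nThe check above is optional. Then install Python and pipx:\n\n.. 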
code-block:: bash\n\n    #!/usr/bin/env bash\n\n    sudo apt install python3 python3-pip pipx python3-venv python3-wheel\n    pipx ensurepath\n\n    source ~/.bashrc\n\n\nSetup Environment\n-----------------\n\nYou may need to export the environment variable ``PYTORCH_ROCM_ARCH`` before attempting to use dgenerate.\n\nThis value will depend on the model of your card; you may wish to add this and any other necessary\nenvironment variables to ``~/.bashrc`` so that they persist in your shell environment.\n\nFor details, see: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/3rd-party/pytorch-install.html\n\nGenerally, this information can be obtained by running the command: ``rocminfo``\n\n.. code-block:: bash\n\n    # example\n\n    export PYTORCH_ROCM_ARCH=\"gfx1030\"\n\n\nInstall dgenerate\n-----------------\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # install with just support for torch\n\n    pipx install dgenerate \\\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/rocm6.2/\"\n\n    # With NCNN upscaler support\n\n    pipx install dgenerate[ncnn] \\\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/rocm6.2/\"\n\n    # If you want a specific version\n\n    pipx install dgenerate==4.5.1 \\\n    --pip-args \"--extra-index-url https://download.pytorch.org/whl/rocm6.2/\"\n\n    # You can install without pipx into your own environment like so\n\n    pip3 install dgenerate==4.5.1 --extra-index-url https://download.pytorch.org/whl/rocm6.2/\n\n    # Or with NCNN\n\n    pip3 install dgenerate[ncnn]==4.5.1 --extra-index-url https://download.pytorch.org/whl/rocm6.2/\n\n\nMacOS Install (Apple Silicon Only)\n==================================\n\nMacOS on Apple Silicon (arm64) is experimentally supported.\n\nRendering can be performed in CPU-only mode, and with hardware acceleration using ``--device mps`` (Metal Performance Shaders).\n\nThe default device on MacOS is ``mps`` unless specified otherwise.\n\nYou can install on MacOS by first installing Python from the universal ``pkg`` installer\nlocated at: https://www.python.org/downloads/release/python-3126/\n\nIt is also possible to install Python using `homebrew \u003chomebrew_1_\u003e`_, though tkinter will\nnot be available, meaning that you cannot run the Console UI.\n\nOnce you have done so, you can install using ``pipx`` (recommended), or create a virtual\nenvironment in a directory of your choosing and install ``dgenerate`` into it.\n\nDo not specify any ``--extra-index-url`` to ``pip``; it is not necessary on MacOS.\n\nWhen using SDXL on MacOS with ``--dtype float16``, you might need to specify\n``--vae AutoencoderKL;model=madebyollin/sdxl-vae-fp16-fix`` if your images\nare rendering solid black.\n\nMacOS pipx install\n------------------\n\nInstalling with ``pipx`` allows you to easily install ``dgenerate`` and\nhave it available globally from the command line without installing\nglobal Python site packages.\n\n.. 
code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # install pipx\n\n    pip3 install pipx\n\n    # install dgenerate into an isolated\n    # environment with pipx\n\n    pipx install dgenerate==4.5.1\n    pipx ensurepath\n\n    # open a new terminal or logout \u0026 login\n\n    # launch the Console UI to test the install.\n    # tkinter will be available when you install\n    # Python using the pkg from Python's official\n    # website\n\n    dgenerate --console\n\n    # or generate images\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \\\n    --output-path output \\\n    --inference-steps 40 \\\n    --guidance-scales 10\n\n\nIf you want to upgrade dgenerate, uninstall it first and then install the new version with ``pipx``.\n\n.. code-block:: bash\n\n    pipx uninstall dgenerate\n    pipx install dgenerate==4.5.1\n\n\nMacOS venv install\n------------------\n\nYou can also manually install into a virtual environment\nof your own creation.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # create the environment\n\n    python3 -m venv dgenerate_venv\n\n    # you must activate this environment\n    # every time you want to use dgenerate\n    # with this install method\n\n    source dgenerate_venv/bin/activate\n\n    # install dgenerate into an isolated environment\n\n    pip3 install dgenerate==4.5.1\n\n    # launch the Console UI to test the install.\n    # tkinter will be available when you install\n    # Python using the pkg from Python's official\n    # website\n\n    dgenerate --console\n\n    # or generate images\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \\\n    --output-path output \\\n    --inference-steps 40 \\\n    --guidance-scales 10\n\n\nGoogle Colab Install\n====================\n\nThe following cell entries will get you started in a Google Colab environment.\n\nMake sure you select a GPU runtime for your notebook, such as the T4 runtime.\n\n\n1.) Install venv.\n\n.. code-block:: bash\n\n    !apt install python3-venv\n\n2.) Create a virtual environment.\n\n.. code-block:: bash\n\n    !python3 -m venv venv\n\n3.) Install dgenerate; you must activate the virtual environment in the same cell.\n\n.. code-block:: bash\n\n    !source /content/venv/bin/activate; pip install dgenerate==4.5.1 --extra-index-url https://download.pytorch.org/whl/cu121\n\n4.) Finally you can run dgenerate. You must prefix all calls to dgenerate with an activation of the virtual environment, as\nthe virtual environment is not preserved between cells. For brevity, and as an example, just print the help text here.\n\n.. 
code-block:: bash\n\n    !source /content/venv/bin/activate; dgenerate --help\n\n\nBasic Usage\n===========\n\nThe example below attempts to generate an astronaut riding a horse using 5 different\nrandom seeds, 3 different inference step values, and 3 different guidance scale values.\n\nIt utilizes the ``stabilityai/stable-diffusion-2-1`` model repo on `Hugging Face \u003chttps://huggingface.co/stabilityai/stable-diffusion-2-1\u003e`_.\n\n45 uniquely named images will be generated ``(5 x 3 x 3)``.\n\nIt also adjusts the output size to ``512x512`` and outputs generated images to the ``astronaut`` folder in the current working directory.\n\nWhen ``--output-path`` is not specified, the default output location is the ``output`` folder\nin the current working directory. If the path that is specified does not exist, it will be created.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \\\n    --gen-seeds 5 \\\n    --output-path astronaut \\\n    --inference-steps 30 40 50 \\\n    --guidance-scales 5 7 10 \\\n    --output-size 512x512\n\n\nLoading models from huggingface blob links is also supported:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate https://huggingface.co/stabilityai/stable-diffusion-2-1/blob/main/v2-1_768-ema-pruned.safetensors \\\n    --prompts \"an astronaut riding a horse\" \\\n    --gen-seeds 5 \\\n    --output-path astronaut \\\n    --inference-steps 30 40 50 \\\n    --guidance-scales 5 7 10 \\\n    --output-size 512x512\n\n\nSDXL is supported and can be used to generate highly realistic images.\n\nPrompt-only generation, img2img, and inpainting are supported for SDXL.\n\nRefiner models can be specified; the ``fp16`` model variant and a datatype of ``float16`` are\nrecommended to prevent out of memory conditions on the average GPU :)\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \\\n    --sdxl-high-noise-fractions 0.6 0.7 0.8 \\\n    --gen-seeds 5 \\\n    --inference-steps 50 \\\n    --guidance-scales 12 \\\n    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \\\n    --prompts \"real photo of an astronaut riding a horse on the moon\" \\\n    --variant fp16 --dtype float16 \\\n    --output-size 1024\n\n\nNegative Prompt\n===============\n\nIn order to specify a negative prompt, each prompt argument is split\ninto two parts separated by ``;``.\n\nThe prompt text occurring after ``;`` is the negative influence prompt.\n\nTo attempt to avoid rendering of a saddle on the horse being ridden, you\ncould for example add the negative prompt ``saddle`` or ``wearing a saddle``\nor ``horse wearing a saddle`` etc.\n\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse; horse wearing a saddle\" \\\n    --gen-seeds 5 \\\n    --output-path astronaut \\\n    --inference-steps 50 \\\n    --guidance-scales 10 \\\n    --output-size 512x512\n\n\nMultiple Prompts\n================\n\nMultiple prompts can be specified one after another in quotes in order\nto generate images using multiple prompt variations.\n\nThe following command generates 10 uniquely named images using two\nprompts and five random seeds ``(2x5)``.\n\n5 of them will be from the first prompt and 5 of them from the second prompt.\n\nAll use 50 inference steps and a guidance scale of 10.\n\n\n.. 
code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \"an astronaut riding a donkey\" \\\n    --gen-seeds 5 \\\n    --output-path astronaut \\\n    --inference-steps 50 \\\n    --guidance-scales 10 \\\n    --output-size 512x512\n\n\nImage Seeds\n===========\n\nThe ``--image-seeds`` argument can be used to specify one or more image input resource groups\nfor use in rendering, and allows for the specification of img2img source images, inpaint masks,\ncontrol net guidance images, deep floyd stage images, image group resizing, and frame slicing values\nfor animations. It possesses its own URI syntax for defining different image inputs used for image generation;\nthe example described below is the simplest case of one image input (img2img).\n\nThis example uses a photo of Buzz Aldrin on the moon to generate a photo of an astronaut standing on mars\nusing img2img; the image seed is downloaded from wikipedia.\n\nDisk file paths may also be used for image seeds, and generally that is the standard use case.\nMultiple image seed definitions may be provided, and images will be generated from each image\nseed individually.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Generate this image using 5 different seeds, 3 different inference-step values, 3 different\n    # guidance-scale values as above.\n\n    # In addition this image will be generated using 3 different image seed strengths.\n\n    # Adjust output size to 512x512 and output generated images to 'astronaut' folder. The image seed\n    # will be resized to that dimension with aspect ratio respected by default: the width is fixed and\n    # the height will be calculated. This behavior can be changed globally with the --no-aspect option\n    # if desired, or locally by specifying \"img2img-seed.png;aspect=false\" as your image seed\n\n    # If you do not adjust the output size of the generated image, the size of the input image seed will be used.\n\n    # 135 uniquely named images will be generated (5x3x3x3)\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut walking on mars\" \\\n    --image-seeds https://upload.wikimedia.org/wikipedia/commons/9/98/Aldrin_Apollo_11_original.jpg \\\n    --image-seed-strengths 0.2 0.5 0.8 \\\n    --gen-seeds 5 \\\n    --output-path astronaut \\\n    --inference-steps 30 40 50 \\\n    --guidance-scales 5 7 10 \\\n    --output-size 512x512\n\n\n``--image-seeds`` serves as the entire mechanism for determining if img2img or inpainting is going to occur via\nits URI syntax described further in the section `Inpainting`_.\n\nIn addition to this it can be used to provide control guidance images in the case of txt2img, img2img, or inpainting\nvia the use of a URI syntax involving keyword arguments.\n\nThe syntax ``--image-seeds \"my-image-seed.png;control=my-control-image.png\"`` can be used with ``--control-nets`` to specify\nimg2img mode with a ControlNet for example, see: `Specifying Control Nets`_ for more information.\n\nIP Adapter images may be provided via a special ``adapters: ...`` syntax and\nvia the ``adapters`` URI argument discussed in: `Specifying IP Adapters`_\n\nBatching or providing multiple image inputs for the same generation, resulting in multiple output\nvariations possibly using different input images, or multiple image prompts, is possible using the\n``images: ...`` syntax discussed in the section: `Batching Input Images and Inpaint 
Masks`_.\n\n\nInpainting\n==========\n\nInpainting on an image can be performed by providing a mask image with your image seed. This mask should be a black and white image\nof identical size to your image seed. White areas of the mask image will be used to tell the AI what areas of the seed image should be filled\nin with generated content.\n\nFor using inpainting on animated image seeds, jump to: `Inpainting Animations`_\n\nSome possible definitions for inpainting are:\n\n    * ``--image-seeds \"my-image-seed.png;my-mask-image.png\"``\n    * ``--image-seeds \"my-image-seed.png;mask=my-mask-image.png\"``\n\nThe format is your image seed and mask image separated by ``;``; optionally ``mask`` can be a named argument.\nThe alternate syntax is for disambiguation when performing img2img or inpainting operations while `Specifying Control Nets`_\nor other operations where keyword arguments might be necessary for disambiguation, such as per image seed `Animation Slicing`_,\nand the specification of the image from a previous Deep Floyd stage using the ``floyd`` argument.\n\nMask images can be downloaded from URLs just like any other resource mentioned in an ``--image-seeds`` definition,\nhowever for this example files on disk are used for brevity.\n\nYou can download them here:\n\n * `my-image-seed.png \u003chttps://raw.githubusercontent.com/Teriks/dgenerate/v4.5.1/examples/media/dog-on-bench.png\u003e`_\n * `my-mask-image.png \u003chttps://raw.githubusercontent.com/Teriks/dgenerate/v4.5.1/examples/media/dog-on-bench-mask.png\u003e`_\n\nThe command below generates a cat sitting on a bench with the images from the links above. The mask image masks out\nareas over the dog in the original image, causing the dog to be replaced with an AI generated cat.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --image-seeds \"my-image-seed.png;my-mask-image.png\" \\\n    --prompts \"Face of a yellow cat, high resolution, sitting on a park bench\" \\\n    --image-seed-strengths 0.8 \\\n    --guidance-scales 10 \\\n    --inference-steps 100\n\n\nPer Image Seed Resizing\n=======================\n\nIf you want to specify multiple image seeds that will have different output sizes irrespective\nof their input size or a globally defined output size defined with ``--output-size``,\nyou can specify their output size individually at the end of each provided image seed.\n\nThis will work when using a mask image for inpainting as well, including when using animated inputs.\n\nThis also works when `Specifying Control Nets`_ and using guidance images for control nets.\n\nResizing in this fashion will resize any img2img image, inpaint mask, or control image to the specified\nsize; generally all of these images need to be the same size. 
In combination with the URI argument\n``aspect=false`` this can be used to force multiple images of different sizes to the same dimension.\n\nThis does not resize IP Adapter images as they have their own special per image resizing\nsyntax discussed in: `Specifying IP Adapters`_\n\nHere are some possible definitions:\n\n    * ``--image-seeds \"my-image-seed.png;512x512\"`` (img2img)\n    * ``--image-seeds \"my-image-seed.png;my-mask-image.png;512x512\"`` (inpainting)\n    * ``--image-seeds \"my-image-seed.png;resize=512x512\"`` (img2img)\n    * ``--image-seeds \"my-image-seed.png;mask=my-mask-image.png;resize=512x512\"`` (inpainting)\n\nThe alternate syntax with named arguments is for disambiguation when `Specifying Control Nets`_, or\nperforming per image seed `Animation Slicing`_, or specifying the previous Deep Floyd stage output\nwith the ``floyd`` keyword argument.\n\nWhen only one dimension is specified, that dimension is taken as the width.\n\nThe height of an image is calculated to be aspect correct by default for all resizing\nmethods unless ``--no-aspect`` has been given as an argument on the command line or the\n``aspect`` keyword argument is used in the ``--image-seeds`` definition.\n\nThe aspect correct resize behavior can be controlled on a per image seed definition basis\nusing the ``aspect`` keyword argument. Any value given to this argument overrides the presence\nor absence of the ``--no-aspect`` command line argument.\n\nThe ``aspect`` keyword argument can only be used when all other components of the image seed\ndefinition are defined using keyword arguments. ``aspect=false`` disables aspect correct resizing,\nand ``aspect=true`` enables it.\n\nSome possible definitions:\n\n    * ``--image-seeds \"my-image-seed.png;resize=512x512;aspect=false\"`` (img2img)\n    * ``--image-seeds \"my-image-seed.png;mask=my-mask-image.png;resize=512x512;aspect=false\"`` (inpainting)\n\n\nThe following example performs img2img generation, followed by inpainting generation using 2 image seed definitions.\nThe involved images are resized using the basic syntax with no keyword arguments present in the image seeds.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --image-seeds \"my-image-seed.png;1024\" \"my-image-seed.png;my-mask-image.png;512x512\" \\\n    --prompts \"Face of a yellow cat, high resolution, sitting on a park bench\" \\\n    --image-seed-strengths 0.8 \\\n    --guidance-scales 10 \\\n    --inference-steps 100\n\n\nAnimated Output\n===============\n\n``dgenerate`` supports many video formats through the use of PyAV (ffmpeg), as well as GIF \u0026 WebP.\n\nSee ``--help`` for information about all formats supported for the ``--animation-format`` option.\n\nWhen an animated image seed is given, animated output will be produced in the format of your choosing.\n\nIn addition, every frame will be written to the output folder as a uniquely named image.\n\nBy specifying ``--animation-format frames`` you can tell dgenerate that you just need\nthe frame images and not to produce any coalesced animation file for you. 
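For example, to write only the frame images (a minimal sketch; ``my-animation.gif`` is a hypothetical\ninput file):\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # write frames only, do not produce a coalesced animation file\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds my-animation.gif \\\n    --image-seed-strengths 0.5 \\\n    --animation-format frames\n\n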
You may also\nspecify ``--no-frames`` to indicate that you only want an animation file to be produced\nand no intermediate frames, though using this option with ``--animation-format frames``\nis considered an error.\n\nIf the animation is not 1:1 aspect ratio, the width will be fixed to the width of the\nrequested output size, and the height calculated to match the aspect ratio of the animation,\nunless ``--no-aspect`` or the ``--image-seeds`` keyword argument ``aspect=false`` is specified,\nin which case the video will be resized to the requested dimension exactly.\n\nIf you do not set an output size, the size of the input animation will be used.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Use a GIF of a man riding a horse to create an animation of an astronaut riding a horse.\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds https://upload.wikimedia.org/wikipedia/commons/7/7b/Muybridge_race_horse_~_big_transp.gif \\\n    --image-seed-strengths 0.5 \\\n    --output-path astronaut \\\n    --inference-steps 50 \\\n    --guidance-scales 10 \\\n    --output-size 512x512 \\\n    --animation-format mp4\n\n\nThe above syntax is the same syntax used for generating an animation with a control\nimage when ``--control-nets`` or ``--t2i-adapters`` is used.\n\nAnimations can also be generated using an alternate syntax for ``--image-seeds``\nthat allows the specification of a control image source when it is desired to use\n``--control-nets`` with img2img or inpainting.\n\nFor more information about this see: `Specifying Control Nets`_\n\nAnd also: `Specifying T2I Adapters`_\n\nAs well as the information about ``--image-seeds`` from dgenerate's ``--help``\noutput.\n\nIP Adapter images can also be animated inputs; see: `Specifying IP Adapters`_\n\nIn general, every image component of an ``--image-seeds`` specification may be an\nanimated file, and animated files may be mixed with static images. 
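For example (a minimal sketch; the file names are hypothetical), an animated img2img seed can be\ncombined with a static inpaint mask:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # animated seed combined with a static mask image\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-animation.mp4;mask=my-static-mask.png\" \\\n    --animation-format mp4\n\n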
The animated input with the\nshortest length determines the number of output frames, and any static image components\nare duplicated over that amount of frames.\n\n\nAnimation Slicing\n=================\n\nAnimated inputs can be sliced by a frame range either globally using\n``--frame-start`` and ``--frame-end`` or locally using the named argument\nsyntax for ``--image-seeds``, for example:\n\n    * ``--image-seeds \"animated.gif;frame-start=3;frame-end=10\"``.\n\nWhen using animation slicing at the ``--image-seeds`` level, all image input definitions\nother than the main image must be specified using keyword arguments.\n\nFor example, here are some possible definitions:\n\n    * ``--image-seeds \"seed.gif;frame-start=3;frame-end=10\"``\n    * ``--image-seeds \"seed.gif;mask=mask.gif;frame-start=3;frame-end=10\"``\n    * ``--image-seeds \"seed.gif;control=control-guidance.gif;frame-start=3;frame-end=10\"``\n    * ``--image-seeds \"seed.gif;mask=mask.gif;control=control-guidance.gif;frame-start=3;frame-end=10\"``\n    * ``--image-seeds \"seed.gif;floyd=stage1.gif;frame-start=3;frame-end=10\"``\n    * ``--image-seeds \"seed.gif;mask=mask.gif;floyd=stage1.gif;frame-start=3;frame-end=10\"``\n\nSpecifying a frame slice locally in an image seed overrides the global frame\nslice setting defined by ``--frame-start`` or ``--frame-end``, and is specific only\nto that image seed; other image seed definitions will not be affected.\n\nPerhaps you only want to run diffusion on the first frame of an animated input in\norder to save time in finding good parameters for generating every frame. You could\nslice to only the first frame using ``--frame-start 0 --frame-end 0``, which will be much\nfaster than rendering the entire video/gif outright.\n\nThe slice range is zero indexed and also inclusive, meaning that the starting and ending frames\nspecified by ``--frame-start`` and ``--frame-end`` will be included in the slice. Both slice points\ndo not have to be specified at the same time. You can exclude the tail end of a video with\njust ``--frame-end`` alone, or seek to a certain start frame in the video with ``--frame-start`` alone\nand render from there onward; this applies for keyword arguments in the ``--image-seeds`` definition as well.\n\nIf your slice only results in the processing of a single frame, an animated file format will\nnot be generated; only a single image output will be generated for that image seed during the\ngeneration step.\n\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Generate using only the first frame\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds https://upload.wikimedia.org/wikipedia/commons/7/7b/Muybridge_race_horse_~_big_transp.gif \\\n    --image-seed-strengths 0.5 \\\n    --output-path astronaut \\\n    --inference-steps 50 \\\n    --guidance-scales 10 \\\n    --output-size 512x512 \\\n    --animation-format mp4 \\\n    --frame-start 0 \\\n    --frame-end 0\n\n\nInpainting Animations\n=====================\n\nImage seeds can be supplied an animated or static image mask to define the areas for inpainting while generating an animated output.\n\nAny possible combination of image/video parameters can be used. 
The animation with the fewest frames in the entire\nspecification determines the frame count, and any static images present are duplicated across the entire animation.\nThe first animation present in an image seed specification always determines the output FPS of the animation.\n\nWhen an animated seed is used with an animated mask, the mask for every corresponding frame in the input is taken from the animated mask,\nand the runtime of the animated output will be equal to the shorter of the two animated inputs. IE: If the seed animation and the mask animation\nhave different lengths, the animated output is clipped to the length of the shorter of the two.\n\nWhen a static image is used as a mask, that image is used as an inpaint mask for every frame of the animated seed.\n\nWhen an animated mask is used with a static image seed, the animated output length is that of the animated mask. A video is\ncreated by duplicating the image seed for every frame of the animated mask, the animated output being generated by masking\nthem together.\n\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # A video with a static inpaint mask over the entire video\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-animation.mp4;my-static-mask.png\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    # Zip two videos together, masking the left video with corresponding frames\n    # from the right video. The two animated inputs do not have to be the same file format;\n    # you can mask videos with gif/webp and vice versa\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-animation.mp4;my-animation-mask.mp4\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-animation.mp4;my-animation-mask.gif\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-animation.gif;my-animation-mask.gif\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-animation.gif;my-animation-mask.webp\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-animation.webp;my-animation-mask.gif\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-animation.gif;my-animation-mask.mp4\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    # etc...\n\n    # Use a static image seed and mask it with every frame from an\n    # animated mask file\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-static-image-seed.png;my-animation-mask.mp4\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds 
\"my-static-image-seed.png;my-animation-mask.gif\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --prompts \"an astronaut riding a horse\" \\\n    --image-seeds \"my-static-image-seed.png;my-animation-mask.webp\" \\\n    --output-path inpaint \\\n    --animation-format mp4\n\n    # etc...\n\n\n\nDeterministic Output\n====================\n\nIf you generate an image you like using a random seed, you can later reuse that seed in another generation.\n\nUpdates to the backing model may affect determinism in the generation.\n\nOutput images have a name format that starts with the seed, IE: ``s_(seed here)_ ...png``\n\nReusing a seed has the effect of perfectly reproducing the image in the case that all\nother parameters are left alone, including the model version.\n\nYou can output a configuration file for each image / animation produced that will reproduce it\nexactly using the option ``--output-configs``, that same information can be written to the\nmetadata of generated PNG files using the option ``--output-metadata`` and can be read back\nwith ImageMagick for example as so:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    magick identify -format \"%[Property:DgenerateConfig]\" generated_file.png\n\nGenerated configuration can be read back into dgenerate via a pipe or file redirection.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # DO NOT DO THIS IF THE IMAGE IS UNTRUSTED, SUCH AS IF IT IS SOMEONE ELSE'S IMAGE!\n    # VERIFY THAT THE METADATA CONTENT OF THE IMAGE IS NOT MALICIOUS FIRST,\n    # USING THE IDENTIFY COMMAND ALONE\n\n    magick identify -format \"%[Property:DgenerateConfig]\" generated_file.png | dgenerate\n\n    dgenerate \u003c generated-config.dgen\n\nSpecifying a seed directly and changing the prompt slightly, or parameters such as image seed strength\nif using a seed image, guidance scale, or inference steps, will allow for generating variations close\nto the original image which may possess all the original qualities about the image that you liked as well as\nadditional qualities.  You can further manipulate the AI into producing results that you want with this method.\n\nChanging output resolution will drastically affect image content when reusing a seed to the point where trying to\nreuse a seed with a different output size is pointless.\n\nThe following command demonstrates manually specifying two different seeds to try: ``1234567890``, and ``9876543210``\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \\\n    --seeds 1234567890 9876543210 \\\n    --output-path astronaut \\\n    --inference-steps 50 \\\n    --guidance-scales 10 \\\n    --output-size 512x512\n\n\nSpecifying a specific GPU for CUDA\n==================================\n\nThe desired GPU to use for CUDA acceleration can be selected using ``--device cuda:N`` where ``N`` is\nthe device number of the GPU as reported by ``nvidia-smi``.\n\n.. 
code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Console 1, run on GPU 0\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a horse\" \\\n    --output-path astronaut_1 \\\n    --inference-steps 50 \\\n    --guidance-scales 10 \\\n    --output-size 512x512 \\\n    --device cuda:0\n\n    # Console 2, run on GPU 1 in parallel\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --prompts \"an astronaut riding a cow\" \\\n    --output-path astronaut_2 \\\n    --inference-steps 50 \\\n    --guidance-scales 10 \\\n    --output-size 512x512 \\\n    --device cuda:1\n\n\nSpecifying a Scheduler (sampler)\n================================\n\nA scheduler (otherwise known as a sampler) for the main model can be selected via the use of ``--scheduler``.\n\nAnd in the case of SDXL the refiner's scheduler can be selected independently with ``--sdxl-refiner-scheduler``.\n\nFor Stable Cascade the decoder scheduler can be specified via the argument ``--s-cascade-decoder-scheduler``;\nhowever, only one scheduler type is supported for Stable Cascade (``DDPMWuerstchenScheduler``).\n\nBoth of these default to the value of ``--scheduler``, which in turn defaults to automatic selection.\n\nAvailable schedulers for a specific combination of dgenerate arguments can be\nqueried using ``--scheduler help``, ``--sdxl-refiner-scheduler help``, or ``--s-cascade-decoder-scheduler help``,\nthough they cannot be queried simultaneously.\n\nIn order to use the query feature it is ideal that you provide all the other arguments\nthat you plan on using while making the query, as different combinations of arguments\nwill result in different underlying pipeline implementations being created, each of which\nmay have different compatible scheduler names listed. The model needs to be loaded in order to\ngather this information.\n\nFor example, there is only one compatible scheduler for this upscaler configuration:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/sd-x2-latent-upscaler --variant fp16 --dtype float16 \\\n    --model-type torch-upscaler-x2 \\\n    --prompts \"none\" \\\n    --image-seeds my-image.png \\\n    --output-size 256 \\\n    --scheduler help\n\n    # Outputs:\n    #\n    # Compatible schedulers for \"stabilityai/sd-x2-latent-upscaler\" are:\n    #\n    #    \"EulerDiscreteScheduler\"\n\nTypically, however, there will be many compatible schedulers:\n\n.. 
code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-2 \\\n    --inference-steps 40 \\\n    --guidance-scales 8 \\\n    --output-size 1024 \\\n    --gen-seeds 2 \\\n    --prompts \"none\" \\\n    --scheduler help\n\n    # Outputs:\n    #\n    # Compatible schedulers for \"stabilityai/stable-diffusion-2\" are:\n    #\n    #     \"DDIMScheduler\"\n    #     \"DDPMScheduler\"\n    #     \"DEISMultistepScheduler\"\n    #     \"DPMSolverMultistepScheduler\"\n    #     \"DPMSolverSDEScheduler\"\n    #     \"DPMSolverSinglestepScheduler\"\n    #     \"EDMEulerScheduler\"\n    #     \"EulerAncestralDiscreteScheduler\"\n    #     \"EulerDiscreteScheduler\"\n    #     \"HeunDiscreteScheduler\"\n    #     \"KDPM2AncestralDiscreteScheduler\"\n    #     \"KDPM2DiscreteScheduler\"\n    #     \"LCMScheduler\"\n    #     \"LMSDiscreteScheduler\"\n    #     \"PNDMScheduler\"\n    #     \"UniPCMultistepScheduler\"\n\n\nPassing ``helpargs`` to a ``--scheduler`` related option will reveal configuration arguments that\ncan be overridden via URI syntax for every possible scheduler.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-2 \\\n    --inference-steps 40 \\\n    --guidance-scales 8 \\\n    --output-size 1024 \\\n    --gen-seeds 2 \\\n    --prompts \"none\" \\\n    --scheduler helpargs\n\n\n    # Outputs (shortened for brevity...):\n    #\n    # Compatible schedulers for \"stabilityai/stable-diffusion-2\" are:\n    #    ...\n    #\n    #    PNDMScheduler:\n    #        num-train-timesteps=1000\n    #        beta-start=0.0001\n    #        beta-end=0.02\n    #        beta-schedule=linear\n    #        trained-betas=None\n    #        skip-prk-steps=False\n    #        set-alpha-to-one=False\n    #        prediction-type=epsilon\n    #        timestep-spacing=leading\n    #        steps-offset=0\n    #\n    #   ...\n\n\nAs an example, you may override the mentioned arguments for any scheduler in this manner:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Change the prediction type of the scheduler to \"v_prediction\".\n    # For some models this may be necessary; not for this model,\n    # this is just a syntax example. The URI is quoted so that the\n    # shell does not treat ';' as a command separator.\n\n    dgenerate stabilityai/stable-diffusion-2 \\\n    --inference-steps 40 \\\n    --guidance-scales 8 \\\n    --output-size 1024 \\\n    --gen-seeds 2 \\\n    --prompts \"none\" \\\n    --scheduler \"PNDMScheduler;prediction-type=v_prediction\"\n\nLike diffusion parameter arguments, you may specify multiple scheduler URIs and they will be tried in turn.\n\nWhen you specify multiple schedulers in this manner, they will be added to the beginning of the\noutput file name, in the order: ``(scheduler)_(refiner / decoder scheduler)``\n\n.. 
code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Try these two schedulers one after another\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --inference-steps 30 \\\n    --guidance-scales 5 \\\n    --schedulers EulerAncestralDiscreteScheduler KDPM2AncestralDiscreteScheduler \\\n    --output-size 512x512 \\\n    --prompts \"a horse standing in a field\"\n\n\n    # This works for all scheduler arguments, for instance\n    # this SDXL command results in 4 generation steps, trying\n    # all possible combinations of schedulers provided\n\n    dgenerate stabilityai/stable-diffusion-xl-base-1.0 \\\n    --model-type torch-sdxl \\\n    --dtype float16 \\\n    --variant fp16 \\\n    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \\\n    --schedulers EulerAncestralDiscreteScheduler EulerDiscreteScheduler \\\n    --sdxl-refiner-schedulers KDPM2AncestralDiscreteScheduler KDPM2DiscreteScheduler \\\n    --inference-steps 30 \\\n    --guidance-scales 5 \\\n    --prompts \"a horse standing in a field\"\n\n\nSpecifying a VAE\n================\n\nTo specify a VAE directly, use ``--vae``.\n\nVAEs are supported for these model types:\n\n    * ``--model-type torch``\n    * ``--model-type torch-pix2pix``\n    * ``--model-type torch-upscaler-x2``\n    * ``--model-type torch-upscaler-x4``\n    * ``--model-type torch-sdxl``\n    * ``--model-type torch-sdxl-pix2pix``\n    * ``--model-type torch-sd3``\n    * ``--model-type torch-flux``\n\nThe URI syntax for ``--vae`` is ``AutoEncoderClass;model=(huggingface repository slug/blob link or file/folder path)``\n\nNamed arguments when loading a VAE are separated by the ``;`` character and are not positional,\nmeaning they can be defined in any order.\n\nLoading arguments available when specifying a VAE are: ``model``, ``revision``, ``variant``, ``subfolder``, and ``dtype``.\n\nThe only named arguments compatible with loading a .safetensors or other model file\ndirectly off disk are ``model`` and ``dtype``.\n\nThe other named arguments are available when loading from a huggingface repository or folder\nthat may or may not be a local git repository on disk.\n\nAvailable encoder classes are:\n\n* AutoencoderKL\n* AsymmetricAutoencoderKL (Does not support ``--vae-slicing`` or ``--vae-tiling``)\n* AutoencoderTiny\n* ConsistencyDecoderVAE\n\nThe AutoencoderKL encoder class accepts huggingface repository slugs/blob links,\n.pt, .pth, .bin, .ckpt, and .safetensors files. Other encoders can only accept huggingface\nrepository slugs/blob links, or a path to a folder on disk with the model\nconfiguration and model file(s).\n\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --vae \"AutoencoderKL;model=stabilityai/sd-vae-ft-mse\" \\\n    --prompts \"an astronaut riding a horse\" \\\n    --output-path astronaut \\\n    --inference-steps 50 \\\n    --guidance-scales 10 \\\n    --output-size 512x512\n\n\nIf you want to select the repository revision, such as ``main``, use the named argument ``revision``.\n``subfolder`` is required in this example as well because the VAE model file exists in a subfolder\nof the specified huggingface repository.\n\n.. 
.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-2-1 \
    --revision fp16 \
    --dtype float16 \
    --vae "AutoencoderKL;model=stabilityai/stable-diffusion-2-1;revision=fp16;subfolder=vae" \
    --prompts "an astronaut riding a horse" \
    --output-path astronaut \
    --inference-steps 50 \
    --guidance-scales 10 \
    --output-size 512x512


If you wish to specify a weights variant, IE: load ``pytorch_model.<variant>.safetensors`` from a huggingface
repository that has variants of the same model, use the named argument ``variant``. This value does NOT default
to the value of ``--variant``, in order to prevent errors during common use cases. If you wish to select a
variant you must specify it in the URI.

.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-2-1 \
    --variant fp16 \
    --vae "AutoencoderKL;model=stabilityai/stable-diffusion-2-1;subfolder=vae;variant=fp16" \
    --prompts "an astronaut riding a horse" \
    --output-path astronaut \
    --inference-steps 50 \
    --guidance-scales 10 \
    --output-size 512x512


If your weights file exists in a subfolder of the repository, use the named argument ``subfolder``

.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-2-1 \
    --vae "AutoencoderKL;model=stabilityai/stable-diffusion-2-1;subfolder=vae" \
    --prompts "an astronaut riding a horse" \
    --output-path astronaut \
    --inference-steps 50 \
    --guidance-scales 10 \
    --output-size 512x512


If you want to specify the model precision, use the named argument ``dtype``.
Accepted values are the same as for ``--dtype``, IE: ``float32``, ``float16``, ``auto``

.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-2-1 \
    --revision fp16 \
    --dtype float16 \
    --vae "AutoencoderKL;model=stabilityai/stable-diffusion-2-1;revision=fp16;subfolder=vae;dtype=float16" \
    --prompts "an astronaut riding a horse" \
    --output-path astronaut \
    --inference-steps 50 \
    --guidance-scales 10 \
    --output-size 512x512


If you are loading a .safetensors or other file from a path on disk, only the ``model`` and ``dtype``
arguments are available.

.. code-block:: bash

    #!/usr/bin/env bash

    # These are only syntax examples

    dgenerate huggingface/diffusion_model \
    --vae "AutoencoderKL;model=my_vae.safetensors" \
    --prompts "Syntax example"

    dgenerate huggingface/diffusion_model \
    --vae "AutoencoderKL;model=my_vae.safetensors;dtype=float16" \
    --prompts "Syntax example"

VAE Tiling and Slicing
======================

You can use ``--vae-tiling`` and ``--vae-slicing`` to enable the generation of huge images
without running your GPU out of memory. Note that if you are using ``--control-nets`` you may
still be memory limited by the size of the image being processed by the ControlNet, and may
still run into memory issues with large image inputs.

When ``--vae-tiling`` is used, the VAE will split the input tensor into tiles to
compute decoding and encoding in several steps. This is useful for saving a large amount of
memory and allows processing larger images.

When ``--vae-slicing`` is used, the VAE will split the input tensor into slices to
compute decoding in several steps.
This is useful to save some memory, especially
when ``--batch-size`` is greater than 1.

.. code-block:: bash

    #!/usr/bin/env bash

    # Here is an SDXL example of high resolution image generation utilizing VAE tiling/slicing

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --vae "AutoencoderKL;model=madebyollin/sdxl-vae-fp16-fix" \
    --vae-tiling \
    --vae-slicing \
    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \
    --sdxl-high-noise-fractions 0.8 \
    --inference-steps 30 \
    --guidance-scales 8 \
    --output-size 2048 \
    --sdxl-target-size 2048 \
    --prompts "Photo of a horse standing near the open door of a red barn, high resolution; artwork"


Specifying a UNet
=================

An alternate UNet model can be specified via a URI with the ``--unet`` option, in a
similar fashion to ``--vae`` and other model arguments that accept URIs.

UNets are supported for these model types:

    * ``--model-type torch``
    * ``--model-type torch-if``
    * ``--model-type torch-ifs``
    * ``--model-type torch-ifs-img2img``
    * ``--model-type torch-pix2pix``
    * ``--model-type torch-upscaler-x2``
    * ``--model-type torch-upscaler-x4``
    * ``--model-type torch-sdxl``
    * ``--model-type torch-sdxl-pix2pix``
    * ``--model-type torch-s-cascade``

This is useful in particular for using the latent consistency scheduler, as well as the
``lite`` variants of the UNet models used with Stable Cascade.

The first component of the ``--unet`` URI is the model path itself.

You can provide a path to a huggingface repo, or a folder on disk (a downloaded huggingface repository).

The latent consistency UNet for SDXL can be specified with the ``--unet`` argument.

.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --unet latent-consistency/lcm-sdxl \
    --scheduler LCMScheduler \
    --inference-steps 4 \
    --guidance-scales 8 \
    --gen-seeds 2 \
    --output-size 1024 \
    --prompts "a close-up picture of an old man standing in the rain"

Loading arguments available when specifying a UNet are: ``revision``, ``variant``, ``subfolder``, and ``dtype``

In the case of ``--unet`` the ``variant`` loading argument defaults to the value
of ``--variant`` if you do not specify it in the URI.

The ``--unet2`` option can be used to specify a UNet for the
`SDXL Refiner <#specifying-an-sdxl-refiner>`_ or `Stable Cascade Decoder <#specifying-a-stable-cascade-decoder>`_,
and uses the same syntax as ``--unet``.

Here is an example that uses ``--unet`` and ``--unet2`` to select the ``lite``
variants of Stable Cascade's UNet models, which have a smaller memory footprint.
.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-cascade-prior \
    --model-type torch-s-cascade \
    --variant bf16 \
    --dtype bfloat16 \
    --unet "stabilityai/stable-cascade-prior;subfolder=prior_lite" \
    --unet2 "stabilityai/stable-cascade;subfolder=decoder_lite" \
    --model-cpu-offload \
    --s-cascade-decoder-cpu-offload \
    --s-cascade-decoder "stabilityai/stable-cascade;dtype=float16" \
    --inference-steps 20 \
    --guidance-scales 4 \
    --s-cascade-decoder-inference-steps 10 \
    --s-cascade-decoder-guidance-scales 0 \
    --gen-seeds 2 \
    --prompts "an image of a shiba inu, donning a spacesuit and helmet"


Specifying a Transformer (SD3 and Flux)
=======================================

Stable Diffusion 3 and Flux do not use a UNet architecture, and instead use a
Transformer model in place of a UNet.

A specific transformer model can be specified using the ``--transformer`` argument.

This argument is nearly identical to ``--unet``; however, it supports single file loads
from safetensors files or huggingface blob links if desired.

In addition to the arguments that ``--unet`` supports, ``--transformer`` supports the ``quantize``
URI argument to enable weights quantization via the `optimum-quanto library <optimum-quanto_library_1_>`_,
allowing for lower GPU memory usage. ``quantize`` may be passed the values ``qint2``, ``qint4``, ``qint8``,
``qfloat8_e4m3fn``, ``qfloat8_e5m2``, or ``qfloat8``, to indicate the quantization data type.

SD3 Example:

.. code-block:: bash

    #!/usr/bin/env bash

    # This just loads the default transformer out of the repo on huggingface

    dgenerate stabilityai/stable-diffusion-3-medium-diffusers \
    --model-type torch-sd3 \
    --transformer "stabilityai/stable-diffusion-3-medium-diffusers;subfolder=transformer" \
    --variant fp16 \
    --dtype float16 \
    --inference-steps 30 \
    --guidance-scales 5.00 \
    --clip-skips 0 \
    --gen-seeds 2 \
    --output-path output \
    --model-sequential-offload \
    --prompts "Photo of a horse standing near the open door of a red barn, high resolution; artwork"

Flux Example:
.. code-block:: bash

    #!/usr/bin/env bash

    # use Flux with quantized transformer and text encoder (qfloat8)

    dgenerate black-forest-labs/FLUX.1-dev \
    --model-type torch-flux \
    --dtype bfloat16 \
    --transformer "https://huggingface.co/Kijai/flux-fp8/blob/main/flux1-dev-fp8.safetensors;quantize=qfloat8" \
    --text-encoders + "T5EncoderModel;model=black-forest-labs/FLUX.1-dev;subfolder=text_encoder_2;quantize=qfloat8" \
    --model-cpu-offload \
    --inference-steps 20 \
    --guidance-scales 3.5 \
    --gen-seeds 1 \
    --output-path output \
    --output-size 512x512 \
    --prompts "Photo of a horse standing near the open door of a red barn, high resolution"


Specifying an SDXL Refiner
==========================

When the main model is an SDXL model and ``--model-type torch-sdxl`` is specified,
you may specify a refiner model with ``--sdxl-refiner``.

You can provide a path to a huggingface repo/blob link, folder on disk, or a model file
on disk such as a .pt, .pth, .bin, .ckpt, or .safetensors file.

This argument is parsed in much the same way as the argument ``--vae``, except the
model is the first value specified.

Loading arguments available when specifying a refiner are: ``revision``, ``variant``, ``subfolder``, and ``dtype``

The only named argument compatible with loading a .safetensors or other file directly off disk is ``dtype``

The other named arguments are available when loading from a huggingface repo/blob link,
or a folder that may or may not be a local git repository on disk.

.. code-block:: bash

    #!/usr/bin/env bash

    # Basic usage of SDXL with a refiner

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \
    --sdxl-high-noise-fractions 0.8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --output-size 1024 \
    --prompts "Photo of a horse standing near the open door of a red barn, high resolution; artwork"


If you want to select the repository revision, such as ``main`` etc, use the named argument ``revision``

.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --sdxl-refiner "stabilityai/stable-diffusion-xl-refiner-1.0;revision=main" \
    --sdxl-high-noise-fractions 0.8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --output-size 1024 \
    --prompts "Photo of a horse standing near the open door of a red barn, high resolution; artwork"


If you wish to specify a weights variant, IE: load ``pytorch_model.<variant>.safetensors`` from a huggingface
repository that has variants of the same model, use the named argument ``variant``. By default this
value is the same as ``--variant`` unless you override it.
.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --sdxl-refiner "stabilityai/stable-diffusion-xl-refiner-1.0;variant=fp16" \
    --sdxl-high-noise-fractions 0.8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --output-size 1024 \
    --prompts "Photo of a horse standing near the open door of a red barn, high resolution; artwork"


If your weights file exists in a subfolder of the repository, use the named argument ``subfolder``

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate huggingface/sdxl_model --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --sdxl-refiner "huggingface/sdxl_refiner;subfolder=repo_subfolder"


If you want to select the model precision, use the named argument ``dtype``. By
default this value is the same as ``--dtype`` unless you override it. Accepted
values are the same as for ``--dtype``, IE: ``float32``, ``float16``, ``auto``

.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --sdxl-refiner "stabilityai/stable-diffusion-xl-refiner-1.0;dtype=float16" \
    --sdxl-high-noise-fractions 0.8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --output-size 1024 \
    --prompts "Photo of a horse standing near the open door of a red barn, high resolution; artwork"


If you are loading a .safetensors or other file from a path on disk, simply do:

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate huggingface/sdxl_model --model-type torch-sdxl \
    --sdxl-refiner my_refinermodel.safetensors


When performing inpainting or when using `ControlNets <#specifying-control-nets>`_, the
refiner will automatically operate in edit mode instead of cooperative denoising mode.
Edit mode can be forced in other situations with the option ``--sdxl-refiner-edit``.

Edit mode means that the refiner model accepts the fully (or mostly) denoised output
of the main model, generated at the full number of inference steps specified, and acts
on it with an image strength (image seed strength) determined by (1.0 - high-noise-fraction).

The output latent from the main model is renoised with an amount of noise determined
by that strength; a lower number means less noise and less modification of the main
model's latent output.

This is similar to what happens when using dgenerate in img2img mode with a standalone
model; technically it is just img2img. However, refiner models are better at enhancing
details from the main model in this use case.
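Forcing edit mode only requires adding the flag; this is a minimal syntax sketch
based on the basic refiner example above, not a tuned example:

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example, forcing the refiner
    # into edit mode for a plain txt2img generation

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \
    --sdxl-refiner-edit \
    --sdxl-high-noise-fractions 0.8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --output-size 1024 \
    --prompts "Photo of a horse standing near the open door of a red barn, high resolution; artwork"


Specifying a Stable Cascade Decoder
===================================

When the main model is a Stable Cascade prior model and ``--model-type torch-s-cascade`` is specified,
you may specify a decoder model with ``--s-cascade-decoder``.

The syntax (and URI arguments) for specifying the decoder model is identical to specifying an SDXL refiner
model as mentioned above.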
.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-cascade-prior \
    --model-type torch-s-cascade \
    --variant bf16 \
    --dtype bfloat16 \
    --model-cpu-offload \
    --s-cascade-decoder-cpu-offload \
    --s-cascade-decoder "stabilityai/stable-cascade;dtype=float16" \
    --inference-steps 20 \
    --guidance-scales 4 \
    --s-cascade-decoder-inference-steps 10 \
    --s-cascade-decoder-guidance-scales 0 \
    --gen-seeds 2 \
    --prompts "an image of a shiba inu, donning a spacesuit and helmet"


Specifying LoRAs
================

It is possible to specify one or more LoRA models using ``--loras``

LoRAs are supported for these model types:

    * ``--model-type torch``
    * ``--model-type torch-pix2pix``
    * ``--model-type torch-upscaler-x4``
    * ``--model-type torch-sdxl``
    * ``--model-type torch-sdxl-pix2pix``
    * ``--model-type torch-sd3``
    * ``--model-type torch-flux``
    * ``--model-type torch-flux-fill``

When multiple specifications are given, all mentioned models will be fused together
into one set of weights at their individual scales, and then those weights will be
fused into the main model at the scale value of ``--lora-fuse-scale``, which
defaults to 1.0.

You can provide a huggingface repository slug, or .pt, .pth, .bin, .ckpt, or .safetensors files.
Blob links are not accepted; for that, use ``subfolder`` and ``weight-name`` as described below.

The individual LoRA scale for each provided model can be specified after the model path
by placing a ``;`` (semicolon) and then using the named argument ``scale``

When a scale is not specified, 1.0 is assumed.

Named arguments when loading a LoRA are separated by the ``;`` character and are
not positional, meaning they can be defined in any order.

Loading arguments available when specifying a LoRA are: ``scale``, ``revision``, ``subfolder``, and ``weight-name``

The only named argument compatible with loading a .safetensors or other file directly off disk is ``scale``

The other named arguments are available when loading from a huggingface repository or folder
that may or may not be a local git repository on disk.

This example shows loading a LoRA using a huggingface repository slug and specifying a scale for it.

.. code-block:: bash

    #!/usr/bin/env bash

    # Don't expect great results with this example;
    # try models and LoRAs downloaded from CivitAI

    dgenerate Lykon/dreamshaper-8 \
    --loras "pcuenq/pokemon-lora;scale=0.5" \
    --prompts "Gengar standing in a field at night under a full moon, highquality, masterpiece, digital art" \
    --inference-steps 40 \
    --guidance-scales 10 \
    --gen-seeds 5 \
    --output-size 800
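When fusing multiple LoRAs, ``--lora-fuse-scale`` controls the scale at which the
combined weights are fused into the main model. This is only a syntax sketch with
hypothetical LoRA files:

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example with hypothetical LoRA files;
    # the two LoRAs are fused together at their individual scales,
    # and the combined weights are fused into the main model
    # at a scale of 0.8

    dgenerate Lykon/dreamshaper-8 \
    --loras "my_lora_1.safetensors;scale=0.5" "my_lora_2.safetensors;scale=1.0" \
    --lora-fuse-scale 0.8 \
    --prompts "Syntax example" \
    --inference-steps 40 \
    --guidance-scales 10


Specifying the file in a repository directly can be done with the named argument ``weight-name``

Shown below is an SDXL compatible LoRA being used with the SDXL base model and a refiner.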
.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --inference-steps 30 \
    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \
    --prompts "sketch of a horse by Leonardo da Vinci" \
    --variant fp16 --dtype float16 \
    --loras "goofyai/SDXL-Lora-Collection;scale=1.0;weight-name=leonardo_illustration.safetensors" \
    --output-size 1024


If you want to select the repository revision, such as ``main`` etc, use the named argument ``revision``

.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate Lykon/dreamshaper-8 \
    --loras "pcuenq/pokemon-lora;scale=0.5;revision=main" \
    --prompts "Gengar standing in a field at night under a full moon, highquality, masterpiece, digital art" \
    --inference-steps 40 \
    --guidance-scales 10 \
    --gen-seeds 5 \
    --output-size 800


If your weights file exists in a subfolder of the repository, use the named argument ``subfolder``

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate huggingface/model \
    --prompts "Syntax example" \
    --loras "huggingface/lora_repo;scale=1.0;subfolder=repo_subfolder;weight-name=lora_weights.safetensors"


If you are loading a .safetensors or other file from a path on disk, only the ``scale`` argument is available.

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate Lykon/dreamshaper-8 \
    --prompts "Syntax example" \
    --loras "my_lora.safetensors;scale=1.0"


Specifying Textual Inversions (embeddings)
==========================================

One or more Textual Inversion models (otherwise known as embeddings) may be specified with ``--textual-inversions``

Textual inversions are supported for these model types:

    * ``--model-type torch``
    * ``--model-type torch-pix2pix``
    * ``--model-type torch-upscaler-x4``
    * ``--model-type torch-sdxl``
    * ``--model-type torch-sdxl-pix2pix``
    * ``--model-type torch-flux`` (``txt2img``, ``txt2img + Control Nets``, ``inpainting + Control Nets`` only)
    * ``--model-type torch-flux-fill`` (``inpainting`` only)

You can provide a huggingface repository slug, or .pt, .pth, .bin, .ckpt, or .safetensors files.
Blob links are not accepted; for that, use ``subfolder`` and ``weight-name`` as described below.

Arguments pertaining to the loading of each textual inversion model may be specified in the same
way as when using ``--loras``, minus the ``scale`` argument.

Available arguments are: ``token``, ``revision``, ``subfolder``, and ``weight-name``

Named arguments are available when loading from a huggingface repository or folder
that may or may not be a local git repository on disk. When loading directly from a .safetensors
or other file from a path on disk, they should not be used.

The ``token`` argument may be used to override the prompt token value, which is the text token
in the prompt that triggers the inversion. Textual inversions for stable diffusion usually
include this token value in the model itself; for instance, in the example below the token
for ``Isometric_Dreams-1000.pt`` is ``Isometric_Dreams``.
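The token value used for SDXL and Flux models is a bit different; a default value
is not provided in the model file. If you do not provide a token value, dgenerate
will derive the token's default value from the filename of the model, with any
spaces converted to underscores and the file extension removed.

For instance, with an SDXL embedding loaded from a file, the default prompt token
would be the file's base name. This is only a syntax sketch with a hypothetical
embedding file:

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example with a hypothetical embedding file;
    # the default prompt token here would be "my_sdxl_embedding"

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --textual-inversions "my_sdxl_embedding.safetensors" \
    --prompts "a photo in the style of my_sdxl_embedding"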
.. code-block:: bash

    #!/usr/bin/env bash

    # Load a textual inversion from a huggingface repository,
    # specifying its name in the repository as an argument

    dgenerate Duskfallcrew/isometric-dreams-sd-1-5  \
    --textual-inversions "Duskfallcrew/IsometricDreams_TextualInversions;weight-name=Isometric_Dreams-1000.pt" \
    --scheduler KDPM2DiscreteScheduler \
    --inference-steps 30 \
    --guidance-scales 7 \
    --prompts "a bright photo of the Isometric_Dreams, a tv and a stereo in it and a book shelf, a table, a couch, a room with a bed"


You can change the ``token`` value to affect the prompt token used to trigger the embedding

.. code-block:: bash

    #!/usr/bin/env bash

    # Load a textual inversion from a huggingface repository,
    # specifying its name in the repository as an argument

    dgenerate Duskfallcrew/isometric-dreams-sd-1-5  \
    --textual-inversions "Duskfallcrew/IsometricDreams_TextualInversions;weight-name=Isometric_Dreams-1000.pt;token=<MY_TOKEN>" \
    --scheduler KDPM2DiscreteScheduler \
    --inference-steps 30 \
    --guidance-scales 7 \
    --prompts "a bright photo of the <MY_TOKEN>, a tv and a stereo in it and a book shelf, a table, a couch, a room with a bed"


If you want to select the repository revision, such as ``main`` etc, use the named argument ``revision``

.. code-block:: bash

    #!/usr/bin/env bash

    # This is a non-working example, as I do not know of a repo that utilizes revisions with
    # textual inversion weights :) this is only a syntax example

    dgenerate huggingface/model \
    --prompts "Syntax example" \
    --textual-inversions "huggingface/ti_repo;revision=main"


If your weights file exists in a subfolder of the repository, use the named argument ``subfolder``

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate huggingface/model \
    --prompts "Syntax example" \
    --textual-inversions "huggingface/ti_repo;subfolder=repo_subfolder;weight-name=ti_model.safetensors"


If you are loading a .safetensors or other file from a path on disk, simply do:
.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate Lykon/dreamshaper-8 \
    --prompts "Syntax example" \
    --textual-inversions "my_ti_model.safetensors"



Specifying Control Nets
=======================

One or more ControlNet models may be specified with ``--control-nets``, and multiple control
net guidance images can be specified via ``--image-seeds`` in the case that you specify
multiple control net models.

ControlNet models are supported for these model types:

    * ``--model-type torch``
    * ``--model-type torch-sdxl``
    * ``--model-type torch-sd3`` (img2img and inpainting not supported)
    * ``--model-type torch-flux``

You can provide a huggingface repository slug / blob link, or .pt, .pth, .bin, .ckpt, or .safetensors files.

Control images for the Control Nets can be provided using ``--image-seeds``

When using ``--control-nets``, specifying control images via ``--image-seeds`` can be accomplished in these ways:

    * ``--image-seeds "control-image.png"`` (txt2img)
    * ``--image-seeds "img2img-seed.png;control=control-image.png"`` (img2img)
    * ``--image-seeds "img2img-seed.png;mask=mask.png;control=control-image.png"`` (inpainting)

Multiple control image sources can be specified in these ways when using multiple control nets:

    * ``--image-seeds "control-1.png, control-2.png"`` (txt2img)
    * ``--image-seeds "img2img-seed.png;control=control-1.png, control-2.png"`` (img2img)
    * ``--image-seeds "img2img-seed.png;mask=mask.png;control=control-1.png, control-2.png"`` (inpainting)


It is considered a syntax error to specify an unequal number of control guidance
images and ``--control-nets`` URIs, and you will receive an error message if you do so.

``resize=WIDTHxHEIGHT`` can be used to select a per ``--image-seeds`` resize dimension for all image
sources involved in that particular specification, as well as ``aspect=true/false`` and the frame
slicing arguments ``frame-start`` and ``frame-end``.

ControlNet guidance images may actually be animations such as MP4s, GIFs, etc. Frames can be
taken from multiple videos simultaneously, and any possible combination of image/video parameters can be used.
The animation with the fewest frames in the entire specification determines the frame count, and
any static images present are duplicated across the entire animation. The first animation present
in an image seed specification always determines the output FPS of the animation.

Arguments pertaining to the loading of each ControlNet model specified with ``--control-nets`` may be
declared in the same way as when using ``--vae``, with the addition of a ``scale`` argument.

Available arguments are: ``scale``, ``start``, ``end``, ``revision``, ``variant``, ``subfolder``, and ``dtype``

Most named arguments apply to loading from a huggingface repository or folder
that may or may not be a local git repository on disk. When loading directly from a .safetensors
or other file from a path on disk, the available arguments are ``scale``, ``start``, and ``end``.

The ``scale`` argument indicates the effect scale of the control net model.
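For torch, the ``start`` argument indicates the fraction of the total inference steps
at which the control net model starts to apply guidance. If you have multiple
control net models specified, they can apply guidance over different segments
of the inference steps using this option. It defaults to 0.0, meaning guidance
starts at the first inference step.

For torch, the ``end`` argument indicates the fraction of the total inference steps
at which the control net model stops applying guidance. It defaults to 1.0, meaning
guidance stops at the last inference step.

For instance, applying control net guidance only over the first half of the
inference steps might look like this; a minimal syntax sketch, not a tuned example:

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example; the control net applies
    # guidance from the first inference step up to 50% of the
    # total inference steps

    dgenerate Lykon/dreamshaper-8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --prompts "Syntax example" \
    --control-nets "lllyasviel/sd-controlnet-canny;scale=0.5;start=0.0;end=0.5" \
    --image-seeds "vermeer_canny_edged.png"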
These examples use: `vermeer_canny_edged.png <vermeer_canny_edged.png_1_>`_


.. code-block:: bash

    #!/usr/bin/env bash

    # SD1.5 example, use "vermeer_canny_edged.png" as a control guidance image

    dgenerate Lykon/dreamshaper-8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --prompts "Painting, Girl with a pearl earring by Leonardo Da Vinci, masterpiece; low quality, low resolution, blank eyeballs" \
    --control-nets "lllyasviel/sd-controlnet-canny;scale=0.5" \
    --image-seeds "vermeer_canny_edged.png"


    # If you have an img2img image seed, use this syntax

    dgenerate Lykon/dreamshaper-8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --prompts "Painting, Girl with a pearl earring by Leonardo Da Vinci, masterpiece; low quality, low resolution, blank eyeballs" \
    --control-nets "lllyasviel/sd-controlnet-canny;scale=0.5" \
    --image-seeds "my-image-seed.png;control=vermeer_canny_edged.png"


    # If you have an img2img image seed and an inpainting mask, use this syntax

    dgenerate Lykon/dreamshaper-8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --prompts "Painting, Girl with a pearl earring by Leonardo Da Vinci, masterpiece; low quality, low resolution, blank eyeballs" \
    --control-nets "lllyasviel/sd-controlnet-canny;scale=0.5" \
    --image-seeds "my-image-seed.png;mask=my-inpaint-mask.png;control=vermeer_canny_edged.png"

    # SDXL example

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --vae "AutoencoderKL;model=madebyollin/sdxl-vae-fp16-fix" \
    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \
    --inference-steps 30 \
    --guidance-scales 8 \
    --prompts "Taylor Swift, high quality, masterpiece, high resolution; low quality, bad quality, sketches" \
    --control-nets "diffusers/controlnet-canny-sdxl-1.0;scale=0.5" \
    --image-seeds "vermeer_canny_edged.png" \
    --output-size 1024


If you want to select the repository revision, such as ``main`` etc, use the named argument ``revision``

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate huggingface/model \
    --prompts "Syntax example" \
    --control-nets "huggingface/cn_repo;revision=main"


If your weights file exists in a subfolder of the repository, use the named argument ``subfolder``

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate huggingface/model \
    --prompts "Syntax example" \
    --control-nets "huggingface/cn_repo;subfolder=repo_subfolder"


If you are loading a .safetensors or other file from a path on disk, simply do:
.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate Lykon/dreamshaper-8 \
    --prompts "Syntax example" \
    --control-nets "my_cn_model.safetensors"


Flux Control Net Union Mode
---------------------------

Flux can utilize a combined control net model called ControlNet Union, i.e. ``InstantX/FLUX.1-dev-Controlnet-Union``.

This model is a union (combined weights) of seven different trained control net models for Flux in one file under
one HuggingFace repository.

Contained within the safetensors file are ControlNet weights for: ``canny``, ``tile``, ``depth``, ``blur``, ``pose``, ``gray``, and ``lq``.

When using this control net repository, you must specify which weights within it you want to use.

You can do this by passing the mode name to the ``mode`` URI argument of ``--control-nets``.

.. code-block:: bash

    #!/usr/bin/env bash

    # Use a character from the examples media folder
    # of this repository to generate an openpose rigging,
    # and then feed that image to Flux using the ControlNet
    # union repository, with the mode specified as "pose"

    dgenerate black-forest-labs/FLUX.1-schnell \
    --model-type torch-flux \
    --dtype bfloat16 \
    --model-sequential-offload \
    --control-nets "InstantX/FLUX.1-dev-Controlnet-Union;scale=0.8;mode=pose" \
    --image-seeds examples/media/man-fighting-pose.jpg \
    --control-image-processors openpose \
    --inference-steps 4 \
    --guidance-scales 0 \
    --gen-seeds 1 \
    --output-path output \
    --output-size 1024x1024 \
    --prompts "a boxer throwing a punch in the ring"


You can specify multiple instances of this control net URI with different modes if desired.

Everything else about control net URI usage, such as URI arguments, is unchanged from
what is described in the main `Specifying Control Nets`_ section.

Specifying T2I Adapters
=======================

One or more T2I Adapter models may be specified with ``--t2i-adapters``, and multiple
T2I Adapter guidance images can be specified via ``--image-seeds`` in the case that you specify
multiple T2I Adapter models.

T2I Adapters are similar to Control Net models, but are mutually exclusive with them,
IE: they cannot be used together.

T2I Adapters are more lightweight than Control Net models, but only support txt2img generation
with control images for guidance; img2img and inpainting are not supported with T2I Adapters.

T2I Adapter models are supported for these model types:

    * ``--model-type torch``
    * ``--model-type torch-sdxl``

You can provide a huggingface repository slug / blob link, or .pt, .pth, .bin, .ckpt, or .safetensors files.

Control images for the T2I Adapters can be provided using ``--image-seeds``

When using ``--t2i-adapters``, specifying control images via ``--image-seeds`` can be accomplished like this:

    * ``--image-seeds "control-image.png"`` (txt2img)

Multiple control image sources can be specified like this when using multiple T2I Adapters:

    * ``--image-seeds "control-1.png, control-2.png"`` (txt2img)


It is considered a syntax error to specify an unequal number of control guidance
images and ``--t2i-adapters`` URIs, and you will receive an error message if you do so.

Available URI arguments are: ``scale``, ``revision``, ``variant``, ``subfolder``, ``dtype``

The ``scale`` argument indicates the effect scale of the T2I Adapter model.
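When using SDXL, the dgenerate argument ``--sdxl-t2i-adapter-factors`` can be used to specify
multiple adapter factors to try generating images with. The adapter factor is a value between
``0.0`` and ``1.0`` indicating the fraction of time-steps over which the T2I adapter guidance
is applied.

For example, a ``--sdxl-t2i-adapter-factors`` value of ``0.5`` would mean to only apply guidance
over the first half of the time-steps needed to generate the image.

When using multiple T2I Adapters, this value applies to all T2I Adapter models mentioned.

Trying several adapter factors in one invocation might look like this; a minimal
syntax sketch, each listed factor adds a generation step:

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example; three generation steps
    # are performed, one for each adapter factor listed

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --t2i-adapters "TencentARC/t2i-adapter-canny-sdxl-1.0;scale=0.5" \
    --sdxl-t2i-adapter-factors 0.5 0.8 1.0 \
    --image-seeds "vermeer_canny_edged.png" \
    --inference-steps 30 \
    --guidance-scales 8 \
    --prompts "Syntax example"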
These examples use: `vermeer_canny_edged.png <vermeer_canny_edged.png_1_>`_

.. code-block:: bash

    #!/usr/bin/env bash

    # SD1.5 example, use "vermeer_canny_edged.png" as a control guidance image

    dgenerate Lykon/dreamshaper-8 \
    --inference-steps 40 \
    --guidance-scales 8 \
    --prompts "Painting, Girl with a pearl earring by Leonardo Da Vinci, masterpiece; low quality, low resolution, blank eyeballs" \
    --t2i-adapters "TencentARC/t2iadapter_canny_sd15v2;scale=0.5" \
    --image-seeds "vermeer_canny_edged.png"

    # SDXL example

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \
    --variant fp16 --dtype float16 \
    --vae "AutoencoderKL;model=madebyollin/sdxl-vae-fp16-fix" \
    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \
    --inference-steps 30 \
    --guidance-scales 8 \
    --prompts "Taylor Swift, high quality, masterpiece, high resolution; low quality, bad quality, sketches" \
    --t2i-adapters "TencentARC/t2i-adapter-canny-sdxl-1.0;scale=0.5" \
    --image-seeds "vermeer_canny_edged.png" \
    --output-size 1024


If you want to select the repository revision, such as ``main`` etc, use the named argument ``revision``

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate huggingface/model \
    --prompts "Syntax example" \
    --t2i-adapters "huggingface/t2i_repo;revision=main"


If your weights file exists in a subfolder of the repository, use the named argument ``subfolder``

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate huggingface/model \
    --prompts "Syntax example" \
    --t2i-adapters "huggingface/t2i_repo;subfolder=repo_subfolder"


If you are loading a .safetensors or other file from a path on disk, simply do:

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example

    dgenerate Lykon/dreamshaper-8 \
    --prompts "Syntax example" \
    --t2i-adapters "my_t2i_model.safetensors"


Specifying IP Adapters
======================

One or more IP Adapter models can be specified with the ``--ip-adapters`` argument.

The URI syntax for this argument is identical to ``--loras``, which is discussed in: `Specifying LoRAs`_

IP Adapters are supported for these model types:

    * ``--model-type torch``
    * ``--model-type torch-pix2pix``
    * ``--model-type torch-sdxl``
    * ``--model-type torch-flux`` (basic adapter image specification only)

Here is a brief example of loading an IP Adapter in the most basic way and passing it an image via ``--image-seeds``.

This example nearly duplicates an image created with a code snippet in the diffusers documentation page
`found here <https://huggingface.co/docs/diffusers/main/en/using-diffusers/ip_adapter#general-tasks>`_.
.. code-block:: bash

    #!/usr/bin/env bash

    # this uses one IP Adapter input image with the IP Adapter h94/IP-Adapter

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 \
    --model-type torch-sdxl \
    --dtype float16 \
    --variant fp16 \
    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \
    --inference-steps 30 \
    --guidance-scales 5 \
    --sdxl-high-noise-fractions 0.8 \
    --seeds 0 \
    --output-path basic \
    --model-cpu-offload \
    --image-seeds "adapter: https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png" \
    --ip-adapters "h94/IP-Adapter;subfolder=sdxl_models;weight-name=ip-adapter_sdxl.bin" \
    --output-size 1024x1024 \
    --prompts "a polar bear sitting in a chair drinking a milkshake; \
               deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality"


The main complexity of working with IP Adapters comes when specifying the ``--image-seeds`` URI for tasks other than the most basic usage
shown above.

Each IP Adapter model can accept multiple IP Adapter input images, and they do not need to all be the same dimension or aligned in any
particular way for the model to work.

In addition, IP Adapter models can be used with Control Net and T2I Adapter models, introducing additional complexity in specifying
image input.

If you specify multiple IP Adapters, they must all have the same ``variant`` URI argument value or you will receive a usage error.

----

basic --image-seeds specification
---------------------------------

The first syntax we can use with ``--image-seeds`` is designed to allow using IP Adapter images alone or with Control Net images.

    * ``--image-seeds "adapter: adapter-image.png"`` (txt2img)
    * ``--image-seeds "adapter: adapter-image.png;control=control-image.png"`` (txt2img + Control Net or T2I Adapter)

You may specify multiple IP Adapter images with the ``+`` image syntax, and multiple control images as you normally would.

    * ``--image-seeds "adapter: adapter-image1.png + adapter-image2.png"``
    * ``--image-seeds "adapter: adapter-image1.png + adapter-image2.png;control=control-image1.png, control-image2.png"``


If you have multiple IP Adapter models loaded via ``--ip-adapters``, a comma delimits the images passed to each IP Adapter model.

    * ``--image-seeds "adapter: model1-adapter-image1.png + model1-adapter-image2.png, model2-adapter-image1.png + model2-adapter-image2.png"``
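A complete command using the ``+`` syntax might look like the following; a minimal
syntax sketch with hypothetical adapter image files, based on the basic example above:

.. code-block:: bash

    #!/usr/bin/env bash

    # This is only a syntax example with hypothetical adapter images;
    # both images are passed to the single IP Adapter model

    dgenerate stabilityai/stable-diffusion-xl-base-1.0 \
    --model-type torch-sdxl \
    --dtype float16 \
    --variant fp16 \
    --ip-adapters "h94/IP-Adapter;subfolder=sdxl_models;weight-name=ip-adapter_sdxl.bin" \
    --image-seeds "adapter: adapter-image1.png + adapter-image2.png" \
    --inference-steps 30 \
    --guidance-scales 5 \
    --prompts "Syntax example"


If you specify the ``resize``, ``aspect``, or ``align`` arguments for resizing the ``--image-seeds`` components, these arguments do
not affect the IP Adapter images; they affect only the control images in the cases being discussed here.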
In order to resize IP Adapter images from the ``--image-seeds`` URI, you must use a sub-uri syntax for each adapter image.

This is always true for all adapter image specification syntaxes.

This sub-uri syntax uses the pipe ``|`` symbol to delimit its URI arguments for the specific IP Adapter image.

    * ``--image-seeds "adapter: adapter-image.png|resize=256|align=8|aspect=True"``
    * ``--image-seeds "adapter: adapter-image1.png|resize=256|align=8|aspect=True + adapter-image2.png|resize=256|align=8|aspect=True"``


This sub-uri syntax allows resizing each IP Adapter input image individually.

This syntax supports the arguments ``resize``, ``align``, and ``aspect``, which refer to the resize
dimension, image alignment, and whether or not the image resize that occurs is aspect-correct.

These arguments mirror the behavior of the top level ``--image-seeds`` arguments with the same names.

However, alignment for IP Adapter images defaults to 1, meaning that there is no forced alignment
unless you specify it manually.

----


img2img --image-seeds specification
-----------------------------------

You may use a traditional img2img input image along with IP Adapter input images.

The adapter images are then specified with the URI argument ``adapter``.

The exact same syntax is used when specifying the IP Adapter images this way as when using the
``adapter:`` prefix mentioned in the section above, including the ``+`` syntax and the sub-uri
resizing syntax.


    * ``--image-seeds "img2img-input.png;adapter=adapter-image.png"`` (img2img)
    * ``--image-seeds "img2img-input.png;adapter=adapter-image.png;control=control-image.png"`` (img2img + Control Net or T2I Adapter)


----

inpainting --image-seeds specification
--------------------------------------

You may use inpainting with IP Adapter images by specifying an img2img input image and the ``mask`` argument of the ``--image-seeds`` URI.

The ``mask`` argument in this case does not refer to IP Adapter mask images, but simply to inpainting mask images.


    * ``--image-seeds "img2img-input.png;mask=inpaint-mask.png;adapter=adapter-image.png"`` (inpaint)
    * ``--image-seeds "img2img-input.png;mask=inpaint-mask.png;adapter=adapter-image.png;control=control-image.png"`` (inpaint + Control Net or T2I Adapter)


----

quoting IP Adapter image URLs with plus symbols
-----------------------------------------------

If you happen to need to download an IP Adapter image from a URL containing a plus symbol, the URL can be quoted
using single or double quotes depending on context.

There are quite a few different ways to quote the URI itself that will work, especially in config scripts, where ``;`` is not
considered to be any kind of significant operator, and ``|`` is only used as an operator with the ``\exec`` directive.


    * ``--image-seeds "adapter: 'https://url.com?arg=hello+world' + image2.png"``
    * ``--image-seeds 'adapter:"https://url.com?arg=hello+world" + image2.png'``
    * ``--image-seeds "img2img.png;adapter='https://url.com?arg=hello+world' + image2.png"``
    * ``--image-seeds 'img2img.png;adapter="https://url.com?arg=hello+world" + image2.png'``

----

animated inputs & combinatorics
-------------------------------

Animated inputs work for IP Adapter images. When you specify an image seed with animated components such as videos or gifs,
the shortest animation dictates the number of frames that will be processed in total, and any static images specified in
the image seed are duplicated across those frames.
The IP Adapter syntax introduces a lot of possible combinations for ``--image-seeds`` input images, and
not all possible combinations are covered in this documentation, as doing so would be impractical.

If you find a combination that behaves strangely or incorrectly, or that should work but doesn't, please submit an issue :)


Specifying Text Encoders
========================

Diffusion pipelines supported by dgenerate may use a varying number of
text encoder sub models, currently up to 3. ``--model-type torch-sd3``
for instance uses 3 text encoder sub models, all of which can be
individually specified from the command line if desired.

To specify Text Encoder models directly, use ``--text-encoders`` for
the primary model and ``--text-encoders2`` for the SDXL Refiner or
Stable Cascade decoder.

Text Encoder URIs do not support loading from blob links or a single file;
text encoders must be loaded from a huggingface slug or a folder on disk
containing the models and configuration.

The syntax for specifying text encoders is similar to that of ``--vae``

The URI syntax for ``--text-encoders`` is ``TextEncoderClass;model=(huggingface repository slug or folder path)``

Loading arguments available when specifying a Text Encoder are: ``model``, ``revision``, ``variant``, ``subfolder``, ``dtype``, and ``quantize``

The ``variant`` argument defaults to the value of ``--variant``

The ``dtype`` argument defaults to the value of ``--dtype``

The ``quantize`` URI argument enables weights quantization via the `optimum-quanto
library <optimum-quanto_library_1_>`_, allowing for lower GPU memory usage.
This is useful when generating with Flux models. ``quantize`` may be passed the
values ``qint2``, ``qint4``, ``qint8``, ``qfloat8_e4m3fn``, ``qfloat8_e5m2``, or ``qfloat8``,
to indicate the quantization data type.

The other named arguments are available when loading from a huggingface repository or folder
that may or may not be a local git repository on disk.

Available encoder classes are:

* CLIPTextModel
* CLIPTextModelWithProjection
* T5EncoderModel

You can query the text encoder types and positions for a model by passing ``help``
as an argument to ``--text-encoders`` or ``--text-encoders2``. This feature
may not be used for both arguments simultaneously, and also may not be used
when passing ``help`` or ``helpargs`` to any ``--scheduler`` type argument.

.. code-block:: bash

    #!/usr/bin/env bash

    # ask for text encoder help on the main model that is mentioned

    dgenerate https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips.safetensors \
    --model-type torch-sd3 \
    --variant fp16 \
    --dtype float16 \
    --text-encoders help

    # outputs:

    # Text encoder type help:
    #
    #     0 = CLIPTextModelWithProjection
    #     1 = CLIPTextModelWithProjection
    #     2 = T5EncoderModel

    # this means that there are 3 text encoders that we
    # could potentially specify manually, in the order
    # displayed for this model

When specifying multiple text encoders, a special syntax is allowed to indicate that
a text encoder should be loaded from defaults; this syntax involves the plus symbol.
When a plus symbol is encountered, it is regarded as "use default".

For instance in the example below, only the last of the three text encoders
involved in the Stable Diffusion 3 pipeline is specified, as it is the only
one not included with the main model file.

This text encoder is loaded from a subfolder of the Stable Diffusion 3
repository on huggingface.

.. code-block:: bash

    #!/usr/bin/env bash

    # This is an example of individually specifying text encoders,
    # specifically for stable diffusion 3. This model from the blob
    # link includes the clip encoders, so we only need to specify
    # the T5 encoder, which is encoder number 3. The + symbols indicate
    # that the first 2 encoders are assigned their default values; they
    # are loaded from the checkpoint file for the main model

    dgenerate https://huggingface.co/stabilityai/stable-diffusion-3-medium/blob/main/sd3_medium_incl_clips.safetensors \
    --model-type torch-sd3 \
    --variant fp16 \
    --dtype float16 \
    --inference-steps 30 \
    --guidance-scales 5.00 \
    --text-encoders + + \
        "T5EncoderModel;model=stabilityai/stable-diffusion-3-medium-diffusers;subfolder=text_encoder_3" \
    --clip-skips 0 \
    --gen-seeds 2 \
    --output-path output \
    --model-sequential-offload \
    --prompts "a horse outside a barn"


You may also use the URI value ``null`` to indicate that you do not want a specific text encoder to be loaded at all.

For instance, you can prevent Stable Diffusion 3 from loading and using the T5 encoder altogether.

.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate stabilityai/stable-diffusion-3-medium-diffusers \
    --model-type torch-sd3 \
    --variant fp16 \
    --dtype float16 \
    --inference-steps 30 \
    --guidance-scales 5.00 \
    --text-encoders + + null \
    --clip-skips 0 \
    --gen-seeds 2 \
    --output-path output \
    --model-sequential-offload \
    --prompts "a horse outside a barn"


Any text encoder shared via the ``\use_modules`` directive in a config file is considered a default
value for the corresponding text encoder in the next pipeline that runs; using ``+`` will maintain this value
and using ``null`` will override it.

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1
    # this model will load all three text encoders;
    # they are not cached individually, as we did not explicitly
    # specify any of them; they are cached with the pipeline
    # as a whole

    stabilityai/stable-diffusion-3-medium-diffusers
    --model-type torch-sd3
    --variant fp16
    --dtype float16
    --inference-steps 30
    --guidance-scales 5.00
    --clip-skips 0
    --gen-seeds 2
    --output-path output
    --model-sequential-offload
    --prompts "a horse outside a barn"

    # store all the text encoders from the last pipeline
    # into the variable "encoders"

    \save_modules encoders text_encoder text_encoder_2 text_encoder_3

    # share them with the next pipeline

    \use_modules encoders

    # use all the encoders except the T5 encoder (third encoder);
    # sharing modules this way saves a significant amount
    # of memory

    stabilityai/stable-diffusion-3-medium-diffusers
    --model-type torch-sd3
    --variant fp16
    --dtype float16
    --inference-steps 30
    --guidance-scales 5.00
    --clip-skips 0
    --text-encoders + + null
    --gen-seeds 2
    --output-path output
    --model-sequential-offload
    --prompts "a horse outside a barn"


Prompt Weighting and Enhancement
================================

By default, the prompt token weighting syntax that you may be familiar with from other software such as
`ComfyUI <https://github.com/comfyanonymous/ComfyUI>`_, `Stable Diffusion Web UI <Stable_Diffusion_Web_UI_1_>`_,
and `CivitAI <CivitAI_1_>`_ is not enabled, and prompts over ``77`` tokens in length are not supported.

However! dgenerate implements prompt weighting and prompt enhancements through internal plugins
called prompt weighters, which can be selectively enabled to process your prompts. They support
special token weighting syntaxes, and overcome limitations on prompt length.

The names of all prompt weighter implementations can be seen by using the argument ``--prompt-weighter-help``,
and specific documentation for a prompt weighter can be printed by passing its name to this argument.

You may also use the config directive ``\prompt_weighter_help`` inside of a config, or
more likely when you are working inside the `Console UI`_ shell.

There are currently two prompt weighter implementations: the ``compel`` prompt weighter and
the ``sd-embed`` prompt weighter.


The compel prompt weighter
--------------------------

The ``compel`` prompt weighter uses the `compel <https://github.com/damian0815/compel>`_ library to
support `InvokeAI <https://github.com/invoke-ai/InvokeAI>`_ style prompt token weighting syntax for
Stable Diffusion 1/2, and Stable Diffusion XL.

You can read about InvokeAI prompt syntax here: `Invoke AI prompting documentation <https://invoke-ai.github.io/InvokeAI/features/PROMPTS/>`_

It is a bit different from `Stable Diffusion Web UI <Stable_Diffusion_Web_UI_1_>`_ syntax,
which is the syntax used by the majority of other image generation software. It possesses some neat
features not mentioned in this documentation that are worth reading about in the links provided above.


.. code-block:: bash

    #!/usr/bin/env bash

    # print out the documentation for the compel prompt weighter

    dgenerate --prompt-weighter-help compel
.. code-block:: text

    compel:
        arguments:
            syntax: str = "compel"

        Implements prompt weighting syntax for Stable Diffusion 1/2 and Stable Diffusion XL using
        compel. The default syntax is "compel" which is analogous to the syntax used by InvokeAI.

        Specifying the syntax "sdwui" will translate your prompt from Stable Diffusion Web UI syntax
        into compel / InvokeAI syntax before generating the prompt embeddings.

        If you wish to use prompt syntax for weighting tokens that is similar to ComfyUI, Automatic1111,
        or CivitAI for example, use: 'compel;syntax=sdwui'

        The underlying weighting behavior for tokens is not exactly the same as other software that uses
        the more common "sdwui" syntax, so your prompt may need adjusting if you are reusing a prompt
        from those other pieces of software.

        You can read about compel here: https://github.com/damian0815/compel

        And InvokeAI here: https://github.com/invoke-ai/InvokeAI

        This prompt weighter supports the model types:

        --model-type torch
        --model-type torch-pix2pix
        --model-type torch-upscaler-x4
        --model-type torch-sdxl
        --model-type torch-sdxl-pix2pix
        --model-type torch-s-cascade

        The secondary prompt option for SDXL --sdxl-second-prompts is supported by this prompt weighter
        implementation. However, --sdxl-refiner-second-prompts is not supported and will be ignored
        with a warning message.

    ====================================================================================================


You can enable the ``compel`` prompt weighter by specifying it with the ``--prompt-weighter`` argument.

.. code-block:: bash

    #!/usr/bin/env bash

    # Some very simple examples

    # Increase the weight of (picking apricots)

    dgenerate stabilityai/stable-diffusion-2-1 \
    --inference-steps 30 \
    --guidance-scales 5.00 \
    --clip-skips 0 \
    --gen-seeds 1 \
    --output-path output \
    --output-size 1024 \
    --prompt-weighter compel \
    --prompts "a tall man (picking apricots)++"

    # Specify a weight

    dgenerate stabilityai/stable-diffusion-2-1 \
    --inference-steps 30 \
    --guidance-scales 5.00 \
    --clip-skips 0 \
    --gen-seeds 1 \
    --output-path output \
    --output-size 1024 \
    --prompt-weighter compel \
    --prompts "a tall man (picking apricots)1.3"


If you prefer the prompt weighting syntax used by Stable Diffusion Web UI, you can specify
the plugin argument ``syntax=sdwui``, which will translate your prompt from that syntax into
compel / InvokeAI syntax for you.
.. code-block:: bash

    #!/usr/bin/env bash

    # Some very simple examples

    # Increase the weight of (picking apricots)

    dgenerate stabilityai/stable-diffusion-2-1 \
    --inference-steps 30 \
    --guidance-scales 5.00 \
    --clip-skips 0 \
    --gen-seeds 1 \
    --output-path output \
    --output-size 1024 \
    --prompt-weighter "compel;syntax=sdwui" \
    --prompts "a tall man ((picking apricots))"

    # Specify a weight

    dgenerate stabilityai/stable-diffusion-2-1 \
    --inference-steps 30 \
    --guidance-scales 5.00 \
    --clip-skips 0 \
    --gen-seeds 1 \
    --output-path output \
    --output-size 1024 \
    --prompt-weighter "compel;syntax=sdwui" \
    --prompts "a tall man (picking apricots:1.3)"


The weighting algorithm is not entirely identical to that of other software, so if
you are migrating prompts they will likely require some adjustment.


The sd-embed prompt weighter
----------------------------

The ``sd-embed`` prompt weighter uses the `sd_embed <https://github.com/xhinker/sd_embed>`_ library to support
`Stable Diffusion Web UI <Stable_Diffusion_Web_UI_1_>`_ style prompt token
weighting syntax for Stable Diffusion 1/2, Stable Diffusion XL, Stable Diffusion 3, and Flux.


The syntax that ``sd-embed`` uses is the more widespread prompt syntax used by software such as
`Stable Diffusion Web UI <Stable_Diffusion_Web_UI_1_>`_ and `CivitAI <CivitAI_1_>`_


Quite notably, the ``sd-embed`` prompt weighter supports Stable Diffusion 3 and Flux, whereas
the ``compel`` prompt weighter currently does not.


.. code-block:: bash

    #!/usr/bin/env bash

    # print out the documentation for the sd-embed prompt weighter

    dgenerate --prompt-weighter-help sd-embed


.. code-block:: text

    sd-embed:

        Implements prompt weighting syntax for Stable Diffusion 1/2, Stable Diffusion XL, and Stable
        Diffusion 3, and Flux using sd_embed.

        sd_embed uses a Stable Diffusion Web UI compatible prompt syntax.

        See: https://github.com/xhinker/sd_embed

        @misc{sd_embed_2024,
          author       = {Shudong Zhu(Andrew Zhu)},
          title        = {Long Prompt Weighted Stable Diffusion Embedding},
          howpublished = {\url{https://github.com/xhinker/sd_embed}},
          year         = {2024},
        }

        --model-type torch
        --model-type torch-pix2pix
        --model-type torch-upscaler-x4
        --model-type torch-sdxl
        --model-type torch-sdxl-pix2pix
        --model-type torch-s-cascade
        --model-type torch-sd3
        --model-type torch-flux

        The secondary prompt option for SDXL --sdxl-second-prompts is supported by this prompt weighter
        implementation. However, --sdxl-refiner-second-prompts is not supported and will be ignored with
        a warning message.

        The secondary prompt option for SD3 --sd3-second-prompts is not supported by this prompt
        weighter implementation. Neither is --sd3-third-prompts. The prompts from these arguments will
        be ignored.
The prompts from these arguments will\n        be ignored.\n\n        The secondary prompt option for Flux --flux-second-prompts is supported by this prompt weighter.\n\n        Flux does not support negative prompting in either prompt.\n\n    ====================================================================================================\n\n\nYou can enable the ``sd-embed`` prompt weighter by specifying it with the ``--prompt-weighter`` argument.\n\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # You need a huggingface API token to run this example\n\n    dgenerate stabilityai/stable-diffusion-3-medium-diffusers \\\n    --model-type torch-sd3 \\\n    --variant fp16 \\\n    --dtype float16 \\\n    --inference-steps 30 \\\n    --guidance-scales 5.00 \\\n    --clip-skips 0 \\\n    --gen-seeds 1 \\\n    --output-path output \\\n    --output-size 1024x1024 \\\n    --model-sequential-offload \\\n    --prompt-weighter sd-embed \\\n    --auth-token $HF_TOKEN \\\n    --prompts \"a (man:1.2) standing on the (beach:1.2) looking out into the water during a (sunset)\"\n\n\nUtilizing CivitAI links and Other Hosted Models\n===============================================\n\nAny model accepted by dgenerate that can be specified as a single file\ninside of a URI (or otherwise) can also be specified as a URL to a model file.\ndgenerate will attempt to download the file from the URL directly, store it in\nthe web cache, and then use it.\n\nYou may also use the ``\\download`` config directive to assist in\npre-downloading other resources from the internet. The directive has the ability\nto specify arbitrary storage locations. See: `The \\\\download directive`_\n\nYou can also use the ``download()`` template function for similar\npurposes. See: `The download() template function`_\n\nIn the case of CivitAI you can use this to bake models into your script\nthat will be automatically downloaded for you; you just need a CivitAI\naccount and API token to download models.\n\nYour API token can be created on this page: https://civitai.com/user/account\n\nNear the bottom of the page in the section: ``API Keys``\n\nYou can use the `civitai-links <Sub Command: civitai-links_>`_ sub-command to fetch the necessary model\nlinks from a CivitAI model page. You may also use this sub-command in the form of the config\ndirective ``\\civitai_links`` from a config file or the Console UI.\n\nYou can also `(Right Click) -> Copy Link Address` on a CivitAI model's download link to get the necessary URL.\n\nIf you plan to download many large models to the web cache in this manner, you may wish\nto adjust the global cache expiry time so that they exist in the cache longer than the default of 12 hours.\n\nYou can see how to change the cache expiry time in the section `File Cache Control`_.\n\n
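For example, a minimal config sketch (the URL here is just a placeholder) that\npre-downloads a file into the web cache with the ``download()`` template function,\nthen uses the returned file path as the main model; this assumes ``download()`` is\navailable to ``\\setp`` just as it is in templates:\n\n.. code-block:: jinja\n\n    #! /usr/bin/env dgenerate --file\n\n    # download (or fetch from the web cache) a hypothetical model file,\n    # download() returns the local file path the file was stored at\n\n    \\setp model_path download(\"https://example.com/placeholder-model.safetensors\")\n\n    # use the cached file path as the main model\n\n    {{ model_path }}\n    --model-type torch\n    --prompts \"a fluffy cat\"\n\n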
.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Download the main model from civitai using an api token\n\n    # https://civitai.com/models/122822?modelVersionId=133832\n\n    TOKEN=your_api_token_here\n\n    MODEL=\"https://civitai.com/api/download/models/133832?type=Model\u0026format=SafeTensor\u0026size=full\u0026fp=fp16\u0026token=$TOKEN\"\n\n    dgenerate $MODEL \\\n    --model-type torch-sdxl \\\n    --variant fp16 --dtype float16 \\\n    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \\\n    --sdxl-high-noise-fractions 0.8 \\\n    --guidance-scales 8 \\\n    --inference-steps 40 \\\n    --prompts \"a fluffy cat playing in the grass\"\n\n\nThis method can be used for VAEs, LoRAs, ControlNets, and Textual Inversions\nas well, whenever single file loads are supported by the argument.\n\n\nSpecifying Generation Batch Size\n================================\n\nMultiple image variations from the same seed can be produced on a GPU simultaneously\nusing the ``--batch-size`` option of dgenerate. This can be used in combination with\n``--batch-grid-size`` to output image grids if desired.\n\nWhen not writing to image grids, the files in the batch will be written to disk\nwith the suffix ``_image_N`` where ``N`` is the index of the image in the batch of images\nthat were generated.\n\nWhen producing an animation, you can either write ``N`` animation output files\nwith the filename suffixes ``_animation_N``, where ``N`` is the index of the image\nin the batch which makes up the frames, or you can use ``--batch-grid-size`` to\nwrite frames to a single animated output where the frames are all image grids\nproduced from the images in the batch.\n\nWith larger ``--batch-size`` values, the use of ``--vae-slicing`` can make the difference\nbetween an out of memory condition and success, so it is recommended that you\ntry this option if you experience an out of memory condition due to the use of\n``--batch-size``.\n\n\nBatching Input Images and Inpaint Masks\n=======================================\n\nFor most model types excluding Stable Cascade, you can process multiple input images for ``img2img`` and\n``inpaint`` mode on the GPU simultaneously.\n\nThis is done using the ``images: ...`` syntax of ``--image-seeds``.\n\nHere is an example of ``img2img`` usage:\n\n.. code-block:: bash\n\n    #! /usr/bin/env bash\n\n    # Standard img2img, this results in two outputs\n    # each of the images is resized to 1024 so they match\n    # in dimension, which is a requirement for batching\n\n    dgenerate stabilityai/stable-diffusion-2 \\\n    --inference-steps 30 \\\n    --guidance-scales 8 \\\n    --image-seeds \"images: examples/media/earth.jpg, examples/media/mountain.png;1024\" \\\n    --image-seed-strengths 0.9 \\\n    --vae-tiling \\\n    --vae-slicing \\\n    --seeds 70466855166895 \\\n    --output-path batching \\\n    --prompts \"A detailed view of the planet mars\"\n\n    # The --batch-size must be divisible by the number of provided images\n    # this results in 4 images being produced, 2 variations of each input image\n\n    dgenerate stabilityai/stable-diffusion-2 \\\n    --inference-steps 30 \\\n    --guidance-scales 8 \\\n    --image-seeds \"images: examples/media/earth.jpg, examples/media/mountain.png;1024\" \\\n    --batch-size 4 \\\n    --image-seed-strengths 0.9 \\\n    --vae-tiling \\\n    --vae-slicing \\\n    --seeds 70466855166895 \\\n    --output-path batching \\\n    --prompts \"A detailed view of the planet mars\"\n\n
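If you would rather write the whole batch as a single grid image, ``--batch-grid-size``\ncan be added; a minimal sketch, assuming the grid takes the same WIDTHxHEIGHT size\nsyntax used by options such as ``--output-size``:\n\n.. code-block:: bash\n\n    #! /usr/bin/env bash\n\n    # same inputs as above, but collect the batch of 4 images\n    # into a single 2x2 image grid instead of 4 separate files\n    # (2x2 here is an assumed grid specification)\n\n    dgenerate stabilityai/stable-diffusion-2 \\\n    --inference-steps 30 \\\n    --guidance-scales 8 \\\n    --image-seeds \"images: examples/media/earth.jpg, examples/media/mountain.png;1024\" \\\n    --batch-size 4 \\\n    --batch-grid-size 2x2 \\\n    --image-seed-strengths 0.9 \\\n    --vae-tiling \\\n    --vae-slicing \\\n    --seeds 70466855166895 \\\n    --output-path batching \\\n    --prompts \"A detailed view of the planet mars\"\n\n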
And an ``inpainting`` example:\n\n.. code-block:: bash\n\n    #! /usr/bin/env bash\n\n    # With inpainting, we can either provide just one mask\n    # for every input image, or a separate mask for each input image\n    # if we wish to provide separate masks, we can simply separate\n    # them with commas as we do with the images in the images:\n    # specification\n\n    # These images have different aspect ratios and dimensions\n    # so we are using the extended syntax of --image-seeds to\n    # force them to all be the same shape\n\n    # The same logic for --batch-size still applies as mentioned\n    # in the img2img example\n\n    dgenerate stabilityai/stable-diffusion-2-inpainting \\\n    --inference-steps 30 \\\n    --guidance-scales 8 \\\n    --image-seeds \"images: ../../media/dog-on-bench.png, ../../media/beach.jpg;mask=../../media/dog-on-bench-mask.png;resize=1024;aspect=False\" \\\n    --image-seed-strengths 1 \\\n    --vae-tiling \\\n    --vae-slicing \\\n    --seeds 39877139643371 \\\n    --output-path batching \\\n    --prompts \"A fluffy orange cat, realistic, high quality; deformed, scary\"\n\n\nIn the case of Stable Cascade, this syntax results in multiple images being passed to Stable Cascade\nas an image/style prompt, and does not result in multiple outputs or batching behavior.\n\nThis Stable Cascade functionality is demonstrated in the example config: `examples/stablecascade/img2img/multiple-inputs-config.dgen <https://github.com/Teriks/dgenerate/blob/v4.5.1/examples/stablecascade/img2img/multiple-inputs-config.dgen>`_\n\n\nImage Processors\n================\n\nImages provided through ``--image-seeds`` can be processed before being used for image generation\nthrough the use of the arguments ``--seed-image-processors``, ``--mask-image-processors``, and\n``--control-image-processors``. In addition, dgenerate's output can be post-processed with the\nuse of the ``--post-processors`` argument, which is useful for using the ``upscaler`` processor.\nAn important note about ``--post-processors`` is that post processing occurs before any image grid\nrendering is performed when ``--batch-grid-size`` is specified with a ``--batch-size`` greater than one,\nmeaning that the output images are processed with your processor before being put into a grid.\n\nEach of these options can receive one or more specifications for image processing actions;\nmultiple processing actions will be chained together one after another.\n\nUsing the option ``--image-processor-help`` with no arguments will yield a list of available image processor names.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate --image-processor-help\n\nOutput:\n\n.. code-block:: text\n\n    Available image processors:\n\n        \"anyline\"\n        \"canny\"\n        \"flip\"\n        \"grayscale\"\n        \"hed\"\n        \"invert\"\n        \"leres\"\n        \"letterbox\"\n        \"lineart\"\n        \"lineart-anime\"\n        \"lineart-standard\"\n        \"midas\"\n        \"mirror\"\n        \"mlsd\"\n        \"normal-bae\"\n        \"openpose\"\n        \"pidi\"\n        \"posterize\"\n        \"resize\"\n        \"sam\"\n        \"solarize\"\n        \"teed\"\n        \"upscaler\"\n        \"zoe\"\n\n
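As a quick illustration of chaining, the following sketch mirrors a seed image and then\nruns canny edge detection on the mirrored result before it is used for img2img (the\n``mirror`` and ``canny`` processors and their arguments are demonstrated throughout\nthe rest of this section):\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # two chained processors: the seed image is mirrored first,\n    # then canny edge detected\n\n    dgenerate stabilityai/stable-diffusion-2-1 \\\n    --inference-steps 30 \\\n    --image-seeds examples/media/earth.jpg \\\n    --image-seed-strengths 0.8 \\\n    --seed-image-processors mirror \"canny;lower=50;upper=100\" \\\n    --prompts \"a detailed view of a planet\"\n\n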
Specifying one or more specific processors, for example ``--image-processor-help canny openpose``, will yield\ndocumentation pertaining to those processor modules. This includes accepted arguments and their types for the\nprocessor module and a description of what the module does.\n\nCustom image processor modules can also be loaded through the ``--plugin-modules`` option as discussed\nin the `Writing Plugins`_ section.\n\n\nImage processor arguments\n-------------------------\n\nAll processors possess the arguments ``output-file`` and ``output-overwrite``.\n\nThe ``output-file`` argument can be used to write the processed image to a specific file. If multiple\nprocessing steps occur, such as when rendering an animation or across multiple generation steps, a numbered suffix\nwill be appended to this filename. Note that an output file will only be produced in the case that the\nprocessor actually modifies an input image in some way. This can be useful for debugging an image that\nis being fed into diffusion or a ControlNet.\n\nThe ``output-overwrite`` argument is a boolean argument that can be used to tell the processor that you do not want numbered\nsuffixes to be generated for ``output-file`` and to simply overwrite it.\n\nSome processors inherit the arguments ``device`` and ``model-offload``.\n\nThe ``device`` argument can be used to override what device any hardware accelerated image processing\noccurs on, if any. It defaults to the value of ``--device`` and has the same syntax for specifying device\nordinals; for instance, if you have multiple GPUs you may specify ``device=cuda:1`` to run image processing\non your second GPU, etc. Not all image processors respect this argument, as some image processing is only\never CPU based.\n\nThe ``model-offload`` argument is a boolean argument that can be used to force any torch modules / tensors\nassociated with an image processor to immediately evacuate the GPU or other non-CPU processing device\nas soon as the processor finishes processing an image. Usually, any modules / tensors will be\nbrought on to the desired device right before processing an image, and left on the device until\nthe image processor object leaves scope and is garbage collected.\n\n``model-offload`` can be useful for achieving certain GPU or processing device memory constraints; however,\nit is slower when processing multiple images in a row, as the modules / tensors must be brought on to the\ndesired device repeatedly for each image. In the context of dgenerate invocations where processors can\nbe used as preprocessors or postprocessors, the image processor object is garbage collected when the\ninvocation completes; this is also true for the ``\\image_process`` directive. Using this argument\nwith a preprocess specification, such as ``--control-image-processors``, may yield a noticeable memory\noverhead reduction when using a single GPU, as any models from the image processor will be moved to the\nCPU immediately when it is done with an image, clearing up VRAM space before the diffusion models enter GPU VRAM.\n\nFor example, images can be processed with the canny edge detection algorithm or OpenPose (rigging generation)\nbefore being used for generation with a model + a ControlNet.\n\nThis image of a `horse <https://raw.githubusercontent.com/Teriks/dgenerate/v4.5.1/examples/media/horse2.jpeg>`_\nis used in the example below with a ControlNet that is trained to generate images from canny edge detected input.\n\n
.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # --control-image-processors is only used for control images\n    # in this case the single image seed is considered a control image\n    # because --control-nets is being used\n\n    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \\\n    --variant fp16 --dtype float16 \\\n    --vae \"AutoencoderKL;model=madebyollin/sdxl-vae-fp16-fix\" \\\n    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \\\n    --inference-steps 30 \\\n    --guidance-scales 8 \\\n    --prompts \"Majestic unicorn, high quality, masterpiece, high resolution; low quality, bad quality, sketches\" \\\n    --control-nets \"diffusers/controlnet-canny-sdxl-1.0;scale=0.5\" \\\n    --image-seeds \"horse.jpeg\" \\\n    --control-image-processors \"canny;lower=50;upper=100\" \\\n    --gen-seeds 2 \\\n    --output-size 1024 \\\n    --output-path unicorn\n\n\nMultiple control net images, and input image batching\n-----------------------------------------------------\n\n\nEach ``--*-image-processors`` option has a special additional syntax, which is used to\ndescribe which processor or processor chain is affecting which input image in an\n``--image-seeds`` specification.\n\nFor instance, you may have multiple control guidance images (or frames, etc.) and multiple\ncontrol nets which are going to use those images, and want to process each guidance image\nwith a separate processor OR processor chain. You can specify how each image is processed\nby delimiting the processor specification groups with + (the plus symbol)\n\nLike this:\n\n    * ``--control-nets \"huggingface/controlnet1\" \"huggingface/controlnet2\"``\n    * ``--image-seeds \"image1.png, image2.png\"``\n    * ``--control-image-processors \"affect-image1\" + \"affect-image2\"``\n\n\nSpecifying an unequal number of control guidance images and ``--control-nets`` URIs is\nconsidered a syntax error, and you will receive an error message if you do so.\n\nYou can use processor chaining as well:\n\n    * ``--control-nets \"huggingface/controlnet1\" \"huggingface/controlnet2\"``\n    * ``--image-seeds \"image1.png, image2.png\"``\n    * ``--control-image-processors \"affect-image1\" \"affect-image1-again\" + \"affect-image2\"``\n\nIn the case that you would only like the second image affected:\n\n    * ``--control-nets \"huggingface/controlnet1\" \"huggingface/controlnet2\"``\n    * ``--image-seeds \"image1.png, image2.png\"``\n    * ``--control-image-processors + \"affect-image2\"``\n\n\nThe plus symbol effectively creates a NULL processor as the first entry in the example above.\n\nWhen multiple guidance images are present, it is a syntax error to specify more processor chains\nthan control guidance images. 
Specifying fewer processor chains simply means that the trailing\nguidance images will not be processed; you can avoid processing leading guidance images\nwith the mechanism described above.\n\nThis can be used with an arbitrary number of control image sources and control nets; take\nfor example the specification:\n\n    * ``--control-nets \"huggingface/controlnet1\" \"huggingface/controlnet2\" \"huggingface/controlnet3\"``\n    * ``--image-seeds \"image1.png, image2.png, image3.png\"``\n    * ``--control-image-processors + + \"affect-image3\"``\n\n\nThe two + (plus symbol) arguments indicate that the first two images mentioned in the control image\nspecification in ``--image-seeds`` are not to be processed by any processor.\n\nThis same syntax applies to ``img2img`` and ``mask`` images when using the ``images: ...`` batching\nsyntax described in: `Batching Input Images and Inpaint Masks`_\n\n.. code-block:: bash\n\n    #! /usr/bin/env bash\n\n    # process these two images as img2img inputs in one go on the GPU\n    # mirror the second image horizontally, the + indicates that\n    # we are skipping processing the first image\n\n    dgenerate stabilityai/stable-diffusion-2 \\\n    --inference-steps 30 \\\n    --guidance-scales 8 \\\n    --image-seeds \"images: examples/media/horse2.jpeg, examples/media/horse2.jpeg\" \\\n    --seed-image-processors + mirror \\\n    --image-seed-strengths 0.9 \\\n    --vae-tiling \\\n    --vae-slicing \\\n    --output-path unicorn \\\n    --prompts \"A fancy unicorn\"\n\n    # Now with inpainting\n\n    dgenerate stabilityai/stable-diffusion-2 \\\n    --inference-steps 30 \\\n    --guidance-scales 8 \\\n    --image-seeds \"images: examples/media/horse1.jpg, examples/media/horse1.jpg;mask=examples/media/horse1-mask.jpg, examples/media/horse1-mask.jpg\" \\\n    --seed-image-processors + mirror \\\n    --mask-image-processors + mirror \\\n    --image-seed-strengths 0.9 \\\n    --vae-tiling \\\n    --vae-slicing \\\n    --output-path mars_horse \\\n    --prompts \"A photo of a horse standing on mars\"\n\n\nSub Commands\n============\n\ndgenerate implements additional functionality through the option ``--sub-command``.\n\nFor a list of available sub-commands use ``--sub-command-help``, which by default\nwill list available sub-command names.\n\nFor additional information on a specific sub-command use ``--sub-command-help NAME``\n\nMultiple sub-command names can be specified to ``--sub-command-help`` if desired.\n\nAll sub-commands respect the ``--plugin-modules`` and ``--verbose`` arguments\neven if their help output does not mention them; these arguments are handled\nby dgenerate itself and not the sub-command.\n\nSub Command: image-process\n--------------------------\n\nThe ``image-process`` sub-command can be used to run image processors implemented\nby dgenerate on any file of your choosing, including animated images and videos.\n\nIts design and usage are similar to, but slightly different from, the main dgenerate\ncommand itself.\n\nIt can be used to run canny edge detection, openpose, etc. on any image or\nvideo/animated file that you want.\n\nThe help output of ``image-process`` is as follows:\n\n\n.. 
code-block:: text\n\n    usage: image-process [-h] [-p PROCESSORS [PROCESSORS ...]] [--plugin-modules PATH [PATH ...]]\n                         [-o OUTPUT [OUTPUT ...]] [-ff FRAME_FORMAT] [-ox] [-r RESIZE] [-na] [-al ALIGN]\n                         [-d DEVICE] [-fs FRAME_NUMBER] [-fe FRAME_NUMBER] [-nf | -naf]\n                         input [input ...]\n    \n    This command allows you to use dgenerate image processors directly on files of your choosing.\n    \n    positional arguments:\n      input                 Input file paths, may be a static images or animated files supported by dgenerate.\n                            URLs will be downloaded.\n                            ------------------------\n    \n    options:\n      -h, --help            show this help message and exit\n                            -------------------------------\n      -p PROCESSORS [PROCESSORS ...], --processors PROCESSORS [PROCESSORS ...]\n                            One or more image processor URIs, specifying multiple will chain them together. See:\n                            dgenerate --image-processor-help\n                            --------------------------------\n      --plugin-modules PATH [PATH ...]\n                            Specify one or more plugin module folder paths (folder containing __init__.py) or\n                            python .py file paths to load as plugins. Plugin modules can implement image\n                            processors.\n                            -----------\n      -o OUTPUT [OUTPUT ...], --output OUTPUT [OUTPUT ...]\n                            Output files, parent directories mentioned in output paths will be created for you\n                            if they do not exist. If you do not specify output files, the output file will be\n                            placed next to the input file with the added suffix '_processed_N' unless --output-\n                            overwrite is specified, in that case it will be overwritten. If you specify multiple\n                            input files and output files, you must specify an output file for every input file,\n                            or a directory (indicated with a trailing directory seperator character, for example\n                            \"my_dir/\" or \"my_dir\\\" if the directory does not exist yet). Failure to specify an\n                            output file with a URL as an input is considered an error. Supported file extensions\n                            for image output are equal to those listed under --frame-format.\n                            ----------------------------------------------------------------\n      -ff FRAME_FORMAT, --frame-format FRAME_FORMAT\n                            Image format for animation frames. 
Must be one of: png, apng, blp, bmp, dib, bufr,\n                            pcx, dds, ps, eps, gif, grib, h5, hdf, jp2, j2k, jpc, jpf, jpx, j2c, icns, ico, im,\n                            jfif, jpe, jpg, jpeg, tif, tiff, mpo, msp, palm, pdf, pbm, pgm, ppm, pnm, pfm, bw,\n                            rgb, rgba, sgi, tga, icb, vda, vst, webp, wmf, emf, or xbm.\n                            -----------------------------------------------------------\n      -ox, --output-overwrite\n                            Indicate that it is okay to overwrite files, instead of appending a duplicate\n                            suffix.\n                            -------\n      -r RESIZE, --resize RESIZE\n                            Preform naive image resizing (LANCZOS).\n                            ---------------------------------------\n      -na, --no-aspect      Make --resize ignore aspect ratio.\n                            ----------------------------------\n      -al ALIGN, --align ALIGN\n                            Align images / videos to this value in pixels, default is 8. Specifying 1 will\n                            disable resolution alignment.\n                            -----------------------------\n      -d DEVICE, --device DEVICE\n                            Processing device, for example \"cuda\", \"cuda:1\". Or \"mps\" on MacOS. (default: cuda,\n                            mps on MacOS)\n                            -------------\n      -fs FRAME_NUMBER, --frame-start FRAME_NUMBER\n                            Starting frame slice point for animated files (zero-indexed), the specified frame\n                            will be included. (default: 0)\n                            ------------------------------\n      -fe FRAME_NUMBER, --frame-end FRAME_NUMBER\n                            Ending frame slice point for animated files (zero-indexed), the specified frame will\n                            be included.\n                            ------------\n      -nf, --no-frames      Do not write frames, only an animation file. Cannot be used with --no-animation-\n                            file.\n                            -----\n      -naf, --no-animation-file\n                            Do not write an animation file, only frames. Cannot be used with --no-frames.\n                            -----------------------------------------------------------------------------\n\n\nOverview of specifying ``image-process`` inputs and outputs\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # Overview of specifying outputs, image-process can do simple operations\n    # like resizing images and forcing image alignment with --align, without the\n    # need to specify any other processing operations with --processors. 
Running\n    # image-process on an image with no other arguments simply aligns it to 8 pixels,\n    # given the defaults for its command line arguments\n\n    # More file formats than .png are supported for static image output, all\n    # extensions mentioned in the image-process --help documentation for --frame-format\n    # are supported, the supported formats are identical to those mentioned in the --image-format\n    # option help section of dgenerate's --help output\n\n    # my_file.png -> my_file_processed_1.png\n\n    dgenerate --sub-command image-process my_file.png --resize 512x512\n\n    # my_file.png -> my_file.png (overwrite)\n\n    dgenerate --sub-command image-process my_file.png --resize 512x512 --output-overwrite\n\n    # my_file.png -> my_file.png (overwrite)\n\n    dgenerate --sub-command image-process my_file.png -o my_file.png --resize 512x512 --output-overwrite\n\n    # my_file.png -> my_dir/my_file_processed_1.png\n\n    dgenerate --sub-command image-process my_file.png -o my_dir/ --resize 512x512 --no-aspect\n\n    # my_file_1.png -> my_dir/my_file_1_processed_1.png\n    # my_file_2.png -> my_dir/my_file_2_processed_2.png\n\n    dgenerate --sub-command image-process my_file_1.png my_file_2.png -o my_dir/ --resize 512x512\n\n    # my_file_1.png -> my_dir_1/my_file_1_processed_1.png\n    # my_file_2.png -> my_dir_2/my_file_2_processed_2.png\n\n    dgenerate --sub-command image-process my_file_1.png my_file_2.png \\\n    -o my_dir_1/ my_dir_2/ --resize 512x512\n\n    # my_file_1.png -> my_dir_1/renamed.png\n    # my_file_2.png -> my_dir_2/my_file_2_processed_2.png\n\n    dgenerate --sub-command image-process my_file_1.png my_file_2.png \\\n    -o my_dir_1/renamed.png my_dir_2/ --resize 512x512\n\n\nA few usage examples with processors:\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # image-process can support any input format that dgenerate itself supports\n    # including videos and animated files. It also supports all output formats\n    # supported by dgenerate for writing videos/animated files, and images.\n\n    # create a video rigged with OpenPose, frames will be rendered to the directory \"output\" as well.\n\n    dgenerate --sub-command image-process my-video.mp4 \\\n    -o output/rigged-video.mp4 --processors \"openpose;include-hand=true;include-face=true\"\n\n    # Canny edge detected video, also using processor chaining to mirror the frames\n    # before they are edge detected\n\n    dgenerate --sub-command image-process my-video.mp4 \\\n    -o output/canny-video.mp4 --processors mirror \"canny;blur=true;threshold-algo=otsu\"\n\n\nSub Command: civitai-links\n--------------------------\n\nThe ``civitai-links`` sub-command can be used to list the direct download links for models available on a CivitAI model page.\n\nThese links can be used directly with dgenerate; it will automatically download the model for you.\n\nYou only need to select which models you wish to use from the links listed by this command.\n\nSee: `Utilizing CivitAI links and Other Hosted Models`_ for more information about how to use these links.\n\nThe same functionality is available in config files and the Console UI through the\n``\\civitai_links`` config directive.\n\nFor example:\n\n
.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # get links for the Crystal Clear XL model on CivitAI\n\n    dgenerate --sub-command civitai-links \"https://civitai.com/models/122822?modelVersionId=133832\"\n\n    # you can also automatically append your API token to the end of the URLs with --token\n    # some models will require that you authenticate to download, this will add your token\n    # to the URL for you\n\n    dgenerate --sub-command civitai-links \"https://civitai.com/models/122822?modelVersionId=133832\" --token $MY_API_TOKEN\n\n\nThis will list every model link on the page with its title; there may be many model links\ndepending on what the page has available for download.\n\nOutput from the above example:\n\n.. code-block:: text\n\n    Models at: https://civitai.com/models/122822?modelVersionId=133832\n    ==================================================================\n\n    CCXL (Model): https://civitai.com/api/download/models/133832?format=SafeTensor\u0026size=full\u0026fp=fp16\n\n\nUpscaling\n=========\n\ndgenerate implements four different methods of upscaling images, animated images, and video:\nthe Stable Diffusion based x2 and x4 upscalers from the `diffusers <https://github.com/huggingface/diffusers>`_ library,\nthe ``upscaler`` image processor, which is compatible with torch models implemented in the `spandrel <spandrel_1_>`_ library,\nand the ``upscaler-ncnn`` image processor, which implements upscaling with generic NCNN upscaling models\nusing the `ncnn <ncnn_1_>`_ library.\n\nThe `spandrel <spandrel_1_>`_ library supports the use of most torch models on: https://openmodeldb.info/\n\nThe `ncnn <ncnn_1_>`_ library supports models compatible with `upscayl <https://github.com/upscayl/upscayl>`_ as well as `chaiNNer <chaiNNer_1_>`_.\n\nONNX upscaler models can be converted to NCNN format for use with the ``upscaler-ncnn`` image processor.\n\n\nUpscaling with Diffusion Upscaler Models\n----------------------------------------\n\nStable Diffusion image upscaling models can be used via the model types:\n\n    * ``--model-type torch-upscaler-x2``\n    * ``--model-type torch-upscaler-x4``\n\nThe image used in the example below is this `low resolution cat <https://raw.githubusercontent.com/Teriks/dgenerate/v4.5.1/examples/media/low_res_cat.png>`_\n\n
.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # The image produced with this model will be\n    # two times the --output-size dimension, i.e. 512x512 in this case\n    # The image is being resized to 256x256, and then upscaled by 2x\n\n    dgenerate stabilityai/sd-x2-latent-upscaler --variant fp16 --dtype float16 \\\n    --model-type torch-upscaler-x2 \\\n    --prompts \"a picture of a white cat\" \\\n    --image-seeds low_res_cat.png \\\n    --output-size 256\n\n\n    # The image produced with this model will be\n    # four times the --output-size dimension, i.e. 1024x1024 in this case\n    # The image is being resized to 256x256, and then upscaled by 4x\n\n    dgenerate stabilityai/stable-diffusion-x4-upscaler --variant fp16 --dtype float16 \\\n    --model-type torch-upscaler-x4 \\\n    --prompts \"a picture of a white cat\" \\\n    --image-seeds low_res_cat.png \\\n    --output-size 256 \\\n    --upscaler-noise-levels 20\n\n\nUpscaling with chaiNNer Compatible Torch Upscaler Models\n--------------------------------------------------------\n\n`chaiNNer <chaiNNer_1_>`_ compatible torch upscaler models from https://openmodeldb.info/\nand elsewhere can be utilized for tiled upscaling using dgenerate's ``upscaler`` image processor and the\n``--post-processors`` option.  The ``upscaler`` image processor can also be used for processing\ninput images via the other options mentioned in `Image Processors`_ such as ``--seed-image-processors``.\n\nThe ``upscaler`` image processor can make use of URLs or files on disk.\n\nIn this example we reference a link to the SwinIR x4 upscaler from the creator's GitHub release.\n\nThis uses the upscaler to upscale the output image by 4x, producing an image that is 4096x4096.\n\nThe ``upscaler`` image processor respects the ``--device`` option of dgenerate, and is CUDA accelerated by default.\n\n.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    dgenerate stabilityai/stable-diffusion-xl-base-1.0 --model-type torch-sdxl \\\n    --variant fp16 --dtype float16 \\\n    --sdxl-refiner stabilityai/stable-diffusion-xl-refiner-1.0 \\\n    --sdxl-high-noise-fractions 0.8 \\\n    --inference-steps 40 \\\n    --guidance-scales 8 \\\n    --output-size 1024 \\\n    --prompts \"Photo of a horse standing near the open door of a red barn, high resolution; artwork\" \\\n    --post-processors \"upscaler;model=https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth\"\n\n\nIn addition to this, the ``\\image_process`` config directive or ``--sub-command image-process`` can be used to upscale\nany file that you want, including animated images and videos. It is worth noting that the sub-command and directive\nwill work with any named image processor implemented by dgenerate.\n\n
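A minimal config sketch of the directive form, assuming ``\\image_process`` accepts the\nsame arguments as the ``image-process`` sub-command demonstrated below:\n\n.. code-block:: jinja\n\n    #! /usr/bin/env dgenerate --file\n\n    # upscale a single image from within a config, reusing the SwinIR\n    # release link from the example above\n\n    \\image_process my-file.png\n    --output output/my-file-upscaled.png\n    --processors \"upscaler;model=https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth\"\n\n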
.. code-block:: bash\n\n    #!/usr/bin/env bash\n\n    # print the help output of the sub command \"image-process\"\n    # the image-process sub-command can process multiple files and do\n    # several other things, it is worth reading :)\n\n    dgenerate --sub-command image-process --help\n\n    # any directory mentioned in the output spec is created automatically\n\n    dgenerate --sub-command image-process my-file.png \\\n    --output output/my-file-upscaled.png \\\n    --processors \"upscaler;model=https://github.com/JingyunLiang/SwinIR/releases/download/v0.0/001_classicalSR_DIV2K_s48w8_SwinIR-M_x4.pth\"\n\n\nFor more information see: ``dgenerate --image-processor-help upscaler``\n\nControl over tiling parameters and other specifics is discussed in the image processor\nhelp documentation printed by the above command.\n\n\nUpscaling with NCNN Upscaler Models\n-----------------------------------\n\nThe ``upscaler-ncnn`` image processor will be available if you have manually installed dgenerate\nwith the ``[ncnn]`` extra, or if you are using dgenerate from the packaged Windows installer or portable\nWindows install zip from the releases page.\n\nNCNN can use Vulkan for hardware accelerated inference and is also heavily optimized\nfor CPU use if needed.\n\nWhen using the ``upscaler-ncnn`` processor, you must specify both the ``model`` and ``param`` arguments;\nthese refer to the ``model.bin`` and ``model.param`` files associated with the model.\n\nThese arguments may be a path to a file on disk or a direct link to a downloadable model in raw form.\n\nThis upscaler utilizes the same tiling algorithm as the ``upscaler`` image processor\nand features the same ``tile`` and ``overlap`` arguments, albeit with slightly different\ndefaults and constraints.  The ``tile`` argument may not exceed 400 pixels and defaults\nto the max value of 400. Tiling can be disabled for input images under 400 pixels by\npassing ``tile=0``.\n\nBy default the ``upscaler-ncnn`` processor does not run on the GPU; you must\nenable this with the ``use-gpu`` argument.\n\nWhen using this processor as a pre-processor or post-processor for diffusion,\nGPU memory will be fenced: any cached models related to diffusion will be\nevacuated from the GPU entirely before this processor runs if they exist on the same GPU\nas the processor. This is to prevent catastrophic interaction between the Vulkan\nand Torch CUDA allocators.\n\nOnce a Vulkan allocator exists on a specific GPU it cannot be destroyed except\nvia the process exiting, due to issues with the ncnn python binding. 
If you\ncreate this processor on a GPU you intend to perform diffusion on, you are\ngoing to run into memory errors after the first image generation, and from\nthen on until the process exits.\n\nWhen the process exits it is very likely to exit with a non-zero return\ncode after using this processor even if the upscale operations were successful;\nthis is due to the ncnn python binding creating a segfault at exit.\nIf you are using dgenerate interactively in shell mode or from the Console UI,\nthis will occur without consequence when the interpreter process exits.\n\nNote that if any other process runs diffusion / inference via torch on\nthe same GPU as this image processor while ncnn is performing inference,\nyou will likely encounter a segfault in either of the processes and\na very hard crash.\n\nYou can safely run this processor in parallel with diffusion, or other torch\nbased image processors with GPU acceleration, by placing it on a separate GPU\nusing the ``gpu-index`` argument.\n\nSince the ncnn upscaler can run on GPUs other than Nvidia GPUs, figuring out what index\nyou need to use is platform specific; for Nvidia users, just use the ``nvidia-smi`` command\nfrom a terminal to get this value.\n\nIf you do not specify a ``gpu-index``, index 0 is used, which is most likely your main GPU.\n\nThe ``--device`` argument to dgenerate and the ``image-process`` sub-command / ``\\image_process`` directive\nis ignored by this image processor.\n\n.. code-block:: bash\n\n    #! /usr/bin/env bash\n\n    # this auto downloads x2 upscaler models from the upscayl repository into\n    # dgenerate's web cache, and then uses them\n\n    MODEL=https://github.com/upscayl/upscayl/raw/main/models/realesr-animevideov3-x2.bin\n    PARAM=https://github.com/upscayl/upscayl/raw/main/models/realesr-animevideov3-x2.param\n\n    dgenerate --sub-command image-process my-file.png \\\n    --output output/my-file-upscaled.png \\\n    --processors \"upscaler-ncnn;model=${MODEL};param=${PARAM};use-gpu=true\"\n\n
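A sketch of the parallel use case described above, assuming a second GPU is present\nat index 1 while diffusion runs on GPU 0:\n\n.. code-block:: bash\n\n    #! /usr/bin/env bash\n\n    # run the NCNN upscaler on the GPU at index 1 so that it does not\n    # share a device with torch based diffusion\n\n    MODEL=https://github.com/upscayl/upscayl/raw/main/models/realesr-animevideov3-x2.bin\n    PARAM=https://github.com/upscayl/upscayl/raw/main/models/realesr-animevideov3-x2.param\n\n    dgenerate --sub-command image-process my-file.png \\\n    --output output/my-file-upscaled.png \\\n    --processors \"upscaler-ncnn;model=${MODEL};param=${PARAM};use-gpu=true;gpu-index=1\"\n\n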
If you are upscaling using the CPU, you can specify a thread count using the ``threads`` argument.\n\nThis argument can be an integer quantity of threads, the keyword ``auto``\n(max logical processors, max threads) or the keyword ``half`` (half your logical processors).\n\n.. code-block:: bash\n\n    #! /usr/bin/env bash\n\n    # this auto downloads x2 upscaler models from the upscayl repository into\n    # dgenerate's web cache, and then uses them on the CPU\n\n    MODEL=https://github.com/upscayl/upscayl/raw/main/models/realesr-animevideov3-x2.bin\n    PARAM=https://github.com/upscayl/upscayl/raw/main/models/realesr-animevideov3-x2.param\n\n    dgenerate --sub-command image-process my-file.png \\\n    --output output/my-file-upscaled.png \\\n    --processors \"upscaler-ncnn;model=${MODEL};param=${PARAM};threads=half\"\n\n\nThe argument ``winograd=true`` can be used to enable the Winograd convolution optimization when running on the CPU;\nsimilarly, the ``sgemm=true`` argument can be used to enable the SGEMM convolution optimization.\n\nIn addition, you can control OpenMP blocktime using the ``blocktime`` argument, which should be\nan integer value between 0 and 400 inclusive, representing milliseconds.\n\nThese arguments can only be used when running on the CPU and will throw an argument error otherwise.\n\nWhen they are not specified, optimal defaults from ncnn for your platform are used.\n\n\nFor more information see: ``dgenerate --image-processor-help upscaler-ncnn``\n\n\nWriting and Running Configs\n===========================\n\nConfig scripts can be read from ``stdin`` using a shell pipe or file redirection, or by\nusing the ``--file`` argument to specify a file to interpret.\n\nConfig scripts are processed with model caching and other optimizations, in order\nto increase speed when many dgenerate invocations with different arguments are desired.\n\nLoading the necessary libraries and bringing models into memory is quite slow, so using dgenerate\nthis way allows for multiple invocations using different arguments, without needing to load the\nmachine learning libraries and models multiple times in a row.\n\nWhen a model is loaded dgenerate caches it in memory with its creation parameters, which include,\namong other things, the pipeline mode (basic, img2img, inpaint), user specified UNets, VAEs, LoRAs,\nTextual Inversions, and ControlNets.\n\nIf another invocation of the model occurs with creation parameters that are identical, it will be\nloaded out of an in-memory cache, which greatly increases the speed of the invocation.\n\nDiffusion Pipelines, user specified UNets, VAEs, Text Encoders, Image Encoders, ControlNet,\nand IP Adapter models are cached individually.\n\nAll user specifiable model objects can be reused by diffusion pipelines in certain\nsituations, and this is taken advantage of by keeping an in-memory cache of these objects.\n\nIn effect, the creation of a diffusion pipeline is memoized, as well as the creation of\nany pipeline subcomponents when you have specified them explicitly with a URI.\n\nA number of things affect cache hits or misses upon a dgenerate invocation; extensive information\nregarding the runtime caching behavior of pipelines and other models can be observed using ``-v/--verbose``\n\nWhen loading multiple different models be aware that they will all be retained in memory for\nthe duration of program execution, unless all models are flushed using the ``\\clear_model_cache`` directive or\nindividually using one of:\n\n    * ``\\clear_pipeline_cache``\n    * ``\\clear_unet_cache``\n    * ``\\clear_vae_cache``\n    * ``\\clear_text_encoder_cache``\n    * ``\\clear_image_encoder_cache``\n    * ``\\clear_controlnet_cache``\n    * ``\\clear_adapter_cache``\n    * ``\\clear_transformer_cache``\n\ndgenerate uses heuristics to clear the in-memory cache automatically when needed, including a size estimation\nof models before they enter system memory; however, by default it will use system memory very aggressively,\nand it is entirely possible to run your system out of memory if you are not careful.\n\n
Basic config syntax\n-------------------\n\nThe basic idea of the dgenerate config syntax is that it is a pseudo Unix shell mixed with Jinja2 templating.\n\nThe config language provides many niceties for batch processing large amounts of images\nand image output in a Unix shell like environment with Jinja2 control constructs.\n\nShell builtins, known as directives, are prefixed with ``\\``, for example: ``\\print``\n\nEnvironment variables will be expanded in config scripts using both Unix and Windows CMD syntax.\n\n.. code-block:: jinja\n\n    #! /usr/bin/env dgenerate --file\n    #! dgenerate 4.5.1\n\n    # these all expand from your system environment\n    # if the variable is not set, they expand to nothing\n\n    \\print $VARIABLE\n    \\print ${VARIABLE}\n    \\print %VARIABLE%\n\nEmpty lines and comments starting with ``#`` will be ignored; comments that occur at the end of lines will also be ignored.\n\nYou can create a multiline continuation using ``\\`` to indicate that a line continues, similar to bash.\n\nUnlike bash, if the next line starts with ``-`` it is considered part of a continuation as well,\neven if ``\\`` was not used on the previous line. This allows you to list out many Posix style shell\noptions starting with ``-`` without having to end every line with ``\\``.\n\nComments can be interspersed with invocation or directive arguments\non their own line with the use of ``\\`` on the last line before\ncomments and whitespace begin. This can be used to add documentation\nabove individual arguments instead of at the tail end of them.\n\nThe following is a config file example that covers the most basic syntax concepts.\n\n.. code-block:: jinja\n\n    #! /usr/bin/env dgenerate --file\n    #! dgenerate 4.5.1\n\n    # If a hash-bang version is provided in the format above\n    # a warning will be produced if the version you are running\n    # is not compatible (SemVer), this can be used anywhere in the\n    # config file, a line number will be mentioned in the warning when the\n    # version check fails\n\n    # Comments in the file will be ignored\n\n    # Each dgenerate invocation in the config begins with the path to a model,\n    # i.e. the first argument when using dgenerate from the command line; the\n    # rest of the options that follow are the options to dgenerate that you\n    # would use on the command line\n
\n    # Guarantee unique file names are generated under the output directory by specifying unique seeds\n\n    stabilityai/stable-diffusion-2-1 --prompts \"an astronaut riding a horse\" --seeds 41509644783027 --output-path output --inference-steps 30 --guidance-scales 10\n    stabilityai/stable-diffusion-2-1 --prompts \"a cowboy riding a horse\" --seeds 78553317097366 --output-path output --inference-steps 30 --guidance-scales 10\n    stabilityai/stable-diffusion-2-1 --prompts \"a martian riding a horse\" --seeds 22797399276707 --output-path output --inference-steps 30 --guidance-scales 10\n\n    # Guarantee that no file name collisions happen by specifying different output paths for each invocation\n\n    stabilityai/stable-diffusion-2-1 --prompts \"an astronaut riding a horse\" --output-path unique_output_1 --inference-steps 30 --guidance-scales 10\n    stabilityai/stable-diffusion-2-1 --prompts \"a cowboy riding a horse\" --output-path unique_output_2 --inference-steps 30 --guidance-scales 10\n\n    # Multiline continuations are possible implicitly for argument\n    # switches, i.e. lines starting with '-'\n\n    stabilityai/stable-diffusion-2-1 --prompts \"a martian riding a horse\"\n    --output-path unique_output_3  # there can be comments at the end of lines\n    --inference-steps 30 \\         # this comment is also ignored\n\n    # There can be comments or newlines within the continuation\n    # but you must provide \\ on the previous line to indicate that\n    # it is going to happen\n\n    --guidance-scales 10\n\n    # The continuation ends (on the next line) when the last line does\n    # not end in \\ or start with -\n\n    # the ability to use tail comments means that escaping of the # is sometimes\n    # necessary when you want it to appear literally, see: examples/config_syntax/tail-comments-config.dgen\n    # for examples.\n\n\n    # Configuration directives provide extra functionality in a config, a directive\n    # invocation always starts with a backslash\n\n    # A clear model cache directive can be used in between invocations if cached models that\n    # are no longer needed in your generation pipeline start causing out of memory issues\n\n    \\clear_model_cache\n\n    # Additionally these other directives exist to clear user loaded models\n    # out of dgenerate's in-memory cache individually\n\n    # Clear specifically diffusion pipelines\n\n    \\clear_pipeline_cache\n\n    # Clear specifically user specified UNet models\n\n    \\clear_unet_cache\n\n    # Clear specifically user specified VAE models\n\n    \\clear_vae_cache\n\n    # Clear specifically user specified Text Encoder models\n\n    \\clear_text_encoder_cache\n\n    # Clear specifically ControlNet models\n\n    \\clear_controlnet_cache\n\n\n    # This model was used before but will have to be fully instantiated from scratch again\n    # after a cache flush which may take some time\n\n    stabilityai/stable-diffusion-2-1 --prompts \"a martian riding a horse\"\n    --output-path unique_output_4\n\n\nBuilt in template variables and functions\n-----------------------------------------\n\nValuable information about the previous invocation of dgenerate is set in the\nenvironment and available to use via Jinja2 templating or in the ``\\setp``\ndirective; some of these template variables include:\n\n* ``{{ last_images }}`` (An iterable of un-quoted filenames which were generated)\n* ``{{ last_animations }}`` (An iterable of un-quoted filenames which were generated)\n\nThere are template variables for prompts, containing the previous prompt values:\n\n* ``{{ last_prompts }}`` (List of prompt objects with the un-quoted attributes 'positive' and 'negative')\n* ``{{ last_sdxl_second_prompts }}``\n* ``{{ last_sdxl_refiner_prompts }}``\n* ``{{ last_sdxl_refiner_second_prompts }}``\n\n
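For example, a config might echo these values after an invocation; a minimal\nsketch (the ``quote`` and ``format_prompt`` helper functions used here are\ncovered in the function list below):\n\n.. code-block:: jinja\n\n    #! /usr/bin/env dgenerate --file\n\n    stabilityai/stable-diffusion-2-1 --prompts \"a fluffy cat\" --output-path output\n\n    # print every file the invocation above wrote, shell quoted\n\n    \\print {{ quote(last_images) }}\n\n    # print the previous prompt, formatted and quoted\n\n    \\print {{ format_prompt(last_prompts) }}\n\n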
Some available custom jinja2 functions/filters are:\n\n* ``{{ first(list_of_items) }}`` (First element in a list)\n* ``{{ last(list_of_items) }}`` (Last element in a list)\n* ``{{ unquote('\"unescape-me\"') }}`` (shell unquote / split, works on strings and lists)\n* ``{{ quote('escape-me') }}`` (shell quote, works on strings and lists)\n* ``{{ format_prompt(prompt_object) }}`` (Format and quote one or more prompt objects with their delimiter, works on single prompts and lists)\n* ``{{ format_size(size_tuple) }}`` (Format a size tuple / iterable, join with \"x\" character)\n* ``{{ align_size('700x700', 8) }}`` (Align a size string or tuple to a specific alignment, return a formatted string by default)\n* ``{{ pow2_size('700x700', 8) }}`` (Round a size string or tuple to the nearest power of 2, return a formatted string by default)\n* ``{{ size_is_aligned('700x700', 8) }}`` (Check if a size string or tuple is aligned to a specific alignment, return ``True`` or ``False``)\n* ``{{ size_is_pow2('700x700') }}`` (Check if a size string or tuple is a power of 2 dimension, return ``True`` or ``False``)\n* ``{{ format_model_type(last_model_type) }}`` (Format a ``ModelType`` enum to a value for ``--model-type``)\n* ``{{ format_dtype(last_dtype) }}`` (Format a ``DataType`` enum to a value for ``--dtype``)\n* ``{{ gen_seeds(n) }}`` (Return a list of random integer seeds in the form of strings)\n* ``{{ cwd() }}`` (Return the current working directory as a string)\n* ``{{ download(url) }}`` (Download from a url to the web cache and return the file path)\n* ``{{ have_feature(feature_name) }}`` (Check for feature and return bool, value examples: ``ncnn``)\n* ``{{ platform() }}`` (Return platform.system())\n\nThe above functions which possess arguments can be used as either a function or a filter, e.g.: ``{{ \"quote_me\" | quote }}``\n\nThe option ``--functions-help`` and the directive ``\\functions_help`` can be used to print\ndocumentation for template functions. When the option or directive is used alone, all built\nin functions will be printed with their signature; specifying function names as arguments\nwill print documentation for those specific functions.\n\nTo receive information about Jinja2 template variables that are set after a dgenerate invocation,\nyou can use the ``\\templates_help`` directive, which is similar to the ``--templates-help`` option\nexcept that it will print out all the template variables' assigned values instead of just their\nnames and types. This is useful for figuring out the values of template variables set after\na dgenerate invocation in a config file for debugging purposes. You can specify one or\nmore template variable names as arguments to ``\\templates_help`` to receive help for only\nthe mentioned variable names.\n\nTemplate variables set with the ``\\set``, ``\\setp``, and ``\\sete`` directives will\nalso be mentioned in this output.\n\n.. code-block:: jinja\n\n    #! /usr/bin/env dgenerate --file\n    #! 
dgenerate 4.5.1\n\n    # Invocation will proceed as normal\n\n    stabilityai/stable-diffusion-2-1 --prompts \"a man walking on the moon without a space suit\"\n\n    # Print all set template variables\n\n    \\templates_help\n\n\nThe ``\\templates_help`` output from the above example is:\n\n.. code-block:: text\n\n    Config template variables are:\n\n        Name: \"glob\"\n            Type: \u003cclass 'module'\u003e\n            Value: \u003cmodule 'glob'\u003e\n        Name: \"injected_args\"\n            Type: collections.abc.Sequence[str]\n            Value: []\n        Name: \"injected_device\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"injected_plugin_modules\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"injected_verbose\"\n            Type: typing.Optional[bool]\n            Value: False\n        Name: \"last_adapter_cache_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_animation_format\"\n            Type: \u003cclass 'str'\u003e\n            Value: 'mp4'\n        Name: \"last_animations\"\n            Type: collections.abc.Iterable[str]\n            Value: \u003cdgenerate.renderloop.RenderLoop.written_animations.\u003clocals\u003e.Iterable object\u003e\n        Name: \"last_auth_token\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_batch_grid_size\"\n            Type: typing.Optional[tuple[int, int]]\n            Value: None\n        Name: \"last_batch_size\"\n            Type: typing.Optional[int]\n            Value: None\n        Name: \"last_cache_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_clip_skips\"\n            Type: typing.Optional[collections.abc.Sequence[int]]\n            Value: []\n        Name: \"last_control_image_processors\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_controlnet_cache_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_controlnet_uris\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_device\"\n            Type: \u003cclass 'str'\u003e\n            Value: 'cuda:1'\n        Name: \"last_dtype\"\n            Type: \u003cenum 'DataType'\u003e\n            Value: \u003cDataType.AUTO: 0\u003e\n        Name: \"last_flux_max_sequence_length\"\n            Type: typing.Optional[int]\n            Value: None\n        Name: \"last_flux_second_prompts\"\n            Type: typing.Optional[collections.abc.Sequence[dgenerate.prompt.Prompt]]\n            Value: []\n        Name: \"last_frame_end\"\n            Type: typing.Optional[int]\n            Value: None\n        Name: \"last_frame_start\"\n            Type: \u003cclass 'int'\u003e\n            Value: 0\n        Name: \"last_guidance_rescales\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_guidance_scales\"\n            Type: collections.abc.Sequence[float]\n            Value: [5]\n        Name: \"last_image_encoder_cache_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_image_encoder_uri\"\n            Type: typing.Optional[str]\n            
Value: None\n        Name: \"last_image_format\"\n            Type: \u003cclass 'str'\u003e\n            Value: 'png'\n        Name: \"last_image_guidance_scales\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_image_processor_cuda_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_image_processor_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_image_seed_strengths\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_image_seeds\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_images\"\n            Type: collections.abc.Iterable[str]\n            Value: \u003cdgenerate.renderloop.RenderLoop.written_images.\u003clocals\u003e.Iterable object\u003e\n        Name: \"last_inference_steps\"\n            Type: collections.abc.Sequence[int]\n            Value: [30]\n        Name: \"last_ip_adapter_uris\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_lora_fuse_scale\"\n            Type: typing.Optional[float]\n            Value: None\n        Name: \"last_lora_uris\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_mask_image_processors\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_model_cpu_offload\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_model_path\"\n            Type: typing.Optional[str]\n            Value: 'stabilityai/stable-diffusion-2-1'\n        Name: \"last_model_sequential_offload\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_model_type\"\n            Type: \u003cenum 'ModelType'\u003e\n            Value: \u003cModelType.TORCH: 0\u003e\n        Name: \"last_no_aspect\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_no_frames\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_offline_mode\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_output_configs\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_output_metadata\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_output_overwrite\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_output_path\"\n            Type: \u003cclass 'str'\u003e\n            Value: 'output'\n        Name: \"last_output_prefix\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_output_size\"\n            Type: typing.Optional[tuple[int, int]]\n            Value: None\n        Name: \"last_pag\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_pag_adaptive_scales\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_pag_scales\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_parsed_image_seeds\"\n            Type: 
typing.Optional[collections.abc.Sequence[dgenerate.mediainput.ImageSeedParseResult]]\n            Value: []\n        Name: \"last_pipeline_cache_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_plugin_module_paths\"\n            Type: collections.abc.Sequence[str]\n            Value: []\n        Name: \"last_post_processors\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_prompt_weighter_uri\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_prompts\"\n            Type: collections.abc.Sequence[dgenerate.prompt.Prompt]\n            Value: ['a man walking on the moon without a space suit']\n        Name: \"last_revision\"\n            Type: \u003cclass 'str'\u003e\n            Value: 'main'\n        Name: \"last_s_cascade_decoder_cpu_offload\"\n            Type: typing.Optional[bool]\n            Value: None\n        Name: \"last_s_cascade_decoder_guidance_scales\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_s_cascade_decoder_inference_steps\"\n            Type: typing.Optional[collections.abc.Sequence[int]]\n            Value: []\n        Name: \"last_s_cascade_decoder_prompts\"\n            Type: typing.Optional[collections.abc.Sequence[dgenerate.prompt.Prompt]]\n            Value: []\n        Name: \"last_s_cascade_decoder_scheduler\"\n            Type: typing.Union[str, collections.abc.Sequence[str], NoneType]\n            Value: None\n        Name: \"last_s_cascade_decoder_sequential_offload\"\n            Type: typing.Optional[bool]\n            Value: None\n        Name: \"last_s_cascade_decoder_uri\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_safety_checker\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_scheduler\"\n            Type: typing.Union[str, collections.abc.Sequence[str], NoneType]\n            Value: None\n        Name: \"last_sd3_max_sequence_length\"\n            Type: typing.Optional[int]\n            Value: None\n        Name: \"last_sd3_second_prompts\"\n            Type: typing.Optional[collections.abc.Sequence[dgenerate.prompt.Prompt]]\n            Value: []\n        Name: \"last_sd3_third_prompts\"\n            Type: typing.Optional[collections.abc.Sequence[dgenerate.prompt.Prompt]]\n            Value: []\n        Name: \"last_sdxl_aesthetic_scores\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_sdxl_crops_coords_top_left\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_high_noise_fractions\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_sdxl_negative_aesthetic_scores\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_sdxl_negative_crops_coords_top_left\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_negative_original_sizes\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_negative_target_sizes\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: 
[]\n        Name: \"last_sdxl_original_sizes\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_refiner_aesthetic_scores\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_sdxl_refiner_clip_skips\"\n            Type: typing.Optional[collections.abc.Sequence[int]]\n            Value: []\n        Name: \"last_sdxl_refiner_cpu_offload\"\n            Type: typing.Optional[bool]\n            Value: None\n        Name: \"last_sdxl_refiner_crops_coords_top_left\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_refiner_edit\"\n            Type: typing.Optional[bool]\n            Value: None\n        Name: \"last_sdxl_refiner_guidance_rescales\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_sdxl_refiner_guidance_scales\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_sdxl_refiner_inference_steps\"\n            Type: typing.Optional[collections.abc.Sequence[int]]\n            Value: []\n        Name: \"last_sdxl_refiner_negative_aesthetic_scores\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_sdxl_refiner_negative_crops_coords_top_left\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_refiner_negative_original_sizes\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_refiner_negative_target_sizes\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_refiner_original_sizes\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_refiner_pag\"\n            Type: typing.Optional[bool]\n            Value: None\n        Name: \"last_sdxl_refiner_pag_adaptive_scales\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_sdxl_refiner_pag_scales\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n            Value: []\n        Name: \"last_sdxl_refiner_prompts\"\n            Type: typing.Optional[collections.abc.Sequence[dgenerate.prompt.Prompt]]\n            Value: []\n        Name: \"last_sdxl_refiner_scheduler\"\n            Type: typing.Union[str, collections.abc.Sequence[str], NoneType]\n            Value: None\n        Name: \"last_sdxl_refiner_second_prompts\"\n            Type: typing.Optional[collections.abc.Sequence[dgenerate.prompt.Prompt]]\n            Value: []\n        Name: \"last_sdxl_refiner_sequential_offload\"\n            Type: typing.Optional[bool]\n            Value: None\n        Name: \"last_sdxl_refiner_target_sizes\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_sdxl_refiner_uri\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_sdxl_second_prompts\"\n            Type: typing.Optional[collections.abc.Sequence[dgenerate.prompt.Prompt]]\n            Value: []\n        Name: \"last_sdxl_t2i_adapter_factors\"\n            Type: typing.Optional[collections.abc.Sequence[float]]\n   
         Value: []\n        Name: \"last_sdxl_target_sizes\"\n            Type: typing.Optional[collections.abc.Sequence[tuple[int, int]]]\n            Value: []\n        Name: \"last_second_text_encoder_uris\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_second_unet_uri\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_seed_image_processors\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_seeds\"\n            Type: collections.abc.Sequence[int]\n            Value: [95374578727442]\n        Name: \"last_seeds_to_images\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_subfolder\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_t2i_adapter_uris\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_text_encoder_cache_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_text_encoder_uris\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_textual_inversion_uris\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_transformer_cache_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_transformer_uri\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_unet_cache_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_unet_uri\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_upscaler_noise_levels\"\n            Type: typing.Optional[collections.abc.Sequence[int]]\n            Value: []\n        Name: \"last_vae_cache_memory_constraints\"\n            Type: typing.Optional[collections.abc.Sequence[str]]\n            Value: []\n        Name: \"last_vae_slicing\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_vae_tiling\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"last_vae_uri\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_variant\"\n            Type: typing.Optional[str]\n            Value: None\n        Name: \"last_verbose\"\n            Type: \u003cclass 'bool'\u003e\n            Value: False\n        Name: \"path\"\n            Type: \u003cclass 'module'\u003e\n            Value: \u003cmodule 'ntpath' (frozen)\u003e\n        Name: \"saved_modules\"\n            Type: dict[str, dict[str, typing.Any]]\n            Value: {}\n\nThe following is output from ``\\functions_help`` showing every implemented template function signature.\n\n.. 
code-block:: text

    Available config template functions:

        abs(args, kwargs)
        align_size(size: str | tuple, align: int, format_size: bool = True) -> str | tuple
        all(args, kwargs)
        any(args, kwargs)
        ascii(args, kwargs)
        bin(args, kwargs)
        bool(args, kwargs)
        bytearray(args, kwargs)
        bytes(args, kwargs)
        callable(args, kwargs)
        chr(args, kwargs)
        complex(args, kwargs)
        cwd() -> str
        dict(args, kwargs)
        divmod(args, kwargs)
        download(url: str, output: str | None = None, overwrite: bool = False, text: bool = False) -> str
        enumerate(args, kwargs)
        filter(args, kwargs)
        first(iterable: collections.abc.Iterable[typing.Any]) -> typing.Any
        float(args, kwargs)
        format(args, kwargs)
        format_dtype(dtype: <enum 'DataType'>) -> str
        format_model_type(model_type: <enum 'ModelType'>) -> str
        format_prompt(prompts: dgenerate.prompt.Prompt | collections.abc.Iterable[dgenerate.prompt.Prompt]) -> str
        format_size(size: collections.abc.Iterable[int]) -> str
        frange(start, stop = None, step = 0.1)
        frozenset(args, kwargs)
        gen_seeds(n: int) -> list[str]
        getattr(args, kwargs)
        hasattr(args, kwargs)
        hash(args, kwargs)
        have_feature(feature_name: str) -> bool
        hex(args, kwargs)
        int(args, kwargs)
        iter(args, kwargs)
        last(iterable: list | collections.abc.Iterable[typing.Any]) -> typing.Any
        len(args, kwargs)
        list(args, kwargs)
        map(args, kwargs)
        max(args, kwargs)
        min(args, kwargs)
        next(args, kwargs)
        object(args, kwargs)
        oct(args, kwargs)
        ord(args, kwargs)
        platform() -> str
        pow(args, kwargs)
        pow2_size(size: str | tuple, format_size: bool = True) -> str | tuple
        quote(strings: str | collections.abc.Iterable[typing.Any]) -> str
        range(args, kwargs)
        repr(args, kwargs)
        reversed(args, kwargs)
        round(args, kwargs)
        set(args, kwargs)
        size_is_aligned(size: str | tuple, align: int) -> bool
        size_is_pow2(size: str | tuple) -> bool
        slice(args, kwargs)
        sorted(args, kwargs)
        str(args, kwargs)
        sum(args, kwargs)
        tuple(args, kwargs)
        type(args, kwargs)
        unquote(strings: str | collections.abc.Iterable[typing.Any], expand: bool = False) -> list
        zip(args, kwargs)


Directives, and applying templating
-----------------------------------

You can see all available config directives with the command
``dgenerate --directives-help``. Providing this option with a name, or multiple
names, such as ``dgenerate --directives-help save_modules use_modules``, prints
the documentation for the specified directives; the backslash may be omitted
from the names. This option is also available as the config directive ``\directives_help``.

Example output:

.. 
code-block:: text\n\n    Available config directives:\n\n        \"\\cd\"\n        \"\\civitai_links\"\n        \"\\clear_controlnet_cache\"\n        \"\\clear_model_cache\"\n        \"\\clear_modules\"\n        \"\\clear_pipeline_cache\"\n        \"\\clear_text_encoder_cache\"\n        \"\\clear_unet_cache\"\n        \"\\clear_vae_cache\"\n        \"\\cp\"\n        \"\\directives_help\"\n        \"\\download\"\n        \"\\echo\"\n        \"\\env\"\n        \"\\exec\"\n        \"\\exit\"\n        \"\\functions_help\"\n        \"\\gen_seeds\"\n        \"\\help\"\n        \"\\image_process\"\n        \"\\image_processor_help\"\n        \"\\import_plugins\"\n        \"\\ls\"\n        \"\\mkdir\"\n        \"\\mv\"\n        \"\\popd\"\n        \"\\print\"\n        \"\\prompt_weighter_help\"\n        \"\\pushd\"\n        \"\\pwd\"\n        \"\\rm\"\n        \"\\rmdir\"\n        \"\\save_modules\"\n        \"\\set\"\n        \"\\sete\"\n        \"\\setp\"\n        \"\\templates_help\"\n        \"\\unset\"\n        \"\\unset_env\"\n        \"\\use_modules\"\n\n\nHere are examples of other available directives such as ``\\set``, ``\\setp``, and\n``\\print`` as well as some basic Jinja2 templating usage. This example also covers\nthe usage and purpose of ``\\save_modules`` for saving and reusing pipeline modules\nsuch as VAEs etc. outside of relying on the caching system.\n\n.. code-block:: jinja\n\n    #! /usr/bin/env dgenerate --file\n    #! dgenerate 4.5.1\n\n    # You can define your own template variables with the \\set directive\n    # the \\set directive does not do any shell args parsing on its value\n    # operand, meaning the quotes will be in the string that is assigned\n    # to the variable my_prompt\n\n    \\set my_prompt \"an astronaut riding a horse; bad quality\"\n\n    # If your variable is long you can use continuation, note that\n    # continuation replaces newlines and surrounding whitespace\n    # with a single space\n\n    \\set my_prompt \"my very very very very very very very \\\n                    very very very very very very very very \\\n                    long long long long long prompt\"\n\n    # You can print to the console with templating using the \\print directive\n    # for debugging purposes\n\n    \\print {{ my_prompt }}\n\n\n    # The \\setp directive can be used to define python literal template variables\n\n    \\setp my_list [1, 2, 3, 4]\n\n    \\print {{ my_list | join(' ') }}\n\n\n    # Literals defined by \\setp can reference other template variables by name.\n    # the following creates a nested list\n\n    \\setp my_list [1, 2, my_list, 4]\n\n    \\print {{ my_list }}\n\n\n    # \\setp can evaluate template functions\n\n    \\setp directory_content glob.glob('*')\n\n    \\setp current_directory cwd()\n\n\n    # the \\gen_seeds directive can be used to store a list of\n    # random seed integers into a template variable.\n    # (they are strings for convenience)\n\n    \\gen_seeds my_seeds 10\n\n    \\print {{ my_seeds | join(' ') }}\n\n\n    # An invocation sets various template variables related to its\n    # execution once it is finished running\n\n    stabilityai/stable-diffusion-2-1 --prompts {{ my_prompt }} --gen-seeds 5\n\n\n    # Print a quoted filename of the last image produced by the last invocation\n    # This could potentially be passed to --image-seeds of the next invocation\n    # If you wanted to run another pass over the last image that was produced\n\n    \\print {{ quote(last(last_images)) }}\n\n    # you can also get the first 
image easily with the function "first"

    \print {{ quote(first(last_images)) }}


    # if you want to append a mask image file name

    \print "{{ last(last_images) }};my-mask.png"


    # Print a list of properly quoted filenames produced by the last
    # invocation separated by spaces if there are multiple, this could
    # also be passed to --image-seeds

    # in the case that you have generated an animated output with frame
    # output enabled, this will contain paths to the frames

    \print {{ quote(last_images) }}


    # For loops are possible

    \print {% for image in last_images %}{{ quote(image) }} {% endfor %}


    # For loops are possible with normal continuation
    # when not using a heredoc template continuation (mentioned below),
    # such as when the loop occurs in the body of a directive or a
    # dgenerate invocation, however this sort of continuation usage will
    # replace newlines and whitespace with a single space.

    # I.e. this template will be: "{% for image in last_images %} {{ quote(image) }} {% endfor %}"

    \print {% for image in last_images %} \
            {{ quote(image) }} \
           {% endfor %}


    # Access to the prompts of the last invocation is available in a
    # parsed representation via "last_prompts", a prompt can be formatted
    # properly for reuse by using the function "format_prompt"

    stabilityai/stable-diffusion-2-1 --prompts {{ format_prompt(last(last_prompts)) }}

    # You can get only the positive or negative part if you want via the "positive"
    # and "negative" properties on a prompt object, these attributes are not
    # quoted so you need to quote them one way or another, preferably using the
    # dgenerate template function "quote" which will shell quote any special
    # characters that the argument parser is not going to understand

    stabilityai/stable-diffusion-2-1 --prompts {{ quote(last(last_prompts).positive) }}

    # "last_prompts" contains all the prompts used in the last invocation as a list
    # the "format_prompt" function can also work on a list

    stabilityai/stable-diffusion-2-1 --prompts "prompt 1" "prompt 2" "prompt 3"

    stabilityai/stable-diffusion-2-1 --prompts {{ format_prompt(last_prompts) }}


    # Execute additional config with full templating.
    # The sequence !END is interpreted as the end of a
    # template continuation; a template continuation is
    # started when a line begins with the character {
    # and is effectively a heredoc, in that all whitespace
    # within is preserved including newlines

    {% for image in last_images %}
        stabilityai/stable-diffusion-2-1 --image-seeds {{ quote(image) }} --prompts {{ my_prompt }}
    {% endfor %} !END


    # Multiple lines can be used with a template continuation
    # the inside of the template will be expanded to raw config
    # and then run, so make sure to use line continuations within
    # where they are necessary as you would do in the top level of
    # a config file.
# The whole of the template continuation is
    # processed by Jinja, from { to !END, so only one !END is
    # ever necessary after the external template
    # when dealing with nested templates

    {% for image in last_images %}
        stabilityai/stable-diffusion-2-1
        --image-seeds {{ quote(image) }}
        --prompts {{ my_prompt }}
    {% endfor %} !END


    # The above are both basically equivalent to this

    stabilityai/stable-diffusion-2-1 --image-seeds {{ quote(last_images) }} --prompts {{ my_prompt }}


    # You can save modules from the main pipeline used in the last invocation
    # for later reuse using the \save_modules directive, the first argument
    # is a variable name and the rest of the arguments are diffusers pipeline
    # module names to save to the variable name, this is an advanced usage
    # and requires some understanding of the diffusers library to utilize correctly

    stabilityai/stable-diffusion-2-1
    --variant fp16
    --dtype float16
    --prompts "an astronaut walking on the moon"
    --safety-checker
    --output-size 512


    \save_modules stage_1_modules feature_extractor safety_checker

    # that saves the feature_extractor and safety_checker module objects
    # from the pipeline above, you can specify as many module names as desired

    # Possible Module Names:

    # unet
    # vae
    # transformer
    # text_encoder
    # text_encoder_2
    # text_encoder_3
    # tokenizer
    # tokenizer_2
    # tokenizer_3
    # safety_checker
    # feature_extractor
    # image_encoder
    # adapter
    # controlnet
    # scheduler


    # To use the saved modules in the next invocation use \use_modules

    \use_modules stage_1_modules

    # now the next invocation will use those modules instead of loading them
    # from the internal in-memory cache, disk, or huggingface

    stabilityai/stable-diffusion-x4-upscaler
    --variant fp16
    --dtype float16
    --model-type torch-upscaler-x4
    --prompts {{ format_prompt(last_prompts) }}
    --image-seeds {{ quote(last_images) }}
    --vae-tiling


    # you should clear out the saved modules if you no longer need them
    # and your config file is going to continue, or if the dgenerate
    # process is going to be kept alive for some reason such as in
    # some library usage scenarios, or perhaps if you are using it
    # like a server that reads from stdin :)

    \clear_modules stage_1_modules


Setting template variables, in depth
------------------------------------

The directives ``\set``, ``\sete``, and ``\setp`` can be used to set the value
of template variables within a configuration.  The directive ``\unset`` can be
used to undefine template variables.
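A minimal sketch of ``\unset``, which takes the name of a previously defined
template variable; the variable name ``scratch`` here is purely illustrative:

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # define a variable and use it

    \set scratch temporary value

    # prints: temporary value

    \print {{ scratch }}

    # after this, "scratch" is no longer a defined template variable

    \unset scratch
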
All three of the assignment directives have unique behavior.

The ``\set`` directive sets a value with templating and environmental variable expansion applied to it,
and nothing else aside from the value being stripped of leading and trailing whitespace. The value that is
assigned to the template variable is essentially the text that you supply, as is, or the text that
the templates or environment variables in the value expand to, not parsed or modified in any way.

This is for assigning literal text values to a template variable.

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    \set my_variable "I am an incomplete string and this is completely fine because I am a raw value

    # prints exactly what is above, including the quote at the beginning

    \print {{ my_variable }}

    # add a quote to the end of the string using templates

    \set my_variable {{ my_variable }}"

    # prints a fully quoted string

    \print {{ my_variable }}

    # indirect expansion is allowed

    \set var_name template_variable
    \env ENV_VAR_NAMED=env_var_named

    \set {{ var_name }} Hello!
    \set $ENV_VAR_NAMED Hi!

    # prints Hello!, Hi!

    \print {{ template_variable }}
    \print {{ env_var_named }}


The ``\sete`` directive can be used to assign the result of shell parsing and expansion to a
template variable. The value provided will be shell parsed into tokens as if it were a line of
dgenerate config. This is useful because you can use the config language's built-in shell globbing
feature to assign template variables.

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # let's pretend the directory "my_files" is full of files

    \sete my_variable --argument my_files/*

    # prints the python list ['--argument', 'my_files/file1', 'my_files/file2', ...]

    \print {{ my_variable }}

    # Templates and environmental variable references
    # are also parsed in the \sete directive, just as they are with \set

    \set directory my_files

    \sete my_variable --argument {{ directory }}/*

    # indirect expansion is allowed

    \set var_name template_variable
    \env ENV_VAR_NAMED=env_var_named

    \sete {{ var_name }} my_files/*
    \sete $ENV_VAR_NAMED my_files/*

    # both print ['my_files/file1', 'my_files/file2', ...]

    \print {{ template_variable }}
    \print {{ env_var_named }}


The ``\setp`` directive can be used to assign the result of evaluating a limited subset of Python
expressions to a template variable.  This can be used to set a template variable to the result
of a mathematical expression; a Python literal value such as a list, dictionary, or set; a
Python comprehension; or a Python ternary expression.  In addition, all template functions
implemented by dgenerate are available for use in the evaluated expressions.


.. code-block:: jinja
    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    \setp my_variable 10*10

    # prints 100

    \print {{ my_variable }}

    # you can reference variables defined in the environment

    \setp my_variable [my_variable, my_variable*2]

    # prints [100, 200]

    \print {{ my_variable }}

    # all forms of python comprehensions are supported
    # such as list, dict, and set comprehensions

    \setp my_variable [i for i in range(0,5)]

    # prints [0, 1, 2, 3, 4]

    \print {{ my_variable }}

    # declare a literal string value

    \setp my_variable "my string value"

    # prints the string without quotes included, the string was parsed

    \print {{ my_variable }}

    # templates and environmental variable references
    # are also expanded in \setp values

    \setp my_variable [my_variable, "{{ my_variable }}"]

    # prints ['my string value', 'my string value']

    \print {{ my_variable }}

    # my_variable is a literal list so it can be
    # looped over with a jinja template continuation

    {% for value in my_variable %}
        \print {{ value }}
    {% endfor %} !END

    # indirect expansion is allowed

    \set var_name template_variable
    \env ENV_VAR_NAMED=env_var_named

    \setp {{ var_name }} "Hello!"
    \setp $ENV_VAR_NAMED [template_variable]

    # prints Hello!, then ['Hello!']

    \print {{ template_variable }}
    \print {{ env_var_named }}


Setting environmental variables, in depth
-----------------------------------------

The directives ``\env`` and ``\unset_env`` can be used to
manipulate multiple environmental variables at once.

The directive ``\env`` can also be used without arguments to print out
the values of all environment variables that exist in your environment
for debugging purposes.

Indirect expansion is allowed just like with ``\set``, ``\sete``, and ``\setp``.


.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    \env MY_ENV_VAR=1 MY_ENV_VAR2=2

    # prints 1 2

    \print $MY_ENV_VAR $MY_ENV_VAR2

    # indirect expansion is allowed

    \set name env_var_name
    \set value Hello!

    \set name_holder {{ name }}

    \env {{ name_holder }}={{ value }}

    # this treats the expansion of {{ name }} as an environmental variable name

    \set output ${{ name }}

    # prints Hello!

    \print {{ output }}

    # unset environmental variables, the names
    # undergo expansion, and are undefined in order

    \env NAME_HOLDER=MY_ENV_VAR2

    \unset_env MY_ENV_VAR $NAME_HOLDER {{ name }} NAME_HOLDER


    # prints every defined environmental variable
    # we have undefined everything that we defined
    # above so the names from this script will not
    # be present

    \env


Globbing and path manipulation
------------------------------

The entirety of Python's builtin ``glob`` and ``os.path`` modules is also accessible during
templating. You can glob directories using functions from the ``glob`` module, or by using
the config language's shell globbing.

.. code-block:: jinja
    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # globbing can be performed via shell expansion or using
    # the glob module inside jinja templates

    # note that shell globbing and home directory expansion
    # does not occur inside quoted strings

    # \echo can be used to show the results of globbing that
    # occurs during shell expansion. \print does not perform shell
    # expansion, nor do \set or \setp; all other directives do, as well
    # as dgenerate invocations

    # shell globs which produce 0 files are considered an error

    \echo ../media/*.png

    \echo ~

    # \sete can be used to set a template variable to the result
    # of one or more shell globs

    \sete myfiles ../media/*.png


    # with Jinja2:


    # The most basic usage is full expansion of every file

    \set myfiles {{ quote(glob.glob('../media/*.png')) }}

    \print {{ myfiles }}

    # If you have a LOT of files, you may want to
    # process them using an iterator like so

    {% for file in glob.iglob('../media/*.png') %}
        \print {{ quote(file) }}
    {% endfor %} !END

    # usage of os.path via path

    \print {{ path.abspath('.') }}

    # Simple inline usage

    stabilityai/stable-diffusion-2-1
    --variant fp16
    --dtype float16
    --prompts "In the style of Picasso"
    --image-seeds {{ quote(glob.glob('../media/*.png')) }}
    --output-path {{ quote(path.join(path.abspath('.'), 'output')) }}

    # equivalent

    stabilityai/stable-diffusion-2-1
    --variant fp16
    --dtype float16
    --prompts "In the style of Picasso"
    --image-seeds ../media/*.png
    --output-path ./output


The \\print and \\echo directives
---------------------------------

The ``\print`` and ``\echo`` directives can both be used to output text to the console.

The difference between the two is that ``\print`` only ever prints
the raw value with templating and environmental variable expansion applied,
similar to the behavior of ``\set``.

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # the text after \print(space) will be printed verbatim

    \print I am a raw value, I have no ability to * glob

    # Print the PATH environmental variable

    \set header Path Elements:

    \print {{ header }} $PATH
    \print {{ header }} ${PATH}
    \print {{ header }} %PATH%

The ``\echo`` directive performs shell expansion into tokens before printing, like ``\sete``.
This can be useful for debugging / displaying the results of a shell expansion.

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # let's pretend "directory" is full of files

    # this prints: directory/file1 directory/file2 ...

    \echo directory/*

    # Templates and environmental variables are expanded

    # this prints: Files: directory/file1 directory/file2 ...

    \set header Files:

    \echo {{ header }} directory/*


The \\image_process directive
-----------------------------

The dgenerate sub-command ``image-process`` has a config directive implementation.


.. code-block:: jinja
    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # print the help message of --sub-command image-process, this does
    # not cause the config to exit

    \image_process --help

    \set myfiles {{ quote(glob.glob('my_images/*.png')) }}

    # this will create the directory "upscaled"
    # the files will be named "upscaled/FILENAME_processed_1.png" "upscaled/FILENAME_processed_2.png" ...

    \image_process {{ myfiles }} \
    --output upscaled/
    --processors upscaler;model=https://github.com/xinntao/Real-ESRGAN/releases/download/v0.2.5.0/realesr-general-x4v3.pth


    # the last_images template variable will be set, last_animations is also usable if
    # animations were written. In the case that you have generated an animated output with frame
    # output enabled, this will contain paths to the frames

    \print {{ quote(last_images) }}

The \\exec directive
--------------------

The ``\exec`` directive can be used to run native system commands and supports bash
pipe and file redirection syntax in a platform-independent manner. All file
redirection operators supported by bash are supported. This can be useful
for running other image processing utilities as subprocesses from within a
config script.


.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # run dgenerate as a subprocess, read a config
    # and send stdout and stderr to a file

    \exec dgenerate < my_config.dgen &> log.txt

    # chaining processes together with pipes is supported
    # this example emulates 'cat' on Windows using cmd

    \exec cmd /c "type my_config.dgen" | dgenerate &> log.txt

    # on a Unix platform you could simply use cat

    \exec cat my_config.dgen | dgenerate &> log.txt


The \\download directive
------------------------

Arbitrary files can be downloaded via the ``\download`` directive.

This directive can be used to download a file and assign its
downloaded path to a template variable.

Files can either be inserted into dgenerate's web cache or
downloaded to a specific directory or absolute path.

This directive is designed with using cached files in mind,
so it will reuse existing files by default when downloading
to an explicit path.

See the directive's help output for more details: ``\download --help``

If you plan to download many large models to the web cache in
this manner you may wish to adjust the global cache expiry time
so that they exist in the cache longer than the default of 12 hours.

You can see how to do this in the section `File Cache Control`_

This directive is primarily intended to download models and other
binary file formats such as images, and will raise an error if it encounters
a text mimetype. This behavior can be overridden with the ``-t/--text`` argument.

Be wary that if you have a long-running loop in your config using
a top level jinja template, which refers to your template variable,
cache expiry may invalidate the file stored in your variable.

You can rectify this by putting the download directive inside of
your processing loop so that the file is simply re-downloaded if
it expires in the cache.

Or you may be better off using the ``download`` template function,
which provides the same functionality in template form.
See: `The download() template function`_
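The following is a minimal sketch of that loop pattern; the URL and file names
are hypothetical, and it assumes the ``upscaler`` processor accepts the downloaded
local path for its ``model`` argument just as it accepts a URL:

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # re-download inside the loop so that cache expiry can never
    # invalidate the path stored in the template variable,
    # the URL and image file names below are illustrative

    {% for i in range(3) %}
        \download model_path https://modelhost.com/somemodel.safetensors

        \image_process image-{{ i }}.png \
        --output upscaled/
        --processors upscaler;model={{ model_path }}
    {% endfor %} !END

The config below demonstrates the basic argument forms of the directive itself.
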
.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # download a model into the web cache,
    # assign its path to the variable "path"

    \download path https://modelhost.com/somemodel.safetensors

    # download to the models folder in the current directory
    # the models folder will be created if it does not exist
    # if somemodel.safetensors already exists it will be reused
    # instead of being downloaded again

    \download path https://modelhost.com/somemodel.safetensors -o models/somemodel.safetensors

    # download into the folder without specifying a name
    # the name will be derived from the URL or content disposition
    # header from the http request, if you are not careful you may
    # end up with a file named in a way you were not expecting.
    # only use this if you know how the service you are downloading
    # from behaves in this regard

    \download path https://modelhost.com/somemodel.safetensors -o models/


    # download a model into the web cache and overwrite any cached model using -x

    \download path https://modelhost.com/somemodel.safetensors -x

    # Download to an explicit path without any cached file reuse
    # using the -x/--overwrite argument. In effect, always freshly
    # download the file

    \download path https://modelhost.com/somemodel.safetensors -o models/somemodel.safetensors -x

    \download path https://modelhost.com/somemodel.safetensors -o models/ -x


The download() template function
--------------------------------

The template function ``download`` is analogous to the ``\download`` directive,
and can be used to download a file with the same behavior and return its
path as a string. This may be easier to use inside of certain jinja flow
control constructs.


.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    \set my_variable {{ download('https://modelhost.com/model.safetensors') }}

    \set my_variable {{ download('https://modelhost.com/model.safetensors', output='model.safetensors') }}

    \set my_variable {{ download('https://modelhost.com/model.safetensors', output='directory/') }}

    # you can also use any template function with \setp (python expression evaluation)

    \setp my_variable download('https://modelhost.com/model.safetensors')


The signature for this template function is: ``download(url: str, output: str | None = None, overwrite: bool = False, text: bool = False) -> str``


The \\exit directive
--------------------

You can exit a config early if need be using the ``\exit`` directive.

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # exit the process with return code 0, which indicates success

    \print "success"
    \exit


An explicit return code can be provided as well.


.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # exit the process with return code 1, which indicates an error

    \print "some error occurred"
    \exit 1


Running configs from the command line
-------------------------------------

To utilize configuration files use the ``--file`` option,
pipe them into the command, or use file redirection:


Using the ``--file`` option:

.. code-block:: bash

    #!/usr/bin/env bash

    dgenerate --file my-config.dgen


Piping or redirection in Bash:
.. code-block:: bash

    #!/usr/bin/env bash

    # Pipe
    cat my-config.dgen | dgenerate

    # Redirection
    dgenerate < my-config.dgen


Redirection in Windows CMD:

.. code-block:: bash

    dgenerate < my-config.dgen


Piping in Windows PowerShell:

.. code-block:: powershell

    Get-Content my-config.dgen | dgenerate


Config argument injection
-------------------------

You can inject arguments into every dgenerate invocation of a batch processing
configuration by simply specifying them. The arguments will be added to the end
of the argument specification of every call.

.. code-block:: bash

    #!/usr/bin/env bash

    # Pipe
    cat my-animations-config.dgen | dgenerate --frame-start 0 --frame-end 10

    # Redirection
    dgenerate --frame-start 0 --frame-end 10 < my-animations-config.dgen


On Windows CMD:

.. code-block:: bash

    dgenerate --frame-start 0 --frame-end 10 < my-animations-config.dgen


On Windows PowerShell:

.. code-block:: powershell

    Get-Content my-animations-config.dgen | dgenerate --frame-start 0 --frame-end 10


If you need the injected arguments within the config for some other purpose,
such as for use with the ``\image_process`` directive which does not
automatically receive injected arguments, use the ``injected_args``
and related ``injected_*`` template variables.

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # all injected args

    \print {{ quote(injected_args) }}

    # just the injected device

    \print {{ '--device '+injected_device if injected_device else '' }}

    # was -v/--verbose injected?

    \print {{ '-v' if injected_verbose else '' }}

    # plugin module paths injected with --plugin-modules

    \print {{ quote(injected_plugin_modules) if injected_plugin_modules else '' }}
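For example, a minimal sketch of forwarding the injected arguments to
``\image_process`` by hand; the file names are hypothetical, and this assumes
the injected arguments are ones the ``image-process`` sub-command itself understands:

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # manually forward injected arguments, since \image_process
    # does not receive them automatically; the input and output
    # paths here are illustrative

    \image_process my-video.mp4 --output frames/ {{ quote(injected_args) if injected_args else '' }}
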
Console UI
==========

.. image:: https://raw.githubusercontent.com/Teriks/dgenerate-readme-embeds/master/ui.gif
   :alt: console ui

You can launch a cross-platform Tkinter GUI for interacting with a
live dgenerate process using ``dgenerate --console`` or via the optionally
installed desktop shortcut on Windows.

This provides a basic REPL for the dgenerate config language utilizing
a ``dgenerate --shell`` subprocess to act as the live interpreter; it
also features full context aware syntax highlighting for the dgenerate
config language.

It can be used to work with dgenerate without encountering the startup
overhead of loading large python modules for every command line invocation.

The GUI console supports command history via the up and down arrow keys, as a
normal terminal would, and optional multiline input for sending multiline commands or
configuration to the shell. It also offers various editing niceties such as GUI file / directory path insertion,
the ability to insert templated command recipes for quickly getting started and getting results,
and a selection menu for inserting Karras schedulers by name.

Also supported is the ability to view the latest image as it is produced by ``dgenerate`` or
``\image_process`` via an image pane or standalone window.

The console UI always starts in single line entry mode (terminal mode). Multiline input mode
is activated via the insert key and indicated by the presence of line numbers; you must deactivate this mode
to submit commands via the enter key, however you can use the run button from the run menu (or ``Ctrl+Space``)
to run code in this mode. You cannot page through command history in this mode, and code will remain in the
console input pane upon running it, making the UI function more like a code editor than a terminal.

The console can be opened with a file loaded in multiline input mode
by using the command: ``dgenerate --console filename.dgen``

``Ctrl+Q`` can be used in the input pane to kill and then restart the background interpreter process.

``Ctrl+F`` (find) and ``Ctrl+R`` (find/replace) are supported for both the input and output panes.

All common text editing features that you would expect to find in a basic text editor are present,
as well as python regex support for find / replace, with group substitution supporting the syntax
``\n`` or ``\{n}`` where ``n`` is the match group number.

Scroll back history in the output window is currently limited to 10000 lines; however, the console
app itself echoes all ``stdout`` and ``stderr`` of the interpreter, so you can save all output to a log
file via file redirection if desired when launching the console from the terminal.

The scrollback limit can be configured by setting the environmental variable ``DGENERATE_CONSOLE_MAX_SCROLLBACK=10000``

Command history is currently limited to 500 commands; multiline commands are also
saved to command history.
The command history file is stored at ``~/.dgenerate_console_history``,
on Windows this equates to ``%USERPROFILE%\.dgenerate_console_history``

The history limit can be configured by setting the environmental variable ``DGENERATE_CONSOLE_MAX_HISTORY=500``

Any UI settings that persist on startup are stored in ``~/.dgenerate_console_settings`` or
on Windows ``%USERPROFILE%\.dgenerate_console_settings``

Writing Plugins
===============

dgenerate has the capability of loading in additional functionality through the use of
the ``--plugin-modules`` option and ``\import_plugins`` config directive.

You simply specify one or more module directories on disk, paths to python files, or references
to modules installed in the python environment, using either the option or the import directive.

dgenerate supports implementing image processors, config directives, config template functions,
prompt weighters, and sub-commands through plugins.
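As a quick orientation before the plugin-type sections below, here is a minimal
sketch of both loading mechanisms; the plugin paths are hypothetical:

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # load plugins for the rest of this config, from a module
    # directory and from a single python file (hypothetical paths)

    \import_plugins my_plugins_dir plugins/my_processor.py

    # the equivalent for a single command line invocation would be
    # passing: --plugin-modules my_plugins_dir plugins/my_processor.py
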
~~~~

Image processor plugins
-----------------------

A code example as well as a usage example for image processor plugins can be found
in the `writing_plugins/image_processor <https://github.com/Teriks/dgenerate/tree/v4.5.1/examples/writing_plugins/image_processor>`_
folder of the examples directory.

The source code for the built in `canny <https://github.com/Teriks/dgenerate/blob/v4.5.1/dgenerate/imageprocessors/canny.py>`_ processor,
the `openpose <https://github.com/Teriks/dgenerate/blob/v4.5.1/dgenerate/imageprocessors/openpose.py>`_ processor, and the simple
`pillow image operations <https://github.com/Teriks/dgenerate/blob/v4.5.1/dgenerate/imageprocessors/imageops.py>`_ processors can also
be of reference as they are written as internal image processor plugins.

~~~~


Config directive and template function plugins
----------------------------------------------

An example for writing config directives can be found in the `writing_plugins/config_directive <https://github.com/Teriks/dgenerate/tree/v4.5.1/examples/writing_plugins/config_directive>`_ example folder.

Config template functions can also be implemented by plugins, see: `writing_plugins/template_function <https://github.com/Teriks/dgenerate/tree/v4.5.1/examples/writing_plugins/template_function>`_

Currently the only internal directive that is implemented as a plugin is the ``\image_process`` directive, whose source file
`can be located here <https://github.com/Teriks/dgenerate/blob/v4.5.1/dgenerate/batchprocess/image_process_directive.py>`_.

The source file for the ``\image_process`` directive is terse, as most of it is implemented as reusable code.

The behavior of ``\image_process``, which is also used for ``--sub-command image-process``, is
`implemented here <https://github.com/Teriks/dgenerate/blob/v4.5.1/dgenerate/image_process>`_.

~~~~


Sub-command plugins
-------------------

Reference for writing sub-commands can be found in the `image-process <https://github.com/Teriks/dgenerate/blob/v4.5.1/dgenerate/subcommands/image_process.py>`_
sub-command implementation, and a plugin skeleton file for sub-commands can be found in the
`writing_plugins/sub_command <https://github.com/Teriks/dgenerate/tree/v4.5.1/examples/writing_plugins/sub_command>`_ example folder.

~~~~


Prompt weighter plugins
-----------------------

Reference for writing prompt weighters can be found in the `CompelPromptWeighter <https://github.com/Teriks/dgenerate/blob/v4.5.1/dgenerate/promptweighters/compelpromptweighter.py>`_
and `SdEmbedPromptWeighter <https://github.com/Teriks/dgenerate/blob/v4.5.1/dgenerate/promptweighters/sdembedpromptweighter.py>`_ internal prompt weighter implementations.

A plugin skeleton file for prompt weighters can be found in the
`writing_plugins/prompt_weighter <https://github.com/Teriks/dgenerate/tree/v4.5.1/examples/writing_plugins/prompt_weighter>`_
example folder.

~~~~


File Cache Control
==================

dgenerate will cache downloaded non-Hugging Face models, downloaded ``--image-seeds`` files,
files downloaded by the ``\download`` directive and the ``download`` template function, and downloaded
files used by image processors in the directory ``~/.cache/dgenerate/web``

On Windows this equates to: ``%USERPROFILE%\.cache\dgenerate\web``

You can control where these files are cached with the environmental variable ``DGENERATE_WEB_CACHE``.

Files are cleared from the web cache automatically after an expiry time upon running dgenerate or
when downloading additional files; the default expiry time is 12 hours.

This can be controlled with the environmental variable ``DGENERATE_WEB_CACHE_EXPIRY_DELTA``.

The value of ``DGENERATE_WEB_CACHE_EXPIRY_DELTA`` consists of the named arguments of Python's
`datetime.timedelta <https://docs.python.org/3/library/datetime.html#timedelta-objects>`_ class
separated by semicolons.

For example: ``DGENERATE_WEB_CACHE_EXPIRY_DELTA="days=5;hours=6"``

Specifying ``"forever"`` or an empty string will disable cache expiration for every downloaded file.

Files downloaded from Hugging Face by the diffusers/huggingface_hub library will be cached under
``~/.cache/huggingface/``, on Windows this equates to ``%USERPROFILE%\.cache\huggingface\``.

This is controlled by the environmental variable ``HF_HOME``.

In order to specify that all large model files be stored in another location,
for example on another disk, simply set ``HF_HOME`` to a new path in your environment.

You can read more about environmental variables that affect huggingface libraries on this
`huggingface documentation page <https://huggingface.co/docs/huggingface_hub/package_reference/environment_variables>`_.
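As a quick way to verify these variables from within a config, the argument-less
form of the ``\env`` directive (covered earlier) prints every environment variable
currently defined; a minimal sketch:

.. code-block:: jinja

    #! /usr/bin/env dgenerate --file
    #! dgenerate 4.5.1

    # print all defined environment variables, which will include
    # DGENERATE_WEB_CACHE, DGENERATE_WEB_CACHE_EXPIRY_DELTA, and
    # HF_HOME if they are set in your environment

    \env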