{"id":14461893,"url":"https://github.com/mix1009/sdwebuiapi","last_synced_at":"2025-10-04T20:16:01.706Z","repository":{"id":65001960,"uuid":"562925871","full_name":"mix1009/sdwebuiapi","owner":"mix1009","description":"Python API client for AUTOMATIC1111/stable-diffusion-webui","archived":false,"fork":false,"pushed_at":"2024-12-14T01:01:27.000Z","size":4970,"stargazers_count":1413,"open_issues_count":70,"forks_count":182,"subscribers_count":21,"default_branch":"main","last_synced_at":"2025-04-10T02:16:21.384Z","etag":null,"topics":["api","automatic1111","python","stable-diffusion-webui"],"latest_commit_sha":null,"homepage":"","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mix1009.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2022-11-07T14:47:01.000Z","updated_at":"2025-04-09T07:08:39.000Z","dependencies_parsed_at":"2023-02-19T04:45:58.955Z","dependency_job_id":"ca67b53c-36c5-4363-963b-675f38dbd0d2","html_url":"https://github.com/mix1009/sdwebuiapi","commit_stats":{"total_commits":45,"total_committers":4,"mean_commits":11.25,"dds":0.0888888888888889,"last_synced_commit":"a1cb4c6d2f39389d6e962f0e6436f4aa74cd752c"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mix1009%2Fsdwebuiapi","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mix1009%2Fsdwebuiapi/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mix1009%2Fsdwebuiapi/releases","manifests_url
":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mix1009%2Fsdwebuiapi/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mix1009","download_url":"https://codeload.github.com/mix1009/sdwebuiapi/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248142906,"owners_count":21054671,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["api","automatic1111","python","stable-diffusion-webui"],"created_at":"2024-09-01T22:01:21.603Z","updated_at":"2025-10-04T20:15:56.659Z","avatar_url":"https://github.com/mix1009.png","language":"Jupyter Notebook","readme":"# sdwebuiapi\nAPI client for AUTOMATIC1111/stable-diffusion-webui\n\nSupports txt2img, img2img, extra-single-image, extra-batch-images API calls.\n\nAPI support have to be enabled from webui. Add --api when running webui.\nIt's explained [here](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/API).\n\nYou can use --api-auth user1:pass1,user2:pass2 option to enable authentication for api access.\n(Since it's basic http authentication the password is transmitted in cleartext)\n\nAPI calls are (almost) direct translation from http://127.0.0.1:7860/docs as of 2022/11/21.\n\n# Install\n\n```\npip install webuiapi\n```\n\n# Usage\n\nwebuiapi_demo.ipynb contains example code with original images. 
Images are compressed as JPEG in this document.\n\n## create API client\n```\nimport webuiapi\n\n# create API client\napi = webuiapi.WebUIApi()\n\n# create API client with custom host, port\n#api = webuiapi.WebUIApi(host='127.0.0.1', port=7860)\n\n# create API client with custom host, port and https\n#api = webuiapi.WebUIApi(host='webui.example.com', port=443, use_https=True)\n\n# create API client with default sampler, steps.\n#api = webuiapi.WebUIApi(sampler='Euler a', steps=20)\n\n# optionally set username, password when --api-auth=username:password is set on webui.\n# username, password are not protected and can be derived easily if the communication channel is not encrypted.\n# you can also pass username, password to the WebUIApi constructor.\napi.set_auth('username', 'password')\n```\n\n## txt2img\n```\nresult1 = api.txt2img(prompt=\"cute squirrel\",\n                    negative_prompt=\"ugly, out of frame\",\n                    seed=1003,\n                    styles=[\"anime\"],\n                    cfg_scale=7,\n#                      sampler_index='DDIM',\n#                      steps=30,\n#                      enable_hr=True,\n#                      hr_scale=2,\n#                      hr_upscaler=webuiapi.HiResUpscaler.Latent,\n#                      hr_second_pass_steps=20,\n#                      hr_resize_x=1536,\n#                      hr_resize_y=1024,\n#                      denoising_strength=0.4,\n\n                    )\n# images contains the returned images (PIL images)\nresult1.images\n\n# image is shorthand for images[0]\nresult1.image\n\n# info contains text info about the api call\nresult1.info\n\n# parameters contains the parameters of the api call\nresult1.parameters\n\nresult1.image\n```\n![txt2img](https://user-images.githubusercontent.com/1288793/200459205-258d75bb-d2b6-4882-ad22-040bfcf95626.jpg)\n\n\n## img2img\n```\nresult2 = api.img2img(images=[result1.image], prompt=\"cute cat\", seed=5555, cfg_scale=6.5, 
denoising_strength=0.6)\nresult2.image\n```\n![img2img](https://user-images.githubusercontent.com/1288793/200459294-ab1127e5-04e5-47ac-82b2-2bbd0648402a.jpg)\n\n## img2img inpainting\n```\nfrom PIL import Image, ImageDraw\n\nmask = Image.new('RGB', result2.image.size, color = 'black')\n# mask = result2.image.copy()\ndraw = ImageDraw.Draw(mask)\ndraw.ellipse((210,150,310,250), fill='white')\ndraw.ellipse((80,120,160,120+80), fill='white')\n\nmask\n```\n![mask](https://user-images.githubusercontent.com/1288793/200459372-7850c6b6-27c5-435a-93e2-8710948d316a.jpg)\n\n```\ninpainting_result = api.img2img(images=[result2.image],\n                                mask_image=mask,\n                                inpainting_fill=1,\n                                prompt=\"cute cat\",\n                                seed=104,\n                                cfg_scale=5.0,\n                                denoising_strength=0.7)\ninpainting_result.image\n```\n![img2img_inpainting](https://user-images.githubusercontent.com/1288793/200459398-9c1004be-1352-4427-bc00-442721a0e5a1.jpg)\n\n## extra-single-image\n```\nresult3 = api.extra_single_image(image=result2.image,\n                                 upscaler_1=webuiapi.Upscaler.ESRGAN_4x,\n                                 upscaling_resize=1.5)\nprint(result3.image.size)\nresult3.image\n```\n(768, 768)\n\n![extra_single_image](https://user-images.githubusercontent.com/1288793/200459455-8579d740-3d8f-47f9-8557-cc177b3e99b7.jpg)\n\n## extra-batch-images\n```\nresult4 = api.extra_batch_images(images=[result1.image, inpainting_result.image],\n                                 upscaler_1=webuiapi.Upscaler.ESRGAN_4x,\n                                 
upscaling_resize=1.5)\nresult4.images[0]\n```\n![extra_batch_images_1](https://user-images.githubusercontent.com/1288793/200459540-b0bd2931-93db-4d03-9cc1-a9f5e5c89745.jpg)\n```\nresult4.images[1]\n```\n![extra_batch_images_2](https://user-images.githubusercontent.com/1288793/200459542-aa8547a0-f6db-436b-bec1-031a93a7b1d4.jpg)\n\n### Async API support\ntxt2img, img2img, extra_single_image, and extra_batch_images support async API calls with the use_async=True parameter. You need the asyncio and aiohttp packages installed.\n```\nresult = await api.txt2img(prompt=\"cute kitten\",\n                    seed=1001,\n                    use_async=True\n                    )\nresult.image\n```\n\n### Scripts support\nScripts from AUTOMATIC1111's Web UI are supported, but there aren't official models that define a script's interface.\n\nTo find out the list of arguments accepted by a particular script, look up the associated Python file in\nAUTOMATIC1111's repo, `scripts/[script_name].py`. Search for its `run(p, **args)` function; the arguments that come\nafter `p` are the accepted arguments.\n\n#### Example for X/Y/Z Plot script:\n```\n(scripts/xyz_grid.py file from AUTOMATIC1111's repo)\n\n    def run(self, p, x_type, x_values, y_type, y_values, z_type, z_values, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, margin_size):\n    ...\n```\nList of accepted arguments:\n* _x_type_: Index of the axis type for the X axis. Indexes start from [0: Nothing]\n* _x_values_: String of comma-separated values for the X axis\n* _y_type_: Index of the axis type for the Y axis. As with the X axis, indexes start from [0: Nothing]\n* _y_values_: String of comma-separated values for the Y axis\n* _z_type_: Index of the axis type for the Z axis. As with the X axis, indexes start from [0: Nothing]\n* _z_values_: String of comma-separated values for the Z axis\n* _draw_legend_: \"True\" or \"False\". 
IMPORTANT: It needs to be a string and not a Boolean value\n* _include_lone_images_: \"True\" or \"False\". IMPORTANT: It needs to be a string and not a Boolean value\n* _include_sub_grids_: \"True\" or \"False\". IMPORTANT: It needs to be a string and not a Boolean value\n* _no_fixed_seeds_: \"True\" or \"False\". IMPORTANT: It needs to be a string and not a Boolean value\n* _margin_size_: int value\n```\n# Available Axis options (Different for txt2img and img2img!)\nXYZPlotAvailableTxt2ImgScripts = [\n    \"Nothing\",\n    \"Seed\",\n    \"Var. seed\",\n    \"Var. strength\",\n    \"Steps\",\n    \"Hires steps\",\n    \"CFG Scale\",\n    \"Prompt S/R\",\n    \"Prompt order\",\n    \"Sampler\",\n    \"Checkpoint name\",\n    \"Sigma Churn\",\n    \"Sigma min\",\n    \"Sigma max\",\n    \"Sigma noise\",\n    \"Eta\",\n    \"Clip skip\",\n    \"Denoising\",\n    \"Hires upscaler\",\n    \"VAE\",\n    \"Styles\",\n]\n\nXYZPlotAvailableImg2ImgScripts = [\n    \"Nothing\",\n    \"Seed\",\n    \"Var. seed\",\n    \"Var. strength\",\n    \"Steps\",\n    \"CFG Scale\",\n    \"Image CFG Scale\",\n    \"Prompt S/R\",\n    \"Prompt order\",\n    \"Sampler\",\n    \"Checkpoint name\",\n    \"Sigma Churn\",\n    \"Sigma min\",\n    \"Sigma max\",\n    \"Sigma noise\",\n    \"Eta\",\n    \"Clip skip\",\n    \"Denoising\",\n    \"Cond. 
Image Mask Weight\",\n    \"VAE\",\n    \"Styles\",\n]\n\n# Example call\nXAxisType = \"Steps\"\nXAxisValues = \"20,30\"\nXAxisValuesDropdown = \"\"\nYAxisType = \"Sampler\"\nYAxisValues = \"Euler a, LMS\"\nYAxisValuesDropdown = \"\"\nZAxisType = \"Nothing\"\nZAxisValues = \"\"\nZAxisValuesDropdown = \"\"\ndrawLegend = \"True\"\nincludeLoneImages = \"False\"\nincludeSubGrids = \"False\"\nnoFixedSeeds = \"False\"\nmarginSize = 0\n\n\n# x_type, x_values, y_type, y_values, z_type, z_values, draw_legend, include_lone_images, include_sub_grids, no_fixed_seeds, margin_size\n\nresult = api.txt2img(\n                    prompt=\"cute girl with short brown hair in black t-shirt in animation style\",\n                    seed=1003,\n                    script_name=\"X/Y/Z Plot\",\n                    script_args=[\n                        XYZPlotAvailableTxt2ImgScripts.index(XAxisType),\n                        XAxisValues,\n                        XAxisValuesDropdown,\n                        XYZPlotAvailableTxt2ImgScripts.index(YAxisType),\n                        YAxisValues,\n                        YAxisValuesDropdown,\n                        XYZPlotAvailableTxt2ImgScripts.index(ZAxisType),\n                        ZAxisValues,\n                        ZAxisValuesDropdown,\n                        drawLegend,\n                        includeLoneImages,\n                        includeSubGrids,\n                        noFixedSeeds,\n                        marginSize,                        ]\n                    )\n\nresult.image\n```\n![txt2img_grid_xyz](https://user-images.githubusercontent.com/1288793/222345625-dc2e4090-6786-4a53-8619-700dc2f12412.jpg)\n\n\n### Configuration APIs\n```\n# return map of current options\noptions = api.get_options()\n\n# change sd model\noptions = {}\noptions['sd_model_checkpoint'] = 'model.ckpt [7460a6fa]'\napi.set_options(options)\n\n# when calling set_options, do not pass all options returned by get_options().\n# it makes webui 
unusable (2022/11/21).\n\n# get available sd models\napi.get_sd_models()\n\n# misc get apis\napi.get_samplers()\napi.get_cmd_flags()\napi.get_hypernetworks()\napi.get_face_restorers()\napi.get_realesrgan_models()\napi.get_prompt_styles()\napi.get_artist_categories() # deprecated?\napi.get_artists() # deprecated?\napi.get_progress()\napi.get_embeddings()\napi.get_scripts()\napi.get_schedulers()\napi.get_memory()\n\n# misc apis\napi.interrupt()\napi.skip()\n```\n\n### Utility methods\n```\n# save current model name\nold_model = api.util_get_current_model()\n\n# get list of available models\nmodels = api.util_get_model_names()\n\n# get list of available samplers\napi.util_get_sampler_names()\n\n# get list of available schedulers\napi.util_get_scheduler_names()\n\n# refresh list of models\napi.refresh_checkpoints()\n\n# set model (use exact name)\napi.util_set_model(models[0])\n\n# set model (find closest match)\napi.util_set_model('robodiffusion')\n\n# wait for job complete\napi.util_wait_for_ready()\n\n```\n\n### LORA and alwayson_scripts example\n\n```\nr = api.txt2img(prompt='photo of a cute girl with green hair \u003clora:Moxin_10:0.6\u003e shuimobysim __juice__',\n                seed=1000,\n                save_images=True,\n                alwayson_scripts={\"Simple wildcards\":[]} # wildcards extension doesn't accept more parameters.\n               )\nr.image\n```\n\n### Extension support - Model-Keyword\n```\n# https://github.com/mix1009/model-keyword\nmki = webuiapi.ModelKeywordInterface(api)\nmki.get_keywords()\n```\nModelKeywordResult(keywords=['nousr robot'], model='robo-diffusion-v1.ckpt', oldhash='41fef4bd', match_source='model-keyword.txt')\n\n\n### Extension support - Instruct-Pix2Pix\n```\n# The Instruct-Pix2Pix extension is deprecated and is now part of webui.\n# You can use normal img2img with image_cfg_scale when an instruct-pix2pix model is loaded.\nr = api.img2img(prompt='sunset', images=[pil_img], cfg_scale=7.5, 
image_cfg_scale=1.5)\nr.image\n```\n\n### Extension support - ControlNet\n```\n# https://github.com/Mikubill/sd-webui-controlnet\n\napi.controlnet_model_list()\n```\n\u003cpre\u003e\n['control_v11e_sd15_ip2p [c4bb465c]',\n 'control_v11e_sd15_shuffle [526bfdae]',\n 'control_v11f1p_sd15_depth [cfd03158]',\n 'control_v11p_sd15_canny [d14c016b]',\n 'control_v11p_sd15_inpaint [ebff9138]',\n 'control_v11p_sd15_lineart [43d4be0d]',\n 'control_v11p_sd15_mlsd [aca30ff0]',\n 'control_v11p_sd15_normalbae [316696f1]',\n 'control_v11p_sd15_openpose [cab727d4]',\n 'control_v11p_sd15_scribble [d4ba51ff]',\n 'control_v11p_sd15_seg [e1f51eb9]',\n 'control_v11p_sd15_softedge [a8575a2a]',\n 'control_v11p_sd15s2_lineart_anime [3825e83e]',\n 'control_v11u_sd15_tile [1f041471]']\n \u003c/pre\u003e\n\n```\napi.controlnet_version()\napi.controlnet_module_list()\n```\n\n```\n# normal txt2img\nr = api.txt2img(prompt=\"photo of a beautiful girl with blonde hair\", height=512, seed=100)\nimg = r.image\nimg\n```\n![cn1](https://user-images.githubusercontent.com/1288793/222315754-43c6dc8c-2a62-4a31-b51a-f68523118e0d.png)\n\n```\n# txt2img with ControlNet\n# input_image parameter is changed to image (change in ControlNet API)\nunit1 = webuiapi.ControlNetUnit(image=img, module='canny', model='control_v11p_sd15_canny [d14c016b]')\n\nr = api.txt2img(prompt=\"photo of a beautiful girl\", controlnet_units=[unit1])\nr.image\n```\n\n![cn2](https://user-images.githubusercontent.com/1288793/222315791-c6c480eb-2987-4044-b673-5f2cb6135f87.png)\n\n\n```\n# img2img with multiple ControlNets\nunit1 = webuiapi.ControlNetUnit(image=img, module='canny', model='control_v11p_sd15_canny [d14c016b]')\nunit2 = webuiapi.ControlNetUnit(image=img, module='depth', model='control_v11f1p_sd15_depth [cfd03158]', weight=0.5)\n\nr2 = api.img2img(prompt=\"girl\",\n            images=[img], \n            width=512,\n            height=512,\n            controlnet_units=[unit1, unit2],\n            sampler_name=\"Euler a\",\n    
        cfg_scale=7,\n           )\nr2.image\n```\n![cn3](https://user-images.githubusercontent.com/1288793/222315816-1155b0c2-570d-4455-a68e-294fc7061b0a.png)\n\n```\nr2.images[1]\n```\n![cn4](https://user-images.githubusercontent.com/1288793/222315836-9a26afec-c407-426b-9a08-b2cef2a32ab1.png)\n\n```\nr2.images[2]\n```\n![cn5](https://user-images.githubusercontent.com/1288793/222315859-e6b6286e-854d-40c1-a516-5a08c827c49a.png)\n\n\n```\nr = api.controlnet_detect(images=[img], module='canny')\nr.image\n```\n\n\n### Extension support - AnimateDiff\n\n```\n# https://github.com/continue-revolution/sd-webui-animatediff\nadiff = webuiapi.AnimateDiff(model='mm_sd15_v3.safetensors',\n                             video_length=24,\n                             closed_loop='R+P',\n                             format=['GIF'])\n\nr = api.txt2img(prompt='cute puppy', animatediff=adiff)\n\n# save GIF file. need save_all=True to save animated GIF.\nr.image.save('puppy.gif', save_all=True)\n\n# Display animated GIF in Jupyter notebook\nfrom IPython.display import HTML\nHTML('\u003cimg src=\"data:image/gif;base64,{0}\"/\u003e'.format(r.json['images'][0]))\n```\n\n### Extension support - RemBG (contributed by webcoderz)\n```\n# https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg\nrembg = webuiapi.RemBGInterface(api)\nr = rembg.rembg(input_image=img, model='u2net', return_mask=False)\nr.image\n```\n\n\n### Extension support - SegmentAnything (contributed by TimNekk)\n```python\n# https://github.com/continue-revolution/sd-webui-segment-anything\n\nsegment = webuiapi.SegmentAnythingInterface(api)\n\n# Perform a segmentation prediction using the SAM model using points\nsam_result = segment.sam_predict(\n    image=img,\n    sam_positive_points=[(0.5, 0.25), (0.75, 0.75)],\n    # add other parameters as needed\n)\n\n# Perform a segmentation prediction using the SAM model using GroundingDINO\nsam_result2 = segment.sam_predict(\n    image=img,\n    dino_enabled=True,\n    
dino_text_prompt=\"A text prompt for GroundingDINO\",\n    # add other parameters as needed\n)\n\n# Example of dilating a mask\ndilation_result = segment.dilate_mask(\n    image=img,\n    mask=sam_result.masks[0],  # using the first mask from the SAM prediction\n    dilate_amount=30\n)\n\n# Example of generating semantic segmentation with category IDs\nsemantic_seg_result = segment.sam_and_semantic_seg_with_cat_id(\n    image=img,\n    category=\"1+2+3\",  # Category IDs separated by '+'\n    # add other parameters as needed\n)\n```\n\n### Extension support - Tagger (contributed by C-BP)\n\n```python\n# https://github.com/Akegarasu/sd-webui-wd14-tagger\n\ntagger = webuiapi.TaggerInterface(api)\nresult = tagger.tagger_interrogate(image)\nprint(result)\n# {\"caption\": {\"additionalProp1\":0.9,\"additionalProp2\": 0.8,\"additionalProp3\": 0.7}}\n```\n### Extension support - ADetailer (contributed by tomj2ee and davidmartinrius)\n#### txt2img with ADetailer\n```\n# https://github.com/Bing-su/adetailer\n\nimport webuiapi\n\napi = webuiapi.WebUIApi()\n\nads = webuiapi.ADetailer(ad_model=\"face_yolov8n.pt\")\n\nresult1 = api.txt2img(prompt=\"cute squirrel\",\n                    negative_prompt=\"ugly, out of frame\",\n                    seed=-1,\n                    styles=[\"anime\"],\n                    cfg_scale=7,\n                    adetailer=[ads],\n                    steps=30,\n                    enable_hr=True,\n                    denoising_strength=0.5\n            )\n\nimg = result1.image\nimg\n\n# OR\n\nfile_path = \"output_image.png\"\nresult1.image.save(file_path)\n```\n\n#### img2img with ADetailer\n\n```\nimport webuiapi\nfrom PIL import Image\n\nimg = Image.open(\"/path/to/your/image.jpg\")\n\nads = webuiapi.ADetailer(ad_model=\"face_yolov8n.pt\")\n\napi = webuiapi.WebUIApi()\n\nresult1 = api.img2img(\n    images=[img], \n    prompt=\"a cute squirrel\", \n    steps=25, \n    seed=-1, \n    cfg_scale=7, 
\n    denoising_strength=0.5, \n    resize_mode=2,\n    width=512,\n    height=512,\n    adetailer=[ads],\n)\n\nfile_path = \"img2img_output_image.png\"\nresult1.image.save(file_path)\n```\n### Support for interrogate with \"deepdanbooru / deepbooru\" (contributed by davidmartinrius)\n\n```\nimport webuiapi\nfrom PIL import Image\n\napi = webuiapi.WebUIApi()\n\nimg = Image.open(\"/path/to/your/image.jpg\")\n\ninterrogate_result = api.interrogate(image=img, model=\"deepdanbooru\")\n# you can also use clip; clip is the default\n#interrogate_result = api.interrogate(image=img, model=\"clip\")\n#interrogate_result = api.interrogate(image=img)\n\nprompt = interrogate_result.info\nprompt\n\n# OR\nprint(prompt)\n```\n\n### Support for ReActor, for face swapping (contributed by davidmartinrius)\n\n```\nimport webuiapi\nfrom PIL import Image\n\nimg = Image.open(\"/path/to/your/image.jpg\")\n\napi = webuiapi.WebUIApi()\n\nyour_desired_face = Image.open(\"/path/to/your/desired/face.jpeg\")\n\nreactor = webuiapi.ReActor(\n    img=your_desired_face,\n    enable=True\n)\n\nresult1 = api.img2img(\n    images=[img], \n    prompt=\"a cute squirrel\", \n    steps=25, \n    seed=-1, \n    cfg_scale=7, \n    denoising_strength=0.5, \n    resize_mode=2,\n    width=512,\n    height=512,\n    reactor=reactor\n)\n\nfile_path = \"face_swapped_image.png\"\nresult1.image.save(file_path)\n```\n\n\n### Support for Self Attention Guidance (contributed by yano)\n\nhttps://github.com/ashen-sensored/sd_webui_SAG\n\n```\nimport webuiapi\nfrom PIL import Image\n\nimg = Image.open(\"/path/to/your/image.jpg\")\n\napi = webuiapi.WebUIApi()\n\nsag = webuiapi.Sag(\n    enable=True,\n    scale=0.75,\n    mask_threshold=1.00\n)\n\nresult1 = api.img2img(\n    images=[img], \n    prompt=\"a cute squirrel\", \n    steps=25, \n    seed=-1, \n    cfg_scale=7, \n    denoising_strength=0.5, \n    resize_mode=2,\n    width=512,\n    
height=512,\n    sag=sag\n)\n\nfile_path = \"sag_output_image.png\"\nresult1.image.save(file_path)\n```\n\n### Prompt generator API by [David Martin Rius](https://github.com/davidmartinrius/):\n\n\nThis is an unofficial implementation that uses the promptgen API.\nBefore installing this extension, check whether you already have an extension called Promptgen. If so, uninstall it first.\nOnce uninstalled, you can install it in two ways:\n\n#### 1. From the user interface\n![image](https://github.com/davidmartinrius/sdwebuiapi/assets/16558194/d879719f-bb9f-44a7-aef7-b893d117bbea)\n\n#### 2. From the command line\n\ncd stable-diffusion-webui/extensions\n\ngit clone -b api-implementation https://github.com/davidmartinrius/stable-diffusion-webui-promptgen.git\n\nOnce installed:\n```\napi = webuiapi.WebUIApi()\n\nresult = api.list_prompt_gen_models()\nprint(\"list of models\")\nprint(result)\n# you will get something like this:\n#['AUTOMATIC/promptgen-lexart', 'AUTOMATIC/promptgen-majinai-safe', 'AUTOMATIC/promptgen-majinai-unsafe']\n\ntext = \"a box\"\n\n# To create a prompt from text:\n# by default model_name is \"AUTOMATIC/promptgen-lexart\"\nresult = api.prompt_gen(text=text)\n\n# Using a different model\nresult = api.prompt_gen(text=text, model_name=\"AUTOMATIC/promptgen-majinai-unsafe\")\n\n# Complete usage\nresult = api.prompt_gen(\n        text=text, \n        model_name=\"AUTOMATIC/promptgen-majinai-unsafe\",\n        batch_count=1,\n        batch_size=10,\n        min_length=20,\n        max_length=150,\n        num_beams=1,\n        temperature=1,\n        repetition_penalty=1,\n        length_preference=1,\n        sampling_mode=\"Top K\",\n        top_k=12,\n        top_p=0.15\n    )\n\n# result is a list of prompts. 
You can iterate the list or just get the first result like this: result[0]\n\n```\n\n### TIPS for using Flux [David Martin Rius](https://github.com/davidmartinrius/):\n\nIn both cases you need cfg_scale=1, sampler_name=\"Euler\", scheduler=\"Simple\", and in txt2img enable_hr=False.\n\n## For txt2img\n```\nimport webuiapi\n\napi = webuiapi.WebUIApi()\n\nresult1 = api.txt2img(prompt=\"cute squirrel\",\n                    negative_prompt=\"ugly, out of frame\",\n                    seed=-1,\n                    styles=[\"anime\"],\n                    cfg_scale=1,\n                    steps=20,\n                    enable_hr=False,\n                    denoising_strength=0.5,\n                    sampler_name=\"Euler\",\n                    scheduler=\"Simple\"\n            )\n\nimg = result1.image\nimg\n\n# OR\n\nfile_path = \"output_image.png\"\nresult1.image.save(file_path)\n\n```\n\n## For img2img\n\n```\nimport webuiapi\nfrom PIL import Image\n\nimg = Image.open(\"/path/to/your/image.jpg\")\n\napi = webuiapi.WebUIApi()\n\nresult1 = api.img2img(\n    images=[img], \n    prompt=\"a cute squirrel\", \n    steps=20, \n    seed=-1, \n    cfg_scale=1, \n    denoising_strength=0.5, \n    resize_mode=2,\n    width=512,\n    height=512,\n    sampler_name=\"Euler\",\n    scheduler=\"Simple\"\n)\n\nfile_path = \"img2img_output_image.png\"\nresult1.image.save(file_path)\n\n```\n\n\n","funding_links":[],"categories":["HarmonyOS"],"sub_categories":["Windows Manager"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmix1009%2Fsdwebuiapi","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmix1009%2Fsdwebuiapi","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmix1009%2Fsdwebuiapi/lists"}