{"id":14360304,"url":"https://github.com/mrhan1993/Fooocus-API","last_synced_at":"2025-08-22T01:31:04.253Z","repository":{"id":195968547,"uuid":"693638247","full_name":"mrhan1993/Fooocus-API","owner":"mrhan1993","description":"FastAPI powered API for Fooocus","archived":false,"fork":false,"pushed_at":"2024-05-17T16:45:28.000Z","size":23089,"stargazers_count":448,"open_issues_count":30,"forks_count":115,"subscribers_count":11,"default_branch":"main","last_synced_at":"2024-05-19T00:05:30.487Z","etag":null,"topics":["comfyui","fooocus","sdxl","stable-diffusion"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/mrhan1993.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-09-19T12:24:53.000Z","updated_at":"2024-05-20T03:40:00.360Z","dependencies_parsed_at":"2023-10-11T04:34:51.821Z","dependency_job_id":"86924736-19e3-4ee7-a07c-830eea234b4d","html_url":"https://github.com/mrhan1993/Fooocus-API","commit_stats":null,"previous_names":["konieshadow/fooocus-api","mrhan1993/fooocus-api"],"tags_count":56,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mrhan1993%2FFooocus-API","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mrhan1993%2FFooocus-API/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mrhan1993%2FFooocus-API/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/mrhan1993%2FFooocus-API/manifests","owner_url":"https:/
/repos.ecosyste.ms/api/v1/hosts/GitHub/owners/mrhan1993","download_url":"https://codeload.github.com/mrhan1993/Fooocus-API/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":217188898,"owners_count":16138998,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["comfyui","fooocus","sdxl","stable-diffusion"],"created_at":"2024-08-27T15:01:11.142Z","updated_at":"2024-08-27T15:01:28.692Z","avatar_url":"https://github.com/mrhan1993.png","language":"Python","readme":"[![Docker Image CI](https://github.com/konieshadow/Fooocus-API/actions/workflows/docker-image.yml/badge.svg?branch=main)](https://github.com/konieshadow/Fooocus-API/actions/workflows/docker-image.yml)\n\n[ English | [中文](/README_zh.md) ]\n\n- [Introduction](#introduction)\n  - [Fooocus](#fooocus)\n  - [Fooocus-API](#fooocus-api)\n- [Get-Start](#get-start)\n  - [Run with Replicate](#run-with-replicate)\n  - [Self-hosted](#self-hosted)\n    - [conda](#conda)\n    - [venv](#venv)\n    - [predownload and install](#predownload-and-install)\n    - [already exist Fooocus](#already-exist-fooocus)\n  - [Start with docker](#start-with-docker)\n- [cmd flags](#cmd-flags)\n- [Change log](#change-log)\n- [Apis](#apis)\n- [License](#license)\n- [Thanks :purple\_heart:](#thanks-purple_heart)\n\n\u003e Note:\n\u003e\n\u003e Although I have tested it, I still suggest you test it again before updating\n\u003e\n\u003e Fooocus 2.5 includes a significant update, with most dependencies upgraded. 
Therefore, after updating, do not use `--skip-pip` unless you have already performed a manual update.\n\u003e\n\u003e Additionally, `groundingdino-py` may fail to install, especially in Chinese Windows environments. The solution can be found in this [issue](https://github.com/IDEA-Research/GroundingDINO/issues/206).\n\n\n\u003e GenerateMask works the same way as DescribeImage: it is not processed as a task, and the result is returned directly\n\n# Instructions for Using the ImageEnhance Interface\nThe example below shows the main parameters required by ImageEnhance. The V1 interface uses a form-like approach, similar to ImagePrompt, to break the enhance controller into separate fields.\n\n\n```python\n{\n  \"enhance_input_image\": \"\",\n  \"enhance_checkbox\": true,\n  \"enhance_uov_method\": \"Vary (Strong)\",\n  \"enhance_uov_processing_order\": \"Before First Enhancement\",\n  \"enhance_uov_prompt_type\": \"Original Prompts\",\n  \"save_final_enhanced_image_only\": true,\n  \"enhance_ctrlnets\": [\n    {\n      \"enhance_enabled\": false,\n      \"enhance_mask_dino_prompt\": \"face\",\n      \"enhance_prompt\": \"\",\n      \"enhance_negative_prompt\": \"\",\n      \"enhance_mask_model\": \"sam\",\n      \"enhance_mask_cloth_category\": \"full\",\n      \"enhance_mask_sam_model\": \"vit_b\",\n      \"enhance_mask_text_threshold\": 0.25,\n      \"enhance_mask_box_threshold\": 0.3,\n      \"enhance_mask_sam_max_detections\": 0,\n      \"enhance_inpaint_disable_initial_latent\": false,\n      \"enhance_inpaint_engine\": \"v2.6\",\n      \"enhance_inpaint_strength\": 1,\n      \"enhance_inpaint_respective_field\": 0.618,\n      \"enhance_inpaint_erode_or_dilate\": 0,\n      \"enhance_mask_invert\": false\n    }\n  ]\n}\n```\n\n- enhance_input_image: The image to be enhanced. It is required and, for the V2 interface, can be provided as an image URL.\n- enhance_checkbox: A toggle switch that must be set to true if you want to use the enhance image 
feature.\n- save_final_enhanced_image_only: Since image enhancement is a pipeline operation, it can produce multiple result images. This parameter lets you return only the final enhanced image.\n\nThere are three parameters related to UpscaleVary, which are used to perform Upscale or Vary before or after enhancement.\n\n- enhance_uov_method: Similar to the UpscaleOrVary interface; `Disabled` turns it off.\n- enhance_uov_processing_order: Determines whether to process the image before or after enhancement.\n- enhance_uov_prompt_type: I'm not sure of its exact function; you may want to investigate it in the WebUI.\n\nThe `enhance_ctrlnets` element is a list of ImageEnhance controller objects; the list holds at most three elements, and any additional elements are discarded. The parameters correspond roughly to the WebUI, and the notable ones are:\n\n- enhance_enabled: This parameter controls whether the enhance controller is active. If no enhance controllers are enabled, the task will be skipped.\n- enhance_mask_dino_prompt: This parameter is required and indicates the area to be enhanced. If it is empty, the task will be skipped even if the enhance controller is enabled.\n\n\n# Introduction\n\nFastAPI powered API for [Fooocus](https://github.com/lllyasviel/Fooocus).\n\nCurrently loaded Fooocus version: [2.3.0](https://github.com/lllyasviel/Fooocus/blob/main/update_log.md).\n\n## Fooocus\n\nThis section is taken from the [Fooocus](https://github.com/lllyasviel/Fooocus) project.\n\nFooocus is an image generating software (based on [Gradio](https://www.gradio.app/)).\n\nFooocus is a rethinking of Stable Diffusion and Midjourney’s designs:\n\n- Learned from Stable Diffusion, the software is offline, open source, and free.\n\n- Learned from Midjourney, the manual tweaking is not needed, and users only need to focus on the prompts and images.\n\nFooocus has included and automated lots of inner optimizations and quality improvements. 
Users can forget all those difficult technical parameters, and just enjoy the interaction between human and computer to \"explore new mediums of thought and expanding the imaginative powers of the human species\"\n\n## Fooocus-API\n\nYou may have tried using the [Gradio client](https://www.gradio.app/docs/client) to call Fooocus; for me, it was a terrible experience.\n\nFooocus-API uses [FastAPI](https://fastapi.tiangolo.com/) to provide a REST API for Fooocus. Now you can use Fooocus's capabilities from any language you like.\n\nIn addition, we also provide detailed [documentation](/docs/api_doc_en.md) and [sample code](/examples).\n\n# Get-Start\n\n## Run with Replicate\n\nNow you can use Fooocus-API on Replicate; the model is at [konieshadow/fooocus-api](https://replicate.com/konieshadow/fooocus-api).\n\nWith presets:\n\n- [konieshadow/fooocus-api-anime](https://replicate.com/konieshadow/fooocus-api-anime)\n- [konieshadow/fooocus-api-realistic](https://replicate.com/konieshadow/fooocus-api-realistic)\n\nI believe this is the easiest way to generate images with Fooocus's power.\n\n## Self-hosted\n\nYou need Python \u003e= 3.10, or use conda to create a new environment.\n\nThe hardware requirements are the same as for Fooocus. You can find details [here](https://github.com/lllyasviel/Fooocus#minimal-requirement).\n\n### conda\n\nWith conda, you can easily start the app by following these steps:\n\n```shell\nconda env create -f environment.yaml\nconda activate fooocus-api\n```\n\nThen run `python main.py` to start the app; by default, the server listens on `http://127.0.0.1:8888`.\n\n\u003e If you are running the project for the first time, you may have to wait for a while, during which time the program will complete the rest of the installation and download the necessary models. 
You can also do these steps manually, which I'll cover later.\n\n### venv\n\nSimilar to conda: create a virtual environment, then start the app and wait a while\n\n```powershell\n# windows\npython -m venv venv\n.\\venv\\Scripts\\Activate\n```\n\n```shell\n# linux\npython -m venv venv\nsource venv/bin/activate\n```\nThen run `python main.py`.\n\n### predownload and install\n\nIf you want to handle environment problems manually and download the models in advance, refer to the following steps.\n\nAfter creating an environment with conda or venv, you can complete the remaining installation manually:\n\nFirst, install the requirements: `pip install -r requirements.txt`\n\nThen install PyTorch with CUDA: `pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu121`. You can find more info about this [here](https://pytorch.org/get-started/previous-versions/).\n\n\u003e Note that the PyTorch and CUDA versions recommended by Fooocus are used, currently pytorch 2.1.0 + cuda 12.1. 
If you insist on other versions, add `--skip-pip` when you start the app; otherwise the recommended versions will be installed automatically\n\nGo to the `repositories` directory, download the models, and put them into `repositories\\Fooocus\\models`\n\nIf you have Fooocus installed, see [already-exist-fooocus](#already-exist-fooocus)\n\nHere is the list of files needed for startup (it may differ depending on the [startup params](#cmd-flags)):\n\n- checkpoint: path to `repositories\\Fooocus\\models\\checkpoints`\n    + [juggernautXL_version6Rundiffusion.safetensors](https://huggingface.co/lllyasviel/fav_models/resolve/main/fav/juggernautXL_version6Rundiffusion.safetensors)\n\n- vae_approx: path to `repositories\\Fooocus\\models\\vae_approx`\n    + [xlvaeapp.pth](https://huggingface.co/lllyasviel/misc/resolve/main/xlvaeapp.pth)\n    + [vaeapp_sd15.pth](https://huggingface.co/lllyasviel/misc/resolve/main/vaeapp_sd15.pt)\n    + [xl-to-v1_interposer-v3.1.safetensors](https://huggingface.co/lllyasviel/misc/resolve/main/xl-to-v1_interposer-v3.1.safetensors)\n\n- lora: path to `repositories\\Fooocus\\models\\loras`\n    + [sd_xl_offset_example-lora_1.0.safetensors](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_offset_example-lora_1.0.safetensors?download=true)\n\n\u003e I've uploaded the models I'm using, which contain almost all the base models that Fooocus will use! I put them [here](https://www.123pan.com/s/dF5A-SIQsh.html) (extraction code: `D4Mk`)\n\n### already exist Fooocus\n\nIf you already have Fooocus installed and working well, the recommended approach is to reuse its models: simply copy the `config.txt` file from your local Fooocus folder to the Fooocus-API root folder. See [Customization](https://github.com/lllyasviel/Fooocus#customization) for details.\n\nWith this method you will have both Fooocus and Fooocus-API running at the same time. 
They operate independently and do not interfere with each other.\n\n\u003e Do not copy Fooocus into the `repositories` directory\n\n## Start with docker\n\nBefore using Docker with a GPU, you should [install the NVIDIA Container Toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html) first.\n\nRun:\n\n```shell\ndocker run -d --gpus=all \\\n    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \\\n    -e NVIDIA_VISIBLE_DEVICES=all \\\n    -p 8888:8888 konieshadow/fooocus-api\n```\n\nFor a more complex usage:\n\n```shell\nmkdir ~/repositories\nmkdir -p ~/.cache/pip\n\ndocker run -d --gpus=all \\\n    -e NVIDIA_DRIVER_CAPABILITIES=compute,utility \\\n    -e NVIDIA_VISIBLE_DEVICES=all \\\n    -v ~/repositories:/app/repositories \\\n    -v ~/.cache/pip:/root/.cache/pip \\\n    -p 8888:8888 konieshadow/fooocus-api\n```\n\nThis persists the dependent repositories and the pip cache.\n\nYou can add `-e PIP_INDEX_URL={pypi-mirror-url}` to the docker run command to change the pip index URL.\n\n\u003e Since version 0.4.0.0, the full environment is included in the Docker image; map `models` or the project root if needed\n\u003e For example:\n\u003e ```\n\u003e docker run -d --gpus all \\\n\u003e     -v /Fooocus-API:/app \\\n\u003e     -p 8888:8888 konieshadow/fooocus-api\n\u003e```\n\n# cmd flags\n\n- `-h, --help` show this help message and exit\n- `--port PORT` Set the listen port, default: 8888\n- `--host HOST` Set the listen host, default: 127.0.0.1\n- `--base-url BASE_URL` Set the base URL for external access, default: http://host:port\n- `--log-level LOG_LEVEL` Log level for Uvicorn, default: info\n- `--skip-pip` Skip automatic pip install during setup\n- `--preload-pipeline` Preload the pipeline before starting the HTTP server\n- `--queue-size QUEUE_SIZE` Working queue size, default: 100; generation requests exceeding the queue size will return failure\n- `--queue-history QUEUE_HISTORY` Number of finished jobs to keep; tasks exceeding the limit will be deleted, including 
output image files. Default: 0 (no limit)\n- `--webhook-url WEBHOOK_URL` Webhook URL for notifying generation results, default: None\n- `--persistent` Store history in the database\n- `--apikey APIKEY` Set an API key to enable secure API access, default: None\n\nSince v0.3.25, Fooocus CMD flags are supported. You can pass any argument that Fooocus supports.\n\nFor example, to run generation entirely on the GPU in fp16 (needs more vRAM):\n\n```\npython main.py --all-in-fp16 --always-gpu\n```\n\nFor Fooocus CMD flags, see [here](https://github.com/lllyasviel/Fooocus?tab=readme-ov-file#all-cmd-flags).\n\n\n# Change log\n\n[CHANGELOG](./docs/change_logs.md)\n\nOlder change history can be found on the [release page](https://github.com/konieshadow/Fooocus-API/releases)\n\n\n# Apis\n\nYou can find all API details [here](/docs/api_doc_en.md)\n\n# License\n\nThis repository is licensed under the [GNU General Public License v3.0](https://github.com/mrhan1993/Fooocus-API/blob/main/LICENSE)\n\nThe default checkpoint, published by [RunDiffusion](https://huggingface.co/RunDiffusion), is licensed under the [CreativeML Open RAIL-M](https://github.com/mrhan1993/Fooocus-API/blob/main/CreativeMLOpenRAIL-M).\n\nAlternatively, you can find the license [here](https://huggingface.co/spaces/CompVis/stable-diffusion-license)\n\n# Thanks :purple_heart:\n\nThanks for all your contributions and efforts towards improving the Fooocus API. We thank you for being part of our :sparkles: community :sparkles:!\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmrhan1993%2FFooocus-API","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fmrhan1993%2FFooocus-API","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fmrhan1993%2FFooocus-API/lists"}