{"id":17961080,"url":"https://github.com/wolverinn/stable-diffusion-multi-user","last_synced_at":"2025-04-06T15:13:28.618Z","repository":{"id":154018403,"uuid":"629252353","full_name":"wolverinn/stable-diffusion-multi-user","owner":"wolverinn","description":"stable diffusion multi-user django server code with multi-GPU load balancing","archived":false,"fork":false,"pushed_at":"2024-03-14T08:03:29.000Z","size":20666,"stargazers_count":314,"open_issues_count":18,"forks_count":64,"subscribers_count":12,"default_branch":"master","last_synced_at":"2025-03-30T13:08:43.190Z","etag":null,"topics":["aigc","stable-diffusion","stable-diffusion-api"],"latest_commit_sha":null,"homepage":"https://image.stable-ai.tech/","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"gpl-3.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/wolverinn.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null}},"created_at":"2023-04-18T00:14:20.000Z","updated_at":"2025-03-29T13:40:52.000Z","dependencies_parsed_at":"2024-03-14T09:42:36.034Z","dependency_job_id":null,"html_url":"https://github.com/wolverinn/stable-diffusion-multi-user","commit_stats":null,"previous_names":[],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wolverinn%2Fstable-diffusion-multi-user","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wolverinn%2Fstable-diffusion-multi-user/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/wolverinn%2Fstable-diffusion-multi-user/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories
/wolverinn%2Fstable-diffusion-multi-user/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/wolverinn","download_url":"https://codeload.github.com/wolverinn/stable-diffusion-multi-user/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247500469,"owners_count":20948880,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["aigc","stable-diffusion","stable-diffusion-api"],"created_at":"2024-10-29T11:08:14.219Z","updated_at":"2025-04-06T15:13:28.594Z","avatar_url":"https://github.com/wolverinn.png","language":"Python","readme":"# Stable Diffusion Multi-user\n\u003e stable diffusion multi-user server API deployment that supports autoscaling, webui extension API...\n\nhttps://image.stable-ai.tech/\n\n# Contents:\n\n- [[Option-1] Deploy with Django API](https://github.com/wolverinn/stable-diffusion-multi-user#option-1-deploy-with-django-api)\n    - [Project directory structure](https://github.com/wolverinn/stable-diffusion-multi-user#project-directory-structure)\n    - [Deploy the GPU server](https://github.com/wolverinn/stable-diffusion-multi-user#deploy-the-gpu-server)\n    - [Deploy the load-balancing server](https://github.com/wolverinn/stable-diffusion-multi-user#deploy-the-load-balancing-server)\n- [[Option-2] Deploy using Runpod Serverless](https://github.com/wolverinn/stable-diffusion-multi-user#option-2-deploy-using-runpod-serverless)\n- [[Option-3] Deploy on Replicate](https://github.com/wolverinn/stable-diffusion-multi-user#option-3-deploy-on-replicate)\n\n--------\n\n# [Option-1] 
Deploy with Django API\n\n**Features**: \n\n- server code that provides a stable-diffusion HTTP API, including:\n    - CHANGELOG-230904: Support torch2.0, support extension API when calling txt2img\u0026img2img, support all API parameters same as webui\n    - txt2img\n    - img2img\n    - check generating progress\n    - interrupt generating\n    - list available models\n    - change models\n    - ...\n- supports Civitai models, LoRA, etc.\n- supports multi-user queuing\n- supports users changing models independently, without affecting each other\n- provides downstream load-balancing server code that automatically does load-balancing among available GPU servers, and ensures that user requests are sent to the same server within one generation cycle\n- can be used to deploy multiple stable-diffusion models on one GPU card to make full use of the GPU, check [this article](https://mp.weixin.qq.com/s/AktAQ7ek8Tkph3uvSeiOVg) for details\n\nYou can build your own UI, community features, account login\u0026payment, etc. based on these functions!\n\n![load balancing](vx_images/516000908230643.jpg)\n\n## Project directory structure\n\nThe project can be roughly divided into two parts: the Django server code, and the [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) code used to initialize and run models. I'll mainly explain the Django server part.\n\nIn the main project directory:\n\n- `modules/`: stable-diffusion-webui modules\n- `models/`: stable diffusion models\n- `sd_multi/`: the Django project name\n    - `urls.py`: server API path configuration\n- `simple/`: the main Django code\n    - `views.py`: main API processing logic\n    - `lb_views.py`: load-balancing API\n- `requirements.txt`: stable diffusion pip requirements\n- `setup.sh`: run it with options to set up the server environment\n- `gen_http_conf.py`: called in `setup.sh` to set up the Apache configuration\n\n## Deploy the GPU server\n\n1. SSH to the GPU server\n2. 
clone or download the repository\n3. cd to the main project directory (the one that contains `manage.py`)\n4. run `sudo bash setup.sh` with options (check `setup.sh` for the available options; recommended order: `env`, `venv`, `sd_model`, `apache`)\n    - if some downloads are slow, you can always download manually and upload to your server\n    - if you want to change listening ports: change both `/etc/apache2/ports.conf` and `/etc/apache2/sites-available/sd_multi.conf`\n5. restart Apache: `sudo service apache2 restart`\n\n### API definition\n\n- `/`: view the homepage, used to test that Apache is configured successfully\n- `/txt2img_v2/`: txt2img with the same parameters as sd-webui, also supports extension parameters (such as ControlNet)\n- `/img2img_v2/`: img2img with the same parameters as sd-webui, also supports extension parameters (such as ControlNet)\n- previous API version: check `old_django_api.md`\n\n## Deploy the load-balancing server\n\n1. SSH to a CPU server\n2. clone or download the repository\n3. cd to the main project directory (the one that contains `manage.py`)\n4. run `sudo bash setup.sh lb`\n5. run `mv sd_multi/urls.py sd_multi/urls1.py \u0026\u0026 mv sd_multi/urls_lb.py sd_multi/urls.py`\n6. modify the `ip_list` variable in `simple/lb_views.py` with your own server IP+port\n7. restart Apache: `sudo service apache2 restart`\n8. to test it, visit the `ip+port/multi_demo/` URL path\n\n### Test the load-balancing server locally\nIf you don't want to deploy the load-balancing server but still want to test the functions, you can start the load-balancing server on your local computer.\n\n1. clone or download the repository\n2. requirements: python3, django, django-cors-headers, replicate\n3. modify the `ip_list` variable in `simple/lb_views.py` with your own GPU server IP+port\n4. cd to the main project directory (the one that contains `manage.py`)\n5. run `mv sd_multi/urls.py sd_multi/urls1.py \u0026\u0026 mv sd_multi/urls_lb.py sd_multi/urls.py` (rename the URL configs)\n6. 
run `python manage.py runserver`\n7. open the URL shown in the terminal and view the `/multi_demo/` path\n\nFinally, you can call your HTTP API (test it using Postman).\n\n# [Option-2] Deploy using Runpod Serverless\n\nFeatures:\n\n- Autoscaling with a highly customizable scaling strategy\n- Supports sd-webui checkpoints, Loras...\n- Docker image separated from model files, so you can upload and replace models anytime you want\n\nSee [sd-docker-slim](https://github.com/wolverinn/stable-diffusion-multi-user/tree/master/sd-docker-slim) for a deployment guide and a ready-to-use Docker image.\n\n# [Option-3] Deploy on Replicate\nA Replicate demo is deployed [here](https://replicate.com/wolverinn/webui-api)\n\nFeatures:\n\n- Autoscaling\n- latest sd-webui source code, latest torch\u0026cuda version\n- Docker image with torch 2.2\n- Supports sd-webui API with extensions\n- Supports sd-webui checkpoints, Loras...\n\nDeploy steps:\n\n1. create a model on [Replicate](https://replicate.com)\n2. get a Linux GPU machine with 50GB disk space\n3. clone the repository:\n\n```\ngit clone https://github.com/wolverinn/stable-diffusion-multi-user.git\ncd stable-diffusion-multi-user/replicate-cog-slim/\n```\n\n4. modify line-30 in `replicate-cog-slim/cog.yaml` to your own Replicate model\n5. [optional] modify the `predict()` function in `replicate-cog-slim/predict.py` for custom API inputs \u0026 outputs\n6. install cog: https://replicate.com/docs/guides/push-a-model\n7. install docker: https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository\n8. download the checkpoints/Loras/extensions/other models you want to deploy to the corresponding directories under `replicate-cog-slim/`\n9. 
run commands:\n\n```\ncog login\ncog push\n```\n\nThen you can see your model on Replicate and use it via the API or the Replicate website.","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwolverinn%2Fstable-diffusion-multi-user","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fwolverinn%2Fstable-diffusion-multi-user","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fwolverinn%2Fstable-diffusion-multi-user/lists"}