{"id":15664239,"url":"https://github.com/deep-diver/lora-deployment","last_synced_at":"2025-05-05T23:44:15.462Z","repository":{"id":82125141,"uuid":"601444411","full_name":"deep-diver/LoRA-deployment","owner":"deep-diver","description":"LoRA fine-tuned Stable Diffusion Deployment","archived":false,"fork":false,"pushed_at":"2023-02-15T23:38:33.000Z","size":7870,"stargazers_count":31,"open_issues_count":0,"forks_count":6,"subscribers_count":2,"default_branch":"main","last_synced_at":"2025-03-31T00:51:14.811Z","etag":null,"topics":["generative-ai","huggingface-inference-endpoint","serving","stable-diffusion"],"latest_commit_sha":null,"homepage":"https://deep-diver.github.io/LoRA-deployment/","language":"Jupyter Notebook","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/deep-diver.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-02-14T04:25:22.000Z","updated_at":"2024-02-13T09:57:09.000Z","dependencies_parsed_at":"2023-03-12T14:51:56.029Z","dependency_job_id":null,"html_url":"https://github.com/deep-diver/LoRA-deployment","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deep-diver%2FLoRA-deployment","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deep-diver%2FLoRA-deployment/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deep-diver%2FLoRA-deployment/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/deep-diver%2
FLoRA-deployment/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/deep-diver","download_url":"https://codeload.github.com/deep-diver/LoRA-deployment/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":252596322,"owners_count":21773842,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["generative-ai","huggingface-inference-endpoint","serving","stable-diffusion"],"created_at":"2024-10-03T13:41:46.579Z","updated_at":"2025-05-05T23:44:15.444Z","avatar_url":"https://github.com/deep-diver.png","language":"Jupyter Notebook","readme":"# LoRA-deployment\n\nThis repository demonstrates how to serve multiple [LoRA fine-tuned Stable Diffusion models](https://huggingface.co/blog/lora) from the 🧨 Diffusers library on a Hugging Face Inference Endpoint. 
Since LoRA fine-tuning produces a checkpoint of only a few MB, we can switch between checkpoints for different fine-tuned Stable Diffusion models in a way that is fast, memory efficient, and disk-space efficient.\n\nFor demonstration purposes, I have tested the following Hugging Face Model repositories, each of which has a LoRA fine-tuned checkpoint (`pytorch_lora_weights.bin`):\n- [ethan_ai](https://huggingface.co/taesiri/ethan_ai_lora)\n- [noto-emoji](https://huggingface.co/kuotient/noto-emoji-finetuned-lora)\n- [pokemon](https://huggingface.co/pcuenq/pokemon-lora)\n\n## Notebook\n\n- [Pilot notebook](https://github.com/deep-diver/LoRA-deployment/blob/main/notebooks/pilot.ipynb): shows how to write and test a custom handler for Hugging Face Inference Endpoint in local or Colab environments\n- [Inference notebook](https://github.com/deep-diver/LoRA-deployment/blob/main/notebooks/inference.ipynb): shows how to send inference requests to the custom handler deployed on Hugging Face Inference Endpoint\n- [Multi-workers inference notebook](https://github.com/deep-diver/LoRA-deployment/blob/main/notebooks/multiworker_inference.ipynb): shows how to run simultaneous requests to the custom handler deployed on Hugging Face Inference Endpoint in a Colab environment\n\n## Custom Handler\n\n- [handler.py](https://github.com/deep-diver/LoRA-deployment/blob/main/custom_handler/handler.py): basic handler. This custom handler is proven to work with [this Hugging Face Model repo](https://huggingface.co/chansung/LoRA-deployment)\n- [multiworker_handler.py](https://github.com/deep-diver/LoRA-deployment/blob/main/custom_handler/multiworker_handler.py): advanced handler with a pool of multiple workers (Stable Diffusion pipelines). 
This custom handler is proven to work with [this Hugging Face Model repo](https://huggingface.co/chansung/LoRA-deployment-multiworkers)\n\n## Script\n\n- [inference.py](https://github.com/deep-diver/LoRA-deployment/blob/main/scripts/inference.py): a standalone Python script to send requests to the custom handler deployed on Hugging Face Inference Endpoint\n\n## Reference\n- https://huggingface.co/blog/lora\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeep-diver%2Flora-deployment","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fdeep-diver%2Flora-deployment","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fdeep-diver%2Flora-deployment/lists"}