{"id":28630989,"url":"https://github.com/thoughtscript/hugging_face_llm_2025","last_synced_at":"2025-06-12T13:09:33.517Z","repository":{"id":294567567,"uuid":"981401166","full_name":"Thoughtscript/Hugging_Face_LLM_2025","owner":"Thoughtscript","description":"Hugging Face LLM on Fast API exploration","archived":false,"fork":false,"pushed_at":"2025-05-21T02:31:41.000Z","size":14,"stargazers_count":0,"open_issues_count":0,"forks_count":0,"subscribers_count":1,"default_branch":"main","last_synced_at":"2025-05-21T03:33:16.166Z","etag":null,"topics":["docker","fastapi","llm","machine-learning","python","tiny-llm"],"latest_commit_sha":null,"homepage":"","language":"HTML","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Thoughtscript.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2025-05-11T02:47:19.000Z","updated_at":"2025-05-21T02:31:44.000Z","dependencies_parsed_at":"2025-05-21T03:33:18.511Z","dependency_job_id":"b5049420-03c0-469c-8f04-716bf7f75390","html_url":"https://github.com/Thoughtscript/Hugging_Face_LLM_2025","commit_stats":null,"previous_names":["thoughtscript/hugging_face_llm_2025"],"tags_count":0,"template":false,"template_full_name":null,"purl":"pkg:github/Thoughtscript/Hugging_Face_LLM_2025","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Thoughtscript%2FHugging_Face_LLM_2025","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Thoughtscript%2FHugging_Face_LLM_2025/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Thoughtscript%2FHugging_
Face_LLM_2025/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Thoughtscript%2FHugging_Face_LLM_2025/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Thoughtscript","download_url":"https://codeload.github.com/Thoughtscript/Hugging_Face_LLM_2025/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Thoughtscript%2FHugging_Face_LLM_2025/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":259470950,"owners_count":22862999,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["docker","fastapi","llm","machine-learning","python","tiny-llm"],"created_at":"2025-06-12T13:09:31.629Z","updated_at":"2025-06-12T13:09:33.503Z","avatar_url":"https://github.com/Thoughtscript.png","language":"HTML","readme":"# Hugging_Face_LLM_2025\n[![](https://img.shields.io/badge/Python-3.11.11-yellow.svg)](https://www.python.org/downloads/)\n[![](https://img.shields.io/badge/docker-blue.svg)](https://www.docker.com/) \n[![](https://img.shields.io/badge/Hugging-Face-yellow.svg)](https://huggingface.co/arnir0/Tiny-LLM) \n\n\u003e Experimenting with: **Tiny Large Language Models**.\n\n## Setup and Use\n\n```bash\ndocker compose up\n```\n\n1. http://localhost:8000/public/index.html\n2. http://localhost:8000/api/llm?prompt=abcdefghijklmnopqrstuvwxyz\n\n## Warning\n\n1. 
This is a simple example (**NOT** Production-worthy) and basic safeguards (like input field validation, param sanitization, and the like) are mostly omitted here.\n    * There's not a tremendous amount one can do within the context of *this* simple demo to break things/be malicious.\n    * But the design pattern *should* be avoided in Production *as is* (without the addition of typical Production security mechanisms)!\n1. The build can take upwards of `30 minutes` due to large (`many GB`) LLM Models.\n\n## Notes\n\n1. Hugging Face will download and cache `models--arnir0--Tiny-LLM` into `/root/.cache/huggingface/hub`.\n2. Also, **Docker Desktop 4.40** now supports [Docker Model Runner](https://www.docker.com/blog/introducing-docker-model-runner/).\n   * The following commands will launch a Dockerized LLM Model ([somewhat similar](https://hub.docker.com/r/ai/deepseek-r1-distill-llama) to this one):\n     * `docker model pull ai/deepseek-r1-distill-llama`\n     * `docker model run ai/deepseek-r1-distill-llama`\n     * **Docker Model Runner** supports Hugging Face Models.\n\n## Resources and Links\n\n1. https://huggingface.co/arnir0/Tiny-LLM\n2. https://huggingface.co/learn/llm-course/","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fthoughtscript%2Fhugging_face_llm_2025","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fthoughtscript%2Fhugging_face_llm_2025","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fthoughtscript%2Fhugging_face_llm_2025/lists"}