{"id":13364202,"url":"https://github.com/bentoml/BentoML","last_synced_at":"2025-03-12T16:35:14.914Z","repository":{"id":37275970,"uuid":"178976529","full_name":"bentoml/BentoML","owner":"bentoml","description":"The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and much more!","archived":false,"fork":false,"pushed_at":"2024-10-30T04:32:06.000Z","size":93032,"stargazers_count":7109,"open_issues_count":167,"forks_count":790,"subscribers_count":78,"default_branch":"main","last_synced_at":"2024-10-30T07:22:23.005Z","etag":null,"topics":["ai-inference","deep-learning","generative-ai","inference-platform","llm","llm-inference","llm-serving","llmops","machine-learning","ml-engineering","mlops","model-inference-service","model-serving","multimodal","python"],"latest_commit_sha":null,"homepage":"https://bentoml.com","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bentoml.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":"CITATION.cff","codeowners":".github/CODEOWNERS","security":"SECURITY.md","support":null,"governance":"GOVERNANCE.md","roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-04-02T01:39:27.000Z","updated_at":"2024-10-30T04:32:11.000Z","dependencies_parsed_at":"2024-11-06T10:17:51.752Z","dependency_job_id":"1e7aeba1-a0d4-4774-aa72-8fbb9b8fb360","html_url":"https://github.com/bentoml/BentoML","commit_stats":{"total_commits":2382,"total_committers":164,"mean_commits":"14.524390243902438","dds":0.7703610411418975,"last_synced_commit":"8a07c19a893e7fc442817ac2e142ba1d9f2460f9"},"previous_names":[],"tags_count":176,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bentoml%2FBentoML","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bentoml%2FBentoML/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bentoml%2FBentoML/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bentoml%2FBentoML/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bentoml","download_url":"https://codeload.github.com/bentoml/BentoML/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":242961689,"owners_count":20213316,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["ai-inference","deep-learning","generative-ai","inference-platform","llm","llm-inference","llm-serving","llmops","machine-learning","ml-engineering","mlops","model-inference-service","model-serving","multimodal","python"],"created_at":"2024-07-30T00:00:42.608Z","updated_at":"2025-03-12T16:35:14.898Z","avatar_url":"https://github.com/bentoml.png","language":"Python","readme":"\u003cpicture\u003e\n 
  <source media="(prefers-color-scheme: dark)" srcset="https://github.com/bentoml/BentoML/assets/489344/d3e6c95d-d224-49a5-9cff-0789f094e127">
  <source media="(prefers-color-scheme: light)" srcset="https://github.com/bentoml/BentoML/assets/489344/de4da660-6aeb-4e5a-bf76-b7177435444d">
  <img alt="BentoML: Unified Model Serving Framework" src="https://github.com/bentoml/BentoML/assets/489344/de4da660-6aeb-4e5a-bf76-b7177435444d" width="370" style="max-width: 100%;">
</picture>

## Unified Model Serving Framework

🍱 Build model inference APIs and multi-model serving systems with any open-source or custom AI models. 👉 [Join our Slack community!](https://l.bentoml.com/join-slack)

[![License: Apache-2.0](https://img.shields.io/badge/License-Apache%202-green.svg)](https://github.com/bentoml/BentoML?tab=Apache-2.0-1-ov-file)
[![Releases](https://img.shields.io/github/v/release/bentoml/bentoml.svg)](https://github.com/bentoml/bentoml/releases)
[![CI](https://github.com/bentoml/bentoml/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/bentoml/BentoML/actions/workflows/ci.yml?query=branch%3Amain)
[![Twitter](https://badgen.net/badge/icon/@bentomlai/1DA1F2?icon=twitter&label=Follow)](https://twitter.com/bentomlai)
[![Community](https://badgen.net/badge/Join/Community/cyan?icon=slack)](https://l.bentoml.com/join-slack)

## What is BentoML?

BentoML is a Python library for building online serving systems optimized for AI apps and model inference.

- **🍱 Easily build APIs for any AI/ML model.** Turn any model inference script into a REST API server with just a few lines of code and standard Python type hints.
- **🐳 Docker containers made simple.** No more dependency hell! Manage your environments, dependencies, and model versions with a simple config file. BentoML automatically generates Docker images, ensures reproducibility, and simplifies how you deploy to different environments.
- **🧭 Maximize CPU/GPU utilization.** Build high-performance inference APIs leveraging built-in serving optimization features like dynamic batching, model parallelism, multi-stage pipelines, and multi-model inference-graph orchestration.
- **👩‍💻 Fully customizable.** Easily implement your own APIs or task queues, with custom business logic, model inference, and multi-model composition. Supports any ML framework, modality, and inference runtime.
- **🚀 Ready for production.** Develop, run, and debug locally. Seamlessly deploy to production with Docker containers or [BentoCloud](https://www.bentoml.com/).
## Getting started

Install BentoML:

```bash
# Requires Python≥3.9
pip install -U bentoml
```

Define APIs in a `service.py` file:

```python
import bentoml

@bentoml.service(
    image=bentoml.images.PythonImage(python_version="3.11").python_packages("torch", "transformers"),
)
class Summarization:
    def __init__(self) -> None:
        import torch
        from transformers import pipeline

        # Run on GPU when available, otherwise fall back to CPU
        device = "cuda" if torch.cuda.is_available() else "cpu"
        self.pipeline = pipeline('summarization', device=device)

    @bentoml.api(batchable=True)
    def summarize(self, texts: list[str]) -> list[str]:
        results = self.pipeline(texts)
        return [item['summary_text'] for item in results]
```

### 💻 Run locally

Install the PyTorch and Transformers packages in your Python virtual environment:

```bash
pip install torch transformers  # additional dependencies for local run
```

Run the service code locally (serving at http://localhost:3000 by default):

```bash
bentoml serve
```

You should expect to see the following output:

```
[INFO] [cli] Starting production HTTP BentoServer from "service:Summarization" listening on http://localhost:3000 (Press CTRL+C to quit)
[INFO] [entry_service:Summarization:1] Service Summarization initialized
```

Now you can run inference from your browser at http://localhost:3000 or with a Python script:

```python
import bentoml

with bentoml.SyncHTTPClient('http://localhost:3000') as client:
    summarized_text: str = client.summarize([bentoml.__doc__])[0]
    print(f"Result: {summarized_text}")
```

### 🐳 Deploy using Docker

Run `bentoml build` to package the necessary code, models, and dependency configs into a Bento, the standardized deployable artifact in BentoML:

```bash
bentoml build
```

Ensure [Docker](https://docs.docker.com/) is running, then generate a Docker container image for deployment:

```bash
bentoml containerize summarization:latest
```

Run the generated image:

```bash
docker run --rm -p 3000:3000 summarization:latest
```
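The containerized service exposes the same HTTP interface as the local run, so you can reuse the client code above to verify it. The snippet below is a minimal smoke test, assuming the container started by the `docker run` command above is listening on port 3000; the input text is only an illustrative placeholder.

```python
import bentoml

# Minimal smoke test for the containerized service; assumes the container
# from the `docker run` command above is publishing port 3000 on localhost.
with bentoml.SyncHTTPClient('http://localhost:3000') as client:
    result = client.summarize(
        ["BentoML packages code, models, and dependency configs into a single deployable artifact."]
    )
    print(result[0])
```

Because the API is plain HTTP underneath, any HTTP client works as well; `SyncHTTPClient` simply maps the service's API methods onto the server's endpoints.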
### ☁️ Deploy on BentoCloud

[BentoCloud](https://www.bentoml.com) provides compute infrastructure for rapid and reliable GenAI adoption. It speeds up your BentoML development process by leveraging cloud compute resources, and simplifies how you deploy, scale, and operate BentoML in production.

[Sign up for BentoCloud](https://cloud.bentoml.com/signup) for personal access; for enterprise use cases, [contact our team](https://www.bentoml.com/contact).

```bash
# After signup, run the following command to create an API token:
bentoml cloud login

# Deploy from current directory:
bentoml deploy
```

![bentocloud-ui](./docs/source/_static/img/get-started/cloud-deployment/first-bento-on-bentocloud.png)

For detailed explanations, read the [Hello World example](https://docs.bentoml.com/en/latest/get-started/hello-world.html).

## Examples

- LLMs: [Llama 3.2](https://github.com/bentoml/BentoVLLM/tree/main/llama3.2-11b-vision-instruct), [Mistral](https://github.com/bentoml/BentoVLLM/tree/main/ministral-8b-instruct-2410), [DeepSeek Distil](https://github.com/bentoml/BentoVLLM/tree/main/deepseek-r1-distill-llama3.1-8b-tool-calling), and more.
- Image generation: [Stable Diffusion 3 Medium](https://github.com/bentoml/BentoDiffusion/tree/main/sd3-medium), [Stable Video Diffusion](https://github.com/bentoml/BentoDiffusion/tree/main/svd), [Stable Diffusion XL Turbo](https://github.com/bentoml/BentoDiffusion/tree/main/sdxl-turbo), [ControlNet](https://github.com/bentoml/BentoDiffusion/tree/main/controlnet), and [LCM LoRAs](https://github.com/bentoml/BentoDiffusion/tree/main/lcm).
- Embeddings: [SentenceTransformers](https://github.com/bentoml/BentoSentenceTransformers) and [ColPali](https://github.com/bentoml/BentoColPali).
- Audio: [ChatTTS](https://github.com/bentoml/BentoChatTTS), [XTTS](https://github.com/bentoml/BentoXTTS), [WhisperX](https://github.com/bentoml/BentoWhisperX), and [Bark](https://github.com/bentoml/BentoBark).
- Computer vision: [YOLO](https://github.com/bentoml/BentoYolo) and [ResNet](https://github.com/bentoml/BentoResnet).
- Advanced examples: [Function calling](https://github.com/bentoml/BentoFunctionCalling), [LangGraph](https://github.com/bentoml/BentoLangGraph), and [CrewAI](https://github.com/bentoml/BentoCrewAI).

Check out the [full list](https://docs.bentoml.com/en/latest/examples/overview.html) for more sample code and usage.

## Advanced topics

- [Model composition](https://docs.bentoml.com/en/latest/get-started/model-composition.html) (sketched below)
- [Workers and model parallelization](https://docs.bentoml.com/en/latest/build-with-bentoml/parallelize-requests.html)
- [Adaptive batching](https://docs.bentoml.com/en/latest/get-started/adaptive-batching.html)
- [GPU inference](https://docs.bentoml.com/en/latest/build-with-bentoml/gpu-inference.html)
- [Distributed serving systems](https://docs.bentoml.com/en/latest/build-with-bentoml/distributed-services.html)
- [Concurrency and autoscaling](https://docs.bentoml.com/en/latest/scale-with-bentocloud/scaling/autoscaling.html)
- [Model loading and Model Store](https://docs.bentoml.com/en/latest/build-with-bentoml/model-loading-and-management.html)
- [Observability](https://docs.bentoml.com/en/latest/build-with-bentoml/observability/index.html)
- [BentoCloud deployment](https://docs.bentoml.com/en/latest/get-started/cloud-deployment.html)

See [Documentation](https://docs.bentoml.com) for more tutorials and guides.
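To give a flavor of model composition, here is a minimal sketch of one service depending on another via `bentoml.depends()`, following the pattern in the model composition guide linked above; the service and method names are illustrative, not taken from the guide.

```python
import bentoml

@bentoml.service()
class Preprocessor:
    @bentoml.api
    def clean(self, text: str) -> str:
        # Illustrative preprocessing step: collapse extra whitespace
        return " ".join(text.split())

@bentoml.service()
class Pipeline:
    # Declare a dependency so BentoML wires the two services together,
    # both locally and when deployed as a distributed system
    preprocessor = bentoml.depends(Preprocessor)

    @bentoml.api
    def run(self, text: str) -> str:
        # Call the dependent service's API as if it were a local method
        return self.preprocessor.clean(text)
```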
## Community

Get involved and join our [Community Slack 💬](https://l.bentoml.com/join-slack), where thousands of AI/ML engineers help each other, contribute to the project, and talk about building AI products.

To report a bug or suggest a feature, use [GitHub Issues](https://github.com/bentoml/BentoML/issues/new/choose).

### Contributing

There are many ways to contribute to the project:

- Report bugs and give a "thumbs up" to [issues](https://github.com/bentoml/BentoML/issues) that are relevant to you.
- Investigate [issues](https://github.com/bentoml/BentoML/issues) and review other developers' [pull requests](https://github.com/bentoml/BentoML/pulls).
- Contribute code or [documentation](https://docs.bentoml.com/en/latest/index.html) to the project by submitting a GitHub pull request.
- Check out the [Contributing Guide](https://github.com/bentoml/BentoML/blob/main/CONTRIBUTING.md) and [Development Guide](https://github.com/bentoml/BentoML/blob/main/DEVELOPMENT.md) to learn more.
- Share your feedback and discuss roadmap plans in the `#bentoml-contributors` channel [here](https://l.bentoml.com/join-slack).

Thanks to all of our amazing contributors!

<a href="https://github.com/bentoml/BentoML/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=bentoml/BentoML" />
</a>

### Usage tracking and feedback

The BentoML framework collects anonymous usage data that helps our community improve the product. Only BentoML's internal API calls are reported; this excludes any sensitive information such as user code, model data, model names, or stack traces. Here is the [code](https://github.com/bentoml/BentoML/blob/main/src/bentoml/_internal/utils/analytics/usage_stats.py) used for usage tracking. You can opt out of usage tracking with the `--do-not-track` CLI option:

```bash
bentoml [command] --do-not-track
```

Or by setting the environment variable:

```bash
export BENTOML_DO_NOT_TRACK=True
```

### License

[Apache License 2.0](https://github.com/bentoml/BentoML/blob/main/LICENSE)