Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/roboflow/inference
A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models.
- Host: GitHub
- URL: https://github.com/roboflow/inference
- Owner: roboflow
- License: other
- Created: 2023-07-31T17:00:40.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-12-06T19:03:33.000Z (6 days ago)
- Last Synced: 2024-12-08T16:56:19.627Z (4 days ago)
- Topics: classification, computer-vision, deployment, docker, hacktoberfest, inference, inference-api, inference-server, instance-segmentation, jetson, machine-learning, object-detection, onnx, python, tensorrt, vit, yolo11, yolov5, yolov7, yolov8
- Language: Python
- Homepage: https://inference.roboflow.com
- Size: 103 MB
- Stars: 1,396
- Watchers: 23
- Forks: 135
- Open Issues: 71
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Citation: CITATION.cff
- Codeowners: CODEOWNERS
Awesome Lists containing this project
- StarryDivineSky - roboflow/inference - foundation models such as YOLO-World. (Other_Machine Vision / Web Services_Other)
- awesome-production-machine-learning - Inference - A fast, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models. With Inference, you can deploy models such as YOLOv5, YOLOv8, CLIP, SAM, and CogVLM on your own hardware using Docker. (Deployment and Serving)
- awesome-osml-for-devs - Inference - A fast, easy-to-use, production-ready inference server for computer vision supporting deployment of many popular model architectures and fine-tuned models. (Libraries, Platforms and Development Platform-specific Resources / Development Platform)
README
[notebooks](https://github.com/roboflow/notebooks) | [supervision](https://github.com/roboflow/supervision) | [autodistill](https://github.com/autodistill/autodistill) | [maestro](https://github.com/roboflow/multimodal-maestro)
[![version](https://badge.fury.io/py/inference.svg)](https://badge.fury.io/py/inference)
[![downloads](https://img.shields.io/pypi/dm/inference)](https://pypistats.org/packages/inference)
![docker pulls](https://img.shields.io/docker/pulls/roboflow/roboflow-inference-server-cpu)
[![license](https://img.shields.io/pypi/l/inference)](https://github.com/roboflow/inference/blob/main/LICENSE.core)
[![huggingface](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/Roboflow/workflows)
[![discord](https://img.shields.io/discord/1159501506232451173)](https://discord.gg/GbfgXGJ8Bk)

## 👋 hello
Roboflow Inference is an open-source platform designed to simplify the deployment of computer vision models. It enables developers to perform object detection, classification, and instance segmentation and utilize foundation models like [CLIP](https://inference.roboflow.com/foundation/clip), [Segment Anything](https://inference.roboflow.com/foundation/sam), and [YOLO-World](https://inference.roboflow.com/foundation/yolo_world) through a Python-native package, a self-hosted inference server, or a fully [managed API](https://docs.roboflow.com/).
Explore our [enterprise options](https://roboflow.com/sales) for advanced features like server deployment, active learning, and commercial licenses for YOLOv5 and YOLOv8.
## 💻 install
The Inference package requires [**Python>=3.8,<=3.11**](https://www.python.org/). Click [here](https://inference.roboflow.com/quickstart/docker/) to learn more about running Inference inside Docker.
```bash
pip install inference
```

👉 additional considerations
- hardware
Enhance model performance in GPU-accelerated environments by installing CUDA-compatible dependencies.
```bash
pip install inference-gpu
```

- models
The `inference` and `inference-gpu` packages install only the minimal shared dependencies. Install model-specific dependencies to ensure code compatibility and license compliance. Learn more about the [models](https://inference.roboflow.com/#extras) supported by Inference.
```bash
pip install inference[yolo-world]
```

## 🔥 quickstart
Use the Inference SDK to run models locally with just a few lines of code. The image input can be a URL, a numpy array (BGR), or a PIL image.
```python
from inference import get_model

model = get_model(model_id="yolov8n-640")
results = model.infer("https://media.roboflow.com/inference/people-walking.jpg")
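# As noted above, the input can also be a local image instead of a URL:
# a numpy array (BGR) or a PIL image. Sketch, assuming opencv-python
# is installed:
# import cv2
# frame = cv2.imread("people-walking.jpg")  # BGR numpy array
# results = model.infer(frame)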
```

👉 roboflow models
Set up your `ROBOFLOW_API_KEY` to access thousands of fine-tuned models shared by the [Roboflow Universe](https://universe.roboflow.com/) community and your custom models. Navigate to the 🔑 keys section to learn more.
```python
from inference import get_model

model = get_model(model_id="soccer-players-5fuqs/1")
results = model.infer(
    image="https://media.roboflow.com/inference/soccer.jpg",
    confidence=0.5,
    iou_threshold=0.5,
)
```

👉 foundational models
- [CLIP Embeddings](https://inference.roboflow.com/foundation/clip) - generate text and image embeddings that you can use for zero-shot classification or assessing image similarity.
```python
from inference.models import Clip

model = Clip()
embeddings_text = model.embed_text("a football match")
embeddings_image = model.embed_image("https://media.roboflow.com/inference/soccer.jpg")
```

- [Segment Anything](https://inference.roboflow.com/foundation/sam) - segment all objects visible in the image or only those associated with selected points or boxes.
```python
from inference.models import SegmentAnything

model = SegmentAnything()
result = model.segment_image("https://media.roboflow.com/inference/soccer.jpg")
```

- [YOLO-World](https://inference.roboflow.com/foundation/yolo_world) - an almost real-time zero-shot detector that enables the detection of any objects without any training.
```python
from inference.models import YOLOWorld

model = YOLOWorld(model_id="yolo_world/l")
result = model.infer(
    image="https://media.roboflow.com/inference/dog.jpeg",
    text=["person", "backpack", "dog", "eye", "nose", "ear", "tongue"],
    confidence=0.03,
)
```

## 📟 inference server
- deploy server
The inference server is distributed via Docker. Behind the scenes, inference will download and run the image that is appropriate for your hardware. [Here](https://inference.roboflow.com/quickstart/docker/#advanced-build-a-docker-container-from-scratch), you can learn more about the supported images.

```bash
inference server start
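# Roughly equivalent manual start via Docker (a sketch: the CPU image name
# matches the Docker pulls badge above; 9001 is the port the SDK client
# targets):
# docker run -it --rm -p 9001:9001 roboflow/roboflow-inference-server-cpu:latest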
```

- run client
Consume inference server predictions using the HTTP client available in the Inference SDK.

```python
from inference_sdk import InferenceHTTPClient
client = InferenceHTTPClient(
    api_url="http://localhost:9001",
    api_key="<ROBOFLOW_API_KEY>",  # your Roboflow API key
)

with client.use_model(model_id="soccer-players-5fuqs/1"):
    predictions = client.infer("https://media.roboflow.com/inference/soccer.jpg")
```
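Predictions come back as plain Python dicts, so post-processing needs no extra tooling. A minimal sketch of converting detection boxes to corner coordinates (the payload below is fabricated for illustration; the center-based `x`/`y` plus `width`/`height` box format is an assumption about Roboflow detection responses):

```python
def to_xyxy(pred):
    """Return (x_min, y_min, x_max, y_max) for one prediction dict."""
    half_w, half_h = pred["width"] / 2, pred["height"] / 2
    return (
        pred["x"] - half_w,
        pred["y"] - half_h,
        pred["x"] + half_w,
        pred["y"] + half_h,
    )

# Fabricated sample payload shaped like a detection response.
sample = {
    "predictions": [
        {"x": 320.0, "y": 240.0, "width": 100.0, "height": 80.0,
         "confidence": 0.91, "class": "person"},
    ]
}

boxes = [to_xyxy(p) for p in sample["predictions"]]
print(boxes)  # [(270.0, 200.0, 370.0, 280.0)]
```

Corner boxes in this form drop straight into most drawing and evaluation utilities.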
If you're using the hosted API, change the local API URL to `https://detect.roboflow.com`. Accessing the hosted inference server and/or using any of the fine-tuned models requires a `ROBOFLOW_API_KEY`. For further information, visit the 🔑 keys section.

## 🎥 inference pipeline
The inference pipeline is an efficient method for processing static video files and streams. Select a model, define the video source, and set a callback action. You can choose from predefined callbacks that allow you to [display results](https://inference.roboflow.com/docs/reference/inference/core/interfaces/stream/sinks/#inference.core.interfaces.stream.sinks.render_boxes) on the screen or [save them to a file](https://inference.roboflow.com/docs/reference/inference/core/interfaces/stream/sinks/#inference.core.interfaces.stream.sinks.VideoFileSink).
```python
from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="yolov8x-1280",
    # video_reference also accepts a device index (e.g. 0) or an RTSP URL
    video_reference="https://media.roboflow.com/inference/people-walking.mp4",
    on_prediction=render_boxes,
)

pipeline.start()
pipeline.join()
```

## 🔑 keys
Inference enables the deployment of a wide range of pre-trained and foundational models without an API key. To access thousands of fine-tuned models shared by the [Roboflow Universe](https://universe.roboflow.com/) community, [configure your](https://app.roboflow.com/settings/api) API key.
```bash
export ROBOFLOW_API_KEY=<your_api_key>
```

## 📚 documentation
Visit our [documentation](https://inference.roboflow.com) to explore comprehensive guides, detailed API references, and a wide array of tutorials designed to help you harness the full potential of the Inference package.
## ⚡️ Model-specific extras
Explore the list of [`inference` extras](https://inference.roboflow.com/#extras) to install model-specific dependencies.
## © license
See the "Self Hosting and Edge Deployment" section of the [Roboflow Licensing](https://roboflow.com/licensing) documentation for information on how Roboflow Inference is licensed.
## 🏆 contribution
We would love your input to improve Roboflow Inference! Please see our [contributing guide](https://github.com/roboflow/inference/blob/master/CONTRIBUTING.md) to get started. Thank you to all of our contributors! 🙏