Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/autotraderuk/fastapi-mlflow
Deploy mlflow models as JSON APIs with minimal new code
api fastapi mlflow mlops python python3
- Host: GitHub
- URL: https://github.com/autotraderuk/fastapi-mlflow
- Owner: autotraderuk
- License: apache-2.0
- Created: 2022-01-28T12:19:48.000Z (almost 3 years ago)
- Default Branch: main
- Last Pushed: 2023-11-08T15:35:03.000Z (almost 1 year ago)
- Last Synced: 2024-10-01T15:52:29.699Z (about 1 month ago)
- Topics: api, fastapi, mlflow, mlops, python, python3
- Language: Python
- Homepage:
- Size: 331 KB
- Stars: 21
- Watchers: 5
- Forks: 4
- Open Issues: 2
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
README
# fastapi mlflow
Deploy [mlflow](https://www.mlflow.org/) models as JSON APIs using [FastAPI](https://fastapi.tiangolo.com) with minimal new code.
## Installation
```shell
pip install fastapi-mlflow
```

For running the app in production, you will also need an ASGI server, such as [Uvicorn](https://www.uvicorn.org) or [Hypercorn](https://gitlab.com/pgjones/hypercorn).
## Install on Apple Silicon (ARM / M1)
If you experience problems installing on a newer generation Apple silicon based device, applying [this solution from StackOverflow](https://stackoverflow.com/a/67586301) before retrying the install has been found to help:
```shell
brew install openblas gfortran
export OPENBLAS="$(brew --prefix openblas)"
```

## License
Copyright © 2022-23 Auto Trader Group plc.
[Apache-2.0](https://www.apache.org/licenses/LICENSE-2.0)
## Examples
### Simple
#### Create
Create a file `main.py` containing:
```python
from fastapi_mlflow.applications import build_app
from mlflow.pyfunc import load_model

model = load_model("/Users/me/path/to/local/model")
app = build_app(model)
```

#### Run
Run the server with:
```shell
uvicorn main:app
```

#### Check
Open your browser at http://127.0.0.1:8000/docs (the Uvicorn default address).
You should see the automatically generated docs for your model, and be able to test it out using the `Try it out` button in the UI.
### Serve multiple models
It should be possible to host multiple models (assuming that they have compatible dependencies...) by leveraging [FastAPI's Sub Applications](https://fastapi.tiangolo.com/advanced/sub-applications/#sub-applications-mounts):
```python
from fastapi import FastAPI
from fastapi_mlflow.applications import build_app
from mlflow.pyfunc import load_model

app = FastAPI()
model1 = load_model("/Users/me/path/to/local/model1")
model1_app = build_app(model1)
app.mount("/model1", model1_app)

model2 = load_model("/Users/me/path/to/local/model2")
model2_app = build_app(model2)
app.mount("/model2", model2_app)
```

[Run](#run) and [Check](#check) as above.
### Custom routing
If you want more control over where and how the prediction end-point is mounted in your API, you can build the predictor function directly and use it as you need:
```python
from inspect import signature

from fastapi import FastAPI
from fastapi_mlflow.predictors import build_predictor
from mlflow.pyfunc import load_model

model = load_model("/Users/me/path/to/local/model")
predictor = build_predictor(model)
app = FastAPI()
app.add_api_route(
    "/classify",
    predictor,
    response_model=signature(predictor).return_annotation,
    methods=["POST"],
)
```

[Run](#run) and [Check](#check) as above.
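The `response_model=signature(predictor).return_annotation` line relies on the callable returned by `build_predictor` carrying a return-type annotation. The same stdlib mechanism can be seen on any annotated function; the `predictor` below is a hypothetical stand-in, not the real generated one:

```python
from inspect import signature

# Hypothetical stand-in for the callable returned by build_predictor.
def predictor(value: float) -> dict:
    return {"prediction": value * 2}

# FastAPI can use the return annotation as the route's response model.
response_model = signature(predictor).return_annotation
print(response_model)  # <class 'dict'>
```

In the real case the annotation is a Pydantic model describing the prediction payload, which is what gives the generated docs their response schema.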