https://github.com/biswaroop1547/microbatcher
🚀 X-Model server for boosting ML inference without letting you do any heavy-lifting in the backend.
- Host: GitHub
- URL: https://github.com/biswaroop1547/microbatcher
- Owner: biswaroop1547
- License: mit
- Created: 2022-07-22T05:34:33.000Z (over 3 years ago)
- Default Branch: main
- Last Pushed: 2022-12-27T21:28:40.000Z (over 3 years ago)
- Last Synced: 2025-04-23T16:38:43.294Z (12 months ago)
- Topics: api-server, batching, inference, ml, mlops, restful
- Language: Python
- Homepage:
- Size: 72.3 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
README
# MicroBatcher

X-Model server for boosting ML inference
## What is this?
A server that takes care of your inference-to-deployment needs and boosts performance, without requiring any heavy lifting in the backend.
## Quickstart
1. Clone this repo & install
```bash
git clone https://github.com/biswaroop1547/microbatcher.git && cd microbatcher
make install
```
2. Define model path and start server

```bash
# Hypothetical example: the variable name and run target below are assumptions,
# not confirmed by this README.
echo "MODEL_PATH=/path/to/model" > .env
make run
```
## Philosophy
lorem ipsum
### Why do you need this?
lorem ipsum
### What will it enable?
lorem ipsum
#### Features
* lorem ipsum
* lorem ipsum