Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/lianxmfor/kfserving-inference-client
a client for kfserving inference protocol - v2 version
- Host: GitHub
- URL: https://github.com/lianxmfor/kfserving-inference-client
- Owner: lianxmfor
- Created: 2021-09-02T18:10:57.000Z (about 3 years ago)
- Default Branch: master
- Last Pushed: 2021-11-04T07:52:08.000Z (about 3 years ago)
- Last Synced: 2024-08-02T16:43:26.216Z (3 months ago)
- Language: Go
- Homepage:
- Size: 842 KB
- Stars: 4
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# KFServing Inference Client
A Go re-implementation of [seldon-batch-processor](https://github.com/SeldonIO/seldon-core/blob/master/python/seldon_core/batch_processor.py). See the [docs](https://docs.seldon.io/projects/seldon-core/en/stable/servers/batch.html) to understand its usage.
The main reason we chose to re-implement it is that the original implementation follows the Seldon protocol, while we need the KFServing V2 protocol in order to use [MLServer](https://github.com/SeldonIO/MLServer) as the inference backend.
## Usage
```sh
$ ./kfserving-inference-client -h
Usage of ./kfserving-inference-client:
-host string
The hostname for the seldon model to send the request to, which can be the ingress of the Seldon model or the service itself
-i string
The local filestore path where the input file with the data to process is located
-m string
model name
-o string
The local filestore path where the output file should be written with the outputs of the batch processing
-u int
Batch size greater than 1 can be used to group multiple predictions into a single request. (default 100)
-w int
The number of parallel request processor workers to run for parallel processing (default 100)
```

## Build Docker Image

```sh
$ make build
```