# TensorFlow CPU Inference API for Windows and Linux with Docker Swarm
Please use **Docker Swarm** only if you need to:

* Provide redundancy in terms of API containers: if a container goes down, incoming requests will be redirected to another running instance.

* Coordinate between the containers: Swarm will orchestrate the APIs and choose one of them to handle each incoming request.

* Scale up the inference service in order to get faster predictions, especially if there is heavy traffic on the service.

## Run the docker container

Docker Swarm can scale the API up to multiple replicas and can be used on one or multiple hosts (Linux users only). In both cases, a Docker Swarm setup is required on all hosts.

#### Docker swarm setup

1- Initialize Swarm:

```sh
docker swarm init
```
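
If the swarm was initialized successfully, the output should start roughly like the line below (the node ID is a placeholder), followed by a ready-made join command for workers:

```sh
Swarm initialized: current node (abc123xyz) is now a manager.
```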

2- On the manager host, open the cpu-inference.yaml file and specify the number of replicas needed. If you are using multiple hosts (see the "With multiple hosts" section below), the replicas will be distributed across all hosts.

```yaml
version: "3"

services:
  api:
    ports:
      - "4343:4343"
    image: tensorflow_inference_api_cpu
    volumes:
      - "/mnt/models:/models"
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure
```

**Notes about cpu-inference.yaml:**

* the volumes field on the left of ":" must be an absolute path; it can be changed by the user and represents the models directory on your operating system
* the part of the volumes field after ":" (":/models") should never be changed
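
For example, if your models are stored under `/home/user/models` on the host (a hypothetical path), the volumes entry would look like the sketch below; only the part on the left of ":" changes:

```yaml
volumes:
  # host models directory (changeable) : container path (keep as /models)
  - "/home/user/models:/models"
```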

#### With one host

Deploy the API:

```sh
docker stack deploy -c cpu-inference.yaml tensorflow-cpu
```

![onehost](./docs/tcpu.png)
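
To verify that the stack was deployed and that the desired number of replicas is running, you can use the standard Docker command below (output columns shown for illustration only):

```sh
docker stack services tensorflow-cpu
# ID        NAME                 MODE        REPLICAS   IMAGE
# abc123    tensorflow-cpu_api   replicated  1/1        tensorflow_inference_api_cpu
```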

#### With multiple hosts (Linux users only)

1- **Make sure hosts are reachable on the same network**.

2- Choose a host to be the manager and run the following command on the chosen host to generate a token so the other hosts can join:

```sh
docker swarm join-token worker
```

A command will appear in your terminal; copy and paste it on the other hosts, as seen in the image below.
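
The generated command will have roughly the following form; the token and manager IP below are placeholders that Docker fills in for you:

```sh
docker swarm join --token SWMTKN-1-<token> <manager-ip>:2377
```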

3- Deploy your application using:

```sh
docker stack deploy -c cpu-inference.yaml tensorflow-cpu
```

![multhost](./docs/tcpu2.png)

#### Useful Commands

1- To scale the service up to 4 replicas, for example, use this command:

```sh
docker service scale tensorflow-cpu_api=4
```

2- To check the available nodes (manager and workers):

```sh
docker node ls
```

3- To check on which node each container is running:

```sh
docker service ps tensorflow-cpu_api
```

4- To check the number of replicas:

```sh
docker service ls
```
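
Although not part of the list above, it is worth noting that a stack deployed this way can be removed with the standard Docker command:

```sh
docker stack rm tensorflow-cpu
```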

## Benchmarking

Here are two graphs showing the prediction time for different numbers of simultaneous requests.

![CPU 20 req](./docs/TCPU20req.png)

![CPU 40 req](./docs/TCPU40req.png)

We can see that both graphs show the same pattern regardless of the number of simultaneous requests received. When we increase the number of workers (hosts), we are able to speed up inference by at least a factor of 2. For example, as seen in the last column, we were able to process 40 requests in:

- 17.5 seconds with 20 replicas in 1 machine
- 8.8 seconds with 20 replicas in each of the 2 machines
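
In terms of throughput, that is roughly 40 / 17.5 ≈ 2.3 requests per second on a single machine versus 40 / 8.8 ≈ 4.5 requests per second with two machines, i.e. close to a 2x speedup.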

Moreover, if one of the machines goes down, the others remain ready to receive requests.

Finally, since we are predicting on CPU, scaling to more replicas does not necessarily mean faster predictions: 4 containers were faster than 20.