https://github.com/nheidloff/watson-deep-learning-javascript
Deploying Watson Deep Learning Models to Browsers
ai deep-learning dl ibm-cloud javascript tensorflow tensorflowjs watson web
- Host: GitHub
- URL: https://github.com/nheidloff/watson-deep-learning-javascript
- Owner: nheidloff
- License: apache-2.0
- Created: 2018-06-13T14:40:18.000Z (over 7 years ago)
- Default Branch: master
- Last Pushed: 2018-06-18T08:24:34.000Z (over 7 years ago)
- Last Synced: 2024-12-31T19:30:11.975Z (9 months ago)
- Topics: ai, deep-learning, dl, ibm-cloud, javascript, tensorflow, tensorflowjs, watson, web
- Language: TypeScript
- Homepage: https://nh-hunt.mybluemix.net/
- Size: 31.9 MB
- Stars: 4
- Watchers: 4
- Forks: 0
- Open Issues: 1
- Metadata Files:
  - Readme: README.md
  - License: LICENSE
# Deploying Watson Deep Learning Models to Browsers
This project includes sample code that shows how to train a model with [TensorFlow](https://www.tensorflow.org/) and the [Deep Learning service](https://www.ibm.com/blogs/watson/2018/03/deep-learning-service-ibm-makes-advanced-ai-accessible-users-everywhere/) within Watson Studio, and how to deploy and access the model in a web browser.
This project extends the open source project [Emoji Scavenger Hunt](https://github.com/google/emoji-scavenger-hunt), a web-based game that uses [TensorFlow.js](https://js.tensorflow.org/) to identify objects seen by your webcam or mobile camera in the browser. Emojis are shown and you have to find those objects in the real world before the timer runs out.
This is a screenshot of the app running on an iPhone as it recognizes a hat:

I've deployed a [live demo](https://nh-hunt.mybluemix.net), but it will only work well for you if you have items that look similar to the training objects.
Check out the [video](https://youtu.be/4WTpMmqraXI) for a quick demo.
In order to train the model I've taken pictures of seven items: plug, soccer ball, mouse, hat, truck, banana and headphones. Here is how the emojis map to the real objects. You can find the images in the [data](data/images) directory.

## Prerequisites
Get a free [IBM Cloud](https://ibm.biz/nheidloff) lite account (no time restriction, no credit card required).
Create an instance of the [Machine Learning](https://console.bluemix.net/catalog/services/machine-learning) service. From the credentials get the user name, password and the instance id.
Install the IBM Cloud CLI with the machine learning plugin and set environment variables by following these [instructions](https://datascience.ibm.com/docs/content/analyze-data/ml_dlaas_environment.html).
Create an instance of the [Cloud Object Storage](https://console.bluemix.net/catalog/services/cloud-object-storage) service and create HMAC credentials by following these [instructions](https://datascience.ibm.com/docs/content/analyze-data/ml_dlaas_object_store.html). Make sure to use 'Writer' or 'Manager' access and note the aws_access_key_id and aws_secret_access_key for a later step.
Install and configure the AWS CLI by following these [instructions](https://console.bluemix.net/docs/services/cloud-object-storage/cli/aws-cli.html#use-the-aws-cli).
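As a sketch of the credentials step, the HMAC key pair can be pulled out of the downloaded service-credentials JSON with a few lines of Python. The `cos_hmac_keys` field name follows the Cloud Object Storage documentation; treat it as an assumption to verify against your own credentials document:

```python
import json

def extract_hmac_keys(credentials_json):
    """Pull the HMAC key pair out of a Cloud Object Storage credentials document.

    Assumes the JSON contains a 'cos_hmac_keys' section, as created when
    HMAC credentials are requested for the service instance.
    """
    creds = json.loads(credentials_json)
    hmac = creds["cos_hmac_keys"]
    return hmac["access_key_id"], hmac["secret_access_key"]
```

The returned pair goes into the `ibm_cos` profile used by the `aws` commands below.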
## Training of the Model
Clone this repo:
```bash
$ git clone https://github.com/nheidloff/watson-deep-learning-javascript
```

Create two buckets (use unique names):
```bash
$ aws --endpoint-url=http://s3-api.dal-us-geo.objectstorage.softlayer.net --profile ibm_cos s3 mb s3://nh-hunt-input
$ aws --endpoint-url=http://s3-api.dal-us-geo.objectstorage.softlayer.net --profile ibm_cos s3 mb s3://nh-hunt-output
```

Download and extract MobileNet:
```bash
$ cd watson-deep-learning-javascript/data
$ wget http://download.tensorflow.org/models/mobilenet_v1_2018_02_22/mobilenet_v1_0.25_224.tgz
$ tar xvzf mobilenet_v1_0.25_224.tgz
```

Upload MobileNet and the data to the input bucket (use your unique bucket name):
```bash
$ cd xxx/watson-deep-learning-javascript/data
$ aws --endpoint-url=http://s3-api.dal-us-geo.objectstorage.softlayer.net --profile ibm_cos s3 cp . s3://nh-hunt-input/ --recursive
```

Prepare the training:
* Define your object storage credentials and your bucket names in [tf-train.yaml](model/tf-train.yaml).
* Compress [retrain.py](model/retrain.py) into [tf-model.zip](model/tf-model.zip).

Invoke the training and check for status (change the generated training name):
```bash
$ cd xxx/watson-deep-learning-javascript/model
$ bx ml train tf-model.zip tf-train.yaml
$ bx ml list training-runs
$ bx ml monitor training-runs training-5PQK89IiR
$ bx ml show training-runs training-5PQK89IiR
```

Download the saved model:
```bash
$ cd xxx/watson-deep-learning-javascript/saved-model
$ aws --endpoint-url=http://s3-api.dal-us-geo.objectstorage.softlayer.net --profile ibm_cos s3 sync s3://nh-hunt-output .
```

Optionally evaluate the model via TensorBoard (either from a Docker container or a virtualenv):
```bash
$ cd xxx/watson-deep-learning-javascript/saved-model/training-0xebs3Iig
$ tensorboard --logdir=xxx/watson-deep-learning-javascript/saved-model/training-0xebs3Iig/retrain_logs
```

## Deployment of the Web Application
Convert the model:
```bash
$ cd xxx/watson-deep-learning-javascript/convert
$ docker build -t model-converter .
$ cp -a xxx/watson-deep-learning-javascript/saved-model/training-qBnjUqImR/model/. xxx/watson-deep-learning-javascript/convert/data/saved_model/
$ docker run -v xxx/watson-deep-learning-javascript/convert/data:/data -it model-converter
```

Build the web application (more [details](https://github.com/google/emoji-scavenger-hunt)):
Change your emojis in [scavenger_classes.ts](emoji-scavenger-hunt/src/js/scavenger_classes.ts) and [game_levels.ts](emoji-scavenger-hunt/src/js/game_levels.ts).
```bash
$ cp -a xxx/watson-deep-learning-javascript/convert/data/saved_model_web/. xxx/watson-deep-learning-javascript/emoji-scavenger-hunt/dist/model/
$ cd xxx/watson-deep-learning-javascript/emoji-scavenger-hunt
$ yarn prep
$ yarn build
```

Push the application to IBM Cloud (change host and name in [manifest.yaml](emoji-scavenger-hunt/manifest.yaml) to something unique):
```bash
$ cd xxx/watson-deep-learning-javascript/emoji-scavenger-hunt
$ cf login
$ cf push
```

After this you can open the application via a URL like [https://nh-hunt.mybluemix.net](https://nh-hunt.mybluemix.net).
## Deployment of the Model to Watson Studio
Deploy the model (change training id and model id):
```bash
$ bx ml store training-runs training-qBnjUqImR
$ bx ml deploy 0c78b7d6-9d22-4719-90da-ab649c0edc90 "my-deployment"
```

Generate payloads for predictions:
```bash
$ cd xxx/watson-deep-learning-javascript/predict
$ docker build -t generate-payload .
$ docker run -v xxx/watson-deep-learning-javascript/predict:/data -it -e file_name=ball.JPG generate-payload
```

Copy the model id, the deployment id and the content of [raw-payload.json](predict/raw-payload.json) into [payload.json](predict/payload.json).
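For illustration only, a payload along these lines could be assembled in plain Python. The field names (`modelId`, `deploymentId`, `payload`) are hypothetical — the authoritative structure is whatever the generate-payload container emits into raw-payload.json:

```python
import base64
import json

def build_payload(image_path, model_id, deployment_id):
    """Assemble a hypothetical scoring payload with a base64-encoded image.

    The field names here are illustrative; the actual structure comes
    from raw-payload.json as produced by the generate-payload container.
    """
    with open(image_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return json.dumps({
        "modelId": model_id,          # hypothetical field name
        "deploymentId": deployment_id,  # hypothetical field name
        "payload": encoded,
    })
```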
Predict something for a test image:
```bash
$ cd xxx/watson-deep-learning-javascript/predict
$ bx ml score payload.json
```

To interpret the result, check out [output_labels.txt](saved-model/training-qBnjUqImR/output_lables.txt) for the labels and their order.
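Assuming that file holds one label per line, in the order of the model's output vector, mapping a score vector back to a label can be sketched as:

```python
def top_prediction(scores, labels_path):
    """Pair each score with its label and return the best match.

    Assumes labels_path holds one label per line, in the same order
    as the model's output vector.
    """
    with open(labels_path) as f:
        labels = [line.strip() for line in f if line.strip()]
    ranked = sorted(zip(labels, scores), key=lambda p: p[1], reverse=True)
    return ranked[0]
```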
As an alternative to the IBM Cloud CLI, you can also use curl. See the [API documentation](https://watson-ml-api.mybluemix.net/#!/Deployments/post_v3_wml_instances_instance_id_published_models_published_model_id_deployments_deployment_id_online) for details.
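As a sketch of the same call from Python's standard library (the URL path mirrors the linked API documentation; the host name and bearer-token auth are assumptions to check against your service credentials):

```python
import json
import urllib.request

def build_scoring_request(instance_id, model_id, deployment_id, token, payload):
    """Build the online-scoring request for the Watson ML v3 API.

    The path segments mirror the linked API documentation; the host and
    the bearer-token scheme are assumptions to verify for your instance.
    """
    url = (
        "https://ibm-watson-ml.mybluemix.net/v3/wml_instances/"
        f"{instance_id}/published_models/{model_id}"
        f"/deployments/{deployment_id}/online"
    )
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",
        },
        method="POST",
    )

# urllib.request.urlopen(build_scoring_request(...)) would send the request.
```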