Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/patwie/tensorflow-cmake
TensorFlow examples in C, C++, Go and Python without bazel but with cmake and FindTensorFlow.cmake
- Host: GitHub
- URL: https://github.com/patwie/tensorflow-cmake
- Owner: PatWie
- License: apache-2.0
- Created: 2018-02-07T18:04:35.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2019-08-18T07:16:15.000Z (over 5 years ago)
- Last Synced: 2024-12-15T20:16:23.751Z (9 days ago)
- Topics: c, cmake, cpp, cuda, deep-learning, golang, inference, opencv, tensorflow, tensorflow-cc, tensorflow-cmake, tensorflow-examples, tensorflow-gpu
- Language: CMake
- Homepage:
- Size: 759 KB
- Stars: 439
- Watchers: 17
- Forks: 91
- Open Issues: 8
Metadata Files:
- Readme: README.md
- License: LICENSE
# TensorFlow CMake/C++ Collection
Looking at the official docs: what do you see? The usual fare? Now, guess what: this is a bazel-free zone. We use CMake here!

This collection contains **reliable** and **dead-simple** examples to use TensorFlow in C, C++, Go and Python: load a pre-trained model or compile a custom operation, with or without CUDA. All builds are tested against the most recent stable TensorFlow version and rely on CMake with a custom [FindTensorFlow.cmake](https://github.com/PatWie/tensorflow-cmake/blob/master/cmake/modules/FindTensorFlow.cmake). This CMake module includes common workarounds for bugs in specific TF versions.
| TensorFlow | Status |
| ------ | ------ |
| 1.14.0 | [![Build Status TensorFlow](https://ci.patwie.com/api/badges/PatWie/tensorflow-cmake/TensorFlow%201.14.0/status.svg)](http://ci.patwie.com/PatWie/tensorflow-cmake) |
| 1.13.1 | [![Build Status TensorFlow](https://ci.patwie.com/api/badges/PatWie/tensorflow-cmake/TensorFlow%201.13.1/status.svg)](http://ci.patwie.com/PatWie/tensorflow-cmake) |
| 1.12.0 | [![Build Status TensorFlow](https://ci.patwie.com/api/badges/PatWie/tensorflow-cmake/TensorFlow%201.12.0/status.svg)](http://ci.patwie.com/PatWie/tensorflow-cmake) |
| 1.11.0 | [![Build Status TensorFlow](https://ci.patwie.com/api/badges/PatWie/tensorflow-cmake/TensorFlow%201.11.0/status.svg)](http://ci.patwie.com/PatWie/tensorflow-cmake) |
| 1.10.0 | [![Build Status TensorFlow](https://ci.patwie.com/api/badges/PatWie/tensorflow-cmake/TensorFlow%201.10.0/status.svg)](http://ci.patwie.com/PatWie/tensorflow-cmake) |
| 1.9.0 | [![Build Status TensorFlow](https://ci.patwie.com/api/badges/PatWie/tensorflow-cmake/TensorFlow%201.9.0/status.svg)](http://ci.patwie.com/PatWie/tensorflow-cmake) |

The repository contains the following examples.
| Example| Explanation |
| ------ | ------ |
| [custom operation](./custom_op) | build a custom operation for TensorFlow in C++/CUDA (requires only pip) |
| [inference (C++)](./inference/cc) | run inference in C++ |
| [inference (C)](./inference/c) | run inference in C |
| [inference (Go)](./inference/go) | run inference in Go |
| [event writer](./examples/event_writer) | write event files for TensorBoard in C++ |
| [keras cpp-inference example](./examples/keras) | run a Keras-model in C++ |
| [simple example](./examples/simple) | create and run a TensorFlow graph in C++ |
| [resize image example](./examples/resize) | resize an image in TensorFlow with/without OpenCV |

## Custom Operation
This example illustrates the process of creating a custom operation using C++/CUDA and CMake. It is *not* intended to be a peak-performance implementation; rather, it is just a boilerplate template.
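The compiled op can be sanity-checked against a plain-Python reference. The sketch below is an assumption about the op's semantics (elementwise addition of two equally-shaped matrices, as the name `matrix_add` suggests) and is not the repository's actual test code:

```python
def matrix_add_reference(a, b):
    """Reference for a hypothetical elementwise matrix_add op.

    `a` and `b` are equally-shaped nested lists; returns their
    elementwise sum, which the compiled kernel is assumed to match.
    """
    if len(a) != len(b):
        raise ValueError("shape mismatch")
    return [[x + y for x, y in zip(row_a, row_b)]
            for row_a, row_b in zip(a, b)]

# Compare against the custom op's output, e.g. inside test_matrix_add.py.
print(matrix_add_reference([[1, 2], [3, 4]], [[10, 20], [30, 40]]))
# [[11, 22], [33, 44]]
```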
```console
user@host $ pip install tensorflow-gpu --user # solely the pip package is needed
user@host $ cd custom_op/user_ops
user@host $ cmake .
user@host $ make
user@host $ python test_matrix_add.py
user@host $ cd ..
user@host $ python example.py
```

## TensorFlow Graph within C++
This example illustrates the process of loading an image (using OpenCV or TensorFlow), resizing it, and saving it as a JPG or PNG (using OpenCV or TensorFlow).
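For intuition, the core resize step can be sketched in a few lines of Python. This is a nearest-neighbour stand-in for what the C++ example delegates to TensorFlow or OpenCV, not the repository's code:

```python
def resize_nearest(img, new_h, new_w):
    """Nearest-neighbour resize of an image stored as a nested list
    of pixel rows; a toy stand-in for tf.image.resize / cv::resize."""
    h, w = len(img), len(img[0])
    return [[img[(y * h) // new_h][(x * w) // new_w]
             for x in range(new_w)]
            for y in range(new_h)]

small = resize_nearest([[1, 2, 3, 4],
                        [5, 6, 7, 8],
                        [9, 10, 11, 12],
                        [13, 14, 15, 16]], 2, 2)
print(small)  # [[1, 3], [9, 11]]
```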
```console
user@host $ cd examples/resize
user@host $ export TENSORFLOW_BUILD_DIR=...
user@host $ export TENSORFLOW_SOURCE_DIR=...
user@host $ cmake .
user@host $ make
```

## TensorFlow-Serving
There are two examples demonstrating the handling of TensorFlow-Serving: using a vector input and using an encoded image input.
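TensorFlow-Serving's REST predict API expects a JSON body of the form `{"instances": [...]}` sent to `/v1/models/<name>:predict`. A minimal helper for building such a request body (the input vector below is illustrative, not taken from the repository):

```python
import json

def predict_request_body(instances):
    """Build the JSON body for TensorFlow Serving's REST predict API,
    i.e. POST /v1/models/<name>:predict with {"instances": [...]}."""
    return json.dumps({"instances": instances})

# One 2-element input vector, as in the basic example.
body = predict_request_body([[1.0, 1.0]])
print(body)  # {"instances": [[1.0, 1.0]]}
```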
```console
server@host $ CHOOSE=basic # or image
server@host $ cd serving/${CHOOSE}/training
server@host $ python create.py # create some model
server@host $ cd serving/server/
server@host $ ./run.sh                # start server
# send some queries
client@host $ cd client/bash
client@host $ ./client.sh
client@host $ cd client/python
# for the basic-example
client@host $ python client_rest.py
client@host $ python client_grpc.py
# for the image-example
client@host $ python client_rest.py /path/to/img.[png,jpg]
client@host $ python client_grpc.py /path/to/img.[png,jpg]
```

## Inference
Create a model in Python, save the graph to disk and load it in C/C++/Go/Python to perform inference. As these examples are based on the TensorFlow C-API they require the `libtensorflow_cc.so` library which is *not* shipped in the pip package (`tensorflow-gpu`). Hence, you will need to build TensorFlow from source beforehand, e.g.,
```console
user@host $ ls ${TENSORFLOW_SOURCE_DIR}
ACKNOWLEDGMENTS     bazel-genfiles      configure          pip
ADOPTERS.md         bazel-out           configure.py       py.pynano
ANDROID_NDK_HOME    bazel-tensorflow    configure.py.bkp   README.md
...
user@host $ cd ${TENSORFLOW_SOURCE_DIR}
user@host $ ./configure
user@host $ # ... or whatever options you used here
user@host $ bazel build -c opt --copt=-mfpmath=both --copt=-msse4.2 --config=cuda //tensorflow:libtensorflow.so
user@host $ bazel build -c opt --copt=-mfpmath=both --copt=-msse4.2 --config=cuda //tensorflow:libtensorflow_cc.so

user@host $ export TENSORFLOW_BUILD_DIR=/tensorflow_dist
user@host $ mkdir ${TENSORFLOW_BUILD_DIR}
user@host $ cp ${TENSORFLOW_SOURCE_DIR}/bazel-bin/tensorflow/*.so ${TENSORFLOW_BUILD_DIR}/
user@host $ cp ${TENSORFLOW_SOURCE_DIR}/bazel-genfiles/tensorflow/cc/ops/*.h ${TENSORFLOW_BUILD_DIR}/includes/tensorflow/cc/ops/
```

### 1. Save Model
We just run a very basic model:
```python
x = tf.placeholder(tf.float32, shape=[1, 2], name='input')
output = tf.identity(tf.layers.dense(x, 1), name='output')
```

Then save the model as you regularly would; this is done in `example.py`, which also prints some reference outputs:
```console
user@host $ python example.py
input [[1. 1.]]
output [[2.1909506]]
dense/kernel:0 [[0.9070684]
[1.2838823]]
dense/bias:0 [0.]
```

### 2. Run Inference
#### Python
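The expected value can be reproduced by hand before running any of the inference binaries: the model computes `output = x @ kernel + bias`, with the weights printed by `example.py` above. A quick check:

```python
# output = x @ kernel + bias for the single dense layer above
x = [1.0, 1.0]
kernel = [0.9070684, 1.2838823]   # dense/kernel:0
bias = 0.0                        # dense/bias:0

output = sum(xi * ki for xi, ki in zip(x, kernel)) + bias
print(output)  # ~2.1909507, matching the printed 2.1909506 (float32 rounding)
```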
```console
user@host $ python python/inference.py
input [[1. 1.]]
output [[2.1909506]]
dense/kernel:0 [[0.9070684]
[1.2838823]]
dense/bias:0 [0.]
```

#### C++
```console
user@host $ cd cc
user@host $ cmake .
user@host $ make
user@host $ cd ..
user@host $ ./cc/inference_cc
input           Tensor
output Tensor
dense/kernel:0 Tensor
dense/bias:0 Tensor
```

#### C
```console
user@host $ cd c
user@host $ cmake .
user@host $ make
user@host $ cd ..
user@host $ ./c/inference_c
2.190951
```
#### Go
```console
user@host $ go get github.com/tensorflow/tensorflow/tensorflow/go
user@host $ cd go
user@host $ ./build.sh
user@host $ cd ../
user@host $ ./inference_go
input [[1 1]]
output [[2.1909506]]
dense/kernel:0 [[0.9070684] [1.2838823]]
dense/bias:0 [0]
```