{"id":13418774,"url":"https://github.com/intel/clDNN","last_synced_at":"2025-03-15T04:30:34.562Z","repository":{"id":21228372,"uuid":"87980060","full_name":"intel/clDNN","owner":"intel","description":"Compute Library for Deep Neural Networks (clDNN)","archived":true,"fork":false,"pushed_at":"2023-01-10T00:55:21.000Z","size":141996,"stargazers_count":576,"open_issues_count":27,"forks_count":116,"subscribers_count":70,"default_branch":"master","last_synced_at":"2024-08-05T02:01:21.352Z","etag":null,"topics":["cldnn","deep-learning","deep-neural-networks","intel","intel-hd-graphics"],"latest_commit_sha":null,"homepage":"https://01.org/cldnn","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/intel.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-04-11T21:14:56.000Z","updated_at":"2024-07-04T18:56:22.000Z","dependencies_parsed_at":"2023-01-12T03:30:40.704Z","dependency_job_id":null,"html_url":"https://github.com/intel/clDNN","commit_stats":null,"previous_names":["01org/cldnn"],"tags_count":2,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2FclDNN","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2FclDNN/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2FclDNN/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/intel%2FclDNN/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/intel","download_url":"https://codeload.github.com/intel/clDNN/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","ki
nd":"github","repositories_count":221536598,"owners_count":16839538,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["cldnn","deep-learning","deep-neural-networks","intel","intel-hd-graphics"],"created_at":"2024-07-30T22:01:06.885Z","updated_at":"2025-03-15T04:30:34.549Z","avatar_url":"https://github.com/intel.png","language":"C++","readme":"DISCONTINUATION OF PROJECT\n\nThis project will no longer be maintained by Intel.\n\nIntel has ceased development and contributions including, but not limited to, maintenance, bug fixes, new releases, or updates, to this project.  \n\nIntel no longer accepts patches to this project.\n\nIf you have an ongoing need to use this project, are interested in independently developing it, or would like to maintain patches for the open source software community, please create your own fork of this project.  \n\nContact: webadmin@linux.intel.com\n\n# Compute Library for Deep Neural Networks (clDNN)\n___\n## Discontinued repository\nThis project is now an integral part of Intel® Distribution of OpenVino™ Toolkit. 
\nIts content and development have been moved to [*DLDT repo*](https://github.com/opencv/dldt/tree/2019/inference-engine/thirdparty/clDNN).\n\nTo get the latest clDNN sources, please refer to the *DLDT repo*.\n___\n[![Apache License Version 2.0](https://img.shields.io/badge/license-Apache_2.0-green.svg)](LICENSE)\n![v1.0](https://img.shields.io/badge/1.0-RC1-green.svg)\n\n*Compute Library for Deep Neural Networks* (*clDNN*) is an open source performance\nlibrary for Deep Learning (DL) applications intended for acceleration of\nDL Inference on Intel® Processor Graphics – including HD Graphics and\nIris® Graphics.  \n*clDNN* includes highly optimized building blocks for implementation of\nconvolutional neural networks (CNN) with C and C++ interfaces. We created\nthis project to enable the DL community to innovate on Intel® processors.\n\n**Usages supported:** Image recognition, image detection, and image segmentation.\n\n**Validated Topologies:** AlexNet\\*, VGG(16,19)\\*, GoogleNet(v1,v2,v3)\\*, ResNet(50,101,152)\\*, Faster R-CNN\\*, Squeezenet\\*, SSD_googlenet\\*, SSD_VGG\\*, PVANET\\*, PVANET_REID\\*, age_gender\\*, FCN\\* and yolo\\*.\n\nAs with any technical preview, APIs may change in future updates.\n\n## License\nclDNN is licensed under\n[Apache License Version 2.0](http://www.apache.org/licenses/LICENSE-2.0).\n\n### Attached licenses\nclDNN uses 3\u003csup\u003erd\u003c/sup\u003e-party components licensed under the following licenses:\n- *googletest* under [Google\\* License](https://github.com/google/googletest/blob/master/googletest/LICENSE)\n- *OpenCL™ ICD and C++ Wrapper* under [Khronos™ License](https://github.com/KhronosGroup/OpenCL-CLHPP/blob/master/LICENSE.txt)\n- *RapidJSON* under [Tencent\\* License](https://github.com/Tencent/rapidjson/blob/master/license.txt)\n\n## Documentation\nThe latest clDNN documentation is at [GitHub pages](https://intel.github.io/clDNN/index.html).\n\nThere is also inline documentation available that can be [generated 
with Doxygen](#generating-documentation).\n\nAccelerate Deep Learning Inference with Intel® Processor Graphics whitepaper [link](https://software.intel.com/en-us/articles/accelerating-deep-learning-inference-with-intel-processor-graphics).\n\n## Intel® OpenVino™ Toolkit and clDNN\n\nclDNN is also released together with Intel® OpenVino™ Toolkit, which contains:\n- *Model Optimizer*, a Python*-based command line tool which imports trained models from popular deep learning frameworks such as Caffe*, TensorFlow*, and Apache MXNet*.\n- *Inference Engine*, an execution engine which uses a common API to deliver inference solutions on the platform of your choice (for example, GPU with the clDNN library)\n\nYou can find more information [here](https://software.intel.com/en-us/openvino-toolkit/deep-learning-cv).\n\n## Changelog\n\n### Drop 14.1\n    New features:\n    - network serialization\n    - 3D support for: Activation, Reorder, Eltwise, Reshape, Deconvolution\n    Bug fixes:\n    - concatenation fix for different input formats\n    UX:\n    - added 2019.4 intel ocl icd\n    - refactored bfyx_f16 format\n    - added i32 and i64 support for select primitive\n\n### Drop 14.0\n    New features:\n    - 3 spatial dimensions support in convolution primitive (3D convolution)\n    - reverse primitive\n    - arg_max_min support for i8/s8/i32/i64 types\n    - concatenation support for bfzyx (5D) format\n    Bug fixes:\n    - fixes in primitive fusing pass (for i8/s8 types)\n    - fixes in graph optimizer (reshape primitive)\n    - overflow/underflow fixes for eltwise (i8/s8)\n    - fixes for convolution-eltwise primitive\n    - fixes for convolution primitive (depth-wise case)\n    - perf fixes for events pool\n    - fixes for pooling primitive (u8)\n    - fixes for deconvolution primitive\n    - fixes for fc primitive\n    - fixes for batch_norm primitive\n    UX:\n    - refactored and cleaned up JIT constants generation mechanism\n    - refactored kernel selection mechanism\n    - 
removed legacy device info mechanism\n    Performance:\n    - convolution primitive optimizations (for byxf, for MMAD-based, for byxf fp16, for bfyx fp16)\n    - fc primitive optimizations (for byxf)\n    - pooling primitive optimizations (for byxf, bfyx)\n    - convolution-relu primitive fusing (i8 -\u003e s8 case)\n    - eltwise primitive optimizations (for byxf)\n    - fused convolution-eltwise primitive optimizations (IMAD-based)\n    - block-based optimizations for fp16 primitives\n\n### Drop 13.1\n    New features:\n    - added max mode for contract primitive\n    - added one_hot primitive\n    - optional explicit output data type support for all primitives\n    Bug fixes:\n    - fix for graph optimizer (crop primitive)\n    - fix for processing order (deconvolution primitive)\n    - fix for convolution-eltwise primitive\n    UX:\n    - cache.json is searched for in the library directory\n    Performance:\n    - optimizations for lstm_gemm primitive\n\n### Drop 13.0\n    New features:\n    - events pool\n    - group support in convolution and deconvolution primitives\n    - broadcastable inputs support for eltwise primitive\n    - asymmetric padding for convolution primitive\n    - fused convolution-eltwise primitive (API extension)\n    - auto-calculated output shape support for reshape primitive\n    - crop support for i8/s8/i32/i64 types\n    - broadcast axis support for broadcast primitive\n    - logic and comparison operations support for eltwise primitive\n    Bug fixes:\n    - added required alignment checks for some fc implementations\n    - added lstm support for f16 (half) type\n    - reorders for fc moved to graph compiler\n    - primitive fusing and reorder fixes\n    UX:\n    - added internal core tests project\n    - refactored optimizations pass manager and passes\n    Performance:\n    - optimized concatenation during upsampling (unpool)\n    - IMAD-based optimizations for convolution, fc, eltwise and pooling primitives (i8/s8)\n    - 
convolution-eltwise fusing optimizations\n    - partial writes optimizations for block-based kernels\n\n### Drop 12.1\n    - gtests code refactor\n    - buildbreak fix\n\n### Drop 12.0\n    New features:\n    - pyramidRoiAlign primitive\n    - multiple axes support for reverse mode in index_select\n    - eltwise min/max/mod support for i8/i32/i64\n    - broadcast support for i32/i64\n    Bug fixes:\n    - memory leak fixes\n    - in-place reshape\n    - no padding for output primitives\n    UX:\n    - RapidJSON library for auto-tune cache\n    - less dependencies in program.cpp\n    - do not throw an error when the device is not validated\n    - global pooling in c API\n    - optimized padding for convolution\n\n### Drop 11.0\n    New features:\n    - throttle hints\n    - extended border and tile\n    - GPU implementation of Detection Output\n    - More cases for BatchNorm primitive\n    Bug fixes:\n    - GEMM fix (align with ONNX)\n    - memory leak fix in memory pool\n    - increase FC precision for fp16 (fp32 accumulation)\n    Performance:\n    - cache for new topologies and devices\n    - conv1x1 with stride \u003e1 into eltwise optimization\n\n### Drop 10.0\n    New features:\n    - condition primitive\n    - fused convolution with bn and scale (backprop)\n    - scale/shift and mean/var as an output in batch norm\n    - add LSTM output selection\n    Bug fixes:\n    - memory pool fixes\n    UX:\n    - downgrade to cxx11\n    - add support for u8 data type in custom primitive\n    - library size optimizations\n    Performance:\n    - in place concatenation optimization\n    - conv1x1 with stride \u003e1 into eltwise optimization\n\n### Drop 9.2\n    New features:\n    - local convolution\n    - eltwise with stride\n\n### Drop 9.1\n    New features:\n    - select index primitive\n    - gemm primitive\n    Bug fixes:\n    - fix for output format in fully connected primitive\n\n### Drop 9.0\n    New features:\n    - log2 activation function\n    - support for i32 and i64 types\n    - 
select primitive\n    - border primitive\n    - tile primitive\n    Bug fixes:\n    - dilation \u003e input size fix\n\n### Drop 8.0\n    New features:\n    - lstm primitive\n    - average unpooling primitive\n    - serialization - dump weights, biases and kernels\n    - scale grad for input and weights primitive\n    Bug fixes:\n    - wrong gws in concatenation\n    - int8 layers\n    - convolution depthwise bias concatenation\n    - params in engine_info\n    - mutable_data filler\n    - momentum calculation\n    UX:\n    - kernel selector renaming\n    - bfyx_yxfb batched reorder\n    - code cleanups\n    - primitives allocation order\n\n### Drop 7.0\n    New features:\n    - support for img_info=4 in proposal_gpu\n    - support images format in winograd\n    - support for 2 or more inputs in eltwise\n    - priority and throttle hints\n    - deconvolution_grad_input primitive\n    - fc_grad_input and fc_grad_weights primitives\n    Bug fixes:\n    - tensor fixes (i.e. less operator fix)\n    - cascade concat fixes\n    - winograd fixes for bfyx format\n    - auto-tuning fixes for weights calculation\n    UX:\n    - memory pool (reusing memory buffers)\n    - added chosen kernel name in graph dump\n    - flush memory functionality\n    Performance:\n    - graph optimizations\n    - depth-concatenation with fused relu optimization\n    - winograd optimizations\n    - deconvolution optimizations (e.g. bfyx opt)\n\n### Drop 6.0\n    New features:\n    - fused winograd\n    - image support for weights\n    - yolo_region primitive support\n    - yolo_reorg primitive support\n    Bug fixes:\n    - winograd bias fix\n    - mean subtract fix\n    UX:\n    - update boost to 1.64.0\n    - extend graph dumps\n    Performance:\n    - update offline caches for newer drivers\n    - conv1x1 byxf optimization\n    - conv1x1 with images\n    - cascade depth concatenation fuse optimization\n\n### Drop 5.0\n    New features:\n    - split primitive\n    - upsampling primitive\n    - add preliminary Coffee Lake support\n    - 
uint8 weights support\n    - versioning\n    - offline autotuner cache\n    - Winograd phase 1 - not used yet\n    Bug fixes:\n    - in-place crop optimization bug fix\n    - output spatial padding in yxfb kernels fix\n    - local work sizes fix in softmax\n    - underflow fix in batch normalization\n    - average pooling corner case fix\n    UX:\n    - graph logger, dumps graphviz format files\n    - extended documentation with API diagram and graph compilation steps\n    Performance:\n    - softmax optimization\n    - lrn within channel optimization\n    - priorbox optimization\n    - constant propagation\n\n### Drop 4.0\n    New features:\n    - OOOQ execution model implementation\n    - depthwise separable convolution implementation\n    - kernel auto-tuner implementation\n    Bug fixes:\n    - dump hidden layer fix\n    - run single layer fix\n    - reshape fix\n    UX:\n    - enable RTTI\n    - better error handling/reporting\n    Performance:\n    - lrn optimization\n    - dynamic pruning for sparse fc layers\n    - reorder optimization\n    - concatenation optimization\n    - eltwise optimization\n    - activation fusing\n\n### Drop 3.0\n    Added:\n    - kernel selector\n    - custom layer\n    Changed:\n    - performance improvements\n    - bug fixes (deconvolution, softmax, reshape)\n    - apply fixes from community-reported issues\n\n### Drop 2.0\n    Added:\n    - step-by-step tutorial\n    Changed:\n    - performance optimization for: softmax, fully connected, eltwise, reshape\n    - bug fixes (conformance)\n\n### Drop 1.0\n    - initial drop of clDNN\n\n## Support\nPlease report issues and suggestions via\n[GitHub issues](https://github.com/01org/cldnn/issues).\n\n## How to Contribute\nWe welcome community contributions to clDNN. 
If you have an idea how to improve the library:\n\n- Share your proposal via\n [GitHub issues](https://github.com/01org/cldnn/issues)\n- Ensure you can build the product and run all the examples with your patch\n- In the case of a larger feature, create a test\n- Submit a [pull request](https://github.com/01org/cldnn/pulls)\n\nWe will review your contribution and, if any additional fixes or modifications\nare necessary, may provide feedback to guide you. When accepted, your pull\nrequest will be merged into our internal and GitHub repositories.\n\n## System Requirements\nclDNN supports Intel® HD Graphics and Intel® Iris® Graphics and is optimized for\n- Codename *Skylake*:\n    * Intel® HD Graphics 510 (GT1, *client* market)\n    * Intel® HD Graphics 515 (GT2, *client* market)\n    * Intel® HD Graphics 520 (GT2, *client* market)\n    * Intel® HD Graphics 530 (GT2, *client* market)\n    * Intel® Iris® Graphics 540 (GT3e, *client* market)\n    * Intel® Iris® Graphics 550 (GT3e, *client* market)\n    * Intel® Iris® Pro Graphics 580 (GT4e, *client* market)\n    * Intel® HD Graphics P530 (GT2, *server* market)\n    * Intel® Iris® Pro Graphics P555 (GT3e, *server* market)\n    * Intel® Iris® Pro Graphics P580 (GT4e, *server* market)\n- Codename *Apollolake*:\n    * Intel® HD Graphics 500\n    * Intel® HD Graphics 505\n- Codename *Kabylake*:\n    * Intel® HD Graphics 610 (GT1, *client* market)\n    * Intel® HD Graphics 615 (GT2, *client* market)\n    * Intel® HD Graphics 620 (GT2, *client* market)\n    * Intel® HD Graphics 630 (GT2, *client* market)\n    * Intel® Iris® Graphics 640 (GT3e, *client* market)\n    * Intel® Iris® Graphics 650 (GT3e, *client* market)\n    * Intel® HD Graphics P630 (GT2, *server* market)\n    * Intel® Iris® Pro Graphics 630 (GT2, *server* market)\n\nclDNN currently uses OpenCL™ with multiple Intel® OpenCL™ extensions and requires the Intel® Graphics Driver to run.\n\nclDNN requires a CPU with Intel® SSE/Intel® AVX support.\n\n---\n\nThe software 
dependencies are:\n- [CMake\\*](https://cmake.org/download/) 3.9 or later  \n- C++ compiler with partial or full C++11 standard support compatible with:\n    * GNU\\* Compiler Collection 4.8.2\n    * clang 3.5 or later\n    * [Intel® C++ Compiler](https://software.intel.com/en-us/intel-parallel-studio-xe) 17.0 or later\n    * Visual C++ 2015 (MSVC++ 19.0) or later\n\n\u003e Intel® CPU intrinsics header (`\u003cimmintrin.h\u003e`) must be available during compilation.\n\n- [Python](https://www.python.org/downloads/) 2.7 or later (scripts are compatible with both Python 2.7.x and Python 3.x)\n- *(optional)* [Doxygen\\*](http://www.stack.nl/~dimitri/doxygen/download.html) 1.8.13 or later  \n    Needed for manual generation of documentation from inline comments or for running the `docs` custom target, which will generate it automatically.\n\n\u003e [GraphViz\\*](http://www.graphviz.org/Download..php) (2.38 or later) is also recommended to generate documentation with all embedded diagrams.  \n(Make sure that the `dot` application is visible in the `PATH` environment variable.)\n\n---\n\nThe software was validated on:\n    * CentOS* 7.2 with GNU* Compiler Collection 5.2 (64-bit only), using [Intel® Graphics Compute Runtime for OpenCL(TM)](https://software.intel.com/en-us/articles/opencl-drivers).\n    * Windows® 10 and Windows® Server 2012 R2 with MSVC 14.0, using [Intel® Graphics Driver for Windows* [24.20] driver package](https://downloadcenter.intel.com/download/27803/Graphics-Intel-Graphics-Driver-for-Windows-10?v=t).\n\n    More information on Intel® OpenCL™ drivers can be found [here](https://software.intel.com/en-us/articles/opencl-drivers).\n\nWe recommend using the latest driver for Linux [link](https://github.com/intel/compute-runtime/releases) and the 24.20 driver for Windows [link](https://downloadcenter.intel.com/download/27803/Graphics-Intel-Graphics-Driver-for-Windows-10?v=t).\n\n## Installation\n\n### Building\n\nDownload [clDNN source 
code](https://github.com/01org/cldnn/archive/master.zip)\nor clone the repository to your system:\n\n```\n    git clone https://github.com/intel/cldnn.git\n```\n\nSatisfy all software dependencies and ensure that the versions are correct before building.\n\nclDNN uses multiple 3\u003csup\u003erd\u003c/sup\u003e-party components. They are stored in binary form in the `common` subdirectory. Currently they are prepared for MSVC++ and GCC\\*. They will be cloned with the repository.\n\n---\n\nclDNN uses a CMake-based build system. You can use the CMake command-line tool or the CMake GUI (`cmake-gui`) to generate the required solution.  \nOn Windows, you can call in `cmd` (or `powershell`):\n```shellscript\n    @REM Generate 32-bit solution (solution contains multiple build configurations)...\n    cmake -E make_directory build \u0026\u0026 cd build \u0026\u0026 cmake -G \"Visual Studio 14 2015\" ..\n    @REM Generate 64-bit solution (solution contains multiple build configurations)...\n    cmake -E make_directory build \u0026\u0026 cd build \u0026\u0026 cmake -G \"Visual Studio 14 2015 Win64\" ..\n```  \nThe created solution can be opened in Visual Studio 2015 or built using the appropriate `msbuild` tool\n(you can also use `cmake --build .` to select the build tool automatically).\n\nFor Unix and Linux systems:\n```shellscript\n    # Create GNU makefile for release clDNN and build it...\n    cmake -E make_directory build \u0026\u0026 cd build \u0026\u0026 cmake -DCMAKE_BUILD_TYPE=Release .. \u0026\u0026 make\n    # Create Ninja makefile for debug clDNN and build it...\n    cmake -E make_directory build \u0026\u0026 cd build \u0026\u0026 cmake -G Ninja -DCMAKE_BUILD_TYPE=Debug .. 
\u0026\u0026 ninja -k 20\n```\n\nYou can also call scripts in the main directory of the project which will create solutions/makefiles for clDNN (they\nwill generate solutions/makefiles in the `build` subdirectory and binary outputs will be written to the `build/out` subdirectory):\n- `create_msvc_mscc.bat` (Windows\\*, Visual Studio\\* 2015)\n- `create_unixmake_gcc.sh [Y|N] [\u003cdevtoolset-version\u003e]` (Linux\\*, GNU\\* or Ninja\\* makefiles, optional devtoolset support)\n    * If you specify the first parameter as `Y`, the Ninja makefiles will be generated.\n    * If you specify the second parameter (a number), CMake will be called via `scl` with the selected `devtoolset` version.\n\nThe CMake solution offers multiple options which you can specify using normal CMake syntax (`-D\u003coption-name\u003e=\u003cvalue\u003e`):\n\n| CMake option                              | Type     | Description                                                                  |\n|:------------------------------------------|:---------|:-----------------------------------------------------------------------------|\n| CMAKE\\_BUILD\\_TYPE                        | STRING   | Build configuration that will be used by generated makefiles (it does not affect multi-configuration generators like generators for Visual Studio solutions). Currently supported: `Debug` (default), `Release` |\n| CMAKE\\_INSTALL\\_PREFIX                    | PATH     | Install directory prefix.                                                    |\n| CLDNN\\_\\_ARCHITECTURE\\_TARGET             | STRING   | Architecture of target system (where binary output will be deployed). CMake will try to detect it automatically (based on selected generator type, host OS and compiler properties). Specify this option only if CMake has problems with detection. 
Currently supported: `Windows32`, `Windows64`, `Linux64` |\n| CLDNN\\_\\_OUTPUT\\_DIR (CLDNN\\_\\_OUTPUT\\_BIN\\_DIR, CLDNN\\_\\_OUTPUT\\_LIB\\_DIR) | PATH | Location where built artifacts will be written to. It is set automatically to roughly `build/out/\u003carch-target\u003e/\u003cbuild-type\u003e` subdirectory. For more control use: `CLDNN__OUTPUT_LIB_DIR` (specifies output path for static libraries) or `CLDNN__OUTPUT_BIN_DIR` (for shared libs and executables). |\n|                                           |          |                                                                              |\n| **CMake advanced option**                 | **Type** | **Description**                                                              |\n| PYTHON\\_EXECUTABLE                        | FILEPATH | Path to Python interpreter. CMake will try to detect Python. Specify this option only if CMake has problems locating Python. |\n| CLDNN\\_\\_IOCL\\_ICD\\_USE\\_EXTERNAL         | BOOL     | Use this option to enable use of external Intel® OpenCL™ SDK as a source for ICD binaries and headers (based on `INTELOCLSDKROOT` environment variable). Default: `OFF` |\n| CLDNN\\_\\_IOCL\\_ICD\\_VERSION               | STRING   | Version of Intel® OpenCL™ ICD binaries and headers to use (from `common` subdirectory). It is automatically selected by CMake (highest version). Specify it if you have multiple versions and want to use a different one than the automatically selected version. |\n|                                           |          |                                                                              |\n| CLDNN__COMPILE_LINK_ALLOW_UNSAFE_SIZE_OPT | BOOL     | Allow unsafe optimizations during linking (like aggressive dead code elimination, etc.). Default: `ON` |\n| CLDNN__COMPILE_LINK_USE_STATIC_RUNTIME    | BOOL     | Link with static C++ runtime. 
Default: `OFF` (shared C++ runtime is used)    |\n|                                           |          |                                                                              |\n| CLDNN__INCLUDE_CORE                       | BOOL     | Include core clDNN library project in generated makefiles/solutions. Default: `ON` |\n| CLDNN__INCLUDE_TESTS                      | BOOL     | Include tests application project (based on googletest framework) in generated makefiles/solutions. Default: `ON` |\n|                                           |          |                                                                              |\n| CLDNN__RUN_TESTS                          | BOOL     | Run tests after building `tests` project. This option requires the `CLDNN__INCLUDE_TESTS` option to be `ON`. Default: `OFF` |\n|                                           |          |                                                                              |\n| CLDNN__CMAKE_DEBUG                        | BOOL     | Enable extended debug messages in CMake. Default: `OFF`                      |\n\n---\n\nclDNN includes unit tests implemented using the googletest framework. To validate your build, run the `tests` target, e.g.:\n\n```\n    make tests\n```\n\n(Make sure that both `CLDNN__INCLUDE_TESTS` and `CLDNN__RUN_TESTS` were set to `ON` when invoking CMake.)\n\n### Generating documentation\n\nDocumentation is provided inline and can be generated in HTML format with Doxygen. We recommend using the latest\n[Doxygen\\*](http://www.stack.nl/~dimitri/doxygen/download.html) and [GraphViz\\*](http://www.graphviz.org/Download..php).\n\nDocumentation templates and configuration files are stored in the `docs` subdirectory. You can simply call:\n\n```shellscript\n    cd docs \u0026\u0026 doxygen\n```\nto generate HTML documentation in the `docs/html` subdirectory.\n\nThere is also a custom CMake target named `docs` which will generate documentation in the `CLDNN__OUTPUT_BIN_DIR/html` directory. 
For example, when using Unix makefiles, you can run:\n```\n    make docs\n```\nin order to create it.\n\n### Deployment\n\nThe special `install` target will place the API header files and libraries in `/usr/local`\n(`C:/Program Files/clDNN` or `C:/Program Files (x86)/clDNN` on Windows). To change\nthe installation path, use the option `-DCMAKE_INSTALL_PREFIX=\u003cprefix\u003e` when invoking CMake.\n\n---\n\n\n\\* Other names and brands may be claimed as the property of others.\n\nCopyright © 2017, Intel® Corporation\n","funding_links":[],"categories":["TODO scan for Android support in followings","C++"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintel%2FclDNN","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fintel%2FclDNN","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fintel%2FclDNN/lists"}