{"id":13415272,"url":"https://github.com/plaidml/plaidml","last_synced_at":"2025-04-05T10:06:41.231Z","repository":{"id":37550115,"uuid":"100326126","full_name":"plaidml/plaidml","owner":"plaidml","description":"PlaidML is a framework for making deep learning work everywhere.","archived":false,"fork":false,"pushed_at":"2023-07-23T20:16:29.000Z","size":158660,"stargazers_count":4586,"open_issues_count":265,"forks_count":396,"subscribers_count":155,"default_branch":"plaidml-v1","last_synced_at":"2025-03-29T09:06:45.160Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://ai.intel.com/plaidml","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/plaidml.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null}},"created_at":"2017-08-15T01:43:24.000Z","updated_at":"2025-03-28T05:12:10.000Z","dependencies_parsed_at":"2023-01-23T21:01:47.666Z","dependency_job_id":"75163a8b-738d-4006-96b4-9b9f8d072fad","html_url":"https://github.com/plaidml/plaidml","commit_stats":null,"previous_names":[],"tags_count":12,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/plaidml%2Fplaidml","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/plaidml%2Fplaidml/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/plaidml%2Fplaidml/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/plaidml%2Fplaidml/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/plaidml","download_url":"https://codeload.github.com/plaidml/plaidml/tar.gz/refs/heads/plaidml-v1",
"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":247318742,"owners_count":20919484,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-30T21:00:46.334Z","updated_at":"2025-04-05T10:06:36.222Z","avatar_url":"https://github.com/plaidml.png","language":"C++","readme":"\u003cdiv align=center\u003e\u003ca href=\"https://www.intel.ai/plaidml\"\u003e\u003cimg\nsrc=\"docs/assets/images/plaid-final.png\" height=\"200\"\u003e\u003c/a\u003e\u003cbr\u003e\n\n*A platform for making deep learning work everywhere.*\n\n\u003c/div\u003e\n\n[![License]](https://github.com/plaidml/plaidml/blob/master/LICENSE)\n[![Build status](https://badge.buildkite.com/87cb87799399a2e27c6f99b1839a66e9101b6f132b46d36089.svg)](https://buildkite.com/intel/tpp-plaidml)\n\n# To Our Users\n\nFirst off, we’d like to thank you for choosing PlaidML. Whether you’re a new\nuser or a multi-year veteran, we greatly appreciate you for the time you’ve\nspent tinkering around with our source code, sending us feedback, and improving\nour codebase. PlaidML would truly not be the same without you.\n\nThe feedback we have received from our users indicates an ever-increasing need\nfor performance, programmability, and portability.  During the past few months,\nwe have been restructuring PlaidML to address those needs. Below is a summary of\nthe biggest changes: \n* We’ve adopted [MLIR], an extensible compiler infrastructure that has gained\n  industry-wide adoption since its release in early 2019. 
MLIR makes it easier\n  to integrate new software and hardware into our compiler stack and to write\n  optimizations for our compiler.\n* We’ve worked extensively on [Stripe], our low-level intermediate\n  representation within PlaidML. Stripe contains optimizations that greatly\n  improve the performance of our compiler. While our work on Stripe began before\n  we decided to use MLIR, we are in the process of fully integrating Stripe into\n  MLIR.\n* We created our C++/Python embedded domain-specific language ([EDSL])\n  to improve the programmability of PlaidML.\n\nToday, we’re announcing a new branch of PlaidML — `plaidml-v1`. This will act as\nour development branch going forward and will allow us to more rapidly prototype\nthe changes we’re making without breaking our existing user base. As a\nprecaution, please note that certain features, tests, and hardware targets may\nbe broken in `plaidml-v1`, as it is a research project. Right now `plaidml-v1`\nonly supports Intel and AMD CPUs with AVX2 and AVX512.\n\nYou can continue to use code on the `master` branch or from our releases on\nPyPI. For your convenience, the contents of our `master` branch will be released\nas version 0.7.0. There is no further development in that branch.\n\n-----\n\nPlaidML is an advanced and portable tensor compiler for enabling deep learning\non laptops, embedded devices, or other devices where the available computing\nhardware is not well supported or the available software stack contains\nunpalatable license restrictions.\n\nPlaidML sits underneath common machine learning frameworks, enabling users to\naccess any hardware supported by PlaidML. 
PlaidML supports [Keras], [ONNX], and\n[nGraph].\n\nAs a component within the [nGraph Compiler stack], PlaidML further extends the\ncapabilities of specialized deep-learning hardware (especially GPUs) and makes\nit both easier and faster to access or make use of subgraph-level optimizations\nthat would otherwise be bounded by the compute limitations of the device.\n\nAs a component under [Keras], PlaidML can accelerate training workloads with\ncustomized or automatically-generated Tile code. It works especially well on\nGPUs, and it doesn't require use of CUDA/cuDNN on Nvidia hardware, while\nachieving comparable performance.\n\nPlaidML works on all major operating systems: Linux, macOS, and Windows.\n\n\n## Building PlaidML from source\n\nThanks to conda, PlaidML builds on all major Linux distributions.\n\n```\nexport PLAIDML_WORKSPACE_DIR=[choose a directory of your choice]\n\n# setting up miniconda env\ncd ${PLAIDML_WORKSPACE_DIR}\nwget https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh\nbash Miniconda3-py37_4.12.0-Linux-x86_64.sh -p ${PLAIDML_WORKSPACE_DIR}/miniconda3\neval \"$(${PLAIDML_WORKSPACE_DIR}/miniconda3/bin/conda shell.bash hook)\"\nconda activate\n\n# clone plaidml-v1 and set up env\ngit clone https://github.com/plaidml/plaidml.git --recursive -b plaidml-v1\ncd plaidml\nconda env create -f environment.yml -p .cenv/\nconda activate .cenv/\n\n# if needed, create a ninja-build sym-link in .cenv/bin\ncd .cenv/bin/\nln -s ninja ninja-build\ncd ../../\n\n# preparing PlaidML build\n./configure\n\n# building PlaidML\ncd build-x86_64/Release\nninja \u0026\u0026 PYTHONPATH=$PWD python plaidml/plaidml_setup.py\n```\n\n## Demos and Related Projects\n\n### Plaidbench\n\n[Plaidbench] is a performance testing suite designed to help users compare the\nperformance of different cards and different frameworks.\n\n```\ncd build-x86_64/Release\nninja plaidbench_py \u0026\u0026 PYTHONPATH=$PWD 
KMP_AFFINITY=granularity=fine,verbose,compact,1,0 OMP_NUM_THREADS=8 python plaidbench/plaidbench.py -n128 keras resnet50\n```\n\nThe command above is suited for 8-core Intel/AMD CPUs with hyper-threading enabled. For example, on an Intel i9-11900K we expect around 8.5 ms latency.\n\n\n## Reporting Issues\n\nOpen a ticket on [GitHub].\n\n## CI \u0026 Validation\n\n### Validated Hardware\n\nA comprehensive set of tests for each release is run against the hardware\ntargets listed below.\n\n* AMD CPUs with AVX2 and AVX512\n* Intel CPUs with AVX2 and AVX512\n\n### Validated Networks\n\nWe support all of the Keras application networks from\ncurrent 2.x releases. Validated networks are tested for performance and\ncorrectness as part of our continuous integration system.\n\n* CNNs\n  * Inception v3\n  * ResNet50\n  * VGG19\n  * VGG16\n  * Xception\n  * DenseNet\n\n[LIBXSMM]: https://github.com/libxsmm/libxsmm/\n[nGraph Compiler stack]: https://ngraph.nervanasys.com/docs/latest/\n[Keras]: https://keras.io/\n[GitHub]: https://github.com/plaidml/plaidml/issues\n[ONNX]: https://github.com/onnx\n[nGraph]: https://github.com/NervanaSystems/ngraph\n[License]: https://img.shields.io/badge/License-Apache%202.0-blue.svg\n[Build status]: https://badge.buildkite.com/5c9add6b89a14fd498e69a5035062368480e688c4c74cbfab3.svg?branch=master\n[Plaidbench]: https://github.com/plaidml/plaidml/tree/plaidml-v1/plaidbench\n[EDSL]: https://plaidml.github.io/plaidml/docs/edsl\n[MLIR]: https://mlir.llvm.org/\n[Stripe]: https://arxiv.org/abs/1903.06498\n","funding_links":[],"categories":["C++","Open Source Projects","ML frameworks \u0026 applications","AI","C++ (70)","ML Frameworks, Libraries, and Tools","Flutter Tools","NLP Tools, Libraries, and Frameworks","Deep Learning Tools, Libraries, and Frameworks","Apache Spark Tools, Libraries, and Frameworks","Tools"],"sub_categories":["Interfaces","viii. 
Linear Regression","Winetricks","Objective-C Tools, Libraries, and Frameworks","Mesh networks"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fplaidml%2Fplaidml","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fplaidml%2Fplaidml","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fplaidml%2Fplaidml/lists"}