Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://facebookresearch.github.io/TensorComprehensions/
A domain specific language to express machine learning workloads.
domain-specific-language machine-learning
Last synced: 9 days ago
JSON representation
A domain specific language to express machine learning workloads.
- Host: GitHub
- URL: https://facebookresearch.github.io/TensorComprehensions/
- Owner: facebookresearch
- License: apache-2.0
- Archived: true
- Created: 2018-02-06T17:11:07.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2023-04-28T21:51:23.000Z (over 1 year ago)
- Last Synced: 2024-09-26T23:41:41.542Z (about 2 months ago)
- Topics: domain-specific-language, machine-learning
- Language: C++
- Homepage: https://facebookresearch.github.io/TensorComprehensions/
- Size: 36.7 MB
- Stars: 1,758
- Watchers: 108
- Forks: 211
- Open Issues: 90
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Codeowners: CodeOwners.md
Awesome Lists containing this project
- awesome-tensor-compilers - TensorComprehensions: Framework-Agnostic High-Performance Machine Learning Abstractions
README
# ![Tensor Comprehensions](docs/source/_static/img/tc-logo-full-color-with-text-2.png)
Tensor Comprehensions (TC) is a fully-functional C++ library to *automatically* synthesize high-performance machine learning kernels using [Halide](https://github.com/halide/Halide), [ISL](http://isl.gforge.inria.fr/) and NVRTC or LLVM. TC additionally provides basic integration with Caffe2 and PyTorch. We provide more details in our paper on [arXiv](https://arxiv.org/abs/1802.04730).
The library is designed to be highly portable and machine-learning-framework agnostic: it only requires a simple tensor library with memory allocation, offloading, and synchronization capabilities.
For now, we have integrated TC with [Caffe2](https://github.com/caffe2/caffe2) and [PyTorch](https://github.com/pytorch/pytorch/).
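To make that requirement concrete, the three capabilities map directly onto operations a host framework such as PyTorch already exposes. The following is an illustrative sketch only, not TC's actual integration code:

```python
import torch

# Memory allocation: the host framework owns the input and output buffers.
x = torch.randn(32, 512, 8, 28, 28)

# Offloading: data is moved to the device on which the generated kernels run.
x_gpu = x.cuda()

# Synchronization: wait for asynchronous GPU kernels to finish before
# reading results back on the host.
torch.cuda.synchronize()
```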
# A simple example
The following example illustrates a simple but powerful feature of the library: the ability to JIT-compile high-performance machine learning kernels on demand, for specific sizes.
```python
import tensor_comprehensions as tc
import torch
lang = """
def tensordot(float(N, C1, C2, H, W) I0, float(N, C2, C3, H, W) I1) -> (O) {
    O(n, c1, c3, h, w) +=! I0(n, c1, c2, h, w) * I1(n, c2, c3, h, w)
}
"""
N, C1, C2, C3, H, W = 32, 512, 8, 2, 28, 28
tensordot = tc.define(lang, name="tensordot")
I0, I1 = torch.randn(N, C1, C2, H, W).cuda(), torch.randn(N, C2, C3, H, W).cuda()
best_options = tensordot.autotune(I0, I1, cache=True)
out = tensordot(I0, I1, options=best_options)
```

After a few generations of `autotuning` on a 2-GPU P100 system, we see results resembling:
![Autotuning Sample](docs/source/_static/img/autotuning.png)
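For reference, the `tensordot` TC above performs a contraction over the shared `C2` dimension. A quick way to sanity-check the generated kernel against plain PyTorch (an illustrative sketch reusing `I0`, `I1`, and `out` from the snippet above, and assuming a PyTorch version that provides `torch.einsum`) is:

```python
import torch

# Reference: O(n, c1, c3, h, w) = sum over c2 of I0(n, c1, c2, h, w) * I1(n, c2, c3, h, w)
reference = torch.einsum('nabhw,nbchw->nachw', I0, I1)

# The autotuned TC kernel should agree with the reference up to floating-point error.
print(torch.allclose(out, reference, atol=1e-4))
```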
In C++ a minimal autotuning example resembles the [following](tc/examples/tensordot.cc):
```cpp
TEST(TensorDot, SimpleAutotune) {
  // 1. Define and setup the TC compilation unit with CUDA memory
  // management backed by ATen tensors.
  std::string tc = R"TC(
def tensordot(float(N, C1, C2, H, W) I0,
              float(N, C2, C3, H, W) I1) -> (O)
{
    O(n, c1, c3, h, w) +=! I0(n, c1, r_c2, h, w) * I1(n, r_c2, c3, h, w)
}
  )TC";

  // 2. Allocate tensors with random data.
  at::Tensor I0 = at::CUDA(at::kFloat).rand({32, 8, 16, 17, 25});
  at::Tensor I1 = at::CUDA(at::kFloat).rand({32, 16, 2, 17, 25});

  // 3. Run autotuning with evolutionary search starting from a naive option.
  auto naiveOptions = Backend::MappingOptionsType::makeNaiveMappingOptions();
  tc::aten::ATenAutotuner<tc::CudaBackend, tc::autotune::GeneticSearch>
      geneticAutotuneATen(tc);
  auto bestOption =
      geneticAutotuneATen.tune("tensordot", {I0, I1}, {naiveOptions});

  // 4. Compile and run the TC with the best option after allocating output
  // tensors.
  auto pExecutor =
      tc::aten::compile<tc::CudaBackend>(tc, "tensordot", {I0, I1}, bestOption[0]);
  auto outputs = tc::aten::prepareOutputs(tc, "tensordot", {I0, I1});
  auto timings = tc::aten::profile(*pExecutor, {I0, I1}, outputs);
  std::cout << "tensordot size I0: " << I0.sizes() << ", "
            << "size I1: " << I1.sizes()
            << " ran in: " << timings.kernelRuntime.toMicroSeconds() << "us\n";
}
```

Note that we only need to **autotune a TC once** to obtain reasonable mapping options
that can translate to other problem sizes for a given TC as the following snippet
illustrates:
```cpp
// 5. Reuse bestOptions from autotuning on another kernel
for (auto sizes : std::vector<std::pair<at::IntList, at::IntList>>{
         {{4, 9, 7, 16, 14}, {4, 7, 3, 16, 14}},
         {{8, 5, 11, 10, 10}, {8, 11, 16, 10, 10}},
     }) {
  at::Tensor I0 = makeATenTensor(sizes.first);
  at::Tensor I1 = makeATenTensor(sizes.second);
  auto pExecutor =
      tc::aten::compile<tc::CudaBackend>(tc, "tensordot", {I0, I1}, bestOption[0]);
  auto outputs = tc::aten::prepareOutputs(tc, "tensordot", {I0, I1});
  auto timings = tc::aten::profile(*pExecutor, {I0, I1}, outputs);
  std::cout << "tensordot size I0: " << I0.sizes() << ", "
            << "size I1: " << I1.sizes()
            << " ran in: " << timings.kernelRuntime.toMicroSeconds()
            << "us\n";
}
```

Putting it all together, one may see:
```shell
> build$ ./examples/example_simple
[==========] Running 1 test from 1 test case.
[----------] Global test environment set-up.
[----------] 1 test from TensorDot
[ RUN ] TensorDot.SimpleAutotune
Generation 0 Jobs(Compiled, GPU)/total (10, 10)/10 (best/median/worst)us: 226/4238/7345
Generation 1 Jobs(Compiled, GPU)/total (10, 10)/10 (best/median/worst)us: 220/221/233
Generation 2 Jobs(Compiled, GPU)/total (10, 10)/10 (best/median/worst)us: 220/221/234
tensordot size I0: [16, 8, 16, 17, 25], size I1: [16, 16, 2, 17, 25] ran in: 239us
tensordot size I0: [4, 9, 7, 16, 14], size I1: [4, 7, 3, 16, 14] ran in: 56us
tensordot size I0: [8, 5, 11, 10, 10], size I1: [8, 11, 16, 10, 10] ran in: 210us
[ OK ] TensorDot.SimpleAutotune (27812 ms)
[----------] 1 test from TensorDot (27812 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test case ran. (27812 ms total)
[ PASSED ] 1 test.
```

We have not yet characterized the precise fraction of peak performance we obtain, but it is not uncommon to obtain 80%+ of peak shared-memory bandwidth after autotuning. Solid register-level optimizations are still in the works, but TC in its current form already addresses the productivity gap between the needs of research and the needs of production. That is why we are excited to share it with the entire community and to bring this collaborative effort into the open.
# Documentation
**General**: You can find detailed information about Tensor Comprehensions [here](https://facebookresearch.github.io/TensorComprehensions/).
**C++ API**: We also provide documentation for our C++ API, which can be found [here](https://facebookresearch.github.io/TensorComprehensions/api/).
# Installation
## Binaries
We provide a conda package to make it easy to install and use the TC binaries. Please refer to our documentation
[here](https://facebookresearch.github.io/TensorComprehensions/framework/pytorch_integration/getting_started.html) for instructions.

## From Source
You can find documentation [here](https://facebookresearch.github.io/TensorComprehensions/) with instructions for building TC via Docker, via conda packages, or in a non-conda environment.
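Once installed, a minimal smoke test of the Python bindings can reuse the `tc.define`/`autotune` API shown in the example above (an illustrative sketch; the kernel and sizes here are arbitrary):

```python
import tensor_comprehensions as tc
import torch

# A trivial pointwise TC used only to confirm that definition, autotuning,
# and execution all work end to end.
lang = """
def add(float(N) A, float(N) B) -> (C) {
    C(n) = A(n) + B(n)
}
"""
add = tc.define(lang, name="add")
A, B = torch.randn(1024).cuda(), torch.randn(1024).cuda()
options = add.autotune(A, B, cache=True)
C = add(A, B, options=options)
print(C.size())
```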
# Communication
* **Email**: [email protected]
* **GitHub issues**: bug reports, feature requests, install issues, RFCs, thoughts, etc.

# Code of Conduct
See the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file for more details.

# License
Tensor Comprehensions is distributed under a permissive Apache v2.0 license; see the [LICENSE](LICENSE) file for more details.

# Contributing
See the [CONTRIBUTING.md](CONTRIBUTING.md) file for more details.