Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/zhangxp1998/TensorIR
- Host: GitHub
- URL: https://github.com/zhangxp1998/TensorIR
- Owner: zhangxp1998
- Created: 2020-01-25T19:34:52.000Z (almost 5 years ago)
- Default Branch: master
- Last Pushed: 2020-05-28T23:58:39.000Z (over 4 years ago)
- Last Synced: 2024-10-26T23:44:34.119Z (about 2 months ago)
- Language: Scala
- Size: 407 KB
- Stars: 5
- Watchers: 4
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-xmake - TensorIR
README
# TensorIR
## Overview
TensorIR is a Scala library that lets you train a neural network in relatively few lines of code. It automatically generates efficient C++ code, optimizes it (currently the only optimization is memory planning), compiles it, and runs it.
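The generate-then-compile flow can be illustrated with a minimal, self-contained sketch: Scala code builds IR nodes, and a backend walks the IR to emit C++ source text. The node names below are illustrative only, not TensorIR's actual classes.

```scala
// Minimal staging sketch: build an IR in Scala, lower it to C++ text.
// These node names are illustrative, not TensorIR's actual classes.
sealed trait Exp
case class Const(v: Double)    extends Exp
case class Sym(name: String)   extends Exp
case class Add(a: Exp, b: Exp) extends Exp
case class Mul(a: Exp, b: Exp) extends Exp

// Backend: recursively emit a C++ expression for each IR node.
def emitCpp(e: Exp): String = e match {
  case Const(v)  => v.toString
  case Sym(n)    => n
  case Add(a, b) => s"(${emitCpp(a)} + ${emitCpp(b)})"
  case Mul(a, b) => s"(${emitCpp(a)} * ${emitCpp(b)})"
}

// y = w * x + 1.0, staged as an IR tree rather than computed directly.
val y: Exp = Add(Mul(Sym("w"), Sym("x")), Const(1.0))
```

A real backend emits full kernels and build files rather than single expressions, but the shape of the pipeline (IR construction in Scala, code emission to C++) is the same.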
## Repository Structure
* `src` : Contains Scala code responsible for generating C++ code
* `src/scala/tensor/ir/` contains frontend code that creates IR nodes
* `src/scala/tensor/ir/CPUTensorOps` defines basic tensor operations: Plus/Sub/Multiply/Divide, convolution, batchnorm, etc.
* `src/scala/tensor/ir/CPUTensorDiff` defines Auto-Diff versions of the same operations.
* `src/scala/tensor/ir/ResNet` contains a small example neural network built with the current IR. It currently runs on the CPU backend; to use the GPU backend, change `val dslDriver = new CPUTensorDiffDriverC[String,Unit]` to `val dslDriver = new GPUTensorDiffDriverC[String,Unit]`. Switching the driver is all that is required.
* `src/scala/tensor/ir/backend` contains backend code that generates C++ (or CUDA) code from IR nodes created by the frontend.
* `src/scala/tensor/ir/backend/MemoryAnalysis.scala` is responsible for extracting tensor lifetime information (when a tensor is allocated and when it can be freed). It returns a `Map[Int, MemoryEvent]`, where the integer is an arbitrary timestamp and `MemoryEvent` is an event that signals either the beginning or the end of a tensor's lifetime.
* `src/scala/tensor/ir/StagedMemoryAllocstor` is responsible for taking in tensor lifetime information and emitting a feasible memory plan. It uses a simple best-fit strategy. `MemorySolver` in the same directory uses Z3, but it is too slow.
* `src/scala/tensor/ir/backend/CPUMemoryPlanningTransformer` is responsible for taking in a memory plan (emitted by `StagedMemoryAllocstor` or `MemorySolver`) and an IR graph, and returning a modified IR graph with the specified memory plan deployed.
* `gen` Contains build definition files for the generated C++ (or CUDA) code, as well as runtime libraries for it. Currently, `CMake` is used to build the generated code.
* `lms-clean` is a submodule of the Lightweight Modular Staging framework.
* `TensorIR` uses a [fork](https://github.com/zhangxp1998/lms-clean) of the LMS framework. This fork has two important modifications:
* Prevent inlining of some tensor operations to preserve lifetime information of tensors.
* Use `CMake` to build the generated source code, instead of manually synthesizing compile commands.
* `test` contains a few unit test cases for the CPU backend.

## Dependencies
The CPU backend relies on Intel's [MKL-DNN](https://github.com/oneapi-src/oneDNN) (installable via `brew install mkl-dnn` on macOS); the GPU backend relies on CUDA, cuDNN, and Thrust.
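The memory-planning step described under `backend` above (lifetime extraction followed by best-fit placement) can be sketched in a self-contained way. All class and function names here are illustrative, not TensorIR's actual API: given each tensor's lifetime (allocation time, free time, size), assign base offsets so that tensors whose lifetimes overlap never share memory.

```scala
// Illustrative sketch of lifetime-driven memory planning; names are
// assumptions, not TensorIR's actual classes.
case class Lifetime(alloc: Int, free: Int, size: Long)

// Best-fit: among the gaps between currently live blocks, pick the
// smallest gap that fits; otherwise append past the last live block.
def bestFitOffset(size: Long, live: List[(Long, Long)]): Long = {
  var cursor = 0L
  var gaps = List.empty[(Long, Long)] // (start, length)
  for ((off, sz) <- live.sortBy(_._1)) {
    if (off > cursor) gaps = (cursor, off - cursor) :: gaps
    cursor = math.max(cursor, off + sz)
  }
  val fitting = gaps.filter(_._2 >= size)
  if (fitting.isEmpty) cursor else fitting.minBy(_._2)._1
}

// Place tensors in allocation order; only tensors whose lifetimes
// overlap the current one constrain its placement.
def plan(tensors: List[(String, Lifetime)]): Map[String, Long] = {
  var placed = Map.empty[String, (Long, Lifetime)]
  for ((name, lt) <- tensors.sortBy(_._2.alloc)) {
    val live = placed.values.collect {
      case (off, o) if o.alloc < lt.free && lt.alloc < o.free => (off, o.size)
    }.toList
    placed += name -> ((bestFitOffset(lt.size, live), lt))
  }
  placed.map { case (n, (off, _)) => n -> off }
}
```

Because lifetime overlap is the only constraint, two tensors that are never live at the same time can be assigned the same offset, which is exactly the memory saving the planner is after.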