{"id":13532143,"url":"https://github.com/zhangxp1998/TensorIR","last_synced_at":"2025-04-01T20:31:27.107Z","repository":{"id":71712117,"uuid":"236219508","full_name":"zhangxp1998/TensorIR","owner":"zhangxp1998","description":null,"archived":false,"fork":false,"pushed_at":"2020-05-28T23:58:39.000Z","size":417,"stargazers_count":5,"open_issues_count":0,"forks_count":0,"subscribers_count":3,"default_branch":"master","last_synced_at":"2025-03-25T03:24:08.333Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":null,"language":"Scala","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/zhangxp1998.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2020-01-25T19:34:52.000Z","updated_at":"2024-03-14T02:13:32.000Z","dependencies_parsed_at":null,"dependency_job_id":"86f3fe78-a1db-47de-9b91-ec82debada21","html_url":"https://github.com/zhangxp1998/TensorIR","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zhangxp1998%2FTensorIR","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zhangxp1998%2FTensorIR/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zhangxp1998%2FTensorIR/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/zhangxp1998%2FTensorIR/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/zhangxp1998","download_url":"https://codeload.github.com/zhangxp1998/TensorIR/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"g
ithub","repositories_count":246709923,"owners_count":20821297,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-01T07:01:08.530Z","updated_at":"2025-04-01T20:31:22.096Z","avatar_url":"https://github.com/zhangxp1998.png","language":"Scala","readme":"# TensorIR\n\n## Overview\n\nTensorIR is a scala library that allows you to train a Neural Network with relatively few lines of code. It will automatically generate efficient C++ code, optimize it (for now there's only memory planning optimization), compile it, and run it.\n\n\n\n## Repository Structure\n\n* `src` : Contains scala code responsible for generating C++ code\n  * `src/scala/tensor/ir/` contains frontend code that creates IR nodes\n    * `src/scala/tensor/ir/CPUTensorOps` defines basic tensor operations: Plus/Sub/Multiply/Divide, convolution, batchnorm, etc.\n    * `src/scala/tensor/ir/CPUTensorDiff` defines Auto-Diff versions of the same operations.\n    * `src/scala/tensor/ir/ResNet` contains a small example neural network built with the current IR. It currently runs on CPU backend, to use GPU backend, change `val dslDriver = new CPUTensorDiffDriverC[String,Unit]` to `val dslDriver = new GPUTensorDiffDriverC[String,Unit]` . Simply switching the driver used is sufficient.\n  * `src/scala/tensor/ir/backend` contains backend code that generates C++ (or CUDA) code from IR nodes created by the frontend.\n  * `src/scala/tensor/ir/backend/MemoryAnalysis.scala` is responsible for extracting tensor lifetime information. (when is a tensor allocated, when can it be freed.) 
It returns a `Map[Int, MemoryEvent]`, where the integer represents an arbitrary timestamp and `MemoryEvent` is an event that signals either the beginning or the end of a tensor's lifetime.
  * `src/scala/tensor/ir/StagedMemoryAllocstor` is responsible for taking in tensor lifetime information and emitting a feasible memory plan. It uses a simple best-fit strategy. `MemorySolver` in the same directory uses Z3, but it is too slow.
  * `src/scala/tensor/ir/backend/CPUMemoryPlanningTransformer` is responsible for taking in a memory plan (emitted by `StagedMemoryAllocstor` or `MemorySolver`) and an IR graph, and returning a modified IR graph with the specified memory plan deployed.
* `gen` contains build definition files for the generated C++ (or CUDA) code, as well as runtime libraries for the generated code. Currently, `CMake` is used to build the generated code.
* `lms-clean` is a submodule of the Lightweight Modular Staging (LMS) framework.
  * `TensorIR` uses a [fork](https://github.com/zhangxp1998/lms-clean) of the LMS framework. This fork has two important modifications:
    * It prevents inlining of some tensor operations, to preserve tensor lifetime information.
    * It uses `CMake` to build the generated source code, instead of manually synthesizing compile commands.
* `test` contains a few unit test cases for the CPU backend.

## Dependencies

The CPU backend relies on Intel's [MKL-dnn](https://github.com/oneapi-src/oneDNN) (installable via `brew install mkl-dnn` on macOS); the GPU backend relies on CUDA, cuDNN, and Thrust.
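To make the memory-analysis step more concrete, here is a minimal sketch of how a timestamp-keyed event map (the shape `MemoryAnalysis` is described as producing) can be folded into per-tensor lifetimes. The type and function names (`Allocate`, `Free`, `Lifetime`, `lifetimes`) are illustrative assumptions, not the library's actual API.

```scala
// Hypothetical event types: an Allocate/Free pair brackets a tensor's lifetime.
sealed trait MemoryEvent
case class Allocate(tensor: Int, size: Long) extends MemoryEvent
case class Free(tensor: Int) extends MemoryEvent

case class Lifetime(start: Int, end: Int, size: Long)

// Walk events in timestamp order, pairing each Allocate with its Free.
def lifetimes(events: Map[Int, MemoryEvent]): Map[Int, Lifetime] = {
  val open = scala.collection.mutable.Map.empty[Int, (Int, Long)] // tensor -> (start, size)
  val done = scala.collection.mutable.Map.empty[Int, Lifetime]
  for ((t, ev) <- events.toSeq.sortBy(_._1)) ev match {
    case Allocate(id, size) => open(id) = (t, size)
    case Free(id) =>
      val (start, size) = open.remove(id).get
      done(id) = Lifetime(start, t, size)
  }
  // Tensors that are never freed stay live until the last timestamp.
  val last = if (events.isEmpty) 0 else events.keys.max
  for ((id, (start, size)) <- open) done(id) = Lifetime(start, last, size)
  done.toMap
}
```

Each resulting `Lifetime` is exactly the interval a planner needs: tensors whose intervals overlap must occupy disjoint memory.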
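The best-fit strategy attributed to `StagedMemoryAllocstor` can be sketched as follows: place tensors in allocation order, and for each one, pick the smallest free gap (among memory regions not occupied by a simultaneously live tensor) that still fits it. This is a simplified illustration under assumed names (`Tensor`, `planMemory`), not the repository's actual implementation.

```scala
// A tensor's lifetime interval [start, end] and its size in bytes.
case class Tensor(id: Int, start: Int, end: Int, size: Long)

// Returns a byte offset per tensor id such that tensors with overlapping
// lifetimes never overlap in memory.
def planMemory(tensors: Seq[Tensor]): Map[Int, Long] = {
  var placed = List.empty[(Tensor, Long)]
  for (t <- tensors.sortBy(_.start)) {
    // Memory intervals of already-placed tensors whose lifetimes overlap t's.
    val busy = placed.collect {
      case (o, off) if o.start <= t.end && t.start <= o.end => (off, off + o.size)
    }.sortBy(_._1)
    // Merge overlapping busy intervals so gaps between them are well defined.
    val merged = busy.foldLeft(List.empty[(Long, Long)]) {
      case ((lo, hi) :: rest, (s, e)) if s <= hi => (lo, math.max(hi, e)) :: rest
      case (acc, iv) => iv :: acc
    }.reverse
    // Candidate gaps: before/between busy intervals, plus an unbounded tail gap.
    val gaps = (0L +: merged.map(_._2)).zip(merged.map(_._1) :+ Long.MaxValue)
    val fits = gaps.filter { case (lo, hi) => hi - lo >= t.size }
    // Best fit: the smallest gap that still holds t.
    val (offset, _) = fits.minBy { case (lo, hi) => hi - lo }
    placed = (t, offset) :: placed
  }
  placed.map { case (t, off) => t.id -> off }.toMap
}
```

For example, a tensor allocated after another one dies can reuse its region, so peak memory can be well below the sum of all tensor sizes.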