{"id":24491109,"url":"https://github.com/tile-ai/tilelang","last_synced_at":"2026-03-09T02:32:06.325Z","repository":{"id":273351479,"uuid":"867004514","full_name":"tile-ai/tilelang","owner":"tile-ai","description":" Domain-specific language designed to streamline the development of high-performance GPU/CPU/Accelerators kernels","archived":false,"fork":false,"pushed_at":"2025-09-25T04:05:53.000Z","size":14961,"stargazers_count":1670,"open_issues_count":76,"forks_count":160,"subscribers_count":13,"default_branch":"main","last_synced_at":"2025-09-25T04:42:13.779Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://tilelang.com/","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/tile-ai.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null,"notice":null,"maintainers":null,"copyright":null,"agents":null,"dco":null,"cla":null}},"created_at":"2024-10-03T09:25:45.000Z","updated_at":"2025-09-25T04:18:12.000Z","dependencies_parsed_at":"2025-02-09T16:25:19.689Z","dependency_job_id":"26c11151-c2fa-4f39-8bce-29fa5921ebad","html_url":"https://github.com/tile-ai/tilelang","commit_stats":null,"previous_names":["tile-ai/tilelang"],"tags_count":10,"template":false,"template_full_name":null,"purl":"pkg:github/tile-ai/tilelang","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tile-ai%2Ftilelang","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tile-ai%2Ftilelang/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tile-ai%2Ftilelang/re
leases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tile-ai%2Ftilelang/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/tile-ai","download_url":"https://codeload.github.com/tile-ai/tilelang/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/tile-ai%2Ftilelang/sbom","scorecard":null,"host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":276999945,"owners_count":25742817,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","status":"online","status_checked_at":"2025-09-25T02:00:09.612Z","response_time":80,"last_error":null,"robots_txt_status":"success","robots_txt_updated_at":"2025-07-24T06:49:26.215Z","robots_txt_url":"https://github.com/robots.txt","online":true,"can_crawl_api":true,"host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2025-01-21T18:01:28.147Z","updated_at":"2025-10-02T13:30:41.865Z","avatar_url":"https://github.com/tile-ai.png","language":"C++","readme":"\u003cimg src=./images/logo-row.svg /\u003e\n\n\u003cdiv align=\"center\"\u003e\n\n# Tile Language\n[![PyPI version](https://badge.fury.io/py/tilelang.svg)](https://badge.fury.io/py/tilelang)\n[![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/tile-ai/tilelang) [![Discord](https://img.shields.io/badge/Discord-%235865F2.svg?logo=discord\u0026logoColor=white)](https://discord.gg/TUrHyJnKPG)\n\n\u003c/div\u003e\n\nTile Language (**tile-lang**) is a concise domain-specific language designed to streamline the development of high-performance GPU/CPU kernels (e.g., GEMM, Dequant GEMM, FlashAttention, 
LinearAttention). By employing a Pythonic syntax with an underlying compiler infrastructure on top of [TVM](https://tvm.apache.org/), tile-lang allows developers to focus on productivity without sacrificing the low-level optimizations necessary for state-of-the-art performance.\n\n\u003cimg src=./images/MatmulExample.png /\u003e\n\n## Latest News\n- 09/29/2025 🎉: Thrilled to announce that AscendC and AscendNPU IR backends targeting Huawei Ascend chips are now supported!\nCheck out the preview here:\n🔗 [link](https://github.com/tile-ai/tilelang-ascend).\nThis includes implementations across two branches:\n[ascendc_pto](https://github.com/tile-ai/tilelang-ascend) and\n[npuir](https://github.com/tile-ai/tilelang-ascend/tree/npuir).\nFeel free to explore and share your feedback!\n- 07/04/2025 🚀: Introduced `T.gemm_sp` for 2:4 sparse tensor core support; check out [Pull Request #526](https://github.com/tile-ai/tilelang/pull/526) for details.\n- 06/05/2025 ✨: Added an [NVRTC Backend](https://github.com/tile-ai/tilelang/pull/461) to significantly reduce compilation time for cute templates!\n- 04/14/2025 🚀: Added a high-performance FlashMLA implementation for the AMD MI300X, achieving performance parity with Aiter's hand-optimized assembly kernels! See [example_mla_amd](./examples/deepseek_mla/amd/README.md) for details.\n- 03/03/2025 🚀: Added high-performance MLA Decoding support using only 80 lines of Python code, achieving performance on par with FlashMLA on H100 (see [example_mla_decode.py](./examples/deepseek_mla/example_mla_decode.py))! 
We also provide [documentation](./examples/deepseek_mla/README.md) explaining how TileLang achieves this.\n- 02/15/2025 ✨: Added WebGPU Codegen support; see [Pull Request #86](https://github.com/tile-ai/tilelang/pull/86)!\n- 02/12/2025 ✨: Excited to announce the release of [v0.1.0](https://github.com/tile-ai/tilelang/releases/tag/v0.1.0)!\n- 02/10/2025 🚀: Added debug tools for TileLang—`T.print` for printing variables/buffers ([docs](https://tilelang.com/tutorials/debug_tools_for_tilelang.html)) and a memory layout plotter ([examples/plot_layout](./examples/plot_layout)).\n- 01/20/2025 ✨: We are excited to announce that tile-lang, a DSL for high-performance AI workloads, is now open source and available to the public!\n\n## Tested Devices\nAlthough tile-lang aims to be portable across a range of devices, it has been specifically tested and validated on the following devices: for NVIDIA GPUs, this includes the H100 (with Auto TMA/WGMMA support), A100, V100, RTX 4090, RTX 3090, and RTX A6000; for AMD GPUs, it includes the MI250 (with Auto MatrixCore support) and the MI300X (with Async Copy support).\n\n## OP Implementation Examples\n**tile-lang** provides the building blocks to implement a wide variety of operators. Some examples include:\n\n- [Matrix Multiplication](./examples/gemm/)\n- [Dequantization GEMM](./examples/dequantize_gemm/)\n- [Flash Attention](./examples/flash_attention/)\n- [Flash Linear Attention](./examples/linear_attention/)\n- [Flash MLA Decoding](./examples/deepseek_mla/)\n- [Native Sparse Attention](./examples/deepseek_nsa/)\n\nWithin the `examples` directory, you will also find additional complex kernels—such as convolutions and forward/backward passes for FlashAttention—and more operators will be added continuously.\n\n## Benchmark Summary\n\nTileLang achieves exceptional performance across a variety of computational patterns. 
Comprehensive benchmark scripts and settings are available at [tilelang-benchmark](https://github.com/tile-ai/tilelang-benchmark). Below are selected results showcasing its capabilities:\n\n- MLA Decoding Performance on H100\n\n  \u003cdiv style=\"display: flex; gap: 10px; justify-content: center;\"\u003e\n    \u003cdiv style=\"flex: 1;\"\u003e\n      \u003cimg src=\"./examples/deepseek_mla/figures/bs64_float16.png\" alt=\"mla decode performance bs64 on H100\" width=\"100%\" /\u003e\n    \u003c/div\u003e\n    \u003cdiv style=\"flex: 1;\"\u003e\n      \u003cimg src=\"./examples/deepseek_mla/figures/bs128_float16.png\" alt=\"mla decode performance bs128 on H100\" width=\"100%\" /\u003e\n    \u003c/div\u003e\n  \u003c/div\u003e\n  \n- Flash Attention Performance on H100\n\n  \u003cdiv align=\"center\"\u003e    \u003cimg src=\"./images/mha_performance_h100.png\" alt=\"operator performance on H100\" width=80% /\u003e\n  \u003c/div\u003e\n\n- Matmul Performance on GPUs (RTX 4090, A100, H100, MI300X)\n\n  \u003cdiv\u003e\n    \u003cimg src=\"./images/op_benchmark_consistent_gemm_fp16.png\" alt=\"gemm fp16 performance on Gpus\" /\u003e\n  \u003c/div\u003e\n\n- Dequantize Matmul Performance on A100\n\n  \u003cdiv\u003e\n    \u003cimg src=\"./images/op_benchmark_a100_wq_gemv.png\" alt=\"dequantize gemv performance on A100\" /\u003e\n  \u003c/div\u003e\n\n## Installation\n### Method 1: Install with Pip\n\nThe quickest way to get started is to install the latest release from PyPI:\n\n```bash\npip install tilelang\n```\n\nAlternatively, you can install directly from the GitHub repository:\n\n```bash\npip install git+https://github.com/tile-ai/tilelang\n```\n\nOr install locally:\n\n```bash\n# install required system dependencies\nsudo apt-get update\nsudo apt-get install -y python3-setuptools gcc libtinfo-dev zlib1g-dev build-essential cmake libedit-dev libxml2-dev\n\npip install -e . 
-v # remove the -e option if you don't want to install in editable mode; -v enables verbose output\n```\n\n### Method 2: Build from Source\nWe currently provide three ways to install **tile-lang** from source:\n - [Install from Source (using your own TVM installation)](./docs/get_started/Installation.md#method-1-install-from-source-using-your-own-tvm-installation)\n - [Install from Source (using the bundled TVM submodule)](./docs/get_started/Installation.md#method-2-install-from-source-using-the-bundled-tvm-submodule)\n - [Install Using the Provided Script](./docs/get_started/Installation.md#method-3-install-using-the-provided-script)\n\n### Method 3: Install the Nightly Version\n\nFor users who want access to the latest features and improvements before official releases, we provide nightly builds of **tile-lang**.\n\n```bash\npip install tilelang -f https://tile-ai.github.io/whl/nightly/cu121/\n# or pip install tilelang --find-links https://tile-ai.github.io/whl/nightly/cu121/\n```\n\n\u003e **Note:** Nightly builds contain the most recent code changes but may be less stable than official releases. They're ideal for testing new features or if you need a specific bugfix that hasn't been released yet.\n\n## Quick Start\n\nIn this section, you'll learn how to write and execute a straightforward GEMM (matrix multiplication) kernel using tile-lang, followed by techniques for layout optimizations, pipelining, and L2-cache–friendly swizzling.\n\n### GEMM Example with Annotations (Layout, L2 Cache Swizzling, Pipelining, etc.)\n\nBelow is an example that demonstrates more advanced features: layout annotation, parallelized copy, and swizzle for improved L2 cache locality. 
This snippet shows how to adapt your kernel to maximize performance on complex hardware.\n\n```python\nimport tilelang\nimport tilelang.language as T\n\n# @tilelang.jit(target=\"cuda\")\n# The target can currently be \"cuda\", \"hip\", or \"cpu\".\n# If not specified, it will be inferred from the input tensors at compile time.\n@tilelang.jit\ndef matmul(M, N, K, block_M, block_N, block_K, dtype=\"float16\", accum_dtype=\"float\"):\n\n    @T.prim_func\n    def matmul_relu_kernel(\n            A: T.Tensor((M, K), dtype),\n            B: T.Tensor((K, N), dtype),\n            C: T.Tensor((M, N), dtype),\n    ):\n        # Initialize the kernel context\n        with T.Kernel(T.ceildiv(N, block_N), T.ceildiv(M, block_M), threads=128) as (bx, by):\n            A_shared = T.alloc_shared((block_M, block_K), dtype)\n            B_shared = T.alloc_shared((block_K, block_N), dtype)\n            C_local = T.alloc_fragment((block_M, block_N), accum_dtype)\n\n            # Enable rasterization for better L2 cache locality (optional)\n            # T.use_swizzle(panel_size=10, enable=True)\n\n            # Clear the local accumulation buffer\n            T.clear(C_local)\n\n            for ko in T.Pipelined(T.ceildiv(K, block_K), num_stages=3):\n                # Copy a tile of A\n                # This is syntactic sugar for a parallelized copy\n                T.copy(A[by * block_M, ko * block_K], A_shared)\n\n                # Copy a tile of B\n                T.copy(B[ko * block_K, bx * block_N], B_shared)\n\n                # Perform a tile-level GEMM on the shared buffers\n                # Currently this dispatches to cute/hip templates on NVIDIA/AMD GPUs\n                T.gemm(A_shared, B_shared, C_local)\n\n            # Apply ReLU\n            for i, j in T.Parallel(block_M, block_N):\n                C_local[i, j] = T.max(C_local[i, j], 0)\n\n            # Copy the result back to global memory\n            T.copy(C_local, C[by * block_M, bx * block_N])\n\n    return matmul_relu_kernel\n\n\nM = 1024 
 # M = T.symbolic(\"m\") if you want to use a dynamic shape\nN = 1024\nK = 1024\nblock_M = 128\nblock_N = 128\nblock_K = 32\n\n# 1. Define the kernel (matmul) and compile/lower it into an executable module\nmatmul_relu_kernel = matmul(M, N, K, block_M, block_N, block_K)\n\n# 2. Test the kernel in Python with PyTorch data\nimport torch\n\n# Create random input tensors on the GPU\na = torch.randn(M, K, device=\"cuda\", dtype=torch.float16)\nb = torch.randn(K, N, device=\"cuda\", dtype=torch.float16)\nc = torch.empty(M, N, device=\"cuda\", dtype=torch.float16)\n\n# Run the compiled kernel\nmatmul_relu_kernel(a, b, c)\n\nprint(c)\n# Reference multiplication using PyTorch\nref_c = torch.relu(a @ b)\n\n# Validate correctness\ntorch.testing.assert_close(c, ref_c, rtol=1e-2, atol=1e-2)\nprint(\"Kernel output matches PyTorch reference.\")\n\n# 3. Retrieve and inspect the generated CUDA source (optional)\n# cuda_source = matmul_relu_kernel.get_kernel_source()\n# print(\"Generated CUDA kernel:\\n\", cuda_source)\n\n# 4. Profile kernel latency\nprofiler = matmul_relu_kernel.get_profiler(tensor_supply_type=tilelang.TensorSupplyType.Normal)\n\nlatency = profiler.do_bench()\n\nprint(f\"Latency: {latency} ms\")\n```\n\n### Dive Deep into TileLang Beyond GEMM\n\nIn addition to GEMM, we provide a variety of examples to showcase the versatility and power of TileLang, including:\n\n- [Dequantize GEMM](./examples/dequantize_gemm/): Achieve high-performance dequantization through **fine-grained control over per-thread operations**, with many features now adopted as default behaviors in [BitBLAS](https://github.com/microsoft/BitBLAS), which uses layout transformations and intrinsics to accelerate dequantized GEMM.\n- [FlashAttention](./examples/flash_attention/): Enable cross-operator fusion with simple and intuitive syntax; an auto-tuning example is also provided.\n- [LinearAttention](./examples/linear_attention/): Examples include RetNet and Mamba implementations.\n- 
[Convolution](./examples/convolution/): Implementations of convolution using im2col.\n\n## Upcoming Features\n\nCheck our [tilelang v0.2.0 release plan](https://github.com/tile-ai/tilelang/issues/79) for upcoming features.\n\n---\n\nTileLang is now used in the [BitBLAS](https://github.com/microsoft/BitBLAS) and [AttentionEngine](https://github.com/microsoft/AttentionEngine) projects.\n\n## Join the Discussion\n\nYou are welcome to join our Discord community for discussions, support, and collaboration!\n\n[![Join our Discord](https://img.shields.io/badge/Discord-Join%20Us-blue?logo=discord\u0026style=for-the-badge)](https://discord.gg/TUrHyJnKPG)\n\n## Acknowledgements\n\nWe would like to express our gratitude to the [TVM](https://github.com/apache/tvm) community for their invaluable contributions. The initial version of this project was mainly developed by [LeiWang1999](https://github.com/LeiWang1999), [chengyupku](https://github.com/chengyupku), and [nox-410](https://github.com/nox-410) with supervision from Prof. [Zhi Yang](https://yangzhihome.github.io) at Peking University. Part of this work was carried out during an internship at Microsoft Research, where Dr. Lingxiao Ma, Dr. Yuqing Xia, Dr. Jilong Xue, and Dr. Fan Yang offered valuable advice and support. We deeply appreciate their mentorship and contributions.\n","funding_links":[],"categories":["C++","Frameworks and Development Tools 🛠️","Other: Machine Learning and Deep Learning","Python","Learning Resources","Repos"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftile-ai%2Ftilelang","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Ftile-ai%2Ftilelang","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Ftile-ai%2Ftilelang/lists"}