https://github.com/bruce-lee-ly/cutlass_gemm
Multiple GEMM operators are constructed with cutlass to support LLM inference.
- Host: GitHub
- URL: https://github.com/bruce-lee-ly/cutlass_gemm
- Owner: Bruce-Lee-LY
- License: bsd-3-clause
- Created: 2024-09-04T12:30:17.000Z (9 months ago)
- Default Branch: master
- Last Pushed: 2024-09-27T12:53:46.000Z (8 months ago)
- Last Synced: 2025-03-27T14:02:16.615Z (about 2 months ago)
- Topics: cublas, cublaslt, cutlass, gemm, gpu, llm, matrix-multiply, nvidia, tensor-core
- Language: C++
- Homepage:
- Size: 2.14 MB
- Stars: 17
- Watchers: 1
- Forks: 2
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
# CUTLASS GEMM
Multiple GEMM operators are constructed with CUTLASS to support LLM inference.

## GEMM
The computed expression is shown below, where matrices A, B, C, and D are FP16 or BF16. You can also customize your own epilogue. In some scenarios these kernels outperform cuBLAS and cuBLASLt.
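For intuition, the epilogue D = alpha * (A * B) + beta * C can be checked against a naive host-side reference. This is an illustrative sketch, not code from this repository; `gemm_reference` is a hypothetical helper, written in FP32 for clarity where the repo's kernels use FP16/BF16 tensor cores:

```
#include <cassert>
#include <cstddef>
#include <vector>

// Naive row-major reference for D = alpha * (A * B) + beta * C.
// A is M x K, B is K x N, C and D are M x N.
std::vector<float> gemm_reference(const std::vector<float>& A,
                                  const std::vector<float>& B,
                                  const std::vector<float>& C,
                                  std::size_t M, std::size_t N, std::size_t K,
                                  float alpha, float beta) {
    std::vector<float> D(M * N, 0.0f);
    for (std::size_t m = 0; m < M; ++m) {
        for (std::size_t n = 0; n < N; ++n) {
            float acc = 0.0f;  // accumulate the (m, n) dot product over K
            for (std::size_t k = 0; k < K; ++k) {
                acc += A[m * K + k] * B[k * N + n];
            }
            D[m * N + n] = alpha * acc + beta * C[m * N + n];
        }
    }
    return D;
}
```

A reference like this is useful for validating a tensor-core kernel's output against a tolerance appropriate for FP16/BF16 accumulation.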
```
D = alpha * (A * B) + beta * C
```

# Compile
## Environment
- OS: Linux
- CMake Version: >= 3.16
- GCC Version: >= 5.0
- CUDA Version: >= 11.4
- Others: gflags, ccache
```
sudo apt-get install libgflags-dev ccache
```

## Clone
```
git clone https://github.com/Bruce-Lee-LY/cutlass_gemm.git
```

## Build
### NVIDIA A100
```
cd cutlass_gemm
./build.sh -a 80 -t Release -b OFF
./build.sh -a 80 -t Debug -b OFF
```

### RTX3080Ti / RTX3090 / RTX A6000
```
cd cutlass_gemm
./build.sh -a 86 -t Release -b OFF
./build.sh -a 86 -t Debug -b OFF
```

# Run Sample
```
./run_sample.sh
```

# Performance
Process the data in the log and plot it as a line chart.

```
cd tools/performance
./performance.sh
```

## GEMM
- GPU: RTX3090
- CUDA Version: 12.1
- Data Type: FP16
- Beta: 0.0

Performance achieved by the current CUTLASS methods.
### K == N == 4096
### K == N == 8192
# Reference
- [cutlass](https://github.com/NVIDIA/cutlass): v3.5.1
> Add '#include ' to 'cute/algorithm/functional.hpp' to avoid the error 'namespace "cute" has no member "max"' during compilation.

# TODO
- Add SM90 Kernel