Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/Bruce-Lee-LY/cuda_hgemv
Several optimization methods of half-precision general matrix vector multiplication (HGEMV) using CUDA core.
- Host: GitHub
- URL: https://github.com/Bruce-Lee-LY/cuda_hgemv
- Owner: Bruce-Lee-LY
- License: MIT
- Created: 2023-10-09T13:41:23.000Z (about 1 year ago)
- Default Branch: master
- Last Pushed: 2024-09-08T01:08:53.000Z (4 months ago)
- Last Synced: 2024-11-15T21:11:52.343Z (about 1 month ago)
- Topics: cublas, cuda, cuda-core, gemm, gemv, gpu, hgemm, hgemv, matrix-multiply, nvidia, tensor-core
- Language: Cuda
- Homepage:
- Size: 459 KB
- Stars: 49
- Watchers: 4
- Forks: 4
- Open Issues: 0
- Metadata Files:
- Readme: README.md
- License: LICENSE
Awesome Lists containing this project
- awesome-cuda-triton-hpc - Bruce-Lee-LY/cuda_hgemv : Several optimization methods of half-precision general matrix vector multiplication (HGEMV) using CUDA core. (Learning Resources)
README
# CUDA HGEMV
Several optimization methods for half-precision general matrix-vector multiplication (HGEMV) using CUDA Cores. The computation is shown below, where A (1 * K), B (K * N), and C (1 * N) are all FP16. By exploring various parallel task designs, performance for dimensions from 1 to 4096 is no less than 150% of cuBLAS.
```
C (1 * N) = A (1 * K) * B (K * N)
```

![hgemv](./media/images/hgemv.png)
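For orientation, the simplest strategy ("Thread Naive" in the list below, where one thread produces one element of C) can be sketched as follows. This is a hedged illustration, not the repository's actual kernel: the kernel name, the row-major layout of B, and the FP32 accumulator are assumptions.

```
#include <cuda_fp16.h>

// Hypothetical thread-naive HGEMV: thread j forms the dot product of the
// row vector A (1 x K) with column j of B (K x N) and writes C[j].
// B is assumed row-major, so column j is read with stride N: B[k * N + j].
__global__ void hgemvThreadNaive(const half *A, const half *B, half *C,
                                 size_t N, size_t K) {
    const size_t col = blockIdx.x * blockDim.x + threadIdx.x;
    if (col >= N) return;

    float acc = 0.0f;  // accumulate in FP32 to limit rounding error
    for (size_t k = 0; k < K; ++k) {
        acc += __half2float(A[k]) * __half2float(B[k * N + col]);
    }
    C[col] = __float2half(acc);
}
```

Each thread reads all K elements of A from global memory; the Smem variants listed below amortize that cost by staging A in shared memory once per block.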
# Optimization Method
- Thread Naive: each thread computes 1 result of C
- Thread Smem: each thread computes 1 result of C using shared memory
- Warp1 Naive: each warp computes 1 result of C
- Warp1 Smem: each warp computes 1 result of C using shared memory
- Warp2 Naive: each warp computes 2 results of C
- Warp2 Smem: each warp computes 2 results of C using shared memory
- Warp4 Naive: each warp computes 4 results of C
- Warp4 Smem: each warp computes 4 results of C using shared memory
- Warp8 Naive: each warp computes 8 results of C
- Warp8 Smem: each warp computes 8 results of C using shared memory
- Warp16 Naive: each warp computes 16 results of C
- Warp16 Smem: each warp computes 16 results of C using shared memory (see the warp-level sketch after this list)
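To make the warp-level variants concrete, here is a hedged sketch in the spirit of "Warp1 Naive": one warp cooperates on a single element of C, splitting the K dimension across its 32 lanes and combining partial sums with warp shuffles. Names and layout are assumptions, not the repository's code.

```
#include <cuda_fp16.h>

// Hypothetical Warp1-style HGEMV: warp w computes C[w]. Each lane
// accumulates a strided slice of the K dimension, then the warp reduces
// its partial sums with __shfl_down_sync. Launch with at least N warps.
__global__ void hgemvWarp1Naive(const half *A, const half *B, half *C,
                                size_t N, size_t K) {
    const size_t warpsPerBlock = blockDim.x / warpSize;
    const size_t warpId = blockIdx.x * warpsPerBlock + threadIdx.x / warpSize;
    const size_t lane = threadIdx.x % warpSize;
    if (warpId >= N) return;

    float acc = 0.0f;
    for (size_t k = lane; k < K; k += warpSize) {
        acc += __half2float(A[k]) * __half2float(B[k * N + warpId]);
    }
    // Tree reduction across the 32 lanes; lane 0 ends up with the full sum.
    for (int offset = warpSize / 2; offset > 0; offset /= 2) {
        acc += __shfl_down_sync(0xffffffff, acc, offset);
    }
    if (lane == 0) C[warpId] = __float2half(acc);
}
```

The WarpN variants assign N outputs to one warp so each lane's loads of A are reused across several columns, and the Smem variants stage A in shared memory so a block reads it from global memory only once.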
# Compile
## Environment
- OS: Linux
- CMake Version: >= 3.12
- GCC Version: >= 4.8
- CUDA Version: >= 11.0
- Others: gflags, ccache
```
sudo apt-get install libgflags-dev ccache
```
## Clone
```
git clone https://github.com/Bruce-Lee-LY/cuda_hgemv.git
```
## Build
### NVIDIA A100
```
cd cuda_hgemv
./build.sh -a 80 -t Release -b OFF
./build.sh -a 80 -t Debug -b OFF
```
### RTX3080Ti / RTX3090 / RTX A6000
```
cd cuda_hgemv
./build.sh -a 86 -t Release -b OFF
./build.sh -a 86 -t Debug -b OFF
```
# Run Sample
```
./run_sample.sh
```
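Beyond the packaged sample, a minimal host-side harness can exercise a kernel directly. The sketch below assumes the hypothetical hgemvThreadNaive kernel from the earlier sketch is in the same translation unit; sizes match the benchmark settings (N up to 4096, K = 128).

```
#include <cstdio>
#include <vector>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

int main() {
    const size_t N = 4096, K = 128;
    std::vector<half> hA(K), hB(K * N), hC(N);
    for (size_t k = 0; k < K; ++k) hA[k] = __float2half(1.0f);
    for (size_t i = 0; i < K * N; ++i) hB[i] = __float2half(0.5f);

    half *dA, *dB, *dC;
    cudaMalloc(&dA, K * sizeof(half));
    cudaMalloc(&dB, K * N * sizeof(half));
    cudaMalloc(&dC, N * sizeof(half));
    cudaMemcpy(dA, hA.data(), K * sizeof(half), cudaMemcpyHostToDevice);
    cudaMemcpy(dB, hB.data(), K * N * sizeof(half), cudaMemcpyHostToDevice);

    const int threads = 256;  // one thread per element of C
    const int blocks = static_cast<int>((N + threads - 1) / threads);
    hgemvThreadNaive<<<blocks, threads>>>(dA, dB, dC, N, K);
    cudaDeviceSynchronize();

    cudaMemcpy(hC.data(), dC, N * sizeof(half), cudaMemcpyDeviceToHost);
    // Every element of C should be 0.5 * K = 64 with this input.
    printf("C[0] = %f (expected %f)\n", __half2float(hC[0]), 0.5 * K);

    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    return 0;
}
```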
# Performance
Process the data in the log and plot it as a line chart.
```
cd tools/performance
./performance.sh
```
## RTX3090
- CUDA Version: 11.8
- K: 128

Performance achieved by the current optimization methods.
![throughput](./performance/RTX3090/throughput.png)