https://github.com/versi379/optimized-matrix-multiplication
This project utilizes CUDA and cuBLAS to optimize matrix multiplication, achieving up to a 5x speedup on large matrices by leveraging GPU acceleration. It also improves memory efficiency and reduces data transfer times between CPU and GPU.
- Host: GitHub
- URL: https://github.com/versi379/optimized-matrix-multiplication
- Owner: versi379
- Created: 2024-11-19T22:02:24.000Z (about 1 year ago)
- Default Branch: main
- Last Pushed: 2024-11-19T22:32:24.000Z (about 1 year ago)
- Last Synced: 2025-01-21T05:41:42.230Z (11 months ago)
- Topics: cublas, cuda, cuda-programming, hpc, matrix-multiplication, parallel-computing, parallel-programming
- Language: C++
- Homepage:
- Size: 4.88 KB
- Stars: 0
- Watchers: 1
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
README
# Optimized-Matrix-Multiplication
This implementation leverages the NVIDIA CUDA framework and the cuBLAS library to optimize matrix multiplication using the `cublasGemmEx` function. By exploiting GPU acceleration and, where supported, Tensor Cores, the computation runs significantly faster than traditional CPU-based methods.
# Overview
This application performs basic matrix multiplication: \( A \times B = C \).
- **Matrix dimensions:**
- \( A \): `rowsA x rank`
- \( B \): `rank x colsB`
- \( C \): `rowsA x colsB`
- **Array representation:**
Matrices are stored as contiguous 1D arrays behind single raw pointers, e.g.:
```c++
float* A = new float[sizeA];
```
- **Accessing elements:**
Elements are stored in column-major order (the convention cuBLAS expects), so element \( (i, j) \) of a matrix with `rows` rows lives at index `j * rows + i`:
```c++
for (size_t i = 0; i < rows; ++i)
{
    for (size_t j = 0; j < cols; ++j)
    {
        cout << A[j * rows + i] << " ";
    }
    cout << endl;
}
```
- **Data types:**
Matrices \( A \) and \( B \) can use either 16-bit or 32-bit floats, but the result matrix \( C \) is always 32-bit.
- **Performance:**
GPU (device) execution significantly outperforms CPU (host, single-threaded) execution.
Starting with cuBLAS version 11, Tensor Cores are utilized automatically. More details are available in the [NVIDIA cuBLAS documentation](https://docs.nvidia.com/cuda/cublas/#tensor-core-usage).
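The pieces above come together in a single `cublasGemmEx` call. The following is a minimal sketch, not the repo's exact code: the device pointers `dA`/`dB`/`dC`, the dimension variables, and the FP16 input types are assumptions for illustration.

```c++
#include <cublas_v2.h>

// Assumes dA, dB, dC already allocated and populated on the device,
// and rowsA, rank, colsB defined. Column-major layout throughout.
cublasHandle_t handle;
cublasCreate(&handle);

const float alpha = 1.0f, beta = 0.0f;
cublasGemmEx(handle,
             CUBLAS_OP_N, CUBLAS_OP_N,   // no transposes
             rowsA, colsB, rank,         // m, n, k
             &alpha,
             dA, CUDA_R_16F, rowsA,      // A: FP16, lda = rowsA
             dB, CUDA_R_16F, rank,       // B: FP16, ldb = rank
             &beta,
             dC, CUDA_R_32F, rowsA,      // C: always FP32, ldc = rowsA
             CUBLAS_COMPUTE_32F,         // accumulate in FP32
             CUBLAS_GEMM_DEFAULT);       // let cuBLAS pick the algorithm

cublasDestroy(handle);
```

With FP16 inputs and FP32 compute, cuBLAS can dispatch the GEMM to Tensor Cores automatically on hardware that supports them.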
# Build Instructions
## Linux
1. **Install CUDA toolkit dependencies:**
```bash
sudo apt install nvidia-cuda-toolkit
```
2. **Build the application:**
```bash
make all
```
3. **Run the executable:**
```bash
./main.out
```