https://github.com/erkaman/parle-cuda
A reference implementation of RLE in CUDA
c-plus-plus compression cuda data-compression demo gpgpu gpu parle rle run-length-encoding
- Host: GitHub
- URL: https://github.com/erkaman/parle-cuda
- Owner: Erkaman
- License: other
- Created: 2016-06-05T17:03:19.000Z (about 9 years ago)
- Default Branch: master
- Last Pushed: 2016-06-26T05:36:41.000Z (about 9 years ago)
- Last Synced: 2025-04-06T23:14:24.387Z (3 months ago)
- Topics: c-plus-plus, compression, cuda, data-compression, demo, gpgpu, gpu, parle, rle, run-length-encoding
- Language: Cuda
- Homepage:
- Size: 88.9 KB
- Stars: 9
- Watchers: 2
- Forks: 6
- Open Issues: 1
Metadata Files:
- Readme: README.md
- License: LICENSE
README
An implementation of [Parallel Run-Length Encoding (PARLE)](http://tesla.rcub.bg.ac.rs/~taucet/docs/papers/HIPEAC-ShortPaper-AnaBalevic.pdf) in CUDA.
You can find the details, along with benchmarking results, on [my blog](https://erkaman.github.io/posts/cuda_rle.html).
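For orientation, the core PARLE idea is to express RLE as three data-parallel passes: mark run boundaries, prefix-sum the marks to number the runs, then scatter symbols and derive run lengths. Below is a minimal CPU sketch of that structure in C++; each loop stands in for one GPU kernel (the scan step would typically use a library such as Thrust on the device). Function and variable names are illustrative, not taken from this repository.

```cpp
#include <cassert>
#include <vector>

// CPU sketch of scan-based parallel RLE. Each loop corresponds to
// one parallel pass on the GPU; names here are hypothetical.
void rle(const std::vector<int>& in,
         std::vector<int>& symbols, std::vector<int>& counts) {
    int n = static_cast<int>(in.size());

    // 1) Boundary mask: 1 where a new run starts (parallel map).
    std::vector<int> mask(n);
    for (int i = 0; i < n; ++i)
        mask[i] = (i == 0 || in[i] != in[i - 1]) ? 1 : 0;

    // 2) Inclusive prefix sum of the mask assigns each element a
    //    1-based run index (parallel scan on the GPU).
    std::vector<int> scan(n);
    int acc = 0;
    for (int i = 0; i < n; ++i) { acc += mask[i]; scan[i] = acc; }
    int runs = n ? scan[n - 1] : 0;

    // 3) Scatter: each boundary element writes its symbol and its
    //    start position; run lengths are differences of starts.
    std::vector<int> starts(runs + 1, n);  // sentinel: last run ends at n
    symbols.assign(runs, 0);
    for (int i = 0; i < n; ++i)
        if (mask[i]) {
            symbols[scan[i] - 1] = in[i];
            starts[scan[i] - 1]  = i;
        }

    counts.assign(runs, 0);
    for (int r = 0; r < runs; ++r)
        counts[r] = starts[r + 1] - starts[r];
}
```

On the GPU, step 2 is the interesting part: the scan turns an inherently sequential-looking problem into log-depth parallel work, which is what makes RLE practical to run on thousands of threads.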