# Boost.Compute #

[![Build Status](https://travis-ci.org/boostorg/compute.svg?branch=master)](https://travis-ci.org/boostorg/compute)
[![Build status](https://ci.appveyor.com/api/projects/status/4s2nvfc97m7w23oi/branch/master?svg=true)](https://ci.appveyor.com/project/jszuppe/compute/branch/master)
[![Coverage Status](https://coveralls.io/repos/boostorg/compute/badge.svg?branch=master)](https://coveralls.io/r/boostorg/compute)
[![Gitter](https://badges.gitter.im/boostorg/compute.svg)](https://gitter.im/boostorg/compute?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge)

Boost.Compute is a GPU/parallel-computing library for C++ based on OpenCL.

The core library is a thin C++ wrapper over the OpenCL API and provides
access to compute devices, contexts, command queues and memory buffers.
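
As a rough illustration (a sketch, not taken from the documentation), the core layer can be used directly to enumerate devices, set up a context and queue, and allocate raw device memory:

```c++
// Minimal sketch of the core (low-level) layer: device enumeration,
// context/queue setup, and an untyped device buffer.
// Assumes at least one OpenCL device is available on the system.
#include <iostream>
#include <boost/compute/core.hpp>

namespace compute = boost::compute;

int main()
{
    // print the name of every OpenCL device visible to the system
    for(const compute::device &dev : compute::system::devices()){
        std::cout << dev.name() << std::endl;
    }

    // create a context and command queue on the default device
    compute::device gpu = compute::system::default_device();
    compute::context ctx(gpu);
    compute::command_queue queue(ctx, gpu);

    // allocate a 1 KiB untyped memory buffer on that device
    compute::buffer buf(ctx, 1024);

    return 0;
}
```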

On top of the core library is a generic, STL-like interface providing common
algorithms (e.g. `transform()`, `accumulate()`, `sort()`) along with common
containers (e.g. `vector`, `flat_set`). It also features a number of
extensions including parallel-computing algorithms (e.g. `exclusive_scan()`,
`scatter()`, `reduce()`) and a number of fancy iterators (e.g.
`transform_iterator<>`, `permutation_iterator<>`, `zip_iterator<>`).
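
For example, a minimal sketch (not part of the README) combining `transform()` and `reduce()` from this layer might look like:

```c++
// Sketch: take the square root of each element on the device, then sum
// the results with reduce(). Assumes a working OpenCL device.
#include <iostream>
#include <vector>
#include <boost/compute.hpp>

namespace compute = boost::compute;

int main()
{
    compute::device gpu = compute::system::default_device();
    compute::context ctx(gpu);
    compute::command_queue queue(ctx, gpu);

    // transfer a few host values to a device container
    std::vector<float> host = { 1.0f, 4.0f, 9.0f, 16.0f };
    compute::vector<float> vec(host.begin(), host.end(), queue);

    // take the square root of each element in place on the device
    compute::transform(
        vec.begin(), vec.end(), vec.begin(), compute::sqrt<float>(), queue
    );

    // sum the results on the device and copy the total back to the host
    float sum = 0.0f;
    compute::reduce(vec.begin(), vec.end(), &sum, queue);

    std::cout << sum << std::endl; // 1 + 2 + 3 + 4 = 10
    return 0;
}
```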

The full documentation is available at http://boostorg.github.io/compute/.

## Example ##

The following example shows how to sort a vector of floats on the GPU:

```c++
#include <vector>
#include <algorithm>
#include <boost/compute.hpp>

namespace compute = boost::compute;

int main()
{
    // get the default compute device
    compute::device gpu = compute::system::default_device();

    // create a compute context and command queue
    compute::context ctx(gpu);
    compute::command_queue queue(ctx, gpu);

    // generate random numbers on the host
    std::vector<float> host_vector(1000000);
    std::generate(host_vector.begin(), host_vector.end(), rand);

    // create vector on the device
    compute::vector<float> device_vector(1000000, ctx);

    // copy data to the device
    compute::copy(
        host_vector.begin(), host_vector.end(), device_vector.begin(), queue
    );

    // sort data on the device
    compute::sort(
        device_vector.begin(), device_vector.end(), queue
    );

    // copy data back to the host
    compute::copy(
        device_vector.begin(), device_vector.end(), host_vector.begin(), queue
    );

    return 0;
}
```

Boost.Compute is a header-only library, so no linking is required. The example
above can be compiled with:

`g++ -I/path/to/compute/include sort.cpp -lOpenCL`

More examples can be found in the [tutorial](
http://boostorg.github.io/compute/boost_compute/tutorial.html) and under the
[examples](https://github.com/boostorg/compute/tree/master/example) directory.

## Support ##
Questions about the library (both usage and development) can be posted to the
[mailing list](https://groups.google.com/forum/#!forum/boost-compute).

Bugs and feature requests can be reported through the [issue tracker](
https://github.com/boostorg/compute/issues?state=open).

Also feel free to send me an email with any problems, questions, or feedback.

## Help Wanted ##
The Boost.Compute project is currently looking for additional developers with
interest in parallel computing.

Please send an email to Kyle Lutz ([email protected]) for more information.