Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/NVIDIA/thrust
[ARCHIVED] The C++ parallel algorithms library. See https://github.com/NVIDIA/cccl
Last synced: about 2 months ago
- Host: GitHub
- URL: https://github.com/NVIDIA/thrust
- Owner: NVIDIA
- License: other
- Archived: true
- Created: 2012-03-06T01:01:29.000Z (almost 13 years ago)
- Default Branch: main
- Last Pushed: 2024-02-08T15:54:40.000Z (10 months ago)
- Last Synced: 2024-08-05T02:01:17.667Z (4 months ago)
- Topics: algorithms, cpp, cpp11, cpp14, cpp17, cpp20, cuda, cxx, cxx11, cxx14, cxx17, cxx20, gpu, gpu-computing, nvidia, nvidia-hpc-sdk, thrust
- Language: C++
- Homepage:
- Size: 17 MB
- Stars: 4,896
- Watchers: 207
- Forks: 758
- Open Issues: 8
Metadata Files:
- Readme: README.md
- Changelog: CHANGELOG.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
Awesome Lists containing this project
- Metal-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools and Frameworks)
- Fuchsia-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools)
- SSD-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools, Libraries and Frameworks)
- LiDAR-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. (CUDA Tools)
- DirectX-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools and Frameworks)
- CPLD-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools, Libraries and Frameworks)
- OpenGL-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools and Frameworks)
- Deep-Learning-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. (CUDA Tools, Libraries, and Frameworks)
- ARM-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools)
- Autonomous-Systems-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. (CUDA Tools, Libraries, and Frameworks)
- Firmware-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools)
- VHDL-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools, Libraries and Frameworks)
- CoreML-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools and Frameworks)
- Robotics-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. Interoperability with established technologies such as CUDA, TBB, and OpenMP integrates with existing software. (C/C++ Tools)
- MATLAB-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. (CUDA Tools, Libraries, and Frameworks)
- CNT-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. (CUDA Tools, Libraries, and Frameworks)
- CUDA-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. (CUDA Tools)
- NLP-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. (CUDA Tools, Libraries, and Frameworks)
- Vulkan-Guide - Thrust - High-level interface greatly enhances programmer productivity while enabling performance portability between GPUs and multicore CPUs. (CUDA Tools, Libraries, and Frameworks)
README
:warning: **The Thrust repository has been archived and is now part of the unified [nvidia/cccl repository](https://github.com/nvidia/cccl). See the [announcement here](https://github.com/NVIDIA/cccl/discussions/520) for more information. Please visit the new repository for the latest updates.** :warning:
# Thrust: The C++ Parallel Algorithms Library
Thrust is the C++ parallel algorithms library which inspired the introduction
of parallel algorithms to the C++ Standard Library.
Thrust's **high-level** interface greatly enhances programmer **productivity**
while enabling performance portability between GPUs and multicore CPUs.
It builds on top of established parallel programming frameworks (such as CUDA,
TBB, and OpenMP).
It also provides a number of general-purpose facilities similar to those found
in the C++ Standard Library.

Thrust is an open source project; it is available on [GitHub] and included in
the NVIDIA HPC SDK and the CUDA Toolkit. If you have one of those SDKs
installed, no additional installation or compiler flags are needed to use
Thrust.

## Examples
Thrust is best learned through examples.
The following example generates random numbers serially and then transfers them
to a parallel device where they are sorted.

```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/sort.h>
#include <thrust/copy.h>
#include <thrust/random.h>

int main() {
  // Generate 32M random numbers serially.
  thrust::default_random_engine rng(1337);
  thrust::uniform_int_distribution<int> dist;
  thrust::host_vector<int> h_vec(32 << 20);
  thrust::generate(h_vec.begin(), h_vec.end(), [&] { return dist(rng); });

  // Transfer data to the device.
  thrust::device_vector<int> d_vec = h_vec;

  // Sort data on the device.
  thrust::sort(d_vec.begin(), d_vec.end());

  // Transfer data back to host.
  thrust::copy(d_vec.begin(), d_vec.end(), h_vec.begin());
}
```

[See it on Godbolt](https://godbolt.org/z/GeWEd8Er9)
This example demonstrates computing the sum of some random numbers in parallel:
```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/reduce.h>
#include <thrust/functional.h>
#include <thrust/random.h>

int main() {
  // Generate random data serially.
  thrust::default_random_engine rng(1337);
  thrust::uniform_real_distribution<double> dist(-50.0, 50.0);
  thrust::host_vector<double> h_vec(32 << 20);
  thrust::generate(h_vec.begin(), h_vec.end(), [&] { return dist(rng); });

  // Transfer to device and compute the sum.
  thrust::device_vector<double> d_vec = h_vec;
  double x = thrust::reduce(d_vec.begin(), d_vec.end(), 0.0, thrust::plus<double>());
}
```

[See it on Godbolt](https://godbolt.org/z/cnsbWWME7)
This example shows how to perform such a reduction asynchronously:
```cuda
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/generate.h>
#include <thrust/async/copy.h>
#include <thrust/async/reduce.h>
#include <thrust/functional.h>
#include <thrust/random.h>
#include <numeric>

int main() {
  // Generate 32M random numbers serially.
  thrust::default_random_engine rng(123456);
  thrust::uniform_real_distribution<double> dist(-50.0, 50.0);
  thrust::host_vector<double> h_vec(32 << 20);
  thrust::generate(h_vec.begin(), h_vec.end(), [&] { return dist(rng); });

  // Asynchronously transfer to the device.
  thrust::device_vector<double> d_vec(h_vec.size());
  thrust::device_event e = thrust::async::copy(h_vec.begin(), h_vec.end(),
                                               d_vec.begin());

  // After the transfer completes, asynchronously compute the sum on the device.
  thrust::device_future<double> f0 = thrust::async::reduce(thrust::device.after(e),
                                                           d_vec.begin(), d_vec.end(),
                                                           0.0, thrust::plus<double>());

  // While the sum is being computed on the device, compute the sum serially on
  // the host.
  double f1 = std::accumulate(h_vec.begin(), h_vec.end(), 0.0, thrust::plus<double>());
}
```

[See it on Godbolt](https://godbolt.org/z/be54efaKj)
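The example above leaves the device and host partial sums uncombined. As a minimal, hedged sketch (not part of the original README, and assuming Thrust's eager futures expose `get()`, which blocks until the device work completes), the result of an asynchronous reduction can be retrieved like this:

```cuda
#include <thrust/device_vector.h>
#include <thrust/async/reduce.h>
#include <thrust/functional.h>

#include <cstdio>

int main() {
  // 2^20 ones on the device, so the expected sum is 1048576.
  thrust::device_vector<double> d_vec(1 << 20, 1.0);

  // Launch the reduction asynchronously; the host is free to do other work here.
  auto f = thrust::async::reduce(d_vec.begin(), d_vec.end(),
                                 0.0, thrust::plus<double>());

  // get() is assumed to block until the reduction finishes, then return its value.
  double sum = f.get();
  std::printf("sum = %f\n", sum);
}
```

In the README example above, the device and host partial sums could be combined the same way, e.g. `f0.get() + f1`.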
## Getting The Thrust Source Code
Thrust is a header-only library; there is no need to build or install the project
unless you want to run the Thrust unit tests.

The CUDA Toolkit provides a recent release of the Thrust source code in
`include/thrust`. This will be suitable for most users.

Users that wish to contribute to Thrust or try out newer features should
recursively clone the Thrust GitHub repository:

```
git clone --recursive https://github.com/NVIDIA/thrust.git
```

## Using Thrust From Your Project
For CMake-based projects, we provide a CMake package for use with
`find_package`. See the [CMake README](thrust/cmake/README.md) for more
information. Thrust can also be added via `add_subdirectory` or tools like
the [CMake Package Manager](https://github.com/cpm-cmake/CPM.cmake).

For non-CMake projects, compile with:
- The Thrust include path (`-I<thrust repo root>`)
- The libcu++ include path (`-I<thrust repo root>/dependencies/libcudacxx/`)
- The CUB include path, if using the CUDA device system (`-I<thrust repo root>/dependencies/cub/`)
- By default, the CPP host system and CUDA device system are used.
These can be changed using compiler definitions:
- `-DTHRUST_HOST_SYSTEM=THRUST_HOST_SYSTEM_XXX`,
where `XXX` is `CPP` (serial, default), `OMP` (OpenMP), or `TBB` (Intel TBB)
- `-DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_XXX`, where `XXX` is
  `CPP`, `OMP`, `TBB`, or `CUDA` (default); see the sketch below.
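For illustration only (this sketch is not part of the original README), the same Thrust program can target either device system; the file name and the exact compile commands in the comments are assumptions that depend on how Thrust was installed:

```cuda
// backend_demo.cu -- hypothetical file name. The Thrust calls are standard;
// the compile commands below assume either a CUDA Toolkit install or a Thrust
// checkout on the include path.
//
//   CUDA device system (default), compiled with nvcc:
//     nvcc -std=c++14 backend_demo.cu -o backend_demo
//
//   OpenMP device system, compiled with a host compiler only (no GPU needed);
//   add the include paths listed above if building against a repo checkout:
//     g++ -std=c++14 -fopenmp -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP \
//         -x c++ backend_demo.cu -o backend_demo

#include <thrust/device_vector.h>
#include <thrust/reduce.h>
#include <thrust/functional.h>

#include <cstdio>

int main() {
  // The same source runs on whichever device system was selected at compile time.
  thrust::device_vector<double> d_vec(1 << 20, 1.0);
  double sum = thrust::reduce(d_vec.begin(), d_vec.end(), 0.0,
                              thrust::plus<double>());
  std::printf("sum = %f\n", sum);
}
```

With the OpenMP build, `thrust::device_vector` is backed by host memory and the reduction runs across CPU threads, which is what the performance-portability claim above refers to.

## Developing Thrust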
Thrust uses the [CMake build system] to build unit tests, examples, and header
tests.
To build Thrust as a developer, it is recommended that you use our
containerized development system:

```bash
# Clone Thrust and CUB repos recursively:
git clone --recursive https://github.com/NVIDIA/thrust.git
cd thrust

# Build and run tests and examples:
ci/local/build.bash
```

That does the equivalent of the following, but in a clean containerized
environment which has all dependencies installed:

```bash
# Clone Thrust and CUB repos recursively:
git clone --recursive https://github.com/NVIDIA/thrust.git
cd thrust

# Create build directory:
mkdir build
cd build

# Configure -- use one of the following:
cmake .. # Command line interface.
ccmake .. # ncurses GUI (Linux only).
cmake-gui    # Graphical UI, set source/build directories in the app.

# Build:
cmake --build . -j ${NUM_JOBS} # Invokes make (or ninja, etc).

# Run tests and examples:
ctest
```

By default, a serial `CPP` host system, `CUDA` accelerated device system, and
C++14 standard are used.
This can be changed in CMake and via flags to `ci/local/build.bash`.

More information on configuring your Thrust build and creating a pull request
can be found in the [contributing section].

## Licensing
Thrust is an open source project developed on [GitHub].
Thrust is distributed under the [Apache License v2.0 with LLVM Exceptions];
some parts are distributed under the [Apache License v2.0] and the
[Boost License v1.0].
[GitHub]: https://github.com/nvidia/thrust
[CMake section]: https://nvidia.github.io/thrust/setup/cmake_options.html
[contributing section]: https://nvidia.github.io/thrust/contributing.html
[CMake build system]: https://cmake.org
[Apache License v2.0 with LLVM Exceptions]: https://llvm.org/LICENSE.txt
[Apache License v2.0]: https://www.apache.org/licenses/LICENSE-2.0.txt
[Boost License v1.0]: https://www.boost.org/LICENSE_1_0.txt