Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/iree-org/iree-nvgpu
- Host: GitHub
- URL: https://github.com/iree-org/iree-nvgpu
- Owner: iree-org
- License: apache-2.0
- Archived: true
- Created: 2023-04-04T20:43:44.000Z (over 1 year ago)
- Default Branch: main
- Last Pushed: 2024-03-05T05:27:43.000Z (8 months ago)
- Last Synced: 2024-08-01T03:31:34.741Z (3 months ago)
- Language: MLIR
- Size: 540 KB
- Stars: 49
- Watchers: 7
- Forks: 19
- Open Issues: 17
Metadata Files:
- Readme: README.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Authors: AUTHORS
README
# OpenXLA NVIDIA GPU Compiler and Runtime
This project contains the compiler and runtime plugins enabling specialized
targeting of the OpenXLA platform to NVIDIA GPUs. It builds on top of the
core IREE toolkit.

## Development setup
The project can be built either as part of IREE, by manually specifying
plugin paths via `-DIREE_COMPILER_PLUGIN_PATHS`, or, for development tailored
specifically to NVIDIA GPUs, built directly:

```
cmake -GNinja -B build/ -S . \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DIREE_ENABLE_ASSERTIONS=ON \
-DCMAKE_C_COMPILER=clang \
-DCMAKE_CXX_COMPILER=clang++ \
-DIREE_ENABLE_LLD=ON

# Recommended:
# -DCMAKE_C_COMPILER_LAUNCHER=ccache -DCMAKE_CXX_COMPILER_LAUNCHER=ccache
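# Alternative (a sketch, not verified against a specific IREE revision):
# build as part of IREE instead, pointing IREE at this plugin checkout via
# the plugin-paths option mentioned above, e.g. from the IREE source tree:
#   cmake -GNinja -B build/ -S . \
#     -DIREE_COMPILER_PLUGIN_PATHS=/path/to/openxla-nvgpu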
```

Note that you will need a checkout of the IREE codebase in `../iree`, relative
to the directory where the `openxla-nvgpu` compiler was checked out. Running
the `sync_deps.py` script should bring in all source dependencies at the needed
versions (into the parent directory).

See the IREE [getting
started](https://openxla.github.io/iree/building-from-source/getting-started/)
guide for additional options for configuring IREE.

## Installing dependencies
You must have the CUDA Toolkit installed together with cuDNN ([see
instructions](https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html#installlinux-tar)).

See the project settings for options to build without the components that
require the full set of dependencies.

On Linux, the directory containing `libcudnn.so` should be added to
`LD_LIBRARY_PATH`:
```
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cuda/lib64
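# If cuDNN is installed under its own prefix, add that directory as well
# (the path below is an example assumption; adjust to your install):
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:/usr/local/cudnn/lib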
```

## Running tests
Some of the tests can run only on Ampere or newer devices because they rely on
the [cuDNN runtime fusion
engine](https://docs.nvidia.com/deeplearning/cudnn/developer-guide/index.html#runtime-fusion-engine).

Tests that depend on having a device present can be disabled with
`-DOPENXLA_NVGPU_INCLUDE_DEVICE_TESTS=OFF`.

```
cmake --build build --target openxla-nvgpu-run-tests
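# To skip device-dependent tests entirely, reconfigure first (a sketch using
# the configure-time option documented above):
#   cmake build/ -DOPENXLA_NVGPU_INCLUDE_DEVICE_TESTS=OFF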
```

## Project Maintenance

This section is a work in progress describing various project maintenance
tasks.

### Pre-requisite: Install openxla-devtools
```
pip install git+https://github.com/openxla/openxla-devtools.git
```

### Sync all deps to pinned versions
```
openxla-workspace sync
```

### Update IREE to head
This updates the pinned IREE revision to the HEAD revision at the remote.
```
# Updates the sync_deps.py metadata.
openxla-workspace roll iree
# Brings all dependencies to pinned versions.
openxla-workspace sync
```

### Full update of all deps
This updates the pinned revisions of all dependencies. This is presently done
by updating `openxla-pjrt-plugin` to remote HEAD and deriving the IREE
dependency from its pin.

```
# Updates the sync_deps.py metadata.
openxla-workspace roll nightly
# Brings all dependencies to pinned versions.
openxla-workspace sync
```

### Pin current versions of all deps
This can be done if local, cross-project changes have been made and landed.
It snapshots the state of all deps as actually checked out and updates
the metadata.

```
openxla-workspace pin
```