{"id":20725547,"url":"https://github.com/oneapi-src/ishmem","last_synced_at":"2025-06-26T07:02:38.760Z","repository":{"id":208518369,"uuid":"710923864","full_name":"oneapi-src/ishmem","owner":"oneapi-src","description":"Intel® SHMEM - Device initiated shared memory based communication library","archived":false,"fork":false,"pushed_at":"2025-06-09T17:11:48.000Z","size":4479,"stargazers_count":24,"open_issues_count":3,"forks_count":5,"subscribers_count":6,"default_branch":"main","last_synced_at":"2025-06-24T13:45:42.879Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"https://oneapi-src.github.io/ishmem/index.html","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/oneapi-src.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":"CONTRIBUTING.md","funding":null,"license":"LICENSE","code_of_conduct":"CODE_OF_CONDUCT.md","threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":"SECURITY.md","support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null}},"created_at":"2023-10-27T18:29:35.000Z","updated_at":"2025-06-11T08:53:52.000Z","dependencies_parsed_at":"2024-06-28T16:29:17.049Z","dependency_job_id":"d9337988-3a29-41e3-a858-a19c7d13c0c6","html_url":"https://github.com/oneapi-src/ishmem","commit_stats":null,"previous_names":["oneapi-src/ishmem"],"tags_count":4,"template":false,"template_full_name":null,"purl":"pkg:github/oneapi-src/ishmem","repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fishmem","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fishmem/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fishmem/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub
/repositories/oneapi-src%2Fishmem/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/oneapi-src","download_url":"https://codeload.github.com/oneapi-src/ishmem/tar.gz/refs/heads/main","sbom_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/oneapi-src%2Fishmem/sbom","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":262018678,"owners_count":23245618,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-11-17T04:19:18.636Z","updated_at":"2025-06-26T07:02:38.742Z","avatar_url":"https://github.com/oneapi-src.png","language":"C++","readme":"# Intel® SHMEM \u003c!-- omit in toc --\u003e \u003cimg align=\"right\" width=\"100\" height=\"100\" src=\"https://spec.oneapi.io/oneapi-logo-white-scaled.jpg\"\u003e\n\n[Installation](#installation)\u0026nbsp;\u0026nbsp;\u0026nbsp;|\u0026nbsp;\u0026nbsp;\u0026nbsp;[Usage](#usage)\u0026nbsp;\u0026nbsp;\u0026nbsp;|\u0026nbsp;\u0026nbsp;\u0026nbsp;[Release Notes](RELEASE_NOTES.md)\u0026nbsp;\u0026nbsp;\u0026nbsp;|\u0026nbsp;\u0026nbsp;\u0026nbsp;[Documentation](https://oneapi-src.github.io/ishmem/intro.html)\u0026nbsp;\u0026nbsp;\u0026nbsp;|\u0026nbsp;\u0026nbsp;\u0026nbsp;[How to Contribute](CONTRIBUTING.md)\u0026nbsp;\u0026nbsp;\u0026nbsp;|\u0026nbsp;\u0026nbsp;\u0026nbsp;[License](LICENSE)\n\nIntel® SHMEM provides an efficient implementation of GPU-initiated communication on systems with Intel GPUs.\n\n## Table of Contents \u003c!-- omit in toc --\u003e\n\n- [Prerequisites](#prerequisites)\n- [Installation](#installation)\n- [Usage](#usage)\n  - 
[Launching Example Application](#launching-example-application)\n- [Additional Resources](#additional-resources)\n  - [OpenSHMEM Specification](#openshmem-specification)\n  - [Intel® SHMEM Specification](#intel-shmem-specification)\n\n## Prerequisites\n\n- Linux OS\n- Intel® oneAPI DPC++/C++ Compiler 2024.0 or higher.\n\n### SYCL support \u003c!-- omit in toc --\u003e\nIntel® oneAPI DPC++/C++ Compiler with Level Zero support.\n\n## Installation\n\n### Building Level Zero\nFor detailed information on Level Zero, refer to the [Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver repository](https://github.com/intel/compute-runtime/releases) or to the [installation guide](https://dgpu-docs.intel.com/installation-guides/index.html) for oneAPI users.\n\nTo install, download oneAPI Level Zero from the repository:\n\n```\ngit clone https://github.com/oneapi-src/level-zero.git\n```\n\nBuild Level Zero following the instructions below.\n\n```\ncd level-zero\nmkdir build\ncd build\ncmake -DCMAKE_INSTALL_PREFIX=\u003clevel_zero_dir\u003e ..\nmake -j\nmake install\n```\n### The Host Back-End Library\nIntel® SHMEM requires a host OpenSHMEM or MPI back-end to support host-side operations. In particular, the OpenSHMEM back-end relies on a collection of extension APIs (`shmemx_heap_create`, `shmemx_heap_preinit`, and `shmemx_heap_postinit`) to coordinate the Intel® SHMEM and OpenSHMEM heaps. We recommend [Sandia OpenSHMEM v1.5.3rc1](https://github.com/Sandia-OpenSHMEM/SOS/releases/tag/v1.5.3rc1) or newer for this purpose. A [work-in-progress branch](https://github.com/davidozog/oshmpi/tree/wip/ishmem) of [OSHMPI](https://github.com/pmodels/oshmpi.git) is also supported but is currently considered experimental. See the [Building OSHMPI](#building-oshmpi-optional-and-experimental) section below for more details.\n\nWe recommend the Intel® MPI Library as the MPI back-end option for the current version of Intel® SHMEM. 
See the [Building Intel® SHMEM](#building-intel-shmem) section below for more details.\n\n### Building Sandia OpenSHMEM (SOS)\nDownload the SOS repository, which will be configured as a back-end for Intel® SHMEM:\n\n```\ngit clone --recurse-submodules https://github.com/Sandia-OpenSHMEM/SOS.git SOS\n```\n\nBuild SOS following the instructions below. `FI_HMEM` support in the provider is required for use with Intel® SHMEM. To enable `FI_HMEM` with a supported provider, we recommend a specific set of configure flags. Below are three examples of configuring and building SOS with providers that support `FI_HMEM`. To configure SOS with the `verbs;ofi_rxm` provider, use the following instructions:\n\n```\ncd SOS\n./autogen.sh\nCC=icx CXX=icpx ./configure --prefix=\u003cshmem_dir\u003e --with-ofi=\u003cofi_installation\u003e --enable-pmi-simple --enable-ofi-mr=basic --disable-ofi-inject --enable-ofi-hmem --disable-bounce-buffers --enable-hard-polling\nmake -j\nmake install\n```\nTo configure SOS with the HPE Slingshot provider `cxi`, please use the following instructions:\n```\ncd SOS\n./autogen.sh\nCC=icx CXX=icpx ./configure --prefix=\u003cshmem_dir\u003e --with-ofi=\u003cofi_installation\u003e --enable-pmi-simple --enable-ofi-mr=basic --disable-ofi-inject --enable-ofi-hmem --disable-bounce-buffers --enable-ofi-manual-progress --enable-mr-endpoint --disable-nonfetch-amo --enable-manual-progress\nmake -j\nmake install\n```\nTo configure SOS with the `psm3` provider, please use the following instructions:\n```\ncd SOS\n./autogen.sh\nCC=icx CXX=icpx ./configure --prefix=\u003cshmem_dir\u003e --with-ofi=\u003cofi_installation\u003e --enable-pmi-simple --enable-manual-progress --enable-ofi-hmem --disable-bounce-buffers --enable-ofi-mr=basic --enable-mr-endpoint\nmake -j\nmake install\n```\n\nPlease choose an appropriate PMI configure flag based on the available PMI client library in the system. 
Please check the [SOS Wiki pages](https://github.com/Sandia-OpenSHMEM/SOS/wiki) for further instructions. Optionally, users may also choose to add `--disable-fortran` since Fortran interfaces will not be used.\n\n### Building OSHMPI (Optional and experimental)\nIntel® SHMEM has experimental support for OSHMPI when built using the Intel® MPI Library.\nHere is information on how to [Get Started with Intel® MPI Library on Linux](https://www.intel.com/content/www/us/en/docs/mpi-library/get-started-guide-linux/2021-11/overview.html).\n\nTo download the OSHMPI repository:\n\n```\ngit clone -b wip/ishmem --recurse-submodules https://github.com/davidozog/oshmpi.git oshmpi\n```\nAfter ensuring the Intel® MPI Library is enabled (for example, by sourcing the `/opt/intel/oneapi/setvars.sh` script),\nplease build OSHMPI following the instructions below.\n\n```\ncd oshmpi\n./autogen.sh\nCC=mpiicx CXX=mpiicpx ./configure --prefix=\u003cshmem_dir\u003e --disable-fortran --enable-rma=direct --enable-amo=direct --enable-async-thread=yes\nmake -j\nmake install\n```\n\n### Building Intel® SHMEM\nCheck that the SOS build process has successfully created a `\u003cshmem_dir\u003e` directory with `include` and `lib` as subdirectories. Verify that `shmem.h` and `shmemx.h` are present in `include`.\n\nBuild Intel® SHMEM with an OpenSHMEM back-end using the following instructions:\n\n```\ncd ishmem\nmkdir build\ncd build\nCC=icx CXX=icpx cmake .. -DENABLE_OPENSHMEM=ON -DSHMEM_DIR=\u003cshmem_dir\u003e -DCMAKE_INSTALL_PREFIX=\u003cishmem_install_dir\u003e\nmake -j\n```\nAlternatively, Intel® SHMEM can be built by enabling an Intel® MPI Library back-end.\nHere is information on how to [Get Started with Intel® MPI Library on Linux](https://www.intel.com/content/www/us/en/docs/mpi-library/get-started-guide-linux/2021-11/overview.html).\n\n```\nCC=icx CXX=icpx cmake .. 
-DENABLE_OPENSHMEM=OFF -DENABLE_MPI=ON -DMPI_DIR=\u003cimpi_dir\u003e -DCMAKE_INSTALL_PREFIX=\u003cishmem_install_dir\u003e\n```\nwhere `\u003cimpi_dir\u003e` is the path to the Intel® MPI Library installation.\n\nEnabling both the OpenSHMEM and MPI back-ends is also supported. In this case,\nthe desired back-end can be selected via the `ISHMEM_RUNTIME` environment\nvariable, which can be set to either \"OpenSHMEM\" or \"MPI\".\nThe default value for `ISHMEM_RUNTIME` is \"OpenSHMEM\".\n\n## Usage\n\n### Launching Example Application\n\nValidate that Intel® SHMEM was built correctly by running an example program.\n\n1. Add the path for the back-end library to the environment, for example:\n\n```\nexport LD_LIBRARY_PATH=\u003cshmem_dir\u003e/lib:$LD_LIBRARY_PATH\n```\n\nWhen enabling only the Intel® MPI Library back-end, simply source the appropriate\n`setvars.sh` script. When enabling both OpenSHMEM and MPI back-ends, first\nsource the `setvars.sh` script, then configure the dynamic linker to load the\nOpenSHMEM library (for example, by prepending `\u003cshmem_dir\u003e/lib` to\n`LD_LIBRARY_PATH`).\n\n2. Run the example program or test on an allocated node using a process launcher:\n\n```\nISHMEM_RUNTIME=\u003cback-end\u003e mpiexec.hydra -n 2 -hosts \u003callocated_node_id\u003e ./scripts/ishmrun ./test/unit/int_get_device\n```\nwhere `\u003cback-end\u003e` is the selected host back-end library.\n\n- *Note:* Currently supported launchers include MPI process launchers (e.g., `mpiexec`, `mpiexec.hydra`, `mpirun`), Slurm (e.g., `srun`, `salloc`), and PBS (e.g., `qsub`).\n\n- *Note:* The Intel® SHMEM execution model requires applications to use a 1:1 mapping between PEs and GPU devices. 
Attempting to run an application without the ishmrun launch script may result in undefined behavior if this mapping is not maintained.\n  - For further details on device selection, please see [the ONEAPI_DEVICE_SELECTOR](https://github.com/intel/llvm/blob/sycl/sycl/doc/EnvironmentVariables.md#oneapi_device_selector).\n\n3. Validate that the application ran successfully; example output:\n\n```\nSelected device: Intel(R) Data Center GPU Max 1550\nSelected vendor: Intel(R) Corporation\nSelected device: Intel(R) Data Center GPU Max 1550\nSelected vendor: Intel(R) Corporation\nNo errors\nNo errors\n```\n\n### Launching Example Application w/ CTest\n\n`ctest` can be used to run the Intel® SHMEM tests that are generated at compile time. To see a list of tests available via `ctest`, run:\n\n```\nctest -N\n```\n\nTo launch a single test, execute:\n\n```\nctest -R \u003ctest_name\u003e\n```\n\nAlternatively, all the tests in a directory (such as `test/unit/`) can be run with the following command:\n\n```\nctest --test-dir \u003cdirectory_name\u003e\n```\n\nBy default, a passed or failed test can be detected by the output:\n```\n    Start 69: sync-2-gpu\n1/1 Test #69: sync-2-gpu .......................   Passed    2.29 sec\n\n100% tests passed, 0 tests failed out of 1\n```\n\nTo have a test's output printed to the console, add either the `--verbose` or `--output-on-failure` flag to the `ctest` command.\n\n### Available Scheduler Wrappers for Jobs Run via CTest\nThe following values may be assigned to `CTEST_LAUNCHER` at configure time (e.g., `-DCTEST_LAUNCHER=mpi`) to set which scheduler will be used to run tests launched through a call to `ctest`:\n - srun (default)\n   - Launches CTest jobs on a single node using Slurm's `srun`.\n - mpi\n   - Uses `mpirun` to launch CTest jobs with the appropriate number of processes.\n - qsub\n   - Launches CTest jobs on a single node using `qsub`. If this option is being used on a system where a reservation must be made (e.g. 
via `pbsresnode`) prior to running a test, assign the `JOB_QUEUE` environment variable to the queue associated with your reservation:\n   ```\n   export JOB_QUEUE=\u003cqueue\u003e\n   ```\n\n## Additional Resources\n\n### OpenSHMEM Specification\n\n- [OpenSHMEM](http://openshmem.org/site/)\n- [Specification](http://openshmem.org/site/sites/default/site_files/OpenSHMEM-1.5.pdf)\n\n### Intel® SHMEM Specification\n\n- [Intel® SHMEM Specification](https://oneapi-src.github.io/ishmem/intro.html)\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foneapi-src%2Fishmem","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Foneapi-src%2Fishmem","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Foneapi-src%2Fishmem/lists"}