![Al](al.svg) Aluminum
======================

**Aluminum** is a high-performance communication library for CPUs, GPUs, and other accelerator platforms.
It leverages existing libraries, such as MPI, NCCL, and RCCL, plus its own infrastructure, to deliver high-performance, accelerator-centric communication.

Aluminum is open-source and maintained by Lawrence Livermore National Laboratory.
If you use Aluminum, please cite [our paper](https://ieeexplore.ieee.org/document/8638639):
```
@inproceedings{dryden2018aluminum,
  title={Aluminum: An Asynchronous, {GPU}-Aware Communication Library Optimized for Large-Scale Training of Deep Neural Networks on {HPC} Systems},
  author={Dryden, Nikoli and Maruyama, Naoya and Moon, Tim and Benson, Tom and Yoo, Andy and Snir, Marc and Van Essen, Brian},
  booktitle={Proceedings of the Workshop on Machine Learning in HPC Environments (MLHPC)},
  year={2018}
}
```

## Features

* Blocking and non-blocking collective and point-to-point operations
* Accelerator-centric communication
* Supported communication backends:
  * `MPI`: Uses the Message Passing Interface and supports any hardware your underlying MPI library supports.
  * `NCCL`: Uses either Nvidia's [NCCL](https://developer.nvidia.com/nccl) library for Nvidia GPUs or AMD's [RCCL](https://github.com/ROCmSoftwarePlatform/rccl) library for AMD GPUs.
  * `HostTransfer`: Uses MPI plus the CUDA or HIP runtime to support Nvidia or AMD GPUs without specialized libraries.

## Getting Started

For full details, see the [Aluminum documentation](https://aluminum.readthedocs.io/).

For basic usage examples, see the [examples](examples) directory.

### Building and Installation

Aluminum is available via [Spack](https://spack.io/) or can be built manually from source.

Source builds need a recent CMake, a C++17 compiler, MPI, and hwloc.
Accelerator backends need the appropriate runtime libraries.

A basic out-of-source build can be done with
```
mkdir build && cd build
cmake /path/to/Aluminum/source
cmake --build .
```

For full details on building, configuration, testing, and benchmarking, see the [documentation](https://aluminum.readthedocs.io/en/latest/build.html).

## Authors

* [Nikoli Dryden](https://github.com/ndryden)
* [Naoya Maruyama](https://github.com/naoyam)
* [Tom Benson](https://github.com/benson31)
* Andy Yoo

See also the full list of [contributors](https://github.com/LLNL/Aluminum/graphs/contributors).

## License

Aluminum is licensed under the Apache License, Version 2.0. See [LICENSE](LICENSE) for details.
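
The basic build shown under Building and Installation configures only the default backend. As a sketch of what enabling a GPU backend at configure time might look like (the `ALUMINUM_ENABLE_*` option names and paths below are assumptions for illustration, not verified flags; consult the [build documentation](https://aluminum.readthedocs.io/en/latest/build.html) for the exact options):

```shell
# Configure an out-of-source build with a CUDA/NCCL backend enabled.
# NOTE: the ALUMINUM_ENABLE_* option names are assumptions; check the
# build documentation for the flags your Aluminum version accepts.
mkdir build && cd build
cmake -DCMAKE_INSTALL_PREFIX=/path/to/install \
      -DALUMINUM_ENABLE_CUDA=ON \
      -DALUMINUM_ENABLE_NCCL=ON \
      /path/to/Aluminum/source
cmake --build . --parallel
cmake --install .
```

Keeping the build directory separate from the source tree (an out-of-source build) lets you maintain multiple configurations, e.g. one per backend, side by side.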