{"id":13644640,"url":"https://github.com/NVIDIA/nccl","last_synced_at":"2025-04-21T10:33:53.885Z","repository":{"id":37444741,"uuid":"46153892","full_name":"NVIDIA/nccl","owner":"NVIDIA","description":"Optimized primitives for collective multi-GPU communication","archived":false,"fork":false,"pushed_at":"2025-04-14T06:59:14.000Z","size":3856,"stargazers_count":3641,"open_issues_count":797,"forks_count":895,"subscribers_count":153,"default_branch":"master","last_synced_at":"2025-04-14T07:43:45.197Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":"C++","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"other","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/NVIDIA.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2015-11-14T00:12:04.000Z","updated_at":"2025-04-14T06:59:20.000Z","dependencies_parsed_at":"2022-07-12T13:34:08.063Z","dependency_job_id":"e1d7e69b-3308-4bb3-96f9-1de1cb14b255","html_url":"https://github.com/NVIDIA/nccl","commit_stats":{"total_commits":225,"total_committers":51,"mean_commits":4.411764705882353,"dds":0.5111111111111111,"last_synced_commit":"2ea4ee94bfb04c886c79ccae60ac9961000fdee2"},"previous_names":[],"tags_count":58,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA%2Fnccl","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA%2Fnccl/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA%2Fnccl/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/NVIDIA%2Fnccl/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/NVIDIA","download_url":"https://codeload.github.com/NVIDIA/nccl/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":250040562,"owners_count":21365130,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-08-02T01:02:09.975Z","updated_at":"2025-04-21T10:33:48.874Z","avatar_url":"https://github.com/NVIDIA.png","language":"C++","readme":"# NCCL\n\nOptimized primitives for inter-GPU communication.\n\n## Introduction\n\nNCCL (pronounced \"Nickel\") is a stand-alone library of standard communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, as well as any send/receive based communication pattern. It has been optimized to achieve high bandwidth on platforms using PCIe, NVLink, NVswitch, as well as networking using InfiniBand Verbs or TCP/IP sockets. 
## Build

Note: the official and tested builds of NCCL can be downloaded from https://developer.nvidia.com/nccl. You can skip the following build steps if you choose to use the official builds.

To build the library:

```shell
$ cd nccl
$ make -j src.build
```

If CUDA is not installed in the default /usr/local/cuda path, you can define the CUDA path with:

```shell
$ make src.build CUDA_HOME=<path to cuda install>
```

NCCL will be compiled and installed in `build/` unless `BUILDDIR` is set.

By default, NCCL is compiled for all supported architectures. To accelerate the compilation and reduce the binary size, consider redefining `NVCC_GENCODE` (defined in `makefiles/common.mk`) to include only the architecture of the target platform:

```shell
$ make -j src.build NVCC_GENCODE="-gencode=arch=compute_70,code=sm_70"
```

## Install

To install NCCL on the system, create a package, then install it as root.

Debian/Ubuntu:

```shell
$ # Install tools to create debian packages
$ sudo apt install build-essential devscripts debhelper fakeroot
$ # Build NCCL deb package
$ make pkg.debian.build
$ ls build/pkg/deb/
```

RedHat/CentOS:

```shell
$ # Install tools to create rpm packages
$ sudo yum install rpm-build rpmdevtools
$ # Build NCCL rpm package
$ make pkg.redhat.build
$ ls build/pkg/rpm/
```

OS-agnostic tarball:

```shell
$ make pkg.txz.build
$ ls build/pkg/txz/
```

## Tests

Tests for NCCL are maintained separately at https://github.com/NVIDIA/nccl-tests.

```shell
$ git clone https://github.com/NVIDIA/nccl-tests.git
$ cd nccl-tests
$ make
$ ./build/all_reduce_perf -b 8 -e 256M -f 2 -g <ngpus>
```

## Copyright

All source code and accompanying documentation is copyright (c) 2015-2020, NVIDIA CORPORATION. All rights reserved.