{"id":13709580,"url":"https://github.com/rusty1s/pytorch_sparse","last_synced_at":"2025-05-14T05:12:09.059Z","repository":{"id":37706406,"uuid":"142701935","full_name":"rusty1s/pytorch_sparse","owner":"rusty1s","description":"PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations","archived":false,"fork":false,"pushed_at":"2025-04-10T19:34:53.000Z","size":699,"stargazers_count":1059,"open_issues_count":25,"forks_count":154,"subscribers_count":14,"default_branch":"master","last_synced_at":"2025-05-10T22:48:16.039Z","etag":null,"topics":["autograd","pytorch","sparse","sparse-matrices"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/rusty1s.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2018-07-28T18:46:53.000Z","updated_at":"2025-05-09T13:39:12.000Z","dependencies_parsed_at":"2024-01-29T08:15:22.139Z","dependency_job_id":"a1a7b7f0-4865-4b2a-8181-0d03c8e0f177","html_url":"https://github.com/rusty1s/pytorch_sparse","commit_stats":{"total_commits":693,"total_committers":45,"mean_commits":15.4,"dds":"0.11399711399711399","last_synced_commit":"bd567515098eff002d3e7af614cef053eb8f0a3f"},"previous_names":[],"tags_count":28,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rusty1s%2Fpytorch_sparse","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rusty1s%2Fpytorch_sparse/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rusty1s%2Fpytorch_sparse/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/rusty1s%2Fpytorch_sparse/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/rusty1s","download_url":"https://codeload.github.com/rusty1s/pytorch_sparse/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254076850,"owners_count":22010611,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["autograd","pytorch","sparse","sparse-matrices"],"created_at":"2024-08-02T23:00:41.859Z","updated_at":"2025-05-14T05:12:04.028Z","avatar_url":"https://github.com/rusty1s.png","language":"Python","readme":"[pypi-image]: https://badge.fury.io/py/torch-sparse.svg\n[pypi-url]: https://pypi.python.org/pypi/torch-sparse\n[testing-image]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/testing.yml/badge.svg\n[testing-url]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/testing.yml\n[linting-image]: https://github.com/rusty1s/pytorch_sparse/actions/workflows/linting.yml/badge.svg\n[linting-url]: 
## Installation

### Anaconda

**Update:** You can now install `pytorch-sparse` via [Anaconda](https://anaconda.org/pyg/pytorch-sparse) for all major OS/PyTorch/CUDA combinations 🤗
Given that you have [`pytorch >= 1.8.0` installed](https://pytorch.org/get-started/locally/), simply run

```
conda install pytorch-sparse -c pyg
```

### Binaries

We alternatively provide pip wheels for all major OS/PyTorch/CUDA combinations, see [here](https://data.pyg.org/whl).

#### PyTorch 2.5

To install the binaries for PyTorch 2.5.0, simply run

```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.5.0+${CUDA}.html
```

where `${CUDA}` should be replaced by either `cpu`, `cu118`, `cu121`, or `cu124` depending on your PyTorch installation.

|             | `cpu` | `cu118` | `cu121` | `cu124` |
|-------------|-------|---------|---------|---------|
| **Linux**   | ✅    | ✅      | ✅      | ✅      |
| **Windows** | ✅    | ✅      | ✅      | ✅      |
| **macOS**   | ✅    |         |         |         |

#### PyTorch 2.4

To install the binaries for PyTorch 2.4.0, simply run

```
pip install torch-scatter torch-sparse -f https://data.pyg.org/whl/torch-2.4.0+${CUDA}.html
```

where `${CUDA}` should be replaced by either `cpu`, `cu118`, `cu121`, or `cu124` depending on your PyTorch installation.

|             | `cpu` | `cu118` | `cu121` | `cu124` |
|-------------|-------|---------|---------|---------|
| **Linux**   | ✅    | ✅      | ✅      | ✅      |
| **Windows** | ✅    | ✅      | ✅      | ✅      |
| **macOS**   | ✅    |         |         |         |

**Note:** Binaries of older versions are also provided for PyTorch 1.4.0, PyTorch 1.5.0, PyTorch 1.6.0, PyTorch 1.7.0/1.7.1, PyTorch 1.8.0/1.8.1, PyTorch 1.9.0, PyTorch 1.10.0/1.10.1/1.10.2, PyTorch 1.11.0, PyTorch 1.12.0/1.12.1, PyTorch 1.13.0/1.13.1, PyTorch 2.0.0/2.0.1, PyTorch 2.1.0/2.1.1/2.1.2, PyTorch 2.2.0/2.2.1/2.2.2, and PyTorch 2.3.0/2.3.1 (following the same procedure).
For older versions, you need to explicitly specify the latest supported version number or install via `pip install --no-index` in order to prevent an unintended installation from source.
You can look up the latest supported version number [here](https://data.pyg.org/whl).
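If you are unsure which `${CUDA}` suffix matches your setup, PyTorch can report the CUDA toolkit it was built against. A quick check:

```python
import torch

print(torch.__version__)  # e.g. '2.5.0'
# The CUDA version the wheel was built with, e.g. '12.4' -> use the
# 'cu124' wheel index; prints None for CPU-only builds -> use 'cpu'.
print(torch.version.cuda)
```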
### From source

Ensure that at least PyTorch 1.7.0 is installed and verify that `cuda/bin` and `cuda/include` are in your `$PATH` and `$CPATH` respectively, *e.g.*:

```
$ python -c "import torch; print(torch.__version__)"
>>> 1.7.0

$ echo $PATH
>>> /usr/local/cuda/bin:...

$ echo $CPATH
>>> /usr/local/cuda/include:...
```

If you want to additionally build `torch-sparse` with METIS support, *e.g.* for partitioning, please download and install the [METIS library](https://web.archive.org/web/20211119110155/http://glaros.dtc.umn.edu/gkhome/metis/metis/download) by following the instructions in the `Install.txt` file.
Note that METIS needs to be installed with 64 bit `IDXTYPEWIDTH` by changing `include/metis.h`.
Afterwards, set the environment variable `WITH_METIS=1`.

Then run:

```
pip install torch-scatter torch-sparse
```

When running in a Docker container without an NVIDIA driver, PyTorch needs to evaluate the compute capabilities and may fail.
In this case, ensure that the compute capabilities are set via `TORCH_CUDA_ARCH_LIST`, *e.g.*:

```
export TORCH_CUDA_ARCH_LIST="6.0 6.1 7.2+PTX 7.5+PTX"
```

## Functions

### Coalesce

```
torch_sparse.coalesce(index, value, m, n, op="add") -> (torch.LongTensor, torch.Tensor)
```

Row-wise sorts `index` and removes duplicate entries.
Duplicate entries are removed by scattering them together.
For scattering, any operation of [`torch_scatter`](https://github.com/rusty1s/pytorch_scatter) can be used.

#### Parameters

* **index** *(LongTensor)* - The index tensor of sparse matrix.
* **value** *(Tensor)* - The value tensor of sparse matrix.
* **m** *(int)* - The first dimension of sparse matrix.
* **n** *(int)* - The second dimension of sparse matrix.
* **op** *(string, optional)* - The scatter operation to use. (default: `"add"`)

#### Returns

* **index** *(LongTensor)* - The coalesced index tensor of sparse matrix.
* **value** *(Tensor)* - The coalesced value tensor of sparse matrix.

#### Example

```python
import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

index, value = coalesce(index, value, m=3, n=2)
```

```
print(index)
tensor([[0, 1, 1, 2],
        [1, 0, 1, 0]])
print(value)
tensor([[6.0, 8.0],
        [7.0, 9.0],
        [3.0, 4.0],
        [5.0, 6.0]])
```
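Since `op` accepts any `torch_scatter` reduction, duplicates can also be combined element-wise by, *e.g.*, their maximum instead of their sum. A small variation of the example above (assuming the installed `torch_scatter` provides the `"max"` reduction):

```python
import torch
from torch_sparse import coalesce

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

# Duplicates at (0, 1) and (1, 0) are now reduced via element-wise max:
# max([2, 3], [4, 5]) = [4, 5] and max([1, 2], [6, 7]) = [6, 7].
index, value = coalesce(index, value, m=3, n=2, op="max")
```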
### Transpose

```
torch_sparse.transpose(index, value, m, n, coalesced=True) -> (torch.LongTensor, torch.Tensor)
```

Transposes dimensions 0 and 1 of a sparse matrix.

#### Parameters

* **index** *(LongTensor)* - The index tensor of sparse matrix.
* **value** *(Tensor)* - The value tensor of sparse matrix.
* **m** *(int)* - The first dimension of sparse matrix.
* **n** *(int)* - The second dimension of sparse matrix.
* **coalesced** *(bool, optional)* - If set to `False`, will not coalesce the output. (default: `True`)

#### Returns

* **index** *(LongTensor)* - The transposed index tensor of sparse matrix.
* **value** *(Tensor)* - The transposed value tensor of sparse matrix.

#### Example

```python
import torch
from torch_sparse import transpose

index = torch.tensor([[1, 0, 1, 0, 2, 1],
                      [0, 1, 1, 1, 0, 0]])
value = torch.Tensor([[1, 2], [2, 3], [3, 4], [4, 5], [5, 6], [6, 7]])

index, value = transpose(index, value, 3, 2)
```

```
print(index)
tensor([[0, 0, 1, 1],
        [1, 2, 0, 1]])
print(value)
tensor([[7.0, 9.0],
        [5.0, 6.0],
        [6.0, 8.0],
        [3.0, 4.0]])
```

### Sparse Dense Matrix Multiplication

```
torch_sparse.spmm(index, value, m, n, matrix) -> torch.Tensor
```

Matrix product of a sparse matrix with a dense matrix.

#### Parameters

* **index** *(LongTensor)* - The index tensor of sparse matrix.
* **value** *(Tensor)* - The value tensor of sparse matrix.
* **m** *(int)* - The first dimension of sparse matrix.
* **n** *(int)* - The second dimension of sparse matrix.
* **matrix** *(Tensor)* - The dense matrix.

#### Returns

* **out** *(Tensor)* - The dense output matrix.

#### Example

```python
import torch
from torch_sparse import spmm

index = torch.tensor([[0, 0, 1, 2, 2],
                      [0, 2, 1, 0, 1]])
value = torch.Tensor([1, 2, 4, 1, 3])
matrix = torch.Tensor([[1, 4], [2, 5], [3, 6]])

out = spmm(index, value, 3, 3, matrix)
```

```
print(out)
tensor([[7.0, 16.0],
        [8.0, 20.0],
        [7.0, 19.0]])
```
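The result equals the dense product of the materialized sparse operand with `matrix`, which makes for an easy sanity check. A minimal verification sketch of the example above:

```python
import torch
from torch_sparse import spmm

index = torch.tensor([[0, 0, 1, 2, 2],
                      [0, 2, 1, 0, 1]])
value = torch.Tensor([1, 2, 4, 1, 3])
matrix = torch.Tensor([[1, 4], [2, 5], [3, 6]])

out = spmm(index, value, 3, 3, matrix)

# Materialize the sparse operand and compare against plain dense matmul.
dense = torch.sparse_coo_tensor(index, value, (3, 3)).to_dense()
assert torch.allclose(out, dense @ matrix)
```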
### Sparse Sparse Matrix Multiplication

```
torch_sparse.spspmm(indexA, valueA, indexB, valueB, m, k, n, coalesced=False) -> (torch.LongTensor, torch.Tensor)
```

Matrix product of two sparse tensors.
Both input sparse matrices need to be **coalesced** (set the `coalesced` argument to `True` to force coalescing).

#### Parameters

* **indexA** *(LongTensor)* - The index tensor of first sparse matrix.
* **valueA** *(Tensor)* - The value tensor of first sparse matrix.
* **indexB** *(LongTensor)* - The index tensor of second sparse matrix.
* **valueB** *(Tensor)* - The value tensor of second sparse matrix.
* **m** *(int)* - The first dimension of first sparse matrix.
* **k** *(int)* - The second dimension of first sparse matrix and first dimension of second sparse matrix.
* **n** *(int)* - The second dimension of second sparse matrix.
* **coalesced** *(bool, optional)* - If set to `True`, will coalesce both input sparse matrices. (default: `False`)

#### Returns

* **index** *(LongTensor)* - The output index tensor of sparse matrix.
* **value** *(Tensor)* - The output value tensor of sparse matrix.

#### Example

```python
import torch
from torch_sparse import spspmm

indexA = torch.tensor([[0, 0, 1, 2, 2], [1, 2, 0, 0, 1]])
valueA = torch.Tensor([1, 2, 3, 4, 5])

indexB = torch.tensor([[0, 2], [1, 0]])
valueB = torch.Tensor([2, 4])

indexC, valueC = spspmm(indexA, valueA, indexB, valueB, 3, 3, 2)
```

```
print(indexC)
tensor([[0, 1, 2],
        [0, 1, 1]])
print(valueC)
tensor([8.0, 6.0, 8.0])
```

## Running tests

```
pytest
```

## C++ API

`torch-sparse` also offers a C++ API that contains C++ equivalents of the Python functions.
For this, we need to add `TorchLib` to the `-DCMAKE_PREFIX_PATH` (*e.g.*, it may exist in `{CONDA}/lib/python{X.X}/site-packages/torch` if installed via `conda`):

```
mkdir build
cd build
# Add -DWITH_CUDA=on for CUDA support
cmake -DCMAKE_PREFIX_PATH="..." ..
make
make install
```
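If PyTorch was installed as a Python package, one convenient way to obtain the prefix for `-DCMAKE_PREFIX_PATH` is to ask PyTorch itself:

```python
import torch

# Prints the prefix that contains PyTorch's CMake configuration files;
# pass this directory to -DCMAKE_PREFIX_PATH in the build step above.
print(torch.utils.cmake_prefix_path)
```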