{"id":13443027,"url":"https://github.com/Jittor/JSparse","last_synced_at":"2025-03-20T15:31:53.830Z","repository":{"id":61780749,"uuid":"554017944","full_name":"Jittor/JSparse","owner":"Jittor","description":"JSparse is a high-performance auto-differentiation library for sparse voxels computation and point cloud processing based on TorchSparse and Jittor.","archived":false,"fork":false,"pushed_at":"2022-11-24T14:59:01.000Z","size":88,"stargazers_count":17,"open_issues_count":0,"forks_count":0,"subscribers_count":3,"default_branch":"master","last_synced_at":"2024-08-01T03:42:34.084Z","etag":null,"topics":["jittor","sparse"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/Jittor.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2022-10-19T05:47:58.000Z","updated_at":"2024-04-03T13:55:16.000Z","dependencies_parsed_at":"2023-01-23T18:01:02.732Z","dependency_job_id":null,"html_url":"https://github.com/Jittor/JSparse","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Jittor%2FJSparse","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Jittor%2FJSparse/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Jittor%2FJSparse/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/Jittor%2FJSparse/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/Jittor","download_url":"https://codeload.github.com/Jittor/JSparse/tar.gz/refs/heads/master","hos
t":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221772638,"owners_count":16878147,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["jittor","sparse"],"created_at":"2024-07-31T03:01:55.011Z","updated_at":"2024-10-28T03:31:39.129Z","avatar_url":"https://github.com/Jittor.png","language":"Python","readme":"# JSparse\n\n## Introduction\n\nJSparse is a high-performance auto-differentiation library for sparse voxel computation and point cloud processing, based on [Jittor](https://github.com/Jittor/jittor), [TorchSparse](https://github.com/mit-han-lab/torchsparse) and [Torch Cluster](https://github.com/rusty1s/pytorch_cluster).\n\n## Installation\n\nIf you use the CPU version, you need to install [Google Sparse Hash](https://github.com/sparsehash/sparsehash) and select `\"jittor\"` as the convolution algorithm.\n\nThe latest JSparse can be installed with\n\n```bash\npython setup.py install # or\npython setup.py develop\n```\n\n## Getting Started\n\n### Architecture\n\n```\n- jsparse\n    - nn\n        - functional\n        - modules\n    - utils\n        collate/quantize/utils.py\n```\n\nYou can use the modules from `jsparse/modules`.\n\n### Sparse Tensor\n\nThe sparse tensor (`SparseTensor`) is the main data structure for point clouds, and has two data fields:\n\n- Coordinates (`indices`): a 2D integer tensor with a shape of $N \times 4$, where the first dimension denotes the batch index, and the last three dimensions correspond to quantized $x, y, z$ coordinates.\n\n- Features (`values`): a 2D tensor with a shape of $N \times C$, where $C$ is 
the number of feature channels.\n\nMost existing datasets provide raw point cloud data with float coordinates. We can use `sparse_quantize` (provided in `JSparse.utils.quantize`) to voxelize the $x, y, z$ coordinates and remove duplicates.\n\nYou can also use the initialization method to obtain the discretized features automatically by turning on the `quantize` option:\n\n```python\ninputs = SparseTensor(values=feats, indices=coords, voxel_size=self.voxel_size, quantize=True)\n```\n\nWe can then use `sparse_collate_fn` (provided in `JSparse.utils.collate`) to assemble a batch of `SparseTensor`s (and add the batch dimension to the coordinates). Please refer to this example for more details.\n\n### Sparse Neural Network\n\nWe provide many common modules in `jsparse.nn`, such as `GlobalPool`.\n\nThe neural network interface in JSparse is similar to Jittor's:\n\n```python\nimport jsparse.nn as spnn\nfrom jittor import nn\n\ndef get_conv_block(in_channel, out_channel, kernel_size, stride):\n    return nn.Sequential(\n        spnn.Conv3d(\n            in_channel,\n            out_channel,\n            kernel_size=kernel_size,\n            stride=stride,\n        ),\n        spnn.BatchNorm(out_channel),\n        spnn.LeakyReLU(),\n    )\n```\n\nThe usage of most functions and modules is shown in the example `examples/MinkNet/classification_model40.py`.\n\n## Benchmark\n\nWe benchmark several networks with JSparse (v0.5.0) and TorchSparse (v1.4.0).\n\nBecause the Jittor framework is fast, inference and training are faster than with PyTorch for many operators. 
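For context, `quantize` here refers to the voxelization step described above: float coordinates are divided by the voxel size, floored to integer voxel indices, and duplicate points falling into the same voxel are removed. A minimal NumPy sketch of the idea (`quantize_sketch` is a hypothetical name for illustration, not JSparse's actual implementation):

```python
import numpy as np

def quantize_sketch(coords, voxel_size):
    # coords: (N, 3) float array of x, y, z positions
    voxels = np.floor(coords / voxel_size).astype(np.int32)
    # keep the first point that falls into each voxel
    _, keep = np.unique(voxels, axis=0, return_index=True)
    keep.sort()
    return voxels[keep], keep

coords = np.array([[0.10, 0.20, 0.30],
                   [0.11, 0.19, 0.31],   # lands in the same voxel as the first point
                   [1.50, 0.20, 0.30]])
voxels, keep = quantize_sketch(coords, voxel_size=1.0)
# two unique voxels remain: (0, 0, 0) and (1, 0, 0)
```

The `keep` indices can also be used to gather the corresponding features for the surviving points.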
We also speed up `quantize` with Jittor's operators and get better performance.\n\nWe test the speed of the following model, using 10 scenes from ScanNet as the dataset.\n\n```python\nimport jsparse.nn as spnn\nfrom jittor import nn\n\nalgorithm = \"cuda\"\nmodel = nn.Sequential(\n    spnn.Conv3d(3, 32, 2),\n    spnn.BatchNorm(32),\n    spnn.ReLU(),\n    spnn.Conv3d(32, 64, 3, stride=1, algorithm=algorithm),\n    spnn.BatchNorm(64),\n    spnn.ReLU(),\n    spnn.Conv3d(64, 128, 3, stride=1, algorithm=algorithm),\n    spnn.BatchNorm(128),\n    spnn.ReLU(),\n    spnn.Conv3d(128, 256, 2, stride=2, algorithm=algorithm),\n    spnn.BatchNorm(256),\n    spnn.ReLU(),\n    spnn.Conv3d(256, 128, 2, stride=2, transposed=True, algorithm=algorithm),\n    spnn.BatchNorm(128),\n    spnn.ReLU(),\n    spnn.Conv3d(128, 64, 3, stride=1, transposed=True, algorithm=algorithm),\n    spnn.BatchNorm(64),\n    spnn.ReLU(),\n    spnn.Conv3d(64, 32, 3, stride=1, transposed=True, algorithm=algorithm),\n    spnn.BatchNorm(32),\n    spnn.ReLU(),\n    spnn.Conv3d(32, 3, 2),\n)\n```\n\nWe implemented two versions of sparse convolution (the convolution is implemented with either Jittor operators or CUDA).\n\nWe set `batch_size = 2, total_len = 10` and run on an RTX 3080 to test per-iteration speed (the JSparse version is `v0.5.0`).\n\n|                   | JSparse(cuda) | TorchSparse(v1.4.0) |\n|-------------------|---------------|----------------------|\n| voxel_size = 0.50 | 20.05ms       | 33.66ms              |\n| voxel_size = 0.10 | 25.15ms       | 40.40ms              |\n| voxel_size = 0.02 | 81.37ms       | 87.42ms              |\n\n\u003c!-- |                   | JSparse(jittor) | JSparse(cuda) | TorchSparse(v1.4.0) |\n|-------------------|-----------------|---------------|----------------------|\n| voxel_size = 0.50 | 26.60ms         | 20.05ms       | 33.66ms              |\n| voxel_size = 0.10 | 32.34ms         | 25.15ms       | 40.40ms              |\n| voxel_size = 0.02 | 86.89ms         | 
81.37ms       | 87.42ms              | --\u003e\n\nWe also test [VMNet](https://github.com/hzykent/VMNet) on the same 200 scenes of ScanNet with JSparse and TorchSparse.\n\nWe set `batch_size = 3, num_workers = 16` and run on an RTX Titan and an Intel(R) Xeon(R) CPU E5-2678 v3 to test per-iteration speed.\n\n| JSparse(cuda) | TorchSparse(v1.4.0)  |\n|---------------|----------------------|\n| 0.79s         | 0.92s                |\n\n\u003e If we ignore the initialization (`scannet.py`) and just test the speed of the network, the speeds of JSparse and TorchSparse are similar.\n\n## Acknowledgements\n\nThe implementation and design of JSparse draw on many open-source libraries, including (but not limited to) [MinkowskiEngine](https://github.com/NVIDIA/MinkowskiEngine) and [TorchSparse](https://github.com/mit-han-lab/torchsparse).\n\nIf you use JSparse in your research, please cite our work and theirs using the following BibTeX entries:\n\n```bibtex\n@article{hu2020jittor,\n  title={Jittor: a novel deep learning framework with meta-operators and unified graph execution},\n  author={Hu, Shi-Min and Liang, Dun and Yang, Guo-Ye and Yang, Guo-Wei and Zhou, Wen-Yang},\n  journal={Science China Information Sciences},\n  volume={63},\n  number={222103},\n  pages={1--21},\n  year={2020}\n}\n```\n\n```bibtex\n@inproceedings{tang2022torchsparse,\n  title = {{TorchSparse: Efficient Point Cloud Inference Engine}},\n  author = {Tang, Haotian and Liu, Zhijian and Li, Xiuyu and Lin, Yujun and Han, Song},\n  booktitle = {Conference on Machine Learning and Systems (MLSys)},\n  year = {2022}\n}\n```\n\n```bibtex\n@inproceedings{tang2020searching,\n  title = {{Searching Efficient 3D Architectures with Sparse Point-Voxel Convolution}},\n  author = {Tang, Haotian and Liu, Zhijian and Zhao, Shengyu and Lin, Yujun and Lin, Ji and Wang, Hanrui and Han, Song},\n  booktitle = {European Conference on Computer Vision (ECCV)},\n  year = 
{2020}\n}\n```","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FJittor%2FJSparse","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FJittor%2FJSparse","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FJittor%2FJSparse/lists"}