{"id":17485447,"url":"https://github.com/jerryfeng2003/PointGST","last_synced_at":"2025-03-03T22:30:57.466Z","repository":{"id":258193963,"uuid":"870952606","full_name":"jerryfeng2003/PointGST","owner":"jerryfeng2003","description":"Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning","archived":false,"fork":false,"pushed_at":"2024-10-18T02:19:16.000Z","size":13178,"stargazers_count":60,"open_issues_count":0,"forks_count":5,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-10-21T10:05:54.190Z","etag":null,"topics":["3d-point-clouds","efficient-deep-learning","point-cloud"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2410.08114","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/jerryfeng2003.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-10-11T01:22:17.000Z","updated_at":"2024-10-21T00:38:31.000Z","dependencies_parsed_at":"2024-10-18T00:36:40.343Z","dependency_job_id":"a6e25f34-873b-4e26-a384-ee9cb980c3fa","html_url":"https://github.com/jerryfeng2003/PointGST","commit_stats":null,"previous_names":["jerryfeng2003/pointgst"],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jerryfeng2003%2FPointGST","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jerryfeng2003%2FPointGST/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/jerryfeng2003%2FPointGST/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/G
itHub/repositories/jerryfeng2003%2FPointGST/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/jerryfeng2003","download_url":"https://codeload.github.com/jerryfeng2003/PointGST/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":241752094,"owners_count":20014228,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d-point-clouds","efficient-deep-learning","point-cloud"],"created_at":"2024-10-19T02:01:02.684Z","updated_at":"2025-03-03T22:30:57.460Z","avatar_url":"https://github.com/jerryfeng2003.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\u003ch1\u003eParameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning🚀\u003c/h1\u003e\n\n\n[Dingkang Liang](https://dk-liang.github.io/)\u003csup\u003e1\u003c/sup\u003e\\* ,[Tianrui Feng](https://github.com/jerryfeng2003)\u003csup\u003e1\u003c/sup\u003e\\* ,[Xin Zhou](https://lmd0311.github.io/)\u003csup\u003e1\u003c/sup\u003e\\* , Yumeng Zhang\u003csup\u003e2\u003c/sup\u003e, [Zhikang Zou](https://bigteacher-777.github.io/)\u003csup\u003e2\u003c/sup\u003e, and [Xiang Bai](https://scholar.google.com/citations?user=UeltiQ4AAAAJ\u0026hl=en)\u003csup\u003e 1✉️\u003c/sup\u003e\n\n\u003csup\u003e1\u003c/sup\u003e  Huazhong University of Science and Technology, \u003csup\u003e2\u003c/sup\u003e  Baidu Inc.\n\n(*) equal contribution, (​✉️​) corresponding author.\n\n[![arXiv](https://img.shields.io/badge/Arxiv-2410.08114-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2410.08114)\n[![Code 
License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/tatsu-lab/stanford_alpaca/blob/main/LICENSE)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/parameter-efficient-fine-tuning-in-spectral/3d-point-cloud-classification-on-scanobjectnn)](https://paperswithcode.com/sota/3d-point-cloud-classification-on-scanobjectnn?p=parameter-efficient-fine-tuning-in-spectral)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/parameter-efficient-fine-tuning-in-spectral/3d-parameter-efficient-fine-tuning-for)](https://paperswithcode.com/sota/3d-parameter-efficient-fine-tuning-for?p=parameter-efficient-fine-tuning-in-spectral)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/parameter-efficient-fine-tuning-in-spectral/3d-parameter-efficient-fine-tuning-for-1)](https://paperswithcode.com/sota/3d-parameter-efficient-fine-tuning-for-1?p=parameter-efficient-fine-tuning-in-spectral)\n[![PWC](https://img.shields.io/endpoint.svg?url=https://paperswithcode.com/badge/parameter-efficient-fine-tuning-in-spectral/3d-point-cloud-classification-on-modelnet40)](https://paperswithcode.com/sota/3d-point-cloud-classification-on-modelnet40?p=parameter-efficient-fine-tuning-in-spectral)\n\n\u003c/div\u003e\n\n## News\n\n**[2024-10-10]** [PointGST](https://arxiv.org/abs/2410.08114) is released.\n\n## Abstract\n\nRecently, leveraging pre-training techniques to enhance point cloud models has become a hot research topic. However, existing approaches typically require full fine-tuning of pre-trained models to achieve satisfactory performance on downstream tasks, which is storage-intensive and computationally demanding. To address this issue, we propose a novel Parameter-Efficient Fine-Tuning (PEFT) method for point clouds, called **PointGST** (**Point** cloud **G**raph **S**pectral **T**uning). 
PointGST freezes the pre-trained model and introduces a lightweight, trainable Point Cloud Spectral Adapter (PCSA) to fine-tune parameters in the spectral domain.\n\n\u003cdiv  align=\"center\"\u003e    \n \u003cimg src=\"./figure/intro.png\" width = \"888\"  align=center /\u003e\n\u003c/div\u003e \n\nExtensive experiments on challenging point cloud datasets across various tasks demonstrate that PointGST not only outperforms its fully fine-tuned counterpart but also significantly reduces trainable parameters, making it a promising solution for efficient point cloud learning. More importantly, it improves upon a solid baseline by +2.28\\%, +1.16\\%, and +2.78\\%, resulting in 99.48\\%, 97.76\\%, and 96.18\\% on the ScanObjectNN OBJ\\_BG, OBJ\\_ONLY, and PB\\_T50\\_RS datasets, respectively. This advancement establishes a new state-of-the-art, using only 0.67\\% of the trainable parameters.\n\n## Overview\n\u003cdiv  align=\"center\"\u003e    \n \u003cimg src=\"./figure/pipeline.png\" width = \"888\"  align=center /\u003e\n\u003c/div\u003e\n\n## Getting Started\n\n### Installation\n\nWe recommend using Anaconda for the installation process:\n```bash\ngit clone https://github.com/jerryfeng2003/PointGST.git\ncd PointGST/\n\n```\n### Requirements\n```bash\nconda create -y -n pgst python=3.9\nconda activate pgst\npip install torch==2.0.0 torchvision==0.15.1 torchaudio==2.0.1 --index-url https://download.pytorch.org/whl/cu118\npip install -r requirements.txt\n\n# Chamfer Distance \u0026 EMD\ncd ./extensions/chamfer_dist\npython setup.py install --user\ncd ../emd\npython setup.py install --user\n\n# PointNet++\npip install \"git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops\u0026subdirectory=pointnet2_ops_lib\"\n\n# GPU kNN\npip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl\n\n```\n### Datasets\n\nSee [DATASET.md](./DATASET.md) for details.\n\n## Main Results\n\n\u003cdiv  
align=\"center\"\u003e    \n \u003cimg src=\"./figure/result.png\" width = \"888\"  align=center /\u003e\n\u003c/div\u003e\n\n\u003cdiv  align=\"center\"\u003e    \n \u003cimg src=\"./figure/result2.png\" width = \"888\"  align=center /\u003e\n\u003c/div\u003e\n\n| Baseline | Trainable Parameters | Dataset | Config | Acc. | Download |\n| :---- | :---- | :---- | :---- | :---- | :---- |\n| Point-MAE \u003cbr\u003e (ECCV 22) | 0.6M |ModelNet40 \u003cbr\u003e OBJ_BG \u003cbr\u003e OBJ_ONLY \u003cbr\u003e PB_T50_RS | [modelnet](./cfgs/mae/finetune_modelnet_pgst.yaml) \u003cbr\u003e [scan_objbg](./cfgs/mae/finetune_scan_objbg_pgst.yaml) \u003cbr\u003e [scan_objonly](./cfgs/mae/finetune_scan_objonly_pgst.yaml) \u003cbr\u003e [scan_hardest](./cfgs/mae/finetune_scan_hardest_pgst.yaml) | 93.5 \u003cbr\u003e 91.74 \u003cbr\u003e 90.19 \u003cbr\u003e 85.29 | [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/modelnet_mae.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objbg_mae.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objonly_mae.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_hardest_mae.pth) |\n| ACT \u003cbr\u003e (ICLR 23)| 0.6M |ModelNet40 \u003cbr\u003e OBJ_BG \u003cbr\u003e OBJ_ONLY \u003cbr\u003e PB_T50_RS | [modelnet](./cfgs/act/finetune_modelnet_pgst.yaml) \u003cbr\u003e [scan_objbg](./cfgs/act/finetune_scan_objbg_pgst.yaml) \u003cbr\u003e [scan_objonly](./cfgs/act/finetune_scan_objonly_pgst.yaml) \u003cbr\u003e [scan_hardest](./cfgs/act/finetune_scan_hardest_pgst.yaml) | 93.4 \u003cbr\u003e 93.46 \u003cbr\u003e 92.60 \u003cbr\u003e 88.27  | [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/modelnet_act.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objbg_act.pth) \u003cbr\u003e 
[ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objonly_act.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_hardest_act.pth) |\n| ReCon \u003cbr\u003e (ICML 23) | 0.6M |ModelNet40 \u003cbr\u003e OBJ_BG \u003cbr\u003e OBJ_ONLY \u003cbr\u003e PB_T50_RS | [modelnet](./cfgs/recon/finetune_modelnet_pgst.yaml) \u003cbr\u003e [scan_objbg](./cfgs/recon/finetune_scan_objbg_pgst.yaml) \u003cbr\u003e [scan_objonly](./cfgs/recon/finetune_scan_objonly_pgst.yaml) \u003cbr\u003e [scan_hardest](./cfgs/recon/finetune_scan_hardest_pgst.yaml) | 93.6 \u003cbr\u003e 94.49 \u003cbr\u003e 92.94 \u003cbr\u003e 89.49 | [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/modelnet_recon.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objbg_recon.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objonly_recon.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_hardest_recon.pth) |\n| PointGPT-L \u003cbr\u003e (NeurIPS 24)| 2.4M |ModelNet40 \u003cbr\u003e OBJ_BG \u003cbr\u003e OBJ_ONLY \u003cbr\u003e PB_T50_RS | [modelnet](./cfgs/pointgpt/finetune_modelnet_pgst.yaml) \u003cbr\u003e [scan_objbg](./cfgs/pointgpt/finetune_scan_objbg_pgst.yaml) \u003cbr\u003e [scan_objonly](./cfgs/pointgpt/finetune_scan_objonly_pgst.yaml) \u003cbr\u003e [scan_hardest](./cfgs/pointgpt/finetune_scan_hardest_pgst.yaml) | 94.8 \u003cbr\u003e 98.97 \u003cbr\u003e 97.59 \u003cbr\u003e 94.83| [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/modelnet_gpt.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objbg_gpt.pth) \u003cbr\u003e [ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objonly_gpt.pth) \u003cbr\u003e 
[ckpt](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_hardest_gpt.pth) |\n| PointGPT-L (voting) \u003cbr\u003e (NeurIPS 24) | 2.4M |ModelNet40 \u003cbr\u003e OBJ_BG \u003cbr\u003e OBJ_ONLY \u003cbr\u003e PB_T50_RS | [modelnet](./cfgs/pointgpt/finetune_modelnet_pgst.yaml) \u003cbr\u003e [scan_objbg](./cfgs/pointgpt/finetune_scan_objbg_pgst.yaml) \u003cbr\u003e [scan_objonly](./cfgs/pointgpt/finetune_scan_objonly_pgst.yaml) \u003cbr\u003e [scan_hardest](./cfgs/pointgpt/finetune_scan_hardest_pgst.yaml) | 95.3 \u003cbr\u003e 99.48 \u003cbr\u003e 97.76 \u003cbr\u003e 96.18| [log](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/modelnet_gpt_vote.log) \u003cbr\u003e [log](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objbg_gpt_vote.log) \u003cbr\u003e [log](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_objonly_gpt_vote.log) \u003cbr\u003e [log](https://github.com/jerryfeng2003/PointGST/releases/download/ckpts/scan_hardest_gpt_vote.log) |\n\nTo evaluate with the released checkpoints, use commands of the following form:\n```shell\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --test --config \u003cpath/to/cfg\u003e --exp_name \u003cpath/to/output\u003e --ckpts \u003cpath/to/ckpt\u003e\n\n# additionally enable the voting mechanism\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --test --vote --config \u003cpath/to/cfg\u003e --exp_name \u003cpath/to/output\u003e --ckpts \u003cpath/to/ckpt\u003e\n```\n\nAll experiments were conducted on a single NVIDIA 3090 GPU.\n\n### t-SNE visualization\n\n```shell\n# t-SNE on ScanObjectNN\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --config \u003cpath/to/cfg\u003e --ckpts \u003cpath/to/ckpt\u003e --tsne --exp_name \u003cpath/to/output\u003e\n```\n## To Do\n\n- [x] Release the inference code for classification.\n- [x] Release the checkpoints for classification.\n- [ ] Release the training code for classification.\n- [ ] Release the code for 
segmentation.\n\n## Acknowledgement\n\nThis project is based on Point-BERT ([paper](https://arxiv.org/abs/2111.14819), [code](https://github.com/lulutang0608/Point-BERT)), Point-MAE ([paper](https://arxiv.org/abs/2203.06604), [code](https://github.com/Pang-Yatian/Point-MAE)), ACT ([paper](https://arxiv.org/abs/2212.08320), [code](https://github.com/RunpeiDong/ACT)), ReCon ([paper](https://arxiv.org/abs/2302.02318), [code](https://github.com/qizekun/ReCon)), PointGPT ([paper](https://arxiv.org/abs/2305.11487), [code](https://github.com/CGuangyan-BIT/PointGPT)), IDPT ([paper](https://arxiv.org/abs/2304.07221), [code](https://github.com/zyh16143998882/ICCV23-IDPT)), and DAPT ([paper](https://arxiv.org/abs/2403.01439), [code](https://github.com/LMD0311/DAPT)). Thanks for their wonderful work.\n\n## Citation\n\nIf you find this repository useful in your research, please consider giving a star ⭐ and a citation.\n```bibtex\n@article{liang2024pointgst,\n  title={Parameter-Efficient Fine-Tuning in Spectral Domain for Point Cloud Learning},\n  author={Liang, Dingkang and Feng, Tianrui and Zhou, Xin and Zhang, Yumeng and Zou, Zhikang and Bai, Xiang},\n  journal={arXiv preprint arXiv:2410.08114},\n  year={2024}\n}\n```\n","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjerryfeng2003%2FPointGST","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fjerryfeng2003%2FPointGST","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fjerryfeng2003%2FPointGST/lists"}