{"id":13441262,"url":"https://github.com/LMD0311/DAPT","last_synced_at":"2025-03-20T11:37:53.728Z","repository":{"id":225452267,"uuid":"766012431","full_name":"LMD0311/DAPT","owner":"LMD0311","description":"[CVPR 2024] Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis","archived":false,"fork":false,"pushed_at":"2024-06-16T03:16:40.000Z","size":454,"stargazers_count":162,"open_issues_count":3,"forks_count":4,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-08-01T03:33:48.017Z","etag":null,"topics":["3d-point-clouds","cvpr2024","efficient-deep-learning","point-cloud"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2403.01439","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"apache-2.0","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/LMD0311.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2024-03-02T05:11:58.000Z","updated_at":"2024-07-26T14:40:34.000Z","dependencies_parsed_at":"2024-03-27T16:52:41.681Z","dependency_job_id":"e5960af0-52ff-4a8f-a96c-1acc60a09df8","html_url":"https://github.com/LMD0311/DAPT","commit_stats":null,"previous_names":["lmd0311/dapt"],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LMD0311%2FDAPT","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LMD0311%2FDAPT/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LMD0311%2FDAPT/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/LMD0311%2FDAPT/ma
nifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/LMD0311","download_url":"https://codeload.github.com/LMD0311/DAPT/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":221759944,"owners_count":16876323,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["3d-point-clouds","cvpr2024","efficient-deep-learning","point-cloud"],"created_at":"2024-07-31T03:01:31.735Z","updated_at":"2024-10-28T01:30:23.387Z","avatar_url":"https://github.com/LMD0311.png","language":"Python","readme":"\u003cdiv align=\"center\"\u003e\n\u003ch1\u003eDynamic Adapter Meets Prompt Tuning: \u003cbr\u003e\nParameter-Efficient Transfer Learning for Point Cloud Analysis\u003c/h1\u003e\n\n\n[Xin Zhou](https://lmd0311.github.io/)\u003csup\u003e1\u003c/sup\u003e\\*, [Dingkang Liang](https://dk-liang.github.io/)\u003csup\u003e1\u003c/sup\u003e\\*, [Wei Xu](https://scholar.google.com/citations?user=oMvFn0wAAAAJ\u0026hl=en)\u003csup\u003e1\u003c/sup\u003e, [Xingkui Zhu](https://scholar.google.com/citations?user=wKKiNQkAAAAJ\u0026hl=en)\u003csup\u003e1\u003c/sup\u003e, [Yihan Xu](https://github.com/yhxu022)\u003csup\u003e1\u003c/sup\u003e, [Zhikang Zou](https://bigteacher-777.github.io/)\u003csup\u003e2\u003c/sup\u003e, and [Xiang Bai](https://scholar.google.com/citations?user=UeltiQ4AAAAJ\u0026hl=en)\u003csup\u003e1✉️\u003c/sup\u003e\n\n\u003csup\u003e1\u003c/sup\u003e Huazhong University of Science \u0026 Technology, \u003csup\u003e2\u003c/sup\u003e Baidu Inc.\n\n(*) equal contribution, (✉️) corresponding 
author.\n\n[![arXiv](https://img.shields.io/badge/Arxiv-2403.01439-b31b1b.svg?logo=arXiv)](https://arxiv.org/abs/2403.01439)\n[![Zhihu](https://img.shields.io/badge/Intro-zhihu-blue.svg)](https://zhuanlan.zhihu.com/p/686850575)\n[![Hits](https://hits.seeyoufarm.com/api/count/incr/badge.svg?url=https%3A%2F%2Fgithub.com%2FLMD0311%2FDAPT\u0026count_bg=%2379C83D\u0026title_bg=%23555555\u0026icon=\u0026icon_color=%23E7E7E7\u0026title=hits\u0026edge_flat=false)](https://hits.seeyoufarm.com)\n[![GitHub issues](https://img.shields.io/github/issues/LMD0311/DAPT?color=critical\u0026label=Issues)](https://github.com/LMD0311/DAPT/issues?q=is%3Aopen+is%3Aissue)\n[![GitHub closed issues](https://img.shields.io/github/issues-closed/LMD0311/DAPT?color=success\u0026label=Issues)](https://github.com/LMD0311/DAPT/issues?q=is%3Aissue+is%3Aclosed)\n[![Code License](https://img.shields.io/badge/Code%20License-Apache_2.0-green.svg)](https://github.com/LMD0311/DAPT/blob/main/LICENSE)\n\n\u003c/div\u003e\n\n## 📣 News\n\n- **[11/Oct/2024]** 🚀 Check out our latest efficient fine-tuning work **[PointGST](https://github.com/jerryfeng2003/PointGST)**, which achieves **99.48%**, **97.76%**, and **96.18%** overall accuracy on the ScanObjectNN OBJ_BG, OBJ_ONLY, and PB_T50_RS datasets, respectively.\n- **[02/Mar/2024]** ✨ Release the code and checkpoints. 😊😊\n- **[26/Feb/2024]** 🎉 Our paper DAPT is accepted by **CVPR 2024**! 🥳🥳\n\n## Abstract\n\nPoint cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models. However, existing methods for model adaptation usually update all model parameters, i.e., the full fine-tuning paradigm, which is inefficient as it incurs high computational costs (e.g., training GPU memory) and massive storage space. In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency. 
To achieve this goal, we first freeze the parameters of the default pre-trained models and then propose the Dynamic Adapter, which generates a dynamic scale for each point token, considering the token significance to the downstream task. We further seamlessly integrate **D**ynamic **A**dapter with **P**rompt **T**uning (DAPT) by constructing Internal Prompts, capturing the instance-specific features for interaction. Extensive experiments conducted on five challenging datasets demonstrate that the proposed DAPT achieves superior performance compared to the full fine-tuning counterparts while significantly reducing the trainable parameters and training GPU memory by 95% and 35%, respectively.\n\n## Overview\n\n\u003cdiv  align=\"center\"\u003e    \n \u003cimg src=\"./figure/pipeline.png\" width = \"999\"  align=center /\u003e\n\u003c/div\u003e\n\n\n\n## Getting Started\n\n### Installation\n\nWe recommend using Anaconda for the installation process:\n```shell\n$ git clone https://github.com/LMD0311/DAPT.git\n$ cd DAPT\n# Create virtual env and install PyTorch\n$ conda create -y -n dapt python=3.9\n$ conda activate dapt\n(dapt) $ pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html\n\n# Install basic required packages\n(dapt) $ pip install -r requirements.txt\n\n# Chamfer Distance \u0026 emd\n(dapt) $ cd ./extensions/chamfer_dist \u0026\u0026 python setup.py install --user\n(dapt) $ cd ../..\n(dapt) $ cd ./extensions/emd \u0026\u0026 python setup.py install --user\n\n# PointNet++\n(dapt) $ pip install \"git+https://github.com/erikwijmans/Pointnet2_PyTorch.git#egg=pointnet2_ops\u0026subdirectory=pointnet2_ops_lib\"\n\n# GPU kNN\n(dapt) $ pip install --upgrade https://github.com/unlimblue/KNN_CUDA/releases/download/0.2/KNN_CUDA-0.2-py3-none-any.whl\n```\n\n### Datasets\n\nSee [DATASET.md](./DATASET.md) for details.\n\n### Pretrain\n\nTo fine-tune on downstream tasks, you may need to download or 
reproduce the pre-trained checkpoint.\n\n## Main Results (Point-MAE)\n\n\n| Task           | Dataset      | Trainable Parameters | Config                                                       | Acc.   |                     Checkpoints Download                     | logs                                                         |\n| :------------- | :----------- | :------------------: | :----------------------------------------------------------- | :----- | :----------------------------------------------------------: | ------------------------------------------------------------ |\n| Classification | ScanObjectNN |         1.1M         | [finetune_scan_objbg_dapt.yaml](./cfgs/finetune_scan_objbg_dapt.yaml) | 90.88% | [OBJ-BG](https://github.com/LMD0311/DAPT/releases/download/ckpt/scan_objbg.pth) | [scan_objbg.log](https://github.com/LMD0311/DAPT/releases/download/ckpt/scan_objbg.log) |\n| Classification | ScanObjectNN |         1.1M         | [finetune_scan_objonly_dapt.yaml](./cfgs/finetune_scan_objonly_dapt.yaml) | 90.19% | [OBJ-ONLY](https://github.com/LMD0311/DAPT/releases/download/ckpt/scan_objonly.pth) | [scan_objonly.log](https://github.com/LMD0311/DAPT/releases/download/ckpt/scan_objonly.log) |\n| Classification | ScanObjectNN |         1.1M         | [finetune_scan_hardest_dapt.yaml](./cfgs/finetune_scan_hardest_dapt.yaml) | 85.08% | [PB-T50-RS](https://github.com/LMD0311/DAPT/releases/download/ckpt/scan_hardest.pth) | [scan_hardest.log](https://github.com/LMD0311/DAPT/releases/download/ckpt/scan_hardest.log) |\n| Classification | ModelNet40   |         1.1M         | [finetune_modelnet_dapt.yaml](./cfgs/finetune_modelnet_dapt.yaml) | 93.5%  | [ModelNet-1k](https://github.com/LMD0311/DAPT/releases/download/ckpt/modelnet.pth) | [modelnet.log](https://github.com/LMD0311/DAPT/releases/download/ckpt/modelnet.log) |\n\n\nThe evaluation commands with checkpoints should be in the following format:\n```shell\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --test 
--config \u003cyaml_file_name\u003e --exp_name \u003coutput_file_name\u003e --ckpts \u003cpath/to/ckpt\u003e\n```\n\n## Fine-tuning on downstream tasks\n\n### ModelNet40\n\n```shell\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --config cfgs/finetune_modelnet_dapt.yaml --ckpts \u003cpath/to/pre-trained/model\u003e --finetune_model --exp_name \u003cname\u003e\n\n# further enable the voting mechanism\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --config cfgs/finetune_modelnet_dapt.yaml --test --vote --exp_name \u003cname\u003e --ckpts \u003cpath/to/best/model\u003e\n```\n\nThe voting strategy is time-consuming, and its results are not comparable across compute platforms; hence, we prioritize reporting overall accuracy without voting. We recommend not using the voting strategy, despite its promising results. 🙏🙏\n\n### ScanObjectNN\n\n```shell\n# For fine-tuning on the OBJ-BG variant\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --config cfgs/finetune_scan_objbg_dapt.yaml --ckpts \u003cpath/to/pre-trained/model\u003e --finetune_model --exp_name \u003cname\u003e\n\n# For fine-tuning on the OBJ-ONLY variant\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --config cfgs/finetune_scan_objonly_dapt.yaml --ckpts \u003cpath/to/pre-trained/model\u003e --finetune_model --exp_name \u003cname\u003e\n\n# For fine-tuning on the PB-T50-RS variant\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --config cfgs/finetune_scan_hardest_dapt.yaml --ckpts \u003cpath/to/pre-trained/model\u003e --finetune_model --exp_name \u003cname\u003e\n```\n\n## t-SNE visualization\n\nYou can use t-SNE to visualize the results obtained on the ScanObjectNN test sets.\n\n```shell\n# t-SNE on ScanObjectNN\nCUDA_VISIBLE_DEVICES=\u003cGPU\u003e python main.py --config cfgs/tsne/finetune_scan_hardest_dapt_tsne.yaml --ckpts \u003cpath/to/ckpt\u003e --tsne --exp_name \u003cname\u003e\n```\nYou can also create your own configs for other visualizations. 
😍😍\n\n## Acknowledgements\n\nThis project is based on Point-BERT ([paper](https://arxiv.org/abs/2111.14819), [code](https://github.com/lulutang0608/Point-BERT)), Point-MAE ([paper](https://arxiv.org/abs/2203.06604), [code](https://github.com/Pang-Yatian/Point-MAE)), ACT ([paper](https://arxiv.org/abs/2212.08320), [code](https://github.com/RunpeiDong/ACT)), ReCon ([paper](https://arxiv.org/abs/2302.02318), [code](https://github.com/qizekun/ReCon)), and IDPT ([paper](https://arxiv.org/abs/2304.07221), [code](https://github.com/zyh16143998882/ICCV23-IDPT)). Thanks for their wonderful work.\n\n## Citation\n\nIf you find this repository useful in your research, please consider giving it a star ⭐ and a citation:\n```bibtex\n@inproceedings{zhou2024dynamic,\n  title={Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis},\n  author={Zhou, Xin and Liang, Dingkang and Xu, Wei and Zhu, Xingkui and Xu, Yihan and Zou, Zhikang and Bai, Xiang},\n  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},\n  pages={14707--14717},\n  year={2024}\n}\n```","funding_links":[],"categories":["Python"],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FLMD0311%2FDAPT","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FLMD0311%2FDAPT","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FLMD0311%2FDAPT/lists"}