# SSF for Efficient Model Tuning

This repo is the official implementation of our NeurIPS 2022 paper "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning" ([arXiv](https://arxiv.org/abs/2210.08823)).


## Usage

### Install

- Clone this repo:

```bash
git clone https://github.com/dongzelian/SSF.git
cd SSF
```

- Create a conda virtual environment and activate it:

```bash
conda create -n ssf python=3.7 -y
conda activate ssf
```

- Install `CUDA==10.1` with `cudnn7` following the [official installation instructions](https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html).
- Install `PyTorch==1.7.1` and `torchvision==0.8.2` with `CUDA==10.1`:

```bash
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=10.1 -c pytorch
```

- Install `timm==0.6.5`:

```bash
pip install timm==0.6.5
```

- Install the other requirements:

```bash
pip install -r requirements.txt
```


### Data preparation

- FGVC & vtab-1k

You can follow [VPT](https://github.com/KMnP/vpt) to download them.
Since the original [vtab dataset](https://github.com/google-research/task_adaptation/tree/master/task_adaptation/data) is processed with TensorFlow scripts and the processing of some datasets is tricky, we also upload the extracted vtab-1k dataset to [OneDrive](https://shanghaitecheducn-my.sharepoint.com/:f:/g/personal/liandz_shanghaitech_edu_cn/EnV6eYPVCPZKhbqi-WSJIO8BOcyQwDwRk6dAThqonQ1Ycw?e=J884Fp) for your convenience. You can download it from there and then use it directly with our [vtab.py](https://github.com/dongzelian/SSF/blob/main/data/vtab.py). (Note that the license is in the [vtab dataset](https://github.com/google-research/task_adaptation/tree/master/task_adaptation/data) repo.)

- CIFAR-100

```bash
wget https://www.cs.toronto.edu/~kriz/cifar-100-python.tar.gz
```

- For ImageNet-1K, download it from http://image-net.org/ and move the validation images into labeled sub-folders. The file structure should look like:

  ```bash
  $ tree data
  imagenet
  ├── train
  │   ├── class1
  │   │   ├── img1.jpeg
  │   │   ├── img2.jpeg
  │   │   └── ...
  │   ├── class2
  │   │   ├── img3.jpeg
  │   │   └── ...
  │   └── ...
  └── val
      ├── class1
      │   ├── img4.jpeg
      │   ├── img5.jpeg
      │   └── ...
      ├── class2
      │   ├── img6.jpeg
      │   └── ...
      └── ...
  ```

- Robustness & OOD datasets

Prepare [ImageNet-A](https://github.com/hendrycks/natural-adv-examples), [ImageNet-R](https://github.com/hendrycks/imagenet-r), and [ImageNet-C](https://zenodo.org/record/2235448#.Y04cBOxByFw) for evaluation.
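Once the datasets are in place, a quick sanity check of the ImageNet `train`/`val` tree shown above can save a failed training run. The following is a minimal, stdlib-only sketch (the `imagenet` root path and the function name are illustrative, not part of this repo); it only checks that each split exists and that every class sub-folder is non-empty:

```python
from pathlib import Path

def check_imagenet_layout(root):
    """Report problems with the train/val class-subfolder layout."""
    root = Path(root)
    problems = []
    for split in ("train", "val"):
        split_dir = root / split
        if not split_dir.is_dir():
            problems.append(f"missing directory: {split_dir}")
            continue
        classes = [d for d in split_dir.iterdir() if d.is_dir()]
        if not classes:
            problems.append(f"no class sub-folders under {split_dir}")
        for cls in classes:
            if not any(cls.iterdir()):
                problems.append(f"empty class folder: {cls}")
    return problems

# An empty list means the layout matches the structure described above.
print(check_imagenet_layout("imagenet"))
```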
### Pre-trained model preparation

- For the ViT-B/16, Swin-B, and ConvNext-B models pre-trained on ImageNet-21K, the weights are downloaded automatically when you fine-tune a pre-trained model via `SSF`. You can also download them manually from [ViT](https://github.com/google-research/vision_transformer), [Swin Transformer](https://github.com/microsoft/Swin-Transformer), and [ConvNext](https://github.com/facebookresearch/ConvNeXt).

- For the AS-MLP-B model pre-trained on ImageNet-1K, you can download it manually from [AS-MLP](https://github.com/svip-lab/AS-MLP).


### Fine-tuning a pre-trained model via SSF

To fine-tune a pre-trained ViT model via `SSF` on CIFAR-100 or ImageNet-1K, run:

```bash
bash train_scripts/vit/cifar_100/train_ssf.sh
```

or

```bash
bash train_scripts/vit/imagenet_1k/train_ssf.sh
```

Similar scripts are available for the Swin, ConvNext, and AS-MLP models, so you can easily reproduce our results. Enjoy!
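For intuition about what these scripts train: SSF's core operation, as described in the paper, applies a learnable per-channel scale and shift to the features of a frozen backbone, i.e. y = γ ⊙ x + β, where γ and β are the only tuned parameters. Below is a minimal, framework-free sketch of that operation (the actual repo implements it in PyTorch on top of timm models; the function name here is illustrative):

```python
def ssf_transform(features, scale, shift):
    """Per-channel scale-and-shift: y[c] = scale[c] * x[c] + shift[c].

    `features` holds the channel values at one position of a frozen
    backbone's output; `scale` and `shift` are the small set of
    parameters that SSF-style tuning would actually train.
    """
    assert len(features) == len(scale) == len(shift)
    return [g * x + b for x, g, b in zip(features, scale, shift)]

# With scale=1 and shift=0 the transform is the identity, so features
# pass through unchanged and tuning can start from the pre-trained
# model's behavior.
x = [0.5, -1.0, 2.0]
print(ssf_transform(x, [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]))  # [0.5, -1.0, 2.0]
```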
### Robustness & OOD

To evaluate the robustness and OOD performance of a model fine-tuned via SSF, run:

```bash
bash train_scripts/vit/imagenet_a(r, c)/eval_ssf.sh
```


### Citation
If this project is helpful to you, please cite our paper:
```
@InProceedings{Lian_2022_SSF,
  title={Scaling \& Shifting Your Features: A New Baseline for Efficient Model Tuning},
  author={Lian, Dongze and Zhou, Daquan and Feng, Jiashi and Wang, Xinchao},
  booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
  year={2022}
}
```


### Acknowledgement
The code is built upon [timm](https://github.com/rwightman/pytorch-image-models). The processing of the vtab-1k dataset refers to [vpt](https://github.com/KMnP/vpt), the [vtab github repo](https://github.com/google-research/task_adaptation/tree/master/task_adaptation/data), and [NOAH](https://github.com/ZhangYuanhan-AI/NOAH).