{"id":13737694,"url":"https://github.com/fra31/robust-finetuning","last_synced_at":"2025-05-08T15:30:56.061Z","repository":{"id":111042257,"uuid":"370668867","full_name":"fra31/robust-finetuning","owner":"fra31","description":"Code relative to \"Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers\"","archived":false,"fork":false,"pushed_at":"2022-11-30T13:39:50.000Z","size":20,"stargazers_count":15,"open_issues_count":1,"forks_count":4,"subscribers_count":2,"default_branch":"master","last_synced_at":"2024-08-04T03:11:05.846Z","etag":null,"topics":["adversarial-learning","adversarial-robustness","adversarial-training"],"latest_commit_sha":null,"homepage":"https://arxiv.org/abs/2105.12508","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/fra31.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null}},"created_at":"2021-05-25T11:28:20.000Z","updated_at":"2024-06-21T05:20:52.000Z","dependencies_parsed_at":"2023-03-12T19:16:14.628Z","dependency_job_id":null,"html_url":"https://github.com/fra31/robust-finetuning","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fra31%2Frobust-finetuning","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fra31%2Frobust-finetuning/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fra31%2Frobust-finetuning/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/fra31%2Frobust-finetuning/manifests
","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/fra31","download_url":"https://codeload.github.com/fra31/robust-finetuning/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":224742231,"owners_count":17362229,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["adversarial-learning","adversarial-robustness","adversarial-training"],"created_at":"2024-08-03T03:01:57.568Z","updated_at":"2024-11-15T06:30:48.145Z","avatar_url":"https://github.com/fra31.png","language":"Python","readme":"# Robust fine-tuning\n\n\"Adversarial robustness against multiple and single $l_p$-threat models via quick fine-tuning of robust classifiers\"\\\n*Francesco Croce, Matthias Hein*\\\nICML 2022\\\n[https://arxiv.org/abs/2105.12508](https://arxiv.org/abs/2105.12508)\n\nWe propose to *i)* use adversarial training wrt Linf and L1 (alternating the two threat models) to achieve robustness also to L2 and *ii)* fine-tune models robust in one\nLp-norm to get multiple norm robustness or robustness wrt another Lq-norm.\n\n## Code\n### Training code\n\nThe file `train.py` allows to train or fine-tune models. For adversarial training use `--attack=apgd`, otherwise standard training is performed. 
The main arguments for adversarial training are (other options are documented in `train.py`):
+ `--l_norms='Linf L1'`: the list of Lp-norms to train with, given as a string of blank-space-separated items; a single norm is also allowed. Note that the training cost is the same regardless of the number of threat models used.
+ `--l_eps`: the list of thresholds epsilon, one per threat model, sorted as the corresponding norms (if not given, default values are used).
+ `--l_iters`: the list of attack iterations in adversarial training for each threat model (possibly different values), or `--at_iter`: a single number of steps used for all threat models.

For training new models a PreAct ResNet-18 is used, by default with the softplus activation function.

### Fine-tuning existing models

To fine-tune a model, add the `--finetune_model` flag, set `--lr-schedule=piecewise-ft` for the standard fine-tuning learning rate schedule, and use `--model_dir=/path/to/pretrained/models` to indicate where to download or find the pretrained models.

+ We provide [here](https://drive.google.com/drive/folders/1hYWHp5UbTAm9RhSb8JkJZtcB0LDZDvkT?usp=sharing) pre-trained ResNet-18 models robust w.r.t. Linf, L2 and L1, which can be loaded by specifying `--model_name=pretr_L*.pth` (insert the desired norm).
+ It is also possible to use models from the [Model Zoo](https://github.com/RobustBench/robustbench#model-zoo) of [RobustBench](https://robustbench.github.io/) with `--model_name=RB_{}`, inserting the identifier of the classifier from the Model Zoo (these models are downloaded automatically).
Note that models trained with extra data should be fine-tuned with the same extra data (this is currently not supported in the code).

### Evaluation code
With `--final_eval`, our standard evaluation (APGD-CE and APGD-T, for a total of 10 restarts of 100 steps each) is run for all threat models at the end of training. Specifying `--eval_freq=k` runs a fast evaluation on test and training points every `k` epochs.

To evaluate a trained model one can run `eval.py`, with `--model_name` as above for the pretrained models, or `--model_name=/path/to/checkpoint/` for newly trained or fine-tuned classifiers. If the run has the automatically generated name, the corresponding architecture is loaded automatically. More details about the evaluation options can be found in `eval.py`.

## Credits
Parts of the code in this repo are based on
+ [https://github.com/tml-epfl/understanding-fast-adv-training](https://github.com/tml-epfl/understanding-fast-adv-training)
+ [https://github.com/locuslab/robust_overfitting](https://github.com/locuslab/robust_overfitting)
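Putting the `--model_name` conventions above together (provided `pretr_L*.pth` checkpoints, `RB_*` identifiers from the RobustBench Model Zoo, and paths to new or fine-tuned checkpoints), checkpoint selection could be sketched as follows; this is a hypothetical helper for illustration, not code from the repo:

```python
def resolve_model_source(model_name):
    """Classify a --model_name value by the naming conventions above.

    Returns "pretrained" for the provided ResNet-18 checkpoints
    (pretr_L<norm>.pth), "robustbench" for RB_<id> identifiers that are
    downloaded automatically from the Model Zoo, and "local" otherwise
    (a path to a newly trained or fine-tuned checkpoint).
    """
    if model_name.startswith("pretr_") and model_name.endswith(".pth"):
        return "pretrained"
    if model_name.startswith("RB_"):
        return "robustbench"
    return "local"

print(resolve_model_source("pretr_Linf.pth"))   # pretrained
print(resolve_model_source("RB_SomeModelId"))   # robustbench
print(resolve_model_source("/path/to/checkpoint/"))  # local
```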