{"id":18850937,"url":"https://github.com/igitugraz/sparseadversarialtraining","last_synced_at":"2025-04-14T09:51:21.676Z","repository":{"id":113036886,"uuid":"369153350","full_name":"IGITUGraz/SparseAdversarialTraining","owner":"IGITUGraz","description":"Code for \"Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling\" [ICML 2021]","archived":false,"fork":false,"pushed_at":"2022-03-14T13:04:40.000Z","size":38,"stargazers_count":10,"open_issues_count":1,"forks_count":1,"subscribers_count":3,"default_branch":"main","last_synced_at":"2025-03-27T23:11:14.383Z","etag":null,"topics":["adversarial-robustness","adversarial-training","icml2021","sparse-training"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/IGITUGraz.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2021-05-20T09:29:48.000Z","updated_at":"2024-12-18T12:25:45.000Z","dependencies_parsed_at":"2023-06-06T19:48:34.644Z","dependency_job_id":null,"html_url":"https://github.com/IGITUGraz/SparseAdversarialTraining","commit_stats":null,"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IGITUGraz%2FSparseAdversarialTraining","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IGITUGraz%2FSparseAdversarialTraining/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/IGITUGraz%2FSparseAdversarialTraining/releases","manifests_url":"https://repos.ecosyst
e.ms/api/v1/hosts/GitHub/repositories/IGITUGraz%2FSparseAdversarialTraining/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/IGITUGraz","download_url":"https://codeload.github.com/IGITUGraz/SparseAdversarialTraining/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":248859633,"owners_count":21173337,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["adversarial-robustness","adversarial-training","icml2021","sparse-training"],"created_at":"2024-11-08T03:32:41.648Z","updated_at":"2025-04-14T09:51:21.667Z","avatar_url":"https://github.com/IGITUGraz.png","language":"Python","readme":"# Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling\n\nThis is the code repository of the following [paper](http://proceedings.mlr.press/v139/ozdenizci21a/ozdenizci21a.pdf) for end-to-end robust adversarial training of neural networks with sparse connectivity.\n \n\"Training Adversarially Robust Sparse Networks via Bayesian Connectivity Sampling\"\\\n\u003cem\u003eOzan Özdenizci, Robert Legenstein\u003c/em\u003e\\\nInternational Conference on Machine Learning (ICML), 2021.\n\nThe repository supports sparse training of models with the robust training objectives explored in the paper, and provides saved model weights of the adversarially trained sparse networks presented there.\n\n## Setup\n\nYou will need [TensorFlow 2](https://www.tensorflow.org/install) to run this code. 
You can simply start by executing:\n```bash\npip install -r requirements.txt\n```\nto install all dependencies and use the repository.\n\n## Usage\n\nYou can use `run_connectivity_sampling.py` to adversarially train sparse networks from scratch. A brief description of the possible arguments:\n\n- `--data`: \"cifar10\", \"cifar100\", \"svhn\"\n- `--model`: \"vgg16\", \"resnet18\", \"resnet34\", \"resnet50\", \"wrn28_2\", \"wrn28_4\", \"wrn28_10\", \"wrn34_10\"\n- `--objective`: \"at\" (Standard AT), \"mat\" (Mixed-batch AT), \"trades\", \"mart\", \"rst\" (intended for CIFAR-10)\n- `--sparse_train`: enable end-to-end sparse training\n- `--connectivity`: sparse connectivity ratio (used when `--sparse_train` is enabled)\n\nRemarks:\n* For the `--data \"svhn\"` option, you will need to create the directory `datasets/SVHN/` and place the [SVHN](http://ufldl.stanford.edu/housenumbers/) dataset's [train](http://ufldl.stanford.edu/housenumbers/train_32x32.mat) and [test](http://ufldl.stanford.edu/housenumbers/test_32x32.mat) `.mat` files there.\n* We consider usage of robust self-training (RST) `--objective \"rst\"` based on the TRADES loss. 
To be able to use RST for CIFAR-10 as described in [this repository](https://github.com/yaircarmon/semisup-adv), you need to place the [pseudo-labeled TinyImages](https://drive.google.com/open?id=1LTw3Sb5QoiCCN-6Y5PEKkq9C9W60w-Hi) file at `datasets/tinyimages/ti_500K_pseudo_labeled.pickle`.\n\n### End-to-end robust training for sparse networks\n\nThe following sample scripts can be used to adversarially train sparse networks from scratch, and also perform white-box robustness evaluations using PGD attacks via [Foolbox](https://github.com/bethgelab/foolbox).\n\n- `robust_sparse_train_standardAT.sh`: Standard adversarial training for a sparse ResNet-50 on CIFAR-10.\n- `robust_sparse_train_TRADES.sh`: Robust training with TRADES for a sparse VGG-16 on CIFAR-100.\n\n## Saved model weights\n\nWe share our adversarially trained sparse models at 90% and 99% sparsity for the CIFAR-10, CIFAR-100 and SVHN datasets, as presented in the paper. \nDifferent evaluation runs may naturally result in slight differences from the numbers presented in the paper.\n\n### Sparse networks with TRADES robust training objective\n\n* CIFAR-10  (TRADES with RST): \n[VGG16 - 90% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar10_vgg16_sparse10_rst.zip) | \n[VGG16 - 99% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar10_vgg16_sparse1_rst.zip)\n* CIFAR-100 (TRADES): \n[ResNet-18 - 90% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar100_resnet18_sparse10_trades.zip) | \n[ResNet-18 - 99% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar100_resnet18_sparse1_trades.zip)\n* SVHN   (TRADES): \n[WideResNet-28-4 - 90% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/svhn_wrn28_4_sparse10_trades.zip) | \n[WideResNet-28-4 - 99% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/svhn_wrn28_4_sparse1_trades.zip)\n\n### Sparse networks with Standard AT 
for CIFAR-10\n\nThese are the sparse models trained with standard AT on CIFAR-10 (without additional pseudo-labeled images) that correspond to our models presented in Figure 1 and Table 4 of the paper.\n\n* VGG16      : \n[90% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar10_vgg16_sparse10_at.zip) | \n[99% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar10_vgg16_sparse1_at.zip) | \n[99.5% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar10_vgg16_sparse05_at.zip)\n* ResNet-18  : \n[99% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar10_resnet18_sparse1_at.zip)\n* ResNet-34  : \n[99% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar10_resnet34_sparse1_at.zip)\n* ResNet-50  : \n[99% Sparsity](https://igi-web.tugraz.at/download/OzdenizciLegensteinICML2021/cifar10_resnet50_sparse1_at.zip)\n\n#### An example of how to evaluate saved model weights\n\nWe originally store the learned model weights in pickle dictionaries; however, to enable benchmark evaluations on [Foolbox](https://github.com/bethgelab/foolbox) and [AutoAttack](https://github.com/fra31/auto-attack), we convert and load these saved dictionaries of weights into equivalent Keras models for compatibility. \n\nConsider the last pickle file above, which corresponds to the ResNet-50 model weights at 99% sparsity trained via Standard AT on CIFAR-10. 
\nPlace this file at the following path: `results/cifar10/resnet50/sparse1_at_best_weights.pickle`.\nYou can then simply use `run_foolbox_eval.py` to load these network weights into Keras models and evaluate robustness against PGD\u003csup\u003e50\u003c/sup\u003e attacks as follows:\n```bash\npython run_foolbox_eval.py --data \"cifar10\" --n_classes 10 --model \"resnet50\" --objective \"at\" --sparse_train --connectivity 0.01 --pgd_iters 50 --pgd_restarts 10\n```\n\n## Reference\nIf you use this code or models in your research and find it helpful, please cite the following paper:\n```\n@inproceedings{ozdenizci2021icml,\n  title={Training adversarially robust sparse networks via Bayesian connectivity sampling},\n  author={Ozan \\\"{O}zdenizci and Robert Legenstein},\n  booktitle={International Conference on Machine Learning},\n  pages={8314--8324},\n  year={2021},\n  organization={PMLR}\n}\n```\n\n## Acknowledgments\n\nThe authors of this work are affiliated with Graz University of Technology, Institute of Theoretical Computer Science, \nand Silicon Austria Labs, TU Graz - SAL Dependable Embedded Systems Lab, Graz, Austria. This work has been supported by the \"University SAL Labs\" initiative of Silicon Austria Labs (SAL) and its Austrian partner universities for applied fundamental research for electronic based systems. 
\nThis work is also partially supported by the Austrian Science Fund (FWF) within the ERA-NET CHIST-ERA programme (project SMALL, project number I 4670-N).\n\nParts of this code repository are based on the following works by the machine learning community:\n\n* https://github.com/guillaumeBellec/deep_rewiring\n* https://github.com/inspire-group/hydra\n* https://github.com/yaodongyu/TRADES\n* https://github.com/YisenWang/MART\n* https://github.com/yaircarmon/semisup-adv\n","funding_links":[],"categories":[],"sub_categories":[],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Figitugraz%2Fsparseadversarialtraining","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Figitugraz%2Fsparseadversarialtraining","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Figitugraz%2Fsparseadversarialtraining/lists"}