{"id":13794095,"url":"https://github.com/DSE-MSU/DeepRobust","last_synced_at":"2025-05-12T20:31:37.347Z","repository":{"id":37444023,"uuid":"210014892","full_name":"DSE-MSU/DeepRobust","owner":"DSE-MSU","description":"A pytorch adversarial library for attack and defense methods on images and graphs","archived":false,"fork":false,"pushed_at":"2024-07-23T13:21:18.000Z","size":12506,"stargazers_count":1035,"open_issues_count":52,"forks_count":193,"subscribers_count":16,"default_branch":"master","last_synced_at":"2025-04-14T09:09:09.875Z","etag":null,"topics":["adversarial-attacks","adversarial-examples","deep-learning","deep-neural-networks","defense","graph-convolutional-networks","graph-mining","graph-neural-networks","machine-learning"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/DSE-MSU.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2019-09-21T16:09:07.000Z","updated_at":"2025-04-14T06:08:58.000Z","dependencies_parsed_at":"2023-02-04T08:30:23.464Z","dependency_job_id":"eb2bde70-2cdb-4042-9634-9ca0d6dae64e","html_url":"https://github.com/DSE-MSU/DeepRobust","commit_stats":{"total_commits":748,"total_committers":23,"mean_commits":32.52173913043478,"dds":0.7606951871657754,"last_synced_commit":"a70875a087601688a0dd2b9ca00fa36087d28a10"},"previous_names":[],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DSE-MSU%2FDeepRobust","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/rep
ositories/DSE-MSU%2FDeepRobust/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DSE-MSU%2FDeepRobust/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DSE-MSU%2FDeepRobust/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/DSE-MSU","download_url":"https://codeload.github.com/DSE-MSU/DeepRobust/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":253816766,"owners_count":21968878,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["adversarial-attacks","adversarial-examples","deep-learning","deep-neural-networks","defense","graph-convolutional-networks","graph-mining","graph-neural-networks","machine-learning"],"created_at":"2024-08-03T23:00:35.832Z","updated_at":"2025-05-12T20:31:32.278Z","avatar_url":"https://github.com/DSE-MSU.png","language":"Python","readme":"\n[contributing-image]: https://img.shields.io/badge/contributions-welcome-brightgreen.svg?style=flat\n[contributing-url]: https://github.com/rusty1s/pytorch_geometric/blob/master/CONTRIBUTING.md\n\n\u003cp align=\"center\"\u003e\n\u003cimg center src=\"https://github.com/DSE-MSU/DeepRobust/blob/master/adversary_examples/Deeprobust.png\" width = \"450\" alt=\"logo\"\u003e\n\u003c/p\u003e\n\n---------------------\n\u003c!--\n\u003ca href=\"https://github.com/DSE-MSU/DeepRobust/stargazers\"\u003e\u003cimg alt=\"GitHub stars\" src=\"https://img.shields.io/github/stars/DSE-MSU/DeepRobust\"\u003e\u003c/a\u003e  \u003ca 
href=\"https://github.com/DSE-MSU/DeepRobust/network/members\" \u003e\u003cimg alt=\"GitHub forks\" src=\"https://img.shields.io/github/forks/DSE-MSU/DeepRobust\"\u003e\n\u003c/a\u003e \n--\u003e\n\n\u003cimg alt=\"GitHub last commit\" src=\"https://img.shields.io/github/last-commit/DSE-MSU/DeepRobust\"\u003e \u003ca href=\"https://github.com/DSE-MSU/DeepRobust/issues\"\u003e \u003cimg alt=\"GitHub issues\" src=\"https://img.shields.io/github/issues/DSE-MSU/DeepRobust\"\u003e\u003c/a\u003e \u003cimg alt=\"GitHub\" src=\"https://img.shields.io/github/license/DSE-MSU/DeepRobust\"\u003e\n[![Contributing][contributing-image]][contributing-url]\n[![Tweet](https://img.shields.io/twitter/url/http/shields.io.svg?style=social)](https://twitter.com/intent/tweet?text=Build%20your%20robust%20machine%20learning%20models%20with%20DeepRobust%20in%2060%20seconds\u0026url=https://github.com/DSE-MSU/DeepRobust\u0026via=dse_msu\u0026hashtags=MachineLearning,DeepLearning,secruity,data,developers)\n\n\n\u003c!-- \u003cimg alt=\"GitHub top language\" src=\"https://img.shields.io/github/languages/top/DSE-MSU/DeepRobust\"\u003e --\u003e\n\n\u003c!--\n\u003cdiv align=center\u003e\u003cimg src=\"https://github.com/DSE-MSU/DeepRobust/blob/master/adversarial.png\" width=\"500\"/\u003e\u003c/div\u003e\n\u003cdiv align=center\u003e\u003cimg src=\"https://github.com/DSE-MSU/DeepRobust/blob/master/adversary_examples/graph_attack_example.png\" width=\"00\" /\u003e\u003c/div\u003e\n--\u003e\n**[Documentation](https://deeprobust.readthedocs.io/en/latest/)** | **[Paper](https://arxiv.org/abs/2005.06149)** | **[Samples](https://github.com/DSE-MSU/DeepRobust/tree/master/examples)** \n\n[AAAI 2021] DeepRobust is a PyTorch adversarial library for attack and defense methods on images and graphs. \n* If you are new to DeepRobust, we highly suggest you read the [documentation page](https://deeprobust.readthedocs.io/en/latest/) or the following content in this README to learn how to use it.  
\n* If you have any questions or suggestions regarding this library, feel free to create an issue [here](https://github.com/DSE-MSU/DeepRobust/issues). We will reply as soon as possible :)\n\n\u003cp float=\"left\"\u003e\n  \u003cimg src=\"https://github.com/DSE-MSU/DeepRobust/blob/master/adversary_examples/adversarial.png\" width=\"430\" /\u003e\n  \u003cimg src=\"https://github.com/DSE-MSU/DeepRobust/blob/master/adversary_examples/graph_attack_example.png\" width=\"380\" /\u003e \n\u003c/p\u003e\n\n**The list of included algorithms can be found in [[Image Package]](https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/image) and [[Graph Package]](https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph).**\n\n[Environment \u0026 Installation](#environment)\n\nUsage\n\n* [Image Attack and Defense](#image-attack-and-defense)\n\n* [Graph Attack and Defense](#graph-attack-and-defense)\n\n[Acknowledgement](#acknowledgement) \n\nFor more details about attacks and defenses, you can read the following papers.\n* [Adversarial Attacks and Defenses on Graphs: A Review, A Tool and Empirical Studies](https://arxiv.org/abs/2003.00653)\n* [Adversarial Attacks and Defenses in Images, Graphs and Text: A Review](https://arxiv.org/pdf/1909.08072.pdf)\n\nIf our work helps your research, please cite:\n[DeepRobust: A PyTorch Library for Adversarial Attacks and Defenses](https://arxiv.org/abs/2005.06149)\n```\n@article{li2020deeprobust,\n  title={Deeprobust: A pytorch library for adversarial attacks and defenses},\n  author={Li, Yaxin and Jin, Wei and Xu, Han and Tang, Jiliang},\n  journal={arXiv preprint arXiv:2005.06149},\n  year={2020}\n}\n```\n\n# Changelog\n* [11/2023] Try \u003cspan style=\"color:red\"\u003e `git clone https://github.com/DSE-MSU/DeepRobust.git; cd DeepRobust; python setup_empty.py install` \u003c/span\u003e to directly install DeepRobust without installing dependency packages.\n* [11/2023] DeepRobust 0.2.9 Released. 
Please try `pip install deeprobust==0.2.9`. We have fixed the OOM issue of metattack on new pytorch versions.\n* [06/2023] We have added a backdoor attack [UGBA, WWW'23](https://arxiv.org/abs/2303.01263) to the graph package. We can now use UGBA to conduct unnoticeable backdoor attacks on large-scale graphs such as ogb-arxiv (see example in [test_ugba.py](https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_ugba.py))! \n* [02/2023] DeepRobust 0.2.8 Released. Please try `pip install deeprobust==0.2.8`! We have added a scalable attack [PRBCD, NeurIPS'21](https://arxiv.org/abs/2110.14038) to the graph package. We can now use PRBCD to attack large-scale graphs such as ogb-arxiv (see example in [test_prbcd.py](https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_prbcd.py))! \n* [02/2023] Add a robust model [AirGNN, NeurIPS'21](https://proceedings.neurips.cc/paper/2021/file/50abc3e730e36b387ca8e02c26dc0a22-Paper.pdf) to the graph package. Try `python examples/graph/test_airgnn.py`! See details in [test_airgnn.py](https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_airgnn.py)\n* [11/2022] DeepRobust 0.2.6 Released. Please try `pip install deeprobust==0.2.6`! We have more updates coming. Please stay tuned!\n* [11/2021] A subpackage that includes popular black-box attacks in the image domain is released. Find it here: [Link](https://github.com/I-am-Bot/Black-Box-Attacks)\n* [11/2021] DeepRobust 0.2.4 Released. Please try `pip install deeprobust==0.2.4`!\n* [10/2021] Add scalable attack and MedianGCN. Thanks to [Jintang](https://github.com/EdisonLeeeee) for his contribution!\n* [06/2021] [Image Package] Add preprocessing method: APE-GAN.\n* [05/2021] DeepRobust is published at AAAI 2021. Check [here](https://ojs.aaai.org/index.php/AAAI/article/view/18017)!\n* [05/2021] DeepRobust 0.2.2 Released. Please try `pip install deeprobust==0.2.2`!\n* [04/2021] [Image Package] Add support for ImageNet. 
See details in [test_ImageNet.py](https://github.com/DSE-MSU/DeepRobust/blob/master/examples/image/test_ImageNet.py)\n* [04/2021] [Graph Package] Add support for OGB datasets.  See more details in the [tutorial page](https://deeprobust.readthedocs.io/en/latest/graph/pyg.html).\n* [03/2021] [Graph Package] Added node embedding attack and victim models! See this [tutorial page](https://deeprobust.readthedocs.io/en/latest/graph/node_embedding.html).\n* [02/2021] **[Graph Package] DeepRobust now provides tools for converting the datasets between [Pytorch Geometric](https://pytorch-geometric.readthedocs.io/en/latest/) and DeepRobust. See more details in the [tutorial page](https://deeprobust.readthedocs.io/en/latest/graph/pyg.html)!** DeepRobust now also supports GAT, Chebnet and SGC based on pyg; see details in [test_gat.py](https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_gat.py),  [test_chebnet.py](https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_chebnet.py) and [test_sgc.py](https://github.com/DSE-MSU/DeepRobust/blob/master/examples/graph/test_sgc.py)\n* [12/2020] DeepRobust can now be installed via pip! Try `pip install deeprobust`!\n* [12/2020] [Graph Package] Add four more [datasets](https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph/#supported-datasets) and one defense algorithm. More details can be found [here](https://github.com/DSE-MSU/DeepRobust/tree/master/deeprobust/graph/#defense-methods). More datasets and algorithms will be added later. 
Stay tuned :)\n* [07/2020] Add [documentation](https://deeprobust.readthedocs.io/en/latest/) page!\n* [06/2020] Add docstrings to both image and graph packages\n\n# Basic Environment\n* `python \u003e= 3.6` (python 3.5 should also work)\n* `pytorch \u003e= 1.2.0`\n\nSee `setup.py` or `requirements.txt` for more information.\n\n# Installation\n## Install from pip\n```\npip install deeprobust \n```\n## Install from source\n```\ngit clone https://github.com/DSE-MSU/DeepRobust.git\ncd DeepRobust\npython setup.py install\n```\nIf the dependencies are hard to install, please try the following:\n```python setup_empty.py install``` (installs only deeprobust, without its dependencies) \n\n# Test Examples\n\n```\npython examples/image/test_PGD.py\npython examples/image/test_pgdtraining.py\npython examples/graph/test_gcn_jaccard.py --dataset cora\npython examples/graph/test_mettack.py --dataset cora --ptb_rate 0.05\n```\n\n# Usage\n## Image Attack and Defense\n1. Train a model\n\n    Example: Train a simple CNN model on the MNIST dataset for 20 epochs on GPU.\n    ```python\n    import deeprobust.image.netmodels.train_model as trainmodel\n    trainmodel.train('CNN', 'MNIST', 'cuda', 20)\n    ```\n    The trained model will be saved in deeprobust/trained_models/.\n\n2. 
Instantiate attack and defense methods.\n\n    Example: Generate an adversarial example with the PGD attack.\n    ```python\n    from deeprobust.image.attack.pgd import PGD\n    from deeprobust.image.config import attack_params\n    from deeprobust.image.utils import download_model\n    import torch\n    import deeprobust.image.netmodels.resnet as resnet\n    from torchvision import transforms,datasets\n    \n    URL = \"https://github.com/I-am-Bot/deeprobust_model/raw/master/CIFAR10_ResNet18_epoch_20.pt\"\n    download_model(URL, \"$MODEL_PATH$\")\n\n    model = resnet.ResNet18().to('cuda')\n    model.load_state_dict(torch.load(\"$MODEL_PATH$\"))\n    model.eval()\n\n    transform_val = transforms.Compose([transforms.ToTensor()])\n    test_loader  = torch.utils.data.DataLoader(\n                    datasets.CIFAR10('deeprobust/image/data', train = False, download=True,\n                    transform = transform_val),\n                    batch_size = 10, shuffle=True)\n\n    x, y = next(iter(test_loader))\n    x = x.to('cuda').float()\n    \n    adversary = PGD(model, 'cuda')\n    Adv_img = adversary.generate(x, y, **attack_params['PGD_CIFAR10'])\n    ```\n\n    Example: Train a defense model.\n    ```python\n    from deeprobust.image.defense.pgdtraining import PGDtraining\n    from deeprobust.image.config import defense_params\n    from deeprobust.image.netmodels.CNN import Net\n    import torch\n    from torchvision import datasets, transforms \n    \n    model = Net()\n    train_loader = torch.utils.data.DataLoader(\n                    datasets.MNIST('deeprobust/image/defense/data', train=True, download=True,\n                                    transform=transforms.Compose([transforms.ToTensor()])),\n                                    batch_size=100,shuffle=True)\n\n    test_loader = torch.utils.data.DataLoader(\n                  datasets.MNIST('deeprobust/image/defense/data', train=False,\n                                
transform=transforms.Compose([transforms.ToTensor()])),\n                                batch_size=1000,shuffle=True)\n\n    defense = PGDtraining(model, 'cuda')\n    defense.generate(train_loader, test_loader, **defense_params[\"PGDtraining_MNIST\"])\n    ```\n\n    More example code can be found in deeprobust/examples.\n\n3. Use our evaluation program to test an attack algorithm against a defense.\n\n    Example:\n    ```\n    cd DeepRobust\n    python examples/image/test_train.py\n    python deeprobust/image/evaluation_attack.py\n    ```\n\n## Graph Attack and Defense \n\n### Attacking Graph Neural Networks\n\n1. Load dataset\n    ```python\n    import torch\n    import numpy as np\n    from deeprobust.graph.data import Dataset\n    from deeprobust.graph.defense import GCN\n    from deeprobust.graph.global_attack import Metattack\n\n    data = Dataset(root='/tmp/', name='cora', setting='nettack')\n    adj, features, labels = data.adj, data.features, data.labels\n    idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test\n    idx_unlabeled = np.union1d(idx_val, idx_test)\n    ```\n\n2. Set up surrogate model\n    ```python\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    surrogate = GCN(nfeat=features.shape[1], nclass=labels.max().item()+1, nhid=16,\n                    with_relu=False, device=device)\n    surrogate = surrogate.to(device)\n    surrogate.fit(features, adj, labels, idx_train)\n    ```\n\n\n3. 
Set up attack model and generate perturbations\n    ```python\n    model = Metattack(model=surrogate, nnodes=adj.shape[0], feature_shape=features.shape, device=device)\n    model = model.to(device)\n    perturbations = int(0.05 * (adj.sum() // 2))\n    model.attack(features, adj, labels, idx_train, idx_unlabeled, perturbations, ll_constraint=False)\n    modified_adj = model.modified_adj\n    ```\n    \nFor more details please refer to [mettack.py](https://github.com/I-am-Bot/DeepRobust/blob/master/examples/graph/test_mettack.py) or run \n    ```\n    python examples/graph/test_mettack.py --dataset cora --ptb_rate 0.05\n    ```\n\n### Defending Against Graph Attacks\n\n1. Load dataset\n    ```python\n    import torch\n    from deeprobust.graph.data import Dataset, PtbDataset\n    from deeprobust.graph.defense import GCN, GCNJaccard\n    import numpy as np\n    np.random.seed(15)\n\n    # load clean graph\n    data = Dataset(root='/tmp/', name='cora', setting='nettack')\n    adj, features, labels = data.adj, data.features, data.labels\n    idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test\n\n    # load pre-attacked graph by mettack\n    perturbed_data = PtbDataset(root='/tmp/', name='cora')\n    perturbed_adj = perturbed_data.adj\n    ```\n2. 
Test \n    ```python\n    # Set up defense model and test performance\n    device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n    model = GCNJaccard(nfeat=features.shape[1], nclass=labels.max()+1, nhid=16, device=device)\n    model = model.to(device)\n    model.fit(features, perturbed_adj, labels, idx_train)\n    model.eval()\n    output = model.test(idx_test)\n\n    # Test on GCN\n    model = GCN(nfeat=features.shape[1], nclass=labels.max()+1, nhid=16, device=device)\n    model = model.to(device)\n    model.fit(features, perturbed_adj, labels, idx_train)\n    model.eval()\n    output = model.test(idx_test)\n    ```\n    \nFor more details please refer to [test_gcn_jaccard.py](https://github.com/I-am-Bot/DeepRobust/blob/master/examples/graph/test_gcn_jaccard.py) or run\n    ```\n    python examples/graph/test_gcn_jaccard.py --dataset cora\n    ```\n\n## Sample Results\nAdversarial examples generated by FGSM:\n\u003cdiv align=\"center\"\u003e\n\u003cimg height=140 src=\"https://github.com/DSE-MSU/DeepRobust/blob/master/adversary_examples/mnist_advexample_fgsm_ori.png\"/\u003e\u003cimg height=140 src=\"https://github.com/DSE-MSU/DeepRobust/blob/master/adversary_examples/mnist_advexample_fgsm_adv.png\"/\u003e\n\u003c/div\u003e\nLeft: original, classified as 6; right: adversarial example, classified as 4.\n\nSeveral trained models can be found here: https://drive.google.com/open?id=1uGLiuCyd8zCAQ8tPz9DDUQH6zm-C4tEL\n\n## Acknowledgement\nSome of the algorithms follow the original authors' implementations. References can be found at the top of each file. \n\nThe network architecture implementations are adapted from weiaicunzai's GitHub repository. 
Original code can be found here:\n[pytorch-cifar100](https://github.com/weiaicunzai/pytorch-cifar100)\n\nThanks for their outstanding work!\n\n\u003c!----\nWe would be glad if you find our work useful and cite the paper.\n\n'''\n@misc{jin2020adversarial,\n    title={Adversarial Attacks and Defenses on Graphs: A Review and Empirical Study},\n    author={Wei Jin and Yaxin Li and Han Xu and Yiqi Wang and Jiliang Tang},\n    year={2020},\n    eprint={2003.00653},\n    archivePrefix={arXiv},\n    primaryClass={cs.LG}\n}\n'''\n```\n@article{xu2019adversarial,\n  title={Adversarial attacks and defenses in images, graphs and text: A review},\n  author={Xu, Han and Ma, Yao and Liu, Haochen and Deb, Debayan and Liu, Hui and Tang, Jiliang and Jain, Anil},\n  journal={arXiv preprint arXiv:1909.08072},\n  year={2019}\n}\n```\n----\u003e\n","funding_links":[],"categories":["0. Toolbox","图对抗攻击"],"sub_categories":["网络服务_其他"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FDSE-MSU%2FDeepRobust","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FDSE-MSU%2FDeepRobust","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FDSE-MSU%2FDeepRobust/lists"}