{"id":13633819,"url":"https://github.com/ffhibnese/Model-Inversion-Attack-ToolBox","last_synced_at":"2025-04-18T14:33:19.820Z","repository":{"id":205565624,"uuid":"641902262","full_name":"ffhibnese/Model-Inversion-Attack-ToolBox","owner":"ffhibnese","description":"A comprehensive toolbox for model inversion attacks and defenses, which is easy to get started.","archived":false,"fork":false,"pushed_at":"2024-05-28T09:04:21.000Z","size":105775,"stargazers_count":94,"open_issues_count":1,"forks_count":1,"subscribers_count":2,"default_branch":"main","last_synced_at":"2024-05-29T04:37:11.789Z","etag":null,"topics":["benchmarks","machine-learning","model-inversion","model-inversion-attacks","privacy","toolbox","trustworthy-ai"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/ffhibnese.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null}},"created_at":"2023-05-17T12:05:17.000Z","updated_at":"2024-05-31T04:33:43.281Z","dependencies_parsed_at":"2024-05-31T04:33:04.642Z","dependency_job_id":null,"html_url":"https://github.com/ffhibnese/Model-Inversion-Attack-ToolBox","commit_stats":{"total_commits":141,"total_committers":6,"mean_commits":23.5,"dds":0.6950354609929078,"last_synced_commit":"b96e7be43ea53629303cce898ded86e51b51fb4c"},"previous_names":["ffhibnese/model_inversion_attack_box","ffhibnese/model_inversion_attack_toolbox","ffhibnese/model-inversion-attack-toolbox"],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ffhibnese%2FModel-Inversion-Attack-ToolBox","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ffhibnese%2FModel-Inversion-Attack-ToolBox/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ffhibnese%2FModel-Inversion-Attack-ToolBox/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/ffhibnese%2FModel-Inversion-Attack-ToolBox/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/ffhibnese","download_url":"https://codeload.github.com/ffhibnese/Model-Inversion-Attack-ToolBox/tar.gz/refs/heads/main","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":249505472,"owners_count":21282884,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["benchmarks","machine-learning","model-inversion","model-inversion-attacks","privacy","toolbox","trustworthy-ai"],"created_at":"2024-08-01T23:00:52.199Z","updated_at":"2025-04-18T14:33:19.350Z","avatar_url":"https://github.com/ffhibnese.png","language":"Python","readme":"# 🔥Model Inversion Attack ToolBox v2.0🔥\n\n![Python 3.10](https://img.shields.io/badge/python-3.10-DodgerBlue.svg?style=plastic)\n![Pytorch 2.0.1](https://img.shields.io/badge/pytorch-2.0.1-DodgerBlue.svg?style=plastic)\n![torchvision 0.15.2](https://img.shields.io/badge/torchvision-0.15.2-DodgerBlue.svg?style=plastic)\n![CUDA 11.8](https://img.shields.io/badge/cuda-11.8-DodgerBlue.svg?style=plastic)\n\n[Yixiang Qiu*](https://github.com/final-solution), \n[Hongyao Yu*](https://github.com/Chrisqcwx),\n[Hao Fang*](https://github.com/ffhibnese),\n[Wenbo Yu](https://github.com/cswbyu),\n[Bin Chen#](https://github.com/BinChen2021),\n[Xuan Wang](https://faculty.hitsz.edu.cn/wangxuan),\n[Shu-Tao Xia](https://www.sigs.tsinghua.edu.cn/xst/main.htm)\n\nWelcome to **MIA**! This repository is a comprehensive open-source Python benchmark for model inversion attacks, which is well-organized and easy to get started. It includes uniform implementations of advanced and representative model inversion methods, formulating a unified and reliable framework for a convenient and fair comparison between different model inversion methods. Our repository is continuously updated in **https://github.com/ffhibnese/Model-Inversion-Attack-ToolBox**.\n\n\nIf you have any concerns about our toolbox, feel free to contact us at qiuyixiang@stu.hit.edu.cn, yuhongyao@stu.hit.edu.cn, and fang-h23@mails.tsinghua.edu.cn.\n\nAlso, you are always welcome to contribute and make this repository better! \n\n\n## :rocket: Introduction\n\n**Model inversion attack** is an emerging powerful private data theft attack, where a malicious attacker is able to reconstruct data with the same distribution as the training dataset of the target model. \n\nThe reason why we developed this toolbox is that the research line of **MI** suffers from a lack of unified standards and reliable implementations of former studies. We hope our work can further help people in this area and promote the progress of their valuable research.\n\n## :bulb: Features\n- Easy to get started.\n- Provide all the pre-trained model files.\n- Always up to date.\n- Well organized and encapsulated.\n- A unified and fair comparison between attack methods.\n\n## :memo: Model Inversion Attacks\n\n|Method|Paper|Publication|Scenario|Key Characteristics|\n|:-:|:-:|:-:|:-:|:-:|\n|DeepInversion|Dreaming to Distill: Data-Free Knowledge Transfer via DeepInversion|[CVPR'2020](https://openaccess.thecvf.com/content_CVPR_2020/html/Yin_Dreaming_to_Distill_Data-Free_Knowledge_Transfer_via_DeepInversion_CVPR_2020_paper.html)|whitebox|student-teacher, data-free|\n|GMI|The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks|[CVPR'2020](https://openaccess.thecvf.com/content_CVPR_2020/html/Zhang_The_Secret_Revealer_Generative_Model-Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2020_paper.html)|whitebox|the first GAN-based MIA, instance-level|\n|KEDMI|Knowledge-Enriched Distributional Model Inversion Attacks|[ICCV'2021](https://openaccess.thecvf.com/content/ICCV2021/html/Chen_Knowledge-Enriched_Distributional_Model_Inversion_Attacks_ICCV_2021_paper.html)|whitebox|the first MIA that recovers data distributions, pseudo-labels|\n|VMI|Variational Model Inversion Attacks|[NeurIPS'2021](https://proceedings.neurips.cc/paper/2021/hash/50a074e6a8da4662ae0a29edde722179-Abstract.html)|whitebox|variational inference, special loss function|\n|SecretGen|SecretGen: Privacy Recovery on Pre-trained Models via Distribution Discrimination|[ECCV'2022](https://link.springer.com/chapter/10.1007/978-3-031-20065-6_9#Abs1)|whitebox, blackbox|instance-level, data augmentation|\n|BREPMI|Label-Only Model Inversion Attacks via Boundary Repulsion|[CVPR'2022](https://openaccess.thecvf.com/content/CVPR2022/html/Kahla_Label-Only_Model_Inversion_Attacks_via_Boundary_Repulsion_CVPR_2022_paper.html)|blackbox|boundary repelling, label-only|\n|Mirror|MIRROR: Model Inversion for Deep Learning Network with High Fidelity|[NDSS'2022](https://www.ndss-symposium.org/ndss-paper/auto-draft-203/)|whitebox, blackbox|both gradient-free and gradient-based, genetic algorithm|\n|PPA|Plug \u0026 Play Attacks: Towards Robust and Flexible Model Inversion Attacks|[ICML'2022](https://arxiv.org/pdf/2201.12179.pdf)|whitebox|Initial selection, pre-trained GANs, results selection|\n|PLGMI|Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network|[AAAI'2023](https://ojs.aaai.org/index.php/AAAI/article/view/25442)|whitebox|pseudo-labels, data augmentation, special loss function|\n|C2FMI|C2FMI: Corse-to-Fine Black-box Model Inversion Attack|[TDSC'2023](https://ieeexplore.ieee.org/abstract/document/10148574)|whitebox, blackbox|gradient-free, two-stage|\n|LOMMA|Re-Thinking Model Inversion Attacks Against Deep Neural Networks|[CVPR'2023](https://openaccess.thecvf.com/content/CVPR2023/html/Nguyen_Re-Thinking_Model_Inversion_Attacks_Against_Deep_Neural_Networks_CVPR_2023_paper.html)|blackbox|special loss, model augmentation|\n|RLBMI|Reinforcement Learning-Based Black-Box Model Inversion Attacks|[CVPR'2023](https://openaccess.thecvf.com/content/CVPR2023/html/Han_Reinforcement_Learning-Based_Black-Box_Model_Inversion_Attacks_CVPR_2023_paper.html)|blackbox|reinforcement learning|\n|LOKT|Label-Only Model Inversion Attacks via Knowledge Transfer|[NeurIPS'2023](https://openreview.net/forum?id=NuoIThPPag)|blackbox|surrogate models, label-only|\n|IF-GMI|A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks|[ECCV'2024](https://arxiv.org/abs/2407.13863)|whitebox|intermeidate feature|\n\n\n\n\n## :memo: Model Inversion Defenses\n\n|Method|Paper|Publication|Key Characteristics|\n|:-:|:-:|:-:|:-:|\n|VIB / MID|Improving Robustness to Model Inversion Attacks via Mutual Information Regularization|[AAAI'2021](https://ojs.aaai.org/index.php/AAAI/article/view/17387)| variational method, mutual information, special loss function|\n|BiDO|Bilateral Dependency Optimization: Defending Against Model-inversion Attacks|[KDD'2022](https://dl.acm.org/doi/abs/10.1145/3534678.3539376)|special loss function|\n|TL|Model Inversion Robustness: Can Transfer Learning Help?|[CVPR'2024](https://openreview.net/forum?id=nW0sCc3LLN\u0026nesting=2\u0026sort=date-desc)|transfer learning|\n|LS|Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks|[ICLR'2024](https://openreview.net/forum?id=1SbkubNdbW)|label smoothing|\n\n## :wrench: Environments\n**MIA** can be built up with the following steps:\n1. Clone this repository and create the virtual environment by Anaconda.\n```sh\ngit clone https://github.com/ffhibnese/Model_Inversion_Attack_ToolBox.git\ncd ./Model_Inversion_Attack_ToolBox\nconda create -n MIA python=3.10\nconda activate MIA\n```\n2. Install the related dependencies:\n```sh\npip install -r requirements.txt\n```\n\n## :page_facing_up: Preprocess Datasets and Pre-trained Models\n\nSee [here](./docs/datasets.md) for details to preprocess datasets. \n\nWe have released pre-trained target models and evaluation models in the `checkpoints_v2.0` of [Google Drive](https://drive.google.com/drive/folders/1ko8zAK1j9lTSF8FMvacO8mCKHY9evG9L?usp=sharing).\n\n\u003c!-- ## :racehorse: Run Examples\n\nSee [here](./docs/) for details. --\u003e\n\n\u003c!-- ## :page_facing_up: Datasets and Model Checkpoints\n- For datasets, you can download them according to the file with detailed instructions placed in `./dataset/\u003cDATASET_NAME\u003e/README.md`. \n- For pre-trained models, we prepare all the related model weights files in the following link.   \nDownload pre-trained models [here](https://drive.google.com/drive/folders/1ko8zAK1j9lTSF8FMvacO8mCKHY9evG9L) and place them in `./checkpoints/`. The detailed file path structure is shown in `./checkpoints_structure.txt`.\n\nGenforces models will be automatically downloaded by running the provided scripts.\n\n## :racehorse: Run Examples\n\n### Attack\nWe provide detailed running scripts of attack algorithms in `./attack_scripts/`.\nYou can run any attack algorithm simply by the following instruction and experimental results will be produced in `./results/\u003cATTACK_METHOD\u003e/` by default:\n```sh\npython attack_scripts/\u003cATTACK_METHOD\u003e.py\n```\n\nFor more information, you can read [here](./attack_scripts/README.md).\n\n### Defense\nWe provide simple running scripts of defense algorithms in `./defense_scripts/`. \n\nTo train the model with defense algorithms, you can run\n```sh\npython defense_scripts/\u003cDEFENSE_METHOD\u003e.py\n```\nand training infos will be produced in `./results/\u003cDEFENSE_METHOD\u003e/\u003cDEFENSE_METHOD\u003e.log` by default.\n\nTo evaluate the effectiveness of the defense, you can attack the model by running\n```sh\npython defense_scripts/\u003cDEFENSE_METHOD\u003e_\u003cATTACK_METHOD\u003e.py\n```\nand attack results will be produced in `./results/\u003cDEFENSE_METHOD\u003e_\u003cATTACK_METHOD\u003e` by default.\n\nFor more information, you can read [here](./defense_scripts/README.md). --\u003e\n\n## 📔 Citation\n**If you find our work helpful for your research, please kindly cite our papers:**\n```\n@article{qiu2024mibench,\n  title={MIBench: A Comprehensive Benchmark for Model Inversion Attack and Defense},\n  author={Qiu, Yixiang and Yu, Hongyao and Fang, Hao and Yu, Wenbo and Chen, Bin and Wang, Xuan and Xia, Shu-Tao and Xu, Ke},\n  journal={arXiv preprint arXiv:2410.05159},\n  year={2024}\n}\n\n@article{fang2024privacy,\n  title={Privacy leakage on dnns: A survey of model inversion attacks and defenses},\n  author={Fang, Hao and Qiu, Yixiang and Yu, Hongyao and Yu, Wenbo and Kong, Jiawei and Chong, Baoli and Chen, Bin and Wang, Xuan and Xia, Shu-Tao},\n  journal={arXiv preprint arXiv:2402.04013},\n  year={2024}\n}\n\n@article{qiu2024closer,\n  title={A Closer Look at GAN Priors: Exploiting Intermediate Features for Enhanced Model Inversion Attacks},\n  author={Qiu, Yixiang and Fang, Hao and Yu, Hongyao and Chen, Bin and Qiu, MeiKang and Xia, Shu-Tao},\n  journal={arXiv preprint arXiv:2407.13863},\n  year={2024}\n}\n```\n\n## :sparkles: Acknowledgement\nWe express great gratitude for all the researchers' contributions to the **Model Inversion** community. \n\nIn particular, we thank the authors of [PLGMI](https://github.com/LetheSec/PLG-MI-Attack) for their high-quality codes for datasets, metrics, and three attack methods. It's their great devotion that helps us make **MIA** better!  \n","funding_links":[],"categories":["Tools","Open Source Security Tools","\u003ca id=\"tools\"\u003e\u003c/a\u003e🛠️ Tools"],"sub_categories":["Safety","Model Security"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fffhibnese%2FModel-Inversion-Attack-ToolBox","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fffhibnese%2FModel-Inversion-Attack-ToolBox","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fffhibnese%2FModel-Inversion-Attack-ToolBox/lists"}