{"id":13640641,"url":"https://github.com/spring-epfl/mia","last_synced_at":"2025-04-20T02:34:21.955Z","repository":{"id":33020144,"uuid":"149789946","full_name":"spring-epfl/mia","owner":"spring-epfl","description":"A library for running membership inference attacks against ML models","archived":true,"fork":false,"pushed_at":"2022-12-08T02:54:03.000Z","size":73,"stargazers_count":134,"open_issues_count":20,"forks_count":27,"subscribers_count":7,"default_branch":"master","last_synced_at":"2024-04-23T18:15:09.539Z","etag":null,"topics":["adversarial-machine-learning","machine-learning","privacy"],"latest_commit_sha":null,"homepage":"","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/spring-epfl.png","metadata":{"files":{"readme":"README.rst","changelog":null,"contributing":"CONTRIBUTING.rst","funding":null,"license":"LICENSE.txt","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2018-09-21T16:31:02.000Z","updated_at":"2024-04-09T06:48:02.000Z","dependencies_parsed_at":"2023-01-14T23:04:47.047Z","dependency_job_id":null,"html_url":"https://github.com/spring-epfl/mia","commit_stats":null,"previous_names":[],"tags_count":1,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/spring-epfl%2Fmia","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/spring-epfl%2Fmia/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/spring-epfl%2Fmia/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/spring-epfl%2Fmia/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/spring-epfl","download_url":"https://codeload.github.com/spring-epfl/mia/tar.gz/refs/heads/master","host":{"
name":"GitHub","url":"https://github.com","kind":"github","repositories_count":223816685,"owners_count":17207900,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["adversarial-machine-learning","machine-learning","privacy"],"created_at":"2024-08-02T01:01:13.023Z","updated_at":"2024-11-09T10:31:31.119Z","avatar_url":"https://github.com/spring-epfl.png","language":"Python","readme":"--------\n\n**ATTENTION:** This library is not maintained at the moment due to lack of capacity. There is a plan to update it eventually, but meanwhile check out `these projects \u003chttps://github.com/inspire-group/membership-inference-evaluation\u003e`_ for more up-to-date attacks.\n\n--------\n\n###\nmia\n###\n\n|pypi| |license| |build_status| |docs_status| |zenodo|\n\n.. |pypi| image:: https://img.shields.io/pypi/v/mia.svg\n   :target: https://pypi.org/project/mia/\n   :alt: PyPI version\n\n.. |build_status| image:: https://travis-ci.org/spring-epfl/mia.svg?branch=master\n   :target: https://travis-ci.org/spring-epfl/mia\n   :alt: Build status\n\n.. |docs_status| image:: https://readthedocs.org/projects/mia-lib/badge/?version=latest\n   :target: https://mia-lib.readthedocs.io/?badge=latest\n   :alt: Documentation status\n\n.. |license| image:: https://img.shields.io/pypi/l/mia.svg\n   :target: https://pypi.org/project/mia/\n   :alt: License\n\n.. 
|zenodo| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1433744.svg\n   :target: https://zenodo.org/record/1433744\n   :alt: Citing with Zenodo\n\nA library for running membership inference attacks (MIA) against machine learning models. Check out\nthe `documentation \u003chttps://mia-lib.rtfd.io\u003e`_.\n\n.. description-marker-do-not-remove\n\nThese are attacks against the privacy of the training data. In MIA, an attacker tries to guess whether a\ngiven example was used to train a target model, using only query access to the model. See\nmore in the paper by `Shokri et al \u003chttps://arxiv.org/abs/1610.05820\u003e`_. Currently, you can use the\nlibrary to evaluate the robustness of your Keras or PyTorch models to MIA.\n\nFeatures:\n\n* Implements the original shadow model `attack \u003chttps://arxiv.org/abs/1610.05820\u003e`_\n* Is customizable: can use any scikit-learn ``Estimator``-like object as a shadow or attack model\n* Is tested with Keras and PyTorch\n\n.. getting-started-marker-do-not-remove\n\n===============\nGetting started\n===============\n\nYou can install mia from PyPI:\n\n.. code-block::  bash\n\n    pip install mia\n\n.. usage-marker-do-not-remove\n\n=====\nUsage\n=====\n\nShokri et al. attack\n====================\n\nSee the `full runnable example\n\u003chttps://github.com/spring-epfl/mia/tree/master/examples/cifar10.py\u003e`_. Read the details of the\nattack in the `paper \u003chttps://arxiv.org/abs/1610.05820\u003e`_.\n\nLet ``target_model_fn()`` return the target model architecture as a scikit-like classifier. The\nattack is white-box, meaning the attacker is assumed to know the architecture. Let ``NUM_CLASSES``\nbe the number of classes of the classification problem.\n\nFirst, the attacker needs to train several *shadow models*, which mimic the target model,\non different datasets sampled from the original data distribution. 
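For illustration, ``target_model_fn`` could be as simple as the following hypothetical sketch, in which a plain scikit-learn classifier stands in for the real architecture (in practice this would typically be a Keras or PyTorch model exposed through a scikit-learn-style interface):\n\n.. code-block::  python\n\n    from sklearn.neural_network import MLPClassifier\n\n    def target_model_fn():\n        # Hypothetical stand-in: any object exposing scikit-learn-style\n        # fit/predict_proba methods can serve as the model architecture.\n        return MLPClassifier(hidden_layer_sizes=(64,), max_iter=100)\n\n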
The following code snippet\ninitializes a *shadow model bundle* and runs the training of the shadows. For each shadow model,\n``2 * SHADOW_DATASET_SIZE`` examples are sampled without replacement from the full attacker's\ndataset. Half of them are held out as a control set, and the other half is used to train the shadow model.\n\n.. code-block::  python\n\n    from mia.estimators import ShadowModelBundle\n\n    smb = ShadowModelBundle(\n        target_model_fn,\n        shadow_dataset_size=SHADOW_DATASET_SIZE,\n        num_models=NUM_MODELS,\n    )\n    X_shadow, y_shadow = smb.fit_transform(attacker_X_train, attacker_y_train)\n\n``fit_transform`` returns *attack data* ``X_shadow, y_shadow``. Each row in ``X_shadow`` is a\nconcatenated vector consisting of the prediction vector of a shadow model for an example from the\noriginal dataset, and the example's class (one-hot encoded). Its shape is hence ``(2 *\nSHADOW_DATASET_SIZE, 2 * NUM_CLASSES)``. Each label in ``y_shadow`` is zero if the corresponding\nexample was \"out\" of the shadow model's training dataset (control), and one if it was \"in\" the\ntraining set.\n\nmia provides a class to train a bundle of attack models, one model per class. ``attack_model_fn()``\nshould return a scikit-like binary classifier that takes a ``(NUM_CLASSES, )`` vector of model predictions\nand predicts whether an example with these predictions was in the training set or out of it.\n\n.. code-block::  python\n\n    from mia.estimators import AttackModelBundle\n\n    amb = AttackModelBundle(attack_model_fn, num_classes=NUM_CLASSES)\n    amb.fit(X_shadow, y_shadow)\n\nIn place of the ``AttackModelBundle`` one can use any binary classifier that takes ``(2 *\nNUM_CLASSES, )``-shape examples: as explained above, the first half of an input is the prediction\nvector from a model; the second half is the true class of the corresponding example.\n\nTo evaluate the attack, one must encode the data in the above-mentioned format. 
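For reference, ``attack_model_fn`` can likewise be a plain scikit-learn classifier; a hypothetical sketch:\n\n.. code-block::  python\n\n    from sklearn.linear_model import LogisticRegression\n\n    def attack_model_fn():\n        # Hypothetical stand-in: a binary in/out classifier over a\n        # (NUM_CLASSES, ) vector of model predictions.\n        return LogisticRegression(max_iter=1000)\n\n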
Let ``target_model`` be\nthe target model, ``data_in`` the data (tuple ``X, y``) that was used to train the target model, and\n``data_out`` the data that was not used in the training.\n\n.. code-block::  python\n\n    import numpy as np\n\n    from mia.estimators import prepare_attack_data\n\n    attack_test_data, real_membership_labels = prepare_attack_data(\n        target_model, data_in, data_out\n    )\n\n    attack_guesses = amb.predict(attack_test_data)\n    attack_accuracy = np.mean(attack_guesses == real_membership_labels)\n\n.. misc-marker-do-not-remove\n\n======\nCiting\n======\n\n.. code-block::\n\n   @misc{mia,\n     author       = {Bogdan Kulynych and\n                     Mohammad Yaghini},\n     title        = {{mia: A library for running membership inference\n                      attacks against ML models}},\n     month        = sep,\n     year         = 2018,\n     doi          = {10.5281/zenodo.1433744},\n     url          = {https://doi.org/10.5281/zenodo.1433744}\n   }\n\n","funding_links":[],"categories":["Adversarial Robustness Libraries","Pytorch \u0026 related libraries｜Pytorch \u0026 相关库","Adversarial Robustness","Pytorch \u0026 related libraries"],"sub_categories":["Probabilistic/Generative Libraries｜概率库和生成库:","Probabilistic/Generative Libraries:"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fspring-epfl%2Fmia","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fspring-epfl%2Fmia","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fspring-epfl%2Fmia/lists"}