{"id":13633722,"url":"https://github.com/bethgelab/foolbox","last_synced_at":"2025-05-14T04:04:06.961Z","repository":{"id":37318414,"uuid":"94331757","full_name":"bethgelab/foolbox","owner":"bethgelab","description":"A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX","archived":false,"fork":false,"pushed_at":"2024-04-03T16:17:05.000Z","size":11253,"stargazers_count":2852,"open_issues_count":29,"forks_count":432,"subscribers_count":46,"default_branch":"master","last_synced_at":"2025-05-03T05:02:05.397Z","etag":null,"topics":["adversarial-attacks","adversarial-examples","jax","keras","machine-learning","python","pytorch","tensorflow"],"latest_commit_sha":null,"homepage":"https://foolbox.jonasrauber.de","language":"Python","has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":"mit","status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/bethgelab.png","metadata":{"files":{"readme":"README.rst","changelog":null,"contributing":null,"funding":".github/FUNDING.yml","license":"LICENSE","code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null,"governance":null,"roadmap":null,"authors":null,"dei":null,"publiccode":null,"codemeta":null,"zenodo":null},"funding":{"patreon":null,"open_collective":null,"ko_fi":null,"tidelift":null,"community_bridge":null,"liberapay":null,"issuehunt":null,"otechie":null,"custom":["paypal.me/jonasrauber"]}},"created_at":"2017-06-14T13:05:48.000Z","updated_at":"2025-04-24T21:43:49.000Z","dependencies_parsed_at":"2024-06-18T14:03:49.488Z","dependency_job_id":"82662493-2307-4bb7-83f1-706954ab5789","html_url":"https://github.com/bethgelab/foolbox","commit_stats":{"total_commits":1422,"total_committers":36,"mean_commits":39.5,"dds":"0.35724331926863573","last_synced_commit":"12abe74e2f1ec79edb759454458ad8dd9ce84939"},"previous_names":[],"tags_count":59,"template":
false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bethgelab%2Ffoolbox","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bethgelab%2Ffoolbox/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bethgelab%2Ffoolbox/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/bethgelab%2Ffoolbox/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/bethgelab","download_url":"https://codeload.github.com/bethgelab/foolbox/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":254059501,"owners_count":22007768,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":["adversarial-attacks","adversarial-examples","jax","keras","machine-learning","python","pytorch","tensorflow"],"created_at":"2024-08-01T23:00:50.528Z","updated_at":"2025-05-14T04:04:06.935Z","avatar_url":"https://github.com/bethgelab.png","language":"Python","readme":".. raw:: html\n\n   \u003ca href=\"https://foolbox.jonasrauber.de\"\u003e\u003cimg src=\"https://raw.githubusercontent.com/bethgelab/foolbox/master/guide/.vuepress/public/logo_small.png\" align=\"right\" /\u003e\u003c/a\u003e\n\n.. image:: https://badge.fury.io/py/foolbox.svg\n   :target: https://badge.fury.io/py/foolbox\n\n.. image:: https://readthedocs.org/projects/foolbox/badge/?version=latest\n    :target: https://foolbox.readthedocs.io/en/latest/\n\n.. 
image:: https://img.shields.io/badge/code%20style-black-000000.svg\n   :target: https://github.com/ambv/black\n\n.. image:: https://joss.theoj.org/papers/10.21105/joss.02607/status.svg\n   :target: https://doi.org/10.21105/joss.02607\n\n===============================================================================================================================\nFoolbox: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX\n===============================================================================================================================\n\n`Foolbox \u003chttps://foolbox.jonasrauber.de\u003e`_ is a **Python library** that lets you easily run adversarial attacks against machine learning models like deep neural networks. It is built on top of EagerPy and works natively with models in `PyTorch \u003chttps://pytorch.org\u003e`_, `TensorFlow \u003chttps://www.tensorflow.org\u003e`_, and `JAX \u003chttps://github.com/google/jax\u003e`_.\n\n🔥 Design\n----------\n\n**Foolbox 3** has been rewritten from scratch\nusing `EagerPy \u003chttps://github.com/jonasrauber/eagerpy\u003e`_ instead of\nNumPy to achieve native performance on models\ndeveloped in PyTorch, TensorFlow, and JAX, all with one code base and without code duplication.\n\n- **Native Performance**: Foolbox 3 is built on top of EagerPy, runs natively in PyTorch, TensorFlow, and JAX, and comes with real batch support.\n- **State-of-the-art attacks**: Foolbox provides a large collection of state-of-the-art gradient-based and decision-based adversarial attacks.\n- **Type Checking**: Catch bugs before running your code thanks to extensive type annotations in Foolbox.\n\n📖 Documentation\n-----------------\n\n- **Guide**: The best place to get started with Foolbox is the official `guide \u003chttps://foolbox.jonasrauber.de\u003e`_.\n- **Tutorial**: If you are looking for a tutorial, check out this `Jupyter notebook 
\u003chttps://github.com/jonasrauber/foolbox-native-tutorial/blob/master/foolbox-native-tutorial.ipynb\u003e`_ |colab|.\n- **Documentation**: The API documentation can be found on `ReadTheDocs \u003chttps://foolbox.readthedocs.io/en/stable/\u003e`_.\n\n.. |colab| image:: https://colab.research.google.com/assets/colab-badge.svg\n   :target: https://colab.research.google.com/github/jonasrauber/foolbox-native-tutorial/blob/master/foolbox-native-tutorial.ipynb\n\n🚀 Quickstart\n--------------\n\n.. code-block:: bash\n\n   pip install foolbox\n\nFoolbox is tested with Python 3.8 and newer - however, it will most likely also work with versions 3.6 - 3.7. To use it with `PyTorch \u003chttps://pytorch.org\u003e`_, `TensorFlow \u003chttps://www.tensorflow.org\u003e`_, or `JAX \u003chttps://github.com/google/jax\u003e`_, the respective framework needs to be installed separately. These frameworks are not declared as dependencies because not everyone wants to install all of them, and because some of these packages have different builds for different architectures and CUDA versions. All other essential dependencies are installed automatically.\n\nYou can see the versions we currently use for testing in the `Compatibility section \u003c#-compatibility\u003e`_ below, but newer versions are in general expected to work.\n\n🎉 Example\n-----------\n\n.. 
code-block:: python\n\n   import foolbox as fb\n\n   model = ...\n   fmodel = fb.PyTorchModel(model, bounds=(0, 1))\n\n   attack = fb.attacks.LinfPGD()\n   epsilons = [0.0, 0.001, 0.01, 0.03, 0.1, 0.3, 0.5, 1.0]\n   _, advs, success = attack(fmodel, images, labels, epsilons=epsilons)\n\n\nMore examples can be found in the `examples \u003c./examples/\u003e`_ folder, e.g.\na full `ResNet-18 example \u003c./examples/single_attack_pytorch_resnet18.py\u003e`_.\n\n📄 Citation\n------------\n\nIf you use Foolbox for your work, please cite our `JOSS paper on Foolbox Native (i.e., Foolbox 3.0) \u003chttps://doi.org/10.21105/joss.02607\u003e`_ and our `ICML workshop paper on Foolbox \u003chttps://arxiv.org/abs/1707.04131\u003e`_ using the following BibTeX entries:\n\n.. code-block::\n\n   @article{rauber2017foolboxnative,\n     doi = {10.21105/joss.02607},\n     url = {https://doi.org/10.21105/joss.02607},\n     year = {2020},\n     publisher = {The Open Journal},\n     volume = {5},\n     number = {53},\n     pages = {2607},\n     author = {Jonas Rauber and Roland Zimmermann and Matthias Bethge and Wieland Brendel},\n     title = {Foolbox Native: Fast adversarial attacks to benchmark the robustness of machine learning models in PyTorch, TensorFlow, and JAX},\n     journal = {Journal of Open Source Software}\n   }\n\n.. 
code-block::\n\n   @inproceedings{rauber2017foolbox,\n     title={Foolbox: A Python toolbox to benchmark the robustness of machine learning models},\n     author={Rauber, Jonas and Brendel, Wieland and Bethge, Matthias},\n     booktitle={Reliable Machine Learning in the Wild Workshop, 34th International Conference on Machine Learning},\n     year={2017},\n     url={http://arxiv.org/abs/1707.04131},\n   }\n\n\n👍 Contributions\n-----------------\n\nWe welcome contributions of all kinds; please have a look at our\n`development guidelines \u003chttps://foolbox.jonasrauber.de/guide/development.html\u003e`_.\nIn particular, you are invited to contribute\n`new adversarial attacks \u003chttps://foolbox.jonasrauber.de/guide/adding_attacks.html\u003e`_.\nIf you would like to help, you can also have a look at the issues that are\nmarked with `contributions welcome\n\u003chttps://github.com/bethgelab/foolbox/issues?q=is%3Aopen+is%3Aissue+label%3A%22contributions+welcome%22\u003e`_.\n\n💡 Questions?\n--------------\n\nIf you have a question or need help, feel free to open an issue on GitHub.\nOnce GitHub Discussions becomes publicly available, we will switch to that.\n\n💨 Performance\n--------------\n\nFoolbox 3.0 is much faster than Foolbox 1 and 2. A basic `performance comparison`_ can be found in the `performance` folder.\n\n🐍 Compatibility\n-----------------\n\nWe currently test with the following versions:\n\n* PyTorch 1.10.1\n* TensorFlow 2.6.3\n* JAX 0.2.17\n* NumPy 1.18.1\n\n.. 
_performance comparison: performance/README.md\n","funding_links":["paypal.me/jonasrauber"],"categories":["🛡️ Adversarial Testing","Open Source Security Tools","Adversarial Robustness Libraries","Other: Machine Learning \u0026 Deep Learning","Python (144)","Tools","Adversarial Attacks","Deep Learning Framework","Adversarial Robustness","CV","Adversarial Learning \u0026 Robustness","Python","🛠️ Tools","AI Safety Tools","Frameworks","Security and robustness","Code","Technical Resources","LLM SECURITY / AI SECURITY","Deep Learning Tools","Data Science","2. Adversarial Machine Learning","Attack Techniques \u0026 Red Teaming"],"sub_categories":["Robustness","Interpretability \u0026 Adversarial Training","Red-Teaming Harnesses \u0026 Automated Security Testing","Adversarial Attacks","AI Security Tools","Open Source/Access Responsible AI Software Packages","AI Red Teaming \u0026 Adversarial Testing","Machine Learning","2.1 Toolkits \u0026 Libraries","Adversarial ML \u0026 Classical Models"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbethgelab%2Ffoolbox","html_url":"https://awesome.ecosyste.ms/projects/github.com%2Fbethgelab%2Ffoolbox","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2Fbethgelab%2Ffoolbox/lists"}