https://github.com/trusted-ai/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
- Host: GitHub
- URL: https://github.com/trusted-ai/adversarial-robustness-toolbox
- Owner: Trusted-AI
- License: MIT
- Created: 2018-03-15T14:40:43.000Z (about 7 years ago)
- Default Branch: main
- Last Pushed: 2025-04-03T07:05:30.000Z (20 days ago)
- Last Synced: 2025-04-18T04:34:51.298Z (5 days ago)
- Topics: adversarial-attacks, adversarial-examples, adversarial-machine-learning, ai, artificial-intelligence, attack, blue-team, evasion, extraction, inference, machine-learning, poisoning, privacy, python, red-team, trusted-ai, trustworthy-ai
- Language: Python
- Homepage: https://adversarial-robustness-toolbox.readthedocs.io/en/latest/
- Size: 610 MB
- Stars: 5,198
- Watchers: 99
- Forks: 1,209
- Open Issues: 32
Metadata Files:
- Readme: README-cn.md
- Contributing: CONTRIBUTING.md
- License: LICENSE
- Code of conduct: CODE_OF_CONDUCT.md
- Security: SECURITY.md
- Authors: AUTHORS
README
# Adversarial Robustness Toolbox (ART) v1.18
Adversarial Robustness Toolbox (ART) is a Python library for machine learning security. ART is hosted by the
[Linux Foundation AI & Data Foundation](https://lfaidata.foundation) (LF AI & Data). ART provides tools that
enable developers and researchers to defend and evaluate machine learning models and applications against the
adversarial threats of evasion, poisoning, extraction, and inference. ART supports all popular machine learning
frameworks (TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data
types (images, tables, audio, video, etc.), and machine learning tasks (classification, object detection,
speech recognition, generative models, certification, etc.).

## Adversarial Threats
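ART covers the four threat categories named above: evasion, poisoning, extraction, and inference. As a concrete red-team illustration, the sketch below wraps an ordinary scikit-learn classifier in an ART estimator and mounts an evasion attack with the Fast Gradient Method (FGSM). This is a minimal sketch modeled on ART's get-started examples; the dataset, the `eps` value, and the accuracy comparison are illustrative choices, not prescribed by the library.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import SklearnClassifier

# Illustrative data: any tabular dataset works the same way.
x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

# Fit an ordinary scikit-learn model, then wrap it in an ART estimator
# so that attacks and defences can address it through a uniform interface.
model = SVC(C=1.0, kernel="rbf")
model.fit(x_train, y_train)
classifier = SklearnClassifier(model=model, clip_values=(x.min(), x.max()))

# Evasion (red team): craft adversarial test inputs with FGSM
# and measure the resulting accuracy drop.
attack = FastGradientMethod(estimator=classifier, eps=0.3)
x_test_adv = attack.generate(x=x_test)

clean_acc = np.mean(np.argmax(classifier.predict(x_test), axis=1) == y_test)
adv_acc = np.mean(np.argmax(classifier.predict(x_test_adv), axis=1) == y_test)
print(f"accuracy on clean inputs:       {clean_acc:.2f}")
print(f"accuracy on adversarial inputs: {adv_acc:.2f}")
```

The same `generate`-based pattern applies to ART's other evasion attacks; only the wrapped estimator and the attack class change.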
## ART for Red and Blue Teams (selection)
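On the blue-team side, the same wrapped estimator plugs into ART's defences and metrics. Continuing from the sketch above (it reuses `classifier`, `x`, `y_test`, `x_test`, and `x_test_adv`), the fragment below applies feature squeezing, a generic input-preprocessing defence, and computes ART's empirical-robustness score; the chosen defence and all parameter values here are illustrative assumptions, not the only options.

```python
from art.defences.preprocessor import FeatureSqueezing
from art.metrics import empirical_robustness

# Defence (blue team): squeeze the input features to a coarser bit
# depth before prediction, which can erase small adversarial
# perturbations. clip_values and bit_depth are illustrative.
squeezer = FeatureSqueezing(clip_values=(x.min(), x.max()), bit_depth=4)
x_test_adv_squeezed, _ = squeezer(x_test_adv)
defended_acc = np.mean(
    np.argmax(classifier.predict(x_test_adv_squeezed), axis=1) == y_test
)
print(f"accuracy after feature squeezing: {defended_acc:.2f}")

# Metric: estimate the average minimal FGSM perturbation needed to
# change the model's predictions (higher suggests more robust).
robustness = empirical_robustness(
    classifier, x_test, attack_name="fgsm", attack_params={"eps_step": 0.1}
)
print(f"empirical robustness: {robustness:.3f}")
```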
## Learn More

| **[Get Started][get-started]** | **[Documentation][documentation]** | **[Contributing][contributing]** |
|--------------------------------|------------------------------------|----------------------------------|
| - [Installation][installation]<br>- [Examples](examples/README.md)<br>- [Notebooks](notebooks/README.md) | - [Attacks][attacks]<br>- [Defences][defences]<br>- [Estimators][estimators]<br>- [Metrics][metrics]<br>- [Technical Documentation](https://adversarial-robustness-toolbox.readthedocs.io) | - [Slack](https://ibm-art.slack.com), [Invitation](https://join.slack.com/t/ibm-art/shared_invite/enQtMzkyOTkyODE4NzM4LTA4NGQ1OTMxMzFmY2Q1MzE1NWI2MmEzN2FjNGNjOGVlODVkZDE0MjA1NTA4OGVkMjVkNmQ4MTY1NmMyOGM5YTg)<br>- [Contributing](CONTRIBUTING.md)<br>- [Roadmap][roadmap]<br>- [Citing][citing] |

[get-started]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Get-Started
[attacks]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/ART-Attacks
[defences]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/ART-Defences
[estimators]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/ART-Estimators
[metrics]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/ART-Metrics
[contributing]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Contributing
[documentation]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Documentation
[installation]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Get-Started#setup
[roadmap]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Roadmap
[citing]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Contributing#citing-art

The library is under continuous development. Feedback, bug reports, and contributions are very welcome!
# Acknowledgements

This material is in part based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0013. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).