Ecosyste.ms: Awesome

An open API service indexing awesome lists of open source software.

https://github.com/bethgelab/foolbox

A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX

adversarial-attacks adversarial-examples jax keras machine-learning python pytorch tensorflow

Last synced: 01 Jul 2024
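Foolbox implements gradient-based attacks such as FGSM (the fast gradient sign method). As a minimal illustration of the idea, not of the Foolbox API itself, here is FGSM in plain NumPy against a toy logistic-regression "network" with made-up weights, where the input gradient can be written analytically:

```python
import numpy as np

# Toy logistic-regression "network": p(y=1|x) = sigmoid(w.x + b).
# Weights and data are made up for illustration.
w = np.array([2.0, -1.0])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)

def input_grad(x, y):
    # Gradient of the cross-entropy loss w.r.t. the *input* x:
    # dL/dx = (p - y) * w, analytic because the model is linear in x.
    return (predict(x) - y) * w

def fgsm(x, y, eps):
    # FGSM: one step of size eps along the sign of the input gradient.
    return x + eps * np.sign(input_grad(x, y))

x = np.array([0.5, 0.5])   # clean input with true label 1
x_adv = fgsm(x, y=1, eps=0.6)

print(predict(x))      # ~0.65: confidently class 1 on the clean input
print(predict(x_adv))  # ~0.23: the bounded perturbation flips the prediction
```

Libraries like Foolbox wrap the same loop for real PyTorch/TensorFlow/JAX models, where the input gradient comes from autodiff rather than a closed form.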

https://github.com/chenhongge/RobustTrees

[ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples

adversarial-examples decision-trees gbdt gbm gbrt robust-decision-trees xgboost

Last synced: 29 Jun 2024

https://github.com/cuge1995/IT-Defense

Our code for paper 'The art of defense: letting networks fool the attacker', IEEE Transactions on Information Forensics and Security, 2023

adversarial-attacks adversarial-examples adversarial-machine-learning point-cloud

Last synced: 29 May 2024

https://github.com/utkuozbulak/adaptive-segmentation-mask-attack

Pre-trained model, code, and materials from the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" (MICCAI 2019).

adversarial-examples segmentation u-net

Last synced: 19 Apr 2024

https://github.com/chbrian/awesome-adversarial-examples-dl

A curated list of awesome resources for adversarial examples in deep learning

adversarial-examples computer-vision deep-learning machine-learning security

Last synced: 05 Apr 2024

https://github.com/gmh14/RobNets

[CVPR 2020] When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks

adversarial-attacks adversarial-examples deep-learning-architectures neural-architecture-search robustness

Last synced: 31 Mar 2024

https://github.com/QData/TextAttack

TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/

adversarial-attacks adversarial-examples adversarial-machine-learning data-augmentation machine-learning natural-language-processing nlp security

Last synced: 23 Mar 2024
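Word-substitution attacks of the kind TextAttack provides search for meaning-preserving edits that change a classifier's output. A self-contained sketch of that search loop, using a hypothetical keyword-counting "model" and a made-up synonym table (a real attack would use embeddings or WordNet):

```python
# Toy sentiment "model": positive-word count minus negative-word count.
POSITIVE = {"great", "good", "wonderful"}
NEGATIVE = {"bad", "awful", "terrible"}

# Hypothetical synonym table, standing in for an embedding-based one.
SYNONYMS = {"great": ["immense"], "good": ["sound"], "bad": ["poor"]}

def score(words):
    # > 0 means the "model" predicts positive sentiment.
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def greedy_attack(words):
    # Greedily swap in synonyms while each swap lowers the score,
    # mimicking the greedy search used by word-substitution attacks.
    words = list(words)
    for i, w in enumerate(words):
        for cand in SYNONYMS.get(w, []):
            trial = words[:i] + [cand] + words[i + 1:]
            if score(trial) < score(words):
                words = trial
                break
    return words

clean = "this movie is great and good".split()
adv = greedy_attack(clean)
print(clean, score(clean))  # score 2: predicted positive
print(adv, score(adv))      # score 0: no longer predicted positive
```

TextAttack's real search strategies (greedy, beam search, genetic algorithms) follow this same perturb-and-query pattern against trained NLP models.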

https://github.com/ryderling/DEEPSEC

DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model

adversarial-attacks adversarial-examples deep-leaning defenses

Last synced: 23 Mar 2024

https://github.com/Trusted-AI/adversarial-robustness-toolbox

Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams

adversarial-attacks adversarial-examples adversarial-machine-learning ai artificial-intelligence attack blue-team evasion extraction inference machine-learning poisoning privacy python red-team trusted-ai trustworthy-ai

Last synced: 23 Mar 2024
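Among the threat models ART covers, poisoning differs from evasion in that the attacker corrupts training data rather than test inputs. A minimal sketch of label-flipping poisoning, using a made-up 1-D dataset and a nearest-centroid classifier (not ART's API):

```python
import numpy as np

# Tiny 1-D training set for a nearest-centroid classifier (made-up data).
X = np.array([0.0, 1.0, 2.0, 8.0, 9.0, 10.0])
y = np.array([0, 0, 0, 1, 1, 1])

def fit_centroids(X, y):
    # One centroid per class: the mean of that class's training points.
    return {c: X[y == c].mean() for c in np.unique(y)}

def predict(centroids, x):
    # Predict the class whose centroid is nearest to x.
    return min(centroids, key=lambda c: abs(x - centroids[c]))

clean = fit_centroids(X, y)
print(predict(clean, 4.0))  # nearer the class-0 centroid (1.0): predicts 0

# Label-flipping poisoning: relabel two class-0 points as class 1,
# dragging the class-1 centroid (9.0 -> 6.0) toward the class-0 region.
y_poisoned = y.copy()
y_poisoned[[1, 2]] = 1
poisoned = fit_centroids(X, y_poisoned)
print(predict(poisoned, 4.0))  # the same test point now predicts 1
```

ART provides implementations and defenses for this attack family (along with evasion, extraction, and inference) against real scikit-learn, PyTorch, and TensorFlow models.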

https://github.com/advboxes/AdvBox

AdvBox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. AdvBox also provides a command-line tool to generate adversarial examples with zero coding.

adversarial-attacks adversarial-example adversarial-examples deep-learning deepfool fgsm graphpipe machine-learning onnx paddlepaddle security

Last synced: 23 Mar 2024