Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/ZhengyuZhao/AI-Security-and-Privacy-Events
A curated list of academic events on AI Security & Privacy
adversarial-examples adversarial-machine-learning ai-privacy ai-security data-poisoning
Last synced: 03 Jul 2024
https://github.com/xiaosen-wang/Adversarial-Examples-Paper
Paper list of Adversarial Examples
adversarial-attacks adversarial-examples
Last synced: 03 Jul 2024
https://github.com/bethgelab/foolbox
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
adversarial-attacks adversarial-examples jax keras machine-learning python pytorch tensorflow
Last synced: 01 Jul 2024
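Toolboxes like foolbox automate gradient-based attacks such as FGSM against real networks. A minimal, dependency-free sketch of the same idea against a toy logistic-regression model (the model, weights, and epsilon here are illustrative assumptions, not foolbox's API):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_step(x, w, b, y, eps):
    """One FGSM step against a logistic-regression model: perturb x by
    eps in the direction of the sign of the input gradient of the
    cross-entropy loss, the white-box pattern toolboxes like foolbox
    implement for deep networks."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    # d(loss)/dx_i = (p - y) * w_i for the cross-entropy loss
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# A point on the decision boundary of an illustrative model ...
x = [1.0, -2.0]
w, b = [2.0, 1.0], 0.0
# ... pushed across it with a small perturbation budget
x_adv = fgsm_step(x, w, b, y=0, eps=0.5)
```

The same one-line perturbation rule, applied to image tensors with gradients from autodiff, is what flips state-of-the-art classifiers.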
https://github.com/chenhongge/RobustTrees
[ICML 2019, 20 min long talk] Robust Decision Trees Against Adversarial Examples
adversarial-examples decision-trees gbdt gbm gbrt robust-decision-trees xgboost
Last synced: 29 Jun 2024
https://github.com/dhowe/AdNauseam
AdNauseam: Fight back against advertising surveillance
adversarial-examples browser-extension critical-design privacy privacy-enhancing-technologies surveillance
Last synced: 26 Jun 2024
https://github.com/MadryLab/photoguard
Raising the Cost of Malicious AI-Powered Image Editing
adversarial-attacks adversarial-examples computer-vision deep-learning deepfakes robustness stable-diffusion
Last synced: 07 Jun 2024
https://github.com/cuge1995/IT-Defense
Our code for the paper 'The art of defense: letting networks fool the attacker' (IEEE Transactions on Information Forensics and Security, 2023)
adversarial-attacks adversarial-examples adversarial-machine-learning point-cloud
Last synced: 29 May 2024
https://github.com/BorealisAI/advertorch
A Toolbox for Adversarial Robustness Research
adversarial-attacks adversarial-example adversarial-examples adversarial-learning adversarial-machine-learning adversarial-perturbations benchmarking machine-learning pytorch robustness security toolbox
Last synced: 17 May 2024
https://github.com/hbaniecki/adversarial-explainable-ai
💡 Adversarial attacks on explanations and how to defend them
adversarial adversarial-attacks adversarial-examples adversarial-machine-learning attacks counterfactual deep defense evaluation explainability explainable-ai iml interpretability interpretable interpretable-machine-learning model responsible-ai robustness security xai
Last synced: 13 May 2024
https://github.com/utkuozbulak/adaptive-segmentation-mask-attack
Pre-trained model, code, and materials from the paper "Impact of Adversarial Examples on Deep Learning Models for Biomedical Image Segmentation" (MICCAI 2019).
adversarial-examples segmentation u-net
Last synced: 19 Apr 2024
https://github.com/zbchern/awesome-machine-learning-reliability
A curated list of awesome resources regarding machine learning reliability.
adversarial-examples adversarial-machine-learning machine-learning-testing machine-leraning-reliability
Last synced: 11 Apr 2024
https://github.com/chbrian/awesome-adversarial-examples-dl
A curated list of awesome resources for adversarial examples in deep learning
adversarial-examples computer-vision deep-learning machine-learning security
Last synced: 05 Apr 2024
https://github.com/ChandlerBang/awesome-graph-attack-papers
Adversarial attacks and defenses on Graph Neural Networks.
adversarial-attacks adversarial-examples awesome awesome-list deep-learning defense graph graph-neural-networks literature-review machine-learning robustness
Last synced: 01 Apr 2024
https://github.com/gmh14/RobNets
[CVPR 2020] When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks
adversarial-attacks adversarial-examples deep-learning-architectures neural-architecture-search robustness
Last synced: 31 Mar 2024
https://github.com/juliusberner/theory2practice
Learning ReLU networks to high uniform accuracy is intractable (ICLR 2023)
adversarial-examples deep-learning learning-theory machine-learning-algorithms neural-networks pytorch ray-tune weights-and-biases
Last synced: 24 Mar 2024
https://github.com/QData/TextAttack
TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP. Docs: https://textattack.readthedocs.io/en/master/
adversarial-attacks adversarial-examples adversarial-machine-learning data-augmentation machine-learning natural-language-processing nlp security
Last synced: 23 Mar 2024
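Adversarial NLP frameworks like TextAttack search over small text transformations that preserve readability but change a model's prediction. A hedged, stdlib-only sketch of one such transformation, a deterministic character swap (the function names and length threshold are illustrative, not TextAttack's API):

```python
def swap_chars(word, i):
    """Swap the characters at positions i and i+1."""
    return word[:i] + word[i + 1] + word[i] + word[i + 2:]

def perturb(text, min_len=5):
    # Swap the two middle characters of every sufficiently long word.
    # Such typo-like edits often flip a text classifier's prediction
    # while remaining readable to humans.
    out = []
    for w in text.split():
        out.append(swap_chars(w, len(w) // 2) if len(w) >= min_len else w)
    return " ".join(out)

adv = perturb("this movie was absolutely wonderful")
# adv == "this moive was absoltuely wondreful"
```

A full attack framework wraps transformations like this in a search strategy (greedy, beam, or genetic) guided by the victim model's output.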
https://github.com/ryderling/DEEPSEC
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
adversarial-attacks adversarial-examples deep-leaning defenses
Last synced: 23 Mar 2024
https://github.com/Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
adversarial-attacks adversarial-examples adversarial-machine-learning ai artificial-intelligence attack blue-team evasion extraction inference machine-learning poisoning privacy python red-team trusted-ai trustworthy-ai
Last synced: 23 Mar 2024
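Besides evasion, ART covers training-time (poisoning) attacks. A minimal sketch of the poisoning idea on a toy nearest-centroid classifier, with entirely illustrative data (this is a concept sketch under stated assumptions, not ART's API):

```python
def centroid(points):
    """Component-wise mean of a list of points."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, c0, c1):
    """Assign x to the nearer of the two class centroids."""
    d = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return 0 if d(c0) <= d(c1) else 1

# Clean training data for a nearest-centroid classifier
class0 = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
class1 = [[4.0, 4.0], [5.0, 4.0], [4.0, 5.0]]
x = [2.0, 2.0]
clean = classify(x, centroid(class0), centroid(class1))

# Poisoning: an attacker injects a few mislabeled points into class 0,
# dragging its centroid away and flipping the prediction on x
poisoned0 = class0 + [[12.0, 12.0]] * 3
poisoned = classify(x, centroid(poisoned0), centroid(class1))
```

Here `clean` is 0 and `poisoned` is 1: a handful of mislabeled training points changes the model's behavior on an untouched test input, which is what poisoning defenses in toolboxes like ART try to detect.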
https://github.com/airbnb/artificial-adversary
🗣️ Tool to generate adversarial text examples and test machine learning models against them
adversarial-examples black-box-attacks black-box-benchmarking classification data-mining data-science machine-learning metrics python python2 python3 spam spam-classification spam-detection spam-filtering text text-analysis text-classification text-mining text-processing
Last synced: 23 Mar 2024
https://github.com/advboxes/AdvBox
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MxNet, Keras, and TensorFlow. It can also benchmark the robustness of machine learning models, and it provides a command-line tool to generate adversarial examples with zero coding.
adversarial-attacks adversarial-example adversarial-examples deep-learning deepfool fgsm graphpipe machine-learning onnx paddlepaddle security
Last synced: 23 Mar 2024