{"id":13409256,"url":"https://github.com/DeepSpaceHarbor/Awesome-AI-Security","last_synced_at":"2025-03-14T14:31:04.915Z","repository":{"id":37390704,"uuid":"102793957","full_name":"DeepSpaceHarbor/Awesome-AI-Security","owner":"DeepSpaceHarbor","description":":file_folder: #AISecurity","archived":false,"fork":false,"pushed_at":"2022-09-02T00:44:24.000Z","size":18,"stargazers_count":1338,"open_issues_count":1,"forks_count":175,"subscribers_count":88,"default_branch":"master","last_synced_at":"2024-08-02T00:21:28.711Z","etag":null,"topics":[],"latest_commit_sha":null,"homepage":"","language":null,"has_issues":true,"has_wiki":null,"has_pages":null,"mirror_url":null,"source_name":null,"license":null,"status":null,"scm":"git","pull_requests_enabled":true,"icon_url":"https://github.com/DeepSpaceHarbor.png","metadata":{"files":{"readme":"README.md","changelog":null,"contributing":null,"funding":null,"license":null,"code_of_conduct":null,"threat_model":null,"audit":null,"citation":null,"codeowners":null,"security":null,"support":null}},"created_at":"2017-09-07T23:07:00.000Z","updated_at":"2024-08-01T09:08:19.000Z","dependencies_parsed_at":"2022-07-15T21:17:20.773Z","dependency_job_id":null,"html_url":"https://github.com/DeepSpaceHarbor/Awesome-AI-Security","commit_stats":null,"previous_names":["randomadversary/awesome-ai-security"],"tags_count":0,"template":false,"template_full_name":null,"repository_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeepSpaceHarbor%2FAwesome-AI-Security","tags_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeepSpaceHarbor%2FAwesome-AI-Security/tags","releases_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeepSpaceHarbor%2FAwesome-AI-Security/releases","manifests_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories/DeepSpaceHarbor%2FAwesome-AI-Security/manifests","owner_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners/DeepSpaceHarbor","download_url":"https://codeload.github.com/DeepSpaceHarbor/Awesome-AI-Security/tar.gz/refs/heads/master","host":{"name":"GitHub","url":"https://github.com","kind":"github","repositories_count":243593324,"owners_count":20316166,"icon_url":"https://github.com/github.png","version":null,"created_at":"2022-05-30T11:31:42.601Z","updated_at":"2022-07-04T15:15:14.044Z","host_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub","repositories_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repositories","repository_names_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/repository_names","owners_url":"https://repos.ecosyste.ms/api/v1/hosts/GitHub/owners"}},"keywords":[],"created_at":"2024-07-30T20:00:59.268Z","updated_at":"2025-03-14T14:31:04.881Z","avatar_url":"https://github.com/DeepSpaceHarbor.png","language":null,"readme":"# Awesome AI Security ![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)\nA curated list of AI security resources inspired by [awesome-adversarial-machine-learning](https://github.com/yenchenlin/awesome-adversarial-machine-learning) \u0026 [awesome-ml-for-cybersecurity](https://github.com/jivoi/awesome-ml-for-cybersecurity).\n    \n[research]: https://cdn4.iconfinder.com/data/icons/48-bubbles/48/12.File-32.png \"Research\"\n[slides]: https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/x-office-presentation-32.png \"Slides\"\n[video]: https://cdn2.iconfinder.com/data/icons/snipicons/500/video-32.png \"Video\"\n[web]: https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/internet-web-browser-32.png \"Website or blog post\"\n[code]: https://cdn2.iconfinder.com/data/icons/snipicons/500/application-code-32.png \"Code\"\n[other]: https://cdn3.iconfinder.com/data/icons/tango-icon-library/48/emblem-symbolic-link-32.png \"Uncategorized\"\n\n#### Legend:\n|Type| Icon|\n|---|---|\n| Research  | ![][research]|\n| Slides  | ![][slides] |\n| Video | ![][video]  |\n| Website / Blog post  | ![][web]  |\n| Code  | ![][code]|\n| Other  | ![][other]|\n\n## Keywords:\n- [Adversarial examples](#-adversarial-examples)\n- [Evasion attacks](#-evasion)\n- [Poisoning attacks](#-poisoning)\n- [Feature selection](#-feature-selection)\n- Tutorials\n- [Misc](#-misc)\n- [Code](#-code)\n- [Links](#-links)\n\n## [▲](#keywords) Adversarial examples\n|Type|Title|\n|---|:---|\n|![][research]  | [Explaining and Harnessing Adversarial Examples](https://arxiv.org/abs/1412.6572)  |\n| ![][research]  | [Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples](https://arxiv.org/abs/1605.07277)|\n|![][research] | [Delving into Transferable Adversarial Examples and Black-box Attacks](https://arxiv.org/abs/1611.02770)|\n|![][research] | [On the (Statistical) Detection of Adversarial Examples](https://arxiv.org/abs/1702.06280)|\n|![][research] | [The Space of Transferable Adversarial Examples](https://arxiv.org/abs/1704.03453)|\n|![][research] | [Adversarial Attacks on Neural Network Policies](http://rll.berkeley.edu/adversarial/)|\n|![][research] | [Adversarial Perturbations Against Deep Neural Networks for Malware Classification](https://arxiv.org/abs/1606.04435)|\n|![][research] | [Crafting Adversarial Input Sequences for Recurrent Neural Networks](https://arxiv.org/abs/1604.08275)|\n|![][research]| [Practical Black-Box Attacks against Machine Learning](https://arxiv.org/abs/1602.02697)|\n|![][research]| [Adversarial examples in the physical world](https://arxiv.org/abs/1607.02533)|\n|![][research]| [Robust Physical-World Attacks on Deep Learning Models](https://arxiv.org/abs/1707.08945)|\n|![][research]| [Can you fool AI with adversarial examples on a visual Turing test?](https://arxiv.org/abs/1709.08693)|\n|![][research]| [Synthesizing Robust Adversarial Examples](https://arxiv.org/abs/1707.07397)|\n|![][research]| [Defensive Distillation is Not Robust to Adversarial Examples](http://nicholas.carlini.com/papers/2016_defensivedistillation.pdf)|\n|![][research]| [Vulnerability of machine learning models to adversarial examples](http://ceur-ws.org/Vol-1649/187.pdf)|\n|![][research]| [Adversarial Examples for Evaluating Reading Comprehension Systems](https://nlp.stanford.edu/pubs/jia2017adversarial.pdf)|\n|![][video]| [Adversarial Examples and Adversarial Training by Ian Goodfellow at Stanford](https://www.youtube.com/watch?v=CIfsB_EYsVI)|\n|![][research]| [Tactics of Adversarial Attack on Deep Reinforcement Learning Agents](http://yclin.me/adversarial_attack_RL/)|\n|![][research]| [Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey](https://arxiv.org/abs/1801.00553)|\n|![][research]| [Did you hear that? Adversarial Examples Against Automatic Speech Recognition](https://arxiv.org/abs/1801.00554)|\n|![][research]| [Adversarial Manipulation of Deep Representations](https://arxiv.org/abs/1511.05122)|\n|![][research]| [Exploring the Space of Adversarial Images](https://arxiv.org/abs/1510.05328)|\n|![][research]| [Note on Attacking Object Detectors with Adversarial Stickers](https://arxiv.org/abs/1712.08062)|\n|![][research]| [Adversarial Patch](https://arxiv.org/abs/1712.09665)|\n|![][research]| [LOTS about Attacking Deep Features](https://arxiv.org/abs/1611.06179)|\n|![][research]| [Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN](https://arxiv.org/abs/1702.05983)|\n|![][research]| [Adversarial Images for Variational Autoencoders](https://arxiv.org/abs/1612.00155)|\n|![][research]| [Delving into adversarial attacks on deep policies](https://arxiv.org/abs/1705.06452)|\n|![][research]| [Simple Black-Box Adversarial Perturbations for Deep Networks](https://arxiv.org/abs/1612.06299)|\n|![][research]| [DeepFool: a simple and accurate method to fool deep neural networks](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Moosavi-Dezfooli_DeepFool_A_Simple_CVPR_2016_paper.pdf)|\n\n\n## [▲](#keywords) Evasion\n|Type|Title|\n|---|:---|\n|![][research]|[Query Strategies for Evading Convex-Inducing Classifiers](https://people.eecs.berkeley.edu/~adj/publications/paper-files/1007-0484v1.pdf)|\n|![][research]|[Evasion attacks against machine learning at test time](https://pralab.diee.unica.it/sites/default/files/Biggio13-ecml.pdf)|\n|![][research]|[Automatically Evading Classifiers A Case Study on PDF Malware Classifiers](http://evademl.org/docs/evademl.pdf)|\n|![][research]|[Looking at the Bag is not Enough to Find the Bomb: An Evasion of Structural Methods for Malicious PDF Files Detection](https://pralab.diee.unica.it/sites/default/files/maiorca_ASIACCS13.pdf)|\n|![][research]|[Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers](https://arxiv.org/abs/1707.05970)|\n|![][research]|[Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition](https://www.cs.cmu.edu/~sbhagava/papers/face-rec-ccs16.pdf)|\n|![][research]| [Fast Feature Fool: A data independent approach to universal adversarial perturbations](https://arxiv.org/abs/1707.05572v1)|\n|![][research]| [One pixel attack for fooling deep neural networks](https://arxiv.org/abs/1710.08864v1)|\n|![][research]| [Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition](https://arxiv.org/abs/1801.00349)|\n|![][research]| [RHMD: Evasion-Resilient Hardware Malware Detectors](http://www.cs.ucr.edu/~kkhas001/pubs/micro17-rhmd.pdf)|\n\n## [▲](#keywords) Poisoning\n|Type|Title|\n|---|:---|\n|![][research] ![][slides]|[Poisoning Behavioral Malware Clustering](http://pralab.diee.unica.it/en/node/1121)|\n|![][research]|[Efficient Label Contamination Attacks Against Black-Box Learning Models](https://www.ijcai.org/proceedings/2017/0551.pdf)|\n|![][research]|[Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization](https://arxiv.org/abs/1708.08689)|\n\n## [▲](#keywords) Feature selection\n|Type|Title|\n|---|:---|\n|![][research] ![][slides]|[Is Feature Selection Secure against Training Data Poisoning?](https://pralab.diee.unica.it/en/node/1191)|\n\n## [▲](#keywords) Misc\n|Type|Title|\n|---|:---|\n|![][research] |[Can Machine Learning Be Secure?](https://people.eecs.berkeley.edu/~adj/publications/paper-files/asiaccs06.pdf)|\n|![][research]|[On The Integrity Of Deep Learning Systems In Adversarial Settings](https://etda.libraries.psu.edu/catalog/28680)|\n|![][research]|[Stealing Machine Learning Models via Prediction APIs](https://arxiv.org/abs/1609.02943)|\n|![][research]|[Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains](https://arxiv.org/abs/1703.07909)|\n|![][research]|[Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures](https://www.cs.cmu.edu/~mfredrik/papers/fjr2015ccs.pdf)|\n|![][research]|[A Methodology for Formalizing Model-Inversion Attacks](https://andrewxiwu.github.io/public/papers/2016/WFJN16-a-methodology-for-modeling-model-inversion-attacks.pdf)|\n|![][research]|[Adversarial Attacks against Intrusion Detection Systems: Taxonomy, Solutions and Open Issues](https://pdfs.semanticscholar.org/d4e8/aed54dc4c6bed41651254a49d47885648142.pdf)|\n|![][slides]|[Adversarial Data Mining for Cyber Security](https://www.utdallas.edu/~muratk/CCS-tutorial.pdf)|\n|![][research]| [High Dimensional Spaces, Deep Learning and Adversarial Examples](https://arxiv.org/abs/1801.00634)|\n|![][research]| [Neural Networks in Adversarial Setting and Ill-Conditioned Weight Space](https://arxiv.org/abs/1801.00905)|\n|![][web]| [Adversarial Machines](https://medium.com/@samim/adversarial-machines-998d8362e996)|\n|![][research]| [Adversarial Task Allocation](https://arxiv.org/abs/1709.00358)|\n|![][research]| [Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks](https://arxiv.org/abs/1701.04143)|\n|![][research]| [Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning](https://arxiv.org/abs/1712.03141)|\n|![][research]| [Adversarial Robustness: Softmax versus Openmax](https://arxiv.org/abs/1708.01697)|\n|![][video]| [DEF CON 25 - Hyrum Anderson - Evading next gen AV using AI](https://youtu.be/FGCle6T0Jpc)|\n|![][web]| [Adversarial Learning for Good: My Talk at #34c3 on Deep Learning Blindspots](http://blog.kjamistan.com/adversarial-learning-for-good-my-talk-at-34c3-on-deep-learning-blindspots/)|\n|![][research]| [Universal adversarial perturbations](https://arxiv.org/abs/1610.08401)|\n|![][other]| [Camouflage from face detection - CV Dazzle](https://www.cvdazzle.com/)|\n\n## [▲](#keywords) Code\n|Type|Title|\n|---|:---|\n|![][code]|[CleverHans - Python library to benchmark machine learning systems vulnerability to adversarial examples](https://github.com/tensorflow/cleverhans)|\n|![][code]|[Model extraction attacks on Machine-Learning-as-a-Service platforms](https://github.com/ftramer/Steal-ML)|\n|![][code]|[Foolbox - Python toolbox to create adversarial examples](https://github.com/bethgelab/foolbox)|\n|![][code]|[Adversarial Machine Learning Library(Ad-lib)](https://github.com/vu-aml/adlib)|\n|![][code]|[Deep-pwning](https://github.com/cchio/deep-pwning)|\n|![][code]|[DeepFool](https://github.com/lts4/deepfool)|\n|![][code]|[Universal adversarial perturbations](https://github.com/LTS4/universal)|\n|![][code]|[Malware Env for OpenAI Gym](https://github.com/endgameinc/gym-malware)|\n|![][code]|[Exploring the Space of Adversarial Images](https://github.com/tabacof/adversarial)|\n|![][code]|[StringSifter - A machine learning tool that ranks strings based on their relevance for malware analysis](https://github.com/fireeye/stringsifter)|\n\n## [▲](#keywords) Links\n|Type|Title|\n|---|:---|\n|![][web]|[EvadeML - Machine Learning in the Presence of Adversaries](http://evademl.org/)|\n|![][web]|[Adversarial Machine Learning - PRA Lab](https://pralab.diee.unica.it/en/AdversarialMachineLearning)|\n|![][web]|[Adversarial Examples and their implications](https://hackernoon.com/the-implications-of-adversarial-examples-deep-learning-bits-3-4086108287c7)|\n\n","funding_links":[],"categories":["Applications","Other Lists","Others","Other awesome AI lists","[↑](#table-of-contents)Related Awesome Lists \u003ca name=\"related-awesome-lists\"\u003e\u003c/a\u003e","\u003ca id=\"8c5a692b5d26527ef346687e047c5c21\"\u003e\u003c/a\u003e收集","Unicorn","Related Awesome Lists"],"sub_categories":["Other Applications","Startup Blogs \u003ca name=\"startup-blogs\"\u003e\u003c/a\u003e","TeX Lists"],"project_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FDeepSpaceHarbor%2FAwesome-AI-Security","html_url":"https://awesome.ecosyste.ms/projects/github.com%2FDeepSpaceHarbor%2FAwesome-AI-Security","lists_url":"https://awesome.ecosyste.ms/api/v1/projects/github.com%2FDeepSpaceHarbor%2FAwesome-AI-Security/lists"}