Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-ai-cybersecurity
Welcome to the ultimate list of resources for AI in cybersecurity. This repository aims to provide an organized collection of high-quality resources to help professionals, researchers, and enthusiasts stay updated and advance their knowledge in the field.
https://github.com/elniak/awesome-ai-cybersecurity
Last synced: 2 days ago
JSON representation
Using AI for Pentesting
Prevention
Prediction
- DeepExploit - Fully automated penetration testing framework that combines machine learning with Metasploit.
- open-appsec - Open-source machine-learning security engine that preemptively and automatically prevents threats against web applications and APIs.
- OpenVAS - Open-source vulnerability scanner (a scripted-scan sketch follows this list).
- SEMA - Toolchain using symbolic execution for malware analysis.
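To show how a scanner like OpenVAS can be driven programmatically, here is a minimal sketch using the python-gvm client. The Unix socket path and the admin/password credentials are placeholder assumptions for illustration, not a recommended setup.

```python
# Minimal sketch: query a local OpenVAS/GVM daemon with python-gvm.
# Assumes gvmd is listening on its default Unix socket and that the
# 'admin'/'password' credentials exist -- both are placeholders.
from gvm.connections import UnixSocketConnection
from gvm.protocols.gmp import Gmp
from gvm.transforms import EtreeTransform

connection = UnixSocketConnection()  # default gvmd socket
with Gmp(connection, transform=EtreeTransform()) as gmp:
    gmp.authenticate('admin', 'password')  # placeholder credentials
    version = gmp.get_version()
    print(version.find('version').text)
    # List existing scan tasks and their current status.
    for task in gmp.get_tasks().findall('task'):
        print(task.find('name').text, task.find('status').text)
```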
Detection
Response
Monitoring/Scanning
Tutorials and Guides
- article
- sequence
- post
- IBM Cybersecurity Analyst
Securing AI SaaS
Best Practices
Case Studies
Tools
2. Network Protection
Tools
- Machine Learning Techniques for Intrusion Detection (a minimal anomaly-detection sketch follows this list)
- A Survey of Network Anomaly Detection Techniques
- Shallow and Deep Networks Intrusion Detection System: A Taxonomy and Survey
- A Taxonomy and Survey of Intrusion Detection System Design Techniques, Network Threats and Datasets - In-depth review of IDS design techniques and relevant datasets.
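To ground the surveys above, here is a minimal sketch of unsupervised network anomaly detection with scikit-learn's IsolationForest. The flow features (duration, bytes, packets) and the contamination rate are illustrative assumptions, not a production IDS.

```python
# Minimal sketch: flag anomalous network flows with an Isolation Forest.
# The synthetic feature set and contamination rate are illustrative --
# real IDS pipelines use far richer flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "flows": [duration_s, bytes_sent, packets]
normal = rng.normal(loc=[1.0, 5e4, 40], scale=[0.3, 1e4, 10], size=(500, 3))
scans = rng.normal(loc=[0.01, 120, 2], scale=[0.005, 40, 1], size=(10, 3))
flows = np.vstack([normal, scans])

model = IsolationForest(contamination=0.02, random_state=0).fit(flows)
labels = model.predict(flows)  # -1 = anomaly, 1 = normal
print(f"flagged {np.sum(labels == -1)} of {len(flows)} flows as anomalous")
```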
3. Endpoint Protection
4. Application Security
5. User Behavior Analysis
6. Process Behavior (Fraud Detection)
7. Intrusion Detection and Prevention Systems (IDS/IPS)
8. Books & Survey Papers
8.1 Books
8.2 Survey Papers
9. Offensive Tools and Frameworks
Generic Tools
Adversarial Tools
- Exploring the Space of Adversarial Images
- Adversarial Machine Learning Library (Ad-lib) - Game-theoretic library for adversarial machine learning.
- EasyEdit
Poisoning Tools
Privacy Tools
10. Defensive Tools and Frameworks
Safety and Prevention
Detection Tools
Privacy and Confidentiality
- Python Differential Privacy Library
- Diffprivlib - IBM's differential privacy library (see the sketch after this list).
- PLOT4ai
- TenSEAL
- SyMPC
- PyVertical - Privacy-preserving vertical federated learning.
- Cloaked AI - Privacy-preserving encryption for vector embeddings.
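As a taste of what these libraries provide, here is a minimal sketch of a differentially private mean with diffprivlib. The epsilon value and clipping bounds are illustrative assumptions; in practice, bounds must come from domain knowledge rather than the data itself, or the privacy guarantee is weakened.

```python
# Minimal sketch: a differentially private mean with IBM's diffprivlib.
# Epsilon and the clipping bounds below are illustrative choices.
import numpy as np
from diffprivlib.tools import mean

ages = np.array([23, 35, 41, 29, 52, 38, 46, 31])
dp_mean = mean(ages, epsilon=1.0, bounds=(18, 90))  # bounds set a priori
print(f"true mean: {ages.mean():.1f}, DP mean: {dp_mean:.1f}")
```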
11. Resources for Learning
Privacy and Confidentiality
12. Uncategorized Useful Resources
Privacy and Confidentiality
- OWASP ML TOP 10
- OWASP LLM TOP 10
- OWASP AI Security and Privacy Guide
- OWASP WrongSecrets LLM exercise
- NIST AIRC
- ENISA Multilayer Framework for Good Cybersecurity Practices for AI
- The MLSecOps Top 10
13. Research Papers
Adversarial Examples and Attacks
- High Dimensional Spaces, Deep Learning and Adversarial Examples - Adversarial examples in high-dimensional spaces.
- Adversarial Task Allocation
- Robust Physical-World Attacks on Deep Learning Models - Physical-world attacks on deep learning models.
- The Space of Transferable Adversarial Examples
- RHMD: Evasion-Resilient Hardware Malware Detectors - Hardware-based malware detectors resilient to evasion.
- Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers - Black-box attacks on RNNs and malware classifiers.
- Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
- Can you fool AI with adversarial examples on a visual Turing test?
- Explaining and Harnessing Adversarial Examples (see the FGSM sketch after this list)
- Delving into Adversarial Attacks on Deep Policies
- Crafting Adversarial Input Sequences for Recurrent Neural Networks
- Practical Black-Box Attacks against Machine Learning - Black-box attacks on machine learning models.
- Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains - Data-driven attacks on black-box classifiers.
- Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN
- Fast Feature Fool: A Data-Independent Approach to Universal Adversarial Perturbations
- Simple Black-Box Adversarial Perturbations for Deep Networks - Black-box adversarial perturbations for deep networks.
- Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
- One Pixel Attack for Fooling Deep Neural Networks - Single-pixel modifications can fool deep neural networks.
- FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs
- Jailbroken: How Does LLM Safety Training Fail?
- Bad Characters: Imperceptible NLP Attacks
- Universal and Transferable Adversarial Attacks on Aligned Language Models
- Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts
- Adversarial Examples Are Not Bugs, They Are Features
- Adversarial Attacks on Tables with Entity Swap
- Here Comes the AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications - Zero-click worms targeting GenAI-powered applications.
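Several of these papers build on the fast gradient sign method (FGSM) introduced in "Explaining and Harnessing Adversarial Examples", so a minimal PyTorch sketch may help. The model, inputs, and epsilon here are placeholder assumptions.

```python
# Minimal FGSM sketch (Goodfellow et al., "Explaining and Harnessing
# Adversarial Examples"): perturb the input along the sign of the loss
# gradient. `model` is any differentiable classifier; `eps` is the
# attack budget -- both placeholders here.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed gradient step, clamped back to the valid pixel range.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```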
Model Extraction
Evasion
- Looking at the Bag is not Enough to Find the Bomb: An Evasion of Structural Methods for Malicious PDF Files Detection
- Adversarial Demonstration Attacks on Large Language Models
- Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition
- Query Strategies for Evading Convex-Inducing Classifiers - Query-based evasion of convex-inducing classifiers (a toy query-based evasion sketch follows this list).
- Adversarial Prompting for Black Box Foundation Models
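To make the query-based setting concrete, here is a toy sketch: a greedy random search that perturbs a "malicious" sample until a black-box classifier's label flips. The synthetic data, the logistic-regression stand-in for the black box, and the step size are all illustrative assumptions, not any paper's exact algorithm.

```python
# Toy black-box evasion in the spirit of the query-strategy papers above:
# greedily perturb a malicious sample until the classifier's label flips.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 5)), rng.normal(3, 1, (100, 5))])
y = np.array([0] * 100 + [1] * 100)  # 1 = "malicious"
clf = LogisticRegression().fit(X, y)  # stands in for the black box

x = X[150].copy()  # start from a sample classified as malicious
for queries in range(1, 10_001):
    candidate = x + rng.normal(0, 0.1, size=5)
    # Keep only moves that lower the malicious-class score (label-only
    # variants exist; using scores keeps the toy short).
    if clf.predict_proba([candidate])[0, 1] < clf.predict_proba([x])[0, 1]:
        x = candidate
    if clf.predict([x])[0] == 0:
        print(f"evaded after {queries} queries")
        break
```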
Categories
Using AI for Pentesting (63), 13. Research Papers (62), 9. Offensive Tools and Frameworks (22), 10. Defensive Tools and Frameworks (21), 12. Uncategorized Useful Resources (12), Securing AI SaaS (8), 2. Network Protection (7), 8. Books & Survey Papers (7), 6. Process Behavior (Fraud Detection) (4), 3. Endpoint Protection (4), 7. Intrusion Detection and Prevention Systems (IDS/IPS) (2), 4. Application Security (2), 5. User Behavior Analysis (2), 11. Resources for Learning (2)
Sub Categories
Adversarial Examples and Attacks (50), Tutorials and Guides (40), Privacy and Confidentiality (28), Tools (23), Generic Tools (15), Evasion (9), 8.1 Books (6), Response (6), Prediction (5), Monitoring/Scanning (5), Detection (4), Adversarial Tools (4), Case Studies (4), Detection Tools (4), Prevention (3), Model Extraction (3), Safety and Prevention (3), Best Practices (2), Poisoning Tools (2), Privacy Tools (1), 8.2 Survey Papers (1)
Keywords
python (13), machine-learning (7), openai (4), cpp (4), differential-privacy (4), cryptography (4), large-language-models (3), deep-learning (3), privacy (3), chatgpt (3), dfir (3), analysis (2), openai-api (2), analyzer (2), api (2), cortex (2), managers (2), malwareanalysis (2), malware (2), cyber-threat-intelligence (2), chatgpt4 (2), chatgpt3 (2), chatgpt-python (2), chatgpt-app (2), chatgpt-api (2), chatbot (2), thehive (2), security-incidents (2), scala (2), rest (2), digital-forensics (2), engine (2), free (2), response (2), free-software (2), incident-response (2), open-source (2), observable (2), iocs (2), nlg (2), llm (2), gpt-3 (2), foundation-model (2), ai (2), nlp (2), observability (2), prompt-engineering (2), prompt-injection (2), python-wrapper (2), data-privacy (2)