awesome-ai-cybersecurity
Welcome to the ultimate list of resources for AI in cybersecurity. This repository aims to provide an organized collection of high-quality resources to help professionals, researchers, and enthusiasts stay updated and advance their knowledge in the field.
https://github.com/elniak/awesome-ai-cybersecurity
-
Using AI for Pentesting
-
Prediction
- DeepExploit - Fully automated penetration testing framework using machine learning. It uses reinforcement learning to improve its attack strategies over time.
- open-appsec - An open-source machine-learning security engine that preemptively and automatically prevents threats against web applications and APIs.
- SEMA
- Malware environment for OpenAI Gym - Create an AI that learns, through reinforcement learning, which functionality-preserving transformations to make on a malware sample to bypass machine-learning static-analysis malware detection.
- OpenVAS - An open-source vulnerability scanner and vulnerability management solution. AI can be used to improve the identification and prioritization of vulnerabilities based on their potential impact and likelihood of exploitation.
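Tools like DeepExploit frame exploit selection as reinforcement learning. Below is a minimal, self-contained sketch of that idea: a bandit-style learner over a simulated target. The module names and success probabilities are invented for illustration and have nothing to do with DeepExploit's actual internals.

```python
import random

# Toy "target": each action is an exploit module; the hidden success
# probabilities stand in for a real target's responses.
SUCCESS_PROB = {"exploit_a": 0.1, "exploit_b": 0.8, "exploit_c": 0.3}

def run_episode(q, epsilon=0.2, alpha=0.1, rng=random):
    """One bandit episode: pick an exploit, observe reward, update Q."""
    if rng.random() < epsilon:                  # explore a random module
        action = rng.choice(list(q))
    else:                                       # exploit the best estimate
        action = max(q, key=q.get)
    reward = 1.0 if rng.random() < SUCCESS_PROB[action] else 0.0
    q[action] += alpha * (reward - q[action])   # incremental value update
    return action, reward

def train(episodes=2000, seed=0):
    """Learn which exploit works best purely from observed outcomes."""
    rng = random.Random(seed)
    q = {a: 0.0 for a in SUCCESS_PROB}
    for _ in range(episodes):
        run_episode(q, rng=rng)
    return q
```

After training, the learner's value estimates rank exploits by observed success rate, which is the core of how an RL-driven pentesting loop prioritizes attack strategies.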
-
Prevention
- OSSEC - An open-source host-based intrusion detection system (HIDS). AI can enhance OSSEC by providing advanced anomaly detection and predictive analysis to identify potential threats before they materialize.
- Snort IDS - An open-source network IDS and IPS capable of real-time traffic analysis and packet logging. Snort can leverage AI for anomaly detection and to enhance its pattern matching algorithms for better intrusion detection.
- PANTHER - PANTHER combines advanced techniques in network protocol verification, integrating the Shadow network simulator with the Ivy formal verification tool. This framework allows for detailed examination of time properties in network protocols and identifies real-world implementation errors. It supports multiple protocols and can simulate advanced persistent threats (APTs) in network protocols.
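A flavor of the anomaly detection that could augment a HIDS like OSSEC: flag hosts whose failed-login counts deviate sharply from the baseline. This is a toy sketch over an invented log format, not OSSEC's actual rule engine or log schema.

```python
import math
from collections import Counter

def failed_login_counts(log_lines):
    """Count failed-login events per source IP.

    Assumes lines shaped like '... FAILED LOGIN from <ip>' (hypothetical
    format for illustration).
    """
    counts = Counter()
    for line in log_lines:
        if "FAILED LOGIN" in line:
            counts[line.split()[-1]] += 1
    return counts

def zscore_outliers(counts, threshold=2.0):
    """Flag IPs whose failure count deviates strongly from the mean."""
    values = list(counts.values())
    mean = sum(values) / len(values)
    std = math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))
    if std == 0:
        return []
    return [ip for ip, v in counts.items() if (v - mean) / std > threshold]
```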
-
Response
- Metasploit - A tool for developing and executing exploit code against a remote target machine. AI can be used to automate the selection of exploits and optimize the attack vectors based on target vulnerabilities.
- Cortex - A powerful and flexible observable analysis and active response engine. AI can be used in Cortex to automate the analysis of observables and enhance threat detection capabilities.
- PentestGPT - An LLM-assisted penetration-testing tool that helps security teams scan, exploit, and analyze web applications, networks, and cloud environments.
-
Monitoring/Scanning
- Burp Suite - A leading range of cybersecurity tools, brought to you by PortSwigger. Burp Suite can integrate AI to automate vulnerability detection and improve the efficiency of web application security testing.
- Nikto - An open-source web server scanner which performs comprehensive tests against web servers for multiple items. AI can help Nikto by automating the identification of complex vulnerabilities and enhancing detection accuracy.
- MISP - Open source threat intelligence platform for gathering, sharing, storing, and correlating Indicators of Compromise (IoCs). AI can enhance the efficiency of threat detection and response by automating data analysis and correlation.
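The IoC correlation that MISP-style platforms automate reduces, at its simplest, to matching event fields against an indicator feed. A minimal sketch; the event field names and feed shape here are hypothetical, not MISP's actual data model.

```python
def correlate_iocs(events, iocs):
    """Return events referencing any known indicator of compromise.

    `events` are dicts with 'src', 'dst', and 'domain' fields; `iocs` is a
    set of known-bad IPs/domains, as might be exported from a threat feed.
    """
    hits = []
    for event in events:
        matched = iocs & {event.get("src"), event.get("dst"), event.get("domain")}
        if matched:
            hits.append((event, sorted(matched)))
    return hits
```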
-
Tutorials and Guides
- IBM Cybersecurity Analyst
- AI infosec - first strikes, zero-day markets, hardware supply chains, adoption barriers
- AI Safety in a World of Vulnerable Machine Learning Systems
- Review - machine learning techniques applied to cybersecurity
- Cybersecurity data science - an overview from machine learning perspective
-
Detection
- MARK - The multi-agent ranking framework (MARK) aims to provide all the building blocks required to build large-scale detection and ranking systems. It includes distributed storage suited for BigData applications, a web-based visualization and management interface, a distributed execution framework for detection algorithms, and an easy-to-configure triggering mechanism. This allows data scientists to focus on developing effective detection algorithms.
- Zeek - A powerful network analysis framework focused on security monitoring. AI can be integrated to analyze network traffic patterns and detect anomalies indicative of security threats.
- AIEngine - Next-generation interactive/programmable packet inspection engine with IDS functionality. AIEngine uses machine learning to improve packet inspection and anomaly detection, adapting to new threats over time.
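A detection algorithm of the kind a framework like MARK would distribute can be as simple as scoring each observation against an exponentially weighted moving average. This is a minimal illustrative detector, not part of any of the frameworks above.

```python
class EwmaDetector:
    """Score a metric stream by deviation from a running average.

    Lightweight enough that a ranking framework could run one instance
    per monitored entity and rank entities by their recent scores.
    """

    def __init__(self, alpha=0.1):
        self.alpha = alpha   # smoothing factor: higher = faster adaptation
        self.mean = None

    def score(self, x):
        if self.mean is None:            # first observation sets the baseline
            self.mean = x
            return 0.0
        deviation = abs(x - self.mean) / (abs(self.mean) + 1e-9)
        self.mean += self.alpha * (x - self.mean)
        return deviation
```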
-
Securing AI SaaS
-
Network Protection
- A Survey of Network Anomaly Detection Techniques - Discusses various techniques and methods for detecting anomalies in network traffic.
- Shallow and Deep Networks Intrusion Detection System - A Taxonomy and Survey - A taxonomy and survey of shallow and deep learning techniques for intrusion detection.
- A Taxonomy and Survey of Intrusion Detection System Design Techniques, Network Threats and Datasets - An in-depth review of IDS design techniques and relevant datasets.
- Machine Learning Techniques for Intrusion Detection - A comprehensive survey on various ML techniques used for intrusion detection.
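One concrete network-anomaly signature discussed across these surveys is a single source touching many distinct destination ports in a short window, i.e. a port scan. A sketch over hypothetical flow tuples:

```python
from collections import defaultdict

def detect_port_scans(flows, port_threshold=20):
    """Flag source IPs that touch an unusually large number of distinct
    destination ports, a classic scan signature.

    `flows` is an iterable of (src_ip, dst_port) pairs, as might be
    extracted from NetFlow or a packet capture.
    """
    ports_by_src = defaultdict(set)
    for src, dst_port in flows:
        ports_by_src[src].add(dst_port)
    return {src for src, ports in ports_by_src.items()
            if len(ports) >= port_threshold}
```

An ML-based IDS generalizes this idea: instead of one hand-set threshold, it learns a decision boundary over many such per-source features.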
-
Endpoint Protection
- Deep Learning at the Shallow End - Malware Classification for Non-Domain Experts - Discusses deep learning techniques for malware classification.
- Malware Detection by Eating a Whole EXE - Presents a method for detecting malware by analyzing entire executable files.
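Before end-to-end models that "eat a whole EXE," classical malware classifiers used handcrafted features of the raw bytes. A sketch of two standard ones, the byte histogram and Shannon entropy (high entropy often indicates packed or encrypted sections); this illustrates the feature idea, not either paper's model.

```python
import math
from collections import Counter

def byte_histogram(data: bytes):
    """Normalized 256-bin byte-frequency histogram of a binary blob."""
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def shannon_entropy(data: bytes):
    """Entropy in bits per byte; uniform random bytes score 8.0."""
    return -sum(p * math.log2(p) for p in byte_histogram(data) if p > 0)
```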
-
Application Security
- Adaptively Detecting Malicious Queries in Web Attacks - Proposes methods for detecting malicious web queries.
- garak - NVIDIA LLM vulnerability scanner.
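Detecting malicious web queries often starts from signature scoring before any learned model is involved. A toy sketch; the patterns below are illustrative, not a production ruleset, and a trained classifier (as in the paper above) would replace them with learned features.

```python
import re

# Hypothetical signature set covering three common web-attack classes.
SUSPICIOUS = [
    r"(?i)union\s+select",   # SQL injection
    r"(?i)<script",          # cross-site scripting
    r"\.\./",                # path traversal
]

def query_risk(query: str) -> int:
    """Count how many attack patterns a web query matches."""
    return sum(1 for pat in SUSPICIOUS if re.search(pat, query))
```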
-
User Behavior Analysis
- Detecting Anomalous User Behavior Using an Extended Isolation Forest Algorithm - Discusses an extended isolation forest algorithm for detecting anomalous user behavior.
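The isolation-forest intuition behind the paper above: anomalies sit in sparse regions, so random splits isolate them in fewer steps. Here is a miniature single-feature version of the classic algorithm (not the extended variant the paper proposes), for illustration only.

```python
import random

def isolation_depth(point, sample, depth=0, max_depth=15, rng=random):
    """Randomly split the value range until `point` is isolated.

    Anomalous points land in sparse regions and are isolated quickly,
    so a *small* depth means *more* anomalous.
    """
    if len(sample) <= 1 or depth >= max_depth:
        return depth
    lo, hi = min(sample), max(sample)
    if lo == hi:
        return depth
    split = rng.uniform(lo, hi)
    # Keep only the values on the same side of the split as `point`.
    side = [x for x in sample if (x < split) == (point < split)]
    return isolation_depth(point, side, depth + 1, max_depth, rng)

def anomaly_score(point, data, trees=100, seed=0):
    """Average isolation depth over many random trees."""
    rng = random.Random(seed)
    return sum(isolation_depth(point, data, rng=rng) for _ in range(trees)) / trees
```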
-
Process Behavior (Fraud Detection)
- A Survey of Credit Card Fraud Detection Techniques - A survey on various techniques for credit card fraud detection.
- Anomaly Detection in Industrial Control Systems Using CNNs - Discusses the use of convolutional neural networks for anomaly detection in industrial control systems.
-
Books & Survey Papers
- Machine Learning and Security - Discusses the application of machine learning in security.
- Malware Data Science - Covers data science techniques for malware analysis.
- AI for Cybersecurity - A Handbook of Use Cases - A handbook on various use cases of AI in cybersecurity.
-
Offensive Tools and Frameworks
- DeepFool - A method to fool deep neural networks.
- Counterfit - An automation layer for assessing the security of machine learning systems.
- Snaike-MLflow - A suite of red team tools for MLflow.
- HackGPT - A tool leveraging ChatGPT for hacking purposes.
- HackingBuddyGPT - An automated penetration tester.
- Charcuterie - Code execution techniques for machine learning libraries.
- Deep-pwning - A lightweight framework for evaluating machine learning model robustness against adversarial attacks.
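The adversarial-example crafting these tools automate can be shown on a hand-rolled logistic classifier with the fast gradient sign method: for a linear logit, the input gradient's sign is just the sign of the weights, so stepping against it lowers the score. The weights below are purely illustrative; this sketches the technique, not any tool's API.

```python
import math

# Fixed weights of a toy logistic "malicious" classifier over 3 features.
W = [2.0, -1.5, 0.5]
B = -0.2

def predict(x):
    """Probability that `x` is classified malicious (class 1)."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1 / (1 + math.exp(-z))

def fgsm(x, epsilon=0.5):
    """Fast gradient sign method step that pushes the score toward benign."""
    return [xi - epsilon * (1 if w > 0 else -1) for xi, w in zip(x, W)]
```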
-
Adversarial Tools
- Exploring the Space of Adversarial Images - A tool to experiment with adversarial images.
- Adversarial Machine Learning Library (Ad-lib) - A game-theoretic library for adversarial machine learning.
- EasyEdit - A tool to modify the ground truths of large language models (LLMs).
-
Poisoning Tools
- BadDiffusion - Official repository to reproduce the paper "How to Backdoor Diffusion Models?" published at CVPR 2023.
-
Privacy Tools
- PrivacyRaven - A privacy testing library for deep learning systems.
-
Defensive Tools and Frameworks
- langkit - A toolkit for monitoring language models and detecting attacks.
- Python Differential Privacy Library - A library for implementing differential privacy.
- Diffprivlib - IBM's differential privacy library.
- PLOT4ai - A threat modeling library for building responsible AI.
- SyMPC - A secure multiparty computation library.
- PyVertical - Privacy-preserving vertical federated learning.
- Cloaked AI - Open source property-preserving encryption for vector embeddings.
- TenSEAL - A library for performing homomorphic encryption operations on tensors.
- CircleGuardBench - A full-fledged benchmark for evaluating protection capabilities of AI models.
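The primitive underlying differential-privacy libraries such as Diffprivlib is the Laplace mechanism: add noise scaled to sensitivity/epsilon before releasing an aggregate. A from-scratch sketch; real libraries handle sampling, clipping, and privacy-budget accounting far more carefully.

```python
import math
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) by inverse-transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1, u) * math.log(1 - 2 * abs(u))

def private_count(true_count, epsilon, sensitivity=1.0, seed=None):
    """Release a count with epsilon-differential privacy.

    For a counting query the sensitivity is 1: one individual can change
    the count by at most 1, so noise has scale sensitivity / epsilon.
    """
    rng = random.Random(seed)
    return true_count + laplace_noise(sensitivity / epsilon, rng)
```

Smaller epsilon means stronger privacy and larger noise; the released value is unbiased but randomized.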
-
Tools
- IBM Watson - Tools and solutions for securing AI applications. Watson uses AI to analyze vast amounts of security data and identify potential threats, providing actionable insights for cybersecurity professionals.
- Azure Security Center - Comprehensive security management system for cloud environments. AI and machine learning are used to identify threats and vulnerabilities in real-time.
-
Best Practices
- NIST AI RMF - A framework for managing risks associated with AI in SaaS. It provides guidelines on how to implement AI securely, focusing on risk assessment, mitigation, and governance.
-
Case Studies
- Google AI Security - Insights and case studies from Google on how to secure AI applications in the cloud.
- Microsoft AI Security - Case studies on securing AI applications in SaaS environments. These case studies demonstrate how AI can be used to enhance security and protect against evolving threats.
-
Theoretical Resources
-
Resources for Learning
- MLSecOps podcast - A podcast dedicated to the intersection of machine learning and security operations.
-
Uncategorized Useful Resources
- OWASP ML TOP 10 - The top 10 machine learning security risks identified by OWASP.
- OWASP AI Security and Privacy Guide - A guide to securing AI systems and ensuring privacy.
- ENISA Multilayer Framework for Good Cybersecurity Practices for AI - A framework for good cybersecurity practices in AI.
- The MLSecOps Top 10 - Top 10 security practices for machine learning operations.
- OWASP LLM TOP 10 - The top 10 security risks for large language models as identified by OWASP.
-
Research Papers
- Adversarial Task Allocation - Explores adversarial task allocation in machine learning systems.
- Robust Physical-World Attacks on Deep Learning Models - Examines physical-world attacks on deep learning models.
- The Space of Transferable Adversarial Examples - Discusses transferable adversarial examples in deep learning.
- High Dimensional Spaces, Deep Learning and Adversarial Examples - Discusses the challenges of adversarial examples in high-dimensional spaces.
-
Adversarial Examples and Attacks
- Robust Physical-World Attacks on Deep Learning Models - Examines physical-world attacks on deep learning models.
- RHMD: Evasion-Resilient Hardware Malware Detectors - Hardware-based malware detectors resilient to evasion.
- The Space of Transferable Adversarial Examples - Discusses transferable adversarial examples in deep learning.
- Generic Black-Box End-to-End Attack against RNNs and Other API Calls Based Malware Classifiers - End-to-end black-box attacks on RNNs and API-call-based malware classifiers.
- Vulnerability of Deep Reinforcement Learning to Policy Induction Attacks
- Can you fool AI with adversarial examples on a visual Turing test?
- Explaining and Harnessing Adversarial Examples
- Delving into Adversarial Attacks on Deep Policies
- Crafting Adversarial Input Sequences for Recurrent Neural Networks
- Practical Black-Box Attacks against Machine Learning - Practical black-box attacks on machine learning models.
- Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains - Data-driven attacks on black-box classifiers.
- Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN
- Fast Feature Fool: A Data-Independent Approach to Universal Adversarial Perturbations
- Simple Black-Box Adversarial Perturbations for Deep Networks - Simple black-box adversarial perturbations against deep networks.
- Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
- One Pixel Attack for Fooling Deep Neural Networks - Shows how single-pixel modifications can fool deep neural networks.
- FedMLSecurity: A Benchmark for Attacks and Defenses in Federated Learning and LLMs
- Jailbroken: How Does LLM Safety Training Fail?
- Bad Characters: Imperceptible NLP Attacks
- Universal and Transferable Adversarial Attacks on Aligned Language Models
- Exploring the Vulnerability of Natural Language Processing Models via Universal Adversarial Texts
- Adversarial Examples Are Not Bugs, They Are Features
- Adversarial Attacks on Tables with Entity Swap
- Here Comes the AI Worm: Unleashing Zero-click Worms that Target GenAI-Powered Applications - Zero-click worms targeting GenAI-powered applications.
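The "Bad Characters" paper shows that invisible or look-alike Unicode characters can change what a model or filter sees without changing what a human reads. A minimal demonstration against a naive keyword filter (the filter and perturbation are illustrative, far simpler than the paper's attacks):

```python
# Zero-width and homoglyph characters survive copy-paste but break
# byte-level string matching.
ZWSP = "\u200b"          # zero-width space, invisible when rendered
CYRILLIC_A = "\u0430"    # renders like Latin 'a'

def perturb(text):
    """Swap Latin 'a' for its Cyrillic look-alike and add a zero-width space."""
    return text.replace("a", CYRILLIC_A).replace(" ", " " + ZWSP, 1)

def naive_filter(text):
    """A keyword filter that the visually identical perturbed string evades."""
    return "attack" in text.lower()
```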
-
Model Extraction
-
Evasion
- Adversarial Demonstration Attacks on Large Language Models
- Adversarial Generative Nets: Neural Network Attacks on State-of-the-Art Face Recognition
- Query Strategies for Evading Convex-Inducing Classifiers - Query strategies for evading convex-inducing classifiers.
- Adversarial Prompting for Black Box Foundation Models
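The query-strategy idea from the convex-inducing-classifier paper can be sketched as a binary search from a detected sample toward a known-benign one, using only black-box label queries. This is a simplified illustration of the evasion setting, not the paper's algorithm.

```python
def evade(classify, malicious, benign, steps=40):
    """Binary-search the segment from `malicious` (classified positive) to
    `benign` (classified negative) for a near-boundary point that is still
    classified benign. Needs only label queries, and works whenever the
    benign region is convex along the segment.
    """
    lo, hi = 0.0, 1.0          # fraction of the way toward `benign`
    for _ in range(steps):
        mid = (lo + hi) / 2
        point = [m + mid * (b - m) for m, b in zip(malicious, benign)]
        if classify(point):    # still detected: move further toward benign
            lo = mid
        else:                  # evaded: try moving back toward malicious
            hi = mid
    return [m + hi * (b - m) for m, b in zip(malicious, benign)]
```

Each iteration halves the uncertainty, so roughly 40 queries pin the decision boundary down to floating-point precision along that one direction.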
-