awesome-safety-critical-ai
A curated list of references on the role of AI in safety-critical systems ⚠️
https://github.com/jgalego/awesome-safety-critical-ai
<a id="articles"></a>π Articles
- Ukraine's Future Vision and Current Capabilities for Waging AI-Enabled Autonomous Warfare
- Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety
- Engineering problems in machine learning systems
- Safety-critical computer vision: an empirical survey of adversarial evasion attacks and defenses on computer vision systems
- Artificial Intelligence for Safety-Critical Systems in Industrial and Transportation Domains: A Survey
- White Paper Machine Learning in Certified Systems
- Artificial intelligence in health care: accountability and safety
- Trustworthy Artificial Intelligence in Medical Imaging
- A Goal-Directed Dialogue System for Assistance in Safety-Critical Application
- The Increasing Risks of Risk Assessment: On the Rise of Artificial Intelligence and Non-Determinism in Safety-Critical Systems
- AI-supported estimation of safety critical wind shear-induced aircraft go-around events utilizing pilot reports
- Trustworthy AI: From Principles to Practices
- Understanding and Identifying Challenges in Design of Safety-Critical AI Systems
- Architectural Patterns for Integrating AI Technology into Safety-Critical System
- Challenges of Machine Learning Applied to Safety-Critical Cyber-Physical Systems
- Assurance Argument Patterns and Processes for Machine Learning in Safety-Related Systems
- Collaborative Intelligence for Safety-Critical Industries: A Literature Review
- Trustworthy AI - Part I, [II](https://www.semanticscholar.org/paper/Trustworthy-AI-Part-II-Mariani-Rossi/9f354b3a88e6d6512d22ec152e6c6131a1e44cab) and [III](https://www.semanticscholar.org/paper/Trustworthy-AI-Part-III-Mariani-Rossi/ff446b46c5b9b4c0d18849d479fe5645f6182a36)
- Lessons From Red Teaming 100 Generative AI Products
- Engineering Dependable AI Systems
- Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review
- Requirements Engineering Challenges in Building AI-Based Complex Systems
- Quantification of the Impact of Random Hardware Faults on Safety-Critical AI Applications: CNN-Based Traffic Sign Recognition Case Study
- Resilience of Deep Learning applications: a systematic literature review of analysis and hardening techniques
- Where AI Assurance Might Go Wrong: Initial lessons from engineering of critical systems
- What's your ML test score? A rubric for ML production systems
- Addressing uncertainty in the safety assurance of machine-learning
- Rethinking the Maturity of Artificial Intelligence in Safety-Critical Settings
- Output range analysis for deep feedforward neural networks
- SoK: Security and Privacy in Machine Learning
- AI Safety for Physical Infrastructures: A Collaborative and Interdisciplinary Approach
- Machine learning safety: An overview
- The role of AI in detecting and mitigating human errors in safety-critical industries: A review
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems
- Formal Specification and Verification of Autonomous Robotic Systems: A Survey
- Effective Mitigations for Systemic Risks from General-Purpose AI
- Artificial intelligence in safety-critical systems: a systematic review
- A corroborative approach to verification and validation of human-robot teams
- Holistic Safety and Responsibility Evaluations of Advanced AI Models
- A Survey on Failure Analysis and Fault Injection in AI Systems
- Testing and verification of neural-network-based safety-critical control software: A systematic literature review
- Machine Learning Testing: Survey, Landscapes and Horizons
- Automated Verification of Neural Networks: Advances, Challenges and Perspectives
- Large-scale machine learning systems in real-world industrial settings: A review of challenges and solutions
- Hidden Technical Debt in Machine Learning Systems
- Towards Verified Artificial Intelligence
- Expert-in-the-loop Systems Towards Safety-critical Machine Learning Technology in Wildfire Intelligence
- Neural Bridge Sampling for Evaluating Safety-Critical Autonomous Systems
- Formal-LLM: Integrating Formal Language and Natural Language for Controllable LLM-based Agents
- Understanding and Avoiding AI Failures: A Practical Guide
- Machine Learning and Software Product Assurance: Bridging the Gap
- The Fusion of Large Language Models and Formal Methods for Trustworthy AI Agents: A Roadmap
- The Brittleness of AI-Generated Image Watermarking Techniques: Examining Their Robustness Against Visual Paraphrasing Attacks
- AI at work – Mitigating safety and discriminatory risk with technical standards
- Can Large Language Models Transform Natural Language Intent into Formal Method Postconditions?
- Assurance for Autonomy – JPL's past research, lessons learned, and future directions
- Ignore This Title and HackAPrompt: Exposing Systemic Vulnerabilities of LLMs through a Global Scale Prompt Hacking Competition
- The Prompt Report: A Systematic Survey of Prompt Engineering Techniques
- Datasheets for Datasets
- Model cards for model reporting
- Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI
- A Review of Formal Methods applied to Machine Learning
- How to Certify Machine Learning Based Safety-critical Systems? A Systematic Literature Review
<a id="books"></a>π Books
- Machine Learning Safety
- Artificial Intelligence for Safety and Reliability Engineering: Methods, Applications, and Challenges
- Interpretable Machine Learning: A Guide for Making Black Box Models Explainable
- Trust in Machine Learning
- Reliable Machine Learning: Applying SRE Principles to ML in Production
- Designing Machine Learning Systems: An Iterative Process for Production-Ready Applications
- Machines We Trust: Perspectives on Dependable AI
- Adversarial Machine Learning
- Software for Dependable Systems: Sufficient Evidence?
-
<a id="standards"></a>π Standards
-
Generic
- SAE G-34
- ISO/IEC 38507:2022
- ISO/IEC 42001:2023
- ANSI/UL 4600
- IEEE 7009-2024 - Safe Design of Autonomous and Semi-Autonomous Systems
- ISO/IEC 23053:2022
- ISO/IEC 23894:2023
- ISO/IEC JTC 1/SC 42
- NIST AI 100-1
Coding
<a id="blogs"></a>βοΈ Blogs
- When Doctors With AI Are Outperformed by AI Alone
- Designing Effective Policies for Safety-Critical AI
- Verifying and validating AI in safety-critical systems
- Unpacking Human-AI Interaction in Safety-Critical Industries: A Systematic Literature Review
- Breaking things is easy
- Building safe artificial intelligence: specification, robustness, and assurance
- Can We Trust AI in Safety Critical Systems?
- The impact of AI/ML on qualifying safety-critical software
- Part 2: Reflections On AI (Historical Safety Critical Systems)
- Artificial Intelligence, Critical Systems, and the Control Problem
- How is AI being used in Aviation?
- The Road to AI Certification: The importance of Verification and Validation in AI
- The Surprising Brittleness of AI
- Safety In Critical AI Systems
- The risks and benefits of AI translations in safety-critical industries
- Artificial Intelligence in Safety-Critical Systems
- AI Governance Mega-map: Safe, Responsible AI and System, Data & Model Lifecycle
- AI Red Teaming: Securing Unpredictable Systems
- What is AI Red Teaming?
- The Expanding Role of Red Teaming in Defending AI Systems
<a id="miscellaneous"></a>πΎ Miscellaneous
-
Bleeding Edge βοΈ
- DHS AI
- NIST
- NVIDIA
- CO/AI - actionable resources & strategies for the AI era
- CISA's Roadmap for Artificial Intelligence
- Google's Responsible Generative AI Toolkit
- Hacker News on The Best Language for Safety-Critical Software
- MITRE ATLAS to navigate threats to AI systems through real-world insights
- OWASP's Top 10 LLM Applications & Generative AI
- Paul Niquette's Software Does Not Fail essay
- RobustML - community-run hub for learning about robust ML
- SEBoK Verification and Validation of Systems in Which AI is a Key Element
- StackOverflow discussion on Python coding standards for Safety Critical applications
- Deloitte
- IBM
- Microsoft
- AI Incident Database
- AI Safety
- AI Safety Atlas
- DARPA's Assured Autonomy Tools Portal
- Awful AI
- Avid - open-source, extensible knowledge base of AI failures
- Data Cards Playbook
- ECSS's Space engineering – Machine learning qualification handbook
- ML Safety
- AI Safety Landscape
- AI Safety Quest - connect with like-minded people and find projects that are a good fit for their skills
- AI Safety Support - field-building project working to reduce the likelihood of existential risk from AI by providing resources, networking opportunities and support to early-career, independent and transitioning researchers
- AI Snake Oil
- MLSecOps
<a id="roadmaps"></a>🛣️ Roadmaps
- Roadmaps for AI Integration in the Rail Sector
- Roadmap for Artificial Intelligence Safety Assurance
- Roadmap for Artificial Intelligence - a whole-of-agency plan aligned with national AI strategy
- Artificial Intelligence Roadmap - a human-centric approach to AI in aviation
<a id="reports"></a>Reports
- State of AI Agents
- Superagency in the workplace: Empowering people to unlock AI's full potential
- State of AI Report 2024
- The Flight to Safety-Critical AI: Lessons in AI Safety from the Aviation Industry
- AI Safety Index 2024
- Responsible AI Progress Report 2025
- International AI Safety Report 2025
- Responsible AI Transparency Report 2024
- Examining Proposed Uses of LLMs to Produce or Assess Assurance Arguments
- US Responsible AI Survey
<a id="tools"></a>π οΈ Tools
-
Adversarial Attacks
- `bethgelab/foolbox`
- `Trusted-AI/adversarial-robustness-toolbox` - Python library for ML security covering evasion, poisoning, extraction and inference attacks, for red and blue teams (see the sketch after this list)
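The toolkits above share a common workflow: wrap a trained model in a framework-agnostic estimator, generate adversarial examples against it, and compare clean vs. adversarial accuracy. A minimal sketch using `adversarial-robustness-toolbox` (the model, data, and attack budget below are illustrative placeholders, not prescriptions from this list):

```python
# Hypothetical FGSM robustness check with ART; model and data are stand-ins.
import numpy as np
import torch.nn as nn
from art.attacks.evasion import FastGradientMethod
from art.estimators.classification import PyTorchClassifier

net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
classifier = PyTorchClassifier(
    model=net,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x_test = np.random.rand(16, 1, 28, 28).astype(np.float32)  # stand-in inputs
y_test = np.random.randint(0, 10, size=16)

attack = FastGradientMethod(estimator=classifier, eps=0.1)  # L-inf budget
x_adv = attack.generate(x=x_test)

clean_acc = (classifier.predict(x_test).argmax(axis=1) == y_test).mean()
adv_acc = (classifier.predict(x_adv).argmax(axis=1) == y_test).mean()
print(f"clean: {clean_acc:.2%}  adversarial: {adv_acc:.2%}")
```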
Data Management
- `cleanlab/cleanlab` - data-centric AI package for data quality and ML with messy, real-world data and labels
- `facebook/Ax` - general-purpose platform for understanding, managing, deploying, and automating adaptive experiments
- `great-expectations/great_expectations`
- `iterative/dvc`
- `pydantic/pydantic`
- `tensorflow/data-validation`
- `unionai-oss/pandera` - dataframe validation library (see the sketch below)
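A recurring pattern across these libraries is declaring expectations about the data up front and failing loudly when they are violated. A minimal sketch with `pandera` (the column names and bounds are hypothetical):

```python
# Hypothetical schema for a sensor-log dataframe; names/bounds are illustrative.
import pandas as pd
import pandera as pa

schema = pa.DataFrameSchema(
    {
        "sensor_id": pa.Column(str, nullable=False),
        "temperature_c": pa.Column(float, pa.Check.in_range(-40.0, 125.0)),
    }
)

df = pd.DataFrame({"sensor_id": ["s1", "s2"], "temperature_c": [21.5, 300.0]})

try:
    schema.validate(df, lazy=True)  # lazy=True collects every failure at once
except pa.errors.SchemaErrors as err:
    print(err.failure_cases)  # tabular report of each failed check
```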
Model Evaluation
- `confident-ai/deepeval` - easy-to-use, open-source LLM evaluation framework for evaluating and testing LLM systems
- `RobustBench/robustbench` - standardized adversarial robustness benchmark (see the sketch below)
- `trust-ai/SafeBench` - benchmarking platform for evaluating autonomous vehicles in safety-critical scenarios
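As an illustration of the benchmark-style entries above, a minimal `robustbench` sketch that pulls a leaderboard model and scores it on a few CIFAR-10 samples (weights and data are downloaded on first use; the model name below is the leaderboard's standard-training baseline):

```python
# Load a benchmarked model and measure clean accuracy on a small sample.
import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model

model = load_model(model_name="Standard", dataset="cifar10", threat_model="Linf")

x_test, y_test = load_cifar10(n_examples=64)
with torch.no_grad():
    preds = model(x_test).argmax(dim=1)
print(f"clean accuracy: {(preds == y_test).float().mean():.2%}")
```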
Oldies 🕰️
Bleeding Edge
- `langgenius/dify` - open-source LLM app development platform combining agentic AI workflow, RAG pipeline, agent capabilities, model management, observability features and more, letting you quickly go from prototype to production
- `latitude-dev/latitude-llm` - open-source prompt engineering platform to build, evaluate, and refine your prompts with AI
- `agno-agi/agno` - lightweight framework for building multi-modal agents
- `Arize-ai/phoenix` - open-source AI observability platform designed for experimentation, evaluation, and troubleshooting
- `BerriAI/litellm`
- `browser-use/browser-use`
- `Cinnamon/kotaemon` - open-source RAG-based tool for chatting with your documents
- `ComposioHQ/composio` - equips AI agents and LLMs with high-quality integrations via function calling
- `deepset-ai/haystack` - AI orchestration framework for building production-ready LLM applications
- `dottxt-ai/outlines`
- `exo-explore/exo`
- `FlowiseAI/Flowise`
- `groq/groq-python`
- `guidance-ai/guidance`
- `h2oai/h2o-llmstudio` - framework and no-code GUI for fine-tuning LLMs
- `hiyouga/LLaMA-Factory` - unified, efficient fine-tuning of 100+ LLMs and VLMs
- `instructor-ai/instructor` - structured LLM outputs (see the sketch after this list)
- `keephq/keep` - open-source AIOps and alert management platform
- `khoj-ai/khoj` - self-hostable AI second brain
- `ItzCrazyKns/Perplexica` - AI-powered search engine and open-source alternative to Perplexity AI
- `langfuse/langfuse`
- `run-llama/llama_index` - framework for building LLM-powered agents over your data
- `stanfordnlp/dspy` - framework for programming - not prompting - language models
- `topoteretes/cognee`
- `unitaryai/detoxify`
- `unslothai/unsloth` - fine-tune DeepSeek-R1 and reasoning LLMs 2x faster with 70% less memory! 🦥
- `microsoft/data-formulator`
- `microsoft/prompty`
- `Mintplex-Labs/anything-llm` - all-in-one Desktop & Docker AI application with built-in RAG, AI agents, no-code agent builder, and more
- `ollama/ollama` - get up and running with DeepSeek-R1, Phi-4, Gemma 2, and other large language models
- `promptfoo/promptfoo` - developer-friendly local tool for testing LLM applications
- `ScrapeGraphAI/Scrapegraph-ai`
- `Giskard-AI/giskard`
- `DS4SD/docling`
- `eth-sri/lmql`
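Several of the libraries above (`dottxt-ai/outlines`, `guidance-ai/guidance`, `instructor-ai/instructor`) target the same failure mode: free-form LLM text that downstream code cannot trust. A hedged sketch with `instructor` and `pydantic`, assuming instructor's OpenAI integration and an `OPENAI_API_KEY` in the environment; the schema, prompt, and model name are illustrative:

```python
# Hedged sketch: schema-validated LLM output via instructor + pydantic.
import instructor
from openai import OpenAI
from pydantic import BaseModel, Field


class HazardReport(BaseModel):  # hypothetical schema for illustration
    system: str
    severity: int = Field(ge=1, le=5)  # enforced by validation, not hope
    summary: str


client = instructor.from_openai(OpenAI())

report = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=HazardReport,  # output is parsed and validated
    messages=[{"role": "user", "content": "Summarize: brake sensor dropout at 80 km/h"}],
)
print(report.model_dump())
```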
Model Fairness & Privacy
Model Interpretability
Model Lifecycle
- `aimhubio/aim` - easy-to-use and supercharged open-source experiment tracker
- `comet-ml/opik` - open-source platform for evaluating, testing and monitoring LLM applications
- `evidentlyai/evidently` - open-source ML and LLM observability framework
- `IDSIA/sacred`
- `mlflow/mlflow` - open-source platform for the ML lifecycle (see the sketch below)
- `wandb/wandb` - fully-featured AI developer platform
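In safety-critical settings an experiment tracker doubles as an audit trail: each run's parameters, metrics, and artifacts are recorded for later review. A minimal `mlflow` sketch (the experiment name, values, and artifact are illustrative):

```python
# Hypothetical tracked run; values are placeholders for real training output.
import mlflow

mlflow.set_experiment("sign-recognition-safety-case")

with mlflow.start_run(run_name="baseline-cnn"):
    mlflow.log_param("epsilon", 0.03)
    mlflow.log_param("training_seed", 42)
    mlflow.log_metric("clean_accuracy", 0.981)
    mlflow.log_metric("adversarial_accuracy", 0.774)
    mlflow.log_artifact("model_card.md")  # assumes this file exists locally
```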
Model Security
Model Testing & Validation
- `deepchecks/deepchecks` - open-source package for validating ML models and data (see the sketch below)
- `explodinggradients/ragas` - objective metrics, intelligent test generation, and data-driven insights for LLM apps
- `pytorchfi/pytorchfi`
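A typical validation run with these packages takes a train/test split plus a fitted model, executes a battery of checks (drift, leakage, performance), and emits a shareable report. A minimal `deepchecks` sketch on a toy dataset (the dataset and model are illustrative only):

```python
# Hypothetical deepchecks validation run on a toy dataset.
from deepchecks.tabular import Dataset
from deepchecks.tabular.suites import full_suite
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True, as_frame=True)
train_X, test_X, train_y, test_y = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(train_X, train_y)

# Wrap the splits so the suite knows features, label, and categorical columns.
train_ds = Dataset(train_X.assign(target=train_y), label="target", cat_features=[])
test_ds = Dataset(test_X.assign(target=test_y), label="target", cat_features=[])

result = full_suite().run(train_dataset=train_ds, test_dataset=test_ds, model=model)
result.save_as_html("validation_report.html")  # shareable evidence artifact
```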
Miscellaneous
<a id="initiatives"></a>Initiatives
- Future of Life Institute
- Foundations of responsible data management
- Dependable, Certifiable & Explainable Artificial Intelligence for Critical Systems
- Best practices for trustworthy AI in medicine
- AI for Critical Systems Competence Center
- AI for Good
- Safety Critical AI
- Sustainable Machine Learning
- Responsible AI Institute
- Center for Responsible AI
- WASP WARA Public Safety
<a id="tldr"></a>TLDR
<a id="top-picks"></a>Editor's Choice
<a id="meta"></a>Meta
- safety-critical-systems
- Awesome LLM Apps
- Awesome Python Data Science
- Awesome MLOps
- Awesome Production ML
- Awesome Trustworthy AI - covers out-of-distribution generalization, adversarial examples, backdoor attack, model inversion attack, machine unlearning, &c.
- Awesome Responsible AI - resources related to responsible, trustworthy and human-centered AI
- Awesome Safety Critical - resources about programming practices for writing safety-critical software
- FDA Draft Guidance on AI
<a id="certifications"></a>Certifications
<a id="conferences"></a>Conferences
- 20th European Dependable Computing Conference
- Workshop on Sociotechnical AI Safety
- AI for Critical Infrastructure
- Trustworthy machine learning
- FAA Artificial Intelligence Safety Assurance: Roadmap and Technical Exchange Meetings
- AI/ML Components in Safety-Critical Aviation Systems: Selected Concepts and Underlying Principles
- Developing Standards for AI/ML Systems in Civil Aviation: Challenges and Barriers
- NFM Workshop on AI Safety
- LLMs in Production 2023
- 32nd annual Safety-Critical Systems Symposium
- AI in Production 2024
- ML:Integrity 2022
- Robust ML Workshop 2024
- Safety Critical Systems Symposium SSS'25
- South Wales Safety Groups Alliance Conference and Exhibition
<a id="courses"></a>👩‍🏫 Courses
- Machine Learning in Production - Carnegie Mellon University
- Trustworthy Machine Learning
- AI for Good Specialization
- AI Red Teaming
- Dependable AI Systems - University of Illinois Urbana-Champaign
- Limits to Prediction
- Machine Learning for Healthcare
- Responsible AI
- Robustness in Machine Learning
- Security and Privacy of Machine Learning
- Trustworthy Artificial Intelligence
- AI for Social Good
- Introduction to AI Safety
- Real-Time Mission-Critical Systems Design
- Safety Critical Systems
<a id="guidelines"></a>Guidelines
- Responsible AI Guidelines
- Ethics guidelines for trustworthy AI
- The EU AI Act
- AI Principles
- SAIF // Secure AI Framework: A practitioner's guide to navigating AI security
- Initial guidelines for the use of Generative AI tools at Harvard
- Roles and Responsibilities Framework for Artificial Intelligence in Critical Infrastructure
- Safety and Security Guidelines for Critical Infrastructure Owners and Operators
- Universal Guidelines for AI
- Responsible AI: Principles and Approach
- JSP 936: Dependable Artificial Intelligence (AI) in Defence (part 1: directive)
- Responsible AI at Stanford
- Artificial Intelligence/Machine Learning System Safety
- Guidelines for AI in Parliaments
- Guidelines for secure AI system development
<a id="working-groups"></a>👷🏼 Working Groups
<a id="videos"></a>📺 Videos
- AI Revolution Transforming Safety-Critical Systems EXPLAINED!
- AI in Safety-Critical Systems
- Incorporating Machine Learning Models into Safety-Critical Systems
- Stanford Seminar - Challenges in AI Safety: A Perspective from an Autonomous Driving Company
- Best of - AI and safety critical systems
- Integrating machine learning into safety-critical systems
- Robustness, Detectability, and Data Privacy in AI
- How Microsoft Approaches AI Red Teaming
<a id="whitepapers"></a>Whitepapers
- The Application of Artificial Intelligence in Functional Safety
- The Challenges of using AI in Critical Systems
- Dependable AI: How to use Artificial Intelligence even in critical applications?
Contributions
- Contributions are welcome! Feel free to open an [issue](https://github.com/JGalego/awesome-safety-critical-ai/issues), a [pull request](https://github.com/JGalego/awesome-safety-critical-ai/pulls) or a [discussion](https://github.com/JGalego/awesome-safety-critical-ai/discussions).
About Us
- Critical Software - a company specializing in safety- and mission-critical software
- open roles