awesome-machine-learning-interpretability
A curated list of awesome responsible machine learning resources.
https://github.com/jphall663/awesome-machine-learning-interpretability
Community and Official Guidance Resources
Community Frameworks and Guidance
- AI Model Registries: A Foundational Tool for AI Governance, September 2024
- The Alan Turing Institute, AI Standards Hub
- 8 Principles of Responsible ML
- A Brief Overview of AI Governance for Responsible Machine Learning Systems
- AI Verify Foundation, Generative AI: Implications for Trust and Governance
- AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
- Andreessen Horowitz (a16z) AI Canon
- Anthropic's Responsible Scaling Policy
- AuditBoard: 5 AI Auditing Frameworks to Encourage Accountability
- Auditing machine learning algorithms: A white paper for public auditors
- AWS Data Privacy FAQ
- AWS Privacy Notice
- AWS, What is Data Governance?
- BIML Interactive Machine Learning Risk Framework
- November 7, 2023, The Executive Order on Safe, Secure, and Trustworthy AI: Decoding Biden’s AI Policy Roadmap
- October 2023, Decoding Intentions: Artificial Intelligence and Costly Signals
- August 11, 2023, Understanding AI Harms: An Overview
- August 1, 2023, Large Language Models (LLMs): An Explainer
- July 21, 2023, Making AI (more) Safe, Secure, and Transparent: Context and Research from CSET
- July 2023, Adding Structure to AI Harm: An Introduction to CSET's AI Harm Framework
- June 2023, The Inigo Montoya Problem for Trustworthy AI: The Use of Keywords in Policy and Research
- June 2023, A Matrix for Selecting Responsible AI Frameworks
- March 2023, Reducing the Risks of Artificial Intelligence for Military Decision Advantage
- February 2023, One Size Does Not Fit All: Assessment, Safety, and Trust for the Diverse Range of AI Products, Tools, Services, and Resources
- January 2023, Forecasting Potential Misuses of Language Models for Disinformation Campaigns—and How to Reduce Risk
- October 2022, A Common Language for Responsible AI: Evolving and Defining DOD Terms for Implementation
- December 2021, AI and the Future of Disinformation Campaigns: Part 1: The RICHDATA Framework
- December 2021, AI and the Future of Disinformation Campaigns: Part 2: A Threat Model
- ICT Institute: A checklist for auditing AI systems
- July 2021, AI Accidents: An Emerging Threat: What Could Happen and What to Do
- May 2021, Truth, Lies, and Automation: How Language Models Could Change Disinformation
- March 2021, Key Concepts in AI Safety: An Overview
- February 2021, Trusted Partners: Human-Machine Teaming and the Future of Military AI
- Censius: AI Audit
- Crowe LLP: Internal auditor's AI safety checklist
- DAIR Prompt Engineering Guide
- The Data Cards Playbook
- Data Provenance Explorer
- Data & Society, AI Red-Teaming Is Not a One-Stop Solution to AI Harms: Recommendations for Using Red-Teaming for AI Accountability
- Dealing with Bias and Fairness in AI/ML/Data Science Systems
- Debugging Machine Learning Models (ICLR workshop proceedings)
- Decision Points in AI Governance
- Distill
- Evaluating LLMs is a minefield
- Extracting Training Data from ChatGPT
- FATML Principles and Best Practices
- ForHumanity Body of Knowledge (BOK)
- The Foundation Model Transparency Index
- From Principles to Practice: An interdisciplinary framework to operationalise AI ethics
- Frontier Model Forum: What is Red Teaming?
- Gage Repeatability and Reproducibility
- Georgetown University Library's Artificial Intelligence (Generative) Resources
- Data governance in the cloud - part 1 - People and processes
- Data Governance in the Cloud - part 2 - Tools
- Evaluating social and ethical risks from generative AI
- Generative AI Prohibited Use Policy
- Principles and best practices for data governance in the cloud
- Responsible AI Framework
- Responsible AI practices
- Testing and Debugging in Machine Learning
- H2O.ai Algorithms
- Haptic Networks: How to Perform an AI Audit for UK Organisations
- Hogan Lovells, The AI Act is coming: EU reaches political agreement on comprehensive regulation of artificial intelligence
- Hugging Face, The Landscape of ML Documentation Tools
- IAPP EU AI Act Cheat Sheet
- IEEE P3119 Standard for the Procurement of Artificial Intelligence and Automated Decision Systems
- IEEE Std 1012-1998 Standard for Software Verification and Validation
- Independent Audit of AI Systems
- Identifying and Eliminating CSAM in Generative ML Training Data and Models
- Identifying and Overcoming Common Data Mining Mistakes
- Infocomm Media Development Authority (Singapore), First of its kind Generative AI Evaluation Sandbox for Trusted AI by AI Verify Foundation and IMDA
- Institute of Internal Auditors: Artificial Intelligence Auditing Framework, Practical Applications, Part A, Special Edition
- ISACA: Auditing Artificial Intelligence
- ISACA: Auditing Guidelines for Artificial Intelligence
- ISACA: Capability Maturity Model Integration Resources
- Large language models, explained with a minimum of math and jargon
- Larry G. Wlosinski, April 30, 2021, Information System Contingency Planning Guidance
- Llama 2 Responsible Use Guide
- Machine Learning Attack_Cheat_Sheet
- Machine Learning Quick Reference: Algorithms
- Machine Learning Quick Reference: Best Practices
- Towards Traceability in Data Ecosystems using a Bill of Materials Model
- Microsoft AI Red Team building future of safer AI
- Microsoft Responsible AI Standard, v2
- NewsGuard AI Tracking Center
- Open Sourcing Highly Capable Foundation Models
- OpenAI Red Teaming Network
- Organization and Training of a Cyber Security Team
- Our Data Our Selves, Data Use Policy
- Partnership on AI, ABOUT ML Reference Document
- Partnership on AI, Responsible Practices for Synthetic Media: A Framework for Collective Action
- PwC's Responsible AI
- Real-World Strategies for Model Debugging
- RecoSense: Phases of an AI Data Audit – Assessing Opportunity in the Enterprise
- Red Teaming of Advanced Information Assurance Concepts
- Red Teaming Language Models to Reduce Harms: Methods, Scaling Behaviors, and Lessons Learned
- Robust ML
- Safe and Reliable Machine Learning
- SHRM Generative Artificial Intelligence (AI) Chatbot Usage Policy
- Stanford University, Responsible AI at Stanford: Enabling innovation through AI best practices
- The Rise of Generative AI and the Coming Era of Social Media Manipulation 3.0: Next-Generation Chinese Astroturfing and Coping with Ubiquitous AI
- Taskade: AI Audit PBC Request Checklist Template
- TechTarget: 9 questions to ask when auditing your AI systems
- Troubleshooting Deep Neural Networks
- Unite.AI: How to perform an AI Audit in 2023
- University of California, Berkeley, Center for Long-Term Cybersecurity, A Taxonomy of Trustworthiness for Artificial Intelligence
- University of California, Berkeley, Information Security Office, How to Write an Effective Website Privacy Statement
- Warning Signs: The Future of Privacy and Security in an Age of Machine Learning
- When Not to Trust Your Explanations
- You Created A Machine Learning Application Now Make Sure It's Secure
- PAIR Explorables: Datasets Have Worldviews
- Know Your Data
- System cards
- University of Washington Tech Policy Lab, Data Statements
- CSET Publications
- Integrity Institute Report, February 2024, On Risk Assessment and Mitigation for Algorithmic Systems
- Open Source Audit Tooling (OAT) Landscape
- Deloitte, Trust in the age of automation and Generative AI
- Oxford Commission on AI & Good Governance, AI in the Public Service: From Principles to Practice
- Center for AI and Digital Policy Reports
- The Future Society
- AI Verify Foundation, Model Governance Framework for Generative AI
- World Privacy Forum, Risky Analysis: Assessing and Improving AI Governance Tools
- IAPP, EU AI Act Compliance Matrix
- IAPP, EU AI Act Compliance Matrix - At a Glance
- EU AI Act Cheat Sheet Series 2, Prohibited AI Systems
- Berkeley Center for Long-Term Cybersecurity (CLTC), Benchmark Early and Red Team Often: A Framework for Assessing and Managing Dual-Use Hazards of AI Foundation Models (https://cltc.berkeley.edu/publication/benchmark-early-and-red-team-often-a-framework-for-assessing-and-managing-dual-use-hazards-of-ai-foundation-models/)
- Acceptable Use Policies for Foundation Models
- Access Now, Regulatory Mapping on Artificial Intelligence in Latin America: Regional AI Public Policy Report
- Adversarial ML Threat Matrix
- CSET's Harm Taxonomy for the AI Incident Database
- Demos, AI – Trustworthy By Design: How to build trust in AI systems, the institutions that create them and the communities that use them
- IAPP, Global AI Governance Law and Policy: Canada, EU, Singapore, UK and US
- Library of Congress, LC Labs AI Planning Framework
- Manifest MLBOM Wiki
- model-cards-and-datasheets
- OpenAI, Evals
- Sample AI Incident Response Checklist
- A-LIGN, ISO 42001 Requirement, NIST SP 800-218A Task, Recommendations and Considerations
- What Access Protections Do AI Companies Provide for Independent Safety Research?
- 0xk1h0 / ChatGPT "DAN" (and other "Jailbreaks")
- Azure's PyRIT
- ChatGPT_system_prompt
- DAIR Prompt Engineering Guide GitHub
- In-The-Wild Jailbreak Prompts on LLMs
- LLM Security & Privacy
- Membership Inference Attacks and Defenses on Machine Learning Models Literature
- leondz / garak
- Ada Lovelace Institute, Code and Conduct: How to Create Third-Party Auditing Regimes for AI Systems
- Closing the AI accountability gap: defining an end-to-end framework for internal algorithmic auditing
- Ravit Dotan's Projects
- Taylor & Francis, AI Policy
- Trustible, Enhancing the Effectiveness of AI Governance Committees
- Azure AI Content Safety
- Harm categories in Azure AI Content Safety
- EU AI Act Cheat Sheet Series 1, Definitions, Scope & Applicability
- EU AI Act Cheat Sheet Series 3, High-Risk AI Systems
- India AI Policy Cheat Sheet
- World Economic Forum, Responsible AI Playbook for Investors
- Future of Privacy Forum, EU AI Act: A Comprehensive Implementation & Compliance Timeline
- European Data Protection Board (EDPB), Checklist for AI Auditing
- Instruction finetuning an LLM from scratch
- EU AI Act Cheat Sheet Series 6, General-Purpose AI Models
- AI Verify Foundation
- EU AI Act Cheat Sheet Series 7, Compliance & Conformity Assessment
- Pivot to AI
- Jay Alammar, Finding the Words to Say: Hidden State Visualizations for Language Models
- Jay Alammar, Interfaces for Explaining Transformer Language Models
- Brown University, How Can We Tackle AI-Fueled Misinformation and Disinformation in Public Health?
- Trustible, Model Transparency Ratings
- Perspectives on Issues in AI Governance
- Institute for AI Policy and Strategy (IAPS), AI-Relevant Regulatory Precedents: A Systematic Search Across All Federal Agencies
- GDPR and Generative AI: A Guide for Public Sector Organizations
- AppliedAI Institute, Navigating the EU AI Act: A Process Map for making AI Systems available
- Canada AI Law & Policy Cheat Sheet
- LLM Agents can Autonomously Exploit One-day Vulnerabilities
- No, LLM Agents can not Autonomously Exploit One-day Vulnerabilities
- Berryville Institute of Machine Learning, Architectural Risk Analysis of Large Language Models (requires free account login)
- Fairly's Global AI Regulations Map
- OpenAI Cookbook, How to implement LLM guardrails (see the guardrail sketch at the end of this list)
- LLM Visualization
- EU AI Act Cheat Sheet Series 5, Requirements for Deployers
- AI Snake Oil
- EU AI Act Cheat Sheet Series 4, Requirements for Providers
- The Alan Turing Institute, Responsible Data Stewardship in Practice
- Digital Policy Alert, The Anatomy of AI Rules: A systematic comparison of AI rules across the globe
- Federation of American Scientists, A NIST Foundation To Support The Agency’s AI Mandate
- ISO/IEC 42001:2023, Information technology — Artificial intelligence — Management system
- Advancing AI responsibly
- Open Data Institute, Understanding data governance in AI: Mapping governance
- Center for Security and Emerging Technology (CSET), High Level Comparison of Legislative Perspectives on Artificial Intelligence US vs. EU
- Definitions, Scope & Applicability EU AI Act Cheat Sheet Series, Part 1
- Backpack Language Models
- The Remarkable Robustness of LLMs: Stages of Inference?
- ACL 2024 Tutorial: Vulnerabilities of Large Language Models to Adversarial Attacks
- Trustible, Is It AI? How different laws & frameworks define AI
- @dotey on X/Twitter exploring GPT prompt security and prevention measures
- coolaj86 / Chat GPT "DAN" (and other "Jailbreaks")
- Exploiting Novel GPT-4 APIs
- Learn Prompting, Prompt Hacking
- MiesnerJacob / learn-prompting, Prompt Hacking
- r/ChatGPTJailbreak
- Y Combinator, ChatGPT Grandma Exploit
- Twitter Algorithmic Bias Bounty
- MLA, How do I cite generative AI in MLA style?
- CivAI, GenAI Toolkit for the NIST AI Risk Management Framework: Thinking Through the Risks of a GenAI Chatbot
- HackerOne, An Emerging Playbook for AI Red Teaming with HackerOne
- China AI Law Cheat Sheet
- EU AI Act Cheat Sheet
- Governance Audit, Model Audit, and Application Audit
- developer mode fixed
- Gulf Countries AI Policy Cheat Sheet
- Singapore AI Policy Cheat Sheet
- UK AI Policy Cheat Sheet
- AI Incident Collection: An Observational Study of the Great AI Experiment
- Repurposing the Wheel: Lessons for AI Standards
- Translating AI Risk Management Into Practice
- Coalition for Content Provenance and Authenticity (C2PA)
- Dominique Shelton Leipzig, Countries With Draft AI Legislation or Frameworks
- HackerOne Blog
- AI Verify Foundation, Cataloguing LLM Evaluations
- OpenAI, Building an early warning system for LLM-aided biological threat creation
- Partnership on AI, PAI’s Guidance for Safe Foundation Model Deployment: A Framework for Collective Action
- Special Competitive Studies Project and Johns Hopkins University Applied Physics Laboratory, Framework for Identifying Highly Consequential AI Use Cases
- Synack, The Complete Guide to Crowdsourced Security Testing, Government Edition
- Future of Privacy Forum, The Spectrum of Artificial Intelligence
- CDAO frameworks, guidance, and best practices for AI test & evaluation
- CSET, What Does AI-Red Teaming Actually Mean?
- Jailbreaking Black Box Large Language Models in Twenty Queries
- Lakera AI's Gandalf
- AI Governance in 2023
- RAND Corporation, Analyzing Harms from AI-Generated Images and Safeguarding Online Authenticity
- Oliver Patel's Cheat Sheets
- 10 Key Pillars for Enterprise AI Governance
- AI Governance Needs Sociotechnical Expertise: Why the Humanities and Social Sciences Are Critical to Government Efforts
- Boston University AI Task Force Report on Generative AI in Education and Research
- Institute for Public Policy Research (IPPR), Transformed by AI: How Generative Artificial Intelligence Could Affect Work in the UK—And How to Manage It
- Tech Policy Press - Artificial Intelligence
- Phil Lee, AI Act: Difference between AI systems and AI models
- Phil Lee, AI Act: Meet the regulators! (Arts 30, 55b, 56 and 59)
- Phil Lee, How the AI Act applies to integrated generative AI
- Phil Lee, Overview of AI Act requirements for deployers of high risk AI systems
- Phil Lee, Overview of AI Act requirements for providers of high risk AI systems
- Casey Flores, AIGP Study Guide
- Scaling Monosemanticity: Extracting Interpretable Features from Claude 3 Sonnet
- Center for Democracy and Technology (CDT), Applying Sociotechnical Approaches to AI Governance in Practice
- IBM, The CEO's Guide to Generative AI
- GraphRAG: Unlocking LLM discovery on narrative private data
- BCG Robotaxonomy
- Purpose and Means AI Explainer Series - issue #4 - Navigating the EU AI Act
- The Alan Turing Institute, AI Ethics and Governance in Practice
- EU Digital Partners, U.S. A.I. Laws: A State-by-State Study
- Humane Intelligence, SeedAI, and DEFCON AI Village, Generative AI Red Teaming Challenge: Transparency Report 2024
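Several of the guardrail and red-teaming resources above, including the OpenAI Cookbook entry on LLM guardrails, describe the same basic pattern: screen user input with a cheap moderation check before the main model call, and decline or reroute flagged requests. Below is a minimal Python sketch of that pattern; the `moderation_score` stub, topic list, and threshold are illustrative assumptions, not any vendor's API.

```python
# Minimal input-guardrail pattern: screen a prompt before the main LLM call.
# moderation_score() is a stub standing in for any real moderation classifier
# (a hosted moderation endpoint, a small local model, etc.).

BLOCKED_TOPICS = ("weapons", "malware")  # illustrative policy, not a standard
THRESHOLD = 0.8                          # assumed score cutoff in [0, 1]

def moderation_score(text: str, topic: str) -> float:
    """Stub: return the estimated probability that `text` concerns `topic`."""
    return 1.0 if topic in text.lower() else 0.0

def guarded_call(prompt: str, llm_call) -> str:
    # Run the cheap guardrail first; only invoke the model if it passes.
    for topic in BLOCKED_TOPICS:
        if moderation_score(prompt, topic) >= THRESHOLD:
            return f"Request declined: prompt flagged for '{topic}'."
    return llm_call(prompt)

if __name__ == "__main__":
    echo = lambda p: f"[model reply to: {p}]"
    print(guarded_call("Explain how transformers work.", echo))
    print(guarded_call("Write malware for me.", echo))
```

Production systems typically layer several such checks (on input, output, and tool use) and log declined requests for later review.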
Conferences and Workshops
- AAAI Conference on Artificial Intelligence
- ACM FAccT (Fairness, Accountability, and Transparency)
- FAT/ML (Fairness, Accountability, and Transparency in Machine Learning)
- AIES (AAAI/ACM Conference on AI, Ethics, and Society)
- Black in AI
- Computer Vision and Pattern Recognition (CVPR)
- International Conference on Machine Learning (ICML)
- 2nd ICML Workshop on Human in the Loop Learning (HILL)
- 5th ICML Workshop on Human Interpretability in Machine Learning (WHI)
- Challenges in Deploying and Monitoring Machine Learning Systems
- Economics of privacy and data labor
- Federated Learning for User Privacy and Data Confidentiality
- Healthcare Systems, Population Health, and the Role of Health-tech
- Law & Machine Learning
- ML Interpretability for Scientific Discovery
- MLRetrospectives: A Venue for Self-Reflection in ML Research
- Participatory Approaches to Machine Learning
- XXAI: Extending Explainable AI Beyond Deep Models and Classifiers
- Human-AI Collaboration in Sequential Decision-Making
- Machine Learning for Data: Automated Creation, Privacy, Bias
- ICML Workshop on Algorithmic Recourse
- ICML Workshop on Human in the Loop Learning (HILL)
- ICML Workshop on Theoretic Foundation, Criticism, and Application Trend of Explainable AI
- Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning (ITR3)
- International Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2021 (FL-ICML'21)
- Interpretable Machine Learning in Healthcare
- Self-Supervised Learning for Reasoning and Perception
- The Neglected Assumptions In Causal Inference
- Theory and Practice of Differential Privacy
- Uncertainty and Robustness in Deep Learning
- Workshop on Computational Approaches to Mental Health @ ICML 2021
- Workshop on Distribution-Free Uncertainty Quantification
- Workshop on Socially Responsible Machine Learning
- 1st ICML 2022 Workshop on Safe Learning for Autonomous Driving (SL4AD)
- 2nd Workshop on Interpretable Machine Learning in Healthcare (IMLH)
- DataPerf: Benchmarking Data for Data-Centric AI
- Disinformation Countermeasures and Machine Learning (DisCoML)
- Responsible Decision Making in Dynamic Environments
- Spurious correlations, Invariance, and Stability (SCIS)
- The 1st Workshop on Healthcare AI and COVID-19
- Theory and Practice of Differential Privacy
- Workshop on Human-Machine Collaboration and Teaming
- 2nd ICML Workshop on New Frontiers in Adversarial Machine Learning
- 2nd Workshop on Formal Verification of Machine Learning
- 3rd Workshop on Interpretable Machine Learning in Healthcare (IMLH)
- Challenges in Deployable Generative AI
- Federated Learning and Analytics in Practice: Algorithms, Systems, Applications, and Opportunities
- Generative AI and Law (GenLaw)
- Interactive Learning with Implicit Human Feedback
- The Second Workshop on Spurious Correlations, Invariance and Stability
- Knowledge, Discovery, and Data Mining (KDD)
- 2nd ACM SIGKDD Workshop on Ethical Artificial Intelligence: Methods and Applications
- KDD Data Science for Social Good 2023
- Neural Information Processing Systems (NeurIPS)
- 5th Robot Learning Workshop: Trustworthy Robotics
- Algorithmic Fairness through the Lens of Causality and Privacy
- Causal Machine Learning for Real-World Impact
- Challenges in Deploying and Monitoring Machine Learning Systems
- Cultures of AI and AI for Culture
- Empowering Communities: A Participatory Approach to AI for Mental Health
- Federated Learning: Recent Advances and New Challenges
- Gaze meets ML
- HCAI@NeurIPS 2022, Human Centered AI
- Human Evaluation of Generative Models
- Human in the Loop Learning (HiLL) Workshop at NeurIPS 2022
- Learning Meaningful Representations of Life
- Machine Learning for Autonomous Driving
- Progress and Challenges in Building Trustworthy Embodied AI
- Tackling Climate Change with Machine Learning
- Trustworthy and Socially Responsible Machine Learning
- Workshop on Machine Learning Safety
- AI meets Moral Philosophy and Moral Psychology: An Interdisciplinary Dialogue about Computational Ethics
- Algorithmic Fairness through the Lens of Time
- Attributing Model Behavior at Scale (ATTRIB)
- Backdoors in Deep Learning: The Good, the Bad, and the Ugly
- Computational Sustainability: Promises and Pitfalls from Theory to Deployment
- Socially Responsible Language Modelling Research (SoLaR)
- Regulatable ML: Towards Bridging the Gaps between Machine Learning Research and Regulations
- Workshop on Distribution Shifts: New Frontiers with Foundation Models
- XAI in Action: Past, Present, and Future Applications
- Oxford Generative AI Summit Slides
- NAACL 24 Tutorial: Explanations in the Era of Large Language Models
- Evaluating Generative AI Systems: the Good, the Bad, and the Hype (April 15, 2024)
- IAPP, AI Governance Global 2024, June 4-7, 2024
- Mission Control AI, Booz Allen Hamilton, and The Intellectual Forum at Jesus College, Cambridge, The 2024 Leaders in Responsible AI Summit, March 22, 2024
- “Could it have been different?” Counterfactuals in Minds and Machines
- Neural Conversational AI Workshop - What’s left to TEACH (Trustworthy, Enhanced, Adaptable, Capable and Human-centric) chatbots?
- I Can’t Believe It’s Not Better: Understanding Deep Learning Through Empirical Falsification
- I Can’t Believe It’s Not Better (ICBINB): Failure Modes in the Age of Foundation Models
- OECD.AI, Building the foundations for collaboration: The OECD-African Union AI Dialogue
Official Policy, Frameworks, and Guidance
- 12 CFR Part 1002 - Equal Credit Opportunity Act (Regulation B)
- Algorithmic Accountability Act of 2023
- Algorithm Charter for Aotearoa New Zealand
- A Regulatory Framework for AI: Recommendations for PIPEDA Reform
- Artificial Intelligence (AI) in the Securities Industry
- Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment - Shaping Europe’s digital future - European Commission
- Audit of Governance and Protection of Department of Defense Artificial Intelligence Data and Technology
- Commodity Futures Trading Commission (CFTC), A Primer on Artificial Intelligence in Securities Markets
- Biometric Information Privacy Act
- Booker Wyden Health Care Letters
- California Consumer Privacy Act (CCPA)
- California Department of Justice, How to Read a Privacy Policy
- California Privacy Rights Act (CPRA)
- Children's Online Privacy Protection Rule ("COPPA")
- Civil liability regime for artificial intelligence
- Congressional Research Service, Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress
- Consumer Data Protection Act (Code of Virginia)
- DARPA, Explainable Artificial Intelligence (XAI) (Archived)
- Data Availability and Transparency Act 2022 (Australia)
- data.gov, Privacy Policy and Data Policy
- Defense Technical Information Center, Computer Security Technology Planning Study, October 1, 1972
- De-identification Tools
- Department for Science, Innovation and Technology, Frontier AI: capabilities and risks - discussion paper (United Kingdom)
- United States Department of Commerce, Intellectual property
- RAI Toolkit
- Developing Financial Sector Resilience in a Digital World: Selected Themes in Technology and Related Risks
- The Digital Services Act package (EU Digital Services Act and Digital Markets Act)
- Directive on Automated Decision Making (Canada)
- EEOC Letter (from U.S. senators re: hiring software)
- European Commission, Hiroshima Process International Guiding Principles for Advanced AI system
- Executive Order 13960 (2020-12-03), Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government
- Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence
- Facial Recognition and Biometric Technology Moratorium Act of 2020
- FDA Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan, updated January 2021
- FDA Software as a Medical Device (SaMD) guidance (December 8, 2017)
- FDIC Supervisory Guidance on Model Risk Management
- Federal Consumer Online Privacy Rights Act (COPRA)
- Federal Reserve Bank of Dallas, Regulation B, Equal Credit Opportunity, Credit Scoring Interpretations: Withdrawal of Proposed Business Credit Amendments, June 3, 1982
- FHA model risk management/model governance guidance
- FTC Business Blog
- 2021-01-11 Facing the facts about facial recognition
- 2021-04-19 Aiming for truth, fairness, and equity in your company’s use of AI
- 2022-07-11 Location, health, and other sensitive information: FTC committed to fully enforcing the law against illegal use and sharing of highly sensitive data
- 2023-09-15 Updated FTC-HHS publication outlines privacy and security laws and rules that impact consumer health data
- 2023-09-27 Could PrivacyCon 2024 be the place to present your research on AI, privacy, or surveillance?
- 2022-05-20 Security Beyond Prevention: The Importance of Effective Breach Disclosures
- 2023-02-01 Security Principles: Addressing underlying causes of risk in complex systems
- 2023-06-29 Generative AI Raises Competition Concerns
- FTC Privacy Policy
- Government Accountability Office: Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities
- General Data Protection Regulation (GDPR)
- Article 22 EU GDPR "Automated individual decision-making, including profiling"
- General principles for the use of Artificial Intelligence in the financial sector
- Guidelines for secure AI system development
- Innovation spotlight: Providing adverse action notices when using AI/ML models
- Justice in Policing Act
- National Conference of State Legislatures (NCSL) 2020 Consumer Data Privacy Legislation
- National Institute of Standards and Technology (NIST), AI 100-1 Artificial Intelligence Risk Management Framework (NIST AI RMF 1.0)
- National Institute of Standards and Technology (NIST), Four Principles of Explainable Artificial Intelligence, Draft NISTIR 8312, 2020-08-17
- National Institute of Standards and Technology (NIST), Four Principles of Explainable Artificial Intelligence, NISTIR 8312, 2021-09-29
- National Institute of Standards and Technology (NIST), Measurement Uncertainty
- National Institute of Standards and Technology (NIST), NIST Special Publication 800-30 Revision 1, Guide for Conducting Risk Assessments
- National Science and Technology Council (NSTC), Select Committee on Artificial Intelligence, National Artificial Intelligence Research and Development Strategic Plan 2023 Update
- New York City Automated Decision Systems Task Force Report (November 2019)
- OECD, Open, Useful and Re-usable data (OURdata) Index: 2019 - Policy Paper
- Office of the Director of National Intelligence (ODNI), The AIM Initiative: A Strategy for Augmenting Intelligence Using Machines
- Office of Management and Budget, Guidance for Regulation of Artificial Intelligence Applications, finalized November 2020
- Office of Science and Technology Policy, Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People
- Office of the Comptroller of the Currency (OCC), 2021 Model Risk Management Handbook
- Online Harms White Paper: Full government response to the consultation (United Kingdom)
- Online Privacy Act of 2023
- Online Safety Bill (United Kingdom)
- Principles of Artificial Intelligence Ethics for the Intelligence Community
- Privacy Act 1988 (Australia)
- Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act)
- Amendments adopted by the European Parliament on 14 June 2023 on the proposal for a regulation of the European Parliament and of the Council on laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts
- Psychological Foundations of Explainability and Interpretability in Artificial Intelligence
- The Public Sector Bodies (Websites and Mobile Applications) Accessibility Regulations 2018 (United Kingdom)
- Questions and Answers to Clarify and Provide a Common Interpretation of the Uniform Guidelines on Employee Selection Procedures
- Questions from the Commission on Protecting Privacy and Preventing Discrimination
- RE: Use of External Consumer Data and Information Sources in Underwriting for Life Insurance
- Singapore Personal Data Protection Commission (PDPC), Companion to the Model AI Governance Framework – Implementation and Self-Assessment Guide for Organizations
- Singapore Personal Data Protection Commission (PDPC), Compendium of Use Cases: Practical Illustrations of the Model AI Governance Framework
- Supervisory Guidance on Model Risk Management
- Testing the Reliability, Validity, and Equity of Terrorism Risk Assessment Instruments
- UNESCO, Artificial Intelligence: examples of ethical dilemmas
- United States Department of Homeland Security, Use of Commercial Generative Artificial Intelligence (AI) Tools
- United States Department of Justice, Privacy Act of 1974
- United States Department of Justice, Overview of The Privacy Act of 1974 (2020 Edition)
- United States Patent and Trademark Office (USPTO), Public Views on Artificial Intelligence and Intellectual Property Policy
- U.S. Army Concepts Analysis Agency, Proceedings of the Thirteenth Annual U.S. Army Operations Research Symposium, Volume 1, October 29 to November 1, 1974
- U.S. Web Design System (USWDS) Design principles
- Department for Science, Innovation and Technology and AI Safety Institute, International Scientific Report on the Safety of Advanced AI
- National Physical Laboratory (NPL), Beginner's guide to measurement GPG118
- AI Safety Institute (AISI), Advanced AI evaluations at AISI: May update
- Colorado General Assembly, SB24-205 Consumer Protections for Artificial Intelligence, Concerning consumer protections in interactions with artificial intelligence systems
- European Data Protection Supervisor, First EDPS Orientations for EUIs using Generative AI
- European Parliament, Addressing AI risks in the workplace: Workers and algorithms
- Consumer Financial Protection Bureau (CFPB), Chatbots in consumer finance
- Office of Educational Technology, Designing for Education with Artificial Intelligence: An Essential Guide for Developers
- Department for Science, Innovation and Technology, The Bletchley Declaration by Countries Attending the AI Safety Summit, 1-2 November 2023
- National framework for the assurance of artificial intelligence in government (Australia)
- Using Artificial Intelligence and Algorithms
- United States Department of Energy Artificial Intelligence and Technology Office
- European Parliament, The impact of the General Data Protection Regulation (GDPR) on artificial intelligence
- 2023-08-16 Can’t lose what you never had: Claims about digital ownership and creation in the age of generative AI
- Singapore Personal Data Protection Commission (PDPC), Model Artificial Intelligence Governance Framework (Second Edition)
- Singapore Personal Data Protection Commission (PDPC), Privacy Enhancing Technology (PET): Proposed Guide on Synthetic Data Generation
- Bundesamt für Sicherheit in der Informationstechnik, Generative AI Models - Opportunities and Risks for Industry and Authorities
- California Department of Technology, GenAI Executive Order
- Commodity Futures Trading Commission (CFTC), Responsible Artificial Intelligence in Financial Markets
- Mississippi Department of Education, Artificial Intelligence Guidance for K-12 Classrooms
- National Security Agency, Central Security Service, Artificial Intelligence Security Center
- United States Department of Homeland Security, Safety and Security Guidelines for Critical Infrastructure Owners and Operators
- Callaghan Innovation, EU AI Fact Sheet 4, High-risk AI systems
- European Data Protection Board (EDPB), AI Auditing documents
- European Labour Authority (ELA), Artificial Intelligence and Algorithms in Risk Assessment: Addressing Bias, Discrimination and other Legal and Ethical Issues
- OECD.AI, The Bias Assessment Metrics and Measures Repository
- OECD, AI, data governance and privacy: Synergies and areas of international co-operation
- AI Risk Management Playbook (AIRMP)
- AI Use Case Inventory (DOE Use Cases Releasable to Public in Accordance with E.O. 13960)
- Digital Climate Solutions Inventory
- Generative Artificial Intelligence Reference Guide
- Department for Science, Innovation and Technology, Guidance, Introduction to AI Assurance
- National Security Commission on Artificial Intelligence, Final Report
- Securities and Exchange Commission, SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence
- Office of the United Nations High Commissioner for Human Rights
- National Telecommunications and Information Administration, AI Accountability Policy Report
- United States Department of Defense, AI Principles: Recommendations on the Ethical Use of Artificial Intelligence
- United States Department of Defense, Chief Data and Artificial Intelligence Officer (CDAO) Assessment and Assurance
- United States Department of the Treasury, Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector, March 2024
- State of California, Department of Technology, Office of Information Security, Generative Artificial Intelligence Risk Assessment, SIMM 5305-F, March 2024
- (Draft Guideline) E-23 – Model Risk Management
- 2020-04-08 Using Artificial Intelligence and Algorithms
- 2023-07-25 Protecting the privacy of health information: A baker’s dozen takeaways from FTC cases
- 2023-08-22 For business opportunity sellers, FTC says “AI” stands for “allegedly inaccurate”
- 2023-09-18 Companies warned about consequences of loose use of consumers’ confidential data
- 2023-12-19 Coming face to face with Rite Aid’s allegedly unfair use of facial recognition technology
- Gouvernance des algorithmes d’intelligence artificielle dans le secteur financier (Governance of artificial intelligence algorithms in the financial sector) (France)
- International Bureau of Weights and Measures (BIPM), Evaluation of measurement data—Guide to the expression of uncertainty in measurement
- United States Department of Commerce Internet Policy Task Force, Commercial Data Privacy and Innovation in the Internet Economy: A Dynamic Policy Framework
- The White House, Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy, February 2012
- National Institute of Standards and Technology (NIST), Assessing Risks and Impacts of AI (ARIA)
- Autoriteit Persoonsgegevens, Scraping bijna altijd illegaal (Dutch Data Protection Authority, "Scraping is almost always illegal")
- NATO, Narrative Detection and Topic Modelling in the Baltics
- Health Canada, Transparency for machine learning-enabled medical devices: Guiding principles
Technical Resources
Common or Useful Datasets
- Wikipedia Talk Labels: Personal Attacks
- Statlog (German Credit Data)
- Adult income dataset (see the loading sketch after this list)
- COMPAS Recidivism Risk Score Data and Analysis
- All Lending Club loan data
- Amazon Open Data
- Data.gov
- Home Mortgage Disclosure Act (HMDA) Data
- MIMIC-III Clinical Database
- UCI ML Data Repository
- FANNIE MAE Single Family Loan Performance
- NYPD Stop, Question and Frisk Data
- Bruegel, A dataset on EU legislation for the digital world
- Presidential Deepfakes Dataset
- Have I Been Trained?
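Most of the tabular datasets above are mirrored on OpenML, which makes them easy to pull into Python for fairness experiments. A minimal sketch, assuming scikit-learn is installed and using OpenML's public dataset names ("adult" for the Adult income data, "credit-g" for the Statlog German credit data):

```python
# Sketch: load two benchmark fairness datasets from OpenML by name.
from sklearn.datasets import fetch_openml

adult = fetch_openml("adult", version=2, as_frame=True)      # Adult income
credit = fetch_openml("credit-g", version=1, as_frame=True)  # German credit

print(adult.frame.shape)                     # rows x columns
print(credit.frame["class"].value_counts())  # good/bad credit labels
```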
Machine Learning Environment Management Tools
Open Source/Access Responsible AI Software Packages
- Hugging Face, BiasAware: Dataset Bias Detection
- DrWhyAI
- TensorBoard Projector
- What-if Tool
- AI Explainability 360
- algofairness
- Bayesian Case Model
- Bayesian Rule List (BRL)
- Falling Rule List (FRL)
- Grad-CAM | "Grad-CAM is a technique for making convolutional neural networks more transparent by visualizing the regions of input that are important for predictions in computer vision models." |
- parity-fairness
- ProtoPNet | "This code package implements the prototypical part network (ProtoPNet) from the paper 'This Looks Like That: Deep Learning for Interpretable Image Recognition' (to appear at NeurIPS 2019), by Chaofan Chen (Duke University), Oscar Li (Duke University), Chaofan Tao (Duke University), Alina Jade Barnett (Duke University), Jonathan Su (MIT Lincoln Laboratory), and Cynthia Rudin (Duke University)." |
- pymc3 | "PyMC (formerly PyMC3) is a Python package for Bayesian statistical modeling focusing on advanced Markov chain Monte Carlo (MCMC) and variational inference (VI) algorithms. Its flexibility and extensibility make it applicable to a large suite of problems." |
- pytorch-grad-cam | "A package with state of the art methods for Explainable AI for computer vision. This can be used for diagnosing model predictions, either in production or while developing models. The aim is also to serve as a benchmark of algorithms and metrics for research of new explainability methods." |
- rationale
- Decision Trees | "A non-parametric supervised learning method used for classification and regression." |
- Generalized Linear Models
- scikit-multiflow
- shap (see the usage sketch after this list)
- text_explainability | "Provides a generic architecture from which well-known state-of-the-art explainability approaches for text can be composed." |
- text_sensitivity
- xplique | "A Python toolkit dedicated to explainability. The goal of this library is to gather the state of the art of Explainable AI to help you understand your complex neural network models." |
- ALEPlot | "Visualizes the main effects of individual predictor variables and their second-order interaction effects in black-box supervised learning models." |
- arules
- DALEXtra: Extension for 'DALEX' Package
- elasticnet | "Provides functions for fitting the entire solution path of the Elastic-Net and also provides functions for doing sparse PCA." |
- fairness
- forestmodel
- fscaret
- gam
- glm2
- glmnet | "Extremely efficient procedures for fitting the entire lasso or elastic-net regularization path for linear regression, logistic and multinomial regression models, Poisson regression, Cox model, multiple-response Gaussian, and the grouped multinomial regression." |
- Penalized Generalized Linear Models
- Monotonic GBM
- Sparse Principal Components (GLRM)
- ICEbox: Individual Conditional Expectation Plot Toolbox
- ingredients
- live
- modelDown
- modelOriented | Open source software packages from the Warsaw-based MI².AI group. |
- quantreg
- rpart
- RuleFit
- Scalable Bayesian Rule Lists (SBRL)
- shapper
- smbinning
- Monotonic
- shap
- Sparse Principal Components (GLRM)
- cdt15, Causal Discovery Lab., Shiga University | Causal discovery tools (e.g., LiNGAM) based on the non-Gaussianity of the data. |
- Scikit-Explain | "A user-friendly Python module for machine learning explainability," featuring PD and ALE plots, LIME, SHAP, permutation importance and Friedman's H, among other methods. |
- LDNOOBW
- RuleFit
- interpret: Fit Interpretable Machine Learning Models
- Scikit-learn, SparsePCA and MiniBatchSparsePCA (https://scikit-learn.org/stable/modules/decomposition.html#sparse-principal-components-analysis-sparsepca-and-minibatchsparsepca) | "A variant of [principal component analysis, PCA], with the goal of extracting the set of sparse components that best reconstruct the data." |
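As an example of how these packages are typically used, here is a minimal shap workflow on a tree-based model; the XGBoost model and California housing data are illustrative choices, not anything the package prescribes.

```python
# Sketch: local and global SHAP attributions for a gradient-boosted model.
import shap
import xgboost
from sklearn.datasets import fetch_california_housing

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = xgboost.XGBRegressor(n_estimators=50).fit(X, y)

explainer = shap.Explainer(model, X)   # picks a tree explainer for XGBoost
shap_values = explainer(X.iloc[:100])  # local attributions for 100 rows

shap.plots.beeswarm(shap_values)       # global summary of feature effects
```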
Benchmarks
- HELM
- Nvidia MLPerf
- OpenML Benchmarking Suites
- Real Toxicity Prompts (Allen Institute for AI) (see the loading sketch after this list)
- GEM
- TrustLLM-Benchmark
- Trust-LLM-Benchmark Leaderboard
- MLCommons, MLCommons AI Safety v0.5 Proof of Concept
- MLCommons, Introducing v0.5 of the AI Safety Benchmark from MLCommons
- Sociotechnical Safety Evaluation Repository
- SafetyPrompts.com
- WAVES: Benchmarking the Robustness of Image Watermarks
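Many of these benchmarks are distributed as plain datasets, so a run usually starts by pulling the prompts and scoring your own model's continuations. A minimal sketch for Real Toxicity Prompts, assuming the dataset is still published on the Hugging Face Hub under the id `allenai/real-toxicity-prompts`:

```python
# Sketch: inspect one record of the RealToxicityPrompts benchmark.
from datasets import load_dataset

ds = load_dataset("allenai/real-toxicity-prompts", split="train")
row = ds[0]
print(row["prompt"]["text"])      # the prompt text to continue
print(row["prompt"]["toxicity"])  # Perspective API toxicity score (0-1)
```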
Miscellaneous Resources
AI Law, Policy, and Guidance Trackers
- IAPP Global AI Legislation Tracker
- IAPP US State Privacy Legislation Tracker
- The Ethical AI Database
- Institute for the Future of Work, Tracking international legislation relevant to AI at work
- Legal Nodes, Global AI Regulations Tracker: Europe, Americas & Asia-Pacific Overview
- OECD.AI, National AI policies & strategies
- Raymond Sun, Global AI Regulation Tracker
- Runway Strategies, Global AI Regulation Tracker
- VidhiSharma.AI, Global AI Governance Tracker
- University of North Texas, Artificial Intelligence (AI) Policy Collection
- George Washington University Law School's AI Litigation Database
AI Incident Information Sharing Resources
- AI Incident Database (Responsible AI Collaborative)
- AI Vulnerability Database (AVID)
- AIAAIC
- OECD AI Incidents Monitor
- Verica Open Incident Database (VOID)
- AI Risk Database
- Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database
- AI Badness: An open catalog of generative AI badness
- EthicalTech@GW, Deepfakes & Democracy Initiative
Challenges and Competitions
Curated Bibliographies
- Proposed Guidelines for Responsible Use of Explainable Machine Learning (presentation, bibliography)
- Proposed Guidelines for Responsible Use of Explainable Machine Learning (paper, bibliography)
- A Responsible Machine Learning Workflow (paper, bibliography)
- Fairness, Accountability, and Transparency in Machine Learning (FAT/ML) Scholarship
- Blair Attard-Frost, INF1005H1S: Artificial Intelligence Policy Supplementary Reading List
- White & Case, AI Watch: Global regulatory tracker - United States
List of Lists
- AI Ethics Resources
- AI Tools and Platforms
- OECD-NIST Catalogue of AI Tools and Metrics
- OpenAI Cookbook
- Worldwide AI ethics: A review of 200 guidelines and recommendations for AI governance
- XAI Resources
- Casey Fiesler's AI Ethics & Policy News spreadsheet
- Tech & Ethics Curricula
- Ravit Dotan's Resources
- AI Ethics Guidelines Global Inventory
Critiques of AI
- Ed Zitron's Where's Your Ed At
- Generative AI’s environmental costs are soaring — and mostly secret
- Julia Angwin, Press Pause on the Silicon Valley Hype Machine
- The mechanisms of AI hype and its planetary and social costs
- The perpetual motion machine of AI-generated data and the distraction of ChatGPT as a ‘scientist’
- Re-evaluating GPT-4’s bar exam performance
- Which Humans?
- LLMs Can’t Plan, But Can Help Planning in LLM-Modulo Frameworks
- Long-context LLMs Struggle with Long In-context Learning
- Making AI Less "Thirsty": Uncovering and Addressing the Secret Water Footprint of AI Models
- Nepotistically Trained Generative-AI Models Collapse
- Non-discrimination Criteria for Generative Language Models
- Sustainable AI: Environmental Implications, Challenges and Opportunities
- Companies like Google and OpenAI are pillaging the internet and pretending it’s progress
- Toward Sociotechnical AI: Mapping Vulnerabilities for Machine Learning in Context
- ChatGPT is bullshit
- Theory Is All You Need: AI, Human Cognition, and Decision Making
- Emergent and Predictable Memorization in Large Language Models
- Quantifying Memorization Across Neural Language Models
- AI is effectively ‘useless’—and it’s created a ‘fake it till you make it’ bubble that could end in disaster, veteran market watcher warns
- Are Language Models Actually Useful for Time Series Forecasting?
- Gen AI: Too Much Spend, Too Little Benefit?
- Meta AI Chief: Large Language Models Won't Achieve AGI
- We still don't know what generative AI is good for
- FABLES: Evaluating faithfulness and content selection in book-length summarization
- Ghost in the Cloud: Transhumanism’s simulation theology
- Internet of Bugs, Debunking Devin: "First AI Software Engineer" Upwork lie exposed! (video)
- There Is No A.I.
- What’s in a Name? Experimental Evidence of Gender Bias in Recommendation Letters Generated by ChatGPT
- How AI lies, cheats, and grovels to succeed - and what we need to do about it
- AI already uses as much energy as a small country. It’s only the beginning.
- The AI Carbon Footprint and Responsibilities of AI Scientists
- The Environmental Impact of AI: A Case Study of Water Consumption by Chat GPT
- The Environmental Price of Intelligence: Evaluating the Social Cost of Carbon in Machine Learning
- The Hidden Environmental Impact of AI
- Promoting Sustainability: Mitigating the Water Footprint in AI-Embedded Data Centres
- ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs
- Aylin Caliskan's publications
- Consciousness in Artificial Intelligence: Insights from the Science of Consciousness
- Data and its (dis)contents: A survey of dataset development and use in machine learning research
- Evaluating Language-Model Agents on Realistic Autonomous Tasks
- Generative AI: UNESCO study reveals alarming evidence of regressive gender stereotypes
- Get Ready for the Great AI Disappointment
- Identifying and Eliminating CSAM in Generative ML Training Data and Models
- Insanely Complicated, Hopelessly Inadequate
- Lazy use of AI leads to Amazon products called “I cannot fulfill that request”
- Leak, Cheat, Repeat: Data Contamination and Evaluation Malpractices in Closed-Source LLMs
- Low-Resource Languages Jailbreak GPT-4
- Machine Learning: The High Interest Credit Card of Technical Debt
- Measuring the predictability of life outcomes with a scientific mass collaboration
- Most CEOs aren’t buying the hype on generative AI benefits
- Pretraining Data Mixtures Enable Narrow Model Selection Capabilities in Transformer Models
- Researchers surprised by gender stereotypes in ChatGPT
- Scalable Extraction of Training Data from (Production) Language Models
- Task Contamination: Language Models May Not Be Few-Shot Anymore
- The Cult of AI
- The Data Scientific Method vs. The Scientific Method
- Futurism, Disillusioned Businesses Discovering That AI Kind of Sucks
- Are Emergent Abilities of Large Language Models a Mirage?
- Artificial Hallucinations in ChatGPT: Implications in Scientific Writing
- Artificial intelligence and illusions of understanding in scientific research
- Against predictive optimization
- AI chatbots use racist stereotypes even after anti-racism training
- AI Is a Lot of Work
- AI Tools Still Permitting Political Disinfo Creation, NGO Warns
- Why We Must Resist AI’s Soft Mind Control
- Winner's Curse? On Pace, Progress, and Empirical Rigor
- AI Safety Is a Narrative Problem
- Meta’s AI chief: LLMs will never reach human-level intelligence
- Speed of AI development stretches risk assessments to breaking point
- Ryan Allen, Explainable AI: The What’s and Why’s, Part 1: The What
- I Will Fucking Piledrive You If You Mention AI Again
Groups and Organizations
Education Resources
Comprehensive Software Examples and Tutorials
- COMPAS Analysis Using Aequitas
- Explaining Quantitative Measures of Fairness (with SHAP)
- Getting a Window into your Black Box Model
- H2O.ai, From GLM to GBM Part 1
- H2O.ai, From GLM to GBM Part 2
- IML
- Interpreting Machine Learning Models with the iml Package
- Interpretable Machine Learning using Counterfactuals
- Machine Learning Explainability by Kaggle Learn
- Model Interpretability with DALEX
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- Interpreting Deep Learning Models for Computer Vision
- Partial Dependence Plots in R (see the Python sketch at the end of this section)
- PiML Medium Tutorials
- PiML-Toolbox Examples
- Saliency Maps for Deep Learning
- Visualizing ML Models with LIME
- Visualizing and debugging deep convolutional networks
- What does a CNN see?
- Interpretable Machine Learning with Python
- The Importance of Human Interpretable Machine Learning
- Reliable-and-Trustworthy-AI-Notebooks - and-Trustworthy-AI-Notebooks?style=social)
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- Model Interpretation Strategies
- Hands-on Machine Learning Model Interpretation
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
- The Importance of Human Interpretable Machine Learning
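Several of the tutorials above (the partial dependence, SHAP, and LIME entries in particular) center on post hoc explanation of a fitted model. As a minimal sketch of the core idea, the code below computes one-feature partial dependence by hand with scikit-learn; the dataset, model, and feature choice are illustrative assumptions, not taken from any listed tutorial.

```python
# Minimal partial dependence sketch (illustrative only; the dataset,
# model, and feature index are assumptions, not from any tutorial above).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

X, y = load_breast_cancer(return_X_y=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

def partial_dependence(model, X, feature, grid):
    """Average predicted probability over X while pinning one feature."""
    pd_values = []
    for value in grid:
        X_mod = X.copy()
        X_mod[:, feature] = value          # pin the feature everywhere
        pd_values.append(model.predict_proba(X_mod)[:, 1].mean())
    return np.array(pd_values)

feature = 0                                # "mean radius" in this dataset
grid = np.linspace(X[:, feature].min(), X[:, feature].max(), 10)
print(partial_dependence(model, X, feature, grid).round(3))
```

Packages listed above, such as iml, DALEX, and PiML, implement the same idea with more care (grid construction, ICE curves, categorical features), so prefer them over a hand-rolled loop in practice.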
-
Free-ish Books
- César A. Hidalgo, Diana Orghian, Jordi Albo-Canals, Filipa de Almeida, and Natalia Martin, 2021, *How Humans Judge Machines*
- Charles Perrow, 1984, *Normal Accidents: Living with High-Risk Technologies*
- Charles Perrow, 1999, *Normal Accidents: Living with High-Risk Technologies with a New Afterword and a Postscript on the Y2K Problem*
- Deborah G. Johnson and Keith W. Miller, 2009, *Computer Ethics: Analyzing Information Technology*, Fourth Edition
- Ed Dreby and Keith Helmuth (contributors) and Judy Lumb (editor), 2009, *Fueling Our Future: A Dialogue about Technology, Ethics, Public Policy, and Remedial Action*
- George Reynolds, 2002, *Ethics in Information Technology*
- George Reynolds, 2002, *Ethics in Information Technology*, Instructor's Edition
- Kenneth Vaux (editor), 1970, *Who Shall Live? Medicine, Technology, Ethics*
- Kush R. Varshney, 2022, *Trustworthy Machine Learning: Concepts for Developing Accurate, Fair, Robust, Explainable, Transparent, Inclusive, Empowering, and Beneficial Machine Learning Systems*
- Marsha Cook Woodbury, 2003, *Computer and Information Ethics*
- M. David Ermann, Mary B. Williams, and Claudio Gutierrez, 1990, *Computers, Ethics, and Society*
- Morton E. Winston and Ralph D. Edelbach, 2000, *Society, Ethics, and Technology*, First Edition
- Morton E. Winston and Ralph D. Edelbach, 2003, *Society, Ethics, and Technology*, Second Edition
- Morton E. Winston and Ralph D. Edelbach, 2006, *Society, Ethics, and Technology*, Third Edition
- Patrick Hall and Navdeep Gill, 2019, *An Introduction to Machine Learning Interpretability: An Applied Perspective on Fairness, Accountability, Transparency, and Explainable AI*, Second Edition
- Patrick Hall, Navdeep Gill, and Benjamin Cox, 2021, *Responsible Machine Learning: Actionable Strategies for Mitigating Risks & Driving Adoption*
- Paula Boddington, 2017, *Towards a Code of Ethics for Artificial Intelligence*
- Przemyslaw Biecek and Tomasz Burzykowski, 2020, *Explanatory Model Analysis: Explore, Explain, and Examine Predictive Models. With examples in R and Python*
- Przemyslaw Biecek, 2023, *Adversarial Model Analysis*
- Raymond E. Spier (editor), 2003, *Science and Technology Ethics*
- Richard A. Spinello, 1995, *Ethical Aspects of Information Technology*
- Richard A. Spinello, 1997, *Case Studies in Information and Computer Ethics*
- Richard A. Spinello, 2003, *Case Studies in Information Technology Ethics*, Second Edition
- Solon Barocas, Moritz Hardt, and Arvind Narayanan, 2022, *Fairness and Machine Learning: Limitations and Opportunities*
- Soraj Hongladarom and Charles Ess, 2007, *Information Technology Ethics: Cultural Perspectives*
- Stephen H. Unger, 1982, *Controlling Technology: Ethics and the Responsible Engineer*, First Edition
- Stephen H. Unger, 1994, *Controlling Technology: Ethics and the Responsible Engineer*, Second Edition
- *Ethics for People Who Work in Tech*
- Christoph Molnar, 2021, *Interpretable Machine Learning: A Guide for Making Black Box Models Explainable*
- christophM/interpretable-ml-book
-
Glossaries and Dictionaries
- A.I. For Anyone: The A-Z of AI
- Appen Artificial Intelligence Glossary
- Brookings: The Brookings glossary of AI and emerging technologies
- Built In, Responsible AI Explained
- Center for Security and Emerging Technology: Glossary
- CompTIA: Artificial Intelligence (AI) Terminology: A Glossary for Beginners
- Council of Europe Artificial Intelligence Glossary
- Coursera: Artificial Intelligence (AI) Terms: A to Z Glossary
- Dataconomy: AI dictionary: Be a native speaker of Artificial Intelligence
- Dennis Mercadal, 1990, *Dictionary of Artificial Intelligence*
- G2: 70+ A to Z Artificial Intelligence Terms in Technology
- General Services Administration: AI Guide for Government: Key AI terminology
- Google Developers Machine Learning Glossary
- H2O.ai Glossary
- IAPP Glossary of Privacy Terms
- IAPP International Definitions of Artificial Intelligence
- IAPP Key Terms for AI Governance
- Jerry M. Rosenberg, 1986, *Dictionary of Artificial Intelligence & Robotics*
- MakeUseOf: A Glossary of AI Jargon: 29 AI Terms You Should Know
- Moveworks: AI Terms Glossary
- National Institute of Standards and Technology (NIST), The Language of Trustworthy AI: An In-Depth Glossary of Terms
- Otto Vollnhals, 1992, *A Multilingual Dictionary of Artificial Intelligence (English, German, French, Spanish, Italian)*
- Raoul Smith, 1989, *The Facts on File Dictionary of Artificial Intelligence*
- Raoul Smith, 1990, *Collins Dictionary of Artificial Intelligence*
- Salesforce: AI From A to Z: The Generative AI Glossary for Business Leaders
- Stanford University HAI Artificial Intelligence Definitions
- TechTarget: Artificial intelligence glossary: 60+ terms to know
- TELUS International: 50 AI terms every beginner should know
- University of New South Wales, Bill Wilson, The Machine Learning Dictionary
- Wikipedia: Glossary of artificial intelligence
- William J. Raynor, Jr, 1999, *The International Dictionary of Artificial Intelligence*, First Edition
- William J. Raynor, Jr, 2009, *International Dictionary of Artificial Intelligence*, Second Edition
- Artificial intelligence and illusions of understanding in scientific research (glossary on second page)
- European Commission, Glossary of human-centric artificial intelligence
- European Commission, EU-U.S. Terminology and Taxonomy for Artificial Intelligence - Second Edition
- VAIR (Vocabulary of AI Risks)
- IBM: AI glossary
- Towards AI, Generative AI Terminology — An Evolving Taxonomy To Get You Started
- IEEE, A Glossary for Discussion of Ethics of Autonomous and Intelligent Systems, Version 1
- ISO/IEC DIS 22989(en) Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
- Siemens, Artificial Intelligence Glossary
- UK Parliament, Artificial intelligence (AI) glossary
- Open Access Vocabulary
- The Alan Turing Institute: Data science and AI glossary
- ISO: Information technology — Artificial intelligence — Artificial intelligence concepts and terminology
- Oliver Houdé, 2004, *Dictionary of Cognitive Science: Neuroscience, Psychology, Artificial Intelligence, Linguistics, and Philosophy*
- National Institute of Standards and Technology (NIST), NIST AI 100-2 E2023: Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations
-
Open-ish Classes
- An Introduction to Data Ethics
- Certified Ethical Emerging Technologist
- Coursera, DeepLearning.AI, Generative AI for Everyone
- Coursera, DeepLearning.AI, Generative AI with Large Language Models
- Coursera, Google Cloud, Introduction to Generative AI
- Coursera, Vanderbilt University, Prompt Engineering for ChatGPT
- CS103F: Ethical Foundations of Computer Science
- Fairness in Machine Learning
- Fast.ai Data Ethics course
- Human-Centered Machine Learning
- Introduction to AI Ethics
- INFO 4270: Ethics and Policy in Data Science
- Machine Learning Fairness by Google
- OECD.AI, Disability-Centered AI And Ethics MOOC
- Awesome LLM Courses
- ETH Zürich ReliableAI 2022 Course Project repository
- DeepLearning.AI
- Google Cloud Skills Boost
- Attention Mechanism
- Create Image Captioning Models
- Encoder-Decoder Architecture
- Introduction to Generative AI
- Introduction to Image Generation
- Introduction to Large Language Models
- Introduction to Responsible AI
- Introduction to Vertex AI Studio
- Transformer Models and BERT Model
- Grow with Google, Generative AI for Educators
- IBM SkillsBuild
- Jay Alammar, Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention)
- Carnegie Mellon University, Computational Ethics for NLP
- Piotr Sapieżyński's CS 4910 - Special Topics in Computer Science: Algorithm Audits
- Build a Large Language Model (From Scratch)
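Several of the classes above (the Google Cloud Skills Boost "Attention Mechanism" and "Transformer Models and BERT Model" modules, and Build a Large Language Model (From Scratch)) revolve around scaled dot-product attention. The NumPy sketch below is a minimal illustration of that computation; the toy shapes and random inputs are assumptions, not material from any listed course.

```python
# Scaled dot-product attention in plain NumPy (illustrative sketch;
# the toy shapes and random inputs are assumptions, not course code).
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)  # stabilize the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                            # attention-weighted values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, dimension 8
K = rng.normal(size=(6, 8))   # 6 key/value positions
V = rng.normal(size=(6, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (4, 8)
```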
-
Podcasts and Channels
-
Programming Languages
Categories
Sub Categories
- Community Frameworks and Guidance (317)
- Critiques of AI (160)
- Official Policy, Frameworks, and Guidance (147)
- Comprehensive Software Examples and Tutorials (139)
- Conferences and Workshops (90)
- Glossaries and Dictionaries (77)
- Open Source/Access Responsible AI Software Packages (55)
- Open-ish Classes (34)
- Free-ish Books (30)
- Common or Useful Datasets (15)
- Benchmarks (12)
- AI Law, Policy, and Guidance Trackers (11)
- List of Lists (10)
- AI Incident Information Sharing Resources (9)
- Curated Bibliographies (6)
- Machine Learning Environment Management Tools (4)
- Challenges and Competitions (3)
- Podcasts and Channels (2)
- Groups and Organizations (2)
Keywords
llm (4), chatgpt (3), interpretable-ai (2), deep-learning (2), llm-security (2), awesome-list (2), prompt (2), reliable-ai (2), openai (2), jailbreak (2), llms (2), ai (2), privacy (1), security (1), llm-privacy (1), infosectools (1), adversarial-machine-learning (1), security-scanner (1), vulnerability-scanner (1), large-language-model (1), prompt-engineering (1), language-model (1), responsible-ai (1), red-team-tools (1), generative-ai (1), ai-red-team (1), gpt-4 (1), gpt-3-5 (1), sbom (1), mlbom (1), ml (1), aibom (1), robust-machine-learning (1), deep-neural-networks (1), computer-vision (1), online-courses (1), nlp (1), natural-language-processing (1), large-language-models (1), courses (1), awesome (1), trustworthy-ai (1), neural-networks (1), xai (1), transparency (1), python (1), machine-learning-interpretability (1), machine-learning (1), lime (1), interpretable-ml (1)
1