Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
AwesomeResponsibleAI
A curated list of awesome academic research, books, code of ethics, data sets, institutes, newsletters, principles, podcasts, reports, tools, regulations and standards related to Responsible AI and Human-Centered AI.
https://github.com/AthenaCore/AwesomeResponsibleAI
Last synced: about 13 hours ago
Academic Research
Sustainability
- van Wynsberghe, A. 2021
- Parcollet, T., & Ravanelli, M. 2021
- Lannelongue, L. et al. 2020
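Lannelongue, L. et al. 2020 popularised a simple accounting model for the footprint of computation: energy equals hardware power draw times runtime times the data centre's PUE, and emissions equal energy times the grid's carbon intensity. A minimal sketch of that arithmetic (the default constants below are illustrative placeholders, not values from the paper):

```python
def training_footprint_gco2e(runtime_h, power_draw_w, pue=1.5,
                             carbon_intensity_gco2e_per_kwh=475.0):
    """Estimate emissions of a compute job.

    energy (kWh) = power (kW) * runtime (h) * PUE
    CO2e (g)     = energy (kWh) * grid carbon intensity (gCO2e/kWh)
    PUE and carbon intensity here are placeholder defaults, not measured values.
    """
    energy_kwh = (power_draw_w / 1000.0) * runtime_h * pue
    return energy_kwh * carbon_intensity_gco2e_per_kwh

# e.g. a 48-hour training run on a 300 W accelerator
print(round(training_footprint_gco2e(48, 300), 1))
```

Swapping in a measured PUE and the local grid's intensity turns this toy into a first-order estimate.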
Challenges
Drift
Explainability
Fairness
Ethical Data Products
- [Building Inclusive Products Through A/B Testing - Jacques et al, 2020](https://arxiv.org/pdf/2002.05819) `LinkedIn`
Reproducible/Non-Reproducible Research
Collections
Evaluation (of model explanations)
Bias
Courses
Data/AI Ethics
Safety
Data Privacy
Ethical Design
Causality
Frameworks
Institutes
Safety
- Center for Human-Compatible AI
- Ada Lovelace Institute
- Open Data Institute
- Center for Responsible AI
- Montreal AI Ethics Institute
- National AI Centre's Responsible AI Network
- The Institute for Ethical AI & Machine Learning
- University of Oxford Institute for Ethics in AI
- European Centre for Algorithmic Transparency
Commercial / Proprietary / Closed Access
Data Privacy
Tools
Performance (& Automated ML)
- DataPerf
- automl: Deep Learning with Metaheuristic
- deepchecks
- Yellowbrick
- auditor
- AutoKeras
- Auto-Sklearn
- EloML
- LOFO Importance
- forester
- metrica: Prediction performance metrics
- NNI: Neural Network Intelligence
- performance
- TensorFlow Model Analysis
- TPOT
- WeightWatcher `Python`
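Several of the tools above (LOFO Importance, Yellowbrick, auditor) revolve around scoring features by how much model performance degrades when they are perturbed. A minimal sketch of that idea using scikit-learn's `permutation_importance` (illustrative only, not any listed tool's API):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy data: 5 features, only 2 of them informative.
X, y = make_classification(n_samples=300, n_features=5, n_informative=2,
                           n_redundant=0, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in accuracy.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean accuracy drop = {score:.3f}")
```

Informative features should show a clearly larger drop than the noise features; dedicated tools add plotting, grouping, and statistical tests on top of this core loop.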
Benchmarks
Bias
Causal Inference
Drift
Fairness
Interpretability/Explicability
- breakDown: Model Agnostic Explainers for Individual Predictions
- aorsf: Accelerated Oblique Random Survival Forests
- DALEX: moDel Agnostic Language for Exploration and eXplanation
- ceterisParibus: Ceteris Paribus Profiles
- ecco `Python`
- eXplainability Toolbox
- ExplainerHub
- hstats
- interactions: Comprehensive, User-Friendly Toolkit for Probing Interactions
- kernelshap: Kernel SHAP
- FAT Forensics
- lime: Local Interpretable Model-Agnostic Explanations
- Network Dissection
- shapviz
- Skater
- TCAV (Testing with Concept Activation Vectors)
- trulens
- trulens-eval
- pre: Prediction Rule Ensembles
- vivid
- AI360 Toolkit
- captum
- DALEXtra: extension for DALEX
- Dianna
- Diverse Counterfactual Explanations (DiCE)
- dtreeviz
- eli5
- ExplainaBoard
- fastshap
- fasttreeshap
- flashlight
- Human Learn
- innvestigate
- Shapash
- survex
- Vetiver
- vip
- XAI - An eXplainability toolbox for machine learning
- xplique
- XAIoGraphs
- Zennit
- shapper
- teller
- explabox
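Several of these libraries (lime, Skater, eli5) implement local surrogate explanations: perturb the input around one instance, query the black-box model, and fit a simple proximity-weighted linear model to its responses. A minimal sketch of that idea with plain scikit-learn (not the `lime` package's actual API; `explain_locally` is a hypothetical helper):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import Ridge

X, y = make_classification(n_samples=400, n_features=4, random_state=0)
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

def explain_locally(model, x, n_samples=500, scale=0.5, seed=0):
    """LIME-style sketch: fit a proximity-weighted linear surrogate around x."""
    rng = np.random.default_rng(seed)
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))  # perturb near x
    preds = model.predict_proba(Z)[:, 1]                          # black-box responses
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2)         # proximity kernel
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_                                        # local feature effects

coefs = explain_locally(black_box, X[0])
print("local coefficients:", np.round(coefs, 3))
```

The surrogate's coefficients approximate how each feature moves the prediction near that one instance; production tools add discretisation, sampling strategies, and fidelity checks.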
(RAI) Toolkit
Sustainability
Security
Interpretable Models
- imodels
- imodelsX
- interpretML ([CRAN](https://cran.r-project.org/web/packages/interpret/index.html))
- PiML Toolbox
- TensorFlow Lattice
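The libraries above (imodels, PiML) favour models whose decision logic can be read directly rather than explained after the fact. As an illustration of the idea with plain scikit-learn (not the imodels API), a deliberately shallow decision tree can be printed as human-readable rules:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
# A depth-2 tree: a lower accuracy ceiling, but every decision is inspectable.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
rules = export_text(tree, feature_names=["sepal_len", "sepal_wid",
                                         "petal_len", "petal_wid"])
print(rules)
```

The printed rule list is the whole model, which is the trade-off these toolkits formalise: constrained model classes (rule lists, shallow trees, lattices) in exchange for direct auditability.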
LLM Evaluation
(AI/Data) Poisoning
Privacy
Robustness
(AI) Watermarking
Poisoning
Reliability Evaluation (of post hoc explanation methods)
Regulations
European Union
- Hiroshima Process International Guiding Principles for Advanced AI system
- General Data Protection Regulation GDPR - Legal text for the EU GDPR regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC
- GDPR.EU Guide - A project co-funded by the Horizon 2020 Framework programme of the EU which provides a resource for organisations and individuals researching GDPR, including a library of straightforward and up-to-date information to help organisations achieve GDPR compliance.
- AI Act
- Data Act
- Data Governance Act
- Digital Markets Act
- Digital Services Act
Singapore
(AI) Watermarking
Canada
Sustainability
United States
- CCPA - The California Consumer Privacy Act. Similar legislation has been passed in Virginia ([VCDPA](https://lis.virginia.gov/cgi-bin/legp604.exe?212+sum+HB2307)) and Colorado ([ColoPA](https://leg.colorado.gov/sites/default/files/documents/2021A/bills/2021a_190_rer.pdf)).
- Executive Order on Maintaining American Leadership in AI - Executive Order 13859 (2019), directing US federal agencies to prioritize investment in AI research and development.
- Privacy Act of 1974 - The privacy act of 1974 which establishes a code of fair information practices that governs the collection, maintenance, use and dissemination of information about individuals that is maintained in systems of records by federal agencies.
- AI Bill of Rights - The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from IA threats based on five principles: Safe and Effective Systems, Algorithmic Discrimination Protections, Data Privacy, Notice and Explanation, and Human Alternatives, Consideration, and Fallback.
- HIPAA - The Health Insurance Portability and Accountability Act. Related US privacy statutes include [FCRA](https://www.ftc.gov/enforcement/statutes/fair-credit-reporting-act), [FERPA](https://www.cdc.gov/phlp/publications/topic/ferpa.html), [GLBA](https://www.ftc.gov/tips-advice/business-center/privacy-and-security/gramm-leach-bliley-act), [ECPA](https://bja.ojp.gov/program/it/privacy-civil-liberties/authorities/statutes/1285), [COPPA](https://www.ftc.gov/enforcement/rules/rulemaking-regulatory-reform-proceedings/childrens-online-privacy-protection-rule), [VPPA](https://www.law.cornell.edu/uscode/text/18/2710) and the [FTC Act](https://www.ftc.gov/enforcement/statutes/federal-trade-commission-act).
- EU-U.S. and Swiss-U.S. Privacy Shield Frameworks - The EU-U.S. and Swiss-U.S. Privacy Shield Frameworks were designed by the U.S. Department of Commerce and the European Commission and Swiss Administration to provide companies on both sides of the Atlantic with a mechanism to comply with data protection requirements when transferring personal data from the European Union and Switzerland to the United States in support of transatlantic commerce.
- Privacy Protection Act of 1980 - The Privacy Protection Act of 1980 protects journalists from being required to turn over to law enforcement any work product and documentary materials, including sources, before it is disseminated to the public.
Books
Open Access
Commercial / Proprietary / Closed Access
- Privacy-Preserving Machine Learning
- Varshney, K., 2022
- Thampi, A., 2022
- Mahoney, T., Varshney, K.R., Hind, M., 2020
- Rothman, D., 2020
- Kilroy, K., 2021
- Hall, P., Gill, N., Cox, B., 2020
- Human-In-The-Loop Machine Learning: Active Learning and Annotation for Human-Centered AI
- Interpretable Machine Learning With Python: Learn to Build Interpretable High-Performance Models With Hands-On Real-World Examples
Reports
Market Analysis
- The AI Index Report - published annually since 2017 - `Stanford Institute for Human-Centered Artificial Intelligence`
Other
Commercial / Proprietary / Closed Access
- Inferring Concept Drift Without Labeled Data, 2021
- Interpretability, Fast Forward Labs, 2020
- State of AI Report - published annually since 2018
(AI) Incidents databases
- AI Vulnerability Database (AVID)
- AIAAIC
- AI Badness: An open catalog of generative AI badness
- George Washington University Law School's AI Litigation Database
- Merging AI Incidents Research with Political Misinformation Research: Introducing the Political Deepfakes Incidents Database
- OECD AI Incidents Monitor
- Verica Open Incident Database (VOID)
Principles
Safety
- Google's AI Principles
- FAIR Principles
- Allianz's Principles for a responsible usage of AI
- Asilomar AI principles
- OECD's AI principles
- Telefonica's AI principles
- The Institute for Ethical AI & Machine Learning: The Responsible Machine Learning Principles
- European Commission's Guidelines for Trustworthy AI
- Microsoft's AI principles
Commercial / Proprietary / Closed Access
Standards
IEEE Standards
UNE/ISO Standards
- [UNE Specification 0077:2023](https://tienda.aenor.com/norma-une-especificacion-une-0077-2023-n0071116)
- [UNE Specification 0078:2023](https://tienda.aenor.com/norma-une-especificacion-une-0078-2023-n0071117)
- [UNE Specification 0079:2023](https://tienda.aenor.com/norma-une-especificacion-une-0079-2023-n0071118)
- [UNE Specification 0080:2023](https://tienda.aenor.com/norma-une-especificacion-une-0080-2023-n0071383)
- [UNE Specification 0081:2023](https://tienda.aenor.com/norma-une-especificacion-une-0081-2023-n0071807)
NIST Standards
Code of Ethics
Commercial / Proprietary / Closed Access
Data Sets
Commercial / Proprietary / Closed Access
Safety
Podcasts
Safety
Commercial / Proprietary / Closed Access
Newsletters
Safety
Data Privacy
Main Concepts
What is Open Source AI
Programming Languages
Sub Categories
- Sustainability (62)
- Performance (& Automated ML) (54)
- Interpretability/Explicability (47)
- Safety (29)
- Commercial / Proprietary / Closed Access (24)
- Fairness (16)
- Explainability (11)
- Privacy (10)
- Drift (10)
- Causal Inference (9)
- LLM Evaluation (8)
- European Union (8)
- United States (7)
- (AI) Incidents databases (7)
- Canada (7)
- (AI) Watermarking (6)
- Robustness (5)
- UNE/ISO Standards (5)
- Ethical Data Products (5)
- Interpretable Models (5)
- (RAI) Toolkit (4)
- Data Privacy (4)
- Other (4)
- Benchmarks (4)
- Bias (4)
- Security (3)
- (AI/Data) Poisoning (3)
- NIST Standards (3)
- Collections (3)
- Open Access (2)
- Evaluation (of model explanations) (2)
- Data/AI Ethics (1)
- What is Open Source AI (1)
- Singapore (1)
- Market Analysis (1)
- Ethical Design (1)
- Reliability Evaluation (of post hoc explanation methods) (1)
- IEEE Standards (1)
- Causality (1)
- Challenges (1)
- Reproducible/Non-Reproducible Research (1)
- Poisoning (1)
Keywords
machine-learning (34), python (14), data-science (11), explainable-ai (11), interpretability (10), xai (10), deep-learning (8), explainable-ml (8), scikit-learn (8), ai (7), artificial-intelligence (6), pytorch (6), mlops (6), interpretable-machine-learning (6), ml (6), explainability (5), differential-privacy (5), r (5), data-drift (4), transparency (4), automl (4), interpretable-ml (4), automated-machine-learning (4), llm (4), privacy (4), fairness-ai (4), r-package (3), trustworthy-ai (3), fairness (3), visualization (3), tensorflow (3), hyperparameter-optimization (3), bias (3), statistics (3), random-forest (3), xgboost (3), explainable-artificial-intelligence (3), interpretable-ai (3), jupyter-notebook (3), variable-importance (3), model-monitoring (3), feature-importance (3), shap (3), trusted-ai (2), machine-learning-algorithms (2), lightgbm (2), discrimination (2), gradient-boosting (2), keras (2), neural-architecture-search (2)