Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-trustworthy-deep-learning
A curated list of trustworthy deep learning papers, updated daily.
https://github.com/MinghuiChen43/awesome-trustworthy-deep-learning
Last synced: 3 days ago
-
Robustness Lists
- [OOD robustness and transfer learning](https://github.com/jindongwang/transferlearning) ![ ](https://img.shields.io/github/last-commit/jindongwang/transferlearning)
- [Must-read Papers on Textual Adversarial Attack and Defense](https://github.com/thunlp/TAADpapers) ![ ](https://img.shields.io/github/last-commit/thunlp/TAADpapers)
- [Backdoor Learning Resources](https://github.com/THUYimingLi/backdoor-learning-resources) ![ ](https://img.shields.io/github/last-commit/THUYimingLi/backdoor-learning-resources)
- [Paper of Robust ML](https://github.com/P2333/Papers-of-Robust-ML) ![ ](https://img.shields.io/github/last-commit/P2333/Papers-of-Robust-ML)
- [The Papers of Adversarial Examples](https://github.com/xiaosen-wang/Adversarial-Examples-Paper) ![ ](https://img.shields.io/github/last-commit/xiaosen-wang/Adversarial-Examples-Paper)
- A Complete List of All (arXiv) Adversarial Example Papers
-
Evasion Attacks and Defenses
-
Poisoning Attacks and Defenses
-
Privacy
-
Interpretability
-
Alignment
-
Others
-
Privacy Lists
- [Awesome Attacks on Machine Learning Privacy](https://github.com/stratosphereips/awesome-ml-privacy-attacks) ![ ](https://img.shields.io/github/last-commit/stratosphereips/awesome-ml-privacy-attacks)
- [Awesome Privacy](https://github.com/Guyanqi/Awesome-Privacy) ![ ](https://img.shields.io/github/last-commit/Guyanqi/Awesome-Privacy)
- [Privacy-Preserving-Machine-Learning-Resources](https://github.com/Ye-D/PPML-Resource) ![ ](https://img.shields.io/github/last-commit/Ye-D/PPML-Resource)
- [Awesome Machine Unlearning](https://github.com/tamlhp/awesome-machine-unlearning) ![ ](https://img.shields.io/github/last-commit/tamlhp/awesome-machine-unlearning)
- [Awesome Privacy Papers for Visual Data](https://github.com/brighter-ai/awesome-privacy-papers) ![ ](https://img.shields.io/github/last-commit/brighter-ai/awesome-privacy-papers)
-
Fairness Lists
- [Awesome Fairness Papers](https://github.com/uclanlp/awesome-fairness-papers) ![ ](https://img.shields.io/github/last-commit/uclanlp/awesome-fairness-papers)
- [Awesome Fairness in AI](https://github.com/datamllab/awesome-fairness-in-ai) ![ ](https://img.shields.io/github/last-commit/datamllab/awesome-fairness-in-ai)
-
Interpretability Lists
- [Awesome Machine Learning Interpretability](https://github.com/jphall663/awesome-machine-learning-interpretability) ![ ](https://img.shields.io/github/last-commit/jphall663/awesome-machine-learning-interpretability)
- [Awesome Interpretable Machine Learning](https://github.com/lopusz/awesome-interpretable-machine-learning) ![ ](https://img.shields.io/github/last-commit/lopusz/awesome-interpretable-machine-learning)
- [Awesome Explainable AI](https://github.com/wangyongjie-ntu/Awesome-explainable-AI) ![ ](https://img.shields.io/github/last-commit/wangyongjie-ntu/Awesome-explainable-AI)
- [Awesome Deep Learning Interpretability](https://github.com/oneTaken/awesome_deep_learning_interpretability) ![ ](https://img.shields.io/github/last-commit/oneTaken/awesome_deep_learning_interpretability)
- [Awesome Interpretability in Large Language Models](https://github.com/ruizheliUOA/Awesome-Interpretability-in-Large-Language-Models) ![ ](https://img.shields.io/github/last-commit/ruizheliUOA/Awesome-Interpretability-in-Large-Language-Models)
- [Awesome LLM Interpretability](https://github.com/JShollaj/awesome-llm-interpretability) ![ ](https://img.shields.io/github/last-commit/JShollaj/awesome-llm-interpretability)
-
Other Lists
- [Awesome Out-of-distribution Detection](https://github.com/iCGY96/awesome_OpenSetRecognition_list) ![ ](https://img.shields.io/github/last-commit/iCGY96/awesome_OpenSetRecognition_list)
- [Awesome Open Set Recognition list](https://github.com/iCGY96/awesome_OpenSetRecognition_list) ![ ](https://img.shields.io/github/last-commit/iCGY96/awesome_OpenSetRecognition_list)
- [Awesome Novel Class Discovery](https://github.com/JosephKJ/Awesome-Novel-Class-Discovery) ![ ](https://img.shields.io/github/last-commit/JosephKJ/Awesome-Novel-Class-Discovery)
- [Awesome Open-World-Learning](https://github.com/zhoudw-zdw/Awesome-open-world-learning) ![ ](https://img.shields.io/github/last-commit/zhoudw-zdw/Awesome-open-world-learning)
- [Blockchain Papers](https://github.com/decrypto-org/blockchain-papers) ![ ](https://img.shields.io/github/last-commit/decrypto-org/blockchain-papers)
- [Awesome Causality Algorithms](https://github.com/rguo12/awesome-causality-algorithms) ![ ](https://img.shields.io/github/last-commit/rguo12/awesome-causality-algorithms)
- [Awesome AI Security](https://github.com/DeepSpaceHarbor/Awesome-AI-Security) ![ ](https://img.shields.io/github/last-commit/DeepSpaceHarbor/Awesome-AI-Security)
- [A curated list of AI Security & Privacy events](https://github.com/ZhengyuZhao/AI-Security-and-Privacy-Events) ![ ](https://img.shields.io/github/last-commit/ZhengyuZhao/AI-Security-and-Privacy-Events)
- [Awesome Deep Phenomena](https://github.com/MinghuiChen43/awesome-deep-phenomena) ![ ](https://img.shields.io/github/last-commit/MinghuiChen43/awesome-deep-phenomena)
- [Awesome Blockchain AI](https://github.com/steven2358/awesome-blockchain-ai) ![ ](https://img.shields.io/github/last-commit/steven2358/awesome-blockchain-ai)
-
Robustness Toolboxes
- [Cleverhans](https://github.com/cleverhans-lab/cleverhans)
- [Adversarial Robustness Toolbox (ART)](https://github.com/Trusted-AI/adversarial-robustness-toolbox)
- [Adversarial-Attacks-Pytorch](https://github.com/Harry24k/adversarial-attacks-pytorch)
- AdverTorch
- RobustBench
- BackdoorBox
- BackdoorBench
- DeepDG: OOD generalization toolbox
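For a sense of how these attack toolboxes are used, here is a minimal, illustrative sketch with the Adversarial-Attacks-Pytorch (torchattacks) package from the list above; the toy model, random batch, and PGD hyperparameters are placeholders for this example, not anything mandated by the list.

```python
import torch
import torch.nn as nn
import torchattacks  # the Adversarial-Attacks-Pytorch package

# Toy classifier and a random batch stand in for a real model and dataset.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
images = torch.rand(8, 3, 32, 32)          # torchattacks expects inputs in [0, 1]
labels = torch.randint(0, 10, (8,))

# L_inf PGD attack; eps/alpha/steps are common CIFAR-10-style settings.
atk = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=10)
adv_images = atk(images, labels)

robust_acc = (model(adv_images).argmax(1) == labels).float().mean().item()
print(f"accuracy under PGD: {robust_acc:.2f}")
```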
-
Privacy Toolboxes
- [Diffprivlib](https://github.com/IBM/differential-privacy-library)
- Privacy Meter
- OpenDP
- PrivacyRaven
- PersonalizedFL
- [TAPAS](https://github.com/alan-turing-institute/privacy-sdg-toolbox)
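As a rough illustration of the differential-privacy tooling above, the sketch below uses Diffprivlib's Laplace mechanism and its private mean helper; the synthetic data, epsilon values, and bounds are made up for the example.

```python
import numpy as np
from diffprivlib.mechanisms import Laplace
from diffprivlib.tools import mean as dp_mean

ages = np.random.randint(18, 90, size=1000).astype(float)  # synthetic data

# Laplace mechanism on a count query: sensitivity 1, privacy budget epsilon = 1.
mech = Laplace(epsilon=1.0, sensitivity=1.0)
noisy_count = mech.randomise(float(len(ages)))

# Differentially private mean; bounds are supplied explicitly so they do not leak from the data.
noisy_mean = dp_mean(ages, epsilon=1.0, bounds=(18, 90))
print(noisy_count, noisy_mean)
```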
-
Fairness Toolboxes
- [AI Fairness 360](https://github.com/Trusted-AI/AIF360)
- Fairlearn
- Aequitas
- [FAT Forensics](https://github.com/fat-forensics/fat-forensics)
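To give a flavor of the fairness toolboxes above, here is a minimal sketch with Fairlearn's metrics; the labels, predictions, and sensitive-feature groups are fabricated for illustration.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, demographic_parity_difference

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])  # sensitive feature

# Accuracy disaggregated by group, plus the demographic parity gap between groups.
mf = MetricFrame(metrics=accuracy_score, y_true=y_true, y_pred=y_pred,
                 sensitive_features=group)
print(mf.by_group)
print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))
```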
-
Interpretability Toolboxes
- Lime
- [Deep Visualization Toolbox](https://github.com/yosinski/deep-visualization-toolbox)
- Captum
- Alibi
- [AI Explainability 360](https://github.com/Trusted-AI/AIX360)
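As a pointer for the attribution libraries above, the following sketch runs Captum's Integrated Gradients on a toy network; the model and inputs are placeholders, and any differentiable PyTorch model can be attributed the same way.

```python
import torch
import torch.nn as nn
from captum.attr import IntegratedGradients

# Toy network and inputs standing in for a real model and examples.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3)).eval()
inputs = torch.rand(2, 4, requires_grad=True)

# Attribute the class-1 score to each input feature via Integrated Gradients.
ig = IntegratedGradients(model)
attributions, delta = ig.attribute(inputs, target=1, return_convergence_delta=True)
print(attributions)
```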
-
Other Toolboxes
-
Robustness Workshops
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Distribution Shifts Connecting Methods and Applications (NeurIPS 2021)
- Workshop on Adversarial Robustness In the Real World (ICCV 2021)
- Uncertainty and Robustness in Deep Learning Workshop (ICML 2021)
- Uncertainty and Robustness in Deep Learning Workshop (ICML 2020)
- Backdoor Attacks and Defenses in Machine Learning (ICLR 2023)
- Adversarial Machine Learning on Computer Vision: Art of Robustness (CVPR 2023)
- Workshop on Adversarial Robustness In the Real World (ECCV 2022)
- Shift Happens Workshop (ICML 2022)
- Principles of Distribution Shift (ICML 2022)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- New Frontiers in Adversarial Machine Learning (ICML 2022)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
- Workshop on Spurious Correlations, Invariance, and Stability (ICML 2022)
- RobustML Workshop (ICLR 2021)
-
Other Workshops
- Secure and Safe Autonomous Driving (SSAD) Workshop and Challenge (CVPR 2023)
- Trustworthy and Reliable Large-Scale Machine Learning Models (ICLR 2023)
- TrustNLP: Third Workshop on Trustworthy Natural Language Processing (ACL 2023)
- Workshop on Physics for Machine Learning (ICLR 2023)
- Pitfalls of limited data and computation for Trustworthy ML (ICLR 2023)
- Workshop on Mathematical and Empirical Understanding of Foundation Models (ICLR 2023)
- Artificial Intelligence and Security (CCS 2022)
- Automotive and Autonomous Vehicle Security (AutoSec) (NDSS 2022)
- Trustworthy and Socially Responsible Machine Learning (NeurIPS 2022)
- International Workshop on Trustworthy Federated Learning (IJCAI 2022)
- Workshop on AI Safety (IJCAI 2022)
- Workshop on Distribution-Free Uncertainty Quantification (ICML 2022)
- First Workshop on Causal Representation Learning (UAI 2022)
- I Can’t Believe It’s Not Better! (ICBINB) Workshop Series
- 1st Workshop on Formal Verification of Machine Learning (ICML 2022)
- NeurIPS ML Safety Workshop (NeurIPS 2022)
-
Robustness Tutorials
- Tutorial on Domain Generalization (IJCAI-ECAI 2022)
- Practical Adversarial Robustness in Deep Learning: Problems and Solutions (CVPR 2021)
- A Tutorial about Adversarial Attacks & Defenses (KDD 2021)
- Adversarial Robustness of Deep Learning Models (ECCV 2020)
- [Adversarial Robustness: Theory and Practice (NeurIPS 2018)](https://adversarial-ml-tutorial.org/)
- Adversarial Machine Learning Tutorial (AAAI 2018)
-
Privacy Workshops
-
Fairness Workshops
-
Interpretability Workshops
-
Robustness Talks
-
Robustness Blogs
-
Interpretability Blogs
-
Other Blogs
- Cleverhans Blog - Ian Goodfellow, Nicolas Papernot
- AI Security and Privacy (AISP) Seminar Series
- ML Safety Newsletter
- Trustworthy ML Initiative
- Trustworthy AI Project
- ECE1784H: Trustworthy Machine Learning (Course, Fall 2019) - Nicolas Papernot
- A School for all Seasons on Trustworthy Machine Learning (Course) - Reza Shokri, Nicolas Papernot
- Trustworthy Machine Learning (Book)
- AI Safety Support (Lots of Links)
-
Fairness
-
Out-of-Distribution Generalization
Categories
Robustness Workshops (124), Robustness Tutorials (57), Alignment (20), Other Workshops (17), Interpretability (16), Privacy (15), Others (12), Evasion Attacks and Defenses (12), Other Blogs (10), Other Lists (10), Robustness Toolboxes (8), Other Toolboxes (8), Interpretability Lists (6), Privacy Toolboxes (6), Robustness Lists (6), Out-of-Distribution Generalization (6), Privacy Lists (5), Interpretability Toolboxes (5), Poisoning Attacks and Defenses (5), Interpretability Blogs (4), Fairness (4), Fairness Toolboxes (4), Robustness Blogs (3), Fairness Lists (2), Robustness Talks (2), Interpretability Workshops (1), Privacy Workshops (1), Fairness Workshops (1)
Keywords
machine-learning (23), deep-learning (13), awesome-list (11), awesome (8), privacy (8), interpretability (7), xai (6), fairness (6), explainable-ai (5), artificial-intelligence (5), adversarial-attacks (5), python (5), interpretable-ai (5), ai (5), adversarial-machine-learning (4), adversarial-examples (4), data-privacy (3), pytorch (3), interpretable-ml (3), bias (3), papers (3), trusted-ai (3), data-science (3), differential-privacy (3), ibm-research-ai (2), fairness-testing (2), fairness-ai (2), codait (2), ibm-research (2), transparency (2), gdpr (2), membership-inference (2), r (2), privacy-preserving-machine-learning (2), privacy-enhancing-technologies (2), computer-vision (2), benchmarking (2), toolbox (2), security (2), trustworthy-ai (2), inference (2), explainable-ml (2), uncertainty (2), calibration (2), uncertainty-calibration (2), uncertainty-estimation (2), causal-inference (2), uncertainty-quantification (2), adversarial-learning (2), causality (2)