Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Projects in Awesome Lists by Trusted-AI
A curated list of projects in awesome lists by Trusted-AI.
https://github.com/Trusted-AI/adversarial-robustness-toolbox
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
adversarial-attacks adversarial-examples adversarial-machine-learning ai artificial-intelligence attack blue-team evasion extraction inference machine-learning poisoning privacy python red-team trusted-ai trustworthy-ai
Last synced: 31 Jul 2024
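ART implements evasion attacks such as the Fast Gradient Sign Method (FGSM). As a conceptual illustration (not ART's own API), the core idea can be sketched in plain NumPy on a toy logistic-regression model: perturb the input by a small step in the sign of the loss gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """Fast Gradient Sign Method on a logistic-regression model.

    The gradient of the cross-entropy loss w.r.t. the input x is
    (sigmoid(w.x + b) - y) * w; the attack steps eps in its sign direction.
    """
    grad = (sigmoid(np.dot(w, x) + b) - y) * w
    return x + eps * np.sign(grad)

# Toy model that classifies x by the sign of its first feature.
w = np.array([2.0, 0.0])
b = 0.0
x = np.array([0.3, 1.0])  # clean input, predicted class 1
x_adv = fgsm_attack(x, y=1.0, w=w, b=b, eps=0.5)
print(sigmoid(np.dot(w, x) + b) > 0.5)      # clean prediction: class 1
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # adversarial prediction flips
```

ART provides the same attack (and many others) against real TensorFlow, PyTorch, and scikit-learn models via its estimator wrappers.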
https://github.com/IBM/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
ai artificial-intelligence bias bias-correction bias-detection bias-finder bias-reduction codait deep-learning discrimination fairness fairness-ai fairness-awareness-model fairness-testing ibm-research ibm-research-ai machine-learning python r trusted-ai
Last synced: 28 Aug 2024
https://github.com/Trusted-AI/AIF360
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
ai artificial-intelligence bias bias-correction bias-detection bias-finder bias-reduction codait deep-learning discrimination fairness fairness-ai fairness-awareness-model fairness-testing ibm-research ibm-research-ai machine-learning python r trusted-ai
Last synced: 31 Jul 2024
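Two of the group-fairness metrics AIF360 reports, disparate impact and statistical parity difference, can be sketched in plain NumPy (a conceptual illustration, not AIF360's own API, which operates on its dataset classes):

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    A value near 1.0 indicates parity; the common "80% rule" flags
    values below 0.8 as potentially discriminatory.
    """
    rate_unpriv = y_pred[protected == 0].mean()
    rate_priv = y_pred[protected == 1].mean()
    return rate_unpriv / rate_priv

def statistical_parity_difference(y_pred, protected):
    """Difference of favorable-outcome rates: unprivileged - privileged."""
    return y_pred[protected == 0].mean() - y_pred[protected == 1].mean()

# Toy predictions: the privileged group (1) receives the favorable
# outcome 3/4 of the time, the unprivileged group (0) only 1/4.
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])
protected = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(disparate_impact(y_pred, protected))               # 0.25 / 0.75 = 1/3
print(statistical_parity_difference(y_pred, protected))  # -0.5
```

AIF360 additionally supplies mitigation algorithms (pre-, in-, and post-processing) to push such metrics back toward parity.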
https://github.com/Trusted-AI/AIX360
Interpretability and explainability of data and machine learning models
artificial-intelligence codait deep-learning explainability explainable-ai explainable-ml ibm-research ibm-research-ai machine-learning trusted-ai trusted-ml xai
Last synced: 31 Jul 2024
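One model-agnostic interpretability idea in the family AIX360 covers is permutation feature importance: shuffle one input column at a time and measure how much the model's accuracy drops. A minimal NumPy sketch (a conceptual illustration, not AIX360's own API):

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Model-agnostic feature importance: shuffle one column at a time
    and measure the drop in accuracy. Large drops mean the model
    relies on that feature; near-zero drops mean it is ignored."""
    rng = np.random.default_rng(seed)
    base = (model(X) == y).mean()
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])
            drops.append(base - (model(Xp) == y).mean())
        importances[j] = np.mean(drops)
    return importances

# Toy black box that only looks at feature 0.
model = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
imp = permutation_importance(model, X, y)
print(imp)  # feature 0 dominates; features 1 and 2 are ~0
```

AIX360 itself offers a broader toolbox, from directly interpretable models to local and global post-hoc explainers.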