Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
dssg/aequitas: Bias Auditing & Fair ML Toolkit
https://github.com/dssg/aequitas
- Host: GitHub
- URL: https://github.com/dssg/aequitas
- Owner: dssg
- License: MIT
- Created: 2018-02-13T19:40:30.000Z (almost 7 years ago)
- Default Branch: master
- Last Pushed: 2024-09-11T16:24:06.000Z (4 months ago)
- Last Synced: 2024-10-11T17:50:31.171Z (3 months ago)
- Topics: bias, fairness, fairness-testing, machine-bias
- Language: Python
- Homepage: http://www.datasciencepublicpolicy.org/aequitas/
- Size: 977 MB
- Stars: 683
- Watchers: 43
- Forks: 113
- Open Issues: 51
Metadata Files:
- Readme: README.md
- License: LICENSE
- Authors: AUTHORS.rst
Awesome Lists containing this project
- awesome-trustworthy-deep-learning - Aequitas
- Awesome-AIML-Data-Ops - Aequitas - An open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive risk-assessment tools. (Explaining Black Box Models and Datasets)
- awesome-production-machine-learning - Aequitas - An open-source bias audit toolkit for data scientists, machine learning researchers, and policymakers to audit machine learning models for discrimination and bias, and to make informed and equitable decisions around developing and deploying predictive risk-assessment tools. (Explaining Black Box Models and Datasets)
- awesome-python-machine-learning-resources - Aequitas (GitHub stats snapshot: 65% open issues, last updated 27.05.2021) (Model interpretability)
- StarryDivineSky - dssg/aequitas
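The list descriptions above give the gist of what Aequitas does: compute confusion-matrix-based metrics per demographic group, derive disparities against a reference group, and flag which disparities fall outside a tolerated range. Below is a minimal sketch of that flow using the Group/Bias/Fairness classes documented for pre-1.0 Aequitas releases; the toy data, column names ('score', 'label_value', 'race', 'sex'), and reference groups are illustrative assumptions, and newer releases may expose a different entry point, so consult the project homepage above for the current API.

```python
# Minimal bias-audit sketch, assuming the Group/Bias/Fairness API of
# pre-1.0 aequitas releases (pip install aequitas). The data, column
# names, and reference groups below are illustrative assumptions.
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# Aequitas audits a flat table of scored entities: a binary 'score'
# (model decision), a binary 'label_value' (ground truth), and one or
# more categorical attribute columns to group by.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0],
    "label_value": [1, 0, 1, 0, 0, 1, 1, 0],
    "race":        ["white", "white", "black", "black",
                    "black", "white", "black", "white"],
    "sex":         ["male", "female", "female", "male",
                    "female", "male", "male", "female"],
})

# 1. Crosstabs: confusion-matrix counts and rates for every group value.
xtab, _ = Group().get_crosstabs(df)

# 2. Disparities: each group's metrics divided by a chosen reference group.
bdf = Bias().get_disparity_predefined_groups(
    xtab,
    original_df=df,
    ref_groups_dict={"race": "white", "sex": "male"},
)

# 3. Parity determinations: flag disparities outside the tolerated range.
fdf = Fairness().get_group_value_fairness(bdf)

# Inspect per-group metrics, disparities, and parity flags (column names
# such as 'fpr_disparity' follow the pre-1.0 documentation).
print(fdf.head())
```

The key convention is that Aequitas audits a table of predictions rather than a model object, which is why only scores, labels, and group attributes appear in the input frame.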