Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems 🔎🤖🧰
https://github.com/ResponsiblyAI/responsibly
artificial-intelligence audit bias bias-correction bias-finder bias-reduction data-science ethics fairness fairness-ai fairness-awareness-model fairness-ml fairness-testing machine-bias machine-learning natural-language-processing python
- Host: GitHub
- URL: https://github.com/ResponsiblyAI/responsibly
- Owner: ResponsiblyAI
- License: mit
- Created: 2018-08-02T11:31:18.000Z (almost 6 years ago)
- Default Branch: master
- Last Pushed: 2023-11-17T17:33:35.000Z (6 months ago)
- Last Synced: 2024-02-27T07:44:02.299Z (3 months ago)
- Topics: artificial-intelligence, audit, bias, bias-correction, bias-finder, bias-reduction, data-science, ethics, fairness, fairness-ai, fairness-awareness-model, fairness-ml, fairness-testing, machine-bias, machine-learning, natural-language-processing, python
- Language: Python
- Homepage: http://docs.responsibly.ai
- Size: 33 MB
- Stars: 87
- Watchers: 6
- Forks: 20
- Open Issues: 12
Metadata Files:
- Readme: README.rst
- Changelog: CHANGELOG.rst
- Contributing: CONTRIBUTING.rst
- License: LICENSE
Lists
- awesome-production-machine-learning - responsibly - Toolkit for auditing and mitigating bias and fairness of machine learning systems (Explaining Black Box Models and Datasets)
- awesome-machine-learning-interpretability - responsibly
- Awesome-AIML-Data-Ops - responsibly - Toolkit for auditing and mitigating bias and fairness of machine learning systems (Explaining Black Box Models and Datasets)
README
Responsibly
===========

.. image:: https://img.shields.io/badge/docs-passing-brightgreen.svg
   :target: https://docs.responsibly.ai

.. image:: https://img.shields.io/gitter/room/nwjs/nw.js.svg
   :alt: Join the chat at https://gitter.im/ResponsiblyAI/responsibly
   :target: https://gitter.im/ResponsiblyAI/responsibly

.. image:: https://img.shields.io/github/workflow/status/ResponsiblyAI/responsibly/CI/master.svg
   :target: https://github.com/ResponsiblyAI/responsibly/actions/workflows/ci.yml

.. image:: https://img.shields.io/coveralls/ResponsiblyAI/responsibly/master.svg
   :target: https://coveralls.io/r/ResponsiblyAI/responsibly

.. image:: https://img.shields.io/scrutinizer/g/ResponsiblyAI/responsibly.svg
   :target: https://scrutinizer-ci.com/g/ResponsiblyAI/responsibly/?branch=master

.. image:: https://img.shields.io/pypi/v/responsibly.svg
   :target: https://pypi.org/project/responsibly

.. image:: https://img.shields.io/github/license/ResponsiblyAI/responsibly.svg
   :target: https://docs.responsibly.ai/about/license.html

**Toolkit for Auditing and Mitigating Bias and Fairness**
**of Machine Learning Systems 🔎🤖🧰**

*Responsibly* is developed with **practitioners** and **researchers** in mind,
but also for learners. Therefore, it is compatible with
the data science and machine learning tools of the trade in Python,
such as Numpy, Pandas, and especially **scikit-learn**.

The primary goal is to be a one-stop shop for **auditing** bias
and fairness of machine learning systems, and the secondary one
is to mitigate bias and adjust fairness through
**algorithmic interventions**.
In addition, there is a particular focus on **NLP** models.

*Responsibly* consists of three sub-packages:
1. ``responsibly.dataset``

   Collection of common benchmark datasets from fairness research.

2. ``responsibly.fairness``

   Demographic fairness in binary classification,
   including metrics and algorithmic interventions.

3. ``responsibly.we``

   Metrics and debiasing methods for bias (such as gender and race)
   in word embedding.
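
For a sense of how the sub-packages fit together, a minimal sketch follows.
The imported names (``COMPASDataset``, ``load_w2v_small``, ``GenderBiasWE``
and its methods) are assumptions made for illustration and may differ from
the current API; the authoritative reference is https://docs.responsibly.ai.

.. code:: python

    # Illustrative sketch only -- the names below are assumptions;
    # consult https://docs.responsibly.ai for the actual API.

    # responsibly.dataset: benchmark datasets from fairness research
    from responsibly.dataset import COMPASDataset  # assumed class name

    compas = COMPASDataset()
    print(compas.df.head())  # assumed: the dataset exposes a pandas DataFrame

    # responsibly.we: bias metrics and debiasing for word embedding
    from responsibly.we import GenderBiasWE, load_w2v_small  # assumed names

    w2v_model = load_w2v_small()           # assumed helper: small word2vec model
    gender_bias = GenderBiasWE(w2v_model)  # wraps a gensim KeyedVectors-like model
    print(gender_bias.calc_direct_bias())  # assumed: direct-bias score
    gender_bias.debias(method='hard')      # assumed: hard-debiasing intervention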
For fairness, *Responsibly*'s functionality is aligned with the book
`Fairness and Machine Learning - Limitations and Opportunities <https://fairmlbook.org>`_
by Solon Barocas, Moritz Hardt and Arvind Narayanan.

If you would like to ask for a feature or report a bug,
please open a
`new issue <https://github.com/ResponsiblyAI/responsibly/issues/new>`_
or write to us on `Gitter <https://gitter.im/ResponsiblyAI/responsibly>`_.

Requirements
------------

- Python 3.6+
Installation
------------

Install responsibly with pip:
.. code:: sh

   $ pip install responsibly
or directly from the source code:
.. code:: sh

   $ git clone https://github.com/ResponsiblyAI/responsibly.git
   $ cd responsibly
   $ python setup.py install
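
To verify the installation, the package should be importable from Python
(``__version__`` is assumed to be defined, as is conventional for PyPI
packages; ``pip show responsibly`` reports the installed version either way):

.. code:: python

    import responsibly

    # Confirm the package is importable; printing the version is a quick sanity check.
    # ``__version__`` is assumed to exist -- if not, use ``pip show responsibly``.
    print(responsibly.__version__)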
Citation
--------

If you have used *Responsibly* in a scientific publication,
we would appreciate citations to the following::
    @Misc{,
      author = {Shlomi Hod},
      title = {{Responsibly}: Toolkit for Auditing and Mitigating Bias and Fairness of Machine Learning Systems},
      year = {2018--},
      url = "http://docs.responsibly.ai/",
      note = {[Online; accessed ]}
    }