https://github.com/yochem/bursting-burden
📝 Accompanying code for our paper "Bursting the Burden Bubble: An Assessment of Sharma et al.’s Counterfactual-Based Fairness Metric"
- Host: GitHub
- URL: https://github.com/yochem/bursting-burden
- Owner: yochem
- License: mit
- Created: 2022-08-31T12:04:53.000Z (about 3 years ago)
- Default Branch: main
- Last Pushed: 2024-10-10T08:06:29.000Z (about 1 year ago)
- Last Synced: 2025-04-30T21:52:39.498Z (6 months ago)
- Topics: burden, certifai, fairness, metric, sharma
- Language: Jupyter Notebook
- Homepage:
- Size: 3.87 MB
- Stars: 6
- Watchers: 2
- Forks: 0
- Open Issues: 0
Metadata Files:
- Readme: README.md
- License: LICENSE
README
# Bursting the Burden Bubble
**An assessment of Sharma et al.'s Counterfactual-Based Fairness Metric.**
By Yochem van Rosmalen, Florian van der Steen, Sebastiaan Jans, and Daan van
der Weijden. As presented at the [BNAIC/BeNeLearn](https://bnaic2022.uantwerpen.be/) 2022
conference in Mechelen, Belgium.

### Abstract
Machine learning has seen an increase in negative publicity in recent years,
due to biased, unfair, and uninterpretable models. There is a rising interest
in making machine learning models more fair for unprivileged communities, such
as women or people of color. Metrics are needed to evaluate the fairness of a
model. A novel metric for evaluating fairness between groups is Burden, which
uses counterfactuals to approximate the average distance of negatively
classified individuals in a group to the decision boundary of the model. The
goal of this study is to compare Burden to statistical parity, a well-known
fairness metric, and discover Burden's advantages and disadvantages. We do this
by calculating the Burden and statistical parity of a sensitive attribute in
three datasets: two synthetic datasets are created to display differences
between the two metrics, and one real-world dataset is used. We show that
Burden can be more nuanced than statistical parity, but also that the metrics
can disagree on which group is treated unfairly. We therefore conclude that
Burden is a valuable metric to add to the existing group of fairness metrics,
but should not be used on its own.

Read the full paper at [bursting-burden.pdf](paper/bursting-burden.pdf)!
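To make the comparison in the abstract concrete, here is a minimal sketch of the two metrics for a toy linear classifier. This is illustrative only and not the paper's implementation: the data, weights, and function names are invented, and the exact distance-to-boundary of a linear model stands in for CERTIFAI's genetic counterfactual search, which approximates that distance for arbitrary black-box models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: predict positive when w @ x + b > 0.
w, b = np.array([1.0, 2.0]), -1.0

X = rng.normal(size=(200, 2))                       # feature matrix
group = rng.integers(0, 2, size=200).astype(bool)   # binary sensitive attribute
y_pred = X @ w + b > 0                              # model predictions

def statistical_parity(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return y_pred[group].mean() - y_pred[~group].mean()

def burden(X, y_pred, group):
    """Per-group mean distance of negatively classified points to the
    decision boundary (exact for a linear model; CERTIFAI approximates
    this distance via counterfactuals for black-box models)."""
    dist = np.abs(X @ w + b) / np.linalg.norm(w)
    neg = ~y_pred
    return dist[group & neg].mean(), dist[~group & neg].mean()

sp = statistical_parity(y_pred, group)
b0, b1 = burden(X, y_pred, group)  # higher burden = group must change more
```

A larger burden for one group means its rejected members sit farther from the boundary, i.e. would have to change more to be accepted; the two metrics can rank the groups differently, which is the disagreement the paper studies.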
### Credits
The implementation of CERTIFAI was written by @Ighina and can be found at
github.com/Ighina/CERTIFAI. That project's Python file, CERTIFAI.py, is
included in this repository under the MIT license.