# Awesome Machine Learning Fairness [![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)](https://github.com/sindresorhus/awesome)

> Research papers and online resources on **machine learning fairness**.
> If I miss your paper, please let me know!
> Contact: *[email protected]*

## Table of Contents

1. [Survey](#survey)
1. [Book, Blog, Case Study, and Introduction](#book-blog-case-study-and-introduction)
1. [Group Fairness in Classification](#group-fairness-in-classification)
1. [Individual Fairness](#individual-fairness)
1. [Minimax Fairness](#minimax-fairness)
1. [Counterfactual Fairness](#counterfactual-fairness)
1. [Graph Mining](#graph-mining)
1. [Online Learning & Bandits](#online-learning--bandits)
1. [Clustering](#clustering)
1. [Regression](#regression)
1. [Outlier Detection](#outlier-detection)
1. [Ranking](#ranking)
1. [Generation](#generation)
1. [Fairness and Robustness](#fairness-and-robustness)
1. [Transfer & Federated Learning](#transfer--federated-learning)
1. [Long-term Impact](#long-term-impact)
1. [Trustworthiness](#trustworthiness)
1. [Auditing](#auditing)
1. [Empirical Study](#empirical-study)
1. [Software Engineering](#software-engineering)
1. [Library & Toolkit](#library--toolkit)
1. [Dataset](#dataset)

For fairness & bias in **computer vision** & **natural language processing**, please refer to:
1. [Computer Vision](cv.md)
2. [Natural Language Processing](nlp.md)

## Survey

1. [Fairness in rankings and recommendations: an overview](https://link.springer.com/content/pdf/10.1007/s00778-021-00697-y.pdf?pdf=button), The VLDB Journal'22
1. [Fairness in Ranking, Part I: Score-based Ranking](https://dl.acm.org/doi/pdf/10.1145/3533379), ACM Computing Surveys'22
1. [Fairness in Ranking, Part II: Learning-to-Rank and Recommender Systems](https://dl.acm.org/doi/pdf/10.1145/3533380), ACM Computing Surveys'22
1. [A survey on datasets for fairness-aware machine learning](https://wires.onlinelibrary.wiley.com/doi/epdf/10.1002/widm.1452), WIREs Data Mining and Knowledge Discovery'22
1. [A Survey on Bias and Fairness in Machine Learning](https://dl.acm.org/doi/pdf/10.1145/3457607), ACM Computing Surveys'21
1. [An Overview of Fairness in Clustering](https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9541160), IEEE Access'21
1. [Trustworthy AI: A Computational Perspective](https://arxiv.org/pdf/2107.06641.pdf), arXiv'21
1. [Algorithm Fairness in AI for Medicine and Healthcare](https://arxiv.org/pdf/2110.00603.pdf), arXiv'21
1. [Socially Responsible AI Algorithms: Issues, Purposes, and Challenges](https://arxiv.org/pdf/2101.02032.pdf), arXiv'21
1. [Fairness in learning-based sequential decision algorithms: A survey](https://arxiv.org/pdf/2001.04861.pdf), arXiv'20
1. [Language (Technology) is Power: A Critical Survey of “Bias” in NLP](https://aclanthology.org/2020.acl-main.485.pdf), ACL'20
1. [Fairness in Machine Learning: A Survey](https://arxiv.org/abs/2010.04053), arXiv'20
1. [The Frontiers of Fairness in Machine Learning](https://arxiv.org/abs/1810.08810), arXiv'18
1. [The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning](https://arxiv.org/pdf/1808.00023.pdf), arXiv'18

## Book, Blog, Case Study, and Introduction

1. [Identifying, measuring, and mitigating individual unfairness for supervised learning models and application to credit risk models](https://arxiv.org/pdf/2211.06106.pdf)
1. [Assessing and mitigating unfairness in credit models with the Fairlearn toolkit](https://www.microsoft.com/en-us/research/uploads/prod/2020/09/Fairlearn-EY_WhitePaper-2020-09-22.pdf)
1. [To regulate AI, try playing in a sandbox](https://www.emergingtechbrew.com/stories/2021/05/26/regulate-ai-just-play-sandbox), Emerging Tech Brew
1. [NSF grant decisions reflect systemic racism, study argues](https://www.science.org/content/article/nsf-grant-decisions-reflect-systemic-racism-study-argues)
1. [Fairness and Machine Learning: Limitations and Opportunities](https://fairmlbook.org/pdf/fairmlbook.pdf)
1. [Apple Card algorithm sparks gender bias allegations against Goldman Sachs](https://www.washingtonpost.com/business/2019/11/11/apple-card-algorithm-sparks-gender-bias-allegations-against-goldman-sachs/)
1. [Improving Fairness in Machine Learning Systems: What Do Industry Practitioners Need?](https://dl.acm.org/doi/pdf/10.1145/3290605.3300830), CHI'19
1. [Unequal Representation and Gender Stereotypes in Image Search Results for Occupations](https://dl.acm.org/doi/pdf/10.1145/2702123.2702520), CHI'15
1. [Big Data’s Disparate Impact](https://www.californialawreview.org/wp-content/uploads/2016/06/2Barocas-Selbst.pdf), California Law Review
1. [An Analysis of the New York City Police Department’s “Stop-and-Frisk” Policy in the Context of Claims of Racial Bias](http://www.stat.columbia.edu/~gelman/research/published/frisk9.pdf), JASA'07
1. [What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias](https://www.theverge.com/21298762/face-depixelizer-ai-machine-learning-tool-pulse-stylegan-obama-bias)
1. [Amazon scraps secret AI recruiting tool that showed bias against women](https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G)
1. [Consumer-Lending Discrimination in the FinTech Era](http://faculty.haas.berkeley.edu/morse/research/papers/discrim.pdf?_ga=2.180355815.1160518121.1646542021-2138401856.1646542021)
1. [Apple Card Investigated After Gender Discrimination Complaints](https://www.nytimes.com/2019/11/10/business/Apple-credit-card-investigation.html)
1. [When a Computer Program Keeps You in Jail](https://www.nytimes.com/2017/06/13/opinion/how-computers-are-harming-criminal-justice.html)
1. [European Union regulations on algorithmic decision-making and a “right to explanation”](https://arxiv.org/pdf/1606.08813.pdf)
1. [An Algorithm That Grants Freedom, or Takes It Away](https://www.nytimes.com/2020/02/06/technology/predictive-algorithms-crime.html)

## Group Fairness in Classification

### Pre-processing

1. [Achieving Fairness at No Utility Cost via Data Reweighing](https://arxiv.org/pdf/2202.00787.pdf), ICML'22
1. [Fairness with Adaptive Weights](https://proceedings.mlr.press/v162/chai22a/chai22a.pdf), ICML'22
1. [Bias in Machine Learning Software: Why? How? What to Do?](https://arxiv.org/pdf/2105.12195.pdf), FSE'21
1. [Identifying and Correcting Label Bias in Machine Learning](http://proceedings.mlr.press/v108/jiang20a/jiang20a.pdf), AISTATS'20
1. [Fair Class Balancing: Enhancing Model Fairness without Observing Sensitive Attributes](https://dl.acm.org/doi/pdf/10.1145/3340531.3411980), CIKM'20
1. [Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions](http://proceedings.mlr.press/v97/wang19l/wang19l.pdf), ICML'19
1. [Adaptive Sensitive Reweighting to Mitigate Bias in Fairness-aware Classification](https://dl.acm.org/doi/pdf/10.1145/3178876.3186133), WWW'18
1. [Optimized Pre-Processing for Discrimination Prevention](https://proceedings.neurips.cc/paper/2017/file/9a49a25d845a483fae4be7e341368e36-Paper.pdf), NeurIPS'17
1. [Certifying and Removing Disparate Impact](https://dl.acm.org/doi/pdf/10.1145/2783258.2783311), KDD'15
1. [Learning Fair Representations](http://www.cs.toronto.edu/~toni/Papers/icml-final.pdf), ICML'13
1. [Data preprocessing techniques for classification without discrimination](https://link.springer.com/content/pdf/10.1007/s10115-011-0463-8.pdf), Knowledge and Information Systems'12
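
The reweighing idea behind several of these pre-processing papers (most directly Kamiran & Calders' "Data preprocessing techniques for classification without discrimination") fits in a few lines: weight each instance by P(S=s)·P(Y=y)/P(S=s, Y=y), so that the sensitive attribute and the label become independent under the weighted empirical distribution. A minimal sketch (variable names are illustrative, not from the paper):

```python
from collections import Counter

def reweigh(sensitive, labels):
    """Instance weights that make S and Y independent in the
    weighted data: w(s, y) = P(s) * P(y) / P(s, y)."""
    n = len(labels)
    p_s = Counter(sensitive)                # group counts
    p_y = Counter(labels)                   # label counts
    p_sy = Counter(zip(sensitive, labels))  # joint counts
    return [
        (p_s[s] / n) * (p_y[y] / n) / (p_sy[(s, y)] / n)
        for s, y in zip(sensitive, labels)
    ]

# Toy data: group "a" is over-represented among positives,
# so (a, 1) instances are downweighted (0.75) and the
# under-represented combinations are upweighted (1.5).
s = ["a", "a", "a", "b", "b", "b"]
y = [1, 1, 0, 1, 0, 0]
w = reweigh(s, y)
```

Any learner that accepts sample weights can consume these directly.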

### In-processing

1. [On Learning Fairness and Accuracy on Multiple Subgroups](https://openreview.net/pdf?id=YsRH6uVcx2l), NeurIPS'22
1. [Fair Representation Learning through Implicit Path Alignment](https://proceedings.mlr.press/v162/shui22a/shui22a.pdf), ICML'22
1. [Fair Generalized Linear Models with a Convex Penalty](https://proceedings.mlr.press/v162/do22a/do22a.pdf), ICML'22
1. [Fair Normalizing Flows](https://openreview.net/pdf?id=BrFIKuxrZE), ICLR'22
1. [A Stochastic Optimization Framework for Fair Risk Minimization](https://arxiv.org/pdf/2102.12586.pdf), arXiv'22
1. [Fairness via Representation Neutralization](https://papers.nips.cc/paper/2021/file/64ff7983a47d331b13a81156e2f4d29d-Paper.pdf), NeurIPS'21
1. [Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints](https://papers.nips.cc/paper/2021/file/fc2e6a440b94f64831840137698021e1-Paper.pdf), NeurIPS'21
1. [A Fair Classifier Using Kernel Density Estimation](https://papers.nips.cc/paper/2020/file/ac3870fcad1cfc367825cda0101eee62-Paper.pdf), NeurIPS'20
1. [Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning](https://papers.nips.cc/paper/2020/file/af9c0e0c1dee63e5acad8b7ed1a5be96-Paper.pdf), NeurIPS'20
1. [Rényi Fair Inference](https://openreview.net/pdf?id=HkgsUJrtDB), ICLR'20
1. [Conditional Learning of Fair Representations](https://openreview.net/pdf?id=Hkekl0NFPr), ICLR'20
1. [A General Approach to Fairness with Optimal Transport](https://ojs.aaai.org/index.php/AAAI/article/view/5771/5627), AAAI'20
1. [Fairness Constraints: A Flexible Approach for Fair Classification](https://www.jmlr.org/papers/volume20/18-262/18-262.pdf), JMLR'19
1. [Fair Regression: Quantitative Definitions and Reduction-based Algorithms](), ICML'19
1. [Wasserstein Fair Classification](http://proceedings.mlr.press/v115/jiang20a/jiang20a.pdf), UAI'19
1. [Empirical Risk Minimization Under Fairness Constraints](https://papers.nips.cc/paper/2018/file/83cdcec08fbf90370fcf53bdd56604ff-Paper.pdf), NeurIPS'18
1. [A Reductions Approach to Fair Classification](http://proceedings.mlr.press/v80/agarwal18a/agarwal18a.pdf), ICML'18
1. [Mitigating Unwanted Biases with Adversarial Learning](https://dl.acm.org/doi/pdf/10.1145/3278721.3278779), AIES'18
1. [Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees](https://arxiv.org/pdf/1806.06055.pdf), arXiv'18
1. [Fairness Constraints: Mechanisms for Fair Classification](http://proceedings.mlr.press/v54/zafar17a/zafar17a.pdf), AISTATS'17
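
To give a flavor of the constraint-based methods above, here is a rough sketch of the decision-boundary covariance penalty popularized by Zafar et al. (AISTATS'17): logistic regression plus a penalty on the covariance between the centered sensitive attribute and the signed distance to the boundary. This is an illustrative gradient-descent toy, not the authors' implementation:

```python
import numpy as np

def fair_logreg(X, y, s, lam=1.0, lr=0.1, steps=2000):
    """Logistic regression minimizing
    logloss + lam * cov(s, theta^T x)^2 by gradient descent."""
    n, d = X.shape
    theta = np.zeros(d)
    s_c = s - s.mean()                          # centered sensitive attribute
    for _ in range(steps):
        z = X @ theta
        p = 1.0 / (1.0 + np.exp(-z))            # sigmoid
        grad_ll = X.T @ (p - y) / n             # logloss gradient
        cov = (s_c @ z) / n                     # boundary covariance
        grad_cov = 2.0 * lam * cov * (X.T @ s_c) / n
        theta -= lr * (grad_ll + grad_cov)
    return theta

# Toy data: feature 0 is correlated with the sensitive attribute,
# so the unconstrained fit leaks s into the decision boundary.
rng = np.random.default_rng(0)
n = 200
s = rng.integers(0, 2, n).astype(float)
X = np.column_stack([s + 0.3 * rng.standard_normal(n),
                     rng.standard_normal(n)])
y = (X[:, 0] + X[:, 1] > 0.5).astype(float)

s_c = s - s.mean()
cov_plain = abs(s_c @ (X @ fair_logreg(X, y, s, lam=0.0)) / n)
cov_fair = abs(s_c @ (X @ fair_logreg(X, y, s, lam=10.0)) / n)
```

Increasing `lam` shrinks the boundary covariance at some cost in accuracy, which is exactly the tradeoff studied in the papers of the next subsection.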

### Post-processing

1. [FairCal: Fairness Calibration for Face Verification](https://openreview.net/pdf?id=nRj0NcmSuxb), ICLR'22
1. [Fairness-aware Model-agnostic Positive and Unlabeled Learning](https://facctconference.org/static/pdfs_2022/facct22-136.pdf), FAccT'22
1. [FACT: A Diagnostic for Group Fairness Trade-offs](http://proceedings.mlr.press/v119/kim20a/kim20a.pdf), ICML'20
1. [Equality of Opportunity in Supervised Learning](https://proceedings.neurips.cc/paper/2016/file/9d2682367c3935defcb1f9e247a97c0d-Paper.pdf), NeurIPS'16
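
Post-processing in the style of Hardt et al.'s "Equality of Opportunity in Supervised Learning" leaves the scoring model untouched and only adjusts decision thresholds per group. A simplified sketch (exhaustive search over observed scores; the actual method solves a small linear program and may require randomized thresholds):

```python
def tpr(scores, labels, thr):
    """True-positive rate at threshold thr."""
    pos = [sc for sc, y in zip(scores, labels) if y == 1]
    return sum(sc >= thr for sc in pos) / len(pos)

def pick_thresholds(scores, labels, groups, target_tpr):
    """One threshold per group whose group TPR is closest to target_tpr."""
    out = {}
    for g in set(groups):
        triples = [(sc, y) for sc, y, gg in zip(scores, labels, groups)
                   if gg == g]
        g_scores = [sc for sc, _ in triples]
        g_labels = [y for _, y in triples]
        out[g] = min(
            sorted(set(g_scores)),
            key=lambda t: abs(tpr(g_scores, g_labels, t) - target_tpr),
        )
    return out

# Toy data: one shared threshold would give the two groups very
# different TPRs; per-group thresholds equalize them at 2/3.
scores = [0.9, 0.8, 0.4, 0.3, 0.6, 0.5, 0.2, 0.1]
labels = [1, 1, 1, 0, 1, 1, 1, 0]
groups = ["a"] * 4 + ["b"] * 4
thresholds = pick_thresholds(scores, labels, groups, target_tpr=2 / 3)
```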

### Tradeoff

1. [Fair Classification and Social Welfare](https://dl.acm.org/doi/pdf/10.1145/3351095.3372857), FAccT'20
1. [Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing](http://proceedings.mlr.press/v119/dutta20a/dutta20a.pdf), ICML'20
1. [Inherent Tradeoffs in Learning Fair Representations](https://papers.nips.cc/paper/2019/hash/b4189d9de0fb2b9cce090bd1a15e3420-Abstract.html), NeurIPS'19
1. [The Cost of Fairness in Binary Classification](http://proceedings.mlr.press/v81/menon18a/menon18a.pdf), FAT'18
1. [Inherent Trade-Offs in the Fair Determination of Risk Scores](https://drops.dagstuhl.de/opus/volltexte/2017/8156/pdf/LIPIcs-ITCS-2017-43.pdf), ITCS'17
1. [Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments](https://www.andrew.cmu.edu/user/achoulde/files/disparate_impact.pdf), Big Data'17
1. [On the (im)possibility of fairness](https://arxiv.org/pdf/1609.07236.pdf?ref=https://githubhelp.com), arXiv'16

### Others

1. [Understanding Instance-Level Impact of Fairness Constraints](https://proceedings.mlr.press/v162/wang22ac/wang22ac.pdf), ICML'22
1. [Generalized Demographic Parity for Group Fairness](https://openreview.net/pdf?id=YigKlMJwjye), ICLR'22
1. [Assessing Fairness in the Presence of Missing Data](https://papers.nips.cc/paper/2021/file/85dca1d270f7f9aef00c9d372f114482-Paper.pdf), NeurIPS'21
1. [Characterizing Fairness Over the Set of Good Models Under Selective Labels](http://proceedings.mlr.press/v139/coston21a/coston21a.pdf), ICML'21
1. [Fair Selective Classification via Sufficiency](http://proceedings.mlr.press/v139/lee21b/lee21b.pdf), ICML'21
1. [Testing Group Fairness via Optimal Transport Projections](http://proceedings.mlr.press/v139/si21a/si21a.pdf), ICML'21
1. [Fairness with Overlapping Groups](https://papers.nips.cc/paper/2020/file/29c0605a3bab4229e46723f89cf59d83-Paper.pdf), NeurIPS'20
1. [Feature Noise Induces Loss Discrepancy Across Groups](http://proceedings.mlr.press/v119/khani20a/khani20a.pdf), ICML'20
1. [Why Is My Classifier Discriminatory?](https://papers.nips.cc/paper/2018/file/1f1baa5b8edac74eb4eaa329f14a0361-Paper.pdf), NeurIPS'18
1. [Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification](http://proceedings.mlr.press/v81/buolamwini18a/buolamwini18a.pdf), FAT'18
1. [On Fairness and Calibration](https://papers.nips.cc/paper/2017/file/b8b9c74ac526fffbeb2d39ab038d1cd7-Paper.pdf), NeurIPS'17
1. [Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment](https://dl.acm.org/doi/pdf/10.1145/3038912.3052660), WWW'17
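
Most papers in this section report a handful of gap statistics. Minimal reference implementations of the two most common ones, the demographic parity difference and the equalized-odds gap (toy code that assumes every group contains both labels):

```python
def rate(preds, mask):
    """Positive-prediction rate over the masked subset."""
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

def demographic_parity_diff(preds, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [rate(preds, [g == v for g in groups]) for v in set(groups)]
    return max(rates) - min(rates)

def equalized_odds_gap(preds, labels, groups):
    """Largest per-label (TPR/FPR) gap between any two groups."""
    gaps = []
    for y in (0, 1):
        rates = [
            rate(preds, [g == v and l == y for g, l in zip(groups, labels)])
            for v in set(groups)
        ]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

preds = [1, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
dp = demographic_parity_diff(preds, groups)
eo = equalized_odds_gap(preds, labels, groups)
```

Toolkits such as AI Fairness 360 and Fairlearn (see Library & Toolkit below) ship production versions of these metrics.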

## Individual Fairness

1. [Learning Antidote Data to Individual Unfairness](https://arxiv.org/pdf/2211.15897.pdf), arXiv'22
1. [Metric-Fair Active Learning](https://proceedings.mlr.press/v162/shen22b/shen22b.pdf), ICML'22
1. [Metric-Fair Classifier Derandomization](https://proceedings.mlr.press/v162/wu22a/wu22a.pdf), ICML'22
1. [Post-processing for Individual Fairness](https://papers.nips.cc/paper/2021/file/d9fea4ca7e4a74c318ec27c1deb0796c-Paper.pdf), NeurIPS'21
1. [SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness](https://openreview.net/pdf?id=DktZb97_Fx), ICLR'21
1. [Individually Fair Gradient Boosting](https://openreview.net/forum?id=JBAa9we1AL), ICLR'21
1. [Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint](http://proceedings.mlr.press/v130/chikahara21a/chikahara21a.pdf), AISTATS'21
1. [What’s Fair about Individual Fairness?](https://dl.acm.org/doi/pdf/10.1145/3461702.3462621), AIES'21
1. [Learning Certified Individually Fair Representations](https://papers.nips.cc/paper/2020/hash/55d491cf951b1b920900684d71419282-Abstract.html), NeurIPS'20
1. [Metric-Free Individual Fairness in Online Learning](https://proceedings.neurips.cc//paper/2020/file/80b618ebcac7aa97a6dac2ba65cb7e36-Paper.pdf), NeurIPS'20
1. [Two Simple Ways to Learn Individual Fairness Metrics from Data](http://proceedings.mlr.press/v119/mukherjee20a), ICML'20
1. [Training Individually Fair ML Models with Sensitive Subspace Robustness](https://openreview.net/pdf?id=B1gdkxHFDH), ICLR'20
1. [Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness](https://www.ijcai.org/proceedings/2020/0061.pdf), IJCAI'20
1. [Metric Learning for Individual Fairness](https://drops.dagstuhl.de/opus/volltexte/2020/12018/pdf/LIPIcs-FORC-2020-2.pdf), FORC'20
1. [Individual Fairness in Pipelines](https://par.nsf.gov/servlets/purl/10217368), FORC'20
1. [Average Individual Fairness: Algorithms, Generalization and Experiments](https://proceedings.neurips.cc/paper/2019/hash/0e1feae55e360ff05fef58199b3fa521-Abstract.html), NeurIPS'19
1. [iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making](https://ieeexplore.ieee.org/document/8731591), ICDM'19
1. [Operationalizing Individual Fairness with Pairwise Fair Representations](https://dl.acm.org/doi/pdf/10.14778/3372716.3372723), VLDB'19
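
Individual fairness is usually stated as the Lipschitz condition of Dwork et al.: similar individuals should receive similar predictions, |f(x) − f(x′)| ≤ L·d(x, x′) for a task-specific metric d. A brute-force checker for small datasets (quadratic in n, for illustration only):

```python
def lipschitz_violations(xs, preds, dist, L=1.0, tol=1e-12):
    """Pairs (i, j) violating |f(x_i) - f(x_j)| <= L * d(x_i, x_j)."""
    n = len(xs)
    return [
        (i, j)
        for i in range(n)
        for j in range(i + 1, n)
        if abs(preds[i] - preds[j]) > L * dist(xs[i], xs[j]) + tol
    ]

# Two near-identical individuals (x = 0.0 and x = 0.05) receive very
# different scores and are flagged; the distant pair (0.0 vs 1.0) is fine.
violations = lipschitz_violations(
    [0.0, 0.05, 1.0], [0.0, 0.9, 1.0], dist=lambda a, b: abs(a - b)
)
```

Choosing the metric d is the hard part, and is the subject of the metric-learning papers above.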

## Minimax Fairness

1. [Active Sampling for Min-Max Fairness](https://proceedings.mlr.press/v162/abernethy22a/abernethy22a.pdf), ICML'22
1. [Adaptive Sampling for Minimax Fair Classification](https://papers.nips.cc/paper/2021/file/cd7c230fc5deb01ff5f7b1be1acef9cf-Paper.pdf), NeurIPS'21
1. [Blind Pareto Fairness and Subgroup Robustness](http://proceedings.mlr.press/v139/martinez21a/martinez21a.pdf), ICML'21
1. [Fairness without Demographics through Adversarially Reweighted Learning](https://papers.nips.cc/paper/2020/file/07fc15c9d169ee48573edd749d25945d-Paper.pdf), NeurIPS'20
1. [Minimax Pareto Fairness: A Multi Objective Perspective](http://proceedings.mlr.press/v119/martinez20a/martinez20a.pdf), ICML'20
1. [Fairness Without Demographics in Repeated Loss Minimization](http://proceedings.mlr.press/v80/hashimoto18a.html), ICML'18
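
The common thread of this section is replacing the overall average loss with the worst group's average loss (as in, e.g., "Minimax Pareto Fairness: A Multi Objective Perspective"). As a one-function illustration:

```python
def worst_group_loss(losses, groups):
    """Minimax-fairness objective: the maximum per-group average loss."""
    per_group = {}
    for loss, g in zip(losses, groups):
        per_group.setdefault(g, []).append(loss)
    return max(sum(v) / len(v) for v in per_group.values())

# Overall average loss is 0.25, but group "b" fares much worse (0.4),
# and it is that number a minimax-fair learner tries to drive down.
wgl = worst_group_loss([0.1, 0.1, 0.4, 0.4], ["a", "a", "b", "b"])
```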

## Counterfactual Fairness

1. [Causal Conceptions of Fairness and their Consequences](https://proceedings.mlr.press/v162/nilforoshan22a/nilforoshan22a.pdf), ICML'22
1. [Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness](https://proceedings.mlr.press/v162/foster22a/foster22a.pdf), ICML'22
1. [PC-Fairness: A Unified Framework for Measuring Causality-based Fairness](https://papers.nips.cc/paper/2019/file/44a2e0804995faf8d2e3b084a1e2db1d-Paper.pdf), NeurIPS'19
1. [Fairness through Causal Awareness: Learning Causal Latent-Variable Models for Biased Data](https://dl.acm.org/doi/pdf/10.1145/3287560.3287564), FAccT'19
1. [Counterfactual Fairness](https://papers.nips.cc/paper/2017/file/a486cd07e4ac3d270571622f4f316ec5-Paper.pdf), NeurIPS'17
1. [When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness](https://papers.nips.cc/paper/2017/file/1271a7029c9df08643b631b02cf9e116-Paper.pdf), NeurIPS'17
1. [Avoiding Discrimination through Causal Reasoning](https://papers.nips.cc/paper/2017/file/f5f8590cd58a54e94377e6ae2eded4d9-Paper.pdf), NeurIPS'17
1. [A Causal Framework for Discovering and Removing Direct and Indirect Discrimination](https://www.ijcai.org/proceedings/2017/0549.pdf), IJCAI'17

## Graph Mining

1. [RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network](https://arxiv.org/abs/2202.13547), WWW'22
1. [Correcting Exposure Bias for Link Recommendation](http://proceedings.mlr.press/v139/gupta21c/gupta21c.pdf), ICML'21
1. [On Dyadic Fairness: Exploring and Mitigating Bias in Graph Connections](https://openreview.net/pdf?id=xgGS6PmzNq6), ICLR'21
1. [Individual Fairness for Graph Neural Networks: A Ranking based Approach](https://dl.acm.org/doi/pdf/10.1145/3447548.3467266), KDD'21
1. [Fairness constraints can help exact inference in structured prediction](https://papers.nips.cc/paper/2020/file/8248a99e81e752cb9b41da3fc43fbe7f-Paper.pdf), NeurIPS'20
1. [InFoRM: Individual Fairness on Graph Mining](https://dl.acm.org/doi/pdf/10.1145/3394486.3403080), KDD'20
1. [Fairness-Aware Explainable Recommendation over Knowledge Graphs](https://dl.acm.org/doi/pdf/10.1145/3397271.3401051), SIGIR'20
1. [Compositional Fairness Constraints for Graph Embeddings](http://proceedings.mlr.press/v97/bose19a/bose19a.pdf), ICML'19

## Online Learning & Bandits

1. [The price of unfairness in linear bandits with biased feedback](https://openreview.net/pdf?id=PCZfDUH8fIn), NeurIPS'22
1. [Fair Sequential Selection Using Supervised Learning Models](https://papers.nips.cc/paper/2021/file/ed277964a8959e72a0d987e598dfbe72-Paper.pdf), NeurIPS'21
1. [Online Market Equilibrium with Application to Fair Division](https://papers.nips.cc/paper/2021/file/e562cd9c0768d5464b64cf61da7fc6bb-Paper.pdf), NeurIPS'21
1. [A Unified Approach to Fair Online Learning via Blackwell Approachability](https://papers.nips.cc/paper/2021/file/97ea3cfb64eeaa1edba65501d0bb3c86-Paper.pdf), NeurIPS'21
1. [Fair Algorithms for Multi-Agent Multi-Armed Bandits](https://papers.nips.cc/paper/2021/file/c96ebeee051996333b6d70b2da6191b0-Paper.pdf), NeurIPS'21
1. [Fair Exploration via Axiomatic Bargaining](https://papers.nips.cc/paper/2021/file/b90c46963248e6d7aab1e0f429743ca0-Paper.pdf), NeurIPS'21
1. [Group-Fair Online Allocation in Continuous Time](https://papers.nips.cc/paper/2020/file/9ec0cfdc84044494e10582436e013e64-Paper.pdf), NeurIPS'20

## Clustering

1. [Robust Fair Clustering: A Novel Fairness Attack and Defense Framework](https://arxiv.org/pdf/2210.01953.pdf), arXiv'22
1. [Fair and Fast k-Center Clustering for Data Summarization](https://proceedings.mlr.press/v162/angelidakis22a/angelidakis22a.pdf), ICML'22
1. [Fair Clustering Under a Bounded Cost](https://papers.nips.cc/paper/2021/file/781877bda0783aac5f1cf765c128b437-Paper.pdf), NeurIPS'21
1. [Better Algorithms for Individually Fair k-Clustering](https://papers.nips.cc/paper/2021/file/6f221fcb5c504fe96789df252123770b-Paper.pdf), NeurIPS'21
1. [Approximate Group Fairness for Clustering](http://proceedings.mlr.press/v139/li21j/li21j.pdf), ICML'21
1. [Variational Fair Clustering](https://ojs.aaai.org/index.php/AAAI/article/view/17336/17143), AAAI'21
1. [Socially Fair k-Means Clustering](https://dl.acm.org/doi/pdf/10.1145/3442188.3445906), FAccT'21
1. [Deep Fair Clustering for Visual Learning](https://openaccess.thecvf.com/content_CVPR_2020/html/Li_Deep_Fair_Clustering_for_Visual_Learning_CVPR_2020_paper.html), CVPR'20
1. [Fair Hierarchical Clustering](https://papers.nips.cc/paper/2020/file/f10f2da9a238b746d2bac55759915f0d-Paper.pdf), NeurIPS'20
1. [Fair Algorithms for Clustering](https://papers.nips.cc/paper/2019/file/fc192b0c0d270dbf41870a63a8c76c2f-Paper.pdf), NeurIPS'19
1. [Coresets for Clustering with Fairness Constraints](https://proceedings.neurips.cc/paper/2019/file/810dfbbebb17302018ae903e9cb7a483-Paper.pdf), NeurIPS'19
1. [Scalable Fair Clustering](http://proceedings.mlr.press/v97/backurs19a/backurs19a.pdf), ICML'19
1. [Fair k-Center Clustering for Data Summarization](http://proceedings.mlr.press/v97/kleindessner19a/kleindessner19a.pdf), ICML'19
1. [Guarantees for Spectral Clustering with Fairness Constraints](http://proceedings.mlr.press/v97/kleindessner19b/kleindessner19b.pdf), ICML'19
1. [Fair Clustering Through Fairlets](https://papers.nips.cc/paper/2017/file/978fce5bcc4eccc88ad48ce3914124a2-Paper.pdf), NeurIPS'17
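
A recurring quantity in these papers is the balance of a cluster, introduced in "Fair Clustering Through Fairlets": the minimum ratio between protected-group counts (extended here from two groups to any fixed group set, which is an assumption of this sketch):

```python
from collections import Counter

def balance(members, groups=("red", "blue")):
    """Balance of one cluster: min ratio between group counts.
    1 means perfectly mixed; 0 means some group is absent."""
    counts = Counter(members)
    if any(counts[g] == 0 for g in groups):
        return 0.0
    vals = [counts[g] for g in groups]
    return min(vals) / max(vals)

def clustering_balance(assignments, members, groups=("red", "blue")):
    """Fairness of a whole clustering = its worst cluster's balance."""
    clusters = {}
    for c, m in zip(assignments, members):
        clusters.setdefault(c, []).append(m)
    return min(balance(ms, groups) for ms in clusters.values())

# Two clusters of three points each, both with a 2:1 group split.
b = clustering_balance(
    [0, 0, 0, 1, 1, 1],
    ["red", "blue", "red", "blue", "red", "blue"],
)
```

Fair-clustering algorithms trade off this balance against the usual k-means/k-center objective.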

## Regression

1. [Selective Regression Under Fairness Criteria](https://proceedings.mlr.press/v162/shah22a/shah22a.pdf), ICML'22
1. [Pairwise Fairness for Ordinal Regression](https://proceedings.mlr.press/v151/kleindessner22a/kleindessner22a.pdf), AISTATS'22
1. [Fair Sparse Regression with Clustering: An Invex Relaxation for a Combinatorial Problem](https://papers.nips.cc/paper/2021/file/c39b9a47811f1eaf3244a63ae8c22734-Paper.pdf), NeurIPS'21
1. [Fair Regression with Wasserstein Barycenters](https://papers.nips.cc/paper/2020/file/51cdbd2611e844ece5d80878eb770436-Paper.pdf), NeurIPS'20
1. [Fair Regression via Plug-In Estimator and Recalibration](https://papers.nips.cc/paper/2020/file/ddd808772c035aed516d42ad3559be5f-Paper.pdf), NeurIPS'20

## Outlier Detection

1. [Deep Clustering based Fair Outlier Detection](https://arxiv.org/pdf/2106.05127.pdf), KDD'21
1. [FairOD: Fairness-aware Outlier Detection](https://dl.acm.org/doi/pdf/10.1145/3461702.3462517), AIES'21
1. [FairLOF: Fairness in Outlier Detection](https://www.researchgate.net/publication/354203620_FairLOF_Fairness_in_Outlier_Detection/fulltext/612bc29f0360302a0066ef8f/FairLOF-Fairness-in-Outlier-Detection.pdf), Data Science and Engineering'21

## Ranking

1. [Fair Rank Aggregation](https://openreview.net/pdf?id=xbgtFOO9J5D), NeurIPS'22
1. [Fair Ranking with Noisy Protected Attributes](https://openreview.net/pdf?id=mTra5BIUyRV), NeurIPS'22
1. [Individually Fair Rankings](https://openreview.net/pdf?id=71zCSP_HuBN), ICLR'21
1. [Two-sided fairness in rankings via Lorenz dominance](https://papers.nips.cc/paper/2021/file/48259990138bc03361556fb3f94c5d45-Paper.pdf), NeurIPS'21
1. [Fairness in Ranking under Uncertainty](https://papers.nips.cc/paper/2021/file/63c3ddcc7b23daa1e42dc41f9a44a873-Paper.pdf), NeurIPS'21
1. [Fair algorithms for selecting citizens’ assemblies](https://www.nature.com/articles/s41586-021-03788-6.pdf), Nature'21
1. [On the Problem of Underranking in Group-Fair Ranking](http://proceedings.mlr.press/v139/gorantla21a/gorantla21a.pdf), ICML'21
1. [Fairness and Bias in Online Selection](http://proceedings.mlr.press/v139/correa21a/correa21a.pdf), ICML'21
1. [Policy Learning for Fairness in Ranking](https://papers.nips.cc/paper/2019/file/9e82757e9a1c12cb710ad680db11f6f1-Paper.pdf), NeurIPS'19
1. [The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the XAUC Metric](https://papers.nips.cc/paper/2019/file/73e0f7487b8e5297182c5a711d20bf26-Paper.pdf), NeurIPS'19
1. [Balanced Ranking with Diversity Constraints](https://www.ijcai.org/proceedings/2019/0836.pdf), IJCAI'19

## Generation

1. [DECAF: Generating Fair Synthetic Data Using Causally-Aware Generative Networks](https://papers.nips.cc/paper/2021/file/ba9fab001f67381e56e410575874d967-Paper.pdf), NeurIPS'21
1. [Fairness for Image Generation with Uncertain Sensitive Attributes](http://proceedings.mlr.press/v139/jalal21b/jalal21b.pdf), ICML'21
1. [FairGAN: Fairness-aware Generative Adversarial Networks](http://www.csce.uark.edu/~xintaowu/publ/bigdata18.pdf), BigData'18

## Fairness and Robustness

1. [Robust Fair Clustering: A Novel Fairness Attack and Defense Framework](https://openreview.net/forum?id=4LMIZY7gt7h), ICLR'23
1. [Fair Classification with Adversarial Perturbations](https://papers.nips.cc/paper/2021/file/44e207aecc63505eb828d442de03f2e9-Paper.pdf), NeurIPS'21
1. [Sample Selection for Fair and Robust Training](https://papers.nips.cc/paper/2021/file/07563a3fe3bbe7e3ba84431ad9d055af-Paper.pdf), NeurIPS'21
1. [Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees](http://proceedings.mlr.press/v139/celis21a/celis21a.pdf), ICML'21
1. [To be Robust or to be Fair: Towards Fairness in Adversarial Training](http://proceedings.mlr.press/v139/xu21b/xu21b.pdf), ICML'21
1. [Exacerbating Algorithmic Bias through Fairness Attacks](https://ojs.aaai.org/index.php/AAAI/article/view/17080/16887), AAAI'21
1. [Fair Classification with Group-Dependent Label Noise](https://dl.acm.org/doi/pdf/10.1145/3442188.3445915), FAccT'21
1. [Robust Optimization for Fairness with Noisy Protected Groups](https://papers.nips.cc/paper/2020/file/37d097caf1299d9aa79c2c2b843d2d78-Paper.pdf), NeurIPS'20
1. [FR-Train: A Mutual Information-Based Approach to Fair and Robust Training](http://proceedings.mlr.press/v119/roh20a/roh20a.pdf), ICML'20
1. [Poisoning Attacks on Algorithmic Fairness](https://repositori.upf.edu/bitstream/handle/10230/47626/solans_ecmlpkdd_poiso.pdf?sequence=1&isAllowed=y), ECML'20
1. [Noise-tolerant Fair Classification](https://proceedings.neurips.cc/paper/2019/file/8d5e957f297893487bd98fa830fa6413-Paper.pdf), NeurIPS'19
1. [Stable and Fair Classification](http://proceedings.mlr.press/v97/huang19e/huang19e.pdf), ICML'19

## Transfer & Federated Learning

1. [Fairness Guarantees under Demographic Shift](https://openreview.net/pdf?id=wbPObLm6ueA), ICLR'22
1. [Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning](https://proceedings.neurips.cc/paper/2021/file/db8e1af0cb3aca1ae2d0018624204529-Paper.pdf), NeurIPS'21
1. [Gradient-Driven Rewards to Guarantee Fairness in Collaborative Machine Learning](https://papers.nips.cc/paper/2021/file/8682cc30db9c025ecd3fee433f8ab54c-Paper.pdf), NeurIPS'21
1. [FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout](https://papers.nips.cc/paper/2021/file/6aed000af86a084f9cb0264161e29dd3-Paper.pdf), NeurIPS'21
1. [Does enforcing fairness mitigate biases caused by subpopulation shift?](https://papers.nips.cc/paper/2021/file/d800149d2f947ad4d64f34668f8b20f6-Paper.pdf), NeurIPS'21
1. [Ditto: Fair and Robust Federated Learning Through Personalization](http://proceedings.mlr.press/v139/li21h/li21h.pdf), ICML'21
1. [Fair Transfer Learning with Missing Protected Attributes](https://dl.acm.org/doi/pdf/10.1145/3306618.3314236), AIES'19

## Long-term Impact

1. [Achieving Long-Term Fairness in Sequential Decision Making](https://ojs.aaai.org/index.php/AAAI/article/view/21188/20937), AAAI'22
1. [Unintended Selection: Persistent Qualification Rate Disparities and Interventions](https://proceedings.neurips.cc/paper/2021/file/db00f1b7fdf48fd26b5fb5f309e9afaf-Paper.pdf), NeurIPS'21
1. [How Do Fair Decisions Fare in Long-term Qualification?](https://papers.nips.cc/paper/2020/file/d6d231705f96d5a35aeb3a76402e49a3-Paper.pdf), NeurIPS'20
1. [The Disparate Equilibria of Algorithmic Decision Making when Individuals Invest Rationally](https://dl.acm.org/doi/pdf/10.1145/3351095.3372861), FAccT'20
1. [Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness](https://proceedings.neurips.cc/paper/2019/file/7690dd4db7a92524c684e3191919eb6b-Paper.pdf), NeurIPS'19
1. [Delayed Impact of Fair Machine Learning](https://proceedings.mlr.press/v80/liu18c/liu18c.pdf), ICML'18
1. [A Short-term Intervention for Long-term Fairness in the Labor Market](https://dl.acm.org/doi/pdf/10.1145/3178876.3186044), WWW'18

## Trustworthiness

1. [Washing The Unwashable: On The (Im)possibility of Fairwashing Detection](https://openreview.net/pdf?id=3vmKQUctNy), NeurIPS'22
1. [The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations](https://dl.acm.org/doi/pdf/10.1145/3531146.3533179), FAccT'22
1. [Differentially Private Empirical Risk Minimization under the Fairness Lens](https://papers.nips.cc/paper/2021/file/e7e8f8e5982b3298c8addedf6811d500-Paper.pdf), NeurIPS'21
1. [Characterizing the risk of fairwashing](https://papers.nips.cc/paper/2021/file/7caf5e22ea3eb8175ab518429c8589a4-Paper.pdf), NeurIPS'21
1. [Fair Performance Metric Elicitation](https://papers.nips.cc/paper/2020/file/7ec2442aa04c157590b2fa1a7d093a33-Paper.pdf), NeurIPS'20
1. [Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference](https://papers.nips.cc/paper/2020/file/d83de59e10227072a9c034ce10029c39-Paper.pdf), NeurIPS'20
1. [You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods](http://ceur-ws.org/Vol-2560/paper8.pdf), AAAI'20
1. [Fairwashing: the risk of rationalization](http://proceedings.mlr.press/v97/aivodji19a/aivodji19a.pdf), ICML'19

## Auditing

1. [Active Fairness Auditing](https://proceedings.mlr.press/v162/yan22c/yan22c.pdf), ICML'22
1. [Statistical inference for individual fairness](https://openreview.net/forum?id=z9k8BWL-_2u), ICLR'21
1. [Verifying Individual Fairness in Machine Learning Models](https://auai.org/uai2020/proceedings/327_main_paper.pdf), UAI'20

## Empirical Study

1. [Are My Deep Learning Systems Fair? An Empirical Study of Fixed-Seed Training](https://papers.nips.cc/paper/2021/file/fdda6e957f1e5ee2f3b311fe4f145ae1-Paper.pdf), NeurIPS'21
1. [An empirical characterization of fair machine learning for clinical risk prediction](https://www.sciencedirect.com/science/article/pii/S1532046420302495), Journal of Biomedical Informatics'20

## Software Engineering

1. [Fairway: A Way to Build Fair ML Software](https://dl.acm.org/doi/pdf/10.1145/3368089.3409697), FSE'20

## Library & Toolkit

1. [FairPy: A Python Library for Machine Learning Fairness](https://github.com/brandeis-machine-learning/FairPy), Brandeis University
1. [AI Fairness 360](https://aif360.mybluemix.net/), IBM Research
1. [Fairlearn: A toolkit for assessing and improving fairness in AI](https://www.microsoft.com/en-us/research/publication/fairlearn-a-toolkit-for-assessing-and-improving-fairness-in-ai/), Microsoft Research
1. [fairpy: An open-source library of fair division algorithms in Python](https://github.com/erelsgl/fairpy), Ariel University
1. [FairML: Auditing Black-Box Predictive Models](https://github.com/adebayoj/fairml), MIT
1. [Folktables](https://github.com/zykls/folktables#3), UC Berkeley

## Dataset

1. [fairness_dataset](https://github.com/tailequy/fairness_dataset), Leibniz University

### Tabular Data

1. [Communities and Crime Data Set](https://archive.ics.uci.edu/ml/datasets/communities+and+crime)
1. [Statlog (German Credit Data) Data Set](https://archive.ics.uci.edu/ml/datasets/statlog+(german+credit+data))
1. [Bank Marketing Data Set](https://archive.ics.uci.edu/ml/datasets/bank+marketing)
1. [Adult Data Set](https://archive.ics.uci.edu/ml/datasets/adult)
1. [COMPAS Recidivism Risk Score Data and Analysis](https://www.propublica.org/datastore/dataset/compas-recidivism-risk-score-data-and-analysis)
1. [Arrhythmia Data Set](https://archive.ics.uci.edu/ml/datasets/arrhythmia)
1. [LSAC National Longitudinal Bar Passage Study](https://eric.ed.gov/?id=ED469370)
1. [Medical Expenditure Panel Survey Data](https://meps.ahrq.gov/mepsweb/)
1. [Drug consumption Data Set](https://archive.ics.uci.edu/ml/datasets/Drug+consumption+%28quantified%29)
1. [Student Performance Data Set](https://archive.ics.uci.edu/ml/datasets/Student+Performance)
1. [default of credit card clients Data Set](https://archive.ics.uci.edu/ml/datasets/default+of+credit+card+clients)
1. [Adult Reconstruction dataset](https://github.com/zykls/folktables#3)
1. [American Community Survey Public Use Microdata Sample](https://www2.census.gov/programs-surveys/acs/data/pums/)
1. [Census-Income (KDD) Data Set](https://archive.ics.uci.edu/ml/datasets/Census-Income+(KDD))
1. [The Dutch Virtual Census of 2001 - IPUMS Subset](https://international.ipums.org/international/)
1. [Diabetes 130-US hospitals for years 1999-2008 Data Set](https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008)
1. [Parkinsons Telemonitoring Data Set](https://archive.ics.uci.edu/ml/datasets/Parkinsons+Telemonitoring)

### Graph Data

1. [MovieLens 100K Dataset](https://grouplens.org/datasets/movielens/100k/)

### Text Data

1. [Jigsaw Unintended Bias in Toxicity Classification](https://www.kaggle.com/c/jigsaw-unintended-bias-in-toxicity-classification/data)

### Image Data

1. [CelebFaces Attributes Dataset (CelebA)](https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html)
1. [FairFace: Face Attribute Dataset for Balanced Race, Gender, and Age](https://github.com/joojs/fairface)