Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
awesome-ml-fairness
Papers and online resources related to machine learning fairness
https://github.com/brandeis-machine-learning/awesome-ml-fairness
Last synced: 4 days ago
Ranking
-
Others
- Fair algorithms for selecting citizens’ assemblies
- Fair Rank Aggregation
- Fair Ranking with Noisy Protected Attributes
- Individually Fair Rankings
- Two-sided fairness in rankings via Lorenz dominance
- Fairness in Ranking under Uncertainty
- On the Problem of Underranking in Group-Fair Ranking
- Fairness and Bias in Online Selection
- Policy Learning for Fairness in Ranking
- The Fairness of Risk Scores Beyond Classification: Bipartite Ranking and the XAUC Metric
- Balanced Ranking with Diversity Constraints
-
-
Survey
- Fairness in rankings and recommendations: an overview
- A survey on datasets for fairness-aware machine learning
- An Overview of Fairness in Clustering
- Trustworthy AI: A Computational Perspective
- Algorithm Fairness in AI for Medicine and Healthcare
- Socially Responsible AI Algorithms: Issues, Purposes, and Challenges
- Fairness in learning-based sequential decision algorithms: A survey
- Fairness in Machine Learning: A Survey
- The Frontiers of Fairness in Machine Learning
- The Measure and Mismeasure of Fairness: A Critical Review of Fair Machine Learning
- Fairness in Ranking, Part I: Score-based Ranking
- Language (Technology) is Power: A Critical Survey of “Bias” in NLP
-
Book, Blog, Case Study, and Introduction
- Identifying, measuring, and mitigating individual unfairness for supervised learning models and application to credit risk models
- Assessing and mitigating unfairness in credit models with the Fairlearn toolkit
- To regulate AI, try playing in a sandbox
- NSF grant decisions reflect systemic racism, study argues
- Fairness and Machine Learning: Limitations and Opportunities
- Apple Card algorithm sparks gender bias allegations against Goldman Sachs
- Big Data’s Disparate Impact
- An Analysis of the New York City Police Department’s “Stop-and-Frisk” Policy in the Context of Claims of Racial Bias
- What a machine learning tool that turns Obama white can (and can’t) tell us about AI bias
- Amazon scraps secret AI recruiting tool that showed bias against women
- Consumer-Lending Discrimination in the FinTech Era
- Apple Card Investigated After Gender Discrimination Complaints
- When a Computer Program Keeps You in Jail
- European Union regulations on algorithmic decision-making and a “right to explanation”
- An Algorithm That Grants Freedom, or Takes It Away
- Unequal Representation and Gender Stereotypes in Image Search Results for Occupations
-
Group Fairness in Classification
-
Pre-processing
- Achieving Fairness at No Utility Cost via Data Reweighing
- Fairness with Adaptive Weights
- Bias in Machine Learning Software: Why? How? What to Do?
- Identifying and Correcting Label Bias in Machine Learning
- Repairing without Retraining: Avoiding Disparate Impact with Counterfactual Distributions
- Optimized Pre-Processing for Discrimination Prevention
- Learning Fair Representations
- Data preprocessing techniques for classification without discrimination
- Certifying and Removing Disparate Impact
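A recurring pre-processing idea above, from Kamiran and Calders' "Data preprocessing techniques for classification without discrimination", is reweighing: give each (group, label) cell the weight P(group)·P(label) / P(group, label), so the protected attribute and the label become independent in the weighted training set. A minimal sketch, assuming a pandas DataFrame with generic `group` and `label` columns (hypothetical names, not tied to any particular dataset or toolkit):

```python
import pandas as pd

def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Kamiran & Calders-style reweighing: weight each (group, label) cell by
    P(group) * P(label) / P(group, label), so that group and label are
    statistically independent in the weighted sample."""
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        return (p_group[g] * p_label[y]) / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# Toy usage with a made-up frame.
df = pd.DataFrame({
    "group": [0, 0, 0, 1, 1, 1, 1, 1],
    "label": [1, 0, 0, 1, 1, 1, 0, 1],
})
df["w"] = reweighing_weights(df, "group", "label")
# Weighted positive rate per group; should be equal after reweighing.
print(df.groupby("group").apply(lambda d: (d["w"] * d["label"]).sum() / d["w"].sum()))
```

On this toy frame the weighted positive rate comes out identical for both groups, which is the demographic-parity repair the paper targets.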
-
In-processing
- On Learning Fairness and Accuracy on Multiple Subgroups
- Fair Representation Learning through Implicit Path Alignment
- Fair Generalized Linear Models with a Convex Penalty
- Fair Normalizing Flows
- A Stochastic Optimization Framework for Fair Risk Minimization
- Fairness via Representation Neutralization
- Scalable and Stable Surrogates for Flexible Classifiers with Fairness Constraints
- A Fair Classifier Using Kernel Density Estimation
- Exploiting MMD and Sinkhorn Divergences for Fair and Transferable Representation Learning
- Rényi Fair Inference
- Conditional Learning of Fair Representations
- A General Approach to Fairness with Optimal Transport
- Fairness Constraints: A Flexible Approach for Fair Classification
- Wasserstein Fair Classification
- Empirical Risk Minimization Under Fairness Constraints
- A Reductions Approach to Fair Classification
- Classification with Fairness Constraints: A Meta-Algorithm with Provable Guarantees
- Fairness Constraints: Mechanisms for Fair Classification
- Mitigating Unwanted Biases with Adversarial Learning
-
Post-processing
-
Tradeoff
- Is There a Trade-Off Between Fairness and Accuracy? A Perspective Using Mismatched Hypothesis Testing
- Inherent Tradeoffs in Learning Fair Representations
- The Cost of Fairness in Binary Classification
- Inherent Trade-Offs in the Fair Determination of Risk Scores
- Fair Prediction with Disparate Impact: A Study of Bias in Recidivism Prediction Instruments
- On the (im)possibility of fairness
-
Others
- Understanding Instance-Level Impact of Fairness Constraints
- Generalized Demographic Parity for Group Fairness
- Assessing Fairness in the Presence of Missing Data
- Characterizing Fairness Over the Set of Good Models Under Selective Labels
- Fair Selective Classification via Sufficiency
- Testing Group Fairness via Optimal Transport Projections
- Fairness with Overlapping Groups
- Feature Noise Induces Loss Discrepancy Across Groups
- Why Is My Classifier Discriminatory?
- Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification
- On Fairness and Calibration
- Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
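The classification papers above are typically evaluated with a small set of gap metrics, most often the demographic parity difference (gap in positive prediction rates across groups) and the equalized odds difference (worst gap in true or false positive rates). A minimal NumPy sketch of both, with all array names chosen purely for illustration:

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Largest gap in positive prediction rate between groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap across groups in either the true-positive or false-positive rate."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr, fpr = [], []
    for g in np.unique(group):
        mask = group == g
        tpr.append(y_pred[mask & (y_true == 1)].mean())
        fpr.append(y_pred[mask & (y_true == 0)].mean())
    return max(max(tpr) - min(tpr), max(fpr) - min(fpr))

# Toy usage with hypothetical labels and predictions.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_difference(y_pred, group))
print(equalized_odds_difference(y_true, y_pred, group))
```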
-
-
Individual Fairness
-
Others
- Learning Antidote Data to Individual Unfairness
- Metric-Fair Active Learning
- Metric-Fair Classifier Derandomization
- Post-processing for Individual Fairness
- SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness
- Individually Fair Gradient Boosting
- Learning Individually Fair Classifier with Path-Specific Causal-Effect Constraint
- Learning Certified Individually Fair Representations
- Metric-Free Individual Fairness in Online Learning
- Two Simple Ways to Learn Individual Fairness Metrics from Data
- Training Individually Fair ML Models with Sensitive Subspace Robustness
- Individual Fairness Revisited: Transferring Techniques from Adversarial Robustness
- Metric Learning for Individual Fairness
- Individual Fairness in Pipelines
- Average Individual Fairness: Algorithms, Generalization and Experiments
- iFair: Learning Individually Fair Data Representations for Algorithmic Decision Making
- What’s Fair about Individual Fairness?
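The common thread in this section is Dwork et al.'s criterion that similar individuals should be treated similarly, usually written as a Lipschitz condition |f(x) - f(x')| <= L * d(x, x') for a task-specific similarity metric d. A rough empirical audit of that condition on random pairs is sketched below; the Euclidean distance and the constant L are placeholder assumptions, since choosing the metric is the hard part that papers such as "Two Simple Ways to Learn Individual Fairness Metrics from Data" address:

```python
import numpy as np

def lipschitz_violation_rate(predict_proba, X, L=1.0, n_pairs=1000, seed=0):
    """Fraction of random pairs (x, x') with |f(x) - f(x')| > L * ||x - x'||.
    Euclidean distance and L are placeholder choices; individual fairness
    proper requires a task-specific similarity metric."""
    rng = np.random.default_rng(seed)
    i = rng.integers(0, len(X), size=n_pairs)
    j = rng.integers(0, len(X), size=n_pairs)
    scores = predict_proba(X)                      # positive-class scores in [0, 1]
    out_gap = np.abs(scores[i] - scores[j])
    in_dist = np.linalg.norm(X[i] - X[j], axis=1)
    return float(np.mean(out_gap > L * in_dist + 1e-12))

# Toy usage with a hypothetical linear scorer standing in for a trained model.
def predict_proba(X):
    w = np.array([0.5, -0.2, 0.1, 0.0, 0.3])
    return 1 / (1 + np.exp(-X @ w))

X = np.random.default_rng(1).normal(size=(200, 5))
print(lipschitz_violation_rate(predict_proba, X, L=0.2))
```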
-
-
Minimax Fairness
-
Others
- Active Sampling for Min-Max Fairness
- Adaptive Sampling for Minimax Fair Classification
- Blind Pareto Fairness and Subgroup Robustness
- Fairness without Demographics through Adversarially Reweighted Learning
- Minimax Pareto Fairness: A Multi Objective Perspective
- Fairness Without Demographics in Repeated Loss Minimization
-
-
Counterfactual Fairness
-
Others
- Causal Conceptions of Fairness and their Consequences
- Contrastive Mixture of Posteriors for Counterfactual Inference, Data Integration and Fairness
- PC-Fairness: A Unified Framework for Measuring Causality-based Fairness
- Counterfactual Fairness
- When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
- Avoiding Discrimination through Causal Reasoning
- A Causal Framework for Discovering and Removing Direct and Indirect Discrimination
-
-
Graph Mining
-
Others
- RawlsGCN: Towards Rawlsian Difference Principle on Graph Convolutional Network
- Correcting Exposure Bias for Link Recommendation
- On Dyadic Fairness: Exploring and Mitigating Bias in Graph Connections
- Fairness constraints can help exact inference in structured prediction
- Compositional Fairness Constraints for Graph Embeddings
-
-
Online Learning & Bandits
-
Others
- The price of unfairness in linear bandits with biased feedback
- Fair Sequential Selection Using Supervised Learning Models
- Online Market Equilibrium with Application to Fair Division
- A Unified Approach to Fair Online Learning via Blackwell Approachability
- Fair Algorithms for Multi-Agent Multi-Armed Bandits
- Fair Exploration via Axiomatic Bargaining
- Group-Fair Online Allocation in Continuous Time
-
-
Clustering
-
Others
- Robust Fair Clustering: A Novel Fairness Attack and Defense Framework
- Fair and Fast k-Center Clustering for Data Summarization
- Fair Clustering Under a Bounded Cost
- Better Algorithms for Individually Fair k-Clustering
- Approximate Group Fairness for Clustering
- Variational Fair Clustering
- Deep Fair Clustering for Visual Learning
- Fair Hierarchical Clustering
- Fair Algorithms for Clustering
- Coresets for Clustering with Fairness Constraints
- Scalable Fair Clustering
- Fair k-Center Clustering for Data Summarization
- Guarantees for Spectral Clustering with Fairness Constraints
- Fair Clustering Through Fairlets
-
-
Regression
-
Outlier Detection
-
Generation
-
Fairness and Robustness
-
Others
- Robust Fair Clustering: A Novel Fairness Attack and Defense Framework
- Fair Classification with Adversarial Perturbations
- Sample Selection for Fair and Robust Training
- Fair Classification with Noisy Protected Attributes: A Framework with Provable Guarantees
- To be Robust or to be Fair: Towards Fairness in Adversarial Training
- Exacerbating Algorithmic Bias through Fairness Attacks
- Robust Optimization for Fairness with Noisy Protected Groups
- FR-Train: A Mutual Information-Based Approach to Fair and Robust Training
- Poisoning Attacks on Algorithmic Fairness
- Noise-tolerant Fair Classification
- Stable and Fair Classification
- Fair Classification with Group-Dependent Label Noise
-
-
Transfer & Federated Learning
-
Others
- Fairness Guarantees under Demographic Shift
- Addressing Algorithmic Disparity and Performance Inconsistency in Federated Learning
- Gradient-Driven Rewards to Guarantee Fairness in Collaborative Machine Learning
- FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
- Does enforcing fairness mitigate biases caused by subpopulation shift?
- Ditto: Fair and Robust Federated Learning Through Personalization
-
-
Long-term Impact
-
Others
- Achieving Long-Term Fairness in Sequential Decision Making
- Unintended Selection: Persistent Qualification Rate Disparities and Interventions
- How Do Fair Decisions Fare in Long-term Qualification?
- Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness
- Delayed Impact of Fair Machine Learning
- The Disparate Equilibria of Algorithmic Decision Making when Individuals Invest Rationally
- A Short-term Intervention for Long-term Fairness in the Labor Market
-
-
Trustworthiness
-
Others
- Washing The Unwashable : On The (Im)possibility of Fairwashing Detection
- Differentially Private Empirical Risk Minimization under the Fairness Lens
- Characterizing the risk of fairwashing
- Fair Performance Metric Elicitation
- Can I Trust My Fairness Metric? Assessing Fairness with Unlabeled Data and Bayesian Inference
- You Shouldn’t Trust Me: Learning Models Which Conceal Unfairness From Multiple Explanation Methods
- Fairwashing: the risk of rationalization
-
-
Auditing
-
Empirical Study
-
Library & Toolkit
-
Dataset
-
Tabular Data
- Adult Reconstruction dataset
- Communities and Crime Data Set
- Statlog (German Credit Data) Data Set
- Bank Marketing Data Set
- Adult Data Set
- COMPAS Recidivism Risk Score Data and Analysis
- Arrhythmia Data Set
- LSAC National Longitudinal Bar Passage Study
- Medical Expenditure Panel Survey Data
- Drug consumption Data Set
- Student Performance Data Set
- default of credit card clients Data Set
- American Community Survey Public Use Microdata Sample
- Census-Income (KDD) Data Set
- The Dutch Virtual Census of 2001 - IPUMS Subset
- Diabetes 130-US hospitals for years 1999-2008 Data Set
- Parkinsons Telemonitoring Data Set
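Most of these tabular benchmarks pair a protected attribute with a binary outcome, so a useful first step is checking per-group base rates. A minimal sketch for the Adult Data Set, assuming the classic UCI CSV layout and hosting path (worth verifying before relying on it); the same pattern applies to the other datasets with their own columns:

```python
import pandas as pd

# Assumed location and column layout of the UCI Adult training split.
URL = "https://archive.ics.uci.edu/ml/machine-learning-databases/adult/adult.data"
cols = ["age", "workclass", "fnlwgt", "education", "education-num", "marital-status",
        "occupation", "relationship", "race", "sex", "capital-gain", "capital-loss",
        "hours-per-week", "native-country", "income"]

df = pd.read_csv(URL, names=cols, skipinitialspace=True, na_values="?")
df["income_binary"] = (df["income"] == ">50K").astype(int)

# Positive-outcome (>50K) rate per protected group; the gap between these
# base rates is what demographic-parity-style metrics measure.
print(df.groupby("sex")["income_binary"].mean())
```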
-
Graph Data
-
Text Data
-
Image Data
-
Categories
- Group Fairness in Classification (76)
- Survey (69)
- Ranking (57)
- Dataset (20)
- Individual Fairness (17)
- Book, Blog, Case Study, and Introduction (17)
- Clustering (14)
- Fairness and Robustness (12)
- Online Learning & Bandits (7)
- Trustworthiness (7)
- Long-term Impact (7)
- Counterfactual Fairness (7)
- Transfer & Federated Learning (6)
- Minimax Fairness (6)
- Graph Mining (5)
- Regression (5)
- Auditing (3)
- Generation (3)
- Library & Toolkit (2)
- Outlier Detection (2)
- Empirical Study (2)