Fairness in Algorithmic Decision Making: An Excursion Through the Lens of Causality


We tackled the problem of assessing whether a decision-making system discriminates against a group of people with certain attributes. An example would be whether a machine learning algorithm discriminates against women. We developed two measures for detecting such discrimination.

Project status: Published/In Market

Artificial Intelligence

Code Samples [1]

Overview / Usage

As virtually all aspects of our lives are increasingly impacted by algorithmic decision-making systems, it is incumbent upon us as a society to ensure such systems do not become instruments of unfair discrimination on the basis of gender, race, ethnicity, religion, etc. We consider the problem of determining whether the decisions made by such systems are discriminatory, through the lens of causal models. We introduce two definitions of group fairness grounded in causality: fair on average causal effect (FACE), and fair on average causal effect on the treated (FACT). We use the Rubin-Neyman potential outcomes framework for the analysis of cause-effect relationships to robustly estimate FACE and FACT.

We demonstrate the effectiveness of our proposed approach on synthetic data. Our analyses of two real-world data sets, the Adult income data set from the UCI repository (with gender as the protected attribute), and the NYC Stop and Frisk data set (with race as the protected attribute), show that the evidence of discrimination obtained by FACE and FACT, or lack thereof, is often in agreement with the findings from other studies. We further show that FACT, being somewhat more nuanced compared to FACE, can yield findings of discrimination that differ from those obtained using FACE.

Methodology / Approach

We introduce two explicitly causal definitions for fairness “on average” in a population or a protected group (as opposed to causal definitions of individual fairness, e.g., counterfactual fairness) with respect to a protected attribute (e.g., gender, race).
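The paper's own estimators were implemented in R and are more robust than what follows; purely as an illustration of the underlying idea, here is a minimal Python sketch (the synthetic data, the single binary confounder, and the effect size of 2.0 are assumptions of this example, not the paper's setup). It estimates an ATE-style quantity in the spirit of FACE and an ATT-style quantity in the spirit of FACT by stratifying on the confounder, and contrasts both with the naive group difference:

```python
import random

random.seed(0)

# Synthetic data (illustrative only): binary confounder X, protected
# attribute A whose distribution depends on X, and outcome
# Y = 2*A + 1*X + noise, so the true causal effect of A on Y is 2.0.
N = 20000
data = []
for _ in range(N):
    x = 1 if random.random() < 0.5 else 0
    p_a = 0.7 if x == 1 else 0.3          # A is confounded with X
    a = 1 if random.random() < p_a else 0
    y = 2.0 * a + 1.0 * x + random.gauss(0, 1)
    data.append((x, a, y))

def mean(vals):
    return sum(vals) / len(vals)

# Naive group difference ignores the confounder X and is biased upward here.
naive = (mean([y for (xx, a, y) in data if a == 1])
         - mean([y for (xx, a, y) in data if a == 0]))

# Within each X stratum, compare treated vs. control outcomes.
# Stratification is exact adjustment here, since X is the only confounder.
def stratum_effect(x):
    treated = [y for (xx, a, y) in data if xx == x and a == 1]
    control = [y for (xx, a, y) in data if xx == x and a == 0]
    return mean(treated) - mean(control)

# FACE-style estimate: average causal effect over the whole population (ATE),
# weighting strata by their frequency in the full sample.
p_x1 = mean([xx for (xx, a, y) in data])
face = (1 - p_x1) * stratum_effect(0) + p_x1 * stratum_effect(1)

# FACT-style estimate: average causal effect on the treated group (ATT),
# weighting strata by their frequency among the A = 1 units only.
n_treated = sum(a for (xx, a, y) in data)
p_x1_treated = sum(xx for (xx, a, y) in data if a == 1) / n_treated
fact = (1 - p_x1_treated) * stratum_effect(0) + p_x1_treated * stratum_effect(1)

print(round(naive, 2), round(face, 2), round(fact, 2))
```

In this toy setup FACE and FACT coincide (the effect is homogeneous across strata), and both should land near the true effect of 2.0, while the naive difference is inflated because the protected attribute is confounded with X. The nuance the paper highlights is exactly the case where these weightings disagree: when effects vary across strata, FACT can indicate discrimination that FACE does not, or vice versa.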

Technologies Used

The project was implemented entirely in R.

Repository

https://dl.acm.org/citation.cfm?id=3313559
