Decisions based on machine learning (ML) are potentially advantageous over human decisions: they do not suffer from the same subjectivity, and they can be more accurate and easier to analyze. At the same time, the data used to train ML systems often contain human and societal biases that can lead to harmful decisions: extensive evidence in areas such as hiring, criminal justice, surveillance, and healthcare suggests that ML decision-making systems may (wrongly) treat individuals unfavorably on the basis of sensitive attributes such as race, gender, disability, and sexual orientation.

Currently, most of the fairness criteria used for evaluating and designing ML decision-making systems focus on the relationship between the sensitive attribute and the system's output. However, the training data can display different patterns of unfairness depending on how and why the sensitive attribute influences other variables. Using such criteria without fully accounting for this can be problematic: it can, for example, lead to the erroneous conclusion that a model exhibiting harmful biases is fair and, conversely, that a model exhibiting harmless biases is unfair. Developing technical solutions to fairness also requires considering the different, potentially complex, ways in which unfairness can appear in the data.

Understanding how and why a sensitive attribute influences other variables in a dataset can be a challenging task, requiring both technical and sociological analysis. Causal Bayesian networks (CBNs), a visual yet mathematically precise framework, are a flexible and useful tool in this regard, as they can be used to formalize, measure, and address different underlying unfairness scenarios. A CBN (Figure 1) is a graph formed by nodes representing random variables, connected by links denoting causal influence. By defining unfairness as the presence of a harmful influence from the sensitive attribute in the graph, CBNs give us a simple and intuitive visual tool for describing the different possible unfairness scenarios underlying a dataset. In addition, CBNs give us a powerful quantitative tool for measuring unfairness in a dataset and for helping researchers develop techniques to address it.
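As a rough sketch (not part of the original article), a CBN over a few binary variables can be encoded directly in Python as a set of nodes, each with its parents and a conditional probability table (CPT); the variable names and probabilities below are illustrative assumptions only.

```python
import random

# Each node maps to (parents, CPT), where the CPT gives P(node = 1 | parent values).
cbn = {
    "X": ([], {(): 0.5}),
    "Y": (["X"], {(0,): 0.2, (1,): 0.8}),
    "Z": (["X", "Y"], {(0, 0): 0.1, (0, 1): 0.5, (1, 0): 0.4, (1, 1): 0.9}),
}

def sample(network):
    """Forward-sample one joint assignment, visiting nodes parents-first."""
    values = {}
    for node, (parents, cpt) in network.items():  # dict insertion order is topological here
        p_one = cpt[tuple(values[p] for p in parents)]
        values[node] = int(random.random() < p_one)
    return values

print(sample(cbn))
```

The links of the graph are implicit in the parent lists: X influences Y, and both X and Y influence Z.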

Causal Bayesian networks as a visual tool

Modeling the patterns of unfairness underlying a dataset

Consider a hypothetical college admission example (inspired by the Berkeley case) in which applicants are admitted based on qualifications Q, choice of department D, and gender G, and in which female applicants are more likely to apply to certain departments (for simplicity, we consider gender to be binary, but this is not a necessary restriction imposed by the framework).
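To make this concrete, here is a hedged sketch of the admission scenario as a CBN, reusing the forward-sampling helper from the earlier example. All probabilities are made-up values chosen only to show the mechanism: gender G influences department choice D, admission A depends on qualifications Q and department D, and the admission rates still differ by gender through the path G → D → A.

```python
import random

# G = gender (1 = female), Q = qualifications, D = department (1 = the more
# competitive one), A = admission. Probabilities are illustrative assumptions.
admission_cbn = {
    "G": ([], {(): 0.5}),
    "Q": ([], {(): 0.5}),                   # qualifications independent of gender here
    "D": (["G"], {(0,): 0.2, (1,): 0.7}),   # women apply more often to department 1
    "A": (["Q", "D"], {                     # admission depends only on Q and D
        (0, 0): 0.4, (0, 1): 0.1,           # department 1 is harder to get into
        (1, 0): 0.8, (1, 1): 0.5,
    }),
}

def sample(network):
    """Forward-sample one joint assignment, visiting nodes parents-first."""
    values = {}
    for node, (parents, cpt) in network.items():
        p_one = cpt[tuple(values[p] for p in parents)]
        values[node] = int(random.random() < p_one)
    return values

# Estimate admission rates by gender: they differ even though A has no direct
# link from G, because G influences D and D influences A.
random.seed(0)
draws = [sample(admission_cbn) for _ in range(100_000)]
for g in (0, 1):
    group = [d for d in draws if d["G"] == g]
    rate = sum(d["A"] for d in group) / len(group)
    print(f"P(A=1 | G={g}) ≈ {rate:.3f}")
```

Whether such a disparity counts as unfair depends on how we judge the causal path through department choice, which is exactly the kind of question the CBN makes explicit.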
