ARTIFICIAL INTELLIGENCE

What if Algorithms Could be Fair?

Article by Andy Kitchen, Laura Summers · Illustration by Leandro Lassmar

That's not fair! A spike in our gut, a flare of anger, the weight of resentment. Fairness is in our nature. Humans are deeply socialized with a moral intuition for fairness; computers are not. But can computers be programmed with a functional substitute for fairness? This question is urgent today as more and more decisions are made by statistical algorithms.
But how can we make an algorithm fair without moral intuition? Many approaches have been proposed. Unfortunately, they are almost all heuristic: they provide a rule of thumb that is open to interpretation and lacks a coherent underlying theory or framework. A new approach recently proposed by a group of researchers from The Alan Turing Institute, called counterfactual fairness [1], offers a more principled way to make algorithms fair and resolves many of the shortcomings of past approaches.
Counterfactual fairness poses the question, "If a protected personal attribute were hypothetically changed, would the system's decision change?" Consider the example of a woman applying to university. We could set gender as the "protected attribute," that is, an attribute that we believe is a source of unfair bias. We would ask, "If they were a man, would they have been accepted into this university?" When the answer differs between the real case and the hypothetical case, we say the decision was counterfactually unfair.
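To make the test concrete, here is a minimal sketch in Python of the naive version of this check: flip the protected attribute in the input and compare decisions. The model interface and feature names are hypothetical placeholders, and, as the next paragraph explains, this simple flip misses the downstream effects that counterfactual fairness is designed to capture.

```python
# A minimal sketch of the naive "flip the protected attribute" check.
# `model` and the feature names are hypothetical placeholders.

def naive_flip_check(model, applicant: dict) -> bool:
    """Return True if the decision is unchanged when gender is flipped."""
    actual = model.predict(applicant)

    flipped = dict(applicant)
    flipped["gender"] = "male" if applicant["gender"] == "female" else "female"
    hypothetical = model.predict(flipped)

    # A differing decision flags the case as unfair under this naive test;
    # the richer counterfactual approach described below also propagates
    # downstream effects of the flip, rather than changing one field in
    # isolation.
    return actual == hypothetical
```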
Importantly, the counterfactual fairness approach is much richer than simply hiding or ignoring the protected attribute. Consider our woman applying to university. In both the real case and the hypothetical, the candidate has the same "X-factor": intelligence, commitment, determination, and hustle. However, her opportunities and circumstances are different. In the hypothetical scenario we attempt to model the effects of those differences across a lifetime, not just at the moment of the application. If our person with the same innate ability had experienced their lifetime in the switched gender, in this case as a male, would they have been accepted? This takes into account the downstream effects that would have been caused by the hypothetical change in the protected attribute.
How can you quantify the effects of a counterfactual when hypothetically changing the protected attribute? By using the relatively new field of causal modeling, pioneered by Judea Pearl and others. Causal modeling extends the classical observational tools of statistics (e.g., there is a correlation between smoking, stained teeth, and cancer) to reasoning about causes and interventions (e.g., stopping smoking will reduce cancer risk, but whitening your teeth will not). Causal modeling, or simply causality, provides the tools necessary to reason about what could cause what, design the experiments necessary to empirically test proposed causal relationships, and, importantly, quantify the effect of an intervention or counterfactual scenario.
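To sketch how such a counterfactual is computed in practice, Pearl's framework uses a three-step recipe: abduction (infer the unobserved background factors from the real case), action (set the protected attribute to its hypothetical value), and prediction (re-run the model forward so downstream effects propagate). Below is a minimal Python illustration on an invented linear structural causal model; the structure, coefficients, and threshold are all assumptions made up for this sketch.

```python
# Counterfactual inference on a toy structural causal model (SCM).
# The structure and numbers are invented for illustration:
#   gender -> opportunity -> test_score -> admitted
# `u` is the latent "X-factor" (innate ability), unaffected by gender.

def forward(gender: str, u: float) -> bool:
    """Run the assumed SCM forward from gender and the latent factor u."""
    opportunity = u + (0.5 if gender == "male" else 0.0)  # assumed unfair boost
    test_score = 2.0 * opportunity
    return test_score > 3.0  # admitted?

def counterfactual_decision(observed_gender: str,
                            observed_opportunity: float) -> bool:
    # 1. Abduction: recover the latent factor consistent with the real case.
    u = observed_opportunity - (0.5 if observed_gender == "male" else 0.0)
    # 2. Action: intervene, setting the protected attribute to its flipped value.
    flipped = "male" if observed_gender == "female" else "female"
    # 3. Prediction: replay the whole model so downstream effects propagate.
    return forward(flipped, u)

# A female applicant whose observed opportunity was 1.3:
real = forward("female", u=1.3)                            # score 2.6: rejected
hypo = counterfactual_decision("female", 1.3)              # score 3.6: admitted
print("counterfactually fair?", real == hypo)              # False
```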
Consider the following causal diagram for an illustrative “red car” example:
[Causal diagram for the "red car" example]
In the diagram above, the boxes are attributes, and the arrows indicate causality, that is, that one attribute causes another. In this example there are four attributes: "Ethnicity," "Red Car," "Risky Behavior," and "Car Crash."
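To see why merely hiding the protected attribute fails in a diagram like this, we can encode the graph and ask which attributes sit causally downstream of "Ethnicity." The specific arrows below are assumptions for illustration, since the text names the attributes but not every edge.

```python
# The red car diagram as an adjacency list. The exact arrows are assumed
# for illustration; the article names the attributes but not the edges.
edges = {
    "Ethnicity": ["Red Car"],                    # assumed edge
    "Risky Behavior": ["Red Car", "Car Crash"],  # assumed edges
    "Red Car": [],
    "Car Crash": [],
}

def descendants(graph: dict, node: str) -> set:
    """All attributes causally downstream of `node`."""
    seen, stack = set(), [node]
    while stack:
        for child in graph[stack.pop()]:
            if child not in seen:
                seen.add(child)
                stack.append(child)
    return seen

# "Red Car" is downstream of "Ethnicity", so a decision rule that uses
# car color still carries ethnicity information even when ethnicity
# itself is hidden from the model.
print(descendants(edges, "Ethnicity"))  # {'Red Car'}
```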