Despite an increasing reliance on fully automated algorithmic decision-making in our lives, human beings still make many consequential decisions. We develop a statistical methodology for experimentally evaluating the causal impacts of algorithmic recommendations on human decisions. We also show how to examine whether algorithmic recommendations improve the fairness of human decisions and derive the optimal decision rules under various settings. We apply the proposed methodology to preliminary data from the first-ever randomized controlled trial that evaluates the pretrial Public Safety Assessment (PSA) in the criminal justice system.
A goal of the PSA is to help judges decide which arrested individuals should be released. We find that providing the PSA to the judge has little overall impact on the judge's decisions and subsequent arrestee behavior. However, we find that the PSA may help avoid unnecessarily harsh decisions for female arrestees, while it encourages the judge to make stricter decisions for male arrestees who are deemed to be risky. With respect to fairness, the PSA appears to increase the gender bias against males while having little effect on any existing racial differences in judges' decisions. Finally, we find that the PSA's recommendations might be unnecessarily severe unless the cost of a new crime is sufficiently high.
Kosuke Imai, Harvard University, USA
Zhichao Jiang, University of Massachusetts, USA
James Greiner, Harvard Law School, USA
Ryan Halen, Harvard Law School, USA
Sooahn Shin, Harvard University, USA
Statistics & the Law Section, Data Ethics & Governance Section and Discussion Meetings Committee
Julia Mortera (Statistics & the Law), Kevin McConway (Data Ethics & Governance) and Judith Shorten