The use of Artificial Intelligence (AI), or more generally data-driven algorithms, has become ubiquitous in today's society. Yet, in many cases and especially when stakes are high, humans still make final decisions. The critical question, therefore, is whether AI helps humans make better decisions compared to a human-alone or AI-alone system. We introduce a new methodological framework to empirically answer this question with a minimal set of assumptions.
Specifically, we show how to compare the performance of three alternative decision-making systems -- human-alone, human-with-AI, and AI-alone. We also study when AI recommendations should be provided to a human decision-maker, and when one should follow such recommendations. We apply the proposed methodology to our own randomized controlled trial evaluating a pretrial risk assessment instrument. Our analysis shows that the risk assessment recommendations do not improve the classification accuracy of a judge's decision to impose cash bail. Furthermore, we find that replacing a human judge with algorithms -- the risk assessment score and a large language model in particular -- leads to worse classification performance.
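As a rough illustration of the comparison described above (not the paper's actual estimator, which rests on a careful set of assumptions), one can imagine scoring each decision-making system by its classification accuracy against observed outcomes. The data and accuracy metric below are entirely synthetic placeholders:

```python
# Minimal sketch (assumed setup, not the paper's methodology): compare the
# classification accuracy of three decision-making systems -- human-alone,
# human-with-AI, and AI-alone -- against observed binary outcomes.
# All data here are synthetic placeholders.

def accuracy(decisions, outcomes):
    """Fraction of binary decisions that match the observed binary outcome."""
    return sum(d == o for d, o in zip(decisions, outcomes)) / len(decisions)

# Hypothetical observed outcomes (e.g., 1 = favorable outcome) and the
# decisions each system would have made for the same eight cases.
outcomes      = [1, 0, 1, 1, 0, 0, 1, 0]
human_alone   = [1, 0, 0, 1, 0, 1, 1, 0]
human_with_ai = [1, 0, 1, 1, 0, 1, 1, 0]
ai_alone      = [1, 1, 1, 1, 0, 1, 0, 0]

for name, dec in [("human-alone", human_alone),
                  ("human-with-AI", human_with_ai),
                  ("AI-alone", ai_alone)]:
    print(f"{name}: accuracy = {accuracy(dec, outcomes):.3f}")
```

In practice the difficulty is that outcomes are only observed under the decision actually taken (e.g., a detained defendant cannot fail to appear), which is why the framework requires a formal treatment rather than a naive accuracy comparison like this one.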
The meeting will be followed by the section AGM from 4.45-5pm.
Speaker: Professor Kosuke Imai, Harvard University