Human–Algorithmic Bias: Source, Evolution, and Impact
Abstract: Prior work on human–algorithmic bias has struggled to empirically identify the underlying mechanisms of bias, because in a typical "one-time" decision-making scenario, different mechanisms generate the same patterns of observable decisions. In this study, leveraging a unique repeat decision-making setting in a high-stakes microlending context, we aim to uncover the underlying source, evolution dynamics, and associated impacts of bias. We first develop a structural econometric model of the decision dynamics to understand the source and evolution of bias among human evaluators in microloan granting. We find that both preference-based and belief-based biases exist in human decisions, and both favor female applicants. Our counterfactual simulations show that eliminating either bias improves fairness in financial resource allocation as well as platform profits. The profit improvement stems mainly from the increased approval probability for male borrowers, especially those who would eventually repay their loans. Furthermore, to examine how human biases evolve when inherited by machine learning (ML) algorithms, we train state-of-the-art ML algorithms for default risk prediction on both real-world data sets with human biases encoded within and counterfactual data sets with human biases partially or fully removed. We find that even fairness-unaware ML algorithms can reduce the bias in human decisions. Interestingly, although removing both types of human bias from the training data can further improve ML fairness, the fairness-enhancing effects vary significantly between new and repeat applicants. Based on our findings, we discuss how to reduce decision bias most effectively in a human–ML pipeline.
Bio: Xiyang Hu is an Assistant Professor of Information Systems at Arizona State University. His research focuses on trustworthy AI, human–AI, generative AI, agents, and computational social science. He studies the design of machine learning systems that support decision-making across multiple application domains, with particular attention to supporting the deployment of AI technologies that are both effective and socially responsible. He received the AIS SIG AI Best Dissertation Award, and his research has been supported by the National Science Foundation, the Amazon Research Award, and the Marketing Science Institute.