MIS Speaker's Series: Ahmed Abbasi


When

1 to 2 p.m., Jan. 26, 2024

Where

Speaker

Ahmed Abbasi

Joe and Jane Giovanini Professor of IT, Analytics, and Operations, Mendoza College of Business, University of Notre Dame

The Challenge of Using Large-Scale Digital Experiment Platforms for Scientific Discovery

Abstract: Robust digital experimentation platforms have become increasingly pervasive at major technology and e-commerce firms worldwide. They allow product managers to make data-driven decisions through online controlled experiments that estimate the average treatment effect (ATE) relative to a status-quo control setting and draw associated inferences. As demand for experiments continues to grow, orthogonal test planes (OTPs) have become the industry standard for managing the assignment of users to multiple concurrent experimental treatments on large-scale digital experimentation platforms. In recent years, firms have begun to recognize that test planes may be confounding experimental results but have nevertheless judged that the practical benefits outweigh the costs. However, the uptick in practitioner-led digital experiments has coincided with an increase in academic-industry research partnerships, in which large-scale digital experiments on websites, mobile apps, wearables, and other IT artifacts are used to scientifically answer research questions, validate design choices, and/or derive computational social science-based empirical insights. In such contexts, confounding and biased estimation may have much more pronounced implications for the validity of scientific findings, contributions to theory, the building of a cumulative literature, and ultimately practice. The purpose of this issues and opinions (I&O) article is to shed light on OTPs; in our experience, most researchers are unaware of how such test planes can lead to incorrect inferences. We use a case study conducted at a major e-commerce company to illustrate the extent to which interactions among concurrent experiments can bias ATEs, often making them appear more positive than they actually are. We discuss implications for research, including the distinction between practical industry experiments and academic research, methodological best practices for mitigating such concerns, and transparency and reproducibility considerations stemming from the complexity and opacity of large-scale experimentation platforms. More broadly, we worry that confounding in scientific research due to reliance on large-scale digital experiments, which are meant to serve a different purpose, is a microcosm of a larger epistemological confounding regarding what constitutes a contribution to scientific knowledge.
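As a rough illustration of the bias the abstract describes, the sketch below (not drawn from the talk; the outcome model, effect sizes, and variable names are illustrative assumptions) simulates two concurrently running, orthogonally randomized experiments whose treatments interact. With a positive interaction, the naive difference-in-means for experiment A absorbs part of that interaction and looks more favorable than the effect of A measured against the true status-quo control.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Orthogonal test planes: users are independently randomized (50/50)
# into two concurrent experiments, A and B.
A = rng.integers(0, 2, n)
B = rng.integers(0, 2, n)

# Assumed outcome model: main effects for each treatment plus a
# positive A x B interaction (gamma), purely for illustration.
tau_A, tau_B, gamma = 0.10, 0.05, 0.08
y = 1.0 + tau_A * A + tau_B * B + gamma * A * B + rng.normal(0, 1, n)

# Naive ATE for experiment A: difference in means, ignoring experiment B,
# which is what a platform dashboard typically reports.
naive_ate_A = y[A == 1].mean() - y[A == 0].mean()

# Effect of A relative to the status quo (B = 0), the estimand a
# researcher usually has in mind when testing a single design change.
status_quo_ate_A = (y[(A == 1) & (B == 0)].mean()
                    - y[(A == 0) & (B == 0)].mean())

print(f"Naive ATE for A (averaged over B): {naive_ate_A:.3f}")      # ~ tau_A + gamma/2
print(f"ATE for A vs. status quo (B = 0):  {status_quo_ate_A:.3f}")  # ~ tau_A
```

Under these assumptions the naive estimate converges to tau_A plus half the interaction effect, roughly 0.14 versus the 0.10 status-quo effect, which mirrors the abstract's point that concurrent-experiment interactions can make ATEs appear more positive than they actually are.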

Bio: Ahmed Abbasi is the Joe and Jane Giovanini Professor of IT, Analytics, and Operations. He serves as Director of the Analytics Ph.D. program and Co-Director of the Human-centered Analytics Lab. Ahmed completed his Ph.D. work in Information Systems at the University of Arizona's Artificial Intelligence (AI) Lab. He attained an M.B.A. and a B.S. in Information Technology from Virginia Tech. His research interests relate to text and predictive analytics. Ahmed has published nearly one hundred articles in journals and conferences, including several in top-tier outlets such as MIS Quarterly, Information Systems Research (ISR), Journal of MIS, ACM TOIS, IEEE TKDE, and IEEE Intelligent Systems. His work has been funded by over a dozen grants from the National Science Foundation and industry partners such as Microsoft, eBay, Deloitte, and Oracle. Ahmed serves as Senior Editor for ISR and Associate Editor for ACM TMIS and IEEE Intelligent Systems. He is a recipient of the IEEE Technical Achievement Award, INFORMS Design Science Award, and IBM Faculty Award. Ahmed's work has been featured in various media outlets including the Wall Street Journal, Harvard Business Review, the Associated Press, WIRED, CBS, and Fox.

Contacts

Seokjun Youn