Developing Secure, Trustworthy, and Reliable Artificial Intelligence (AI)
Abstract: On one hand, AI has recently helped achieve breakthroughs in many business applications. On the other hand, rapid advances in AI have given rise to AI-enabled adversaries, raising new societal concerns about the security of these models and the privacy of the data they are trained on. This has led to the emergence of a new attack surface: the AI model itself. AI models have been shown to be vulnerable to adversarial attacks and privacy attacks, in which an adversary aims to mislead them or gain access to their training data. My talk features collaborative efforts on designing, implementing, and evaluating methods rooted in statistical machine learning, adversarial learning, reinforcement learning, differential privacy, and optimal transport to make AI safer and more trustworthy. I introduce an ecosystem of novel methods carefully designed to address the security and privacy of AI models throughout their lifecycle, including training and deployment. The talk comprises six thrusts across two major areas: AI security and AI privacy. Within the AI security area, we developed three thrusts: 1) AI-enabled reconnaissance, to identify threats in modern malware repositories and the dark web; 2) AI-enabled attack emulation, to simulate adversarial attacks in a sandbox environment; and 3) AI-enabled defense realization, to enhance the resilience of AI models against adversarial attacks. Within the AI privacy area, we developed three thrusts: 1) protecting the privacy of AI models, to prevent training-data leakage from the models; 2) protecting the privacy of AI models built by multiple firms, to prevent data leakage in collaborative settings where training is conducted by multiple firms, each with different data and business interests; and 3) building GDPR-compliant AI models, to incorporate the right to be forgotten in AI models.
Bio: Reza Ebrahimi (ebrahimim@usf.edu) is an Assistant Professor and the founder of the Secure, Trustworthy, and Reliable AI (STAR-AI) Lab in the School of Information Systems and Management at the University of South Florida. Reza received his Ph.D. in Management Information Systems from the University of Arizona, where he was a research associate at the Artificial Intelligence (AI) Lab in 2021. He received his master’s degree in Computer Science from Concordia University, Canada, in 2016. Reza’s dissertation on AI-enabled cybersecurity analytics received the ICIS ACM SIGMIS Best Doctoral Dissertation Award in 2021. His current research focuses on statistical and adversarial machine learning theories for an AI-enabled secure and trustworthy cyberspace. Reza has published over 35 articles in peer-reviewed journals, conferences, and workshops, including MIS Quarterly, NeurIPS, JMIS, IEEE TPAMI, IEEE TDSC, IEEE ACSAC, Applied Artificial Intelligence, Digital Forensics, IEEE S&PW, AAAIW, IEEE ICDMW, and IEEE ISI. He has served as a Program Chair and Program Committee member for the IEEE ICDM Workshop on Machine Learning for Cybersecurity (MLC) since 2022, as well as for the IEEE S&P Workshop on Deep Learning Security and Privacy (DLSP). He has contributed to several projects supported by the National Science Foundation (NSF). He is an IEEE Senior Member and a member of AIS, ACM, and AAAI.