Date: September 19, 2025
Title: The Biases To Care About: ML Fairness that is “Just Right”
Speaker: Angelina Wang, Assistant Professor in the Department of Information Science, Cornell Bowers

Abstract: As machine learning has proliferated, so have concerns about fairness and bias. Yet, machine learning fairness has faced backlash from multiple directions: as being too narrow and acontextual, but also as being too excessive and "woke." In this talk I'll discuss three lines of research that aim to find the right balance when thinking about ML fairness. First, I'll talk about how our attention to the biases of large-scale datasets in vision might be better spent on the far smaller and more tractable fine-tuning datasets. Then, I will discuss how, in the excitement of using LLMs for tasks like human participant replacement, we have been so focused on enforcing that groups are treated similarly, and even as the same, that we have neglected to consider the importance of human positionality. Finally, I will talk about how ML fairness has tended to endorse a colorblind treatment of groups, but how in many reasonable situations we may actually desire demographic group differentiation. Overall, I will explore how we can address both the critique that fairness has gone too far and the critique that it has not gone far enough.
Bio: Angelina Wang is an Assistant Professor in the Department of Information Science at Cornell University and at Cornell Tech. Her research is on responsible AI, with a particular interest in fairness, evaluation, and societal impacts. She previously completed a postdoc at Stanford University, received her PhD in computer science from Princeton University, and earned her BS from UC Berkeley.