Realizable Learning is All You Need (via Zoom)

Abstract: The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory. With variants ranging from classical settings like PAC learning and regression to recent trends such as adversarially robust and private learning, it is surprising that we still lack a unifying theory explaining these results.

In this talk, we'll introduce exactly such a framework: a simple, model-independent blackbox reduction between agnostic and realizable learnability that explains their equivalence across a wide range of classical models. Further, we'll discuss how this reduction extends our understanding to new settings that are traditionally considered difficult to handle, such as learning with arbitrary distributional assumptions or more general losses. Finally, we will discuss some cool and exciting open problems. Based on joint work with Max Hopkins, Daniel Kane, and Shachar Lovett.

Bio: Gaurav is a 5th-year PhD student in the theory group at UCSD, advised by Sanjoy Dasgupta and Shachar Lovett. He has broad interests in problems related to learning and has recently worked on reinforcement learning theory and learning theory. He has spent some fun summers at Microsoft Research, the Institute for Advanced Study, and the Simons Institute.