An Equivalence Between Private Classification and Online Prediction (via Zoom)

Abstract: Differential privacy enables rich statistical analyses on data while provably protecting individual-level privacy. The last 15 years of research has shown that, at least in principle, a number of fundamental statistical tasks are compatible with differential privacy. However, privacy-preserving analyses often require additional complexity over their non-private counterparts, for instance, in terms of the number of data samples one needs to collect in order to get accurate results. In fact, some infinite concept classes that are "easy" to learn in standard computational learning theory become impossible to learn under differential privacy using any finite number of samples.

Alon, Livni, Malliaris, and Moran recently showed that finite Littlestone dimension, the combinatorial parameter characterizing learnability in the mistake-bound model of online learning, is a necessary condition for a concept class to be privately learnable, and they conjectured that it is sufficient as well. We prove their conjecture, establishing a qualitative equivalence between private learnability and online learnability. Our result goes by way of a new connection between privacy and algorithmic stability for learning.

Joint work with Roi Livni and Shay Moran.

Bio: Mark Bun's research focuses on theoretical computer science, including data privacy, computational complexity, cryptography, and the foundations of machine learning. He uses polynomials (continuous functions) to investigate fundamental properties of Boolean (discrete) functions, and has also developed new algorithms and lower bound techniques for differentially private data analysis. He spent the 2018–2019 academic year at the Simons Institute for the Theory of Computing at UC Berkeley and joined BU as a tenure-track assistant professor in July 2019.