Lower Bounds for Log-Concave and Gaussian Sampling

Abstract: Log-concave sampling has witnessed remarkable algorithmic advances in recent years, but the corresponding problem of proving lower bounds for this task has remained elusive, with lower bounds previously known only in dimension one. In this work, we establish the following query lower bounds: (1) sampling from strongly log-concave and log-smooth distributions in dimension d \ge 2 requires \Omega(\log \kappa) queries, which is sharp in any constant dimension, and (2) sampling from Gaussians in dimension d (hence also from general log-concave and log-smooth distributions in dimension d) requires \tilde{\Omega}(\min(\sqrt{\kappa} \log d, d)) queries, which is nearly sharp for the class of Gaussians. Here, \kappa denotes the condition number of the target distribution.
In this talk, I will focus on the lower bound for Gaussians. This is the first polynomial lower bound for general log-concave sampling when \kappa is polynomial in d, as well as the first lower bound showing that the query complexity of general log-concave sampling cannot be dimension-free (i.e., depend only on \kappa). The proof is based on a novel reduction showing that "block Krylov" algorithms are optimal for this problem; we believe this reduction to be of independent interest.
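For readers unfamiliar with the term, a "block Krylov" algorithm is one whose queries, after t rounds, only reveal information lying in a block Krylov subspace of the form span{B, AB, ..., A^{t-1}B} for some matrices A and B. The following is a minimal sketch of this standard construction (the specific matrices are illustrative and not taken from the paper):

```python
import numpy as np

def block_krylov_basis(A, B, t):
    """Return an orthonormal basis for the block Krylov subspace
    span{B, A B, A^2 B, ..., A^{t-1} B}.

    Naive powering as done here can be numerically ill-conditioned
    for large t; production code would re-orthogonalize each block.
    """
    blocks = [B]
    for _ in range(t - 1):
        blocks.append(A @ blocks[-1])   # next block: A times previous block
    K = np.hstack(blocks)               # stack all blocks side by side
    Q, _ = np.linalg.qr(K)              # orthonormalize the columns
    return Q

# Illustrative usage with a random symmetric A and a 2-column block B.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
A = A @ A.T                             # symmetric positive semidefinite
B = rng.standard_normal((8, 2))
Q = block_krylov_basis(A, B, t=3)       # basis with (at most) 6 columns
```

With t rounds and a block of width b, the subspace has dimension at most t*b, which is the source of the query-count bookkeeping in lower-bound arguments of this type.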
Bio: Shyam Narayanan is a fourth-year Ph.D. student, advised by Piotr Indyk, in the Theory of Computation group at MIT. His main research interest is in high-dimensional algorithmic statistics and learning. More broadly, his research spans a wide range of algorithmic areas, including testing and learning distributions, clustering, differential privacy, sublinear algorithms, and streaming algorithms.