Localization Schemes: A Framework for the Analysis of Sampling Algorithms (via Zoom)

Abstract: Two recent and seemingly unrelated techniques for proving mixing bounds for Markov chains are: (i) the framework of "Spectral Independence", introduced by Anari, Liu and Oveis Gharan, together with its numerous extensions, which has given rise to several breakthroughs in the analysis of mixing times of discrete Markov chains, and (ii) the Stochastic Localization technique, which has proven useful in establishing mixing and expansion bounds both for log-concave measures and for measures on the discrete hypercube.

In this talk, I'll present a framework which aims to both unify and extend those techniques, thus providing an approach that gives bounds for sampling algorithms in both discrete and continuous settings. At its center is the concept of a "localization scheme" which, to every probability measure on some space $\Omega$ (usually either the discrete hypercube or $\mathbb{R}^n$), assigns a martingale of probability measures which "localize" in space as time evolves. As it turns out, every such scheme can be associated with a Markov chain, and many chains of interest (such as Glauber dynamics) appear naturally in this framework. This viewpoint provides tools for deriving mixing bounds for the dynamics through the analysis of the corresponding localization process. Generalizations of the concept of Spectral Independence arise naturally from our definitions; in particular, we will show how to recover the main theorems of the spectral independence framework via simple martingale arguments, completely bypassing the need for the theory of high-dimensional expanders. We demonstrate how to apply our machinery to obtain simple proofs of many mixing bounds from the recent literature. We will briefly discuss some applications, among which are the first $O(n \log n)$ bound on the mixing time of Glauber dynamics for the hardcore model (at arbitrary degree) in the tree-uniqueness regime, and a KL-divergence decay bound for log-concave sampling via the Restricted Gaussian Oracle, which achieves optimal mixing under any $\exp(n)$-warm start.
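For concreteness, the Glauber dynamics mentioned above can be sketched for the hardcore model: at each step, a uniformly random vertex is resampled from its conditional distribution given the rest of the configuration. The following is a minimal illustrative implementation (not code from the talk); `neighbors` encodes the graph as adjacency lists and `lam` is the fugacity parameter, both names chosen here for illustration.

```python
import random

def glauber_step(config, neighbors, lam, rng):
    """One step of Glauber dynamics for the hardcore model:
    resample a uniformly chosen vertex from its conditional law."""
    v = rng.randrange(len(config))
    if any(config[u] for u in neighbors[v]):
        config[v] = 0  # an occupied neighbor forces v to be empty
    else:
        # conditionally on empty neighbors, v is occupied w.p. lam / (1 + lam)
        config[v] = 1 if rng.random() < lam / (1 + lam) else 0
    return config

def sample_hardcore(neighbors, lam, steps, seed=0):
    """Run the chain for a fixed number of steps from the empty configuration."""
    rng = random.Random(seed)
    config = [0] * len(neighbors)
    for _ in range(steps):
        glauber_step(config, neighbors, lam, rng)
    return config

# Example: the 4-cycle, where vertex i is adjacent to i-1 and i+1 (mod 4).
cycle = [(1, 3), (0, 2), (1, 3), (0, 2)]
config = sample_hardcore(cycle, lam=1.0, steps=1000)
# The chain's state is always an independent set of the graph.
assert all(not (config[u] and config[v]) for u in range(4) for v in cycle[u])
```

The stationary distribution of this chain is the hardcore measure, which weights each independent set $S$ proportionally to $\lambda^{|S|}$; the $O(n \log n)$ mixing bound from the abstract concerns exactly this dynamics in the tree-uniqueness regime.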

Based on joint work with Yuansi Chen.

Bio: Ronen Eldan is a senior researcher at the Weizmann Institute of Science. He received his Ph.D. from Tel Aviv University in 2013, under the supervision of Bo'az Klartag and Vitali Milman, and has since also spent a couple of years as a postdoc at Microsoft Research, Redmond, and at the University of Washington.

Originally a mathematician interested mainly in probability theory, functional analysis and convex geometry, in recent years he has expanded his research interests toward theoretical computer science, learning theory and optimization. His two main directions of research are applying methods from stochastic calculus to prove inequalities of a geometric nature in high-dimensional settings, and applying the theory of high-dimensional probability and geometry to problems in learning theory and optimization.