Point location with near-optimal bounds (via Zoom)

Abstract: The point location problem is a classical problem in discrete and computational geometry. Given a known partition of R^d into cells by n hyperplanes, and an unknown point, the goal is to find, as efficiently as possible, the cell in which the point lies. Access to the unknown point is indirect, via linear queries: each query reveals on which side of a given hyperplane the point lies.

This problem is equivalent to another well-studied problem, this time in machine learning: active learning of linear classifiers. Here, there are n known points in R^d, labeled by an unknown hyperplane. The goal is to find, as efficiently as possible, the labels of all n points. Here too, access to the unknown hyperplane is indirect, via label queries: each query reveals the label of a chosen point.
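The query model and the equivalence can be illustrated with a minimal sketch (the names and data here are hypothetical, not from the paper): the same n vectors serve as labeled points in the learning view and as hyperplanes of the arrangement in the point-location view, and recovering all n labels is exactly identifying the cell containing the unknown vector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known data: n points in R^d (active-learning view). In the dual
# point-location view, these same vectors define the n hyperplanes
# {x : <p_i, x> = 0} whose arrangement partitions R^d into cells.
n, d = 8, 3
points = rng.standard_normal((n, d))

# Unknown hyperplane through the origin with normal vector w_hidden;
# the learner may access it only through label queries.
w_hidden = rng.standard_normal(d)

def label_query(i):
    """One query: reveal the label sign(<w, p_i>) of point i."""
    return int(np.sign(points[i] @ w_hidden))

# The full label vector is the sign pattern of w_hidden with respect
# to the n dual hyperplanes, i.e., the cell of the arrangement that
# contains w_hidden. (This naive loop spends n queries; the point of
# the paper is to do far better.)
labels = [label_query(i) for i in range(n)]
```

The sketch uses one query per point only to make the model concrete; the algorithmic question is how few queries suffice to determine the whole label vector.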

For both problems, there is a simple information-theoretic lower bound of Omega(d log n) on the number of queries needed. Despite 40 years of work, the best previously known upper bounds had a polynomial dependence on the dimension d. In this work, we achieve for the first time a near-linear dependence on the dimension. The proof combines tools from geometry, analysis, and machine learning.
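The lower bound follows from a standard counting argument, sketched here: an arrangement of n hyperplanes in R^d has polynomially many cells, and each binary query can at best halve the number of candidate cells.

```latex
% Number of cells in an arrangement of n hyperplanes in R^d
% (general position, n >= d):
\[
  \sum_{k=0}^{d} \binom{n}{k} = \Theta(n^d).
\]
% Each binary query rules out at most half of the remaining cells,
% so identifying the cell containing the unknown point requires
\[
  \log_2 \Theta(n^d) = \Omega(d \log n)
\]
% queries in the worst case.
```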

Joint work with Max Hopkins, Daniel Kane and Gaurav Mahajan.

The paper will appear in FOCS 2020.
arXiv: https://arxiv.org/abs/2004.11380

Bio: Shachar Lovett has a broad interest in theoretical computer science and mathematics. In particular, he is interested in computational complexity, randomness and pseudorandomness, algebraic constructions, coding theory, discrete mathematics and additive combinatorics. Lovett is currently an assistant professor at UC San Diego. Before that, he was a postdoc at the Institute for Advanced Study with Avi Wigderson, and a PhD student at the Weizmann Institute, where he was advised by Omer Reingold and Ran Raz.