Recovering a hidden signal or structure in the presence of random noise is a recurring theme in fundamental problems arising in computational complexity, cryptography, machine learning, and statistics. In this talk, I'll present a promising approach for such average-case problems based on the sum-of-squares (SoS) meta-algorithm, a systematic hierarchy of convex programs independently invented in multiple disciplines including control theory, quantum information, and proof complexity. 

In a sequence of recent works, we show that this meta-algorithm yields the best-known provable guarantees for well-studied non-convex optimization problems arising in unsupervised machine learning, cryptography, statistics, and quantum information. These results often significantly outperform previously known methods. For example, we obtain the first improvement on the classical spectral algorithm for clustering due to Vempala and Wang (2004). As a special case, we partially resolve a recent open question due to Regev and Vijayaraghavan by giving an efficient algorithm for the benchmark problem of learning mixtures of high-dimensional Gaussians with center separations approaching the information-theoretically optimal bounds.

Remarkably, such results are obtained without tailoring the algorithm to the problem at hand. 

On the flip side, establishing strong lower bounds for the SoS method has proven challenging and has been actively investigated in both proof complexity and hardness of approximation. In a sequence of recent works, we build two general methods for establishing tight SoS lower bounds for fundamental average-case problems with applications to machine learning, cryptography, and game theory. For example, we show a tight SoS lower bound for the Planted Clique problem completing a program begun in an influential recent work of Meka and Wigderson (2013). 

Taken together, these results present the tantalizing possibility of a unified theory of algorithm design for average-case problems based on the SoS method. 

Pravesh K. Kothari is a Research Instructor at the Institute for Advanced Study and Princeton University. His research focuses on the analysis of general-purpose algorithmic schemes for non-convex optimization problems arising in machine learning, cryptography, and quantum information, in addition to classical algorithm design. Earlier, he obtained his Ph.D. from UT Austin in August 2016, advised by Adam Klivans, where his research was partially supported by a Simons award for graduate students in theoretical computer science.