Frameworks for High Dimensional Parallel and Distributed Optimization

Abstract: In this talk I present frameworks for solving high-dimensional optimization problems in which both the number of variables and the amount of data are huge. In these settings, practical applications require solvers to work at extreme scales, and despite decades of work, existing solvers often do not scale as desired. I present black-box acceleration frameworks for speeding up convex solvers that can be used in either parallel or distributed settings. Given a huge problem, my collaborators and I develop dimension-reduction techniques that allow the problem to be solved in a fraction of the original time and make it more amenable to parallel or distributed computation.
First, I present a novel framework that can speed up linear programming (LP) solvers, such as CPLEX and Gurobi, by orders of magnitude while maintaining nearly optimal solutions. The framework applies to both linear programs and integer linear programs, and I give worst-case guarantees on both the quality of the solution and the speedup obtained; here I focus on the special case of packing linear programs. Second, I consider convex problems with linear constraints defined with respect to a graph, where the problem data is distributed across the nodes. I present a framework that requires far less communication than existing distributed techniques and is robust to link failures in the graph. Both theoretically and empirically, the approach uses orders of magnitude less communication than existing approaches while maintaining nearly optimal solutions.
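The abstract does not spell out the framework's details, but the general flavor of dimension reduction for a packing LP (maximize c^T x subject to Ax <= b, x >= 0, with A, b, c nonnegative) can be sketched: sample a subset of the variables, solve the smaller LP over just those columns, and extend the solution by zeros, which remains feasible for the full problem precisely because the constraints are packing constraints. Everything below is an illustrative assumption, not the talk's actual algorithm or its guarantees; a simple greedy heuristic stands in for a black-box solver such as CPLEX or Gurobi so the sketch has no dependencies beyond NumPy.

```python
import numpy as np

def solve_reduced_packing_lp(c, A, b, keep_frac=0.3, seed=0):
    """Illustrative sketch (hypothetical, not the talk's method): sample a
    fraction of the variables of the packing LP
        maximize c^T x  s.t.  A x <= b,  x >= 0   (A, b, c >= 0),
    solve only over those columns, and extend by zeros. Zero-extension keeps
    the full LP's constraints satisfied because A and x are nonnegative."""
    rng = np.random.default_rng(seed)
    n = len(c)
    keep = rng.choice(n, size=max(1, int(keep_frac * n)), replace=False)
    x = np.zeros(n)
    # Greedy stand-in for the black-box solve on the reduced column set:
    # take sampled variables in decreasing objective order, each at the
    # largest value the remaining slack allows.
    for j in sorted(keep, key=lambda j: -c[j]):
        slack = np.maximum(b - A @ x, 0.0)
        col = A[:, j]
        pos = col > 1e-12
        x[j] = max(0.0, np.min(slack[pos] / col[pos])) if pos.any() else 0.0
    return x

# Toy instance with nonnegative data.
rng = np.random.default_rng(1)
A = rng.random((20, 200))
b = np.ones(20)
c = rng.random(200)
x = solve_reduced_packing_lp(c, A, b)
assert (x >= 0).all() and (A @ x <= b + 1e-6).all()  # feasible for the full LP
```

In a real deployment the reduced LP would be handed to an off-the-shelf solver rather than a greedy pass, and the sampling scheme would be chosen to control the loss in objective value; the talk's worst-case guarantees concern exactly that trade-off.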
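To make the second setting concrete, here is a standard decentralized scheme, a DIGing-style gradient-tracking iteration, shown only to illustrate the setup in which each node holds private data and exchanges values solely with its graph neighbors (each multiplication by the mixing matrix W is one round of neighbor communication). This is a generic textbook method, not the communication-reduced, link-failure-robust framework from the talk.

```python
import numpy as np

def gradient_tracking(t, W, steps=1000, lr=0.05):
    """Generic decentralized optimization over a graph: node i holds local
    data t_i and local objective f_i(x) = (x - t_i)^2; the nodes jointly
    minimize sum_i f_i, whose optimum is the mean of the t_i, while each
    node communicates only with its neighbors via the mixing matrix W."""
    x = np.zeros(len(t))     # each node's local copy of the decision variable
    g = 2.0 * (x - t)        # local gradients
    y = g.copy()             # gradient-tracking variable
    for _ in range(steps):
        x = W @ x - lr * y           # mix with neighbors, step along tracked gradient
        g_new = 2.0 * (x - t)
        y = W @ y + g_new - g        # track the network-average gradient
        g = g_new
    return x

# 4-node ring graph with symmetric, doubly stochastic mixing weights.
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
t = np.array([1.0, 3.0, 5.0, 7.0])
x = gradient_tracking(t, W)
assert np.allclose(x, t.mean(), atol=1e-3)  # every node reaches the global optimum
```

Schemes like this spend one communication round per iteration; the talk's contribution is reducing that communication cost by orders of magnitude while tolerating link failures.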

Bio: Palma London is a Ph.D. student in the Department of Computing and Mathematical Sciences at the California Institute of Technology. Prior to joining Caltech, she received a double B.S. in Electrical Engineering and Mathematics from the University of Washington. Her research interests lie broadly in convex optimization, distributed algorithms, and machine learning.