Making Machines that "Think":  Neural Nets that Generalize from Easy to Hard Problem Instances Via Recurrent Extrapolation

Abstract: This talk will have two parts.  In the first half of the talk, I'll survey the basics of adversarial machine learning, and discuss whether adversarial attacks and dataset poisoning can scale up to work on industrial systems.  I'll also present applications where adversarial methods provide benefits for domain shift robustness, dataset privacy, and data augmentation.  In the second half of the talk, I'll present my recent work on "thinking systems."  These systems use recurrent networks to emulate a human-like thinking process, in which problems are represented in memory and then iteratively manipulated and simplified over time until a solution is found.  When these models are trained only on "easy" problem instances, they can then solve "hard" problem instances without having ever seen one, provided the model is allowed to "think" for longer at test time.

Bio: Tom Goldstein is the Perotto Associate Professor of Computer Science at the University of Maryland.  His research lies at the intersection of machine learning and optimization, and targets applications in computer vision and signal processing. Before joining the faculty at Maryland, Tom completed his PhD in Mathematics at UCLA, and was a research scientist at Rice University and Stanford University. Professor Goldstein has been the recipient of several awards, including SIAM’s DiPrima Prize, a DARPA Young Faculty Award, a JP Morgan Faculty Award, and a Sloan Fellowship.