The main goal of this talk is to illustrate how machine learning can begin to address some of the fundamental perceptual and control challenges involved in building intelligent robots. I’ll discuss how to learn dynamics models for planning and control, how to use imitation to efficiently learn deep policies directly from sensor data, and how policies can be parameterized with task-relevant structure. I’ll show how some of these ideas have been applied to a new high-speed autonomous “AutoRally” platform built at Georgia Tech and to an off-road racing task that demands impressive sensing, speed, and agility. Along the way, I’ll show how theoretical insights from reinforcement learning, imitation learning, and online learning help us overcome practical challenges of learning on real-world platforms. I will conclude by discussing ongoing work in my lab on machine learning for robotics.

Byron Boots is an Assistant Professor in the School of Interactive Computing at the Georgia Institute of Technology. He holds a secondary appointment in the School of Electrical and Computer Engineering at Georgia Tech and is Visiting Faculty at NVIDIA Research. He received his M.S. and Ph.D. in Machine Learning from Carnegie Mellon University and was a postdoctoral scholar in Computer Science and Engineering at the University of Washington. Byron directs the Georgia Tech Robot Learning Lab, which is affiliated with the Center for Machine Learning and the Institute for Robotics and Intelligent Machines. His lab conducts research on theory and systems that tightly integrate perception, learning, and control. His awards include Best Paper at ICML, Best Paper at AISTATS, Best Paper Finalist at ICRA, Best Systems Paper Finalist at RSS, and the NSF CAREER Award.