Perceptual Robot Learning

Abstract: Robots today are typically confined to interacting with rigid, opaque objects whose models are known. However, the objects in our daily lives are often non-rigid, can be transparent or reflective, and are diverse in shape and appearance. One reason for the limitations of current methods is that computer vision and robot planning are often treated as separate fields. I argue that, to enhance the capabilities of robots, we should design state representations that account for both the perception and planning algorithms needed for the robotics task. I will show how we can develop novel perception and planning algorithms to assist with the tasks of manipulating cloth, manipulating novel objects, and grasping transparent and reflective objects. By considering the downstream task while jointly developing perception and planning algorithms, we can significantly improve our progress on difficult robot tasks.

Bio: David Held is an assistant professor at Carnegie Mellon University in the Robotics Institute and is the director of the RPAD lab (Robots Perceiving And Doing). His research focuses on perceptual robot learning, i.e., developing new methods at the intersection of robot perception and planning that enable robots to learn to interact with novel, perceptually challenging, and deformable objects. David has applied these ideas to robot manipulation and autonomous driving. Prior to coming to CMU, David was a post-doctoral researcher at U.C. Berkeley, and he completed his Ph.D. in Computer Science at Stanford University. David also holds a B.S. and an M.S. in Mechanical Engineering from MIT. David is a recipient of the Google Faculty Research Award in 2017 and the NSF CAREER Award in 2021.