Bharath Hariharan

I am an assistant professor in Computer Science at Cornell University. I work on computer vision and machine learning, in particular on important problems that defy the "Big Data" label. I enjoy problems that require marrying advances in machine learning with insights from computer vision, geometry, and domain-specific knowledge. Currently, my group is working on building systems that can learn about tens of thousands of visual concepts with very little or no supervision, produce rich and detailed outputs such as precise 3D shape, and reason about the world and communicate this reasoning to humans. A sampling of the research problems my group works on is presented below; an exhaustive list of publications is available on Google Scholar.


My work has been recognized with an NSF CAREER award.

My CV is here.



Note to prospective PhD students: Admissions at Cornell are done through a committee. If you are interested in working with me, please apply directly through the application website and mention my name in your application.

Assistant Professor
311 Gates Hall
Cornell University
bharathh-AT-cs-DOT-cornell-DOT-edu

Teaching

PhD students

Former PhD students

Research

Recognition with minimal labels

Deep learning and ConvNets have revolutionized visual recognition, but they require large labeled datasets for training. This is a problem in new domains like satellite imagery, in expert applications like fine-grained recognition, and in "open-world" settings like robotics where the space of possible classes is not known a priori. We are designing new classes of recognition systems that can be trained with very few labeled examples. The key insight is to look beyond the available data, leveraging domain knowledge and visual learning that transcends domains. Funding: This work is funded by an NSF CAREER award, DARPA, and IARPA.
Papers

Learning to reconstruct and synthesize 3D

Humans can not just recognize objects but also reconstruct their 3D shape. We can reason about 3D structure even when we see only one view, or a few sparse views. For computer vision systems to have this ability, they must reason not just about the well-explored geometry of perspective projection, but also about priors over scenes and shapes. Combining mathematical constraints from geometry with the data-driven priors provided by machine learning is an open research question.
Papers

Recognition in 3D

With advances in 3D reconstruction and recognition, vision is now being deployed in a variety of robotics applications, including self-driving cars. These robots have multiple cameras and LiDAR sensors, and require precise 3D location estimates for control. We are bringing insights from recognition, limited-label learning, and 3D reconstruction/synthesis to perception in 3D. Funding: This work is funded by NSF.
Papers