Artificial Intelligence Seminar

Spring 2018, Fridays
Talks - 12:15-1:15 p.m. in Gates G01
Discussion & Refreshments - 1:15-2:00 p.m. in Gates 122

http://www.cs.cornell.edu/courses/CS7790/2018sp/

 

The AI seminar will meet weekly for lectures by graduate students, faculty, and researchers emphasizing work-in-progress and recent results in AI research. The talks will run in Gates G01 between 12:15 and 1:15 p.m., with a discussion and lunch in Gates 122 following at 1:15 p.m. The new format is designed to allow AI chit-chat after the talks. Also, we're trying to make some of the presentations less formal so that students and faculty will feel comfortable using the seminar to give presentations about work in progress or practice talks for conferences.

If you or others would like to be added to or removed from this announcement list, please contact Vanessa Maley at vsm34@cornell.edu.

 

January 26th, 2018

Speaker: Yisong Yue, California Institute of Technology

Host: Thorsten Joachims, Cornell University

Title: New Frontiers in Imitation Learning

Abstract: Imitation learning is a branch of machine learning that pertains to learning to make (a sequence of) decisions given demonstrations and/or feedback. Canonical settings include self-driving cars and playing games. When scaling up to complex state/action spaces, one major challenge is how best to incorporate structure into the learning process.  For instance, the complexity of unstructured imitation learning can scale very poorly w.r.t. the naive size of the state/action space.

In this talk, I will describe recent and ongoing work in developing principled structured imitation learning approaches that can exploit interdependencies in the state/action space, and achieve orders-of-magnitude improvements in learning rate or accuracy, or both.  These approaches are showcased on a wide range of (often commercially deployed) applications, including modeling professional sports, laboratory animals, speech animation, and expensive computational oracles.

Biography: Yisong Yue is an assistant professor in the Computing and Mathematical Sciences Department at the California Institute of Technology.  He was previously a research scientist at Disney Research. Before that, he was a postdoctoral researcher at Carnegie Mellon University. He received a Ph.D. from Cornell University and a B.S. from the University of Illinois at Urbana-Champaign.

Yisong's research interests lie primarily in the theory and application of statistical machine learning. His research is largely centered around developing learning approaches that can characterize structured and adaptive decision-making settings. In the past, his research has been applied to information retrieval, recommender systems, text classification, learning from rich user interfaces, analyzing implicit human feedback, data-driven animation, behavior analysis, sports analytics, policy learning in robotics, and adaptive planning & allocation problems.

“The AI-Seminar is sponsored by Yahoo!”

February 2nd, 2018

Speaker: Ross Knepper, Cornell University

Title: Communicative Actions in Human-Robot Teams

Abstract: Robots out in the world today work for people but not with people.  Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, robots need the ability to think and act more like people. When people act jointly as part of a team, they engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans.  In this talk, I describe a framework for robots to understand and generate messages broadly -- not only through natural language but also by functional actions that carry meaning through the context in which they occur. Careful action selection allows robots to clearly and concisely communicate meaning with human partners in a manner that almost resembles telepathy.  I show examples of how this implicit communication can facilitate activities as basic as hallway navigation and as sophisticated as collaborative tool use in assembly tasks.  I also show how these abilities can assist in recovery after a failure.

“The AI-Seminar is sponsored by Yahoo!”


February 9th, 2018

No Speaker - Lunch Discussion Only at 12 (noon)

“The AI-Seminar is sponsored by Yahoo!”

February 16th, 2018

No Speaker - Lunch Discussion Only at 12 (noon)

“The AI-Seminar is sponsored by Yahoo!”

February 23rd, 2018

Speaker: Olga Russakovsky, Princeton

*This is a joint seminar with the Cornell Tech Learning Machine Seminar Series. It will take place live at the Cornell Tech campus and be broadcast to Gates G01 at 12:00 p.m.* More info here: http://lmss.tech.cornell.edu/

Title: The Human Side of Computer Vision

Abstract: Intelligent agents acting in the real world need advanced vision capabilities to perceive, learn from, reason about, and interact with their environment. In this talk, I will explore the role that humans play in the design and deployment of computer vision systems. Large-scale manually labeled datasets have proven instrumental for scaling up visual recognition, but they come at a substantial human cost. I will first briefly talk about strategies for making optimal use of human annotation effort for computer vision progress. However, no dataset can foresee all the visual scenarios that a real-world system might encounter. I will describe several recent works that integrate human and computer expertise for visual recognition in the fields of semantic segmentation and visual question answering. I will conclude with some thoughts on making fair, transparent, and representative computer vision systems going forward.

Bio: Dr. Olga Russakovsky is an Assistant Professor in the Computer Science Department at Princeton University. Her research is in computer vision, closely integrated with machine learning and human-computer interaction. She completed her PhD at Stanford University and her postdoctoral fellowship at Carnegie Mellon University. She was awarded the PAMI Everingham Prize as one of the leaders of the ImageNet Large Scale Visual Recognition Challenge, the NSF Graduate Fellowship and the MIT Technology Review 35-under-35 Innovator award. In addition to her research, she co-founded the Stanford AI Laboratory’s outreach camp SAILORS to educate high school girls about AI. She then co-founded and continues to serve as a board member of the AI4ALL foundation dedicated to educating a diverse group of future AI leaders.

“The AI-Seminar is sponsored by Yahoo!”

March 2nd, 2018

Speaker: Wei-Lun (Harry) Chao, University of Southern California

Host: Kilian Weinberger, Cornell University

Title: Transfer learning towards intelligent systems in the wild

Abstract: Developing intelligent systems for vision and language understanding in the wild has long been a crucial part of how people envision the future. In the past few years, with access to large-scale data and advances in machine learning algorithms, vision and language understanding has made significant progress in constrained environments. However, it remains challenging in unconstrained environments in the wild, where the intelligent system needs to tackle unseen objects and unfamiliar language usage that it has not been trained on. Transfer learning, which aims to transfer and adapt the learned knowledge from the training environment to a different but related test environment, has thus emerged as a promising paradigm to remedy the difficulty.

In this talk, I will present my recent work on transfer learning towards intelligent systems in the wild. I will begin with zero-shot learning, which aims to expand the learned knowledge from seen objects, for which we have training data, to unseen objects, for which we have no training data. I will present an algorithm, SynC, that can construct a classifier for any object class given its semantic description, even without training data, followed by a comprehensive study of how to apply it in different environments. I will then describe an adaptive visual question answering framework that builds upon the insight of zero-shot learning and can further adapt its knowledge to a new environment given limited information. I will finish my talk with some directions for future research.

Bio: Wei-Lun (Harry) Chao is a Computer Science PhD candidate at the University of Southern California, working with Fei Sha. His research interests are in machine learning and its applications to computer vision and artificial intelligence. His recent work has focused on transfer learning towards vision and language understanding in the wild. His earlier research includes work on probabilistic inference, structured prediction for video summarization, and face understanding.

“The AI-Seminar is sponsored by Yahoo!”

March 9th, 2018

Speaker: Byron Boots, Georgia Tech

*This is a joint seminar with Cornell Tech Learning Machine Seminar Series* More info here: http://lmss.tech.cornell.edu/

Title:

Abstract:

“The AI-Seminar is sponsored by Yahoo!”

March 16th, 2018

Speaker: Andrew Wilson, Cornell University

Host:

Title:

Abstract:

“The AI-Seminar is sponsored by Yahoo!”

March 23rd, 2018

Speaker: Jesse Thomason, UT Austin

Host: Ross Knepper, Cornell University

Title: Continuously Improving Embodied Natural Language Understanding through Human-Robot Conversation

Abstract: As robots become more ubiquitous in homes and workplaces such as hospitals and factories, they must be able to communicate with humans. Several kinds of knowledge are required to understand and respond to a human's natural language commands and questions. If a person asks an assistant robot to "Take me to Alice's office," the robot must know that Alice is a person who owns some unique office, and that "take me" means it should navigate there. Similarly, if a person requests "bring me the heavy, green mug," the robot must know that "heavy," "green," and "mug" are properties that describe a physical object in the environment, and must have accurate concept models of those properties to select the right one. In this talk, we discuss work that performs language parsing with sparse initial data, using the conversations between a robot and human users to induce pairs of natural language utterances with the target semantic forms a robot discovers through its questions. Additionally, we discuss strategies for learning perceptual concepts like "heavy," and the objects those concepts apply to, using multi-modal sensory information and interaction with humans. Finally, we present a system with both parsing and perception capabilities that learns from conversations with users to improve both components over time.

Bio: Jesse Thomason is a fifth-year PhD candidate working with Dr. Raymond Mooney and collaborating with Dr. Peter Stone at the University of Texas at Austin Computer Science Department (UTCS). He works at the intersection of natural language processing and robotics. His research interests are primarily in semantic understanding and language grounding in human-robot dialogs. He focuses on algorithms that bootstrap robot understanding from interaction with humans, improving language understanding and perceptual grounding for whatever task and domain an embodied robot operates in. He is supported by a National Science Foundation Graduate Research Fellowship and has published at AI, robotics, and NLP venues such as AAAI, CoRL, IJCAI, NAACL, and COLING.

“The AI-Seminar is sponsored by Yahoo!”

March 30th, 2018

Speaker: Yuqian Zhang

Host: Kilian Weinberger, Cornell University

Title:

Abstract:

“The AI-Seminar is sponsored by Yahoo!”

April 6th, 2018

SPRING BREAK- NO SEMINAR

"The AI-Seminar is sponsored by Yahoo!"

April 13th, 2018

ACSU Faculty Luncheon - NO SEMINAR

“The AI-Seminar is sponsored by Yahoo!”

April 20th, 2018

Speaker: Molly Feldman, Cornell University

Title:

Abstract:

“The AI-Seminar is sponsored by Yahoo!”

April 27th, 2018

Speaker: Ali Farhadi

*This is a joint seminar with Cornell Tech Learning Machine Seminar Series*

Title:

Abstract:

“The AI-Seminar is sponsored by Yahoo!”

May 4th, 2018

Speaker:

Host:

Title:

Abstract:

“The AI-Seminar is sponsored by Yahoo!”

 

See also the AI graduate study brochure.

Please contact any of the faculty below if you'd like to give a talk this semester. We especially encourage graduate students to sign up!

Sponsored by


CS7790, Spring 18

 
