Artificial Intelligence Seminar

Fall 2015
Friday 12:00-1:15
Gates Hall 122

http://www.cs.cornell.edu/courses/CS7790/2015fa/

(also NY Tech 1202.09 [Hackers])

 

The AI seminar will meet weekly for lectures by graduate students, faculty, and researchers, emphasizing work in progress and recent results in AI research. Lunch will be served starting at noon, with talks running from 12:15 to 1:15. The new format is designed to allow AI chit-chat before the talks begin. We are also trying to make some of the presentations less formal, so that students and faculty feel comfortable using the seminar to present work in progress or give practice talks for conferences.

If you would like to be added to or removed from this announcement list, please contact Amy Elser at ahf42@cornell.edu.

 

August 28th
Note: there will be two speakers for this seminar

Speaker: Ian Lenz

Host: Ross Knepper

Title: Learning Deep Latent Features for Model Predictive Control

Abstract: Designing controllers for tasks with complex nonlinear dynamics is extremely challenging, time-consuming, and in many cases infeasible. This difficulty is exacerbated in tasks such as robotic food-cutting, in which dynamics might vary both with environmental properties, such as material and tool class, and with time while acting. In this work, we present DeepMPC, an online real-time model-predictive control approach designed to handle such difficult tasks. Rather than hand-designing a dynamics model for the task, our approach uses a novel deep architecture and learning algorithm, learning controllers for complex tasks directly from data. We validate our method in experiments on a large-scale dataset of 1488 material cuts for 20 diverse classes, and in 450 real-world robotic experiments, demonstrating significant improvement over several other approaches.
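
As a point of reference for readers, here is a minimal sketch of the general recipe behind model-predictive control with a learned dynamics model. This is not the DeepMPC architecture (which uses a deep network with latent task features); a linear least-squares model and a random-shooting optimizer stand in, and the toy system below is hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical toy system standing in for the robot's true dynamics.
    A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
    B_true = np.array([[0.0], [0.5]])

    # Step 1: learn a dynamics model from random interaction data.
    # (DeepMPC learns a deep latent-feature model; least squares stands in.)
    X, U, Y = [], [], []
    x = np.zeros(2)
    for _ in range(500):
        u = rng.uniform(-1, 1, size=1)
        x_next = A_true @ x + B_true @ u + 0.01 * rng.standard_normal(2)
        X.append(x); U.append(u); Y.append(x_next)
        x = x_next
    Z = np.hstack([np.array(X), np.array(U)])            # inputs: [state, action]
    W, *_ = np.linalg.lstsq(Z, np.array(Y), rcond=None)  # learned model parameters

    def predict(x, u):
        return np.concatenate([x, u]) @ W

    # Step 2: random-shooting MPC with the learned model: sample action
    # sequences, roll them out, and execute the first action of the best one.
    def mpc_action(x, goal, horizon=10, n_candidates=256):
        best_cost, best_u0 = np.inf, None
        for _ in range(n_candidates):
            us = rng.uniform(-1, 1, size=(horizon, 1))
            xi, cost = x, 0.0
            for u in us:
                xi = predict(xi, u)
                cost += np.sum((xi - goal) ** 2)
            if cost < best_cost:
                best_cost, best_u0 = cost, us[0]
        return best_u0

    x, goal = np.zeros(2), np.array([1.0, 1.0])
    for t in range(30):
        x = A_true @ x + B_true @ mpc_action(x, goal)    # act on the true system
    print("final state:", x.round(3), "goal:", goal)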

Bio: Ian is a sixth (and final) year PhD student in Cornell CS, advised by Ashutosh Saxena and Ross Knepper. He obtained his BS in Electrical and Computer Engineering from Carnegie Mellon University. His work focuses on applying deep learning and neural network techniques to problems in robotic manipulation, perception, and navigation, addressing challenges in designing robotic systems to work with deep networks and adapting deep networks and learning algorithms to the unique challenges presented by robotics.

---------------

Speaker: Jon DeCastro

Host: Ross Knepper

Title: Collision-Free Reactive Mission and Motion Planning for Multi-Agent Systems

Abstract: We describe a holistic method for automatically synthesizing controllers for a team of robots operating in an environment shared with other agents. The proposed approach builds on recent advances in reactive mission planning using linear temporal logic and in local motion planning using convex optimization. A local planner enforces the dynamic constraints of the robot and guarantees collision avoidance in 2D and 3D workspaces. A reactive mission planner takes a high-level specification that captures complex motion sequencing and generates a correct-by-construction controller guaranteed to achieve the specified behavior and to react to sensor events. If no controller fulfills the specification because of possible deadlock in the local planner, a minimal set of human-readable assumptions is generated as a certificate of the conditions on deadlock under which the task is guaranteed. This is a truly synergistic method: the low-level motion planner enables scalability of the high-level plan synthesis with respect to dynamic obstacles, and the high-level mission planner enforces correctness of the low-level motion. We provide formal guarantees for our approach and demonstrate it via physical experiments with ground robots.
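
To make the two-layer structure concrete, here is a schematic sketch in which a hand-written reactive automaton stands in for a controller synthesized from an LTL specification, and a potential-field step stands in for the convex-optimization local planner. The deadlock reaction encoded below is the kind of environment assumption the abstract says the synthesis surfaces; all names and dynamics are made up.

    import numpy as np

    GOALS = {"go_to_A": np.array([5.0, 0.0]), "go_to_B": np.array([0.0, 5.0])}

    # High-level mission layer: a hand-written reactive automaton standing in
    # for a synthesized controller. Switching goals on a deadlock report is a
    # stand-in for the assumptions a real synthesis procedure would generate.
    def mission_step(state, deadlocked):
        if state == "go_to_A" and deadlocked:
            return "go_to_B"
        return state

    # Low-level local planner: one step of attraction to the goal plus
    # repulsion from nearby agents (a crude stand-in for the talk's
    # convex-optimization collision-avoidance planner).
    def local_step(pos, goal, others, gain=0.2, margin=1.0):
        v = goal - pos                       # attraction to the goal
        n = np.linalg.norm(v)
        if n > 1.0:
            v = v / n                        # cap the attraction at unit length
        for o in others:                     # repulsion from close-by agents
            d = pos - o
            dist = np.linalg.norm(d)
            if dist < margin:
                v = v + d / (dist ** 2 + 1e-6)
        return pos + gain * v

    pos, state = np.array([0.0, 0.0]), "go_to_A"
    others = [np.array([2.5, 0.0])]          # another agent parked on the path
    for t in range(100):
        new_pos = local_step(pos, GOALS[state], others)
        deadlocked = np.linalg.norm(new_pos - pos) < 1e-3
        state = mission_step(state, deadlocked)
        pos = new_pos
        if np.linalg.norm(GOALS[state] - pos) < 0.1:
            break
    print("mission state:", state, "reached:", pos.round(2), "after", t, "steps")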

Bio: Jon DeCastro is a fifth-year Ph.D. student in Mechanical Engineering studying the high-level implications of provably-correct controller synthesis for robots with dynamics. He is investigating automatic synthesis of continuous controllers for verified high-level control of challenging tasks and discovery of specification revisions for guaranteeing controller execution in dynamic environments. In his free time, Jon enjoys hiking, running and spending time with his two kids.

“The AI-Seminar is sponsored by Yahoo!”

September 4th

Speaker: Jordan Boyd-Graber, University of Colorado

Host: David Mimno

Title: Thinking on your Feet: Reinforcement Learning for Incremental Language Tasks

Abstract: In this talk, I'll discuss two real-world language applications that require "thinking on your feet": synchronous machine translation (or "machine simultaneous interpretation") and question answering (when questions are revealed one piece at a time). In both cases, effective algorithms for these tasks must interrupt the input stream and decide when to provide output.

In synchronous machine translation, a sentence is produced one word at a time in a foreign language, and we want to produce an English translation simultaneously (i.e., with as little delay as possible between a foreign-language word and its English translation). This is particularly difficult in verb-final languages like German or Japanese, where an English translation can barely begin until the verb is seen. Effective translation thus requires predicting unseen elements of the sentence (e.g., the main verb in German and Japanese, or relative clauses and post-positions in Japanese). We use reinforcement learning to decide when to trust our verb predictions. The system must learn to balance incorrect translations against timely ones, and must use those predictions to translate the sentence.

For question answering, we use a specially designed dataset that challenges humans: a trivia game called quiz bowl. These questions are written so that they can be interrupted by someone who knows more about the answer; that is, harder clues come at the start of the question and easier clues at the end. We create a recursive neural network to predict answers from incomplete questions and use reinforcement learning to decide when to guess. We are able to answer questions earlier in the question than most college trivia contestants.
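
A toy sketch of the incremental decision problem: a classifier scores each growing prefix of the question, and a buzzing policy decides when to commit. Here a bag-of-words model replaces the recursive neural network and a fixed confidence threshold replaces the learned reinforcement-learning policy; the three-question training set is invented for illustration.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tiny invented stand-in training set: question text -> answer.
    questions = [
        ("this austrian physicist lends his name to the effect of frequency shift", "doppler"),
        ("this author of pride and prejudice also wrote emma", "austen"),
        ("this element with atomic number 79 is a noble metal", "gold"),
    ] * 10  # repeated so the classifier has something to fit
    X, y = zip(*questions)
    vec = CountVectorizer().fit(X)
    clf = LogisticRegression(max_iter=1000).fit(vec.transform(X), y)

    def play(question, threshold=0.6):
        """Reveal the question one word at a time; 'buzz' as soon as the
        classifier's top answer clears the confidence threshold."""
        words = question.split()
        for i in range(1, len(words) + 1):
            prefix = " ".join(words[:i])
            probs = clf.predict_proba(vec.transform([prefix]))[0]
            best = probs.argmax()
            if probs[best] >= threshold:
                return clf.classes_[best], i   # answer, and words needed to buzz
        return clf.classes_[probs.argmax()], len(words)

    print(play("this austrian physicist lends his name to the effect of frequency shift"))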

Bio: Jordan Boyd-Graber is an assistant professor in the University of Colorado Boulder's Computer Science Department, formerly serving as an assistant professor at the University of Maryland. He is a 2010 graduate of Princeton University, with a PhD thesis on "Linguistic Extensions of Topic Models" written under David Blei.

“The AI-Seminar is sponsored by Yahoo!”

September 8th
SPECIAL SESSION
Noon
310 Gates Hall

Speaker: Natalie Glance, Duolingo

Host: David Mimno

Title: Duolingo: Improving Language Education with Data 

Abstract: Duolingo is a free online education service that allows people to learn new languages. Since launching three years ago, Duolingo has grown to more than 100 million students from all over the world, and our mobile apps were awarded top honors from both Apple and Google in 2013. In this talk, I will discuss our architecture and present some examples of how we use a data-driven approach to improve the system, drawing on various disciplines including psychometrics, natural language processing, and machine learning.

Bio: Natalie Glance is Engineering Director at Duolingo.  See: https://www.linkedin.com/in/nglance

“The AI-Seminar is sponsored by Yahoo!”

September 18th
Joint AI/Systems Seminar

Malott Hall, Room 228

NO NYC STREAMING

 


Speaker: Chandu Thekkath and Sriram Rajamani, MSR India (Joint Seminar with Systems)

Host: Lillian Lee

Title: Microsoft Research India Overview

Abstract: Established in 2005, Microsoft Research India recently celebrated its 10th anniversary. We give an overview of research at MSR India. We cover our systems work (specifically, cloud security, privacy, and language and tool support for machine learning) in some detail, and give a brief tour of our work in theory, machine learning, and the role of technology in socio-economic development. We also present career opportunities at MSR India (we are hiring!) and encourage students to apply.

Bio: Chandu Thekkath is Managing Director and Sriram Rajamani is Assistant Managing Director of Microsoft Research India. See http://research.microsoft.com/en-us/press/chandu-thekkath.aspx and http://research.microsoft.com/en-us/people/sriram/bio.aspx

“The AI-Seminar is sponsored by Yahoo!”

September 25th

 

Speaker: Satyen Kale, Yahoo

Host: Ross Knepper

Title: Online Boosting Algorithms

Abstract: We initiate the study of boosting in the online setting, where the task is to convert a "weak" online learner into a "strong" online learner. The notions of weak and strong online learners directly generalize the corresponding notions from standard batch boosting. For the classification setting, we develop two online boosting algorithms. The first algorithm is an online version of boost-by-majority, and we prove that it is essentially optimal in terms of the number of weak learners and the sample complexity needed to achieve a specified accuracy. The second algorithm is adaptive and parameter-free, albeit not optimal.

For the regression setting, we give an online gradient boosting algorithm which converts a weak online learning algorithm for a base class of regressors into a strong online learning algorithm which works for the linear span of the base class. We also give a simpler boosting algorithm for regression that obtains a strong online learning algorithm which works for the convex hull of the base class, and prove its optimality.
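
As a rough illustration of the regression setting under squared loss (where the negative gradient is just the residual), the sketch below feeds each weak online learner the residual left by the partial sum of the learners before it. This is a simplification, not the paper's algorithm; the linear weak learner, shrinkage, and step sizes are arbitrary choices.

    import numpy as np

    class OnlineLinearLearner:
        """Weak online regressor: linear model updated by online gradient descent."""
        def __init__(self, dim, lr=0.05):
            self.w = np.zeros(dim)
            self.lr = lr
        def predict(self, x):
            return self.w @ x
        def update(self, x, target):
            self.w -= self.lr * (self.predict(x) - target) * x

    class OnlineGradientBooster:
        """Combine N weak online learners; learner i fits the residual left by
        the partial sum of learners 0..i-1 (squared loss => residual is the
        negative gradient)."""
        def __init__(self, n_learners, dim, eta=0.5):
            self.learners = [OnlineLinearLearner(dim) for _ in range(n_learners)]
            self.eta = eta
        def predict(self, x):
            return self.eta * sum(l.predict(x) for l in self.learners)
        def update(self, x, y):
            partial = 0.0
            for l in self.learners:
                residual = y - self.eta * partial      # what is still unexplained
                l.update(x, residual)
                partial += l.predict(x)

    rng = np.random.default_rng(0)
    booster = OnlineGradientBooster(n_learners=5, dim=3)
    sq_err = 0.0
    for t in range(2000):
        x = rng.standard_normal(3)
        y = 2 * x[0] - x[1] + 0.5 * x[2]               # target in the linear span
        sq_err += (booster.predict(x) - y) ** 2        # predict before updating
        booster.update(x, y)
    print("average online squared error:", sq_err / 2000)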

“The AI-Seminar is sponsored by Yahoo!”

October 2nd

Speaker: Yoav Artzi, Cornell University

Host: Ross Knepper

Title: Broad-coverage CCG Semantic Parsing with AMR

Abstract: Semantic parsing, the task of mapping sentences to logical form meaning representations, has recently received significant attention. We propose a grammar induction technique for AMR semantic parsing. While previous grammar induction techniques were designed to learn a new parser for each target application, the recently annotated AMR Bank provides a unique opportunity to induce a single model for understanding broad-coverage newswire text and support a wide range of applications. We present a new model that combines CCG parsing to recover compositional aspects of meaning and a factor graph to model non-compositional phenomena, such as anaphoric dependencies. Our approach achieves a 66.2 Smatch F1 score on the AMR bank, significantly outperforming the previous state of the art. This talk is an extended version of our EMNLP 2015 talk.
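
To make the compositional half concrete, here is a toy illustration of CCG-style parsing in which logical forms are built purely by the two application combinators. The induced broad-coverage grammar and the factor graph for non-compositional phenomena are well beyond a sketch like this; the three-word lexicon is invented.

    # Categories as nested tuples: atomic "S","NP","N", or (result, "/", arg)
    # for forward-looking and (result, "\\", arg) for backward-looking.
    S, NP, N = "S", "NP", "N"
    LEXICON = {
        # "every" is type-raised: "every boy" takes a verb phrase, returns S.
        "every":  (((S, "/", (S, "\\", NP)), "/", N),
                   lambda n: lambda vp: f"forall x.({n}(x) -> {vp}(x))"),
        "boy":    (N, "boy"),
        "sleeps": ((S, "\\", NP), "sleep"),
    }

    def combine(left, right):
        (c1, m1), (c2, m2) = left, right
        if isinstance(c1, tuple) and c1[1] == "/" and c1[2] == c2:
            return (c1[0], m1(m2))        # forward application:  X/Y  Y  =>  X
        if isinstance(c2, tuple) and c2[1] == "\\" and c2[2] == c1:
            return (c2[0], m2(m1))        # backward application: Y  X\Y  =>  X
        return None

    def parse(words):
        """Shift-reduce parsing with the two basic CCG application rules."""
        stack = []
        for w in words:
            stack.append(LEXICON[w])
            while len(stack) >= 2 and (r := combine(stack[-2], stack[-1])):
                stack[-2:] = [r]
        return stack

    cat, meaning = parse("every boy sleeps".split())[0]
    print(cat, "->", meaning)             # S -> forall x.(boy(x) -> sleep(x))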

Bio: Yoav Artzi is an Assistant Professor in the Department of Computer Science and Cornell Tech at Cornell University. His research interests are in the intersection of natural language processing and machine learning. In particular, he focuses on designing latent variable learning algorithms that recover rich representations of linguistic meaning for situated natural language understanding. He received the best paper award at EMNLP 2015. He completed a B.Sc. summa cum laude from Tel Aviv University and a Ph.D. from the University of Washington.

“The AI-Seminar is sponsored by Yahoo!”

October 9th

Speaker: Alekh Agarwal, MSR NY

Host: Karthik Sridharan

Title: Efficient and Parsimonious Agnostic Active Learning

Abstract: We develop a new active learning algorithm for the streaming setting that satisfies three important properties: 1) it provably works for any classifier representation and classification problem, including those with severe noise; 2) it is efficiently implementable with an ERM oracle; 3) it is more aggressive than all previous approaches satisfying 1 and 2. To do this, we create an algorithm based on a newly defined optimization problem and analyze it. We also conduct the first experimental analysis of all efficient agnostic active learning algorithms, evaluating their strengths and weaknesses in different settings.

This is joint work with Tzu-Kuo Huang, John Langford and Rob Schapire at Microsoft Research.
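
To make the streaming setting concrete, here is a simple uncertainty-based baseline (emphatically not the paper's algorithm): query a label only when the current hypothesis is unsure, and treat refitting a standard classifier on the queried points as the ERM oracle. The noise rate and threshold are arbitrary.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def label(x):                          # noisy ground truth (agnostic setting)
        y = int(x[0] + x[1] > 0)
        return y if rng.random() > 0.1 else 1 - y

    X_seen, y_seen, queries = [], [], 0
    clf = None
    for t in range(2000):                  # the unlabeled stream
        x = rng.standard_normal(2)
        if clf is None:
            unsure = True
        else:
            p = clf.predict_proba([x])[0, 1]
            unsure = abs(p - 0.5) < 0.15   # near the current decision boundary
        if unsure:                         # query the label only when unsure
            X_seen.append(x); y_seen.append(label(x)); queries += 1
            if len(set(y_seen)) > 1:       # ERM "oracle": refit on queried data
                clf = LogisticRegression().fit(X_seen, y_seen)

    X_test = rng.standard_normal((5000, 2))
    y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
    print(f"queried {queries}/2000 labels, accuracy {clf.score(X_test, y_test):.3f}")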

 

“The AI-Seminar is sponsored by Yahoo!”

October 16th

Speaker: Wolfgang Gatterbauer

Host: Joseph Halpern

Title: Approximate lifted inference with probabilistic databases

Abstract: Probabilistic inference over large data sets is becoming a central data management problem. Recent large knowledge bases, such as Yago, NELL, or DeepDive, have millions to billions of uncertain tuples. Yet probabilistic inference is known to be #P-hard in the size of the database, even for some very simple queries. This talk presents a new approach that allows ranking answers to hard probabilistic queries in guaranteed polynomial time, using only basic operators of existing database management systems (e.g., no sampling required).
(1) The first part of this talk develops upper and lower bounds for the probability of Boolean functions by treating multiple occurrences of variables as independent and assigning them new individual probabilities. We call this approach dissociation and give an exact characterization of optimal oblivious bounds, i.e., when the new probabilities are chosen independently of the probabilities of all other variables. Our new bounds shed light on the connection between previous relaxation-based and model-based approximations and unify them as concrete choices in a larger design space.
(2) The second part then draws the connection to lifted inference and shows how application of this theory allows a standard relational database management system to both upper and lower bound hard probabilistic queries in guaranteed polynomial time. We give experimental evidence on synthetic TPC-H data that our approach is by orders of magnitude faster and also more accurate than currently used sampling-based approaches.
(Talk based on joint work with Dan Suciu from TODS 2014 and VLDB 2015:
http://arxiv.org/abs/1409.6052, http://arxiv.org/pdf/1412.1069)
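
A worked toy instance of dissociation on the positive DNF f = (x AND y) OR (x AND z): treating the two occurrences of x as fresh independent variables, each keeping probability p(x), yields an efficiently computable upper bound (one can check that upper - exact = p(x)p(y)p(z)(1 - p(x)) >= 0), while keeping a single disjunct gives a simple, non-optimal lower bound. The optimal oblivious choices are characterized in the papers above.

    from itertools import product

    px, py, pz = 0.6, 0.5, 0.7     # marginal probabilities of x, y, z

    # Exact probability of f = (x and y) or (x and z), by enumeration.
    exact = sum(
        (px if x else 1 - px) * (py if y else 1 - py) * (pz if z else 1 - pz)
        for x, y, z in product([0, 1], repeat=3)
        if (x and y) or (x and z)
    )

    # Dissociation: replace the two occurrences of x with fresh independent
    # variables x1, x2, both keeping probability px. For this positive DNF the
    # result is an upper bound, computable with independence alone.
    upper = 1 - (1 - px * py) * (1 - px * pz)

    # A simple valid lower bound: drop one disjunct (set one occurrence to 0).
    lower = px * py

    print(f"lower {lower:.4f} <= exact {exact:.4f} <= upper {upper:.4f}")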

 

“The AI-Seminar is sponsored by Yahoo!”

October 23rd

Speaker: Jeffrey Mark Siskind, School of Electrical and Computer Engineering, Purdue University

Host: Claire Cardie

Title: Decoding the Brain to Help Build Machines

Abstract: Humans can describe observations and act upon instructions.  This requires that language be grounded in perception and motor control.  I will present several components of my long-term research program to understand the vision-language-motor interface in the human brain and emulate such on computers.

In the first half of the talk, I will present an fMRI investigation of the vision-language interface in the human brain.  Subjects were presented with stimuli in different modalities---spoken sentences, textual presentations of sentences, and video clips depicting activity that can be described by sentences---while undergoing fMRI.  The scan data are analyzed to allow readout of individual constituent concepts and words---people/names, objects/nouns, actions/verbs, and spatial-relations/prepositions---as well as phrases and entire sentences.  This can be done across subjects and across modalities: we use classifiers trained on scan data for one subject to read out from another subject, and use classifiers trained on scan data for one modality, say text, to read out from scans of another modality, say video or speech.  This analysis indicates that the brain regions involved in processing the different kinds of constituents are largely disjoint, but also largely shared across subjects and modalities.  Further, we can determine the predication relations; when the stimuli depict multiple people, objects, and actions, we can read out which people are performing which actions with which objects.  This points to a compositional mental semantic representation common across subjects and modalities.
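
The cross-modality readout protocol can be sketched on synthetic stand-in data: train a classifier on "scan features" recorded under one stimulus modality and test it on scans from another. Real fMRI analysis involves far more preprocessing and validation; every number below is invented.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n_voxels, n_trials, concepts = 200, 100, 4

    # Synthetic stand-in for scan data: each concept has a shared neural
    # signature plus modality-specific noise (the empirical finding is that
    # such shared structure exists across modalities and subjects).
    signatures = rng.standard_normal((concepts, n_voxels))

    def scans(modality_noise):
        y = rng.integers(0, concepts, n_trials)
        X = signatures[y] + modality_noise * rng.standard_normal((n_trials, n_voxels))
        return X, y

    X_text, y_text = scans(modality_noise=1.0)    # scans during text stimuli
    X_video, y_video = scans(modality_noise=1.0)  # scans during video stimuli

    # Train on one modality, read out from the other.
    clf = LogisticRegression(max_iter=1000).fit(X_text, y_text)
    print("cross-modal readout accuracy:", clf.score(X_video, y_video))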

In the second half of the talk, I will use this work to motivate the development of three computational systems.  First, I will present a system that can search a collection of ten full-length Hollywood movies for clips that depict actions specified in sentential queries.  This is done without any annotation or markup of the video.  Second, I will present a system that can use sentential descriptions of human interaction with previously unseen objects in video to automatically find and track those objects.  This is done without any annotation of the objects and without any pretrained object detectors.  Third, I will present a system that learns the meanings of nouns and prepositions from video and tracks of a mobile robot navigating through its environment, paired with sentential descriptions of such activity.  Such a learned language model then supports both generation of sentential descriptions of new paths driven in new environments and automatic driving of paths to satisfy navigational instructions specified with new sentences in new environments.

Joint work with Andrei Barbu, Daniel Paul Barrett, Scott Alan Bronikowski, Wei Chen, N. Siddharth, Caiming Xiong, Haonan Yu, Jason J. Corso, Christiane D. Fellbaum, Catherine Hanson, Stephen Jose Hanson, Sebastien Helie, Evguenia Malaia, Barak A. Pearlmutter, Thomas Michael Talavage, and Ronnie B. Wilbur.

Bio: Jeffrey M. Siskind received the B.A. degree in computer science from the Technion, Israel Institute of Technology, Haifa, in 1979, the S.M. degree in computer science from the Massachusetts Institute of Technology (M.I.T.), Cambridge, in 1989, and the Ph.D. degree in computer science from M.I.T. in 1992.  He did a postdoctoral fellowship at the University of Pennsylvania Institute for Research in Cognitive Science from 1992 to 1993.  He was an assistant professor at the University of Toronto Department of Computer Science from 1993 to 1995, a senior lecturer at the Technion Department of Electrical Engineering in 1996, a visiting assistant professor at the University of Vermont Department of Computer Science and Electrical Engineering from 1996 to 1997, and a research scientist at NEC Research Institute, Inc. from 1997 to 2001.  He joined the Purdue University School of Electrical and Computer Engineering in 2002 where he is currently an associate professor.  His research interests include machine vision, artificial intelligence, cognitive science, computational linguistics, child language acquisition, and programming languages and compilers.

 

“The AI-Seminar is sponsored by Yahoo!”

   
November 6th

Speaker: Yudong Chen

Host: Kilian Weinberger

Title: Non-convex Gradient Descent for Fitting Low-rank Models

Abstract: Fitting a structured rank-r matrix to noisy data is an important subroutine in many applications (PCA, clustering, collaborative filtering, and more). It involves solving an (NP-hard) rank-constrained optimization problem. The popular approach via convex relaxation runs in polynomial time in principle and has strong statistical guarantees, but its quadratic time complexity is often too high for large problems. An attractive and highly scalable alternative is to run (projected) gradient descent directly over the space of low-rank matrices, but its convergence and statistical accuracy were unclear due to non-convexity. We develop a unified framework characterizing the convergence of this non-convex method as well as the statistical properties of the resulting solution. Our results provide convergence guarantees for a broad range of low-rank problems in machine learning and statistics, including matrix sensing, matrix completion with real and one-bit observations, matrix decomposition, robust and structured PCA, graph clustering, and others. For these problems, non-convex projected gradient descent runs in near-linear time and provides statistical guarantees that match (and are sometimes better than) the best known results.
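
A minimal sketch of the non-convex recipe for one instance, matrix completion: spectral initialization followed by gradient descent directly on the low-rank factors, with no convex solver. The sampling rate, step size, and problem size are arbitrary illustrative choices.

    import numpy as np

    rng = np.random.default_rng(0)
    n, r = 50, 3

    # Ground-truth rank-r matrix; observe ~30% of its entries.
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
    mask = rng.random((n, n)) < 0.3

    # Spectral initialization: top-r SVD of the zero-filled, rescaled observations.
    U0, s, Vt = np.linalg.svd((mask * M) / 0.3, full_matrices=False)
    U = U0[:, :r] * np.sqrt(s[:r])
    V = Vt[:r].T * np.sqrt(s[:r])

    # Non-convex method: gradient descent directly on the factors (the estimate
    # is U @ V.T), instead of solving a convex nuclear-norm relaxation.
    step = 0.5 / s[0]
    for _ in range(500):
        R = mask * (U @ V.T - M)                   # residual on observed entries
        U, V = U - step * (R @ V), V - step * (R.T @ U)

    err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
    print(f"relative error after factored gradient descent: {err:.4f}")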

Bio: Yudong Chen joined the School of ORIE, Cornell University in 2015 as an assistant professor. He was a postdoctoral scholar at the Department of Electrical Engineering and Computer Sciences at UC Berkeley from 2013 to 2015. He obtained his Ph.D. in Electrical and Computer Engineering from the University of Texas at Austin in 2013, and his M.S. and B.S. from Tsinghua University. His research interests include machine learning, high-dimensional and robust statistics, large-scale optimization, and applications in networks and transportation systems.

"The AI-Seminar is sponsored by Yahoo!"


November 13th

Speaker: Alex Kulesza, Google

Host: Kilian Weinberger

Title: Low-Rank Spectral Learning

Abstract: Modern machine learning can often be characterized as the application of optimization techniques to loss objectives.  This approach has clear intuitions, generalization guarantees, and a track record of empirical success.  However, optimization can be computationally expensive, and often there is no guarantee of finding the global optimum.  As a result, there has been a growing interest in alternative "method of moments" algorithms, including spectral learning and tensor decomposition methods, that attempt to recover model parameters directly by manipulating statistics of the training set.  Hsu, Kakade, and Zhang, for example, showed that their seminal spectral learning algorithm can learn hidden Markov models (HMMs) quickly, exactly, and in closed form, in stark contrast to EM.

However, the assumptions of Hsu et al. are unrealistic: they require that the training data be generated by an HMM whose number of states (rank) exactly matches that of the model being learned.  In the real world, this is virtually never true.  Empirical data are noisy and complex, and for computational and statistical reasons we usually want to fit a low-rank model.  As a result, during spectral learning we typically throw out the smallest singular values of the statistics matrix.  Intuitively, this seems like a reasonable approximation.

However, I will describe a surprising result: even when the singular values thrown out are arbitrarily small, the resulting prediction errors can be arbitrarily large.  I will identify two distinct causes for this bad behavior, illustrate them with simple examples, and prove that they are essentially complete: if neither occurs, the prediction error is bounded by the magnitudes of the omitted singular values.  Finally, I will describe a limiting case in which we can prove that this problem disappears entirely.  By studying the properties of this limiting case, we can derive several conceptual and practical insights that lead to improved empirical performance on both synthetic and real-world problems, as well as pave the way toward a theoretical understanding of spectral learning under realistic assumptions.
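
To make the rank-truncation issue concrete, here is a sketch of the Hsu-Kakade-Zhang construction on a tiny synthetic HMM using exact (population) statistics: observable operators are built from an SVD of the pairwise observation-probability matrix, and truncating that SVD below the true rank changes the predictions, which is the regime the talk analyzes. The formulas follow the standard construction; the HMM itself is random.

    import numpy as np

    # A tiny HMM: k hidden states, m observations; exact (population) statistics.
    rng = np.random.default_rng(0)
    k, m = 3, 4
    T = rng.dirichlet(np.ones(k), size=k).T      # T[i,j] = P(h'=i | h=j)
    O = rng.dirichlet(np.ones(m), size=k).T      # O[x,j] = P(obs=x | h=j)
    pi = rng.dirichlet(np.ones(k))

    P1 = O @ pi                                   # P1[x]    = P(x1=x)
    P21 = O @ T @ np.diag(pi) @ O.T               # P21[b,a] = P(x2=b, x1=a)
    P3x1 = [O @ T @ np.diag(O[x]) @ T @ np.diag(pi) @ O.T for x in range(m)]

    def spectral_params(rank):
        """Hsu-Kakade-Zhang observable operators from a rank-truncated SVD."""
        U = np.linalg.svd(P21)[0][:, :rank]
        b1 = U.T @ P1
        binf = np.linalg.pinv(P21.T @ U) @ P1
        Bs = [(U.T @ P3x1[x]) @ np.linalg.pinv(U.T @ P21) for x in range(m)]
        return b1, binf, Bs

    def prob(seq, params):
        b1, binf, Bs = params
        v = b1
        for x in seq:
            v = Bs[x] @ v
        return binf @ v

    seq = [0, 2, 1]
    # True probability by the forward algorithm over hidden states.
    alpha = pi.copy()
    for x in seq:
        alpha = T @ (O[x] * alpha)
    print("true:", alpha.sum())
    print("spectral, full rank k:", prob(seq, spectral_params(k)))
    print("spectral, truncated rank k-1:", prob(seq, spectral_params(k - 1)))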

Bio: Alex Kulesza is a research scientist at Google.  His research involves developing efficient models and algorithms for machine learning, including online learning, determinantal point processes, and spectral methods.  He recently completed a postdoc at the University of Michigan with Satinder Singh and received his Ph.D. in 2012 from the department of Computer and Information Science at the University of Pennsylvania, where he was advised by Ben Taskar and Fernando Pereira.

“The AI-Seminar is sponsored by Yahoo!”

November 20th

Speaker: NO SEMINAR, ACSU LUNCHEON

“The AI-Seminar is sponsored by Yahoo!”

November 27th

Speaker: NO SEMINAR, THANKSGIVING BREAK

“The AI-Seminar is sponsored by Yahoo!”

December 4th

Speaker: Peter Enns, Cornell University

Host: Lillian Lee

Title: A Computational Approach to Understanding the Congressional (Non-) Response to Economic Inequality

Abstract: Despite the growth and increasing salience of economic inequality in the U.S., this issue has yet to feature prominently on the Congressional agenda. We argue that the upper-class bias in the interest system is critical to understanding the congressional “non-response” to rising inequality. Specifically, we propose that as inequality increases, politicians become more reliant on upper-income interests for resources, making them less likely to raise the issue of economic inequality: the wealthy beneficiaries of growing inequality prefer that Congress avoid discussing inequality and address other economic issues instead. To test this argument, we use the Congressional Record to generate over-time, speech-based measures of policy attention for each member of Congress. We then link these measures to individual-level campaign donation data, which indicate how much each member of Congress received and from what source. The current analysis extends from 1995 to 2012; the next phase of the project will extend the analysis of the Congressional Record back to 1913. Our initial results are consistent with expectations, suggesting that who donates to campaigns has a major influence on which economic issues Congress prioritizes.
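
The measurement strategy can be sketched with made-up data: score each member's floor speech for attention to inequality (a naive keyword share stands in for the paper's speech-based measures) and relate the score to the share of donations from upper-income sources. Every speech, number, and term list below is invented for illustration.

    import numpy as np

    # Hypothetical stand-in data: per-member floor speeches and the share of
    # their campaign money coming from upper-income donors.
    speeches = {
        "member_a": "we must address the minimum wage and income inequality today",
        "member_b": "capital gains tax relief will encourage investment and growth",
        "member_c": "rising inequality and stagnant wages demand congressional action",
    }
    upper_income_donor_share = {"member_a": 0.2, "member_b": 0.8, "member_c": 0.3}

    INEQUALITY_TERMS = {"inequality", "wages", "wage", "poverty"}

    def attention(text):
        """Crude speech-based measure: share of words about inequality."""
        words = text.split()
        return sum(w in INEQUALITY_TERMS for w in words) / len(words)

    x = np.array([upper_income_donor_share[m] for m in speeches])
    y = np.array([attention(t) for t in speeches.values()])
    slope = np.corrcoef(x, y)[0, 1] * y.std() / x.std()   # bivariate OLS slope
    print("attention scores:", dict(zip(speeches, y.round(3))))
    print(f"OLS slope of attention on donor share: {slope:.3f}")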

Bio: http://falcon.arts.cornell.edu/pe52/

“The AI-Seminar is sponsored by Yahoo!”

December 11th

Speaker: Olga Russakovsky, CMU

Host: Kavita Bala

Title: Scaling Up Object Detection

Abstract: Hundreds of billions of photographs are uploaded on the web each year. An important step towards automatically analyzing the content of these photographs is building computer vision models that can recognize and localize all the depicted objects. Traditionally, work on scaling up object recognition has focused on algorithmic improvements, e.g., building more efficient or more powerful models. However, I will argue that data plays at least as big a role: effectively collecting and annotating the right data is a critical component of scaling up object detection.

The first part of the talk will be about constructing an object detection dataset (as part of the ImageNet Large Scale Visual Recognition Challenge) that is an order of magnitude larger than previous datasets such as the PASCAL VOC. I will discuss some of the decisions we made in designing this benchmark as well as some of our crowd engineering innovations. The availability of this large-scale data gives us as a field an unprecedented opportunity to work on designing algorithms for scalable and diverse object detection. It also allows for thorough analysis to understand the current algorithmic shortcomings and to focus the next round of algorithmic improvements.

In the second part of the talk, I will bring together the insights from large-scale data collection and from recent algorithmic innovations into a principled human-in-the-loop framework for image understanding. This approach can be used both for reducing the cost of large-scale detailed dataset annotation efforts as well as for effectively understanding a single target image.

Bio: Olga Russakovsky (http://cs.cmu.edu/~orussako) is a postdoctoral research fellow at Carnegie Mellon University. She recently completed a PhD in computer science at Stanford University advised by Prof. Fei-Fei Li. Her research interests are in computer vision and machine learning, specifically focusing on large-scale object detection and recognition. She was the lead organizer of the ImageNet Large Scale Visual Recognition Challenge (http://image-net.org/challenges/LSVRC) for two years, which was featured in the New York Times and MIT Technology Review. She organized multiple workshops and tutorials at premier computer vision conferences, including helping pioneer the "Women in Computer Vision" workshop at CVPR'15. She founded and directs the Stanford AI Laboratory's outreach camp SAILORS (http://sailors.stanford.edu, featured in Wired) designed to expose high school students in underrepresented populations to the field of AI.

“The AI-Seminar is sponsored by Yahoo!”

See also the AI graduate study brochure.

Please contact any of the faculty hosts listed above if you'd like to give a talk this semester. We especially encourage graduate students to sign up!

Sponsored by Yahoo!


CS7790, Fall '15

 
