Artificial Intelligence Seminar

Spring 2009
Friday 12:00-1:15
Upson 5130

The AI seminar will meet weekly for lectures by graduate students, faculty, and researchers, emphasizing work-in-progress and recent results in AI research. Lunch will be served starting at noon, with talks running from 12:15 to 1:15. The new format is designed to allow AI chit-chat before the talks begin. We are also trying to make some of the presentations less formal, so that students and faculty feel comfortable using the seminar to present work in progress or give practice talks for conferences.


January 23 *No Seminar - Cancelled*
January 30

*Note - location change - 655 Rhodes Hall - combined with Systems Lunch*

Ian Kash, Cornell University
Host: Joe Halpern

Optimizing Scrip Systems: Efficiency, Crashes, Hoarders, and Learning
Scrip systems, where users pay for service with an artificial currency (scrip) created for the system, are an attractive solution to a variety of problems faced by P2P and distributed systems. Despite the interest in building scrip systems, relatively little work has been done to help answer basic design questions. For example, how much money should there be in the system? What will happen if some of the users start hoarding money? In this talk, I present a game theoretic model of a scrip system and show that this model has Nash equilibria where all agents use simple strategies known as threshold strategies. In fact, the same techniques provide an efficient method of computing these equilibria as well as the equilibrium distribution of wealth. I will show how these results provide practical insights into the design of scrip systems. For example, social welfare is maximized by increasing the money supply up to the point that the system experiences a "monetary crash," where money is sufficiently devalued that no agent is willing to perform a service. Hoarders generally decrease social welfare but, surprisingly, they also promote system stability by helping prevent monetary crashes. Furthermore, the effects of hoarders can be mitigated simply by printing more money.

While scrip systems have many attractive features for a system designer, for a user they add complexity. As a user, I don't want to think about how much of my resources I should contribute or how much money I need to save. I just want service delivered when I want it, with a minimum of inconvenience the rest of the time. However, optimal behavior depends not only on my preference but on the behavior of everyone else in the system. Furthermore, this behavior cannot be directly observed. Despite this, I will show that a simple algorithm allows the system to quickly converge to optimal behavior.
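The threshold-strategy dynamics described above can be illustrated with a toy simulation. Everything here (the agent count, the fixed common threshold, the one-request-per-round service model) is an illustrative assumption, not the game-theoretic model from the talk:

```python
import random

# Toy simulation of a scrip economy under threshold strategies: an agent
# volunteers to provide service only while its balance is below a fixed
# threshold. All parameters and the service model are illustrative
# assumptions, not the model analyzed in the talk.

def simulate(num_agents=100, money_per_agent=2.0, threshold=5,
             rounds=10000, seed=0):
    rng = random.Random(seed)
    total = int(num_agents * money_per_agent)
    balances = [total // num_agents] * num_agents
    for i in range(total % num_agents):   # distribute any remainder
        balances[i] += 1
    served = 0
    for _ in range(rounds):
        requester = rng.randrange(num_agents)
        if balances[requester] == 0:
            continue                      # requester cannot pay
        volunteers = [a for a in range(num_agents)
                      if a != requester and balances[a] < threshold]
        if not volunteers:
            continue                      # nobody willing to serve: a crash
        server = rng.choice(volunteers)
        balances[requester] -= 1
        balances[server] += 1
        served += 1
    return served / rounds                # fraction of rounds with service
```

With a moderate money supply most requests get served; once every agent starts at or above the threshold, no one ever volunteers and the toy economy "crashes" with no service at all, a caricature of the monetary crash discussed in the abstract.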

February 6

Yisong Yue, Cornell University
Host: Joe Halpern

Interactively Optimizing Information Retrieval Systems as a Dueling Bandits Problem
Supervised learning-to-rank methods have become dominant in the design of information retrieval systems such as commercial search engines. However, several limitations have become increasingly evident over time, including questions about which objective function to use for training and how to collect labeled training data. Recent advances in evaluation methods have shown that it is possible to reliably infer users' relative preferences between a pair of competing ranking functions. This motivates the design of new learning frameworks that can leverage pairwise comparison tests when training retrieval systems. Algorithms fitting into this framework might then be able to learn directly from user feedback, avoiding difficult issues such as choosing supervised training objectives and collecting labeled data. We present one such framework, which we call the Dueling Bandits Problem. In this talk, I will review the learning-to-rank setting, briefly describe advances in interactive evaluation methods, and present the Dueling Bandits Problem along with an algorithm that provably minimizes a novel regret formulation.

This is joint work with Thorsten Joachims.
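The pairwise-feedback setting is easy to sketch: the learner never observes absolute rewards, only the outcome of a noisy comparison between two arms (for example, two ranking functions shown to users). The utilities, the linear comparison probability, and the naive elimination loop below are illustrative assumptions, not the algorithm from the talk:

```python
import random

# Minimal sketch of dueling-bandit feedback: only noisy pairwise
# comparisons between arms are observable, never absolute rewards.

def duel(i, j, utility, rng):
    """One noisy comparison: True if arm i beats arm j (utilities in [0, 1])."""
    p_i_wins = 0.5 + 0.5 * (utility[i] - utility[j])
    return rng.random() < p_i_wins

def best_arm(utility, comparisons_per_pair=500, seed=0):
    """Repeatedly duel pairs of arms, eliminating the apparent loser."""
    rng = random.Random(seed)
    arms = list(range(len(utility)))
    while len(arms) > 1:
        i, j = arms[0], arms[1]
        wins_i = sum(duel(i, j, utility, rng)
                     for _ in range(comparisons_per_pair))
        arms.remove(j if wins_i > comparisons_per_pair / 2 else i)
    return arms[0]
```

With well-separated utilities this naive tournament finds the best arm; the interest of the dueling-bandits formulation is doing so while provably bounding the regret incurred by the comparisons themselves.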

February 13

Isaac Miller, Cornell University
Host: Dan Huttenlocher

Getting It to Work (For Real): How to Automate Your Car in 10 “Easy” Steps
So often in academic settings, we spend all our time sorting out purely theoretical questions. In many cases the practical application of theory takes a back seat to new research projects, with little consideration for the real-time interactions between algorithms on physical systems. When those algorithms are finally pressed into service, the resulting systems pay the price of poor integration: brittleness, instability, and sometimes even catastrophic failure.

Once in a very long while, an opportunity comes along to blur the line between theory, practice, and academic research. The 2007 DARPA Urban Challenge was just such a rarity, challenging industry, academic, and private teams across the globe to build full-size vehicular robots to complete sixty miles of autonomous mock supply missions in moving traffic. When the dust settled at the end of the Challenge, six robots, all from universities, had crossed the finish line. The message is clear: the challenges that stand in the way of robust robotic transportation are largely research-oriented, from individual algorithms all the way to systems-level analysis.

In “Getting It To Work (For Real): How To Automate Your Car in 10 ‘Easy’ Steps,” I will discuss some of the major academic and practical challenges Team Cornell faced while building one of the six robots that finished the DARPA Urban Challenge. The talk will be a blend of academics and application, stories from the line where research mingles with the real world. I will discuss Cornell's system design at a high level, and position estimation, target tracking, and path planning algorithms in more detail. I hope to build the talk around rigorous algorithms and experimental validation, but to do so without losing sight of the (often highly amusing) compromises made… where the rubber meets the road.

February 20

Chun-Nam Yu, Cornell University

Learning Structural SVMs with Latent Variables
We present a large-margin formulation and algorithm for structured output prediction that allows the use of latent variables, which we call the Latent Structural SVM. Our work identifies a particular formulation that covers a large range of application problems, while showing that the resulting optimization problem can generally be addressed using Concave-Convex Programming. The generality and performance of the approach are demonstrated on a motif-finding application, noun-phrase coreference resolution, and optimizing precision at k in information retrieval.

Host: Thorsten Joachims

February 27

Yejin Choi, Cornell University

Learning with Compositional Semantics as Structural Inference for Subsentential Sentiment Analysis
Determining the polarity of a sentiment-bearing expression requires more than a simple bag-of-words approach. In particular, words or constituents within the expression can interact with each other to yield a particular overall polarity. In this talk, we view such sub-sentential interactions in light of compositional semantics, and present a novel learning-based approach that incorporates structural inference motivated by compositional semantics into the learning procedure. Our experiments show that (1) simple heuristics based on compositional semantics can perform better than learning-based methods that do not incorporate compositional semantics, but (2) a method that integrates compositional semantics into learning performs better than all other alternatives. We also find that “content-word negators”, not widely employed in previous work, play an important role in determining expression-level polarity. Finally, in contrast to conventional wisdom, we find that expression-level classification accuracy uniformly decreases as additional, potentially disambiguating, context is considered.

Host: Claire Cardie
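As a toy illustration of the kind of compositional heuristic the abstract describes, the sketch below composes word polarities with a flip rule for negators, including content-word negators such as "prevented". The tiny lexicons and the single flip rule are illustrative assumptions, not the method presented in the talk:

```python
# Toy lexicons; illustrative assumptions, not the talk's actual resources.
POLARITY = {"succeed": +1, "fail": -1, "great": +1, "terrible": -1}
NEGATORS = {"not", "never", "prevented", "eliminated"}  # last two are content-word negators

def expression_polarity(words):
    """Compose word polarities left to right, flipping scope after a negator."""
    total, flip = 0, 1
    for w in words:
        if w in NEGATORS:
            flip = -flip               # negation inverts what follows
        elif w in POLARITY:
            total += flip * POLARITY[w]
    return total

# "prevented ... fail" composes to positive polarity: the content-word
# negator "prevented" flips the negative polarity of "fail".
```

Even this crude rule captures why bag-of-words polarity fails on expressions like "prevented the failure", where the negative word sits inside the scope of a content-word negator.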

March 6

Tom Dietterich, Oregon State University
*Joint with the Institute for Computational Sustainability seminar series*

Computer Vision and Machine Learning for Automated Arthropod Biodiversity Studies
Population counts of arthropod species provide important data for understanding and monitoring the health of ecosystems. Arthropod samples are easily collected from lakes, streams, soils, and the ocean. However, analyzing these samples to count the number of specimens of each species is currently a time-consuming manual task that requires a high degree of expertise. This talk will describe the BugID project at Oregon State, which is developing robotic devices and computer vision methods to automate the processing of arthropod samples. We will describe the computer vision and machine learning techniques that we have developed for this problem, including a novel ensemble learning technique and a mathematical model that explains why it performs better than existing methods. We will also present data suggesting that these methods are applicable to the more general problem of generic object recognition.

Host: Carla Gomes

March 13

Craig Boutilier, University of Toronto and CombineNet, Inc.

Automated Channel Abstraction for Advertising Auctions

The use of auction mechanisms like the GSP in online advertising can lead to loss of both efficiency and revenue when advertisers have rich preferences: even simple forms of expressiveness like budget constraints can lead to suboptimal outcomes. This has led to a recognition of the value of expressive bidding paradigms and of optimization for matching advertisers to ad supply.

In this talk, I'll first present our language for campaign-level expressiveness in online advertising. I'll then describe several techniques we've developed to address a major impediment to the use of optimization for ad matching, namely, the channel explosion: the fact that the number of assignable channels of ad supply grows exponentially in either attributes or bidders. Our abstraction technique groups channels into abstract channels that maintain only the most important attribute distinctions. We develop novel LP/MIP column and constraint generation heuristics that allow optimization to scale to practical levels, and demonstrate results on both auctions with simple per-item bids and budgets and more expressive campaign-level offers. Time permitting, I'll briefly discuss how the nature of optimization changes when one explicitly accounts for uncertainty in ad supply. Formulating the problem as a Markov decision process, I'll sketch our online, sample-based approach to its solution and present some preliminary results that indicate the value of accounting for uncertainty and risk.

(joint work with Bill Walsh, Tuomas Sandholm, Rob Shields, George Nemhauser and David Parkes)

Host: Joseph Halpern

March 27

*No Seminar - ACSU Faculty Luncheon*

April 3

Cristian Danescu-Niculescu-Mizil, Cornell University

How opinions are received by online communities: A case study on helpfulness votes.
There are many on-line settings in which users publicly express opinions. A number of these offer mechanisms for other users to evaluate these opinions; a canonical example is Amazon, where reviews come with annotations like “26 of 32 people found the following review helpful.” Opinion evaluation appears in many off-line settings as well, including market research and political campaigns. Reasoning about the evaluation of an opinion is fundamentally different from reasoning about the opinion itself: rather than asking, “What did Y think of X?”, we are asking, “What did Z think of Y's opinion of X?” Here we develop a framework for analyzing and modeling opinion evaluation, using a large-scale collection of Amazon book reviews as a dataset. We find that the perceived helpfulness of a review depends not just on its content but also, in subtle ways, on how the expressed evaluation relates to other evaluations of the same product. As part of our approach, we develop novel methods that take advantage of the phenomenon of review “plagiarism” to control for the effects of text in opinion evaluation, and we provide a simple and natural mathematical model consistent with our findings. Our analysis also allows us to distinguish among the predictions of competing theories from sociology and social psychology, and to discover unexpected differences in the collective opinion-evaluation behavior of user populations from different countries.

This is joint work with Gueorgi Kossinets, Jon Kleinberg and Lillian Lee.

April 10

Sebastien Lahaie, Yahoo Research
Host: Joe Halpern

A Kernel Method for Market Clearing
The problem of market clearing in an economy is that of finding prices such that supply meets demand. In this work, we propose a kernel method to compute nonlinear clearing prices for instances where linear prices do not suffice. We first present a procedure that, given a sample of values and costs for a set of bundles, implicitly computes nonlinear clearing prices by solving an appropriately formulated quadratic program. We then use this as a subroutine in an elicitation procedure that queries demand and supply incrementally over rounds, only as much as needed to reach clearing prices. An empirical evaluation demonstrates that, with a proper choice of kernel function, the method is able to find sparse nonlinear clearing prices with much less than full revelation of values and costs. When the kernel function is not suitable to clear the market, the method can be tuned to achieve approximate clearing.

Bio: Sebastien Lahaie is a research scientist in the microeconomics and social systems group at Yahoo Research. Sebastien received his PhD in Computer Science from Harvard University in 2007. At Yahoo his research focuses on marketplace design, including sponsored search and display advertising. He is interested in designing market algorithms that scale well and properly anticipate user behavior. Other interests include preference elicitation, reputation systems, combinatorial optimization, and mechanism design.

April 17

Veselin Stoyanov, Cornell University
Host: Claire Cardie

Opinion Summarization: Creating Useful Representations of Opinions Expressed in Text
The field of opinion (sentiment) analysis has received much recent research attention motivated by its practical appeal and the interesting computational problems that it presents. Sentiment analysis is concerned with automatically extracting attitudes, opinions, evaluations, and sentiment from text. More precisely, we are interested in the area of fine-grained sentiment analysis, which is concerned with opinions at the level of sentences, clauses, or individual expressions of opinions.

Our work assumes that we have access to automatically extracted fine-grained expressions of opinions. We argue that to be fully useful fine-grained expressions of opinions need to be augmented with additional information (e.g. opinions need to be grouped by their opinion holder and/or their topic). We refer to this task as opinion summarization. Our work focuses on combining fine-grained opinion expressions into higher-level opinion summaries, which are easy to browse, explore and manipulate.

In my talk I will define our notion of opinion summary and give some motivating examples of why opinion summaries are needed. I will then turn to the research challenges that have to be addressed in order to create opinion summaries. I will discuss each of these briefly, focusing on the problem of grouping together opinions that are on the same topic. I will conclude by discussing challenges related to evaluating complete opinion summaries.

April 24

Nikolaos Karampatziakis, Cornell University

Online Learning of Prediction Suffix Trees
Prediction suffix trees (PSTs) are a popular tool for modeling sequences and have been successfully applied in many domains such as compression and language modeling. In this talk, I will outline the benefits of using these models and present an online learning algorithm that learns a PST without any a priori assumptions on the maximum depth of the tree.

The proposed algorithm automatically grows the tree, so that it provably remains competitive with any fixed PST determined in hindsight. At the same time, the depth of the tree grows only logarithmically with the number of mistakes made by the algorithm. Finally, I will empirically demonstrate the effectiveness of the algorithm on the task of predicting the next system call a program will execute given the previous system calls it made.

Host: Joe Halpern
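The flavor of the approach can be sketched as follows: predict the next symbol from progressively longer contexts, letting the allowed context depth grow only logarithmically in the number of mistakes. The growth rule and all implementation details below are illustrative assumptions, not the algorithm presented in the talk:

```python
import math
from collections import defaultdict, Counter

# Toy online next-symbol predictor with a growing context depth, in the
# spirit of learning a PST without fixing its maximum depth in advance.

class GrowingContextPredictor:
    def __init__(self):
        self.counts = defaultdict(Counter)  # context tuple -> next-symbol counts
        self.mistakes = 0
        self.history = []

    def _max_depth(self):
        # Illustrative rule: depth grows logarithmically with mistakes.
        return int(math.log2(self.mistakes + 2))

    def predict(self):
        # Back off from the longest allowed context to the empty context.
        for d in range(min(self._max_depth(), len(self.history)), -1, -1):
            ctx = tuple(self.history[len(self.history) - d:])
            if self.counts[ctx]:
                return self.counts[ctx].most_common(1)[0][0]
        return None

    def update(self, symbol):
        guess = self.predict()
        if guess != symbol:
            self.mistakes += 1
        for d in range(min(self._max_depth(), len(self.history)) + 1):
            ctx = tuple(self.history[len(self.history) - d:])
            self.counts[ctx][symbol] += 1
        self.history.append(symbol)
        return guess
```

On a periodic sequence such as "abab…", this toy learner makes only a handful of mistakes before the depth-2 contexts lock in, loosely mirroring the mistake-bound guarantees described in the abstract.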

May 1

David Crandall, Cornell University
Host: Dan Huttenlocher

Mapping the World's Photos
We investigate how to organize a large collection of geotagged photos, working with a dataset of about 35 million images collected from Flickr. Our approach combines content analysis based on text tags and image data with structural analysis based on geospatial data. We use the spatial distribution of where people take photos to define a relational structure between the photos taken at popular places. We then study the interplay between this structure and the content, using classification methods to predict such locations from visual, textual, and temporal features of the photos. We find that visual and temporal features improve the ability to estimate the location of a photo, compared to using textual features alone. We illustrate the use of these techniques to organize a large photo collection, while also revealing various interesting properties of popular cities and landmarks at a global scale.

Joint work with Lars Backstrom, Dan Huttenlocher, and Jon Kleinberg.

See also the AI graduate study brochure.

Please contact any of the faculty below if you'd like to give a talk this semester. We especially encourage graduate students to sign up! 

Sponsored by

CS7790, Spring '09
Claire Cardie

Carla Gomes
Joe Halpern
Dan Huttenlocher
Thorsten Joachims
Lillian Lee
Bart Selman
Ramin Zabih
