NESCAI '07: The Second North East Student Colloquium on Artificial Intelligence, 13–15 April 2007, Ithaca, NY

Invited Talks

Both invited talks will be held in Upson Hall, room B17.

Structured Prediction Problems in Natural Language Processing
Michael Collins (MIT EECS/CSAIL)

Supervised machine learning involves the induction of a function from inputs x to output labels y. In structured prediction problems, the number of possible labels for a given x is often exponentially large, but each label has rich internal structure. Examples of problems of this form are sequence learning, where y is a sequence of state labels, or natural language parsing, where y is a context-free parse tree for a sentence x. In this talk I'll describe some recent work on structured prediction problems in NLP. Specifically, I'll describe learning algorithms for syntax-based machine translation, semantic interpretation (learning to map sentences to logical form), and non-projective dependency parsing. This is joint work with Xavier Carreras, Brooke Cowan, Amir Globerson, Terry Koo, Ivona Kucerova, and Luke Zettlemoyer.

Michael Collins is Associate Professor of Computer Science at MIT and a Sloan Research Fellow. His research interests are in natural language processing and machine learning. After completing a PhD in computer science at the University of Pennsylvania in December 1998, he was a researcher at AT&T Labs-Research until November 2002. He joined MIT in January 2003.

Redoing the Foundations of Decision Theory: Decision Theory with Subjective States and Outcomes
Joe Halpern (Cornell University)

The standard approach in decision theory (going back to Savage) is to place a preference order on acts, where an act is a function from states to outcomes. If the preference order satisfies appropriate postulates, then the decision maker can be viewed as acting as if he has a probability on states and a utility function on outcomes, and is maximizing expected utility. This framework implicitly assumes that the decision maker knows what the states and outcomes are. That isn't reasonable in a complex situation.
For example, in trying to decide whether or not to attack Iraq, what are the states and what are the outcomes? We redo Savage, viewing acts essentially as syntactic programs. We don't need to assume either states or outcomes. Nevertheless, among other things, we can get representation theorems in the spirit of Savage's theorems: for Savage, the agent's probability and utility are subjective; for us, in addition to the probability and utility being subjective, so are the state space and the outcome space. I discuss the benefits, both conceptual and pragmatic, of this approach. As I show, among other things, it provides an elegant solution to framing problems. This is joint work with Larry Blume and David Easley. No prior knowledge of Savage's work is assumed.

Joseph Y. Halpern received a B.Sc. in mathematics from the University of Toronto in 1975 and a Ph.D. in mathematics from Harvard in 1981. In between, he spent two years as the head of the Mathematics Department at Bawku Secondary School, in Ghana. He is currently a professor of computer science at Cornell University, where he moved in 1996 after spending 14 years at the IBM Almaden Research Center. His interests include reasoning about knowledge and uncertainty, decision theory and game theory, fault-tolerant distributed computing, causality, and security. Together with his former student, Yoram Moses, he pioneered the approach of applying reasoning about knowledge to analyzing distributed protocols and multi-agent systems; he won a Gödel Prize for this work. He received the Publishers' Prize for Best Paper at the International Joint Conference on Artificial Intelligence in 1985 (joint with Ronald Fagin) and in 1989, and the Reiter Prize for best paper at the Conference on Knowledge Representation and Reasoning in 2006 (joint with Larry Blume and David Easley).
He has co-authored 6 patents, two books ("Reasoning About Knowledge" and "Reasoning About Uncertainty"), over 100 journal publications, and over 150 conference publications. He is a former editor-in-chief of the Journal of the ACM, a Fellow of the ACM, the AAAI, and the AAAS, and was the recipient of a Guggenheim and a Fulbright Fellowship.
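As background for Halpern's abstract: the classical Savage result it revisits can be stated, in a standard textbook formulation (not taken from the talk itself), as follows.

```latex
% An act is a function a : S -> O from a state space S to an outcome space O.
% Savage's representation theorem: if a preference order \succeq on acts
% satisfies his postulates, then there exist a (subjective) probability p on S
% and a utility function u on O such that, for all acts a and b,
\[
  a \succeq b
  \quad\Longleftrightarrow\quad
  \sum_{s \in S} p(s)\, u\bigl(a(s)\bigr)
  \;\ge\;
  \sum_{s \in S} p(s)\, u\bigl(b(s)\bigr).
\]
% That is, the decision maker behaves as if maximizing expected utility.
% The talk's point is that this formulation presupposes S and O are known
% in advance; the Blume-Easley-Halpern reformulation makes the state space
% and outcome space subjective as well, alongside p and u.
```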