Invited Speaker: Eva Tardos, Cornell University


Title: Learning in Games

Abstract:

Selfish behavior can often lead to suboptimal outcomes for all participants, a phenomenon illustrated by many classical examples in game theory. Over the last decade we have developed a good understanding of how to quantify the impact of strategic user behavior on overall performance in many games (including traffic routing as well as online auctions). In this talk we will focus on games where players use a form of learning that helps them adapt to the environment, and consider two closely related questions: What are broad classes of learning behaviors that guarantee high social welfare in games, and are these results robust to situations where the game or the population of players is dynamically changing?
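
As a rough, self-contained illustration of the kind of learning dynamics the abstract refers to (not the specific models or results of the talk), the sketch below runs multiplicative-weights (Hedge) updates for selfish players repeatedly choosing between two routes of a toy congestion game; the routes, cost functions, learning rate, and all parameters are assumptions made purely for illustration.

```python
# Minimal sketch (illustrative only): multiplicative-weights (Hedge) learning in a
# toy two-route congestion game. Routes, cost functions, and the learning rate are
# hypothetical choices for illustration, not the models studied in the talk.
import random

N_PLAYERS = 50   # number of selfish players (assumption)
N_ROUNDS = 200   # number of learning rounds (assumption)
ETA = 0.1        # Hedge learning rate (assumption)

def route_costs(loads):
    """Per-route costs in [0, 1] that grow with congestion (assumption)."""
    return [loads[0] / N_PLAYERS, 0.5 + 0.5 * loads[1] / N_PLAYERS]

# Each player keeps one weight per route and samples a route proportionally to it.
weights = [[1.0, 1.0] for _ in range(N_PLAYERS)]

for _ in range(N_ROUNDS):
    choices = []
    for w in weights:
        choices.append(0 if random.random() < w[0] / (w[0] + w[1]) else 1)
    loads = [choices.count(0), choices.count(1)]
    costs = route_costs(loads)
    # Multiplicative-weights update: shrink the weight of routes that were costly.
    for w in weights:
        for r in (0, 1):
            w[r] *= 1.0 - ETA * costs[r]

avg_cost = sum(costs[c] for c in choices) / N_PLAYERS
print(f"final loads = {loads}, average cost in last round = {avg_cost:.3f}")
```

Dynamics of this no-regret type have time-averaged play approaching a coarse correlated equilibrium, which is the kind of outcome whose social welfare price-of-anarchy-style analyses can bound.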

Biography:

Eva Tardos received her Dipl. Math. in 1981 and her Ph.D. in 1984 from Eotvos University, Budapest, Hungary. She has been elected to the National Academy of Engineering and the American Academy of Arts and Sciences, is the recipient of Packard, Sloan Foundation, and Guggenheim fellowships, is an ACM Fellow and an INFORMS Fellow, and has received the Fulkerson Prize and the Dantzig Prize. She was Editor-in-Chief of the SIAM Journal on Computing from 2004 to 2009, and is currently an editor of several other journals: Journal of the ACM, Theory of Computing, and Combinatorica.




Invited Speaker: Claudio Gentile, INRIA and Google NY


Title: Online clustering of bandit algorithms

Abstract:

Bandit algorithms have become a standard tool for facing the so-called exploration-exploitation dilemma that naturally arises in learning problems with partial information. Recommendation systems are among the most relevant applications for such algorithmic solutions. In many cases, these applications have social components (either explicit or hidden) whose integration in the bandit algorithms could lead to a significant performance increase. For instance, we may want to serve content to a group of users by taking advantage of an underlying network of social relationships among them. In typical scenarios like social networks, it is often possible to single out a few groups or communities made up of users sharing similar interests. Such communities need not be static over time, and are often clustered around specific content types, so that a given set of users can in fact host a multiplex of interdependent communities depending on specific content items or groups of items. We call this multiplex of interdependent clusterings a context-dependent clustering. Typically, these clusterings are unknown to the learning agent and have to be inferred on the fly based on the observed data.
We review recent research activity in the framework of stochastic bandit algorithms where users are structured in (unknown) clusters in either a context-independent or a context-dependent fashion. The presentation includes algorithms, associated theoretical analyses, and empirical evidence on real-world data.
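
To make the setting concrete in the most elementary way (and not as a description of the specific algorithms or analyses presented in the talk), here is a minimal sketch in which per-user bandit statistics are pooled within clusters of users whose empirical arm estimates are close; the clustering rule, threshold, exploration bonus, and simulated data are all hypothetical.

```python
# Minimal sketch (illustrative only): a toy "clustering of bandits" loop that shares
# reward statistics among users whose empirical arm estimates look similar. The
# clustering rule, threshold, exploration bonus, and simulated data are assumptions
# for illustration, not the algorithms or analyses presented in the talk.
import math
import random

N_USERS, N_ARMS, T = 20, 5, 5000
random.seed(0)

# Hidden ground truth (assumption): two latent user groups with different arm payoffs.
group_of = [u % 2 for u in range(N_USERS)]
true_mean = [[random.random() for _ in range(N_ARMS)] for _ in range(2)]

counts = [[0] * N_ARMS for _ in range(N_USERS)]   # per-user pull counts
sums = [[0.0] * N_ARMS for _ in range(N_USERS)]   # per-user reward sums

def estimate(u, a):
    """Empirical mean reward of arm a for user u (0.5 prior before any pulls)."""
    return sums[u][a] / counts[u][a] if counts[u][a] else 0.5

def cluster_of(u):
    """Naive clustering rule (assumption): users with close estimate vectors share data."""
    return [v for v in range(N_USERS)
            if all(abs(estimate(u, a) - estimate(v, a)) < 0.15 for a in range(N_ARMS))]

for t in range(1, T + 1):
    u = random.randrange(N_USERS)      # a user to serve arrives
    members = cluster_of(u)            # users currently clustered with u (always contains u)
    # Aggregate cluster statistics and play a UCB-style arm for the whole cluster.
    best_arm, best_score = 0, float("-inf")
    for a in range(N_ARMS):
        n = sum(counts[v][a] for v in members)
        mean = sum(sums[v][a] for v in members) / n if n else 0.5
        bonus = math.sqrt(2.0 * math.log(t + 1) / (n + 1))   # crude exploration bonus
        if mean + bonus > best_score:
            best_arm, best_score = a, mean + bonus
    # Bernoulli reward drawn from the user's hidden group (assumption); update user u only.
    reward = 1.0 if random.random() < true_mean[group_of[u]][best_arm] else 0.0
    counts[u][best_arm] += 1
    sums[u][best_arm] += reward

print("users currently clustered with user 0:", sorted(cluster_of(0)))
```

The point of the sketch is the loop structure: the clustering is re-inferred from data at every round, so users drift between clusters as their estimates sharpen, which is the "inferred on the fly" aspect mentioned in the abstract.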

Biography:

Claudio Gentile is currently a senior researcher at INRIA (France) and visiting faculty at Google LLC (USA). He has been PC Chair or Area Chair of several conferences in the field of Machine Learning/Artificial Intelligence (e.g., COLT, ALT, NIPS, ICML, IJCAI). He has held visiting positions at UC Santa Cruz, Microsoft Research, the Technion, University College London, and Telefonica. His current research interests include online learning with partial information, online clustering, and applications thereof.