Speaker 1: Valts Blukis

Title: Generalizable Learning for Natural Language Instruction Following on Physical Robots

Abstract: There is a growing need for accessible robot control interfaces that enable end-users to take advantage of robots' capabilities. Natural language provides the necessary expressivity and accessibility. In this talk, I present a series of model and learning-algorithm contributions that address perception, language understanding, language grounding, planning, exploration, and control challenges within a single, modular learning architecture. The result is the first learning-based robot system that follows natural language instructions from first-person images. The system is trained in simulation, deployed in the real world, and is extensible at test time to reason about previously unseen objects.
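Since the abstract describes a pipeline of learned modules rather than their internals, the following is only a structural sketch of such a modular decomposition, written as an illustration. The module names, shapes, and placeholder logic are all my assumptions, not the system presented in the talk.

```python
# Illustrative sketch of a modular instruction-following pipeline (assumed
# structure, not the talk's actual system): a grounding stage maps a
# first-person image and an instruction to a spatial goal map, and a
# control stage turns that map into low-level velocity commands.
from dataclasses import dataclass

import numpy as np


@dataclass
class Velocity:
    linear: float   # forward speed (m/s)
    angular: float  # turn rate (rad/s)


class GroundingModule:
    """Placeholder for a learned perception + language-grounding model."""

    def __call__(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # A real model would predict where to go from pixels and language;
        # here we just return a uniform distribution over image locations.
        h, w = image.shape[:2]
        return np.full((h, w), 1.0 / (h * w))


class ControlModule:
    """Placeholder for a learned controller conditioned on the goal map."""

    def __call__(self, goal_map: np.ndarray) -> Velocity:
        # Steer toward the most likely goal location in the map.
        y, x = np.unravel_index(np.argmax(goal_map), goal_map.shape)
        heading = np.arctan2(x - goal_map.shape[1] / 2, goal_map.shape[0] - y)
        return Velocity(linear=0.5, angular=float(np.clip(heading, -1.0, 1.0)))


def follow(image: np.ndarray, instruction: str) -> Velocity:
    """Compose the modules into one instruction-following step."""
    return ControlModule()(GroundingModule()(image, instruction))


command = follow(np.zeros((48, 64, 3)), "go to the blue barrel")
print(command)
```

One appeal of such a split is that each stage can be trained or replaced independently, which is consistent with the abstract's claim of test-time extensibility to unseen objects.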

Speaker 2: Manish Raghavan

Title: Transparency and Incentives in Algorithmic Decision-Making 

Abstract: Algorithms are used to make decisions about people in a variety of contexts, including lending, hiring, and healthcare. While algorithms offer the potential to make consistent and scalable decisions, they introduce a number of new challenges. Scholars and activists have raised concerns over issues including fairness, accountability, and transparency, spurring a fast-growing field of research on these subjects.

In this talk, I'll discuss the implications of algorithmic transparency in strategic contexts, i.e., when decision subjects want to be classified in a particular way. Drawing upon insights from work in both computing and economics, we model how agents change their behavior under evaluation and characterize the incentives produced by various evaluation schemes. Our results suggest that with careful consideration, decision-makers can develop evaluation schemes that align their incentives with those of decision subjects.
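As a concrete, deliberately toy illustration of what characterizing the incentives produced by an evaluation scheme can mean, here is a sketch under assumptions of my own: an agent splits one unit of effort between a productive action and a gaming action, each action moves observable features linearly, and the agent best-responds to the evaluator's committed linear weights. None of the numbers or names come from the talk.

```python
# Toy strategic-evaluation sketch (assumed model, not the talk's): the
# evaluator commits to linear weights over observable features, and the
# agent picks the effort split that maximizes its resulting score.
import numpy as np

# Rows: actions (study, game). Columns: features (test score, keyword count).
# EFFECT[i, j] = marginal effect of effort in action i on feature j (assumed).
EFFECT = np.array([[1.0, 0.2],
                   [0.3, 1.5]])


def best_response(weights: np.ndarray, grid: int = 101) -> float:
    """Effort the agent puts into the productive action, given the rule."""
    study = np.linspace(0.0, 1.0, grid)            # effort spent studying
    efforts = np.stack([study, 1.0 - study], axis=1)
    scores = efforts @ EFFECT @ weights            # evaluation score per split
    return float(study[np.argmax(scores)])


# Rewarding the gameable feature invites gaming; weighting the feature that
# productive effort moves most aligns the agent's incentives with studying.
for w in ([1.0, 0.0], [0.5, 0.5], [0.0, 1.0]):
    print(w, "-> effort on studying:", best_response(np.array(w)))
```

With linear effects and a linear score, the best response sits at a corner, which already shows the qualitative point: the evaluator's choice of weights determines whether effort flows into the productive action or the gaming action.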

Speaker 3: Matvey Soloviev

Title: Information-Acquisition Games and Rational Inattention

Abstract: We introduce a game-theoretic model of information acquisition under resource limitations in a noisy environment. An agent must guess the truth value of a given Boolean formula φ after performing a bounded number of noisy tests of the truth values of variables in the formula. The problem of how to do this optimally turns out to have interesting structure: agents can exhibit *rational inattention*, in that optimal strategies may involve entirely ignoring some variables in favour of others that are no more relevant than the ignored ones, and sometimes what matters is *that* some variable is ignored rather than *which* one. This implies that optimal strategies cannot be obtained by simply picking variables to test according to some local notion of usefulness, and that the set of optimal strategies is not convex in any obvious sense. To establish that rational inattention is widespread, we develop an LP-based framework for analysing the structure of certain optimal strategies. Moreover, by looking at the number of test outcomes required to attain a given level of certainty about a formula's truth value, we obtain a new notion of complexity for Boolean formulas: formulas that need more tests are considered more complex.
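The model is concrete enough to explore by brute force. The sketch below is my own illustration under assumed parameters (a three-variable formula, independent fair-coin variables, and tests that independently report the wrong value with probability EPS): it enumerates every way to split a fixed test budget across the variables and scores each allocation by the probability that the Bayes-optimal guess of φ is correct. Inspecting which allocations win, and in particular whether they leave some variable untested, is one way to watch rational inattention appear.

```python
# Brute-force exploration of the information-acquisition setting under
# assumed parameters (my illustration, not the paper's LP framework).
import itertools
from math import comb

EPS = 0.3     # probability a single test reports the wrong value (assumed)
BUDGET = 5    # total number of tests the agent may perform (assumed)


def phi(x):
    """Example formula (assumed): x1 OR (x2 AND x3)."""
    return x[0] or (x[1] and x[2])


def success_prob(alloc):
    """P(correct guess) when alloc[i] tests are spent on variable i and the
    agent makes the Bayes-optimal guess from the test outcomes."""
    total = 0.0
    # Observations: for each variable, the number of its tests saying "true".
    for k in itertools.product(*(range(a + 1) for a in alloc)):
        joint = [0.0, 0.0]  # joint probability of (observation, phi = 0 / 1)
        for x in itertools.product([0, 1], repeat=3):  # uniform prior
            p = 0.5 ** 3
            for i, a in enumerate(alloc):
                p_true = 1 - EPS if x[i] else EPS      # P(one test says "true")
                p *= comb(a, k[i]) * p_true ** k[i] * (1 - p_true) ** (a - k[i])
            joint[phi(x)] += p
        total += max(joint)  # guess whichever value of phi is more likely
    return total


# Score every way of splitting the budget across the three variables.
allocations = [a for a in itertools.product(range(BUDGET + 1), repeat=3)
               if sum(a) == BUDGET]
for alloc in sorted(allocations, key=success_prob, reverse=True)[:5]:
    print(alloc, round(success_prob(alloc), 4))
```

Swapping in other formulas or noise rates (via phi and EPS) makes it easy to probe when ignoring a variable is optimal in this toy setting.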