BEGIN:VCALENDAR
METHOD:PUBLISH
VERSION:2.0
PRODID:-//Cornell U. Department of Computer Science//Brown Bag Seminar//EN
BEGIN:VEVENT
SUMMARY:Brown bag: Yoav Artzi
DESCRIPTION:Title: Situated Language Understanding with Visual
	 Observations\nSpeaker: Yoav Artzi\nAbstract: An agent following
	 instructions requires a robust understanding of language and its
	 environment. This talk will be divided into two parts. In the first part\,
	 I will propose a neural network model for mapping instructions to
	 actions. The model jointly reasons about instructions and raw visual
	 input obtained from a camera sensor. Training uses reinforcement
	 learning in a few-sample regime with reward shaping to exploit training
	 data. This approach does not require intermediate representations\,
	 planning procedures\, or training different models for visual and
	 language reasoning. In the second part\, I will present a new visual
	 reasoning language dataset\, containing natural statements grounded in
	 synthetic images. The data demonstrates a broad set of linguistic
	 phenomena\, requiring visual and set-theoretic reasoning. The data
	 contains 92K examples and demonstrates a challenging task for
	 state-of-the-art methods.\n\nThe research presented in the first part is
	 led by Dipendra Misra\, and the research in the second part is led by
	 Alane Suhr.
LOCATION:Gates 122
UID:2017-08-22
STATUS:CONFIRMED
DTSTART:20170822T160000Z
DTEND:20170822T170000Z
ORGANIZER;CN=Jonathan Shi:http://www.cs.cornell.edu/~jshi/brownbag/
DTSTAMP:20260409T081123Z
END:VEVENT
END:VCALENDAR