Focusing on the Representation
Abstract: What is the point of machine learning? Arguably, it is to learn how to extract useful representations of data. Despite the representation being the implicit goal, most objectives in machine learning are naturally generative in nature. With modern variational approaches, we can instead directly optimize constrained information-theoretic objectives that put the representation first. Doing so has demonstrated improvements in generalization, adversarial robustness, calibration, and out-of-sample detection for supervised learning. It has also led to a better understanding of some of the deficiencies present in modern unsupervised learning in the form of variational autoencoders (VAEs). In this talk, I'll try to convince you that representations matter, highlighting some of their successes and discussing their potential future.
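To make the idea of "optimizing a constrained information-theoretic objective" concrete, here is a minimal NumPy sketch of one well-known instance, a Variational Information Bottleneck-style loss: a prediction term (cross-entropy) plus a beta-weighted KL term that constrains how much information the stochastic representation carries about the input. The function name, the Gaussian-encoder parameterization, and the shapes are illustrative assumptions, not the speaker's actual implementation.

```python
import numpy as np

def vib_loss(logits, labels, mu, log_var, beta=1e-3):
    """Hypothetical sketch of a VIB-style objective:
    cross-entropy of the classifier head plus beta times the KL divergence
    of the Gaussian encoder q(z|x) = N(mu, exp(log_var)) from a standard
    normal prior. beta trades prediction accuracy against compression."""
    # Numerically stable log-softmax over the class axis.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Mean negative log-likelihood of the true labels (prediction term).
    ce = -log_probs[np.arange(len(labels)), labels].mean()
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims,
    # averaged over the batch (compression term).
    kl = 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var).sum(axis=1).mean()
    return ce + beta * kl
```

With `beta = 0` this reduces to ordinary maximum-likelihood training; increasing `beta` tightens the constraint on the representation, which is the mechanism the abstract credits for gains in robustness and calibration.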