Loss as the Inconsistency of a Probabilistic Dependency Graph: Choose Your Model, Not Your Loss Function (via Zoom)
Abstract: In a world blessed with a great diversity of loss functions, I argue that the choice between them is not a matter of taste or pragmatics, but of model.
Probabilistic dependency graphs (PDGs) are a very expressive class of probabilistic graphical models that comes equipped with a natural measure of inconsistency.
The central finding of this work is that many standard loss functions can be viewed as measuring the inconsistency of the PDG that describes the scenario at hand.
As a byproduct of this approach, we obtain an intuitive visual calculus for deriving inequalities between loss functions. In addition to variants of cross entropy, a large class of statistical divergences can be expressed as inconsistencies, from which we can derive visual proofs of properties such as the information processing inequality. We can also use the approach to justify a well-known connection between regularizers and priors. In variational inference, we find that the ELBO, a somewhat opaque objective for latent variable models, and variants of it arise for free out of uncontroversial modeling assumptions -- as do simple graphical proofs of their corresponding bounds. Based on my AISTATS 22 paper.
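For readers unfamiliar with the quantities named above, the standard textbook definitions (background only, not the talk's contribution) are:

```latex
% Cross entropy of a predictor q relative to a data distribution p:
\[
  \mathrm{CE}(p, q) \;=\; -\,\mathbb{E}_{x \sim p}\bigl[\log q(x)\bigr]
\]

% The evidence lower bound (ELBO) for a latent-variable model p(x, z)
% with variational posterior q(z \mid x); it lower-bounds the log evidence:
\[
  \log p(x) \;\ge\; \mathbb{E}_{q(z \mid x)}\bigl[\log p(x, z) - \log q(z \mid x)\bigr]
  \;=\; \mathrm{ELBO}(x)
\]
```

The talk's claim is that such objectives, and the bound relating them to the log evidence, fall out of the PDG inconsistency measure rather than needing to be posited separately.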
Bio: Oliver is a PhD student in Computer Science at Cornell, advised by Joseph Halpern. His research focuses on modeling computation with internally conflicted agents, but his interests range broadly from machine learning to applied category theory. Previously, Oliver completed a master's in computer science at the University of Cambridge, and undergraduate degrees in mathematics, biology, and computer science at the University of Utah.