Expanding Knowledge Graphs with Humans in the Loop (via Zoom)
Abstract: Curated knowledge graphs encode domain expertise and improve the performance of machine learning systems in several domains. As new concepts emerge in a domain, knowledge graphs must be expanded to preserve machine learning performance. However, manually expanding knowledge graphs is infeasible at scale. In this work, we propose a method for knowledge graph expansion with humans in the loop. Given a hierarchical knowledge graph (or a "taxonomy"), our method predicts the "parents" of new concepts to be added to this graph for further verification by human experts.
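The setup described above (attaching a new concept to a hierarchical taxonomy by ranking candidate parents and passing the top candidates to a human expert for verification) can be illustrated with a minimal, hypothetical sketch. The embedding-similarity scoring below is an assumption made purely for illustration; it is not the method presented in the talk.

```python
# Hypothetical illustration of taxonomy expansion via parent prediction.
# NOT the speaker's method: it only shows the general workflow of ranking
# existing concepts as candidate parents of a new concept, then handing the
# top-ranked candidates to a human expert for verification.
import numpy as np

def predict_parents(new_concept_vec, concept_vecs, concept_names, k=2):
    """Rank existing taxonomy concepts as candidate parents of a new concept
    by cosine similarity of (assumed) embedding vectors."""
    sims = concept_vecs @ new_concept_vec / (
        np.linalg.norm(concept_vecs, axis=1) * np.linalg.norm(new_concept_vec) + 1e-9
    )
    top = np.argsort(-sims)[:k]            # indices of the k most similar concepts
    return [(concept_names[i], float(sims[i])) for i in top]

# Toy usage: three existing concepts, one new concept to attach.
names = ["footwear", "electronics", "running shoes"]
vecs = np.array([[1.0, 0.1], [0.0, 1.0], [0.9, 0.2]])
new = np.array([0.95, 0.15])               # embedding of a new concept, e.g. "trail shoes"
print(predict_parents(new, vecs, names))   # candidate parents shown to a human expert
```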
We show that our method is both accurate and provably human-friendly. Specifically, we prove that our method predicts parents that are "near" concepts' true parents in the knowledge graph, even when the predictions are incorrect. We then show, with a human-subject experiment, that satisfying this property reduces the time humans need to fix incorrect predictions and increases the accuracy of their fixes. Thus, we provide evidence that being human-friendly can increase the speed and accuracy of human-machine collaboration beyond the benefits of accuracy alone. We further evaluate our method on a knowledge graph from Pinterest and show that it outperforms competing methods on both accuracy and human-friendliness. Upon deployment in production at Pinterest, our method reduced the time needed for knowledge graph expansion roughly fourfold compared to manual expansion and led to a subsequent 20% increase in shopping ad revenue.
Bio: Emaad Manzoor is an Assistant Professor at the University of Wisconsin-Madison, where he leads the Data Analytics Group. His research spans knowledge graph mining, causal inference with text, and the design of randomized and quasi-experiments to understand persuasion in technology-mediated communication. He is a recipient of the 2022 Psychology of Technology Dissertation Award, was named a 2021 Rising Star in Data Science by the University of Chicago, received a Best Paper Award at the 2021 AAAI workshop on AI for Behavioral Change, and was a runner-up for the 2021 INFORMS-ISS Nunamaker-Chen Dissertation Award. He received his PhD in Information Systems from Carnegie Mellon University and has spent time working at Yahoo!, Pinterest, and the Max Planck Institute for Software Systems.