Second-year PhD student, Department of Computer Science, Cornell University
Computers operate on precisely defined rules. For everything from floating-point numbers to the World Wide Web, there is a well-defined standard that tells your machine exactly what to expect and exactly what it will look like. By contrast, humans live in a world of ambiguity: we fluently handle concepts like "happiness" despite lacking any precise definition of what they are or how to measure them. As computers become increasingly integrated into everyday society, bridging this gap will only grow more important. This is exactly what I am interested in exploring: how to teach machines to handle and interpret the ambiguity of human concepts and behaviors.
I am particularly interested in using language as a signal for understanding human behavior. We may not be able to program a machine with a definition of offensiveness, but perhaps, through clues hidden in the language of forum users, we can train a computer to detect when users are getting frustrated and more likely to veer into insulting behavior. There may be no fixed threshold for what constitutes productivity, but perhaps a computer can look at the language used in a discussion and predict whether it will lead to a productive outcome. I believe a great deal of information is hidden in the noisy signal that is human language, and my goal is to develop computational tools that can unlock even a small fraction of it and make it accessible to us all.
Education
- Undergraduate: B.S. Computer Science, Harvey Mudd College (2013-2017)
- Graduate: PhD Computer Science, Cornell University (2017-Present)
Courses I TA
- CS 7400, Foundations of Artificial Intelligence (Fall 2017)