Speaker 1: Tegan Wilson
Title: Optimal Oblivious Reconfigurable Networks
Abstract: Oblivious routing has a long history in both the theory and practice of networking. In this talk, I will initiate the formal study of oblivious routing in the context of reconfigurable networks, a new architecture that has recently come to the fore in datacenter networking. I focus on the tradeoff between maximizing throughput and minimizing latency. For every constant guaranteed throughput rate, I characterize the minimum latency (up to a constant factor) achievable by an oblivious reconfigurable network design. The tradeoff curve turns out to be surprisingly subtle: it has an unexpected scalloped shape, reflecting the fact that load balancing, which I show is necessary for these oblivious designs, becomes more costly when the average path length is not an integer, since the load-balanced path lengths cannot all be made equal.
This talk is based on joint work with Daniel Amir, Nitika Saran, Robert Kleinberg, Hakim Weatherspoon, Vishal Shrivastav, and Rachit Agarwal.
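The load balancing the abstract calls necessary is, at its core, Valiant's classic trick of routing via a random intermediate node. As a rough, hedged illustration only (not the paper's construction: the complete-graph topology, node count, and demand pattern below are invented for the example), here is a toy sketch of that oblivious two-hop routing:

```python
# Toy Valiant load balancing (VLB): route every packet through a uniformly
# random intermediate node, ignoring all other traffic (hence "oblivious").
# Illustrative assumptions: 8 nodes, complete graph, ring permutation demand.
import random

N = 8  # number of nodes (assumed small for the demo)

def vlb_route(src, dst):
    """Oblivious two-hop route: src -> random mid -> dst."""
    mid = random.randrange(N)
    return [(src, mid), (mid, dst)]  # either hop may be a self-loop

def max_link_load(demands, trials=2000):
    """Estimate the worst per-link load when each demand is VLB-routed."""
    load = {}
    for _ in range(trials):
        for s, d in demands:
            for u, v in vlb_route(s, d):
                if u != v:  # self-loops use no link
                    load[(u, v)] = load.get((u, v), 0) + 1
    return max(load.values()) / trials

# Adversarial ring permutation: node i sends one unit to node (i+1) mod N.
perm = [(i, (i + 1) % N) for i in range(N)]
print(max_link_load(perm))  # ~2/N per link, vs. 1 on each ring link if routed directly
```

The price of this balance is an extra hop, which is exactly the kind of throughput-versus-latency cost the talk's tradeoff curve pins down.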
Bio: Tegan is a sixth-year PhD student at Cornell advised by Robert Kleinberg. She is broadly interested in algorithms, graph theory, networks and routing, and combinatorics. Her recent work has focused on network and routing design for reconfigurable datacenter networks, and on proving optimal throughput-versus-latency guarantees in this space.
Speaker 2: Oliver Richardson
Title: Learning, Inference, and the Pursuit of Probabilistic Consistency
Abstract: Internal conflict is an important aspect of human thought. Yet we computer scientists build artificial agents using probabilistic models that, by construction, cannot represent conflicted beliefs. Inconsistency is clearly undesirable, but I argue that we stand to gain a lot by being able to represent it. It turns out that there is a natural way to measure inconsistency, and that many standard algorithms can be viewed as ways of resolving it. We present an expressive class of graphical models called Probabilistic Dependency Graphs (PDGs) that can capture conflicting beliefs. PDGs generalize Bayesian Networks and Factor Graphs, yet allow a modeler to specify arbitrary (even conflicting) probabilistic and causal information, with any degree of confidence.
Beyond their role as graphical models, PDGs provide a unified account of where learning objectives come from, and a visual language for understanding relationships between them. There is a natural way to measure a PDG's degree of inconsistency, and a wide breadth of loss functions can be viewed as the inconsistency of a PDG that models the situation appropriately. This includes variants of cross-entropy, Rényi divergences, accuracy, regularizers, mean-squared error, and variational objectives (i.e., variants of the ELBO). In general, calculating a PDG's degree of inconsistency is hard and, perhaps surprisingly, equivalent to inference, so it makes sense to identify and resolve inconsistencies locally. We will see how several standard machine learning algorithms arise as simple instances of a procedure called local inconsistency resolution.
This talk is based on joint work with Chris de Sa and Joe Halpern.
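To make the "loss functions as inconsistency" viewpoint concrete, here is a hedged toy for the simplest case: a PDG whose only tension is between a "data" arrow and a "model" arrow into the same label variable. Glossing over the confidence parameters of the full framework, the degree of inconsistency in this setting reduces to the familiar cross-entropy; the dataset and model probabilities below are made up for illustration.

```python
# Toy instance of "loss = inconsistency of a PDG": two arrows into a binary
# label Y disagree -- one reports the observed data, one a model's beliefs.
import math

labels = [1, 0, 1, 1]            # the "data" arrow (made-up observations)
model_p1 = [0.9, 0.2, 0.6, 0.8]  # the "model" arrow: p(Y=1 | x) per example

def inconsistency(labels, p1):
    """Average negative log-likelihood of the data under the model, read
    here as the inconsistency between the toy PDG's two arrows."""
    nll = 0.0
    for y, p in zip(labels, p1):
        nll -= math.log(p if y == 1 else 1.0 - p)
    return nll / len(labels)

print(inconsistency(labels, model_p1))  # zero only when the arrows fully agree
```

On this reading, maximum-likelihood training is simply one way of resolving the PDG's inconsistency.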
Bio: Oliver Richardson is a PhD candidate in Computer Science (CS) at Cornell University, advised by Joe Halpern. He works on the mathematical foundations of agents, stitching together probabilistic graphical models, information theory, programming languages, logic, and machine learning. Oliver also holds an MPhil in CS from the University of Cambridge, where he worked on diagrammatic reasoning with Mateja Jamnik. He earned three majors (Math, Biology, CS) and three minors (Chemistry, Physics, Cognitive Science) as an undergraduate at the University of Utah. There, his research focused on pure math (tropical geometry, with Aaron Bertram) and applied machine learning (structured prediction for natural language, with Vivek Srikumar).
Speaker 3: Varsha Kishore
Title: Optimization over Semantic Embedding Spaces
Abstract: Measuring text similarity is a notoriously hard problem, as small changes in language can have a drastic impact on meaning. Large language models are good at learning semantic latent spaces, and the resulting contextual text embeddings from these models serve as powerful representations of information. In this talk, I present two non-traditional uses of semantic distances in these latent spaces. In the first part, I introduce BERTScore, an algorithm designed to measure the similarity between machine translation outputs and human gold standards. BERTScore approximates a form of transport distance to match tokens in the generated and human text. In the second part, I focus on an information retrieval setting, where transformers are trained end-to-end to map search queries to corresponding documents. In this setting, I describe IncDSI, a method to add new documents to a trained retrieval system by solving a constrained convex optimization problem to obtain new document representations.
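As a rough sketch of the first part: BERTScore computes pairwise cosine similarities between the two token sequences' contextual embeddings and greedily matches each token to its best counterpart, which can be seen as a relaxation of the one-to-one matching an exact transport distance would enforce. The random vectors below stand in for real contextual embeddings so the matching logic is runnable:

```python
# Sketch of BERTScore-style greedy matching. Real BERTScore embeds tokens
# with a pretrained model (e.g., BERT); random vectors stand in here.
import numpy as np

rng = np.random.default_rng(0)
cand = rng.normal(size=(5, 16))  # candidate tokens x embedding dim (stand-in)
ref = rng.normal(size=(6, 16))   # reference tokens x embedding dim (stand-in)

def bertscore_f1(cand, ref):
    # Cosine similarity between every candidate/reference token pair.
    c = cand / np.linalg.norm(cand, axis=1, keepdims=True)
    r = ref / np.linalg.norm(ref, axis=1, keepdims=True)
    sim = c @ r.T                       # shape: (#cand tokens, #ref tokens)
    precision = sim.max(axis=1).mean()  # best reference match per candidate token
    recall = sim.max(axis=0).mean()     # best candidate match per reference token
    return 2 * precision * recall / (precision + recall)

print(bertscore_f1(cand, ref))
```

The released metric also supports importance-weighting tokens (e.g., by IDF); this sketch keeps only the core matching step.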
Bio: Varsha Kishore is a sixth-year computer science PhD candidate at Cornell advised by Kilian Weinberger. She works on applied machine learning problems relating to evaluating textual similarity, information retrieval models, and text diffusion models. During her PhD, she has interned at ASAPP, Microsoft Research, and Google. Before starting her PhD, she studied Math and Computer Science at Harvey Mudd College.
Speaker 4: Julia Len
Title: Designing secure-by-default cryptography for computer systems
Abstract: Cryptography has become a powerful tool for securing deployed computer systems. Yet actually designing cryptography that protects against all the threats of deployment can still be surprisingly hard to do. This frequently forces practitioners to implement ad-hoc mitigations, or users to make the correct security choices themselves, and very often they do not do this well, if at all. The end result is subtle vulnerabilities in our most important cryptographic protocols. In this talk, I will describe my approach to instead designing new secure-by-default cryptography for computer systems. I will focus on my work applying this approach to authenticated encryption, where I introduced a new class of attacks that yields exponential speed-ups in password-guessing attacks against deployed systems, developed new theory to guide the design of better encryption schemes, and designed new practical cryptographic schemes for real-world deployment.
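For a taste of the attack surface: the password-guessing speed-ups above exploit the fact that widely deployed AEAD schemes such as AES-GCM are not key-committing, so a single crafted ciphertext can decrypt validly under many candidate keys. Below is a hedged toy sketch of the secure-by-default instinct of rejecting mismatched keys up front; it assumes the pyca/cryptography package, and the hash-based commitment is a folklore-style illustration, not a scheme from the talk.

```python
# Toy key-committing wrapper around AES-GCM: ship a commitment to the key
# alongside the ciphertext and check it before decrypting. Illustrative only.
import hashlib
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def commit(key, nonce):
    """Commitment binding the ciphertext to one key (collision-resistant hash)."""
    return hashlib.sha256(b"key-commit" + key + nonce).digest()

def enc(key, msg):
    nonce = os.urandom(12)
    ct = AESGCM(key).encrypt(nonce, msg, None)
    return nonce, commit(key, nonce), ct

def dec(key, nonce, tag, ct):
    if commit(key, nonce) != tag:  # wrong keys are rejected before decryption
        raise ValueError("key does not match commitment")
    return AESGCM(key).decrypt(nonce, ct, None)

key = AESGCM.generate_key(bit_length=128)
nonce, tag, ct = enc(key, b"attack at dawn")
print(dec(key, nonce, tag, ct))
```

A real design would need a formal committing-security analysis; the point here is only the shape of the defense.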
Bio: Julia Len is a PhD student at Cornell University (based in NYC at Cornell Tech) advised by Thomas Ristenpart. Her research interests are broadly in the areas of applied cryptography and computer security.