CS Theory Seminar

The CS theory seminar is a weekly meeting where speakers present their research in theoretical computer science. All are welcome.

Stay in the loop: to join the event mailing list, email cs-theory-seminar-l-request [at] cornell.edu with "join" in the subject line and a blank message body.

Spring 2026 Schedule

Theory seminars will take place in Gates 122, 3:45–5:00 p.m. A Zoom option is available for remote attendance.


Date: February 2, 2026
Title: Collectively flipping many coins
Speaker: Mohit Gurumukhani, Ph.D. student at Cornell University
Host: Eshan Chattopadhyay

 

*CANCELED*
Date: February 9, 2026
Title: TBD
Speaker: Ira Globus-Harris, Cornell Tech
Host: Éva Tardos
 

Date: February 23, 2026
Title: Randomised 2-Source Extractor and 2-Source Non-Malleable Extractor in the Linear Entropy Regime
Speaker: Satyajeet Nagargoje, Georgetown University
Host: Noah Stephens-Davidowitz
 

Date: March 2, 2026
Title: TBD
Speaker: Nikhil Shagrithaya, University of Michigan
Host: Eshan Chattopadhyay
 

Date: March 9, 2026
Title: TBD
Speaker: Rachel Zhang, MIT
Host: Eshan Chattopadhyay
 

Date: March 16, 2026
Title: TBD
Speaker: Omer Tamuz, CalTech
Host: Éva Tardos

Event Archive

Browse past lectures. If you wish to view an event listing prior to 2024, please email comm-office [at] cis.cornell.edu.

Date: August 25
Title: Bayesian Algorithms for Adversarial Online Learning in Non-discrete Action Spaces
Speaker: Alexander Terenin, Cornell University
Host: Eshan Chattopadhyay

 

Bonus Theory Seminar 
Date: August 27
Title: Breaking Verifiable Delay Functions in the Random Oracle Model
Speaker: Ziyi Guan, EPFL
Computing and Information Science Building, Room 450 
Host: Nick Spooner

 

Date: September 8
Title: Low-Depth Arithmetic Circuits for Multivariate Resultants
Speaker: Robert Andrews, University of Waterloo
Host: Eshan Chattopadhyay

 

Date: September 15
Title:Breaking the $T^{2/3}$ Barrier for Sequential Calibration
Speaker: Maxwell (Max) Fishelson
 

Date: September 22
Title: Efficient Sampling by Algorithmic Diffusion
Speaker: Santosh Vempala, Georgia Tech
Host: Éva Tardos

 

Date: September 29
Title: Learning in Budgeted Auctions with Spacing Objectives
Speaker: Giannis Fikioris, Cornell University
Host: Éva Tardos

 

Date: October 6
Title: Menus: A Framework for Learning Against Strategic Opponents
Speaker: Natalie Collina, University of Pennsylvania
Host: Éva Tardos
 

Bonus Theory Seminar 
Date: October 8
Title: Combinatorial Selection Models with Costly Information
Speaker: Dimitrios Christou, Ph.D. student, UT Austin
Computing and Information Science Building, Room 450 


Date: October 20
Title: How to Appease a Voter Majority
Speaker: Prasanna Ramakrishnan, Ph.D. student, Stanford University, CS Theory group
Host: Noah Stephens-Davidowitz


Date: November 3
Title: Sparsifying intersections of halfspaces
Speaker: Shivam Nadimpalli, MIT
Host: Eshan Chattopadhyay


Date: November 10
Title: Linear Hashing is Optimal
Speaker: Vinayak Kumar, UT Austin
Host: Eshan Chattopadhyay


Date: November 17
Title: Graphic Matroid Secretary without the Graph
Speaker: Renato Paes Leme, Google Research New York
Host: Éva Tardos


Date: November 24
Title: Reed-Muller based IOPs via the Line vs. Point Test
Speaker: Kai Zhe Zheng, MIT
Host: Nick Spooner


Date: December 1
Title: The structure of in-place space-bounded computation
Speaker: Surendra Ghentiyala, Cornell University
Host: Noah Stephens-Davidowitz


Date: December 8
Title: Fourier Growth in Quantum Advantage and Pseudorandomness
Speaker: Avishay Tal, UC Berkeley
Host: Eshan Chattopadhyay

01.28.25: A quest for an algorithmic theory for high-dimensional statistical inference
Speaker: Sidhanth Mohanty, MIT
Host: Eshan Chattopadhyay
Abstract: When does a statistical inference problem admit an efficient algorithm?
There is an emergent body of research that studies this question by trying to understand the power and limitations of various algorithmic paradigms in solving statistical inference problems; for example, convex programming, Markov chain Monte Carlo (MCMC) algorithms, and message passing algorithms to name a few.

Of these, MCMC algorithms are easy to adapt to new inference problems and have shown strong performance in practice, which makes them promising as a universal algorithm for inference.  However, provable guarantees for MCMC have been scarce, lacking even for simple stylized models of inference.


02.03.25: An efficient quantum parallel repetition theorem and applications
Speaker: Nick Spooner, Cornell
Host: Eshan Chattopadhyay
Abstract: A fundamental question in cryptography is whether the security of a “weak” cryptographic primitive can be generically amplified. We investigate this question in the quantum setting. We prove a general security amplification result for any quantum cryptographic primitive with a three-message security game, including (canonical) quantum commitments and quantum money.

 

02.10.25: Hypergraph Projection and Reconstruction
Speaker: Yanbang Wang, Cornell
Host: Jon Kleinberg
Abstract: Hypergraphs generalize the concept of graphs by extending edges from node pairs to node sets of arbitrary sizes. In this talk, we discuss the implications of using a graph, instead of a hypergraph, to represent real-world interconnected systems that contain relationships of higher order by nature. Such a modeling choice typically involves a projection (clique expansion) process that maps the original hypergraph onto a graph, and is common in graph-based analysis. While hypergraph projection can potentially lead to loss of higher-order relations, there exist limited studies on the consequences of doing so, as well as possible remedies. 

 

02.24.25: Improved Approximation Algorithms for the Joint Replenishment Problem with Outliers, and with Fairness Constraints
Speaker: Varun Suriyanarayana, Cornell
Host: David Shmoys
Abstract: The joint replenishment problem (JRP) is a classical inventory management problem. We consider a natural generalization with outliers, where we are allowed to reject (that is, not service) a subset of demand points. In this talk, we are motivated by issues of fairness - if we do not serve all of the demands, we wish to “spread out the pain” in a balanced way among customers, communities, or any specified market segmentation. One approach is to constrain the rejections allowed, and to have separate bounds for each given customer. In our most general setting, we consider a set of C features, where each demand point has an associated rejection cost for each feature, and we have a given bound on the allowed rejection cost incurred in total for each feature. 
We give the first constant approximation algorithms for the fairness-constrained JRP with a constant number of features; specifically, we give a 2.86-approximation algorithm in this case. Even for the special case in which we bound the total (weighted) number of outliers, this performance guarantee improves upon bounds previously known for this case.

 

03.03.25: A Zero-Knowledge PCP Theorem
Speaker: Jack O'Connor, Cambridge
Host: Nick Spooner
Abstract: A probabilistically checkable proof (PCP) is a proof which can be verified by inspecting a small number of symbols from the proof. The PCP theorem states that for any language in NP, there exists a polynomial-size proof that can be verified probabilistically by reading only a constant number of bits from the proof. 
We prove a “zero-knowledge PCP theorem”. For every polynomial q*, we construct PCPs for NP of polynomial length, which require only a constant number of queries to verify, and which are perfect zero knowledge against adaptive adversaries making at most q* queries to the proof. In addition, we construct exponential-length PCPs for NEXP, which require only a constant number of queries to verify, and which are perfect zero knowledge against any polynomial-time adversary.


03.05.25: When Connectivity is Hard, Random Walks are Easy (with Nondeterminism)
Speaker: Ted Pyne, MIT
Host: Eshan Chattopadhyay
Abstract: Two well-studied graph problems are to 1: determine connectivity, and 2: estimate the behavior of random walks. Currently, there is no algorithm which solves (1) in polynomial time and strongly sublinear space, and no algorithm for (2) that runs in nondeterministic logspace.
We show that for every graph, at least one of these problems is solvable more efficiently than the state of the art. Our results build on recent work on distinguish-to-predict transformations (Li, Pyne, Tell) and bootstrapping systems (Chen, Tell). As a consequence, either randomized linear space can be derandomized, or a time- and space-efficient simulation of nondeterministic linear space holds.


03.10.25: QMA vs. QCMA and Pseudorandomness
Speaker: Henry Yuen, Columbia
Host: Nick Spooner
Abstract: In quantum complexity theory, the complexity classes QMA and QCMA are two different generalizations of NP. Both are defined as sets of languages whose Yes instances can be efficiently checked by a quantum verifier that is given a witness. With QMA the witness can be a quantum state, whereas with QCMA the witness must be a classical string. It is clear that QMA contains QCMA, but could there be a separation? In other words, can a quantum state prove more to a polynomial-time quantum verifier than a polynomial-length classical string?
In 2007, Aaronson and Kuperberg asked whether there is a (classical) oracle separation between QMA and QCMA. We show that such an oracle exists if a certain quantum pseudorandomness conjecture holds. Roughly speaking, the conjecture posits that quantum algorithms cannot, by making few queries, distinguish between the uniform distribution over permutations versus permutations drawn from so-called "dense" distributions. It turns out that this pseudorandomness conjecture is deeply related to another, seemingly-unrelated, question in theoretical computer science known as the Aaronson-Ambainis conjecture.


03.17.25: Language Generation in the Limit
Speaker: Jon Kleinberg, Cornell
Host: Eshan Chattopadhyay
Abstract: Although current large language models are complex, the most basic specifications of the underlying language generation problem itself are simple to state: given a finite set of training samples from an unknown language, produce valid new strings from the language that don't already appear in the training data. Here we ask what we can conclude about language generation using only this specification, without any further properties or distributional assumptions. In particular, we consider models in which an adversary enumerates the strings of an unknown target language that is known only to come from a possibly infinite list of candidate languages, and we show that it is possible to give certain non-trivial guarantees for language generation in this setting. The resulting guarantees contrast dramatically with negative results due to Gold and Angluin in a well-studied model of language learning where the goal is to identify an unknown language from samples; the difference between these results suggests that identifying a language is a fundamentally different problem than generating from it. (This is joint work with Sendhil Mullainathan.) 

 

04.07.25: Near-Optimal Algorithms for Omniprediction
Speaker: Princewill Okoroafor, Cornell
Host: Eshan Chattopadhyay
Abstract: Omnipredictors are simple prediction functions that encode loss-minimizing predictions with respect to a hypothesis class H, simultaneously for every loss function within a class of losses L. In this work, we give near-optimal learning algorithms for omniprediction, in both the online and offline settings. To begin, we give an oracle-efficient online learning algorithm that achieves (L, H)-omniprediction with O(\sqrt{T log |H|}) regret for any class of Lipschitz loss functions L ⊆ LLip. Quite surprisingly, this regret bound matches the optimal regret for minimization of a single loss function (up to a \sqrt{log(T)} factor). Given this online algorithm, we develop an online-to-offline conversion that achieves near-optimal complexity across a number of measures. In particular, for all bounded loss functions within the class of Bounded Variation losses LBV (which include all convex, all Lipschitz, and all proper losses) and any (possibly-infinite) H, we obtain an offline learning algorithm that, leveraging an (offline) ERM oracle and m samples from D, returns an efficient (LBV, H, ε(m))-omnipredictor for ε(m) scaling near-linearly in the Rademacher complexity of Th ◦ H.

 

04.14.25: Randomness extractors for a few hidden sources
Speaker: Jesse Goodman, UT Austin
Host: Eshan Chattopadhyay
Abstract: Given a sequence of N independent sources of randomness X_1, …, X_N, how many of them must be *good* (i.e., contain some entropy) in order to extract a uniformly random string? This question was first raised at STOC’20, motivated by applications in cryptography, and by the unreliable nature of real-world randomness. There, it was shown how to extract uniform bits with very low error, even when just K = N^{0.5} of the sources are good. In a follow-up work at FOCS’21, the number of good sources required was significantly improved to K = N^{0.01}.

 

04.21.25: Good Data and Bad Data: The Welfare Effects of Price Discrimination
Speaker: Maryam Farboodi, MIT Sloan
Host: Éva Tardos
Abstract: The rise of big data technologies has revived the longstanding debate on who reaps the economic benefit of more information: firms or consumers. We study the welfare consequences of monopoly pricing across all possible segmentations of a given family of demand curves. We provide necessary and sufficient conditions on this family of demand curves under which the monopolist's use of information never depresses versus always improves consumer welfare, despite the ensuing price discrimination. Our classification provides insights to guide the recent policy discussion on the use of consumer data and sheds light on the type of information that can be allowed in price discrimination. Joint work with Nima Haghpanah and Ali Shourideh.

 

04.28.25: Basefold: Efficient Field-Agnostic Polynomial Commitment Schemes from Foldable Codes.
Speaker: Hadas Zeilberger, Yale
Host: Nick Spooner
Abstract: This work introduces BaseFold, a new field-agnostic Polynomial Commitment Scheme (PCS) for multilinear polynomials that has O(log^2(n)) verifier costs and O(n log n) prover time. An important application of a multilinear PCS is constructing Succinct Non-interactive Arguments (SNARKs) from multilinear polynomial interactive oracle proofs (PIOPs). Furthermore, field-agnosticism is a major boon to SNARK efficiency in applications that require (or benefit from) a certain field choice.
Our inspiration for BaseFold is the Fast Reed-Solomon Interactive-Oracle Proof of Proximity (FRI IOPP), which leverages two properties of Reed-Solomon (RS) codes defined over “FFT-friendly” fields: O(n log n) encoding time, and a second property that we call foldability. We first introduce a generalization of the FRI IOPP that works over any foldable linear code in linear time. Second, we construct a new family of linear codes which we call random foldable codes, that are a special type of punctured Reed-Muller codes, and prove tight bounds on their minimum distance. Unlike RS codes, our new codes are foldable and have O(n log n) encoding time over any sufficiently large field. Finally, we construct a new multilinear PCS by carefully interleaving our IOPP with the classical sumcheck protocol, which also gives a new multilinear PCS from FRI.

 

05.07.25: Springboards
Speaker: Cynthia Dwork, Harvard
Host: Noah Stephens-Davidowitz
Abstract: Differential Privacy and Outcome Indistinguishability have yielded results beyond the areas in which they were conceived.  In this talk, we use Differential Privacy to obtain a Reusable Holdout Set, thereby enabling sound exploratory data analysis.  We then use Outcome Indistinguishability to obtain a strong complexity-theoretic regularity theorem analogous to Szemerédi's Graph Regularity Lemma, and explore some corollaries and open questions.  

 

05.12.25: Oblivious Defense in ML Models: Backdoor Removal without Detection
Speaker: Jonathan Shafer, MIT
Host: Michael Kim
Abstract: As society grows more reliant on machine learning, ensuring the security of machine learning systems against sophisticated attacks becomes a pressing concern. A recent result of Goldwasser, Kim, Vaikuntanathan, and Zamir (FOCS 2022) shows that an adversary can plant undetectable backdoors in machine learning models, allowing the adversary to covertly control the model’s behavior. Backdoors can be planted in such a way that the backdoored machine learning model is computationally indistinguishable from an honest model without backdoors. 

08.26.24: Efficiently Learning Markov Random Fields from Dynamics
Speaker: Jason Gaitonde, MIT
Host: Éva Tardos
Abstract: An important task in high-dimensional statistics is learning the parameters or dependency structure of undirected graphical models, or Markov random fields (MRF), which have applications across computer science, machine learning, and statistics. Prior work on this problem assumes access to i.i.d. samples from the MRF distribution and state-of-the-art algorithms succeed with n^{\Theta(k)} runtime, where n is the dimension and k is the order of the interactions. However, well-known computational lower bounds imply that given i.i.d. samples from a sparse, order-k MRF, any learning algorithm likely requires n^{\Omega(k)} time. Moreover, in many economic, network, or statistical physics applications, even obtaining i.i.d. samples from the MRF distribution is unrealistic or itself computationally hard.

 

09.09.24: Eliciting Informative Text Evaluations with Large Language Models
Speaker: Grant Schoenebeck, U Michigan
Host: Éva Tardos
Abstract: In a wide variety of contexts including peer grading, peer review, and crowd-sourcing (e.g. evaluating LLM outputs) we would like to design mechanisms which reward agents for producing high quality responses. Unfortunately, computing rewards by comparing to ground truth or gold standard is often cumbersome, costly, or impossible. Instead we would like to compare agent reports.
Peer prediction mechanisms motivate high-quality feedback with provable guarantees. However, current methods only apply to rather simple reports, like multiple-choice or scalar numbers. We aim to broaden these techniques to the larger domain of text-based reports, drawing on the recent developments in large language models. This vastly increases the applicability of peer prediction mechanisms as textual feedback is the norm in a large variety of feedback channels: peer reviews, e-commerce customer reviews, and comments on social media. We introduce two mechanisms, the Generative Peer Prediction Mechanism (GPPM) and the Generative Synopsis Peer Prediction Mechanism (GSPPM). These mechanisms utilize LLMs as predictors, mapping from one agent’s report to a prediction of her peer’s report. Theoretically, we show that when the LLM prediction is sufficiently accurate, our mechanisms can incentivize high effort and truth-telling as an (approximate) Bayesian Nash equilibrium. Empirically, we confirm the efficacy of our mechanisms through experiments conducted on two real datasets: the Yelp review dataset and the ICLR OpenReview dataset. We highlight the results that on the ICLR dataset, our mechanisms can differentiate three quality levels — human written reviews, GPT-4-generated reviews, and GPT-3.5-generated reviews in terms of expected scores. Additionally, GSPPM penalizes LLM-generated reviews more effectively than GPPM.
No background is required, and the talk will introduce relevant background of peer prediction mechanisms.   

 

09.16.24: Algorithmic Threshold for the Random Perceptron
Speaker: Brice Huang, MIT
Host: Ahmed El Alaoui
Abstract: We consider the problem of efficiently optimizing random (spherical or Ising) perceptron models with general bounded Lipschitz activation. We focus on a class of algorithms with Lipschitz dependence on the disorder: this includes gradient descent, Langevin dynamics, approximate message passing, and any constant-order method on dimension-free time-scales. Our main result exactly characterizes the optimal value ALG such algorithms can attain in terms of a one-dimensional stochastic control problem. Qualitatively, ALG is the largest value whose level set contains a certain "dense solution cluster." Quantitatively, this characterization yields both improved algorithms and hardness results for a variety of asymptotic regimes, which are sharp up to absolute constant factors. Joint work (in progress) with Mark Sellke and Nike Sun.

 

09.23.24: On the Existence of Seedless Condensers: Exploring the Terrain
Speaker: Noam Ringach, Cornell
Host: Eshan Chattopadhyay
Abstract: While the existence of randomness extractors has been thoroughly studied, very little is known regarding the existence of seedless condensers. We prove several new results regarding seedless condensers for three related classes of sources: Non-Oblivious Symbol Fixing (NOSF) sources, online NOSF sources, and almost Chor-Goldreich (CG) sources. These sources are a sequence of random variables over many symbols. There are l symbols out of which g symbols are "good", and the remaining "bad" symbols adversarially depend on the good symbols. The difference between each of these sources is realized by restrictions on the power of the adversary.

 

10.07.24: Efficient Algorithms for Semirandom Planted CSPs at the Refutation Threshold
Speaker: Pravesh Kothari, Princeton
Host: Eshan Chattopadhyay
Abstract: We present an efficient algorithm to solve semi-random planted instances of any Boolean constraint satisfaction problem (CSP). The semi-random model is a hybrid between worst-case and average-case input models, where the input is generated by (1) choosing an arbitrary planted assignment x∗, (2) choosing an arbitrary clause structure, and (3) choosing literal negations for each clause from an arbitrary distribution "shifted by x∗" so that x∗ satisfies each constraint. For an n-variable semi-random planted instance of a k-arity CSP, our algorithm runs in polynomial time and outputs an assignment that satisfies all but an o(1) fraction of constraints, provided that the instance has at least Õ(n^{k/2}) constraints. This matches, up to polylog(n) factors, the clause threshold for algorithms that solve fully random planted CSPs [FPV15] and algorithms that refute random and semi-random CSPs. Our result shows that despite having a worst-case clause structure, the randomness in the literal patterns makes semi-random planted CSPs significantly easier than the worst case, where analogous results require O(n^k) constraints.
Perhaps surprisingly, our algorithm follows a conceptual framework different from the recent resolution of semi-random CSP refutation. This turns out to be inherent and, at a technical level, can be attributed to the need for relative spectral approximation of certain random matrices - reminiscent of the classical spectral sparsification - which ensures that an SDP can certify the uniqueness of the planted assignment. In contrast, in the refutation setting, obtaining a weaker guarantee of absolute upper bounds on the spectral norm of related matrices suffices.
Based on joint work with Venkatesan Guruswami, Jun-Ting Hsieh, and Peter Manohar.

 

10.21.24: Sampling From Mean-Field Spin Glass Gibbs Measures
Speaker: Ahmed El Alaoui, Cornell
Host: Eshan Chattopadhyay
Abstract: I will discuss the questions of sampling and finding solutions of a prescribed energy in the mean-field p-spin Gibbs measure. Whether these tasks are efficiently possible is connected to the geometric properties of the measure, which undergoes a sequence of phase transition phenomena as the temperature is changed. I will present the phase diagram and discuss what is supposed to happen in each phase. I will present an efficient sampling algorithm in a large sub-interval of temperatures where this task is believed to be possible, and rule out a large sub-family of algorithms in the complement regime by showing that the Gibbs measure is chaotic in a certain sense.      

 

11.04.24: Uncertain Repeated Games
Speaker: Rohit Lamba, Cornell
Host: Eshan Chattopadhyay
Abstract: Multiple long run players play one amongst multiple possible stage games in each period. They observe and recall past play and are aware of the current stage game being played, but are uncertain about the future evolution of stage games. This setup is termed an uncertain repeated game. The solution concept requires that a subgame perfect equilibrium be played no matter what sequence of stage games realize. The feasible set of payoffs is then so large and complex that it is not obvious how to frame standard results such as the folk theorem, and further how to construct credible rewards and punishments that work irrespective of the future evolution of games. The main goal of the paper is to build such a language through two different perspectives— one in which the modeler has access to the true stochastic process but not the players and another in which there is simply maximal uncertainty; and then to construct credible dynamic incentives that work generally for uncertain repeated games. A complete characterization of equilibria is presented for large discount factors and various extensions to related models and results are discussed.

 

11.11.24: Tools of Thought: Building Algorithms that Enhance Human Capacity
Speaker: Sendhil Mullainathan, MIT
Host: Center for Data Science for Enterprise and Society
Abstract: The biggest proponents for AI lack no confidence. We are close, they argue, to being able to build algorithms that do everything humans can. The problem with this vision is not hubris, quite the opposite. I will argue that it lacks ambition.  Computing will have the biggest (positive) impact on society not from algorithms doing what we already do cheaper or in silico; but from doing things that humans cannot even dream of. In effect, we want thought partners, not substitutes. Building such algorithms requires that we integrate our knowledge of people with our knowledge of computing. I will illustrate with several examples. 

 

11.18.24: Contagion in Large-Scale Networks: Frameworks for Resource Allocation and Simulation
Speaker: Marios Papachristou, Cornell
Host: Eshan Chattopadhyay
Abstract: In this talk, I focus on resource allocation in dynamically changing complex environments that undergo contagion in the real world such as peer-to-peer lending networks, viral marketing campaigns, ridesharing, and financial networks. In these systems, a planner needs to allocate resources subject to a budget to cause or prevent contagion. These environments are often very large-scale and dynamically changing and usually have missing links. My work aims to address these two problems.
I present a model for allocations that can be provably and efficiently solved in large-scale networks and is more flexible than existing contagion models. I show the model’s efficiency in several use cases such as physical financial networks, ridesharing networks, and digital financial networks.

 

11.25.24: Aggregating Preferences with Limited Queries
Speaker: Daniel Halpern, Harvard
Host: Robert Kleinberg
Abstract: Social choice theory studies how to aggregate individual preferences into a single collective decision. Traditionally, this assumes complete access to each individual’s complete preferences. However, in a variety of settings, this assumption does not hold. For example, modern online platforms promoting civic participation, such as Pol.is, aggregate complex preferences over a vast space of alternatives, rendering it infeasible to learn any individual's preferences completely. Instead, preferences are elicited by asking each user a query about a small subset of their preferences.

 

12.02.24: Agentic Design in Economic Environments
Speaker: Brendan Lucier, Microsoft Research
Host: Éva Tardos
Abstract: Many online market platforms have associated algorithmic tools that can optimize on a user's behalf, such as algorithmic bidders or price optimizers. Recent advances in agentic AI enable even more use cases for LLM-powered tools and advice. But as algorithmic assistance becomes increasingly common, how will users strategically maneuver their usage of these tools? What will be the system-wide impact at equilibrium as agents interact, what are the potential pitfalls of algorithmic delegation, and how does this inform platform and agent design?
In this talk we will explore these questions through the lens of resource allocation markets.  Buyers in our model have access to platform-provided algorithmic agents that compete for resources on their behalf, given a budget and other parameters. We consider designs for the platform and agents that yield stable and approximately efficient outcomes at equilibrium, for broad classes of buyer preferences over resources and money.


12.09.24: Behavioral Bias in School Choice: Theory and Empirics
Speaker: Emily Ryu, Cornell
Host: Eshan Chattopadhyay
Abstract: A fundamental component in the theoretical school choice literature is the subset selection problem a student faces in deciding which schools to apply to. Recent models consider a student who is unsure of their strength as an applicant and can only apply to at most k schools of varying selectiveness, and therefore must diversify their limited portfolio in order to optimize the best school they get into. However, experience suggests that students are additionally influenced by crucial behavioral biases based on reputational effects: they experience a subjective reputational benefit when admitted to a selective school, and a subjective loss based on disappointment when rejected.