![ ](Bertram-298.jpg width="20%")**Noah Bertram**
nbertram@cs.cornell.edu [CV](cv.pdf)
Hello! I am a fourth-year PhD student in Computer Science at Cornell University. I am broadly interested in programming language theory, though most recently my time has been spent on the verification of economic mechanisms and, separately, on weighted automata learning. I am grateful to be advised by Justin Hsu.
For fun, I find myself trying to make my computer keyboards sound nice, being particular about how my coffee tastes, or simply enjoying my wife Ashley's company. Also, my faith in Jesus is quite important to me - I'd be happy to talk about it!
Publications
===========================
[Verifying Cake-Cutting, Faster](https://arxiv.org/abs/2405.14068)
Envy-free cake-cutting protocols procedurally divide an infinitely divisible good among a set of agents so that no agent prefers another's allocation to their own. These protocols are highly complex and difficult to prove correct. Recently, Bertram, Levinson, and Hsu introduced a language called Slice for describing and verifying cake-cutting protocols. Slice programs can be translated to formulas encoding envy-freeness, which are solved by SMT. While Slice works well on smaller protocols, it has difficulty scaling to more complex cake-cutting protocols.
We improve Slice in two ways. First, we show any protocol execution in Slice can be replicated using piecewise uniform valuations. We then reduce Slice's constraint formulas to formulas within the theory of linear real arithmetic, showing that verifying envy-freeness is efficiently decidable. Second, we design and implement a linear type system which enforces that no two agents receive the same part of the good. We implement our methods and verify a range of challenging examples, including the first nontrivial four-agent protocol.
Noah Bertram, Tean Lai, Justin Hsu
CAV 2024
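The paper's first contribution rests on piecewise uniform valuations: an agent's preferences are captured by a finite union of intervals over which their value is spread uniformly, so the value of any piece is just normalized overlap length. A minimal sketch of that idea (the representation and names here are my own, not Slice's):

```python
def overlap(a, b, c, d):
    # Length of the intersection of intervals [a, b] and [c, d].
    return max(0.0, min(b, d) - max(a, c))

def value(desired, piece):
    # `desired`: disjoint intervals the agent cares about (the piecewise uniform support).
    # `piece`: an interval of the cake [0, 1] allocated to the agent.
    total = sum(b - a for a, b in desired)
    got = sum(overlap(a, b, *piece) for a, b in desired)
    return got / total  # normalized so the whole cake has value 1

# An agent who only values [0, 0.5]: the piece [0.25, 0.75] covers half of that support.
v = value([(0.0, 0.5)], (0.25, 0.75))
```

Because every such valuation is determined by finitely many interval endpoints, constraints over them stay within linear real arithmetic, which is what makes the reduction in the paper efficiently decidable.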
[Cutting the Cake: A Language for Fair Division](https://arxiv.org/abs/2304.04642)
The fair division literature in economics considers how to divide resources between multiple agents such that the allocation is envy-free: each agent receives their favorite piece. Researchers have developed a variety of fair division protocols for the most standard setting, where the agents want to split a single item; however, the protocols are highly intricate, and the proofs of envy-freeness involve tedious case analysis.
We propose Slice, a domain-specific language for fair division. Programs in our language can be converted to logical formulas encoding envy-freeness and other target properties. Then, the constraints can be dispatched to automated solvers. We prove that our constraint generation procedure is sound and complete. We also report on a prototype implementation of Slice, which we have used to automatically check envy-freeness for several protocols from the fair division literature.
Noah Bertram, Alex Levinson, Justin Hsu
PLDI 2023
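The simplest protocol in this literature is the classic two-agent cut-and-choose: one agent cuts the cake at their halfway point, and the other takes whichever piece they prefer. A self-contained sketch of the protocol and the envy-freeness check that Slice automates (my own illustrative code, using piecewise uniform valuations; not the Slice language itself):

```python
def value(desired, piece):
    """Value of interval `piece` under a piecewise uniform valuation `desired`
    (a list of disjoint intervals), normalized so the whole cake is worth 1."""
    total = sum(b - a for a, b in desired)
    lo, hi = piece
    got = sum(max(0.0, min(b, hi) - max(a, lo)) for a, b in desired)
    return got / total

def halfway_cut(desired):
    # Point x where the agent values [0, x] at exactly 1/2, found by binary search.
    lo, hi = 0.0, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if value(desired, (0.0, mid)) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def cut_choose(v1, v2):
    # Agent 1 cuts at their halfway point; agent 2 takes the piece they prefer.
    x = halfway_cut(v1)
    left, right = (0.0, x), (x, 1.0)
    if value(v2, left) >= value(v2, right):
        return right, left   # (agent 1's piece, agent 2's piece)
    return left, right

p1, p2 = cut_choose([(0.0, 1.0)], [(0.5, 1.0)])
# Envy-freeness: each agent weakly prefers their own piece to the other's.
assert value([(0.0, 1.0)], p1) >= value([(0.0, 1.0)], p2)
assert value([(0.5, 1.0)], p2) >= value([(0.5, 1.0)], p1)
```

The "tedious case analysis" the abstract mentions corresponds to checking these inequalities across all branches of a protocol; Slice discharges them symbolically rather than for one fixed pair of valuations.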
[Blast from the Past: Least Expected Use (LEU) Cache Replacement with Statistical History](https://dl.acm.org/doi/abs/10.1145/3591195.3595267)
Cache replacement policies typically use some form of statistics on past access behavior. As a common limitation, however, the extent of the history being recorded is limited to either just the data in cache or, more recently, a larger but still finite-length window of accesses, because the cost of keeping a long history can easily outweigh its benefit.
This paper presents a statistical method to keep track of instruction pointer-based access reuse intervals of arbitrary length and uses this information to identify the Least Expected Use (LEU) blocks for replacement. LEU uses dynamic sampling supported by novel hardware that maintains a state to record arbitrarily long reuse intervals. LEU is evaluated using the Cache Replacement Championship simulator, tested on PolyBench and SPEC, and compared with five policies including a recent technique that approximates optimal caching using a fixed-length history. By maintaining statistics for an arbitrary history, LEU outperforms previous techniques for a broad range of scientific kernels, whose data reuses are longer than those in traces traditionally used in computer architecture studies.
Sayak Chakraborti, Zhizhou (Chris) Zhang, Noah Bertram, Sandhya Dwarkadas, Chen Ding
ISMM 2023
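The core idea can be illustrated in software: record the reuse intervals observed for each block, predict each cached block's next use from that history, and evict the block whose next use is expected farthest in the future (or never). This is a toy model of my own, not the paper's design, which gathers instruction-pointer-based statistics with dedicated hardware and dynamic sampling:

```python
from collections import defaultdict

class LEUCache:
    """Toy least-expected-use cache: per-block reuse-interval histories
    drive victim selection. Illustrative only."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = {}                # block -> time of last access
        self.hist = defaultdict(list)  # block -> observed reuse intervals
        self.time = 0

    def expected_next_use(self, block):
        intervals = self.hist[block]
        if not intervals:
            return float("inf")        # never reused before: best eviction candidate
        mean = sum(intervals) / len(intervals)
        return self.cache[block] + mean  # predicted time of next use

    def access(self, block):
        self.time += 1
        hit = block in self.cache
        if hit:
            # A hit yields a fresh reuse-interval sample for this block.
            self.hist[block].append(self.time - self.cache[block])
        elif len(self.cache) >= self.capacity:
            # Evict the block whose next use is expected latest.
            victim = max(self.cache, key=self.expected_next_use)
            del self.cache[victim]
        self.cache[block] = self.time
        return hit
```

On the trace A, B, A, C with capacity 2, block B has no observed reuse when C arrives, so B is evicted and A survives; a fixed-length history would eventually forget A's long reuse intervals, which is the limitation the paper targets.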
[Partial zeta functions, partial exponential sums, and p-adic estimates](https://arxiv.org/abs/2106.09755)
Partial zeta functions of algebraic varieties over finite fields generalize the classical zeta function by allowing each variable to be defined over a possibly different extension field of a fixed finite field. Due to this extra variation, their rationality is surprising, and even simple examples are delicate to compute. For instance, we give a detailed description of the partial zeta function of an affine curve where the number of unit poles varies, a property different from classical zeta functions. On the other hand, they do retain some properties similar to the classical case. To this end, we give Chevalley-Warning type bounds for partial zeta functions and L-functions associated to partial exponential sums.
Noah Bertram, Xiantao Deng, C. Douglas Haessig, Yan Li
Finite Fields and Their Applications (March 2023)
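For orientation, the object in question can be stated concretely. Following what I believe is Wan's formulation (which this line of work builds on; treat the exact normalization as an assumption), fix a finite field $\mathbb{F}_q$, positive integers $d_1, \dots, d_n$, and polynomials $f_1, \dots, f_r$ over $\mathbb{F}_q$:

```latex
% Point counts where each coordinate lives in its own extension tower:
N_m \;=\; \#\bigl\{\, x = (x_1, \dots, x_n) \;:\;
        x_i \in \mathbb{F}_{q^{d_i m}},\;
        f_1(x) = \cdots = f_r(x) = 0 \,\bigr\}

% The partial zeta function packages these counts as usual:
Z_{d_1, \dots, d_n}(T) \;=\; \exp\!\Bigl(\sum_{m \ge 1} N_m \,\frac{T^m}{m}\Bigr)
```

Taking $d_1 = \cdots = d_n = 1$ recovers the classical zeta function of the variety; the abstract's point is that letting the $d_i$ differ changes the analytic behavior (e.g. the number of unit poles) in ways the classical theory does not predict.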
[AWLCO: All-Window Length Co-Occurrence](https://arxiv.org/abs/2011.14460)
Analyzing patterns in a sequence of events has applications in text analysis, computer programming, and genomics research. In this paper, we consider the all-window-length analysis model, which analyzes a sequence of events with respect to windows of all lengths. We study the exact co-occurrence counting problem for the all-window-length analysis model. Our first algorithm is an offline algorithm that counts all-window-length co-occurrences by performing multiple passes over a sequence and computing single-window-length co-occurrences. This algorithm has the time complexity $O(n)$ for each window length and thus a total complexity of $O(n^2)$ and the space complexity $O(|I|)$ for a sequence of size $n$ and an itemset of size $|I|$. We propose AWLCO, an online algorithm that computes all-window-length co-occurrences in a single pass with the expected time complexity of $O(n)$ and space complexity of $O(n\sqrt{|I|})$. Following this, we generalize our use case to patterns and propose an algorithm that computes all-window-length co-occurrence with expected time complexity $O(n|I|)$ and space complexity $O(n\sqrt{|I|}+e_{max}|I|)$, where $e_{max}$ is the length of the largest pattern.
Joshua Sobel, Noah Bertram, Chen Ding, Fatemeh Nargesian, Daniel Gildea
CPM 2021
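The single-window-length pass that the offline algorithm repeats for each length can be done in $O(n)$ time and $O(|I|)$ space with a sliding window. A sketch of that building block (my own illustrative implementation, not the paper's code):

```python
from collections import Counter

def cooccurrence_count(seq, items, w):
    """Number of length-`w` windows of `seq` that contain every item in `items`.
    One pass: slide the window, tracking how many target items it currently covers."""
    items = set(items)
    counts = Counter()   # multiplicity of each target item inside the window
    covered = 0          # number of target items appearing at least once
    total = 0
    for i, x in enumerate(seq):
        if x in items:
            counts[x] += 1
            if counts[x] == 1:
                covered += 1
        if i >= w:       # element falling out of the window
            y = seq[i - w]
            if y in items:
                counts[y] -= 1
                if counts[y] == 0:
                    covered -= 1
        if i >= w - 1 and covered == len(items):
            total += 1   # this window co-occurs the full itemset
    return total
```

For example, `cooccurrence_count("abcab", {"a", "b"}, 2)` counts the two windows `ab` at positions 0 and 3. Running this pass once per window length gives the $O(n^2)$ offline algorithm; AWLCO's contribution is collapsing all window lengths into a single online pass.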