
Postdoctoral Researcher
Cornell Tech, New York, NY

nephraim "at" cs.cornell.edu

Publications

Parallelizable Delegation from LWE

Cody Freitag, Rafael Pass, and Naomi Sirkin

TCC 2022

PDF

We present the first non-interactive delegation scheme for P with time-tight parallel prover efficiency based on standard hardness assumptions. More precisely, in a time-tight delegation scheme—which we refer to as a SPARG (succinct parallelizable argument)—the prover's parallel running time is \(t + \mathsf{polylog}(t)\), while using only \(\mathsf{polylog}(t)\) processors, where \(t\) is the length of the computation. (In other words, the proof is computed essentially in parallel with the computation, with only minimal additive overhead in time.)
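
For a sense of scale (illustrative numbers; the constant hidden in \(\mathsf{polylog}\) is not specified above), for a computation of length \(t = 2^{30}\) a time-tight prover finishes in time

\[ t + \mathsf{polylog}(t) \approx 2^{30} + 30^{c} \]

for some small constant \(c\): an additive overhead of a few thousand steps against a billion-step computation, whereas even a multiplicative overhead of \(2\) would cost another \(2^{30}\) steps.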

Our main result shows the existence of a publicly verifiable, non-interactive SPARG for P assuming the polynomial hardness of LWE. Our SPARG construction relies on the elegant recent delegation construction of Choudhuri, Jain, and Jin (FOCS '21) and combines it with techniques from Ephraim et al. (EUROCRYPT '20).

We next demonstrate how to make our SPARG time-independent—where the prover and verifier do not need to know the running time \(t\) in advance. As far as we know, this yields the first construction of a time-tight delegation scheme with time-independence under any hardness assumption.

We finally present applications of (time-independent) SPARGs to the construction of verifiable delay functions (VDFs) (Boneh et al., CRYPTO '18), resulting in the first VDF construction from standard polynomial hardness assumptions (namely, LWE and the minimal assumption of a sequentially hard function).

Non-Malleable Time-Lock Puzzles and Applications

Cody Freitag, Ilan Komargodski, Rafael Pass, and Naomi Sirkin

TCC 2021

PDF Video

Time-lock puzzles are a mechanism for sending messages "to the future", by allowing a sender to quickly generate a puzzle with an underlying message that remains hidden until a receiver spends a moderately large amount of time solving it. We introduce and construct non-malleable time-lock puzzles, which roughly guarantee that it is impossible to "maul" a puzzle into one for a related message without solving it.
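
For intuition, here is a minimal sketch (in Python, with toy parameters) of the classic Rivest-Shamir-Wagner repeated-squaring puzzle underlying time-lock constructions; the non-malleable variant constructed in the paper adds further structure on top of this basic mechanism, which is omitted here.

    import secrets
    from hashlib import sha256

    def generate(message: int, t: int, p: int, q: int):
        # The generator knows phi(N) = (p-1)(q-1), so it can reduce the
        # exponent 2^t modulo phi(N) and skip the t sequential squarings.
        N, phi = p * q, (p - 1) * (q - 1)
        x = secrets.randbelow(N - 2) + 2      # random base (w.h.p. coprime to N)
        y = pow(x, pow(2, t, phi), N)         # shortcut to x^(2^t) mod N
        key = int.from_bytes(sha256(y.to_bytes(32, 'big')).digest(), 'big')
        return (N, x, t, message ^ key)       # hide the message under H(y)

    def solve(puzzle):
        # Without phi(N), the only known strategy is t sequential squarings.
        N, x, t, c = puzzle
        y = x
        for _ in range(t):
            y = pow(y, 2, N)
        return c ^ int.from_bytes(sha256(y.to_bytes(32, 'big')).digest(), 'big')

    # Toy parameters; a real deployment would use a ~2048-bit RSA modulus.
    assert solve(generate(message=42, t=10_000, p=1000003, q=1000033)) == 42

Non-malleability rules out, for example, turning an honest bidder's puzzle for a bid \(b\) into a puzzle for \(b + 1\) without doing the sequential work, which is exactly the kind of attack the auction application below must prevent.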

Using non-malleable time-lock puzzles, we achieve the following applications:

  • The first fair non-interactive multi-party protocols for coin flipping and auctions in the plain model without setup.
  • Practically efficient fair multi-party protocols for coin flipping and auctions proven secure in the (auxiliary-input) random oracle model.

As a key step towards proving the security of our protocols, we introduce the notion of functional non-malleability, which protects against tampering attacks that affect a specific function of the related messages. To support an unbounded number of participants in our protocols, our time-lock puzzles satisfy functional non-malleability in the fully concurrent setting. We additionally show that standard (non-functional) non-malleability is impossible to achieve in the concurrent setting (even in the random oracle model).

SPARKs: Succinct Parallelizable Arguments of Knowledge

Naomi Ephraim, Cody Freitag, Ilan Komargodski, and Rafael Pass

EUROCRYPT 2020

PDF

Journal of the ACM 2022

We introduce the notion of a Succinct Parallelizable Argument of Knowledge (SPARK). This is an argument of knowledge with the following three efficiency properties for computing and proving a (non-deterministic, polynomial-time) parallel RAM computation that can be computed in parallel time \(T\) with at most \(p\) processors:

  • The prover's (parallel) running time is \(T + \mathrm{polylog}(T \cdot p)\). (In other words, the prover's running time is essentially \(T\) for large computation times!)
  • The prover uses at most \(p \cdot \mathrm{polylog}(T \cdot p)\) processors.
  • The communication and verifier complexity are both \(\mathrm{polylog}(T \cdot p)\).
The combination of all three is desirable, as it gives a way to leverage a moderate increase in parallelism to achieve near-optimal running time. We emphasize that even a factor-two overhead in the prover's parallel running time is not allowed.
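
Concretely (with illustrative numbers; the polylog constants are unspecified above), for a computation with \(T = 2^{30}\) and \(p = 8\), so that \(\log(T \cdot p) = 33\), the three bounds read

\[ \text{prover time} \approx 2^{30} + 33^{c}, \qquad \text{processors} \approx 8 \cdot 33^{c}, \qquad \text{communication and verifier time} \approx 33^{c} \]

for a small constant \(c\): a polylogarithmic blow-up in processors buys a prover that finishes essentially the moment the computation itself does.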

Our main contribution is a generic construction of SPARKs from any succinct argument of knowledge where the prover's parallel running time is \(T \cdot \mathrm{polylog}(T \cdot p)\) when using \(p\) processors, assuming collision-resistant hash functions. When suitably instantiating our construction, we achieve a four-round SPARK for any parallel RAM computation assuming only collision resistance. Additionally assuming the existence of a succinct non-interactive argument of knowledge (SNARK), we construct a non-interactive SPARK that also preserves the space complexity of the underlying computation up to \(\mathrm{polylog}(T \cdot p)\) factors.

We also show the following applications of non-interactive SPARKs. First, they immediately imply delegation protocols with near-optimal prover (parallel) running time. This, in turn, gives a way to construct verifiable delay functions (VDFs) from any sequential function. When the sequential function is also memory-hard, this yields the first construction of a memory-hard VDF.

Continuous Verifiable Delay Functions

Naomi Ephraim, Cody Freitag, Ilan Komargodski, and Rafael Pass

EUROCRYPT 2020

PDF Video

We introduce the notion of a continuous verifiable delay function (cVDF): a function \(g\) which is (a) iteratively sequential—meaning that evaluating the iteration \(g^{(t)}\) of \(g\) (on a random input) takes time roughly \(t\) times the time to evaluate \(g\), even with many parallel processors, and (b) (iteratively) verifiable—the output of \(g^{(t)}\) can be efficiently verified (in time that is essentially independent of \(t\)). In other words, the iterated function \(g^{(t)}\) is a verifiable delay function (VDF) (Boneh et al., CRYPTO '18), having the property that intermediate steps of the computation (i.e., \(g^{(t')}\) for \(t' < t\)) are publicly and continuously verifiable.
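
As a point of reference (a sketch, not the construction from the paper), the canonical candidate for an iteratively sequential function is repeated squaring modulo an RSA modulus of unknown factorization; what the naive loop below lacks, and what a cVDF provides, is efficient public verifiability of every intermediate state.

    def g(x: int, N: int) -> int:
        # One sequential step: a single modular squaring.
        return pow(x, 2, N)

    def iterate(x: int, t: int, N: int):
        # g^(t)(x) via t dependent squarings; extra processors do not help,
        # since step i+1 cannot start before step i finishes.
        states = []
        for _ in range(t):
            x = g(x, N)
            states.append(x)              # g^(1)(x), ..., g^(t)(x)
        return states

    N = 1000003 * 1000033                 # toy modulus, for illustration only
    print(iterate(7, 5, N)[-1])           # g^(5)(7) = 7^(2^5) mod N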

We demonstrate that cVDFs have intriguing applications: (a) they can be used to construct a public randomness beacon that only requires an initial random seed (and no further unpredictable sources of randomness), (b) they enable outsourceable VDFs, where any part of the VDF computation can be verifiably outsourced, and (c) they have deep complexity-theoretic consequences: in particular, they imply the existence of depth-robust moderately-hard Nash equilibrium problem instances, i.e., instances that can be solved in polynomial time yet require a high sequential running time.

Our main result is the construction of a cVDF based on the repeated squaring assumption and the soundness of the Fiat-Shamir (FS) heuristic for constant-round proofs. We highlight that when viewed as a (plain) VDF, our construction requires a weaker FS assumption than previous ones (earlier constructions require the FS heuristic either for proofs with super-logarithmically many rounds or for arguments).

Reception Capacity: Definitions, Game Theory, and Hardness

Michael Dinitz and Naomi Ephraim

ALGOSENSORS 2019

PDF

The capacity of wireless networks is a classic and important topic of study. Informally, the capacity of a network is simply the total amount of information which it can transfer. In the context of models of wireless radio networks, this has usually meant the total number of point-to-point messages which can be sent or received in one time step. This definition has seen intensive study in recent years, particularly with respect to more accurate models of radio networks such as the SINR model. This paper is motivated by an obvious fact: radio antennae are (at least traditionally) omnidirectional, and hence point-to-point connections are not necessarily the best definition of the true capacity of a wireless network. To fix this, we introduce a new definition of reception capacity as the maximum number of messages which can be received in one round, and show that this is related to a new optimization problem we call the Maximum Perfect Dominated Set (MaxPDS) problem. Using this relationship, we give a tight lower bound for approximating this capacity, which essentially matches a known upper bound. As our main result, we analyze this notion of capacity under game-theoretic constraints, giving tight bounds on the average quality achieved at any coarse correlated equilibrium (and thus at any Nash equilibrium). This immediately gives bounds on the average behavior of the natural distributed algorithm in which every transmitter uses online learning algorithms to learn whether to transmit.
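
For readers unfamiliar with the SINR model, the reception rule in question is the standard one sketched below (a hedged illustration; parameter names and values are conventional, not taken from the paper): receiver \(r\) hears sender \(s\) exactly when the signal-to-interference-plus-noise ratio clears a threshold \(\beta\).

    import math

    def received(s, r, senders, P=1.0, alpha=3.0, beta=1.5, noise=1e-9):
        # Standard SINR rule: signal power decays as d^(-alpha), and all
        # other simultaneous senders contribute interference.
        signal = P * math.dist(s, r) ** (-alpha)
        interference = sum(P * math.dist(u, r) ** (-alpha)
                           for u in senders if u != s)
        return signal / (noise + interference) >= beta

    def reception_count(links, senders):
        # One-round reception count for a fixed set of links: how many
        # (sender, receiver) pairs succeed when everyone transmits at once.
        return sum(received(s, r, senders) for s, r in links)

    senders = [(0.0, 0.0), (10.0, 0.0)]
    links = [((0.0, 0.0), (1.0, 0.0)), ((10.0, 0.0), (9.0, 0.0))]
    print(reception_count(links, senders))  # 2: both links clear the threshold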

On Perfect Correctness without Derandomization

Gilad Asharov, Naomi Ephraim, Ilan Komargodski, and Rafael Pass

PDF

We give a method to transform any indistinguishability obfuscator that suffers from correctness errors into an indistinguishability obfuscator that is perfectly correct, assuming hardness of Learning With Errors (LWE). The transformation requires sub-exponential hardness of the obfuscator and of LWE. Our technique also applies to eliminating correctness errors in general-purpose functional encryption schemes, but here it is sufficient to rely on the polynomial hardness of the given scheme and of LWE. Both of our results can be based generically on any perfectly correct, single-key, succinct functional encryption scheme (that is, a scheme supporting Boolean circuits where encryption time is a fixed polynomial in the security parameter and the message size), in place of LWE.

Previously, Bitansky and Vaikuntanathan (EUROCRYPT '17) showed how to achieve the same task using a derandomization-type assumption (concretely, the existence of a function with deterministic time complexity \(2^{O(n)}\) and non-deterministic circuit complexity \(2^{\Omega(n)}\)), which is non-game-based and non-falsifiable.

On the Complexity of Compressing Obfuscation

Gilad Asharov, Naomi Ephraim, Ilan Komargodski, and Rafael Pass

CRYPTO 2018

PDF Video

Journal of Cryptology 2022

Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct.

In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simplest-to-describe primitives known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters.
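
Quantitatively, one representative parameterization (following the XiO notion of Lin et al.; the paper's definition allows a broader regime of parameters) asks that for a circuit \(C\) on \(n\)-bit inputs,

\[ |\mathcal{O}(C)| \le 2^{n(1-\varepsilon)} \cdot \mathrm{poly}(\lambda, |C|) \quad \text{for some constant } \varepsilon > 0, \]

i.e., the obfuscator may run in time polynomial in \(2^n\), but its output must beat the trivial \(2^n\)-size truth table by a \(2^{\varepsilon n}\) factor.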

We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation.

First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives.

Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions.

Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.

Teaching

Spring 2017 (Cornell): Lead TA for CS 4820 - Introduction to Analysis of Algorithms

Fall 2016 (Cornell): TA for CS 5435 - Security and Privacy in the Wild