Naomi Ephraim

CS PhD Student
Cornell Tech, New York, NY

nephraim "at" cs.cornell.edu

I am a fourth-year Computer Science Ph.D. student at Cornell University, advised by Professor Rafael Pass. My research interests are in theoretical cryptography.

I received my B.S. in Computer Science, with a second major in Mathematics, from Johns Hopkins University, where I worked with Professor Michael Dinitz.

Activities

I help run the Cornell Crypto Seminar. If you are interested in giving a talk, please let me know!

Publications

SPARKs: Succinct Parallelizable Arguments of Knowledge

Naomi Ephraim, Cody Freitag, Ilan Komargodski, and Rafael Pass

EUROCRYPT 2020

Continuous Verifiable Delay Functions

Naomi Ephraim, Cody Freitag, Ilan Komargodski, and Rafael Pass

EUROCRYPT 2020

PDF

We introduce the notion of a continuous verifiable delay function (cVDF): a function \(g\) which is (a) iteratively sequential—meaning that evaluating the iteration \(g^{(t)}\) of \(g\) (on a random input) takes time roughly \(t\) times the time to evaluate \(g\), even with many parallel processors, and (b) (iteratively) verifiable—the output of \(g^{(t)}\) can be efficiently verified (in time that is essentially independent of \(t\)). In other words, the iterated function \(g^{(t)}\) is a verifiable delay function (VDF) (Boneh et al., CRYPTO '18), having the property that intermediate steps of the computation (i.e., \(g^{(t')}\) for \(t'\lt t\)) are publicly and continuously verifiable.

We demonstrate that cVDFs have intriguing applications: (a) they can be used to construct a public randomness beacon that only requires an initial random seed (and no further unpredictable sources of randomness), (b) they enable outsourceable VDFs, where any part of the VDF computation can be verifiably outsourced, and (c) they have deep complexity-theoretic consequences: in particular, they imply the existence of depth-robust moderately-hard Nash equilibrium problem instances, i.e., instances that can be solved in polynomial time yet require a high sequential running time.

Our main result is the construction of a cVDF based on the repeated squaring assumption and the soundness of the Fiat-Shamir (FS) heuristic for constant-round proofs. We highlight that when viewed as a (plain) VDF, our construction requires a weaker FS assumption than previous ones (earlier constructions require the FS heuristic for either super-logarithmic round proofs, or for arguments).
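
To make the iteratively sequential function concrete, here is a toy Python sketch of the repeated squaring candidate \(g(x) = x^2 \bmod N\). Everything in it (the tiny primes, the trapdoor-based check) is a hypothetical stand-in chosen for illustration; in particular, the actual cVDF achieves public verification without any trapdoor, via the Fiat-Shamir machinery described above.

import random
from math import gcd

# Hypothetical toy parameters: a real instantiation would use a ~2048-bit RSA
# modulus whose factorization nobody knows. Tiny primes keep the demo fast.
p, q = 1000003, 1000033
N, phi = p * q, (p - 1) * (q - 1)

def g(x):
    # One step of the base function: squaring modulo N.
    return (x * x) % N

def eval_iterated(x, t):
    # Evaluate g^(t)(x) the honest, slow way: t sequential squarings. Under
    # the repeated squaring assumption this cannot be sped up meaningfully,
    # even with many parallel processors. Every intermediate value g^(t') is
    # produced along the way, which is what continuous verifiability is about.
    for _ in range(t):
        x = g(x)
    return x

def eval_with_trapdoor(x, t):
    # Knowing phi(N) collapses the computation to one exponentiation, since
    # g^(t)(x) = x^(2^t mod phi(N)) mod N when gcd(x, N) = 1. This is only a
    # sanity check here; a cVDF must be verifiable by parties with no trapdoor.
    return pow(x, pow(2, t, phi), N)

t = 100_000
x0 = random.randrange(2, N)
while gcd(x0, N) != 1:
    x0 = random.randrange(2, N)
assert eval_iterated(x0, t) == eval_with_trapdoor(x0, t)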

Reception Capacity: Definitions, Game Theory, and Hardness

Michael Dinitz and Naomi Ephraim

ALGOSENSORS 2019

PDF

The capacity of wireless networks is a classic and important topic of study. Informally, the capacity of a network is simply the total amount of information that it can transfer. In the context of models of wireless radio networks, this has usually meant the total number of point-to-point messages which can be sent or received in one time step. This definition has seen intensive study in recent years, particularly with respect to more accurate models of radio networks such as the SINR model. This paper is motivated by an obvious fact: radio antennae are (at least traditionally) omnidirectional, and hence point-to-point connections are not necessarily the best definition of the true capacity of a wireless network. To fix this, we introduce a new definition of reception capacity as the maximum number of messages that can be received in one round, and show that this is related to a new optimization problem we call the Maximum Perfect Dominated Set (MaxPDS) problem. Using this relationship, we give a tight lower bound for approximating this capacity, which essentially matches a known upper bound. As our main result, we analyze this notion of capacity under game-theoretic constraints, giving tight bounds on the average quality achieved at any coarse correlated equilibrium (and thus at any Nash equilibrium). This immediately gives bounds on the average behavior of the natural distributed algorithm in which every transmitter uses online learning algorithms to learn whether to transmit.
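
For intuition on the underlying model, here is a minimal sketch (with hypothetical geometry, powers, and thresholds) of what "messages received in one round" means under the standard SINR model: a receiver decodes a sender when the sender's signal sufficiently dominates the noise plus the interference from all other simultaneous transmitters. Reception capacity asks for the choice of transmitters maximizing such receptions; this sketch only evaluates one fixed choice.

import math

ALPHA = 3.0   # path-loss exponent (hypothetical)
BETA = 1.5    # SINR threshold for successful decoding (hypothetical)
NOISE = 1e-9  # ambient noise
POWER = 1.0   # uniform transmit power

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def received(receiver, sender, transmitters):
    # Does `receiver` decode `sender` when all nodes in `transmitters`
    # (including `sender`) broadcast simultaneously?
    signal = POWER / dist(sender, receiver) ** ALPHA
    interference = sum(
        POWER / dist(t, receiver) ** ALPHA
        for t in transmitters if t != sender
    )
    return signal / (NOISE + interference) >= BETA

# Toy instance: three transmitters and three receivers on a line.
transmitters = [(0.0, 0.0), (4.0, 0.0), (8.0, 0.0)]
receivers = [(1.0, 0.0), (5.0, 0.0), (9.0, 0.0)]

# Antennae are omnidirectional, so each receiver hears every transmitter at
# once; count how many receivers decode at least one message this round.
count = sum(
    any(received(r, s, transmitters) for s in transmitters)
    for r in receivers
)
print(f"{count} of {len(receivers)} receivers decode a message")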

On the Complexity of Compressing Obfuscation

Gilad Asharov, Naomi Ephraim, Ilan Komargodski, and Rafael Pass

CRYPTO 2018

PDF

Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct.

In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple-to-describe primitives known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is, roughly, an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al. (PKC 2016) and Bitansky et al. (TCC 2016) by allowing for a broader regime of parameters.
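
For intuition, here is a minimal sketch of the trivial baseline that a compressing obfuscator must beat: outputting the circuit's entire truth table. The truth table reveals nothing beyond functionality (two functionally equivalent circuits map to identical outputs), but it has size \(2^n\); a compressing obfuscator must produce a noticeably smaller encoding. The toy functions below are hypothetical examples.

def truth_table_obfuscate(circuit, n):
    # The trivial "obfuscator": evaluate the circuit on all 2^n inputs.
    # This is perfectly secure but not compressing, since its output has
    # size exactly 2^n.
    return [circuit(x) for x in range(2 ** n)]

def evaluate(obf, x):
    # Evaluation is just a table lookup.
    return obf[x]

# Two syntactically different circuits computing the same function (XOR of
# the two low-order bits) obfuscate to the same object, which is exactly the
# indistinguishability guarantee (vacuously, here).
f1 = lambda x: (x & 1) ^ ((x >> 1) & 1)
f2 = lambda x: 1 if (x & 1) != ((x >> 1) & 1) else 0

n = 4
assert truth_table_obfuscate(f1, n) == truth_table_obfuscate(f2, n)
assert evaluate(truth_table_obfuscate(f1, n), 3) == f1(3)
print("truth-table size:", 2 ** n, "bits (compression must beat this)")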

We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation.

First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives.

Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions.

Lastly, we characterize the existence of compressing obfuscation with statistical security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.

Manuscripts

Non-Malleable Time-Lock Puzzles and Applications

Naomi Ephraim, Cody Freitag, Ilan Komargodski, and Rafael Pass

PDF

Time-lock puzzles are a mechanism for sending messages "to the future", by allowing a sender to quickly generate a puzzle with an underlying message that remains hidden until a receiver spends a moderately large amount of time solving it.
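
The paper builds on any plain time-lock puzzle; the classic construction of Rivest, Shamir, and Wagner is the canonical example, sketched below in toy form. The tiny primes and parameters are hypothetical, and this sketch provides none of the non-malleability discussed next.

import random
from math import gcd

# Generator's secret: the factorization of N. Toy-sized primes for speed.
p, q = 1000003, 1000033
N, phi = p * q, (p - 1) * (q - 1)

def generate(message, t):
    # Fast generation: the trapdoor phi(N) lets the sender compute
    # x^(2^t) mod N with a single modular exponentiation, then use it
    # to mask the message (assumed to be an integer smaller than N).
    while True:
        x = random.randrange(2, N)
        if gcd(x, N) == 1:
            break
    pad = pow(x, pow(2, t, phi), N)     # x^(2^t) mod N, computed quickly
    return (x, t, (message + pad) % N)  # the puzzle hides `message`

def solve(puzzle):
    # Slow solving: t sequential squarings, with no trapdoor and no
    # apparent way to exploit parallelism.
    x, t, ciphertext = puzzle
    for _ in range(t):
        x = (x * x) % N
    return (ciphertext - x) % N

msg = 424242
assert solve(generate(msg, 50_000)) == msg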

We introduce and construct a variant of a time-lock puzzle which is non-malleable. A non-malleable time-lock puzzle guarantees, roughly, that it is impossible to "maul" a puzzle into one for a related message without solving it. The security of this construction relies on the existence of any (plain) time-lock puzzle and it is proven secure in the auxiliary-input random oracle model. We show that our construction satisfies bounded concurrency and prove that it is impossible to obtain full concurrency. We additionally introduce a more general non-malleability notion, termed functional non-malleability, which protects against tampering attacks that affect a specific function of the related messages. We show that in many (useful) cases, our construction satisfies fully concurrent functional non-malleability.

We use our (functional) non-malleable time-lock puzzles to give efficient multi-party protocols for desirable tasks such as coin flipping and auctions. Our protocols are (1) fair, meaning that no malicious party can influence the output, (2) optimistically efficient, meaning that if all parties are honest, then the protocol terminates within two message rounds, and (3) publicly verifiable, meaning that anyone can quickly infer the outcome from the transcript of the protocol, without the need to perform a long computation phase. Our protocols support an unbounded number of participants and require no adversary-independent trusted setup, and they are the first to satisfy all of the above properties under any assumption. Security is proven under the repeated squaring assumption in the auxiliary-input random oracle model. Along the way, we introduce a publicly verifiable notion of time-lock puzzles, which is of independent interest. This notion allows the solver of the puzzle to compute the solution together with a proof which can be quickly verified by anyone.
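
To illustrate the optimistic flow, here is a sketch of coin flipping with time-lock puzzles: each party posts a puzzle hiding a random share, honest parties simply open, and an aborting party's share is forced open by solving its puzzle, so aborting cannot bias the coin. The toy puzzle below is a hypothetical stand-in and provides none of the non-malleability or public verifiability the actual protocol requires.

import random
from math import gcd

p, q = 1000003, 1000033
N, phi = p * q, (p - 1) * (q - 1)
T = 20_000  # delay parameter (hypothetical)

def commit(share):
    # Hide a share inside a repeated-squaring puzzle (fast, via the trapdoor).
    while True:
        x = random.randrange(2, N)
        if gcd(x, N) == 1:
            break
    pad = pow(x, pow(2, T, phi), N)
    return (x, (share + pad) % N)

def force_open(puzzle):
    # Recover an unopened share the slow way: T sequential squarings.
    x, c = puzzle
    for _ in range(T):
        x = (x * x) % N
    return (c - x) % N

# Round 1: each of three parties posts a puzzle hiding a random bit.
shares = [random.randrange(2) for _ in range(3)]
puzzles = [commit(s) for s in shares]

# Round 2 (optimistic case): parties 0 and 1 open honestly; party 2 aborts,
# so its share is forced open and the coin is still determined.
opened = [shares[0], shares[1], force_open(puzzles[2])]
coin = opened[0] ^ opened[1] ^ opened[2]
print("coin:", coin)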

On Perfect Correctness without Derandomization

Gilad Asharov, Naomi Ephraim, Ilan Komargodski, and Rafael Pass

PDF

We give a method to transform any indistinguishability obfuscator that suffers from correctness errors into an indistinguishability obfuscator that is perfectly correct, assuming hardness of Learning With Errors (LWE). The transformation requires sub-exponential hardness of the obfuscator and of LWE. Our technique also applies to eliminating correctness errors in general-purpose functional encryption schemes, but here it is sufficient to rely on the polynomial hardness of the given scheme and of LWE. Both of our results can be based generically on any perfectly correct, single-key, succinct functional encryption scheme (that is, a scheme supporting Boolean circuits where encryption time is a fixed polynomial in the security parameter and the message size), in place of LWE.

Previously, Bitansky and Vaikuntanathan (EUROCRYPT '17) showed how to achieve the same task using a derandomization-type assumption (concretely, the existence of a function with deterministic time complexity \(2^{O(n)}\) and non-deterministic circuit complexity \(2^{\Omega(n)}\)), which is non-game-based and non-falsifiable.

Teaching

Spring 2017: TA for CS 4820 - Introduction to Analysis of Algorithms

Fall 2016: TA for CS 5435 - Security and Privacy in the Wild