Distributed Computing Meets Game Theory:
Robust Mechanisms for Rational Secret Sharing and Multiparty Computation
Joe Halpern
Cornell University
Monday Sept. 4, 2006
4:00 PM, 655 Rhodes Hall
Abstract: Traditionally,
work on secret sharing and multiparty function computation in the cryptography
community, just like work in distributed computing, has divided the agents into
"good guys" and "bad guys". The good guys follow the protocol; the bad guys do
everything in their power to make sure it does not work. By way of contrast,
game theory has focused on "rational" agents, who try to maximize their utilities.
Here I try to combine these viewpoints, focusing
on two key problems in cryptography: secret sharing and multiparty computation.
Roughly speaking, m-out-of-n secret sharing allows someone to give "shares" of a secret
to n other players so that any m of them can reconstruct the secret. Implicitly,
we are assuming that there are m "good" agents, who will cooperate, and n-m "bad"
agents who may not. But what if instead we assume that all agents are rational: they
prefer getting the secret to not getting it but, secondarily, prefer
that as few as possible of the other agents get it? Vanessa Teague
and I showed that (1) there is no fixed-length secret-sharing protocol for rational
agents that survives iterated deletion of weakly dominated strategies, and (2) there is a
randomized protocol for secret sharing that survives iterated deletion and runs in
an expected constant number of rounds. We then proved analogous results for multiparty
computation: the problem of computing some function of individual agents' inputs
without revealing any information about the inputs (beyond what the output itself
reveals). More recently, Ittai Abraham, Danny Dolev, Rica
Gonen, and I considered what happens if we allow not just one agent to defect (as is the
case in Nash equilibrium) but coalitions of up to k agents. We showed that such
k-resilient Nash equilibria exist for secret sharing and multiparty computation.
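For readers new to the area, the m-out-of-n secret sharing discussed above is typically
realized by Shamir's polynomial scheme (not detailed in this abstract): the secret is the
constant term of a random degree-(m-1) polynomial over a prime field, and each share is one
point on that polynomial. A minimal, purely illustrative sketch:

```python
import random

P = 2**61 - 1  # a prime; all arithmetic is done modulo P

def share(secret, m, n):
    """Split `secret` into n shares so that any m of them recover it."""
    # Random polynomial f of degree m-1 with f(0) = secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(m - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any m shares."""
    secret = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        # Divide by den via the modular inverse (Fermat's little theorem).
        secret = (secret + y_i * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, m=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
assert reconstruct(shares[1:4]) == 123456789
```

With fewer than m shares, every candidate secret remains equally likely, which is the sense
in which the secret stays hidden from small coalitions.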
We further extended these results so that they hold even in the presence of up to t
players with "unexpected" utilities, who might not act the way you expect.
(These players can be viewed as "faulty".) Finally, we showed that our techniques can
be used to implement mediators; if something can be done with a mediator,
then it can be done without one (under the appropriate conditions).
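To get a feel for the impossibility result (1), consider the last round of a naive
3-out-of-3 exchange in which each player simultaneously decides whether to send their
share. With numeric utilities chosen here purely for illustration, to match the stated
preference order (learning the secret matters most; each other agent who learns it is a
small penalty), withholding one's share weakly dominates sending it:

```python
from itertools import product

def learns(i, sends):
    # In a 3-out-of-3 exchange, player i reconstructs the secret
    # iff both other players sent their shares (i always holds its own).
    return all(sends[j] for j in range(3) if j != i)

def utility(i, sends):
    # Illustrative utilities matching the stated preferences:
    # learning the secret is worth 4; each other learner costs 1.
    return 4 * learns(i, sends) - sum(learns(j, sends) for j in range(3) if j != i)

# Withholding weakly dominates sending for player 0:
# never worse, and strictly better against some opponent play.
strict = False
for others in product([True, False], repeat=2):
    u_send = utility(0, (True,) + others)
    u_hold = utility(0, (False,) + others)
    assert u_hold >= u_send
    strict = strict or u_hold > u_send
assert strict
```

Since this holds in the last round, backward induction (formally, iterated deletion of
weakly dominated strategies) unravels any protocol with a fixed, commonly known length,
which is what motivates the randomized, expected-constant-round protocol mentioned above.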
The talk is self-contained. No background in game theory or distributed computing will be assumed.