Abstracts

Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference

Joseph Y. Halpern (editor)

This is the proceedings of the first TARK conference, with papers by (among others) Aumann, Hintikka, Fagin, Moses, and Vardi. It includes my first overview on knowledge.

Published by Morgan Kaufmann, 1986.

Reasoning About Knowledge

Joseph Y. Halpern, Ronald Fagin, Yoram Moses, and Moshe Y. Vardi

This book covers much of my research on knowledge. Ordering information is available from MIT Press.

Published by MIT Press, 1995. A revised paperback edition was published in 2003.

Reasoning About Uncertainty

Joseph Y. Halpern

This book covers much of my research on uncertainty. Ordering information is available from MIT Press.

Published by MIT Press, 2003. A slightly revised paperback edition was published in 2005. A second edition was published in 2017.

Uncertainty in Artificial Intelligence: Proceedings of Twentieth Conference

Max Chickering and Joseph Y. Halpern (editors)

Published by AUAI Press, 2004

Handbook of Epistemic Logic

Hans van Ditmarsch, Joseph Y. Halpern, Wiebe van der Hoek, and Barteld Kooi (editors)

Published by College Publications, 2015

Towards a theory of knowledge and ignorance: preliminary report

Joseph Y. Halpern and Yoram Moses

Situations in which the information about a given domain is partial are common in many AI applications. In planning and analysis of scenarios involving partial information, the state of knowledge of an intelligent agent in such circumstances becomes important. This paper addresses the problem of characterizing this state of knowledge, with the emphasis on the single-agent case. We give a number of equivalent ways to characterize this state of knowledge, as well as an algorithm for computing the formulas that are true in this state. The relationship between this work and related works by Stark, Konolige, and Moore is discussed.

In Logics and Models of Concurrent Systems (ed. K. Apt), Springer-Verlag, 1985, pp. 459-476 and in Proceedings of the Workshop on Non-Monotonic Reasoning, 1984, pp. 125-143. The paper is available in postscript and pdf. There never was a journal version, but A theory of knowledge and ignorance for many agents does the multi-agent extension that we were hoping to do in the journal version.

Using Reasoning About Knowledge to Analyze Distributed Systems

Joseph Y. Halpern

An overview of work on using reasoning about knowledge to understand and analyze distributed systems.

In Annual Reviews of Computer Science, Vol. 2 (ed. J. F. Traub, B. J. Grosz, B. W. Lampson, and N. J. Nilsson), Annual Reviews Inc., 1987, pp. 37-68. A version is available in pdf.

I'm OK if you're OK: On the notion of trusting communication

We consider the issue of what an agent or a processor needs to know in order to know that its messages are true. This may be viewed as a first step to a general theory of cooperative communication in distributed systems. An honest message is one that is known to be true when it is sent (or said). If every message that is sent is honest, then of course every message that is sent is true. Various conditions weaker than honesty are investigated with the property that provided every message sent satisfies the condition, then every message sent is true.

In Philosophical Logic and Artificial Intelligence (ed. R. H. Thomason), Kluwer, 1989, pp. 9-34 and Journal of Philosophical Logic 17:4, 1988, pp. 329-354. A preliminary version appears in Proceedings of the Second IEEE Symposium on Logic in Computer Science, 1987, pp. 280-292.

Model checking vs. theorem proving: a manifesto

Joseph Y. Halpern and Moshe Y. Vardi

We argue that rather than representing an agent's knowledge as a collection of formulas, and then doing theorem proving to see if a given formula follows from an agent's knowledge base, it may be more useful to represent this knowledge by a semantic model, and then do model checking to see if the given formula is true in that model. We discuss how to construct a model that represents an agent's knowledge in a number of different contexts, and then consider how to approach the model-checking problem.
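
The contrast drawn here can be made concrete with a toy epistemic model checker: given a finite Kripke structure, checking whether an agent knows a fact is a simple recursive evaluation rather than a proof search. The sketch below is purely illustrative; the worlds, the agent name, and the tuple-based formula syntax are invented for this example and are not taken from the paper.

```python
# Minimal sketch (not from the paper): model checking an epistemic formula
# in a finite Kripke structure, instead of proving it from a knowledge base.

WORLDS = {"w1", "w2", "w3"}
VAL = {"w1": {"p"}, "w2": {"p"}, "w3": set()}    # atomic propositions true at each world
ACC = {"alice": {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2"), ("w3", "w3")}}

def holds(world, formula):
    """Evaluate a formula at a world.  Formulas are nested tuples:
    ("atom", p), ("not", f), ("and", f, g), ("K", agent, f)."""
    kind = formula[0]
    if kind == "atom":
        return formula[1] in VAL[world]
    if kind == "not":
        return not holds(world, formula[1])
    if kind == "and":
        return holds(world, formula[1]) and holds(world, formula[2])
    if kind == "K":   # the agent knows f: f holds at every world the agent considers possible
        agent, f = formula[1], formula[2]
        return all(holds(v, f) for (u, v) in ACC[agent] if u == world)
    raise ValueError(f"unknown formula kind: {kind}")

print(holds("w1", ("K", "alice", ("atom", "p"))))   # True: p holds at both worlds alice considers possible
print(holds("w3", ("K", "alice", ("atom", "p"))))   # False: p fails at w3 itself
```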

In Artificial Intelligence and Mathematical Theory of Computation (Papers in Honor of John McCarthy) (ed. V. Lifschitz), Academic Press, 1991, pp. 151-176. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version (which includes some material not in this book) appears in Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference (J. A. Allen, R. Fikes, and E. Sandewall, eds.), 1991, pp. 325-334.

A new approach to updating beliefs

Ronald Fagin and Joseph Y. Halpern

We define a new notion of conditional belief, which plays the same role for Dempster-Shafer belief functions as conditional probability does for probability functions. Our definition is different from the standard definition given by Dempster, and avoids many of the well-known problems of that definition. Just as the conditional probability Pr(.|B) is a probability function which is the result of conditioning on B being true, so too our conditional belief function Bel(.|B) is a belief function which is the result of conditioning on B being true. We define the conditional belief as the lower envelope (that is, the inf) of a family of conditional probability functions, and provide a closed-form expression for it. An alternate way of understanding our definition of conditional belief is provided by considering ideas from an earlier paper, where we connect belief functions with inner measures. In particular, we show here how to extend the definition of conditional probability to nonmeasurable sets, in order to get notions of inner and outer conditional probabilities, which can be viewed as best approximations to the true conditional probability, given our lack of information. Our definition of conditional belief turns out to be an exact analogue of our definition of inner conditional probability.
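
For reference, the definition can be stated in symbols as follows (my transcription; see the paper for the precise statement and conditions). Writing P_Bel for the set of probability measures that dominate Bel, and Pl for the plausibility function, the conditional belief is the lower envelope of the conditional probabilities and admits a closed form:

```latex
% Lower-envelope definition of conditional belief and its commonly cited closed form
% (transcribed here for reference; defined when Pl(B) > 0).
\[
  \mathrm{Bel}(A \mid B)
  \;=\; \inf_{\mu \in \mathcal{P}_{\mathrm{Bel}}} \mu(A \mid B)
  \;=\; \frac{\mathrm{Bel}(A \cap B)}{\mathrm{Bel}(A \cap B) + \mathrm{Pl}(\overline{A} \cap B)},
  \qquad \mathrm{Pl}(A) = 1 - \mathrm{Bel}(\overline{A}).
\]
```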

In Uncertainty in Artificial Intelligence 6, (eds. P. P. Bonissone, M. Henrion, L. N. Kanal, and J. F. Lemmer), 1991, pp. 347-374. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 6th Conference on Uncertainty in AI, 1990, pp. 317-325.

Reasoning about knowledge: a survey circa 1991

Joseph Y. Halpern

This is my second overview on knowledge. It's an update of the 1986 overview, and a predecessor of the 1995 survey.

Appears in both Encyclopedia of Computer Science and Technology, Vol. 27, Supplement 12 (ed. A. Kent and J. G. Williams), Marcel Dekker, 1993, pp. 275-296, and Encyclopedia of Microcomputers, Vol. 14 (ed. A. Kent and J. G. Williams), Marcel Dekker, 1994, pp. 287-308.

Reasoning about knowledge: a survey

Joseph Y. Halpern

In this survey, I attempt to identify and describe some of the common threads that tie together work in reasoning about knowledge in such diverse fields as philosophy, economics, linguistics, artificial intelligence, and theoretical computer science, with particular emphasis on work of the past five years, particularly in computer science.

In Handbook of Logic in Artificial Intelligence and Logic Programming, Vol. 4, D. Gabbay, C. J. Hogger, and J. A. Robinson, eds., Oxford University Press, 1995, pp. 1-34. A version of the paper similar to the published version is available in postscript and pdf. This is an update of two earlier surveys, one from 1986 and one from 1991. Of course, this material is all covered much more thoroughly in our book.

A logical approach to reasoning about uncertainty: a tutorial

Joseph Y. Halpern

I consider a logical framework for modeling uncertainty, based on the use of possible worlds, that incorporates knowledge, probability, and time. This turns out to be a powerful approach for modeling many problems of interest. I show how it can be used to give insights into (among other things) several well-known puzzles.

In Discourse, Interaction, and Communication, X. Arrazola, K. Korta, and F. J. Pelletier, eds., Kluwer, 1998, pp. 141-155. The paper is available in postscript and pdf.

Axiomatic definitions of programming languages: a theoretical assessment

Albert R. Meyer and Joseph Y. Halpern

A precise definition is given of how partial correctness or termination assertions serve to define the semantics of program schemes. Assertions involving only formulas of first-order predicate calculus are proved capable of defining program scheme semantics, and effective axiom systems for deriving such assertions are described. Such axiomatic definitions are possible despite the limited expressive power of predicate calculus.

In Journal of the ACM 29:2, April, 1982, pp. 555-576. A preliminary version appears in Proceedings of the 7th Annual ACM Symposium on Principles of Programming Languages, 1980, pp. 203-212. The journal paper also includes material from Joseph Y. Halpern and Albert R. Meyer, Axiomatic definitions of programming languages, II, Proceedings of 8th Annual ACM Symposium on Principles of Programming Languages, 1981, pp. 139-148.

Deterministic propositional dynamic logic: finite models, complexity, and completeness

Mordechai Ben-Ari, Joseph Y. Halpern, and Amir Pnueli

Let p be a formula in deterministic propositional dynamic logic. A decision procedure for the satisfiability of p is given along with a construction of a finite model for every satisfiable p. The decision procedure runs in deterministic exponential time, and both its running time and the size of the constructed model are exponential in the length of p. Finally, a complete axiomatization for deterministic propositional dynamic logic is given, based on the Segerberg axioms for propositional dynamic logic.

In Journal of Computer and Systems Science 25:3, 1982, pp. 402-417. A preliminary version with the title ``Finite models of deterministic propositional dynamic logic'', appears in Proceedings of the 8th International Colloquium on Automata, Languages, and Programming, 1981, pp. 249-263.

The propositional dynamic logic of deterministic, well-structured programs

Joseph Y. Halpern and John H. Reif

We consider a restricted propositional dynamic logic, Strict Deterministic Propositional Dynamic Logic (SDPDL), which is appropriate for reasoning about deterministic well-structured programs. In contrast to PDL, for which the validity problem is known to be complete in deterministic exponential time, the validity problem for SDPDL is shown to be polynomial-space complete. We also show that SDPDL is less expressive than PDL, and give a complete axiomatization for it. The results rely on structure theorems for models of satisfiable SDPDL formulas, and the proofs give insight into the effects of nondeterminism on intractability and expressiveness in program logics.

In Theoretical Computer Science 27, 1983, pp. 127-165. A preliminary version appears in Proceedings of the 22nd Annual Symposium on the Foundations of Computer Science, 1981, pp. 322-334.

Effective axiomatizations of Hoare Logics

Edmund M. Clarke, Joseph Y. Halpern, and Steven M. German

For a wide class of programming languages P and expressive interpretations I, it is shown that there exist sound and relatively complete Hoare logics for both partial correctness and termination assertions. In fact, under mild assumptions on P and I it is shown that the assertions true in I are uniformly decidable in the theory of I (Th(I)) iff the halting problem for P is decidable for finite interpretations. Moreover, the set of true terminations is uniformly recursively enumerable in Th(I) even if the halting problem for P is not decidable for finite interpretations. Since total-correctness assertions coincide with termination assertions for deterministic programming languages, this last result unexpectedly suggests that good axiom systems for total correctness may exist for a wider spectrum of languages than is the case for partial correctness.

In Journal of the ACM 30:3, 1983, pp. 612-637. A preliminary version appears in Proceedings of the 9th Annual ACM Symposium on Principles of Programming Languages, 1982, pp. 309-321.

Deterministic process logic is elementary

Joseph Y. Halpern

Process Logic (PL) is a language for reasoning about the behavior of a program during a computation, while Propositional Dynamic Logic (PDL) can only reason about the input-output states of a program. Nevertheless, we show that to each PL model M, there corresponds in a natural way a PDL model M' such that each path in M is represented by a state in M'. Moreover, to every PL formula p, there corresponds a PDL formula p', whose length is linear in that of p, such that p is true of a path in M iff p' is true of the state which represents that path in M'. We then show that p is satisfiable iff p' is satisfiable in a finite PDL model with special properties which we call a pseudomodel. The size of the pseudomodel is in general nonelementary but is bounded by both the depth of nesting of the suf operator and the alternation of the suf and diamond operators. However, for DPL, a deterministic version of PL, the pseudomodel has exponential size, giving us a deterministic exponential time procedure for deciding DPL validity. These results suggest that it is the interaction between nondeterministic programs and the suf operator that makes the general decision problem for PL so difficult.

In Information and Control 57:1, 1983, pp. 56-89. A preliminary version appears in Proceedings of 23rd Annual Symposium on the Foundations of Computer Science, 1982, pp. 204-216.

Decision procedures and expressiveness in the temporal logic of branching time

E. Allen Emerson and Joseph Y. Halpern

We consider the computation tree logic (CTL) proposed by Emerson and Clarke, which extends the unified time logic (UB) of Ben-Ari, Manna, and Pnueli by adding an until operator. It is established that CTL has the small model property by showing that any satisfiable CTL formula is satisfiable in a small finite model obtained from the small ``pseudomodel'' resulting from the Fischer-Ladner quotient construction. Then an exponential-time algorithm is given for deciding satisfiability in CTL, and the axiomatization of UB given by Ben-Ari, Manna, and Pnueli is extended to a complete axiomatization for CTL. Finally, the relative expressive power of a family of temporal logics obtained by extending or restricting the syntax of UB and CTL is studied.
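
For readers who have not seen these logics, the role of the until operator can be illustrated with a typical CTL property, written here in now-standard CTL notation (my example, not one from the paper):

```latex
% A typical CTL formula using the until operator:
% along every path, whenever a request occurs, it stays pending until it is granted.
\[
  \mathrm{AG}\bigl(\mathit{req} \rightarrow \mathrm{A}[\,\mathit{pending} \;\mathrm{U}\; \mathit{grant}\,]\bigr)
\]
```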

In Journal of Computer and Systems Science 30:1, 1985, pp. 1-24; available in pdf. A preliminary version appears in Proceedings of the 14th ACM Symposium on Theory of Computing, 1982, pp. 169-180.

Optimal precision in the presence of uncertainty

Joseph Y. Halpern, Nimrod Megiddo, and Ashfaq Munshi

We consider the problem of achieving coordinated actions in a real-time distributed system. In particular, we consider how closely (in terms of real time) processors can be guaranteed to perform a particular action, in a system where message transmission is guaranteed, but there is some uncertainty in message transmission time. We present an algorithm to achieve optimal precision in arbitrary networks. In networks where clocks run at the rate of real time, the optimal precision achievable in a network is exactly how tightly clocks can be guaranteed to be synchronized.
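
As a back-of-envelope illustration of why transmission-time uncertainty limits precision (this is the textbook single-link calculation, not the paper's general network result): if a message carrying the sender's clock reading is known to take between L and L + u time units, then on receipt the sender's clock lies somewhere in an interval of width u, so the midpoint estimate is off by at most u/2.

```latex
% Single-link clock reading with transmission delay in [L, L+u] (illustrative only).
\[
  \text{sender's clock at receipt} \in [\,T_{\mathrm{sent}} + L,\; T_{\mathrm{sent}} + L + u\,],
  \qquad
  \bigl|\text{midpoint estimate} - \text{true value}\bigr| \le \tfrac{u}{2}.
\]
```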

In Journal of Complexity 1, 1985, pp. 170-196. A preliminary version appears in Proceedings of 17th ACM Symposium on the Theory of Computing, 1985, pp. 346-355.

Equations between regular terms and an application to process logic

Ashok K. Chandra, Joseph Y. Halpern, Albert R. Meyer, and Rohit Parikh

Regular terms with the Kleene operations union, concatenation, and * can be thought of as operators on languages, generating other languages. An equation T1 = T2 between two such terms is said to be satisfiable just in case languages exist which make this equation true. We show that the satisfiability problem even for *-free regular terms is undecidable. Similar techniques are used to show that a very natural extension of the Process Logic of Harel, Kozen, and Parikh is undecidable.

In SIAM J. Computing 14:4, 1985, pp. 935-942. A preliminary version appears in Proceedings of 13th ACM Symposium on Theory of Computing, 1981, pp. 384-390.

``Sometimes'' and ``not never'' revisited: on branching vs. linear time

E. Allen Emerson and Joseph Y. Halpern

The differences between and appropriateness of branching versus linear time temporal logic for reasoning about concurrent programs are studied. These issues have been previously considered by Lamport. To facilitate a careful examination of these issues, a language, CTL*, in which a universal or existential path quantifier can prefix an arbitrary linear time assertion, is defined. The expressive power of a number of sublanguages is then compared. CTL* is also related to the logics MPL of Abrahamson and PL of Harel, Kozen, and Parikh. The paper concludes with a comparison of the utility of branching and linear time temporal logics.

In Journal of the ACM 33:1, 1986, pp. 151-178. A preliminary version appears in Proceedings of the 10th Annual ACM Symposium on Principles of Programming Languages, 1983, pp. 127-140.

On time versus space III

Joseph Y. Halpern, Michael C. Loui, Albert R. Meyer, and Daniel Weise

Paul and Reischuk devised space efficient simulations of logarithmic cost random access machines and multidimensional Turing machines. We simplify their general space reduction technique and extend it to other computational models, including pointer machines, which model computations on graphs and data structures. Every pointer machine of time complexity T(n) can be simulated by a pointer machine of space complexity O(T(n)/log T(n)).

In Math. Systems Theory 19, 1986, pp. 13-28.

On the possibility and impossibility of achieving clock synchronization

Danny Dolev, Joseph Y. Halpern, and H. Raymond Strong

It is known that clock synchronization can be achieved in the presence of faulty processors as long as the nonfaulty processors are connected, provided that some authentication technique is used. Without authentication the number of faults that can be tolerated has been an open question. Here we show that if we restrict logical clocks to running within some linear function of real time, then clock synchronization is impossible, without authentication, when one-third or more of the processors are faulty. However, if there is a bound on the rate at which a processor can generate messages, then we show that clock synchronization is achievable, without authentication, as long as the faults do not disconnect the network. Finally, we provide a lower bound on the closeness to which simultaneity can be achieved in the network as a function of the transmission and processing delay properties of the network.

In Journal of Computer and Systems Science 32:2, 1986, pp. 230-250. A preliminary version appears in Proceedings of the 16th ACM Symposium on the Theory of Computing, 1984, pp. 504-511.

Cheating husbands and other stories: a case study in knowledge, action, and communication

Yoram Moses, Danny Dolev, and Joseph Y. Halpern

The relationship between knowledge and action is a fundamental one: a processor in a computer network (or a robot or a person, for that matter) should base its actions on the knowledge (or information) it has. One of the main uses of communication is passing around information that may eventually be required by the receiver in order to decide upon subsequent actions. Understanding the relationship between knowledge, action, and communication is fundamental to the design of computer network protocols, intelligent robots, etc. By looking at a number of variants of the cheating husbands puzzle, we illustrate the subtle relationship between knowledge, communication, and action in a distributed environment.

In Distributed Computing 1:3, 1986, pp. 167-176. A preliminary version appears in Proceedings of the 4th ACM Symposium on Principles of Distributed Computing, 1985, pp. 215-223. Available in pdf.

Taken by surprise: the paradox of the surprise test revisited

Joseph Y. Halpern and Yoram Moses.

We reexamine the surprise test paradox (also called the hangman paradox) and translate it into a formal logic with a fixed-point operator and a provability operator. We consider four possible translations of the paradox. The first is contradictory, the second is consistent with the test being given any day of the week including the last, the third rules out the last day (but no others), while the fourth does not even admit of a reasonable semantic interpretation, and is paradoxical in that it is consistent if and only if it is inconsistent!

In Journal of Philosophical Logic 15:3, 1986, pp. 281-304. Available in pdf.

A new look at fault-tolerant network routing

Danny Dolev, Joseph Y. Halpern, Barbara B. Simons, and H. Raymond Strong

We model a communication network as a graph in which a processor is a node and a communication link is an edge. A routing for such a network is a fixed path, or route, between each pair of nodes. Given a network with a predefined routing, we study the effects of faulty components on the routing. Of particular interest is the number of routes along which a message must travel between any two non-faulty nodes. This problem is analyzed for specific families of graphs and for classes of routings. We also give some bounds for general versions of the problem. Finally, we conclude with one of the most important contributions of this paper, a list of interesting and apparently difficult open problems.

In Information and Computation 72:3, 1987, pp. 180-196. A preliminary version appears in Proceedings of the 16th ACM Symposium on the Theory of Computing, 1984, pp. 526-535.

A logic to reason about likelihood

Joseph Y. Halpern and Michael O. Rabin

We present a logic LL which uses a modal operator L to help capture the notion of likely. Despite the fact that no use is made of numbers, LL can capture many of the properties of likelihood in an intuitively appealing way. Using standard techniques of modal logic, we give a complete axiomatization for LL and show that satisfiability of LL formulas can be decided in exponential time. We discuss how the logic might be used in areas where decision making is crucial, such as management and medical diagnosis, and conclude by using LL to give a formal proof of correctness of a protocol for exchanging secrets.

In Artificial Intelligence 32:3, 1987, pp. 379-405; a version is available in pdf. A preliminary version appears in Proceedings of the 15th ACM Symposium on the Theory of Computing, 1983, pp. 310-319. See also Likelihood, probability, and knowledge.

Belief, awareness, and limited reasoning

Ronald Fagin and Joseph Y. Halpern

Several new logics for belief and knowledge are introduced and studied, all of which have the property that agents are not logically omniscient. In particular, in these logics, the set of beliefs of an agent does not necessarily contain all valid formulas. Thus, these logics are more suitable than traditional logics for modeling beliefs of humans (or machines) with limited reasoning capabilities. Our first logic is essentially Levesque's logic of implicit and explicit belief, extended to allow multiple agents and higher-level belief (i.e., beliefs about beliefs). Our second logic deals explicitly with ``awareness'', where, roughly speaking, it is necessary to be aware of a concept before one can have beliefs about it. Our third logic gives a model of ``local reasoning'', where an agent is viewed as a ``society of minds'', each with its own cluster of beliefs, which may contradict each other.

In Artificial Intelligence 34, 1988, pp. 39-76. A preliminary version appears in Proceedings of the 9th International Joint Conference on Artificial Intelligence (IJCAI 85), 1985, pp. 491-501. The full paper is also available online.

The complexity of reasoning about knowledge and time, I: lower bounds

Joseph Y. Halpern and Moshe Y. Vardi

We study the propositional modal logic of knowledge and time for distributed systems. We consider a number of logics (ninety-six in all!), which vary according to the choice of language and the assumptions made on the underlying system. The major parameters in the language are whether there is a common knowledge operator, whether we reason about the knowledge of one or more than one processor, and whether our temporal operators are branching or linear. The assumptions on distributed systems that we consider are: whether or not processors forget, whether or not processors learn, whether or not time is synchronous, and whether or not there is a unique initial state in the system. We completely characterize the complexity of the validity problem for all the logics we consider. This paper focuses on lower bounds; a sequel will deal with the corresponding upper bounds. Typical results include a Π¹₁-completeness result for the language with common knowledge with respect to systems where processors do not forget, and a corresponding non-elementary-time result for the language without common knowledge. It is shown that, in general, the assumption that processors do not forget or do not learn greatly increases the complexity of reasoning about knowledge and time.

In Journal of Computer and Systems Science 38:1, 1989, pp. 195-237. Preliminary versions of some of the material in this paper can be found in ``The complexity of reasoning about knowledge and time'', Proceedings of 18th ACM Symposium on the Theory of Computing, 1986, pp. 304-314 and ``Reasoning about knowledge and time in asynchronous systems'', Proceedings of the 20th ACM Symposium on Theory of Computing, 1988, pp. 53-65. We hope to bring out part II on upper bounds in the reasonably near future.

Likelihood, probability, and knowledge

Joseph Y. Halpern and David A. McAllester

The modal logic LL was introduced by Halpern and Rabin as a means of doing qualitative reasoning about likelihood. Here the relationship between LL and probability theory is examined. It is shown that there is a way of translating probability assertions into LL in a sound manner, so that LL in some sense can capture the probabilistic interpretation of likelihood. However, the translation is subtle; several more obvious attempts are shown to lead to inconsistencies. We also extend LL by adding modal operators for knowledge. This allows us to reason about the interaction between knowledge and likelihood. The propositional version of the resulting logic LLK is shown to have a complete axiomatization and to be decidable in exponential time, provably the best possible.

In Computational Intelligence, 5, pp. 151-160, 1989; a version is available in pdf. A preliminary version appears in AAAI-84 (Proceedings of the Fourth National Conference on Artificial Intelligence), 1984, pp. 137-141.

Reasoning about procedures as parameters in the language L4

Steven M. German, Edmund M. Clarke, and Joseph Y. Halpern

We provide a sound and relatively complete axiom system for partial correctness assertions in an Algol-like language with procedures passed as parameters, but with no global values (known traditionally as L4). The axiom system allows us to reason syntactically about programs and to construct proofs for assertions about complicated programs from proofs of assertions about their components. Such an axiom system has been sought by a number of researchers, but no previously published solution has been entirely satisfactory. Our axiom system extends the natural style of reasoning used in previous Hoare systems to programs with procedures of higher type. The details of the proof that our axiom system is relatively complete in the sense of Cook may be of independent interest, because we introduce results about expressiveness for programs with higher types that are useful beyond the immediate problem of the language L4. We also prove a new incompleteness result that applies to our logic and to similar Hoare logics.

In Information and Computation, 83:3, 1989, pp. 265-359 (the whole volume). Some material in this paper appeared in preliminary form in ``True relative completeness of an axiom system for L4'', Proceedings of the IEEE Symposium on Logic in Computer Science, 1986, pp. 11-25 and ``Reasoning about procedures with parameters'' Proceedings of the CMU Logics of Programs Conference, Lecture Notes in Computer Science, 1983, pp. 206-220.

Modelling knowledge and action in distributed systems

Joseph Y. Halpern and Ronald Fagin

We present a formal model that captures the subtle interaction between knowledge and action in distributed systems. We view a distributed system as a set of runs, where a run is a function from time to global states and a global state is a tuple consisting of an environment state and a local state for each process in the system. This model is a generalization of those used in many previous papers. Actions in this model are associated with functions from global states to global states. A protocol is a function from local states to actions. We extend the standard notion of a protocol by defining knowledge-based protocols, ones in which a process' actions may depend explicitly on its knowledge. Knowledge-based protocols provide a natural way of describing how actions should take place in a distributed system. Finally, we show how the notion of one protocol implementing another can be captured in our model.
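
A minimal rendering of this vocabulary, with invented states and actions (the model in the paper is considerably more general): a run maps times to global states, a standard protocol maps a local state to an action, and a knowledge-based protocol may test what the process knows, i.e., what holds at every point of the system where it has the same local state.

```python
# Illustrative sketch of the runs-and-systems vocabulary; the states, actions, and
# example system are invented for this sketch and are not from the paper.

# A global state is a tuple (environment state, local state of p1, local state of p2);
# a run is a list of global states indexed by time; the system is a set of runs.
SYSTEM = [
    [("ok", "idle", "idle"), ("ok", "got_msg", "idle")],
    [("ok", "idle", "idle"), ("lost", "idle", "idle")],
]

def knows(system, index, local_state, fact):
    """A process knows `fact` at a local state if `fact` holds at every point
    of the system where the process has that local state."""
    points = [(run, t) for run in system for t in range(len(run))
              if run[t][index] == local_state]
    return all(fact(run[t]) for (run, t) in points)

def protocol_p1(local_state):                 # a standard protocol: local state -> action
    return "ack" if local_state == "got_msg" else "wait"

def kb_protocol_p1(local_state):              # a knowledge-based protocol: action depends on knowledge
    delivered = lambda g: g[0] == "ok"
    return "proceed" if knows(SYSTEM, 1, local_state, delivered) else "wait"

print(protocol_p1("got_msg"))      # ack
print(kb_protocol_p1("got_msg"))   # proceed: every point with this local state has environment "ok"
print(kb_protocol_p1("idle"))      # wait: some point with local state "idle" has environment "lost"
```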

In Distributed Computing, 3:4, 1989, pp. 159-177. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of Concurrency-88, 1988, pp. 18-32; some material also appears in ``A formal model of knowledge, action, and communication in distributed systems'', Proceedings of the 4th ACM Symposium on Principles of Distributed Computing, 1985, pp. 224-236.

Completeness of rewrite rules and rewrite strategies for FP

Joseph Y. Halpern, John H. Williams and Edward L. Wimmers

We consider languages whose operational semantics is given by a set of rewrite rules. For such languages, it is important to be able to determine that there are enough rules to be able to compute the correct meaning of all expressions, but not so many that the system of rules is inconsistent. We develop a formal framework in which to give a precise treatment of these completeness and soundness issues, and then investigate them in the context of an extended version of the functional programming language FP. We show that the rewrite rules of FP are sound and complete with respect to three different notions of completeness. In the latter half of the paper we turn our attention to rewrite strategies. In order to implement a language based on rewrite rules, it does not suffice to know that there are ``enough'' rules in the language; we also need to have a good strategy for determining the order in which to apply them. But what is ``good''? Corresponding to each of our notions of completeness there is a notion of a good rewrite strategy. We examine and characterize these notions of goodness, and give examples of a number of natural good strategies. Although we have confined ourselves to FP here, we believe that our techniques (some of which are nontrivial extensions of techniques first used in the context of lambda-calculus) will apply well beyond the realm of FP rewriting systems.

In Journal of the ACM, 37:1, 1990, pp. 86-143. Some material in this paper appeared in preliminary form in ``Denotational semantics and rewrite rules for FP'', Joseph Y. Halpern, John H. Williams, Edward L. Wimmers, and Timothy C. Winkler, Proceedings of the 12th Annual ACM Symposium on Principles of Programming Languages, 1985, pp. 108-120 and ``Good rewriting strategies for FP'', Joseph Y. Halpern, John H. Williams, and Edward L. Wimmers, Proceedings of the IEEE Symposium on Logic in Computer Science, 1986, pp. 149-162.

Knowledge and common knowledge in a distributed environment

Joseph Y. Halpern and Yoram Moses

Reasoning about knowledge seems to play a fundamental role in distributed systems. Indeed, such reasoning is a central part of the informal intuitive arguments used in the design of distributed protocols. Communication in a distributed system can be viewed as the act of transforming the system's state of knowledge. This paper presents a general framework for formalizing and reasoning about knowledge in distributed systems. We argue that states of knowledge of groups of processors are useful concepts for the design and analysis of distributed protocols. In particular, distributed knowledge corresponds to knowledge that is ``distributed'' among the members of the group, while common knowledge corresponds to a fact being ``publicly known''. The relationship between common knowledge and a variety of desirable actions in a distributed system is illustrated. Furthermore, it is shown that, formally speaking, in practical systems common knowledge cannot be attained. A number of weaker variants of common knowledge that are attainable in many cases of interest are introduced and investigated.

In Journal of the ACM, 37:3, 1990, pp. 549-587. A version of the paper similar to the published version is available on CoRR and locally in postscript and pdf. A preliminary version appears in Proceedings of the 3rd ACM Symposium on Principles of Distributed Computing, 1984, pp. 50-61.

A logic for reasoning about probabilities

Ronald Fagin, Joseph Y. Halpern, and Nimrod Megiddo

We consider a language for reasoning about probability which allows us to make statements such as ``the probability of E is less than 1/3'' and ``the probability of E is at least twice the probability of F'', where E and F are arbitrary events. We consider the case where all events are measurable (i.e., represent measurable sets) and the more general case, which is also of interest in practice, where they may not be measurable. The measurable case is essentially a formalization of (the propositional fragment of) Nilsson's probabilistic logic. As we show in a companion paper, the general (nonmeasurable) case corresponds precisely to replacing probability measures by Dempster-Shafer belief functions. In both cases, we provide a complete axiomatization and show that the problem of deciding satisfiability is NP-complete, no worse than that of propositional logic. As a tool for proving our complete axiomatizations, we give a complete axiomatization for reasoning about Boolean combinations of linear inequalities, which is of independent interest. This proof and others make crucial use of results from the theory of linear programming. We then extend the language to allow reasoning about conditional probability, and show that the resulting logic is decidable and completely axiomatizable, by making use of the theory of real closed fields.
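
To illustrate the linear-programming connection, here is a small sketch (my own illustration, not code or an algorithm from the paper) that checks whether a conjunction of weak linear constraints on the probabilities of two atoms is satisfiable, by asking an LP solver for any probability distribution over the four truth assignments.

```python
# Sketch: decide satisfiability of linear probability constraints over atoms p, q
# by searching for a feasible distribution on the four truth assignments.
import numpy as np
from scipy.optimize import linprog

worlds = [(p, q) for p in (0, 1) for q in (0, 1)]      # all truth assignments to p, q
pr_p = np.array([float(p) for (p, q) in worlds])       # row giving Pr(p) as a linear function
pr_q = np.array([float(q) for (p, q) in worlds])       # row giving Pr(q)

# Constraints: Pr(p) <= 1/3  and  Pr(p) >= 2 * Pr(q)   (weak inequalities only).
A_ub = np.array([pr_p, 2 * pr_q - pr_p])
b_ub = np.array([1 / 3, 0.0])
A_eq = np.ones((1, len(worlds)))                       # the probabilities sum to 1
b_eq = np.array([1.0])

res = linprog(c=np.zeros(len(worlds)), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * len(worlds))
print("satisfiable:", res.success)   # True; e.g. all the mass on the world where p and q are both false
```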

In Information and Computation, 87:1,2, 1990, pp. 78-128. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 3rd IEEE Conference on Logic in Computer Science, 1988, pp. 410-421.

An analysis of first-order logics of probability

Joseph Y. Halpern

We consider two approaches to giving semantics to first-order logics of probability. The first approach puts a probability on the domain, and is appropriate for giving semantics to formulas involving statistical information such as ``The probability that a randomly chosen bird flies is greater than .9.'' The second approach puts a probability on possible worlds, and is appropriate for giving semantics to formulas describing degrees of belief, such as ``The probability that Tweety (a particular bird) flies is greater than .9.'' We show that the two approaches can be easily combined, allowing us to reason in a straightforward way about statistical information and degrees of belief. We then consider axiomatizing these logics. In general, it can be shown that no complete axiomatization is possible. We provide axiom systems that are sound and complete in cases where a complete axiomatization is possible, showing that they do allow us to capture a great deal of interesting reasoning about probability.
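
The two readings can be contrasted on the bird example, written in notation that is schematic and only in the spirit of the paper (the paper's precise syntax differs): a statistical statement puts the probability on a randomly chosen domain element, while a degree-of-belief statement weighs possible worlds.

```latex
% Schematic contrast: probability on the domain (type 1) vs. on possible worlds (type 2).
\[
  \underbrace{w_x\bigl(\mathit{Flies}(x) \mid \mathit{Bird}(x)\bigr) > 0.9}_{\text{statistical: a randomly chosen bird flies}}
  \qquad\qquad
  \underbrace{w\bigl(\mathit{Flies}(\mathit{Tweety})\bigr) > 0.9}_{\text{degree of belief that Tweety flies}}
\]
```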

In Artificial Intelligence 46, 1990, pp. 311-350. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI 89), 1989, pp. 1375-1381.

Let many flowers bloom: a response to ``An inquiry into computer understanding''

Joseph Y. Halpern

Two main issues are addressed in this response to Peter Cheeseman's ``An inquiry into computer understanding'' and ``In defense of `An inquiry into computer understanding' '': (1) the ``right'' approach to reasoning about uncertainty and (2) the relationship between logic and probability.

In Computational Intelligence 6, 1990, pp. 184-188. A version of the paper similar to the published version is available in postscript and pdf.

Presburger arithmetic with unary predicates is Π¹₁ complete

Joseph Y. Halpern

We give a simple proof characterizing the complexity of Presburger arithmetic augmented with additional predicates. We show that Presburger arithmetic with additional predicates is Π¹₁ complete. Adding a single unary predicate is enough to get Π¹₁ hardness, while adding more predicates (of any arity) does not make the complexity any worse.

In Journal of Symbolic Logic, 56:2, 1991, pp. 637-642. A version of the paper similar to the published version is available in postscript and pdf.

The relationship between knowledge, belief, and certainty

Joseph Y. Halpern

We consider the relation between knowledge and certainty, where a fact is known if it is true at all worlds an agent considers possible and is certain if it holds with probability 1. We identify certainty with belief, interpreted probabilistically. We show that if we assume one fixed probability assignment, then the logic KD45, which has been identified as perhaps the most appropriate for belief, provides a complete axiomatization for reasoning about certainty. Just as an agent may believe a fact p although p is false, he may be certain that a fact p is true although p is false. However, it is easy to see that an agent can have such false (probabilistic) beliefs only at a set of worlds of probability 0. If we restrict attention to structures where all worlds have positive probability, then S5 provides a complete axiomatization. If we consider a more general setting, where there might be a different probability assignment at each world, then by placing appropriate conditions on the support of the probability function (the set of worlds which have non-zero probability), we can capture many other well-known modal logics, such as T and S4. Finally, we consider Miller's principle, a well-known principle relating higher-order probabilities to lower-order probabilities, and show that in a precise sense KD45 characterizes certainty in those structures satisfying Miller's principle.

In Annals of Mathematics and Artificial Intelligence 4, 1991, pp. 301-322. A version of the paper similar to the published paper is available in postscript and pdf. There is a bug in the published version; a correction is available in postscript and pdf, and appears in Annals of Mathematics and Artificial Intelligence 26, 1999, pp. 253-256. A preliminary version appears in Proceedings of the 5th Workshop on Uncertainty in AI, 1989, pp. 142-151.

A model-theoretic analysis of knowledge

Ronald Fagin, Joseph Y. Halpern, and Moshe Y. Vardi

Understanding knowledge is a fundamental issue in many disciplines. In computer science, knowledge arises not only in the obvious contexts (such as knowledge-based systems), but also in distributed systems (where the goal is to have each processor ``know'' something, as in agreement protocols). A general semantic model of knowledge is introduced, to allow reasoning about statements such as ``He knows that I know whether or not she knows whether or not it is raining.'' This approach more naturally models a state of knowledge than previous proposals (including Kripke structures). Using this notion of model, a model theory for knowledge is developed. This theory enables one to interpret the notion of a ``finite amount of information''.

In Journal of the ACM, 38:2, 1991, pp. 382-428. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 25th Annual Conference on Foundations of Computer Science, 1984, pp. 268-278.

A propositional modal interval logic

Joseph Y. Halpern and Yoav Shoham

In certain areas of artificial intelligence there is need to represent continuous change and to make statements that are interpreted with respect to time intervals rather than time points. To this end we develop a modal temporal logic based on time intervals, a logic which can be viewed as a generalization of point-based modal temporal logic. We discuss related logics, give an intuitive presentation of the new logic, and define its formal syntax and semantics. We make no assumption about the underlying nature of time, allowing it to be discrete (such as the natural numbers) or continuous (such as the rationals or the reals), linear or branching, complete (such as the reals) or not (such as the rationals). We show, however, that there are formulas in the logic that allow us to distinguish all these situations. We also give a translation of our logic into first-order logic, which allows us to apply some results on first-order logic to our modal one. Finally, we consider the difficulty of validity problems for the logic. This turns out to depend critically, and in surprising ways, on our assumptions about time. For example, if we take our underlying temporal structure to be the rationals, we can show that the validity problem is r.e.-complete; if it is the reals, then we can show that validity is Π¹₁-hard; and if it is the natural numbers, then validity is Π¹₁-complete.

In Journal of the ACM, 38:4, 1991, pp. 935-962. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the IEEE Symposium on Logic in Computer Science, 1986, pp. 279-292.

Clock synchronization and the power of broadcasting

Joseph Y. Halpern and Ichiro Suzuki

We investigate the power of a broadcast mechanism in a distributed network. We do so by considering the problem of synchronizing clocks in an error-free network, under the assumption that there is no upper bound on message transmission time, but that broadcast messages are guaranteed to be received within an interval of size d, for some fixed constant d. This is intended to be an idealization of what happens in multiple access networks, such as the Ethernet. We then consider tradeoffs between the type and number of broadcasts, and the tightness of synchronization. Our results include (1) matching upper and lower bounds of (1 + 1/K)d on the precision of clock synchronization attainable for processes using K (n-1)-casts, for K between 3 and n, (2) matching upper and lower bounds of (1 + 1/n)d on the precision of clock synchronization attainable for n > 2 processes using an arbitrary number of (n-1)-casts, and (3) matching upper and lower bounds of (1 + (n-2)/n)d on the precision attainable using 2-casting.

In Distributed Computing 5:2, 1991, pp. 73-83. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 28th Annual Allerton Conference on Communication, Control, and Computing, 1990, pp. 588-597.

Uncertainty, belief, and probability

Ronald Fagin and Joseph Y. Halpern

We introduce a new probabilistic approach to dealing with uncertainty, based on the observation that probability theory does not require that every event be assigned a probability. For a nonmeasurable event (one to which we do not assign a probability), we can talk about only the inner measure and outer measure of the event. In addition to removing the requirement that every event be assigned a probability, our approach circumvents other criticisms of probability-based approaches to uncertainty. For example, the measure of belief in an event turns out to be represented by an interval (defined by the inner and outer measure), rather than by a single number. Further, this approach allows us to assign a belief (inner measure) to an event without committing to a belief about its negation (since the inner measure of an event plus the inner measure of its negation is not necessarily one). Interestingly enough, inner measures induced by probability measures turn out to correspond in a precise sense to Dempster-Shafer belief functions. Hence, in addition to providing promising new conceptual tools for dealing with uncertainty, our approach shows that a key part of the important Dempster-Shafer theory of evidence is firmly rooted in classical probability theory.
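
A small worked example of the idea, with toy numbers of my own: start from a probability measure defined only on the algebra generated by a partition into blocks, and bound a nonmeasurable event from below by the blocks it contains and from above by the blocks it meets.

```python
# Toy example: inner and outer measures induced by a probability measure that is
# only defined on the algebra generated by a partition into blocks.

BLOCKS = {                       # the measurable blocks and their probabilities
    frozenset({"a", "b"}): 0.5,
    frozenset({"c"}): 0.3,
    frozenset({"d", "e"}): 0.2,
}

def inner_outer(event):
    """Inner measure: mass of blocks entirely inside the event (a lower bound, 'belief').
    Outer measure: mass of blocks meeting the event (an upper bound, 'plausibility')."""
    event = set(event)
    inner = sum(p for block, p in BLOCKS.items() if block <= event)
    outer = sum(p for block, p in BLOCKS.items() if block & event)
    return inner, outer

# {"a", "c"} is nonmeasurable (it contains only part of the block {"a", "b"}):
print(inner_outer({"a", "c"}))          # (0.3, 0.8) -- an interval, not a single number
print(inner_outer({"b", "d", "e"}))     # (0.2, 0.7) -- inner measures of an event and its
                                        # complement sum to 0.5, not 1
print(inner_outer({"a", "b", "c"}))     # (0.8, 0.8) -- measurable, so the bounds coincide
```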

In Computational Intelligence 7, 1991, pp. 160-173. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 11th International Joint Conference on Artificial Intelligence (IJCAI 89), 1989, pp. 1161-1167.

What can machines know? On the properties of knowledge in distributed systems

Ronald Fagin, Joseph Y. Halpern, and Moshe Y. Vardi

It has been argued that knowledge is a useful tool for designing and analyzing complex systems. The notion of knowledge that seems most relevant in this context is an external, information-based notion that can be shown to satisfy all the axioms of the modal logic S5. We carefully examine the properties of this notion of knowledge and show that they depend crucially, and in subtle ways, on assumptions we make about the system and about the language used for describing knowledge. We present a formal model in which we can capture various assumptions frequently made about systems, such as whether they are deterministic or nondeterministic, whether knowledge is cumulative (which means that processes never ``forget''), and whether or not the ``environment'' affects the state transitions of the processes. We then show that under some assumptions about the system and the language, certain states of knowledge are not attainable and the axioms of S5 do not completely characterize the properties of knowledge; extra axioms are needed. We provide complete axiomatizations for knowledge in a number of cases of interest.

In Journal of the ACM, 39:2, 1992, pp. 328-376. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version, with the title ``What can machines know? On the epistemic properties of machines'', appears in AAAI-86 (Proceedings of the Fifth National Conference on Artificial Intelligence), 1986, pp. 428-434.

Two views of belief: belief as generalized probability and belief as evidence

Joseph Y. Halpern and Ronald Fagin

Belief functions are mathematical objects defined to satisfy three axioms that look somewhat similar to the Kolmogorov axioms defining probability functions. We argue that there are (at least) two useful and quite different ways of understanding belief functions. The first is as a generalized probability function (which technically corresponds to the inner measure induced by a probability function). The second is as a way of representing evidence. Evidence, in turn, can be understood as a mapping from probability functions to probability functions. It makes sense to think of updating a belief if we think of it as a generalized probability. On the other hand, it makes sense to combine two beliefs (using, say, Dempster's rule of combination) only if we think of the belief functions as representing evidence. Many previous papers have pointed out problems with the belief function approach; the claim of this paper is that these problems can be explained as a consequence of confounding these two views of belief functions.
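
For concreteness, the evidence-combination operation mentioned above, Dempster's rule of combination, can be sketched as follows on mass functions over a small frame; the frame and the masses are made up for illustration.

```python
# Sketch of Dempster's rule of combination on mass functions over a small frame.

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozensets to mass), renormalizing
    away the conflicting mass that lands on the empty intersection."""
    combined, conflict = {}, 0.0
    for b, p in m1.items():
        for c, q in m2.items():
            inter = b & c
            if inter:
                combined[inter] = combined.get(inter, 0.0) + p * q
            else:
                conflict += p * q
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence: the rule is undefined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}

FRAME = frozenset({"rain", "sun", "snow"})
m1 = {frozenset({"rain"}): 0.6, FRAME: 0.4}            # one body of evidence, favouring rain
m2 = {frozenset({"rain", "snow"}): 0.7, FRAME: 0.3}    # another, favouring precipitation
print(dempster_combine(m1, m2))
# combined masses: rain ~ 0.60, {rain, snow} ~ 0.28, whole frame ~ 0.12
```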

In Artificial Intelligence 54, 1992, pp. 275-317. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in AAAI-90 (Proceedings of the Eighth National Conference on Artificial Intelligence), 1990, pp. 112-119.

A guide to completeness and complexity for modal logics of knowledge and belief

Joseph Y. Halpern and Yoram Moses

We review and re-examine possible-worlds semantics for propositional logics of knowledge and belief with three particular points of emphasis: (1) we show how general techniques for finding decision procedures and complete axiomatizations apply to models for knowledge and belief, (2) we show how sensitive the difficulty of the decision procedure is to such issues as the choice of modal operators and the axiom system, and (3) we discuss how notions of common knowledge and distributed knowledge among a group of agents fit into the possible-worlds framework. As far as complexity is concerned, we show, among other results, that while the problem of deciding satisfiability of an S5 formula with one agent is NP-complete, the problem for many agents is PSPACE-complete. Adding a distributed knowledge operator does not change the complexity, but once a common knowledge operator is added to the language, the problem becomes complete for exponential time.

In Artificial Intelligence 54, 1992, pp. 319-379. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version, with the title ``A guide to the modal logics of knowledge and belief'' appears in Proceedings of the 9th International Joint Conference on Artificial Intelligence (IJCAI 85), 1985, pp. 480-490.

A little knowledge goes a long way: knowledge-based derivations and correctness proofs for a family of protocols

Joseph Y. Halpern and Lenore D. Zuck

We use a high-level, knowledge-based approach for deriving a family of protocols for the sequence transmission problem. The protocols of Aho, Ullman, and Yannakakis, the Alternating Bit protocol, and Stenning's protocol are all instances of one knowledge-based protocol that we derive. Our derivation leads to transparent and uniform correctness proofs for all these protocols.
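
As a concrete member of the family, here is a bare-bones simulation of the Alternating Bit protocol (a sketch written for this page, not the knowledge-based derivation given in the paper): the sender retransmits each data item tagged with a one-bit sequence number until it is acknowledged, and the receiver delivers an item only when its tag is the one the receiver expects.

```python
# Bare-bones Alternating Bit simulation over lossy channels (illustrative sketch only).
import random

random.seed(1)
LOSS = 0.3                                  # chance that any single message is dropped

def lossy(msg):
    return None if random.random() < LOSS else msg

def transmit(data):
    """Send each item with an alternating one-bit tag; retransmit until acknowledged."""
    delivered, expected = [], 0             # receiver state: delivered items, next expected tag
    bit = 0                                 # sender state: current tag
    for item in data:
        while True:
            frame = lossy((bit, item))      # sender -> receiver, possibly lost
            ack = None
            if frame is not None:
                tag, payload = frame
                if tag == expected:         # a new item: deliver it and flip the expected tag
                    delivered.append(payload)
                    expected ^= 1
                ack = lossy(tag)            # receiver -> sender: acknowledge the tag it saw
            if ack == bit:                  # matching ack received: move on to the next item
                bit ^= 1
                break                       # otherwise: time out and retransmit
    return delivered

print(transmit(["a", "b", "c"]))            # ['a', 'b', 'c'], in order, despite losses
```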

In Journal of the ACM, 39:3, 1992, pp. 449-478. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 6th ACM Symposium on Principles of Distributed Computing, 1987, pp. 269-280.

What is an inference rule?

Ronald Fagin, Joseph Y. Halpern, and Moshe Y. Vardi

What is an inference rule? This question does not have a unique answer. One usually finds two distinct standard answers in the literature: validity inference (we can infer q from p if, for every substitution T, the validity of T[p] entails the validity of T[q]) and truth inference (we can infer q from p if, for every substitution T, the truth of T[p] entails the truth of T[q]). In this paper we introduce a general semantic framework that allows us to investigate the notion of inference more carefully. Validity inference and truth inference are in some sense the extremal points in our framework. We investigate the relationship between various types of inference in our general framework, and consider the complexity of deciding if an inference rule is sound, in the context of a number of logics of interest: classical propositional logic, a nonstandard propositional logic, various propositional modal logics, and first-order logic.
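
A standard example of the gap between the two notions (a stock example, not the paper's): in a normal modal logic, the necessitation rule is a validity inference but not a truth inference, since the validity of p guarantees the validity of Box p, while p can be true at a world without Box p being true there.

```latex
% Necessitation: sound for validity inference, unsound for truth inference.
\[
  \frac{p}{\Box\, p}
  \qquad
  \text{valid } p \;\Longrightarrow\; \text{valid } \Box p,
  \quad\text{but}\quad
  (M,w) \models p \;\not\Longrightarrow\; (M,w) \models \Box p .
\]
```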

In Journal of Symbolic Logic 57:3, 1992, pp. 1018-1045. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 5th Jerusalem Conference on Information Technology, 1990, pp. 391-401.

Message-optimal protocols for Byzantine agreement

Vassos Hadzilacos and Joseph Y. Halpern

It is often important for the correct processes in a distributed system to reach agreement, despite the presence of some faulty processes. Byzantine agreement (BA) is a paradigm problem that attempts to isolate the key features of reaching agreement. We focus here on the number of messages required to reach BA, with particular emphasis on the number of messages required in the failure-free runs, since these are the ones that occur most often in practice. The number of messages required is sensitive to the types of failures considered. In earlier work, Amdur et al. [1990] established tight upper and lower bounds on the worst- and average-case number of messages required in failure-free runs for crash failures. We provide tight upper and lower bounds for all remaining types of failures that have been considered in the literature on the BA problem: receiving omission, sending omission and general omission failures, as well as arbitrary failures with or without message authentication. We also establish a tradeoff between number of rounds and number of messages in the failure-free runs required to reach agreement in the case of crash, sending and general omission failures.

In Mathematical Systems Theory 26, 1993, pp. 41-102. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version of this paper appears in Proceedings of the 10th ACM Symposium on Principles of Distributed Computing, 1991, pp. 309-323.

The failure discovery problem

Vassos Hadzilacos and Joseph Y. Halpern

We define a simple variant of the Byzantine agreement (BA) problem, called the Failure Discovery (FD) problem, which, roughly speaking, amounts to reaching Byzantine agreement provided that no failures are discovered. We show how a protocol for FD can be extended to one for BA, with no message overhead in the failure-free runs. We also show that, for so-called benign failures, if the FD protocol satisfies an additional property, the message-preserving extension to a BA protocol can be accomplished with minimal time overhead in the failure-free runs. Our results show that FD is a useful building block for BA; indeed, it has been used in this way in a companion paper.

In Mathematical Systems Theory 26, 1993, pp. 103-129. A version of the paper similar to the published version is available in postscript and pdf.

Knowledge, probability, and adversaries

Joseph Y. Halpern and Mark Tuttle

What should it mean for an agent to know or believe an assertion is true with probability .99? Different papers give different answers, choosing to use quite different probability spaces when computing the probability that an agent assigns to an event. We show that each choice can be understood in terms of a betting game. This betting game itself can be understood in terms of three types of adversaries influencing three different aspects of the game. The first selects the outcome of all nondeterministic choices in the system; the second represents the knowledge of the agent's opponent in the betting game (this is the key place the papers mentioned above differ); the third is needed in asynchronous systems to choose the time the bet is placed. We illustrate the need for considering all three types of adversaries with a number of examples. Given a class of adversaries, we show how to assign probability spaces to agents in a way most appropriate for that class, where ``most appropriate'' is made precise in terms of this betting game. We conclude by showing how different assignments of probability spaces (corresponding to different opponents) yield different levels of guarantees in probabilistic coordinated attack.

In Journal of the ACM, 40:4, 1993, pp. 917-962. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 8th ACM Symposium on Principles of Distributed Computing, 1989, pp. 103-118.

Naming and identity in epistemic logics, I: The propositional case

Adam J. Grove and Joseph Y. Halpern

Modal epistemic logics for many agents often assume a fixed one-to-one correspondence between agents and the names for agents that occur in the language. This assumption restricts the applicability of any logic because it prohibits, for instance, anonymous agents, agents with many names, named groups of agents, and relative (indexical) reference. Here we examine the principles involved in such cases, and give simple propositional logics that are expressive enough to cope with them all.

In Journal of Logic and Computation, 3:4, 1993, pp. 345-378. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version under the title ``Naming and identity in a multi-agent epistemic logic'' appears in Principles of Knowledge Representation and Reasoning: Proceedings of the Second International Conference (J. A. Allen, R. Fikes, and E. Sandewall, eds.), 1991, pp. 301-312. This preliminary version also contains material from Part II of the paper of which Adam Grove is the sole author; Part II appears in Artificial Intelligence 74, 1995, pp. 311-350.

A response to ``Believing on the basis of evidence''

Fahiem Bacchus, Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

This is part of a special issue of Computational Intelligence devoted to ``Believing on the basis of evidence'', a paper by Henry Kyburg, and responses to it. Kyburg addresses the issue of how we can use our information to reason to a set of conclusions that are plausible but not certain. This set of conclusions goes beyond the set of deductive conclusions. His program is certainly ambitious and the approach he suggests meets with varying degrees of success and failure when it comes to particular issues. We examine and critique a few parts of his approach.

In Computational Intelligence, 10:1, 1994, pp. 21-25. A version of the paper similar to the published version is available in postscript and pdf.

Reasoning about knowledge and probability

Ronald Fagin and Joseph Y. Halpern

We provide a model for reasoning about knowledge and probability together. We allow explicit mention of probabilities in formulas, so that our language has formulas that essentially say ``according to agent i, formula p holds with probability at least a.'' The language is powerful enough to allow reasoning about higher-order probabilities, as well as allowing explicit comparisons of the probabilities an agent places on distinct events. We present a general framework for interpreting such formulas, and consider various properties that might hold of the interrelationship between agents' probability assignments at different states. We provide a complete axiomatization for reasoning about knowledge and probability, prove a small model property, and obtain decision procedures. We then consider the effects of adding common knowledge and a probabilistic variant of common knowledge to the language.
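
Purely as an illustration of the kind of formula the language supports (the notation here is schematic, not necessarily the paper's own), ``agent 1 knows that, according to agent 2, p holds with probability at least 1/2'' and a higher-order probability statement might be written as follows:

\[ K_1\bigl(w_2(p) \geq \tfrac{1}{2}\bigr), \qquad w_1\bigl(w_2(p) \geq \tfrac{1}{2}\bigr) \geq \tfrac{3}{4}. \]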

In Journal of the ACM, 41:2, 1994, pp. 340-367. A minor corrigendum appears in Journal of the ACM, 45:1, 1998, p. 214. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the Second Conference on Theoretical Aspects of Reasoning About Knowledge, 1988, pp. 277-294.

Decidability and expressiveness for first-order logics of probability

Martin Abadi and Joseph Y. Halpern

We consider decidability and expressiveness issues for two first-order logics of probability. In one, the probability is on possible worlds, while in the other, it is on the domain. It turns out that in both cases it takes very little to make reasoning about probability highly undecidable. We show that when the probability is on the domain, if the language contains only unary predicates then the validity problem is decidable. However, if the language contains even one binary predicate, the validity problem is as hard as that of elementary analysis with free predicate and function symbols. With equality in the language, even with no other symbols, the validity problem is at least as hard as that for elementary analysis. Thus, the logic cannot be axiomatized in either case. When we put the probability on the set of possible worlds, the validity problem is just as hard with as little as one unary predicate in the language, even without equality; with equality, a single constant symbol suffices. We then turn our attention to an analysis of what causes this overwhelming complexity. For example, we show that requiring the probabilities to be rational lowers the complexity. In many contexts it suffices to restrict attention to domains of bounded size; fortunately, the logics are decidable in this case. Finally, we show that, although the two logics capture quite different intuitions about probability, there is a precise sense in which they are equi-expressive.

In Information and Computation, 112:1, 1994, pp. 1-36. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 30th Annual Conference on Foundations of Computer Science, 1989, pp. 148-153.

Random worlds and maximum entropy

Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

Given a knowledge base KB containing first-order and statistical facts, we consider a principled method, called the random-worlds method, for computing a degree of belief that some formula p holds given KB. If we are reasoning about a world or system consisting of N individuals, then we can consider all possible worlds, or first-order models, with domain 1,...,N that satisfy KB, and compute the fraction of them in which p is true. We define the degree of belief to be the asymptotic value of this fraction as N grows large. We show that when the vocabulary underlying p and KB uses constants and unary predicates only, we can naturally associate an entropy with each world. As N grows larger, there are many more worlds with higher entropy. Therefore, we can use a maximum-entropy computation to compute the degree of belief. This result is in a similar spirit to previous work in physics and artificial intelligence, but is far more general. Of equal interest to the result itself are the limitations on its scope. Most importantly, the restriction to unary predicates seems necessary. Although the random-worlds method makes sense in general, the connection to maximum entropy seems to disappear in the non-unary case. These observations suggest unexpected limitations to the applicability of maximum-entropy methods.
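
In symbols (with notation chosen here purely for illustration), the random-worlds degree of belief described above is the limiting fraction of the models of the knowledge base in which the query holds, when that limit exists:

\[ \Pr{}_N(p \mid KB) = \frac{\#\{W : \mathrm{dom}(W) = \{1,\dots,N\},\ W \models KB \wedge p\}}{\#\{W : \mathrm{dom}(W) = \{1,\dots,N\},\ W \models KB\}}, \qquad \mu_\infty(p \mid KB) = \lim_{N \to \infty} \Pr{}_N(p \mid KB). \]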

In Journal of Artificial Intelligence Research, 2, 1994, pp. 33-88. The paper is available in postscript and pdf. A preliminary version appears in Proceedings of the Seventh Annual IEEE Symposium on Logic in Computer Science, 1992, pp. 22-33.

Zero-one laws for modal logic

Joseph Y. Halpern and Bruce M. Kapron

We show that a 0-1 law holds for propositional modal logic, both for structure validity and frame validity. In the case of structure validity, the result follows easily from the well-known 0-1 law for first-order logic. However, our proof gives considerably more information. It leads to an elegant axiomatization for almost-sure structure validity and to sharper complexity bounds. Since frame validity can be reduced to a Π^1_1 formula, the 0-1 law for frame validity helps delineate when 0-1 laws exist for second-order logics.

In Annals of Pure and Applied Logic, 69, 1994, pp. 157-193. A version similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the Seventh Annual IEEE Symposium on Logic in Computer Science, 1992, pp. 369-380 (with B. M. Kapron). Unfortunately, one of the main theorems in the paper is incorrect. An erratum appears in Annals of Pure and Applied Logic 121, 2003, pp. 281-283, and is available in postscript and pdf.

Full abstraction and expressive completeness for FP

Joseph Y. Halpern and E. L. Wimmers

We consider issues related to the expressive power of the programming language FP. In particular, we consider whether a number of variants of FP are fully abstract and expressively complete. For example, we show that a version of FP with only one-sided sequences behaves similarly to PCF in that the addition of parallel or is sufficient to make it fully abstract. However, the addition of parallel or to FP (with its two-sided infinite sequences) is not sufficient to achieve full abstraction. By considering these and other variants, we obtain a better understanding of what is required of a language and semantics in order to guarantee full abstraction and expressive completeness.
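
For readers unfamiliar with the term, ``parallel or'' is the monotone function on truth values (with \bot standing for divergence) that returns true as soon as either argument does, something no sequential program can compute:

\[ \mathit{por}(\mathit{tt}, y) = \mathit{por}(x, \mathit{tt}) = \mathit{tt}, \qquad \mathit{por}(\mathit{ff}, \mathit{ff}) = \mathit{ff}, \qquad \mathit{por}(x, y) = \bot \ \text{otherwise}. \]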

In Information and Computation 118:2, 1995, pp. 246-271. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the Second IEEE Symposium on Logic in Computer Science, 1987, pp. 257-271.

Dynamic fault-tolerant clock synchronization

Danny Dolev, Joseph Y. Halpern, Barbara B. Simons, and H. Raymond Strong

This paper gives two simple efficient distributed algorithms: one for keeping clocks in a network synchronized and one for allowing new processors to join the network with their clocks synchronized. Assuming a fault tolerant authentication protocol, the algorithms tolerate both link and processor failures of any type. The algorithm for maintaining synchronization works for arbitrary networks (rather than just completely connected networks) and tolerates any number of processor or communication link faults as long as the correct processors remain connected by fault-free paths. It thus represents an improvement over other clock synchronization algorithms such as those of Lamport and Melliar-Smith [1985] and Welch and Lynch [1988], although, unlike them, it does require an authentication protocol to handle Byzantine faults. Our algorithm for allowing new processors to join requires that more than half the processors be correct, a requirement that is provably necessary.

In Journal of the ACM 42:1, 1995, pp. 143-185. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version, with the title ``Fault-tolerant clock synchronization'' appears in Proceedings of the 3rd ACM Symposium on Principles of Distributed Computing, 1984, pp. 89-102.

The effect of bounding the number of primitive propositions and the depth of nesting on the complexity of modal logic

Joseph Y. Halpern

A well-known result of Ladner says that the satisfiability problem for K45, KD45, and S5 is NP-complete. This result implicitly assumes that there are infinitely many primitive propositions in the language; it is easy to see that the satisfiability problem for these logics becomes linear time if there are only finitely many primitive propositions in the language. By way of contrast, we show that the PSPACE-completeness results of Ladner and Halpern and Moses hold for the modal logics K, T, S4, and for the multi-agent versions of K45, KD45, and S5 (provided there are at least two agents), even if there is only one primitive proposition in the language. We go on to examine the effect on complexity of bounding the depth of nesting of modal operators. If we restrict to finite nesting, then the satisfiability problem is NP-complete for all the modal logics considered but S4. If we then further restrict the language to having only finitely many primitive propositions, the complexity goes down to linear time in all cases.

In Artificial Intelligence, 75:2, 1995, pp. 361-372. A version of the paper similar to the published version is available in postscript and pdf.

Levesque's axiomatization of only knowing is incomplete

Joseph Y. Halpern and Gerhard Lakemeyer

Levesque introduced a notion of ``only knowing'', with the goal of capturing certain types of nonmonotonic reasoning. Levesque's logic dealt with only the case of a single agent. Recently, both Halpern and Lakemeyer independently attempted to extend Levesque's logic to the multi-agent case. Although there are a number of similarities in their approaches, there are some significant differences. In this paper, we reexamine the notion of only knowing, going back to first principles. In the process, we point out some problems with the earlier definitions. This leads us to reconsider what the properties of only knowing ought to be. We provide an axiom system that captures our desiderata, and show that it has a semantics that corresponds to it. The axiom system has an added feature of interest: it includes a modal operator for satisfiability, and thus provides a complete axiomatization for satisfiability in the logic K45.

In Artificial Intelligence 74:2, 1995, pp. 381-387. A version of the paper similar to the published version is available in postscript and pdf.

A nonstandard approach to the logical omniscience problem

Ronald Fagin, Joseph Y. Halpern, and Moshe Y. Vardi

We introduce a new approach to dealing with the well-known logical omniscience problem in epistemic logic. Instead of taking possible worlds where each world is a model of classical propositional logic, we take possible worlds which are models of a nonstandard propositional logic we call NPL, which is somewhat related to relevance logic. This approach gives new insights into the logic of implicit and explicit belief considered by Levesque and Lakemeyer. In particular, we show that in a precise sense agents in the structures considered by Levesque and Lakemeyer are perfect reasoners in NPL.

In Artificial Intelligence, 79:2, 1996, pp. 203-240. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the 3rd Conference on Theoretical Aspects of Reasoning About Knowledge, 1990, pp. 41-55.

Asymptotic conditional probabilities: the unary case

Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences p and q, we consider the structures with domain {1,..., N} that satisfy q, and compute the fraction of them in which p is true. We then consider what happens to this fraction as N gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown by Liogonkii and in a companion paper, in the general case, asymptotic conditional probabilities do not always exist, and most questions relating to this issue are highly undecidable. These results, however, all depend on the assumption that q can use a nonunary predicate symbol. Liogonkii shows that if we condition on formulas q involving unary predicate symbols only (but no equality or constant symbols), then the asymptotic conditional probability does exist and can be effectively computed. This is the case even if we place no corresponding restrictions on p. We extend this result here to the case where q involves equality and constants. We show that the complexity of computing the limit depends on various factors, such as the depth of quantifier nesting, or whether the vocabulary is finite or infinite. We completely characterize the complexity of the problem in the different cases, and show related results for the associated approximation problem.
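
A brute-force sketch of the quantity being studied (purely illustrative; these are not the algorithms of the paper): for a vocabulary with a single unary predicate P and no constants, enumerate all interpretations over the domain {1,...,N} and compute the fraction of models of q in which p also holds, for increasing N.

    from itertools import product

    def fraction(p, q, N):
        # Fraction of the models of q over domain {1..N} that also satisfy p.
        # An interpretation of the unary predicate P is just a subset of the domain.
        models_q = models_pq = 0
        for bits in product([False, True], repeat=N):
            P = {i + 1 for i, b in enumerate(bits) if b}
            if q(P, N):
                models_q += 1
                if p(P, N):
                    models_pq += 1
        return models_pq / models_q if models_q else None

    # Example: q says "something is a P", p says "everything is a P".
    q = lambda P, N: len(P) >= 1
    p = lambda P, N: len(P) == N
    for N in (2, 4, 8, 12):
        print(N, fraction(p, q, N))   # 1/(2^N - 1), so the asymptotic conditional probability is 0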

In SIAM Journal on Computing, 25:1, 1996, pp. 1-51. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version (which also includes results of a companion paper), with the title ``Asymptotic conditional probabilities for first-order logic'', appears in Proceedings of the 24th Annual ACM Symposium on Theory of Computing, 1992, pp. 294-305.

Asymptotic conditional probabilities: the non-unary case

Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

Motivated by problems that arise in computing degrees of belief, we consider the problem of computing asymptotic conditional probabilities for first-order sentences. Given first-order sentences p and q, we consider the structures with domain {1,..., N} that satisfy q, and compute the fraction of them in which p is true. We then consider what happens to this fraction as N gets large. This extends the work on 0-1 laws that considers the limiting probability of first-order sentences, by considering asymptotic conditional probabilities. As shown by Liogonkii, if there is a non-unary predicate symbol in the vocabulary, asymptotic conditional probabilities do not always exist. We extend this result to show that asymptotic conditional probabilities do not always exist for any reasonable notion of limit. Liogonkii also showed that the problem of deciding whether the limit exists is undecidable. We analyze the complexity of three problems with respect to this limit: deciding whether it is well-defined, whether it exists, and whether it lies in some nontrivial interval. Matching upper and lower bounds are given for all three problems, showing them to be highly undecidable.

In Journal of Symbolic Logic, 61:1, 1996, pp. 250-275. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version (which also includes results of a companion paper), with the title ``Asymptotic conditional probabilities for first-order logic'', appears in Proceedings of the 24th Annual ACM Symposium on Theory of Computing, 1992, pp. 294-305.

Should knowledge entail belief?

Joseph Y. Halpern

The appropriateness of S5 as a logic of knowledge has been attacked at some length in the philosophical literature. Here one particular attack based on the interplay between knowledge and belief is considered: Suppose that knowledge satisfies S5, belief satisfies KD45, and both the entailment property (knowledge implies belief) and positive certainty (if the agent believes something, she believes she knows it) hold. Then it can be shown that belief reduces to knowledge: it is impossible to have false beliefs. While the entailment property has typically been viewed as perhaps the least controversial of these assumptions, an argument is presented that it can plausibly be viewed as the culprit. More precisely, it is shown that this attack fails if we weaken the entailment property so that it applies only to objective (nonmodal) formulas, rather than to arbitrary formulas. Since the standard arguments in favor of the entailment property are typically given only for objective formulas, this observation suggests that care must be taken in applying intuitions that seem reasonable in the case of objective formulas to arbitrary formulas.
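
For concreteness, here is one standard reconstruction of the collapse argument sketched above (a sketch under the stated assumptions, not a quotation from the paper). Write Kp for ``the agent knows p'' and Bp for ``the agent believes p''; assume the entailment property (Kp implies Bp) and positive certainty (Bp implies BKp), with K satisfying S5 and B satisfying KD45. If Bp holds but Kp does not, then negative introspection for K gives K(not Kp); applying entailment to the non-objective formula not Kp gives B(not Kp); and positive certainty gives BKp, contradicting the consistency axiom D for belief. In short,

\[ \neg Kp \;\Rightarrow\; K\neg Kp \;\Rightarrow\; B\neg Kp, \qquad Bp \;\Rightarrow\; BKp, \qquad B\neg Kp \wedge BKp \;\Rightarrow\; \bot \ \ (\text{axiom D}), \qquad \text{so } Bp \rightarrow Kp, \]

and, given the truth axiom for K, false beliefs are impossible. The argument applies entailment to the modal formula not Kp, which is exactly the step blocked by restricting the entailment property to objective formulas.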

In Journal of Philosophical Logic, 25:5, 1996, pp. 483-494. A version of the paper similar to the published version is available in postscript and pdf.

A theory of knowledge and ignorance for many agents

Joseph Y. Halpern

We extend the notion of ``only knowing'' introduced by Halpern and Moses (see ``Towards a theory of knowledge and ignorance'') to many agents and to a number of modal logics. In this approach, ``all an agent knows is p'' is true in a structure M if, in M, the agent knows p and has the maximum number of ``possibilities''. To extend this approach, we need to make precise what counts as a ``possibility''. In the single-agent case, we can identify a possibility with a truth assignment. In the multi-agent case, things are more complicated. We consider three notions of possibility (all related). We argue that the first is most appropriate for non-introspective logics, such as K, T, and S4, the second is most appropriate for K45 and KD45, and the last is most appropriate for S5. With the appropriate notion of possibility, we show that we obtain reasonable extensions in all cases. These results also shed light on the single-agent case. It was always assumed that one of the key aspects of the Halpern-Moses approach in the single-agent case was its use of S5, rather than K45 or KD45. Our results show that the notion is better understood in the context of K45 (or KD45). In the single-agent case, the notion remains unchanged if we use K45 instead of S5. However, in the multi-agent case, there are significant differences between K45 and S5. Moreover, in some sense, the K45 variants behave better: all results proved for the single-agent case extend more naturally to the multi-agent case of K45 than to the multi-agent case of S5.

In Journal of Logic and Computation, 7:1, 1997, pp. 79-108. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears as ``Reasoning about only knowing with many agents'', AAAI-93 (Proceedings of the Eleventh National Conference on Artificial Intelligence) 1993, pp. 655-661.

From statistical knowledge bases to degrees of belief

Fahiem Bacchus, Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

An intelligent agent will often be uncertain about various properties of its environment, and when acting in that environment it will frequently need to quantify its uncertainty. For example, if the agent wishes to employ the expected-utility paradigm of decision theory to guide its actions, it will need to assign degrees of belief (subjective probabilities) to various assertions. Of course, these degrees of belief should not be arbitrary, but rather should be based on the information available to the agent. This paper describes one approach for inducing degrees of belief from very rich knowledge bases, which might include information about particular individuals, statistical correlations, physical laws, and default rules. We call our approach the random-worlds method. This method is based on a principle of indifference: it treats all of the worlds the agent considers possible as being equally likely. It is able to integrate qualitative default reasoning with quantitative probabilistic reasoning by providing a language in which both types of information can be easily expressed. Our results show that a number of desiderata that arise in direct inference (reasoning from statistical information to conclusions about individuals) and default reasoning follow directly from the semantics of random worlds. For example, random worlds captures important patterns of reasoning such as specificity, inheritance, indifference to irrelevant information, and default assumptions of independence. Furthermore, the expressive power of the language used and the intuitive semantics of random worlds allow the method to deal well with problems that are beyond the scope of many other non-deductive reasoning systems.

In Artificial Intelligence 87:1-2, 1996, pp. 75-143. A version of the paper similar to the published version is available in postscript and pdf. Some material in this paper appeared in preliminary form in ``Statistical foundations for default reasoning'', Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI 93), 1993, pp. 563-569.

A critical reexamination of default logic, autoepistemic logic, and only knowing

Joseph Y. Halpern

Fifteen years of work on nonmonotonic logic has certainly increased our understanding of the area. However, given a problem in which nonmonotonic reasoning is called for, it is far from clear how one should go about modeling the problem using the various approaches. We take a closer look here at three well-known approaches: default logic, autoepistemic logic, and only knowing. In particular, we return to the original technical definitions given in the papers that introduced them, and examine the extent to which they capture the intuitions they were designed to capture.

In Computational Intelligence 13:1, 1997, pp. 144-163. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Computational Logic and Proof Theory (Proceedings of the Kurt Gödel Symposium, KGC '93), Lecture Notes in Computer Science, vol. 713, Springer, 1993, pp. 43-60.

On ambiguities in the interpretation of game trees

Joseph Y. Halpern

Piccione and Rubinstein have pointed out ambiguities in the interpretation of games of imperfect recall. They focus on the notion of time consistency, and argue that a player in a game of imperfect recall may be time inconsistent, changing his strategy despite no new information and no change in his preferences. In this paper, it is argued that the apparent time inconsistency arises from implicit assumptions made in the definition about what the player (the driver, in Piccione and Rubinstein's central example of the absentminded driver) knows when he reconsiders his strategy and what he will remember if he changes his strategy, and about how the node at which reconsideration takes place is chosen. A model is proposed, based on earlier work in the computer science literature, that allows us -- indeed, almost forces us -- to make these issues explicit. Once these issues are made explicit, time inconsistency seems less inconsistent.

In Games and Economic Behavior 20, 1997, pp. 66-96. A preliminary version appears in Proceedings of the Sixth Conference on Theoretical Aspects of Rationality and Knowledge, 1996, pp. 77-96. A version of the paper similar to the published version is available in postscript and pdf.

On the expected value of games with absentmindedness

Adam J. Grove and Joseph Y. Halpern

Piccione and Rubinstein have argued that a seemingly paradoxical form of time inconsistency can arise in games of imperfect recall. At the heart of this argument is a calculation of the expected value of a game, from the standpoint of a player who is in the middle of play. We claim that this concept is not well defined in games with absentmindedness (that is, games where two nodes in the same path can be in one information set). To make it well defined, additional assumptions must be made. We show that there are reasonable assumptions under which no time inconsistency arises. It is also possible to make assumptions that validate Piccione and Rubinstein's calculations, but the nature of these assumptions is generally such as to remove the appearance of paradox.

In Games and Economic Behavior 20, 1997, pp. 51-65. A version of the paper similar to the published version is available in postscript and pdf.

Performing work efficiently in the presence of faults

Cynthia Dwork, Joseph Y. Halpern, and Orli Waarts

We consider a system of t synchronous processes that communicate only by sending messages to one another, and that together must perform n independent units of work. Processes may fail by crashing; we want to guarantee that in every execution of the protocol in which at least one process survives, all n units of work will be performed. We consider three parameters: the number of messages sent, the total number of units of work performed (including multiplicities), and time. We present three protocols for solving the problem. All three are work-optimal, doing O(n+t) work. The first has moderate costs in the remaining two parameters, sending O() messages, and taking (n+t) time. This protocol can be easily modified to run in any completely asynchronous system equipped with a failure detection mechanism. The second sends only O() messages, but its running time is exponential in n and t. The third is essentially time-optimal in the (usual) case in which there are no failures, and its time complexity degrades gracefully as the number of failures increases.

SIAM Journal on Computing 27:5, 1998. A version of the paper similar to the published version is available on CoRR and locally in postscript and pdf. A preliminary version appears in Proceedings of the Eleventh Annual ACM Symposium on Principles of Distributed Computing, 1992, pp. 91-102.

Knowledge-based programs

Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi

Reasoning about activities in a distributed system at the level of knowledge allows us to abstract away from many irrelevant details. We introduce two notions to facilitate designing and reasoning about systems in terms of knowledge. The first is knowledge-based programs, which improve upon Halpern and Fagin's definition of knowledge-based protocols. Knowledge-based programs are syntactic objects: programs with tests for knowledge. The second notion is contexts, which capture the setting in which a program is executed. In a given context, a standard program (one without tests for knowledge) is represented by (i.e., corresponds in a precise sense to) a unique system. A knowledge-based program, on the other hand, may be represented by no system, one system, or many systems. We provide a condition that is sufficient to guarantee that a knowledge-based program is represented by a unique system, and covers many of the knowledge-based programs considered in the literature. We also characterize the complexity of determining whether a given knowledge-based program has a unique representation, or any representation at all, in a finite-state context.
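
A minimal sketch of the semantics that makes ``tests for knowledge'' meaningful (an illustration of the interpreted-systems idea, not the paper's formal machinery): a system is a set of runs, each run assigns a global state to every time, and an agent knows a fact at a point exactly when the fact holds at every point of the system at which the agent has the same local state.

    def knows(system, agent, point, fact):
        # system: list of runs; a run is a list of global states (dicts).
        # A global state records the agent's local state under its name, plus
        # whatever facts we care about. point is (run_index, time).
        r, m = point
        local = system[r][m][agent]
        return all(fact(state)
                   for run in system
                   for state in run
                   if state[agent] == local)

    # Toy system: agent "a" has the same local state at time 0 of both runs,
    # but "delivered" differs there, so it does not yet know "delivered".
    run0 = [{"a": 0, "delivered": False}, {"a": 1, "delivered": True}]
    run1 = [{"a": 0, "delivered": True},  {"a": 1, "delivered": True}]
    system = [run0, run1]
    print(knows(system, "a", (0, 0), lambda s: s["delivered"]))  # False
    print(knows(system, "a", (0, 1), lambda s: s["delivered"]))  # True

A knowledge-based program branches on tests like this one; the subtlety studied in the paper is that the system over which the tests are evaluated itself depends on how the tests come out.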

In Distributed Computing 10:4, 1997, pp. 199-225. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the Fourteenth Annual ACM Symposium on Principles of Distributed Computing, 1995, pp. 153-163. There is significant overlap between this paper and Chapters 4, 5, and 7 of our book, although the paper has some new material.

Modeling belief in dynamic systems. Part I: Foundations

Nir Friedman and Joseph Y. Halpern

Belief change is a fundamental problem in AI: Agents constantly have to update their beliefs to accommodate new observations. In recent years, there has been much work on axiomatic characterizations of belief change. We claim that a better understanding of belief change can be gained from examining appropriate semantic models. In this paper we propose a general framework in which to model belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes p if he knows that p is more plausible than not-p. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagin for modeling knowledge in multi-agent systems. We then examine the problem of ``minimal change''. This notion can be captured by using prior plausibilities, an analogue to prior probabilities, which can be updated by ``conditioning''. We show by example that conditioning on a plausibility measure can capture many scenarios of interest. In a companion paper, we show how the two best-studied scenarios of belief change, belief revision and belief update, fit into our framework.

In Artificial Intelligence 95:2, 1997, pp. 257-316. A version of the paper similar to the published version is available on CoRR and locally in postscript and pdf. An earlier (and much different) version of the paper appeared under the title "A knowledge-based framework for belief change, Part I: Foundations'', in Proceedings of the 5th Conference on Theoretical Aspects of Reasoning About Knowledge, 1994, pp. 44-64.

Defining relative likelihood in partially-ordered preferential structures

Joseph Y. Halpern

Starting with a likelihood or preference order on worlds, we extend it to a likelihood ordering on sets of worlds in a natural way, and examine the resulting logic. Lewis earlier considered such a notion of relative likelihood in the context of studying counterfactuals, but he assumed a total preference order on worlds. Complications arise when examining partial orders that are not present for total orders. There are subtleties involving the exact approach to lifting the order on worlds to an order on sets of worlds. In addition, the axiomatization of the logic of relative likelihood in the case of partial orders gives insight into the connection between relative likelihood and default reasoning.

In Journal of AI Research 7, 1997, pp. 1-24. A preliminary version appears in Proceedings of the Twelfth Conference on Uncertainty in AI, 1996, pp. 299-306. The paper is available on CoRR and locally in postscript and pdf.

Plausibility measures and default reasoning

Nir Friedman and Joseph Y. Halpern

We introduce a new approach to modeling uncertainty based on plausibility measures. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. We focus on one application of plausibility measures in this paper: default reasoning. In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, epsilon-semantics, possibilistic structures, and kappa-rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties. While this was viewed as a surprise, we show here that it is almost inevitable. In the framework of plausibility measures, we can give a necessary condition for the KLM axioms to be sound, and an additional condition necessary and sufficient to ensure that the KLM axioms are complete. This additional condition is so weak that it is almost always met whenever the axioms are sound. In particular, it is easily seen to hold for all the proposals made in the literature.
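
For reference, the KLM properties mentioned above, in their usual formulation (reading phi |~ psi as ``if phi, then typically psi''):

\begin{align*}
&\text{REF:} && \varphi \mathrel{|\!\sim} \varphi\\
&\text{LLE:} && \text{if } \models \varphi \leftrightarrow \varphi' \text{ and } \varphi \mathrel{|\!\sim} \psi, \text{ then } \varphi' \mathrel{|\!\sim} \psi\\
&\text{RW:}  && \text{if } \models \psi \rightarrow \psi' \text{ and } \varphi \mathrel{|\!\sim} \psi, \text{ then } \varphi \mathrel{|\!\sim} \psi'\\
&\text{AND:} && \text{if } \varphi \mathrel{|\!\sim} \psi_1 \text{ and } \varphi \mathrel{|\!\sim} \psi_2, \text{ then } \varphi \mathrel{|\!\sim} \psi_1 \wedge \psi_2\\
&\text{OR:}  && \text{if } \varphi_1 \mathrel{|\!\sim} \psi \text{ and } \varphi_2 \mathrel{|\!\sim} \psi, \text{ then } \varphi_1 \vee \varphi_2 \mathrel{|\!\sim} \psi\\
&\text{CM:}  && \text{if } \varphi \mathrel{|\!\sim} \psi_1 \text{ and } \varphi \mathrel{|\!\sim} \psi_2, \text{ then } \varphi \wedge \psi_1 \mathrel{|\!\sim} \psi_2
\end{align*}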

In Journal of the ACM 48:4, 2001, pp. 648-685. A version of the paper similar to the published version is available on CoRR and locally in postscript and pdf. A preliminary version appears in AAAI-96 (Proceedings of the Thirteenth National Conference on Artificial Intelligence), 1996, pp. 1297-1304.

On the knowledge requirements of tasks

Ronen I. Brafman, Joseph Y. Halpern, and Yoav Shoham

In order to successfully perform a task, a situated system requires some information about its domain. If we can understand what information the system requires, we may be able to equip it with more suitable sensors or make better use of the information available to it. These considerations have motivated roboticists to examine the issue of sensor design, and in particular, the minimal information required to perform a task. We show here that reasoning in terms of what the robot knows and needs to know to perform a task is a useful approach for analyzing these issues. We extend the formal framework for reasoning about knowledge, already used in AI and distributed computing, by developing a set of basic concepts and tools for modeling and analyzing the knowledge requirements of tasks. We investigate properties of the resulting framework, and show how it can be applied to robotics tasks.

In Artificial Intelligence, 98:1-2, 1998, pp. 317-349. A version of the paper similar to the published version is available in postscript and pdf.

A counterexample to theorems of Cox and Fine

Joseph Y. Halpern

Cox's well-known theorem justifying the use of probability is shown not to hold in finite domains. The counterexample also suggests that Cox's assumptions are insufficient to prove the result even in infinite domains. The same counterexample is used to disprove a result of Fine on comparative conditional probability.
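
As background (stated only roughly here, not in Cox's or the paper's exact formulation), Cox's theorem starts from a real-valued belief function Bel and assumes, among other things, functional relationships of the form

\[ \mathrm{Bel}(\neg A \mid C) = S\bigl(\mathrm{Bel}(A \mid C)\bigr), \qquad \mathrm{Bel}(A \wedge B \mid C) = F\bigl(\mathrm{Bel}(B \mid C),\ \mathrm{Bel}(A \mid B \wedge C)\bigr), \]

from which, together with further regularity conditions, Bel is argued to be isomorphic to a probability measure; the counterexample shows that in finite domains the stated assumptions do not force this conclusion.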

In Journal of AI Research, 10, 1999, pp. 67-85. A preliminary version appears in AAAI-96 (Proceedings of the Thirteenth National Conference on Artificial Intelligence), 1996, pp. 1313-1319. The paper is available in postscript and pdf. Further discussion of Cox's Theorem can be found in Cox's Theorem revisited.

The hierarchical approach to modeling knowledge and common knowledge

Ronald Fagin, Joseph Y. Halpern, John Geanakoplos, and Moshe Y. Vardi

One approach to representing knowledge or belief of agents, which has been explored independently by economists (Boge and Eisele; Mertens and Zamir; Brandenburger and Dekel; Tan and Werlang) and by computer scientists (Fagin, Halpern, and Vardi), involves an infinite hierarchy of beliefs. Such a hierarchy consists of an agent's beliefs about the state of the world, his beliefs about other agents' beliefs about the world, his beliefs about other agents' beliefs about other agents' beliefs about the world, etc. Economists and computer scientists differ, however, in the way they model beliefs. Economists prefer a probability-based framework, where belief is modeled as a probability distribution on the uncertainty space. In contrast, computer scientists prefer an information-based framework, where belief is modeled as a subset of the underlying space. The idea is that whatever is in the subset is believed to be possible, and whatever is not in the subset is believed to be impossible. We consider the question of when such an infinite hierarchy completely describes the uncertainty of the agents. We provide various necessary and sufficient conditions for this property. It turns out that the probability-based approach can be viewed as satisfying one of these conditions, which explains why the infinite hierarchy always completely describes the uncertainty of the agents in the probability-based approach. An interesting consequence of our conditions is that adequacy of an infinite hierarchy may depend on the ``richness'' of the states in the underlying state space. We also consider the question of whether an infinite hierarchy completely describes the uncertainty of the agents with respect to ``interesting'' sets of events and show that the answer depends on the definition of ``interesting''.

International Journal of Game Theory 28:3, 1999, pp. 331-365. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version with the title ``The expressive power of the hierarchical approach to modeling knowledge and common knowledge'' appears in Proceedings of the 4th Conference on Theoretical Aspects of Reasoning About Knowledge, 1992, pp. 229-244.

Hypothetical knowledge and counterfactual reasoning

Joseph Y. Halpern

Samet introduced a notion of hypothetical knowledge and showed how it could be used to capture the type of counterfactual reasoning necessary to force the backwards induction solution in a game of perfect information. He argued that while hypothetical knowledge and the extended information structures used to model it bear some resemblance to the way philosophers have used conditional logic to model counterfactuals, hypothetical knowledge cannot be reduced to conditional logic together with epistemic logic. Here it is shown that in fact hypothetical knowledge can be captured using the standard counterfactual operator and knowledge operator, provided that some assumptions are made regarding the interaction between the two. It is argued, however, that these assumptions are unreasonable in general, as are the axioms that follow from them. Some implications for game theory are discussed.

In International Journal of Game Theory 28:3, 1999, pp. 331-365. A version of the paper similar to the published version is available in postscript and pdf. A preliminary version appears in Proceedings of the Seventh Conference on Theoretical Aspects of Rationality and Knowledge, 1998, pp. 83-96.

Modeling belief in dynamic systems. Part II: Revision and update

Nir Friedman and Joseph Y. Halpern

The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper we introduced a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the changes of beliefs over time. In this paper we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method and to better understand the principles underlying them. In particular, it allows us to understand the source of Gardenfors' triviality result for belief revision and suggests a way of mitigating the problem. It also shows that Katsuno and Mendelzon's notion of belief update depends on several strong assumptions that may limit its applicability in AI.

Journal of AI Research 10, 1999, pp. 117-167. A preliminary version, with the title ``A knowledge-based framework for belief change. Part II: Revision and Update'' appears in Proceedings of the 4th International Conference on Principles of Knowledge Representation and Reasoning, 1994, pp. 190-201. The full paper is available on CoRR and locally in postscript and pdf.

Common knowledge revisited

Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi

We consider the common-knowledge paradox raised in ``Knowledge and common knowledge in a distributed environment'': common knowledge is necessary for coordination, but common knowledge is unattainable in the real world because of temporal imprecision. We discuss two solutions to this paradox: (1) modeling the world with a coarser granularity, and (2) relaxing the requirements for coordination.

In Annals of Pure and Applied Logic 96, 1999, pp. 89-105. A preliminary version appears in Proceedings of the Sixth Conference on Theoretical Aspects of Rationality and Knowledge, 1996, pp. 283-298. This material is largely taken from Chapter 11 of our book. The paper is available on CoRR and locally in postscript and pdf.

Belief revision: a critique

Nir Friedman and Joseph Y. Halpern

We examine carefully the rationale underlying the approaches to belief change taken in the literature, and highlight what we view as methodological problems. We argue that to study belief change carefully, we must be quite explicit about the ``ontology'' or scenario underlying the belief change process. This is something that has been missing in previous work, with its focus on postulates. Our analysis shows that we must pay particular attention to two issues that have often been taken for granted: The first is how we model the agent's epistemic state. (Do we use a set of beliefs, or a richer structure, such as an ordering on worlds? And if we use a set of beliefs, in what language are these beliefs expressed?) We show that even postulates that have been called ``beyond controversy'' are unreasonable when the agent's beliefs include beliefs about her own epistemic state as well as the external world. The second is the status of observations. (Are observations known to be true, or just believed? In the latter case, how firm is the belief?) Issues regarding the status of observations arise particularly when we consider iterated belief revision, and we must confront the possibility of revising by p and then by not-p.

In Journal of Logic, Language, and Information 8, 1999, pp. 401-420. A preliminary version appeared in Proceedings of the Fifth International Conference on Principles of Knowledge Representation and Reasoning (KR '96), 1996, pp. 421-431. The paper is available on CoRR and locally in postscript and pdf.

Cox's Theorem revisited

Joseph Y. Halpern

The assumptions needed to prove Cox's Theorem are discussed and examined. Various sets of assumptions under which a Cox-style theorem can be proved are provided, although all are rather strong and, arguably, not natural. This paper provides further discussion of the issues raised in A counterexample to theorems of Cox and Fine.

In Journal of AI Research 11, 1999, pp. 429-435. This is really a short note expanding on the issues raised in A counterexample to theorems of Cox and Fine. The paper is available on CoRR and locally in postscript and pdf.

Set-theoretic completeness for epistemic and conditional logic

Joseph Y. Halpern

The standard approach to logic in the literature in philosophy and mathematics, which has also been adopted in computer science, is to define a language (the syntax), an appropriate class of models together with an interpretation of formulas in the language (the semantics), a collection of axioms and rules of inference characterizing reasoning (the proof theory), and then relate the proof theory to the semantics via soundness and completeness results. Here we consider an approach that is more common in the economics literature, which works purely at the semantic, set-theoretic level. We provide set-theoretic completeness results for a number of epistemic and conditional logics, and contrast the expressive power of the syntactic and set-theoretic approaches.

In Annals of Mathematics and Artificial Intelligence 26, 1999, pp. 1-27. A preliminary version appears in Proceedings of the Fifth International Symposium on Artificial Intelligence and Mathematics, 1998. The paper is available on CoRR and locally in postscript and pdf.

Reasoning about noisy sensors in the situation calculus

Fahiem Bacchus, Joseph Y. Halpern, and Hector J. Levesque

Agents interacting with an incompletely known dynamic world need to be able to reason about the effects of their actions, and to gain further information about that world using sensors of some sort. Unfortunately, sensor information is inherently noisy, and in general serves only to increase the agent's degree of confidence in various propositions. Building on a general logical theory of action formalized in the situation calculus developed by Reiter and others, we propose a simple axiomatization of the effect on an agent's state of belief of taking a reading from a noisy sensor. By exploiting Reiter's solution to the frame problem, we automatically obtain that these sensor actions leave the rest of the world unaffected, and further, that non-sensor actions change the state of belief of the agent in appropriate ways.

In Artificial Intelligence, 1999, pp. 131-169. A previous version appears in Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI 95), 1995, pp. 1933-1940. Available on CoRR and locally in postscript and pdf.

On the adequacy of modal logic

Joseph Y. Halpern

McCarthy has argued that modal logic is too limited for various purposes. I consider the extent to which he is right.

In Electronic News Journal on Reasoning about Action and Change 3, July 1999. Available here.

A note on knowledge-based programs and specifications

Joseph Y. Halpern

We view a knowledge-based protocol as specifying a set of systems (where a system is a set of runs), in contrast to a standard protocol, which specifies a set of runs. This point of view helps clarify the relationship between standard and knowledge-based protocols. It also leads naturally to a useful notion of knowledge-based specification, a specification that, like a knowledge-based protocol, can be identified with a set of systems.

In Distributed Computing 13:3, 2000, pp. 145-153. The paper is available on CoRR and locally in postscript and pdf.

A logic for SDSI's linked local name spaces

Joseph Y. Halpern and Ronald van der Meyden

Abadi has introduced a logic to explicate the meaning of local names in SDSI, the Simple Distributed Security Infrastructure proposed by Rivest and Lampson. Abadi's logic does not correspond precisely to SDSI, however; it draws conclusions about local names that do not follow from SDSI's name resolution algorithm. Moreover, its semantics is somewhat unintuitive. This paper presents the Logic of Local Name Containment, which does not suffer from these deficiencies. It has a clear semantics and provides a tight characterization of SDSI name resolution. The semantics is shown to be closely related to that of logic programs, leading to an approach to the efficient implementation of queries concerning local names. A complete axiomatization of the logic is also provided.
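
To give a feel for what SDSI name resolution computes, here is a toy fixed-point resolver (illustrative only; this is neither SDSI's actual certificate format nor the logic of the paper): a certificate issued by a key binds one of the issuer's local names either directly to a key or to another principal's local name, and resolution follows the bindings to closure.

    def resolve(certs):
        # certs: list of ((issuer_key, name), subject) pairs, where subject is
        # either a key (a plain string) or a (key, name) pair denoting another
        # local name. Returns a dict mapping each bound local name to the set of
        # keys it resolves to, computed as a least fixed point (so cyclic
        # bindings are handled rather than looping forever).
        values = {local_name: set() for (local_name, _) in certs}
        changed = True
        while changed:
            changed = False
            for (local_name, subject) in certs:
                new = {subject} if isinstance(subject, str) else values.get(subject, set())
                if not new <= values[local_name]:
                    values[local_name] |= new
                    changed = True
        return values

    certs = [
        (("Ka", "friends"), "Kb"),                # Ka's "friends" includes the key Kb
        (("Ka", "friends"), ("Kb", "friends")),   # ... and everyone Kb calls "friends"
        (("Kb", "friends"), "Kc"),
    ]
    print(resolve(certs)[("Ka", "friends")])      # {'Kb', 'Kc'} (set order may vary)

The close relationship to logic programs mentioned in the abstract is visible here: the resolver is just a naive bottom-up evaluation of the corresponding Horn clauses.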

In Journal of Computer Security 9:1,2, 2001, pp. 47-74. A preliminary version appears in Proceedings of the 12th IEEE Computer Security Foundations Workshop, 1999, pp. 111-122. The paper is on CoRR and locally in postscript and pdf.

CoRR: A Computing Research Repository (with commentary)

Joseph Y. Halpern

I describe how CoRR was set up and discuss some policy issues.

In ACM Journal of Computer Documentation, 24:2, 2000, pp. 41-48. There were also commentators who wrote a response to the article; my response to the commentators is on pp. 72-77. This article is based on (and borrows liberally from) two earlier articles: A Computing Research Repository, D-Lib Magazine, November 1998 and The Computing Research Repository: Promoting the Rapid Dissemination of Computer Science Research. The paper is available on CoRR (of course) and locally in postscript and pdf. My response to the commentators (which is fairly self-contained) is available in postscript and pdf.

Substantive rationality and backward induction

Joseph Y. Halpern

Aumann has proved that common knowledge of substantive rationality implies the backwards induction solution in games of perfect information. Stalnaker has proved that it does not. Roughly speaking, a player is substantively rational if, for all vertices v, if the player were to reach vertex v, then the player would be rational at vertex v. It is shown here that the key difference between Aumann and Stalnaker lies in how they interpret this counterfactual. A formal model is presented that lets us capture this difference, in which both Aumann's result and Stalnaker's result are true (under appropriate assumptions).
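
For readers who want the baseline being discussed, here is a minimal backward-induction sketch for a finite game of perfect information (an illustration only; the paper's contribution is the counterfactual analysis surrounding this computation, not the computation itself).

    def backward_induction(node):
        # node is ('leaf', payoffs), with payoffs a tuple indexed by player, or
        # ('move', player, children). Returns (payoffs, path), where path lists
        # the (player, child_index) choices along the resulting play, assuming
        # each player maximizes its own payoff at its decision nodes.
        if node[0] == 'leaf':
            return node[1], []
        _, player, children = node
        best = None
        for i, child in enumerate(children):
            payoffs, path = backward_induction(child)
            if best is None or payoffs[player] > best[0][player]:
                best = (payoffs, [(player, i)] + path)
        return best

    # Tiny centipede-like game: player 0 stops (2, 1) or continues to a node
    # where player 1 stops (1, 3) or continues to (3, 2).
    game = ('move', 0, [('leaf', (2, 1)),
                        ('move', 1, [('leaf', (1, 3)), ('leaf', (3, 2))])])
    print(backward_induction(game))   # ((2, 1), [(0, 0)]): anticipating player 1, player 0 stops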

In Games and Economic Behavior 37, 2001, pp. 425-435. The paper is available in postscript and pdf.

Alternative semantics for unawareness

Joseph Y. Halpern

Modica and Rustichini [1994] provided a logic for reasoning about knowledge where agents may be unaware of certain propositions. However, their original approach had the unpleasant property that nontrivial unawareness was incompatible with partitional information structures. More recently, Modica and Rustichini [1999] have provided an approach that allows for nontrivial unawareness in partitional information structures. Here it is shown that their approach can be viewed as a special case of a general approach to unawareness considered in Belief, awareness, and limited reasoning.

In Games and Economic Behavior 37, 2001, pp. 321-339. The paper is available in postscript and pdf.

Axiomatizing causal reasoning

Joseph Y. Halpern

Causal models defined in terms of a collection of equations, as introduced by Pearl, are axiomatized here. Axiomatizations are provided for three successively more general classes of causal models: (1) the class of recursive theories (those without feedback), (2) the class of theories where the solutions to the equations are unique, and (3) arbitrary theories (where the equations may not have solutions and, if they do, they are not necessarily unique). It is shown that to reason about causality in the most general third class, we must extend the language used by Galles and Pearl. In addition, the complexity of the decision procedures is examined for all the languages and classes of models considered.
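
As a concrete reminder of the objects being axiomatized, here is a toy recursive model in the spirit of Pearl-style structural equations (the variables and equations are invented for illustration): each endogenous variable is given by an equation over the other variables, and an intervention do(X = x) replaces X's equation by the constant x.

    def solve(equations, context, intervention=None):
        # equations: dict mapping each endogenous variable to a function of the
        # current assignment. context: values of the exogenous variables.
        # intervention: optional dict of forced values, modeling do(X = x).
        # For a recursive (acyclic) model, len(equations) passes suffice to
        # reach the unique solution regardless of evaluation order.
        values = dict(context)
        for var in equations:
            values.setdefault(var, None)
        for _ in range(len(equations)):
            for var, f in equations.items():
                values[var] = intervention[var] if intervention and var in intervention else f(values)
        return values

    equations = {
        'fire': lambda v: v['lightning'] or v['match'],
        'forest_destroyed': lambda v: v['fire'],
    }
    context = {'lightning': True, 'match': False}
    print(solve(equations, context))                      # fire and forest_destroyed both True
    print(solve(equations, context, {'fire': False}))     # under do(fire = false), forest_destroyed is False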

In Journal of AI Research 12, 2000, pp. 317-337. A preliminary version appears in Proceedings of the Fourteenth Conference on Uncertainty in AI, 1998, pp. 202-210. The paper is available on CoRR and locally in postscript and pdf.

First-order conditional logic for default reasoning revisited

Nir Friedman, Joseph Y. Halpern, and Daphne Koller

Conditional logics play an important role in recent attempts to investigate default reasoning. This paper investigates first-order conditional logic. We show that, as for first-order probabilistic logic, it is important not to confound statistical conditionals over the domain (such as ``most birds fly''), and subjective conditionals over possible worlds (such as ``I believe that Tweety is unlikely to fly''). We then address the issue of ascribing semantics to first-order conditional logic. As in the propositional case, there are many possible semantics. To study the problem in a coherent way, we use plausibility structures. These provide us with a general framework in which many of the standard approaches can be embedded. We show that while these standard approaches are all the same at the propositional level, they are significantly different in the context of a first-order language. We show that plausibilities provide the most natural extension of conditional logic to the first-order case: We provide a sound and complete axiomatization that contains only the KLM properties and standard axioms of first-order modal logic. We show that most of the other approaches have additional properties, which result in an inappropriate treatment of an infinitary version of the lottery paradox.

In ACM Transactions on Computational Logic 1:2, 2000, pp. 175-207. A preliminary version, with the title ``First-order conditional logic revisited'', appears in AAAI-96 (Proceedings of the Thirteenth National Conference on Artificial Intelligence), 1996, pp. 1305-1312. The full paper is available on CoRR and locally in postscript and pdf.

A decision-theoretic approach to reliable message delivery

Francis C. Chu and Joseph Y. Halpern

We argue that the tools of decision theory need to be taken more seriously in the specification and analysis of systems. We illustrate this by considering a simple problem involving reliable communication, showing how considerations of utility and probability can be used to decide when it is worth sending heartbeat messages and, if they are sent, how often they should be sent.

In Distributed Computing 14, 2001, pp. 1-16. A preliminary version appears in Proceedings of the 12th International Symposium on Distributed Computing, 1998, pp. 89-103. The full paper is available on CoRR and locally in postscript and pdf.

Multi-agent only knowing

Joseph Y. Halpern and Gerhard Lakemeyer

Levesque introduced a notion of ``only knowing'', with the goal of capturing certain types of nonmonotonic reasoning. Levesque's logic dealt with only the case of a single agent. Recently, both Halpern and Lakemeyer independently attempted to extend Levesque's logic to the multi-agent case. Although there are a number of similarities in their approaches, there are some significant differences. In this paper, we reexamine the notion of only knowing, going back to first principles. In the process, we simplify Levesque's completeness proof, and point out some problems with the earlier definitions. This leads us to reconsider what the properties of only knowing ought to be. We provide an axiom system that captures our desiderata, and show that it has a semantics that corresponds to it. The axiom system has an added feature of interest: it includes a modal operator for satisfiability, and thus provides a complete axiomatization for satisfiability in the logic K45.

In Journal of Logic and Computation 11:1, 2001, pp. 41-70. A preliminary version appears in Proceedings of the Sixth Conference on Theoretical Aspects of Rationality and Knowledge, 1996, pp. 251-266. The paper is available on CoRR and locally in postscript and pdf.

Conditional plausibility measures and Bayesian networks

Joseph Y. Halpern

A general notion of algebraic conditional plausibility measures is defined. Probability measures, ranking functions, possibility measures, and (under the appropriate definitions) sets of probability measures can all be viewed as defining algebraic conditional plausibility measures. It is shown that the technology of Bayesian networks can be applied to algebraic conditional plausibility measures.

In Journal of AI Research 14, 2001, pp. 359-389. A preliminary version appears in Proceedings of the Sixteenth Conference on Uncertainty in AI, 2000, pp. 247-255. The full paper is available on CoRR and locally in postscript and pdf.

On the NP-completeness of finding an optimal strategy in games with common payoffs

Francis C. Chu and Joseph Y. Halpern

Given a finite game with common payoffs (i.e., the players have completely common interests), we show that the problem of determining whether there exists a joint strategy where each player nets at least k is NP-complete.
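
The decision problem itself is easy to state in code; the brute-force check below is only to make it concrete (the tiny coordination game is invented, and since the number of joint strategies explodes once games are described compactly rather than by listing strategies, brute force of this kind does not scale, which is consistent with the NP-completeness result).

    from itertools import product

    def exists_joint_strategy(strategy_sets, payoff, k):
        # strategy_sets: one finite collection of pure strategies per player.
        # payoff(profile): the common payoff of a joint strategy profile.
        # Returns True iff some joint strategy nets every player at least k
        # (with common payoffs, one player's payoff is everyone's payoff).
        return any(payoff(profile) >= k for profile in product(*strategy_sets))

    # Toy coordination game: two players, strategies L and R, payoff 1 if they match.
    strategy_sets = [('L', 'R'), ('L', 'R')]
    payoff = lambda profile: 1 if profile[0] == profile[1] else 0
    print(exists_joint_strategy(strategy_sets, payoff, 1))   # True
    print(exists_joint_strategy(strategy_sets, payoff, 2))   # False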

In International Journal of Game Theory 30:1, 2001, pp. 99-106. The paper is available on CoRR and locally in postscript and pdf.

A characterization of eventual Byzantine agreement

Joseph Y. Halpern, Yoram Moses, and Orli Waarts

We investigate eventual Byzantine agreement (EBA) in the crash and omission failure models. The emphasis is on characterizing optimal EBA protocols in terms of the states of knowledge required by the processors in order to attain EBA. It is well known that common knowledge among the nonfaulty processors is a necessary and sufficient condition for attaining simultaneous Byzantine agreement (SBA). We define a new variant that we call continual common knowledge, and use it to provide necessary and sufficient conditions for attaining EBA. Using our characterization, we provide a technique that allows us to start with any EBA protocol and convert it to an optimal EBA protocol using a two-step process.

In SIAM Journal on Computing 31:3, 2001, pp. 838-865. A preliminary version appears in Proceedings of the 9th ACM Symposium on Principles of Distributed Computing, 1990, pp. 333-346. A version of the paper similar to the published version is available in postscript and pdf.

Characterizing the common prior assumption

Joseph Y. Halpern

Logical characterizations of the common prior assumption (CPA) are investigated. Two approaches are considered. The first is called frame distinguishability, and is similar in spirit to the approaches considered in the economics literature. Results similar to those obtained in the economics literature are proved here as well, namely, that we can distinguish finite spaces that satisfy the CPA from those that do not in terms of disagreements in expectation. However, it is shown that, for the language used here, no formulas can distinguish infinite spaces satisfying the CPA from those that do not. The second approach considered is that of finding a sound and complete axiomatization. Such an axiomatization is provided; again, the key axiom involves disagreements in expectation. The same axiom system is shown to be sound and complete both in the finite and the infinite case. Thus, the two approaches to characterizing the CPA behave quite differently in the case of infinite spaces.

In Journal of Economic Theory 106:2, 2002, pp. 316-355. A preliminary version appears in Proceedings of the Seventh Conference on Theoretical Aspects of Rationality and Knowledge, 1998, pp. 133-146. The paper is available in postscript and pdf.
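
To give a concrete sense of what a "disagreement in expectation" is, the finite-space characterizations alluded to above have, roughly, a no-agreeable-bets flavor; for two agents, one schematic rendering (the notation is illustrative, not the paper's) is

    \text{CPA holds} \quad\Longleftrightarrow\quad
    \neg\,\exists f .\; C\bigl(E_1[f] > 0 \;\wedge\; E_2[f] < 0\bigr),

where C denotes common knowledge and E_i[f] is agent i's expectation of the random variable f conditioned on its information. As the abstract notes, frame distinguishability in this spirit works only for finite spaces, while the axiomatic approach carries over to infinite spaces as well.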

Using counterfactuals in knowledge-based programming

Joseph Y. Halpern and Yoram Moses

We show how counterfactuals can be added to the framework of knowledge-based programs of Fagin, Halpern, Moses, and Vardi. We show that counterfactuals allow us to capture in a natural way notions like minimizing the number of messages that are sent, whereas attempts to formalize these notions without counterfactuals lead to some rather counterintuitive behavior. We also show how knowledge-based programs with counterfactuals can capture subgame-perfect equilibria in games of perfect information.

In Distributed Computing 17:2, 2004, pp. 91-106. A preliminary version appears in Proceedings of the Seventh Conference on Theoretical Aspects of Rationality and Knowledge, 1998, pp. 97-110. The paper is available on CoRR and locally in postscript and pdf.

A knowledge-theoretic analysis of uniform distributed coordination and failure detectors

Joseph Y. Halpern and Aleta Ricciardi

It is shown that, if there is no bound on the number of faulty processes, then, in a precise sense, in a system with unreliable but fair communication, Uniform Distributed Coordination (UDC) can be attained if and only if the system has perfect failure detectors. This result is generalized to the case where there is a bound t on the number of faulty processes. It is shown that a certain type of generalized failure detector is necessary and sufficient for achieving UDC in a context with at most t faulty processes. Reasoning about processes' knowledge as to which other processes are faulty plays a key role in the analysis.

In Distributed Computing 17:3, 2005, pp. 223-236. A preliminary version appears in Proceedings of the Eighteenth Annual ACM Symposium on Principles of Distributed Computing, 1999, pp. 73-82. The paper is available on CoRR and locally in postscript and pdf.

On the expressive power of nondeterminism in dynamic logic

Piotr Berman, Joseph Y. Halpern, and Jerzy Tiuryn

We show that nondeterministic first-order regular dynamic logic is strictly more powerful than its deterministic counterpart, thus settling a long-standing open question.

In Proceedings of the 9th International Colloquium on Automata, Languages, and Programming, 1982, pp. 48-60.

From denotational to operational and axiomatic semantics for Algol-like languages: an overview

Boris A. Trakhtenbrot, Joseph Y. Halpern, and Albert R. Meyer

The advantages of denotational over operational semantics are argued. A denotational semantics is provided for an ALGOL-like language with finite-mode procedures, blocks with local storage, and sharing (aliasing). Procedure declarations are completely explained in the usual framework of complete partial orders, but cpo's are inadequate for the semantics of blocks, and a new class of store models is developed. Partial correctness theory over store models is developed for commands which may contain calls to global procedures, but do not contain function procedures returning storable values.

In Proceedings of CMU Logics of Programs Conference, Lecture Notes in Computer Science, 1983, pp. 474-500. The paper is available in pdf.

A hardware semantics based on temporal intervals

Joseph Y. Halpern, Benjamin Moszkowski, and Zohar Manna

We present an interval-based logic that permits the rigorous specification of a variety of hardware components and facilitates describing properties such as correctness of implementation. Conceptual levels of circuit operation ranging from detailed quantitative timing and signal propagation up to functional behavior are integrated in a unified way. After giving some motivation for reasoning about hardware, we present the propositional and first-order syntax and semantics of the temporal logic. In addition, we illustrate techniques for describing signal transitions as well as for formally specifying and comparing a number of delay models. Throughout the discussion, the formalism provides a means for examining such concepts as device equivalence and internal states.

In Proceedings of the 10th International Colloquium on Automata, Languages, and Programming, 1983, pp. 278-291.

The semantics of local storage, or What makes the free-list free

Joseph Y. Halpern, Albert R. Meyer, and Boris A. Trakhtenbrot

Denotational semantics for an ALGOL-like language with finite-mode procedures, blocks with local storage, and sharing (aliasing) is given by translating programs into an appropriately typed lambda-calculus. Procedures are entirely explained at a purely functional level -- independent of the interpretation of program constructs -- by continuous models for lambda-calculus. However, the usual (cpo) models are not adequate to model storage allocation for blocks because storage overflow presents an apparent discontinuity. New domains of store models are offered to solve this problem.

In Proceedings of the 11th Annual ACM Symposium on the Principles of Programming Languages, 1984, pp. 245-257.

A good Hoare axiom system for an Algol-like language

Joseph Y. Halpern

Clarke [1979] has shown that it is impossible to obtain a relatively complete axiomatization of a block-structured programming language if it has features such as static scope, recursive procedure calls with procedure parameters, and global variables, provided we take first-order logic as the underlying assertion language. We show that if we take a more powerful assertion language, and hence a more powerful notion of expressiveness, such a complete axiomatization is possible. The crucial point is that we need to be able to express the weakest precondition of commands with free procedure parameters. The axioms presented here are natural and reflect the syntax of the programming language. Such an axiom system provides a tool for understanding how to reason about languages with powerful control features.

In Proceedings of the 11th Annual ACM Symposium on the Principles of Programming Languages, 1984, pp. 262-271.

Reasoning about knowledge: an overview

Joseph Y. Halpern

This is my first survey on knowledge. It was updated in 1991 and 1995.

In Theoretical Aspects of Reasoning About Knowledge: Proceedings of the 1986 Conference, Morgan Kaufmann, 1986, pp. 1-17; reprinted in Proceedings of the National Computer Conference, 1986, pp. 219-228.

A knowledge-based analysis of zero knowledge

Joseph Y. Halpern, Yoram Moses, and Mark Tuttle

While the intuition underlying a zero knowledge proof system is that no ``knowledge'' is leaked by the prover to the verifier, researchers are just beginning to analyze such proof systems in terms of formal notions of knowledge. In this paper, we show how interactive proof systems motivate a new notion of practical knowledge, and we capture the definition of an interactive proof system in terms of practical knowledge. Using this notion of knowledge, we formally capture and prove the intuition that the prover does not leak any knowledge of any fact (other than the fact being proven) during a zero knowledge proof. We extend this result to show that the prover does not leak any knowledge of how to compute any information (such as the factorization of a number) during a zero knowledge proof. Finally, we define the notion of a weak interactive proof in which the prover is limited to probabilistic, polynomial-time computations, and we prove analogous security results for such proof systems. We show that, in a precise sense, any nontrivial weak interactive proof must be a proof about the prover's knowledge, and show that, under natural conditions, the notions of interactive proofs of knowledge defined by Tompa and Woll [1987] and Fiat, Feige, and Shamir [1987] are instances of weak interactive proofs.

In Proceedings of the 20th ACM Symposium on Theory of Computing, 1988, pp. 132-147. A version of the paper similar to the published version is available in postscript and pdf.

Reasoning about knowledge and time in asynchronous systems

Joseph Y. Halpern and Moshe Y. Vardi

In Proceedings of the 20th ACM Symposium on Theory of Computing, 1988, pp. 53-65.

Knowledge and probability in distributed systems (Abstract)

Joseph Y. Halpern

This is an abstract for an invited talk; it's a brief summary of material in ``Knowledge, probability, and adversaries''.

In TAPSOFT '91, Proceedings of the International Joint Conference on Theory and Practice of Software Development (S. Abramsky and T. S. E. Maibaum, eds.), Volume 2, 1991, pp. 50-54.

From statistics to beliefs

Fahiem Bacchus, Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

An intelligent agent uses known facts, including statistical knowledge, to assign degrees of belief to assertions it is uncertain about. We investigate three principled techniques for doing this. All three are applications of the principle of indifference, because they assign equal degree of belief to all basic ``situations'' consistent with the knowledge base. They differ because there are competing intuitions about what the basic situations are. Various natural patterns of reasoning, such as the preference for the most specific statistical data available, turn out to follow from some or all of the techniques. This is an improvement over earlier theories, such as work on direct inference and reference classes, which arbitrarily postulate these patterns without offering any deeper explanations or guarantees of consistency. The three methods we investigate have surprising characterizations: there are connections to the principle of maximum entropy, a principle of maximal independence, and a ``center of mass'' principle. There are also unexpected connections between the three, that help us understand why the specific language chosen (for the knowledge base) is much more critical in inductive reasoning of the sort we consider than it is in traditional deductive reasoning.

In AAAI-92 (Proceedings of the Tenth National Conference on Artificial Intelligence), 1992, pp. 602-608. A version of the paper similar to the published version is available in postscript and pdf.
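
The counting idea behind the principle of indifference can be seen in a toy Python sketch: fix a small domain, treat every assignment of a property consistent with the statistical information as one "situation", and compute the fraction of situations in which the query holds. The birds/Tweety example and the finite domain are invented for illustration; the paper works with asymptotic domain sizes and a far richer language.

    from itertools import product

    def degree_of_belief(domain_size, proportion, target=0):
        """Toy random-worlds-style computation: among all ways of deciding which
        individuals fly such that exactly `proportion` of them do, return the
        fraction in which individual `target` (say, Tweety) flies."""
        worlds = [w for w in product([True, False], repeat=domain_size)
                  if sum(w) == round(proportion * domain_size)]
        return sum(1 for w in worlds if w[target]) / len(worlds)

    # "90% of birds fly; Tweety is a bird", over a 10-bird domain:
    print(degree_of_belief(10, 0.9))   # 0.9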

Indifference and irrelevance

Fahiem Bacchus, Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

This is a preliminary version of work that appears in ``Statistical foundations for default reasoning''. In Proceedings of the Fourth International Workshop on Nonmonotonic Reasoning, 1992.

A logic for approximate reasoning

Daphne Koller and Joseph Y. Halpern

We investigate the problem of reasoning with imprecise quantitative information. We give formal semantics to a notion of approximate observations, and define two types of entailment for a knowledge base with imprecise information: a cautious notion, which allows only completely justified conclusions, and a bold one, which allows jumping to conclusions. Both versions of the entailment relation are shown to be decidable. We investigate the behavior of the two alternatives on various examples, and show that the answers obtained are intuitively desirable. The behavior of these two entailment relations is completely characterized for a certain sublanguage, in terms of the logic of true equality. We demonstrate various properties of the full logic, and show how it applies to many situations of interest.

In Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning, 1992, pp. 153-164. A version of the paper similar to the published version is available in postscript and pdf.

Statistical foundations for default reasoning

Fahiem Bacchus, Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement p. The degree of belief can be used to decide whether to defeasibly conclude p. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of Goldszmidt, Morris, and Pearl, which applies maximum entropy ideas to epsilon-semantics, can be embedded in our framework.

In Proceedings of the 13th International Joint Conference on Artificial Intelligence (IJCAI 93), 1993, pp. 563-569. More details on these and related results can be found in ``From statistical knowledge bases to degrees of belief''.

Reasoning about only knowing with many agents

Joseph Y. Halpern

We extend two notions of ``only knowing'', that of Halpern and Moses and that of Levesque, to many agents. The main lesson of this paper is that these approaches do have reasonable extensions to the multi-agent case. Our results also shed light on the single-agent case. For example, it was always viewed as significant that the HM notion of only knowing was based on S5, while Levesque's was based on K45. In fact, our results show that the HM notion is better understood in the context of K45. Indeed, in the single-agent case, the HM notion remains unchanged if we use K45 (or KD45) instead of S5. However, in the multi-agent case, there are significant differences between K45 and S5. Moreover, all the results proved by Halpern and Moses for the single-agent case extend naturally to the multi-agent case for K45, but not for S5.

In AAAI-93 (Proceedings of the Eleventh National Conference on Artificial Intelligence), 1993, pp. 655-661. A more thorough discussion of the multi-agent version of the HM notion of only knowing appears in ``A theory of knowledge and ignorance for many agents''. ``Multi-agent only knowing'' (joint with Gerhard Lakemeyer) discusses the multi-agent version of Levesque's notion in more detail.

Generating degrees of belief from statistical information: an overview

Fahiem Bacchus, Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

This is an abridged version of material in ``From statistical knowledge bases to degrees of belief''.

In Proceedings of the 13th Conference on Foundations of Software Technology and Theoretical Computer Science, Lecture Notes in Computer Science, Vol. 761, Springer, 1993, pp. 318-325.

Algorithmic knowledge

Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi

The standard model of knowledge in multi-agent systems suffers from what has been called the logical omniscience problem: agents know all tautologies, and know all the logical consequences of their knowledge. For many types of analysis, this turns out not to be a problem. Knowledge is viewed as being ascribed by the system designer to the agents; agents are not assumed to compute their knowledge in any way, nor is it assumed that they can necessarily answer questions based on their knowledge. Nevertheless, in many applications that we are interested in, agents need to act on their knowledge. In such applications, an externally ascribed notion of knowledge is insufficient: clearly an agent can base his actions only on what he explicitly knows. Furthermore, an agent that has to act on his knowledge has to be able to compute this knowledge; we do need to take into account the algorithms available to the agent, as well as the ``effort'' required to compute knowledge. In this paper, we show how the standard model can be modified in a natural way to take the computational aspects of knowledge into account.

In Proceedings of the 5th Conference on Theoretical Aspects of Reasoning About Knowledge, 1994, pp. 255-266. The paper is available in postscript and pdf. Most of the material here is taken from Chapter 10 of our book.
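
A minimal Python sketch of the "knowledge algorithm" idea: the agent answers a query by running a concrete, effort-bounded procedure over its local state and returns "Yes", "No", or "?" when it cannot tell. The query format, local-state representation, and the primality example are invented for illustration.

    def knowledge_algorithm(query, local_state):
        """Return 'Yes' if the query is established from the local state,
        'No' if it is refuted, and '?' if the algorithm cannot tell within
        its effort bound."""
        if query == "n_is_composite":
            n = local_state["n"]
            limit = local_state["effort"]           # bounded computational effort
            for d in range(2, min(n, limit)):
                if n % d == 0:
                    return "Yes"
            return "?" if limit < n else "No"       # ran out of effort vs. refuted
        return "?"

    print(knowledge_algorithm("n_is_composite", {"n": 91, "effort": 10}))    # Yes: 7 divides 91
    print(knowledge_algorithm("n_is_composite", {"n": 97, "effort": 10}))    # ?: too little effort
    print(knowledge_algorithm("n_is_composite", {"n": 97, "effort": 1000}))  # No: 97 is prime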

A knowledge-based framework for belief change, Part I: Foundations

Nir Friedman and Joseph Y. Halpern

We propose a general framework in which to study belief change. We begin by defining belief in terms of knowledge and plausibility: an agent believes p if he knows that p is true in all the worlds he considers most plausible. We then consider some properties defining the interaction between knowledge and plausibility, and show how these properties affect the properties of belief. In particular, we show that by assuming two of the most natural properties, belief becomes a KD45 operator. Finally, we add time to the picture. This gives us a framework in which we can talk about knowledge, plausibility (and hence belief), and time, which extends the framework of Halpern and Fagin for modeling knowledge in multi-agent systems. (See ``Modelling knowledge and action in distributed systems''.) We show that our framework is quite expressive and lets us model in a natural way a number of different scenarios for belief change. For example, we show how we can capture an analogue to prior probabilities, which can be updated by ``conditioning''. In a related paper, we show how the two best studied scenarios, belief revision and belief update, fit into the framework.

In Proceedings of the 5th Conference on Theoretical Aspects of Reasoning About Knowledge, 1994, pp. 44-64. The full version of the paper is ``Modeling belief in dynamic systems, Part I: Foundations''.
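
One standard way to make the definition of belief in the abstract precise (the notation is illustrative and glosses over how the plausibility ordering is indexed) is

    (M,w) \models B_i\varphi \quad\text{iff}\quad (M,w') \models \varphi
    \text{ for every } w' \text{ that is most plausible, according to } i,
    \text{ among the worlds } i \text{ considers possible at } w.

The KD45 properties of belief mentioned in the abstract then come from natural assumptions about how the plausibility ordering interacts with the possibility relation.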

A knowledge-based framework for belief change, Part II: Revision and update

Nir Friedman and Joseph Y. Halpern

The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. In a companion paper we introduced a new framework to model belief change. This framework combines temporal and epistemic modalities with a notion of plausibility, allowing us to examine the changes of beliefs over time. In this paper we show how belief revision and belief update can be captured in our framework. This allows us to compare the assumptions made by each method and to better understand the principles underlying them. In particular, it allows us to understand the source of Gardenfors' triviality result for belief revision and suggests a way of mitigating the problem. It also shows that Katsuno and Mendelzon's notion of belief update depends on several strong assumptions that may limit its applicability in AI.

In Proceedings of the 4th International Conference on Principles of Knowledge Representation and Reasoning, 1994, pp. 190-201. The full paper, Modeling belief in dynamic systems, Part II: Revision and Update, is available in postscript and pdf. It appears in Journal of AI Research 10, 1999, pp. 117-167.

On the complexity of conditional logics

Nir Friedman and Joseph Y. Halpern

Conditional logics, introduced by Lewis and Stalnaker, have been utilized in artificial intelligence to capture a broad range of phenomena. In this paper we examine the complexity of several variants discussed in the literature. We show that, in general, deciding satisfiability is PSPACE-complete for formulas with arbitrary conditional nesting and NP-complete for formulas with bounded nesting of conditionals. However, we provide several exceptions to this rule. Of particular note are results showing that (a) when assuming uniformity (i.e., that all worlds agree on what worlds are possible), the decision problem becomes EXPTIME-complete even for formulas with bounded nesting, and (b) when assuming absoluteness (i.e., that all worlds agree on all conditional statements), the decision problem is NP-complete for formulas with arbitrary nesting.

In Proceedings of the 4th International Conference on Principles of Knowledge Representation and Reasoning, 1994, pp. 202-213. The paper is available in postscript and pdf.

Conditional logics of belief change

Nir Friedman and Joseph Y. Halpern

The study of belief change has been an active area in philosophy and AI. In recent years two special cases of belief change, belief revision and belief update, have been studied in detail. Belief revision and update are clearly not the only possible notions of belief change. In this paper we investigate properties of a range of possible belief change operations. We start with an abstract notion of a belief change system and provide a logical language that describes belief change in such systems. We then consider several reasonable properties one can impose on such systems and characterize them axiomatically. We show that both belief revision and update fit into our classification. As a consequence, we get both a semantic and an axiomatic (proof-theoretic) characterization of belief revision and update (as well as some belief change operations that generalize them), in one natural framework.

In AAAI-94 (Proceedings of the Twelfth National Conference on Artificial Intelligence), 1994, pp. 915-921. The paper is available in postscript and pdf.

An operational semantics for knowledge bases

Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi

The standard approach in AI to knowledge representation is to represent an agent's knowledge symbolically as a collection of formulas, which we can view as a knowledge base. An agent is then said to know a fact if it is provable from the formulas in his knowledge base. Halpern and Vardi advocated a model-theoretic approach to knowledge representation. In this approach, the key step is representing the agent's knowledge using an appropriate semantic model. Here, we model knowledge bases operationally as multi-agent systems. Our results show that this approach offers significant advantages.

In AAAI-94 (Proceedings of the Twelfth National Conference on Artificial Intelligence), pp. 1142-1147, 1994. This material is largely taken from Chapter 4 of our book. The paper is available in postscript and pdf.

Forming beliefs about a changing world

Fahiem Bacchus, Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

The situation calculus is a popular technique for reasoning about action and change. However, its restriction to a first-order syntax and pure deductive reasoning makes it unsuitable in many contexts. In particular, we often face uncertainty, due either to lack of knowledge or to some probabilistic aspects of the world. While attempts have been made to address aspects of this problem, most notably using nonmonotonic reasoning formalisms, the general problem of uncertainty in reasoning about action has not been fully dealt with in a logical framework. In this paper we present a theory of action that extends the situation calculus to deal with uncertainty. Our framework is based on applying the random-worlds approach of ``From statistical knowledge bases to degrees of belief'' to a situation calculus ontology, enriched to allow the expression of probabilistic action effects. Our approach is able to solve many of the problems imposed by incomplete and probabilistic knowledge within a unified framework. In particular, we obtain a default Markov property for chains of actions, a derivation of conditional independence from irrelevance, and a simple solution to the frame problem.

In AAAI-94 (Proceedings of the Twelfth National Conference on Artificial Intelligence), 1994, pp. 222-229. The paper is available in postscript and pdf.

Generating new beliefs from old

Fahiem Bacchus, Adam J. Grove, Joseph Y. Halpern, and Daphne Koller

In ``From statistical knowledge bases to degrees of belief'' we studied the random-worlds approach---a particular (and quite powerful) method for generating degrees of belief (i.e., subjective probabilities) from a knowledge base consisting of objective (first-order, statistical, and default) information. But allowing a knowledge base to contain only objective information is sometimes limiting. We occasionally wish to include information about degrees of belief in the knowledge base as well, because there are contexts in which old beliefs represent important information that should influence new beliefs. In this paper, we describe three quite general techniques for extending a method that generates degrees of belief from objective information to one that can make use of degrees of belief as well. All of our techniques are based on well-known approaches, such as cross-entropy. We discuss general connections between the techniques and in particular show that, although conceptually and technically quite different, all of the techniques give the same answer when applied to the random-worlds method.

In Proceedings of the Tenth Conference on Uncertainty in AI, 1994, pp. 37-45. The paper is available on CoRR and locally in postscript and pdf.
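
For reference, the cross-entropy (relative-entropy) functional mentioned above is the standard one: updating an old distribution P on new information that constrains the posterior to lie in a set S picks

    P^{\mathrm{new}} \;=\; \operatorname*{argmin}_{P' \in S} D(P' \,\|\, P),
    \qquad\text{where}\qquad
    D(P' \,\|\, P) \;=\; \sum_{w} P'(w)\,\log\frac{P'(w)}{P(w)}.

The display is textbook notation rather than anything specific to the paper; the paper's point is that several such extension techniques agree when applied to the random-worlds method.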

Representation dependence in probabilistic inference

Joseph Y. Halpern and Daphne Koller

Non-deductive reasoning systems are often representation dependent: representing the same situation in two different ways may cause such a system to return two different answers. This is generally viewed as a significant problem. For example, the principle of maximum entropy has been subjected to much criticism due to its representation dependence. There has, however, been almost no work investigating representation dependence. In this paper, we formalize this notion and show that it is not a problem specific to maximum entropy. In fact, we show that any probabilistic inference system that sanctions certain important patterns of reasoning, such as a minimal default assumption of independence, must suffer from representation dependence. We then show that invariance under a restricted class of representation changes can form a reasonable compromise between representation independence and other desiderata.

In Journal of AI Research 21, 2004, pp. 319-356. A preliminary version appears in Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI 95), 1995, pp. 1853-1860. The paper is available on CoRR and locally in postscript and pdf.

Plausibility measures: a user's manual

Nir Friedman and Joseph Y. Halpern

We examine a new approach to modeling uncertainty based on plausibility measures, where a plausibility measure just associates with an event its plausibility, an element of some partially ordered set. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. The lack of structure in a plausibility measure makes it easy for us to add structure on an ``as needed'' basis, letting us examine what is required to ensure that a plausibility measure has certain properties of interest. This gives us insight into the essential features of the properties in question, while allowing us to prove general results that apply to many approaches to reasoning about uncertainty. Plausibility measures have already proved useful in analyzing default reasoning. In this paper, we examine their ``algebraic properties'', analogues to the use of + and x in probability theory. An understanding of such properties will be essential if plausibility measures are to be used in practice as a representation tool.

In Proceedings of the Eleventh Conference on Uncertainty in AI, 1995, pp. 175-184. The paper is available in postscript and pdf.

Irrelevance and conditioning in first-order probabilistic logic

Daphne Koller and Joseph Y. Halpern

First-order probabilistic logic is a powerful knowledge representation language. Unfortunately, deductive reasoning based on the standard semantics for this logic does not support certain desirable patterns of reasoning, such as indifference to irrelevant information or substitution of constants into universal rules. We show that both these patterns rely on a first-order version of probabilistic independence, and provide semantic conditions to capture them. The resulting insight enables us to understand the effect of conditioning on independence, and allows us to describe a procedure for determining when independencies are preserved under conditioning. We apply this procedure in the context of a sound and powerful inference algorithm for reasoning from statistical knowledge bases.

In AAAI-96 (Proceedings of the Thirteenth National Conference on Artificial Intelligence), 1996, pp. 569-576. The paper is available in postscript and pdf.

Using multi-agent systems to represent uncertainty

Joseph Y. Halpern

I consider a logical framework for modeling uncertainty, based on the use of possible worlds, that incorporates knowledge, probability, and time. This turns out to be a powerful approach for modeling many problems of interest. I show how it can be used to give insights into (among other things) several well-known puzzles.

In AAAI-96 (Proceedings of the Thirteenth National Conference on Artificial Intelligence), 1996, pp. 1329-1330. This is just a summary of an invited talk I gave. A slightly longer version appears in SCAI '97 (Proceedings of the Sixth Scandinavian Conference on Artificial Intelligence), 1997, pp. 1-4. The talk was largely based on A logical approach to reasoning about uncertainty: a tutorial.

A qualitative Markov assumption and its implications for belief change

Nir Friedman and Joseph Y. Halpern

The study of belief change has been an active area in philosophy and AI. In recent years, two special cases of belief change, belief revision and belief update, have been studied in detail. Roughly speaking, revision treats a surprising observation as a sign that previous beliefs were wrong, while update treats a surprising observation as an indication that the world has changed. In general, we would expect that an agent making an observation may both want to revise some earlier beliefs and assume that some change has occurred in the world. We define a novel approach to belief change that allows us to do this, by applying ideas from probability theory in a qualitative setting. The key idea is to use a qualitative Markov assumption, which says that state transitions are independent. We show that a recent approach to modeling qualitative uncertainty using plausibility measures allows us to make such a qualitative Markov assumption in a relatively straightforward way, and show how the Markov assumption can be used to provide an attractive belief-change model.

In Proceedings of the Twelfth Conference on Uncertainty in AI, 1996, pp. 263-273. The paper is available in postscript and pdf.

Common knowledge: now you have it, now you don't

Ronald Fagin, Joseph Y. Halpern, Yoram Moses, and Moshe Y. Vardi

This is a variant of Common knowledge revisited, which in turn is largely taken from Chapter 11 of our book.

In Intelligent Systems: A Semiotics Perspective, Proceedings of the International Multidisciplinary Conference ``Intelligent Systems: A Semiotic Perspective'', Vol.1, 1996, pp. 177-183.

Probability update: conditioning vs. cross-entropy

Adam J. Grove and Joseph Y. Halpern

Conditioning is the generally agreed-upon method for updating probability distributions when one learns that an event is certainly true. But it has been argued that we need other rules, in particular the rule of cross-entropy minimization, to handle updates that involve uncertain information. In this paper we re-examine such a case: van Fraassen's Judy Benjamin problem, which in essence asks how one might update given the value of a conditional probability. We argue that--contrary to the suggestions in the literature--it is possible to use simple conditionalization in this case, and thereby obtain answers that agree fully with intuition. This contrasts with proposals such as cross-entropy, which are easier to apply but can give unsatisfactory answers. Based on the lessons from this example, we speculate on some general philosophical issues concerning probability update.

In Proceedings of the Thirteenth Conference on Uncertainty in AI, 1997, pp. 208-214. Available on CoRR and locally in postscript and pdf.

Defining explanation in probabilistic systems

Urszula Chajewska and Joseph Y. Halpern

As probabilistic systems gain popularity and come into wider use, the need for a mechanism that explains the system's findings and recommendations becomes more critical. The system will also need a mechanism for ordering competing explanations. We examine two representative approaches to explanation in the literature -- one due to Gardenfors and one due to Pearl -- and show that both suffer from significant problems. We propose an approach to defining a notion of ``better explanation'' that combines some of the features of both together with more recent work by Pearl and others on causality.

In Proceedings of the Thirteenth Conference on Uncertainty in AI, 1997, pp. 62-71. The paper is available in postscript and pdf.

Belief revision with unreliable observations

Craig Boutilier, Nir Friedman, and Joseph Y. Halpern

Research in belief revision has been dominated by work that lies firmly within the classic AGM paradigm, characterized by a well-known set of postulates governing the behavior of ``rational'' revision functions. A postulate that is rarely criticized is the success postulate: the result of revising by an observed proposition p results in belief in p. This postulate, however, is often undesirable in settings where an agent's observations may be imprecise or noisy. We propose a semantics that captures a new ontology for studying revision functions, which can handle noisy observations in a natural way while retaining the classical AGM model as a special case. We present a characterization theorem for our semantics, and describe a number of natural special cases that allow ease of specification and reasoning with revision functions. In particular, by making the Markov assumption, we can easily specify and reason about revision.

In AAAI-98 (Proceedings of the Fifteenth National Conference on Artificial Intelligence), 1998. The paper is available in postscript and pdf.

Updating sets of probabilities

Adam J. Grove and Joseph Y. Halpern

There are several well-known justifications for conditioning as the appropriate method for updating a single probability measure, given an observation. However, there is a significant body of work arguing for sets of probability measures, rather than single measures, as a more realistic model of uncertainty. Conditioning still makes sense in this context---we can simply condition each measure in the set individually, then combine the results---and, indeed, it seems to be the preferred updating procedure in the literature. But how justified is conditioning in this richer setting? Here we show, by considering an axiomatic account of conditioning given by van Fraassen, that the single-measure and sets-of-measures cases are very different. We show that van Fraassen's axiomatization for the former case is nowhere near sufficient for updating sets of measures. We give a considerably longer (and not as compelling) list of axioms that together force conditioning in this setting, and describe other update methods that are allowed once any of these axioms is dropped.

In Proceedings of the Fourteenth Conference on Uncertainty in AI, 1998, pp. 173-182. Available on CoRR (http://arxiv.org/abs/0906.4332) and locally in postscript and pdf.
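
The measure-by-measure conditioning that the abstract describes is easy to sketch in Python: condition each measure in the set on the observed event and drop any measure that gives the event probability zero. The representation of measures as dictionaries is just for illustration.

    def condition_set(measures, event):
        """Condition each probability measure in the set on `event` (a set of
        outcomes), discarding any measure that gives the event probability 0."""
        updated = []
        for p in measures:
            prob_event = sum(pr for w, pr in p.items() if w in event)
            if prob_event > 0:
                updated.append({w: pr / prob_event for w, pr in p.items() if w in event})
        return updated

    # Two measures over {a, b, c}, conditioned on the event {a, b}:
    measures = [{"a": 0.5, "b": 0.25, "c": 0.25},
                {"a": 0.1, "b": 0.1, "c": 0.8}]
    print(condition_set(measures, {"a", "b"}))
    # roughly [{'a': 0.67, 'b': 0.33}, {'a': 0.5, 'b': 0.5}]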

Sensor-assisted ALOHA for wireless networks

L. Terrence Fine, Stephen B. Wicker, Toby Berger, and Joseph Y. Halpern

In Proceedings of the 1998 International Symposium on Information Theory, 1998.

Sensor-assisted multiple-access protocols for wireless networks

L. Terrence Fine, Stephen B. Wicker, Toby Berger, and Joseph Y. Halpern

In Proceedings of the 1998 International Conference on Universal Personal Communications, 1998.

Least expected cost query optimization: an exercise in utility

Francis C. Chu, Joseph Y. Halpern and Praveen Seshadri

We identify two unreasonable, though standard, assumptions made by database query optimizers that can adversely affect the quality of the chosen evaluation plans. One assumption is that it is enough to optimize for the expected case---that is, the case where various parameters (like available memory) take on their expected value. The other assumption is that the parameters are constant throughout the execution of the query. We present an algorithm based on the ``System R''-style query optimization algorithm that does not rely on these assumptions. The algorithm we present chooses the plan of the least expected cost instead of the plan of least cost given some fixed value of the parameters. In execution environments that exhibit a high degree of variability, our techniques should result in better performance.

In Proceedings of the Eighteenth Annual ACM Symposium on Principles of Database Systems, 1999. The paper is available on CoRR and locally in postscript and pdf.
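
The gap between the two optimization criteria can be seen in a toy Python sketch with two hypothetical plans and an uncertain memory parameter: the plan that looks best at the expected memory value is not the plan with the least expected cost. The plans, cost functions, and distribution are all invented for illustration.

    # Hypothetical plans and cost functions, with memory the uncertain parameter.
    memory_distribution = [(16, 0.5), (256, 0.5)]                   # (MB, probability)
    expected_memory = sum(m * p for m, p in memory_distribution)    # 136 MB

    def cost_hash_join(mem):    # cheap if the table fits in memory, very costly otherwise
        return 10 if mem >= 128 else 1000

    def cost_sort_merge(mem):   # insensitive to the amount of memory available
        return 100

    for name, cost in [("hash-join", cost_hash_join), ("sort-merge", cost_sort_merge)]:
        at_expected = cost(expected_memory)
        expected = sum(p * cost(m) for m, p in memory_distribution)
        print(name, "cost at expected memory:", at_expected, "expected cost:", expected)

    # hash-join:  cost at expected memory 10,  expected cost 505
    # sort-merge: cost at expected memory 100, expected cost 100
    # Optimizing at the expected parameter value picks hash-join;
    # minimizing expected cost picks sort-merge.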

Reasoning about common knowledge with infinitely many agents

Joseph Y. Halpern and Richard A. Shore

Complete axiomatizations and exponential-time decision procedures are provided for reasoning about knowledge and common knowledge when there are infinitely many agents. The results show that reasoning about knowledge and common knowledge with infinitely many agents is no harder than when there are finitely many agents, provided that we can check the cardinality of certain set differences G - G', where G and G' are sets of agents. Since our complexity results are independent of the cardinality of the sets G involved, they represent improvements over the previous results even when the sets of agents involved are finite. Moreover, our results make clear the extent to which issues of complexity and completeness depend on how the sets of agents involved are represented.

In Information and Computation 191:1, 2004, pp. 1-40. A preliminary version appears in Proceedings of the 14th IEEE Symposium on Logic in Computer Science, 1999. The full paper is available on CoRR and locally in postscript and pdf.

The Computing Research Repository: promoting the rapid dissemination and archiving of computer science

Joseph Y. Halpern and Carl Lagoze

We describe the Computing Research Repository (CoRR), a new electronic archive for rapid dissemination and archiving of computer science research results. CoRR was initiated in September 1998 through the cooperation of ACM, LANL (Los Alamos National Laboratory) e-Print archive, and NCSTRL (Networked Computer Science Technical Reference Library). Through an implementation of the Dienst protocol, CoRR combines the open and extensible architecture of NCSTRL with the reliable access and well-established management practices of the LANL XXX e-Print repository. This architecture will allow integration with other e-Print archives and provides a foundation for a future broad-based scholarly digital library. We describe the decisions that were made in creating CoRR, the architecture of the CoRR/NCSTRL interoperation, and issues that have arisen during the operation of CoRR.

In Proceedings of ACM Digital Libraries '99, 1999, pp. 3-11.

Available on CoRR (naturally!). Some material in the paper is updated in CoRR: A computing research repository.

A decision-theoretic approach to resource allocation in wireless multimedia networks

Zygmunt Haas, Joseph Y. Halpern, Li Li, and Stephen B. Wicker

The allocation of scarce spectral resources to support as many user applications as possible while maintaining reasonable quality of service is a fundamental problem in wireless communication. We argue that the problem is best formulated in terms of decision theory. We propose a scheme that takes decision-theoretic concerns (like preferences) into account and discuss the difficulties and subtleties involved in applying standard techniques from the theory of Markov Decision Processes (MDPs) in constructing an algorithm that is decision-theoretically optimal. As an example of the proposed framework, we construct such an algorithm under some simplifying assumptions. Additionally, we present analysis and simulation results that show that our algorithm meets its design goals. Finally, we investigate how far from optimal one well-known heuristic is. The main contribution of our results is in providing insight and guidance for the design of near-optimal admission-control policies.

In Resource Allocation in Next Generation Wireless Networks (W. Li and Y. Pan, eds.), Nova Science Publishers, 2005, pp. 1-23. A preliminary version appears in Proceedings of Dial M for Mobility, 2000, pp. 86-95. The full paper is available on CoRR and locally in postscript and pdf.
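
As a generic illustration of the MDP machinery the abstract refers to (not the paper's model), here is a standard value-iteration sketch in Python applied to a toy admission-control problem; the states, rewards, and transitions are invented for illustration.

    def value_iteration(states, actions, transition, reward, gamma=0.95, eps=1e-6):
        """Generic finite-MDP value iteration: transition(s, a) returns a list of
        (next_state, probability) pairs and reward(s, a) an immediate reward."""
        V = {s: 0.0 for s in states}
        while True:
            delta = 0.0
            for s in states:
                best = max(reward(s, a) + gamma * sum(p * V[t] for t, p in transition(s, a))
                           for a in actions(s))
                delta = max(delta, abs(best - V[s]))
                V[s] = best
            if delta < eps:
                return V

    # Toy admission control: the state is the number of admitted calls (0..2);
    # admitting earns revenue, but revenue shrinks as the cell gets fuller.
    states = [0, 1, 2]
    def actions(s): return ["admit", "reject"] if s < 2 else ["reject"]
    def transition(s, a): return [(min(s + 1, 2), 1.0)] if a == "admit" else [(max(s - 1, 0), 1.0)]
    def reward(s, a): return 5 - 2 * s if a == "admit" else 0
    print(value_iteration(states, actions, transition, reward))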

Minimum-energy topology-control algorithms in ad hoc networks

Li Li and Joseph Y. Halpern

We propose a protocol that, given a communication network, computes a subnetwork such that, for every pair (u,v) of nodes connected in the original network, there is a minimum-energy path between u and v in the subnetwork (where a minimum-energy path is one that allows messages to be transmitted with a minimum use of energy). The network computed by our protocol is in general a subnetwork of the one computed by the protocol given by Rodoplu and Meng [1999]. Moreover, our protocol is computationally simpler. We demonstrate the performance improvements obtained by using the subnetwork computed by our protocol through simulation.

In Handbook of Theoretical and Algorithmic Aspects of Ad Hoc, Sensor, and Peer-to-Peer Networks (J. Wu, editor), Auerbach Publications, 2006, pp. 115-132. This is a slightly revised version of ``A minimum-energy path-preserving topology-control algorithm'', which appears in IEEE Transactions on Wireless Communications 3:3, 2004, pp. 910-921. A preliminary version of the paper, with the title "Minimum-energy mobile wireless networks revisited", appears in Proceedings of the IEEE Conference on Communications, 2001, pp. 278-283. The paper is available on CoRR and locally in pdf.
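
The minimum-energy path-preserving property itself can be illustrated with a small centralized Python sketch: keep a link only if transmitting over it directly is already a minimum-energy way of getting between its endpoints; every minimum-energy path then survives in the subnetwork. This is only a centralized illustration of the property; the protocol in the paper is distributed and uses only local information.

    import heapq
    from collections import defaultdict

    def dijkstra(graph, source):
        """Minimum total transmission energy from `source` to every node;
        graph[u] is a dict {v: energy needed to transmit directly from u to v}."""
        dist = defaultdict(lambda: float("inf"))
        dist[source] = 0.0
        heap = [(0.0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v, w in graph[u].items():
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    def min_energy_subnetwork(graph):
        """Keep an edge (u,v) only if transmitting directly is itself a
        minimum-energy way of getting from u to v; since every subpath of a
        minimum-energy path is minimum-energy, the surviving edges still contain
        a minimum-energy path between every connected pair."""
        keep = set()
        for u in graph:
            dist = dijkstra(graph, u)
            for v, w in graph[u].items():
                if abs(dist[v] - w) < 1e-12:
                    keep.add((u, v))
        return keep

    # Direct u->w costs 10, but relaying through v costs 2 + 3, so (u,w) is dropped.
    graph = {"u": {"v": 2.0, "w": 10.0}, "v": {"u": 2.0, "w": 3.0}, "w": {"u": 10.0, "v": 3.0}}
    print(sorted(min_energy_subnetwork(graph)))   # keeps only the u-v and v-w links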

A logical reconstruction of SPKI

Joseph Y. Halpern and Ron van der Meyden

SPKI/SDSI is a proposed public key infrastructure standard that incorporates the SDSI public key infrastructure. SDSI's key innovation was the use of local names. We previously introduced a Logic of Local Name Containment that has a clear semantics and was shown to completely characterize SDSI name resolution. Here we show how our earlier approach can be extended to deal with a number of key features of SPKI, including revocation, expiry dates, and tuple reduction, without invoking nonmonotonicity. We show that these extensions add relatively little complexity to the logic. We then use our semantics to examine SPKI's tuple reduction rules. Our analysis highlights places where SPKI's informal description of tuple reduction is somewhat vague, and shows that extra reduction rules are necessary in order to capture general information about binding and authorization.

In Journal of Computer Security 11:4, 2003, pp. 581-614. A preliminary version appears in Proceedings of the 14th IEEE Computer Security Foundations Workshop, 2001, pp. 59-70. The paper is available on CoRR and locally in postscript and pdf. A talk based on the paper is available in postscript and pdf.

A logic for reasoning about upper probabilities

Joseph Y. Halpern and Riccardo Pucella

We present a propositional logic to reason about the uncertainty of events, where the uncertainty is modeled by a set of probability measures assigning an interval of probability to each event. We give a sound and complete axiomatization for the logic, and show that the satisfiability problem is NP-complete, no harder than satisfiability for propositional logic.

In Journal of AI Research 17, pp. 57-81, 2002. A preliminary version appears in Proceedings of the Seventeenth Conference on Uncertainty, pp. 203-210, 2001. The paper is available on CoRR and locally in postscript and pdf.
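
For reference, the lower and upper probabilities determined by a set \mathcal{P} of probability measures are the standard ones,

    \underline{P}(A) \;=\; \inf_{\mu \in \mathcal{P}} \mu(A), \qquad
    \overline{P}(A) \;=\; \sup_{\mu \in \mathcal{P}} \mu(A),

so each event A is assigned the interval [\underline{P}(A), \overline{P}(A)]; the display uses standard notation, not necessarily the paper's.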

On the relationship between strand spaces and multi-agent systems

Joseph Y. Halpern and Riccardo Pucella

Strand spaces are a popular framework for the analysis of security protocols. Strand spaces have some similarities to a formalism used successfully to model protocols for distributed systems, namely multi-agent systems. We explore the exact relationship between these two frameworks here. It turns out that a key difference is the handling of agents, which are unspecified in strand spaces and explicit in multi-agent systems. We provide a family of translations from strand spaces to multi-agent systems parameterized by the choice of agents in the strand space. We also show that not every multi-agent system of interest can be expressed as a strand space. This reveals a lack of expressiveness in the strand-space framework that can be characterized by our translation. To highlight this lack of expressiveness, we show one simple way in which strand spaces can be extended to model more systems.

In ACM Transactions on Information and System Security 6:1, pp. 43-70, 2003. A preliminary version appears in Proceedings of the Eighth ACM Conference on Computer and Communications Security, 2001, pp. 106-115. The paper is available on CoRR and locally in postscript and pdf.

Causes and explanations: a structural-model approach. Part I: Causes

Joseph Y. Halpern and Judea Pearl

We propose a new definition of actual cause, using structural equations to model counterfactuals. We show that this definition yields a plausible and elegant account of causation that handles well examples that have caused problems for other definitions and resolves major difficulties in the traditional account.

In British Journal for the Philosophy of Science 56:4, 2005, pp. 843-887. A preliminary version appears in Proceedings of the Seventeenth Conference on Uncertainty in AI, 2001, pp. 194-202. The full paper is available on CoRR and locally in postscript and pdf. A companion paper considers explanation.

Causes and explanations: a structural-model approach. Part II: Explanations

Joseph Y. Halpern and Judea Pearl

We propose a new definition of (causal) explanation, using structural equations to model counterfactuals. The definition is based on the notion of actual cause, as defined and motivated in a companion paper. Essentially, an explanation is a fact that is not known for certain but, if found to be true, would constitute an actual cause of the fact to be explained, regardless of the agent's initial uncertainty. We show that the definition handles well a number of problematic examples from the literature.

In British Journal for the Philosophy of Science 56:4, 2005, pp. 889-911. A preliminary version appears in Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI 2001), 2001, pp. 27-34. The full paper is available on CoRR and locally in postscript and pdf.

Lexicographic probability, conditional probability, and nonstandard probability

Joseph Y. Halpern

The relationship between Popper spaces (conditional probability spaces that satisfy some regularity conditions), lexicographic probability systems (LPS's), and nonstandard probability spaces (NPS's) is considered. If countable additivity is assumed, Popper spaces and a subclass of LPS's are equivalent; without the assumption of countable additivity, the equivalence no longer holds. If the state space is finite, LPS's are equivalent to NPS's. However, if the state space is infinite, NPS's are shown to be more general than LPS's.

In Games and Economic Behavior 68:1, 2010, pp. 155-179. A preliminary version appears in Proceedings of the Eighth Conference on Theoretical Aspects of Rationality and Knowledge, 2001, pp. 17-30. The paper is available on CoRR and locally in postscript and pdf.

Plausibility Measures: A General Approach for Representing Uncertainty

Joseph Y. Halpern

This is an overview of my work on plausibility; it incorporates some material from Plausibility measures and default reasoning, Conditional plausibility measures and Bayesian networks, and Modeling belief in dynamic systems, Part II: Revision and update, as well as some new material on decision making.

In Proceedings of the 17th International Joint Conference on AI (IJCAI 2001), 2001, pp. 1474-1483. I hope to write a version of the paper more accessible to economists soon. The paper is available in postscript and pdf.

A cone-based distributed topology-control algorithm for wireless multi-hop networks

Li Li, Joseph Y. Halpern, Paramvir Bahl, Yi-Min Wang, and Roger Wattenhofer

The topology of a wireless multi-hop network can be controlled by varying the transmission power at each node. In this paper, we give a detailed analysis of a cone-based distributed topology control algorithm. This algorithm does not assume that nodes have GPS information available; rather it depends only on directional information. Roughly speaking, the basic idea of the algorithm is that a node u transmits with the minimum power p required to ensure that in every cone of degree d around u, there is some node that u can reach with power p. We show that taking d = 5pi/6 is a necessary and sufficient condition to guarantee that network connectivity is preserved. More precisely, if there is a path from s to t when every node communicates at maximum power then, if d < 5pi/6, there is still a path in the smallest symmetric graph G containing all edges (u,v) such that u can communicate with v using power p. On the other hand, if d > 5pi/6, connectivity is not necessarily preserved. We also propose a set of optimizations that further reduce power consumption and prove that they retain network connectivity. Dynamic reconfiguration in the presence of failures and mobility is also discussed. Simulation results are presented to demonstrate the effectiveness of the algorithm and the optimizations.

In IEEE/ACM Transactions on Networking 13:1, 2005, pp. 147-159. A preliminary version appears in Proceedings of the 20th ACM Symposium on Principles of Distributed Computing, 2001, pp. 264-273, under the title "An analysis of a cone-based distributed topology-control algorithm for wireless multi-hop networks". The full paper is available on CoRR and locally in postscript and pdf.
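
The cone condition at a single node is easy to sketch in Python: given the directions of the neighbors reachable at the node's current power, check that no angular gap between consecutive neighbor directions exceeds the cone angle. In the algorithm itself the node keeps increasing its power until this test succeeds or maximum power is reached; boundary conventions and the distributed details are glossed over, and the helper below is invented for illustration.

    import math

    def cones_covered(neighbor_angles, alpha):
        """True if every cone of angle `alpha` (radians) around the node contains
        at least one reachable neighbor, i.e., no angular gap between consecutive
        neighbor directions exceeds alpha."""
        if not neighbor_angles:
            return False
        angles = sorted(a % (2 * math.pi) for a in neighbor_angles)
        gaps = [b - a for a, b in zip(angles, angles[1:])]
        gaps.append(2 * math.pi - angles[-1] + angles[0])   # wraparound gap
        return max(gaps) <= alpha

    alpha = 5 * math.pi / 6                         # the 5pi/6 threshold from the paper
    print(cones_covered([0.0, 2.0, 4.0], alpha))    # True: largest gap is about 2.28 < 2.62
    print(cones_covered([0.0, 0.5], alpha))         # False: the wraparound gap is about 5.78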

Gossip-based ad hoc routing

Zygmunt Haas, Joseph Y. Halpern, and Li Li

Many ad hoc routing protocols are based on some variant of flooding. Despite various optimizations, many routing messages are propagated unnecessarily. We propose a gossiping-based approach, where each node forwards a message with some probability, to reduce the overhead of the routing protocols. Gossiping exhibits bimodal behavior in sufficiently large networks: in some executions, the gossip dies out quickly and hardly any node gets the message; in the remaining executions, a substantial fraction of the nodes gets the message. The fraction of executions in which most nodes get the message depends on the gossiping probability and the topology of the network. In the networks we have considered, using gossiping probability between 0.6 and 0.8 suffices to ensure that almost every node gets the message in almost every execution. For large networks, this simple gossiping protocol uses up to 35% fewer messages than flooding, with improved performance. Gossiping can also be combined with various optimizations of flooding to yield further benefits. Simulations show that adding gossiping to AODV results in significant performance improvement, even in networks as small as 150 nodes. We expect that the improvement should be even more significant in larger networks.

In IEEE/ACM Transactions on Networking 14:3, 2006, pp. 479-491. A preliminary version appears in Proceedings of Infocom, 2002, pp. 1707-1716. The full paper is available on CoRR and locally in postscript and pdf.
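
The basic gossip step is simple enough to simulate in a few lines of Python: the source always broadcasts, and every other node that hears the message for the first time rebroadcasts it with probability p. The grid topology and parameters below are invented for illustration; the paper's protocols add several optimizations (and an AODV integration) omitted here.

    import random
    from collections import deque

    def gossip(adj, source, p, rng):
        """Basic gossip flood: the source always broadcasts; every other node that
        hears the message for the first time rebroadcasts it with probability p.
        Returns the set of nodes that received the message."""
        received, queue = {source}, deque([source])
        while queue:
            u = queue.popleft()
            if u != source and rng.random() > p:
                continue                    # this node heard the message but stays silent
            for v in adj[u]:
                if v not in received:
                    received.add(v)
                    queue.append(v)
        return received

    # A 20x20 grid network, gossip probability 0.7.
    n = 20
    adj = {(i, j): [(i + di, j + dj) for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= i + di < n and 0 <= j + dj < n]
           for i in range(n) for j in range(n)}
    rng = random.Random(0)
    fractions = [len(gossip(adj, (0, 0), 0.7, rng)) / (n * n) for _ in range(10)]
    print([round(f, 2) for f in fractions])
    # Most runs reach nearly every node; occasionally the gossip dies out near the
    # source -- the bimodal behavior described in the abstract.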

Least expected cost query optimization: What can we expect?

Francis Chu, Johannes Gehrke, and Joseph Y. Halpern

A standard assumption in the database query optimization literature is that it is adequate to optimize for the ``typical'' case -- that is, the case in which various parameters (e.g., the amount of available memory, the selectivities of predicates, etc.) take on their ``typical'' values. In an earlier paper, we argued that we could do better by choosing plans based on their expected cost. Here we investigate this issue more thoroughly. We show that in many circumstances of interest, a ``typical'' value of the parameter often does give acceptable answers, provided that it is chosen carefully and we are interested only in minimizing expected running time. However, by minimizing the expected running time, we are effectively assuming that if plan P1 runs three times as long as plan P2, then P1 is exactly three times as bad as P2. An assumption like this is not always appropriate (for example, for time-critical data). We show that the focusing on least expected cost can lead to significant improvement for a number of cost functions of interest.

In Proceedings of the 21st ACM Symposium on Principles of Database Systems, 2002, pp. 293-302. The paper is available in postscript and pdf.

Secrecy in multi-agent systems

Joseph Y. Halpern and Kevin O'Neill

We introduce a general framework for reasoning about secrecy requirements in multiagent systems. Because secrecy requirements are closely connected with the knowledge of individual agents of a system, our framework employs the modal logic of knowledge within the context of the well-studied runs and systems framework. Put simply, ``secrets'' are facts about a system that low-level agents are never allowed to know. The framework presented here allows us to formalize this intuition precisely, in a way that is much in the spirit of Sutherland's notion of nondeducibility. Several well-known attempts to characterize the absence of information flow, including separability, generalized noninterference, and nondeducibility on strategies, turn out to be special cases of our definition of secrecy. However, our approach lets us go well beyond these definitions. It can handle probabilistic secrecy in a clean way, and it suggests generalizations of secrecy that may be useful for dealing with resource-bounded reasoning and with issues such as downgrading of information.

In ACM Transactions on Information and System Security 12:1, 2008. A preliminary version appears in Proceedings of the 15th IEEE Computer Security Foundations Workshop, 2002, pp. 32-46. The paper is available on CoRR and locally in postscript and pdf.

Updating probabilities

Peter D. Grunwald and Joseph Y. Halpern

As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a ``naive space'', which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR (``coarsening at random'') in the statistical literature characterizes when ``naive'' conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, but show that there are no such conditions for MRE. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.

In Journal of AI Research 19, 2003. A preliminary version appears in Proceedings of the Eighteenth Conference on Uncertainty in AI, 2002, pp. 187-196. The paper is available on CoRR and locally in postscript and pdf.
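
The Monty Hall example mentioned above is easy to check by simulation in Python: among the runs in which the host opens door 3, the car is behind the contestant's door 1 about a third of the time, whereas naive conditioning in the space consisting only of "car behind 1" and "car behind 2" would suggest a half. The simulation models the usual protocol (contestant picks door 1, host opens an unpicked non-car door, chosen at random when two are legal).

    import random

    def prob_car_behind_pick_given_host_opens_3(trials, rng):
        """Simulate the usual protocol: the car is placed uniformly, the contestant
        picks door 1, and the host opens a door that is neither the pick nor the
        car (choosing at random when both remaining doors are legal)."""
        opened_3 = car_behind_1 = 0
        for _ in range(trials):
            car = rng.randrange(1, 4)
            opened = rng.choice([d for d in (2, 3) if d != car])
            if opened == 3:
                opened_3 += 1
                car_behind_1 += (car == 1)
        return car_behind_1 / opened_3

    rng = random.Random(0)
    print(round(prob_car_behind_pick_given_host_opens_3(100_000, rng), 2))
    # Roughly 0.33: naive conditioning suggests 1/2, but once the host's
    # protocol is modeled the answer is 1/3.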

Characterizing and reasoning about probabilistic and non-probabilistic expectation

Joseph Y. Halpern and Riccardo Pucella

Expectation is a central notion in probability theory. The notion of expectation also makes sense for other notions of uncertainty. We introduce a propositional logic for reasoning about expectation, where the semantics depends on the underlying representation of uncertainty. We give sound and complete axiomatizations for the logic in the case that the underlying representation is (a) probability, (b) sets of probability measures, (c) belief functions, and (d) possibility measures. We show that this logic is more expressive than the corresponding logic for reasoning about likelihood in the case of sets of probability measures, but equi-expressive in the case of probability, belief, and possibility. Finally, we show that satisfiability for these logics is NP-complete, no harder than satisfiability for propositional logic.

In Journal of the ACM 54:3, 2007. A preliminary version, with the title "Reasoning about expectation", appears in Proceedings of the Eighteenth Conference on Uncertainty in AI, 2002, pp. 207-215. The full paper is available on CoRR and locally in postscript and pdf.

Modeling adversaries in a logic for security protocol analysis

Joseph Y. Halpern and Riccardo Pucella

Logics for security protocol analysis require the formalization of an adversary model that specifies the capabilities of adversaries. A common model is the Dolev-Yao model, which considers only adversaries that can compose and replay messages, and decipher them with known keys. The Dolev-Yao model is a useful abstraction, but it suffers from some drawbacks: it cannot handle the adversary knowing protocol-specific information, and it cannot handle probabilistic notions, such as the adversary attempting to guess the keys. We show how we can analyze security protocols under different adversary models by using a logic with a notion of algorithmic knowledge. Roughly speaking, adversaries are assumed to use algorithms to compute their knowledge; adversary capabilities are captured by suitable restrictions on the algorithms used. We show how we can model the standard Dolev-Yao adversary in this setting, and how we can capture more general capabilities including protocol-specific knowledge and guesses.
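
For orientation, the Dolev-Yao capabilities referred to above can be viewed as a closure computation over intercepted messages; the toy sketch below (ours, covering only decomposition of pairs and decryption with known keys, not the paper's algorithmic-knowledge machinery) computes what such an adversary can extract:

    # Messages are atoms (strings), pairs ("pair", m1, m2), or encryptions ("enc", m, key).
    def derivable(intercepted, known_keys):
        known = set(intercepted)
        changed = True
        while changed:
            changed = False
            for m in list(known):
                if isinstance(m, tuple) and m[0] == "pair":
                    new = {m[1], m[2]}                      # split pairs
                elif isinstance(m, tuple) and m[0] == "enc" and m[2] in known_keys:
                    new = {m[1]}                            # decrypt with a known key
                else:
                    new = set()
                if not new <= known:
                    known |= new
                    changed = True
        return known

    traffic = [("enc", ("pair", "nonceA", "secret"), "k1"), ("enc", "other", "k2")]
    print(derivable(traffic, known_keys={"k1"}))
    # derives "nonceA" and "secret", but not "other", since k2 is unknown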

In Logical Methods in Computer Science 8:1, 2012. A preliminary version appears in Proceedings Formal Aspects of Security, 2003. The paper is available on CoRR and locally in postscript and pdf.

Using first-order logic to reason about policies

Joseph Y. Halpern and Vicky Weissman

A policy describes the conditions under which an action is permitted or forbidden. We show that a fragment of (multi-sorted) first-order logic can be used to represent and reason about policies. Because we use first-order logic, policies have a clear syntax and semantics. We show that further restricting the fragment results in a language that is still quite expressive yet is also tractable. More precisely, questions about entailment, such as `May Alice access the file?', can be answered in time that is a low-order polynomial (indeed, almost linear in some cases), as can questions about the consistency of policy sets. We also give a brief overview of a prototype that we have built whose reasoning engine is based on the logic and whose interface is designed for non-logicians, allowing them to enter both policies and background information, such as `Alice is a student', and to ask questions about the policies.
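
To give a flavor of the entailment questions involved, here is a hypothetical toy checker (not the prototype described in the paper): policies are ground if-then rules, background facts such as `Alice is a student' are tuples, and a query such as `May Alice access the file?' is answered by forward chaining.

    facts = {("Student", "Alice"), ("Owns", "Bob", "file1")}
    # Each rule: if all facts in the body hold, the head (a permission) is derived.
    rules = [
        ([("Student", "Alice")], ("MayAccess", "Alice", "library")),
        ([("Owns", "Bob", "file1")], ("MayAccess", "Bob", "file1")),
    ]

    def entails(query):
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if head not in derived and all(b in derived for b in body):
                    derived.add(head)
                    changed = True
        return query in derived

    print(entails(("MayAccess", "Alice", "library")))   # True
    print(entails(("MayAccess", "Alice", "file1")))     # False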

In ACM Transactions on Information and System Security 11:4, 2008. A preliminary version appears in Proceedings of the 16th IEEE Computer Security Foundations Workshop, 2003, pp. 187-201. The paper is available on CoRR and locally in postscript and pdf.

Anonymity and information hiding in multiagent systems

Joseph Y. Halpern and Kevin O'Neill

We provide a framework for reasoning about information-hiding requirements in multiagent systems and for reasoning about anonymity in particular. Our framework employs the modal logic of knowledge within the context of the runs and systems framework, much in the spirit of our earlier work on secrecy. We give several definitions of anonymity with respect to agents, actions, and observers in multiagent systems, and we relate our definitions of anonymity to other definitions of information hiding, such as secrecy. We also give probabilistic definitions of anonymity that are able to quantify an observer's uncertainty about the state of the system. Finally, we relate our definitions of anonymity to other formalizations of anonymity and information hiding, including definitions of anonymity in the process algebra CSP and definitions of information hiding using function views.

In Journal of Computer Security 13:3, 2005, pp. 483-514. A preliminary version appears in Proceedings of the 16th IEEE Computer Security Foundations Workshop 2003, pp. 75-88. The paper is available on CoRR and locally in postscript and pdf.

Great expectations. Part I: On the customizability of generalized expected utility

Francis C. Chu and Joseph Y. Halpern

Many different rules for decision making have been introduced in the literature. We present a single framework in which to study and compare these rules. This is done by defining expected utility with respect to general expectation structures, where a decision maker's beliefs are represented by plausibility measures and the decision maker's tastes are represented by general (i.e., not necessarily real-valued) utility functions. We call the resulting notion of expected utility generalized EU (GEU) and show that we can represent arbitrary preference relations on acts using GEU. We then show that each of Savage's postulates corresponds to an axiom on GEU. Thus, GEU can be customized to capture postulates of interest.

In Theory and Decision 64:1, 2008, pp. 1-36. A preliminary version appears in Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI 2003), 2003, pp. 291-296. The full paper is available on CoRR and locally in postscript and pdf.

Great expectations. Part II: Generalized expected utility as a universal decision rule

Francis C. Chu and Joseph Y. Halpern

Many different rules for decision making have been introduced in the literature. We show that a notion of generalized expected utility proposed in a companion paper is a universal decision rule, in the sense that it can represent essentially all other decision rules.

In Artificial Intelligence 159:1,2, 2004, pp. 207-230. A preliminary version appears in Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI 2003), 2003, pp. 297-302. The full paper is available on CoRR and locally in postscript and pdf.

Responsibility and blame: A structural-model approach

Hana Chockler and Joseph Y. Halpern

Causality is typically treated as an all-or-nothing concept; either A is a cause of B or it is not. We extend the definition of causality introduced by Halpern and Pearl to take into account the degree of responsibility of A for B. For example, if someone wins an election 11-0, then each person who votes for him is less responsible for the victory than if he had won 6-5. We then define a notion of degree of blame, which takes into account an agent's epistemic state. Roughly speaking, the degree of blame of A for B is the expected degree of responsibility of A for B, taken over the epistemic state of an agent.
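
The voting example can be made concrete with a 1/(k+1)-style responsibility measure, where k is the minimal number of other votes that must change before a given vote becomes critical (our rendering of the idea, assuming a simple majority vote in which every voter votes for the winner; the paper's definition is stated in terms of structural models):

    def voter_responsibility(yes, no):
        # Votes needed for a majority of the yes+no ballots cast.
        majority = (yes + no) // 2 + 1
        # k yes-voters must switch before one remaining yes-vote becomes critical.
        k = yes - majority
        return 1 / (k + 1)

    print(voter_responsibility(11, 0))   # 1/6 for an 11-0 victory
    print(voter_responsibility(6, 5))    # 1.0 for a 6-5 victory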

In Journal of AI Research 22, 2004, pp. 93-115. A preliminary version appears in Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI 2003), 2003, pp. 147-153. The full paper is available on CoRR and locally in postscript and pdf.

Probabilistic algorithmic knowledge

Joseph Y. Halpern and Riccardo Pucella

The framework of algorithmic knowledge assumes that agents use deterministic knowledge algorithms to compute the facts they explicitly know. We extend the framework to allow for randomized knowledge algorithms. We then characterize the information provided by a randomized knowledge algorithm when its answers have some probability of being incorrect. We formalize this information in terms of evidence; a randomized knowledge algorithm returning ``Yes'' to a query about a fact p provides evidence for p being true. Finally, we discuss the extent to which this evidence can be used as a basis for decisions.

In Logical Methods in Computer Science 1:3, 2005. A preliminary version appears in Proceedings of Ninth Conference on Theoretical Aspects of Rationality and Knowledge, 2003, pp. 118-130. The paper is available on CoRR and locally in postscript and pdf.

A logic for reasoning about evidence

Joseph Y. Halpern and Riccardo Pucella

We introduce a logic for reasoning about evidence that essentially views evidence as a function from prior beliefs (before making an observation) to posterior beliefs (after making the observation). We provide a sound and complete axiomatization for the logic, and consider the complexity of the decision problem. Although the reasoning in the logic is mainly propositional, we allow variables representing numbers and quantification over them. This expressive power seems necessary to capture important properties of evidence.
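
A minimal sketch of the ``function from priors to posteriors'' view (our illustration via Bayes' rule with known likelihoods; not necessarily the paper's exact formalization of evidence):

    likelihood = {"h1": {"heads": 0.9, "tails": 0.1},
                  "h2": {"heads": 0.5, "tails": 0.5}}

    def evidence(observation):
        # The evidence provided by an observation, viewed as a map
        # from priors over hypotheses to posteriors.
        def update(prior):
            unnorm = {h: prior[h] * likelihood[h][observation] for h in prior}
            total = sum(unnorm.values())
            return {h: v / total for h, v in unnorm.items()}
        return update

    print(evidence("heads")({"h1": 0.5, "h2": 0.5}))   # h1 gets weight 0.45/0.7, about 0.64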

In Journal of AI Research 26, 2006, pp. 1-34. A preliminary version appears in Proceedings of Nineteenth Conference on Uncertainty in AI, 2003, pp. 297-304. The paper is available on CoRR and locally in postscript and pdf.

Sleeping Beauty Reconsidered: Conditioning and Reflection in Asynchronous Systems

Joseph Y. Halpern

A careful analysis of conditioning in the Sleeping Beauty problem is done, using the formal model for reasoning about knowledge and probability developed by Halpern and Tuttle. While the Sleeping Beauty problem has been viewed as revealing problems with conditioning in the presence of imperfect recall, the analysis done here reveals that the problems are not so much due to imperfect recall as to asynchrony. The implications of this analysis for van Fraassen's Reflection Principle and Savage's Sure-Thing Principle are considered.

In Oxford Studies in Epistemology, Vol. 1 (T. S. Gendler and J. Hawthorne, eds.), 2005, pp. 111-142. A preliminary version appears in Proceedings of the Ninth International Conference on Principles of Knowledge Representation and Reasoning (KR 2004), 2004, pp. 12-22. The full paper is available on CoRR and locally in pdf.

Intransitivity and vagueness

Joseph Y. Halpern

There are many examples in the literature that suggest that indistinguishability is intransitive, despite the fact that the indistinguishability relation is typically taken to be an equivalence relation (and thus transitive). It is shown that if the uncertainty perception and the question of when an agent reports that two things are indistinguishable are both carefully modeled, the problems disappear, and indistinguishability can indeed be taken to be an equivalence relation. Moreover, this model also suggests a logic of vagueness that seems to solve many of the problems related to vagueness discussed in the philosophical literature. In particular, it is shown here how the logic can handle the Sorites Paradox.

In Review of Symbolic Logic 1:4, 2009, pp. 530-547. A preliminary version appears in Proceedings of the Ninth International Conference on Principles of Knowledge Representation and Reasoning (KR 2004), pp. 121-129. The full paper is available on CoRR and locally in postscript and pdf.

On the expressive power of dynamic logic, II

Joseph Y. Halpern

We show that nondeterministic first-order regular dynamic logic without equality is strictly more powerful than its deterministic counterpart. This paper is subsumed by On the expressive power of nondeterminism in dynamic logic, which deals with the case with equality.

MIT/LCS/TM-204, 1981.

Rational secret sharing and multiparty computation

Joseph Y. Halpern and Vanessa Teague

We consider the problems of secret sharing and multiparty computation, assuming that agents prefer to get the secret (resp., function value) to not getting it, and secondarily, prefer that as few as possible of the other agents get it. We show that, under these assumptions, neither secret sharing nor multiparty function computation is possible using a mechanism that has a fixed running time. However, we show that both are possible using randomized mechanisms with constant expected running time.

In Proceedings of 36th ACM Symposium on Theory of Computing, 2004, pp. 623-632. The paper is available on CoRR and locally in postscript and pdf.

A formal foundation for XrML

Joseph Y. Halpern and Vicky Weissman

XrML is becoming a popular language in industry for writing software licenses. The semantics for XrML is implicitly given by an algorithm that determines if a permission follows from a set of licenses. We focus on a representative fragment of the language and use it to highlight some problematic aspects of the algorithm. We then correct the problems, introduce formal semantics, and show that our semantics matches the (corrected) algorithm. Finally, we consider the complexity of determining if a permission is implied by a set of XrML licenses. We show that the general problem is NP-hard, but it is polynomial-time computable for an expressive fragment of the language.

In Journal of the ACM 55:1, 2008. A preliminary version appears in Proceedings of the 17th IEEE Computer Security Foundations Workshop, 2004, pp. 251-263. The paper is available on CoRR and locally in postscript and pdf.

An efficient algorithm for fault-tolerant clock synchronization

Joseph Y. Halpern, Barbara B. Simons, and H. Raymond Strong

This paper presents an efficient distributed algorithm for synchronizing clocks in the presence of failures. It is essentially subsumed by Dynamic fault-tolerant clock synchronization.

IBM RJ 4094, 1983.

On the power of the hypothesis of expressiveness

Steven M. German and Joseph Y. Halpern

We show that if an interpretation is Herbrand definable (every element in the domain is equal to some term in the Herbrand universe over some finite type) and expressive (i.e., we can express weakest preconditions), for a sufficiently rich programming language (one with assignment statements, if ... then ... else statements, and recursive procedure declarations) then it is either finite or strongly arithmetic (in the sense of Effective axiomatizations of Hoare logics). The result follows from an observation that in such a programming language we can write a program to generate all the terms in the Herbrand universe over a given type.

IBM RJ 4079, 1983.

Complete axiomatizations for reasoning about knowledge and time

Joseph Y. Halpern, Ronald van der Meyden, and Moshe Y. Vardi

Sound and complete axiomatizations are provided for a number of different logics involving modalities for knowledge and time. These logics arise from different choices for various parameters. All the logics considered involve the discrete time linear temporal logic operators `next' and `until' and an operator for the knowledge of each of a number of agents. Both the single agent and multiple agent cases are studied: in some instances of the latter there is also an operator for the common knowledge of the group of all agents. Four different semantic properties of agents are considered: whether they have a unique initial state, whether they operate synchronously, whether they have perfect recall, and whether they learn. The property of no learning is essentially dual to perfect recall. Not all settings of these parameters lead to recursively axiomatizable logics, but sound and complete axiomatizations are presented for all the ones that do.

In SIAM Journal on Computing 33:2, 2004, pp. 674-703. The paper is available on CoRR and locally in postscript and pdf.

Plausibility Measures and Default Reasoning: An Overview

Nir Friedman and Joseph Y. Halpern

We introduce a new approach to modeling uncertainty based on plausibility measures. This approach is easily seen to generalize other approaches to modeling uncertainty, such as probability measures, belief functions, and possibility measures. We then consider one application of plausibility measures: default reasoning. In recent years, a number of different semantics for defaults have been proposed, such as preferential structures, epsilon-semantics, possibilistic structures, and kappa-rankings, that have been shown to be characterized by the same set of axioms, known as the KLM properties. While this was viewed as a surprise, we show here that it is almost inevitable. In the framework of plausibility measures, we can give a necessary condition for the KLM axioms to be sound, and an additional condition necessary and sufficient to ensure that the KLM axioms are complete. This additional condition is so weak that it is almost always met whenever the axioms are sound. In particular, it is easily seen to hold for all the proposals made in the literature. Finally, we show that plausibility measures provide an appropriate basis for examining first-order default logics.

This paper is a brief overview of the results of Plausibility measures and default reasoning and First-order conditional logic for default logic revisited. It is a summary of an invited talk given at the 14th IEEE Symposium on Logic in Computer Science, 1999, and appears in the proceedings (pp. 130-135). The paper is available in postscript and pdf.

On the adequacy of modal logic

Joseph Y. Halpern

McCarthy has argued that modal logic is too limited for various purposes. I consider the extent to which he is right.

In Electronic News Journal on Reasoning about Action and Change 3, July-August 1999. The paper is available in postscript and pdf. McCarthy wrote a response to my paper, and I wrote a response to his, which also appears in the News Journal on Reasoning about Action and Change and is available in postscript and pdf.

A computer scientist looks at game theory

Joseph Y. Halpern

I consider issues in distributed computation that should be of relevance to game theory. In particular, I focus on (a) representing knowledge and uncertainty, (b) dealing with failures, and (c) specification and verification.

In Games and Economic Behavior 45:1, 2003, pp. 114-131. The paper is available on CoRR and locally in postscript and pdf.

On spectrum sharing games

Magnus M. Halldorsson, Joseph Y. Halpern, Li Li, and Vahab Mirrokni

Each access point (AP) in a WiFi network must be assigned a channel for it to service users. There are only finitely many possible channels that can be assigned. Moreover, neighboring access points must use different channels so as to avoid interference. Currently these channels are assigned by administrators who carefully consider channel conflicts and network loads. Channel conflicts among APs operated by different entities are currently resolved in an ad hoc manner or not resolved at all. We view the channel assignment problem as a game, where the players are the service providers and APs are acquired sequentially. We consider the price of anarchy of this game, which is the ratio between the total coverage of the APs in the worst Nash equilibrium of the game and what the total coverage of the APs would be if the channel assignment were done by a central authority. We provide bounds on the price of anarchy depending on assumptions on the underlying network and the type of bargaining allowed between service providers. The key tool in the analysis is the identification of the Nash equilibria with the solutions to a maximal coloring problem in an appropriate graph. We relate the price of anarchy of these games to the approximation factor of local optimization algorithms for the maximum k-colorable subgraph problem. We also study the speed of convergence to equilibrium in these games.

In Proceedings of the 23rd Annual ACM Symposium on Principles of Distributed Computing, 2004, pp. 107-114. The paper is available in postscript and pdf.

When ignorance is bliss

Peter D. Grunwald and Joseph Y. Halpern

It is commonly-accepted wisdom that more information is better, and that information should never be ignored. Here we argue, using both a Bayesian and a non-Bayesian analysis, that in some situations you are better off ignoring information if your uncertainty is represented by a set of probability measures. These include situations in which the information is relevant for the prediction task at hand. In the non-Bayesian analysis, we show how ignoring information avoids dilation, the phenomenon that additional pieces of information sometimes lead to an increase in uncertainty. In the Bayesian analysis, we show that for small sample sizes and certain prediction tasks, the Bayesian posterior based on a noninformative prior yields worse predictions than simply ignoring the given information.
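
The dilation phenomenon mentioned above can be seen in a toy example (ours, not the paper's): X is a fair coin, and uncertainty about how an observed variable Y relates to X is represented by a set of joint distributions; observing Y widens the probability interval for X.

    import itertools

    grid = [i / 10 for i in range(1, 10)]   # candidate values for P(Y=1 | X=x)
    posteriors = []
    for a, b in itertools.product(grid, repeat=2):   # a = P(Y=1|X=1), b = P(Y=1|X=0)
        posteriors.append(0.5 * a / (0.5 * a + 0.5 * b))   # P(X=1 | Y=1) under this joint

    print("prior interval for P(X=1): [0.5, 0.5]")
    print("posterior interval after observing Y=1: [%.2f, %.2f]"
          % (min(posteriors), max(posteriors)))   # dilates to [0.10, 0.90] on this grid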

In Proceedings of the Twentieth Conference on Uncertainty in AI, 2004, pp. 226-234. The paper is available on CoRR and locally in postscript and pdf.

A data-acquisition model for learning and cognitive development and its implications for autism

Arnon Lotem and Joseph Y. Halpern

A data-driven model of learning is proposed, where a network of nodes and links is constructed that represents what has been heard and observed. Autism is viewed as the consequence of a disorder in the data-acquisition component of the model---essentially, it is the result of getting an ``inappropriate'' distribution of data. It is shown how the model, given ``inappropriate'' data distributions, can reproduce the main symptoms associated with autism, including weak central coherence, impaired theory of mind, and executive dysfunction. In addition, it is shown how the model itself can explain the inappropriate data distribution as the result of an inappropriate initial network. Finally, implications of the model for treatment are discussed.

The paper is available in pdf.

What causes a system to satisfy a specification?

Hana Chockler, Joseph Y. Halpern, and Orna Kupferman

Even when a system is proven to be correct with respect to a specification, there is still a question of how complete the specification is, and whether it really covers all the behaviors of the system. Coverage metrics attempt to check which parts of a system are actually relevant for the verification process to succeed. Recent work on coverage in model checking suggests several coverage metrics and algorithms for finding parts of the system that are not covered by the specification. The work has already proven to be effective in practice, detecting design errors that escape early verification efforts in industrial settings. In this paper, we relate a formal definition of causality given by Halpern and Pearl to coverage. We show that it gives significant insight into unresolved issues regarding the definition of coverage and leads to potentially useful extensions of coverage. In particular, we introduce the notion of responsibility, which assigns to components of a system a quantitative measure of their relevance to the satisfaction of the specification.

In ACM Transactions on Computational Logic 9:3, 2008. The paper is available on CoRR and locally in postscript and pdf.

Knowledge-based synthesis of distributed systems using event structures

Mark Bickford, Robert L. Constable, Joseph Y. Halpern, and Sabina Petride

To produce a program guaranteed to satisfy a given specification one can synthesize it from a formal constructive proof that a computation satisfying that specification exists. This process is particularly effective if the specifications are written in a high-level language that makes it easy for designers to specify their goals. We consider a high-level specification language that results from adding knowledge to a fragment of Nuprl specifically tailored for specifying distributed protocols, called event theory. We then show how high-level knowledge-based programs can be synthesized from the knowledge-based specifications using a proof development system such as Nuprl. Methods of Halpern and Zuck then apply to convert these knowledge-based protocols to ordinary protocols. These methods can be expressed as heuristic transformation tactics in Nuprl.

In Logical Methods in Computer Science 7:2, 2011. A preliminary version appears in Proceedings of the 11th International Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR 2004), 2005 (Lecture Notes in Computer Science, vol. 3452), pp. 449-465. The full paper is available on CoRR and locally in pdf.

Interactive awareness revisited

Joseph Y. Halpern and Leandro Rego

We analyze a model of interactive unawareness introduced by Heifetz, Meier and Schipper (HMS). We consider two axiomatizations for their model, which capture different notions of validity. These axiomatizations allow us to compare the HMS approach to both the standard (S5) epistemic logic and two other approaches to unawareness: that of Fagin and Halpern and that of Modica and Rustichini. We show that the differences between the HMS approach and the others are mainly due to the notion of validity used and the fact that the HMS approach is based on a 3-valued propositional logic.

In Games and Economic Behavior 62:1, 2008, pp. 232-262. A preliminary version appears in Proceedings of Tenth Conference on Theoretical Aspects of Rationality and Knowledge, 2005, pp. 78-91. The full paper is available on CoRR and locally in postscript and pdf.

Evidence with uncertain likelihoods

Joseph Y. Halpern and Riccardo Pucella

An agent often has a number of hypotheses, and must choose among them based on observations, or outcomes of experiments. Each of these observations can be viewed as providing evidence for or against various hypotheses. All the attempts to formalize this intuition up to now have assumed that associated with each hypothesis h there is a likelihood function, which is a probability measure that intuitively describes how likely each observation is, conditional on h being the correct hypothesis. We consider an extension of this framework where there is uncertainty as to which of a number of likelihood functions is appropriate, and discuss how one formal approach to defining evidence, which views evidence as a function from priors to posteriors, can be generalized to accommodate this uncertainty.

In Synthese 171:1, 2009, pp. 111-133. A preliminary version appears in Proceedings of Twenty-First Conference on Uncertainty in AI, 2005, pp. 243-250. The full paper is available on CoRR and locally in postscript and pdf.

Constructive decision theory

Lawrence E. Blume, Joseph Y. Halpern, and David Easley

In most contemporary approaches to decision making, a decision problem is described by a set of states, a set of outcomes, and a rich set of acts, which are functions from states to outcomes over which the decision maker (DM) has preferences. Most interesting decision problems, however, do not come with a state space and an outcome space. Indeed, in complex problems it is often far from clear what the state and outcome spaces would be. We present an alternative foundation for decision making, in which the primitive objects of choice are syntactic programs. A representation theorem is proved in the spirit of standard representation theorems, showing that if the DM's preference relation on objects of choice satisfies appropriate axioms, then there exist a set S of states, a set O of outcomes, a way of interpreting the objects of choice as functions from S to O, a probability on S, and a utility function on O, such that the DM prefers choice a to choice b if and only if the expected utility of a is higher than that of b. Thus, the state space and outcome space are subjective, just like the probability and utility; they are not part of the description of the problem. In principle, a modeler can test for SEU behavior without having access to states or outcomes. We illustrate the power of our approach by showing that it can capture decision makers who are subject to framing effects.

In Journal of Economic Theory 196, 2021. A preliminary version, with the title "Redoing the foundations of decision theory", appears in Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2006), 2006, pp. 14-24. The full version is available on CoRR, locally, and on the journal website.

Reasoning about knowledge of unawareness

Joseph Y. Halpern and Leandro Rego

Awareness has been shown to be a useful addition to standard epistemic logic for many applications. However, standard propositional logics for knowledge and awareness cannot express the fact that an agent knows that there are facts of which he is unaware without there being an explicit fact that the agent knows he is unaware of. We propose a logic for reasoning about knowledge of unawareness, by extending Fagin and Halpern's Logic of General Awareness. The logic allows quantification over variables, so that there is a formula in the language that can express the fact that ``an agent explicitly knows that there exists a fact of which he is unaware''. Moreover, that formula can be true without the agent explicitly knowing that he is unaware of any particular formula. We provide a sound and complete axiomatization of the logic, using standard axioms from the literature to capture the quantification operator. Finally, we show that the validity problem for the logic is recursively enumerable, but not decidable.

In Games and Economic Behavior 67:2, 2009, pp. 503-525. A preliminary version appears in Proceedings of the Tenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2006), 2006, pp. 14-24. The full paper is available on CoRR and locally in postscript and pdf.

Extensive games with possibly unaware players

Joseph Y. Halpern and Leandro Rego

Standard game theory assumes that the structure of the game is common knowledge among players. We relax this assumption by considering extensive games where agents may be unaware of the complete structure of the game. In particular, they may not be aware of moves that they and other agents can make. We show how such games can be represented; the key idea is to describe the game from the point of view of every agent at every node of the game tree. We provide a generalization of Nash equilibrium and show that every game with awareness has a generalized Nash equilibrium. Finally, we extend these results to games with awareness of unawareness, where a player i may be aware that a player j can make moves that i is not aware of, and to subjective games, where players may have no common knowledge regarding the actual game and their beliefs are incompatible with a common prior.

In Mathematical Social Sciences 70, 2014, pp. 42-58. A preliminary version appears in Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems, 2006, pp. 744-751. The full version is available on CoRR and locally in postscript and pdf.

Efficiency and Nash equilibria in a scrip system for P2P networks

Eric J. Friedman, Joseph Y. Halpern, and Ian Kash

A model of providing service in a P2P network is analyzed. It is shown that adding a scrip system yields a mechanism that admits a reasonable Nash equilibrium which reduces free riding. The effect of varying the total amount of money (scrip) in the system on efficiency (i.e., social welfare) is analyzed, and it is shown that by maintaining the appropriate ratio between the total amount of money and the number of agents, efficiency is maximized. The work has implications for many online systems, not only P2P networks but also a wide variety of online forums for which scrip systems are popular, but formal analyses have been lacking.

A preliminary version appears in Proceedings of the Seventh ACM Conference on Electronic Commerce, 2006, pp. 140-149, and is available on CoRR and locally in postscript and pdf. Material from this paper, the conference version of "Optimizing scrip systems: efficiency, crashes, hoarders, and altruists", and "Manipulating scrip systems: sybils and collusion" was combined and then split into two journal papers; see here and here.

Distributed computing meets game theory: robust mechanisms for rational secret sharing and multiparty computation

Ittai Abraham, Danny Dolev, Rica Gonen, and Joseph Y. Halpern

We study k-resilient Nash equilibria, joint strategies where no member of a coalition C of size up to k can do better, even if the whole coalition defects. We show that such k-resilient Nash equilibria exist for secret sharing and multiparty computation, provided that players prefer to get the information than not to get it. Our results hold even if there are only 2 players, so we can do multiparty computation with only two rational agents. We extend our results so that they hold even in the presence of up to t players with ``unexpected'' utilities. Finally, we show that our techniques can be used to simulate games with mediators by games without mediators.

A preliminary version appears in Proceedings of the Twenty-Fifth Annual ACM Symposium on Principles of Distributed Computing, 2006, pp. 53-62, and is available in postscript and pdf.

A knowledge-based analysis of global function computation

Joseph Y. Halpern and Sabina Petride

Consider a distributed system N in which each agent has an input value and each communication link has a weight. Given a global function, that is, a function f whose value depends on the whole network, the goal is for every agent to eventually compute the value f(N). We call this problem global function computation. Various solutions for instances of this problem, such as Boolean function computation, leader election, (minimum) spanning tree construction, and network determination, have been proposed, each under particular assumptions about what processors know about the system and how this knowledge can be acquired. We give a necessary and sufficient condition for the problem to be solvable that generalizes a number of well-known results. We then provide a knowledge-based (kb) program (like those of Fagin, Halpern, Moses, and Vardi) that solves global function computation whenever possible. Finally, we reduce the message overhead inherent in our initial kb program by giving a counterfactual belief-based program that also solves global function computation whenever possible, but where agents send messages only when they believe it is necessary to do so. The latter program is shown to be implemented by a number of well-known algorithms for solving leader election.

In Distributed Computing 23:3, 2010, pp. 197-224. A preliminary version appears in Proceedings of the 20th International Symposium on Distributed Computing, 2006, pp. 136-150. The full version is available on CoRR and locally in pdf.

Characterizing the NP-PSPACE Gap in the Satisfiability Problem for Modal Logic

Joseph Y. Halpern and Leandro Rego

There has been a great deal of work on characterizing the complexity of the satisfiability and validity problem for modal logics. In particular, Ladner showed that the satisfiability problem for all logics between K and S4 is PSPACE-hard, while for S5 it is NP-complete. We show that it is negative introspection, the axiom ~Kp -> K~Kp, that causes the gap: if we add this axiom to any modal logic between K and S4, then the satisfiability problem becomes NP-complete. Indeed, the satisfiability problem is NP-complete for any modal logic that includes the negative introspection axiom.

In Journal of Logic and Computation 17:4, pp. 795-806, 2007. A preliminary version appears in Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), 2007, pp. 2306-2312. The full version is available on CoRR and locally in postscript and pdf.

Worst-case background knowledge for privacy-preserving data publishing

David J. Martin, Daniel Kifer, Ashwin Machanavajjhala, Johannes Gehrke, and Joseph Y. Halpern

Recent work has shown the necessity of considering an attacker's background knowledge when reasoning about privacy in data publishing. However, in practice, the data publisher does not know what background knowledge the attacker possesses. Thus, it is important to consider the worst-case. In this paper, we initiate a formal study of worst-case background knowledge. We propose a language that can express any background knowledge about the data. We provide a polynomial time algorithm to measure the amount of disclosure of sensitive information in the worst case, given that the attacker has at most k pieces of information in this language. We also provide a method to efficiently sanitize the data so that the amount of disclosure in the worst case is less than a specified threshold.

In Proceedings of the 23rd International Conference on Data Engineering, 2007, pp. 126-135. Available on CoRR and locally in pdf.

Optimizing scrip systems: efficiency, crashes, hoarders, and altruists

Ian A. Kash, Eric J. Friedman, and Joseph Y. Halpern

We discuss the design of efficient scrip systems and develop tools for empirically analyzing them. For those interested in the empirical study of scrip systems, we demonstrate how characteristics of agents in a system can be inferred from the equilibrium distribution of money. From the perspective of a system designer, we examine the effect of the money supply on social welfare and show that social welfare is maximized by increasing the money supply up to the point that the system experiences a ``monetary crash,'' where money is sufficiently devalued that no agent is willing to perform a service. We also examine the implications of the presence of altruists and hoarders on the performance of the system. While a small number of altruists may improve social welfare, too many can also cause the system to experience a monetary crash, which may be bad for social welfare. Hoarders generally decrease social welfare but, surprisingly, they also promote system stability by helping prevent monetary crashes. In addition, we provide new technical tools for analyzing and computing equilibria by showing that our model exhibits strategic complementarities, which implies that there exist equilibria in pure strategies that can be computed efficiently.

In Distributed Computing 25:25, 2012, pp. 335-357. A preliminary version appears in Proceedings of the Eighth ACM Conference on Electronic Commerce, 2007, pp. 305-315. The conference version is available on CoRR and locally in postscript and pdf. The journal version (which also incorporates material from "Efficiency and Nash equilibria in a scrip system for P2P networks" and "Manipulating scrip systems: sybils and collusion") is available here.

Dealing with logical omniscience

Joseph Y. Halpern and Riccardo Pucella

We examine four approaches for dealing with the logical omniscience problem and their potential applicability: the syntactic approach, awareness, algorithmic knowledge, and impossible possible worlds. Although in some settings these approaches are equi-expressive and can capture all epistemic states, in other settings of interest they are not. In particular, adding probabilities to the language allows for finer distinctions between different approaches.

In Artificial Intelligence 175:1, 2011, pp. 220-235. A preliminary version appears in Proceedings of Eleventh Conference on Theoretical Aspects of Rationality and Knowledge, 2007, pp. 169-176. The full paper is available on CoRR and locally in postscript and pdf.

From statistical knowledge bases to degrees of belief: an overview

Joseph Y. Halpern

This is a summary of an invited talk, which covers material from From statistical knowledge bases to degrees of belief. Available in postscript and pdf.

Causality, responsibility, and blame: a structural-model approach

Joseph Y. Halpern

This is a summary of an invited talk, which covers material from Causes and explanations: a structural-model approach. Part I: Causes; Responsibility and blame: A structural-model approach; and What causes a system to satisfy a specification?. In Proceedings of the Third International Conference on the Quantitative Evaluation of Systems, 2006, pp. 3-6. Available in postscript and pdf.

Generalized solution concepts in games with possibly unaware players

Joseph Y. Halpern and Leandro Rego

Most work in game theory assumes that players are perfect reasoners and have common knowledge of all significant aspects of the game. In earlier work, we proposed a framework for representing and analyzing games with possibly unaware players, and suggested a generalization of Nash equilibrium appropriate for games with unaware players that we called generalized Nash equilibrium. Here, we use this framework to analyze other solution concepts that have been considered in the game-theory literature, with a focus on sequential equilibrium. We also provide some insight into the notion of generalized Nash equilibrium by proving that it is closely related to the notion of rationalizability when we restrict the analysis to games in normal form and no unawareness is involved.

In International Journal of Game Theory :1, 2012, pp. 131-155. A preliminary version appears in Proceedings of Eleventh Conference on Theoretical Aspects of Rationality and Knowledge, 2007, pp. 253-262. The full paper is available on CoRR and locally in postscript and pdf.

A nonstandard characterization of sequential equilibrium, perfect equilibrium, and proper equilibrium

Joseph Y. Halpern

New characterizations of sequential equilibrium, perfect equilibrium, and proper equilibrium are provided that use nonstandard probability. It is shown that there exists a belief system μ such that (σ,μ) is a sequential equilibrium in an extensive game with perfect recall iff there exist an infinitesimal ε and a strategy profile σ' consisting of completely mixed behavioral strategies (so that σ'_i assigns positive, although possibly infinitesimal, probability to all actions at every information set) which differs only infinitesimally from σ such that, at each information set I for player i, σ_i is an ε-best response to σ'_{-i} conditional on having reached I. Note that the characterization of sequential equilibrium does not involve belief systems. There is a similar characterization of perfect equilibrium; the only difference is that σ_i must be a best response to σ'_{-i} conditional on having reached I. Yet another variant is used to characterize proper equilibrium.

In International Journal of Game Theory 38:1, 2009, pp. 37-50; available in postscript and pdf. An erratum appears here.

Computer science and game theory: A brief survey

Joseph Y. Halpern

There has been a remarkable increase in work at the interface of computer science and game theory in the past decade. Game theory forms a significant component of some major computer science conferences; leading computer scientists are often invited to speak at major game theory conferences. I survey some of the main themes of work in the area, with a focus on the work in computer science. In particular, I look at the various roles of computational complexity in game theory, including its use in modeling bounded rationality, its role in mechanism design, and the problem of computing Nash equilibria; I consider a game-theoretic problem that originated in the computer science literature, but should be of interest to the game theory community: computing the price of anarchy, that is, the cost of using a decentralized solution to a problem; and I consider interactions between distributed computing and game theory. Given the length constraints, I make no attempt at being comprehensive.

In Palgrave Dictionary of Economics (S. N. Durlauf and L. E. Blume, eds.), Palgrave MacMillan, 2008. Available on CoRR and locally in postscript and pdf.

Toward expressive and scalable sponsored search auctions

David J. Martin, Johannes Gehrke, and Joseph Y. Halpern

Internet search results are a growing and highly profitable advertising platform. Search providers auction advertising slots to advertisers on their search result pages. Due to the high volume of searches and the users' low tolerance for search result latency, it is imperative to resolve these auctions fast. Current approaches restrict the expressiveness of bids in order to achieve fast winner determination, which is the problem of allocating slots to advertisers so as to maximize the expected revenue given the advertisers' bids. The goal of our work is to permit more expressive bidding, thus allowing advertisers to achieve complex advertising goals, while still providing fast and scalable techniques for winner determination. We also discuss the application of our framework to advertising in massively multiplayer online games.

In Proceedings of the 24th International Conf. Data Engineering, 2008, pp. 237-246. Available on CoRR and locally.

Lower bounds on implementing robust and resilient mediators

Ittai Abraham, Danny Dolev, and Joseph Y. Halpern

We provide new and tight lower bounds on the ability of players to implement equilibria using cheap talk, that is, just allowing communication among the players. One of our main results is that, in general, it is impossible to implement three-player Nash equilibria in a bounded number of rounds. We also give the first rigorous connection between Byzantine agreement lower bounds and lower bounds on implementation. To this end we consider a number of variants of Byzantine agreement and introduce reduction arguments. We also give lower bounds on the running time of two player implementations. All our results extend to lower bounds on (k,t)-robust equilibria, a solution concept that tolerates deviations by coalitions of size up to k and deviations by up to t players with unknown utilities (who may be malicious).

A preliminary version appears in Proceedings of the Fifth Theory of Cryptography Conference, 2008, pp. 302-319 and is available on CoRR and locally in postscript and pdf.

From qualitative to quantitative proofs of security properties using first-order conditional logic

Joseph Y. Halpern

A first-order conditional logic is considered, with semantics given by a variant of ε-semantics, where p → q means that Pr(q | p) approaches 1 super-polynomially---faster than any inverse polynomial. This type of convergence is needed for reasoning about security protocols. A complete axiomatization is provided for this semantics, and it is shown how a qualitative proof of the correctness of a security protocol can be automatically converted to a quantitative proof appropriate for reasoning about concrete security.
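
A hedged LaTeX rendering of the convergence condition described above (notation ours, not necessarily the paper's):

    % p -> q holds when Pr_n(q | p) approaches 1 faster than any inverse polynomial:
    \[
      p \rightarrow q \quad\text{iff}\quad
      \forall \text{ polynomials } f \;\; \exists N \;\; \forall n \ge N:\;
      \Pr\nolimits_n(q \mid p) \;\ge\; 1 - \frac{1}{f(n)}.
    \]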

In Journal of Computer Security 25, 2017, pp. 1-19. A preliminary version appears in AAAI-08 (Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence), 2008; available on CoRR and locally.

Beyond Nash equilibrium: Solution concepts for the 21st century

Joseph Y. Halpern

Nash equilibrium is the most commonly-used notion of equilibrium in game theory. However, it suffers from numerous problems. Some are well known in the game theory community; for example, the Nash equilibrium of repeated prisoner's dilemma is neither normatively nor descriptively reasonable. However, new problems arise when considering Nash equilibrium from a computer science perspective: for example, Nash equilibrium is not robust (it does not tolerate ``faulty'' or ``unexpected'' behavior), it does not deal with coalitions, it does not take computation cost into account, and it does not deal with cases where players are not aware of all aspects of the game. Solution concepts that try to address these shortcomings of Nash equilibrium are discussed.

In Lectures in Game Theory for Computer Scientists (K. R. Apt and E. Gradel, editors), pp. 264-289. A preliminary version appears in Proceedings of the 27th Annual ACM Symposium on Principles of Distributed Computing, 2008, pp. 1-10 and is reprinted in Proceedings of the Eleventh International Conference on Principles of Knowledge Representation and Reasoning (KR 2008), 2008. Available on CoRR and locally in postscript and pdf.

Defaults and normality in causal structures

Joseph Y. Halpern

A serious defect with the Halpern-Pearl (HP) definition of causality is repaired by combining a theory of causality with a theory of defaults. In addition, it is shown that (despite a claim to the contrary) a cause according to the HP condition need not be a single conjunct. A definition of causality motivated by Wright's NESS test is shown to always hold for a single conjunct. Moreover, conditions that hold for all the examples considered by HP are given that guarantee that causality according to (this version) of the NESS test is equivalent to the HP definition.

In Proceedings of the Eleventh International Conference on Principles of Knowledge Representation and Reasoning (KR 2008), 2008, pp. 198-208. Available on CoRR and locally in postscript and pdf.

Making decisions using sets of probabilities: updating, time consistency, and calibration

Peter D. Grunwald and Joseph Y. Halpern

We consider how an agent should update her uncertainty when it is represented by a set P of probability distributions and the agent observes that a random variable X takes on value x, given that the agent makes decisions using the minimax criterion, perhaps the best-studied and most commonly-used criterion in the literature. We adopt a game-theoretic framework, where the agent plays against a bookie, who chooses some distribution from P. We consider two reasonable games that differ in what the bookie knows when he makes his choice. Anomalies that have been observed before, like time inconsistency, can be understood as arising because different games are being played, against bookies with different information. We characterize the important special cases in which the optimal decision rules according to the minimax criterion amount to either conditioning or simply ignoring the information. Finally, we consider the relationship between conditioning and calibration when uncertainty is described by sets of probabilities. Our results emphasize the key role of Epstein and Schneider's rectangularity condition.
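
A minimal sketch of the minimax (maximin expected utility) criterion over a set of distributions; the set P, the acts, and the utilities below are made up for illustration and are not from the paper.

    P = [
        {"s1": 0.3, "s2": 0.7},
        {"s1": 0.6, "s2": 0.4},
    ]
    acts = {
        "a": {"s1": 10, "s2": 0},
        "b": {"s1": 4,  "s2": 4},
    }

    def worst_case_eu(act):
        # Expected utility under the least favourable distribution in P.
        return min(sum(p[s] * acts[act][s] for s in p) for p in P)

    best = max(acts, key=worst_case_eu)
    print({a: worst_case_eu(a) for a in acts}, "-> choose", best)   # chooses "b"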

In Journal of AI Research 42, 2011, pp. 393-426. A preliminary version, with the title ``A game-theoretic analysis of updating sets of probabilities'', appears in Proceedings of the Twenty-Fourth Conference on Uncertainty in AI, 2008, pp. 240-247. The paper is available on CoRR and locally in postscript and pdf.

Expressing security properties using selective interleaving functions

Joseph Y. Halpern and Sabina Petride

McLean's notion of Selective Interleaving Functions (SIFs) is perhaps the best-known attempt to construct a framework for expressing various security properties. We examine the expressive power of SIFs carefully. We show that SIFs cannot capture nondeducibility on strategies (NOS). We also prove that the set of security properties expressed with SIFs is not closed under conjunction, from which it follows that separability is strictly stronger than double generalized noninterference. However, we show that if we generalize the notion of SIF in a natural way, then NOS is expressible, and the set of security properties expressible by generalized SIFs is closed under conjunction.

Unpublished manuscript. Available on CoRR and locally in postscript and pdf.

An almost-surely terminating polynomial protocol for asynchronous Byzantine agreement with optimal resilience

Ittai Abraham, Danny Dolev, and Joseph Y. Halpern

Consider an asynchronous system with private channels and n processes, up to t of which may be faulty. We settle a longstanding open question by providing a Byzantine agreement protocol that simultaneously achieves three properties: it is optimally resilient (tolerating up to t < n/3 faulty processes), it is polynomially efficient, and it terminates almost surely (with probability 1).

Earlier protocols have achieved only two of these three properties. In particular, the protocol of Bracha is not polynomially efficient, the protocol of Feldman and Micali is not optimally resilient, and the protocol of Canetti and Rabin does not have almost-sure termination. Our protocol utilizes a new primitive called shunning (asynchronous) verifiable secret sharing (SVSS), which ensures, roughly speaking, that either a secret is successfully shared or a new faulty process is ignored from this point onwards by some nonfaulty process.

In Proceedings of the Twenty-Seventh Annual ACM Symposium on Principles of Distributed Computing, 2008, pp. 405-414. Available on CoRR and locally in postscript and pdf.

The Lotus-Eater Attack

Ian A. Kash, Eric J. Friedman, and Joseph Y. Halpern

Many protocols for distributed and peer-to-peer systems have the feature that nodes will stop providing service for others once they have received a certain amount of service. Examples include BitTorrent's unchoking policy, BAR Gossip's balanced exchanges, and threshold strategies in scrip systems. An attacker can exploit this by providing service in a targeted way to prevent chosen nodes from providing service.

In Proceedings of the Twenty-Seventh Annual ACM Symposium on Principles of Distributed Computing, 2008, p. 455. Available on CoRR.

Iterated regret minimization: A more realistic solution concept

Joseph Y. Halpern and Rafael Pass

For some well-known games, such as the Traveler's Dilemma or the Centipede Game, traditional game-theoretic solution concepts---and most notably Nash equilibrium---predict outcomes that are not consistent with empirical observations. In this paper, we introduce a new solution concept, iterated regret minimization, which exhibits the same qualitative behavior as that observed in experiments in many games of interest, including Traveler's Dilemma, the Centipede Game, Nash bargaining, and Bertrand competition. As the name suggests, iterated regret minimization involves the iterated deletion of strategies that do not minimize regret.
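
A small sketch of the iterated deletion procedure (ours, not the authors' code) for a two-player symmetric game given by a payoff function; the 2x2 payoffs below are a made-up example.

    def min_regret(payoff, mine, theirs):
        """My strategies (from `mine`) that minimize maximum regret,
        where payoff[m][t] is my utility when I play m and they play t."""
        def max_regret(m):
            return max(max(payoff[m2][t] for m2 in mine) - payoff[m][t] for t in theirs)
        best = min(max_regret(m) for m in mine)
        return {m for m in mine if max_regret(m) == best}

    # Symmetric example game: payoff[my_strategy][their_strategy].
    U = {0: {0: 3, 1: 0}, 1: {0: 2, 1: 2}}
    S1, S2 = {0, 1}, {0, 1}
    while True:
        N1, N2 = min_regret(U, S1, S2), min_regret(U, S2, S1)
        if (N1, N2) == (S1, S2):
            break
        S1, S2 = N1, N2
    print("surviving strategies:", S1, S2)   # {1} {1} in this example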

In Games and Economic Behavior 74:1, 2012, pp. 194-207. A preliminary version appears in Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), 2009, pp. 153-158. The full paper is available in postscript and pdf.

Game theory with costly computation

Joseph Y. Halpern and Rafael Pass

We develop a general game-theoretic framework for reasoning about strategic agents performing possibly costly computation. In this framework, many traditional game-theoretic results (such as the existence of a Nash equilibrium) no longer hold. Nevertheless, we can use the framework to provide psychologically appealing explanations to observed behavior in well-studied games (such as finitely repeated prisoner's dilemma and rock-paper-scissors). Furthermore, we provide natural conditions on games sufficient to guarantee that equilibria exist. As an application of this framework, we consider a notion of game-theoretic implementation of mediators in computational games. We show that a special case of this notion is equivalent to a variant of the traditional cryptographic definition of protocol security; this result shows that, when taking computation into account, the two approaches used for dealing with ``deviating'' players in two different communities -- Nash equilibrium in game theory, and zero-knowledge ``simulation'' in cryptography -- are intimately connected.

In First Symposium on Innovations in Computer Science, 2010, pp. 120-142. The full version is available in postscript and pdf. This material is largely subsumed by "A computational game-theoretic framework for cryptography" and "Algorithmic rationality: Adding cost of computation to game theory".

Shared winner determination in sponsored search auctions

David J. Martin, Johannes Gehrke, and Joseph Y. Halpern

Sponsored search auctions form a multibillion dollar industry. Search providers auction advertisement slots on search result pages to advertisers who are charged only if the end-user clicks on the advertiser's ad. The high volume of searches presents an opportunity for sharing the work required to resolve multiple auctions that occur simultaneously. We provide techniques for efficiently resolving sponsored search auctions involving large numbers of advertisers, with a focus on two issues: sharing work between multiple search auctions using shared aggregation and shared sort, and dealing with budget uncertainty arising from ads that have been displayed from previous auctions but have not received clicks yet.

In Proceedings of the 25th International Conf. Data Engineering, 2009, pp. 270-280. Available locally here.

Multiagent learning in large anonymous games

Ian A. Kash, Eric J. Friedman, and Joseph Y. Halpern

In large systems, it is important for agents to learn to act effectively, but sophisticated multi-agent learning algorithms generally do not scale. An alternative approach is to find restricted classes of games where simple, efficient algorithms converge. It is shown that stage learning efficiently converges to Nash equilibria in large anonymous games if best-reply dynamics converge. Two features are identified that improve convergence. First, rather than making learning more difficult, more agents are actually beneficial in many settings. Second, providing agents with statistical information about the behavior of others can significantly reduce the number of observations needed.

In Journal of AI Research 40, 2011, pp. 571-598. A preliminary version appears in Proceedings of the Eighth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2009, pp. 765-772. Available on CoRR and locally in pdf.

Manipulating scrip systems: sybils and collusion

Ian A. Kash, Eric J. Friedman, and Joseph Y. Halpern

Game-theoretic analyses of distributed and peer-to-peer systems typically use the Nash equilibrium solution concept, but this explicitly excludes the possibility of strategic behavior involving more than one agent. We examine the effects of two types of strategic behavior involving more than one agent, sybils and collusion, in the context of scrip systems where agents provide each other with service in exchange for scrip. Sybils make an agent more likely to be chosen to provide service, which generally makes it harder for agents without sybils to earn money and decreases social welfare. Surprisingly, in certain circumstances it is possible for sybils to make all agents better off. While collusion is generally bad, in the context of scrip systems it actually tends to make all agents better off, not merely those who collude. These results also provide insight into the effects of allowing agents to advertise and loan money.

In Proceedings of the First Conference on Auctions, Market Mechanisms, and Multiagent Systems (AMMA), 2009. Available on CoRR.

On Definability in Multimodal Logic

Joseph Y. Halpern, Dov Samet, and Ella Segev

Three notions of definability in multimodal logic are considered. Two are analogous to the notions of explicit definability and implicit definability introduced by Beth in the context of first-order logic. However, while by Beth's theorem the two types of definability are equivalent for first-order logic, such an equivalence does not hold for multimodal logics. A third notion of definability, reducibility, is introduced; it is shown that in multimodal logics, explicit definability is equivalent to the combination of implicit definability and reducibility. The three notions of definability are characterized semantically using (modal) algebras. The use of algebras, rather than frames, is shown to be necessary for these characterizations.

In Review of Symbolic Logic 2:3, 2009, pp. 451-468; available in postscript and pdf.

Defining knowledge in terms of belief: the modal logic perspective

Joseph Y. Halpern, Dov Samet, and Ella Segev

The question of whether knowledge is definable in terms of belief, which has played an important role in epistemology for the last fifty years, is studied here in the framework of epistemic and doxastic logics. Three notions of definability are considered: explicit definability, implicit definability, and reducibility, where explicit definability is equivalent to the combination of implicit definability and reducibility. It is shown that if knowledge satisfies any set of axioms contained in S5, then it cannot be explicitly defined in terms of belief. S5 knowledge can be implicitly defined by belief, but not reduced to it. On the other hand, S4.4 knowledge and weaker notions of knowledge cannot be implicitly defined by belief, but can be reduced to it by defining knowledge as true belief. It is also shown that S5 knowledge cannot be reduced to belief and justification, provided that there are no axioms that involve both belief and justification.

In Review of Symbolic Logic 2:3, 2009, pp. 469-487; available in postscript and pdf.

A logical characterization of iterated admissibility

Joseph Y. Halpern and Rafael Pass

Brandenburger, Friedenberg, and Keisler provide an epistemic characterization of iterated admissibility (i.e., iterated deletion of weakly dominated strategies) where uncertainty is represented using LPSs (lexicographic probability sequences). Their characterization holds in a rich structure called a complete structure, where all types are possible. Here, a logical characterization of iterated admissibility is given that involves only standard probability and holds in all structures, not just complete structures. A stronger notion of strong admissibility is then defined. Roughly speaking, strong admissibility is meant to capture the intuition that ``all the agent knows'' is that the other agents satisfy the appropriate rationality assumptions. Strong admissibility makes it possible to relate admissibility, canonical structures (as typically considered in completeness proofs in modal logic), complete structures, and the notion of ``all I know''.

In Proceedings of Twelfth Conference on Theoretical Aspects of Rationality and Knowledge, 2009, pp. 146-155; the full paper is available on CoRR and locally in postscript and pdf. A version of the paper also appears in Knowing, Reasoning, and Acting: Essays in Honour of Hector J. Levesque (G. Lakemeyer and S. A. McIlraith, editors), College Publications, 2011, with the title ``That's all I know: A logical characterization of iterated admissibility''.

An epistemic characterization of zero knowledge

Joseph Y. Halpern, Rafael Pass, and Vasumathi Raman

Halpern, Moses and Tuttle presented a definition of interactive proofs using a notion they called practical knowledge, but left open the question of finding an epistemic formula that completely characterizes zero knowledge; that is, a formula that holds iff a proof is zero knowledge. We present such a formula, and show that it does characterize zero knowledge. Moreover, we show that variants of the formula characterize variants of zero knowledge such as concurrent zero knowledge and proofs of knowledge.

In Proceedings of Twelfth Conference on Theoretical Aspects of Rationality and Knowledge, 2009, pp. 156-165; available locally in postscript and pdf.

Reasoning about knowledge of unawareness revisited

Joseph Y. Halpern and Leandro Rego

In earlier work, we proposed a logic that extends the Logic of General Awareness of Fagin and Halpern [1988] by allowing quantification over primitive propositions. This makes it possible to express the fact that an agent knows that there are some facts of which he is unaware. In that logic, it is not possible to model an agent who is uncertain about whether he is aware of all formulas. To overcome this problem, we keep the syntax of the earlier paper, but allow models where, with each world, a possibly different language is associated. We provide a sound and complete axiomatization for this logic and show that, under natural assumptions, the quantifier-free fragment of the logic is characterized by exactly the same axioms as the logic of Heifetz, Meier, and Schipper [2008].

In Mathematical Social Sciences 66:2, 2013, pp. 73-84. A preliminary version appears in Proceedings of Twelfth Conference on Theoretical Aspects of Rationality and Knowledge, 2009, pp. 166-173; the full paper is available on CoRR and locally in postscript and pdf.

Joseph Y. Halpern

Joseph Y. Halpern

I answer five questions, including ``Why were you initially drawn to epistemology (and what keeps you interested)?'' and ``What do you see as being your main contributions to epistemology?''.

In Epistemology: 5 Questions (ed. V. F Hendricks and D. Pritchard), Automatic Press/VIP, 2008, pp. 155-166. Available locally in postscript and pdf.

Conservative belief and rationality

Joseph Y. Halpern and Rafael Pass

Brandenburger and Dekel have shown that common belief of rationality (CBR) characterizes rationalizable strategies, which are also characterized by a refinement of subjective correlated equilibrium called a posteriori equilibrium. It is possible that players' beliefs might be incompatible, in the sense that player i can assign probability 1 to an event E to which player j assigns probability 0. One way to block incompatibility is to assume a common prior. We consider here a different approach: we require players' beliefs to be justified, in the sense that all players must ascribe the actual world positive probability. Aumann has shown that, under the common prior assumption (CPA), common belief of rationality characterizes strategies in the support of an objective correlated equilibrium. We can show that, if the CPA holds, then we can assume without loss of generality that players' beliefs are justified. We then consider the consequences of common justified belief of rationality (CJBR), without the common prior assumption. We show that CJBR characterizes strategies in the support of a subjective correlated equilibrium where all players' beliefs have common support. We also define a notion of strong rationalizability, and show that this is characterized by CJBR.

In Games and Economic Behavior 80, 2013, pp. 186-192. Available locally in pdf.

Heuristics, Probability and Causality: A Tribute to Judea Pearl

Rina Dechter, Hector Geffner, and Joseph Y. Halpern (editors)

This is a festschrift for Judea Pearl. More details regarding the festschrift can be found here. The book was published by College Publications, 2010.

Viewpoint: Journals for Certification, Conferences for Rapid Dissemination

Joseph Y. Halpern and David C. Parkes

The publication culture in Computer Science is different from that of all other disciplines. Whereas other disciplines focus on journal publication, the standard practice in CS has been to publish in a conference and then (sometimes) publish a journal version of the conference paper. We discuss the role of journal publication in CS.

In Communications of the ACM 54:8, 2011, pp. 36-38. The paper is available on CoRR and locally in pdf.

MDPs with unawareness

Joseph Y. Halpern, Nan Rong, and Ashutosh Saxena

Markov decision processes (MDPs) are widely used for modeling decision-making problems in robotics, automated control, and economics. Traditional MDPs assume that the decision maker (DM) knows all states and actions. However, this may not be true in many situations of interest. We define a new framework, MDPs with unawareness (MDPUs), to deal with the possibility that a DM may not be aware of all possible actions. We provide a complete characterization of when a DM can learn to play near-optimally in an MDPU, and give an algorithm that learns to play near-optimally. In particular, we characterize when a near-optimal solution can be found in polynomial time.

In Proceedings of the Twenty-Sixth Conference on Uncertainty in AI, 2010, pp. 228-235. The paper is available on CoRR.

Cooperative Equilibrium

Joseph Y. Halpern and Nan Rong

Nash equilibrium (NE) assumes that players always make a best response. However, this is not always true; sometimes people cooperate even when it is not a best response to do so. For example, in the Prisoner's Dilemma, people often cooperate. Are there rules underlying cooperative behavior? In an effort to answer this question, we propose a new equilibrium concept: perfect cooperative equilibrium (PCE). PCE may help explain players' behavior in games where cooperation is observed in practice. A player's payoff in a PCE is at least as high as in any NE. However, a PCE does not always exist. We thus consider α-PCE, where α takes into account the degree of cooperation; a PCE is a 0-PCE. Every game has a Pareto-optimal max-perfect cooperative equilibrium (M-PCE); that is, an α-PCE for a maximum α. We show that M-PCE does well at predicting behavior in quite a few games of interest. We provide further insight into M-PCE, at least in two-player games, by considering another generalization of PCE called cooperative equilibrium (CE), which takes the possibility of punishment into account. We show that a Pareto-optimal M-PCE must be a CE.

In Proceedings of the Ninth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2010), 2010, pp. 1465-1466. Here is a greatly expanded version of the AAMAS paper.

From causal models to counterfactual structures

Joseph Y. Halpern

Galles and Pearl claimed that ``for recursive models, the causal model framework does not add any restrictions to counterfactuals, beyond those imposed by Lewis's [possible-worlds] framework.'' This claim is examined carefully, with the goal of clarifying the exact relationship between causal models and Lewis's framework. Recursive models are shown to correspond precisely to a subclass of (possible-world) counterfactual structures. On the other hand, a slight generalization of recursive models, models where all equations have unique solutions, is shown to be incomparable in expressive power to counterfactual structures, despite the fact that the Galles and Pearl arguments should apply to them as well. The problem with the Galles and Pearl argument is identified: an axiom that they viewed as irrelevant, because it involved disjunction (which was not in their language), is not irrelevant at all.

In Review of Symbolic Logic 6:2, 2013, pp. 305-322. A preliminary version appears in Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR 2010), 2010, pp. 153-160. Available on CoRR and locally in pdf.

I don't want to think about it now: Decision theory with costly computation

Joseph Y. Halpern and Rafael Pass

Computation plays a major role in decision making. Even if an agent is willing to ascribe a probability to all states and a utility to all outcomes, and maximize expected utility, doing so might present serious computational problems. Moreover, computing the outcome of a given act might be difficult. In a companion paper we develop a framework for game theory with costly computation, where the objects of choice are Turing machines. Here we apply that framework to decision theory. We show how well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on), belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions), and the status quo bias (people are much more likely to stick with what they already have) can be easily captured in that framework. Finally, we use the framework to define some new notions: value of computational information (a computational variant of value of information) and computational value of conversation.

In Proceedings of the Twelfth International Conference on Principles of Knowledge Representation and Reasoning (KR 2010), 2010, pp. 182-190. Available on CoRR and locally in pdf.

Actual causation and the art of modeling

Joseph Y. Halpern and Christopher Hitchcock

We look more carefully at the modeling of causality using structural equations. It is clear that the structural equations can have a major impact on the conclusions we draw about causality. In particular, the choice of variables and their values can have a significant impact on causality. These choices are, to some extent, subjective. We consider what counts as an appropriate choice. More generally, we consider what makes a model an appropriate model, especially if we want to take defaults into account, as recent work has argued is necessary.

In Heuristics, Probability and Causality: A Tribute to Judea Pearl (editors, R. Dechter, H. Geffner, and J. Y. Halpern), College Publications, 2010, pp. 383-406. Available on CoRR and locally in pdf.

No justified complaints: On fair sharing of multiple resources

Danny Dolev, Dror G. Feitelson, Joseph Y. Halpern, Raz Kupferman, and Nati Linial

Fair allocation has been studied intensively in both economics and computer science, and fair sharing of resources has aroused renewed interest with the advent of virtualization and cloud computing. Prior work has typically focused on mechanisms for fair sharing of a single resource. We provide a new definition for the simultaneous fair allocation of multiple continuously-divisible resources. Roughly speaking, we define fairness as the situation where every user either gets all the resources he wishes for, or else gets at least his entitlement on some bottleneck resource, and therefore cannot complain about not getting more. This definition has the same desirable properties as the recently suggested dominant resource fairness, and also handles the case of multiple bottlenecks. We then prove that a fair allocation according to this definition is guaranteed to exist for any combination of user requests and entitlements (where a user's relative use of the different resources is fixed). The proof, which uses tools from the theory of ordinary differential equations, is constructive and provides a method to compute the allocations numerically.
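
As an illustration only (this is not the paper's construction, and all names and parameters below are made up), the informal condition above can be checked directly for a given allocation: every user either receives his full request, or receives at least his entitlement share of some bottleneck resource.

    # Sketch of a checker for the informal "no justified complaints" condition
    # described above; illustrative only, not the paper's algorithm.
    def no_justified_complaints(alloc, request, entitlement, capacity, eps=1e-9):
        """alloc[u][r] / request[u][r]: amount of resource r allocated to / requested by user u;
        entitlement[u]: u's entitled fraction of every resource; capacity[r]: total supply of r."""
        users, resources = list(alloc), list(capacity)
        # A resource is a bottleneck if it is (essentially) fully allocated.
        bottlenecks = [r for r in resources
                       if sum(alloc[u][r] for u in users) >= capacity[r] - eps]
        for u in users:
            fully_served = all(alloc[u][r] >= request[u][r] - eps for r in resources)
            fair_share = any(alloc[u][r] >= entitlement[u] * capacity[r] - eps
                             for r in bottlenecks)
            if not (fully_served or fair_share):
                return False  # user u has a justified complaint
        return True

    # Example: user 0 is fully served; user 1 gets at least its 50% entitlement
    # on the saturated resource 'cpu', so neither has a justified complaint.
    alloc = {0: {'cpu': 2, 'net': 1}, 1: {'cpu': 8, 'net': 0}}
    request = {0: {'cpu': 2, 'net': 1}, 1: {'cpu': 20, 'net': 0}}
    print(no_justified_complaints(alloc, request, {0: 0.5, 1: 0.5},
                                  {'cpu': 10, 'net': 4}))  # True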

In Proceedings of the 3rd Conference on Innovations in Theoretical Computer Science (ITCS 2012), 2012. Available on CoRR and locally in pdf.

Causality in databases

Alexandra Meliou, Wolfgang Gatterbauer, Joseph Y. Halpern, Christoph Koch, Katherine F. Moore, and Dan Suciu

Provenance is often used to validate data, by verifying its origin and explaining its derivation. When searching for ``causes'' of tuples in the query results or in general observations, the analysis of lineage becomes an essential tool for providing such justifications. However, lineage can quickly grow very large, limiting its immediate use for providing intuitive explanations to the user. The formal notion of causality is a more refined concept that identifies causes for observations based on user-defined criteria, and that assigns to them gradual degrees of responsibility based on their respective contributions. In this paper, we initiate a discussion on causality in databases, give some simple definitions, and motivate this formalism through a number of example applications.

In IEEE Data Engineering Bulletin 33:3, 2010, pp. 59-67. Available locally in pdf.

General cognitive principles for learning structure in time and space

Michael Goldstein, Heidi Waterfall, Arnon Lotem, Joseph Y. Halpern, Jennifer A. Schwade, Luca Onnis, and Shimon Edelman

How are hierarchically structured sequences of objects, events or actions learned from experience and represented in the brain? When several streams of regularities present themselves, which will be learned and which ignored? Can statistical regularities take effect on their own, or are additional factors such as behavioral outcomes expected to influence statistical learning? Answers to these questions are starting to emerge through a convergence of findings from naturalistic observations, behavioral experiments, neurobiological studies, and computational analyses and simulations. We propose that a small set of principles is at work in every situation that involves learning of structure from patterns of experience, and we outline these principles.

In Trends in Cognitive Science 14:6, 2010, pp. 249-258. Available here.

Reasoning about justified belief

Adam Bjorndahl, Joseph Y. Halpern, and Rafael Pass

Halpern and Pass introduce a logic of justified belief and go on to prove that strong rationalizability is characterized in this logic in terms of common justified belief of rationality (CJBR). Their paper provides semantics for this logic but no axiomatization. We correct this deficiency by reformulating the definition of justified belief and providing a complete axiomatization of this new system. We then prove a result analogous to the characterization of strong rationalizability in terms of CJBR, and analyze the additional assumptions needed to do so.

In Proceedings of Thirteenth Conference on Theoretical Aspects of Rationality and Knowledge, 2011, pp. 221-227; the full paper is available in pdf.

Ambiguous language and differences in beliefs

Joseph Y. Halpern and Willemien Kets

Standard models of multi-agent modal logic do not capture the fact that information is often ambiguous, and may be interpreted in different ways by different agents. We propose a framework that can model this, and consider different semantics that capture different assumptions about the agents' beliefs regarding whether or not there is ambiguity. We consider the impact of ambiguity on a seminal result in economics: Aumann's result saying that agents with a common prior cannot agree to disagree. This result is known not to hold if agents do not have a common prior; we show that it also does not hold in the presence of ambiguity. We then consider the tradeoff between assuming a common interpretation (i.e., no ambiguity) and a common prior (i.e., shared initial beliefs).

In Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2012), 2012, pp. 329-338. The conference version is available in pdf. The conference version was expanded and then split into two papers: "A logic for reasoning about ambiguity" and "Ambiguous language and common priors".

I'm doing as well as I can: modeling people as rational finite automata

Joseph Y. Halpern, Rafael Pass, and Lior Seeman

We show that by modeling people as bounded finite automata, we can capture at a qualitative level the behavior observed in experiments. We consider a decision problem with incomplete information and a dynamically changing world, which can be viewed as an abstraction of many real-world settings. We provide a simple strategy for a finite automaton in this setting, and show that it does quite well, both through theoretical analysis and simulation. We show that, if the probability of nature changing state goes to 0 and the number of states in the automaton increases, then this strategy performs optimally (as well as if it were omniscient and knew when nature was making its state changes). Thus, although simple, the strategy is a sensible strategy for a resource-bounded agent to use. Moreover, at a qualitative level, the strategy does exactly what people have been observed to do in experiments.

In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence (AAAI-12), 2012, pp. 1917-1923. The paper is available here.

Weighted sets of probabilities and minimax weighted expected regret: new approaches for representing uncertainty and making decisions

Joseph Y. Halpern and Samantha Leung

We consider a setting where an agent's uncertainty is represented by a set of probability measures, rather than a single measure. Measure-by-measure updating of such a set of measures upon acquiring new information is well-known to suffer from problems; agents are not always able to learn appropriately. To deal with these problems, we propose using weighted sets of probabilities: a representation where each measure is associated with a weight, which denotes its significance. We describe a natural approach to updating in such a situation and a natural approach to determining the weights. We then show how this representation can be used in decision-making, by modifying a standard approach to decision making -- minimizing expected regret -- to obtain minimax weighted expected regret (MWER). We provide an axiomatization that characterizes preferences induced by MWER both in the static and dynamic case.
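
As a rough sketch of the shape of the criterion (suppressing details, and not necessarily the paper's exact formulation): if uncertainty is represented by a set of pairs (Pr, α_Pr) of probability measures and weights, the regret of an act a relative to a menu M and a measure Pr, and its weighted expected regret, can be written as

    \mathrm{reg}_M(a, \Pr) = \max_{a' \in M} E_{\Pr}[u(a')] - E_{\Pr}[u(a)],
    \qquad
    \mathrm{wreg}_M(a) = \sup_{\Pr} \alpha_{\Pr} \cdot \mathrm{reg}_M(a, \Pr),

and MWER prefers acts with smaller weighted expected regret.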

In Theory and Decision 79:3, 2015, pp. 415-450. A preliminary version appears in Proceedings of the Twenty-Eighth Conference on Uncertainty in AI (UAI '2012). The full version of the paper is available here.

Co-evolution of learning and data-acquisition mechanisms: a model for cognitive evolution

Arnon Lotem and Joseph Y. Halpern

A fundamental and frequently overlooked aspect of animal learning is its reliance on compatibility between the learning rules used and the attentional and motivational mechanisms directing them to process the relevant data (called here data-acquisition mechanisms). We propose that this coordinated action, which may first appear fragile and error prone, is in fact extremely powerful, and critical for understanding cognitive evolution. Using basic examples from imprinting and associative learning, we argue that by coevolving to handle the natural distribution of data in the animal's environment, learning and data-acquisition mechanisms are tuned jointly so as to facilitate effective learning using relatively little memory and computation. We then suggest that this coevolutionary process offers a feasible path for the incremental evolution of complex cognitive systems, because it can greatly simplify learning. This is illustrated by considering how animals and humans can use these simple mechanisms to learn complex patterns and represent them in the brain. We conclude with some predictions and suggested directions for experimental and theoretical work.

In Philosophical Transactions of the Royal Society B 367, 2012, pp. 2686-2694. The paper is available locally and on the journal website.

Dexter Kozen: An appreciation

Joseph Y. Halpern

In Kozen Festschrift, Lecture Notes in Computer Science, vol. 7230, Springer, 2012, pp. 324-325. The paper is available here.

Algorithmic rationality: game theory with costly computation

Joseph Y. Halpern and Rafael Pass

We develop a general game-theoretic framework for reasoning about strategic agents performing possibly costly computation. In this framework, many traditional game-theoretic results (such as the existence of a Nash equilibrium) no longer hold. Nevertheless, we can use the framework to provide psychologically appealing explanations of observed behavior in well-studied games (such as finitely repeated prisoner's dilemma and rock-paper-scissors). Furthermore, we provide natural conditions on games sufficient to guarantee that equilibria exist.

In Journal of Economic Theory 156, 2015, pp. 246-268. Available here.

A computational game-theoretic framework for cryptography

Joseph Y. Halpern and Rafael Pass

We develop a definition of protocol security relying on game-theoretic notions of implementation. We show that a natural special case of this definition is equivalent to a variant of the traditional cryptographic definition of protocol security; this result shows that, when taking computation into account, the two approaches used for dealing with ``deviating'' players in two different communities -- Nash equilibrium in game theory and zero-knowledge ``simulation'' in cryptography -- are intimately related. Other special cases of our definition instead lead to more practical protocols and circumvent known lower bounds with respect to the cryptographic notion of security.

Unpublished manuscript, available here.

Distributed computing meets game theory: combining insights from two fields

Ittai Abraham, Lorenzo Alvisi, and Joseph Y. Halpern

This is a discussion of work that appears here and related work by Alvisi.

In ACM SIGACT News 42:2, 2011, pp. 69-76. The paper can be found here.

Solution to Exchanges 8.1 puzzle: Identifying the champion

Stephane Airiau, Ulle Endriss, and Joseph Y. Halpern

In SIGECOM Exchanges 8.2, 2009, pp. 1-2. The paper can be found here.

Awareness in games, awareness in logic

Joseph Y. Halpern

This is a summary of an invited talk, which covers material from "Reasoning about knowledge of unawareness", "Reasoning about knowledge of unawareness revisited", and "Extensive games with possibly unaware players".

In Proceedings of LPAR-17, Lecture Notes in Computer Science, vol. 6397, Springer, 2010, p. 15.

Algorithmic rationality: Adding cost of computation to game theory

Joseph Y. Halpern and Rafael Pass

In SIGECOM Exchanges 10.2, 2011, pp. 9-15. The paper can be found here. This is a summary of work in A computational game-theoretic framework for cryptography and Algorithmic rationality: game theory with costly computation.

Graded causation and defaults

Joseph Y. Halpern and Christopher Hitchcock

This paper extends the account of actual causation offered by Halpern and Pearl. We show that this account yields the wrong judgment in certain classes of cases. We offer a revised definition that incorporates consideration of defaults, typicality, and normality. The revised definition takes actual causation to be both graded and comparative. We then apply our definition to a number of cases.

British Journal for the Philosophy of Science 66:2, 2015, pp. 413-457; an updated version of the journal paper is available here. (It corrects some typos, the most significant of which is in Figure 4, which was missing an edge in the journal version.)

Language-based games

Adam Bjorndahl, Joseph Y. Halpern, and Rafael Pass

We introduce language-based games, a generalization of psychological games [Geanakoplos, Pearce, and Stacchetti 1989] that can also capture reference-dependent preferences [Koszegi and Rabin 2006]. The idea is to extend the domain of the utility function to situations, maximal consistent sets in some language. The role of the underlying language in this framework is thus particularly critical. Of special interest are languages that can express only coarse beliefs [Mullainathan 2002]. Despite the expressive power of the approach, we show that it can describe games in a simple, natural way. Nash equilibrium and rationalizability are generalized to this setting; Nash equilibrium is shown not to exist in general, while the existence of rationalizable strategies is proved under mild conditions.

In Proceedings of Fourteenth Conference on Theoretical Aspects of Rationality and Knowledge, 2013, pp. 39-48; the full paper is available here.

Game theory with translucent players

Joseph Y. Halpern and Rafael Pass

A traditional assumption in game theory is that players are opaque to one another---if a player changes strategies, then this change in strategies does not affect the choice of other players' strategies. In many situations this is an unrealistic assumption. We develop a framework for reasoning about games where the players may be translucent to one another; in particular, a player may believe that if she were to change strategies, then the other player would also change strategies. Translucent players may achieve significantly more efficient outcomes than opaque ones. Our main result is a characterization of strategies consistent with appropriate analogues of common belief of rationality. Common Counterfactual Belief of Rationality (CCBR) holds if (1) everyone is rational, (2) everyone counterfactually believes that everyone else is rational (i.e., all players i believe that everyone else would still be rational even if i were to switch strategies), (3) everyone counterfactually believes that everyone else is rational, and counterfactually believes that everyone else is rational, and so on. CCBR characterizes the set of strategies surviving iterated removal of minimax dominated strategies, where a strategy s is minimax dominated for i if there exists a strategy s' such that the worst possible utility for s' (taken over all strategy profiles for the other players) is better than the best possible utility for s.
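
The notion of minimax domination defined above is easy to check directly from a payoff table. The following toy check (names and payoff numbers are illustrative, not from the paper) verifies that in a standard Prisoner's Dilemma payoff matrix, neither cooperation nor defection is minimax dominated, so both survive the iterated removal procedure described above.

    # payoffs[s][t]: player i's utility when i plays s and the others play profile t.
    def minimax_dominated(s, payoffs):
        best_case_s = max(payoffs[s].values())
        return any(min(payoffs[s2].values()) > best_case_s
                   for s2 in payoffs if s2 != s)

    # Prisoner's Dilemma payoffs for the row player (illustrative numbers).
    pd = {'C': {'C': 3, 'D': 0}, 'D': {'C': 4, 'D': 1}}
    assert not minimax_dominated('C', pd) and not minimax_dominated('D', pd)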

International Journal of Game Theory, 47:3, 2018, pp. 949-976. A preliminary version appears in Proceedings of Fourteenth Conference on Theoretical Aspects of Rationality and Knowledge, 2013, pp. 216-221. The full paper is available here.

Compact representations of extended causal models

Joseph Y. Halpern and Christopher Hitchcock

Judea Pearl was the first to propose a definition of actual causation using causal models. A number of authors have suggested that an adequate account of actual causation must appeal not only to causal structure, but also to considerations of normality. In related work, we offer a definition of actual causation using extended causal models, which include information about both causal structure and normality. Extended causal models are potentially very complex. In this paper, we show how it is possible to achieve a compact representation of extended causal models.

In Cognitive Science 37:6, pp. 986-1010, 2013. The full paper (which corrects some typos in the journal version) is available here.

Weighted regret-based likelihood: a new approach to describing uncertainty

Joseph Y. Halpern

Recently, Halpern and Leung [2012] suggested representing uncertainty by a weighted set of probability measures, and suggested a way of making decisions using this representation of uncertainty: minimizing weighted regret. Their paper leaves open an apparently simpler question: what it means, according to this representation of uncertainty, for an event E to be more likely than an event E'. In this paper, a notion of comparative likelihood when uncertainty is represented by a weighted set of probability measures is defined. It generalizes the ordering defined by probability (and by lower probability) in a natural way; a generalization of upper probability can also be defined. A complete axiomatic characterization of this notion of regret-based likelihood is given.

In Journal of AI Research 54, 2015, pp. 471-492. A preliminary version appears in 12th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU), 2013, pp. 266-277. The full paper is available here.

Distributed protocols for leader election: a game-theoretic perspective

Ittai Abraham, Danny Dolev, and Joseph Y. Halpern

We do a game-theoretic analysis of leader election, under the assumption that each agent prefers having some leader to having no leader at all. We show that it is possible to obtain a fair Nash equilibrium, where each agent has an equal probability of being elected leader, in a completely connected network, in a bidirectional ring, and in a unidirectional ring, in the synchronous setting. In the asynchronous setting, Nash equilibrium is not quite the right solution concept. Rather, we must consider ex post Nash equilibrium; this means that we have a Nash equilibrium no matter what a scheduling adversary does. We show that ex post Nash equilibrium is attainable in the asynchronous setting in all the networks we consider, using a protocol with bounded running time. However, in the asynchronous setting, we require that n > 2. We can get a fair ε-Nash equilibrium if n = 2 in the asynchronous setting, under some cryptographic assumptions (specifically, the existence of a pseudo-random number generator and polynomially-bounded agents), using ideas from bit-commitment protocols. We then generalize these results to a setting where we can have deviations by a coalition of size k. In this case, we can get a fair k-resilient equilibrium if n > 2k; under the same cryptographic assumptions, we can get a k-resilient equilibrium if n = 2k. Finally, we show that, under minimal assumptions, not only do our protocols give a Nash equilibrium, they also give a sequential equilibrium, so players even play optimally off the equilibrium path.
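
The following toy simulation only illustrates the basic fairness idea behind such protocols (it is not the protocol from the paper, which must also handle strategic timing, ring topologies, and related issues): if every agent contributes a random value and the leader is determined by the sum modulo n, then each agent is elected with probability 1/n as long as at least one agent randomizes uniformly.

    import random

    def elect_leader(n, contributions=None):
        # contributions[i]: agent i's announced value in {0, ..., n-1}
        if contributions is None:
            contributions = [random.randrange(n) for _ in range(n)]
        return sum(contributions) % n

    counts = [0] * 4
    for _ in range(100000):
        counts[elect_leader(4)] += 1
    print(counts)  # each of the 4 agents wins roughly 25000 times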

In ACM Transactions on Economics and Computation 7:1, 2019. An earlier version appears in Proceedings of the 27th International Symposium on Distributed Computing, 2013, pp. 61-75. The full paper is available here.

Sequential equilibrium in computational games

Joseph Y. Halpern and Rafael Pass

We examine sequential equilibrium in the context of computational games [Halpern and Pass, 2011a], where agents are charged for computation. In such games, an agent can rationally choose to forget, so issues of imperfect recall arise. In this setting, we consider two notions of sequential equilibrium. One is an ex ante notion, where a player chooses his strategy before the game starts and is committed to it, but chooses it in such a way that it remains optimal even off the equilibrium path. The second is an interim notion, where a player can reconsider at each information set whether he is doing the ``right'' thing, and if not, can change his strategy. The two notions agree in games of perfect recall, but not in games of imperfect recall. Although the interim notion seems more appealing, Halpern and Pass [2011b] argue that there are some deep conceptual problems with it in standard games of imperfect recall. We show that the conceptual problems largely disappear in the computational setting. Moreover, in this setting, under natural assumptions, the two notions coincide.

In ACM Transactions on Economics and Computation 7:2, 2019. An earlier version appears in Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), 2013, pp. 171-176. The full paper is available here.

Towards a deeper understanding of cooperative equilibrium: characterization and complexity

Nan Rong and Joseph Y. Halpern

Nash equilibrium (NE) assumes that players always make a best response. However, this is not always true; sometimes people cooperate even when it is not a best response to do so. For example, in the Prisoner's Dilemma, people often cooperate. We consider two solution concepts that were introduced recently that try to capture such cooperation in two-player games: perfect cooperative equilibrium (PCE) (and an extension called maximum PCE (M-PCE)) [Halpern and Rong, 2010] and the coco value [Kalai and Kalai, 2009]. We show that, despite their definitions being quite different, these notions are closely related, both in terms of axiomatization and algebraic characterization. We also consider the problem of computing how well players do when they cooperate according to these solution concepts, and show that in both cases this can be done in polynomial time. In the case of the coco value, this follows easily from the definition; in the case of the corresponding M-PCE value, it follows from a theorem showing that bilinear programming for a class of 2x2 matrices is in constant time, a result that may be of independent interest.

In Proceedings of the Twelfth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2013), 2013, pp. 319-326. The paper is available here.

Decision theory with resource-bounded agents

Joseph Y. Halpern, Rafael Pass, and Lior Seeman

There have been two major lines of research aimed at capturing resource-bounded players in game theory. The first, initiated by Rubinstein [1985], charges an agent for doing costly computation; the second, initiated by Neyman [1985], does not charge for computation, but limits the computation that agents can do, typically by modeling agents as finite automata. We review recent work on applying both approaches in the context of decision theory. For the first approach, we take the objects of choice in a decision problem to be Turing machines, and charge players for the ``complexity'' of the Turing machine chosen (e.g., its running time). This approach can be used to explain well-known phenomena like first-impression-matters biases (i.e., people tend to put more weight on evidence they hear early on) and belief polarization (two people with different prior beliefs, hearing the same evidence, can end up with diametrically opposed conclusions) as the outcomes of quite rational decisions. For the second approach, we model people as finite automata, and provide a simple algorithm that, on a problem that captures a number of settings of interest, provably performs optimally as the number of states in the automaton increases.

In Topics in Cognitive Science 6:2, 2014, pp. 245-257. The paper can be found here. This paper covers material in "I don't want to think about it now: decision theory with costly computation" and "I'm doing as well as I can: modeling people as rational finite automata".

The truth behind the myth behind the folk theorem

Joseph Y. Halpern, Rafael Pass, and Lior Seeman

We study the problem of computing an ε-Nash equilibrium in repeated games. Earlier work by Borgs et al. [2010] suggests that this problem is intractable. We show that if we make a slight change to their model---modeling the players as polynomial-time Turing machines that maintain state (rather than stateless polynomial-time Turing machines)---and make some standard cryptographic hardness assumptions (the existence of public-key encryption), the problem can actually be solved in polynomial time.

In Games and Economic Behavior 117, 2019, pp. 479--498. An earlier version appears in Proceedings of 5th Conference on Innovations in Theoretical Computer Science (ITCS 2014), 2014. Available on CoRR and locally.

Appropriate causal models and stability of causation

Joseph Y. Halpern

Causal models defined in terms of structural equations have proved to be quite a powerful way of representing knowledge regarding causality. However, a number of authors have given examples that seem to show that the Halpern-Pearl [2005] (HP) definition of causality gives intuitively unreasonable answers. Here it is shown that, for each of these examples, we can give two stories consistent with the description in the example, such that intuitions regarding causality are quite different for each story. By adding additional variables, we can disambiguate the stories. Moreover, in the resulting causal models, the HP definition of causality gives the intuitively correct answer. It is also shown that, by adding extra variables, a modification to the original HP definition made to deal with an example by Hopkins and Pearl [2003] may not be necessary. Given how much can be done by adding extra variables, there might be a concern that the notion of causality is somewhat unstable. Can adding extra variables in a ``conservative'' way (i.e., maintaining all the relations between the variables in the original model) cause the answer to the question ``Is X=x a cause of Y=y?'' to alternate between ``yes'' and ``no''? Here it is shown that adding an extra variable can change the answer from ``yes'' to ``no'', but after that, it cannot change back to ``yes''.

In Review of Symbolic Logic 9:1, 2016, pp. 76-102. A preliminary version appears in Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2014), 2014, pp. 198-207. The full paper is available here.

Reasoning about rationality

Adam Bjorndahl, Joseph Y. Halpern, and Rafael Pass

We provide a sound and complete axiomatization for a class of logics appropriate for reasoning about the rationality of players in games. Essentially the same axiomatization applies to a wide class of decision rules.

In Games and Economic Behavior 104, 2017, pp. 146-164. A preliminary version, with the title "Axiomatizing rationality", appears in Proceedings of the Fourteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2014), 2014, pp. 178-187. Available here.

The role of the protocol in anthropic reasoning

Joseph Y. Halpern

It is shown how thinking in terms of the protocol used can help clarify problems related to anthropic reasoning, such as the Doomsday Argument and the Sleeping Beauty Problem.

In Ergo 2:9, 2015, pp. 195-206. Available here.

A logic for reasoning about ambiguity

Joseph Y. Halpern and Willemien Kets

Standard models of multi-agent modal logic do not capture the fact that information is often ambiguous, and may be interpreted in different ways by different agents. We propose a framework that can model this, and consider different semantics that capture different assumptions about the agents' beliefs regarding whether or not there is ambiguity. We examine the expressive power of logics of ambiguity compared to logics that cannot model ambiguity, with respect to the different semantics that we propose.

In Artificial Intelligence 209, pp. 1-10, 2014. Available in pdf. Some of the material in this paper appeared in preliminary form in "Ambiguous language and differences in beliefs".

Ambiguous language and common priors

Joseph Y. Halpern and Willemien Kets

Standard economic models cannot capture the fact that information is often ambiguous, and is interpreted in multiple ways. Using a framework that distinguishes between the language in which statements are made and the interpretation of statements, we show that ambiguity can have important consequences. We show that players can agree to disagree in the presence of ambiguity, even if there is a common prior, but that allowing for ambiguity is more restrictive than assuming heterogeneous priors. We also demonstrate that, unlike in the case where there is no ambiguity, players may come to have different beliefs starting from a common prior, even if they have received exactly the same information, unless the information is common knowledge. Taken together, these results suggest that ambiguity provides a potential explanation for heterogeneous beliefs. At the same time, it imposes nontrivial restrictions on the situations that can be modeled, so that it is not the case that ``anything goes'' once we allow for ambiguous statements.

In Games and Economic Behavior 90, 2015, pp. 171-180. Available in pdf. Some of the material in this paper appeared in preliminary form in "Ambiguous language and differences in beliefs".

A procedural characterization of solution concepts in games

Joseph Y. Halpern and Yoram Moses

We show how game-theoretic solution concepts such as Nash equilibrium, correlated equilibrium, rationalizability, and sequential equilibrium can be given a uniform definition in terms of a knowledge-based program. In a precise sense, this program can be viewed as providing a procedural characterization of rationality.

In Journal of AI Research 49, 2014, pp. 143-170. Available at JAIR and locally.

Characterizing solution concepts in terms of common knowledge of rationality

Joseph Y. Halpern and Yoram Moses

Characterizations of Nash equilibrium, correlated equilibrium, and rationalizability in terms of common knowledge of rationality are well known. We show how to get analogous characterizations of sequential equilibrium and (trembling hand) perfect equilibrium, as a consequence of recent results of Halpern.

The paper can be found here.

The computational complexity of structure-based causality

Gadi Aleksandrowicz, Hana Chockler, Joseph Y. Halpern, and Alexander Ivrii

Halpern and Pearl introduced a definition of actual causality; Eiter and Lukasiewicz showed that computing whether X=x is a cause of Y=y is NP-complete in binary models (where all variables can take on only two values) and Σ_2^P-complete in general models. In the final version of their paper, Halpern and Pearl slightly modified the definition of actual cause, in order to deal with problems pointed out by Hopkins and Pearl. As we show, this modification has a nontrivial impact on the complexity of computing actual cause. To characterize the complexity, a new family D_k^P, k = 1, 2, 3, ..., of complexity classes is introduced, which generalizes the class DP introduced by Papadimitriou and Yannakakis (DP is just D_1^P). We show that the complexity of computing causality under the updated definition is D_2^P-complete. Chockler and Halpern extended the definition of causality by introducing notions of responsibility and blame. The complexity of determining the degree of responsibility and blame using the original definition of causality was completely characterized. Again, we show that changing the definition of causality affects the complexity, and completely characterize it using the updated definition.
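
For reference, the class DP of Papadimitriou and Yannakakis (standard material, not specific to this paper) is

    \mathrm{D^P} = \{\, L_1 \cap L_2 \;:\; L_1 \in \mathrm{NP},\ L_2 \in \mathrm{coNP} \,\};

the classes D_k^P mentioned above generalize this, roughly by replacing NP and coNP with the levels Σ_k^p and Π_k^p of the polynomial hierarchy, so that D_1^P is just DP.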

In Journal of AI Research 58, 2017, pp. 431-341. An earlier version appears in Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence (AAAI-14), 2014. The paper is available at the JAIR site and locally here.

Conditional belief types

Alfredo Di Tillio, Joseph Y. Halpern, and Dov Samet

We study type spaces where a player's type at a state is a conditional probability on the space. We axiomatize these spaces using conditional belief operators, examining three additional axioms of increasing strength. First, introspection, which requires the agent to be unconditionally certain of her beliefs. Second, echo, according to which the unconditional beliefs implied by the condition must be held given the condition. Third, determination, which says that the conditional beliefs are the unconditional beliefs that are conditionally certain. Echo implies that conditioning on an event is the same as conditioning on the event being certain, which formalizes the standard informal interpretation of conditional probability. The game-theoretic application of our model, discussed within an example, sheds light on a number of issues in the analysis of extensive form games. Type spaces are closely related to the sphere models of counterfactual conditionals and to models of hypothetical knowledge.

In Games and Economic Behavior 87, 2014, pp. 253-268. The paper can be found here.

An introduction to logics of knowledge and belief

Hans van Ditmarsch, Joseph Y. Halpern, Wiebe van der Hoek, and Barteld Kooi

This is the introductory chapter of the Handbook of Epistemic Logic. It provides an introduction to some basic concepts of epistemic logic, basic formal languages, their semantics, and proof systems. It also contains an overview of the handbook, and a brief history of epistemic logic and pointers to the literature.

In Handbook of Epistemic Logic (H. van Ditmarsch, J. Y. Halpern, W. van der Hoek, and B. Kooi, editors), College Publications, 2015, pp. 1-51. Available here.

An equilibrium analysis of scrip systems

Ian A. Kash, Joseph Y. Halpern, and Eric J. Friedman

A game-theoretic model of scrip (artificial currency) systems is analyzed. It is shown that relative entropy can be used to characterize the distribution of agent wealth when all agents use threshold strategies -- that is, they volunteer to do work iff they have below a threshold amount of money. Monotonicity of agents' best-reply functions is used to show that scrip systems have pure strategy equilibria where all agents use threshold strategies. An algorithm is given that can compute such an equilibrium and the resulting distribution of wealth.
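
A toy simulation (illustrative only; the code and parameters below are not from the paper) makes the threshold strategies described above concrete: each agent volunteers exactly when its scrip holdings are below its threshold, the requester pays one unit of scrip to a randomly chosen volunteer, and the resulting wealth distribution can then be examined.

    import random

    def simulate(n=100, threshold=5, initial_money=2, rounds=50000):
        money = [initial_money] * n
        for _ in range(rounds):
            requester = random.randrange(n)
            if money[requester] == 0:
                continue                      # requester cannot pay for service
            volunteers = [i for i in range(n)
                          if i != requester and money[i] < threshold]
            if not volunteers:
                continue                      # no one is willing to work
            worker = random.choice(volunteers)
            money[requester] -= 1             # one unit of scrip changes hands
            money[worker] += 1
        return money

    wealth = simulate()
    print(min(wealth), max(wealth), sum(wealth) / len(wealth))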

In ACM Transactions on Economics and Computation 3:3, 2015. Material from "Efficiency and Nash equilibria in a scrip system for P2P networks", the conference version of "Optimizing scrip systems: efficiency, crashes, hoarders, and altruists", and "Manipulating scrip systems: sybils and collusion" was combined and then split into two journal papers. This is one of them; the other is here. The paper is available here.

Updating probability: tracking statistics as criterion

Bas van Fraassen and Joseph Y. Halpern

What is generally called Bayesian Conditionalization is a policy for updating probability assignments. It specifies as admissible input (“evidence”) elements of the domain of the prior probability function, and allows as possible posteriors the conditionalizations on such elements (“propositions”). Under certain ideal conditions, this is the only coherent policy. When those conditions are not met, other policies might be appropriate. Putative examples would involve updating by Richard Jeffrey’s generalized conditionalization ("Jeffrey Conditionalization'') or Edwin T. Jaynes' rule to maximize relative entropy ("MAXENT"). This raises the question of what general constraints any such policy should satisfy. We propose an answer here.
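
For readers who want the update rules being compared, the textbook formulations (standard, and not necessarily in the paper's notation) are: conditionalization on evidence E, and Jeffrey conditionalization on a partition E_1, ..., E_n whose new probabilities are q_1, ..., q_n:

    P_{\mathrm{new}}(A) = P(A \mid E) = \frac{P(A \cap E)}{P(E)},
    \qquad
    P_{\mathrm{new}}(A) = \sum_{i=1}^{n} q_i \, P(A \mid E_i).

MAXENT instead selects, among the probability functions satisfying the new constraints, the one closest to the prior in the sense of relative entropy.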

In British Journal for the Philosophy of Science 68:3, 2017, pp. 725-743. The paper is available here.

Cause, responsibility, and blame: a structural-model approach

Joseph Y. Halpern

A definition of causality introduced by Halpern and Pearl, which uses structural equations, is reviewed. A more refined definition is then considered, which takes into account issues of normality and typicality, which are well known to affect causal ascriptions. Causality is typically an all-or-nothing notion: either A is a cause of B or it is not. An extension of the definition of causality to capture notions of degree of responsibility and degree of blame, due to Chockler and Halpern, is reviewed. For example, if someone wins an election 11-0, then each person who votes for him is less responsible for the victory than if he had won 6-5. Degree of blame takes into account an agent's epistemic state. Roughly speaking, the degree of blame of A for B is the expected degree of responsibility of A for B, taken over the epistemic state of an agent. Finally, the structural-equations definition of causality is compared to Wright's NESS test.
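
To make the election example concrete: on the Chockler-Halpern account, as I understand it, a voter's degree of responsibility for the victory is 1/(k+1), where k is the minimal number of other votes that would have to change before that voter's own vote becomes pivotal. So

    \mathrm{dr} = \frac{1}{k+1}: \quad \text{6-5 victory: } k = 0 \Rightarrow \mathrm{dr} = 1; \qquad \text{11-0 victory: } k = 5 \Rightarrow \mathrm{dr} = \tfrac{1}{6},

since in the 11-0 case five other supporters would have to switch sides before any single vote became critical.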

In Law, Probability, and Risk 14:2, 2015, pp. 91-118. This paper is meant to be a gentle introduction to some of my work on causality, particularly that in "Causes and explanations: a structural-model approach. Part I: Causes", "Responsibility and blame: A structural-model approach", and, to a lesser extent "Actual causation and the art of modeling", and "Defaults and normality in causal structures". The paper is available here.

Not just an empty threat: subgame-perfect equilibrium in repeated games played by computationally bounded players

Joseph Y. Halpern, Rafael Pass, and Lior Seeman

We study the problem of finding a subgame-perfect equilibrium in repeated games. In earlier work, we showed how to efficiently find an (approximate) Nash equilibrium, assuming that players are computationally bounded (and making standard cryptographic hardness assumptions); in contrast, as demonstrated in the work of Borgs et al. [2010], unless we restrict to computationally bounded players, the problem is PPAD-hard. But it is well-known that for extensive-form games (such as repeated games), Nash equilibrium is a weak solution concept. In this work, we define and study an appropriate notion of a subgame-perfect equilibrium for computationally bounded players, and show how to efficiently find such an equilibrium in repeated games (again, making standard cryptographic hardness assumptions). We also show that our algorithm works not only for games with a finite number of players, but also for constant-degree graphical games.

In Proceedings of WINE 2014: 10th Conference on Web and Internet Economics, 2014, pp. 249-262. The paper is available here.

Translucent players: explaining cooperative behavior in social dilemmas

Valerio Capraro and Joseph Y. Halpern

In the last few decades, numerous experiments have shown that humans do not always behave so as to maximize their material payoff. Cooperative behavior when non-cooperation is a dominant strategy (with respect to the material payoffs) is particularly puzzling. Here we propose a novel approach to explain cooperation, assuming what Halpern and Pass call translucent players. Typically, players are assumed to be opaque, in the sense that a deviation by one player in a normal-form game does not affect the strategies used by other players. But a player may believe that if he switches from one strategy to another, the fact that he chooses to switch may be visible to the other players. For example, if he chooses to defect in Prisoner's Dilemma, the other player may sense his guilt. We show that by assuming translucent players, we can recover many of the regularities observed in human behavior in well-studied games such as Prisoner's Dilemma, Traveler's Dilemma, Bertrand Competition, and the Public Goods game.

In Rationality and Society, 31, 2019, pp. 371--408. An earlier version appears in Proceedings of Fifteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), 2015. The paper is available here and on the journal website.

Bayesian games with intentions

Adam Bjorndahl, Joseph Y. Halpern, and Rafael Pass

We show that standard Bayesian games cannot represent the full spectrum of belief-dependent preferences. However, by introducing a fundamental distinction between intended and actual strategies, we remove this limitation. We define Bayesian games with intentions, generalizing both Bayesian games and psychological games [Geanakoplos, Pearce, and Stacchetti 1989], and prove that Nash equilibria in psychological games correspond to a special class of equilibria as defined in our setting.

In Games and Economic Behavior 123, 2020, pp. 54--67. An earlier version appears in Proceedings of Fifteenth Conference on Theoretical Aspects of Rationality and Knowledge, 2013, pp. 39-48; also in Electronic Proceedings in Theoretical Computer Science 215, 2016, pp. 99-113. The full paper is available here and on the journal website.

A modification of the Halpern-Pearl definition of causality

Joseph Y. Halpern

The original Halpern-Pearl definition of causality was updated in the journal version of the paper to deal with some problems pointed out by Hopkins and Pearl [2002]. Here the definition is modified yet again, in a way that (a) leads to a simpler definition, (b) handles the problems pointed out by Hopkins and Pearl, and many others, (c) gives reasonable answers (that agree with those of the original and updated definition) in the standard problematic examples of causality, and (d) has lower complexity than either the original or updated definitions.

In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015), 2015, pp. 3022-3033. The full paper is available here.

Why bother with syntax?

Joseph Y. Halpern

This short note discusses the role of syntax vs. semantics and the interplay between logic, philosophy, and language in computer science and game theory.

In Rohit Parikh on Logic, Language and Society (C. Baskent, L. Moss, and R. Ramanujam, editors), Springer, 2017, pp. 111-119. The full paper is available here.

Responsibility judgments in voting scenarios

Tobias Gerstenberg, Joseph Y. Halpern, and Joshua B. Tenenbaum

How do people assign responsibility for the outcome of an election? In previous work, we have shown that responsibility judgments in achievement contexts are affected by the probability that a person's contribution is necessary, and by how close it was to being pivotal [Lagnado, Gerstenberg, & Zultan, 2013]. Here we focus on responsibility judgments in voting scenarios. We varied the number of people in different voting committees, their political affiliations, the number of votes required for a policy to pass, which party supports the policy, and the pattern of votes (creating 170 different situations). As expected, we found that participants' responsibility judgments increased the closer the voter was to being pivotal. Further, judgments increased the more unexpected a vote was. Voters were assigned more responsibility when they voted against the majority in the committee, and when they voted against their party affiliation.

In Proceedings of the 37th Annual Conference of the Cognitive Science Society (CogSci 2015), 2015, pp. 788-793. The paper is available here. This material forms part of the paper Predicting responsibility judgments from dispositional inferences and causal attributions.

Minimizing regret in dynamic decision problems

Joseph Y. Halpern and Samantha Leung

The menu-dependent nature of regret-minimization creates subtleties when it is applied to dynamic decision problems. It is not clear whether forgone opportunities should be included in the menu. We explain commonly observed behavioral patterns as minimizing regret when forgone opportunities are present. If forgone opportunities are included, we can characterize when a form of dynamic consistency is guaranteed.
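
As an illustration of the menu dependence at issue, here is a toy minimax-regret computation in Python (with made-up payoffs): adding a forgone opportunity to the menu changes which of the still-available acts minimizes maximum regret.

  # Toy illustration of menu dependence in regret minimization; payoffs are made up.
  # u[act][state] = payoff; regret of an act in a state is measured against the best
  # act in the menu, but only currently available acts can be chosen.
  def minimax_regret_choice(available, menu, u, states):
      def max_regret(a):
          return max(max(u[b][s] for b in menu) - u[a][s] for s in states)
      return min(available, key=max_regret)

  states = ["s1", "s2"]
  u = {"a": {"s1": 10, "s2": 0},
       "b": {"s1": 4,  "s2": 4},
       "f": {"s1": 0,  "s2": 9}}   # "f" is a forgone opportunity

  print(minimax_regret_choice(["a", "b"], ["a", "b"], u, states))       # chooses "a"
  print(minimax_regret_choice(["a", "b"], ["a", "b", "f"], u, states))  # chooses "b"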

In Theory and Decision 81:1, 2016, pp. 123-151. A preliminary version appears in 13th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty (ECSQARU), 2015, pp. 3-13. The full version of the paper is available here.

Maxmin weighted expected utility: a simpler characterization

Joseph Y. Halpern and Samantha Leung

Chateauneuf and Faro [2009] axiomatize a weighted version of maxmin expected utility over acts with nonnegative utilities, where weights are represented by a confidence function. We argue that their representation is only one of many possible, and we axiomatize a more natural form of maxmin weighted expected utility. We also provide stronger uniqueness results.

In Theory and Decision 80:4, 2016, pp. 581-610. The full paper is available here.

Sufficient conditions for causality to be transitive

Joseph Y. Halpern

Natural conditions are provided that are sufficient to ensure that causality as defined by approaches that use counterfactual dependence and structural equations will be transitive.

In Philosophy of Science 83:2, 2016, pp. 213-226. The full paper is available here.

Actual Causality

Joseph Y. Halpern

This book covers much of my research on causality.

Published by MIT Press, 2016. An html version is available here.

Incentive-compatible mechanisms for norm monitoring in open multi-agent systems

Natasha Alechina, Joseph Y. Halpern, Ian Kash, and Brian Logan

We consider the problem of detecting norm violations in open multi-agent systems (MAS). We show how, using ideas from scrip systems, we can design mechanisms where the agents comprising the MAS are incentivised to monitor the actions of other agents for norm violations. The cost of providing the incentives is not borne by the MAS and does not come from fines charged for norm violations (fines may be impossible to levy in a system where agents are free to leave and rejoin again under a different identity). Instead, monitoring incentives come from (scrip) fees for accessing the services provided by the MAS. In some cases, perfect monitoring (and hence enforcement) can be achieved: no norms will be violated in equilibrium. In other cases, we show that, while it is impossible to achieve perfect enforcement, we can get arbitrarily close; we can make the probability of a norm violation in equilibrium arbitrarily small. We show using simulations that our theoretical results hold for multi-agent systems with as few as 1000 agents---the system rapidly converges to the steady-state distribution of scrip tokens necessary to ensure monitoring and then remains close to the steady state.

In Journal of AI Research 62, 2018, pp. 433--459. A preliminary version, with the title ``Decentralised norm monitoring in open multi-agent systems'', appears in Proceedings of the Fifteenth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2016), 2016, pp. 1399-1400. An abridged version also appears in Proceedings of the 27th International Joint Conference on Artificial Intelligence (IJCAI 2018) (it was invited to the journal track of the conference). The full paper is available here.

Sequential equilibrium in games of imperfect recall

Joseph Y. Halpern and Rafael Pass

Definitions of sequential equilibrium and perfect equilibrium are given in games of imperfect recall. Subtleties regarding the definition are discussed.

In ACM Transactions on Economics and Computation 9:4, 2021. A preliminary version appears in Proceedings of the Fifteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2016), 2016, pp. 278-287. The full paper is available here.

A procedural characterization of solution concepts in games

Joseph Y. Halpern and Yoram Moses

We show how game-theoretic solution concepts such as Nash equilibrium, correlated equilibrium, rationalizability, and sequential equilibrium can be given a uniform definition in terms of knowledge-based programs with counterfactual semantics. Intuitively, all solution concepts are implementations of two knowledge-based programs, one appropriate for games represented in normal form, the other for games represented in extensive form. These knowledge-based programs can be viewed as embodying rationality; in a precise sense, they provide a procedural characterization of rationality. The representation works even if (a) information sets do not capture an agent's knowledge, (b) uncertainty is not represented by probability, or (c) the underlying game is not common knowledge.

Journal of AI Research 49, 2014, pp. 143-170. Some of the material in this paper appeared in preliminary form in "Characterizing solution concepts in games using knowledge-based programs", Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), 2007, pp. 1300-1307. The full paper is available here.

Characterizing Solution Concepts in Terms of Common Knowledge of Rationality

Joseph Y. Halpern and Yoram Moses

Characterizations of Nash equilibrium, correlated equilibrium, and rationalizability in terms of common knowledge of rationality are well known. Analogous characterizations of sequential equilibrium, (trembling hand) perfect equilibrium, and quasi-perfect equilibrium in n-player games are obtained here, using results of Halpern.

In International Journal of Game Theory 46:2, 2017, pp. 457-473. Some of the material in this paper appeared in preliminary form in "Characterizing solution concepts in games using knowledge-based programs", Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), 2007, pp. 1300-1307. The full paper is available here.

The bottleneck may be the solution, not the problem (invited commentary)

Arnon Lotem, Oren Kolodny, Joseph Y. Halpern, Luca Onnis, and Shimon Edelman

As a highly consequential biological trait, a memory “bottleneck” cannot escape selection pressures. It must therefore co-evolve with other cognitive mechanisms rather than act as an independent constraint. Recent theory and an implemented model of language acquisition suggest that a limit on working memory may evolve to help learning. Furthermore, it need not hamper the use of language for communication.

In Behavioral and Brain Sciences 39, 2016. The paper is available here.

Rational consensus

Joseph Y. Halpern and Xavier Vilaca

We provide a game-theoretic analysis of consensus, assuming that processes are controlled by rational agents and may fail by crashing. We consider agents that care only about consensus: that is, (a) an agent's utility depends only on the consensus value achieved (and not, for example, on the number of messages the agent sends) and (b) agents strictly prefer reaching consensus to not reaching consensus. We show that, under these assumptions, there is no ex post Nash Equilibrium, even with only one failure. Roughly speaking, this means that there must always exist a failure pattern (a description of who fails, when they fail, and which agents they do not send messages to in the round that they fail) and initial preferences for which an agent can gain by deviating. On the other hand, if we assume that there is a distribution p on the failure patterns and initial preferences, then under minimal assumptions on p, there is a Nash equilibrium that tolerates f failures (i.e., p puts probability 1 on there being at most f failures) if f+1 < n (where n is the total number of agents). Moreover, we show that a slight extension of the Nash equilibrium strategy is also a sequential equilibrium (under the same assumptions about the distribution p).

In Proceedings of the 35th Annual ACM Symposium on Principles of Distributed Computing, 2016, pp. 137-146. The full paper is available here.

MDPs with unawareness in robotics

Joseph Y. Halpern, Nan Rong, and Ashutosh Saxena

We formalize decision-making problems in robotics and automated control using continuous MDPs and actions that take place over continuous time intervals. We then approximate the continuous MDP using finer and finer discretizations. Doing this results in a family of systems, each of which has an extremely large action space, although only a few actions are ``interesting''. We can view the decision maker as being unaware of which actions are ``interesting''. We can model this using MDPUs, MDPs with unawareness, where the action space is much smaller. As we show, MDPUs can be used as a general framework for learning tasks in robotic problems. We prove results on the difficulty of learning a near-optimal policy in an MDPU for a continuous task. We apply these ideas to the problem of having a humanoid robot learn on its own how to walk.

In Proceedings of the 32nd Conference on Uncertainty in AI, 2016, pp. 627-636. The paper is available here.

Computational extensive-form games

Joseph Y. Halpern, Rafael Pass, and Lior Seeman

We define solution concepts appropriate for computationally bounded players playing a fixed finite game. To do so, we need to define what it means for a computational game, which is a sequence of games that get larger in some appropriate sense, to represent a single finite underlying extensive-form game. Roughly speaking, we require all the games in the sequence to have essentially the same structure as the underlying game, except that two histories that are indistinguishable (i.e., in the same information set) in the underlying game may correspond to histories that are only computationally indistinguishable in the computational game. We define a computational version of both Nash equilibrium and sequential equilibrium for computational games, and show that every Nash (resp., sequential) equilibrium in the underlying game corresponds to a computational Nash (resp., sequential) equilibrium in the computational game. One advantage of our approach is that if a cryptographic protocol represents an abstract game, then we can analyze its strategic behavior in the abstract game, and thus separate the cryptographic analysis of the protocol from the strategic analysis. Finally, we use our approach to study the power of having memory in a TM. Specifically, we show that there is a gap between what can be done with stateful strategies (ones that make use of memory) and what can be done with stateless strategies.

In Proceedings of the 17th ACM Conference on Electronic Commerce, 2016, pp. 681-698. The full paper is available here.

Is state-dependent valuation more adaptive than simpler rules?

Joseph Y. Halpern and Lior Seeman

McNamara, Trimmer, and Houston claim to provide an explanation of certain systematic deviations from rational behavior using a mechanism that could arise through natural selection. We provide an arguably much simpler mechanism in terms of computational limitations that performs better in the environment described by McNamara, Trimmer, and Houston. To argue convincingly that animals' use of state-dependent valuation is adaptive and is likely to be selected for by natural selection, one must argue that, in some sense, it is a better approach than the simple strategies that we propose.

Behavioural Processes 147, 2018, pp. 33-37. Available here.

Incentivising monitoring in open normative systems

Natasha Alechina, Joseph Y. Halpern, Ian Kash, and Brian Logan

We consider the problem of detecting norm violations in open multi-agent systems (MAS). We show how, using ideas from scrip systems, we can design mechanisms where the agents comprising the MAS are incentivised to monitor the actions of other agents for norm violations. The cost of providing the incentives is not borne by the MAS and does not come from fines charged for norm violations (fines may be impossible to levy in a system where agents are free to leave and rejoin again under a different identity). Instead, monitoring incentives come from (scrip) fees for accessing the services provided by the MAS. In some cases, perfect monitoring (and hence enforcement) can be achieved: no norms will be violated in equilibrium. In other cases, we show that, while it is impossible to achieve perfect enforcement, we can get arbitrarily close; we can make the probability of a norm violation in equilibrium arbitrarily small. We show using simulations that our theoretical results, which apply to systems with a large number of agents, hold for multi-agent systems with as few as 1000 agents---the system rapidly converges to the steady-state distribution of scrip tokens necessary to ensure monitoring and then remains close to the steady state.

In Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence (AAAI-17), 2017, pp. 305-311. The paper is available here.

Causality, responsibility, and blame in team plans

Natasha Alechina, Joseph Y. Halpern, and Brian Logan

Many objectives can be achieved (or may be achieved more effectively) only by a group of agents executing a team plan. If a team plan fails, it is often of interest to determine what caused the failure, the degree of responsibility of each agent for the failure, and the degree of blame attached to each agent. We show how team plans can be represented in terms of structural equations, and then apply the definitions of causality introduced by Halpern, based on that of Halpern and Pearl, and degree of responsibility and blame introduced by Chockler and Halpern to determine the agent(s) who caused the failure and what their degree of responsibility/blame is. We also prove new results on the complexity of computing causality and degree of responsibility and blame, showing that they can be determined in polynomial time for many team plans of interest.

In Proceedings of the Sixteenth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2017), pp. 1091-1099, 2017. The paper is available here.

Games with tolerant players

Arpita Ghosh and Joseph Y. Halpern

A notion of π-tolerant equilibrium is defined that takes into account that players have some tolerance regarding payoffs in a game. This solution concept generalizes Nash equilibrium and refines ε-Nash equilibrium in a natural way. We show that π-tolerant equilibrium can explain cooperation in social dilemmas such as Prisoner's Dilemma and the Public Goods game. We then examine the structure of particularly cooperative π-tolerant equilibria, where players are as cooperative as they can be, subject to their tolerances, in Prisoner's Dilemma. To the extent that cooperation is due to tolerance, these results provide guidance to a mechanism designer who has some control over the payoffs in a game, and suggest ways in which cooperation can be increased.
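
The following is a rough Python sketch of the underlying idea (no player deviates unless the gain exceeds her tolerance), not the paper's formal definition of π-tolerant equilibrium; the payoffs and tolerances are made up.

  # Rough sketch of the tolerance idea: call a profile "tolerant-stable" if no player
  # can gain more than her tolerance by a unilateral deviation. Payoffs/tolerances made up.
  payoff = {  # (row, col) -> (row payoff, col payoff): a Prisoner's Dilemma
      ("C", "C"): (3, 3), ("C", "D"): (0, 5),
      ("D", "C"): (5, 0), ("D", "D"): (1, 1),
  }

  def tolerant_stable(profile, tolerances):
      for i, tol in enumerate(tolerances):
          current = payoff[profile][i]
          for dev in ("C", "D"):
              alt = list(profile)
              alt[i] = dev
              if payoff[tuple(alt)][i] - current > tol:
                  return False
      return True

  print(tolerant_stable(("D", "D"), (0, 0)))   # True: (D,D) is a Nash equilibrium
  print(tolerant_stable(("C", "C"), (1, 1)))   # False: the gain from defecting is 2 > 1
  print(tolerant_stable(("C", "C"), (2, 2)))   # True: a gain of 2 does not exceed tolerance 2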

In Proceedings of Sixteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Electronic Proceedings in Theoretical Computer Science 251, 2017, pp. 251-264. Available on CoRR.

A knowledge-based analysis of the blockchain protocol

Joseph Y. Halpern and Rafael Pass

At the heart of Bitcoin is the blockchain protocol, a protocol for achieving consensus on a public ledger that records bitcoin transactions. To the extent that a blockchain protocol is used for applications such as contract signing and making certain transactions (such as house sales) public, we need to understand what guarantees the protocol gives us in terms of agents' knowledge. Here, we provide a complete characterization of agents' knowledge when running a blockchain protocol, using a variant of common knowledge that takes into account the fact that agents can enter and leave the system, the fact that it is not known which agents are in fact following the protocol (some agents may want to deviate if they can gain by doing so), and the fact that the guarantees provided by blockchain protocols are probabilistic. We then consider some scenarios involving contracts and show that this level of knowledge suffices for some scenarios, but not others.

In Proceedings of Sixteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Electronic Proceedings in Theoretical Computer Science 251, 2017, pp. 324-335. Available on CoRR.

From type spaces to probability frames and back, via language

Adam Bjorndahl and Joseph Y. Halpern

We investigate the connection between the two major mathematical frameworks for modeling interactive beliefs: Harsanyi type spaces and possible-worlds-style probability frames. While translating the former into the latter is straightforward, we demonstrate that the reverse translation relies implicitly on a background logical language. Once this ``language parameter'' is made explicit, it reveals a close relationship between universal type spaces and canonical models: namely, that they are essentially the same construct. As the nature of a canonical model depends heavily on the background logic used to generate it, this work suggests a new view into a corresponding landscape of universal type spaces.

In Proceedings of Sixteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Electronic Proceedings in Theoretical Computer Science 251, 2017, pp. 75-87. Available on CoRR.

An epistemic foundation for authentication logics

Joseph Y. Halpern, Ron van der Meyden, and Riccardo Pucella

While there have been many attempts, going back to BAN logic, to base reasoning about security protocols on epistemic notions, they have not been all that successful. Arguably, this has been due to the particular logics chosen. We present a simple logic based on the well-understood modal operators of knowledge, time, and probability, and show that it is able to handle issues that have often been swept under the rug by other approaches, while being flexible enough to capture all the higher-level security notions that appear in BAN logic. Moreover, while still assuming that the knowledge operator allows for unbounded computation, it can handle the fact that a computationally bounded agent cannot decrypt messages in a natural way, by distinguishing strings and message terms. We demonstrate that our logic can capture BAN logic notions by providing a translation of the BAN operators into our logic, capturing belief by a form of probabilistic knowledge.

In Proceedings of Sixteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Electronic Proceedings in Theoretical Computer Science 251, 2017, pp. 306-323. Available on CoRR.

The evolution of cognitive mechanisms in response to cultural innovations

Arnon Lotem, Joseph Y. Halpern, Shimon Edelman, and Oren Kolodny

When humans and other animals make cultural innovations, they also change their environment, thereby imposing new selective pressures that can modify their biological traits. For example, there is evidence that dairy farming by humans favored alleles for adult lactose tolerance. Similarly, the invention of cooking possibly affected the evolution of jaw and tooth morphology. However, when it comes to cognitive traits and learning mechanisms, it is much more difficult to determine whether and how their evolution was affected by culture or by their use in cultural transmission. Here we argue that, excluding very recent cultural innovations, the assumption that culture shaped the evolution of cognition is both more parsimonious and more productive than assuming the opposite. In considering how culture shapes cognition, we suggest that a process-level model of cognitive evolution is necessary and offer such a model. The model employs relatively simple coevolving mechanisms of learning and data acquisition that jointly construct a complex network of a type previously shown to be capable of supporting a range of cognitive abilities. The evolution of cognition, and thus the effect of culture on cognitive evolution, is captured through small modifications of these coevolving learning and data-acquisition mechanisms, whose coordinated action is critical for building an effective network. We use the model to show how these mechanisms are likely to evolve in response to cultural phenomena, such as language and tool-making, which are associated with major changes in data patterns and with new computational and statistical challenges.

In Proceedings of the National Academy of Sciences 114:30, 2017, pp. 7915-7922. Available locally here.

Combining experts' causal judgments

Dalal Alrajeh, Hana Chockler, and Joseph Y. Halpern

Consider a policymaker who wants to decide which intervention to perform in order to change a currently undesirable situation. The policymaker has at her disposal a team of experts, each with their own understanding of the causal dependencies between different factors contributing to the outcome. The policymaker has varying degrees of confidence in the experts' opinions. She wants to combine their opinions in order to decide on the most effective intervention. We formally define the notion of an effective intervention, and then consider how experts' causal judgments can be combined in order to determine the most effective intervention. We define a notion of two causal models being compatible, and show how compatible causal models can be combined. We then use it as the basis for combining experts' causal judgments. We illustrate our approach on a number of real-life examples.

In Artificial Intelligence 288, 2020. An earlier version appears in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 2018. The paper is available here.

Towards formal definitions of blameworthiness, intention, and moral responsibility

Joseph Y. Halpern and Max Kleiman-Weiner

We provide formal definitions of degree of blameworthiness and intention relative to an epistemic state (a probability over causal models and a utility function on outcomes). These, together with a definition of actual causality, provide the key ingredients for moral responsibility judgments. We show that these definitions give insight into commonsense intuitions in a variety of puzzling cases from the literature.

In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 2018. The paper is available here.

Information acquisition under resource limitations in a noisy environment

Matvey Soloviev and Joseph Y. Halpern

We introduce a theoretical model of information acquisition under resource limitations in a noisy environment. An agent must guess the truth value of a given Boolean formula φ after performing a bounded number of noisy tests of the truth values of variables in the formula. We observe that, in general, the problem of finding an optimal testing strategy for φ is hard, but we suggest a useful heuristic. The techniques we use also give insight into two apparently unrelated, but well-studied problems: (1) rational inattention (the optimal strategy may involve hardly ever testing variables that are clearly relevant to φ) and (2) what makes a formula hard to learn/remember.

Journal of the ACM 69:3, 2022. A preliminary version appears in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 2018. The full paper is available here.

A note on the existence of ratifiable acts

Joseph Y. Halpern

Sufficient conditions are given under which ratifiable acts exist.

In Review of Symbolic Logic, 13:3, 2020, pp. 503--508. The paper is available here.

Combining the causal judgments of experts with possibly different focus areas

Meir Friedenberg and Joseph Y. Halpern

In many real-world settings, a decision-maker must combine information provided by different experts in order to decide on an effective policy. Alrajeh, Chockler, and Halpern showed how to combine causal models that are compatible in the sense that, for variables that appear in both models, the experts agree on the causal structure. In this work we show how causal models can be combined in cases where the experts might disagree on the causal structure for variables that appear in both models due to having different focus areas. We provide a new formal definition of compatibility of models in this setting and show how compatible models can be combined. We also consider the complexity of determining whether models are compatible. We believe that the notions defined in this work are of direct relevance to many practical decision making scenarios that come up in natural, social, and medical science settings.

In Proceedings of the Sixteenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2018), 2018. The paper is available here.

Blameworthiness in multi-agent settings

Meir Friedenberg and Joseph Y. Halpern

We provide a formal definition of blameworthiness in settings where multiple agents can collaborate to avoid a negative outcome. We first provide a method for ascribing blameworthiness to groups relative to an epistemic state (a distribution over causal models that describe how the outcome might arise). We then show how we can go from an ascription of blameworthiness for groups to an ascription of blameworthiness for individuals using a standard notion from cooperative game theory, the Shapley value. We believe that getting a good notion of blameworthiness in a group setting will be critical for designing autonomous agents that behave in a moral manner.
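
The Shapley-value step is standard; here is a self-contained Python computation over a hypothetical group-blameworthiness function (the coalition values are made up and are not derived from a causal model as in the paper).

  from itertools import permutations
  from math import factorial

  # Hypothetical degree of blameworthiness for each group of agents (values are made up;
  # in the paper such a function is derived from an epistemic state over causal models).
  group_blame = {
      frozenset(): 0.0,
      frozenset({"A"}): 0.2, frozenset({"B"}): 0.1, frozenset({"C"}): 0.0,
      frozenset({"A", "B"}): 0.7, frozenset({"A", "C"}): 0.3, frozenset({"B", "C"}): 0.2,
      frozenset({"A", "B", "C"}): 1.0,
  }
  agents = ["A", "B", "C"]

  def shapley(i):
      """Average marginal contribution of agent i over all orderings of the agents."""
      total = 0.0
      for order in permutations(agents):
          before = frozenset(order[:order.index(i)])
          total += group_blame[before | {i}] - group_blame[before]
      return total / factorial(len(agents))

  print({i: round(shapley(i), 3) for i in agents})
  # {'A': 0.483, 'B': 0.383, 'C': 0.133}: individual shares that sum to the group's blame (1.0)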

In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), 2019. The paper is available here.

Abstracting causal models

Sander Beckers and Joseph Y. Halpern

We consider a sequence of successively more restrictive definitions of abstraction for causal models, starting with a notion introduced by Rubenstein et al. called exact transformation that applies to probabilistic causal models, moving to a notion of uniform transformation that applies to deterministic causal models and does not allow differences to be hidden by the ``right'' choice of distribution, and then to abstraction, where the interventions of interest are determined by the map from low-level states to high-level states, and strong abstraction, which takes more seriously all potential interventions in a model, not just the allowed interventions. We show that procedures for combining micro-variables into macro-variables are instances of our notion of strong abstraction, as are all the examples considered by Rubenstein et al.

In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), 2019. The paper is available here.

Partial awareness

Joseph Y. Halpern and Evan Piermont

We develop a modal logic to capture partial awareness. The logic has three building blocks: objects, properties, and concepts. Properties are unary predicates on objects; concepts are Boolean combinations of properties. We take an agent to be partially aware of a concept if she is aware of the concept without being aware of the properties that define it. The logic allows for quantification over objects and properties, so that the agent can reason about her own unawareness. We then apply the logic to contracts, which we view as syntactic objects that dictate outcomes based on the truth of formulas. We show that when agents are unaware of some relevant properties, referencing concepts that agents are only partially aware of can improve welfare.

In Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), 2019. The paper is available here.

Implementing mediators with asynchronous cheap talk

Ittai Abraham, Danny Dolev, Ivan Geffner, and Joseph Y. Halpern

A mediator can help non-cooperative agents obtain an equilibrium that may otherwise not be possible. We study the ability of players to obtain the same equilibrium without a mediator, using only cheap talk, that is, nonbinding pre-play communication. Previous work has considered this problem in a synchronous setting. Here we consider the effect of asynchrony on the problem, and provide upper bounds for implementing mediators. Considering asynchronous environments introduces new subtleties, including exactly what solution concept is most appropriate and determining what move is played if the cheap talk goes on forever. Different results are obtained depending on whether the move after such ``infinite play'' is under the control of the players or part of the description of the game.

In Proceedings of the 38th Annual ACM Symposium on Principles of Distributed Computing, 2019, pp. 501--510. The paper is available here.

Approximate causal abstraction

Sander Beckers, Frederick Eberhardt, and Joseph Y. Halpern

Scientific models describe natural phenomena at different levels of abstraction. Abstract descriptions can provide the basis for interventions on the system and explanation of observed phenomena at a level of granularity that is coarser than the most fundamental account of the system. Beckers and Halpern, building on work of Rubenstein et al., developed an account of abstraction for causal models that is exact. Here we extend this account to the more realistic case where an abstract causal model offers only an approximation of the underlying system. We show how the resulting account handles the discrepancy that can arise between low- and high-level causal models of the same system, and in the process provide an account of how one causal model approximates another, a topic of independent interest. Finally, we extend the account of approximate abstractions to probabilistic causal models, indicating how and where uncertainty can enter into an approximate abstraction.

In Proceedings of the 35th Conference on Uncertainty in AI (UAI 2019), 2019. The paper is available here.

A conceptually well-founded characterization of iterated admissibility using an ``all I know'' operator

Joseph Y. Halpern and Rafael Pass

Brandenburger, Friedenberg, and Keisler provide an epistemic characterization of iterated admissibility (IA), also known as iterated deletion of weakly dominated strategies, where uncertainty is represented using LPSs (lexicographic probability sequences). Their characterization holds in a rich structure called a complete structure, where all types are possible. In earlier work, we gave a characterization of iterated admissibility using an ``all I know'' operator, that captures the intuition that ``all the agent knows'' is that agents satisfy the appropriate rationality assumptions. That characterization did not need complete structures and used probability structures, not LPSs. However, that characterization did not deal with Samuelson's conceptual concern regarding IA, namely, that at higher levels, players do not consider possible strategies that were used to justify their choice of strategy at lower levels. In this paper, we give a characterization of IA using the all I know operator that does deal with Samuelson's concern. However, it uses LPSs. We then show how to modify the characterization using notions of ``approximate belief'' and ``approximately all I know'' so as to deal with Samuelson's concern while still working with probability structures.

In Proceedings of Seventeenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Electronic Proceedings in Theoretical Computer Science, 2019. The paper is available here.

On the existence of Nash Equilibrium in games with resource-bounded players

Joseph Y. Halpern, Rafael Pass, and Daniel Reichman

We consider computational games, sequences of games (G1,G2,...) where, for all n, Gn has the same set of players. Computational games arise in electronic money systems such as Bitcoin, in cryptographic protocols, and in the study of generative adversarial networks in machine learning. Assuming that one-way functions exist, we prove that there is a 2-player zero-sum computational game such that, for all n, the size of the action space in Gn is polynomial in n and the utility function in Gn is computable in time polynomial in n, and yet there is no ε-Nash equilibrium if players are restricted to using strategies computable by polynomial-time Turing machines, where we use a notion of Nash equilibrium that is tailored to computational games. We also show that an ε-Nash equilibrium may not exist if players are constrained to perform at most T computational steps in each of the games in the sequence. On the other hand, we show that if players can use arbitrary Turing machines to compute their strategies, then every computational game has an ε-Nash equilibrium. These results may shed light on competitive settings where the availability of more running time or faster algorithms can lead to a ``computational arms race'', precluding the existence of equilibrium. They also point to inherent limitations of concepts such as ``best response'' and Nash equilibrium in games with resource-bounded players.

In Proceedings of the 12th International Symposium on Algorithmic Game Theory (SAGT), 2019, pp. 139--152. The paper is available here.

Bounded rationality in Las Vegas: finite automata play multi-armed bandits

Xinming Liu and Joseph Y. Halpern

While traditional economics assumes that humans are fully rational agents who always maximize their expected utility, in practice, we constantly observe apparently irrational behavior. One explanation is that people have limited computational power, so that they are, quite rationally, making the best decisions they can, given their computational limitations. To test this hypothesis, we consider the multi-armed bandit (MAB) problem. We examine a simple strategy for playing an MAB that can be implemented easily by a probabilistic finite automaton (PFA). Roughly speaking, the PFA sets certain expectations, and plays an arm as long as it meets them. If the PFA has sufficiently many states, it performs near-optimally. Its performance degrades gracefully as the number of states decreases. Moreover, the PFA acts in a ``human-like'' way, exhibiting a number of standard human biases, like an optimism bias and a negativity bias.
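
Here is a rough Python simulation of the ``play an arm as long as it meets expectations'' idea described above; it is not the paper's exact automaton, and the arm probabilities and the tolerance for consecutive failures are made up.

  import random

  # Rough simulation: the state is just (current arm, consecutive failures); switch arms
  # once the current arm fails too often in a row. All parameters are made up.
  def finite_state_bandit(arm_probs, n_rounds=10000, max_failures=3, seed=0):
      rng = random.Random(seed)
      arm, failures, reward = 0, 0, 0
      for _ in range(n_rounds):
          success = rng.random() < arm_probs[arm]
          reward += success
          failures = 0 if success else failures + 1
          if failures >= max_failures:          # arm no longer meets expectations: switch
              arm, failures = (arm + 1) % len(arm_probs), 0
      return reward / n_rounds

  print(finite_state_bandit([0.3, 0.7]))        # average reward; always playing the best arm gives ~0.7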

In Proceedings of the 36th Conference on Uncertainty in AI (UAI 2020), 2020. The paper is available here.

Dynamic partial awareness

Joseph Y. Halpern and Evan Piermont

We investigate how to model the beliefs of an agent who becomes more aware. We extend the framework of Halpern and Rego by adding probability, and define a notion of a model transition that describes constraints on how, if an agent becomes aware of a new formula φ in state s of a model M, she transitions to state s* in a model M*. We then discuss how such a model can be applied to information disclosure.

In Proceedings of the Seventeenth International Conference on Principles of Knowledge Representation and Reasoning (KR 2020), 2020. The paper is available here.

Probabilistic dependency graphs

Oliver Richardson and Joseph Y. Halpern

We introduce Probabilistic Dependency Graphs (PDGs), a new class of directed graphical models. PDGs can capture inconsistent beliefs in a natural way and are more modular than Bayesian Networks (BNs), in that they make it easier to incorporate new information and restructure the representation. We show by example how PDGs are an especially natural modeling tool. We provide three semantics for PDGs, each of which can be derived from a scoring function (on joint distributions over the variables in the network) that can be viewed as representing a distribution's incompatibility with the PDG. For the PDG corresponding to a BN, this function is uniquely minimized by the distribution the BN represents, showing that PDG semantics extend BN semantics. We show further that factor graphs and their exponential families can also be faithfully represented as PDGs, while there are significant barriers to modeling a PDG with a factor graph.

In Proceedings of the Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21), 2021. The paper is available here.

Language-based decisions

Adam Bjorndahl and Joseph Y. Halpern

In Savage's classic decision-theoretic framework, actions are formally defined as functions from states to outcomes. But where do the state space and outcome space come from? Expanding on recent work by Blume, Easley, and Halpern, we consider a language-based framework in which actions are identified with (conditional) descriptions in a simple underlying language, while states and outcomes (along with probabilities and utilities) are constructed as part of a representation theorem. Our work expands the role of language from that of Blume, Easley, and Halpern, by using it not only for the conditions that determine which actions are taken, but also for their effects. More precisely, we take the set of actions to be built from those of the form do(φ), for formulas φ in the underlying language. This presents a problem: how do we interpret the result of do(φ) when φ is underspecified (i.e., compatible with multiple states)? We answer this using tools familiar from the semantics of counterfactuals: roughly speaking, do(φ) maps each state to the ``closest'' φ-state. This notion of ``closest'' is also something we construct as part of the representation theorem; in effect, then, we prove that (under appropriate assumptions) the agent is acting as if each underspecified action is first made definite and then evaluated (i.e., by maximizing expected utility). Of course, actions in the real world are often not presented in a fully precise manner, yet agents reason about and form preferences among them all the same. Our work brings the abstract tools of decision theory into closer contact with such real-world scenarios.
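
A toy Python rendering of the selection-function idea follows, using Hamming distance over propositional truth assignments as the (hypothetical) notion of closeness; in the paper the selection function is constructed as part of the representation theorem rather than fixed in advance.

  from itertools import product

  # Interpreting do(phi) when phi is underspecified: map each state to the "closest"
  # state satisfying phi. Here states are truth assignments to three propositions and
  # "closest" means Hamming distance -- a hypothetical choice for illustration only.
  props = ("p", "q", "r")
  states = [dict(zip(props, bits)) for bits in product([False, True], repeat=3)]

  def do(phi):
      """Return the state-to-state map induced by the action do(phi)."""
      phi_states = [s for s in states if phi(s)]
      def closest(s):
          return min(phi_states, key=lambda t: sum(s[x] != t[x] for x in props))
      return {tuple(s.values()): closest(s) for s in states}

  mapping = do(lambda s: s["p"] and s["q"])
  # From the state where p and q are false and r is true, the closest (p and q)-state
  # flips p and q but leaves r alone:
  print(mapping[(False, False, True)])          # {'p': True, 'q': True, 'r': True}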

In Proceedings of Eighteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Electronic Proceedings in Theoretical Computer Science, 2021. The paper is available here.

Predicting responsibility judgments from dispositional inferences and causal attributions

Antonia F. Langenhoff, Alex Wiegmann, Joseph Y. Halpern, Joshua B. Tenenbaum, and Tobias Gerstenberg

How do people hold others responsible for their actions? In this paper, we test and extend a computational framework originally introduced by Gerstenberg et al. [2018] that assigns responsibility as a function of 1) a dispositional inference that captures what we learn about a person's character from their action and 2) the causal role that the person's action played in bringing about the outcome. Previously, this framework has been shown to accurately capture responsibility judgments of decision-makers in achievement contexts. Here, we focus on responsibility judgments in voting scenarios, in which political committee members vote on whether or not a policy should be passed. This allowed us to manipulate dispositional inferences and causal attributions in graded ways and show that the predictions of the model hold in group settings that are causally more complex. We provide further support in favor of the framework by showing that its key components can be tested directly by asking how 1) surprising and 2) important a committee member's vote was. Participants' answers to these questions accurately predict the responsibility judgments of another group of participants. Finally, we show that the computational framework also captures participants' judgments in the moral domain, where which component is most relevant shifts from causal attributions to dispositional inferences.

In Cognitive Psychology 129, 2021. This paper incorporates some material from Responsibility judgments in voting scenarios. The paper can be found here.

Security in asynchronous interactive systems

Ivan Geffner and Joseph Y. Halpern

Secure function computation has been thoroughly studied and optimized in the past decades. We extend techniques used for secure computation to simulate arbitrary protocols involving a mediator. The key feature of our notion of simulation is that it is bidirectional: not only does the simulation produce only outputs that could happen in the original protocol, but the simulation produces all such outputs. In asynchronous systems there are also new subtleties that arise because the scheduler can influence the output. Thus, these requirements cannot be achieved by the standard notion of secure computation. We provide a construction that is secure if n > 4t, where t is the number of malicious agents and n is the total number of agents, which is provably the best possible. We also show that our construction is secure in the universal composability model and that it satisfies additional security properties even if 3t < n ≤ 4t.

In Proceedings of the 23rd International Symposium on Stabilization, Safety, and Security of Distributed Systems, 2021. The full paper can be found on arxiv.

On testing for discrimination using causal models

Hana Chockler and Joseph Y. Halpern

Consider a bank that uses an AI system to decide which loan applications to approve. We want to ensure that the system is fair, that is, it does not discriminate against applicants based on a predefined list of sensitive attributes, such as gender and ethnicity. We expect there to be a regulator whose job it is to certify the bank’s system as fair or unfair. We consider issues that the regulator will have to confront when making such a decision, including the precise definition of fairness, dealing with proxy variables, and dealing with what we call allowed variables, that is, variables such as salary on which the decision is allowed to depend, despite being correlated with sensitive variables. We show (among other things) that the problem of deciding fairness as we have defined it is coNP-complete, but then argue that, despite that, in practice the problem should be manageable.

In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22), 2022. The paper can be found here.

Reasoning about causal models with infinitely many variables

Joseph Y. Halpern and Spencer Peters

Generalized structural equations models (GSEMs) [Peters and Halpern, 2021] are, as the name suggests, a generalization of structural equations models (SEMs). They can deal with (among other things) infinitely many variables with infinite ranges, which is critical for capturing dynamical systems. We provide a sound and complete axiomatization of causal reasoning in GSEMs that is an extension of the sound and complete axiomatization provided by Halpern for SEMs. Considering GSEMs helps clarify what properties Halpern's axioms capture.

In Proceedings of the Thirty-Sixth AAAI Conference on Artificial Intelligence (AAAI-22), 2022. The paper can be found on arxiv.

Probabilistic and Causal Inference: The Works of Judea Pearl

Hector Geffner, Joseph Y. Halpern, and Rina Dechter (editors)

Published by MIT Press, 2022.

From outcome-based to language-based preferences

Valerio Capraro, Joseph Y. Halpern, and Matjaz Perc

We review the literature on models that try to explain human behavior in social interactions described by normal-form games with monetary payoffs. We start by covering social and moral preferences. We then focus on the growing body of research showing that people react to the language in which actions are described, especially when it activates moral concerns. We conclude by arguing that behavioral economics is in the midst of a paradigm shift towards language-based preferences, which will require an exploration of new models and experimental setups.

To appear, Journal of Economic Literature. The paper can be found here.

Lower bounds on implementing mediators in asynchronous systems with rational and malicious agents

Ivan Geffner and Joseph Y. Halpern

Abraham, Dolev, Geffner, and Halpern proved that, in asynchronous systems, a (k, t)-robust equilibrium for n players and a trusted mediator can be implemented without the mediator as long as n > 4(k+t), where an equilibrium is (k, t)-robust if, roughly speaking, no coalition of t players can decrease the payoff of any of the other players, and no coalition of k players can increase their payoff by deviating. We prove that this bound is tight, in the sense that if n ≤ 4(k+t) there exist (k, t)-robust equilibria with a mediator that cannot be implemented by the players alone. Even though implementing (k,t)-robust mediators seems closely related to implementing asynchronous multiparty (k+t)-secure computation, to the best of our knowledge there is no known straightforward reduction from one problem to another. Nevertheless, we show that there is a non-trivial reduction from a slightly weaker notion of (k+t)-secure computation, which we call (k+t)-strict secure computation, to implementing (k, t)-robust mediators. We prove the desired lower bound by showing that there are functions on n variables that cannot be (k+t)-strictly securely computed if n ≤ 4(k+t). This also provides a simple alternative proof for the well-known lower bound of 4t+1 on asynchronous secure computation in the presence of up to t malicious agents.

Journal of the ACM 70:2, 2023, pp. 1--20. The paper can be found on arxiv.

A causal analysis of harm

Sander Beckers, Hana Chockler, and Joseph Y. Halpern

As autonomous systems rapidly become ubiquitous, there is a growing need for a legal and regulatory framework that addresses when and how such a system harms someone. There have been several attempts within the philosophy literature to define harm, but none of them has proven capable of dealing with the many examples that have been presented, leading some to suggest that the notion of harm should be abandoned and ``replaced by more well-behaved notions''. As harm is generally something that is caused, most of these definitions have involved causality at some level. Yet surprisingly, none of them makes use of causal models and the definitions of actual causality that they can express. In this paper we formally define a qualitative notion of harm that uses causal models and is based on a well-known definition of actual causality. The key features of our definition are that it is based on contrastive causation and uses a default utility to which the utility of actual outcomes is compared. We show that our definition is able to handle the examples from the literature, and illustrate its importance for reasoning about situations involving autonomous systems.

In Proceedings of the 36th Conference on Neural Information Processing Systems (NeurIPS 2022), 2022, pp. 2365--2376. The paper can be found on arxiv.

Causal models with constraints

Sander Beckers, Joseph Y. Halpern, and Christopher Hitchcock

Causal models have proven extremely useful in offering formal representations of causal relationships between a set of variables. Yet in many situations, there are non-causal relationships among variables. For example, we may want variables LDL, HDL, and TOT that represent the level of low-density lipoprotein cholesterol, the level of high-density lipoprotein cholesterol, and total cholesterol level, with the relation LDL+HDL=TOT. This cannot be done in standard causal models, because we can intervene simultaneously on all three variables. The goal of this paper is to extend standard causal models to allow for constraints on settings of variables. Although the extension is relatively straightforward, to make it useful we have to define a new intervention operation that disconnects a variable from a causal equation. We give examples showing the usefulness of this extension, and provide a sound and complete axiomatization for causal models with constraints.
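
A tiny Python sketch of the issue, with hypothetical numbers: if LDL, HDL, and TOT are ordinary variables, nothing prevents an intervention from setting them inconsistently; respecting the constraint requires deciding which variable to disconnect from its setting, which is what the paper's new intervention operation makes precise.

  # Hypothetical numbers; the constraint is LDL + HDL = TOT.
  def consistent(v):
      return v["LDL"] + v["HDL"] == v["TOT"]

  observed = {"LDL": 100, "HDL": 60, "TOT": 160}
  print(consistent(observed))                   # True

  # In a standard causal model we can intervene on LDL alone, leaving HDL and TOT fixed,
  # which violates the constraint:
  naive = dict(observed, LDL=80)
  print(consistent(naive))                      # False: 80 + 60 != 160

  # One way to restore consistency: let TOT be determined by the constraint (i.e.,
  # disconnect it from its previous setting) after the intervention on LDL.
  repaired = dict(naive, TOT=naive["LDL"] + naive["HDL"])
  print(consistent(repaired))                   # True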

In Proceedings of the Second Conference on Causal Learning and Reasoning, 2023. The paper can be found on arxiv.

Strategic play by resource-bounded agents in security games

Xinming Liu and Joseph Y. Halpern

Many studies have shown that humans are ``predictably irrational'': they do not act in a fully rational way, but their deviations from rational behavior are quite systematic. Our goal is to see the extent to which we can explain and justify these deviations as the outcome of rational but resource-bounded agents doing as well as they can, given their limitations. We focus on the well-studied ranger-poacher game, where rangers are trying to protect a number of sites from poaching. We capture the computational limitations by modeling the poacher and the ranger as probabilistic finite automata (PFAs). We show that, with sufficiently large memory, PFAs learn to play the Nash equilibrium (NE) strategies of the game and achieve the NE utility. However, if we restrict the memory, we get more ``human-like'' behaviors, such as probability matching and avoiding sites where there was a bad outcome, behaviors that we also observed in experiments conducted on Amazon Mechanical Turk. Interestingly, we find that adding human-like behaviors such as probability matching and overweighting significant events actually improves performance, showing that this seemingly irrational behavior can be quite rational.

In Proceedings of the 22nd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), pp. 2478--2480. The full paper can be found on arxiv.

Optimal eventual Byzantine agreement protocols with omission failures

Kaya Alpturer, Joseph Y. Halpern, and Ron van der Meyden

Work on optimal protocols for eventual Byzantine Agreement (EBA) -- protocols that, in a precise sense, decide as soon as possible in every run and guarantee that all nonfaulty agents decide on the same value -- has focused on full-information protocols (FIPs), where agents repeatedly send messages that completely describe their past observations to every other agent. While it can be shown that, without loss of generality, we can take an optimal protocol to be an FIP, full information exchange is impractical to implement for many applications due to the required message size. We separate protocols into two parts, the information-exchange protocol and the action protocol, so as to be able to examine the effects of more limited information exchange. We then define a notion of optimality with respect to an information-exchange protocol. Roughly speaking, an action protocol P is optimal with respect to an information-exchange protocol E if, with P, agents decide as soon as possible among action protocols that exchange information according to E. We present a knowledge-based EBA program for omission failures all of whose implementations are guaranteed to be correct and are optimal if the information exchange satisfies a certain safety condition. We then construct concrete programs that implement this knowledge-based program in two settings of interest that are shown to satisfy the safety condition. Finally, we show that a small modification of our program results in an FIP that is both optimal and efficiently implementable, settling an open problem posed by Halpern, Moses, and Waarts (SIAM J. Comput., 2001).

In Proceedings of the 42nd Annual ACM Symposium on Principles of Distributed Computing, 2023, pp. 244--252. The full paper can be found on arxiv.

Quantifying harm

Sander Beckers, Hana Chockler, and Joseph Y. Halpern

In a companion paper, we defined a qualitative notion of harm: either harm is caused, or it is not. For practical applications, we often need to quantify harm; for example, we may want to choose the least harmful of a set of possible interventions. We first present a quantitative definition of harm in a deterministic context involving a single individual, then we consider the issues involved in dealing with uncertainty regarding the context and going from a notion of harm for a single individual to a notion of ``societal harm'', which involves aggregating the harm to individuals. We show that the ``obvious'' way of doing this (just taking the expected harm for an individual and then summing the expected harm over all individuals) can lead to counterintuitive or inappropriate answers, and discuss alternatives, drawing on work from the decision-theory literature.

In Proceedings of the Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI-23), 2023. The paper can be found on arxiv.

Joint behavior and common belief

Meir Friedenberg and Joseph Y. Halpern

For over 25 years, common belief has been widely viewed as necessary for joint behavior. But this is not quite correct. We show by example that what can naturally be thought of as joint behavior can occur without common belief. We then present two variants of common belief that can lead to joint behavior, even without standard common belief ever being achieved, and show that one of them, action-stamped common belief, is in a sense necessary and sufficient for joint behavior. These observations are significant because, as is well known, common belief is quite difficult to achieve in practice, whereas these variants are more easily achievable.

In Proceedings of Nineteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Electronic Proceedings in Theoretical Computer Science 379, 2023, pp. 221--232. The paper is available on arxiv.

Sequential language-based decisions

Adam Bjorndahl and Joseph Y. Halpern

In earlier work, we introduced the framework of language-based decisions, the core idea of which was to modify Savage's classical decision-theoretic framework by taking actions to be descriptions in some language, rather than functions from states to outcomes, as they are defined classically. Actions had the form "if ψ then do(φ)", where ψ and φ were formulas in some underlying language, specifying what effects would be brought about under what circumstances. The earlier work allowed only one-step actions. But, in practice, plans are typically composed of a sequence of steps. Here, we extend the earlier framework to sequential actions, making it much more broadly applicable. Our technical contribution is a representation theorem in the classical spirit: agents whose preferences over actions satisfy certain constraints can be modeled as if they are expected utility maximizers. As in the earlier work, due to the language-based specification of the actions, the representation theorem requires a construction not only of the probability and utility functions representing the agent's beliefs and preferences, but also the state and outcomes spaces over which these are defined, as well as a "selection function" which intuitively captures how agents disambiguate coarse descriptions. The (unbounded) depth of action sequencing adds substantial interest (and complexity!) to the proof.

In Proceedings of Nineteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK), Electronic Proceedings in Theoretical Computer Science 379, 2023, pp. 131--141. The paper is available on arxiv.

In defense of liquid democracy

Daniel Halpern, Joseph Y. Halpern, Ali Jadbabaie, Elchanan Mossel, Ariel D. Procaccia, and Manon Revel

Fluid democracy is a voting paradigm that allows voters to choose between directly voting and transitively delegating their votes to other voters. While fluid democracy has been viewed as a system that can combine the best aspects of direct and representative democracy, it can also result in situations where few voters amass a large amount of influence. To analyze the impact of this shortcoming, we consider what has been called an epistemic setting, where voters decide on a binary issue for which there is a ground truth. Previous work has shown that under certain assumptions on the delegation mechanism, the concentration of power is so severe that fluid democracy is less likely to identify the ground truth than direct voting. We examine different, arguably more realistic, classes of mechanisms, and prove they behave well by ensuring that (with high probability) there is a limit on concentration of power. Our proofs demonstrate that delegations can be treated as stochastic processes and that they can be compared to well-known processes from the literature -- such as preferential attachment and multi-type branching processes -- that are sufficiently bounded for our purposes. Our results suggest that the concerns raised about fluid democracy can be overcome, thereby bolstering the case for this emerging paradigm.

In Proceedings of the Twenty-fourth ACM Conference on Electronic Commerce, 2023, p. 852. The full paper is available on arxiv.

Chunking tasks for present-biased agents

Joseph Y. Halpern and Aditya Saraf

Everyone puts things off sometimes. How can we combat this tendency to procrastinate? A well-known technique used by instructors is to break up a large project into more manageable chunks. But how should this be done best? Here we study the process of chunking using the graph-theoretic model of present bias introduced by Kleinberg and Oren. We first analyze how to optimally chunk single edges within a task graph, given a limited number of chunks. We show that for edges on the shortest path, the optimal chunking makes initial chunks easy and later chunks progressively harder. For edges not on the shortest path, optimal chunking is significantly more complex, but we provide an efficient algorithm that chunks the edge optimally. We then use our optimal edge-chunking algorithm to optimally chunk task graphs. We show that with a linear number of chunks on each edge, the biased agent’s cost can be exponentially lowered, to within a constant factor of the true cheapest path. Finally, we extend our model to the case where a task designer must chunk a graph for multiple types of agents simultaneously. The problem grows significantly more complex with even two types of agents, but we provide optimal graph chunking algorithms for two types. Our work highlights the efficacy of chunking as a means to combat present bias.
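
For concreteness, here is a small Python sketch of a Kleinberg-Oren-style present-biased traversal on a hypothetical task graph (the graph, costs, and bias parameter b are made up): the agent inflates the immediate edge cost by b, evaluates the remainder of each route at face value, and moves greedily; chunking a hard edge so that the first chunk is easy changes which route it takes.

  # Present-biased traversal in the spirit of the Kleinberg-Oren model (all numbers made up).
  def traverse(graph, start, target, b):
      """graph: node -> list of (next_node, edge_cost). Returns (path taken, real cost paid)."""
      def cheapest(u):                          # unbiased cheapest cost from u to the target
          return 0 if u == target else min(c + cheapest(v) for v, c in graph[u])
      node, path, paid = start, [start], 0
      while node != target:
          v, c = min(graph[node], key=lambda vc: b * vc[1] + cheapest(vc[0]))
          node, paid = v, paid + c
          path.append(v)
      return path, paid

  # Unchunked: the cheap route starts with one hard edge of cost 8; the alternative
  # "procrastination" route is easy now (1) but expensive later (9).
  g1 = {"s": [("a", 8), ("p", 1)], "a": [("t", 0)], "p": [("t", 9)], "t": []}
  print(traverse(g1, "s", "t", b=3))            # (['s', 'p', 't'], 10)

  # Chunking the hard edge into an easy first chunk (1) and a harder second chunk (7):
  g2 = {"s": [("a1", 1), ("p", 1)], "a1": [("a", 7)], "a": [("t", 0)],
        "p": [("t", 9)], "t": []}
  print(traverse(g2, "s", "t", b=3))            # (['s', 'a1', 'a', 't'], 8)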

In Proceedings of the Twenty-fourth ACM Conference on Electronic Commerce, 2023, pp. 853-884. The full paper is available on arxiv.

Colordag: an incentive-compatible blockchain

Ittai Abraham, Danny Dolev, Ittay Eyal, and Joseph Y. Halpern

We present Colordag, a blockchain protocol where following the prescribed strategy is, with high probability, a best response as long as all miners have less than 1/2 of the mining power. We prove the correctness of Colordag even if there is an extremely powerful adversary who knows future actions of the scheduler: specifically, when agents will generate blocks and when messages will arrive. The state-of-the-art protocol, Fruitchain, is an ε-Nash equilibrium as long as all miners have less than 1/2 of the mining power. However, there is a simple deviation that guarantees that deviators are never worse off than they would be by following Fruitchain, and can sometimes do better. Thus, agents are motivated to deviate. Colordag implements a solution concept that we call ε-sure Nash equilibrium and does not suffer from this problem. Because it is an ε-sure Nash equilibrium, Colordag is an ε-Nash equilibrium and with probability 1-ε is a best response to any deviation.

In Proceedings of the International Symposium on Distributed Computing (DISC 2023), 2023. The full paper is available on arxiv.