- Economic Aspects of Algorithms
- Online Learning and Its Applications
- Routing and Information Transmission in Networks
- Modeling Complex Networks
- Computational Algebra
- Internet Content Delivery
- Other

We analyze the performance of protocols for load balancing in distributed systems based on no-regret algorithms from online learning theory. These protocols treat load balancing as a repeated game and apply algorithms whose average performance over time is guaranteed to match or exceed the average performance of the best strategy in hindsight. Our approach captures two important aspects of distributed systems. First, in our setting of atomic load balancing, every single process can have a significant impact on the performance and behavior of the system. Furthermore, although in distributed systems participants can query the current state of the system, they cannot reliably predict the effect of their actions on it. We address this issue by considering load balancing games in the bulletin board model, where players can find out the delay on all machines, but do not have information on what their experienced delay would have been if they had selected another machine. We show that under these more realistic assumptions, if all players use the well-known multiplicative weights algorithm, then the quality of the resulting solution is exponentially better than the worst correlated equilibrium, and almost as good as that of the worst Nash equilibrium.
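
For intuition, the update studied here is easy to sketch. The following is a minimal single-player illustration of multiplicative weights in the bulletin-board model, not the multi-player game analyzed in the paper; the function name, delay values, and learning rate are illustrative assumptions.

```python
import random

def multiplicative_weights_lb(num_machines, delays, rounds, eps=0.1, seed=0):
    """One player running multiplicative weights over machines in the
    bulletin-board model: each round the player samples a machine from its
    mixed strategy, then observes the posted delay on *every* machine and
    downweights the slow ones.  (Illustrative sketch, not the paper's
    multi-player dynamics.)"""
    rng = random.Random(seed)
    w = [1.0] * num_machines
    for _ in range(rounds):
        total = sum(w)
        probs = [x / total for x in w]
        machine = rng.choices(range(num_machines), probs)[0]  # sampled choice
        # bulletin board: all delays are visible, so every weight updates,
        # regardless of which machine was actually selected this round
        for i in range(num_machines):
            w[i] *= (1 - eps) ** delays[i]
    total = sum(w)
    return [x / total for x in w]

# with fixed delays, weight concentrates on the fastest machine (index 1)
probs = multiplicative_weights_lb(3, delays=[0.9, 0.1, 0.5], rounds=50)
```

Note that in the bulletin-board model the update uses the delays of all machines, which is exactly the informational assumption distinguishing this setting from the bandit setting.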

Selling ad campaigns: Online algorithms with cancellations

We study online pricing problems in markets with cancellations, i.e., markets in which prior allocation decisions can be revoked, but at a cost. In our model, the cost of such a cancellation is a fixed fraction of the price originally charged. We analyze a simple constant-competitive algorithm for selling a single item (or, more generally, a matroid basis) in this model, proving that its competitive ratio is optimal among deterministic algorithms but that the competitive ratio can be improved using randomized algorithms. We then model ad campaigns using “knapsack domains”, in which the requests differ in size as well as in value. We give a deterministic bi-criterion approximation in which both factors approach 1 as the buyback factor and the size of the maximum request approach 0.

Multiplicative updates outperform generic no-regret learning in congestion games

We study the dynamics of repeated atomic congestion games whose players update their mixed strategies using the well-known weighted majority learning algorithm. Atomic congestion games have a wide variety of equilibria often with vastly differing social costs. We show that in almost all such games, the well-known multiplicative-weights learning algorithm results in convergence to pure equilibria. In fact, the dynamics always converges to a type of mixed Nash equilibrium that we call weakly stable. Pure Nash equilibria are weakly stable, and the converse holds outside of a measure-zero set of games whose edge costs are contained in the zero-set of a nonvanishing polynomial. Our results thus show that natural learning behavior can avoid bad outcomes predicted by the price of anarchy, or the price of total anarchy, in atomic congestion games. Our proof is based on approximating the learning process by a continuous-time differential equation, and analyzing this differential equation using tools from dynamical systems and algebraic geometry to extract a game-theoretic interpretation of its stable fixed points.

Characterizing truthful mechanisms with convex type spaces

This article contains an annotated discussion of Archer and Kleinberg's paper "Truthful germs are contagious: A local-to-global characterization of truthfulness," which characterizes truthful mechanisms with convex type spaces and potentially infinite outcome spaces. Characterization theorems for truthful mechanisms, when they exist, are a boon to the mechanism designer, since they allow us to reduce problems of optimal mechanism design to algorithm design problems in which the algorithm that computes the allocation function is required to satisfy additional constraints.

Online auctions and generalized secretary problems

This article surveys several recent papers on generalizations of the famous secretary problem and their applications to the analysis of online auctions. All of these problems involve designing online algorithms to choose a set of elements from a randomly-ordered sequence, subject to constraints on the set of elements which may be picked.

Selling banner ads: Online algorithms with buyback

We initiate the study of online pricing problems in markets with "buyback," i.e., markets in which prior allocation decisions can be revoked, but at a cost which is a fixed fraction of the request value. This scenario models a market for web advertising, in which the buyback cost represents the cost of canceling an existing contract.

Truthful germs are contagious: A local-to-global characterization of truthfulness

We study the question of how to easily recognize whether a social choice function f from an abstract type space to a set of outcomes is truthful, i.e. implementable by a truthful mechanism. We provide a local-to-global characterization theorem for any set of outcomes (possibly infinite) and any convex space of types (possibly infinite-dimensional): f is truthful if its restriction to every sufficiently small 2-dimensional neighborhood about each point is truthful. Using our characterization theorem we give a simple alternate derivation of the Saks-Yu theorem that weak monotonicity implies truthfulness for convex type spaces and finite outcome sets. We also show that the reliance on unboundedly-large subsets is inevitable in any characterization of truthfulness with infinite outcome sets: for every integer k there are non-truthful functions whose restriction to every k-point subset of the type space is truthful.

A knapsack secretary problem with applications

We consider situations in which a decision-maker with a fixed budget faces a sequence of options, each with a cost and a value, and must select a subset of them online so as to maximize the total value. Such situations arise in many contexts, e.g., hiring workers, scheduling jobs, and bidding in sponsored search auctions. This problem, often called the online knapsack problem, is known to be inapproximable. Therefore, we make the enabling assumption that elements arrive in a random order, which makes our problem into a generalization of the classical secretary problem. We call this the knapsack secretary problem, and we present a constant-competitive algorithm for arbitrary costs and values.

Automated online mechanism design and prophet inequalities

We develop algorithms that compute an optimal, or approximately optimal, online auction mechanism for digital goods given distributional knowledge of bid values. We first prove that when the seller does not know the market size, no constant-approximation to the optimum efficiency or revenue is achievable in the worst case. Second, we show that when the seller has distributional knowledge of the market size as well as the bid values, one can combine dynamic programming with prophet inequalities (a technique from optimal stopping theory) to design and analyze online mechanisms which are truthful (even with respect to arrival and departure times) and which approximately maximize revenue or efficiency.

Congestion games with malicious players

We study the equilibria of nonatomic congestion games in which there are two types of players: rational players, who seek to minimize their own delay, and malicious players, who seek to maximize the average delay experienced by the rational players. We study the existence of pure and mixed Nash equilibria for these games, and we seek to quantify the impact of the malicious players on the equilibrium. One counterintuitive phenomenon which we demonstrate is the "windfall of malice": paradoxically, when a myopically malicious player gains control of a fraction of the flow, the new equilibrium may be more favorable for the remaining rational players than the previous equilibrium.

Algorithmic pricing via virtual valuations

We consider the algorithmic problem of computing approximately profit-maximizing prices in a unit-demand, multi-product scenario when the consumer's valuations for the different items are independent random variables. Using a connection between this problem and the concept of virtual valuations from single-parameter Bayesian optimal mechanism design, we design a 4-approximation algorithm for this multi-parameter pricing problem.

Matroids, secretary problems, and online mechanisms

The matroid secretary problem is a generalization of the classical secretary problem. An algorithm observes the elements of a weighted matroid in random order. As each element is observed, the algorithm makes an irrevocable decision to choose it or skip it, with the constraint that the chosen elements must constitute an independent set. The objective is to maximize the combined weight of the chosen elements. This paper raises the question of whether every matroid admits a constant-competitive secretary algorithm, and solves the question for certain special classes of matroids.
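
For intuition, the classical secretary problem is the rank-1 special case, and its familiar threshold rule is easy to sketch and simulate (the parameters of the simulation below are assumptions for illustration, not from the paper):

```python
import math
import random

def classical_secretary(values):
    """Threshold rule for the classical (rank-1) secretary problem:
    observe the first n/e elements without choosing, then accept the
    first later element that beats everything seen so far."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]  # no later element beat the sample; forced to take the last

# over random orders, the maximum element is selected with probability ≈ 1/e
rng = random.Random(1)
wins = sum(classical_secretary(rng.sample(range(100), 100)) == 99
           for _ in range(2000))
```

The matroid secretary problem replaces "keep at most one element" with "keep an independent set of a matroid," which is what makes the general question difficult.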

Secretary problems with competing employers

In many decentralized labor markets, job candidates are offered positions at very early stages in the hiring process. We explore this phenomenon by studying the equilibria of a game based on the secretary problem with competing employers. Our results confirm the observation of early offers in labor markets: for several classes of strategies based on optimal stopping theory, as the number of employers grows, the timing of the earliest offer approaches zero.

On the competitive ratio of the random sampling auction

We give a simple analysis of the competitive ratio of the random sampling auction introduced by Goldberg, Hartline, and Wright in 2001. Our analysis improves the best known bound on this auction's competitive ratio from 7600 to 15. In support of the conjecture that the random sampling auction is in fact 4-competitive, we show that on the equal revenue input, where any sale price gives the same revenue, random sampling is exactly a factor of four from optimal.
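
The auction itself is short to state, and the sketch below illustrates it on an equal-revenue input of the kind mentioned above; the helper names, tie-breaking, and input values are illustrative assumptions, not from the paper:

```python
import random

def best_fixed_price(bids):
    """Sale price maximizing price × (number of bids at or above it)."""
    return max(bids, key=lambda p: p * sum(1 for b in bids if b >= p))

def random_sampling_auction(bids, seed=0):
    """Random sampling auction sketch: split bidders into two random
    halves; each half is offered the optimal fixed price computed from
    the *other* half.  Returns the revenue collected."""
    rng = random.Random(seed)
    a, b = [], []
    for bid in bids:
        (a if rng.random() < 0.5 else b).append(bid)
    if not a or not b:
        return 0
    pa, pb = best_fixed_price(a), best_fixed_price(b)
    return (pb * sum(1 for bid in a if bid >= pb)
            + pa * sum(1 for bid in b if bid >= pa))

# an equal-revenue input: every sale price p in {12, 6, 4, 3} yields revenue 12
revenue = random_sampling_auction([12, 6, 4, 3], seed=0)
```

Computing each half's price from the opposite half is what keeps the auction truthful: no bidder's report influences the price they themselves face.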

Online auctions with re-usable goods

This paper concerns the design of mechanisms for online scheduling in which agents bid for access to a re-usable resource such as processor time or wireless network access. Agents are assumed to arrive and depart dynamically over time. We design mechanisms that are truthful in the sense that truthful revelation of arrival, departure, and value information is a dominant strategy, and that are online in the sense that they make allocation and payment decisions without knowledge of the future. We also prove some general characterization theorems and lower bounds for such mechanisms.

A multiple-choice secretary problem with applications to online auctions

A sequence of n positive numbers is presented to an algorithm in random order. After seeing each number, the algorithm must decide whether to keep or reject it. Such decisions are irrevocable, and the algorithm may only keep k numbers altogether. When k = 1, this is a version of the famous secretary problem, and it is well-known that the optimal competitive ratio approaches e as n tends to infinity. Here we prove that for general k, the optimal competitive ratio is 1 + O(k^{-1/2}). This leads to a temporally strategyproof online auction mechanism achieving the same competitive ratio when agents arrive in random order.

Adaptive limited-supply online auctions

We study online auctions in which the agents arrive and depart over time, and may strategically mis-state the time of their arrival or departure if there is an incentive to do so. Assuming that the agents arrive in random order, we present a constant-competitive strategyproof mechanism for auctioning k identical goods. When k = 1, the mechanism can be regarded as a temporally strategyproof modification of the classical algorithm for the "secretary problem."

The value of knowing a demand curve: bounds on regret for on-line posted-price auctions

In an online posted-price auction, a seller interacts with a sequence of buyers by presenting a take-it-or-leave-it offer to each. What is the optimal adaptive price-setting algorithm for this transaction model, and how well does it perform? Stated differently: "What is the value of knowing the demand curve?" We answer this question under three different assumptions on the distribution of the bidders' values for the good: identical, random, or worst-case valuations.

We consider an online learning setting where the learner's performance is judged according to a global cost function, e.g. the L_p norm of the combined loss vector. Using approachability theory, we design a general algorithm that guarantees vanishing average regret for this setting, as well as a simple and efficient algorithm for the special case when the cost function is the L_∞ norm. We also show that for concave global cost functions, the learner's worst-case average regret does not vanish in general.

The K-armed dueling bandits problem

We introduce a variant of the multi-armed bandit problem in which the feedback comes from pairwise comparisons between the arms, rather than choosing a single arm and observing its reward. The problem is motivated by applications such as information retrieval in which users have an easier time comparing two alternatives than estimating the quality of a single choice. We formulate an appropriate notion of regret for this setting, and give an algorithm whose regret is within a constant factor of the information-theoretic optimum.

Learning diverse rankings with multi-armed bandits

Algorithms for learning to rank Web documents usually assume a document's relevance is independent of other documents. This leads to learned ranking functions that produce rankings with redundant results. In contrast, user studies have shown that diversity at high ranks is often preferred. We present two online learning algorithms that directly learn a diverse ranking of documents based on users' clicking behavior. We show that these algorithms minimize abandonment, or alternatively, maximize the probability that a relevant document is found in the top k positions of a ranking. Moreover, one of our algorithms asymptotically achieves optimal worst-case performance even if users' interests change.

Regret bounds for sleeping experts and bandits

We study on-line decision problems where the set of actions available to the decision algorithm varies over time. Departing from previous work on this "sleeping experts" problem, we compare algorithms against the payoff obtained by the best ordering of the alternatives, which is a natural benchmark for this type of problem.

Multi-armed bandit problems in metric spaces

Motivated in part by problems arising in the placement of web advertisements, we study multi-armed bandit problems whose strategy set is a metric space and whose payoff functions satisfy a Lipschitz condition with respect to the metric. We define an isometry-invariant for every metric space — the max-min-covering dimension — which characterizes the performance of the optimal multi-armed bandit algorithm for that metric space, and we present an algorithm which meets this bound. Furthermore, we demonstrate that our technique gives even better results for "benign" payoff functions.

Noisy binary search and its applications

We study a noisy version of the classical binary search problem of inserting an element into its proper place within an ordered sequence by comparing it with elements of the sequence. In the noisy version we cannot compare elements directly. Instead, when we compare our element with an element of the sequence, we observe the outcome of an independent coin-toss whose heads probability increases with that element's position in the sequence. We design algorithms for this search problem and prove information-theoretic lower bounds which match up to a constant factor.
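
A crude baseline helps fix ideas. The paper's model lets the coin's bias vary with position; the sketch below uses the simpler fixed-bias case, where each comparison is answered correctly with probability 2/3, and makes each step of binary search reliable by majority vote over repeated queries. This repetition-based approach is only illustrative and pays a logarithmic factor that the paper's algorithm avoids; names and constants are assumptions.

```python
import random

def noisy_binary_search(n, target, repeats=201, seed=0):
    """Locate `target` in {0, ..., n-1} when each comparison
    "is target < m?" is correct only with probability 2/3.
    Each step is made reliable by a majority vote over `repeats`
    independent noisy queries (a simple sketch, not the paper's
    near-optimal algorithm)."""
    rng = random.Random(seed)

    def noisy_less_than(m):
        truth = target < m
        # the comparison oracle answers correctly with probability 2/3
        return truth if rng.random() < 2 / 3 else not truth

    def reliable_less_than(m):
        yes = sum(noisy_less_than(m) for _ in range(repeats))
        return yes > repeats // 2

    lo, hi = 0, n
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if reliable_less_than(mid):
            hi = mid
        else:
            lo = mid
    return lo
```

With 201 repetitions per step, each majority vote errs with negligible probability, so all ~log n steps of the search succeed.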

Anytime algorithms for multi-armed bandit problems

This paper introduces a new criterion for evaluating multi-armed bandit algorithms. The criterion is a relaxation of the notion of regret, and it raises the possibility of designing bandit algorithms which meet non-trivial performance guarantees even when the strategy set is infinite and the cost functions are unconstrained. Instead of comparing the algorithm's expected average cost with that of the best strategy, one compares it with the average cost of the best strategy outside of the top ε fraction of the strategy set, where ε is a parameter unknown to the algorithm. If the difference between the algorithm's cost and this benchmark converges to zero as the number of trials goes to infinity, for all positive ε, we call the algorithm an anytime bandit algorithm. This paper presents examples of anytime bandit algorithms and proves a non-trivial lower bound on their convergence time.

Competitive collaborative learning

How can systems like eBay's reputation system or peer-to-peer resource location protocols be designed to give honest users good advice about selecting products or resources, in spite of potentially misleading information from malicious users? This paper proposes a paradigm for addressing such questions using online learning theory. The model is a generalization of the multi-armed bandit problem, but with multiple users sharing information. An unknown subset of the users are Byzantine adversaries, and the goal is for the remaining users to converge as rapidly as possible to making decisions with low average regret. The main result is an algorithm with polylogarithmic convergence time, assuming a constant fraction of the users are honest.

Provably competitive adaptive routing

This paper contains theorems and experimental results concerning an algorithm for learning the most reliable path between two designated terminals in a directed graph with a time-varying pattern of edge failures. The algorithm is allowed to probe only one path in each trial; the feedback from such a trial reveals the location of the first edge failure.

Online linear optimization and adaptive routing

Motivated by the problem of adaptive routing in overlay networks, we consider the online shortest path problem and its generalization, the online linear optimization problem. Our algorithms work in the partial feedback model, where only the function value at the selected point is revealed. (E.g., only the end-to-end length of the chosen path is revealed.) Our algorithm builds on an elegant algorithm of Kalai and Vempala for solving this problem in the full feedback model, in which the entire cost function is revealed after each trial.

Nearly tight bounds for the continuum-armed bandit problem

In a continuum-armed bandit problem, a learner has a set of strategies parametrized by one or more parameters and must choose one strategy in each of a sequence of trials. In each trial an adversary (or a random sampling process) specifies a function assigning a cost to each strategy, and reveals only the cost of the strategy chosen by the learner. This paper establishes nearly tight bounds on the regret of the optimal algorithm in the one-parameter case. In the multi-parameter case, assuming the cost functions are convex, we present an algorithm achieving polynomial convergence time, based on Zinkevich's online convex programming algorithm.
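
The basic discretization idea in the one-parameter case can be sketched as follows: place k arms uniformly in [0,1] and run a standard UCB rule over them. This is a simplified illustration with deterministic payoffs, not the paper's algorithm; the payoff function, k, T, and the exploration constant are assumptions.

```python
import math

def continuum_ucb(f, T=20000, k=11, c=2.0):
    """Continuum-armed bandit via uniform discretization: k arms evenly
    spaced in [0, 1], played with a UCB1-style index.  Returns the arm
    pulled most often.  (Sketch with deterministic payoffs.)"""
    arms = [i / (k - 1) for i in range(k)]
    counts = [0] * k
    sums = [0.0] * k
    for t in range(1, T + 1):
        if t <= k:
            i = t - 1  # initialization: pull each arm once
        else:
            # empirical mean plus exploration bonus
            i = max(range(k),
                    key=lambda j: sums[j] / counts[j]
                    + math.sqrt(c * math.log(t) / counts[j]))
        counts[i] += 1
        sums[i] += f(arms[i])
    return arms[max(range(k), key=lambda j: counts[j])]

# a payoff peaked at x = 0.7; the pulls concentrate near the peak
x_hat = continuum_ucb(lambda x: 1 - abs(x - 0.7))
```

The regret trade-off analyzed in the paper comes precisely from choosing the grid resolution k: a finer grid reduces discretization error but increases the cost of exploring more arms.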

Online decision problems with large strategy sets

In an online decision problem, an algorithm performs a sequence of trials, each of which involves selecting one element from a fixed set of alternatives (the "strategy set") whose costs vary over time. After T trials, the combined cost of the algorithm's choices is compared with that of the single strategy whose combined cost is minimum. Their difference is called regret, and one seeks algorithms which are efficient in that their regret is sublinear in T and polynomial in the problem size.

This thesis concerns an important class of online decision problems called generalized multi-armed bandit problems. Most existing algorithms for such problems were efficient only in the case of a small (i.e. polynomial-sized) strategy set. We extend the theory by supplying non-trivial algorithms and lower bounds for cases in which the strategy set is much larger (exponential or infinite) and the cost function class is structured, e.g. by constraining the cost functions to be linear or convex. As applications, we consider adaptive routing in networks, adaptive pricing in electronic markets, and collaborative decision-making by untrusting peers in a dynamic environment.

We study algorithms for maintaining a matching in a bipartite graph describing a set of potential connections between clients and servers. In our model, clients arrive one at a time, revealing their set of adjacent servers at the time of arrival. The graph is assumed to always have at least one matching containing all the clients, and the algorithm maintains such a matching after each client arrival, updating its previous matching using an augmenting path if necessary. The cost of a sequence of updates is the total switching cost, i.e. the total length of all the augmenting paths. We give algorithms achieving total switching cost O(n log n) in trees and in arbitrary graphs with random client arrival order. We also give an algorithm achieving total switching cost O(n) with high probability in random graphs of average degree Θ(log n).
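
The update rule described above can be sketched directly: on each client arrival, find a shortest alternating (augmenting) path by BFS and flip it, charging its length to the switching cost. The function name and cost accounting below are illustrative; the sketch uses shortest augmenting paths, which is one natural instantiation of the update.

```python
from collections import deque

def online_matching(arrivals):
    """Maintain a matching covering all arrived clients.  Each arrival is
    handled by finding a shortest augmenting path via BFS and flipping
    its edges; the switching cost of an arrival is the number of edges
    on that path.  `arrivals` is a list of (client, adjacent_servers)."""
    adj = {}                    # client -> list of adjacent servers
    match_c, match_s = {}, {}   # current matching, in both directions
    total_cost = 0
    for client, servers in arrivals:
        adj[client] = servers
        parent = {}             # server -> client from which BFS reached it
        frontier = deque([client])
        visited = {client}
        free_server = None
        while frontier and free_server is None:
            u = frontier.popleft()
            for s in adj[u]:
                if s in parent:
                    continue
                parent[s] = u
                if s not in match_s:
                    free_server = s        # augmenting path found
                    break
                if match_s[s] not in visited:
                    visited.add(match_s[s])
                    frontier.append(match_s[s])
        assert free_server is not None, "graph must admit a full matching"
        # walk the path backwards, flipping matched/unmatched edges
        s, flips = free_server, 0
        while True:
            u = parent[s]
            old = match_c.get(u)
            match_c[u], match_s[s] = s, u
            flips += 1
            if u == client:
                break
            s = old
        total_cost += 2 * flips - 1        # edges on the augmenting path
    return match_c, total_cost
```

For example, if client c1 (adjacent to s1, s2) takes s1 and client c2 is adjacent only to s1, the second arrival triggers a length-3 augmenting path that rematches c1 to s2.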

A "chicken and egg" network coding problem

We consider the multi-source network coding problem in cyclic networks. This problem involves several difficulties not found in acyclic networks, due to additional causality requirements. This paper highlights the difficulty of these causality conditions by analyzing two example cyclic networks which are structurally similar. Both networks have an essentially identical network code which appears to transmit all information from the sources to the sinks; however, this network code is invalid since it violates causality. We show that, in one of the networks, the invalid code can be modified to obey causality, whereas in the other network this is impossible. This unachievability result is proven by a new information inequality for causal coding schemes in a simple cyclic network.

Geographic routing using hyperbolic space

We study the design of point-to-point routing schemes in ad hoc wireless networks and sensornets based on greedy routing in a virtual coordinate space. We prove that every connected graph may be embedded in the hyperbolic plane in such a way that the greedy routing algorithm succeeds in connecting every sender-receiver pair. We prove that Ω(log n) dimensions are required to achieve the same property using a normed vector space, and we give a construction which matches this lower bound.

Semi-oblivious routing: lower bounds

We initiate the study of semi-oblivious routing, a relaxation of oblivious routing in which an algorithm is given an undirected graph and must select a polynomial number of distinguished paths in the graph so as to route any subsequently-specified demand matrix with approximately optimal congestion while using only the distinguished paths. We prove near-logarithmic lower bounds in this model for grids and series-parallel graphs, and polynomial lower bounds under stronger restrictions on the set of distinguished paths.

On the capacity of information networks

We study the maximum concurrent rate of transmission for network coding solutions of k-pairs problems, in which one seeks to simultaneously transmit data streams between k sender-receiver pairs in an edge-capacitated network. The undirected k-pairs conjecture asserts that for k-pairs problems in undirected graphs, the maximum rate achievable by a network coding solution is also achievable by a concurrent multicommodity flow. To make progress on this conjecture, we introduce a technique for proving upper bounds on the network coding rate by combining information-theoretic inequalities (e.g. submodularity of entropy functions) with combinatorial inequalities based on informational dominance, a generalization of the sparsest-cut condition. We demonstrate the power of these techniques by verifying the undirected k-pairs conjecture for an infinite family of non-trivial instances. Finally, turning our attention to k-pairs problems in directed graphs, we give a tight characterization of the largest possible gap between the network coding rate and the maximum concurrent multicommodity flow rate. An extended abstract announcing a superset of these results appeared in SODA 2006, and is listed below.

Improved lower and upper bounds for universal TSP in planar metrics

A universal TSP tour of a metric space is a total ordering of the points of the space such that for any finite subset, the tour which visits these points in the given order is not too much longer than the optimal tour. We prove a super-constant lower bound on the competitive ratio of the best universal TSP tour of an n-by-n grid. When the metric space is the shortest-path metric of a planar graph (or more generally, an H-minor free graph for some fixed graph H), we prove that there exists a universal TSP tour whose competitive ratio is O(log^{2} n), and we provide a randomized polynomial-time algorithm for computing such a tour.

New lower bounds for oblivious routing in undirected graphs

The first half of this paper studies lower bounds for the average-case competitive ratio of oblivious routing in undirected graphs in the random demand model, when the demand for each commodity is sampled independently from a distribution which is known in advance of defining the routing scheme. In this random demand model, we prove two lower bounds on the competitive ratio of the optimal oblivious routing algorithm: Ω(log n / log log n) in the general case, and Ω(log^{1/2} n) in the single-sink case. In the second half of the paper, we introduce a natural measure of the throughput ratio of an oblivious routing scheme, and we prove that the throughput ratio of oblivious routing schemes in expander graphs must be at least Ω(log n / log log n).

On the capacity of information networks

A subset of this work appears in the journal paper with the same title, listed above. In addition to the results contained in that paper, we provide evidence for the hardness of the undirected k-pairs conjecture. Specifically, we show that proving the conjecture for general graphs would imply the strongest known lower bounds for the I/O complexity of matrix transposition and for computation in the oblivious cell-probe model, and would imply a non-trivial lower bound for two-tape oblivious Turing computation.

Tighter cut-based bounds for k-pairs communication problems

We study the extent to which combinatorial cut conditions determine the maximum network coding rate of k-pairs communication problems in directed graphs. We seek a combinatorial parameter of the network which constitutes a valid upper bound on the network coding rate but whose gap --- the worst-case ratio between the parameter's value and the true maximum network coding rate --- is small. We begin by showing that two parameters proposed earlier, vertex-sparsity and meagerness, have a gap which is linear in the network size. Next we propose a new bound called informational meagerness, which generalizes both vertex-sparsity and meagerness. While it is not known whether informational meagerness has a sublinear gap, we prove here that the gap is at least logarithmic in the network size.

Localized client-server load balancing without global information

This paper studies load-balancing algorithms for a network of clients and servers modeled as a bipartite graph. We require the algorithms to satisfy an extremely strict local-control requirement: each participant must make its admission control decisions knowing only the state of its neighbors. This work can be interpreted as a companion to recent work by Racke and others on oblivious routing, but with the objective of maximizing throughput rather than minimizing congestion.

Oblivious routing on node-capacitated and directed graphs

We study oblivious routing in general directed networks, and in undirected networks with node capacities. We present the first non-trivial upper bounds for both of these problems. We also settle an open question about single-sink (or single-source) oblivious routing problems in undirected edge-capacitated graphs.

(Almost) tight bounds and existence theorems for single-commodity confluent flows

A confluent flow is a network flow in which each node sends all of its flow along a single outgoing edge. (For example, flows in IP networks are confluent because IP routing is destination-based.) We study two combinatorial optimization problems involving confluent flow: the problem of computing a minimum-congestion confluent flow, and the problem of computing a confluent flow which satisfies the maximum demand subject to hard capacity constraints. Approximation algorithms are given for both problems, as well as hardness proofs showing that both approximation ratios are tight up to a constant factor.

This work investigates the dimensionality properties of the Internet delay space, i.e., the matrix of measured round-trip times between Internet hosts. Our key findings are that fractal measures, such as the correlation fractal dimension and Hausdorff dimension, reveal a dimensionality less than 2, and that the dimensionality is further reduced when one restricts the measurements to subsets composed of a single Tier-1 autonomous system and its downstream customers.

Emergence of tempered preferential attachment from optimization

We show how preferential attachment can emerge in an optimization framework, resolving a long-standing theoretical controversy. We also show that the preferential attachment model so obtained has two novel features, saturation and viability, which have natural interpretations in the underlying network and lead to a power-law degree distribution with exponential cutoff. We present a collection of empirical observations from social, biological, physical, and technological networks, for which such degree distributions give excellent fits.

Isomorphism and embedding problems for infinite limits of scale-free graphs

The preferential attachment model is a popular random graph process for explaining the structure of large complex networks such as the Internet and World-Wide Web. Here, we study the infinite random graphs which arise as limits of this process, focusing on the questions, "What is the probability that two such graphs are isomorphic?" and, "What finite subgraphs appear in these graphs, and how often do they appear?"
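
The finite process whose limits are studied here is easy to simulate. The sketch below grows a graph by the standard rule (each new vertex attaches by one edge to an existing vertex chosen with probability proportional to its degree), using the repeated-endpoints trick; the function name and parameters are illustrative.

```python
import random

def preferential_attachment(n, seed=0):
    """Grow an n-vertex graph by standard preferential attachment: each
    new vertex attaches by one edge to an existing vertex chosen with
    probability proportional to its degree.  Sampling uniformly from the
    list of edge endpoints implements the degree-proportional choice."""
    rng = random.Random(seed)
    endpoints = [0, 1]          # start from the single edge 0--1
    degree = {0: 1, 1: 1}
    for v in range(2, n):
        u = rng.choice(endpoints)
        endpoints.extend((u, v))
        degree[u] += 1
        degree[v] = 1
    return degree

# heavy-tailed degrees emerge: early vertices accumulate most attachments
deg = preferential_attachment(10_000)
```

Each run produces a tree with n - 1 edges; the resulting degree sequence exhibits the power-law tail that motivates studying the infinite limit of the process.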

Competition-induced preferential attachment

We study a network formation model in which preferential-attachment behavior arises as a result of trade-offs between competing proximity measures. In our model, vertices arrive at random locations in a one-dimensional space and join themselves to the network by optimizing a function which trades off two notions of proximity, one defined in terms of the spatial configuration of the points and the other defined in terms of their distances in the network itself.

We perform forward error analysis for a large class of recursive matrix multiplication algorithms. As a consequence of our analysis, we show that for every fast matrix multiplication algorithm with running time O(n^{x}) there are numerically stable algorithms with running time O(n^{x+o(1)}). We also show that the new group-theoretic algorithms proposed by Cohn, Kleinberg, Szegedy, and Umans are numerically stable.

Group-theoretic algorithms for matrix multiplication

We develop fast matrix multiplication algorithms based on the theory of finite group representations, according to the paradigm introduced by Cohn and Umans in 2003. The fastest of these algorithms runs in O(n^{2.41}) operations. We present two conjectures, either of which (if resolved affirmatively) would imply that the exponent of matrix multiplication is equal to 2.

Designing a system for distributing streaming media over the Internet poses significant challenges in the areas of scalability, quality, reliability, and cost, among others. In this paper we discuss the design choices made during the evolution of Akamai's Content Delivery Network for streaming media, focusing on design choices made to address these four challenges, and we present the results of performance studies conducted on the evolving system to determine the efficacy of these design choices.

Consistent load balancing via spread minimization

This paper introduces the spread minimization problem, which is an abstract version of the load-balancing problem faced by systems which seek to use cache resources efficiently over time, e.g. in an Internet content delivery network. We present deterministic offline and randomized online algorithms for this problem, whose worst-case performance is within a constant factor of the optimum assignment's worst-case performance.

Hat problems have become a popular topic in recreational mathematics. In a typical hat problem, each of $n$ players tries to guess the color of the hat they are wearing by looking at the colors of the hats worn by some of the other players. In this paper we consider several variants of the problem, united by the common theme that the guessing strategies are required to be deterministic and the objective is to maximize the number of correct answers in the worst case. We also summarize what is currently known about the worst-case analysis of deterministic hat-guessing problems with a finite number of players.