

Overview of Beehive

CoDoNS derives its performance characteristics from a proactive caching layer called Beehive [32]. Beehive is a proactive replication framework that enables prefix-matching DHTs to achieve O(1) lookup performance. Pastry [34] and Tapestry [42] are examples of structured DHTs that use prefix-matching [31] to look up objects. In these DHTs, both objects and nodes are assigned random identifiers from the same circular space, and each object is stored at the nearest node in the identifier space, called the home node. Each node routes a request for an object, say 2101, by successively matching prefixes; that is, by forwarding the request to a node that matches one more digit with the object, until the home node, say 2100, is reached. Overlay routing by matching prefixes in this manner incurs O(log N) hops in the worst case to reach the home node. Figure 3 illustrates the prefix-matching routing algorithm in Pastry. A routing table of O(log N) size provides each overlay node with pointers to nodes with matching prefixes. In a large system, log N translates to several hops across the Internet, too many to meet the performance requirements of latency-critical applications such as DNS.
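
To make the routing concrete, the following minimal sketch simulates prefix-matching routing over a handful of 4-digit, base-4 identifiers. The node list and the helper names (matching_prefix_len, route) are illustrative assumptions chosen to mirror the 2101/2100 example; a real Pastry node forwards using only its local routing table rather than a global view of the overlay.

\begin{verbatim}
# Sketch of prefix-matching overlay routing (Pastry-style).
# Identifiers are fixed-length strings of base-4 digits.

def matching_prefix_len(a: str, b: str) -> int:
    """Number of leading digits shared by two identifiers."""
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def route(nodes: list[str], key: str) -> tuple[str, int]:
    """Forward the query to a node sharing at least one more prefix
    digit with the key each hop; stop at the home node."""
    current, hops = nodes[0], 0          # query enters the overlay here
    while True:
        shared = matching_prefix_len(current, key)
        closer = [n for n in nodes if matching_prefix_len(n, key) > shared]
        if not closer:
            return current, hops         # no closer node: home node reached
        # take the smallest improvement, mimicking one digit per hop
        current = min(closer, key=lambda n: matching_prefix_len(n, key))
        hops += 1

nodes = ["0021", "2023", "2112", "2100"]
print(route(nodes, "2101"))              # ('2100', 3): three hops
\end{verbatim}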

Beehive proposes a novel technique based on controlled proactive caching to reduce the average lookup latency of structured DHTs. Figure 3 illustrates how Beehive applies proactive caching to decrease lookup latency in Pastry. In the example mentioned above, where a query is issued for the object 2101, Pastry incurs three hops to find a copy of the object. By placing copies of the object at all nodes one hop prior to the home node in the request path, the lookup latency can be reduced by one hop. In this example, the lookup latency can be reduced from three hops to two hops by replicating the object at all nodes that start with 21. Similarly, the lookup latency can be reduced to one hop by replicating the object at all nodes that start with 2. Thus, we can vary the lookup latency of the object between 0 and log N hops by systematically replicating the object more extensively. In Beehive, an object replicated at all nodes with i matching prefixes incurs i hops for a lookup, and is said to be replicated at level i.
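
Extending the same toy example, the self-contained sketch below counts lookup hops as a function of the object's replication level: since a level-i object sits on every node sharing i prefix digits with it, prefix routing meets a replica after at most i hops. The node set is again hypothetical, and the sketch assumes a replica at the requested level lies on the query path.

\begin{verbatim}
# Sketch: hops to reach a replica of an object at replication level i.
# A level-i object is stored on every node sharing i prefix digits
# with it, so prefix routing meets a replica after at most i hops.

def shared_digits(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def lookup_hops(nodes: list[str], key: str, level: int) -> int:
    current, hops = nodes[0], 0
    while shared_digits(current, key) < level:   # no replica here yet
        s = shared_digits(current, key)
        closer = [n for n in nodes if shared_digits(n, key) > s]
        current = min(closer, key=lambda n: shared_digits(n, key))
        hops += 1
    return hops

nodes = ["0021", "2023", "2112", "2100"]
for level in (3, 2, 1, 0):
    print(level, lookup_hops(nodes, "2101", level))
# level 3 -> 3 hops (home node only), level 2 -> 2 hops,
# level 1 -> 1 hop, level 0 -> 0 hops (replicated everywhere)
\end{verbatim}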

Figure 3: Proactive Caching in Beehive. Caching an object at all nodes with i matching prefix digits ensures that it can be located in i hops. Beehive achieves O(1) average lookup time with minimal replication of objects.

The central insight behind Beehive is that by judiciously choosing different levels of replication for different objects, the average lookup performance of the system can be tuned to any desired constant. Naturally, replicating every object at every node would achieve O(1) lookups, but would incur excessive space overhead, consume significant bandwidth, and lead to large update latencies. Beehive minimizes bandwidth and space consumption by posing the following optimization problem: minimize the total number of replicas subject to the constraint that the aggregate lookup latency is less than a desired constant C. For power-law (or Zipf-like) query distributions, Beehive derives the optimal solution analytically. The derivation is provided in [32]; the closed-form solution that minimizes the total number of replicas for Zipf-like query distributions with parameter α < 1 is the following:

\begin{eqnarray*}
x_i = \left[ \frac{d^i \, (\log N - C)}{1 + d + \cdots + d^{\log N - 1}} \right]^{\frac{1}{1-\alpha}}, \quad \mbox{where } d = b^{\frac{1-\alpha}{\alpha}}
\end{eqnarray*}


In this expression, b is the base of the underlying DHT, logarithms are taken to base b, and x_i is the fraction of the most popular objects that are replicated at level i. This solution is immediately applicable to DNS, since DNS queries follow a Zipf-like distribution [18].
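
As a sanity check on this expression, the sketch below evaluates the closed form for one hypothetical deployment. The parameter values (N = 4096 nodes, base b = 16, target C = 0.5 hops, and α = 0.9, roughly the Zipf parameter reported for DNS workloads) are illustrative assumptions; fractions that come out above 1 are simply clamped here, whereas the full derivation in [32] treats that case exactly.

\begin{verbatim}
import math

# Sketch: evaluate Beehive's closed-form replication fractions.
# x[i] is the fraction of the most popular objects replicated at
# level i; values are clamped to 1 (a fraction cannot exceed 1).

def replication_fractions(N, b, C, alpha):
    K = round(math.log(N, b))               # number of levels, log_b N
    d = b ** ((1 - alpha) / alpha)
    denom = sum(d ** j for j in range(K))   # 1 + d + ... + d^(K-1)
    return [min(((d ** i) * (K - C) / denom) ** (1 / (1 - alpha)), 1.0)
            for i in range(K)]

# Illustrative parameters: 4096 nodes, base 16, target C = 0.5 hops,
# alpha = 0.9 (close to the Zipf parameter observed for DNS queries).
print(replication_fractions(N=4096, b=16, C=0.5, alpha=0.9))
# -> roughly [0.005, 0.12, 1.0]: only ~0.5% of objects are replicated
#    at every node (level 0), while the long tail stays near home nodes.
\end{verbatim}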

The analytical result provides several properties suited for latency-sensitive applications such as DNS. First, it succinctly captures the space-time tradeoff and enables applications to achieve any targeted average lookup latency by selecting an appropriate C. In CoDoNS, we set Beehive's target at C = 0.5 hops, which means that a majority of requests are answered immediately, without taking any extra network hops. Second, it incurs minimal bandwidth and storage overhead by picking the optimal number of replicas required to achieve the target lookup performance. Third, replicating objects across several nodes balances the load on individual Beehive nodes, reduces hotspots, and improves resilience against DoS attacks. Finally, the level of replication for each object can be used to quickly determine the location of all of its replicas, and to update them when necessary.

Beehive nodes need to know only the Zipf parameter and the relative popularity rank of objects in order to apply the closed-form solution and determine the extent of replication for each object. Beehive employs a combination of local measurement and limited aggregation to estimate these quantities. Each node locally measures the access frequency of each object it stores, and aggregates these measurements with other nodes once every aggregation interval. Each node aggregates values gathered from nodes one level higher in the routing table; the aggregates thus trickle down through the levels of the routing table until they reach the home node. The home node computes a final aggregate and propagates it to all replicas in the system. The Zipf parameter is derived locally from the aggregate access frequencies of objects and fed to the analytical model. Using these estimates, each Beehive node invokes the analytical model once every analysis interval and obtains the appropriate replication levels for the objects it stores.

The replication of these objects to the specified levels is then performed by the replication protocol. Replication in Beehive is controlled locally; that is, each node is responsible for replicating an object on nodes at most one hop away from itself. For example, the home node of a popular object replicates it at nodes that match one fewer prefix digit; those nodes then independently decide whether to replicate the object further, to one more level. In the replication phase, each node exchanges messages with the nodes in its routing table, which are one hop away from it, to push, delete, or update the replicas for which it is responsible.
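
One plausible way to derive the Zipf parameter from such aggregated counts, sketched below, is a least-squares fit of log(frequency) against log(rank): under a Zipf-like distribution, frequency falls off as rank^(-α), a straight line of slope -α on a log-log plot. This estimator is illustrative and not necessarily the one Beehive implements.

\begin{verbatim}
import math

# Sketch: estimate the Zipf parameter alpha from aggregated access
# counts by fitting log(frequency) vs. log(rank) with least squares.

def estimate_zipf_alpha(access_counts):
    freqs = sorted(access_counts, reverse=True)
    pts = [(math.log(rank), math.log(f))
           for rank, f in enumerate(freqs, start=1) if f > 0]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope                     # freq ~ rank^(-alpha)

# Quick check against synthetic Zipf(0.9) counts:
counts = [round(10000 / r ** 0.9) for r in range(1, 201)]
print(estimate_zipf_alpha(counts))    # ~0.9
\end{verbatim}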

The aggregation and replication protocols enable Beehive to quickly detect changes in the demand for objects. Large changes in the popularity of domain names occur during denial of service attacks and flash crowds. Beehive nodes constantly monitor the access frequency of objects and adjust the extent of replication. In response to DoS attacks, they promptly increase the number of replicas and spread the load among several nodes, curbing the attack.

Proactive replication also enables Beehive to rapidly push updates to all the replicas in the system. In general, proactive propagation of updates demands expensive mechanisms to keep track of all the nodes where the object is cached. Beehive requires just a small integer, the replication level of an object, to determine the range of nodes with replicas of the object. An object at level i is replicated at all nodes with i matching prefix digits. For a level i object, the home node propagates updates to all the nodes at level i in the routing table. Those nodes in turn propagate updates to nodes at level i+1. This recursive process disseminates the updates to nodes with i matching prefix digits. Nodes in the process of joining the DHT may miss the initial update propagation. Such nodes will receive the latest copy of the object in the next replication phase; they may incur a slight performance penalty, but will not serve stale data. Proactive update propagation obviates the need for timeout-based caching.
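
The sketch below models this recursive dissemination on the toy base-4 overlay used earlier. The routing table is simulated by scanning a global node list, and the IDs are hypothetical; starting from the home node, each sender forwards the update one routing-table row deeper, so the update fans out over a tree covering exactly the nodes that share at least i prefix digits with the object.

\begin{verbatim}
# Sketch: recursive update dissemination for a level-i object.
# Each sender forwards to one node per differing digit in each row
# >= its own entry row; the recursion covers the whole replica set.

def shared_digits(a, b):
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def disseminate(sender, row, nodes, reached):
    D = len(sender)
    for j in range(row, D):                 # rows j = row .. D-1
        for digit in "0123":                # base-4 digits
            if digit == sender[j]:
                continue                    # own subtree: deeper rows
            # any node agreeing with sender on j digits, then `digit`
            target = next((n for n in nodes
                           if shared_digits(n, sender) >= j
                           and n[j] == digit), None)
            if target and target not in reached:
                reached.add(target)
                disseminate(target, j + 1, nodes, reached)

nodes = ["2100", "2112", "2023", "2303", "0021", "1330", "3210"]
home, level = "2100", 1                     # object 2101 at level 1
reached = {home}
disseminate(home, level, nodes, reached)
print(sorted(reached))                      # every node starting with "2"
\end{verbatim}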

