

Related Work

Several researchers have extensively studied the performance of the legacy DNS and proposed new mechanisms to improve it. In this section, we discuss past approaches to measuring and improving the legacy DNS.

Performance Studies: In 1988, Mockapetris and Dunlap published a retrospective study on the development of the legacy DNS, identifying its successful features and shortcomings [27]. Several measurement studies since then have provided good insight into the advantages and drawbacks of the system. A recent survey by Pappas et al. [29] studies the impact of lame delegations, cyclic dependencies, and reduced nameserver redundancy on the performance of the legacy DNS; their findings confirm the results of our survey. Earlier studies by Danzig et al. [10] and Brownlee et al. [4,5] analyze DNS traffic at root and gTLD nameservers. Huitema et al. [16] and Wills et al. [41] study DNS performance observed from the client side. A more detailed study of the performance experienced by clients, along with an analysis of the effectiveness of caching in the legacy DNS, is presented by Jung et al. in [18,19]. These studies have provided invaluable insight into the operation of the legacy DNS and helped motivate and guide the design of CoDoNS.

Design Alternatives: Recently, a few schemes have been proposed to improve the failure resilience and lookup performance of DNS. Cohen and Kaplan [8] propose a proactive caching scheme for DNS records, in which expired records in the cache are proactively fetched from the authoritative nameservers. They analyze several heuristics-based prefetching algorithms and evaluate their performance. This scheme entails prefetching at intermediate caches, which generates a substantial amount of background traffic to update DNS records. In contrast, CoDoNS fetches each record only once at the home node, significantly reducing the overhead imposed on the legacy DNS.
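To illustrate the flavor of such proactive refresh, the following minimal sketch (not the algorithm of [8]; the refresh threshold, cache layout, and resolve_authoritative callback are assumptions introduced here) re-fetches cached records whose TTLs are close to expiry during a periodic background pass:

    import time

    # Hypothetical sketch of TTL-driven proactive refresh at an intermediate
    # cache, in the spirit of the heuristics analyzed by Cohen and Kaplan [8].
    # The policy (refresh once 90% of the TTL has elapsed) is an assumption.

    class ProactiveCache:
        def __init__(self, resolve_authoritative, refresh_fraction=0.9):
            # resolve_authoritative(name) -> (record, ttl_seconds), supplied by caller
            self.resolve = resolve_authoritative
            self.refresh_fraction = refresh_fraction
            self.entries = {}  # name -> (record, fetched_at, ttl)

        def lookup(self, name):
            entry = self.entries.get(name)
            now = time.time()
            if entry is None or now - entry[1] >= entry[2]:
                # Miss or expired: fetch synchronously (the only case a client waits).
                self._refresh(name)
            return self.entries[name][0]

        def refresh_expiring(self):
            # Background pass: re-fetch records that are close to expiry so that
            # client lookups almost always hit fresh data.
            now = time.time()
            for name, (_, fetched_at, ttl) in list(self.entries.items()):
                if now - fetched_at >= self.refresh_fraction * ttl:
                    self._refresh(name)

        def _refresh(self, name):
            record, ttl = self.resolve(name)
            self.entries[name] = (record, time.time(), ttl)

The background re-fetches in refresh_expiring are precisely the traffic that such schemes impose on authoritative nameservers, and that CoDoNS avoids by fetching each record only at its home node.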

CoDNS [30] masks delays in the legacy DNS caused by failures in local resolvers by diverting queries to other, healthy resolvers. CoDNS provides resilience against temporary problems with the legacy DNS, but is not intended as a replacement. DDNS [9] and Overlook [37] are peer-to-peer name services designed to enhance fault tolerance and load balancing. DDNS implements the legacy DNS functionality on top of Chord [36], a structured DHT based on consistent hashing with O(log N) lookup time. Overlook is a general-purpose name service built on top of Pastry [34]. Both DDNS and Overlook incur high lookup latencies because requests are routed through O(log N) overlay hops. Beehive provides a replication framework that enables CoDoNS to achieve O(1) lookup performance.
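For reference, the following minimal sketch (hypothetical node names and hash width; not code from DDNS or Overlook) shows the consistent-hashing mapping that Chord-style systems use to assign names to nodes, omitting the finger tables that let a lookup reach the responsible node in O(log N) hops:

    import hashlib
    from bisect import bisect_left

    # Minimal consistent-hashing sketch: keys and nodes share one circular ID
    # space, and a key is stored on its "successor" -- the first node clockwise
    # from the key. Chord [36] reaches that successor in O(log N) hops via
    # finger tables, which this sketch omits.

    def ring_id(value, bits=32):
        digest = hashlib.sha1(value.encode()).digest()
        return int.from_bytes(digest, "big") % (1 << bits)

    def successor(node_ids, key):
        # Find the first node ID at or after the key, wrapping around the ring.
        ids = sorted(node_ids)
        i = bisect_left(ids, ring_id(key))
        return ids[i % len(ids)]

    nodes = [ring_id(f"node-{n}") for n in range(8)]   # hypothetical node names
    print(successor(nodes, "www.example.com"))          # home node for this DNS name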

Some researchers have proposed to replace the hierarchical DNS and URL namespace with flat global identifiers [39]. CoDoNS can be used to map such identifiers to physical locations or to their content with high performance.

Structured DHTs: In addition to Chord and Pastry, several structured DHTs have been proposed in recent years. CAN [33] maps both objects and nodes onto a d-dimensional torus and provides O(dN^(1/d)) lookup latency. Tapestry [42] employs consistent hashing [20] to map objects to nodes, and prefix matching [31] to route lookups in O(log N) hops. Kademlia [24] provides O(log N) lookup performance using a similar search technique, but uses the XOR metric for routing. SkipNet [15] uses skip lists to provide a probabilistic O(log N) lookup guarantee. Viceroy [23] provides O(log N) lookup performance with a constant-degree routing graph. A few DHTs use de Bruijn graphs [21,40] to achieve O(log N / loglog N) lookup performance. The multi-hop lookup performance provided by these DHTs is inadequate to support performance-sensitive applications such as DNS.
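To make the XOR metric concrete, a minimal sketch (illustrative only, with toy 8-bit identifiers rather than Kademlia's 160-bit IDs) of how each routing step selects the known peers closest to the target key:

    # Illustrative sketch of XOR-metric peer selection as in Kademlia [24]:
    # the "distance" between two IDs is their bitwise XOR interpreted as an
    # integer, and each routing step forwards the lookup toward the known
    # peers closest to the target key under that metric.

    def xor_distance(a: int, b: int) -> int:
        return a ^ b

    def closest_peers(known_peers, key, k=3):
        # Return the k known peers XOR-closest to the key.
        return sorted(known_peers, key=lambda peer: xor_distance(peer, key))[:k]

    # Hypothetical 8-bit IDs for readability.
    peers = [0b10110010, 0b10010001, 0b01100111, 0b11011000]
    print([bin(p) for p in closest_peers(peers, key=0b10100000)])

Because each step at least halves the XOR distance to the key, a lookup converges in O(log N) such steps, which is still several overlay hops per query.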

A few recent DHTs provide O(1) lookup performance at the cost of increased storage and bandwidth consumption. Kelips [13] limits lookup latency to one or two hops by replicating each object on O(√N) nodes and using gossip-based protocols to manage replication. An alternative method to achieve one-hop lookups is described in [14]; it relies on maintaining full routing state (i.e., a complete description of system membership) at each node, and its space and bandwidth costs scale linearly with the size of the network. Farsite [11] also routes in a constant number of hops using a fixed-depth hierarchy, but does not address rapid membership changes. Overall, these DHTs incur a minimum delay of at least one overlay hop, whereas CoDoNS can decrease the average lookup time to less than a single hop.
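A rough sketch of the Kelips-style trade-off (the group count, hash function, and node names below are assumptions for illustration): hashing nodes into about √N affinity groups and replicating each object across one group yields roughly √N replicas per object and one- or two-hop lookups:

    import hashlib
    import math

    # Sketch of Kelips-style affinity groups [13]: with N nodes partitioned
    # into ~sqrt(N) groups of ~sqrt(N) members each, every object is replicated
    # on all members of one group, so a lookup completes in one or two hops by
    # contacting any member of the object's group.

    def group_of(name: str, num_groups: int) -> int:
        digest = hashlib.sha1(name.encode()).digest()
        return int.from_bytes(digest, "big") % num_groups

    N = 100                                  # hypothetical system size
    num_groups = int(math.isqrt(N))          # ~sqrt(N) affinity groups
    nodes = [f"node-{i}" for i in range(N)]  # hypothetical node names

    groups = {g: [] for g in range(num_groups)}
    for node in nodes:
        groups[group_of(node, num_groups)].append(node)

    # An object is replicated on every node in its group (~sqrt(N) replicas).
    replicas = groups[group_of("www.example.com", num_groups)]
    print(len(replicas), replicas[:3])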

