Network Working Group P. Karimi
Internet-Draft S. Mukherjee
Intended status: Informational Rutgers University
Expires: September 13, 2017 March 12, 2017

Global Name Resolution Service


Abstract

This document describes the requirements for a new mapping system, explains why the DNS was not chosen to serve this role, and introduces several proposed mapping system designs.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 13, 2017.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

1. Introduction

The current Internet architecture, which was designed with fixed hosts in mind, uses IP addresses to identify both users and their locations. This overloading of the namespace, or location-identity conflation [RFC1498], makes deploying basic mobility services such as session continuity and multi-homing challenging. In order for future networks to natively support these services, location-independent communication based on fixed names for the various endpoint principals (hosts, content, or services) is a crucial underlying requirement.

The separation of names/identities from addresses/locations has been proposed in multiple architectures to facilitate location-independent communication [MF], [RFC6830], [RFC4423], [XIA]. There is therefore a need for an efficient resolution system that can provide this identity-to-location translation for all network-attached objects. In the current Internet, a similar resolution of identities (domain names) to network locations (IP addresses) is provided by the Domain Name System (DNS). Although the DNS has evolved significantly from its origins in text files to today's sophisticated, hierarchically distributed resolvers, it still does not meet the requirements of next-generation networks, i.e., a distributed mapping infrastructure that can scale to orders of magnitude higher update rates with orders of magnitude lower user-perceived latency.

2. Specification of Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

3. Functional Requirements for a Mapping System

The following lists the requirements of a mapping system:

This list of functional requirements is not comprehensive. Please refer to [IDEAS-PS] for a detailed discussion on the requirements for the next-generation mapping and resolution services. The goal of this document is to highlight key functional requirements which are not well-handled by existing mapping systems such as the DNS and the proposed LISP DDT.

4. The Domain Name System

The DNS is a distributed mapping service ubiquitously used for translating human-readable names/URLs to IP addresses. The DNS database is stored in structured zone files which hierarchically divide the name space into zones and distribute it over a collection of DNS servers. The top-down hierarchy of DNS servers is as follows: root servers (13 of them), Top-Level Domain (TLD) servers (covering generic, country-code, and professionally managed domains), and authoritative DNS servers (providing public records for domain names).

The resolution of domain names follows the same hierarchy. Each network host is affiliated with a local DNS server (via DHCP or static configuration) which is provisioned with an initial cache of publicly known addresses for the root name servers (the top of the hierarchy). This functionality can also be placed in the application, which then queries the DNS directly. The root servers usually do not conduct the resolution themselves; instead, they refer the resolver to TLD and authoritative servers iteratively. This iterative back-and-forth messaging can result in large DNS query latency and a significant amount of unnecessary traffic. In order to reduce DNS traffic overhead, decrease latency for the end-user application, and improve efficiency, caching of DNS query results is performed by local DNS servers or browsers.
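
The iterative referral chain described above can be sketched as a small simulation. The zone contents, server names, and addresses below are invented for illustration; they are not real DNS data, and real resolvers exchange wire-format messages rather than dictionary lookups.

```python
# Conceptual sketch of iterative DNS resolution (hypothetical data).
# Each "server" maps a domain suffix either to a final address record
# or to a referral naming the next server to ask.
SERVERS = {
    "root":         {"com": ("referral", "tld-com")},
    "tld-com":      {"example.com": ("referral", "auth-example")},
    "auth-example": {"www.example.com": ("address", "192.0.2.10")},
}

def iterative_resolve(name, server="root", max_hops=10):
    """Follow referrals from the root down to an authoritative answer."""
    for _ in range(max_hops):
        zone = SERVERS[server]
        # Find a suffix of `name` that this server knows about.
        for suffix, (kind, value) in zone.items():
            if name == suffix or name.endswith("." + suffix):
                if kind == "address":
                    return value      # authoritative answer
                server = value        # referral: ask the next server down
                break
        else:
            raise KeyError(f"no referral for {name} at {server}")
    raise RuntimeError("referral loop exceeded max_hops")
```

Each loop iteration models one resolver-to-server round trip, which is why a cache miss costs several network round trips in the real system.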

The DNS suffers from high propagation and response latency. Name resolution is initiated by the client process calling the resolver, which returns a cached record or, in case of a cache miss, queries the DNS for the record of the given domain name. The request is sent to a DNS root server, and referrals are then forwarded iteratively between the local DNS server and the chain of TLD and authoritative DNS servers along the DNS hierarchy. In summary, the lookup latency for a DNS query consists of a) the latency from the client to the resolver and b) the latency from the resolver through the DNS hierarchy, if there is no cached entry for that name.

a) Client-to-resolver latency: [DNS-MEASURE1] provides thorough latency measurements from global vantage points to the 9 most commonly used public DNS providers. It shows that the average latency for cached queries (cache misses were not counted) varies from 38 to 159 milliseconds depending on the provider. The latency and its variation are governed by the number and location of data centers, anycast routing latency, additional caching [GOOGLE-EDGE], congestion and load on servers, etc.

b) Iterative lookup latency along the DNS hierarchy: This consists of the back-and-forth latency between the resolvers and the root servers, TLDs, and authoritative name servers, specifically when there is a cache miss at the resolver. This latency can be exacerbated by under-provisioning, long queues at the DNS resolvers, and malicious traffic [GOOGLE-DNS]. In [DNS-MEASURE2], the global average latency to the fastest root server within each country is reported as 70 ms. To the best of our knowledge there has been no recent comprehensive measurement of latencies to reach TLD and authoritative name servers, which is understandable given how diverse the geo-distribution and number of TLD and authoritative name servers are. However, given the static placement of authoritative name servers in most cases, unless the domain name is served by a service like Google's Cloud DNS [GOOGLE-CLOUDDNS], the latency from the resolvers to the authoritative name servers will not respond to changes in the popularity of, or demand for, the domain names they are responsible for, and can therefore be expected to be high in some cases. In [GOOGLE-DNS], the actual end-to-end resolution time is estimated to be around 300-400 ms, with high variance and a long tail.

There have been many proposals focused on improving the lookup latency of DNS resolvers [CODONS], as well as a number of public DNS resolvers, such as Google Public DNS [GOOGLE-DNS], which perform more optimized lookups (measurements of which were discussed above). However, low lookup latency can be achieved only if an optimally located DNS resolver has a cache hit for the lookup query. Unfortunately, cache misses are fundamentally difficult to avoid due to the Internet's growth and size, low TTLs, and cache isolation. The heavy reliance of DNS performance on TTL-based caching is a known issue. Despite the benefits of TTL-based caching for static content, which is what the DNS was designed for, caching becomes completely ineffective for highly mobile users or for CDNs, which require close-to-zero TTL values for mobility or load-balancing purposes. This is discussed in more detail in the next section.

5. MobilityFirst Name Resolution Service

5.1. Separation of names, addresses and flat IDs

MobilityFirst is a clean-slate future Internet architecture whose principal design goals are mobility and trustworthiness. The vision of MobilityFirst is that, given the abundance of mobile devices ranging from cellphones to drones, mobility should be treated as a first-class service. Current approaches to providing mobility, such as Mobile IP [RFC5944], suffer from routing inefficiency (both in terms of latency and overhead) due to tunneling all data through an anchor point. In MobilityFirst these goals are achieved by a clean separation of names and addresses. Every network entity is represented by a flat self-certifying identifier, which is location-independent and allows network-layer authentication through a bilateral procedure. In addition to the GUID, which is a statically assigned identifier, each point of attachment of a network entity to the network is assigned a network address (similar to an IP address), which can change dynamically.

Binding a GUID to network addresses is facilitated by a global name resolution service, which is logically centralized but physically distributed, called the GNRS (Global Name Resolution Service).

In today's Internet, the DNS provides the functionality of going from a human-readable name to an IP address that determines where the named entity is located, and it provides this service to end-point applications. In MobilityFirst, by contrast, a human-readable name is translated to the corresponding GUID (globally unique identifier) through the ORS (Object Resolution Service). It is noteworthy that this operation happens infrequently, as GUIDs are statically assigned.

Within the network, the tuple [GUID, Network Address (NA)] is the routable destination identifier carried in packet headers. After obtaining the corresponding GUID, the next step is therefore to discover the location of that GUID. The GNRS is the service that performs this name-to-address resolution. This allows entities to retain their long-lasting globally unique identifiers and maintain reachability and session continuity more effectively.

5.2. Different Implementations of the GNRS

MobilityFirst relies heavily on the name service for advanced network-layer functionality. This reliance necessitates high performance from the name service, which must resolve identifiers to dynamic attributes in a fast, consistent, and cost-effective manner at Internet scale. As mentioned before, a name resolution service should support two main operations: insert/update and lookup. These operations, which involve any node in the network (an end-host or a router) querying the massively distributed name resolution service, should not induce a large overhead.

To achieve this goal, a large geo-distributed deployment of name servers is necessary. The challenge is to ensure the placement of a consistent replica close to its demand regions while taking into account frequent updates due to mobility. There have been two major proposals for the efficient implementation and deployment of the GNRS: 1) a DHT-based name service, in which the hash of a name determines where the mapping entry for that name is stored [DMAP]; in a more advanced version of this design, popularity and locality are also taken into account when placing the mapping entries [GMAP]; and 2) a demand-aware mapping-entry replica placement engine that intelligently replicates name records to provide low lookup latency, low update cost, and high availability [AUSPICE].

All of these design and deployment proposals argue that a DNS-like design is ill-suited to enabling a fast, consistent, and cost-effective query/update mechanism. TTL-based caching is one of the strengths of the DNS, with long TTLs reducing client-perceived latency and the load on the infrastructure. However, this very strength can pose serious challenges in the face of frequent node mobility, which requires near-zero TTL values to ensure consistent responses. Low TTL values make caching ineffective and exacerbate update propagation times. Moreover, authoritative name servers need heavy provisioning to keep lookup latencies low, which increases the maintenance cost of the mapping system.

5.2.1. Auspice

The main design goal of Auspice is to provide an automated infrastructure for the placement of geo-distributed name resolvers. The two main components of Auspice are the replica controllers, which determine the number and geo-location of name resolvers, and the name resolvers themselves, which maintain each identifier's attributes and reply to users' read and write requests. Each name is associated with a fixed number, k, of replica controllers and a variable number of active replicas of the corresponding resolver.

Replica controllers: These have a fixed number of locations, computed using k well-known consistent hash functions. Replica controllers take into account the popularity and frequency of queries, which can change dynamically on both short and long timescales. Providing automated resolver placement as an infrastructure service relieves authoritative name servers of the manual and redundant effort of performing this task themselves. Paxos [PAXOS] is executed among the replica controllers for consistency, coordination, and fault tolerance.

Replica controllers are in charge of responding to clients' requests for a GUID. Specifically, if a sender wants to communicate with GUID_A, it performs a consistent hash function on GUID_A. The result of this hashing is the list of current replica controllers responsible for GUID_A. The request is then redirected to the name resolvers, where the attributes for GUID_A are stored.
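
The controller-lookup step can be sketched as follows. The site names, the use of salted SHA-256 as the k "well-known" hash functions, and the linear probing used to avoid duplicate sites are illustrative assumptions, not Auspice's actual implementation.

```python
import hashlib

# Hypothetical fixed set of replica-controller sites, known to every node.
CONTROLLER_SITES = ["ctrl-us", "ctrl-eu", "ctrl-asia", "ctrl-sa", "ctrl-af"]
K = 3  # fixed number of replica controllers per name

def replica_controllers(guid, sites=CONTROLLER_SITES, k=K):
    """Return the k controller sites responsible for `guid`.

    Every node evaluates the same k salted hash functions, so any
    sender can locate GUID_A's controllers without a directory lookup.
    """
    chosen = []
    for i in range(k):
        digest = hashlib.sha256(f"{i}:{guid}".encode()).digest()
        idx = int.from_bytes(digest[:8], "big") % len(sites)
        # Probe forward so the k controllers land on distinct sites.
        while sites[idx] in chosen:
            idx = (idx + 1) % len(sites)
        chosen.append(sites[idx])
    return chosen
```

Because the mapping is a pure function of the GUID, the sender and the controllers never need to coordinate on where a name's control state lives.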

Name resolvers: The resolvers host the active replicas for identifiers. The decision to distribute or migrate the active replicas for identifiers is made at a pre-determined time period called an epoch. In each epoch, the replica controllers receive a summarized load report, which can be a spatial vector of request rates for an identifier from different regions as seen by the active replica. By aggregating these load reports, the replica controllers develop a concise spatial distribution of the requests for each identifier.

Given this distribution of requests and the capacity constraints of the mapping servers, the replica controllers use a placement algorithm to determine the number and location of the active replicas for an identifier.

The number of active replicas is proportional to the ratio of the read rate to the write rate for an identifier. Active replicas are placed at the locations with the highest number of requests, plus some random locations for load balancing. Clients' requests are redirected to the corresponding active replicas taking into account each name server's load and the latency to it.
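
As a rough sketch of the proportionality rule only: the minimum/maximum bounds and rounding below are hypothetical parameters, not Auspice's actual placement algorithm, which also accounts for capacity and geography.

```python
def num_active_replicas(read_rate, write_rate, min_replicas=3, max_replicas=50):
    """Scale the replica count with the read/write ratio (illustrative).

    Reads benefit from more replicas (lower latency, load spreading),
    while every write must reach all replicas, so write-heavy names
    are kept at the minimum replication level.
    """
    if write_rate <= 0:
        return max_replicas  # a never-updated name can replicate widely
    ratio = read_rate / write_rate
    return max(min_replicas, min(max_replicas, round(ratio)))
```

For example, a name read 100 times for every 10 writes would get about 10 replicas, while a write-dominated name stays at the floor of 3.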

One important challenge when updating the name servers is maintaining write consistency between the various active replicas of a GUID. Consistency is maintained using Paxos: any write to a GUID's attributes is forwarded to the current Paxos coordinator node. After numbering the request and checking with a majority of the replicas, the coordinator sends a commit notification to all replicas.
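
The coordinator's commit path can be modeled very roughly as below. This toy model captures only the request-numbering and majority-acknowledgement condition; it is not full Paxos (no leader election, no prepare phase, no failure handling).

```python
class Replica:
    """Toy replica that logs accepted writes and applies committed ones."""
    def __init__(self):
        self.log = {}        # seq -> accepted write
        self.committed = {}  # seq -> committed write

    def accept(self, seq, write):
        self.log[seq] = write
        return True          # always acks in this failure-free sketch

    def commit(self, seq, write):
        self.committed[seq] = write

def coordinator_commit(write, replicas, next_seq):
    """Number the write, gather acks, commit once a majority accepts."""
    acks = sum(1 for r in replicas if r.accept(next_seq, write))
    if acks > len(replicas) // 2:            # majority condition
        for r in replicas:
            r.commit(next_seq, write)        # notify all replicas
        return True
    return False
```

The majority requirement is what guarantees that any two committed writes are ordered consistently, even if some replicas lag behind.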

5.2.2. Direct Map

The direct mapping (DMap) design was the first proposed implementation. It is an in-network approach wherein every autonomous system (AS) in the world participates in a hashmap-based name resolution service and shares the workload of hosting GUID-to-network-address mappings. Assuming the underlying routing is stable and all networks are reachable, DMap hashes every GUID to K network addresses (IP addresses in this case) and stores the mapping at those K addresses. Every time the mapping changes, K update messages are sent, one to the server at each of these locations. Correspondingly, every query for the current mapping of a GUID is anycast to the nearest of the K locations.
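
A minimal sketch of DMap's hash-and-replicate scheme follows. The resolver addresses, salted SHA-256 hashing, and the latency function standing in for anycast routing are all hypothetical; this is not the paper's exact mechanism.

```python
import hashlib

# Hypothetical participating ASes, one resolver address each.
AS_ADDRESSES = ["198.51.100.1", "198.51.100.2", "203.0.113.1",
                "203.0.113.2", "192.0.2.1"]
K = 3  # replicas per GUID

def dmap_locations(guid, addresses=AS_ADDRESSES, k=K):
    """Hash a GUID with k salted hash functions to pick the k storage sites."""
    locs = []
    for i in range(k):
        h = hashlib.sha256(f"dmap-{i}-{guid}".encode()).digest()
        idx = int.from_bytes(h[:8], "big") % len(addresses)
        while addresses[idx] in locs:      # probe past duplicates
            idx = (idx + 1) % len(addresses)
        locs.append(addresses[idx])
    return locs

def update(guid, new_na, store):
    """Send K update messages, one to each storage location."""
    for addr in dmap_locations(guid):
        store.setdefault(addr, {})[guid] = new_na

def lookup(guid, store, latency):
    """Query the nearest of the K locations (modeling anycast)."""
    nearest = min(dmap_locations(guid), key=latency)
    return store[nearest][guid]
```

Because both `update` and `lookup` recompute the same K locations from the GUID alone, no directory of "which server holds which GUID" is needed.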

DMap is the simplest of the three designs, and it balances the workload across all ASes efficiently. Since uniform hash functions decide where a mapping is stored, the basic DMap implementation is not suitable for optimizing mapping placement based on service requirements. However, the focus of this work was on providing a globally available mapping system with high availability and moderate latency, making it well suited to basic mobility and to services with moderate latency requirements. Detailed Internet-scale simulation of DMap shows that with 3 replicas per GUID the 98th-percentile latency is around 100 milliseconds [DMAP], which is reasonable for most user-mobility-centric applications.

5.2.3. GMap

GMap [GMAP] is an updated version of DMap in which the GUID-to-address mapping is distributed with geo-location and local popularity in mind. For each GUID, similar consistent hash functions are used to assign resolution servers. However, for each mapping, the servers are categorized into local, regional, and global sets based on geo-locality. Each mapping is then replicated to K1 local servers, K2 regional servers, and K3 global servers. Therefore, unlike Auspice, GMap does not require per-GUID replica optimization, yet still achieves better latency than DMap, at the cost of a higher storage workload due to the increased number of replicas per GUID. In addition, GMap allows temporary in-network caching of the mapping along the route between a resolution server and a querying entity, so that future mapping requests for the same GUID can be resolved faster. Internet-scale simulations show GMap achieving latencies of tens of milliseconds, similar to Auspice, but with lower complexity and computation overhead.
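
The three-tier replication rule can be sketched as follows. The server lists, the K1/K2/K3 values, and the salted-hash selection are hypothetical; GMap's actual tier definitions are derived from network geo-locality.

```python
import hashlib

def _pick(guid, candidates, k, salt):
    """Deterministically pick k distinct servers from one tier."""
    chosen = []
    for i in range(min(k, len(candidates))):
        h = hashlib.sha256(f"{salt}-{i}-{guid}".encode()).digest()
        idx = int.from_bytes(h[:8], "big") % len(candidates)
        while candidates[idx] in chosen:   # probe past duplicates
            idx = (idx + 1) % len(candidates)
        chosen.append(candidates[idx])
    return chosen

def gmap_replicas(guid, local, regional, global_, k1=2, k2=2, k3=1):
    """Replicate a mapping to K1 local, K2 regional, and K3 global servers.

    Nearby queries hit the local tier; the global tier guarantees the
    mapping stays reachable from anywhere.
    """
    return (_pick(guid, local, k1, "L") +
            _pick(guid, regional, k2, "R") +
            _pick(guid, global_, k3, "G"))
```

The tiering is the whole difference from DMap: replica counts per tier are fixed for every GUID, so no per-GUID demand optimization is needed, yet most lookups can be served from a nearby tier.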

5.3. GNRS summary

A summary of the different functionalities and features of these name resolution implementations is shown in Table 1.

   |           |    Auspice    |        GMap        |       DMap       |
   |  Location |  overlaid on  |     in-network     |    in-network    |
   |  relative |     top of    |                    |                  |
   |     to    |    network    |                    |                  |
   |  network  |               |                    |                  |
   |           |               |                    |                  |
   | Algorithm |  Demand-aware |  Distributed hash  | Distributed hash |
   |    type   |   replicated  |       table        |      table       |
   |           | state machine |                    |                  |
   |           |               |                    |                  |
   |   Record  |    GUID to    | GUID to arbitrary  | GUID to up to 5  |
   |  content  |   arbitrary   | values (recursively|  NAs, each with  |
   |           |   number of   |   other GUIDs or   |  an expiration   |
   |           |     values    | Network Addresses) |     time and     |
   |           |               |                    |  prioritization  |
   |           |               |                    |      weight      |
   |           |               |                    |                  |
   |    Name   |  Geo-located  | Geo-located based  | Not geo-located; |
   |   server  |    based on   | on GUID's physical | one name server  |
   | placement |    requests   |      location      | in the GUID's AS |
   |           |               |                    |                  |
   | Number of |    Based on   | Fixed number; each |  Fixed number;   |
   |  replicas | recent demand | GUID has K1 local, | each GUID has K  |
   |  per GUID |   and update  |  K2 regional, K3   | global, 1 local  |
   |           |   frequency   |  global replicas   |     replicas     |
   |           |               |                    |                  |
   |  Caching  |  No caching;  |  Caches responses  |   Future work    |
   |           |      load     |   along the path   |                  |
   |           |  balancing by |    between the     |                  |
   |           |   adjusting   |  querying entity   |                  |
   |           |   number of   |  and name server   |                  |
   |           |  name servers |                    |                  |

Table 1: Summary of the different name resolution implementations

6. Security Considerations


7. Acknowledgements


8. IANA Considerations


9. Normative References

, ", ", ", ", ", ", ", "
[AUSPICE] Sharma, A., Tie, X., Uppal, H., Venkataramani, A., Westbrook, D. and A. Yadav, "A global name service for a highly mobile internetwork", 2014.
[CISCO-VNI] "Cisco Visual Networking Index: Global Mobile Data Traffic Forecast Update, 2016-2021", 2017.
[CODONS] Ramasubramanian, V. and E. Sirer, "The design and implementation of a next generation name service for the internet", 2004.
[DMAP] Vu, T., Baid, A., Zhang, Y., Nguyen, T., Fukuyama, J., Martin, R. and D. Raychaudhuri, "DMap: A Shared Hosting Scheme for Dynamic Identifier to Locator Mappings in the Global Internet", 2012.
[DNS-MEASURE1] "Comparing Latency of the Top Public DNS Providers", 2015.
[DNS-MEASURE2] "Comparing Root Server Performance Around the World", 2015.
[GMAP] Hu, Y., Yates, R. and D. Raychaudhuri, "A Hierarchically Aggregated In-Network Global Name Resolution Service for the Mobile Internet", WINLAB TR 442, March 2015.
[GOOGLE-CLOUDDNS] "Reliable, resilient, low-latency DNS serving from Google's worldwide network".
[GOOGLE-DNS] "Google Public DNS".
[GOOGLE-EDGE] "Google Edge Caching Project".
[HUAWEI-WP] "5G: A Technology Vision", 2013.
[IDEAS-PS] Pillay-Esnault, P., Boucadair, M., Jacquenet, C., Fioccola, M. and A. Nennker, "Problem Statement for Mapping Systems in Identity Enabled Networks", March 2017.
[ILA] Herbert, T., "Identifier-locator addressing for network virtualization", March 2016.
[LOC-INDEP-ARCH] Gao, Z., Venkataramani, A., Kurose, J. and S. Heimlicher, "Towards a Quantitative Comparison of Location-Independent Network Architectures", 2014.
[MF] Venkataramani, A., Kurose, J., Raychaudhuri, D., Nagaraja, K., Mao, M. and S. Banerjee, "MobilityFirst: a mobility-centric and trustworthy internet architecture", 2014.
[PAXOS] Lamport, L., "The part-time parliament", 1998.
[RFC1498] Saltzer, J., "On the Naming and Binding of Network Destinations", RFC 1498, August 1993.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC4423] Moskowitz, R. and P. Nikander, "Host Identity Protocol (HIP) Architecture", RFC 4423, May 2006.
[RFC5944] Perkins, C., "IP Mobility Support for IPv4, Revised", RFC 5944, November 2010.
[RFC6830] Farinacci, D., Fuller, V., Meyer, D. and D. Lewis, "The Locator/ID Separation Protocol (LISP)", RFC 6830, January 2013.
[VMWARE-WP] "VMware View 5 with PCoIP, Network Optimization Guide White Paper", 2011.
[XIA] Anand, A., Dogar, F., Han, D., Li, B., Lim, H., Wu, W., Akella, A., Andersen, D., Byers, J. and S. Seshan, "XIA: Efficient Support for Evolvable Internetworking", 2012.

Authors' Addresses

Parishad Karimi
Rutgers University
EMail:

Shreyasee Mukherjee
Rutgers University
EMail: