Network Working Group D. Purkayastha
Internet-Draft A. Rahman
Intended status: Informational D. Trossen
Expires: September 2, 2018 InterDigital Communications, LLC
March 1, 2018

Leading indicators of change for routing in Modern Data Center environments
draft-purkayastha-dcrouting-leading-indicators-00

Abstract

This document describes a few use cases to illustrate the expectations placed on today's networks. Based on those expectations, it describes how network architectures and network requirements are changing. The new requirements are impacting data center architecture. The way data centers are evolving, such as from central data centers to smaller data centers, and from a single deployment to multiple deployments, is described. With this new data center model, areas such as routing inside and outside the data center are impacted. The document describes this impact and summarizes a few features for this new data center model.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 2, 2018.

Copyright Notice

Copyright (c) 2018 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.


Table of Contents

1. Introduction

The requirements on today's networks are very diverse, enabling multiple use cases such as IoT, content distribution, multiplayer online gaming, and virtual network functions such as Cloud RAN. Huge amounts of data are generated, stored, and consumed at the edge of the network. These use cases have led to the evolution of data centers into smaller form factors, known as Micro Data Centers (MDCs), suitable for deployment at the edge of the network.

In this document, we describe use cases to illustrate the trend where MDCs are deployed at multiple physical locations instead of one. This is akin to having several Internet points of presence (POPs) rather than a single one, with the MDC representing the services commonly found in the Internet. With this evolving landscape of multi-POP deployment of MDCs at the edge of the network, we envision that the MDCs will be deployed over a pure L2 network. We describe the impact on routing within the MDC, as well as among the multiple MDCs at the edge.

The composition of a 'multi-POP' MDC out of several smaller Micro DCs drives the need for standardized routing between those POPs, particularly if those POPs are deployed in a pure L2 network.

2. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

3. Evolving landscape

Use cases such as IoT and Connected Vehicles impose stringent requirements on the network.

The sheer number of mobile devices connected to the Internet, machine-to-machine (M2M) communication, the Industrial Internet of Things, and resource-dependent applications, such as data-heavy streaming video and wearables, all generate huge amounts of data.

Processing these huge amounts of data in a central data center burdens the network and adds latency.

To reduce that burden and improve latency, data is increasingly processed close to the edge. We describe in detail the rationale for moving computation to the edge of the network and how data centers are changing to handle it, and we summarize a few requirements. To understand the changes that are happening, we start with a few relevant use cases.

3.1. Video orchestration and delivery

The video orchestration service example from the ETSI MEC requirements document [ETSI_MEC] may be considered. The proposed use case of edge video orchestration suggests a scenario where visual content is produced and consumed at the same location, close to consumers, in a densely populated and clearly bounded area. Such a case could be a sports event or concert where a large number of consumers use their hand-held devices to access user-selected, tailored content. The overall video experience is combined from multiple sources, such as local recording devices, which may be fixed as well as mobile, and a master video from a central production server. The user is given the opportunity to select tailored views from a set of local video sources.

3.2. Vehicle to Vehicle, Vehicle to Anything (V2X)

The V2X use case group “Safety” includes several different types of use cases to support road safety using the vehicle-to-infrastructure (V2I) communication in addition to the vehicle-to-vehicle (V2V).

Intersection Movement Assist (IMA): This type of use case was specifically listed in the US DOT NHTSA publication 2016-0126 [USDOT] and ETSI TR 102 638 [ETSI_ITS]. The main purpose of IMA is to warn drivers of vehicles approaching from a lateral direction at an intersection. IMA is designed to avoid intersection crossing crashes, the most severe crashes based on fatality counts. Intersection crashes include intersection, intersection-related, driveway/alley, and driveway-access-related crashes.

Advanced driving assistance, represented by these use cases, collects the most challenging requirements for V2X. It can require the distribution of a relatively large amount of data with high reliability and low latency in parallel. Additionally, the advanced driving use cases would benefit from predictive reliability: vehicles on the move should be able to receive a prediction of network availability in order to plan ahead.

Real-Time Situational Awareness and High Definition (Local) Maps: Real-time situational awareness is essential for autonomous vehicles, especially at critical road segments, in cases of changing road conditions (e.g., a new traffic cone detected by another vehicle some time ago). In addition, the relevant high definition local maps need to be made available via download from a backend server.

The use case for real-time situational awareness and High Definition (Local) Maps should not only be seen as a case of distributing information on relatively slowly changing road conditions. It should be extended to distributing and aggregating locally available information in real time to the traffic participants via roadside units.

See-Through (or High Definition Sensor Sharing): In this type of use case, vehicles such as trucks, minivans, and cars in platoons are required to share camera images of the road conditions ahead of them with the vehicles behind them.

The vulnerable road user (VRU) use case covers pedestrians and cyclists. A critical requirement for efficient use of information provided by VRUs is the accuracy of the positioning information these traffic participants provide. Additional means of using available information for better and more reliable accuracy are crucial to allow real-world usage of information shared by VRUs. Cooperation between vehicles and vulnerable road users (pedestrians, cyclists, etc.) through their mobile devices (e.g., smartphones, tablets) will be a key element in improving traffic safety and avoiding accidents.

4. Analysis

The use cases described above lead to certain expectations of, and capabilities required from, the network.

Based on the analysis of these use cases, the emerging trends can be summarized.

5. Data Center Evolution

Installing more hardware resources and bigger switches to increase bandwidth in centralized enterprise data centers can only reduce latency to a certain extent. Today's approach is to move compute and storage resources close to the end user, e.g., to the edge of the network (gateway, CPE, etc.). Businesses are looking for ways to expand data processing infrastructure closer to where data is generated. Today, many organizations that need to share and analyze quickly growing amounts of data, such as retailers, manufacturers, telcos, financial services firms, and many more, are turning to localized micro data centers installed on the factory floor, in the telco central office, at the back of a retail outlet, etc. The solution applies to a broad base of applications that require low latency, high bandwidth, or both.

A micro data center is "a self-contained, secure computing environment that includes all the storage, processing and networking required to run the customer's applications." Micro data centers are assembled and tested in a factory environment and shipped in single enclosures that include all necessary power, cooling, security, and associated management tools.

Micro data centers are designed to minimize capital outlay, reduce footprint and energy consumption, and increase speed of deployment. Several business and technology trends have created the conditions for micro data centers to emerge as a solution.

Micro Data Centers deployed at the edge of the network are entirely or largely deployed over Layer 2 for cost and efficiency reasons, e.g., due to integration with cellular subsystems [_3GPP_SBA], or through moves to SDN connectivity in smart cities [BIO_TRIAL] and operator core networks [ATT].

It is also common to deploy more than one micro data center at the edge to support diverse data and computing requirements. Cloud service providers are moving away from a single-POP deployment of MDCs at the edge to a multiple-POP deployment. These multiple POPs are deployed over L2 interfaces for fast and efficient switching, exchange of information, etc., thus enabling dynamic service composition at the edge. The following diagram illustrates the trend in MDC deployment at the edge of today's networks.


                 +-----+
            +----+ MDC +----+
+------+    |    +-----+    |      +---------------+
|      |    |               |      |               |
|  UE  |----+  EDGE CLOUD   +------+  DATA CENTER  |
|      |    |               |      | DISTANT CLOUD |
+------+    |    +-----+    |      +---------------+
            +----+ MDC +----+
                 +-----+
|--Service routing over L2--|-Service routing over L3-|

               

Figure 1: Service Routing at Edge

6. Considerations for MDCs (Micro Data Centers) at the Edge

As Micro Data Centers are deployed at the edge of the network, several points need to be considered.

In such a dynamic network environment, the capability to identify and aggregate one or more data sources within a micro data center, as well as across micro data centers, is desirable.
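As an illustrative sketch only (not part of any cited specification; the registry class, source names, and endpoints below are all hypothetical), such name-based aggregation across MDCs could look like a registry that lets a consumer address a named data source rather than individual hosts:

```python
# Hypothetical sketch: a registry that identifies data sources by name
# within each MDC and aggregates matching sources across MDCs.
from collections import defaultdict

class EdgeRegistry:
    def __init__(self):
        # data-source name -> set of (mdc, endpoint) providers
        self._sources = defaultdict(set)

    def announce(self, name, mdc, endpoint):
        """An MDC announces that it hosts a provider of a named source."""
        self._sources[name].add((mdc, endpoint))

    def aggregate(self, name):
        """Return all providers of a named source, across every MDC."""
        return sorted(self._sources[name])

reg = EdgeRegistry()
reg.announce("intersection-cam", "mdc-east", "10.0.1.5:554")
reg.announce("intersection-cam", "mdc-west", "10.0.2.7:554")
assert len(reg.aggregate("intersection-cam")) == 2
```

The point of the sketch is only that aggregation operates on names, so adding or removing an MDC changes the provider set without changing how the consumer addresses the data.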

Given that deployments of micro-DCs are considered in edge environments, service routing should be possible over pure Layer 2 solutions, in particular emerging SDN-based transport networks.

It should be possible to quickly move a service or data instance in response to user mobility or resource availability, within a micro data center as well as across micro data centers. From a routing perspective, this means that any request for data or a service needs to be switched quickly from one service instance to another, whether running within one micro data center or across micro data centers. Given the evolution of virtual instance technologies, which push (virtual) service instantiation down to seconds or less, any such service routing change must occur on the same time scale as the instantiation of the service instance.
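A minimal sketch of this requirement (all class and service names are hypothetical, not taken from any cited system): requests are resolved via a service name, and repointing the name to a new instance is a single atomic update, so the routing change itself adds essentially no time on top of instantiating the new instance:

```python
# Hypothetical sketch of name-based service routing: the resolver can
# repoint a service name to a new instance in one atomic update.
import threading

class ServiceRouter:
    """Maps service names to the currently selected instance endpoint."""

    def __init__(self):
        self._lock = threading.Lock()
        self._routes = {}  # service name -> instance endpoint

    def register(self, service, endpoint):
        with self._lock:
            self._routes[service] = endpoint

    def resolve(self, service):
        with self._lock:
            return self._routes.get(service)

    def switch(self, service, new_endpoint):
        """Atomically repoint a service to a new instance (e.g., one
        just instantiated in another MDC); subsequent requests resolve
        to the new endpoint."""
        with self._lock:
            old = self._routes.get(service)
            self._routes[service] = new_endpoint
            return old

router = ServiceRouter()
router.register("video-orchestrator", "mdc1.example:8080")
router.switch("video-orchestrator", "mdc2.example:8080")
assert router.resolve("video-orchestrator") == "mdc2.example:8080"
```

The design choice sketched here is indirection through a name: because requesters never hold the instance address directly, moving the instance is invisible to them apart from the single table update.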

Since service interactions can run over a longer period (e.g., for video chunk download), changes of service requests to new service instances should be possible mid-session without loss of already obtained data.
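One way to picture mid-session continuity, as a sketch under assumed semantics (the `Instance` class, its withdrawal signal, and the range-read interface are all hypothetical): the client tracks the byte offset of data already received, so when its request is switched to a new service instance it asks only for the remaining range and loses nothing:

```python
# Hypothetical sketch: resuming a chunked download at the current
# offset after a mid-session switch to a new service instance.

class Instance:
    """A service instance serving a shared content object; it may be
    withdrawn mid-session (read() returns None), forcing a switch."""

    def __init__(self, content, serve_limit=None):
        self.content = content
        self.reads_left = serve_limit   # None means unlimited

    def read(self, offset, length):
        if self.reads_left == 0:
            return None                 # instance withdrawn mid-session
        if self.reads_left is not None:
            self.reads_left -= 1
        return self.content[offset:offset + length]

def fetch(total_size, instances, chunk=4):
    """Download content, resuming at the current offset whenever the
    request is switched to the next available instance."""
    received = bytearray()
    offset = 0
    for instance in instances:
        while offset < total_size:
            data = instance.read(offset, min(chunk, total_size - offset))
            if data is None:            # switch to the next instance
                break
            received += data
            offset += len(data)
        if offset >= total_size:
            break
    return bytes(received)

content = b"0123456789abcdef"
old = Instance(content, serve_limit=2)  # withdrawn after two chunks
new = Instance(content)
result = fetch(len(content), [old, new])
assert result == content                # no data lost across the switch
```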

As users or service endpoints move within a micro data center or across micro data centers, any service request should follow via a direct path.

7. Conclusion

We are going to witness deployment of MDCs at the edge of the network to support the advanced use cases of the future. In order to realize that future, we believe that the above features need to be considered for tomorrow's data centers.

8. IANA Considerations

This document requests no IANA actions.

9. Security Considerations

TBD.

10. Informative References

[_3GPP_SBA] 3GPP, "Technical Realization of Service Based Architecture", 3GPP TS 29.500 0.4.0, January 2018.
[ATT] ATT, "ATT's Network of the Future",
[BIO_TRIAL] Bristol, "BIO TRIAL",
[ETSI_ITS] ETSI, "Vehicular Communications, Basic Set of Applications, Definitions", ETSI TR 102 638 1.1.1, June 2009.
[ETSI_MEC] ETSI, "Mobile Edge Computing (MEC), Technical Requirements", GS MEC 002 1.1.1, March 2016.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, DOI 10.17487/RFC2119, March 1997.
[USDOT] US DOT, "Federal Motor Vehicle Safety Standards; V2V Communications", NHTSA-2016-0126 RIN 2127-AL55, 2016.

Authors' Addresses

Debashish Purkayastha InterDigital Communications, LLC Conshohocken, USA EMail: Debashish.Purkayastha@InterDigital.com
Akbar Rahman InterDigital Communications, LLC Montreal, Canada EMail: Akbar.Rahman@InterDigital.com
Dirk Trossen InterDigital Communications, LLC 64 Great Eastern Street, 1st Floor London, EC2A 3QR United Kingdom EMail: Dirk.Trossen@InterDigital.com URI: http://www.InterDigital.com/