NFVRG R. Szabo
Internet-Draft Z. Qiang
Intended status: Informational Ericsson
Expires: September 10, 2015 M. Kind
Deutsche Telekom AG
March 9, 2015

Towards recursive virtualization and programming for network and cloud resources
draft-unify-nfvrg-recursive-programming-00

Abstract

The introduction of Network Function Virtualization (NFV) in carrier-grade networks promises improved operations in terms of flexibility, efficiency, and manageability. NFV is an approach that combines network and compute virtualization. However, network and compute resource domains expose different virtualizations and programmable interfaces. In [I-D.unify-nfvrg-challenges] we argued for a joint compute and network virtualization by looking into different compute abstractions.

In this document we analyze different approaches to orchestrate a service graph with transparent network functions into a commodity data center. We show that a joint, recursive compute and network virtualization and programming approach has clear advantages compared to approaches with separate control of compute and network resources. The discussion of the problems and the proposed solution is generic to any data center use case; however, we use NFV as an example.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 10, 2015.

Copyright Notice

Copyright (c) 2015 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.


Table of Contents

1. Introduction

To a large degree there is agreement in the research community that rigid network control limits the flexibility of service creation. In [I-D.unify-nfvrg-challenges] we argued for a joint compute and network virtualization and programming by looking into different compute abstractions.

Our goal here is to analyze different approaches to instantiate a service graph with transparent network functions into a commodity Data Center (DC). More specifically, we analyze black box and white box DC set-ups, which differ in how much of the DC internals is exposed for steering control (see Section 3).

The discussion of the problems and the proposed solution is generic to any data center use case; however, we use NFV as an example.

2. Terms and Definitions

We use the terms "compute" and "compute and storage" interchangeably throughout the document. Moreover, we use the following definitions, as established in [ETSI-NFV-Arch]:

NFV:
Network Function Virtualization - The principle of separating network functions from the hardware they run on by using virtual hardware abstraction.
NFVI:
NFV Infrastructure - Any combination of virtualized compute, storage and network resources.
VNF:
Virtualized Network Function - a software-based network function.
MANO:
Management and Orchestration - In the ETSI NFV framework [ETSI-NFV-MANO], this is the global entity responsible for the management and orchestration of the NFV lifecycle.

Further, we make use of the following terms:

NF:
a network function, either software-based (VNF) or appliance-based.
SW:
a (routing/switching) network element with a programmable control plane interface.
DC:
a data center network element which in addition to a programmable control plane interface offers a DC control interface.
CN:
a compute node network element, which is controlled by a DC control plane and provides execution environment for virtual machine (VM) images such as VNFs.

3. Use Cases

The inclusion of commodity Data Centers (DCs), e.g., OpenStack-based ones, into service graphs is far from trivial [I-D.ietf-sfc-dc-use-cases]: different exposures of the DC internals imply different degrees of operational dynamism and orchestration complexity, and may yield different business cases with regard to infrastructure sharing.

We investigate different scenarios with a simple forwarding graph of three VNFs (o->VNF1->VNF2->VNF3->o), where all VNFs are deployed within the same DC. We assume that the DC is a multi-tier leaf-spine (Clos) fabric with top-of-rack switches connecting the Compute Nodes (CNs), and that all VNFs are transparent (bump-in-the-wire) Service Functions.

3.1. Black Box DC

In Black Box DC set-ups we assume that the compute domain is an autonomous domain with legacy (e.g., OpenStack) orchestration APIs. Due to the lack of direct forwarding control within the DC, no native L2 forwarding can be used to insert VNFs running in the DC into the forwarding graph. Instead, explicit tunnels (e.g., VxLAN) must be used, which need termination support within the deployed VNFs. Therefore, VNFs must be aware of the previous and the next hops of the forwarding graph to receive and forward packets accordingly.

3.1.1. Black Box DC with L3 tunnels

Figure 1 illustrates a set-up where an external VxLAN termination point in the SDN domain is used to forward packets into the first SF (VNF1) of the chain within the DC. VNF1, in turn, is configured to forward packets to the next SF (VNF2) in the chain and so forth with VNF2 and VNF3.

In this set-up VNFs must be capable of handling L3 tunnels (e.g., VxLAN) and must act as forwarders themselves. Additionally, an operational L3 underlay must be present so that VNFs can address each other.

Furthermore, the VNFs holding chain forwarding information could be untrusted user-plane functions from 3rd-party developers, which makes enforcement of proper forwarding problematic.
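The chain knowledge each VNF must hold in this set-up can be sketched as follows. This is a minimal Python illustration; the names, the endpoint identifier and the VNI value are assumptions for the example, not any real orchestrator API.

```python
# Sketch of the per-VNF tunnel state in the Black Box / L3 tunnel
# set-up.  All names and the VNI value are illustrative assumptions.

def chain_tunnel_config(chain, sdn_endpoint, vni=42):
    """For each VNF in the chain, derive the VxLAN peers it must
    terminate: traffic is received from the previous hop and
    forwarded to the next hop, with the external SDN termination
    point at both ends of the chain."""
    config = {}
    for i, vnf in enumerate(chain):
        config[vnf] = {
            "vni": vni,
            "recv_from": chain[i - 1] if i > 0 else sdn_endpoint,
            "send_to": chain[i + 1] if i < len(chain) - 1 else sdn_endpoint,
        }
    return config

cfg = chain_tunnel_config(["vnf1", "vnf2", "vnf3"], "sw1")
# Each VNF holds forwarding-graph state (its chain neighbours),
# which is exactly what makes enforcement problematic when the
# VNFs are untrusted.
```

Note that this state lives inside the VNFs themselves; neither the SDN domain nor the DC orchestrator can verify or enforce it.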

Additionally, compute-only orchestration might result in sub-optimal allocation of the VNFs with regard to the forwarding overlay; see, for example, the back-and-forth use of a core switch in Figure 1.

In [I-D.unify-nfvrg-challenges] we also pointed out that within a single Compute Node (CN) a similar VNF placement and overlay optimization problem may reappear in the context of network interface cards and CPU cores.


                              |                         A     A
                            +---+                       | S   |
                            |SW1|                       | D   |
                            +---+                       | N   | P
                           /     \                      V     | H
                          /       \                           | Y
                         |         |                    A     | S
                       +---+      +-+-+                 |     | I
                       |SW |      |SW |                 |     | C
                      ,+--++.._  _+-+-+                 |     | A
                   ,-"   _|,,`.""-..+                   | C   | L
                 _,,,--"" |    `.   |""-.._             | L   |
            +---+      +--++     `+-+-+    ""+---+      | O   |
            |SW |      |SW |      |SW |      |SW |      | U   |
            +---+    ,'+---+    ,'+---+    ,'+---+      | D   |
            |   | ,-"  |   | ,-"  |   | ,-"  |   |      |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    |     |
          |CN| |CN|  |CN| |CN|  |CN| |CN|  |CN| |CN|    |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    V     V
            |          |                          |
           +-+        +-+                        +-+          A
           |V|        |V|                        |V|          | L
           |N|        |N|                        |N|          | O
           |F|        |F|                        |F|          | G
           |1|        |3|                        |2|          | I
           +-+        +-+                        +-+          | C
+---+ --1>-+ |        | +--<3---------------<3---+ |          | A
|SW1|        +-2>-----------------------------2>---+          | L
+---+ <4--------------+                                       V

    <<=============================================>>
                   IP tunnels, e.g., VxLAN

Figure 1: Black Box Data Center with VNF Overlay

3.1.2. Black Box DC with external steering

Figure 2 illustrates a set-up where an external VxLAN termination point in the SDN domain is used to forward packets among all the SFs (VNF1-VNF3) of the chain within the DC. VNFs in the DC are configured to receive and send packets only to and from the SDN endpoint, hence they are not aware of the next-hop VNF address. Should any VNF need to be relocated, e.g., due to scale in/out as described in [I-D.zu-nfvrg-elasticity-vnf], the forwarding overlay can be transparently re-configured in the SDN domain.

Note, however, that traffic between the DC-internal SFs (VNF1, VNF2, VNF3) needs to exit and re-enter the DC through the external SDN switch. This is certainly sub-optimal and results in ping-pong traffic similar to the local and remote DC case discussed in [I-D.zu-nfvrg-elasticity-vnf].
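The steering state kept entirely at the external SDN switch can be sketched as follows. This is a minimal Python illustration; the rule fields and hop names are assumptions for the example, not a real controller API.

```python
# Sketch of the external steering rules in the Black Box set-up:
# all chain logic stays outside the DC.  Rule fields and names are
# illustrative assumptions.

def external_steering_rules(chain, dc_port):
    """Each hop of the chain becomes a separate pass through the
    DC's external port: ingress -> VNF1 -> SW1 -> VNF2 -> ..."""
    hops = ["ingress"] + list(chain) + ["egress"]
    return [
        {"match_from": src, "out_port": dc_port, "tunnel_to": dst}
        for src, dst in zip(hops, hops[1:])
    ]

rules = external_steering_rules(["vnf1", "vnf2", "vnf3"], dc_port="ext")
# Four rules, each crossing the DC boundary in both directions:
# this is the ping-pong traffic noted above.
```

The upside, visible in the sketch, is that relocating a VNF only changes a tunnel destination at the SDN switch; the VNFs themselves remain unaware of the chain.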


                              |                         A     A
                            +---+                       | S   |
                            |SW1|                       | D   |
                            +---+                       | N   | P
                           /     \                      V     | H
                          /       \                           | Y
                         |         |   ext port         A     | S
                       +---+      +-+-+                 |     | I
                       |SW |      |SW |                 |     | C
                      ,+--++.._  _+-+-+                 |     | A
                   ,-"   _|,,`.""-..+                   | C   | L
                 _,,,--"" |    `.   |""-.._             | L   |
            +---+      +--++     `+-+-+    ""+---+      | O   |
            |SW |      |SW |      |SW |      |SW |      | U   |
            +---+    ,'+---+    ,'+---+    ,'+---+      | D   |
            |   | ,-"  |   | ,-"  |   | ,-"  |   |      |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    |     |
          |CN| |CN|  |CN| |CN|  |CN| |CN|  |CN| |CN|    |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    V     V
            |          |                          |
           +-+        +-+                        +-+          A
           |V|        |V|                        |V|          | L
           |N|        |N|                        |N|          | O
           |F|        |F|                        |F|          | G
           |1|        |3|                        |2|          | I
           +-+        +-+                        +-+          | C
+---+ --1>-+ |        | |                        | |          | A
|SW1| <2-----+        | |                        | |          | L
|   | --3>---------------------------------------+ |          |
|   | <4-------------------------------------------+          |
|   | --5>------------+ |                                     |
+---+ <6----------------+                                     V

     <<=============================================>>
                     IP tunnels, e.g., VxLAN

Figure 2: Black Box Data Center with ext Overlay

3.2. White Box DC

Figure 3 illustrates a set-up where the internal network of the DC is exposed in full detail through an SDN Controller for steering control. We assume that native L2 forwarding can be applied all through the DC up to the VNFs' ports, hence IP tunneling and tunnel termination at the VNFs are not needed. Therefore, VNFs need not be aware of the forwarding graph but transparently receive and forward packets. However, the implication is that the network control of the DC must be handed over to an external forwarding controller (note that the SDN domain and the DC domain overlap in Figure 3). This most probably prohibits clear operational separation or separate ownership of the two domains.
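With the topology fully exposed, the external controller can program steering along the fabric itself, which can be sketched as follows. This is a minimal Python illustration; the topology, the attachment mapping and the entry format are assumptions for the example.

```python
# Sketch of native L2 steering with full topology exposure: the
# external controller programs the switches on the path between
# adjacent VNFs, so chain traffic never leaves the fabric.  The
# attachment mapping and entry format are illustrative assumptions.

def l2_steering(chain, attachment):
    """Return (switch, from_vnf, to_vnf) entries for the top-of-rack
    switch(es) each adjacent VNF pair is attached to; transit
    switches between different ToRs are omitted for brevity."""
    entries = []
    for a, b in zip(chain, chain[1:]):
        for sw in sorted({attachment[a], attachment[b]}):
            entries.append((sw, a, b))
    return entries

entries = l2_steering(["vnf1", "vnf2", "vnf3"],
                      {"vnf1": "tor1", "vnf2": "tor4", "vnf3": "tor2"})
# No entry involves the DC's external port: forwarding between
# adjacent VNFs stays inside the fabric, unlike in Figure 2.
```

The price of this efficiency is that the entries land on the DC's internal switches, i.e., the external controller must be trusted with the DC's forwarding state.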


                              |                     A         A
                            +---+                   | S       |
                            |SW1|                   | D       |
                            +---+                   | N       | P
                           /     \                  |         | H
                          /       \                 |         | Y
                         |         |   ext port     |   A     | S
                       +---+      +-+-+             |   |     | I
                       |SW |      |SW |             |   |     | C
                      ,+--++.._  _+-+-+             |   |     | A
                   ,-"   _|,,`.""-..+               |   | C   | L
                 _,,,--"" |    `.   |""-.._         |   | L   |
            +---+      +--++     `+-+-+    ""+---+  |   | O   |
            |SW |      |SW |      |SW |      |SW |  |   | U   |
            +---+    ,'+---+    ,'+---+    ,'+---+  V   | D   |
            |   | ,-"  |   | ,-"  |   | ,-"  |   |      |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    |     |
          |CN| |CN|  |CN| |CN|  |CN| |CN|  |CN| |CN|    |     |
          +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+    V     V
            |          |                          |
           +-+        +-+                        +-+          A
           |V|        |V|                        |V|          | L
           |N|        |N|                        |N|          | O
           |F|        |F|                        |F|          | G
           |1|        |3|                        |2|          | I
           +-+        +-+                        +-+          | C
+---+ --1>-+ |        | +--<3---------------<3---+ |          | A
|SW1|        +-2>-----------------------------2>---+          | L
+---+ <4--------------+                                       V

    <<=============================================>>
                      L2 overlay

Figure 3: White Box Data Center with L2 Overlay

4. Recursive approach

We argued in [I-D.unify-nfvrg-challenges] for a joint software and network programming interface. Consider that such a joint software and network abstraction (virtualization) exists around the DC with a corresponding resource programming interface. A joint software and network programming interface could include VNF requests and the definition of the corresponding network overlay. Note that such a programming interface is similar to the top-level service definition, for example, by means of a VNF Forwarding Graph.

Figure 4 illustrates a joint domain virtualization and programming set-up. VNF placement and the corresponding traffic steering can be defined in an abstract way, which is then orchestrated, split and handed over to the next level in the hierarchy for further orchestration. Such a set-up allows clear operational separation, arbitrary domain virtualization (e.g., topology details can be omitted) and constraint-based optimization of domain-wide resources.
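The split step of the recursion, as in Figure 4, can be sketched as follows. This is a minimal Python illustration; the domain names and the placement mapping are assumptions for the example, and the policy of keeping inter-domain edges at the parent is one possible choice.

```python
# Sketch of the recursive VNF FG decomposition of Figure 4: the
# parent orchestrator keeps the inter-domain edges and hands each
# single-domain subgraph (e.g., VNF FG2) to that domain's own
# orchestrator.  Domain names and the placement mapping are
# illustrative assumptions.

def split_fg(edges, placement):
    """Partition forwarding-graph edges by the domain that can fully
    resolve them; edges crossing domains stay with the parent."""
    subgraphs = {}
    for src, dst in edges:
        dom = placement[src] if placement[src] == placement[dst] else "parent"
        subgraphs.setdefault(dom, []).append((src, dst))
    return subgraphs

fg = [("in", "vnf1"), ("vnf1", "vnf2"), ("vnf2", "vnf3"), ("vnf3", "out")]
place = {"in": "sdn", "out": "sdn", "vnf1": "dc", "vnf2": "dc", "vnf3": "dc"}
sub = split_fg(fg, place)
# sub["dc"] is handed down for further orchestration inside the DC;
# sub["parent"] holds the edges the top level must resolve itself.
```

Because the handed-down subgraph has the same form as the original request, the same orchestration logic can be applied at every level of the hierarchy, which is what makes the approach recursive.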

+-------------------------------------------------------+ A
| +----------------------------------------------+  A   | |
| | SDN Domain            |                      |  |   | |
| |                     +---+                    |  |S  | |
| |                     |SW1|                    |  |D  | |O
| |                     +---+                    |  |N  | |V
| |                    /     \                   |  |   | |E
| +-------------------+-------+------------------+  V   | |R
|                    |         |                        | |A
| +----------------------------------------------+  A   | |R
| | DC Domain                                    |  |   | |C
| | Joint         +---+      +-+-+               |  |   | |H
| | Abstraction   |SW |      |SW |               |  |D  | |I
| | Softw +      ,+--++.._  _+-+-+               |  |C  | |N
| | Network   ,-"   _|,,`.""-..+                 |  |   | |G
| |         _,,,--"" |    `.   |""-.._           |  |V  | |
| |    +---+      +--++     `+-+-+    ""+---+    |  |I  | |V
| |    |SW |      |SW |      |SW |      |SW |    |  |R  | |I
| |    +---+    ,'+---+    ,'+---+    ,'+---+    |  |T  | |R
| |    |   | ,-"  |   | ,-"  |   | ,-"  |   |    |  |   | |T
| |  +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+  |  |   | |
| |  |CN| |CN|  |CN| |CN|  |CN| |CN|  |CN| |CN|  |  |   | |
| |  +--+ +--+  +--+ +--+  +--+ +--+  +--+ +--+  |  |   | |
| |                                              |  |   | |
| +----------------------------------------------+  V   | |
+-------------------------------------------------------+ V

        +--------------------------------------+
        |                    DC Domain         |
        |              +---------------------+ |
        |              |  +-+    +-+    +-+  | |
        |              |  |V|    |V|    |V|  | |
        |              |  |N|    |N|    |N|  | |
        | SDN Domain   |  |F|    |F|    |F|  | |
        | +---------+  |  |1|    |2|    |3|  | |
        | |         |  |  +-+    +-+    +-+  | |
        | |  +---+--+--+>-+ |    | |    | |  | |
        | |  |SW1|  |  |    +-->-+ +-->-+ |  | |
        | |  +---+--+<-+------------------+  | |
        | +---------+  +---------------------+ |
        |                                      |
        |<<=========>><<=====================>>|
        |   VNF FG1            VNF FG2         |
        +--------------------------------------+

         <<==================================>>
              VNF Forwarding Graph overall

Figure 4: Recursive Domain Virtualization and Joint VNF FG programming

5. IANA Considerations

This memo includes no request to IANA.

6. Security Considerations

TBD

7. Acknowledgement

The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 619609 - the UNIFY project. The views expressed here are those of the authors only. The European Commission is not liable for any use that may be made of the information in this document.

We would like to thank in particular David Jocha and Janos Elek from Ericsson for the useful discussions.

8. Informative References

[ETSI-NFV-Arch] ETSI, "Architectural Framework v1.1.1", Oct 2013.
[ETSI-NFV-MANO] ETSI, "Network Function Virtualization (NFV) Management and Orchestration V0.6.1 (draft)", Jul. 2014.
[I-D.ietf-sfc-dc-use-cases] Surendra, S., Tufail, M., Majee, S., Captari, C. and S. Homma, "Service Function Chaining Use Cases In Data Centers", Internet-Draft draft-ietf-sfc-dc-use-cases-02, January 2015.
[I-D.unify-nfvrg-challenges] Szabo, R., Csaszar, A., Pentikousis, K., Kind, M. and D. Daino, "Unifying Carrier and Cloud Networks: Problem Statement and Challenges", Internet-Draft draft-unify-nfvrg-challenges-00, October 2014.
[I-D.zu-nfvrg-elasticity-vnf] Qiang, Z. and R. Szabo, "Elasticity VNF", Internet-Draft draft-zu-nfvrg-elasticity-vnf-01, March 2015.

Authors' Addresses

Robert Szabo Ericsson Research, Hungary Irinyi Jozsef u. 4-20 Budapest, 1117 Hungary EMail: robert.szabo@ericsson.com URI: http://www.ericsson.com/
Zu Qiang Ericsson 8400, boul. Decarie Ville Mont-Royal, QC 8400 Canada EMail: zu.qiang@ericsson.com URI: http://www.ericsson.com/
Mario Kind Deutsche Telekom AG Winterfeldtstr. 21 10781 Berlin, Germany EMail: mario.kind@telekom.de