SDNRG E. Haleplidis, Ed.
Internet-Draft University of Patras
Intended status: Informational K. Pentikousis, Ed.
Expires: April 25, 2015 EICT
S. Denazis
University of Patras
J. Hadi Salim
Mojatatu Networks
D. Meyer
Brocade
O. Koufopavlou
University of Patras
October 22, 2014

SDN Layers and Architecture Terminology
draft-irtf-sdnrg-layer-terminology-04

Abstract

Software-Defined Networking (SDN) refers to a new approach for network programmability, that is, the capacity to initialize, control, change, and manage network behavior dynamically via open interfaces. SDN emphasizes the role of software in running networks through the introduction of an abstraction for the data forwarding plane and, by doing so, separates it from the control plane. This separation allows faster innovation cycles at both planes, as experience has already shown. However, there is increasing confusion as to what exactly SDN is, what the layer structure is in an SDN architecture, and how layers interface with each other. This document, a product of the IRTF Software-Defined Networking Research Group (SDNRG), addresses these questions and provides a concise reference for the SDN research community based on relevant peer-reviewed literature, the RFC series, and relevant documents by other standards organizations.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on April 25, 2015.

Copyright Notice

Copyright (c) 2014 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.


Table of Contents

   1. Introduction
   2. Terminology
   3. SDN Layers and Architecture
      3.1. Overview
      3.2. Network Devices
      3.3. Control Plane
      3.4. Management Plane
      3.5. Control and Management Plane Discussion
         3.5.1. Timescale
         3.5.2. Persistence
         3.5.3. Locality
         3.5.4. CAP Theorem Insights
      3.6. Network Services Abstraction Layer
      3.7. Application Plane
   4. SDN Model View
      4.1. ForCES
      4.2. NETCONF/YANG
      4.3. OpenFlow
      4.4. Interface to the Routing System
      4.5. SNMP
      4.6. PCEP
      4.7. BFD
   5. Summary
   6. Contributors
   7. Acknowledgements
   8. IANA Considerations
   9. Security Considerations
   10. Informative References

1. Introduction

Software-Defined Networking (SDN) is a term of the programmable networks paradigm [PNSurvey99][OF08]. In short, SDN refers to the ability of software applications to program individual network devices dynamically and therefore control the behavior of the network as a whole [NV09]. Boucadair and Jacquenet [RFC7149] point out that SDN is a set of techniques used to facilitate the design, delivery and operation of network services in a deterministic, dynamic, and scalable manner.

A key element in SDN is the introduction of an abstraction between the (traditional) forwarding and control planes in order to separate them and provide applications with the means necessary to programmatically control the network. The goal is to leverage this separation, and the associated programmability, in order to reduce complexity and enable faster innovation at both planes [A4D05].

The historical evolution of the programmable networks R&D area is reviewed in detail in [SDNHistory][SDNSurvey], starting with efforts dating back to the 1980s. As Feamster et al. [SDNHistory] document, many of the ideas, concepts and concerns are applicable to the latest R&D in SDN, and SDN standardization we may add, and have been under extensive investigation and discussion in the research community for quite some time. For example, Rooney et al. [Tempest] discuss how to allow third-party access to the network without jeopardizing network integrity, or how to accommodate legacy networking solutions in their (then new) programmable environment. Further, the concept of separating the control and forwarding planes, which is prominent in SDN, has been extensively discussed even prior to 1998 [Tempest][P1520], in SS7 networks [ITUSS7], Ipsilon Flow Switching [RFC1953][RFC2297] and ATM [ITUATM].

SDN research often focuses on varying aspects of programmability, and we are frequently confronted with conflicting points of view regarding what exactly SDN is. For instance, we find that for various reasons (e.g. work focusing on one domain and therefore not necessarily applicable as-is to other domains), certain well-accepted definitions do not correlate well with each other. For example, both OpenFlow [OpenFlow] and NETCONF [RFC6241] have been characterized as SDN interfaces, but they refer to control and management respectively.

This motivates us to consolidate the definitions of SDN in the literature and correlate them with earlier work at the IETF and in the research community. Of particular interest is, for example, determining which layers comprise the SDN architecture and which interfaces, with their corresponding attributes, are best suited for use between them. As such, the aim of this document is not to standardize any particular layer or interface but rather to provide a concise reference that reflects current approaches regarding the SDN layers architecture. We expect that this document will be useful to upcoming work in SDNRG as well as to future discussions within the SDN community as a whole.

This document addresses the work item in the SDNRG charter entitled "Survey of SDN approaches and Taxonomies", fostering better understanding of prominent SDN technologies in a technology-impartial and business-agnostic manner, but does not constitute a new IETF standard. It is meant as a common base for further discussion. As such, we do not make any value statements nor discuss the applicability of any of the frameworks examined in this document for any particular purpose. Instead, we document their characteristics and attributes and classify them, thus providing a taxonomy. This document does not intend to provide an exhaustive list of SDN research issues; interested readers should consider reviewing [SLTSDN] and [SDNACS]. In particular, Nunes et al. [SLTSDN] provide an overview of SDN-related research topics, e.g. control partitioning, which is related to the CAP theorem discussed in Section 3.5.4.

This document has been extensively reviewed, discussed, and commented on by the vast majority of SDNRG members, a community which certainly exceeds 100 individuals. It is the consensus of SDNRG that this document should be published in the IRTF Stream of the RFC Series [RFC5743].

The remainder of this document is organized as follows. Section 2 explains the terminology used in this document. Section 3 introduces a high-level overview of current SDN architecture abstractions. Finally, Section 4 discusses how the SDN Layer Architecture relates to prominent SDN-enabling technologies.

2. Terminology

This document uses the following terms:

Software-Defined Networking (SDN) - A programmable networks approach that supports the separation of control and forwarding planes via standardized interfaces.
Resource - A physical or virtual component available within a system. Resources can be very simple or fine-grained, e.g. a port or a queue, or complex, comprised of multiple resources, e.g. a network device.
Network Device - A device that performs one or more network operations related to packet manipulation and forwarding. This reference model makes no distinction whether a network device is physical or virtual. A device can also be considered as a container for resources and can be a resource in itself.
Interface - A point of interaction between two entities. When the entities are placed at different locations, the interface is usually implemented through a network protocol. If the entities are collocated in the same physical location, the interface can be implemented using a software application programming interface (API), inter-process communication (IPC), or a network protocol.
Application (App) - An application in the context of SDN is a piece of software that utilizes underlying services to perform a function. Application operation can be parametrized, for example by passing certain arguments at call time, but it is meant to be a standalone piece of software: an App does not offer any interfaces to other applications or services.
Service - A piece of software that performs one or more functions and provides one or more APIs to applications or other services of the same or different layers to make use of said functions and returns one or more results. Services can be combined with other services, or called in a certain serialized manner, to create a new service.
Forwarding Plane (FP) - The collection of resources across all network devices responsible for forwarding traffic.
Operational Plane (OP) - The collection of resources responsible for managing the overall operation of individual network devices.
Control Plane (CP) - The collection of functions responsible for controlling one or more network devices. CP instructs network devices with respect to how to process and forward packets. The control plane interacts primarily with the forwarding plane and to a lesser extent with the operational plane.
Management Plane (MP) - The collection of functions responsible for monitoring, configuring and maintaining one or more network devices or parts of network devices. The management plane is mostly related to the operational plane and less so to the forwarding plane.
Application Plane - The collection of applications and services which program network behavior.
Device and resource Abstraction Layer (DAL) - The device's resource abstraction layer based on one or more models. If it is a physical device it may be referred to as the Hardware Abstraction Layer (HAL). DAL provides a uniform point of reference for the device's forwarding and operational plane resources.
Control Abstraction Layer (CAL) - The control plane's abstraction layer. CAL provides access to the control plane southbound interface.
Management Abstraction Layer (MAL) - The management plane's abstraction layer. MAL provides access to the management plane southbound interface.
Network Services Abstraction Layer (NSAL) - Provides service abstractions that can be used by applications and services.

3. SDN Layers and Architecture

Figure 1 summarizes the SDN architecture abstractions in the form of a detailed high-level schematic. Note that, in a particular implementation, planes can be collocated with other planes or can be physically separated, as we discuss below.

SDN is based on the concept of separation between a controlled entity and a controller entity. The controller manipulates the controlled entity via an Interface. Interfaces, when local, are mostly API calls through some library or system call. However, such interfaces may be extended via some protocol definition, which may use local inter-process communication (IPC) or a protocol that could also act remotely; the protocol may be defined as an open standard or in a proprietary manner.
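
As a purely illustrative aid, and not part of any standardized interface, the following minimal Python sketch shows how the same abstract operation on a controlled entity could be exercised either through a local API call or through a message carried over a toy protocol; all class, method and field names are assumptions made for this example.

   # Hypothetical sketch: one abstract controller-to-device operation,
   # realized either as a local API call or as a protocol message.
   import json

   class ControlledEntity:
       """A controlled entity exposing a single 'configure' operation."""
       def __init__(self):
           self.state = {}

       def configure(self, key, value):        # local API realization
           self.state[key] = value
           return {"status": "ok", "key": key}

   class ProtocolChannel:
       """Toy 'protocol': carries the same operation as a JSON message."""
       def __init__(self, entity):
           self.entity = entity

       def send(self, message):
           request = json.loads(message)       # decode the request
           result = self.entity.configure(request["key"], request["value"])
           return json.dumps(result)           # encode the response

   device = ControlledEntity()
   # Controller collocated with the device: plain API call.
   print(device.configure("port1/admin-state", "up"))
   # Controller placed remotely: the same operation carried in a message.
   channel = ProtocolChannel(device)
   print(channel.send(json.dumps({"key": "port1/admin-state", "value": "up"})))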

Day [PiNA] explores the use of IPC as the mainstay for the definition of recursive network architectures with varying degrees of scope and range of operation. RINA [RINA] outlines a recursive network architecture based on IPC which capitalizes on repeating patterns and structures. This document does not propose a new architecture--we simply document previous work through a taxonomy. Although recursion is out of scope for this work, Figure 1 illustrates a hierarchical model in which layers can be stacked on top of each other and employed recursively as needed.

              o--------------------------------o
              |                                |
              | +-------------+   +----------+ |
              | | Application |   |  Service | |
              | +-------------+   +----------+ |
              |       Application Plane        |
              o---------------Y----------------o
                              |              
*-----------------------------Y---------------------------------*
|           Network Services Abstraction Layer (NSAL)           |
*------Y------------------------------------------------Y-------*
       |                                                |
       |               Service Interface                |
       |                                                |
o------Y------------------o       o---------------------Y------o
|      |    Control Plane |       | Management Plane    |      |
| +----Y----+   +-----+   |       |  +-----+       +----Y----+ |
| | Service |   | App |   |       |  | App |       | Service | |
| +----Y----+   +--Y--+   |       |  +--Y--+       +----Y----+ |
|      |           |      |       |     |               |      |
| *----Y-----------Y----* |       | *---Y---------------Y----* |
| | Control Abstraction | |       | | Management Abstraction | |
| |     Layer (CAL)     | |       | |      Layer (MAL)       | |
| *----------Y----------* |       | *----------Y-------------* |
|            |            |       |            |               |
o------------|------------o       o------------|---------------o
             |                                 |
             | CP                              | MP
             | Southbound                      | Southbound 
             | Interface                       | Interface
             |                                 |
*------------Y---------------------------------Y----------------*
|         Device and resource Abstraction Layer (DAL)           |
*------------Y---------------------------------Y----------------*
|            |                                 |                |
|    o-------Y----------o   +-----+   o--------Y----------o     |
|    | Forwarding Plane |   | App |   | Operational Plane |     |
|    o------------------o   +-----+   o-------------------o     |
|                       Network Device                          |
+---------------------------------------------------------------+

Figure 1: SDN Layer Architecture

3.1. Overview

This document follows a network device centric approach: Control mostly refers to the device packet handling capability, while management typically refers to the overall device operation aspects. We view a network device as a complex resource that contains, and is part of, multiple resources, similar to [DIOPR]. Resources can be simple, single components of a network device, for example a port or a queue of the device, and can also be aggregated into complex resources, for example a network card or a complete network device.
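
A minimal sketch of this simple-versus-complex resource relationship is given below; the Python class names and attributes are hypothetical and serve only to illustrate composition, not to define a resource model.

   # Hypothetical sketch of the resource model: simple resources (a port,
   # a queue) aggregated into a complex resource (a network device).
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class Port:                  # a simple resource
       name: str
       admin_state: str = "up"

   @dataclass
   class Queue:                 # another simple resource
       port: str
       depth_packets: int = 1000

   @dataclass
   class NetworkDevice:         # a complex resource aggregating other resources
       name: str
       ports: List[Port] = field(default_factory=list)
       queues: List[Queue] = field(default_factory=list)

   device = NetworkDevice(name="nd1",
                          ports=[Port("eth0"), Port("eth1")],
                          queues=[Queue(port="eth0"), Queue(port="eth1")])
   print(f"{device.name} aggregates {len(device.ports)} ports and "
         f"{len(device.queues)} queues")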

The reader should keep in mind throughout this document that we make no distinction between "physical" and "virtual" resources or "hardware" and "software" realizations, as we do not delve into implementation or performance aspects. In other words, a resource can be implemented fully in hardware, fully in software, or any hybrid combination in between. Further, we do not distinguish on whether a resource is implemented as an overlay or as a part/component of some other device. In general, network device software can run on so-called "bare metal" or on a virtualized substrate. Finally, this document does not discuss how resources are allocated, orchestrated, and released. Indeed, orchestration is out of scope for this document.

SDN spans multiple planes as illustrated in Figure 1. Starting from the bottom part of the figure and moving towards the upper part, we identify the following planes:

Forwarding Plane - Responsible for handling packets in the datapath based on the instructions received from the control plane. Actions of the forwarding plane include, but are not limited to, forwarding, dropping and changing packets. The forwarding plane is usually the termination point for control plane services and applications. The forwarding plane can contain forwarding resources such as classifiers. The forwarding plane is also widely referred to as the "data plane" or the "data path".
Operational Plane - Responsible for managing the operational state of the network device, e.g. whether the device is active or inactive, the number of ports available, the status of each port, and so on. The operational plane is usually the termination point for management plane services and applications. The operational plane relates to network device resources such as ports, memory, and so on. We note that some participants of the IRTF SDNRG have a different opinion in regards to the definition of the operational plane. That is, one can argue that the operational plane does not constitute a "plane" per se, but is in practice an amalgamation of functions on the forwarding plane. For others, however, a "plane" allows one to distinguish between different areas of operations, and therefore the operational plane should be included as a "plane" in Figure 1. We have adopted the latter view in this document.
Control Plane - Responsible for taking decisions on how packets should be forwarded by one or more network devices and pushing such decisions down to the network devices for execution. The control plane usually focuses mostly on the forwarding plane and less on the operational plane of the device. The control plane may be interested in operational plane information which could include, for instance, the current state of a particular port or its capabilities. The control plane's main job is to fine-tune the forwarding tables that reside in the forwarding plane, based on the network topology or external service requests.
Management Plane - Responsible for monitoring, configuring and maintaining network devices, e.g. taking decisions regarding the state of a network device. The management plane usually focuses mostly on the operational plane of the device and less on the forwarding plane. The management plane may be used to configure the forwarding plane, but it does so infrequently and through a more wholesale approach than the control plane. For instance, the management plane may set up all or part of the forwarding rules at once, although such action would be expected to be taken sparingly.
Application Plane - The plane where applications and services that define network behavior reside. Applications that directly (or primarily) support the operation of the forwarding plane (such as routing processes within the control plane) are not considered part of the application plane. Note that applications may be implemented in a modular and distributed fashion and, therefore, can often span multiple planes in Figure 1.

[RFC7276] has defined the data, control and management plane in terms of Operations, Administration, and Maintenance (OAM). This document attempts to broaden the terms defined in [RFC7276] in order to reflect all aspects of an SDN architecture.

All planes mentioned above are connected via interfaces (as indicated with "Y" in Figure 1). An interface may take multiple roles depending on whether the connected planes reside on the same (physical or virtual) device. If the respective planes are designed so that they do not have to reside in the same device, then the interface can only take the form of a protocol. If the planes are co-located on the same device, then the interface could be implemented via an open/proprietary protocol, an open/proprietary software inter-process communication API, or operating system kernel system calls.

Applications, i.e. software programs that perform specific computations that consume services without providing access to other applications, can be implemented natively inside a plane or can span multiple planes. For instance, applications or services can span both the control and management plane and, thus, be able to use both the Control Plane Southbound Interface (CPSI) and Management Plane Southbound Interface (MPSI), although this is only implicitly illustrated in Figure 1. An example of such a case would be an application that uses both [OpenFlow] and [OF-CONFIG].

Services, i.e. software programs that provide APIs to other applications or services, can also be natively implemented in specific planes. Services that span multiple planes belong to the application plane as well.

While not shown explicitly in Figure 1, services, applications and entire planes can be placed in a recursive manner, thus providing overlay semantics to the model. For example, application plane services can provide, through the NSAL, services to other applications or services. Additional examples include virtual resources that are realized on top of physical resources, and hierarchical control plane controllers [KANDOO].

Note that the focus in this document is, of course, on the north/south communication between entities in different planes. But this, clearly, does not exclude entity communication within any one plane.

It must be noted, however, that in Figure 1 we present an abstract view of the various planes, which is devoid of implementation details. Many implementations in the past have opted for placing the management plane on top of the control plane. This can be interpreted as having the control plane acting as a service to the management plane. Further, in many networks, especially in Internet routers and Ethernet switches, the control plane has usually been implemented as tightly coupled with the network device. Taken as a whole, the control plane has been distributed network-wide. On the other hand, the management plane has been traditionally centralized and has been responsible for managing the control plane and the devices. However, with the adoption of SDN principles, this distinction is no longer so clear-cut.

Additionally, this document considers four abstraction layers:

The Device and resource Abstraction Layer (DAL) abstracts the device's forwarding and operational plane resources to the control and management plane. Variations of DAL may abstract both planes or either of the two and may abstract any plane of the device to either the control or management plane.
The Control Abstraction Layer (CAL) abstracts the CP southbound interface and the DAL from the applications and services of the control plane.
The Management Abstraction Layer (MAL) abstracts the MP southbound interface and the DAL from the applications and services of the management plane.
The Network Services Abstraction Layer (NSAL) provides service abstractions for use by applications and other services.

At the time of this writing, SDN-related activities have begun in other SDOs. For example, at the ITU, work on architectural [ITUSG13] and signaling requirements and protocols [ITUSG11] has commenced, but the respective study groups have yet to publish their documents, with the exception of [ITUY3300]. The views presented in [ITUY3300] as well as in [ONFArch] are well aligned with this document.

3.2. Network Devices

A Network Device is an entity that receives packets on its ports and performs one or more network functions on them. For example, the network device could forward a received packet, drop it, alter the packet header (or payload) and forward the packet, and so on. A Network Device is an aggregation of multiple resources such as ports, CPU, memory and queues. Resources are either simple or can be aggregated to form complex resources that can be viewed as one resource. The Network Device is in itself a complex resource. Examples of Network Devices include switches and routers. Additional examples include network elements that may operate at a layer above IP, such as firewalls, load balancers and video transcoders; or below IP, such as Layer 2 switches, optical or microwave network elements.

Network devices can be implemented in hardware or software and can be either physical or virtual. As mentioned before, this document makes no such distinction. Each network device has a presence in a Forwarding Plane and an Operational Plane.

The Forwarding Plane, commonly referred to as the "data path", is responsible for handling and forwarding packets. The Forwarding Plane provides switching, routing, packet transformation and filtering functions. Resources of the forwarding plane include but are not limited to filters, meters, markers and classifiers.

The Operational Plane is responsible for the operational state of the network device, for instance, with respect to status of network ports and interfaces. Operational plane resources include, but are not limited to, memory, CPU, ports, interfaces and queues.

The Forwarding and Operational Planes are exposed via the Device and resource Abstraction Layer (DAL), which may be expressed by one or more abstraction models. Examples of Forwarding Plane abstraction models are the ForCES model [RFC5812], OpenFlow [OpenFlow], the YANG model [RFC6020], and SNMP MIBs [RFC3418]. Examples of Operational Plane abstraction models include the ForCES model [RFC5812], the YANG model [RFC6020], and SNMP MIBs [RFC3418].

Note that applications can also reside in a network device. Examples of such applications include event monitoring, and handling (offloading) of topology discovery or ARP [RFC0826] in the device itself, instead of forwarding such traffic to the control plane.

3.3. Control Plane

The control plane is usually distributed and is responsible mainly for the configuration of the forwarding plane using a Control Plane Southbound Interface (CPSI) with DAL as a point of reference. CP is responsible for instructing FP about how to handle network packets.

Communication between control plane entities, colloquially referred to as the "east-west" interface, is usually implemented through gateway protocols such as BGP [RFC4271] or other protocols such as PCEP [RFC5440]. The corresponding protocol messages are usually exchanged in-band and subsequently redirected by the forwarding plane to the control plane for further processing. Examples in this category include [RCP], [SoftRouter] and [RouteFlow].

Control Plane functionalities usually include:

o  Topology discovery and maintenance

o  Packet route selection and instantiation

o  Path failover mechanisms

The CPSI is usually defined with the following characteristics:

o  It is a time-critical interface, requiring low latency and sometimes high bandwidth, in order to perform many operations in a short period of time.

o  It is oriented towards wire efficiency and device representation rather than human readability.

Examples include fast and high-frequency flow or table updates, high throughput, and robustness for packet handling and events.

The CPSI can be implemented using a protocol, an API or even inter-process communication. If the Control Plane and the Network Device are not collocated, then this interface is certainly a protocol. Examples of CPSIs are ForCES [RFC5810] and the OpenFlow protocol [OpenFlow].

The Control Abstraction Layer (CAL) provides access for control plane applications and services to the various CPSIs. The Control Plane may support more than one CPSI.

Control applications can use CAL to control a network device without providing any service to upper layers. Examples include applications that perform control functions, such as OSPF, IS-IS, and BGP.

Control Plane service examples include a virtual private LAN service, service tunnels, topology services, etc.

3.4. Management Plane

The Management Plane is usually centralized and aims to ensure that the network as a whole is running optimally by communicating with the network devices' Operational Plane using a Management Plane Southbound Interface (MPSI) with DAL as a point of reference.

Management plane functionalities are typically initiated based on an overall network view and traditionally have been human-centric. Lately, however, algorithms are replacing most human intervention. Management plane functionalities [FCAPS] typically include fault, configuration, accounting, performance and security management.

In addition, management plane functionalities may also include entities such as orchestrators, Virtualised Network Function (VNF) Managers and Virtualised Infrastructure Managers, as described in [NFVArch]. Such entities can use management interfaces to operational plane resources to request and provision resources for virtual functions, as well as instruct the instantiation of virtual forwarding functions on top of physical forwarding functions. The possibility of a common abstraction model for both SDN and NFV is explored in [SDNNFV]. Note, however, that these are only examples of applications and services in the management plane and not formal definitions of entities in this document. As has been noted above, orchestration, and therefore the definition of any associated entities, is out of scope for this document.

The MPSI, in contrast to the CPSI, is usually not a time-critical interface and does not share the CPSI requirements.

The MPSI is typically closer to human interaction than the CPSI (cf. [RFC3535]) and, therefore, is usually oriented towards usability rather than performance and wire efficiency, with message formats that favor human readability.

As an example of usability versus performance, we refer to the consensus of the 2002 IAB Workshop [RFC3535] that the key requirement for a network management technology is ease of use rather than performance. As per [RFC6632], textual configuration files should be able to contain international characters, human-readable strings should utilize UTF-8, and protocol elements should be in case-insensitive ASCII, even though such representations require more processing capability to parse than compact binary encodings.

The MPSI can range from a protocol to an API or even inter-process communication. If the Management Plane is not embedded in the network device, the MPSI is certainly a protocol. Examples of MPSIs are ForCES [RFC5810], NETCONF [RFC6241], IPFIX [RFC7011], SYSLOG [RFC5424], OVSDB [RFC7047] and SNMP [RFC3411].

The Management Abstraction Layer (MAL) provides access for management plane applications and services to the various MPSIs. The Management Plane may support more than one MPSI.

Management Applications can use MAL to manage the network device without providing any service to upper layers. Examples of management applications include network monitoring, fault detection and recovery applications.

Management Plane Services provide access to other services or applications above the Management Plane.

3.5. Control and Management Plane Discussion

The definition of a clear distinction between "control" and "management" in the context of SDN received significant community attention during the preparation of this document. We observed that the role of the management plane had earlier been largely ignored or specified as out of scope for the SDN ecosystem. In the remainder of this subsection we summarize the characteristics that differentiate the two planes in order to provide a clear understanding of the mechanics, capabilities and needs of each respective interface.

3.5.1. Timescale

A point has been raised regarding the reference timescales for the control and management planes, that is, how fast the respective plane is required to react to, or needs to manipulate, the forwarding or operational plane of the device. In general, the control plane needs to send updates "often", which translates roughly to a range of milliseconds; this requires high-bandwidth and low-latency links. In contrast, the management plane reacts generally on longer time frames, i.e. minutes, hours or even days, and thus wire-efficiency is not always a critical concern. A good example of this is the case of changing the configuration state of the device.

3.5.2. Persistence

Another distinction between the control and management planes relates to state persistence. A state is considered ephemeral if it has a very limited lifespan. A good example is the state involved in determining routing, which is usually associated with the control plane. On the other hand, a persistent state has an extended lifespan, which may range from hours to days and months, and is usually associated with the management plane. Persistent state is also usually associated with a data store that holds that state.

3.5.3. Locality

As mentioned earlier, the control plane has traditionally been executed locally on the network device and is distributed in nature, whilst the management plane is usually executed in a centralized manner, remotely from the device. However, with the advent of SDN, centralizing, or "logically centralizing", the controller tends to muddle the locality-based distinction between the control and management planes.

3.5.4. CAP Theorem Insights

The CAP theorem views a distributed computing system as composed of multiple computational resources (i.e., CPU, memory, storage) that are connected via a communications network and together perform a task. The theorem, or conjecture by some, identifies three characteristics of distributed systems that are universally desirable:

o  Consistency: the system responds to a query with the same, most recent, information no matter which node receives the request.

o  Availability: the system always responds to a request, although the response may not reflect the most recent information.

o  Partition tolerance: the system continues to operate even when the communications network connecting its components loses messages or fails.

In 2000 Eric Brewer [CAPBR] conjectured that a distributed system can satisfy any two of these guarantees at the same time, but not all three. This conjecture was later proven by Gilbert and Lynch [CAPGL] and is now usually referred to as the CAP theorem [CAPFN].

Forwarding a packet through a network correctly is a computational problem. One of the major abstractions that SDN posits is that all network elements are computational resources that perform the simple computational task of inspecting fields in an incoming packet and deciding how to forward it. Since the task of forwarding a packet from network ingress to network egress is obviously carried out by a large number of forwarding elements, the network of forwarding devices is a distributed computational system. Hence, the CAP theorem applies to forwarding of packets.

In the context of the CAP theorem, if one considers partition tolerance of paramount importance, traditional control plane operations are usually local and fast (available), while management plane operations are usually centralized (consistent) and may be slow.

The CAP theorem also provides insights into SDN architectures. For example a centralized SDN controller acts as a consistent global database, and specific SDN mechanisms ensure that a packet entering the network is handled consistently by all SDN switches. The issue of tolerance to loss of connectivity to the controller is not addressed by the basic SDN model. When an SDN switch cannot reach its controller, the flow will be unavailable until the connection is restored. The use of multiple non-collocated SDN controllers has been proposed (e.g., by configuring the SDN switch with a list of controllers); this may improve partition tolerance, but at the cost of loss of absolute consistency. Panda et al. [CAPFN] provide a first exploration of how the CAP theorem applies to SDN.
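
The following toy sketch illustrates this trade-off from the switch's point of view under stated assumptions; it does not describe the behavior of any particular controller or switch implementation.

   # Toy illustration of the CAP trade-off for a switch that loses its
   # controller: favor availability (act on cached, possibly stale rules)
   # or favor consistency (refuse to handle new flows until reconnected).
   class ToySwitch:
       def __init__(self, prefer_availability):
           self.prefer_availability = prefer_availability
           self.cached_rules = {"10.0.0.0/24": "port1"}   # last known state
           self.controller_reachable = False              # network partition

       def handle_new_flow(self, prefix):
           if self.controller_reachable:
               return "ask controller for " + prefix      # consistent answer
           if self.prefer_availability:
               # Available, but possibly inconsistent with the global view.
               return self.cached_rules.get(prefix, "flood")
           # Consistent (never acts on stale state) but unavailable.
           return "hold until the controller connection is restored"

   print(ToySwitch(prefer_availability=True).handle_new_flow("10.0.0.0/24"))
   print(ToySwitch(prefer_availability=False).handle_new_flow("10.0.0.0/24"))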

3.6. Network Services Abstraction Layer

The Network Services Abstraction Layer (NSAL) provides access from services of the control, management and application planes to other services and applications. We note that the term SAL is overloaded, as it is often used in several contexts ranging from system design to service-oriented architectures; we therefore explicitly add "Network" to the title of this layer to emphasize that it relates to Figure 1, and we map it accordingly in Section 4 to prominent SDN approaches.

Service Interfaces can take many forms pertaining to their specific requirements. Examples of service interfaces include, but are not limited to, RESTful APIs, open protocols such as NETCONF, inter-process communication, CORBA [CORBA] interfaces, and so on. The two leading approaches for service interfaces are RESTful interfaces and RPC interfaces. Both follow a client-server architecture and use XML or JSON to pass messages, but each has slightly different characteristics.

RESTful interfaces, designed according to the representational state transfer design paradigm [REST], have the following characteristics:

o  Resource identification: individual resources are identified in requests, for example using URIs.

o  Manipulation of resources through representations: clients manipulate resources via representations of them, e.g. in JSON or XML, exchanged in messages.

o  Self-descriptive messages: each message carries enough information to describe how it is to be processed.

o  Hypermedia as the engine of application state: a client transitions between application states by following links contained in the returned representations.

Remote procedure calls (RPCs), e.g. [RFC5531], XML-RPC and the like, have the following characteristics:

o  The request identifies an individual procedure by name and carries the parameters for its invocation.

o  The server executes the named procedure and returns the result (or an error) to the client in the response.

o  Calls are typically synchronous from the caller's perspective.
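
To make the contrast concrete, the sketch below builds one request in each style; the resource path, procedure name and parameters are invented for illustration and do not correspond to any standardized SDN interface.

   # Hypothetical service interface requests: a RESTful resource update
   # and an RPC-style call expressing the same intent.  Messages are only
   # constructed and printed; no network communication takes place.
   import json

   # RESTful style: the resource is identified by the request URI and is
   # manipulated through a representation carried in the request body.
   rest_request = ("PUT /networks/net1/nodes/nd1/ports/eth0 HTTP/1.1\r\n"
                   "Content-Type: application/json\r\n\r\n"
                   + json.dumps({"admin-state": "up"}))

   # RPC style: the request names a procedure and passes its parameters.
   rpc_request = json.dumps({"method": "set_port_admin_state",
                             "params": {"node": "nd1", "port": "eth0",
                                        "state": "up"},
                             "id": 1})

   print(rest_request)
   print(rpc_request)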

3.7. Application Plane

Applications and services that use services from the control and/or management plane form the Application Plane.

Additionally, services residing in the Application Plane may provide services to other services and applications that reside in the application plane via the service interface.

Examples of applications include network topology discovery, network provisioning, path reservation, etc.

4. SDN Model View

We advocate that the SDN southbound interface should encompass both the CPSI and the MPSI.

SDN controllers, such as NOX [NOX] and Beacon [Beacon], are collections of control plane applications and services that implement a CPSI (both NOX and Beacon use OpenFlow) and provide a northbound interface for applications. The SDN northbound interface for controllers is implemented in the Network Services Abstraction Layer of Figure 1.

The above model can be used to describe in a concise manner all prominent SDN-enabling technologies, as we explain in the following subsections.

4.1. ForCES

The IETF-standardized Forwarding and Control Element Separation (ForCES) framework [RFC3746] consists of one model and two protocols. ForCES separates the Forwarding from the Control Plane via an open interface, namely the ForCES protocol [RFC5810] which operates on entities of the forwarding plane that have been modeled using the ForCES model [RFC5812].

The ForCES model [RFC5812] is based on the fact that a network element is composed of numerous logically separate entities that cooperate to provide a given functionality (such as routing or IP switching) and yet appear as a normal integrated network element to external entities; the ForCES protocol [RFC5810] is then used to transport information to and from those entities.

ForCES models the Forwarding Plane using Logical Functional Blocks (LFBs), which, when connected in a graph, compose the Forwarding Element (FE). LFBs are described in an XML language, based on an XML schema.

LFB definitions include base and custom-defined datatypes; metadata definitions; input and output ports; operational parameters or components; capabilities and event definitions.

The ForCES model can be used to define LFBs from fine- to coarse-grained as needed, irrespective of whether they are physical or virtual.

The ForCES protocol is agnostic to the model and can be used to monitor, configure and control any ForCES-modeled element. The protocol has very simple commands: Set, Get and Del(ete). The ForCES protocol has been designed for high throughput and fast updates.
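
The sketch below mimics only this operational model, components addressed by path and manipulated with Set, Get and Del(ete); it does not reproduce the ForCES protocol encoding, and the paths and values shown are hypothetical.

   # Hypothetical sketch of Set, Get and Del(ete) operations on a tree of
   # components, loosely mirroring how a ForCES-modeled element is
   # monitored and configured.  This is not the ForCES wire encoding.
   class ModeledElement:
       def __init__(self):
           self.components = {}                      # path -> value

       def set(self, path, value):
           self.components[path] = value
           return "ok"

       def get(self, path):
           return self.components.get(path)

       def delete(self, path):
           removed = self.components.pop(path, None)
           return "ok" if removed is not None else "no-such-path"

   fe = ModeledElement()
   fe.set("lfb/nexthop-table/1", {"prefix": "192.0.2.0/24", "port": 2})
   print(fe.get("lfb/nexthop-table/1"))
   print(fe.delete("lfb/nexthop-table/1"))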

With respect to Figure 1, the ForCES model [RFC5812] is suitable for the DAL, both for the Operational and the Forwarding Plane, using LFBs. The ForCES protocol [RFC5810] has been designed and is suitable for the CPSI, although it could also be utilized for the MPSI.

4.2. NETCONF/YANG

The Network Configuration Protocol (NETCONF) [RFC6241] is an IETF-standardized network management protocol [RFC6632]. NETCONF provides mechanisms to install, manipulate, and delete the configuration of network devices.

NETCONF protocol operations are realized as remote procedure calls (RPCs). The NETCONF protocol uses an Extensible Markup Language (XML) based data encoding for the configuration data as well as the protocol messages. Recent studies, such as [ESNet] and [PENet], have shown that NETCONF performs better than SNMP [RFC3411].

Additionally, the YANG data modeling language [RFC6020] has been developed for specifying NETCONF data models and protocol operations. YANG is used to model configuration and state data manipulated by NETCONF, NETCONF remote procedure calls, and NETCONF notifications.

YANG models the hierarchical organization of data as a tree, in which each node has either a value or a set of child nodes. Additionally, YANG structures data models into modules and submodules, allowing reusability and augmentation. YANG models can describe constraints to be enforced on the data. YANG also has a set of base datatypes and allows custom-defined datatypes as well.

YANG allows the definition of NETCONF RPCs, allowing the protocol to have an extensible set of commands. For RPC definition, the operation names, input parameters, and output parameters are defined using YANG data definition statements.
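
As a rough illustration of the RPC style described above, the sketch below assembles a NETCONF <edit-config> request as XML text; the configuration subtree (an interface name and its enabled flag) is a hypothetical example and does not refer to a specific published YANG module.

   # Sketch: assembling a NETCONF <edit-config> RPC request as XML text.
   # The subtree below <config> is hypothetical example data that would,
   # in practice, be defined by a YANG module.
   import xml.etree.ElementTree as ET

   NC = "urn:ietf:params:xml:ns:netconf:base:1.0"
   rpc = ET.Element(f"{{{NC}}}rpc", {"message-id": "101"})
   edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
   target = ET.SubElement(edit, f"{{{NC}}}target")
   ET.SubElement(target, f"{{{NC}}}running")
   config = ET.SubElement(edit, f"{{{NC}}}config")

   # Hypothetical configuration subtree.
   interface = ET.SubElement(config, "interface")
   ET.SubElement(interface, "name").text = "eth0"
   ET.SubElement(interface, "enabled").text = "true"

   print(ET.tostring(rpc, encoding="unicode"))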

With respect to Figure 1, the YANG model [RFC6020] is suitable for specifying DAL for the forwarding and operational plane. NETCONF [RFC6241] is suitable for the MPSI. NETCONF is a management protocol [RFC6241] which was not (originally) designed for fast CP updates, and it might not be suitable for addressing the requirements of CPSI.

4.3. OpenFlow

OpenFlow is a framework originally developed at Stanford University and currently under active standards development [OpenFlow] through the Open Networking Foundation (ONF). Initially, the goal was to provide a way for researchers to run experimental protocols in a production network [OFSIGC]. OpenFlow has undergone many revisions and additional revisions are likely. The following description reflects version 1.4 [OpenFlow]. In short, OpenFlow defines a protocol through which a logically centralized controller can control an OpenFlow switch. Each OpenFlow-compliant switch maintains one or more flow tables, which are used to perform packet lookups and to determine the actions to be taken on matching packets. A group table and an OpenFlow channel to external controllers are also part of the switch specification.
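
A much-simplified sketch of the flow-table lookup described above follows; the match fields, priorities and actions are toy values and do not follow the OpenFlow 1.4 table or message formats.

   # Toy flow-table lookup: match a packet against prioritized entries and
   # return the associated actions.  This is a greatly simplified view of
   # the match/action idea, not the OpenFlow 1.4 table or message format.
   flow_table = [
       {"priority": 200, "match": {"ip_dst": "10.0.0.1"},
        "actions": ["output:2"]},
       {"priority": 100, "match": {"eth_type": 0x0806},
        "actions": ["output:controller"]},
       {"priority": 0, "match": {}, "actions": ["drop"]},   # table-miss entry
   ]

   def lookup(packet):
       # Highest-priority entry whose match fields all appear in the packet.
       for entry in sorted(flow_table, key=lambda e: -e["priority"]):
           if all(packet.get(k) == v for k, v in entry["match"].items()):
               return entry["actions"]
       return ["drop"]

   print(lookup({"eth_type": 0x0800, "ip_dst": "10.0.0.1"}))  # ['output:2']
   print(lookup({"eth_type": 0x0806}))               # ['output:controller']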

With respect to Figure 1, the OpenFlow switch specification [OpenFlow] defines a DAL for the Forwarding Plane as well as a CPSI. The OF-CONFIG protocol [OF-CONFIG], based on the YANG model [RFC6020], provides a DAL for the Forwarding and Operational Planes of an OpenFlow switch and specifies NETCONF [RFC6241] as the MPSI. OF-CONFIG overlaps with the OpenFlow DAL, but, with NETCONF [RFC6241] as the transport protocol, it shares the limitations described in the previous section.

4.4. Interface to the Routing System

Interface to the Routing System (I2RS) provides a standard interface to the routing system for real-time or event-driven interaction through a collection of protocol-based control or management interfaces. Essentially, one of the main goals of I2RS is to make the routing information base (RIB) programmable, thus enabling new kinds of network provisioning and operation.

I2RS does not initially intend to create new interfaces, but rather to leverage or extend existing ones and to define information models for the routing system. For example, the latest I2RS problem statement [I-D.ietf-i2rs-problem-statement] discusses previously defined IETF protocols such as ForCES [RFC5810], NETCONF [RFC6241], and SNMP [RFC3417]. Regarding the definition of information and data models, the I2RS working group has opted to use the YANG [RFC6020] modeling language.

Currently, the I2RS working group is developing an Information Model [I-D.ietf-i2rs-rib-info-model] for the I2RS agent with regard to the Network Services Abstraction Layer.

With respect to Figure 1, the I2RS architecture [I-D.ietf-i2rs-architecture] encompasses the Control and Application Planes and uses any CPSI and DAL that is available, whether that is ForCES [RFC5810], OpenFlow [OpenFlow] or another interface. In addition, the I2RS agent is a Control Plane Service. All services and applications on top of it belong to either the Control, the Management, or the Application Plane. In the I2RS documents, management access to the agent may be provided by management protocols such as SNMP and NETCONF. The I2RS protocol may also be mapped to the Service Interface, as it will provide access to applications other than control applications as well.

4.5. SNMP

The Simple Network Management Protocol (SNMP) is an IETF-standardized management protocol and is currently at its third revision (SNMPv3) [RFC3417][RFC3412][RFC3414]. It consists of a set of standards for network management, including an application layer protocol, a database schema, and a set of data objects. SNMP exposes management data (managed objects) in the form of variables on the managed systems, which describe the system configuration. These variables can then be queried and set by managing applications.

SNMP uses an extensible design for describing data, defined by management information bases (MIBs). MIBs describe the structure of the management data of a device subsystem. MIBs use a hierarchical namespace containing object identifiers (OIDs). Each OID identifies a variable that can be read or set via SNMP. MIBs use the notation defined by the Structure of Management Information Version 2 [RFC2578].
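
The sketch below imitates only the variable-binding idea, OIDs read and written on an agent; it is a mock data structure rather than a real SNMP engine, and the OIDs shown are well-known MIB-II objects used purely as familiar examples.

   # Mock "agent": managed objects exposed as OID -> value bindings that a
   # manager can read and write.  This is not a real SNMP implementation;
   # the OIDs are standard MIB-II objects used only as familiar examples.
   class MockAgent:
       def __init__(self):
           self.mib = {
               "1.3.6.1.2.1.1.1.0": "example network device",  # sysDescr.0
               "1.3.6.1.2.1.1.5.0": "nd1",                      # sysName.0
           }

       def get(self, oid):
           return self.mib.get(oid, "noSuchObject")

       def set(self, oid, value):
           self.mib[oid] = value
           return "noError"

   agent = MockAgent()
   print(agent.get("1.3.6.1.2.1.1.5.0"))       # read sysName.0
   print(agent.set("1.3.6.1.2.1.1.5.0", "nd1.example.net"))
   print(agent.get("1.3.6.1.2.1.1.5.0"))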

An early example of SNMP in the context of SDN is discussed in [Peregrine].

With respect to Figure 1, SNMP MIBs, similar to the YANG model, can be used to describe the DAL for the Forwarding and Operational Planes. SNMP itself, similar to NETCONF, is suited for the MPSI.

4.6. PCEP

The Path Computation Element (PCE) [RFC4655] architecture defines an entity capable of computing paths for a single service or a set of services. A PCE might be a network node, network management station, or dedicated computational platform that is resource-aware and has the ability to consider multiple constraints for a variety of path computation problems and switching technologies. The PCE Communication Protocol (PCEP) [RFC5440] is an IETF protocol for communication between a Path Computation Client (PCC) and a PCE, or between multiple PCEs.

The PCE represents a vision of networks that separates path computation for services, the signaling of end-to-end connections, and actual packet forwarding. The definition of online and offline path computation depends on the reachability of the PCE from network and NMS nodes and on the type of optimization request, which may significantly impact the optimization response time from the PCE to the PCC.

The PCEP messaging mechanism facilitates the specification of computation endpoints (source and destination node addresses) and objective functions (requested algorithm and optimization criteria), and the associated constraints such as traffic parameters (e.g. requested bandwidth), the switching capability, and encoding type.
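
As a rough illustration of what a PCE computes, rather than of the PCEP message formats, the sketch below finds a least-cost path subject to a minimum-bandwidth constraint on a toy topology; the topology and constraint values are invented for the example.

   # Toy path computation: least-cost path subject to a minimum-bandwidth
   # constraint, illustrating the kind of request a PCC might hand to a
   # PCE.  Topology, costs and bandwidths are invented example values.
   import heapq

   # links: (node_a, node_b) -> (cost, available_bandwidth)
   links = {("A", "B"): (1, 10), ("B", "C"): (1, 2),
            ("A", "D"): (2, 10), ("D", "C"): (2, 10)}

   def neighbors(node, min_bw):
       for (a, b), (cost, bw) in links.items():
           if bw >= min_bw:
               if a == node:
                   yield b, cost
               elif b == node:
                   yield a, cost

   def compute_path(src, dst, min_bw):
       """Dijkstra restricted to links with enough available bandwidth."""
       queue, seen = [(0, src, [src])], set()
       while queue:
           cost, node, path = heapq.heappop(queue)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for nxt, link_cost in neighbors(node, min_bw):
               if nxt not in seen:
                   heapq.heappush(queue, (cost + link_cost, nxt, path + [nxt]))
       return None

   print(compute_path("A", "C", min_bw=5))  # avoids the low-bandwidth B-C link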

With respect to Figure 1, the PCE is a control plane service that provides services for control plane applications. PCEP may be used as an east-west interface between PCEs, which may act as domain control entities (services and applications). The PCE working group is specifying extensions [I-D.ietf-pce-stateful-pce] that allow an active PCE to control, using PCEP, MPLS or GMPLS Label Switched Paths (LSPs), thus making PCEP applicable as the CPSI for MPLS and GMPLS switches.

4.7. BFD

Bidirectional Forwarding Detection (BFD) [RFC5880], is an IETF-standardized network protocol designed for detecting path failures between two forwarding elements, including physical interfaces, subinterfaces, data link(s), and to the extent possible the forwarding engines themselves, with potentially very low latency. BFD can provide low-overhead failure detection on any kind of path between systems, including direct physical links, virtual circuits, tunnels, MPLS LSPs, multihop routed paths, and unidirectional links where there exists a return path as well. It is often implemented in some component of the forwarding engine of a system, in cases where the forwarding and control engines are separated.

With respect to Figure 1, a BFD agent can be implemented as a control plane service or application that uses the CPSI towards the forwarding plane to send and receive BFD packets. However, a BFD agent is usually implemented as an application on the device itself, which uses the forwarding plane to send and receive BFD packets and updates the operational plane resources accordingly. Control and management plane services and applications that monitor, or have subscribed to, changes of these resources learn of such changes through their respective interfaces and take the necessary actions.
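
The sketch below captures only the detection-time idea, namely that a session is declared down when no control packet arrives within the detection interval; the timer values are arbitrary and the code does not follow the BFD state machine or packet formats of [RFC5880].

   # Toy liveness detection in the spirit of BFD: if no packet is seen from
   # the peer within detect_mult * rx_interval, declare the session down and
   # notify subscribed services.  Not the RFC 5880 state machine or format.
   class ToySession:
       def __init__(self, rx_interval_ms=50, detect_mult=3):
           self.detection_time_ms = rx_interval_ms * detect_mult
           self.last_rx_ms = 0
           self.state = "Up"
           self.subscribers = []          # callbacks, e.g. operational plane

       def packet_received(self, now_ms):
           self.last_rx_ms = now_ms
           self.state = "Up"

       def tick(self, now_ms):
           expired = now_ms - self.last_rx_ms > self.detection_time_ms
           if expired and self.state == "Up":
               self.state = "Down"
               for notify in self.subscribers:
                   notify(self.state)     # e.g. update operational state

   session = ToySession()
   session.subscribers.append(lambda state: print("session state:", state))
   session.packet_received(now_ms=0)
   session.tick(now_ms=100)    # within the 150 ms detection time: stays Up
   session.tick(now_ms=200)    # 200 ms > 150 ms: declared Down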

5. Summary

This document has been developed after a thorough and detailed analysis of related peer-reviewed literature, the RFC series, and documents produced by other relevant standards organizations. It has been reviewed publicly by the wider SDN community and we hope that it can serve as a handy tool for network researchers, engineers and practitioners in the years to come.

We conclude this document with a brief summary of the SDN architecture layers terminology. In general, we consider a network element as a composition of resources. Each network element has a forwarding plane (FP), responsible for handling packets in the datapath, and an operational plane (OP), responsible for managing the operational state of the device. Resources in the network element are abstracted by the device and resource abstraction layer (DAL) so that they can be controlled and managed by services or applications that belong to the control or management plane. The control plane (CP) is responsible for taking decisions on how packets should be forwarded. The management plane (MP) is responsible for monitoring, configuring and maintaining network devices. Service interfaces are abstracted by the network services abstraction layer (NSAL), through which other network applications or services may use them. The taxonomy introduced in this document defines distinct SDN planes, abstraction layers and interfaces, aiming to clarify SDN terminology and establish commonly accepted reference definitions across the SDN community, irrespective of specific implementation choices.

6. Contributors

The authors would like to acknowledge (in alphabetical order) the following persons as contributors to this document. They all provided text, pointers and comments that made this document more complete:

Daniel King for providing text related to PCEP.

Scott Mansfield for information regarding current ITU work on SDN.

Yaakov Stein for providing text related to the CAP theorem and SDO-related information.

Russ White for text suggestions on the definitions of control, management and application.

7. Acknowledgements

The authors would like to acknowledge Salvatore Loreto and Sudhir Modali for their contributions in the initial discussion on the SDNRG mailing list as well as their draft-specific comments; they helped put this document in a better shape.

Additionally we would like to thank (in alphabetical order) Shivleela Arlimatti, Roland Bless, Scott Brim, Alan Clark, Luis Miguel Contreras Murillo, Tim Copley, Linda Dunbar, Ken Gray, Deniz Gurkan, Dave Hood, Georgios Karagiannis, Bhumip Khasnabish, Sriganesh Kini, Ramki Krishnan, Dirk Kutscher, Diego Lopez, Scott Mansfield, Pedro Martinez-Julia, David E Mcdysan, Erik Nordmark, Carlos Pignataro, Robert Raszuk, Francisco Javier Ros Munoz, Yaakov Stein, Dimitri Staessens, Eve Varma, Stuart Venters, Russ White and Lee Young for their critical comments and discussions at the IETF 88, IETF 89 and IETF 90 meetings and on the SDNRG mailing list, which we took into consideration while revising this document.

We would also like to thank (in alphabetical order) Spencer Dawkins and Eliot Lear for their IRSG reviews which further refined this document.

Finally we thank Nobo Akiya for his review on the section on BFD, Julien Meuric for his review on the section of PCE, and Adrian Farrel and Benoit Claise for their IESG reviews of this document.

Kostas Pentikousis is supported by [ALIEN], a research project partially funded by the European Community under the Seventh Framework Program (grant agreement no. 317880). The views expressed here are those of the author only. The European Commission is not liable for any use that may be made of the information in this document.

8. IANA Considerations

This memo makes no requests to IANA.

9. Security Considerations

This document does not propose a new network architecture or protocol and therefore does not have any impact on the security of the Internet. That said, security is paramount in networking and thus it should be given full consideration when designing a network architecture or operational deployment. Security in SDN is discussed in the literature, for example in [SDNSecurity][SDNSecServ] and [SDNSecOF]. Security considerations regarding specific interfaces, such as, for example, ForCES, I2RS, SNMP, or NETCONF are addressed in their respective documents as well as [RFC7149].

10. Informative References

[A4D05] Greenberg, Albert, et al., "A clean slate 4D approach to network control and management", ACM SIGCOMM Computer Communication Review 35.5 (2005): 41-54 , 2005.
[ALIEN] D. Parniewicz, R. Doriguzzi Corin, et al., "Design and Implementation of an OpenFlow Hardware Abstraction Layer", Proc. ACM SIGCOMM Workshop on Distributed Cloud Computing (DCC), Chicago, Illinois, USA, August 2014, pp. 71-76. doi> 10.1145/2627566.2627577 , 2014.
[Beacon] Erickson, David., "The beacon openflow controller.", In Proceedings of the second ACM SIGCOMM workshop on Hot topics in software defined networking, pp. 13-18. ACM, 2013. , 2013.
[CAPBR] Eric A. Brewer, "Towards robust distributed systems.", Symposium on Principles of Distributed Computing (PODC). 2000 , 2000.
[CAPFN] Panda, Aurojit, Colin Scott, Ali Ghodsi, Teemu Koponen, and Scott Shenker., "CAP for Networks.", In Proceedings of the second ACM SIGCOMM workshop on Hot topics in software defined networking, pp. 91-96. ACM, 2013. , 2013.
[CAPGL] Seth Gilbert, and Nancy Ann Lynch., "Brewer's conjecture and the feasibility of consistent, available, partition-tolerant web services", ACM SIGACT News 33.2 (2002): 51-59. , 2002.
[CORBA] Object Management Group, "Common Object Request Broker Architecture specification version 3.3", November 2012.
[DIOPR] Denazis, Spyros, Kazuho Miki, John Vicente, and Andrew Campbell., "Designing interfaces for open programmable routers.", In Active Networks, pp. 13-24. Springer Berlin Heidelberg, 1999 , 1999.
[ESNet] Yu, James, and Imad Al Ajarmeh., "An empirical study of the NETCONF protocol.", In Networking and Services (ICNS), 2010 Sixth International Conference on, pp. 253-258. IEEE, 2010. , 2010.
[FCAPS] International Telecommunication Union, "X.700: Management Framework For Open Systems Interconnection (OSI) For CCITT Applications", September 1992.
[I-D.ietf-i2rs-architecture] Atlas, A., Halpern, J., Hares, S., Ward, D. and T. Nadeau, "An Architecture for the Interface to the Routing System", Internet-Draft draft-ietf-i2rs-architecture-05, July 2014.
[I-D.ietf-i2rs-problem-statement] Atlas, A., Nadeau, T. and D. Ward, "Interface to the Routing System Problem Statement", Internet-Draft draft-ietf-i2rs-problem-statement-04, June 2014.
[I-D.ietf-i2rs-rib-info-model] Bahadur, N., Folkes, R., Kini, S. and J. Medved, "Routing Information Base Info Model", Internet-Draft draft-ietf-i2rs-rib-info-model-03, May 2014.
[I-D.ietf-pce-stateful-pce] Crabbe, E., Minei, I., Medved, J. and R. Varga, "PCEP Extensions for Stateful PCE", Internet-Draft draft-ietf-pce-stateful-pce-09, June 2014.
[ITUATM] CCITT, Geneva, Switzerland, "CCITT Recommendation I.361, B-ISDN ATM Layer Specification", 1990.
[ITUSG11] Telecommunication Standardization sector of ITU, "ITU, Study group 11", 2013.
[ITUSG13] Telecommunication Standardization sector of ITU, "ITU, Study group 13", 2013.
[ITUSS7] Telecommunication Standardization sector of ITU, "ITU, Q.700 : Introduction to CCITT Signalling System No. 7", 1993.
[ITUY3300] ITU-T Study Group 13, "Y.3300, Framework of software-defined networking", June 2014.
[KANDOO] Hassas Yeganeh, Soheil, and Yashar Ganjali., "Kandoo: a framework for efficient and scalable offloading of control applications.", In Proceedings of the first workshop on Hot topics in software defined networks, pp. 19-24. ACM SIGCOMM, 2012. , 2012.
[NFVArch] European Telecommunication Standards Institute, "Network Functions Virtualisation (NFV): Architectural Framework; White paper, ETSI GS 9 NFV 002, 2013", December 2013.
[NOX] Gude, Natasha, Teemu Koponen, Justin Pettit, Ben Pfaff, Martin Casado, Nick McKeown, and Scott Shenker., "NOX: towards an operating system for networks.", ACM SIGCOMM Computer Communication Review 38, no. 3 (2008): 105-110. , 2008.
[NV09] Chowdhury, NM Mosharaf Kabir, and Raouf Boutaba, "Network virtualization: state of the art and research challenges", Communications Magazine, IEEE 47.7 (2009): 20-26 , 2009.
[OF-CONFIG] Open Networking Foundation, "OpenFlow Management and Configuration Protocol 1.1.1", March 2013.
[OF08] McKeown, Nick, et al., "OpenFlow: enabling innovation in campus networks", ACM SIGCOMM Computer Communication Review 38.2 (2008): 69-74 , 2008.
[OFSIGC] McKeown, Nick, Tom Anderson, Hari Balakrishnan, Guru Parulkar, Larry Peterson, Jennifer Rexford, Scott Shenker, and Jonathan Turner, "OpenFlow: enabling innovation in campus networks", ACM SIGCOMM Computer Communication Review 38, no. 2 (2008): 69-74, 2008.
[ONFArch] Open Networking Foundation, "SDN Architecture, Issue 1", June 2014.
[OpenFlow] Open Networking Foundation, "The OpenFlow 1.4 Specification.", October 2013.
[P1520] Biswas, Jit, Aurel A. Lazar, J-F. Huard, Koonseng Lim, Semir Mahjoub, L-F. Pau, Masaaki Suzuki, Soren Torstensson, Weiguo Wang, and Stephen Weinstein., "The IEEE P1520 standards initiative for programmable network interfaces.", Communications Magazine, IEEE 36, no. 10 (1998): 64-70. , 1998.
[PENet] Hedstrom, Brian, Akshay Watwe, and Siddharth Sakthidharan, "Protocol Efficiencies of NETCONF versus SNMP for Configuration Management Functions", Master's thesis, University of Colorado, 2011.
[PNSurvey99] Campbell, Andrew T., et al., "A survey of programmable networks", ACM SIGCOMM Computer Communication Review 29.2 (1999): 7-23, 1999.
[Peregrine] Chiueh, Tzi-cker, Cheng-Chun Tu, Yu-Cheng Wang, Pai-Wei Wang, Kai-Wen Li, and Yu-Ming Huang., "Peregrine: An All-Layer-2 Container Computer Network.", In Cloud Computing (CLOUD), 2012 IEEE 5th International Conference on, pp. 686-693. IEEE, 2012.
[PiNA] John Day, "Patterns in network architecture: a return to fundamentals.", Prentice Hall (ISBN 0132252422), 2007.
[RCP] Caesar, Matthew, Donald Caldwell, Nick Feamster, Jennifer Rexford, Aman Shaikh, and Jacobus van der Merwe., "Design and implementation of a routing control platform.", In Proceedings of the 2nd conference on Symposium on Networked Systems Design & Implementation-Volume 2, pp. 15-28. USENIX Association, 2005.
[REST] Fielding, Roy, "Fielding Dissertation: Chapter 5: Representational State Transfer (REST).", 2000.
[RFC0826] Plummer, D., "Ethernet Address Resolution Protocol: Or converting network protocol addresses to 48.bit Ethernet address for transmission on Ethernet hardware", STD 37, RFC 826, November 1982.
[RFC1953] Newman, P., Edwards, W., Hinden, R., Hoffman, E., Ching Liaw, F., Lyon, T. and G. Minshall, "Ipsilon Flow Management Protocol Specification for IPv4 Version 1.0", RFC 1953, May 1996.
[RFC2297] Newman, P., Edwards, W., Hinden, R., Hoffman, E., Liaw, F., Lyon, T. and G. Minshall, "Ipsilon's General Switch Management Protocol Specification Version 2.0", RFC 2297, March 1998.
[RFC2578] McCloghrie, K., Perkins, D. and J. Schoenwaelder, "Structure of Management Information Version 2 (SMIv2)", STD 58, RFC 2578, April 1999.
[RFC3411] Harrington, D., Presuhn, R. and B. Wijnen, "An Architecture for Describing Simple Network Management Protocol (SNMP) Management Frameworks", STD 62, RFC 3411, December 2002.
[RFC3412] Case, J., Harrington, D., Presuhn, R. and B. Wijnen, "Message Processing and Dispatching for the Simple Network Management Protocol (SNMP)", STD 62, RFC 3412, December 2002.
[RFC3414] Blumenthal, U. and B. Wijnen, "User-based Security Model (USM) for version 3 of the Simple Network Management Protocol (SNMPv3)", STD 62, RFC 3414, December 2002.
[RFC3417] Presuhn, R., "Transport Mappings for the Simple Network Management Protocol (SNMP)", STD 62, RFC 3417, December 2002.
[RFC3418] Presuhn, R., "Management Information Base (MIB) for the Simple Network Management Protocol (SNMP)", STD 62, RFC 3418, December 2002.
[RFC3535] Schoenwaelder, J., "Overview of the 2002 IAB Network Management Workshop", RFC 3535, May 2003.
[RFC3746] Yang, L., Dantu, R., Anderson, T. and R. Gopal, "Forwarding and Control Element Separation (ForCES) Framework", RFC 3746, April 2004.
[RFC4271] Rekhter, Y., Li, T. and S. Hares, "A Border Gateway Protocol 4 (BGP-4)", RFC 4271, January 2006.
[RFC4655] Farrel, A., Vasseur, J. and J. Ash, "A Path Computation Element (PCE)-Based Architecture", RFC 4655, August 2006.
[RFC5424] Gerhards, R., "The Syslog Protocol", RFC 5424, March 2009.
[RFC5440] Vasseur, JP. and JL. Le Roux, "Path Computation Element (PCE) Communication Protocol (PCEP)", RFC 5440, March 2009.
[RFC5531] Thurlow, R., "RPC: Remote Procedure Call Protocol Specification Version 2", RFC 5531, May 2009.
[RFC5743] Falk, A., "Definition of an Internet Research Task Force (IRTF) Document Stream", RFC 5743, December 2009.
[RFC5810] Doria, A., Hadi Salim, J., Haas, R., Khosravi, H., Wang, W., Dong, L., Gopal, R. and J. Halpern, "Forwarding and Control Element Separation (ForCES) Protocol Specification", RFC 5810, March 2010.
[RFC5812] Halpern, J. and J. Hadi Salim, "Forwarding and Control Element Separation (ForCES) Forwarding Element Model", RFC 5812, March 2010.
[RFC5880] Katz, D. and D. Ward, "Bidirectional Forwarding Detection (BFD)", RFC 5880, June 2010.
[RFC6020] Bjorklund, M., "YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF)", RFC 6020, October 2010.
[RFC6241] Enns, R., Bjorklund, M., Schoenwaelder, J. and A. Bierman, "Network Configuration Protocol (NETCONF)", RFC 6241, June 2011.
[RFC6632] Ersue, M. and B. Claise, "An Overview of the IETF Network Management Standards", RFC 6632, June 2012.
[RFC7011] Claise, B., Trammell, B. and P. Aitken, "Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of Flow Information", STD 77, RFC 7011, September 2013.
[RFC7047] Pfaff, B. and B. Davie, "The Open vSwitch Database Management Protocol", RFC 7047, December 2013.
[RFC7149] Boucadair, M. and C. Jacquenet, "Software-Defined Networking: A Perspective from within a Service Provider Environment", RFC 7149, March 2014.
[RFC7276] Mizrahi, T., Sprecher, N., Bellagamba, E. and Y. Weingarten, "An Overview of Operations, Administration, and Maintenance (OAM) Tools", RFC 7276, June 2014.
[RINA] John Day, Ibrahim Matta, and Karim Mattar., "Networking is IPC: a guiding principle to a better internet.", In Proceedings of the 2008 ACM CoNEXT Conference, p. 67. ACM, 2008.
[RouteFlow] Nascimento, Marcelo R., Christian E. Rothenberg, Marcos R. Salvador, Carlos NA Correa, Sidney C. de Lucena, and Mauricio F. Magalhaes., "Virtual routers as a service: the routeflow approach leveraging software-defined networks.", In Proceedings of the 6th International Conference on Future Internet Technologies, pp. 34-37. ACM, 2011.
[SDNACS] Diego Kreutz, Fernando M. V. Ramos, Paulo Verissimo, Christian Esteve Rothenberg, Siamak Azodolmolky, and Steve Uhlig, "Software-Defined Networking: A Comprehensive Survey.", arXiv preprint arXiv:1406.0440, 2014.
[SDNHistory] Feamster, Nick, Jennifer Rexford, and Ellen Zegura., "The Road to SDN", ACM Queue 11, no. 12 (2013): 20.
[SDNNFV] Haleplidis, Evangelos, Jamal Hadi Salim, Spyros Denazis, and Odysseas Koufopavlou., "Towards a Network Abstraction Model for SDN.", Journal of Network and Systems Management (2014): 1-19, Special Issue on Management of Software Defined Networks, Springer, 2014.
[SDNSecOF] Kloti, Rowan, Vasileios Kotronis, and Paul Smith., "OpenFlow: A security analysis.", Proceedings Workshop on Secure Network Protocols (NPSec), IEEE, 2013.
[SDNSecServ] Sandra Scott-Hayward, Gemma O'Callaghan, and Sakir Sezer., "SDN security: A survey.", In Future Networks and Services (SDN4FNS), 2013 IEEE SDN for, pp. 1-7. IEEE, 2013.
[SDNSecurity] Diego Kreutz, Fernando Ramos, and Paulo Verissimo., "Towards secure and dependable software-defined networks.", In Proceedings of the second ACM SIGCOMM workshop on Hot topics in software defined networking, pp. 55-60. ACM, 2013.
[SDNSurvey] Bruno Astuto A. Nunes, Marc Mendonca, Xuan-Nam Nguyen, Katia Obraczka, and Thierry Turletti, "A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks", IEEE Communications Surveys and Tutorials, DOI:10.1109/SURV.2014.012214.00180, 2014.
[SLTSDN] Yosr Jarraya, Taous Madi, and Mourad Debbabi, "A Survey and a Layered Taxonomy of Software-Defined Networking", to be published in Communications Surveys and Tutorials, IEEE, Issue 99, 2014.
[SoftRouter] Lakshman, T. V., T. Nandagopal, R. Ramjee, K. Sabnani, and T. Woo., "The SoftRouter Architecture.", In Proc. ACM SIGCOMM Workshop on Hot Topics in Networking, 2004.
[Tempest] Rooney, Sean, Jacobus E. van der Merwe, Simon A. Crosby, and Ian M. Leslie., "The Tempest: a framework for safe, resource assured, programmable networks.", Communications Magazine, IEEE 36, no. 10 (1998): 42-53.

Authors' Addresses

Evangelos Haleplidis (editor)
University of Patras
Department of Electrical and Computer Engineering
Patras 26500
Greece

EMail: ehalep@ece.upatras.gr


Kostas Pentikousis (editor)
EICT GmbH
Torgauer Strasse 12-15
10829 Berlin
Germany

EMail: k.pentikousis@eict.de


Spyros Denazis
University of Patras
Department of Electrical and Computer Engineering
Patras 26500
Greece

EMail: sdena@upatras.gr


Jamal Hadi Salim
Mojatatu Networks
Suite 400, 303 Moodie Dr.
Ottawa, Ontario K2H 9R4
Canada

EMail: hadi@mojatatu.com


David Meyer
Brocade

EMail: dmm@1-4-5.net


Odysseas Koufopavlou
University of Patras
Department of Electrical and Computer Engineering
Patras 26500
Greece

EMail: odysseas@ece.upatras.gr