DMM Working Group S. Matsushima
Internet-Draft SoftBank
Intended status: Standards Track L. Bertz
Expires: September 14, 2017 Sprint
M. Liebsch
NEC
S. Gundavelli
Cisco
D. Moses
Intel Corporation
C. Perkins
Futurewei
March 13, 2017

Protocol for Forwarding Policy Configuration (FPC) in DMM
draft-ietf-dmm-fpc-cpdp-07

Abstract

This document describes a way, called Forwarding Policy Configuration (FPC), to manage the separation of data-plane and control-plane. FPC defines a flexible mobility management system using FPC agent and FPC client functions. An FPC agent provides an abstract interface to the data-plane. The FPC client configures data-plane nodes by using the functions and abstractions provided by the FPC agent for those data-plane nodes. The data-plane abstractions presented in this document are extensible, in order to support many different types of mobility management systems and data-plane functions.

Status of This Memo

This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

This Internet-Draft will expire on September 14, 2017.

Copyright Notice

Copyright (c) 2017 IETF Trust and the persons identified as the document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.


Table of Contents

1. Introduction

This document describes Forwarding Policy Configuration (FPC), a system for managing the separation of data-plane and control-plane. FPC enables flexible mobility management using FPC agent and FPC client functions. An FPC agent exports an abstract interface to the data-plane. To configure data-plane nodes and functions, the FPC client uses the interface to the data-plane offered by the FPC agent.

Control planes of mobility management systems, or other applications which require data-plane control, can utilize the FPC client at various granularities of operation. The operations are capable of configuring a single Data-Plane Node (DPN) directly, as well as multiple DPNs as determined by abstracted data-plane models on the FPC agent.

An FPC agent provides data-plane abstraction in the following three areas:

Topology:
DPNs are grouped and abstracted according to well-known concepts of mobility management such as access networks, anchors and domains. An FPC agent provides an interface to the abstract DPN-groups that enables definition of a topology for the forwarding plane. For example, access nodes may be assigned to a DPN-group that peers with a DPN-group of anchor nodes.
Policy:
A Policy embodies the mechanisms for processing specific traffic flows or packets. This is needed for QoS, for packet processing to rewrite headers, etc. A Policy consists of one or more rules. Each rule is composed of Descriptors and Actions. Descriptors in a rule identify traffic flows, and Actions apply treatments to packets that match the Descriptors in the rule. An arbitrary set of policies can be abstracted as a Policy-group to be applied to a particular collection of flows, which is called the Virtual Port (Vport).
Mobility:
A mobility session which is active on a mobile node is abstracted as a Context with associated runtime concrete attributes, such as tunnel endpoints, tunnel identifiers, delegated prefix(es), routing information, etc. Contexts are attached to DPN-groups as a consequence of control-plane operations. One or more Contexts that have the same sets of policies are assigned Vports which abstract those policy sets. A Context can belong to multiple Vports which serve various purposes and policies. Monitors provide a mechanism to produce reports when events regarding Vports, Sessions, DPNs or the Agent occur.

The Agent assembles applicable sets of forwarding policies for the mobility sessions from the data model, and then renders those policies into specific configurations for each DPN to which the sessions are attached. The specific protocols and configurations used by an FPC Agent to configure a DPN are outside the scope of this document.

The data-plane abstractions may be extended to support many different mobility management systems and data-plane functions. The architecture and protocol design of FPC is not tied to specific types of access technologies and mobility protocols.

2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in [RFC2119].

DPN:
A data-plane node (DPN) is capable of deploying data-plane features. DPNs may be switches or routers regardless of their realization, i.e. whether they are hardware or software based.
FPC Agent:
A functional entity in FPC that manages DPNs and provides abstracted data-plane networks to mobility management systems and/or applications through FPC Clients.
FPC Client:
A functional entity in FPC that is integrated with mobility management systems and/or applications to control forwarding policy, mobility sessions and DPNs.
Tenant:
An operational entity that manages mobility management systems or applications which require data-plane functions.
Domain:
One or more DPNs that form a data-plane network. A mobility management system or an application in a tenant may utilize a single or multiple domains.
Virtual Port (Vport):
A set of forwarding policies.
Context:
An abstracted endpoint of a mobility session associated with runtime attributes. Vports may be applied to a Context, which instantiates those forwarding policies on a DPN.

3. FPC Architecture

To fulfill the requirements described in [RFC7333], FPC enables mobility control-planes and applications to configure DPNs in the various mobility management roles described in [I-D.ietf-dmm-deployment-models].

FPC defines building blocks of FPC Agent and FPC Client, as well as data models for the necessary data-plane abstractions. The attributes defining those data models serve as protocol elements for the interface between the FPC Agent and the FPC Client.

Mobility control-planes and applications integrate the FPC Client function. The FPC Client connects to FPC Agent functions. The Client and the Agent communicate based on information models for the data-plane abstractions described in Section 4. The data models allow the control-plane and the applications to support forwarding policies on the Agent for their mobility sessions.

The FPC Agent carries out the required configuration and management of the DPN(s). The Agent determines DPN configurations according to the forwarding policies requested by the FPC Client. The DPN configurations may be implementation-specific; how the FPC Agent determines an implementation-specific configuration for a DPN is outside the scope of this document. Along with the models, the control-plane and the applications provision Policies on the Agent prior to creating their mobility sessions.

Once the Topology of DPN(s) and domains is defined for a data plane on an Agent, the data-plane nodes (DPNs) are available for further configuration. The FPC Agent connects to those DPNs to manage their configurations.

This architecture is illustrated in Figure 1. An FPC Agent may be implemented in a network controller that handles multiple DPNs or, in a simpler case, may itself be integrated into a DPN.

This document does not adopt a specific protocol for the FPC interface; its selection is out of scope. However, the chosen protocol must be capable of supporting FPC protocol messages and transactions described in Section 5.

                    +-------------------------+
                    | Mobility Control-Plane  |
                    |          and            |
                    |      Applications       |
                    |+-----------------------+|
                    ||      FPC Client       ||
                    |+----------^------------+|
                    +-----------|-------------+
        FPC interface protocol  |
                +---------------+-----------------+
                |                                 |
  Network       |                                 |
  Controller    |                      DPN        |
    +-----------|-------------+        +----------|---------+
    |+----------v------------+|        |+---------v--------+|
    ||   [Data-plane model]  ||        ||[Data-plane model]||
    ||       FPC Agent       ||        ||    FPC Agent     ||
    |+-----------------------+|        |+------------------+|
    |+------------+----------+|        |                    |
    ||SB Protocols|FPC Client||        |  DPN Configuration |
    ||   Modules  |  Module  ||        +--------------------+
    |+------^-----+----^-----+|
    +-------|----------|------+
            |          |
  Other     |          | FPC interface
  Southbound|          | Protocol
  Protocols |          |
            |          +-----------------+
            |                            |
DPN         |                 DPN        |
 +----------|---------+       +----------|---------+
 |+---------v--------+|       |+---------v--------+|
 ||  Configuration   ||       ||[Data-plane model]||
 || Protocol module  ||       ||     FPC Agent    ||
 |+------------------+|       |+------------------+|
 |                    |       |                    |
 | DPN Configuration  |       |  DPN Configuration |
 +--------------------+       +--------------------+
              

Figure 1: Reference Forwarding Policy Configuration (FPC) Architecture

The FPC architecture supports multi-tenancy; an FPC-enabled data-plane supports tenants of multiple mobile operator networks and/or applications. The FPC Client of each tenant connects to the FPC Agent, which MUST partition the namespace and data for each tenant's data-plane. DPNs on the data-plane may fulfill multiple data-plane roles which are defined per session, domain and tenant.

Note that all FPC models SHOULD be configurable. The FPC interface protocol in Figure 1 is only required to handle runtime data in the Mobility model. The rest of the FPC models, namely Topology and Policy, may be pre-configured, and in that case real-time protocol exchanges would not be required for them. Operators that are tenants of the FPC data-plane could configure Topology and Policy on the Agent through other means, such as RESTCONF [I-D.ietf-netconf-restconf] or NETCONF [RFC6241].

4. Information Model for FPC

This section presents an information model representing the abstract concepts of FPC, which are language and protocol neutral. Figure 2 shows an overview of the FPC data-plane information model.


(Mobile operator tenant using the abstracted data-plane)
        |
        +---FPC-Topology
        |     |
        |     +---DPNs
        |     |
        |     +---DPN-groups
        |     |
        |     +---Domains
        |
        +---FPC-Policy
        |    |
        |    +---Descriptors
        |    |
        |    +---Actions
        |    |
        |    +---Policies
        |    |
        |    +---Policy-groups
        |
        +---FPC-Mobility
              |
              +---Vports
              |
              +---Contexts

              

Figure 2: FPC Data-plane Information Model
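The tenant-rooted structure of Figure 2 can be sketched as follows. This is a hypothetical, non-normative Python rendering for illustration only; the model itself is language and protocol neutral, and the class and attribute names simply follow the figure.

```python
from dataclasses import dataclass, field
from typing import List

# Non-normative sketch of the FPC data-plane information model in Figure 2.
# Each tenant owns a Topology, a Policy set and a Mobility set; the dict
# entries stand in for the structures detailed in Sections 4.1 - 4.3.

@dataclass
class FpcTopology:
    dpns: List[dict] = field(default_factory=list)
    dpn_groups: List[dict] = field(default_factory=list)
    domains: List[dict] = field(default_factory=list)

@dataclass
class FpcPolicy:
    descriptors: List[dict] = field(default_factory=list)
    actions: List[dict] = field(default_factory=list)
    policies: List[dict] = field(default_factory=list)
    policy_groups: List[dict] = field(default_factory=list)

@dataclass
class FpcMobility:
    vports: List[dict] = field(default_factory=list)
    contexts: List[dict] = field(default_factory=list)

@dataclass
class Tenant:
    tenant_id: str
    topology: FpcTopology = field(default_factory=FpcTopology)
    policy: FpcPolicy = field(default_factory=FpcPolicy)
    mobility: FpcMobility = field(default_factory=FpcMobility)
```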

4.1. FPC-Topology

Topology abstraction enables a physical data-plane network to support multiple overlay topologies. An FPC-Topology consists of DPNs, DPN-groups and Domains which abstract data-plane topologies for the Client's mobility control-planes and applications.

Utilizing an FPC Agent, a mobile operator can create virtual DPNs in an overlay network. Such virtual DPNs are treated the same as physical forwarding DPNs in this document.

4.1.1. DPNs

The DPNs list defines all nodes available to a tenant of the FPC data-plane network. The FPC Agent defines the binding of each DPN to an actual node. The role of a DPN in the data-plane is determined at the time the DPN is assigned to a DPN-group.


 (FPC-Topology)
     |
     +---DPNs
            |
            +---DPN-id
            |
            +---DPN-name
            |
            +---DPN-groups
            |
            +---Node-reference

              

Figure 3: DPNs Model Structure

DPN-id:
The identifier for the DPN. The ID format MUST conform to Section 4.4.
DPN-name:
The name of the DPN.
DPN-groups:
The list of DPN-groups to which the DPN belongs.
Node-reference:
Indicates a physical node, or a virtualization platform, to which the DPN is bound by the Agent. The Agent SHOULD maintain that node's information, including the IP address of the management and control protocol used to connect to it. When the node is a virtualization platform, the FPC Agent directs the platform to instantiate a DPN with the attributes of a DPN-group.

4.1.2. DPN-groups

A DPN-group is a set of DPNs which share certain specified data-plane attributes. DPN-groups define the data-plane topology; for example, a DPN-group of access nodes connecting to a DPN-group of anchor nodes.

A DPN-group has attributes such as its data-plane role, supported access technologies, mobility profiles, connected peer groups and domain. A DPN may be assigned to multiple DPN-groups in different data-plane roles or different domains.


 (FPC-Topology)
     |
     +---DPN-groups
            |
            +---DPN-group-id
            |
            +---Data-plane-role
            |
            +---Domains
            |
            +---Access-type
            |
            +---Mobility-profile
            |
            +---DPN-group-peers

              

Figure 4: DPN-groups Model Structure

DPN-group-id:
The identifier of the DPN-group. The ID format MUST conform to Section 4.4.
Data-plane-role:
The data-plane role of the DPN-group, such as access-dpn, anchor-dpn.
Domains:
The domains to which the DPN-group belongs.
Access-type:
The access type supported by the DPN-group, such as Ethernet (802.3/11) or 3GPP cellular (S1, RAB), if any.
Mobility-profile:
Identifies a supported mobility profile, such as ietf-pmip, or 3gpp. New profiles may be defined as extensions of this specification. Mobility profiles are defined so that some or all data-plane parameters of the mobility contexts that are part of the profile can be automatically determined by the FPC Agent.
DPN-group-peers:
The remote peers of the DPN-group with parameters described in Section 4.1.2.1.

4.1.2.1. DPN-group Peers

DPN-group-peers lists relevant parameters of remote peer DPNs as illustrated in Figure 5.

(DPN-groups)
    |
    +---DPN-group-peers
          |
          +---Remote-DPN-group-id
          |
          +---Remote-mobility-profile
          |
          +---Remote-data-plane-role
          |
          +---Remote-endpoint-address
          |
          +---Local-endpoint-address
          |
          +---MTU-size

              

Figure 5: DPN-groups Peer Model Structure

Remote-DPN-group-id:
The ID of the peering DPN-Group. The ID format MUST conform to Section 4.4.
Remote-mobility-profile:
The mobility-profile for the peering DPN-group. Currently defined profiles are ietf-pmip and 3gpp. New profiles may be defined as extensions of this specification.
Remote-data-plane-role:
The data-plane role of the peering DPN-group.
Remote-endpoint-address:
The endpoint address of the peering DPN-group.
Local-endpoint-address:
The endpoint address of this DPN-group used to peer with the remote DPN-group.
MTU-size:
The MTU size for traffic between this DPN-group and the DPN-group-peer.
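A DPN-group of access nodes peering with an anchor group, per Figures 4 and 5, could be populated as follows. This is an illustrative sketch only; all identifiers, addresses and the use of plain dicts are assumptions, not part of the model.

```python
# Illustrative DPN-group of access nodes with one DPN-group-peer entry,
# following the attribute names of Figures 4 and 5.  All identifier and
# address values are invented example data.
access_group = {
    "dpn-group-id": "access-east",
    "data-plane-role": "access-dpn",
    "domains": ["domain-1"],
    "access-type": "ethernet",
    "mobility-profile": "ietf-pmip",
    "dpn-group-peers": [{
        "remote-dpn-group-id": "anchor-core",
        "remote-mobility-profile": "ietf-pmip",
        "remote-data-plane-role": "anchor-dpn",
        "remote-endpoint-address": "198.51.100.10",
        "local-endpoint-address": "192.0.2.10",
        "mtu-size": 1500,
    }],
}

peer = access_group["dpn-group-peers"][0]
print(peer["remote-dpn-group-id"])  # anchor-core
```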

4.1.3. Domains

A domain is defined by an operator to refer to a particular network, considered as a system of cooperating DPN-groups. Domains may represent services or applications that are resident within an operator's network.


 (FPC-Topology)
     |
     +---Domains
            |
            +---Domain-id
            |
            +---Domain-name
            |
            +---Domain-type
            |
            +---Domain-reference

              

Figure 6: Domain Model Structure

Domain-id:
Identifier of Domain. The ID format MUST conform to Section 4.4.
Domain-name:
The name of the Domain.
Domain-type:
Specifies which address families are supported within the domain.
Domain-reference:
Indicates a set of resources for the domain, which consist of a topology of physical nodes, virtualization platforms, and physical/virtual links with certain bandwidth, etc.

4.2. FPC-Policy

The FPC-Policy consists of Descriptors, Actions, Policies and Policy-groups. These can be viewed as configuration data, in contrast to Contexts and Vports, which are structures that are instantiated on the Agent. The Descriptors and Actions in a Policy referenced by a Vport are active when the Vport is in an active Context, i.e. they can be applied to traffic on a DPN.

4.2.1. Descriptors

Descriptors define classifiers of specific traffic flows, such as those based on source and destination addresses, protocols, port numbers of TCP/UDP/SCTP/DCCP, or any other way of classifying packets. Descriptors are defined by specific profiles that may be produced by 3gpp, ietf or other SDOs. Many specifications also use the terms Filter, Traffic Descriptor or Traffic Selector [RFC6088]. A packet that meets the criteria of a Descriptor is said to satisfy, pass or be consumed by the Descriptor. Descriptors are assigned an identifier and contain a type and value.


 (FPC-Policy)
     |
     +---Descriptors
            |
            +---Descriptor-id
            |
            +---Descriptor-type
            |
            +---Descriptor-value

              

Figure 7: Descriptor Model Structure

Descriptor-id:
Identifier of Descriptor. The ID format MUST conform to Section 4.4.
Descriptor-type:
The descriptor type, which determines the classification of specific traffic flows, such as source and destination addresses, protocols, port numbers of TCP/UDP/SCTP/DCCP, or any other way of selecting packets.
Descriptor-value:
The value of Descriptor such as IP prefix/address, protocol number, port number, etc.
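Evaluation of a single Descriptor against a packet might look like the following sketch. The "prefix-v4" and "port" type names, and the dict-based packet, are illustrative assumptions; actual Descriptor types come from 3gpp, ietf or other SDO profiles.

```python
import ipaddress

# Illustrative evaluation of one Descriptor against a packet (modeled as a
# dict).  The "prefix-v4" and "port" Descriptor-types are hypothetical
# examples, not defined by this document.
def descriptor_matches(descriptor, packet):
    if descriptor["descriptor-type"] == "prefix-v4":
        net = ipaddress.ip_network(descriptor["descriptor-value"])
        return ipaddress.ip_address(packet["dst"]) in net
    if descriptor["descriptor-type"] == "port":
        return packet.get("dport") == int(descriptor["descriptor-value"])
    raise ValueError("unknown descriptor type")

d = {"descriptor-id": "d1",
     "descriptor-type": "prefix-v4",
     "descriptor-value": "192.0.2.0/24"}
print(descriptor_matches(d, {"dst": "192.0.2.10"}))  # True
```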

4.2.2. Actions

A Policy defines a list of Actions that are to be applied to traffic meeting the criteria defined by the Descriptors. Actions include traffic management such as shaping, policing based on given bandwidth, and connectivity actions such as pass, drop, forward to given nexthop. Actions may be defined as part of specific profiles which are produced by 3gpp, ietf or other SDOs.


 (FPC-Policy)
     |
     +---Actions
            |
            +---Action-id
            |
            +---Action-type
            |
            +---Action-value

              

Figure 8: Action Model Structure

Action-id:
Identifier for the Action. The ID format MUST conform to Section 4.4.
Action-type:
The type of the action -- i.e. how to treat the specified traffic flows. Examples include pass, drop, forward to a given nexthop value, shape or police based on given bandwidth value, etc.
Action-value:
Specifies a value for the Action-type, such as bandwidth, nexthop address or drop, etc.

4.2.3. Policies

Policies are collections of Rules. Each Policy has a Policy Identifier and a list of Rule/Order pairs. The Order and Rule values MUST be unique within the Policy. Unlike the AND matching of the Descriptors within a Rule, a Policy uses OR matching across Rules to find the first Rule whose Descriptors are satisfied by the packet. The search for a Rule to apply to a packet is executed in ascending order of the Rules' unique Order values, i.e. the Rule with the lowest Order value is tested first, and if its Descriptors are not satisfied by the packet, the Rule with the next lowest Order value is tested. If no Rule is found, the Policy does not apply. Policies contain Rules (not references to Rules).


 (FPC-Policy)
     |
     +---Policies
            |
            +---Policy-id
            |
            +---Rules
                  |
                  +---Order
                  |
                  +---Descriptors
                  |      |
                  |      +---Descriptor-id
                  |      |
                  |      +---Direction
                  |
                  +---Actions
                         |
                         +---Action-id
                         |
                         +---Action-Order

              

Figure 9: Model Structure for Policies

Policy-id:
Identifier of Policy. The ID format MUST conform to Section 4.4.
Rules:
List of Rules which are a collection of Descriptors and Actions. All Descriptors MUST be satisfied before the Actions are taken. This is known as an AND Descriptor list, i.e. Descriptor 1 AND Descriptor 2 AND ... Descriptor X all MUST be satisfied for the Rule to apply.
Order:
Specifies the precedence of the Rule within the Policy's Rules list. Order values MUST be unique within the Rules list.
Descriptors:
The list of Descriptors.
Descriptor-id:
Identifies each Descriptor in the Rule.
Direction:
Specifies which direction applies, such as uplink, downlink or both.
Actions:
List of Actions.
Action-id:
Identifies each Action in the Rule.
Action-Order:
Specifies Action ordering if the Rule has multiple actions. Action-Order values MUST be unique within the Actions list.
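The Rule-selection procedure described above can be sketched as follows: Rules are tried in ascending Order, every Descriptor within a Rule must be satisfied (AND), and the first satisfied Rule wins. The `descriptor_ok` function and tag-based matching are placeholders for profile-specific Descriptor evaluation.

```python
# Sketch of Policy rule selection: ascending-Order search over Rules,
# AND semantics over each Rule's Descriptors, first match applies.
def find_rule(policy, packet, descriptor_ok):
    for rule in sorted(policy["rules"], key=lambda r: r["order"]):
        if all(descriptor_ok(d, packet) for d in rule["descriptors"]):
            return rule          # first (lowest-Order) satisfied Rule
    return None                  # no Rule found: the Policy does not apply

policy = {"policy-id": "p1", "rules": [
    {"order": 10, "descriptors": [{"match": "web"}],
     "actions": [{"action-id": "a1", "action-order": 1}]},
    {"order": 5, "descriptors": [{"match": "voice"}, {"match": "uplink"}],
     "actions": [{"action-id": "a2", "action-order": 1}]},
]}

# Placeholder matcher: a Descriptor is satisfied if its tag is on the packet.
def descriptor_ok(d, packet):
    return d["match"] in packet["tags"]

r = find_rule(policy, {"tags": {"web"}}, descriptor_ok)
print(r["order"])  # 10: the Order-5 Rule requires voice AND uplink
```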

4.2.4. Policy-groups

Policy-groups are aggregations of Policies. Common applications include aggregating Policies that are defined by different functions, e.g. Network Address Translation, Security, etc. The structure has an identifier and references the Policies via their identifiers.


 (FPC-Policy)
     |
     +---Policy-groups
            |
            +---Policy-group-id
            |
            +---Policies

              

Figure 10: Policy-group Model Structure

Policy-group-id:
The identifier of the Policy-group. The ID format MUST conform to Section 4.4.
Policies:
List of Policies in the Policy-group.

4.3. FPC for Mobility Management

The FPC-Mobility consists of Vports and Contexts. A mobility session is abstracted as a Context with its associated runtime concrete attributes, such as tunnel endpoints, tunnel identifiers, delegated prefix(es) and routing information, etc. A Vport abstracts a set of policies applied to the Context.

4.3.1. Vport

A Vport represents a collection of Policy-groups, that is, groups of rules that can exist independently of the mobility session lifecycle. Mobility control-plane applications create, modify and delete Vports on the FPC Agent through the FPC Client.

When a Vport is indicated in a Context, the set of Descriptors and Actions in the Policies of the Vport are collected and applied to the Context. They must be instantiated on the DPN as forwarding-related actions such as QoS differentiation, packet processing of encap/decap, header rewrite, route selection, etc.


(FPC-Mobility)
        |
        +---Vports
               |
               +---Vport-id
               |
               +---Policy-groups

              

Figure 11: Vport Model Structure

Vport-id:
The identifier of Vport. The ID format MUST conform to Section 4.4.
Policy-groups:
List of references to Policy-groups which apply to the Vport.

4.3.2. Context

An endpoint of a mobility session is abstracted as a Context with its associated runtime concrete attributes, such as tunnel endpoints, tunnel identifiers, delegated prefix(es) and routing information, etc. A mobility control-plane, or other applications, can create, modify and delete contexts on an FPC Agent by using the FPC Client.

The FPC Agent SHOULD determine the runtime attributes of a Context from the Vport's policies and the attached DPN's attributes. A mobility control-plane, or other applications, MAY set some of the runtime attributes directly when they create data-plane related attributes, for instance when the mobility control-plane assigns tunnel identifiers.


(FPC-Mobility)
        |
        +---Contexts
               |
               +---Context-id
               |
               +---Vports
               |
               +---DPN-group
               |
               +---Delegated-ip-prefixes
               |
               +---Parent-context

              

Figure 12: Common Context Model Structure

Context-id:
Identifier of the Context. The ID format MUST conform to Section 4.4.
Vports:
List of Vports. When a Vport is applied to a Context, the Context is configured by the policies of each such Vport. Vport-id references indicate the Vports which apply to the Context. A Context can be spread over multiple Vports which have different policies.
DPN-group:
The DPN-group assigned to the Context.
Delegated-ip-prefixes:
List of IP prefixes to be delegated to the mobile node of the Context.
Parent-context:
Indicates a parent context from which this context inherits.
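The indirection from a Context through its Vports and Policy-groups down to Policies can be sketched as a lookup. The flat dict stores keyed by identifier are illustrative assumptions; an Agent may organize this data however it chooses.

```python
# Sketch of resolving the effective Policies for a Context: the Context
# references Vports, each Vport references Policy-groups, and each
# Policy-group references Policies by identifier.  The dict-based stores
# are illustrative only.
def effective_policies(context, vports, policy_groups, policies):
    result = []
    for vport_id in context["vports"]:
        for pg_id in vports[vport_id]["policy-groups"]:
            for policy_id in policy_groups[pg_id]["policies"]:
                result.append(policies[policy_id])
    return result

vports = {"vp1": {"vport-id": "vp1", "policy-groups": ["pg1"]}}
policy_groups = {"pg1": {"policy-group-id": "pg1", "policies": ["p1", "p2"]}}
policies = {"p1": {"policy-id": "p1"}, "p2": {"policy-id": "p2"}}
ctx = {"context-id": "c1", "vports": ["vp1"]}

resolved = effective_policies(ctx, vports, policy_groups, policies)
print([p["policy-id"] for p in resolved])  # ['p1', 'p2']
```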

4.3.2.1. Single DPN Agent Case

In the case where an FPC Agent supports only one DPN, the Agent MUST maintain Context data just for that DPN. The Agent does not need to maintain a Topology model. In the single-DPN case, Contexts consist of the following parameters for both the uplink and downlink directions.


(Contexts)
    |
    +---UL-Tunnel-local-address
    |
    +---UL-Tunnel-remote-address
    |
    +---UL-MTU-size
    |
    +---UL-Mobility-specific-tunnel-parameters
    |
    +---UL-Nexthop
    |
    +---UL-QoS-profile-specific-parameters
    |
    +---UL-DPN-specific-parameters
    |
    +---UL-Vendor-specific-parameters

                  

Figure 13: Uplink Context Model of Single DPN Structure

UL-Tunnel-local-address:
Specifies uplink endpoint address of the DPN.
UL-Tunnel-remote-address:
Specifies uplink endpoint address of the remote DPN.
UL-MTU-size:
Specifies the uplink MTU size.
UL-Mobility-specific-tunnel-parameters:
Specifies profile-specific uplink tunnel parameters for the DPN on which the Agent exists. This may, for example, include GTP/TEID for the 3gpp profile, or GRE/Key for the ietf-pmip profile.
UL-Nexthop:
Indicates next-hop information of uplink in external network such as IP address, MAC address, SPI of service function chain [I-D.ietf-sfc-nsh], SID of segment routing[I-D.ietf-6man-segment-routing-header] [I-D.ietf-spring-segment-routing-mpls], etc.
UL-QoS-profile-specific-parameters:
Specifies profile specific QoS parameters of uplink, such as QCI/TFT for 3gpp profile, [RFC6089]/[RFC7222] for ietf-pmip, or parameters of new profiles defined by extensions of this specification.
UL-DPN-specific-parameters:
Specifies optional node-specific parameters needed by the uplink, such as if-index or tunnel-if-number, that must be unique in the DPN.
UL-Vendor-specific-parameters:
Specifies a vendor specific parameter space for the uplink.

(Contexts)
    |
    +---DL-Tunnel-local-address
    |
    +---DL-Tunnel-remote-address
    |
    +---DL-MTU-size
    |
    +---DL-Mobility-specific-tunnel-parameters
    |
    +---DL-Nexthop
    |
    +---DL-QoS-profile-specific-parameters
    |
    +---DL-DPN-specific-parameters
    |
    +---DL-Vendor-specific-parameters

                  

Figure 14: Downlink Context Model of Single DPN Structure

DL-Tunnel-local-address:
Specifies downlink endpoint address of the DPN.
DL-Tunnel-remote-address:
Specifies downlink endpoint address of the remote DPN.
DL-MTU-size:
Specifies the downlink MTU size of tunnel.
DL-Mobility-specific-tunnel-parameters:
Specifies profile-specific downlink tunnel parameters for the DPN on which the Agent exists. This may, for example, include GTP/TEID for the 3gpp profile, or GRE/Key for the ietf-pmip profile.
DL-Nexthop:
Indicates next-hop information of downlink in external network such as IP address, MAC address, SPI of service function chain [I-D.ietf-sfc-nsh], SID of segment routing[I-D.ietf-6man-segment-routing-header] [I-D.ietf-spring-segment-routing-mpls], etc.
DL-QoS-profile-specific-parameters:
Specifies profile specific QoS parameters of downlink, such as QCI/TFT for 3gpp profile, [RFC6089]/[RFC7222] for ietf-pmip, or parameters of new profiles defined by extensions of this specification.
DL-DPN-specific-parameters:
Specifies optional node-specific parameters needed by the downlink, such as if-index or tunnel-if-number, that must be unique in the DPN.
DL-Vendor-specific-parameters:
Specifies a vendor specific parameter space for the downlink.

4.3.2.2. Multiple DPN Agent Case

Alternatively, an FPC Agent may connect to multiple DPNs. The Agent MUST maintain a set of Context data for each DPN. The Context contains a list of DPNs, where each entry of the list consists of the parameters in Figure 15. The Context data for one DPN has two entries, one for uplink and another for downlink, or, where applicable, a single entry with direction 'both'.


(Contexts)
    |
    +---DPNs
         |
         +---DPN-id
         |
         +---Direction
         |
         +---Tunnel-local-address
         |
         +---Tunnel-remote-address
         |
         +---MTU-size
         |
         +---Mobility-specific-tunnel-parameters
         |
         +---Nexthop
         |
         +---QoS-profile-specific-parameters
         |
         +---DPN-specific-parameters
         |
         +---Vendor-specific-parameters

                  

Figure 15: Multiple-DPN Supported Context Model Structure

DPN-id:
Indicates the DPN on which the runtime Context data is installed.
Direction:
Specifies which direction of the connection at the indicated DPN applies: uplink, downlink or both.
Tunnel-local-address:
Specifies endpoint address of the DPN at the uplink or downlink.
Tunnel-remote-address:
Specifies endpoint address of remote DPN at the uplink or downlink.
MTU-size:
Specifies the packet MTU size on uplink or downlink.
Mobility-specific-tunnel-parameters:
Specifies profile specific tunnel parameters for uplink or downlink to the DPN. This may, for example, include GTP/TEID for 3gpp profile, or GRE/Key for ietf-pmip profile.
Nexthop:
Indicates next-hop information for uplink or downlink in external network such as IP address, MAC address, SPI of service function chain [I-D.ietf-sfc-nsh], SID of segment routing[I-D.ietf-6man-segment-routing-header] [I-D.ietf-spring-segment-routing-mpls], etc.
QoS-profile-specific-parameters:
Specifies profile specific QoS parameters for uplink or downlink to the DPN, such as QCI/TFT for 3gpp profile, [RFC6089]/[RFC7222] for ietf-pmip, or parameters of new profiles defined by extensions of this specification.
DPN-specific-parameters:
Specifies optional node-specific parameters needed by the uplink or downlink to the DPN, such as if-index or tunnel-if-number, that must be unique in the DPN.
Vendor-specific-parameters:
Specifies a vendor specific parameter space for the DPN.

Multi-DPN Agents will use only the DPNs list of a Context for processing as described in this section. A single-DPN Agent MAY use both the single-DPN model of Section 4.3.2.1 and the multi-DPN Context described here.
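A multi-DPN Context carrying one uplink and one downlink entry, per Figure 15, might look like the following structure. The addresses and the GTP TEID values under Mobility-specific-tunnel-parameters are invented example data, as is the dict representation itself.

```python
# Illustrative multi-DPN Context with one uplink and one downlink entry
# for the same DPN, following the attribute names of Figure 15.  All
# addresses and TEID values are invented for illustration.
context = {
    "context-id": "ctx-1001",
    "dpns": [
        {"dpn-id": "dpn-access-1",
         "direction": "uplink",
         "tunnel-local-address": "192.0.2.1",
         "tunnel-remote-address": "198.51.100.1",
         "mtu-size": 1428,
         "mobility-specific-tunnel-parameters": {"gtp-teid": 0x1A2B}},
        {"dpn-id": "dpn-access-1",
         "direction": "downlink",
         "tunnel-local-address": "192.0.2.1",
         "tunnel-remote-address": "198.51.100.1",
         "mtu-size": 1428,
         "mobility-specific-tunnel-parameters": {"gtp-teid": 0x3C4D}},
    ],
}

# An entry applies to the uplink if its direction is "uplink" or "both".
uplinks = [e for e in context["dpns"] if e["direction"] in ("uplink", "both")]
print(len(uplinks))  # 1
```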

4.3.3. Monitors

Monitors provide a mechanism to produce reports when events occur. A Monitor will have a target that specifies what is to be watched.

When a Monitor is specified, the configuration MUST be applicable to the attribute/entity monitored. For example, a Monitor using a Threshold configuration cannot be applied to a Context, because Contexts do not have thresholds. But such a monitor could be applied to a numeric threshold property of a Context.


(FPC-Mobility)
        |
        +---Monitors
               |
               +---Monitor-id
               |
               +---Target
               |
               +---Configuration

              

Figure 16: Common Monitor Model Structure

Monitor-id:
Name of the Monitor. The ID format MUST conform to Section 4.4.
Target:
Target to be monitored. This may be an event, a Context, a Vport, or one or more attributes of a Context. When the target is an attribute of a Context, the target name is the concatenation of the Context-Id and the relative path (separated by '/') to the attribute(s) to be monitored.
Configuration:
Determined by the Monitor subtype. Four report types are defined:
  • Periodic reporting specifies an interval by which a notification is sent to the Client.
  • Event reporting specifies a list of event types that, if they occur and are related to the monitored attribute, will result in sending a notification to the Client.
  • Scheduled reporting specifies the time (in seconds since Jan 1, 1970) when a notification for the monitor should be sent to the Client. Once this Monitor's notification is completed the Monitor is automatically de-registered.
  • Threshold reporting specifies one or both of a low and high threshold. When these values are crossed a corresponding notification is sent to the Client.
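
The target naming and configuration rules above can be sketched as follows. This is an illustrative, non-normative sketch in Python; the type and field names are assumptions of this example, not part of the FPC model.

```python
from dataclasses import dataclass

@dataclass
class Monitor:
    monitor_id: str   # MUST conform to the namespace rules in Section 4.4
    target: str       # an event, Context, Vport, or Context attribute path
    config_type: str  # 'periodic', 'event', 'scheduled' or 'threshold'
    config: dict      # report-type specific configuration

def attribute_target(context_id: str, *path: str) -> str:
    """Build the target for a Context-attribute monitor: the Context-Id
    concatenated with the relative path ('/'-separated) to the
    attribute(s) to be monitored."""
    return "/".join((context_id,) + path)

# A Threshold monitor applied to a numeric property of a Context:
m = Monitor(
    monitor_id="monitor-1",
    target=attribute_target("context-7", "dl", "qos-profile-parameters"),
    config_type="threshold",
    config={"low": 1000, "high": 10000000},
)
```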

4.4. Namespace and Format

Identifiers and names in FPC models that reside in the same namespace must be unique. That uniqueness must be maintained in the agent namespace or in a data-plane tenant namespace on an Agent. Tenant-namespace uniqueness MUST be applied to all elements of the tenant model, i.e. the Topology, Policy and Mobility models.

When a Policy needs to be applied to Contexts in all tenants on an Agent, the Agent SHOULD define that Policy to be visible from all the tenants. In this case, the Agent assigns a unique identifier in the agent namespace.

The format of identifiers can utilize any format with agreement between data-plane agent and client operators. The formats include but are not limited to Globally Unique IDentifiers (GUIDs), Universally Unique IDentifiers (UUIDs), Fully Qualified Domain Names (FQDNs), Fully Qualified Path Names (FQPNs) and Uniform Resource Identifiers (URIs).

The FPC model does not limit the format types; that choice is dictated by the FPC protocol in use. However, the identifiers used in the Mobility model need to be chosen so that runtime parameters can be handled in real time. The Topology and Policy models are not restricted by that requirement, as described in Section 3.

4.5. Attribute Application

Attributes in the FPC Topology and Policy SHOULD be pre-configured in an FPC Agent prior to the creation of Contexts and Vports. The FPC Agent requires those pre-configured attributes in order to derive a Context's detailed runtime attributes.

When an FPC Client creates a Context, the FPC Client can then indicate specific DPN-group(s) instead of, for example, all endpoint addresses of the DPN(s) and the MTU-size of the tunnels. This is because the FPC Agent can derive those details from the pre-configured DPN-group information in the FPC Topology.

Similarly, when a Vport is created for the Context, the FPC Agent can derive detailed forwarding policies from the pre-configured Policy information in the FPC Policy. The FPC Client therefore has no need to supply those specific policies to each of the Contexts that share the same set of Policy-groups.

This is intentional, as it provides FPC Clients the ability to reuse pre-configured FPC Topology and FPC Policy attributes. It helps to minimize over-the-wire exchanges and reduces system errors by exchanging less information.

The Agent turns the derived data into runtime attributes of the UL and DL objects, which are in the DPNs list of the Context (multi-DPN Agent case) or directly under the Context (single-DPN Agent case). The Agent consequently instantiates forwarding policies on the DPN(s) based on those attributes.

When a Context inherits from another Context as its parent, missing attributes in the child Context are provided by the parent Context (for example, the IMSI defined in the 3GPP extension).

Note that the Agent SHOULD update a Context's attributes instantiated on the DPN(s) when the applied Topology and Policy attributes change.

If an FPC Client modifies an existing runtime attribute of a Context that the FPC Agent derived, the FPC Agent MUST overwrite that attribute with the value that the Client brings to the Agent. However, risks exist; for example, the attribute could be outside the allowable range of the DPNs that the FPC Agent manages.
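
The derivation and overwrite behavior described in this section can be illustrated with a small, non-normative sketch. The attribute and group names below are examples only, assuming a DPN-group pre-configured in the FPC Topology.

```python
# Hypothetical pre-configured FPC Topology data (illustrative only).
PRECONFIGURED_DPN_GROUPS = {
    "dpn-group-1": {"tunnel-local-address": "192.0.2.1", "mtu-size": 1400},
}

def derive_context_attributes(dpn_group: str, client_attrs: dict) -> dict:
    """Derive runtime attributes from the pre-configured DPN-group,
    then overwrite them with any values the Client supplies."""
    derived = dict(PRECONFIGURED_DPN_GROUPS[dpn_group])
    derived.update(client_attrs)   # Client-supplied values win
    return derived
```

Here a Client that indicates only "dpn-group-1" still obtains the tunnel endpoint address and MTU-size, while a Client-supplied MTU-size of 1200 replaces the derived value of 1400.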

5. Protocol

5.1. Protocol Messages and Semantics

Five message types are supported:

Client to Agent Messages

CONFIG (HEADER ADMIN_STATE SESSION_STATE OP_TYPE BODY):
Processes a single configuration operation.
CONF_BUNDLE (1*[HEADER ADMIN_STATE SESSION_STATE TRANS_STRATEGY OP_TYPE BODY]):
A CONF_BUNDLE carries multiple operations that are executed as a group, with partial failures allowed. The operations are executed in ascending order of the OP_ID value in the OP_BODY. If a CONF_BUNDLE fails, any entities provisioned in the CURRENT operation are removed; however, any operations successfully completed prior to the current operation are preserved in order to reduce system load.
REG_MONITOR (HEADER ADMIN_STATE *[ MONITOR ]):
Registers a monitor at an Agent. The message includes information about the attribute to monitor and the reporting method. Note that a MONITOR_CONFIG is required for this operation.
DEREG_MONITOR (HEADER *[ MONITOR_ID ] [ boolean ]):
Deregisters monitors from an Agent; Monitor IDs are provided. The optional boolean indicates whether a successful DEREG triggers a NOTIFY with final data.
PROBE (HEADER MONITOR_ID):
Probes the status of a registered monitor.

Each message contains a header with the Client Identifier, an execution delay timer, and an operation identifier. The delay, in milliseconds, is the delay between the time the operation is received by the Agent and its execution.

The Client Identifier is used by the Agent to associate specific configuration characteristics, e.g. options used by the Client when communicating with the Agent, as well as the association of the Client and tenant in the information model.

Messages that create or update Monitors and Entities, i.e. CONFIG, CONF_BUNDLE and REG_MONITOR, carry an Administrative State, which specifies the administrative state of the message subject(s) after the successful completion of the operation. Values are 'enabled', 'disabled', and 'virtual'. If the state is set to 'virtual', any existing data on the DPN is removed. If the value is set to 'disabled', and the entity exists on the DPN, an operation to disable the associated entity will occur on the DPN. If set to 'enabled', the DPN will be provisioned.

CONF_BUNDLE also has the Transaction Strategy (TRANS_STRATEGY) attribute. This value specifies the behavior of the Agent when an operation fails while processing a CONF_BUNDLE message. The value of 'default' uses the default strategy defined for the message. The value 'all_or_nothing' will roll back all successfully executed operations within the bundle as well as the operation that failed.

An FPC interface protocol used to support this specification need not support CONF_BUNDLE messages or TRANS_STRATEGY types beyond 'default' when the protocol provides similar semantics. However, this MUST be clearly defined in the specification that defines the interface protocol.
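
The CONF_BUNDLE failure semantics can be sketched as follows. This is an illustrative outline, not a binding implementation; operations are modeled as dictionaries with hypothetical 'execute' and 'rollback' callables.

```python
def process_bundle(operations, trans_strategy="default"):
    """Execute operations in ascending OP_ID order.

    'default':        on failure, undo only the failing (CURRENT)
                      operation; prior successes are preserved.
    'all_or_nothing': on failure, also roll back all previously
                      successful operations in the bundle.
    """
    completed = []
    for op in sorted(operations, key=lambda o: o["op_id"]):
        try:
            op["execute"]()
            completed.append(op)
        except Exception:
            op["rollback"]()                  # remove CURRENT-op entities
            if trans_strategy == "all_or_nothing":
                for done in reversed(completed):
                    done["rollback"]()        # undo prior successes too
            return False, [o["op_id"] for o in completed]
    return True, [o["op_id"] for o in completed]
```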

For CONFIG and CONF_BUNDLE requests, an Agent will respond with an ERROR, an OK, or an OK WITH INDICATION that remaining data will be sent via a NOTIFY from the Agent to the Client (Section 5.1.1.6.2). When returning an 'ok' of any kind, optional data may be present.

Two Agent notifications are supported:

Agent to Client Messages (notifications)

CONFIG_RESULT_NOTIFY (see Table 15):
An asynchronous notification from Agent to Client based upon a previous CONFIG or CONF_BUNDLE request.
NOTIFY (see Table 16):
An asynchronous notification from Agent to Client based upon a registered MONITOR.

5.1.1. CONFIG and CONF_BUNDLE Messages

CONFIG and CONF_BUNDLE specify the following information for each operation in addition to the header information:

SESSION_STATE:
sets the expected state of the entities embedded in the operation body after successful completion of the operation. Values can be 'complete', 'incomplete' or 'outdated'. Any operation that is 'incomplete' MAY NOT result in communication between the Agent and the DPN. If the state is 'outdated', any new operations on these entities, or new references to them, have unpredictable results.
OP_TYPE:
specifies the type of operation. Valid values are 'create' (0), 'update' (1), 'query' (2) or 'delete' (3).
COMMAND_SET:
If the feature is supported, specifies the Command Set (see Section 5.1.1.4).
BODY:
A list of Clones (if supported), Vports and Contexts when the OP_TYPE is 'create' or 'update'; otherwise, a list of Targets for 'query' or 'delete'. See Section 6.2.2 for details.

5.1.1.1. Agent Operation Processing

The Agent will process entities provided in an operation in the following order:

  1. Clone Instructions, if the feature is supported
  2. Vports
  3. Contexts according to COMMAND_SET order processing

The following order of processing occurs when COMMAND_SETs are present:

  1. The Entity-specific COMMAND_SET is processed according to its bit order unless otherwise specified by the technology specific COMMAND_SET definition.
  2. Operation specific COMMAND_SET is processed upon all applicable entities (even if they had Entity-specific COMMAND_SET values present) according to its bit order unless otherwise specified by the technology specific COMMAND_SET definition.
  3. Operation OP_TYPE is processed for all entities.
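
The ordering rules above can be sketched as follows. This is an illustrative outline only; the entity layout and the integer bit positions are assumptions of this example.

```python
def process_operation(op):
    """Return the processing steps for one operation, in order."""
    steps = []
    for clone in op.get("clones", []):        # 1. Clone instructions first
        steps.append(("clone", clone))
    for vport in op.get("vports", []):        # 2. Vports
        steps.append(("vport", vport))
    for ctx in op.get("contexts", []):        # 3. Contexts
        for bit in sorted(ctx.get("command_set", [])):
            steps.append(("entity-command", ctx["id"], bit))
        for bit in sorted(op.get("command_set", [])):
            steps.append(("op-command", ctx["id"], bit))
        steps.append((op["op_type"], ctx["id"]))  # finally the OP_TYPE
    return steps
```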

When deleting objects, only their names need to be provided. However, attributes MAY be provided if the Client wishes to spare the Agent cache lookups.

When deleting an attribute, a leaf reference, i.e. a path to the attribute, should be provided.

5.1.1.2. Policy RPC Support

This optional feature permits policy elements (Policy-Group, Policy, Action and Descriptor) to be present in CONFIG or CONF_BUNDLE requests. It enables RPC-based policy provisioning.

5.1.1.3. Cloning

Cloning is an optional feature that allows a Client to copy one structure to another within an operation. Cloning is always done first within the operation (see Operation Order of Execution for more detail). If a Client wants to build an object and then Clone it, it uses a CONF_BUNDLE whose first operation contains the entities to be copied and whose second operation contains the Cloning instructions. A CLONE operation takes two arguments: the first is the name of the target to clone and the second is the name of the newly created entity. Individual attributes are not clonable; only Vports and Contexts can be cloned.

5.1.1.4. Command Bitsets

The COMMAND_SET is a technology-specific bitset that allows a single entity to be sent in an operation together with the requested sub-transactions to be completed. For example, a Context could have the Home Network Prefix absent, leaving it unclear whether the Client would like the address to be assigned by the Agent or whether this is an error. Rather than creating a specific command for assigning the IP address, a bit position in a COMMAND_SET is reserved for Agent-based IP assignment. Alternatively, an entity could be sent in an update operation that would otherwise be considered incomplete, e.g. missing some required data for the entity, but has sufficient data to complete the instructions provided in the COMMAND_SET.
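
The HNP example above can be sketched with a toy bitset. The bit positions here are purely illustrative; a real COMMAND_SET layout is technology-specific.

```python
# Illustrative bit positions only; a real layout is technology-specific.
ASSIGN_IP = 1 << 0    # request Agent-based assignment of the HNP
ASSIGN_DPN = 1 << 1   # request Agent-based DPN selection

def wants(command_set: int, flag: int) -> bool:
    """True when the given command bit is set."""
    return bool(command_set & flag)

# A Context arriving with the Home Network Prefix absent AND the
# ASSIGN_IP bit set asks the Agent to assign the address; without the
# bit, the missing prefix would be treated as an error.
cmds = ASSIGN_IP | ASSIGN_DPN
```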

5.1.1.5. Reference Scope

The Reference Scope is an optional feature that indicates the scope of the references used in a configuration command, i.e. a CONFIG or CONF_BUNDLE. These scopes are defined as:

  • none - all entities have no references to other entities. This implies that only Contexts are present, since Vports MUST have references to Policy-Groups.
  • op - All references are contained in the operation body, i.e. only intra-operation references exist.
  • bundle - All references exist in bundle (inter-operation/intra-bundle). NOTE - If this value is present in a CONFIG message it is equivalent to 'op'.
  • storage - One or more references exist outside of the operation and bundle. A lookup to a cache / storage is required.
  • unknown - the location of the references are unknown. This is treated as a 'storage' type.

If supported by the Agent, when cloning instructions are present, the scope MUST NOT be 'none'. When Vports are present the scope MUST be 'storage' or 'unknown'.

An Agent that only accepts 'op' or 'bundle' reference-scope messages is referred to as 'stateless', as it has no direct memory of references outside the messages themselves. This permits Agents with a low memory footprint. Even when an Agent supports all message types, an 'op' or 'bundle' scoped message can be processed quickly by the Agent, as it does not require storage access.
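
The scope constraints above can be condensed into a small validation sketch (illustrative only; the argument names are assumptions of this example).

```python
def validate_scope(scope, has_clones, has_vports):
    """Check a message's reference scope against its contents."""
    if has_clones and scope == "none":
        return False          # cloning instructions imply references
    if has_vports and scope not in ("storage", "unknown"):
        return False          # Vports reference Policy-Groups in storage
    return True
```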

5.1.1.6. Operation Response

5.1.1.6.1. Immediate Response

Results will be supplied per operation input. Each result contains the RESULT_STATUS and OP_ID that it corresponds to. RESULT_STATUS values are:

  • OK - Success
  • ERR - An Error has occurred
  • OK_NOTIFY_FOLLOWS - The Operation has been accepted by the Agent but further processing is required. A CONFIG_RESULT_NOTIFY will be sent once the processing has succeeded or failed.

Any result MAY contain nothing, or it may contain entities created or partially fulfilled as part of the operation, as specified in Table 14. For Clients that need attributes returned quickly for call processing, the Agent MUST respond with an OK_NOTIFY_FOLLOWS and, at a minimum, the attributes assigned by the Agent in the response. These situations MUST be determined through the use of Command Sets (see Section 5.1.1.4).

If an error occurs the following information is returned.

  • ERROR_TYPE_ID (Unsigned 32) - The identifier of a specific error type
  • ERROR_INFORMATION - An OPTIONAL string of no more than 1024 characters.
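
A per-operation result might be modeled as follows. This is a sketch; the field names mirror the properties above but are not normative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class OpResult:
    op_id: int
    result_status: str                       # 'OK', 'ERR', 'OK_NOTIFY_FOLLOWS'
    entities: Optional[list] = None          # created / partially fulfilled
    error_type_id: Optional[int] = None      # Unsigned 32, present on ERR
    error_information: Optional[str] = None  # at most 1024 characters

def error_result(op_id: int, err_id: int, info: str) -> OpResult:
    assert len(info) <= 1024                 # enforce the length limit
    return OpResult(op_id, "ERR", error_type_id=err_id,
                    error_information=info)
```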

5.1.1.6.2. Asynchronous Notification

A CONFIG_RESULT_NOTIFY occurs after the Agent has completed processing related to a CONFIG or CONF_BUNDLE request. It is an asynchronous communication from the Agent to the Client.

The values of the CONFIG_RESULT_NOTIFY are detailed in Table 15.

5.1.2. Monitors

When a monitor has a reporting configuration of SCHEDULED it is automatically de-registered after the NOTIFY occurs. An Agent or DPN may temporarily suspend monitoring if insufficient resources exist. In such a case the Agent MUST notify the Client.

All monitored data can be requested by the Client at any time using the PROBE message; thus, a reporting configuration is optional, and when it is not present only PROBE messages may be used for monitoring. If a SCHEDULED or PERIODIC configuration is provided during registration with a time-related value (time or period, respectively) of 0, a NOTIFY is immediately sent and the monitor is immediately de-registered. When a MONITOR has not been installed, this method results in an immediate NOTIFY sufficient for the Client's needs and lets the Agent know that the Client has no further need for the monitor to be registered. An Agent may reject a registration if it or the DPN has insufficient resources.
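
The zero-valued SCHEDULED/PERIODIC shortcut can be sketched as follows (illustrative structure; the dictionary field names are assumptions of this example).

```python
def register_monitor(monitor, send_notify, registry):
    """Register a monitor, honoring the zero-value shortcut: a
    SCHEDULED time of 0 or a PERIODIC period of 0 triggers an
    immediate NOTIFY and immediate de-registration."""
    cfg = monitor.get("config", {})
    kind = monitor.get("config_type")
    if (kind == "periodic" and cfg.get("period") == 0) or \
       (kind == "scheduled" and cfg.get("time") == 0):
        send_notify(monitor["id"])   # immediate NOTIFY ...
        return False                 # ... and the monitor is not kept
    registry[monitor["id"]] = monitor
    return True
```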

PROBE messages are also used by a Client to retrieve information about a previously installed monitor. The PROBE message SHOULD identify one or more monitors by including the associated monitor identifier(s). An Agent receiving a PROBE message sends the requested information in one or more NOTIFY messages.

5.1.2.1. Operation Response

5.1.2.1.1. Immediate Response

Results will be supplied per operation input. Each result contains the RESULT_STATUS and OP_ID that it corresponds to. RESULT_STATUS values are:

  • OK - Success
  • ERR - An Error has occurred

Any OK result will contain no more information.

If an error occurs the following information is returned.

  • ERROR_TYPE_ID (Unsigned 32) - The identifier of a specific error type
  • ERROR_INFORMATION - An OPTIONAL string of no more than 1024 characters.

5.1.2.1.2. Asynchronous Notification

A NOTIFY can be sent as part of de-registration, as a trigger based upon a Monitor Configuration, or in response to a PROBE. A NOTIFY comprises a unique Notification Identifier from the Agent, the Monitor ID the notification applies to, the Trigger for the notification, a timestamp of when the notification's associated event occurred, and data specific to the monitored value's type.

5.2. Protocol Operation

5.2.1. Simple RPC Operation

An FPC Client and Agent MUST identify themselves using the CLI_ID and AGT_ID respectively to ensure that for all transactions a recipient of an FPC message can unambiguously identify the sender of the FPC message. A Client MAY direct the Agent to enforce a rule in a particular DPN by including a DPN_ID value in a Context. Otherwise the Agent selects a suitable DPN to enforce a Context and notifies the Client about the selected DPN using the DPN_ID.

All messages sent from a Client to an Agent MUST be acknowledged by the Agent. The response must include all entities as well as status information, which indicates the result of processing the message, using the RESPONSE_BODY property. In case processing of the message results in a failure, the Agent sets the ERROR_TYPE_ID and ERROR_INFORMATION accordingly and MAY clear, in the response, the Context or Vport that caused the failure.

If Agent configuration dictates, or if processing the request may take a significant amount of time, the Agent MAY respond with an OK_NOTIFY_FOLLOWS with an optional RESPONSE_BODY containing the partially completed entities. When an OK_NOTIFY_FOLLOWS is sent, the Agent will, upon completion or failure of the operation, respond with an asynchronous CONFIG_RESULT_NOTIFY to the Client.

A Client MAY add a property to a Context without providing all required details of the attribute's value. In such a case the Agent SHOULD determine the missing details and provide the completed property description back to the Client. If the processing will take too long, or if Agent configuration dictates, the Agent MAY respond with an OK_NOTIFY_FOLLOWS with a RESPONSE_BODY containing the partially completed entities.

In case the Agent cannot determine the missing value of an attribute per the Client's request, it leaves the attribute's value cleared in the RESPONSE_BODY and sets the RESULT to Error along with the ERROR_TYPE_ID and ERROR_INFORMATION. As an example, the Control-Plane needs to set up a tunnel configuration in the Data-Plane but has to rely on the Agent to determine the tunnel endpoint associated with the DPN that supports the Context. The Client adds the tunnel property attribute to the FPC message and clears the value of the attribute (e.g. the IP address of the local tunnel endpoint). The Agent determines the tunnel endpoint and includes the completed tunnel property in its response to the Client.

Figure 17 illustrates an exemplary session life-cycle based on Proxy Mobile IPv6 registration via MAG Control-Plane function 1 (MAG-C1) and handover to MAG Control-Plane function 2 (MAG-C2). Edge DPN1 represents the Proxy CoA after attachment, whereas Edge DPN2 serves as the Proxy CoA after handover. In this exemplary architecture, the FPC Agent and the network control function are assumed to be co-located with the Anchor-DPN, e.g. a router.

                                              +-------Router--------+
                        +-----------+         |+-------+ +---------+|
+------+ +------+     +-----+ FPC   |          | FPC   | |  Anchor |
|MAG-C1| |MAG-C2|     |LMA-C| Client|          | Agent | |   DPN   |
+------+ +------+     +-----+-------+          +-------+ +---------+
[MN attach]  |            |                          |           |
   |-------------PBU----->|                          |           |
   |         |            |---(1)--CONFIG(CREATE)--->|           |
   |         |            |   [ CONTEXT_ID,          |--tun1 up->|
   |         |            |   DOWNLINK(QOS/TUN),     |           |
   |         |            |   UPLINK(QOS/TUN),       |--tc qos-->|
   |         |            |     IP_PREFIX(HNP) ]     |           |
   |         |            |<---(2)- OK --------------|-route add>|
   |         |            |                          |           |
   |<------------PBA------|                          |           |
   |         |            |                          |           |
   | +----+  |            |                          |           |
   | |Edge|  |            |                          |           |
   | |DPN1|  |            |                          |           |
   | +----+  |            |                          |           |
   |   |                                                         |
   |   |-=======================================================-|
   |                      |                          |           |
   |   [MN handover]      |                          |           |
   |         |---PBU ---->|                          |           |
   |         |            |--(3)- CONFIG(MODIFY)---->|           |
   |         |<--PBA------|    [ CONTEXT_ID          |-tun1 mod->|
   |         |            |      DOWNLINK(TUN),      |           |
   |         |  +----+    |      UPLINK(TUN) ]       |           |
   |         |  |Edge|    |<---(4)- OK --------------|           |
   |         |  |DPN2|    |                          |           |
   |         |  +----+    |                          |           |
   |         |    |       |                          |           |
   |         |    |-============================================-|
   |         |            |                          |           |

      

Figure 17: Exemplary Message Sequence (focus on FPC reference point)

After reception of the Proxy Binding Update (PBU) at the LMA Control-Plane function (LMA-C), the LMA-C selects a suitable DPN, which serves as Data-Plane anchor to the mobile node's (MN) traffic. The LMA-C adds a new logical Context to the DPN to treat the MN's traffic (1) and includes a Context Identifier (CONTEXT_ID) to the CONFIG command. The LMA-C identifies the selected Anchor DPN by including the associated DPN identifier.

The LMA-C adds properties during the creation of the new Context. One property is added to specify the forwarding-tunnel type and endpoints (Anchor DPN, Edge DPN1) in each direction (as required). Another property is added to specify the QoS differentiation that the MN's traffic should experience. On reception of the Context, the FPC Agent utilizes local configuration commands to create the tunnel (tun1) as well as the traffic control (tc) to enable QoS differentiation. After the configuration has been completed, the Agent applies a new route to forward all traffic destined to the MN's HNP, specified as a property in the Context, to the configured tunnel interface (tun1).

During handover, the LMA-C receives an updating PBU from the handover target MAG-C2. The PBU refers to a new Data-Plane node (Edge DPN2) to represent the new tunnel endpoints in the downlink and uplink, as required. The LMA-C sends a CONFIG message (3) to the Agent to modify the existing tunnel property of the existing Context and to update the tunnel endpoint from Edge DPN1 to Edge DPN2. Upon reception of the CONFIG message, the Agent applies the updated tunnel property to the local configuration and responds to the Client (4).

                                              +-------Router--------+
                        +-----------+         |+-------+ +---------+|
+------+ +------+     +-----+ FPC   |          | FPC   | |  Anchor |
|MAG-C1| |MAG-C2|     |LMA-C| Client|          | Agent | |   DPN   |
+------+ +------+     +-----+-------+          +-------+ +---------+
[MN attach]  |            |                          |           |
   |-------------PBU----->|                          |           |
   |         |            |---(1)--CONFIG(MODIFY)--->|           |
   |<------------PBA------|   [ CONTEXT_ID,          |--tun1   ->|
   |         |            |   DOWNLINK(TUN delete),  |    down   |
   |         |            |   UPLINK(TUN delete) ]   |           |
   |         |            |                          |           |
   |         |            |<-(2)- OK ----------------|           |
   |         |            |                          |           |
   |         |  [ MinDelayBeforeBCEDelete expires ]  |           |
   |         |            |                          |           |
   |         |            |---(3)--CONFIG(DELETE)--->|-- tun1 -->|
   |         |            |                          |  delete   |
   |         |            |<-(4)- OK ----------------|           |
   |         |            |                          |-- route ->|
   |         |            |                          |   remove  |
   |         |            |                          |           |
      

Figure 18: Exemplary Message Sequence (focus on FPC reference point)

When a teardown of the session occurs, MAG-C1 sends a PBU with a lifetime value of zero. The LMA-C sends a CONFIG message (1) to the Agent to modify the existing tunnel property of the existing Context to delete the tunnel information. Upon reception of the CONFIG message, the Agent removes the tunnel configuration and responds to the Client (2). Per [RFC5213], the PBA is sent back immediately after the PBU is received.

If no valid PBA is received after the expiration of the MinDelayBeforeBCEDelete timer (see [RFC5213]), the LMA-C will send a CONFIG (3) message with a deletion request for the Context. Upon reception of the message, the Agent deletes the tunnel and route on the DPN and responds to the Client (4).

When a multi-DPN Agent is used the DPN list permits several DPNs to be provisioned in a single message.


                        +-----------+           +-------+ +---------+
+------+ +------+     +-----+ FPC   |           | FPC   | |  Anchor |
|MAG-C1| |MAG-C2|     |LMA-C| Client|           | Agent | |   DPN1  |
+------+ +------+     +-----+-------+           +-------+ +---------+
[MN attach]  |            |                          |           |
   |-------------PBU----->|                          |           |
   |         |            |---(1)--CONFIG(CREATE)--->|           |
   |         |            |   [ CONTEXT_ID, DPNS [   |--tun1 up->|
   |         |            |[DPN1,DOWNLINK(QOS/TUN)], |           |
   |         |            | [DPN1,UPLINK(QOS/TUN)],  |--tc qos-->|
   |         |            |[DPN2,DOWNLINK(QOS/TUN)], |           |
   |         |            | [DPN2,UPLINK(QOS/TUN)],  |           |
   |         |            |     IP_PREFIX(HNP) ]     |           |
   |         |            |<-(2)- OK_NOTIFY_FOLLOWS -|-route add>|
   |         |            |                          |           |
   |<------------PBA------|                          |           |
   |         |            |                          |           |
   | +----+               |                          |           |
   | |Edge|               |                          |           |
   | |DPN2|               |                          |           |
   | +----+               |                          |           |
   |   |<---------------------- tun1 up -------------|           |
   |   |<---------------------- tc qos --------------|           |
   |   |<---------------------- route add -----------|           |
   |   |                  |                          |           |
   |   |                  |<(3) CONFIG_RESULT_NOTIFY |           |
   |   |                  |   [ Response Data ]      |           |
   |   |                  |                          |           |
      

Figure 19: Exemplary Message Sequence for Multi-DPN Agent

Figure 19 shows how the first two messages in Figure 17 are supported when a multi-DPN Agent communicates with both Anchor DPN1 and Edge DPN2. In this case, the FPC Client sends the downlink and uplink information for both DPNs in the "DPNS" list of the same Context. Message 1 shows the DPNS list with all entries. Each entry identifies the DPN and the direction ('uplink', 'downlink' or 'both'). Generally, the 'both' direction is not used for normal mobility-session processing; it is commonly used for the instantiation of Policies on a specific DPN (see Section 5.2.4).

The Agent responds with an OK_NOTIFY_FOLLOWS while it simultaneously provisions both DPNs. Upon successful completion, the Agent responds to the Client with a CONFIG_RESULT_NOTIFY indicating the operation status.

5.2.2. Policy And Mobility on the Agent

A Client may build Policy and Topology using any mechanism on the Agent. Such entities are not always required to be constructed in real time and, therefore, no specific messages are defined for them in this specification.

The Client may add, modify or delete many Vports and Contexts in a single FPC message. This includes linking Contexts to Actions and Descriptors, i.e. a Rule. As an example, a Rule that matches an associated Descriptor and rewrites an arriving packet's destination IP address from IP_A to IP_B can be enforced in the Data-Plane via an Agent such that it also implicitly matches an arriving packet's source IP address against IP_B and rewrites the source IP address to IP_A.

Figure 20 illustrates the generic policy configuration model as used between a FPC Client and a FPC Agent.


  Descriptor_1 -+          +- Action_1
                |          |
  Descriptor_2 -+--<Rule>--+- Action_2
                  +------+
                  /Order#/-------------+
                  +------+             |
                                       |
  Descriptor_3 -+          +- Action_3 +-<PolicyID>
                |          |           |  ^
  Descriptor_4 -+--<Rule>--+- Action_4 |  |
                  +------+             | <PolicyGroupID>
                  /Order#/-------------+  ^
                  +------+                |
                                         <VportID>

  +-------------------+     +---------------------+
  | Bind 1..M traffic |     |  Bind 1..N traffic  |
  |  Descriptors to   | --> |  treatment actions  |
  |     a Policy,     |     |      to a Policy,   |
  | Policy-Group and  |     |   Policy-Group and  |
  |       Vport       |     |       Vport         |
  +-------------------+     +---------------------+

 |                                                 |
 +-------------- Data-Plane Rule ------------------+
        

Figure 20: Structure of Policies and Vports

As depicted in Figure 20, the Vport represents the anchor of Rules through the Policy-group, Policy, Rule hierarchy configured by any mechanism including RPC or N. A Client and Agent use the identifier of the associated Policy to directly access the Rule and perform modifications of traffic Descriptors or Action references. A Client and Agent use the identifiers to access the Descriptors or Actions to perform modifications. From the viewpoint of packet processing, arriving packets are matched against traffic Descriptors and processed according to the treatment Actions specified in the list of properties associated with the Vport.

A Client complements a rule's Descriptors with a Rule's Order (priority) value to allow unambiguous traffic matching on the Data-Plane.
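
The role of the Order value can be sketched as a simple first-match lookup. This is illustrative only; Rules are modeled as dictionaries with hypothetical fields.

```python
def match_packet(packet, rules):
    """Return the Actions of the first Rule whose Descriptors all
    match, evaluating Rules by ascending Order (priority) value."""
    for rule in sorted(rules, key=lambda r: r["order"]):
        if all(packet.get(k) == v for k, v in rule["descriptors"].items()):
            return rule["actions"]
    return None
```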

Figure 21 illustrates the generic context configuration model as used between a FPC Client and a FPC Agent.


  TrafficSelector_1
           |
  profile-parameters
           |
  mobility-profile-- dl ------+
                     ^        |
                     |      qos-profile
                <ContextID1>       |
                     ^        per-mn-agg-max-dl_2
                     |
                <ContextID2>

  +-------------------+     +---------------------+
  | Bind 1..M traffic |     |  Bind 1..N traffic  |
  |    selectors to   | --> |  treatment / qos    |
  |     a Context     |     |  actions to a       |
  |                   |     |       Context       |
  +-------------------+     +---------------------+

 |                                                 |
 +-------------- Data-Plane Rule ------------------+
        

Figure 21: Structure of Contexts

As depicted in Figure 21, the Context represents a mobility session hierarchy. A Client and Agent directly assign values such as downlink traffic descriptors and QoS information, and use the Context identifiers to access and modify those values. From the viewpoint of packet processing, arriving packets are matched against the traffic Descriptors and processed according to the QoS or other mobility-profile-related Actions specified in the Context's properties. If present, the final action is to use the Context's tunnel information to encapsulate and forward the packet.

A second Context (ContextID2 in the figure) also references the first (ContextID1). Depending upon the technology, a property in a parent Context MAY be inherited by its descendants. This permits a concise over-the-wire representation. When a Client deletes a parent Context, all of its children are also deleted.
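The inheritance and cascading-delete behavior can be sketched as follows. This is a non-normative illustration; the storage layout and property names are assumptions made for the example:

```python
# Illustrative parent/child Context handling: a child inherits any
# property it does not set itself, and deleting a parent Context
# recursively deletes its descendants.

contexts = {}  # context_id -> {"parent": id_or_None, "props": {...}}

def effective_property(ctx_id, name):
    """Resolve a property, walking up the parent chain if unset."""
    while ctx_id is not None:
        ctx = contexts[ctx_id]
        if name in ctx["props"]:
            return ctx["props"][name]
        ctx_id = ctx["parent"]
    return None

def delete_context(ctx_id):
    """Delete a Context and, recursively, all of its children."""
    children = [cid for cid, c in contexts.items()
                if c["parent"] == ctx_id]
    for child in children:
        delete_context(child)
    del contexts[ctx_id]

contexts["ctx1"] = {"parent": None, "props": {"qos-profile": "gold"}}
contexts["ctx2"] = {"parent": "ctx1", "props": {"dl-tunnel": "tun2"}}
```

Because ctx2 omits "qos-profile", it need not be sent over the wire; it resolves through ctx1.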

5.2.3. Optimization for Current and Subsequent Messages

5.2.3.1. Bulk Data in a Single Operation

A single operation MAY contain multiple entities. This permits bundling of requests into a single operation. In the example below, two PMIP sessions are created via two PBU messages and sent to the Agent in a single CONFIG message (1). Upon receiving the message, the Agent responds with an OK_NOTIFY_FOLLOWS (2), completes the work on the DPN to activate the associated sessions, and then responds to the Client with a CONFIG_RESULT_NOTIFY (3).

                                              +-------Router--------+
                        +-----------+         |+-------+ +---------+|
+------+ +------+     +-----+ FPC   |          | FPC   | |  Anchor |
|MAG-C1| |MAG-C2|     |LMA-C| Client|          | Agent | |   DPN   |
+------+ +------+     +-----+-------+          +-------+ +---------+
[MN1 attach] |            |                          |           |
   |-------------PBU----->|                          |           |
   |  [MN2 attach]        |                          |           |
   |         |---PBU----->|                          |           |
   |         |            |                          |           |
   |         |            |---(1)--CONFIG(CREATE)--->|           |
   |<------------PBA------|   [ CONTEXT_ID 1,        |--tun1 up->|
   |         |            |   DOWNLINK(QOS/TUN),     |           |
   |         |<--PBA------|   UPLINK(QOS/TUN),       |--tc1 qos->|
   |         |            |     IP_PREFIX(HNP) ]     |           |
   |         |            |   [ CONTEXT_ID 2,        |-route1    |
   |         |            |   DOWNLINK(QOS/TUN),     |   add>    |
   |         |            |   UPLINK(QOS/TUN),       |           |
   |         |            |     IP_PREFIX(HNP) ]     |--tun2 up->|
   |         |            |<-(2)- OK_NOTIFY_FOLLOWS--|           |
   |         |            |                          |--tc2 qos->|
   |<------------PBA------|                          |           |
   |         |            |                          |-route2    |
   |         |            |<(3) CONFIG_RESULT_NOTIFY |   add>    |
   |         |            |   [ Response Data ]      |           |
   |         |            |                          |           |
   |         |            |                          |           |
      

Figure 22: Exemplary Bulk Entity with Asynchronous Notification Sequence (focus on FPC reference point)
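The bulk CONFIG(CREATE) of Figure 22 could be encoded as sketched below. The field names follow the figure, but the structure is an illustrative assumption, not a normative wire format:

```python
# Hypothetical encoding of a bulk CONFIG message: two Context
# entities carried in a single CREATE operation, as in Figure 22.

def make_config(op_type, entities):
    """Bundle several entities into one CONFIG message."""
    return {"message": "CONFIG", "op_type": op_type,
            "entities": entities}

def make_context(ctx_id, hnp):
    """Build one Context entity with up/downlink QoS and tunnel data."""
    return {"context_id": ctx_id,
            "downlink": {"qos": "default", "tun": "gre"},
            "uplink": {"qos": "default", "tun": "gre"},
            "ip_prefix": hnp}

# One message replaces two separate CONFIG round trips.
msg = make_config("CREATE", [make_context(1, "2001:db8:1::/64"),
                             make_context(2, "2001:db8:2::/64")])
```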

5.2.3.2. Configuration Bundles

Bundles provide transaction boundaries around work in a single message. Operations in a bundle MUST be successfully executed in the order specified. This allows references created in one operation to be used in a subsequent operation in the bundle.

The example bundle shows, in Operation 1 (OP 1), the creation of Context 1, which is then referenced in Operation 2 (OP 2) as the parent of Context 2 (via PARENT_CONTEXT_ID). If OP 1 fails, OP 2 will not be executed. The advantage of CONF_BUNDLE is the preservation of dependency order in a single message, as opposed to sending multiple CONFIG messages and awaiting results from the Agent.

When a CONF_BUNDLE fails, any entities provisioned in the current (failing) operation are removed; however, any successful operations completed prior to the current operation are preserved in order to reduce system load.

                        +-------Router--------+
+-----------+           |+-------+ +---------+|
|   FPC     |            | FPC   | |  Anchor |
|  Client   |            | Agent | |   DPN   |
+-----------+            +-------+ +---------+
     |                          |           |
     |--CONF_BUNDLE(CREATE)---->|           |
     |   [ OP 1, [VPORT X ]     |           |
     |   [ CONTEXT_ID 1,        |           |
     |   DOWNLINK(QOS/TUN),     |           |
     |   UPLINK(QOS/TUN),       |           |
     |     IP_PREFIX(HNP) ]     |           |
     |   [ OP 2,                |           |
     |    [ CONTEXT_ID 2,       |           |
     |   PARENT_CONTEXT_ID 1,   |           |
     |   UPLINK(QOS/TUN),       |           |
     |   DOWNLINK(QOS/TUN) ] ]  |           |
     |                          |           |
      

Figure 23: Exemplary Bundle Message (focus on FPC reference point)
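The bundle semantics described above (in-order execution, rollback of only the failing operation) can be sketched as follows. This is a non-normative model of the behavior, under the assumption that each operation is an iterable of (identifier, entity) pairs:

```python
# Sketch of CONF_BUNDLE processing: operations run in order; on
# failure the current operation's entities are removed, earlier
# operations are preserved, and later operations are not executed.

def apply_bundle(store, operations):
    for op in operations:
        created = []
        try:
            for entity_id, entity in op:
                store[entity_id] = entity
                created.append(entity_id)
        except Exception:
            for entity_id in created:  # undo only the failing op
                del store[entity_id]
            return "FAIL"
        # entities from successful operations stay provisioned
    return "OK"

def failing_op():
    """An operation that provisions one entity, then fails."""
    yield ("ctx2", {"parent": "ctx1"})
    raise ValueError("op failed")
```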

5.2.3.3. Cloning Feature (Optional)

Cloning provides a high speed copy/paste mechanism. The example below shows a single Context that will be copied two times. A subsequent update will then override copied values. To avoid the accidental activation of the Contexts on the DPN, the CONFIG (1) message with the cloning instruction has a SESSION_STATE with a value of 'incomplete' and OP_TYPE of 'CREATE'. A second CONFIG (2) is sent with the SESSION_STATE of 'complete' and OP_TYPE of 'UPDATE'. The second message includes any differences between the original (copied) Context and its Clones.

                        +-------Router--------+
+-----------+           |+-------+ +---------+|
|   FPC     |            | FPC   | |  Anchor |
|  Client   |            | Agent | |   DPN   |
+-----------+            +-------+ +---------+
     |                          |           |
     |--CONF_BUNDLE(CREATE)---->|           |
     |   [ OP 1,                |           |
     |    [ SESSION_STATE       |           |
     |       (incomplete) ],    |           |
     | [CLONE SRC=2, TARGET=3], |           |
     | [CLONE SRC=2, TARGET=4], |           |
     |    [ CONTEXT_ID 2,       |           |
     |   PARENT_CONTEXT_ID 1,   |           |
     |   UPLINK(QOS/TUN),       |           |
     |   DOWNLINK(QOS/TUN),     |           |
     |   IP_PREFIX(HNP)    ] ]  |           |
     |<----- OK ----------------|           |
     |                          |           |
     |--CONF_BUNDLE(UPDATE)---->|           |
     |    [ CONTEXT_ID 3,       |           |
     | PARENT_CONTEXT_ID(empty),|           |
     |   UPLINK(QOS/TUN),       |           |
     |   DOWNLINK(QOS/TUN) ],   |           |
     |    [ CONTEXT_ID 4,       |           |
     | PARENT_CONTEXT_ID(empty),|           |
     |   UPLINK(QOS/TUN),       |           |
     |   DOWNLINK(QOS/TUN) ] ]  |           |
     |<----- OK ----------------|           |
     |                          |           |
      

Figure 24: Exemplary Clone Message (focus on FPC reference point)

Cloning has the added advantage of reducing the over-the-wire data size required to create multiple entities. This can improve performance if serialization / deserialization of multiple entities incurs some form of performance penalty.
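The two-phase flow of Figure 24 can be sketched as a copy followed by per-clone overrides. The store layout and field names here are illustrative assumptions:

```python
# Sketch of the cloning feature: while SESSION_STATE is 'incomplete',
# a source Context is deep-copied to each clone target; a second
# CONF_BUNDLE(UPDATE) then overrides the fields that differ.
import copy

store = {2: {"parent_context_id": 1, "uplink": "qos-a",
             "ip_prefix": "hnp-2"}}

def clone(src, targets):
    """CLONE SRC=src, TARGET=t for each t: full copies of the source."""
    for tgt in targets:
        store[tgt] = copy.deepcopy(store[src])

def update(ctx_id, overrides):
    """Second pass: apply only the differences to each clone."""
    store[ctx_id].update(overrides)

clone(2, [3, 4])
update(3, {"parent_context_id": None, "ip_prefix": "hnp-3"})
update(4, {"parent_context_id": None, "ip_prefix": "hnp-4"})
```

Only the overridden fields travel in the second message; the common bulk of each Context is never retransmitted.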

5.2.3.4. Command Bitsets (Optional)

Command Sets permit a single, unified data structure, e.g. a CONTEXT, to specify which activities are expected to be performed on the DPN. This has several advantages:

  • Rather than sending N messages, each with a single operation to be performed on the DPN, a single message can be used with a Command Set that specifies the N DPN operations to be executed.
  • Errors become more obvious. For example, if the HNP is not provided but the Client did not specify that the HNP should be assigned by the Agent, this error is easily detected. Without the Command Set, the default behavior of the Agent would be to assign the HNP and respond to the Client, where the error would be detected and subsequent messaging would be required to remedy it. Such situations increase the time to error detection and the overall system load.
  • Unambiguous provisioning specification. The Agent is exactly in sync with the expectations of the Client, as opposed to guessing what DPN work could be done based upon the data present at the Agent. This greatly increases the speed with which the Agent can complete work.
  • Permits different technologies with different instructions to be sent in the same message.

As Command Bitsets are technology specific, e.g. PMIP or 3GPP mobility, the type of work on the DPN and the amount of data present in a Context or Port will vary. Using technology-specific instructions allows the Client to serve multiple technologies, and MAY result in a more stateless Client, as the instructions are transferred to the Agent, which matches the desired technology-specific instructions with the capabilities and over-the-wire protocol of the DPN more efficiently.
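A Command Bitset and the HNP error case above can be sketched as follows. The bit assignments and command names are purely illustrative; actual bit values are technology specific:

```python
# Sketch of a technology-specific Command Bitset: each bit names one
# DPN activity the Client expects the Agent to perform.
ASSIGN_IP = 1 << 0          # e.g. Agent is to assign the HNP
SESSION_ESTABLISH = 1 << 1
TUNNEL_SETUP = 1 << 2

def validate(commands, context):
    """Detect the error case described above: no prefix supplied and
    no instruction for the Agent to assign one."""
    if context.get("ip_prefix") is None and not commands & ASSIGN_IP:
        return "error: no HNP and ASSIGN_IP not requested"
    return "ok"
```

With the bitset, the Agent rejects the inconsistent request immediately instead of assigning an HNP the Client never asked for.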

5.2.3.5. Reference Scope (Optional)

Although entities MAY refer to any other entity of an appropriate type, e.g. Contexts can refer to Vports or other Contexts, the Reference Scope gives the Agent an idea of where those references reside: in the same operation, in an operation within the same CONF_BUNDLE message, or in storage. There may also be no references. This permits the Agent to know when it can stop searching for a reference it cannot find. For example, if a CONF_BUNDLE message uses a Reference Scope of type 'op', the Agent merely needs to keep an operation-level cache and consumes no memory or resources searching across the many operations in the CONF_BUNDLE message or the data store.

Agents can also be stateless by supporting only the 'none', 'op' and 'bundle' Reference Scopes. This does not imply that they lack storage, but merely limits the search space they use when looking up references for an entity. The figure below shows the caching hierarchy provided by the Reference Scope.

Caches are temporarily created at each level; as the scope includes more caches, the number of entities that are searched increases. Figure 25 shows an example containment hierarchy covering all caches.

                       +---------------+
                       | Global Cache  |
                       |  (storage)    |
                       +------+--------+
                              |
                              +----------------------+
                              |                      |
                       +------+--------+      +------+--------+
                       | Bundle Cache  |      | Bundle Cache  |
                       |   (bundle)    | .... |   (bundle)    |
                       +------+--------+      +------+--------+
                              |
         +--------------------+--------------------+
         |                    |                    |
+--------+---------+ +--------+---------+ +--------+---------+
| Operation Cache  | | Operation Cache  | | Operation Cache  |
|       (op)       | |       (op)       | |       (op)       |
+------------------+ +------------------+ +------------------+

                          (no cache)
      

Figure 25: Exemplary Hierarchical Cache
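Reference resolution under the hierarchy of Figure 25 can be sketched as below. The scope names follow the text; the cache representation is an assumption made for the example:

```python
# Sketch of reference resolution under a Reference Scope: the Agent
# searches only the caches the scope names, innermost first, and may
# stop as soon as the scope's search space is exhausted.

def resolve(ref, scope, op_cache, bundle_cache, global_cache):
    """Look up 'ref' in the caches permitted by 'scope'."""
    search = {"none": [],
              "op": [op_cache],
              "bundle": [op_cache, bundle_cache],
              "storage": [op_cache, bundle_cache, global_cache]}[scope]
    for cache in search:
        if ref in cache:
            return cache[ref]
    return None  # the scope tells the Agent it may stop here
```

A stateless Agent simply never offers the 'storage' scope, so the global cache is never consulted.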

5.2.4. Pre-provisioning

Although Contexts are used for session-based lifecycle elements, Vports may exist outside of a specific lifecycle and represent more general policies that may affect multiple Contexts (sessions). Pre-provisioning of Vports permits policy and administrative use cases to be executed. For example, creating tunnels to forward traffic to a trouble-management platform and dropping packets destined to a defective web server can both be accomplished by provisioning Vports.

The figure below shows a CONFIG message used to install a Policy-group, policy-group1, using a Context set aside for pre-provisioning on a DPN.

                        +-------Router--------+
+-----------+           |+-------+ +---------+|
|   FPC     |            | FPC   | |  Anchor |
|  Client   |            | Agent | |   DPN   |
+-----------+            +-------+ +---------+
     |                          |           |
     |------CONFIG(CREATE)----->|           |
     |  [ VPORT_ID port1,       |           |
     |     [ policy-group1 ] ]  |           |
     |  [ CONTEXT_ID preprov,   |           |
     |     DPN_ID X,            |           |
     |     [ port1 ] ]          |           |
     |                          |           |
      

Figure 26: Exemplary Config Message for policy pre-provisioning

5.2.4.1. Basename Registry Feature (Optional)

The Optional BaseName Registry support feature is provided to permit Clients and tenants with common scopes, referred to in this specification as BaseNames, to track the state of provisioned policy information on an Agent. The registry records the BaseName and Checkpoint set by a Client. If a new Client attaches to the Agent it can query the Registry to determine the amount of work that must be executed to configure the Agent to a BaseName / checkpoint revision. A State value is also provided in the registry to help Clients coordinate work on common BaseNames.
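The registry described above can be sketched as a small keyed store. The field names (Checkpoint, State) follow the text, but the API and comparison logic are illustrative assumptions:

```python
# Sketch of the BaseName Registry: per BaseName, the Agent records
# the Checkpoint (revision) set by a Client and a State value used
# to coordinate work on common BaseNames.

registry = {}

def set_checkpoint(basename, checkpoint, state="active"):
    """A Client records the revision it has provisioned."""
    registry[basename] = {"checkpoint": checkpoint, "state": state}

def work_needed(basename, client_checkpoint):
    """A newly attached Client compares its checkpoint against the
    Agent's to decide whether (re)provisioning is required."""
    entry = registry.get(basename)
    return entry is None or entry["checkpoint"] != client_checkpoint
```

A Client whose checkpoint matches the registry entry can skip reconfiguration entirely on attach.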

6. Protocol Message Details

6.1. Data Structures And Type Assignment

6.1.1. Policy Structures

                           Action Fields

   +--------------+-----------------+----------------------------+
   | Structure    | Field           | Type                       |
   +--------------+-----------------+----------------------------+
   | ACTION       | ACTION_ID       | FPC-Identity (Section 4.4) |
   | ACTION       | TYPE            | [32, unsigned integer]     |
   | ACTION       | VALUE           | Type specific              |
   | DESCRIPTOR   | DESCRIPTOR_ID   | FPC-Identity (Section 4.4) |
   | DESCRIPTOR   | TYPE            | [32, unsigned integer]     |
   | DESCRIPTOR   | VALUE           | Type specific              |
   | POLICY       | POLICY_ID       | FPC-Identity (Section 4.4) |
   | POLICY       | RULES           | *[ RULE ] (See Table 4)    |
   | POLICY-GROUP | POLICY_GROUP_ID | FPC-Identity (Section 4.4) |
   | POLICY-GROUP | POLICIES        | *[ POLICY_ID ]             |
   +--------------+-----------------+----------------------------+

Policies contain a list of Rules, keyed by their Order value. Each Rule contains Descriptors, with optional directionality, and Actions, whose order values specify action execution ordering when the Rule has multiple Actions.

Rules consist of the following fields.

                            Rule Fields

   +------------------+------------------+-------------------------+
   | Field            | Type             | Sub-Fields              |
   +------------------+------------------+-------------------------+
   | ORDER            | [16, INTEGER]    |                         |
   | RULE_DESCRIPTORS | *[ DESCRIPTOR_ID | DIRECTION [2, unsigned  |
   |                  | DIRECTION ]      | bits] is an ENUMERATION |
   |                  |                  | (uplink, downlink or    |
   |                  |                  | both).                  |
   | RULE_ACTIONS     | *[ ACTION_ID     | ACTION-ORDER [8,        |
   |                  | ACTION-ORDER ]   | unsigned integer]       |
   |                  |                  | specifies action        |
   |                  |                  | execution order.        |
   +------------------+------------------+-------------------------+
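The Rule fields above can be modelled as plain data types; the mapping below is illustrative, not a normative encoding, and the class and method names are assumptions:

```python
# Illustrative model of a Rule: the 2-bit DIRECTION enumeration and
# the per-action ACTION-ORDER are represented directly.
from dataclasses import dataclass
from enum import Enum
from typing import List, Tuple

class Direction(Enum):
    UPLINK = 0
    DOWNLINK = 1
    BOTH = 2

@dataclass
class Rule:
    order: int                                # ORDER, 16-bit integer
    descriptors: List[Tuple[str, Direction]]  # (DESCRIPTOR_ID, DIRECTION)
    actions: List[Tuple[str, int]]            # (ACTION_ID, ACTION-ORDER)

    def ordered_actions(self) -> List[str]:
        """Actions in execution order, per ACTION-ORDER."""
        return [a for a, _ in sorted(self.actions, key=lambda t: t[1])]
```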

6.1.2. Mobility Structures

                            Vport Fields

   +----------+----------------------------+
   | Field    | Type                       |
   +----------+----------------------------+
   | VPORT_ID | FPC-Identity (Section 4.4) |
   | POLICIES | *[ POLICY_GROUP_ID ]       |
   +----------+----------------------------+

                           Context Fields

   +-----------------------+--------------------------------------+
   | Field                 | Type                                 |
   +-----------------------+--------------------------------------+
   | CONTEXT_ID            | FPC-Identity (Section 4.4)           |
   | VPORTS                | *[ VPORT_ID ]                        |
   | DPN_GROUP_ID          | FPC-Identity (Section 4.4)           |
   | DELEGATED IP PREFIXES | *[ IP_PREFIX ]                       |
   | PARENT_CONTEXT_ID     | FPC-Identity (Section 4.4)           |
   | UPLINK [NOTE 1]       | MOB_FIELDS                           |
   | DOWNLINK [NOTE 1]     | MOB_FIELDS                           |
   | DPNS [NOTE 2]         | *[ DPN_ID DPN_DIRECTION MOB_FIELDS ] |
   | MOB_FIELDS            | All parameters from Table 7          |
   +-----------------------+--------------------------------------+

NOTE 1 - These fields are present when the Agent supports only a single DPN.

NOTE 2 - This field is present when the Agent supports multiple DPNs.