Internet Engineering Task Force W. Wang
Internet-Draft Zhejiang Gongshang University
Intended status: Standards Track E. Haleplidis
Expires: June 4, 2011 University of Patras
K. Ogawa
NTT Corporation
C. Li
Hangzhou BAUD Networks
J. Halpern
Ericsson
December 1, 2010
ForCES Logical Function Block (LFB) Library
draft-ietf-forces-lfb-lib-03
Abstract
This document defines basic classes of Logical Function Blocks (LFBs)
used in Forwarding and Control Element Separation (ForCES). The
library is defined according to the ForCES FE model [RFC5812] and
ForCES protocol [RFC5810] specifications. These basic LFB classes
are scoped to meet the requirements of typical router functions and
are considered the basic LFB library for ForCES. Descriptions of
individual LFBs are presented and detailed XML definitions are
included in the library. Several use cases of the defined LFB
classes are also proposed.
Status of this Memo
This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.
Internet-Drafts are working documents of the Internet Engineering
Task Force (IETF). Note that other groups may also distribute
working documents as Internet-Drafts. The list of current Internet-
Drafts is at http://datatracker.ietf.org/drafts/current/.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
This Internet-Draft will expire on June 4, 2011.
Copyright Notice
Copyright (c) 2010 IETF Trust and the persons identified as the
document authors. All rights reserved.
Wang, et al. Expires June 4, 2011 [Page 1]
Internet-Draft ForCES LFB Library December 2010
This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document. Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of
the Trust Legal Provisions and are provided without warranty as
described in the Simplified BSD License.
Table of Contents
1. Terminology and Conventions . . . . . . . . . . . . . . . . . 4
1.1. Requirements Language . . . . . . . . . . . . . . . . . . 4
2. Definitions . . . . . . . . . . . . . . . . . . . . . . . . . 5
3. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 7
3.1. Scope of the Library . . . . . . . . . . . . . . . . . . . 7
3.2. Overview of LFB Classes in the Library . . . . . . . . . . 9
3.2.1. LFB Design Choices . . . . . . . . . . . . . . . . . . 9
3.2.2. LFB Class Groupings . . . . . . . . . . . . . . . . . 9
3.2.3. Sample LFB Class Application . . . . . . . . . . . . . 11
3.3. Document Structure . . . . . . . . . . . . . . . . . . . . 12
4. Base Types . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1. Data . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.2. Frame . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.3. MetaData . . . . . . . . . . . . . . . . . . . . . . . . . 15
4.4. XML for Base Type Library . . . . . . . . . . . . . . . . 15
5. LFB Class Description . . . . . . . . . . . . . . . . . . . . 36
5.1. Ethernet Processing LFBs . . . . . . . . . . . . . . . . . 36
5.1.1. EtherPHYCop . . . . . . . . . . . . . . . . . . . . . 36
5.1.2. EtherMACIn . . . . . . . . . . . . . . . . . . . . . . 38
5.1.3. EtherClassifier . . . . . . . . . . . . . . . . . . . 40
5.1.4. EtherEncapsulator . . . . . . . . . . . . . . . . . . 41
5.1.5. EtherMACOut . . . . . . . . . . . . . . . . . . . . . 44
5.2. IP Packet Validation LFBs . . . . . . . . . . . . . . . . 45
5.2.1. IPv4Validator . . . . . . . . . . . . . . . . . . . . 45
5.2.2. IPv6Validator . . . . . . . . . . . . . . . . . . . . 46
5.3. IP Forwarding LFBs . . . . . . . . . . . . . . . . . . . . 47
5.3.1. IPv4UcastLPM . . . . . . . . . . . . . . . . . . . . . 48
5.3.2. IPv4NextHop . . . . . . . . . . . . . . . . . . . . . 49
5.3.3. IPv6UcastLPM . . . . . . . . . . . . . . . . . . . . . 51
5.3.4. IPv6NextHop . . . . . . . . . . . . . . . . . . . . . 51
5.4. Address Resolution LFBs . . . . . . . . . . . . . . . . . 51
5.4.1. ARP . . . . . . . . . . . . . . . . . . . . . . . . . 51
5.4.2. ND . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5.5. Redirect LFBs . . . . . . . . . . . . . . . . . . . . . . 53
5.5.1. RedirectIn . . . . . . . . . . . . . . . . . . . . . . 53
5.5.2. RedirectOut . . . . . . . . . . . . . . . . . . . . . 54
5.6. General Purpose LFBs . . . . . . . . . . . . . . . . . . . 54
5.6.1. BasicMetadataDispatch . . . . . . . . . . . . . . . . 54
5.6.2. GenericScheduler . . . . . . . . . . . . . . . . . . . 55
6. XML for LFB Library . . . . . . . . . . . . . . . . . . . . . 57
7. LFB Class Use Cases . . . . . . . . . . . . . . . . . . . . . 83
7.1. IP Forwarding . . . . . . . . . . . . . . . . . . . . . . 83
7.2. Address Resolution . . . . . . . . . . . . . . . . . . . . 83
7.3. ICMP . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
7.4. Running Routing Protocol . . . . . . . . . . . . . . . . . 83
7.5. Network Management . . . . . . . . . . . . . . . . . . . . 84
8. Contributors . . . . . . . . . . . . . . . . . . . . . . . . . 85
9. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 86
10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 87
11. Security Considerations . . . . . . . . . . . . . . . . . . . 88
12. References . . . . . . . . . . . . . . . . . . . . . . . . . . 89
12.1. Normative References . . . . . . . . . . . . . . . . . . . 89
12.2. Informative References . . . . . . . . . . . . . . . . . . 89
Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 90
1. Terminology and Conventions
1.1. Requirements Language
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in [RFC2119].
2. Definitions
This document follows the terminology defined by the ForCES
Requirements in [RFC3654] and by the ForCES framework in [RFC3746].
The definitions below are repeated for clarity.
Control Element (CE) - A logical entity that implements the ForCES
protocol and uses it to instruct one or more FEs on how to process
packets. CEs handle functionality such as the execution of
control and signaling protocols.
Forwarding Element (FE) - A logical entity that implements the
ForCES protocol. FEs use the underlying hardware to provide per-
packet processing and handling as directed/controlled by one or
more CEs via the ForCES protocol.
ForCES Network Element (NE) - An entity composed of one or more
CEs and one or more FEs. To entities outside an NE, the NE
represents a single point of management. Similarly, an NE usually
hides its internal organization from external entities.
LFB (Logical Function Block) - The basic building block that is
operated on by the ForCES protocol. The LFB is a well defined,
logically separable functional block that resides in an FE and is
controlled by the CE via ForCES protocol. The LFB may reside at
the FE's datapath and process packets or may be purely an FE
control or configuration entity that is operated on by the CE.
Note that the LFB is a functionally accurate abstraction of the
FE's processing capabilities, but not a hardware-accurate
representation of the FE implementation.
FE Topology - A representation of how the multiple FEs within a
single NE are interconnected. Sometimes this is called inter-FE
topology, to be distinguished from intra-FE topology (i.e., LFB
topology).
LFB Class and LFB Instance - LFBs are categorized by LFB Classes.
An LFB Instance represents an instantiation of an LFB Class (or
Type). There may be multiple instances of the same LFB Class (or
Type) in an FE. An LFB Class is represented by an LFB Class ID,
and an LFB Instance is represented by an LFB Instance ID. As a
result, an LFB Class ID associated with an LFB Instance ID
uniquely identifies an LFB instance.
LFB Metadata - Metadata is used to communicate per-packet state
from one LFB to another, but is not sent across the network. The
FE model defines how such metadata is identified, produced and
consumed by the LFBs. It defines the functionality but not how
metadata is encoded within an implementation.
LFB Component - Operational parameters of the LFBs that must be
visible to the CEs are conceptualized in the FE model as the LFB
components. The LFB components include, for example, flags,
single parameter arguments, complex arguments, and tables that the
CE can read and/or write via the ForCES protocol (see below).
LFB Topology - Representation of how the LFB instances are
logically interconnected and placed along the datapath within one
FE. Sometimes it is also called intra-FE topology, to be
distinguished from inter-FE topology.
ForCES Protocol - While there may be multiple protocols used
within the overall ForCES architecture, the terms "ForCES
protocol" and "protocol" refer to the Fp reference points in the
ForCES Framework in [RFC3746]. This protocol does not apply to
CE-to-CE communication, FE-to-FE communication, or to
communication between FE and CE managers. Basically, the ForCES
protocol works in a master-slave mode in which FEs are slaves and
CEs are masters. The ForCES protocol is defined in a separate
document [RFC5810].
3. Introduction
RFC 3746 [RFC3746] specifies the Forwarding and Control Element
Separation (ForCES) framework. In the framework, Control Elements
(CEs) configure and manage one or more separate Forwarding Elements
(FEs) within a Network Element (NE) by use of a ForCES protocol. RFC
5810 [RFC5810] specifies the ForCES protocol. RFC 5812 [RFC5812]
specifies the Forwarding Element (FE) model. In the model, resources
in FEs are described by classes of Logical Function Blocks (LFBs).
The FE model defines the structure and abstract semantics of LFBs
and provides an XML schema for the definitions of LFBs.
This document conforms to the specifications of the FE model
[RFC5812] and specifies detailed definitions of classes of LFBs,
including detailed XML definitions of LFBs. These LFBs form a base
LFB library for ForCES. LFBs in the base library are expected to be
combined to form an LFB topology for a typical router to implement IP
forwarding. It should be emphasized that an LFB is an abstraction of
functions rather than its implementation details. The purpose of the
LFB definitions is to represent functions so as to provide
interoperability between separate CEs and FEs.
More LFB classes with additional functions may be developed in the
future and documented by the IETF. Vendors may also develop
proprietary LFB classes as described in the FE model [RFC5812].
3.1. Scope of the Library
The LFB classes described in this document are designed to provide
the functions of a typical router. RFC 1812 [RFC1812] specifies
that a typical router is expected to provide functions to:
(1) Interface to packet networks and implement the functions required
by that network. These functions typically include:
o Encapsulating and decapsulating the IP datagrams with the
connected network framing (e.g., an Ethernet header and checksum).
o Sending and receiving IP datagrams up to the maximum size
supported by that network; this size is the network's Maximum
Transmission Unit (MTU).
o Translating the IP destination address into an appropriate
network-level address for the connected network (e.g., an Ethernet
hardware address), if needed, and
o Responding to network flow control and error indications, if any.
(2) Conform to specific Internet protocols including the Internet
Protocol (IPv4 and/or IPv6), Internet Control Message Protocol
(ICMP), and others as necessary.
(3) Receive and forward Internet datagrams. Important issues in
this process are buffer management, congestion control, and fairness.
o Recognizes error conditions and generates ICMP error and
information messages as required.
o Drops datagrams whose time-to-live fields have reached zero.
o Fragments datagrams when necessary to fit into the MTU of the next
network.
(4) Choose a next-hop destination for each IP datagram, based on the
information in its routing database.
(5) Usually support an interior gateway protocol (IGP) to carry out
distributed routing and reachability algorithms with the other
routers in the same autonomous system. In addition, some routers
will need to support an exterior gateway protocol (EGP) to exchange
topological information with other autonomous systems.
(6) Provide network management and system support facilities,
including loading, debugging, status reporting, exception reporting
and control.
A classical IP router utilizing the ForCES framework consists of a
CE running some controlling IGP and/or EGP function and FEs
implemented using Logical Function Blocks (LFBs) conforming to the
FE model [RFC5812] specification. The CE, in conformance with the
ForCES protocol [RFC5810] and the FE model [RFC5812] specifications,
instructs the LFBs on the FE how to treat received/sent packets. In
a typical packet flow within an IP router, a port LFB receives
packets and decapsulates them to form IP-level packets. Different
port media have different ways of decapsulating media-specific
headers, so LFBs for various media will have to be defined, although
this document addresses Ethernet only. IP packets emanating from
port LFBs are then processed by a validation LFB before being
forwarded to the next LFB. After the validation process, the packet
is passed to an LFB where the IP forwarding decision is made. In
the IP Forwarding LFBs, a Longest Prefix Match LFB is used to look
up the destination information in a packet and select a next-hop
index for sending the packet onward. A next-hop LFB uses the
next-hop index metadata to apply the proper headers to the IP
packets and direct them to the
proper egress. Note that in processing IP packets, this document
adheres to the weak host model [RFC1122], since that is the most
usable model for a packet-processing Network Element (NE).
(Editorial note - describe how a strong host model is achieved, if
needed.)
3.2. Overview of LFB Classes in the Library
It is critical to classify functional requirements into various
classes of LFBs and to construct a base LFB library that is typical
yet flexible enough for various IP forwarding equipment.
3.2.1. LFB Design Choices
A few design principles factored into the choice of base LFBs.
These are:
o if a function can be designed by either one LFB or two or more
LFBs with the same cost, the choice is to go with two or more LFBs
so as to provide more flexibility for implementers.
o when flexibility is not required, an LFB should take advantage of
its independence as much as possible and have minimal coupling
with other LFBs. The coupling may be from LFB attributes
definitions as well as physical implementations.
o unless there is a clear difference in functionality, similar
packet processing should not be represented as two or more
different LFBs. Otherwise, it may add an extra burden on
implementations to achieve interoperability.
3.2.2. LFB Class Groupings
The document defines groups of LFBs for typical router function
requirements:
(1) A group of Ethernet processing LFBs are defined to abstract the
packet processing for Ethernet as the port media type. As the most
popular media type with rich processing features, Ethernet media
processing LFBs were a natural choice. Definitions for the
processing of other port media types like POS or ATM may be
incorporated into the library in a future version of this document
or in a future separate document.
The following LFBs are defined for Ethernet processing:
EtherPHYCop (Section 5.1.1)
EtherMACIn (section 5.1.2)
EtherClassifier (section 5.1.3)
EtherEncapsulator (section 5.1.4)
EtherMACOut (section 5.1.5)
(2) A group of LFBs are defined for the IP packet validation process.
The following LFBs are defined for IP Validation processing:
IPv4Validator (section 5.2.1)
IPv6Validator (section 5.2.2)
(3) A group of LFBs are defined to abstract IP forwarding process.
The following LFBs are defined for IP Forwarding processing:
IPv4UcastLPM (section 5.3.1)
IPv4NextHop (section 5.3.2)
IPv6UcastLPM (section 5.3.3)
IPv6NextHop (section 5.3.4)
(4) A group of address resolution LFBs are defined to abstract the
address resolution function.
The following LFBs are defined for Address Resolution processing:
ARP (section 5.4.1)
ND (section 5.4.2)
(5) A group of LFBs are defined to abstract the redirect operation,
i.e., data packet transmission between the CE and FEs.
The following LFBs are defined for redirect processing:
RedirectIn (section 5.5.1)
RedirectOut (section 5.5.2)
(6) A group of LFBs are defined to abstract some general-purpose
packet processing. These processes usually apply at many processing
locations in an FE LFB topology.
The following LFBs are defined for general purpose processing:
BasicMetadataDispatch (section 5.6.1)
GenericScheduler (section 5.6.2)
3.2.3. Sample LFB Class Application
Although Section 7 will present use cases for the LFBs defined in
this document, this section shows a sample LFB class application in
advance so that readers can get a quick overview of the LFB classes.
Figure 1 shows the typical LFB processing path for the IPv4 unicast
forwarding case with Ethernet media interfaces. Section 7.1 will
describe the LFB topology in more detail.
+-----+ +------+
| | | |
| |<---------------|Ether |<----------------------------+
| | |MACOut| |
| | | | |
|Ether| +------+ |
|PHY | |
|Cop | +---+ |
|#1 | +-----+ | |----->IPv6 Packets |
| | | | | | +----+ |
| | |Ether| | | | | |
| |->|MACIn|-->| |IPv4| | |
+-----+ | | | |-+->| | +---+ |
+-----+ +--+ | | |unicast +-----+ | | |
Ether | | |------->| | | | |
. Classifier| | |packet |IPv4 | | | |
. | | | |Ucast|->| |--+ |
. | | | |LPM | | | | |
+---+ | +----+ +-----+ | | | |
+-----+ | | | IPv4 +---+ | |
| | | | | Validator IPv4 | |
+-----+ |Ether| | |-+ NextHop | |
| |->|MACIn|-->| |IPv4 | |
| | | | | |----->IPv6 Packets | |
|Ether| +-----+ +---+ +----+ | |
|PHY | Ether | | | |
|Cop | Classifier | | +-------+ | |
|#n | | | | | | |
| | +------+ | |<--| Ether |<-+ |
| | | |<------| | | Encap | |
| |<---------------|Ether | ...| | +-------+ |
| | |MACOut| +---| | |
| | | | | +----+ |
+-----+ +------+ | BasicMetadataDispatch |
+-------------------------+
Figure 1: A Sample of LFB Class Application
3.3. Document Structure
Base type definitions, including data types, packet frame types, and
metadata types are presented in advance for definitions of various
LFB classes. Section 4 (Base Types Section) provide a description on
the base types used by this LFB library. In order for an extensive
use of these base types for other LFB class definitions, the base
type definitions are provided by an xml file in a way as a library
which is separate from the LFB definition library.
Within every group of LFB classes, a set of LFBs are defined for
individual functional purposes. Section 5 (LFB Class Descriptions
Section) provides text descriptions of the individual LFBs. Note
that for a complete definition of an LFB, a text description as
well as an XML definition is required.
LFB classes are finally defined in XML, with specifications and
schema defined in the ForCES FE model [RFC5812]. Section 6 (XML
LFB Definitions Section) provides the complete XML definitions of
the base LFB class library.
Section 7 provides several use cases on how some typical router
functions can be implemented using the base LFB library defined in
this document.
4. Base Types
The FE model [RFC5812] has specified the following data types as
predefined (built-in) atomic data-types:
char, uchar, int16, uint16, int32, uint32, int64, uint64, string[N],
string, byte[N], boolean, octetstring[N], float16, float32, float64.
Based on these atomic data types and with the use of type definition
elements in the FE model XML schema, new data types, packet frame
types, and metadata types can further be defined.
To define a base LFB library for typical router functions, base
data types, frame types, and metadata types MUST be defined. This
section provides a description of these types and detailed XML
definitions for the base types.
So that the base type definitions can be used by LFB definitions
other than this base LFB library, they are provided in a separate
XML library file labeled "BaseTypeLibrary". Users can refer to
this library by the statement:
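The reference statement itself is missing from this revision; per
the <load> element of the FE model [RFC5812], it would presumably
take the following form (the location URL is intentionally left
unspecified here):

```xml
<load library="BaseTypeLibrary" location="..."/>
```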
4.1. Data
The following data types are currently defined and put in the base
type library:
(TBD)
4.2. Frame
According to the FE model [RFC5812], frame types are used in LFB
definitions to define the types of frames that the LFB expects at
its input port(s) and emits at its output port(s). The <frameDef>
element in the FE model is used to define a new frame type.
The following frame types are currently defined and put in the base
type library as base frame types for the LFB library:
(TBD)
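For illustration (a sketch following the RFC 5812 schema, not the
normative library text), a frame type such as the EthernetII frame
listed in Section 4.4 would be declared as:

```xml
<frameDef>
  <name>EthernetII</name>
  <synopsis>An Ethernet II frame</synopsis>
</frameDef>
```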
4.3. MetaData
LFB Metadata is used to communicate per-packet state from one LFB
to another. The <metadataDef> element in the FE model is used to
define a new metadata type.
The following metadata types are currently defined and put in the
base type library as base metadata types for the LFB library
definitions:
(TBD)
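As a sketch following the RFC 5812 schema (not the normative library
text), a metadata type such as the PHYPortID metadata listed in
Section 4.4, which carries metadata ID 1, would be declared as:

```xml
<metadataDef>
  <name>PHYPortID</name>
  <synopsis>The physical port ID that a packet has entered</synopsis>
  <metadataID>1</metadataID>
  <typeRef>uint32</typeRef>
</metadataDef>
```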
4.4. XML for Base Type Library
EthernetAll
All kinds of Ethernet frames
EthernetII
An Ethernet II frame
ARP
An ARP packet
IPv4
An IPv4 packet
IPv6
An IPv6 packet
IPv4Unicast
An IPv4 unicast packet
IPv4Multicast
An IPv4 multicast packet
IPv6Unicast
An IPv6 unicast packet
IPv6Multicast
An IPv6 multicast packet
Arbitrary
Any kind of frame
IPv4Addr
IPv4 address
byte[4]
IPv6Addr
IPv6 address
byte[16]
IEEEMAC
An IEEE MAC address.
byte[6]
LANSpeedType
Network speed values
uint32
LAN_SPEED_10M
10M Ethernet
LAN_SPEED_100M
100M Ethernet
LAN_SPEED_1G
1000M Ethernet
LAN_SPEED_10G
10G Ethernet
LAN_SPEED_AUTO
LAN speed auto
DuplexType
Duplex types
uint32
Auto
Auto-negotiation.
Half-duplex
Port-negotiated half duplex.
Full-duplex
Port-negotiated full duplex.
PortStatusValues
The possible values of status. Used for both
administrative and operational status.
uchar
Disabled
The port is operatively disabled.
UP
The port is up.
Down
The port is down.
PortStatsType
Port statistics
InUcastPkts
Number of unicast packets received
uint64
InMulticastPkts
Number of multicast packets received
uint64
InBroadcastPkts
Number of broadcast packets received
uint64
InOctets
Number of octets received
uint64
OutUcastPkts
Number of unicast packets transmitted
uint64
OutMulticastPkts
Number of multicast packets transmitted
uint64
OutBroadcastPkts
Number of broadcast packets transmitted
uint64
OutOctets
Number of octets transmitted
uint64
InErrorPkts
Number of input error packets
uint64
OutErrorPkts
Number of output error packets
uint64
MACInStatsType
The statistics type for EtherMACIn.
NumPacketsReceived
The number of packets received.
uint64
NumPacketsDropped
The number of packets dropped.
uint64
MACOutStatsType
The statistics type for EtherMACOut.
NumPacketsTransmitted
The number of packets transmitted.
uint64
NumPacketsDropped
The number of packets dropped.
uint64
EtherDispatchTableType
The type of an EtherDispatch table entry.
LogicalPortID
Logical port ID.
uint32
EtherType
The EtherType value in the Ethernet header.
uint32
OutputIndex
Group output port index.
uint32
VlanInputTableType
VLAN input table entry type.
IncomingPortID
The incoming port ID.
uint32
VlanID
VLAN ID.
uint32
LogicalPortID
logical port ID.
uint32
EtherClassifyStatsType
Ether classifier statistics entry type.
EtherType
The EtherType value
uint32
PacketsNum
Number of packets
uint64
IPv4ValidatorStatisticsType
Statistics type in IPv4Validator.
badHeaderPkts
Number of bad header packets.
uint32
badTotalLengthPkts
Number of bad total length packets.
uint32
badTTLPkts
Number of bad TTL packets.
uint32
badChecksum
Number of bad checksum packets.
uint32
IPv6ValidatorStatisticsType
Statistics type in IPv6Validator.
badHeaderPkts
Number of bad header packets.
uint64
badTotalLengthPkts
Number of bad total length packets.
uint64
badHopLimitPkts
Number of bad Hop limit packets.
uint64
IPv4PrefixTableType
Each row of the IPv4 Prefix Table
IPv4Address
An IPv4 Address
IPv4Addr
Prefixlen
The prefix length
uchar
HopSelector
HopSelector is the nexthop ID which points to
the nexthop table
uint32
ECMPFlag
An ECMP Flag for this route
boolean
False
This route does not have multiple
nexthops.
True
This route has multiple nexthops.
DefaultRouteFlag
A Default Route Flag for supporting loose RPF.
boolean
False
This is not a default route.
True
This route is a default route, for
supporting loose RPF.
IPv4UcastLPMStatsType
Statistics type in IPv4UcastLPM.
InRcvdPkts
The total number of input packets
received
uint64
FwdPkts
IPv4 packets forwarded by this LFB
uint64
NoRoutePkts
The number of IP datagrams discarded because
no route could be found.
uint64
IPv6PrefixTableType
Each row of the IPv6 Prefix Table
IPv6Address
An IPv6 Address
IPv6Addr
Prefixlen
The prefix length
uchar
HopSelector
HopSelector is the nexthop ID which points
to the nexthop table
uint32
ECMPFlag
An ECMP Flag for this route
boolean
False
This route does not have multiple
nexthops.
True
This route has multiple nexthops.
DefaultRouteFlag
A Default Route Flag.
boolean
False
This is not a default route.
True
This route is a default route.
IPv6UcastLPMStatsType
Statistics type in IPv6UcastLPM.
InRcvdPkts
The total number of input packets
received
uint64
FwdPkts
IPv6 packets forwarded by this LFB
uint64
NoRoutePkts
The number of IP datagrams discarded because
no route could be found.
uint64
NexthopOptionType
Special Values of NextHopOption Type
uint8
Normal
Normal Forwarding
Local
The packet needs to be forwarded to a locally
attached host.
IPv4NextHopTableType
Each row of the IPv4 NextHop Table
NexthopID
ID of the NextHop
uint32
OutputLogicalPortID
The ID of the Logical OutputPort
uint32
MTU
Maximum Transmission Unit for the outgoing port.
It is for deciding whether the packet needs fragmentation.
uint32
NexthopIPAddr
Next Hop IPv4 Address
IPv4Addr
NexthopOption
Next Hop Option
NexthopOptionType
EncapOutputIndex
Group output port index
uint32
IPv6NextHopTableType
Each row of the IPv6 NextHop Table
NexthopID
ID of the NextHop
uint32
OutputLogicalPortID
The ID of the Logical OutputPort
uint32
MTU
Maximum Transmission Unit for the outgoing port.
It is for deciding whether the packet needs fragmentation.
uint32
NexthopIPAddr
Next Hop IPv6 Address
IPv6Addr
NexthopOption
Next Hop Option
NexthopOptionType
EncapOutputIndex
Group output port index
uint32
ArpTableType
ARP table entry type.
LogicalPortID
Logical port ID.
uint32
DstIPv4Address
Destination IPv4 address.
IPv4Addr
DstMac
MAC address of the neighbor.
IEEEMAC
SrcMac
Source MAC.
IEEEMAC
NbrTableType
IPv6 neighbor table entry type.
LogicalPortID
Logical port ID.
uint32
DstIPv6Address
Destination IPv6 address.
IPv6Addr
DstMac
MAC address of the neighbor.
IEEEMAC
SrcMac
Source MAC.
IEEEMAC
VlanOutputTableType
Vlan Output table entry type.
LogicalPortID
Logical port ID.
uint32
VlanID
VLAN ID.
uint32
OutputLogicalPortID
Output logical port ID.
uint32
Portv4AddressInforType
Port address information for an IPv4 port.
IPv4Address
IPv4 address
IPv4Addr
IPv4NetMask
IPv4 net mask length
uint32
SrcMAC
Source Mac address
IEEEMAC
Portv4AddrInfoTableType
Logical port (v4) address information table type
LogicalPortID
Logical port id.
uint32
Portv4AddrInfo
Portv4AddressInforType
MetadataDispatchTableType
Metadata dispatch table type.
MetadataID
metadata ID
uint32
MetadataValue
metadata value.
uint32
OutputIndex
group output port index.
uint32
SchdDisciplineType
Scheduling discipline type.
uint32
FIFO
First In First Out scheduler.
RR
Round Robin.
QueueDepthType
The depth of a queue.
QueueID
Queue ID
uint32
QueueDepthInPackets
The queue depth when the depth unit is
packets.
uint32
QueueDepthInBytes
The queue depth when the depth unit is
bytes.
uint32
PHYPortID
The physical port ID that a packet has entered.
1
uint32
SrcMAC
Source MAC Address
2
IEEEMAC
DstMAC
Destination MAC Address
3
IEEEMAC
LogicalPortID
ID of logical port.
4
uint32
EtherType
The value of EtherType.
5
uint32
VlanID
Vlan ID.
6
uint32
VlanPriority
The priority of Vlan.
7
uint32
NexthopIPv4Addr
Nexthop IPv4 address.
8
IPv4Addr
NexthopIPv6Addr
Nexthop IPv6 address.
9
IPv6Addr
HopSelector
HopSelector is the nexthop ID which points to the
nexthop table
10
uint32
ExceptionID
Exception Types
11
uint32
Other
Any other exception.
BroadCastPacket
Packet with destination address equal to
255.255.255.255
BadTTL
The packet can't be forwarded as the TTL has
expired.
IPv4HeaderLengthMismatch
IPv4 Packet with header length > 5
LengthMismatch
The packet length reported by the link layer is
less than the total length field.
RouterAlertOptions
The packet IP header includes Router Alert options.
RouteInTableNotFound
There is no route in the route table
corresponding to the packet destination address
NextHopInvalid
The NexthopID is invalid
Wang, et al. Expires June 4, 2011 [Page 33]
Internet-Draft ForCES LFB Library December 2010
FragRequired
The MTU for outgoing interface is less than
the packet size.
LocalDelivery
The packet is for a local interface.
GenerateICMP
ICMP packet needs to be generated.
PrefixIndexInvalid
The prefixIndex is wrong.
ArpTableL2NotFound
Packet can't find the associated L2
information in the Arptable
OutputLogiclPortIDNotFound
Packet can't find OutputLogicalPortID in
VLANOutputTable
IPv6HopLimitZero
Packet with Hop Limit zero
IPv6NextHeaderHBH
Packet with next header set to Hop-by-Hop
OutputLogicalPortID
ID of output logical port.
12
uint32
Wang, et al. Expires June 4, 2011 [Page 34]
Internet-Draft ForCES LFB Library December 2010
RedirectIndex
Redirect Output port index.
13
uint32
5. LFB Class Description
According to the ForCES specifications, an LFB (Logical Function
Block) is a well-defined, logically separable functional block that
resides in an FE and is a functionally accurate abstraction of the
FE's processing capabilities.  An LFB class (or type) is a template
that represents a fine-grained, logically separable aspect of FE
processing.  Most LFBs relate to packet processing in the data path.
LFB classes are the basic building blocks of the FE model.  Note that
RFC 5810 has already defined an 'FE Protocol LFB', which is a logical
entity in each FE used to control the ForCES protocol, and RFC 5812
has already defined an 'FE Object LFB'.  Information like the FE
Name, FE ID, FE State, and LFB Topology in the FE is represented in
the latter LFB.
As specified in Section 3.1, this document focuses on the base LFB
library for implementing typical router functions, especially IP
forwarding functions.  As a result, the LFB classes in the library
are all base LFBs for implementing router forwarding.
5.1. Ethernet Processing LFBs
Ethernet is among the most popular physical and data link layer
protocols and is widely deployed.  It is therefore a basic
requirement for a router to be able to process various Ethernet data
packets.
Note that there exist different versions of Ethernet protocols, like
Ethernet V2, 802.3 RAW, IEEE 802.3/802.2, IEEE 802.3/802.2 SNAP.
There also exist varieties of LAN techniques based on Ethernet, like
various VLANs, MACinMAC, etc. Ethernet processing LFBs defined here
are intended to be able to cope with all these variations of Ethernet
technology.
There are also various types of Ethernet physical interface media,
among which copper and fiber may be the most popular.  As a base LFB
definition and a starting point, this document defines only an
Ethernet physical LFB with copper media.  Specific LFBs for other
media interfaces may be defined in future versions of the library.
5.1.1. EtherPHYCop
The EtherPHYCop LFB abstracts an Ethernet interface at its physical
layer.  It limits the physical media to copper.
The LFB is defined with one singleton input.  The input data of the
LFB are expected to be Ethernet packets.  Note that Ethernet packets
here cover all packets encapsulated with different versions of
Ethernet protocols, like Ethernet V2, 802.3 RAW, IEEE 802.3/802.2,
and IEEE 802.3/802.2 SNAP.  They also include packets encapsulated
with varieties of LAN techniques based on Ethernet, like various
VLANs, MACinMAC, etc.  As a result, we define the various Ethernet
frames under a frame name called 'EthernetAll'.  In an LFB abstracted
processing path, the Ethernet packets usually come from an upstream
LFB like an EtherMACOut LFB.  An input Ethernet packet is not
expected to be associated with any metadata.  After the LFB receives
the Ethernet packets, it further processes them at the physical layer
and eventually puts them on the physical media wire for transmission.
Note that the media wire transmission process in the LFB is
abstracted as a default function of the LFB rather than as an input
or output interface of the LFB.
The LFB is also defined with one singleton output.  The output data
produced are also of the 'EthernetAll' frame type.  Every output data
packet is associated with a 'PHYPortID' metadata to indicate to
downstream LFBs which physical port the packet is from.  Note that
all the data packets originate from the media wire inside the LFB,
which is defined as a default function of the LFB.  As a physical
layer abstraction module, the LFB does not possess the ability to
specify types of Ethernet encapsulation; rather, it produces various
Ethernet types simply according to what it receives from the Ethernet
media wire.  In an LFB-based processing path topology, packets output
from the EtherPHYCop LFB will usually go to an LFB like the
EtherMACIn LFB for further Ethernet processing.
Note that, as a base definition, functions like multiple virtual
physical layers are not supported in this LFB version.  They may be
supported in the future by defining a subclass or a new version of
this LFB.
Several components are defined for the LFB.
AdminStatus is defined for the CE to administratively manage the
status of the LFB.  Via this component, the CE may start up or shut
down the LFB.  The default status is set to 'Down'.  An OperStatus
component is specifically defined for the CE to access the actual
operational status of the LFB, in case a physical layer port is in a
failed state such that its operational status does not correctly
reflect its administrative status.  A PHYPortStatusChanged event is
defined for the LFB to report to the CE whenever there is a port
status change during operation.
A PHYPortID component is defined for the CE to assign an ID to the
physical port.  The component is used to produce a metadata
associated with every Ethernet packet the LFB receives from the media
and hands to downstream LFBs for further processing.
A group of components is defined for link speed management.
AdminLinkSpeed is for the CE to configure a proper link speed for the
port, and OperLinkSpeed is for the CE to query the actual link speed
in operation.  The default value for AdminLinkSpeed is set to
auto-negotiation mode.  A SupportedLinkSpeed capability attribute is
also defined for the CE to query the supported link speeds.  A
LinkSpeedChanged event is defined for the LFB to report to the CE
whenever there is a link speed change during operation.
A group of components is defined for duplex mode management.
AdminDuplexMode is for the CE to configure a proper duplex mode for
the port, and OperDuplexMode is for the CE to query the actual duplex
mode in operation.  The default value for AdminDuplexMode is set to
auto-negotiation mode.  A SupportedDuplexMode capability is also
defined for the CE to query the supported duplex modes.  A
DuplexModeChanged event is defined for the LFB to report to the CE
whenever there is a duplex mode change during operation.
There are also some other components, capabilities, and events
defined in the LFB for various purposes.  See Section 6 for detailed
XML definitions of the LFB.
5.1.2. EtherMACIn
The EtherMACIn LFB abstracts an Ethernet port at the MAC data link
layer.  It specifically describes Ethernet processing functions like
MAC address locality checking, deciding whether the Ethernet packets
should be bridged, providing Ethernet layer flow control, etc.
The LFB is defined with one singleton input.  The input is expected
to receive all types of Ethernet packets, which are usually output
from an Ethernet physical abstraction layer LFB like the EtherPHYCop
LFB.  Every input packet is associated with a metadatum indicating
the physical port the packet comes from.
Input Ethernet packets are usually checked for locality.  A
LocalMACAddresses component is defined in the LFB so that the CE is
able to configure one or more Ethernet MAC addresses for use in the
locality check.  All packets that do not pass the locality check are
dropped in the LFB.  A PromiscuousMode component in the LFB is
further defined to decide whether the LFB should work in promiscuous
mode.  In this mode, the LFB does not perform the locality check, and
all Ethernet packets pass through the LFB without being dropped.
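The locality check described above can be sketched as follows; this
is a minimal illustration, not from the draft, and the component
names are reused here only as plain variables:

```python
# Hypothetical sketch of the EtherMACIn locality check: a frame is
# kept only if its destination MAC is local, unless the LFB is in
# promiscuous mode.  Addresses and values are illustrative.
LOCAL_MAC_ADDRESSES = {"00:11:22:33:44:55", "66:77:88:99:aa:bb"}  # set by CE
PROMISCUOUS_MODE = False  # set by CE

def passes_locality_check(dst_mac: str) -> bool:
    """Return True if the frame should be kept, False if dropped."""
    if PROMISCUOUS_MODE:
        return True  # no locality check in promiscuous mode
    return dst_mac in LOCAL_MAC_ADDRESSES
```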
The LFB is defined with two separate singleton outputs.  All output
packets are in Ethernet format, possibly with various Ethernet types.
One singleton output is called NormalPathOut.  It usually outputs
Ethernet packets to an LFB like the EtherClassifier LFB for further
L3 forwarding processing.  The metadata associated with every packet
from this output is PHYPortID, which keeps indicating which physical
port the packet is from.
The other singleton output is called L2BridgingPathOut.  Although
this LFB library is basically defined to meet typical router
functions, there is a natural requirement that the definitions here
provide reasonable compatibility considerations for future wider use.
The L2BridgingPathOut is defined to meet the requirement that L2
bridging functions may optionally be supported simultaneously with L3
processing; some L2 bridging LFBs may be defined in the future.  A
Boolean flag component called L2BridgingPathEnable is defined to make
the L2 bridging output optional.  An FE that does not support
bridging will internally set this flag to false and additionally set
the flag property to read-only.  In this case, the CE can read the
flag to learn that the FE does not support the bridging function and
that the L2 bridging output is always disabled.  An FE that supports
L2 bridging will internally set the flag property to read-write.  In
this case, the CE can choose to enable or disable the
L2BridgingPathOut output by setting this flag as desired.  If the
flag is set to true, by also instantiating some L2 bridging LFB
instances following the L2BridgingPathOut, the FE is expected to
fulfill L2 bridging functions.  Even in this case, the default value
for the flag is defined as false, meaning the L2 bridging output is
closed by default.  Note that, when enabled, L2BridgingPathOut
outputs packets exactly the same as those in the NormalPathOut output
(Editorial note: need more discussion here on whether the L2 output
is the same as the normal output).  The metadata associated with
every packet is also PHYPortID.
Ethernet layer flow control is usually implemented cooperatively by
the EtherMACIn LFB and the EtherMACOut LFB.  How the flow control is
implemented is vendor-specific.  As an abstraction, this LFB defines
two flag components for the CE to enable or disable the flow control
functions.  The flow control is further distinguished as Tx flow
control and Rx flow control, separately covering the sending and
receiving processes; a TxFlowControl flag and an RxFlowControl flag
are defined accordingly.  In order for the EtherMACOut LFB to
cooperate on flow control, these flags are also referenced in the
EtherMACOut LFB as aliases of the flags in this LFB.
AdminStatus is defined for the CE to administratively manage the
status of the LFB.  Via this component, the CE can start up or shut
down the LFB.  The default status is set to 'Down'.
Note that as a base definition, functions like multiple virtual MAC
layers are not supported in this LFB version. It may be supported in
the future by defining a subclass or a new version of this LFB.
There are also some other components, capabilities, and events
defined in the LFB for various purposes.  See Section 6 for detailed
XML definitions of the LFB.
5.1.3. EtherClassifier
The EtherClassifier LFB abstracts the process of decapsulating
Ethernet packets and classifying them into various network layer data
packets according to information included in the Ethernet packet
headers.
Input of the LFB expects all types of Ethernet packets, including
VLAN Ethernet types.  The input is a singleton input which may
connect to an upstream LFB like the EtherMACIn LFB.  The input is
also capable of multiplexing to allow multiple upstream LFBs to be
connected.  For instance, when the L2 bridging function is enabled in
the EtherMACIn LFB, some L2 bridging LFBs may be applied.  In this
case, some Ethernet packets may have to be input to the
EtherClassifier LFB for classification after L2 processing, while
packets output directly from the EtherMACIn LFB may simultaneously
need to be input to this LFB.  The input of this LFB is capable of
handling this case.  Usually, every input Ethernet packet is expected
to be associated with a PHYPortID metadatum, indicating the physical
port the packet comes from.  In some cases, as in a MACinMAC case, a
LogicalPortID metadatum may also be expected to be associated with
the Ethernet packet to further indicate which logical port the
Ethernet packet belongs to.  Note that PHYPortID metadata is always
expected, while LogicalPortID metadata is optionally expected.
A VLANInputTable component is defined in the LFB to classify VLAN
Ethernet packets.  According to IEEE VLAN specifications, all
Ethernet packets can be recognized as VLAN types by treating a packet
that has no VLAN encapsulation as a case with VLAN tag 0.  Therefore,
the table actually applies to every input packet of the LFB.  The
table assigns every input packet a new LogicalPortID according to the
packet's incoming port ID and VLAN ID.  A packet's incoming port ID
is defined as its physical port ID if there is no logical port ID
associated with the packet, or its logical port ID if there is one.
The VLAN ID is exactly the VLAN ID in the packet if it is a VLAN
packet, or 0 if it is not.  Note that a packet's logical port ID may
be rewritten with a new one by the VLANInputTable processing.
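The classification rule above amounts to a keyed table lookup and can
be sketched as follows; the table entries are illustrative
assumptions, not values from the draft:

```python
# Hypothetical sketch of VLANInputTable processing:
# (incoming port ID, VLAN ID) -> new LogicalPortID.  A non-VLAN
# packet is treated as VLAN ID 0, per the text above.
VLAN_INPUT_TABLE = {
    # (incoming_port_id, vlan_id): logical_port_id
    (1, 0): 100,   # untagged traffic on port 1
    (1, 10): 110,  # VLAN 10 on port 1
}

def classify(incoming_port_id: int, vlan_id: int = 0) -> int:
    """Assign a LogicalPortID; vlan_id defaults to 0 for non-VLAN packets."""
    return VLAN_INPUT_TABLE[(incoming_port_id, vlan_id)]
```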
An EtherDispatchTable component is defined to dispatch every Ethernet
packet to a group of outputs according to the logical port ID
assigned to the packet by the VLANInputTable and the Ethernet type in
the Ethernet packet header.  By configuring the dispatch table, the
CE can make the LFB classify packets of various network layer
protocol types and output them at different output ports.  The LFB
can then easily be expected to classify packets according to
protocols like IPv4, IPv6, MPLS, ARP, ND, etc.
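A dispatch step like the one described might look as follows; the
EtherType constants are the standard ones, while the table entries
are assumed for illustration only:

```python
# Hypothetical sketch of EtherDispatchTable dispatch: the CE
# populates (logical port ID, EtherType) -> group output port index.
ETHER_DISPATCH_TABLE = {
    (100, 0x0800): 0,  # IPv4 -> output index 0
    (100, 0x86DD): 1,  # IPv6 -> output index 1
    (100, 0x0806): 2,  # ARP  -> output index 2
}

def dispatch(logical_port_id: int, ether_type: int) -> int:
    """Return the group output port index for this packet."""
    return ETHER_DISPATCH_TABLE[(logical_port_id, ether_type)]
```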
Output of the LFB is hence defined as a group output.  Because there
may be various types of protocol packets at the output ports, the
frame type produced is defined as arbitrary for the purpose of wide
extensibility in the future.  A set of metadata is produced and
associated with every output packet for downstream LFBs to use.  The
metadata contains normal information like PHYPortID.  It also
contains the Ethernet type, source MAC address, and destination MAC
address of the original Ethernet packet, as well as the logical port
ID assigned by this LFB.  This metadata may be used by downstream
LFBs for packet processing.  Lastly, it may contain information like
VlanID and VlanPriority, on the condition that the packet is a VLAN
packet.

A MaxOutPutPorts capability is defined to indicate how many
classification output ports the LFB is capable of.
/*discussion*/
Note that logical port ID and physical port ID mentioned above are
all originally configured by CE, and are globally effective within a
ForCES NE (Network Element). To distinguish a physical port ID from
a logical port ID in the incoming port ID field of the
VLANInputTable, physical port ID and logical port ID must be assigned
with separate number spaces. /*discussion */
There are also some other components, capabilities, and events
defined in the LFB for various purposes.  See Section 6 for detailed
XML definitions of the LFB.
5.1.4. EtherEncapsulator
The EtherEncapsulator LFB abstracts the process of encapsulating IP
packets into Ethernet packets.
Input of the LFB expects IP packets, including IPv4 and IPv6 types.
The input is a singleton which may connect to an upstream LFB like an
IPv4NextHop, an IPv6NextHop, or any LFB that needs to output packets
for Ethernet encapsulation.  The input is capable of multiplexing to
allow multiple upstream LFBs to be connected.  For instance, an
IPv4NextHop and an IPv6NextHop may exist concurrently, and some L2
bridging LFBs may also output packets to this LFB simultaneously.
The input of this LFB is capable of handling this case.  Usually,
every input IP packet is expected to be associated with an output
logical port ID and a next hop IP address as its metadata.  In the
case when the L2 bridging function is implemented, an input packet
may also optionally receive a VLAN priority as its metadata; the
default value for this metadata is set to 0.
There are several outputs for this LFB.  One singleton output is for
normal successful packet output.  Packets which have found Ethernet
L2 information and have been successfully encapsulated into an
Ethernet packet are output from this port to a downstream LFB.  Note
that this LFB specifies Ethernet II as its Ethernet encapsulation
type.  The success output also produces an output logical port ID as
a metadatum of every output packet, for a downstream LFB to decide
which logical port the packet should go out.  The downstream LFB
usually dispatches the packets based on their associated output
logical port IDs.  Hence, a generic dispatch LFB as defined in
Section 5.6.1 may be adopted for dispatching packets based on the
output logical port ID.
Note that in some LFB topology implementations, the processing to
dispatch packets based on an output logical port ID may also take
place before Ethernet encapsulation, i.e., packets coming into the
encapsulator LFB have already been switched to individual logical
output port paths.  In this case, the EtherEncap LFB success output
may be directly connected to a downstream LFB like an EtherMACOut
LFB.
Another singleton output is for IPv4 packets that are unable to find
Ethernet L2 encapsulation information in the ARP table of the LFB.
In this case, a copy of the packets may need to be redirected to an
ARP LFB in the FE, or to the CE if the ARP function is implemented by
the CE.  More importantly, a next hop IP address metadata should be
associated with every packet output here.  When an ARP LFB or the CE
receives these packets and the associated next hop IP address
metadata, it may be expected to generate ARP protocol messages based
on the packets' next hop IP addresses to try to get L2 information
for these packets.  Refreshed L2 information can then be added to the
ARP table in this encapsulator LFB by the ARP LFB or by the CE.  As a
result, these packets are then able to successfully find L2
information, be encapsulated into Ethernet packets, and be output via
the normal success output to a downstream LFB.  (Editorial note 1:
may need discussion on what more metadata these output packets need.
Note that the packets may be redirected to the CE, and the CE should
know what the purpose of the packets is; a metadata may be needed to
indicate this.  Editorial note 2: we may adopt another way to address
the case of packets unable to do ARP.  The packets may be redirected
to the ARP LFB or the CE without keeping a copy of them in this
encapsulator LFB.  We expect that after the ARP LFB or the CE has
processed ARP requests based on the packets, the packets will be
redirected back from the ARP LFB or the CE to this encapsulator LFB
for Ethernet encapsulation.  At this time, it is hoped the ARP table
has been refreshed with new L2 information that will make these
packets able to find)
One more singleton output is for IPv6 packets that are unable to find
Ethernet L2 encapsulation information in the neighbor table of the
LFB.  In this case, a copy of the packets may need to be redirected
to an ND LFB in the FE, or to the CE if the IPv6 neighbor discovery
function is implemented by the CE.  More importantly, a next hop IP
address metadata should be associated with every packet output here.
When the ND LFB or the CE receives these packets and the associated
next hop IP address metadata, it may be expected to generate neighbor
discovery protocol messages based on the packets' next hop IP
addresses to try to get L2 information for these packets.  Refreshed
L2 information can then be added to the neighbor table in this LFB by
the ND LFB or by the CE.  As a result, these packets are then able to
successfully find L2 information, be encapsulated into Ethernet
packets, and be output via the normal success output to a downstream
LFB.  (Editorial note: may need discussion on what more metadata
these output packets need.  Note that the packets may be redirected
to the CE, and the CE should know what the purpose of the packets is;
a metadata may be needed to indicate this.)
A singleton output is specifically defined for exception packet
output.  All packets that are abnormal during the operations in this
LFB are output via this port.  Currently, only one abnormal case is
defined, that is, packets that cannot find proper information in the
VLAN output table.
The VLAN output table is defined as a component of the LFB.  The
table uses a logical port ID as an index to find a VLAN ID and a new
output logical port ID.  In reality, the logical port ID applied here
is the output logical port ID received from every input packet in its
associated metadata.  According to IEEE VLAN specifications, all
Ethernet packets can be recognized as VLAN types by treating a packet
that has no VLAN encapsulation as a case with VLAN tag 0.  Therefore,
every input IP packet actually has to look up the VLAN output table
to find a VLAN ID and a new output logical port ID according to its
original logical port ID.
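The VLAN output table lookup described above can be sketched as
follows; the entries are illustrative assumptions, not values from
the draft:

```python
# Hypothetical sketch of the VLAN output table: the output logical
# port ID from the packet metadata indexes a (VLAN ID, new output
# logical port ID) pair.
VLAN_OUTPUT_TABLE = {
    # logical_port_id: (vlan_id, new_output_logical_port_id)
    100: (0, 1),   # untagged traffic toward port 1
    110: (10, 1),  # VLAN 10 toward the same port
}

def vlan_output_lookup(logical_port_id: int):
    """Return (vlan_id, new_output_logical_port_id) for this packet."""
    return VLAN_OUTPUT_TABLE[logical_port_id]
```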
The ARP table in the LFB is defined as a component of the LFB.  The
table is for IPv4 packets to find their next hop Ethernet layer MAC
addresses.  An input IPv4 packet uses the output logical port ID
obtained by looking up the VLAN output table, and the next hop IPv4
address obtained from the upstream next hop applicator LFB, to look
up the ARP table to find the Ethernet L2 information, i.e., the
source MAC address and destination MAC address.
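The ARP table lookup is keyed on the two metadata just mentioned; a
minimal sketch, with assumed illustrative entries, might be:

```python
# Hypothetical sketch of the ARP table lookup:
# (output logical port ID, next hop IPv4 address) -> (src MAC, dst MAC).
# A miss models the "redirect to ARP LFB or CE" path described above.
ARP_TABLE = {
    (1, "192.0.2.1"): ("00:11:22:33:44:55", "66:77:88:99:aa:bb"),
}

def arp_lookup(out_logical_port_id: int, next_hop_ipv4: str):
    """Return (src_mac, dst_mac), or None on a miss (ARP resolution needed)."""
    return ARP_TABLE.get((out_logical_port_id, next_hop_ipv4))
```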
The neighbor table is defined as another component of the LFB.  The
table is for IPv6 packets to find their next hop Ethernet layer MAC
addresses.  As with the ARP table, an input IPv6 packet uses its
output logical port ID obtained from looking up the VLAN output
table, and the packet's next hop IPv6 address obtained from the
upstream next hop applicator LFB, to look up the neighbor table to
find the Ethernet source MAC address and destination MAC address.
As will be described in the address resolution LFBs section (Section
5.4), Layer 2 address resolution protocols may be implemented in one
of two ways.  One is in the FE, with a specific address resolution
LFB like an ARP LFB or an ND LFB.  The other is to redirect address
resolution protocol messages to the CE for the CE to implement the
function.
As described in section 5.4, the ARP LFB defines the ARP table in
this encapsulator LFB as its alias, and the ND LFB defines the
neighbor table as its alias. This means that the ARP table or the
neighbor table will be maintained or refreshed by the ARP LFB or the
ND LFB when the LFBs are used.
Note that the ARP table and the neighbor table defined in this LFB
both have the read-write property.  The CE can also configure the
tables via the ForCES protocol [RFC5810].  This makes it possible for
the IPv4 ARP protocol or the IPv6 Neighbor Discovery protocol to be
implemented at the CE side, i.e., after the CE runs an ARP or
neighbor discovery protocol and gets address resolution results, the
CE can configure them into an ARP or neighbor table in the FE.
With all the information obtained from the VLAN table and the ARP or
neighbor table, input IPv4 or IPv6 packets can then be encapsulated
into Ethernet layer packets.  Note that, according to IEEE 802.1Q, if
input packets come with non-zero VLAN priority metadata, the packets
will always be encapsulated with a VLAN tag, no matter whether the
VLAN ID is zero or not.  If the VLAN priority and the VLAN ID are
both zero, the packets will be encapsulated without a VLAN tag.
Successfully encapsulated packets are then output via the success
output port.
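The tagging rule just stated reduces to a simple predicate; a sketch
of that rule (only the decision, not the encapsulation itself):

```python
# Sketch of the 802.1Q tagging decision described above: a VLAN tag
# is added whenever the VLAN priority is non-zero (whatever the VLAN
# ID), or the VLAN ID is non-zero; a tag is omitted only when both
# are zero.
def needs_vlan_tag(vlan_id: int, vlan_priority: int) -> bool:
    """Return True if the outgoing frame must carry an 802.1Q tag."""
    return vlan_priority != 0 or vlan_id != 0
```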
There are also some other components, capabilities, and events
defined in the LFB for various purposes.  See Section 6 for detailed
XML definitions of the LFB.
5.1.5. EtherMACOut
The EtherMACOut LFB abstracts an Ethernet port at the MAC data link
layer.  It specifically describes the Ethernet packet output process.
Ethernet output functions are closely related to Ethernet input
functions; therefore, many components defined in this LFB are
actually aliases of EtherMACIn LFB components.
The LFB is defined with one singleton input (Editorial note: do we
need another input for L2 bridging input?).  The input is expected to
receive all types of Ethernet packets, which are usually output from
some Ethernet encapsulation LFB.  Every input packet is associated
with a metadatum indicating the physical port the packet will go out
through (Editorial note: the Ethernet encapsulation LFB actually
generates logical port ID metadata; how has it been changed to a
physical port ID?).
The LFB is defined with a singleton output.  All output packets are
in Ethernet format, possibly with various Ethernet types.  The
downstream LFB that the output links to is usually an Ethernet
physical LFB like the EtherPHYCop LFB.  The metadata associated with
every packet from this output is PHYPortID, which keeps indicating
which physical port the packet is destined for.
Ethernet layer flow control is usually implemented cooperatively by
EtherMACIn LFB and EtherMACOut LFB. How the flow control is
implemented is vendor-specific. As an abstraction, this LFB defines
two flag components for CE to enable or disable the flow control
functions, a TxFlowControl flag and a RxFlowControl flag, and they
are all defined as aliases of EtherMACIn LFB.
AdminStatus is defined for CE to administratively manage the status
of the LFB. Via the component, CE can startup or shutdown the LFB.
The default status is set to 'Down'.
Note that as a base definition, functions like multiple virtual MAC
layers are not supported in this LFB version. It may be supported in
the future by defining a subclass or a new version of this LFB.
There are also some other components, capabilities, and events
defined in the LFB for various purposes.  See Section 6 for detailed
XML definitions of the LFB.
5.2. IP Packet Validation LFBs
LFBs are defined to abstract the IP packet validation process.  An
IPv4Validator LFB is specifically for IPv4 protocol validation, and
an IPv6Validator LFB for IPv6.
5.2.1. IPv4Validator
This LFB performs IPv4 packet validation according to RFC 1812.
Input of the LFB always expects packets which have been indicated as
IPv4 packets by an upstream LFB, like an EtherClassifier LFB.  There
is no specific metadata expected by the input of the validator LFB.
Note that, as a default provision of RFC 5812, in the FE model all
metadata produced by upstream LFBs passes through downstream LFBs by
default, without being specified at an input port or output port.
Only the metadata that will be used (consumed) by an LFB is
explicitly marked in the input of the LFB as expected metadata.  For
instance, in this LFB, even though no specific metadata is expected,
metadata like PHYPortID produced by an upstream PHY LFB will always
pass through this LFB.  If some component in the LFB uses such
metadata, it can still do so regardless of whether the metadata has
been marked as expected.
Four output ports are defined to output the various validation
results.  All validated IPv4 unicast packets are output at the
singleton IPv4UnicastOut port.  All validated IPv4 multicast packets
are output at the singleton IPv4MulticastOut port.  There is no
metadata specifically required to be produced at these output ports.

A singleton ExceptionOut port is defined to output packets which have
been validated as exceptional packets.  An exception ID metadata is
produced to indicate the cause of the exception.  Currently defined
exception types include cases like a packet with destination address
equal to 255.255.255.255, a packet with expired TTL, a packet with
header length of more than 5 words, and a packet whose IP header
includes Router Alert options.  Note that even though the TTL is
checked for validity here, actual operations like decrementing the
TTL are not performed here; they are performed by the forwarding LFB
that follows.
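A few of the checks above can be sketched as follows; this is a
hypothetical illustration in the style of RFC 1812, and the field
names, check ordering, and TTL threshold are assumptions, not taken
from the draft's XML:

```python
# Hypothetical sketch of IPv4 validation outcomes for a parsed
# header: 'fail' models the failed-validation output, exception
# names model the ExceptionOut port, 'ok' models the unicast or
# multicast output.
def classify_ipv4(version: int, ihl: int, ttl: int, dst: str):
    if version != 4 or ihl < 5:
        return "fail"                      # invalid header -> fail output
    if dst == "255.255.255.255":
        return "BroadCastPacket"           # -> ExceptionOut
    if ttl <= 1:
        return "BadTTL"                    # TTL expired; the decrement
                                           # itself is left to forwarding
    if ihl > 5:
        return "IPv4HeaderLengthMismatch"  # header carries options
    return "ok"                            # -> IPv4UnicastOut/MulticastOut
```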
A singleton output is defined for all packets which have failed the
packet validation.  A validation error ID is associated with every
failed packet to indicate the reason, like an invalid packet size,
wrong IP protocol version, wrong checksum, etc.
There are also some other components defined in the LFB for various
purposes.  See Section 6 for detailed XML definitions of the LFB.
5.2.2. IPv6Validator
This LFB performs IPv6 packet validation according to RFC 2460.
Input of the LFB always expects packets which have been indicated as
IPv6 packets by an upstream LFB like an EtherClassifier LFB.  There
is no specific metadata expected by the input of the validator LFB.
Similar to the IPv4 validator LFB, the IPv6Validator LFB also defines
four output ports to output the various validation results.  All
validated IPv6 unicast packets are output at the singleton
IPv6UnicastOut port.  All validated IPv6 multicast packets are output
at the singleton IPv6MulticastOut port.  There is no metadata
specifically required to be produced at these output ports.  A
singleton ExceptionOut port is defined to output packets which have
been validated as exceptional packets.  An exception ID is produced
to indicate the cause of the exception.  Currently, exception types
include the following cases:
a packet with hop limit set to zero.
a packet with a link-local destination address.
a packet with a link-local source address.
a packet with destination all-routers.
a packet with destination all-nodes.
a packet with next header set to Hop-by-Hop.
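The cases listed above can be sketched as checks on a parsed IPv6
header; this is a hypothetical illustration, and the address
constants and check order are assumptions, not from the draft:

```python
# Hypothetical sketch mapping the exception cases above to header
# checks.  Addresses are given in canonical lowercase text form.
HBH_NEXT_HEADER = 0  # protocol number of the Hop-by-Hop Options header

def ipv6_exception(hop_limit: int, src: str, dst: str, next_header: int):
    """Return the matching exception case, or None if no case applies."""
    if hop_limit == 0:
        return "hop limit zero"
    if dst.startswith("fe80:"):
        return "link-local destination"
    if src.startswith("fe80:"):
        return "link-local source"
    if dst == "ff02::2":
        return "all-routers destination"
    if dst in ("ff01::1", "ff02::1"):
        return "all-nodes destination"
    if next_header == HBH_NEXT_HEADER:
        return "next header Hop-by-Hop"
    return None
```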
A singleton output is defined for packets which have failed the
packet validation.  A validation error ID is associated with every
failed packet to indicate the reason for the failure.  The reasons
may include an invalid packet size, wrong IPv6 protocol version,
wrong source or destination IPv6 address, etc.
There are also some other components defined in the LFB for various
purposes.  See Section 6 for detailed XML definitions of the LFB.
5.3. IP Forwarding LFBs
IP Forwarding LFBs are specifically defined to abstract the IP
forwarding processes. As definitions for a base LFB library, this
document restricts its LFB definition scope for IP forwarding jobs
only to IP unicast forwarding. LFBs for jobs like IP multicast may
be defined in future versions of the document.
A typical IP unicast forwarding job is usually realized by looking
up a forwarding information table to find next hop information, and
then, based on that next hop information, forwarding packets to
specific output ports. This usually takes two steps: first, a
forwarding information table is looked up using the Longest Prefix
Matching (LPM) rule to find a next hop index; then, the index is
used to look up a next hop information table to find enough
information to submit packets to output ports. This document
abstracts the forwarding process mainly based on this two-step
model. However, other models do exist, such as one with only a
forwarding information base that conjoins next hop information with
forwarding information. In that case, if ForCES technology is to be
applied, some translation work will have to be done in the FE to
translate attributes defined by this document into the attributes
the implementation has actually applied.
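As an illustrative sketch of the two-step model above (the table
layouts and field names here are hypothetical assumptions, not the
normative components, which are defined in the XML of Section 6):

```python
import ipaddress

# Hypothetical prefix table: IPv4 prefix -> hop selector (next hop index).
prefix_table = {
    ipaddress.ip_network("10.0.0.0/8"): 1,
    ipaddress.ip_network("10.1.0.0/16"): 2,  # more specific, wins under LPM
    ipaddress.ip_network("0.0.0.0/0"): 0,    # default route
}

# Hypothetical next hop table: hop selector -> next hop information.
next_hop_table = {
    0: {"logical_port": 1, "next_hop_ip": "192.0.2.1"},
    1: {"logical_port": 2, "next_hop_ip": "198.51.100.7"},
    2: {"logical_port": 3, "next_hop_ip": "203.0.113.9"},
}

def lpm_lookup(dst_ip):
    """Step 1: longest prefix match returning a hop selector."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [n for n in prefix_table if dst in n]
    if not matches:
        return None
    best = max(matches, key=lambda n: n.prefixlen)
    return prefix_table[best]

def forward(dst_ip):
    """Step 2: use the hop selector to fetch next hop information."""
    selector = lpm_lookup(dst_ip)
    if selector is None:
        return None  # exception path: no route found
    return next_hop_table[selector]
```

For example, a destination of 10.1.2.3 matches both 10.0.0.0/8 and
10.1.0.0/16, and the /16 entry wins under the LPM rule.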
Based on this IP forwarding abstraction, two kinds of typical IP
unicast forwarding LFBs are defined: a unicast LPM lookup LFB and a
next hop application LFB. They are further distinguished by IPv4 and
IPv6 protocols.
5.3.1. IPv4UcastLPM
This LFB abstracts the IPv4 unicast LPM table lookup process. The
input of the LFB expects to receive IPv4 unicast packets. An IPv4
prefix table is defined as a component of the LFB to provide
forwarding information for every input packet. The destination IPv4
address of every packet is used as the index to look up the table
under the longest prefix matching (LPM) rule. The matching result is
a hop selector, which is output to downstream LFBs as an index for
next hop information.
The normal output of the LFB is a singleton output, which outputs
every IPv4 unicast packet that has passed the LPM lookup and obtained
a hop selector as the lookup result. The hop selector is associated
with the packet as metadata. The normal output of the LPM LFB is
usually followed by a next hop applicator LFB, which receives packets
with their next hop IDs and forwards the packets based on those IDs.
The hop selector associated with every packet from the normal output
directly acts as a next hop ID for the next hop applicator LFB.
The LFB provides some facilities to support users in implementing
equal-cost multipath routing (ECMP) or reverse path forwarding (RPF).
However, this LFB itself does not provide ECMP or RPF. To implement
ECMP or RPF, additional specific LFBs, like a specific ECMP LFB, will
have to be defined. This work may be done in a future version of the
document.
For the LFB to support ECMP, an ECMP flag is defined in the prefix
table entries. When the flag is set to true, it indicates that the
table entry is for ECMP only. In this case, the hop selector in the
table entry is used as an index for a downstream specific ECMP LFB to
find the corresponding next hop IDs. When ECMP is applied, this will
usually yield multiple next hops.
To distinguish normal output from ECMP output, a specific ECMP
output is defined. A packet that has matched a prefix table entry
with the ECMP flag set to true will always be output from this port,
with the hop selector being its lookup result. The output will
usually go directly to a downstream ECMP processing LFB. In the ECMP
LFB, based on the hop selector, multiple next hop IDs may be found,
and further ECMP algorithms may be applied to optimize the route. As
its result, the ECMP LFB will output one or multiple optimized next
hop IDs to its downstream LFB, which is usually a next hop applicator
LFB.
For the LFB to support RPF, a default route flag is defined in the
prefix table entry. When set to true, the prefix entry is identified
as a default route, and also as a forbidden route for RPF. To
implement various forms of RPF, one or more specific LFBs have to be
defined. This job may be done in a future version of the library.
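A sketch of how the two flags described above might steer a matched
packet; the entry layout and field names are illustrative assumptions,
not the normative component definitions:

```python
from dataclasses import dataclass

@dataclass
class PrefixEntry:
    hop_selector: int         # next hop ID, or an ECMP LFB index when ecmp_flag is set
    ecmp_flag: bool           # entry is for ECMP only
    default_route_flag: bool  # default route; a forbidden route for RPF

def classify_output(entry: PrefixEntry) -> str:
    """Decide which LPM LFB output port a matched packet leaves from."""
    if entry.ecmp_flag:
        # The hop selector indexes a downstream ECMP LFB rather than
        # acting directly as a next hop ID.
        return "ECMPOut"
    return "NormalOut"

def rpf_permits(entry: PrefixEntry) -> bool:
    """A default route must not validate a source address under RPF."""
    return not entry.default_route_flag
```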
An exception output is defined to allow exceptional packets to be
output there. Exceptions include cases like packets that cannot find
any route in the prefix table.
There are also some other components defined in the LFB for various
purposes. See section 6 for detailed XML definitions of the LFB.
5.3.2. IPv4NextHop
This LFB abstracts the process of next hop information application to
IPv4 packets.
The LFB receives an IPv4 packet with an associated next hop ID, and
uses the ID to look up a next hop table to find an appropriate output
port of the LFB. At the same time, the LFB also implements the TTL
operation and checksum recalculation for every IPv4 packet received.
The input of the LFB is a singleton input that expects to receive
IPv4 unicast packets and hop selector metadata from an upstream LFB.
Usually, the upstream LFB is directly an IPv4UcastLPM LFB, although
some other upstream LFB may be applied. For instance, when ECMP is
supported, the upstream LFB may be a specific ECMP LFB.
The next hop ID in the hop selector metadata of a packet is then
used as an index to look up a next hop table defined in the LFB. Via
this table and the next hop index, important information for
forwarding the packet is found. The information includes:
output logical port ID, which will be used by downstream LFBs to
find the proper output port.
next hop option, which decides whether the packet should be locally
processed. For packets that will be redirected to the CE for
processing or that need FE local processing, the next hop option
will be marked as 'forwarded to locally attached host'. Packets
that will be forwarded normally will be marked as 'normal
forwarding'.
next hop IP address, which will be used by downstream LFBs to find
the proper output port IP address for this packet.
encapsulation output index, which is used by the packet to find the
proper output of this LFB.
There are two output ports: one for success output and another for
exception output. The success output is a group output, with an
index to indicate which output instance in the group is adopted. The
index is the encapsulation output index described above. Downstream
LFBs connected to the success output group may be various
encapsulation LFBs, like LFBs for Ethernet encapsulation and for PPP
encapsulation, various LFBs for local processing, and LFBs for
redirecting packets to the CE for processing. The next hop table
uses the encapsulation output index to indicate which port instance
in the output group a packet should go to.
Every port instance of the success output group will produce output
logical port ID and next hop IP address metadata for every output
packet. These metadata will be used by downstream LFBs to further
implement the forwarding process.
Note that for a next hop option marked as local host processing, the
next hop IP address for the packet is exactly the destination IP
address of the packet.
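The next hop application described above might be sketched as
follows; the entry layout, the example table, and the names are
hypothetical illustrations of the four fields listed earlier:

```python
from dataclasses import dataclass

@dataclass
class NextHopEntry:
    output_logical_port_id: int  # used downstream to pick the output port
    next_hop_option: str         # 'normal forwarding' or
                                 # 'forwarded to locally attached host'
    next_hop_ip: str             # used downstream for L2 resolution
    encap_output_index: int      # selects the success output group instance

# Hypothetical next hop table indexed by hop selector.
next_hop_table = {
    7: NextHopEntry(2, "normal forwarding", "198.51.100.7", 0),
    8: NextHopEntry(1, "forwarded to locally attached host", "", 1),
}

def apply_next_hop(hop_selector: int, packet_dst_ip: str):
    """Look up the next hop table; invalid selectors go to ExceptionOut."""
    entry = next_hop_table.get(hop_selector)
    if entry is None:
        return ("ExceptionOut", None)
    # For local host processing, the next hop IP is the packet's own
    # destination address, as noted above.
    next_hop_ip = (packet_dst_ip
                   if entry.next_hop_option == "forwarded to locally attached host"
                   else entry.next_hop_ip)
    metadata = {"OutputLogicalPortID": entry.output_logical_port_id,
                "NextHopIPv4Addr": next_hop_ip}
    return (("SuccessOut", entry.encap_output_index), metadata)
```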
The exception output of the LFB is a singleton output. It outputs
packets with exceptional cases. An exception ID further indicates
the exception reason. An exception may happen when a hop selector is
found invalid, or when ICMP packets need to be generated (Editorial
note: more discussions here), etc. The exception ID is also produced
as metadata by the output to be transmitted to a downstream LFB.
There are also some other components defined in the LFB for various
purposes. See section 6 for detailed XML definitions of the LFB.
5.3.3. IPv6UcastLPM
This LFB abstracts the IPv6 unicast LPM table lookup process. The
definition of this IPv6UcastLPM LFB is identical to that of the
IPv4UcastLPM LFB except that all related IP addresses are changed
from IPv4 addresses to IPv6 addresses. See section 6 for detailed
XML definitions of this LFB.
5.3.4. IPv6NextHop
This LFB abstracts the process of next hop information application to
IPv6 packets.
The definition of this IPv6NextHop LFB is identical to that of the
IPv4NextHop LFB except that all related IP addresses are changed from
IPv4 addresses to IPv6 addresses. See section 6 for detailed XML
definitions of this LFB.
5.4. Address Resolution LFBs
The address resolution LFBs abstract the process for address
resolution functions. In the process, address resolution protocols,
like the ARP protocol for IPv4 and the neighbor discovery protocol
for IPv6, are applied.
There exist two schemes under the ForCES architecture to implement
the address resolution function. One is for the FE to implement
address resolution by use of address resolution LFBs as defined in
this section. The other is to offload address resolution from the FE
to the CE. In the latter case, address resolution LFBs will not be
used. All address resolution protocol messages the FE has received
will be redirected to the CE via the ForCES protocol [RFC5810]. The
CE is responsible for processing the protocol messages and generating
new address resolution messages to send to the outer network via the
FE, using the ForCES protocol [RFC5810]. The CE will also use the
ForCES protocol to manage the address resolution tables, like the ARP
table and the neighbor table, in the Ethernet encapsulator LFB.
Since address resolution is performed individually for IPv4 and IPv6
packets, an ARP LFB and an ND (neighbor discovery) LFB are defined
below.
5.4.1. ARP
The ARP LFB provides the function of address resolution for IPv4
nodes. Two singleton inputs are defined for the LFB. One is for ARP
protocol packet input. These packets usually come from upstream LFBs
like an Ethernet classifier LFB, where ARP protocol messages are
categorized. The frame type expected is hence the ARP protocol
message type. The other singleton input is for IPv4 packets that
usually come from the Ethernet encapsulator LFB and were unable to
find the L2 information needed to finish the encapsulation process in
that LFB. The associated metadata include a next hop IPv4 address
for which the encapsulator LFB cannot find a bound Ethernet MAC
address, the logical port ID, and the VLAN ID (Editorial note: need
more discussions on what metadata these inputs should expect.)
There are two components defined in the ARP LFB. One is the ARP
table. Note that the ARP table in this LFB is defined as an alias
component of the ARP table in the Ethernet encapsulator LFB. This
means management of the ARP table is shared by both LFBs. The ARP
LFB will manage the table and refresh the table entries based on the
ARP protocol messages received. The protocol messages provide
bindings of IPv4 addresses to destination MAC addresses. The ARP
table fields include the destination IP address, logical port ID,
source MAC address, and destination MAC address (Editorial note:
need more discussions on what fields are needed).
The other component defined is the local IPv4 address table for all
ports of the FE. An FE port here is indexed by a logical port ID.
Note that every physical port may be capable of multiple logical
ports with multiple IP or MAC addresses. The port IPv4 address table
provides the binding of a logical port to an IP address and a MAC
address (Editorial note: is it possible for one logical port to bind
multiple IP addresses?). The table will be used by the ARP LFB to
check the locality of arrived ARP protocol messages. Usually the
table will be configured by the CE via the ForCES protocol.
(Editorial note: need more discussions on what fields the port IP
address table needs and how the logical port ID and MAC address take
effect in the process.)
Two singleton outputs are defined for the ARP LFB. One is for ARP
protocol message output. All ARP request and response packets are
sent out from here to a downstream LFB, which is usually the Ethernet
encapsulation LFB.
The other output is for sending back all packets that were input to
this LFB because they could not find L2 encapsulation information
during encapsulation in an Ethernet encapsulation LFB. They are
simply sent back to that LFB for encapsulation again, with the ARP
table contents expected to have been refreshed. (Editorial note:
need more discussions on how the mechanism should be defined for
those packets unable to complete encapsulation in the encapsulation
LFB. An alternative scheme is to let the ARP LFB do the
encapsulation rather than send the packets back to the encapsulation
LFB, and then output them directly to an LFB after the encapsulation
LFB.)
5.4.2. ND
(TBD)
5.5. Redirect LFBs
Redirect LFBs abstract the data packet transport process between the
CE and FE. Some packets output from LFBs may have to be delivered to
the CE for further processing, and some packets generated by the CE
may have to be delivered to the FE, and further to some specific
LFBs, for data path processing. According to RFC 5810 [RFC5810],
data packets and their associated metadata are encapsulated in ForCES
redirect messages for transport between the CE and FE. We define two
LFBs to abstract the process: a RedirectIn LFB and a RedirectOut LFB.
Usually, in the LFB topology of an FE, only one RedirectIn LFB
instance and one RedirectOut LFB instance exist.
5.5.1. RedirectIn
A RedirectIn LFB abstracts the process for the CE to inject data
packets into the FE LFB topology so as to input data packets into FE
data paths. From the LFB topology point of view, the RedirectIn LFB
acts as a source point for data packets coming from the CE; therefore
the RedirectIn LFB is defined with only one output and without any
input.
The output of the RedirectIn LFB is defined as a group output.
Packets produced by the output will have arbitrary frame types,
decided by the CE that generates the packets. Possible frames may
include IPv4, IPv6, or ARP protocol packets. The CE may associate
some metadata to indicate the frame types. The CE may also associate
other metadata with data packets to indicate various information
about the packets. Among them, there MUST exist a 'RedirectIndex'
metadata, which is an integer acting as an index. When the CE
transmits the metadata and its bound packet to a RedirectIn LFB, the
LFB will read the metadata and output the packet to the group output
port instance whose port index is indicated by the metadata. The
detailed XML definition of the metadata is in the XML for the base
type library in Section 4.4.
All metadata from the CE other than the 'RedirectIndex' metadata will
be output from the RedirectIn LFB along with their bound packets.
Note that a packet without an associated 'RedirectIndex' metadata
will be dropped by the LFB.
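A minimal sketch of the RedirectIn dispatch rule, assuming metadata
is modeled as a simple dictionary; the bounds check on the index is
an assumption beyond what the text specifies:

```python
def redirect_in(packet, metadata: dict, max_output_ports: int):
    """Dispatch a CE-injected packet to the group output port instance
    selected by the mandatory 'RedirectIndex' metadata."""
    index = metadata.get("RedirectIndex")
    # A packet without 'RedirectIndex' is dropped; rejecting an
    # out-of-range index is an assumption of this sketch.
    if index is None or not (0 <= index < max_output_ports):
        return None
    # All other metadata travels with the packet to the chosen instance.
    passthrough = {k: v for k, v in metadata.items() if k != "RedirectIndex"}
    return (index, packet, passthrough)
```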
There is no component defined for the current version of the
RedirectIn LFB. Detailed XML definitions of the LFB can be found in
Section 6.
5.5.2. RedirectOut
A RedirectOut LFB abstracts the process for LFBs in the FE to
deliver data packets to the CE. From the LFB topology point of view,
the RedirectOut LFB acts as a sink point for data packets going to
the CE; therefore the RedirectOut LFB is defined with only one input
and without any output.
The input of the RedirectOut LFB is defined as a singleton input,
but it is capable of receiving packets from multiple LFBs by
multiplexing the singleton input. Packets expected by the input may
have arbitrary frame types. All metadata associated with the input
packets will be delivered to the CE via a ForCES protocol redirect
message [RFC5810]. The input will expect all types of metadata.
There is no component defined for the current version of the
RedirectOut LFB. Detailed XML definitions of the LFB can be found in
Section 6.
5.6. General Purpose LFBs
5.6.1. BasicMetadataDispatch
A basic metadata dispatch LFB is defined to abstract a process in
which a packet is dispatched to some path based on its associated
metadata value.
The LFB has a singleton input. Packets of arbitrary frame types can
be input into the LFB. However, every input packet is required to be
associated with the metadata that will be used by the LFB to do the
dispatch. If a packet is not associated with such metadata, the
packet will be dropped inside the LFB.
A group output is defined to output packets according to a
MetadataDispatchTable, which is defined as a component in the LFB.
The table contains the fields of a metadata ID, a metadata value, and
an output port index. A packet that is associated with metadata
matching the metadata ID will be output to the group port instance
with the index corresponding to the metadata value in the table. The
metadata value used by the table is required to have an integer data
type. This means the LFB currently only allows metadata with an
integer value to be used for dispatch.
Moreover, the LFB is defined with only one metadata item adopted for
dispatch, i.e., the metadata ID in the dispatch table is always the
same for all table rows.
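The dispatch rule can be sketched as a table keyed on a single
metadata ID; this is a hypothetical representation of the
MetadataDispatchTable component, and treating an unmatched metadata
value as a drop is an assumption of the sketch:

```python
# One metadata ID for the whole table; rows map integer metadata
# values to output port indices (illustrative layout only).
DISPATCH_METADATA_ID = "FrameTypeID"
dispatch_table = {1: 0,  # e.g. value 1 -> output port instance 0
                  2: 1,
                  3: 2}

def dispatch(packet, metadata: dict):
    """Send the packet to the group output instance selected by the
    metadata value; drop it if the metadata is missing or unmatched."""
    value = metadata.get(DISPATCH_METADATA_ID)
    if value is None or value not in dispatch_table:
        return None  # dropped inside the LFB
    return (dispatch_table[value], packet)
```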
A more complex metadata dispatch LFB may be defined in a future
version of the library. In that LFB, multiple tuples of metadata may
be adopted to dispatch packets.
5.6.2. GenericScheduler
There exist various kinds of scheduling strategies with various
implementations. As a base LFB library, this document only defines a
preliminary generic scheduler LFB to abstract a simple scheduling
process. Users may use this LFB as a basic scheduler LFB to further
construct more complex scheduler LFBs by means of inheritance, as
described in RFC 5812 [RFC5812].
The LFB describes the scheduling process with the following model:
o It has a group input and expects packets with arbitrary frame
types to arrive for scheduling. The group input is capable of
multiple input port instances. Each port instance may be
connected to a different upstream LFB output. No metadata is
expected with each input packet.
o Multiple queues reside at the input side, with every input port
instance connected to one queue.
o Every queue is marked with a queue ID, and the queue ID is exactly
the same as the index of the corresponding input port instance.
o Scheduling disciplines are applied to all queues and also to all
packets in the queues.
o Scheduled packets are output from a singleton output port of the
LFB.
Two LFB components are defined to further describe the above model.
A scheduling discipline component is defined for the CE to specify a
scheduling discipline to the LFB. Currently defined scheduling
disciplines only include FIFO and round robin (RR). For FIFO, we
constrain the above model as follows: when a FIFO discipline is
applied, it is required that there be only one input port instance
for the group input. If a user accidentally defines multiple input
port instances for FIFO scheduling, only packets in the input port
with the lowest port index will be scheduled to the output port, and
all packets in the other input port instances will simply be ignored.
We specify that if the generic scheduler LFB is defined with only
one input port instance, the default scheduling discipline is FIFO.
If the LFB is defined with more than one input port instance, the
default scheduling discipline is round robin (RR).
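The default disciplines can be sketched with in-memory queues; this
is a simplified illustration (class and method names are assumptions),
including the FIFO fallback of serving only the lowest-indexed port
described above:

```python
from collections import deque

class GenericScheduler:
    """Simplified sketch: one queue per input port instance,
    one singleton output."""

    def __init__(self, num_queues: int, discipline: str = None):
        self.queues = [deque() for _ in range(num_queues)]
        # Default discipline: FIFO for one input port instance, else RR.
        self.discipline = discipline or ("FIFO" if num_queues == 1 else "RR")
        self._rr_next = 0

    def enqueue(self, port_index: int, packet):
        # Queue ID equals the input port instance index.
        self.queues[port_index].append(packet)

    def schedule(self):
        """Return the next packet for the singleton output, or None."""
        if self.discipline == "FIFO":
            # Only the lowest-indexed port is served; others are ignored.
            return self.queues[0].popleft() if self.queues[0] else None
        # Round robin over non-empty queues.
        for _ in range(len(self.queues)):
            q = self.queues[self._rr_next]
            self._rr_next = (self._rr_next + 1) % len(self.queues)
            if q:
                return q.popleft()
        return None
```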
A current queue depth component is defined to allow the CE to query
the status of every queue of the scheduler. Using the queue ID as
the index, the CE can query every queue for its used length, in units
of packets or bytes.
Several capabilities are defined for the LFB. A queue number limit
is defined, which limits the maximum number of queues the scheduler
supports; this is also the maximum number of input port instances.
A supported-disciplines capability provides the scheduling discipline
types supported by the FE to the CE. A queue length limit provides
the storage capacity of every queue.
More complex scheduler LFBs may be defined with more complex
scheduling disciplines by inheriting from this LFB. For instance, a
priority scheduler LFB may be defined simply by inheriting this LFB
and defining a component to indicate priorities for all input queues.
See Section 6 for detailed XML definition for this LFB.
6. XML for LFB Library
EtherPHYCop
The LFB describes an Ethernet port abstracted at the
physical layer. It limits its physical media to copper.
Multiple virtual PHYs are not supported in this LFB version.
1.0
EtherPHYIn
The Input Port of the EtherPHYLFB. It
expects any kind of Ethernet frame.
[EthernetAll]
EtherPHYOut
The Output Port of the EtherPHYLFB. It can
produce any kind of Ethernet frame and along with
the frame passes the ID of the Physical Port as
metadata to be used by the next LFBs.
[EthernetII]
[PHYPortID]
PHYPortID
The ID of the physical port that this LFB
handles.
uint32
AdminStatus
Admin Status of the LFB
PortStatusValues
2
OperStatus
Operational Status of the LFB.
PortStatusValues
AdminLinkSpeed
The link speed that the admin has requested.
LANSpeedType
0x00000005
OperLinkSpeed
The actual operational link speed.
LANSpeedType
AdminDuplexMode
The duplex mode that the admin has requested.
DuplexType
0x00000001
OperDuplexMode
The actual duplex mode.
DuplexType
CarrierStatus
The status of the Carrier. Whether the port
is linked with an operational connector.
boolean
false
SupportedLinkSpeed
Supported Link Speeds
LANSpeedType
SupportedDuplexMode
Supported Duplex Modes
DuplexType
PHYPortStatusChanged
When the status of the physical port is
changed, the LFB sends the new status.
OperStatus
OperStatus
LinkSpeedChanged
When the operational speed of the link
is changed, the LFB sends the new operational link
speed.
OperLinkSpeed
OperLinkSpeed
DuplexModeChanged
When the operational duplex mode
is changed, the LFB sends the new operational duplex mode.
OperDuplexMode
OperDuplexMode
EtherMACIn
An LFB that abstracts an Ethernet port at the MAC data link
layer. Multiple virtual MACs are not supported in this LFB
version.
1.0
EtherMACIn
The Input Port of the EtherMACIn. It
expects any kind of Ethernet frame.
[EthernetAll]
[PHYPortID]
NormalPathOut
The Normal Output Port of the EtherMACIn.
It can produce any kind of Ethernet frame and along
with the frame passes the ID of the Physical Port as
metadata to be used by the next LFBs.
[EthernetAll]
[PHYPortID]
L2BridgingPathOut
The Bridging Output Port of the EtherMACIn.
It can produce any kind of Ethernet frame and along
with the frame passes the ID of the Physical Port as
metadata to be used by the next LFBs.
[EthernetAll]
[PHYPortID]
AdminStatus
Admin Status of the port
PortStatusValues
2
LocalMACAddresses
Local MAC Addresses
IEEEMAC
L2BridgingPathEnable
Is the LFB doing L2 Bridging?
boolean
false
PromiscuousMode
Is the LFB in Promiscuous Mode?
boolean
false
TxFlowControl
Transmit Flow control
boolean
false
RxFlowControl
Receive Flow control
boolean
false
MTU
Maximum Transmission Unit
uint32
MACInStats
MACIn statistics
MACInStatsType
EtherClassifier
LFB that decapsulates Ethernet II packets and
classifies them.
1.0
EtherPktsIn
Input port for data packet.
[EthernetAll]
[PHYPortID]
[LogicalPortID]
ClassifyOut
Classify Out
[Arbitrary]
[PHYPortID]
[SrcMAC]
[DstMAC]
[EtherType]
[VlanID]
[VlanPriority]
EtherDispatchTable
Ether classify dispatch table
EtherDispatchTableType
VlanInputTable
Vlan input table
VlanInputTableType
EtherClassifyStats
Ether classify statistics table
EtherClassifyStatsType
MaxOutputPorts
Maximum number of ports in the output
group.
uint32
EtherEncapsulator
An LFB that performs Ethernet L2 encapsulation of
packets.
1.0
EncapIn
A Single Packet Input
[IPv4]
[IPv6]
[NexthopIPv4Addr]
[NexthopIPv6Addr]
[OutputLogicalPortID]
[VlanPriority]
SuccessOut
Output port for Packets which have found
Ethernet L2 information and have been successfully
encapsulated to an Ethernet packet.
[IPv4]
[IPv6]
[OutputLogicalPortID]
PacketNoARPOut
Output port for packets that cannot find the
associated L2 information in the ARP table.
[IPv4]
[OutputLogicalPortID]
[NexthopIPv4Addr]
[VlanPriority]
PacketNoNbrOut
Output port for packets that cannot find the
associated L2 information in the Nbr table.
[IPv6]
[OutputLogicalPortID]
[NexthopIPv6Addr]
[VlanPriority]
ExceptionOut
All packets that fail the other
operations in this LFB are output via this port.
[IPv4]
[IPv6]
[ExceptionID]
[OutputLogicalPortID]
[NexthopIPv4Addr]
[NexthopIPv6Addr]
[VlanPriority]
ArpTable
ARP table.
ArpTableType
NbrTable
Nbr table.
NbrTableType
VLANOutputTable
VLAN output table.
VLANOutputTableType
EtherMACOut
EtherMACOut LFB abstracts an Ethernet port at MAC
data link layer. It specifically describes Ethernet packet
output process. Ethernet output functions are closely related
to Ethernet input functions, therefore many components
defined in this LFB are actually alias of EtherMACIn LFB.
1.0
EtherPktsIn
The Input Port of the EtherMACOut. It expects
any kind of Ethernet frame.
[EthernetAll]
[PHYPortID]
EtherMACOut
The Normal Output Port of the EtherMACOut. It
can produce any kind of Ethernet frame and along with
the frame passes the ID of the Physical Port as
metadata to be used by the next LFBs.
[EthernetAll]
[PHYPortID]
OperStatus
Operational Status of the LFB.
PortStatusValues
TxFlowControl
Transmit Flow control
boolean
false
RxFlowControl
Receive Flow control
boolean
false
MACOutStats
MACOut statistics
MACOutStatsType
IPv4Validator
An LFB that performs IPv4 packet validation
according to RFC 1812 and RFC 2644.
1.0
ValidatePktsIn
Input port for data packet.
[Arbitrary]
IPv4UnicastOut
Output for IPv4 unicast packet.
[IPv4Unicast]
IPv4MulticastOut
Output for IPv4 multicast packet.
[IPv4Multicast]
ExceptionOut
Output for exception packet.
[IPv4]
[ExceptionID]
FailOut
Output for failed validation packet.
[IPv4]
[ValidateErrorID]
IPv4ValidatorStats
IPv4 validator statistics
IPv4ValidatorStatisticsType
IPv6Validator
An LFB that performs IPv6 packet validation
according to RFC 2460 and RFC 4291.
1.0
ValidatePktsIn
Input port for data packet.
[Arbitrary]
IPv6UnicastOut
Output for IPv6 unicast packet.
[IPv6Unicast]
IPv6MulticastOut
Output for IPv6 multicast packet.
[IPv6Multicast]
ExceptionOut
Output for exception packet.
[IPv6]
[ExceptionID]
FailOut
Output for failed validation packet.
[IPv6]
[ValidateErrorID]
IPv6ValidatorStats
IPv6 validator statistics
IPv6ValidatorStatisticsType
IPv4UcastLPM
An LFB that performs IPv4 Longest Prefix Match
lookup.
1.0
PktsIn
A Single Packet Input
[IPv4Unicast]
[DstIPv4Address]
NormalOut
This output port is connected with
IPv4NextHop LFB
[IPv4Unicast]
[HopSelector]
ECMPOut
This output port is connected with ECMP LFB,
if there is ECMP LFB in the FE.
[IPv4Unicast]
[HopSelector]
ExceptionOut
The output for the packet if an exception
occurs
[IPv4Unicast]
[ExceptionID]
IPv4PrefixTable
The IPv4 Prefix Table.
IPv4PrefixTableType
IPv4UcastLPMStats
Statistics for IPv4 Unicast Longest Prefix
Match
IPv4UcastLPMStatsType
IPv6UcastLPM
An LFB that performs IPv6 Longest Prefix Match
lookup.
1.0
PktsIn
A Single Packet Input
[IPv6Unicast]
[DstIPv6Address]
NormalOut
This output port is connected with
IPv6NextHop LFB
[IPv6Unicast]
[HopSelector]
ECMPOut
This output port is connected with ECMP LFB,
if there is ECMP LFB in the FE.
[IPv6Unicast]
[HopSelector]
ExceptionOut
The output for the packet if an exception
occurs
[IPv6Unicast]
[ExceptionID]
IPv6PrefixTable
The IPv6 Prefix Table.
IPv6PrefixTableType
IPv6UcastLPMStats
Statistics for IPv6 Unicast Longest Prefix
Match
IPv6UcastLPMStatsType
IPv4NextHop
An LFB for applying next hop actions to IPv4
packets; the actions include TTL operation and checksum
recalculation. Input packets carry the metadata
"HopSelector" (the next hop ID) and get the next hop
information by looking up the next hop table.
1.0
PktsIn
A Single Packet Input
[IPv4Unicast]
[HopSelector]
SuccessOut
The output for the packet if it is valid to be
forwarded
[IPv4Unicast]
[OutputLogicalPortID]
[NextHopIPv4Addr]
ExceptionOut
The output for the packet if an exception
occurs
[IPv4Unicast]
[ExceptionID]
IPv4NextHopTable
The Next Hop Table.
IPv4NextHopTableType
MaxOutputPorts
Maximum number of ports in the output group.
uint32
IPv6NextHop
An LFB definition for applying next hop actions to
IPv6 packets. Input packets carry the metadata
"HopSelector" (the next hop ID) and get the next hop
information by looking up the next hop table.
1.0
PktsIn
A Single Packet Input
[IPv6Unicast]
[HopSelector]
SuccessOut
The output for the packet if it is valid to
be forwarded
[IPv6Unicast]
[OutputLogicalPortID]
[NextHopIPv6Addr]
ExceptionOut
The output for the packet if an exception
occurs
[IPv6Unicast]
[ExceptionID]
IPv6NextHopTable
The Next Hop Table.
IPv6NextHopTableType
MaxOutputPorts
Maximum number of ports in the output group.
uint32
ARP
ARP
1.0
ArpPktsIn
The input port for ARP packets.
[ARP]
[PHYPortID]
[LogicalPortID]
[SrcMAC]
[DstMAC]
AddrResDataPktsIn
The input port for packets which need
address resolution.
[IPv4]
[NexthopIPv4Addr]
[OutputLogicalPortID]
[VlanID]
[VlanPriority]
ArpPktsOut
The output port for ARP packets.
[EthernetII]
[OutputLogicalPortID]
AddrResDataPktsOut
The output port for the packet which has been
encapsulated with the L2 head.
[EthernetII]
[OutputLogicalPortID]
PortV4AddrInfoTable
The IPv4 address for all local ports.
Portv4AddrInfoTableType
ND
TBD
1.0
RedirectIn
The RedirectIn LFB abstracts the process for CE to
inject data packets into FE LFB topology so as to input data
packets into FE data paths. From LFB topology point of view,
the RedirectIn LFB acts as a source point for data packets
coming from CE, therefore the RedirectIn LFB is defined with
only one output and without any input. Output of the
RedirectIn LFB is defined as a group output. Packets produced
by the output will have arbitrary frame types decided by CE
which generates the packets. Possible frames may include IPv4,
IPv6, or ARP protocol packets. CE may associate some metadata
to indicate the frame types. CE may also associate other
metadata to data packets to indicate various information on
the packets. Among them, there MUST exist a 'RedirectIndex'
metadata, which is an integer acting as an index. When the CE
transmits the metadata and its bound packet to a RedirectIn
LFB, the LFB will read the metadata and output the packet to
the group output port instance whose port index is indicated
by the metadata. All metadata from the CE other than the
'RedirectIndex' metadata will be output from the RedirectIn
LFB along with their bound packets. Note that a packet
without an associated 'RedirectIndex' metadata will be
dropped by the LFB.
1.0
PktsOut
This output group sends redirected packets
into the data path.
[Arbitrary]
MaxOutputPorts
Maximum number of ports in the output group
uint32
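The 'RedirectIndex' dispatch rule described above can be sketched as
follows. This is a minimal illustration only; the function and
container names (redirect_in, output_group) are assumptions for the
sketch, not part of the LFB definition:

```python
# Sketch of RedirectIn dispatch: the 'RedirectIndex' metadata selects
# the group output port instance; a packet without it is dropped.

def redirect_in(packet_metadata, packet, output_group):
    """Emit 'packet' on the group output port instance selected by
    the 'RedirectIndex' metadata; drop the packet if it is absent."""
    index = packet_metadata.get("RedirectIndex")
    if index is None:
        return None  # no 'RedirectIndex' metadata: packet is dropped
    # All metadata other than 'RedirectIndex' travels with the packet.
    forwarded_meta = {k: v for k, v in packet_metadata.items()
                      if k != "RedirectIndex"}
    output_group[index].append((packet, forwarded_meta))
    return index
```
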
RedirectOut
A RedirectOut LFB abstracts the process by which LFBs in
the FE deliver data packets to the CE. From the LFB topology point
of view, the RedirectOut LFB acts as a sink point for data
packets going to the CE; therefore the RedirectOut LFB is defined
with only one input and no output. The input of the
RedirectOut LFB is defined as a singleton input, but it is
capable of receiving packets from multiple LFBs by
multiplexing the singleton input. Packets arriving at the
input may have arbitrary frame types. All metadata
associated with the input packets will be delivered to the CE
via the redirect message of the ForCES protocol [RFC5810];
therefore the input expects all types of metadata.
1.0
PktsIn
This input group receives packets to send to
the CE.
[Arbitrary]
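The delivery behavior above can be sketched as follows. The message
layout and names (redirect_out, ce_channel) are illustrative
assumptions only and do not represent the ForCES wire format of
[RFC5810]:

```python
# Sketch of RedirectOut: each packet arriving on the singleton input
# is wrapped, together with ALL of its associated metadata, into a
# redirect message bound for the CE.

def redirect_out(packet, packet_metadata, ce_channel):
    """Deliver the packet and all of its metadata to the CE."""
    message = {
        "type": "PacketRedirect",
        "metadata": dict(packet_metadata),  # all metadata, unchanged
        "payload": packet,
    }
    ce_channel.append(message)
    return message
```
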
BasicMetadataDispatch
This LFB provides the function to dispatch input
packets to a group output according to a metadata value and a
dispatch table.
1.0
PacketsIn
Input port for data packets.
[Arbitrary]
[Arbitrary]
PacketsOut
Data packet output
[Arbitrary]
MetadataDispatchTable
The metadata dispatch table.
MetadataDispatchTableType
MaxOutputPorts
Maximum number of ports in the output group.
uint32
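The dispatch step can be sketched as a simple table lookup. The flat
mapping used here for MetadataDispatchTable is an assumption for
illustration; the actual table structure is given by
MetadataDispatchTableType:

```python
# Sketch of BasicMetadataDispatch: a packet's metadata value is
# looked up in a dispatch table mapping values to output port
# indices of the group output.

def metadata_dispatch(dispatch_table, metadata_name, packet_metadata):
    """Return the output port index for a packet, or None if the
    packet's metadata value has no entry in the dispatch table."""
    value = packet_metadata.get(metadata_name)
    return dispatch_table.get(value)
```
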
GenericScheduler
Generic Scheduler LFB.
1.0
PacketsIn
Input port for data packets.
[Arbitrary]
PacketsOut
Data packet output
[Arbitrary]
QueueCount
The number of queues to be scheduled.
uint32
SchedulingDiscipline
The scheduling discipline.
SchdDisciplineType
CurrentQueueDepth
Current depth of all queues.
QueueDepth
QueueLenLimit
Maximum length of each queue, in
bytes.
uint32
QueueScheduledLimit
Maximum number of queues that can be scheduled
by this scheduler.
uint32
DisciplinesSupported
The scheduling disciplines supported.
SchdDisciplineType
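As one concrete example of a discipline that the
SchedulingDiscipline attribute might select, a round-robin service
order over the scheduled queues can be sketched as below. This is
purely illustrative; the round_robin name and queue representation
are assumptions, not part of the LFB definition:

```python
# Sketch of one possible GenericScheduler discipline: round robin
# over a list of FIFO queues, serving one packet per non-empty
# queue per pass until all queues drain.
from collections import deque

def round_robin(queues):
    """Yield packets from the queues in round-robin order until
    every queue is empty."""
    while any(queues):
        for q in queues:
            if q:
                yield q.popleft()
```
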
7. LFB Class Use Cases
This section demonstrates examples on how the LFB classes defined by
the Base LFB library in Section 6 are applied to achieve typical
router functions.
As mentioned in the overview section, typical router functions can
be briefly categorized into the following:
o IP forwarding
o address resolution
o ICMP
o network management
o running routing protocols
To achieve these functions, processing paths organized from the LFB
classes and their interconnections should be established in the FE.
In general, the CE controls and manages the processing paths by
means of the ForCES protocol.
Note that the LFB class use cases shown in this section are only
examples to demonstrate how typical router functions can be
implemented with the defined base LFB library. Users and
implementers should not be limited by these examples.
7.1. IP Forwarding
TBD
7.2. Address Resolution
TBD
7.3. ICMP
TBD
7.4. Running Routing Protocol
TBD
7.5. Network Management
TBD
8. Contributors
The authors would like to thank Jamal Hadi Salim, Ligang Dong, and
Fenggen Jia who made major contributions to the development of this
document.
Jamal Hadi Salim
Mojatatu Networks
Ottawa, Ontario
Canada
Email: hadi@mojatatu.com
Ligang Dong
Zhejiang Gongshang University
149 Jiaogong Road
Hangzhou 310035
P.R.China
Phone: +86-571-28877751
Email: donglg@mail.zjgsu.edu.cn
Fenggen Jia
National Digital Switching Center (NDSC)
Jianxue Road
Zhengzhou 452000
P.R.China
Email: jfg@mail.ndsc.com.cn
9. Acknowledgements
This document is based on earlier documents from Joel Halpern, Ligang
Dong, Fenggen Jia and Weiming Wang.
10. IANA Considerations
(TBD)
11. Security Considerations
These definitions if used by an FE to support ForCES create
manipulable entities on the FE. Manipulation of such objects can
produce almost unlimited effects on the FE. FEs should ensure that
only properly authenticated ForCES protocol participants are
performing such manipulations. Thus the security issues with this
protocol are defined in the ForCES protocol [RFC5810].
12. References
12.1. Normative References
[RFC5810] Doria, A., Hadi Salim, J., Haas, R., Khosravi, H., Wang,
W., Dong, L., Gopal, R., and J. Halpern, "Forwarding and
Control Element Separation (ForCES) Protocol
Specification", RFC 5810, March 2010.
[RFC5812] Halpern, J. and J. Hadi Salim, "Forwarding and Control
Element Separation (ForCES) Forwarding Element Model",
RFC 5812, March 2010.
12.2. Informative References
[RFC1812] Baker, F., "Requirements for IP Version 4 Routers",
RFC 1812, June 1995.
[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, March 1997.
[RFC2629] Rose, M., "Writing I-Ds and RFCs using XML", RFC 2629,
June 1999.
[RFC3552] Rescorla, E. and B. Korver, "Guidelines for Writing RFC
Text on Security Considerations", BCP 72, RFC 3552,
July 2003.
[RFC3654] Khosravi, H. and T. Anderson, "Requirements for Separation
of IP Control and Forwarding", RFC 3654, November 2003.
[RFC3746] Yang, L., Dantu, R., Anderson, T., and R. Gopal,
"Forwarding and Control Element Separation (ForCES)
Framework", RFC 3746, April 2004.
[RFC5226] Narten, T. and H. Alvestrand, "Guidelines for Writing an
IANA Considerations Section in RFCs", BCP 26, RFC 5226,
May 2008.
Authors' Addresses
Weiming Wang
Zhejiang Gongshang University
18 Xuezheng Str., Xiasha University Town
Hangzhou, 310018
P.R.China
Phone: +86-571-28877721
Email: wmwang@zjgsu.edu.cn
Evangelos Haleplidis
University of Patras
Patras,
Greece
Email: ehalep@ece.upatras.gr
Kentaro Ogawa
NTT Corporation
Tokyo,
Japan
Email: ogawa.kentaro@lab.ntt.co.jp
Chuanhuang Li
Hangzhou BAUD Networks
408 Wen-San Road
Hangzhou, 310012
P.R.China
Phone: +86-571-28877751
Email: chuanhuang_li@mail.zjgsu.edu.cn
Joel Halpern
Ericsson
P.O. Box 6049
Leesburg, VA 20178
Phone: +1 703 371 3043
Email: joel.halpern@ericsson.com