EVPN Virtual Ethernet Segment

A. Sajassi, Cisco Systems (sajassi@cisco.com)
P. Brisset, Cisco Systems (pbrisset@cisco.com)
R. Schell, Verizon (richard.schell@verizon.com)
J. Drake, Juniper (jdrake@juniper.net)
J. Rabadan, Nokia (jorge.rabadan@nokia.com)

Routing Area
BESS Working Group
EVPN and PBB-EVPN introduce a family of solutions for multipoint
Ethernet services over MPLS/IP networks with many advanced features,
among them their multi-homing capabilities. These solutions introduce
Single-Active and All-Active redundancy modes for an Ethernet Segment
(ES), itself defined as a set of physical links between the
multi-homed device/network and the set of PE devices that it is
connected to.
This document extends the Ethernet Segment concept so that an ES can
be associated with a set of EVCs (e.g., VLANs) or other objects, such
as MPLS Label Switched Paths (LSPs) or Pseudowires (PWs), referred to
as Virtual Ethernet Segments (vESes). This draft describes the
requirements and the extensions needed to support vESes in EVPN and
PBB-EVPN.
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119
and RFC 8174.
EVPN [RFC7432] and PBB-EVPN [RFC7623] introduce a family of
solutions for multipoint Ethernet services over MPLS/IP networks with
many advanced features, among them their multi-homing capabilities.
These solutions introduce Single-Active and All-Active redundancy
modes for an Ethernet Segment (ES), itself defined as a set of links
between the multi-homed device/network and the set of PE devices that
it is connected to. This document extends the Ethernet Segment
concept so that an ES can be associated with a set of EVCs (e.g.,
VLANs) or other objects, such as MPLS Label Switched Paths (LSPs) or
Pseudowires (PWs), referred to as Virtual Ethernet Segments (vESes).
This draft describes the requirements and the extensions needed to
support vESes in EVPN and PBB-EVPN.
Some Service Providers (SPs) want to extend the concept of the
physical links in an ES to Ethernet Virtual Circuits (EVCs), where
many such EVCs (e.g., VLANs) can be aggregated on a single
physical External Network-to-Network Interface (ENNI). An ES that
consists of a set of EVCs instead of physical links is referred to as
a virtual ES (vES). Figure 1 depicts two PE devices (PE1 and PE2),
each with an ENNI on which a number of vESes are aggregated, each
via its associated EVC.
ENNIs are commonly used to reach off-network / out-of-franchise
customer sites via independent Ethernet access networks or third-
party Ethernet Access Providers (EAP) (see Figure 1). ENNIs can
aggregate traffic from hundreds to thousands of vESes, where each
vES is represented by its associated EVC on that ENNI. As a result,
ENNIs and their associated EVCs are a key element of SP off-networks
that are carefully designed and closely monitored. In order to meet customers' Service Level Agreements (SLA), SPs build
redundancy via multiple EVPN PEs and across multiple ENNIs (as shown
in Figure 1) where a given vES can be multi-homed to two or more
EVPN PE devices (on two or more ENNIs) via their associated EVCs.
Just like physical ESes in the EVPN [RFC7432] and PBB-EVPN [RFC7623]
solutions, these vESes can be single-homed or multi-homed ESes, and
when multi-homed, they can operate in either Single-Active or
All-Active redundancy mode.
In a typical SP off-network scenario, an ENNI can be associated with
several thousand single-homed vESes and several hundred Single-Active
vESes, and it may also be associated with tens or hundreds of
All-Active vESes.

Other Service Providers (SPs) want to extend the concept of the
physical links in an ES to individual Pseudowires (PWs) or to MPLS
Label Switched Paths (LSPs) in Access MPLS networks - i.e., a vES
consisting of a set of PWs or a set of LSPs. Figure 2 illustrates
this concept. In some cases, Service Providers use MPLS Aggregation Networks that belong
to separate administrative entities or third parties as a way to get
access to their own IP/MPLS Core network infrastructure. This is the
case illustrated in Figure 2. In such scenarios, a virtual ES (vES) is defined as a set of
individual PWs if they cannot be aggregated into a common LSP. If the
aggregation of PWs is possible, the vES can be associated with an
LSP in a given PE. In the example of Figure 2, EVC3 is connected to a
VPWS instance in AG2 that is connected to PE1 and PE2 via PW3 and PW5
respectively. EVC4 is connected to a separate VPWS instance on AG2
that gets connected to an EVI on PE1 and PE2 via PW4 and PW6,
respectively. Since the PWs for the two VPWS instances can be
aggregated into the same LSPs going to the MPLS network, a common
virtual ES can be defined for LSP1 and LSP2. This vES will be shared
by two separate EVIs in the EVPN network. In some cases, this aggregation of PWs into common LSPs may not be
possible. For instance, if PW3 were terminated into a third PE,
e.g., PE3, instead of PE1, the vES would need to be defined on a
per-PW basis on each PE, i.e., PW3 and PW5 would belong to ES-1,
whereas PW4 and PW6 would be associated with ES-2.

For MPLS/IP access networks where a vES represents a set of PWs or
LSPs, this document extends the Single-Active multi-homing procedures
of [RFC7432] and [RFC7623] to the vES. The vES extension to
All-Active multi-homing is outside the scope of this document for
MPLS/IP access networks. This draft describes the requirements and
the extensions needed to support a vES in EVPN [RFC7432] and
PBB-EVPN [RFC7623].
The following sections list the set of requirements for a vES and
describe the extensions for a vES that are applicable to EVPN
solutions, including EVPN [RFC7432] and PBB-EVPN [RFC7623]; these
extensions meet the requirements listed. The document then gives a
solution overview and describes failure handling, recovery,
scalability, and fast convergence of EVPN and PBB-EVPN for vESes.

AC:       Attachment Circuit
BEB:      Backbone Edge Bridge
B-MAC:    Backbone MAC Address
CE:       Customer Edge
CFM:      Connectivity Fault Management (802.1ag)
C-MAC:    Customer/Client MAC Address
DF:       Designated Forwarder
DHD:      Dual-homed Device
DHN:      Dual-homed Network
ENNI:     External Network-Network Interface
ES:       Ethernet Segment
ESI:      Ethernet Segment Identifier
EVC:      Ethernet Virtual Circuit
EVPN:     Ethernet VPN
I-SID:    Service Instance Identifier (24 bits and global within a
          PBB network; see IEEE 802.1ah)
LACP:     Link Aggregation Control Protocol
PBB:      Provider Backbone Bridge
PBB-EVPN: Provider Backbone Bridge EVPN
PE:       Provider Edge
MHD:      Multi-homed Device
MHN:      Multi-homed Network
SH:       Single-Homed
VPWS:     Virtual Private Wire Service

Single-Active Redundancy Mode: When only a single PE, among a group
of PEs attached to an Ethernet Segment, is allowed to forward traffic
to/from that Ethernet Segment, the Ethernet Segment is defined to be
operating in Single-Active redundancy mode.

All-Active Redundancy Mode: When all PEs attached to an Ethernet
Segment are allowed to forward traffic to/from that Ethernet Segment,
the Ethernet Segment is defined to be operating in All-Active
redundancy mode.

This section describes the requirements specific to the virtual
Ethernet Segment (vES) for (PBB-)EVPN solutions. These requirements
are in addition to the ones described in EVPN [RFC7432] and PBB-EVPN
[RFC7623]. A PE needs to support the following types of vESes:

(R1a) A PE MUST handle Single-Homed vESes on a single physical port
(e.g., a single ENNI).

(R1b) A PE MUST handle a mix of Single-Homed vESes and Single-Active
multi-homed vESes simultaneously on a single physical port (e.g., a
single ENNI). Single-Active multi-homed vESes will be simply referred
to as Single-Active vESes through the rest of this document.

(R1c) A PE MAY handle All-Active multi-homed vESes on a single
physical port. All-Active multi-homed vESes will be simply referred
to as All-Active vESes through the rest of this document.

(R1d) A PE MAY handle a mix of All-Active vESes along with other
types of vESes on a single physical port.

(R1e) A Multi-Homed vES (Single-Active or All-Active) can be spread
across two or more ENNIs, on any two or more PEs.

A single physical port (e.g., ENNI) can be associated with many
vESes. The following requirements give a quantitative measure for
each vES type.

(R2a) A PE SHOULD handle a very large number of Single-Homed vESes on
a single physical port (e.g., thousands of vESes on a single ENNI).

(R2b) A PE SHOULD handle a large number of Single-Active vESes on a
single physical port (e.g., hundreds of vESes on a single ENNI).

(R2c) A PE MAY handle a large number of All-Active vESes on a single
physical port (e.g., hundreds of vESes on a single ENNI).

(R2d) A PE SHOULD handle the above scale for a mix of Single-Homed
vESes and Single-Active vESes simultaneously on a single physical
port (e.g., a single ENNI).

(R2e) A PE MAY handle the above scale for a mix of All-Active vESes
along with other types of vESes on a single physical port.

Many vESes of different types can be aggregated on a single physical
port on a PE device, and some of these vESes can belong to the same
service instance (or customer). This translates into the need to
support local switching among the vESes of the same service instance
on the same physical port (e.g., ENNI) of the PE.

(R3a) A PE MUST support local switching among different vESes
belonging to the same service instance (or customer) on a single
physical port. For example, in Figure 1, PE1 MUST support local
switching between CE11 and CE12 (both belonging to customer A), which
are mapped to two Single-Homed vESes on ENNI1. In the case of
Single-Active vESes, the local switching is performed among active
EVCs belonging to the same service instance on the same ENNI.

A physical port (e.g., ENNI) of a PE can aggregate many EVCs, each of
which is associated with a vES. Furthermore, an EVC may carry one or
more VLANs. Typically, an EVC carries a single VLAN and thus is
associated with a single broadcast domain. However, there is no
restriction on an EVC carrying more than one VLAN.

(R4a) An EVC can be associated with a single broadcast domain - e.g.,
VLAN-based service or VLAN bundle service.

(R4b) An EVC MAY be associated with several broadcast domains - e.g.,
VLAN-aware bundle service.

In the same way, a PE can aggregate many LSPs and PWs. In the case
of individual PWs per vES, typically a PW is associated with a single
broadcast domain, but there is no restriction on the PW carrying more
than one VLAN if the PW is of type Raw mode.

(R4c) A PW can be associated with a single broadcast domain - e.g.,
VLAN-based service or VLAN bundle service.

(R4d) A PW MAY be associated with several broadcast domains - e.g.,
VLAN-aware bundle service.

Section 8.5 of [RFC7432] describes the default procedure for DF
election in EVPN, which is also used in PBB-EVPN [RFC7623] and
EVPN-VPWS [RFC8214]. [RFC8584] describes additional procedures for
DF election in EVPN.
These DF election procedures are performed at the granularity of
(ESI, Ethernet Tag). In the case of a vES, the same EVPN default
procedure for DF election also applies, but at the granularity of
(vESI, Ethernet Tag), where vESI is the virtual Ethernet Segment
Identifier and the Ethernet Tag field is represented by an I-SID in
PBB-EVPN and by a VLAN ID (VID) in EVPN.
As in [RFC7432], this default procedure for DF election at the
granularity of (vESI, Ethernet Tag) is also referred to as
"service carving". With service carving, it is desirable to evenly
partition the DFs for different vESes among different PEs, thus
evenly distributing the traffic among the PEs. The following
requirements apply to DF election of vESes for (PBB-)EVPN.

(R5a) A vES with m EVCs can be distributed among n ENNIs belonging to
p PEs in any arbitrary order, where n >= p >= m. For example, if
there is a vES with 2 EVCs and there are 5 ENNIs on 5 PEs (PE1
through PE5), then the vES can be dual-homed to PE2 and PE4, and the
DF election must be performed between PE2 and PE4.

(R5b) Each vES MUST be identified by its own virtual ESI (vESI).

In order to detect the failure of an individual EVC and perform DF
election for its associated vES as the result of this failure, each
EVC should be monitored independently.

(R6a) Each EVC SHOULD be monitored for its health independently.

(R6b) A single EVC failure (among many aggregated on a single
physical port/ENNI) MUST trigger DF election for its associated vES.

(R7a) Failure and failure recovery of an EVC for a Single-Homed vES
SHALL NOT impact any other EVCs within its service instance or any
other service instances. In other words, for PBB-EVPN, it SHALL NOT
trigger any MAC flushing, either within its own I-SID or in other
I-SIDs.

(R7b) In the case of an All-Active vES, failure and failure recovery
of an EVC for that vES SHALL NOT impact any other EVCs within its
service instance or any other service instances. In other words, for
PBB-EVPN, it SHALL NOT trigger any MAC flushing, either within its
own I-SID or in other I-SIDs.

(R7c) Failure and failure recovery of an EVC for a Single-Active vES
SHALL impact only its own service instance. In other words, for
PBB-EVPN, MAC flushing SHALL be limited to the associated I-SID only
and SHALL NOT impact any other I-SIDs.

(R7d) Failure and failure recovery of an EVC for a Single-Active vES
MAY only impact C-MACs associated with MHD/MHNs for that service
instance. In other words, MAC flushing SHOULD be limited to a single
service instance (an I-SID in the case of PBB-EVPN) and only to
C-MACs for Single-Active MHD/MHNs.

Since a large number of EVCs (and their associated vESes) are
aggregated via a single physical port (e.g., ENNI), the failure of
that physical port impacts a large number of vESes and triggers
equally many ES route withdrawals. Formulating, sending, receiving,
and processing such a large number of BGP messages can introduce
delay in DF election and convergence time. As such, it is highly
desirable to have a mass-withdraw mechanism similar to the one in
[RFC7432] for withdrawing many Ethernet A-D per ES routes.

(R8a) There SHOULD be a mechanism equivalent to the EVPN
mass-withdraw such that upon an ENNI failure, only a single BGP
message is needed to indicate to the remote PEs to trigger DF
election for all impacted vESes associated with that ENNI.

The solutions described in [RFC7432] and [RFC7623] are leveraged
as-is, with the modification that the ESI assignment is performed for
an EVC, a group of EVCs, or LSPs/PWs instead of a physical link or a
group of physical links. In other words, the ESI is associated with
a virtual ES (vES) and is hereby referred to as the vESI.

For the EVPN solution, everything basically remains the same except
for the handling of physical port failure, where many vESes can be
impacted. The port failure handling sections below describe the
handling of physical port/link failure for EVPN. In a typical
multi-homed operation, MAC addresses are learned behind a vES and
are advertised with the ESI corresponding to the vES (i.e., the
vESI). EVPN aliasing and mass-withdraw operations are performed with
respect to the vES identifier: the Ethernet A-D routes for these
operations are advertised with the vESI instead of the ESI.

For the PBB-EVPN solution, the main change is with respect to the
B-MAC address assignment, which is performed similar to what is
described in Section 7.2.1.1 of [RFC7623], with the following
refinements:
- One shared B-MAC address SHOULD be used per PE for the
  single-homed vESes. In other words, a single B-MAC is shared for
  all single-homed vESes on that PE.

- One shared B-MAC address SHOULD be used per PE per physical port
  (e.g., ENNI) for the Single-Active vESes. In other words, a single
  B-MAC is shared for all Single-Active vESes that share the same
  ENNI.

- One shared B-MAC address MAY be used for all Single-Active vESes on
  that PE.

- One B-MAC address SHOULD be used per set of EVCs representing an
  All-Active vES. In other words, a single B-MAC address is used per
  vES for All-Active scenarios.

- A single B-MAC address MAY also be used per vES per PE for
  Single-Active scenarios.
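The B-MAC sharing refinements above can be summarized in a short,
non-normative sketch (Python is used purely for illustration; the
helper name and the B-MAC placeholder strings are hypothetical, not
protocol fields):

```python
# Sketch of the PBB-EVPN B-MAC assignment refinements above: which
# (shared) B-MAC a PE would pick as the source B-MAC for a vES.
# The inputs are hypothetical placeholders for allocated B-MACs.

def bmac_for_ves(ves_type, pe_bmac, port_bmac, per_ves_bmac):
    """Pick the source B-MAC for a vES per the sharing rules above."""
    if ves_type == "single-homed":
        return pe_bmac        # one shared B-MAC per PE
    if ves_type == "single-active":
        return port_bmac      # one shared B-MAC per PE per port (ENNI)
    if ves_type == "all-active":
        return per_ves_bmac   # one B-MAC per vES
    raise ValueError(ves_type)

print(bmac_for_ves("single-active", "pe-bmac", "enni1-bmac", "ves-bmac"))
# enni1-bmac
```

The MAY alternatives (a PE-wide B-MAC for all Single-Active vESes, or
a per-vES B-MAC for Single-Active) would simply swap which input is
returned for the "single-active" case.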
The procedure for service carving for virtual Ethernet Segments is
the same as the one outlined in Section 8.5 of [RFC7432] and in
[RFC7623], except for the fact that the ES is replaced with the vES.
For the sake of clarity and completeness, the default DF election
procedure of [RFC7432] is repeated below:
1. When a PE discovers the vESI or is configured with the vESI
   associated with its attached vES, it advertises an Ethernet
   Segment route with the associated ES-Import extended community
   attribute.

2. The PE then starts a timer (default value = 3 seconds) to allow
   the reception of Ethernet Segment routes from other PE nodes
   connected to the same vES. This timer value MUST be the same
   across all PEs connected to the same vES.

3. When the timer expires, each PE builds an ordered list of the IP
   addresses of all the PE nodes connected to the vES (including
   itself), in increasing numeric value. Each IP address in this
   list is extracted from the "Originator Router's IP address" field
   of the advertised Ethernet Segment route. Every PE is then given
   an ordinal indicating its position in the ordered list, starting
   with 0 as the ordinal for the PE with the numerically lowest IP
   address. The ordinals are used to determine which PE node will be
   the DF for a given EVPN instance on the vES using the following
   rule: assuming a redundancy group of N PE nodes, the PE with
   ordinal i is the DF for an EVPN instance with an associated
   Ethernet Tag value of V when (V mod N) = i.
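The service-carving rule above can be illustrated with a short,
non-normative sketch (Python for illustration only; the PE addresses
and Ethernet Tag values are hypothetical):

```python
# Sketch of the default DF election ("service carving") for a vES,
# per the procedure above. PE IP addresses are taken from the
# "Originator Router's IP address" field of the ES routes.
import ipaddress

def elect_df(pe_ips, ethernet_tag):
    """Return the IP of the DF for (vES, Ethernet Tag)."""
    # Ordered list in increasing numeric value; ordinal i is the
    # position in this list, starting at 0.
    ordered = sorted(pe_ips, key=lambda ip: int(ipaddress.ip_address(ip)))
    n = len(ordered)
    # The PE with ordinal i is the DF when (V mod N) == i.
    return ordered[ethernet_tag % n]

# Example: a vES dual-homed to two PEs.
pes = ["192.0.2.4", "192.0.2.2"]
print(elect_df(pes, 101))   # 101 mod 2 = 1 -> 192.0.2.4
print(elect_df(pes, 100))   # 100 mod 2 = 0 -> 192.0.2.2
```

For a vES the same rule applies per (vESI, Ethernet Tag), with the
Ethernet Tag being a VID in EVPN or an I-SID in PBB-EVPN.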
It should be noted that using the "Originator Router's IP address"
field in the Ethernet Segment route to derive the PE IP addresses
needed for the ordered list allows a CE to be multi-homed across
different ASes, if such a need ever arises.

The PE that is elected as the DF for a given EVPN instance will
unblock traffic for that EVPN instance. Note that the DF PE unblocks
all traffic in both the ingress and egress directions for a
Single-Active vES and unblocks multi-destination traffic in the
egress direction for an All-Active multi-homed vES. All non-DF PEs
block all traffic in both the ingress and egress directions for a
Single-Active vES and block multi-destination traffic in the egress
direction for an All-Active vES.

In the case of an EVC failure, the affected PE withdraws its Virtual
Ethernet
Segment route if there are no more EVCs associated with the vES in the
PE. This will re-trigger the DF Election procedure on all the PEs in
the Redundancy Group. For PE node failure, or upon PE commissioning
or decommissioning, the PEs re-trigger the DF Election procedure
across all affected vESes. In the case of a Single-Active vES, when
a service moves from one PE in the Redundancy Group to another PE as
a result of DF re-election, the PE that ends up being the elected DF
for the service SHOULD trigger a MAC address flush notification
towards the associated vES. This can be done, e.g., by using an IEEE
802.1ak MVRP 'new' declaration.

For LSP-based and PW-based vESes, the non-DF PE SHOULD signal PW-status
'standby' to the Aggregation PE (e.g., AG PE in Figure 2),
and a new DF PE MAY send an LDP MAC withdraw message as a MAC
address flush notification. It should be noted that the PW-status is
signaled for the scenarios where there is a one-to-one mapping
between EVI/BD and the PW. Physical ports (e.g. ENNI) which aggregate a large number of EVCs
are 'coloured' to enable the grouping schemes described below. By default, the MAC address of the corresponding port (e.g. ENNI)
is used to represent the 'colour' of the port, and the
EVPN Router's MAC Extended Community defined
in is used to
signal this colour.The difference between colouring mechanism for EVPN and PBB-EVPN is that
for EVPN, the extended community is advertised with the Ethernet A-D per ES
route whereas for PBB-EVPN, the extended community may be advertised
with the B-MAC route.The following sections describe Grouping Ethernet A-D per ES and
Grouping B-MAC, will become crucial for port failure
handling as seen in ,
and below.When a PE discovers the vESI or is configured with the vESI associated
with its attached vES, an Ethernet Segment route and an Ethernet A-D
per ES route are generated using the vESI identifier. These Ethernet
Segment and Ethernet A-D per ES routes, specific to each vES, are
coloured with an attribute representing their association with a
physical port (e.g., ENNI). The corresponding port 'colour' is
encoded in the EVPN Router's MAC Extended Community defined in
[RFC9135] and advertised along with the Ethernet Segment and Ethernet
A-D per ES routes for this vES.

The PE also constructs a special Grouping Ethernet A-D per ES route
which represents all the vESes associated with the port (e.g., ENNI).
The corresponding port 'colour' is encoded in the ESI field. For
this encoding, Type 3 ESI ([RFC7432], Section 5) is used, with the
MAC field set to the colour (MAC address) of the port and the 3-octet
local discriminator field set to 0xFFFFFF. The ESI Label extended
community ([RFC7432], Section 7.5) is not relevant to the Grouping
Ethernet A-D per ES route: the label value is NOT used for
encapsulating BUM packets or for any split-horizon function, nor is
the 'Single-Active' flag used, and the label is left as 0.
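The Grouping ESI encoding above can be sketched as follows (a
non-normative illustration; the port MAC value is hypothetical):

```python
# Sketch: build the 10-octet Type 3 ESI for a Grouping Ethernet A-D
# per ES route: ESI Type 3 (0x03), the port's MAC address as the
# "colour", and the 3-octet local discriminator set to 0xFFFFFF.

def grouping_esi(port_mac: bytes) -> bytes:
    """port_mac: 6-octet MAC 'colour' of the physical port (e.g., ENNI)."""
    assert len(port_mac) == 6
    return bytes([0x03]) + port_mac + b"\xff\xff\xff"

esi = grouping_esi(bytes.fromhex("00005e005301"))
print(esi.hex())  # 0300005e005301ffffff
```

The all-ones discriminator distinguishes the Grouping route from the
ordinary per-vES routes that share the same port MAC colour.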
To save label space, all Grouping Ethernet A-D per ES routes of a PE
SHOULD use the same label value. This Grouping Ethernet A-D per ES
route is advertised with a list of Route Targets corresponding to the
impacted service instances. If the number of Route Targets is more
than can fit into a single attribute, then a set of Grouping Ethernet
A-D per ES routes is advertised.

For PBB-EVPN, especially where there are a large number of service
instances (i.e., I-SIDs) associated with each EVC, the PE MAY colour
each vES B-MAC route with an attribute representing its association
with a physical port (e.g., ENNI). The corresponding port 'colour'
is encoded in the EVPN Router's MAC Extended Community defined in
[RFC9135] and advertised along with the B-MAC for this vES in
PBB-EVPN.

The PE MAY then also construct a special Grouping B-MAC route which
represents all the vESes associated with the port (e.g., ENNI). The
corresponding port 'colour' is encoded directly into this special
Grouping B-MAC route.

There are a number of failure scenarios to consider, such as:
- CE uplink port failure
- Ethernet Access Network failure
- PE access-facing port or link failure
- PE node failure
- PE isolation from the IP/MPLS network

The EVPN [RFC7432] and PBB-EVPN [RFC7623] solutions provide
protection against such failures as described in the corresponding
references. In the presence of virtual Ethernet Segments (vESes) in
these solutions, besides the above failure scenarios, individual EVC
failure is an additional scenario to consider. Handling vES failure
scenarios implies that individual EVCs or PWs need to be monitored,
and upon detection of failure or restoration of service, the
appropriate DF election and failure recovery mechanisms are executed.

Ethernet CFM [802.1ag] is used for monitoring EVCs, and upon failure
detection of a given EVC, the DF election procedure per [RFC7432] is
executed. For PBB-EVPN, some extensions are needed to handle the
failure and recovery procedures of [RFC7623] in order to meet the
above requirements. These extensions are described in the next
section. MPLS and PW OAM mechanisms are used for monitoring the
status of the LSPs and/or PWs associated with a vES.
In [RFC7432], when a DF PE connected to a Single-Active multi-homed
Ethernet Segment loses connectivity to the segment, due to link or
port failure, it signals the remote PEs to invalidate all MAC
addresses associated with that Ethernet Segment. This is done by
means of a mass-withdraw message, i.e., by withdrawing the Ethernet
A-D per ES route. It should be noted that for dual-homing use cases
where there is only a single backup path, MAC invalidation can be
avoided by the remote PEs, as they can update the nexthop associated
with the affected MAC entries to the backup path per the procedure
described in Section 8.2 of [RFC7432].

In the case of an EVC failure that impacts a single vES, this same
EVPN procedure is used. In this case, the mass-withdraw is conveyed
by withdrawing the Ethernet A-D per vES route carrying the vESI
representing the failed EVC. The remote PEs, upon receiving this
message, perform the same procedures outlined in Section 8.2 of
[RFC7432].
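The remote-PE behaviour above (invalidate the affected MACs, or
repoint them to a single backup path in the dual-homing case) can be
sketched as follows (non-normative; the MAC entries, vESI strings,
and PE names are hypothetical):

```python
# Sketch of remote-PE handling of an Ethernet A-D per ES (vES)
# withdrawal, in the spirit of Section 8.2 of RFC 7432.

def on_ves_ad_withdraw(mac_table, vesi, backup_nexthop=None):
    """Invalidate MACs learned over the withdrawn vES, or repoint
    them to the single backup path if one exists (dual-homing)."""
    for mac, entry in list(mac_table.items()):
        if entry["esi"] != vesi:
            continue  # MAC not behind the failed vES
        if backup_nexthop is not None:
            entry["nexthop"] = backup_nexthop  # update, no flush needed
        else:
            del mac_table[mac]                 # invalidate the MAC

table = {
    "00:aa:bb:cc:dd:01": {"esi": "vES-1", "nexthop": "PE1"},
    "00:aa:bb:cc:dd:02": {"esi": "vES-2", "nexthop": "PE1"},
}
on_ves_ad_withdraw(table, "vES-1", backup_nexthop="PE2")
print(table["00:aa:bb:cc:dd:01"]["nexthop"])  # PE2
```

Only the entries behind the withdrawn vESI are touched; MACs behind
other vESes on the same remote PE are unaffected.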
In [RFC7623], when a PE connected to a Single-Active Ethernet
Segment loses connectivity to the segment, due to link or port
failure, it signals the remote PEs to flush all C-MAC addresses
associated with that Ethernet Segment. This is done by re-advertising
the B-MAC route with the MAC Mobility Extended Community.

In the case of an EVC failure that impacts a single vES, if the above
PBB-EVPN procedure is used, it results in excessive C-MAC flushing
because a single physical port can support a large number of EVCs (and
their associated vESes) and thus updating the advertised B-MAC corresponding to
the physical port, with MAC mobility Extended community, will result in
flushing C-MAC addresses not just for the impacted EVC but for all
other EVCs on that port. In order to reduce the scope of C-MAC flushing to only the impacted
service instances (the service instance(s) impacted by the EVC
failure), the PBB-EVPN C-MAC flushing needs to be adapted on a per service
instance basis (i.e., per I-SID).
This document introduces a B-MAC/I-SID route, in which the existing
PBB-EVPN B-MAC route is modified to carry an I-SID in the "Ethernet
Tag ID" field instead of a NULL value. This field indicates to the
receiving PE to flush all C-MAC addresses associated with that I-SID
for that B-MAC. This C-MAC flushing mechanism per I-SID SHOULD be
used in the case of an EVC failure impacting a vES. Since typically
an EVC maps to a single
broadcast domain and thus a single service instance, the affected PE only needs to
advertise a single B-MAC/I-SID route. However, if the failed EVC carries multiple
VLANs each with its own broadcast domain, then the affected PE needs to advertise multiple
B-MAC/I-SID routes - one for each VLAN (broadcast domain) - i.e., one for each I-SID.
Each B-MAC/I-SID route basically instructs the remote PEs to perform flushing for
C-MACs corresponding to the advertised B-MAC only for the advertised
I-SID.

The C-MAC flushing based on the B-MAC/I-SID route works fine when
there are only a few VLANs (e.g., I-SIDs) per EVC. However, if the
number of I-SIDs associated with a failed EVC is large, then it is
recommended to assign a B-MAC per vES; upon EVC failure, the affected
PE simply withdraws this B-MAC route to signal the flush to the other
PEs.

When a large number of EVCs are aggregated via a single physical port
on a PE, where each EVC corresponds to a vES, then the port failure
impacts all the associated EVCs and their corresponding vESes. If the
number of EVCs corresponding to the Single-Active vESes for that
physical port is in thousands, then thousands of service instances
are impacted. Therefore, the propagation of the failure in BGP needs
to address all these impacted service instances. In order to achieve
this, the following extensions are added to the baseline EVPN
mechanism:
1. When a PE advertises an Ethernet A-D per ES route for a given
   vES, it is coloured as described above, using the physical port
   MAC by default. The receiving PEs take note of this colour and
   create a list of vESes for this colour.

2. The PE also advertises a special Grouping Ethernet A-D per ES
   route for that colour, which represents all the vESes associated
   with the port.

3. Upon a port failure (e.g., ENNI failure), the PE sends a
   mass-withdraw message by withdrawing the Grouping Ethernet A-D
   per ES route.

4. The remote PEs, upon receiving this message, detect the special
   vES mass-withdraw message by identifying the Grouping Ethernet
   A-D per ES route. The remote PEs then access the list created in
   (1) of the vESes for the specified colour and locally initiate
   MAC address invalidation procedures for each of the vESes in the
   list.
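The grouping logic above, as seen by a remote PE, can be sketched as
follows (non-normative; the class, colour, and vESI names are
hypothetical):

```python
# Sketch of the receiver-side grouping logic above: each vES route
# carries a port "colour"; withdrawal of the single Grouping route
# triggers MAC invalidation for every vES of that colour.

class RemotePe:
    def __init__(self):
        self.ves_by_colour = {}   # colour -> set of vESIs (step 1)
        self.invalidated = set()

    def on_ves_ad_route(self, vesi, colour):
        # Step 1: record the colour advertised with each vES route.
        self.ves_by_colour.setdefault(colour, set()).add(vesi)

    def on_grouping_withdraw(self, colour):
        # Steps 3-4: a single BGP withdrawal covers all vESes of
        # this colour; invalidate MACs for each one locally.
        for vesi in self.ves_by_colour.get(colour, ()):
            self.invalidated.add(vesi)

pe = RemotePe()
pe.on_ves_ad_route("vES-1", "enni1-mac")
pe.on_ves_ad_route("vES-2", "enni1-mac")
pe.on_grouping_withdraw("enni1-mac")
print(sorted(pe.invalidated))  # ['vES-1', 'vES-2']
```

This is what lets one Grouping route withdrawal replace thousands of
per-vES withdrawals on the failure path.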
In scenarios where a logical ENNI is used, the above procedure
equally applies. The logical ENNI is represented by a Grouping
Ethernet A-D per ES route in which the Type 3 ESI is used and the 6
bytes of the ESI MAC address field are set to the logical ENNI's
colour, as described above.

When a large number of EVCs are aggregated via a single physical port
on a PE, where each EVC corresponds to a vES, then the port failure
impacts all the associated EVCs and their corresponding vESes. If the
number of EVCs corresponding to the Single-Active vESes for that
physical port is in thousands, then thousands of service instances
(I-SIDs) are impacted. In such failure scenarios, the following two
MAC flushing mechanisms, per [RFC7623], can be performed:
1. If the MAC address of the physical port is used for PBB
   encapsulation as the B-MAC SA, then upon the port failure, the PE
   MUST use the EVPN MAC route withdrawal message to signal the
   flush.

2. If the PE shared MAC address is used for PBB encapsulation as the
   B-MAC SA, then upon the port failure, the PE MUST re-advertise
   this MAC route with the MAC Mobility Extended Community to signal
   the flush.

The first method is recommended because it reduces the scope of
flushing the most.
As noted above, the
advertisement of the extended community along with the B-MAC route
for colouring purposes is optional and only recommended when there
are many vESes per physical port and each vES is associated with a
very large number of service instances (i.e., a large number of
I-SIDs).

If there are a large number of service instances (i.e., I-SIDs)
associated with each EVC, and if there is a B-MAC assigned per vES
as recommended in the above section, then in order to handle
port failure efficiently, the following extensions are added
to the baseline PBB-EVPN mechanism:
1. Each vES MAY be coloured with a MAC address representing the
   physical port, similar to the colouring mechanism for EVPN. In
   other words, each B-MAC representing a vES is advertised with the
   'colour' of the physical port, as described above. The receiving
   PEs take note of this colour being advertised along with the
   B-MAC route and, for each such colour, create a list of vESes
   associated with this colour.

2. The PE also advertises a special Grouping B-MAC route for that
   colour (consisting by default of the port MAC address), which
   represents all the vESes associated with the port.

3. Upon a port failure (e.g., ENNI failure), the PE sends a
   mass-withdraw message by withdrawing the Grouping B-MAC route.

4. The remote PEs, upon receiving this message, detect the special
   vES mass-withdraw message by identifying the Grouping B-MAC
   route. The remote PEs then access the list created in (1) of the
   vESes for the specified colour and flush the C-MACs associated
   with the failed physical port.

As described above, when a large number of EVCs are aggregated via a
physical port on a PE, and where each EVC corresponds to a vES, the
port failure impacts all the associated EVCs and their corresponding
vESes. Two actions must be taken as the result of such a port
failure:

1. For EVPN, initiate the mass-withdraw procedure for all vESes
   associated with the failed port to invalidate MACs; for PBB-EVPN,
   flush all C-MACs associated with the failed port across all vESes
   and the impacted I-SIDs.

2. Perform DF election for all impacted vESes associated with the
   failed port.

The preceding sections already describe how to perform the
mass-withdraw for all affected vESes and invalidate MACs using a
single BGP withdrawal of the Grouping Ethernet A-D per ES route, and
how to flush only the C-MAC addresses associated with the failed
physical port (i.e., optimum C-MAC flushing) as well as, optionally,
withdraw a Grouping B-MAC route.

This section describes how to perform DF election in the most
optimal way - e.g., to trigger DF election for all impacted vESes
(which can be very large) among the participating PEs via a single
BGP message as opposed to sending a large number of BGP messages
(one per
vES). This section assumes that the MAC flushing mechanism described
in bullet (1) above is used and that route colouring is used. The
procedure for colouring vES Ethernet Segment routes is described
above. The following describes the procedure for fast convergence
for DF election using these coloured routes:
1. When a vES is configured, the PE advertises the Ethernet Segment
   route for this vES with a colour corresponding to the physical
   port.

2. All receiving PEs (in the redundancy group) take note of this
   colour and create a list of vESes for this colour. Recall that
   the PE is also advertising a Grouping Ethernet A-D per ES route
   (for EVPN) and a Grouping B-MAC route (for PBB-EVPN) representing
   this colour and vES grouping.

3. Upon a port failure (e.g., ENNI failure), the PE withdraws the
   previously advertised Grouping Ethernet A-D per ES or Grouping
   B-MAC route associated with the failed port. The PE should
   prioritize sending these Grouping route withdrawal messages over
   the individual vES route withdrawal messages of the impacted
   vESes.

4. On reception of the Grouping Ethernet A-D per ES or Grouping
   B-MAC route withdrawal, the other PEs in the redundancy group
   initiate DF election procedures across all their affected vESes.

5. The PE with the physical port failure (ENNI failure) also sends a
   vES route withdrawal for every impacted vES. The other PEs, upon
   receiving these messages, clean up their BGP tables. It should be
   noted that the vES route withdrawal messages are not used for
   executing DF election procedures by the receiving PEs when a
   Grouping Ethernet A-D per ES or Grouping B-MAC withdrawal has
   been previously received.
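The receive-side behaviour above can be sketched as follows
(non-normative; the class, colour, and vESI names are hypothetical):

```python
# Sketch of the receive-side fast-convergence behaviour above: the
# Grouping route withdrawal triggers DF election for all vESes of
# that colour; later per-vES withdrawals only clean up BGP state.

class RedundancyGroupPe:
    def __init__(self, ves_by_colour):
        self.ves_by_colour = ves_by_colour  # colour -> list of vESIs
        self.df_elections = []              # vESes re-elected
        self.bgp_routes = {v for vs in ves_by_colour.values() for v in vs}
        self.handled_colours = set()

    def on_grouping_withdraw(self, colour):
        # One BGP message re-triggers DF election for every vES.
        self.handled_colours.add(colour)
        for vesi in self.ves_by_colour.get(colour, ()):
            self.df_elections.append(vesi)

    def on_ves_withdraw(self, vesi, colour):
        self.bgp_routes.discard(vesi)       # BGP cleanup only
        if colour not in self.handled_colours:
            self.df_elections.append(vesi)  # fallback, no Grouping route

pe = RedundancyGroupPe({"enni1": ["vES-1", "vES-2"]})
pe.on_grouping_withdraw("enni1")
pe.on_ves_withdraw("vES-1", "enni1")
print(pe.df_elections)           # ['vES-1', 'vES-2']
print("vES-1" in pe.bgp_routes)  # False
```

DF election thus runs once per colour rather than once per vES
withdrawal, which is the source of the fast-convergence gain.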
The authors would like to thank Mei Zhang, Jose Liste, and Luc Andre Burdet for their
reviews of this document and feedback.
All the security considerations in [RFC7432] and [RFC7623] apply
directly to this document because it leverages the control and data
plane procedures described in those documents. This document does
not introduce any new security considerations beyond those of
[RFC7432] and [RFC7623], because the advertisement and processing of
the Ethernet Segment route for a vES in this document follow those
of the physical ES in those RFCs.
IANA has allocated sub-type value 7 in the "EVPN Extended Community
Sub-Types" registry defined in "https://www.iana.org/assignments/bgp-
extended-communities/bgp-extended-communities.xhtml#evpn".
IANA is requested to update the reference for this allocation to
this document.
This document is being submitted for use in IETF standards discussions.