Explicit Congestion
Notification (ECN) Protocol for Ultra-Low Queuing Delay (L4S)
Nokia Bell Labs
Antwerp
Belgium
koen.de_schepper@nokia.com
https://www.bell-labs.com/usr/koen.de_schepper
Independent
UK
ietf@bobbriscoe.net
http://bobbriscoe.net/
Transport
Transport Services (tsv)
Internet-Draft
I-D
This specification defines the protocol to be used for a new network
service called low latency, low loss and scalable throughput (L4S). L4S
uses an Explicit Congestion Notification (ECN) scheme at the IP layer
that is similar to the original (or 'Classic') ECN approach, except as
specified within. 'Classic' ECN marking is required to be equivalent to
a drop, both when applied in the network and when responded to by a
transport. Unlike 'Classic' ECN marking, for packets carrying the L4S
identifier, the network applies marking more immediately and more
aggressively than drop, and the transport response to each mark is
reduced and smoothed relative to that for drop. The two changes
counterbalance each other so that the throughput of an L4S flow will be
roughly the same as a non-L4S flow under the same conditions.
Nonetheless, the much more frequent control signals and the finer
responses to them result in much more fine-grained adjustments, so that
ultra-low and consistently low queuing delay (typically sub-millisecond
on average) becomes possible for L4S traffic without compromising link
utilization. Thus even capacity-seeking (TCP-like) traffic can have high
bandwidth and very low delay at the same time, even during periods of
high traffic load.
The L4S identifier defined in this document distinguishes L4S from
'Classic' (e.g. TCP-Reno-friendly) traffic. It gives an incremental
migration path so that suitably modified network bottlenecks can
distinguish and isolate existing traffic that still follows the Classic
behaviour, to prevent it degrading the low queuing delay and low loss of
L4S traffic. This specification defines the rules that L4S transports
and network elements need to follow to ensure they neither harm each
other's performance nor that of Classic traffic. Examples of new active
queue management (AQM) marking algorithms and examples of new transports
(whether TCP-like or real-time) are specified separately.
This specification defines the protocol to be used for a new network
service called low latency, low loss and scalable throughput (L4S). L4S
uses an Explicit Congestion Notification (ECN) scheme at the IP layer
that is similar to the original (or 'Classic') Explicit Congestion
Notification (ECN). RFC 3168 required an ECN
mark to be equivalent to a drop, both when applied in the network and
when responded to by a transport. Unlike Classic ECN marking, the
network applies L4S marking more immediately and more aggressively than
drop, and the transport response to each mark is reduced and smoothed
relative to that for drop. The two changes counterbalance each other so
that the throughput of an L4S flow will be roughly the same as a non-L4S
flow under the same conditions. Nonetheless, the much more frequent
control signals and the finer responses to them result in ultra-low
queuing delay without compromising link utilization, and this low delay
can be maintained during high load. Ultra-low queuing delay means that
queuing due to the e2e congestion control is less than 1 millisecond
(ms) on average and less than about 2 ms at the 99th percentile.
An example of a scalable congestion control that would enable the L4S
service is Data Center TCP (DCTCP), which until now has been applicable
solely to controlled environments like data centres , because it is too aggressive to co-exist with
existing TCP-Reno-friendly traffic. The DualQ Coupled AQM, which is
defined in a complementary experimental specification , is an AQM framework that
enables scalable congestion controls like DCTCP to co-exist with
existing traffic, each getting roughly the same flow rate when they
compete under similar conditions. Note that a transport such as DCTCP is
still not safe to deploy on the Internet unless it satisfies the
requirements listed in .
L4S is not only for elastic (TCP-like) traffic - there are scalable
congestion controls for real-time media, such as the L4S variant of the
SCReAM real-time media congestion avoidance
technique (RMCAT). The factor that distinguishes L4S from Classic
traffic is its behaviour in response to congestion. The transport wire
protocol, e.g. TCP, QUIC, SCTP, DCCP, RTP/RTCP, is orthogonal (and
therefore not suitable for distinguishing L4S from Classic packets).
The L4S identifier defined in this document is the key piece that
distinguishes L4S from 'Classic' (e.g. Reno-friendly) traffic. It
gives an incremental migration path so that suitably modified network
bottlenecks can distinguish and isolate existing Classic traffic from
L4S traffic to prevent the former from degrading the ultra-low delay and
loss of the new scalable transports, without harming Classic
performance. Initial implementation of the separate parts of the system
has been motivated by the performance benefits.
Latency is becoming the critical performance factor for many
(most?) applications on the public Internet, e.g. interactive
Web, Web services, voice, conversational video, interactive video,
interactive remote presence, instant messaging, online gaming, remote
desktop, cloud-based applications, and video-assisted remote control
of machinery and industrial processes. In the 'developed' world,
further increases in access network bit-rate offer diminishing
returns, whereas latency is still a multi-faceted problem. In the last
decade or so, much has been done to reduce propagation time by placing
caches or servers closer to users. However, queuing remains a major
intermittent component of latency.
The Diffserv architecture provides Expedited Forwarding , so that low latency traffic can jump the queue of
other traffic. However, on access links dedicated to individual sites
(homes, small enterprises or mobile devices), often all traffic at any
one time will be latency-sensitive. Then, given nothing to
differentiate from, Diffserv makes no difference. Instead, we need to
remove the causes of any unnecessary delay.
The bufferbloat project has shown that excessively-large buffering
('bufferbloat') has been introducing significantly more delay than the
underlying propagation time. These delays appear only
intermittently—only when a capacity-seeking (e.g. TCP) flow
is long enough for the queue to fill the buffer, making every packet
in other flows sharing the buffer sit through the queue.
Active queue management (AQM) was originally developed to solve
this problem (and others). Unlike Diffserv, which gives low latency to
some traffic at the expense of others, AQM controls latency for all traffic in a class. In general, AQM methods
introduce an increasing level of discard from the buffer the longer
the queue persists above a shallow threshold. This gives sufficient
signals to capacity-seeking (aka. greedy) flows to keep the
buffer empty for its intended purpose: absorbing bursts. However,
RED and other algorithms from the 1990s
were sensitive to their configuration and hard to set correctly. So,
this form of AQM was not widely deployed.
More recent state-of-the-art AQM methods,
e.g. FQ-CoDel , PIE , Adaptive RED , are
easier to configure, because they define the queuing threshold in time
not bytes, so it is invariant for different link rates. However, no
matter how good the AQM, the sawtoothing sending window of a Classic
congestion control will either cause queuing delay to vary or cause
the link to be under-utilized. Even with a perfectly tuned AQM, the
additional queuing delay will be of the same order as the underlying
speed-of-light delay across the network.
If a sender's own behaviour is introducing queuing delay variation,
no AQM in the network can 'un-vary' the delay without significantly
compromising link utilization. Even flow-queuing (e.g. ), which isolates one flow from another, cannot
isolate a flow from the delay variations it inflicts on itself.
Therefore those applications that need to seek out high bandwidth but
also need low latency will have to migrate to scalable congestion
control.
Altering host behaviour is not enough on its own though. Even if
hosts adopt low latency behaviour (scalable congestion controls), they
need to be isolated from the behaviour of existing Classic congestion
controls that induce large queue variations. L4S enables that
migration by providing latency isolation in the network and
distinguishing the two types of packets that need to be isolated: L4S
and Classic. L4S isolation can be achieved with a queue per flow
(e.g. ) but a DualQ is sufficient, and
actually gives better tail latency. Both approaches are addressed in
this document.
The DualQ solution was developed to make ultra-low latency
available without requiring per-flow queues at every bottleneck. This
was because FQ has well-known downsides - not least the need to
inspect transport layer headers in the network, which makes it
incompatible with privacy approaches such as IPsec VPN tunnels, and
incompatible with link layer queue management, where transport layer
headers can be hidden, e.g. 5G.
Latency is not the only concern addressed by L4S: It was known when
TCP congestion avoidance was first developed that it would not scale
to high bandwidth-delay products (footnote 6 of Jacobson and Karels
). Given regular broadband bit-rates over WAN
distances are already beyond the scaling
range of Reno congestion control, 'less unscalable' Cubic and Compound variants of TCP have been
successfully deployed. However, these are now approaching their
scaling limits. Unfortunately, fully scalable congestion controls such
as DCTCP cause Classic ECN congestion
controls sharing the same queue to starve themselves, which is why
they have been confined to private data centres or research testbeds
(until now).
It turns out that a congestion control algorithm like DCTCP that
solves the latency problem also solves the scalability problem of
Classic congestion controls. The finer sawteeth in the congestion
window have low amplitude, so they cause very little queuing delay
variation and the average time to recover from one congestion signal
to the next (the average duration of each sawtooth) remains invariant,
which maintains constant tight control as flow-rate scales. A
background paper gives the full explanation
of why the design solves both the latency and the scaling problems,
both in plain English and in more precise mathematical form. The
explanation is summarised without the maths in the L4S architecture
document .
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in
. In this document, these words will appear
with that interpretation only when in ALL CAPS. Lower case uses of
these words are not to be interpreted as carrying RFC-2119
significance.
Classic Congestion Control: A congestion control behaviour that can
co-exist with standard Reno without causing significantly negative
impact on its flow rate. With Classic congestion
controls, as flow rate scales, the number of round trips between
congestion signals (losses or ECN marks) rises with the flow rate.
So it takes longer and longer to recover after each congestion
event. Therefore control of queuing and utilization becomes very
slack, and the slightest disturbance prevents a high rate from
being attained. For instance, with 1500 byte packets and an
end-to-end round trip time (RTT) of 36 ms, over the years, as Reno
flow rate scales from 2 to 100 Mb/s the number of round trips
taken to recover from a congestion event rises proportionately,
from 4 round trips to 200. Cubic was
developed to be less unscalable, but it is approaching its scaling
limit; with the same RTT of 36 ms, at 100 Mb/s it takes about 106
round trips to recover, and at 800 Mb/s its recovery time triples
to over 340 round trips, or still more than 12 seconds (Reno would
take 57 seconds). Cubic only becomes significantly better than
Reno at high delay and rate combinations, for example at 90 ms RTT
and 800 Mb/s a Reno flow takes 4000 RTTs or 6 minutes to recover,
whereas Cubic 'only' needs 188 RTTs, which is still 17 seconds
(double its recovery time at 100 Mb/s). A worked example of these
Reno recovery times appears after the definitions below.
Scalable Congestion Control: A congestion control where the average
time from one congestion signal to the next (the
recovery time) remains invariant as the flow rate scales, all
other factors being equal. This maintains the same degree of
control over queueing and utilization whatever the flow rate, as
well as ensuring that high throughput is robust to disturbances.
For instance, DCTCP averages 2 congestion signals per round-trip
whatever the flow rate, as do other recently developed scalable
congestion controls (e.g. Relentless TCP, TCP Prague and the L4S
variant of SCReAM for real-time media). See for more explanation.
Classic service: The Classic service is intended for
all the congestion control behaviours that co-exist with Reno
(e.g. Reno itself, Cubic , Compound , TFRC ). The term 'Classic queue' means a queue
providing the Classic service.
Low Latency, Low Loss and Scalable throughput (L4S) service: The
'L4S' service is intended for traffic from scalable congestion
control algorithms, such as Data Center TCP . The L4S service is for more general traffic
than just DCTCP—it allows the set of congestion controls
with similar scaling properties to DCTCP to evolve, such as the
examples listed above (Relentless, Prague, SCReAM). The term 'L4S
queue' means a queue providing the L4S service. The terms Classic or L4S can also qualify other
nouns, such as 'queue', 'codepoint', 'identifier',
'classification', 'packet', 'flow'. For example: an L4S packet
means a packet with an L4S identifier sent from an L4S congestion
control. Both Classic and L4S services can
cope with a proportion of unresponsive or less-responsive traffic
as well, as long as it does not build a queue (e.g. DNS,
VoIP, game sync datagrams, etc).
Reno-friendly: The subset of Classic traffic that is
friendly to the standard Reno congestion control defined for TCP
in . Reno-friendly is used in place of
'TCP-friendly', given the latter has become imprecise, because the
TCP protocol is now used with so many different congestion control
behaviours, and Reno is used in non-TCP transports such as
QUIC.
Classic ECN: The original Explicit Congestion
Notification (ECN) protocol , which
requires ECN signals to be treated the same as drops, both when
generated in the network and when responded to by the sender. The
names used for the four codepoints of the 2-bit IP-ECN field are
as defined in : Not ECT, ECT(0), ECT(1)
and CE, where ECT stands for ECN-Capable Transport and CE stands
for Congestion Experienced. A packet marked with the CE codepoint
is termed 'ECN-marked' or sometimes just 'marked' where the
context makes ECN obvious.
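As a rough, non-normative illustration of the Reno recovery times
quoted in the definition of Classic congestion control above, the
following sketch computes the number of round trips Reno needs to
recover after one congestion event. It assumes a 1500 byte MSS and
treats the quoted flow rate as the sawtooth average, so the peak
window is 4/3 of the average and recovery takes half the peak window
in round trips; this is a simplified model for illustration only.

   def reno_recovery(rate_bps, rtt_s, mss_bytes=1500):
       """Approximate Reno recovery time after one congestion event.

       Assumes the quoted flow rate is the sawtooth average, so the peak
       congestion window is 4/3 of the average window, and recovery from
       a halving takes half the peak window in round trips.
       """
       avg_window = rate_bps * rtt_s / (mss_bytes * 8)   # packets
       recovery_rtts = (4 / 3) * avg_window / 2          # round trips
       return recovery_rtts, recovery_rtts * rtt_s       # (RTTs, seconds)

   # Reproduces the Reno figures quoted above (36 ms RTT):
   for rate in (2e6, 100e6, 800e6):
       rtts, secs = reno_recovery(rate, 0.036)
       print(f"{rate / 1e6:5.0f} Mb/s: {rtts:6.0f} RTTs, {secs:5.1f} s")
   # roughly 4, 200 and 1600 round trips respectively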
The new L4S identifier defined in this specification is applicable
for IPv4 and IPv6 packets (as for Classic ECN ). It is applicable for the unicast, multicast and
anycast forwarding modes.
The L4S identifier is an orthogonal packet classification to the
Differentiated Services Code Point (DSCP) .
explains what this means in
practice.
This document is intended for experimental status, so it does not
update any standards track RFCs. Therefore it depends on , which is a standards track specification
that:
updates the ECN proposed standard to
allow experimental track RFCs to relax the requirement that an ECN
mark must be equivalent to a drop (when the network applies
markings and/or when the sender responds to them);
changes the status of the experimental ECN nonce to historic;
makes consequent updates to the following additional proposed
standard RFCs to reflect the above two bullets:
ECN for RTP ;
the congestion control specifications of various DCCP
congestion control identifier (CCID) profiles.
This document is about identifiers that are used for interoperation
between hosts and networks. So the audience is broad, covering
developers of host transports and network AQMs, as well as covering
how operators might wish to combine various identifiers, which would
require flexibility from equipment developers.
This subsection briefly records the process that led to a consensus
choice of L4S identifier, selected from all the alternatives in .
The identifier for packets using the Low Latency, Low Loss, Scalable
throughput (L4S) service needs to meet the following requirements:
it SHOULD survive end-to-end between source and destination
applications: across the boundary between host and network, between
interconnected networks, and through middleboxes;
it SHOULD be visible at the IP layer;
it SHOULD be common to IPv4 and IPv6 and transport-agnostic;
it SHOULD be incrementally deployable;
it SHOULD enable an AQM to classify packets encapsulated by outer
IP or lower-layer headers;
it SHOULD consume minimal extra codepoints;
it SHOULD be consistent on all the packets of a transport layer
flow, so that some packets of a flow are not served by a different
queue to others.
Whether the identifier would be recoverable if the experiment failed
is a factor that could be taken into account. However, this has not been
made a requirement, because that would favour schemes that would be
easier to fail, rather than those more likely to succeed.
It is recognised that the chosen identifier is unlikely to satisfy
all these requirements, particularly given the limited space left in the
IP header. Therefore a compromise will be necessary, which is why all
the above requirements are expressed with the word 'SHOULD' not 'MUST'.
discusses the pros and cons of the
compromises made in various competing identification schemes against the
above requirements.
On the basis of this analysis, "ECT(1) and CE codepoints" is the best
compromise. Therefore this scheme is defined in detail in the following
sections, while records the rationale for
this decision.
The L4S treatment is an experimental track alternative packet marking
treatment to the Classic ECN treatment in , which has been updated by
to allow experiments such as the one defined in the present
specification. Like Classic ECN, L4S ECN identifies both network and
host behaviour: it identifies the marking treatment that network nodes
are expected to apply to L4S packets, and it identifies packets that
have been sent from hosts that are expected to comply with a broad type
of sending behaviour.
For a packet to receive L4S treatment as it is forwarded, the sender
sets the ECN field in the IP header to the ECT(1) codepoint. See for full transport layer behaviour
requirements, including feedback and congestion response.
A network node that implements the L4S service always classifies
arriving ECT(1) packets for L4S treatment and by default classifies CE
packets for L4S treatment unless the heuristics described in are employed. See for full network element behaviour
requirements, including classification, ECN-marking and interaction of
the L4S identifier with other identifiers and per-hop behaviours.
A sender that wishes a packet to receive L4S treatment as it is
forwarded MUST set the ECN field in the IP header (v4 or v6) to the
ECT(1) codepoint.
For a transport protocol to provide scalable congestion control
() it MUST provide feedback
of the extent of CE marking on the forward path. When ECN was added to
TCP , the feedback method reported no more
than one CE mark per round trip. Some transport protocols derived from
TCP mimic this behaviour while others report the accurate extent of
ECN marking. This means that some transport protocols will need to be
updated as a prerequisite for scalable congestion control. The
position for a few well-known transport protocols is given below.
Support for the accurate ECN feedback
requirements (such as that provided by
AccECN ) by both ends
is a prerequisite for scalable congestion control in TCP.
Therefore, the presence of ECT(1) in the IP headers even in one
direction of a TCP connection will imply that both ends must be
supporting accurate ECN feedback. However, the converse does not
apply. So even if both ends support AccECN, either of the two ends
can choose not to use a scalable congestion control, whatever the
other end's choice.
A suitable ECN feedback mechanism for SCTP
could add a chunk to report the number of received CE marks
(e.g. ), and update
the ECN feedback protocol sketched out in Appendix A of the
standards track specification of SCTP .
A prerequisite for scalable congestion
control is for both (all) ends of one media-level hop to signal
ECN support and use the new generic RTCP
feedback format of . The presence of
ECT(1) implies that both (all) ends of that media-level hop
support ECN. However, the converse does not apply. So each end of
a media-level hop can independently choose not to use a scalable
congestion control, even if both ends support ECN.
Support for sufficiently fine-grained ECN
feedback is provided by the v1 IETF QUIC transport .
The ACK vector in DCCP is already sufficient to report the extent of
CE marking as needed by a scalable congestion control.
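Whatever the transport protocol, the essence of this prerequisite is
that the receiver feeds back a count (or equivalent) of CE-marked
packets rather than a single flag per round trip, so that the sender
can compute the marked fraction. The following transport-agnostic
sketch is illustrative only; the names are not taken from any
protocol specification.

   class CeCounter:
       """Receiver side: count CE-marked packets seen on the connection."""
       def __init__(self):
           self.ce_count = 0
           self.pkt_count = 0

       def on_packet(self, ecn_field):
           self.pkt_count += 1
           if ecn_field == 0b11:          # CE codepoint
               self.ce_count += 1

       def feedback(self):
           # Echoed to the sender on every acknowledgement.
           return self.ce_count, self.pkt_count

   def marked_fraction(prev, latest):
       """Sender side: fraction of packets marked since the last feedback."""
       d_ce = latest[0] - prev[0]
       d_pkts = latest[1] - prev[1]
       return d_ce / d_pkts if d_pkts else 0.0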
As a condition for a host to send packets with the L4S identifier
(ECT(1)), it SHOULD implement a congestion control behaviour that
ensures that, in steady state, the average duration between induced
ECN marks does not increase as flow rate scales up, all other factors
being equal. This is termed a scalable congestion control. This
invariant duration ensures that, as flow rate scales, the average
period with no feedback information about capacity does not become
excessive. It also ensures that queue variations remain small, without
having to sacrifice utilization.
With a congestion control that sawtooths to probe capacity, this
duration is called the recovery time, because each time the sawtooth
yields, on average it takes this time to recover to its previous high
point. A scalable congestion control does not have to sawtooth, but it
has to coexist with scalable congestion controls that do.
For instance, for DCTCP , TCP Prague , and the L4S variant of SCReAM , the average recovery time is always half a round
trip (or half a reference round trip), whatever the flow rate.
As with all transport behaviours, a detailed specification
(probably an experimental RFC) is preferable for each congestion
control, following the guidelines for specifying new congestion
control algorithms in . In addition it is
expected to document these L4S-specific matters, specifically the
timescale over which the proportionality is averaged, and control of
burstiness. The recovery time requirement above is worded as a
'SHOULD' rather than a 'MUST' to allow reasonable flexibility for such
implementations.
The condition 'all other factors being equal', allows the recovery
time to be different for different round trip times, as long as it
does not increase with flow rate for any particular RTT.
Saying that the recovery time remains roughly invariant is
equivalent to saying that the number of ECN CE marks per round trip
remains invariant as flow rate scales, all other factors being equal.
For instance, an average recovery time of half of 1 RTT is equivalent
to 2 ECN marks per round trip. For those familiar with steady-state
congestion response functions, it is also equivalent to say that the
congestion window is inversely proportional to the proportion of bytes
in packets marked with the CE codepoint (see section 2 of ).
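As an informal numerical check of these equivalences, the idealized
steady-state models can be compared directly: a scalable control that
induces 2 marks per round trip has W = 2/p, whereas the Classic
(Reno) model has W ~= sqrt(3/2)/sqrt(p). The sketch below is purely
illustrative and uses packet (not byte) proportions.

   from math import sqrt

   def scalable_window(p):
       # 2 CE marks per round trip  =>  W = 2 / p
       return 2 / p

   def reno_window(p):
       # Classic steady-state response: W ~= sqrt(3/2) / sqrt(p)
       return sqrt(1.5) / sqrt(p)

   for p in (0.02, 0.002, 0.0002):
       w_s, w_r = scalable_window(p), reno_window(p)
       print(f"p={p}: scalable W={w_s:7.0f} ({p * w_s:.1f} marks/RTT), "
             f"Reno W={w_r:6.0f} ({p * w_r:.3f} marks/RTT)")
   # The scalable control always sees 2 marks per round trip; Reno sees
   # fewer and fewer congestion signals per RTT as its window grows.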
In order to coexist safely with other Internet traffic, a scalable
congestion control MUST NOT tag its packets with the ECT(1) codepoint
unless it complies with the following bulleted requirements:
A scalable congestion control MUST be capable of being replaced
by a Classic congestion control (by application and/or by
administrative control). If a Classic congestion control is
activated, it will not tag its packets with the ECT(1) codepoint
(see for rationale).
As well as responding to ECN markings, a scalable congestion
control MUST react to packet loss in a way that will coexist
safely with Classic congestion controls such as standard Reno
, as required by
(see for
rationale). A sketch of such a combined ECN and loss response
appears after this list.
In uncontrolled environments, monitoring MUST be implemented to
support detection of problems with an ECN-capable AQM at the path
bottleneck that appears not to support L4S and might be in a
shared queue. Such monitoring SHOULD be applied to live traffic
that is using Scalable congestion control. Alternatively,
monitoring need not be applied to live traffic, if monitoring has
been arranged to cover the paths that live traffic takes through
uncontrolled environments.The detection
function SHOULD be capable of making the congestion control adapt
its ECN-marking response to coexist safely with Classic congestion
controls such as standard Reno , as
required by . Alternatively, if adaptation
is not implemented and problems with such an AQM are detected, the
scalable congestion control MUST be replaced by a Classic
congestion control. Note that a scalable
congestion control is not expected to change to setting ECT(0)
while it transiently adapts to coexist with Classic congestion
controls.See and for rationale.
In the range between the minimum likely RTT and typical RTTs
expected in the intended deployment scenario, a scalable
congestion control MUST converge towards a rate that is as
independent of RTT as is possible without compromising stability
or efficiency (see for
rationale).
A scalable congestion control SHOULD remain responsive to
congestion when typical RTTs over the public Internet are
significantly smaller because they are no longer inflated by
queuing delay. It would be preferable for the minimum window of a
scalable congestion control to be lower than 1 segment rather than
use the timeout approach described for TCP in S.6.1.2 of (or an equivalent for other transports).
However, a lower minimum is not set as a formal requirement for
L4S experiments (see for
rationale).
A scalable congestion control's loss detection SHOULD be
resilient to reordering over an adaptive time interval that scales
with throughput and adapts to reordering (as in [RFC8985]), as
opposed to counting only in fixed units of packets (as in the 3
DupACK rule of and , which is not scalable). As packet rates
increase (e.g., due to new and/or improved technology), congestion
controls that detect loss by counting in units of packets become
more likely to incorrectly treat reordering events as
congestion-caused loss events (see for further rationale).
This requirement does not apply to congestion controls that are
solely used in controlled environments where the network
introduces hardly any reordering.
A scalable congestion control is expected to limit the queue
caused by bursts of packets. It would not seem necessary to set
the limit any lower than 10% of the minimum RTT expected in a
typical deployment (e.g. additional queuing of roughly 250 us
for the public Internet). This would be converted to a number of
packets under the worst-case assumption that the bottleneck link
capacity equals the current flow rate. No normative requirement to
limit bursts is given here and, until there is more industry
experience from the L4S experiment, it is not even known whether
one is needed - it seems to be in an L4S sender's self-interest to
limit bursts.
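The following non-normative sketch illustrates the combined ECN and
loss response referred to in the second bullet above: a DCTCP-style
reduction in proportion to the smoothed marked fraction, with a
fall-back to a Reno-like halving on loss. The per-RTT structure,
names and constants are illustrative assumptions, not requirements.

   class ScalableCC:
       """Illustrative per-RTT congestion response (not a full transport)."""
       def __init__(self, cwnd=10.0, gain=1 / 16):
           self.cwnd = cwnd       # congestion window in packets
           self.alpha = 0.0       # smoothed fraction of CE-marked packets
           self.gain = gain       # EWMA gain (smooths over ~1/gain RTTs)

       def on_rtt_end(self, acked, marked, lost):
           """Called once per round trip with the counts seen in that RTT."""
           frac = marked / acked if acked else 0.0
           self.alpha += self.gain * (frac - self.alpha)
           if lost:
               # Fall back to a Classic (Reno-like) response to loss, so
               # the flow coexists safely at a non-L4S bottleneck.
               self.cwnd *= 0.5
           elif marked:
               # Scalable response: reduce in proportion to marked fraction.
               self.cwnd *= (1 - self.alpha / 2)
           else:
               self.cwnd += 1.0   # additive increase per RTT
           self.cwnd = max(self.cwnd, 2.0)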
Each sender in a session can use a scalable congestion control
independently of the congestion control used by the receiver(s) when
they send data. Therefore there might be ECT(1) packets in one
direction and ECT(0) or Not-ECT in the other.
Later () this document
discusses the conditions for mixing other "'Safe' Unresponsive
Traffic" (e.g. DNS, LDAP, NTP, voice, game sync packets) with L4S
traffic. To be clear, although such traffic can share the same queue
as L4S traffic, it is not appropriate for the sender to tag it as
ECT(1), except in the (unlikely) case that it satisfies the above
conditions.
below specifies that an L4S AQM is
expected to signal L4S ECN without filtering or smoothing. This
contrasts with a Classic AQM, which filters out variations in the
queue before signalling ECN marking or drop. In the L4S architecture
, responsibility for smoothing
out these variations shifts to the sender's congestion control.
This shift of responsibility has the advantage that each sender can
smooth variations over a timescale proportionate to its own RTT.
Whereas, in the Classic approach, the network doesn't know the RTTs of
all the flows, so it has to smooth out variations for a worst-case RTT
to ensure stability. For all the typical flows with shorter RTT than
the worst-case, this makes congestion control unnecessarily
sluggish.
This also gives an L4S sender the choice not to smooth, depending
on its context (start-up, congestion avoidance, etc). Therefore, this
document places no requirement on an L4S congestion control to smooth
out variations in any particular way. Implementers are encouraged to
openly publish the approach they take to smoothing, and the results
and experience they gain during the L4S experiment.
A network node that implements the L4S service:
MUST classify arriving ECT(1) packets for L4S treatment, unless
overridden by another classifier (e.g., see );
MUST classify arriving CE packets for L4S treatment as well,
unless overridden by another classifier or unless the exception
referred to next applies;CE packets might
have originated as ECT(1) or ECT(0), but the above rule to
classify them as if they originated as ECT(1) is the safe choice
(see for rationale). The exception
is where some flow-aware in-network mechanism happens to be
available for distinguishing CE packets that originated as ECT(0),
as described in , but there is no
implication that such a mechanism is necessary.
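The default classification rules above amount to a very small
decision per packet, sketched below for a DualQ-style node. The
function and queue names are illustrative; the optional flow-aware
exception for CE packets is omitted here.

   ECN_NOT_ECT, ECN_ECT1, ECN_ECT0, ECN_CE = 0b00, 0b01, 0b10, 0b11

   def classify(ecn_field, overriding_classifier=None):
       """Default L4S classification of an arriving packet."""
       if overriding_classifier is not None:
           return overriding_classifier   # e.g. a local-use DSCP policy
       if ecn_field in (ECN_ECT1, ECN_CE):
           return "L"                     # low latency (L4S) queue
       return "C"                         # Classic queue: ECT(0), Not-ECT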
An L4S AQM treatment follows similar codepoint transition rules to
those in RFC 3168. Specifically, the ECT(1) codepoint MUST NOT be
changed to any other codepoint than CE, and CE MUST NOT be changed to
any other codepoint. An ECT(1) packet is classified as ECN-capable
and, if congestion increases, an L4S AQM algorithm will increasingly
mark the ECN field as CE, otherwise forwarding packets unchanged as
ECT(1). Necessary conditions for an L4S marking treatment are defined
in .
Under persistent overload an L4S marking treatment MUST begin
applying drop to L4S traffic until the overload episode has subsided,
as recommended for all AQM methods in
(Section 4.2.1), which follows the similar advice in RFC 3168 (Section
7). During overload, it MUST apply the same drop probability to L4S
traffic as it would to Classic traffic.
Where an L4S AQM is transport-aware, this requirement could be
satisfied by using drop in only the most overloaded individual
per-flow AQMs. In a DualQ with flow-aware queue protection (e.g. ), this could be achieved by
redirecting packets in those flows contributing most to the overload
out of the L4S queue so that they are subjected to drop in the Classic
queue.
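A minimal sketch of an L4S marking treatment that respects the
codepoint transition rules and the overload requirement above might
look like the following. The 1 ms step threshold and the way overload
is detected are illustrative placeholders, not recommended values.

   import random

   def l4s_treat(pkt_ecn, sojourn_s, classic_drop_prob, overloaded,
                 step_threshold_s=0.001):
       """Return the action for one packet in the L (low latency) queue.

       pkt_ecn: ECN field of the packet; sojourn_s: its queuing delay;
       classic_drop_prob: drop probability the Classic AQM would apply;
       overloaded: True while a persistent overload episode is judged
       to be in progress.
       """
       if overloaded and random.random() < classic_drop_prob:
           return "drop"           # same drop probability as Classic
       if pkt_ecn == 0b01 and sojourn_s > step_threshold_s:
           return "mark CE"        # ECT(1) may only be changed to CE
       return "forward unchanged"  # CE stays CE; others pass unchanged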
For backward compatibility in uncontrolled environments, a network
node that implements the L4S treatment MUST also implement an AQM
treatment for the Classic service as defined in . This Classic AQM treatment need not mark
ECT(0) packets, but if it does, it will do so under the same
conditions as it would drop Not-ECT packets .
It MUST classify arriving ECT(0) and Not-ECT packets for treatment by
this Classic AQM (for the DualQ Coupled AQM, see the extensive
discussion on classification in Sections 2.3 and 2.5.1.1 of ).
In case unforeseen problems arise with the L4S experiment, it MUST
be possible to configure an L4S implementation to disable the L4S
treatment. Once disabled, all packets of all ECN codepoints will
receive Classic treatment and ECT(1) packets MUST be treated as if
they were {ToDo: Not-ECT / ECT(0) ?}.
The relative strengths of L4S CE and drop are irrelevant where AQMs
are implemented in separate queues per-application-flow, which are
then explicitly scheduled (e.g. with an FQ scheduler as in ). Nonetheless, the relationship between them needs
to be defined for the coupling between L4S and Classic congestion
signals in a DualQ Coupled AQM , as below.
Unless an AQM node schedules application flows explicitly, the
likelihood that the AQM drops a Not-ECT Classic packet (p_C) MUST be
roughly proportional to the square of the likelihood that it would
have marked it if it had been an L4S packet (p_L). That is
p_C ~= (p_L / k)^2
The constant of proportionality (k) does not have to be
standardised for interoperability, but a value of 2 is RECOMMENDED.
The term 'likelihood' is used above to allow for marking and dropping
to be either probabilistic or deterministic.
This formula ensures that Scalable and Classic flows will converge
to roughly equal congestion windows, for the worst case of Reno
congestion control. This is because the congestion windows of Scalable
and Classic congestion controls are inversely proportional to p_L and
sqrt(p_C) respectively. So squaring p_C in the above formula
counterbalances the square root that characterizes Reno-friendly
flows.
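The effect of the coupling can be checked numerically: pick an L4S
marking likelihood, derive the Classic drop likelihood from the
formula with k = 2, and compare the idealized steady-state windows of
a scalable flow (W = 2/p_L) and a Reno flow (W ~= sqrt(3/2)/sqrt(p_C)).
The sketch below is a non-normative illustration.

   from math import sqrt

   K = 2                                # RECOMMENDED coupling factor

   def coupled_drop_prob(p_l):
       """Classic drop likelihood implied by the L4S marking likelihood."""
       return (p_l / K) ** 2            # p_C ~= (p_L / k)^2

   for p_l in (0.2, 0.05, 0.01):
       p_c = coupled_drop_prob(p_l)
       w_scalable = 2 / p_l             # scalable: 2 marks per RTT
       w_reno = sqrt(1.5) / sqrt(p_c)   # Reno steady-state model
       print(f"p_L={p_l:5.2f}  p_C={p_c:8.5f}  "
             f"scalable W={w_scalable:6.1f}  Reno W={w_reno:6.1f}")
   # The two windows remain within a small constant factor of each
   # other at any congestion level, which is the intent of the coupling.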
Note that, contrary to RFC 3168, a Dual Queue Coupled AQM
implementing the L4S and Classic treatments does not mark an ECT(1)
packet under the same conditions that it would have dropped a Not-ECT
packet, as allowed by , which updates RFC
3168. However, if it marks ECT(0) packets, it does so under the same
conditions that it would have dropped a Not-ECT packet.
Also, L4S CE marking needs to be interpreted as an unsmoothed
signal, in contrast to the Classic approach in which AQMs filter out
variations before signalling congestion. An L4S AQM SHOULD NOT smooth
or filter out variations in the queue before signalling congestion. In
the L4S architecture , the
sender, not the network, is responsible for smoothing out
variations.
This requirement is worded as 'SHOULD NOT' rather than 'MUST NOT'
to allow for the case where the signals from a Classic smoothed AQM
are coupled with those from an unsmoothed L4S AQM. Nonetheless, the
spirit of the requirement is for all systems to expect that L4S ECN
signalling is unsmoothed and unfiltered, which is important for
interoperability.
To implement the L4S treatment, a network node does not need to
identify transport-layer flows. Nonetheless, if an implementer is
willing to identify transport-layer flows at a network node, and if
the most recent ECT packet in the same flow was ECT(0), the node MAY
classify CE packets for Classic ECN
treatment. In all other cases, a network node MUST classify all CE
packets for L4S treatment. Examples of such other cases are: i) if no
ECT packets have yet been identified in a flow; ii) if it is not
desirable for a network node to identify transport-layer flows; or
iii) if the most recent ECT packet in a flow was ECT(1).
If an implementer uses flow-awareness to classify CE packets, to
determine whether the flow is using ECT(0) or ECT(1) it only uses the
most recent ECT packet of a flow (this advice will need to be verified
as part of L4S experiments). This is because a sender might switch
from sending ECT(1) (L4S) packets to sending ECT(0) (Classic ECN)
packets, or back again, in the middle of a transport-layer flow
(e.g. it might manually switch its congestion control module
mid-connection, or it might be deliberately attempting to confuse the
network).
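If an implementer chooses to use this optional flow-aware heuristic,
the only state needed is the ECN codepoint of the most recent ECT
packet seen in each flow, as sketched below. Flow identification and
state management (e.g. timing out idle entries) are simplified away.

   last_ect = {}   # flow 5-tuple -> ECN codepoint of most recent ECT packet

   def classify_with_flow_state(flow_id, ecn_field):
       if ecn_field in (0b01, 0b10):          # ECT(1) or ECT(0)
           last_ect[flow_id] = ecn_field
           return "L" if ecn_field == 0b01 else "C"
       if ecn_field == 0b11:                  # CE: use the most recent ECT
           if last_ect.get(flow_id) == 0b10:  # flow last used ECT(0)
               return "C"                     # optional Classic treatment
           return "L"                         # default: treat CE as L4S
       return "C"                             # Not-ECT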
The examples in this section concern how additional identifiers
might complement the L4S identifier to classify packets between
class-based queues. Firstly
considers two queues, L4S and Classic, as in the Coupled DualQ AQM
, either alone () or within a larger queuing hierarchy
(). Then considers schemes that might combine
per-flow 5-tuples with other identifiers.
In a typical case for the public Internet a network element
that implements L4S in a shared queue might want to classify some
low-rate but unresponsive traffic (e.g. DNS, LDAP, NTP,
voice, game sync packets) into the low latency queue to mix with
L4S traffic. Such non-ECN-based packet types MUST be safe to mix
with L4S traffic without harming the low latency service, where
'safe' is explained in
below.
In this case it would not be appropriate to call the queue an
L4S queue, because it is shared by L4S and non-L4S traffic.
Instead it will be called the low latency or L queue. The L queue
then offers two different treatments:
The L4S treatment, which is a combination of the L4S AQM
treatment and a priority scheduling treatment;
The low latency treatment, which is solely the priority
scheduling treatment, without ECN-marking by the AQM.
To identify packets for just the scheduling treatment, it would
be inappropriate to use the L4S ECT(1) identifier, because such
traffic is unresponsive to ECN marking. Therefore, a network
element that implements L4S in a shared queue MAY classify
additional packets into the L queue if they carry certain non-ECN
identifiers. For instance:
addresses of specific applications or hosts configured to
be safe (or perhaps they comply with L4S behaviour and can
respond to ECN feedback, but perhaps cannot set the ECN field
for some reason);
certain protocols that are usually lightweight
(e.g. ARP, DNS);
specific Diffserv codepoints that indicate traffic with
limited burstiness such as the EF (Expedited Forwarding ), Voice-Admit or
proposed NQB (Non-Queue-Building ) service classes or equivalent
local-use DSCPs (see ).
Of course, a packet that carried both the ECT(1) codepoint and
a non-ECN identifier associated with the L queue would be
classified into the L queue.
For clarity, non-ECN identifiers, such as the examples itemized
above, might be used by some network operators who believe they
identify non-L4S traffic that would be safe to mix with L4S
traffic. They are not alternative ways for a host to indicate that
it is sending L4S packets. Only the ECT(1) ECN codepoint indicates
to a network element that a host is sending L4S packets (and CE
indicates that it could have originated as ECT(1)). Specifically
ECT(1) indicates that the host claims its behaviour satisfies the
prerequisite transport requirements in .
To include additional traffic with L4S, a network element only
reads identifiers such as those itemized above. It MUST NOT alter
these non-ECN identifiers, so that they survive for any potential
use later on the network path.
The above section requires unresponsive traffic to be 'safe'
to mix with L4S traffic. Ideally this means that the sender
never sends any sequence of packets at a rate that exceeds the
available capacity of the bottleneck link. However, typically an
unresponsive transport does not even know the bottleneck
capacity of the path, let alone its available capacity.
Nonetheless, an application can be considered safe enough if it
paces packets out (not necessarily completely regularly) such
that its maximum instantaneous rate from packet to packet stays
well below a typical broadband access rate.
This is a vague but useful definition, because many low
latency applications of interest, such as DNS, voice, game sync
packets, RPC, ACKs, keep-alives, could match this
description.
To extend the above example, an operator might want to exclude
some traffic from the L4S treatment for a policy reason,
e.g. security (traffic from malicious sources) or commercial
(e.g. initially the operator may wish to confine the benefits
of L4S to business customers).
In this exclusion case, the operator MUST classify on the
relevant locally-used identifiers (e.g. source addresses)
before classifying the non-matching traffic on the end-to-end L4S
ECN identifier.
The operator MUST NOT alter the end-to-end L4S ECN identifier
from L4S to Classic, because an operator decision to exclude
certain traffic from L4S treatment is local-only. The end-to-end
L4S identifier then survives for other operators to use, or
indeed, they can apply their own policy, independently based on
their own choice of locally-used identifiers. This approach also
allows any operator to remove its locally-applied exclusions in
future, e.g. if it wishes to widen the benefit of the L4S
treatment to all its customers.
An operator that excludes traffic carrying the L4S identifier
from L4S treatment MUST NOT treat such traffic as if it carries
the ECT(0) codepoint, which could confuse the sender.
L4S concerns low latency, which it can provide for all traffic
without differentiation and without necessarily
affecting bandwidth allocation. Diffserv provides for
differentiation of both bandwidth and low latency, but its control
of latency depends on its control of bandwidth. The two can be
combined if a network operator wants to control bandwidth
allocation but it also wants to provide low latency - for any
amount of traffic within one of these allocations of bandwidth
(rather than only providing low latency by limiting bandwidth)
.
The DualQ examples so far have been framed in the context of
providing the default Best Efforts Per-Hop Behaviour (PHB) using
two queues - a Low Latency (L) queue and a Classic (C) Queue. This
single DualQ structure is expected to be the most common and
useful arrangement. But, more generally, an operator might choose
to control bandwidth allocation through a hierarchy of Diffserv
PHBs at a node, and to offer one (or more) of these PHBs with a
low latency and a Classic variant.
In the first case, if we assume that a network element provides
no PHBs except the DualQ, if a packet carries ECT(1) or CE, the
network element would classify it for the L4S treatment
irrespective of its DSCP. And, if a packet carried (say) the EF
DSCP, the network element could classify it into the L queue
irrespective of its ECN codepoint. However, where the DualQ is in
a hierarchy of other PHBs, the classifier would classify some
traffic into other PHBs based on DSCP before classifying between
the low latency and Classic queues (based on ECT(1), CE and
perhaps also the EF DSCP or other identifiers as in the above
example). gives a
number of examples of such arrangements to address various
requirements.
describes how
an operator might use L4S to offer low latency for all L4S traffic
as well as using Diffserv for bandwidth differentiation. It
identifies two main types of approach, which can be combined: the
operator might split certain Diffserv PHBs between L4S and a
corresponding Classic service. Or it might split the L4S and/or
the Classic service into multiple Diffserv PHBs. In either of
these cases, a packet would have to be classified on its Diffserv
and ECN codepoints.
In summary, there are numerous ways in which the L4S ECN
identifier (ECT(1) and CE) could be combined with other
identifiers to achieve particular objectives. The following
categorization articulates those that are valid, but it is not
necessarily exhaustive. Those tagged 'Recommended-standard-use'
could be set by the sending host or a network. Those tagged
'Local-use' would only be set by a network:
Identifiers Complementing the L4S Identifier:
   *  Including More Traffic in the L Queue (could use
      Recommended-standard-use or Local-use identifiers)
   *  Excluding Certain Traffic from the L Queue (Local-use only)
Identifiers to place L4S classification in a PHB Hierarchy (could use
Recommended-standard-use or Local-use identifiers):
   *  PHBs Before L4S ECN Classification
   *  PHBs After L4S ECN Classification
At a node with per-flow queueing (e.g. FQ-CoDel ), the L4S identifier could complement the Layer-4
flow ID as a further level of flow granularity (i.e. Not-ECT and
ECT(0) queued separately from ECT(1) and CE packets). "Risk of
reordering Classic CE packets" in
discusses the resulting ambiguity if packets originally marked
ECT(0) are marked CE by an upstream AQM before they arrive at a node
that classifies CE as L4S. It argues that the risk of reordering is
vanishingly small and the consequence of such a low level of
reordering is minimal.
Alternatively, it could be assumed that it is not in a flow's own
interest to mix Classic and L4S identifiers. Then the AQM could use
the ECN field to switch itself between a Classic and an L4S AQM
behaviour within one per-flow queue. For instance, for ECN-capable
packets, the AQM might consist of a simple marking threshold and an
L4S ECN identifier might simply select a shallower threshold than a
Classic ECN identifier would.
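For instance, such a per-flow AQM might implement both behaviours
with nothing more than two sojourn-time thresholds, selected by the
ECN codepoint of the packet. The sketch below is purely illustrative;
the thresholds are placeholders and drop handling for Not-ECT packets
is omitted.

   def fq_ecn_action(ecn_field, sojourn_s,
                     l4s_threshold_s=0.001, classic_threshold_s=0.015):
       """Mark/forward decision within one per-flow queue (illustrative)."""
       if ecn_field == 0b01:      # ECT(1): shallower, L4S-style threshold
           return "mark CE" if sojourn_s > l4s_threshold_s else "forward"
       if ecn_field == 0b10:      # ECT(0): deeper, Classic-style threshold
           return "mark CE" if sojourn_s > classic_threshold_s else "forward"
       return "forward"           # CE stays CE; Not-ECT drop logic omitted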
As well as senders needing to limit packet bursts (), links need to limit the degree
of burstiness they introduce. In both cases (senders and links) this
is a tradeoff, because batch-handling of packets is done for good
reason, e.g. processing efficiency or to make efficient use of
medium acquisition delay. Some take the attitude that there is no
point reducing burst delay at the sender below that introduced by
links (or vice versa). However, delay reduction proceeds by cutting
down 'the longest pole in the tent', which turns the spotlight on the
next longest, and so on.
This document does not set any quantified requirements for links to
limit burst delay, primarily because link technologies are outside the
remit of L4S specifications. Nonetheless, it would not make sense to
implement an L4S AQM that feeds into a particular link technology
without also reviewing opportunities to reduce any form of burst delay
introduced by that link technology. This would at least limit the
bursts that the link would otherwise introduce into the onward
traffic, which would cause jumpy feedback to the sender as well as
potential extra queuing delay downstream. This document does not
presume to even give guidance on an appropriate target for such burst
delay until there is more industry experience of L4S. However, as
suggested in it would not
seem necessary to limit bursts lower than roughly 10% of the minimum
base RTT expected in the typical deployment scenario (e.g. 250 us
burst duration for links within the public Internet).
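As a worked example of this rule of thumb, a burst budget of 250 us
only converts into a number of packets once a rate is assumed; the
sketch below takes the worst-case assumption that the link (or flow)
runs at the stated rate, and is illustrative only.

   def burst_budget_packets(rate_bps, burst_s=250e-6, mss_bytes=1500):
       """Packets that fit within the burst duration at the given rate."""
       return rate_bps * burst_s / (mss_bytes * 8)

   for rate in (100e6, 1e9, 10e9):
       print(f"{rate / 1e6:6.0f} Mb/s -> "
             f"{burst_budget_packets(rate):5.1f} packets")
   # roughly 2 packets at 100 Mb/s, 21 at 1 Gb/s and 208 at 10 Gb/s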
This section describes open questions that L4S Experiments ought to
focus on. This section also documents outstanding open issues that will
need to be investigated as part of L4S experimentation, given they could
not be fully resolved during the WG phase. It also lists metrics that
will need to be monitored during experiments (summarizing text elsewhere
in L4S documents) and finally lists some potential future directions
that researchers might wish to investigate.
In addition to this section, sets operational and
management requirements for experiments with DualQ Coupled AQMs; and
general operational and management requirements for experiments with L4S
congestion controls are given in
and above, e.g. co-existence and
scaling requirements, incremental deployment arrangements.
The specification of each scalable congestion control will need to
include protocol-specific requirements for configuration and monitoring
performance during experiments. Appendix A of
provides a helpful checklist.
L4S experiments would be expected to answer the following
questions:
Have all the parts of L4S been deployed, and if so, what
proportion of paths support it?
Does use of L4S over the Internet result in significantly
improved user experience?
Has L4S enabled novel interactive applications?
Did use of L4S over the Internet result in improvements to the
following metrics:
queue delay (mean and 99th percentile) under various
loads
utilization
starvation / fairness
scaling range of flow rates and RTTs
How much does burstiness in the Internet affect L4S
performance, and how much limitation of burstiness was needed
and/or was realized - both at senders and at links, especially
radio links?
Was per-flow queue protection typically (un)necessary?
How well did overload protection or queue protection
work?
How well did L4S flows coexist with Classic flows when sharing
a bottleneck?
How frequently did problems arise?
What caused any coexistence problems, and were any problems
due to single-queue Classic ECN AQMs (this assumes
single-queue Classic ECN AQMs can be distinguished from FQ
ones)?
How prevalent were problems with the L4S service due to tunnels
/ encapsulations that do not support ECN decapsulation?
How easy was it to implement a fully compliant L4S congestion
control, over various different transport protocols (TCP, QUIC,
RMCAT, etc)?
Monitoring for harm to other traffic, specifically bandwidth
starvation or excess queuing delay, will need to be conducted
alongside all early L4S experiments. It is hard, if not impossible,
for an individual flow to measure its impact on other traffic. So such
monitoring will need to be conducted using bespoke monitoring across
flows and/or across classes of traffic.
What is the best way forward to deal with L4S over single-queue
Classic ECN AQM bottlenecks, given current problems with
misdetecting L4S AQMs as Classic ECN AQMs?
Fixing the poor interaction between current L4S congestion
controls and CoDel with only Classic ECN support during flow
startup
Researchers might find that L4S opens up the following interesting
areas for investigation:
Potential for faster convergence time and tracking of available
capacity
Potential for improvements to particular link technologies, and
cross-layer interactions with them.
Potential for using virtual queues, e.g. to further reduce
latency jitter, or to leave headroom for capacity variation in
radio networks
Development and specification of reverse path congestion
control using L4S building blocks (e.g. AccECN, QUIC)
Once queuing delay is cut down, what becomes the 'second
longest pole in the tent' (other than the speed of light)?
Novel alternatives to the existing set of L4S AQMs
Novel applications enabled by L4S
The 01 codepoint of the ECN Field of the IP header is specified by
the present Experimental RFC. The process for an experimental RFC to
assign this codepoint in the IP header (v4 and v6) is documented in
Proposed Standard , which updates the Proposed
Standard .
When the present document is published as an RFC, IANA is asked to
update the 01 entry in the registry, "ECN Field (Bits 6-7)" to the
following (see
https://www.iana.org/assignments/dscp-registry/dscp-registry.xhtml#ecn-field
):
Binary  Keyword                                References
------  -------------------------------------  ---------------------------
  01    ECT(1) (ECN-Capable Transport(1))[1]   [RFC Errata 5399] [RFCXXXX]
[XXXX is the number that the RFC Editor assigns to the present
document (this sentence to be removed by the RFC Editor)].
Approaches to assure the integrity of signals using the new
identifier are introduced in .
See the security considerations in the L4S architecture for further discussion of mis-use of
the identifier, as well as extensive discussion of policing rate and
latency in regard to L4S.
The recommendation to detect loss in time units prevents the
ACK-splitting attacks described in .
Thanks to Richard Scheffenegger, John Leslie, David Täht,
Jonathan Morton, Gorry Fairhurst, Michael Welzl, Mikael Abrahamsson and
Andrew McGregor for the discussions that led to this specification.
Ing-jyh (Inton) Tsang was a contributor to the early drafts of this
document. And thanks to Mikael Abrahamsson, Lloyd Wood, Nicolas Kuhn,
Greg White, Tom Henderson, David Black, Gorry Fairhurst, Brian
Carpenter, Jake Holland, Rod Grimes, Richard Scheffenegger, Sebastian
Moeller, Neal Cardwell, Praveen Balasubramanian, Reza Marandian Hagh,
Stuart Cheshire and Vidhi Goel for providing help and reviewing this
draft and thanks to Ingemar Johansson for reviewing and providing
substantial text. Particular thanks to Wes Eddy for patiently
shepherding this and the other L4S drafts through the IETF process.
The appendix listing the Prague L4S
Requirements is based on text authored by Marcelo Bagnulo Braun that was
originally an appendix to . That
text was in turn based on the collective output of the attendees listed
in the minutes of a 'bar BoF' on DCTCP Evolution during IETF-94 .
The authors' contributions were part-funded by the European Community
under its Seventh Framework Programme through the Reducing Internet
Transport Latency (RITE) project (ICT-317700). Bob Briscoe was also
funded partly by the Research Council of Norway through the TimeIn
project, partly by CableLabs and partly by the Comcast Innovation Fund.
The views expressed here are solely those of the authors.
Informative References

   Adaptive RED: An Algorithm for Increasing the Robustness of RED's
      Active Queue Management
   Relentless Congestion Control
   'Data Centre to the Home': Ultra-Low Latency for All
   PI^2: A Linearized AQM for both Classic and Scalable TCP
   One more bit is enough
   Up to Speed with Queue View
   Analysis of DCTCP: Stability, Convergence, and Fairness
   Notes: DCTCP evolution 'bar BoF': Tue 21 Jul 2015, 17:40, Prague
   Scaling TCP's Congestion Window for Small Round Trip Times
   TCP Prague Fall-back on Detection of a Classic ECN AQM
   Extending TCP for Low Round Trip Delay
   Rapid Acceleration in TCP Prague
   Adaptive-Acceleration Data Center TCP
   Implementing the 'TCP Prague' Requirements for Low Latency Low Loss
      Scalable Throughput (L4S)
   Paced Chirping - Rethinking TCP start-up
   Congestion Avoidance and Control
   TCP Congestion Control with a Misbehaving Receiver
This appendix is informative, not normative. It gives a list of
modifications to current scalable congestion controls so that they can
be deployed over the public Internet and coexist safely with existing
traffic. The list complements the normative requirements in that a sender has to comply with before
it can set the L4S identifier in packets it sends into the Internet. As
well as necessary safety improvements (requirements) this appendix also
includes preferable performance improvements (optimizations).
These recommendations have become known as the Prague L4S
Requirements, because they were originally identified at an ad hoc
meeting during IETF-94 in Prague . They were
originally called the 'TCP Prague Requirements', but they are not solely
applicable to TCP, so the name and wording have been generalized for all
transport protocols, and the name 'TCP Prague' is now used for a
specific implementation of the requirements.
At the time of writing, DCTCP is the most
widely used scalable transport protocol. In its current form, DCTCP is
specified to be deployable only in controlled environments. Deploying it
in the public Internet would lead to a number of issues, both from the
safety and the performance perspective. The modifications and additional
mechanisms listed in this section will be necessary for its deployment
over the global Internet. Where an example is needed, DCTCP is used as a
base, but it is likely that most of these requirements equally apply to
other scalable congestion controls, covering adaptive real-time media,
etc., not just capacity-seeking behaviours.
Description: A scalable congestion control needs to distinguish
the packets it sends from those sent by Classic congestion controls
(see the precise normative requirement wording in ).
Motivation: It needs to be possible for a network node to
classify L4S packets without flow state into a queue that applies an
L4S ECN marking behaviour and isolates L4S packets from the queuing
delay of Classic packets.
Description: The transport protocol for a scalable congestion
control needs to provide timely, accurate feedback about the extent
of ECN marking experienced by all packets (see the precise normative
requirement wording in ).
Motivation: Classic congestion controls only need feedback about
the existence of a congestion episode within a round trip, not
precisely how many packets were marked with ECN or dropped.
Therefore, in 2001, when ECN feedback was added to TCP , it could not inform the sender of more than one
ECN mark per RTT. Since then, requirements for more accurate ECN
feedback in TCP have been defined in and
specifies a change to
the TCP protocol to satisfy these requirements. Most other transport
protocols already satisfy this requirement (see ).
Description: It needs to be possible to replace the
implementation of a scalable congestion control with a Classic
control (see the precise normative requirement wording in ).
Motivation: L4S is an experimental protocol, therefore it seems
prudent to be able to disable it at source in case of insurmountable
problems, perhaps due to some unexpected interaction on a particular
sender; over a particular path or network; with a particular
receiver or even ultimately an insurmountable problem with the
experiment as a whole.
Description: As well as responding to ECN markings in a scalable
way, a scalable congestion control needs to react to packet loss in
a way that will coexist safely with a Reno congestion control (see the precise normative requirement wording in
).
Motivation: Part of the safety conditions for deploying a
scalable congestion control on the public Internet is to make sure
that it behaves properly when it builds a queue at a network
bottleneck that has not been upgraded to support L4S. Packet loss
can have many causes, but it usually has to be conservatively
assumed that it is a sign of congestion. Therefore, on detecting
packet loss, a scalable congestion control will need to fall back to
Classic congestion control behaviour. If it does not comply with
this requirement it could starve Classic traffic.
A scalable congestion control can be used for different types of
transport, e.g. for real-time media or for reliable transport
like TCP. Therefore, the particular Classic congestion control
behaviour to fall back on will need to be dependent on the specific
congestion control implementation. In the particular case of DCTCP,
the DCTCP specification states that "It is
RECOMMENDED that an implementation deal with loss episodes in the
same way as conventional TCP." For safe deployment of a scalable
congestion control in the public Internet, the above requirement
would need to be defined as a "MUST".
Even though a bottleneck is L4S capable, it might still become
overloaded and have to drop packets. In this case, the sender may
receive a high proportion of packets marked with the CE bit set and
also experience loss. Current DCTCP implementations each react
differently to this situation. At least one implementation reacts
only to the drop signal (e.g. by halving the CWND) and at least
another DCTCP implementation reacts to both signals (e.g. by
halving the CWND due to the drop and also further reducing the CWND
based on the proportion of marked packets). A third approach for the
public Internet has been proposed that adjusts the loss response to
result in a halving when combined with the ECN response. We believe
that further experimentation is needed to understand what is the
best behaviour for the public Internet, which may or may not be one of
these existing approaches.
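
The following sketch illustrates the general shape of such a combined
response: a DCTCP-style smoothed marking fraction drives a
proportionate reduction on ECN marks, while detected loss falls back
to a Reno-like halving. The gain and the specific loss response are
illustrative only; as noted above, the best way to combine both
signals is still an open question.

   # Per-round-trip sketch: scalable ECN response with a Reno-like
   # fallback on loss.  Constants are illustrative.
   G = 1.0 / 16                     # EWMA gain for the marking fraction

   def end_of_round(cwnd, alpha, marked_pkts, acked_pkts, loss_detected):
       frac = marked_pkts / max(acked_pkts, 1)
       alpha = (1 - G) * alpha + G * frac      # smoothed marking fraction
       if loss_detected:
           cwnd = cwnd / 2                     # fall back to Classic (Reno-like) halving
       elif marked_pkts > 0:
           cwnd = cwnd * (1 - alpha / 2)       # scalable, proportionate reduction
       else:
           cwnd = cwnd + 1                     # additive increase, 1 SMSS per RTT
       return max(cwnd, 1), alpha
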
Description: Monitoring has to be in place so that a non-L4S but
ECN-capable AQM can be detected at path bottlenecks. This is in case
such an AQM has been implemented in a shared queue, in which case
any long-running scalable flow would predominate over any
simultaneous long-running Classic flow sharing the queue. The
requirement is written so that such a problem could either be
resolved in real-time, or via administrative intervention (see the
precise normative requirement wording in ).
Motivation: Similarly to the requirement in , this requirement is a safety
condition to ensure an L4S congestion control coexists well with
Classic flows when it builds a queue at a shared network bottleneck
that has not been upgraded to support L4S. Nonetheless, if
necessary, it is considered reasonable to resolve such problems over
management timescales (possibly involving human intervention)
because:
although a Classic flow can considerably reduce its
throughput in the face of a competing scalable flow, it still
makes progress and does not starve;
implementations of a Classic ECN AQM in a queue that is
intended to be shared are believed to be rare;
detection of such AQMs is not always clear-cut; so focused
out-of-band testing (or even contacting the relevant network
operator) would improve certainty.
Therefore, the relevant normative requirement () is divided into three stages:
monitoring, detection and action:
Monitoring involves collection of the
measurement data to be analysed. Monitoring is expressed as a
'MUST' for uncontrolled environments, although the placement of
the monitoring function is left open. Whether monitoring has to
be applied in real-time is expressed as a 'SHOULD'. This allows
for the possibility that the operator of an L4S sender
(e.g. a CDN) might prefer to test out-of-band for signs of
Classic ECN AQMs, perhaps to avoid continually consuming
resources to monitor live traffic.
Detection involves analysis of the
monitored data to detect the likelihood of a Classic ECN AQM.
The requirements recommend that detection occurs live in
real-time. However, detection is allowed to be deferred (e.g. it
might involve further testing targeted at candidate AQMs);
Action involves the act of switching the
sender to a Classic congestion control. This might occur in
real-time within the congestion control for the subsequent
duration of a flow, or it might involve administrative action to
switch to Classic congestion control for a specific interface or
for a certain set of destination addresses. Instead of the sender
taking action itself, the
operator of the sender (e.g. a CDN) might prefer to ask the
network operator to modify the Classic AQM's treatment of L4S
packets; or to ensure L4S packets bypass the AQM; or to upgrade
the AQM to support L4S. Once L4S flows no longer shared the
Classic ECN AQM they would obviously no longer detect it, and
the requirement to act on it would no longer apply.
The whole set of normative requirements concerning Classic ECN
AQMs does not apply in controlled environments, such as private
networks or data centre networks. CDN servers placed within an
access ISP's network can be considered as a single controlled
environment, but any onward networks served by the access network,
including all the attached customer networks, would be unlikely to
fall under the same degree of coordinated control. Monitoring is
expressed as a 'MUST' for these uncontrolled segments of paths (e.g.
beyond the access ISP in a home network), because there is a
possibility that there might be a shared queue Classic ECN AQM in
that segment. Nonetheless, the intent is to only require occasional
monitoring of these uncontrolled regions, and not to burden CDN
operators if monitoring never uncovers any potential problems, given
it is anyway in the CDN's own interests not to degrade the service
of its own customers.
More detailed discussion of all the above options and
alternatives can be found in .
Having said all the above, the approach recommended in the
requirements is to monitor, detect and act in real-time on live
traffic. A passive monitoring algorithm to detect a Classic ECN AQM
at the bottleneck and fall back to Classic congestion control is
described in an extensive technical report , which also provides a link to Linux source
code, and a large online visualization of its evaluation results.
Very briefly, the algorithm primarily monitors RTT variation using
the same algorithm that maintains the mean deviation of TCP's
smoothed RTT, but it smooths over a duration of the order of a
Classic sawtooth. The outcome is also conditioned on other metrics
such as the presence of CE marking and congestion avoidance phase
having stabilized. The report also identifies further work to
improve the approach, for instance improvements with low capacity
links and combining the measurements with a cache of what had been
learned about a path in previous connections. The report also
suggests alternative approaches.
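
A highly simplified sketch of that style of passive detection is
given below; the smoothing gains and the threshold are invented for
illustration and do not reproduce the algorithm in the referenced
report.

   # Passive detection sketch: track smoothed RTT and its mean
   # deviation (as for TCP's RTO), smoothed over roughly a Classic
   # sawtooth duration, and only flag a Classic ECN AQM when CE marks
   # are present and congestion avoidance has stabilized.
   def update_detector(state, rtt_sample, saw_ce, stable_cong_avoid,
                       srtt_gain=1/8, var_gain=1/4, threshold_s=0.005):
       err = rtt_sample - state['srtt']
       state['srtt'] += srtt_gain * err
       state['rttvar'] += var_gain * (abs(err) - state['rttvar'])
       state['suspect_classic_aqm'] = (saw_ce and stable_cong_avoid
                                       and state['rttvar'] > threshold_s)
       return state

   # e.g. initial state, times in seconds
   state = {'srtt': 0.02, 'rttvar': 0.0, 'suspect_classic_aqm': False}
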
Finally, we motivate the recommendation in that a scalable congestion
control is not expected to change to setting ECT(0) while it adapts
its behaviour to coexist with Classic flows. This is because the
sender needs to continue to check whether it made the right decision
- and switch back if it was wrong, or if a different link becomes
the bottleneck:
If, as recommended, the sender changes only its behaviour but
not its codepoint to Classic, its codepoint will still be
compatible with either an L4S or a Classic AQM. If the
bottleneck does actually support both, it will still classify
ECT(1) into the same L4S queue, where the sender can measure
that switching to Classic behaviour was wrong, so that it can
switch back.
In contrast, if the sender changes both its behaviour and its
codepoint to Classic, even if the bottleneck supports both, it
will classify ECT(0) into the Classic queue, reinforcing the
sender's incorrect decision so that it never switches back.
Also, not changing codepoint avoids the risk of being flipped
to a different path by a load balancer or multipath routing that
hashes on the whole of the ex-ToS byte (unfortunately still a
common pathology).
Note that if a flow is configured to only
use a Classic congestion control, it is then entirely appropriate
not to use ECT(1).
Description: A scalable congestion control needs to reduce RTT
bias as much as possible at least over the low to typical range of
RTTs that will interact in the intended deployment scenario (see the
precise normative requirement wording in ).
Motivation: The throughput of Classic congestion controls is
known to be inversely proportional to RTT, so one would expect flows
over very low RTT paths to nearly starve flows over larger RTTs.
However, Classic congestion controls have never allowed a very low
RTT path to exist because they induce a large queue. For instance,
consider two paths with base RTT 1ms and 100ms. If a Classic
congestion control induces a 100ms queue, it turns these RTTs into
101ms and 200ms leading to a throughput ratio of about 2:1. Whereas
if a scalable congestion control induces only a 1ms queue, the ratio
is 2:101, leading to a throughput ratio of about 50:1.
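
The arithmetic in this example is simple enough to check directly;
the snippet below merely evaluates the ratios quoted above.

   # Effective RTT = base RTT + induced queuing delay; Classic
   # throughput is roughly inversely proportional to effective RTT.
   def throughput_ratio(base_rtts_ms, queue_ms):
       short_rtt, long_rtt = (b + queue_ms for b in base_rtts_ms)
       return long_rtt / short_rtt

   print(throughput_ratio((1, 100), 100))   # ~2:1  with a 100 ms Classic queue
   print(throughput_ratio((1, 100), 1))     # ~50:1 with a 1 ms scalable queue
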
Therefore, with very small queues, long RTT flows will
essentially starve, unless scalable congestion controls comply with
this requirement.
The RTT bias in current Classic congestion controls works
satisfactorily when the RTT is higher than typical, and L4S does not
change that. So, there is no additional requirement for high RTT L4S
flows to remove RTT bias - they can but they don't have to.
Description: A scalable congestion control needs to remain
responsive to congestion when typical RTTs over the public Internet
are significantly smaller because they are no longer inflated by
queuing delay (see the precise normative requirement wording in
).
Motivation: As currently specified, the minimum congestion window
of ECN-capable TCP (and its derivatives) is expected to be 2 sender
maximum segment sizes (SMSS), or 1 SMSS after a retransmission
timeout. Once the congestion window reaches this minimum, if there
is further ECN-marking, TCP is meant to wait for a retransmission
timeout before sending another segment (see section 6.1.2 of ). In practice, most known window-based congestion
control algorithms become unresponsive to congestion signals at this
point. No matter how much drop or ECN marking, the congestion window
no longer reduces. Instead, the sender's lack of any further
congestion response forces the queue to grow, overriding any AQM and
increasing queuing delay (making the window large enough to become
responsive again).
Most congestion controls for other transport protocols have a
similar minimum, albeit when measured in bytes for those that use
smaller packets.
L4S mechanisms significantly reduce queueing delay so, over the
same path, the RTT becomes lower. Then this problem becomes
surprisingly common . This is because,
for the same link capacity, smaller RTT implies a smaller window.
For instance, consider a residential setting with an upstream
broadband Internet access of 8 Mb/s, assuming a max segment size of
1500 B. Two upstream flows will each have the minimum window of 2
SMSS if the RTT is 6ms or less, which is quite common when accessing
a nearby data centre. So, any more than two such parallel TCP flows
will become unresponsive and increase queuing delay.
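
The arithmetic behind this example can be checked as follows (a
worked calculation, not part of the specification):

   # 8 Mb/s upstream, 6 ms RTT, 1500 B segments: the whole
   # bandwidth-delay product is only 4 segments, so two flows at the
   # 2-SMSS minimum window already fill the link.
   link_rate_bps = 8_000_000
   rtt_s = 0.006
   smss_bytes = 1500
   bdp_segments = link_rate_bps * rtt_s / (8 * smss_bytes)
   print(bdp_segments)          # 4.0 segments in flight fill the pipe
   print(bdp_segments / 2)      # 2.0 segments per flow = the minimum window
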
Unless scalable congestion controls address this requirement from
the start, they will frequently become unresponsive, negating the
low latency benefit of L4S, for themselves and for others.
That would seem to imply that scalable congestion controllers
ought to be required to be able to work with a congestion window less
than 1 SMSS. For instance, if an ECN-capable TCP gets an ECN-mark
when it is already sitting at a window of 1 SMSS, RFC 3168 requires
it to defer sending for a retransmission timeout. A less drastic but
more complex mechanism can maintain a congestion window less than 1
SMSS (significantly less if necessary), as described in . Other approaches are likely to be feasible.
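
One way to picture such a mechanism is sketched below: the effective
window is taken below 1 SMSS by stretching the pacing interval beyond
one RTT, so that on average less than one segment is in flight. This
is only an illustration of the idea, not the mechanism in the
referenced proposal.

   # Sub-SMSS congestion window via pacing: when cwnd < 1 SMSS, the
   # inter-segment interval grows beyond one RTT.
   def pacing_interval(cwnd_bytes, smss_bytes, rtt_s):
       segments_per_rtt = cwnd_bytes / smss_bytes   # may be well below 1
       return rtt_s / segments_per_rtt

   print(pacing_interval(cwnd_bytes=750, smss_bytes=1500, rtt_s=0.006))  # 0.012 s
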
However, the requirement in is worded as a "SHOULD" because
the existence of a minimum window is not all bad. When competing
with an unresponsive flow, a minimum window naturally protects the
flow from starvation by at least keeping some data flowing.
By stating the requirement to go lower than 1 SMSS as a "SHOULD",
while the requirement in RFC 3168 still stands as well, we shall be
able to watch the choices of minimum window evolve in different
scalable congestion controllers.
Description: When detecting loss, a scalable congestion control
needs to be tolerant to reordering over an adaptive time interval,
which scales with throughput, rather than counting only in fixed
units of packets, which does not scale (see the precise normative
requirement wording in ).
Motivation: A primary purpose of L4S is scalable throughput (it's
in the name). Scalability in all dimensions is, of course, also a
goal of all IETF technology. The inverse linear congestion response
in is necessary, but not
sufficient, to solve the congestion control scalability problem
identified in . As well as maintaining
frequent ECN signals as rate scales, it is also important to ensure
that a potentially false perception of loss does not limit
throughput scaling.
End-systems cannot know whether a missing packet is due to loss
or reordering, except in hindsight - if it appears later. So they
can only deem that there has been a loss if a gap in the sequence
space has not been filled, either after a certain number of
subsequent packets has arrived (e.g. the 3 DupACK rule of
standard TCP congestion control ) or after a
certain amount of time (e.g. the RACK approach ).
As we attempt to scale packet rate over the years:
Even if only some sending hosts
still deem that loss has occurred by counting reordered packets,
all networks will have to keep
reducing the time over which they keep packets in order. If some
link technologies keep the time within which reordering occurs
roughly unchanged, then loss over these links, as perceived by
these hosts, will appear to continually rise over the years.
In contrast, if all senders detect loss in units of time, the
time over which the network has to keep packets in order stays
roughly invariant.
Therefore hosts have an incentive to detect loss in time
units (so as not to fool themselves too often into detecting losses
when there are none). And for hosts that are changing their
congestion control implementation to L4S, there is no downside to
including time-based loss detection code in the change (loss
recovery implemented in hardware is an exception, covered later).
Therefore requiring L4S hosts to detect loss in time-based units
would not be a burden.
If this requirement is not placed on L4S hosts, even though it
would be no burden on them to do so, all networks will face
unnecessary uncertainty over whether some L4S hosts might be
detecting loss by counting packets. Then all
link technologies will have to unnecessarily keep reducing the time
within which reordering occurs. That is not a problem for some link
technologies, but it becomes increasingly challenging for other link
technologies to continue to scale, particularly those relying on
channel bonding for scaling, such as LTE, 5G and DOCSIS.
Given Internet paths traverse many link technologies, any scaling
limit for these more challenging access link technologies would
become a scaling limit for the Internet as a whole.
It might be asked how it helps to place this loss detection
requirement only on L4S hosts, because networks will still face
uncertainty over whether non-L4S flows are detecting loss by
counting DupACKs. The answer is that those link technologies for
which it is challenging to keep squeezing the reordering time will
only need to do so for non-L4S traffic (which they can do because
the L4S identifier is visible at the IP layer). Therefore, they can
focus their processing and memory resources into scaling non-L4S
(Classic) traffic. Then, the higher the proportion of L4S traffic,
the less of a scaling challenge they will have.
To summarize, there is no reason for L4S hosts not to be part of
the solution instead of part of the problem.
Requirement ("MUST") or recommendation ("SHOULD")? As explained
above, this is a subtle interoperability issue between hosts and
networks, which seems to need a "MUST". Unless networks can be
certain that all L4S hosts follow the time-based approach, they
still have to cater for the worst case - continually squeeze
reordering into a smaller and smaller duration - just for hosts that
might be using the counting approach. However, it was decided to
express this as a recommendation, using "SHOULD". The main
justification was that networks can still be fairly certain that L4S
hosts will follow this recommendation, because following it offers
only gain and no pain.
Details:
The speed of loss recovery is much more significant for short
flows than long, therefore a good compromise is to adapt the
reordering window; from a small fraction of the RTT at the start of
a flow, to a larger fraction of the RTT for flows that continue for
many round trips.
This is broadly the approach adopted by TCP RACK (Recent
ACKnowledgements) . However, RACK starts
with the 3 DupACK approach, because the RTT estimate is not
necessarily stable. As long as the initial window is paced, such
initial use of 3 DupACK counting would amount to time-based loss
detection and therefore would satisfy the time-based loss detection
recommendation of . This
is because pacing of the initial window would ensure that 3 DupACKs
early in the connection would be spread over a small fraction of the
round trip.
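
The sketch below captures the essence of such an adaptive, time-based
reordering window; it is a simplified illustration in the spirit of
RACK, not the precise algorithm.

   # Time-based loss detection: a gap is only deemed a loss after it
   # has persisted for a reordering window that is a fraction of the
   # RTT, rather than after a fixed count of duplicate ACKs.
   def deem_lost(now, gap_send_time, newest_delivered_send_time, srtt,
                 reo_wnd_fraction=0.25):
       missing = gap_send_time < newest_delivered_send_time   # a later packet arrived
       overdue = (now - gap_send_time) > srtt * (1 + reo_wnd_fraction)
       return missing and overdue
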
As mentioned above, hardware implementations of loss recovery
using DupACK counting exist (e.g. some implementations of
RoCEv2 for RDMA). For low latency, these implementations can change
their congestion control to implement L4S, because the congestion
control (as distinct from loss recovery) is implemented in software.
But they cannot easily satisfy this loss recovery requirement.
However, it is believed they do not need to. It is believed that
such implementations solely exist in controlled environments, where
the network technology keeps reordering extremely low anyway. This
is why controlled environments with hardly any reordering are
excluded from the scope of the normative recommendation in .
Detecting loss in time units also prevents the ACK-splitting
attacks described in .
Description: This item concerns TCP and its derivatives
(e.g. SCTP) as well as RTP/RTCP . The
original specification of ECN for TCP precluded the use of ECN on
control packets and retransmissions. To improve performance,
scalable transport protocols ought to enable ECN at the IP layer in
TCP control packets (SYN, SYN-ACK, pure ACKs, etc.) and in
retransmitted packets. The same is true for derivatives of TCP,
e.g. SCTP. Similarly precludes the use
of ECT on RTCP datagrams, in case the path changes after it has been
checked for ECN traversal.
Motivation (TCP): RFC 3168 prohibits the use of ECN on these
types of TCP packet, based on a number of arguments. This means
these packets are not protected from congestion loss by ECN, which
considerably harms performance, particularly for short flows. counters each argument in
RFC 3168 in turn, showing it was over-cautious. Instead it proposes
experimental use of ECN on all types of TCP packet as long as AccECN
feedback is available
(which itself satisfies the accurate feedback requirement in for using a scalable congestion
control).
Motivation (RTCP): L4S experiments in general will need to
observe the rule in that precludes ECT on
RTCP datagrams. Nonetheless, as ECN usage becomes more widespread,
it would be useful to conduct specific experiments with ECN-capable
RTCP to gather data on whether such caution is necessary.
Description: It would improve performance if scalable congestion
controls did not limit their congestion window increase to the
standard additive increase of 1 SMSS per round trip during congestion avoidance. The same is true for
derivatives of TCP congestion control, including similar approaches
used for real-time media.
Motivation: As currently defined , DCTCP
uses the traditional Reno additive increase in congestion avoidance
phase. When the available capacity suddenly increases
(e.g. when another flow finishes, or if radio capacity
increases) it can take very many round trips to take advantage of
the new capacity. TCP Cubic was designed to solve this problem, but
as flow rates have continued to increase, the delay accelerating
into available capacity has become prohibitive. See, for instance,
the examples in . Even when out of
its Reno-compatibility mode, every 8x scaling of Cubic's flow rate
leads to 2x more acceleration delay.
In the steady state, DCTCP induces about 2 ECN marks per round
trip, so it is possible to quickly detect when these signals have
disappeared and seek available capacity more rapidly, while
minimizing the impact on other flows (Classic and scalable) . Alternatively, approaches such as
Adaptive Acceleration (A2DTCP ) have been
proposed to address this problem in data centres, which might be
deployable over the public Internet.
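
A toy sketch of this idea follows: the sender expects roughly two
marks per round trip in steady state, so an absence of marks for a
few consecutive rounds is taken as a cue to leave additive increase
and seek capacity faster. The three-round threshold and the mode
names are invented for illustration.

   # Detect the disappearance of the expected steady-state ECN signal
   # and switch to a faster acceleration mode.
   def update_seek_mode(state, marks_this_round, unmarked_rounds_threshold=3):
       if marks_this_round == 0:
           state['unmarked_rounds'] += 1
       else:
           state['unmarked_rounds'] = 0
       state['mode'] = ('accelerate'
                        if state['unmarked_rounds'] >= unmarked_rounds_threshold
                        else 'additive')
       return state

   state = {'unmarked_rounds': 0, 'mode': 'additive'}
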
Description: It would improve performance if scalable congestion
controls converged (reached their steady-state share of the
capacity) faster than Classic congestion controls or at least no
slower. This affects the flow start behaviour of any L4S congestion
control derived from a Classic transport that uses TCP slow start,
including those for real-time media.
Motivation: As an example, a new DCTCP flow takes longer than a
Classic congestion control to obtain its share of the capacity of
the bottleneck when there are already ongoing flows using the
bottleneck capacity. In a data centre environment DCTCP takes about
a factor of 1.5 to 2 longer to converge due to the much higher
typical level of ECN marking that DCTCP background traffic induces,
which causes new flows to exit slow start early . In testing for use over the public
Internet the convergence time of DCTCP relative to a regular
loss-based TCP slow start is even less favourable due to the shallow ECN marking threshold
needed for L4S. It is exacerbated by the typically greater mismatch
between the link rate of the sending host and typical Internet
access bottlenecks. This problem is detrimental in general, but
would particularly harm the performance of short flows relative to
Classic congestion controls.
This appendix is informative, not normative. It records the pros and
cons of various alternative ways to identify L4S packets, in order to
document the rationale for the choice of ECT(1) () as the L4S
identifier. At the end,
summarises the distinguishing features of the leading alternatives. It
is intended to supplement, not replace, the detailed text.
The leading solutions all use the ECN field, sometimes in combination
with the Diffserv field. This is because L4S traffic has to indicate
that it is ECN-capable anyway, because ECN is intrinsic to how L4S
works. Both the ECN and Diffserv fields have the additional advantage
that they are no different in either IPv4 or IPv6. A couple of
alternatives that use other fields are mentioned at the end, but it is
quickly explained why they are not serious contenders.
Definition:
Packets with ECT(1) and conditionally packets with CE would
signify L4S semantics as an alternative to the semantics of
Classic ECN , specifically:
The ECT(1) codepoint would signify that the packet was sent
by an L4S-capable sender.
Given shortage of codepoints, both L4S and Classic ECN
sides of an AQM would have to use the same CE codepoint to
indicate that a packet had experienced congestion. If a packet
that had already been marked CE in an upstream buffer arrived
at a subsequent AQM, this AQM would then have to guess whether
to classify CE packets as L4S or Classic ECN. Choosing the L4S
treatment would be a safer choice, because then a few Classic
packets might arrive early, rather than a few L4S packets
arriving late.
Additional information might be available if the classifier
were transport-aware. Then it could classify a CE packet for
Classic ECN treatment if the most recent ECT packet in the
same flow had been marked ECT(0). However, the L4S service
ought not to need transport-layer awareness.
Cons:
The L4S service
could potentially supersede the service provided by Classic ECN,
therefore using ECT(1) to identify L4S packets could ultimately
mean that the ECT(0) codepoint was 'wasted' purely to distinguish
one form of ECN from its successor.
It is not always
possible to support ECN in an AQM acting in a buffer below the IP
layer . In
such cases, the L4S service would have to drop rather than mark
frames even though they might encapsulate an ECN-capable
packet.
Classifying all
CE packets into the L4S queue risks any CE packets that were
originally ECT(0) being incorrectly classified as L4S. If there
were delay in the Classic queue, these incorrectly classified CE
packets would arrive early, which is a form of reordering.
Reordering can cause TCP senders (and senders of similar
transports) to retransmit spuriously. However, the risk of
spurious retransmissions would be extremely low for the following
reasons:
It is quite unusual to experience queuing at more than one
bottleneck on the same path (the available capacities have to
be identical).
In only a subset of these unusual cases would the first
bottleneck support Classic ECN marking while the second
supported L4S ECN marking, which would be the only scenario
where some ECT(0) packets could be CE marked by an AQM
supporting Classic ECN then the remainder experienced further
delay through the Classic side of a subsequent L4S DualQ
AQM.
Even then, when a few packets are delivered early, it takes
very unusual conditions to cause a spurious retransmission, in
contrast to when some packets are delivered late. The first
bottleneck has to apply CE-marks to at least N contiguous
packets and the second bottleneck has to inject an
uninterrupted sequence of at least N of these packets between
two packets earlier in the stream (where N is the reordering
window that the transport protocol allows before it considers
a packet is lost).
For example, consider N=3, and consider the
sequence of packets 100, 101, 102, 103,... and imagine
that packets 150,151,152 from later in the flow are
injected as follows: 100, 150, 151, 101, 152, 102, 103...
If this were late reordering, even one packet arriving out
of sequence would trigger a spurious retransmission, but
there is no spurious retransmission here with early
reordering, because packet 101 moves the cumulative ACK
counter forward before 3 packets have arrived out of
order. Later, when packets 148, 149, 153... arrive, even
though there is a 3-packet hole, there will be no problem,
because the packets to fill the hole are already in the
receive buffer (this arrival sequence is simulated in the short
sketch at the end of this point).
Even with the current TCP recommendation of N=3, spurious retransmissions will be unlikely
for all the above reasons. As RACK is
becoming widely deployed, it tends to adapt its reordering
window to a larger value of N, which will make the chance of a
contiguous sequence of N early arrivals vanishingly small.
Even a run of 2 CE marks within a Classic ECN flow is
unlikely, given FQ-CoDel is the only known widely deployed AQM
that supports Classic ECN marking and it takes great care to
separate out flows and to space any markings evenly along each
flow.
It is extremely unlikely that the above set of 5
eventualities that are each unusual in themselves would all happen
simultaneously. But, even if they did, the consequences would
hardly be dire: the odd spurious fast retransmission. Whenever the
traffic source (a Classic congestion control) mistakes the
reordering of a string of CE marks for a loss, one might think
that it will reduce its congestion window as well as emitting a
spurious retransmission. However, it would have already reduced
its congestion window when the CE markings arrived early. If it is
using ABE , it might reduce cwnd a little
more for a loss than for a CE mark. But it will revert that
reduction once it detects that the retransmission was
spurious. In conclusion, the impact of
early reordering due to CE being ambiguous will generally be
vanishingly small.
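
The packet sequence in the example above is small enough to simulate
directly. The sketch below models a cumulative-ACK receiver and
counts duplicate ACKs; it confirms that the early-reordered sequence
never reaches the 3-DupACK threshold.

   # Simulate cumulative ACKs for the arrival order in the example and
   # report the longest run of duplicate ACKs for any one sequence number.
   def max_dupacks(arrival_order):
       received, next_expected = set(), min(arrival_order)
       dupacks = max_run = 0
       for seq in arrival_order:
           received.add(seq)
           if seq == next_expected:
               while next_expected in received:
                   next_expected += 1          # cumulative ACK advances
               dupacks = 0
           else:
               dupacks += 1                    # another duplicate ACK
               max_run = max(max_run, dupacks)
       return max_run

   print(max_dupacks([100, 150, 151, 101, 152, 102, 103]))   # 2, below the threshold of 3
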
With this
scheme, when a source receives ECN feedback, it is not explicitly
clear which type of AQM generated the CE markings. This is not a
problem for Classic ECN sources that send ECT(0) packets, because
an L4S AQM will recognize the ECT(0) packets as Classic and apply
the appropriate Classic ECN marking behaviour. However, in the absence of explicit disambiguation
of the CE markings, an L4S source needs to use heuristic
techniques to work out which type of congestion response to apply
(see ).
Otherwise, if long-running Classic flow(s) are sharing a Classic
ECN AQM bottleneck with long-running L4S flow(s), which then apply
an L4S response to Classic CE signals, the L4S flows would
outcompete the Classic flow(s). Experiments have shown that L4S
flows can take about 20 times more capacity share than equivalent
Classic flows. Nonetheless, as link capacity reduces (e.g. to
4 Mb/s), the inequality reduces. So Classic flows always make
progress and are not starved. When L4S was
first proposed (in 2015, 14 years after
was published), it was believed that Classic ECN AQMs had failed
to be deployed, because research measurements had found little or
no evidence of CE marking. In subsequent years Classic ECN was
included in FQ-CoDel deployments, however an FQ scheduler stops an
L4S flow outcompeting Classic, because it enforces equality
between flow rates. It is not known whether there have been any
non-FQ deployments of Classic ECN AQMs in the subsequent years, or
whether there will be in future. An
algorithm for detecting a Classic ECN AQM as soon as a flow
stabilizes after start-up has been proposed (see for a brief summary).
Testbed evaluations of v2 of the algorithm have shown detection is
reasonably good for Classic ECN AQMs, in a wide range of
circumstances. However, although it can correctly detect an L4S
ECN AQM in many circumstances, it is often incorrect at low link
capacities and/or high RTTs. Although this is the safe way round,
there is a danger that it will discourage use of the
algorithm.
The Classic ECN
RFCs and require
a sender to clear the ECN field to Not-ECT on retransmissions and
on certain control packets specifically pure ACKs, window probes
and SYNs. When L4S packets are classified by the ECN field, these
control packets would not be classified into an L4S queue, and
could therefore be delayed relative to the other packets in the
flow. This would not cause reordering (because retransmissions are
already out of order, and these control packets typically carry no
data). However, it would make critical control packets more
vulnerable to loss and delay. To address this problem, proposes an experiment in
which all TCP control packets and retransmissions are ECN-capable
as long as appropriate ECN feedback is available in each case.
Pros:
The ECN field generally works
end-to-end across the Internet. Unlike the DSCP, the setting of
the ECN field is at least forwarded unchanged by networks that do
not support ECN, and networks rarely clear it to zero.
Unlike Diffserv, ECN is
defined to always work across tunnels. This scheme works within a
tunnel that propagates the ECN field in any of the variant ways it
has been defined, from the year 2001
onwards. However, it is likely that some tunnels still do not
implement ECN propagation at all.
If all Classic ECN
senders eventually evolve to use the L4S service, the ECT(0)
codepoint could be reused for some future purpose, but only once
use of ECT(0) packets had reduced to zero, or near-zero, which
might never happen.
Being based on the ECN field, this
scheme does not need the network to access transport layer flow
identifiers. Nonetheless, it does not preclude solutions that
do.
Definition:
In this proposal, an L4S AQM would indicate congestion with
ECT(1) in contrast to a Classic AQM, which indicates congestion
with CE. More specifically:
Given shortage of codepoints, with this proposal L4S ECN
hosts would send packets as ECT(0), just as Classic ECN hosts do by
default.
If the ECT(1) codepoint were used to indicate congestion in
this way, it would signify a shallow queue AQM to the
end-to-end transport. So those who proposed this approach
called it 'Some Congestion Experienced' (SCE) because of its
similarity to . It has
also been described as 'ECT(1) on output', in contrast to the
'ECT(1) on input' approach outlined in .
The approach works best if the network is transport-aware
and isolates each application flow in its own queue (per-flow
queuing, or FQ). Two AQMs are implemented in each queue, one
with a shallow target that marks selected ECT packets as
ECT(1), the other with a deeper target that marks selected ECT
packets as CE, or drops selected non-ECT packets.
A Classic congestion control would not have the logic to
recognize ECT(1) as a congestion signal. So it would
(correctly) drive the queue to the deeper threshold,
responding only to CE markings. An L4S congestion control that
understands this scheme would respond to ECT(1) markings,
which ought to therefore keep the queue close to the shallower
threshold.
A dual queue approach has been informally proposed, with an
L4S and a Classic queue and coupling similar to . In an interim
classification, all ECT packets would be classified into the
low latency queue, and non-ECT packets into the Classic queue.
But then, in front of the low latency queue, a stateful flow
characterization function would maintain a queue occupancy
metric. It would then redirect any high occupancy flows into
the Classic queue.
Cons:
There is
no variant of this approach that works without network visibility
of transport layer flow identifiers (the 5-tuple). Obviously the
FQ variant needs to see 5-tuples, but so does the DualQ SCE1
variant (to redirect flows based on sparseness). So there is no
arrangement of this approach that operators could choose if they
could not access the transport layer, or did not want to
(e.g. to support full end-to-end encryption above the IP
layer).
When evaluated, the DualQ
variant of ECN-DualQ-SCE1 introduced impairments to both L4S and
Classic flows. The evaluation used the DOCSIS queue protection
function to
maintain the per-flow sparseness metrics and redirect packets from
non-sparse flows into the Classic queue. Unfortunately, it is
impossible to determine non-sparseness until sufficient packets of
each flow have been analyzed. Up to this point, all packets
default to the L4S queue. Then:
Long-running Classic flows experience reordering during the
transition to classifying them as Classic. Worse, the
reordering occurs early in the flow when it is less robust to
confusing RTT measurements;
Considerable numbers of Classic packets add to the L4S
queue - from all the short flows and the start of long flows
before the classifier can be certain enough to redirect them
to the other (Classic) queue. So true L4S flows unavoidably
experience a degree of extra delay.
The L4S service
could potentially supersede the service provided by Classic ECN,
therefore using ECT(1) to indicate L4S congestion could ultimately
mean that the CE codepoint was 'wasted' purely to distinguish one
form of congestion from its successor.
If this scheme is
applied to an outer header within a tunnel or lower layer
encapsulation, the ECT(1) codepoint will be black-holed at
decapsulation, unless the decapsulator complies with changes to
IP-in-IP tunnels introduced in 2010 , or
changes to other tunnels that are (currently) work in progress
, .
This approach
requires transport layer feedback of two congestion signals ECT(1)
and CE. Recently developed protocols such as QUIC provide this by
default. However, there is limited space in the main TCP header to
feed back both signals reliably and accurately . AccECN devotes the limited space in
the main TCP header to CE feedback, and optionally feeds back
ECT(1) in a new TCP option, which will have limited initial
deployment support.
An AQM following
this approach alters some selected ECT(0) packets to ECT(1)
irrespective of whether they are participating in the L4S
experiment. Although ECT(0) and ECT(1) have historically been
defined as equivalent, in practice ECT(1) packets have been
extremely rare on the Internet. Therefore, in practice, there
might be a risk that firewalls and other devices will block ECT(1)
packets, or at least treat them with greater suspicion.
Similarly to the
'Con' point in , it is not always
possible to support ECN in an AQM acting in a buffer below the IP
layer .
However, adding support to lower layers would be even harder with
this scheme, because it needs space for two severity levels of
congestion, not one. Without lower layer ECN support, the L4S
service would have to drop rather than mark frames even though
they might encapsulate an ECN-capable packet.
Identical to
'Con' point in .
Pros:
An AQM
following the ECN-DualQ-SCE1 approach outputs distinctive signals
(ECT(1)) compared to those output by a Classic ECN AQM. So an L4S
congestion control using the SCE1 approach would inherently
respond appropriately to a Classic AQM.
Identical to 'Pro' point in .
Definition:
This proposal is the inverse of the ECN-DualQ-SCE1 scheme (see
above). L4S AQMs signal
congestion with the transition ECT(1) -> ECT(0). More
specifically:
L4S senders would send their packets as ECT(1), while
Classic ECN senders would continue to send ECT(0) by default
.
FQ AQMs would work in a similar way to that described for
ECN-DualQ-SCE1 in above.
Except the shallow queue AQM would mark selected ECT packets
with ECT(0), rather than ECT(1). It
would seem possible to classify packets by both 5-tuple and
ECT codepoint, so that each per-flow queue could instantiate
just the one AQM appropriate to the ECT codepoint using it. In
this case, CE and Not-ECT packets would be classified into the
same queue as ECT(0). However, this would open up the risk of
reordering explained below, so it is not considered
further.
A Classic congestion control would only receive CE
feedback, and it would have no logic to recognize ECT(0) as
congestion markings, because it would send all its packets as
ECT(0) anyway. So it would (correctly) drive the queue to the
deeper threshold, responding only to CE markings. An L4S
congestion control would understand ECT(0) markings as L4S
congestion signals and therefore ought to keep the queue close
to the shallower threshold.
Under the SCE0 scheme, a dual queue coupled AQM would use ECT(1)
as the L4S classifier in a very similar way to the 'ECT(1) and
CE' scheme it was originally designed for. The one difference
would be to classify CE packets into the Classic queue along
with ECT(0) and Not-ECT.
Cons:
The L4S service
could potentially supersede the service provided by Classic ECN,
therefore using ECT(0) to indicate L4S congestion could ultimately
mean that the CE codepoint was 'wasted' purely to distinguish one
form of congestion from its successor.
The transition
ECT(1) -> ECT(0) has never previously been recognized as valid.
So, any ECT(0) marking applied to an ECT(1) outer header within a
tunnel or lower layer encapsulation will be black-holed at
decapsulation by any decapsulator whatever variant of ECN tunnel
RFC it complies with.
Identical to 'Con'
point in above except space
would be needed for CE and ECT(0) rather than CE and ECT(1)
feedback.
If an L4S
flow traverses a path with two or more bottleneck AQMs that both
support L4S, reordering is likely to occur. This is because the
first bottleneck will re-mark some ECT(1) packets to ECT(0), which
will then be classified into the Classic queue of the second AQM,
even though they originated as L4S packets. In contrast to the 'ECT(1) and CE' scheme in , the risk of impairment in the
ECN-DualQ-SCE0 case is not vanishingly small:
Certainly, queuing at more than one bottleneck on the same
path would still be quite unusual.
However, the ECN-DualQ-SCE0 case occurs if both bottlenecks
support L4S ECN and the traffic is L4S. This contrasts with
the "ECT(1) and CE" case, which solely occurs if the AQMs are
in a certain order (Classic followed by L4S).
When misclassification occurs, it is from L4S to Classic.
So selected packets are delivered late, which in itself adds
delay, and also increases the risk that each late delivery
will be deemed a loss and cause a high level of spurious
retransmissions. This contrasts with the "ECT(1) and CE" case
where selected packets are delivered early, which is very
unlikely to have any effect (as already explained in ).
Identical to 'Con'
point in .
Identical to
'Con' point in .
Pros:
An AQM
following the ECN-DualQ-SCE0 approach outputs distinctive signals
(ECT(0)) compared to those output by a Classic ECN AQM (CE). So an
L4S congestion control can inherently respond appropriately to a
Classic AQM.
Identical to 'Pro' point in .
Definition:
For packets with a defined DSCP, all codepoints of the ECN
field (except Not-ECT) would signify alternative L4S semantics to
those for Classic ECN , specifically:
The L4S DSCP would signify that the packet came from an
L4S-capable sender.
ECT(0) and ECT(1) would both signify that the packet was
travelling between transport endpoints that were both
ECN-capable.
CE would signify that the packet had been marked by an AQM
implementing the L4S service.
Use of a DSCP is the only approach for alternative ECN semantics
given as an example in . However, it was
perhaps considered more for controlled environments than new
end-to-end services.
Cons:
A DSCP is by definition not
orthogonal to Diffserv. Therefore, wherever the L4S service is
applied to multiple Diffserv scheduling behaviours, it would be
necessary to replace each DSCP with a pair of DSCPs.
The
resulting increased number of DSCPs might be hard to support for
some lower layer technologies, e.g. 802.1Q and MPLS both
offer only 3 bits for a maximum of 8 traffic class identifiers.
Although L4S should reduce and possibly remove the need for some
DSCPs intended for differentiated queuing delay, it will not
remove the need for Diffserv entirely, because Diffserv is also
used to allocate bandwidth, e.g. by prioritising some classes
of traffic over others when traffic exceeds available
capacity.
Very few networks
honour a DSCP set by a host. Typically a network will zero
(bleach) the Diffserv field from all hosts. DSCP bleaching would
turn an L4S ECN packet into a Classic ECN packet.
Very few networks
honour a DSCP received from a neighbouring network. Typically a
network will zero (bleach) the Diffserv field from all
neighbouring networks at an interconnection point. Sometimes
bilateral arrangements are made between networks, such that the
receiving network remarks some DSCPs to those it uses for roughly
equivalent services. The likelihood that a DSCP will be bleached
or ignored depends on the type of DSCP:
Local-use DSCPs: These tend to be used to
implement application-specific network policies, but a
bilateral arrangement to remark certain DSCPs is often applied
to DSCPs in the local-use range simply because it is easier
not to change all of a network's internal configurations when
a new arrangement is made with a neighbour.
DSCPs recommended as standard: These do not tend to
be honoured across network interconnections more than
local-use DSCPs. However, if two networks decide to honour
certain of each other's DSCPs, the reconfiguration is a little
easier if both of their globally recognised services are
already represented by the relevant recommended standard
DSCPs. Note that today a recommended
standard DSCP gives little more assurance of end-to-end
service than a local-use DSCP. In future the range recommended
as standard might give more assurance of end-to-end service
than local-use, but it is unlikely that either assurance will
be high, particularly given the hosts are included in the
end-to-end path.
Whenever DSCP bleaching did occur, it would turn an L4S
ECN packet into a Classic ECN packet.
Diffserv codepoints are often not
propagated to the outer header when a packet is encapsulated by a
tunnel header. DSCPs are propagated to the outer of uniform mode
tunnels, but not pipe mode , and pipe mode
is fairly common. Whenever pipe mode was used, it would
temporarily turn an L4S ECN packet into a Classic ECN packet.
Because this
approach uses both the Diffserv and ECN fields, an AQM will only
work at a lower layer if both can be supported. If individual
network operators wished to deploy an AQM at a lower layer, they
would usually propagate an IP Diffserv codepoint to the lower
layer, using for example IEEE 802.1p. However, the ECN capability
is harder to propagate down to lower layers because few lower
layers support it.
Defining a DSCP
to indicate L4S is a way to help network nodes identify L4S
packets (albeit unreliable due to the likelihood of bleaching -
see above). However, it does not help hosts distinguish between
ECN markings from L4S and Classic AQMs. This is because Classic
AQMs would have been implemented without any logic to recognize an
L4S DSCP or apply L4S marking behaviour.
Pros:
If all usage of Classic ECN
migrates to usage of L4S, the DSCP would become redundant, and the
ECN capability alone could eventually identify L4S packets without
the interconnection problems of Diffserv detailed above, and
without having permanently consumed more than one codepoint in the
IP header. Although the DSCP does not generally function as an
end-to-end identifier (see above), it could be used initially by
individual ISPs to introduce the L4S service for their own locally
generated traffic.
This approach uses ECN capability alone as the L4S identifier. It
would only have been feasible if RFC 3168 ECN had not been widely
deployed. This was the case when the choice of L4S identifier was
being made and this appendix was first written. Since then, RFC 3168
ECN has been widely deployed on end-systems, and deployment in AQMs is
growing (probably mainly due to its support in FQ AQMs), and L4S did
not take this approach anyway. So this approach is not discussed
further, because it is no longer a feasible option.
It has been suggested that a new Protocol ID in the IPv4 Protocol
field or the IPv6 Next Header field could identify L4S packets.
However this approach is ruled out by numerous problems:
A duplicate protocol ID would need to be created for each
transport (TCP, SCTP, UDP, etc.).
In IPv6, there can be a sequence of Next Header fields, and it
would not be obvious which one would be expected to identify a
network service like L4S.
A new protocol ID would rarely provide an end-to-end service,
because it is well known that new protocol IDs are often blocked
by numerous types of middlebox.
The approach is not a solution for AQM methods below the IP
layer.
Locally, a network operator could arrange for L4S service to be
applied based on source or destination addressing, e.g. packets
from its own data centre and/or CDN hosts, packets to its business
customers, etc. It could use addressing at any layer, e.g. IP
addresses, MAC addresses, VLAN IDs, etc. Although addressing might be
a useful tactical approach for a single ISP, it would not be a
feasible approach to identify an end-to-end service like L4S. Even for
a single ISP, it would require packet classifiers in buffers to be
dependent on changing topology and address allocation decisions
elsewhere in the network. Therefore this approach is not a feasible
solution.
and provide a very high level summary of the
pros and cons detailed against the schemes described respectively in
, , and for ten issues that set them
apart.
+-----------------+----------------------+----------------------+
|      Issue      | ECT(1) + CE (Chosen) |      DSCP + ECN      |
|                 |  initial   eventual  |  initial   eventual  |
+-----------------+----------------------+----------------------+
| end-to-end      |   . . Y     . . Y    |   N . .     . ? .    |
| tunnels         |   . . ?     . . Y    |   . O .     . O .    |
| lower layers    |   . O .     . . ?    |   N . .     . ? .    |
| spare codepoint |   N . .     . . ?    |   N . .     . . ?    |
| reordering      |   . O .     . . ?    |   . . Y     . . Y    |
| Classic ECN AQM |   . O .     . . ?    |   . O .     . . ?    |
| isolation       |   . . Y     . . Y    |   . . Y     . . Y    |
| poss w/o L4 IDs |   . . Y     . . Y    |   . . Y     . . Y    |
| TCP feedback    |   . O .     . . Y    |   . O .     . . Y    |
| TCP ctrl pkts   |   . O .     . . ?    |   . . Y     . . Y    |
+-----------------+----------------------+----------------------+

+-----------------+----------------------+----------------------+
|      Issue      |    ECN-DualQ-SCE1    |    ECN-DualQ-SCE0    |
|                 |  initial   eventual  |  initial   eventual  |
+-----------------+----------------------+----------------------+
| end-to-end      |   . . Y     . . Y    |   . . Y     . . Y    |
| tunnels         |   . ? .     . . ?    |   N . .     . ? .    |
| lower layers    |   N . .     . ? .    |   N . .     . ? .    |
| spare codepoint |   N . .     N . .    |   N . .     N . .    |
| reordering      |   N . .     N . .    |   N . .     N . .    |
| Classic ECN AQM |   . . Y     . . Y    |   . . Y     . . Y    |
| isolation       |   N . .     N . .    |   . . Y     . . Y    |
| poss w/o L4 IDs |   N . .     N . .    |   . . Y     . . Y    |
| TCP feedback    |   N . .     . O .    |   N . .     . O .    |
| TCP ctrl pkts   |   . O .     . . ?    |   . O .     . . ?    |
+-----------------+----------------------+----------------------+
The schemes are scored based on both their capabilities now
('initial') and in the long term ('eventual'). The scores are one of
'N, O, Y', meaning 'Poor', 'Ordinary', 'Good' respectively. The same
scores are aligned vertically to aid the eye. A score of "?" in one of
the positions means that this approach might optimistically become
this good, given sufficient effort. The tables summarize the text and
are not meant to be understandable without having read the text.
The ECT(1) codepoint of the ECN field has already been assigned once
for the ECN nonce , which has now been
categorized as historic . ECN is probably the
only remaining field in the Internet Protocol that is common to IPv4 and
IPv6 and still has potential to work end-to-end, with tunnels and with
lower layers. Therefore, ECT(1) should not be reassigned to a different
experimental use (L4S) without carefully assessing competing potential
uses. These fall into the following categories:
Receiving hosts can fool a sender into downloading faster by
suppressing feedback of ECN marks (or of losses if retransmissions are
not necessary or available otherwise).
The historic ECN nonce protocol proposed
that a TCP sender could set either of ECT(0) or ECT(1) in each packet
of a flow and remember the sequence it had set. If any packet was lost
or congestion marked, the receiver would miss that bit of the
sequence. An ECN Nonce receiver had to feed back the least significant
bit of the sum, so it could not suppress feedback of a loss or mark
without a 50-50 chance of guessing the sum incorrectly.
It is highly unlikely that ECT(1) will be needed for integrity
protection in future. The ECN Nonce RFC has
been reclassified as historic, partly because other ways have been
developed to protect feedback integrity of TCP and other transports
that do not consume a codepoint in the IP
header. For instance:
the sender can test the integrity of the receiver's feedback by
occasionally setting the IP-ECN field to a value normally only set
by the network. Then it can test whether the receiver's feedback
faithfully reports what it expects (see para 2 of Section 20.2 of
). This works for loss, and it will work for
the accurate ECN feedback intended for
L4S.
A network can enforce a congestion response to its ECN markings
(or packet losses) by auditing congestion exposure (ConEx) . Whether the receiver or a downstream network
is suppressing congestion feedback or the sender is unresponsive
to the feedback, or both, ConEx audit can neutralise any advantage
that any of these three parties would otherwise gain.
The TCP authentication option (TCP-AO )
can be used to detect any tampering with TCP congestion feedback
(whether malicious or accidental). TCP's congestion feedback
fields are immutable end-to-end, so they are amenable to TCP-AO
protection, which covers the main TCP header and TCP options by
default. However, TCP-AO is often too brittle to use on many
end-to-end paths, where middleboxes can make verification fail in
their attempts to improve performance or security, e.g. by
resegmentation or shifting the sequence space.
Various researchers have proposed to use ECT(1) as a less severe
congestion notification than CE, particularly to enable flows to fill
available capacity more quickly after an idle period, when another
flow departs or when a flow starts, e.g. VCP , Queue View (QV) .
Before assigning ECT(1) as an identifier for L4S, we must carefully
consider whether it might be better to hold ECT(1) in reserve for
future standardisation of rapid flow acceleration, which is an
important and enduring problem .
Pre-Congestion Notification (PCN) is another scheme that assigns
alternative semantics to the ECN field. It uses ECT(1) to signify a
less severe level of pre-congestion notification than CE . However, the ECN field only takes on the PCN
semantics if packets carry a Diffserv codepoint defined to indicate
PCN marking within a controlled environment. PCN is required to be
applied solely to the outer header of a tunnel across the controlled
region in order not to interfere with any end-to-end use of the ECN
field. Therefore a PCN region on the path would not interfere with any
of the L4S service identifiers proposed in .