DualQ Coupled AQMs for Low Latency, Low Loss and Scalable Throughput (L4S)

Koen De Schepper, Nokia Bell Labs, Antwerp, Belgium
    koen.de_schepper@nokia.com, https://www.bell-labs.com/usr/koen.de_schepper
Bob Briscoe, CableLabs, UK
    ietf@bobbriscoe.net, http://bobbriscoe.net/
Olga Bondarenko, Simula Research Lab, Lysaker, Norway
    olgabnd@gmail.com, https://www.simula.no/people/olgabo
Ing-Jyh Tsang, Nokia, Antwerp, Belgium
    ing-jyh.tsang@nokia.com

Transport Area working group (tsvwg)
Internet-Draft

The Low Latency Low Loss Scalable Throughput (L4S) architecture
allows data flows over the public Internet to predictably achieve
ultra-low queuing latency, generally zero congestion loss and scaling of
per-flow throughput without the problems of traditional TCP. To achieve
this, L4S data flows use a 'scalable' congestion control similar to Data
Centre TCP (DCTCP) and a form of Explicit Congestion Notification (ECN)
with modified behaviour. However, until now, scalable congestion
controls did not co-exist with existing TCP Reno/Cubic
traffic---scalable controls are so aggressive that 'Classic' TCP
algorithms drive themselves to starvation. Therefore, until now, L4S
controls could only be deployed where a clean-slate environment could be
arranged, such as in private data centres (hence the name DCTCP). This
specification defines `DualQ Coupled Active Queue Management (AQM)',
which enables these scalable congestion controls to safely co-exist with
Classic Internet traffic.

The Coupled AQM ensures that a flow runs at about the same rate whether it uses DCTCP or TCP Reno/Cubic. It achieves this indirectly, without having to inspect transport layer flow identifiers. When tested
in a residential broadband setting, DCTCP also achieves sub-millisecond
average queuing delay and zero congestion loss under a wide range of
mixes of DCTCP and `Classic' broadband Internet traffic, without
compromising the performance of the Classic traffic. The solution also
reduces network complexity and eliminates network configuration.

Latency is becoming the critical performance factor for many
(most?) applications on the public Internet, e.g. interactive Web, Web
services, voice, conversational video, interactive video, interactive
remote presence, instant messaging, online gaming, remote desktop,
cloud-based applications, and video-assisted remote control of
machinery and industrial processes. In the developed world, further
increases in access network bit-rate offer diminishing returns,
whereas latency is still a multi-faceted problem. In the last decade
or so, much has been done to reduce propagation time by placing caches
or servers closer to users. However, queuing remains a major
intermittent component of latency.

The Diffserv architecture provides Expedited Forwarding, so that low latency traffic can jump the queue of
other traffic. However, on access links dedicated to individual sites
(homes, small enterprises or mobile devices), often all traffic at any
one time will be latency-sensitive and, if all the traffic on a link
is marked as EF, Diffserv cannot reduce the delay of any of it. In
contrast, the Low Latency Low Loss Scalable throughput (L4S) approach
removes the causes of any unnecessary queuing delay.

The bufferbloat project has shown that excessively large buffering
(`bufferbloat') has been introducing significantly more delay than the
underlying propagation time. These delays appear only
intermittently—only when a capacity-seeking (e.g. TCP) flow is
long enough for the queue to fill the buffer, making every packet in
other flows sharing the buffer sit through the queue.

Active queue management (AQM) was originally developed to solve
this problem (and others). Unlike Diffserv, which gives low latency to
some traffic at the expense of others, AQM controls latency for all traffic in a class. In general, AQMs
introduce an increasing level of discard from the buffer the longer
the queue persists above a shallow threshold. This gives sufficient
signals to capacity-seeking (aka. greedy) flows to keep the buffer
empty for its intended purpose: absorbing bursts. However,
RED and other algorithms from the 1990s
were sensitive to their configuration and hard to set correctly. So,
AQM was not widely deployed in the 1990s.

More recent state-of-the-art AQMs, e.g. fq_CoDel, PIE and Adaptive RED, are easier to configure, because
they define the queuing threshold in time not bytes, so it is
invariant for different link rates. However, no matter how good the
AQM, the sawtoothing rate of TCP will either cause queuing delay to
vary or cause the link to be under-utilized. Even with a perfectly
tuned AQM, the additional queuing delay will be of the same order as
the underlying speed-of-light delay across the network. Flow-queuing
can isolate one flow from another, but it cannot isolate a TCP flow
from the delay variations it inflicts on itself, and it has other
problems - it overrides the flow rate decisions of variable rate video
applications, it does not recognise the flows within IPSec VPN tunnels
and it is relatively expensive to implement.

It seems that further changes to the network alone will now yield diminishing returns. Data Centre TCP (DCTCP) teaches us that a small but radical change to TCP
is needed to cut two major outstanding causes of queuing delay
variability:

  1.  the `sawtooth' varying rate of TCP itself;

  2.  the smoothing delay deliberately introduced into AQMs to permit bursts without triggering losses.

The former causes a flow's round trip time (RTT) to vary from
about 1 to 2 times the base RTT between the machines in question. The
latter delays the system's response to change by a worst-case
(transcontinental) RTT, which could be hundreds of times the actual
RTT of typical traffic from localized CDNs.

Latency is not our only concern: it was known when TCP was first developed that it would not scale to high bandwidth-delay products.

Given that regular broadband bit-rates over WAN distances are
already beyond the scaling range of
`classic' TCP Reno, `less unscalable' Cubic and Compound variants of TCP have been
successfully deployed. However, these are now approaching their
scaling limits. Unfortunately, fully scalable TCPs such as DCTCP cause
`classic' TCP to starve itself, which is why they have been confined
to private data centres or research testbeds (until now).

This document specifies a `DualQ Coupled AQM' extension that solves
the problem of coexistence between scalable and classic flows, without
having to inspect flow identifiers. The AQM is not like flow-queuing
approaches that classify packets by flow
identifier into numerous separate queues in order to isolate sparse
flows from the higher latency in the queues assigned to heavier flows.
In contrast, the AQM exploits the behaviour of scalable congestion
controls like DCTCP so that every packet in every flow sharing the
queue for DCTCP-like traffic can be served with very low latency.

This AQM extension can be combined with any AQM designed for a
single queue that generates a statistical or deterministic mark/drop
probability driven by the queue dynamics. In many cases it simplifies
the basic control algorithm, and requires little extra processing.
Therefore it is believed the Coupled AQM would be applicable and easy to deploy in all types of buffers: buffers in cost-reduced mass-market residential equipment; buffers in end-system stacks; buffers in carrier-scale equipment including remote access servers, routers, firewalls and Ethernet switches; buffers in network interface cards; buffers in virtualized network appliances, hypervisors; and so on.

For the public Internet, nearly all the benefit will typically be
achieved by deploying the Coupled AQM into either end of the access
link between a 'site' and the Internet, which is invariably the
bottleneck. Here, the term 'site' is used loosely to mean a home, an
office, a campus or mobile user equipment.

The overall L4S architecture gives more detail, including on
wider deployment aspects such as coexistence in bottlenecks where a
DualQ Coupled AQM has not been deployed. The supporting papers and give the full rationale
for the AQM's design, both discursively and in more precise
mathematical form.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 when, and only when, they appear in all capitals, as shown here.

The DualQ Coupled AQM uses two queues for two services. Each of the
following terms identifies both the service and the queue that
provides the service:

  o  The `Classic' service is intended for all the behaviours that currently co-exist with TCP Reno (TCP Cubic, Compound, SCTP, etc).

  o  The `L4S' service is intended for a set of congestion controls with scalable properties (e.g. DCTCP, Relentless TCP, the L4S variant of SCREAM for real-time media {ToDo: ref}). For the public Internet a scalable control has to comply with the requirements in (aka. the 'TCP Prague requirements').

Either service can cope with a proportion of unresponsive or less-responsive traffic as well (e.g. DNS, VoIP, game sync datagrams, etc.), just as a single-queue AQM can, as long as this traffic makes minimal contribution to queuing. The DualQ Coupled AQM behaviour below is defined to be similar to a single FIFO queue with respect to unresponsive and overload traffic.

The AQM couples marking and/or dropping across the two queues such
that a flow will get roughly the same throughput whichever it uses.
Therefore both queues can feed into the full capacity of a link and no
rates need to be configured for the queues. The L4S queue enables
scalable congestion controls like DCTCP to give stunningly low and
predictably low latency, without compromising the performance of
competing 'Classic' Internet traffic. Thousands of tests have been
conducted in a typical fixed residential broadband setting. Typical
experiments used base round trip delays up to 100ms between the data
centre and home network, and large amounts of background traffic in
both queues. For every L4S packet, the AQM kept the average queuing
delay below 1ms (or 2 packets if serialization delay is bigger for
slow links), and no losses at all were introduced by the AQM. Details
of the extensive experiments are available.

Subjective testing was also conducted by multiple people all
simultaneously using very demanding high bandwidth low latency
applications over a single shared access link. In one application, each user could use finger
gestures to pan or zoom their own high definition (HD) sub-window of a
larger video scene generated on the fly in 'the cloud' from a football
match. Another user wearing VR goggles was remotely receiving a feed
from a 360-degree camera in a racing car, again with the sub-window in
their field of vision generated on the fly in 'the cloud' dependent on
their head movements. Even though other users were also downloading
large amounts of L4S and Classic data, playing a gaming benchmark and
watching videos over the same 40Mb/s downstream broadband link,
latency was so low that the football picture appeared to stick to the
user's finger on the touchpad and the experience fed from the remote
camera did not noticeably lag head movements. All the L4S data (even
including the downloads) achieved the same ultra-low latency. With an
alternative AQM, the video noticeably lagged behind the finger
gestures and head movements.

Unlike Diffserv Expedited Forwarding, the L4S queue does not have
to be limited to a small proportion of the link capacity in order to
achieve low delay. The L4S queue can be filled with a heavy load of
capacity-seeking flows like DCTCP and still achieve low delay. The L4S
queue does not rely on the presence of other traffic in the Classic
queue that can be 'overtaken'. It gives low latency to L4S traffic
whether or not there is Classic traffic, and the latency of Classic
traffic does not suffer when a proportion of the traffic is L4S. The
two queues are only necessary because DCTCP-like flows cannot keep
latency predictably low and keep utilization high if they are mixed
with legacy TCP flows.

The experiments used the Linux implementation of DCTCP that is
deployed in private data centres, without any modification despite its
known deficiencies. Nonetheless, certain modifications will be
necessary before DCTCP is safe to use on the Internet, which are
recorded in Appendix A of .
However, the focus of this specification is to get the network service
in place. Then, without any management intervention, applications can
exploit it by migrating to scalable controls like DCTCP, which can
then evolve while their benefits are being
enjoyed by everyone on the Internet.

There are two main aspects to the approach:

  1.  the Coupled AQM that addresses throughput equivalence between Classic (e.g. Reno, Cubic) flows and L4S flows (that satisfy the TCP Prague requirements);

  2.  the Dual Queue structure that provides latency separation for L4S flows, to isolate them from the typically large Classic queue.

In the 1990s, the `TCP formula' was derived for the relationship
between TCP's congestion window, cwnd, and its drop probability, p. To
a first order approximation, cwnd of TCP Reno is inversely
proportional to the square root of p.

We focus on Reno as the worst case, because if we do not harm Reno,
we will not harm Cubic. Nonetheless, TCP Cubic implements a
Reno-compatibility mode, which is the only relevant mode for typical
RTTs under 20ms as long as the throughput of a single flow is less
than about 500Mb/s. Therefore it can be assumed that Cubic traffic
behaves similarly to Reno (but with a slightly different constant of
proportionality). The term 'Classic' will be used for the collection
of Reno-friendly traffic including Cubic in Reno mode.

The supporting paper includes the
derivation of the equivalent rate equation for DCTCP, for which cwnd
is inversely proportional to p (not the square root), where in this
case p is the ECN marking probability. DCTCP is not the only
congestion control that behaves like this, so the term 'L4S' traffic
will be used for all similar behaviour.

For safe co-existence, under stationary conditions, a DCTCP flow
has to run at roughly the same rate as a Reno TCP flow (all other
factors being equal). So the drop or marking probability for Classic
traffic, p_C has to be distinct from the marking probability for L4S
traffic, p_L. updates the original ECN
specification to allow these probabilities to
be distinct, because RFC 3168 required them to be the same.

Also, to remain stable, Classic sources need the network to smooth
p_C so it changes relatively slowly. In contrast, L4S avoids smoothing
in the network, because it delays all signals for a worst-case RTT. So
instead, L4S sources smooth the ECN marking probability themselves, so
they expect the network to generate ECN marks with a probability p_L
that tracks the instantaneous unsmoothed queue.

The Coupled AQM achieves safe coexistence by making the Classic
drop probability p_C proportional to the square of the coupled L4S
probability p_CL. p_CL is an input to the instantaneous L4S marking
probability p_L but it changes as slowly as p_C. This makes the Reno
flow rate roughly equal the DCTCP flow rate, because the squaring of
p_CL counterbalances the square root of p_C in the Classic 'TCP
formula'.

Stating this as a formula, the relation between Classic drop probability, p_C, and the coupled L4S probability p_CL needs to take the form:

    p_C = ( p_CL / k )^2                        (1)

where k is the constant of proportionality, which we shall call the coupling factor.
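As a rough numerical illustration (a sketch, not part of the specification): assuming the steady-state models cwnd ~ sqrt(1.5/p) for Reno and cwnd ~ 2/p for DCTCP from the supporting paper, the following checks that the coupling in Eqn (1) with the recommended k=2 leaves the two congestion windows within about 25% of each other:

```python
import math

def reno_cwnd(p_c):
    # Steady-state Reno model: cwnd ~ sqrt(1.5 / p) packets
    return math.sqrt(1.5 / p_c)

def dctcp_cwnd(p_l):
    # Steady-state DCTCP model: cwnd ~ 2 / p packets
    return 2.0 / p_l

k = 2                   # RECOMMENDED coupling factor
p_cl = 0.01             # coupled L4S probability; stationary, so p_L ~ p_CL
p_c = (p_cl / k) ** 2   # Eqn (1): p_C = (p_CL / k)^2

print(round(reno_cwnd(p_c)))    # ~245 packets
print(round(dctcp_cwnd(p_cl)))  # 200 packets
```

The squaring of p_CL in Eqn (1) counterbalances the square root in the Reno formula, so both windows scale as 1/p_CL and stay within a small constant factor of each other at any load.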
Classic traffic typically builds a large queue to prevent under-utilization. Therefore a separate queue is provided for L4S
traffic, and it is scheduled with priority over Classic. Priority is
conditional to prevent starvation of Classic traffic.

Nonetheless, coupled marking ensures that giving priority to L4S
traffic still leaves the right amount of spare scheduling time for
Classic flows to each get equivalent throughput to DCTCP flows (all
other factors such as RTT being equal).

Both the Coupled AQM and DualQ mechanisms need an identifier to
distinguish L and C packets. Then the coupling algorithm can achieve
coexistence without having to inspect flow identifiers, because it can
apply the appropriate marking or dropping probability to all flows of
each type. A separate specification requires the sender to use the
ECT(1) codepoint of the ECN field as this identifier, having assessed
various alternatives. An additional process document has proved
necessary to make the ECT(1) codepoint available for
experimentation.

For policy reasons, an operator might choose to steer certain
packets (e.g. from certain flows or with certain addresses) out of the
L queue, even though they identify themselves as L4S by their ECN
codepoints. In such cases, the device MUST NOT alter the ECN field, so
that it is preserved end-to-end. The aim is that each operator can
choose how it treats L4S traffic locally, but an individual operator
does not alter the identification of L4S packets, which would prevent
other operators downstream from making their own choices on how to
treat L4S traffic.

In addition, other identifiers could be used to classify certain
additional packet types into the L queue, that are deemed not to risk
harming the L4S service. For instance addresses of specific
applications or hosts (see ), specific Diffserv codepoints
such as EF (Expedited Forwarding) and Voice-Admit service classes (see
) or certain protocols
(e.g. ARP, DNS).

Note that the mechanism only reads these classifiers; it MUST NOT
re-mark or alter these identifiers (except for marking the ECN field
with the CE codepoint - with increasing frequency to indicate
increasing congestion).

shows the overall structure
that any DualQ Coupled AQM is likely to have. This schematic is
intended to aid understanding of the current designs of DualQ Coupled
AQMs. However, it is not intended to preclude other innovative ways of
satisfying the normative requirements in that minimally define a DualQ Coupled
AQM.

The classifier on the left separates incoming traffic between the
two queues (L and C). Each queue has its own AQM that determines the
likelihood of marking or dropping (p_L and p_C). It has been
proved that it is preferable to control load
with a linear controller, then square the output before applying it as
a drop probability to TCP (because TCP decreases its load proportional
to the square-root of the increase in drop). So, the AQM for Classic
traffic needs to be implemented in two stages: i) a base stage that
outputs an internal probability p' (pronounced p-prime); and ii) a
squaring stage that outputs p_C, where:

    p_C = (p')^2                                (2)

Substituting for p_C in Eqn (1) gives:

    (p')^2 = ( p_CL / k )^2

So the slow-moving input to ECN marking in the L queue (the coupled L4S probability) is:

    p_CL = k * p'                               (3)

where k is the constant coupling factor (see ).

It can be seen that these two transformations of p' implement the required coupling given in equation (1) earlier.

The actual probability p_L that we apply to the L queue needs to
track the immediate L queue delay, as well as track p_CL under
stationary conditions. So we use a native AQM in the L queue that
calculates a probability p'_L as a function of the instantaneous L
queue. And, given the L queue has conditional strict priority over the
C queue, whenever the L queue grows, we should apply marking
probability p'_L, but p_L should not fall below p_CL. This
suggests:which has also been found to work very well in
practice.After the AQMs have applied their dropping or marking, the
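Putting the relations above together (p_C = (p')^2, p_CL = k*p', p_L = max(p'_L, p_CL)), the per-packet signalling decisions might be sketched as follows (hypothetical helper names; p' is supplied by the base AQM and p'_L by the native L4S AQM; probabilistic marking assumed):

```python
import random

K = 2  # coupling factor

def classic_decision(p_prime):
    # Squaring stage: p_C = (p')^2; True means drop (or CE-mark) the packet
    return random.random() < p_prime ** 2

def l4s_decision(p_prime_l, p_prime):
    # The coupled probability k*p' floors the native instantaneous p'_L;
    # True means CE-mark the L4S packet
    p_l = max(p_prime_l, K * p_prime)
    return random.random() < p_l
```

Under stationary conditions p'_L falls below the coupled floor, so the L queue's marking tracks p_CL; when the L queue itself grows, the native p'_L takes over and tracks the instantaneous queue.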
After the AQMs have applied their dropping or marking, the scheduler forwards their packets to the link, giving priority to L4S
traffic. Priority has to be conditional in some way (see ). Simple strict priority is inappropriate, because it could lead the L4S queue to starve the Classic queue. For
example, consider the case where a continually busy L4S queue blocks a
DNS request in the Classic queue, arbitrarily delaying the start of a
new Classic flow.

Example DualQ Coupled AQM algorithms called DualPI2 and Curvy RED
are given in and . Either example AQM can be used to couple packet marking and dropping across a dual Q.

DualPI2 uses a Proportional-Integral (PI) controller as the Base
AQM. Indeed, this Base AQM with just the squared output and no L4S
queue can be used as a drop-in replacement for PIE, in which case we call it just PI2. PI2 is a principled simplification of PIE that is both
more responsive and more stable in the face of dynamically varying
load.

Curvy RED is derived from RED, but
its configuration parameters are insensitive to link rate and it
requires fewer operations per packet. However, DualPI2 is more
responsive and stable over a wider range of RTTs than Curvy RED. As a
consequence, DualPI2 has attracted more development attention than
Curvy RED, leaving the Curvy RED design incomplete and not so fully
evaluated.

Both AQMs regulate their queue in units of time, not bytes. As
already explained, this ensures configuration can be invariant for
different drain rates. With AQMs in a dualQ structure this is
particularly important because the drain rate of each queue can vary
rapidly as flows for the two queues arrive and depart, even if the
combined link rate is constant.

It would be possible to control the queues with other alternative
AQMs, as long as the normative requirements (those expressed in
capitals) in are observed.

The following requirements are intended to capture only the
essential aspects of a DualQ Coupled AQM. They are intended to be
independent of the particular AQMs used for each queue.

A Dual Queue Coupled AQM implementation MUST utilize two queues,
each with an AQM algorithm. The two queues can be part of a larger
queuing hierarchy.

The AQM algorithm for the low latency (L) queue MUST apply ECN marking.

The scheduler draining the two queues MUST give L4S packets
priority over Classic, although priority MUST be bounded in order
not to starve Classic traffic.

defines the meaning of
an ECN marking on L4S traffic, relative to drop of Classic traffic.
In order to prevent starvation of Classic traffic by scalable L4S
traffic, it says, "The likelihood that an AQM drops a Not-ECT
Classic packet (p_C) MUST be roughly proportional to the square of
the likelihood that it would have marked it if it had been an L4S
packet (p_L)." The term 'likelihood' is used to allow for marking
and dropping to be either probabilistic or deterministic.

For the current specification, this translates into the following
requirement. A DualQ Coupled AQM MUST apply ECN marking to traffic
in the L queue that is no lower than that derived from the
likelihood of drop (or ECN marking) in the Classic queue using Eqn.
(1).

The constant of proportionality, k, in Eqn (1) determines the
relative flow rates of Classic and L4S flows when the AQM concerned
is the bottleneck (all other factors being equal). says, "The constant of
proportionality (k) does not have to be standardised for
interoperability, but a value of 2 is RECOMMENDED."

Assuming scalable congestion controls for the Internet will be as
aggressive as DCTCP, this will ensure their congestion window will
be roughly the same as that of a standards track TCP congestion
control (Reno) and other so-called
TCP-friendly controls, such as TCP Cubic in its TCP-friendly
mode.

The choice of k is a matter of operator policy, and operators MAY
choose a different value using
and the guidelines in .

If multiple users share capacity at a bottleneck (e.g. in the
Internet access link of a campus network), the operator's choice of
k will determine capacity sharing between the flows of different
users. However, on the public Internet, access network operators
typically isolate customers from each other with some form of
layer-2 multiplexing (OFDM(A) in DOCSIS3.1, CDMA in 3G, SC-FDMA in
LTE) or L3 scheduling (WRR in DSL), rather than relying on TCP to
share capacity between customers . In
such cases, the choice of k will solely affect relative flow rates
within each customer's access capacity, not between customers. Also,
k will not affect relative flow rates at any times when all flows
are Classic or all L4S, and it will not affect the relative
throughput of small flows.

The flexibility to allow operator-specific classifiers () leads to the need to specify what
the AQM in each queue ought to do with packets that do not carry
the ECN field expected for that queue. It is recommended that the
AQM in each queue inspects the ECN field to determine what sort of
congestion notification to signal, then decides whether to apply
congestion notification to this particular packet, as
follows:

  o  If a packet that does not carry an ECT(1) or CE codepoint is classified into the L queue:

     *  if the packet is ECT(0), the L AQM SHOULD apply CE-marking using a probability appropriate to Classic congestion control and appropriate to the target delay in the L queue;

     *  if the packet is Not-ECT, the appropriate action depends on whether some other function is protecting the L queue from misbehaving flows (e.g. per-flow queue protection or latency policing):

        +  if separate queue protection is provided, the L AQM SHOULD ignore the packet and forward it unchanged, meaning it should not calculate whether to apply congestion notification and it should neither drop nor CE-mark the packet (for instance, the operator might classify EF traffic that is unresponsive to drop into the L queue, alongside responsive L4S-ECN traffic);

        +  if separate queue protection is not provided, the L AQM SHOULD apply drop using a drop probability appropriate to Classic congestion control and appropriate to the target delay in the L queue.

  o  If a packet that carries an ECT(1) codepoint is classified into the C queue:

     *  the C AQM SHOULD apply CE-marking using the coupled AQM probability p_CL (= k*p').

If the DualQ Coupled AQM has detected overload, it will signal congestion solely using drop, irrespective of the ECN field.

The above requirements are worded as "SHOULDs", because
operator-specific classifiers are for flexibility, by definition.
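As an illustration only (hypothetical function and codepoint names), the recommended default per-codepoint treatment above might be sketched as:

```python
def l_queue_treatment(ecn, queue_protection):
    """Default treatment of a packet classified into the L queue, keyed on
    its ECN codepoint ('not-ect', 'ect0', 'ect1' or 'ce')."""
    if ecn in ('ect1', 'ce'):
        return 'l4s-ce-marking'       # native L4S marking, probability p_L
    if ecn == 'ect0':
        # Classic ECN sender: CE-mark with a Classic-appropriate probability
        return 'classic-ce-marking'
    # Not-ECT:
    if queue_protection:
        return 'forward-unchanged'    # a separate function polices the L queue
    return 'classic-drop'             # drop with a Classic-appropriate probability
```

An operator tuning for the legacy-traffic case described here would simply return a different action for the codepoint concerned.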
Therefore, alternative actions might be appropriate in the
operator's specific circumstances. An example would be where the
operator knows that certain legacy traffic marked with one
codepoint actually has a congestion response associated with
another codepoint.

By default, a DualQ Coupled AQM SHOULD NOT need any configuration for use at a bottleneck on the public Internet. The following parameters MAY be operator-configurable, e.g. to tune for non-Internet settings:

  o  Optional packet classifier(s) to use in addition to the ECN field (see );

  o  Expected typical RTT (a parameter for typical or target queuing delay in each queue might be configurable instead; if so it MUST be expressed in units of time);

  o  Expected maximum RTT (a stability parameter that depends on maximum RTT might be configurable instead);

  o  Coupling factor, k;

  o  The limit to the conditional priority of L4S (scheduler-dependent, e.g. the scheduler weight for WRR, or the time-shift for time-shifted FIFO);

  o  The maximum Classic ECN marking probability, p_Cmax, before switching over to drop.

An experimental DualQ Coupled AQM SHOULD allow the operator to
monitor each of the following operational statistics on demand, per
queue and per configurable sample interval, for performance
monitoring and perhaps also for accounting in some cases:

  o  Bits forwarded, from which utilization can be calculated;

  o  Total packets arriving, enqueued and dequeued, to distinguish tail discard from proactive AQM discard;

  o  ECN packets marked, non-ECN packets dropped, ECN packets dropped, from which marking and dropping probabilities can be calculated;

  o  Queue delay (not including serialization delay of the head packet or medium acquisition delay) - see further notes below.

Unlike the other statistics, queue
delay cannot be captured in a simple accumulating counter.
Therefore the type of queue delay statistics produced (mean,
percentiles, etc.) will depend on implementation constraints. To
facilitate comparative evaluation of different implementations
and approaches, an implementation SHOULD allow mean and 99th
percentile queue delay to be derived (per queue per sample
interval). A relatively simple way to do this would be to store
a coarse-grained histogram of queue delay. This could be done
with a small number of bins with configurable edges that
represent contiguous ranges of queue delay. Then, over a sample
interval, each bin would accumulate a count of the number of
packets that had fallen within each range. The maximum queue
delay per queue per interval MAY also be recorded.
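A coarse-grained histogram of the kind suggested above might be sketched as follows (hypothetical class; bin edges in milliseconds, last bin unbounded):

```python
import bisect

class DelayHistogram:
    """Per-interval queue-delay statistics from a small number of bins
    with configurable upper edges."""
    def __init__(self, edges_ms):
        self.edges = sorted(edges_ms)
        self.counts = [0] * (len(self.edges) + 1)
        self.max_ms = 0.0

    def record(self, delay_ms):
        # bisect_right finds the first bin whose upper edge exceeds the delay
        self.counts[bisect.bisect_right(self.edges, delay_ms)] += 1
        self.max_ms = max(self.max_ms, delay_ms)

    def percentile_edge(self, q):
        # Upper edge of the bin containing the q-th percentile sample
        target = q * sum(self.counts)
        cum = 0
        for i, count in enumerate(self.counts):
            cum += count
            if cum >= target:
                return self.edges[i] if i < len(self.edges) else float('inf')
        return float('inf')
```

For example, with edges at 1, 5 and 20 ms, 99 samples of 0.5 ms and one of 10 ms would put the 99th percentile in the first bin while still recording a maximum of 10 ms.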
An experimental DualQ Coupled AQM SHOULD asynchronously report the following data about anomalous conditions:

  o  Start-time and duration of overload state.

A hysteresis mechanism SHOULD be used to prevent
flapping in and out of overload causing an event storm. For
instance, exit from overload state could trigger one report, but
also latch a timer. Then, during that time, if the AQM enters
and exits overload state any number of times, the duration in
overload state is accumulated but no new report is generated
until the first time the AQM is out of overload once the timer
has expired.
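The latch described above might be sketched as follows (hypothetical class; times in seconds). One report is emitted per exit from overload, with any episodes that end within hold_time of the last report merged into the next one:

```python
class OverloadReporter:
    """Hysteresis for overload reporting: exit from overload triggers one
    report and latches a timer; further episodes within the timer only
    accumulate duration until the AQM next exits overload after expiry."""
    def __init__(self, hold_time):
        self.hold_time = hold_time
        self.in_overload = False
        self.entered = None       # when the current overload episode began
        self.first_start = None   # start-time for the pending report
        self.accumulated = 0.0    # overload duration for the pending report
        self.latched_until = 0.0  # reports suppressed before this time
        self.reports = []         # (start_time, duration) tuples emitted

    def enter(self, now):
        if not self.in_overload:
            self.in_overload = True
            self.entered = now
            if self.first_start is None:
                self.first_start = now

    def exit(self, now):
        if not self.in_overload:
            return
        self.in_overload = False
        self.accumulated += now - self.entered
        if now < self.latched_until:
            return                # still latched: accumulate, no new report
        self.reports.append((self.first_start, self.accumulated))
        self.first_start = None
        self.accumulated = 0.0
        self.latched_until = now + self.hold_time
```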
suggests that deployment, coexistence and scaling should also be covered as management requirements. The
raison d'être of the DualQ Coupled AQM is to enable deployment
and coexistence of scalable congestion controls - as incremental
replacements for today's TCP-friendly controls that do not scale
with bandwidth-delay product. Therefore, these motivating issues are
explained in the Introduction and detailed in the L4S architecture
. Also, the descriptions of
specific DualQ Coupled AQM algorithms in the appendices cover
scaling of their configuration parameters, e.g. with respect to RTT
and sampling frequency.

This specification contains no IANA considerations.

Where the interests of users or flows might conflict, it could be
necessary to police traffic to isolate any harm to the performance of
individual flows. However it is hard to avoid unintended side-effects
with policing, and in a trusted environment policing is not necessary.
Therefore per-flow policing needs to be separable from a basic AQM, as
an option under policy control.

However, a basic DualQ AQM does at least need to handle overload. A
useful objective would be for the overload behaviour of the DualQ AQM
to be at least no worse than a single queue AQM. However, a trade-off
needs to be made between complexity and the risk of either traffic
class harming the other. In each of the following three subsections,
an overload issue specific to the DualQ is described, followed by
proposed solution(s).

Under overload the higher priority L4S service will have to
sacrifice some aspect of its performance. Alternative solutions are
provided below that each relax a different factor: e.g. throughput,
delay, drop. These choices need to be made either by the developer or
by operator policy, rather than by the IETF.

Priority of L4S is required to be conditional to avoid total
throughput starvation of Classic by heavy L4S traffic. This raises
the question of whether to sacrifice L4S throughput or L4S delay (or
some other policy) to mitigate starvation of Classic:

  o  By using weighted round
robin as the conditional priority scheduler, the L4S service can
sacrifice some throughput during overload to guarantee a minimum
throughput service for Classic traffic. The scheduling weight of
the Classic queue should be small (e.g. 1/16). Then, in most
traffic scenarios the scheduler will not interfere and it will
not need to - the coupling mechanism and the end-systems will
share out the capacity across both queues as if it were a single
pool. However, because the congestion coupling only applies in
one direction (from C to L), if L4S traffic is over-aggressive
or unresponsive, the scheduler weight for Classic traffic will
at least be large enough to ensure it does not starve. In cases where the ratio of L4S to Classic flows
(e.g. 19:1) is greater than the ratio of their scheduler weights
(e.g. 15:1), the L4S flows will get less than an equal share of
the capacity, but only slightly. For instance, with the example
numbers given, each L4S flow will get (15/16)/19 = 4.9% when
ideally each would get 1/20=5%. In the rather specific case of
an unresponsive flow taking up a large part of the capacity set
aside for L4S, using WRR could significantly reduce the capacity
left for any responsive L4S flows.

To control milder overload of responsive traffic, particularly when
close to the maximum congestion signal, the operator could choose to
control overload of the Classic queue by allowing some delay to
'leak' across to the L4S queue. The scheduler can be made to behave
like a single First-In First-Out (FIFO) queue with different service
times by implementing a very simple conditional priority scheduler
that could be called a "time-shifted FIFO" (see the Modified Earliest
Deadline First (MEDF) scheduler of ). This
scheduler adds tshift to the queue delay of the next L4S packet,
before comparing it with the queue delay of the next Classic
packet, then it selects the packet with the greater adjusted
queue delay. Under regular conditions, this time-shifted FIFO
scheduler behaves just like a strict priority scheduler. But
under moderate or high overload it prevents starvation of the
Classic queue, because the time-shift (tshift) defines the
maximum extra queuing delay of Classic packets relative to
L4S.

The example implementation in
can implement either policy.

To keep the throughput of both L4S and Classic flows roughly
equal over the full load range, a different control strategy needs
to be defined above the point where one AQM first saturates to a
probability of 100%, leaving no room to push back the load any
harder. If k>1, L4S will saturate first, even though saturation
could be caused by unresponsive traffic in either queue.

The term 'unresponsive' includes cases where a flow becomes
temporarily unresponsive, for instance, a real-time flow that takes
a while to adapt its rate in response to congestion, or a TCP-like
flow that is normally responsive, but above a certain congestion
level it will not be able to reduce its congestion window below the
minimum of 2 segments, effectively becoming
unresponsive. (Note that L4S traffic ought to remain responsive
below a window of 2 segments (see ).)

Saturation raises the question of whether to relieve congestion
by introducing some drop into the L4S queue or by allowing delay to
grow in both queues (which could eventually lead to tail drop
too):

Saturation can be avoided by
setting a maximum threshold for L4S ECN marking (assuming
k>1) before saturation starts to make the flow rates of the
different traffic types diverge. Above that, the drop probability
of Classic traffic is applied to all packets of all traffic
types. Experiments have shown that queuing delay can be
kept at the target in any overload situation, including with
unresponsive traffic, and no further measures are required .

When L4S marking saturates, instead of switching to drop, the
drop and marking probabilities could be capped. Beyond that, delay
will grow either solely in the queue with unresponsive traffic (if
WRR is used), or in both queues (if time-shifted FIFO is used). In
either case, the higher delay ought to control temporary high
congestion. If the overload is more persistent, eventually the
combined DualQ will overflow and tail drop will control congestion.

The example implementation in
solely applies the "drop on saturation" policy.

Unresponsive traffic has a greater advantage if it is also
ECN-capable. The advantage is undetectable at normal low levels of
drop/marking, but it becomes significant with the higher levels of
drop/marking typical during overload. This is an issue whether the
ECN-capable traffic is L4S or Classic.

This raises the question of whether and when to switch off ECN
marking and use solely drop instead, as required by both Section 7
of and Section 4.2.1 of .

Experiments with the DualPI2 AQM () have shown that introducing 'drop on
saturation' at 100% L4S marking addresses this problem with
unresponsive ECN as well as addressing the saturation problem. It
leaves only a small range of congestion levels where unresponsive
traffic gains any advantage from using the ECN capability, and the
advantage is hardly detectable .

Thanks to Anil Agarwal, Sowmini Varadhan and Gabi Bracha for their
detailed review comments, particularly of the appendices, and for
suggestions on how to make our explanation clearer. Thanks also to
Greg White for improving the normative requirements, and to both Greg
and Tom Henderson for insights on the choice of schedulers and on
queue delay measurement techniques.

The authors' contributions were originally part-funded by the
European Community under its Seventh Framework Programme through the
Reducing Internet Transport Latency (RITE) project (ICT-317700). Bob
Briscoe's contribution was also part-funded by the Research Council of
Norway through the TimeIn project. The views expressed here are solely
those of the authors.

Adaptive RED: An Algorithm for Increasing the Robustness of
RED's Active Queue Management

Identifying Modified Explicit Congestion Notification (ECN)
Semantics for Ultra-Low Queuing Delay

This specification defines the identifier to be used on IP
packets for a new network service called low latency, low loss and
scalable throughput (L4S). It is similar to the original (or
'Classic') Explicit Congestion Notification (ECN). 'Classic' ECN
marking was required to be equivalent to a drop, both when applied
in the network and when responded to by a transport. Unlike
'Classic' ECN marking, for packets carrying the L4S identifier,
the network applies marking more immediately and more aggressively
than drop, and the transport response to each mark is reduced and
smoothed relative to that for drop. The two changes counterbalance
each other so that the throughput of an L4S flow will be roughly
the same as a 'Classic' flow under the same conditions. However,
the much more frequent control signals and the finer responses to
them result in ultra-low queuing delay without compromising link
utilization, even during high load. Examples of new active queue
management (AQM) marking algorithms and examples of new transports
(whether TCP-like or real- time) are specified separately. The new
L4S identifier is the key piece that enables them to interwork and
distinguishes them from 'Classic' traffic. It gives an incremental
migration path so that existing 'Classic' TCP traffic will be no
worse off, but it can be prevented from degrading the ultra-low
delay and loss of the new scalable transports.

Low Latency, Low Loss, Scalable Throughput (L4S) Internet
Service: Architecture

This document describes the L4S architecture for the provision
of a new service that the Internet could provide to eventually
replace best efforts for all traffic: Low Latency, Low Loss,
Scalable throughput (L4S). It is becoming common for _all_ (or
most) applications being run by a user at any one time to require
low latency. However, the only solution the IETF can offer for
ultra-low queuing delay is Diffserv, which only favours a minority
of packets at the expense of others. In extensive testing the new
L4S service keeps average queuing delay under a millisecond for
_all_ applications even under very heavy load, without sacrificing
utilization; and it keeps congestion loss to zero. It is becoming
widely recognized that adding more access capacity gives
diminishing returns, because latency is becoming the critical
problem. Even with a high capacity broadband access, the reduced
latency of L4S remarkably and consistently improves performance
under load for applications such as interactive video,
conversational video, voice, Web, gaming, instant messaging,
remote desktop and cloud-based apps (even when all being used at
once over the same access link). The insight is that the root
cause of queuing delay is in TCP, not in the queue. By fixing the
sending TCP (and other transports) queuing latency becomes so much
better than today that operators will want to deploy the network
part of L4S to enable new products and services. Further, the
network part is simple to deploy - incrementally with zero-config.
Both parts, sender and network, ensure coexistence with other
legacy traffic. At the same time L4S solves the long- recognized
problem with the future scalability of TCP throughput. This
document describes the L4S architecture, briefly describing the
different components and how they work together to provide the
aforementioned enhanced Internet service.

Interactions between Low Latency, Low Loss, Scalable
Throughput (L4S) and Differentiated Services

Relentless Congestion Control

`Data Centre to the Home': Ultra-Low Latency for All (under submission)

PI2: A Linearized AQM for both Classic and Scalable
TCP (to appear)

Ultra-Low Delay for All: Live Experience, Live Analysis

Congestion Avoidance and Control

Insights from Curvy RED (Random Early Detection)

Controlling Queue Delay

MEDF - a simple scheduling algorithm for two real-time
transport service classes with application in the UTRAN

Destruction Testing: Ultra-Low Delay using Dual Queue Coupled
Active Queue Management

As a first concrete example, the pseudocode below gives the DualPI2
algorithm. DualPI2 follows the structure of the DualQ Coupled AQM
framework in . A simple step
threshold (in units of queuing time) is used for the Native L4S AQM, but
a ramp is also described as an alternative. The PI2 algorithm is used for the Classic AQM. PI2 is an improved variant
of the PIE AQM .

We will introduce the pseudocode in two passes. The first pass
explains the core concepts, deferring handling of overload to the
second pass. To aid comparison, line numbers are kept in step between
the two passes by using letter suffixes where the longer code needs
extra lines.

A full open source implementation for Linux is available at:
https://github.com/olgabo/dualpi2

The pseudocode manipulates three main structures of variables: the
packet (pkt), the L4S queue (lq) and the Classic queue (cq). The
pseudocode consists of the following five functions:

- initialization code () that sets parameter defaults (the API for
  setting non-default values is omitted for brevity);

- enqueue code ();

- dequeue code ();

- a ramp function () used to calculate the ECN-marking probability
  for the L4S queue;

- code to regularly update the base probability (p) used in the
  dequeue code ().

It also uses the following functions that are not shown in full here:

- scheduler(), which selects between the head packets of the two
  queues; the choice of scheduler technology is discussed later;

- cq.len() or lq.len(), which returns the current length (aka.
  backlog) of the relevant queue in bytes;

- cq.time() or lq.time(), which returns the current queuing delay
  (aka. sojourn time or service time) of the relevant queue in units
  of time.

Queuing delay could be measured directly by storing a per-packet
time-stamp as each packet is enqueued, and subtracting this from the
system time when the packet is dequeued. If time-stamping is not easy
to introduce with certain hardware, queuing delay could be predicted
indirectly by dividing the size of the queue by the predicted
departure rate, which might be known precisely for some link
technologies (see for example ).

In our experiments so far (building on experiments with PIE) on
broadband access links ranging from 4 Mb/s to 200 Mb/s with base RTTs
from 5 ms to 100 ms, DualPI2 achieves good results with the default
parameters in . The
parameters are categorised by whether they relate to the Base PI2 AQM,
the L4S AQM or the framework coupling them together. Variables derived
from these parameters are also included at the end of each category.
Each parameter is explained as it is encountered in the walk-through
of the pseudocode below.

For brevity, the pseudocode shows some parameters in units of
microseconds (us), but a real implementation would probably use
nanoseconds.

The overall goal of the code is to maintain the base probability
(p), which is an internal variable from which the marking and dropping
probabilities for L4S and Classic traffic (p_L and p_C) are derived.
The variable named p in the pseudocode and in this walk-through is the
same as p' (p-prime) in . The
probabilities p_L and p_C are derived in lines 3, 4 and 5 of the
dualpi2_update() function ()
then used in the dualpi2_dequeue() function (). The code walk-through below
builds up to explaining that part of the code eventually, but it
starts from packet arrival.

When packets arrive, first a common queue limit is checked, as shown
in line 2 of the enqueuing pseudocode in . Note that the limit is
deliberately tested before enqueue to avoid any bias against larger
packets (so, depending on whether the implementation stores a packet while
testing whether to drop it from the tail, it might be necessary for
the actual buffer memory to be one MTU larger than limit).

Line 2 assumes an implementation where lq and cq share common
buffer memory. An alternative implementation could use separate
buffers for each queue, in which case the arriving packet would have
to be classified first to determine which buffer to check for
available space. The choice is a trade-off; a shared buffer can use
less memory, whereas separate buffers isolate the L4S queue from
tail-drop due to large bursts of Classic traffic (e.g. a Classic TCP
flow during slow-start over a long RTT).

Returning to the shared-buffer case, if limit is not exceeded, the
packet will be classified and enqueued to the Classic or L4S queue
dependent on the least significant bit of the ECN field in the IP
header (line 5). Packets with a codepoint having an LSB of 0 (Not-ECT
and ECT(0)) will be enqueued in the Classic queue. Otherwise, ECT(1)
and CE packets will be enqueued in the L4S queue. Optional additional
packet classification flexibility is omitted for brevity (see ).

The dequeue pseudocode () is repeatedly called whenever
the lower layer is ready to forward a packet. It schedules one packet
for dequeuing (or zero if the queue is empty) then returns control to
the caller, so that it does not block while that packet is being
forwarded. While making this dequeue decision, it also makes the
necessary AQM decisions on dropping or marking. The alternative of
applying the AQMs at enqueue would shift some processing away from
the critical time when each packet is dequeued. However, it would
also add a whole queue of delay to the control signals, making the
control loop very sloppy.

All the dequeue code is contained within a large while loop so that
if it decides to drop a packet, it will continue until it selects a
packet to schedule. Line 3 of the dequeue pseudocode is where the
scheduler chooses between the L4S queue (lq) and the Classic queue
(cq). Detailed implementation of the scheduler is not shown (see
discussion later). If an L4S packet is scheduled, lines 7 and 8 ECN-mark the
packet if a random marking decision is drawn according to p_L.
Line 6 calculates p_L as the maximum of the coupled L4S
probability p_CL and the probability from the native L4S AQM p'_L.
This implements the max() function shown in to couple the outputs of the two
AQMs together. Of the two probabilities input to p_L in line 6,
p'_L is calculated per packet in line 5 by the laqm() function
(see ), whereas p_CL is maintained by the dualpi2_update() function,
which runs every Tupdate (default 16 ms) (see ).

If a Classic packet is scheduled, lines 10 to 17 drop or mark
the packet based on the squared probability p_C.

The Native L4S AQM algorithm () is a ramp function, similar to
the RED algorithm, but simpler due to the following differences:

- The min and max of the ramp are defined in units of queuing delay,
  not bytes, so that configuration remains invariant as the queue
  departure rate varies.

- It uses instantaneous queuing delay, which removes smoothing delay
  (L4S senders smooth incoming ECN feedback when necessary).

- The ramp rises linearly from 0 directly to 1, not to an
  intermediate value of p'_L as RED would, because there is no need
  to keep the ECN-marking probability low.

- Marking does not have to be randomized. Determinism is being
  experimented with instead of randomness, to reduce the delay
  necessary to smooth out the noise of randomness from the signal.
  In this case, for each packet, the algorithm would accumulate p_L
  in a counter and mark the packet that takes the counter over 1,
  then subtract 1 from the counter and continue.

This ramp function requires two configuration parameters, the
minimum threshold (minTh) and the width of the ramp (range), both in
units of queuing time, as shown in the parameter initialization code
in . A minimum marking
threshold parameter (Th_len) in transmission units (default 2 MTU) is
also necessary to ensure that the ramp does not trigger excessive
marking on slow links. The code in lines 23-28 of converts 2 MTU into time
units and adjusts the ramp thresholds to be no shallower than this
floor.

An operator can effectively turn the ramp into a step function, as
used by DCTCP, by setting the range to its minimum value (e.g. 1 ns).
Then the condition for the ramp calculation will hardly ever arise.
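To make the ramp concrete, here is a non-normative Python sketch of a
laqm()-style function, together with the deterministic accumulator
marking described above. The threshold values are purely illustrative
(not recommended defaults) and the Th_len floor adjustment for slow
links is omitted:

```python
def laqm(qdelay_us, minTh_us=800.0, range_us=400.0):
    """Native L4S AQM: p'_L ramps linearly from 0 at minTh up to 1 at
    minTh + range (both in units of queuing time, here microseconds).
    Setting range to its minimum value turns the ramp into a step."""
    if qdelay_us >= minTh_us + range_us:
        return 1.0
    if qdelay_us > minTh_us:
        return (qdelay_us - minTh_us) / range_us
    return 0.0

class DeterministicMarker:
    """Non-random marking: accumulate p_L per packet and mark the
    packet that takes the counter over 1, then subtract 1."""
    def __init__(self):
        self.counter = 0.0

    def mark(self, p_L):
        self.counter += p_L
        if self.counter >= 1.0:
            self.counter -= 1.0
            return True
        return False
```

With range_us set to a tiny value, laqm() behaves as the step function
used by DCTCP, as described above.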
There is some concern that using the step function of DCTCP for the
Native L4S AQM requires end-systems to smooth the signal for an
unnecessarily large number of round trips to ensure sufficient
fidelity. A ramp seems to be no worse than a step in initial
experiments with existing DCTCP. Therefore, it is recommended that a
ramp is configured in place of a step, which will allow congestion
control algorithms to investigate faster smoothing algorithms.

p_CL depends on the base probability (p), which is kept up to date
by the core PI algorithm in ,
executed every Tupdate.

Note that p solely depends on the queuing time in the Classic
queue. In line 2, the current queuing delay (curq) is evaluated from
how long the head packet was in the Classic queue (cq). The function
cq.time() (not shown) subtracts the time stamped at enqueue from the
current time and implicitly takes the current queuing delay as 0 if
the queue is empty.

The algorithm centres on line 3, which is a classical
Proportional-Integral (PI) controller that alters p dependent on: a)
the error between the current queuing delay (curq) and the target
queuing delay ('target' - see ); and b) the
change in queuing delay since the last sample. The name 'PI'
represents the fact that the second factor (how fast the queue is
growing) is Proportional to load while the
first is the Integral of the load (so it
removes any standing queue in excess of the target).

The two 'gain factors' in line 3, alpha_U and beta_U, respectively
weight how strongly each of these elements ((a) and (b)) alters p.
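As a non-normative sketch of this update step (variable names follow
the walk-through; the target, Tupdate, alpha and beta values are
purely illustrative, and the overload handling of the second pass is
omitted):

```python
TARGET_US = 15000.0       # illustrative target Classic queuing delay (15 ms)
TUPDATE_S = 0.016         # Tupdate = 16 ms
ALPHA_HZ, BETA_HZ = 10.0, 100.0  # illustrative gains, per second of delay
K = 2                     # coupling factor

# Gains scaled by (Tupdate/1s), as in lines 9 and 10 of the initialization
alpha_U = ALPHA_HZ * TUPDATE_S
beta_U = BETA_HZ * TUPDATE_S

def dualpi2_update(p, curq_us, prevq_us):
    """One Tupdate step: PI control of the base probability p, then
    derivation of the coupled (p_CL) and Classic (p_C) probabilities."""
    # (a) error from the target (Integral of load);
    # (b) queue growth since the last sample (Proportional to load)
    p += (alpha_U * (curq_us - TARGET_US)
          + beta_U * (curq_us - prevq_us)) * 1e-6
    p = min(max(p, 0.0), 1.0)          # bound p to [0,1] in corner cases
    return p, min(K * p, 1.0), p * p   # p, p_CL = k*p, p_C = p^2
```

At the target with a static queue, p is left unchanged; a queue above
target, or a growing queue, pushes p (and hence p_CL and p_C) up.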
These gain factors are in units of 'per second of delay', or Hz,
because they transform differences in queuing delay into changes in
probability.

alpha_U and beta_U are derived from the input parameters alpha and
beta (see lines 5 and 6 of ). The recommended values
of alpha and beta come from the stability analysis in , so that the AQM can change p as fast as possible in
response to changes in load without over-compensating and thereby
causing oscillations in the queue.

alpha and beta determine how much p ought to change if it were
updated every second. It is best to update p as frequently as
possible, but the update interval (Tupdate) will probably be
constrained by hardware performance. For link rates from 4 - 200 Mb/s,
we found Tupdate=16ms (as recommended in ) is
sufficient. However small the chosen value of Tupdate, p should change
by the same amount per second, but in finer, more frequent steps. So
the gain factors used for updating p in need to be scaled by (Tupdate/1s),
which is done in lines 9 and 10 of . The suffix '_U' represents
'per update time' (Tupdate).

In corner cases, p can overflow the range [0,1], so the resulting
value of p has to be bounded (omitted from the pseudocode). Then, as
already explained, the coupled and Classic probabilities are derived
from the new p in lines 4 and 5 as p_CL = k*p and p_C = p^2.

Because the coupled L4S marking probability (p_CL) is factored up
by k, the dynamic gain parameters alpha and beta are also inherently
factored up by k for the L4S queue, which is necessary to ensure that
Classic TCP and DCTCP controls have the same stability. So, if alpha
is 10 Hz^2, the effective gain factor for the L4S queue is k*alpha,
which is 20 Hz^2 with the default coupling factor of k=2.

Unlike in PIE , alpha_U and beta_U do not
need to be re-tuned every Tupdate dependent on p. Instead, in PI2,
alpha_U and beta_U are independent of p, because the squaring applied
to Classic traffic tunes them inherently. This is explained in , which also explains why this more principled approach
removes the need for most of the heuristics that had to be added to
PIE.

{ToDo: Scaling beta with Tupdate and scaling both alpha & beta
with RTT}

 repeats the
dequeue function of , but
with overload details added. Similarly repeats the core PI algorithm
of with overload details
added. The initialization, enqueue and L4S AQM functions are
unchanged.

In line 7 of the initialization function (), the default maximum
Classic drop probability p_Cmax = 1/4 or 25%. This is the point at
which it is deemed that the Classic queue has become persistently
overloaded, so it switches to using solely drop, even for ECN-capable
packets. This protects the queue against any unresponsive traffic that
falsely claims that it is responsive to ECN marking, as required by
and .

Line 22 of the initialization function translates this into a
maximum L4S marking probability (p_Lmax) by rearranging Equation (1).
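Since Equation (1) couples the probabilities as p_C = (p_CL/k)^2, the
rearrangement amounts to p_Lmax = min(k * sqrt(p_Cmax), 1). A quick,
non-normative check of the default values:

```python
import math

def p_Lmax(k, p_Cmax):
    """Maximum L4S marking probability implied by the Classic drop cap:
    rearranging p_C = (p_CL/k)^2 gives p_CL = k*sqrt(p_C), capped at 1
    (i.e. 100% marking)."""
    return min(k * math.sqrt(p_Cmax), 1.0)
```

With the defaults k=2 and p_Cmax=1/4, p_Lmax(2, 0.25) evaluates to
1.0, i.e. 100% marking; any k greater than 2 also saturates at 100%.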
With a coupling factor of k=2 (the default) or greater, this
translates to a maximum L4S marking probability of 1 (or 100%). This
is intended to ensure that the L4S queue starts to introduce dropping
once marking saturates and can rise no further. The 'TCP Prague'
requirements state that,
when an L4S congestion control detects a drop, it falls back to a
response that coexists with 'Classic' TCP. So it is correct that the
L4S queue drops packets proportional to p^2, as if they are Classic
packets.

Both these switch-overs are triggered by the tests for overload
introduced in lines 4b and 12b of the dequeue function (). Lines 8c to 8g drop L4S
packets with probability p^2. Lines 8h to 8i mark the remaining
packets with probability p_CL. If p_Lmax = 1, which is the suggested
default configuration, all remaining packets will be marked because,
to have reached the else block at line 8b, p_CL >= 1.

Lines 2c to 2d in the core PI algorithm () deal with overload of the L4S
queue when there is no Classic traffic. This is necessary, because the
core PI algorithm maintains the appropriate drop probability to
regulate overload, but it depends on the length of the Classic queue.
If there is no Classic queue the naive algorithm in drops nothing, even if the L4S
queue is overloaded - so tail drop would have to take over (lines 3
and 4 of ).

If the test at line 2a finds that the Classic queue is empty, line
2d measures the current queue delay using the L4S queue instead. While
the L4S queue is not overloaded, its delay will always be tiny
compared to the target Classic queue delay. So p_L will be driven to
zero, and the L4S queue will naturally be governed solely by threshold
marking (lines 5 and 6 of the dequeue algorithm in ). But, if unresponsive L4S
source(s) cause overload, the DualQ transitions smoothly to L4S
marking based on the PI algorithm. And as overload increases, it
naturally transitions from marking to dropping by the switch-over
mechanism already described.

The choice of scheduler technology is critical to overload
protection (see ). A well-understood weighted scheduler such as
weighted round robin (WRR) is recommended. The scheduler weight for
Classic should be low, e.g. 1/16.

Alternatively, a time-shifted FIFO could be used. This is a
very simple scheduler, but it does not fully isolate latency in
the L4S queue from uncontrolled bursts in the Classic queue. It
works by selecting the head packet that has waited the longest,
biased against the Classic traffic by a time-shift of tshift. To
implement time-shifted FIFO, the "if (scheduler() == lq )" test in
line 3 of the dequeue code would simply be replaced by "if (
lq.time() + tshift >= cq.time() )". For the public Internet a
good value for tshift is 50ms. For private networks with smaller
diameter, about 4*target would be reasonable.

A strict priority scheduler would be inappropriate, because it
would starve Classic if L4S were overloaded.

As another example of a DualQ Coupled AQM algorithm, the pseudocode
below gives the Curvy RED based algorithm we used and tested. Although
we designed the AQM to be efficient in integer arithmetic, to aid
understanding it is first given using real-number arithmetic. Then,
one possible optimization for integer arithmetic is given, also in
pseudocode. To aid comparison, the line numbers are kept in step
between the two by using letter suffixes where the longer code needs
extra lines.

Packet classification code is not shown, as it is no different from
. Potential classification
schemes are discussed in . The
Curvy RED algorithm has not been maintained to the same degree as the
DualPI2 algorithm. Some ideas used in DualPI2 would need to be
translated into Curvy RED, such as: i) the conditional priority
scheduler instead of strict priority; ii) the time-based L4S
threshold; iii) turning off ECN as overload protection; and iv)
Classic ECN support. These
are not shown in the Curvy RED pseudocode, but would need to be
implemented for production. {ToDo}

At the outer level, the structure of dualq_dequeue() implements
strict priority scheduling. The code is written assuming the AQM is
applied on dequeue (Note ). Every time dualq_dequeue() is called,
the if-block in lines 2-6 determines whether there is an L4S packet to
dequeue by calling lq.dequeue(pkt), and otherwise the while-block in
lines 7-13 determines whether there is a Classic packet to dequeue, by
calling cq.dequeue(pkt). (Note )

In the lower priority Classic queue, a while loop is used so that, if
the AQM determines that a Classic packet should be dropped, it
continues to test further Classic packets, deciding whether to drop
each, until it actually forwards one. Thus, every call to
dualq_dequeue() returns one packet if at least one is present in
either queue; otherwise it returns NULL at line 14. (Note )

Within each queue, the decision whether to drop or mark is taken as
follows (to simplify the explanation, it is assumed that U=1):

If the test at line 2 determines there is an L4S
packet to dequeue, the tests at lines 3a and 3c determine whether to
mark it. The first is a simple test of whether the L4S queue
(lq.byt() in bytes) is greater than a step threshold T in bytes
(Note ). The second
test is similar to the random ECN marking in RED, but with the
following differences: i) the marking function does not start with a
plateau of zero marking up to a minimum threshold; rather, the
marking probability starts to increase as soon as the queue is
positive; ii) marking depends on queuing time, not bytes, in order
to scale for any link rate without being reconfigured; iii) marking
of the L4S queue does not depend on itself, it depends on the
queuing time of the other (Classic)
queue, where cq.sec() is the queuing time of the packet at the head
of the Classic queue (zero if empty); iv) marking depends on the
instantaneous queuing time (of the other Classic queue), not a
smoothed average; v) the queue is compared with the maximum of U
random numbers (but if U=1, this is the same as the single random
number used in RED).

Specifically, in line 3a
the marking probability p_L is set to the Classic queuing time
cq.sec() in seconds divided by the L4S scaling parameter 2^S_L,
which represents the queuing time (in seconds) at which marking
probability would hit 100%. Then in line 3d (if U=1) the result is
compared with a uniformly distributed random number between 0 and 1,
which ensures that marking probability will linearly increase with
queueing time. The scaling parameter is expressed as a power of 2 so
that division can be implemented as a right bit-shift (>>) in
line 3 of the integer variant of the pseudocode ().

If the test at line 7 determines that there
is at least one Classic packet to dequeue, the test at line 9b
determines whether to drop it. But before that, line 8b updates Q_C,
which is an exponentially weighted moving average (Note ) of the queuing time
in the Classic queue, where pkt.sec() is the instantaneous queueing
time of the current Classic packet and alpha is the EWMA constant
for the classic queue. In line 8a, alpha is represented as an
integer power of 2, so that in line 8 of the integer code the
division needed to weight the moving average can be implemented by a
right bit-shift (>> f_C).

Lines 9a and
9b implement the drop function. In line 9a the averaged queuing time
Q_C is divided by the Classic scaling parameter 2^S_C, in the same
way that queuing time was scaled for L4S marking. This scaled
queuing time is given the variable name sqrt_p_C because it will be
squared to compute Classic drop probability, so before it is squared
it is effectively the square root of the drop probability. The
squaring is done by comparing it with the maximum out of two random
numbers (assuming U=1). Comparing it with the maximum out of two is
the same as the logical `AND' of two tests, which ensures drop
probability rises with the square of queuing time (Note ). Again, the
scaling parameter is expressed as a power of 2 so that division can
be implemented as a right bit-shift in line 9 of the integer
pseudocode.

The marking/dropping functions in each queue (lines 3 & 9) are
two cases of a new generalization of RED called Curvy RED, motivated as
follows. When we compared the performance of our AQM with fq_CoDel and
PIE, we came to the conclusion that their goal of holding queuing delay
to a fixed target is misguided. As the
number of flows increases, if the AQM does not allow TCP to increase
queuing delay, it has to introduce abnormally high levels of loss. Then
loss rather than queuing becomes the dominant cause of delay for short
flows, due to timeouts and tail losses.Curvy RED constrains delay with a softened target that allows some
increase in delay as load increases. This is achieved by increasing drop
probability on a convex curve relative to queue growth (the square curve
in the Classic queue, if U=1). Like RED, the curve hugs the zero axis
while the queue is shallow. Then, as load increases, it introduces a
growing barrier to higher delay. But, unlike RED, it requires only one
parameter, the scaling, not three. The disadvantage of Curvy RED is that it is not adapted to a wide range of RTTs. Curvy RED can be used as is when the RTT range to be supported is limited; otherwise an adaptation
mechanism is required.

There follows a summary listing of the two parameters used for each of the two queues:

S_C :  The scaling factor of the dropping function scales Classic queuing times in the range [0, 2^(S_C)] seconds into a dropping probability in the range [0,1]. To make division efficient, it is constrained to be an integer power of two;

f_C :  To smooth the queuing time of the Classic queue and make multiplication efficient, we use a negative integer power of two for the dimensionless EWMA constant, which we define as alpha = 2^(-f_C).

S_L :  As for the Classic queue, the scaling factor of the L4S marking function scales L4S queueing times in the range [0, 2^(S_L)] seconds into a probability in the range [0,1]. Note that S_L = S_C + k', where k' is the coupling between the queues. So S_L and k' count as only one parameter; k' is related to k in Equation (1) () by k=2^k', where both k and k' are constants. Then implementations can avoid costly division by shifting p_L by k' bits to the right.

T :  The queue size in bytes at which step threshold marking starts in the L4S queue.

{ToDo: These are the raw parameters used within the algorithm.
A configuration front-end could accept more meaningful parameters and
convert them into these raw parameters.}From our experiments so far, recommended values for these parameters
are: S_C = -1; f_C = 5; T = 5 * MTU for the range of base RTTs typical
on the public Internet. explains why
these parameters are applicable whatever the link rate at which this AQM implementation is deployed, and how the parameters would need to be
adjusted for a scenario with a different range of RTTs (e.g. a data
centre) {ToDo incorporate a summary of that report into this draft}. The
setting of k depends on policy (see and
respectively for its recommended
setting and guidance on alternatives).There is also a cUrviness parameter, U, which is a small positive
integer. It is likely to take the same hard-coded value for all
implementations, once experiments have determined a good value. We have
solely used U=1 in our experiments so far, but results might be even
better with U=2 or higher.Note that the dropping function at line 9 calls maxrand(2*U), which
gives twice as much curviness as the call to maxrand(U) in the marking
function at line 3. This is the trick that implements the square rule in
equation (1) (). This is based on the fact
that, given a number X from 1 to 6, the probability that two dice throws
will both be less than X is the square of the probability that one throw
will be less than X. So, when U=1, the L4S marking function is linear
and the Classic dropping function is squared. If U=2, L4S would be a
square function and Classic would be quartic. And so on.The maxrand(u) function in lines 16-21 simply generates u random
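The dice argument above can be checked numerically: the probability that the maximum of u independent uniform draws stays below x is x^u, so comparing against the maximum of two draws squares the curve. The following is a small illustrative simulation, not part of the draft:

```python
import random

random.seed(1)

def p_below(x, u, trials=100_000):
    """Estimate P(max of u uniform [0,1) draws < x) by simulation."""
    hits = sum(max(random.random() for _ in range(u)) < x for _ in range(trials))
    return hits / trials

# For x = 0.3: a single draw gives ~0.3 (linear, like L4S marking with U=1);
# the max of two draws gives ~0.09 = 0.3^2 (squared, like Classic dropping).
p1 = p_below(0.3, 1)
p2 = p_below(0.3, 2)
```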
numbers and returns the maximum (Note ). Typically, maxrand(u) could be
run in parallel out of band. For instance, if U=1, the Classic queue
would require the maximum of two random numbers. So, instead of calling
maxrand(2*U) in-band, the maximum of every pair of values from a
pseudorandom number generator could be generated out-of-band, and held
in a buffer ready for the Classic queue to consume.Notes:The drain rate of the queue can vary
if it is scheduled relative to other queues, or to cater for
fluctuations in a wireless medium. To auto-adjust to changes in
drain rate, the queue must be measured in time, not bytes or packets
. In our Linux implementation, it was easiest
to measure queuing time at dequeue. Queuing time can be estimated
when a packet is enqueued by measuring the queue length in bytes and
dividing by the recent drain rate.An implementation has to use
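The enqueue-time estimate mentioned in this note can be sketched as follows (illustrative names; the drain rate would come from a recent measurement):

```python
def queuing_time_estimate(queue_len_bytes, drain_rate_bytes_per_sec):
    """Estimate the queuing time a packet will experience at enqueue: the
    backlog in bytes divided by the recently measured drain rate. Measuring
    the queue in units of time auto-adjusts to changes in the drain rate."""
    return queue_len_bytes / drain_rate_bytes_per_sec
```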
priority queueing, but it need not implement strict priority.If packets can be enqueued while
processing dequeue code, an implementer might prefer to place the
while loop around both queues so that it goes back to test again
whether any L4S packets arrived while it was dropping a Classic
packet.In order not to change too many factors
at once, for now, we keep the marking function for DCTCP-only
traffic as similar as possible to DCTCP. However, unlike DCTCP, all
processing is at dequeue, so we determine whether to mark a packet
at the head of the queue by the byte-length of the queue behind it. We plan to test whether using
queuing time will work in all circumstances, and if we find that the
step can cause oscillations, we will investigate replacing it with a
steep random marking curve.An EWMA is only one possible way to
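The byte-based step marking described in this note can be sketched as follows (an illustrative fragment; T = 5 * MTU follows the recommendation earlier in this section):

```python
MTU = 1500            # link MTU in bytes (illustrative)
T = 5 * MTU           # step threshold in bytes

def mark_l4s_step(queue_backlog_bytes):
    """DCTCP-style step marking at dequeue: mark the packet at the head of
    the L4S queue if the byte-length of the queue behind it reaches T."""
    return queue_backlog_bytes >= T
```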
filter bursts; other more adaptive smoothing methods could be valid
and it might be appropriate to decrease the EWMA faster than it
increases.In practice at line 10 the
Classic queue would probably test for ECN capability on the packet
to determine whether to drop or mark the packet. However, for
brevity such detail is omitted. All packets classified into the L4S
queue have to be ECN-capable, so no dropping logic is necessary at
line 3. Nonetheless, L4S packets could be dropped by overload code
(see ).In the integer variant of the
pseudocode () real numbers are
all represented as integers scaled up by 2^32. In lines 3 & 9
the function maxrand() is arranged to return an integer in the range
0 <= maxrand() < 2^32. Queuing times are also scaled up by
2^32, but in two stages: i) In lines 3 and 8 queuing times cq.ns()
and pkt.ns() are returned in integer nanoseconds, making the values
about 2^30 times larger than when the units were seconds, ii) then
in lines 3 and 9 an adjustment of -2 to the right bit-shift
multiplies the result by 2^2, to complete the scaling by 2^32.

     RTT_C / RTT_L |  Reno  | Cubic
     --------------+--------+-------
           1       |  k'=1  | k'=0
           2       |  k'=2  | k'=1
           3       |  k'=2  | k'=2
           4       |  k'=3  | k'=2
           5       |  k'=3  | k'=3

k' is related to k in Equation (1) () by k=2^k'.

To determine the appropriate policy, the operator first has to judge
whether it wants DCTCP flows to have roughly equal throughput with Reno
or with Cubic (because, even in its Reno-compatibility mode, Cubic is
about 1.4 times more aggressive than Reno). Then the operator needs to
decide at what ratio of RTTs it wants DCTCP and Classic flows to have
roughly equal throughput. For example, choosing k'=0 (equivalent to k=1)
will make DCTCP throughput roughly the same as Cubic, if their RTTs are the same.However, even if the base RTTs are the same, the actual RTTs are
unlikely to be the same, because Classic (Cubic or Reno) traffic needs a
large queue to avoid under-utilization and excess drop, whereas L4S
(DCTCP) does not. The operator might still choose this policy if it
judges that DCTCP throughput should be rewarded for keeping its own
queue short.On the other hand, the operator will choose one of the higher values
for k', if it wants to slow DCTCP down to roughly the same throughput as
Classic flows, to compensate for Classic flows slowing themselves down
by causing themselves extra queuing delay.

The values for k' in the table are derived from the formula that was developed in :

For localized traffic from a particular ISP's data centre, we used the measured RTTs to calculate that a value of k'=3 (equivalent to k=8)
would achieve throughput equivalence, and our experiments verified the
formula very closely.For a typical mix of RTTs from local data centres and across the
general Internet, a value of k'=1 (equivalent to k=2) is recommended as
a good workable compromise.

Most of the following open issues are also tagged '{ToDo}' at the appropriate point in the document:

   o  PI2 appendix: scaling of alpha & beta, esp. dependence of beta_U on Tupdate

   o  Curvy RED appendix: complete the unfinished parts