RFC Evaluation Project - First Step

Private Octopus Inc.
Golfcourse Rd
Friday Harbor, WA 98250
U.S.A.
huitema@huitema.net

General
Internet-Draft

This document presents a first attempt at evaluating the production of the IETF.
We analyze a set of randomly chosen RFC approved in 2018, looking for history
and delays, and using Google Scholar as a proxy for RFC popularity. The
results are interesting, and inform further evaluation efforts.

As stated on the organization's web site, "The IETF is a large open international
community of network designers, operators, vendors, and researchers concerned with
the evolution of the Internet architecture and the smooth operation of the Internet."
In this memo, we start exploring how the IETF could possibly be evaluated, and we
do so by attempting to evaluate the RFC production process.

The IETF data tracker provides information about RFC and drafts, from which we can
infer statistics about the production system. We can measure how
long it takes to drive a proposal from initial draft to final publication,
and how these delays can be split between Working Group discussions, IETF reviews,
IESG assessment, RFC Editor delays and final reviews by the authors.

Just measuring production delays may be misleading. The IETF
produces standard proposals and informative memos that get published in the RFC
series, but that is not its only purpose. Two other purposes would be the
organization of fruitful discussions between members of the technical community,
and the filtering of ill-thought proposals so they are not endorsed in
IETF publications.

To try to assess these other purposes, we have to find ways to measure the impact
of the documents that were published. If the communication between participants
was efficient, the documents should be easily accepted by the community. If bad
ideas were filtered, the specifications should lead to deployed products and
services with good feedback.

A potential failure mode of the IETF is to do too much filtering. After all,
classifying a proposal as ill-thought or potentially harmful is subjective.
A comprehensive analysis would look at the fate of proposals that were not
accepted by the IETF but got developed in other ways and became eventually
successful. We will not try to assess that in this memo, but further
studies might consider the issue.

In this exploration, we want to evaluate not just the mechanics of RFC
production, but also the quality and impact of the results. This evaluation of
quality and impact is subjective. We start with two ideas:

1- Use Google Scholar to assess the citation counts of published documents.

2- Ask the RFC authors whether the specifications resulted in the
deployment of products or services.

When accessing Google Scholar, we search for quoted strings of the
form "RFC xxxx". This is an arbitrary choice, we could for example
have chosen to search for "RFCxxxx" or a combination of the two forms.
We retained the simpler alternative, because we don't believe that
picking one or the other would introduce a significant bias.

Basic production mechanisms could be evaluated by processing data from
the IETF tracker, but subjective data requires manual assessment of results,
which can be time consuming. Google Scholar also requires manual access
because the site does not offer an open API.
Since our resources are limited, we will only
perform this analysis for a small sample of RFC, selected at random
from the list of RFC approved in 2018. Specifically, we will pick
20 RFC at random between:

RFC 8307, published in January 2018, and

RFC 8511, published in December 2018.

In order to avoid injecting personal bias in the random selection, we use
a random selection process similar to the Nomination Committee selection
process defined in . The process is seeded with the text string
"vanitas vanitatum et omnia vanitas", and the results are:

When evaluating delays and impact, we will compare the year 2018 to 2008 and
1998, 10 and 20 years ago. To drive this comparison, we pick 20 RFC at random
among those published in 2008, and another 20 among those published in 1998.
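The seeded draw can be illustrated with a short script. This is a simplified sketch in the spirit of the nomcom selection process, not the exact published algorithm (which combines the seed with other public inputs in a different MD5 construction); the function name and hash construction here are ours.

```python
import hashlib

def pick_rfcs(seed: str, low: int, high: int, count: int):
    """Verifiably draw `count` distinct RFC numbers in [low, high].

    Each pick hashes the public seed plus a counter; the digest,
    taken as a large integer, selects one entry from the remaining pool.
    """
    pool = list(range(low, high + 1))
    picks = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}/{i}".encode()).hexdigest()
        picks.append(pool.pop(int(digest, 16) % len(pool)))
    return picks

# Anyone re-running the draw with the published seed gets the same sample.
sample_2018 = pick_rfcs("vanitas vanitatum et omnia vanitas", 8307, 8511, 20)
```

Because the seed string is published in advance, readers can re-run the draw and verify that the sample was not hand-picked.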
We use the same nomcom-like methodology.

For 2008, we pick random RFC numbers between RFC 5134
(January 2008) and RFC 5405 (December 2008), using the sentence "sed fugit interea
fugit irreparabile tempus" as a seed. We actually list here 21 numbers, because
the random draw placed RFC 5315 in 20th position, but that RFC was never issued.
We replace it with RFC 5301, which came in 21st position.

For 1998, we pick random RFC numbers between RFC 2257
(January 1998) and RFC 2479 (December 1998), using the sentence
"pulvis et umbra sumus" as a seed.

We review each of the RFC listed in (#methodology) for the year 2018, trying
both to answer the known questions and to gather insight for further analyses.
In many cases, the analysis of the data is complemented by direct feedback
from the RFC authors.

IANA Registration for the Cryptographic Algorithm Object Identifier Range :

The draft underwent minor copy edits before publication.

The long delay in Auth48 is probably due to clustering with , which entered
AUTH48 on 06/05. The MISSREF tracker code was cleared then.

2 references on Google Scholar.

Benchmarking Methodology for Software-Defined Networking (SDN)
Controller Performance :

The draft underwent very extensive copy editing, covering use of articles, turns of phrase, and choice
of vocabulary. The changes are enough to cause pagination differences. The "diff" tool marks pretty
much every page as changed. Some diagrams show changes in protocol elements such as message names.

According to the author, the experience of producing this draft mirrors a typical one in the
Benchmarking Methodologies Working Group (BMWG). There were multiple authors in multiple time
zones, which slowed down the AUTH48 process somewhat, although the Auth48 delay of 46 days is only
a bit longer than that of the average draft. The RFC was part of a cluster with .

Google Scholar shows 3 references. BMWG publishes informational RFCs centered around benchmarking,
and the methodologies in RFC 8456 have been implemented in benchmarking products.

The Transport Layer Security (TLS) Protocol Version 1.3 , as the title
indicates, defines the new version of the TLS protocol. From the datatracker, we extract
the following:

The RFC was a major effort in the IETF. Working group members developed and tested
several implementations. Researchers analyzed the specifications and performed
formal verification. Deployment tests outlined issues that caused extra work
when the specification was almost ready. This complexity largely explains the
time spent in the working group.

Comparing the final draft to the published version, we find relatively light copy
editing. It includes explaining acronyms on first use, clarifying some definitions,
standardizing punctuation and capitalization, and spelling out some numbers in text.
This generally falls in the category of "style", although some of the clarifications
go into message definitions. However, that simple analysis does not explain why
the Auth48 phase took almost two months.

According to the author, the main reason was a requirement to have author and AD
review each proposed change, down to any single comma. The concern there was that
any change in a complex specification might affect a protocol that was extensively
reviewed in the working group. The author asked the RFC editors to back out
a number of edits that he considered spurious. This took a long time, partly because
it was very hard to distinguish the edits that were editorial from those that were
merely typesetting.

The RFC has 123 references in Google Scholar. There are 21 implementations listed
in the Wiki of the TLS 1.3 project. It has been deployed in major browsers.

Resiliency Use Cases in Source Packet Routing in Networking (SPRING) Networks is an informational RFC.
It originated from a use case informational draft that was mostly used for the BOF creating the WG, and then to drive initial work and evolutions in the WG.

According to the author, the delays reflect the relative lack of priority of this work in the WG. One of
the authors actually mentions "blockage by a minority of politically-savvy persons." Once the
WG was created, other work was higher priority than working on the use case.

Minor set of copy edits, mostly for style.

9 references on Google Scholar. No implementation of the RFC itself, but the technology behind it, such as
Segment Routing (architecture RFC 8402, TI-LFA draft-ietf-rtgwg-segment-routing-ti-lfa), is widely implemented
and deployment is ongoing. Even if this specific RFC incurred significant delay, the authors think
that the technology was implemented and deployed rather quickly.

Bootstrapping WebSockets with HTTP/2

This RFC defines the support of WebSockets in HTTP/2, which is different
from the mechanism defined for HTTP/1.1 in . The process was
relatively straightforward, involving the usual type of discussions, some
on details and some on important points.

Comparing the final draft and the published RFC shows a minor set of copy edits,
mostly for style. However, the author recalls a painful process. The RFC
includes many charts and graphs that were very difficult to format
correctly in the author's production process, which involves conversions
from markdown to XML, and then from XML to text. The author had to
get substantial help from the RFC editor.

No references on Google Scholar. (RFC 6455 had over 1000 results.)

There are several implementations, including Firefox and Chrome,
making RFC 8441 a very successful standard.

DNS Privacy, Authorization, Special Uses, Encoding, Characters, Matching, and Root Structure:
Time for Another Look? . This is an opinion piece on DNS development,
published on the Independent Stream.

This RFC took only 9 months from first draft to publication, which is the shortest in
the 2018 sample set. In part, this is because the text was privately circulated
and reviewed before the first draft was published. The nature of the document is
another reason for the short delay. It is an opinion piece, and does not require
the same type of consensus building and reviews as a protocol specification.

Comparing the final draft and the published version shows only minor copy edits, mostly
for style. According to the author, this is because he knows how to write in RFC
style, with the result that his documents often need a minimum of editing. He also
makes sure that the document on which the
Production Center starts working already has the changes discussed
and approved during Last Call and IESG review incorporated,
rather than expecting the Production Center to operate off of
notes about changes to be made.

2 references on Google Scholar.

Transparent Interconnection of Lots of Links (TRILL): Multi-Topology

Minor set of copy edits, mostly for style, also for clarity.

1 reference on Google Scholar.

A P-Served-User Header Field Parameter for an Originating Call Diversion (CDIV)
Session Case in the Session Initiation Protocol (SIP) .

Copy edits for style, but also clarification of ambiguous sentences.

No references on Google Scholar.

Storing Validation Parameters in PKCS#8

The goal of the draft was to document what the
gnutls implementation was using for storing provably generated RSA keys.
This is a short RFC that was published relatively quickly, although
discussion between the author, the Independent Series Editor and the
IESG lasted several months. The IESG actually asked the ISE
to not publish this document, because "it extends an IETF protocol in
a way that requires IETF review and should therefore not be published
without IETF review and IESG approval." The ISE overruled that advice.

Very minor set of copy edits, moving some references from normative to informative.

No reference on Google Scholar. The author is not aware of implementations other than gnutls relying on this RFC.

Framework for Abstraction and Control of TE Networks (ACTN)

Minor copy editing.

8 references on Google Scholar.

Deprecate Triple-DES (3DES) and RC4 in Kerberos

This RFC recommends deprecating two encryption algorithms that are now considered
obsolete and possibly broken. The document was sent back to the WG after the first last call,
edited, and then there was a second last call. The delay from first draft to working group
last call was relatively short, but the number may be misleading. The initial draft was a
replacement of a similar draft in the KITTEN working group, which stagnated for some time
before the CURDLE working group took up the work.
The deprecation of RC4 was somewhat contentious, but the WG had already debated this
prior to the production of this draft, and the draft was not delayed by this debate.

Most of the 280 days between IETF LC and IESG approval were spent
because the IESG had to discuss whether this document should obsolete
RFC 4757 or move it to Historic, and no one was really actively pushing that
discussion for a while.

The 99 days in AUTH48 are mostly because one of the authors was a sitting AD, and those
duties ended up taking precedence over reviewing this document.

Minor copy editing, for style.

1 reference on Google Scholar.

The implementation of the draft would be the actual removal of support for 3DES and RC4
in major implementations. This is happening, but very slowly.

CUBIC for Fast Long-Distance Networks

Minor copy editing, for style.

9 references on Google Scholar.

The TCP congestion control algorithm Cubic was first defined in 2005, was implemented
in Linux soon after, and was implemented in major OSes after that. After some debates
from 2015 to 2015, the TCPM working group adopted the draft, with a goal of
documenting Cubic in the RFc series. According to the authors, this was not
a high priority effort, as Cubic was already implemented in multiple OSes
and documented in research papers. At some point, only one of the authors
was actively working on the draft. Ths may explain why another two years was spent
progressing the draft after adoption by the WG.The RFC publication may or may not have triggered further implementations. On
the other hand, several OSes picked up bug fixes from the draft and the RFC.Secure Password Ciphersuites for Transport Layer Security (TLS) This RFC has a complex history. The first individual draft was submitted to the
TLS working group on September 7, 2012. It progressed there, and was adopted
by the WG after 3 revisions. There were then 8 revisions in the TLS WG,
until the WG decided to not progress it. The draft was parked in 2013 by
the WG chairs after failing to get consensus in WG last call. The AD finally
pulled the plug in 2016, and the draft was then resubmitted to the ISE.

At that point, the author was busy and was treating this RFC with a
low priority because, in his words, it would not be a "real RFC".
There were problems with the draft that only came up late. It was written
back before TLS1.3 existed and the author had to add TLS1.3 support onto it
late in the process. He also got a very late comment while in AUTH48 that
caused some rewrite. Finally, there was some IANA issue with the extension
registry where a similar extension was added by someone else. The draft
was changed to just use it.

Changes in AUTH48 include an added reference to TLS 1.3, copy-editing for style,
some added requirements, added paragraphs, and changes in the algorithm specifications.

2 references on Google Scholar. Only the author implemented the specification.

Signal-Free Locator/ID Separation Protocol (LISP) Multicast is
an experimental RFC, defining how to implement Multicast in the LISP
architecture.Preparing the RFC took more than 4 years. According to the authors, they were
not aggressive in pushing it and just let the working group process set the pace.
They also worked on implementations during that time.

Minor copy editing, for style.

1 reference on Google Scholar. The RFC was implemented by lispers.net and cisco,
and was used for IPv6 multicast over IPv4 unicast/multicast at the Olympics
for NBC in PyeongChang. The plan is to work on a proposed standard once the
experiment concludes.

Transparent Interconnection of Lots of Links (TRILL):
Centralized Replication for Active-Active Broadcast,
Unknown Unicast, and Multicast (BUM) Traffic

According to the authors, the long delays in producing this RFC were
due to a slow uptake of the technology in the industry.

Minor copy editing, for style.

1 reference on Google Scholar. There was at least 1 partial implementation.

Transport Layer Security (TLS) Extension for Token Binding Protocol Negotiation

This is a pretty simple document, but it took over 3 years from individual draft to RFC. According to
the authors, the biggest setbacks occurred at the start: it took a while to find a home for this draft.
It was presented in the TLS WG (because it's a TLS extension) and UTA WG (because it has to do with
applications using TLS). Then the ADs determined that a new WG was needed, so the authors had to work
through the WG creation process, including running a BOF.

Minor copy editing, for style, with the addition of a reference to TLS 1.3.

5 references on Google Scholar. Perhaps partially due to the delays, some of the implementers lost interest in supporting this RFC.

A YANG Data Model for Layer 2 Virtual Private Network (L2VPN) Service Delivery

Copy editing for style and clarity, along with corrections to the YANG model.

2 references on Google Scholar.

OSPFv3 Link State Advertisement (LSA) Extensibility is a major extension to the
OSPF protocol. It makes OSPFv3 fully extensible.

The specification was first submitted as a personal draft in the IPv6 WG, then moved to the OSPF WG.
The long delay in producing this RFC is due to the complexity of the problem,
and the need to wait for implementations. Since it was a non-backward-compatible change,
the developers started out with some very complex migration scenarios, but ended up
with either legacy or extended OSPFv3 LSAs within an OSPFv3 routing domain. The initial attempts
to have a hybrid mode of operation with both legacy and extended LSAs also delayed implementation
due to the complexity.

Copy editing for style and clarity.

7 references on Google Scholar. It either was or will be implemented by all the router vendors.

IPv4, IPv6, and IPv4-IPv6 Coexistence: Updates for the IP
Performance Metrics (IPPM) Framework .

RFC 8468 was somewhat special in that
there was not a technical reason/interest that triggered it, but
rather a formal requirement.
While writing RFC 7312, the IP Performance
Metrics working group (IPPM) realized that RFC 2330, the IP Performance
Metrics Framework, supported IPv4 only
and explicitly excluded support for IPv6. Nevertheless, people used
the metrics that were defined on top of RFC 2330 (and, therefore, IPv4
only) for IPv6, too. Although the IPPM WG agreed that the work was needed, the
interest of IPPM attendees in progressing (and reading/reviewing) the
IPv6 draft was limited. Resolving the IPv6 technical part was
straightforward, but subsequently some people asked for a broader scope
(topics like header compression, 6lo, etc.) and it took some time to
figure out and later on convince people that these topics are out of scope.
The group also had to resolve contentious topics, for example how to
measure the processing of IPv6 extension headers, which is sometimes non-standard.

The Auth48 delay for this draft was longer than average. According to the authors,
the main reasons include:

- Workload and travel caused by busy work periods of all co-authors;

- Time zone differences between co-authors and editor (at least US,
Europe, India, not considering travel);

- The editor proposing and committing some unacceptable modifications that
needed to be reverted;

- Lengthy discussions on a new document title (these required high effort and
took a long time; in particular, reaching consensus between co-authors
and editor was time-consuming and involved the AD);

- The editor correctly identifying some nits (obsolete personal websites of
co-authors) and the co-authors attempting to fix them.

The differences between the final draft and the published RFC show copy editing for style
and clarity, but do not account for the back and forth between authors and editors
mentioned by the authors.

2 references on Google Scholar. In contrast, RFC 2330 has more than 3000 references,
including over 70 that are more recent than RFC 8468.
The authors believe that many of these references include IPv6 work that should formally
reference RFC 8468 in addition to RFC 2330, but do not.

We examine the 20 RFC in the sample, measuring various characteristics such
as delay and citation counts, in an attempt to identify patterns in the
IETF processes.

We look at the distribution of delays between the submission of the first
draft and the publication of the RFC. We break the total delay into three
components:

- The working group delay, from the first draft to the start of the IETF
last call;

- The IETF delay, which lasts from the beginning of the IETF last call to
the approval by the IESG, including the reviews by
various directorates;

- The RFC production delay, from approval by the IESG to publication, including
the Auth48 reviews.

For submissions to the independent stream, we don't have a working group.
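In terms of datatracker milestone dates, these three components are simple date differences. The sketch below uses illustrative dates and our own function name; it is not the datatracker API.

```python
from datetime import date

def delay_split(first_draft: date, ietf_lc: date,
                iesg_approval: date, published: date):
    """Split the total first-draft-to-RFC delay (in days) into the
    working group, IETF review, and RFC production components."""
    wg = (ietf_lc - first_draft).days        # working group delay
    ietf = (iesg_approval - ietf_lc).days    # IETF last call + IESG review
    edit = (published - iesg_approval).days  # RFC production, incl. Auth48
    return wg, ietf, edit

# Illustrative milestones for a hypothetical draft.
wg, ietf, edit = delay_split(date(2015, 3, 2), date(2017, 6, 1),
                             date(2017, 9, 15), date(2018, 2, 1))
total = wg + ietf + edit  # equals the overall publication delay
```

The three components sum to the overall delay by construction, which is the consistency check used in the table below.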
We consider instead the progression of the individual draft until the
adoption by the ISE as the equivalent of the "working group" period,
and the delay from adoption by the ISE until submission to the RFC Editor
as the equivalent of the IETF delay. The following table
shows the delays for the 20 RFC in the sample:

RFC       Status   Pages   Overall   WG     IETF   Edit
8411      Info     5       455       154    140    161
8456      Info     64      1107      823    126    158
8446      PS       160     1576      1400   34     142
8355      Info     13      1517      1175   243    99
8441      PS       8       341       204    31     106
8324      ISE      29      270       38     161    71
8377      PS       8       1792      1630   21     141
8498      Info     15      1061      935    59     67
8479      ISE      8       414       233    144    37
8453      Info     42      1162      1036   46     80
8429      BCP      10      548       76     313    159
8312      Info     18      1255      1113   16     126
8492      ISE      40      2358      1706   172    480
8378      Exp      21      1524      1446   27     51
8361      PS       17      1612      1477   62     73
8472      PS       8       1228      899    249    80
8466      PS       158     771       538    124    109
8362      PS       33      1871      1766   41     64
8468      Info     15      1196      979    90     127
average            35      1161      928    110    123

The average delay from first draft to publication is about 3 years, but this
varies widely. Excluding the independent stream submissions, the average
delay from start to finish is 3 years and 3 months, of which on average
2 years and 8 months are spent getting consensus in the working group.

The longest delay is found for , 6.5 years from start to finish.
This is however a very special case, a draft that was prepared for
the TLS working group and failed to reach consensus. After that, it was
resubmitted to the ISE, and incurred atypical production delays.

On average, we see that 80% of the delay is incurred in WG processing,
10% in IETF review, and 10% for editing and publication.
We can compare these delays to those observed 10 years ago and 20 years
ago:

RFC (2008)   Status   Pages   Delay
5326         Exp      54      1584
5348         PS       58      823
5281         Info     51      1308
5354         Exp      23      2315
5227         PS       21      2434
5329         PS       12      1980
5277         PS       35      912
5236         ISE      26      1947
5358         BCP      7       884
5271         Info     22      1066
5195         PS       10      974
5283         PS       12      1096
5186         Info     6       2253
5142         PS       13      1005
5373         PS       24      1249
5404         PS       27      214
5172         PS       7       305
5349         Info     10      1096
5301         PS       6       396
5174         Info     8       427

RFC (1998)   Status   Pages   Delay
2289         PS       25      396
2267         Info     10      unknown
2317         BCP      10      485
2404         PS       7       488
2374         PS       12      289
2449         PS       19      273
2283         PS       9       153
2394         Info     6       365
2348         DS       5       699
2382         Info     30      396
2297         ISE      10      928
2381         PS       43      699
2312         Info     20      365
2387         PS       10      122
2398         Info     15      396
2391         PS       10      122
2431         PS       10      457
2282         Info     14      215
2323         ISE      5       unknown
2448         ISE      7       92

We can compare the median delay, and the delays observed by the fastest and
slowest quartiles in the three years:

Year   Fastest 25%   Median   Slowest 25%
2018   604           1179     1522
2008   869           1081     1675
1998   169           365      442

The IETF takes three to four times longer to produce an RFC in 2018
than it did in 1998, but about the same time as it did in 2008.

The increased delay does not mean increased work per RFC.
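The medians and quartile boundaries in the comparison above can be recomputed from the delay columns with a few lines of Python. This is a sketch; quartile conventions differ between tools, so the cut points may differ slightly from the numbers in the table.

```python
import statistics

def delay_quartiles(delays):
    """Return (fastest-quartile cut, median, slowest-quartile cut) in days."""
    q1, median, q3 = statistics.quantiles(delays, n=4)
    return q1, median, q3

# Delays in days for the 1998 sample (from the table above, 'unknown' omitted).
delays_1998 = [396, 485, 488, 289, 273, 153, 365, 699, 396, 928,
               699, 365, 122, 396, 122, 457, 215, 92]
q1, median, q3 = delay_quartiles(delays_1998)
```

The same function applied to the 2008 and 2018 delay columns reproduces the rest of the comparison.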
The number of RFC published per year remained between 200 and
300 all those years, and the number of participants is not greater now
than in 1998. If we estimated the "level of attention" by dividing the
number of participants by the number of RFC produced, we would see
a number that remains stable. People are probably not working much
more on each RFC now than they were 20 years ago, but the same amount
of work is stretched over a much longer period.

Part of the exercise is to test whether citation counts provide a useful
measure of the popularity of the IETF production. These citation counts
vary widely:

RFC    Status   Scholar
8411   Info     2
8456   Info     3
8446   PS       123
8355   Info     9
8441   PS       0
8324   ISE      2
8377   PS       1
8498   Info     0
8479   ISE      0
8453   Info     8
8429   BCP      1
8312   Info     9
8492   ISE      2
8378   Exp      1
8361   PS       1
8472   PS       5
8466   PS       2
8362   PS       7
8468   Info     2

The results indicate that  is by far the most popular of the 20
RFC in our sample. This is not surprising, since TLS is a key Internet Protocol.
Of the other publications, only 4 have 5 to 9 citations, and the others have
three or fewer.

In order to get a baseline, we can look at the number of references for the
RFC published in 2008 and 1998. However, we need to take time into account.
Documents published a long time ago are expected to have accrued more references.
We try to address this by looking at three counts for each document: the
overall number of references over the document's lifetime, the number of
references obtained in the year following publication, and the number of
references observed since 2018:

RFC (2008)   Overall   08 to 09   >2018
5326         234       27         26
5348         364       34         13
5281         182       27         12
5354         30        15         7
5227         81        9          7
5329         57        11         5
5277         85        10         5
5236         48        8          4
5358         45        6          3
5271         16        3          3
5195         21        12         1
5283         26        5          1
5186         13        4          1
5142         42        14         0
5373         16        4          0
5404         12        3          0
5172         11        2          0
5349         11        1          0
5301         11        3          0
5174         1         1          0

RFC (1998)   Overall   98 to 99   >2018
2289         369       13         14
2267         607       13         7
2317         81        9          6
2404         446       20         5
2374         280       18         4
2449         54        6          3
2283         240       25         1
2394         69        4          1
2348         27        2          1
2382         89        30         0
2297         682       10         2
2381         86        20         0
2312         115       18         0
2387         29        28         0
2398         72        7          0
2391         110       5          0
2431         25        4          0
2282         12        3          0
2323         12        1          0
2448         5         1          0
for the least and most popular quartiles in the three years:ReferencesLower 25%MedianHigher 25%RFC (2018)126RFC (2008)122657RFC (2008), until 20093612RFC (2008), 2018 and after015RFC (1998)4784250RFC (1998), until 19994919RFC (1998), 2018 and after003The total numbers shows a decreasing number of references over time.
This can be explained to some degree by the passage of time. If we
restrict the analysis to the number of references accrued in the year of
publishing and the year after that, we still see decreasing numbers. For example,
the top quartile of RFC published in 1998 had at least 19 references by the
end of 1999, while the top quartile of RFC published in 2008 only had at
least 12 references, which is twice the corresponding number for the RFC
of 2018.

We also see that the number of references to RFC fades over time. Only the
most popular of the RFC produced in 1998 are still referenced in 2019. The
overall popularity of the RFC series benefits from a history of publishing
relevant documents, but over time the references to historic
documents will decrease and the overall popularity will depend on more
recent documents.

We need however to be a bit cautious before asserting that publications with
a low citation count have limited impact:

- some documents may well accumulate more citations over time. For example,
 updates . There are more than 1000 citations of 
on Google Scholar. We might expect that the citation count
of  will increase in the coming years.

- citation counts largely come from academic publications, and thus reflect
popularity among researchers more than popularity with network operators
and vendors.

We should be able to assess the popularity of specifications with vendors,
operators and designers by asking questions about deployed services and products.

The preparation and publication delays include three components:

- the delay from submission to the RFC Editor to the beginning of Auth48, during
which the document is prepared;

- the AUTH48 delay, during which authors review and eventually approve the
changes proposed by the editors;

- the publication delay, from final agreement by authors and editors to
actual publication.

The breakdown of the publication delays for each RFC is shown in the
following table.

RFC              Status   Pages   RFC edit   Auth 48   RFC Pub   Edit (total)
8411             Info     5       53         88        20        161
8456             Info     64      98         46        14        158
8446             PS       160     85         57        0         142
8355             Info     13      83         15        1         99
8441             PS       8       67         33        6         106
8324             ISE      29      42         28        1         71
8377             PS       8       39         102       0         141
8498             Info     15      49         16        2         67
8479             ISE      8       31         5         1         37
8453             Info     42      73         7         0         80
8429             BCP      10      60         99        0         159
8312             Info     18      96         30        0         126
8492             ISE      40      355        123       2         480
8378             Exp      21      42         9         0         51
8361             PS       17      39         31        3         73
8472             PS       8       59         8         13        80
8466             PS       158     84         22        3         109
8362             PS       33      49         11        4         64
8468             Info     15      65         53        9         127
Average                           77.3       41.2      4.2       122.7
Average - 8492                    62         37        4         103

On average, the total delay appears to be a bit more than four months, but the
average is skewed by the extreme values encountered for . If we
exclude that RFC from the computations, the average delay drops to just a bit
more than 3 months: about 2 months for the preparation, a bit more than one
month for the Auth48 phase, and 4 days for the publishing.

Of course, these delays vary from RFC to RFC. To try to explain the causes of the
delay, we compute the correlation factor between the observed delays and three
plausible explanation factors:

- The number of pages in the document,

- The amount of copy editing, as discussed in (#copy-editing),

- Whether or not an IANA action was required.

We find the following values:

Correlation   RFC edit   Auth 48   Edit (total)
Nb pages      0.50       -0.04     0.21
Copy-Edit     0.42       0.24      0.45
IANA          -0.13      0.26      0.15

None of these indicate strong correlations. The greater number of pages will
tend to increase the preparation delay, but it does not appear to impact the
Auth48 delay at all. The amount of copy editing also tends to increase
the preparation delay somewhat and the Auth48 delay a little. The presence or
absence of an IANA action has very little correlation with the delays.

We also observe that the Auth48 delay varies much more than the preparation
delay, with a standard deviation of 20 days for Auth48 versus 10 days for
the preparation delay. In theory, Auth48 is just a final
verification: the authors receive the document prepared by the RFC production center,
and just have to give their approval, or maybe request a last minute
correction. The name indicates that this is expected to last just two days, but
on average it lasts more than a month.

We tested a variety of hypotheses that might explain the duration of AUTH48
by computing the correlation coefficients between various properties of the RFC
and the production delays. The results are listed in the following table:

Correlation             RFC production   Auth 48   RFC Edit (total)
Nb pages                0.5              -0.04     0.21
Copy-Edit               0.42             0.24      0.45
IANA action requested   -0.13            0.26      0.15
Nb drafts               0.19             -0.3      -0.17
Nb Authors              0.4              -0.04     0.16
WG delay                0.03             -0.16     -0.15

The results show that there is no simple answer. The number of pages, the
required amount of copy editing and to a very small extent the number of drafts
can help predict the production delay, but there is no obvious predictor for
the Auth 48 delay. In particular, there is no numerical evidence that the
number of authors influences the Auth48 delay, or that authors who have spent
a long time working on the document in the working group somehow spend
even longer to answer questions during Auth48; if anything, the numerical
results point in the opposite direction.

After asking the authors of the RFC in the sample why the AUTH48 phase took
a long time, we got three explanations:

1- Some RFC have multiple authors in multiple time zones. This slows down
the coordination required for approving changes.

2- Some authors found some of the proposed changes unnecessary or
undesirable, and asked that they be reversed. This required long
exchanges between authors and editors.

3- Some authors were not giving high priority to AUTH48 responses.

As mentioned above, we were not able to verify these hypotheses by looking at
the data.

We can assess the amount of copy editing applied to each published RFC by
comparing the text of the draft approved for publication and the text of the
RFC. We do expect differences in the "boilerplate" and in the IANA section,
but we will also see differences due to copy editing. Assessing the amount
of copy editing is subjective, and we do it using a scale of 1 to 4:1- Minor editing2- Editing for style, such as capitalization, hyphens, that versus which,
and expending all acronyms at least one.3- Editing for clarity in addition to style, such as rewriting ambiguous
sentences and clarifying use of internal references. For Yang models,
that may include model corrections suggested by the verifier.

4- Extensive editing.

The following table shows that about half of the RFC required editing
for style, and the other half at least some editing for clarity.

   RFC    Status   Copy Edit
   8411   Info     2
   8456   Info     4
   8446   PS       3
   8355   Info     2
   8441   PS       2
   8324   ISE      2
   8377   PS       3
   8498   Info     3
   8479   ISE      1
   8453   Info     2
   8429   BCP      2
   8312   Info     2
   8492   ISE      3
   8378   Exp      2
   8361   PS       2
   8472   PS       2
   8466   PS       3
   8362   PS       3
   8468   Info     3

This method of assessment does not take into account
the number of changes proposed by the editors and eventually rejected
by the authors, since these changes are not present in either the
final draft or the published RFC. It might be possible to get
an evaluation of these "phantom changes" from the RFC Production Center.

Out of 20 randomly selected RFC, 3 were published through the "independent series".
One is an independent opinion, another a description of a non-IETF protocol
format, and the third is a special case. Apart from
this special case, the publication delays were significantly shorter
for the Independent Stream than for the IETF stream.
This seems to indicate that the Independent Series is functioning as
expected.

The authors of these 3 RFC are regular IETF contributors. This
observation motivated a secondary analysis of all the RFC
published in the "independent" stream in 2018. There are 14 such RFC:
8507, 8494, 8493, 8492, 8483, 8479, 8433, 8409, 8374, 8369, 8367, 8351,
8328 and 8324. (RFC 8367 and 8369 were
published on 1 April 2018.) We can ask whether the authors of
these RFC are outsiders, part of a "wider community", or are
people who are also contributing to the IETF. The overwhelming
response is "insiders". Pretty much all the authors are or were
involved in the IETF, many of them with a prominent track record. There
are just 2 exceptions: a single RFC in which only 3 of the 5 authors
are closely associated with the IETF.

This draft is not really complete. We have obtained feedback from many authors
but not all. We should also get a review from the RFC Production Center. We may
want to find a better way of looking at citations than simple queries on
Google Scholar.

Even with those limitations, the exercise shows some promise, and also
shows the interest of doing more studies. For example, one of the plausible
questions is whether the IETF impact is increasing or decreasing over time.
We could do that by repeating the statistical sampling analysis for previous
years, for example 2008 and 1998.

This draft does not specify any protocol. We might want to analyze
whether security issues were discovered after publication of specific
standards.

This draft does not require any IANA action. Preliminary analysis does
not indicate that IANA is causing any particular delay in the
publication process.

Many thanks to the authors of the selected RFC who were willing to
provide feedback on the process:
Michael Ackermann,
Zafar Ali,
Sarah Banks,
Bruno Decraene,
Lars Eggert,
Nalini Elkins,
Joachim Fabini,
Dino Farinacci,
Clarence Filsfils,
Sujay Gupta,
Dan Harkins,
Vinayak Hegde,
Benjamin Kaduk,
John Klensin,
Acee Lindem,
Nikos Mavrogiannopoulos,
Patrick McManus,
Victor Moreno,
Al Morton,
Andrei Popov,
Eric Rescorla,
Michiko Short,
Bhuvaneswaran Vengainathan,
Lao Weiguo, and
Li Yizhou.

Publicly Verifiable Nominations Committee (NomCom) Random Selection

This document describes a method for making random selections in such a way that the unbiased nature of the choice is publicly verifiable. As an example, the selection of the voting members of the IETF Nominations Committee (NomCom) from the pool of eligible volunteers is used. Similar techniques would be applicable to other cases. This memo provides information for the Internet community.

IANA Registration for the Cryptographic Algorithm Object Identifier Range

When the Curdle Security Working Group was chartered, a range of object identifiers was donated by DigiCert, Inc. for the purpose of registering the Edwards Elliptic Curve key agreement and signature algorithms. This donated set of OIDs allowed for shorter values than would be possible using the existing S/MIME or PKIX arcs. This document describes the donated range and the identifiers that were assigned from that range, transfers control of that range to IANA, and establishes IANA allocation policies for any future assignments within that range.

Algorithm Identifiers for Ed25519, Ed448, X25519, and X448 for Use in the Internet X.509 Public Key Infrastructure

This document specifies algorithm identifiers and ASN.1 encoding formats for elliptic curve constructs using the curve25519 and curve448 curves. The signature algorithms covered are Ed25519 and Ed448. The key agreement algorithms covered are X25519 and X448. The encoding for public key, private key, and Edwards-curve Digital Signature Algorithm (EdDSA) structures is provided.

Benchmarking Methodology for Software-Defined Networking (SDN) Controller Performance

This document defines methodologies for benchmarking the control-plane performance of Software-Defined Networking (SDN) Controllers. The SDN Controller is a core component in the SDN architecture that controls the behavior of the network.
SDN Controllers have been implemented with many varying designs in order to achieve their intended network functionality. Hence, the authors of this document have taken the approach of considering an SDN Controller to be a black box, defining the methodology in a manner that is agnostic to protocols and network services supported by controllers. This document provides a method for measuring the performance of all controller implementations.

Terminology for Benchmarking Software-Defined Networking (SDN) Controller Performance

This document defines terminology for benchmarking a Software-Defined Networking (SDN) controller's control-plane performance. It extends the terminology already defined in RFC 7426 for the purpose of benchmarking SDN Controllers. The terms provided in this document help to benchmark an SDN Controller's performance independently of the controller's supported protocols and/or network services.

The Transport Layer Security (TLS) Protocol Version 1.3

This document specifies version 1.3 of the Transport Layer Security (TLS) protocol. TLS allows client/server applications to communicate over the Internet in a way that is designed to prevent eavesdropping, tampering, and message forgery.

This document updates RFCs 5705 and 6066, and obsoletes RFCs 5077, 5246, and 6961.
This document also specifies new requirements for TLS 1.2 implementations.

Resiliency Use Cases in Source Packet Routing in Networking (SPRING) Networks

This document identifies and describes the requirements for a set of use cases related to Segment Routing network resiliency on Source Packet Routing in Networking (SPRING) networks.

Bootstrapping WebSockets with HTTP/2

This document defines a mechanism for running the WebSocket Protocol (RFC 6455) over a single stream of an HTTP/2 connection.

The WebSocket Protocol

The WebSocket Protocol enables two-way communication between a client running untrusted code in a controlled environment to a remote host that has opted-in to communications from that code. The security model used for this is the origin-based security model commonly used by web browsers. The protocol consists of an opening handshake followed by basic message framing, layered over TCP. The goal of this technology is to provide a mechanism for browser-based applications that need two-way communication with servers that does not rely on opening multiple HTTP connections (e.g., using XMLHttpRequest or <iframe>s and long polling). [STANDARDS-TRACK]

DNS Privacy, Authorization, Special Uses, Encoding, Characters, Matching, and Root Structure: Time for Another Look?

The basic design of the Domain Name System was completed almost 30 years ago. The last half of that period has been characterized by significant changes in requirements and expectations, some of which either require changes to how the DNS is used or can be accommodated only poorly or not at all.
This document asks the question of whether it is time to either redesign and replace the DNS to match contemporary requirements and expectations (rather than continuing to try to design and implement incremental patches that are not fully satisfactory) or draw some clear lines about functionality that is not really needed or that should be performed in some other way.

Transparent Interconnection of Lots of Links (TRILL): Multi-Topology

This document specifies extensions to the IETF TRILL (Transparent Interconnection of Lots of Links) protocol to support multi-topology routing of unicast and multi-destination traffic based on IS-IS (Intermediate System to Intermediate System) multi-topology specified in RFC 5120. This document updates RFCs 6325 and 7177.

A P-Served-User Header Field Parameter for an Originating Call Diversion (CDIV) Session Case in the Session Initiation Protocol (SIP)

The P-Served-User header field was defined based on a requirement from the 3rd Generation Partnership Project (3GPP) IMS (IP Multimedia Subsystem) in order to convey the identity of the served user, his/ her registration state, and the session case that applies to that particular communication session and application invocation. A session case is metadata that captures the status of the session of a served user regardless of whether or not the served user is registered or the session originates or terminates with the served user. This document updates RFC 5502 by defining a new P-Served-User header field parameter, "orig-cdiv". The parameter conveys the session case used by a proxy when handling an originating session after Call Diversion (CDIV) services have been invoked for the served user.
This document also fixes the ABNF in RFC 5502 and provides more guidance for using the P-Served-User header field in IP networks.

Storing Validation Parameters in PKCS#8

This memo describes a method of storing parameters needed for private-key validation in the Private-Key Information Syntax Specification as defined in PKCS#8 format (RFC 5208). It is equally applicable to the alternative implementation of the Private-Key Information Syntax Specification as defined in RFC 5958.

The approach described in this document encodes the parameters under a private enterprise extension and does not form part of a formal standard.

Framework for Abstraction and Control of TE Networks (ACTN)

Traffic Engineered (TE) networks have a variety of mechanisms to facilitate the separation of the data plane and control plane. They also have a range of management and provisioning protocols to configure and activate network resources. These mechanisms represent key technologies for enabling flexible and dynamic networking. The term "Traffic Engineered network" refers to a network that uses any connection-oriented technology under the control of a distributed or centralized control plane to support dynamic provisioning of end-to-end connectivity.

Abstraction of network resources is a technique that can be applied to a single network domain or across multiple domains to create a single virtualized network that is under the control of a network operator or the customer of the operator that actually owns the network resources.

This document provides a framework for Abstraction and Control of TE Networks (ACTN) to support virtual network services and connectivity services.

Deprecate Triple-DES (3DES) and RC4 in Kerberos

The triple-DES (3DES) and RC4 encryption types are steadily weakening in cryptographic strength, and the deprecation process should begin for their use in Kerberos.
Accordingly, RFC 4757 has been moved to Historic status, as none of the encryption types it specifies should be used, and RFC 3961 has been updated to note the deprecation of the triple-DES encryption types. RFC 4120 is likewise updated to remove the recommendation to implement triple-DES encryption and checksum types.

CUBIC for Fast Long-Distance Networks

CUBIC is an extension to the current TCP standards. It differs from the current TCP standards only in the congestion control algorithm on the sender side. In particular, it uses a cubic function instead of a linear window increase function of the current TCP standards to improve scalability and stability under fast and long-distance networks. CUBIC and its predecessor algorithm have been adopted as defaults by Linux and have been used for many years. This document provides a specification of CUBIC to enable third-party implementations and to solicit community feedback through experimentation on the performance of CUBIC.

Secure Password Ciphersuites for Transport Layer Security (TLS)

This memo defines several new ciphersuites for the Transport Layer Security (TLS) protocol to support certificateless, secure authentication using only a simple, low-entropy password. The exchange is called "TLS-PWD". The ciphersuites are all based on an authentication and key exchange protocol, named "dragonfly", that is resistant to offline dictionary attacks.

Signal-Free Locator/ID Separation Protocol (LISP) Multicast

When multicast sources and receivers are active at Locator/ID Separation Protocol (LISP) sites, the core network is required to use native multicast so packets can be delivered from sources to group members. When multicast is not available to connect the multicast sites together, a signal-free mechanism can be used to allow traffic to flow between sites.
The mechanism described in this document uses unicast replication and encapsulation over the core network for the data plane and uses the LISP mapping database system so encapsulators at the source LISP multicast site can find decapsulators at the receiver LISP multicast sites.

Transparent Interconnection of Lots of Links (TRILL): Centralized Replication for Active-Active Broadcast, Unknown Unicast, and Multicast (BUM) Traffic

In Transparent Interconnection of Lots of Links (TRILL) active-active access, a Reverse Path Forwarding (RPF) check failure issue may occur when using the pseudo-nickname mechanism specified in RFC 7781. This document describes a solution to resolve this RPF check failure issue through centralized replication. All ingress Routing Bridges (RBridges) send Broadcast, Unknown Unicast, and Multicast (BUM) traffic to a centralized node with unicast TRILL encapsulation. When the centralized node receives the BUM traffic, it decapsulates the packets and forwards them to their destination RBridges using a distribution tree established per the TRILL base protocol (RFC 6325). To avoid RPF check failure on an RBridge sitting between the ingress RBridge and the centralized replication node, some change in the RPF calculation algorithm is required. RPF checks on each RBridge MUST be calculated as if the centralized node was the ingress RBridge, instead of being calculated using the actual ingress RBridge. This document updates RFC 6325.

Transport Layer Security (TLS) Extension for Token Binding Protocol Negotiation

This document specifies a Transport Layer Security (TLS) extension for the negotiation of Token Binding protocol version and key parameters. Negotiation of Token Binding in TLS 1.3 and later versions is beyond the scope of this document.

A YANG Data Model for Layer 2 Virtual Private Network (L2VPN) Service Delivery

This document defines a YANG data model that can be used to configure a Layer 2 provider-provisioned VPN service.
It is up to a management system to take this as an input and generate specific configuration models to configure the different network elements to deliver the service. How this configuration of network elements is done is out of scope for this document.

The YANG data model defined in this document includes support for point-to-point Virtual Private Wire Services (VPWSs) and multipoint Virtual Private LAN Services (VPLSs) that use Pseudowires signaled using the Label Distribution Protocol (LDP) and the Border Gateway Protocol (BGP) as described in RFCs 4761 and 6624.

The YANG data model defined in this document conforms to the Network Management Datastore Architecture defined in RFC 8342.

OSPFv3 Link State Advertisement (LSA) Extensibility

OSPFv3 requires functional extension beyond what can readily be done with the fixed-format Link State Advertisement (LSA) as described in RFC 5340. Without LSA extension, attributes associated with OSPFv3 links and advertised IPv6 prefixes must be advertised in separate LSAs and correlated to the fixed-format LSAs. This document extends the LSA format by encoding the existing OSPFv3 LSA information in Type-Length-Value (TLV) tuples and allowing advertisement of additional information with additional TLVs. Backward-compatibility mechanisms are also described.

This document updates RFC 5340, "OSPF for IPv6", and RFC 5838, "Support of Address Families in OSPFv3", by providing TLV-based encodings for the base OSPFv3 unicast support and OSPFv3 address family support.

IPv4, IPv6, and IPv4-IPv6 Coexistence: Updates for the IP Performance Metrics (IPPM) Framework

This memo updates the IP Performance Metrics (IPPM) framework defined by RFC 2330 with new considerations for measurement methodology and testing. It updates the definition of standard-formed packets to include IPv6 packets, deprecates the definition of minimal IP packet, and augments distinguishing aspects, referred to as Type-P, for test packets in RFC 2330.
This memo identifies that IPv4-IPv6 coexistence can challenge measurements within the scope of the IPPM framework. Example use cases include, but are not limited to, IPv4-IPv6 translation, NAT, and protocol encapsulation. IPv6 header compression and use of IPv6 over Low-Power Wireless Personal Area Networks (6LoWPAN) are considered and excluded from the standard-formed packet evaluation.