Asymmetric Manifest-Based Integrity

Mboned                                                    Internet-Draft
Area: Ops

   Akamai Technologies, Inc.
   150 Broadway
   Cambridge, MA 02144
   United States of America
   jakeholland.net@gmail.com

   Akamai Technologies, Inc.
   150 Broadway
   Cambridge, MA 02144
   United States of America
   krose@krose.org

This document defines Asymmetric Manifest-Based Integrity (AMBI).
AMBI allows each receiver or forwarder of a stream of multicast packets to check the integrity of the contents of each packet in the data stream.
AMBI operates by passing cryptographically verifiable hashes of the data packets inside manifest messages, and sending the manifests over authenticated out-of-band communication channels.

Multicast transport poses security problems that are not easily addressed by the same security mechanisms used for unicast transport. The “Introduction” sections of the documents describing TESLA, TESLA in SRTP, and TESLA with ALC and NORM present excellent overviews of the challenges unique to multicast authentication, briefly summarized here:

- A MAC based on a symmetric shared secret cannot be used because each packet has multiple receivers that do not trust each other, and using a symmetric shared secret exposes the same secret to each receiver.

- Asymmetric per-packet signatures can handle only very low bit-rates because of the computational overhead.

- An asymmetric signature of a larger message comprising multiple packets requires reliable receipt of all such packets, something that cannot be guaranteed in a timely manner even for protocols that do provide reliable delivery, and the retransmission of which may anyway exceed the useful lifetime for data formats that can otherwise tolerate some degree of loss.

Asymmetric Manifest-Based Integrity (AMBI) defines a method for receivers or middleboxes to cryptographically authenticate and verify the integrity of a stream of packets, by communicating packet “manifests” (described in ) via an out-of-band communication channel that provides authentication and verifiable integrity.

Each manifest contains a message digest (described in ) for each packet in a sequence of packets from the data stream, hereafter called a “packet digest”.
The packet digest incorporates a cryptographic hash of the packet contents and some identifying data from the packet, according to a defined digest profile for the data stream.

Each manifest MUST be delivered in a way that provides cryptographic integrity guarantees of the authenticity of the manifest.
For example, TLS could be used to deliver a stream of manifests over a unicast data stream from a set of trusted senders to each receiver, or a protocol that asymmetrically signs each message could be used to transport authenticated manifests over a multicast channel.
Note that a UDP-based protocol might drop or reorder manifests while still providing authentication.

Upon successful verification of a manifest and receipt of any subset of the corresponding data packets, the receiver has proof of the integrity of the contents of the data packets that are listed in the manifest.

Authenticating the integrity of the data packets depends on:

- the authenticity of the manifests
- the authenticity of the digest profile used for construction of the packet digests
- the difficulty of generating a collision for the packet digests contained in the manifest

This document defines a YANG module that augments the DORMS YANG module to provide a way to communicate a digest profile, described in , for construction of the packet digests, described in .
When obtaining the digest profile by using DORMS, the authenticity of the data stream relies on a trust relationship with the DORMS server, since that anchors the authenticity of the digest profile for constructing packet digests.

AMBI and TESLA attempt to achieve a similar goal of authenticating the integrity of streams of multicast packets.
AMBI imposes a higher overhead, as measured in the amount of extra data required, than TESLA imposes.
In exchange, AMBI relaxes the requirement for establishing an upper bound on clock synchronization between sender and receiver, and allows for the use case of authenticating multicast traffic before forwarding it through the network, while also allowing receivers to authenticate the same traffic.
By contrast, this is not possible with TESLA because the data packets can’t be authenticated until a key is disclosed, so either the middlebox has to forward data packets without first authenticating them so that the receiver has them prior to key disclosure, or the middlebox has to hold packets until the key is disclosed, at which point the receiver can no longer establish their authenticity.

The other new capability is that because AMBI provides authentication information out of band, authentication can be retrofitted into some pre-existing deployments without changing the protocol of the data packets, under some restrictions outlined in .
By contrast, TESLA requires a MAC to be added to each authenticated message.

TBD: Summarize the applicable threat model this protects against. A diagram plus a cleaned-up version of the on-list explanation here is probably appropriate: https://mailarchive.ietf.org/arch/msg/mboned/CG9FLjPwuno3MtvYvgNcD5p69I4/

Reference: https://tools.ietf.org/html/rfc3552#section-3

The key words “MUST”, “MUST NOT”, “REQUIRED”, “SHALL”, “SHALL NOT”, “SHOULD”, “SHOULD NOT”, “RECOMMENDED”, “NOT RECOMMENDED”, “MAY”, and “OPTIONAL” in this document are to be interpreted as described in and when, and only when, they appear in all capitals, as shown here.

Note to RFC Editor: Please remove this section and its subsections before publication.

This section is to provide references to make it easier to review the development and discussion on the draft so far.

This document is in the Github repository at:

   https://github.com/GrumpyOldTroll/ietf-dorms-cluster

Readers are welcome to open issues and send pull requests for this document. Please note that contributions may be merged and substantially edited, and as a reminder, please carefully consider the Note Well before contributing: https://datatracker.ietf.org/submit/note-well/

Substantial discussion of this document should take place on the MBONED working group mailing list (mboned@ietf.org).

- Join: https://www.ietf.org/mailman/listinfo/mboned
- Search: https://mailarchive.ietf.org/arch/browse/mboned/

In order to authenticate a data packet, AMBI receivers need to hold these three pieces of information at the same time:

- the data packet
- an authenticated manifest containing the packet digest for the data packet
- a digest profile defining the transformation from the data packet to its packet digest

The manifests are delivered as a stream of manifests over an authenticated data channel.
Manifest contents MUST be authenticated before they can be used to authenticate data packets.

The manifest stream is composed of an ordered sequence of manifests that each contain an ordered sequence of packet digests, corresponding to the original packets as sent from their origin, in the same order.

Note that a manifest contains potentially many packet digests, and its size can be tuned to fit within a convenient PDU (Protocol Data Unit) of the manifest transport stream.
By doing so, many packet digests for the multicast data stream can be delivered per packet of the manifest transport.
The intent is that even with unicast-based manifest transport, multicast-style efficiencies of scale can still be realized for the data stream, with only a relatively small unicast overhead.

Using different communication channels for the manifest stream and the data stream introduces a possibility of desynchronization in the timing of the received data between the different channels, so receivers hold data packets and packet digests from the manifest stream in buffers for some duration while awaiting the arrival of their counterparts.

While holding a data packet, if the corresponding packet digest for that packet arrives in the manifest stream and can be authenticated, the data packet is authenticated. While holding an authenticated packet digest, if the corresponding data packet arrives with a matching packet digest, the data packet is authenticated.

Once a data packet is authenticated, the corresponding packet digest can be discarded and the data packet can be further processed by the receiving application or forwarded through the receiving network.
Authenticating a data packet consumes one packet digest and prevents re-learning, with a hold-down time equal to the hold time for packet digests.
A different manifest might provide the same packet digest with the same packet sequence number, but the digest remains consumed if it has been used to authenticate a data packet.If the receiver’s hold duration for a data packet expires without authenticating the packet, the packet SHOULD be dropped as unauthenticated.
If the hold duration of a manifest expires, packet digests last received in that manifest SHOULD be discarded.
(Note that in some cases, packet digests can be sent redundantly in more than one manifest.
In such cases, the latest received time for an authenticated packet digest should be used for the expiration time.)

Since packet digests are usually smaller than the data packets, it’s RECOMMENDED that senders generate and send manifests with timing such that the packet digests in a manifest will typically be received by subscribed receivers before the data packets corresponding to those digests are received. This strategy reduces the buffering requirements at receivers, at the cost of introducing some buffering of data packets at the sender, since data packets are generated before their packet digests can be added to manifests.

The RECOMMENDED default hold times at receivers are:

- 2 seconds for data packets
- 10 seconds for packet digests

The sender MAY recommend different values for specific data streams, in order to tune different data streams for different performance goals.
The YANG model in provides a mechanism for senders to communicate the sender’s recommendation for buffering durations, when using DORMS.

Receivers SHOULD follow the recommendations for hold times provided by the sender, subject to their capabilities and any administratively configured limits on buffer sizes at the receiver. However, receivers MAY deviate from the values recommended by the sender for a variety of reasons.
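As an illustration of the hold-and-match behavior described above, a receiver's buffers might look like the following sketch. The class and method names are illustrative, not part of the protocol, and digests passed to on_digest are assumed to come from an already-authenticated manifest:

```python
import time

# Illustrative sketch of the receiver hold-buffer logic described above.
# Hold times default to the RECOMMENDED 2s (packets) and 10s (digests);
# a sender MAY advertise different values (e.g., via DORMS).
class AmbiReceiverBuffer:
    def __init__(self, packet_hold=2.0, digest_hold=10.0):
        self.packet_hold = packet_hold
        self.digest_hold = digest_hold
        self.pending_packets = {}   # digest bytes -> (packet, arrival time)
        self.pending_digests = {}   # digest bytes -> expiry time

    def on_packet(self, packet, digest, now=None):
        """Return the packet if it authenticates now, else buffer it."""
        now = now if now is not None else time.monotonic()
        self._expire(now)
        if digest in self.pending_digests:
            # Consume the digest so it cannot authenticate a replayed packet.
            del self.pending_digests[digest]
            return packet
        self.pending_packets[digest] = (packet, now)
        return None

    def on_digest(self, digest, now=None):
        """Called for each packet digest from an authenticated manifest."""
        now = now if now is not None else time.monotonic()
        self._expire(now)
        if digest in self.pending_packets:
            packet, _ = self.pending_packets.pop(digest)
            return packet            # authenticated; digest is consumed
        # Redundant copies of a digest refresh its expiry time.
        self.pending_digests[digest] = now + self.digest_hold
        return None

    def _expire(self, now):
        # Unauthenticated packets past their hold time SHOULD be dropped.
        self.pending_packets = {
            d: (p, t) for d, (p, t) in self.pending_packets.items()
            if now - t < self.packet_hold}
        self.pending_digests = {
            d: e for d, e in self.pending_digests.items() if e > now}
```

Either arrival order authenticates the packet: a digest arriving first yields the packet from a later on_packet call, and a packet arriving first yields it from a later on_digest call.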
Decreasing the buffering durations recommended by the sender increases the risk of losing packets, but can be an appropriate tradeoff for specific network conditions and hardware constraints on some devices.

It’s RECOMMENDED that middleboxes forwarding buffered data packets preserve the inter-packet gap between packets in the same data stream, and that receiving libraries that perform AMBI-based authentication provide mechanisms to expose the network arrival times of packets to applications. The purpose of this recommendation is to preserve the capability of receivers to use techniques for available bandwidth detection or network congestion detection based on observation of packet times.
Examples of such techniques include paced chirping and pathrate.

Note that this recommendation SHOULD NOT prevent the transmission of an authenticated packet merely because the prior packet remains unauthenticated.
This recommendation only asks implementations to delay the transmission of an authenticated packet to correspond to the inter-packet gap if an authenticated packet was previously transmitted and the authentication of the subsequent packet would otherwise burst the packets more quickly. This does not prevent the transmission of packets out of order according to their order of authentication, only the timing of packets that are transmitted, after authentication, in the same order they were received.

For receiver applications, the time that the original packet was received from the network SHOULD be made available to the receiving application.

A packet digest is a message digest for a data packet, built according to a digest profile defined by the sender. The digest profile specifies:

- a cryptographically secure hash algorithm (REQUIRED)
- a manifest stream identifier
- whether to hash the IP payload or the UDP payload (see )

The hash algorithm is applied to a pseudoheader followed by the packet payload, as determined by the digest profile.
The computed hash value is the packet digest.

TBD: there should also be a way to specify that only packets to a specific UDP port are applicable.
I think this is not quite right today and probably should be done with a grouping in the YANG model, so that the profile appears either inside a “protocol” container inside the (S,G) or inside the udp-stream inside the “protocol”, but am not sure.
Follow-up on this after the first reference implementation…

TBD: As recommended by https://tools.ietf.org/html/rfc7696#section-2.2, a companion document containing the mandatory-to-implement cipher suite should also be published separately and referenced by this document.

When the digest profile indicates that UDP payloads are validated, the IP protocol for the packets MUST be UDP (0x11), and the payload used for calculating the packet digest includes only the UDP payload, with length equal to the number of UDP payload octets, as calculated by subtracting the size of the UDP header from the value of the UDP Length field.

When the digest profile indicates that IP payloads are validated, the IP payload of the packet is used, using the outermost IP layer that contains the (S,G) corresponding to the (S,G) protected by the manifest.
There is no restriction on the IP protocols that can be authenticated.
The length field in the pseudoheader is calculated by subtracting the IP header length from the IP length, and is equal to the number of octets in the payload for the digest calculation.

Full IP payloads often aren’t available to receivers without extra privileges on end-user operating systems, so it’s useful to provide a way to authenticate only the UDP payload, which is often the only portion of the packet available to many receiving applications. However, for some use cases a full IP payload is appropriate.
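As a sketch of the two payload-length calculations above (IPv4 only, ignoring option and extension edge cases; the function name is illustrative):

```python
import struct

def payload_for_digest(ip_packet: bytes, use_udp_payload: bool) -> bytes:
    """Extract the octets covered by the digest, per the digest profile.

    Sketch only: IPv4, no handling of fragments or unusual options.
    """
    ihl = (ip_packet[0] & 0x0F) * 4            # IP header length in octets
    total_length = struct.unpack_from("!H", ip_packet, 2)[0]
    proto = ip_packet[9]
    ip_payload = ip_packet[ihl:total_length]   # IP length minus IP header length
    if not use_udp_payload:
        return ip_payload
    if proto != 0x11:
        raise ValueError("UDP-payload digest profile requires protocol UDP (0x11)")
    udp_length = struct.unpack_from("!H", ip_payload, 4)[0]
    return ip_payload[8:udp_length]            # UDP Length minus 8-octet UDP header
```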
For example, when retrofitting some existing protocols, some packets may be predictable or frequently repeated.
Use of an IPSec Authentication Header is one way to disambiguate such packets.
Even though the shared secret means the Authentication Header can’t itself be used to authenticate the packet contents, the sequence number in the Authentication Header can ensure that specific packets are not repeated at the IP layer, and so it’s useful for AMBI to have the capability to authenticate such packets.

Another example: some services might need to authenticate the UDP options .
When using the UDP payload, the UDP options would not be part of the authenticated payload, but would be included when using the IP payload type.

Lastly, since (S,G) subscription operates at the IP layer, it’s possible that some non-UDP protocols will need to be authenticated.

When calculating the hash for the packet digest, the hash algorithm is applied to a pseudoheader followed by the payload from the packet.
The complete sequence of octets used to calculate the hash is structured as follows:

Source Address: The IPv4 or IPv6 source address of the packet.

Destination Address: The IPv4 or IPv6 destination address of the packet.

Zeroes: All bits set to 0.

Protocol: The IP Protocol field from IPv4, or the Next Header field for IPv6. When UDP payload is indicated, this value MUST be UDP (0x11).

Length: The length in octets of the Payload Data field, expressed as an unsigned 16-bit integer.

Source Port: The source port of the packet. Zeroes if using a protocol that does not use source ports.

Destination Port: The destination port of the packet. Zeroes if using a protocol that does not use destination ports.

TBD: there’s something I hate about the source and destination ports. Maybe it should only be active in UDP-payload mode, instead of zeroes when not UDP? But I suspect there’s a better approach than UDP-or-not, so it’s this way for now, with hopes of finding something better in the next version.

Manifest Stream ID: The 32-bit identifier for the manifest stream.

Payload Data: Either the IP payload or the UDP payload, as indicated by the digest profile.

The payload type is configurable because when sending UDP, some legacy networks may strip the UDP option space, and it’s necessary to provide a manifest stream capable of authentication that can interoperate with these networks.
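As an illustration, a packet digest following the field list above could be computed as in this sketch. The exact octet widths (notably for the zeroes field) and the choice of SHA-256 are assumptions of this sketch; the digest profile names the actual hash algorithm:

```python
import hashlib
import ipaddress
import struct

def packet_digest(hash_name: str, src_ip: str, dst_ip: str, protocol: int,
                  src_port: int, dst_port: int, manifest_stream_id: int,
                  payload: bytes) -> bytes:
    """Compute a packet digest over the pseudoheader and payload.

    Field order follows the list above: source address, destination
    address, zeroes (one octet assumed here), protocol, 16-bit payload
    length, source port, destination port, manifest stream id, then the
    payload data.  Ports are zero for protocols without ports.
    """
    src = ipaddress.ip_address(src_ip).packed
    dst = ipaddress.ip_address(dst_ip).packed
    pseudoheader = (src + dst
                    + struct.pack("!BB", 0, protocol)   # zeroes, protocol
                    + struct.pack("!H", len(payload))   # payload length
                    + struct.pack("!HH", src_port, dst_port)
                    + struct.pack("!I", manifest_stream_id))
    h = hashlib.new(hash_name)
    h.update(pseudoheader + payload)
    return h.digest()
```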
However, for non-UDP traffic or in order to authenticate the UDP options, some use cases may require support for authenticating the full IP payload.

Manifest ID: A 32-bit unsigned integer chosen by the sender.

Manifest Sequence Number: A monotonically increasing 32-bit unsigned integer.
Each manifest sent by the sender increases this value by 1.
On overflow it wraps to 0. It’s RECOMMENDED to expire the manifest stream and start a new stream for the data packets before a sequence number wrap is necessary.

Packet Sequence Number: A monotonically increasing 32-bit unsigned integer.
Each packet in the data stream increases this value by 1. It’s RECOMMENDED to expire the manifest stream and start a new stream for the data packets before a sequence number wrap is necessary.

Note: for redundancy, especially if using a manifest stream with unreliable transport, successive manifests MAY provide duplicates of the same packet digest with the same packet sequence number, using overlapping sets of packet sequence numbers.
When received, these reset the hold timer for the listed packet digests.

Refresh Deadline: A 16-bit unsigned integer number of seconds. A zero value means the current digest profile for the current manifest stream is stable. A nonzero value means that the authentication is transitioning to a new manifest stream; the set of digest profiles SHOULD be refreshed by receivers that might stay joined longer than this duration, and a different manifest stream SHOULD be selected, before this many seconds have elapsed, in order to avoid a disruption.
See .

Digest Count: The count of packet digests in the manifest.

Packet Digests: Packet digests appended one after the other, aligned to 8-bit boundaries with zero padding (if the bit length of the digests is not a multiple of 8 bits).

It’s possible for multiple manifest streams authenticating the same data stream to be active at the same time.
The different manifest streams can have different hash algorithms, manifest ids, and current packet sequence numbers for the same data stream.
These result in different sets of packet digests for the same data packets, one digest per packet per digest profile.

It’s necessary sometimes to transition gracefully from one manifest stream to another.
The Refresh Deadline field from the manifest is used to signal to receivers the need to transition.

When a receiver gets a nonzero refresh deadline in a manifest, the sender SHOULD have an alternate manifest stream ready and available, and the receiver SHOULD learn the alternate manifest stream, join the new one, and leave the old one before the number of seconds given in the refresh deadline has elapsed.
After the refresh deadline has expired, a manifest stream MAY end. The receivers SHOULD use a random value between now and one half the number of seconds in the deadline field, to spread the spike of load on the DORMS server during a large multicast event.

AMBI manifests MUST be authenticated, but any transport protocol providing authentication can be used.
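The randomized refresh described above (a uniform random time between now and one half the deadline) could be scheduled as in this sketch:

```python
import random

def schedule_refresh(refresh_deadline_seconds: int, now: float) -> float:
    """Return the absolute time at which this receiver should refresh its
    digest profiles and switch manifest streams.  Picking a uniform random
    moment between now and half the deadline spreads the load spike on the
    DORMS server across receivers during a large multicast event."""
    if refresh_deadline_seconds == 0:
        raise ValueError("a zero deadline means the stream is stable")
    return now + random.uniform(0, refresh_deadline_seconds / 2)
```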
This section discusses several viable options for the use of an authenticating transport, and some associated design considerations.

TBD: extend the ‘manifest-transport’ in the YANG model to make an extensible mechanism to advertise different transport options for receiving manifest streams.

TBD: add ALTA to the list when and if it gets further along .
Sending an authenticatable multicast stream (instead of the unicast-based proposals below) is a worthwhile goal; otherwise, even a 1% unicast authentication overhead becomes a new unicast-imposed limit on scalability.

TBD: add a recommendation about scalability, like with DORMS, when using a unicast hash stream.
A CDN or other kind of fanout solution could scale the delivery, and still generally hit the time window.

This document defines a new media type ‘application/ambi’ for use with HTTPS. An HTTPS stream carrying the ‘application/ambi’ media type is composed of a sequence of binary AMBI manifests.
It is RECOMMENDED to use chunked transfer encoding. Complete packet digests from partially received manifests MAY be used by the receiver for authentication, even if the full manifest has not yet been delivered.
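As an illustration of consuming digests from partially received manifests, the following sketch assumes a fixed-size manifest header whose last two octets carry the digest count, followed by fixed-length digests; the real layout is whatever the manifest format specifies:

```python
def iter_complete_digests(chunks, digest_len: int, header_len: int = 16):
    """Yield complete packet digests from an 'application/ambi' byte
    stream as chunks arrive, without waiting for whole manifests.

    Sketch only: assumes each manifest is a fixed-size header (whose
    last two octets carry the digest count, big-endian) followed by
    fixed-length digests.
    """
    buf = b""
    remaining = 0                     # digests left in the current manifest
    for chunk in chunks:
        buf += chunk
        while True:
            if remaining == 0:
                if len(buf) < header_len:
                    break             # wait for the rest of the header
                remaining = int.from_bytes(buf[header_len - 2:header_len], "big")
                buf = buf[header_len:]
            elif len(buf) >= digest_len:
                yield buf[:digest_len]
                buf = buf[digest_len:]
                remaining -= 1
            else:
                break                 # wait for the rest of this digest
```

Each digest is yielded as soon as its final octet arrives, so a receiver can begin authenticating buffered data packets before the manifest's remaining digests are delivered.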
(IPSec is similar–worth adding as an option?).This option provides no native redundancy or retransmission, but packet digests can be repeated in different manifests to provide some resilience to loss.
Lost manifests that result in the loss of blocks of packet digests can be expensive, since they would make received data packets unauthenticatable.
TBD: should we therefore not support this case? (Probably still worthwhile; the manifests can still contain redundant hashes.)

Manifests that do not fit into single-packet DTLS payloads can still be delivered by using FECFRAME , particularly Reed-Solomon or possibly Raptor .
This approach has some advantages compared to HTTPS because of the absence of head-of-line blocking, while providing for tunable redundancy, and some advantages relative to plain DTLS because of overhead reduction and non-integer redundancy tunability (e.g., 1.5 becomes a viable redundancy factor).

TBD: define this method, possibly in another RFC.

TBD: walk through some examples as soon as we have a build running.
Likely to need some touching up.

The tree diagram below follows the notation defined in .

This document adds one YANG module to the “YANG Module Names” registry maintained at <https://www.iana.org/assignments/yang-parameters>.
The following registrations are made, per the format in Section 14 of :

This document adds the following registration to the “ns” subregistry of the “IETF XML Registry” defined in , referencing this document.

TBD: Register ‘application/ambi’ according to advice from: https://www.iana.org/form/media-types

TBD: check guidelines in https://tools.ietf.org/html/rfc5226

Protocols that have predictable packets run the risk of offline attacks for hash collisions against those packets.
When authenticating a protocol that might have predictable packets, it’s RECOMMENDED to use a hash function secure against such attacks, or to add content to the packets to make them unpredictable, such as an Authentication Header () or the addition of an ignored field with random content to the packet payload.

TBD: explain the attack from generating malicious packets and then looking for collisions, as opposed to having to generate a collision on packet contents that include a sequence number and then hitting a match.

TBD: follow the rest of the guidelines: https://tools.ietf.org/html/rfc3552

Many thanks to Daniel Franke, Eric Rescorla, Christian Worm Mortensen, Max Franke, and Albert Manfredi for their very helpful comments and suggestions.

Key words for use in RFCs to Indicate Requirement Levels
   In many standards track documents several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.

Datagram Transport Layer Security Version 1.2
   This document specifies version 1.2 of the Datagram Transport Layer Security (DTLS) protocol. The DTLS protocol provides communications privacy for datagram protocols. The protocol allows client/server applications to communicate in a way that is designed to prevent eavesdropping, tampering, or message forgery. The DTLS protocol is based on the Transport Layer Security (TLS) protocol and provides equivalent security guarantees. Datagram semantics of the underlying transport are preserved by the DTLS protocol. This document updates DTLS 1.0 to work with TLS version 1.2. [STANDARDS-TRACK]

Forward Error Correction (FEC) Framework
   This document describes a framework for using Forward Error Correction (FEC) codes with applications in public and private IP networks to provide protection against packet loss. The framework supports applying FEC to arbitrary packet flows over unreliable transport and is primarily intended for real-time, or streaming, media. This framework can be used to define Content Delivery Protocols that provide FEC for streaming media delivery or other packet flows. Content Delivery Protocols defined using this framework can support any FEC scheme (and associated FEC codes) that is compliant with various requirements defined in this document. Thus, Content Delivery Protocols can be defined that are not specific to a particular FEC scheme, and FEC schemes can be defined that are not specific to a particular Content Delivery Protocol. [STANDARDS-TRACK]

Raptor Forward Error Correction (FEC) Schemes for FECFRAME
   This document describes Fully-Specified Forward Error Correction (FEC) Schemes for the Raptor and RaptorQ codes and their application to reliable delivery of media streams in the context of the FEC Framework. The Raptor and RaptorQ codes are systematic codes, where a number of repair symbols are generated from a set of source symbols and sent in one or more repair flows in addition to the source symbols that are sent to the receiver(s) within a source flow. The Raptor and RaptorQ codes offer close to optimal protection against arbitrary packet losses at a low computational complexity. Six FEC Schemes are defined: two for the protection of arbitrary packet flows, two that are optimized for small source blocks, and two for the protection of a single flow that already contains a sequence number. Repair data may be sent over arbitrary datagram transport (e.g., UDP) or using RTP. [STANDARDS-TRACK]

Simple Reed-Solomon Forward Error Correction (FEC) Scheme for FECFRAME
   This document describes a fully-specified simple Forward Error Correction (FEC) scheme for Reed-Solomon codes over the finite field (also known as the Galois Field) GF(2^^m), with 2 <= m <= 16, that can be used to protect arbitrary media streams along the lines defined by FECFRAME. The Reed-Solomon codes considered have attractive properties, since they offer optimal protection against packet erasures and the source symbols are part of the encoding symbols, which can greatly simplify decoding. However, the price to pay is a limit on the maximum source block size, on the maximum number of encoding symbols, and a computational complexity higher than that of the Low-Density Parity Check (LDPC) codes, for instance.

The YANG 1.1 Data Modeling Language
   YANG is a data modeling language used to model configuration data, state data, Remote Procedure Calls, and notifications for network management protocols. This document describes the syntax and semantics of version 1.1 of the YANG language. YANG version 1.1 is a maintenance release of the YANG language, addressing ambiguities and defects in the original specification. There are a small number of backward incompatibilities from YANG version 1. This document also specifies the YANG mappings to the Network Configuration Protocol (NETCONF).

Ambiguity of Uppercase vs Lowercase in RFC 2119 Key Words
   RFC 2119 specifies common key words that may be used in protocol specifications. This document aims to reduce the ambiguity by clarifying that only UPPERCASE usage of the key words have the defined special meanings.

YANG Tree Diagrams
   This document captures the current syntax used in YANG module tree diagrams. The purpose of this document is to provide a single location for this definition. This syntax may be updated from time to time based on the evolution of the YANG language.

Discovery Of Restconf Metadata for Source-specific multicast
   This document defines DORMS (Discovery Of Restconf Metadata for Source-specific multicast), a method to discover and retrieve extensible metadata about source-specific multicast channels using RESTCONF. The reverse IP DNS zone for a multicast sender's IP address is configured to use SRV resource records to advertise the hostname of a RESTCONF server that publishes metadata according to a new YANG module with support for extensions. A new service name and the new YANG module are defined.

Guidelines for Writing RFC Text on Security Considerations
   All RFCs are required to have a Security Considerations section. Historically, such sections have been relatively weak. This document provides guidelines to RFC authors on how to write a good Security Considerations section. This document specifies an Internet Best Current Practices for the Internet Community, and requests discussion and suggestions for improvements.

The IETF XML Registry
   This document describes an IANA maintained registry for IETF standards which use Extensible Markup Language (XML) related items such as Namespaces, Document Type Declarations (DTDs), Schemas, and Resource Description Framework (RDF) Schemas.

Timed Efficient Stream Loss-Tolerant Authentication (TESLA): Multicast Source Authentication Transform Introduction
   This document introduces Timed Efficient Stream Loss-tolerant Authentication (TESLA). TESLA allows all receivers to check the integrity and authenticate the source of each packet in multicast or broadcast data streams. TESLA requires no trust between receivers, uses low-cost operations per packet at both sender and receiver, can tolerate any level of loss without retransmissions, and requires no per-receiver state at the sender. TESLA can protect receivers against denial of service attacks in certain circumstances. Each receiver must be loosely time-synchronized with the source in order to verify messages, but otherwise receivers do not have to send any messages. TESLA alone cannot support non-repudiation of the data source to third parties. This informational document is intended to assist in writing standardizable and secure specifications for protocols based on TESLA in different contexts. This memo provides information for the Internet community.

IP Authentication Header
   This document describes an updated version of the IP Authentication Header (AH), which is designed to provide authentication services in IPv4 and IPv6. This document obsoletes RFC 2402 (November 1998). [STANDARDS-TRACK]

The Use of Timed Efficient Stream Loss-Tolerant Authentication (TESLA) in the Secure Real-time Transport Protocol (SRTP)
   This memo describes the use of the Timed Efficient Stream Loss-tolerant Authentication (RFC 4082) transform within the Secure Real-time Transport Protocol (SRTP), to provide data origin authentication for multicast and broadcast data streams. [STANDARDS-TRACK]

Use of Timed Efficient Stream Loss-Tolerant Authentication (TESLA) in the Asynchronous Layered Coding (ALC) and NACK-Oriented Reliable Multicast (NORM) Protocols
   This document details the Timed Efficient Stream Loss-Tolerant Authentication (TESLA) packet source authentication and packet integrity verification protocol and its integration within the Asynchronous Layered Coding (ALC) and NACK-Oriented Reliable Multicast (NORM) content delivery protocols. This document only considers the authentication/integrity verification of the packets generated by the session's sender. The authentication and integrity verification of the packets sent by receivers, if any, is out of the scope of this document. This document defines an Experimental Protocol for the Internet community.

YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF)
   YANG is a data modeling language used to model configuration and state data manipulated by the Network Configuration Protocol (NETCONF), NETCONF remote procedure calls, and NETCONF notifications. [STANDARDS-TRACK]

Transport Options for UDP
   Transport protocols are extended through the use of transport header options. This document extends UDP by indicating the location, syntax, and semantics for UDP transport layer options.

Asymmetric Loss-Tolerant Authentication
   Establishing authenticity of a stream of datagrams in the presence of multiple receivers is naively achieved through the use of per-packet asymmetric digital signatures, but at high computational cost for both senders and receivers. Timed Efficient Stream Loss-Tolerant Authentication (TESLA) instead employs relatively cheap symmetric authentication, achieving asymmetry via time-delayed key disclosure, while adding latency to verification and imposing requirements on time synchronization between receivers and the sender to prevent forgery. This document introduces Asymmetric Loss-Tolerant Authentication (ALTA), which employs an acyclic graph of message authentication codes (MACs) transmitted alongside data payloads, with redundancy to enable authentication of all received payloads in the presence of certain patterns of loss, along with regularly paced digital signatures. ALTA requires no time synchronization and enables authentication of payloads as soon as sufficient authentication material has been received.