Transport Area Working Group                                  L. Eggert
Internet-Draft                                                   NetApp
Obsoletes: 5405 (if approved)                              G. Fairhurst
Intended status: Best Current Practice           University of Aberdeen
Expires: June 12, 2015                                      G. Shepherd
                                                          Cisco Systems
                                                      December 09, 2014

                         UDP Usage Guidelines
                      draft-tsvwg-rfc5405bis-00

Abstract

   The User Datagram Protocol (UDP) provides a minimal message-passing
   transport that has no inherent congestion control mechanisms.
   Because congestion control is critical to the stable operation of
   the Internet, applications and other protocols that choose to use
   UDP as an Internet transport must employ mechanisms to prevent
   congestion collapse and to establish some degree of fairness with
   concurrent traffic.  They may also need to implement additional
   mechanisms, depending on how they use UDP.

   This document provides guidelines on the use of UDP for the
   designers of applications, tunnels and other protocols that use
   UDP.  Congestion control guidelines are a primary focus, but the
   document also provides guidance on other topics, including message
   sizes, reliability, checksums, and middlebox traversal.

   If published as an RFC, this document will obsolete RFC5405.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on June 12, 2015.

Copyright Notice

   Copyright (c) 2014 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

   This document may not be modified, and derivative works of it may
   not be created, and it may not be published except as an Internet-
   Draft.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  UDP Usage Guidelines
     3.1.  Congestion Control Guidelines
     3.2.  Message Size Guidelines
     3.3.  Reliability Guidelines
     3.4.  Checksum Guidelines
     3.5.  Middlebox Traversal Guidelines
   4.  Multicast UDP Usage Guidelines
     4.1.  Multicast Congestion Control Guidelines
     4.2.  Message Size Guidelines for Multicast
   5.  Programming Guidelines
     5.1.  Using UDP Ports
     5.2.  ICMP Guidelines
   6.  Security Considerations
   7.  Summary
   8.  IANA Considerations
   9.  Acknowledgments
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Appendix A.  Revision Notes

1.  Introduction

   The User Datagram Protocol (UDP) [RFC0768] provides a minimal,
   unreliable, best-effort, message-passing transport to applications
   and other protocols (such as tunnels) that desire to operate over
   UDP (both simply called "applications" in the remainder of this
   document).  Compared to other transport protocols, UDP and its UDP-
   Lite variant [RFC3828] are unique in that they do not establish
   end-to-end connections between communicating end systems.  UDP
   communication consequently does not incur connection establishment
   and tear-down overheads, and there is minimal associated end system
   state.  Because of these characteristics, UDP can offer a very
   efficient communication transport to some applications.

   A second unique characteristic of UDP is that it provides no
   inherent congestion control mechanisms.  On many platforms,
   applications can send UDP datagrams at the line rate of the link
   interface, which is often much greater than the available path
   capacity, and doing so contributes to congestion along the path.
   [RFC2914] describes the best current practice for congestion
   control in the Internet.  It identifies two major reasons why
   congestion control mechanisms are critical for the stable operation
   of the Internet:

   1.  The prevention of congestion collapse, i.e., a state where an
       increase in network load results in a decrease in useful work
       done by the network.

   2.  The establishment of a degree of fairness, i.e., allowing
       multiple flows to share the capacity of a path reasonably
       equitably.

   Because UDP itself provides no congestion control mechanisms, it is
   up to the applications that use UDP for Internet communication to
   employ suitable mechanisms to prevent congestion collapse and
   establish a degree of fairness.  [RFC2309] discusses the dangers of
   congestion-unresponsive flows and states that "all UDP-based
   streaming applications should incorporate effective congestion
   avoidance mechanisms".  This is an important requirement, even for
   applications that do not use UDP for streaming.  In addition,
   congestion-controlled transmission is of benefit to an application
   itself, because it can reduce self-induced packet loss, minimize
   retransmissions, and hence reduce delays.  Congestion control is
   essential even at relatively slow transmission rates.  For example,
   an application that generates five 1500-byte UDP datagrams in one
   second can already exceed the capacity of a 56 Kb/s path.  For
   applications that can operate at higher, potentially unbounded data
   rates, congestion control becomes vital to prevent congestion
   collapse and establish some degree of fairness.  Section 3
   describes a number of simple guidelines for the designers of such
   applications.

   A UDP datagram is carried in a single IP packet and is hence
   limited to a maximum payload of 65,507 bytes for IPv4 and 65,527
   bytes for IPv6.  The transmission of large IP packets usually
   requires IP fragmentation.  Fragmentation decreases communication
   reliability and efficiency and should be avoided.  IPv6 allows the
   option of transmitting large packets ("jumbograms") without
   fragmentation when all link layers along the path support this
   [RFC2675].  Some of the guidelines in Section 3 describe how
   applications should determine appropriate message sizes.  Other
   sections of this document provide guidance on reliability,
   checksums, and middlebox traversal.
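[Editor's note: the 56 Kb/s arithmetic above is easy to check. Five 1500-byte datagrams per second amount to 60 Kb/s of UDP payload alone, before any IP/UDP header overhead is counted. A small illustrative sketch (the function name is ours, not from any specification):

```python
def sending_rate_bps(datagrams_per_second: int, datagram_size_bytes: int) -> int:
    """Raw payload rate in bits per second, ignoring IP/UDP header overhead."""
    return datagrams_per_second * datagram_size_bytes * 8

# Five 1500-byte datagrams per second carry 60 Kb/s of payload,
# which already exceeds a 56 Kb/s path -- and each packet adds at
# least 28 bytes of IPv4 + UDP headers on top of that.
rate = sending_rate_bps(5, 1500)
assert rate == 60_000
assert rate > 56_000
```
]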

   This document provides guidelines and recommendations.  Although
   most UDP applications are expected to follow these guidelines,
   there do exist valid reasons why a specific application may decide
   not to follow a given guideline.  In such cases, it is RECOMMENDED
   that application designers cite the respective section(s) of this
   document in the technical specification of their application or
   protocol and explain their rationale for their design choice.

   [RFC5405] was scoped to provide guidelines for unicast applications
   only, whereas this document also provides guidelines for UDP flows
   that use IP anycast, multicast and broadcast, and applications that
   use UDP tunnels to support IP flows.

   Finally, although this document specifically refers to applications
   that use UDP, the spirit of some of its guidelines also applies to
   other message-passing applications and protocols (specifically on
   the topics of congestion control, message sizes, and reliability).
   Examples include signaling or control applications that choose to
   run directly over IP by registering their own IP protocol number
   with IANA.  This document may provide useful background reading to
   the designers of such applications and protocols.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY",
   and "OPTIONAL" in this document are to be interpreted as described
   in [RFC2119].

3.  UDP Usage Guidelines

   Internet paths can have widely varying characteristics, including
   transmission delays, available bandwidths, congestion levels,
   reordering probabilities, supported message sizes, or loss rates.
   Furthermore, the same Internet path can have very different
   conditions over time.  Consequently, applications that may be used
   on the Internet MUST NOT make assumptions about specific path
   characteristics.
   They MUST instead use mechanisms that let them operate safely under
   very different path conditions.  Typically, this requires
   conservatively probing the current conditions of the Internet path
   they communicate over to establish a transmission behavior that the
   path can sustain and that is reasonably fair to other traffic
   sharing the path.

   These mechanisms are difficult to implement correctly.  For most
   applications, the use of one of the existing IETF transport
   protocols is the simplest method of acquiring the required
   mechanisms.  Consequently, the RECOMMENDED alternative to the UDP
   usage described in the remainder of this section is the use of an
   IETF transport protocol such as TCP [RFC0793], Stream Control
   Transmission Protocol (SCTP) [RFC4960] and its Partial Reliability
   Extension (SCTP-PR) [RFC3758], or Datagram Congestion Control
   Protocol (DCCP) [RFC4340] with its different congestion control
   types [RFC4341][RFC4342][RFC5622].

   If used correctly, these more fully-featured transport protocols
   are not as "heavyweight" as often claimed.  For example, the TCP
   algorithms have been continuously improved over decades, and have
   reached a level of efficiency and correctness that custom
   application-layer mechanisms will struggle to easily duplicate.  In
   addition, many TCP implementations allow connections to be tuned by
   an application to its purposes.  For example, TCP's "Nagle"
   algorithm [RFC0896] can be disabled, improving communication
   latency at the expense of more frequent -- but still congestion-
   controlled -- packet transmissions.  Another example is the TCP SYN
   cookie mechanism [RFC4987], which is available on many platforms.
   TCP with SYN cookies does not require a server to maintain per-
   connection state until the connection is established.
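[Editor's note: as an illustration of this kind of per-connection tuning, most socket APIs expose the Nagle algorithm through the standard TCP_NODELAY option. A minimal sketch in Python (the helper name is ours):

```python
import socket

def make_low_latency_tcp_socket() -> socket.socket:
    """Create a TCP socket with the Nagle algorithm disabled (TCP_NODELAY).

    Small writes are then transmitted immediately, trading more
    frequent -- but still congestion-controlled -- packets for
    lower latency, as described above.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s

s = make_low_latency_tcp_socket()
assert s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0
s.close()
```
]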
   TCP also requires the end that closes a connection to maintain the
   TIME-WAIT state that prevents delayed segments from one connection
   instance from interfering with a later one.  Applications that are
   aware of and designed for this behavior can shift maintenance of
   the TIME-WAIT state to conserve resources by controlling which end
   closes a TCP connection [FABER].  Finally, TCP's built-in capacity-
   probing and awareness of the maximum transmission unit supported by
   the path (PMTU) results in efficient data transmission that quickly
   compensates for the initial connection setup delay, in the case of
   transfers that exchange more than a few segments.

3.1.  Congestion Control Guidelines

   If an application or protocol chooses not to use a congestion-
   controlled transport protocol, it SHOULD control the rate at which
   it sends UDP datagrams to a destination host, in order to fulfill
   the requirements of [RFC2914].  It is important to stress that an
   application SHOULD perform congestion control over all UDP traffic
   it sends to a destination, independently from how it generates this
   traffic.  For example, an application that forks multiple worker
   processes or otherwise uses multiple sockets to generate UDP
   datagrams SHOULD perform congestion control over the aggregate
   traffic.

   Several approaches to perform congestion control are discussed in
   the remainder of this section.  The section describes generic
   topics with an intended emphasis on unicast and anycast [RFC1546]
   usage.  Not all approaches discussed below are appropriate for all
   UDP-transmitting applications.  Section 3.1.1 discusses congestion
   control options for applications that perform bulk transfers over
   UDP.  Such applications can employ schemes that sample the path
   over several subsequent RTTs during which data is exchanged, in
   order to determine a sending rate that the path at its current load
   can support.
   Other applications only exchange a few UDP datagrams with a
   destination.  Section 3.1.2 discusses congestion control options
   for such "low data-volume" applications.  Because they typically do
   not transmit enough data to iteratively sample the path to
   determine a safe sending rate, they need to employ different kinds
   of congestion control mechanisms.  Section 3.1.6 discusses
   congestion control considerations when UDP is used as a tunneling
   protocol.  Section 4 provides additional recommendations for
   broadcast and multicast usage.

   UDP applications may take advantage of Explicit Congestion
   Notification (ECN), provided that the application programming
   interface can support ECN and the congestion control can
   appropriately react to ECN-marked packets.  [RFC6679] provides
   guidance on how to use ECN for UDP-based applications using the
   Real-time Transport Protocol (RTP).

   It is important to note that congestion control should not be
   viewed as an add-on to a finished application.  Many of the
   mechanisms discussed in the guidelines below require application
   support to operate correctly.  Application designers need to
   consider congestion control throughout the design of their
   application, similar to how they consider security aspects
   throughout the design process.

   In the past, the IETF has also investigated integrated congestion
   control mechanisms that act on the traffic aggregate between two
   hosts, i.e., a framework such as the Congestion Manager [RFC3124],
   where active sessions may share current congestion information in a
   way that is independent of the transport protocol.  Such mechanisms
   have so far failed to see deployment, but would otherwise simplify
   the design of congestion control mechanisms for UDP sessions, so
   that they fulfill the requirements in [RFC2914].

3.1.1.  Bulk Transfer Applications

   Applications that perform bulk transmission of data to a peer over
   UDP, i.e., applications that exchange more than a few UDP datagrams
   per RTT, SHOULD implement TCP-Friendly Rate Control (TFRC)
   [RFC5348], window-based TCP-like congestion control, or otherwise
   ensure that the application complies with the congestion control
   principles.

   TFRC has been designed to provide both congestion control and
   fairness in a way that is compatible with the IETF's other
   transport protocols.  If an application implements TFRC, it need
   not follow the remaining guidelines in Section 3.1.1, because TFRC
   already addresses them, but SHOULD still follow the remaining
   guidelines in the subsequent subsections of Section 3.

   Bulk transfer applications that choose not to implement TFRC or
   TCP-like windowing SHOULD implement a congestion control scheme
   that results in bandwidth use that competes fairly with TCP within
   an order of magnitude.  Section 2 of [RFC3551] suggests that
   applications SHOULD monitor the packet loss rate to ensure that it
   is within acceptable parameters.  Packet loss is considered
   acceptable if a TCP flow across the same network path under the
   same network conditions would achieve an average throughput,
   measured on a reasonable timescale, that is not less than that of
   the UDP flow.  The comparison to TCP cannot be specified exactly,
   but is intended as an "order-of-magnitude" comparison in timescale
   and throughput.

   Finally, some bulk transfer applications may choose not to
   implement any congestion control mechanism and instead rely on
   transmitting across reserved path capacity.  This might be an
   acceptable choice for a subset of restricted networking
   environments, but is by no means a safe practice for operation over
   the wider Internet.
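[Editor's note: the "order-of-magnitude" comparison with TCP above can be approximated with the TCP throughput equation that TFRC itself uses (Section 3.1 of [RFC5348]). A sketch, using b = 1 and the t_RTO = 4*RTT simplification that RFC 5348 recommends:

```python
from math import sqrt

def tcp_friendly_rate(s: float, rtt: float, p: float, b: int = 1) -> float:
    """Approximate TCP-equivalent sending rate in bytes/second (RFC 5348).

    s   -- segment size in bytes
    rtt -- round-trip time in seconds
    p   -- loss event rate (0 < p <= 1)
    b   -- segments acknowledged per ACK (1, per RFC 5348)
    """
    t_rto = 4 * rtt  # simplification recommended by RFC 5348
    return s / (rtt * sqrt(2 * b * p / 3)
                + t_rto * (3 * sqrt(3 * b * p / 8)) * p * (1 + 32 * p ** 2))

# Example: 1460-byte segments, 100 ms RTT, 1% loss event rate.
x = tcp_friendly_rate(1460, 0.1, 0.01)
assert x > 0
# A higher loss rate yields a lower TCP-friendly rate.
assert tcp_friendly_rate(1460, 0.1, 0.04) < x
```

A UDP sender whose measured throughput stays within roughly an order of magnitude of this value, on comparable timescales, meets the comparison described above.]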
   When the UDP traffic of such applications leaks out into
   unprovisioned Internet paths, it can significantly degrade the
   performance of other traffic sharing the path and even result in
   congestion collapse.  Applications that support an uncontrolled or
   unadaptive transmission behavior SHOULD NOT do so by default and
   SHOULD instead require users to explicitly enable this mode of
   operation.

3.1.2.  Low Data-Volume Applications

   When applications that at any time exchange only a few UDP
   datagrams with a destination implement TFRC or one of the other
   congestion control schemes in Section 3.1.1, the network sees
   little benefit, because those mechanisms perform congestion control
   in a way that is only effective for longer transmissions.

   Applications that at any time exchange only a few UDP datagrams
   with a destination SHOULD still control their transmission behavior
   by not sending on average more than one UDP datagram per round-trip
   time (RTT) to a destination.  Similar to the recommendation in
   [RFC1536], an application SHOULD maintain an estimate of the RTT
   for any destination with which it communicates.  Applications
   SHOULD implement the algorithm specified in [RFC6298] to compute a
   smoothed RTT (SRTT) estimate.  They SHOULD also detect packet loss
   and exponentially back off their retransmission timer when a loss
   event occurs.  When implementing this scheme, applications need to
   choose a sensible initial value for the RTT.  This value SHOULD
   generally be as conservative as possible for the given application.
   TCP uses an initial value of 3 seconds [RFC6298], which is also
   RECOMMENDED as an initial value for UDP applications.  SIP
   [RFC3261] and GIST [RFC5971] use an initial value of 500 ms, and
   initial timeouts that are shorter than this are likely problematic
   in many cases.
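[Editor's note: the RFC 6298 computation referenced above can be sketched as follows. The class name is ours; the constants (alpha = 1/8, beta = 1/4, K = 4, 1-second minimum) are from RFC 6298, and the 3-second initial value follows the recommendation above. Clamping the whole RTO to 1 second is a simplification of RFC 6298's "RTO = SRTT + max(G, K*RTTVAR)" plus round-up rule:

```python
class RtoEstimator:
    """Smoothed RTT and retransmission timeout per RFC 6298 (sketch)."""

    INITIAL_RTO = 3.0  # seconds; conservative initial value, as recommended above
    MIN_RTO = 1.0      # seconds; RFC 6298 lower bound (simplified clamp)
    K = 4

    def __init__(self):
        self.srtt = None
        self.rttvar = None
        self.rto = self.INITIAL_RTO

    def on_rtt_sample(self, r: float) -> None:
        if self.srtt is None:
            # First measurement: SRTT = R, RTTVAR = R/2.
            self.srtt = r
            self.rttvar = r / 2
        else:
            # Subsequent measurements, with beta = 1/4 and alpha = 1/8.
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - r)
            self.srtt = 0.875 * self.srtt + 0.125 * r
        self.rto = max(self.MIN_RTO, self.srtt + self.K * self.rttvar)

    def on_timeout(self) -> None:
        """Exponential back-off when a loss event occurs."""
        self.rto *= 2

est = RtoEstimator()
assert est.rto == 3.0     # before any sample, use the conservative initial value
est.on_rtt_sample(0.2)    # 200 ms RTT sample
assert est.rto == 1.0     # 0.2 + 4*0.1 = 0.6 s, clamped to the 1 s minimum
est.on_timeout()
assert est.rto == 2.0     # backed off exponentially after a loss event
```
]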
   It is also important to note that the initial timeout is not the
   maximum possible timeout -- the RECOMMENDED algorithm in [RFC6298]
   yields timeout values after a series of losses that are much longer
   than the initial value.

   Some applications cannot maintain a reliable RTT estimate for a
   destination.  The first case is that of applications that exchange
   too few UDP datagrams with a peer to establish a statistically
   accurate RTT estimate.  Such applications MAY use a predetermined
   transmission interval that is exponentially backed off when packets
   are lost.  TCP uses an initial value of 3 seconds [RFC6298], which
   is also RECOMMENDED as an initial value for UDP applications.  SIP
   [RFC3261] and GIST [RFC5971] use an interval of 500 ms, and shorter
   values are likely problematic in many cases.  As in the previous
   case, note that the initial timeout is not the maximum possible
   timeout.

   A second class of applications cannot maintain an RTT estimate for
   a destination, because the destination does not send return
   traffic.  Such applications SHOULD NOT send more than one UDP
   datagram every 3 seconds, and SHOULD use an even less aggressive
   rate when possible.  The 3-second interval was chosen based on
   TCP's retransmission timeout when the RTT is unknown [RFC6298], and
   shorter values are likely problematic in many cases.  Note that the
   sending rate in this case must be more conservative than in the two
   previous cases, because the lack of return traffic prevents the
   detection of packet loss, i.e., congestion, and the application
   therefore cannot perform exponential back-off to reduce load.

   Applications that communicate bidirectionally SHOULD employ
   congestion control for both directions of the communication.
   For example, for a client-server, request-response-style
   application, clients SHOULD congestion-control their request
   transmission to a server, and the server SHOULD congestion-control
   its responses to the clients.  Congestion in the forward and
   reverse directions is uncorrelated, and an application SHOULD
   either independently detect and respond to congestion along both
   directions, or limit new and retransmitted requests based on
   acknowledged responses across the entire round-trip path.

3.1.3.  Burst Mitigation and Pacing

   UDP applications SHOULD provide mechanisms to regulate the bursts
   of transmission that the application may send to the network.  Many
   TCP and SCTP implementations provide mechanisms that prevent a
   sender from generating long bursts at line-rate, since these are
   known to induce early loss to applications sharing a common network
   bottleneck.  The use of pacing with TCP has also been shown to
   improve the coexistence of TCP flows with other flows.

   Even low data-volume UDP flows may benefit from rate control, e.g.,
   an application that sends three copies of a packet to improve
   robustness to loss is RECOMMENDED to pace out those three packets
   over several RTTs, to reduce the probability that all three packets
   will be lost due to the same congestion event.

3.1.4.  QoS, Pre-Provisioned or Reserved Capacity

   An application using UDP can use the differentiated services and
   integrated services QoS frameworks.  These are usually available
   within controlled environments (e.g., within a single
   administrative domain or a bilaterally agreed connection between
   domains).  Applications intended for the Internet should not assume
   that QoS mechanisms are supported by the networks they use, and
   therefore need to provide congestion control, error recovery, etc.
   in case the actual network path does not provide provisioned
   service.
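[Editor's note: the pacing advice in Section 3.1.3 -- spreading, e.g., three redundant copies over several RTTs instead of sending them back-to-back -- can be sketched as a simple send schedule. The function name is ours:

```python
def paced_send_times(copies: int, srtt: float, start: float = 0.0) -> list[float]:
    """Schedule redundant copies one (estimated) RTT apart.

    Spacing the copies out means a single congestion event, which
    typically lasts on the order of one RTT, is unlikely to lose
    all of them -- unlike a back-to-back burst.
    """
    return [start + i * srtt for i in range(copies)]

# Three copies with a 250 ms SRTT are sent at t = 0, 0.25, and 0.5 s.
assert paced_send_times(3, 0.25) == [0.0, 0.25, 0.5]
```
]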

   Some UDP applications are only expected to be deployed over network
   paths that use pre-provisioned capacity or capacity reserved using
   dynamic provisioning, e.g., through the Resource Reservation
   Protocol (RSVP).  Multicast applications are also used with pre-
   provisioned capacity (e.g., IPTV deployments within access
   networks).  These applications MAY choose not to implement any
   congestion control mechanism and instead rely on transmitting only
   on paths where the capacity is provisioned and reserved for this
   use.  This might be an acceptable choice for a subset of restricted
   networking environments, but is by no means a safe practice for
   operation over the wider Internet.

   If the traffic of such applications leaks out into unprovisioned
   Internet paths, it can significantly degrade the performance of
   other traffic sharing the path and even result in congestion
   collapse.  For this reason, and to protect other applications
   sharing the same path, applications SHOULD deploy an appropriate
   circuit breaker, as described in Section 3.1.5.  Applications that
   support an uncontrolled or unadaptive transmission behavior SHOULD
   NOT do so by default and SHOULD instead require users to explicitly
   enable this mode of operation.

   Applications used in networks within a controlled environment may
   be able to exploit network management functions to detect whether
   they are causing congestion, and react accordingly.

3.1.5.  Circuit Breaker Mechanisms

   A transport circuit breaker is an automatic mechanism that is used
   to estimate the congestion caused by a flow, and to terminate (or
   significantly reduce the rate of) the flow when excessive
   congestion is detected [I-D.fairhurst-tsvwg-circuit-breaker].
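[Editor's note: a minimal sketch of the idea. The loss-rate threshold and the number of consecutive bad intervals are illustrative values of ours, not taken from [I-D.fairhurst-tsvwg-circuit-breaker]:

```python
class CircuitBreaker:
    """Trip (stop sending) when measured loss stays excessive (sketch)."""

    def __init__(self, loss_threshold: float = 0.1, intervals_to_trip: int = 3):
        self.loss_threshold = loss_threshold       # illustrative values only
        self.intervals_to_trip = intervals_to_trip
        self.bad_intervals = 0
        self.tripped = False

    def report(self, sent: int, lost: int) -> None:
        """Feed per-interval counters, e.g., derived from receiver reports."""
        if sent and lost / sent > self.loss_threshold:
            self.bad_intervals += 1
        else:
            self.bad_intervals = 0                 # require consecutive bad intervals
        if self.bad_intervals >= self.intervals_to_trip:
            self.tripped = True                    # terminate (or drastically reduce) the flow

cb = CircuitBreaker()
for _ in range(3):
    cb.report(sent=100, lost=40)  # 40% loss, sustained over three intervals
assert cb.tripped
```
]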
   This is a safety measure to prevent congestion collapse (starvation
   of resources available to other flows), essential for an Internet
   that is heterogeneous and for traffic that is hard to predict in
   advance.

   A circuit breaker is intended as a protection mechanism of last
   resort.  Under normal circumstances, a circuit breaker should not
   be triggered; it is designed to protect things when there is severe
   overload.  The goal is usually to limit the maximum transmission
   rate that reflects the available capacity of a network path.
   Circuit breakers can operate on individual UDP flows or traffic
   aggregates, e.g., traffic sent using a network tunnel.  Later
   sections provide examples of cases where circuit breakers may or
   may not be desirable.

   [I-D.fairhurst-tsvwg-circuit-breaker] provides guidance on the use
   of circuit breakers and examples of usage.  The use of a circuit
   breaker in RTP is specified in
   [I-D.ietf-avtcore-rtp-circuit-breakers].

3.1.6.  UDP Tunnels

   One increasingly popular use of UDP is as a tunneling protocol,
   where a tunnel endpoint encapsulates the packets of another
   protocol inside UDP datagrams and transmits them to another tunnel
   endpoint, which decapsulates the UDP datagrams and forwards the
   original packets contained in the payload.  Tunnels establish
   virtual links that appear to directly connect locations that are
   distant in the physical Internet topology and can be used to create
   virtual (private) networks.  Using UDP as a tunneling protocol is
   attractive when the payload protocol is not supported by
   middleboxes that may exist along the path, because many middleboxes
   support transmission using UDP.

   Well-implemented tunnels are generally invisible to the endpoints
   that happen to transmit over a path that includes tunneled links.
   On the other hand, to the routers along the path of a UDP tunnel,
   i.e., the routers between the two tunnel endpoints, the traffic
   that a UDP tunnel generates is a regular UDP flow, and the
   encapsulator and decapsulator appear as regular UDP-sending and
   -receiving applications.  Because other flows can share the path
   with one or more UDP tunnels, congestion control needs to be
   considered.

   Two factors determine whether a UDP tunnel needs to employ specific
   congestion control mechanisms -- first, whether the payload traffic
   is IP-based; second, whether the tunneling scheme generates UDP
   traffic at a volume that corresponds to the volume of payload
   traffic carried within the tunnel.

   IP-based traffic is generally assumed to be congestion-controlled,
   i.e., it is assumed that the transport protocols generating IP-
   based traffic at the sender already employ mechanisms that are
   sufficient to address congestion on the path.  Consequently, a
   tunnel carrying IP-based traffic should already interact
   appropriately with other traffic sharing the path, and specific
   congestion control mechanisms for the tunnel are not necessary.

   However, if the IP traffic in the tunnel is known to not be
   congestion-controlled, additional measures are RECOMMENDED in order
   to limit the impact of the tunneled traffic on other traffic
   sharing the path.

   The following guidelines define these possible cases in more
   detail:

   1.  A tunnel generates UDP traffic at a volume that corresponds to
       the volume of payload traffic, and the payload traffic is IP-
       based and congestion-controlled.

       This is arguably the most common case for Internet tunnels.  In
       this case, the UDP tunnel SHOULD NOT employ its own congestion
       control mechanism, because congestion losses of tunneled
       traffic will already trigger an appropriate congestion response
       at the original senders of the tunneled traffic.

       Note that this guideline is built on the assumption that most
       IP-based communication is congestion-controlled.  If a UDP
       tunnel is used for IP-based traffic that is known to not be
       congestion-controlled, the next set of guidelines applies.

   2.  A tunnel generates UDP traffic at a volume that corresponds to
       the volume of payload traffic, and the payload traffic is not
       known to be IP-based, or is known to be IP-based but not
       congestion-controlled.

       This can be the case, for example, when some link-layer
       protocols are encapsulated within UDP (but not all link-layer
       protocols; some are congestion-controlled).  Because it is not
       known that congestion losses of tunneled non-IP traffic will
       trigger an appropriate congestion response at the senders, the
       UDP tunnel SHOULD employ an appropriate congestion control
       mechanism.  Because tunnels are usually bulk-transfer
       applications as far as the intermediate routers are concerned,
       the guidelines in Section 3.1.1 apply.

   3.  A tunnel generates UDP traffic at a volume that does not
       correspond to the volume of payload traffic, independent of
       whether the payload traffic is IP-based or congestion-
       controlled.

       Examples of this class include UDP tunnels that send at a
       constant rate, increase their transmission rates under loss
       (for example, due to increasing redundancy when Forward Error
       Correction is used), or are otherwise unconstrained in their
       transmission behavior.  These specialized uses of UDP for
       tunneling go beyond the scope of the general guidelines given
       in this document.  The implementer of such specialized tunnels
       SHOULD carefully consider congestion control in the design of
       their tunneling mechanism and SHOULD consider use of a circuit
       breaker mechanism.
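[Editor's note: the three cases above amount to a small decision rule. A sketch; the boolean inputs are our shorthand for what a tunnel implementer knows about the payload:

```python
def tunnel_needs_congestion_control(payload_is_ip: bool,
                                    payload_congestion_controlled: bool,
                                    volume_tracks_payload: bool) -> bool:
    """Decision rule derived from the three tunnel cases above."""
    if not volume_tracks_payload:
        # Case 3: constant-rate, redundancy-inflating, or otherwise
        # unconstrained tunnels need their own congestion control
        # and/or a circuit breaker.
        return True
    if payload_is_ip and payload_congestion_controlled:
        return False  # Case 1: inner senders already respond to congestion.
    return True       # Case 2: non-IP or non-congestion-controlled payload.

assert tunnel_needs_congestion_control(True, True, True) is False   # case 1
assert tunnel_needs_congestion_control(False, False, True) is True  # case 2
assert tunnel_needs_congestion_control(True, True, False) is True   # case 3
```
]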

Designing a tunneling mechanism requires significantly more expertise than needed for many other UDP applications, because tunnels are usually intended to be transparent to the endpoints transmitting over them, so they need to correctly emulate the behavior of an IP link, e.g., handling fragmentation, generating and responding to ICMP messages, etc. At the same time, the tunneled traffic is application traffic like any other from the perspective of the networks the tunnel transmits over. This document only touches upon the congestion control considerations for implementing UDP tunnels; a discussion of other required tunneling behavior is out of scope.

3.2. Message Size Guidelines

IP fragmentation lowers the efficiency and reliability of Internet communication. The loss of a single fragment results in the loss of the entire fragmented packet, because even if all other fragments are received correctly, the original packet cannot be reassembled and delivered. This fundamental issue with fragmentation exists for both IPv4 and IPv6. In addition, some network address translators (NATs) and firewalls drop IP fragments. The network address translation performed by a NAT only operates on complete IP packets, and some firewall policies also require inspection of complete IP packets. Even so, some NATs and firewalls simply do not implement the necessary reassembly functionality and instead choose to drop all fragments. Finally, [RFC4963] documents other issues specific to IPv4 fragmentation.

Due to these issues, an application SHOULD NOT send UDP datagrams that result in IP packets that exceed the MTU of the path to the destination.
Consequently, an application SHOULD either use the path MTU information provided by the IP layer or implement path MTU discovery itself [RFC1191][RFC1981][RFC4821] to determine whether the path to a destination will support its desired message size without fragmentation.

Applications that do not follow this recommendation to do PMTU discovery SHOULD still avoid sending UDP datagrams that would result in IP packets that exceed the path MTU. Because the actual path MTU is unknown, such applications SHOULD fall back to sending messages that are shorter than the default effective MTU for sending (EMTU_S in [RFC1122]). For IPv4, EMTU_S is the smaller of 576 bytes and the first-hop MTU [RFC1122]. For IPv6, EMTU_S is 1280 bytes [RFC2460]. The effective PMTU for a directly connected destination (with no routers on the path) is the configured interface MTU, which could be less than the maximum link payload size. Transmission of minimum-sized UDP datagrams is inefficient over paths that support a larger PMTU, which is a second reason to implement PMTU discovery.

To determine an appropriate UDP payload size, applications MUST subtract the size of the IP header (which includes any IPv4 optional headers or IPv6 extension headers) as well as the length of the UDP header (8 bytes) from the PMTU size. This size, known as the Maximum Segment Size (MSS), can be obtained from the TCP/IP stack [RFC1122].

Applications that do not send messages that exceed the effective PMTU of IPv4 or IPv6 need not implement any of the above mechanisms. Note that the presence of tunnels can cause an additional reduction of the effective PMTU, so implementing PMTU discovery may be beneficial.
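
The payload-size arithmetic above can be sketched as follows. The helper name is illustrative; the header sizes assume no IPv4 options or IPv6 extension headers unless a length for them is passed explicitly.

```python
UDP_HEADER = 8        # bytes, fixed (RFC 768)
IPV4_MIN_HEADER = 20  # bytes, without IPv4 options
IPV6_HEADER = 40      # bytes, without extension headers

def max_udp_payload(pmtu, ipv6, ip_options_len=0):
    """Largest UDP payload that fits in one unfragmented IP packet:
    PMTU minus the IP header (plus any options or extension headers)
    minus the 8-byte UDP header."""
    ip_header = (IPV6_HEADER if ipv6 else IPV4_MIN_HEADER) + ip_options_len
    return pmtu - ip_header - UDP_HEADER

# Fallback sizes when the path MTU is unknown (EMTU_S; RFC 1122 / RFC 2460):
assert max_udp_payload(576, ipv6=False) == 548    # conservative IPv4 payload
assert max_udp_payload(1280, ipv6=True) == 1232   # IPv6 minimum-MTU payload
```
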

Applications that fragment an application-layer message into multiple UDP datagrams SHOULD perform this fragmentation so that each datagram can be received independently, and be independently retransmitted in the case where an application implements its own reliability mechanisms.

Packetization Layer Path MTU Discovery (PLPMTUD) [RFC4821] does not rely upon network support for ICMP messages and is therefore considered more robust than standard PMTUD. To operate, PLPMTUD requires changes to the way the transport is used, both to transmit probe packets and to account for the loss or success of these probes. This affects not only the PMTU discovery algorithm but also loss recovery, congestion control, etc. These updated mechanisms can be implemented within a connection-oriented transport (e.g., TCP, SCTP, DCCP), but they are not a part of UDP. PLPMTUD therefore places additional design requirements on a UDP application that wishes to use this method.

3.3. Reliability Guidelines

Application designers are generally aware that UDP does not provide any reliability, e.g., it does not retransmit any lost packets. Often, this is a main reason to consider UDP as a transport. Applications that do require reliable message delivery MUST implement an appropriate mechanism themselves.

UDP also does not protect against datagram duplication, i.e., an application may receive multiple copies of the same UDP datagram, with some duplicates arriving potentially much later than the first. Application designers SHOULD verify that their application handles such datagram duplication gracefully, and may consequently need to implement mechanisms to detect duplicates. Even if UDP datagram reception triggers only idempotent operations, applications may want to suppress duplicate datagrams to reduce load.
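
A sketch of the duplicate-suppression idea above: since UDP carries no sequence information, the application must add its own sequence number to each message. The class and the window size below are illustrative assumptions, not part of any standard mechanism.

```python
class DuplicateFilter:
    """Suppress duplicate datagrams using an application-level sequence
    number carried in each message (UDP itself provides none)."""

    def __init__(self, window=1024):
        self.window = window   # how far back we remember sequence numbers
        self.highest = -1      # highest sequence number accepted so far
        self.seen = set()

    def accept(self, seq):
        """Return True for a new datagram, False for a duplicate or one
        so old that it falls outside the tracking window."""
        if seq in self.seen or seq <= self.highest - self.window:
            return False
        self.seen.add(seq)
        self.highest = max(self.highest, seq)
        # Forget state older than the window to bound memory use.
        self.seen = {s for s in self.seen if s > self.highest - self.window}
        return True
```

Reordered-but-new datagrams inside the window are still accepted; only exact repeats and datagrams older than the window are dropped.
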

Applications that require ordered delivery MUST reestablish datagram ordering themselves. The Internet can significantly delay some packets with respect to others, e.g., due to routing transients, intermittent connectivity, or mobility. This can cause reordering, where UDP datagrams arrive at the receiver in an order different from the transmission order.

It is important to note that the time by which packets are reordered or after which duplicates can still arrive can be very large. Even more importantly, there is no well-defined upper boundary here. [RFC0793] defines the maximum delay a TCP segment should experience -- the Maximum Segment Lifetime (MSL) -- as 2 minutes. No other RFC defines an MSL for other transport protocols or IP itself. The MSL value defined for TCP is conservative enough that it SHOULD be used by other protocols, including UDP. Therefore, applications SHOULD be robust to the reception of delayed or duplicate packets that are received within this 2-minute interval.

Instead of implementing these relatively complex reliability mechanisms by itself, an application that requires reliable and ordered message delivery SHOULD whenever possible choose an IETF standard transport protocol that provides these features.

3.4. Checksum Guidelines

The UDP header includes an optional, 16-bit one's complement checksum that provides an integrity check. These checks are not strong from a coding or cryptographic perspective, and are not designed to detect physical-layer errors or malicious modification of the datagram [RFC3819]. Application developers SHOULD implement additional checks where data integrity is important, e.g., through a Cyclic Redundancy Check (CRC) included with the data to verify the integrity of an entire object/file sent over the UDP service.
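
A minimal sketch of such an application-level integrity check, using a CRC-32 trailer appended to each message. The trailer framing is an assumption for illustration; real protocols define their own check placement and algorithm.

```python
import struct
import zlib

def add_crc(payload):
    """Append a CRC-32 over the payload before handing it to UDP."""
    return payload + struct.pack("!I", zlib.crc32(payload))

def check_crc(datagram):
    """Return the payload if the trailing CRC-32 matches, else None."""
    if len(datagram) < 4:
        return None   # too short to even carry the trailer
    payload, received = datagram[:-4], struct.unpack("!I", datagram[-4:])[0]
    return payload if zlib.crc32(payload) == received else None
```

Note that a CRC detects accidental corruption only; it offers no protection against deliberate modification, for which the security mechanisms of Section 6 are needed.
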

The UDP checksum provides a statistical guarantee that the payload was not corrupted in transit. It also allows the receiver to verify that it was the intended destination of the packet, because it covers the IP addresses, port numbers, and protocol number, and it verifies that the packet is not truncated or padded, because it covers the size field. It therefore protects an application against receiving corrupted payload data in place of, or in addition to, the data that was sent. More description of the set of checks performed using the checksum field is provided in Section 3.1 of [RFC6936].

Applications SHOULD enable UDP checksums. For IPv4, [RFC0768] permits the option to disable their use. Use of the UDP checksum is required when applications transmit UDP over IPv6 [RFC2460]. This requirement was updated in [RFC6935], but only for specific protocols and applications, and the implementation of the set of functions defined in [RFC6936] is then REQUIRED. These additional design requirements for using a zero IPv6 UDP checksum [RFC6936] are not present for IPv4, since the IPv4 network-layer header validates information that is not protected in an IPv6 packet.

Applications that choose to disable UDP checksums when transmitting over IPv4 MUST NOT make assumptions regarding the correctness of received data and MUST behave correctly when a UDP datagram is received that was originally sent to a different destination or is otherwise corrupted.

3.4.1. UDP-Lite

A special class of applications can derive benefit from having partially damaged payloads delivered, rather than discarded, when using paths that include error-prone links. Such applications can tolerate payload corruption and MAY choose to use the Lightweight User Datagram Protocol (UDP-Lite) [RFC3828] variant of UDP instead of basic UDP. Applications that choose to use UDP-Lite instead of UDP should still follow the congestion control and other guidelines described for use with UDP in Section 3.

UDP-Lite changes the semantics of the UDP "payload length" field to that of a "checksum coverage length" field. Otherwise, UDP-Lite is semantically identical to UDP. The interface of UDP-Lite differs from that of UDP by the addition of a single (socket) option that communicates a checksum coverage length value: at the sender, this specifies the intended checksum coverage, with the remaining unprotected part of the payload called the "error-insensitive part". By default, the UDP-Lite checksum coverage extends across the entire datagram. If required, an application may dynamically modify this length value, e.g., to offer greater protection to some messages. UDP-Lite always verifies that a packet was delivered to the intended destination, i.e., it always verifies the header fields. Errors in the insensitive part will not cause a UDP-Lite datagram to be discarded by the destination. Applications using UDP-Lite therefore MUST NOT make assumptions regarding the correctness of the data received in the insensitive part of the UDP-Lite payload.

A UDP-Lite sender SHOULD select the minimum checksum coverage to include all sensitive payload information. For example, applications that use the Real-time Transport Protocol (RTP) [RFC3550] will likely want to protect the RTP header against corruption. Where appropriate, applications MUST also introduce their own validity checks for protocol information carried in the insensitive part of the UDP-Lite payload (e.g., internal CRCs).

A UDP-Lite receiver MUST set a minimum coverage threshold for incoming packets that is not smaller than the smallest coverage used by the sender [RFC3828].
The receiver SHOULD select a threshold that is sufficiently large to block packets with an inappropriately short coverage field. This may be a fixed value, or it may be negotiated by an application. UDP-Lite itself does not provide mechanisms to negotiate the checksum coverage between the sender and receiver.

Applications can still experience packet loss when using UDP-Lite. The enhancements offered by UDP-Lite rely upon a link being able to intercept the UDP-Lite header to correctly identify the partial coverage required. When tunnels and/or encryption are used, this can result in UDP-Lite datagrams being treated the same as UDP datagrams, i.e., result in packet loss. Use of IP fragmentation can also prevent special treatment for UDP-Lite datagrams, and this is another reason why applications SHOULD avoid IP fragmentation (Section 3.2).

Current support for middlebox traversal using UDP-Lite is poor, because UDP-Lite uses a different IPv4 protocol number or IPv6 "next header" value than that used for UDP; therefore, few middleboxes are currently able to interpret UDP-Lite and take appropriate actions when forwarding the packet. This makes UDP-Lite less suited for applications needing general Internet support, until such time as UDP-Lite has achieved better support in middleboxes and endpoints.

3.5. Middlebox Traversal Guidelines

Network address translators (NATs) and firewalls are examples of intermediary devices ("middleboxes") that can exist along an end-to-end path. A middlebox typically performs a function that requires it to maintain per-flow state. For connection-oriented protocols, such as TCP, middleboxes snoop and parse the connection-management information and create and destroy per-flow state accordingly. For a connectionless protocol such as UDP, this approach is not possible.
Consequently, middleboxes may create per-flow state when they see a packet that -- according to some local criteria -- indicates a new flow, and destroy the state after some period of time during which no packets belonging to the same flow have arrived.

Depending on the specific function that the middlebox performs, this behavior can introduce a time-dependency that restricts the kinds of UDP traffic exchanges that will be successful across the middlebox. For example, NATs and firewalls typically define the partial path on one side of them to be interior to the domain they serve, whereas the partial path on their other side is defined to be exterior to that domain. Per-flow state is typically created when the first packet crosses from the interior to the exterior, and while the state is present, NATs and firewalls will forward return traffic. Return traffic that arrives after the per-flow state has timed out is dropped, as is other traffic that arrives from the exterior.

Many applications that use UDP for communication operate across middleboxes without needing to employ additional mechanisms. One example is the Domain Name System (DNS), which has a strict request-response communication pattern that typically completes within seconds.

Other applications may experience communication failures when middleboxes destroy the per-flow state associated with an application session during periods when the application does not exchange any UDP traffic. Applications SHOULD be able to gracefully handle such communication failures and implement mechanisms to re-establish application-layer sessions and state.

For some applications, such as media transmissions, this re-synchronization is highly undesirable, because it can cause user-perceivable playback artifacts.
Such specialized applications MAY send periodic keep-alive messages to attempt to refresh middlebox state. It is important to note that keep-alive messages are NOT RECOMMENDED for general use -- they are unnecessary for many applications and can consume significant amounts of system and network resources.

An application that needs to employ keep-alives to deliver useful service over UDP in the presence of middleboxes SHOULD NOT transmit them more frequently than once every 15 seconds and SHOULD use longer intervals when possible. No common timeout has been specified for per-flow UDP state for arbitrary middleboxes. NATs require a state timeout of 2 minutes or longer [RFC4787]. However, empirical evidence suggests that a significant fraction of currently deployed middleboxes unfortunately use shorter timeouts. The timeout of 15 seconds originates with the Interactive Connectivity Establishment (ICE) protocol [RFC5245]. When an application is deployed in a controlled network environment, the deployer SHOULD investigate whether the target environment allows applications to use longer intervals, or whether it offers mechanisms to explicitly control middlebox state timeout durations, for example, using Middlebox Communications (MIDCOM) [RFC3303], Next Steps in Signaling (NSIS) [RFC5973], or Universal Plug and Play (UPnP) [UPnP]. It is RECOMMENDED that applications apply slight random variations ("jitter") to the timing of keep-alive transmissions, to reduce the potential for persistent synchronization between keep-alive transmissions from different hosts.

Sending keep-alives is not a substitute for implementing a mechanism to recover from broken sessions. Like all UDP datagrams, keep-alives can be delayed or dropped, causing middlebox state to time out.
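
The keep-alive timing guidance above (no more often than once every 15 seconds, with random jitter) might be sketched as follows. The 10% upward-only jitter fraction is an illustrative assumption, not a value from this document; jittering upward keeps the interval from ever dropping below the 15-second floor.

```python
import random

MIN_KEEPALIVE = 15.0  # seconds; SHOULD NOT send more frequently (Section 3.5)

def next_keepalive_delay(base=MIN_KEEPALIVE, jitter_fraction=0.1):
    """Delay before the next keep-alive: the base interval plus random
    upward jitter, to avoid persistent synchronization between hosts.
    The 10% jitter fraction is an illustrative choice."""
    if base < MIN_KEEPALIVE:
        raise ValueError("keep-alive interval must be at least 15 seconds")
    return base * (1.0 + random.uniform(0.0, jitter_fraction))
```
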
In addition, the congestion control guidelines in Section 3.1 cover all UDP transmissions by an application, including the transmission of middlebox keep-alives. Congestion control may thus lead to delays or temporary suspension of keep-alive transmission.

Keep-alive messages are NOT RECOMMENDED for general use. They are unnecessary for many applications and may consume significant resources. For example, on battery-powered devices, if an application needs to maintain connectivity for long periods with little traffic, the frequency at which keep-alives are sent can become the determining factor that governs power consumption, depending on the underlying network technology. Because many middleboxes are designed to require keep-alives for TCP connections at a frequency that is much lower than that needed for UDP, this difference alone can often be sufficient to prefer TCP over UDP for these deployments. On the other hand, anecdotal evidence suggests that direct communication through middleboxes, e.g., by using ICE [RFC5245], succeeds less often with TCP than with UDP. The trade-offs between different transport protocols -- especially when it comes to middlebox traversal -- deserve careful analysis.

UDP applications need to be designed with the understanding that there are many variants of middlebox behavior, and that although UDP is connectionless, middleboxes often maintain state for each UDP flow. Using multiple flows can consume available state space and can also lead to changes in the way the middlebox handles subsequent packets (either to protect its internal resources or to prevent perceived misuse). This has implications for applications that use multiple UDP flows in parallel, even on multiple ports (see Section 5.1.1).

4. Multicast UDP Usage Guidelines

This section complements Section 3 by providing additional guidelines that are applicable to multicast and broadcast usage of UDP.

Multicast and broadcast transmission [RFC1112] usually employ the UDP transport protocol, although they may be used with other transport protocols (e.g., UDP-Lite).

There are currently two models of multicast delivery: the Any-Source Multicast (ASM) model as defined in [RFC1112] and the Source-Specific Multicast (SSM) model as defined in [RFC4607]. ASM group members will receive all data sent to the group by any source, while SSM constrains the distribution tree to only one single source.

Specialized classes of applications also use UDP for IP multicast or broadcast [RFC0919]. The design of such specialized applications requires expertise that goes beyond simple, unicast-specific guidelines, since these senders may transmit to potentially very many receivers across potentially very heterogeneous paths at the same time, which significantly complicates congestion control, flow control, and reliability mechanisms. This section provides guidance on multicast UDP usage.

Use of broadcast by an application is normally constrained by routers to the local subnetwork. However, use of tunneling techniques and proxies can and does result in some broadcast traffic traversing Internet paths. These guidelines therefore also apply to broadcast traffic.

The IETF has defined a reliable multicast framework [RFC3048] and several building blocks to aid the designers of multicast applications, such as [RFC3738] or [RFC4654]. Anycast senders must be aware that successive messages sent to the same anycast IP address may be delivered to different anycast nodes, i.e., arrive at different locations in the topology.

Most UDP tunnels that carry IP multicast traffic use a tunnel encapsulation with a unicast destination address. These MUST follow the same requirements as a tunnel carrying unicast data (see Section 3.1.6). There are deployment cases and solutions where the outer header of a UDP tunnel contains a multicast destination address, such as [RFC6513]. These cases are primarily deployed in controlled environments over reserved capacity, often operating within a single administrative domain, or between two domains over a bilaterally agreed upon path with reserved bandwidth; congestion control is therefore OPTIONAL, but circuit breaker techniques are still RECOMMENDED in order to restore some degree of service should the offered load exceed the reserved capacity (e.g., due to misconfiguration).

4.1. Multicast Congestion Control Guidelines

Unicast congestion-controlled transport mechanisms are often not applicable to multicast distribution services, or simply do not scale to large multicast trees, since they require bi-directional communication and adapt the sending rate to accommodate the network conditions of a single receiver. In contrast, multicast distribution trees may fan out to massive numbers of receivers, which limits the scalability of an in-band return channel to control the sending rate, and the one-to-many nature of multicast distribution trees prevents adapting the rate to the requirements of an individual receiver. For this reason, generating TCP-compatible aggregate flow rates for Internet multicast data, either native or tunneled, is the responsibility of the application.

Congestion control mechanisms for multicast may operate on longer timescales than those for unicast (e.g., due to the higher group RTT of a heterogeneous group); appropriate methods are particularly important for any multicast session where all or part of the multicast distribution tree spans an access network (e.g., a home gateway).

Multicast congestion control needs to consider the potential heterogeneity of both the multicast distribution tree and the receivers belonging to a group. Heterogeneity may manifest itself in some receivers experiencing more loss than others, higher delay, and/or less ability to respond to network conditions. Any multicast-enabled receiver may attempt to join and receive traffic from any group. This may imply the need for rate limits on individual receivers or the aggregate multicast service. Note that there is no way at the transport layer to prevent a join message propagating to the next-hop router. A multicast congestion control method MAY therefore decide not to reduce the rate of the entire multicast group in response to a report received from a single receiver; instead, it can decide to expel each congested receiver from the multicast group and to then distribute content to these congested receivers at a lower rate using unicast congestion control. Care needs to be taken when this action results in many flows being simultaneously transitioned, so that this does not result in excessive traffic exacerbating congestion and potentially contributing to congestion collapse.

Some classes of multicast applications support real-time transmissions in which the quality of the transfer may be monitored at the receiver. Applications that detect a significant reduction in user quality SHOULD regard this as a congestion signal (e.g., to leave a group using layered multicast encoding).

4.1.1. Bulk Transfer Multicast Applications

Applications that perform bulk transmission of data over a multicast distribution tree, i.e., applications that exchange more than a few UDP datagrams per RTT, SHOULD implement a method for congestion control. The currently RECOMMENDED IETF methods are: Asynchronous Layered Coding (ALC) [RFC5775], TCP-Friendly Multicast Congestion Control (TFMCC) [RFC4654], Wave and Equation Based Rate Control (WEBRC) [RFC3738], the NACK-Oriented Reliable Multicast (NORM) transport protocol [RFC5740], File Delivery over Unidirectional Transport (FLUTE) [RFC6726], and the Real-time Transport Protocol with its control protocol (RTP/RTCP) [RFC3550].

An application can alternatively implement another congestion control scheme following the guidelines of [RFC2887] and utilizing the framework of [RFC3048]. Bulk transfer applications that choose not to implement [RFC4654], [RFC5775], [RFC3738], [RFC5740], [RFC6726], or [RFC3550] SHOULD implement a congestion control scheme that results in bandwidth use that competes fairly with TCP within an order of magnitude.

Section 2 of [RFC3551] states that multimedia applications SHOULD monitor the packet loss rate to ensure that it is within acceptable parameters. Packet loss is considered acceptable if a TCP flow across the same network path under the same network conditions would achieve an average throughput, measured on a reasonable timescale, that is not less than that of the UDP flow. The comparison to TCP cannot be specified exactly, but is intended as an "order-of-magnitude" comparison in timescale and throughput.

4.1.2. Low Data-Volume Multicast Applications

All the recommendations in Section 3.1.2 are also applicable to such multicast applications.

4.2. Message Size Guidelines for Multicast

A multicast application SHOULD NOT send UDP datagrams that result in IP packets that exceed the effective MTU, as described in Section 3 of [RFC6807]. Consequently, an application SHOULD either use the effective MTU information provided by the Population Count Extensions to Protocol Independent Multicast [RFC6807] or implement path MTU discovery itself (see Section 3.2) to determine whether the path to each destination will support its desired message size without fragmentation.

5. Programming Guidelines

The de facto standard application programming interface (API) for TCP/IP applications is the "sockets" interface [POSIX]. Some platforms also offer applications the ability to directly assemble and transmit IP packets through "raw sockets" or similar facilities. This is a second, more cumbersome method of using UDP. The guidelines in this document cover all such methods through which an application may use UDP. Because the sockets API is by far the most common method, the remainder of this section discusses it in more detail.

Although the sockets API was developed for UNIX in the early 1980s, a wide variety of non-UNIX operating systems also implement it. The sockets API supports both IPv4 and IPv6 [RFC3493]. The UDP sockets API differs from that for TCP in several key ways. Because application programmers are typically more familiar with the TCP sockets API, this section discusses these differences. [STEVENS] provides usage examples of the UDP sockets API.

UDP datagrams may be directly sent and received, without any connection setup. Using the sockets API, applications can receive packets from more than one IP source address on a single UDP socket. Some servers use this to exchange data with more than one remote host through a single UDP socket at the same time.
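
This multi-source reception can be demonstrated over the loopback interface with Python's wrapper around the sockets API. The sketch is self-contained, and the variable names are illustrative.

```python
import socket

# One receiving socket, two independent senders; the receiver tells the
# peers apart by the source address that recvfrom() reports.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))   # let the OS choose an ephemeral port
recv_sock.settimeout(5.0)
addr = recv_sock.getsockname()

senders = [socket.socket(socket.AF_INET, socket.SOCK_DGRAM) for _ in range(2)]
for i, s in enumerate(senders):
    s.sendto(b"hello from sender %d" % i, addr)

sources = set()
for _ in range(2):
    data, source = recv_sock.recvfrom(2048)   # source is the peer's (ip, port)
    sources.add(source)

assert len(sources) == 2   # two distinct remote endpoints, one local socket

for s in senders:
    s.close()
recv_sock.close()
```
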
Many applications need to ensure that they receive packets from a particular source address; these applications MUST implement corresponding checks at the application layer or explicitly request that the operating system filter the received packets.

If a client/server application executes on a host with more than one IP interface, the application SHOULD send any UDP responses with an IP source address that matches the IP destination address of the UDP datagram that carried the request (see [RFC1122], Section 4.1.3.5). Many middleboxes expect this transmission behavior and drop replies that are sent from a different IP address, as explained in Section 3.5.

A UDP receiver can receive a valid UDP datagram with a zero-length payload. Note that this is different from a return value of zero from a read() socket call, which for TCP indicates the end of the connection.

Many operating systems also allow a UDP socket to be connected, i.e., to bind a UDP socket to a specific pair of addresses and ports. This is similar to the corresponding TCP sockets API functionality. However, for UDP, this is only a local operation that serves to simplify the local send/receive functions and to filter the traffic for the specified addresses and ports. Binding a UDP socket does not establish a connection -- UDP does not notify the remote end when a local UDP socket is bound. Binding a socket also allows configuring options that affect the UDP or IP layers, for example, use of the UDP checksum or the IP Timestamp option. On some stacks, a bound socket also allows an application to be notified when ICMP error messages are received for its transmissions [RFC1122].

UDP provides no flow control, i.e., the sender at any given time does not know whether the receiver is able to handle incoming transmissions.
This is another reason why UDP-based applications need to be robust in the presence of packet loss. This loss can also occur within the sending host, when an application sends data faster than the line rate of the outbound network interface. It can also occur on the destination, where receive calls fail to return all the data that was sent when the application issues them too infrequently (i.e., such that the receive buffer overflows). Robust flow control mechanisms are difficult to implement, which is why applications that need this functionality SHOULD consider using a full-featured transport protocol such as TCP.

When an application closes a TCP, SCTP, or DCCP socket, the transport protocol on the receiving host is required to maintain TIME-WAIT state. This prevents delayed packets from the closed connection instance from being mistakenly associated with a later connection instance that happens to reuse the same IP address and port pairs. The UDP protocol does not implement such a mechanism. Therefore, UDP-based applications need to be robust to this case: one application may close a socket or terminate, followed in time by another application receiving on the same port, and this later application may then receive packets intended for the first application that were delayed in the network.

5.1. Using UDP Ports

The rules and procedures for the management of the Service Name and Transport Protocol Port Number Registry are specified in [RFC6335]. Recommendations for the use of UDP ports are provided in [I-D.ietf-tsvwg-port-use].

A UDP sender SHOULD NOT use a zero source port value, and a UDP receiver should not bind to port zero.
Applications SHOULD implement corresponding receiver checks at the application layer or explicitly request that the operating system filter the received packets to prevent receiving packets with an arbitrary port. This measure is designed to provide additional protection from data injection attacks from an off-path source (where the port values may not be known). Although the source port value is often not directly used in multicast applications, it should still be set to a random or predetermined value.

The UDP port number fields have been used as a basis to design load-balancing solutions for IPv4. This approach has also been leveraged for IPv6 [RFC6438], but the IPv6 "flow label" [RFC6437] may also be used as a basis for entropy for load balancing. This use of the flow label for load balancing is consistent with its intended use, although further clarity was needed to ensure that the field can be consistently used for this purpose. Therefore, updated IPv6 flow label [RFC6437] and ECMP routing [RFC6438] usage were specified. Router vendors are encouraged to start using the flow label as a part of the flow hash, providing support for IP-level ECMP without requiring use of UDP. The end-to-end use of flow labels for load balancing is a long-term solution. Even though the usage of the flow label has been clarified, there will be a transition time before a significant proportion of endpoints start to assign a good-quality flow label to the flows that they originate. Load balancing using the transport header fields will therefore likely continue until widespread deployment is finally achieved.

5.1.1. Applications using Multiple UDP Ports

A single application may exchange several types of data. In some cases, this may require multiple UDP flows (e.g., multiple sets of flows, identified by different 5-tuples).
[RFC6335] recommends that application developers not apply to IANA for multiple well-known (user or system) port assignments.  It does not, however, discuss the implications of using multiple flows with the same well-known port or with pairs of dynamic ports (e.g., identified by a service name or signaling protocol).

Use of multiple flows can impact the network in several ways:

o  Starting a series of successive connections can increase the number of state bindings in middleboxes (e.g., NAPT or firewall) along the network path.  UDP-based middlebox traversal usually relies on timeouts to remove old state, since middleboxes are unaware when a particular flow ceases to be used by an application.

o  Using several flows at the same time may result in different network characteristics for each flow.  It cannot be assumed that they all follow the same path (e.g., when ECMP is used, traffic is intentionally hashed onto different parallel paths based on the port numbers).

o  Using several flows can also increase the occupancy of a binding or lookup table in a middlebox (e.g., NAPT or firewall), which may cause the device to change the way it manages the flow state.

o  Further, using excessive numbers of flows can degrade the ability of congestion control to react to congestion events, unless the congestion state is shared between all flows in a session.

Therefore, applications MUST NOT assume consistent behavior of middleboxes when multiple UDP flows are used; many devices respond differently as the number of ports used increases.  Using multiple flows with different QoS requirements requires applications to verify that the expected performance is achieved for each individual flow (five-tuple); see Section 3.1.4.

5.2.
ICMP Guidelines

Applications can utilize information about ICMP error messages that the UDP layer passes up for a variety of purposes [RFC1122]. Applications SHOULD appropriately validate the payload of ICMP messages to ensure these are received in response to transmitted traffic (i.e., a reported error condition that corresponds to a UDP datagram actually sent by the application).  This requires context, such as local state about communication instances to each destination, that, although readily available in connection-oriented transport protocols, is not always maintained by UDP-based applications.  Note that not all platforms have the necessary APIs to support this validation, and some platforms already perform this validation internally before passing ICMP information to the application.

Any application response to ICMP error messages SHOULD be robust to temporary routing failures; e.g., transient ICMP "unreachable" messages should not normally cause a communication abort.

6.  Security Considerations

UDP does not provide communications security.  Applications that need to protect their communications against eavesdropping, tampering, or message forgery SHOULD employ end-to-end security services provided by other IETF protocols.  Applications that respond to short requests with potentially large responses are vulnerable to amplification attacks and SHOULD authenticate the sender before responding.  The source IP address of a request is not a useful authenticator, because it can easily be spoofed.

One option for securing UDP communications is IPsec [RFC4301], which can provide authentication for flows of IP packets through the Authentication Header (AH) [RFC4302] and encryption and/or authentication through the Encapsulating Security Payload (ESP) [RFC4303].
Applications use the Internet Key Exchange (IKE) [RFC5996] to configure IPsec for their sessions.  Depending on how IPsec is configured for a flow, it can authenticate or encrypt the UDP headers as well as the UDP payloads.  If an application only requires authentication, ESP with no encryption but with authentication is often a better option than AH, because ESP can operate across middleboxes.  An application that uses IPsec requires the support of an operating system that implements the IPsec protocol suite.

Although it is possible to use IPsec to secure UDP communications, not all operating systems support IPsec or allow applications to easily configure it for their flows.  A second option for securing UDP communications is Datagram Transport Layer Security (DTLS) [RFC6347].  DTLS provides communication privacy by encrypting UDP payloads; it does not protect the UDP headers.  Applications can implement DTLS without relying on support from the operating system.

Many other options for authenticating or encrypting UDP payloads exist.  For example, the GSS-API security framework [RFC2743] or Cryptographic Message Syntax (CMS) [RFC5652] could be used to protect UDP payloads.  The IETF standard for securing RTP [RFC3550] communication sessions over UDP is the Secure Real-time Transport Protocol (SRTP) [RFC3711].  In some applications, a better solution is to protect larger stand-alone objects, such as files or messages, instead of individual UDP payloads.  In these situations, CMS [RFC5652], S/MIME [RFC5751], or OpenPGP [RFC4880] could be used.  In addition, there are many non-IETF protocols in this area.

Like congestion control mechanisms, security mechanisms are difficult to design and implement correctly.
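To illustrate how much a naive application-layer design leaves out, the following hypothetical sketch authenticates UDP payloads with an assumed pre-shared key and HMAC.  It provides integrity and data-origin authentication only: there is no confidentiality, no replay protection, and no key management, all of which a real protocol must address.

```python
import hmac
import hashlib
from typing import Optional

TAG_LEN = 32  # HMAC-SHA256 output length

# Illustrative application-layer authentication with a pre-shared key.
# Integrity/authenticity only: no confidentiality, no replay protection.
def seal(key: bytes, payload: bytes) -> bytes:
    """Prefix the payload with an HMAC-SHA256 tag before sending it."""
    tag = hmac.new(key, payload, hashlib.sha256).digest()
    return tag + payload

def open_sealed(key: bytes, datagram: bytes) -> Optional[bytes]:
    """Return the payload if the tag verifies, else None (drop silently)."""
    tag, payload = datagram[:TAG_LEN], datagram[TAG_LEN:]
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return payload if hmac.compare_digest(tag, expected) else None
```

A receiver would call open_sealed() on every datagram and discard failures without responding, to avoid becoming an amplification oracle.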
It is hence RECOMMENDED that applications employ well-known standard security mechanisms such as DTLS or IPsec, rather than inventing their own.

The Generalized TTL Security Mechanism (GTSM) [RFC5082] may be used with UDP applications (especially when the intended endpoint is on the same link as the sender).  This is a lightweight mechanism that allows a receiver to filter unwanted packets.

In terms of congestion control, [RFC2309] and [RFC2914] discuss the dangers of congestion-unresponsive flows to the Internet. [I-D.fairhurst-tsvwg-circuit-breaker] describes methods that can be used to set a performance envelope that can assist in preventing congestion collapse in the absence of congestion control or when the congestion control fails to react to congestion events.  This document provides guidelines for designers of UDP-based applications to congestion-control their transmissions and does not raise any additional security concerns.

7.  Summary

This section summarizes the guidelines made in Sections 3 and 6 in a tabular format (Table 1) for easy referencing.

+---------------------------------------------------------+---------+
| Recommendation                                          | Section |
+---------------------------------------------------------+---------+
| MUST tolerate a wide range of Internet path conditions  | 3       |
| SHOULD use a full-featured transport (TCP, SCTP, DCCP)  |         |
|                                                         |         |
| SHOULD control rate of transmission                     | 3.1     |
| SHOULD perform congestion control over all traffic      |         |
|                                                         |         |
| for bulk transfers,                                     | 3.1.1   |
| SHOULD consider implementing TFRC                       |         |
| else, SHOULD in other ways use bandwidth similar to TCP |         |
|                                                         |         |
| for non-bulk transfers,                                 | 3.1.2   |
| SHOULD measure RTT and transmit max. 1 datagram/RTT     |         |
| else, SHOULD send at most 1 datagram every 3 seconds    |         |
| SHOULD back-off retransmission timers following loss    |         |
|                                                         |         |
| for tunnels carrying IP traffic,                        | 3.1.6   |
| SHOULD NOT perform congestion control                   |         |
|                                                         |         |
| for non-IP tunnels or rate not determined by traffic,   | 3.1.6   |
| SHOULD perform congestion control                       |         |
|                                                         |         |
| SHOULD NOT send datagrams that exceed the PMTU, i.e.,   | 3.2     |
| SHOULD discover PMTU or send datagrams < minimum PMTU;  |         |
| specific application mechanisms are REQUIRED if PLPMTUD |         |
| is used.                                                |         |
|                                                         |         |
| SHOULD handle datagram loss, duplication, reordering    | 3.3     |
| SHOULD be robust to delivery delays up to 2 minutes     |         |
|                                                         |         |
| SHOULD enable IPv4 UDP checksum                         | 3.4     |
| SHOULD enable IPv6 UDP checksum; specific application   |         |
| mechanisms are REQUIRED if a zero IPv6 UDP checksum is  |         |
| used.                                                   |         |
| else, MAY use UDP-Lite with suitable checksum coverage  | 3.4.1   |
|                                                         |         |
| SHOULD NOT always send middlebox keep-alives            | 3.5     |
| MAY use keep-alives when needed (min. interval 15 sec)  |         |
|                                                         |         |
| MUST check IP source address                            | 5       |
| and, for client/server applications                     |         |
| SHOULD send responses from src address matching request |         |
|                                                         |         |
| SHOULD use standard IETF security protocols when needed | 6       |
+---------------------------------------------------------+---------+

                 Table 1: Summary of recommendations

8.  IANA Considerations

Note to RFC-Editor: please remove this entire section prior to publication.

This document raises no IANA considerations.
1354 9. Acknowledgments 1356 The middlebox traversal guidelines in Section 3.5 incorporate ideas 1357 from Section 5 of [I-D.ford-behave-app] by Bryan Ford, Pyda 1358 Srisuresh, and Dan Kegel. 1360 10. References 1362 10.1. Normative References 1364 [RFC0768] Postel, J., "User Datagram Protocol", STD 6, RFC 768, 1365 August 1980. 1367 [RFC0793] Postel, J., "Transmission Control Protocol", STD 7, RFC 1368 793, September 1981. 1370 [RFC1122] Braden, R., "Requirements for Internet Hosts - 1371 Communication Layers", STD 3, RFC 1122, October 1989. 1373 [RFC1191] Mogul, J. and S. Deering, "Path MTU discovery", RFC 1191, 1374 November 1990. 1376 [RFC1981] McCann, J., Deering, S., and J. Mogul, "Path MTU Discovery 1377 for IP version 6", RFC 1981, August 1996. 1379 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1380 Requirement Levels", BCP 14, RFC 2119, March 1997. 1382 [RFC2460] Deering, S. and R. Hinden, "Internet Protocol, Version 6 1383 (IPv6) Specification", RFC 2460, December 1998. 1385 [RFC2914] Floyd, S., "Congestion Control Principles", BCP 41, RFC 1386 2914, September 2000. 1388 [RFC3828] Larzon, L-A., Degermark, M., Pink, S., Jonsson, L-E., and 1389 G. Fairhurst, "The Lightweight User Datagram Protocol 1390 (UDP-Lite)", RFC 3828, July 2004. 1392 [RFC4787] Audet, F. and C. Jennings, "Network Address Translation 1393 (NAT) Behavioral Requirements for Unicast UDP", BCP 127, 1394 RFC 4787, January 2007. 1396 [RFC4821] Mathis, M. and J. Heffner, "Packetization Layer Path MTU 1397 Discovery", RFC 4821, March 2007. 1399 [RFC5348] Floyd, S., Handley, M., Padhye, J., and J. Widmer, "TCP 1400 Friendly Rate Control (TFRC): Protocol Specification", RFC 1401 5348, September 2008. 1403 [RFC5405] Eggert, L. and G. Fairhurst, "Unicast UDP Usage Guidelines 1404 for Application Designers", BCP 145, RFC 5405, November 1405 2008. 1407 [RFC6298] Paxson, V., Allman, M., Chu, J., and M. Sargent, 1408 "Computing TCP's Retransmission Timer", RFC 6298, June 1409 2011. 
1411 10.2. Informative References 1413 [FABER] Faber, T., Touch, J., and W. Yue, "The TIME-WAIT State in 1414 TCP and Its Effect on Busy Servers", Proc. IEEE Infocom, 1415 March 1999. 1417 [I-D.fairhurst-tsvwg-circuit-breaker] 1418 Fairhurst, G., "Network Transport Circuit Breakers", 1419 draft-fairhurst-tsvwg-circuit-breaker-01 (work in 1420 progress), May 2014. 1422 [I-D.ford-behave-app] 1423 Ford, B., "Application Design Guidelines for Traversal 1424 through Network Address Translators", draft-ford-behave- 1425 app-05 (work in progress), March 2007. 1427 [I-D.ietf-avtcore-rtp-circuit-breakers] 1428 Perkins, C. and V. Singh, "Multimedia Congestion Control: 1429 Circuit Breakers for Unicast RTP Sessions", draft-ietf- 1430 avtcore-rtp-circuit-breakers-08 (work in progress), 1431 December 2014. 1433 [I-D.ietf-tsvwg-port-use] 1434 Touch, J., "Recommendations for Transport Port Number 1435 Uses", draft-ietf-tsvwg-port-use-06 (work in progress), 1436 November 2014. 1438 [POSIX] IEEE Std. 1003.1-2001, , "Standard for Information 1439 Technology - Portable Operating System Interface (POSIX)", 1440 Open Group Technical Standard: Base Specifications Issue 1441 6, ISO/IEC 9945:2002, December 2001. 1443 [RFC0896] Nagle, J., "Congestion control in IP/TCP internetworks", 1444 RFC 896, January 1984. 1446 [RFC0919] Mogul, J., "Broadcasting Internet Datagrams", STD 5, RFC 1447 919, October 1984. 1449 [RFC1112] Deering, S., "Host extensions for IP multicasting", STD 5, 1450 RFC 1112, August 1989. 1452 [RFC1536] Kumar, A., Postel, J., Neuman, C., Danzig, P., and S. 1453 Miller, "Common DNS Implementation Errors and Suggested 1454 Fixes", RFC 1536, October 1993. 1456 [RFC1546] Partridge, C., Mendez, T., and W. Milliken, "Host 1457 Anycasting Service", RFC 1546, November 1993. 
1459 [RFC2309] Braden, B., Clark, D., Crowcroft, J., Davie, B., Deering, 1460 S., Estrin, D., Floyd, S., Jacobson, V., Minshall, G., 1461 Partridge, C., Peterson, L., Ramakrishnan, K., Shenker, 1462 S., Wroclawski, J., and L. Zhang, "Recommendations on 1463 Queue Management and Congestion Avoidance in the 1464 Internet", RFC 2309, April 1998. 1466 [RFC2675] Borman, D., Deering, S., and R. Hinden, "IPv6 Jumbograms", 1467 RFC 2675, August 1999. 1469 [RFC2743] Linn, J., "Generic Security Service Application Program 1470 Interface Version 2, Update 1", RFC 2743, January 2000. 1472 [RFC2887] Handley, M., Floyd, S., Whetten, B., Kermode, R., 1473 Vicisano, L., and M. Luby, "The Reliable Multicast Design 1474 Space for Bulk Data Transfer", RFC 2887, August 2000. 1476 [RFC3048] Whetten, B., Vicisano, L., Kermode, R., Handley, M., 1477 Floyd, S., and M. Luby, "Reliable Multicast Transport 1478 Building Blocks for One-to-Many Bulk-Data Transfer", RFC 1479 3048, January 2001. 1481 [RFC3124] Balakrishnan, H. and S. Seshan, "The Congestion Manager", 1482 RFC 3124, June 2001. 1484 [RFC3261] Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, 1485 A., Peterson, J., Sparks, R., Handley, M., and E. 1486 Schooler, "SIP: Session Initiation Protocol", RFC 3261, 1487 June 2002. 1489 [RFC3303] Srisuresh, P., Kuthan, J., Rosenberg, J., Molitor, A., and 1490 A. Rayhan, "Middlebox communication architecture and 1491 framework", RFC 3303, August 2002. 1493 [RFC3493] Gilligan, R., Thomson, S., Bound, J., McCann, J., and W. 1494 Stevens, "Basic Socket Interface Extensions for IPv6", RFC 1495 3493, February 2003. 1497 [RFC3550] Schulzrinne, H., Casner, S., Frederick, R., and V. 1498 Jacobson, "RTP: A Transport Protocol for Real-Time 1499 Applications", STD 64, RFC 3550, July 2003. 1501 [RFC3551] Schulzrinne, H. and S. Casner, "RTP Profile for Audio and 1502 Video Conferences with Minimal Control", STD 65, RFC 3551, 1503 July 2003. 
1505 [RFC3711] Baugher, M., McGrew, D., Naslund, M., Carrara, E., and K. 1506 Norrman, "The Secure Real-time Transport Protocol (SRTP)", 1507 RFC 3711, March 2004. 1509 [RFC3738] Luby, M. and V. Goyal, "Wave and Equation Based Rate 1510 Control (WEBRC) Building Block", RFC 3738, April 2004. 1512 [RFC3758] Stewart, R., Ramalho, M., Xie, Q., Tuexen, M., and P. 1513 Conrad, "Stream Control Transmission Protocol (SCTP) 1514 Partial Reliability Extension", RFC 3758, May 2004. 1516 [RFC3819] Karn, P., Bormann, C., Fairhurst, G., Grossman, D., 1517 Ludwig, R., Mahdavi, J., Montenegro, G., Touch, J., and L. 1518 Wood, "Advice for Internet Subnetwork Designers", BCP 89, 1519 RFC 3819, July 2004. 1521 [RFC4301] Kent, S. and K. Seo, "Security Architecture for the 1522 Internet Protocol", RFC 4301, December 2005. 1524 [RFC4302] Kent, S., "IP Authentication Header", RFC 4302, December 1525 2005. 1527 [RFC4303] Kent, S., "IP Encapsulating Security Payload (ESP)", RFC 1528 4303, December 2005. 1530 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1531 Congestion Control Protocol (DCCP)", RFC 4340, March 2006. 1533 [RFC4341] Floyd, S. and E. Kohler, "Profile for Datagram Congestion 1534 Control Protocol (DCCP) Congestion Control ID 2: TCP-like 1535 Congestion Control", RFC 4341, March 2006. 1537 [RFC4342] Floyd, S., Kohler, E., and J. Padhye, "Profile for 1538 Datagram Congestion Control Protocol (DCCP) Congestion 1539 Control ID 3: TCP-Friendly Rate Control (TFRC)", RFC 4342, 1540 March 2006. 1542 [RFC4607] Holbrook, H. and B. Cain, "Source-Specific Multicast for 1543 IP", RFC 4607, August 2006. 1545 [RFC4654] Widmer, J. and M. Handley, "TCP-Friendly Multicast 1546 Congestion Control (TFMCC): Protocol Specification", RFC 1547 4654, August 2006. 1549 [RFC4880] Callas, J., Donnerhacke, L., Finney, H., Shaw, D., and R. 1550 Thayer, "OpenPGP Message Format", RFC 4880, November 2007. 
1552 [RFC4960] Stewart, R., "Stream Control Transmission Protocol", RFC 1553 4960, September 2007. 1555 [RFC4963] Heffner, J., Mathis, M., and B. Chandler, "IPv4 Reassembly 1556 Errors at High Data Rates", RFC 4963, July 2007. 1558 [RFC4987] Eddy, W., "TCP SYN Flooding Attacks and Common 1559 Mitigations", RFC 4987, August 2007. 1561 [RFC5082] Gill, V., Heasley, J., Meyer, D., Savola, P., and C. 1562 Pignataro, "The Generalized TTL Security Mechanism 1563 (GTSM)", RFC 5082, October 2007. 1565 [RFC5245] Rosenberg, J., "Interactive Connectivity Establishment 1566 (ICE): A Protocol for Network Address Translator (NAT) 1567 Traversal for Offer/Answer Protocols", RFC 5245, April 1568 2010. 1570 [RFC5622] Floyd, S. and E. Kohler, "Profile for Datagram Congestion 1571 Control Protocol (DCCP) Congestion ID 4: TCP-Friendly Rate 1572 Control for Small Packets (TFRC-SP)", RFC 5622, August 1573 2009. 1575 [RFC5652] Housley, R., "Cryptographic Message Syntax (CMS)", STD 70, 1576 RFC 5652, September 2009. 1578 [RFC5740] Adamson, B., Bormann, C., Handley, M., and J. Macker, 1579 "NACK-Oriented Reliable Multicast (NORM) Transport 1580 Protocol", RFC 5740, November 2009. 1582 [RFC5751] Ramsdell, B. and S. Turner, "Secure/Multipurpose Internet 1583 Mail Extensions (S/MIME) Version 3.2 Message 1584 Specification", RFC 5751, January 2010. 1586 [RFC5775] Luby, M., Watson, M., and L. Vicisano, "Asynchronous 1587 Layered Coding (ALC) Protocol Instantiation", RFC 5775, 1588 April 2010. 1590 [RFC5971] Schulzrinne, H. and R. Hancock, "GIST: General Internet 1591 Signalling Transport", RFC 5971, October 2010. 1593 [RFC5973] Stiemerling, M., Tschofenig, H., Aoun, C., and E. Davies, 1594 "NAT/Firewall NSIS Signaling Layer Protocol (NSLP)", RFC 1595 5973, October 2010. 1597 [RFC5996] Kaufman, C., Hoffman, P., Nir, Y., and P. Eronen, 1598 "Internet Key Exchange Protocol Version 2 (IKEv2)", RFC 1599 5996, September 2010. 1601 [RFC6335] Cotton, M., Eggert, L., Touch, J., Westerlund, M., and S. 
1602 Cheshire, "Internet Assigned Numbers Authority (IANA) 1603 Procedures for the Management of the Service Name and 1604 Transport Protocol Port Number Registry", BCP 165, RFC 1605 6335, August 2011. 1607 [RFC6347] Rescorla, E. and N. Modadugu, "Datagram Transport Layer 1608 Security Version 1.2", RFC 6347, January 2012. 1610 [RFC6395] Gulrajani, S. and S. Venaas, "An Interface Identifier (ID) 1611 Hello Option for PIM", RFC 6395, October 2011. 1613 [RFC6396] Blunk, L., Karir, M., and C. Labovitz, "Multi-Threaded 1614 Routing Toolkit (MRT) Routing Information Export Format", 1615 RFC 6396, October 2011. 1617 [RFC6437] Amante, S., Carpenter, B., Jiang, S., and J. Rajahalme, 1618 "IPv6 Flow Label Specification", RFC 6437, November 2011. 1620 [RFC6438] Carpenter, B. and S. Amante, "Using the IPv6 Flow Label 1621 for Equal Cost Multipath Routing and Link Aggregation in 1622 Tunnels", RFC 6438, November 2011. 1624 [RFC6513] Rosen, E. and R. Aggarwal, "Multicast in MPLS/BGP IP 1625 VPNs", RFC 6513, February 2012. 1627 [RFC6679] Westerlund, M., Johansson, I., Perkins, C., O'Hanlon, P., 1628 and K. Carlberg, "Explicit Congestion Notification (ECN) 1629 for RTP over UDP", RFC 6679, August 2012. 1631 [RFC6726] Paila, T., Walsh, R., Luby, M., Roca, V., and R. Lehtonen, 1632 "FLUTE - File Delivery over Unidirectional Transport", RFC 1633 6726, November 2012. 1635 [RFC6807] Farinacci, D., Shepherd, G., Venaas, S., and Y. Cai, 1636 "Population Count Extensions to Protocol Independent 1637 Multicast (PIM)", RFC 6807, December 2012. 1639 [STEVENS] Stevens, W., Fenner, B., and A. Rudoff, "UNIX Network 1640 Programming, The sockets Networking API", Addison-Wesley, 1641 2004. 1643 [UPnP] UPnP Forum, , "Internet Gateway Device (IGD) Standardized 1644 Device Control Protocol V 1.0", November 2001. 1646 Appendix A. Revision Notes 1648 Note to RFC-Editor: please remove this entire section prior to 1649 publication. 
Changes in draft-eggert-tsvwg-rfc5405bis-01:

o  Added Greg Shepherd as a co-author, based on the multicast guidelines that originated with him.

Changes in draft-eggert-tsvwg-rfc5405bis-00 (relative to RFC 5405):

o  The words "application designers" were removed from the draft title, and the wording of the abstract was clarified.

o  New text clarifies various issues and sets new recommendations not previously included in RFC 5405.  These include new recommendations for multicast, the use of checksums with IPv6, ECMP, recommendations on port usage, use of ECN, use of DiffServ, circuit breakers (initial text), etc.

Draft-tsvwg-rfc5405bis-00 was adopted by the TSVWG (based on the above).

Authors' Addresses

Lars Eggert
NetApp
Sonnenallee 1
Kirchheim 85551
Germany

Phone: +49 151 120 55791
EMail: lars@netapp.com
URI:   https://eggert.org/

Godred Fairhurst
University of Aberdeen
Department of Engineering
Fraser Noble Building
Aberdeen AB24 3UE
Scotland

EMail: gorry@erg.abdn.ac.uk
URI:   http://www.erg.abdn.ac.uk/

Greg Shepherd
Cisco Systems
Tasman Drive
San Jose
USA

EMail: gjshep@gmail.com