Network Working Group                                 F. L. Templin, Ed.
Internet-Draft                               Boeing Research & Technology
Intended status: Informational                          19 November 2021
Expires: 23 May 2022

                           LTP Fragmentation
                     draft-templin-dtn-ltpfrag-06

Abstract

The Licklider Transmission Protocol (LTP) provides a reliable datagram
convergence layer for the Delay/Disruption Tolerant Networking (DTN)
Bundle Protocol.  In common practice, LTP is often configured over
UDP/IP sockets and inherits its maximum segment size from the
maximum-sized UDP/IP datagram; however, when this size exceeds the
maximum IP packet size for the path, a service known as IP
fragmentation must be employed.  This document discusses LTP
interactions with IP fragmentation and mitigations for managing the
amount of IP fragmentation employed.

Status of This Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF).  Note that other groups may also distribute working
documents as Internet-Drafts.  The list of current Internet-Drafts is
at https://datatracker.ietf.org/drafts/current/.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

This Internet-Draft will expire on 23 May 2022.

Copyright Notice

Copyright (c) 2021 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(https://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.
Code Components extracted from this document must include Revised BSD
License text as described in Section 4.e of the Trust Legal Provisions
and are provided without warranty as described in the Revised BSD
License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  IP Fragmentation Issues
   4.  LTP Fragmentation
   5.  Beyond "sendmmsg()"
   6.  LTP Performance Enhancement Using GSO/GRO
     6.1.  LTP and GSO
     6.2.  LTP and GRO
     6.3.  LTP GSO/GRO Over OMNI Interfaces
     6.4.  IPv4/IPv6 Protocol Considerations
   7.  Implementation Status
   8.  IANA Considerations
   9.  Security Considerations
   10. Acknowledgements
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Author's Address

1.  Introduction

The Licklider Transmission Protocol (LTP) [RFC5326] provides a
reliable datagram convergence layer for the Delay/Disruption Tolerant
Networking (DTN) Bundle Protocol (BP) [I-D.ietf-dtn-bpbis].  In common
practice, LTP is often configured over the User Datagram Protocol
(UDP) [RFC0768] and Internet Protocol (IP) [RFC0791] using the
"socket" abstraction.  LTP inherits its maximum segment size from the
maximum-sized UDP/IP datagram (i.e.,
64KB minus header sizes); however, when that size exceeds the maximum
IP packet size for the path, a service known as IP fragmentation must
be employed.

LTP breaks BP bundles into "blocks", then further breaks these blocks
into "segments".  The segment size is a configurable option and
represents the largest atomic portion of data that LTP will require
underlying layers to deliver as a single unit.  The segment size is
therefore also known as the "retransmission unit", since each lost
segment must be retransmitted in its entirety.  Experimental and
operational evidence has shown that on robust networks, increasing the
LTP segment size (up to the maximum UDP/IP datagram size of slightly
less than 64KB) can result in substantial performance increases over
smaller segment sizes.  However, the performance increases must be
tempered with the amount of IP fragmentation invoked, as discussed
below.

When LTP presents a segment to the operating system kernel (e.g., via
a sendmsg() system call), the UDP layer prepends a UDP header to
create a UDP datagram.  The UDP layer then presents the resulting
datagram to the IP layer for packet framing and transmission over a
networked path.  The path is further characterized by the path Maximum
Transmission Unit (Path-MTU), which is a measure of the smallest link
MTU (Link-MTU) among all links in the path.

When LTP presents a segment to the kernel that is larger than the
Path-MTU, the resulting UDP datagram is presented to the IP layer,
which in turn performs IP fragmentation to break the datagram into
fragments that are no larger than the Path-MTU.  For example, if the
LTP segment size is 64KB and the Path-MTU is 1280 bytes, IP
fragmentation results in 50+ fragments that are transmitted as
individual IP packets.
(Note that for IPv4 [RFC0791], fragmentation may occur either in the
source host or in a router in the network path, while for IPv6
[RFC8200] only the source host may perform fragmentation.)

Each IP fragment is subject to the same best-effort delivery service
offered by the network according to current congestion and/or link
signal quality conditions; the IP fragment size therefore becomes
known as the "loss unit".  Especially when the packet loss rate is
non-negligible, performance can suffer dramatically when the loss unit
is significantly smaller than the retransmission unit.  In particular,
if even a single IP fragment of a fragmented LTP segment is lost, the
entire LTP segment is deemed lost and must be retransmitted.  Since
LTP does not support flow control or congestion control, this can
result in cascading communication failure when fragments are
systematically lost in transit.

This document discusses LTP interactions with IP fragmentation and
mitigations for managing the amount of IP fragmentation employed.  It
further discusses methods for increasing LTP performance both with and
without the aid of IP fragmentation.

2.  Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
"OPTIONAL" in this document are to be interpreted as described in BCP
14 [RFC2119] [RFC8174] when, and only when, they appear in all
capitals, as shown here.

3.  IP Fragmentation Issues

IP fragmentation is a fundamental service of the Internet Protocol,
yet it has long been understood that its use can be problematic in
some environments.
Beginning as early as 1987, "Fragmentation Considered Harmful" [FRAG]
outlined multiple issues with the service, including a
performance-crippling condition that can occur at high data rates when
the loss unit is considerably smaller than the retransmission unit
during intermittent and/or steady-state loss conditions.

Later investigations also identified the possibility of undetected
corruption at high data rates due to a condition known as "ID
wraparound", in which the 16-bit IP Identification field (aka the "IP
ID") increments such that new fragments overlap with existing
fragments still alive in the network that carry identical ID values
[RFC4963][RFC6864].  Although this issue occurs only in the IPv4
protocol (and not in IPv6, where the IP ID is 32 bits in length),
these IPv4 concerns, along with the fact that IPv6 does not permit
routers to perform "network fragmentation", have led many to
discourage the use of fragmentation whenever possible.

Even in the modern era, investigators have seen fit to declare "IP
Fragmentation Considered Fragile" in an Internet Engineering Task
Force (IETF) Best Current Practice (BCP) reference [RFC8900].  Indeed,
the BCP recommendations cite the Bundle Protocol LTP convergence layer
as a user of IP fragmentation that depends on some of its properties
to realize greater performance.  However, the BCP summarizes by
saying:

   "Rather than deprecating IP fragmentation, this document
   recommends that upper-layer protocols address the problem of
   fragmentation at their layer, reducing their reliance on IP
   fragmentation to the greatest degree possible."

While the performance implications are considerable and have serious
implications for real-world applications, our goal in this document is
neither to condemn nor embrace IP fragmentation as it pertains to the
Bundle Protocol LTP convergence layer operating over UDP/IP sockets.
Instead, we examine ways in which the benefits of IP fragmentation can
be realized while avoiding the pitfalls.  We therefore next discuss
our systematic approach to LTP fragmentation.

4.  LTP Fragmentation

In common LTP implementations over UDP/IP (e.g., the Interplanetary
Overlay Network (ION)), performance is greatly dependent on the LTP
segment size.  This is because a larger segment presented to UDP/IP as
a single unit incurs only a single system call and a single data copy
from application to kernel space via the sendmsg() system call.  Once
inside the kernel, the segment incurs UDP/IP encapsulation and IP
fragmentation, which again results in a loss unit smaller than the
retransmission unit.  However, during fragmentation, each fragment is
transmitted immediately following the previous without delay, so that
the fragments appear as a "burst" of consecutive packets over the
network path, resulting in high network utilization during the burst
period.  Additionally, the use of IP fragmentation with a larger
segment size conserves header framing bytes, since the upper-layer
headers appear only in the first IP fragment as opposed to appearing
in all fragments.

In order to avoid retransmission congestion (i.e., especially when the
loss probability is non-negligible), the natural choice would be to
set the LTP segment size to a size no larger than the Path-MTU.
Assuming the minimum IPv4 MTU of 576 bytes, however, transmission of
64KB of data using a 576-byte segment size would require well over 100
independent sendmsg() system calls and data copies, as opposed to just
one when the largest segment size is used.  This greatly reduces the
bandwidth advantage offered by IP fragmentation bursts.  Therefore, a
means for providing the best aspects of both large-segment fragment
bursting and small-segment retransmission efficiency is needed.
Common operating systems such as Linux provide the sendmmsg() ("send
multiple messages") system call, which allows the LTP application to
present the kernel with a vector of up to 1024 segments instead of
just a single segment.  This theoretically affords the bursting
behavior of IP fragmentation coupled with the retransmission
efficiency of employing small segment sizes.  (Note that LTP receivers
can also use the recvmmsg() ("receive multiple messages") system call
to receive a vector of segments from the kernel in case multiple
recent packet arrivals can be combined.)

This work therefore recommends that implementations of LTP employ a
large block size, a conservative segment size, and a new configuration
option known as the "Burst-Limit", which determines the number of
segments that can be presented in a single sendmmsg() system call.
When the implementation receives an LTP block, it carves
Burst-Limit-many segments from the block and presents the vector of
segments to sendmmsg().  The kernel will prepare each segment as an
independent UDP/IP packet and transmit them into the network as a
burst in a fashion that parallels IP fragmentation.  The loss unit and
retransmission unit will be the same; therefore, loss of a single
segment does not result in a retransmission congestion event.

It should be noted that the Burst-Limit is bounded only by the LTP
block size and not by the maximum UDP/IP datagram size.  Therefore,
each burst can in practice convey significantly more data than a
single IP fragmentation event.  It should also be noted that the
segment size can still be made larger than the Path-MTU in low-loss
environments without danger of triggering retransmission storms due to
loss of IP fragments.  This would result in combined large UDP/IP
message transmission and IP fragmentation bursting for increased
network utilization in more robust environments.
Finally, both the Burst-Limit and UDP/IP message sizes need not be
static values and can be tuned to adaptively increase or decrease
according to time-varying network conditions.

5.  Beyond "sendmmsg()"

Implementation experience with the ION-DTN distribution, along with
two recent studies, has demonstrated modest performance increases from
employing sendmmsg() for transmission over UDP/IP sockets.  A first
study used sendmmsg() as part of an integrated solution to produce 1M
packets per second assuming only raw data transmission conditions
[MPPS], while a second study focused on performance improvements for
the QUIC reliable transport service [QUIC].  In both studies, the use
of sendmmsg() alone produced observable increases, but complementary
enhancements were identified that (when combined with sendmmsg())
produced considerable additional increases.

In [MPPS], additional enhancements such as using recvmmsg() and
configuring multiple receive queues at the receiver were introduced in
an attempt to achieve greater parallelism and engage multiple
processors and threads.  However, the system was still limited to a
single thread until multiple receiving processes were introduced using
the "SO_REUSEPORT" socket option.  By having multiple receiving
processes (each with its own socket buffer), the performance
advantages of parallel processing were employed to achieve the 1M
packets per second goal.

In [QUIC], a new feature available in recent Linux kernel versions was
employed.  The feature, known as "Generic Segmentation Offload (GSO) /
Generic Receive Offload (GRO)", allows an application to provide the
kernel with a "super-buffer" containing up to 64 separate upper-layer
protocol segments.  When the application presents the super-buffer to
the kernel, GSO segmentation then sends up to 64 separate UDP/IP
packets in a burst.
(Note that GSO requires each UDP/IP packet to be no larger than the
path MTU so that receivers can invoke GRO without interactions with IP
reassembly.)  The GSO facility can be invoked by either sendmsg()
(i.e., a single super-buffer) or sendmmsg() (i.e., multiple
super-buffers), and the study showed a substantial performance
increase over using just sendmsg() and sendmmsg() alone.

For LTP fragmentation, our ongoing efforts explore using these
techniques in a manner that parallels the effort undertaken for QUIC.
Using these higher-layer segmentation management facilities is
consistent with the guidance in "IP Fragmentation Considered Fragile"
that states:

   "Rather than deprecating IP fragmentation, this document
   recommends that upper-layer protocols address the problem of
   fragmentation at their layer, reducing their reliance on IP
   fragmentation to the greatest degree possible."

By addressing fragmentation at their layer, the LTP/UDP functions can
then be tuned to minimize IP fragmentation in environments where it
may be problematic, or to adaptively engage IP fragmentation in
environments where performance gains can be realized without risking
sustained loss and/or data corruption.

6.  LTP Performance Enhancement Using GSO/GRO

Some modern operating systems include Generic Segmentation Offload
(GSO) and Generic Receive Offload (GRO) services.  For example,
GSO/GRO support has been included in Linux beginning with kernel
version 4.18.  Some network drivers and network hardware also support
GSO/GRO at or below the operating system network device driver
interface layer to provide the benefits of delayed segmentation and/or
early reassembly.  The following sections discuss LTP interactions
with GSO and GRO.

6.1.
LTP and GSO

GSO allows LTP implementations to present the sendmsg() or sendmmsg()
system calls with "super-buffers" that include up to 64 LTP segments,
which the kernel will subdivide into individual UDP/IP datagrams.  LTP
implementations enable GSO either on a per-socket basis using the
setsockopt() system call or on a per-message basis for
sendmsg()/sendmmsg() as follows:

   /* Set the LTP segment size */
   unsigned int gso_size = SEGSIZE;
   ...
   /* Enable GSO for all messages sent on the socket */
   setsockopt(fd, SOL_UDP, UDP_SEGMENT, &gso_size, sizeof(gso_size));
   ...
   /* Alternatively, set per-message GSO control */
   cm = CMSG_FIRSTHDR(&msg);
   cm->cmsg_level = SOL_UDP;
   cm->cmsg_type = UDP_SEGMENT;
   cm->cmsg_len = CMSG_LEN(sizeof(uint16_t));
   *((uint16_t *)CMSG_DATA(cm)) = gso_size;

Implementations must set SEGSIZE to a value no larger than the path
MTU via the underlying network interface, minus the header sizes (see
Section 6.4); this ensures that UDP/IP datagrams generated during GSO
segmentation will not incur local IP fragmentation prior to
transmission.  (NB: the Linux kernel returns EINVAL if SEGSIZE is set
to a value that would exceed the path MTU.)

Implementations should therefore dynamically determine SEGSIZE for
paths that traverse multiple links through Packetization Layer Path
MTU Discovery for Datagram Transports (DPLPMTUD) [RFC8899].
Implementations should set an initial SEGSIZE to either a known
minimum MTU for the path or to the protocol-defined minimum path MTU
(i.e., 576 for IPv4 or 1280 for IPv6).  Implementations may then
dynamically increase SEGSIZE without service interruption if the
discovered path MTU is larger.

6.2.
LTP and GRO

GRO allows the kernel to return "super-buffers" that contain multiple
concatenated received segments to the LTP implementation in recvmsg()
or recvmmsg() system calls, where each concatenated segment is
distinguished by an LTP segment header per [RFC5326].  LTP
implementations enable GRO on a per-socket basis using the
setsockopt() system call as follows:

   /* Enable GRO */
   int gro = 1;
   setsockopt(fd, SOL_UDP, UDP_GRO, &gro, sizeof(gro));

Implementations pass a nonzero option value to enable GRO, as GRO will
accept received segments of any size; the only interoperability
requirement therefore is that each UDP/IP packet includes an integral
number of properly formed LTP segments.  The kernel and/or underlying
network hardware will first coalesce multiple received segments into a
single larger segment whenever possible and/or return multiple
coalesced or singular segments to the LTP implementation so as to
maximize the amount of data returned in a single system call.

Implementations that invoke recvmsg() and/or recvmmsg() will therefore
receive "super-buffers" that include one or more concatenated received
LTP segments.  The LTP implementation accepts all received LTP
segments and identifies any segments that may be missing.  The LTP
protocol then engages segment report procedures if necessary to
request retransmission of any missing segments.

6.3.  LTP GSO/GRO Over OMNI Interfaces

LTP engines produce UDP/IP packets that can be forwarded over an
underlying network interface as the head end of a "link-layer service
that transits IP packets".  UDP/IP packets that enter the link near
end are deterministically delivered to the link far end, modulo loss
due to corruption, congestion or disruption.
The link-layer service is associated with an MTU that
deterministically establishes the maximum packet size that can transit
the link.  The link-layer service may further support a segmentation
and reassembly function with fragment retransmissions at a layer below
IP; in many cases, these timely link-layer retransmissions can reduce
dependency on (slow) end-to-end retransmissions.

LTP engines that connect to networks traversed by paths consisting of
multiple concatenated links must be prepared to adapt their segment
sizes to match the minimum MTU of all links in the path.  This could
result in a small SEGSIZE that would interfere with the benefits of
GSO/GRO layering.  However, nodes that configure LTP engines can also
establish an Overlay Multilink Network Interface (OMNI)
[I-D.templin-6man-omni] that spans the multiple concatenated links
while presenting an assured (64KB-1) MTU to the LTP engine.

The OMNI interface internally uses IPv6 fragmentation as an OMNI
Adaptation Layer (OAL) service not visible to the LTP engine to allow
timely link-layer retransmissions of lost fragments, where the
retransmission unit matches the loss unit.  The LTP engine can then
dynamically vary its SEGSIZE (up to a maximum value of (64KB-1) minus
headers) to determine the size that produces the best performance at
the current time by engaging the combined operational factors at all
layers of the multi-layer architecture.  This dynamic factoring,
coupled with the ideal link properties provided by the OMNI interface,
supports an effective layering solution for many DTN networks.

When an LTP/UDP/IP packet is transmitted over an OMNI interface, the
OAL inserts an IPv6 header and performs IPv6 fragmentation to produce
fragments small enough to fit within the path MTU.
The OAL then replaces the IPv6 encapsulation headers with OMNI
Compressed Headers, Types 0 and 1 (OCH-0/1), which are significantly
smaller than their uncompressed IPv6 header counterparts and even
smaller than the IPv4 headers would have been had the packet been sent
directly over a physical interface such as Ethernet using IPv4
fragmentation.

The end result is that the first fragment produced by the OAL will
include a small amount of additional overhead to accommodate the OCH-0
encapsulation header, while all additional fragments will include only
an OCH-1 header, which is significantly smaller than even an IPv4
header.  The act of forwarding the large LTP/UDP/IP packet over the
OMNI interface will therefore produce a considerable overhead savings
in comparison with direct Ethernet transmission.

Using the OMNI interface with its OAL service in addition to the
GSO/GRO mechanism, an LTP engine can therefore present concatenated
LTP segments in a "super-buffer" of up to (64 * ((64KB-1) minus
headers)) octets for transmission in a single sendmsg() system call,
and may present multiple such "super-buffers" in a single system call
when sendmmsg() is used.  In the future, this service may realize even
greater benefits through the use of IPv6 Jumbograms [RFC2675] over
paths that support them.

6.4.  IPv4/IPv6 Protocol Considerations

LTP/UDP/IP peers can communicate via either IPv4 or IPv6 addressing
when both peers configure a unique address of the same protocol
version on the OMNI interface.  The IPv4 Total Length field includes
the length of both the UDP header and the base IPv4 header, while the
IPv6 Payload Length field includes the length of the UDP header but
not the base IPv6 header.
Therefore, unless header extensions are included, each maximum-sized
LTP/UDP/IPv6 packet can contain 20 octets more actual LTP data than a
maximum-sized LTP/UDP/IPv4 packet can contain, for the price of
including only 20 additional header octets for IPv6.  The overhead
percentage for carrying these additional 20 header octets in
maximum-sized packets is therefore insignificant, and becomes smaller
still when IPv6 header compression is used.

7.  Implementation Status

Supporting code for invoking the sendmmsg() facility is included in
the official ION source code distribution, beginning with release
ion-4.0.1.

Working code for GSO/GRO has been incorporated into a pre-release of
ION and is scheduled for integration following the next major release.

8.  IANA Considerations

This document introduces no IANA considerations.

9.  Security Considerations

Communications networking security is necessary to preserve
confidentiality, integrity and availability.

10.  Acknowledgements

The NASA Space Communications and Networks (SCaN) directorate
coordinates DTN activities for the International Space Station (ISS)
and other space exploration initiatives.

Madhuri Madhava Badgandi, Keith Philpott, Bill Pohlchuck,
Vijayasarathy Rajagopalan and Eric Yeh are acknowledged for their
significant contributions.  Tyler Doubrava was the first to mention
the "sendmmsg()" facility.  Scott Burleigh provided review input, and
David Zoller provided useful perspective.

11.  References

11.1.  Normative References

[RFC0768]  Postel, J., "User Datagram Protocol", STD 6, RFC 768,
           DOI 10.17487/RFC0768, August 1980,
           <https://www.rfc-editor.org/info/rfc768>.

[RFC0791]  Postel, J., "Internet Protocol", STD 5, RFC 791,
           DOI 10.17487/RFC0791, September 1981,
           <https://www.rfc-editor.org/info/rfc791>.

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997,
           <https://www.rfc-editor.org/info/rfc2119>.
[RFC5326]  Ramadas, M., Burleigh, S., and S. Farrell, "Licklider
           Transmission Protocol - Specification", RFC 5326,
           DOI 10.17487/RFC5326, September 2008,
           <https://www.rfc-editor.org/info/rfc5326>.

[RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC 2119
           Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
           May 2017, <https://www.rfc-editor.org/info/rfc8174>.

[RFC8200]  Deering, S. and R. Hinden, "Internet Protocol, Version 6
           (IPv6) Specification", STD 86, RFC 8200,
           DOI 10.17487/RFC8200, July 2017,
           <https://www.rfc-editor.org/info/rfc8200>.

11.2.  Informative References

[FRAG]     Mogul, J. and C. Kent, "Fragmentation Considered Harmful",
           ACM SIGCOMM 1987, August 1987.

[I-D.ietf-dtn-bpbis]
           Burleigh, S., Fall, K., and E. J. Birrane, "Bundle Protocol
           Version 7", Work in Progress, Internet-Draft,
           draft-ietf-dtn-bpbis-31, 25 January 2021,
           <https://datatracker.ietf.org/doc/html/draft-ietf-dtn-bpbis-31>.

[I-D.templin-6man-omni]
           Templin, F. L. and T. Whyman, "Transmission of IP Packets
           over Overlay Multilink Network (OMNI) Interfaces", Work in
           Progress, Internet-Draft, draft-templin-6man-omni-49,
           25 October 2021,
           <https://datatracker.ietf.org/doc/html/draft-templin-6man-omni-49>.

[MPPS]     Majkowski, M., "How to Receive a Million Packets Per
           Second", June 2015,
           <https://blog.cloudflare.com/how-to-receive-a-million-packets/>.

[QUIC]     Ghedini, A., "Accelerating UDP Packet Transmission for
           QUIC", December 2019,
           <https://calendar.perfplanet.com/2019/accelerating-udp-packet-transmission-for-quic/>.

[RFC2675]  Borman, D., Deering, S., and R. Hinden, "IPv6 Jumbograms",
           RFC 2675, DOI 10.17487/RFC2675, August 1999,
           <https://www.rfc-editor.org/info/rfc2675>.

[RFC4963]  Heffner, J., Mathis, M., and B. Chandler, "IPv4 Reassembly
           Errors at High Data Rates", RFC 4963,
           DOI 10.17487/RFC4963, July 2007,
           <https://www.rfc-editor.org/info/rfc4963>.

[RFC6864]  Touch, J., "Updated Specification of the IPv4 ID Field",
           RFC 6864, DOI 10.17487/RFC6864, February 2013,
           <https://www.rfc-editor.org/info/rfc6864>.

[RFC8899]  Fairhurst, G., Jones, T., Tüxen, M., Rüngeler, I., and T.
           Völker, "Packetization Layer Path MTU Discovery for
           Datagram Transports", RFC 8899, DOI 10.17487/RFC8899,
           September 2020, <https://www.rfc-editor.org/info/rfc8899>.
[RFC8900]  Bonica, R., Baker, F., Huston, G., Hinden, R., Troan, O.,
           and F. Gont, "IP Fragmentation Considered Fragile",
           BCP 230, RFC 8900, DOI 10.17487/RFC8900, September 2020,
           <https://www.rfc-editor.org/info/rfc8900>.

Author's Address

Fred L. Templin (editor)
Boeing Research & Technology
P.O. Box 3707
Seattle, WA 98124
United States of America

Email: fltemplin@acm.org