Network Working Group                                   F. Templin, Ed.
Internet-Draft                             Boeing Research & Technology
Intended status: Informational                          October 11, 2021
Expires: April 14, 2022

                           LTP Fragmentation
                      draft-templin-dtn-ltpfrag-05

Abstract

   The Licklider Transmission Protocol (LTP) provides a reliable
   datagram convergence layer for the Delay/Disruption Tolerant
   Networking (DTN) Bundle Protocol.  In common practice, LTP is often
   configured over UDP/IP sockets and inherits its maximum segment size
   from the maximum-sized UDP datagram; however, when this size exceeds
   the maximum IP packet size for the path, a service known as IP
   fragmentation must be employed.  This document discusses LTP
   interactions with IP fragmentation and mitigations for managing the
   amount of IP fragmentation employed.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 14, 2022.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  IP Fragmentation Issues
   4.  LTP Fragmentation
   5.  Beyond "sendmmsg()"
   6.  LTP Performance Enhancement Using GSO/GRO
     6.1.  LTP and GSO
     6.2.  LTP and GRO
     6.3.  LTP GSO/GRO Over OMNI Interfaces
   7.  Implementation Status
   8.  IANA Considerations
   9.  Security Considerations
   10. Acknowledgements
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Author's Address

1.  Introduction

   The Licklider Transmission Protocol (LTP) [RFC5326] provides a
   reliable datagram convergence layer for the Delay/Disruption
   Tolerant Networking (DTN) Bundle Protocol (BP) [I-D.ietf-dtn-bpbis].
   In common practice, LTP is often configured over the User Datagram
   Protocol (UDP) [RFC0768] and Internet Protocol (IP) [RFC0791] using
   the "socket" abstraction.  LTP inherits its maximum segment size
   from the maximum-sized UDP datagram (i.e., 2**16 bytes minus header
   sizes); however, when the UDP datagram size exceeds the maximum IP
   packet size for the path, a service known as IP fragmentation must
   be employed.

   LTP breaks BP bundles into "blocks", then further breaks these
   blocks into "segments".  The segment size is a configurable option
   and represents the largest atomic portion of data that LTP will
   require underlying layers to deliver as a single unit.  The segment
   size is therefore also known as the "retransmission unit", since
   each lost segment must be retransmitted in its entirety.
   Experimental and operational evidence has shown that on robust
   networks, increasing the LTP segment size (up to the maximum UDP
   datagram size of slightly less than 64KB) can result in substantial
   performance increases over smaller segment sizes.  However, the
   performance increases must be weighed against the amount of IP
   fragmentation invoked, as discussed below.

   When LTP presents a segment to the operating system kernel (e.g.,
   via a sendmsg() system call), the UDP layer prepends a UDP header to
   create a UDP datagram.  The UDP layer then presents the resulting
   datagram to the IP layer for packet framing and transmission over a
   networked path.  The path is further characterized by the path
   Maximum Transmission Unit (Path-MTU), which is a measure of the
   smallest link MTU (Link-MTU) among all links in the path.
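
   The following sketch illustrates this transmission path.  It is
   illustrative only: the function name is hypothetical, and it assumes
   a UDP socket that has already been connected to the peer LTP engine.
   The kernel prepends the UDP/IP headers and performs any IP
   fragmentation that the segment size requires:

      #include <sys/types.h>
      #include <sys/socket.h>
      #include <sys/uio.h>

      /* Illustrative only: present a single LTP segment to the kernel.
       * "fd" is a UDP socket already connected to the peer LTP engine;
       * "seg" and "seglen" describe one LTP segment. */
      ssize_t ltp_send_segment(int fd, const void *seg, size_t seglen)
      {
          struct iovec iov = {
              .iov_base = (void *)seg,     /* segment produced by LTP */
              .iov_len  = seglen
          };
          struct msghdr msg = {
              .msg_iov    = &iov,          /* one buffer, one data copy */
              .msg_iovlen = 1
          };

          /* The UDP layer prepends its header and the IP layer
           * fragments the datagram if it exceeds the path MTU. */
          return sendmsg(fd, &msg, 0);
      }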

   When LTP presents a segment to the kernel that is larger than the
   Path-MTU, the resulting UDP datagram is presented to the IP layer,
   which in turn performs IP fragmentation to break the datagram into
   fragments that are no larger than the Path-MTU.  For example, if the
   LTP segment size is 64KB and the Path-MTU is 1280 bytes, IP
   fragmentation results in 50+ fragments that are transmitted as
   individual IP packets.  (Note that for IPv4 [RFC0791], fragmentation
   may occur either in the source host or in a router in the network
   path, while for IPv6 [RFC8200] only the source host may perform
   fragmentation.)

   Each IP fragment is subject to the same best-effort delivery service
   offered by the network according to current congestion and/or link
   signal quality conditions; the IP fragment size therefore becomes
   known as the "loss unit".  Especially when the packet loss rate is
   non-negligible, performance can suffer dramatically when the loss
   unit is significantly smaller than the retransmission unit.  In
   particular, if even a single IP fragment of a fragmented LTP segment
   is lost, then the entire LTP segment is deemed lost and must be
   retransmitted.  Since LTP does not support flow control or
   congestion control, this can result in catastrophic communication
   failure when fragments are systematically lost in transit.

   This document discusses LTP interactions with IP fragmentation and
   mitigations for managing the amount of IP fragmentation employed.
   It further discusses methods for increasing LTP performance both
   with and without the aid of IP fragmentation.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and
   "OPTIONAL" in this document are to be interpreted as described in
   BCP 14 [RFC2119][RFC8174] when, and only when, they appear in all
   capitals, as shown here.

3.  IP Fragmentation Issues

   IP fragmentation is a fundamental service of the Internet Protocol,
   yet it has long been understood that its use can be problematic in
   some environments.  Beginning as early as 1987, "Fragmentation
   Considered Harmful" [FRAG] outlined multiple issues with the
   service, including a performance-crippling condition that can occur
   at high data rates when the loss unit is considerably smaller than
   the retransmission unit during intermittent and/or steady-state loss
   conditions.

   Later investigations also identified the possibility of undetected
   data corruption at high data rates due to a condition known as "ID
   wraparound", which occurs when the 16-bit IP Identification field
   (aka the "IP ID") increments such that new fragments overlap with
   existing fragments still alive in the network that carry identical
   ID values [RFC4963][RFC6864].  Although this issue occurs only in
   the IPv4 protocol (and not in IPv6, where the IP ID is 32 bits in
   length), the IPv4 concerns, along with the fact that IPv6 does not
   permit routers to perform "network fragmentation", have led many to
   discourage the use of IP fragmentation.

   Even in the modern era, investigators have seen fit to declare "IP
   Fragmentation Considered Fragile" in an Internet Engineering Task
   Force (IETF) Best Current Practice (BCP) reference [RFC8900].
   Indeed, the BCP recommendations cite the Bundle Protocol LTP
   convergence layer as a user of IP fragmentation that depends on some
   of its properties to realize greater performance.  However, the BCP
   summarizes by saying:

      "Rather than deprecating IP fragmentation, this document
      recommends that upper-layer protocols address the problem of
      fragmentation at their layer, reducing their reliance on IP
      fragmentation to the greatest degree possible."

   While the performance effects are considerable and have serious
   implications for real-world applications, our goal in this document
   is neither to condemn nor embrace IP fragmentation as it pertains to
   the Bundle Protocol LTP convergence layer operating over UDP/IP
   sockets.  Instead, we examine ways in which the benefits of IP
   fragmentation can be realized while avoiding the pitfalls.  We
   therefore next discuss our systematic approach to LTP fragmentation.

4.  LTP Fragmentation

   In common LTP implementations over UDP/IP (e.g., the Interplanetary
   Overlay Network (ION)), performance is greatly dependent on the LTP
   segment size.  This is because a larger segment presented to UDP/IP
   as a single unit incurs only a single system call and a single data
   copy from application to kernel space via the sendmsg() system call.
   Once inside the kernel, the segment incurs UDP/IP encapsulation and
   IP fragmentation, which again results in a loss unit smaller than
   the retransmission unit.  However, during fragmentation, each
   fragment is transmitted immediately following the previous one
   without delay, so that the fragments appear as a "burst" of
   consecutive packets over the network path, resulting in high network
   utilization during the burst period.  Additionally, the use of IP
   fragmentation with a larger segment size conserves header framing
   bytes, since the LTP layer headers appear only in the first IP
   fragment as opposed to appearing in all IP packets.

   In order to avoid retransmission congestion (especially when the
   loss probability is non-negligible), the natural choice would be to
   set the LTP segment size to a size that is no larger than the
   Path-MTU.  Assuming the minimum IPv4 MTU of 576 bytes, however,
   transmission of 64KB of data using a 576-byte segment size would
   require well over 100 independent sendmsg() system calls and data
   copies, as opposed to just one when the largest segment size is
   used.  This greatly reduces the bandwidth advantage offered by IP
   fragmentation bursts.  Therefore, a means for providing the best
   aspects of both large-segment fragment bursting and small-segment
   retransmission efficiency is needed.

   Common operating systems such as Linux provide the sendmmsg() ("send
   multiple messages") system call, which allows the LTP application to
   present the kernel with a vector of up to 1024 segments instead of
   just a single segment.  This affords the bursting behavior of IP
   fragmentation coupled with the retransmission efficiency of
   employing small segment sizes.  (Note that LTP receivers can also
   use the recvmmsg() ("receive multiple messages") system call to
   receive a vector of segments from the kernel in case multiple recent
   packet arrivals can be combined.)
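
   The following sketch shows how a vector of small segments carved
   from an LTP block might be presented to the kernel in a single
   sendmmsg() call.  It is illustrative only: the function name, the
   connected-socket assumption and the burst size are hypothetical and
   correspond to the "Burst-Limit" configuration option discussed next:

      #define _GNU_SOURCE              /* for sendmmsg() on Linux */
      #include <string.h>
      #include <sys/types.h>
      #include <sys/socket.h>
      #include <sys/uio.h>

      #define BURST 64                 /* illustrative burst size; see
                                        * the "Burst-Limit" option below */

      /* Illustrative only: transmit up to BURST segments carved from an
       * LTP block in a single system call.  "fd" is a connected UDP
       * socket; segs[i]/lens[i] describe the i-th segment. */
      int ltp_send_burst(int fd, void *segs[], size_t lens[], unsigned n)
      {
          struct iovec iov[BURST];
          struct mmsghdr mm[BURST];

          if (n > BURST)
              n = BURST;
          for (unsigned i = 0; i < n; i++) {
              iov[i].iov_base = segs[i];
              iov[i].iov_len  = lens[i];
              memset(&mm[i], 0, sizeof(mm[i]));
              mm[i].msg_hdr.msg_iov    = &iov[i];   /* one segment per */
              mm[i].msg_hdr.msg_iovlen = 1;         /* UDP datagram    */
          }

          /* The kernel emits "n" UDP/IP packets as a burst; each packet
           * is both the loss unit and the retransmission unit. */
          return sendmmsg(fd, mm, n, 0);
      }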

   This work therefore recommends that LTP implementations employ a
   large block size, a conservative segment size, and a new
   configuration option known as the "Burst-Limit", which determines
   the number of segments that can be presented in a single sendmmsg()
   system call.  When the implementation receives an LTP block, it
   carves Burst-Limit-many segments from the block and presents the
   vector of segments to sendmmsg().  The kernel will prepare each
   segment as an independent UDP/IP packet and transmit the packets
   into the network as a burst in a fashion that parallels IP
   fragmentation.  The loss unit and the retransmission unit will be
   the same; therefore, loss of a single segment does not result in a
   retransmission congestion event.

   It should be noted that the Burst-Limit is bounded only by the LTP
   block size and not by the maximum UDP datagram size.  Therefore,
   each burst can in practice convey significantly more data than a
   single IP fragmentation event.  It should also be noted that the
   segment size can still be made larger than the Path-MTU in low-loss
   environments without danger of triggering retransmission storms due
   to loss of IP fragments.  This would result in combined UDP message
   and IP fragment bursting for increased network utilization in more
   robust environments.  Finally, neither the Burst-Limit nor the UDP
   message size need be a static value; both can be tuned to adaptively
   increase or decrease according to time-varying network conditions.

5.  Beyond "sendmmsg()"

   Implementation experience with the ION DTN distribution, along with
   two recent studies, has demonstrated modest performance increases
   from employing sendmmsg() for transmission over UDP/IP sockets.  A
   first study used sendmmsg() as part of an integrated solution to
   produce 1M packets per second assuming only raw data transmission
   conditions [MPPS], while a second study focused on performance
   improvements for the QUIC reliable transport service [QUIC].  In
   both studies, the use of sendmmsg() alone produced observable
   increases, but complementary enhancements were identified that (when
   combined with sendmmsg()) produced considerable additional
   increases.

   In [MPPS], additional enhancements such as using recvmmsg() and
   configuring multiple receive queues at the receiver were introduced
   in an attempt to achieve greater parallelism and engage multiple
   processors and threads.  However, the system was still limited to a
   single thread until multiple receiving processes were introduced
   using the "SO_REUSEPORT" socket option.  By having multiple
   receiving processes (each with its own socket buffer), the
   performance advantages of parallel processing were employed to
   achieve the 1M packets per second goal.

   In [QUIC], a new feature available in recent Linux kernel versions
   was employed.  The feature, known as "Generic Segmentation Offload
   (GSO) / Generic Receive Offload (GRO)", allows an application to
   provide the kernel with a "super-buffer" containing up to 64
   separate upper-layer protocol segments.  When the application
   presents the super-buffer to the kernel, GSO segmentation then sends
   up to 64 separate UDP/IP packets in a burst.  If each packet is
   larger than the Path-MTU, then IP fragmentation will be invoked for
   each packet, leading to high network utilization (at the risk of IP
   fragment loss and retransmission storms).
   The GSO facility can be invoked by either sendmsg() (i.e., a single
   super-buffer) or sendmmsg() (i.e., multiple super-buffers), and the
   study showed a substantial performance increase over using sendmsg()
   or sendmmsg() alone.

   For LTP fragmentation, our ongoing efforts explore using these
   techniques in a manner that parallels the effort undertaken for
   QUIC.  Using these higher-layer segmentation management facilities
   is consistent with the guidance in "IP Fragmentation Considered
   Fragile", which states:

      "Rather than deprecating IP fragmentation, this document
      recommends that upper-layer protocols address the problem of
      fragmentation at their layer, reducing their reliance on IP
      fragmentation to the greatest degree possible."

   By addressing fragmentation at its own layer, LTP/UDP can then be
   tuned to minimize IP fragmentation in environments where it may be
   problematic, or to adaptively engage IP fragmentation in
   environments where performance gains can be realized without risking
   data corruption.

6.  LTP Performance Enhancement Using GSO/GRO

   Some modern operating systems include Generic Segmentation Offload
   (GSO) and Generic Receive Offload (GRO) services.  For example, UDP
   GSO support has been included in Linux beginning with kernel version
   4.18, with UDP GRO support following in later kernel versions.  Some
   network drivers and network hardware also support GSO/GRO at or
   below the operating system network-to-driver interface layer to
   provide the benefits of delayed segmentation and/or early
   reassembly.  The following sections discuss LTP interactions with
   GSO and GRO.

6.1.  LTP and GSO

   GSO allows LTP implementations to present the sendmsg() or
   sendmmsg() system calls with "super-buffers" that include up to 64
   LTP segments, which the kernel will subdivide into individual UDP
   datagrams.  LTP implementations enable GSO on a per-socket basis
   using the "setsockopt()" system call as follows:

      unsigned int gso_size = SEGSIZE;
      setsockopt(fd, SOL_UDP, UDP_SEGMENT, &gso_size, sizeof(gso_size));

   Implementations must set SEGSIZE to an initial value no larger than
   the MTU of the underlying network interface minus the UDP and IP
   header sizes; this ensures that UDP datagrams generated during GSO
   segmentation will not incur local IP fragmentation prior to
   transmission (NB: the Linux kernel returns EINVAL if SEGSIZE is set
   to a value that would exceed the MTU of the underlying interface).
   For paths that traverse multiple links, implementations should also
   dynamically adjust SEGSIZE according to the per-destination path MTU
   to avoid sustained in-the-network fragmentation that results in a
   loss unit smaller than the retransmission unit.

   Implementations should therefore dynamically determine SEGSIZE for
   paths that traverse multiple links through Packetization Layer Path
   MTU Discovery for Datagram Transports (DPLPMTUD) [RFC8899].  For
   IPv4 paths, implementations may initially set SEGSIZE according to
   the MTU of the underlying interface and invoke DPLPMTUD while
   initial packets are flowing, then should dynamically reduce SEGSIZE
   without service interruption if the discovered path MTU is smaller.
   For IPv6 paths, implementations should initially set SEGSIZE
   according to the minimum IPv6 Path MTU (i.e., 1280 bytes), then may
   dynamically increase SEGSIZE without service interruption if the
   discovered path MTU is larger.
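
   The following sketch combines the setsockopt() call shown above with
   a single sendmsg() of a "super-buffer" of concatenated LTP segments.
   It is illustrative only: the function name and buffer handling are
   hypothetical, and a real implementation would likely enable
   UDP_SEGMENT once at socket initialization (and adjust it when
   SEGSIZE changes) rather than on every send:

      #include <sys/types.h>
      #include <sys/socket.h>
      #include <sys/uio.h>
      #include <netinet/udp.h>

      #ifndef SOL_UDP
      #define SOL_UDP 17               /* protocol level for UDP options */
      #endif
      #ifndef UDP_SEGMENT
      #define UDP_SEGMENT 103          /* from include/uapi/linux/udp.h */
      #endif

      /* Illustrative only: enable GSO with segment size "segsize", then
       * send "buf" (a super-buffer of concatenated LTP segments, each
       * "segsize" bytes except possibly the last) in one system call. */
      ssize_t ltp_send_gso(int fd, const void *buf, size_t len,
                           unsigned int segsize)
      {
          if (setsockopt(fd, SOL_UDP, UDP_SEGMENT,
                         &segsize, sizeof(segsize)) < 0)
              return -1;               /* kernel lacks UDP GSO support */

          struct iovec iov = { .iov_base = (void *)buf, .iov_len = len };
          struct msghdr msg = { .msg_iov = &iov, .msg_iovlen = 1 };

          /* The kernel subdivides the super-buffer into individual UDP
           * datagrams (at most 64 per call) and sends them as a burst. */
          return sendmsg(fd, &msg, 0);
      }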

6.2.  LTP and GRO

   GRO allows the kernel to return "super-buffers" that contain
   multiple concatenated received segments to the LTP implementation in
   recvmsg() or recvmmsg() system calls, where each concatenated
   segment is distinguished by an LTP segment header per [RFC5326].
   LTP implementations enable GRO on a per-socket basis using the
   "setsockopt()" system call as follows:

      unsigned int gro_size = 1;
      setsockopt(fd, SOL_UDP, UDP_GRO, &gro_size, sizeof(gro_size));

   Implementations pass the gro_size variable as a boolean indication
   to the kernel; any non-zero value enables the option, and GRO will
   accept received segments of any size.  The only interoperability
   requirement is therefore that each UDP packet includes one or more
   properly-formed LTP segments.  The kernel and/or underlying network
   hardware will first coalesce multiple received segments into a
   single larger segment whenever possible and/or return multiple
   coalesced or singular segments to the LTP implementation so as to
   maximize the amount of data returned in a single system call.

   Implementations that invoke recvmsg() and/or recvmmsg() will
   therefore receive "super-buffers" that include one or more
   concatenated received LTP segments.  The LTP implementation accepts
   all received LTP segments and identifies any segments that may be
   missing.  The LTP protocol then engages segment report procedures if
   necessary to request retransmission of any missing segments.
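
   The following receive-side sketch complements the GSO example in
   Section 6.1.  It is illustrative only: the function names are
   hypothetical, and the UDP_GRO control message shown is the mechanism
   current Linux kernels use to report the size of the coalesced
   segments; the LTP implementation still parses the individual LTP
   segment headers within the returned buffer:

      #include <string.h>
      #include <sys/types.h>
      #include <sys/socket.h>
      #include <sys/uio.h>
      #include <netinet/udp.h>

      #ifndef SOL_UDP
      #define SOL_UDP 17
      #endif
      #ifndef UDP_GRO
      #define UDP_GRO 104              /* from include/uapi/linux/udp.h */
      #endif

      /* Illustrative only: enable GRO (any non-zero value enables). */
      int ltp_enable_gro(int fd)
      {
          int on = 1;
          return setsockopt(fd, SOL_UDP, UDP_GRO, &on, sizeof(on));
      }

      /* Illustrative only: receive a super-buffer of coalesced
       * segments.  When the kernel has coalesced several datagrams, it
       * reports the size of each coalesced segment in a UDP_GRO
       * control message (carried as an int). */
      ssize_t ltp_recv_gro(int fd, void *buf, size_t len, int *gro_size)
      {
          union {
              char buf[CMSG_SPACE(sizeof(int))];
              struct cmsghdr align;    /* ensures cmsg alignment */
          } ctrl;
          struct iovec iov = { .iov_base = buf, .iov_len = len };
          struct msghdr msg = {
              .msg_iov = &iov, .msg_iovlen = 1,
              .msg_control = ctrl.buf, .msg_controllen = sizeof(ctrl.buf)
          };
          ssize_t ret = recvmsg(fd, &msg, 0);

          *gro_size = 0;
          if (ret < 0)
              return ret;
          for (struct cmsghdr *cm = CMSG_FIRSTHDR(&msg); cm != NULL;
               cm = CMSG_NXTHDR(&msg, cm)) {
              if (cm->cmsg_level == SOL_UDP && cm->cmsg_type == UDP_GRO)
                  memcpy(gro_size, CMSG_DATA(cm), sizeof(*gro_size));
          }
          return ret;
      }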

6.3.  LTP GSO/GRO Over OMNI Interfaces

   LTP engines produce UDP/IP packets that can be forwarded over an
   underlying network interface as the head-end of a "link-layer
   service that transits IP packets".  UDP/IP packets that enter the
   link near-end are deterministically delivered to the link far-end
   modulo loss due to corruption, congestion or disruption.  The
   link-layer service is associated with an MTU that deterministically
   establishes the maximum packet size that can transit the link.  The
   link-layer service may further support a segmentation and reassembly
   function with fragment retransmissions at a layer below IP; in many
   cases, these timely link-layer retransmissions can reduce dependency
   on (slow) end-to-end retransmissions.

   LTP engines that connect to networks traversed by paths consisting
   of multiple concatenated links must be prepared to adapt their
   segment sizes to match the minimum MTU of all links in the path.
   This could result in a small SEGSIZE that would interfere with the
   benefits of GSO/GRO layering.  However, nodes that configure LTP
   engines can establish an Overlay Multilink Network Interface (OMNI)
   [I-D.templin-6man-omni] that spans the multiple concatenated links
   while presenting an assured 9180-byte MTU to the LTP engine.

   The OMNI interface internally uses IP fragmentation as a link-layer
   adaptation service not visible to the LTP engine, including timely
   link-layer retransmissions of lost fragments where the
   retransmission unit matches the loss unit.  The LTP engine can then
   dynamically vary its SEGSIZE (up to a maximum value of 9180 bytes)
   to determine the size that produces the best performance given the
   combined operational factors at all layers of the multi-layer
   architecture.  This dynamic factoring, coupled with the ideal link
   properties provided by the OMNI interface, supports an effective
   layering solution for many DTN networks.

7.  Implementation Status

   Supporting code for invoking the sendmmsg() facility is included in
   the official ION source code distribution, beginning with release
   ion-4.0.1.

8.  IANA Considerations

   This document introduces no IANA considerations.

9.  Security Considerations

   Communications networking security is necessary to preserve
   confidentiality, integrity and availability.

10.  Acknowledgements

   The NASA Space Communications and Networks (SCaN) directorate
   coordinates DTN activities for the International Space Station (ISS)
   and other space exploration initiatives.

   Madhuri Madhava Badgandi, Keith Philpott, Bill Pohlchuck,
   Vijayasarathy Rajagopalan and Eric Yeh are acknowledged for their
   significant contributions.  Tyler Doubrava was the first to mention
   the "sendmmsg()" facility.  Scott Burleigh provided review input,
   and David Zoller provided useful perspective.

11.  References

11.1.  Normative References

   [RFC0768]  Postel, J., "User Datagram Protocol", STD 6, RFC 768,
              DOI 10.17487/RFC0768, August 1980,
              <https://www.rfc-editor.org/info/rfc768>.

   [RFC0791]  Postel, J., "Internet Protocol", STD 5, RFC 791,
              DOI 10.17487/RFC0791, September 1981,
              <https://www.rfc-editor.org/info/rfc791>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC5326]  Ramadas, M., Burleigh, S., and S. Farrell, "Licklider
              Transmission Protocol - Specification", RFC 5326,
              DOI 10.17487/RFC5326, September 2008,
              <https://www.rfc-editor.org/info/rfc5326>.

   [RFC8174]  Leiba, B., "Ambiguity of Uppercase vs Lowercase in RFC
              2119 Key Words", BCP 14, RFC 8174, DOI 10.17487/RFC8174,
              May 2017, <https://www.rfc-editor.org/info/rfc8174>.

   [RFC8200]  Deering, S. and R. Hinden, "Internet Protocol, Version 6
              (IPv6) Specification", STD 86, RFC 8200,
              DOI 10.17487/RFC8200, July 2017,
              <https://www.rfc-editor.org/info/rfc8200>.

11.2.  Informative References

   [FRAG]     Mogul, J. and C. Kent, "Fragmentation Considered Harmful,
              ACM Sigcomm 1987", August 1987.

   [I-D.ietf-dtn-bpbis]
              Burleigh, S., Fall, K., and E. J. Birrane, "Bundle
              Protocol Version 7", draft-ietf-dtn-bpbis-31 (work in
              progress), January 2021.

   [I-D.templin-6man-omni]
              Templin, F. L. and T. Whyman, "Transmission of IP Packets
              over Overlay Multilink Network (OMNI) Interfaces",
              draft-templin-6man-omni-47 (work in progress), September
              2021.

   [MPPS]     Majkowski, M., "How to Receive a Million Packets Per
              Second", June 2015, <https://blog.cloudflare.com/how-to-
              receive-a-million-packets/>.

   [QUIC]     Ghedini, A., "Accelerating UDP Packet Transmission for
              QUIC", December 2019,
              <https://calendar.perfplanet.com/2019/accelerating-udp-
              packet-transmission-for-quic/>.

   [RFC4963]  Heffner, J., Mathis, M., and B. Chandler, "IPv4
              Reassembly Errors at High Data Rates", RFC 4963,
              DOI 10.17487/RFC4963, July 2007,
              <https://www.rfc-editor.org/info/rfc4963>.

   [RFC6864]  Touch, J., "Updated Specification of the IPv4 ID Field",
              RFC 6864, DOI 10.17487/RFC6864, February 2013,
              <https://www.rfc-editor.org/info/rfc6864>.

   [RFC8899]  Fairhurst, G., Jones, T., Tuexen, M., Ruengeler, I., and
              T. Voelker, "Packetization Layer Path MTU Discovery for
              Datagram Transports", RFC 8899, DOI 10.17487/RFC8899,
              September 2020, <https://www.rfc-editor.org/info/rfc8899>.

   [RFC8900]  Bonica, R., Baker, F., Huston, G., Hinden, R., Troan, O.,
              and F. Gont, "IP Fragmentation Considered Fragile",
              BCP 230, RFC 8900, DOI 10.17487/RFC8900, September 2020,
              <https://www.rfc-editor.org/info/rfc8900>.

Author's Address

   Fred L. Templin (editor)
   Boeing Research & Technology
   P.O. Box 3707
   Seattle, WA 98124
   USA

   Email: fltemplin@acm.org