ICNRG                                                           D. Oran
Internet-Draft                      Network Systems Research and Design
Intended status: Experimental                          29 February 2020
Expires: 1 September 2020


     Maintaining CCNx or NDN flow balance with highly variable data
                              object sizes
                    draft-oran-icnrg-flowbalance-03

Abstract

   Deeply embedded in some ICN architectures, especially Named Data
   Networking (NDN) and Content-Centric Networking (CCNx), is the
   notion of flow balance.  This captures the idea that there is a
   one-to-one correspondence between requests for data, carried in
   Interest messages, and the responses with the requested data object,
   carried in Data messages.  This has a number of highly beneficial
   properties for flow and congestion control in networks, as well as
   some desirable security properties.  For example, neither legitimate
   users nor attackers are able to inject large amounts of un-requested
   data into the network.

   Existing congestion control approaches, however, have a difficult
   time dealing effectively with the widely varying size of ICN Data
   messages, because the protocols allow a dynamic range of 1 byte to
   64 kilobytes.  Since Interest messages are used to allocate the
   reverse link bandwidth for returning Data, there is large
   uncertainty in how to allocate that bandwidth.  Unfortunately, most
   current congestion control schemes in CCNx and NDN only count
   Interest messages and have no idea how much data is involved that
   could congest the reverse link.  This document proposes a method to
   maintain flow balance by accommodating the wide dynamic range in
   Data message size.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.
   The list of current Internet-Drafts is at
   https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 1 September 2020.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Table of Contents

   1.  Introduction
   2.  Requirements Language
   3.  Method to enhance congestion control with signaled size
       information in Interest Messages
     3.1.  How to predict the size of returning Data messages
     3.2.  Handling 'too big' cases
     3.3.  Handling 'too small' cases
     3.4.  Interactions with Interest Aggregation
     3.5.  Operation when some Interests lack the expected data size
           option and some have it
   4.  Dealing with malicious actors
   5.  Mapping to CCNx and NDN packet encodings
     5.1.  Packet encoding for CCNx
     5.2.  Packet encoding for NDN
   6.  IANA Considerations
   7.  Security Considerations
   8.  Acknowledgements
   9.  References
     9.1.  Normative References
     9.2.  Informative References
   Author's Address

1.  Introduction

   Deeply embedded in some ICN architectures, especially Named Data
   Networking ([NDN]) and Content-Centric Networking (CCNx [RFC8569],
   [RFC8609]), is the notion of _flow balance_.  This captures the idea
   that there is a one-to-one correspondence between requests for data,
   carried in Interest messages, and the responses with the requested
   data object, carried in Data messages.  This has a number of highly
   beneficial properties for flow and congestion control in networks,
   as well as some desirable security properties.  For example, neither
   legitimate users nor attackers are able to inject large amounts of
   un-requested data into the network.

   This approach leads to a desire to make the size of the objects
   carried in Data messages small and near constant, because flow
   balance can then be kept using simple bookkeeping of how many
   Interest messages are outstanding.
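   As a purely illustrative sketch (not part of any CCNx or NDN
   specification), such Interest-counting bookkeeping amounts to a
   single per-face counter; the names below (FaceState, interest_limit)
   are hypothetical:

   # Illustrative only: simple per-face Interest counting, the baseline
   # that the rest of this document refines.
   class FaceState:
       def __init__(self, interest_limit):
           self.interest_limit = interest_limit  # max outstanding Interests
           self.outstanding = 0                  # Interests awaiting Data

       def admit_interest(self):
           # Forward one more Interest only if the reverse link can
           # absorb one more (implicitly MTU-sized) Data message.
           if self.outstanding >= self.interest_limit:
               return False                      # queue or drop
           self.outstanding += 1
           return True

       def data_returned(self):
           # A Data message came back; release its implicit allocation.
           self.outstanding = max(0, self.outstanding - 1)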
   While simple, constraining Data messages to be quite small - usually
   on the order of a link Maximum Transmission Unit (MTU) - has some
   constraints and deleterious effects, among which are:

   *  Such small data objects are inconvenient for many applications;
      their natural data object sizes can be considerably larger than a
      link MTU.

   *  Applications with truly small data objects (e.g. voice packets in
      an Internet telephony application) have no way to communicate
      that to the network, causing resources to still be allocated for
      MTU-sized data objects.

   *  When chunking a larger data object into multiple Data messages,
      each message has to be individually cryptographically hashed and
      signed, increasing both computational overhead and overall
      message header size.  The signature can be elided when Manifests
      are used (by signing the Manifest instead), but the overhead of
      hashing multiple small messages rather than fewer larger ones
      remains.

   One approach which helps with the last of these is to employ
   fragmentation for Data messages larger than the Path MTU (PMTU).
   Such messages are carved into smaller pieces for transmission over
   the link(s).  There are three flavors of fragmentation: end-to-end,
   hop-by-hop with reassembly at every hop, and hop-by-hop with
   cut-through of individual fragments.  A number of ICN protocol
   architectures incorporate fragmentation, and schemes have been
   proposed for both NDN and CCNx, for example in [Ghali2013].
   Fragmentation alone does not ameliorate the flow balance problem,
   however, since from a resource allocation standpoint both memory and
   link bandwidth must be set aside for maximum-sized data objects to
   avoid congestion collapse under overload.

   The design space considered in this document does not, however,
   extend to arbitrarily large objects (e.g. hundreds of kilobytes or
   larger).  As the dynamic range of data object sizes gets very large,
   finding the right tradeoff between handling a large number of small
   data objects versus a single very large data object when allocating
   link and buffer resources becomes intractable.  Further, the
   semantics of Interest-Data exchanges mean that any error in the
   exchange results in a re-issue of an Interest for the entire Data
   object.  Very large data objects represent a performance problem
   because the cost of retransmission when Interests are retransmitted
   (or re-issued) becomes unsustainably high.  Therefore, the method we
   propose deals with a dynamic range of object sizes from very small
   (a fraction of a link MTU) to moderately large - about 64 kilobytes,
   or equivalently about 40 Ethernet packets - and assumes an
   associated fragmentation scheme to handle link MTUs that cannot
   carry the Data message in a single link-layer packet.

   The approach described in the rest of this document maintains flow
   balance under the conditions outlined above by allocating resources
   accurately based on expected Data message size, rather than
   employing simple Interest counting.

2.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].
3.  Method to enhance congestion control with signaled size information
    in Interest Messages

   Before diving into the specifics of the design, it is useful to
   consider how congestion control works in NDN/CCNx.  Unlike the IP
   protocol family, which relies on end-to-end congestion control (e.g.
   TCP [RFC0793], DCCP [RFC4340], SCTP [RFC4960],
   QUIC [I-D.ietf-quic-transport]), CCNx and NDN employ hop-by-hop
   congestion control.  There is per-Interest/Data state at every hop
   of the path, and therefore bandwidth for data returning on the
   inverse path can be allocated for each outstanding Interest.  In
   many current designs, this allocation is done using simple Interest
   counting: by queueing and subsequently forwarding one Interest
   message from a downstream node, a forwarder implicitly provides a
   guarantee (either hard or soft) that there is sufficient bandwidth
   on the inverse direction of the link to send back one Data message.
   A number of congestion control schemes have been developed that
   operate in this fashion, for example [Wang2013], [Mahdian2016],
   [Song2018], and [Carofiglio2012].  Other schemes, like
   [Schneider2016], neither count nor police Interests, but instead
   monitor queues using AQM (active queue management) to mark or drop
   returning Data packets that have experienced congestion.  It is
   worth noting that every congestion control algorithm has an explicit
   fairness goal and associated objective function (usually either
   [minmaxfairness] or [proportionalfairness]).  If fairness is to be
   based on resource usage, pure Interest counting is not sufficient,
   since a consumer asking for large things can saturate a link and
   shift loss to consumers asking for small things.

   In order to deal with a larger dynamic range of Data message size,
   some means is required to allocate link bandwidth for Data messages
   in bytes, with an upper bound larger than a Path MTU and a lower
   bound smaller than a single link MTU.  Since resources are allocated
   for returning Data based on arriving Interests, this information
   must be available in Interest messages.

   Therefore, one key idea is the inclusion of an _expected data size_
   TLV in each Interest message.  This allows each forwarder on the
   path taken by the Interest to more accurately allocate bandwidth on
   the inverse path for the returning Data message.  Also, by including
   the expected data size, large objects will have a corresponding
   weight in resource allocation, maintaining link and forwarder
   buffering fairness.  The simpler Interest counting scheme was
   nominally "fair" on a per-exchange basis within the variations of
   data that fit in a single PMTU packet, because all Interests
   produced similar amounts of data in return.  In the absence of such
   a field, it is not feasible to allow a large dynamic range in object
   size.  While schemes like [Schneider2016] would not employ the
   expected data size to allocate reverse link bandwidth, they can
   still benefit from the information to affect the AQM congestion
   marking algorithm, preferentially marking data packets that exceed
   the expected data size from the corresponding Interest.

   It is natural to ask whether the additional complexity introduced
   into an ICN forwarder, and the additional computational cost of the
   congestion control operations, is worthwhile.  For congestion
   control schemes like [Schneider2016], the additional overhead is not
   trivial, since no Interest counting is happening.  However, if a
   congestion control scheme is _already_ counting Interests, the
   additional overhead is minimal: reading one extra TLV from the
   Interest and incrementing the outstanding data amount for the
   corresponding queue by that number rather than by a constant of 1.
   The overhead on returning Data is simply reducing the amount by the
   actual Data message size, rather than by 1.
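   To make the preceding paragraph concrete, the following sketch
   (illustrative only; the FaceState structure, byte_limit parameter,
   and method names are hypothetical and not part of this
   specification) shows an Interest-counting allocator, like the sketch
   in Section 1, converted to count bytes using the expected data size:

   # Illustrative only: reverse-link allocation in bytes rather than in
   # Interests.  Only the increment and decrement steps change relative
   # to simple Interest counting.
   class FaceState:
       def __init__(self, byte_limit):
           self.byte_limit = byte_limit   # reverse-link bytes we may commit
           self.outstanding_bytes = 0     # bytes of Data still expected

       def admit_interest(self, expected_data_size):
           # Reserve reverse-link bandwidth for the returning Data.
           if self.outstanding_bytes + expected_data_size > self.byte_limit:
               return False               # queue or drop the Interest
           self.outstanding_bytes += expected_data_size
           return True

       def data_returned(self, actual_data_size):
           # Release the reservation using the actual Data message size;
           # mismatches are handled as described in Sections 3.2 and 3.3.
           self.outstanding_bytes = max(0, self.outstanding_bytes -
                                           actual_data_size)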
3.1.  How to predict the size of returning Data messages

   This of course raises the question "How does the requester know how
   big the corresponding Data message coming back will be?".  For a
   number of important applications, the size is known a priori due to
   the characteristics of the application.  Here are some examples:

   *  For many sensor and other Internet-of-Things applications, the
      data is instrument readings which have fixed, known size.

   *  In video streaming, the data is the output of a video encoder
      which produces variable sized frames.  This information is
      typically made available ahead of time to the streaming clients
      in the form of a _Manifest_ (e.g. [DASH], FLIC
      [I-D.irtf-icnrg-flic]), which contains the names of the
      corresponding segments (or individual frames) of video and audio
      and their sizes.

   *  Internet telephony applications use vocoders that typically
      employ fixed-size audio frames.  Therefore, their size is known
      either a priori, or via an initialization exchange at the start
      of an audio session.

   The more complex cases arise where the data size is not known at the
   time the Interest must be sent.  Much of the nuance of the proposed
   scheme is in how mismatches between the expected data size and the
   actual Data message returned are handled.  The consumer can either
   under- or over-estimate the data size.  In the former case, the
   under-estimate can lead to congestion and possible loss of data.  In
   the latter case, bandwidth that could have been used by data objects
   requested by other consumers might be wasted.  We first consider
   "honest" mis-estimates due to imperfect knowledge by the ICN
   application; later we consider malicious applications that are using
   the machinery to mount some form of attack.  We also consider the
   effects of Interest aggregation if the aggregated Interests have
   differing expected data sizes.  Also, it should be obvious that if
   the Data message arrives, the application learns its actual size,
   which may or may not be useful in adjusting the expected data size
   estimate for future Interests.

   In all cases, the expected data size from the Interest can be
   incorporated in the corresponding Pending Interest Table (PIT) entry
   of each CCNx/NDN forwarder on the path, and hence when a (possibly
   fragmented) Data object comes back, its total size is known and can
   be compared to the expected size in the PIT for a mismatch.  Aside:
   in the case of fragmentation, we assume a fragmentation scheme in
   which the total Data message size can be known as soon as any one
   fragment is received (a reasonable assumption for any well-designed
   fragmentation method, such as that in [Ghali2013]).
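   A minimal sketch of such a PIT entry follows (illustrative only; the
   PitEntry structure and its field names are hypothetical).  It
   records the expected data size per ingress face, as required by the
   aggregation rules in Section 3.4, and classifies the returning Data
   for the 'too big' and 'too small' handling of the next two sections:

   # Illustrative only: a PIT entry carrying per-face expected sizes.
   class PitEntry:
       def __init__(self, name):
           self.name = name
           self.expected_size = {}        # ingress face -> expected bytes

       def add_interest(self, face, expected_data_size):
           self.expected_size[face] = expected_data_size

       def check_returning_data(self, actual_data_size):
           # Compare the actual (reassembled) size against each face's
           # expectation.
           verdicts = {}
           for face, expected in self.expected_size.items():
               if actual_data_size > expected:
                   verdicts[face] = "too_big"     # Section 3.2
               elif actual_data_size < expected:
                   verdicts[face] = "too_small"   # Section 3.3
               else:
                   verdicts[face] = "ok"
           return verdicts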
3.2.  Handling 'too big' cases

   If the returning Data message is larger than the expected data size,
   the extra data could result in either unfair bandwidth allocation or
   possibly data loss under congestion conditions.  When this is
   detected, the forwarder has three choices:

   1.  It could forward the Data message anyway, which is safe under
       non-congestion conditions, but unfair and possibly unstable when
       the output link is congested.

   2.  It could forward the data when un-congested (e.g. by assessing
       output queue depth) but drop it when congested.

   3.  It could always drop the data, as a way of "punishing" the
       requester for the mis-estimate.

   Either of the latter two strategies is acceptable from a congestion
   control point of view.  However, it is not a good idea to simply
   drop the Data message with no feedback to the issuer of the
   Interest, because the application has no way to learn the actual
   data size and retry.  Further, recovery would be delayed until the
   failing Interest timed out.  Therefore, an additional element needed
   in the protocol semantics is the incorporation of a "Data too big"
   error message (achieved via the use of an "Interest Return" packet
   in CCNx).

   Upon dropping data as above, the CCNx/NDN forwarder converts the
   normal Data message into an Interest Return packet containing the
   existing [RFC8609] T_MTU_TOO_LARGE error code and the actual size of
   the Data message instead of the original content.  It propagates
   that back toward the client identically to how the original Data
   message would have been handled.  Subsequent nodes, upon receiving
   the T_MTU_TOO_LARGE error, treat it identically to other Interest
   Return errors.  When the Interest Return eventually arrives back at
   the issuer of the Interest, the user MAY reissue the Interest with
   the correct expected data size.

   One detail to note is that an Interest Return carrying
   T_MTU_TOO_LARGE must be deterministically smaller than the expected
   data size in all cases.  This is clearly the case for large data
   objects, but there is a corner case with small data objects.  There
   has to be a minimum expected data size that a client can specify in
   their Interests, and that minimum cannot be smaller than the size of
   a T_MTU_TOO_LARGE Interest Return packet.
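   The following sketch (illustrative only) combines choice 2 above
   with the "Data too big" feedback just described.  The helper names
   (is_congested, make_interest_return, wire_size) are hypothetical:

   # Illustrative only: recommended handling of a Data message that
   # exceeds the expected data size recorded in the PIT.
   def handle_returning_data(face, expected_size, data_msg):
       actual_size = wire_size(data_msg)
       if actual_size <= expected_size:
           face.send(data_msg)
           return
       if not face.is_congested():
           # Choice 2: safe to forward while the reverse link is idle.
           face.send(data_msg)
       else:
           # Drop the Data and tell the consumer why, so it can reissue
           # the Interest with a corrected expected data size.
           err = make_interest_return(
               return_type="T_MTU_TOO_LARGE",    # existing RFC 8609 code
               actual_data_size=actual_size)     # replaces the content
           face.send(err)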
3.3.  Handling 'too small' cases

   Next we consider the case where the returning data is smaller than
   the expected data size.  While this case does not result in
   congestion, it can cause resources to be inefficiently allocated
   because not all of the set-aside bandwidth for the returning data
   object gets used.  The simplest and most straightforward way to deal
   with this case is to essentially ignore it.  The motivation for not
   worrying about the smaller data mismatch is that in many situations
   that employ usage-based resource measurement (and possibly
   charging), it is trivial to just account for the usage according to
   the larger expected data size rather than the actual returned data
   size.  Properly adjusting congestion control parameters to somehow
   penalize users for over-estimating their resource usage requires
   fairly heavyweight machinery, which in most cases is not warranted.
   If desired, any of the following mechanisms could be considered:

   *  Attempt to identify future Interests for the same object or
      closely related objects and allocate resources based on some
      retained state about the actual size of prior objects.

   *  Police consumer behavior and decrease the expected data size in
      one or more future Interests to compensate.

   *  For small objects, do more optimistic resource allocation on the
      links on the presumption that there will be some "slack" due to
      clients overestimating data object size.

3.4.  Interactions with Interest Aggregation

   One protocol detail of CCNx/NDN that needs to be dealt with is
   Interest Aggregation.  Interest Aggregation, while a powerful
   feature for maintaining flow balance when multiple consumers send
   Interests for the same Named object, introduces subtle
   complications.  Whenever a second or subsequent Interest arrives at
   a forwarder with an active PIT entry, it is possible that those
   Interests carry different parameters, for example hop limit,
   payload, etc.  It is therefore necessary to specify the exact
   behavior of the forwarder for each of the parameters that might
   differ.  In the case of the expected data size parameter defined
   here, the value is associated with the ingress face on which the
   Interest arrived, as opposed to being global to the PIT entry as a
   whole.  Interest aggregation interacts with expected data size if
   Interests from different clients contain different values of the
   expected data size.  As above in Section 3.3, the simplest solution
   to this problem is to ignore it, as most error cases are benign.
   However, there is one problematic error case where one client
   provides an accurate expected data size, but another who issued the
   Interest first underestimates, causing both to receive a
   T_MTU_TOO_LARGE error.  This introduces a denial of service
   vulnerability, which we discuss below together with the other
   malicious actor cases.

   There are two cases to consider:

   1.  The arriving Interest carries an expected data size smaller than
       any of the values associated with the PIT entry.

   2.  The arriving Interest carries an expected data size larger than
       any of the values associated with the PIT entry.

   For Case (1) the Interest can be safely aggregated, since the
   upstream links will have sufficient bandwidth allocated based on the
   larger expected data size (assuming the original Interest's expected
   data size was itself sufficiently large to accommodate the actual
   size of the returning Data).  On the other hand, should the incoming
   face have bandwidth allocated based on the larger existing
   Interest's expected data size, or on the smaller value in the
   arriving Interest?  Here there are two possible approaches:

   a.  Allocate based on the data size already in the PIT.  In this
       case the consumer sending the earlier Interest can cause over-
       allocation of link bandwidth for other incoming faces, but there
       will not be a T_MTU_TOO_LARGE error generated for that Interest.

   b.  Allocate based on the value in the arriving Interest.  If the
       returning Data is in fact larger, generate a T_MTU_TOO_LARGE
       Interest Return on that ingress face, while successfully
       returning the Data message on any faces that do not exhibit a
       too small expected data size.

   It is RECOMMENDED that the second policy be followed.  The reasons
   behind this recommendation are as follows:

   1.  The link can become congested quite quickly after the queuing
       decision is made, especially if the data has a long link-
       occupancy time, so this is the safer alternative.

   2.  The cost of returning the error is only one link RTT, since the
       consumer (or downstream forwarder) can immediately re-issue the
       Interest with the correct size and perhaps pick up the cached
       object from the upstream forwarder's Content Store.

   3.  Being optimistic and returning the data interacts with the
       behavior of aggregate resource control and resource accounting,
       which in turn raises the messy issue of whether to "charge" the
       consumer for the actual bandwidth used or only for the requested
       bandwidth in the expected data size.

   4.  Matters get more complicated still if differential QoS is added
       to the equation, or if consumers "play games" by intentionally
       underestimating so that their Interests get satisfied when links
       are not congested.  This makes handling malicious actors
       (Section 4) more difficult.

   For Case (2) above, the Interest MUST be forwarded rather than
   aggregated, to prevent a consumer from mounting a denial of service
   attack by sending an intentionally too-small expected data size (see
   Section 4 for additional detail on this and other attacks).  As for
   Case (1), it is RECOMMENDED that policy (b) above be followed.
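   A brief sketch of the RECOMMENDED behavior follows (illustrative
   only; it reuses the hypothetical PitEntry structure sketched in
   Section 3.1, and forward_upstream and reserve_reverse_bandwidth are
   hypothetical helpers):

   # Illustrative only: handling an Interest that hits an existing PIT
   # entry, per Cases (1)/(2) and policy (b).
   def on_aggregating_interest(pit_entry, face, expected_data_size,
                               interest):
       existing = list(pit_entry.expected_size.values())
       pit_entry.add_interest(face, expected_data_size)
       # Policy (b): this face's reverse-link allocation always uses the
       # value carried in its own Interest; if the Data turns out to be
       # bigger, only this face receives T_MTU_TOO_LARGE.
       face.reserve_reverse_bandwidth(expected_data_size)
       if existing and expected_data_size > max(existing):
           # Case (2): larger than every recorded value -- MUST forward
           # rather than aggregate, so that an intentionally tiny first
           # estimate cannot deny service to later consumers.
           forward_upstream(interest)
       # Case (1): smaller or equal -- safe to aggregate; nothing more
       # to send upstream.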
3.5.  Operation when some Interests lack the expected data size option
      and some have it

   Since the expected data size is an optional hop-by-hop packet field,
   forwarders need to be prepared to handle an arbitrary mix of packets
   containing or lacking this option.  There are two general things to
   address.

   First, we assume that any forwarder supporting expected data size is
   running a more sophisticated congestion control algorithm than one
   employing simple Interest counting.  The link bandwidth resource
   allocation is therefore based, directly or indirectly, on the
   expected data size in bytes.  Therefore, the forwarder has to assign
   a value to use in the resource allocation for the reverse link.
   This specification does not mandate any particular approach or a
   default value to use.  However, in the absence of other guidance, it
   makes sense to do one of two things:

   1.  Pick a default based on the link MTU of the face on which the
       Interest arrived and use that for all Interests lacking an
       expected data size.  This is likely to be most compatible with
       simple Interest counting, which would rate limit all incoming
       Interests equally.

   2.  Configure some values for given Name prefixes that have known
       sizes.  This may be appropriate for dedicated forwarders
       supporting single use cases, such as:

       *  A forwarder handling IoT sensors sending very small Data
          messages

       *  A forwarder handling real-time video with large average Data
          packets that exceed the link MTU and are routinely fragmented

       *  A forwarder doing voice trunking where the vocoders produce
          moderate sized packets, still much smaller than the link MTU

   The second area to address is what to do if an Interest lacking an
   expected data size is responded to by a Data message whose size
   exceeds the default discussed above.  It would be inappropriate to
   issue a T_MTU_TOO_LARGE error, since the consumer is unlikely to
   understand or deal correctly with that new error case.  Instead, it
   is RECOMMENDED that the forwarder:

   *  Ignore the mismatch if the reverse link is not congested and
      return the requested Data message anyway.

   *  If the reverse link is congested, issue an Interest Return with
      the T_NO_RESOURCES error code.

   This specification does not define or recommend any particular
   algorithm for assessing the congestion state of the link(s) to carry
   the Data message downstream to the requesting consumers.  It is
   assumed that a reasonable algorithm is in use, because otherwise
   even basic Interest counting forms of congestion control would not
   be effective.
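   Both areas are illustrated in the sketch below (illustrative only;
   the prefix table, its values, and the helper names are hypothetical
   and not mandated by this specification):

   # Illustrative only: handling Interests that lack the expected data
   # size TLV.
   PREFIX_DEFAULTS = {
       "/iot/sensors": 64,      # tiny instrument readings
       "/video/live":  8000,    # large frames, routinely fragmented
       "/voice/trunk": 320,     # vocoder frames well below link MTU
   }

   def allocation_size(interest, ingress_face):
       if interest.expected_data_size is not None:
           return interest.expected_data_size
       # Option 2: a configured per-prefix default, if one matches.
       for prefix, size in PREFIX_DEFAULTS.items():
           if interest.name.startswith(prefix):
               return size
       # Option 1: fall back to the MTU of the arrival face, which
       # mimics simple Interest counting for legacy consumers.
       return ingress_face.mtu

   def data_for_legacy_interest(face, default_size, data_msg):
       # Do not send T_MTU_TOO_LARGE to a consumer that never signaled
       # an expected data size; forward if there is room, otherwise
       # return T_NO_RESOURCES.
       if wire_size(data_msg) <= default_size or not face.is_congested():
           face.send(data_msg)
       else:
           face.send(make_interest_return(return_type="T_NO_RESOURCES"))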
4.  Dealing with malicious actors

   First we note that various known attacks in CCNx or NDN can also be
   mounted by users employing this method.  Attacks that involve
   Interest flooding, cache pollution, cache poisoning, etc. are
   neither worsened nor ameliorated by the introduction of the
   congestion control capabilities described here.  However, there are
   two new vulnerabilities that need to be dealt with, both involving
   intentional mis-estimation of data size.

   The first is a consumer who intentionally over-estimates data size
   with the goal of preventing other users from using the bandwidth.
   This is at most a minor additional concern given the discussion of
   how to handle over-estimation by honest clients in Section 3.3.  If
   one of the amelioration techniques described there is used, the case
   of malicious over-estimation is also dealt with adequately.

   The second is a user who intentionally under-estimates the data size
   with the goal of having its Interest processed while the other
   aggregated Interests are not processed, thereby causing
   T_MTU_TOO_LARGE errors and denying service to the other users with
   overlapping requests.  There are a number of possible mitigation
   techniques for this attack vector, ranging in complexity.  We
   outline two below; there may be others as or more effective with
   acceptable complexity and overhead:

   *  (Simplest) A user sending Interests resulting in a
      T_MTU_TOO_LARGE error is treated similarly to users mounting
      Interest flooding attacks; a router aggregating Interests with
      differing expected data sizes rate limits the face(s) exhibiting
      these errors, thus decreasing the ability of a user to issue
      enough mis-estimated Interests to collide and generate Interest
      aggregation.

   *  An ICN forwarder aggregating Interests remembers in the PIT entry
      not only the expected data size of the Interest it forwarded, but
      the maximum of the expected data sizes of the other Interests it
      aggregated.  If a T_MTU_TOO_LARGE error comes back, instead of
      propagating it, the forwarder MAY treat this as a transient
      error, drop the Interest Return, and re-forward the Interest
      using the maximum expected data size in the PIT (assuming it is
      bigger), as sketched below.  This recovers from the error, but
      the attacker can still cause an extra round trip to the producer
      or to an upstream forwarder with a copy of the data in its
      Content Store.
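   A sketch of the second mitigation follows (illustrative only; the
   PIT entry fields forwarded_expected_size and already_retried, and
   the helper names, are hypothetical):

   # Illustrative only: recovering from a T_MTU_TOO_LARGE caused by an
   # aggregated Interest that under-estimated the data size.
   def on_interest_return(pit_entry, return_msg):
       if return_msg.return_type != "T_MTU_TOO_LARGE":
           propagate_downstream(pit_entry, return_msg)
           return
       max_expected = max(pit_entry.expected_size.values())
       if (max_expected > pit_entry.forwarded_expected_size and
               not pit_entry.already_retried):
           # Treat as transient: drop the error and re-forward the
           # Interest with the largest aggregated estimate.  Costs at
           # most one extra upstream round trip.
           pit_entry.already_retried = True
           reforward_interest(pit_entry, expected_data_size=max_expected)
       else:
           propagate_downstream(pit_entry, return_msg)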
5.  Mapping to CCNx and NDN packet encodings

   The only actual protocol machinery needed is a TLV in Interest
   messages that states the size in bytes of the expected Data message
   coming back, and, in the Interest Return on a "too big" error, a TLV
   to carry the actual data size.  In the case of CCNx, this covers the
   encapsulated Data Object, but not the hop-by-hop headers.

5.1.  Packet encoding for CCNx

   For CCNx [RFC8569] there is a new hop-by-hop header TLV, and a new
   value of the Interest Return "Return Type".

   Expected Data Size (for Interest messages), or Actual Data Size (for
   Interest Return messages) TLV:

       +------------+-----------+--------------------------------+
       | Abbrev     | Name      | Description                    |
       +============+===========+================================+
       | T_DATASIZE | Data Size | Expected (Section 3) or Actual |
       |            |           | (Section 3.2) Data Size        |
       +------------+-----------+--------------------------------+

                        Table 1: Data Size TLV

5.2.  Packet encoding for NDN

   TBD based on [NDNTLV].  Suggestions from the NDN team greatly
   appreciated.

6.  IANA Considerations

   Please add the T_DATASIZE TLV to the Hop-by-Hop TLV types registry
   of [RFC8609], with a fixed length of 2 and numeric data type.

   Expected/Actual Data Size TLV encoding.  The range has an upper
   bound of 64K bytes, since that is the largest MTU supported by CCNx.

                        1                   2                   3
    0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
   +---------------+---------------+---------------+---------------+
   |          T_DATASIZE           |               2               |
   +---------------+---------------+---------------+---------------+
   |   Expected/Actual Data Size   |
   +---------------+---------------+

      Figure 1: Expected/Actual Data Size using RFC 8609 encoding
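   For concreteness, the following sketch shows one way to encode and
   parse this TLV according to Figure 1 (illustrative only; the numeric
   type code used here is a placeholder, since the actual value is to
   be assigned by IANA):

   # Illustrative only: Expected/Actual Data Size TLV encode/decode
   # using the RFC 8609 2-byte type / 2-byte length framing.
   import struct

   T_DATASIZE = 0x000F      # placeholder; real value assigned by IANA

   def encode_datasize(size_bytes):
       # Value is a 16-bit quantity; the upper bound is 64K bytes.
       if not 0 <= size_bytes <= 0xFFFF:
           raise ValueError("data size must fit in 16 bits")
       return struct.pack("!HHH", T_DATASIZE, 2, size_bytes)

   def decode_datasize(buf):
       tlv_type, tlv_len, value = struct.unpack("!HHH", buf[:6])
       if tlv_type != T_DATASIZE or tlv_len != 2:
           raise ValueError("not a well-formed T_DATASIZE TLV")
       return value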
7.  Security Considerations

   Section 4 addresses the major security considerations for this
   specification.

8.  Acknowledgements

   Klaus Schneider and Ken Calvert have contributed a number of useful
   comments which have substantially improved the document.

9.  References

9.1.  Normative References

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997.

   [RFC8569]  Mosko, M., Solis, I., and C. Wood, "Content-Centric
              Networking (CCNx) Semantics", RFC 8569,
              DOI 10.17487/RFC8569, July 2019.

   [RFC8609]  Mosko, M., Solis, I., and C. Wood, "Content-Centric
              Networking (CCNx) Messages in TLV Format", RFC 8609,
              DOI 10.17487/RFC8609, July 2019.

9.2.  Informative References

   [Carofiglio2012]
              Carofiglio, G., Gallo, M., and L. Muscariello, "Joint
              hop-by-hop and receiver-driven interest control protocol
              for content-centric networks", ICN Workshop at SIGCOMM
              2012, DOI 10.1145/2377677.2377772, 2012.

   [DASH]     "Dynamic Adaptive Streaming over HTTP", various.

   [Ghali2013]
              Ghali, C., Narayanan, A., Oran, D., Tsudik, G., and C.
              Wood, "Secure Fragmentation for Content-Centric
              Networks", IEEE 14th International Symposium on Network
              Computing and Applications, DOI 10.1109/nca.2015.34,
              2015.

   [I-D.ietf-quic-transport]
              Iyengar, J. and M. Thomson, "QUIC: A UDP-Based
              Multiplexed and Secure Transport", Work in Progress,
              Internet-Draft, draft-ietf-quic-transport-27, 21 February
              2020.

   [I-D.irtf-icnrg-flic]
              Tschudin, C., Wood, C., Mosko, M., and D. Oran, "File-
              Like ICN Collections (FLIC)", Work in Progress,
              Internet-Draft, draft-irtf-icnrg-flic-02, 4 November
              2019.

   [Mahdian2016]
              Mahdian, M., Arianfar, S., Gibson, J., and D. Oran,
              "MIRCC: Multipath-aware ICN Rate-based Congestion
              Control", Proceedings of the 3rd ACM Conference on
              Information-Centric Networking,
              DOI 10.1145/2984356.2984365, 2016.

   [minmaxfairness]
              "Max-min Fairness", various.

   [NDN]      "Named Data Networking", various.

   [NDNTLV]   "NDN Packet Format Specification", 2016.

   [proportionalfairness]
              "Proportionally Fair", various.

   [RFC0793]  Postel, J., "Transmission Control Protocol", STD 7,
              RFC 793, DOI 10.17487/RFC0793, September 1981.

   [RFC4340]  Kohler, E., Handley, M., and S. Floyd, "Datagram
              Congestion Control Protocol (DCCP)", RFC 4340,
              DOI 10.17487/RFC4340, March 2006.

   [RFC4960]  Stewart, R., Ed., "Stream Control Transmission Protocol",
              RFC 4960, DOI 10.17487/RFC4960, September 2007.

   [Schneider2016]
              Schneider, K., Yi, C., Zhang, B., and L. Zhang, "A
              Practical Congestion Control Scheme for Named Data
              Networking", Proceedings of the 3rd ACM Conference on
              Information-Centric Networking (ACM-ICN '16),
              DOI 10.1145/2984356.2984369, 2016.

   [Song2018] Song, J., Lee, M., and T. Kwon, "SMIC: Subflow-level
              Multi-path Interest Control for Information Centric
              Networking", 5th ACM Conference on Information-Centric
              Networking, DOI 10.1145/3267955.3267971, 2018.

   [Wang2013] Wang, Y., Rozhnova, N., Narayanan, A., Oran, D., and I.
              Rhee, "An Improved Hop-by-hop Interest Shaper for
              Congestion Control in Named Data Networking", ACM SIGCOMM
              Workshop on Information-Centric Networking,
              DOI 10.1145/2534169.2491233, 2013.

Author's Address

   Dave Oran
   Network Systems Research and Design
   4 Shady Hill Square
   Cambridge, MA 02138
   United States of America

   Email: daveoran@orandom.net