ICNRG                                                            D. Oran
Internet-Draft                       Network Systems Research and Design
Intended status: Informational                          28 February 2020
Expires: 31 August 2020

 Considerations in the development of a QoS Architecture for CCNx-like
                             ICN protocols
                      draft-oran-icnrg-qosarch-04

Abstract

   This is a position paper.  It documents the author's personal views
   on how Quality of Service (QoS) capabilities ought to be
   accommodated in ICN protocols like CCNx or NDN, which employ flow-
   balanced Interest/Data exchanges and hop-by-hop forwarding state as
   their fundamental machinery.  It argues that such protocols demand a
   substantially different approach to QoS from that taken in TCP/IP,
   and proposes specific design patterns to achieve both classification
   and differentiated QoS treatment on both a flow and aggregate basis.
   It also considers the effect of caches as a resource, in addition to
   memory, CPU and link bandwidth, that should be subject to explicitly
   unfair resource allocation.  The proposed methods are intended to
   operate purely at the network layer, providing the primitives needed
   to achieve both transport and higher-layer QoS objectives.  It
   explicitly excludes any discussion of Quality of Experience (QoE),
   which can only be assessed and controlled at the application layer
   or above.

   This document is not a product of the IRTF Information-Centric
   Networking Research Group (ICNRG) but has been through formal last
   call and has the support of the participants in the research group
   for publication as an individual submission.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).
   Note that other groups may also distribute working documents as
   Internet-Drafts.  The list of current Internet-Drafts is at
   https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 31 August 2020.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Table of Contents

   1.  Introduction
     1.1.  Applicability Assessment by ICNRG Chairs
   2.  Requirements Language
   3.  Background on the nature and properties of Quality of Service
       in network protocols
     3.1.  Congestion Control basics relevant to ICN
   4.  What can we control to achieve QoS in ICN?
   5.  How does this relate to QoS in TCP/IP?
   6.  Why is ICN Different?  Can we do Better?
     6.1.  Equivalence class capabilities
     6.2.  Topology interactions with QoS
     6.3.  Specification of QoS treatments
     6.4.  ICN forwarding semantics effect on QoS
     6.5.  QoS interactions with Caching
   7.  A strawman set of principles to guide QoS architecture for ICN
     7.1.  What about the richer QoS semantics available with
           INTServ-like traffic control?
   8.  IANA Considerations
   9.  Security Considerations
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Author's Address

1.  Introduction

   The TCP/IP protocol suite used on today's Internet has over 30 years
   of accumulated research and engineering into the provision of
   Quality of Service machinery, employed with varying success in
   different environments.  ICN protocols like Named Data Networking
   (NDN [NDN]) and Content-Centric Networking (CCNx [RFC8569],
   [RFC8609]) have an accumulated 10 years of research and very little
   deployment.  We therefore have the opportunity to either
   recapitulate the approaches taken with TCP/IP (e.g. IntServ
   [RFC2998] and Diffserv [RFC2474]) or design a new architecture and
   associated mechanisms aligned with the properties of ICN protocols,
   which differ substantially from those of TCP/IP.  This position
   paper advocates the latter approach and comprises the author's
   personal views on how Quality of Service (QoS) capabilities ought
   to be accommodated in ICN protocols like CCNx or NDN.  Specifically,
   these protocols differ in fundamental ways from TCP/IP.
   The important differences are summarized in the following table:

   +-----------------------------+------------------------------------+
   | TCP/IP                      | CCNx or NDN                        |
   +=============================+====================================+
   | Stateless forwarding        | Stateful forwarding                |
   +-----------------------------+------------------------------------+
   | Simple Packets              | Object model with optional caching |
   +-----------------------------+------------------------------------+
   | Pure datagram model         | Request-response model             |
   +-----------------------------+------------------------------------+
   | Asymmetric Routing          | Symmetric Routing                  |
   +-----------------------------+------------------------------------+
   | Independent flow directions | Flow balance                       |
   +-----------------------------+------------------------------------+
   | Flows grouped by IP prefix  | Flows grouped by name prefix       |
   | and port                    |                                    |
   +-----------------------------+------------------------------------+
   | End-to-end congestion       | Hop-by-hop congestion control      |
   | control                     |                                    |
   +-----------------------------+------------------------------------+

       Table 1: Differences between IP and ICN relevant to QoS
                              architecture

   This document proposes specific design patterns to achieve both
   flow classification and differentiated QoS treatment for ICN on
   both a flow and aggregate basis.  It also considers the effect of
   caches as a resource, in addition to memory, CPU and link
   bandwidth, that should be subject to explicitly unfair resource
   allocation.  The proposed methods are intended to operate purely at
   the network layer, providing the primitives needed to achieve both
   transport and higher-layer QoS objectives.  It does not propose
   detailed protocol machinery to achieve these goals; it leaves these
   to supplementary specifications, such as
   [I-D.moiseenko-icnrg-flowclass] and [I-D.anilj-icnrg-dnc-qos-icn].
   It explicitly excludes any discussion of Quality of Experience
   (QoE), which can only be assessed and controlled at the application
   layer or above.

   Much of this document is derived from presentations the author has
   given at ICNRG meetings over the last few years that are available
   through the IETF datatracker (see, for example,
   [Oran2018QoSslides]).

1.1.  Applicability Assessment by ICNRG Chairs

   QoS in ICN is an important topic with a huge design space.  ICNRG
   has been discussing different specific protocol mechanisms as well
   as conceptual approaches.  This document presents architectural
   considerations for QoS, leveraging ICN properties instead of merely
   applying IP-QoS mechanisms, without defining a specific
   architecture or specific protocol mechanisms yet.  However, there
   is consensus in ICNRG that this document, clarifying the author's
   views, could inspire such work and should hence be published as a
   position paper.

2.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119
   [RFC2119].

3.  Background on the nature and properties of Quality of Service in
    network protocols

   Much of this background material is tutorial and can simply be
   skipped by readers familiar with the long and checkered history of
   quality of service in packet networks.  Other parts of it are
   polemical, yet serve to illuminate the author's personal biases and
   technical views.

   All networking systems provide some degree of "quality of service"
   in that they exhibit non-zero utility when offered traffic to
   carry.  The term is therefore used to describe systems that control
   the allocation of various resources in order to achieve _managed
   unfairness_.
   Absent explicit mechanisms to decide what traffic to be unfair to,
   most systems try to achieve some form of "fairness" in the
   allocation of resources, optimizing the overall utility delivered
   to all demand under the constraint of available resources.  From
   this it should be obvious that you cannot use QoS mechanisms to
   create or otherwise increase resource capacity!  In fact, all known
   QoS schemes have non-zero overhead and hence may (albeit slightly)
   decrease the total resources available to carry user traffic.

   Further, accumulated experience seems to indicate that QoS is
   helpful in a fairly narrow range of network conditions:

   *  If your resources are lightly loaded, you don't need it, as
      neither congestive loss nor substantial queueing delay occurs.

   *  If your resources are heavily oversubscribed, it doesn't save
      you.  So many users will be unhappy that you are probably not
      delivering a viable service.

   *  Failures can rapidly shift your state from the first case above
      to the second, in which case either:

      -  your QoS machinery cannot respond quickly enough to maintain
         the advertised service quality continuously, or

      -  resource allocations are sufficiently conservative to result
         in substantial wasted capacity under non-failure conditions.

   Nevertheless, though not universally deployed, QoS is advantageous
   at least for some applications and some network environments.  Some
   examples include:

   *  applications with steep utility functions [Shenker2006], such as
      real-time multimedia

   *  applications with safety-critical operational constraints, such
      as avionics or industrial automation

   *  dedicated or tightly managed networks whose economics depend on
      strict adherence to challenging service level agreements (SLAs)

   Another factor in the design and deployment of QoS is the
   scalability and scope over which the desired service can be
   achieved.
   Here there are two major considerations, one technical, the other
   economic/political:

   *  Some signaled QoS schemes, such as RSVP [RFC2205], maintain
      state in routers for each flow, which scales linearly with the
      number of flows.  For core routers through which pass millions
      to billions of flows, the memory required is infeasible to
      provide.

   *  The Internet is comprised of many minimally cooperating
      autonomous systems [AS].  There are practically no successful
      examples of QoS deployments crossing the AS boundaries of
      multiple service providers.  This in almost all cases limits the
      applicability of QoS capabilities to be intra-domain.

   While this document adopts the narrow definition of QoS as _managed
   unfairness_, much of the networking literature uses the term more
   colloquially as applying to any mechanism that improves overall
   performance.  Readers assuming this broader context will find a
   large class of proven techniques to be ignored.  This is
   intentional.  Among these are seamless producer mobility schemes
   like MAPME [Auge2018], and network coding of ICN data as discussed
   in [I-D.irtf-nwcrg-nwc-ccn-reqs].

   Finally, the relationship between QoS and either accounting or
   billing is murky.  Some schemes can accurately account for resource
   consumption and ascertain to which user to allocate the usage.
   Others cannot.  While the choice of mechanism may have important
   practical economic and political consequences for cost and workable
   business models, this document considers none of those things and
   discusses QoS only in the context of providing managed unfairness.

   Some further background on congestion control for ICN is below.

3.1.  Congestion Control basics relevant to ICN

   Congestion control is necessary in any packet network that
   multiplexes traffic among multiple sources and destinations, in
   order to:

   1.  Prevent collapse of utility due to overload, where the total
       offered service declines as load increases, perhaps
       precipitously, rather than increasing or remaining flat.

   2.  Avoid starvation of some traffic due to excessive demand by
       other traffic.

   3.  Beyond the basic protections against starvation, achieve
       "fairness" among competing traffic.  Two common objective
       functions are [minmaxfairness] and [proportionalfairness], both
       of which have been implemented and deployed successfully on
       packet networks for many years.

   Before moving on to QoS, it is useful to consider how congestion
   control works in NDN or CCNx.  Unlike the IP protocol family, which
   relies exclusively on end-to-end congestion control (e.g.
   TCP [RFC0793], DCCP [RFC4340], SCTP [RFC4960],
   QUIC [I-D.ietf-quic-transport]), CCNx and NDN can employ hop-by-hop
   congestion control.  There is per-Interest/Data state at every hop
   of the path, and therefore outstanding Interests provide
   information that can be used to optimize resource allocation for
   data returning on the inverse path, such as bandwidth sharing,
   prioritization and overload control.  In current designs, this
   allocation is often done using Interest counting: accepting one
   Interest packet from a downstream node implicitly provides a
   guarantee (either hard or soft) that there is sufficient bandwidth
   on the inverse direction of the link to send back one Data packet.
   A number of congestion control schemes have been developed for ICN
   that operate in this fashion, for example [Wang2013],
   [Mahdian2016], [Song2018], [Carofiglio2012].  Other schemes, like
   [Schneider2016], neither count nor police Interests, but instead
   monitor queues using AQM (active queue management) to mark
   returning Data packets that have experienced congestion.
   This latter class of schemes is similar to those used on IP in the
   sense that they depend on consumers adequately reducing their rate
   of Interest injection to avoid Data packet drops due to buffer
   overflow in forwarders.  The former class of schemes is (arguably)
   more robust against misbehavior by consumers.

   Given the stochastic nature of round trip times, and the ubiquity
   of wireless links and encapsulation tunnels with variable
   bandwidth, a simple scheme that admits Interests based only on a
   time-invariant estimate of the returning link bandwidth will
   perform poorly.  However, two characteristics of NDN and CCNx-like
   protocols can help substantially to improve the accuracy and
   responsiveness of the bandwidth allocation:

   1.  RTT is bounded by the Interest lifetime, which puts an upper
       bound on the RTT uncertainty for any given Interest/Data
       exchange.  If Interest lifetimes are kept reasonably short (a
       few RTTs), the allocations do not have to deal with an
       arbitrarily long tail.  One could in fact do a deterministic
       allocation on this basis, but the result would be highly
       pessimistic.  Nevertheless, having a cut-off does improve the
       performance of an optimistic allocation scheme.

   2.  Returning Data packets can be congestion marked by an ECN-like
       marking scheme if the inverse link starts experiencing long
       queue occupancy or some other congestion indication.  Unlike
       TCP/IP, where the rate adjustment can only be done end-to-end,
       this feedback is usable immediately by the downstream ICN
       forwarder, and the Interest shaping rate can be lowered after a
       single link RTT.  This may allow less pessimistic rate
       adjustment schemes than the AIMD with 0.5 multiplier that is
       used on TCP/IP networks.  It also allows the rate adjustments
       to be spread more accurately among the Interest/Data flows
       traversing a link sending congestion signals.
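   To make the hop-by-hop machinery above concrete, the following
   minimal sketch combines Interest counting with an AIMD-style
   response to congestion-marked Data.  It is purely illustrative: the
   class, its parameters, and the 0.9/0.1 constants are invented here,
   and the cited schemes are considerably more sophisticated.

```python
class InterestShaper:
    """Hypothetical hop-by-hop Interest shaper for one outgoing link.

    Each admitted Interest implicitly reserves inverse-link bandwidth
    for one returning Data packet; a congestion-marked Data packet
    lowers the shaping rate after a single link RTT.
    """

    def __init__(self, link_rate: float):
        self.rate = link_rate       # current Interest shaping rate (pkts/RTT)
        self.ceiling = link_rate    # estimate of inverse-link capacity
        self.outstanding = 0        # Interests forwarded, Data not yet back

    def admit_interest(self) -> bool:
        """Admit if the implicit reservation fits the shaping rate."""
        if self.outstanding < self.rate:
            self.outstanding += 1
            return True
        return False                # otherwise queue, drop, or NACK it

    def on_data(self, congestion_marked: bool) -> None:
        """React to a returning Data packet after one link RTT."""
        self.outstanding = max(0, self.outstanding - 1)
        if congestion_marked:
            # Gentler multiplicative decrease than TCP's 0.5, as the
            # one-link-RTT feedback discussed above may permit.
            self.rate = max(1.0, self.rate * 0.9)
        else:
            # Additive increase toward the capacity estimate.
            self.rate = min(self.ceiling, self.rate + 0.1)
```

   Note that a real shaper would also account for variable Data packet
   sizes and Interest lifetimes; this sketch only shows the feedback
   loop's shape.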
   A useful discussion of these properties and how they demonstrate
   the advantages of ICN approaches to congestion control can be found
   in [Carofiglio2016].

4.  What can we control to achieve QoS in ICN?

   QoS is achieved through managed unfairness in the allocation of
   resources in network elements, particularly in the routers doing
   forwarding of ICN packets.  So, a first order question is what
   resources need to be allocated, and how to ascertain which traffic
   gets what allocations.  In the case of CCNx or NDN, the important
   network element resources are:

   +---------------+-----------------------------------------------+
   | Resource      | ICN Usage                                     |
   +===============+===============================================+
   | Communication | buffering for queued packets                  |
   | Link capacity |                                               |
   +---------------+-----------------------------------------------+
   | Content Store | to hold cached data                           |
   | capacity      |                                               |
   +---------------+-----------------------------------------------+
   | Forwarder     | for the Pending Interest Table (PIT)          |
   | memory        |                                               |
   +---------------+-----------------------------------------------+
   | Compute       | for forwarding packets, including the cost of |
   | capacity      | Forwarding Information Base (FIB) lookups     |
   +---------------+-----------------------------------------------+

           Table 2: ICN-related Network Element Resources

   For these resources, any QoS scheme has to specify two things:

   1.  How do you create _equivalence classes_ (a.k.a. flows) of
       traffic to which different QoS treatments are applied?

   2.  What are the possible treatments, and how are those mapped to
       the resource allocation algorithms?

   Two critical facts of life come into play when designing a QoS
   scheme.  First, the number of equivalence classes that can be
   simultaneously tracked in a network element is bounded by both the
   memory and the processing capacity to do the necessary lookups.
   One can allow very fine-grained equivalence classes, but not be
   able to employ them globally because of the scaling limits of core
   routers.  That means it is wise either to restrict the range of
   equivalence classes, or to allow them to be _aggregated_, trading
   off accuracy in policing traffic against the ability to scale.

   Second, the flexibility of expressible treatments can be tightly
   constrained by both protocol encoding and algorithmic limitations.
   The ability to encode treatment requests in the protocol can be
   limited (as it is for IP, where only 6 of the TOS bits are
   available for Diffserv treatments), but equally or more important
   is whether there are practical traffic policing, queuing, and
   pacing algorithms that can be combined to support a rich set of QoS
   treatments.

   In combination, the two considerations above can easily yield a
   design substantially more expressive than what can be supported in
   practice, given the available number of queues on real network
   interfaces and the amount of per-packet computation needed to
   enqueue or dequeue a packet.

5.  How does this relate to QoS in TCP/IP?
   TCP/IP has fewer resource types to manage than ICN, and in some
   cases the allocation methods are simpler, as shown in the following
   table:

   +---------------+-------------+--------------------------------+
   | Resource      | IP Relevant | TCP/IP Usage                   |
   +===============+=============+================================+
   | Communication | YES         | buffering for queued packets   |
   | Link capacity |             |                                |
   +---------------+-------------+--------------------------------+
   | Content Store | NO          | no content store in IP         |
   | capacity      |             |                                |
   +---------------+-------------+--------------------------------+
   | Forwarder     | MAYBE       | not needed for output-buffered |
   | memory        |             | designs                        |
   +---------------+-------------+--------------------------------+
   | Compute       | YES         | for forwarding packets, but    |
   | capacity      |             | arguably much cheaper than ICN |
   +---------------+-------------+--------------------------------+

           Table 3: IP-related Network Element Resources

   For these resources, IP has specified three fundamental things, as
   shown in the following table:

   +-----------------------+----------------------------------------+
   | What                  | How                                    |
   +=======================+========================================+
   | *Equivalence classes* | subset+prefix match on IP 5-tuple      |
   |                       | {SA,DA,SP,DP,PT}                       |
   +-----------------------+----------------------------------------+
   | *Diffserv treatments* | (very) small number of globally-agreed |
   |                       | traffic classes                        |
   +-----------------------+----------------------------------------+
   | *Intserv treatments*  | per-flow parameterized _Controlled     |
   |                       | Load_ and _Guaranteed_ service classes |
   +-----------------------+----------------------------------------+

    Table 4: Fundamental protocol elements to achieve QoS for TCP/IP

   Equivalence classes for IP can be pairwise, by matching against
   both source and destination address+port; pure group, using only
   destination address+port; or source-specific multicast, with source
   address+port and destination multicast address+port.

   With Intserv, the signaling protocol RSVP [RFC2205] carries two
   data structures, the FLOWSPEC and the TSPEC.  The former fulfills
   the requirement to identify the equivalence class to which the QoS
   being signaled applies.  The latter comprises the desired QoS
   treatment along with a description of the dynamic character of the
   traffic (e.g. average bandwidth and delay, peak bandwidth, etc.).
   Both of these encounter substantial scaling limits, which has meant
   that Intserv has historically been limited to confined topologies
   and/or high-value usages, like traffic engineering.

   With Diffserv, the protocol encoding (6 bits in the TOS field of
   the IP header) artificially limits the number of classes one can
   specify.  These are documented in [RFC4594].  Nonetheless, when
   used with fine-grained equivalence classes, one still runs into
   limits on the number of queues required.

6.  Why is ICN Different?  Can we do Better?

   While one could adopt an approach to QoS mirroring the extensive
   experience with TCP/IP, this would, in the author's view, be a
   mistake.  The implementation and deployment of QoS in IP networks
   has been spotty at best.  There are of course economic and
   political reasons as well as technical reasons for these mixed
   results, but there are several architectural choices in ICN that
   make it a potentially much better protocol base to enhance with QoS
   machinery.  This section discusses those differences and their
   consequences.

6.1.  Equivalence class capabilities

   First and foremost, hierarchical names are a much richer basis for
   specifying equivalence classes than IP 5-tuples.  The IP address
   (or prefix) can only separate traffic by topology to the
   granularity of hosts, and cannot express actual computational
   instances nor sets of data.
   Ports give some degree of per-instance demultiplexing, but this
   tends to be both coarse and ephemeral, while confounding the
   demultiplexing function with the assignment of QoS treatments to
   particular subsets of the data.  Some degree of finer granularity
   is possible with IPv6 by exploiting the ability to use up to 64
   bits of address for classifying traffic.  In fact, the hICN project
   ([I-D.muscariello-intarea-hicn]), while adopting the request-
   response model of CCNx, uses IPv6 addresses as the available
   namespace, and IPv6 packets (plus "fake" TCP headers) as the wire
   format.

   Nonetheless, the flexibility of tokenized, variable-length,
   hierarchical names allows one to directly associate classes of
   traffic for QoS purposes with the structure of an application
   namespace.  The classification can be as coarse or fine-grained as
   desired by the application.  While not _always_ the case, there is
   typically a straightforward association between how objects are
   named and how they are grouped together for common treatment.
   Examples abound; a number can be conveniently found in
   [I-D.moiseenko-icnrg-flowclass].

6.2.  Topology interactions with QoS

   In ICN, QoS is not pre-bound to network topology since names are
   non-topological, unlike unicast IP addresses.  This allows QoS to
   be applied to multi-destination and multi-path environments in a
   straightforward manner, rather than requiring either multicast with
   coarse class-based scheduling or complex signaling like that in
   RSVP-TE [RFC3209], which is needed to make point-to-multipoint MPLS
   work.

   Because of IP's stateless forwarding model, complicated by the
   ubiquity of asymmetric routes, any flow-based QoS requires state
   that is decoupled from the actual arrival of traffic and hence must
   be maintained, at least as soft state, even during quiescent
   periods.
   Intserv, for example, requires flow signaling with state O(#flows).
   ICN, even in the worst case, requires state O(#active Interest/Data
   exchanges), since state can be instantiated on arrival of an
   Interest and removed lazily once the data has been returned.

6.3.  Specification of QoS treatments

   Unlike Intserv, Diffserv eschews signaling in favor of class-based
   configuration of resources and queues in network elements.
   However, Diffserv limits traffic treatments to a few bits taken
   from the ToS field of IP.  No such wire encoding limitations exist
   for NDN or CCNx, as the protocol is completely TLV-based, and one
   (or even more than one) new field can be easily defined to carry
   QoS treatment information.

   Therefore, there are greenfield possibilities for more powerful QoS
   treatment options in ICN.  For example, IP has no way to express a
   QoS treatment like "try hard to deliver reliably, even at the
   expense of delay or bandwidth".  Such a QoS treatment for ICN could
   invoke native ICN mechanisms, none of which are present in IP, such
   as:

   *  In-network retransmission in response to hop-by-hop errors
      returned from upstream forwarders

   *  Trying multiple paths to multiple content sources, either in
      parallel or serially

   *  Higher precedence for short-term caching to recover from
      downstream errors

   *  Coordinating cache utilization with forwarding resources

   Such mechanisms are typically described in NDN and CCNx as
   _forwarding strategies_.  However, little or no guidance is given
   for what application actions or protocol machinery is used to
   decide which forwarding strategy to use for which Interests that
   arrive at a forwarder.  See [BenAbraham2018] for an investigation
   of these issues.  Associating forwarding strategies directly with
   the equivalence classes and QoS treatments can make them more
   accessible and useful to implement and deploy.
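   As an illustration of how little the TLV-based wire format
   constrains such extensions, the sketch below encodes a hypothetical
   QoS-treatment field using the 2-octet type / 2-octet length TLV
   layout of CCNx messages [RFC8609].  Neither the type code nor the
   treatment value is assigned in any specification; both are invented
   here purely for illustration.

```python
import struct

# Invented type code for a hypothetical QoS-treatment TLV.  CCNx TLVs
# (RFC 8609) use a 2-octet type and 2-octet length in network order.
T_QOS_TREATMENT = 0x0F00
RELIABLE_OVER_TIMELY = 0x02  # e.g. "deliver reliably, at the expense of delay"

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Encode one TLV: 2-octet type, 2-octet length, then the value."""
    return struct.pack("!HH", tlv_type, len(value)) + value

def decode_tlv(buf: bytes):
    """Decode the TLV at the front of buf, returning (type, value)."""
    tlv_type, length = struct.unpack_from("!HH", buf, 0)
    return tlv_type, buf[4:4 + length]

# A one-octet treatment code occupies 5 octets on the wire.
wire = encode_tlv(T_QOS_TREATMENT, bytes([RELIABLE_OVER_TIMELY]))
```

   The point is only that adding (or later extending) such a field is
   an ordinary TLV definition, not a scarce-bits negotiation as with
   the IP ToS field.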
   Stateless forwarding and asymmetric routing in IP limit the state
   and feedback available to manage link resources.  In contrast, NDN
   or CCNx forwarding allows all link resource allocation to occur as
   part of Interest forwarding, potentially simplifying things
   considerably.  For example, with symmetric routing, producers have
   no control over the paths their data packets traverse, and hence
   any QoS treatments intended to influence routing paths from
   producer to consumer will have no effect.

   One complication in the handling of ICN QoS treatments is not
   present in IP and hence worth mentioning.  CCNx and NDN both
   perform _Interest aggregation_ (see Section 2.3.2 of [RFC8569]).
   If an Interest arrives matching an existing PIT entry, but with a
   different QoS treatment from an Interest already forwarded, it can
   be tricky to decide whether to aggregate the Interest or forward
   it, and how to keep track of the differing QoS treatments for the
   two Interests.  Exploration of the details surrounding these
   situations is beyond the scope of this document; further discussion
   can be found for the general case of flow balance and congestion
   control in [I-D.oran-icnrg-flowbalance], and specifically for QoS
   treatments in [I-D.anilj-icnrg-dnc-qos-icn].

6.4.  ICN forwarding semantics effect on QoS

   IP has three forwarding semantics, with different QoS needs
   (Unicast, Anycast, Multicast).  ICN has a single forwarding
   semantic, so any QoS machinery can be uniformly applied across any
   request/response invocation, whether it employs dynamic destination
   routing, multi-destination parallel requests, or even localized
   flooding (e.g. directly on L2 multicast mechanisms).  Additionally,
   the pull-based model of ICN avoids a number of thorny multicast QoS
   problems that IP has ([Wang2000], [RFC3170], [Tseng2003]).
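   The uniformity of that single forwarding semantic can be sketched
   as follows: a forwarding strategy simply chooses how many FIB next
   hops receive an Interest, covering unicast-like, parallel-path and
   multi-destination behavior with one mechanism.  This is a
   hypothetical illustration, not code from any NDN/CCNx forwarder;
   the strategy names are invented.

```python
def forward_interest(name: str, fib_next_hops: list,
                     strategy: str = "best-route") -> list:
    """Return the faces on which an Interest is sent.

    Illustrative only: real forwarding strategies also consult RTT
    measurements, NACKs, and PIT state when choosing next hops.
    """
    if not fib_next_hops:
        return []                      # no route: drop or return a NACK
    if strategy == "best-route":
        return fib_next_hops[:1]       # single lowest-cost next hop
    if strategy == "parallel-2":
        return fib_next_hops[:2]       # race two paths for robustness
    if strategy == "multicast":
        return list(fib_next_hops)     # fan out to every next hop
    raise ValueError("unknown strategy: " + strategy)
```

   Whatever the fan-out, the same PIT entry collects the returning
   Data, so a QoS treatment attached to the Interest applies uniformly
   to all of these delivery modes.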
The multi-destination/multi-path forwarding model in ICN changes
resource allocation needs in a fairly deep way.  IP treats all
endpoints as open-loop packet sources, whereas NDN and CCNx have
strong asymmetry between producers and consumers as packet sources.

6.5.  QoS interactions with Caching

IP has no caching in routers, whereas ICN needs ways to allocate
cache resources.  Treatments to control caching operation are
unlikely to look much like the treatments used to control link
resources.  NDN and CCNx already have useful cache control directives
associated with Data messages.  The CCNx controls include:

ExpiryTime:  the time after which a cached Content Object is
   considered expired and MUST no longer be used to respond to an
   Interest from a cache.

Recommended Cache Time:  the time after which the publisher considers
   the Content Object to be of low value to cache.

See [RFC8569] for the formal definitions.

ICN flow classifiers, such as those in
[I-D.moiseenko-icnrg-flowclass], can be used to achieve soft or hard
partitioning of cache resources in the content store of an ICN
forwarder.  For example, cached content for a given equivalence class
can be considered _fate shared_ in a cache, whereby objects from the
same equivalence class can be purged as a group rather than
individually.  This can recover cache space more quickly and at lower
overhead than pure per-object replacement when a cache is under
extreme pressure and in danger of thrashing.  In addition, since the
forwarder remembers the QoS treatment for each pending Interest in
its PIT, the above cache controls can be augmented by policy to
prefer retention of cached content for some equivalence classes as
part of the cache replacement algorithm.

7.
A strawman set of principles to guide QoS architecture for ICN

Based on the observations made in the earlier sections, this summary
section captures the author's ideas for clear and actionable
architectural principles for how to incorporate QoS machinery into
ICN protocols like NDN and CCNx.  Hopefully, they can guide further
work and focus effort on the portions of the giant design space for
QoS that have the best tradeoffs in terms of flexibility, simplicity,
and deployability.

*Define equivalence classes using the name hierarchy rather than
creating an independent traffic class definition*.  This directly
associates the specification of equivalence classes of traffic with
the structure of the application namespace.  It can allow
hierarchical decomposition of equivalence classes in a natural way
because of the way hierarchical ICN names are constructed.  Two
practical mechanisms are presented in [I-D.moiseenko-icnrg-flowclass]
with different tradeoffs between security and the ability to
aggregate flows.  Either prefix-based (EC3), explicit
name-component-based (ECNT), or both could be adopted as part of the
QoS architecture for defining equivalence classes.

*Put consumers in control of link and forwarding resource
allocation*.  Do all link buffering and forwarding (both memory and
CPU) resource allocations based on Interest arrivals.  This is
attractive because it provides early congestion feedback to
consumers and allows scheduling the reverse link direction ahead of
time for carrying the matching Data.  It makes enforcement of QoS
treatments a single-ended (i.e. at the consumer) rather than a
double-ended problem, and it can avoid wasting resources on fetching
data that will wind up dropped when it arrives at a bottleneck link.

*Allow producers to influence the allocation of cache resources*.
Producers want to affect caching decisions in order to:

*  Shed load by having Interests served by content stores in
   forwarders before reaching the producer itself.

*  Survive transient outages of either the producer or links close to
   the producer.

For caching to be effective, individual Data objects in an
equivalence class need to have similar treatment; otherwise,
well-known cache thrashing pathologies due to self-interference
emerge.  Producers have the most direct control over caching policies
through the caching directives in Data messages.  It therefore makes
sense to put the producer, rather than the consumer or the network
operator, in charge of specifying these equivalence classes.

See [I-D.moiseenko-icnrg-flowclass] for specific mechanisms to
achieve this.

*Allow consumers to influence the allocation of cache resources*.
Consumers want to affect caching decisions in order to:

*  Reduce latency for retrieving data

*  Survive transient outages of either a producer or links close to
   the consumer

Consumers can have indirect control over caching by specifying QoS
treatments in their Interests.  Consider the following potential QoS
treatments by consumers that can drive caching policies:

*  A QoS treatment requesting better robustness against transient
   disconnection can be used by a forwarder close to the consumer (or
   downstream of an unreliable link) to preferentially cache the
   corresponding data.

*  Conversely, a QoS treatment together with, or in addition to, a
   request for short latency can indicate that new data will be
   requested soon enough that caching the currently requested data
   would be ineffective, and hence that only the caching preferences
   of the producer need be honored.

*  A QoS treatment indicating a mobile consumer likely to incur a
   mobility event within an RTT (or a few RTTs).
Such a treatment would allow a mobile network operator to
preferentially cache the data at a forwarder positioned at a _join
point_ or _rendezvous point_ of their topology.

*Give network operators the ability to match customer SLAs to cache
resource availability*.  Network operators, whether closely tied
administratively to producer or consumer, or constituting an
independent transit administration, provide the storage resources in
the ICN forwarders.  Therefore, they are the ultimate arbiters of how
the cache resources are managed.  In addition to any local policies
they may enforce, the cache behavior from the QoS standpoint emerges
from how the producer-specified equivalence classes map onto cache
space availability, including whether cache entries are treated
individually or fate-shared.  Forwarders also determine how the
consumer-specified QoS treatments map to the precedence used for
retaining Data objects in the cache.

Besides utilizing cache resources to meet the QoS goals of individual
producers and consumers, network operators also want to manage their
cache resources in order to:

*  Ameliorate congestion hotspots by reducing load converging on
   producers they host on their network.

*  Improve Interest satisfaction rates by utilizing caches as
   short-term retransmission buffers to recover from link errors or
   outages.

*  Improve both latency and reliability in environments where
   consumers move within the operator's topology.

*Re-think how to specify traffic treatments - don't just copy
Diffserv*.  Some of the Diffserv classes may form a good starting
point, as their mapping onto queuing algorithms for managing link
buffering is well understood.  However, Diffserv alone does not allow
one to express latency versus reliability tradeoffs or other useful
QoS treatments.
Nor does it permit "TSPEC"-style traffic descriptions as are allowed
in a signaled QoS scheme.  Here are some examples:

*  A "burst" treatment, where an initial Interest gives an aggregate
   data size to request allocation of link capacity for a large burst
   of Interest/Data exchanges.  The Interest can be rejected at any
   hop if the resources are not available.  Such a treatment can also
   accommodate the Data implosion produced by the discovery
   procedures of management protocols like [I-D.irtf-icnrg-ccninfo].

*  A "reliable" treatment, which affects the preference for
   allocation of PIT space for the Interest and Content Store space
   for the Data in order to improve the robustness of IoT data
   delivery in constrained environments, as described in
   [I-D.gundogan-icnrg-iotqos].

*  A "search" treatment, which, within the specified Interest
   Lifetime, tries many paths, either in parallel or serially, to
   potentially many content sources in order to maximize the
   probability that the requested item will be found.  This is done
   at the expense of the extra bandwidth of both forwarding Interests
   and receiving multiple responses upstream of an aggregation point.
   The treatment can encode a value expressing tradeoffs like
   breadth-first versus depth-first search, as well as bounds on the
   total resource expenditure.  Such a treatment would be useful for
   instrumentation protocols like
   [I-D.mastorakis-icnrg-icntraceroute].

As an aside, loose latency control can be achieved by bounding
Interest Lifetime, as long as it is not also used as an application
mechanism to provide subscriptions or to establish path traces for
producer mobility.  See [Krol2018] for a discussion of the network
versus application timescale issues in ICN protocols.

7.1.  What about the richer QoS semantics available with Intserv-like
traffic control?
Basic QoS treatments such as those summarized above may not be
adequate to cover the whole range of application utility functions
and deployment environments we expect for ICN.  While it is true that
one does not necessarily need a separate signaling protocol like
RSVP, given the state carried in the ICN data plane by forwarders,
there are some potentially important capabilities not provided by
simple QoS treatments applied to per-Interest/Data exchanges.
Intserv's richer QoS capabilities may be of value, especially if they
can be provided in ICN at lower complexity and protocol overhead than
Intserv+RSVP.

There are three key capabilities missing from Diffserv-like QoS
treatments, no matter how sophisticated those treatments may be in
describing the desired handling for a given equivalence class of
traffic.  Intserv-like QoS provides all of these:

1.  The ability to *describe traffic flows* in a mathematically
    meaningful way.  This is done through parameters like average
    rate, peak rate, and maximum burst size.  The parameters are
    encapsulated in a data structure called a "TSPEC", which can be
    placed in whatever protocol needs the information (in the case of
    TCP/IP Intserv, this is RSVP).

2.  The ability to perform *admission control*, whereby the element
    requesting the QoS treatment can know _before_ introducing
    traffic whether the network elements have agreed to provide the
    requested traffic treatment.  An important side effect of
    providing this assurance is that the network elements install
    state that allows the forwarding and queuing machinery to police
    and shape the traffic in a way that provides a sufficient degree
    of _isolation_ from the dynamic behavior of other traffic.
    Depending on the admission control mechanism, it may or may not
    be possible to explicitly release that state when the application
    no longer needs the QoS treatment.

3.
The ability to specify the permissible *degree of divergence* of the
actual traffic handling from the requested handling.  Intserv
provided two choices here: the _controlled load_ service and the
_guaranteed_ service.  The former allows stochastic deviation
equivalent to what one would experience on an unloaded path of a
packet network.  The latter conforms to the TSPEC deterministically,
at the obvious expense of demanding extremely conservative resource
allocation.

Given the limited applicability of these capabilities in today's
Internet, the author does not take any position as to whether any of
these Intserv-like capabilities are needed for ICN to be successful.
However, a few things seem important to consider.  The following
paragraphs speculate about the consequences to the CCNx or NDN
protocol architectures of incorporating these features.

Superficially, it would be quite straightforward to accommodate
Intserv-equivalent traffic descriptions in CCNx or NDN.  One could
define a new TLV for the Interest message to carry a TSPEC.  A
forwarder encountering this, together with a QoS treatment request
(e.g. as proposed in Section 6.3), could associate the traffic
specification with the corresponding equivalence class derived from
the name in the Interest.  This would allow the forwarder to create
state that would not only apply to the returning Data for that
Interest when it is queued on the downstream interface, but also be
maintained as soft state across multiple Interest/Data exchanges to
drive policing and shaping algorithms at per-flow granularity.  The
cost in Interest message overhead would be modest; however, the
complications associated with managing different traffic
specifications in different Interests for the same equivalence class
might be substantial.  Of course, all the scalability considerations
of maintaining per-flow state also come into play.
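A speculative sketch of the soft state just described (the TLV
field names, the per-class keying, and the token-bucket policer are
all assumptions, not defined CCNx or NDN machinery) might look like
this: the first Interest carrying a TSPEC-like TLV installs state for
its equivalence class, and returning Data for that class is policed
against it.

```python
import time

# Speculative sketch: per-equivalence-class soft state derived from a
# hypothetical TSPEC-like TLV in an Interest, used to police the
# returning Data with a token bucket.  All names are invented.

class TspecState:
    def __init__(self, avg_rate_bps: float, burst_bytes: float, now: float):
        self.avg_rate_bps = avg_rate_bps   # sustained rate from the TSPEC
        self.burst_bytes = burst_bytes     # bucket depth from the TSPEC
        self.tokens = burst_bytes          # start with a full bucket
        self.last_refill = now

    def conforms(self, data_len: int, now: float) -> bool:
        """Refill the bucket, then test whether this Data is in-profile."""
        elapsed = now - self.last_refill
        self.tokens = min(self.burst_bytes,
                          self.tokens + elapsed * self.avg_rate_bps / 8)
        self.last_refill = now
        if self.tokens >= data_len:
            self.tokens -= data_len
            return True
        return False

# Soft state table keyed by equivalence class (e.g. a name prefix).
flow_state = {}

def on_interest_with_tspec(ec, avg_rate_bps, burst_bytes, now=None):
    """First Interest carrying the TSPEC installs per-class state."""
    now = time.monotonic() if now is None else now
    if ec not in flow_state:
        flow_state[ec] = TspecState(avg_rate_bps, burst_bytes, now)

def on_data(ec, data_len, now=None):
    """True if the Data may be queued in-profile, False if out."""
    now = time.monotonic() if now is None else now
    st = flow_state.get(ec)
    return True if st is None else st.conforms(data_len, now)
```

Note that the state is installed once per equivalence class, not per
Interest, which is what lets it drive shaping across multiple
Interest/Data exchanges; it also illustrates the complication noted
above, since a later Interest carrying a different TSPEC for the same
class would have to be reconciled with the installed state somehow.
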
Similarly, it would be equally straightforward to have a way to
express the degree-of-divergence capability that Intserv provides
through its controlled load and guaranteed service definitions.  This
could either be packaged with the traffic specification or encoded
separately.

In contrast to the above, performing admission control for ICN flows
is likely to be just as heavyweight as it turned out to be with IP
using RSVP.  The dynamic multi-path, multi-destination forwarding
model of ICN makes performing admission control particularly tricky.
Just to illustrate:

*  Forwarding paths are not confined to single paths (or a few
   ECMP-equivalent paths) as they are with IP, making it difficult to
   know where to install state in advance of the arrival of an
   Interest to forward.

*  As with the point-to-multipoint complexities encountered when
   using RSVP for MPLS-TE, state has to be installed to multiple
   producers over multiple paths before an admission control
   algorithm can commit the resources and say "yes" to a consumer
   needing admission control capabilities.

*  Knowing when to remove admission control state is difficult in the
   absence of a heavyweight resource reservation protocol.  Soft-state
   timeout may or may not be an adequate answer.

Despite the challenges above, it may be possible to craft an
admission control scheme for ICN that achieves the desired QoS goals
of applications without the invention and deployment of a complex
separate admission control signaling protocol.  There have been
designs in earlier network architectures that were capable of
performing admission control piggybacked on packet transmission.
(The earliest example the author is aware of is [Autonet].)
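To illustrate what piggybacking admission control on an
Interest/Data exchange might involve at a single forwarder, here is a
speculative sketch (the TLV handling, the scalar link-capacity model,
and all names are invented assumptions): the forwarder either rejects
the Interest for lack of capacity or provisionally reserves
resources, committing them only when the matching Data returns.

```python
# Speculative sketch of per-hop admission piggybacked on an Interest:
# reject the Interest if capacity is lacking, otherwise note a
# provisional reservation and commit it when the Data comes back.
# The capacity model and result strings are hypothetical.

class AdmissionForwarder:
    def __init__(self, link_capacity_bps: float):
        self.capacity = link_capacity_bps
        self.committed = 0.0    # rate committed to admitted flows
        self.provisional = {}   # equivalence class -> rate awaiting Data

    def on_admission_interest(self, ec: str, requested_rate: float) -> str:
        """Called when an Interest carries an admission-request TLV."""
        pending = sum(self.provisional.values())
        if self.committed + pending + requested_rate > self.capacity:
            return "reject"     # admission control failure back downstream
        self.provisional[ec] = requested_rate
        return "forward"

    def on_data(self, ec: str) -> str:
        """Called when the matching Data returns; commit if still possible."""
        rate = self.provisional.pop(ec, None)
        if rate is None:
            return "no-admission-state"
        if self.committed + rate > self.capacity:
            return "failed"     # marked failed; Data still passed downstream
        self.committed += rate
        return "provisionally-approved"
```

The two-phase shape (check on the Interest, commit on the Data)
mirrors the general scheme sketched below, while leaving out the
expiration of reserved resources and the path-pinning problem that
the prose goes on to discuss.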
Such a scheme might have the following general shape *(warning:
serious hand waving follows!)*:

*  In addition to a QoS treatment and a traffic specification, an
   Interest requesting admission for the corresponding equivalence
   class would so indicate via a new TLV.  It would also need to:
   (a) indicate an expiration time after which any reserved resources
   can be released, and (b) indicate that caches be bypassed, so that
   the admission control request arrives at a bona fide producer (or
   repo).

*  Each forwarder processing the Interest would check for resource
   availability; if resources are not available, or the requested
   service is not feasible, it would reject the Interest with an
   admission control failure.  If resources are available, the
   forwarder would record the traffic specification as described
   above and forward the Interest.

*  If the Interest successfully arrives at a producer, the producer
   returns the requested Data.

*  Each on-path forwarder, on receiving the matching Data message,
   performs the actual allocation if the resources are still
   available and marks the admission control TLV as "provisionally
   approved".  Conversely, if the resource reservation fails, the
   admission control is marked "failed", although the Data is still
   passed downstream.

*  Upon the Data message arriving, the consumer knows whether
   admission succeeded, and subsequent Interests can rely on the QoS
   state being in place until either some failure occurs, or a
   topology or other forwarding change alters the forwarding path.
   To deal with this, additional machinery is needed to ensure that
   subsequent Interests for an admitted flow either follow that path
   or an error is reported.  One possibility (also useful in many
   other contexts) is to employ a _Path Steering_ mechanism, such as
   the one described in [Moiseenko2017].

8.  IANA Considerations

This document does not require any IANA actions.

9.
Security Considerations

There are a few ways in which QoS for ICN interacts with security and
privacy issues.  Since QoS addresses relationships among traffic
rather than the inherent characteristics of traffic, it neither
enhances nor degrades the security and privacy properties of the data
being carried, as long as the machinery does not alter or otherwise
compromise the basic security properties of the associated protocols.
However, the QoS approaches advocated here for ICN can serve to
amplify existing threats to network traffic:

*  An attacker able to manipulate the QoS treatments of traffic can
   mount a more focused (and potentially more effective)
   denial-of-service attack by suppressing performance on the traffic
   the attacker is targeting.  Since the architecture here assumes
   QoS treatments are manipulable hop-by-hop, any on-path adversary
   can wreak havoc.  Note, however, that in basic ICN an on-path
   attacker can do this and more by dropping, delaying, or
   mis-routing traffic, independent of any particular QoS machinery
   in use.

*  By explicitly revealing equivalence classes of traffic via either
   names or other fields in packets, an attacker has yet one more
   handle with which to discover the linkability of multiple
   requests.

10.  References

10.1.  Normative References

[RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
           Requirement Levels", BCP 14, RFC 2119,
           DOI 10.17487/RFC2119, March 1997.

[RFC8569]  Mosko, M., Solis, I., and C. Wood, "Content-Centric
           Networking (CCNx) Semantics", RFC 8569,
           DOI 10.17487/RFC8569, July 2019.

[RFC8609]  Mosko, M., Solis, I., and C. Wood, "Content-Centric
           Networking (CCNx) Messages in TLV Format", RFC 8609,
           DOI 10.17487/RFC8609, July 2019.

10.2.  Informative References

[AS]       "Autonomous System (Internet)", no date.
[Auge2018] Augé, J., Carofiglio, G., Grassi, G., Muscariello, L.,
           Pau, G., and X. Zeng, "MAP-Me: Managing Anchor-Less
           Producer Mobility in Content-Centric Networks", IEEE
           Transactions on Network and Service Management, Volume 15,
           Issue 2, DOI 10.1109/TNSM.2018.2796720, June 2018.

[Autonet]  Schroeder, M., Birrell, A., Burrows, M., Murray, H.,
           Needham, R., Rodeheffer, T., Satterthwaite, E., and C.
           Thacker, "Autonet: a High-speed, Self-configuring Local
           Area Network Using Point-to-point Links", SRC Research
           Report 59, April 1990.

[BenAbraham2018]
           Ben Abraham, H., Parwatikar, J., DeHart, J., Dresher, A.,
           and P. Crowley, "Decoupling Information and Connectivity
           via Information-Centric Transport", in 5th ACM Conference
           on Information-Centric Networking (ICN '18), Boston, MA,
           USA, DOI 10.1145/3267955.3267963, September 2018.

[Carofiglio2012]
           Carofiglio, G., Gallo, M., and L. Muscariello, "Joint
           hop-by-hop and receiver-driven Interest control protocol
           for content-centric networks", in ICN Workshop at SIGCOMM
           2012, DOI 10.1016/j.comnet.2016.09.012, 2012.

[Carofiglio2016]
           Carofiglio, G., Gallo, M., and L. Muscariello, "Optimal
           multipath congestion control and request forwarding in
           information-centric networks: Protocol design and
           experimentation", Computer Networks, Vol. 110, No. 9,
           DOI 10.1145/2377677.2377772, December 2016.

[I-D.anilj-icnrg-dnc-qos-icn]
           Jangam, A., Suthar, P., and M. Stolic, "QoS Treatments in
           ICN using Disaggregated Name Components", Work in
           Progress, Internet-Draft,
           draft-anilj-icnrg-dnc-qos-icn-01, 11 September 2019.

[I-D.gundogan-icnrg-iotqos]
           Gundogan, C., Schmidt, T., Waehlisch, M., Frey, M.,
           Shzu-Juraschek, F., and J.
Pfender, "Quality of Service for ICN in the IoT", Work in Progress,
           Internet-Draft, draft-gundogan-icnrg-iotqos-01,
           8 July 2019.

[I-D.ietf-quic-transport]
           Iyengar, J. and M. Thomson, "QUIC: A UDP-Based Multiplexed
           and Secure Transport", Work in Progress, Internet-Draft,
           draft-ietf-quic-transport-27, 21 February 2020.

[I-D.irtf-icnrg-ccninfo]
           Asaeda, H., Ooka, A., and X. Shao, "CCNinfo: Discovering
           Content and Network Information in Content-Centric
           Networks", Work in Progress, Internet-Draft,
           draft-irtf-icnrg-ccninfo-02, 8 July 2019.

[I-D.irtf-nwcrg-nwc-ccn-reqs]
           Matsuzono, K., Asaeda, H., and C. Westphal, "Network
           Coding for Content-Centric Networking / Named Data
           Networking: Requirements and Challenges", Work in
           Progress, Internet-Draft, draft-irtf-nwcrg-nwc-ccn-reqs-02,
           20 September 2019.

[I-D.mastorakis-icnrg-icntraceroute]
           Mastorakis, S., Gibson, J., Moiseenko, I., Droms, R., and
           D. Oran, "ICN Traceroute Protocol Specification", Work in
           Progress, Internet-Draft,
           draft-mastorakis-icnrg-icntraceroute-06, 13 February 2020.

[I-D.moiseenko-icnrg-flowclass]
           Moiseenko, I. and D. Oran, "Flow Classification in
           Information Centric Networking", Work in Progress,
           Internet-Draft, draft-moiseenko-icnrg-flowclass-05,
           20 January 2020.

[I-D.muscariello-intarea-hicn]
           Muscariello, L., Carofiglio, G., Auge, J., and M.
           Papalini, "Hybrid Information-Centric Networking", Work in
           Progress, Internet-Draft, draft-muscariello-intarea-hicn-03,
           30 October 2019.

[I-D.oran-icnrg-flowbalance]
           Oran, D., "Maintaining CCNx or NDN flow balance with
           highly variable data object sizes", Work in Progress,
           Internet-Draft, draft-oran-icnrg-flowbalance-02,
           3 February 2020.

[Krol2018] Krol, M., Habak, K., Oran, D., Kutscher, D., and I.
Psaras, "RICE: Remote Method Invocation in ICN", in Proceedings of
           the 5th ACM Conference on Information-Centric Networking
           (ICN '18), DOI 10.1145/3267955.3267956, September 2018.

[Mahdian2016]
           Mahdian, M., Arianfar, S., Gibson, J., and D. Oran,
           "MIRCC: Multipath-aware ICN Rate-based Congestion
           Control", in Proceedings of the 3rd ACM Conference on
           Information-Centric Networking,
           DOI 10.1145/2984356.2984365, 2016.

[minmaxfairness]
           "Max-min Fairness", no date.

[Moiseenko2017]
           Moiseenko, I. and D. Oran, "Path Switching in Content
           Centric and Named Data Networks", in 4th ACM Conference on
           Information-Centric Networking (ICN 2017),
           DOI 10.1145/3125719.3125721, September 2017.

[NDN]      "Named Data Networking", various dates.

[Oran2018QoSslides]
           Oran, D., "Thoughts on Quality of Service for NDN/CCN-
           style ICN protocol architectures", presented at the ICNRG
           Interim Meeting, Cambridge, MA, 24 September 2018.

[proportionalfairness]
           "Proportionally Fair", no date.

[RFC0793]  Postel, J., "Transmission Control Protocol", STD 7,
           RFC 793, DOI 10.17487/RFC0793, September 1981.

[RFC2205]  Braden, R., Ed., Zhang, L., Berson, S., Herzog, S., and S.
           Jamin, "Resource ReSerVation Protocol (RSVP) -- Version 1
           Functional Specification", RFC 2205, DOI 10.17487/RFC2205,
           September 1997.

[RFC2474]  Nichols, K., Blake, S., Baker, F., and D. Black,
           "Definition of the Differentiated Services Field (DS
           Field) in the IPv4 and IPv6 Headers", RFC 2474,
           DOI 10.17487/RFC2474, December 1998.

[RFC2998]  Bernet, Y., Ford, P., Yavatkar, R., Baker, F., Zhang, L.,
           Speer, M., Braden, R., Davie, B., Wroclawski, J., and E.
           Felstaine, "A Framework for Integrated Services Operation
           over Diffserv Networks", RFC 2998, DOI 10.17487/RFC2998,
           November 2000.
[RFC3170]  Quinn, B. and K. Almeroth, "IP Multicast Applications:
           Challenges and Solutions", RFC 3170, DOI 10.17487/RFC3170,
           September 2001.

[RFC3209]  Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V.,
           and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP
           Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001.

[RFC4340]  Kohler, E., Handley, M., and S. Floyd, "Datagram
           Congestion Control Protocol (DCCP)", RFC 4340,
           DOI 10.17487/RFC4340, March 2006.

[RFC4594]  Babiarz, J., Chan, K., and F. Baker, "Configuration
           Guidelines for DiffServ Service Classes", RFC 4594,
           DOI 10.17487/RFC4594, August 2006.

[RFC4960]  Stewart, R., Ed., "Stream Control Transmission Protocol",
           RFC 4960, DOI 10.17487/RFC4960, September 2007.

[Schneider2016]
           Schneider, K., Yi, C., Zhang, B., and L. Zhang, "A
           Practical Congestion Control Scheme for Named Data
           Networking", in Proceedings of the 3rd ACM Conference on
           Information-Centric Networking (ACM-ICN '16),
           DOI 10.1145/2984356.2984369, 2016.

[Shenker2006]
           Shenker, S., "Fundamental Design Issues for the Future
           Internet", IEEE Journal on Selected Areas in
           Communications, DOI 10.1109/49.414637, 2006.

[Song2018] Song, J., Lee, M., and T. Kwon, "SMIC: Subflow-level
           Multi-path Interest Control for Information Centric
           Networking", in 5th ACM Conference on Information-Centric
           Networking, DOI 10.1145/3267955.3267971, 2018.

[Tseng2003]
           Tseng, C.J., "The performance of QoS-aware IP multicast
           routing protocols", Networks, Vol. 42, No. 2,
           DOI 10.1002/net.10084, September 2003.

[Wang2000] Wang, B. and J.C. Hou, "Multicast routing and its QoS
           extension: problems, algorithms, and protocols", IEEE
           Network, Vol. 14, No. 1, DOI 10.1109/65.819168, 2000.
[Wang2013] Wang, Y., Rozhnova, N., Narayanan, A., Oran, D., and I.
           Rhee, "An Improved Hop-by-hop Interest Shaper for
           Congestion Control in Named Data Networking", in ACM
           SIGCOMM Workshop on Information-Centric Networking,
           DOI 10.1145/2534169.2491233, 2013.

Author's Address

   Dave Oran
   Network Systems Research and Design
   4 Shady Hill Square
   Cambridge, MA 02138
   United States of America

   Email: daveoran@orandom.net