ICNRG                                                            D. Oran
Internet-Draft                       Network Systems Research and Design
Intended status: Informational                            24 August 2020
Expires: 25 February 2021

 Considerations in the development of a QoS Architecture for CCNx-like
                              ICN protocols
                       draft-oran-icnrg-qosarch-05

Abstract

   This is a position paper.  It documents the author's personal views
   on how Quality of Service (QoS) capabilities ought to be accommodated
   in ICN protocols like CCNx or NDN, which employ flow-balanced
   Interest/Data exchanges and hop-by-hop forwarding state as their
   fundamental machinery.  It argues that such protocols demand a
   substantially different approach to QoS from that taken in TCP/IP,
   and proposes specific design patterns to achieve both classification
   and differentiated QoS treatment on both a flow and aggregate basis.
   It also considers the effect of caches, in addition to memory, CPU,
   and link bandwidth, as a resource that should be subject to
   explicitly unfair resource allocation.  The proposed methods are
   intended to operate purely at the network layer, providing the
   primitives needed to achieve both transport and higher-layer QoS
   objectives.  It explicitly excludes any discussion of Quality of
   Experience (QoE), which can only be assessed and controlled at the
   application layer or above.

   This document is not a product of the IRTF Information-Centric
   Networking Research Group (ICNRG) but has been through formal last
   call and has the support of the participants in the research group
   for publication as an individual submission.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current Internet-
   Drafts is at https://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time.  It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   This Internet-Draft will expire on 25 February 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (https://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document.

Table of Contents

   1.  Introduction
     1.1.  Applicability Assessment by ICNRG Chairs
   2.  Requirements Language
   3.  Background on Quality of Service in network protocols
     3.1.  Basics on how ICN protocols like NDN and CCNx work
     3.2.  Congestion Control basics relevant to ICN
   4.  What can we control to achieve QoS in ICN?
   5.  How does this relate to QoS in TCP/IP?
   6.  Why is ICN Different?  Can we do Better?
     6.1.  Equivalence class capabilities
     6.2.  Topology interactions with QoS
     6.3.  Specification of QoS treatments
     6.4.  ICN forwarding semantics effect on QoS
     6.5.  QoS interactions with Caching
   7.  Strawman principles for an ICN QoS architecture
     7.1.  Can Intserv-like traffic control in ICN provide richer QoS
           semantics?
   8.  IANA Considerations
   9.  Security Considerations
   10. References
     10.1.  Normative References
     10.2.  Informative References
   Author's Address

1.  Introduction

   The TCP/IP protocol suite used on today's Internet has over 30 years
   of accumulated research and engineering into the provision of Quality
   of Service machinery, employed with varying success in different
   environments.  ICN protocols like Named Data Networking (NDN [NDN])
   and Content-Centric Networking (CCNx [RFC8569], [RFC8609]) have an
   accumulated 10 years of research and very little deployment.  We
   therefore have the opportunity to either recapitulate the approaches
   taken with TCP/IP (e.g. Intserv [RFC2998] and Diffserv [RFC2474]) or
   design a new architecture and associated mechanisms aligned with the
   properties of ICN protocols, which differ substantially from those of
   TCP/IP.  This position paper advocates the latter approach and
   comprises the author's personal views on how Quality of Service (QoS)
   capabilities ought to be accommodated in ICN protocols like CCNx or
   NDN.  Specifically, these protocols differ in fundamental ways from
   TCP/IP.
   The important differences are summarized in the following table:

   +=============================+====================================+
   | TCP/IP                      | CCNx or NDN                        |
   +=============================+====================================+
   | Stateless forwarding        | Stateful forwarding                |
   +-----------------------------+------------------------------------+
   | Simple Packets              | Object model with optional caching |
   +-----------------------------+------------------------------------+
   | Pure datagram model         | Request-response model             |
   +-----------------------------+------------------------------------+
   | Asymmetric Routing          | Symmetric Routing                  |
   +-----------------------------+------------------------------------+
   | Independent flow directions | Flow balance                       |
   +-----------------------------+------------------------------------+
   | Flows grouped by IP prefix  | Flows grouped by name prefix       |
   | and port                    |                                    |
   +-----------------------------+------------------------------------+
   | End-to-end congestion       | Hop-by-hop congestion control      |
   | control                     |                                    |
   +-----------------------------+------------------------------------+

        Table 1: Differences between IP and ICN relevant to QoS
                               architecture

   This document proposes specific design patterns to achieve both flow
   classification and differentiated QoS treatment for ICN on both a
   flow and aggregate basis.  It also considers the effect of caches, in
   addition to memory, CPU, and link bandwidth, as a resource that
   should be subject to explicitly unfair resource allocation.  The
   proposed methods are intended to operate purely at the network layer,
   providing the primitives needed to achieve both transport and higher-
   layer QoS objectives.  It does not propose detailed protocol
   machinery to achieve these goals; it leaves these to supplementary
   specifications, such as [I-D.moiseenko-icnrg-flowclass] and
   [I-D.anilj-icnrg-dnc-qos-icn].  It explicitly excludes any discussion
   of Quality of Experience (QoE), which can only be assessed and
   controlled at the application layer or above.

   Much of this document is derived from presentations the author has
   given at ICNRG meetings over the last few years that are available
   through the IETF datatracker (see, for example, [Oran2018QoSslides]).

1.1.  Applicability Assessment by ICNRG Chairs

   QoS in ICN is an important topic with a huge design space.  ICNRG has
   been discussing different specific protocol mechanisms as well as
   conceptual approaches.  This document presents architectural
   considerations for QoS, leveraging ICN properties instead of merely
   applying IP QoS mechanisms, without yet defining a specific
   architecture or specific protocol mechanisms.  However, there is
   consensus in ICNRG that this document, clarifying the author's views,
   could inspire such work and should hence be published as a position
   paper.

2.  Requirements Language

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC2119].

3.  Background on Quality of Service in network protocols

   Much of this background material is tutorial and can simply be
   skipped by readers familiar with the long and checkered history of
   quality of service in packet networks.  Other parts of it are
   polemical, yet serve to illuminate the author's personal biases and
   technical views.

   All networking systems provide some degree of "quality of service" in
   that they exhibit non-zero utility when offered traffic to carry.  In
   other words, the network is totally useless if it never delivers any
   of the traffic injected by applications.  The term QoS is therefore
   more correctly applied in a restricted sense, to describe systems
   that control the allocation of various resources in order to achieve
   _managed unfairness_.  Absent explicit mechanisms to decide what
   traffic to be unfair to, most systems try to achieve some form of
   "fairness" in the allocation of resources, optimizing the overall
   utility delivered to all offered load under the constraint of
   available resources.  From this it should be obvious that you cannot
   use QoS mechanisms to create or otherwise increase resource capacity!
   In fact, all known QoS schemes have non-zero overhead and hence may
   (albeit slightly) decrease the total resources available to carry
   user traffic.

   Further, accumulated experience seems to indicate that QoS is helpful
   in a fairly narrow range of network conditions:

   *  If your resources are lightly loaded, you don't need it, as
      neither congestive loss nor substantial queueing delay occurs.

   *  If your resources are heavily oversubscribed, it doesn't save you.
      So many users will be unhappy that you are probably not delivering
      a viable service.

   *  Failures can rapidly shift your state from the first case above to
      the second, in which case either:

      -  your QoS machinery cannot respond quickly enough to maintain
         the advertised service quality continuously, or

      -  resource allocations are sufficiently conservative to result in
         substantial wasted capacity under non-failure conditions.

   Nevertheless, though not universally deployed, QoS is advantageous at
   least for some applications and some network environments.  Some
   examples include:

   *  applications with steep utility functions [Shenker2006], such as
      real-time multimedia

   *  applications with safety-critical operational constraints, such as
      avionics or industrial automation

   *  dedicated or tightly managed networks whose economics depend on
      strict adherence to challenging service level agreements (SLAs)

   Another factor in the design and deployment of QoS is the scalability
   and scope over which the desired service can be achieved.  Here there
   are two major considerations, one technical, the other economic/
   political:

   *  Some signaled QoS schemes, such as RSVP (Resource reSerVation
      Protocol) [RFC2205], maintain state in routers for each flow,
      which scales linearly with the number of flows.  For core routers
      through which pass millions to billions of flows, the memory
      required is infeasible to provide.

   *  The Internet is composed of many minimally cooperating autonomous
      systems [AS].  There are practically no successful examples of QoS
      deployments crossing the AS boundaries of multiple service
      providers.  In almost all cases this limits the applicability of
      QoS capabilities to intra-domain use.

   While this document adopts the narrow definition of QoS as _managed
   unfairness_, much of the networking literature uses the term more
   colloquially as applying to any mechanism that improves overall
   performance.  Readers assuming this broader context will find a large
   class of proven techniques to be ignored.  This is intentional.
   Among these are seamless producer mobility schemes like MAPME
   [Auge2018], and network coding of ICN data as discussed in
   [I-D.irtf-nwcrg-nwc-ccn-reqs].

   Finally, the relationship between QoS and either accounting or
   billing is murky.  Some schemes can accurately account for resource
   consumption and ascertain to which user the usage should be
   allocated.  Others cannot.
   While the choice of mechanism may have important practical economic
   and political consequences for cost and workable business models,
   this document considers none of those things and discusses QoS only
   in the context of providing managed unfairness.

   For those unfamiliar with ICN protocols, a brief description of how
   NDN and CCNx operate as a packet network is below in Section 3.1.
   Some further background on congestion control for ICN follows in
   Section 3.2.

3.1.  Basics on how ICN protocols like NDN and CCNx work

   The following is intended as a brief summary of the salient features
   of the NDN and CCNx ICN protocols relevant to congestion control and
   QoS.  Quite extensive tutorial information may be found in a number
   of places, including material available from [NDNTutorials].

   In NDN and CCNx, all protocol interactions operate as a two-way
   handshake.  Named content is requested by a _consumer_ via an
   _Interest message_, which is routed hop-by-hop through a series of
   _forwarders_ until it reaches a node that stores the requested data.
   This can be either the _producer_ of the data, or a forwarder holding
   a cached copy of the requested data.  The content matching the name
   in the Interest is returned to the requester over the _inverse_ of
   the path traversed by the corresponding Interest.

   Forwarding in CCNx and NDN is _per-packet stateful_.  Routing
   information to select next hops for an Interest is obtained from a
   _Forwarding Information Base (FIB)_, which is similar in function to
   the FIB in an IP router, except that it holds name prefixes rather
   than IP address prefixes.  Conventionally a _Longest Name Prefix
   Match (LNPM)_ is used for lookup, although other algorithms are
   possible, including controlled flooding and adaptive learning based
   on prior history.

   Each Interest message leaves a trail of "breadcrumbs" as state in
   each forwarder.  This state, held in a data structure known as a
   _Pending Interest Table (PIT)_, is used to forward the returning Data
   message to the consumer.  Since the PIT constitutes per-packet state,
   it is a large consumer of memory resources, especially in forwarders
   carrying high traffic loads over long Round Trip Time (RTT) paths,
   and hence plays a substantial role as a QoS-controllable resource in
   ICN forwarders.

   In addition to its role in forwarding Interest messages and returning
   the corresponding Data messages, an ICN forwarder can also operate as
   a cache, optionally storing a copy of any Data messages it has seen
   in a local data structure known as a _Content Store (CS)_.  Data in
   the Content Store may be returned in response to a matching Interest
   rather than forwarding the Interest further through the network to
   the original producer.  Both CCNx and NDN have a variety of ways to
   configure caching, including mechanisms to avoid both cache pollution
   and cache poisoning (these are clearly beyond the scope of this brief
   introduction).

3.2.  Congestion Control basics relevant to ICN

   In any packet network that multiplexes traffic among multiple sources
   and destinations, congestion control is necessary in order to:

   1.  Prevent collapse of utility due to overload, where the total
       offered service declines as load increases, perhaps
       precipitously, rather than increasing or remaining flat.

   2.  Avoid starvation of some traffic due to excessive demand by other
       traffic.

   3.  Beyond the basic protections against starvation, achieve
       "fairness" among competing traffic.  Two common objective
       functions are [minmaxfairness] and [proportionalfairness], both
       of which have been implemented and deployed successfully on
       packet networks for many years.

   Before moving on to QoS, it is useful to consider how congestion
   control works in NDN or CCNx.  Unlike the IP protocol family, which
   relies exclusively on end-to-end congestion control (e.g.
   TCP [RFC0793], DCCP [RFC4340], SCTP [RFC4960],
   QUIC [I-D.ietf-quic-transport]), CCNx and NDN can employ hop-by-hop
   congestion control.  There is per-Interest/Data state at every hop of
   the path, and therefore outstanding Interests provide information
   that can be used to optimize resource allocation for data returning
   on the inverse path, such as bandwidth sharing, prioritization, and
   overload control.  In current designs, this allocation is often done
   using Interest counting: accepting one Interest packet from a
   downstream node implicitly provides a guarantee (either hard or soft)
   that there is sufficient bandwidth on the inverse direction of the
   link to send back one Data packet.  A number of congestion control
   schemes have been developed for ICN that operate in this fashion, for
   example [Wang2013], [Mahdian2016], [Song2018], and [Carofiglio2012].
   Other schemes, like [Schneider2016], neither count nor police
   Interests, but instead monitor queues using AQM (Active Queue
   Management) to mark returning Data packets that have experienced
   congestion.  This latter class of schemes is similar to those used on
   IP in the sense that they depend on consumers adequately reducing
   their rate of Interest injection to avoid Data packet drops due to
   buffer overflow in forwarders.  The former class of schemes is
   (arguably) more robust against misbehavior by consumers.

   Given the stochastic nature of round trip times, and the ubiquity of
   wireless links and encapsulation tunnels with variable bandwidth, a
   simple scheme that admits Interests based only on a time-invariant
   estimate of the returning link bandwidth will perform poorly.
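   To make the Interest-counting idea above concrete, the following
   Python sketch shows a per-face shaper that admits one Interest for
   each unit of estimated reverse-link capacity and defers the rest.
   It is purely illustrative: the class name, its parameters, and the
   fixed Data-size and bandwidth estimates are assumptions of this
   sketch, not part of any ICN specification.

```python
# Illustrative sketch (not from any ICN specification): a per-face
# Interest shaper that admits Interests only when the estimated
# reverse-link bandwidth can absorb the Data packets they will elicit.
from collections import deque

class InterestShaper:
    """Counts outstanding Interests on a face and defers admission when
    the expected returning Data would exceed the reverse link's
    estimated capacity-delay product."""

    def __init__(self, reverse_link_bps, rtt_s, expected_data_bytes):
        # Hypothetical, time-invariant bandwidth estimate -- exactly the
        # kind the text notes performs poorly on variable links.
        self.capacity_pkts = max(
            1, int(reverse_link_bps * rtt_s / (8 * expected_data_bytes)))
        self.outstanding = 0   # Interests forwarded, Data not yet back
        self.queue = deque()   # Interests awaiting admission

    def on_interest(self, interest):
        """Admit (forward) the Interest if a Data-sized credit exists;
        otherwise queue it."""
        if self.outstanding < self.capacity_pkts:
            self.outstanding += 1
            return "forward"
        self.queue.append(interest)
        return "queued"

    def on_data(self):
        """A Data packet returned: release one credit and, if Interests
        are waiting, admit the oldest one."""
        self.outstanding -= 1
        if self.queue:
            self.outstanding += 1
            return self.queue.popleft()  # forward this deferred Interest
        return None
```

   As the preceding paragraph notes, the static estimate used here is
   precisely what performs poorly over variable-bandwidth links; the
   schemes cited above instead adapt the shaping rate dynamically.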
   However, two characteristics of NDN and CCNx-like protocols can help
   substantially to improve the accuracy and responsiveness of the
   bandwidth allocation:

   1.  RTT is bounded by the inclusion of an _Interest Lifetime_ in each
       Interest message, which puts an upper bound on the RTT
       uncertainty for any given Interest/Data exchange.  If Interest
       lifetimes are kept reasonably short (a few RTTs), the allocation
       of local forwarder resources does not have to deal with an
       arbitrarily long tail.  One could in fact do a deterministic
       allocation on this basis, but the result would be highly
       pessimistic.  Nevertheless, having a cut-off does improve the
       performance of an optimistic allocation scheme.

   2.  Returning Data packets can be congestion marked by an ECN-like
       marking scheme if the inverse link starts experiencing long queue
       occupancy or another congestion indication.  Unlike TCP/IP, where
       the rate adjustment can only be done end-to-end, this feedback is
       usable immediately by the downstream ICN forwarder, and the
       Interest shaping rate can be lowered after a single link RTT.
       This may allow less pessimistic rate adjustment schemes than the
       Additive Increase, Multiplicative Decrease (AIMD) with 0.5
       multiplier that is used on TCP/IP networks.  It also allows the
       rate adjustments to be spread more accurately among the
       Interest/Data flows traversing a link sending congestion signals.

   A useful discussion of these properties and how they demonstrate the
   advantages of ICN approaches to congestion control can be found in
   [Carofiglio2016].

4.  What can we control to achieve QoS in ICN?

   QoS is achieved through managed unfairness in the allocation of
   resources in network elements, particularly in the routers doing
   forwarding of ICN packets.  So, a first-order question is what
   resources need to be allocated, and how to ascertain which traffic
   gets what allocations.  In the case of CCNx or NDN the important
   network element resources are:

   +===============+===============================================+
   | Resource      | ICN Usage                                     |
   +===============+===============================================+
   | Communication | buffering for queued packets                  |
   | Link capacity |                                               |
   +---------------+-----------------------------------------------+
   | Content Store | to hold cached data                           |
   | capacity      |                                               |
   +---------------+-----------------------------------------------+
   | Forwarder     | for the Pending Interest Table (PIT)          |
   | memory        |                                               |
   +---------------+-----------------------------------------------+
   | Compute       | for forwarding packets, including the cost of |
   | capacity      | Forwarding Information Base (FIB) lookups.    |
   +---------------+-----------------------------------------------+

           Table 2: ICN-related Network Element Resources

   For these resources, any QoS scheme has to specify two things:

   1.  How do you create _equivalence classes_ (a.k.a. flows) of traffic
       to which different QoS treatments are applied?

   2.  What are the possible treatments, and how are those mapped to the
       resource allocation algorithms?

   Two critical facts of life come into play when designing a QoS
   scheme.  First, the number of equivalence classes that can be
   simultaneously tracked in a network element is bounded by both the
   memory and the processing capacity to do the necessary lookups.  One
   can allow very fine-grained equivalence classes, but not be able to
   employ them globally because of the scaling limits of core routers.
   That means it is wise either to restrict the range of equivalence
   classes, or to allow them to be _aggregated_, trading off accuracy in
   policing traffic against the ability to scale.

   Second, the flexibility of expressible treatments can be tightly
   constrained by both protocol encoding and algorithmic limitations.
429 The ability to encode the treatment requests in the protocol can be 430 limited (as it is for IP - there are only 6 of the Type of Service 431 (TOS) bits available for Diffserv treatments), but as or more 432 important is whether there are practical traffic policing, queuing, 433 and pacing algorithms that can be combined to support a rich set of 434 QoS treatments. 436 The two considerations above in combination can easily be 437 substantially more expressive than what can be achieved in practice 438 with the available number of queues on real network interfaces or the 439 amount of per-packet computation needed to enqueue or dequeue a 440 packet. 442 5. How does this relate to QoS in TCP/IP? 444 TCP/IP has fewer resource types to manage than ICN, and in some cases 445 the allocation methods are simpler, as shown in the following table: 447 +===============+=============+================================+ 448 | Resource | IP Relevant | TCP/IP Usage | 449 +===============+=============+================================+ 450 | Communication | YES | buffering for queued packets | 451 | Link capacity | | | 452 +---------------+-------------+--------------------------------+ 453 | Content Store | NO | no content store in IP | 454 | capacity | | | 455 +---------------+-------------+--------------------------------+ 456 | Forwarder | MAYBE | not needed for output-buffered | 457 | memory | | designs^(*) | 458 +---------------+-------------+--------------------------------+ 459 | Compute | YES | for forwarding packets, but | 460 | capacity | | arguably much cheaper than ICN | 461 +---------------+-------------+--------------------------------+ 463 Table 3: IP-related Network Element Resources 465 ^(*)Output-buffered designs are where all packet buffering resources 466 are associated with the output interfaces and there are no receiver 467 interface or internal forwarding buffers that can be over-subscribed. 
468 Output-buffered switchs or routers are common but not universal, as 469 they generally require an internal speed-up factor where forwarding 470 capacity is greater than the sum of the input capacity of the 471 interfaces. 473 For these resources, IP has specified three fundamental things, as 474 shown in the following table: 476 +==============+====================================================+ 477 | What | How | 478 +==============+====================================================+ 479 | *Equivalence | subset+prefix match on IP | 480 | classes* | 5-tuple {SA,DA,SP,DP,PT} | 481 | | SA=Source Address | 482 | | DA=Destination Address | 483 | | SP=Source Port | 484 | | DP=Desintation Port | 485 | | PT=IP Protocol Type | 486 +--------------+----------------------------------------------------+ 487 | *Diffserv | (very) small number of | 488 | treatments* | globally-agreed traffic | 489 | | classes | 490 +--------------+----------------------------------------------------+ 491 | *Intserv | per-flow parameterized | 492 | treatments* | _Controlled Load_ and | 493 | | _Guaranteed_ service | 494 | | classes | 495 +--------------+----------------------------------------------------+ 497 Table 4: Fundamental protocol elements to achieve QoS for TCP/IP 499 Equivalence classes for IP can be pairwise, by matching against both 500 source and destination address+port, pure group using only 501 destination address+port, or source-specific multicast with source 502 adress+port and destination multicast address+port. 504 With Intserv, the Resource ReSerVation signaling protocol (RSVP) 505 [RFC2205] carries two data structures, the Flow Specifier (FLOWSPEC) 506 and the Traffic Specifier (TSPEC). The former fulfills the 507 requirement to identify the equivalence class to which the QoS being 508 signaled applies. The latter comprises the desired QoS treatment 509 along with a description of the dynamic character of the traffic 510 (e.g. 
average bandwidth and delay, peak bandwidth, etc.). Both of 511 these encounter substantial scaling limits, which has meant that 512 Intserv has historically been limited to confined topologies, and/or 513 high-value usages, like traffic engineering. 515 With Diffserv, the protocol encoding (6 bits in the TOS field of the 516 IP header) artificially limits the number of classes one can specify. 517 These are documented in [RFC4594]. Nonetheless, when used with fine- 518 grained equivalence classes, one still runs into limits on the number 519 of queues required. 521 6. Why is ICN Different? Can we do Better? 523 While one could adopt an approach to QoS mirroring the extensive 524 experience with TCP/IP, this would, in the author's view, be a 525 mistake. The implementation and deployment of QoS in IP networks has 526 been spotty at best. There are of course economic and political 527 reasons as well as technical reasons for these mixed results, but 528 there are several architectural choices in ICN that make it a 529 potentially much better protocol base to enhance with QoS machinery. 530 This section discusses those differences and their consequences. 532 6.1. Equivalence class capabilities 534 First and foremost, hierarchical names are a much richer basis for 535 specifying equivalence classes than IP 5-tuples. The IP address (or 536 prefix) can only separate traffic by topology to the granularity of 537 hosts, and not express actual computational instances nor sets of 538 data. Ports give some degree of per-instance demultiplexing, but 539 this tends to be both coarse and ephemeral, while confounding the 540 demultiplexing function with the assignment of QoS treatments to 541 particular subsets of the data. Some degree of finer granularity is 542 possible with IPv6 by exploiting the ability to use up to 64 bits of 543 address for classifying traffic. 
In fact, the hICN project 544 [I-D.muscariello-intarea-hicn], while adopting the request-response 545 model of CCNx, uses IPv6 addresses as the available namespace, and 546 IPv6 packets (plus "fake" TCP headers) as the wire format. 548 Nonetheless, the flexibility of tokenized (i.e. strings treated as 549 opaque tokens), variable length, hierarchical names allows one to 550 directly associate classes of traffic for QoS purposes with the 551 structure of an application namespace. The classification can be as 552 coarse or fine-grained as desired by the application. While not 553 _always_ the case, there is typically a straightforward association 554 between how objects are named, and how they are grouped together for 555 common treatment. Examples abound; a number can be conveniently 556 found in [I-D.moiseenko-icnrg-flowclass]. 558 6.2. Topology interactions with QoS 560 In ICN, QoS is not pre-bound to network topology since names are non- 561 topological, unlike unicast IP addresses. This allows QoS to be 562 applied to multi-destination and multi-path environments in a 563 straightforward manner, rather than requiring either multicast with 564 coarse class-based scheduling or complex signaling like that in RSVP- 565 TE [RFC3209] that is needed to make point-to-multipoint Muti-Protocol 566 Label Switching (MPLS) work. 568 Because of IP's stateless forwarding model, complicated by the 569 ubiquity of asymmetric routes, any flow-based QoS requires state that 570 is decoupled from the actual arrival of traffic and hence must be 571 maintained, at least as soft-state, even during quiescent periods. 572 Intserv, for example, requires flow signaling with state O(#flows). 573 ICN, even worst case, requires state O(#active Interest/Data 574 exchanges), since state can be instantiated on arrival of an 575 Interest, and removed (perhaps lazily) once the data has been 576 returned. 578 6.3. 
Specification of QoS treatments 580 Unlike Intserv, Diffserv eschews signaling in favor of class-based 581 configuration of resources and queues in network elements. However, 582 Diffserv limits traffic treatments to a few bits taken from the ToS 583 field of IP. No such wire encoding limitations exist for NDN or 584 CCNx, as the protocol is completely TLV (Type-Length-Value) based, 585 and one (or even more than one) new field can be easily defined to 586 carry QoS treatment information. 588 Therefore, there are greenfield possibilities for more powerful QoS 589 treatment options in ICN. For example, IP has no way to express a 590 QoS treatment like "try hard to deliver reliably, even at the expense 591 of delay or bandwidth". Such a QoS treatment for ICN could invoke 592 native ICN mechanisms, none of which are present in IP, such as: 594 * In-network retransmission in response to hop-by-hop errors 595 returned from upstream forwarders 597 * Trying multiple paths to multiple content sources either in 598 parallel or serially 600 * Assigning higher precedence for short-term caching to recover from 601 downstream^(*) errors 603 * Coordinating cache utilization with forwarding resources 605 | ^(*)_Downstream_ refers to the direction Data messages flow 606 | toward the consumer (the issuer of Interests). Conversely, 607 | _Upstream_ refers to the direction Interests flow toward the 608 | producer of data. 610 Such mechanisms are typically described in NDN and CCNx as 611 _forwarding strategies_. However, little or no guidance is given for 612 what application actions or protocol machinery is used to decide 613 which forwarding strategy to use for the Interests that arrive at a 614 forwarder. See [BenAbraham2018] for an investigation of these 615 issues. Associating forwarding strategies directly with the equivalence 616 classes and QoS treatments can make them more accessible and 617 useful to implement and deploy.
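To make the association described above concrete, the following Python sketch shows a forwarder-side table that maps name-prefix equivalence classes to both a QoS treatment and a forwarding strategy, so that the same longest-prefix classifier drives both decisions. All names, treatment labels, and strategy labels here are invented for illustration; neither CCNx nor NDN defines such a table.

```python
# Hypothetical forwarder table: name-prefix equivalence class ->
# (QoS treatment, forwarding strategy). Entries are illustrative only.
STRATEGY_TABLE = [
    (("video", "live"), "low-latency", "single-best-path"),
    (("video",),        "bulk",        "serial-multipath"),
    (("sensors",),      "reliable",    "parallel-multipath-with-retx"),
]

def classify(name_components):
    """Longest-prefix match of a hierarchical name against the table,
    returning the (treatment, strategy) pair for the best match."""
    best = None
    for prefix, treatment, strategy in STRATEGY_TABLE:
        if name_components[:len(prefix)] == list(prefix):
            if best is None or len(prefix) > len(best[0]):
                best = (prefix, treatment, strategy)
    if best is None:
        return ("best-effort", "single-best-path")  # default class
    return (best[1], best[2])

print(classify(["video", "live", "channel7", "seg41"]))
# ('low-latency', 'single-best-path')
```

Because the classifier operates on name components rather than addresses, an application can make the classification as coarse or as fine-grained as its namespace allows, exactly as suggested in Section 6.1.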
619 Stateless forwarding and asymmetric routing in IP limit available 620 state/feedback to manage link resources. In contrast, NDN or CCNx 621 forwarding allows all link resource allocation to occur as part of 622 Interest forwarding, potentially simplifying things considerably. In 623 particular, with symmetric routing, producers have no control over 624 the paths their data packets traverse, and hence any QoS treatments 625 intended to influence routing paths from producer to consumer will 626 have no effect. 628 One complication in the handling of ICN QoS treatments has no 629 analogue in IP and hence is worth mentioning. CCNx and NDN both perform _Interest 630 aggregation_ (See Section 2.3.2 of [RFC8569]). If an Interest 631 arrives matching an existing PIT entry, but with a different QoS 632 treatment from an Interest already forwarded, it can be tricky to 633 decide whether to aggregate the Interest or forward it, and how to 634 keep track of the differing QoS treatments for the two Interests. 635 Exploration of the details surrounding these situations is beyond the 636 scope of this document; further discussion can be found for the 637 general case of flow balance and congestion control in 638 [I-D.oran-icnrg-flowbalance], and specifically for QoS treatments in 639 [I-D.anilj-icnrg-dnc-qos-icn]. 641 6.4. ICN forwarding semantics effect on QoS 643 IP has three forwarding semantics, with different QoS needs (Unicast, 644 Anycast, Multicast). ICN has a single forwarding semantic, so any 645 QoS machinery can be uniformly applied across any request/response 646 invocation. This applies whether the forwarder employs dynamic 647 destination routing, multi-destination forwarding with next-hops 648 tried serially, multi-destination with next-hops used in parallel, or 649 even localized flooding (e.g. directly on L2 multicast mechanisms).
650 Additionally, the pull-based model of ICN avoids a number of thorny 651 multicast QoS problems that IP has ([Wang2000], [RFC3170], 652 [Tseng2003]). 654 The multi-destination/multi-path forwarding model in ICN changes 655 resource allocation needs in a fairly deep way. IP treats all 656 endpoints as open-loop packet sources, whereas NDN and CCNx have 657 strong asymmetry between producers and consumers as packet sources. 659 6.5. QoS interactions with Caching 661 IP has no caching in routers, whereas ICN needs ways to allocate 662 cache resources. Treatments to control caching operation are 663 unlikely to look much like the treatments used to control link 664 resources. NDN and CCNx already have useful cache control directives 665 associated with Data messages. The CCNx controls include: 667 ExpiryTime: time after which a cached Content Object is considered 668 expired and MUST no longer be used to respond to an Interest from 669 a cache. 671 Recommended Cache Time: time after which the publisher considers the 672 Content Object to be of low value to cache. 674 See [RFC8569] for the formal definitions. 676 ICN flow classifiers, such as those in 677 [I-D.moiseenko-icnrg-flowclass], can be used to achieve soft or hard 678 partitioning^(*) of cache resources in the content store of an ICN 679 forwarder. For example, cached content for a given equivalence class 680 can be considered _fate shared_ in a cache whereby objects from the 681 same equivalence class can be purged as a group rather than 682 individually. This can recover cache space more quickly and at lower 683 overhead than pure per-object replacement when a cache is under 684 extreme pressure and in danger of thrashing. In addition, since the 685 forwarder remembers the QoS treatment for each pending Interest in 686 its PIT, the above cache controls can be augmented by policy to 687 prefer retention of cached content for some equivalence classes as 688 part of the cache replacement algorithm.
690 | ^(*)With hard partitioning, there are dedicated cache resources 691 | for each equivalence class (or enumerated list of equivalence 692 | classes). With soft partitioning, resources are at least 693 | partly shared among the (sets of) equivalence classes of 694 | traffic. 696 7. Strawman principles for an ICN QoS architecture 698 Based on the observations made in the earlier sections, this summary 699 section captures the author's ideas for clear and actionable 700 architectural principles for how to incorporate QoS machinery into 701 ICN protocols like NDN and CCNx. Hopefully, they can guide further 702 work and focus effort on portions of the giant design space for QoS 703 that have the best tradeoffs in terms of flexibility, simplicity, and 704 deployability. 706 *Define equivalence classes using the name hierarchy rather than 707 creating an independent traffic class definition*. This directly 708 associates the specification of equivalence classes of traffic with 709 the structure of the application namespace. It can allow 710 hierarchical decomposition of equivalence classes in a natural way 711 because of the way hierarchical ICN names are constructed. Two 712 practical mechanisms are presented in [I-D.moiseenko-icnrg-flowclass] 713 with different tradeoffs between security and the ability to 714 aggregate flows. Either prefix-based (EC3) or explicit name 715 component based (ECNT) or both could be adopted as part of the 716 QoS architecture for defining equivalence classes. 718 *Put consumers in control of Link and Forwarding resource 719 allocation*. Do all link buffering and forwarding (both memory and 720 CPU) resource allocations based on Interest arrivals. This is 721 attractive because it provides early congestion feedback to 722 consumers, and allows scheduling the reverse link direction ahead of 723 time for carrying the matching data. It makes enforcement of QoS 724 treatments a single-ended (i.e.
at the consumer) rather than a 725 double-ended problem and can avoid wasting resources on fetching data 726 that will wind up dropped when it arrives at a bottleneck link. 728 *Allow producers to influence the allocation of cache resources*. 729 Producers want to affect caching decisions in order to: 731 * Shed load by having Interests served by content stores in 732 forwarders before reaching the producer itself. 734 * Survive transient producer reachability problems or link outages 735 close to the producer. 737 For caching to be effective, individual Data objects in an 738 equivalence class need to have similar treatment; otherwise well- 739 known cache thrashing pathologies due to self-interference emerge. 740 Producers have the most direct control over caching policies through 741 the caching directives in Data messages. It therefore makes sense to 742 put the producer, rather than the consumer or network operator, in 743 charge of specifying these equivalence classes. 745 See [I-D.moiseenko-icnrg-flowclass] for specific mechanisms to 746 achieve this. 748 *Allow consumers to influence the allocation of cache resources*. 749 Consumers want to affect caching decisions in order to: 751 * Reduce latency for retrieving data 752 * Survive transient outages of either a producer or links close to 753 the consumer 755 Consumers can have indirect control over caching by specifying QoS 756 treatments in their Interests. Consider the following potential QoS 757 treatments by consumers that can drive caching policies: 759 * A QoS treatment requesting better robustness against transient 760 disconnection can be used by a forwarder close to the consumer (or 761 downstream of an unreliable link) to preferentially cache the 762 corresponding data.
764 * Conversely, a QoS treatment, together with or in addition to a 765 request for short latency, indicating that new data will be 766 requested soon enough that caching the current data being 767 requested would be ineffective, and hence that forwarders should 768 only honor the caching preferences of the producer. 770 * A QoS treatment indicating a mobile consumer likely to incur a 771 mobility event within an RTT (or a few RTTs). Such a treatment 772 would allow a mobile network operator to preferentially cache the 773 data at a forwarder positioned at a _join point_ or _rendezvous 774 point_ of their topology. 776 *Give network operators the ability to match customer SLAs to cache 777 resource availability*. Network operators, whether closely tied 778 administratively to producer or consumer, or constituting an 779 independent transit administration, provide the storage resources in 780 the ICN forwarders. Therefore, they are the ultimate arbiters of how 781 the cache resources are managed. In addition to any local policies 782 they may enforce, the cache behavior from the QoS standpoint emerges 783 from how the producer-specified equivalence classes map onto cache 784 space availability, including whether cache entries are treated 785 individually, or fate-shared. Forwarders also determine how the 786 consumer-specified QoS treatments map to the precedence used for 787 retaining Data objects in the cache. 789 Besides utilizing cache resources to meet the QoS goals of individual 790 producers and consumers, network operators also want to manage their 791 cache resources in order to: 793 * Ameliorate congestion hotspots by reducing load converging on 794 producers they host on their network. 796 * Improve Interest satisfaction rates by utilizing caches as short- 797 term retransmission buffers to recover from transient producer 798 reachability problems, link errors or link outages.
800 * Improve both latency and reliability in environments where 801 consumers are mobile in the operator's topology. 803 *Re-think how to specify traffic treatments - don't just copy 804 Diffserv*. Some of the Diffserv classes may form a good starting 805 point, as their mapping onto queuing algorithms for managing link 806 buffering is well understood. However, Diffserv alone does not 807 allow one to express latency versus reliability tradeoffs or other 808 useful QoS treatments. Nor does it permit "Traffic Specification 809 (TSPEC)"-style traffic descriptions as are allowed in a signaled QoS 810 scheme. Here are some examples: 812 * A "burst" treatment, where an initial Interest gives an aggregate 813 data size to request allocation of link capacity for a large burst 814 of Interest/Data exchanges. The Interest can be rejected at any 815 hop if the resources are not available. Such a treatment can also 816 accommodate Data implosion produced by the discovery procedures of 817 management protocols like [I-D.irtf-icnrg-ccninfo]. 819 * A "reliable" treatment, which affects preference for allocation of 820 PIT space for the Interest and Content Store space for the data in 821 order to improve the robustness of IoT data delivery in 822 constrained environments, as is described in 823 [I-D.gundogan-icnrg-iotqos]. 825 * A "search" treatment, which, within the specified Interest 826 Lifetime, tries many paths either in parallel or serially to 827 potentially many content sources, to maximize the probability that 828 the requested item will be found. This is done at the expense of 829 the extra bandwidth of both forwarding Interests and receiving 830 multiple responses upstream of an aggregation point. The 831 treatment can encode a value expressing tradeoffs like breadth- 832 first versus depth-first search, and bounds on the total resource 833 expenditure. Such a treatment would be useful for instrumentation 834 protocols like [I-D.mastorakis-icnrg-icntraceroute].
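Since CCNx messages are entirely TLV-based, a treatment like those above could be carried in a new TLV. The sketch below encodes one such treatment in the 2-byte-type, 2-byte-length TLV format that [RFC8609] uses. The type codepoint (0x0F00), the treatment codes, and the layout of the "burst" parameter are all invented for this example; no such assignments exist in [RFC8609].

```python
import struct

T_QOS_TREATMENT = 0x0F00  # hypothetical, unassigned type codepoint
TREATMENTS = {"burst": 1, "reliable": 2, "search": 3}  # invented codes

def encode_qos_tlv(treatment, params=b""):
    """Build a CCNx-style TLV: 2-byte type, 2-byte length, then value.
    Value here is a 1-byte treatment code followed by optional params."""
    value = bytes([TREATMENTS[treatment]]) + params
    return struct.pack("!HH", T_QOS_TREATMENT, len(value)) + value

def decode_qos_tlv(buf):
    """Parse the TLV back into (treatment name, parameter bytes)."""
    t, length = struct.unpack("!HH", buf[:4])
    assert t == T_QOS_TREATMENT and len(buf) == 4 + length
    name = {v: k for k, v in TREATMENTS.items()}[buf[4]]
    return name, buf[5:]

# A "burst" treatment whose parameter is an aggregate data size
# (hypothetical layout: one 4-byte unsigned integer, network order):
tlv = encode_qos_tlv("burst", struct.pack("!I", 5_000_000))
assert decode_qos_tlv(tlv) == ("burst", struct.pack("!I", 5_000_000))
```

A forwarder encountering an unknown type in this TLV space could simply ignore it, which is the usual way TLV-based protocols remain extensible.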
836 | As an aside, loose latency control (on the order of seconds or 837 | tens of milliseconds as opposed to milliseconds or microseconds) 838 | can be achieved by bounding Interest Lifetime as long as this 839 | lifetime machinery is not also used as an application mechanism 840 | to provide subscriptions or to establish path traces for 841 | producer mobility. See [Krol2018] for a discussion of the 842 | network versus application timescale issues in ICN protocols. 844 7.1. Can Intserv-like traffic control in ICN provide richer QoS 845 semantics? 847 Basic QoS treatments such as those summarized above may not be 848 adequate to cover the whole range of application utility functions 849 and deployment environments we expect for ICN. While it is true that 850 one does not necessarily need a separate signaling protocol like RSVP 851 given the state carried in the ICN data plane by forwarders, there 852 are some potentially important capabilities not provided by just 853 simple QoS treatments applied to per-Interest/Data exchanges. 854 Intserv's richer QoS capabilities may be of value, especially if they 855 can be provided in ICN at lower complexity and protocol overhead than 856 Intserv+RSVP. 858 There are three key capabilities missing from Diffserv-like QoS 859 treatments, no matter how sophisticated they may be in describing the 860 desired treatment for a given equivalence class of traffic. Intserv- 861 like QoS provides all of these: 863 1. The ability to *describe traffic flows* in a mathematically 864 meaningful way. This is done through parameters like average 865 rate, peak rate, and maximum burst size. The parameters are 866 encapsulated in a data structure called a "TSPEC" which can be 867 placed in whatever protocol needs the information (in the case of 868 TCP/IP Intserv, this is RSVP). 870 2.
The ability to perform *admission control*, where the element 871 requesting the QoS treatment can know _before_ introducing 872 traffic whether the network elements have agreed to provide the 873 requested traffic treatment. An important side-effect of 874 providing this assurance is that the network elements install 875 state that allows the forwarding and queuing machinery to police 876 and shape the traffic in a way that provides a sufficient degree 877 of _isolation_ from the dynamic behavior of other traffic. 878 Depending on the admission control mechanism, it may or may not 879 be possible to explicitly release that state when the application 880 no longer needs the QoS treatment. 882 3. The permissible *degree of divergence* in the actual traffic 883 handling from the requested handling. Intserv provided two 884 choices here, the _controlled load_ service and the _guaranteed_ 885 service. The former allows stochastic deviation equivalent to 886 what one would experience on an unloaded path of a packet 887 network. The latter conforms to the TSPEC deterministically, at 888 the obvious expense of demanding extremely conservative resource 889 allocation. 891 Given the limited applicability of these capabilities in today's 892 Internet, the author does not take any position as to whether any of 893 these Intserv-like capabilities are needed for ICN to be successful. 894 However, a few things seem important to consider. The following 895 paragraphs speculate about the consequences to the CCNx or NDN 896 protocol architectures of incorporating these features. 898 Superficially, it would be quite straightforward to accommodate 899 Intserv-equivalent traffic descriptions in CCNx or NDN. One could 900 define a new TLV for the Interest message to carry a TSPEC. A 901 forwarder encountering this, together with a QoS treatment request 902 (e.g.
as proposed in Section 6.3) could associate the traffic 903 specification with the corresponding equivalence class derived from 904 the name in the Interest. This would allow the forwarder to create 905 state that not only would apply to the returning Data for that 906 Interest when being queued on the downstream interface, but be 907 maintained as soft state across multiple Interest/Data exchanges to 908 drive policing and shaping algorithms at per-flow granularity. The 909 cost in Interest message overhead would be modest; however, the 910 complications associated with managing different traffic 911 specifications in different Interests for the same equivalence class 912 might be substantial. Of course, all the scalability considerations 913 with maintaining per-flow state also come into play. 915 Similarly, it would be straightforward to have a way to 916 express the degree of divergence capability that Intserv provides 917 through its controlled load and guaranteed service definitions. This 918 could either be packaged with the traffic specification or encoded 919 separately. 921 In contrast to the above, performing admission control for ICN flows 922 is likely to be just as heavy-weight as it turned out to be with IP 923 using RSVP. The dynamic multi-path, multi-destination forwarding 924 model of ICN makes performing admission control particularly tricky. 925 Just to illustrate: 927 * Forwarding next-hop selection is not confined to single paths (or 928 a few ECMP equivalent paths) as it is with IP, making it difficult 929 to know where to install state in advance of the arrival of an 930 Interest to forward.
932 * As with point-to-multipoint complexities when using RSVP for MPLS- 933 TE, state has to be installed to multiple producers over multiple 934 paths before an admission control algorithm can commit the 935 resources and say "yes" to a consumer needing admission control 936 capabilities. 938 * Knowing when to remove admission control state is difficult in the 939 absence of a heavy-weight resource reservation protocol. Soft 940 state timeout may or may not be an adequate answer. 942 Despite the challenges above, it may be possible to craft an 943 admission control scheme for ICN that achieves the desired QoS goals 944 of applications without the invention and deployment of a complex 945 separate admission control signaling protocol. There have been 946 designs in earlier network architectures that were capable of 947 performing admission control piggybacked on packet transmission. 949 | (The earliest example the author is aware of is [Autonet]). 951 Such a scheme might have the following general shape *(warning: 952 serious hand waving follows!)*: 954 * In addition to a QoS treatment and a traffic specification, an 955 Interest requesting admission for the corresponding equivalence 956 class would so indicate via a new TLV. It would also need to: (a) 957 indicate an expiration time after which any reserved resources can 958 be released, and (b) indicate that caches be bypassed, so that the 959 admission control request arrives at a bona fide producer. 961 * Each forwarder processing the Interest would check for resource 962 availability and, if resources are not available or the requested 963 service is not feasible, reject the Interest with an admission 964 control failure. If resources are available, the forwarder would 965 record the traffic specification as described above and forward 966 the Interest. 967 * If the Interest successfully arrives at a producer, the producer 968 returns the requested Data.
970 * Each on-path forwarder, on receiving the matching Data message, if 971 the resources are still available, does the actual allocation, and 972 marks the admission control TLV as "provisionally approved". 973 Conversely, if the resource reservation fails, the admission 974 control is marked "failed", although the Data is still passed 975 downstream. 977 * Upon the Data message arriving, the consumer knows whether admission 978 succeeded or not, and subsequent Interests can rely on the QoS 979 state being in place until either some failure occurs, or a 980 topology or other forwarding change alters the forwarding path. 981 To deal with this, additional machinery is needed to ensure 982 subsequent Interests for an admitted flow either follow that path 983 or an error is reported. One possibility (also useful in many 984 other contexts) is to employ a _Path Steering_ mechanism, such as 985 the one described in [Moiseenko2017]. 987 8. IANA Considerations 989 This document does not require any IANA actions. 991 9. Security Considerations 993 There are a few ways in which QoS for ICN interacts with security and 994 privacy issues. Since QoS addresses relationships among traffic 995 rather than the inherent characteristics of traffic, it neither 996 enhances nor degrades the security and privacy properties of the data 997 being carried, as long as the machinery does not alter or otherwise 998 compromise the basic security properties of the associated protocols. 999 The QoS approaches advocated here for ICN can serve to amplify 1000 existing threats to network traffic, however: 1002 * An attacker able to manipulate the QoS treatments of traffic can 1003 mount a more focused (and potentially more effective) denial of 1004 service attack by suppressing performance on traffic the attacker 1005 is targeting. Since the architecture here assumes QoS treatments 1006 are manipulable hop-by-hop, any on-path adversary can wreak havoc.
1007 Note however, that in basic ICN, an on-path attacker can do this 1008 and more by dropping, delaying, or mis-routing traffic independent 1009 of any particular QoS machinery in use. 1011 * By explicitly revealing equivalence classes of traffic via either 1012 names or other fields in packets, an attacker has yet one more 1013 handle to use to discover linkability of multiple requests. 1015 10. References 1017 10.1. Normative References 1019 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1020 Requirement Levels", BCP 14, RFC 2119, 1021 DOI 10.17487/RFC2119, March 1997, 1022 . 1024 [RFC8569] Mosko, M., Solis, I., and C. Wood, "Content-Centric 1025 Networking (CCNx) Semantics", RFC 8569, 1026 DOI 10.17487/RFC8569, July 2019, 1027 . 1029 [RFC8609] Mosko, M., Solis, I., and C. Wood, "Content-Centric 1030 Networking (CCNx) Messages in TLV Format", RFC 8609, 1031 DOI 10.17487/RFC8609, July 2019, 1032 . 1034 10.2. Informative References 1036 [AS] "Autonomous System (Internet)", no date, 1037 . 1040 [Auge2018] Augé, J., Carofiglio, G., Grassi, G., Muscariello, L., 1041 Pau, G., and X. Zeng, "MAP-Me: Managing Anchor-Less 1042 Producer Mobility in Content-Centric Networks", in IEEE 1043 Transactions on Network and Service Management (Volume: 15 1044 , Issue: 2 , June 2018), DOI 10.1109/TNSM.2018.2796720, 1045 June 2018, . 1047 [Autonet] Schroeder, M., Birrell, A., Burrows, M., Murray, H., 1048 Needham, R., Rodeheffer, T., Satterthwaite, E., and C. 1049 Thacker, "Autonet: a High-speed, Self-configuring Local 1050 Area Network Using Point-to-point Links", in IEEE Journal 1051 on Selected Areas in Communications ( Volume: 9, Issue: 8, 1052 Oct 1991), DOI 10.1109/49.105178, October 1991, 1053 . 1056 [BenAbraham2018] 1057 Ben Abraham, H., Parwatikar, J., DeHart, J., Dresher, A., 1058 and P. 
Crowley, "Decoupling Information and Connectivity 1059 via Information-Centric Transport", in ICN '18: 1060 Proceedings of the 5th ACM Conference on Information- 1061 Centric Networking September 21-23, 2018, Boston, MA, USA, 1062 DOI 10.1145/3267955.3267963, September 2018, 1063 . 1066 [Carofiglio2012] 1067 Carofiglio, G., Gallo, M., and L. Muscariello, "Joint hop- 1068 by-hop and receiver-driven Interest control protocol for 1069 content-centric networks", in ACM SIGCOMM Computer 1070 Communication Review, September 2012, 1071 DOI 10.1016/j.comnet.2016.09.012, September 2012, 1072 . 1075 [Carofiglio2016] 1076 Carofiglio, G., Gallo, M., and L. Muscariello, "Optimal 1077 multipath congestion control and request forwarding in 1078 information-centric networks: Protocol design and 1079 experimentation", in Computer Networks, Vol. 110 No. 9, 1080 December 2016, DOI 10.1145/2377677.2377772, December 2016, 1081 . 1083 [I-D.anilj-icnrg-dnc-qos-icn] 1084 Jangam, A., Suthar, P., and M. Stolic, "QoS Treatments in 1085 ICN using Disaggregated Name Components", Work in 1086 Progress, Internet-Draft, draft-anilj-icnrg-dnc-qos-icn- 1087 01, 11 September 2019, . 1090 [I-D.gundogan-icnrg-iotqos] 1091 Gundogan, C., Schmidt, T., Waehlisch, M., Frey, M., Shzu- 1092 Juraschek, F., and J. Pfender, "Quality of Service for ICN 1093 in the IoT", Work in Progress, Internet-Draft, draft- 1094 gundogan-icnrg-iotqos-01, 8 July 2019, 1095 . 1098 [I-D.ietf-quic-transport] 1099 Iyengar, J. and M. Thomson, "QUIC: A UDP-Based Multiplexed 1100 and Secure Transport", Work in Progress, Internet-Draft, 1101 draft-ietf-quic-transport-27, 21 February 2020, 1102 . 1105 [I-D.irtf-icnrg-ccninfo] 1106 Asaeda, H., Ooka, A., and X. Shao, "CCNinfo: Discovering 1107 Content and Network Information in Content-Centric 1108 Networks", Work in Progress, Internet-Draft, draft-irtf- 1109 icnrg-ccninfo-02, 8 July 2019, 1110 . 1112 [I-D.irtf-nwcrg-nwc-ccn-reqs] 1113 Matsuzono, K., Asaeda, H., and C.
Westphal, "Network 1114 Coding for Content-Centric Networking / Named Data 1115 Networking: Requirements and Challenges", Work in 1116 Progress, Internet-Draft, draft-irtf-nwcrg-nwc-ccn-reqs- 1117 02, 20 September 2019, . 1120 [I-D.mastorakis-icnrg-icntraceroute] 1121 Mastorakis, S., Gibson, J., Moiseenko, I., Droms, R., and 1122 D. Oran, "ICN Traceroute Protocol Specification", Work in 1123 Progress, Internet-Draft, draft-mastorakis-icnrg- 1124 icntraceroute-06, 13 February 2020, 1125 . 1128 [I-D.moiseenko-icnrg-flowclass] 1129 Moiseenko, I. and D. Oran, "Flow Classification in 1130 Information Centric Networking", Work in Progress, 1131 Internet-Draft, draft-moiseenko-icnrg-flowclass-05, 20 1132 January 2020, . 1135 [I-D.muscariello-intarea-hicn] 1136 Muscariello, L., Carofiglio, G., Auge, J., and M. 1137 Papalini, "Hybrid Information-Centric Networking", Work in 1138 Progress, Internet-Draft, draft-muscariello-intarea-hicn- 1139 03, 30 October 2019, . 1142 [I-D.oran-icnrg-flowbalance] 1143 Oran, D., "Maintaining CCNx or NDN flow balance with 1144 highly variable data object sizes", Work in Progress, 1145 Internet-Draft, draft-oran-icnrg-flowbalance-02, 3 1146 February 2020, . 1149 [Krol2018] Król, M., Habak, K., Oran, D., Kutscher, D., and I. 1150 Psaras, "RICE: Remote Method Invocation in ICN", in 1151 ICN'18: Proceedings of the 5th ACM Conference on 1152 Information-Centric Networking September 21-23, 2018, 1153 Boston, MA, USA, DOI 10.1145/3267955.3267956, September 1154 2018, . 1157 [Mahdian2016] 1158 Mahdian, M., Arianfar, S., Gibson, J., and D. Oran, 1159 "MIRCC: Multipath-aware ICN Rate-based Congestion 1160 Control", in Proceedings of the 3rd ACM Conference on 1161 Information-Centric Networking, 1162 DOI 10.1145/2984356.2984365, September 2016, 1163 . 1166 [minmaxfairness] 1167 "Max-min Fairness", no date, 1168 . 1170 [Moiseenko2017] 1171 Moiseenko, I. and D. 
Oran, "Path Switching in Content 1172 Centric and Named Data Networks", in ICN '17: Proceedings 1173 of the 4th ACM Conference on Information-Centric 1174 Networking, DOI 10.1145/3125719.3125721, September 2017, 1175 . 1178 [NDN] "Named Data Networking", various, 1179 . 1181 [NDNTutorials] 1182 "NDN Tutorials", various, 1183 . 1185 [Oran2018QoSslides] 1186 Oran, D., "Thoughts on Quality of Service for NDN/CCN- 1187 style ICN protocol architectures", presented at ICNRG 1188 Interim Meeting, Cambridge MA, 24 September 2018, 1189 . 1193 [proportionalfairness] 1194 "Proportionally Fair", no date, 1195 . 1197 [RFC0793] Postel, J., "Transmission Control Protocol", STD 7, 1198 RFC 793, DOI 10.17487/RFC0793, September 1981, 1199 . 1201 [RFC2205] Braden, R., Ed., Zhang, L., Berson, S., Herzog, S., and S. 1202 Jamin, "Resource ReSerVation Protocol (RSVP) -- Version 1 1203 Functional Specification", RFC 2205, DOI 10.17487/RFC2205, 1204 September 1997, . 1206 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black, 1207 "Definition of the Differentiated Services Field (DS 1208 Field) in the IPv4 and IPv6 Headers", RFC 2474, 1209 DOI 10.17487/RFC2474, December 1998, 1210 . 1212 [RFC2998] Bernet, Y., Ford, P., Yavatkar, R., Baker, F., Zhang, L., 1213 Speer, M., Braden, R., Davie, B., Wroclawski, J., and E. 1214 Felstaine, "A Framework for Integrated Services Operation 1215 over Diffserv Networks", RFC 2998, DOI 10.17487/RFC2998, 1216 November 2000, . 1218 [RFC3170] Quinn, B. and K. Almeroth, "IP Multicast Applications: 1219 Challenges and Solutions", RFC 3170, DOI 10.17487/RFC3170, 1220 September 2001, . 1222 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 1223 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 1224 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001, 1225 . 1227 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1228 Congestion Control Protocol (DCCP)", RFC 4340, 1229 DOI 10.17487/RFC4340, March 2006, 1230 . 
1232 [RFC4594] Babiarz, J., Chan, K., and F. Baker, "Configuration 1233 Guidelines for DiffServ Service Classes", RFC 4594, 1234 DOI 10.17487/RFC4594, August 2006, 1235 . 1237 [RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", 1238 RFC 4960, DOI 10.17487/RFC4960, September 2007, 1239 . 1241 [Schneider2016] 1242 Schneider, K., Yi, C., Zhang, B., and L. Zhang, "A 1243 Practical Congestion Control Scheme for Named Data 1244 Networking", in ACM-ICN '16: Proceedings of the 3rd ACM 1245 Conference on Information-Centric Networking, 1246 DOI 10.1145/2984356.2984369, September 2016, 1247 . 1250 [Shenker2006] 1251 Shenker, S., "Fundamental Design Issues for the Future 1252 Internet", in IEEE Journal on Selected Areas in 1253 Communications, Vol. 13, No. 7, DOI 10.1109/49.414637, 1254 September 2006, 1255 . 1257 [Song2018] Song, J., Lee, M., and T. Kwon, "SMIC: Subflow-level 1258 Multi-path Interest Control for Information Centric 1259 Networking", ICN '18: Proceedings of the 5th ACM 1260 Conference on Information-Centric Networking, 1261 DOI 10.1145/3267955.3267971, September 2018, 1262 . 1265 [Tseng2003] 1266 Tseng, CH.J., "The performance of QoS-aware IP multicast 1267 routing protocols", in Networks, Vol:42, No:2, 1268 DOI 10.1002/net.10084, September 2003, 1269 . 1272 [Wang2000] Wang, B. and J.C. Hou, "Multicast routing and its QoS 1273 extension: problems, algorithms, and protocols", in IEEE 1274 Network, Vol:14, Issue:1, Jan/Feb 2000, 1275 DOI 10.1109/65.819168, January 2000, 1276 . 1279 [Wang2013] Wang, Y., Rozhnova, N., Narayanan, A., Oran, D., and I. 1280 Rhee, "An Improved Hop-by-hop Interest Shaper for 1281 Congestion Control in Named Data Networking", in ICN '13: 1282 Proceedings of the 3rd ACM SIGCOMM workshop on 1283 Information-centric networking, August 2013, 1284 DOI 10.1145/2534169.2491233, August 2013, 1285 .
1288 Author's Address 1290 Dave Oran 1291 Network Systems Research and Design 1292 4 Shady Hill Square 1293 Cambridge, MA 02138 1294 United States of America 1296 Email: daveoran@orandom.net