-------------------------------------------------------------------------------- 2 ICNRG D. Oran 3 Internet-Draft Network Systems Research and Design 4 Intended status: Informational October 12, 2019 5 Expires: April 14, 2020 7 Considerations in the development of a QoS Architecture for CCNx-like 8 ICN protocols 9 draft-oran-icnrg-qosarch-02 11 Abstract 13 This is a position paper. It documents the author's personal views 14 on how Quality of Service (QoS) capabilities ought to be accommodated 15 in ICN protocols like CCNx or NDN which employ flow-balanced 16 Interest/Data exchanges and hop-by-hop forwarding state as their 17 fundamental machinery. It argues that such protocols demand a 18 substantially different approach to QoS from that taken in TCP/IP, 19 and proposes specific design patterns to achieve both classification 20 and differentiated QoS treatment on both a flow and aggregate basis. 21 It also considers the effect of caches as a resource in addition to 22 memory, CPU and link bandwidth that should be subject to explicitly 23 unfair resource allocation. The proposed methods are intended to 24 operate purely at the network layer, providing the primitives needed 25 to achieve both transport and higher layer QoS objectives. It 26 explicitly excludes any discussion of Quality of Experience (QoE) 27 which can only be assessed and controlled at the application layer or 28 above. 30 Status of This Memo 32 This Internet-Draft is submitted in full conformance with the 33 provisions of BCP 78 and BCP 79. 35 Internet-Drafts are working documents of the Internet Engineering 36 Task Force (IETF). Note that other groups may also distribute 37 working documents as Internet-Drafts. The list of current Internet- 38 Drafts is at https://datatracker.ietf.org/drafts/current/. 40 Internet-Drafts are draft documents valid for a maximum of six months 41 and may be updated, replaced, or obsoleted by other documents at any 42 time. 
It is inappropriate to use Internet-Drafts as reference 43 material or to cite them other than as "work in progress." 45 This Internet-Draft will expire on April 14, 2020. 47 Copyright Notice 49 Copyright (c) 2019 IETF Trust and the persons identified as the 50 document authors. All rights reserved. 52 This document is subject to BCP 78 and the IETF Trust's Legal 53 Provisions Relating to IETF Documents 54 (https://trustee.ietf.org/license-info) in effect on the date of 55 publication of this document. Please review these documents 56 carefully, as they describe your rights and restrictions with respect 57 to this document. Code Components extracted from this document must 58 include Simplified BSD License text as described in Section 4.e of 59 the Trust Legal Provisions and are provided without warranty as 60 described in the Simplified BSD License. 62 Table of Contents 64 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 65 2. Requirements Language . . . . . . . . . . . . . . . . . . . . 4 66 3. Some background on the nature and properties of Quality of 67 Service in network protocols . . . . . . . . . . . . . . . . 4 68 3.1. Congestion Control basics relevant to ICN . . . . . . . . 5 69 4. What can we control to achieve QoS in ICN? . . . . . . . . . 6 70 5. How does this relate to QoS in TCP/IP? . . . . . . . . . . . 8 71 6. Why is ICN Different? Can we do Better? . . . . . . . . . . . 9 72 6.1. Equivalence class capabilities . . . . . . . . . . . . . 9 73 6.2. Topology interactions with QoS . . . . . . . . . . . . . 10 74 6.3. Specification of QoS treatments . . . . . . . . . . . . . 10 75 6.4. ICN forwarding semantics effect on QoS . . . . . . . . . 11 76 6.5. QoS interactions with Caching . . . . . . . . . . . . . . 11 77 7. A strawman set of principles to guide QoS architecture for 78 ICN . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 79 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 18 80 9. 
Security Considerations . . . . . . . . . . . . . . . . . . . 18 81 10. References . . . . . . . . . . . . . . . . . . . . . . . . . 19 82 10.1. Normative References . . . . . . . . . . . . . . . . . . 19 83 10.2. Informative References . . . . . . . . . . . . . . . . . 19 84 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . 24 86 1. Introduction 88 The TCP/IP protocol suite used on today's Internet has over 30 years 89 of accumulated research and engineering into the provision of Quality 90 of Service machinery, employed with varying success in different 91 environments. ICN protocols like Named Data Networking (NDN [NDN]) 92 and Content-Centric Networking (CCNx [RFC8569],[RFC8609]) have an 93 accumulated 10 years of research and very little deployment. We 94 therefore have the opportunity to either recapitulate the approaches 95 taken with TCP/IP (e.g. IntServ [RFC2998] and Diffserv [RFC2474]) or 96 design a new architecture and associated mechanisms aligned with the 97 properties of ICN protocols which differ substantially from those of 98 TCP/IP. This position paper advocates the latter approach and 99 comprises the author's personal views on how Quality of Service (QoS) 100 capabilities ought to be accommodated in ICN protocols like CCNx or 101 NDN. Specifically, these protocols differ in fundamental ways from 102 TCP/IP. 
The important differences are summarized in the following 103 table: 105 +---------------------------------+---------------------------------+ 106 | TCP/IP | CCNx or NDN | 107 +---------------------------------+---------------------------------+ 108 | Stateless forwarding | Stateful forwarding | 109 | Simple Packets | Object model with optional | 110 | | caching | 111 | Pure datagram model | Request-response model | 112 | Asymmetric Routing | Symmetric Routing | 113 | Independent flow directions | Flow balance | 114 | Flows grouped by IP prefix and | Flows grouped by name prefix | 115 | port | | 116 | End-to-end congestion control | Hop-by-hop congestion control | 117 +---------------------------------+---------------------------------+ 119 Table 1: Differences between IP and ICN relevant to QoS architecture 121 This document proposes specific design patterns to achieve both flow 122 classification and differentiated QoS treatment for ICN on both a 123 flow and aggregate basis. It also considers the effect of caches as 124 a resource in addition to memory, CPU and link bandwidth that should 125 be subject to explicitly unfair resource allocation. The proposed 126 methods are intended to operate purely at the network layer, 127 providing the primitives needed to achieve both transport and higher 128 layer QoS objectives. It does not propose detailed protocol 129 machinery to achieve these goals; it leaves these to supplementary 130 specifications, such as [I-D.moiseenko-icnrg-flowclass]. It 131 explicitly excludes any discussion of Quality of Experience (QoE) 132 which can only be assessed and controlled at the application layer or 133 above. 135 Much of this document is derived from presentations the author has 136 given at ICNRG meetings over the last few years that are available 137 through the IETF datatracker (see, for example [Oran2018QoSslides]). 139 2. 
Requirements Language 141 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 142 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 143 document are to be interpreted as described in RFC 2119 [RFC2119]. 145 3. Some background on the nature and properties of Quality of Service 146 in network protocols 148 Much of this background material is tutorial and can be simply 149 skipped by readers familiar with the long and checkered history of 150 quality of service in packet networks. Other parts of it are 151 polemical yet serve to illuminate the author's personal biases and 152 technical views. 154 All networking systems provide some degree of "quality of service" in 155 that they exhibit non-zero utility when offered traffic to carry. 156 The term therefore is used to describe systems that control the 157 allocation of various resources in order to achieve _managed 158 unfairness_. Absent explicit mechanisms to decide what traffic to be 159 unfair to, most systems try to achieve some form of "fairness" in the 160 allocation of resources, optimizing the overall utility delivered to 161 all demand under the constraint of available resources. From this it 162 should be obvious that you cannot use QoS mechanisms to create or 163 otherwise increase resource capacity! In fact, all known QoS schemes 164 have non-zero overhead and hence may (albeit slightly) decrease the 165 total resources available to carry user traffic. 167 Further, accumulated experience seems to indicate that QoS is helpful 168 in a fairly narrow range of network conditions: 170 o If your resources are lightly loaded, you don't need it, as 171 neither congestive loss nor substantial queueing delay occurs 173 o If your resources are heavily oversubscribed, it doesn't save you.
174 So many users will be unhappy that you are probably not delivering 175 a viable service 177 o Failures can rapidly shift your state from the first above to the 178 second, in which case either: 180 * your QoS machinery cannot respond quickly enough to maintain 181 the advertised service quality continuously, or 183 * resource allocations are sufficiently conservative to result in 184 substantial wasted capacity under non-failure conditions 186 Nevertheless, though not universally deployed, QoS is advantageous at 187 least for some applications and some network environments. Some 188 examples include: 190 o applications with steep utility functions [Shenker2006], such as 191 real-time multimedia 193 o applications with safety-critical operational constraints, such as 194 avionics or industrial automation 196 o dedicated or tightly managed networks whose economics depend on 197 strict adherence to challenging service level agreements (SLAs) 199 Another factor in the design and deployment of QoS is the scalability 200 and scope over which the desired service can be achieved. Here there 201 are two major considerations, one technical, the other economic/ 202 political: 204 o Some signaled QoS schemes, such as RSVP [RFC2205], maintain state 205 in routers for each flow, which scales linearly with the number of 206 flows. For core routers through which pass millions to billions 207 of flows, the memory required is infeasible to provide. 209 o The Internet is comprised of many minimally cooperating autonomous 210 systems [AS]. There are practically no successful examples of QoS 211 deployments crossing the AS boundaries of multiple service 212 providers. This in almost all cases limits the applicability of 213 QoS capabilities to be intra-domain. 215 Finally, the relationship between QoS and either accounting or 216 billing is murky. Some schemes can accurately account for resource 217 consumption and ascertain to which user to allocate the usage. 218 Others cannot. 
While the choice of mechanism may have important 219 practical economic and political consequences for cost and workable 220 business models, this document considers none of those things and 221 discusses QoS only in the context of providing managed unfairness. 223 Some further background on congestion control for ICN is below. 225 3.1. Congestion Control basics relevant to ICN 227 Congestion control is necessary in any packet network that 228 multiplexes traffic among multiple sources and destinations in order 229 to: 231 1. Prevent collapse of utility due to overload, where the total 232 offered service declines as load increases, perhaps 233 precipitously, rather than increasing or remaining flat. 235 2. Avoid starvation of some traffic due to excessive demand by other 236 traffic. 238 3. Beyond the basic protections against starvation, achieve 239 "fairness" among competing traffic. Two common objective 240 functions are [minmaxfairness] and [proportionalfairness], both of 241 which have been implemented and deployed successfully on packet 242 networks for many years. 244 Before moving on to QoS, it is useful to consider how congestion 245 control works in NDN or CCNx. Unlike the IP protocol family, which 246 relies exclusively on end-to-end congestion control (e.g. 247 TCP[RFC0793], DCCP[RFC4340], SCTP[RFC4960], 248 QUIC[I-D.ietf-quic-transport]), CCNx and NDN can employ hop-by-hop 249 congestion control. There is per-Interest/Data state at every hop of 250 the path and therefore for each outstanding Interest, bandwidth for 251 data returning on the inverse path can be allocated. In current 252 designs, this allocation is often done using Interest counting. By 253 accepting one Interest packet from a downstream node, a forwarder 254 implicitly provides a guarantee (either hard or soft) that there is sufficient 255 bandwidth on the inverse direction of the link to send back one Data 256 packet.
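As a rough illustration of the Interest-counting idea (a sketch only; the class and method names here are invented, not drawn from any deployed forwarder), a per-face admission check might reserve one unit of reverse-link capacity per accepted Interest and release it when the matching Data is forwarded:

```python
class Face:
    """Toy per-face Interest counter for hop-by-hop congestion control.

    Accepting an Interest reserves reverse-link capacity for one
    returning Data packet; the reservation is released when that Data
    is sent back downstream (or, in a real design, when the Interest
    times out)."""

    def __init__(self, max_outstanding: int):
        self.max_outstanding = max_outstanding  # reverse-link budget, in packets
        self.outstanding = 0                    # Interests accepted, Data not yet returned

    def try_accept_interest(self) -> bool:
        # Admit the Interest only if reverse-link capacity remains;
        # otherwise the forwarder may queue, NACK, or drop it.
        if self.outstanding < self.max_outstanding:
            self.outstanding += 1
            return True
        return False

    def on_data_returned(self) -> None:
        # The matching Data consumed its reserved reverse-link slot.
        if self.outstanding > 0:
            self.outstanding -= 1
```

A real scheme would also expire reservations for Interests whose Data never returns, and might count bytes rather than packets; see the schemes cited below for concrete designs.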
A number of congestion control schemes have been developed 257 for ICN that operate in this fashion, for example [Wang2013], 258 [Mahdian2016], [Song2018], [Carofiglio2012]. Other schemes, like 259 [Schneider2016], neither count nor police Interests, but instead 260 monitor queues using AQM (active queue management) to mark returning 261 Data packets that have experienced congestion. This latter class of 262 schemes is similar to those used on IP in the sense that they depend 263 on consumers adequately reducing their rate of Interest injection to 264 avoid Data packet drops due to buffer overflow in forwarders. The 265 former class of schemes is (arguably) more robust against mis- 266 behavior by consumers. 268 4. What can we control to achieve QoS in ICN? 270 QoS is achieved through managed unfairness in the allocation of 271 resources in network elements, particularly in the routers doing 272 forwarding of ICN packets. So, a first order question is what 273 resources need to be allocated, and how to ascertain which traffic 274 gets what allocations. In the case of CCNx or NDN the important 275 network element resources are: 277 +-----------------------------+-------------------------------------+ 278 | Resource | ICN Usage | 279 +-----------------------------+-------------------------------------+ 280 | Communication Link capacity | buffering for queued packets | 281 | Content Store capacity | to hold cached data | 282 | Forwarder memory | for the Pending Interest Table | 283 | | (PIT) | 284 | Compute capacity | for forwarding packets, including | 285 | | the cost of Forwarding Information | 286 | | Base (FIB) lookups. | 287 +-----------------------------+-------------------------------------+ 289 Table 2: ICN-related Network Element Resources 291 For these resources, any QoS scheme has to specify two things: 293 1. How do you create _equivalence classes_ (a.k.a. flows) of traffic 294 to which different QoS treatments are applied? 296 2.
What are the possible treatments and how are those mapped to the 297 resource allocation algorithms? 299 Two critical facts of life come into play when designing a QoS 300 scheme. First, the number of equivalence classes that can be 301 simultaneously tracked in a network element is bounded by both memory 302 and processing capacity to do the necessary lookups. One can allow 303 very fine-grained equivalence classes, but not be able to employ them 304 globally because of scaling limits of core routers. That means it is 305 wise to either restrict the range of equivalence classes, or allow 306 them to be _aggregated_, trading off accuracy in policing traffic 307 against ability to scale. 309 Second, the flexibility of expressible treatments can be tightly 310 constrained by both protocol encoding and algorithmic limitations. 311 The ability to encode the treatment requests in the protocol can be 312 limited (as it is for IP - there are only 6 of the TOS bits available 313 for Diffserv treatments), but equally or more important is whether there 314 are practical traffic policing, queuing, and pacing algorithms that 315 can be combined to support a rich set of QoS treatments. 317 Taken together, the two considerations above mean that the set of 318 expressible QoS treatments can easily be substantially richer than what 319 can be supported in practice with the available number of queues on real 320 network interfaces or the amount of per-packet computation that can be 321 spent to enqueue or dequeue a packet. 323 5. How does this relate to QoS in TCP/IP?
325 TCP/IP has fewer resource types to manage than ICN, and in some cases 326 the allocation methods are simpler, as shown in the following table: 328 +-----------------------------+-------------+-----------------------+ 329 | Resource | IP Relevant | TCP/IP Usage | 330 +-----------------------------+-------------+-----------------------+ 331 | Communication Link capacity | YES | buffering for queued | 332 | | | packets | 333 | Content Store capacity | NO | no content store in | 334 | | | IP | 335 | Forwarder memory | MAYBE | not needed for | 336 | | | output-buffered | 337 | | | designs | 338 | Compute capacity | YES | for forwarding | 339 | | | packets, but arguably | 340 | | | much cheaper than ICN | 341 +-----------------------------+-------------+-----------------------+ 343 Table 3: IP-related Network Element Resources 345 For these resources, IP has specified three fundamental things, as 346 shown in the following table: 348 +----------------+--------------------------------------------------+ 349 | What | How | 350 +----------------+--------------------------------------------------+ 351 | *Equivalence | subset+prefix match on IP 5-tuple | 352 | classes* | {SA,DA,SP,DP,PT} | 353 | *Diffserv | (very) small number of globally-agreed traffic | 354 | treatments* | classes | 355 | *Intserv | per-flow parameterized _Controlled Load_ and | 356 | treatments* | _Guaranteed_ service classes | 357 +----------------+--------------------------------------------------+ 359 Table 4: Fundamental protocol elements to achieve QoS for TCP/IP 361 Equivalence classes for IP can be pairwise, by matching against both 362 source and destination address+port; pure group, using only 363 destination address+port; or source-specific multicast, with source 364 address+port and destination multicast address+port. 366 With Intserv, the signaling protocol RSVP [RFC2205] carries two data 367 structures, the FLOWSPEC and the TSPEC.
The former fulfills the 368 requirement to identify the equivalence class to which the QoS being 369 signaled applies. The latter comprises the desired QoS treatment 370 along with a description of the dynamic character of the traffic 371 (e.g. average bandwidth and delay, peak bandwidth, etc.). Both of 372 these encounter substantial scaling limits, which has meant that 373 Intserv has historically been limited to confined topologies, and/or 374 high-value usages, like traffic engineering. 376 With Diffserv, the protocol encoding (6 bits in the TOS field of the 377 IP header) artificially limits the number of classes one can specify. 378 These are documented in [RFC4594]. Nonetheless, when used with fine- 379 grained equivalence classes, one still runs into limits on the number 380 of queues required. 382 6. Why is ICN Different? Can we do Better? 384 While one could adopt an approach to QoS mirroring the extensive 385 experience with TCP/IP, this would, in the author's view, be a 386 mistake. The implementation and deployment of QoS in IP networks has 387 been spotty at best. There are of course economic and political 388 reasons as well as technical reasons for these mixed results, but 389 there are several architectural choices in ICN that make it a 390 potentially much better protocol base to enhance with QoS machinery. 391 This section discusses those differences and their consequences. 393 6.1. Equivalence class capabilities 395 First and foremost, hierarchical names are a much richer basis for 396 specifying equivalence classes than IP 5-tuples. The IP address (or 397 prefix) can only separate traffic by topology to the granularity of 398 hosts, and not express actual computational instances nor sets of 399 data. Ports give some degree of per-instance demultiplexing, but 400 this tends to be both coarse and ephemeral, while confounding the 401 demultiplexing function with the assignment of QoS treatments to 402 particular subsets of the data. 
Some degree of finer granularity is 403 possible with IPv6 by exploiting the ability to use up to 64 bits of 404 address for classifying traffic. In fact, the hICN project 405 ([I-D.muscariello-intarea-hicn]), while adopting the request-response 406 model of CCNx, uses IPv6 addresses as the available namespace, and 407 IPv6 packets (plus "fake" TCP headers) as the wire format. 409 Nonetheless, the flexibility of tokenized, variable length, 410 hierarchical names allows one to directly associate classes of 411 traffic for QoS purposes with the structure of an application 412 namespace. The classification can be as coarse or fine-grained as 413 desired by the application. While not _always_ the case, there is 414 typically a straightforward association between how objects are 415 named, and how they are grouped together for common treatment. 416 Examples abound; a number can be conveniently found in 417 [I-D.moiseenko-icnrg-flowclass]. 419 6.2. Topology interactions with QoS 421 In ICN, QoS is not pre-bound to topology since names are non- 422 topological, unlike unicast IP addresses. This allows QoS to be 423 applied to multi-destination and multi-path environments in a 424 straightforward manner, rather than requiring either multicast with 425 coarse class-based scheduling or complex signaling like that in RSVP- 426 TE [RFC3209] that is needed to make point-to-multipoint MPLS work. 428 Because of IP's stateless forwarding model, complicated by the 429 ubiquity of asymmetric routes, any flow-based QoS requires state that 430 is decoupled from the actual arrival of traffic and hence must be 431 maintained, at least as soft-state, even during quiescent periods. 432 Intserv, for example, requires flow signaling with state O(#flows). 433 ICN, even in the worst case, requires state O(#active Interest/Data 434 exchanges), since state can be instantiated on arrival of an 435 Interest, and removed lazily once the data has been returned. 437 6.3.
Specification of QoS treatments 439 Unlike Intserv, Diffserv eschews signaling in favor of class-based 440 configuration of resources and queues in network elements. However, 441 Diffserv limits traffic treatments to a few bits taken from the ToS 442 field of IP. No such wire encoding limitations exist for NDN or 443 CCNx, as the protocol is completely TLV-based, and one (or even more 444 than one) new field can be easily defined to carry QoS treatment 445 information. 447 Therefore, there are greenfield possibilities for more powerful QoS 448 treatment options in ICN. For example, IP has no way to express a 449 QoS treatment like "try hard to deliver reliably, even at the expense 450 of delay or bandwidth". Such a QoS treatment for ICN could invoke 451 native ICN mechanisms, none of which are present in IP, such as: 453 o In-network retransmission in response to hop-by-hop errors 454 returned from upstream forwarders 456 o Trying multiple paths to multiple content sources either in 457 parallel or serially 459 o Higher precedence for short-term caching to recover from 460 downstream errors 462 Such mechanisms are typically described in NDN and CCNx as 463 _forwarding strategies_. However, little or no guidance is given for 464 what application actions or protocol machinery is used to decide 465 which forwarding strategy to use for the Interests that arrive at a 466 forwarder. See [BenAbraham2018] for an investigation of these 467 issues. Associating forwarding strategies with the equivalence 468 classes and QoS treatments directly can make them more accessible and 469 useful to implement and deploy. 471 Stateless forwarding and asymmetric routing in IP limit available 472 state/feedback to manage link resources. In contrast, NDN or CCNx 473 forwarding allows all link resource allocation to occur as part of 474 Interest forwarding, potentially simplifying things considerably.
475 For example, with symmetric routing, producers have no control over 476 the paths their data packets traverse, and hence any QoS treatments 477 intended to influence routing paths from producer to consumer will 478 have no effect. 480 One complication in the handling of ICN QoS treatments has no analog 481 in IP and hence is worth mentioning. CCNx and NDN both perform _Interest 482 aggregation_ (See Section 2.3.2 of [RFC8569]). If an Interest 483 arrives matching an existing PIT entry, but with a different QoS 484 treatment from an Interest already forwarded, it can be tricky to 485 decide whether or not to aggregate the Interest or forward it, and 486 how to keep track of the differing QoS treatments for the two 487 Interests. Exploration of the details surrounding these situations 488 is beyond the scope of this document; further discussion can be found 489 for the general case of flow balance and congestion control in 490 [I-D.oran-icnrg-flowbalance], and specifically for QoS treatments in 491 [I-D.anilj-icnrg-dnc-qos-icn]. 493 6.4. ICN forwarding semantics effect on QoS 495 IP has three forwarding semantics, with different QoS needs (Unicast, 496 Anycast, Multicast). ICN has a single forwarding semantic, so any 497 QoS machinery can be uniformly applied across any request/response 498 invocation, whether it employs dynamic destination routing, multi- 499 destination parallel requests, or even localized flooding (e.g. 500 directly on L2 multicast mechanisms). Additionally, the pull-based 501 model of ICN avoids a number of thorny multicast QoS problems that IP 502 has ([Wang2000], [RFC3170], [Tseng2003]). 504 The multi-destination/multi-path forwarding model in ICN changes 505 resource allocation needs in a fairly deep way. IP treats all 506 endpoints as open-loop packet sources, whereas NDN and CCNx have 507 strong asymmetry between producers and consumers as packet sources. 509 6.5.
QoS interactions with Caching 511 IP has no caching in routers, whereas ICN needs ways to allocate 512 cache resources. Treatments to control caching operation are 513 unlikely to look much like the treatments used to control link 514 resources. NDN and CCNx already have useful cache control directives 515 associated with Data messages. The CCNx controls include: 517 ExpiryTime: time after which a cached Content Object is considered 518 expired and MUST no longer be used to respond to an Interest from 519 a cache. 521 Recommended Cache Time: time after which the publisher considers the 522 Content Object to be of low value to cache. 524 See [RFC8569] for the formal definitions. 526 ICN flow classifiers, such as those in 527 [I-D.moiseenko-icnrg-flowclass], can be used to achieve soft or hard 528 partitioning of cache resources in the content store of an ICN 529 forwarder. For example, cached content for a given equivalence class 530 can be considered _fate shared_ in a cache whereby objects from the 531 same equivalence class are purged as a group rather than 532 individually. This can recover cache space more quickly and at lower 533 overhead than pure per-object replacement. In addition, since the 534 forwarder remembers the QoS treatment for each pending Interest in 535 its PIT, the above cache controls can be augmented by policy to 536 prefer retention of cached content for some equivalence classes as 537 part of the cache replacement algorithm. 539 7. A strawman set of principles to guide QoS architecture for ICN 541 Based on the observations made in the earlier sections, this summary 542 section captures the author's ideas for clear and actionable 543 architectural principles for how to incorporate QoS machinery into 544 ICN protocols like NDN and CCNx. Hopefully, they can guide further 545 work and focus effort on portions of the giant design space for QoS 546 that have the best tradeoffs in terms of flexibility, simplicity, and 547 deployability.
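Before turning to the individual principles, the fate-shared cache partitioning sketched in Section 6.5 can be illustrated with a toy content store (invented names; a sketch of the idea, not a prescribed mechanism) that groups cached Data by equivalence class so an entire class can be purged in one operation:

```python
from collections import defaultdict

class ContentStore:
    """Toy content store with fate-shared eviction: cached Data
    objects are indexed by equivalence class, so reclaiming space can
    purge a whole class at once instead of evicting object by
    object."""

    def __init__(self):
        self.by_class = defaultdict(dict)  # equivalence class -> {name: data}

    def insert(self, eq_class: str, name: str, data: bytes) -> None:
        self.by_class[eq_class][name] = data

    def lookup(self, eq_class: str, name: str):
        # Returns the cached Data, or None on a content-store miss.
        return self.by_class.get(eq_class, {}).get(name)

    def purge_class(self, eq_class: str) -> int:
        # Fate-shared eviction: drop every object in the class
        # together; returns the number of objects reclaimed.
        return len(self.by_class.pop(eq_class, {}))
```

A real content store would look objects up by name alone and honor directives like ExpiryTime; the point here is only that class-granularity bookkeeping makes group purging cheap.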
549 *Define equivalence classes using the name hierarchy rather than 550 creating an independent traffic class definition*. This directly 551 associates the specification of equivalence classes of traffic with 552 the structure of the application namespace. It can allow 553 hierarchical decomposition of equivalence classes in a natural way 554 because of the way hierarchical ICN names are constructed. Two 555 practical mechanisms are presented in [I-D.moiseenko-icnrg-flowclass] 556 with different tradeoffs between security and the ability to 557 aggregate flows. Either prefix-based (EC3) or explicit name 558 component based (ECNT) or both could be adopted as part of the 559 QoS architecture for defining equivalence classes. 561 *Put consumers in control of Link and Forwarding resource 562 allocation*. Do all link buffering and forwarding (both memory and 563 CPU) resource allocations based on Interest arrivals. This is 564 attractive because it provides early congestion feedback to 565 consumers, and allows scheduling the reverse link direction ahead of 566 time for carrying the matching data. It makes enforcement of QoS 567 treatments a single-ended rather than a double-ended problem and can 568 avoid wasting resources on fetching data that will wind up dropped 569 when it arrives at a bottleneck link. 571 *Allow producers to influence the allocation of cache resources*. 572 Producers want to affect caching decisions in order to: 574 o Shed load by having Interests served by content stores in 575 forwarders before reaching the producer itself. 577 o Survive transient outages of either the producer or links close to 578 the producer. 580 For caching to be effective, individual Data objects in an 581 equivalence class need to have similar treatment; otherwise well- 582 known cache thrashing pathologies due to self-interference emerge. 583 Producers have the most direct control over caching policies through 584 the caching directives in Data messages.
It therefore makes sense to 585 put the producer, rather than the consumer or network operator, in 586 charge of specifying these equivalence classes. 588 See [I-D.moiseenko-icnrg-flowclass] for specific mechanisms to 589 achieve this. 591 *Allow consumers to influence the allocation of cache resources*. 592 Consumers want to affect caching decisions in order to: 594 o Reduce latency for retrieving data. 596 o Survive transient outages of either a producer or links close to 597 the consumer. 599 Consumers can have indirect control over caching by specifying QoS 600 treatments in their Interests. Consider the following potential QoS 601 treatments by consumers that can drive caching policies: 603 o A QoS treatment requesting better robustness against transient 604 disconnection can be used by a forwarder close to the consumer (or 605 downstream of an unreliable link) to preferentially cache the 606 corresponding data. 608 o Conversely, a QoS treatment, together with or in addition to a 609 request for short latency, indicating that new data will be 610 requested soon enough that caching the current data being 611 requested would be ineffective, and hence that forwarders should only pay 612 attention to the caching preferences of the producer. 614 o A QoS treatment indicating a mobile consumer likely to incur a 615 mobility event within an RTT (or a few RTTs). Such a treatment 616 would allow a mobile network operator to preferentially cache the 617 data at a forwarder positioned at a _join point_ or _rendezvous 618 point_ of their topology. 620 *Give network operators the ability to match customer SLAs to cache 621 resource availability*. Network operators, whether closely tied 622 administratively to producer or consumer, or constituting an 623 independent transit administration, provide the storage resources in 624 the ICN forwarders. Therefore, they are the ultimate arbiters of how 625 the cache resources are managed.
In addition to any local policies 626 they may enforce, the cache behavior from the QoS standpoint emerges 627 from how the producer-specified equivalence classes map onto cache 628 space availability, including whether cache entries are treated 629 individually or fate-shared. Forwarders also determine how the 630 consumer-specified QoS treatments map to the precedence used for 631 retaining Data objects in the cache. 633 Besides utilizing cache resources to meet the QoS goals of individual 634 producers and consumers, network operators also want to manage their 635 cache resources in order to: 637 o Ameliorate congestion hotspots by reducing load converging on 638 producers they host on their network. 640 o Improve Interest satisfaction rates by utilizing caches as short- 641 term retransmission buffers to recover from link errors or 642 outages. 644 o Improve both latency and reliability in environments where 645 consumers move in the operator's topology. 647 *Re-think how to specify traffic treatments - don't just copy 648 Diffserv*. Some of the Diffserv classes may form a good starting 649 point, as their mapping onto queuing algorithms for managing link 650 buffering is well understood. However, Diffserv alone does not 651 allow one to express latency versus reliability tradeoffs or other 652 useful QoS treatments. Nor does it permit "TSPEC"-style traffic 653 descriptions as are allowed in a signaled QoS scheme. Here are some 654 examples: 656 o A "burst" treatment, where an initial Interest gives an aggregate 657 data size to request allocation of link capacity for a large burst 658 of Interest/Data exchanges. The Interest can be rejected at any 659 hop if the resources are not available. Such a treatment can also 660 accommodate Data implosion produced by the discovery procedures of 661 management protocols like [I-D.irtf-icnrg-ccninfo].
663 o A "reliable" treatment, which affects preference for allocation of 664 PIT space for the Interest and Content Store space for the data in 665 order to improve the robustness of IoT data delivery in 666 constrained environments, as described in 667 [I-D.gundogan-icnrg-iotqos]. 669 o A "search" treatment, which, within the specified Interest 670 Lifetime, tries many paths, either in parallel or serially, to 671 potentially many content sources, to maximize the probability that 672 the requested item will be found. This is done at the expense of 673 the extra bandwidth of both forwarding Interests and receiving 674 multiple responses upstream of an aggregation point. The 675 treatment can encode a value expressing tradeoffs like breadth- 676 first versus depth-first search, and bounds on the total resource 677 expenditure. Such a treatment would be useful for instrumentation 678 protocols like [I-D.mastorakis-icnrg-icntraceroute]. 680 As an aside, loose latency control can be achieved by bounding 681 Interest Lifetime, as long as it is not also used as an application 682 mechanism to provide subscriptions or establish path traces for 683 producer mobility. See [Krol2018] for a discussion of the network 684 versus application timescale issues in ICN protocols. 686 *What about the richer QoS semantics available with INTserv-like 687 traffic control?*. Basic QoS treatments such as those summarized 688 above may not be adequate to cover the whole range of application 689 utility functions and deployment environments we expect for ICN. 690 While it is true that one does not necessarily need a separate 691 signaling protocol like RSVP given the state carried in the ICN data 692 plane by forwarders, there are some potentially important 693 capabilities not provided by just simple QoS treatments applied to 694 per-Interest/Data exchanges.
INTserv's richer QoS capabilities may 695 be of value, especially if they can be provided in ICN at lower 696 complexity and protocol overhead than INTServ+RSVP. 698 There are three key capabilities missing from Diffserv-like QoS 699 treatments, no matter how sophisticated they may be in describing the 700 desired treatment for a given equivalence class of traffic. INTserv- 701 like QoS provides all of these: 703 1. The ability to *describe traffic flows* in a mathematically 704 meaningful way. This is done through parameters like average 705 rate, peak rate, and maximum burst size. The parameters are 706 encapsulated in a data structure called a "TSPEC" which can be 707 placed in whatever protocol needs the information (in the case of 708 TCP/IP INTserv, this is RSVP). 710 2. The ability to perform *admission control*, where the element 711 requesting the QoS treatment can know _before_ introducing 712 traffic whether the network elements have agreed to provide the 713 requested traffic treatment. An important side-effect of 714 providing this assurance is that the network elements install 715 state that allows the forwarding and queuing machinery to police 716 and shape the traffic in a way that provides a sufficient degree 717 of _isolation_ from the dynamic behavior of other traffic. 718 Depending on the admission control mechanism, it may or may not 719 be possible to explicitly release that state when the application 720 no longer needs the QoS treatment. 722 3. The permissible *degree of divergence* in the actual traffic 723 handling from the requested handling. INTServ provided two 724 choices here, the _controlled load_ service and the _guaranteed_ 725 service. The former allows stochastic deviation equivalent to 726 what one would experience on an unloaded path of a packet 727 network. The latter conforms to the TSPEC deterministically, at 728 the obvious expense of demanding extremely conservative resource 729 allocation.
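As a concrete, heavily simplified illustration of capability 1, the sketch below checks packets against a token-bucket traffic description of the kind a TSPEC carries. The class and parameter names are hypothetical choices for this sketch, not the RSVP TSPEC encoding, and the peak rate is ignored for simplicity:

```python
class TSpec:
    """Token-bucket traffic description in the spirit of an INTserv TSPEC:
    average rate r (bytes/s), bucket depth b (bytes), peak rate p (unused
    in this simplified sketch)."""
    def __init__(self, r, b, p):
        self.r, self.b, self.p = r, b, p

class Policer:
    """Checks packet conformance against a TSpec (illustrative only)."""
    def __init__(self, tspec):
        self.t = tspec
        self.tokens = tspec.b   # bucket starts full
        self.last = 0.0         # time of the previous check, in seconds

    def conforms(self, size, now):
        # Refill at rate r since the last check, capped at depth b; a
        # packet conforms if enough tokens have accumulated to cover it.
        self.tokens = min(self.t.b, self.tokens + (now - self.last) * self.t.r)
        self.last = now
        if size <= self.tokens:
            self.tokens -= size
            return True
        return False

p = Policer(TSpec(r=1000, b=1500, p=10000))
ok1 = p.conforms(1500, now=0.0)   # bucket initially full: conforms
ok2 = p.conforms(1500, now=0.5)   # only 500 bytes refilled: does not conform
ok3 = p.conforms(1500, now=1.5)   # 500 + 1000 bytes available: conforms
```

A forwarder holding such state per equivalence class could police returning Data (or arriving Interests) without any separate signaling protocol, which is the point made in the surrounding discussion.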
731 Given the limited applicability of these capabilities in today's 732 Internet, the author does not take any position as to whether any of 733 these INTserv-like capabilities are needed for ICN to be successful. 734 However, a few things seem important to consider. The following 735 paragraphs speculate about the consequences to the CCNx or NDN 736 protocol architectures of incorporating these features. 738 Superficially, it would be quite straightforward to accommodate 739 INTserv-equivalent traffic descriptions in CCNx or NDN. One could 740 define a new TLV for the Interest message to carry a TSPEC. A 741 forwarder encountering this, together with a QoS treatment request 742 (e.g. as proposed in Section 6.3), could associate the traffic 743 specification with the corresponding equivalence class derived from 744 the name in the Interest. This would allow the forwarder to create 745 state that not only would apply to the returning Data for that 746 Interest when being queued on the downstream interface, but be 747 maintained as soft state across multiple Interest/Data exchanges to 748 drive policing and shaping algorithms at per-flow granularity. The 749 cost in Interest message overhead would be modest; however, the 750 complications associated with managing different traffic 751 specifications in different Interests for the same equivalence class 752 might be substantial. Of course, all the scalability considerations 753 with maintaining per-flow state also come into play. 755 It would be equally straightforward to have a way to 756 express the degree of divergence capability that INTserv provides 757 through its controlled load and guaranteed service definitions. This 758 could either be packaged with the traffic specification or 759 encoded separately. 761 In contrast to the above, performing admission control for ICN flows 762 is likely to be just as heavy-weight as it turned out to be with IP 763 using RSVP.
The dynamic multi-path, multi-destination forwarding 764 model of ICN makes performing admission control particularly tricky. 765 Just to illustrate: 767 o Forwarding paths are not confined to single paths (or a few ECMP 768 equivalent paths) as they are with IP, making it difficult to know 769 where to install state in advance of the arrival of an Interest to 770 forward. 772 o As with point-to-multipoint complexities when using RSVP for MPLS- 773 TE, state has to be installed to multiple producers over multiple 774 paths before an admission control algorithm can commit the 775 resources and say "yes" to a consumer needing admission control 776 capabilities. 778 o Knowing when to remove admission control state is difficult in the 779 absence of a heavy-weight resource reservation protocol. Soft 780 state timeout may or may not be an adequate answer. 782 Despite the challenges above, it may be possible to craft an 783 admission control scheme for ICN that achieves the desired QoS goals 784 of applications without the invention and deployment of a complex 785 separate admission control signaling protocol. There have been 786 designs in earlier network architectures that were capable of 787 performing admission control piggybacked on packet transmission. 789 (The earliest example the author is aware of is [Autonet]). 791 Such a scheme might have the following general shape *(warning: 792 serious hand waving follows!)*: 794 o In addition to a QoS treatment and a traffic specification, an 795 Interest requesting admission for the corresponding equivalence 796 class would so indicate via a new TLV. It would also need to: (a) 797 indicate an expiration time after which any reserved resources can 798 be released, and (b) indicate that caches be bypassed, so that the 799 admission control request arrives at a bona fide producer (or 800 Repo).
802 o Each forwarder processing the Interest would check for resource 803 availability and, if resources are not available or the requested 804 service is not feasible, reject the Interest with an admission 805 control failure. If resources are available, the forwarder would 806 record the traffic specification as described above and forward the 807 Interest. 809 o If the Interest successfully arrives at a producer, the producer 810 returns the requested Data. 812 o Each on-path forwarder, on receiving the matching Data message, if 813 the resources are still available, does the actual allocation and 814 marks the admission control TLV as "provisionally approved". 815 Conversely, if the resource reservation fails, the admission 816 control is marked "failed", although the Data is still passed 817 downstream. 819 o Upon the Data message arriving, the consumer knows whether admission 820 succeeded, and subsequent Interests can rely on the QoS 821 state being in place until either some failure occurs, or a 822 topology or other forwarding change alters the forwarding path. 823 To deal with this, additional machinery is needed to ensure that 824 subsequent Interests for an admitted flow either follow that path 825 or an error is reported. One possibility (also useful in many 826 other contexts) is to employ a _Path Steering_ mechanism, such as 827 the one described in [Moiseenko2017]. 829 8. IANA Considerations 831 This document does not require any IANA actions. 833 9. Security Considerations 835 There are a few ways in which QoS for ICN interacts with security and 836 privacy issues. Since QoS addresses relationships among traffic 837 rather than the inherent characteristics of traffic, it neither 838 enhances nor degrades the security and privacy properties of the data 839 being carried, as long as the machinery does not alter or otherwise 840 compromise the basic security properties of the associated protocols.
841 The QoS approaches advocated here for ICN can serve to amplify 842 existing threats to network traffic, however: 844 o An attacker able to manipulate the QoS treatments of traffic can 845 mount a more focused (and potentially more effective) denial-of- 846 service attack by degrading the performance of traffic the attacker 847 is targeting. Since the architecture here assumes QoS treatments 848 are manipulable hop-by-hop, any on-path adversary can wreak havoc. 849 Note, however, that in basic ICN an on-path attacker can do this 850 and more by dropping, delaying, or mis-routing traffic, independent 851 of any particular QoS machinery in use. 853 o By explicitly revealing equivalence classes of traffic via either 854 names or other fields in packets, an attacker has yet one more 855 handle to use to discover linkability of multiple requests. 857 10. References 859 10.1. Normative References 861 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 862 Requirement Levels", BCP 14, RFC 2119, 863 DOI 10.17487/RFC2119, March 1997, 864 . 866 [RFC8569] Mosko, M., Solis, I., and C. Wood, "Content-Centric 867 Networking (CCNx) Semantics", RFC 8569, 868 DOI 10.17487/RFC8569, July 2019, 869 . 871 [RFC8609] Mosko, M., Solis, I., and C. Wood, "Content-Centric 872 Networking (CCNx) Messages in TLV Format", RFC 8609, 873 DOI 10.17487/RFC8609, July 2019, 874 . 876 10.2. Informative References 878 [AS] "Autonomous System (Internet)", no date, 879 . 882 [Autonet] Schroeder, M., Birrell, A., Burrows, M., Murray, H., 883 Needham, R., Rodeheffer, T., Satterthwaite, E., and C. 884 Thacker, "Autonet: a High-speed, Self-configuring Local 885 Area Network Using Point-to-point Links", SRC Research 886 Reports 59, April 1990, 887 . 890 [BenAbraham2018] 891 Ben Abraham, H., Parwatikar, J., DeHart, J., Dresher, A., 892 and P.
Crowley, "Decoupling Information and Connectivity 893 via Information-Centric Transport, in 5th ACM Conference 894 on Information-Centric Networking (ICN '18), September 895 21-23, 2018, Boston, MA, USA", 896 DOI 10.1145/3267955.3267963, September 2018, 897 . 900 [Carofiglio2012] 901 Carofiglio, G., Gallo, M., and L. Muscariello, "Joint hop- 902 by-hop and receiver-driven interest control protocol for 903 content-centric networks, in ICN Workshop at SIGcomm 904 2012", DOI 10.1145/2377677.2377772, 2012, 905 . 908 [I-D.anilj-icnrg-dnc-qos-icn] 909 Jangam, A., Suthar, P., and M. Stolic, "QoS Treatments in 910 ICN using Disaggregated Name Components", draft-anilj- 911 icnrg-dnc-qos-icn-01 (work in progress), September 2019. 913 [I-D.gundogan-icnrg-iotqos] 914 Gundogan, C., Schmidt, T., Waehlisch, M., Frey, M., Shzu- 915 Juraschek, F., and J. Pfender, "Quality of Service for ICN 916 in the IoT", draft-gundogan-icnrg-iotqos-01 (work in 917 progress), July 2019. 919 [I-D.ietf-quic-transport] 920 Iyengar, J. and M. Thomson, "QUIC: A UDP-Based Multiplexed 921 and Secure Transport", draft-ietf-quic-transport-23 (work 922 in progress), September 2019. 924 [I-D.irtf-icnrg-ccninfo] 925 Asaeda, H., Ooka, A., and X. Shao, "CCNinfo: Discovering 926 Content and Network Information in Content-Centric 927 Networks", draft-irtf-icnrg-ccninfo-02 (work in progress), 928 July 2019. 930 [I-D.mastorakis-icnrg-icntraceroute] 931 Mastorakis, S., Gibson, J., Moiseenko, I., Droms, R., and 932 D. Oran, "ICN Traceroute Protocol Specification", draft- 933 mastorakis-icnrg-icntraceroute-05 (work in progress), 934 August 2019. 936 [I-D.moiseenko-icnrg-flowclass] 937 Moiseenko, I. and D. Oran, "Flow Classification in 938 Information Centric Networking", draft-moiseenko-icnrg- 939 flowclass-04 (work in progress), July 2019. 941 [I-D.muscariello-intarea-hicn] 942 Muscariello, L., Carofiglio, G., Auge, J., and M.
943 Papalini, "Hybrid Information-Centric Networking", draft- 944 muscariello-intarea-hicn-02 (work in progress), June 2019. 946 [I-D.oran-icnrg-flowbalance] 947 Oran, D., "Maintaining CCNx or NDN flow balance with 948 highly variable data object sizes", draft-oran-icnrg- 949 flowbalance-01 (work in progress), August 2019. 951 [Krol2018] 952 Krol, M., Habak, K., Oran, D., Kutscher, D., and I. 953 Psaras, "RICE: Remote Method Invocation in ICN, in 954 Proceedings of the 5th ACM Conference on Information- 955 Centric Networking - ICN '18", 956 DOI 10.1145/3267955.3267956, September 2018, 957 . 960 [Mahdian2016] 961 Mahdian, M., Arianfar, S., Gibson, J., and D. Oran, 962 "MIRCC: Multipath-aware ICN Rate-based Congestion Control, 963 in Proceedings of the 3rd ACM Conference on Information- 964 Centric Networking", DOI 10.1145/2984356.2984365, 2016, 965 . 968 [minmaxfairness] 969 "Max-min Fairness", no date, 970 . 972 [Moiseenko2017] 973 Moiseenko, I. and D. Oran, "Path Switching in Content 974 Centric and Named Data Networks, in 4th ACM Conference on 975 Information-Centric Networking (ICN 2017)", 976 DOI 10.1145/3125719.3125721, September 2017, 977 . 980 [NDN] "Named Data Networking", various, 981 . 983 [Oran2018QoSslides] 984 Oran, D., "Thoughts on Quality of Service for NDN/CCN- 985 style ICN protocol architectures, presented at ICNRG 986 Interim Meeting, Cambridge MA", September 2018, 987 . 991 [proportionalfairness] 992 "Proportionally Fair", no date, 993 . 995 [RFC0793] Postel, J., "Transmission Control Protocol", STD 7, 996 RFC 793, DOI 10.17487/RFC0793, September 1981, 997 . 999 [RFC2205] Braden, R., Ed., Zhang, L., Berson, S., Herzog, S., and S. 1000 Jamin, "Resource ReSerVation Protocol (RSVP) -- Version 1 1001 Functional Specification", RFC 2205, DOI 10.17487/RFC2205, 1002 September 1997, . 1004 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. 
Black, 1005 "Definition of the Differentiated Services Field (DS 1006 Field) in the IPv4 and IPv6 Headers", RFC 2474, 1007 DOI 10.17487/RFC2474, December 1998, 1008 . 1010 [RFC2998] Bernet, Y., Ford, P., Yavatkar, R., Baker, F., Zhang, L., 1011 Speer, M., Braden, R., Davie, B., Wroclawski, J., and E. 1012 Felstaine, "A Framework for Integrated Services Operation 1013 over Diffserv Networks", RFC 2998, DOI 10.17487/RFC2998, 1014 November 2000, . 1016 [RFC3170] Quinn, B. and K. Almeroth, "IP Multicast Applications: 1017 Challenges and Solutions", RFC 3170, DOI 10.17487/RFC3170, 1018 September 2001, . 1020 [RFC3209] Awduche, D., Berger, L., Gan, D., Li, T., Srinivasan, V., 1021 and G. Swallow, "RSVP-TE: Extensions to RSVP for LSP 1022 Tunnels", RFC 3209, DOI 10.17487/RFC3209, December 2001, 1023 . 1025 [RFC4340] Kohler, E., Handley, M., and S. Floyd, "Datagram 1026 Congestion Control Protocol (DCCP)", RFC 4340, 1027 DOI 10.17487/RFC4340, March 2006, 1028 . 1030 [RFC4594] Babiarz, J., Chan, K., and F. Baker, "Configuration 1031 Guidelines for DiffServ Service Classes", RFC 4594, 1032 DOI 10.17487/RFC4594, August 2006, 1033 . 1035 [RFC4960] Stewart, R., Ed., "Stream Control Transmission Protocol", 1036 RFC 4960, DOI 10.17487/RFC4960, September 2007, 1037 . 1039 [Schneider2016] 1040 Schneider, K., Yi, C., Zhang, B., and L. Zhang, "A 1041 Practical Congestion Control Scheme for Named Data 1042 Networking, in Proceedings of the 2016 conference on 3rd 1043 ACM Conference on Information-Centric Networking - ACM-ICN 1044 '16", DOI 10.1145/2984356.2984369, 2016, 1045 . 1048 [Shenker2006] 1049 Shenker, S., "Fundamental Design Issues for the Future 1050 Internet, in IEEE Journal on Selected Areas in 1051 Communications", DOI 10.1109/49.414637, 2006, 1052 . 1054 [Song2018] 1055 Song, J., Lee, M., and T. 
Kwon, "SMIC: Subflow-level 1056 Multi-path Interest Control for Information Centric 1057 Networking, in 5th ACM Conference on Information-Centric 1058 Networking", DOI 10.1145/3267955.3267971, 2018, 1059 . 1062 [Tseng2003] 1063 Tseng, CH., "The performance of QoS-aware IP multicast 1064 routing protocols, in Networks, Vol:42, No:2", 1065 DOI 10.1002/net.10084, September 2003, 1066 . 1069 [Wang2000] 1070 Wang, B. and J. Hou, "Multicast routing and its QoS 1071 extension: problems, algorithms, and protocols, in IEEE 1072 Network, Vol:14, No:1", DOI 10.1109/65.819168, Jan/Feb 1073 2000, . 1076 [Wang2013] 1077 Wang, Y., Rozhnova, N., Narayanan, A., Oran, D., and I. 1078 Rhee, "An Improved Hop-by-hop Interest Shaper for 1079 Congestion Control in Named Data Networking, in ACM 1080 SIGCOMM Workshop on Information-Centric Networking", 1081 DOI 10.1145/2534169.2491233, 2013, 1082 . 1085 Author's Address 1087 Dave Oran 1088 Network Systems Research and Design 1089 4 Shady Hill Square 1090 Cambridge, MA 02138 1091 USA 1093 Email: daveoran@orandom.net