Congestion and Pre-Congestion                   Philip Eardley (Editor)
Notification Working Group                                           BT
Internet-Draft                                         February 8, 2008
Intended status: Informational
Expires: August 11, 2008

               Pre-Congestion Notification Architecture
                    draft-ietf-pcn-architecture-03

Status of this Memo

   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 11, 2008.

Copyright Notice

   Copyright (C) The IETF Trust (2008).
Abstract

   The purpose of this document is to describe a general architecture
   for flow admission and termination based on pre-congestion
   information in order to protect the quality of service of
   established inelastic flows within a single DiffServ domain.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Assumptions and constraints on scope
     3.1.  Assumption 1: Trust and support of PCN - controlled
           environment
     3.2.  Assumption 2: Real-time applications
     3.3.  Assumption 3: Many flows and additional load
     3.4.  Assumption 4: Emergency use out of scope
     3.5.  Other assumptions
   4.  High-level functional architecture
     4.1.  Flow admission
     4.2.  Flow termination
     4.3.  Flow admission and flow termination
     4.4.  Information transport
     4.5.  PCN-traffic
   5.  Detailed functional architecture
     5.1.  PCN-interior-node functions
     5.2.  PCN-ingress-node functions
     5.3.  PCN-egress-node functions
     5.4.  Other admission control functions
     5.5.  Other flow termination functions
     5.6.  Addressing
     5.7.  Tunnelling
     5.8.  Fault handling
   6.  Design goals and challenges
   7.  Probing
     7.1.  Introduction
     7.2.  Probing functions
     7.3.  Discussion of rationale for probing, its downsides and
           open issues
   8.  Operations and Management
     8.1.  Configuration OAM
       8.1.1.  System options
       8.1.2.  Parameters
     8.2.  Performance & Provisioning OAM
     8.3.  Accounting OAM
     8.4.  Fault OAM
     8.5.  Security OAM
   9.  IANA Considerations
   10. Security considerations
   11. Conclusions
   12. Acknowledgements
   13. Comments Solicited
   14. Changes
     14.1. Changes from -02 to -03
     14.2. Changes from -01 to -02
     14.3. Changes from -00 to -01
   15. Appendix A: Possible work items beyond the scope of the
       current PCN WG Charter
   16. Informative References
   Author's Address
   Intellectual Property and Copyright Statements
1.  Introduction

   The purpose of this document is to describe a general architecture
   for flow admission and termination based on (pre-)congestion
   information in order to protect the quality of service of flows
   within a DiffServ domain [RFC2475].  This document defines an
   architecture for implementing two mechanisms to protect the quality
   of service of established inelastic flows within a single DiffServ
   domain, where all boundary and interior nodes are PCN-enabled and
   trust each other for correct PCN operation.  Flow admission control
   determines whether a new flow should be admitted, in order to
   protect the QoS of existing PCN-flows in normal circumstances.
   However, in abnormal circumstances, for instance a disaster
   affecting multiple nodes and causing traffic re-routes, the QoS of
   existing PCN-flows may degrade even though care was exercised when
   those flows were admitted.  Therefore we also propose a mechanism
   for flow termination, which removes enough traffic to protect the
   QoS of the remaining PCN-flows.

   As a fundamental building block to enable these two mechanisms, PCN-
   interior-nodes generate, encode and transport pre-congestion
   information towards the PCN-egress-nodes.  Two rates, a PCN-lower-
   rate and a PCN-upper-rate, can be associated with each link of the
   PCN-domain.  Each rate is used by a marking behaviour (specified in
   another document) that determines how and when PCN-packets are
   marked, and how the markings are encoded in packet headers.  PCN-
   egress-nodes measure the packet markings and send information as
   necessary to the nodes that decide, based on this information, which
   PCN-flows to accept/reject or terminate.  Another document will
   describe the decision-making behaviours.  Overall the aim is to
   enable PCN-nodes to give an "early warning" of potential congestion
   before there is any significant build-up of PCN-packets in the
   queue; the admission control mechanism limits the PCN-traffic on
   each link to *roughly* its PCN-lower-rate, and the flow termination
   mechanism limits the PCN-traffic on each link to *roughly* its PCN-
   upper-rate.

   We believe that the key benefits of the PCN mechanisms described in
   this document are that they are simple, scalable, and robust
   because:

   o  Per-flow state is required only at the PCN-ingress-nodes
      ("stateless core"), where it is needed for policing purposes (to
      prevent non-admitted PCN-traffic from entering the PCN-domain).
      It is not generally required that other network entities are
      aware of individual flows (although they may be in particular
      deployment scenarios).

   o  Admission control is resilient: PCN's QoS is decoupled from the
      routing system; hence in general admitted flows can survive
      capacity, routing or topology changes without additional
      signalling, and they don't have to be told (or learn) about such
      changes.  The PCN-lower-rates can be chosen small enough that
      admitted traffic can still be carried after a re-routing in most
      failure cases [Menth].  This is an important feature, as QoS
      violations in core networks due to link failures are more likely
      than QoS violations due to increased traffic volume [Iyer].

   o  The PCN-marking behaviours operate only on the overall PCN-
      traffic on the link, not per flow.

   o  The information from these measurements is signalled to the PCN-
      egress-nodes by the PCN-marks in the packet headers, ie "in-
      band".  No additional signalling protocol is required for
      transporting the PCN-marks, and therefore no secure binding is
      required between data packets and separate congestion messages.
   o  The PCN-egress-nodes make separate measurements, operating on the
      aggregate PCN-traffic from each PCN-ingress-node, ie not per
      flow.  Similarly, signalling by the PCN-egress-node of PCN-
      feedback-information (which is used for flow admission and
      termination decisions) is at the granularity of the ingress-
      egress-aggregate.  An alternative approach is that the PCN-
      egress-nodes monitor the PCN-traffic and signal PCN-feedback-
      information at the granularity of one (or a few) PCN-marks.

   o  The admitted PCN-load is controlled dynamically.  Therefore it
      adapts as the traffic matrix changes, and also if the network
      topology changes (eg after a link failure).  Hence an operator
      can be less conservative when deploying network capacity, and
      less accurate in their prediction of the PCN-traffic matrix.

   o  The termination mechanism complements admission control.  It
      allows the network to recover from sudden unexpected surges of
      PCN-traffic on some links, thus restoring QoS to the remaining
      flows.  Such scenarios are expected to be rare but not
      impossible.  They can be caused by large network failures that
      redirect lots of admitted PCN-traffic to other links, or by
      malfunction of the measurement-based admission control in the
      presence of admitted flows that send for a while at an atypically
      low rate and then increase their rates in a correlated way.

   o  The PCN-upper-rate may be set below the maximum rate at which
      PCN-traffic can be transmitted on a link, in order to trigger
      termination of some PCN-flows before loss (or excessive delay) of
      PCN-packets occurs, or to keep the maximum PCN-load on a link
      below a level configured by the operator.

   o  Provisioning of the network is decoupled from the process of
      adding new customers.  By contrast, with the DiffServ
      architecture [RFC2475] operators rely on subscription-time
      Service Level Agreements that statically define the parameters of
      the traffic that will be accepted from a customer, and so the
      operator has to run the provisioning process each time a new
      customer is added, to check that the Service Level Agreement can
      be fulfilled.  A PCN-domain doesn't need such traffic
      conditioning.

   Operators of networks will want to use the PCN mechanisms in various
   arrangements, for instance depending on how they perform admission
   control outside the PCN-domain (users, after all, are concerned
   about QoS end-to-end), what their particular goals and assumptions
   are, and so on.  Several deployment models are possible:

   o  An operator may choose to deploy either admission control or flow
      termination or both (see Section 4.3).

   o  IntServ over DiffServ [RFC2998].  The DiffServ region is PCN-
      enabled and the PCN-domain is a single RSVP hop, ie only the PCN-
      boundary-nodes process RSVP messages.  Outside the PCN-domain,
      RSVP messages are processed on each hop.  The case where RSVP
      signalling is used end-to-end is described in
      [I-D.briscoe-tsvwg-cl-architecture]; it would also be possible
      for the RSVP signalling to be originated and/or terminated by
      proxies, with application-layer signalling between the end user
      and the proxy (eg SIP signalling with a home hub).

   o  Similar to the previous bullet, but with NSIS signalling used
      instead of RSVP.

   o  Depending on the deployment scenario, the decision-making
      functionality (about flow admission and termination) could reside
      at the PCN-ingress-nodes or PCN-egress-nodes or (see Appendix A)
      at some central control node in the PCN-domain.

   o  There are several PCN-domains on the end-to-end path, each
      operating PCN mechanisms independently.

   o  The PCN-domain extends to the end users.  This scenario is
      described in [I-D.babiarz-pcn-sip-cap].  A variant is that the
      PCN-domain extends out as far as the LAN edge switch.

   o  The operator runs both the access network (not a PCN-domain) and
      the core network (a PCN-domain); per-flow policing is devolved to
      the access network and is not done at the PCN-ingress-node.
      Note: to aid readability, the rest of this draft assumes that
      policing is done by the PCN-ingress-nodes.

   o  Pseudowire: PCN may be used as a congestion avoidance mechanism
      for edge-to-edge pseudowire emulations
      [I-D.ietf-pwe3-congestion-frmwk].

   o  MPLS (Multi-Protocol Label Switching): [RFC3270] defines how to
      support the DiffServ architecture in MPLS networks.  [RFC5129]
      describes how to add PCN for admission control of microflows into
      a set of MPLS aggregates.  PCN-marking is done in MPLS's EXP
      field.

   o  Similarly, it may be possible to extend PCN into Ethernet
      networks, where PCN-marking is done in the Ethernet header.
      NOTE: Specific consideration of this extension is outside the
      IETF's remit.

   From the perspective of the outside world, a PCN-domain essentially
   looks like a DiffServ domain.  PCN-traffic is either transported
   across it transparently or policed at the PCN-ingress-node (ie
   dropped or carried at a lower QoS).  There are two differences: PCN-
   traffic has better QoS guarantees than normal DiffServ traffic
   (because PCN's mechanisms better protect the QoS of admitted
   flows); and in rare circumstances (failures), some PCN-flows may be
   terminated while other flows get their QoS restored.  Non-PCN-
   traffic is treated transparently, ie the PCN-domain is a normal
   DiffServ domain.
2.  Terminology

   o  PCN-domain: a PCN-capable domain; a contiguous set of PCN-enabled
      nodes that perform DiffServ scheduling; the complete set of PCN-
      nodes whose PCN-marking can in principle influence decisions
      about flow admission and termination for the PCN-domain,
      including the PCN-egress-nodes, which measure these PCN-marks.

   o  PCN-boundary-node: a PCN-node that connects one PCN-domain to a
      node either in another PCN-domain or in a non-PCN-domain.

   o  PCN-interior-node: a node in a PCN-domain that is not a PCN-
      boundary-node.

   o  PCN-node: a PCN-boundary-node or a PCN-interior-node.

   o  PCN-egress-node: a PCN-boundary-node in its role in handling
      traffic as it leaves a PCN-domain.

   o  PCN-ingress-node: a PCN-boundary-node in its role in handling
      traffic as it enters a PCN-domain.

   o  PCN-traffic: a PCN-domain carries traffic of different DiffServ
      behaviour aggregates (BAs) [RFC2475].  Those using the PCN
      mechanisms are called PCN-BAs (collectively called PCN-traffic)
      and the corresponding packets are PCN-packets.  The same network
      may carry traffic using other DiffServ BAs.  A PCN-flow is the
      unit of PCN-traffic that the PCN-boundary-node admits (or
      terminates); the unit could be a single microflow (as defined in
      [RFC2475]) or some identifiable collection of microflows.

   o  Ingress-egress-aggregate: the collection of PCN-packets from all
      PCN-flows that travel in one direction between a specific pair of
      PCN-boundary-nodes.

   o  PCN-lower-rate: a reference rate configured for each link in the
      PCN-domain, which is lower than the PCN-upper-rate.  It is used
      by a marking behaviour that determines whether a packet should be
      PCN-marked with a first encoding.

   o  PCN-upper-rate: a reference rate configured for each link in the
      PCN-domain, which is higher than the PCN-lower-rate.  It is used
      by a marking behaviour that determines whether a packet should be
      PCN-marked with a second encoding.

   o  Threshold-marking: a PCN-marking behaviour such that all PCN-
      traffic is marked if the PCN-traffic exceeds a particular rate
      (either the PCN-lower-rate or the PCN-upper-rate).  NOTE: The
      definition reflects the overall intent rather than the
      instantaneous behaviour, since the rate measured at a particular
      moment depends on the marking behaviour, its implementation and
      the traffic's variance as well as its rate.

   o  Excess-rate-marking: a PCN-marking behaviour such that the amount
      of PCN-traffic that is PCN-marked is equal to the amount that
      exceeds a particular rate (either the PCN-lower-rate or the PCN-
      upper-rate).  NOTE: As for threshold-marking, the definition
      reflects the overall intent rather than the instantaneous
      behaviour.

   o  Pre-congestion: a condition of a link within a PCN-domain in
      which the PCN-node performs PCN-marking, in order to provide an
      "early warning" of potential congestion before there is any
      significant build-up of PCN-packets in the real queue.  (Hence,
      by analogy with ECN, we call our mechanism Pre-Congestion
      Notification.)

   o  PCN-marking: the process of setting the header in a PCN-packet
      based on defined rules, in reaction to pre-congestion.

   o  PCN-feedback-information: information signalled by a PCN-egress-
      node to a PCN-ingress-node or central control node, which is
      needed for the flow admission and flow termination mechanisms.

3.  Assumptions and constraints on scope

   The scope of PCN is, at least initially (see Appendix A), restricted
   by the following assumptions:
   1.  these components are deployed in a single DiffServ domain,
       within which all PCN-nodes are PCN-enabled and trust each other
       for truthful PCN-marking and transport

   2.  all flows handled by these mechanisms are inelastic and
       constrained to a known peak rate through policing or shaping

   3.  the number of PCN-flows across any potential bottleneck link is
       sufficiently large that stateless, statistical mechanisms can be
       effective.  To put it another way, the aggregate bit rate of
       PCN-traffic across any potential bottleneck link needs to be
       sufficiently large relative to the maximum additional bit rate
       added by one flow.  This is the basic assumption of measurement-
       based admission control.

   4.  PCN-flows may have different precedence, but the applicability
       of the PCN mechanisms for emergency use (911, GETS, WPS, MLPP,
       etc.) is out of scope.

3.1.  Assumption 1: Trust and support of PCN - controlled environment

   We assume that the PCN-domain is a controlled environment, ie all
   the nodes in a PCN-domain run PCN and trust each other.  There are
   several reasons for making this assumption:

   o  The PCN-domain has to be encircled by a ring of PCN-boundary-
      nodes; otherwise traffic could enter a PCN-BA without being
      subject to admission control, which would potentially degrade the
      QoS of existing PCN-flows.

   o  Similarly, a PCN-boundary-node has to trust that all the PCN-
      nodes mark PCN-traffic consistently.  A node not doing PCN-
      marking wouldn't be able to alert when it suffered pre-
      congestion, which potentially would lead to too many PCN-flows
      being admitted (or too few being terminated).  Worse, a rogue
      node could perform various attacks, as discussed in the Security
      Considerations section.

   One way of assuring the above two points is for the entire PCN-
   domain to be run by a single operator.  Another possibility is that
   there are several operators but they trust each other to a
   sufficient level in their handling of PCN-traffic.

   Note: All PCN-nodes need to be trustworthy.  However, if it's known
   that an interface cannot become pre-congested, then it's not
   strictly necessary for it to be capable of PCN-marking.  But this
   must be known even in unusual circumstances, eg after the failure of
   some links.

3.2.  Assumption 2: Real-time applications

   We assume that any variation of source bit rate is independent of
   the level of pre-congestion.  We assume that PCN-packets come from
   real-time applications generating inelastic traffic [Shenker], like
   voice and video requiring low delay, jitter and packet loss, for
   example the Controlled Load Service [RFC2211] and the Telephony
   service class [RFC4594].  This assumption is to help focus the
   effort where it looks like PCN would be most useful, ie the sorts of
   applications where per-flow QoS is a known requirement.  In other
   words, we focus on PCN providing a benefit to inelastic traffic (PCN
   may or may not provide a benefit to other types of traffic).  For
   instance, the impact of this assumption would be to guide simulation
   work.

3.3.  Assumption 3: Many flows and additional load

   We assume that there are many PCN-flows on any bottleneck link in
   the PCN-domain (or, to put it another way, that the aggregate bit
   rate of PCN-traffic across any potential bottleneck link is
   sufficiently large relative to the maximum additional bit rate added
   by one PCN-flow).  Measurement-based admission control assumes that
   the present is a reasonable prediction of the future: the network
   conditions are measured at the time of a new flow request, but the
   actual network performance must be OK during the call some time
   later.
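   The statistical smoothing this assumption relies on can be
   illustrated numerically (the on/off flow model, rates and parameters
   below are assumptions chosen purely for illustration, not part of
   the architecture):

```python
import random

def relative_stddev(num_flows: int, peak_bps: float = 64e3,
                    on_prob: float = 0.5, trials: int = 2000,
                    seed: int = 1) -> float:
    """Relative standard deviation of the aggregate rate of
    num_flows independent on/off flows (Monte Carlo estimate)."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        # Each flow independently sends at its peak rate or is silent.
        rate = sum(peak_bps for _ in range(num_flows)
                   if rng.random() < on_prob)
        samples.append(rate)
    mean = sum(samples) / trials
    var = sum((s - mean) ** 2 for s in samples) / trials
    return (var ** 0.5) / mean
```

   With 10 flows the aggregate rate varies far more, relative to its
   mean, than with 1000 flows (roughly as 1/sqrt(N)), so fewer flows
   require a larger "safety margin".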
   One issue is that if there are only a few variable-rate flows, the
   aggregate traffic level may vary a lot, perhaps enough to cause some
   packets to get dropped.  If there are many flows, the aggregate
   traffic level should be statistically smoothed.  How many flows is
   enough depends on a number of things, such as the variation in each
   flow's rate, the total rate of PCN-traffic, and the size of the
   "safety margin" between the traffic level at which admission-marking
   starts and that at which packets are dropped or significantly
   delayed.

   We do not make explicit assumptions on how many PCN-flows are in
   each ingress-egress-aggregate.  Performance evaluation work may
   clarify whether it is necessary to make any additional assumption on
   aggregation at the ingress-egress-aggregate level.

3.4.  Assumption 4: Emergency use out of scope

   PCN-flows may have different precedence, but the applicability of
   the PCN mechanisms for emergency use (911, GETS, WPS, MLPP, etc) is
   out of scope for consideration by the PCN WG.

3.5.  Other assumptions

   As a consequence of Assumption 2 above, it is assumed that PCN-
   marking is applied to traffic scheduled with the expedited
   forwarding per-hop behaviour [RFC3246], or traffic with similar
   characteristics.

   The following two assumptions apply if the PCN WG decides to encode
   PCN-marking in the ECN-field:

   o  It is assumed that PCN-nodes do not perform ECN [RFC3168] on PCN-
      packets.

   o  A packet that is part of a PCN-flow may arrive at a PCN-ingress-
      node with its CE (Congestion Experienced) codepoint set (or the
      PCN-ingress-node may detect that the ECN-nonce is in use).
      There are several possibilities (not discussed further in this
      document) about what the PCN-ingress-node should do:

      *  drop the packet

      *  downgrade the packet to a non-PCN-BA, eg best effort

      *  tunnel the packet, so that the ECN-marking is carried
         transparently across the PCN-domain.

4.  High-level functional architecture

   The high-level approach is to split functionality between:

   o  PCN-interior-nodes 'inside' the PCN-domain, which monitor their
      own state of pre-congestion on each outgoing interface and mark
      PCN-packets if appropriate.  They are not flow-aware, nor aware
      of ingress-egress-aggregates.  PCN-ingress-nodes also perform
      this functionality on their outgoing interfaces (ie those
      'inside' the PCN-domain).

   o  PCN-boundary-nodes at the edge of the PCN-domain, which control
      admission of new PCN-flows and termination of existing PCN-flows,
      based on information from PCN-interior-nodes.  This information
      is in the form of the PCN-marked data packets (which are
      intercepted by the PCN-egress-nodes), not signalling messages.
      Generally PCN-ingress-nodes are flow-aware.

   The aim of this split is to keep the bulk of the network simple,
   scalable and robust, whilst confining policy, application-level and
   security interactions to the edge of the PCN-domain.  For example,
   the lack of flow awareness means that the PCN-interior-nodes don't
   care about the flow information associated with the PCN-packets that
   they carry, nor do the PCN-boundary-nodes care about which PCN-
   interior-nodes their flows traverse.

   The objective is to standardise PCN-marking behaviour, but
   potentially produce more than one (informational) RFC describing how
   PCN-boundary-nodes react to PCN-marks.

   Note: Section 4 and Section 5 talk about PCN functionality being
   configured on the outgoing interfaces of PCN-nodes.  Alternatively,
   PCN functionality could be configured on the ingress interfaces of
   PCN-nodes; however, a consistent choice must be made across the PCN-
   domain to ensure that the PCN mechanisms protect all links.  This
   document assumes configuration on the outgoing (egress) interfaces,
   because in DiffServ networks today DiffServ functionality is usually
   implemented there.

4.1.  Flow admission

   At a high level, flow admission control works as follows.  In order
   to generate information about the current state of the PCN-domain,
   each PCN-node PCN-marks packets if it is "pre-congested".  Exactly
   how a PCN-node decides whether it is "pre-congested" (the algorithm)
   and exactly how packets are "PCN-marked" (the encoding) will be
   defined in a separate standards-track document, but at a high level
   it is expected to be as follows:

   o  the algorithm: a PCN-node meters the amount of PCN-traffic on
      each of its outgoing links.  The measurement is made as an
      aggregate of all PCN-packets, not per flow.  The algorithm has a
      configured parameter, the PCN-lower-rate.  When the amount of
      PCN-traffic exceeds the PCN-lower-rate, PCN-packets are PCN-
      marked.  See the NOTE below for more explanation.

   o  the encoding: a PCN-node PCN-marks a PCN-packet (with a first
      encoding) by setting fields in the header to specific values.  It
      is expected that the ECN and/or DSCP fields will be used.

   NOTE: Two main categories of algorithm have been proposed: if the
   algorithm uses threshold-marking, then all PCN-packets are marked if
   the current rate exceeds the PCN-lower-rate, whereas if the
   algorithm uses excess-rate-marking, the amount marked is equal to
   the amount in excess of the PCN-lower-rate.
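   The distinction between the two categories can be sketched as
   follows (an idealised illustration only; real marking behaviours are
   defined over metered, smoothed measurements such as token buckets,
   and the function names here are invented for this sketch):

```python
def threshold_marking_fraction(pcn_rate: float,
                               pcn_lower_rate: float) -> float:
    """Fraction of PCN-traffic marked under threshold-marking."""
    # Threshold-marking: once the metered PCN-traffic rate exceeds
    # the configured PCN-lower-rate, ALL PCN-packets are marked.
    return 1.0 if pcn_rate > pcn_lower_rate else 0.0

def excess_rate_marking_fraction(pcn_rate: float,
                                 pcn_lower_rate: float) -> float:
    """Fraction of PCN-traffic marked under excess-rate-marking."""
    # Excess-rate-marking: the amount of marked traffic equals the
    # amount in excess of the PCN-lower-rate, so the marked fraction
    # is (rate - PCN-lower-rate) / rate.
    if pcn_rate <= pcn_lower_rate:
        return 0.0
    return (pcn_rate - pcn_lower_rate) / pcn_rate
```

   For example, at 10 Mbit/s of PCN-traffic against an 8 Mbit/s PCN-
   lower-rate, threshold-marking marks everything while excess-rate-
   marking marks roughly one packet in five.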
   However, note that this description reflects the overall intent of
   the algorithm rather than its instantaneous behaviour, since the
   rate measured at a particular moment depends on the detailed
   algorithm, its implementation and the traffic's variance as well as
   its rate (eg marking may well continue after a recent overload even
   after the instantaneous rate has dropped).

   The PCN-boundary-nodes monitor the PCN-marked packets in order to
   extract information about the current state of the PCN-domain.
   Based on this monitoring, a decision is made about whether to admit
   a prospective new flow.  Exactly how the admission control decision
   is made will be defined separately (at the moment the intention is
   that there will be one or more informational RFCs), but at a high
   level two approaches have been proposed to date:

   o  the PCN-egress-node measures (possibly as a moving average) the
      fraction of the PCN-traffic that is PCN-marked.  The fraction is
      measured for a specific ingress-egress-aggregate.  If the
      fraction is below a threshold value, then the new flow is
      admitted.

   o  if the PCN-egress-node receives one (or several) PCN-marked
      packets, then a new flow is blocked; otherwise it is admitted.

   Note that the PCN-lower-rate is a parameter that can be configured
   by the operator.  It will be set lower than the traffic rate at
   which the link becomes congested and the node drops packets.

   Note also that the admission control decision is made for a
   particular pair of PCN-boundary-nodes.  So it is quite possible for
   a new flow to be admitted between one pair of PCN-boundary-nodes,
   whilst at the same time another admission request is blocked between
   a different pair of PCN-boundary-nodes.

4.2.  Flow termination

   At a high level, flow termination control works as follows.  Each
   PCN-node PCN-marks packets in a similar fashion to the above, with
   all proposals using an excess-rate-marking approach (Section 4.1).
   An obvious approach is for the algorithm to use a second configured
   parameter, the PCN-upper-rate, and a second header encoding;
   however, there is also a proposal to use the same rate and the same
   encoding.  Several approaches have been proposed to date for how to
   convert this information into a flow termination decision; at a high
   level these are as follows:

   o  In one approach, the PCN-egress-node measures the rate of
      unmarked PCN-traffic (ie not PCN-upper-rate-marked), which is the
      amount of PCN-traffic that can actually be supported.  The PCN-
      ingress-node also measures the rate of PCN-traffic that is
      destined for this specific PCN-egress-node, and hence can
      calculate the excess amount that should be terminated.

   o  Another approach instead measures the rate of PCN-upper-rate-
      marked traffic, calculates from it the amount of traffic to
      remove, and selects the flows that should be terminated.

   o  Another approach terminates any PCN-flow with a PCN-upper-rate-
      marked packet.  Compared with the approaches above, PCN-marking
      needs to be done at a reduced rate (every "s" bytes of excess
      traffic); otherwise far too much traffic would be terminated.

   o  Another approach uses only one sort of marking, which is based on
      the PCN-lower-rate, to decide not only whether to admit more PCN-
      flows but also whether any PCN-flows need to be terminated.  It
      assumes that the ratio of the (implicit) PCN-upper-rate to the
      PCN-lower-rate is the same on all links.  This approach measures
      the rate of unmarked PCN-traffic at a PCN-egress-node.  The PCN-
      ingress-node uses this measurement to compute the implicit PCN-
      upper-rate of the bottleneck link.  It then measures the rate of
      PCN-traffic that is destined for this specific PCN-egress-node,
      and hence can calculate the amount that should be terminated.

   Since flow termination is designed for "abnormal" circumstances, it
   is quite likely that some PCN-nodes are congested and hence packets
   are being dropped and/or significantly queued.  The flow termination
   mechanism must bear this in mind.

   Note also that the termination control decision is made for a
   particular pair of PCN-boundary-nodes.  So it is quite possible for
   PCN-flows to be terminated between one pair of PCN-boundary-nodes,
   whilst at the same time none are terminated between a different pair
   of PCN-boundary-nodes.

4.3.  Flow admission and flow termination

   Although designed to work together, flow admission and flow
   termination are independent mechanisms, and the use of one does not
   require or prevent the use of the other.

   For example, an operator could use just admission control, solving
   heavy congestion (caused by re-routing) by 'just waiting': as
   sessions end, existing microflows naturally depart from the system
   over time, and the admission control mechanism will prevent
   admission of new microflows that use the affected links.  So the
   PCN-domain will naturally return to normal operation, but with
   reduced capacity.  The drawback of this approach is that until PCN-
   flows naturally depart to relieve the congestion, all PCN-flows, as
   well as lower-priority services, will be adversely affected.  On the
   other hand, an operator could rely for admission control just on
   statically provisioned capacity per PCN-ingress-node (regardless of
   the PCN-egress-node of a flow), as is typical in the hose model of
   the DiffServ architecture [RFC2475].
Such traffic conditioning 642 agreements can lead to focused overload: many flows happen to focus 643 on a particular link and then all flows through the congested link 644 fail catastrophically. The flow termination mechanism could then be 645 used to counteract such a problem. 647 A different possibility is to configure only the PCN-lower-rate and 648 hence only do one type of PCN-marking, but generate admission and 649 flow termination responses from different levels of marking. This is 650 suggested in [I-D.charny-pcn-single-marking] which gives some of the 651 pros and cons of this approach. 653 4.4. Information transport 655 The transport of pre-congestion information from a PCN-node to a PCN- 656 egress-node is through PCN-markings in data packet headers, ie "in- 657 band": no signalling protocol messaging is needed. However, 658 signalling is needed to transport PCN-feedback-information between 659 the PCN-boundary-nodes, for example to convey the fraction of PCN- 660 marked traffic from a PCN-egress-node to the relevant PCN-ingress- 661 node. Exactly what information needs to be transported will be 662 described in the future PCN WG document(s) about the boundary 663 mechanisms. The signalling could be done by an extension of RSVP or 664 NSIS, for instance; protocol work will be done by the relevant WG, 665 but for example [I-D.lefaucheur-rsvp-ecn] describes the extensions 666 needed for RSVP. 668 4.5. PCN-traffic 670 The following are some high-level points about how PCN works: 672 o There needs to be a way for a PCN-node to distinguish PCN-traffic 673 from non PCN-traffic. They may be distinguished using the DSCP 674 field and/or ECN field. 676 o The PCN mechanisms may be applied to more than one behaviour 677 aggregate (which are distinguished by DSCP). 679 o There may be traffic that is more important than PCN, perhaps a 680 particular application or an operator's control messages. 
A PCN- 681 node may dedicate capacity to such traffic or priority schedule it 682 over PCN. In the latter case its traffic needs to contribute to 683 the PCN meters. 685 o There will be traffic less important than PCN. For instance best 686 effort or assured forwarding traffic. It will be scheduled at 687 lower priority than PCN, and use a separate queue or queues. 688 However, a PCN-node should dedicate some capacity to lower 689 priority traffic so that it isn't starved. 691 o There may be other traffic with the same priority as PCN-traffic. 692 For instance, Expedited Forwarding sessions that are originated 693 either without capacity admission or with traffic engineering. In 694 [I-D.ietf-tsvwg-admitted-realtime-dscp] the two traffic classes 695 are called EF and EF-ADMIT. A PCN-node could either use separate 696 queues, or separate policers and a common queue; the draft 697 provides some guidance when each is better, but for instance the 698 latter is preferred when the two traffic classes are carrying the 699 same type of application with the same jitter requirements. 701 5. Detailed Functional architecture 703 This section is intended to provide a systematic summary of the new 704 functional architecture in the PCN-domain. First it describes 705 functions needed at the three specific types of PCN-node; these are 706 data plane functions and are in addition to their normal router 707 functions. Then it describes further functionality needed for both 708 flow admission control and flow termination; these are signalling and 709 decision-making functions, and there are various possibilities for 710 where the functions are physically located. The section is split 711 into: 713 1. functions needed at PCN-interior-nodes 715 2. functions needed at PCN-ingress-nodes 717 3. functions needed at PCN-egress-nodes 719 4. other functions needed for flow admission control 721 5. other functions needed for flow termination control 722 Note: Probing is covered in Section 7. 
724 The section then discusses some other detailed topics: 726 1. addressing 728 2. tunnelling 730 3. fault handling 732 5.1. PCN-interior-node functions 734 Each interface of the PCN-domain is configured with the following 735 functionality: 737 o Packet classify - decide whether an incoming packet is a PCN- 738 packet or not. Another PCN WG document will specify encoding, 739 using the DSCP and/or ECN fields. 741 o PCN-meter - measure the 'amount of PCN-traffic'. The measurement 742 is made as an aggregate of all PCN-packets, and not per flow. 744 o PCN-mark - algorithms determine whether to PCN-mark PCN-packets 745 and what packet encoding is used (as specified in another PCN WG 746 document). 748 The same general approach of metering and PCN-marking is performed 749 for both flow admission control and flow termination, however the 750 algorithms and encoding may be different. 752 These functions are needed for each interface of the PCN-domain. 753 They are therefore needed on all interfaces of PCN-interior-nodes, 754 and on the interfaces of PCN-boundary-nodes that are internal to the 755 PCN-domain. There may be more than one PCN-meter and marker 756 installed at a given interface, eg one for admission and one for 757 termination. 759 5.2. PCN-ingress-node functions 761 Each ingress interface of the PCN-domain is configured with the 762 following functionality: 764 o Packet classify - decide whether an incoming packet is part of a 765 previously admitted microflow, by using a filter spec (eg DSCP, 766 source and destination addresses and port numbers) 768 o Police - police, by dropping or re-marking with a non-PCN DSCP, 769 any packets received with a DSCP demanding PCN transport that do 770 not belong to an admitted flow. Similarly, police packets that 771 are part of a previously admitted microflow, to check that the 772 microflow keeps to the agreed rate or flowspec (eg RFC1633 773 [RFC1633] and NSIS equivalent). 
There is a need to be careful to 774 avoid re-ordering traffic. 776 o PCN-colour - set the DSCP field or DSCP and ECN fields to the 777 appropriate value(s) for a PCN-packet. The draft about PCN- 778 encoding will discuss further. 780 o PCN-meter - make "measurements of PCN-traffic". Some approaches 781 to flow termination require the PCN-ingress-node to measure the 782 (aggregate) rate of PCN-traffic towards a particular PCN-egress- 783 node. 785 The first two are policing functions, needed to make sure that PCN- 786 packets admitted into the PCN-domain belong to a flow that's been 787 admitted and to ensure that the flow keeps to the flowspec agreed (eg 788 doesn't go at a faster rate and is inelastic traffic). Installing 789 the filter spec will typically be done by the signalling protocol, as 790 will re-installing the filter, for example after a re-route that 791 changes the PCN-ingress-node (see [I-D.briscoe-tsvwg-cl-architecture] 792 for an example using RSVP). PCN-colouring allows the rest of the 793 PCN-domain to recognise PCN-packets. 795 5.3. PCN-egress-node functions 797 Each egress interface of the PCN-domain is configured with the 798 following functionality: 800 o Packet classify - determine which PCN-ingress-node a PCN-packet 801 has come from. 803 o PCN-meter - "measure PCN-traffic" or "monitor PCN-marks". 805 o PCN-colour - for PCN-packets, set the DSCP and ECN fields to the 806 appropriate values for use outside the PCN-domain. 808 Another PCN WG document, about boundary mechanisms, will describe 809 PCN-metering in more detail. As described in Section 4.1 and Section 810 4.2, at present there are two alternative proposals: to measure as an 811 aggregate (ie not per flow) all PCN-packets from a particular PCN- 812 ingress-node; or to monitor the PCN-traffic and react to one (or 813 several) PCN-marks. We refer to these approaches as "measuring PCN- 814 traffic" and "monitoring PCN-marks". 
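As an illustration of the "measuring PCN-traffic" behaviour, a PCN-egress-node might keep per-aggregate octet counts along the following lines. This is a sketch under assumed names; the real metering, including any moving average, is left to the future boundary-mechanisms document.

```python
from collections import defaultdict

class EgressMeter:
    """Sketch: per ingress-egress-aggregate accounting at a PCN-egress-node.
    Aggregates are keyed by the PCN-ingress-node the packet came from."""

    def __init__(self):
        self.total_octets = defaultdict(int)   # all PCN-traffic received
        self.marked_octets = defaultdict(int)  # the PCN-marked part of it

    def on_packet(self, ingress_node, size, pcn_marked):
        self.total_octets[ingress_node] += size
        if pcn_marked:
            self.marked_octets[ingress_node] += size

    def marked_fraction(self, ingress_node):
        # An empty aggregate gives no information (see Section 7 on probing).
        if self.total_octets[ingress_node] == 0:
            return 0.0
        return self.marked_octets[ingress_node] / self.total_octets[ingress_node]
```

The measurement is per aggregate, not per flow, matching the scalability goal: interior nodes keep no flow state, and the egress keeps state only per PCN-ingress-node.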
The PCN-metering functionality 815 also depends on whether the measurement is targeted at admission 816 control or flow termination. It also depends on what encoding and 817 PCN-marking algorithms are specified by the PCN WG. 819 5.4. Other admission control functions 821 As well as the functions covered above (Sections 5.1, 5.2, 5.3), 822 other specific admission control functions can be performed at a PCN- 823 boundary-node (PCN-ingress-node or PCN-egress-node) or at a 824 centralised node, but not at normal PCN-interior-nodes. The 825 functions are: 827 o Make decision about admission - based on the output of the PCN- 828 egress-node's PCN-meter function. In the case where it "measures 829 PCN-traffic", the measured traffic on the ingress-egress-aggregate 830 is compared with some reference level. In the case where it 831 "monitors PCN-marks", then the decision is based on whether one 832 (or several) packets is (are) PCN-marked or not. In either case, 833 the admission decision also takes account of policy and 834 application layer requirements. 836 o Communicate decision about admission - signal the decision to the 837 node making the admission control request (which may be outside 838 the PCN-domain), and to the policer (PCN-ingress-node function) 839 for enforcement of the decision. 841 There are various possibilities for how the functionality can be 842 distributed (we assume the operator would configure which is used): 844 o The decision is made at the PCN-egress-node and signalled to the 845 PCN-ingress-node 847 o The decision is made at the PCN-ingress-node, which requires that 848 the PCN-egress-node signals PCN-feedback-information to the PCN- 849 ingress-node. For example, in the case where the PCN-meter 850 function is to "measure PCN-traffic" it could signal the fraction 851 of PCN-traffic that is PCN-marked. 853 o The decision is made at a centralised node (see Appendix). 
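The two decision styles above might be sketched as follows; the reference level and tolerance values are invented for illustration (actual values would be operator-configured).

```python
# Sketch of the two proposed admission decisions; both would additionally
# take account of policy and application layer requirements (not modelled).

def admit_by_fraction(marked_fraction, reference_level=0.01):
    """'Measure PCN-traffic': admit while the fraction of PCN-marked
    traffic on the ingress-egress-aggregate stays below a reference level."""
    return marked_fraction < reference_level

def admit_by_marks(marked_packets_seen, tolerance=0):
    """'Monitor PCN-marks': block once one (or several) PCN-marked
    packets have been seen."""
    return marked_packets_seen <= tolerance
```

Either function could run at the PCN-egress-node, the PCN-ingress-node (fed by signalled PCN-feedback-information) or a centralised node, per the distribution options listed above.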
855 The decision needs to be passed to the application layer so that it 856 can take the appropriate action. 858 5.5. Other flow termination functions 860 Specific termination control functions can be performed at a PCN- 861 boundary-node (PCN-ingress-node or PCN-egress-node) or at a 862 centralised node, but not at normal PCN-interior-nodes. There are 863 various possibilities for how the functionality can be distributed, 864 similar to those discussed above in the Admission control section; 865 the flow termination decision could be made at the PCN-ingress-node, 866 the PCN-egress-node or at some centralised node. The functions are: 868 o PCN-meter at PCN-egress-node - similarly to flow admission, there 869 are two proposals: to "measure PCN-traffic" on the ingress-egress- 870 aggregate, and to "monitor PCN-marks" and react to one (or 871 several) PCN-marks. 873 o (if required) PCN-meter at PCN-ingress-node - make "measurements 874 of PCN-traffic" being sent towards a particular PCN-egress-node; 875 again, this is done for the ingress-egress-aggregate and not per 876 flow. 878 o (if required) Communicate PCN-feedback-information to the node 879 that makes the flow termination decision. For example, as in 880 [I-D.briscoe-tsvwg-cl-architecture], communicate the PCN-egress- 881 node's measurements to the PCN-ingress-node. 883 o Make decision about flow termination - use the information from 884 the PCN-meter(s) to decide which PCN-flow or PCN-flows to 885 terminate. The decision takes account of policy and application 886 layer requirements. 888 o Communicate decision about flow termination - signal the decision 889 to the node that is able to terminate the flow (which may be 890 outside the PCN-domain), and to the policer (PCN-ingress-node 891 function) for enforcement of the decision. 893 5.6. Addressing 895 PCN-nodes may need to know the address of other PCN-nodes. 
Note: in 896 all cases PCN-interior-nodes don't need to know the address of any 897 other PCN-nodes (except as normal their next hop neighbours, for 898 routing purposes). 900 The PCN-egress-node needs to know the address of the PCN-ingress-node 901 associated with a flow, at a minimum so that the PCN-ingress-node can 902 be informed to enforce the admission decision (and any flow 903 termination decision) through policing. There are various 904 possibilities for how the PCN-egress-node can do this, ie associate 905 the received packet to the correct ingress-egress-aggregate. It is 906 not the intention of this document to mandate a particular mechanism. 908 o The addressing information can be gathered from signalling. For 909 example, regular processing of an RSVP Path message, as the PCN- 910 ingress-node is the previous RSVP hop (PHOP) 911 ([I-D.lefaucheur-rsvp-ecn]). 913 o Use a probe packet that includes as payload the address of the 914 PCN-ingress-node. 916 o Always tunnel PCN-traffic across the PCN-domain. Then the PCN- 917 ingress-node's address is simply the source address of the outer 918 packet header. The PCN-ingress-node needs to learn the address of 919 the PCN-egress-node, either by manual configuration or by one of 920 the automated tunnel endpoint discovery mechanisms (such as 921 signalling or probing over the data route, interrogating routing 922 or using a centralised broker). 924 5.7. Tunnelling 926 Tunnels may originate and/or terminate within a PCN-domain. It is 927 important that the PCN-marking of any packet can potentially 928 influence PCN's flow admission control and termination - it shouldn't 929 matter whether the packet happens to be tunnelled at the PCN-node 930 that PCN-marks the packet, or indeed whether it's decapsulated or 931 encapsulated by a subsequent PCN-node. This suggests that the 932 "uniform conceptual model" described in [RFC2983] should be re- 933 applied in the PCN context. 
In line with this and the approach of 934 [RFC4303] and [I-D.briscoe-tsvwg-ecn-tunnel], the following rule is 935 applied if encapsulation is done within the PCN-domain: 937 o any PCN-marking is copied into the outer header 939 Similarly, in line with the "uniform conceptual model" of [RFC2983] 940 and the "full-functionality option" of [RFC3168], the following rule 941 is applied if decapsulation is done within the PCN-domain: 943 o if the outer header's marking state is more severe then it is 944 copied onto the inner header 946 o Note: the order of increasing severity is: unmarked; PCN-marking 947 with first encoding (ie associated with the PCN-lower-rate); PCN- 948 marking with second encoding (ie associated with the PCN-upper- 949 rate) 951 An operator may wish to tunnel PCN-traffic from PCN-ingress-nodes to 952 PCN-egress-nodes. The PCN-marks shouldn't be visible outside the 953 PCN-domain, which can be achieved by doing the PCN-colour function 954 (Section 5.3) after all the other (PCN and tunnelling) functions. 955 The potential reasons for doing such tunnelling are: the PCN-egress- 956 node then automatically knows the address of the relevant PCN- 957 ingress-node for a flow; even if ECMP is running, all PCN-packets on 958 a particular ingress-egress-aggregate follow the same path. But it 959 also has drawbacks, for example the additional overhead in terms of 960 bandwidth and processing, and the cost of setting up a mesh of 961 tunnels between PCN-boundary-nodes (there is an N^2 scaling issue). 963 Potential issues arise for a "partially PCN-capable tunnel", ie where 964 only one tunnel endpoint is in the PCN domain: 966 1. The tunnel starts outside a PCN-domain and finishes inside it. 967 If the packet arrives at the tunnel ingress with the same 968 encoding as used within the PCN-domain to indicate PCN-marking, 969 then this could lead the PCN-egress-node to falsely measure pre- 970 congestion. 972 2. 
The tunnel starts inside a PCN-domain and finishes outside it. 973 If the packet arrives at the tunnel ingress already PCN-marked, 974 then it will still have the same encoding when it's decapsulated 975 which could potentially confuse nodes beyond the tunnel egress. 977 In line with the solution for partially capable DiffServ tunnels in 978 [RFC2983], the following rules are applied: 980 o For case (1), the tunnel egress node clears any PCN-marking on the 981 inner header. This rule is applied before the 'copy on 982 decapsulation' rule above. 984 o For case (2), the tunnel ingress node clears any PCN-marking on 985 the inner header. This rule is applied after the 'copy on 986 encapsulation' rule above. 988 Note that the above implies that one has to know, or figure out, the 989 characteristics of the other end of the tunnel as part of setting it 990 up. 992 5.8. Fault handling 994 If a PCN-interior-node fails (or one of its links), then lower layer 995 protection mechanisms or the regular IP routing protocol will 996 eventually re-route round it. If the new route can carry all the 997 admitted traffic, flows will gracefully continue. If instead this 998 causes early warning of pre-congestion on the new route, then 999 admission control based on pre-congestion notification will ensure 1000 new flows will not be admitted until enough existing flows have 1001 departed. Re-routing may result in heavy (pre-)congestion, when the 1002 flow termination mechanism will kick in. 1004 If a PCN-boundary-node fails then we would like the regular QoS 1005 signalling protocol to take care of things. As an example 1006 [I-D.briscoe-tsvwg-cl-architecture] considers what happens if RSVP is 1007 the QoS signalling protocol. 1009 6. 
Design goals and challenges 1011 Prior work on PCN and similar mechanisms has thrown up a number of 1012 considerations about PCN's design goals (things PCN should be good 1013 at) and some issues that have been hard to solve in a fully 1014 satisfactory manner. Taken as a whole it represents a list of trade- 1015 offs (it's unlikely that they can all be 100% achieved) and perhaps 1016 a set of evaluation criteria to help an operator (or the IETF) decide 1017 between options. 1019 The following are key design goals for PCN (based on 1020 [I-D.chan-pcn-problem-statement]): 1022 o The PCN-enabled packet forwarding network should be simple, 1023 scalable and robust 1025 o Compatibility with other traffic (ie a proposed solution should 1026 work well when non-PCN traffic is also present in the network) 1028 o Support of different types of real-time traffic (eg should work 1029 well with CBR and VBR voice and video sources treated together) 1031 o Reaction time of the mechanisms should be commensurate with the 1032 desired application-level requirements (eg a termination mechanism 1033 needs to terminate flows before significant QoS issues are 1034 experienced by real-time traffic, and before most users hang up). 1036 o Compatibility with different precedence levels of real-time 1037 applications (e.g. preferential treatment of higher precedence 1038 calls over lower precedence calls, [ITU-MLPP]). 1040 The following are open issues. They are mainly taken from 1041 [I-D.briscoe-tsvwg-cl-architecture] which also describes some 1042 possible solutions. Note that some may be considered unimportant in 1043 general or in specific deployment scenarios or by some operators. 1045 NOTE: Potential solutions are out of scope for this document. 1047 o ECMP (Equal Cost Multi-Path) Routing: The level of pre-congestion 1048 is measured on a specific ingress-egress-aggregate.
However, if 1049 the PCN-domain runs ECMP, then traffic on this ingress-egress- 1050 aggregate may follow several different paths - some of the paths 1051 could be pre-congested whilst others are not. There are three 1052 potential problems: 1054 1. over-admission: a new flow is admitted (because the pre- 1055 congestion level measured by the PCN-egress-node is 1056 sufficiently diluted by unmarked packets from non-congested 1057 paths that a new flow is admitted), but its packets travel 1058 through a pre-congested PCN-node 1060 2. under-admission: a new flow is blocked (because the pre- 1061 congestion level measured by the PCN-egress-node is 1062 sufficiently increased by PCN-marked packets from pre- 1063 congested paths that a new flow is blocked), but its packets 1064 travel along an uncongested path 1066 3. ineffective termination: flows are terminated, however their 1067 path doesn't travel through the (pre-)congested router(s). 1068 Since flow termination is a 'last resort' that protects the 1069 network should over-admission occur, this problem is probably 1070 more important to solve than the other two. 1072 o ECMP and signalling: It is possible that, in a PCN-domain running 1073 ECMP, the signalling packets (eg RSVP, NSIS) follow a different 1074 path than the data packets, which could matter if the signalling 1075 packets are used as probes. Whether this is an issue depends on 1076 which fields the ECMP algorithm uses; if the ECMP algorithm is 1077 restricted to the source and destination IP addresses, then it 1078 won't be. 1080 o Tunnelling: There are scenarios where tunnelling makes it hard to 1081 determine the path in the PCN-domain. The problem, its impact and 1082 the potential solutions are similar to those for ECMP. 1084 o Scenarios with only one tunnel endpoint in the PCN domain may make 1085 it harder for the PCN-egress-node to gather from the signalling 1086 messages (eg RSVP, NSIS) the identity of the PCN-ingress-node. 
1088 o Bi-Directional Sessions: Many applications have bi-directional 1089 sessions - hence there are two flows that should be admitted (or 1090 terminated) as a pair - for instance a bi-directional voice call 1091 only makes sense if flows in both directions are admitted. 1092 However, PCN's mechanisms concern admission and termination of a 1093 single flow, and coordination of the decision for both flows is a 1094 matter for the signalling protocol and out of scope of PCN. One 1095 possible example would use SIP pre-conditions; there are others. 1097 o Global Coordination: PCN makes its admission decision based on 1098 PCN-markings on a particular ingress-egress-aggregate. Decisions 1099 about flows through a different ingress-egress-aggregate are made 1100 independently. However, one can imagine network topologies and 1101 traffic matrices where, from a global perspective, it would be 1102 better to make a coordinated decision across all the ingress- 1103 egress-aggregates for the whole PCN-domain. For example, to block 1104 (or even terminate) flows on one ingress-egress-aggregate so that 1105 more important flows through a different ingress-egress-aggregate 1106 could be admitted. The problem may well be second order. 1108 o Aggregate Traffic Characteristics: Even when the number of flows 1109 is stable, the traffic level through the PCN-domain will vary 1110 because the sources vary their traffic rates. PCN works best when 1111 there's not too much variability in the total traffic level at a 1112 PCN-node's interface (ie in the aggregate traffic from all 1113 sources). Too much variation means that a node may (at one 1114 moment) not be doing any PCN-marking and then (at another moment) 1115 drop packets because it's overloaded. This makes it hard to tune 1116 the admission control scheme to stop admitting new flows at the 1117 right time. Therefore the problem is more likely with fewer, 1118 burstier flows. 
1120 o Flash crowds and Speed of Reaction: PCN is a measurement-based 1121 mechanism and so there is an inherent delay between packet marking 1122 by PCN-interior-nodes and any admission control reaction at PCN- 1123 boundary-nodes. For example, potentially if a big burst of 1124 admission requests occurs in a very short space of time (eg 1125 prompted by a televote), they could all get admitted before enough 1126 PCN-marks are seen to block new flows. In other words, any 1127 additional load offered within the reaction time of the mechanism 1128 mustn't move the PCN-domain directly from no congestion to 1129 overload. This 'vulnerability period' may impact at the 1130 signalling level, for instance QoS requests should be rate limited 1131 to bound the number of requests able to arrive within the 1132 vulnerability period. 1134 o Silent at start: after a successful admission request the source 1135 may wait some time before sending data (eg waiting for the called 1136 party to answer). Then the risk is that, in some circumstances, 1137 PCN's measurements underestimate what the pre-congestion level 1138 will be when the source does start sending data. 1140 o Compatibility of PCN-encoding with ECN-encoding. This issue will 1141 be considered further in the PCN WG Milestone 'Survey of encoding 1142 choices'. 1144 7. Probing 1146 7.1. Introduction 1148 Probing is an optional mechanism to assist admission control. 1150 PCN's admission control, as described so far, is essentially a 1151 reactive mechanism where the PCN-egress-node monitors the pre- 1152 congestion level for traffic from each PCN-ingress-node; if the level 1153 rises then it blocks new flows on that ingress-egress-aggregate. 1154 However, it's possible that an ingress-egress-aggregate carries no 1155 traffic, and so the PCN-egress-node can't make an admission decision 1156 using the usual method described earlier. 1158 One approach is to be "optimistic" and simply admit the new flow. 
1159 However it's possible to envisage a scenario where the traffic levels 1160 on other ingress-egress-aggregates are already so high that they're 1161 blocking new PCN-flows, and admitting a new flow onto this 'empty' 1162 ingress-egress-aggregate adds extra traffic onto the link that's 1163 already pre-congested - which may 'tip the balance' so that PCN's 1164 flow termination mechanism is activated or some packets are dropped. 1165 This risk could be lessened by configuring on each link sufficient 1166 'safety margin' above the PCN-lower-rate. 1168 An alternative approach is to make PCN a more proactive mechanism. 1169 The PCN-ingress-node explicitly determines, before admitting the 1170 prospective new flow, whether the ingress-egress-aggregate can 1171 support it. This can be seen as a "pessimistic" approach, in 1172 contrast to the "optimism" of the approach above. It involves 1173 probing: a PCN-ingress-node generates and sends probe packets in 1174 order to test the pre-congestion level that the flow would 1175 experience. 1177 One possibility is that a probe packet is just a dummy data packet, 1178 generated by the PCN-ingress-node and addressed to the PCN-egress- 1179 node. Another possibility is that a probe packet is a signalling 1180 packet that is anyway travelling from the PCN-ingress-node to the 1181 PCN-egress-node (eg an RSVP PATH message travelling from source to 1182 destination). 1184 7.2. Probing functions 1186 The probing functions are: 1188 o Make decision that probing is needed. As described above, this is 1189 when the ingress-egress-aggregate (or the ECMP path - Section 6) 1190 carries no PCN-traffic. An alternative is always to probe, ie 1191 probe before admitting every PCN-flow. 1193 o (if required) Communicate the request that probing is needed - the 1194 PCN-egress-node signals to the PCN-ingress-node that probing is 1195 needed 1197 o (if required) Generate probe traffic - the PCN-ingress-node 1198 generates the probe traffic. 
The appropriate number (or rate) of 1199 probe packets will depend on the PCN-marking algorithm; for 1200 example an excess-rate-marking algorithm generates fewer PCN-marks 1201 than a threshold-marking algorithm, and so will need more probe 1202 packets. 1204 o Forward probe packets - as far as PCN-interior-nodes are 1205 concerned, probe packets must be handled the same as (ordinary 1206 data) PCN-packets, in terms of routing, scheduling and PCN- 1207 marking. 1209 o Consume probe packets - the PCN-egress-node consumes probe packets 1210 to ensure that they don't travel beyond the PCN-domain. 1212 7.3. Discussion of rationale for probing, its downsides and open issues 1214 It is an unresolved question whether probing is really needed, but 1215 three viewpoints have been put forward as to why it is useful. The 1216 first is perhaps the most obvious: there is no PCN-traffic on the 1217 ingress-egress-aggregate. The second assumes that multipath routing 1218 ECMP is running in the PCN-domain. The third viewpoint is that 1219 admission control is always done by probing. We now consider each in 1220 turn. 1222 The first viewpoint assumes the following: 1224 o There is no PCN-traffic on the ingress-egress-aggregate (so a 1225 normal admission decision cannot be made). 1227 o Simply admitting the new flow has a significant risk of leading to 1228 overload: packets dropped or flows terminated. 1230 On the former bullet, [PCN-email-traffic-empty-aggregates] suggests 1231 that, during the future busy hour of a national network with about 1232 100 PCN-boundary-nodes, there are likely to be significant numbers of 1233 aggregates with very few flows under nearly all circumstances. 1235 The latter bullet could occur if a new flow starts on many of the 1236 empty ingress-egress-aggregates and causes overload on a link in the 1237 PCN-domain. 
To be a problem, this would probably have to happen in a 1238 short time period (flash crowd) because, after the reaction time of 1239 the system, other (non-empty) ingress-egress-aggregates that pass 1240 through the link will measure pre-congestion and so block new flows, 1241 and flows will naturally end anyway. 1243 The downsides of probing for this viewpoint are: 1245 o Probing adds delay to the admission control process. 1247 o Sufficient probing traffic has to be generated to test the pre- 1248 congestion level of the ingress-egress-aggregate. But the probing 1249 traffic itself may cause pre-congestion, causing other PCN-flows 1250 to be blocked or even terminated - and in the flash crowd scenario 1251 there will be probing on many ingress-egress-aggregates. 1253 The open issues associated with this viewpoint include: 1255 o What rate and pattern of probe packets does the PCN-ingress-node 1256 need to generate, so that there's enough traffic to make the 1257 admission decision? 1259 o What difficulty does the delay (whilst probing is done) cause 1260 applications (eg packets might be dropped)? 1262 o Are there other ways of dealing with the flash crowd scenario? 1263 For instance, the rate at which new flows are admitted could be 1264 limited; or a PCN-egress-node could block new flows on its empty 1265 ingress-egress-aggregates when its non-empty ones are pre- 1266 congested. 1268 The second viewpoint applies in the case where there is multipath 1269 routing (ECMP) in the PCN-domain. Note that ECMP is often used on 1270 core networks.
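Since an ECMP next-hop choice is typically a deterministic hash over packet header fields, whether a probe tests the same path as a subsequent data flow depends entirely on those fields matching. The following minimal sketch illustrates this; the hash function and the exact field set are invented for illustration, and real routers use their own (often vendor-specific) hashes over a similar set of fields:

```python
import hashlib

def ecmp_path_index(src_ip, dst_ip, src_port, dst_port, proto, dscp, n_paths):
    """Toy stand-in for a router's ECMP hash: pick one of n_paths
    equal-cost paths from the packet's header fields.  Any change to a
    hashed field can move the packet onto a different path, which is
    why a probe packet must carry the same header fields as the
    PCN-flow it is meant to test."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}|{dscp}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % n_paths

# A probe copying the data flow's addresses, ports, protocol ID and
# DSCP hashes onto the same path as the flow itself:
flow = ("10.0.0.1", "10.0.1.1", 5004, 5004, 17, 46)
assert ecmp_path_index(*flow, 4) == ecmp_path_index(*flow, 4)
```

This is also why a probe packet can only be distinguished from a data packet by bits that the ECMP algorithm ignores, a point taken up under the second viewpoint below.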
There are two possibilities: 1272 (1) If admission control is based on measurements of the ingress- 1273 egress-aggregate, then the viewpoint that probing is useful assumes: 1275 o there's a significant chance that the traffic is unevenly balanced 1276 across the ECMP paths, and hence there's a significant risk of 1277 admitting a flow that should be blocked (because it follows an 1278 ECMP path that is pre-congested) or blocking a flow that should be 1279 admitted. 1281 o Note: [PCN-email-ECMP] suggests unbalanced traffic is quite 1282 possible, even with a large number of flows on a PCN-link 1283 (eg 1000), ie when Assumption 3 (aggregation) is likely to be 1284 satisfied. 1286 (2) If admission control is based on measurements of pre-congestion 1287 on specific ECMP paths, then the viewpoint that probing is useful 1288 assumes: 1290 o There is no PCN-traffic on the ECMP path on which to base an 1291 admission decision. 1293 o Simply admitting the new flow has a significant risk of leading to 1294 overload. 1296 o The PCN-egress-node can match a packet to an ECMP path. 1298 o Note: This is similar to the first viewpoint and so similarly 1299 could occur in a flash crowd if a new flow starts more-or-less 1300 simultaneously on many of the empty ECMP paths. Because there are 1301 several (sometimes many) ECMP paths between each pair of PCN- 1302 boundary-nodes, it's presumably more likely that an ECMP path is 1303 'empty' than an ingress-egress-aggregate. To constrain the number 1304 of ECMP paths, a few tunnels could be set up between each pair of 1305 PCN-boundary-nodes. Tunnelling also satisfies the third bullet 1306 above (which is otherwise hard because an ECMP routing decision is 1307 made independently on each node). 1309 The downsides of probing for this viewpoint are: 1311 o Probing adds delay to the admission control process. 1313 o Sufficient probing traffic has to be generated to test the pre- 1314 congestion level of the ECMP path.
But there's the risk that the 1315 probing traffic itself may cause pre-congestion, causing other 1316 PCN-flows to be blocked or even terminated. 1318 o The PCN-egress-node needs to consume the probe packets to ensure 1319 they don't travel beyond the PCN-domain (eg they might confuse the 1320 destination end node). Hence somehow the PCN-egress-node has to 1321 be able to disambiguate a probe packet from a data packet, via the 1322 characteristic setting of particular bit(s) in the packet's header 1323 or body - but these bit(s) mustn't be used by any PCN-interior- 1324 node's ECMP algorithm. In the general case this isn't possible, 1325 but it should be OK for a typical ECMP algorithm which examines: 1326 the source and destination IP addresses and port numbers, the 1327 protocol ID and the DSCP. 1329 The third viewpoint assumes the following: 1331 o Every admission control decision involves probing, using the 1332 signalling set-up message as the probe packet (eg RSVP PATH). 1334 o The PCN-marking behaviour is such that every packet is PCN-marked 1335 if the flow should be blocked, hence only a single probing packet 1336 is needed. 1338 This viewpoint [I-D.draft-babiarz-pcn-3sm] has in particular been 1339 suggested for the scenario where the PCN-domain reaches out towards 1340 the end terminals (note that it's assumed the trust and aggregation 1341 assumptions still hold), although it has also been suggested for 1342 other scenarios. 1344 8. Operations and Management 1346 This Section considers operations and management issues, under the 1347 FCAPS headings: OAM of Faults, Configuration, Accounting, Performance 1348 and Security. Provisioning is discussed with performance. 1350 8.1. Configuration OAM 1352 This architecture document predates the detailed standards actions of 1353 the PCN WG. 
Here we assume that only interoperable PCN-marking 1354 behaviours will be standardised; otherwise we would have to consider 1355 how to avoid interactions between non-interoperable marking 1356 behaviours. However, more diversity in PCN-boundary-node behaviours 1357 is expected, in order to interface with diverse industry 1358 architectures. It may be possible to have different PCN-boundary- 1359 node behaviours for different ingress-egress-aggregates within the 1360 same PCN-domain. 1362 PCN functionality is configured on either the egress or the ingress 1363 interfaces of PCN-nodes. A consistent choice must be made across the 1364 PCN-domain to ensure that the PCN mechanisms protect all links. 1366 PCN configuration control variables fall into the following 1367 categories: 1369 o system options (enabling or disabling behaviours) 1371 o parameters (setting levels, addresses etc) 1373 One possibility is that all configurable variables sit within an SNMP 1374 management framework [RFC3411], being structured within a defined 1375 management information base (MIB) on each node, and being remotely 1376 readable and settable via a suitably secure management protocol 1377 (SNMPv3). 1379 Some configuration options and parameters have to be set once to 1380 'globally' control the whole PCN-domain. Where possible, these are 1381 identified below. Such global settings may affect operational 1382 complexity and the chances of interoperability problems between kit 1383 from different vendors. 1385 It may be possible for an operator to configure some PCN-interior- 1386 nodes so they don't run the PCN mechanisms, if it knows that the 1387 links concerned will never become (pre-)congested. 1389 8.1.1. System options 1391 On PCN-interior-nodes there will be very few system options: 1393 o Whether two PCN-markings (based on the PCN-lower-rate and PCN- 1394 upper-rate) are enabled or only one (see Section 4.3).
Typically 1395 all nodes throughout a PCN-domain will be configured the same in 1396 this respect. However, exceptions could be made. For example, if 1397 most PCN-nodes used both markings, but some legacy hardware was 1398 incapable of running two algorithms, an operator might be willing 1399 to configure these legacy nodes solely for PCN-marking based on 1400 the PCN-upper-rate to enable flow termination as a back-stop. It 1401 would be sensible to place such nodes where they could be 1402 provisioned with a greater leeway over expected traffic levels. 1404 o which marking algorithm to use, if an equipment vendor provides a 1405 choice 1407 PCN-boundary-nodes (ingress and egress) will have more system 1408 options: 1410 o Which of admission and flow termination are enabled. If any PCN- 1411 interior-node is configured to generate a marking, all PCN- 1412 boundary-nodes must be able to handle that marking. Therefore all 1413 PCN-boundary-nodes must be configured the same in this respect. 1415 o Where flow admission and termination decisions are made: at the 1416 PCN-ingress-node, PCN-egress-node or at a centralised node (see 1417 Sections 5.4 and 5.5). Theoretically, this configuration choice 1418 could be negotiated for each pair of PCN-boundary-nodes, but we 1419 cannot imagine why such complexity would be required, except 1420 perhaps in future inter-domain scenarios. 1422 PCN-egress-nodes will have further system options: 1424 o How the mapping should be established between each packet and its 1425 aggregate, eg by MPLS label, by IP packet filterspec; and how to 1426 take account of ECMP. 1428 o If an equipment vendor provides a choice, there may be options to 1429 select which smoothing algorithm to use for measurements. 1431 8.1.2. Parameters 1433 Like any DiffServ domain, every node within a PCN-domain will need to 1434 be configured with the DSCP(s) used to identify PCN-packets. 
On each 1435 interior link the main configuration parameters are the PCN-lower- 1436 rate and PCN-upper-rate. A larger PCN-lower-rate enables more PCN- 1437 traffic to be admitted on a link, hence improving capacity 1438 utilisation. A PCN-upper-rate set further above the PCN-lower-rate 1439 allows greater increases in traffic (whether due to natural 1440 fluctuations or some unexpected event) before any flows are 1441 terminated, ie minimises the chances of unnecessarily triggering the 1442 termination mechanism. For instance an operator may want to design 1443 their network so that it can cope with a failure of any single PCN- 1444 node without terminating any flows. 1446 Setting these rates on first deployment of PCN will be very similar 1447 to the traditional process for sizing an admission controlled 1448 network, depending on: the operator's requirements for minimising 1449 flow blocking (grade of service), the expected PCN traffic load on 1450 each link and its statistical characteristics (the traffic matrix), 1451 contingency for re-routing the PCN traffic matrix in the event of 1452 single or multiple failures and the expected load from other classes 1453 relative to link capacities [Menth]. But once a domain is up and 1454 running, a PCN design goal is to be able to determine growth in these 1455 configured rates much more simply, by monitoring PCN-marking rates 1456 from actual rather than expected traffic (see Section 8.2 on 1457 Performance & Provisioning). 1459 Operators may also wish to configure a rate greater than the PCN- 1460 upper-rate that is the absolute maximum rate that a link allows for 1461 PCN-traffic. This may simply be the physical link rate, but some 1462 operators may wish to configure a logical limit to prevent starvation 1463 of other traffic classes during any brief period after PCN-traffic 1464 exceeds the PCN-upper-rate but before flow termination brings it back 1465 below this rate. 
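The roles of the two configured rates (plus the optional absolute maximum) can be sketched as follows. This is a deliberately simplified illustration, not a standardised PCN-marking algorithm: the function name and rate values are invented, and a real meter works on measured traffic per link over time rather than a single instantaneous rate:

```python
def pcn_link_state(pcn_rate_bps, lower_rate_bps, upper_rate_bps,
                   max_rate_bps=None):
    """Classify the metered PCN-traffic rate on one interior link.

    - up to the PCN-lower-rate: no marking, new flows admissible
    - between the lower and upper rates: admission-marking, so that
      PCN-boundary-nodes block new flows
    - above the PCN-upper-rate: termination-marking, so that enough
      admitted flows are terminated to bring the rate back down
    - above an optional absolute maximum: drop, protecting other
      traffic classes from starvation until termination takes effect
    """
    if max_rate_bps is not None and pcn_rate_bps > max_rate_bps:
        return "drop"
    if pcn_rate_bps > upper_rate_bps:
        return "termination-marking"
    if pcn_rate_bps > lower_rate_bps:
        return "admission-marking"
    return "no-marking"

# eg a link configured with lower/upper rates of 500/800 Mbit/s:
assert pcn_link_state(600e6, 500e6, 800e6) == "admission-marking"
```

The gap between the two rates is the headroom discussed above: the wider it is, the larger a traffic fluctuation or re-routing event the link can absorb before the termination mechanism is triggered.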
1467 Specific marking algorithms will also depend on further configuration 1468 parameters. For instance, threshold-marking will require a threshold 1469 queue depth and excess-rate-marking may require a scaling parameter. 1470 It will be preferable for each marking algorithm to have rules to set 1471 defaults for these parameters relative to the reference marking rate, 1472 but then allow operators to change them, for instance if average 1473 traffic characteristics change over time. The PCN-egress-node may 1474 allow configuration of the following: 1476 o how it smoothes metering of PCN-markings (eg EWMA parameters) 1478 Whichever node makes admission and flow termination decisions will 1479 contain algorithms for converting PCN-marking levels into admission 1480 or flow termination decisions. These will also require configurable 1481 parameters, for instance: 1483 o Any admission control algorithm will at least require a marking 1484 threshold setting above which it denies admission to new flows; 1486 o flow termination algorithms will probably require a parameter to 1487 delay termination of any flows until it is more certain that an 1488 anomalous event is not transient; 1490 o a parameter to control the trade-off between how quickly excess 1491 flows are terminated and over-termination. 1493 One particular proposal, [I-D.charny-pcn-single-marking] would 1494 require a global parameter to be defined on all PCN-nodes, but only 1495 needs the PCN-lower-rate to be configured on each link. The global 1496 parameter is a scaling factor between admission and termination, for 1497 example the amount by which the PCN-upper-rate is implicitly assumed 1498 to be above the PCN-lower-rate. [I-D.charny-pcn-single-marking] 1499 discusses in full the impact of this particular proposal on the 1500 operation of PCN. 1502 8.2. 
Performance & Provisioning OAM 1504 Monitoring of performance factors measurable from *outside* the PCN 1505 domain will be no different with PCN than with any other packet-based 1506 flow admission control system, both at the flow level (blocking 1507 probability etc) and the packet level (jitter [RFC3393], [Y.1541], 1508 loss rate [RFC4656], mean opinion score [P.800], etc). The 1509 difference is that PCN is intentionally designed to indicate 1510 *internally* which exact resource(s) are the cause of performance 1511 problems and by how much. 1513 Even better, PCN indicates which resources will probably cause 1514 problems if they are not upgraded soon. This can be achieved by the 1515 management system monitoring the total amount (in bytes) of PCN- 1516 marking generated by each queue over a period. Given possible long 1517 provisioning lead times, pre-congestion volume is the best metric to 1518 reveal whether sufficient persistent demand has mounted up to warrant 1519 an upgrade, because, even before utilisation becomes problematic, 1520 the statistical variability of traffic will cause occasional bursts 1521 of pre-congestion. This 'early warning system' decouples the process 1522 of adding customers from the provisioning process. This should cut 1523 the time to add a customer when compared against admission control 1524 provided over native DiffServ [RFC2998], because it saves having to 1525 re-run the capacity planning process before adding each customer. 1527 Alternatively, before triggering an upgrade, the long-term pre- 1528 congestion volume on each link can be used to balance traffic load 1529 across the PCN-domain by adjusting the link weights of the routing 1530 system. When an upgrade to a link's configured PCN-rates is 1531 required, it may also be necessary to upgrade the physical capacity 1532 available to other classes.
But usually there will be sufficient 1533 physical capacity for the upgrade to go ahead as a simple 1534 configuration change. Alternatively, [Songhurst] has proposed an 1535 adaptive rather than preconfigured system, where the configured PCN- 1536 lower-rate is replaced with a high and low water mark and the marking 1537 algorithm automatically optimises how physical capacity is shared 1538 using the relative loads from PCN and other traffic classes. 1540 All the above processes require just three extra counters associated 1541 with each PCN queue: PCN-markings associated with the PCN-lower-rate 1542 and PCN-upper-rate, and drop. Every time a PCN packet is marked or 1543 dropped its size in bytes should be added to the appropriate counter. 1544 Then the management system can read the counters at any time and 1545 subtract a previous reading to establish the incremental volume of 1546 each type of (pre-)congestion. Readings should be taken frequently, 1547 so that anomalous events (eg re-routes) can be separated from regular 1548 fluctuating demand if required. 1550 8.3. Accounting OAM 1552 Accounting is only done at trust boundaries so it is out of scope of 1553 the initial Charter of the PCN WG which is confined to intra-domain 1554 issues. Use of PCN internal to a domain makes no difference to the 1555 flow signalling events crossing trust boundaries outside the PCN- 1556 domain, which are typically used for accounting. 1558 8.4. Fault OAM 1560 Fault OAM is about preventing faults, telling the management system 1561 (or manual operator) that the system has recovered (or not) from a 1562 failure, and about maintaining information to aid fault diagnosis. 1564 Admission blocking and particularly flow termination mechanisms 1565 should rarely be needed in practice. It would be unfortunate if they 1566 didn't work after an option had been accidentally disabled. 
1567 Therefore it will be necessary to regularly test that the live system 1568 works as intended (devising a meaningful test is left as an exercise 1569 for the operator). 1571 Section 5.9 describes how the PCN architecture has been designed to 1572 ensure admitted flows continue gracefully after recovering 1573 automatically from link or node failures. The need to record and 1574 monitor re-routing events affecting signalling is unchanged by the 1575 addition of PCN to a DiffServ domain. Similarly, re-routing events 1576 within the PCN-domain will be recorded and monitored just as they 1577 would be without PCN. 1579 PCN-marking does make it possible to record 'near-misses'. For 1580 instance, at the PCN-egress-node a 'reporting threshold' could be set 1581 to monitor how often - and for how long - the system comes close to 1582 triggering flow blocking without actually doing so. Similarly, 1583 bursts of flow termination marking could be recorded even if they are 1584 not sufficiently sustained to trigger flow termination. Such 1585 statistics could be correlated with per-queue counts of marking 1586 volume (Section 8.2) to upgrade resources in danger of causing 1587 service degradation, or to trigger manual tracing of intermittent 1588 incipient errors that would otherwise have gone unnoticed. 1590 Finally, of course, many faults are caused by failings in the 1591 management process ('human error'): a wrongly configured address in a 1592 node, a wrong address given in a signalling protocol, a wrongly 1593 configured parameter in a queueing algorithm, a node set into a 1594 different mode from other nodes, and so on. Generally, a clean 1595 design with few configurable options ensures this class of faults can 1596 be traced more easily and prevented more often. Sound management 1597 practice at run-time also helps. 
For instance: a management system 1598 should be used that constrains configuration changes within system 1599 rules (eg preventing an option setting inconsistent with other 1600 nodes); configuration options should also be recorded in an offline 1601 database; and regular automatic consistency checks should be run 1602 between live systems and the database. PCN adds nothing specific to 1603 this class of problems. By the time standards are in place, we 1604 expect that the PCN WG will have ruthlessly removed gratuitous 1605 configuration choices. However, at the time of writing, the WG is 1606 yet to choose between multiple competing proposals, so the range of 1607 possible options in Section 8.1 does seem rather wide compared to the 1608 original near-zero configuration intent of the architecture. 1610 8.5. Security OAM 1612 Security OAM is about using secure operational practices as well as 1613 being able to track security breaches or near-misses at run-time. 1614 PCN adds few specifics to the general good practice required in this 1615 field [RFC4778], other than those below. The correct functioning of 1616 the system should be monitored (Section 8.2) in multiple independent 1617 ways and correlated to detect possible security breaches. Persistent 1618 (pre-)congestion marking should raise an alarm (both on the node 1619 doing the marking and on the PCN-egress-node metering it). 1620 Similarly, persistently poor external QoS metrics such as jitter or 1621 MOS should raise an alarm. The following are examples of symptoms 1622 that may be the result of innocent faults, rather than attacks, but 1623 until diagnosed they should be logged and trigger a security alarm: 1625 o Anomalous patterns of non-conforming incoming signals and packets 1626 rejected at the PCN-ingress-nodes (eg packets already marked PCN- 1627 capable, or traffic persistently starving token bucket policers).
1629 o PCN-capable packets arriving at a PCN-egress-node with no 1630 associated state for mapping them to a valid ingress-egress- 1631 aggregate. 1633 o A PCN-ingress-node receiving feedback signals about the pre- 1634 congestion level on a non-existent aggregate, or that are 1635 inconsistent with other signals (eg unexpected sequence numbers, 1636 inconsistent addressing, conflicting reports of the pre-congestion 1637 level, etc). 1639 o Packets arriving at a PCN-egress-node with 1640 (pre-)congestion markings focused on particular flows, rather than 1641 randomly distributed throughout the aggregate. 1643 9. IANA Considerations 1645 This memo includes no request to IANA. 1647 10. Security Considerations 1649 Security considerations essentially come from the Trust Assumption 1650 (Section 3.1), ie that all PCN-nodes are PCN-enabled and trust each 1651 other for truthful PCN-marking and transport. PCN splits 1652 functionality between PCN-interior-nodes and PCN-boundary-nodes, and 1653 the security considerations are somewhat different for each, mainly 1654 because PCN-boundary-nodes are flow-aware and PCN-interior-nodes are 1655 not. 1657 o because the PCN-boundary-nodes are flow-aware, they are trusted to 1658 use that awareness correctly. The degree of trust required 1659 depends on the kinds of decisions they have to make and the kinds 1660 of information they need to make them. For example, when the PCN- 1661 boundary-node needs to know the contents of the sessions for 1662 making the admission and termination decisions, or when the 1663 contents are highly classified, the security requirements for 1664 the PCN-boundary-nodes involved will also need to be high. 1666 o the PCN-ingress-nodes police packets to ensure a PCN-flow keeps 1667 within its agreed limit, and to ensure that only PCN-flows which 1668 have been admitted contribute PCN-traffic into the PCN-domain.
1669 The policer must drop (or perhaps re-mark to a different DSCP) any 1670 PCN-packets received that are outside this remit. This is similar 1671 to the existing IntServ behaviour. Between them the PCN-boundary- 1672 nodes must encircle the PCN-domain, otherwise PCN-packets could 1673 enter the PCN-domain without being subject to admission control, 1674 which would potentially destroy the QoS of existing flows. 1676 o PCN-interior-nodes aren't flow-aware. This prevents some security 1677 attacks where an attacker targets specific flows in the data plane 1678 - for instance for DoS or eavesdropping. 1680 o PCN-marking by the PCN-interior-nodes along the packet forwarding 1681 path needs to be trusted, because the PCN-boundary-nodes rely on 1682 this information. For instance a rogue PCN-interior-node could 1683 PCN-mark all packets so that no flows were admitted. Another 1684 possibility is that it doesn't PCN-mark any packets, even when 1685 it's pre-congested. More subtly, the rogue PCN-interior-node 1686 could perform these attacks selectively on particular flows, or it 1687 could PCN-mark the correct fraction overall, but carefully choose 1688 which flows it marked. 1690 o the PCN-boundary-nodes should be able to deal with DoS attacks and 1691 state exhaustion attacks based on fast changes in per flow 1692 signalling. 1694 o the signalling between the PCN-boundary-nodes (and possibly a 1695 central control node) must be protected from attacks. For example 1696 the recipient needs to validate that the message is indeed from 1697 the node that claims to have sent it. Possible measures include 1698 digest authentication and protection against replay and man-in- 1699 the-middle attacks. 
For the specific protocol RSVP, hop-by-hop 1700 authentication is in [RFC2747], and 1701 [I-D.behringer-tsvwg-rsvp-security-groupkeying] may also be 1702 useful; for a generic signalling protocol the PCN WG document on 1703 "Requirements for signalling" will describe the requirements in 1704 more detail. 1706 Operational security advice is given in Section 8.5. 1708 11. Conclusions 1710 The document describes a general architecture for flow admission and 1711 termination based on pre-congestion information in order to protect 1712 the quality of service of established inelastic flows within a single 1713 DiffServ domain. The main topic is the functional architecture 1714 (first covered at a high level and then at a greater level of 1715 detail). It also mentions other topics like the assumptions and open 1716 issues. 1718 12. Acknowledgements 1720 This document is a revised version of [I-D.eardley-pcn-architecture]. 1721 Its authors were: P. Eardley, J. Babiarz, K. Chan, A. Charny, R. 1722 Geib, G. Karagiannis, M. Menth, T. Tsou. They are therefore 1723 contributors to this document. 1725 Thanks to those who've made comments on 1726 [I-D.eardley-pcn-architecture] and on earlier versions of this draft: 1727 Lachlan Andrew, Joe Babiarz, Fred Baker, David Black, Steven Blake, 1728 Bob Briscoe, Ken Carlberg, Anna Charny, Joachim Charzinski, Andras 1729 Csaszar, Lars Eggert, Ruediger Geib, Robert Hancock, Ingemar 1730 Johansson, Georgios Karagiannis, Michael Menth, Tom Taylor, Hannes 1731 Tschofenig, Tina Tsou, Magnus Westerlund, Delei Yu. Thanks to Bob 1732 Briscoe who extensively revised the Operations and Management 1733 section. 1735 This document is the result of discussions in the PCN WG and 1736 forerunner activity in the TSVWG. 
A number of previous drafts were 1737 presented to TSVWG: [I-D.chan-pcn-problem-statement], 1738 [I-D.briscoe-tsvwg-cl-architecture], [I-D.briscoe-tsvwg-cl-phb], 1739 [I-D.charny-pcn-single-marking], [I-D.babiarz-pcn-sip-cap], 1740 [I-D.lefaucheur-rsvp-ecn], [I-D.westberg-pcn-load-control]. Their 1741 authors were: B. Briscoe, P. Eardley, D. Songhurst, F. Le 1742 Faucheur, A. Charny, J. Babiarz, K. Chan, S. Dudley, G. Karagiannis, 1743 A. Bader, L. Westberg, J. Zhang, V. Liatsos, X-G. Liu, A. Bhargava. 1745 13. Comments Solicited 1747 Comments and questions are encouraged and very welcome. They can be 1748 addressed to the IETF PCN working group mailing list. 1750 14. Changes 1752 14.1. Changes from -02 to -03 1754 o Abstract: Clarified by removing the term 'aggregated'. Follow-up 1755 clarifications later in draft: S1: expanded PCN-egress-nodes 1756 bullet to mention the case where the PCN-feedback-information is 1757 about one (or a few) PCN-marks, rather than aggregated information; 1758 S3 clarified PCN-meter; S5 minor changes; conclusion. 1760 o S1: added a paragraph about how the PCN-domain looks to the 1761 outside world (essentially it looks like a DiffServ domain). 1763 o S2: tweaked the PCN-traffic terminology bullet: changed PCN 1764 traffic classes to PCN behaviour aggregates, to be more in line 1765 with traditional DiffServ jargon (-> follow-up changes later in 1766 draft); included a definition of PCN-flows (and corrected a couple 1767 of 'PCN microflows' to 'PCN-flows' later in draft) 1769 o S3.5: added possibility of downgrading to best effort, where PCN- 1770 packets arrive at PCN-ingress-node already ECN marked (CE or ECN 1771 nonce) 1773 o S4: added note about whether to talk about PCN operating on an 1774 interface or on a link. In S8.1 (OAM) mentioned that PCN 1775 functionality needs to be configured consistently on either the 1776 ingress or the egress interface of PCN-nodes in a PCN-domain.
1778 o S5.2: clarified that signalling protocol installs flow filter spec 1779 at PCN-ingress-node (& updates after possible re-route) 1781 o S5.6: addressing: clarified 1783 o S5.7: added tunnelling issue of N^2 scaling if you set up a mesh 1784 of tunnels between PCN-boundary-nodes 1786 o S7.3: Clarified the "third viewpoint" of probing (always probe). 1788 o S8.1: clarified that SNMP is only an example; added note that an 1789 operator may be able to not run PCN on some PCN-interior-nodes, if 1790 it knows that these links will never become (pre-)congested; added 1791 note that it may be possible to have different PCN-boundary-node 1792 behaviours for different ingress-egress-aggregates within the same 1793 PCN-domain. 1795 o Appendix: Created an Appendix about "Possible work items beyond 1796 the scope of the current PCN WG Charter". Material moved from 1797 near start of S3 and elsewhere throughout draft. Moved text about 1798 centralised decision node to Appendix. 1800 o Other minor clarifications. 1802 14.2. Changes from -01 to -02 1804 o S1: Benefits: provisioning bullet extended to stress that PCN does 1805 not use RFC2475-style traffic conditioning. 1807 o S1: Deployment models: mentioned, as variant of PCN-domain 1808 extending to end nodes, that may extend to LAN edge switch. 1810 o S3.1: Trust Assumption: added note about not needing PCN-marking 1811 capability if known that an interface cannot become pre-congested. 1813 o S4: now divided into sub-sections 1815 o S4.1: Admission control: added second proposed method for how to 1816 decide to block new flows (PCN-egress-node receives one (or 1817 several) PCN-marked packets). 1819 o S5: Probing sub-section removed. Material now in new S7. 
1821 o S5.6: Addressing: clarified how PCN-ingress-node can discover 1822 address of PCN-egress-node 1824 o S5.6: Addressing: centralised node case, added that PCN-ingress- 1825 node may need to know address of PCN-egress-node 1827 o S5.8: Tunnelling: added case of "partially PCN-capable tunnel" and 1828 degraded bullet on this in S6 (Open Issues) 1830 o S7: Probing: new section. Much more comprehensive than old S5.5. 1832 o S8: Operations and Management: substantially revised. 1834 o other minor changes not affecting semantics 1836 14.3. Changes from -00 to -01 1838 In addition to clarifications and nit squashing, the main changes 1839 are: 1841 o S1: Benefits: added one about provisioning (and contrast with 1842 DiffServ SLAs) 1844 o S1: Benefits: clarified that the objective is also to stop PCN- 1845 packets being significantly delayed (previously only mentioned not 1846 dropping packets) 1848 o S1: Deployment models: added one where policing is done at ingress 1849 of access network and not at ingress of PCN-domain (assume trust 1850 between networks) 1852 o S1: Deployment models: corrected MPLS-TE to MPLS 1854 o S2: Terminology: adjusted definition of PCN-domain 1856 o S3.5: Other assumptions: corrected, so that two assumptions (PCN- 1857 nodes not performing ECN and PCN-ingress-node discarding arriving 1858 CE packet) only apply if the PCN WG decides to encode PCN-marking 1859 in the ECN-field. 1861 o S4 & S5: changed PCN-marking algorithm to marking behaviour 1863 o S4: clarified that PCN-interior-node functionality applies for 1864 each outgoing interface, and added clarification: "The 1865 functionality is also done by PCN-ingress-nodes for their outgoing 1866 interfaces (ie those 'inside' the PCN-domain)." 
o S4 (near end): altered to say that a PCN-node "should" dedicate 1869 some capacity to lower priority traffic so that it isn't starved 1870 (was "may") 1872 o S5: clarified to say that PCN functionality is done on an 1873 'interface' (rather than on a 'link') 1875 o S5.2: deleted erroneous mention of service level agreement 1877 o S5.5: Probing: re-written, especially to distinguish probing to 1878 test the ingress-egress-aggregate from probing to test a 1879 particular ECMP path. 1881 o S5.7: Addressing: added mention of probing; added a note that, in 1882 the case where traffic is always tunnelled across the PCN-domain, 1883 the PCN-ingress-node needs to know the address of the 1884 PCN-egress-node. 1886 o S5.8: Tunnelling: re-written, especially to provide a clearer 1887 description of copying on tunnel entry/exit, by adding explanation 1888 (keeping tunnel encaps/decaps and PCN-marking orthogonal), 1889 deleting one bullet ("if the inner header's marking state is more 1890 severe then it is preserved" - shouldn't happen), and better 1891 referencing of other IETF documents. 1893 o S6: Open issues: stressed that "NOTE: Potential solutions are out 1894 of scope for this document" and edited a couple of sentences that 1895 were close to solution space. 1897 o S6: Open issues: added one about scenarios with only one tunnel 1898 endpoint in the PCN-domain. 1900 o S6: Open issues: ECMP: added under-admission as another potential 1901 risk 1903 o S6: Open issues: added one about "Silent at start" 1904 o S10: Conclusions: a small conclusions section added 1906 15. Appendix A: Possible work items beyond the scope of the current PCN 1907 WG Charter 1909 This section mentions some topics that are outside the PCN WG's 1910 current Charter, but which have been mentioned as areas of interest.
1911 They might be work items for: the PCN WG after a future re- 1912 chartering; some other IETF WG; another standards body; an operator- 1913 specific usage that's not standardised. 1915 NOTE: It should be crystal clear that this section discusses 1916 possibilities only. 1918 The first set of possibilities relates to the restrictions on scope 1919 imposed by the PCN WG Charter (see Section 3): 1921 o a single PCN-domain encompasses several autonomous systems that 1922 don't trust each other (perhaps by using a mechanism like re-ECN 1923 [I-D.briscoe-re-pcn-border-cheat]). 1925 o not all the nodes run PCN. For example, the PCN-domain is a 1926 multi-site enterprise network. The sites are connected by a VPN 1927 tunnel; although PCN doesn't operate inside the tunnel, the PCN 1928 mechanisms still work properly because of the good QoS on the 1929 virtual link (the tunnel). Another example is that PCN is 1930 deployed on the general Internet (ie widely but not universally 1931 deployed). 1933 o applying the PCN mechanisms to other types of traffic, ie beyond 1934 inelastic traffic. For instance, applying the PCN mechanisms to 1935 traffic scheduled with the Assured Forwarding per-hop behaviour. 1936 One example could be flow-rate adaptation by elastic applications, 1937 which adapt according to the pre-congestion information. 1939 o the aggregation assumption doesn't hold, because the link capacity 1940 is too low. Measurement-based admission control is then risky. 1942 o the applicability of PCN mechanisms for emergency use (911, GETS, 1943 WPS, MLPP, etc.) 1945 Other possibilities include: 1947 o indicating pre-congestion through signalling messages rather than 1948 in-band (in the form of PCN-marked packets) 1950 o the decision-making functionality is at a centralised node rather 1951 than at the PCN-boundary-nodes.
This requires that the PCN- 1952 egress-node signals PCN-feedback-information to the centralised 1953 node, and that the centralised node signals the decision about 1954 admission (or termination) to the PCN-ingress-node. It may 1955 also need the centralised node and the PCN-boundary-nodes to know 1956 each other's addresses. It would be possible for the centralised 1957 node to be one of the PCN-boundary-nodes, when clearly the 1958 signalling would sometimes be replaced by a message internal to 1959 the node. 1965 o signalling extensions for specific protocols (eg RSVP, NSIS). For 1966 example: the details of how the signalling protocol installs the 1967 flowspec at the PCN-ingress-node for an admitted PCN-flow; and how 1968 the signalling protocol carries the PCN-feedback-information. 1969 Perhaps also for other functions such as: coping with failure of a 1970 PCN-boundary-node ([I-D.briscoe-tsvwg-cl-architecture] considers 1971 what happens if RSVP is the QoS signalling protocol); establishing 1972 a tunnel across the PCN-domain if it is necessary to carry ECN 1973 marks transparently. Note: There is a PCN WG Milestone on 1974 "Requirements for signalling", which is potential input for the 1975 appropriate WGs. 1977 o policing by the PCN-ingress-node may not be needed if the PCN- 1978 domain can trust that the upstream network has already policed the 1979 traffic on its behalf. 1981 o PCN for Pseudowire: PCN may be used as a congestion avoidance 1982 mechanism for edge-to-edge pseudowire emulations 1983 [I-D.ietf-pwe3-congestion-frmwk]. 1985 o PCN for MPLS: [RFC3270] defines how to support the DiffServ 1986 architecture in MPLS networks.
[RFC5129] describes how to add PCN 1987 for admission control of microflows into a set of MPLS 1988 (Multi-Protocol Label Switching) aggregates. PCN-marking is 1989 done in the MPLS EXP field. 1991 o PCN for Ethernet: Similarly, it may be possible to extend PCN into 1992 Ethernet networks, where PCN-marking is done in the Ethernet 1993 header. 1997 16. Informative References 1999 [I-D.briscoe-tsvwg-cl-architecture] 2000 Briscoe, B., "An edge-to-edge Deployment Model for Pre- 2001 Congestion Notification: Admission Control over a 2002 DiffServ Region", draft-briscoe-tsvwg-cl-architecture-04 2003 (work in progress), October 2006. 2005 [I-D.briscoe-tsvwg-cl-phb] 2006 Briscoe, B., "Pre-Congestion Notification marking", 2007 draft-briscoe-tsvwg-cl-phb-03 (work in progress), 2008 October 2006. 2010 [I-D.babiarz-pcn-sip-cap] 2011 Babiarz, J., "SIP Controlled Admission and Preemption", 2012 draft-babiarz-pcn-sip-cap-00 (work in progress), 2013 October 2006. 2015 [I-D.lefaucheur-rsvp-ecn] 2016 Faucheur, F., "RSVP Extensions for Admission Control over 2017 Diffserv using Pre-congestion Notification (PCN)", 2018 draft-lefaucheur-rsvp-ecn-01 (work in progress), 2019 June 2006. 2021 [I-D.chan-pcn-problem-statement] 2022 Chan, K., "Pre-Congestion Notification Problem Statement", 2023 draft-chan-pcn-problem-statement-01 (work in progress), 2024 October 2006. 2026 [I-D.ietf-pwe3-congestion-frmwk] 2027 Bryant, S., "Pseudowire Congestion Control Framework", 2028 draft-ietf-pwe3-congestion-frmwk-00 (work in progress), 2029 February 2007. 2031 [I-D.ietf-tsvwg-admitted-realtime-dscp] 2032 "DSCPs for Capacity-Admitted Traffic", November 2006. 2036 [I-D.briscoe-tsvwg-ecn-tunnel] 2037 "Layered Encapsulation of Congestion Notification", 2038 June 2007. 2041 [I-D.charny-pcn-single-marking] 2042 "Pre-Congestion Notification Using Single Marking for 2043 Admission and Termination", November 2007.
2047 [I-D.eardley-pcn-architecture] 2048 "Pre-Congestion Notification Architecture", June 2007. 2052 [I-D.westberg-pcn-load-control] 2053 "LC-PCN: The Load Control PCN Solution", November 2007. 2057 [I-D.behringer-tsvwg-rsvp-security-groupkeying] 2058 "Applicability of Keying Methods for RSVP Security", 2059 November 2007. 2062 [I-D.briscoe-re-pcn-border-cheat] 2063 "Emulating Border Flow Policing using Re-ECN on Bulk 2064 Data", June 2006. 2067 [I-D.babiarz-pcn-3sm] 2068 "Three State PCN Marking", November 2007. 2071 [RFC5129] "Explicit Congestion Marking in MPLS", RFC 5129, 2072 January 2008. 2074 [RFC4303] Kent, S., "IP Encapsulating Security Payload (ESP)", 2075 RFC 4303, December 2005. 2077 [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., 2078 and W. Weiss, "An Architecture for Differentiated 2079 Services", RFC 2475, December 1998. 2081 [RFC3246] Davie, B., Charny, A., Bennet, J., Benson, K., Le Boudec, 2082 J., Courtney, W., Davari, S., Firoiu, V., and D. 2083 Stiliadis, "An Expedited Forwarding PHB (Per-Hop 2084 Behavior)", RFC 3246, March 2002. 2086 [RFC4594] Babiarz, J., Chan, K., and F. Baker, "Configuration 2087 Guidelines for DiffServ Service Classes", RFC 4594, 2088 August 2006. 2090 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 2091 of Explicit Congestion Notification (ECN) to IP", 2092 RFC 3168, September 2001. 2094 [RFC2211] Wroclawski, J., "Specification of the Controlled-Load 2095 Network Element Service", RFC 2211, September 1997. 2097 [RFC2998] Bernet, Y., Ford, P., Yavatkar, R., Baker, F., Zhang, L., 2098 Speer, M., Braden, R., Davie, B., Wroclawski, J., and E. 2099 Felstaine, "A Framework for Integrated Services Operation 2100 over Diffserv Networks", RFC 2998, November 2000. 2102 [RFC3270] Le Faucheur, F., Wu, L., Davie, B., Davari, S., Vaananen, 2103 P., Krishnan, R., Cheval, P., and J.
Heinanen, "Multi- 2104 Protocol Label Switching (MPLS) Support of Differentiated 2105 Services", RFC 3270, May 2002. 2107 [RFC1633] Braden, B., Clark, D., and S. Shenker, "Integrated 2108 Services in the Internet Architecture: an Overview", 2109 RFC 1633, June 1994. 2111 [RFC2983] Black, D., "Differentiated Services and Tunnels", 2112 RFC 2983, October 2000. 2114 [RFC2747] Baker, F., Lindell, B., and M. Talwar, "RSVP Cryptographic 2115 Authentication", RFC 2747, January 2000. 2117 [RFC3411] Harrington, D., Presuhn, R., and B. Wijnen, "An 2118 Architecture for Describing Simple Network Management 2119 Protocol (SNMP) Management Frameworks", STD 62, RFC 3411, 2120 December 2002. 2122 [RFC3393] Demichelis, C. and P. Chimento, "IP Packet Delay Variation 2123 Metric for IP Performance Metrics (IPPM)", RFC 3393, 2124 November 2002. 2126 [RFC4656] Shalunov, S., Teitelbaum, B., Karp, A., Boote, J., and M. 2127 Zekauskas, "A One-way Active Measurement Protocol 2128 (OWAMP)", RFC 4656, September 2006. 2130 [RFC4778] Kaeo, M., "Operational Security Current Practices in 2131 Internet Service Provider Environments", RFC 4778, 2132 January 2007. 2134 [ITU-MLPP] 2135 "Multilevel Precedence and Pre-emption Service (MLPP)", 2136 ITU-T Recommendation I.255.3, 1990. 2138 [Iyer] "An approach to alleviate link overload as observed on an 2139 IP backbone", IEEE INFOCOM, 2003. 2142 [Shenker] "Fundamental design issues for the future Internet", IEEE 2143 Journal on Selected Areas in Communications, Vol. 13(7), 2144 pp. 1176-1188, 1995. 2146 [Y.1541] "Network Performance Objectives for IP-based Services", 2147 ITU-T Recommendation Y.1541, February 2006. 2149 [P.800] "Methods for subjective determination of transmission 2150 quality", ITU-T Recommendation P.800, August 1996. 2152 [Songhurst] 2153 "Guaranteed QoS Synthesis for Admission Control with 2154 Shared Capacity", BT Technical Report TR-CXR9-2006-001, 2155 February 2006.
2158 [Menth] "PCN-Based Resilient Network Admission Control: The Impact 2159 of a Single Bit", Technical Report, 2007. 2163 [PCN-email-ECMP] 2164 "Email to PCN WG mailing list", November 2007. 2167 [PCN-email-traffic-empty-aggregates] 2168 "Email to PCN WG mailing list", October 2007. 2171 Author's Address 2173 Philip Eardley 2174 BT 2175 B54/77, Sirius House, Adastral Park, Martlesham Heath, 2176 Ipswich, Suffolk IP5 3RE 2177 United Kingdom 2179 Email: philip.eardley@bt.com 2181 Full Copyright Statement 2183 Copyright (C) The IETF Trust (2008). 2185 This document is subject to the rights, licenses and restrictions 2186 contained in BCP 78, and except as set forth therein, the authors 2187 retain all their rights. 2189 This document and the information contained herein are provided on an 2190 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 2191 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 2192 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 2193 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 2194 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 2195 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 2197 Intellectual Property 2199 The IETF takes no position regarding the validity or scope of any 2200 Intellectual Property Rights or other rights that might be claimed to 2201 pertain to the implementation or use of the technology described in 2202 this document or the extent to which any license under such rights 2203 might or might not be available; nor does it represent that it has 2204 made any independent effort to identify any such rights. Information 2205 on the procedures with respect to rights in RFC documents can be 2206 found in BCP 78 and BCP 79.
2208 Copies of IPR disclosures made to the IETF Secretariat and any 2209 assurances of licenses to be made available, or the result of an 2210 attempt made to obtain a general license or permission for the use of 2211 such proprietary rights by implementers or users of this 2212 specification can be obtained from the IETF on-line IPR repository at 2213 http://www.ietf.org/ipr. 2215 The IETF invites any interested party to bring to its attention any 2216 copyrights, patents or patent applications, or other proprietary 2217 rights that may cover technology that may be required to implement 2218 this standard. Please address the information to the IETF at 2219 ietf-ipr@ietf.org. 2221 Acknowledgment 2223 Funding for the RFC Editor function is provided by the IETF 2224 Administrative Support Activity (IASA).