Congestion and Pre-Congestion                                    Philip.
Eardley (Editor) 3 Notification Working Group BT 4 Internet-Draft October 26, 2007 5 Intended status: Informational 6 Expires: April 28, 2008 8 Pre-Congestion Notification Architecture 9 draft-ietf-pcn-architecture-01 11 Status of this Memo 13 By submitting this Internet-Draft, each author represents that any 14 applicable patent or other IPR claims of which he or she is aware 15 have been or will be disclosed, and any of which he or she becomes 16 aware will be disclosed, in accordance with Section 6 of BCP 79. 18 Internet-Drafts are working documents of the Internet Engineering 19 Task Force (IETF), its areas, and its working groups. Note that 20 other groups may also distribute working documents as Internet- 21 Drafts. 23 Internet-Drafts are draft documents valid for a maximum of six months 24 and may be updated, replaced, or obsoleted by other documents at any 25 time. It is inappropriate to use Internet-Drafts as reference 26 material or to cite them other than as "work in progress." 28 The list of current Internet-Drafts can be accessed at 29 http://www.ietf.org/ietf/1id-abstracts.txt. 31 The list of Internet-Draft Shadow Directories can be accessed at 32 http://www.ietf.org/shadow.html. 34 This Internet-Draft will expire on April 28, 2008. 36 Copyright Notice 38 Copyright (C) The IETF Trust (2007). 40 Abstract 42 The purpose of this document is to describe a general architecture 43 for flow admission and termination based on aggregated pre-congestion 44 information in order to protect the quality of service of established 45 inelastic flows within a single DiffServ domain. 47 Status 49 Table of Contents 51 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 3 52 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 6 53 3. Assumptions and constraints on scope . . . . . . . . . . . . . 8 54 3.1. Assumption 1: Trust - controlled environment . . . . . . . 9 55 3.2. Assumption 2: Real-time applications . . . . . . . . . . . 9 56 3.3. Assumption 3: Many flows and additional load . . . . . . . 9 57 3.4. Assumption 4: Emergency use out of scope . . . . . . . . . 10 58 3.5. Other assumptions . . . . . . . . . . . . . . . . . . . . 10 59 4. High-level functional architecture . . . . . . . . . . . . . . 10 60 5. Detailed Functional architecture . . . . . . . . . . . . . . . 14 61 5.1. PCN-interior-node functions . . . . . . . . . . . . . . . 15 62 5.2. PCN-ingress-node functions . . . . . . . . . . . . . . . . 16 63 5.3. PCN-egress-node functions . . . . . . . . . . . . . . . . 16 64 5.4. Admission control functions . . . . . . . . . . . . . . . 17 65 5.5. Probing functions . . . . . . . . . . . . . . . . . . . . 17 66 5.6. Flow termination functions . . . . . . . . . . . . . . . . 19 67 5.7. Addressing . . . . . . . . . . . . . . . . . . . . . . . . 20 68 5.8. Tunnelling . . . . . . . . . . . . . . . . . . . . . . . . 21 69 5.9. Fault handling . . . . . . . . . . . . . . . . . . . . . . 22 70 6. Design goals and challenges . . . . . . . . . . . . . . . . . 22 71 7. Operations and Management . . . . . . . . . . . . . . . . . . 25 72 7.1. Fault OAM . . . . . . . . . . . . . . . . . . . . . . . . 25 73 7.2. Configuration OAM . . . . . . . . . . . . . . . . . . . . 26 74 7.3. Accounting OAM . . . . . . . . . . . . . . . . . . . . . . 27 75 7.4. Performance OAM . . . . . . . . . . . . . . . . . . . . . 28 76 7.5. Security OAM . . . . . . . . . . . . . . . . . . . . . . . 28 77 8. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 28 78 9. 
Security considerations . . . . . . . . . . . . . . . . . . . 28 79 10. Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . 30 80 11. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 30 81 12. Comments Solicited . . . . . . . . . . . . . . . . . . . . . . 30 82 13. Changes . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 83 14. References . . . . . . . . . . . . . . . . . . . . . . . . . . 32 84 14.1. Normative References . . . . . . . . . . . . . . . . . . . 32 85 14.2. Informative References . . . . . . . . . . . . . . . . . . 32 86 Author's Address . . . . . . . . . . . . . . . . . . . . . . . . . 35 87 Intellectual Property and Copyright Statements . . . . . . . . . . 36 89 1. Introduction 91 The purpose of this document is to describe a general architecture 92 for flow admission and termination based on aggregated (pre-) 93 congestion information in order to protect the quality of service of 94 flows within a DiffServ domain[RFC2475]. This document defines an 95 architecture for implementing two mechanisms to protect the quality 96 of service of established inelastic flows within a single DiffServ 97 domain, where all boundary and interior nodes are PCN-enabled and 98 trust each other for correct PCN operation. Flow admission control 99 determines whether a new flow should be admitted and protects the QoS 100 of existing PCN-flows in normal circumstances, by avoiding congestion 101 occurring. However, in abnormal circumstances, for instance a 102 disaster affecting multiple nodes and causing traffic re-routes, then 103 the QoS on existing PCN-flows may degrade even though care was 104 exercised when admitting those flows before those circumstances. 105 Therefore we also propose a mechanism for flow termination, which 106 removes enough traffic in order to protect the QoS of the remaining 107 PCN-flows. 109 As a fundamental building block to enable these two mechanisms, PCN- 110 interior-nodes generate, encode and transport pre-congestion 111 information towards the PCN-egress-nodes. Two rates, a PCN-lower- 112 rate and a PCN-upper-rate, can be associated with each link of the 113 PCN-domain. Each rate is used by a marking behaviour (specified in 114 another document) that determines how and when a number of PCN- 115 packets are marked, and how the markings are encoded in packet 116 headers. PCN-egress-nodes make measurements of the packet markings 117 and send information as necessary to the nodes that make the decision 118 about which PCN-flows to accept/reject or terminate, based on this 119 information. Another document will describe the decision-making 120 behaviours. Overall the aim is to enable PCN-nodes to give an "early 121 warning" of potential congestion before there is any significant 122 build-up of PCN-packets in the queue; the admission control mechanism 123 limits the PCN-traffic on each link to *roughly* its PCN-lower-rate 124 and the flow termination mechanism limits the PCN-traffic on each 125 link to *roughly* its PCN-upper-rate. 127 We believe that the key benefits of the PCN mechanisms described in 128 this document are that they are simple, scalable, and robust because: 130 o Per flow state is only required at the PCN-ingress-nodes 131 ("stateless core"). This is required for policing purposes (to 132 prevent non-admitted PCN traffic from entering the PCN-domain) and 133 so on. It is not generally required that other network entities 134 are aware of individual flows (although they may be in particular 135 deployment scenarios). 
137 o Admission control is resilient: PCN's QoS is decoupled from the 138 routing system; hence in general admitted flows can survive 139 capacity, routing or topology changes without additional 140 signalling, and they don't have to be told (or learn) about such 141 changes. The PCN-lower-rates can be chosen small enough that 142 admitted traffic can still be carried after a rerouting in most 143 failure cases. This is an important feature as QoS violations in 144 core networks due to link failures are more likely than QoS 145 violations due to increased traffic volume [Iyer]. 147 o The PCN-marking behaviours only operate on the overall PCN-traffic 148 on the link, not per flow. 150 o The information of these measurements is signalled to the PCN- 151 egress-nodes by the PCN-marks in the packet headers. No 152 additional signalling protocol is required for transporting the 153 PCN-marks. Therefore no secure binding is required between data 154 packets and separate congestion messages. 156 o The PCN-egress-nodes make separate measurements, operating on the 157 overall PCN-traffic, for each PCN-ingress-node, ie not per flow. 158 Similarly, signalling by the PCN-egress-node of PCN-feedback- 159 information (which is used for flow admission and termination 160 decisions) is at the granularity of the ingress-egress-aggregate. 162 o The admitted PCN-load is controlled dynamically. Therefore it 163 adapts as the traffic matrix changes, and also if the network 164 topology changes (eg after a link failure). Hence an operator can 165 be less conservative when deploying network capacity, and less 166 accurate in their prediction of the PCN-traffic matrix. 168 o The termination mechanism complements admission control. It 169 allows the network to recover from sudden unexpected surges of 170 PCN-traffic on some links, thus restoring QoS to the remaining 171 flows. Such scenarios are expected to be rare but not impossible. 172 They can be caused by large network failures that redirect lots of 173 admitted PCN-traffic to other links, or by malfunction of the 174 measurement-based admission control in the presence of admitted 175 flows that send for a while with an atypically low rate and then 176 increase their rates in a correlated way. 178 o The PCN-upper-rate may be set below the maximum rate that PCN- 179 traffic can be transmitted on a link, in order to trigger 180 termination of some PCN-flows before loss (or excessive delay) of 181 PCN-packets occurs, or to keep the maximum PCN-load on a link 182 below a level configured by the operator. 184 o Provisioning of the network is decoupled from the process of 185 adding new customers. By contrast, with the DiffServ architecture 186 [RFC2475] the operator has to run the provisioning process each 187 time a new customer is added to check that the Service Level 188 Agreement can be fulfilled. 190 Operators of networks will want to use the PCN mechanisms in various 191 arrangements, for instance depending on how they are performing 192 admission control outside the PCN-domain (users after all are 193 concerned about QoS end-to-end), what their particular goals and 194 assumptions are, and so on. Several deployment models are possible: 196 o An operator may choose to deploy either admission control or flow 197 termination or both (see Section 7.2). 199 o IntServ over DiffServ [RFC2998]. 
The DiffServ region is PCN-enabled, RSVP signalling is used end-to-end and the PCN-domain is a single RSVP hop, ie only the PCN-boundary-nodes process RSVP messages.  Outside the PCN-domain RSVP messages are processed on each hop.  This is described in [I-D.briscoe-tsvwg-cl-architecture].

o  RSVP signalling is originated and/or terminated by proxies, with
   application-layer signalling between the end user and the proxy.
   For instance SIP signalling with a home hub.

o  Similar to the previous bullets, but NSIS signalling is used
   instead of RSVP.

o  NOTE: Consideration of signalling extensions for specific
   protocols is outside the scope of the PCN WG, however it will
   produce a "Requirements for signalling" document as potential
   input for the appropriate WGs.

o  Depending on the deployment scenario, the decision-making
   functionality (about flow admission and termination) could reside
   at the PCN-ingress-nodes or PCN-egress-nodes or at some central
   control node in the PCN-domain.  NOTE: The Charter restricts us:
   the decision-making functionality is at the PCN-boundary-nodes.

o  If the operator runs both the access network and the core network,
   one deployment scenario is that only the core network uses PCN
   admission control, but per microflow policing is done at the
   ingress to the access network and not at the PCN-ingress-node.
   Note: to aid readability, the rest of this draft assumes that
   policing is done by the PCN-ingress-nodes.

o  There are several PCN-domains on the end-to-end path, each
   operating PCN mechanisms independently.  NOTE: The Charter
   restricts us to considering a single PCN-domain.  A possibility
   after re-chartering is to consider that the PCN-domain encompasses
   several DiffServ domains that don't trust each other (ie this
   weakens Assumption 1 about trust, see Section 3.1).

o  The PCN-domain extends to the end users.  NOTE: This is outside
   the Charter because it breaks Assumption 3 (aggregation, see
   later); incidentally it doesn't necessarily break Assumption 1
   (trust), because in some environments, eg corporate, the end user
   may have a controlled configuration and so be trusted.  The
   scenario is described in [I-D.babiarz-pcn-sip-cap].

o  Pseudowire: PCN may be used as a congestion avoidance mechanism
   for edge-to-edge pseudowire emulations
   [I-D.ietf-pwe3-congestion-frmwk].  NOTE: Specific consideration of
   pseudowires is not in the PCN WG Charter.

o  MPLS: [RFC3270] defines how to support the DiffServ architecture
   in Multi-Protocol Label Switching (MPLS) networks.
   [I-D.ietf-tsvwg-ecn-mpls] describes how to add PCN for admission
   control of microflows into a set of MPLS aggregates.  PCN-marking
   is done in the MPLS EXP field.

o  Similarly, it may be possible to extend PCN into Ethernet
   networks, where PCN-marking is done in the Ethernet header.  NOTE:
   Specific consideration of this extension is outside the IETF's
   remit.

2.  Terminology

o  PCN-domain: a PCN-capable domain; a contiguous set of PCN-enabled
   nodes that perform DiffServ scheduling; the complete set of PCN-
   nodes whose PCN-marking can in principle influence decisions about
   flow admission and termination for the PCN-domain, including the
   PCN-egress-nodes which measure these PCN-marks.

o  PCN-boundary-node: a PCN-node that connects one PCN-domain to a
   node either in another PCN-domain or in a non PCN-domain.
272 o PCN-interior-node: a node in a PCN-domain that is not a PCN- 273 boundary-node. 275 o PCN-node: a PCN-boundary-node or a PCN-interior-node 276 o PCN-egress-node: a PCN-boundary-node in its role in handling 277 traffic as it leaves a PCN-domain. 279 o PCN-ingress-node: a PCN-boundary-node in its role in handling 280 traffic as it enters a PCN-domain. 282 o PCN-traffic: A PCN-domain carries traffic of different DiffServ 283 classes [RFC4594]. Those using the PCN mechanisms are called PCN- 284 classes (collectively called PCN-traffic) and the corresponding 285 packets are PCN-packets. The same network may carry traffic using 286 other DiffServ classes. 288 o Ingress-egress-aggregate: The collection of PCN-packets from all 289 PCN-flows that travel in one direction between a specific pair of 290 PCN-boundary-nodes. 292 o PCN-lower-rate: a reference rate configured for each link in the 293 PCN-domain, which is lower than the PCN-upper-rate. It is used by 294 a marking behaviour that determines whether a packet should be 295 PCN-marked with a first encoding. 297 o PCN-upper-rate: a reference rate configured for each link in the 298 PCN-domain, which is higher than the PCN-lower-rate. It is used 299 by a marking behaviour that determines whether a packet should be 300 PCN-marked with a second encoding. 302 o Threshold-marking: a PCN-marking behaviour such that all PCN- 303 traffic is marked if the PCN-traffic exceeds a particular rate 304 (either the PCN-lower-rate or PCN-upper-rate). NOTE: The 305 definition reflects the overall intent rather than its 306 instantaneous behaviour, since the rate measured at a particular 307 moment depends on the behaviour, its implementation and the 308 traffic's variance as well as its rate. 310 o Excess-rate-marking: a PCN-marking behaviour such that the amount 311 of PCN-traffic that is PCN-marked is equal to the amount that 312 exceeds a particular rate (either the PCN-lower-rate or PCN-upper- 313 rate). NOTE: The definition reflects the overall intent rather 314 than its instantaneous behaviour, since the rate measured at a 315 particular moment depends on the behaviour, its implementation and 316 the traffic's variance as well as its rate. 318 o Pre-congestion: a condition of a link within a PCN-domain in which 319 the PCN-node performs PCN-marking, in order to provide an "early 320 warning" of potential congestion before there is any significant 321 build-up of PCN-packets in the real queue. 323 o PCN-marking: the process of setting the header in a PCN-packet 324 based on defined rules, in reaction to pre-congestion. 326 o {{if necessary: PCN-lower-rate-marking and PCN-upper-rate- 327 marking}} 329 o PCN-feedback-information: information signalled by a PCN-egress- 330 node to a PCN-ingress-node or central control node, which is 331 needed for the flow admission and flow termination mechanisms. 333 3. Assumptions and constraints on scope 335 The PCN WG's charter restricts the initial scope by a set of 336 assumptions. Here we list those assumptions and explain them. 338 1. these components are deployed in a single DiffServ domain, within 339 which all PCN-nodes are PCN-enabled and trust each other for 340 truthful PCN-marking and transport 342 2. all flows handled by these mechanisms are inelastic and 343 constrained to a known peak rate through policing or shaping 345 3. the number of PCN-flows across any potential bottleneck link is 346 sufficiently large that stateless, statistical mechanisms can be 347 effective. 
To put it another way, the aggregate bit rate of PCN- 348 traffic across any potential bottleneck link needs to be 349 sufficiently large relative to the maximum additional bit rate 350 added by one flow 352 4. PCN-flows may have different precedence, but the applicability of 353 the PCN mechanisms for emergency use (911, GETS, WPS, MLPP, etc.) 354 is out of scope 356 After completion of the initial phase, the PCN WG may re-charter to 357 develop solutions for specific scenarios where some of these 358 restrictions are not in place. It may also re-charter to consider 359 applying the PCN mechanisms to additional deployment scenarios. One 360 possible example is where a single PCN-domain encompasses several 361 DiffServ domains that don't trust each other (perhaps by using a 362 mechanism like re-ECN,[I-D.briscoe-re-pcn-border-cheat]). The WG may 363 also re-charter to investigate additional response mechanisms that 364 act on (pre-)congestion information. One example could be flow-rate 365 adaptation by elastic applications (rather than flow admission or 366 termination). The details of these work items are outside the scope 367 of the initial phase, but the WG may consider their requirements in 368 order to design components that are sufficiently general to support 369 such extensions in the future. The working assumption is that the 370 standards developed in the initial phase should not need to be 371 modified to satisfy the solutions for when these restrictions are 372 removed. 374 3.1. Assumption 1: Trust - controlled environment 376 We assume that the PCN-domain is a controlled environment, i.e. all 377 the nodes in a PCN-domain run PCN and trust each other. There are 378 several reasons for proposing this assumption: 380 o The PCN-domain has to be encircled by a ring of PCN-boundary- 381 nodes, otherwise PCN-packets could enter the PCN-domain without 382 being subject to admission control, which would potentially 383 destroy the QoS of existing flows. 385 o Similarly, a PCN-boundary-node has to trust that all the PCN-nodes 386 are doing PCN-marking. A non PCN-node wouldn't be able to alert 387 that it is suffering pre-congestion, which potentially would lead 388 to too many PCN-flows being admitted (or too few being 389 terminated). Worse, a rogue node could perform various attacks, 390 as discussed in the Security Considerations section. 392 One way of assuring the above two points is that the entire PCN- 393 domain is run by a single operator. Another possibility is that 394 there are several operators but they trust each other to a sufficient 395 level, in their handling of PCN-traffic. 397 3.2. Assumption 2: Real-time applications 399 We assume that any variation of source bit rate is independent of the 400 level of pre-congestion. We assume that PCN-packets come from real 401 time applications generating inelastic traffic [Shenker] like voice 402 and video requiring low delay, jitter and packet loss, for example 403 the Controlled Load Service, [RFC2211], and the Telephony service 404 class, [RFC4594]. This assumption is to help focus the effort where 405 it looks like PCN would be most useful, ie the sorts of applications 406 where per flow QoS is a known requirement. For instance, the impact 407 of this assumption would be to guide simulations work. 409 3.3. 
Assumption 3: Many flows and additional load 411 We assume that there are many flows on any bottleneck link in the 412 PCN-domain (or, to put it another way, the aggregate bit rate of PCN- 413 traffic across any potential bottleneck link is sufficiently large 414 relative to the maximum additional bit rate added by one flow). 415 Measurement-based admission control assumes that the present is a 416 reasonable prediction of the future: the network conditions are 417 measured at the time of a new flow request, however the actual 418 network performance must be OK during the call some time later. One 419 issue is that if there are only a few variable rate flows, then the 420 aggregate traffic level may vary a lot, perhaps enough to cause some 421 packets to get dropped. If there are many flows then the aggregate 422 traffic level should be statistically smoothed. How many flows is 423 enough depends on a number of things such as the variation in each 424 flow's rate, the total rate of PCN-traffic, and the size of the 425 "safety margin" between the traffic level at which we start 426 admission-marking and at which packets are dropped or significantly 427 delayed. 429 We do not make explicit assumptions on how many PCN-flows are in each 430 ingress-egress-aggregate. Performance evaluation work may clarify 431 whether it is necessary to make any additional assumption on 432 aggregation at the ingress-egress-aggregate level. 434 3.4. Assumption 4: Emergency use out of scope 436 PCN-flows may have different precedence, but the applicability of the 437 PCN mechanisms for emergency use (911, GETS, WPS, MLPP, etc) is out 438 of scope for consideration by the PCN WG. 440 3.5. Other assumptions 442 As a consequence of Assumption 2 above, it is assumed that PCN- 443 marking is being applied to traffic scheduled with the expedited 444 forwarding per-hop behaviour, [RFC3246], or traffic with similar 445 characteristics. 447 The following two assumptions apply if the PCN WG decides to encode 448 PCN-marking in the ECN-field. 450 o It is assumed that PCN-nodes do not perform ECN, [RFC3168], on 451 PCN-packets. 453 o If a packet that is part of a PCN-flow arrives at a PCN-ingress- 454 node with its CE (Congestion experienced) codepoint set, then we 455 assume that the PCN-ingress-node drops the packet. After its 456 initial Charter is complete, the WG may decide to work on a 457 mechanism (such as through a signalling extension) that enables 458 ECN-marking to be carried transparently across the PCN-domain. 460 4. High-level functional architecture 462 The high-level approach is to split functionality between: 464 o PCN-interior-nodes 'inside' the PCN-domain, which monitor their 465 own state of pre-congestion on each outgoing interface and mark 466 PCN-packets if appropriate. They are not flow-aware, nor aware of 467 ingress-egress-aggregates. The functionality is also done by PCN- 468 ingress-nodes for their outgoing interfaces (ie those 'inside' the 469 PCN-domain). 471 o PCN-boundary-nodes at the edge of the PCN-domain, which control 472 admission of new PCN-flows and termination of existing PCN-flows, 473 based on information from PCN-interior-nodes. This information is 474 in the form of the PCN-marked data packets (which are intercepted 475 by the PCN-egress-nodes) and not signalling messages. Generally 476 PCN-ingress-nodes are flow-aware and in several deployment 477 scenarios PCN-egress-nodes will also be flow aware. 
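As a purely illustrative, non-normative sketch of the first role, the
Python fragment below shows one way a PCN-interior-node might meter
the aggregate PCN-traffic on an outgoing interface and PCN-mark the
excess, assuming an excess-rate-marking behaviour built on a simple
token bucket.  The class, field and parameter names are invented for
this example; the actual marking behaviour(s) and encoding will be
specified in separate PCN WG documents.

   import time

   class ExcessRateMarker:
       """Illustrative excess-rate meter/marker for one outgoing
       interface.  Tokens accumulate at the configured reference rate
       (here the PCN-lower-rate); PCN-packets arriving when the bucket
       is empty are PCN-marked, so the volume of marked traffic
       approximates the traffic in excess of that rate."""

       def __init__(self, pcn_lower_rate_bps, bucket_depth_bytes):
           self.rate = pcn_lower_rate_bps / 8.0      # bytes per second
           self.depth = bucket_depth_bytes
           self.tokens = bucket_depth_bytes
           self.last = time.monotonic()

       def meter_and_mark(self, packet):
           """Meter the aggregate PCN-traffic (not per flow) and
           PCN-mark this packet if the aggregate exceeds the rate."""
           if not packet.get("pcn"):                 # ignore non-PCN traffic
               return packet
           now = time.monotonic()
           self.tokens = min(self.depth,
                             self.tokens + (now - self.last) * self.rate)
           self.last = now
           if self.tokens >= packet["size"]:
               self.tokens -= packet["size"]         # within PCN-lower-rate
           else:
               packet["pcn_mark"] = "first-encoding" # pre-congestion signal
           return packet

A threshold-marking behaviour would differ in that, once the metered
rate exceeded the reference rate, all PCN-packets (not just the
excess) would be marked; both behaviours are defined in Section 2.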
479 The aim of this split is to keep the bulk of the network simple, 480 scalable and robust, whilst confining policy, application-level and 481 security interactions to the edge of the PCN-domain. For example the 482 lack of flow awareness means that the PCN-interior-nodes don't care 483 about the flow information associated with the PCN-packets that they 484 carry, nor do the PCN-boundary-nodes care about which PCN-interior- 485 nodes its flows traverse. 487 Flow admission: 489 At a high level, flow admission control works as follows. In order 490 to generate information about the current state of the PCN-domain, 491 each PCN-node PCN-marks packets if it is "pre-congested". Exactly 492 how a PCN-node decides if it is "pre-congested" (the algorithm) and 493 exactly how packets are "PCN-marked" (the encoding) will be defined 494 in a separate standards-track document, but at a high level it is 495 expected to be as follows: 497 o the algorithm: a PCN-node meters the amount of PCN-traffic on each 498 one of its outgoing links. The measurement is made as an 499 aggregate of all PCN-packets, and not per flow. The algorithm has 500 a configured parameter, PCN-lower-rate. As the amount of PCN- 501 traffic exceeds the PCN-lower-rate, then PCN-packets are PCN- 502 marked. See NOTE below for more explanation. 504 o the encoding: a PCN-node PCN-marks a PCN-packet (with a first 505 encoding) by setting fields in the header to specific values. It 506 is expected that the ECN and/or DSCP fields will be used. 508 NOTE: Two main categories of algorithm have been proposed: if the 509 algorithm uses threshold-marking then all PCN-packets are marked if 510 the current rate exceeds the PCN-lower-rate, whereas if the algorithm 511 uses excess-rate-marking the amount marked is equal to the amount in 512 excess of the PCN-lower-rate. However, note that this description 513 reflects the overall intent of the algorithm rather than its 514 instantaneous behaviour, since the rate measured at a particular 515 moment depends on the detailed algorithm, its implementation (eg 516 virtual queue, token bucket...) and the traffic's variance as well as 517 its rate (eg marking may well continue after a recent overload even 518 after the instantaneous rate has dropped). 520 The PCN-boundary-nodes monitor the PCN-marked packets in order to 521 extract information about the current state of the PCN-domain. Based 522 on this monitoring, a decision is made about whether to admit a 523 prospective new flow. Exactly how the admission control decision is 524 made will be defined separately (at the moment the intention is that 525 there will be one or more informational-track RFCs), but at a high 526 level it is expected to be as follows: 528 o the PCN-egress-node measures (possibly as a moving average) the 529 fraction of the PCN-traffic that is PCN-marked. The fraction is 530 measured for a specific ingress-egress-aggregate. If the fraction 531 is below a threshold value then the new flow is admitted. 533 Note that the PCN-lower-rate is a parameter that can be configured by 534 the operator. It will be set lower than the traffic rate at which 535 the link becomes congested and the node drops packets. (Hence, by 536 analogy with ECN we call our mechanism Pre-Congestion Notification.) 538 Note also that the admission control decision is made for a 539 particular ingress-egress-aggregate. 
So it is quite possible for a 540 new flow to be admitted between one pair of PCN-boundary-nodes, 541 whilst at the same time another admission request is blocked between 542 a different pair of PCN-boundary-nodes. 544 Flow termination: 546 At a high level, flow termination control works as follows. Each 547 PCN-node PCN-marks packets in a similar fashion to above. An obvious 548 approach is for the algorithm to use a second configured parameter, 549 PCN-upper-rate, and a second header encoding ("PCN-upper-rate- 550 marking"). However there is also a proposal to use the same rate and 551 the same encoding. Several approaches have been proposed to date 552 about how to convert this information into a flow termination 553 decision; at a high level these are as follows: 555 o One approach measures the rate of unmarked PCN-traffic (ie not 556 PCN-upper-rate-marked) at the PCN-egress-node, which is the amount 557 of PCN-traffic that can actually be supported; the PCN-ingress- 558 node measures the rate of PCN-traffic that is destined for this 559 specific PCN-egress-node, and hence can calculate the excess 560 amount that should be terminated. 562 o Another approach instead measures the rate of PCN-upper-rate- 563 marked traffic and calculates and selects the flows that should be 564 terminated. 566 o Another approach terminates any PCN-flow with a PCN-upper-rate- 567 marked packet. Compared with the approaches above, PCN-marking 568 needs to be done at a reduced rate otherwise far too much traffic 569 would be terminated. 571 o Another approach uses only one sort of marking, which is based on 572 the PCN-lower-rate, to decide not only whether to admit more PCN- 573 flows but also whether any PCN-flows need to be terminated. It 574 assumes that the ratio of the (implicit) PCN-upper-rate and the 575 PCN-lower-rate is the same on all links. This approach measures 576 the rate of unmarked PCN-traffic at a PCN-egress-node. The PCN- 577 ingress-node uses this measurement to compute the implicit PCN- 578 upper-rate of the bottleneck link. It then measures the rate of 579 PCN-traffic that is destined for this specific PCN-egress-node and 580 hence can calculate the amount that should be terminated. 582 Since flow termination is designed for "abnormal" circumstances, it 583 is quite likely that some PCN-nodes are congested and hence packets 584 are being dropped and/or significantly queued. The flow termination 585 mechanism must bear this in mind. 587 Note also that the termination control decision is made for a 588 particular ingress-egress-aggregate. So it is quite possible for 589 PCN-flows to be terminated between one pair of PCN-boundary-nodes, 590 whilst at the same time none are terminated between a different pair 591 of PCN-boundary-nodes. 593 Although designed to work together, flow admission and flow 594 termination are independent mechanisms, and the use of one does not 595 require or prevent the use of the other (discussed further in Section 596 7.2). 598 Information transport: 600 The transport of pre-congestion information from a PCN-node to a PCN- 601 egress-node is through PCN-markings in data packet headers, no 602 signalling protocol messaging is needed. However, signalling is 603 needed to transport PCN-feedback-information between the PCN- 604 boundary-nodes, for example to convey the fraction of PCN-marked 605 traffic from a PCN-egress-node to the relevant PCN-ingress-node. 
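As a non-normative illustration of how such PCN-feedback-information
might be used, the sketch below shows a decision point (here assumed
to sit at the PCN-ingress-node) applying the fraction-of-marked-
traffic admission test and the "sustainable rate" style of flow
termination described above.  The threshold value, the function names
and the smallest-flow-first choice of which PCN-flows to terminate
are all invented for this example and are not part of any agreed
behaviour.

   class IngressDecisionPoint:
       """Illustrative per ingress-egress-aggregate decision logic."""

       def __init__(self, admission_threshold=0.01):
           # Block new PCN-flows once more than ~1% of the aggregate is
           # PCN-marked (an invented figure; the real value is an
           # operator choice).
           self.admission_threshold = admission_threshold

       def admit(self, marked_fraction):
           """Admission test on the latest PCN-feedback-information."""
           return marked_fraction < self.admission_threshold

       def terminate(self, sent_rate_bps, sustainable_rate_bps, flows):
           """Select PCN-flows to terminate so that the aggregate fits
           within the rate of unmarked ('sustainable') traffic measured
           by the PCN-egress-node."""
           excess = sent_rate_bps - sustainable_rate_bps
           victims = []
           for flow_id, flow_rate in sorted(flows.items(),
                                            key=lambda kv: kv[1]):
               if excess <= 0:
                   break
               victims.append(flow_id)
               excess -= flow_rate
           return victims

   # Example: 2% of the aggregate is marked, so a new flow is blocked;
   # 10 Mbit/s is being sent but only 8 Mbit/s is sustainable, so flows
   # adding up to at least 2 Mbit/s are selected for termination.
   dp = IngressDecisionPoint()
   print(dp.admit(marked_fraction=0.02))                  # False
   print(dp.terminate(10_000_000, 8_000_000,
                      {"flow-a": 1_500_000, "flow-b": 800_000,
                       "flow-c": 2_000_000}))             # ['flow-b', 'flow-a']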
606 Exactly what information needs to be transported will be described in 607 the future PCN WG document(s) about the boundary mechanisms. The 608 signalling could be done by an extension of RSVP or NSIS, for 609 instance; protocol work will be done by the relevant WG, but for 610 example [I-D.lefaucheur-rsvp-ecn] describes the extensions needed for 611 RSVP. 613 The following are some high-level points about how PCN works: 615 o There needs to be a way for a PCN-node to distinguish PCN-traffic 616 from non PCN-traffic. They may be distinguished using the DSCP 617 field and/or ECN field. 619 o The PCN mechanisms may be applied to more than one traffic class 620 (which are distinguished by DSCP). 622 o There may be traffic that is more important than PCN, perhaps a 623 particular application or an operator's control messages. A PCN- 624 node may dedicate capacity to such traffic or priority schedule it 625 over PCN. In the latter case its traffic needs to contribute to 626 the PCN meters. 628 o There will be traffic less important than PCN. For instance best 629 effort or assured forwarding traffic. It will be scheduled at 630 lower priority than PCN, and use a separate queue or queues. 631 However, a PCN-node should dedicate some capacity to lower 632 priority traffic so that it isn't starved. 634 o There may be other traffic with the same priority as PCN-traffic. 635 For instance, Expedited Forwarding sessions that are originated 636 either without capacity admission or with traffic engineering. In 637 [I-D.ietf-tsvwg-admitted-realtime-dscp] the two traffic classes 638 are called EF and EF-ADMIT. A PCN-node could either use separate 639 queues, or separate policers and a common queue; the draft 640 provides some guidance when each is better, but for instance the 641 latter is preferred when the two traffic classes are carrying the 642 same type of application with the same jitter requirements. 644 5. Detailed Functional architecture 646 This section is intended to provide a systematic summary of the new 647 functional architecture in the PCN-domain. First it describes 648 functions needed at the three specific types of PCN-node; these are 649 data plane functions and are in addition to their normal router 650 functions. Then it describes further functionality needed for both 651 flow admission control and flow termination; these are signalling and 652 decision-making functions, and there are various possibilities for 653 where the functions are physically located. The section is split 654 into: 656 1. functions needed at PCN-interior-nodes 658 2. functions needed at PCN-ingress-nodes 660 3. functions needed at PCN-egress-nodes 662 4. other functions needed for flow admission control 664 5. other functions needed for probing (which may be needed 665 sometimes) 667 6. other functions needed for flow termination control 669 The section then discusses some other detailed topics: 671 1. addressing 673 2. tunnelling 675 3. fault handling 677 5.1. PCN-interior-node functions 679 Each interface of the PCN-domain is upgraded with the following 680 functionality: 682 o Packet classify - decide whether an incoming packet is a PCN- 683 packet or not. Another PCN WG document will specify encoding, 684 using the DSCP and/or ECN fields. 686 o PCN-meter - measure the 'amount of PCN-traffic'. The measurement 687 is made as an aggregate of all PCN-packets, and not per flow. 
689 o PCN-mark - algorithms determine whether to PCN-mark PCN-packets 690 and what packet encoding is used (as specified in another PCN WG 691 document). 693 The same general approach of metering and PCN-marking is performed 694 for both flow admission control and flow termination, however the 695 algorithms and encoding may be different. 697 These functions are needed for each interface of the PCN-domain. 698 They are therefore needed on all interfaces of PCN-interior-nodes, 699 and on the interfaces of PCN-boundary-nodes that are internal to the 700 PCN-domain. There may be more than one PCN-meter and marker 701 installed at a given interface, eg one for admission and one for 702 termination. 704 5.2. PCN-ingress-node functions 706 Each ingress interface of the PCN-domain is upgraded with the 707 following functionality: 709 o Packet classify - decide whether an incoming packet is part of a 710 previously admitted microflow, by using a filter spec (eg DSCP, 711 source and destination addresses and port numbers) 713 o Police - police, by dropping or re-marking with a non-PCN DSCP, 714 any packets received with a DSCP demanding PCN transport that do 715 not belong to an admitted flow. Similarly, police packets that 716 are part of a previously admitted microflow, to check that the 717 microflow keeps to the agreed rate or flowspec (eg RFC1633 718 [RFC1633] and NSIS equivalent). 720 o PCN-colour - set the DSCP field or DSCP and ECN fields to the 721 appropriate value(s) for a PCN-packet. The draft about PCN- 722 encoding will discuss further. 724 o PCN-meter - make "measurements of PCN-traffic". Some approaches 725 to flow termination require the PCN-ingress-node to measure the 726 (aggregate) rate of PCN-traffic towards a particular PCN-egress- 727 node. 729 The first two are policing functions, needed to make sure that PCN- 730 packets let into the PCN-domain belong to a flow that's been admitted 731 and to ensure that the flow doesn't go at a faster rate than agreed. 732 The filter spec will for example come from the flow request message 733 (outside scope of PCN WG, see [I-D.briscoe-tsvwg-cl-architecture] for 734 an example using RSVP). PCN-colouring allows the rest of the PCN- 735 domain to recognise PCN-packets. 737 5.3. PCN-egress-node functions 739 Each egress interface of the PCN-domain is upgraded with the 740 following functionality: 742 o Packet classify - determine which PCN-ingress-node a PCN-packet 743 has come from. 745 o PCN-meter - make measurements of PCN-traffic. The measurement(s) 746 is made as an aggregate (ie not per flow) of all PCN-packets from 747 a particular PCN-ingress-node. 749 o PCN-colour - for PCN-packets, set the DSCP and ECN fields to the 750 appropriate values for use outside the PCN-domain. 752 Another PCN WG document, about boundary mechanisms, will describe 753 what the "measurements of PCN-traffic" are. This depends on whether 754 the measurement is targeted at admission control or flow termination. 755 It also depends on what encoding and PCN-marking algorithms are 756 specified by the PCN WG. 758 5.4. Admission control functions 760 Specific admission control functions can be performed at a PCN- 761 boundary-node (PCN-ingress-node or PCN-egress-node) or at a 762 centralised node, but not at normal PCN-interior-nodes. 
The 763 functions are: 765 o Make decision about admission - compare the required "measurements 766 of PCN-traffic" (output of the PCN-egress-node's PCN-meter 767 function) with some reference level, and hence decide whether to 768 admit the potential new PCN-flow. As well as the PCN 769 measurements, the decision takes account of policy and application 770 layer requirements. 772 o Communicate decision about admission - signal the decision to the 773 node making the admission control request (which may be outside 774 the PCN-region), and to the policer (PCN-ingress-node function) 776 There are various possibilities for how the functionality can be 777 distributed (we assume the operator would configure which is used): 779 o The decision is made at the PCN-egress-node and signalled to the 780 PCN-ingress-node 782 o The decision is made at the PCN-ingress-node, which requires that 783 the PCN-egress-node signals to the PCN-ingress-node the fraction 784 of PCN-traffic that is PCN-marked (or whatever the PCN WG agrees 785 as the required "measurements of PCN-traffic"). 787 o The decision is made at a centralised node, which requires that 788 the PCN-egress-node signals its measurements to the centralised 789 node, and that the centralised node signals to the PCN-ingress- 790 node about the decision about admission control. It would be 791 possible for the centralised node to be one of the PCN-boundary- 792 nodes, when clearly the signalling would sometimes be replaced by 793 a message internal to the node. 795 5.5. Probing functions 797 Probing functions are optional, and can be used for admission 798 control. 800 PCN's admission control, as described so far, is essentially a 801 reactive mechanism where the PCN-egress-node monitors the pre- 802 congestion level for traffic from each PCN-ingress-node; if the level 803 rises then it blocks new flows on that ingress-egress-aggregate. 804 However, it's possible that an ingress-egress-aggregate carries no 805 traffic, and so the PCN-egress-node can't make an admission decision 806 using the usual method described earlier. 808 One approach is to be "optimistic" and simply admit the new flow. 809 However it's possible to envisage a scenario where the traffic levels 810 on other ingress-egress-aggregates are already so high that they're 811 blocking new PCN-flows and admitting a new flow onto this 'empty' 812 ingress-egress-aggregate would add extra traffic onto the link that's 813 already pre-congested - which may 'tip the balance' so that PCN's 814 flow termination mechanism is activated or some packets are dropped. 815 This risk could be lessened by configuring on each link sufficient 816 'safety margin' above the PCN-lower-rate. 818 An alternative approach is to make PCN a more proactive mechanism. 819 The PCN-ingress-node explicitly determines, before admitting the 820 prospective new flow, whether the ingress-egress-aggregate can 821 support it. This can be seen as a "pessimistic" approach, in 822 contrast to the "optimism" of the approach above. It involves 823 probing: a PCN-ingress-node generates and sends probe packets in 824 order to test the pre-congestion level that the flow would 825 experience. A probe packet is just a dummy data packet, generated by 826 the PCN-ingress-node and addressed to the PCN-egress-node. A 827 downside of probing is that it adds delay to the admission control 828 process. 
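Purely as an illustration of this probing idea, the sketch below has
the PCN-ingress-node send a short burst of dummy probe packets and
base the admission decision on the fraction of them reported as
PCN-marked; the probe count, spacing and the 'send_probe' transport
hook are invented for this example and are not specified anywhere.

   import time

   def probe_aggregate(send_probe, probe_count=10, interval_s=0.02):
       """Send dummy probe packets towards the PCN-egress-node and
       report the fraction that were PCN-marked on the way.

       'send_probe' is assumed to transmit one probe packet across the
       PCN-domain and return True if the PCN-egress-node (which meters
       and then consumes the probes) reported it as PCN-marked."""
       marked = 0
       for _ in range(probe_count):
           if send_probe():
               marked += 1
           time.sleep(interval_s)    # this is the extra admission delay
       return marked / probe_count

   # Example with a stand-in transport: an unmarked aggregate yields
   # 0.0, so the prospective new flow would be admitted.
   print(probe_aggregate(send_probe=lambda: False))        # 0.0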
Also note that in the scenario described above (where traffic levels
on other ingress-egress-aggregates are already very high), the probe
packets may also 'tip the balance'.  However, the risk should be
reduced because it should be possible to send probe packets for a
shorter time and at a lower rate than a typical data flow.

The situation is more complicated if there is multipath routing
(ECMP) in the PCN-domain.  It is then possible for some paths to be
pre-congested whilst other paths within the same ingress-egress-
aggregate aren't pre-congested.

One approach essentially ignores ECMP: as usual, admit or block a new
flow depending on the "measurements of PCN-traffic" on the ingress-
egress-aggregate.  This is rather similar to the "optimistic"
approach above.

An alternative ("pessimistic" or "proactive") approach is to probe
the ECMP path.  The PCN-ingress-node generates and sends probe
packets (dummy data) that follow the specific ECMP path that the new
flow's packets would take, in order to test the pre-congestion level
along it.  An ECMP algorithm typically examines the source and
destination IP addresses and port numbers, the protocol ID and the
DSCP.  Hence these fields must have the same values in the probe
packets as the future data packets would have.  On the other hand,
the PCN-egress-node needs to consume the probe packets to ensure that
they don't travel beyond the PCN-domain (eg they might confuse the
destination end node).  Hence somehow the PCN-egress-node has to be
able to disambiguate a probe packet from a data packet, via the
characteristic setting of particular bit(s) in the packet's header or
body - but these bit(s) mustn't be used by any PCN-interior-node's
ECMP algorithm.  This should be possible with a typical ECMP
algorithm, but isn't guaranteed in the general case.

The probing functions are:

o  Make decision that probing is needed.  As described above, this is
   when the ingress-egress-aggregate or the ECMP path carries no PCN-
   traffic.  An alternative is always to probe, ie probe before
   admitting every PCN-flow.

o  (if required) Communicate the request that probing is needed - the
   PCN-egress-node signals to the PCN-ingress-node that probing is
   needed.

o  Generate probe traffic - the PCN-ingress-node generates the probe
   traffic.  The appropriate number (or rate) of probe packets will
   depend on the PCN-marking algorithm; for example an excess-rate-
   marking algorithm generates fewer PCN-marks than a threshold-
   marking algorithm, and so will need more probe packets.

o  Forward probe packets - as far as PCN-interior-nodes are
   concerned, probe packets must be handled the same as (ordinary
   data) PCN-packets, in terms of routing, scheduling and PCN-
   marking.

o  Consume probe packets - the PCN-egress-node consumes probe packets
   to ensure that they don't travel beyond the PCN-domain.

5.6.  Flow termination functions

Specific termination control functions can be performed at a PCN-
boundary-node (PCN-ingress-node or PCN-egress-node) or at a
centralised node, but not at normal PCN-interior-nodes.  There are
various possibilities for how the functionality can be distributed,
similar to those discussed above in the Admission control section;
the flow termination decision could be made at the PCN-ingress-node,
the PCN-egress-node or at some centralised node.
The functions are: 897 o PCN-meter at PCN-egress-node - (as described in Section 5.3) make 898 "measurements of PCN-traffic" from a particular PCN-ingress-node. 900 o (if required) PCN-meter at PCN-ingress-node - make "measurements 901 of PCN-traffic" being sent towards a particular PCN-egress-node; 902 again, this is done for the ingress-egress-aggregate and not per 903 flow. 905 o (if required) Communicate "measurements of PCN-traffic" to the 906 node that makes the flow termination decision. For example, if 907 the PCN-ingress-node makes the decision then communicate the PCN- 908 egress-node's measurements to it (as in 909 [I-D.briscoe-tsvwg-cl-architecture]). 911 o Make decision about flow termination - use the "measurements of 912 PCN-traffic" to decide which PCN-flow or PCN-flows to terminate. 913 The decision takes account of policy and application layer 914 requirements. 916 o Communicate decision about flow termination - signal the decision 917 to the node that is able to terminate the flow (which may be 918 outside the PCN-region), and to the policer (PCN-ingress-node 919 function) 921 One particular proposal, [I-D.charny-pcn-single-marking], for PCN- 922 marking and performing flow admission and termination would require a 923 global parameter to be defined on all PCN-boundary-nodes in the PCN- 924 domain. [I-D.charny-pcn-single-marking] discusses in full the impact 925 of this particular proposal on the operation of PCN. 927 5.7. Addressing 929 PCN-nodes may need to know the address of other PCN-nodes: 931 o in all cases PCN-interior-nodes don't need to know the address of 932 any other PCN-nodes (except as normal their next hop neighbours) 934 o in the cases of admission or termination decision by a PCN- 935 boundary-node, the PCN-egress-node needs to know the address of 936 the PCN-ingress-node associated with a flow, at a minimum so that 937 the PCN-ingress-node can be informed to enforce the admission 938 decision (and any flow termination decision) through policing. 939 The addressing information can be gathered from signalling, for 940 example as described for RSVP in [I-D.lefaucheur-rsvp-ecn]. 941 Another alternative is to use a probe packet that includes as 942 payload the address of the PCN-ingress-node. Alternatively, if 943 PCN-traffic is always tunnelled across the PCN-domain, then the 944 PCN-ingress-node's address is simply the source address of the 945 outer packet header - but then the PCN-ingress-node needs to know 946 the address of the PCN-egress-node. 948 o in the cases of admission or termination decision by a central 949 control node, the PCN-egress-node needs to be configured with the 950 address of the centralised node. In addition, depending on the 951 exact deployment scenario and its signalling, the centralised node 952 may need to know the addresses of the PCN-ingress-node and PCN- 953 egress-node, the PCN-egress-node know the address of the PCN- 954 ingress-node, and the PCN-ingress-node know the address of the 955 centralised node. NOTE: Consideration of the centralised case is 956 out of scope of the initial PCN WG Charter. 958 5.8. Tunnelling 960 Tunnels may originate and/or terminate within a PCN-domain. It is 961 important that the PCN-marking of any packet can potentially 962 influence PCN's flow admission control and termination - it shouldn't 963 matter whether the packet happens to be tunnelled at the PCN-node 964 that PCN-marks the packet, or indeed whether it's decapsulated or 965 encapsulated by a subsequent PCN-node. 
This suggests that the 966 "uniform conceptual model" described in [RFC2983] should be re- 967 applied in the PCN context. In line with this and the approach of 968 [RFC4303] and [I-D.briscoe-tsvwg-ecn-tunnel], the following rule is 969 applied if encapsulation is done within the PCN-domain: 971 o any PCN-marking is copied into the outer header 973 Similarly, in line with the "uniform conceptual model" of [RFC2983] 974 and the "full-functionality option" of [RFC3168], the following rules 975 are applied if decapsulation is done within the PCN-domain: 977 o if the outer header's marking state is more severe then it is 978 copied onto the inner header 980 o NB the order of increasing severity is: unmarked; PCN-marking with 981 first encoding (ie associated with the PCN-lower-rate); PCN- 982 marking with second encoding (ie associated with the PCN-upper- 983 rate) 985 Another reason for the copying operations described above is to 986 simplify dealing with the various headers: PCN-marking is then 987 orthogonal to tunnel encapsulation /decapsulation. 989 An operator may wish to tunnel PCN-traffic from PCN-ingress-nodes to 990 PCN-egress-nodes. The PCN-marks shouldn't be visible outside the 991 PCN-domain, which can be achieved by doing the PCN-colour function 992 (Section 5.3) after all the other (PCN and tunnelling) functions. 994 The potential reasons for doing such tunnelling are: the PCN-egress- 995 node then automatically knows the address of the relevant PCN- 996 ingress-node for a flow; even if ECMP is running, all PCN-packets on 997 a particular ingress-egress-aggregate follow the same path. But it 998 also has drawbacks, for example the additional overhead in terms of 999 bandwidth and processing. 1001 5.9. Fault handling 1003 If a PCN-interior-node fails (or one of its links), then lower layer 1004 protection mechanisms or the regular IP routing protocol will 1005 eventually re-route round it. If the new route can carry all the 1006 admitted traffic, flows will gracefully continue. If instead this 1007 causes early warning of pre-congestion on the new route, then 1008 admission control based on pre-congestion notification will ensure 1009 new flows will not be admitted until enough existing flows have 1010 departed. Finally re-routing may result in heavy (pre-)congestion, 1011 when the flow termination mechanism will kick in. 1013 If a PCN-boundary-node fails then we would like the regular QoS 1014 signalling protocol to take care of things. As an example 1015 [I-D.briscoe-tsvwg-cl-architecture] considers what happens if RSVP is 1016 the QoS signalling protocol. The details for a specific signalling 1017 protocol are out of scope of the PCN WG, however there is a WG 1018 Milestone on generic "Requirements for signalling". 1020 6. Design goals and challenges 1022 Prior work on PCN and similar mechanisms has thrown up a number of 1023 considerations about PCN's design goals (things PCN should be good 1024 at) and some issues that have been hard to solve in a fully 1025 satisfactory manner. Taken as a whole it represents a list of trade- 1026 offs (it's unlikely that they can all be 100% achieved) and perhaps 1027 as evaluation criteria to help an operator (or the IETF) decide 1028 between options. 1030 The following are key design goals for PCN (based on 1031 [I-D.chan-pcn-problem-statement]): 1033 o The PCN-enabled packet forwarding network should be simple, 1034 scalable and robust 1036 o Compatibility with other traffic (i.e. 
a proposed solution should 1037 work well when non-PCN traffic is also present in the network) 1039 o Support of different types of real-time traffic (eg should work 1040 well with CBR and VBR voice and video sources treated together) 1042 o Reaction time of the mechanisms should be commensurate with the 1043 desired application-level requirements (e.g. a termination 1044 mechanism needs to terminate flows before significant QoS issues 1045 are experienced by real-time traffic, and before most users hang 1046 up) 1048 o Compatibility with different precedence levels of real-time 1049 applications (e.g. preferential treatment of higher precedence 1050 calls over lower precedence calls, [ITU-MLPP]. 1052 The following are open issues. They are mainly taken from 1053 [I-D.briscoe-tsvwg-cl-architecture] which also describes some 1054 possible solutions. Note that some may be considered unimportant in 1055 general or in specific deployment scenarios or by some operators. 1057 NOTE: Potential solutions are out of scope for this document. 1059 o ECMP (Equal Cost Multi-Path) Routing: The level of pre-congestion 1060 is measured on a specific ingress-egress-aggregate. However, if 1061 the PCN-domain runs ECMP, then traffic on this ingress-egress- 1062 aggregate may follow several different paths - some of the paths 1063 could be pre-congested whilst others are not. There are three 1064 potential problems: 1066 1. over-admission: a new flow is admitted (because the pre- 1067 congestion level measured by the PCN-egress-node is 1068 sufficiently diluted by unmarked packets from non-congested 1069 paths that a new flow is admitted), but its packets travel 1070 through a pre-congested PCN-node 1072 2. under-admission: a new flow is blocked (because the pre- 1073 congestion level measured by the PCN-egress-node is 1074 sufficiently increased by PCN-marked packets from pre- 1075 congested paths that a new flow is blocked), but its packets 1076 travel along an uncongested path 1078 3. ineffective termination: flows are terminated, however their 1079 path doesn't travel through the (pre-)congested router(s). 1080 Since flow termination is a 'last resort' that protects the 1081 network should over-admission occur, this problem is probably 1082 more important to solve than the other two. 1084 o ECMP and signalling: It is possible that, in a PCN-domain running 1085 ECMP, the signalling packets (eg RSVP, NSIS) follow a different 1086 path than the data packets. This depends on which fields the ECMP 1087 algorithm uses. 1089 o Tunnelling: There are scenarios where tunnelling makes it hard to 1090 determine the path in the PCN-domain. The problem, its impact and 1091 the potential solutions are similar to those for ECMP. 1093 o Scenarios with only one tunnel endpoint in the PCN domain: (1) The 1094 tunnel starts outside a PCN-domain and finishes inside it. If the 1095 packet arrives at the tunnel ingress with the same encoding as 1096 used within the PCN-domain to indicate PCN-marking, then this 1097 could lead the PCN-egress-node to falsely measure pre-congestion. 1098 (2) The tunnel starts inside a PCN-domain and finishes outside it. 1099 If the packet arrives at the tunnel ingress already PCN-marked, 1100 then it will still have the same encoding when it's decapsulated 1101 which could potentially confuse nodes beyond the tunnel egress. 
(3) Scenarios with only one tunnel endpoint in the PCN-domain may also make it harder for the PCN-egress-node to learn from the signalling messages (eg RSVP, NSIS) the identity of the PCN-ingress-node.

o Bi-Directional Sessions: Many applications have bi-directional sessions - hence there are two flows that should be admitted (or terminated) as a pair - for instance, a bi-directional voice call only makes sense if flows in both directions are admitted. However, PCN's mechanisms concern admission and termination of a single flow, and coordination of the decision for both flows is a matter for the signalling protocol and out of scope of PCN. One possible example would use SIP pre-conditions; there are others.

o Global Coordination: PCN makes its admission decision based on PCN-markings on a particular ingress-egress-aggregate. Decisions about flows through a different ingress-egress-aggregate are made independently. However, one can imagine network topologies and traffic matrices where, from a global perspective, it would be better to make a coordinated decision across all the ingress-egress-aggregates for the whole PCN-domain - for example, to block (or even terminate) flows on one ingress-egress-aggregate so that more important flows through a different ingress-egress-aggregate could be admitted. The problem may well be second order.

o Aggregate Traffic Characteristics: Even when the number of flows is stable, the traffic level through the PCN-domain will vary because the sources vary their traffic rates. PCN works best when there is not too much variability in the total traffic level at a PCN-node's interface (ie in the aggregate traffic from all sources). Too much variation means that a node may at one moment not be doing any PCN-marking and at another moment drop packets because it is overloaded. This makes it hard to tune the admission control scheme to stop admitting new flows at the right time. The problem is therefore more likely with fewer, burstier flows.

o Flash crowds and Speed of Reaction: PCN is a measurement-based mechanism, and so there is an inherent delay between packet marking by PCN-interior-nodes and any admission control reaction at PCN-boundary-nodes. For example, if a big burst of admission requests occurs in a very short space of time (eg prompted by a televote), they could potentially all be admitted before enough PCN-marks are seen to block new flows. In other words, any additional load offered within the reaction time of the mechanism mustn't move the PCN-domain directly from no congestion to overload. This 'vulnerability period' may have an impact at the signalling level; for instance, QoS requests should be rate limited to bound the number of requests able to arrive within the vulnerability period.

o Silent at start: After a successful admission request the source may wait some time before sending data (eg waiting for the called party to answer). The risk is then that, in some circumstances, PCN's measurements underestimate what the pre-congestion level will be when the source does start sending data.

o Compatibility of PCN-encoding with ECN-encoding: This issue will be considered further in the PCN WG Milestone 'Survey of encoding choices'.
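As noted in the ECMP item above, the following small sketch illustrates the dilution effect that can cause over-admission (and, with the numbers changed, under-admission). It is purely illustrative: the traffic split across paths, the marking fractions and the admission threshold are invented for the example and are not values defined by this document.

   # Purely illustrative: how ECMP can dilute the fraction of PCN-marked
   # traffic seen by the PCN-egress-node.  All numbers are invented.

   # Traffic carried by one ingress-egress-aggregate, split over four
   # ECMP paths (rates in Mbit/s), and the fraction of each path's
   # traffic that arrives PCN-marked.
   paths = [
       {"rate": 10.0, "marked_fraction": 0.5},   # pre-congested path
       {"rate": 40.0, "marked_fraction": 0.0},   # uncongested paths
       {"rate": 40.0, "marked_fraction": 0.0},
       {"rate": 10.0, "marked_fraction": 0.0},
   ]

   total = sum(p["rate"] for p in paths)
   marked = sum(p["rate"] * p["marked_fraction"] for p in paths)
   measured_fraction = marked / total      # seen by the PCN-egress-node

   ADMISSION_THRESHOLD = 0.1               # hypothetical configured value

   # Only 5% of the aggregate is marked, so new flows are still admitted
   # even though one path is already pre-congested (over-admission).
   # With a heavily marked path the measurement can instead exceed the
   # threshold and block flows that would follow uncongested paths
   # (under-admission).
   print(measured_fraction, measured_fraction < ADMISSION_THRESHOLD)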
7. Operations and Management

EDITOR'S NOTE: A re-write of this section is planned; some of the sub-sections are very short! The PCN WG Charter says that the architecture document should include security, manageability and operational considerations.

This section considers operations and management issues, under the FCAPS headings: OAM of Faults, Configuration, Accounting, Performance and Security.

7.1. Fault OAM

Fault OAM is about how to tell the management system (or a manual operator) that the system has recovered (or not) from a failure.

Faults include node or link failures, a wrongly configured address in a node, a wrong address given in a signalling protocol, a wrongly configured parameter in a queueing algorithm, and so on.

7.2. Configuration OAM

Perhaps the most important consideration here is that the level of detail of the standardisation affects what can be configured. We would like different implementations and configurations (eg different choices of parameters) that are compliant with the PCN standard to work together successfully.

Obvious configuration parameters are the PCN-lower-rate and PCN-upper-rate. A larger PCN-lower-rate enables more PCN-traffic to be admitted on a link, hence improving capacity utilisation. A PCN-upper-rate set further above the PCN-lower-rate allows greater increases in traffic (whether due to natural fluctuations or some unexpected event) before any flows are terminated, ie it minimises the chances of unnecessarily triggering the termination mechanism. A greater gap between the maximum rate at which PCN-traffic can be forwarded on a link and the PCN-lower-rate and PCN-upper-rate increases the 'safety margin', which can cover unexpected surges in traffic due, for instance, to a re-routing event; an operator may want to design their network so that it can cope with the failure of any single PCN-node without terminating any flows. Setting the rates will therefore depend on things like the operator's requirements, the link's capacity, the typical number of flows and perhaps their traffic characteristics, and so on.

Other configurable parameters concern the PCN-boundary-nodes, for example the amount of PCN-marked traffic above which new flows are blocked.
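As a purely illustrative example of such a PCN-boundary-node parameter, the sketch below shows a PCN-egress-node measuring the fraction of PCN-marked traffic on an ingress-egress-aggregate and comparing it with a configured blocking threshold. The threshold value, the byte-based metering and the function names are assumptions of the example; this document does not define them.

   # Purely illustrative sketch of an admission decision driven by the
   # amount of PCN-marked traffic on an ingress-egress-aggregate.
   # BLOCK_THRESHOLD and the byte-based metering are assumptions of the
   # example, not values or mechanisms defined by this document.

   BLOCK_THRESHOLD = 0.05   # hypothetical: block new flows above 5% marked

   class AggregateMeter:
       """State kept per ingress-egress-aggregate at the PCN-egress-node."""

       def __init__(self):
           self.marked_bytes = 0
           self.total_bytes = 0

       def record_packet(self, length, pcn_marked):
           """Called for every PCN-packet received on this aggregate."""
           self.total_bytes += length
           if pcn_marked:
               self.marked_bytes += length

       def end_of_interval(self):
           """Called once per measurement interval: decide whether new
           flows may be admitted, then reset for the next interval."""
           if self.total_bytes == 0:
               admit = True          # no evidence of pre-congestion
           else:
               admit = (self.marked_bytes / self.total_bytes) < BLOCK_THRESHOLD
           self.marked_bytes = 0
           self.total_bytes = 0
           return admit

Whether such a decision is taken at the PCN-egress-node, the PCN-ingress-node or a centralised node is part of the distribution of functions discussed in Sections 5.4 and 5.6, mentioned next.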
Another configuration choice is the distribution of the functions concerning flow admission and termination, given in Sections 5.4 and 5.6, which could potentially be under the control of a configuration parameter.

Another configuration decision is whether to operate both the admission control and termination mechanisms. Although we suggest that an operator uses both, this isn't required and some operators may want to implement only one. For example, an operator could use just admission control, solving heavy congestion (caused by re-routing) by 'just waiting': as sessions end, existing microflows naturally depart from the system over time, and the admission control mechanism will prevent admission of new microflows that use the affected links. So the PCN-domain will naturally return to normal operation, but with reduced capacity. The drawback of this approach is that, until PCN-flows naturally depart to relieve the congestion, all PCN-flows as well as lower priority services will be adversely affected. On the other hand, an operator could rely, for admission control, just on statically provisioned capacity per PCN-ingress-node (regardless of the PCN-egress-node of a flow), as is typical in the hose model of the DiffServ architecture [RFC2475]. Such traffic conditioning agreements can lead to focused overload: many flows happen to focus on a particular link and then all flows through the congested link fail catastrophically. The flow termination mechanism could then be used to counteract such a problem.

A different possibility is to configure only the PCN-lower-rate, and hence do only one type of PCN-marking, but to generate admission and flow termination responses from different levels of marking. This is suggested in [I-D.charny-pcn-single-marking], which gives some of the pros and cons of this approach.

Another PCN WG document will specify PCN-marking, in particular how many PCN-packets get PCN-marked according to what measure of PCN-traffic - for instance, an algorithm relating the current rate of PCN-traffic to the probability of admission-marking a packet. Depending on how tightly it is decided to specify this, there are potentially quite a few configuration choices, for instance (the first two are illustrated by the sketch after this list):

o does the probability go from 0% at one rate of PCN-traffic (the PCN-lower-rate) to 100% at a slightly higher rate (ie threshold-marking), or does it 'ramp up' gradually (as in RED)? Does the standard allow both?

o how is the current rate of PCN-traffic measured? Rate cannot be measured instantaneously, so how is this smoothed? A sliding window or an exponentially weighted moving average?

o is the PCN-lower-rate a fixed parameter? An idea raised in [Songhurst] is that the PCN-lower-rate on each router should depend on the current amount of non-PCN-traffic; the aim is that resource allocation reflects the traffic mix - for instance, more PCN-traffic could be admitted if the fraction of PCN-traffic was higher. Is this allowed?
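The sketch below illustrates the first two choices above: a threshold-marking function, a RED-like ramp, and smoothing of the measured rate by an exponentially weighted moving average. The specific rates, the ramp width and the smoothing weight are invented for the illustration and are not proposed by this document.

   # Purely illustrative sketch of two possible marking behaviours and of
   # rate smoothing by an exponentially weighted moving average.  The
   # rates, ramp width and smoothing weight are invented for the example.

   PCN_LOWER_RATE = 80.0e6   # hypothetical: 80 Mbit/s for this interface
   RAMP_WIDTH = 10.0e6       # hypothetical: ramp reaches 100% 10 Mbit/s higher
   EWMA_WEIGHT = 0.2         # hypothetical smoothing weight

   def threshold_marking_probability(rate):
       """0% below the PCN-lower-rate, 100% above it."""
       return 1.0 if rate > PCN_LOWER_RATE else 0.0

   def ramp_marking_probability(rate):
       """Ramps linearly from 0% at the PCN-lower-rate to 100% at
       PCN_LOWER_RATE + RAMP_WIDTH (RED-like behaviour)."""
       if rate <= PCN_LOWER_RATE:
           return 0.0
       return min(1.0, (rate - PCN_LOWER_RATE) / RAMP_WIDTH)

   def smooth_rate(previous_estimate, instantaneous_rate):
       """One step of exponentially weighted moving average smoothing of
       the measured rate of PCN-traffic."""
       return (1 - EWMA_WEIGHT) * previous_estimate + EWMA_WEIGHT * instantaneous_rate

Broadly, threshold-marking reacts sharply once the PCN-lower-rate is crossed, whereas a ramp gives earlier but weaker warning; the smoothing weight trades responsiveness against stability.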
Another question is whether there are any configuration parameters that have to be set once to 'globally' control the whole PCN-domain (as required by some proposals). This may affect operational complexity and the chances of interoperability problems between equipment from different vendors.

7.3. Accounting OAM

Accounting at the flow level will have to record instances of flow admission, rejection and termination, but accounting itself is outside the scope of PCN. The ability to enable or disable flow accounting for specific classes of flow, and to specify retrieval of accounting records in real time for specified classes of flow, is a general requirement not specific to PCN that may, however, find specific use when diagnosing faults affecting PCN operation.

7.4. Performance OAM

Performance OAM is about monitoring performance at run-time. There is a wide variety of performance metrics that it may be worth collecting at PCN-ingress-nodes, PCN-egress-nodes and PCN-interior-nodes. A detailed list of metrics is not part of this architecture document, but the sorts of things would be:

o can the operator identify 'hot spots' in the network (links which most often do PCN-marking)? This would help them plan to install extra capacity where it is most needed.

o what is the rate at which flows are admitted and terminated (for each pair of PCN-boundary-nodes)? Such information would be useful for fault management, network planning and service level monitoring.

7.5. Security OAM

Security OAM is about detecting security breaches or near-misses at run-time.

8. IANA Considerations

This memo includes no request to IANA.

9. Security considerations

Security considerations essentially come from the Trust Assumption (Section 3.1), ie that all PCN-nodes are PCN-enabled and trust each other for truthful PCN-marking and transport. PCN splits functionality between PCN-interior-nodes and PCN-boundary-nodes, and the security considerations are somewhat different for each, mainly because PCN-boundary-nodes are flow-aware and PCN-interior-nodes are not.

o Because the PCN-boundary-nodes are flow-aware, they are trusted to use that awareness correctly. The degree of trust required depends on the kinds of decisions they have to make and the kinds of information they need to make them. For example, when the PCN-boundary-node needs to know the contents of the sessions in order to make the admission and termination decisions (perhaps based on the MLPP precedence), or when the contents are highly classified, then the security requirements for the PCN-boundary-nodes involved will also need to be high.

o The PCN-ingress-nodes police packets to ensure that a flow stays within its agreed limit, and to ensure that only flows which have been admitted contribute PCN-traffic into the PCN-domain (a minimal sketch of such a policer appears after this list). The policer must drop (or perhaps re-mark to a different DSCP) any PCN-packets received that are outside this remit. This is similar to the existing IntServ behaviour. Between them, the PCN-boundary-nodes must encircle the PCN-domain, otherwise PCN-packets could enter the PCN-domain without being subject to admission control, which would potentially destroy the QoS of existing flows.

o PCN-interior-nodes aren't flow-aware. This prevents some security attacks where an attacker targets specific flows in the data plane - for instance for DoS or eavesdropping.

o PCN-marking by the PCN-interior-nodes along the packet forwarding path needs to be trusted, because the PCN-boundary-nodes rely on this information. For instance, a rogue PCN-interior-node could PCN-mark all packets so that no flows were admitted. Another possibility is that it doesn't PCN-mark any packets, even when it is pre-congested. More subtly, the rogue PCN-interior-node could perform these attacks selectively on particular flows, or it could PCN-mark the correct fraction overall but carefully choose which flows it marked.

o The PCN-boundary-nodes should be able to deal with DoS attacks and state exhaustion attacks based on fast changes in per-flow signalling.

o The signalling between the PCN-boundary-nodes (and possibly a central control node) must be protected from attacks. For example, the recipient needs to validate that the message is indeed from the node that claims to have sent it. Possible measures include digest authentication and protection against replay and man-in-the-middle attacks. For the specific protocol RSVP, hop-by-hop authentication is specified in [RFC2747], and [I-D.behringer-tsvwg-rsvp-security-groupkeying] may also be useful; for a generic signalling protocol, the PCN WG document on "Requirements for signalling" will describe the requirements in more detail.
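The ingress policing described in the list above could look roughly like the following sketch. It is purely illustrative: the per-flow token bucket, the flow identifier and the packet representation are assumptions of the example, not mechanisms mandated by this architecture.

   # Purely illustrative sketch of policing at a PCN-ingress-node: only
   # admitted flows may contribute PCN-traffic, and each admitted flow is
   # held to its agreed rate.  The per-flow token bucket, the flow
   # identifier and the packet representation are assumptions of the
   # example, not mechanisms mandated by this architecture.

   import time

   class FlowPolicer:
       def __init__(self, rate_bytes_per_s, bucket_bytes):
           self.rate = rate_bytes_per_s    # agreed rate for this flow
           self.bucket = bucket_bytes      # allowed burst size
           self.tokens = bucket_bytes
           self.last = time.monotonic()

       def conforms(self, length):
           """Token bucket check: does this packet fit the agreed limit?"""
           now = time.monotonic()
           self.tokens = min(self.bucket,
                             self.tokens + (now - self.last) * self.rate)
           self.last = now
           if self.tokens >= length:
               self.tokens -= length
               return True
           return False

   def police(packet, admitted_flows):
       """admitted_flows maps a flow identifier to its FlowPolicer."""
       policer = admitted_flows.get(packet["flow_id"])
       if policer is None or not policer.conforms(packet["length"]):
           return "drop"    # or re-mark to a different (non-PCN) DSCP
       return "forward"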
10. Conclusions

This document describes a general architecture for flow admission and termination based on aggregated pre-congestion information, in order to protect the quality of service of established inelastic flows within a single DiffServ domain. The main topic is the functional architecture (first covered at a high level and then at a greater level of detail). It also covers other topics such as the assumptions and open issues.

11. Acknowledgements

This document is a revised version of [I-D.eardley-pcn-architecture]. Its authors were: P. Eardley, J. Babiarz, K. Chan, A. Charny, R. Geib, G. Karagiannis, M. Menth, T. Tsou. They are therefore contributors to this document.

Thanks to those who've made comments on [I-D.eardley-pcn-architecture] and on earlier versions of this draft: Lachlan Andrew, Joe Babiarz, Fred Baker, David Black, Steven Blake, Bob Briscoe, Ken Carlberg, Anna Charny, Joachim Charzinski, Andras Csaszar, Lars Eggert, Ruediger Geib, Robert Hancock, Georgios Karagiannis, Michael Menth, Tom Taylor, Tina Tsou, Delei Yu.

This document is the result of discussions in the PCN WG and forerunner activity in the TSVWG. A number of previous drafts were presented to TSVWG: [I-D.chan-pcn-problem-statement], [I-D.briscoe-tsvwg-cl-architecture], [I-D.briscoe-tsvwg-cl-phb], [I-D.charny-pcn-single-marking], [I-D.babiarz-pcn-sip-cap], [I-D.lefaucheur-rsvp-ecn], [I-D.westberg-pcn-load-control]. Their authors were: B. Briscoe, P. Eardley, D. Songhurst, F. Le Faucheur, A. Charny, J. Babiarz, K. Chan, S. Dudley, G. Karagiannis, A. Bader, L. Westberg, J. Zhang, V. Liatsos, X-G. Liu, A. Bhargava.

12. Comments Solicited

Comments and questions are encouraged and very welcome. They can be addressed to the IETF PCN working group mailing list.

13. Changes

In addition to clarifications and nit squashing, the main changes are:

o S1: Benefits: added one about provisioning (and contrast with DiffServ SLAs)

o S1: Benefits: clarified that the objective is also to stop PCN-packets being significantly delayed (previously only mentioned not dropping packets)

o S1: Deployment models: added one where policing is done at the ingress of the access network and not at the ingress of the PCN-domain (assuming trust between networks)

o S1: Deployment models: corrected MPLS-TE to MPLS

o S2: Terminology: adjusted definition of PCN-domain

o S3.5: Other assumptions: corrected, so that two assumptions (PCN-nodes not performing ECN and the PCN-ingress-node discarding arriving CE packets) only apply if the PCN WG decides to encode PCN-marking in the ECN-field.

o S4 & S5: changed PCN-marking algorithm to marking behaviour

o S4: clarified that PCN-interior-node functionality applies for each outgoing interface, and added clarification: "The functionality is also done by PCN-ingress-nodes for their outgoing interfaces (ie those 'inside' the PCN-domain)."
o S4 (near end): altered to say that a PCN-node "should" dedicate some capacity to lower priority traffic so that it isn't starved (was "may")

o S5: clarified to say that PCN functionality is done on an 'interface' (rather than on a 'link')

o S5.2: deleted erroneous mention of service level agreement

o S5.5: Probing: re-written, especially to distinguish probing to test the ingress-egress-aggregate from probing to test a particular ECMP path.

o S5.7: Addressing: added mention of probing; also added a note that, in the case where traffic is always tunnelled across the PCN-domain, the PCN-ingress-node needs to know the address of the PCN-egress-node.

o S5.8: Tunnelling: re-written, especially to provide a clearer description of copying on tunnel entry/exit, by adding explanation (keeping tunnel encaps/decaps and PCN-marking orthogonal), deleting one bullet ("if the inner header's marking state is more severe then it is preserved" - shouldn't happen), and better referencing of other IETF documents.

o S6: Open issues: stressed that "NOTE: Potential solutions are out of scope for this document" and edited a couple of sentences that were close to solution space.

o S6: Open issues: added one about scenarios with only one tunnel endpoint in the PCN-domain.

o S6: Open issues: ECMP: added under-admission as another potential risk

o S6: Open issues: added one about "Silent at start"

o S10: Conclusions: a small conclusions section added.

14. References

14.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

14.2. Informative References

[I-D.briscoe-tsvwg-cl-architecture] Briscoe, B., "An edge-to-edge Deployment Model for Pre-Congestion Notification: Admission Control over a DiffServ Region", draft-briscoe-tsvwg-cl-architecture-04 (work in progress), October 2006.

[I-D.briscoe-tsvwg-cl-phb] Briscoe, B., "Pre-Congestion Notification marking", draft-briscoe-tsvwg-cl-phb-03 (work in progress), October 2006.

[I-D.charny-pcn-single-marking] Charny, A., "Pre-Congestion Notification Using Single Marking for Admission and Termination", draft-charny-pcn-single-marking-02 (work in progress), July 2007.

[I-D.ietf-tsvwg-admitted-realtime-dscp] Baker, F., "DSCPs for Capacity-Admitted Traffic", draft-ietf-tsvwg-admitted-realtime-dscp-01 (work in progress), March 2007.

[I-D.babiarz-pcn-sip-cap] Babiarz, J., "SIP Controlled Admission and Preemption", draft-babiarz-pcn-sip-cap-00 (work in progress), October 2006.

[I-D.ietf-tsvwg-ecn-mpls] Davie, B., "Explicit Congestion Marking in MPLS", draft-ietf-tsvwg-ecn-mpls-01 (work in progress), June 2007.

[I-D.lefaucheur-rsvp-ecn] Faucheur, F., "RSVP Extensions for Admission Control over Diffserv using Pre-congestion Notification (PCN)", draft-lefaucheur-rsvp-ecn-01 (work in progress), June 2006.

[I-D.chan-pcn-problem-statement] Chan, K., "Pre-Congestion Notification Problem Statement", draft-chan-pcn-problem-statement-01 (work in progress), October 2006.

[I-D.ietf-pwe3-congestion-frmwk] Bryant, S., "Pseudowire Congestion Control Framework", draft-ietf-pwe3-congestion-frmwk-00 (work in progress), February 2007.
[I-D.briscoe-tsvwg-ecn-tunnel] "Layered Encapsulation of Congestion Notification", June 2007, .

[I-D.briscoe-re-pcn-border-cheat] "Emulating Border Flow Policing using Re-ECN on Bulk Data", June 2006, .

[I-D.eardley-pcn-architecture] "Pre-Congestion Notification Architecture", June 2007, .

[I-D.westberg-pcn-load-control] "LC-PCN: The Load Control PCN Solution", August 2007, .

[I-D.behringer-tsvwg-rsvp-security-groupkeying] "A Framework for RSVP Security Using Dynamic Group Keying", June 2007, .

[RFC4303] Kent, S., "IP Encapsulating Security Payload (ESP)", RFC 4303, December 2005.

[RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., and W. Weiss, "An Architecture for Differentiated Services", RFC 2475, December 1998.

[RFC3246] Davie, B., Charny, A., Bennet, J., Benson, K., Le Boudec, J., Courtney, W., Davari, S., Firoiu, V., and D. Stiliadis, "An Expedited Forwarding PHB (Per-Hop Behavior)", RFC 3246, March 2002.

[RFC4594] Babiarz, J., Chan, K., and F. Baker, "Configuration Guidelines for DiffServ Service Classes", RFC 4594, August 2006.

[RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition of Explicit Congestion Notification (ECN) to IP", RFC 3168, September 2001.

[RFC2211] Wroclawski, J., "Specification of the Controlled-Load Network Element Service", RFC 2211, September 1997.

[RFC2998] Bernet, Y., Ford, P., Yavatkar, R., Baker, F., Zhang, L., Speer, M., Braden, R., Davie, B., Wroclawski, J., and E. Felstaine, "A Framework for Integrated Services Operation over Diffserv Networks", RFC 2998, November 2000.

[RFC3270] Le Faucheur, F., Wu, L., Davie, B., Davari, S., Vaananen, P., Krishnan, R., Cheval, P., and J. Heinanen, "Multi-Protocol Label Switching (MPLS) Support of Differentiated Services", RFC 3270, May 2002.

[RFC1633] Braden, B., Clark, D., and S. Shenker, "Integrated Services in the Internet Architecture: an Overview", RFC 1633, June 1994.

[RFC2983] Black, D., "Differentiated Services and Tunnels", RFC 2983, October 2000.

[RFC2747] Baker, F., Lindell, B., and M. Talwar, "RSVP Cryptographic Authentication", RFC 2747, January 2000.

[ITU-MLPP] "Multilevel Precedence and Pre-emption Service (MLPP)", ITU-T Recommendation I.255.3, 1990.

[Iyer] "An approach to alleviate link overload as observed on an IP backbone", IEEE INFOCOM, 2003, .

[Shenker] "Fundamental design issues for the future Internet", IEEE Journal on Selected Areas in Communications, pp 1176-1188, Vol 13 (7), 1995.

[Songhurst] "Guaranteed QoS Synthesis for Admission Control with Shared Capacity", BT Technical Report TR-CXR9-2006-001, February 2006, .

Author's Address

Philip Eardley
BT
B54/77, Sirius House, Adastral Park, Martlesham Heath
Ipswich, Suffolk IP5 3RE
United Kingdom

Email: philip.eardley@bt.com

Full Copyright Statement

Copyright (C) The IETF Trust (2007).

This document is subject to the rights, licenses and restrictions contained in BCP 78, and except as set forth therein, the authors retain all their rights.
1641 This document and the information contained herein are provided on an 1642 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 1643 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 1644 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 1645 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 1646 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 1647 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 1649 Intellectual Property 1651 The IETF takes no position regarding the validity or scope of any 1652 Intellectual Property Rights or other rights that might be claimed to 1653 pertain to the implementation or use of the technology described in 1654 this document or the extent to which any license under such rights 1655 might or might not be available; nor does it represent that it has 1656 made any independent effort to identify any such rights. Information 1657 on the procedures with respect to rights in RFC documents can be 1658 found in BCP 78 and BCP 79. 1660 Copies of IPR disclosures made to the IETF Secretariat and any 1661 assurances of licenses to be made available, or the result of an 1662 attempt made to obtain a general license or permission for the use of 1663 such proprietary rights by implementers or users of this 1664 specification can be obtained from the IETF on-line IPR repository at 1665 http://www.ietf.org/ipr. 1667 The IETF invites any interested party to bring to its attention any 1668 copyrights, patents or patent applications, or other proprietary 1669 rights that may cover technology that may be required to implement 1670 this standard. Please address the information to the IETF at 1671 ietf-ipr@ietf.org. 1673 Acknowledgment 1675 Funding for the RFC Editor function is provided by the IETF 1676 Administrative Support Activity (IASA).