2 Internet Engineering Task Force A. Charny 3 Internet-Draft 4 Intended status: Experimental F. Huang 5 Expires: November 12, 2012 Huawei Technologies 6 G. Karagiannis 7 U. Twente 8 M. Menth 9 University of Tuebingen 10 T. Taylor, Ed. 11 Huawei Technologies 12 May 11, 2012 14 PCN Boundary Node Behaviour for the Controlled Load (CL) Mode of 15 Operation 16 draft-ietf-pcn-cl-edge-behaviour-15 18 Abstract 20 Pre-congestion notification (PCN) is a means for protecting the 21 quality of service for inelastic traffic admitted to a Diffserv 22 domain. The overall PCN architecture is described in RFC 5559. This 23 memo is one of a series describing possible boundary node behaviours 24 for a PCN-domain. The behaviour described here is that for a form of 25 measurement-based load control using three PCN marking states, not- 26 marked, threshold-marked, and excess-traffic-marked. This behaviour 27 is known informally as the Controlled Load (CL) PCN-boundary-node 28 behaviour. 30 Status of this Memo 32 This Internet-Draft is submitted in full conformance with the 33 provisions of BCP 78 and BCP 79. 35 Internet-Drafts are working documents of the Internet Engineering 36 Task Force (IETF). Note that other groups may also distribute 37 working documents as Internet-Drafts. The list of current Internet- 38 Drafts is at http://datatracker.ietf.org/drafts/current/. 40 Internet-Drafts are draft documents valid for a maximum of six months 41 and may be updated, replaced, or obsoleted by other documents at any 42 time. It is inappropriate to use Internet-Drafts as reference 43 material or to cite them other than as "work in progress." 45 This Internet-Draft will expire on November 12, 2012. 47 Copyright Notice 48 Copyright (c) 2012 IETF Trust and the persons identified as the 49 document authors. All rights reserved.
51 This document is subject to BCP 78 and the IETF Trust's Legal 52 Provisions Relating to IETF Documents 53 (http://trustee.ietf.org/license-info) in effect on the date of 54 publication of this document. Please review these documents 55 carefully, as they describe your rights and restrictions with respect 56 to this document. Code Components extracted from this document must 57 include Simplified BSD License text as described in Section 4.e of 58 the Trust Legal Provisions and are provided without warranty as 59 described in the Simplified BSD License. 61 Table of Contents 63 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 64 1.1. Terminology . . . . . . . . . . . . . . . . . . . . . . . 6 65 2. [CL-Specific] Assumed Core Network Behaviour for CL . . . . . 9 66 3. Node Behaviours . . . . . . . . . . . . . . . . . . . . . . . 10 67 3.1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . 10 68 3.2. Behaviour of the PCN-Egress-Node . . . . . . . . . . . . . 11 69 3.2.1. Data Collection . . . . . . . . . . . . . . . . . . . 11 70 3.2.2. Reporting the PCN Data . . . . . . . . . . . . . . . . 12 71 3.2.3. Optional Report Suppression . . . . . . . . . . . . . 12 72 3.3. Behaviour at the Decision Point . . . . . . . . . . . . . 13 73 3.3.1. Flow Admission . . . . . . . . . . . . . . . . . . . . 13 74 3.3.2. Flow Termination . . . . . . . . . . . . . . . . . . . 14 75 3.3.3. Decision Point Action For Missing 76 PCN-Boundary-Node Reports . . . . . . . . . . . . . . 15 77 3.4. Behaviour of the Ingress Node . . . . . . . . . . . . . . 17 78 3.5. Summary of Timers and Associated Configurable Durations . 17 79 3.5.1. Recommended Values For the Configurable Durations . . 18 80 4. Specification of Diffserv Per-Domain Behaviour . . . . . . . . 19 81 4.1. Applicability . . . . . . . . . . . . . . . . . . . . . . 19 82 4.2. Technical Specification . . . . . . . . . . . . . . . . . 19 83 4.2.1. Classification and Traffic Conditioning . . . . . . . 20 84 4.2.2. PHB Configuration . . . . . . . . . . . . . . . . . . 20 85 4.3. Attributes . . . . . . . . . . . . . . . . . . . . . . . . 20 86 4.4. Parameters . . . . . . . . . . . . . . . . . . . . . . . . 20 87 4.5. Assumptions . . . . . . . . . . . . . . . . . . . . . . . 20 88 4.6. Example Uses . . . . . . . . . . . . . . . . . . . . . . . 21 89 4.7. Environmental Concerns . . . . . . . . . . . . . . . . . . 21 90 4.8. Security Considerations . . . . . . . . . . . . . . . . . 21 91 5. Operational and Management Considerations . . . . . . . . . . 21 92 5.1. Deployment of the CL Edge Behaviour . . . . . . . . . . . 21 93 5.1.1. Selection of Deployment Options and Global 94 Parameters . . . . . . . . . . . . . . . . . . . . . . 21 95 5.1.2. Specification of Node- and Link-Specific Parameters . 23 96 5.1.3. Installation of Parameters and Policies . . . . . . . 24 97 5.1.4. Activation and Verification of All Behaviours . . . . 25 98 5.2. Management Considerations . . . . . . . . . . . . . . . . 26 99 5.2.1. Event Logging In the PCN Domain . . . . . . . . . . . 26 100 5.2.1.1. Logging Loss and Restoration of Contact . . . . . 26 101 5.2.1.2. Logging Flow Termination Events . . . . . . . . . 28 102 5.2.2. Provision and Use of Counters . . . . . . . . . . . . 29 103 6. Security Considerations . . . . . . . . . . . . . . . . . . . 30 104 7. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 30 105 8. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . . 31 106 9. References . . . . . . . . . . . . . . . . . . . . . . . . 
. . 32 107 9.1. Normative References . . . . . . . . . . . . . . . . . . . 32 108 9.2. Informative References . . . . . . . . . . . . . . . . . . 32 110 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 33 112 1. Introduction 114 The objective of Pre-Congestion Notification (PCN) is to protect the 115 quality of service (QoS) of inelastic flows within a Diffserv domain, 116 in a simple, scalable, and robust fashion. Two mechanisms are used: 117 admission control, to decide whether to admit or block a new flow 118 request, and (in abnormal circumstances) flow termination to decide 119 whether to terminate some of the existing flows. To achieve this, 120 the overall rate of PCN-traffic is metered on every link in the PCN- 121 domain, and PCN-packets are appropriately marked when certain 122 configured rates are exceeded. These configured rates are below the 123 rate of the link thus providing notification to PCN-boundary-nodes 124 about incipient overloads before any congestion occurs (hence the 125 "pre" part of "pre-congestion notification"). The level of marking 126 allows decisions to be made about whether to admit or terminate PCN- 127 flows. For more details see [RFC5559]. 129 This document describes an experimental edge node behaviour to 130 implement PCN in a network. The experiment may be run in a network 131 in which a substantial proportion of the traffic carried is in the 132 form of inelastic flows and where admission control of micro-flows is 133 applied at the edge. For the effects of PCN to be observable, the 134 committed bandwidth (i.e., level of non-best-effort traffic) on at 135 least some links of the network should be near or at link capacity. 136 The amount of effort required to prepare the network for the 137 experiment (see Section 5.1) may constrain the size of network to 138 which it is applied. The purposes of the experiment are: 140 o to validate the specification of the CL edge behaviour; 142 o to evaluate the effectiveness of the CL edge behaviour in 143 preserving quality of service for admitted flows; and 145 o to evaluate PCN's potential for reducing the amount of capital and 146 operational costs in comparison to alternative methods of assuring 147 quality of service. 149 For the first two objectives, the experiment should run long enough 150 for the network to experience sharp peaks of traffic in at least some 151 directions. It would also be desirable to observe PCN performance in 152 the face of failures in the network. A period in the order of a 153 month or two in busy season may be enough. The third objective is 154 more difficult, and could require observation over a period long 155 enough for traffic demand to grow to the point where additional 156 capacity must be provisioned at some points in the network. 158 Section 3 of this document specifies a detailed set of algorithms and 159 procedures used to implement the PCN mechanisms for the CL mode of 160 operation. Since the algorithms depend on specific metering and 161 marking behaviour at the interior nodes, it is also necessary to 162 specify the assumptions made about PCN-interior-node behaviour 163 (Section 2). Finally, because PCN uses DSCP values to carry its 164 markings, a specification of PCN-boundary-node behaviour must include 165 the per domain behaviour (PDB) template specified in [RFC3086], 166 filled out with the appropriate content (Section 4). 
168 Note that the terms "block" or "terminate" actually translate to one 169 or more of several possible courses of action, as discussed in 170 Section 3.6 of [RFC5559]. The choice of which action to take for 171 blocked or terminated flows is a matter of local policy. 173 [RFC EDITOR'S NOTE: RFCyyyy is the published version of 174 draft-ietf-pcn-sm-edge-behaviour.] 176 A companion document [RFCyyyy] specifies the Single Marking (SM) PCN- 177 boundary-node behaviour. This document and [RFCyyyy] have a great 178 deal of text in common. To simplify the task of the reader, the text 179 in the present document that is specific to the CL PCN-boundary-node 180 behaviour is preceded by the phrase: "[CL-specific]". A similar 181 distinction for SM-specific text is made in [RFCyyyy]. 183 1.1. Terminology 185 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 186 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 187 document are to be interpreted as described in [RFC2119]. 189 This document uses the following terms defined in Section 2 of 190 [RFC5559]: 192 o PCN-domain; 194 o PCN-ingress-node; 196 o PCN-egress-node; 198 o PCN-interior-node; 200 o PCN-boundary-node; 202 o PCN-flow; 204 o ingress-egress-aggregate (IEA); 206 o [CL-specific] PCN-threshold-rate; 207 o PCN-excess-rate; 209 o PCN-admissible-rate; 211 o PCN-supportable-rate; 213 o PCN-marked; 215 o [CL-specific] threshold-marked; 217 o excess-traffic-marked. 219 It also uses the terms PCN-traffic and PCN-packet, for which the 220 definition is repeated from [RFC5559] because of their importance to 221 the understanding of the text that follows: 223 PCN-traffic, PCN-packets, PCN-BA 224 A PCN-domain carries traffic of different Diffserv behaviour 225 aggregates (BAs) [RFC2474]. The PCN-BA uses the PCN mechanisms to 226 carry PCN-traffic, and the corresponding packets are PCN-packets. 227 The same network will carry traffic of other Diffserv BAs. The 228 PCN-BA is distinguished by a combination of the Diffserv codepoint 229 and the ECN field. 231 This document uses the following terms from [RFC5670]: 233 o [CL-specific] threshold-meter; 235 o excess-traffic-meter. 237 To complete the list of borrowed terms, this document reuses the 238 following terms and abbreviations defined in Section 3 of 239 [ID.pcn-3-in-1]: 241 o not-PCN codepoint; 243 o Not-marked (NM) codepoint; 245 o [CL-specific] Threshold-marked (ThM) codepoint; 247 o Excess-traffic-marked (ETM) codepoint. 249 This document defines the following additional terms: 251 Decision Point 252 The node that makes the decision about which flows to admit and to 253 terminate. In a given network deployment, this can be the PCN- 254 ingress-node or a centralized control node. In either case, the 255 PCN-ingress-node is the point where the decisions are enforced. 257 NM-rate 258 The rate of not-marked PCN-traffic received at a PCN-egress-node 259 for a given ingress-egress-aggregate in octets per second. For 260 further details see Section 3.2.1. 262 [CL-specific] ThM-rate 263 The rate of threshold-marked PCN-traffic received at a PCN-egress- 264 node for a given ingress-egress-aggregate in octets per second. 265 For further details see Section 3.2.1. 267 ETM-rate 268 The rate of excess-traffic-marked PCN-traffic received at a PCN- 269 egress-node for a given ingress-egress-aggregate in octets per 270 second. For further details see Section 3.2.1. 
272 PCN-sent-rate 273 The rate of PCN-traffic received at a PCN-ingress-node and 274 destined for a given ingress-egress-aggregate in octets per 275 second. For further details see Section 3.4. 277 Congestion level estimate (CLE) 278 The ratio of PCN-marked to total PCN-traffic (measured in octets) 279 received for a given ingress-egress-aggregate during a given 280 measurement period. The CLE is used to derive the PCN-admission- 281 state (Section 3.3.1) and is also used by the report suppression 282 procedure (Section 3.2.3) if report suppression is activated. 284 PCN-admission-state 285 The state ("admit" or "block") derived by the Decision Point for a 286 given ingress-egress-aggregate based on PCN packet marking 287 statistics. The Decision Point decides to admit or block new 288 flows offered to the aggregate based on the current value of the 289 PCN-admission-state. For further details see Section 3.3.1. 291 Sustainable aggregate rate (SAR) 292 The estimated maximum rate of PCN-traffic that can be carried in a 293 given ingress-egress-aggregate at a given moment without risking 294 degradation of quality of service for the admitted flows. The 295 intention is that if the PCN-sent-rate of every ingress-egress- 296 aggregate passing through a given link is limited to its 297 sustainable aggregate rate, the total rate of PCN-traffic flowing 298 through the link will be limited to the PCN-supportable-rate for 299 that link. An estimate of the sustainable aggregate rate for a 300 given ingress-egress-aggregate is derived as part of the flow 301 termination procedure, and is used to determine how much PCN- 302 traffic needs to be terminated. For further details see 303 Section 3.3.2. 305 CLE-reporting-threshold 306 A configurable value against which the CLE is compared as part of 307 the report suppression procedure. For further details, see 308 Section 3.2.3. 310 CLE-limit 311 A configurable value against which the CLE is compared to 312 determine the PCN-admission-state for a given ingress-egress- 313 aggregate. For further details, see Section 3.3.1. 315 T_meas 316 A configurable time interval that defines the measurement period 317 over which the PCN-egress-node collects statistics relating to 318 PCN-traffic marking. At the end of the interval the PCN-egress- 319 node calculates the values NM-rate, [CL-specific] ThM-rate, and 320 ETM-rate as defined above and sends a report to the Decision 321 Point, subject to the operation of the report suppression feature. 322 For further details see Section 3.2. 324 T_maxsuppress 325 A configurable time interval after which the PCN-egress-node MUST 326 send a report to the Decision Point for a given ingress-egress- 327 aggregate regardless of the most recent values of the CLE. This 328 mechanism provides the Decision Point with a periodic confirmation 329 of liveness when report suppression is activated. For further 330 details, see Section 3.2.3. 332 T_fail 333 An interval after which the Decision Point concludes that 334 communication from a given PCN-egress-node has failed if it has 335 received no reports from the PCN-egress-node during that interval. 336 For further details see Section 3.3.3. 338 T_crit 339 A configurable interval used in the calculation of T_fail. For 340 further details see Section 3.3.3. 342 2. [CL-Specific] Assumed Core Network Behaviour for CL 344 This section describes the assumed behaviour for PCN-interior-nodes 345 in the PCN-domain. 
The CL mode of operation assumes that: 347 o PCN-interior-nodes perform both threshold-marking and excess- 348 traffic-marking of PCN-packets, according to the rules specified 349 in [RFC5670]; 351 o for IP transport, threshold-marking of PCN-packets uses the ThM 352 codepoint defined in [ID.pcn-3-in-1]; for MPLS transport, an 353 equivalent marking is used as discussed in Appendix C of 354 [ID.pcn-3-in-1]; 356 o for IP transport, excess-traffic-marking of PCN-packets uses the 357 ETM codepoint defined in [ID.pcn-3-in-1]; for MPLS transport, an 358 equivalent marking is used as discussed in Appendix C of 359 [ID.pcn-3-in-1]; 361 o on each link the reference rate for the threshold-meter is 362 configured to be equal to the PCN-admissible-rate for the link; 364 o on each link the reference rate for the excess-traffic-meter is 365 configured to be equal to the PCN-supportable-rate for the link; 367 o the set of valid codepoint transitions is as shown in Sections 368 5.2.1 and 5.2.2 of [ID.pcn-3-in-1]. 370 3. Node Behaviours 372 3.1. Overview 374 This section describes the behaviour of the PCN-ingress-node, PCN- 375 egress-node, and the Decision Point (which MAY be collocated with the 376 PCN-ingress-node). 378 The PCN-egress-node collects the rates of not-marked, [CL-specific] 379 threshold-marked, and excess-traffic-marked PCN-traffic for each 380 ingress-egress-aggregate and reports them to the Decision Point. 381 [CL-specific] It MAY also identify and report PCN-flows that have 382 experienced excess-traffic-marking. For a detailed description, see 383 Section 3.2. 385 The PCN-ingress-node enforces flow admission and termination 386 decisions. It also reports the rate of PCN-traffic sent to a given 387 ingress-egress-aggregate when requested by the Decision Point. For 388 details, see Section 3.4. 390 Finally, the Decision Point makes flow admission decisions and 391 selects flows to terminate based on the information provided by the 392 PCN-ingress-node and PCN-egress-node for a given ingress-egress- 393 aggregate. For details, see Section 3.3. 395 Specification of a signaling protocol to report rates to the Decision 396 Point is out of scope of this document. If the PCN-ingress-node is 397 chosen as the Decision Point, [I-D.tsvwg-rsvp-pcn] specifies an 398 appropriate signaling protocol. 400 Section 5.1.2 describes how to derive the filters by means of which 401 PCN-ingress-nodes and PCN-egress-nodes are able to classify incoming 402 packets into ingress-egress-aggregates. 404 3.2. Behaviour of the PCN-Egress-Node 406 3.2.1. Data Collection 408 The PCN-egress-node needs to meter the PCN-traffic it receives in 409 order to calculate the following rates for each ingress-egress- 410 aggregate passing through it. These rates SHOULD be calculated at 411 the end of each measurement period based on the PCN-traffic observed 412 during that measurement period. The duration of a measurement period 413 is equal to the configurable value T_meas. For further information 414 see Section 3.5. A non-normative sketch of this calculation follows the list below. 416 o NM-rate: octets per second of PCN-traffic in PCN-packets that are 417 not-marked (i.e., marked with the NM codepoint); 419 o [CL-specific] ThM-rate: octets per second of PCN-traffic in PCN- 420 packets that are threshold-marked (i.e., marked with the ThM 421 codepoint); 423 o ETM-rate: octets per second of PCN-traffic in PCN-packets that are 424 excess-traffic-marked (i.e., marked with the ETM codepoint).
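The following non-normative sketch (Python) illustrates one possible way for a PCN-egress-node to accumulate per-aggregate octet counts and convert them into the rates listed above at the end of each measurement period; it also shows the CLE calculation used when report suppression (Section 3.2.3) is enabled. The class and method names are illustrative only and are not defined by this specification.

   # Non-normative sketch: per-aggregate octet accounting at a
   # PCN-egress-node.  Names (PcnEgressMeter, observe_packet,
   # end_of_interval) are illustrative, not defined by this document.

   from collections import defaultdict

   NM, THM, ETM = "NM", "ThM", "ETM"   # PCN codepoints from [ID.pcn-3-in-1]

   class PcnEgressMeter:
       def __init__(self, t_meas_seconds):
           self.t_meas = t_meas_seconds    # configurable duration T_meas
           self.octets = defaultdict(lambda: {NM: 0, THM: 0, ETM: 0})

       def observe_packet(self, iea_id, codepoint, length_octets):
           # Called for every PCN-packet, keyed by ingress-egress-aggregate.
           self.octets[iea_id][codepoint] += length_octets

       def end_of_interval(self, iea_id):
           # At expiry of t_meas: derive the rates reported per Section 3.2.2.
           counts = self.octets[iea_id]
           nm_rate = counts[NM] / self.t_meas     # octets per second
           thm_rate = counts[THM] / self.t_meas   # [CL-specific]
           etm_rate = counts[ETM] / self.t_meas
           total = nm_rate + thm_rate + etm_rate
           # CLE as defined in Section 3.2.3 (used with report suppression)
           cle = (thm_rate + etm_rate) / total if total > 0 else 0.0
           self.octets[iea_id] = {NM: 0, THM: 0, ETM: 0}   # new interval
           return nm_rate, thm_rate, etm_rate, cle

In this sketch the rates are obtained by dividing the accumulated octet counts by T_meas, and the counters are then reset so that the next measurement period starts from zero.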
426 Note: metering the PCN-traffic continuously and using equal-length 427 measurement intervals minimizes the statistical variance introduced 428 by the measurement process itself. On the other hand, the operation 429 of PCN is not affected if the starting and ending times of the 430 measurement intervals for different ingress-egress-aggregates are 431 different. 433 [CL-specific] As a configurable option, the PCN-egress-node MAY 434 record flow identifiers of the PCN-flows for which excess-traffic- 435 marked packets have been observed during this measurement interval. 436 If this set is large (e.g., more than 20 flows), the PCN-egress-node 437 MAY record only the most recently excess-traffic-marked PCN-flow 438 identifiers rather than the complete set. 440 These can be used by the Decision Point when it selects flows for 441 termination. In networks using multipath routing it is possible 442 that congestion is not occurring on all paths carrying a given 443 ingress-egress-aggregate. Assuming that specific PCN-flows are 444 routed via specific paths, identifying the PCN-flows that are 445 experiencing excess-traffic-marking helps to avoid termination of 446 PCN-flows not contributing to congestion. 448 3.2.2. Reporting the PCN Data 450 Unless the report suppression option described in Section 3.2.3 is 451 activated, the PCN-egress-node MUST report the latest values of NM- 452 rate, [CL-specific] ThM-rate, and ETM-rate to the Decision Point each 453 time that it calculates them. 455 [CL-specific] If the PCN-egress-node recorded a set of flow 456 identifiers of PCN-flows for which excess-traffic-marking was 457 observed in the most recent measurement interval, then it MUST also 458 include these identifiers in the report. 460 3.2.3. Optional Report Suppression 462 Report suppression MUST be provided as a configurable option, along 463 with two configurable parameters, the CLE-reporting-threshold and the 464 maximum report suppression interval T_maxsuppress. The default value 465 of the CLE-reporting-threshold is zero. The CLE-reporting-threshold 466 MUST NOT exceed the CLE-limit configured at the Decision Point. For 467 further information on T_maxsuppress see Section 3.5. 469 If the report suppression option is enabled, the PCN-egress-node MUST 470 apply the following procedure to decide whether to send a report to 471 the Decision Point, rather than sending a report automatically at the 472 end of each measurement interval. 474 1. As well as the quantities NM-rate, [CL-specific] ThM-rate, and 475 ETM-rate, the PCN-egress-node MUST calculate the congestion level 476 estimate (CLE) for each measurement interval. The CLE is 477 computed as: 479 [CL-specific] 480 CLE = (ThM-rate + ETM-rate) / (NM-rate + ThM-rate + ETM-rate) 482 if any PCN-traffic was observed, or CLE = 0 if all the rates are 483 zero. 485 2. If the CLE calculated for the latest measurement interval is 486 greater than the CLE-reporting-threshold and/or the CLE 487 calculated for the immediately previous interval was greater than 488 the CLE-reporting-threshold, then the PCN-egress-node MUST send a 489 report to the Decision Point. The contents of the report are 490 described below. 492 The reason for taking into account the CLE of the previous 493 interval is to ensure that the Decision Point gets immediate 494 feedback if the CLE has dropped below CLE-reporting-threshold. 495 This is essential if the Decision Point is running the flow 496 termination procedure and observing whether (further) flow 497 termination is needed.
See Section 3.3.2. 499 3. If an interval T_maxsuppress has elapsed since the last report 500 was sent to the Decision Point, then the PCN-egress-node MUST 501 send a report to the Decision Point regardless of the CLE value. 503 4. If neither of the preceding conditions holds, the PCN-egress-node 504 MUST NOT send a report for the latest measurement interval. 506 Each report sent to the Decision Point when report suppression has 507 been activated MUST contain the values of NM-rate, [CL-specific] ThM- 508 rate, ETM-rate, and CLE that were calculated for the most recent 509 measurement interval. [CL-specific] If the PCN-egress-node recorded 510 a set of flow identifiers of PCN-flows for which excess-traffic- 511 marking was observed in the most recent measurement interval, then it 512 MUST also include these identifiers in the report. 514 The above procedure ensures that at least one report is sent per 515 interval (T_maxsuppress + T_meas). This demonstrates to the Decision 516 Point that both the PCN-egress-node and the communication path 517 between that node and the Decision Point are in operation. 519 3.3. Behaviour at the Decision Point 521 Operators can choose to use PCN procedures just for flow admission, 522 or just for flow termination, or for both. Decision Points MUST 523 implement both mechanisms, but configurable options MUST be provided 524 to activate or deactivate PCN-based flow admission and flow 525 termination independently of each other at a given Decision Point. 527 If PCN-based flow termination is enabled but PCN-based flow admission 528 is not, flow termination operates as specified in this document. 530 Logically, some other system of flow admission control is in 531 operation, but the description of such a system is out of scope of 532 this document and depends on local arrangements. 534 3.3.1. Flow Admission 536 The Decision Point determines the PCN-admission-state for a given 537 ingress-egress-aggregate each time it receives a report from the 538 egress node. It makes this determination on the basis of the 539 congestion level estimate (CLE). If the CLE is provided in the 540 egress node report, the Decision Point SHOULD use the reported value. 541 If the CLE was not provided in the report, the Decision Point MUST 542 calculate it based on the other values provided in the report, using 543 the formula: 545 [CL-specific] 546 CLE = (ThM-rate + ETM-rate) / (NM-rate + ThM-rate + ETM-rate) 548 if any PCN-traffic was observed, or CLE = 0 if all the rates are 549 zero. 551 The Decision Point MUST compare the reported or calculated CLE to a 552 configurable value, the CLE-limit. If the CLE is less than the CLE- 553 limit, the PCN-admission-state for that aggregate MUST be set to 554 "admit"; otherwise it MUST be set to "block". 556 If the PCN-admission-state for a given ingress-egress-aggregate is 557 "admit", the Decision Point SHOULD allow new flows to be admitted to 558 that aggregate. If the PCN-admission-state for a given ingress- 559 egress-aggregate is "block", the Decision Point SHOULD NOT allow new 560 flows to be admitted to that aggregate. These actions MAY be 561 modified by policy in specific cases, but such policy intervention 562 risks defeating the purpose of using PCN. 564 A performance study of this admission control method is presented in 565 [MeLe12]. 567 3.3.2. 
Flow Termination 569 [CL-specific] When the report from the PCN-egress-node includes a 570 non-zero value of the ETM-rate for some ingress-egress-aggregate, the 571 Decision Point MUST request the PCN-ingress-node to provide an 572 estimate of the rate (PCN-sent-rate) at which the PCN-ingress-node is 573 receiving PCN-traffic that is destined for the given ingress-egress- 574 aggregate. 576 If the Decision Point is collocated with the PCN-ingress-node, the 577 request and response are internal operations. 579 The Decision Point MUST then wait, for both the requested rate from 580 the PCN-ingress-node and the next report from the PCN-egress-node for 581 the ingress-egress-aggregate concerned. If this next egress node 582 report also includes a non-zero value for the ETM-rate, the Decision 583 Point MUST determine the amount of PCN-traffic to terminate using the 584 following steps: 586 1. [CL-specific] The sustainable aggregate rate (SAR) for the given 587 ingress-egress-aggregate is estimated by the sum: 589 SAR = NM-rate + ThM-rate 591 for the latest reported interval. 593 2. The amount of traffic to be terminated is the difference: 595 PCN-sent-rate - SAR, 597 where PCN-sent-rate is the value provided by the PCN-ingress- 598 node. 600 See Section 3.3.3 for a discussion of appropriate actions if the 601 Decision Point fails to receive a timely response to its request for 602 the PCN-sent-rate. 604 If the difference calculated in the second step is positive, the 605 Decision Point SHOULD select PCN-flows to terminate, until it 606 determines that the PCN-traffic admission rate will no longer be 607 greater than the estimated sustainable aggregate rate. If the 608 Decision Point knows the bandwidth required by individual PCN-flows 609 (e.g., from resource signalling used to establish the flows), it MAY 610 choose to complete its selection of PCN-flows to terminate in a 611 single round of decisions. 613 Alternatively, the Decision Point MAY spread flow termination over 614 multiple rounds to avoid over-termination. If this is done, it is 615 RECOMMENDED that enough time elapse between successive rounds of 616 termination to allow the effects of previous rounds to be reflected 617 in the measurements upon which the termination decisions are based. 618 (See [Satoh10] and sections 4.2 and 4.3 of [MeLe10].) 620 In general, the selection of flows for termination MAY be guided by 621 policy. [CL-specific] If the egress node has supplied a list of 622 identifiers of PCN-flows that experienced excess-traffic-marking 623 (Section 3.2), the Decision Point SHOULD first consider terminating 624 PCN-flows in that list. 626 The Decision Point SHOULD log each round of termination as described 627 in Section 5.2.1.2. 629 3.3.3. Decision Point Action For Missing PCN-Boundary-Node Reports 631 The Decision Point SHOULD start a timer t_recvFail when it receives a 632 report from the PCN-egress-node. t_recvFail is reset each time a new 633 report is received from the PCN-egress-node. t_recvFail expires if it 634 reaches the value T_fail. T_fail is calculated according to the 635 following logic: 637 a. T_fail = the configurable duration T_crit, if report suppression 638 is not deployed; 640 b. T_fail = T_crit also if report suppression is deployed and the 641 last report received from the PCN-egress-node contained a CLE 642 value greater than CLE-reporting-threshold (Section 3.2.3); 644 c. 
T_fail = 3 * T_maxsuppress (Section 3.2.3) if report suppression 645 is deployed and the last report received from the PCN-egress-node 646 contained a CLE value less than or equal to CLE-reporting- 647 threshold. 649 If timer t_recvFail expires for a given PCN-egress-node, the Decision 650 Point SHOULD notify management. A log format is defined for that 651 purpose in Section 5.2.1.1. Other actions depend on local policy, 652 but MAY include blocking of new flows destined for the PCN-egress- 653 node concerned until another report is received from it. Termination 654 of already-admitted flows is also possible, but could be triggered by 655 "Destination unreachable" messages received at the PCN-ingress-node. 657 If a centralized Decision Point sends a request for the estimated 658 value of PCN-sent-rate to a given PCN-ingress-node and fails to 659 receive a response in a reasonable amount of time, the Decision Point 660 SHOULD repeat the request once. [CL-specific] While waiting after 661 sending this second request, the Decision Point MAY begin selecting 662 flows to terminate, using ETM-rate as an estimate of the amount of 663 traffic to be terminated in place of the quantity 665 PCN-sent-rate - SAR 667 specified in Section 3.3.2. Because ETM-rate will over-estimate the 668 amount of traffic to be terminated due to dropping of PCN-packets by 669 interior nodes, the Decision Point SHOULD terminate less than the 670 full amount ETM-rate in the first pass and recalculate the additional 671 amount to terminate in additional passes based on subsequent reports 672 from the PCN-egress-node. If the second request to the PCN-ingress- 673 node also fails, the Decision Point MUST select flows to terminate 674 based on the ETM-rate approximation as just described and SHOULD 675 notify management. The log format described in Section 5.2.1.1 is 676 also suitable for this purpose. 678 The response timer t_sndFail with upper bound T_crit is specified 679 in Section 3.5. The use of T_crit is an approximation. A more 680 precise limit would be of the order of two round-trip times, plus 681 an allowance for processing at each end, plus an allowance for 682 variance in these values. 684 See Section 3.5 for suggested values of the configurable durations 685 T_crit and T_maxsuppress. 687 3.4. Behaviour of the Ingress Node 689 The PCN-ingress-node MUST provide the estimated current rate of PCN- 690 traffic received at that node and destined for a given ingress- 691 egress-aggregate in octets per second (the PCN-sent-rate) when the 692 Decision Point requests it. The way this rate estimate is derived is 693 a matter of implementation. 695 For example, the rate that the PCN-ingress-node supplies can be 696 based on a quick sample taken at the time the information is 697 required. 699 3.5. Summary of Timers and Associated Configurable Durations 701 Here is a summary of the timers used in the procedures just 702 described: 704 t_meas 706 Where used: PCN-egress-node. 708 Used in procedure: data collection (Section 3.2.1). 710 Incidence: one per ingress-egress-aggregate. 712 Reset: immediately on expiry. 714 Expiry: when it reaches the configurable duration T_meas. 716 Action on expiry: calculate NM-rate, [CL-specific] ThM-rate, 717 and ETM-rate and proceed to the applicable reporting procedure 718 (Section 3.2.2 or Section 3.2.3). 720 t_maxsuppress 722 Where used: PCN-egress-node. 724 Used in procedure: report suppression (Section 3.2.3). 726 Incidence: one per ingress-egress-aggregate. 
728 Reset: when the next report is sent, either after expiry or 729 because the CLE has exceeded the reporting threshold. 731 Expiry: when it reaches the configurable duration 732 T_maxsuppress. 734 Action on expiry: send a report to the Decision Point the next 735 time the reporting procedure (Section 3.2.3) is invoked, 736 regardless of the value of CLE. 738 t_recvFail 740 Where used: Decision Point. 742 Used in procedure: failure detection (Section 3.3.3). 744 Incidence: one per ingress-egress-aggregate. 746 Reset: when a report is received for the ingress-egress- 747 aggregate. 749 Expiry: when it reaches the calculated duration T_fail. As 750 described in Section 3.3.3, T_fail is equal either to the 751 configured duration T_crit or to the calculated value 3 * 752 T_maxsuppress, where T_maxsuppress is a configured duration. 754 Action on expiry: notify management, and possibly other 755 actions. 757 t_sndFail 759 Where used: centralized Decision Point. 761 Used in procedure: failure detection (Section 3.3.3). 763 Incidence: only as required, one per outstanding request to a 764 PCN-ingress-node. 766 Started: when a request for the value of PCN-sent-traffic for a 767 given ingress-egress-aggregate is sent to the PCN-ingress-node. 769 Terminated without action: when a response is received before 770 expiry. 772 Expiry: when it reaches the configured duration T_crit. 774 Action on expiry: as described in Section 3.3.3. 776 3.5.1. Recommended Values For the Configurable Durations 778 The timers just described depend on three configurable durations, 779 T_meas, T_maxsuppress, and T_crit. The recommendations given below 780 for the values of these durations are all related to the intended PCN 781 reaction time of 1 to 3 seconds. However, they are based on 782 judgement rather than operational experience or mathematical 783 derivation. 785 The value of T_meas is RECOMMENDED to be of the order of 100 to 500 786 ms to provide a reasonable tradeoff between demands on network 787 resources (PCN-egress-node and Decision Point processing, network 788 bandwidth) and the time taken to react to impending congestion. 790 The value of T_maxsuppress is RECOMMENDED to be on the order of 3 to 791 6 seconds, for similar reasons to those for the choice of T_meas. 793 The value of T_crit SHOULD NOT be less than 3 * T_meas. Otherwise it 794 could cause too many management notifications due to transient 795 conditions in the PCN-egress-node or along the signalling path. A 796 reasonable upper bound on T_crit is in the order of 3 seconds. 798 4. Specification of Diffserv Per-Domain Behaviour 800 This section provides the specification required by [RFC3086] for a 801 per-domain behaviour. 803 4.1. Applicability 805 This section quotes [RFC5559]. 807 The PCN CL boundary node behaviour specified in this document is 808 applicable to inelastic traffic (particularly video and voice) where 809 quality of service for admitted flows is protected primarily by 810 admission control at the ingress to the domain. 812 In exceptional circumstances (e.g., due to rerouting as a result of 813 network failures) already-admitted flows may be terminated to protect 814 the quality of service of the remaining flows. [CL-specific] The 815 performance results in, e.g., [MeLe10], indicate that the CL boundary 816 node behaviour provides better service outcomes under such 817 circumstances than the SM boundary node behaviour described in 818 [RFCyyyy], because CL is less likely to terminate PCN-flows 819 unnecessarily. 
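For concreteness, the following non-normative sketch (Python) restates the admission test of Section 3.3.1 and the [CL-specific] calculation from Section 3.3.2 of the amount of traffic to terminate, on which the comparison above rests. The function names and the example CLE-limit value are illustrative only; the reporting and signalling by which the rates reach the Decision Point are out of scope of this document.

   # Non-normative sketch of the Decision Point calculations of
   # Sections 3.3.1 and 3.3.2.  Names and values are illustrative.

   CLE_LIMIT = 0.05   # configurable CLE-limit; the value here is an example

   def admission_state(nm_rate, thm_rate, etm_rate, reported_cle=None):
       # PCN-admission-state for one ingress-egress-aggregate (Section 3.3.1).
       total = nm_rate + thm_rate + etm_rate
       cle = reported_cle if reported_cle is not None else (
           (thm_rate + etm_rate) / total if total > 0 else 0.0)
       return "admit" if cle < CLE_LIMIT else "block"

   def traffic_to_terminate(pcn_sent_rate, nm_rate, thm_rate):
       # [CL-specific] Amount of PCN-traffic to terminate, in octets per
       # second (Section 3.3.2).  SAR = NM-rate + ThM-rate, so threshold-
       # marked traffic is counted as part of the sustainable rate.
       sar = nm_rate + thm_rate           # sustainable aggregate rate
       return max(0.0, pcn_sent_rate - sar)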
821 [RFC EDITOR'S NOTE: please replace RFCyyyy above by the reference to 822 the published version of draft-ietf-pcn-sm-edge-behaviour.] 824 4.2. Technical Specification 825 4.2.1. Classification and Traffic Conditioning 827 Packet classification and treatment at the PCN-ingress-node is 828 described in Section 5.1 of [ID.pcn-3-in-1]. 830 PCN packets are further classified as belonging or not belonging to 831 an admitted flow. PCN packets not belonging to an admitted flow are 832 "blocked". (See Section 1 for an understanding of how this term is 833 interpreted.) Packets belonging to an admitted flow are policed to 834 ensure that they adhere to the rate or flowspec that was negotiated 835 during flow admission. 837 4.2.2. PHB Configuration 839 The PCN CL boundary node behaviour is a metering and marking 840 behaviour rather than a scheduling behaviour. As a result, while the 841 encoding uses a single DSCP value, that value can vary from one 842 deployment to another. The PCN working group suggests using 843 admission control for the following service classes (defined in 844 [RFC4594]): 846 o Telephony (EF) 848 o Real-time interactive (CS4) 850 o Broadcast Video (CS3) 852 o Multimedia Conferencing (AF4) 854 For a fuller discussion, see Appendix A of [ID.pcn-3-in-1]. 856 4.3. Attributes 858 The purpose of this per-domain behaviour is to achieve low loss and 859 jitter for the target class of traffic. The design requirement for 860 PCN was that recovery from overloads through the use of flow 861 termination should happen within 1-3 seconds. PCN probably performs 862 better than that. 864 4.4. Parameters 866 The set of parameters that needs to be configured at each PCN-node 867 and at the Decision Point is described in Section 5.1. 869 4.5. Assumptions 871 It is assumed that a specific portion of link capacity has been 872 reserved for PCN-traffic. 874 4.6. Example Uses 876 The PCN CL behaviour may be used to carry real-time traffic, 877 particularly voice and video. 879 4.7. Environmental Concerns 881 The PCN CL per-domain behaviour could theoretically interfere with 882 the use of end-to-end ECN due to reuse of ECN bits for PCN marking. 883 Section 5.1 of [ID.pcn-3-in-1] describes the actions that can be 884 taken to protect ECN signalling. Appendix B of that document 885 provides further discussion of how ECN and PCN can co-exist. 887 4.8. Security Considerations 889 Please see the security considerations in [RFC5559] as well as those 890 in [RFC2474] and [RFC2475]. 892 5. Operational and Management Considerations 894 5.1. Deployment of the CL Edge Behaviour 896 Deployment of the PCN Controlled Load edge behaviour requires the 897 following steps: 899 o selection of deployment options and global parameter values; 901 o derivation of per-node and per-link information; 903 o installation, but not activation, of parameters and policies at 904 all of the nodes in the PCN domain; 906 o activation and verification of all behaviours. 908 5.1.1. Selection of Deployment Options and Global Parameters 910 The first set of decisions affects the operation of the network as a 911 whole. To begin with, the operator needs to make basic design 912 decisions such as whether the Decision Point is centralized or 913 collocated with the PCN-ingress-nodes, and whether per-flow and 914 aggregate resource signalling as described in [I-D.tsvwg-rsvp-pcn] is 915 deployed in the network. 
After that, the operator needs to decide: 917 o whether PCN packets will be forwarded unencapsulated or in tunnels 918 between the PCN-ingress-node and the PCN-egress-node. 919 Encapsulation preserves incoming ECN settings and simplifies the 920 PCN-egress-node's job when it comes to relating incoming packets 921 to specific ingress-egress-aggregates, but lowers the path MTU and 922 imposes the extra labour of encapsulation/decapsulation on the 923 PCN-edge-nodes. 925 o which service classes will be subject to PCN control and what 926 Diffserv code point (DSCP) will be used for each. (See 927 [ID.pcn-3-in-1] Appendix A for advice on this topic.) 929 o the markings to be used at all nodes in the PCN domain to indicate 930 Not-Marked (NM), [CL-specific] Threshold-Marked (ThM), and Excess- 931 Traffic-Marked (ETM) PCN packets; 933 o The marking rules for re-marking PCN-traffic leaving the PCN 934 domain; 936 o whether PCN-based flow admission is enabled; 938 o whether PCN-based flow termination is enabled. 940 The following parameters affect the operation of PCN itself. The 941 operator needs to choose: 943 o the value of CLE-limit if PCN-based flow admission is enabled. 944 [CL-specific] The operation of flow admission is not very 945 sensitive to the value of the CLE-limit in practice, because when 946 threshold-marking occurs it tends to persist long enough that 947 threshold-marked traffic becomes a large proportion of the 948 received traffic in a given interval. 950 o the value of the collection interval T_meas. For a recommended 951 range of values see Section 3.5.1 above. 953 o whether report suppression is to be enabled at the PCN-egress- 954 nodes and if so, the values of CLE-reporting-threshold and 955 T_maxsuppress. It is reasonable to leave CLE-reporting-threshold 956 at its default value (zero, as specified in Section 3.2.3). For a 957 recommended range of values of T_maxsuppress see Section 3.5.1 958 above. 960 o the value of the duration T_crit, which the Decision Point uses in 961 deciding whether communications with a given PCN-edge-node have 962 failed. For a recommended range of values of T_crit see 963 Section 3.5.1 above. 965 o [CL-specific] Activation/deactivation of recording of individual 966 flow identifiers when excess-traffic-marked PCN-traffic is 967 observed. Reporting these identifiers has value only if PCN-based 968 flow termination is activated and Equal Cost Multi-Path (ECMP) 969 routing is enabled in the PCN-domain. 971 5.1.2. Specification of Node- and Link-Specific Parameters 973 Filters are required at both the PCN-ingress-node and the PCN-egress- 974 node to classify incoming PCN packets by ingress-egress-aggregate. 975 Because of the potential use of multi-path routing in domains 976 upstream of the PCN-domain, it is impossible to do such 977 classification reliably at the PCN-egress-node based on the packet 978 header contents as originally received at the PCN-ingress-node. 979 (Packets with the same header contents could enter the PCN-domain at 980 multiple PCN-ingress-nodes.) As a result, the only way to construct 981 such filters reliably is to tunnel the packets from the PCN-ingress- 982 node to the PCN-egress-node. 984 The PCN-ingress-node needs filters in order to place PCN packets into 985 the right tunnel in the first instance, and also to satisfy requests 986 from the Decision Point for admission rates into specific ingress- 987 egress-aggregates. 
These filters select the PCN-egress-node, but not 988 necessarily a specific path through the network to that node. As a 989 result, they are likely to be stable even in the face of failures in 990 the network, except when the PCN-egress-node itself becomes 991 unreachable. The primary basis for their derivation will be routing 992 policy given the packet's original origin and destination. If all 993 PCN packets will be tunneled, the PCN-ingress-node also needs to know 994 the address of the peer PCN-egress-node associated with each filter. 996 Operators may wish to give some thought to the provisioning of 997 alternate egress points for some or all ingress-egress aggregates in 998 case of failure of the PCN-egress-node. This could require the 999 setting up of standby tunnels to these alternate egress points. 1001 Each PCN-egress-node needs filters to classify incoming PCN packets 1002 by ingress-egress-aggregate, in order to gather measurements on a 1003 per-aggregate basis. If tunneling is used, these filters are 1004 constructed on the basis of the identifier of the tunnel from which 1005 the incoming packet has emerged (e.g., the source address in the outer 1006 header if IP encapsulation is used). The PCN-egress-node also needs 1007 to know the address of the Decision Point to which it sends reports 1008 for each ingress-egress-aggregate. 1010 A centralized Decision Point needs to have the address of the PCN- 1011 ingress-node corresponding to each ingress-egress-aggregate. 1012 Security considerations require that information also be prepared for 1013 a centralized Decision Point and each PCN-edge-node to allow them to 1014 authenticate each other. 1016 Turning to link-specific parameters, the operator needs to derive 1017 values for the PCN-admissible-rate and [CL-specific] PCN-supportable- 1018 rate on each link in the network. The first two paragraphs of 1019 Section 5.2.2 of [RFC5559] discuss how these values may be derived. 1021 5.1.3. Installation of Parameters and Policies 1023 As discussed in the previous two sections, every PCN node needs to be 1024 provisioned with a number of parameters and policies relating to its 1025 behaviour in processing incoming packets. The Diffserv MIB [RFC3289] 1026 can be useful for this purpose, although it needs to be extended in 1027 some cases. This MIB covers packet classification, metering, 1028 counting, policing and dropping, and marking. The required 1029 extensions specifically include an encapsulation action following re- 1030 classification by ingress-egress-aggregate. In addition, the MIB has 1031 to be extended to include objects for marking the ECN field in the 1032 outer header at the PCN-ingress-node and an extension to the 1033 classifiers to include the ECN field at PCN-interior and PCN-egress- 1034 nodes. Finally, new metering-algorithm objects may need to be 1035 defined at the PCN-interior-nodes to represent the algorithms for 1036 threshold-marking and packet-size-independent excess-traffic-marking. 1038 Values for the PCN-admissible-rate and [CL-specific] PCN-supportable- 1039 rate on each link on a node appear as metering parameters. Operators 1040 should take note of the need to deploy meters of a given type 1041 (threshold or excess-traffic) either on the ingress side or the 1042 egress side of each interior link, but not both (Appendix B.2 of 1043 [RFC5670]). 1045 The following additional information has to be configured by other 1046 means (e.g., additional MIBs, NETCONF models).
1048 At the PCN-egress-node: 1050 o the measurement interval T_meas (units of ms, range 50 to 1000); 1052 o [CL-specific] whether specific flow identifiers must be captured 1053 when excess-traffic-marked packets are observed; 1055 o whether report suppression is to be applied; 1057 o if so, the interval T_maxsuppress (units of 100 ms, range 1 to 1058 100) and the CLE-reporting-threshold (units of tenths of one 1059 percent, range 0 to 1000, default value 0); 1061 o the address of the PCN-ingress-node for each ingress-egress- 1062 aggregate, if the Decision Point is collocated with the PCN- 1063 ingress-node and [I-D.tsvwg-rsvp-pcn] is not deployed. 1065 o the address of the centralized Decision Point to which it sends 1066 its reports, if there is one. 1068 At the Decision Point: 1070 o whether PCN-based flow admission is enabled; 1072 o whether PCN-based flow termination is enabled. 1074 o the value of CLE-limit (units of tenths of one percent, range 0 to 1075 1000); 1077 o the value of the interval T_crit (units of 100 ms, range 1 to 1078 100); 1080 o whether report suppression is to be applied; 1082 o if so, the interval T_maxsuppress (units of 100 ms, range 1 to 1083 100) and the CLE-reporting-threshold (units of tenths of one 1084 percent, range 0 to 1000, default value 0). These MUST be the 1085 same values that are provisioned in the PCN-egress-nodes; 1087 o if the Decision Point is centralized, the address of the PCN- 1088 ingress-node (and any other information needed to establish a 1089 security association) for each ingress-egress-aggregate. 1091 Depending on the testing strategy, it may be necessary to install the 1092 new configuration data in stages. This is discussed further below. 1094 5.1.4. Activation and Verification of All Behaviours 1096 It is certainly not within the scope of this document to advise on 1097 testing strategy, which operators undoubtedly have well in hand. 1098 Quite possibly an operator will prefer an incremental approach to 1099 activation and testing. Implementing the PCN marking scheme at PCN- 1100 ingress-nodes, corresponding scheduling behaviour in downstream 1101 nodes, and re-marking at the PCN-egress-nodes is a large enough step 1102 in itself to require thorough testing before going further. 1104 Testing will probably involve the injection of packets at individual 1105 nodes and tracking of how the node processes them. This work can 1106 make use of the counter capabilities included in the Diffserv MIB. 1107 The application of these capabilities to the management of PCN is 1108 discussed in the next section. 1110 5.2. Management Considerations 1112 This section focuses on the use of event logging and the use of 1113 counters supported by the Diffserv MIB [RFC3289] for the various 1114 monitoring tasks involved in management of a PCN network. 1116 5.2.1. Event Logging In the PCN Domain 1118 It is anticipated that event logging using SYSLOG [RFC5424] will be 1119 needed for fault management and potentially for capacity management. 1120 Implementations MUST be capable of generating logs for the following 1121 events: 1123 o detection of loss of contact between a Decision Point and a PCN- 1124 edge-node, as described in Section 3.3.3; 1126 o successful receipt of a report from a PCN-egress-node, following 1127 detection of loss of contact with that node; 1129 o flow termination events. 1131 All of these logs are generated by the Decision Point. 
There is a 1132 strong likelihood in the first and third cases that the events are 1133 correlated with network failures at a lower level. This has 1134 implications for how often specific event types should be reported, 1135 so as not to contribute unnecessarily to log buffer overflow. 1136 Recommendations on this topic follow for each event report type. 1138 The field names (e.g., HOSTNAME, STRUCTURED-DATA) used in the 1139 following subsections are defined in [RFC5424]. 1141 5.2.1.1. Logging Loss and Restoration of Contact 1143 Section 3.3.3 describes the circumstances under which the Decision 1144 Point may determine that it has lost contact, either with a PCN- 1145 ingress-node or a PCN-egress-node, due to failure to receive an 1146 expected report. Loss of contact with a PCN-ingress-node is a case 1147 primarily applicable when the Decision Point is in a separate node. 1148 However, implementations MAY implement logging in the collocated case 1149 if the implementation is such that non-response to a request from the 1150 Decision Point function can occasionally occur due to processor load 1151 or other reasons. 1153 The log reporting the loss of contact with a PCN-ingress-node or PCN- 1154 egress-node MUST include the following content: 1156 o The HOSTNAME field MUST identify the Decision Point issuing the 1157 log. 1159 o A STRUCTURED-DATA element MUST be present, containing parameters 1160 identifying the node for which an expected report has not been 1161 received and the type of report lost (ingress or egress). It is 1162 RECOMMENDED that the SD-ID for the STRUCTURED-DATA element have 1163 the form "PCNNode" (without the quotes), which has been registered 1164 with IANA. The node identifier PARAM-NAME is RECOMMENDED to be 1165 "ID" (without the quotes). The identifier itself is subject to 1166 the preferences expressed in Section 6.2.4 of [RFC5424] for the 1167 HOSTNAME field. The report type PARAM-NAME is RECOMMENDED to be 1168 "RTyp" (without the quotes). The PARAM-VALUE for the RTyp field 1169 MUST be either "ingr" or "egr". 1171 The following values are also RECOMMENDED for the indicated fields in 1172 this log, subject to local practice: 1174 o PRI initially set to 115, representing a Facility value of (14) 1175 "log alert" and a Severity level of (3) "Error Condition". Note 1176 that loss of contact with a PCN-egress-node implies that no new 1177 flows will be admitted to one or more ingress-egress-aggregates 1178 until contact is restored. The reason a higher severity level 1179 (lower value) is not proposed for the initial log is because any 1180 corrective action would probably be based on alerts at a lower 1181 subsystem level. 1183 o APPNAME set to "PCN" (without the quotes). 1185 o MSGID set to "LOST" (without the quotes). 1187 If contact is not regained with a PCN-egress-node in a reasonable 1188 period of time (say, one minute), the log SHOULD be repeated, this 1189 time with a PRI value of 113, implying a Facility value of (14) "log 1190 alert" and a Severity value of (1) "Alert: action must be taken 1191 immediately". The reasoning is that by this time, any more general 1192 conditions should have been cleared, and the problem lies 1193 specifically with the PCN-egress-node concerned and the PCN 1194 application in particular. 1196 Whenever a loss-of-contact log is generated for a PCN-egress-node, a 1197 log indicating recovery SHOULD be generated when the Decision Point 1198 next receives a report from the node concerned. 
The log SHOULD have 1199 the same content as just described for the loss-of-contact log, with 1200 the following differences: 1202 o PRI changes to 117, indicating a Facility value of (14) "log 1203 alert" and a Severity of (5) "Notice: normal but significant 1204 condition". 1206 o MSGID changes to "RECVD" (without the quotes). 1208 5.2.1.2. Logging Flow Termination Events 1210 Section 3.3.2 describes the process whereby the Decision Point 1211 decides that flow termination is required for a given ingress-egress- 1212 aggregate, calculates how much flow to terminate, and selects flows 1213 for termination. This section describes a log that SHOULD be 1214 generated each time such an event occurs. (In the case where 1215 termination occurs in multiple rounds, one log SHOULD be generated 1216 per round.) The log may be useful in fault management, to indicate 1217 the service impact of a fault occurring in a lower-level subsystem. 1218 In the absence of network failures, it may also be used as an 1219 indication of an urgent need to review capacity utilization along the 1220 path of the ingress-egress-aggregate concerned. 1222 The log reporting a flow termination event MUST include the following 1223 content: 1225 o The HOSTNAME field MUST identify the Decision Point issuing the 1226 log. 1228 o A STRUCTURED-DATA element MUST be present, containing parameters 1229 identifying the ingress and egress nodes for the ingress-egress- 1230 aggregate concerned, indicating the total amount of flow being 1231 terminated, and giving the number of flows terminated to achieve 1232 that objective. 1234 It is RECOMMENDED that the SD-ID for the STRUCTURED-DATA element 1235 have the form: "PCNTerm" (without the quotes), which has been 1236 registered with IANA. The parameter identifying the ingress node 1237 for the ingress-egress-aggregate is RECOMMENDED to have PARAM-NAME 1238 "IngrID" (without the quotes). This parameter MAY be omitted if 1239 the Decision Point is collocated with that PCN-ingress-node. The 1240 parameter identifying the egress node for the ingress-egress- 1241 aggregate is RECOMMENDED to have PARAM-NAME "EgrID" (without the 1242 quotes). Both identifiers are subject to the preferences 1243 expressed in Section 6.2.4 of [RFC5424] for the HOSTNAME field. 1245 The parameter giving the total amount of flow being terminated is 1246 RECOMMENDED to have PARAM-NAME "TermRate" (without the quotes). 1247 The PARAM-VALUE MUST be the target rate as calculated according to 1248 the procedures of Section 3.3.2, as an integer value in thousands 1249 of octets per second. The parameter giving the number of flows 1250 selected for termination is RECOMMENDED to have PARAM-NAME "FCnt" 1251 (without the quotes). The PARAM-VALUE for this parameter MUST be 1252 an integer, the number of flows selected. 1254 The following values are also RECOMMENDED for the indicated fields in 1255 this log, subject to local practice: 1257 o PRI initially set to 116, representing a Facility value of (14) 1258 "log alert" and a Severity level of (4) "Warning: warning 1259 conditions". 1261 o APPNAME set to "PCN" (without the quotes). 1263 o MSGID set to "TERM" (without the quotes). 1265 5.2.2. Provision and Use of Counters 1267 The Diffserv MIB [RFC3289] allows for the provision of counters along 1268 the various possible processing paths associated with an interface 1269 and flow direction. It is RECOMMENDED that the PCN-nodes be 1270 instrumented as described below.
1265 5.2.2. Provision and Use of Counters

1267 The Diffserv MIB [RFC3289] allows for the provision of counters along
1268 the various possible processing paths associated with an interface
1269 and flow direction. It is RECOMMENDED that the PCN-nodes be
1270 instrumented as described below. It is assumed that the cumulative
1271 counts so obtained will be collected periodically for use in
1272 debugging, fault management, and capacity management.

1274 PCN-ingress-nodes SHOULD provide the following counts for each
1275 ingress-egress-aggregate. Since the Diffserv MIB installs counters
1276 by interface and direction, aggregation of counts over multiple
1277 interfaces may be necessary to obtain total counts by ingress-egress-
1278 aggregate. It is expected that such aggregation will be performed by
1279 a central system rather than at the PCN-ingress-node.

1281 o total PCN packets and octets received for that ingress-egress-
1282 aggregate but dropped;

1284 o total PCN packets and octets admitted to that aggregate.

1286 PCN-interior-nodes SHOULD provide the following counts for each
1287 interface, noting that a given packet MUST NOT be counted more than
1288 once as it passes through the node:

1290 o total PCN packets and octets dropped;

1292 o total PCN packets and octets forwarded without re-marking;

1294 o [CL-specific] total PCN packets and octets re-marked to Threshold-
1295 Marked;

1297 o total PCN packets and octets re-marked to Excess-Traffic-Marked.

1299 PCN-egress-nodes SHOULD provide the following counts for each
1300 ingress-egress-aggregate. As with the PCN-ingress-node, it is
1301 expected that any necessary aggregation over
1302 multiple interfaces will be done by a central system.

1304 o total Not-Marked PCN packets and octets received;

1306 o [CL-specific] total Threshold-Marked PCN packets and octets
1307 received;

1309 o total Excess-Traffic-Marked PCN packets and octets received.

1311 The following continuously cumulative counters SHOULD be provided as
1312 indicated, but require new MIBs to be defined. If the Decision Point
1313 is not collocated with the PCN-ingress-node, the latter SHOULD
1314 provide a count of the number of requests for PCN-sent-rate received
1315 from the Decision Point and the number of responses returned to the
1316 Decision Point. The PCN-egress-node SHOULD provide a count of the
1317 number of reports sent to each Decision Point. Each Decision Point
1318 SHOULD provide the following:

1320 o total number of requests for PCN-sent-rate sent to each PCN-
1321 ingress-node with which it is not collocated;

1323 o total number of reports received from each PCN-egress-node;

1325 o total number of loss-of-contact events detected for each PCN-
1326 boundary-node;

1328 o total cumulative duration of "block" state in hundreds of
1329 milliseconds for each ingress-egress-aggregate;

1331 o total number of rounds of flow termination exercised for each
1332 ingress-egress-aggregate.
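
As a non-normative illustration of the counter sets listed above, the
sketch below groups them into simple Python structures; the class and
field names are hypothetical and do not correspond to objects of the
Diffserv MIB or any other existing MIB. Counters for the
PCN-egress-node and the Decision Point would follow the same pattern.

   # Non-normative sketch of per-node PCN counter sets; all names are
   # illustrative only.
   from dataclasses import dataclass

   @dataclass
   class IngressAggregateCounters:
       """Kept by a PCN-ingress-node per ingress-egress-aggregate."""
       pkts_dropped: int = 0
       octets_dropped: int = 0
       pkts_admitted: int = 0
       octets_admitted: int = 0

   @dataclass
   class InteriorInterfaceCounters:
       """Kept by a PCN-interior-node per interface; each packet is
       counted against exactly one of the four categories."""
       pkts_dropped: int = 0
       octets_dropped: int = 0
       pkts_forwarded_unmarked: int = 0
       octets_forwarded_unmarked: int = 0
       pkts_remarked_threshold: int = 0    # [CL-specific]
       octets_remarked_threshold: int = 0  # [CL-specific]
       pkts_remarked_excess: int = 0
       octets_remarked_excess: int = 0
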
1334 6. Security Considerations

1336 [RFC5559] provides a general description of the security
1337 considerations for PCN. This memo introduces one new consideration,
1338 related to the use of a centralized Decision Point. The Decision
1339 Point itself is a trusted entity. However, its use implies the
1340 existence of an interface on the PCN-ingress-node through which
1341 communication of policy decisions takes place. That interface is a
1342 point of vulnerability that must be protected from denial-of-service
1343 attacks.

1345 7. IANA Considerations

1347 This document requests IANA to add the following entries to the
1348 syslog Structured Data ID Values registry. RFCxxxx is this document
1349 when published.

1351 Structured Data ID: PCNNode OPTIONAL

1353 Structured Data Parameter: ID MANDATORY

1355 Structured Data Parameter: RTyp MANDATORY

1357 Reference: RFCxxxx

1359 Structured Data ID: PCNTerm OPTIONAL

1361 Structured Data Parameter: IngrID MANDATORY

1363 Structured Data Parameter: EgrID MANDATORY

1365 Structured Data Parameter: TermRate MANDATORY

1367 Structured Data Parameter: FCnt MANDATORY

1369 Reference: RFCxxxx

1371 8. Acknowledgements

1373 The content of this memo bears a family resemblance to
1374 [ID.briscoe-CL]. The authors of that document were Bob Briscoe,
1375 Philip Eardley, and Dave Songhurst of BT, Anna Charny and Francois Le
1376 Faucheur of Cisco, Jozef Babiarz, Kwok Ho Chan, and Stephen Dudley of
1377 Nortel, Georgios Karagiannis of U. Twente and Ericsson, and Attila
1378 Bader and Lars Westberg of Ericsson.

1380 Ruediger Geib, Philip Eardley, and Bob Briscoe have helped to shape
1381 the present document with their comments. Toby Moncaster gave a
1382 careful review to get it into shape for Working Group Last Call.

1384 Amongst the authors, Michael Menth deserves special mention for his
1385 constant and careful attention to both the technical content of this
1386 document and the manner in which it was expressed.

1388 David Harrington's careful AD review resulted not only in necessary
1389 changes throughout the document, but also in the addition of the
1390 operational and management considerations (Section 5).

1392 As part of the broader review process, the document saw further
1393 improvements as a result of comments by Joel Halpern, Brian
1394 Carpenter, Stephen Farrell, Sean Turner, and Pete Resnick.

1396 9. References

1398 9.1. Normative References

1400 [ID.pcn-3-in-1]
1401 Briscoe, B., Moncaster, T., and M. Menth, "Encoding 3 PCN-
1402 States in the IP header using a single DSCP", March 2012.

1404 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
1405 Requirement Levels", BCP 14, RFC 2119, March 1997.

1407 [RFC2474] Nichols, K., Blake, S., Baker, F., and D. Black,
1408 "Definition of the Differentiated Services Field (DS
1409 Field) in the IPv4 and IPv6 Headers", RFC 2474,
1410 December 1998.

1412 [RFC2475] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z.,
1413 and W. Weiss, "An Architecture for Differentiated
1414 Services", RFC 2475, December 1998.

1416 [RFC3086] Nichols, K. and B. Carpenter, "Definition of
1417 Differentiated Services Per Domain Behaviors and Rules for
1418 their Specification", RFC 3086, April 2001.

1420 [RFC3289] Baker, F., Chan, K., and A. Smith, "Management Information
1421 Base for the Differentiated Services Architecture",
1422 RFC 3289, May 2002.

1424 [RFC5424] Gerhards, R., "The Syslog Protocol", RFC 5424, March 2009.

1426 [RFC5559] Eardley, P., "Pre-Congestion Notification (PCN)
1427 Architecture", RFC 5559, June 2009.

1429 [RFC5670] Eardley, P., "Metering and Marking Behaviour of PCN-
1430 Nodes", RFC 5670, November 2009.

1432 9.2. Informative References

1434 [I-D.tsvwg-rsvp-pcn]
1435 Karagiannis, G. and A. Bhargava, "Generic Aggregation of
1436 Resource ReSerVation Protocol (RSVP) for IPv4 and IPv6
1437 Reservations over PCN domains (Work in progress)",
1438 July 2011.

1440 [ID.briscoe-CL]
1441 Briscoe, B., "An edge-to-edge Deployment Model for Pre-
1442 Congestion Notification: Admission Control over a DiffServ
1443 Region (expired Internet Draft)", 2006.

1445 [MeLe10] Menth, M. and F. Lehrieder, "PCN-Based Measured Rate
1446 Termination", Computer Networks Journal (Elsevier), vol.
1447 54, no. 13, pages 2099 - 2116, September 2010.

1449 [MeLe12] Menth, M. and F. Lehrieder, "Performance of PCN-Based
1450 Admission Control under Challenging Conditions", IEEE/ACM
1451 Transactions on Networking, vol. 20, no. 2, April 2012.

1453 [RFC4594] Babiarz, J., Chan, K., and F. Baker, "Configuration
1454 Guidelines for DiffServ Service Classes", RFC 4594,
1455 August 2006.

1457 [RFCyyyy] Charny, A., Zhang, J., Karagiannis, G., Menth, M., and T.
1458 Taylor, "PCN Boundary Node Behaviour for the Single
1459 Marking (SM) Mode of Operation (Work in progress)",
1460 December 2010.

1462 [Satoh10] Satoh, D. and H. Ueno, "Cause and Countermeasure of
1463 Overtermination for PCN-Based Flow Termination",
1464 Proceedings of IEEE Symposium on Computers and
1465 Communications (ISCC '10), pp. 155-161, Riccione, Italy,
1466 June 2010.

1468 Authors' Addresses

1470 Anna Charny
1471 USA

1473 Phone:
1474 Email: anna@mwsm.com

1476 Fortune Huang
1477 Huawei Technologies
1478 Section F, Huawei Industrial Base,
1479 Bantian Longgang, Shenzhen 518129
1480 P.R. China

1482 Phone: +86 15013838060
1483 Email: fqhuang@huawei.com

1484 Georgios Karagiannis
1485 U. Twente

1487 Phone:
1488 Email: karagian@cs.utwente.nl

1490 Michael Menth
1491 University of Tuebingen
1492 Sand 13
1493 Tuebingen D-72076
1494 Germany

1496 Phone: +49-7071-2970505
1497 Email: menth@informatik.uni-tuebingen.de

1499 Tom Taylor (editor)
1500 Huawei Technologies
1501 Ottawa, Ontario
1502 Canada

1504 Email: tom.taylor.stds@gmail.com