idnits 2.17.1 draft-charny-pcn-single-marking-03.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- ** It looks like you're using RFC 3978 boilerplate. You should update this to the boilerplate described in the IETF Trust License Policy document (see https://trustee.ietf.org/license-info), which is required now. -- Found old boilerplate from RFC 3978, Section 5.1 on line 21. -- Found old boilerplate from RFC 3978, Section 5.5, updated by RFC 4748 on line 2622. -- Found old boilerplate from RFC 3979, Section 5, paragraph 1 on line 2633. -- Found old boilerplate from RFC 3979, Section 5, paragraph 2 on line 2640. -- Found old boilerplate from RFC 3979, Section 5, paragraph 3 on line 2646. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** The document seems to lack an IANA Considerations section. (See Section 2.2 of https://www.ietf.org/id-info/checklist for how to handle the case when there are no actions for IANA.) ** There are 51 instances of too long lines in the document, the longest one being 5 characters in excess of 72. ** The abstract seems to contain references ([I-D.eardley-pcn-architecture], [I-D.briscoe-tsvwg-cl-architecture]), which it shouldn't. Please replace those with straight textual mentions of the documents in question. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust Copyright Line does not match the current year == The document doesn't use any RFC 2119 keywords, yet seems to have RFC 2119 boilerplate text. -- The document seems to lack a disclaimer for pre-RFC5378 work, but may have content which was first submitted before 10 November 2008. If you have contacted all the original authors and they are all willing to grant the BCP78 rights to the IETF Trust, then this is fine, and you can ignore this comment. If not, you may need to add the pre-RFC5378 disclaimer. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (November 18, 2007) is 6005 days in the past. Is this intentional? 
Checking references for intended status: Proposed Standard ---------------------------------------------------------------------------- (See RFCs 3967 and 4897 for information about using normative references to lower-maturity documents in RFCs) == Missing Reference: 'Menth' is mentioned on line 2572, but not defined == Missing Reference: 'Jamin' is mentioned on line 2569, but not defined == Unused Reference: 'I-D.briscoe-tsvwg-re-ecn-tcp' is defined on line 2537, but no explicit reference was found in the text == Unused Reference: 'I-D.lefaucheur-emergency-rsvp' is defined on line 2551, but no explicit reference was found in the text == Unused Reference: 'I-D.zhang-pcn-performance-evaluation' is defined on line 2561, but no explicit reference was found in the text == Outdated reference: A later version (-01) exists of draft-babiarz-pcn-3sm-00 == Outdated reference: A later version (-09) exists of draft-briscoe-tsvwg-re-ecn-tcp-04 == Outdated reference: A later version (-05) exists of draft-westberg-pcn-load-control-02 Summary: 4 errors (**), 0 flaws (~~), 10 warnings (==), 7 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Network Working Group A. Charny 3 Internet-Draft Cisco Systems, Inc. 4 Intended status: Standards Track J. Zhang 5 Expires: May 21, 2008 Cisco Systems, Inc. and Cornell 6 University 7 F. Le Faucheur 8 V. Liatsos 9 Cisco Systems, Inc. 10 November 18, 2007 12 Pre-Congestion Notification Using Single Marking for Admission and 13 Termination 14 draft-charny-pcn-single-marking-03.txt 16 Status of this Memo 18 By submitting this Internet-Draft, each author represents that any 19 applicable patent or other IPR claims of which he or she is aware 20 have been or will be disclosed, and any of which he or she becomes 21 aware will be disclosed, in accordance with Section 6 of BCP 79. 23 Internet-Drafts are working documents of the Internet Engineering 24 Task Force (IETF), its areas, and its working groups. Note that 25 other groups may also distribute working documents as Internet- 26 Drafts. 28 Internet-Drafts are draft documents valid for a maximum of six months 29 and may be updated, replaced, or obsoleted by other documents at any 30 time. It is inappropriate to use Internet-Drafts as reference 31 material or to cite them other than as "work in progress." 33 The list of current Internet-Drafts can be accessed at 34 http://www.ietf.org/ietf/1id-abstracts.txt. 36 The list of Internet-Draft Shadow Directories can be accessed at 37 http://www.ietf.org/shadow.html. 39 This Internet-Draft will expire on May 21, 2008. 41 Copyright Notice 43 Copyright (C) The IETF Trust (2007). 45 Abstract 47 Pre-Congestion Notification described in 48 [I-D.eardley-pcn-architecture] and earlier in 50 [I-D.briscoe-tsvwg-cl-architecture] approach proposes the use of an 51 Admission Control mechanism to limit the amount of real-time PCN 52 traffic to a configured level during the normal operating conditions, 53 and the use of a Flow Termination mechanism to tear-down some of the 54 flows to bring the PCN traffic level down to a desirable amount 55 during unexpected events such as network failures, with the goal of 56 maintaining the QoS assurances to the remaining flows. In 57 [I-D.eardley-pcn-architecture], Admission and Flow Termination use 58 two different markings and two different metering mechanisms in the 59 internal nodes of the PCN region. 
This draft proposes a mechanism 60 using a single marking and metering for both Admission and Flow 61 Termination, and presents an analysis of the tradeoffs. A side- 62 effect of this proposal is that a different marking and metering 63 Admission mechanism than that proposed in 64 [I-D.eardley-pcn-architecture] may be also feasible, and may result 65 in a number of benefits. In addition, this draft proposes a 66 migration path for incremental deployment of this approach as an 67 intermediate step to the dual-marking approach. 69 Requirements Language 71 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 72 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 73 document are to be interpreted as described in RFC 2119 [RFC2119]. 75 Table of Contents 77 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 78 1.1. Changes from -02 version . . . . . . . . . . . . . . . . . 5 79 1.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 5 80 1.3. Background and Motivation . . . . . . . . . . . . . . . . 5 81 2. The Single Marking Approach . . . . . . . . . . . . . . . . . 7 82 2.1. High Level description . . . . . . . . . . . . . . . . . . 7 83 2.2. Operation at the PCN-interior-node . . . . . . . . . . . . 8 84 2.3. Operation at the PCN-egress-node . . . . . . . . . . . . . 8 85 2.4. Operation at the PCN-ingress-node . . . . . . . . . . . . 8 86 2.4.1. Admission Decision . . . . . . . . . . . . . . . . . . 8 87 2.4.2. Flow Termination Decision . . . . . . . . . . . . . . 9 88 3. Benefits of Allowing the Single Marking Approach . . . . . . . 10 89 4. Impact on PCN Architectural Framework . . . . . . . . . . . . 11 90 4.1. Impact on the PCN-Internal-Node . . . . . . . . . . . . . 11 91 4.2. Impact on the PCN-boundary nodes . . . . . . . . . . . . . 11 92 4.2.1. Impact on PCN-Egress-Node . . . . . . . . . . . . . . 11 93 4.2.2. Impact on the PCN-Ingress-Node . . . . . . . . . . . . 12 94 4.3. Summary of Proposed Enhancements Required for Support 95 of Single Marking Options . . . . . . . . . . . . . . . . 13 96 4.4. Proposed Optional Renaming of the Marking and Marking 97 Thresholds . . . . . . . . . . . . . . . . . . . . . . . . 14 98 4.5. An Optimization Using a Single Configuration Parameter 99 for Single Marking . . . . . . . . . . . . . . . . . . . . 15 100 5. Incremental Deployment Considerations . . . . . . . . . . . . 15 101 6. Tradeoffs, Issues and Limitations of Single Marking 102 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 103 6.1. Global Configuration Requirements . . . . . . . . . . . . 16 104 6.2. Assumptions on Loss . . . . . . . . . . . . . . . . . . . 16 105 6.3. Effect of Reaction Timescale of Admission Mechanism . . . 17 106 6.4. Performance Implications and Tradeoffs . . . . . . . . . . 17 107 6.5. Effect on Proposed Anti-Cheating Mechanisms . . . . . . . 18 108 6.6. ECMP Handling . . . . . . . . . . . . . . . . . . . . . . 18 109 6.7. Traffic Engineering Considerations . . . . . . . . . . . . 19 110 7. Performance Evaluation Comparison . . . . . . . . . . . . . . 22 111 7.1. Relationship to other drafts . . . . . . . . . . . . . . . 22 112 7.2. Admission Control: High Level Conclusions . . . . . . . . 23 113 7.3. Flow Termination Results . . . . . . . . . . . . . . . . . 24 114 7.3.1. Sensitivity to Low Ingress-Egress aggregation 115 levels . . . . . . . . . . . . . . . . . . . . . . . . 24 116 7.3.2. Over-termination in the Multi-bottleneck Scenarios . . 25 117 7.4. Future work . . . . . . . . . . . . . 
. . . . . . . . . . 26 118 8. Appendix A: Simulation Details . . . . . . . . . . . . . . . 26 119 8.1. Simulation Setup and Environment . . . . . . . . . . . . . 27 120 8.1.1. Network and Signaling Models . . . . . . . . . . . . . 27 121 8.1.2. Traffic Models . . . . . . . . . . . . . . . . . . . . 29 122 8.1.3. Performance Metrics . . . . . . . . . . . . . . . . . 32 124 8.2. Admission Control . . . . . . . . . . . . . . . . . . . . 33 125 8.2.1. Parameter Settings . . . . . . . . . . . . . . . . . . 33 126 8.2.2. Sensitivity to EWMA weight and CLE . . . . . . . . . . 33 127 8.2.3. Effect of Ingress-Egress Aggregation . . . . . . . . . 36 128 8.2.4. Effect of Multiple Bottlenecks . . . . . . . . . . . . 41 129 8.3. Termination Control . . . . . . . . . . . . . . . . . . . 45 130 8.3.1. Ingress-Egress Aggregation Experiments . . . . . . . . 45 131 8.3.2. Multiple Bottlenecks Experiments . . . . . . . . . . . 48 132 9. Appendix B. Controlling The Single Marking Configuration 133 with a Single Parameter . . . . . . . . . . . . . . . . . . . 54 134 9.1. Assumption . . . . . . . . . . . . . . . . . . . . . . . . 54 135 9.2. Details of the Proposed Enhancements to PCN 136 Architecture . . . . . . . . . . . . . . . . . . . . . . . 54 137 9.2.1. PCN-Internal-Node . . . . . . . . . . . . . . . . . . 54 138 9.2.2. PCN-Egress-Node . . . . . . . . . . . . . . . . . . . 55 139 9.2.3. PCN-Ingress-Node . . . . . . . . . . . . . . . . . . . 56 140 10. Security Considerations . . . . . . . . . . . . . . . . . . . 57 141 11. References . . . . . . . . . . . . . . . . . . . . . . . . . . 57 142 11.1. Normative References . . . . . . . . . . . . . . . . . . . 57 143 11.2. Informative References . . . . . . . . . . . . . . . . . . 57 144 11.3. References . . . . . . . . . . . . . . . . . . . . . . . . 58 145 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 58 146 Intellectual Property and Copyright Statements . . . . . . . . . . 60 148 1. Introduction 150 1.1. Changes from -02 version 152 o Added Flow Termination results (Section 7 and Section 8.3) 154 o Minor other edits 156 o Alignment with draft-charny-pcn-comparison 158 1.2. Terminology 160 This draft uses the terminology defined in 161 [I-D.eardley-pcn-architecture] 163 1.3. Background and Motivation 165 Pre-Congestion Notification [I-D.eardley-pcn-architecture] approach 166 proposes to use an Admission Control mechanism to limit the amount of 167 real-time PCN traffic to a configured level during the normal 168 operating conditions, and to use a Flow Termination mechanism to 169 tear-down some of the flows to bring the PCN traffic level down to a 170 desirable amount during unexpected events such as network failures, 171 with the goal of maintaining the QoS assurances to the remaining 172 flows. In [I-D.eardley-pcn-architecture], Admission and Flow 173 Termination use two different markings and two different metering 174 mechanisms in the internal nodes of the PCN region. Admission 175 Control algorithms for variable-rate real-time traffic such as video 176 have traditionally been based on the observation of the queue length, 177 and hence re-using these techniques and ideas in the context of pre- 178 congestion notification is highly attractive, and motivated the 179 threshold- and ramp- marking and metering techniques based on the 180 virtual queue implementation described in 181 [I-D.briscoe-tsvwg-cl-architecture] for Admission. 
On the other 182 hand, for Flow Termination, it is desirable to know how many flows 183 need to be terminated, and that in turn motivates excess-rate-based 184 Flow Termination metering. This provides some motivation for 185 employing different metering algorithms for Admission and for Flow 186 Termination. 188 Furthermore, it is frequently desirable to trigger Flow Termination 189 at a substantially higher traffic level than the level at which no 190 new flows are to be admitted. There are multiple reasons for the 191 requirement to enforce a different configured-admissible-rate and 192 configured-termination-rate. These include, for example: 194 o End-users are typically more annoyed by their established call 195 dying than by getting a busy tone at call establishment. Hence 196 decisions to terminate flows may need to be made at a higher load 197 level than the decision to stop admitting. 199 o There are often very tight (possibly legal) obligations on network 200 operators to not drop established calls. 202 o Voice Call Routing often has the ability to route/establish the 203 call on another network (e.g., PSTN) if it is determined at call 204 establishment that one network (e.g., packet network) cannot 205 accept the call. Therefore, not admitting a call on the packet 206 network at initial establishment may not impact the end-user. In 207 contrast, it is usually not possible to reroute an established 208 call onto another network mid-call. This means that call 209 Termination cannot be hidden from the end-user. 211 o Flow Termination is typically useful in failure situations where 212 some loads get rerouted, thereby increasing the load on remaining 213 links. Because the failure may only be temporary, the operator 214 may be ready to tolerate a small degradation during the interim 215 failure period. This also argues for a higher configured- 216 termination-rate than configured-admissible-rate. 218 o A congestion-notification-based Admission scheme has some inherent 219 inaccuracies because of its reactive nature and thus may 220 potentially over-admit in some situations (such as a burst of call 221 arrivals). If the Flow Termination scheme reacted at the same rate 222 threshold as the Admission scheme, calls might routinely get dropped after 223 establishment because of over-admission, even under steady-state 224 conditions. 226 These considerations argue for metering for Admission and Flow 227 Termination at different traffic levels and hence, implicitly, for 228 different markings and metering schemes. 230 Different marking schemes require different codepoints. Thus, such 231 separate markings consume valuable real estate in the packet header, 232 which is especially scarce in the case of MPLS Pre-Congestion Notification 233 [I-D.davie-ecn-mpls]. Furthermore, two different metering 234 techniques involve additional complexity in the data path of the 235 internal routers of the PCN-domain. 237 To this end, [I-D.briscoe-tsvwg-cl-architecture] proposes an 238 approach, referred to as "implicit Preemption marking" in that draft, 239 that does not require a separate termination-marking. However, it does 240 require two separate measurement schemes: one measurement for 241 Admission and another measurement for Flow Termination. Furthermore, 242 this approach mandates that the configured-termination-rate be equal 243 to a drop rate.
This approach effectively uses dropping as the way 244 to convey information about how much traffic can "fit" under the 245 configured-termination-rate, instead of using a separate termination 246 marking. This is a significant restriction in that it results in 247 flow termination only taking effect once packets actually get 248 dropped. 250 This document presents an approach that allows the use of a single 251 PCN marking and a single metering technique at the internal devices 252 without requiring that the dropping and flow termination thresholds 253 be the same. We argue that this approach can be used as an intermediate 254 step in the implementation and deployment of a full-fledged dual-marking 255 PCN implementation. We also quantify performance tradeoffs that are 256 associated with the choice of the Single Marking approach. 258 2. The Single Marking Approach 260 2.1. High Level description 262 The proposed approach is based on several simple ideas: 264 o Replace virtual-queue-based threshold- or ramp-marking for 265 Admission Control by excess-rate-marking: 267 * meter traffic exceeding the configured-admissible-rate and mark 268 *excess* traffic (e.g. using a token bucket with the rate 269 configured equal to the configured-admissible-rate) 271 * at the PCN-boundary-node, stop admitting traffic when the 272 fraction of marked traffic for a given edge-to-edge aggregate 273 exceeds a configured threshold (e.g. stop admitting when 1% of 274 all traffic in the edge-to-edge aggregate received at the 275 ingress is marked) 277 o Impose a PCN-domain-wide constraint on the ratio U between the 278 level of PCN load on a link at which Flow Termination needs to be 279 triggered and the configured-admissible-rate on that link (but do 280 not explicitly configure a configured-termination-rate). For 281 example, one might impose a policy that Flow Termination is 282 triggered when PCN traffic exceeds 120% of the configured- 283 admissible-rate on any link of the PCN-domain. 285 The remainder of this section describes the possible operation 286 of the system. 288 2.2. Operation at the PCN-interior-node 290 The PCN-interior-node meters the aggregate PCN traffic and marks the 291 excess rate. A number of implementations can achieve 292 this. A token bucket implementation is particularly attractive 293 because of its relative simplicity, and even more so because a token 294 bucket implementation is readily available in the vast majority of 295 existing equipment. The rate of the token bucket is configured to 296 correspond to the configured-admissible-rate, and the depth of the 297 token bucket can be configured by an operator based on the desired 298 tolerance to PCN traffic burstiness. 300 Note that no configured-termination-rate is explicitly configured at 301 the PCN-interior-node, and the PCN-interior-node does nothing at all 302 to enforce it. All marking is based on the single configured rate 303 threshold (configured-admissible-rate). 305 2.3. Operation at the PCN-egress-node 307 The PCN-egress-node measures the rate of both marked and unmarked 308 traffic on a per-ingress basis, and reports to the PCN-ingress-node 309 two values: the rate of unmarked traffic from this ingress node, 310 which we term the Sustainable Admission Rate (SAR), and the Congestion 311 Level Estimate (CLE), which is the fraction of the marked traffic 312 received from this ingress node.
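The following sketch, in Python, is included purely as an illustration of the interior-node excess-rate metering of Section 2.2 and of the per-ingress accounting just described for the PCN-egress-node. The class and parameter names (TokenBucketMarker, EgressMeter, and so on) are hypothetical and not part of any specification; a real implementation would of course operate on packets in the forwarding path.

   import time

   class TokenBucketMarker:
       """Excess-rate marker for a PCN-interior-node (illustrative only).
       rate_bps is set to the configured-admissible-rate; depth_bits is
       chosen by the operator based on the tolerated PCN burstiness."""
       def __init__(self, rate_bps, depth_bits):
           self.rate = rate_bps
           self.depth = depth_bits
           self.tokens = depth_bits
           self.last = time.time()

       def mark(self, pkt_bits, now=None):
           """Return True if the packet should carry the excess-rate mark."""
           now = time.time() if now is None else now
           self.tokens = min(self.depth,
                             self.tokens + (now - self.last) * self.rate)
           self.last = now
           if self.tokens >= pkt_bits:
               self.tokens -= pkt_bits   # fits under the configured-admissible-rate
               return False
           return True                   # excess traffic: marked, not dropped

   class EgressMeter:
       """Per-ingress accounting at a PCN-egress-node (illustrative only)."""
       def __init__(self):
           self.marked_bits = 0.0
           self.unmarked_bits = 0.0

       def observe(self, pkt_bits, is_marked):
           if is_marked:
               self.marked_bits += pkt_bits
           else:
               self.unmarked_bits += pkt_bits

       def report(self, interval_s):
           """Return (SAR, CLE) for the measurement interval, then reset."""
           total = self.marked_bits + self.unmarked_bits
           sar = self.unmarked_bits / interval_s        # rate of unmarked traffic
           cle = self.marked_bits / total if total else 0.0
           self.marked_bits = self.unmarked_bits = 0.0
           return sar, cle

How the measurement interval is chosen and how (SAR, CLE) are signaled back to the PCN-ingress-node are implementation and protocol details not addressed by this sketch.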
Note that the Sustainable Admission 313 Rate is analogous to the sustainable termination rate of CL, except 314 that in this case it is based on the configured-admissible-rate rather than 315 the termination threshold, while the CLE is exactly the same as that of 316 CL. The details of the rate measurement are outside the scope of 317 this draft. 319 2.4. Operation at the PCN-ingress-node 321 2.4.1. Admission Decision 323 Just as in CL, the admission decision is based on the CLE. The 324 ingress node stops admission of new flows if the CLE is above a pre- 325 defined threshold (e.g. 1%). Note that although the logic of the 326 decision is exactly the same as in the case of CL, the detailed 327 semantics of the marking is different. This is because the marking 328 used for admission in this proposal reflects the excess rate over the 329 configured-admissible-rate, while in CL, the marking is based on 330 exceeding a virtual queue threshold. Notably, in the current 331 proposal, if the average sustained rate of admitted traffic is 5% 332 over the admission threshold, then roughly 5% of the traffic is expected to 333 be marked, whereas in the context of CL a steady 5% overload should 334 eventually result in 100% of all traffic being admission-marked. A 335 consequence of this is that for "smooth" constant-rate traffic, the 336 approach presented here will not mark any traffic at all until the 337 rate of the traffic exceeds the configured admission threshold by the 338 amount corresponding to the chosen CLE threshold. 340 At first glance this may seem to result in a violation of the pre- 341 congestion notification premise that attempts to stop admission 342 before the desired traffic level is reached. However, in reality one 343 can simply embed the CLE level into the desired configuration of the 344 admission threshold. That is, if a certain rate X is the actual 345 target admission threshold, then one should configure the rate of the 346 metering device (e.g. the rate of the token bucket) to X-y, where y 347 corresponds to the level of CLE that would trigger an admission-blocking 348 decision. 350 A more important distinction is that the ramp version of the 351 virtual-queue-based marking reacts to short-term burstiness of 352 traffic, while the excess-rate-based marking is only capable of 353 reacting to rate violations at the timescale chosen for rate 354 measurement. Based on our investigation, it seems that this 355 distinction is not crucial in the context of PCN, where no actual 356 queuing is expected even if the virtual queue is full. More 357 discussion on this is presented later in the draft. 359 2.4.2. Flow Termination Decision 361 When the ingress observes a non-zero CLE and a Sustainable Admission 362 Rate (SAR), it first computes the Sustainable Termination Rate (STR) 363 by simply multiplying SAR by the system-wide constant U, where U is 364 the system-wide ratio between the (implicit) termination and admission 365 thresholds on all links in the PCN domain: STR = SAR*U. The PCN- 366 ingress-node then performs exactly the same operation as in CL with 367 respect to STR: it terminates the appropriate number of flows to 368 ensure that the rate of traffic it sends to the corresponding egress 369 node does not exceed STR. 371 Note: In certain cases where ingress-egress aggregation is not 372 sufficient, additional mechanisms may be needed to improve the 373 accuracy of the algorithm.
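Before turning to such refinements, the following fragment (Python, illustrative only; IngressController and its parameter names are hypothetical, not part of any specification) summarizes the basic PCN-ingress-node decisions described in Sections 2.4.1 and 2.4.2 above.

   class IngressController:
       """Per-(ingress, egress) admission and termination logic (sketch)."""
       def __init__(self, u_ratio, cle_threshold=0.01):
           self.U = u_ratio                    # domain-wide termination/admission ratio
           self.cle_threshold = cle_threshold  # e.g. 1%
           self.admit_new_flows = True

       def on_egress_report(self, cle, sar, sent_rate):
           """Process one (CLE, SAR) report from a PCN-egress-node.
           sent_rate is the rate this ingress currently sends to that egress.
           Returns the amount of traffic (in rate units) to terminate."""
           # Admission (Section 2.4.1): block new flows while CLE exceeds the threshold.
           self.admit_new_flows = (cle <= self.cle_threshold)

           # Termination (Section 2.4.2): STR = SAR * U; terminate the excess, if any.
           str_rate = sar * self.U
           return max(0.0, sent_rate - str_rate)

A real implementation may of course terminate less than the full excess at once, or smooth the termination trigger, as discussed in the remainder of this section.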
One possibility for improving the accuracy at low levels of ingress-egress aggregation is to guard/activate the 374 termination control with a trigger computed from EWMA-smoothed egress 375 measurements (e.g. the termination should be triggered when the ratio 376 of smoothed marked and smoothed unmarked traffic is greater than 377 U-1). Sections 7 and 8.3 provide additional discussion on this 378 issue. For sufficient levels of aggregation of the ingress-egress aggregate (IEA) traffic, no 379 smoothing of the termination trigger is required. 381 Just as in the case of CL, an implementation may decide to slow down 382 the termination process by preempting fewer flows than is necessary 383 to cap its traffic to STR, by employing a variety of techniques such 384 as safety factors or hysteresis. In summary, the operation of 385 Termination at the ingress node is mostly identical to that of CL, 386 with the one exception that the Sustainable Termination Rate is 387 computed from the Sustainable Admission Rate rather than derived from 388 a separate marking. As discussed earlier, this is enabled by 389 imposing a system-wide restriction on the termination-to-admission 390 threshold ratio and changing the semantics of the admission marking 391 from ramp- or threshold-marking to excess-rate-marking. 393 3. Benefits of Allowing the Single Marking Approach 395 The following is a summary of benefits associated with enabling the 396 Single Marking (SM) approach. Some tradeoffs will be discussed in 397 section 7 below. 399 o Reduced implementation requirements on core routers due to a 400 single metering implementation instead of two different ones. 402 o Ease of use on existing hardware: given that the proposed approach 403 is particularly amenable to a token bucket implementation, the 404 availability of token buckets on virtually all commercially 405 available routers makes this approach especially attractive. 407 o Enabling incremental implementation and deployment of PCN (see 408 section 4). 410 o Reduced number of codepoints which need to be conveyed in the 411 packet header. If the PCN-bits used in the packet header to 412 convey the congestion notification information are the ECN-bits in 413 an IP core and the EXP-bits in an MPLS core, these are very 414 expensive real estate. The current proposals need 5 codepoints. This 415 is especially important in the context of MPLS, where there 416 is only a total of 8 EXP codepoints, which must also be shared with 417 DiffServ. Eliminating one codepoint considerably helps. 419 o The possibility of using a token-bucket-based, excess-rate-based 420 implementation for admission provides extra flexibility for the 421 choice of an admission mechanism, even if two separate markings 422 and thresholds are used. 424 Subsequent sections argue that these benefits can be achieved with 425 relatively minor enhancements to the proposed PCN architecture as 426 defined in [I-D.eardley-pcn-architecture], simpler 427 implementations at the PCN-interior-nodes, and trivial modifications 428 at the PCN-boundary-nodes. However, a number of tradeoffs also need to 429 be considered, as discussed in section 7. 431 4. Impact on PCN Architectural Framework 433 The goal of this section is to propose several minor changes to the 434 PCN architecture framework as currently described in 435 [I-D.eardley-pcn-architecture] in order to enable the single-marking 436 approach. 438 4.1. Impact on the PCN-Internal-Node 440 No changes are required to the PCN-internal-node in the architectural 441 framework of [I-D.eardley-pcn-architecture] in order to support the 442 Single Marking Proposal.
The current architecture 443 [I-D.eardley-pcn-architecture] already allows only one marking and 444 metering scheme rather than two by supporting either "admission only" 445 or "termination only" functionality. To support the SM proposal, a 446 single threshold (i.e. Configured-termination-rate) must be 447 configured at the PCN-internal-node, and excess-rate marking 448 should be used to mark packets as described in 449 [I-D.briscoe-tsvwg-cl-architecture]. 451 The configuration parameter(s) at the PCN-ingress-nodes and PCN- 452 egress-node (described in section 4.2) will determine how the marking 453 should be interpreted by the PCN-boundary-nodes. 455 4.2. Impact on the PCN-boundary nodes 457 We propose the addition of one global configuration parameter, 458 MARKING_MODE, to be used at all PCN-boundary-nodes. If MARKING_MODE = 459 DUAL_MARKING, the behavior of the appropriate PCN-boundary-node is as 460 described in the current version of [I-D.eardley-pcn-architecture]. 461 If MARKING_MODE = SINGLE_MARKING, the behavior of the appropriate 462 boundary nodes is as described in the subsequent subsections. 464 4.2.1. Impact on PCN-Egress-Node 466 The exact operation of the PCN-egress-node depends on whether it is 467 admission marking (AM-marking) or termination-marking (TM-marking) 468 that is used for SM. An assumption made in 469 draft-charny-pcn-comparison-00 is to use AM-marked packets for SM 470 instead of TM-marked packets. In that case the MARKING_MODE will 471 signal that Sustainable-Rate must be measured against the AM-marked 472 packets, while the Congestion-Level-Estimate (CLE) will be measured 473 against AM-marked packets just as in the case of CL. If, however, 474 TM-marking is used for SM, then the CLE in SM will need to be measured 475 against the TM-marked packets. 477 In more detail, if the encoding used for SM is that of TM-marking, 478 then the setting MARKING_MODE=SINGLE_MARKING indicates that the CLE 479 is measured against termination-marked packets, while if 480 MARKING_MODE=DUAL_MARKING, the CLE is measured against admission- 481 marked packets. The method of measurement of the CLE does not depend on 482 the choice of the marking against which the measurement is performed. 484 If, however, the encoding used for SM is that of AM-marking, then the 485 setting MARKING_MODE=SINGLE_MARKING indicates that the Sustainable- 486 Rate is measured against AM-marked packets, while the setting of 487 MARKING_MODE=DUAL_MARKING indicates that Sustainable-Rate should be 488 measured against TM-marked packets. 490 We note that from the implementation point of view, the same two 491 functions (measuring the CLE and measuring the Sustainable-Aggregate- 492 Rate) are required by both the SM approach and the approach in CL, so 493 the difference in the implementation complexity of the PCN-egress- 494 node is quite negligible and amounts to checking which encoding is 495 used for which function based on the setting of a global parameter. 496 If this checking is implemented, then switching the egress nodes from 497 supporting SM to supporting CL amounts to changing the setting of the 498 global parameter. 500 4.2.2. Impact on the PCN-Ingress-Node 502 If MARKING_MODE=DUAL_MARKING, the PCN-ingress-node behaves exactly as 503 described in [I-D.eardley-pcn-architecture]. If MARKING_MODE = 504 SINGLE_MARKING, then an additional global parameter U is defined.
U 505 must be configured at all PCN-ingress-nodes and has the meaning of 506 the desired ratio between the traffic level at which termination 507 should occur and the desired admission threshold, as described in 508 section 2.4 above. The value of U must be greater than or equal to 509 1. The value of this constant U is used to multiply the Sustainable 510 Aggregate Rate received from a given PCN-egress-node to compute the 511 rate threshold used for flow termination decisions. 513 In more detail, if MARKING_MODE=SINGLE_MARKING, then 515 o A PCN-ingress-node receives CLE and/or Sustainable Aggregate Rate 516 from each PCN-egress-node to which it sends traffic. This is fully 517 compatible with the PCN architecture as described in 518 [I-D.eardley-pcn-architecture]. 520 o A PCN-ingress-node bases its admission decisions on the value of 521 CLE. Specifically, once the value of CLE exceeds a configured 522 threshold, the PCN-ingress-node stops admitting new flows. It 523 restarts admitting when the CLE value goes down below the 524 specified threshold. This is fully compatible with the PCN 525 architecture as described in [I-D.eardley-pcn-architecture]. 527 o A PCN-ingress node receiving a Sustainable Rate from a particular 528 PCN-egress node measures its traffic to that egress node. This 529 again is fully compatible with the PCN architecture as described in 530 [I-D.eardley-pcn-architecture]. 532 o The PCN-ingress-node computes the desired Termination Rate to a 533 particular PCN-egress-node by multiplying the Sustainable 534 Aggregate Rate from a given PCN-egress-node by the value of the 535 configuration parameter U. This computation step represents a 536 proposed change to the current version of 537 [I-D.eardley-pcn-architecture]. 539 o Once the Termination Rate is computed, it is used for the flow 540 termination decision in a manner fully compatible with 541 [I-D.eardley-pcn-architecture]. Namely, the PCN-ingress-node 542 compares the measured traffic rate destined to the given PCN- 543 egress-node with the computed Termination Rate for that egress 544 node, and terminates a set of traffic flows to remove the rate in 545 excess of that Termination Rate. This is fully compatible with 546 [I-D.eardley-pcn-architecture]. 548 We note that as in the case of the PCN-egress-node, the change in the 549 implementation of the PCN-ingress-node to support SM is quite 550 negligible (a single multiplication per ingress rate measurement 551 interval for each egress node). [Note: If additional smoothing of 552 the termination signal is required to deal with low IE aggregation as 553 mentioned in section 2.4.2, this smoothing constitutes an additional 554 requirement on the PCN-ingress-node.] 556 4.3. Summary of Proposed Enhancements Required for Support of Single 557 Marking Options 559 The enhancements to the PCN architecture as defined in 560 [I-D.eardley-pcn-architecture], in summary, amount to: 562 o Defining a global (within the PCN domain) configuration parameter 563 MARKING_MODE at PCN-boundary nodes 565 o Defining a global (within the PCN domain) configuration parameter 566 U at the PCN-ingress-nodes. This parameter signifies the implicit 567 ratio between the termination and admission thresholds at all 568 links 570 o Multiplication of Sustainable-Aggregate-Rate by the constant U at 571 the PCN-ingress-nodes if MARKING_MODE=SINGLE_MARKING 573 o Using the MARKING_MODE parameter to guide which marking is used to 574 measure the CLE (but the measurement functionality is unchanged) 576 4.4.
Proposed Optional Renaming of the Marking and Marking Thresholds 578 Previous work on example mechanisms 579 [I-D.briscoe-tsvwg-cl-architecture] implementing the architecture of 580 [I-D.eardley-pcn-architecture] assumed that the semantics of 581 admission control marking and termination marking differ. 582 Specifically, it was assumed that for termination purposes the 583 semantics of the marking is related to the excess rate over the 584 configured (termination) rate, or even more precisely, the amount of 585 traffic that remains unmarked (sustainable rate) after the excess 586 traffic is marked. Some of the recent proposals assume yet different 587 marking semantics [I-D.babiarz-pcn-3sm], 588 [I-D.westberg-pcn-load-control]. 590 Even though specific association with marking semantics and function 591 (admission vs termination) has been assumed in prior work, it is 592 important to note that in the current architecture draft 593 [I-D.eardley-pcn-architecture], the associations of specific marking 594 semantics (virtual queue vs excess rate) with specific functions 595 (admission vs termination) are actually *not* directly assumed. In 596 fact , the architecture document does not explicitly define the 597 marking mechanism, but rather states the existence of two different 598 marking mechanisms, and also allows implementation of either one or 599 both of these mechanisms in a PCN- domain. 601 We argue that this separation of the marking semantics from the 602 functional use of the marking is important to make sure that devices 603 supporting the same marking can interoperate in delivering the 604 function which is based on specific supported marking semantics. 606 To divorce the function (admission vs termination) and the semantics 607 (excess rate marking, virtual queue marking), it may be beneficial to 608 rename the marking to be associated with the semantics rather than 609 the function to explicitly disassociate the two functions. 610 Specifically, it may be beneficial to change the "admission-marking" 611 and "termination-marking" currently defined in the architecture as 612 "Type Q" or "virtual-queue-based" marking, and "Type R" or "excess- 613 rate-based" marking. Of course, other choices of the naming are 614 possible (including keeping the ones currently used in 615 [I-D.eardley-pcn-architecture]). 617 With this renaming, the dual marking approach in 618 [I-D.briscoe-tsvwg-cl-architecture] would require PCN-internal-nodes 619 to support both Type R and Type Q marking, while SM would require 620 support of Type-R marking only. 622 We conclude by emphasizing that the changes proposed here amount to 623 merely a renaming rather than a change to the proposed architecture, 624 and are therefore entirely optional. 626 4.5. An Optimization Using a Single Configuration Parameter for Single 627 Marking 629 We note finally that it is possible to use a single configuration 630 constant U instead of two constants (U and MARKING_TYPE). 631 Specifically, one can simply interpret the value of U=1 as the dual- 632 marking approach (equivalent to MARKING_TYPE=DUAL_MARKING) and use 633 U>1 to indicate SM. This is discussed in detail in Section 9. 635 5. Incremental Deployment Considerations 637 As most of today's routers already implement a token bucket, 638 implementing token-bucket based excess-rate marking at PCN-ingress 639 nodes is a relatively small incremental step for most of today's 640 implementations. 
Implementing the additional metering and marking 641 scheme in the datapath that is required by the dual-marking approach, without 642 encountering performance degradation, is a larger step. The SM 643 approach may be used as an intermediate step towards the deployment 644 of a dual-marking approach, in the sense that routers implementing 645 single-marking functionality only may be deployed first and then 646 incrementally upgraded to CL. 648 The deployment steps might be as follows: 650 o Initially, all PCN-ingress-nodes might implement Excess-rate (Type 651 R) marking and metering only 653 o All PCN-boundary nodes implement the full functionality as 654 described in this document (including the configuration parameters 655 MARKING_MODE and U) from the start. Since the PCN-boundary-node 656 behavior is enabled by simply changing the values of the 657 configuration parameters, all boundary nodes become immediately 658 compatible with both dual-marking (CL) and single-marking. 660 o Initially, all boundary nodes are configured with parameter settings 661 indicating the SM option. 663 o When PCN-internal-nodes with dual-marking functionality replace 664 a subset of the existing PCN-internal-nodes, the virtual-queue-based (Type Q) 665 marking is simply ignored by the boundary nodes until all PCN- 666 internal-nodes in the PCN-domain implement the dual-marking 667 metering and marking. At that time the values of the configuration 668 parameters may be reset at all boundary nodes to indicate the 669 Dual Marking configuration. 671 o Note that if a subset of PCN-boundary-nodes communicates only with 672 each other, and all PCN-internal-nodes their traffic traverses 673 have been upgraded, this subset of nodes can be upgraded to the 674 dual-marking behavior while the rest of the PCN-domain can still 675 run the SM case. This would entail configuring two thresholds at 676 the PCN-internal-nodes, and setting the values of the configuration 677 parameters appropriately in this subset. 679 o Finally, note that if the configuration parameter U is configured 680 per ingress-egress-pair rather than per boundary node, then each 681 ingress-egress pair can be upgraded to the dual marking 682 independently. While we do not recommend that U be defined on a 683 per-ingress-egress-pair basis, this possibility should be noted and 684 considered. 686 6. Tradeoffs, Issues and Limitations of Single Marking Approach 688 6.1. Global Configuration Requirements 690 An obvious restriction necessary for the single-marking approach is 691 that the ratio of the (implicit) termination and admission thresholds 692 remains the same on all links in the PCN region. While clearly a 693 limitation, this does not appear to be particularly crippling, and 694 does not appear to outweigh the benefits of reducing the overhead in 695 the router implementation and savings in codepoints, in the case of a 696 single PCN domain or in the case of multiple concatenated PCN 697 regions. The case where this limitation becomes more inconvenient is 698 when an operator wants to merge two previously separate PCN regions 699 (which may have different admission-to-termination ratios) into a 700 single PCN region. In this case it becomes necessary to do a 701 network-wide reconfiguration to align the settings. 703 The fixed ratio between the implicit termination rate and the 704 configured-admissible-rate also has implications for traffic 705 engineering. Those are discussed in Section 6.7 706 below.
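As a small illustration of the global-ratio constraint discussed in this subsection (the link names, rates and the value of U below are made up for the example only), a single domain-wide U completely determines the implied termination level on every link once that link's configured-admissible-rate is chosen:

   # Illustrative only: with a single domain-wide U, the implied
   # termination level is U times the configured-admissible-rate on
   # every link; per-link termination targets cannot be set freely.

   U = 1.2   # domain-wide termination/admission ratio

   configured_admissible_rate = {     # per link, in Mbit/s (made-up values)
       "link-A": 600.0,
       "link-B": 250.0,
   }

   implied_termination_level = {
       link: rate * U for link, rate in configured_admissible_rate.items()
   }
   # -> {'link-A': 720.0, 'link-B': 300.0}

A per-link termination target that is not exactly U times that link's configured-admissible-rate (say, 800 Mbit/s on "link-A" above) cannot be expressed without either changing U for the entire PCN-domain or altering that link's admission threshold; this is the limitation discussed above and revisited in Section 6.7.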
708 SM also requires that all PCN-boundary-nodes use the same setting of 709 the global parameters U and MARKING_MODE. 711 6.2. Assumptions on Loss 713 Just as in the case of [I-D.briscoe-tsvwg-cl-architecture], the 714 approach presented in this draft assumes that the configured- 715 admissible-rate is configured at each link to be below the service rate of 716 the traffic using PCN. This assumption is significant because the 717 algorithm relies on the fact that if the admission threshold is exceeded, 718 enough marked traffic reaches the PCN-egress-node to reach the 719 configured CLE level. If this condition does not hold, then traffic 720 may get dropped without ever triggering an admission decision. 722 6.3. Effect of Reaction Timescale of Admission Mechanism 724 As mentioned earlier in this draft, there is a potential concern that 725 the slower reaction time of the admission mechanism presented in this draft, 726 compared to that of [I-D.briscoe-tsvwg-cl-architecture], may result in 727 overshoot when the load grows rapidly, and undershoot when the load 728 drops rapidly. While this is a valid concern theoretically, it 729 should be noted that at least for the traffic and parameters used in 730 the simulation study reported here, there was no indication that this 731 was a problem. 733 6.4. Performance Implications and Tradeoffs 735 Replacement of a relatively well-studied queue-based measurement- 736 based admission control approach by a cruder excess-rate measurement 737 technique raises a number of algorithmic and performance concerns 738 that need to be carefully evaluated. For example, a token-bucket 739 excess rate measurement is expected to be substantially more 740 sensitive to traffic burstiness and parameter setting (a toy 741 illustration is given at the end of this subsection), which may have 742 a significant effect in the case of lower levels of traffic 743 aggregation, especially for variable-rate traffic such as video. In 744 addition, the appropriate timescale of rate measurement needs to be 745 carefully evaluated, and in general it depends on the degree of 746 expected traffic variability, which is frequently unknown. 747 In view of that, an initial performance comparison of the use of token- 748 bucket-based excess-rate metering is presented in the following 749 section. Within the constraints of this study, the performance 750 tradeoffs observed between the queue-based technique for admission 751 control suggested in [I-D.briscoe-tsvwg-cl-architecture] and a 752 simpler token-bucket-based excess rate measurement for admission 753 control do not appear to be a cause of substantial concern for cases 754 where traffic aggregation is reasonably high at the bottleneck links 755 as well as on a per-ingress-egress-pair basis. Details of the 756 simulation study, as well as additional discussion of its 757 implications, are presented in Section 7. 759 Also, one mitigating consideration in favor of the simpler mechanism 760 is that in a typical DiffServ environment, the real-time traffic is 761 expected to be served at a higher priority and/or the target 762 admission rate is expected to be substantially below the speed at 763 which the real-time queue is actually served. If these assumptions 764 hold, then there is some margin of safety for an admission control 765 algorithm, making the requirements for admission control more 766 forgiving of bounded errors; see additional discussion in Section 7.
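The toy example below (Python; the rates, the bucket depth and the fluid-model granularity are arbitrary choices for illustration, not recommended settings) shows the burstiness sensitivity mentioned above: a token-bucket excess-rate meter marks none of a smooth 8 Mbit/s stream offered against a 10 Mbit/s configured-admissible-rate, but marks a noticeable fraction of an on-off stream with the same 8 Mbit/s mean.

   def marked_fraction(rate_bps, depth_bits, offered_bps, dt):
       """Fraction of offered bits marked by a token bucket of the given
       rate and depth, for a sequence of per-interval offered rates
       (simple fluid model, illustrative only)."""
       tokens = depth_bits
       marked = total = 0.0
       for offered in offered_bps:
           tokens += rate_bps * dt                      # refill over the interval
           bits = offered * dt
           served = min(bits, tokens)
           tokens = min(depth_bits, tokens - served)    # cap only the leftover
           marked += bits - served
           total += bits
       return marked / total

   dt = 0.010                 # 10 ms fluid-model step
   admissible = 10e6          # configured-admissible-rate: 10 Mbit/s
   depth = 50e3               # bucket depth: 50 kbit
   n = 10000                  # 100 seconds of traffic

   smooth = [8e6] * n                     # constant 8 Mbit/s
   onoff = [16e6, 0.0] * (n // 2)         # 16 Mbit/s half the time, same mean

   print(marked_fraction(admissible, depth, smooth, dt))   # 0.0
   print(marked_fraction(admissible, depth, onoff, dt))    # ~0.06

In this toy model the smooth stream is never marked, while roughly 6% of the on-off stream is marked, well above a 1% CLE threshold, even though both streams have the same mean rate below the configured-admissible-rate; a deeper bucket (here, anything above about 60 kbit) or a smoother source removes the marking entirely, which is exactly the parameter-setting sensitivity discussed above.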
768 Flow Termination mechanisms of Single Marking and CL are both based 769 on excess-rate metering and marking, and so it may be inferred that 770 their performance is similar. However, there is a subtle difference 771 between the two mechanisms stemming from the fact that in SM, packets 772 continue to be marked when the traffic level has been reduced to between the 773 (explicit) admission threshold and the (implicit) termination threshold. This 774 "extra" marking may result in over-termination compared to CL, 775 especially in multi-bottleneck topologies. We quantify this over- 776 termination in Sections 7 and 8. While we believe that the extent of 777 this over-termination is tolerable for practical purposes, it needs 778 to be taken into account when considering the performance tradeoffs of 779 the two mechanisms. 781 6.5. Effect on Proposed Anti-Cheating Mechanisms 783 Replacement of the queue-based admission control mechanism of 784 [I-D.briscoe-tsvwg-cl-architecture] by excess-rate-based admission 785 marking changes the semantics of the pre-congestion marking, and 786 consequently interferes with the mechanisms for cheating detection 787 discussed in [I-D.briscoe-tsvwg-re-ecn-border-cheat]. The implications 788 of excess-rate-based marking for the anti-cheating mechanisms need to 789 be considered. 791 6.6. ECMP Handling 793 An issue not directly addressed by either the dual-marking approach 794 described in [I-D.briscoe-tsvwg-cl-architecture] or the single- 795 marking approach described in this draft is that if ECMP is enabled 796 in the PCN-domain, then the PCN-edge nodes do not have a way of 797 knowing whether specific flows in the ingress-egress aggregate (IEA) 798 followed the same path or not. If multiple paths are followed, then 799 some of those paths may be experiencing pre-congestion marking, and 800 some may not. Hence, for example, an ingress node may choose to 801 terminate a flow which takes an entirely uncongested path. This 802 will not only unnecessarily terminate some flows, but also will not 803 eliminate congestion on the actually congested path. While 804 eventually, after several iterations, the correct number of flows 805 might be terminated on the congested path, this is clearly 806 suboptimal, as the termination takes longer, and many flows are 807 potentially terminated unnecessarily. 809 Two approaches for solving this problem were proposed in 810 draft-babiarz-pcn-explicit-marking and 811 draft-westberg-pcn-load-control. The former handles ECMP by 812 terminating those flows that are termination-marked as soon as the 813 termination marking is seen. The latter uses an additional DiffServ 814 marking/codepoint to mark all packets of the flows passing through a 815 congestion point, with the PCN-boundary-nodes terminating only those 816 flows which are marked with this additional mark. Both of these 817 approaches also differ in the termination-marking semantics, but we 818 omit the discussion of these differences as they can be considered 819 largely independent of the ECMP issue. 821 It should be noted that although not proposed in this draft, either 822 of these ideas can be used with the dual- and single-marking approaches 823 discussed here. Specifically, in CL, when a PCN-ingress-node decides 824 which flows to terminate, it can choose for termination only those 825 flows that are termination-marked.
Likewise, at the cost of an 826 additional (DiffServ) codepoint, a PCN-internal-node can mark all 827 packets of all flows using this additional marking, and then the PCN- 828 boundary-nodes can use this additional marking to guide their flow 829 termination decisions. In SM, since only one codepoint is used, this 830 approach will result in choosing for termination only those flows 831 which traverse at least one link where the traffic level is above the 832 admission threshold. This may result in some 833 flows being terminated erroneously. 835 Either of these approaches appears to imply changes to the PCN 836 architecture as proposed in [I-D.eardley-pcn-architecture]. Such 837 changes have not been considered in this draft at this point. 839 6.7. Traffic Engineering Considerations 841 Dual-marking PCN can be viewed as a replacement for Resilient Network 842 Provisioning (RNP). It is reasonable to expect that an operator 843 currently using DiffServ provisioning for real-time traffic might 844 consider a move to PCN. For such a move it is necessary to 845 understand how to set the PCN rate thresholds to make sure that the 846 move to PCN does not detrimentally affect the guarantees currently 847 offered to the operator. 849 The key question addressed in this section is how to set the PCN 850 admission and termination thresholds in the dual-marking approach, or 851 the single admission threshold and the scaling factor U reflecting 852 the implicit termination threshold in the single-marking approach, so 853 that the result is "not worse" than provisioning in the amount of 854 traffic that can be admitted. More specifically, we address 855 what tradeoffs, if any, between the dual-marking and the 856 single-marking approaches arise when answering this question. This question 857 was first raised in [Menth] and is further addressed below. 859 Typically, RNP would size the network (in this specific case, the traffic 860 that is expected to use PCN) by making sure that the capacity available 861 for this (PCN) type of traffic is sufficient for PCN traffic under 862 "normal" circumstances (that is, under no-failure conditions, for a 863 given traffic matrix), and under a specific set of single-failure 864 scenarios (e.g. failure of each individual link). Some of the 865 obvious limitations of such provisioning are that 867 o the traffic matrix is often not known well, and at times, 868 especially during flash-crowds, the actual traffic matrix can 869 differ substantially from the one assumed by provisioning 871 o unpredicted, non-planned failures can occur (e.g. multiple links, 872 nodes, etc.), causing overload. 874 It is specifically such unplanned cases that serve as the motivation 875 for PCN. Yet, one may want to make sure that for cases that RNP can 876 (and does today) plan for, PCN does no worse when an operator makes 877 the decision to implement PCN on a currently provisioned network. 878 This question directly relates to the choice of the PCN configured 879 admission and termination thresholds. 881 For the dual-marking approach, where the termination and admission 882 thresholds are set independently on any link, one can address this 883 issue as follows [Menth]. If a provisioning tool is available, for a 884 given traffic matrix, one can determine the utilization of any link 885 used by traffic expected to use PCN under the no-failure condition, 886 and simply set the configured-admissible-rate to that "no-failure 887 utilization".
Then a network using PCN will be able to admit as much 888 traffic as the RNP, and will reject any traffic that exceeds the 889 expected traffic matrix. To address resiliency against a set of 890 planned failures, one can use RNP to find the worst-case utilization 891 of any link under the set of all provisioned failures, and then set 892 the configured-termination-rate to that worst case utilization. 894 Clearly, such setting of PCN thresholds with the dual-marking 895 approach will achieve the following goals: 897 o PCN will admit the same traffic matrix as used by RNP and will 898 protect it against all planned failures without terminating any 899 traffic 901 o When traffic deviates from the planned traffic matrix, PCN will 902 admit such traffic as long as the total usage of any link (without 903 failure) does not exceed the configured-admission threshold, and 904 all admitted traffic will be protected against all planned 905 failures 907 o Additional traffic will not be admitted under the no-failure 908 conditions, and traffic exceeding configure-termination threshold 909 during non-planned failures will be terminated. 911 o Under non-planned failures, some of the planned traffic matrix may 912 be terminated, but the remaining traffic will be able to receive 913 its QoS treatment. 915 The above argues that an operator moving from a purely provisioned 916 network to a PCN network can find the settings of the PCN threshold 917 with dual marking in such a way that all admitted traffic is 918 protected against all planned failures. 920 It is easy to see that with the single-marking scheme, the above 921 approach does not work directly [Menth]. Indeed, the ratio between 922 the configured-termination thresholds and the configured-admissible- 923 rate may not be constant on all links. Since the single-marking 924 approach requires the (implicit) termination rate to be within a 925 fixed factor of the configured admission rate, it can be argued (as 926 was argued in [Menth].) that one needs to set the system-wide ratio U 927 between the (implicit) termination threshold and the configured 928 admission threshold to correspond to the largest ratio between the 929 worst case resilient utilization and the no-failure utilization of 930 RNP, and set the admission threshold on each link to the worst case 931 resilient utilization divided by that system wide ratio. Such 932 approach would result in lower admission thresholds on some links 933 than that of the dual-marking setting of the admission threshold 934 proposed above. It can therefore be argued that PCN with SM will be 935 able to admit *less* traffic that can be fully protected under the 936 planned set of failures than both RNP and the dual-marking approach. 938 However, the settings of the single-marking threshold proposed above 939 are not the only one possible, and in fact we propose here that the 940 settings are chosen differently. Such different settings (described 941 below) will result in the following properties of the PCN network: 943 o PCN will admit the same traffic matrix as used by RNP *or more* 945 o The traffic matrix assumed by RNP will be fully protected against 946 all planned failures without terminating any admitted traffic 948 o When traffic deviates from the planned traffic matrix, PCN will 949 admit such traffic as long as the total usage of any link (without 950 failure) does not exceed the configured-admission threshold, 951 However, not all admitted traffic will be protected against all 952 planned failures (i.e. 
even under planned failures, traffic 953 exceeding the planned traffic matrix may be preempted) 955 o Under non-planned failures, some of the planned traffic matrix may 956 be terminated, but the remaining traffic will be able to receive 957 its QoS treatment. 959 It is easy to see that all of these properties can be achieved if, 960 instead of using the largest ratio of the worst-case resilient 961 utilization to the no-failure utilization of RNP across all links for 962 setting the system-wide constant U in the single-marking approach, as 963 proposed in [Menth], one uses the *smallest* ratio and sets the 964 configured-admissible-rate to the worst-case resilient utilization 965 divided by that ratio. With such a setting, the configured-admission 966 threshold on each link is at least as large as the no-failure RNP 967 utilization (and hence the planned traffic matrix is always 968 admitted), and the implicit termination threshold is at the worst- 969 case planned resilient utilization of RNP on each link (and hence the 970 planned traffic matrix will be fully protected against the planned 971 failures). Therefore, with such settings, the single-marking approach 972 does as well as RNP or dual-marking with respect to the planned 973 matrix and planned failures. In fact, unlike the dual-marking 974 approach, it can admit more traffic on some links than the planned 975 traffic matrix would allow, but it is only guaranteed to protect up 976 to the planned traffic matrix under planned failures. 978 In summary, we have argued that both the single-marking approach and 979 the dual-marking approach can be configured to ensure that PCN "does 980 no worse" than RNP for the planned matrix and the planned failure 981 conditions (and both can do better than RNP under non-planned 982 conditions). The tradeoff between the two is that although the 983 planned traffic matrix can be admitted with protection guarantees 984 against planned failures with both approaches, the nature of the 985 guarantee for the admitted traffic is different. Dual marking (with 986 the settings proposed) would protect all admitted traffic (but would 987 not admit more than planned), while SM (with the settings proposed) 988 will admit more traffic than planned, but will not guarantee 989 protection against planned failures for traffic exceeding the planned 990 utilization. 992 7. Performance Evaluation Comparison 994 7.1. Relationship to other drafts 996 Initial simulation results of the admission and termination mechanisms of 997 [I-D.briscoe-tsvwg-cl-architecture] were reported in 998 [I-D.briscoe-tsvwg-cl-phb]. A follow-up study of these mechanisms is 999 presented in a companion draft, 1000 draft-zhang-cl-performance-evaluation-02.txt. The previous versions 1001 of this draft concentrated on a performance comparison of the 1002 virtual-queue-based admission control mechanism of 1003 [I-D.briscoe-tsvwg-cl-phb] and the token-bucket-based admission 1004 control described in section 2 of this draft. In this version, we 1005 have added a performance evaluation of the Flow Termination function of SM. 1007 The Flow Termination results are discussed in section 7.3. 1009 7.2. Admission Control: High Level Conclusions 1011 The results of this study indicate that a 1012 reasonable complexity/performance tradeoff may be viable for the 1013 choice of admission control algorithm. In turn, this suggests that 1014 using a single codepoint and metering technique for admission and 1015 termination may be a viable option.
1017 The key high-level conclusions of the simulation study comparing the 1018 performance of queue-based and token-based admission control 1019 algorithms are summarized below: 1021 1. At reasonable levels of aggregation at the bottleneck and of per 1022 ingress-egress pair traffic, both algorithms perform reasonably 1023 well for the range of traffic models considered. 1025 2. Both schemes are stressed at small ingress-egress pair 1026 aggregation levels of bursty traffic (e.g. a single video-like 1027 bursty SVD flow per ingress-egress pair). However, while the 1028 queue-based scheme results in tolerable performance even at low 1029 levels of per ingress-egress aggregation, the token-bucket-based 1030 scheme is substantially more sensitive to parameter setting than 1031 the queue-based scheme, and its performance for the high rate 1032 bursty SVD traffic with low levels of ingress-egress aggregation 1033 is quite poor unless parameters are chosen carefully to curb the 1034 error. It should be noted that the SVD traffic model used in 1035 this study is expected to be substantially more challenging for 1036 both admission and termination mechanisms than actual video 1037 traffic, as the latter is expected to be much smoother than the 1038 bursty on-off model with high peak-to-mean ratio we used. This 1039 expectation is confirmed by the fact that simulations with actual 1040 video traces reported in this version of the draft reveal that 1041 the performance of the video traces is much closer to that of VBR 1042 voice than to that of our crude SVD on-off model. 1044 3. Even for small per ingress-egress pair aggregation, reasonable 1045 performance across a range of traffic models can be obtained for 1046 both algorithms (with a narrower range of parameter settings for 1047 the token-bucket-based approach). However, at very low ingress- 1048 egress aggregation, the token bucket scheme is substantially more 1049 sensitive to parameter variations than the virtual-queue scheme. 1050 In general, the token-bucket scheme performance is quite brittle 1051 at very low aggregations, and displays substantial performance 1052 degradation with BATCH traffic, as well as synchronization effects 1053 resulting in substantial over-admission (see Section 8.2.3). 1055 4. The absolute value of round-trip time (RTT) or the RTT difference 1056 between different ingress-egress pairs within the range of 1057 continental propagation delays does not appear to have a visible 1058 effect on the performance of either algorithm. 1060 5. There is no substantial effect on the bottleneck utilization of 1061 multi-bottleneck topologies for either scheme. Both schemes 1062 suffer substantial unfairness (and possibly complete starvation) 1063 of the long-haul aggregates traversing multiple bottlenecks 1064 compared to short-haul flows (a property shared by other MBAC 1065 algorithms as well). The token-bucket scheme displayed somewhat 1066 larger unfairness than the virtual-queue scheme. 1068 7.3. Flow Termination Results 1070 A consequence of using just a single metering and marking algorithm and a 1071 single marking encoding in SM is that when the traffic level is 1072 between the admission and the (implicit) termination thresholds, traffic 1073 continues to be marked in SM (because it exceeds the admission 1074 threshold at which the metering occurs). This is in contrast to CL, 1075 where termination marking stops as soon as the traffic falls below the 1076 termination threshold.
This subtle difference results in a visible 1077 performance impact on the Termination algorithm of SM, as discussed 1078 in the next subsections. Specifically: 1080 o SM requires more ingress-egress aggregation than CL (and the 1081 amount of aggregation needed for the termination function is 1082 higher than that of admission - see sections 7.3.1 and 8.3). 1084 o In the multiple bottleneck scenario, where PCN traffic exceeds the 1085 configured (admission) rate on multiple links, additional over- 1086 termination may occur over that already reported for CL (see 1087 sections 7.3.2 and 8.3 for more detail). 1089 7.3.1. Sensitivity to Low Ingress-Egress aggregation levels 1091 In SM, the sustainable termination rate is inferred from the 1092 Sustainable (Admission) Rate by multiplying it by a system-wide 1093 constant U. In the case of a single bottleneck and a fluid model in 1094 which marking is uniformly distributed among the contending IEAs, the 1095 Termination Functions of CL and SM would be identical. However, in 1096 reality, as shown in draft-zhang-performance-evaluation, excess-rate 1097 marking does not get distributed among contending IEAs completely 1098 uniformly, and at low ingress-egress aggregations, some IEAs get 1099 marked more than others. As a result, when traffic is just below 1100 the (implicit) termination threshold at the bottleneck, some IEAs 1101 get excessively marked, while some get less than their "fair" share 1102 of marking. This causes a false termination event at the PCN- 1103 ingress-nodes corresponding to those IEAs which get excessively 1104 marked, even though the bottleneck load did not exceed the (implicit) 1105 termination threshold. This effect is especially pronounced for low 1106 and medium aggregates of highly bursty traffic. 1108 We investigated how much aggregation is needed to remove this effect 1109 completely, and found that the number of flows in the IEAs necessary 1110 to reduce this error to within 4-10% ranged from about 50 to 150 for 1111 the different traffic types we tested (see section 8.3 in the Appendix 1112 for more detailed results). 1114 We also found that for lower aggregation levels, the results could be 1115 improved to be comparable with CL with respect to over-termination if 1116 the ingresses applied EWMA smoothing to the ratio of marked and unmarked 1117 traffic when triggering the termination event. Such smoothing, 1118 however, would add latency to the termination decision. The exact 1119 magnitude of this additional latency depends on the value of the 1120 global parameter U, the extent of the overload (i.e. the excess over the 1121 (implicit) termination threshold), and the exponential weight in the 1122 EWMA smoothing. In the range of parameters we investigated that seem 1123 practically reasonable, the additional latency is bounded by 1-2 sec 1124 (see detailed results in section 8.3). Such smoothing is not 1125 necessary at larger levels of ingress-egress aggregation. 1127 In conclusion, to avoid over-termination on a single bottleneck due 1128 to non-uniformity of the packet marking distribution among contending 1129 IEAs, SM needs substantially more ingress-egress aggregation than CL, 1130 if no additional mechanisms are used to smooth the termination 1131 trigger.
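To make the smoothing option concrete, the sketch below shows one way the smoothed termination trigger described above could be realized. It is an illustrative sketch only: the 100 ms measurement interval, the trigger fraction, and all names are assumptions of this example rather than part of the proposal.

   # Smoothed SM termination trigger (illustrative).  The EWMA weight of
   # 0.9 corresponds to the "heavy history" setting used in Appendix A.

   def smoothed_marked_fraction(prev, marked_bits, unmarked_bits, weight=0.9):
       """EWMA of the per-interval fraction of marked traffic for one IEA."""
       total = marked_bits + unmarked_bits
       if total == 0:
           return prev
       sample = marked_bits / total
       return weight * prev + (1.0 - weight) * sample

   def sm_termination_amount(ingress_rate, unmarked_rate, smoothed_fraction,
                             U, trigger_fraction=0.05):
       """Terminate only when the smoothed marked fraction exceeds the
       trigger; the sustainable rate is inferred as U times the rate of
       unmarked traffic measured at the PCN-egress-node."""
       if smoothed_fraction <= trigger_fraction:
           return 0.0
       sustainable_rate = U * unmarked_rate
       return max(0.0, ingress_rate - sustainable_rate)

The smoothing trades accuracy for latency exactly as discussed above: with a heavy-history weight, an IEA that is only transiently over-marked never reaches the trigger, at the cost of a slower reaction when termination is genuinely needed.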
1133 7.3.2. Over-termination in the Multi-bottleneck Scenarios 1135 As we showed in draft-zhang-performance-evaluation, when long-haul 1136 flows traverse more than one bottleneck, each additional bottleneck 1137 incurs additional termination-marking, which causes the long-haul aggregate to 1138 terminate more than its fair share, and this unfairness might in turn 1139 cause over-termination on the upstream bottleneck. 1141 SM has a similar issue. However, in addition, this issue is further 1142 amplified, as discussed below. This amplification is due to the fact 1143 that in SM, metering is done at the lower (admission) threshold, and 1144 so the quantity of the additional marking received at each subsequent 1145 bottleneck is amplified by the factor U (the ratio of termination/admission 1146 threshold). This in turn reduces the sustainable rate (i.e. the rate 1147 of unmarked packets) as seen by the PCN-egress-node. It can be 1148 shown that this additional marking generally results in SM 1149 terminating more traffic than CL under the same circumstances, when 1150 multiple bottlenecks are traversed. The degree of over-termination 1151 strongly depends on the number of bottlenecks in the topology, and on 1152 the degree of bottleneck overload above the (implicit) termination 1153 threshold. 1155 To understand the significance of this over-termination in practice, 1156 we generated ~50,000 random traffic matrices on the 5-BTN 1157 topology (see Section 8 for details), choosing the settings of the 1158 admission threshold randomly on each link. We chose U randomly in 1159 the interval 1.0. Among the 1409 160 available traces, we picked the two with the highest average rate 1410 (averaged over the trace length, in this case 60 minutes; in 1411 addition, the two also have similar average rates). The trace file 1412 used in the simulation is the concatenation of the two. 1414 Since the duration of a flow in our simulation is much smaller than 1415 the length of the trace, we checked whether the expected rate of a flow 1416 corresponds to the trace's long term average. To do so, we simulated 1417 a number of flows starting from random locations in the trace, with 1418 durations chosen to be exponentially distributed with a mean of 1419 1 min. The results show that the expected rate of a flow is roughly the 1420 same as the trace's average. 1422 In summary, our simulations use a set of segments of the 120 min 1423 trace chosen at random offsets from the beginning and with a mean 1424 duration of 1 min. 1426 Since the traces provide only the frame size, we also simulated 1427 packetization of the frame as a CBR segment with packet size and 1428 inter-arrival time corresponding to those of our SVD model. Since 1429 the frame size is not always a multiple of the chosen packet size, 1430 the last packet in a frame may be shorter than the 1500 bytes chosen for 1431 the SVD encoding. 1433 Traffic characteristics for our VTR models are summarized below: 1435 o Average rate 769 Kbps 1437 o Each frame is sent with packet length 1500 bytes and packet inter- 1438 arrival time 1ms 1440 o No traffic is sent between frames. 1442 8.1.2.4. Randomization of Base Traffic Models 1444 To emulate some degree of network-introduced jitter, in some 1445 experiments we implemented limited randomization of the base models 1446 by randomly moving each packet by a small amount of time around its 1447 transmission time in the corresponding base traffic model.
More 1448 specifically, for each packet we chose a random number R, which is 1449 picked from a uniform distribution over a "randomization-interval", and 1450 delayed the packet by R compared to its ideal departure time. We 1451 chose the randomization-interval to be a fraction of the packet- 1452 interarrival-time of the CBR portion of the corresponding base model. 1453 To simulate a range of queuing delays, we varied this fraction from 1454 0.0001 to 0.1. While we do not claim this to be an adequate model 1455 for network-introduced jitter, we chose it for the simplicity of 1456 implementation as a means to gain insight into any simulation artifacts 1457 of strictly CBR traffic generation. We implemented randomized 1458 versions of all 5 traffic streams (CBR, VBR, MIX, SVD and VTR) by 1459 randomizing the CBR portion of each model. 1461 8.1.3. Performance Metrics 1463 In all our experiments we use as the performance metric the percent 1464 deviation of the mean rate achieved in the experiment from the 1465 expected load level. We term these "over-admission" and "over- 1466 termination" percentages, depending on the type of the experiment. 1468 More specifically, our experiments measure the actual achieved 1469 throughput at 50 ms intervals, and then compute the average of these 1470 50ms rate samples over the duration of the experiment (where 1471 relevant, excluding warmup/startup conditions). We then compare this 1472 experiment average to the desired traffic load. 1474 Initially in our experiments we also computed the variance of the 1475 traffic around the mean, and found that in the vast majority of the 1476 experiments it was quite small. Therefore, in this draft we omit the 1477 variance and limit the reporting to the over-admission and over- 1478 termination percentages only. 1480 8.2. Admission Control 1482 8.2.1. Parameter Settings 1484 8.2.1.1. Queue-based settings 1486 All the queue-based simulations were run with the following Virtual 1487 Queue thresholds: 1489 o virtual-queue-rate: configured-admissible-rate (1/2 of the link speed) 1491 o min-marking-threshold: 5ms at virtual-queue-rate 1493 o max-marking-threshold: 15ms at virtual-queue-rate 1495 o virtual-queue-upper-limit: 20ms at virtual-queue-rate 1497 At the egress, the CLE is computed as an exponential weighted moving 1498 average (EWMA) on an interval basis, with a 100ms measurement interval 1499 chosen in all simulations. We simulated EWMA weights ranging from 0.1 1500 to 0.9. The CLE threshold is chosen to be 0.05, 0.15, 0.25, and 0.5. 1502 8.2.1.2. Token Bucket Settings 1504 The token bucket rate is set to the configured-admissible-rate, which 1505 is half of the link speed in all experiments. The token bucket depth 1506 ranges from 64 to 512 packets. Our simulation results indicate that 1507 the depth of the token bucket has no significant impact on the performance of 1508 the algorithm and hence, in the rest of the section, we only present 1509 the results with a bucket depth of 256 packets. 1511 The CLE is calculated using an EWMA just as in the case of the virtual-queue 1512 settings, with weights from 0.1 to 0.9. The CLE thresholds are 1513 chosen to be 0.0001, 0.001, 0.01, 0.05 in this case. Note that 1514 since the meaning of the CLE is different for the token-bucket and queue- 1515 based algorithms, there is no direct correspondence between the 1516 choice of the CLE thresholds in the two cases.
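The egress-side computation assumed in both sets of experiments can be sketched as follows. This is illustrative only: the variable names, the blocking rule, and the convention of applying the configured weight to the new sample (rather than to the history) are assumptions of this example.

   # PCN-egress-node admission decision (illustrative): the CLE is an EWMA
   # of the fraction of marked traffic seen in each 100 ms measurement
   # interval, and new flows are blocked while the CLE exceeds a threshold.

   class EgressAdmissionState:
       def __init__(self, ewma_weight=0.3, cle_threshold=0.05):
           self.w = ewma_weight
           self.cle_threshold = cle_threshold
           self.cle = 0.0

       def end_of_interval(self, marked_bits, unmarked_bits):
           """Called once per 100 ms measurement interval."""
           total = marked_bits + unmarked_bits
           fraction = (marked_bits / total) if total > 0 else 0.0
           self.cle = self.w * fraction + (1.0 - self.w) * self.cle
           return self.cle

       def admit_new_flow(self):
           return self.cle < self.cle_threshold

The same structure applies to both meters; only the marking semantics behind the marked/unmarked split differ, which is why the CLE thresholds in the two cases are not directly comparable.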
1518 8.2.2. Sensitivity to EWMA weight and CLE 1520 Table A.2 summarizes the comparison of over-admission- 1521 percentage values from 15 experiments with different [weight, CLE 1522 threshold] settings for each type of traffic and each topology. The 1523 ratio of the demand on the bottleneck link to the configured 1524 admission threshold is set to 5x. (The results for 0.95x can be 1525 found in the previous version of this draft.) For parking lot topologies we report the 1526 worst case result across all bottlenecks. We present here only the 1527 extreme value over the range of resulting over-admission-percentage 1528 values. 1530 We found that the virtual-queue admission control algorithm works 1531 reliably with the range of parameters we simulated, for all five 1532 types of traffic. In addition, except for SVD, the performance is 1533 insensitive to the parameter changes under all tested topologies. 1534 For SVD, the algorithm does show some sensitivity to the tested 1535 parameters. The high level conclusion that can be drawn is that 1536 (predictably) high peak-to-mean ratio SVD traffic is substantially 1537 more stressful to the queue-based admission control algorithm, but a 1538 set of parameters exists that keeps the over-admission within about 1539 -4% to +7% of the expected load even for the bursty SVD traffic. 1541 The token bucket-based admission control algorithm shows higher 1542 sensitivity to the parameter settings compared to the virtual queue 1543 based algorithm. It is important to note here that for the token 1544 bucket-based admission control no traffic will be marked until the 1545 rate of traffic exceeds the configured admission rate by the chosen 1546 CLE. As a consequence, even with the ideal performance of the 1547 algorithm, the over-admission-percentage will not be 0; rather it is 1548 expected to equal the CLE threshold if the algorithm performs as 1549 expected. Therefore, a more meaningful metric for the token-based 1550 results is actually the over-admission-percentage (listed below) 1551 minus the corresponding (CLE threshold * 100). For example, for CLE 1552 = 0.01, one would expect that 1% over-admission is inherently 1553 embedded in the algorithm. When comparing the performance of the token 1554 bucket (with the adjusted over-admission-percentage) to its 1555 corresponding virtual queue result, we found that the token bucket 1556 performs only slightly worse for voice-like CBR, VBR, and MIX traffic. 1558 The results for SVD traffic require some additional commentary. Note 1559 from the results in Table A.2 that in the Single Link topology the 1560 performance of the token-based solution is comparable to the 1561 performance of the queue-based scheme. However, for the RTT 1562 topology, the worst case performance for SVD traffic becomes very 1563 bad, with up to 23% over-admission under high overload. We 1564 investigated two potential causes of this drastic degradation of 1565 performance by concentrating on two key differences between the 1566 Single Link and the RTT topologies: the difference in the round-trip 1567 times and the degree of aggregation in a per ingress-egress pair 1568 aggregate. 1570 To investigate the effect of the difference in round-trip times, we 1571 also conducted a subset of the experiments described above using the 1572 RTT topology with the same RTT across all ingress-egress pairs 1573 rather than a range of RTTs within one experiment.
We found out that 1574 neither the absolute nor the relative difference in RTT between 1575 different ingress-egress pairs appears to have any visible effect on 1576 the overload performance or the fairness of either algorithm (we do 1577 not present these results here as they are essentially identical to 1578 those in Table A.2). In view of that, and noting that in the RTT 1579 topology we used for these experiments there is 1580 only one highly bursty SVD flow per ingress, we believe that the severe 1581 degradation of performance in this topology is directly attributable 1582 to the lack of traffic aggregation on the ingress-egress pair basis. 1585 ------------------------------------------------- 1586 | Type | Topo | Over Admission Perc Stats | 1587 | | | Queue-based | Bucket-Based | 1588 | | | Min Max | Min Max | 1589 |------|--------|---------------------------------| 1590 | | S.Link | 0.224 1.105 | -0.99 1.373 | 1591 | CBR | RTT | 0.200 1.192 | 6.495 9.403 | 1592 | | PLT | -0.93 0.990 | -2.24 2.215 | 1593 |-------------------------------------------------| 1594 | | S.Link | -0.07 1.646 | -2.94 2.760 | 1595 | VBR | RTT | -0.11 1.830 | -1.92 6.384 | 1596 | | PLT | -1.48 1.644 | -4.34 3.707 | 1597 |-------------------------------------------------| 1598 | | S.Link | -0.14 1.961 | -2.85 2.153 | 1599 | MIX | RTT | -0.46 1.803 | -3.18 2.445 | 1600 | | PLT | -1.62 1.031 | -3.69 2.955 | 1601 |-------------------------------------------------| 1602 | | S.Link | -0.05 1.581 | -2.36 2.247 | 1603 | VTR | RTT | -0.57 1.313 | -1.44 4.947 | 1604 | | PLT | -1.24 1.071 | -3.05 2.828 | 1605 |-------------------------------------------------| 1606 | | S.Link | -2.73 6.525 | -11.25 6.227 | 1607 | SVD | RTT | -2.98 5.357 | -4.30 23.48 | 1608 | | PLT | -4.84 4.294 | -11.40 6.126 | 1609 ------------------------------------------------- 1610 Table A.2 Parameter sensitivity: Queue-based vs. Token Bucket- 1611 based. For the single bottleneck topologies (S. Link and RTT) the 1612 overload column represents the ratio of the mean demand on the 1613 bottleneck link to the configured admission threshold. For parking 1614 lot topologies we report the worst case result across all 1615 bottlenecks. We present here only the worst case value over the 1616 range of resulting over-admission-percentage values. 1618 8.2.3. Effect of Ingress-Egress Aggregation 1620 To investigate the effect of Ingress-Egress Aggregation, we fix a 1621 particular EWMA weight and CLE setting (in this case, weight=0.3, for the 1622 virtual queue scheme CLE=0.05, and for the token bucket scheme 1623 CLE=0.0001), and vary the level of ingress-egress aggregation by using 1624 RTT topologies with different numbers of ingresses. 1626 Table A.3 shows the change of over-admission-percentage with respect 1627 to the increase in the number of ingresses for both the virtual queue and 1628 the token bucket. For all traffic, the leftmost column in the table represents 1629 the case with the largest aggregation (only two ingresses), while the 1630 rightmost column represents the lowest level of aggregation 1631 (the expected number of calls per ingress is just 1 in this case). In all 1632 experiments the aggregate load on the bottleneck is the same across 1633 each traffic type (with the aggregate load being evenly divided 1634 between all ingresses). 1636 As seen from Table A.3, the virtual queue based approach is 1637 relatively insensitive to the level of ingress-egress aggregation.
1638 On the other hand, the Token Bucket based approach is performing 1639 significantly worse at lower levels of ingress-egress aggregation. 1640 For example for CBR (with expect 1-call per ingress), the over- 1641 admission-percentage can be as bad as 45%. 1643 (preamble) 1644 -------------------------------------------------------------------- 1645 | | Type | Number of Ingresses | 1646 | |------|---------------------------------------------------- | 1647 | | | 2 | 10 | 70 | 300 | 600 | 1000 | 1648 | | CBR | 1.003 | 1.024 | 0.976 | 0.354 | -1.45 | 0.396 | 1649 | |------------------------------------------------------------| 1650 | | | 2 | 10 | 70 | 300 | 600 | 1800 | 1651 | | VBR | 1.021 | 1.117 | 1.006 | 0.979 | 0.721 | -0.85 | 1652 | |------------------------------------------------------------| 1653 |Virtual| | 2 | 10 | 70 | 300 | 600 | 1000 | 1654 | Queue | MIX | 1.080 | 1.163 | 1.105 | 1.042 | 1.132 | 1.098 | 1655 | Based |------------------------------------------------------------| 1656 | | | 2 | 10 | 70 | 140 | 300 | 600 | 1657 | | VTR | 1.109 | 1.053 | 0.842 | 0.859 | 0.856 | 0.862 | 1658 | |------------------------------------------------------------| 1659 | | | 2 | 10 | 35 | 70 | 140 | 300 | 1660 | | SVD | -0.08 | 0.009 | -0.11 | -0.286 | -1.56 | 0.914 | 1661 -------------------------------------------------------------------- 1662 -------------------------------------------------------------------- 1663 | | Type | Number of Ingresses | 1664 | |------|---------------------------------------------------- | 1665 | | | 2 | 10 | 100 | 300 | 600 | 1000 | 1666 | | CBR | 0.725 | 0.753 | 7.666 | 21.16 | 33.69 | 44.58 | 1667 | |------------------------------------------------------------| 1668 | | | 2 | 10 | 100 | 300 | 600 | 1800 | 1669 | | VBR | 0.532 | 0.477 | 1.409 | 3.044 | 5.812 | 14.80 | 1670 |Token |------------------------------------------------------------| 1671 |Bucket | | 2 | 10 | 100 | 300 | 600 | 1800 | 1672 |Based | MIX | 0.736 | 0.649 | 1.960 | 4.652 | 10.31 | 27.69 | 1673 | |------------------------------------------------------------| 1674 | | | 2 | 10 | 70 | 140 | 300 | 600 | 1675 | | VTR | 0.758 | 0.889 | 1.335 | 1.694 | 4.128 | 13.28 | 1676 | |------------------------------------------------------------| 1677 | | | 2 | 10 | 35 | 100 | 140 | 300 | 1678 | | SVD | -1.64 | -0.93 | 0.237 | 4.732 | 7.103 | 8.799 | 1679 -------------------------------------------------------------------- 1680 (Table A.3 Synchronization effect with low Ingress-Egress 1681 Aggregation: Queue-based v.s. Token bucket-based) 1683 Our investigation reveals that the cause of the poor performance of 1684 the token bucket scheme in our experiments is attributed directly to 1685 the same "synchronization" effect as was earlier described in the 1686 Termination (preemption) results in 1687 draft-zhang-pcn-performance-evaluation, and to which we refer the 1688 reader for a more detailed description of this effect. In short 1689 however, for CBR traffic, a periodic pattern arises where packets of 1690 a given flow see roughly the same state of the token bucket at the 1691 bottleneck, and hence either all get marked, or all do not get 1692 marked. As a result, at low levels of aggregation a subset of 1693 ingresses always get their packets marked, while some other ingresses 1694 do not. 1696 As reported in draft-zhang-pcn-performance-evaluation, in the case of 1697 Termination this synchronization effect is beneficial to the 1698 algorithm. 
In contrast, for Admission, this synchronization is 1699 detrimental to the algorithm performance at low aggregations. This 1700 can be easily explained by noting that ingresses whose packets do not 1701 get marked continue admitting new traffic even if the aggregate 1702 bottleneck load has been reached or exceeded. Since most of the 1703 other traffic patterns contain large CBR segments, this effect is 1704 seen with other traffic types as well, although to a different 1705 extent. 1707 A natural initial reaction can be to write off this effect as purely 1708 a simulation artifact. In fact, one can expect that if some jitter 1709 is introduced into the strict CBR traffic pattern so that the packet 1710 transmission is no longer strictly periodic, then the "synchronization" 1711 effect might be easily broken. 1713 To verify whether this is indeed the case, we ran the experiments with the 1714 same topologies and parameter settings, but with randomized versions 1715 of the base traffic types. The results are summarized in Table A.4. 1716 Note that the columns labeled with f (e.g. 0.0001) correspond to randomized 1717 traffic with a randomization-interval of f x packet-interarrival- 1718 time. It also means that on average, the packets are delayed by f x 1719 packet-interarrival-time / 2. In addition, the "No-Rand" column 1720 corresponds to the token bucket results in Table A.3. It 1721 turns out that indeed introducing enough jitter does break the 1722 synchronization effect and the performance of the algorithm 1723 improves considerably. However, it takes a sufficient amount of randomization 1724 before the improvement is noticeable. For instance, in the CBR rows of Table A.4, the only 1725 column that shows no aggregation effect is the one labeled with 1726 "0.05", which translates to an expected packet deviation from its ideal 1727 CBR transmit time of 0.5ms. While a 0.5ms per-hop deviation is not 1728 unreasonable to expect, in well provisioned networks with a 1729 relatively small amount of voice traffic in the priority queue one 1730 might find lower levels of network-induced jitter. In any case, 1731 these results indicate that the "synchronization" effect cannot be 1732 completely written off as a simulation artifact. The good news, 1733 however, is that this effect is visible only at very low ingress-egress 1734 aggregation levels, and as the ingress-egress aggregation increases, 1735 the effect quickly disappears. 1737 We observed the synchronization effect consistently across all types 1738 of traffic we tested with the exception of VTR. VTR also exhibits 1739 some aggregation effect; however, randomization of its CBR portion 1740 has almost no effect on performance. We suspect this is because 1741 the randomization we perform is at packet level, while the 1742 synchronization that seems to be causing the performance degradation 1743 at low ingress-egress aggregation for VTR traffic occurs at frame- 1744 level. Although our investigation of this issue is not completed 1745 yet, our preliminary results show that if we calculate the random 1746 deviation for our artificially induced jitter using the frame inter- 1747 arrival time instead of the packet-interarrival-time, we can reduce the 1748 over-admission percentage for VTR to roughly 3%. It is unclear, 1749 however, whether such randomization at the frame level meaningfully 1750 reflects network-introduced jitter. 1752 ---------------------------------------------------------------- 1753 | | No.
| Randomization Interval | 1754 | | Ingr | No-Rand | 0.0001 | 0.001 | 0.005 | 0.01 | 0.05 | 1755 |----------------------------------------------------------------| 1756 | | 2 | 0.725 | 0.683 | 0.784 | 0.725 | 0.772 | 0.787 | 1757 | | 10 | 0.753 | 0.725 | 0.543 | 0.645 | 0.733 | 0.854 | 1758 | | 100 | 7.666 | 5.593 | 2.706 | 1.454 | 1.226 | 0.692 | 1759 | CBR | 300 | 21.16 | 15.52 | 6.699 | 3.105 | 2.478 | 1.624 | 1760 | | 600 | 33.69 | 25.51 | 11.41 | 6.021 | 4.676 | 2.916 | 1761 | | 1000 | 44.58 | 36.20 | 17.03 | 7.094 | 5.371 | 3.076 | 1762 |----------------------------------------------------------------| 1763 | | 2 | 0.532 | 0.645 | 0.670 | 0.555 | 0.237 | 0.740 | 1764 | | 10 | 0.477 | 0.596 | 0.703 | 0.494 | 0.662 | 0.533 | 1765 | | 100 | 1.409 | 1.236 | 1.043 | 0.810 | 1.202 | 1.016 | 1766 | VBR | 300 | 3.044 | 2.652 | 2.093 | 1.588 | 1.755 | 1.671 | 1767 | | 600 | 5.812 | 4.913 | 3.539 | 2.963 | 2.803 | 2.277 | 1768 | | 1800 | 14.80 | 12.59 | 8.039 | 6.587 | 5.694 | 4.733 | 1769 |----------------------------------------------------------------| 1770 | | 2 | 0.736 | 0.753 | 0.627 | 0.751 | 0.850 | 0.820 | 1771 | | 10 | 0.649 | 0.737 | 0.780 | 0.824 | 0.867 | 0.787 | 1772 | | 100 | 1.960 | 1.705 | 1.428 | 1.160 | 1.149 | 1.034 | 1773 | MIX | 300 | 4.652 | 4.724 | 3.760 | 2.692 | 2.449 | 2.027 | 1774 | | 600 | 10.31 | 9.629 | 7.289 | 5.520 | 4.958 | 3.710 | 1775 | | 1000 | 17.21 | 15.96 | 11.05 | 8.700 | 7.382 | 5.061 | 1776 | | 1800 | 27.69 | 23.46 | 16.53 | 12.04 | 10.84 | 8.563 | 1777 |----------------------------------------------------------------| 1778 | | 2 | 0.758 | 0.756 | 0.872 | 0.894 | 0.825 | 0.849 | 1779 | | 10 | 0.889 | 0.939 | 0.785 | 0.704 | 0.843 | 0.574 | 1780 | | 70 | 1.335 | 1.101 | 1.066 | 1.181 | 0.978 | 0.946 | 1781 | VTR | 140 | 1.694 | 1.162 | 1.979 | 1.791 | 1.684 | 1.573 | 1782 | | 300 | 4.128 | 4.191 | 3.545 | 3.307 | 3.964 | 3.465 | 1783 | | 600 | 13.28 | 13.76 | 13.81 | 13.18 | 12.97 | 12.35 | 1784 |----------------------------------------------------------------| 1785 | | 2 | -1.64 | -2.30 | -2.14 | -1.61 | -1.01 | -0.89 | 1786 | | 10 | -0.93 | -1.65 | -2.41 | -2.98 | -2.58 | -2.27 | 1787 | | 35 | 0.237 | -0.31 | -0.35 | -1.02 | -0.96 | -2.16 | 1788 | SVD | 100 | 4.732 | 4.640 | 4.152 | 2.287 | 1.887 | -0.03 | 1789 | | 140 | 7.103 | 6.002 | 5.560 | 4.974 | 3.619 | 0.091 | 1790 | | 300 | 8.799 | 10.72 | 9.840 | 7.530 | 6.281 | 4.270 | 1791 ---------------------------------------------------------------- 1793 (Table A.4 Ingress-Egress Aggregation: Token-based results for 1794 Randomized traffic)) 1796 Finally, we investigated the impact of call arrival assumptions at 1797 different levels of ingress-egress aggregation by comparing the 1798 results with Poisson and BATCH arrivals. We reported in 1799 draft-zhang-pcn-performance-evaluation that virtual queue -based 1800 admission is relatively insensitive to the BATCH vs Poisson arrivals, 1801 even at lower aggregation levels. In contrast, the call arrival 1802 assumption does affect the performance of token bucket-based 1803 algorithm, and causes substantial degradation of performance at low 1804 ingress-egress aggregation level. An example result with CBR traffic 1805 is presented in table A.5. Here we use batch arrival with mean = 5. 1806 The results show that with the lowest aggregation, the batch arrival 1807 gives worse result than the normal Poisson arrival, however, as the 1808 level of aggregation become sufficient (e.g. 100 ingress, 10 call/ 1809 ingress), the difference becomes insignificant. 
This behavior is 1810 consistent across all types of traffic. 1813 ---------------------------------------------------------------- 1814 | | No. | Deviation Interval | 1815 | | Ingr | No-Rand | 0.0001 | 0.001 | 0.005 | 0.01 | 0.05 | 1816 |----------------------------------------------------------------| 1817 | | 2 | 0.918 | 1.007 | 0.836 | 0.933 | 1.014 | 0.971 | 1818 | | 10 | 1.221 | 0.936 | 0.767 | 0.906 | 0.920 | 0.857 | 1819 | | 100 | 8.857 | 7.092 | 3.265 | 1.821 | 1.463 | 1.036 | 1820 | CBR | 300 | 29.39 | 22.59 | 8.596 | 4.979 | 4.550 | 2.165 | 1821 | | 600 | 43.36 | 37.12 | 17.37 | 10.02 | 8.005 | 4.223 | 1822 | | 1000 | 63.60 | 50.36 | 25.48 | 12.82 | 9.339 | 6.219 | 1823 |----------------------------------------------------------------| 1824 (Table A.5 Ingress-Egress Aggregation with batch traffic: Token-based 1825 results) 1827 8.2.4. Effect of Multiple Bottlenecks 1829 The results in Table A.2 (Section 8.2.2, parameter sensitivity study) 1830 implied that from the bottleneck point of view, the performance on 1831 the multiple-bottleneck topology, for all types of traffic, is 1832 comparable to that on the Single Link topology, for both the queue-based and 1833 token bucket-based algorithms. However, the results in Table A.2 1834 only show the worst case values over all bottleneck links. In this 1835 section we consider two other aspects of the Multiple Bottleneck 1836 effects: relative performance at individual bottlenecks and fairness 1837 of bandwidth usage between the short- and the long-haul IEAs. 1839 8.2.4.1. Relative performance of different bottlenecks 1841 In Table A.6, we show a snapshot of the behavior with the 5-bottleneck 1842 topology, with the goal of studying the performance of different 1843 bottlenecks more closely. Here, the over-admission-percentage 1844 displayed is an average across all 15 experiments with different 1845 [weight, CLE] settings. (We do observe the same behavior in each of 1846 the individual experiments, hence providing summarized statistics is 1847 meaningful.) 1849 One difference between the token-bucket and the queue-based admission in 1850 the PLT topology, revealed in Table A.6, is that there appears to 1851 be a consistent relationship between the position of the bottleneck 1852 link (how far downstream it is) and its over-admission-percentage. 1853 The data shows that the further downstream the bottleneck is, the more it 1854 tends to over-admit, regardless of the type of traffic. The exact 1855 cause of this phenomenon is yet to be explained, but its effect 1856 seems to be insignificant in magnitude, at least in the experiments 1857 we ran.
1859 (preamble) 1860 --------------------------------------------------------- 1861 | | Traffic | Bottleneck LinkId | 1862 | | Type | 1 | 2 | 3 | 4 | 5 | 1863 | |-------------------------------------------------| 1864 | | CBR | 0.288 | 0.286 | 0.238 | 0.332 | 0.306 | 1865 | |-------------------------------------------------| 1866 | | VBR | 0.319 | 0.420 | 0.257 | 0.341 | 0.254 | 1867 | Queue |-------------------------------------------------| 1868 | Based | MIX | 0.363 | 0.394 | 0.312 | 0.268 | 0.205 | 1869 | |-------------------------------------------------| 1870 | | VTR | 0.466 | 0.309 | 0.223 | 0.363 | 0.317 | 1871 | |-------------------------------------------------| 1872 | | SVD | 0.319 | 0.420 | 0.257 | 0.341 | 0.254 | 1873 |--------------------------------------------------------- 1874 | | Traffic | Bottleneck LinkId | 1875 | | Type | 1 | 2 | 3 | 4 | 5 | 1876 | |-------------------------------------------------| 1877 | | CBR | 0.121 | 0.300 | 0.413 | 0.515 | 0.700 | 1878 | |-------------------------------------------------| 1879 | Token | VBR | -0.07 | 0.251 | 0.496 | 0.698 | 1.044 | 1880 |Bucket |-------------------------------------------------| 1881 | Based | MIX | 0.042 | 0.350 | 0.468 | 0.716 | 0.924 | 1882 | |-------------------------------------------------| 1883 | | VTR | 0.277 | 0.488 | 0.642 | 0.907 | 1.117 | 1884 | |-------------------------------------------------| 1885 | | SVD | -2.64 | -2.50 | -1.72 | -1.57 | -1.19 | 1886 --------------------------------------------------------- 1888 Table A.6 Bottleneck Performance: queue-based v.s. token bucket-based 1890 8.2.4.2. (Un)Fairness Between Different Ingress-Egress pairs 1892 It was reported in draft-zhang-pcn-performance-evaluation that 1893 virtual-queue-based admission control favors significantly short-haul 1894 connection over long-haul connections. As was discussed there, this 1895 property is in fact common for measurement-based admission control 1896 algorithms (see for example [Jamin] for a discussion). It is common 1897 knowledge that in the limit of large demands, long-haul connections 1898 can be completely starved. We show in 1899 draft-zhang-performance-evaluation that in fact starvation of long- 1900 haul connections can occur even with relatively small (but constant) 1901 overloads. We identify there that the primary reason for it is a de- 1902 synchronization of the "congestion periods" at different bottlenecks, 1903 resulting in the long-haul connections almost always seeing at least 1904 one bottleneck and hence almost never being allowed to admit new 1905 flows. We refer the reader to that draft for more detail. 1907 Here we investigate the comparative behavior of the token-bucket 1908 based scheme and virtual queue based scheme with respect to fairness. 1910 The fairness is illustrated using the ratio between bandwidth of the 1911 long-haul aggregates and the short-haul aggregates. As is 1912 intuitively expected, (and also confirmed experimentally), the 1913 unfairness is the larger the higher the demand, and the more 1914 bottlenecks traversed by the long-haul aggregate Therefore, we report 1915 here the "worst case" results across our experiments corresponding to 1916 the 5x demand overload and the 5-PLT topology. 1918 Table A.7 summaries, at 5x overload, with CLE=0.05 (for virtual 1919 queue), 0.0001(for token bucket), the fairness results to different 1920 weight and topology. 
We display the ratio as function of time, in 10 1921 sec increments, (the reported ratios are averaged over the 1922 corresponding 10 simulation-second interval). The result presented 1923 in this section uses the aggregates that traverse the first 1924 bottleneck. The results on all other bottlenecks are extremely 1925 similar. 1927 (preamble) 1928 --------------------------------------------------------------------------- 1929 | |Topo|Weight| Simulation Time (s) | 1930 | | | | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 1931 | |-------------------------------------------------------------------| 1932 | | | 0.1 | 0.99 | 1.04 | 1.14 | 1.14 | 1.23 | 1.23 | 1.35 | 1.46 | 1933 | |PLT5| 0.5 | 1.00 | 1.17 | 1.24 | 1.41 | 1.81 | 2.13 | 2.88 | 3.05 | 1934 | | | 0.9 | 1.03 | 1.42 | 1.74 | 2.14 | 2.44 | 2.91 | 3.83 | 4.20 | 1935 | |-------------------------------------------------------------------| 1936 |Virtual| | 0.1 | 1.02 | 1.08 | 1.15 | 1.29 | 1.33 | 1.38 | 1.37 | 1.42 | 1937 |Queue |PLT3| 0.5 | 1.02 | 1.04 | 1.07 | 1.19 | 1.24 | 1.30 | 1.34 | 1.33 | 1938 |Based | | 0.9 | 1.02 | 1.09 | 1.23 | 1.41 | 1.65 | 2.10 | 2.63 | 3.18 | 1939 | |-------------------------------------------------------------------| 1940 | | | 0.1 | 1.02 | 0.98 | 1.03 | 1.11 | 1.22 | 1.21 | 1.25 | 1.31 | 1941 | |PLT2| 0.5 | 1.02 | 1.06 | 1.14 | 1.17 | 1.15 | 1.31 | 1.41 | 1.41 | 1942 | | | 0.9 | 1.02 | 1.04 | 1.11 | 1.30 | 1.56 | 1.61 | 1.62 | 1.67 | 1943 --------------------------------------------------------------------------- 1944 ---------------------------------------------------------------------------- 1945 | |Topo|Weight| Simulation Time (s) | 1946 | | | | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 1947 | |-------------------------------------------------------------------| 1948 | | | 0.1 | 1.03 | 1.48 | 1.83 | 2.34 | 2.95 | 3.33 | 4.32 | 4.65 | 1949 | |PLT5| 0.5 | 1.08 | 1.53 | 1.90 | 2.44 | 3.04 | 3.42 | 4.47 | 4.83 | 1950 | | | 0.9 | 1.08 | 1.48 | 1.80 | 2.26 | 2.82 | 3.19 | 4.23 | 4.16 | 1951 | |-------------------------------------------------------------------| 1952 |Token | | 0.1 | 1.02 | 1.26 | 1.45 | 1.57 | 1.69 | 1.76 | 1.92 | 1.94 | 1953 |Bucket |PLT3| 0.5 | 1.07 | 1.41 | 1.89 | 2.36 | 2.89 | 3.63 | 3.70 | 3.82 | 1954 |Based | | 0.9 | 1.07 | 1.33 | 1.59 | 1.94 | 2.41 | 2.80 | 2.75 | 2.90 | 1955 | |-------------------------------------------------------------------| 1956 | | | 0.1 | 1.03 | 1.10 | 1.43 | 2.06 | 2.28 | 2.85 | 3.09 | 2.90 | 1957 | |PLT2| 0.5 | 1.07 | 1.32 | 1.47 | 1.72 | 1.71 | 1.81 | 1.89 | 1.94 | 1958 | | | 0.9 | 1.09 | 1.27 | 1.51 | 1.86 | 1.82 | 1.88 | 1.88 | 2.06 | 1959 ------------------------------------------------------------------- 1960 Table A.7 Fairness performance: Virtual Queue v.s. Token Bucket. 1961 The numbers in the cells represent the ratio between the bandwidth of 1962 the long- and short-haul aggregates. Each row represents the time 1963 series of these results in 10 simulation second increments. 1965 To summarize, we observed consistent beatdown effect across all 1966 experiments for both virtual-queue and token-bucket admission 1967 algorithms, although the exact extent of the unfairness depends on 1968 the demand overload, topology and parameters settings. To further 1969 quantify the effect of these factors remains an area of future work. 1970 We also note that the cause of the beatdown effect appears to be 1971 largely independent of the specific algorithm, and is likely to be 1972 relevant to other PCN proposals as well. 1974 8.3. Termination Control 1976 8.3.1. 
Ingress-Egress Aggregation Experiments 1978 In this section, we investigate the sensitivity of the Flow Termination 1979 Function of SM. From our admission control experiments it is clear 1980 that SM is extremely sensitive to very low IE-aggregation (on the 1981 order of 1-10 flows), limiting the applicability of SM at these 1982 aggregation levels. We show here that the Termination Function of SM 1983 requires even more IE aggregation, as we quantify in this section. 1985 The table below shows the comparative accuracy of CL and SM at different 1986 aggregation levels in a single bottleneck topology with multiple IEAs 1987 sharing the bottleneck. As can be seen from this table, the actual 1988 degree of IE aggregation necessary to achieve an over-termination 1989 within 10% ranges from ~50 to ~150 for different traffic types 1990 (note that for the extremely bursty high-rate SVD traffic the maximum number 1991 of flows in an IEA we ran was 69, which was not sufficient to reach the 1992 10% over-termination error bound we targeted; we did not run a higher 1993 number of SVD flows per IEA due to time limitations). 1995 -------------------------------------------- 1996 | | No. | Flow per | Over-Term. Perc. | 1997 | | Ingre | Ingre | CL | SM | 1998 |--------------------------------------------| 1999 | | 2 | 285 | -0.106 | 4.112 | 2000 | CBR | 10 | 57 | 0.388 | 6.710 | 2001 | | 35 | 16 | 1.035 | 14.64 | 2002 | | 70 | 8 | 0.727 | 16.39 | 2003 |--------------------------------------------| 2004 | | 2 | 849 | 0.912 | 2.808 | 2005 | VBR | 10 | 169 | 4.032 | 10.47 | 2006 | | 35 | 48 | 2.757 | 22.26 | 2007 | | 100 | 16 | 3.966 | 22.52 | 2008 |--------------------------------------------| 2009 | | 2 | 662 | 1.297 | 3.672 | 2010 | MIX | 10 | 132 | 2.698 | 7.809 | 2011 | | 35 | 37 | 1.978 | 14.83 | 2012 | | 100 | 13 | 4.265 | 17.29 | 2013 |--------------------------------------------| 2014 | | 2 | 158 | 3.513 | 3.718 | 2015 | VTR | 10 | 31 | 4.532 | 14.82 | 2016 | | 35 | 9 | 6.842 | 22.95 | 2017 | | 70 | 4 | 8.458 | 22.31 | 2018 |--------------------------------------------| 2019 | | 2 | 69 | 7.811 | 20.90 | 2020 | SVD | 10 | 13 | 10.69 | 27.38 | 2021 | | 35 | 4 | 8.322 | 20.78 | 2022 -------------------------------------------- 2023 Table A.8 Over-termination comparison between CL and SM at medium/ 2024 high IE aggregation 2026 It turns out that the reason for this higher sensitivity to low 2027 ingress-egress aggregation lies in the non-uniformity in the marking 2028 distribution across different IEAs. As a result of this non- 2029 uniformity, when traffic is just below the (implicit) termination 2030 threshold at the bottleneck, some IEAs get excessively marked, 2031 causing a false termination event at the corresponding PCN-ingress- 2032 nodes, in turn causing extra over-termination. 2034 ---------------------------------------------- 2035 | | No. | Flow per | Over-Term. Perc.
| 2036 | | Ingre | Ingre | SM | SM-SM | 2037 |----------------------------------------------| 2038 | | 2 | 285 | 4.112 | 2.243 | 2039 | CBR | 10 | 57 | 6.710 | 3.142 | 2040 | | 35 | 16 | 14.64 | 6.549 | 2041 | | 70 | 8 | 16.39 | 8.496 | 2042 |----------------------------------------------| 2043 | | 2 | 849 | 2.808 | 0.951 | 2044 | VBR | 10 | 169 | 10.47 | 4.096 | 2045 | | 35 | 48 | 22.26 | 6.987 | 2046 | | 100 | 16 | 22.52 | 8.567 | 2047 |----------------------------------------------| 2048 | | 2 | 662 | 3.672 | 2.574 | 2049 | MIX | 10 | 132 | 7.809 | 3.822 | 2050 | | 35 | 37 | 14.83 | 4.936 | 2051 | | 100 | 13 | 17.29 | 6.956 | 2052 |----------------------------------------------| 2053 | | 2 | 158 | 3.718 | 3.866 | 2054 | VTR | 10 | 31 | 14.82 | 7.507 | 2055 | | 35 | 9 | 22.95 | 10.29 | 2056 | | 70 | 4 | 22.31 | 8.528 | 2057 |----------------------------------------------| 2058 | | 2 | 69 | 20.90 | 9.272 | 2059 | SVD | 10 | 13 | 27.38 | 12.46 | 2060 | | 35 | 4 | 20.78 | 10.14 | 2061 ---------------------------------------------- 2062 Table A.9 Over-termination comparison between SM and SM with smoothed 2063 trigger. Here EWMA weight = 0.9 (heavy history) 2065 We investigated whether this effect can be removed by smoothing 2066 (using an EWMA) the ratio between marked and unmarked traffic that we 2067 use at the ingress node to trigger the termination event. Table A.9 2068 above presents the results for the EWMA weight of 0.9, corresponding 2069 to a long history. It can be seen that such smoothing does in fact 2070 help reduce over-termination. However, it also increases the 2071 reaction time of flow termination. This increased latency grows for 2072 larger U and decreases with the increase in the excess load over the 2073 (implicit) termination threshold. 2075 Table A.10 quantifies this extra delay. (Note: these results are for 2076 100 ms measurement intervals at the ingress, and for negligible 2077 round-trip time. The actual extra latency is obtained by adding the 2078 RTT to the results in Table A.10.) 2081 -------------------------------- 2082 |U \ R | 0.2 | 0.3 | 0.4 | 0.5 | 2083 -------------------------------- 2084 | 1.1 | 0.5 | 0.4 | 0.3 | 0.3 | 2085 -------------------------------- 2086 | 1.3 | 1.0 | 0.8 | 0.7 | 0.7 | 2087 -------------------------------- 2088 | 1.5 | 1.4 | 1.1 | 1.0 | 0.9 | 2089 -------------------------------- 2090 | 1.7 | 1.6 | 1.4 | 1.2 | 1.1 | 2091 -------------------------------- 2092 | 1.9 | 1.8 | 1.6 | 1.4 | 1.3 | 2093 -------------------------------- 2094 | 2.0 | 1.9 | 1.6 | 1.5 | 1.4 | 2095 -------------------------------- 2097 Table A.10. Additional latency due to smoothing of the termination 2098 signal at the PCN-ingress-node (in sec; W=0.9) 2100 We note that this smoothing is only necessary at the lower range of 2101 the IE aggregation levels we considered, and is not necessary as soon 2102 as the aggregation level reaches 50-150 flows (for different traffic 2103 types) in our experiments. For the lower aggregation levels, the 2104 smoothing may be useful, at the expense of the additional latency. 2106 8.3.2. Multiple Bottlenecks Experiments 2108 As discussed in Section 7.3, the fact that SM marks traffic when the 2109 bottleneck load is below the (implicit) termination threshold but above 2110 the configured admission threshold causes an additional "beat-down" 2111 effect for flows traversing multiple bottlenecks, compared to the 2112 beat-down effect already observed for CL in 2113 draft-zhang-performance-evaluation.
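The amplification can be illustrated with a toy fluid computation. The model below is an assumption of this sketch (independent, proportional excess-rate marking at each link), not the fluid simulator used for the experiments; it only serves to show why metering at the lower admission threshold inflates the terminated share when several bottlenecks are crossed.

   # Toy fluid model of multi-bottleneck termination in CL vs. SM.

   def unmarked_fraction(link_loads, thresholds):
       """Fraction of an aggregate escaping marking on every traversed link,
       assuming independent proportional excess-rate marking per link."""
       frac = 1.0
       for load, thr in zip(link_loads, thresholds):
           p_mark = max(0.0, (load - thr) / load)
           frac *= (1.0 - p_mark)
       return frac

   def terminated_share(rate, link_loads, term_thresholds, U):
       """Terminated share of an aggregate: CL meters at the termination
       threshold, SM meters at the admission threshold (= termination/U)
       and scales the unmarked rate back up by U."""
       cl_sustain = rate * unmarked_fraction(link_loads, term_thresholds)
       sm_thresholds = [t / U for t in term_thresholds]
       sm_sustain = U * rate * unmarked_fraction(link_loads, sm_thresholds)
       return (max(0.0, 1.0 - cl_sustain / rate),
               max(0.0, 1.0 - sm_sustain / rate))

   # One bottleneck at 1.2x its termination threshold: both schemes
   # terminate the same 1/6 of the aggregate.  Five such bottlenecks in a
   # row with U = 2.0: CL terminates ~60%, SM ~97% of the long-haul
   # aggregate in this toy model.
   print(terminated_share(10.0, [120.0], [100.0], U=2.0))
   print(terminated_share(10.0, [120.0] * 5, [100.0] * 5, U=2.0))

Consistent with the fluid argument in Section 7.3.1, a single bottleneck gives identical results for CL and SM in this model; the difference appears only when markings from several bottlenecks accumulate.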
2115 We start with the setup with 2- and 5-PLT topology similar to that of 2116 draft-zhang-performance-evaluation. That is, at failure event time, 2117 all bottleneck links have a load of roughly 3/4 of its link size. In 2118 addition, the long IEA constitutes 2/3 of this load, while the short 2119 one is 1/3. Table below shows the comparative over-termination on 2120 the bottlenecks (2 and 5 PLT topology) for both CL and SM. The 2121 bottleneck rows are ordered based on the flow traversal order (from 2122 upstream to downstream). 2124 As in the results we presented in draft-zhang-performance-evaluation, 2125 we report over-termination compared to the "reference" over- 2126 termination which we compute as follows for the multi-bottleneck 2127 topology. We take each link in the topology separately and compute 2128 the "rate-proportionally fair" rates that each IEA sharing this 2129 bottleneck will need to be reduced to (in proportion to their 2130 demands), so that the load on that bottleneck independently becomes 2131 equal to the termination threshold (this threshold being implicit for 2132 SM, explicit for CL), assuming the initial sum of rates exceeds this 2133 threshold. After this is done independently for each bottleneck, we 2134 assign each IEA the smallest of its scaled down rates across all 2135 bottlenecks. We then compute the "reference" utilization on each 2136 link by summing up the scaled down rates of each IEA sharing this 2137 link. Our over-termination is then reported in reference to this 2138 "reference" utilization. We note that this reference utilization may 2139 frequently be already below the termination threshold of a given 2140 link. This can happen easily in the case when a large number of 2141 flows sharing a given link is "bottlenecked" elsewhere. 2143 ------------------------------------------------------------------- 2144 | Topo. | CBR | VBR | VTR | SVD | 2145 | 2/5 PLT | CL SM | CL SM | CL SM | CL SM | 2146 ------------------------------------------------------------------- 2147 |2 BN1 | 5.93 20.93 | 6.49 21.31 | 9.07 21.88 | 9.28 23.18 | 2148 | BN2 | 0.56 9.89 | 2.21 9.89 | 3.61 8.74 | 7.99 12.92 | 2149 |-------------------------------------------------------------------| 2150 | BN1 | 9.63 35.04 | 10.9 34.06 | 11.41 36.30 | 14.23 39.37 | 2151 | BN2 | 4.54 23.51 | 6.19 22.83 | 5.66 23.53 | 9.67 28.45 | 2152 |5 BN3 | 2.05 23.36 | 2.46 23.18 | 3.47 24.64 | 5.73 27.01 | 2153 | BN4 | 0.90 23.78 | 1.40 23.46 | 3.13 24.02 | 3.98 27.59 | 2154 | BN5 | 0.00 24.08 | 0.30 23.11 | 2.81 23.83 | 5.54 28.45 | 2155 ------------------------------------------------------------------- 2156 Table A.11 Over-termination comparison of CL and SM for 2 and 5 PLT 2157 topology 2159 We note that in these experiments SM does significantly worse than CL 2160 across all traffic. The most upstream bottleneck suffers the most 2161 over-termination due to the fact that the long-haul IA gets severely 2162 beaten down, while the short-haul flows terminate their fair share. 2163 (In this experiment almost 90% of the long-haul IA is terminated). 2165 In our PLT setup each IEA is heavily aggregated, so we do not expect 2166 smoothing of the termination trigger to have a significant effect. 2167 Table A.12 Summarizes the performance of the same setup with 2168 smoothing. 2170 ------------------------------------------------------------------ 2171 | Topo. 
| CBR | VBR | VTR | SVD | 2172 | 2/5 PLT | SM SM-SM | SM SM-SM | SM SM-SM | SM SM-SM | 2173 ------------------------------------------------------------------| 2174 |2 BN1 | 20.93 14.29 | 21.31 13.18 | 21.88 15.14 | 23.18 16.53 | 2175 | BN2 | 9.89 14.37 | 9.89 13.32 | 8.74 14.22 | 12.92 16.89 | 2176 |------------------------------------------------------------------| 2177 | BN1 | 35.04 24.27 | 34.06 23.17 | 36.30 23.98 | 39.37 30.83 | 2178 | BN2 | 23.51 24.15 | 22.83 23.87 | 23.53 24.21 | 28.45 31.60 | 2179 |5 BN3 | 23.36 23.94 | 23.18 23.67 | 24.64 25.23 | 27.01 29.65 | 2180 | BN4 | 23.78 24.56 | 23.46 23.86 | 24.02 25.04 | 27.59 29.25 | 2181 | BN5 | 24.08 24.24 | 23.11 24.08 | 23.83 24.95 | 28.45 29.94 | 2182 ------------------------------------------------------------------- 2183 Table A.12. Over-termination comparison of SM and smoothed SM for 2 2184 and 5 PLT topology 2186 The smaller over-termination on the upstream bottlenecks, especially bottleneck 1, 2187 is due to the fact that with smoothing, the short IEA on 2188 bottleneck 1 did not terminate at all, which makes bottleneck 2189 1 less over-terminated than in the case of non-smoothed SM. The reason for this 2190 is that in the smoothed SM, the additional markings received (due to the 2191 multi-bottleneck effect) by the long IEA make its smoothing process 2192 react much faster than that of the short IEA. Again, in this particular setup, 2193 there are enough flows in the long IEA to be terminated to bring the 2194 bottleneck load well below the termination threshold, while the short IEA 2195 never gets to react. 2197 Our next task was to investigate whether the particularly bad performance 2198 of SM in this case is a common occurrence. To do so, we took the 2199 5-PLT topology and generated on the order of ~50,000 random traffic 2200 matrices, and random settings of the admission threshold and the 2201 parameter U, resulting in the creation of anywhere from 0 to 5 2202 bottlenecks on this topology in each experiment. We limited the 2203 range of U from 1.0 to 3.0, and the maximum overload on any link was 2204 at most ~10x of the (implicit) termination threshold. To enable us 2205 to run this many experiments in a reasonable time, we implemented a 2206 fluid model, and later compared its accuracy with the packet 2207 simulations on a subset of topologies to confirm the reliability of the 2208 fluid model simulation results (see below). 2210 Table A.13 gives a summary of the experimental frequency of the 2211 setups with a particular range of the termination error on the most 2212 loaded bottleneck. In addition, we checked whether the termination 2213 error in those setups is so big as to bring the load on the most 2214 loaded bottleneck below its admission threshold. This data is shown 2215 by summarizing the experimental frequency of experiments where the 2216 resulting load after termination (End-Load) is above the admission 2217 threshold. Since the frequency of these cases depends on the number 2218 of bottlenecks in the experiment, we report this by the number of 2219 bottlenecks. 2221 We split the results in Table A.13 into those with small U (where the 2222 (implicit) termination threshold is very close to the admission 2223 threshold), medium U (where the implicit termination threshold is 2224 between 1.2 and 2 times the admission threshold), and large U 2225 (greater than 2 times the admission threshold).
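For reference, the "rate-proportionally fair" reference termination described at the beginning of this subsection, against which the over-termination in Table A.13 is computed, can be sketched as follows. The data layout and names are assumptions of this illustration, not the authors' implementation.

   # Reference termination for a multi-bottleneck topology (illustrative):
   # scale the IEAs on each overloaded link down proportionally to that
   # link's termination threshold, give each IEA the smallest of its
   # per-link scaled rates, and sum those rates per link.

   def reference_utilization(demands, routes, term_threshold):
       """demands: IEA -> offered rate; routes: IEA -> list of links;
       term_threshold: link -> (implicit) termination threshold."""
       link_load = {l: 0.0 for l in term_threshold}
       for iea, rate in demands.items():
           for l in routes[iea]:
               link_load[l] += rate
       scale = {l: (min(1.0, term_threshold[l] / link_load[l])
                    if link_load[l] > 0 else 1.0) for l in link_load}
       # Each IEA keeps the smallest of its per-link scaled-down rates.
       ref_rate = {iea: rate * min(scale[l] for l in routes[iea])
                   for iea, rate in demands.items()}
       ref_util = {l: 0.0 for l in term_threshold}
       for iea, rate in ref_rate.items():
           for l in routes[iea]:
               ref_util[l] += rate
       return ref_util

   # Toy example: a long IEA over links L1 and L2, plus one local IEA per
   # link; L1 is overloaded, L2 is not.
   print(reference_utilization(
       demands={"long": 60.0, "short1": 60.0, "short2": 30.0},
       routes={"long": ["L1", "L2"], "short1": ["L1"], "short2": ["L2"]},
       term_threshold={"L1": 100.0, "L2": 100.0}))

In this example the reference utilization of L1 equals its threshold (100), while that of L2 drops to 80 because the long IEA is scaled down by L1; this is exactly the situation noted above where the reference utilization of a link may already be below its termination threshold when its traffic is bottlenecked elsewhere.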
Given the accuracy results from our packet experiments, it seems that 2228 a reasonable setting of U must be such that the (implicit) termination threshold is at least 120% of the admission 2229 threshold, to reduce the probability that the termination error will 2230 bring the load below the admission threshold. Therefore, we present the 2231 results for small U for completeness only. We also believe that in 2232 practice setups with large U>2.0 should be rare, and hence we report 2233 the large U results separately as well. 2235 At a high level the results of Table A.13 imply that for small U, 2236 over-termination is small, but it is enough to frequently drive the 2237 bottleneck load below the admission threshold, especially with a larger 2238 number of bottlenecks. For large U, the over-termination is larger, 2239 but the bottleneck load almost never falls below the admission threshold. 2240 Finally, for medium U, which, in our opinion, is the case of practical 2241 importance, the over-termination for SM is below 10% in ~65% of the 2242 experiments, is within 20% for about 30% of the experiments, and 2243 between 20 and 40% for the remaining ~5%. In contrast, CL remains 2244 within 10% over-termination most of the time. For this medium U, 2245 this over-termination almost never causes the bottleneck load to drop 2246 below the admission threshold for up to 3 bottlenecks, while ~10% of the 2247 4-bottleneck cases and ~20% of the 5-bottleneck cases do drop below 2248 the admission threshold. Note that CL also occasionally drives the 2249 load below the admission threshold, albeit not as often as SM - e.g. in 2250 ~3% of the simulations for the 4-bottleneck cases and about 7% for the 5- 2251 bottleneck cases, for medium U. 2253 We note that the load after the termination event falling 2254 below the admission threshold, as well as part of the over- 2255 termination reported below, is partially due to the fact that some of 2256 the flows going through the most overloaded bottleneck are 2257 nevertheless passing another bottleneck elsewhere. Even if the 2258 overall overload on that other bottleneck is not as high, that 2259 (other) bottleneck may nevertheless drive some of the IEAs down 2260 to a smaller rate than the bottleneck with the largest overload we 2261 are considering. This is confirmed by the fact that the Reference 2262 termination occasionally falls below the admission threshold as 2263 well - see the REF rows of the "End-load Above Admission Threshold" tables in Table 2264 A.13. 2267 Small U Total Expr: 5185 1.0 < U <= 1.2 2269 ------------------------------------------------------ 2270 | Alg. | Distribution of Over-Termination Percentage | 2271 | | 0-10% | 10-20% | 20-30% | 30-40% | 40-50% | 2272 |------|-------|--------|---------|---------|---------| 2273 | CL | 0.986 | 0.013 | 0.000 | 0.000 | 0.000 | 2274 |------|-------|--------|---------|---------|---------| 2275 | SM | 0.968 | 0.031 | 0.000 | 0.000 | 0.000 | 2276 -----------------------------------------------------| 2278 ------------------------------------------------------ 2279 | Alg. | Fract.
2280  |      | 0 BN  | 1 BN  | 2 BN  | 3 BN  | 4 BN  | 5 BN  |
2281  |------|-------|-------|-------|-------|-------|-------|
2282  | CL   | 1     | 0.914 | 0.851 | 0.732 | 0.490 | 0.357 |
2283  |------|-------|-------|-------|-------|-------|-------|
2284  | SM   | 1     | 0.934 | 0.869 | 0.725 | 0.488 | 0.374 |
2285  |------|-------|-------|-------|-------|-------|-------|
2286  | REF  | 1     | 0.980 | 0.982 | 0.977 | 0.978 | 0.968 |
2287   -------------------------------------------------------

2289  Medium U    Total Expr: 19689    1.2 < U < 2.0

2291   --------------------------------------------------------
2292  | Alg. |  Distribution of Over-Termination Percentage   |
2293  |      | 0-10%   | 10-20%  | 20-30%  | 30-40%  | 40-50%  |
2294  |------|---------|---------|---------|---------|---------|
2295  | CL   | 0.985   | 0.013   | 0.000   | 0.000   | 0.000   |
2296  |------|---------|---------|---------|---------|---------|
2297  | SM   | 0.659   | 0.294   | 0.043   | 0.001   | 0.000   |
2298   --------------------------------------------------------

2300   ------------------------------------------------------
2301  | Alg. |  Fract. End-load Above Admission Threshold   |
2302  |      | 0 BN  | 1 BN  | 2 BN  | 3 BN  | 4 BN  | 5 BN  |
2303  |------|-------|-------|-------|-------|-------|-------|
2304  | CL   | 1     | 0.991 | 0.993 | 0.991 | 0.969 | 0.928 |
2305  |------|-------|-------|-------|-------|-------|-------|
2306  | SM   | 1     | 0.990 | 0.991 | 0.977 | 0.909 | 0.831 |
2307  |------|-------|-------|-------|-------|-------|-------|
2308  | REF  | 1     | 0.991 | 0.994 | 0.994 | 0.996 | 0.993 |
2309   -------------------------------------------------------

2311  Large U    Total Expr: 25129    2.0 <= U <= 3.0

2313   --------------------------------------------------------
2315  | Alg. |  Distribution of Over-Termination Percentage   |
2316  |      | 0-10%   | 10-20%  | 20-30%  | 30-40%  | 40-50%  |
2317  |------|---------|---------|---------|---------|---------|
2318  | CL   | 0.984   | 0.014   | 0.000   | 0.000   | 0.000   |
2319  |------|---------|---------|---------|---------|---------|
2320  | SM   | 0.254   | 0.384   | 0.275   | 0.075   | 0.008   |
2321   --------------------------------------------------------

2323   ------------------------------------------------------
2324  | Alg. |  Fract. End-load Above Admission Threshold   |
2325  |      | 0 BN  | 1 BN  | 2 BN  | 3 BN  | 4 BN  | 5 BN  |
2326  |------|-------|-------|-------|-------|-------|-------|
2327  | CL   | 1     | 0.998 | 0.998 | 0.997 | 0.997 | 0.996 |
2328  |------|-------|-------|-------|-------|-------|-------|
2329  | SM   | 1     | 0.997 | 0.995 | 0.992 | 0.969 | 0.945 |
2330  |------|-------|-------|-------|-------|-------|-------|
2331  | REF  | 1     | 0.998 | 0.998 | 0.997 | 0.998 | 0.999 |
2332   -------------------------------------------------------

2334  Table A.13.  Distribution of over-termination percentage and
2335  frequency of the load after termination event (denoted "End-Load")
2336  remaining above admission threshold.  The REF rows in the "End-load
2337  Above Admission Threshold" tables correspond to the Reference
2338  termination against which the over-termination percentage is
2339  computed.

2340  To investigate how the fluid simulation results relate to the packet
2341  simulation, we also ran a subset of approximately 2000 experiments
2342  through a packet simulator with the same parameter settings and
2343  traffic loads as the corresponding fluid simulations, and compared
2344  the results.
2345  We found that the error in most experiments is relatively small,
2346  allowing us to conjecture that the statistics of the fluid
2347  experiments adequately approximate the expected packet-level results
2348  in these experiments.

2349  The distribution of the error between the fluid and packet
2350  simulation results for these experiments is shown in the following
      table:

2353   --------------------------------------------------------
2354  | Alg. | Error Dist. in Over-Term. Perc. (Fluid-Packet)  |
2355  |      |-0.2~-0.1|-0.1~0.0 | 0.0~0.1 | 0.1~0.2 | 0.2~0.4 |
2356  |------|---------|---------|---------|---------|---------|
2357  | CL   | 0.000   | 0.032   | 0.963   | 0.000   | 0.000   |
2358  |------|---------|---------|---------|---------|---------|
2359  | SM   | 0.003   | 0.334   | 0.640   | 0.013   | 0.000   |
2360   --------------------------------------------------------

2362  Table A.14.  Error between fluid and packet simulations in about
2363  2000 experiments

2365  As can be seen, the error is relatively contained.

2367  Finally, we ran a similar set of fluid experiments with the smoothed
2368  trigger signal (see Section 8.3.1), and found no visible difference
2369  in the statistical performance of the smoothed and non-smoothed
2370  versions of the algorithm.  In some cases smoothing performed better
2371  than the non-smoothed version, as in the example reported in Tables
2372  A.11 and A.12, and in other cases the non-smoothed version
2373  outperformed the smoothed version, with the overall distribution of
2374  over-termination errors remaining extremely similar for the smoothed
2375  and non-smoothed versions.  We therefore conclude that smoothing is
2376  necessary only to deal with low levels of ingress-egress
2377  aggregation, and has no effect on the over-termination in the
2378  multi-bottleneck scenario as long as the IEAs are sufficiently
2379  aggregated.

2381  9.  Appendix B.  Controlling The Single Marking Configuration with a
2382      Single Parameter

2384  9.1.  Assumption

2386  This section assumes that TM-marking is used for the SM marking
      encoding.

2388  9.2.  Details of the Proposed Enhancements to PCN Architecture

2390  9.2.1.  PCN-Internal-Node

2392  No substantive change is required for the PCN framework (as defined
2393  in [I-D.eardley-pcn-architecture]) to enable SM operation in the
2394  PCN-internal-node.  The architecture already allows the
2395  implementation of only one marking and metering algorithm at the
      PCN-internal-node.

2397  However, we propose to rename the terms "configured-admissible-rate"
2398  and "configured-termination-rate" to "Type Q threshold" and "Type R
2399  threshold".  The architecture should allow configuring either one of
2400  these thresholds or both at the PCN-ingress node.  The type of the
2401  threshold determines the type of the marking semantics/algorithm
2402  associated with the threshold.

2404  9.2.2.  PCN-Egress-Node

2406  The only proposed change at the PCN-egress-node is the addition of a
2407  single (globally defined) configuration constant U.  The setting of
2408  this constant defines the type of marking the CLE is measured
2409  against.  If U=1, the system defaults to the dual-marking behavior
2410  and the CLE is measured against Type-Q-marked packets.  If U>1, the
2411  CLE is measured against Type-R-marked traffic.  No other change is
2412  required (the resulting egress behavior is sketched at the end of
      this subsection).

2413  In more detail,

2415  o  If U=1, a PCN-egress-node expects to receive either Type Q
2416     marking only (the network implements virtual-queue-based
2417     admission only), or Type R marking only (the system implements
2418     excess-rate-based flow termination only), or both (the system
2419     implements dual-marking admission and termination).

2421  o  If U>1, a PCN-egress-node expects to receive only Type R marking
2422     (the network implements the single-marking approach).

2424  o  If U=1 and Type Q marking is received (as indicated by the
2425     encoding in the PCN packets), then the PCN-egress-node always
2426     measures the CLE (the fraction of traffic carrying Type Q marks)
2427     on a per-ingress basis against Type Q marking.  This represents
2428     no change (other than renaming "admission-marked" packets to
2429     "Type Q-marked" packets) compared to the current architecture.
2430     The PCN-egress-node then signals the (Type-Q-based) CLE to the
2431     PCN-ingress-node - again as already enabled by the current PCN
2432     architecture.

2434  o  If U=1 and a PCN-egress-node receives Type R marking (as
2435     indicated in the encoding of the PCN packets), it measures the
2436     sustainable rate with respect to Type-R-marked traffic (i.e., it
2437     measures the amount of traffic without the Type R marks).  This
2438     also is just a renaming change (with termination-marking renamed
2439     to Type R marking) and is fully compatible with the current PCN
2440     architecture.

2442  o  If U>1, the PCN-egress-node computes both the CLE and the
2443     sustainable rate with respect to Type R marking.

2445  o  Once computed, the CLE and/or the sustainable rate are
2446     communicated to the PCN-ingress-node as described in
2448     [I-D.eardley-pcn-architecture].
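The following non-normative sketch (in Python, added for illustration;
identifiers such as egress_report() and the packet-tuple format are
assumptions of this sketch, not interfaces defined by this document or
by [I-D.eardley-pcn-architecture]) summarizes how the single constant U
selects the marking against which the per-ingress CLE and sustainable
rate are computed at the PCN-egress-node:

   # Illustrative sketch of the PCN-egress-node measurement logic under
   # the single configuration constant U.  All identifiers are
   # hypothetical; the normative behavior is the bullet list above.

   def egress_report(u, packets):
       # packets: iterable of (ingress_id, size_bytes, marking) tuples
       # observed in one measurement interval; marking is one of
       # "none", "Q" (Type Q) or "R" (Type R).
       totals, q_marked, not_r_marked = {}, {}, {}
       for ingress, size, marking in packets:
           totals[ingress] = totals.get(ingress, 0) + size
           if marking == "Q":
               q_marked[ingress] = q_marked.get(ingress, 0) + size
           if marking != "R":
               not_r_marked[ingress] = not_r_marked.get(ingress, 0) + size

       report = {}
       for ingress, total in totals.items():
           if u == 1:
               # Dual-marking default: the CLE is the fraction of
               # Type-Q-marked traffic; the sustainable rate is the
               # traffic without Type R marks (if Type R marking is used).
               cle = q_marked.get(ingress, 0) / total
           else:
               # Single marking (U > 1): both quantities are computed
               # against Type R marking.
               cle = (total - not_r_marked.get(ingress, 0)) / total
           sustainable_rate = not_r_marked.get(ingress, 0)
           report[ingress] = (cle, sustainable_rate)
       return report

Both quantities are then signalled to the corresponding
PCN-ingress-node exactly as the current architecture already allows.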
2450  9.2.3.  PCN-Ingress-Node

2452  The only proposed change at the PCN-ingress-node is the addition of
2453  a single (globally defined) configuration constant U.  This is the
2454  same constant as defined for the PCN-egress-node, so U is in fact a
2455  per-PCN-boundary-node constant; its value, however, is assumed to be
2456  global for all PCN-boundary-nodes in the PCN-domain (or at least for
2457  a subset of nodes communicating only with each other).  The value of
2458  this constant is used to multiply the sustainable rate received from
2459  a given PCN-egress-node to compute the rate threshold used for flow
2460  termination decisions.  The value U=1 corresponds to the dual-
2461  marking approach and results in using the sustainable rate received
2462  from the PCN-egress-node directly.  The value U>1 corresponds to the
2463  SM approach, and its (globally defined) value signifies the desired
2464  system-wide implicit ratio between the flow termination and flow
2465  admission thresholds, as described in Section 2.

2467  Note that the constant U is assumed to be defined per PCN-boundary-
2468  node (i.e., the ingress and the egress functions of the
2469  PCN-boundary-node use the same configuration constant to guide their
      behavior).

2471  In more detail (a sketch of the resulting ingress behavior follows
      this list):

2473  o  A PCN-ingress-node receives the CLE and/or the sustainable rate
2474     from each PCN-egress-node to which it sends traffic.  This is
2475     fully compatible with the PCN architecture as described in
2476     [I-D.eardley-pcn-architecture].

2477  o  A PCN-ingress-node bases its admission decisions on the value of
2478     the CLE.  Specifically, once the value of the CLE exceeds a
2479     configured threshold, the PCN-ingress-node stops admitting new
2480     flows.  It restarts admitting when the CLE value falls back below
2481     the specified threshold.  This is fully compatible with the PCN
2482     architecture as described in [I-D.eardley-pcn-architecture].

2484  o  A PCN-ingress-node receiving a sustainable rate from a particular
2485     PCN-egress-node measures its traffic to that egress node.  This
2486     again is fully compatible with the PCN architecture as described
2487     in [I-D.eardley-pcn-architecture].

2489  o  The PCN-ingress-node computes the desired Termination Rate to a
2490     particular PCN-egress-node by multiplying the sustainable rate
2491     received from that PCN-egress-node by the value of the
2492     configuration parameter U.  This computation step represents a
2493     proposed change to the current version of
2494     [I-D.eardley-pcn-architecture].

2495  o  Once the Termination Rate is computed, it is used for the flow
2496     termination decision in a manner fully compatible with
2497     [I-D.eardley-pcn-architecture]: the PCN-ingress-node compares the
2498     measured traffic rate destined to the given PCN-egress-node with
2499     the computed Termination Rate for that egress node and terminates
2500     a set of flows so as to remove the rate in excess of that
2501     Termination Rate.
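As a non-normative illustration of the bullets above, the sketch below
(Python; identifiers such as cle_threshold and on_report() are
assumptions made for this sketch rather than names defined by this
document) shows the admission check against the CLE and the computation
of the Termination Rate as U times the reported sustainable rate:

   # Non-normative sketch of the PCN-ingress-node decision logic.
   # All identifiers are hypothetical; rates are assumed to be expressed
   # in the same units as the egress reports.

   class PcnIngress:
       def __init__(self, u, cle_threshold):
           self.u = u                      # globally configured constant U
           self.cle_threshold = cle_threshold
           self.admitting = {}             # per egress: admit new flows?

       def on_report(self, egress, cle, sustainable_rate,
                     measured_rate, flows):
           # Admission: stop admitting while the CLE exceeds the
           # configured threshold; restart once it falls back below it.
           self.admitting[egress] = cle < self.cle_threshold

           # Termination: the Termination Rate is U times the sustainable
           # rate reported by the egress (U=1 reduces this to the
           # dual-marking behavior).
           termination_rate = self.u * sustainable_rate
           excess = measured_rate - termination_rate

           terminated = []
           for flow_id, flow_rate in flows:  # flows destined to this egress
               if excess <= 0:
                   break
               terminated.append(flow_id)    # choose a set of flows whose
               excess -= flow_rate           # rates cover the excess
           return terminated

For example, with U=1.5 and a reported sustainable rate of 80 Mbps,
flows towards that egress would be terminated only if the measured rate
exceeded 120 Mbps; with U=1 the comparison is made against the reported
sustainable rate itself.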
2504  10.  Security Considerations

2506     TBD

2508  11.  References

2510  11.1.  Normative References

2512  [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
2513             Requirement Levels", BCP 14, RFC 2119, March 1997.

2515  11.2.  Informative References

2517  [I-D.babiarz-pcn-3sm]
2518             Babiarz, J., "Three State PCN Marking",
2519             draft-babiarz-pcn-3sm-00 (work in progress), July 2007.

2521  [I-D.briscoe-tsvwg-cl-architecture]
2522             Briscoe, B., "An edge-to-edge Deployment Model for Pre-
2523             Congestion Notification: Admission Control over a
2524             DiffServ Region", draft-briscoe-tsvwg-cl-architecture-04
2525             (work in progress), October 2006.

2527  [I-D.briscoe-tsvwg-cl-phb]
2528             Briscoe, B., "Pre-Congestion Notification marking",
2529             draft-briscoe-tsvwg-cl-phb-03 (work in progress),
2530             October 2006.

2532  [I-D.briscoe-tsvwg-re-ecn-border-cheat]
2533             Briscoe, B., "Emulating Border Flow Policing using Re-ECN
2534             on Bulk Data", draft-briscoe-tsvwg-re-ecn-border-cheat-01
2535             (work in progress), June 2006.

2537  [I-D.briscoe-tsvwg-re-ecn-tcp]
2538             Briscoe, B., "Re-ECN: Adding Accountability for Causing
2539             Congestion to TCP/IP", draft-briscoe-tsvwg-re-ecn-tcp-04
2540             (work in progress), July 2007.

2542  [I-D.davie-ecn-mpls]
2543             Davie, B., "Explicit Congestion Marking in MPLS",
2544             draft-davie-ecn-mpls-01 (work in progress), October 2006.

2546  [I-D.eardley-pcn-architecture]
2547             Eardley, P., "Pre-Congestion Notification Architecture",
2548             draft-eardley-pcn-architecture-00 (work in progress),
2549             June 2007.

2551  [I-D.lefaucheur-emergency-rsvp]
2552             Le Faucheur, F., "RSVP Extensions for Emergency Services",
2553             draft-lefaucheur-emergency-rsvp-02 (work in progress),
2554             June 2006.

2556  [I-D.westberg-pcn-load-control]
2557             Westberg, L., "LC-PCN: The Load Control PCN Solution",
2558             draft-westberg-pcn-load-control-02 (work in progress),
2559             November 2007.

2561  [I-D.zhang-pcn-performance-evaluation]
2562             Zhang, X., "Performance Evaluation of CL-PHB Admission and
2563             Termination Algorithms",
2564             draft-zhang-pcn-performance-evaluation-02 (work in
2565             progress), July 2007.

2567  11.3.  Other References

2569  [Jamin]    "A Measurement-based Admission Control Algorithm for
2570             Integrated Services Packet Networks", 1997.

2572  [Menth]    "PCN-Based Resilient Network Admission Control: The Impact
2573             of a Single Bit", 2007.
2575 Authors' Addresses 2577 Anna Charny 2578 Cisco Systems, Inc. 2579 1414 Mass. Ave. 2580 Boxborough, MA 01719 2581 USA 2583 Email: acharny@cisco.com 2584 Xinyang (Joy) Zhang 2585 Cisco Systems, Inc. and Cornell University 2586 1414 Mass. Ave. 2587 Boxborough, MA 01719 2588 USA 2590 Email: joyzhang@cisco.com 2592 Francois Le Faucheur 2593 Cisco Systems, Inc. 2594 Village d'Entreprise Green Side - Batiment T3 , 2595 400 Avenue de Roumanille, 06410 Biot Sophia-Antipolis, 2596 France 2598 Email: flefauch@cisco.com 2600 Vassilis Liatsos 2601 Cisco Systems, Inc. 2602 1414 Mass. Ave. 2603 Boxborough, MA 01719 2604 USA 2606 Email: vliatsos@cisco.com 2608 Full Copyright Statement 2610 Copyright (C) The IETF Trust (2007). 2612 This document is subject to the rights, licenses and restrictions 2613 contained in BCP 78, and except as set forth therein, the authors 2614 retain all their rights. 2616 This document and the information contained herein are provided on an 2617 "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS 2618 OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND 2619 THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS 2620 OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF 2621 THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED 2622 WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. 2624 Intellectual Property 2626 The IETF takes no position regarding the validity or scope of any 2627 Intellectual Property Rights or other rights that might be claimed to 2628 pertain to the implementation or use of the technology described in 2629 this document or the extent to which any license under such rights 2630 might or might not be available; nor does it represent that it has 2631 made any independent effort to identify any such rights. Information 2632 on the procedures with respect to rights in RFC documents can be 2633 found in BCP 78 and BCP 79. 2635 Copies of IPR disclosures made to the IETF Secretariat and any 2636 assurances of licenses to be made available, or the result of an 2637 attempt made to obtain a general license or permission for the use of 2638 such proprietary rights by implementers or users of this 2639 specification can be obtained from the IETF on-line IPR repository at 2640 http://www.ietf.org/ipr. 2642 The IETF invites any interested party to bring to its attention any 2643 copyrights, patents or patent applications, or other proprietary 2644 rights that may cover technology that may be required to implement 2645 this standard. Please address the information to the IETF at 2646 ietf-ipr@ietf.org. 2648 Acknowledgment 2650 Funding for the RFC Editor function is provided by the IETF 2651 Administrative Support Activity (IASA).