2 Network Working Group A. Charny 3 Internet-Draft Cisco Systems, Inc. 4 Intended status: Standards Track J. Zhang 5 Expires: January 10, 2008 Cisco Systems, Inc. and Cornell 6 University 7 F. Le Faucheur 8 V. Liatsos 9 Cisco Systems, Inc.
10 July 9, 2007 12 Pre-Congestion Notification Using Single Marking for Admission and 13 Termination 14 draft-charny-pcn-single-marking-02.txt 16 Status of this Memo 18 By submitting this Internet-Draft, each author represents that any 19 applicable patent or other IPR claims of which he or she is aware 20 have been or will be disclosed, and any of which he or she becomes 21 aware will be disclosed, in accordance with Section 6 of BCP 79. 23 Internet-Drafts are working documents of the Internet Engineering 24 Task Force (IETF), its areas, and its working groups. Note that 25 other groups may also distribute working documents as Internet- 26 Drafts. 28 Internet-Drafts are draft documents valid for a maximum of six months 29 and may be updated, replaced, or obsoleted by other documents at any 30 time. It is inappropriate to use Internet-Drafts as reference 31 material or to cite them other than as "work in progress." 33 The list of current Internet-Drafts can be accessed at 34 http://www.ietf.org/ietf/1id-abstracts.txt. 36 The list of Internet-Draft Shadow Directories can be accessed at 37 http://www.ietf.org/shadow.html. 39 This Internet-Draft will expire on January 10, 2008. 41 Copyright Notice 43 Copyright (C) The IETF Trust (2007). 45 Abstract 47 The Pre-Congestion Notification approach, described in 48 [I-D.eardley-pcn-architecture] and earlier in 50 [I-D.briscoe-tsvwg-cl-architecture], proposes the use of an 51 Admission Control mechanism to limit the amount of real-time PCN 52 traffic to a configured level during normal operating conditions, 53 and the use of a Flow Termination mechanism to tear down some of the 54 flows to bring the PCN traffic level down to a desirable amount 55 during unexpected events such as network failures, with the goal of 56 maintaining the QoS assurances to the remaining flows.
In 57 [I-D.eardley-pcn-architecture], Admission and Flow Termination use 58 two different markings and two different metering mechanisms in the 59 internal nodes of the PCN region. This draft proposes a mechanism 60 using a single marking and metering for both Admission and Flow 61 Termination, and presents a preliminary analysis of the tradeoffs. A 62 side-effect of this proposal is that an Admission mechanism with 63 different marking and metering than that proposed in 64 [I-D.eardley-pcn-architecture] may also be feasible, and may result 65 in a number of benefits. In addition, this draft proposes a 66 migration path for incremental deployment of this approach as an 67 intermediate step to the dual-marking approach. 69 Requirements Language 71 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 72 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 73 document are to be interpreted as described in RFC 2119 [RFC2119]. 75 Table of Contents 77 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . 5 78 1.1. Changes from -01 version . . . . . . . . . . . . . . . . . 5 79 1.2. Terminology . . . . . . . . . . . . . . . . . . . . . . . 5 80 1.3. Background and Motivation . . . . . . . . . . . . . . . . 5 81 2. The Single Marking Approach . . . . . . . . . . . . . . . . . 7 82 2.1. High Level description . . . . . . . . . . . . . . . . . . 7 83 2.2. Operation at the PCN-interior-node . . . . . . . . . . . . 8 84 2.3. Operation at the PCN-egress-node . . . . . . . . . . . . . 8 85 2.4. Operation at the PCN-ingress-node . . . . . . . . . . . . 8 86 2.4.1. Admission Decision . . . . . . . . . . . . . . . . . . 8 87 2.4.2. Flow Termination Decision . . . . . . . . . . . . . . 9 88 3. Benefits of Allowing the Single Marking Approach . . . . . . . 10 89 4. Impact on PCN Architectural Framework . . . . . . . . . . . . 10 90 4.1. Impact on the PCN-Internal-Node . . . . . . . . . . . . . 11 91 4.2. Impact on the PCN-boundary nodes . . . . .
. . . . . . . . 11 92 4.2.1. Impact on PCN-Egress-Node . . . . . . . . . . . . . . 11 93 4.2.2. Impact on the PCN-Ingress-Node . . . . . . . . . . . . 12 94 4.3. Summary of Proposed Enhancements Required for Support 95 of Single Marking Options . . . . . . . . . . . . . . . . 13 96 4.4. Proposed Optional Renaming of the Marking and Marking 97 Thresholds . . . . . . . . . . . . . . . . . . . . . . . . 13 98 4.5. An Optimization Using a Single Configuration Parameter 99 for Single Marking . . . . . . . . . . . . . . . . . . . . 14 100 5. Incremental Deployment Considerations . . . . . . . . . . . . 14 101 6. Tradeoffs, Issues and Limitations of Single Marking 102 Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 103 6.1. Restrictions on Termination-to-admission Thresholds . . . 16 104 6.2. Assumptions on Loss . . . . . . . . . . . . . . . . . . . 16 105 6.3. Effect of Reaction Timescale of Admission Mechanism . . . 16 106 6.4. Performance Implications and Tradeoffs . . . . . . . . . . 16 107 6.5. Effect on Proposed Anti-Cheating Mechanisms . . . . . . . 17 108 6.6. ECMP Handling . . . . . . . . . . . . . . . . . . . . . . 17 109 6.7. Traffic Engineering Considerations . . . . . . . . . . . . 18 110 7. Performance Evaluation Comparison . . . . . . . . . . . . . . 22 111 7.1. Relationship to other drafts . . . . . . . . . . . . . . . 22 112 7.2. Limitations, Conclusions and Direction for Future Work . . 22 113 7.2.1. High Level Conclusions . . . . . . . . . . . . . . . . 22 114 7.2.2. Future work . . . . . . . . . . . . . . . . . . . . . 23 115 8. Appendix A: Simulation Details . . . . . . . . . . . . . . . 23 116 8.1. Network and Signaling Models . . . . . . . . . . . . . . . 23 117 8.2. Traffic Models . . . . . . . . . . . . . . . . . . . . . . 26 118 8.2.1. Voice Traffic Models . . . . . . . . . . . . . . . . . 26 119 8.2.2. "Synthetic Video": High Rate ON-OFF traffic with 120 Video-like Mean and Peak Rates ("SVD") . . . . . . . 27 121 8.2.3. 
Real Video Traces (VTR) . . . . . . . . . . . . . . . 28 122 8.2.4. Randomization of Base Traffic Models . . . . . . . . . 29 124 8.3. Parameter Settings . . . . . . . . . . . . . . . . . . . . 29 125 8.3.1. Queue-based settings . . . . . . . . . . . . . . . . . 29 126 8.3.2. Token Bucket Settings . . . . . . . . . . . . . . . . 29 127 8.4. Simulation Details . . . . . . . . . . . . . . . . . . . . 30 128 8.4.1. Sensitivity to EWMA weight and CLE . . . . . . . . . . 30 129 8.4.2. Effect of Ingress-Egress Aggregation . . . . . . . . . 32 130 8.4.3. Effect of Multiple Bottlenecks . . . . . . . . . . . . 38 131 9. Security Considerations . . . . . . . . . . . . . . . . . . . 42 132 10. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 42 133 11. References . . . . . . . . . . . . . . . . . . . . . . . . . . 42 134 11.1. Normative References . . . . . . . . . . . . . . . . . . . 42 135 11.2. Informative References . . . . . . . . . . . . . . . . . . 42 136 11.3. References . . . . . . . . . . . . . . . . . . . . . . . . 43 137 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . . 43 138 Intellectual Property and Copyright Statements . . . . . . . . . . 45 140 1. Introduction 142 1.1. Changes from -01 version 144 o Added miscellaneous clarifications based on comments received on 145 version -01 147 o Removed Terminology section and replaced it with a pointer to 148 [I-D.eardley-pcn-architecture]. 150 o Added a section on standards implications and considerations for 151 incremental deployment 153 o Added a section on ECMP handling 155 o Added a section on traffic engineering considerations and 156 tradeoffs. 158 o Updated the Appendix to include new results and consolidate some 159 of the old ones 161 1.2. Terminology 163 This draft uses the terminology defined in 164 [I-D.eardley-pcn-architecture]. 166 1.3.
Background and Motivation 168 The Pre-Congestion Notification approach 169 [I-D.eardley-pcn-architecture] proposes to use an Admission Control 170 mechanism to limit the amount of real-time PCN traffic to a 171 configured level during normal operating conditions, and to use a 172 Flow Termination mechanism to tear down some of the flows to bring 173 the PCN traffic level down to a desirable amount during unexpected 174 events such as network failures, with the goal of maintaining the QoS 175 assurances to the remaining flows. In [I-D.eardley-pcn-architecture], 176 Admission and Flow Termination use two different markings and two 177 different metering mechanisms in the internal nodes of the PCN 178 region. Admission Control algorithms for variable-rate real-time 179 traffic such as video have traditionally been based on the 180 observation of the queue length, and hence re-using these techniques 181 and ideas in the context of pre-congestion notification is highly 182 attractive, and motivated the virtual-queue-based marking and 183 metering approach specified in [I-D.briscoe-tsvwg-cl-architecture] 184 for Admission. On the other hand, for Flow Termination, it is 185 desirable to know how many flows need to be terminated, and that in 186 turn motivates rate-based Flow Termination metering. This provides 187 some motivation for employing different metering algorithms for 188 Admission and for Flow Termination. 189 Furthermore, it is frequently desirable to trigger Flow Termination 190 at a substantially higher traffic level than the level at which no 191 new flows are to be admitted. There are multiple reasons for the 192 requirement to enforce a different configured-admissible-rate and 193 configured-termination-rate. These include, for example: 195 o End-users are typically more annoyed by their established call 196 dying than by getting a busy tone at call establishment.
Hence 197 decisions to terminate flows may need to be done at a higher load 198 level than the decision to stop admitting. 200 o There are often very tight (possibly legal) obligations on network 201 operators to not drop established calls. 203 o Voice Call Routing often has the ability to route/establish the 204 call on another network (e.g., PSTN) if it is determined at call 205 establishment that one network (e.g., packet network) can not 206 accept the call. Therefore, not admitting a call on the packet 207 network at initial establishment may not impact the end-user. In 208 contrast, it is usually not possible to reroute an established 209 call onto another network mid-call. This means that call 210 Termination can not be hidden to the end-user. 212 o Flow Termination is typically useful in failure situations where 213 some loads get rerouted thereby increasing the load on remaining 214 links. Because the failure may only be temporary, the operator 215 may be ready to tolerate a small degradation during the interim 216 failure period. This also argues for a higher configured- 217 termination-rate than configured-admissible-rate 219 o A congestion notification based Admission scheme has some inherent 220 inaccuracies because of its reactive nature and thus may 221 potentially over admit in some situations (such as burst of calls 222 arrival). If the Flow Termination scheme reacted at the same rate 223 threshold as the Admission , calls may get routinely dropped after 224 establishment because of over admission, even under steady state 225 conditions. 227 These considerations argue for metering for Admission and Flow 228 Termination at different traffic levels and hence, implicitly, for 229 different markings and metering schemes. 231 Different marking schemes require different codepoints. 
Thus, such 232 separate markings consume valuable real-estate in the packet header, 233 which is especially scarce in the case of MPLS Pre-Congestion 234 Notification [I-D.davie-ecn-mpls]. Furthermore, two different 235 metering techniques involve additional complexity in the data path of 236 the internal routers of the PCN-domain. 238 To this end, [I-D.briscoe-tsvwg-cl-architecture] proposes an 239 approach, referred to as "implicit Preemption marking" in that draft, 240 that does not require separate termination-marking. However, it does 241 require two separate measurement schemes: one measurement for 242 Admission and another measurement for Flow Termination. Furthermore, 243 this approach mandates that the configured-termination-rate be equal 244 to a drop rate. This approach effectively uses dropping as the way 245 to convey information about how much traffic can "fit" under the 246 configured-termination-rate, instead of using a separate termination 247 marking. This is a significant restriction in that it results in 248 flow termination only taking effect once packets actually get 249 dropped. 251 This document presents an approach that allows the use of a single 252 PCN marking and a single metering technique at the internal devices 253 without requiring that the dropping and flow termination thresholds 254 be the same. We also argue that this approach can be used as an 255 intermediate step in the implementation and deployment of a 256 full-fledged dual-marking PCN implementation. 258 2. The Single Marking Approach 260 2.1. High Level description 262 The proposed approach is based on several simple ideas: 264 o Replace virtual-queue-based marking for Admission Control by 265 excess rate marking: 267 * meter traffic exceeding the configured-admissible-rate and mark 268 *excess* traffic (e.g.
using a token bucket with the rate 269 configured equal to the configured-admissible-rate) 271 * at the PCN-boundary-node, stop admitting traffic when the 272 fraction of marked traffic for a given edge-to-edge aggregate 273 exceeds a configured threshold (e.g. stop admitting when 3% of 274 all traffic in the edge-to-edge aggregate received at the 275 ingress is marked) 277 o Impose a PCN-domain-wide constraint on the ratio U between the 278 configured-admissible-rate on a link and the level of the PCN load 279 on the link at which Flow Termination needs to be triggered (but do 280 not explicitly configure configured-termination-rate). For 281 example, one might impose a policy that Flow Termination is 282 triggered when PCN traffic exceeds 120% of the configured- 283 admissible-rate on any link of the PCN-domain. 285 The remainder of this section describes the possible operation 286 of the system. 288 2.2. Operation at the PCN-interior-node 290 The PCN-interior-node meters the aggregate PCN traffic and marks the 291 excess rate. A number of implementations are possible to achieve 292 that. A token bucket implementation is particularly attractive 293 because of its relative simplicity, and even more so because a token 294 bucket implementation is readily available in the vast majority of 295 existing equipment. The rate of the token bucket is configured to 296 correspond to the configured-admissible-rate, and the depth of the 297 token bucket can be configured by an operator based on the desired 298 tolerance to PCN traffic burstiness. 300 Note that no configured-termination-rate is explicitly configured at 301 the PCN-interior-node, and the PCN-interior-node does nothing at all 302 to enforce it. All marking is based on the single configured rate 303 threshold (configured-admissible-rate). 305 2.3.
Operation at the PCN-egress-node 307 The PCN-egress-node measures the rate of both marked and unmarked 308 traffic on a per-ingress basis, and reports to the PCN-ingress-node 309 two values: the rate of unmarked traffic from this ingress node, 310 which we term the Sustainable Admission Rate (SAR), and the Congestion 311 Level Estimate (CLE), which is the fraction of the marked traffic 312 received from this ingress node. Note that the Sustainable Admission 313 Rate is analogous to the sustainable Preemption rate of 314 [I-D.briscoe-tsvwg-cl-architecture], except that in this case it is 315 based on the configured-admissible-rate rather than the termination 316 threshold, while the CLE is exactly the same as that of 317 [I-D.briscoe-tsvwg-cl-architecture]. The details of the rate 318 measurement are outside the scope of this draft. 320 2.4. Operation at the PCN-ingress-node 322 2.4.1. Admission Decision 324 Just as in [I-D.briscoe-tsvwg-cl-architecture], the admission 325 decision is based on the CLE. The ingress node stops admission of 326 new flows if the CLE is above a pre-defined threshold (e.g. 3%). 327 Note that although the logic of the decision is exactly the same as 328 in the case of [I-D.briscoe-tsvwg-cl-architecture], the detailed 329 semantics of the marking is different. This is because the marking 330 used for admission in this proposal reflects the excess rate over the 331 configured-admissible-rate, while in 332 [I-D.briscoe-tsvwg-cl-architecture], the marking is based on 333 exceeding a virtual queue threshold. Notably, in the current 334 proposal, if the average sustained rate of admitted traffic is 5% 335 over the admission threshold, then 5% of the traffic is expected to 336 be marked, whereas in the context of 337 [I-D.briscoe-tsvwg-cl-architecture] a steady 5% overload should 338 eventually result in 100% of all traffic being admission marked.
A 339 consequence of this is that for "smooth" constant-rate traffic, the 340 approach presented here will not mark any traffic at all until the 341 rate of the traffic exceeds the configured admission threshold by the 342 amount corresponding to the chosen CLE threshold. 344 At first glance this may seem to result in a violation of the pre- 345 congestion notification premise of stopping admission before the 346 desired traffic level is reached. However, in reality one 347 can simply embed the CLE level into the desired configuration of the 348 admission threshold. That is, if a certain rate X is the actual 349 target admission threshold, then one should configure the rate of the 350 metering device (e.g. the rate of the token bucket) to X-y, where y 351 corresponds to the level of CLE that would trigger an admission- 352 blocking decision. 354 A more important distinction is that virtual-queue based marking 355 reacts to short-term burstiness of traffic, while the excess-rate 356 based marking is only capable of reacting to rate violations at the 357 timescale chosen for rate measurement. Based on our investigation, 358 it seems that this distinction is not crucial in the context of PCN 359 when no actual queuing is expected even if the virtual queue is full. 360 More discussion on this is presented later in the draft. 362 2.4.2. Flow Termination Decision 364 When the ingress observes a non-zero CLE and Sustainable Admission 365 Rate (SAR), it first computes the Sustainable Termination Rate (STR) 366 by simply multiplying SAR by the system-wide constant U, where U is 367 the system-wide ratio between Preemption and admission thresholds on 368 all links in the PCN domain: STR = SAR*U.
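As an illustration, the termination computation just described (STR = SAR*U, followed by terminating enough flows to bring the ingress-to-egress rate under STR) can be sketched as follows. This is a hedged sketch only: the function and variable names are our own and are not taken from any PCN specification.

```python
def flows_to_terminate(sar_bps, u_ratio, measured_rate_bps, flow_rates_bps):
    """Sketch of the PCN-ingress-node Flow Termination decision.

    sar_bps:           Sustainable Admission Rate reported by the egress
    u_ratio:           system-wide termination-to-admission ratio U (>= 1)
    measured_rate_bps: rate this ingress currently sends toward that egress
    flow_rates_bps:    dict of per-flow rates, used to pick flows to stop

    Returns the Sustainable Termination Rate and a list of flows to
    terminate so that the remaining aggregate does not exceed it.
    """
    str_bps = sar_bps * u_ratio              # STR = SAR * U
    excess_bps = measured_rate_bps - str_bps
    victims = []
    # Terminate the largest flows first until the excess is removed; a
    # real implementation might instead apply safety factors or
    # hysteresis to slow the termination process down.
    for flow_id, rate in sorted(flow_rates_bps.items(), key=lambda kv: -kv[1]):
        if excess_bps <= 0:
            break
        victims.append(flow_id)
        excess_bps -= rate
    return str_bps, victims
```

For example, with SAR = 100 Mb/s, U = 1.2, and a measured rate of 150 Mb/s, STR is 120 Mb/s, so flows totalling at least 30 Mb/s would be selected for termination.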
The PCN-ingress-node then 369 performs exactly the same operation as is proposed in 370 [I-D.briscoe-tsvwg-cl-architecture] with respect to STR: it preempts 371 the appropriate number of flows to ensure that the rate of traffic it 372 sends to the corresponding egress node does not exceed STR. Just as 373 in the case of [I-D.briscoe-tsvwg-cl-architecture], an implementation 374 may decide to slow down the termination process, preempting fewer 375 flows than is necessary to cap its traffic at STR, by employing a 376 variety of techniques such as safety factors or hysteresis. In 377 summary, the operation of Termination at the ingress node is 378 identical to that of [I-D.briscoe-tsvwg-cl-architecture], with the 379 sole exception that the sustainable Termination rate is computed from 380 the sustainable admission rate rather than derived from a separate 381 marking. As discussed earlier, this is enabled by imposing a system- 382 wide restriction on the termination-to-admission thresholds ratio and 383 changing the semantics of the admission marking. 385 3. Benefits of Allowing the Single Marking Approach 387 The following is a summary of benefits associated with enabling the 388 Single Marking Approach. Some tradeoffs will be discussed in section 389 6 below. 391 o Reduced implementation requirements on core routers due to a 392 single metering implementation instead of two different ones. 394 o Ease of use on existing hardware: given that the proposed approach 395 is particularly amenable to a token bucket implementation, the 396 availability of token buckets on virtually all commercially 397 available routers makes this approach especially attractive. 399 o Enabling incremental implementation and deployment of PCN (see 400 section 5). 402 o Reduced number of codepoints which need to be conveyed in the 403 packet header.
If the PCN-bits used in the packet header to 404 convey the congestion notification information are the ECN-bits in 405 an IP core and the EXP-bits in an MPLS core, these are very 406 expensive real-estate. The current proposals need 5 codepoints; 407 saving one is especially important in the context of MPLS, where 408 there is only a total of 8 EXP codepoints, which must also be shared 409 with DiffServ. Eliminating one codepoint helps considerably. 411 o The possibility of using a token-bucket-based, excess-rate-based 412 implementation for admission provides extra flexibility for the 413 choice of an admission mechanism, even if two separate markings 414 and thresholds are used. 416 Subsequent sections argue that these benefits can be achieved with 417 relatively minor enhancements to the proposed PCN architecture as 418 defined in [I-D.eardley-pcn-architecture], allowing simpler 419 implementations at the PCN-interior nodes and requiring only trivial 420 modifications at the PCN-boundary nodes. 422 4. Impact on PCN Architectural Framework 424 The goal of this section is to propose several minor changes to the 425 PCN architecture framework as currently described in 426 [I-D.eardley-pcn-architecture] in order to enable the single marking 427 approach. 429 4.1. Impact on the PCN-Internal-Node 431 No changes are required to the PCN-internal-node in the architectural 432 framework in [I-D.eardley-pcn-architecture] in order to support the 433 Single Marking Proposal. The current architecture 434 [I-D.eardley-pcn-architecture] already allows only one marking and 435 metering scheme rather than two by supporting either "admission only" 436 or "termination only" functionality. To support the Single Marking 437 proposal, a single threshold (i.e. Configured-termination-rate) must 438 be configured at the PCN-internal-node, and excess-rate marking 439 should be used to mark packets as described in 440 [I-D.briscoe-tsvwg-cl-architecture].
Note however that the meaning 441 of this single threshold and the marking in this case is related not 442 to the termination function, but rather to the admission function. 443 The configuration parameter(s) described in section 4.2 below at the 444 PCN-ingress-nodes and PCN-egress-node will determine whether the 445 marking should be interpreted as the admission-marking (as 446 appropriate for the Single Marking approach) or as termination- 447 marking (as appropriate for the Dual Marking approach of 448 [I-D.briscoe-tsvwg-cl-architecture]). 450 We note that from the implementation standpoint, a PCN-ingress-node 451 supporting Single Marking implements only a subset of the 452 functionality needed for Dual Marking. 454 4.2. Impact on the PCN-boundary nodes 456 We propose the addition of one global configuration parameter 457 MARKING_MODE to be used at all PCN boundary nodes. If MARKING_MODE = 458 DUAL_MARKING, the behavior of the appropriate PCN-boundary-node is as 459 described in the current version of [I-D.eardley-pcn-architecture]. 460 If MARKING_MODE = SINGLE_MARKING, the behavior of the appropriate 461 boundary nodes is as described in the subsequent subsections. 463 4.2.1. Impact on PCN-Egress-Node 465 If MARKING_MODE=SINGLE_MARKING, the Congestion-Level-Estimate (CLE) 466 is measured against termination-marked packets. If 467 MARKING_MODE=DUAL_MARKING, the CLE is measured against 468 admission-marked packets. The method of measurement does not depend 469 on the choice of the marking against which the measurement is 470 performed. 472 Regardless of the setting of the MARKING_MODE parameter, Sustainable- 473 Aggregate-Rate is measured against termination-marked packets, as 474 currently defined in [I-D.briscoe-tsvwg-cl-architecture].
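For illustration, the egress-side measurement can be sketched as follows for the single-marking case. This is an illustrative sketch only; the function name and the byte-count interface are our own assumptions, not part of any PCN specification. In the dual-marking case, the same CLE computation would simply be fed the admission-marked byte count instead.

```python
def egress_report(marked_bytes, unmarked_bytes, interval_s):
    """Sketch of per-ingress measurement at a PCN-egress-node.

    marked_bytes:   bytes of PCN-marked traffic seen from one ingress
                    over the measurement interval
    unmarked_bytes: bytes of unmarked PCN traffic from the same ingress
    interval_s:     length of the measurement interval in seconds

    Returns (CLE, Sustainable-Aggregate-Rate in bits/s).
    """
    total = marked_bytes + unmarked_bytes
    # CLE: fraction of marked traffic in the edge-to-edge aggregate.
    cle = marked_bytes / total if total else 0.0
    # Sustainable rate: the rate of traffic that remained unmarked.
    sar_bps = unmarked_bytes * 8 / interval_s
    return cle, sar_bps
```

For example, with 3 MB marked and 97 MB unmarked over a one-second interval, the CLE is 0.03, i.e. exactly at a 3% admission-blocking threshold.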
476 We note that from the implementation point of view, the same two 477 functions (measuring the CLE and measuring the Sustainable-Aggregate- 478 Rate) are required by both the Single Marking approach and the Dual 479 Marking approach, so the difference in the implementation complexity 480 of the PCN-egress-node is quite negligible. 482 4.2.2. Impact on the PCN-Ingress-Node 484 If MARKING_MODE=DUAL_MARKING, the PCN-ingress-node behaves exactly as 485 described in [I-D.eardley-pcn-architecture]. If MARKING_MODE = 486 SINGLE_MARKING, then an additional global parameter U is defined. U 487 must be configured at all PCN-ingress-nodes and has the meaning of the 488 desired ratio between the traffic level at which termination should 489 occur and the desired admission threshold, as described in section 490 2.4 above. The value of U must be greater than or equal to 1. The 491 value of this constant U is used to multiply the Sustainable 492 Aggregate Rate received from a given PCN-egress-node to compute the 493 rate threshold used for flow termination decisions. 495 In more detail, if MARKING_MODE=SINGLE_MARKING, then 497 o A PCN-ingress-node receives CLE and/or Sustainable Aggregate Rate 498 from each PCN-egress-node it has traffic to. This is fully 499 compatible with PCN architecture as described in 500 [I-D.eardley-pcn-architecture]. 502 o A PCN-ingress-node bases its admission decisions on the value of 503 CLE. Specifically, once the value of CLE exceeds a configured 504 threshold, the PCN-ingress-node stops admitting new flows. It 505 restarts admitting when the CLE value drops below the 506 specified threshold. This is fully compatible with PCN 507 architecture as described in [I-D.eardley-pcn-architecture]. 509 o A PCN-ingress node receiving a Sustainable Aggregate Rate from a 510 particular PCN-egress node measures its traffic to that egress 511 node. This again is fully compatible with PCN architecture as 512 described in [I-D.eardley-pcn-architecture].
514 o The PCN-ingress-node computes the desired Termination Rate to a 515 particular PCN-egress-node by multiplying the Sustainable 516 Aggregate Rate from a given PCN-egress-node by the value of the 517 configuration parameter U. This computation step represents a 518 proposed change to the current version of 519 [I-D.eardley-pcn-architecture]. 521 o Once the Termination Rate is computed, it is used for the flow 522 termination decision in a manner fully compatible with 523 [I-D.eardley-pcn-architecture]. Namely, the PCN-ingress-node 524 compares the measured traffic rate destined to the given PCN- 525 egress-node with the computed Termination Rate for that egress 526 node, and terminates a set of traffic flows to remove the rate in 527 excess of that Termination Rate. This is fully compatible with 528 [I-D.eardley-pcn-architecture]. 530 We note that as in the case of the PCN-egress-node, the change in the 531 implementation of the PCN-ingress-node to support Single Marking is 532 quite negligible (a single multiplication per ingress rate 533 measurement interval for each egress node). 535 4.3. Summary of Proposed Enhancements Required for Support of Single 536 Marking Options 538 The enhancements to the PCN architecture as defined in 539 [I-D.eardley-pcn-architecture], in summary, amount to: 541 o Defining a global (within the PCN domain) configuration parameter 542 MARKING_MODE at PCN-boundary nodes 544 o Defining a global (within the PCN domain) configuration parameter 545 U at the PCN-ingress-nodes. This parameter signifies the implicit 546 ratio between the termination and admission thresholds at all 547 links 549 o Multiplication of Sustainable-Aggregate-Rate by the constant U at 550 the PCN-ingress-nodes if MARKING_MODE=SINGLE_MARKING 552 o Using the MARKING_MODE parameter to guide which marking is used to 553 measure the CLE at the PCN-egress-node (but the actual measurement 554 functionality is unchanged). 556 4.4.
Proposed Optional Renaming of the Marking and Marking Thresholds 558 Previous work on example mechanisms 559 [I-D.briscoe-tsvwg-cl-architecture] implementing the architecture of 560 [I-D.eardley-pcn-architecture] assumed that the semantics of 561 admission control marking and termination marking differ. 562 Specifically, it was assumed that for termination purposes the 563 semantics of the marking is related to the excess rate over the 564 configured (termination) rate, or even more precisely, the amount of 565 traffic that remains unmarked (sustainable rate) after the excess 566 traffic is marked. Some of the recent proposals assume yet other 567 marking semantics [I-D.babiarz-pcn-3sm], 568 [I-D.westberg-pcn-load-control]. 570 Even though a specific association between marking semantics and 571 function (admission vs termination) has been assumed in prior work, 572 it is important to note that in the current architecture draft 573 [I-D.eardley-pcn-architecture], the associations of specific marking 574 semantics (virtual queue vs excess rate) with specific functions 575 (admission vs termination) are actually *not* directly assumed. In 576 fact, the architecture document does not explicitly define the 577 marking mechanism, but rather states the existence of two different 578 marking mechanisms, and also allows implementation of either one or 579 both of these mechanisms in a PCN-domain. 581 We argue that this separation of the marking semantics from the 582 functional use of the marking is important to ensure that devices 583 supporting the same marking can interoperate in delivering the 584 function based on the supported marking semantics. 586 To explicitly divorce the function (admission vs termination) and the 587 semantics (excess rate marking, virtual queue marking), it may be 588 beneficial to rename the marking to be associated with the semantics 589 rather than the function.
Specifically, it may be beneficial to 590 rename the "admission-marking" and "termination-marking" currently 591 defined in the architecture to "Type Q" or "virtual-queue-based" 592 marking, and "Type R" or "excess-rate-based" marking, respectively. Of course, 593 other choices of naming are possible (including keeping the ones 594 currently used in [I-D.eardley-pcn-architecture]). 596 With this renaming, the dual marking approach in 597 [I-D.briscoe-tsvwg-cl-architecture] would require PCN-internal-nodes 598 to support both Type R and Type Q marking, while Single Marking would 599 require support of Type R marking only. 601 We conclude by emphasizing that the changes proposed here amount to 602 merely a renaming rather than a change to the proposed architecture, 603 and are therefore entirely optional. 605 4.5. An Optimization Using a Single Configuration Parameter for Single 606 Marking 608 We note finally that it is possible to use a single configuration 609 constant U instead of two constants (U and MARKING_MODE). 610 Specifically, one can simply interpret the value of U=1 as the dual- 611 marking approach (equivalent to MARKING_MODE=DUAL_MARKING) and use 612 U>1 to indicate Single Marking. 614 5. Incremental Deployment Considerations 616 As most of today's routers already implement a token bucket, 617 implementing token-bucket based excess-rate marking at PCN-interior 618 nodes is a relatively small incremental step for most existing 619 implementations. Implementing the additional metering and marking 620 scheme in the datapath required by the dual-marking approach without 621 encountering performance degradation is a larger step. The single- 622 marking approach may be used as an intermediate step towards the 623 deployment of a dual-marking approach in the sense that routers 624 implementing single-marking functionality only may be incrementally 625 deployed.
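The token-bucket excess-rate (Type R) marking discussed above can be sketched in a few lines. This is a minimal illustration only, not text from the proposal: the class name, the choice of bytes and seconds as units, and the bucket-depth parameter are our own assumptions.

```python
class ExcessRateMarker:
    """Sketch of Type R (excess-rate) marking: tokens accumulate at the
    configured rate; a packet that finds insufficient tokens is marked
    as excess traffic over that rate."""

    def __init__(self, rate_bps, depth_bytes):
        self.rate = rate_bps        # configured rate, bytes per second
        self.depth = depth_bytes    # bucket depth (burst tolerance), bytes
        self.tokens = depth_bytes   # bucket starts full
        self.last = 0.0             # time of the previous packet, seconds

    def mark(self, now, pkt_bytes):
        """Return True if this packet should carry the excess-rate mark."""
        self.tokens = min(self.depth,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= pkt_bytes:
            self.tokens -= pkt_bytes  # conforming: consume tokens, no mark
            return False
        return True                   # excess over the configured rate: mark
```

Offered twice the configured rate, such a marker marks roughly half of the traffic once the bucket drains, which is the property the excess-rate measurements discussed in this draft rely on.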
627 The deployment steps might be as follows: 629 o Initially all PCN-interior-nodes might implement excess-rate (Type 630 R) marking and metering only 632 o All PCN-boundary nodes implement the full functionality as 633 described in this document (including the configuration parameters 634 MARKING_MODE and U) from the start. Since the PCN-boundary-node 635 behavior is enabled by simply changing the values of the 636 configuration parameters, all boundary nodes become immediately 637 compatible with both dual-marking and single-marking. 639 o Initially all boundary nodes are configured with parameter settings 640 indicating the Single Marking option. 642 o When PCN-internal-nodes with dual-marking functionality replace 643 a subset of the PCN-internal-nodes, the virtual-queue-based (Type Q) 644 marking is simply ignored by the boundary nodes until all PCN- 645 internal-nodes in the PCN-domain implement the dual-marking 646 metering and marking. At that time the values of the configuration 647 parameters may be reset at all boundary nodes to indicate the 648 Dual Marking configuration. 650 o Note that if a subset of PCN-boundary-nodes communicates only with 651 each other, and all PCN-internal-nodes their traffic traverses 652 have been upgraded, this subset of nodes can be upgraded to the 653 dual-marking behavior while the rest of the PCN-domain can still 654 run the single marking case. This would entail configuring two 655 thresholds at the PCN-internal-nodes, and setting the values of the 656 configuration parameters appropriately in this subset. 658 o Finally, note that if the configuration parameter U is configured 659 per ingress-egress-pair rather than per boundary node, then each 660 ingress-egress pair can be upgraded to dual marking 661 individually. While we do not recommend that U be defined on a 662 per-ingress-egress-pair basis, this possibility should be noted and 663 considered. 665 6. Tradeoffs, Issues and Limitations of Single Marking Approach 667 6.1.
Restrictions on Termination-to-admission Thresholds 669 An obvious restriction necessary for the single-marking approach is 670 that the ratio of (implicit) termination and admission thresholds 671 remains the same on all links in the PCN region. While clearly a 672 limitation, this does not appear to be particularly crippling, and 673 does not appear to outweigh the benefits of reducing the overhead in 674 the router implementation and savings in codepoints in the case of a 675 single PCN domain, or in the case of multiple concatenated PCN 676 regions. The case when this limitation becomes more inconvenient is 677 when an operator wants to merge two previously separate PCN regions 678 (which may have different admission-to-termination ratios) into a 679 single PCN region. In this case it becomes necessary to do a 680 network-wide reconfiguration to align the settings. 682 The fixed ratio between the implicit termination rate and the 683 configured-admissible-rate also has implications for traffic 684 engineering considerations. These are discussed in section 7.7 685 below. 687 6.2. Assumptions on Loss 689 Just as in the case of [I-D.briscoe-tsvwg-cl-architecture], the 690 approach presented in this draft assumes that the configured- 691 admissible-rate is configured at each link below the service rate of 692 the traffic using PCN. This assumption is significant because the 693 algorithm relies on the fact that if the admission threshold is exceeded, 694 enough marked traffic reaches the PCN-egress-node to reach the 695 configured CLE level. If this condition does not hold, then traffic 696 may get dropped without ever triggering an admission decision. 698 6.3.
Effect of Reaction Timescale of Admission Mechanism 700 As mentioned earlier in this draft, there is a potential concern that 701 the slower reaction time of the admission mechanism presented in this draft, 702 compared to [I-D.briscoe-tsvwg-cl-architecture], may result in 703 overshoot when the load grows rapidly, and undershoot when the load 704 drops rapidly. While this is a theoretically valid concern, it 705 should be noted that, at least for the traffic and parameters used in 706 the simulation study reported here, there was no indication that this 707 was a problem. 709 6.4. Performance Implications and Tradeoffs 711 Replacement of a relatively well-studied queue-based measurement- 712 based admission control approach by a cruder excess-rate measurement 713 technique raises a number of algorithmic and performance concerns 714 that need to be carefully evaluated. For example, a token-bucket 715 excess rate measurement is expected to be substantially more 716 sensitive to traffic burstiness and parameter settings, which may have 717 a significant effect in the case of lower levels of traffic 718 aggregation, especially for variable-rate traffic such as video. In 719 addition, the appropriate timescale of rate measurement needs to be 720 carefully evaluated, and in general it depends on the degree of 721 expected traffic variability, which is frequently unknown. 723 In view of this, an initial performance comparison of the token- 724 bucket based measurement is presented in the following section. 725 Within the constraints of this study, the performance tradeoffs 726 observed between the queue-based technique suggested in 727 [I-D.briscoe-tsvwg-cl-architecture] and a simpler token-bucket-based 728 excess rate measurement do not appear to be a cause of substantial 729 concern for cases when traffic aggregation is reasonably high at the 730 bottleneck links as well as on a per-ingress-egress-pair basis.
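As a concrete illustration of the excess-rate-based admission mechanic, the egress-side Congestion-Level-Estimate (CLE) and the resulting admission decision can be sketched as follows. The function names, the packet representation, and the example CLE-threshold value are our own illustrative assumptions, not definitions from this draft.

```python
def congestion_level_estimate(packets):
    """Sketch of the egress-side CLE: the fraction of received octets
    that carry pre-congestion marking during one measurement interval.
    `packets` is a sequence of (size_bytes, is_marked) pairs."""
    total = 0
    marked = 0
    for size, is_marked in packets:
        total += size
        if is_marked:
            marked += size
    return marked / total if total else 0.0

def admit(cle, cle_threshold=0.05):
    """Admit a new flow only while the reported CLE stays below the
    CLE-threshold (0.05 here is purely illustrative)."""
    return cle < cle_threshold
```

The actual measurement functionality is the same whichever marking feeds it; only the choice of marking (Type Q vs Type R) differs between the two approaches.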
731 Details of the simulation study, as well as additional discussion of 732 its implications, are presented in section 6. 734 Also, one mitigating consideration in favor of the simpler mechanism 735 is that in a typical DiffServ environment, the real-time traffic is 736 expected to be served at a higher priority and/or the target 737 admission rate is expected to be substantially below the speed at 738 which the real-time queue is actually served. If these assumptions 739 hold, then there is some margin of safety for an admission control 740 algorithm, making the requirements for admission control more 741 forgiving of bounded errors - see additional discussion in section 6. 743 6.5. Effect on Proposed Anti-Cheating Mechanisms 745 Replacement of the queue-based admission control mechanism of 746 [I-D.briscoe-tsvwg-cl-architecture] by an excess-rate based admission 747 marking changes the semantics of the pre-congestion marking, and 748 consequently interferes with the mechanisms for cheating detection 749 discussed in [I-D.briscoe-tsvwg-re-ecn-border-cheat]. Implications 750 of excess-rate based marking on the anti-cheating mechanisms need to 751 be considered. 753 6.6. ECMP Handling 755 An issue not directly addressed by either the dual-marking approach 756 described in [I-D.briscoe-tsvwg-cl-architecture] or the single- 757 marking approach described in this draft is that if ECMP is enabled 758 in the PCN-domain, then the PCN-boundary-nodes do not have a way of 759 knowing whether specific flows in the ingress-egress aggregate 760 followed the same path or not. If multiple paths are followed, then 761 some of those paths may be experiencing pre-congestion marking, and 762 some may not be. Hence, for example, an ingress node may choose to 763 terminate a flow which takes an entirely uncongested path. This 764 will not only unnecessarily terminate some flows, but also will not 765 eliminate congestion on the actually congested path.
While 766 eventually, after several iterations, the correct number of flows 767 might be terminated on the congested path, this is clearly 768 suboptimal, as the termination takes longer, and many flows are 769 potentially terminated unnecessarily. 771 Two approaches for solving this problem were proposed in 772 [I-D.babiarz-pcn-3sm] and 774 [I-D.westberg-pcn-load-control]. The former handles ECMP by 775 terminating those flows that are termination-marked as soon as the 776 termination marking is seen. The latter uses an additional DiffServ 777 marking/codepoint to mark all packets of the flows passing through a 778 congestion point, with the PCN-boundary-nodes terminating only those 779 flows which are marked with this additional marking. Both of these 780 approaches also differ in the termination-marking semantics, but we 781 omit the discussion of these differences as they can be considered 782 largely independent of the ECMP issue. 784 It should be noted that although not proposed in this draft, either 785 of these ideas can be used with the dual- and single-marking approaches 786 discussed here. Specifically, when a PCN-ingress-node decides 787 which flows to terminate, it can choose for termination only those 788 flows that are termination-marked. Likewise, at the cost of an 789 additional (DiffServ) codepoint, a PCN-internal-node can mark all 790 packets of all flows using this additional marking, and then the PCN- 791 boundary-nodes can use this additional marking to guide their flow 792 termination decisions. 794 Either of these approaches appears to imply changes to the PCN 795 architecture as proposed in [I-D.eardley-pcn-architecture]. Such 796 changes have not been considered in this draft at this point. 798 6.7. Traffic Engineering Considerations 800 Dual-marking PCN can be viewed as a replacement for Resilient Network 801 Provisioning (RNP).
It is reasonable to expect that an operator 802 currently using DiffServ provisioning for real-time traffic might 803 consider a move to PCN. For such a move it is necessary to 804 understand how to set the PCN rate thresholds to make sure that the 805 move to PCN does not detrimentally affect the guarantees currently 806 offered by the operator. 808 The key question addressed in this section is how to set the PCN 809 admission and termination thresholds in the dual-marking approach, or 810 the single admission threshold and the scaling factor U reflecting 811 the implicit termination threshold in the single-marking approach, to 812 ensure that the result is "not worse than provisioning" in the amount 813 of traffic that can be admitted. More specifically, we will 814 address what tradeoffs, if any, arise between the dual-marking and 815 the single-marking approaches when answering this question. This 816 question was first raised in [Menth] and is further addressed below. 818 Typically, RNP would size the network (in this specific case for 819 traffic that is expected to use PCN) by making sure that capacity 820 available for this (PCN) type of traffic is sufficient under "normal" 821 circumstances (that is, under no-failure conditions, for a given 822 traffic matrix), and under a specific set of single failure scenarios 823 (e.g., failure of each individual link). Some of the obvious 824 limitations of such provisioning are that 826 o the traffic matrix is often not known well, and at times, 827 especially during flash-crowds, the actual traffic matrix can 828 differ substantially from the one assumed by provisioning 830 o unpredicted, non-planned failures can occur (e.g., multiple links, 831 nodes, etc.), causing overload. 833 It is specifically such unplanned cases that serve as the motivation 834 for PCN.
Yet, one may want to make sure that for cases that RNP can 835 (and does today) plan for, PCN does no worse when an operator makes 836 the decision to implement PCN on a currently provisioned network. 837 This question directly relates to the choice of the PCN configured 838 admission and termination thresholds. 840 For the dual-marking approach, where the termination and admission 841 thresholds are set independently on any link, one can address this 842 issue as follows [Menth]. If a provisioning tool is available, for a 843 given traffic matrix, one can determine the utilization of any link 844 used by traffic expected to use PCN under the no-failure condition, 845 and simply set the configured-admissible-rate to that "no-failure 846 utilization". Then a network using PCN will be able to admit as much 847 traffic as RNP, and will reject any traffic that exceeds the 848 expected traffic matrix. To address resiliency against a set of 849 planned failures, one can use RNP to find the worst-case utilization 850 of any link under the set of all provisioned failures, and then set 851 the configured-termination-rate to that worst-case utilization.
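The dual-marking threshold setting just described can be sketched as a small computation over per-link utilizations produced by a provisioning tool. This is an illustrative sketch, not part of [Menth] or of this draft; the function and variable names are our own, and the inputs are the no-failure utilization and the per-planned-failure utilizations, as in the text above.

```python
def dual_marking_thresholds(no_failure_util, failure_utils):
    """Sketch of the dual-marking setting: per link, admit up to the
    no-failure utilization and terminate above the worst utilization
    seen across the planned failures.

    no_failure_util: {link: PCN traffic rate with no failure}
    failure_utils:   list of {link: rate} maps, one per planned failure
    Returns {link: (configured_admissible_rate,
                    configured_termination_rate)}."""
    thresholds = {}
    for link, base in no_failure_util.items():
        worst = max(util.get(link, 0.0) for util in failure_utils)
        # keep the termination rate at or above the admission rate
        thresholds[link] = (base, max(base, worst))
    return thresholds
```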
853 Clearly, such a setting of the PCN thresholds with the dual-marking 854 approach will achieve the following goals: 856 o PCN will admit the same traffic matrix as used by RNP and will 857 protect it against all planned failures without terminating any 858 traffic 860 o When traffic deviates from the planned traffic matrix, PCN will 861 admit such traffic as long as the total usage of any link (without 862 failure) does not exceed the configured-admission threshold, and 863 all admitted traffic will be protected against all planned 864 failures 866 o Additional traffic will not be admitted under the normal, no- 867 failure conditions 869 o Traffic exceeding the configured-termination threshold during non- 870 planned failures will be terminated 872 o Under non-planned failures, some of the planned traffic matrix may 873 be terminated, but the remaining traffic will be able to receive 874 its QoS treatment. 876 The above argues that an operator moving from a purely provisioned 877 network to a PCN network can find settings of the PCN thresholds 878 with dual marking in such a way that all admitted traffic is 879 protected against all planned failures. 881 It is easy to see that with the single-marking scheme, the above 882 approach does not work directly [Menth]. Indeed, the ratio between 883 the configured-termination thresholds and the configured-admissible- 884 rate used by the dual-marking approach as described above may not be 885 constant on all links. Since the single-marking approach requires 886 the (implicit) termination rate to be within a fixed factor of the 887 configured admission rate, it can be argued (as was argued in 888 [Menth])
that one needs to set the system-wide ratio U between the 889 (implicit) termination threshold and the configured admission 890 threshold to correspond to the largest ratio between the worst-case 891 resilient utilization and the no-failure utilization of RNP, and set 892 the admission threshold on each link to the worst-case resilient 893 utilization divided by that system-wide ratio. Such an approach would 894 result in lower admission thresholds on some links than the 895 dual-marking setting of the admission threshold proposed above. It 896 can therefore be argued that PCN with single marking will be able to 897 admit *less* traffic that can be fully protected under the planned 898 set of failures than either RNP or the dual-marking approach. 900 However, the settings of the single-marking threshold proposed above 901 are not the only ones possible, and in fact we propose here that the 902 settings be chosen differently. Such different settings (described 903 below) will result in the following properties of the PCN network: 905 o PCN will admit the same traffic matrix as used by RNP *or more* 907 o The traffic matrix assumed by RNP will be fully protected against 908 all planned failures without terminating any admitted traffic 910 o When traffic deviates from the planned traffic matrix, PCN will 911 admit such traffic as long as the total usage of any link (without 912 failure) does not exceed the configured-admission threshold. 913 However, not all admitted traffic will be protected against all 914 planned failures (i.e., even under planned failures, traffic 915 exceeding the planned traffic matrix may be preempted) 917 o Under non-planned failures, some of the planned traffic matrix may 918 be terminated, but the remaining traffic will be able to receive 919 its QoS treatment.
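A hedged sketch of one such alternative setting follows: it derives the system-wide factor U from the *smallest* per-link ratio of worst-case resilient utilization to no-failure utilization, and then sets each link's configured-admissible-rate to its worst-case utilization divided by U. The function and variable names are our own; the inputs mirror those of the dual-marking discussion above.

```python
def single_marking_settings(no_failure_util, failure_utils):
    """Sketch: choose U as the smallest worst-case/no-failure ratio
    across links, then set each link's configured-admissible-rate so
    that the implicit termination rate (U times it) sits at the link's
    worst-case planned-failure utilization."""
    worst = {link: max(u.get(link, 0.0) for u in failure_utils)
             for link in no_failure_util}
    # system-wide ratio between implicit termination and admission thresholds
    U = min(worst[link] / no_failure_util[link] for link in no_failure_util)
    admissible = {link: worst[link] / U for link in no_failure_util}
    return U, admissible
```

Because U is the smallest ratio, each admission threshold is at least the no-failure utilization, so the planned matrix is always admitted; and the implicit termination threshold (U times the admission threshold) equals the worst-case planned utilization, so the planned matrix survives planned failures.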
921 It is easy to see that all of these properties can be achieved if, 922 instead of using the largest ratio of worst-case resilient 923 utilization to the no-failure utilization of RNP across all links for 924 setting the system-wide constant U in the single-marking approach as 925 proposed in [Menth], one would use the *smallest* ratio, and then set 926 the configured-admissible-rate to the worst-case resilient 927 utilization divided by that ratio. With such settings, the 928 configured-admission threshold on each link is at least as large as 929 the no-failure RNP utilization (and hence the planned traffic matrix 930 is always admitted), and the implicit termination threshold is at the 931 worst-case planned resilient utilization of RNP on each link (and 932 hence the planned traffic matrix will be fully protected against the 933 planned failures). Therefore, with such settings, the single-marking 934 approach does as well as RNP or dual marking with respect to the planned 935 matrix and planned failures. In fact, unlike the dual-marking 936 approach, it can admit more traffic on some links than the planned 937 traffic matrix would allow, but it is only guaranteed to protect up 938 to the planned traffic matrix under planned failures. 940 In summary, we have argued that both the single-marking approach and 941 the dual-marking approach can be configured to ensure that PCN "does 942 no worse" than RNP for the planned matrix and the planned failure 943 conditions (and both can do better than RNP under non-planned 944 conditions). The tradeoff between the two is that although the 945 planned traffic matrix can be admitted with protection guarantees 946 against planned failures with both approaches, the nature of the 947 guarantee for the admitted traffic is different.
Dual marking (with 948 the settings proposed) would protect all admitted traffic but would 949 not admit more than planned, while single marking (with the settings 950 proposed) will admit more traffic than planned, but will not 951 guarantee protection against planned failures for traffic exceeding 952 the planned utilization. 954 7. Performance Evaluation Comparison 956 7.1. Relationship to other drafts 958 Initial simulation results of the admission and termination mechanisms of 959 [I-D.briscoe-tsvwg-cl-architecture] were reported in 960 [I-D.briscoe-tsvwg-cl-phb]. A follow-up study of these mechanisms is 961 presented in a companion draft 962 [I-D.zhang-pcn-performance-evaluation]. The current draft 963 concentrates on a performance comparison of the admission control 964 mechanism of [I-D.briscoe-tsvwg-cl-phb] and the token-bucket-based 965 admission control described in section 2 of this draft. 967 7.2. Limitations, Conclusions and Directions for Future Work 969 Due to time constraints, the study performed so far was limited to a 970 small set of topologies, described in the Appendix. The key 971 questions that have been investigated are the comparative sensitivity 972 of the two schemes to parameter settings and the effect of traffic 973 burstiness and of the degree of aggregation on a per-ingress-egress- 974 pair basis on the performance of the admission control algorithms under 975 study. The study is limited to the case where there is no packet 976 loss. While this is a reasonable initial assumption for an admission 977 control algorithm that is supposed to maintain the traffic level 978 significantly below the service capacity of the corresponding queue, 979 future study is necessary to evaluate the effect of 980 packet loss. 982 7.2.1. High Level Conclusions 984 The results of this (preliminary) study indicate that there may be a 985 reasonable complexity/performance tradeoff for the choice of 986 admission control algorithm.
In turn, this suggests that using a 987 single codepoint and metering technique for admission and preemption 988 may be a viable option. 990 The key high-level conclusions of the simulation study comparing the 991 performance of queue-based and token-based admission control 992 algorithms are summarized below: 994 1. At reasonable levels of aggregation at the bottleneck and per 995 ingress-egress pair traffic, both algorithms perform reasonably 996 well for the range of traffic models considered. 998 2. Both schemes are stressed at low levels of ingress-egress 999 aggregation, especially for the burstier traffic models such as 1000 SVD. The token-bucket scheme is substantially more sensitive to 1001 parameter variations than the virtual-queue scheme at the very 1002 low levels of ingress-egress aggregation (1-2 flows per ingress- 1003 egress pair), and in general is quite brittle at these very low 1004 aggregation levels. It also displays substantial performance 1005 degradation with BATCH traffic, and is sensitive to CBR 1006 synchronization effects resulting in substantial over-admission 1007 (see section 8.4.2). Fortunately, these issues quickly diminish as 1008 the level of ingress-egress aggregation increases. 1010 3. The absolute value of round-trip time (RTT), or the RTT difference 1011 between different ingress-egress pairs within the range of 1012 continental propagation delays, does not appear to have a visible 1013 effect on the performance of either algorithm. 1015 4. For both schemes, multi-bottleneck topologies have no substantial 1016 effect on bottleneck utilization. Both schemes 1017 suffer substantial unfairness (and possibly complete starvation) 1018 of the long-haul aggregates traversing multiple bottlenecks 1019 compared to short-haul flows (a property shared by other MBAC 1020 algorithms as well). The token-bucket scheme displayed somewhat 1021 larger unfairness than the virtual-queue scheme. 1023 7.2.2.
Future work 1025 This study is but the first step in the performance evaluation of 1026 token-bucket based admission control. Further evaluation should 1027 cover a range of investigations, including the following: 1029 o interactions between admission control and termination 1031 o effect of signaling delays/probing 1033 o effect of loss of marked packets 1035 8. Appendix A: Simulation Details 1037 8.1. Network and Signaling Models 1039 Network topologies used in this study are shown in the figures below. 1040 The network is modeled as either a Single Link (Fig. A.1), a Multi Link 1041 Network with a single bottleneck (termed "RTT", Fig. A.2), or a range 1042 of multi-bottleneck topologies shown in Fig. A.3 (termed "Parking 1043 Lot").

   A --- B

   Figure A.1: Simulated Single Link Network.

   A
     \
   B - D - F
     /
   C

   Figure A.2: Simulated Multi Link Network.

   A--B--C    A--B--C--D    A--B--C--D--E--F
   |  |  |    |  |  |  |    |  |  |  |  |  |
   |  |  |    |  |  |  |    |  |  |  |  |  |
   D  E  F    E  F  G  H    G  H  I  J  K  L

     (a)         (b)              (c)

   Figure A.3: Simulated Multiple-bottleneck (Parking Lot) Topologies.

1067 Figure A.1 shows a single link between an ingress and an egress node; 1068 all flows enter at node A and depart at node B. This topology is used 1069 for the basic verification of the behavior of the algorithms with 1070 respect to a single ingress-egress aggregate in isolation. 1072 In Figure A.2, a set of ingresses (A, B, C) are connected to an 1073 interior node in the network (D). This topology is used to study the 1074 behavior of the algorithm where many ingress-egress aggregates share 1075 a single bottleneck link. The number of ingresses varied in 1076 different simulation experiments in the range of 2-100. All links 1077 have generally different propagation delays, in the range 1 ms - 100 1078 ms (although in some experiments all propagation delays are set to 1079 the same value). This node D in turn is connected to the egress (F).
In this 1080 topology, different sets of flows between each ingress and the egress 1081 converge on the single link D-F, where the pre-congestion notification 1082 algorithm is enabled. The capacities of the ingress links are not 1083 limiting, and hence no PCN is enabled on those links. The bottleneck link 1084 D-F is modeled with a 10 ms propagation delay in all simulations. 1085 Therefore the range of round-trip delays in the experiments is from 1086 22 ms to 220 ms. 1088 Another type of network of interest is the multi-bottleneck (or Parking 1089 Lot, PLT for short) topology. The simplest PLT, with 2 bottlenecks, is 1090 illustrated in Fig. A.3(a). An example traffic matrix on this topology 1091 is as follows: 1093 o an aggregate of "2-hop" flows entering the network at A and 1094 leaving at C (via the two links A-B and B-C) 1096 o an aggregate of "1-hop" flows entering the network at D and 1097 leaving at E (via A-B) 1099 o an aggregate of "1-hop" flows entering the network at E and 1100 leaving at F (via B-C) 1102 In the 2-hop PLT shown in Fig. A.3(a) the points of congestion are 1103 links A--B and B--C. The capacity of all other links is not limiting. 1104 We also experiment with larger PLT topologies with 3 bottlenecks (see 1105 Fig. A.3(b)) and 5 bottlenecks (Fig. A.3(c)). In all cases, we 1106 simulated one ingress-egress pair that carries the aggregate of 1107 "long" flows traversing all the N bottlenecks (where N is the number 1108 of bottleneck links in the PLT topology), and N ingress-egress pairs 1109 that carry flows traversing a single bottleneck link and exiting at 1110 the next "hop". In all cases, only the "horizontal" links in Fig. 1111 A.3 were the bottlenecks, with the capacities of all "vertical" links 1112 non-limiting. Propagation delays for all links in all PLT topologies 1113 are set to 1 ms. 1115 Due to time limitations, other possible traffic matrices (e.g.
some 1116 of the flows traversing a subset of several bottleneck links) have 1117 not yet been considered and remain an area for future investigation. 1119 Our simulations concentrated primarily on the range of capacities of 1120 'bottleneck' links with sufficient aggregation - above 10 Mbps for 1121 voice and 622 Mbps for SVD, up to 2.4 Gbps. But we also investigated 1122 slower 'bottleneck' links, down to 512 Kbps, in some experiments. 1123 Higher bottleneck speeds were not considered due to 1124 simulation time limitations. It should generally be expected that 1125 higher link speeds will result in higher levels of aggregation, 1126 and hence generally better performance of the measurement-based 1127 algorithms. Therefore it seems reasonable to believe that the link 1128 speeds studied do provide meaningful evaluation targets. 1142 In the simulation model, a call request arrives 1143 at the ingress and immediately sends a message to the egress. The 1144 message arrives at the egress after the propagation time plus link 1145 processing time (but no queuing delay). When the egress receives 1146 this message, it immediately responds to the ingress with the current 1147 Congestion-Level-Estimate. If the Congestion-Level-Estimate is below 1148 the specified CLE-threshold, the call is admitted; otherwise it is 1149 rejected. An admitted call sends packets according to one of the 1150 chosen traffic models for the duration of the call (see next section). For Flow Termination, once the ingress node of a PCN- 1151 domain decides to terminate a flow, that flow is preempted 1152 immediately and sends no more packets from that time on. The life of 1153 a flow outside the domain described above is not modeled. 1154 Propagation delay from source to the ingress and from destination to 1155 the egress is assumed negligible and is not modeled. 1156 8.2. Traffic Models 1158 Four types of traffic were simulated: CBR voice; on-off traffic 1159 approximating voice with silence compression; on-off traffic with 1160 higher peak and mean rates (we termed the latter "Synthetic Video" 1161 (SVD) as the chosen peak and mean rates are similar to those of an MPEG 1162 video stream, although no attempt was made to match any other 1163 parameters of this traffic to those of a video stream); and finally 1164 real video traces from 1165 http://www.tkn.tu-berlin.de/research/trace/trace.html (courtesy of the 1166 Telecommunication Networks Group of the Technical University of Berlin). 1168 The distribution of flow duration was chosen to be exponential 1169 with mean 1 min, regardless of the traffic type. In most 1170 of the experiments flows arrived according to a Poisson process 1171 with mean arrival rate chosen to achieve a desired amount of overload 1172 over the configured-admissible-rate in each experiment. Overloads in 1173 the range 1x to 5x and underload at 0.95x have been investigated. 1174 Note that the rationale for looking at loads of 1x and below is to see 1175 if any significant amount of "false rejects" would be seen (i.e., one 1176 would assume that all traffic should be accepted if the total demand 1177 is below the admission threshold).
For on-off traffic, on and off 1178 periods were exponentially distributed with the specified mean. 1179 Traffic parameters for each type are summarized below: 1181 8.2.1. Voice Traffic Models 1183 Table A.1 below describes all voice codecs we modeled in our 1184 simulations. 1186 The first two rows correspond to our two basic models, corresponding 1187 to the older G.711 encoding with and without silence compression. 1188 These two models are referred to simply as "CBR" and "VBR" in the 1189 reported simulation results. 1191 We also simulated several "mixes" of the different codecs reported in 1192 the table below. The primary mix consists of equal proportions of all 1193 voice codecs listed below. We also simulated various other mixes 1194 consisting of different proportions of subsets of these codecs, 1195 though those results are not reported in this draft due to their 1196 similarity to the primary mix results. 1198 ------------------------------------------------------------------- 1199 | Name/Codecs|Packet Size|Inter-Arrival|On/Off Period |Average Rate | 1200 | | (Bytes) | Time (ms) | Ratio | (kbps) | 1201 ------------------------------------------------------------------- 1202 | "CBR" | 160 | 20 | 1 | 64 | 1203 ------------------------------------------------------------------- 1204 | "VBR" | 160 | 20 | 0.34 | 21.75 | 1205 ------------------------------------------------------------------- 1206 | G.711 CBR | 200 | 20 | 1 | 80 | 1207 ------------------------------------------------------------------- 1208 | G.711 VBR | 200 | 20 | 0.4 | 32 | 1209 ------------------------------------------------------------------- 1210 | G.711 CBR | 120 | 10 | 1 | 96 | 1211 ------------------------------------------------------------------- 1212 | G.711 VBR | 120 | 10 | 0.4 | 38.4 | 1213 ------------------------------------------------------------------- 1214 | G.729 CBR | 60 | 20 | 1 | 24 | 1215 ------------------------------------------------------------------- 1216 | G.729
VBR | 60 | 20 | 0.4 | 9.6 | 1217 ------------------------------------------------------------------- 1218 Table A.1 Simulated Voice Codecs. 1220 8.2.2. "Synthetic Video": High Rate ON-OFF traffic with Video-like 1221 Mean and Peak Rates ("SVD") 1223 This model is on-off traffic with a video-like mean-to-peak ratio and 1224 a mean rate approximating that of an MPEG-2 video stream. No attempt 1225 is made to simulate any other aspects of a real video stream, and 1226 this model is merely that of on-off traffic. Although there is no 1227 claim that this model adequately represents the performance of video traffic 1228 under the algorithms in question, intuitively this model 1229 should be more challenging for a measurement-based algorithm than 1230 actual MPEG video, and as a result, "good" or "reasonable" 1231 performance on this traffic model indicates that MPEG traffic should 1232 perform at least as well. We term this type of traffic SVD for 1233 "Synthetic Video". 1235 o Long term average rate 4 Mbps 1237 o On Period mean duration 340ms; during the on-period the packets 1238 are sent at 12 Mbps (1500 byte packets, packet inter-arrival: 1ms) 1240 o Off Period mean duration 660ms 1242 8.2.3. Real Video Traces (VTR) 1244 We used a publicly available library of frame size traces of long 1245 MPEG-4 and H.263 encoded video obtained from 1246 http://www.tkn.tu-berlin.de/research/trace/trace.html. Each trace in 1247 that repository is roughly 60 minutes in length, consisting of a list 1248 of per-frame records. Among the 1249 160 available traces, we picked the two with the highest average rate 1250 (averaged over the trace length, in this case 60 minutes); in 1251 addition, the two have similar average rates. The trace file 1252 used in the simulation is the concatenation of the two.
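Since the traces record only per-frame sizes, each frame must be packetized into full-size packets at a fixed inter-arrival time, with a possibly shorter last packet, as described later in this section. A minimal sketch of that packetization (the function and parameter names here are ours, not the simulator's):

```python
def packetize_frame(frame_bytes, pkt_size=1500, pkt_gap_ms=1.0):
    """Split one video frame into a CBR burst of packets.

    Full-size packets are sent at a fixed inter-arrival time; when the
    frame size is not a multiple of pkt_size, the final packet is
    correspondingly shorter.  Returns (offset_ms, size_bytes) pairs
    relative to the start of the frame.
    """
    sizes = [pkt_size] * (frame_bytes // pkt_size)
    if frame_bytes % pkt_size:
        sizes.append(frame_bytes % pkt_size)
    return [(i * pkt_gap_ms, s) for i, s in enumerate(sizes)]
```

For example, a 3700-byte frame yields two 1500-byte packets followed by one 700-byte packet, 1 ms apart.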
1254 Since the duration of a flow in our simulation is much smaller than 1255 the length of the trace, we checked whether the expected rate of a flow 1256 corresponds to the trace's long term average. To do so, we simulated 1257 a number of flows starting from random locations in the trace, with 1258 duration chosen to be exponentially distributed with a mean of 1259 1 min. The results show that the expected rate of a flow is roughly the 1260 same as the trace's average. 1262 In summary, our simulations use a set of segments of the 120 min 1263 trace chosen at random offsets from the beginning and with a mean 1264 duration of 1 min. 1266 Since the traces provide only the frame size, we also simulated 1267 packetization of each frame as a CBR segment with packet size and 1268 inter-arrival time corresponding to those of our SVD model. Since 1269 the frame size is not always a multiple of the chosen packet size, 1270 the last packet in a frame may be shorter than the 1500 bytes chosen for 1271 the SVD encoding. 1273 Traffic characteristics for our VTR model are summarized below: 1275 o Average rate 769 Kbps 1277 o Each frame is sent with packet length 1500 bytes and packet inter- 1278 arrival time 1ms 1280 o No traffic is sent between frames. 1282 8.2.4. Randomization of Base Traffic Models 1284 To emulate some degree of network-introduced jitter, in some 1285 experiments we implemented limited randomization of the base models 1286 by randomly moving each packet by a small amount of time around its 1287 transmission time in the corresponding base traffic model. More 1288 specifically, for each packet we chose a random number R, 1289 picked from a uniform distribution over a "randomization-interval", and 1290 delayed the packet by R compared to its ideal departure time. We 1291 chose the randomization-interval to be a fraction of the packet-inter- 1292 arrival-time of the CBR portion of the corresponding base model.
To 1293 simulate a range of queueing delays, we varied this fraction from 1294 0.0001 to 0.1. While we do not claim this to be an adequate model 1295 for network-introduced jitter, we chose it for its simplicity of 1296 implementation, as a means to gain insight into any simulation artifacts 1297 of strictly CBR traffic generation. We implemented randomized 1298 versions of all 5 traffic streams (CBR, VBR, MIX, SVD and VTR) by 1299 randomizing the CBR portion of each model. 1301 8.3. Parameter Settings 1303 8.3.1. Queue-based settings 1305 All the queue-based simulations were run with the following Virtual 1306 Queue thresholds: 1308 o virtual-queue-rate: configured-admissible-rate, 1/2 link speed 1310 o min-marking-threshold: 5ms at virtual-queue-rate 1312 o max-marking-threshold: 15ms at virtual-queue-rate 1314 o virtual-queue-upper-limit: 20ms at virtual-queue-rate 1316 At the egress, the CLE is computed as an exponentially weighted moving 1317 average (EWMA) on an interval basis, with a 100ms measurement interval 1318 chosen in all simulations. We simulated EWMA weights ranging from 0.1 1319 to 0.9. The CLE threshold is chosen to be 0.05, 0.15, 0.25, and 0.5. 1321 8.3.2. Token Bucket Settings 1323 The token bucket rate is set to the configured-admissible-rate, which 1324 is half of the link speed in all experiments. The token bucket depth 1325 ranges from 64 to 512 packets. Our simulation results indicate that 1326 the depth of the token bucket has no significant impact on the performance of 1327 the algorithms; hence, in the rest of the section, we only present 1328 the results with a bucket depth of 256 packets. 1330 The CLE is calculated using an EWMA just as in the case of the virtual-queue 1331 settings, with weights from 0.1 to 0.9. The CLE thresholds are 1332 chosen to be 0.0001, 0.001, 0.01, 0.05 in this case.
Note that since the 1333 meaning of the CLE is different for the token bucket and queue- 1334 based algorithms, there is no direct correspondence between the 1335 choice of the CLE thresholds in the two cases. 1337 8.4. Simulation Details 1339 To evaluate the performance of the algorithms, we recorded the actual 1340 admitted load at a granularity of 50ms, from which the mean admitted 1341 load over the duration of the simulation run can be computed. We 1342 verified that the actual admitted load at any time does not deviate 1343 much from the mean admitted load in each experiment by computing the 1344 coefficient of variation (the CV is consistently 0.07 for CBR, 0.15 for 1345 VBR, 0.17 for VTR and 0.51 for SVD across all experiments). Finally, 1346 the performance of the algorithms is evaluated using a metric called 1347 over-admission-percentage, which is calculated as the percentage 1348 difference between the mean admitted load (with the mean taken over 1349 the duration of the experiment) and the configured admission rate. 1350 Given the reasonably small deviation of the admitted rate from the mean 1351 admitted rate in the experiments, this metric seems reasonable. 1353 8.4.1. Sensitivity to EWMA weight and CLE 1355 Table A.2 summarizes the over-admission- 1356 percentage values from 15 experiments with different [weight, CLE 1357 threshold] settings for each type of traffic and each topology. The 1358 ratio of the demand on the bottleneck link to the configured 1359 admission threshold is set to 5x. (The results for 0.95x can be 1360 found in the previous version of this draft.) For parking lot topologies we report the 1361 worst case result across all bottlenecks. We present here only the 1362 extreme values over the range of resulting over-admission-percentage 1363 values. 1365 We found that the virtual-queue admission control algorithm works 1366 reliably with the range of parameters we simulated, for all five 1367 types of traffic.
In addition, except for SVD, the performance is 1368 insensitive to parameter changes under all tested topologies. 1369 For SVD, the algorithm does show some sensitivity to the tested 1370 parameters. The high level conclusion that can be drawn is that 1371 (predictably) high peak-to-mean ratio SVD traffic is substantially 1372 more stressful to the queue-based admission control algorithm, but a 1373 set of parameters exists that keeps the over-admission within about 1374 -4% to +7% of the expected load even for the bursty SVD traffic. 1376 The token bucket-based admission control algorithm shows higher 1377 sensitivity to the parameter settings compared to the virtual queue 1378 based algorithm. It is important to note here that for the token 1379 bucket-based admission control no traffic will be marked until the 1380 rate of traffic exceeds the configured admission rate by the chosen 1381 CLE. As a consequence, even with ideal performance of the 1382 algorithm, the over-admission-percentage will not be 0; rather, it is 1383 expected to equal the CLE threshold. 1384 Therefore, a more meaningful metric for the token-based 1385 results is actually the over-admission-percentage (listed below) 1386 minus the corresponding (CLE threshold * 100). For example, for CLE 1387 = 0.01, one would expect that 1% over-admission is inherently 1388 embedded in the algorithm. When comparing the performance of the token 1389 bucket (with the adjusted over-admission-percentage) to its 1390 corresponding virtual queue result, we found that the token bucket 1391 performs only slightly worse for voice-like CBR, VBR, and MIX traffic. 1393 The results for SVD traffic require some additional commentary. Note 1394 from the results in Table A.2 that in the Single Link topology the 1395 performance of the token-based solution is comparable to the 1396 performance of the queue-based scheme.
However, for the RTT 1397 topology, the worst case performance for SVD traffic becomes very 1398 bad, with up to 23% over-admission at high overload. We 1399 investigated two potential causes of this drastic degradation of 1400 performance by concentrating on two key differences between the 1401 Single Link and the RTT topologies: the difference in round-trip 1402 times and the degree of aggregation in a per ingress-egress pair 1403 aggregate. 1405 To investigate the effect of the difference in round-trip times, we 1406 also conducted a subset of the experiments described above using an 1407 RTT topology that has the same RTT across all ingress-egress pairs 1408 rather than a range of RTTs in one experiment. We found that 1409 neither the absolute nor the relative difference in RTT between 1410 different ingress-egress pairs appears to have any visible effect on 1411 the overload performance or the fairness of either algorithm (we do 1412 not present these results here as they are essentially identical to 1413 those in Table A.2). In view of that, and noting that in the RTT 1414 topology used for these experiments there is 1415 only one highly bursty SVD flow per ingress, we believe that the severe 1416 degradation of performance in this topology is directly attributable 1417 to the lack of traffic aggregation on a per ingress-egress pair basis.
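The admission test evaluated throughout these experiments folds each measurement interval's marked-traffic fraction into an EWMA CLE at the egress (Section 8.3.1) and admits new flows only while the CLE stays below the CLE threshold. A minimal sketch of that logic, as an illustrative reconstruction (class and method names are ours, not the simulator's):

```python
class EgressCLE:
    """Per-interval EWMA Congestion Level Estimate at the egress.

    Every measurement interval (100 ms in the simulations) the
    fraction of marked traffic is folded into an EWMA; admission
    requests succeed only while the CLE is below the CLE threshold.
    """

    def __init__(self, weight=0.3, cle_threshold=0.05):
        self.weight = weight
        self.cle_threshold = cle_threshold
        self.cle = 0.0

    def end_interval(self, marked_bytes, total_bytes):
        # Fraction of marked traffic observed in this interval.
        fraction = marked_bytes / total_bytes if total_bytes else 0.0
        self.cle = self.weight * fraction + (1.0 - self.weight) * self.cle

    def admit(self):
        return self.cle < self.cle_threshold
```

A larger weight makes the CLE react faster to new marking but also makes it noisier, which is the trade-off probed by the [weight, CLE threshold] sweep above.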
1420 ------------------------------------------------- 1421 | Type | Topo | Over Admission Perc Stats | 1422 | | | Queue-based | Bucket-Based | 1423 | | | Min Max | Min Max | 1424 |------|--------|---------------------------------| 1425 | | S.Link | 0.224 1.105 | -0.99 1.373 | 1426 | CBR | RTT | 0.200 1.192 | 6.495 9.403 | 1427 | | PLT | -0.93 0.990 | -2.24 2.215 | 1428 |-------------------------------------------------| 1429 | | S.Link | -0.07 1.646 | -2.94 2.760 | 1430 | VBR | RTT | -0.11 1.830 | -1.92 6.384 | 1431 | | PLT | -1.48 1.644 | -4.34 3.707 | 1432 |-------------------------------------------------| 1433 | | S.Link | -0.14 1.961 | -2.85 2.153 | 1434 | MIX | RTT | -0.46 1.803 | -3.18 2.445 | 1435 | | PLT | -1.62 1.031 | -3.69 2.955 | 1436 |-------------------------------------------------| 1437 | | S.Link | -0.05 1.581 | -2.36 2.247 | 1438 | VTR | RTT | -0.57 1.313 | -1.44 4.947 | 1439 | | PLT | -1.24 1.071 | -3.05 2.828 | 1440 |-------------------------------------------------| 1441 | | S.Link | -2.73 6.525 | -11.25 6.227 | 1442 | SVD | RTT | -2.98 5.357 | -4.30 23.48 | 1443 | | PLT | -4.84 4.294 | -11.40 6.126 | 1444 ------------------------------------------------- 1445 Table A.2 Parameter sensitivity: Queue-based vs. Token Bucket- 1446 based. For the single bottleneck topologies (S. Link and RTT) the 1447 overload represents the ratio of the mean demand on the 1448 bottleneck link to the configured admission threshold. For parking 1449 lot topologies we report the worst case result across all 1450 bottlenecks. We present here only the worst case value over the 1451 range of resulting over-admission-percentage values. 1453 8.4.2.
Effect of Ingress-Egress Aggregation 1455 To investigate the effect of ingress-egress aggregation, we fix a 1456 particular EWMA weight and CLE setting (in this case, weight=0.3; for the 1457 virtual queue scheme CLE=0.05, and for the token bucket scheme 1458 CLE=0.0001), and vary the level of ingress-egress aggregation by using 1459 RTT topologies with different numbers of ingresses. 1461 Table A.3 shows the change of over-admission-percentage with respect 1462 to the increase in the number of ingresses for both the virtual queue and 1463 token bucket schemes. For all traffic, the leftmost column represents 1464 the case with the largest aggregation (only two ingresses), while the 1465 rightmost column represents the lowest level of aggregation 1466 (the expected number of calls per ingress is just 1 in this case). In all 1467 experiments the aggregate load on the bottleneck is the same across 1468 each traffic type (with the aggregate load being evenly divided 1469 between all ingresses). 1471 As seen from Table A.3, the virtual queue based approach is 1472 relatively insensitive to the level of ingress-egress aggregation. 1473 On the other hand, the token bucket based approach performs 1474 significantly worse at lower levels of ingress-egress aggregation. 1475 For example, for CBR (with an expected 1 call per ingress), the over- 1476 admission-percentage can be as bad as 45%.
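For reference, the token bucket meter of Section 8.3.2 can be sketched roughly as below. The byte-wise accounting and the rule that a packet finding insufficient tokens is marked (without consuming tokens) are our simplifying assumptions; the evaluated algorithm's exact marking rule may differ.

```python
class TokenBucketMarker:
    """Byte-wise token bucket meter (illustrative sketch).

    Tokens accrue at the configured-admissible-rate up to a fixed
    depth; a packet that finds insufficient tokens is marked.
    """

    def __init__(self, rate_bps, depth_bytes):
        self.rate = rate_bps / 8.0   # token fill rate in bytes/second
        self.depth = depth_bytes     # bucket depth in bytes
        self.tokens = depth_bytes    # bucket starts full
        self.last_t = 0.0

    def on_packet(self, t, size_bytes):
        """Return True if the packet arriving at time t is marked."""
        self.tokens = min(self.depth,
                          self.tokens + (t - self.last_t) * self.rate)
        self.last_t = t
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return False            # enough tokens: not marked
        return True                 # assumption: marked, tokens untouched
```

With strictly periodic (CBR) arrivals, successive packets of a flow see nearly the same bucket state, which is the root of the synchronization effect discussed below in the draft's own analysis.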
1479 -------------------------------------------------------------------- 1480 | | Type | Number of Ingresses | 1481 | |------|---------------------------------------------------- | 1482 | | | 2 | 10 | 70 | 300 | 600 | 1000 | 1483 | | CBR | 1.003 | 1.024 | 0.976 | 0.354 | -1.45 | 0.396 | 1484 | |------------------------------------------------------------| 1485 | | | 2 | 10 | 70 | 300 | 600 | 1800 | 1486 | | VBR | 1.021 | 1.117 | 1.006 | 0.979 | 0.721 | -0.85 | 1487 | |------------------------------------------------------------| 1488 |Virtual| | 2 | 10 | 70 | 300 | 600 | 1000 | 1489 | Queue | MIX | 1.080 | 1.163 | 1.105 | 1.042 | 1.132 | 1.098 | 1490 | Based |------------------------------------------------------------| 1491 | | | 2 | 10 | 70 | 140 | 300 | 600 | 1492 | | VTR | 1.109 | 1.053 | 0.842 | 0.859 | 0.856 | 0.862 | 1493 | |------------------------------------------------------------| 1494 | | | 2 | 10 | 35 | 70 | 140 | 300 | 1495 | | SVD | -0.08 | 0.009 | -0.11 | -0.286 | -1.56 | 0.914 | 1496 -------------------------------------------------------------------- 1497 -------------------------------------------------------------------- 1498 | | Type | Number of Ingresses | 1499 | |------|---------------------------------------------------- | 1500 | | | 2 | 10 | 100 | 300 | 600 | 1000 | 1501 | | CBR | 0.725 | 0.753 | 7.666 | 21.16 | 33.69 | 44.58 | 1502 | |------------------------------------------------------------| 1503 | | | 2 | 10 | 100 | 300 | 600 | 1800 | 1504 | | VBR | 0.532 | 0.477 | 1.409 | 3.044 | 5.812 | 14.80 | 1505 |Token |------------------------------------------------------------| 1506 |Bucket | | 2 | 10 | 100 | 300 | 600 | 1800 | 1507 |Based | MIX | 0.736 | 0.649 | 1.960 | 4.652 | 10.31 | 27.69 | 1508 | |------------------------------------------------------------| 1509 | | | 2 | 10 | 70 | 140 | 300 | 600 | 1510 | | VTR | 0.758 | 0.889 | 1.335 | 1.694 | 4.128 | 13.28 | 1511 |
|------------------------------------------------------------| 1512 | | | 2 | 10 | 35 | 100 | 140 | 300 | 1513 | | SVD | -1.64 | -0.93 | 0.237 | 4.732 | 7.103 | 8.799 | 1514 -------------------------------------------------------------------- 1515 Table A.3 Synchronisation effect with low Ingress-Egress 1516 Aggregation: Queue-based vs. Token bucket-based. 1518 Our investigation reveals that the poor performance of 1519 the token bucket scheme in our experiments is directly attributable to 1520 the same "synchronization" effect as was described earlier in the 1521 Termination (preemption) results in 1522 [I-D.zhang-pcn-performance-evaluation], to which we refer the 1523 reader for a more detailed description of this effect. In short, 1524 for CBR traffic, a periodic pattern arises where packets of 1525 a given flow see roughly the same state of the token bucket at the 1526 bottleneck, and hence either all get marked, or none do. 1527 As a result, at low levels of aggregation a subset of 1528 ingresses always gets its packets marked, while other ingresses 1529 do not. 1531 As reported in [I-D.zhang-pcn-performance-evaluation], in the case of 1532 Termination this synchronization effect is beneficial to the 1533 algorithm. In contrast, for Admission, this synchronization is 1534 detrimental to the algorithm's performance at low aggregations. This 1535 can be easily explained by noting that ingresses whose packets do not 1536 get marked continue admitting new traffic even if the aggregate 1537 bottleneck load has been reached or exceeded. Since most of the 1538 other traffic patterns contain large CBR segments, this effect is 1539 seen with other traffic types as well, although to a different 1540 extent. 1542 A natural initial reaction may be to write off this effect as purely 1543 a simulation artifact.
In fact, one can expect that if some jitter 1544 is introduced into the strict CBR traffic pattern so that the packet 1545 transmission is no longer strictly periodic, then the "synchronization" 1546 effect might be easily broken. 1548 To verify whether this is indeed the case, we ran the experiments with the 1549 same topologies and parameter settings, but with randomized versions 1550 of the base traffic types. The results are summarized in Table A.4. 1551 Note that the columns labeled with fractions (e.g. 0.0001) correspond to 1552 randomized traffic with a randomization-interval of (specified 1553 fraction) x packet-inter-arrival-time. It also means that on 1554 average, the packets are delayed by (specified fraction) x packet- 1555 inter-arrival-time / 2. In addition, the "No-Rand" column 1556 corresponds to the token bucket results in Table A.3. It 1557 turns out that introducing enough jitter does indeed break the 1558 synchronization effect, and the performance of the algorithm 1559 improves substantially. However, it takes a sufficient amount of randomization 1560 before the improvement is noticed. For instance, in the CBR results, the only 1561 column that shows no aggregation effect is the one labeled 1562 "0.05", which translates to an expected packet deviation from its ideal 1563 CBR transmit time of 0.5ms. While a 0.5ms per-hop deviation is not 1564 unreasonable to expect, in well provisioned networks with a 1565 relatively small amount of voice traffic in the priority queue one 1566 might find lower levels of network-induced jitter. In any case, 1567 these results indicate that the "synchronization" effect cannot be 1568 completely written off as a simulation artifact. The good news, 1569 however, is that this effect is visible only at very low ingress-egress 1570 aggregation levels, and as the ingress-egress aggregation increases, 1571 the effect quickly disappears.
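The randomization of Section 8.2.4 that breaks this synchronization amounts to the following (a sketch; the helper name is ours):

```python
import random

def randomize_departures(ideal_times_ms, inter_arrival_ms, fraction):
    """Jitter a CBR packet train as described in Section 8.2.4.

    Each packet is delayed by R drawn uniformly from
    [0, fraction * inter-arrival-time], so the mean added delay is
    fraction * inter-arrival-time / 2.
    """
    interval = fraction * inter_arrival_ms
    return [t + random.uniform(0.0, interval) for t in ideal_times_ms]
```

With fraction = 0.05 and a 20 ms voice inter-arrival time, each packet deviates by up to 1 ms (0.5 ms on average) from its ideal transmit time, matching the "0.05" column of Table A.4.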
1573 We observed the synchronization effect consistently across all types 1574 of traffic we tested, with the exception of VTR. VTR also exhibits 1575 some aggregation effect; however, randomization of its CBR portion 1576 has almost no effect on performance. We suspect this is because 1577 the randomization we perform is at the packet level, while the 1578 synchronization that seems to be causing the performance degradation 1579 at low ingress-egress aggregation for VTR traffic occurs at the frame 1580 level. Although our investigation of this issue is not yet complete, 1581 our preliminary results show that if we calculate the random 1582 deviation for our artificially induced jitter using the frame inter- 1583 arrival time instead of the packet inter-arrival time, we can reduce the 1584 over-admission-percentage for VTR to roughly 3%. It is unclear, 1585 however, whether such randomisation at the frame level meaningfully 1586 reflects network-introduced jitter. 1588 ---------------------------------------------------------------- 1589 | | No.
| Randomization Interval | 1590 | | Ingr | No-Rand | 0.0001 | 0.001 | 0.005 | 0.01 | 0.05 | 1591 |----------------------------------------------------------------| 1592 | | 2 | 0.725 | 0.683 | 0.784 | 0.725 | 0.772 | 0.787 | 1593 | | 10 | 0.753 | 0.725 | 0.543 | 0.645 | 0.733 | 0.854 | 1594 | | 100 | 7.666 | 5.593 | 2.706 | 1.454 | 1.226 | 0.692 | 1595 | CBR | 300 | 21.16 | 15.52 | 6.699 | 3.105 | 2.478 | 1.624 | 1596 | | 600 | 33.69 | 25.51 | 11.41 | 6.021 | 4.676 | 2.916 | 1597 | | 1000 | 44.58 | 36.20 | 17.03 | 7.094 | 5.371 | 3.076 | 1598 |----------------------------------------------------------------| 1599 | | 2 | 0.532 | 0.645 | 0.670 | 0.555 | 0.237 | 0.740 | 1600 | | 10 | 0.477 | 0.596 | 0.703 | 0.494 | 0.662 | 0.533 | 1601 | | 100 | 1.409 | 1.236 | 1.043 | 0.810 | 1.202 | 1.016 | 1602 | VBR | 300 | 3.044 | 2.652 | 2.093 | 1.588 | 1.755 | 1.671 | 1603 | | 600 | 5.812 | 4.913 | 3.539 | 2.963 | 2.803 | 2.277 | 1604 | | 1800 | 14.80 | 12.59 | 8.039 | 6.587 | 5.694 | 4.733 | 1605 |----------------------------------------------------------------| 1606 | | 2 | 0.736 | 0.753 | 0.627 | 0.751 | 0.850 | 0.820 | 1607 | | 10 | 0.649 | 0.737 | 0.780 | 0.824 | 0.867 | 0.787 | 1608 | | 100 | 1.960 | 1.705 | 1.428 | 1.160 | 1.149 | 1.034 | 1609 | MIX | 300 | 4.652 | 4.724 | 3.760 | 2.692 | 2.449 | 2.027 | 1610 | | 600 | 10.31 | 9.629 | 7.289 | 5.520 | 4.958 | 3.710 | 1611 | | 1000 | 17.21 | 15.96 | 11.05 | 8.700 | 7.382 | 5.061 | 1612 | | 1800 | 27.69 | 23.46 | 16.53 | 12.04 | 10.84 | 8.563 | 1613 |----------------------------------------------------------------| 1614 | | 2 | 0.758 | 0.756 | 0.872 | 0.894 | 0.825 | 0.849 | 1615 | | 10 | 0.889 | 0.939 | 0.785 | 0.704 | 0.843 | 0.574 | 1616 | | 70 | 1.335 | 1.101 | 1.066 | 1.181 | 0.978 | 0.946 | 1617 | VTR | 140 | 1.694 | 1.162 | 1.979 | 1.791 | 1.684 | 1.573 | 1618 | | 300 | 4.128 | 4.191 | 3.545 | 3.307 | 3.964 | 3.465 | 1619 | | 600 | 13.28 | 13.76 | 13.81 | 13.18 | 12.97 | 12.35 | 1620 
|----------------------------------------------------------------| 1621 | | 2 | -1.64 | -2.30 | -2.14 | -1.61 | -1.01 | -0.89 | 1622 | | 10 | -0.93 | -1.65 | -2.41 | -2.98 | -2.58 | -2.27 | 1623 | | 35 | 0.237 | -0.31 | -0.35 | -1.02 | -0.96 | -2.16 | 1624 | SVD | 100 | 4.732 | 4.640 | 4.152 | 2.287 | 1.887 | -0.03 | 1625 | | 140 | 7.103 | 6.002 | 5.560 | 4.974 | 3.619 | 0.091 | 1626 | | 300 | 8.799 | 10.72 | 9.840 | 7.530 | 6.281 | 4.270 | 1627 ---------------------------------------------------------------- 1629 Table A.4 Ingress-Egress Aggregation: Token-based results for 1630 Randomized traffic. Columns labeled with fractions correspond to a 1631 randomization interval of (specified fraction) x packet-inter- 1632 arrival-time. 1634 Finally, we investigated the impact of call arrival assumptions at 1635 different levels of ingress-egress aggregation by comparing the 1636 results with Poisson and BATCH arrivals. We reported in 1637 [I-D.zhang-pcn-performance-evaluation] that virtual queue based 1638 admission is relatively insensitive to BATCH vs Poisson arrivals, 1639 even at lower aggregation levels. In contrast, the call arrival 1640 assumption does affect the performance of the token bucket-based 1641 algorithm, and causes substantial degradation of performance at low 1642 ingress-egress aggregation levels. An example result with CBR traffic 1643 is presented in Table A.5. Here we use batch arrivals with mean batch size 5. 1644 The results show that with the lowest aggregation, batch arrivals 1645 give worse results than plain Poisson arrivals; however, as the 1646 level of aggregation becomes sufficient (e.g. 100 ingresses, 10 calls/ 1647 ingress), the difference becomes insignificant. This behavior is 1648 consistent across all types of traffic. 1651 ---------------------------------------------------------------- 1652 | | No.
| Deviation Interval | 1653 | | Ingr | No-Rand | 0.0001 | 0.001 | 0.005 | 0.01 | 0.05 | 1654 |----------------------------------------------------------------| 1655 | | 2 | 0.918 | 1.007 | 0.836 | 0.933 | 1.014 | 0.971 | 1656 | | 10 | 1.221 | 0.936 | 0.767 | 0.906 | 0.920 | 0.857 | 1657 | | 100 | 8.857 | 7.092 | 3.265 | 1.821 | 1.463 | 1.036 | 1658 | CBR | 300 | 29.39 | 22.59 | 8.596 | 4.979 | 4.550 | 2.165 | 1659 | | 600 | 43.36 | 37.12 | 17.37 | 10.02 | 8.005 | 4.223 | 1660 | | 1000 | 63.60 | 50.36 | 25.48 | 12.82 | 9.339 | 6.219 | 1661 |----------------------------------------------------------------| 1662 Table A.5 Ingress-Egress Aggregation with batch traffic: Token-based 1663 results. 1665 8.4.3. Effect of Multiple Bottlenecks 1667 The results in Table A.2 (Section 8.4.1, parameter sensitivity study) 1668 implied that from the bottleneck point of view, the performance on 1669 the multiple-bottleneck topology, for all types of traffic, is 1670 comparable to that on the Single Link topology, for both queue-based and 1671 token bucket-based algorithms. However, the results in Table A.2 1672 only show the worst case values over all bottleneck links. In this 1673 section we consider two other aspects of the multiple-bottleneck 1674 effects: relative performance at individual bottlenecks and fairness 1675 of bandwidth usage between the short- and the long-haul ingress- 1676 egress aggregates. 1678 8.4.3.1. Relative performance of different bottlenecks 1680 In Table A.6, we show a snapshot of the behavior with the 5-bottleneck 1681 topology, with the goal of studying the performance of different 1682 bottlenecks more closely. Here, the over-admission-percentage 1683 displayed is an average across all 15 experiments with different 1685 [weight, CLE] settings. (We observe the same behavior in each 1686 individual experiment, hence providing summarized statistics is 1687 meaningful.)
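The over-admission-percentage values shown in these tables are computed as defined in Section 8.4; a minimal sketch of that computation (the function name is ours):

```python
def over_admission_percentage(admitted_samples, configured_rate):
    """Over-admission-percentage as defined in Section 8.4.

    The percentage difference between the mean admitted load (sampled
    at 50 ms granularity in the simulations) and the configured
    admission rate.
    """
    mean_admitted = sum(admitted_samples) / len(admitted_samples)
    return 100.0 * (mean_admitted - configured_rate) / configured_rate
```

A mean admitted load exactly at the configured rate thus yields 0%, and the token bucket's inherent CLE offset appears directly in this number, as noted in Section 8.4.1.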
1689 One difference between the token bucket and the queue-based admission in 1690 the PLT topology revealed in Table A.6 is that for the token bucket there appears to 1691 be a consistent relationship between the position of the bottleneck 1692 link (how far downstream it is) and its over-admission-percentage. 1693 The data shows that the further downstream the bottleneck is, the more it 1694 tends to over-admit, regardless of the traffic type. The exact 1695 cause of this phenomenon is yet to be explained, but its effect 1696 seems to be insignificant in magnitude, at least in the experiments 1697 we ran. 1700 --------------------------------------------------------- 1701 | | Traffic | Bottleneck LinkId | 1702 | | Type | 1 | 2 | 3 | 4 | 5 | 1703 | |-------------------------------------------------| 1704 | | CBR | 0.288 | 0.286 | 0.238 | 0.332 | 0.306 | 1705 | |-------------------------------------------------| 1706 | | VBR | 0.319 | 0.420 | 0.257 | 0.341 | 0.254 | 1707 | Queue |-------------------------------------------------| 1708 | Based | MIX | 0.363 | 0.394 | 0.312 | 0.268 | 0.205 | 1709 | |-------------------------------------------------| 1710 | | VTR | 0.466 | 0.309 | 0.223 | 0.363 | 0.317 | 1711 | |-------------------------------------------------| 1712 | | SVD | 0.319 | 0.420 | 0.257 | 0.341 | 0.254 | 1713 |--------------------------------------------------------- 1714 | | Traffic | Bottleneck LinkId | 1715 | | Type | 1 | 2 | 3 | 4 | 5 | 1716 | |-------------------------------------------------| 1717 | | CBR | 0.121 | 0.300 | 0.413 | 0.515 | 0.700 | 1718 | |-------------------------------------------------| 1719 | Token | VBR | -0.07 | 0.251 | 0.496 | 0.698 | 1.044 | 1720 |Bucket |-------------------------------------------------| 1721 | Based | MIX | 0.042 | 0.350 | 0.468 | 0.716 | 0.924 | 1722 | |-------------------------------------------------| 1723 | | VTR | 0.277 | 0.488 | 0.642 | 0.907 | 1.117 | 1724 |
|-------------------------------------------------| 1725 | | SVD | -2.64 | -2.50 | -1.72 | -1.57 | -1.19 | 1726 --------------------------------------------------------- 1728 Table A.6 Bottleneck Performance: queue-based vs. token bucket- 1729 based. 1731 8.4.3.2. (Un)Fairness Between Different Ingress-Egress pairs 1733 It was reported in [I-D.zhang-pcn-performance-evaluation] that 1734 virtual-queue-based admission control significantly favors short-haul 1735 connections over long-haul connections. As was discussed there, this 1736 property is in fact common to measurement-based admission control 1737 algorithms (see for example [Jamin] for a discussion). It is common 1738 knowledge that in the limit of large demands, long-haul connections 1739 can be completely starved. We show in 1740 [I-D.zhang-pcn-performance-evaluation] that in fact starvation of 1741 long-haul connections can occur even with relatively small (but 1742 constant) overloads. We identify there that the primary reason for 1743 it is a de-synchronization of the "congestion periods" at different 1744 bottlenecks, resulting in the long-haul connections almost always 1745 seeing at least one congested bottleneck and hence almost never being allowed 1746 to admit new flows. We refer the reader to that draft for more 1747 detail. 1749 Here we investigate the comparative behavior of the token bucket 1750 based scheme and the virtual queue based scheme with respect to fairness. 1752 The fairness is illustrated using the ratio between the bandwidth of the 1753 long-haul aggregates and the short-haul aggregates. As is 1754 intuitively expected (and also confirmed experimentally), the 1755 unfairness is larger the higher the demand and the more 1756 bottlenecks traversed by the long-haul aggregate. Therefore, we report 1757 here the "worst case" results across our experiments, corresponding to 1758 the 5x demand overload and the 5-PLT topology.
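The fairness metric just described is formed by averaging each aggregate's bandwidth samples over fixed bins (10 simulation seconds in Table A.7; with 50 ms sampling that is 200 samples per bin) and taking the per-bin ratio. A sketch, with a function name of our own choosing:

```python
def binned_bandwidth_ratio(series_a_bps, series_b_bps, samples_per_bin):
    """Average two bandwidth sample series over fixed-size bins and
    return the per-bin ratio a/b -- the form of the fairness metric
    reported in Table A.7.
    """
    ratios = []
    for i in range(0, len(series_a_bps) - samples_per_bin + 1,
                   samples_per_bin):
        a = sum(series_a_bps[i:i + samples_per_bin]) / samples_per_bin
        b = sum(series_b_bps[i:i + samples_per_bin]) / samples_per_bin
        ratios.append(a / b)
    return ratios
```

A ratio drifting away from 1 over successive bins, as in Table A.7, is the signature of the beatdown effect.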
1760 Table A.7 summarizes, at 5x overload, with CLE=0.05 (for the virtual 1761 queue) and CLE=0.0001 (for the token bucket), the fairness results for different 1762 weights and topologies. We display the ratio as a function of time, in 10 1763 sec increments (the reported ratios are averaged over the 1764 corresponding 10 simulation-second interval). The results presented 1765 in this section use the aggregates that traverse the first 1766 bottleneck. The results on all other bottlenecks are extremely 1767 similar. 1770 -------------------------------------------------------------------- 1771 | |Topo|Weight| Simulation Time (s) | 1772 | | | | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 1773 | |-----------------------------------------------------------| 1774 | | | 0.1 |0.99 |1.04 |1.14 |1.14 |1.23 |1.23 |1.35 |1.46 | 1775 | |PLT5| 0.5 |1.00 |1.17 |1.24 |1.41 |1.81 |2.13 |2.88 |3.05 | 1776 | | | 0.9 |1.03 |1.42 |1.74 |2.14 |2.44 |2.91 |3.83 |4.20 | 1777 | |-----------------------------------------------------------| 1778 |Virtual| | 0.1 |1.02 |1.08 |1.15 |1.29 |1.33 |1.38 |1.37 |1.42 | 1779 |Queue |PLT3| 0.5 |1.02 |1.04 |1.07 |1.19 |1.24 |1.30 |1.34 |1.33 | 1780 |Based | | 0.9 |1.02 |1.09 |1.23 |1.41 |1.65 |2.10 |2.63 |3.18 | 1781 | |-----------------------------------------------------------| 1782 | | | 0.1 |1.02 |0.98 |1.03 |1.11 |1.22 |1.21 |1.25 |1.31 | 1783 | |PLT2| 0.5 |1.02 |1.06 |1.14 |1.17 |1.15 |1.31 |1.41 |1.41 | 1784 | | | 0.9 |1.02 |1.04 |1.11 |1.30 |1.56 |1.61 |1.62 |1.67 | 1785 -------------------------------------------------------------------- 1786 -------------------------------------------------------------------- 1787 | |Topo|Weight| Simulation Time (s) | 1788 | | | | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 1789 | |-----------------------------------------------------------| 1790 | | | 0.1 |1.03 |1.48 |1.83 |2.34 |2.95 |3.33 |4.32 |4.65 | 1791 | |PLT5| 0.5 |1.08 |1.53 |1.90 |2.44 |3.04 |3.42 |4.47 |4.83 | 1792 | | | 0.9 |1.08 |1.48 |1.80 |2.26 |2.82 |3.19
|4.23 |4.16 | 1793 | |-----------------------------------------------------------| 1794 |Token | | 0.1 |1.02 |1.26 |1.45 |1.57 |1.69 |1.76 |1.92 |1.94 | 1795 |Bucket |PLT3| 0.5 |1.07 |1.41 |1.89 |2.36 |2.89 |3.63 |3.70 |3.82 | 1796 |Based | | 0.9 |1.07 |1.33 |1.59 |1.94 |2.41 |2.80 |2.75 |2.90 | 1797 | |-----------------------------------------------------------| 1798 | | | 0.1 |1.03 |1.10 |1.43 |2.06 |2.28 |2.85 |3.09 |2.90 | 1799 | |PLT2| 0.5 |1.07 |1.32 |1.47 |1.72 |1.71 |1.81 |1.89 |1.94 | 1800 | | | 0.9 |1.09 |1.27 |1.51 |1.86 |1.82 |1.88 |1.88 |2.06 | 1801 ------------------------------------------------------------------- 1802 Table A.7 Fairness performance: Virtual Queue v.s. Token Bucket. 1803 The numbers in the cells represent the ratio between the bandwidth of 1804 the long- and short-haul aggregates. Each row represents the time 1805 series of these results in 10 simulation second increments. 1807 To summarize, we observed consistent beatdown effect across all 1808 experiments for both virtual-queue and token-bucket admission 1809 algorithms, although the exact extent of the unfairness depends on 1810 the demand overload, topology and parameters settings. To further 1811 quantify the effect of these factors remains an area of future work. 1812 We also note that the cause of the beatdown effect appears to be 1813 largely independent of the specific algorithm, and is likely to be 1814 relevant to other PCN proposals as well. 1816 9. Security Considerations 1818 TBD 1820 10. IANA Considerations 1822 11. References 1824 11.1. Normative References 1826 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 1827 Requirement Levels", BCP 14, RFC 2119, March 1997. 1829 11.2. Informative References 1831 [I-D.babiarz-pcn-3sm] 1832 Babiarz, J., "Three State PCN Marking", 1833 draft-babiarz-pcn-3sm-00 (work in progress), July 2007. 
1835   [I-D.briscoe-tsvwg-cl-architecture]
1836              Briscoe, B., "An edge-to-edge Deployment Model for Pre-
1837              Congestion Notification: Admission Control over a
1838              DiffServ Region", draft-briscoe-tsvwg-cl-architecture-04
1839              (work in progress), October 2006.

1841   [I-D.briscoe-tsvwg-cl-phb]
1842              Briscoe, B., "Pre-Congestion Notification marking",
1843              draft-briscoe-tsvwg-cl-phb-03 (work in progress),
1844              October 2006.

1846   [I-D.briscoe-tsvwg-re-ecn-border-cheat]
1847              Briscoe, B., "Emulating Border Flow Policing using Re-ECN
1848              on Bulk Data", draft-briscoe-tsvwg-re-ecn-border-cheat-01
1849              (work in progress), June 2006.

1851   [I-D.briscoe-tsvwg-re-ecn-tcp]
1852              Briscoe, B., "Re-ECN: Adding Accountability for Causing
1853              Congestion to TCP/IP", draft-briscoe-tsvwg-re-ecn-tcp-03
1854              (work in progress), October 2006.

1856   [I-D.davie-ecn-mpls]
1857              Davie, B., "Explicit Congestion Marking in MPLS",
1858              draft-davie-ecn-mpls-01 (work in progress), October 2006.

1860   [I-D.eardley-pcn-architecture]
1861              Eardley, P., "Pre-Congestion Notification Architecture",
1862              draft-eardley-pcn-architecture-00 (work in progress),
1863              June 2007.

1865   [I-D.lefaucheur-emergency-rsvp]
1866              Le Faucheur, F., "RSVP Extensions for Emergency
1867              Services", draft-lefaucheur-emergency-rsvp-02 (work in
1868              progress), June 2006.

1870   [I-D.westberg-pcn-load-control]
1871              Westberg, L., "LC-PCN - The Load Control PCN solution",
1872              draft-westberg-pcn-load-control-00 (work in progress),
1873              May 2007.

1875   [I-D.zhang-pcn-performance-evaluation]
1876              Zhang, X., "Performance Evaluation of CL-PHB Admission
1877              and Pre-emption Algorithms",
1878              draft-zhang-pcn-performance-evaluation-01 (work in
1879              progress), March 2007.

1881   11.3.  Other References

1883   [Jamin]    Jamin, S., Danzig, P., Shenker, S., and L. Zhang, "A
1884              Measurement-based Admission Control Algorithm for
1885              Integrated Services Packet Networks", 1997.

1886   [Menth]    Menth, M., "PCN-Based Resilient Network Admission
1887              Control: The Impact of a Single Bit", 2007.
1889   Authors' Addresses

1891   Anna Charny
1892   Cisco Systems, Inc.
1893   1414 Mass. Ave.
1894   Boxborough, MA 01719
1895   USA

1897   Email: acharny@cisco.com

1899   Xinyang (Joy) Zhang
1900   Cisco Systems, Inc. and Cornell University
1901   1414 Mass. Ave.
1902   Boxborough, MA 01719
1903   USA

1905   Email: joyzhang@cisco.com

1906   Francois Le Faucheur
1907   Cisco Systems, Inc.
1908   Village d'Entreprise Green Side - Batiment T3,
1909   400 Avenue de Roumanille, 06410 Biot Sophia-Antipolis,
1910   France

1912   Email: flefauch@cisco.com

1914   Vassilis Liatsos
1915   Cisco Systems, Inc.
1916   1414 Mass. Ave.
1917   Boxborough, MA 01719
1918   USA

1920   Email: vliatsos@cisco.com

1922   Full Copyright Statement

1924   Copyright (C) The IETF Trust (2007).

1926   This document is subject to the rights, licenses and restrictions
1927   contained in BCP 78, and except as set forth therein, the authors
1928   retain all their rights.

1930   This document and the information contained herein are provided on an
1931   "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE REPRESENTS
1932   OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE IETF TRUST AND
1933   THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS
1934   OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
1935   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
1936   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

1938   Intellectual Property

1940   The IETF takes no position regarding the validity or scope of any
1941   Intellectual Property Rights or other rights that might be claimed to
1942   pertain to the implementation or use of the technology described in
1943   this document or the extent to which any license under such rights
1944   might or might not be available; nor does it represent that it has
1945   made any independent effort to identify any such rights.  Information
1946   on the procedures with respect to rights in RFC documents can be
1947   found in BCP 78 and BCP 79.
1949   Copies of IPR disclosures made to the IETF Secretariat and any
1950   assurances of licenses to be made available, or the result of an
1951   attempt made to obtain a general license or permission for the use of
1952   such proprietary rights by implementers or users of this
1953   specification can be obtained from the IETF on-line IPR repository at
1954   http://www.ietf.org/ipr.

1956   The IETF invites any interested party to bring to its attention any
1957   copyrights, patents or patent applications, or other proprietary
1958   rights that may cover technology that may be required to implement
1959   this standard.  Please address the information to the IETF at
1960   ietf-ipr@ietf.org.

1962   Acknowledgment

1964   Funding for the RFC Editor function is provided by the IETF
1965   Administrative Support Activity (IASA).