Network Working Group
INTERNET-DRAFT
Expires in: September 2007
Intended Status: Informational

                                                          Scott Poretsky
                                                      Reef Point Systems

                                                             Shankar Rao
                                                    Qwest Communications

                                                              March 2007

                       Methodology Guidelines for
                     Accelerated Stress Benchmarking

Intellectual Property Rights (IPR) statement:
   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

Status of this Memo

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice

   Copyright (C) The IETF Trust (2007).

ABSTRACT

   Routers in an operational network are configured with multiple
   protocols and security policies while simultaneously forwarding
   traffic and being managed.  To accurately benchmark a router for
   deployment it is necessary to test the router under accelerated
   operational conditions, which is known as Stress Testing.  This
   document provides the Methodology Guidelines for performing
   Accelerated Stress Benchmarking of networking devices.  Descriptions
   of Test Topology, Benchmarks, and Reporting Format are provided in
   addition to procedures for conducting various test cases.  The
   methodology is to be used with the companion terminology document
   [4].  These guidelines can be used as the basis for additional
   methodology documents that benchmark stress conditions for specific
   network technologies.

Table of Contents

   1. Introduction
   2. Existing definitions
   3. Test Setup
   3.1 Test Topologies
   3.2 Test Considerations
   3.3 Reporting Format
   3.3.1 Configuration Sets
   3.3.2 Startup Conditions
   3.3.3 Instability Conditions
   3.3.4 Benchmarks
   4. Stress Test Procedure
   4.1 General Methodology with Multiple Instability Conditions
   4.2 General Methodology with a Single Instability Condition
   5. IANA Considerations
   6. Security Considerations
   7. Normative References
   8. Informative References
   9. Authors' Addresses
1. Introduction

   Router testing benchmarks have consistently been made in a
   monolithic fashion wherein a single protocol or behavior is measured
   in an isolated environment.  It is important to know the limits of a
   networking device's behavior for each protocol in isolation;
   however, this does not produce a reliable benchmark of the device's
   behavior in an operational network.  Routers in an operational
   network are configured with multiple protocols and security policies
   while simultaneously forwarding traffic and being managed.  To
   accurately benchmark a router for deployment it is necessary to test
   that router under operational conditions by simultaneously
   configuring and scaling network protocols and security policies,
   forwarding traffic, and managing the device.  It is helpful to
   accelerate these network operational conditions with Instability
   Conditions [4] so that the networking devices are stress tested.

   This document provides the methodology for performing Stress
   Benchmarking of networking devices.  Descriptions of Test Topology,
   Benchmarks, and Reporting Format are provided in addition to
   procedures for conducting various test cases.  The methodology is to
   be used with the companion terminology document [4].

   Stress Testing of networking devices provides the following
   benefits:

      1. Evaluation of multiple protocols enabled simultaneously as
         configured in deployed networks
      2. Evaluation of system and software stability
      3. Evaluation of manageability under stressful conditions
      4. Identification of buffer overflow conditions
      5. Identification of software coding bugs such as:
         a. Memory leaks
         b. Suboptimal CPU utilization
         c. Coding logic errors

   These benefits produce significant advantages for network
   operations:

      1. Increased stability of routers and protocols
      2. Routers hardened against DoS attacks
      3. Verified manageability under stress
      4. Planning of router resources for growth and scale

2. Existing definitions

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in BCP 14, RFC 2119
   [5].  RFC 2119 defines the use of these key words to help make the
   intent of standards track documents as clear as possible.  While
   this document uses these keywords, this document is not a standards
   track document.

   Terms related to Accelerated Stress Benchmarking are defined in [4].

3. Test Setup

3.1 Test Topologies

   Figure 1 shows the physical configuration to be used for the
   methodologies provided in this document.  The number of interfaces
   between the tester and DUT will scale depending upon the number of
   control protocol sessions and traffic forwarding interfaces.  A
   separate device may be required to externally manage the DUT if the
   test equipment does not support such functionality.  Figure 2 shows
   the logical configuration for the stress test methodologies.  Each
   plane MAY be emulated by a single test device or by multiple test
   devices.

                               ___________
                              |    DUT    |
                           ___|Management |
                          |   |           |
                          |    -----------
                          \/
                     ___________
                    |           |
                    |    DUT    |
               |--->|           |<---|
               |     -----------     |
           xN  |                     |  xN
       interfaces                interfaces
               |     ___________     |
               |    |           |    |
               |--->|  Tester   |<---|
                    |           |
                     -----------

                  Figure 1. Physical Configuration

         ___________                ___________
        |  Control  |              | Management|
        |   Plane   |___        ___|   Plane   |
        |           |   |      |   |           |
         -----------    |      |    -----------
                        \/     \/
                      ___________         ___________
                     |           |       |  Security |
                     |    DUT    |<------|   Plane   |
                |--->|           |<---|  |           |
                |     -----------     |   -----------
                |                     |
                |     ___________     |
                |    |   Data    |    |
                |--->|   Plane   |<---|
                     |           |
                      -----------

                  Figure 2. Logical Configuration

3.2 Test Considerations

   The Accelerated Stress Benchmarking test can be applied in service
   provider test environments to benchmark DUTs under stress in an
   environment that reflects conditions found in an operational
   network.  A particular Configuration Set is defined and the DUT is
   benchmarked using this Configuration Set and the Instability
   Conditions.  Varying the Configuration Sets and/or Instability
   Conditions in an iterative fashion can provide an accurate
   characterization of the DUT to help determine future network
   deployments.

   For the management plane, SNMP GETs SHOULD be performed
   continuously.  Management sessions SHOULD be open simultaneously
   and be repeatedly opened and closed using access protocols such as
   Telnet and SSH.  Open management sessions SHOULD have valid and
   invalid configuration and show commands entered.  For the security
   plane, tunnels for protocols such as IPsec SHOULD be established
   and flapped.  Policies for firewalls and ACLs SHOULD be repeatedly
   added and removed via the management sessions.
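   The following non-normative sketch illustrates one way the
   management-plane portion of these conditions might be driven from an
   external host.  It assumes Python, the Net-SNMP "snmpget" and
   OpenSSH "ssh" command-line clients, and placeholder addresses,
   credentials, commands, and rates; none of these are requirements of
   this methodology, and actual values are taken from the Configuration
   Set in use.

      #!/usr/bin/env python3
      # Non-normative sketch: continuous SNMP GETs plus periodic SSH
      # session churn against the DUT management address.  The address,
      # community, OID, command, and rates are illustrative
      # placeholders only.
      import subprocess
      import time

      DUT_MGMT = "192.0.2.1"      # placeholder DUT management address
      COMMUNITY = "public"        # placeholder SNMP community
      OID = "1.3.6.1.2.1.1.3.0"   # sysUpTime.0
      SNMP_INTERVAL = 1.0         # one GET per second
      SSH_INTERVAL = 600.0        # one SSH session every 10 minutes

      def snmp_get():
          # Net-SNMP command-line GET; failures are tolerated because
          # loss of manageability is itself a result worth recording.
          subprocess.run(["snmpget", "-v2c", "-c", COMMUNITY,
                          DUT_MGMT, OID], capture_output=True)

      def ssh_session():
          # Open a short-lived SSH session and enter a placeholder
          # show command; key- or password-based access is assumed to
          # be provisioned already.
          subprocess.run(["ssh", "admin@" + DUT_MGMT, "show version"],
                         capture_output=True)

      def main(duration=3600):
          start = time.time()
          next_ssh = start
          while time.time() - start < duration:
              snmp_get()
              if time.time() >= next_ssh:
                  ssh_session()
                  next_ssh += SSH_INTERVAL
              time.sleep(SNMP_INTERVAL)

      if __name__ == "__main__":
          main()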
3.3 Reporting Format

   Each methodology requires reporting of information for test
   repeatability when benchmarking the same or different devices.  The
   information to be reported consists of the Configuration Sets,
   Instability Conditions, and Benchmarks, as defined in [4].  Example
   reporting formats for each are provided below.  Benchmarks MUST be
   reported as provided below.

3.3.1 Configuration Sets

   The minimum Configuration Set that MUST be used is as follows:

   PARAMETER                          UNITS
   Number of IGP Adjacencies          Adjacencies
   Number of IGP Routes               Routes
   Number of Nodes per Area           Nodes
   Number of Areas per Node           Areas
   SNMP GET Rate                      SNMP Gets/minute
   Telnet Establishment Rate          Sessions/Hour
   Concurrent Telnet Sessions         Sessions
   FTP Establishment Rate             Sessions/Hour
   Concurrent FTP Sessions            Sessions
   SSH Establishment Rate             Sessions/Hour
   Concurrent SSH Sessions            Sessions

   DATA TRAFFIC
   Traffic Forwarding                 Enabled/Disabled
   Aggregate Offered Load             bps (or pps)
   Number of Ingress Interfaces       interfaces
   Number of Egress Interfaces        interfaces
   Packet Size(s)                     bytes
   Offered Load (interface)           array of bps
   Number of Flows                    flows
   Encapsulation(flow)                array of encapsulation types
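   As an aid to repeatability, the reported Configuration Set could
   also be captured in a machine-readable record alongside the table
   above.  The following non-normative Python sketch shows one possible
   representation; the field names are hypothetical and the values are
   placeholders loosely borrowed from the example procedure in Section
   4.1.

      # Non-normative sketch: recording the minimum Configuration Set
      # as a machine-readable record for inclusion in the test report.
      # Field names and values are illustrative placeholders.
      import json

      minimum_configuration_set = {
          "igp_adjacencies": 30,            # Adjacencies
          "igp_routes": 10000,              # Routes
          "nodes_per_area": 250,            # Nodes
          "areas_per_node": 1,              # Areas
          "snmp_get_rate_per_minute": 60,   # SNMP Gets/minute
          "telnet_establishment_rate_per_hour": 6,
          "concurrent_telnet_sessions": 2,
          "ftp_establishment_rate_per_hour": 10,
          "concurrent_ftp_sessions": 1,
          "ssh_establishment_rate_per_hour": 6,
          "concurrent_ssh_sessions": 10,
          "data_traffic": {
              "traffic_forwarding": "Enabled",
              "aggregate_offered_load_bps": 10_000_000_000,
              "ingress_interfaces": 30,
              "egress_interfaces": 30,
              "packet_sizes_bytes": [64, 128, 256, 512, 1024, 1280,
                                     1518],
              "offered_load_per_interface_bps": [1_000_000_000] * 30,
              "flows": 10000,
              "encapsulation_flows": {"IPv4": 5000, "IPsec": 5000},
          },
      }

      print(json.dumps(minimum_configuration_set, indent=2))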
   Configuration Sets MAY include, and are not limited to, the
   following examples.

   Example Routing Protocol Configuration Set -
   PARAMETER                          UNITS
   BGP                                Enabled/Disabled
   Number of EBGP Peers               Peers
   Number of IBGP Peers               Peers
   Number of BGP Route Instances      Routes
   Number of BGP Installed Routes     Routes
   MBGP                               Enabled/Disabled
   Number of MBGP Route Instances     Routes
   Number of MBGP Installed Routes    Routes
   IGP                                Enabled/Disabled
   IGP-TE                             Enabled/Disabled
   Number of IGP Adjacencies          Adjacencies
   Number of IGP Routes               Routes
   Number of Nodes per Area           Nodes
   Number of Areas per Node           Areas

   Example MPLS Protocol Configuration Set -
   PARAMETER                          UNITS
   MPLS-TE                            Enabled/Disabled
   Number of Tunnels as Ingress       Tunnels
   Number of Tunnels as Mid-Point     Tunnels
   Number of Tunnels as Egress        Tunnels
   LDP                                Enabled/Disabled
   Number of Sessions                 Sessions
   Number of FECs                     FECs

   Example Multicast Protocol Configuration Set -
   PARAMETER                          UNITS
   PIM-SM                             Enabled/Disabled
   RP                                 Enabled/Disabled
   Number of Multicast Groups         Groups
   MSDP                               Enabled/Disabled

   Example Data Plane Configuration Set -
   PARAMETER                          UNITS
   Traffic Forwarding                 Enabled/Disabled
   Aggregate Offered Load             bps (or pps)
   Number of Ingress Interfaces       interfaces
   Number of Egress Interfaces        interfaces

   TRAFFIC PROFILE
   Packet Size(s)                     bytes
   Offered Load (interface)           array of bps
   Number of Flows                    flows
   Encapsulation(flow)                array of encapsulation types

   Example Management Configuration Set -
   PARAMETER                          UNITS
   SNMP GET Rate                      SNMP Gets/minute
   Logging                            Enabled/Disabled
   Protocol Debug                     Enabled/Disabled
   Telnet Establishment Rate          Sessions/Hour
   Concurrent Telnet Sessions         Sessions
   FTP Establishment Rate             Sessions/Hour
   Concurrent FTP Sessions            Sessions
   SSH Establishment Rate             Sessions/Hour
   Concurrent SSH Sessions            Sessions
   Packet Statistics Collector        Enabled/Disabled
   Statistics Sampling Rate           X:1 packets

   Example Security Configuration Set -
   PARAMETER                          UNITS
   Packet Filters                     Enabled/Disabled
   Number of Filters For-Me           filters
   Number of Filter Rules For-Me      rules
   Number of Traffic Filters          filters
   Number of Traffic Filter Rules     rules
   IPsec Tunnels                      tunnels
   RADIUS                             Enabled/Disabled
   TACACS                             Enabled/Disabled

   Example SIP Configuration Set -
   PARAMETER                          UNITS
   Session Rate                       Sessions per Second
   Media Streams per Session          Streams per session
   Total Sessions                     Sessions

3.3.2 Startup Conditions

   Startup Conditions MAY include, and are not limited to, the
   following examples:

   PARAMETER                          UNITS
   EBGP peering sessions negotiated   Total EBGP Sessions
   IBGP peering sessions negotiated   Total IBGP Sessions
   ISIS adjacencies established       Total ISIS Adjacencies
   ISIS routes learned rate           ISIS Routes per Second
   IPsec tunnels negotiated           Total IPsec Tunnels
   IPsec tunnel establishment rate    IPsec tunnels per second

3.3.3 Instability Conditions

   Instability Conditions MAY include, and are not limited to, the
   following examples:

   PARAMETER                          UNITS
   Interface Shutdown Cycling Rate    interfaces per minute
   ISIS Route Flap Rate               routes per minute
   LSP Reroute Rate                   LSPs per minute
   Overloaded Links                   number
   Amount Links Overloaded            % of bandwidth
   FTP Rate                           Mb/minute
   IPsec Tunnel Flap Rate             tunnels per minute
   Filter Policy Changes              policies per hour
   SSH Session Rate                   SSH sessions per hour
   Telnet Session Rate                Telnet sessions per hour
   Command Entry Rate                 Commands per Hour
   Message Flood Rate                 Messages
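   Instability Conditions such as those above are applied at a stated
   rate for the duration of the Instability phase.  The following
   non-normative sketch shows one way such a rate could be paced from a
   test script; apply_flap() is a hypothetical placeholder for a
   tester- or DUT-specific control action and is not defined by this
   document.

      # Non-normative sketch: pacing an Instability Condition at a
      # fixed rate, for example 100 route flaps per minute.
      import time

      def apply_flap(index):
          # Placeholder: withdraw and re-advertise one route, shut and
          # re-enable one interface, or tear down and re-establish one
          # IPsec tunnel, depending on the condition under test.
          pass

      def run_instability(rate_per_minute, duration_seconds):
          interval = 60.0 / rate_per_minute
          end = time.time() + duration_seconds
          count = 0
          while time.time() < end:
              apply_flap(count)
              count += 1
              time.sleep(interval)
          return count

      if __name__ == "__main__":
          events = run_instability(rate_per_minute=100,
                                   duration_seconds=600)
          print("applied", events, "instability events")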
3.3.4 Benchmarks

   Benchmarks are as defined in [4] and MUST be reported as follows:

   PARAMETER                                 UNITS     PHASE
   Stable Aggregate Forwarding Rate          pps       Startup
   Stable Latency                            seconds   Startup
   Stable Session Count                      sessions  Startup
   Unstable Aggregate Forwarding Rate        pps       Instability
   Degraded Aggregate Forwarding Rate        pps       Instability
   Ave. Degraded Aggregate Forwarding Rate   pps       Instability
   Unstable Latency                          seconds   Instability
   Unstable Uncontrolled Sessions Lost       sessions  Instability
   Recovered Aggregate Forwarding Rate       pps       Recovery
   Recovered Latency                         seconds   Recovery
   Recovery Time                             seconds   Recovery
   Recovered Uncontrolled Sessions           sessions  Recovery

4. Stress Test Procedure

4.1 General Methodology with Multiple Instability Conditions

   Objective
      To benchmark the DUT under accelerated stress when there are
      multiple Instability Conditions.

   Procedure

      1. Report Configuration Set
      2. Begin Startup Conditions with the DUT
      3. Establish Configuration Sets with the DUT
      4. Report Stability Benchmarks
      5. Apply Instability Conditions
      6. Apply Instability Condition specific to test case
      7. Report Instability Benchmarks
      8. Stop applying all Instability Conditions
      9. Report Recovery Benchmarks
      10. Optional - Change Configuration Set and/or Instability
          Conditions for next iteration

   Expected Results
      Ideally the Forwarding Rates, Latencies, and Session Counts will
      be measured to be the same at each phase.  If no packet or
      session loss occurs, then the Instability Conditions MAY be
      increased for a repeated iteration (step 10 of the procedure).
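   The example procedure below recommends that benchmarks be measured
   and recorded at one-second intervals.  The following non-normative
   sketch shows such sampling; the three read_* functions are
   hypothetical placeholders for tester-specific measurement calls.

      # Non-normative sketch: recording Stability, Instability, or
      # Recovery benchmarks at one-second intervals.  The read_*
      # functions are placeholders for tester-specific measurements.
      import statistics
      import time

      def read_forwarding_rate():   # aggregate forwarding rate, pps
          return 0.0

      def read_latency():           # latency, seconds
          return 0.0

      def read_session_count():     # uncontrolled sessions up
          return 0

      def sample_phase(duration_seconds):
          samples = []
          for _ in range(duration_seconds):
              samples.append({
                  "forwarding_rate_pps": read_forwarding_rate(),
                  "latency_s": read_latency(),
                  "sessions": read_session_count(),
              })
              time.sleep(1)
          rates = [s["forwarding_rate_pps"] for s in samples]
          # For the Instability phase, the mean of the per-second
          # rates gives the Ave. Degraded Aggregate Forwarding Rate.
          return samples, statistics.mean(rates)

      if __name__ == "__main__":
          samples, average_rate = sample_phase(5)
          print(len(samples), "samples, average rate", average_rate)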
   Example Procedure

      1. Report Configuration Set

         BGP Enabled
         10 EBGP Peers
         30 IBGP Peers
         500K BGP Route Instances
         160K BGP FIB Routes

         ISIS Enabled
         ISIS-TE Disabled
         30 ISIS Adjacencies
         10K ISIS Level-1 Routes
         250 ISIS Nodes per Area

         MPLS Disabled
         IP Multicast Disabled

         IPsec Enabled
         10K IPsec Tunnels
         640 Firewall Policies
         100 Firewall Rules per Policy

         Traffic Forwarding Enabled
         Aggregate Offered Load 10 Gbps
         30 Ingress Interfaces
         30 Egress Interfaces
         Packet Size(s) = 64, 128, 256, 512, 1024, 1280, 1518 bytes
         Forwarding Rate[1..30] = 1 Gbps
         10000 Flows
         Encapsulation[1..5000] = IPv4
         Encapsulation[5001..10000] = IPsec
         Logging Enabled
         Protocol Debug Disabled
         SNMP Enabled
         SSH Enabled
         10 Concurrent SSH Sessions
         FTP Enabled
         RADIUS Enabled
         TACACS Disabled
         Packet Statistics Collector Enabled

      2. Begin Startup Conditions with the DUT

         10 EBGP peering sessions negotiated
         30 IBGP peering sessions negotiated
         1K BGP routes learned per second
         30 ISIS Adjacencies
         1K ISIS routes learned per second
         10K IPsec tunnels negotiated

      3. Establish Configuration Sets with the DUT

      4. Report Stability Benchmarks as follows:

         Stable Aggregate Forwarding Rate
         Stable Latency
         Stable Session Count

         It is RECOMMENDED that the benchmarks be measured and
         recorded at one-second intervals.

      5. Apply Instability Conditions

         Interface Shutdown Cycling Rate = 1 interface every 5 minutes
         BGP Session Flap Rate = 1 session every 10 minutes
         BGP Route Flap Rate = 100 routes per minute
         ISIS Route Flap Rate = 100 routes per minute
         IPsec Tunnel Flap Rate = 1 tunnel per minute
         Overloaded Links = 5 of 30
         Amount Links Overloaded = 20%
         SNMP GETs = 1 per second
         SSH Session Rate = 6 sessions per hour
         SSH Session Duration = 10 minutes
         Command Rate via SSH = 20 commands per minute
         FTP Restart Rate = 10 continuous transfers (Puts/Gets)
                            per hour
         FTP Transfer Rate = 100 Mbps
         Statistics Sampling Rate = 1:1 packets
         RADIUS Server Loss Rate = 1 per hour
         RADIUS Server Loss Duration = 3 seconds

      6. Apply Instability Condition specific to test case

      7. Report Instability Benchmarks as follows:

         Unstable Aggregate Forwarding Rate
         Degraded Aggregate Forwarding Rate
         Ave. Degraded Aggregate Forwarding Rate
         Unstable Latency
         Unstable Uncontrolled Sessions Lost

         It is RECOMMENDED that the benchmarks be measured and
         recorded at one-second intervals.

      8. Stop applying all Instability Conditions

      9. Report Recovery Benchmarks as follows:

         Recovered Aggregate Forwarding Rate
         Recovered Latency
         Recovery Time
         Recovered Uncontrolled Sessions

         It is RECOMMENDED that the benchmarks be measured and
         recorded at one-second intervals.

      10. Optional - Change Configuration Set and/or Instability
          Conditions for next iteration

4.2 General Methodology with a Single Instability Condition

   Objective
      To benchmark the DUT under accelerated stress when there is a
      single Instability Condition.

   Procedure

      1. Report Configuration Set
      2. Begin Startup Conditions with the DUT
      3. Establish Configuration Sets with the DUT
      4. Report Stability Benchmarks
      5. Apply single Instability Condition
      6. Report Instability Benchmarks
      7. Stop applying the Instability Condition
      8. Report Recovery Benchmarks
      9. Optional - Change Configuration Set and/or Instability
         Condition for next iteration

   Expected Results
      Ideally the Forwarding Rates, Latencies, and Session Counts will
      be measured to be the same at each phase.  If no packet or
      session loss occurs, then the Instability Condition MAY be
      increased for a repeated iteration (step 9 of the procedure).
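   The general procedures in Sections 4.1 and 4.2 can be viewed as
   three measured phases: Startup, Instability, and Recovery.  The
   following non-normative sketch outlines one way a single iteration
   might be orchestrated; every function it calls is a hypothetical
   placeholder for the configuration, condition, measurement, and
   reporting steps described above.

      # Non-normative sketch: orchestration of one iteration of the
      # procedure in Section 4.1 or 4.2.  All callables passed in are
      # placeholders for tester- or DUT-specific actions.
      def run_iteration(configuration_set, instability_conditions,
                        establish, apply_condition, stop_condition,
                        sample_phase, report):
          report("Configuration Set", configuration_set)

          establish(configuration_set)                # Startup phase
          report("Stability Benchmarks", sample_phase(300))

          for condition in instability_conditions:    # Instability
              apply_condition(condition)
          report("Instability Benchmarks", sample_phase(600))

          for condition in instability_conditions:    # Recovery
              stop_condition(condition)
          report("Recovery Benchmarks", sample_phase(300))

      if __name__ == "__main__":
          # Trivial wiring with no-op placeholders, for illustration.
          run_iteration(
              configuration_set={"igp_adjacencies": 30},
              instability_conditions=["isis_route_flap"],
              establish=lambda cfg: None,
              apply_condition=lambda c: None,
              stop_condition=lambda c: None,
              sample_phase=lambda seconds: {"seconds": seconds},
              report=lambda name, value: print(name, value),
          )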
5. IANA Considerations

   This document requires no IANA considerations.

6. Security Considerations

   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to operating networks.

7. Normative References

   [1] Bradner, S., Editor, "Benchmarking Terminology for Network
       Interconnection Devices", RFC 1242, July 1991.

   [2] Mandeville, R., "Benchmarking Terminology for LAN Switching
       Devices", RFC 2285, February 1998.

   [3] Bradner, S. and McQuaid, J., "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.

   [4] Poretsky, S. and Rao, S., "Terminology for Accelerated Stress
       Benchmarking", draft-ietf-bmwg-acc-bench-term-11, work in
       progress, March 2007.

   [5] Bradner, S., "Key words for use in RFCs to Indicate Requirement
       Levels", RFC 2119, March 1997.

8. Informative References

   [RFC3871]  Jones, G., Ed., "Operational Security Requirements for
              Large Internet Service Provider (ISP) IP Network
              Infrastructure", RFC 3871, September 2004.

   [NANOG25]  Poretsky, S., "Core Router Evaluation for Higher
              Availability", NANOG 25, October 8, 2002, Toronto, CA.

   [IEEECQR]  Poretsky, S., "Router Stress Testing to Validate
              Readiness for Network Deployment", IEEE CQR 2003.

   [CONVMETH] Poretsky, S., "Benchmarking Methodology for IGP Data
              Plane Route Convergence",
              draft-ietf-bmwg-igp-dataplane-conv-meth-11, work in
              progress, March 2007.

9. Authors' Addresses

   Scott Poretsky
   Reef Point Systems
   8 New England Executive Park
   Burlington, MA 01803
   USA
   Phone: + 1 781 395 5090
   EMail: sporetsky@reefpoint.com

   Shankar Rao
   Qwest Communications
   1801 California Street
   8th Floor
   Denver, CO 80202
   USA
   Phone: + 1 303 437 6643
   EMail: shankar.rao@qwest.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2007).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.