idnits 2.17.1

draft-ietf-bmwg-acc-bench-meth-09.txt:

  Checking boilerplate required by RFC 5378 and the IETF Trust (see
  https://trustee.ietf.org/license-info):
  ----------------------------------------------------------------------------

  ** It looks like you're using RFC 3978 boilerplate.  You should update this
     to the boilerplate described in the IETF Trust License Policy document
     (see https://trustee.ietf.org/license-info), which is required now.

  -- Found old boilerplate from RFC 3978, Section 5.1 on line 17.

  -- Found old boilerplate from RFC 3978, Section 5.5, updated by RFC 4748 on
     line 616.

  -- Found old boilerplate from RFC 3979, Section 5, paragraph 1 on line 627.

  -- Found old boilerplate from RFC 3979, Section 5, paragraph 2 on line 634.

  -- Found old boilerplate from RFC 3979, Section 5, paragraph 3 on line 640.

  Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt:
  ----------------------------------------------------------------------------

     No issues found here.

  Checking nits according to https://www.ietf.org/id-info/checklist :
  ----------------------------------------------------------------------------

  ** The abstract seems to contain references ([4]), which it shouldn't.
     Please replace those with straight textual mentions of the documents in
     question.

  Miscellaneous warnings:
  ----------------------------------------------------------------------------

  == The copyright year in the IETF Trust Copyright Line does not match the
     current year

  -- The document seems to lack a disclaimer for pre-RFC5378 work, but may
     have content which was first submitted before 10 November 2008.  If you
     have contacted all the original authors and they are all willing to
     grant the BCP78 rights to the IETF Trust, then this is fine, and you can
     ignore this comment.  If not, you may need to add the pre-RFC5378
     disclaimer.  (See the Legal Provisions document at
     https://trustee.ietf.org/license-info for more information.)

  -- The document date (February 25, 2008) is 5899 days in the past.  Is this
     intentional?

  Checking references for intended status: Informational
  ----------------------------------------------------------------------------

  == Unused Reference: '1' is defined on line 546, but no explicit reference
     was found in the text

  == Unused Reference: '2' is defined on line 549, but no explicit reference
     was found in the text

  == Unused Reference: '3' is defined on line 552, but no explicit reference
     was found in the text

  == Unused Reference: 'RFC3871' is defined on line 564, but no explicit
     reference was found in the text

  == Unused Reference: 'NANOG25' is defined on line 568, but no explicit
     reference was found in the text

  == Unused Reference: 'IEEECQR' is defined on line 571, but no explicit
     reference was found in the text

  == Unused Reference: 'CONVMETH' is defined on line 574, but no explicit
     reference was found in the text

  == Outdated reference: A later version (-23) exists of
     draft-ietf-bmwg-igp-dataplane-conv-meth-15

     Summary: 2 errors (**), 0 flaws (~~), 9 warnings (==), 7 comments (--).

     Run idnits with the --verbose option for more detailed information about
     the items above.

--------------------------------------------------------------------------------
Network Working Group                                        S. Poretsky
Internet Draft                                         NextPoint Networks
Expires: August 2008
Intended Status: Informational                                Shankar Rao
                                                     Qwest Communications

                                                        February 25, 2008

                        Methodology Guidelines for
                      Accelerated Stress Benchmarking

Intellectual Property Rights (IPR) statement:
   By submitting this Internet-Draft, each author represents that any
   applicable patent or other IPR claims of which he or she is aware
   have been or will be disclosed, and any of which he or she becomes
   aware will be disclosed, in accordance with Section 6 of BCP 79.

Status of this Memo

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Copyright Notice

   Copyright (C) The IETF Trust (2008).

ABSTRACT

   Routers in an operational network are configured with multiple
   protocols and security policies while simultaneously forwarding
   traffic and being managed.  To accurately benchmark a router for
   deployment it is necessary to test the router in a lab environment
   under accelerated conditions, which is known as Stress Testing.
   This document provides the Methodology Guidelines for performing
   Accelerated Stress Benchmarking of networking devices.  The
   methodology is to be used with the companion terminology document
   [4].  These guidelines can be used as the basis for additional
   methodology documents that benchmark stress conditions for specific
   network technologies.

Table of Contents

   1. Introduction
   2. Existing definitions
   3. Test Setup
      3.1 Test Topologies
      3.2 Test Considerations
      3.3 Reporting Format
         3.3.1 Configuration Sets
         3.3.2 Startup Conditions
         3.3.3 Instability Conditions
         3.3.4 Benchmarks
   4. Stress Test Procedure
      4.1 General Methodology with Multiple Instability Conditions
      4.2 General Methodology with a Single Instability Condition
   5. IANA Considerations
   6. Security Considerations
   7. Normative References
   8. Informative References
   9. Authors' Addresses

1. Introduction

   Router testing benchmarks have consistently been made in a
   monolithic fashion wherein a single protocol or behavior is
   measured in an isolated environment.
   It is important to know the limits of a networking device's
   behavior for each protocol in isolation; however, this does not
   produce a reliable benchmark of the device's behavior in an
   operational network.  Routers in an operational network are
   configured with multiple protocols and security policies while
   simultaneously forwarding traffic and being managed.  To accurately
   benchmark a router for deployment it is necessary to test that
   router under operational conditions by simultaneously configuring
   and scaling network protocols and security policies, forwarding
   traffic, and managing the device.  It is helpful to accelerate
   these network operational conditions with Instability Conditions
   [4] so that the networking devices are stress tested.

   This document provides the methodology for performing Stress
   Benchmarking of networking devices.  Descriptions of the Test
   Topology, Benchmarks, and Reporting Format are provided in addition
   to procedures for conducting various test cases.  The methodology
   is to be used with the companion terminology document [4].

   Stress Testing of networking devices provides the following
   benefits:

      1. Evaluation of multiple protocols enabled simultaneously, as
         configured in deployed networks
      2. Evaluation of system and software stability
      3. Evaluation of manageability under stressful conditions
      4. Identification of buffer overflow conditions
      5. Identification of software coding bugs such as:
         a. Memory leaks
         b. Suboptimal CPU utilization
         c. Coding logic errors

   These benefits produce significant advantages for network
   operations:

      1. Increased stability of routers and protocols
      2. Routers hardened against DoS attacks
      3. Verified manageability under stress
      4. Planning of router resources for growth and scale

2. Existing definitions

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in BCP 14, RFC
   2119 [5].  RFC 2119 defines the use of these key words to help make
   the intent of standards track documents as clear as possible.
   While this document uses these keywords, this document is not a
   standards track document.

   Terms related to Accelerated Stress Benchmarking are defined in
   [4].

3. Test Setup

3.1 Test Topologies

   Figure 1 shows the physical configuration to be used for the
   methodologies provided in this document.  The number of interfaces
   between the tester and DUT scales with the number of control
   protocol sessions and traffic forwarding interfaces.  A separate
   device may be required to externally manage the DUT in the case
   that the test equipment does not support such functionality.
   Figure 2 shows the logical configuration for the stress test
   methodologies.  Each plane MAY be emulated by a single test device
   or by multiple test devices.

3.2 Test Considerations

   The Accelerated Stress Benchmarking test can be applied in service
   provider test environments to benchmark DUTs under stress in an
   environment that reflects conditions found in an operational
   network.  A particular Configuration Set is defined and the DUT is
   benchmarked using this Configuration Set and the Instability
   Conditions.  Varying Configuration Sets and/or Instability
   Conditions applied in an iterative fashion can provide an accurate
   characterization of the DUT to help determine future network
   deployments.
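   The iterative characterization described above lends itself to
   automation.  The following non-normative Python sketch illustrates
   the outer loop only; the tester object and its methods
   (apply_configuration_set, apply_instability_conditions,
   collect_benchmarks, and so on) are assumed, equipment-specific
   hooks rather than part of this methodology.

      # Hypothetical harness sketch: iterate Configuration Sets and
      # Instability Conditions, collecting the Startup, Instability,
      # and Recovery benchmarks defined in [4].  All driver methods
      # are placeholders for tester/DUT-specific automation.
      import itertools
      import time


      def run_iteration(config_set, instability_conditions, tester):
          tester.report("Configuration Set", config_set)    # report
          tester.apply_configuration_set(config_set)        # startup
          startup = tester.collect_benchmarks(phase="Startup")

          tester.apply_instability_conditions(instability_conditions)
          instability = tester.collect_benchmarks(phase="Instability")

          tester.stop_instability_conditions()
          recovery = tester.collect_benchmarks(phase="Recovery")
          return {"Startup": startup, "Instability": instability,
                  "Recovery": recovery}


      def characterize(config_sets, instability_sets, tester):
          # Vary Configuration Sets and Instability Conditions
          # iteratively and keep the per-iteration benchmark reports.
          results = []
          for cfg, instab in itertools.product(config_sets,
                                               instability_sets):
              results.append(run_iteration(cfg, instab, tester))
              time.sleep(60)   # assumed settling time between runs
          return results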
   For the management plane, SNMP Gets SHOULD be performed
   continuously.  Multiple management sessions SHOULD be open
   simultaneously, and sessions SHOULD be repeatedly opened and closed
   using access protocols such as Telnet and SSH.  Open management
   sessions SHOULD have valid and invalid configuration and show
   commands entered.  For the security plane, tunnels for protocols
   such as IPsec SHOULD be established and flapped.  Policies for
   firewalls and ACLs SHOULD be repeatedly added and removed via
   management sessions.

                         ___________
                        |    DUT    |
                     ___| Management|
                    |   |           |
                    |    -----------
                    \/
                ___________
               |           |
               |    DUT    |
          |--->|           |<---|
          |     -----------     |
       xN |                     | xN
   interfaces                interfaces
          |     ___________     |
          |    |           |    |
          |--->|  Tester   |<---|
               |           |
                -----------

             Figure 1. Physical Configuration

     ___________                 ___________
    |  Control  |               | Management|
    |   Plane   |___         ___|   Plane   |
    |           |   |       |   |           |
     -----------    |       |    -----------
                    \/      \/
                ___________          ___________
               |           |        |  Security |
               |    DUT    |<-------|   Plane   |
          |--->|           |<---|   |           |
          |     -----------     |    -----------
          |                     |
          |     ___________     |
          |    |   Data    |    |
          |--->|   Plane   |<---|
               |           |
                -----------

             Figure 2. Logical Configuration

3.3 Reporting Format

   Each methodology requires reporting of information to enable test
   repeatability when benchmarking the same or different devices.  The
   information to be reported consists of the Configuration Sets,
   Instability Conditions, and Benchmarks, as defined in [4].  Example
   reporting formats for each are provided below.  Benchmarks MUST be
   reported in the format provided below.

3.3.1 Configuration Sets

   The minimum Configuration Set that MUST be used is as follows:

      PARAMETER                        UNITS
      Number of IGP Adjacencies        Adjacencies
      Number of IGP Routes             Routes
      Number of Nodes per Area         Nodes
      Number of Areas per Node         Areas
      SNMP GET Rate                    SNMP Gets/minute
      Telnet Establishment Rate        Sessions/Hour
      Concurrent Telnet Sessions       Sessions
      FTP Establishment Rate           Sessions/Hour
      Concurrent FTP Sessions          Sessions
      SSH Establishment Rate           Sessions/Hour
      Concurrent SSH Sessions          Sessions

      DATA TRAFFIC
      Traffic Forwarding               Enabled/Disabled
      Aggregate Offered Load           bps (or pps)
      Number of Ingress Interfaces     Interfaces
      Number of Egress Interfaces      Interfaces
      Packet Size(s)                   Bytes
      Offered Load (interface)         Array of bps
      Number of Flows                  Flows
      Encapsulation (flow)             Array of encapsulation types
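   Recording the Configuration Set in a machine-readable form can make
   results easier to compare across DUTs and test runs.  The following
   non-normative Python sketch shows one possible representation of
   the minimum Configuration Set above; the class name, field names,
   and example values are illustrative assumptions only.

      # Hypothetical machine-readable form of the minimum
      # Configuration Set.  Fields mirror the parameters above;
      # units are noted in comments.
      from dataclasses import dataclass, asdict
      from typing import List
      import json


      @dataclass
      class MinimumConfigurationSet:
          igp_adjacencies: int             # Adjacencies
          igp_routes: int                  # Routes
          nodes_per_area: int              # Nodes
          areas_per_node: int              # Areas
          snmp_get_rate: int               # SNMP Gets/minute
          telnet_establishment_rate: int   # Sessions/Hour
          concurrent_telnet_sessions: int
          ftp_establishment_rate: int      # Sessions/Hour
          concurrent_ftp_sessions: int
          ssh_establishment_rate: int      # Sessions/Hour
          concurrent_ssh_sessions: int
          # Data traffic
          traffic_forwarding: bool
          aggregate_offered_load_bps: int
          ingress_interfaces: int
          egress_interfaces: int
          packet_sizes_bytes: List[int]
          offered_load_per_interface_bps: List[int]
          flows: int
          encapsulation_per_flow: List[str]


      # Example report for one test run (values are illustrative).
      config = MinimumConfigurationSet(
          igp_adjacencies=30, igp_routes=10000, nodes_per_area=250,
          areas_per_node=1, snmp_get_rate=60,
          telnet_establishment_rate=6, concurrent_telnet_sessions=2,
          ftp_establishment_rate=10, concurrent_ftp_sessions=1,
          ssh_establishment_rate=6, concurrent_ssh_sessions=10,
          traffic_forwarding=True,
          aggregate_offered_load_bps=10_000_000_000,
          ingress_interfaces=30, egress_interfaces=30,
          packet_sizes_bytes=[64, 128, 256, 512, 1024, 1280, 1518],
          offered_load_per_interface_bps=[1_000_000_000] * 30,
          flows=10000,
          encapsulation_per_flow=["IPv4"] * 5000 + ["IPsec"] * 5000)
      print(json.dumps(asdict(config), indent=2))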
   Configuration Sets MAY include, but are not limited to, the
   following examples.

   Example Routing Protocol Configuration Set -

      PARAMETER                        UNITS
      BGP                              Enabled/Disabled
      Number of EBGP Peers             Peers
      Number of IBGP Peers             Peers
      Number of BGP Route Instances    Routes
      Number of BGP Installed Routes   Routes
      MBGP                             Enabled/Disabled
      Number of MBGP Route Instances   Routes
      Number of MBGP Installed Routes  Routes
      IGP                              Enabled/Disabled
      IGP-TE                           Enabled/Disabled
      Number of IGP Adjacencies        Adjacencies
      Number of IGP Routes             Routes
      Number of Nodes per Area         Nodes
      Number of Areas per Node         Areas

   Example MPLS Protocol Configuration Set -

      PARAMETER                        UNITS
      MPLS-TE                          Enabled/Disabled
      Number of Tunnels as Ingress     Tunnels
      Number of Tunnels as Mid-Point   Tunnels
      Number of Tunnels as Egress      Tunnels
      LDP                              Enabled/Disabled
      Number of Sessions               Sessions
      Number of FECs                   FECs

   Example Multicast Protocol Configuration Set -

      PARAMETER                        UNITS
      PIM-SM                           Enabled/Disabled
      RP                               Enabled/Disabled
      Number of Multicast Groups       Groups
      MSDP                             Enabled/Disabled

   Example Data Plane Configuration Set -

      PARAMETER                        UNITS
      Traffic Forwarding               Enabled/Disabled
      Number of Ingress Interfaces     Interfaces
      Number of Egress Interfaces      Interfaces

      TRAFFIC PROFILE
      Packet Size(s)                   Bytes
      Packet Rate (interface)          Array of packets per second
      Aggregate Offered Load           pps
      Number of Flows                  Flows
      Traffic Type                     Array of (RTP, UDP, TCP, other)
      Encapsulation (flow)             Array of encapsulation types
      Mirroring                        Enabled/Disabled

   Example Management Configuration Set -

      PARAMETER                        UNITS
      SNMP GET Rate                    SNMP Gets/minute
      Logging                          Enabled/Disabled
      Protocol Debug                   Enabled/Disabled
      Telnet Establishment Rate        Sessions/Hour
      Concurrent Telnet Sessions       Sessions
      FTP Establishment Rate           Sessions/Hour
      Concurrent FTP Sessions          Sessions
      SSH Establishment Rate           Sessions/Hour
      Concurrent SSH Sessions          Sessions
      Packet Statistics Collector      Enabled/Disabled
      Statistics Sampling Rate         X:1 packets

   Example Security Configuration Set -

      PARAMETER                        UNITS
      Packet Filters                   Enabled/Disabled
      Number of Filters For-Me         Filters
      Number of Filter Rules For-Me    Rules
      Number of Traffic Filters        Filters
      Number of Traffic Filter Rules   Rules
      IPsec Tunnels                    Tunnels
      RADIUS                           Enabled/Disabled
      TACACS                           Enabled/Disabled

   Example SIP Configuration Set -

      PARAMETER                        UNITS
      Session Rate                     Sessions per second
      Media Streams per Session        Streams per session
      Total Sessions                   Sessions

3.3.2 Startup Conditions

   Startup Conditions MAY include, but are not limited to, the
   following examples:

      PARAMETER                           UNITS
      EBGP peering sessions negotiated    Total EBGP Sessions
      IBGP peering sessions negotiated    Total IBGP Sessions
      ISIS adjacencies established        Total ISIS Adjacencies
      ISIS routes learned rate            ISIS Routes per second
      IPsec tunnels negotiated            Total IPsec Tunnels
      IPsec tunnel establishment rate     IPsec Tunnels per second
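   When the procedure is automated, the Startup Conditions are
   normally verified before the Stability Benchmarks are recorded.
   The following non-normative Python sketch shows one way such a
   check might be implemented; the polling interval, timeout, and the
   tester-supplied counter functions are assumptions, not
   requirements.

      # Hypothetical startup-condition check: poll the tester/DUT
      # until each startup target is reached or a timeout expires.
      import time


      def wait_for(description, get_count, target,
                   timeout_s=600, poll_s=5):
          """Poll get_count() until it reaches target or times out."""
          deadline = time.monotonic() + timeout_s
          while time.monotonic() < deadline:
              count = get_count()
              if count >= target:
                  print(f"{description}: reached {count}/{target}")
                  return True
              time.sleep(poll_s)
          raise TimeoutError(f"{description}: only {get_count()} of "
                             f"{target} after {timeout_s}s")


      # Example usage with placeholder counter functions supplied by
      # the tester's API (assumed):
      # wait_for("EBGP sessions negotiated", tester.ebgp_sessions, 10)
      # wait_for("ISIS adjacencies", tester.isis_adjacencies, 30)
      # wait_for("IPsec tunnels negotiated", tester.ipsec_tunnels,
      #          10000)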
3.3.3 Instability Conditions

   Instability Conditions MAY include, but are not limited to, the
   following examples:

      PARAMETER                         UNITS
      Interface Shutdown Cycling Rate   Interfaces per minute
      ISIS Route Flap Rate              Routes per minute
      LSP Reroute Rate                  LSPs per minute
      Overloaded Links                  Number of links
      Amount Links Overloaded           % of bandwidth
      FTP Rate                          Mb/minute
      IPsec Tunnel Flap Rate            Tunnels per minute
      Filter Policy Changes             Policies per hour
      SSH Session Rate                  SSH sessions per hour
      Telnet Session Rate               Telnet sessions per hour
      Command Entry Rate                Commands per hour
      Message Flood Rate                Messages

3.3.4 Benchmarks

   Benchmarks are as defined in [4] and MUST be reported as follows:

      PARAMETER                                 UNITS      PHASE
      Stable Aggregate Forwarding Rate          pps        Startup
      Stable Latency                            seconds    Startup
      Stable Session Count                      sessions   Startup
      Unstable Aggregate Forwarding Rate        pps        Instability
      Degraded Aggregate Forwarding Rate        pps        Instability
      Ave. Degraded Aggregate Forwarding Rate   pps        Instability
      Unstable Latency                          seconds    Instability
      Unstable Uncontrolled Sessions Lost       sessions   Instability
      Recovered Aggregate Forwarding Rate       pps        Recovery
      Recovered Latency                         seconds    Recovery
      Recovery Time                             seconds    Recovery
      Recovered Uncontrolled Sessions           sessions   Recovery

4. Stress Test Procedure

4.1 General Methodology with Multiple Instability Conditions

   Objective

      To benchmark the DUT under accelerated stress when there are
      multiple Instability Conditions.

   Procedure

      1. Report Configuration Set
      2. Begin Startup Conditions with the DUT
      3. Establish Configuration Sets with the DUT
      4. Report Stability Benchmarks
      5. Apply Instability Conditions
      6. Apply the Instability Condition specific to the test case
      7. Report Instability Benchmarks
      8. Stop applying all Instability Conditions
      9. Report Recovery Benchmarks
      10. Optional - Change the Configuration Set and/or Instability
          Conditions for the next iteration

   Expected Results

      Ideally the Forwarding Rates, Latencies, and Session Counts will
      be measured to be the same at each phase.  If no packet or
      session loss occurs, then the Instability Conditions MAY be
      increased for a repeated iteration (step 10 of the procedure).
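   The Example Procedure below recommends measuring benchmarks at
   one-second intervals during the Startup, Instability, and Recovery
   phases (steps 4, 7, and 9).  The following non-normative Python
   sketch illustrates such a sampler; the read_* callables are assumed
   hooks into the tester's statistics API, not a standard interface.

      # Hypothetical one-second benchmark sampler used during the
      # Startup, Instability, and Recovery phases.
      import csv
      import time


      def sample_benchmarks(phase, read_forwarding_rate_pps,
                            read_latency_s, read_session_count,
                            duration_s, outfile):
          """Record one sample per second for duration_s seconds."""
          with open(outfile, "w", newline="") as f:
              writer = csv.writer(f)
              writer.writerow(["phase", "t", "forwarding_rate_pps",
                               "latency_s", "session_count"])
              start = time.monotonic()
              for t in range(duration_s):
                  writer.writerow([phase, t,
                                   read_forwarding_rate_pps(),
                                   read_latency_s(),
                                   read_session_count()])
                  # Sleep until the next whole second from the start.
                  time.sleep(max(0.0, (t + 1) -
                                 (time.monotonic() - start)))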
   Example Procedure

   1. Report Configuration Set

         BGP Enabled
         10 EBGP Peers
         30 IBGP Peers
         500K BGP Route Instances
         160K BGP FIB Routes

         ISIS Enabled
         ISIS-TE Disabled
         30 ISIS Adjacencies
         10K ISIS Level-1 Routes
         250 ISIS Nodes per Area

         MPLS Disabled
         IP Multicast Disabled

         IPsec Enabled
         10K IPsec Tunnels
         640 Firewall Policies
         100 Firewall Rules per Policy

         Traffic Forwarding Enabled
         Aggregate Offered Load 10 Gbps
         30 Ingress Interfaces
         30 Egress Interfaces
         Packet Size(s) = 64, 128, 256, 512, 1024, 1280, 1518 bytes
         Forwarding Rate[1..30] = 1 Gbps
         10000 Flows
         Encapsulation[1..5000] = IPv4
         Encapsulation[5001..10000] = IPsec

         Logging Enabled
         Protocol Debug Disabled
         SNMP Enabled
         SSH Enabled
         10 Concurrent SSH Sessions
         FTP Enabled
         RADIUS Enabled
         TACACS Disabled
         Packet Statistics Collector Enabled

   2. Begin Startup Conditions with the DUT

         10 EBGP peering sessions negotiated
         30 IBGP peering sessions negotiated
         1K BGP routes learned per second
         30 ISIS adjacencies established
         1K ISIS routes learned per second
         10K IPsec tunnels negotiated

   3. Establish Configuration Sets with the DUT

   4. Report Stability Benchmarks as follows:

         Stable Aggregate Forwarding Rate
         Stable Latency
         Stable Session Count

      It is RECOMMENDED that the benchmarks be measured and recorded
      at one-second intervals.

   5. Apply Instability Conditions

         Interface Shutdown Cycling Rate = 1 interface every 5 minutes
         BGP Session Flap Rate = 1 session every 10 minutes
         BGP Route Flap Rate = 100 routes per minute
         ISIS Route Flap Rate = 100 routes per minute
         IPsec Tunnel Flap Rate = 1 tunnel per minute
         Overloaded Links = 5 of 30
         Amount Links Overloaded = 20%
         SNMP GETs = 1 per second
         SSH Session Rate = 6 sessions per hour
         SSH Session Duration = 10 minutes
         Command Rate via SSH = 20 commands per minute
         FTP Restart Rate = 10 continuous transfers (Puts/Gets)
            per hour
         FTP Transfer Rate = 100 Mbps
         Statistics Sampling Rate = 1:1 packets
         RADIUS Server Loss Rate = 1 per hour
         RADIUS Server Loss Duration = 3 seconds

   6. Apply the Instability Condition specific to the test case.

   7. Report Instability Benchmarks as follows:

         Unstable Aggregate Forwarding Rate
         Degraded Aggregate Forwarding Rate
         Ave. Degraded Aggregate Forwarding Rate
         Unstable Latency
         Unstable Uncontrolled Sessions Lost

      It is RECOMMENDED that the benchmarks be measured and recorded
      at one-second intervals.

   8. Stop applying all Instability Conditions

   9. Report Recovery Benchmarks as follows:

         Recovered Aggregate Forwarding Rate
         Recovered Latency
         Recovery Time
         Recovered Uncontrolled Sessions

      It is RECOMMENDED that the benchmarks be measured and recorded
      at one-second intervals.

   10. Optional - Change the Configuration Set and/or Instability
       Conditions for the next iteration.

4.2 General Methodology with a Single Instability Condition

   Objective

      To benchmark the DUT under accelerated stress when there is a
      single Instability Condition.

   Procedure

      1. Report Configuration Set
      2. Begin Startup Conditions with the DUT
      3. Establish Configuration Sets with the DUT
      4. Report Stability Benchmarks
      5. Apply a single Instability Condition
      6. Report Instability Benchmarks
      7. Stop applying the Instability Condition
      8. Report Recovery Benchmarks
      9. Optional - Change the Configuration Set and/or Instability
         Condition for the next iteration

   Expected Results

      Ideally the Forwarding Rates, Latencies, and Session Counts will
      be measured to be the same at each phase.  If no packet or
      session loss occurs, then the Instability Condition MAY be
      increased for a repeated iteration (step 9 of the procedure).
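   For the single-condition methodology, the Instability Condition is
   applied at a controlled rate while the benchmarks are sampled.  The
   following non-normative Python sketch illustrates one such
   condition, Interface Shutdown Cycling, applied at a configurable
   rate; the dut.set_interface_state() call is an assumed driver hook,
   not a standard interface.

      # Hypothetical example of applying one Instability Condition:
      # interface shutdown cycling at a fixed rate.
      import time


      def cycle_interfaces(dut, interfaces, cycles_per_minute,
                           hold_down_s=10, duration_s=3600):
          """Shut and restore one interface at a time at the rate."""
          period_s = 60.0 / cycles_per_minute   # time budget per cycle
          end = time.monotonic() + duration_s
          i = 0
          while time.monotonic() < end:
              ifname = interfaces[i % len(interfaces)]
              dut.set_interface_state(ifname, up=False)  # apply
              time.sleep(hold_down_s)                    # hold down
              dut.set_interface_state(ifname, up=True)   # restore
              time.sleep(max(period_s - hold_down_s, 0)) # pace rate
              i += 1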
5. IANA Considerations

   This document requires no IANA considerations.

6. Security Considerations

   Documents of this type do not directly affect the security of the
   Internet or of corporate networks as long as benchmarking is not
   performed on devices or systems connected to operating networks.

7. Normative References

   [1] Bradner, S., Editor, "Benchmarking Terminology for Network
       Interconnection Devices", RFC 1242, October 1991.

   [2] Mandeville, R., "Benchmarking Terminology for LAN Switching
       Devices", RFC 2285, October 1998.

   [3] Bradner, S. and McQuaid, J., "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.

   [4] Poretsky, S. and Rao, S., "Terminology for Accelerated Stress
       Benchmarking", draft-ietf-bmwg-acc-bench-term-13, work in
       progress, February 2008.

   [5] Bradner, S., "Key words for use in RFCs to Indicate Requirement
       Levels", RFC 2119, March 1997.

8. Informative References

   [RFC3871]  Jones, G., Ed., "Operational Security Requirements for
              Large Internet Service Provider (ISP) IP Network
              Infrastructure", RFC 3871, September 2004.

   [NANOG25]  Poretsky, S., "Core Router Evaluation for Higher
              Availability", NANOG 25, October 8, 2002, Toronto, CA.

   [IEEECQR]  Poretsky, S., "Router Stress Testing to Validate
              Readiness for Network Deployment", IEEE CQR 2003.

   [CONVMETH] Poretsky, S., "Benchmarking Methodology for IGP Data
              Plane Route Convergence",
              draft-ietf-bmwg-igp-dataplane-conv-meth-15, work in
              progress, February 2008.

9. Authors' Addresses

   Scott Poretsky
   NextPoint Networks
   3 Federal Street
   Billerica, MA 01821
   USA
   Phone: + 1 508 439 9008
   EMail: sporetsky@nextpointnetworks.com

   Shankar Rao
   Qwest Communications
   1801 California Street
   8th Floor
   Denver, CO 80202
   USA
   Phone: + 1 303 437 6643
   EMail: shankar.rao@qwest.com

Full Copyright Statement

   Copyright (C) The IETF Trust (2008).

   This document is subject to the rights, licenses and restrictions
   contained in BCP 78, and except as set forth therein, the authors
   retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY, THE
   IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Intellectual Property

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard.  Please address the information to the IETF at
   ietf-ipr@ietf.org.

Acknowledgement

   Funding for the RFC Editor function is currently provided by the
   Internet Society.