Benchmarking Methodology WG                                 Rajiv Asati
Internet Draft                                                    Cisco
Updates: 2544 (if approved)                            Carlos Pignataro
Intended status: Informational                                    Cisco
Expires: June 2011                                    Fernando Calabria
                                                                  Cisco
                                                            Cesar Olvera
                                                            Consulintel

                                                       December 8, 2010

                      Device Reset Characterization
                        draft-ietf-bmwg-reset-04

Abstract

An operational forwarding device may need to be re-started
(automatically or manually) for a variety of reasons, an event that we
call a "reset" in this document.  Since there may be an interruption in
the forwarding operation during a reset, it is useful to know how long
a device takes to resume the forwarding operation.

This document specifies a methodology for characterizing reset (and
recovery time) during benchmarking of forwarding devices, and provides
clarity and consistency in reset test procedures beyond what is
specified in RFC 2544.  It therefore updates RFC 2544.

Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups.  Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time.  It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire on June 8, 2011.

Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the
document authors.  All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document.  Please review these documents
carefully, as they describe your rights and restrictions with respect
to this document.  Code Components extracted from this document must
include Simplified BSD License text as described in Section 4.e of the
Trust Legal Provisions and are provided without warranty as described
in the Simplified BSD License.

Table of Contents

   1. Key Words to Reflect Requirements
   2. Introduction
      2.1. Scope
      2.2. Recovery Time Measurement Methods
      2.3. Reporting Format
   3. Test Requirements
   4. Reset Test
      4.1. Hardware Reset Test
         4.1.1. Routing Processor (RP) / Routing Engine Reset
            4.1.1.1. RP Reset for a single-RP device (REQUIRED)
            4.1.1.2. RP Switchover for a multiple-RP device (OPTIONAL)
         4.1.2. Line Card (LC) Removal and Insertion (REQUIRED)
      4.2. Software Reset Test
         4.2.1. Operating System (OS) Reset (REQUIRED)
         4.2.2. Process Reset (OPTIONAL)
      4.3. Power Interruption Test
         4.3.1. Power Interruption (REQUIRED)
   5. Security Considerations
   6. IANA Considerations
   7. Acknowledgments
   8. References
      8.1. Normative References
      8.2. Informative References
   Authors' Addresses

1. Key Words to Reflect Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in BCP 14, RFC 2119
[RFC2119].  RFC 2119 defines the use of these key words to help make
the intent of standards track documents as clear as possible.  While
this document uses these keywords, this document is not a standards
track document.

2. Introduction

An operational forwarding device (or one of its components) may need
to be re-started for a variety of reasons, an event that we call a
"reset" in this document.  Since there may be an interruption in the
forwarding operation during a reset, it is useful to know how long a
device takes to resume the forwarding operation.
In other words, it is desired to know the duration of the recovery
time following the reset.

However, the answer is no longer simple and straightforward, as modern
forwarding devices employ many hardware advancements (distributed
forwarding, etc.) and software advancements (graceful restart, etc.)
that influence the recovery time after the reset.

2.1. Scope

This document specifies a methodology for characterizing reset (and
recovery time) during benchmarking of forwarding devices, and provides
clarity and consistency in reset procedures beyond what is specified
in [RFC2544].  Software upgrades involve additional benchmarking
complexities and are outside the scope of this document.

These procedures may be used by other benchmarking documents such as
[RFC2544], [RFC5180], and [RFC5695], and it is expected that other
protocol-specific benchmarking documents will reference this document
for reset recovery time characterization.

This document updates Section 26.6 of [RFC2544].

This document focuses only on the reset criterion of benchmarking, and
presumes that it would be beneficial to [RFC5180], [RFC5695], and
other BMWG benchmarking efforts.

2.2. Recovery Time Measurement Methods

The 'recovery time' is the time during which traffic forwarding is
temporarily interrupted following a reset event.  Strictly speaking,
this is the time over which one or more frames are lost.  This
definition is similar to that of the 'Loss of connectivity period'
defined in Section 4 of [IGPConv].

There are two accepted methods to measure the 'recovery time':

1. Frame-Loss Method - This method requires test tool capability to
   monitor the number of lost frames.  In this method, the offered
   stream rate (frames per second) must be known.  The recovery time
   is calculated per the equation below:

                          Frames_lost (packets)
      Recovery_time = -------------------------------------
                       Offered_rate (packets per second)

2. Time-Stamp Method - This method requires test tool capability to
   timestamp each frame.  In this method, the test tool timestamps
   each transmitted frame and monitors each received frame's
   timestamp.  During the test, the test tool records the timestamp
   (Timestamp A) of the frame that was last received prior to the
   reset interruption and the timestamp (Timestamp B) of the first
   frame received after the interruption stopped.  The difference
   between Timestamp B and Timestamp A is the recovery time.

The tester / operator MAY use either method for recovery time
measurement, depending on the test tool capability.  However, the
Frame-Loss method SHOULD be used if the test tool is capable of (a)
counting the number of lost frames per stream, and (b) transmitting
test frames regardless of the physical link status, whereas the
Time-Stamp method SHOULD be used if the test tool is capable of (a)
timestamping each frame, (b) monitoring each received frame's
timestamp, and (c) transmitting frames only if the physical link
status is UP.
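As an illustration, the following minimal sketch (in Python) computes
the recovery time with both methods; the frame counts, offered rate,
and timestamps are assumed to be reported by the test tool, and all
names are illustrative rather than prescribed by this document.

   # Minimal sketch of the two measurement methods of Section 2.2.
   # Inputs are assumed to come from the test tool; names are
   # hypothetical.

   def recovery_time_frame_loss(frames_lost, offered_rate_fps):
       """Frame-Loss Method: Recovery_time = Frames_lost / Offered_rate."""
       return frames_lost / offered_rate_fps        # seconds

   def recovery_time_timestamp(timestamp_a_s, timestamp_b_s):
       """Time-Stamp Method: last frame before (A) and first frame
       after (B) the forwarding interruption."""
       return timestamp_b_s - timestamp_a_s         # seconds

   # Example: 2500 frames lost at an offered rate of 10000 frames per
   # second corresponds to a 0.25 second (250 ms) recovery time.
   assert recovery_time_frame_loss(2500, 10000) == 0.25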
2.3. Reporting Format

All reset results are reported in a simple statement including the
frame loss (if measured) and recovery times.

For each test case, it is RECOMMENDED that the following parameters be
reported in these units:

   Parameter                  Units or Examples
   -----------------------------------------------------------------
   Throughput                 Frames per second and bits per second
   Loss (average)             Frames
   Recovery Time (average)    Milliseconds
   Number of trials           Integer count
   Protocol                   IPv4, IPv6, MPLS, etc.
   Frame Size                 Octets
   Port Media                 Ethernet, GigE (Gigabit Ethernet),
                              POS (Packet over SONET), etc.
   Port Speed                 10 Gbps, 1 Gbps, 100 Mbps, etc.
   Interface Encap.           Ethernet, Ethernet VLAN, PPP, HDLC, etc.

For mixed protocol environments, frames SHOULD be distributed between
all the different protocols.  The distribution MAY approximate the
network conditions of deployment.  In all cases, the details of the
mixed protocol distribution MUST be included in the reporting.

Additionally, the DUT (Device Under Test) / SUT (System Under Test)
and test bed provisioning, port and Line Card arrangement,
configuration, and deployed methodologies that may influence the
overall recovery time MUST be listed.  (Refer to the additional
factors listed in Section 3.)

The reporting of results MUST regard the repeatability considerations
from Section 4 of [RFC2544].  It is RECOMMENDED to perform multiple
trials and report average results.

3. Test Requirements

In order to provide consistency and fairness when benchmarking a set
of different DUTs, the network tester / operator MUST (a) use
identical control plane and data plane information during testing, and
(b) document and report any factors that may influence the overall
recovery / convergence time after a reset.

Some of these factors include:

1. Type of reset - Hardware (line-card crash, etc.) vs. Software
   (protocol reset, process crash, etc.), or even complete power
   failures

2. Manual vs. Automatic reset

3. Scheduled vs. non-scheduled reset

4. Local vs. Remote reset

5. Scale - Number of line cards present vs. in-use

6. Scale - Number of physical and logical interfaces

7. Scale - Number of routing protocol instances

8. Scale - Number of Routing Table entries

9. Scale - Number of Route Processors available

10. Performance - Redundancy strategy deployed for route processors
    and line cards

11. Performance - Interface encapsulation as well as achievable
    Throughput [RFC2544]

12. Any other internal or external factor that may influence recovery
    time after a hardware or software reset

The recovery time is one of the key characterization results reported
after each test run.  While the recovery time during a reset test
event may be zero, there may still be effects on traffic, such as
transient delay variation or increased latency; however, such effects
are not covered and are deemed outside the scope of this document.  In
this case, only "no loss" is reported.
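To make the reporting parameters of Section 2.3 and the influencing
factors above concrete, the following is a hedged sketch of a
per-test-case report record; the field names are illustrative and not
prescribed by this document.

   # Illustrative per-test-case report record covering the parameters
   # of Section 2.3 and the factors of Section 3.  Field names are
   # hypothetical.

   from dataclasses import dataclass, field

   @dataclass
   class ResetTestReport:
       test_case: str                 # e.g., "4.3.1 Power Interruption"
       reset_type: str                # hardware, software, or power interruption
       throughput_fps: float          # frames per second
       throughput_bps: float          # bits per second
       loss_avg_frames: float         # frames (if measured)
       recovery_time_avg_ms: float    # milliseconds
       number_of_trials: int          # integer count
       protocol: str                  # IPv4, IPv6, MPLS, etc.
       frame_size_octets: int         # octets
       port_media: str                # Ethernet, POS, etc.
       port_speed: str                # "10 Gbps", "1 Gbps", "100 Mbps", ...
       interface_encap: str           # Ethernet, Ethernet VLAN, PPP, HDLC, ...
       influencing_factors: list = field(default_factory=list)  # Section 3 items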
4. Reset Test

This section contains the description of the tests that are related to
the characterization of the time needed for DUTs (Device Under Test) /
SUTs (System Under Test) to recover from a reset.  There are three
types of reset considered in this document:

1. Hardware resets

2. Software resets

3. Power interruption

Different types of reset have potentially different impact on the
forwarding behavior of the device.  As an example, a software reset
(of a routing process) might not result in a forwarding interruption,
whereas a hardware reset (of a line card) most likely will.

Section 4.1 describes various hardware resets, whereas Section 4.2
describes various software resets.  Additionally, Section 4.3
describes power interruption tests.  These sections define and
characterize these resets.

Additionally, since device-specific implementations may vary for
hardware and software type resets, it is desirable to classify each
test case as "REQUIRED" or "OPTIONAL".

4.1. Hardware Reset Test

A test designed to characterize the time it takes a DUT to recover
from a hardware reset.

A "hardware reset" generally involves the re-initialization of one or
more physical components in the DUT, but not the entire DUT.

A hardware reset is executed by the operator, for example, by
physically removing a hardware component, by pressing a "reset" button
for the component, or by triggering it from the command line
interface.

Reset procedures that do not require the physical removal and
insertion of a hardware component are RECOMMENDED.  These include
using the CLI or a physical switch or button.  If such procedures
cannot be performed (e.g., for lack of platform support, or because
the corresponding Test Case calls for them), human operation time
SHOULD be minimized across different platforms and Test Cases as much
as possible, and variation in human operator time SHOULD also be
minimized across different vendors' products as much as practical, by
having the same person perform the operation and by practicing the
operation.

For routers that do not contain separate Routing Processor and Line
Card modules, the hardware reset tests are not performed since they
are not relevant; instead, the power interruption tests MUST be
performed (see Section 4.3) in these cases.

4.1.1. Routing Processor (RP) / Routing Engine Reset

The Routing Processor (RP) is the DUT module that is primarily
concerned with Control Plane functions.

4.1.1.1. RP Reset for a single-RP device (REQUIRED)

Objective

To characterize the time needed for a DUT to recover from a Route
Processor hardware reset in a single-RP environment.

Procedure

First, ensure that the RP is in a permanent state to which it will
return after the reset, by performing some or all of the following
operational tasks: save the current DUT configuration, specify boot
parameters, ensure the appropriate software files are available, or
perform additional Operating System or hardware-related tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed.  The
traffic should use the minimum frame size possible on the media used
in the testing, and the rate should be sufficient for the DUT to
attain the maximum forwarding throughput.  This enables a finer
granularity in the recovery time measurement.

Third, perform the Route Processor (RP) hardware reset at this point.
This entails, for example, physically removing the RP to later
re-insert it, or triggering a hardware reset by other means (e.g.,
command line interface, physical switch, etc.).

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting format

The reporting format is defined in Section 2.3.
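This procedure (and the analogous procedures in the test cases that
follow) shares a common structure; the minimal sketch below shows how
a tester might script it, assuming a hypothetical test-tool and DUT
control API (start_traffic, frames_lost, trigger_reset, etc.) that
this document does not define.

   # Hedged sketch of the common procedure pattern: confirm steady
   # forwarding for at least 15 seconds, trigger the reset under test,
   # and derive the recovery time with the Frame-Loss Method of
   # Section 2.2.  The tool/DUT API used here is hypothetical.

   import time

   MIN_STEADY_STATE_S = 15    # forwarding must be verified this long

   def run_reset_trial(tool, dut, offered_rate_fps, frame_size_octets):
       # Offer minimum-size frames at a rate sufficient for maximum
       # forwarding throughput, and confirm steady forwarding.
       tool.start_traffic(rate_fps=offered_rate_fps,
                          frame_size=frame_size_octets)
       time.sleep(MIN_STEADY_STATE_S)
       if tool.frames_lost() != 0:
           raise RuntimeError("DUT is not in a steady forwarding state")

       # Trigger the reset under test (RP removal, CLI command, etc.).
       dut.trigger_reset()

       # After forwarding resumes, compute the recovery time from the
       # total frame loss.
       tool.wait_until_forwarding_resumes()
       tool.stop_traffic()
       return tool.frames_lost() / offered_rate_fps    # seconds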
4.1.1.2. RP Switchover for a multiple-RP device (OPTIONAL)

Objective

To characterize the time needed for the "secondary" Route Processor
(sometimes referred to as the "backup" RP) of a DUT to become active
after a "primary" (or "active") Route Processor hardware reset.  This
process is often referred to as "RP Switchover".  The characterization
in this test should be done for the default DUT behavior as well as
for a DUT's non-default configuration that minimizes frame loss, if
one exists.

Procedure

This test characterizes "RP Switchover".  Many implementations allow
for optimized switchover capabilities that minimize the downtime
during the actual switchover.  This test consists of two sub-cases
from a switchover characteristics standpoint: first, the default
behavior (with no switchover-specific configuration); and potentially
second, a non-default behavior with a switchover configuration that
minimizes frame loss.  Therefore, the procedures described here are
executed twice, and the results are reported separately.

First, ensure that the RPs are in a permanent state such that the
secondary will be activated into the same state as the active one, by
performing some or all of the following operational tasks: save the
current DUT configuration, specify boot parameters, ensure the
appropriate software files are available, or perform additional
Operating System or hardware-related tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed.  The
traffic should use the minimum frame size possible on the media used
in the testing, and the rate should be sufficient for the DUT to
attain the maximum forwarding throughput.  This enables a finer
granularity in the recovery time measurement.

Third, perform the primary Route Processor (RP) hardware reset at this
point.  This entails, for example, physically removing the RP, or
triggering a hardware reset by other means (e.g., command line
interface, physical switch, etc.).  It is up to the operator to decide
whether or not the primary RP needs to be re-inserted after a grace
period.

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting format

The reset results are potentially reported twice: once for the default
switchover behavior (i.e., the DUT without any switchover-specific
enhanced configuration) and once for the switchover-specific behavior,
if it exists (i.e., the DUT configured for optimized switchover
capabilities that minimize the downtime during the actual switchover).

The reporting format is defined in Section 2.3, and also includes any
specific redundancy scheme in place.
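Since the two sub-cases are reported separately, and Section 2.3
recommends multiple trials with averaged results, the following small
sketch illustrates how the two result sets might be summarized; the
recovery-time values are placeholders, not measurements.

   # Illustrative summary of the two switchover sub-cases.  The
   # per-trial recovery times (in seconds) are placeholders and would
   # come from the measurement methods of Section 2.2.

   from statistics import mean

   trials_s = {
       "default switchover behavior":        [2.40, 2.55, 2.47],
       "optimized switchover configuration": [0.20, 0.18, 0.22],
   }

   for sub_case, recovery_times in trials_s.items():
       print(f"{sub_case}: average recovery time "
             f"{mean(recovery_times) * 1000:.0f} ms over "
             f"{len(recovery_times)} trials")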
4.1.2. Line Card (LC) Removal and Insertion (REQUIRED)

The Line Card (LC) is the DUT component that is responsible for packet
forwarding.

Objective

To characterize the time needed for a DUT to recover from a Line Card
removal and insertion event.

Procedure

For this test, the Line Card that is being hardware-reset MUST be on
the forwarding path, and all destinations MUST be directly connected.

First, complete some or all of the following operational tasks: save
the current DUT configuration, specify boot parameters, ensure the
appropriate software files are available, or perform additional
Operating System or hardware-related tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed.  The
traffic should use the minimum frame size possible on the media used
in the testing, and the rate should be sufficient for the DUT to
attain the maximum forwarding throughput.  This enables a finer
granularity in the recovery time measurement.

Third, perform the Line Card (LC) hardware reset at this point.  This
entails, for example, physically removing the LC to later re-insert
it, or triggering a hardware reset by other means (e.g., command line
interface (CLI), physical switch, etc.).

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting Format

The reporting format is defined in Section 2.3.

4.2. Software Reset Test

A test designed to characterize the time needed for a DUT to recover
from a software reset.

In contrast to a "hardware reset", a "software reset" involves only
the re-initialization of the execution, data structures, and partial
state within the software running on the DUT module(s).

A software reset is initiated, for example, from the DUT's Command
Line Interface (CLI).

4.2.1. Operating System (OS) Reset (REQUIRED)

Objective

To characterize the time needed for a DUT to recover from an Operating
System (OS) software reset.

Procedure

First, complete some or all of the following operational tasks: save
the current DUT configuration, specify software boot parameters,
ensure the appropriate software files are available, or perform
additional Operating System tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed.  The
traffic should use the minimum frame size possible on the media used
in the testing, and the rate should be sufficient for the DUT to
attain the maximum forwarding throughput.  This enables a finer
granularity in the recovery time measurement.

Third, trigger an Operating System re-initialization in the DUT, by
operational means such as use of the DUT's Command Line Interface
(CLI) or other management interface.

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting format

The reporting format is defined in Section 2.3.

4.2.2. Process Reset (OPTIONAL)

Objective

To characterize the time needed for a DUT to recover from a software
process reset.

Such a time period may depend upon the number and types of processes
running in the DUT and which ones are tested.  Different
implementations of forwarding devices include various common
processes.  A process reset should be performed only on the processes
most relevant to the tester and most impactful to forwarding.
Procedure

First, complete some or all of the following operational tasks: save
the current DUT configuration, specify software parameters or
environment variables, or perform additional Operating System tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed.  The
traffic should use the minimum frame size possible on the media used
in the testing, and the rate should be sufficient for the DUT to
attain the maximum forwarding throughput.  This enables a finer
granularity in the recovery time measurement.

Third, trigger a process reset, from a management interface (e.g., by
means of the Command Line Interface (CLI)), for each process running
in the DUT that is considered for testing.

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting format

The reporting format is defined in Section 2.3 and is used for each
process running in the DUT and tested.  Given the implementation
nature of this test, details of the actual process tested should be
included along with the statement.

4.3. Power Interruption Test

"Power interruption" refers to the complete loss of power on the DUT.
It can be viewed as a special case of a hardware reset, triggered by
the loss of the power supply to the DUT or its components, and it is
characterized by the re-initialization of all hardware and software in
the DUT.

4.3.1. Power Interruption (REQUIRED)

Objective

To characterize the time needed for a DUT to recover from a complete
loss of electric power, i.e., a complete power interruption.  This
test simulates a complete power failure or outage, and it should be
indicative of the DUT's/SUT's behavior during such an event.

Procedure

First, ensure that the entire DUT is in a permanent state to which it
will return after the power interruption, by performing some or all of
the following operational tasks: save the current DUT configuration,
specify boot parameters, ensure the appropriate software files are
available, or perform additional Operating System or hardware-related
tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed.  The
traffic should use the minimum frame size possible on the media used
in the testing, and the rate should be sufficient for the DUT to
attain the maximum forwarding throughput.  This enables a finer
granularity in the recovery time measurement.

Third, interrupt the power (AC or DC) that feeds the corresponding
DUT's power supplies at this point.  This entails, for example,
physically removing the power supplies in the DUT to later re-insert
them, or simply disconnecting or switching off their power feeds (AC
or DC, as applicable).  The actual power interruption should last at
least 15 seconds.

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

For easier comparison with other tests, the 15 seconds of intentional
power interruption are removed from the reported recovery time, as
illustrated in the sketch below.

Reporting format

The reporting format is defined in Section 2.3.
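The adjustment mentioned above can be illustrated with a minimal
sketch; the values are placeholders and the helper name is
hypothetical.

   # Hedged sketch: subtract the intentional power-off period (at
   # least 15 seconds) from the measured forwarding interruption so
   # that the reported recovery time is comparable with the other
   # test cases.

   POWER_OFF_S = 15.0    # length of the intentional power interruption

   def reported_recovery_time(measured_interruption_s,
                              power_off_s=POWER_OFF_S):
       # Only the recovery time in excess of the intentional power-off
       # period is reported.
       return max(measured_interruption_s - power_off_s, 0.0)

   # Example: forwarding is interrupted for 95 seconds in total, of
   # which 15 seconds is the intentional power-off; 80 seconds is the
   # reported recovery time.
   assert reported_recovery_time(95.0) == 80.0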
5. Security Considerations

Benchmarking activities, as described in this memo, are limited to
technology characterization using controlled stimuli in a laboratory
environment, with dedicated address space and the constraints
specified in the sections above.

The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test traffic
into a production network or misroute traffic to the test management
network.

Furthermore, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the DUT/SUT.

Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
benchmarking purposes.  Any implications for network security arising
from the DUT/SUT SHOULD be identical in the lab and in production
networks.

There are no specific security considerations within the scope of this
document.

6. IANA Considerations

There is no IANA consideration for this document.

7. Acknowledgments

The authors would like to thank Ron Bonica, who motivated us to write
this document.  The authors would also like to thank Al Morton, Andrew
Yourtchenko, David Newman, John E. Dawson, Timmons C. Player, Jan
Novak, Steve Maxwell, Ilya Varlashkin, and Sarah Banks for providing
thorough review, useful suggestions, and valuable input.

This document was prepared using 2-Word-v2.0.template.dot.

8. References

8.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
          Network Interconnect Devices", RFC 2544, March 1999.

8.2. Informative References

[IGPConv] Poretsky, S., Imhoff, B., and K. Michielsen, "Benchmarking
          Methodology for Link-State IGP Data Plane Route
          Convergence", draft-ietf-bmwg-igp-dataplane-conv-meth-21
          (work in progress), May 2010.

[RFC5180] Popoviciu, C., et al., "IPv6 Benchmarking Methodology for
          Network Interconnect Devices", RFC 5180, May 2008.

[RFC5695] Akhter, A., Asati, R., and C. Pignataro, "MPLS Forwarding
          Benchmarking Methodology for IP Flows", RFC 5695, November
          2009.

Authors' Addresses

Rajiv Asati
Cisco Systems
7025-6 Kit Creek Road
RTP, NC 27709
USA

Email: rajiva@cisco.com

Carlos Pignataro
Cisco Systems
7200-12 Kit Creek Road
RTP, NC 27709
USA

Email: cpignata@cisco.com

Fernando Calabria
Cisco Systems
7200-12 Kit Creek Road
RTP, NC 27709
USA

Email: fcalabri@cisco.com

Cesar Olvera
Consulintel
Joaquin Turina, 2
Pozuelo de Alarcon, Madrid, E-28224
Spain

Email: cesar.olvera@consulintel.es