Benchmarking Methodology WG                                   Rajiv Asati
Internet Draft                                                      Cisco
Updates: 2544 (if approved)                              Carlos Pignataro
Intended status: Informational                                      Cisco
Expires: May 2011                                       Fernando Calabria
                                                                    Cisco
                                                             Cesar Olvera
                                                              Consulintel

                                                         November 9, 2010

                      Device Reset Characterization
                        draft-ietf-bmwg-reset-03

Abstract

An operational forwarding device may need to be re-started
(automatically or manually) for a variety of reasons, an event that we
call a "reset" in this document. Since there may be an interruption in
the forwarding operation during a reset, it is useful to know how long
a device takes to resume the forwarding operation.

This document specifies a methodology for characterizing reset (and
recovery time) during benchmarking of forwarding devices, and provides
clarity and consistency in reset test procedures beyond what is
specified in RFC 2544. It therefore updates RFC 2544.

Status of this Memo

This Internet-Draft is submitted in full conformance with the
provisions of BCP 78 and BCP 79.

Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups. Note that other
groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or obsoleted by other documents at any
time. It is inappropriate to use Internet-Drafts as reference material
or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html

This Internet-Draft will expire on May 9, 2011.

Copyright Notice

Copyright (c) 2010 IETF Trust and the persons identified as the
document authors. All rights reserved.

This document is subject to BCP 78 and the IETF Trust's Legal
Provisions Relating to IETF Documents
(http://trustee.ietf.org/license-info) in effect on the date of
publication of this document. Please review these documents carefully,
as they describe your rights and restrictions with respect to this
document. Code Components extracted from this document must include
Simplified BSD License text as described in Section 4.e of the Trust
Legal Provisions and are provided without warranty as described in the
Simplified BSD License.

Table of Contents

1. Key Words to Reflect Requirements
2. Introduction
   2.1. Scope
   2.2. Recovery Time Measurement Methods
   2.3. Reporting Format
3. Test Requirements
4. Reset Test
   4.1. Hardware Reset Test
      4.1.1. Routing Processor (RP) / Routing Engine Reset
         4.1.1.1. RP Reset for a single-RP device (REQUIRED)
         4.1.1.2. RP Switchover for a multiple-RP device (OPTIONAL)
      4.1.2. Line Card (LC) Removal and Insertion (REQUIRED)
   4.2. Software Reset Test
      4.2.1. Operating System (OS) Reset (REQUIRED)
      4.2.2. Process Reset (OPTIONAL)
   4.3. Power Interruption Test
      4.3.1. Power Interruption (REQUIRED)
5. Security Considerations
6. IANA Considerations
7. Acknowledgments
8. References
   8.1. Normative References
   8.2. Informative References
Authors' Addresses

1. Key Words to Reflect Requirements

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in BCP 14, RFC 2119
[RFC2119]. RFC 2119 defines the use of these key words to help make
the intent of standards track documents as clear as possible. While
this document uses these keywords, this document is not a standards
track document.

2. Introduction

An operational forwarding device (or one of its components) may need
to be re-started for a variety of reasons, an event that we call a
"reset" in this document. Since there may be an interruption in the
forwarding operation during a reset, it is useful to know how long a
device takes to resume the forwarding operation.
In other words, it is desired to know the duration of the recovery
time following the reset.

However, the answer to this question is no longer simple and
straightforward, as modern forwarding devices employ many hardware
advancements (distributed forwarding, etc.) and software advancements
(graceful restart, etc.) that influence the recovery time after a
reset.

2.1. Scope

This document specifies a methodology for characterizing reset (and
recovery time) during benchmarking of forwarding devices, and provides
clarity and consistency in reset procedures beyond what is specified
in [RFC2544]. Software upgrades involve additional benchmarking
complexities and are outside the scope of this document.

These procedures may be used by other benchmarking documents such as
[RFC2544], [RFC5180], and [RFC5695], and it is expected that other
protocol-specific benchmarking documents will reference this document
for reset recovery time characterization.

This document updates Section 26.6 of [RFC2544].

This document focuses only on the reset criterion of benchmarking, and
presumes that it would be beneficial to [RFC5180], [RFC5695], and
other BMWG benchmarking efforts.

2.2. Recovery Time Measurement Methods

The 'recovery time' is the time during which traffic forwarding is
temporarily interrupted following a reset event. Strictly speaking,
this is the time over which one or more frames are lost. This
definition is similar to that of 'Loss of connectivity period' defined
in Section 4 of [IGPConv].

There are two accepted methods to measure the 'recovery time':

1. Frame-Loss Method - This method requires test tool capability to
   monitor the number of lost frames. In this method, the offered
   stream rate (frames per second) must be known. The recovery time is
   calculated per the following equation:

                           Frames_lost (packets)
      Recovery_time = -------------------------------------
                       Offered_rate (packets per second)

2. Time-Stamp Method - This method requires test tool capability to
   timestamp each frame. In this method, the test tool timestamps each
   transmitted frame and monitors each received frame's timestamp.
   During the test, the test tool records the timestamp (Timestamp A)
   of the frame that was last received prior to the reset interruption
   and the timestamp (Timestamp B) of the first frame received after
   the interruption ended. The difference between Timestamp B and
   Timestamp A is the recovery time.

The tester / operator MAY use either method for recovery time
measurement, depending on the test tool capability. However, the
Frame-Loss Method SHOULD be used if the test tool is capable of (a)
counting the number of lost frames per stream, and (b) transmitting
test frames regardless of the physical link status, whereas the
Time-Stamp Method SHOULD be used if the test tool is capable of (a)
timestamping each frame, (b) monitoring each received frame's
timestamp, and (c) transmitting frames only if the physical link
status is up.
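As a non-normative illustration of the two measurement methods, the
following sketch (in Python) computes the recovery time from either the
lost-frame count or the recorded timestamps. The function names and
inputs are hypothetical and depend on what the test tool exposes; they
are not defined by this document.

   # Illustrative only: assumes the test tool reports a lost-frame count
   # (Frame-Loss Method) or per-frame timestamps (Time-Stamp Method).

   def recovery_time_frame_loss(frames_lost, offered_rate_fps):
       """Frame-Loss Method: Recovery_time = Frames_lost / Offered_rate."""
       return frames_lost / offered_rate_fps           # seconds

   def recovery_time_timestamp(timestamp_a, timestamp_b):
       """Time-Stamp Method: Timestamp B (first frame received after the
       interruption) minus Timestamp A (last frame received before it)."""
       return timestamp_b - timestamp_a                # seconds

   # Example: 150,000 frames lost at an offered rate of 100,000 frames
   # per second corresponds to a recovery time of 1.5 seconds.
   print(recovery_time_frame_loss(150000, 100000))     # 1.5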
2.3. Reporting Format

All reset results are reported in a simple statement including the
frame loss (if measured) and recovery times.

For each test case, it is RECOMMENDED that the following parameters be
reported in these units:

   Parameter                  Units or Examples
   -----------------------------------------------------------------
   Throughput                 Frames per second and bits per second

   Loss (average)             Frames

   Recovery Time (average)    Milliseconds

   Number of trials           Integer count

   Protocol                   IPv4, IPv6, MPLS, etc.

   Frame Size                 Octets

   Port Media                 Ethernet, GigE (Gigabit Ethernet),
                              POS (Packet over SONET), etc.

   Port Speed                 10 Gbps, 1 Gbps, 100 Mbps, etc.

   Interface Encap.           Ethernet, Ethernet VLAN,
                              PPP, HDLC, etc.

For mixed protocol environments, frames SHOULD be distributed between
all the different protocols. The distribution MAY approximate the
network conditions of deployment. In all cases, the details of the
mixed protocol distribution MUST be included in the reporting.

Additionally, the DUT (Device Under Test) / SUT (System Under Test)
and test bed provisioning, port and Line Card arrangement,
configuration, and deployed methodologies that may influence the
overall recovery time MUST be listed. (Refer to the additional factors
listed in Section 3.)

The reporting of results MUST regard the repeatability considerations
from Section 4 of [RFC2544]. It is RECOMMENDED to perform multiple
trials and report average results.

3. Test Requirements

In order to provide consistency and fairness while benchmarking a set
of different DUTs, the network tester / operator MUST (a) use
identical control plane and data plane information during testing, and
(b) document and report any factors that may influence the overall
recovery time after a reset / convergence.

Some of these factors include:

   1.  Type of reset - Hardware (line-card crash, etc.) vs. Software
       (protocol reset, process crash, etc.), or even complete power
       failures

   2.  Manual vs. Automatic reset

   3.  Scheduled vs. non-scheduled reset

   4.  Local vs. Remote reset

   5.  Scale - Number of line cards present vs. in-use

   6.  Scale - Number of physical and logical interfaces

   7.  Scale - Number of routing protocol instances

   8.  Scale - Number of Routing Table entries

   9.  Scale - Number of Route Processors available

   10. Performance - Redundancy strategy deployed for route processors
       and line cards

   11. Performance - Interface encapsulation as well as achievable
       Throughput [RFC2544]

   12. Any other internal or external factor that may influence
       recovery time after a hardware or software reset

The recovery time is one of the key characterization results reported
after each test run. While the recovery time during a reset test event
may be zero, there may still be effects on traffic, such as transient
delay variation or increased latency. However, such effects are not
covered and are deemed outside the scope of this document. In this
case, only "no loss" is reported.
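Purely as an illustrative aid (and not as a normative format), the
following Python sketch shows one way a tester might record the
reporting parameters of Section 2.3 together with the influencing
factors listed above for each test case. All field names are
hypothetical assumptions, not defined by this document.

   # Illustrative only: one possible record for a single test case.
   from dataclasses import dataclass, field

   @dataclass
   class ResetTestReport:
       test_case: str                # e.g., "4.1.1.1 RP Reset (single RP)"
       throughput_fps: float         # frames per second
       throughput_bps: float         # bits per second
       loss_frames_avg: float        # average frame loss
       recovery_time_ms_avg: float   # average recovery time, milliseconds
       number_of_trials: int
       protocol: str                 # "IPv4", "IPv6", "MPLS", ...
       frame_size_octets: int
       port_media: str               # "Ethernet", "POS", ...
       port_speed: str               # "10 Gbps", "1 Gbps", "100 Mbps", ...
       interface_encap: str          # "Ethernet", "PPP", "HDLC", ...
       influencing_factors: dict = field(default_factory=dict)
       # e.g., {"reset_type": "hardware", "line_cards_in_use": 4,
       #        "routing_table_entries": 500000, "rp_redundancy": "none"}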
4. Reset Test

This section contains the description of the tests that are related to
the characterization of the time needed for DUTs (Devices Under Test)
/ SUTs (Systems Under Test) to recover from a reset. There are three
types of reset considered in this document:

   1. Hardware resets

   2. Software resets

   3. Power interruption

Different types of reset have potentially different impacts on the
forwarding behavior of the device. As an example, a software reset (of
a routing process) might not result in forwarding interruption,
whereas a hardware reset (of a line card) most likely will.

Section 4.1 describes various hardware resets, whereas Section 4.2
describes various software resets. Additionally, Section 4.3 describes
power interruption tests. These sections define and characterize these
resets.

Additionally, since device-specific implementations may vary for
hardware and software type resets, it is desirable to classify each
test case as "REQUIRED" or "OPTIONAL".

4.1. Hardware Reset Test

A test designed to characterize the time it takes a DUT to recover
from a hardware reset.

A "hardware reset" generally involves the re-initialization of one or
more physical components in the DUT, but not the entire DUT.

A hardware reset is executed by the operator, for example by
physically removing a hardware component, by pressing a "reset" button
for the component, or by triggering the reset from the command line
interface.

For routers that do not contain separate Routing Processor and Line
Card modules, the hardware reset tests are not performed since they
are not relevant; instead, the power interruption tests (see Section
4.3) MUST be performed in these cases.

4.1.1. Routing Processor (RP) / Routing Engine Reset

The Routing Processor (RP) is the DUT module that is primarily
concerned with Control Plane functions.

4.1.1.1. RP Reset for a single-RP device (REQUIRED)

Objective

To characterize the time needed for a DUT to recover from a Route
Processor hardware reset in a single-RP environment.

Procedure

First, ensure that the RP is in a permanent state to which it will
return after the reset, by performing some or all of the following
operational tasks: save the current DUT configuration, specify boot
parameters, ensure the appropriate software files are available, or
perform additional Operating System or hardware related tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed. The traffic
should use the minimum frame size possible on the media used in the
testing, and the rate should be sufficient for the DUT to attain the
maximum forwarding throughput. This enables a finer granularity in the
recovery time measurement.

Third, perform the Route Processor (RP) hardware reset at this point.
This entails, for example, physically removing the RP to later
re-insert it, or triggering a hardware reset by other means (e.g.,
command line interface, physical switch, etc.).

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting format

The reporting format is defined in Section 2.3.
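The same four-step flow (baseline traffic for at least 15 seconds,
trigger the reset, record frame loss or timestamps, compute the
recovery time per Section 2.2) recurs in the remaining test cases of
this section. The following Python sketch outlines that common flow as
a hypothetical test harness; the test tool and DUT interfaces and
their method names are assumptions for illustration, not defined by
this document or by any particular test tool.

   # Illustrative only: a generic trial shared by the reset tests below.
   import time

   def run_reset_trial(test_tool, dut, trigger_reset, baseline_seconds=15):
       """Baseline traffic, trigger the reset, and compute recovery time."""
       # Offer traffic at the minimum frame size and at a rate sufficient
       # to attain maximum forwarding throughput, for at least 15 seconds.
       test_tool.start_traffic(frame_size="minimum", rate="max_throughput")
       time.sleep(baseline_seconds)

       # Trigger the reset under test (e.g., RP removal, LC removal, OS
       # reload, process restart, or power interruption).
       trigger_reset(dut)

       # Record frame loss (or timestamps) once forwarding has resumed and
       # compute the recovery time; the Frame-Loss Method is shown here.
       test_tool.wait_for_forwarding_recovery()
       return test_tool.frames_lost() / test_tool.offered_rate_fps()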
4.1.1.2. RP Switchover for a multiple-RP device (OPTIONAL)

Objective

To characterize the time needed for the "secondary" Route Processor
(sometimes referred to as the "backup" RP) of a DUT to become active
after a "primary" (or "active") Route Processor hardware reset. This
process is often referred to as "RP Switchover". The characterization
in this test should be done for the default DUT behavior as well as
for a DUT's non-default configuration that minimizes frame loss, if
one exists.

Procedure

This test characterizes "RP Switchover". Many implementations allow
for optimized switchover capabilities that minimize the downtime
during the actual switchover. This test consists of two sub-cases from
a switchover characteristics standpoint: first, a default behavior
(with no switchover-specific configuration); and potentially second, a
non-default behavior with switchover configuration to minimize frame
loss. Therefore, the procedures described here are executed twice, and
reported separately.

First, ensure that the RPs are in a permanent state such that the
secondary will be activated to the same state as the active one, by
performing some or all of the following operational tasks: save the
current DUT configuration, specify boot parameters, ensure the
appropriate software files are available, or perform additional
Operating System or hardware related tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed. The traffic
should use the minimum frame size possible on the media used in the
testing, and the rate should be sufficient for the DUT to attain the
maximum forwarding throughput. This enables a finer granularity in the
recovery time measurement.

Third, perform the primary Route Processor (RP) hardware reset at this
point. This entails, for example, physically removing the RP, or
triggering a hardware reset by other means (e.g., command line
interface, physical switch, etc.). It is up to the operator to decide
whether the primary RP needs to be re-inserted after a grace period.

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting format

The reset results are potentially reported twice: once for the default
switchover behavior (i.e., the DUT without any switchover-specific
enhanced configuration), and once for the switchover-specific behavior
if it exists (i.e., the DUT configured for optimized switchover
capabilities that minimize the downtime during the actual switchover).

The reporting format is defined in Section 2.3, and also includes any
specific redundancy scheme in place.

4.1.2. Line Card (LC) Removal and Insertion (REQUIRED)

The Line Card (LC) is the DUT component that is responsible for packet
forwarding.

Objective

To characterize the time needed for a DUT to recover from a Line Card
removal and insertion event.

Procedure

For this test, the Line Card that is being hardware-reset MUST be on
the forwarding path, and all destinations MUST be directly connected.

First, complete some or all of the following operational tasks: save
the current DUT configuration, specify boot parameters, ensure the
appropriate software files are available, or perform additional
Operating System or hardware related tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed. The traffic
should use the minimum frame size possible on the media used in the
testing, and the rate should be sufficient for the DUT to attain the
maximum forwarding throughput. This enables a finer granularity in the
recovery time measurement.

Third, perform the Line Card (LC) hardware reset at this point.
This entails, for example, physically removing the LC to later
re-insert it, or triggering a hardware reset by other means (e.g.,
command line interface (CLI), physical switch, etc.). However, the
most accurate results will be obtained using the CLI or a physical
switch, and therefore these are RECOMMENDED. Otherwise, the time spent
trying to physically seat the LC will get mixed into the results.

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting Format

The reporting format is defined in Section 2.3.

4.2. Software Reset Test

A test designed to characterize the time needed for a DUT to recover
from a software reset.

In contrast to a "hardware reset", a "software reset" involves only
the re-initialization of the execution, data structures, and partial
state within the software running on the DUT module(s).

A software reset is initiated, for example, from the DUT's Command
Line Interface (CLI).

4.2.1. Operating System (OS) Reset (REQUIRED)

Objective

To characterize the time needed for a DUT to recover from an Operating
System (OS) software reset.

Procedure

First, complete some or all of the following operational tasks: save
the current DUT configuration, specify software boot parameters,
ensure the appropriate software files are available, or perform
additional Operating System tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed. The traffic
should use the minimum frame size possible on the media used in the
testing, and the rate should be sufficient for the DUT to attain the
maximum forwarding throughput. This enables a finer granularity in the
recovery time measurement.

Third, trigger an Operating System re-initialization in the DUT, by
operational means such as use of the DUT's Command Line Interface
(CLI) or other management interface.

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting format

The reporting format is defined in Section 2.3.

4.2.2. Process Reset (OPTIONAL)

Objective

To characterize the time needed for a DUT to recover from a software
process reset.

Such a time period may depend upon the number and types of processes
running in the DUT and which ones are tested. Different
implementations of forwarding devices include various common
processes. A process reset should be performed only on the processes
most relevant to the tester and most impactful to forwarding.

Procedure

First, complete some or all of the following operational tasks: save
the current DUT configuration, specify software parameters or
environment variables, or perform additional Operating System tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed. The traffic
should use the minimum frame size possible on the media used in the
testing, and the rate should be sufficient for the DUT to attain the
maximum forwarding throughput. This enables a finer granularity in the
recovery time measurement.
Third, from a management interface (e.g., by means of the Command Line
Interface (CLI)), trigger a process reset for each process running in
the DUT and considered for testing.

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

Reporting format

The reporting format is defined in Section 2.3, and is used for each
process running in the DUT and tested. Given the implementation nature
of this test, details of the actual process tested should be included
along with the statement.

4.3. Power Interruption Test

"Power interruption" refers to the complete loss of power on the DUT.
It can be viewed as a special case of a hardware reset, triggered by
the loss of the power supply to the DUT or its components, and is
characterized by the re-initialization of all hardware and software in
the DUT.

4.3.1. Power Interruption (REQUIRED)

Objective

To characterize the time needed for a DUT to recover from a complete
loss of electric power or complete power interruption. This test
simulates a complete power failure or outage, and should be indicative
of the DUT/SUT's behavior during such an event.

Procedure

First, ensure that the entire DUT is in a permanent state to which it
will return after the power interruption, by performing some or all of
the following operational tasks: save the current DUT configuration,
specify boot parameters, ensure the appropriate software files are
available, or perform additional Operating System or hardware related
tasks.

Second, ensure that the DUT is able to forward the traffic for at
least 15 seconds before any test activities are performed. The traffic
should use the minimum frame size possible on the media used in the
testing, and the rate should be sufficient for the DUT to attain the
maximum forwarding throughput. This enables a finer granularity in the
recovery time measurement.

Third, interrupt the power (AC or DC) that feeds the corresponding
DUT's power supplies at this point. This entails, for example,
physically removing the power supplies in the DUT to later re-insert
them, or simply disconnecting or switching off their power feeds (AC
or DC, as applicable). The actual power interruption should last at
least 15 seconds.

Finally, the characterization is completed by recording the frame loss
or time stamps (as reported by the test tool) and calculating the
recovery time (as defined in Section 2.2).

For easier comparison with other tests, the 15 seconds of deliberate
power interruption are removed from the reported recovery time.

Reporting format

The reporting format is defined in Section 2.3.
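As a brief, non-normative illustration of the adjustment described
above (a sketch assuming the measurement of Section 2.2 has already
produced a measured recovery time that includes the deliberate
power-off interval):

   # Illustrative only: subtract the deliberate 15-second power-off
   # interval so the reported value reflects only the DUT's recovery.
   POWER_OFF_SECONDS = 15

   def reported_recovery_time(measured_recovery_seconds):
       """Reported recovery time for the power interruption test."""
       return measured_recovery_seconds - POWER_OFF_SECONDS

   # Example: a measured 95-second interruption in forwarding with a
   # 15-second power-off interval is reported as an 80-second recovery.
   print(reported_recovery_time(95))    # 80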
5. Security Considerations

Benchmarking activities, as described in this memo, are limited to
technology characterization using controlled stimuli in a laboratory
environment, with dedicated address space and the constraints
specified in the sections above.

The benchmarking network topology will be an independent test setup
and MUST NOT be connected to devices that may forward the test traffic
into a production network or misroute traffic to the test management
network.

Furthermore, benchmarking is performed on a "black-box" basis, relying
solely on measurements observable external to the DUT/SUT.

Special capabilities SHOULD NOT exist in the DUT/SUT specifically for
benchmarking purposes. Any implications for network security arising
from the DUT/SUT SHOULD be identical in the lab and in production
networks.

There are no specific security considerations within the scope of this
document.

6. IANA Considerations

There are no IANA considerations for this document.

7. Acknowledgments

The authors would like to thank Ron Bonica, who motivated us to write
this document. The authors would also like to thank Al Morton, Andrew
Yourtchenko, David Newman, John E. Dawson, Timmons C. Player, Jan
Novak, Steve Maxwell, Ilya Varlashkin, and Sarah Banks for providing
thorough review, useful suggestions, and valuable input.

This document was prepared using 2-Word-v2.0.template.dot.

8. References

8.1. Normative References

[RFC2119] Bradner, S., "Key words for use in RFCs to Indicate
          Requirement Levels", BCP 14, RFC 2119, March 1997.

[RFC2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
          Network Interconnect Devices", RFC 2544, March 1999.

8.2. Informative References

[IGPConv] Poretsky, S., Imhoff, B., and K. Michielsen, "Benchmarking
          Methodology for Link-State IGP Data Plane Route
          Convergence", draft-ietf-bmwg-igp-dataplane-conv-meth-21
          (work in progress), May 2010.

[RFC5180] Popoviciu, C., et al., "IPv6 Benchmarking Methodology for
          Network Interconnect Devices", RFC 5180, May 2008.

[RFC5695] Akhter, A., Asati, R., and C. Pignataro, "MPLS Forwarding
          Benchmarking Methodology for IP Flows", RFC 5695, November
          2009.

Authors' Addresses

Rajiv Asati
Cisco Systems
7025-6 Kit Creek Road
RTP, NC 27709
USA

Email: rajiva@cisco.com

Carlos Pignataro
Cisco Systems
7200-12 Kit Creek Road
RTP, NC 27709
USA

Email: cpignata@cisco.com

Fernando Calabria
Cisco Systems
7200-12 Kit Creek Road
RTP, NC 27709
USA

Email: fcalabri@cisco.com

Cesar Olvera
Consulintel
Joaquin Turina, 2
Pozuelo de Alarcon, Madrid, E-28224
Spain

Email: cesar.olvera@consulintel.es