Benchmarking Working Group                                     Sarah Banks
Internet Draft                                              VSS Monitoring
Intended status: Informational                           Fernando Calabria
Expires: November 30, 2015                                   Cisco Systems
                                                              Gery Czirjak
                                                             Ramdas Machat
                                                          Juniper Networks
                                                               May 30, 2015

                      ISSU Benchmarking Methodology
                       draft-ietf-bmwg-issu-meth-01

Abstract

   Modern forwarding devices attempt to minimize any control and data
   plane disruptions while performing planned software changes by
   implementing a technique commonly known as In Service Software
   Upgrade (ISSU).  This document specifies a set of common
   methodologies and procedures designed to characterize the overall
   behavior of a Device Under Test (DUT) subject to an ISSU event.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on November 30, 2015.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Generic ISSU Process, phased approach
      3.1. Software Download
      3.2. Software Staging
      3.3. Upgrade Run
      3.4. Upgrade Acceptance
   4. Test Methodology
      4.1. Test Topology
      4.2. Load Model
   5. ISSU Test Methodology
      5.1. Pre-ISSU recommended verifications
      5.2. Software Staging
      5.3. Upgrade Run
      5.4. Post ISSU verification
      5.5. ISSU under negative stimuli
   6. ISSU Abort and Rollback
   7. Final Report - Data Presentation - Analysis
      7.1. Data collection considerations
   8. Security Considerations
   9. IANA Considerations
   10. References
      10.1. Normative References
      10.2. Informative References
   11. Acknowledgments

1. Introduction

   As required by most Service Provider (SP) network operators, ISSU
   functionality has been implemented by modern forwarding devices to
   upgrade or downgrade from one software version to another with a
   goal of eliminating the downtime of the router and/or the outage of
   service.  However, it is noted that while most operators desire
   complete elimination of downtime, minimization of downtime and
   service degradation is often the expectation.

   The ISSU operation may apply in terms of an atomic version change of
   the entire system software, or it may be applied in a more modular
   sense, such as for a patch or maintenance upgrade.  The procedure
   described herein may be used to verify either approach, as may be
   supported by the vendor hardware and software.

   In support of this document, the desired behavior for an ISSU
   operation can be summarized as follows:

   - The software is successfully migrated from one version to a
     successive version, or vice versa.

   - There are no control plane interruptions throughout the process.
     That is, the upgrade/downgrade could be accomplished while the
     device remains "in service".  It is noted, however, that most
     service providers will still undertake such actions in a
     maintenance window (even in redundant environments) to minimize
     any risk.

   - Interruptions to the forwarding plane are minimal to none.
   - The total time to accomplish the upgrade is minimized, again to
     reduce potential network outage exposure (e.g. an external failure
     event might impact the network as it operates with reduced
     redundancy).

   This document provides a set of procedures to characterize a given
   forwarding device's ISSU behavior quantitatively, from the
   perspective of meeting the above expectations.

   Different hardware configurations may be expected to be benchmarked,
   but a typical configuration for a forwarding device that supports
   ISSU consists of at least one pair of Routing Processors (RP's) that
   operate in a redundant fashion, and single or multiple Forwarding
   Engines (Line Cards) that may or may not be redundant, as well as
   fabric cards or other components as applicable.  This does not
   preclude the possibility that a device in question can perform ISSU
   functions through the operation of independent process components,
   which may be upgraded without impact to the overall operation of the
   device.  As an example, perhaps the software module involved in SNMP
   functions can be upgraded without impacting other operations.

   The concept of a multi-chassis deployment may also be characterized
   by the current set of proposed methodologies, but the
   implementation-specific details (i.e. process placement and others)
   are beyond the scope of the current document.

   Since most modern forwarding devices, where ISSU would be
   applicable, do consist of redundant RP's and hardware-separated
   control plane and data plane functionality, this document will focus
   on methodologies which would be directly applicable to those
   platforms.  It is anticipated that the concepts and approaches
   described herein may be readily extended to accommodate other device
   architectures as well.

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119 [RFC 2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS.  Lower case uses of these words are not to be
   interpreted as carrying RFC 2119 significance.

3. Generic ISSU Process, phased approach

   ISSU may be viewed as the behavior of a device when exposed to a
   planned change in its software functionality.  This may mean changes
   to the core operating system, to separate processes or daemons, or
   even to firmware logic in programmable hardware devices (e.g.
   CPLD/FPGA).  The goal of an ISSU implementation is to permit such
   actions with minimal or no disruption to the primary operation of
   the device in question.

   ISSU may be user-initiated through direct interaction with the
   device or activated through some automated process on a management
   system or even on the device itself.  For the purposes of this
   document, we will focus on the model where the ISSU action is
   initiated by direct user intervention.

   The ISSU process can be viewed as a series of different phases or
   activities, as defined below.  For each of these phases, the test
   operator MUST record the outcome as well as any relevant
   observations (defined further in the present document).  Note that a
   given vendor implementation may or may not permit aborting the
   in-progress ISSU at particular stages.  There may also be certain
   restrictions as to ISSU availability given certain functional
   configurations (for example, ISSU in the presence of Bidirectional
   Forwarding Detection (BFD) [RFC 5880] may not be supported).  It is
   incumbent upon the test operator to ensure that the DUT is
   configured to provide the appropriate test environment.  As with any
   properly orchestrated test effort, the test plan document should
   reflect these and other relevant details and SHOULD be written with
   close attention to the expected production operating environment.
   The combined analysis of the results of each phase will characterize
   the overall ISSU process, with the main goal of being able to
   identify and quantify any disruption in service (from the data and
   control plane perspective), allowing operators to plan their
   maintenance activities with greater precision.
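   The exact record-keeping format is left to the test operator.
   Purely as an illustration, the following sketch (Python; the
   structure and field names are hypothetical and not part of this
   methodology) shows one possible way to capture the per-phase
   outcome, observations and timing referenced above:

      # Illustrative only: one possible structure for per-phase record
      # keeping.  Field names are hypothetical; adapt to local tooling.
      from dataclasses import dataclass, field
      from typing import List, Optional

      @dataclass
      class PhaseRecord:
          name: str                      # e.g. "Upgrade Run"
          start: Optional[float] = None  # epoch seconds (NTP-synced)
          end: Optional[float] = None
          outcome: str = "not-run"       # "pass", "fail" or "aborted"
          observations: List[str] = field(default_factory=list)

          @property
          def duration(self) -> Optional[float]:
              """Elapsed seconds, once the phase has completed."""
              if self.start is None or self.end is None:
                  return None
              return self.end - self.start

      phases = [PhaseRecord(name) for name in
                ("Software Download", "Software Staging",
                 "Upgrade Run", "Upgrade Acceptance")]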
3.1. Software Download

   In this first phase, the requested software package may be
   downloaded to the router and is typically stored on the device.  The
   downloading of software may be performed automatically by the device
   as part of the upgrade process, or it may be initiated separately.
   Such separation allows an administrator to download the new code
   inside or outside of a maintenance window; it is anticipated that
   downloading new code and saving it to disk on the router will not
   impact operations.  In the case where the software can be downloaded
   outside of the actual upgrade process, the administrator SHOULD do
   so; downloading software can skew timing results based on factors
   that are often not comparable in nature.  Internal compatibility
   verification may be performed by the software running on the DUT, to
   verify the checksum of the files downloaded as well as any other
   pertinent checks.  Depending upon vendor implementation, these
   mechanisms may extend to include verification that the downloaded
   module(s) meet a set of identified prerequisites such as hardware or
   firmware compatibility or minimum software requirements.  Where such
   mechanisms are made available by the product, they should be
   verified by the tester, with the perspective of avoiding operational
   issues in production.  Verification should include both positive
   verification (ensuring that an ISSU action should be permitted) as
   well as negative tests (creation of scenarios where the verification
   mechanisms would report exceptions).
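   Purely as an illustration of the integrity checks described above,
   the following sketch (Python; the image file name and expected
   digest are placeholders, and vendor-specific compatibility checks
   are not modeled) compares the SHA-256 digest of a downloaded image
   against the value published by the vendor:

      # Illustrative sketch: verify the checksum of a downloaded
      # software image.  File name and expected digest are placeholders.
      import hashlib

      def sha256_of(path, chunk_size=1024 * 1024):
          """Return the hex SHA-256 digest of a file, read in chunks."""
          digest = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(chunk_size), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      expected = "<digest published by the vendor>"   # placeholder
      actual = sha256_of("dut-image.bin")              # placeholder name
      if actual != expected:
          raise SystemExit("Checksum mismatch: do not proceed with ISSU")
      print("Image checksum verified")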
3.2. Software Staging

   In this second phase, the requested software package is loaded in
   the pertinent components of a given forwarding device (typically the
   RP in standby state).  Internal compatibility verification may be
   performed by the software running on the DUT, as part of the upgrade
   process itself, to verify the checksum of the files downloaded as
   well as any other pertinent checks.  Depending upon vendor
   implementation, these mechanisms may extend to include verification
   that the downloaded module(s) meet a set of identified prerequisites
   such as hardware or firmware compatibility or minimum software
   requirements.  Where such mechanisms are made available by the
   product, they should be verified by the tester (again with the
   perspective of avoiding operational issues in production).  In this
   case, the execution of these checks is within scope of the upgrade
   time, and SHOULD be included in the testing results.  Once the new
   software is downloaded to the pertinent components of the DUT, the
   upgrade begins and the DUT begins to prepare itself for upgrade.
   Depending on the vendor implementation, it is expected that
   redundant hardware pieces within the DUT are upgraded, including the
   backup or secondary RP.

3.3. Upgrade Run

   In this phase, a switchover of RPs may take place, where one RP is
   now upgraded with the new version of software.  More importantly,
   the "Upgrade Run" phase is where the internal changes made to
   information and state stored on the router, on disk and in memory,
   are either migrated to the "new" version of code, or
   transformed/rebuilt to meet the standards of the new version of
   code, and pushed onto the appropriate pieces of hardware.  It is
   within this phase that any outage(s) on the control or forwarding
   plane may be expected to be observed.  This is the critical phase of
   the ISSU, where the control plane should not be impacted and any
   interruptions to the forwarding plane should be minimal to none.
   For some implementations, the above two steps may be concatenated
   into one monolithic operation.  In such a case, the calculation of
   the respective ISSU time intervals may need to be adapted
   accordingly.

   If any control or data plane interruptions are observed within this
   stage, they should be recorded as part of the results document.

3.4. Upgrade Acceptance

   In this phase, the new version of software MUST be running in all
   the physical nodes of the logical forwarding device (RP's and LC's,
   as applicable).  At this point, configuration control is returned to
   the operator and normal device operation, i.e. outside of
   ISSU-oriented operation, is resumed.

4. Test Methodology

   As stated by [RFC 6815], the Test Topology Setup must be part of an
   ITE (Isolated Test Environment).

   The reporting of results MUST take into account the repeatability
   considerations from Section 4 of [RFC 2544].  It is RECOMMENDED to
   perform multiple trials and report average results.  The results are
   reported in a simple statement including the measured frame loss and
   ISSU impact times.

4.1. Test Topology

   The hardware configuration of the DUT (Device Under Test) SHOULD be
   identical to the one expected to be or currently deployed in
   production in order for the benchmark to have relevance.  This would
   include the number of RP's, hardware version, memory and initial
   software release, any common chassis components, such as fabric
   hardware in the case of a fabric-switching platform, and the
   specific LC's (version, memory, interface types, rate, etc.).

   For the Control and Data plane, differing configuration approaches
   MAY be utilized.  The recommended approach relies on "mimicking" the
   existing production data and control plane information, in order to
   emulate all the necessary Layer 1 through Layer 3 communications
   and, if appropriate, the upper layer characteristics of the network,
   as well as end-to-end traffic/communication pairs.  In other words,
   design a representative load model of the production environment and
   deploy a collapsed topology utilizing test tools and/or external
   devices, where the DUT will be tested.  Note that the negative
   impact of ISSU operations is likely to affect scaled, dynamic
   topologies to a greater extent than simpler, static environments.
   As such, this methodology (based upon production configuration) is
   advised for most test scenarios.

   The second, more simplistic approach is to deploy an ITE (Isolated
   Test Environment) in which end-points are "directly" connected to
   the DUT.  In this manner, control plane information is kept to a
   minimum (only connected interfaces) and only a basic data plane of
   sources and destinations is applied.  If this methodology is
   selected, care must be taken to understand that the systemic
   behavior of the ITE may not be identical to that experienced by a
   device in a production network role.  That is, control plane
   validation may be minimal to none with this methodology.
   Consequently, if this approach is chosen, comparison with at least
   one production configuration is recommended in order to understand
   the direct relevance and limitations of the test exercise.
4.2. Load Model

   In consideration of the defined test topology, a load model must be
   developed to exercise the DUT while the ISSU event is introduced.
   This applied load should be defined in such a manner as to provide a
   granular, repeatable verification of the ISSU impact on transit
   traffic.  Sufficient traffic load (rate) should be applied to permit
   timing extrapolations at a minimum granularity of 100 milliseconds,
   e.g. 100 Mbps for a 10 Gbps interface (an illustrative calculation
   appears at the end of this section).  The use of steady traffic
   streams rather than bursty loads is preferred to simplify analysis.

   The traffic should be patterned to provide a broad range of source
   and destination pairs, which resolve to a variety of FIB (Forwarding
   Information Base) prefix lengths.  If the production network
   environment includes multicast traffic or VPN's (L2, L3 or IPsec),
   it is critical to include these in the model.

   For mixed protocol environments (e.g. IPv4 and IPv6), frames SHOULD
   be distributed between the different protocols.  The distribution
   SHOULD approximate the network conditions of deployment.  In all
   cases, the details of the mixed protocol distribution MUST be
   included in the reporting.

   The feature, protocol timing and other relevant configurations
   should be matched to the expected production environment.
   Deviations from the production templates may be deemed necessary by
   the test operator (for example, certain features may not support
   ISSU or the test bed may not be able to accommodate such).  However,
   the impact of any such divergence should be clearly understood and
   the differences MUST be recorded in the results documentation.

   It is recommended that an NMS system be deployed, preferably similar
   to that utilized in production.  This will allow for monitoring of
   the DUT while it is being tested, both in terms of supporting the
   system resource impact analysis as well as from the perspective of
   detecting interference with non-transit (management) traffic as a
   result of the ISSU operation.  Additionally, a DUT management
   session other than SNMP-based, typical of usage in production,
   should be established to the DUT and monitored for any disruption.
   It is suggested that the actual test exercise be managed utilizing
   direct console access to the DUT, if at all possible, to avoid the
   possibility that a network interruption impairs execution of the
   test exercise.
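   To make the rate/granularity relationship above explicit, the
   following sketch (Python; the frame size and offered rate are
   examples only, not recommendations) estimates the timing resolution
   obtained from a given offered load, using the same extrapolation as
   Section 5.3 (impact time is approximately lost frames divided by the
   offered pps):

      # Illustrative sketch: relate offered load to achievable timing
      # granularity.  Each lost frame represents roughly one frame
      # interval (1 / offered_pps) of outage.
      FRAME_SIZE_BYTES = 512      # example frame size
      OFFERED_BPS = 100e6         # e.g. 100 Mbps on a 10 Gbps interface

      offered_pps = OFFERED_BPS / (FRAME_SIZE_BYTES * 8)
      resolution_ms = 1000.0 / offered_pps

      print(f"Offered load: {offered_pps:.0f} frames/s")
      print(f"Resolution per lost frame: {resolution_ms:.3f} ms")
      # A resolution well below the 100 ms target leaves headroom for
      # extrapolating ISSU impact times from measured frame loss.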
   All in all, the load model should attempt to simulate the production
   network environment to the greatest extent possible in order to
   maximize the applicability of the results generated.

5. ISSU Test Methodology

   As previously described, for the purposes of this test document, the
   ISSU process is divided into three main phases.  The following
   methodology assumes that a suitable test topology has been
   constructed per Section 4.  A description of the methodology to be
   applied for each of the above phases follows:

5.1. Pre-ISSU recommended verifications

   1. Verify that enough hardware and software resources are available
      to complete the Load operation (enough disk space).

   2. Verify that the redundancy states between RPs and other nodes are
      as expected (e.g. redundancy on, RP's synchronized).

   3. Verify that the device, if running NSR-capable routing protocols,
      is in a "ready" state; that is, that the sync between RPs is
      complete and the system is ready for failover, if necessary.

   4. Gather a configuration snapshot of the device and all of its
      applicable components.

   5. Verify that the node is operating in a "steady" state (that is,
      no critical or maintenance function is currently being
      performed).

   6. Note any other operational characteristics that the tester may
      deem applicable to the specific implementation deployed.

5.2. Software Staging

   1. Establish all relevant protocol adjacencies and stabilize routing
      within the test topology.  In particular, ensure that the scaled
      levels of the dynamic protocols are dimensioned as specified by
      the test topology plan.

   2. Clear relevant logs and interface counters to simplify analysis.
      If possible, set logging timestamps to a highly granular mode.
      If the topology includes management systems, ensure that the
      appropriate polling levels have been applied, sessions
      established, and that the responses are per expectation.

   3. Apply the traffic loads as specified in the load model previously
      developed for this exercise.

   4. Document an operational baseline for the test bed with relevant
      data supporting the above steps (include all relevant load
      characteristics of interest in the topology, e.g. routing load,
      traffic volumes, memory and CPU utilization).

   5. Note the start time (T0) and begin the code change process
      utilizing the appropriate mechanisms as expected to be used in
      production (e.g. active download with TFTP/FTP/SCP/etc. or direct
      install from local or external storage facility).  In order to
      ensure that ISSU process timings are not skewed by the lack of a
      network-wide synchronization source, the use of a network NTP
      source is encouraged.

   6. Take note of any logging information and command line interface
      (CLI) prompts as needed (this detail will be vendor-specific).
      Respond to any DUT prompts in a timely manner.

   7. Monitor the DUT for the reload of the secondary RP to the new
      software level.  Once the secondary has stabilized on the new
      code, note the completion time.  The duration of these steps will
      be recorded as "T1" (an illustrative timing sketch follows this
      list).

   8. Review system logs for any anomalies, check that relevant dynamic
      protocols have remained stable, and note traffic loss if any.
      Verify that deployed management systems have not identified any
      unexpected behavior.
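   Purely as an illustration of the T0/T1 bookkeeping in steps 5
   through 7, the following sketch (Python) records T0 when the code
   change is initiated and derives T1 once the operator confirms that
   the secondary RP has stabilized; the interactive prompt is merely a
   stand-in for whatever detection mechanism is actually used:

      # Illustrative sketch: capture T0 and derive T1 for the staging
      # phase.  The input() prompt stands in for the operator's (or a
      # script's) detection that the secondary RP is stable.
      import time

      t0 = time.time()    # step 5: note the start time (T0)
      print("T0 recorded; initiate the code change on the DUT now.")

      input("Press Enter once the secondary RP is stable on new code ")
      t1 = time.time() - t0    # step 7: duration recorded as T1
      print(f"T1 (software staging / secondary update): {t1:.1f} s")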
5.3. Upgrade Run

   The following assumes that the software load step and upgrade step
   are discretely controllable.  If not, maintain the aforementioned
   timer and monitor for completion of the ISSU as described below.

   1. Note the start time and initiate the actual upgrade procedure.

   2. Monitor the operation of the secondary route processor while it
      initializes with the new software and assumes mastership of the
      DUT.  At this point, pay particular attention to any indications
      of control plane disruption, traffic impact or other anomalous
      behavior.  Once the DUT has converged upon the new code and
      returned to normal operation, note the completion time and log
      the duration of this step as T2.

   3. Review the syslog data in the DUT and neighboring devices for any
      behavior which would be disruptive in a production environment
      (linecard reloads, control plane flaps, etc.).  Examine the
      traffic generators for any indication of traffic loss over this
      interval.  If the Test Set reported any traffic loss, note the
      number of frames lost as "TP_frames".  If the test set also
      provides outage duration, note this as "TP_time" (alternatively,
      this may be calculated as TP_frames divided by the offered pps
      (packets per second) load; a worked example follows this list).

   4. Verify the DUT status observations as per any NMS systems
      managing the DUT and its neighboring devices.  Document the
      observed CPU and memory statistics both during and after the ISSU
      upgrade event, and ensure that memory and CPU have returned to an
      expected (previously baselined) level.
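   The extrapolation referenced in step 3 can be illustrated with the
   following sketch (Python; the counter values are placeholders for
   numbers taken from the test set and the load model):

      # Illustrative sketch: derive the traffic impact time from frame
      # loss when the test set does not report outage duration.
      tp_frames = 12500     # frames reported lost over the upgrade run
      offered_pps = 25000   # offered load, frames per second

      tp_time_ms = (tp_frames / offered_pps) * 1000.0
      print(f"TP_frames: {tp_frames}")
      print(f"TP_time:   {tp_time_ms:.1f} ms (TP_frames / offered pps)")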
5.4. Post ISSU verification

   The following describes a set of post-ISSU verification tasks that
   are not directly part of the ISSU process, but are recommended for
   execution in order to validate a successful upgrade:

   1. Configuration delta analysis

      Examine the post-ISSU configurations to determine if any changes
      have occurred either through process error or due to differences
      in the implementation of the upgraded code.

   2. Exhaustive control plane analysis

      Review the details of the RIB and FIB to assess whether any
      unexpected changes have been introduced in the forwarding paths.

   3. Verify that both RPs are up and that the redundancy mechanism for
      the control plane is enabled and fully synchronized.

   4. Verify that no control plane (protocol) events or flaps were
      detected.

   5. Verify that no L1 and/or L2 interface flaps were observed.

   6. Document the hitless operation or presence of an outage based
      upon the counter values provided by the Test Set.

5.5. ISSU under negative stimuli

   As an OPTIONAL test case, the operator may want to perform an ISSU
   test while the DUT is under stress by introducing route churn to any
   or all of the involved phases of the ISSU process.

   One approach relies on the operator to gather statistical
   information from the production environment and determine a specific
   number of routes to flap every 'fixed' or 'variable' interval.
   Alternatively, the operator may wish to simply pre-select a fixed
   number of prefixes to flap.  As an example, an operator may decide
   to flap 1% of all the BGP routes every minute and restore them 1
   minute afterwards.  The tester may wish to apply this negative
   stimulus throughout the entire ISSU process or, most importantly,
   during the run phase.  It is important to ensure that these routes,
   which are introduced solely for stress purposes, do not overlap the
   ones (per the Load Model) specifically leveraged to calculate the TP
   (recorded outage).  Furthermore, there SHOULD NOT be
   'operator-induced' control plane or protocol adjacency flaps for the
   duration of the test process, as they may adversely affect the
   characterization of the entire test exercise.  For example,
   triggering IGP adjacency events may force re-computation of
   underlying routing tables with attendant impact to the perceived
   ISSU timings.  While not recommended, if such trigger events are
   desired by the test operator, care should be taken to avoid the
   introduction of unexpected anomalies within the test harness.

6. ISSU Abort and Rollback

   Where a vendor provides such support, the ISSU process could be
   aborted for any reason by the operator.  However, the end results
   and behavior may depend on the specific phase where the process was
   aborted.  While this is implementation-dependent, as a general
   recommendation, if the process is aborted during the "Software
   Download" or "Software Staging" phases, no impact to service or
   device functionality should be observed.  In contrast, if the
   process is aborted during the "Upgrade Run" or "Upgrade Acceptance"
   phases, the system may reload and revert to the previous software
   release, and as such, this operation may be service affecting.
   Where vendor support is available, the abort/rollback functionality
   should be verified and the impact, if any, quantified, generally
   following the procedures provided above.

7. Final Report - Data Presentation - Analysis

   All ISSU impact results are summarized in a simple statement
   describing the "ISSU Disruption Impact", including the measured
   frame loss and impact time, where impact time is defined as the time
   frame determined per the TP reported outage.  These are considered
   to be the primary data points of interest.

   However, the entire ISSU operational impact should also be
   considered in support of planning for maintenance, and as such,
   additional reporting points are included:

      Software download/secondary update        T1

      Upgrade/Run                               T2

      ISSU Traffic Disruption (Frame Loss)      TP_frames

      ISSU Traffic Impact Time (milliseconds)   TP_time

      ISSU Housekeeping Interval                T3
      (Time for both RP's up on new code and fully synced - redundancy
      restored)

      Total ISSU Maintenance Window             T4 (sum of T1+T2+T3)
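   The arithmetic behind these reporting points is straightforward; the
   following sketch (Python; all values are placeholders for measured
   results) assembles the primary data points and derives T4 as the sum
   of T1, T2 and T3:

      # Illustrative sketch: assemble the primary ISSU reporting
      # points.  All numeric values are placeholders for measurements.
      t1 = 420.0         # software download / secondary update, seconds
      t2 = 180.0         # upgrade run, seconds
      t3 = 240.0         # housekeeping interval, seconds
      tp_frames = 12500  # ISSU traffic disruption (frame loss)
      tp_time_ms = 500.0 # ISSU traffic impact time

      t4 = t1 + t2 + t3  # total ISSU maintenance window
      print("ISSU Disruption Impact: "
            f"{tp_frames} frames lost, impact time {tp_time_ms:.0f} ms")
      print(f"T1={t1:.0f}s T2={t2:.0f}s T3={t3:.0f}s "
            f"Total ISSU Maintenance Window T4={t4:.0f}s")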
   The results reporting MUST provide the following information:

      DUT hardware and software detail

      Test Topology definition and diagram (especially as related to
      the ISSU operation)

      Load Model description including protocol mixes and any
      divergence from the production environment

      Time Results as per above

      Anomalies Observed during ISSU

      Anomalies Observed in post-ISSU analysis

   It is RECOMMENDED that the following parameters be reported as
   outlined below:

      Parameter                  Units or Examples
      -------------------------------------------------------------
      Traffic Load               Frames per second and bits per second

      Disruption (average)       Frames

      Impact Time (average)      Milliseconds

      Number of trials           Integer count

      Protocols                  IPv4, IPv6, MPLS, etc.

      Frame Size                 Octets

      Port Media                 Ethernet, Gigabit Ethernet (GbE),
                                 Packet over SONET (POS), etc.

      Port Speed                 10 Gbps, 1 Gbps, 100 Mbps, etc.

      Interface Encaps           Ethernet, Ethernet VLAN, PPP,
                                 High-Level Data Link Control (HDLC),
                                 etc.

      Number of Prefixes         Integer count
      flapped (ON Interval)      (Optional) # of prefixes / Time
                                 (minutes)
      flapped (OFF Interval)     (Optional) # of prefixes / Time
                                 (minutes)

   Document any configuration deltas which are observed after the ISSU
   upgrade has taken effect.  Note differences that are driven by
   changes in the patch or release level, as well as items that are
   aberrant changes due to software faults.  In either of these cases,
   any unexpected behavioral changes should be analyzed and a
   determination made as to the impact of the change (be it functional
   variances or operational impacts to existing scripts or management
   mechanisms).

7.1. Data collection considerations

   When a DUT is undergoing an ISSU operation, it is worth noting that
   the DUT's data collection and reporting of data, such as counters,
   interface statistics, log messages, etc., may not be accurate.  As
   such, one SHOULD NOT rely on the DUT's data collection methods, but
   rather SHOULD use the test tools and equipment to collect data used
   for reporting in Section 7.  Care and consideration should be paid
   in testing or adding new test cases, such that the desired data can
   be collected from the test tools themselves, or other external
   equipment, outside of the DUT itself.

8. Security Considerations

   All BMWG memos are limited to testing in a laboratory Isolated Test
   Environment (ITE), thus avoiding accidental interruption to
   production networks due to test activities.

   All benchmarking activities are limited to technology
   characterization using controlled stimuli in a laboratory
   environment with dedicated address space and the other constraints
   of [RFC 2544].

   The benchmarking network topology will be an independent test setup
   and MUST NOT be connected to devices that may forward the test
   traffic into a production network or misroute traffic to the test
   management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the device under
   test / system under test (DUT/SUT).

   Special capabilities SHOULD NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT SHOULD be identical in the lab and in
   production networks.

9. IANA Considerations

   There are no IANA actions required by this memo.

10. References

10.1. Normative References

   [RFC 2119] Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC 2544] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
              Network Interconnect Devices", RFC 2544, March 1999.

10.2. Informative References

   [RFC 5880] Katz, D. and D. Ward, "Bidirectional Forwarding Detection
              (BFD)", RFC 5880, June 2010.

   [RFC 6815] Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
              "Applicability Statement for RFC 2544: Use on Production
              Networks Considered Harmful", RFC 6815, November 2012.

11. Acknowledgments

   The authors wish to thank Vibin Thomas for his valued review and
   feedback.
Authors' Addresses

   Sarah Banks
   VSS Monitoring
   Email: sbanks@encrypted.net

   Fernando Calabria
   Cisco Systems
   Email: fcalabri@cisco.com

   Gery Czirjak
   Juniper Networks
   Email: gczirjak@juniper.net

   Ramdas Machat
   Juniper Networks
   Email: rmachat@juniper.net