Benchmarking Working Group                                   Sarah Banks
Internet Draft                                            VSS Monitoring
Intended status: Informational                         Fernando Calabria
Expires: February 10, 2016                                 Cisco Systems
                                                            Gery Czirjak
                                                           Ramdas Machat
                                                        Juniper Networks
                                                             Aug 8, 2015

                      ISSU Benchmarking Methodology
                       draft-ietf-bmwg-issu-meth-02

Abstract

   Modern forwarding devices attempt to minimize control and data
   plane disruptions while performing planned software changes by
   implementing a technique commonly known as In Service Software
   Upgrade (ISSU).  This document specifies a set of common
   methodologies and procedures designed to characterize the overall
   behavior of a Device Under Test (DUT) subject to an ISSU event.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as
   "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/1id-abstracts.html

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on February 10, 2016.

Copyright Notice

   Copyright (c) 2015 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include
   Simplified BSD License text as described in Section 4.e of the
   Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Generic ISSU Process, phased approach
      3.1. Software Download
      3.2. Software Staging
      3.3. Upgrade Run
      3.4. Upgrade Acceptance
   4. Test Methodology
      4.1. Test Topology
      4.2. Load Model
   5. ISSU Test Methodology
      5.1. Pre-ISSU recommended verifications
      5.2. Software Staging
      5.3. Upgrade Run
      5.4. Post ISSU verification
      5.5. ISSU under negative stimuli
   6. ISSU Abort and Rollback
   7. Final Report - Data Presentation - Analysis
      7.1. Data collection considerations
   8. Security Considerations
   9. IANA Considerations
   10. References
      10.1. Normative References
      10.2. Informative References
   11. Acknowledgments

1. Introduction

   As required by most Service Provider (SP) network operators, ISSU
   functionality has been implemented by modern forwarding devices to
   upgrade or downgrade from one software version to another, with
   the goal of eliminating router downtime and/or service outages.
   However, it is noted that while most operators desire the complete
   elimination of downtime, minimization of downtime and service
   degradation is often the practical expectation.

   The ISSU operation may apply in terms of an atomic version change
   of the entire system software, or it may be applied in a more
   modular sense, such as for a patch or maintenance upgrade.  The
   procedure described herein may be used to verify either approach,
   as may be supported by the vendor hardware and software.

   In support of this document, the desired behavior for an ISSU
   operation can be summarized as follows:

   - The software is successfully migrated from one version to a
   successive version, or vice versa.

   - There are no control plane interruptions throughout the process.
   That is, the upgrade/downgrade can be accomplished while the
   device remains "in service".
   It is noted, however, that most service providers will still
   undertake such actions in a maintenance window (even in redundant
   environments) to minimize any risk.

   - Interruptions to the forwarding plane are minimal to none.

   - The total time to accomplish the upgrade is minimized, again to
   reduce potential network outage exposure (e.g., an external
   failure event might impact the network while it operates with
   reduced redundancy).

   This document provides a set of procedures to characterize a given
   forwarding device's ISSU behavior quantitatively, from the
   perspective of meeting the above expectations.

   Different hardware configurations may be expected to be
   benchmarked, but a typical configuration for a forwarding device
   that supports ISSU consists of at least one pair of Routing
   Processors (RPs) that operate in a redundant fashion, and single
   or multiple Forwarding Engines (Line Cards) that may or may not be
   redundant, as well as fabric cards or other components as
   applicable.  This does not preclude the possibility that the
   device in question can perform ISSU functions through the
   operation of independent process components, which may be upgraded
   without impact to the overall operation of the device.  As an
   example, perhaps the software module involved in SNMP functions
   can be upgraded without impacting other operations.

   The concept of a multi-chassis deployment may also be
   characterized by the current set of proposed methodologies, but
   the implementation-specific details (e.g., process placement) are
   beyond the scope of the current document.

   Since most modern forwarding devices, where ISSU would be
   applicable, do consist of redundant RPs and hardware-separated
   control plane and data plane functionality, this document will
   focus on methodologies that are directly applicable to those
   platforms.  It is anticipated that the concepts and approaches
   described herein may be readily extended to accommodate other
   device architectures as well.

2. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in RFC 2119
   [RFC 2119].

   In this document, these words will appear with that interpretation
   only when in ALL CAPS.  Lower case uses of these words are not to
   be interpreted as carrying RFC 2119 significance.

3. Generic ISSU Process, phased approach

   ISSU may be viewed as the behavior of a device when exposed to a
   planned change in its software functionality.  This may mean
   changes to the core operating system, separate processes or
   daemons, or even firmware logic in programmable hardware devices
   (e.g., CPLD/FPGA).  The goal of an ISSU implementation is to
   permit such actions with minimal or no disruption to the primary
   operation of the device in question.

   ISSU may be user initiated through direct interaction with the
   device, or activated through some automated process on a
   management system or even on the device itself.  For the purposes
   of this document, we will focus on the model where the ISSU action
   is initiated by direct user intervention.

   The ISSU process can be viewed as a series of different phases or
   activities, as defined below.
   For each of these phases, the test operator must record the
   outcome as well as any relevant observations (defined further in
   the present document).  Note that a given vendor implementation
   may or may not permit aborting the in-progress ISSU at particular
   stages.  There may also be certain restrictions as to ISSU
   availability given certain functional configurations (for example,
   ISSU in the presence of Bidirectional Forwarding Detection (BFD)
   [RFC 5880] may not be supported).  It is incumbent upon the test
   operator to ensure that the DUT is appropriately configured to
   provide the intended test environment.  As with any properly
   orchestrated test effort, the test plan document should reflect
   these and other relevant details and should be written with close
   attention to the expected production operating environment.  The
   combined analysis of the results of each phase will characterize
   the overall ISSU process, with the main goal of being able to
   identify and quantify any disruption in service (from the data and
   control plane perspective), allowing operators to plan their
   maintenance activities with greater precision.

3.1. Software Download

   In this first phase, the requested software package may be
   downloaded to the router and is typically stored on local media.
   The downloading of software may be performed automatically by the
   device as part of the upgrade process, or it may be initiated
   separately.  Such separation allows an administrator to download
   the new code inside or outside of a maintenance window; it is
   anticipated that downloading new code and saving it to disk on the
   router will not impact operations.  In the case where the software
   can be downloaded outside of the actual upgrade process, the
   administrator should do so; downloading software can skew timing
   results based on factors that are often not comparative in nature.
   Internal compatibility verification may be performed by the
   software running on the DUT, to verify the checksum of the files
   downloaded as well as any other pertinent checks.  Depending upon
   vendor implementation, these mechanisms may extend to include
   verification that the downloaded module(s) meet a set of
   identified prerequisites, such as (but not limited to) hardware or
   firmware compatibility, minimum software requirements, or even
   ensuring that the device is "authorized" to run the target
   software.  Where such mechanisms are made available by the
   product, they should be verified by the tester, with a view to
   avoiding operational issues in production.  Verification should
   include both positive verification (ensuring that an ISSU action
   should be permitted) as well as negative tests (creation of
   scenarios where the verification mechanisms would report
   exceptions).
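   The DUT's checksum verification can also be cross-checked from the
   test harness side before the image is staged.  The following is a
   minimal sketch, assuming the image file and a vendor-published
   SHA-256 digest are available to the tester; the file name and
   digest value are hypothetical placeholders.

      import hashlib

      def sha256_of(path, chunk_size=1024 * 1024):
          """Compute the SHA-256 digest of a file in fixed-size chunks."""
          digest = hashlib.sha256()
          with open(path, "rb") as f:
              for chunk in iter(lambda: f.read(chunk_size), b""):
                  digest.update(chunk)
          return digest.hexdigest()

      # Hypothetical image name and published digest value; replace with
      # the real file and the 64-hex-digit digest from the release notes.
      IMAGE = "dut-software-target.bin"
      PUBLISHED_SHA256 = "replace-with-the-digest-from-the-release-notes"

      if sha256_of(IMAGE) != PUBLISHED_SHA256.lower():
          raise SystemExit("image digest mismatch - do not stage this image")
      print("image digest verified")

   A corresponding negative test is to corrupt a scratch copy of the
   image and confirm that both this external check and the DUT's own
   verification mechanisms reject it.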
3.2. Software Staging

   In this second phase, the requested software package is loaded
   onto the pertinent components of a given forwarding device
   (typically the RP in standby state).  Internal compatibility
   verification may be performed by the software running on the DUT,
   as part of the upgrade process itself, to verify the checksum of
   the files downloaded as well as any other pertinent checks.
   Depending upon vendor implementation, these mechanisms may extend
   to include verification that the downloaded module(s) meet a set
   of identified prerequisites, such as hardware or firmware
   compatibility or minimum software requirements.  Where such
   mechanisms are made available by the product, they should be
   verified by the tester (again, with a view to avoiding operational
   issues in production).  In this case, the execution of these
   checks is within the scope of the upgrade time and should be
   included in the testing results.  Once the new software is loaded
   onto the pertinent components of the DUT, the upgrade begins and
   the DUT prepares itself for the upgrade.  Depending on the vendor
   implementation, it is expected that redundant hardware pieces
   within the DUT are upgraded, including the backup or secondary RP.

3.3. Upgrade Run

   In this phase, a switchover of RPs may take place, where one RP is
   now upgraded with the new version of software.  More importantly,
   the "Upgrade Run" phase is where the internal changes made to
   information and state stored on the router, on disk and in memory,
   are either migrated to the "new" version of code, or
   transformed/rebuilt to meet the standards of the new version of
   code, and pushed onto the appropriate pieces of hardware.  It is
   within this phase that any outage(s) on the control or forwarding
   plane may be expected to be observed.  This is the critical phase
   of the ISSU, where the control plane should not be impacted and
   any interruptions to the forwarding plane should be minimal to
   none.  For some implementations, the above two steps may be
   concatenated into one monolithic operation.  In such a case, the
   calculation of the respective ISSU time intervals may need to be
   adapted accordingly.

   If any control or data plane interruptions are observed within
   this stage, they should be recorded as part of the results
   document.

3.4. Upgrade Acceptance

   In this phase, the new version of software must be running on all
   the physical nodes of the logical forwarding device (RPs and LCs,
   as applicable).  At this point, configuration control is returned
   to the operator and normal device operation, i.e., outside of
   ISSU-oriented operation, is resumed.

4. Test Methodology

   As stated by [RFC 6815], the Test Topology Setup must be part of
   an ITE (Isolated Test Environment).

   The reporting of results must take into account the repeatability
   considerations from Section 4 of [RFC 2544].  It is RECOMMENDED to
   perform multiple trials and report average results.  The results
   are reported in a simple statement including the measured frame
   loss and ISSU impact times.
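   As an illustration of that reporting convention, the following
   minimal sketch averages the two primary data points across
   repeated trials; the per-trial values are hypothetical
   placeholders taken from the test set and correspond to the
   TP_frames and TP_time quantities defined in Section 5.3.

      from statistics import mean

      # Hypothetical per-trial measurements reported by the test set:
      # frames lost (TP_frames) and impact time in milliseconds (TP_time).
      trials = [
          {"tp_frames": 1210, "tp_time_ms": 96.8},
          {"tp_frames": 1187, "tp_time_ms": 95.0},
          {"tp_frames": 1302, "tp_time_ms": 104.2},
      ]

      avg_frames = mean(t["tp_frames"] for t in trials)
      avg_time = mean(t["tp_time_ms"] for t in trials)

      print(f"ISSU Disruption Impact over {len(trials)} trials: "
            f"{avg_frames:.0f} frames lost, {avg_time:.1f} ms impact time")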
4.1. Test Topology

   The hardware configuration of the DUT (Device Under Test) should
   be identical to the one expected to be or currently deployed in
   production in order for the benchmark to have relevance.  This
   would include the number of RPs, hardware version, memory, and
   initial software release, any common chassis components (such as
   fabric hardware in the case of a fabric-switching platform), and
   the specific LCs (version, memory, interface types, rates, etc.).

   For the Control and Data plane, differing configuration approaches
   may be utilized.  The recommended approach relies on "mimicking"
   the existing production data and control plane information, in
   order to emulate all the necessary Layer 1 through Layer 3
   communications and, if appropriate, the upper layer
   characteristics of the network, as well as end-to-end
   traffic/communication pairs.  In other words, design a
   representative load model of the production environment and deploy
   a collapsed topology utilizing test tools and/or external devices,
   where the DUT will be tested.  Note that the negative impact of
   ISSU operations is likely to affect scaled, dynamic topologies to
   a greater extent than simpler, static environments.  As such, this
   methodology (based upon production configuration) is advised for
   most test scenarios.

   The second, more simplistic approach is to deploy an ITE (Isolated
   Test Environment) in which end-points are "directly" connected to
   the DUT.  In this manner, control plane information is kept to a
   minimum (only connected interfaces) and only a basic data plane of
   sources and destinations is applied.  If this methodology is
   selected, care must be taken to understand that the systemic
   behavior of the ITE may not be identical to that experienced by a
   device in a production network role.  That is, control plane
   validation may be minimal to none with this methodology.
   Consequently, if this approach is chosen, comparison with at least
   one production configuration is recommended in order to understand
   the direct relevance and limitations of the test exercise.

4.2. Load Model

   In consideration of the defined test topology, a load model must
   be developed to exercise the DUT while the ISSU event is
   introduced.  This applied load should be defined in such a manner
   as to provide a granular, repeatable verification of the ISSU
   impact on transit traffic.  Sufficient traffic load (rate) should
   be applied to permit timing extrapolations at a minimum
   granularity of 100 milliseconds (e.g., 100 Mbps for a 10 Gbps
   interface).  The use of steady traffic streams rather than bursty
   loads is preferred to simplify analysis.
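   The following minimal sketch illustrates the dimensioning
   arithmetic implied above: given an offered rate and frame size, it
   derives the offered frames per second and the time represented by
   a single lost frame, which should be comfortably finer than the
   desired 100 millisecond reporting granularity.  The frame size and
   rate values are hypothetical examples.

      # Load dimensioning sketch; frame size and rate are hypothetical.
      FRAME_SIZE = 256            # octets, Layer 2 frame size
      L1_OVERHEAD = 20            # octets, Ethernet preamble + interframe gap
      OFFERED_RATE_BPS = 100e6    # e.g., 100 Mbps of load on a 10 Gbps port

      offered_pps = OFFERED_RATE_BPS / ((FRAME_SIZE + L1_OVERHEAD) * 8)
      ms_per_lost_frame = 1000.0 / offered_pps

      print(f"offered load: {offered_pps:.0f} frames/s")
      print(f"one lost frame represents {ms_per_lost_frame:.3f} ms of outage")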
   The traffic should be patterned to provide a broad range of source
   and destination pairs, which resolve to a variety of FIB
   (forwarding information base) prefix lengths.  If the production
   network environment includes multicast traffic or VPNs (L2, L3, or
   IPsec), it is critical to include these in the model.

   For mixed protocol environments (e.g., IPv4 and IPv6), frames
   should be distributed between the different protocols.  The
   distribution should approximate the network conditions of
   deployment.  In all cases, the details of the mixed protocol
   distribution must be included in the reporting.

   The feature, protocol timing, and other relevant configurations
   should be matched to the expected production environment.
   Deviations from the production templates may be deemed necessary
   by the test operator (for example, certain features may not
   support ISSU, or the test bed may not be able to accommodate
   such).  However, the impact of any such divergence should be
   clearly understood, and the differences must be recorded in the
   results documentation.  It is recommended that an NMS (Network
   Management System) be deployed, preferably similar to that
   utilized in production.  This will allow for monitoring of the DUT
   while it is being tested, both in terms of supporting the system
   resource impact analysis and from the perspective of detecting
   interference with non-transit (management) traffic as a result of
   the ISSU operation.  Additionally, a DUT management session other
   than an SNMP-based one, typical of usage in production, should be
   established to the DUT and monitored for any disruption.  It is
   suggested that the actual test exercise be managed utilizing
   direct console access to the DUT, if at all possible, to avoid the
   possibility that a network interruption impairs execution of the
   test exercise.

   All in all, the load model should attempt to simulate the
   production network environment to the greatest extent possible in
   order to maximize the applicability of the results generated.

5. ISSU Test Methodology

   As previously described, for the purposes of this test document,
   the ISSU process is divided into three main phases.  The following
   methodology assumes that a suitable test topology has been
   constructed per Section 4.  A description of the methodology to be
   applied for each of the above phases follows:

5.1. Pre-ISSU recommended verifications

   1. Verify that enough hardware and software resources are
   available to complete the Load operation (enough disk space).

   2. Verify that the redundancy states between RPs and other nodes
   are as expected (e.g., redundancy on, RPs synchronized).

   3. Verify that the device, if running NSR (Non Stop Routing)
   capable routing protocols, is in a "ready" state; that is, that
   the sync between RPs is complete and the system is ready for
   failover, if necessary.

   4. Gather a configuration snapshot of the device and all of its
   applicable components.

   5. Verify that the node is operating in a "steady" state (that is,
   no critical or maintenance function is being currently performed).

   6. Note any other operational characteristics that the tester may
   deem applicable to the specific implementation deployed.
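   Where the DUT offers a scriptable management interface, these
   verifications can be captured in a small harness script so that
   they are applied consistently before each trial.  The following is
   a minimal sketch only: the run_cli() helper, the command strings,
   and the expected keywords are hypothetical placeholders to be
   replaced with the DUT vendor's actual CLI or API and outputs.

      # Minimal pre-ISSU verification sketch.  run_cli(), the command
      # strings, and the matching keywords are hypothetical placeholders.

      def run_cli(command: str) -> str:
          """Placeholder: send a command to the DUT and return its output."""
          raise NotImplementedError("wire this to the DUT management interface")

      CHECKS = [
          # (description,               command (hypothetical),    expected keyword)
          ("free disk space reported",   "show system storage",     "free"),
          ("RP redundancy synchronized", "show redundancy status",  "synchronized"),
          ("NSR/failover ready",         "show routing nsr status", "ready"),
      ]

      def pre_issu_checks():
          results = {}
          for description, command, keyword in CHECKS:
              output = run_cli(command)
              results[description] = keyword.lower() in output.lower()
          # Archive a configuration snapshot for the post-ISSU delta analysis.
          with open("pre_issu_config.txt", "w") as f:
              f.write(run_cli("show running-config"))
          return results

      if __name__ == "__main__":
          for check, passed in pre_issu_checks().items():
              print(f"{check}: {'PASS' if passed else 'FAIL'}")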
5.2. Software Staging

   1. Establish all relevant protocol adjacencies and stabilize
   routing within the test topology.  In particular, ensure that the
   scaled levels of the dynamic protocols are dimensioned as
   specified by the test topology plan.

   2. Clear relevant logs and interface counters to simplify
   analysis.  If possible, set logging timestamps to a highly
   granular mode.  If the topology includes management systems,
   ensure that the appropriate polling levels have been applied,
   sessions established, and that the responses are per expectation.

   3. Apply the traffic loads as specified in the load model
   previously developed for this exercise.

   4. Document an operational baseline for the test bed with relevant
   data supporting the above steps (include all relevant load
   characteristics of interest in the topology, e.g., routing load,
   traffic volumes, memory and CPU utilization).

   5. Note the start time (T0) and begin the code change process
   utilizing the appropriate mechanisms as expected to be used in
   production (e.g., active download with TFTP/FTP/SCP/etc. or direct
   install from a local or external storage facility).  In order to
   ensure that ISSU process timings are not skewed by the lack of a
   network-wide synchronization source, the use of a network NTP
   source is encouraged.

   6. Take note of any logging information and command line interface
   (CLI) prompts as needed (this detail will be vendor-specific).
   Respond to any DUT prompts in a timely manner.

   7. Monitor the DUT for the reload of the secondary RP to the new
   software level.  Once the secondary has stabilized on the new
   code, note the completion time.  The duration of these steps will
   be recorded as "T1".

   8. Review system logs for any anomalies, check that relevant
   dynamic protocols have remained stable, and note traffic loss, if
   any.  Verify that deployed management systems have not identified
   any unexpected behavior.

5.3. Upgrade Run

   The following assumes that the software load step and upgrade step
   are discretely controllable.  If not, maintain the aforementioned
   timer and monitor for completion of the ISSU as described below.

   1. Note the start time and initiate the actual upgrade procedure.

   2. Monitor the operation of the secondary route processor while it
   initializes with the new software and assumes mastership of the
   DUT.  At this point, pay particular attention to any indications
   of control plane disruption, traffic impact, or other anomalous
   behavior.  Once the DUT has converged upon the new code and
   returned to normal operation, note the completion time and log the
   duration of this step as T2.

   3. Review the syslog data in the DUT and neighboring devices for
   any behavior that would be disruptive in a production environment
   (linecard reloads, control plane flaps, etc.).  Examine the
   traffic generators for any indication of traffic loss over this
   interval.  If the Test Set reported any traffic loss, note the
   number of frames lost as "TP_frames".  If the test set also
   provides outage duration, note this as TP_time (alternatively,
   this may be calculated as TP_frames divided by the offered pps
   (packets per second) load).

   4. Verify the DUT status observations as per any NMS systems
   managing the DUT and its neighboring devices.  Document the
   observed CPU and memory statistics both during and after the ISSU
   upgrade event, and ensure that memory and CPU have returned to an
   expected (previously baselined) level.

5.4. Post ISSU verification

   The following describes a set of post-ISSU verification tasks that
   are not directly part of the ISSU process, but are recommended for
   execution in order to validate a successful upgrade:

   1. Configuration delta analysis

   Examine the post-ISSU configurations to determine if any changes
   have occurred either through process error or due to differences
   in the implementation of the upgraded code (a sketch of such a
   comparison follows this list).

   2. Exhaustive control plane analysis

   Review the details of the RIB and FIB to assess whether any
   unexpected changes have been introduced in the forwarding paths.

   3. Verify that both RPs are up and that the redundancy mechanism
   for the control plane is enabled and fully synchronized.

   4. Verify that no control plane (protocol) events or flaps were
   detected.

   5. Verify that no L1 and/or L2 interface flaps were observed.

   6. Document the hitless operation or presence of an outage based
   upon the counter values provided by the Test Set.
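   As an illustration of the configuration delta analysis in item 1
   above, the following minimal sketch compares configuration
   snapshots captured before and after the ISSU; the file names are
   hypothetical, and any reported differences still require human
   review to classify them as expected or aberrant.

      import difflib

      # Hypothetical snapshot files captured before and after the ISSU.
      with open("pre_issu_config.txt") as f:
          before = f.readlines()
      with open("post_issu_config.txt") as f:
          after = f.readlines()

      delta = list(difflib.unified_diff(before, after,
                                        fromfile="pre-ISSU",
                                        tofile="post-ISSU"))
      if delta:
          print("configuration delta detected:")
          print("".join(delta))
      else:
          print("no configuration delta detected")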
5.5. ISSU under negative stimuli

   As an OPTIONAL Test Case, the operator may want to perform an ISSU
   test while the DUT is under stress by introducing route churn to
   any or all of the involved phases of the ISSU process.

   One approach relies on the operator gathering statistical
   information from the production environment and determining a
   specific number of routes to flap at every 'fixed' or 'variable'
   interval.  Alternatively, the operator may wish to simply
   pre-select a fixed number of prefixes to flap.  As an example, an
   operator may decide to flap 1% of all the BGP routes every minute
   and restore them 1 minute afterwards.  The tester may wish to
   apply this negative stimulus throughout the entire ISSU process
   or, most importantly, during the run phase.  It is important to
   ensure that these routes, which are introduced solely for stress
   purposes, do not overlap the ones (per the Load Model)
   specifically leveraged to calculate the TP (recorded outage).
   Furthermore, there should NOT be operator-induced control plane
   (protocol adjacency) flaps for the duration of the test process,
   as they may adversely affect the characterization of the entire
   test exercise.  For example, triggering IGP adjacency events may
   force re-computation of underlying routing tables with attendant
   impact to the perceived ISSU timings.  While not recommended, if
   such trigger events are desired by the test operator, care should
   be taken to avoid the introduction of unexpected anomalies within
   the test harness.
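   The flap schedule described above can be driven from the test
   harness.  The sketch below is a minimal example assuming a
   hypothetical announce_prefixes()/withdraw_prefixes() pair exposed
   by the route generator or test set; the 1% sample and one-minute
   on/off intervals mirror the example in this section, and the
   flapped set is drawn so that it cannot overlap the prefixes used
   for the TP calculation.

      import random
      import time

      # Hypothetical hooks into the route generator / test set.
      def announce_prefixes(prefixes): ...
      def withdraw_prefixes(prefixes): ...

      def route_churn(all_bgp_prefixes, load_model_prefixes,
                      fraction=0.01, interval_s=60, cycles=10):
          """Flap a fixed sample of prefixes: withdraw for one interval,
          re-announce for the next, repeated for the requested cycles."""
          # Never flap prefixes used to measure the TP (recorded outage).
          candidates = [p for p in all_bgp_prefixes
                        if p not in set(load_model_prefixes)]
          flap_set = random.sample(candidates, int(len(candidates) * fraction))
          for _ in range(cycles):
              withdraw_prefixes(flap_set)
              time.sleep(interval_s)
              announce_prefixes(flap_set)
              time.sleep(interval_s)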
6. ISSU Abort and Rollback

   Where a vendor provides such support, the ISSU process could be
   aborted for any reason by the operator.  However, the end results
   and behavior may depend on the specific phase where the process
   was aborted.  While this is implementation dependent, as a general
   recommendation, if the process is aborted during the "Software
   Download" or "Software Staging" phases, no impact to service or
   device functionality should be observed.  In contrast, if the
   process is aborted during the "Upgrade Run" or "Upgrade Accept"
   phases, the system may reload and revert to the previous software
   release and, as such, this operation may be service affecting.
   Where vendor support is available, the abort/rollback
   functionality should be verified and the impact, if any,
   quantified, generally following the procedures provided above.

7. Final Report - Data Presentation - Analysis

   All ISSU impact results are summarized in a simple statement
   describing the "ISSU Disruption Impact", including the measured
   frame loss and impact time, where impact time is defined as the
   time frame determined per the TP reported outage.  These are
   considered to be the primary data points of interest.

   However, the entire ISSU operational impact should also be
   considered in support of planning for maintenance and, as such,
   additional reporting points are included:

   Software download / secondary update       T1

   Upgrade/Run                                 T2

   ISSU Traffic Disruption (Frame Loss)        TP_frames

   ISSU Traffic Impact Time (milliseconds)     TP_time

   ISSU Housekeeping Interval                  T3
   (Time for both RPs up on new code and fully synced - redundancy
   restored)

   Total ISSU Maintenance Window               T4 (sum of T1+T2+T3)

   The results reporting must provide the following information:

   DUT hardware and software detail

   Test Topology definition and diagram (especially as related to the
   ISSU operation)

   Load Model description including protocol mixes and any divergence
   from the production environment

   Time Results as per above

   Anomalies Observed during ISSU

   Anomalies Observed in post-ISSU analysis

   It is RECOMMENDED that the following parameters be reported as
   outlined below:

   Parameter                   Units or Examples
   ---------------------------------------------------------------
   Traffic Load                Frames per second and bits per second

   Disruption (average)        Frames

   Impact Time (average)       Milliseconds

   Number of trials            Integer count

   Protocols                   IPv4, IPv6, MPLS, etc.

   Frame Size                  Octets

   Port Media                  Ethernet, Gigabit Ethernet (GbE),
                               Packet over SONET (POS), etc.

   Port Speed                  10 Gbps, 1 Gbps, 100 Mbps, etc.

   Interface Encaps            Ethernet, Ethernet VLAN, PPP,
                               High-Level Data Link Control (HDLC),
                               etc.

   Number of Prefixes          Integer count
   flapped (ON Interval)       (Optional) # of prefixes / Time
                               (minutes)
   flapped (OFF Interval)      (Optional) # of prefixes / Time
                               (minutes)

   Document any configuration deltas that are observed after the ISSU
   upgrade has taken effect.  Note differences that are driven by
   changes in the patch or release level, as well as items that are
   aberrant changes due to software faults.  In either of these
   cases, any unexpected behavioral changes should be analyzed and a
   determination made as to the impact of the change (be it
   functional variances or operational impacts to existing scripts or
   management mechanisms).

7.1. Data collection considerations

   When a DUT is undergoing an ISSU operation, it is worth noting
   that the DUT's data collection and reporting of data, such as
   counters, interface statistics, log messages, etc., may not be
   accurate.  As such, one should NOT rely on the DUT's data
   collection methods, but rather should use the test tools and
   equipment to collect the data used for reporting in Section 7.
   Care and consideration should be paid in testing or adding new
   test cases, such that the desired data can be collected from the
   test tools themselves, or other external equipment, outside of the
   DUT itself.
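   Once the phase timers and test-set counters have been collected
   externally, per the considerations above, the summary quantities
   of Section 7 can be assembled mechanically.  The following is a
   minimal sketch; all input values are hypothetical placeholders.

      # Minimal report-assembly sketch; all inputs are hypothetical
      # placeholders collected from the test harness, not from the DUT.
      t1_s = 310.0          # software download / secondary update (T1), s
      t2_s = 145.0          # upgrade/run (T2), s
      t3_s = 420.0          # housekeeping interval (T3), s
      tp_frames = 4530      # frames lost, reported by the test set
      offered_pps = 45290   # offered load during the run, frames/s

      t4_s = t1_s + t2_s + t3_s                   # total maintenance window
      tp_time_ms = tp_frames / offered_pps * 1e3  # traffic impact time

      print(f"ISSU Traffic Disruption (Frame Loss): {tp_frames} frames")
      print(f"ISSU Traffic Impact Time            : {tp_time_ms:.1f} ms")
      print(f"Total ISSU Maintenance Window (T4)  : {t4_s:.0f} s "
            f"(T1={t1_s:.0f}s, T2={t2_s:.0f}s, T3={t3_s:.0f}s)")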
8. Security Considerations

   All BMWG memos are limited to testing in a laboratory Isolated
   Test Environment (ITE), thus avoiding accidental interruption to
   production networks due to test activities.

   All benchmarking activities are limited to technology
   characterization using controlled stimuli in a laboratory
   environment with dedicated address space and the other constraints
   of [RFC 2544].

   The benchmarking network topology will be an independent test
   setup and must NOT be connected to devices that may forward the
   test traffic into a production network or misroute traffic to the
   test management network.

   Further, benchmarking is performed on a "black-box" basis, relying
   solely on measurements observable external to the device under
   test/system under test (DUT/SUT).

   Special capabilities should NOT exist in the DUT/SUT specifically
   for benchmarking purposes.  Any implications for network security
   arising from the DUT/SUT should be identical in the lab and in
   production networks.

9. IANA Considerations

   There are no IANA actions required by this memo.

10. References

10.1. Normative References

   [RFC 2119]  Bradner, S., "Key words for use in RFCs to Indicate
               Requirement Levels", BCP 14, RFC 2119, March 1997.

   [RFC 2544]  Bradner, S. and J. McQuaid, "Benchmarking Methodology
               for Network Interconnect Devices", RFC 2544,
               March 1999.

10.2. Informative References

   [RFC 5880]  Katz, D. and D. Ward, "Bidirectional Forwarding
               Detection (BFD)", RFC 5880, June 2010.

   [RFC 6815]  Bradner, S., Dubray, K., McQuaid, J., and A. Morton,
               "Applicability Statement for RFC 2544: Use on
               Production Networks Considered Harmful", RFC 6815,
               November 2012.

11. Acknowledgments

   The authors wish to thank Vibin Thomas for his valued review and
   feedback.

Authors' Addresses

   Sarah Banks
   VSS Monitoring
   Email: sbanks@encrypted.net

   Fernando Calabria
   Cisco Systems
   Email: fcalabri@cisco.com

   Gery Czirjak
   Juniper Networks
   Email: gczirjak@juniper.net

   Ramdas Machat
   Juniper Networks
   Email: rmachat@juniper.net