Network Working Group                                    Vishwas Manral
Internet Draft                                          Netplane Systems
                                                              Russ White
                                                           Cisco Systems
                                                             Aman Shaikh
Expiration Date: December 2002                  University of California
File Name: draft-ietf-bmwg-ospfconv-applicability-00.txt       June 2002

         Benchmarking Applicability for Basic OSPF Convergence
              draft-ietf-bmwg-ospfconv-applicability-00.txt

1. Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet Drafts are working documents of the Internet Engineering
   Task Force (IETF), its Areas, and its Working Groups.  Note that
   other groups may also distribute working documents as Internet
   Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months.  Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time.  It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress".

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

2. Abstract

   This draft describes the applicability of [2] and of similar work
   which may be done in the future.  Refer to [3] for the terminology
   used in this draft and in [2].  The draft describes the advantages
   as well as the limitations of using the method defined in [2], and
   the pitfalls to avoid during measurement.

3. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [1].

4. Motivation

   There is a growing interest in testing SR-Convergence for routing
   protocols, with many people looking at testing methodologies which
   can provide information on how long it takes for a network to
   converge after various network events occur.  It is important to
   consider the framework within which any given convergence test is
   executed when attempting to apply the results of the testing, since
   the framework can have a major impact on the results.  For
   instance, determining when a network is converged, which parts of
   the router's operation are considered within the testing, and other
   such decisions will have a major impact on the apparent performance
   that the routing protocols provide.

   This document describes in detail the various benefits and pitfalls
   of the tests described in [2].  It also explains how such
   measurements can be useful for providers and the research
   community.

5. Advantages of Such Measurement

   o  To be able to compare iterations of a protocol implementation.
      It is often useful to be able to compare the performance of two
      iterations of a given implementation of a protocol to determine
      where improvements have been made and where further improvements
      can be made.

   o  To understand, given a set of parameters (network conditions),
      how a particular implementation on a particular device is going
      to perform.
      For instance, if you were trying to decide the processing power
      (size of device) required in a certain location within a
      network, you can emulate the conditions which are going to exist
      at that point in the network and use the tests described to
      measure the performance of several different routers.  The
      results of these tests can provide one possible data point for
      an intelligent decision.

      If the device being tested is to be deployed in a running
      network, using routes taken from the network where the equipment
      is to be deployed, rather than some generated topology, will
      give results which are closer to the real performance of the
      device.  Care should be taken to emulate or take routes from the
      actual location in the network where the device will be (or
      would be) deployed.  For instance, one set of routes may be
      taken from an ABR, one set from an area 0 only router, various
      sets from stub areas, another set from various normal areas,
      etc.

   o  To measure the performance of an OSPF implementation in a wide
      variety of scenarios.

   o  To be used as parameters in OSPF simulations by researchers.  It
      may sometimes be required for certain kinds of research to
      measure the individual delays of each parameter within an OSPF
      implementation.  These delays can be measured using the methods
      defined in [2].

   o  To help optimize certain configurable parameters.  It may
      sometimes be helpful for operators to know the delay required
      for individual tasks so as to optimize the resource usage in the
      network; i.e., if it is found that the processing time is x
      seconds on a router, it would be helpful to determine the rate
      at which to flood LSAs to that router so as to not overload the
      network (a small illustration of this calculation follows this
      list).
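   As a rough illustration of the calculation mentioned in the last
   bullet above, the sketch below (in Python, using purely
   hypothetical numbers and names; nothing here is defined by [2])
   shows how a measured per-LSA processing time could be turned into
   an upper bound on the rate at which LSAs are flooded to a router.

      # Minimal sketch: derive a flood rate from a measured per-LSA
      # processing time.  Both inputs are hypothetical examples.

      def safe_flood_rate(processing_time_s, headroom=0.8):
          """LSA flood rate that keeps the receiver below full load."""
          # A router needing processing_time_s seconds per LSA absorbs
          # at most 1/processing_time_s LSAs per second; the headroom
          # factor leaves capacity for the router's other work.
          return headroom / processing_time_s

      # Example: 5 ms of processing per LSA, keeping 20% headroom:
      # safe_flood_rate(0.005) -> 160.0 LSAs per second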
6. Assumptions Made and Limitations of such measurements

   o  The interactions of SR-Convergence and forwarding are not
      considered; testing is restricted to events occurring within the
      control plane.  Forwarding performance is the primary focus in
      [4], and it is expected to be dealt with in work that ensues
      from [5].

   o  Duplicate LSAs are acknowledged immediately.  A few tests rely
      on the property that acknowledgements of duplicate LSAs are not
      delayed but are sent immediately.  However, if an implementation
      does not acknowledge duplicate LSAs immediately on receipt, the
      testing methods presented in [2] could give inaccurate
      measurements.

   o  It is assumed that SPF is non-preemptive.  If SPF is implemented
      so that it can (and will) be preempted, the SPF measurements
      taken in [2] would include the times that the SPF process is not
      running ([2] measures the total time taken for SPF to run, not
      the amount of time that SPF actually spends on the device's
      processor), thus giving inaccurate measurements.

   o  Some implementations may be multithreaded or use a
      multiprocess/multirouter model of OSPF.  If, because of this,
      any of the assumptions made in the measurements are violated in
      such a model, the result could be inaccurate measurements.

   o  The measurements resulting from the tests in [2] may not provide
      the information required to deploy a device in a large scale
      network.  The tests described focus on individual components of
      an OSPF implementation's performance, and it may be difficult to
      combine the measurements in a way which accurately depicts a
      device's performance in a large scale network.  Further research
      is required in this area.

7. Observations on the Tests Described in [2]

   Some observations taken while implementing the tests described in
   [2] are noted in this section.

7.1. Measuring the SPF Processing Time Externally

   The most difficult test to perform is the external measurement of
   the time required to perform an SPF calculation, since the amount
   of time between the first LSA which indicates a topology change and
   the duplicate LSA is critical.  If the duplicate LSA is sent too
   quickly, it may be received before the device under test actually
   begins running SPF on the network change information.  If the delay
   between the two LSAs is too long, the device under test may finish
   SPF processing before receiving the duplicate LSA.  It is important
   to closely investigate any delays between the receipt of an LSA and
   the beginning of an SPF calculation in the device under test;
   multiple tests with various delays might be required to determine
   what delay needs to be used to accurately measure the SPF
   calculation time.

7.2. Noise in the Measurement Device

   The device on which measurements are taken (not the device under
   test) also adds noise to the test results, primarily in the form of
   delay in packet processing and in producing the output from which
   measurements are taken.  The largest source of noise is generally
   the delay between the receipt of packets by the measuring device
   and the information about the packet reaching the device's output,
   where the event can be measured.  The following steps may be taken
   to reduce this sampling noise:

   o  Take a lot of samples.  The more samples which are taken, the
      less the noise in the measurements will impact the overall
      measurement, as noise tends to average out over a large number
      of samples.

   o  Try to take the time-stamp for a packet as early as possible.
      Depending on the operating system being used on the box, one can
      instrument the kernel to take the time-stamp when the interrupt
      is processed.  This does not eliminate the noise completely, but
      it at least reduces it.

   o  Keep the measurement box as lightly loaded as possible, unless
      the loading is part of the test itself.

   o  Having an estimate of the noise can also be useful.

   The DUT also adds noise to the measurement.  The first and third
   points above also apply to the DUT.
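   To make the discussion in Sections 7.1 and 7.2 more concrete, the
   sketch below shows one way an external tester might combine the
   duplicate-LSA timing idea with repeated sampling.  The helpers
   send_lsa() and wait_for_ack() are hypothetical test-harness hooks,
   not functions defined by [2], and the measured value still includes
   the duplicate-acknowledgement turnaround and the measurement-device
   noise discussed above, which would have to be estimated and
   subtracted separately.

      import statistics
      import time

      def measure_spf_time(send_lsa, wait_for_ack, delay_s,
                           samples=100):
          """Estimate SPF duration using the duplicate-LSA technique.

          send_lsa(duplicate=False) injects a changed LSA, triggering
          SPF on the DUT; send_lsa(duplicate=True) re-sends the same
          LSA; wait_for_ack() blocks until the acknowledgement for the
          duplicate arrives.  All three are hypothetical hooks.
          """
          runs = []
          for _ in range(samples):       # many samples average out noise
              send_lsa(duplicate=False)  # topology change: DUT starts SPF
              time.sleep(delay_s)        # tuned so SPF is already running
              t0 = time.monotonic()      # time-stamp as early as possible
              send_lsa(duplicate=True)   # assumed: the duplicate's ack is
              wait_for_ack()             # held up until the SPF run ends
              runs.append(time.monotonic() - t0)
          # The spread of the samples gives a rough estimate of noise.
          return statistics.mean(runs), statistics.stdev(runs)

   Running the measurement with several values of delay_s, as
   suggested in Section 7.1, helps find a delay at which the duplicate
   reliably arrives while SPF is still running on the device under
   test.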
7.3. Gaining an Understanding of the Implementation Improves
     Measurements

   While the tester will (generally) not have access to internal
   information about the OSPF implementation being tested using [2],
   the more thorough the tester's knowledge of the implementation is,
   the more accurate the results of the tests will be.  For instance,
   in some implementations, the installation of routes in local
   routing tables may occur while the SPF is being calculated,
   dramatically impacting the time required to calculate the SPF.

7.4. Gaining an Understanding of the Tests Improves Measurements

   One method which can be used to become familiar with the tests
   described in [2] is to perform the tests on an OSPF implementation
   for which all the internal details are available, such as GateD.
   While there is no assurance that any two implementations will be
   similar, this will provide a better understanding of the tests
   themselves.

8. Acknowledgements

   Thanks to Howard Berkowitz (hcb@clark.net) and the rest of the BGP
   benchmarking team for their support, and to Kevin Dubray
   (kdubray@juniper.net), who realized the need for this draft.

9. References

   [1] Bradner, S., "Key words for use in RFCs to Indicate Requirement
       Levels", RFC 2119, March 1997.

   [2] Manral, V., "Benchmarking Methodology for Basic OSPF
       Convergence", draft-bmwg-ospfconv-intraarea-00.txt, May 2002.

   [3] Manral, V., "OSPF Convergence Testing Terminology and
       Concepts", draft-bmwg-ospfconv-term-00.txt, May 2002.

   [4] Bradner, S. and McQuaid, J., "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.

   [5] Trotter, G., "Terminology for Forwarding Information Base (FIB)
       based Router Performance", RFC 3222, October 2001.

10. Authors' Addresses

   Vishwas Manral
   Netplane Systems
   189 Prashasan Nagar
   Road number 72
   Jubilee Hills
   Hyderabad, India

   vmanral@netplane.com

   Russ White
   Cisco Systems, Inc.
   7025 Kit Creek Rd.
   Research Triangle Park, NC 27709

   riw@cisco.com

   Aman Shaikh
   University of California
   School of Engineering
   1156 High Street
   Santa Cruz, CA 95064

   aman@soe.ucsc.edu