Network Working Group                                    Vishwas Manral
Internet Draft                                         Netplane Systems
                                                             Russ White
                                                          Cisco Systems
                                                            Aman Shaikh
Expiration Date: June 2003                     University of California
File Name: draft-ietf-bmwg-ospfconv-applicability-01.txt   January 2003

         Benchmarking Applicability for Basic OSPF Convergence
             draft-ietf-bmwg-ospfconv-applicability-01.txt

1. Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC 2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its Areas, and its Working Groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months.  Internet-Drafts may be updated, replaced, or obsoleted by
   other documents at any time.  It is not appropriate to use Internet-
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress".

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

2. Abstract

   This draft describes the applicability of [2] and of similar work
   which may be done in the future.  Refer to [3] for the terminology
   used in this draft and in [2].  The draft describes the advantages
   as well as the limitations of the methods defined in [2], and points
   out pitfalls to avoid while taking the measurements.

3. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in [1].

4. Motivation

   There is a growing interest in testing SR-Convergence for routing
   protocols, with many people looking at testing methodologies which
   can provide information on how long it takes for a network to
   converge after various network events occur.  It is important to
   consider the framework within which any given convergence test is
   executed when attempting to apply the results of the testing, since
   the framework can have a major impact on the results.  For instance,
   how it is determined that a network has converged, which parts of
   the router's operation are considered within the testing, and other
   such factors have a major impact on the apparent performance of a
   routing protocol.

   This document describes in detail the benefits and pitfalls of the
   tests described in [2].  It also explains how such measurements can
   be useful for providers and the research community.

5. Advantages of Such Measurement

   o  To be able to compare iterations of a protocol implementation.
      It is often useful to compare the performance of two iterations
      of a given implementation of a protocol to determine where
      improvements have been made and where further improvements can
      be made.

   o  To understand, given a set of parameters (network conditions),
      how a particular implementation on a particular device is going
      to perform.  For instance, if you were trying to decide the
      processing power (size of device) required at a certain location
      within a network, you could emulate the conditions which are
      going to exist at that point in the network and use the tests
      described to measure the performance of several different
      routers.  The results of these tests can provide one possible
      data point for an intelligent decision.

      If the device being tested is to be deployed in a running
      network, using routes taken from the network where the equipment
      is to be deployed, rather than some generated topology, will give
      results which are closer to the real performance of the device.
      Care should be taken to emulate or take routes from the actual
      location in the network where the device will be (or would be)
      deployed.  For instance, one set of routes may be taken from an
      ABR, one set from an area 0 only router, various sets from stub
      areas, another set from various normal areas, and so on.

   o  To measure the performance of an OSPF implementation in a wide
      variety of scenarios.

   o  To be used as parameters in OSPF simulations by researchers.  It
      may sometimes be necessary for certain kinds of research to
      measure the individual delays of each parameter within an OSPF
      implementation.  These delays can be measured using the methods
      defined in [2].

   o  To help optimize certain configurable parameters.  It may
      sometimes be helpful for operators to know the delay required
      for individual tasks in order to optimize resource usage in the
      network; for example, if the processing time on a router is
      found to be x seconds, that figure helps determine the rate at
      which to flood LSAs to that router so as not to overload the
      network.  (A rough illustration follows this list.)
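   As a rough illustration of the last point above, the sketch below
   derives a conservative LSA flooding rate from a measured per-LSA
   processing time.  It is not part of the methodology in [2]; the
   per-LSA processing time, the headroom fraction, and the function
   name are hypothetical values and names chosen for illustration.

      # Illustrative sketch (not part of [2]): derive a conservative
      # LSA flooding rate from a measured per-LSA processing time.

      def max_flood_rate(per_lsa_processing_s, headroom=0.5):
          """Return an LSA-per-second rate that consumes at most
          `headroom` of the router's measured processing capacity."""
          if per_lsa_processing_s <= 0:
              raise ValueError("processing time must be positive")
          return headroom / per_lsa_processing_s

      # Example: if a router needs 2 ms to process one LSA and we are
      # willing to use half of its capacity, pace flooding at about
      # 250 LSAs per second.
      if __name__ == "__main__":
          rate = max_flood_rate(per_lsa_processing_s=0.002,
                                headroom=0.5)
          print("pace flooding at about %.0f LSAs/second" % rate)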
6. Assumptions Made and Limitations of Such Measurements

   o  The interactions of SR-Convergence and forwarding are not
      considered; testing is restricted to events occurring within the
      control plane.  Forwarding performance is the primary focus of
      [4], and it is expected to be dealt with in work that ensues
      from [5].

   o  Duplicate LSAs are acknowledged immediately.  A few tests rely
      on the property that acknowledgements of duplicate LSAs are not
      delayed but are sent immediately on receipt.  If an
      implementation does not acknowledge duplicate LSAs immediately
      on receipt, the testing methods presented in [2] could give
      inaccurate measurements.

   o  It is assumed that SPF is non-preemptive.  If SPF is implemented
      so that it can be (and will be) preempted, the SPF measurements
      taken in [2] would include the time during which the SPF process
      is not running ([2] measures the total time taken for SPF to
      complete, not the amount of time that SPF actually spends on the
      device's processor), thus giving inaccurate measurements.

   o  Some implementations may be multithreaded or may use a
      multiprocess/multirouter model of OSPF.  If any of the
      assumptions made in the measurements are violated by such a
      model, the results could be inaccurate.

   o  The measurements resulting from the tests in [2] may not provide
      the information required to deploy a device in a large scale
      network.  The tests described focus on individual components of
      an OSPF implementation's performance, and it may be difficult to
      combine the measurements in a way which accurately depicts a
      device's performance in a large scale network.  Further research
      is required in this area.

7. Observations on the Tests Described in [2]

   Some observations made while implementing the tests described in
   [2] are noted in this section.

7.1. Measuring the SPF Processing Time Externally

   The most difficult test to perform is the external measurement of
   the time required to perform an SPF calculation, since the amount
   of time between the first LSA, which indicates a topology change,
   and the duplicate LSA is critical.  If the duplicate LSA is sent
   too quickly, it may be received before the device under test
   actually begins running SPF on the network change information.  If
   the delay between the two LSAs is too long, the device under test
   may finish SPF processing before receiving the duplicate LSA.  It
   is important to closely investigate any delays between the receipt
   of an LSA and the beginning of an SPF calculation in the device
   under test; multiple tests with various delays might be required to
   determine what delay needs to be used to accurately measure the SPF
   calculation time.
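   One way to find a workable delay is to sweep a range of candidate
   values and observe where the measurements stabilize.  The sketch
   below outlines such a search; it is only a rough illustration, and
   send_topology_change_lsa(), send_duplicate_lsa(), and time_of_ack()
   are hypothetical test-harness hooks rather than anything defined in
   [2].

      # Illustrative sketch: sweep the delay between the topology-
      # change LSA and its duplicate to find a value that causes the
      # duplicate to arrive while SPF is running on the DUT.
      # The send_*/time_of_ack helpers are hypothetical harness hooks;
      # time_of_ack() is assumed to return a time.monotonic() stamp
      # taken when the acknowledgement of the duplicate was observed.

      import time

      def measure_with_delay(harness, delay_s):
          """Send the change LSA, wait delay_s seconds, send the
          duplicate, and return how long the DUT took to acknowledge
          the duplicate."""
          harness.send_topology_change_lsa()
          time.sleep(delay_s)
          sent_at = time.monotonic()
          harness.send_duplicate_lsa()
          return harness.time_of_ack() - sent_at

      def sweep_delays(harness, delays, samples=10):
          """Collect several samples per candidate delay so the region
          in which the duplicate lands inside the SPF run can be
          identified."""
          results = {}
          for delay in delays:
              results[delay] = [measure_with_delay(harness, delay)
                                for _ in range(samples)]
          return results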
7.2. Noise in the Measurement Device

   The device on which measurements are taken (not the device under
   test) also adds noise to the test results, primarily in the form of
   delay in packet processing and measurement output.  The largest
   source of noise is generally the delay between the receipt of a
   packet by the measuring device and the moment at which information
   about that packet reaches the device's output, where the event can
   be measured.  The following steps may be taken to reduce this
   sampling noise:

   o  Take a large number of samples, so that the effect of random
      noise on any single measurement is reduced.

   o  Take the time-stamp for a packet as early as possible.
      Depending on the operating system being used on the measurement
      device, the kernel can be instrumented to take the time-stamp
      when the interrupt is processed.  This does not eliminate the
      noise completely, but it at least reduces it.

   o  Keep the measurement device as lightly loaded as possible.

   o  Having an estimate of the noise can also be useful (one way to
      obtain such an estimate is sketched at the end of this section).

   The DUT also adds noise to the measurement.  The first and third
   points above apply to the DUT as well.
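   As a rough illustration of the first and last points in the list
   above, the sketch below averages repeated samples and uses a
   calibration run (one in which no SPF-triggering event is present)
   to estimate the noise floor of the measurement setup.  The sample
   values shown are hypothetical.

      # Illustrative sketch: reduce and estimate measurement noise by
      # taking many samples and by calibrating against a run in which
      # no real event occurs on the DUT.

      from statistics import mean, stdev

      def summarize(samples_s):
          """Return (mean, standard deviation) of timings in seconds."""
          return mean(samples_s), stdev(samples_s)

      # Hypothetical timings: the calibration run estimates the fixed
      # delays and jitter of the measurement path; the measurement run
      # contains the quantity of interest plus that noise.
      calibration = [0.0021, 0.0019, 0.0023, 0.0020, 0.0022]
      measurement = [0.0540, 0.0555, 0.0532, 0.0549, 0.0561]

      cal_mean, cal_jitter = summarize(calibration)
      meas_mean, meas_jitter = summarize(measurement)

      # Subtracting the calibration mean gives a noise-corrected
      # estimate; the jitter values indicate how stable the setup is
      # and how many samples are worth taking.
      print("corrected estimate: %.4f s" % (meas_mean - cal_mean))
      print("calibration jitter: %.4f s, measurement jitter: %.4f s"
            % (cal_jitter, meas_jitter))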
7.3. Gaining an Understanding of the Implementation Improves
     Measurements

   While the tester will (generally) not have access to internal
   information about the OSPF implementation being tested using [2],
   the more thorough the tester's knowledge of the implementation is,
   the more accurate the results of the tests will be.  For instance,
   in some implementations, the installation of routes in the local
   routing table may occur while the SPF is being calculated,
   dramatically impacting the time required to calculate the SPF.

7.4. Gaining an Understanding of the Tests Improves Measurements

   One method which can be used to become familiar with the tests
   described in [2] is to perform them on an OSPF implementation for
   which all the internal details are available, such as GateD.  While
   there is no assurance that any two implementations will be similar,
   this provides a better understanding of the tests themselves.

8. Acknowledgements

   Thanks to Howard Berkowitz (hcb@clark.net) and the rest of the BGP
   benchmarking team for their support, and to Kevin Dubray
   (kdubray@juniper.net), who realized the need for this draft.

9. References

   [1] Bradner, S., "Key words for use in RFCs to Indicate Requirement
       Levels", RFC 2119, March 1997.

   [2] Manral, V., "Benchmarking Methodology for Basic OSPF
       Convergence", draft-ietf-bmwg-ospfconv-intraarea, January 2003.

   [3] Manral, V., "OSPF Convergence Testing Terminology and
       Concepts", draft-ietf-bmwg-ospfconv-term, January 2003.

   [4] Bradner, S. and J. McQuaid, "Benchmarking Methodology for
       Network Interconnect Devices", RFC 2544, March 1999.

   [5] Trotter, G., "Terminology for Forwarding Information Base (FIB)
       based Router Performance", RFC 3222, October 2001.

10. Authors' Addresses

   Vishwas Manral
   Netplane Systems
   189 Prashasan Nagar
   Road number 72
   Jubilee Hills
   Hyderabad, India

   vmanral@netplane.com

   Russ White
   Cisco Systems, Inc.
   7025 Kit Creek Rd.
   Research Triangle Park, NC 27709

   riw@cisco.com

   Aman Shaikh
   University of California
   School of Engineering
   1156 High Street
   Santa Cruz, CA 95064

   aman@soe.ucsc.edu