Network Working Group                                    Vishwas Manral
Internet Draft                                         Netplane Systems
                                                             Russ White
                                                          Cisco Systems
                                                            Aman Shaikh
Expiration Date: July 2004                    University of California
File Name: draft-ietf-bmwg-ospfconv-term-07.txt            January 2004

              OSPF Benchmarking Terminology and Concepts
                  draft-ietf-bmwg-ospfconv-term-07.txt

1. Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC 2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its Areas, and its Working Groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months.  Internet-Drafts may be updated, replaced, or obsoleted by
   other documents at any time.  It is not appropriate to use Internet-
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress".

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

2. Abstract

   This draft explains the terminology and concepts used in OSPF
   benchmarking.  While some of these terms may be defined elsewhere,
   and we will refer the reader to those definitions in some cases, we
   also include discussions concerning these terms as they relate
   specifically to the tasks involved in benchmarking the OSPF
   protocol.

3. Motivation

   This draft is a companion to [BENCHMARK], which describes basic Open
   Shortest Path First [OSPF] testing methods.  This draft explains the
   terminology and concepts used in OSPF testing framework drafts, such
   as [BENCHMARK].

4. Common Definitions

   Definitions in this section are well-known industry and benchmarking
   terms which may be defined elsewhere.

   o White Box (Internal) Measurements

     - Definition

       White box measurements are measurements reported and collected
       on the Device Under Test (DUT) itself.

     - Discussion

       These measurements rely on output and event recording, along
       with the clocking and timestamping available on the DUT itself.
       Taking measurements on the DUT may impact the actual outcome of
       the test, since it can increase processor loading, memory
       utilization, and timing factors.  Some devices may also not have
       the output required for internal measurements readily available.

       Note: White box measurements can be influenced by the vendor's
       implementation of the various timers and processing models.
       Whenever possible, internal measurements should be compared to
       external measurements to verify and validate them.

       Because of the potential for variations in collection and
       presentation methods across different DUTs, white box
       measurements MUST NOT be used as a basis of comparison in
       benchmarks.  This has been a guiding principle of the
       Benchmarking Methodology Working Group.

   o Black Box (External) Measurements

     - Definition

       Black box measurements infer the performance of the DUT through
       observation of its communications with other devices.

     - Discussion

       One example of a black box measurement: when a downstream device
       receives complete routing information from the DUT, it can be
       inferred that the DUT has transmitted all the routing
       information available.  (A non-normative sketch of this style of
       inference is given at the end of this section.)  External
       measurements of internal operations may suffer in that they
       include not just the protocol action times, but also propagation
       delays, queuing delays, and other such factors.

       For the purposes of [BENCHMARK], external techniques are more
       readily applicable.

   o Multi-device Measurements

     - Measurements assessing communications (usually in combination
       with internal operations) between two or more DUTs.  Multi-
       device measurements may be internal or external.
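   As an illustration of the black box approach above, the following
   non-normative sketch (in Python) infers the point at which the DUT
   has finished flooding from a packet capture taken on a downstream
   neighbor's link.  The record format, function name, and LSA
   identifiers are hypothetical and are used for illustration only.

      # Non-normative sketch: black box measurement.  Each capture
      # record is a (timestamp_in_seconds, lsa_id) pair observed by a
      # device downstream of the DUT.

      def flooding_complete_time(capture, expected_lsas):
          """Return the capture timestamp at which the last expected
          LSA was seen, i.e. the earliest external evidence that the
          DUT has transmitted all routing information available."""
          remaining = set(expected_lsas)
          for timestamp, lsa_id in sorted(capture):
              remaining.discard(lsa_id)
              if not remaining:
                  return timestamp
          raise ValueError("capture is missing one or more LSAs")

      # Example: the downstream device has seen all expected LSAs
      # 0.90 seconds into the capture.
      capture = [(0.10, "10.0.0.1"), (0.45, "10.0.0.2"),
                 (0.90, "10.0.0.3")]
      print(flooding_complete_time(capture,
                                   {"10.0.0.1", "10.0.0.2",
                                    "10.0.0.3"}))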
5. Terms Defined Elsewhere

   Terms in this section are defined elsewhere and are included here
   only to allow a discussion of those terms in reference to
   [BENCHMARK].

   o Point-to-Point Links

     - Definition

       See [OSPF], Section 1.2.

     - Discussion

       A point-to-point link can take less time to converge than a
       broadcast link of the same speed because it does not have the
       overhead of DR election.  Point-to-point links can be either
       numbered or unnumbered.  However, in the context of [BENCHMARK]
       and [OSPF], the two can be regarded as the same.

   o Broadcast Link

     - Definition

       See [OSPF], Section 1.2.

     - Discussion

       The adjacency formation time on a broadcast link can be longer
       than that on a point-to-point link of the same speed, because DR
       election has to take place.  All routers on a broadcast network
       form adjacencies with the DR and BDR.

       Asynchronous flooding also takes place through the DR.  In the
       context of convergence, it may take more time for an LSA to be
       flooded from one DR-other router to another DR-other router,
       because the LSA has to be processed at the DR first.

   o Shortest Path First Execution Time

     - Definition

       The time taken by a router to complete the SPF process, as
       described in [OSPF].

     - Discussion

       This does not include the time taken by the router to give
       routes to the forwarding engine.

       Some implementations may enforce two intervals, the SPF hold
       time and the SPF delay, between successive SPF calculations.  If
       an SPF hold time exists, it should be subtracted from the total
       SPF execution time (a non-normative sketch of this adjustment is
       given at the end of this section).  If an SPF delay exists, it
       should be noted in the test results.

     - Measurement Units

       The SPF time is generally measured in milliseconds.

   o Hello Interval

     - Definition

       See [OSPF], Section 7.1.

     - Discussion

       The hello interval should be the same for all routers on a
       network.

       Decreasing the hello interval can allow the router dead interval
       (below) to be reduced, thus reducing convergence times in those
       situations where the router dead interval timing out causes an
       OSPF process to notice an adjacency failure.  Further discussion
       of small hello intervals is given in [OSPF-SCALING].

   o Router Dead Interval

     - Definition

       See [OSPF], Section 7.1.

     - Discussion

       This is advertised in the router's Hello packets in the
       RouterDeadInterval field.  The router dead interval should be
       some multiple of the HelloInterval (say, 4 times the hello
       interval), and must be the same for all routers attached to a
       common network.
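   The following non-normative sketch (in Python) illustrates the SPF
   hold time adjustment discussed under "Shortest Path First Execution
   Time" above.  The function name and the millisecond values are
   hypothetical and serve only as an example.

      # Non-normative sketch: adjusting a measured SPF figure for a
      # configured SPF hold time.  All values are in milliseconds.

      def spf_execution_time_ms(total_ms, spf_hold_ms=0):
          """Subtract the SPF hold time (if any) from the total time
          measured between the triggering event and SPF completion."""
          if spf_hold_ms > total_ms:
              raise ValueError("hold time exceeds the measured total")
          return total_ms - spf_hold_ms

      # Example: 250 ms measured in total, of which 100 ms was SPF
      # hold time, leaves 150 ms of SPF execution time.  Any SPF
      # delay in effect is not subtracted; it is simply noted with
      # the test results.
      print(spf_execution_time_ms(250, spf_hold_ms=100))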
6. Concepts

6.1. The Meaning of Single Router Control Plane Convergence

   A network is said to be converged when all of the devices within the
   network have a loop-free path to each possible destination.  Since
   we are not testing network convergence, however, but rather the
   performance of a particular device within a network, this definition
   needs to be narrowed somewhat to fit within a single-device view.

   In this case, convergence will mean the point in time when the DUT
   has performed all actions needed to react to the change in topology
   represented by the test condition; for instance, an OSPF device must
   flood any new information it has received, rebuild its shortest path
   first (SPF) tree, and install any new paths or destinations in the
   local routing information base (RIB, or routing table).

   Note that the word convergence has two distinct meanings: the
   process of a group of individuals meeting at the same place, and the
   process of a single individual meeting at the same place as an
   existing group.  This work focuses on the second meaning of the
   word, so we consider the time required for a single device to adapt
   to a network change to be Single Router Convergence.

   This concept does not include the time required for the control
   plane of the device to transfer the information required to forward
   packets to the data plane, nor the amount of time between the data
   plane receiving that information and being able to actually forward
   traffic.

6.2. Measuring Convergence

   Obviously, there are several elements to convergence, even under the
   definition given above for a single device, including (but not
   limited to):

   o The time it takes for the DUT to pass information about a network
     event on to its neighbors.

   o The time it takes for the DUT to process information about a
     network event and calculate a new Shortest Path Tree (SPT).

   o The time it takes for the DUT to make changes in its local RIB
     reflecting the new shortest path tree.

   A non-normative sketch combining these elements for a single network
   event is given at the end of Section 6.3.

6.3. Types of Network Events

   A network event is an event which causes a change in the network
   topology.

   o Link or Neighbor Device Up

     The time needed for an OSPF implementation to recognize a new link
     coming up on the device, build any necessary adjacencies,
     synchronize its database, and perform all other actions needed to
     converge.

   o Initialization

     The time needed for an OSPF implementation to be initialized,
     recognize any links across which OSPF must run, build any needed
     adjacencies, synchronize its database, and perform other actions
     needed to converge.

   o Adjacency Down

     The time needed for an OSPF implementation to recognize a link
     down/adjacency loss based on hello timers alone, propagate any
     information as necessary to its remaining adjacencies, and perform
     other actions needed to converge.

   o Link Down

     The time needed for an OSPF implementation to recognize a link
     down based on layer 2 provided information, propagate any
     information as needed to its remaining adjacencies, and perform
     other actions needed to converge.
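   The following non-normative sketch (in Python) ties the elements of
   Section 6.2 to a single network event of the kind listed above.  The
   field names and timestamp values are hypothetical; they assume that
   each component has been timestamped against a common clock.

      # Non-normative sketch: single router convergence for one
      # network event, built from the elements in Section 6.2.  All
      # timestamps are in seconds from a common clock.

      event = {
          "type": "Link Down",
          "event_time": 0.00,     # when the topology change occurred
          "flooding_done": 0.35,  # last LSA for the change flooded
          "spf_done": 0.42,       # new shortest path tree calculated
          "rib_updated": 0.50,    # local RIB reflects the new tree
      }

      def single_router_convergence(ev):
          """Time from the network event until the DUT has completed
          all actions needed to react to it (flooding, SPF, and RIB
          update); data plane programming time is excluded, as noted
          in Section 6.1."""
          last_action = max(ev["flooding_done"], ev["spf_done"],
                            ev["rib_updated"])
          return last_action - ev["event_time"]

      # Example: this Link Down event yields 0.5 seconds of single
      # router convergence.
      print(single_router_convergence(event))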
7. Acknowledgements

   The authors would like to thank Howard Berkowitz (hcb@clark.net),
   Kevin Dubray (kdubray@juniper.net), Scott Poretsky
   (sporetsky@avici.com), and Randy Bush (randy@psg.com) for their
   discussion, ideas, and support.

8. Normative References

   [BENCHMARK]
        Manral, V., "Benchmarking Basic OSPF Single Router Control
        Plane Convergence", draft-bmwg-ospfconv-intraarea-05, March
        2003.

   [OSPF]
        Moy, J., "OSPF Version 2", RFC 2328, April 1998.

9. Informative References

   [OSPF-SCALING]
        Choudhury, Gagan L., Editor, "Prioritized Treatment of Specific
        OSPF Packets and Congestion Avoidance", draft-ietf-ospf-
        scalability-06.txt, August 2003.

10. Authors' Addresses

   Vishwas Manral
   Netplane Systems
   189 Prashasan Nagar
   Road number 72
   Jubilee Hills
   Hyderabad

   vmanral@netplane.com

   Russ White
   Cisco Systems, Inc.
   7025 Kit Creek Rd.
   Research Triangle Park, NC 27709

   riw@cisco.com

   Aman Shaikh
   University of California
   School of Engineering
   1156 High Street
   Santa Cruz, CA 95064

   aman@soe.ucsc.edu