Network Working Group                                     Vishwas Manral
Internet Draft                                         Netplane Networks
                                                              Russ White
Expiration Date: June 2002                                 Cisco Systems
File Name: draft-manral-ospfconv-term-00.txt               December 2001

              OSPF Benchmarking Terminology and Concepts
                   draft-manral-ospfconv-term-00.txt

1. Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC 2026.
   Internet Drafts are working documents of the Internet Engineering
   Task Force (IETF), its Areas, and its Working Groups. Note that
   other groups may also distribute working documents as Internet
   Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months. Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time. It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress".

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

2. Abstract

   This draft explains the terminology and concepts used in [2] and
   future OSPF benchmarking drafts.

3. Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in [1].

4. Motivation

   This draft is a companion to [2], which describes basic Open
   Shortest Path First (OSPF [3]) testing methods. It explains the
   terminology and concepts used in OSPF testing framework drafts,
   such as [2].

5. Definitions

   o Internal Measurements

     - Definition

       Internal measurements are measurements taken on the Device
       Under Test (DUT) itself.

     - Discussion

       These measurements rely on output and event recording, along
       with the clocking and timestamping available on the DUT
       itself. Internal measurements are preferred for all tests that
       can be completely contained on the DUT (which is very rare).

   o External Measurements

     - Definition

       External measurements infer the performance of the DUT through
       observation of its communications with other devices.
     - Discussion

       One example of an external measurement: when a downstream
       device receives complete routing information from the DUT, it
       can be inferred that the DUT has transmitted all the routing
       information available.

       For the purposes of this paper, external techniques are more
       readily applicable. However, external measurements have their
       own problems: they include the time taken to advertise the new
       route downstream and the transmission times for the
       advertisement within the device under test.

   o Multi-device Measurements

     - Definition

       Multi-device measurements require the measurement of events
       occurring on multiple devices within the testbed.

     - Discussion

       For instance, the timestamp on a device generating an event
       could be used as the marker for the beginning of a test, while
       the timestamp on the DUT or some other device might be used to
       determine when the DUT has finished processing the event.

       These sorts of measurements are the most problematic, and are
       to be avoided where possible, since the timestamps of the
       devices in the test bed must be synchronized to within
       milliseconds for the test results to be meaningful. Given the
       state of network time protocol implementations, expecting the
       timestamps on several devices to be within milliseconds of
       each other is highly optimistic.

   o Point-to-Point Links

     - Definition

       A network that joins a single pair of routers is called a
       point-to-point link. For OSPF [3], point-to-point links are
       those on which a designated router is not elected.

     - Discussion

       A point-to-point link will take less time to converge than a
       broadcast link of the same speed because it does not have the
       overhead of DR election. Point-to-point links can be either
       numbered or unnumbered; however, in the context of [2], the
       two can be regarded as the same.
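As a hedged illustration of the multi-device measurement caveat above
(timestamps taken on separate devices must agree to within
milliseconds), the small Python sketch below shows how a clock offset
on one device lands, undiluted, in the measured interval; all names
and numbers are invented for illustration and are not part of any
test procedure in this draft:

```python
# Sketch: device A timestamps the start of a test, device B timestamps
# the end.  Any offset between their clocks is added directly to the
# computed interval, so millisecond-scale skew means millisecond-scale
# measurement error.  All names and values here are illustrative.

def measured_interval_ms(start_on_a_ms, end_on_b_ms, b_clock_offset_ms):
    """Interval as computed from two devices' local timestamps.

    b_clock_offset_ms is how far device B's clock runs ahead of
    device A's; it appears verbatim in the result.
    """
    return (end_on_b_ms + b_clock_offset_ms) - start_on_a_ms

# True event duration: 50 ms.  With B's clock 20 ms ahead of A's,
# the multi-device measurement reports 70 ms instead.
true_duration = measured_interval_ms(1000, 1050, 0)
skewed = measured_interval_ms(1000, 1050, 20)
```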
   o Broadcast Link

     - Definition

       A network supporting many (more than two) attached routers,
       together with the capability to address a single physical
       message to all of the attached routers (broadcast). In the
       context of [2] and [3], broadcast links are taken to be those
       on which a designated router is elected.

     - Discussion

       The adjacency formation time on a broadcast link can be longer
       than that on a point-to-point link of the same speed, because
       DR election has to take place. All routers on a broadcast
       network form adjacencies with the DR and BDR.

       Asynchronous flooding also takes place through the DR. In the
       context of convergence, it may take more time for an LSU to be
       flooded from one DR-other router to another DR-other router,
       because the LSA has to first be processed at the DR.

   o Shortest Path First Time

     - Definition

       The time taken by a router to complete the SPF process.

     - Discussion

       This does not include the time taken by the router to give
       routes to the forwarding engine.

     - Measurement Units

       The SPF time is generally measured in milliseconds.

   o Hello Interval

     - Definition

       The length of time, in seconds, between the Hello packets that
       the router sends on the interface.

     - Discussion

       The hello interval should be the same for all routers on the
       network.

       Decreasing the hello interval can allow the router dead
       interval (below) to be reduced, thus reducing convergence
       times in those situations where the router dead interval
       timing out causes an OSPF process to notice an adjacency
       failure. However, very small router dead intervals accompanied
       by very small hello intervals can produce more problems than
       they resolve, as described in [4] and [5].

   o Router Dead Interval

     - Definition

       After ceasing to hear a router's Hello packets, the number of
       seconds before its neighbors declare the router down.
     - Discussion

       This is advertised in the router's Hello packets in the
       RouterDeadInterval field. The router dead interval should be
       some multiple of the HelloInterval (say, four times the hello
       interval), and must be the same for all routers attached to a
       common network.

6. Concepts

6.1. Convergence

   A network is termed converged when all of the devices within the
   network have a loop-free path to each possible destination. Since
   we are not testing network convergence, but the performance of a
   particular device within a network, this definition needs to be
   narrowed somewhat to fit within a single-device view.

   In this case, convergence will mean the point in time when the DUT
   has performed all actions needed to react to the change in
   topology represented by the test condition; for instance, an OSPF
   device must flood any new information it has received, rebuild its
   shortest path first (SPF) tree, and install any new paths or
   destinations in the local routing information base (RIB, or
   routing table).

6.2. Measuring Convergence

   Obviously, there are several elements to convergence, even under
   the definition given above for a single device. We will try to
   provide tests to measure each of these:

   o The time it takes for the DUT to pass information about a
     network event on to its neighbors.

   o The time it takes for the DUT to process information about a
     network event and calculate a new Shortest Path Tree (SPT).

   o The time it takes for the DUT to make changes in its local RIB
     reflecting the new shortest path tree.

6.3. Types of Network Events

   o Link or Neighbor Device Up

     The time needed for an OSPF implementation to recognize a new
     link coming up on the device, build any necessary adjacencies,
     synchronize its database, and perform all other actions needed
     to converge.
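The network-wide convergence definition in 6.1 (every device has a
loop-free path to every destination) can be sketched as a simple
reachability check. The Python below is an illustrative model only:
the topology, names, and data representation are invented, and it is
not a test procedure from this draft:

```python
# Sketch of 6.1's network-wide convergence definition: a network is
# converged when every device can reach every destination.  The
# network is modeled as a plain adjacency dict, with breadth-first
# reachability standing in for "has a path".  All names illustrative.
from collections import deque

def reachable(adjacency, source):
    """Set of nodes reachable from `source`, by breadth-first search."""
    seen = {source}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for neighbor in adjacency.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

def converged(adjacency, destinations):
    """True when every device can reach every destination."""
    return all(destinations <= reachable(adjacency, device)
               for device in adjacency)

# Three routers in a line: converged for all three destinations.
topology = {"r1": ["r2"], "r2": ["r1", "r3"], "r3": ["r2"]}
```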
   o Initialization

     The time needed for an OSPF implementation to be initialized,
     recognize any links across which OSPF must run, build any needed
     adjacencies, synchronize its database, and perform other actions
     needed to converge.

   o Adjacency Down

     The time needed for an OSPF implementation to recognize a link
     down/adjacency loss based on hello timers alone, propagate any
     information as necessary to its remaining adjacencies, and
     perform other actions needed to converge.

   o Link Down

     The time needed for an OSPF implementation to recognize a link
     down based on layer 2 provided information, propagate any
     information as needed to its remaining adjacencies, and perform
     other actions needed to converge.

6.4. LSA and Destination Mix

   In many OSPF benchmark tests, a generator injecting a number of
   LSAs is called for. There are several areas in which the injected
   LSAs can be varied in testing:

   o The number of destinations represented by the injected LSAs

     Each destination represents a single reachable IP network; these
     will be leaf nodes on the shortest path tree. The primary impact
     on performance should be the time required to insert
     destinations into the local routing table and the memory
     required to store the data.

   o The types of LSAs injected

     There are several types of LSAs which would be acceptable under
     different situations; within an area, for instance, types 1, 2,
     3, 4, and 5 are likely to be received by a router. Within a
     not-so-stubby area, however, type 7 LSAs would replace the type
     5 LSAs received. These sorts of characterizations are important
     to note in any test results.

   o The number of LSAs injected

     Within any injected set of information, the number of each type
     of LSA injected is also important.
     This will impact the shortest path algorithm's ability to handle
     large numbers of nodes, large shortest path first trees, etc.

   o The Order of LSA Injection

     The order in which LSAs are injected should not favor any given
     data structure used for storing the LSA database on the device
     under test. The ordering can be changed in various tests to
     provide insight into the efficiency of storage within the DUT.
     Any such changes in ordering should be noted in test results.

6.5. Tree Shape and the SPF Algorithm

   The shortest path first algorithm is a simple algorithm which
   handles complexity by breaking the problem of finding the shortest
   paths through a network into smaller parts and recursing (calling
   itself) to compute the best path within each smaller part. Because
   of this, moving along a single level of the tree, along the tree's
   width, is fundamentally different from moving along the depth of
   the tree.

           root                     root
           /  \                       |
          1    2                      1
         / \  / \                     |
        3   4 5  6                    2
                                      |
                                      3
                                      |
                                      4
                                      |
                                      5
                                      |
                                      6

   For instance, the shortest path first algorithm would go through
   two recursions when finding the shortest paths on the left
   topology, with an average of two nodes processed per level. The
   topology on the right would produce five recursions, with one node
   processed per recursion. While this may not produce dramatically
   different test results, there may be some apparent difference
   between the two.

   In general, those benchmarking link state protocols which use the
   shortest path first algorithm to compute the best paths through
   the network need to be aware that the construction of the tree may
   impact the performance of the algorithm.
   Best practice would be to make any emulated network look as much
   like a real network as possible, especially in the areas of tree
   depth, the meshiness of the network, the number of stub links
   versus transit links, and the number of connections and nodes to
   process at each recursion level.

7. Route Generation

   As the size of networks grows, it becomes more and more difficult
   to actually create a large-scale network on which to test the
   properties of routing protocols and their implementations. In
   general, network emulators are used to provide emulated topologies
   which can be advertised to a device under varying conditions.
   Route generators tend to be either a specialized device, a piece
   of software which runs on a router, or a process that runs on
   another operating system, such as Linux or another variant of
   Unix.

   Some of the characteristics of this device should be:

   o The ability to connect to several devices using both point-to-
     point and broadcast high-speed media. Point-to-point links can
     be emulated with high-speed Ethernet as long as there is no hub
     or other device in between the DUT and the route generator, and
     the link is configured as a point-to-point link within OSPF.

   o The ability to create a set of LSAs which appear to be a
     logical, realistic topology. For instance, the generator should
     be able to mix the number of point-to-point and broadcast links
     within the emulated topology, and should be able to inject
     varying numbers of externally reachable destinations.

   o The ability to withdraw routing information from, and add it
     back into, the emulated topology, to emulate links flapping.

   o The ability to randomly order the LSAs representing the emulated
     topology as they are advertised.

   o The ability to log or otherwise measure the time between packets
     transmitted and received.
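The random-ordering requirement above can be sketched with a seeded
shuffle, so that a run is both unordered (favoring no particular LSA
database structure) and reproducible for the test report. The Python
below is illustrative only; the LSA representation is invented:

```python
# Sketch: shuffle the emulated topology's LSAs with a recorded seed
# before injection.  Recording the seed keeps the run reproducible
# while still avoiding any fixed, structure-friendly ordering.
# The (type, id) tuple representation of an LSA is illustrative.
import random

def injection_order(lsas, seed):
    """Return the LSAs in a seeded pseudo-random injection order."""
    rng = random.Random(seed)
    shuffled = list(lsas)
    rng.shuffle(shuffled)
    return shuffled

lsas = [("router-lsa", i) for i in range(100)]
run1 = injection_order(lsas, seed=42)
run2 = injection_order(lsas, seed=42)  # same seed -> same order
```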
   o The ability to change the rate at which OSPF LSAs are
     transmitted.

8. Acknowledgements

   The authors would like to thank Aman Shaikh
   (ashaikh@research.att.com) for his comments and help on this
   draft.

9. References

   [1] Bradner, S., "Key words for use in RFCs to Indicate
       Requirement Levels", RFC 2119, March 1997.

   [2] Manral, V., "Benchmarking Methodology for Basic OSPF
       Convergence", draft-manral-ospfconv-intraarea-00, November
       2001.

   [3] Moy, J., "OSPF Version 2", RFC 2328, April 1998.

   [4] draft-ash-ospf-isis-congestion-control-01.txt, work in
       progress.

   [5] draft-ietf-ospf-scalability-00.txt, work in progress.

10. Authors' Addresses

   Vishwas Manral
   Netplane Networks
   189 Prashasan Nagar
   Road number 72
   Jubilee Hills
   Hyderabad

   vmanral@netplane.com

   Russ White
   Cisco Systems, Inc.
   7025 Kit Creek Rd.
   Research Triangle Park, NC 27709

   riw@cisco.com