Network Working Group                                    Vishwas Manral
Internet Draft                                          Netplane Systems
                                                              Russ White
                                                           Cisco Systems
                                                             Aman Shaikh
Expiration Date: December 2004                 University of California
File Name: draft-ietf-bmwg-ospfconv-applicability-06.txt      June 2004

       Considerations When Using Basic OSPF Convergence Benchmarks
             draft-ietf-bmwg-ospfconv-applicability-06.txt

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC 2026.

   Internet Drafts are working documents of the Internet Engineering
   Task Force (IETF), its Areas, and its Working Groups. Note that
   other groups may also distribute working documents as Internet
   Drafts.

   Internet Drafts are draft documents valid for a maximum of six
   months. Internet Drafts may be updated, replaced, or obsoleted by
   other documents at any time. It is not appropriate to use Internet
   Drafts as reference material or to cite them other than as a
   "working draft" or "work in progress".

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This document may not be modified, and derivative works of it may
   not be created, except to publish it as an RFC and to translate it
   into languages other than English.

Abstract

   This document discusses the applicability of various tests for
   measuring single router control plane convergence, specifically with
   regard to the Open Shortest Path First (OSPF) protocol. There are
   two general sections in this document: the first discusses the
   advantages and limitations of specific OSPF convergence tests, and
   the second discusses more general pitfalls to be considered when
   testing routing protocol convergence.

1. Introduction

   There is a growing interest in testing single router control plane
   convergence for routing protocols, with many people looking at
   testing methodologies which can provide information on how long it
   takes for a network to converge after various network events occur.
   It is important to consider the framework within which any given
   convergence test is executed when attempting to apply the results of
   the testing, since the framework can have a major impact on the
   results. For instance, determining when a network is converged, and
   which parts of the router's operation are considered within the
   testing, will have a major impact on the apparent performance that
   routing protocols provide.

   This document describes in detail the various benefits and pitfalls
   of the tests described in [BENCHMARK]. It also explains how such
   measurements can be useful for providers and the research community.

   NOTE: The word convergence within this document refers to single
   router control plane convergence [TERM].

2. Advantages of Such Measurement

   o  To be able to compare successive iterations of a protocol
      implementation. It is often useful to compare the performance of
      two iterations of a given implementation of a protocol to
      determine where improvements have been made and where further
      improvements can be made.

   o  To understand, given a set of parameters (network conditions),
      how a particular implementation on a particular device is going
      to perform. For instance, if you are trying to decide the
      processing power (size of device) required at a certain location
      within a network, you can emulate the conditions which are going
      to exist at that point in the network and use the tests described
      to measure the performance of several different routers. The
      results of these tests can provide one possible data point for an
      intelligent decision.

      If the device being tested is to be deployed in a running
      network, using routes taken from the network where the equipment
      is to be deployed, rather than some generated topology, will give
      results which are closer to the real performance of the device.
      Care should be taken to emulate or take routes from the actual
      location in the network where the device will be (or would be)
      deployed. For instance, one set of routes may be taken from an
      ABR, one set from an area 0 only router, various sets from stub
      areas, another set from various normal areas, etc.

   o  To measure the performance of an OSPF implementation in a wide
      variety of scenarios.

   o  To be used as parameters in OSPF simulations by researchers. It
      may sometimes be necessary, for certain kinds of research, to
      measure the individual delays of each parameter within an OSPF
      implementation. These delays can be measured using the methods
      defined in [BENCHMARK].

   o  To help optimize certain configurable parameters. It may
      sometimes be helpful for operators to know the delay required for
      individual tasks in order to optimize resource usage in the
      network. For example, if the processing time is found to be x
      seconds on a router, that figure can be used to determine the
      rate at which to flood LSAs to that router so as not to overload
      the network (see the sketch following this list).
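
   As a rough illustration of the last point, the following sketch
   paces LSA transmission so that updates are not offered faster than a
   measured per-LSA processing time allows. The names and the
   send_lsa() hook are hypothetical; this is not part of [BENCHMARK],
   only a sketch of how a measured delay might be applied.

      import time

      def paced_flood(lsas, send_lsa, per_lsa_processing_time,
                      safety_factor=1.5):
          # per_lsa_processing_time: measured seconds the router needs
          # to process one LSA; safety_factor leaves headroom so the
          # router's input queue does not build up.
          interval = per_lsa_processing_time * safety_factor
          for lsa in lsas:
              send_lsa(lsa)         # placeholder for the generator's API
              time.sleep(interval)  # pace transmissions

   With a measured processing time of 10 ms per LSA, for example, this
   would pace flooding at roughly 66 LSAs per second.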

3. Assumptions Made and Limitations of Such Measurements

   o  The interaction of convergence and forwarding is not considered;
      testing is restricted to events occurring within the control
      plane. Forwarding performance is the primary focus in
      [INTERCONNECT], and it is expected to be dealt with in work that
      ensues from [FIB-TERM].

   o  Duplicate LSAs are acknowledged immediately. A few tests rely on
      the property that acknowledgements of duplicate LSAs are not
      delayed but are sent immediately. If an implementation does not
      acknowledge duplicate LSAs immediately on receipt, the testing
      methods presented in [BENCHMARK] could give inaccurate
      measurements.

   o  It is assumed that SPF is non-preemptive. If SPF is implemented
      so that it can (and will) be preempted, the SPF measurements
      taken in [BENCHMARK] would include the times that the SPF process
      is not running ([BENCHMARK] measures the total time taken for SPF
      to run, not the amount of time that SPF actually spends on the
      device's processor), thus giving inaccurate measurements.

   o  Some implementations may be multithreaded or may use a
      multiprocess/multirouter model of OSPF. If any of the assumptions
      made in the measurements are violated in such a model, the
      results could be inaccurate.

   o  The measurements resulting from the tests in [BENCHMARK] may not
      provide the information required to deploy a device in a
      large-scale network. The tests described focus on individual
      components of an OSPF implementation's performance, and it may be
      difficult to combine the measurements in a way which accurately
      depicts a device's performance in a large-scale network. Further
      research is required in this area.

   o  The measurements described in [BENCHMARK] should be used with
      great care when comparing two different implementations of OSPF
      from two different vendors. For instance, there are many factors
      other than convergence speed that need to be taken into
      consideration when comparing different vendors' products, and it
      is difficult to compare the resources available on one device to
      the resources available on another.

4. Observations on the Tests Described in [BENCHMARK]

   Some observations taken while implementing the tests described in
   [BENCHMARK] are noted in this section.

4.1. Measuring the SPF Processing Time Externally

   The most difficult test to perform is the external measurement of
   the time required to perform an SPF calculation, since the amount of
   time between the first LSA, which indicates a topology change, and
   the duplicate LSA is critical. If the duplicate LSA is sent too
   quickly, it may be received before the device under test actually
   begins running SPF on the network change information. If the delay
   between the two LSAs is too long, the device under test may finish
   SPF processing before receiving the duplicate LSA. It is important
   to closely investigate any delays between the receipt of an LSA and
   the beginning of an SPF calculation in the device under test;
   multiple tests with various delays might be required to determine
   what delay needs to be used to accurately measure the SPF
   calculation time.

   Some implementations may enforce two intervals, the SPF hold time
   and the SPF delay, between successive SPF calculations. If an SPF
   hold time exists, it should be subtracted from the total SPF
   execution time. If an SPF delay exists, it should be noted in the
   test results.
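
   As an illustration of this external method, the following sketch
   derives an approximate SPF time from the delayed acknowledgement of
   the duplicate LSA. It assumes, as [BENCHMARK] does, that duplicate
   LSAs are otherwise acknowledged immediately; the variable names are
   illustrative only.

      def spf_time_external(t_dup_sent, t_ack_received,
                            baseline_ack_delay, spf_hold_time=0.0):
          # t_dup_sent:         when the duplicate LSA was transmitted
          # t_ack_received:     when its acknowledgement came back
          # baseline_ack_delay: acknowledgement delay measured when no
          #                     SPF run is in progress
          # spf_hold_time:      configured hold time, subtracted as
          #                     noted above
          observed = t_ack_received - t_dup_sent
          return observed - baseline_ack_delay - spf_hold_time

   The estimate is only meaningful when the duplicate LSA actually
   arrives while SPF is running, which is why several runs with
   different inter-LSA delays may be needed.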

4.2. Noise in the Measurement Device

   The device on which measurements are taken (not the device under
   test) also adds noise to the test results, primarily in the form of
   delay in packet processing and measurement output. The largest
   source of noise is generally the delay between the receipt of
   packets by the measuring device and the information about the packet
   reaching the device's output, where the event can be measured. The
   following steps may be taken to reduce this sampling noise:

   o  Increase the number of samples taken; this will generally improve
      the tester's ability to determine what is noise and to remove it
      from the results.

   o  Take the time-stamp for a packet as early as possible. Depending
      on the operating system being used on the measurement device, the
      kernel can be instrumented to take the time-stamp when the
      interrupt is processed. This does not eliminate the noise
      completely, but it at least reduces it.

   o  Keep the measurement device as lightly loaded as possible.

   o  Having an estimate of the noise can also be useful.

   The DUT (device under test) also adds noise to the measurement. The
   first and third points above apply to the DUT as well. A small
   sketch of one way to estimate the noise follows.
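
   One simple way to obtain such an estimate is to repeat a measurement
   whose true value should be constant (for example, timestamping the
   same packet at the tester's input and output with no DUT processing
   involved) and to use the spread of those samples as the noise floor.
   This is only a sketch under that assumption; the function names are
   illustrative.

      import statistics

      def noise_estimate(samples):
          # samples: repeated delay measurements of an event whose true
          # duration is (ideally) constant
          return {
              "median": statistics.median(samples),
              "stdev":  statistics.stdev(samples),
              "spread": max(samples) - min(samples),
          }

      def remove_baseline(measurement, baseline_median):
          # Subtract the typical measurement-path delay from a raw
          # result taken against the DUT.
          return measurement - baseline_median

   The median and spread give a rough picture of how much of a reported
   convergence time may be attributable to the measurement path rather
   than to the DUT.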

4.3. Gaining an Understanding of the Implementation Improves
     Measurements

   While the tester will (generally) not have access to internal
   information about the OSPF implementation being tested using
   [BENCHMARK], the more thorough the tester's knowledge of the
   implementation is, the more accurate the results of the tests will
   be. For instance, in some implementations, the installation of
   routes in the local routing table may occur while the SPF is being
   calculated, dramatically impacting the time required to calculate
   the SPF.

4.4. Gaining an Understanding of the Tests Improves Measurements

   One method which can be used to become familiar with the tests
   described in [BENCHMARK] is to perform the tests on an OSPF
   implementation for which all the internal details are available,
   such as [GATED]. While there is no assurance that any two
   implementations will be similar, this will provide a better
   understanding of the tests themselves.

5. LSA and Destination Mix

   Many OSPF benchmark tests call for a generator injecting a number of
   LSAs. There are several areas in which the injected LSAs can be
   varied in testing:

   o  The number of destinations represented by the injected LSAs.

      Each destination represents a single reachable IP network; these
      will be leaf nodes on the shortest path tree. The primary impact
      on performance should be the time required to insert destinations
      in the local routing table and the memory required to store the
      data.

   o  The types of LSAs injected.

      There are several types of LSAs which would be acceptable under
      different situations; within an area, for instance, types 1, 2,
      3, 4, and 5 are likely to be received by a router. Within a
      not-so-stubby area, however, type 7 LSAs would replace the type 5
      LSAs received. These sorts of characterizations are important to
      note in any test results.

   o  The number of LSAs injected.

      Within any injected set of information, the number of each type
      of LSA injected is also important. This will impact the shortest
      path algorithm's ability to handle large numbers of nodes, large
      shortest path first trees, etc.

   o  The order of LSA injection.

      The order in which LSAs are injected should not favor any given
      data structure used for storing the LSA database on the device
      under test. For instance, AS-external LSAs have AS-wide flooding
      scope; any type 5 LSA originated is immediately flooded to all
      neighbors. However, the type 4 LSA which announces the ASBR as a
      border router is originated in an area at SPF time (by the ABRs
      on the edge of the area in which the ASBR is). If SPF is not
      scheduled immediately on the ABRs originating the type 4 LSA, the
      type 4 LSA is sent after the type 5 LSAs reach a router in the
      adjacent area, so routes to the external destinations are not
      immediately added to the routers in the other areas. When the
      routers which already have the type 5 LSAs receive the type 4
      LSA, all the external routes are added to the tree at the same
      time. This timing could produce different results than a router
      receiving a type 4 LSA indicating the presence of a border
      router, followed by the type 5 LSAs originated by that border
      router.

      The ordering can be changed in various tests to provide insight
      into the efficiency of storage within the DUT. Any such changes
      in ordering should be noted in test results.

6. Tree Shape and the SPF Algorithm

   The complexity of Dijkstra's algorithm depends on the data structure
   used for storing vertices with their current minimum distances from
   the source, the simplest structure being a list of the vertices
   currently reachable from the source. With a simple list of vertices,
   finding the minimum cost vertex takes O(size of the list) time, and
   there will be O(n) such operations if we assume that all the
   vertices are ultimately reachable from the source. Moreover, after
   the vertex with minimum cost is found, the algorithm iterates
   through all the edges of that vertex and updates the costs of other
   vertices. With an adjacency list representation, this step, iterated
   over all the vertices, takes O(E) time, with E being the number of
   edges in the graph. Thus, the overall running time is:

      O(sum(i=1..n) size(list at level i) + E)

   So everything boils down to size(list at level i).

   If the graph is linear:

      root
       |
       1
       |
       2
       |
       3
       |
       4
       |
       5
       |
       6

   and the source is a vertex on the end, then size(list at level i) =
   1 for all i. Moreover, E = n - 1. Therefore, the running time is
   O(n).

   If the graph is a balanced binary tree:

            root
           /    \
          1      2
         / \    / \
        3   4  5   6

   size(list at level i) is a little more complicated. First it
   increases by 1 at each level up to a certain number, and then it
   goes down by 1. If we assume that the tree is a complete tree (like
   the one in the draft) with k levels (1 to k), then size(list) goes
   like this: 1, 2, 3, ... The number of edges E is still n - 1. It
   then turns out that the run time is O(n^2) for such a tree.

   If the graph is a complete graph (fully-connected mesh), then
   size(list at level i) = n - i, and the number of edges E = O(n^2).
   Therefore, the run time is O(n^2).

   So the performance of the shortest path first algorithm used to
   compute the best paths through the network is dependent on the
   construction of the tree. The best practice would be to try to make
   any emulated network look as much like a real network as possible,
   especially in the area of the tree depth, the meshiness of the
   network, the number of stub links versus transit links, and the
   number of connections and nodes to process at each level within the
   original tree.
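
   The list-based behaviour analysed above can be made concrete with a
   small sketch (this is illustrative only and is not taken from
   [BENCHMARK]; the graph encoding is hypothetical). The min() scan
   over the candidate list corresponds to the size(list at level i)
   term; replacing the list with a heap would reduce that term to
   O(log n) per operation.

      def dijkstra_list(graph, source):
          # graph: dict mapping each vertex to a list of
          # (neighbor, cost) pairs
          dist = {source: 0}
          done = set()
          candidates = [source]   # reachable but not yet finalized
          while candidates:
              # O(size of list) scan: the size(list at level i) term
              v = min(candidates, key=dist.get)
              candidates.remove(v)
              done.add(v)
              for neighbor, cost in graph.get(v, []):
                  if neighbor in done:
                      continue
                  new_cost = dist[v] + cost
                  if neighbor not in dist or new_cost < dist[neighbor]:
                      dist[neighbor] = new_cost
                      if neighbor not in candidates:
                          candidates.append(neighbor)
          return dist

      # For the linear topology above the candidate list never holds
      # more than one vertex, so the total work is O(n); for the full
      # mesh it holds n - i vertices at step i, giving O(n^2).
      linear = {"root": [("1", 1)], "1": [("2", 1)], "2": [("3", 1)],
                "3": [("4", 1)], "4": [("5", 1)], "5": [("6", 1)],
                "6": []}
      print(dijkstra_list(linear, "root"))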

7. Topology Generation

   As the size of networks grows, it becomes more and more difficult to
   actually create a large-scale network on which to test the
   properties of routing protocols and their implementations. In
   general, network emulators are used to provide emulated topologies
   which can be advertised to a device under varying conditions. Route
   generators tend to be either a specialized device, a piece of
   software which runs on a router, or a process that runs on another
   operating system, such as Linux or another variant of Unix.

   Some of the characteristics of this device should be:

   o  The ability to connect to several devices using both
      point-to-point and broadcast high speed media. Point-to-point
      links can be emulated with high speed Ethernet as long as there
      is no hub or other device in between the DUT and the route
      generator, and the link is configured as a point-to-point link
      within OSPF [BROADCAST-P2P].

   o  The ability to create a set of LSAs which appear to be a logical,
      realistic topology. For instance, the generator should be able to
      mix the number of point-to-point and broadcast links within the
      emulated topology, and should be able to inject varying numbers
      of externally reachable destinations.

   o  The ability to add routing information to, and withdraw it from,
      the emulated topology, to emulate flapping links.

   o  The ability to randomly order the LSAs representing the emulated
      topology as they are advertised (see the sketch following this
      list).

   o  The ability to log or otherwise measure the time between packets
      transmitted and received.

   o  The ability to change the rate at which OSPF LSAs are
      transmitted.

   o  The generator and the collector should be fast enough that they
      are not bottlenecks. The devices should also provide measurement
      granularity at least as fine as that desired in the test results.
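
   As a rough sketch of the randomized-ordering point above, the
   fragment below builds a simple description of an emulated topology
   as a list of (lsa_type, payload) tuples and randomizes the order in
   which they would be advertised. The encoding is purely illustrative;
   a real generator would emit actual OSPF packets and record transmit
   timestamps.

      import random

      def build_lsa_set(num_routers, num_externals):
          # Hypothetical, flattened description of an emulated
          # topology: one router (type 1) LSA per emulated router and
          # a number of AS-external (type 5) LSAs for reachable
          # destinations.
          lsas = [(1, "router-%d" % i) for i in range(num_routers)]
          lsas += [(5, "external-%d" % i) for i in range(num_externals)]
          return lsas

      def advertisement_order(lsas, seed=None):
          # Shuffle the advertisement order so it does not favor any
          # particular LSA database structure in the DUT; a fixed seed
          # keeps a given ordering reproducible across test runs.
          ordered = list(lsas)
          random.Random(seed).shuffle(ordered)
          return ordered

   Recording the seed along with the test results makes it possible to
   rerun a test with exactly the same ordering.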

8. IANA Considerations

   This document requires no IANA considerations.

9. Security Considerations

   This document does not modify the underlying security considerations
   in [OSPF].

10. Acknowledgements

   Thanks to Howard Berkowitz (hcb@clark.net) and the rest of the BGP
   benchmarking team for their support, and to Kevin Dubray
   (kdubray@juniper.net), who realized the need for this draft.

11. Normative References

   [BENCHMARK]
        Manral, V., "Benchmarking Basic OSPF Single Router Control
        Plane Convergence", draft-ietf-bmwg-ospfconv-intraarea-09, June
        2004.

   [TERM]
        Manral, V., "OSPF Convergence Testing Terminology and
        Concepts", draft-ietf-bmwg-ospfconv-term-08, May 2004.

   [OSPF]
        Moy, J., "OSPF Version 2", STD 54, RFC 2328, April 1998.

   [RFC2119]
        Bradner, S., "Key words for use in RFCs to Indicate Requirement
        Levels", BCP 14, RFC 2119, March 1997.

12. Informative References

   [INTERCONNECT]
        Bradner, S. and J. McQuaid, "Benchmarking Methodology for
        Network Interconnect Devices", RFC 2544, March 1999.

   [FIB-TERM]
        Trotter, G., "Terminology for Forwarding Information Base (FIB)
        based Router Performance", RFC 3222, October 2001.

   [BROADCAST-P2P]
        Shen, N., et al., "Point-to-point operation over LAN in
        link-state routing protocols",
        draft-ietf-isis-igp-p2p-over-lan-03.txt, August 2003.

   [GATED]
        http://www.gated.org

13. Authors' Addresses

   Vishwas Manral
   Netplane Systems
   189 Prashasan Nagar
   Road number 72
   Jubilee Hills
   Hyderabad, India

   vmanral@netplane.com

   Russ White
   Cisco Systems, Inc.
   7025 Kit Creek Rd.
   Research Triangle Park, NC 27709

   riw@cisco.com

   Aman Shaikh
   University of California
   School of Engineering
   1156 High Street
   Santa Cruz, CA 95064

   aman@soe.ucsc.edu

14. Full Copyright Statement

   Copyright (C) The Internet Society (2004). This document is subject
   to the rights, licenses and restrictions contained in BCP 78, and
   except as set forth therein, the authors retain all their rights.

   This document and the information contained herein are provided on
   an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION HE/SHE
   REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY AND THE
   INTERNET ENGINEERING TASK FORCE DISCLAIM ALL WARRANTIES, EXPRESS OR
   IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF
   THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED
   WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

15. Intellectual Property

   By submitting this Internet-Draft, I certify that any applicable
   patent or other IPR claims of which I am aware have been disclosed,
   and any of which I become aware will be disclosed, in accordance
   with RFC 3668.

   The IETF takes no position regarding the validity or scope of any
   Intellectual Property Rights or other rights that might be claimed
   to pertain to the implementation or use of the technology described
   in this document or the extent to which any license under such
   rights might or might not be available; nor does it represent that
   it has made any independent effort to identify any such rights.
   Information on the procedures with respect to rights in RFC
   documents can be found in BCP 78 and BCP 79.

   Copies of IPR disclosures made to the IETF Secretariat and any
   assurances of licenses to be made available, or the result of an
   attempt made to obtain a general license or permission for the use
   of such proprietary rights by implementers or users of this
   specification can be obtained from the IETF on-line IPR repository
   at http://www.ietf.org/ipr.

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   this standard. Please address the information to the IETF at
   ietf-ipr@ietf.org.