idnits 2.17.1 draft-ietf-bmwg-protection-term-08.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- ** The document seems to lack a License Notice according IETF Trust Provisions of 28 Dec 2009, Section 6.b.i or Provisions of 12 Sep 2009 Section 6.b -- however, there's a paragraph with a matching beginning. Boilerplate error? (You're using the IETF Trust Provisions' Section 6.b License Notice from 12 Feb 2009 rather than one of the newer Notices. See https://trustee.ietf.org/license-info/.) Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- == The page length should not exceed 58 lines per page, but there was 5 longer pages, the longest (page 26) being 67 lines Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- ** There is 1 instance of too long lines in the document, the longest one being 29 characters in excess of 72. ** There are 3 instances of lines with control characters in the document. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document seems to lack a disclaimer for pre-RFC5378 work, but may have content which was first submitted before 10 November 2008. If you have contacted all the original authors and they are all willing to grant the BCP78 rights to the IETF Trust, then this is fine, and you can ignore this comment. If not, you may need to add the pre-RFC5378 disclaimer. (See the Legal Provisions document at https://trustee.ietf.org/license-info for more information.) -- The document date (December 2009) is 5245 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Unused Reference: '6' is defined on line 1400, but no explicit reference was found in the text == Outdated reference: A later version (-23) exists of draft-ietf-bmwg-igp-dataplane-conv-term-16 ** Obsolete normative reference: RFC 3768 (ref. '11') (Obsoleted by RFC 5798) Summary: 4 errors (**), 0 flaws (~~), 4 warnings (==), 2 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 1 Network Working Group S. Poretsky 2 Internet Draft Allot Communications 3 Expires: June 2010 Rajiv Papneja 4 Intended Status: Informational Isocore J. Karthik 5 S. Vapiwala 6 Cisco Systems 8 December 2009 10 Benchmarking Terminology 11 for Protection Performance 12 14 Status of this Memo 15 This Internet-Draft is submitted to IETF in full conformance with the 16 provisions of BCP 78 and BCP 79. 18 Internet-Drafts are working documents of the Internet Engineering 19 Task Force (IETF), its areas, and its working groups. Note that 20 other groups may also distribute working documents as Internet- 21 Drafts. 23 Internet-Drafts are draft documents valid for a maximum of six months 24 and may be updated, replaced, or obsoleted by other documents at any 25 time. It is inappropriate to use Internet-Drafts as reference 26 material or to cite them other than as "work in progress." 
28 The list of current Internet-Drafts can be accessed at 29 http://www.ietf.org/ietf/1id-abstracts.txt. 30 The list of Internet-Draft Shadow Directories can be accessed at 31 http://www.ietf.org/shadow.html. 33 This Internet-Draft will expire on April 15, 2010. 35 Copyright Notice 36 Copyright (c) 2009 IETF Trust and the persons identified as the 37 document authors. All rights reserved. 39 This document is subject to BCP 78 and the IETF Trust's Legal 40 Provisions Relating to IETF Documents in effect on the date of 41 publication of this document (http://trustee.ietf.org/license-info). 42 Please review these documents carefully, as they describe your rights 43 and restrictions with respect to this document. 45 Abstract 46 This document provides common terminology and metrics for benchmarking 47 the performance of sub-IP layer protection mechanisms. The performance 48 benchmarks are measured at the IP-Layer, avoiding dependence on 49 specific sub-IP protection mechanisms. The benchmarks and terminology 50 can be applied in methodology documents for different sub-IP layer 51 protection mechanisms such as Automatic Protection Switching (APS), 52 Virtual Router Redundancy Protocol (VRRP), Stateful High Availability 53 (HA), and Multi-Protocol Label Switching Fast Reroute (MPLS-FRR). 55 Protection Performance 56 Table of Contents 57 1. Introduction..............................................3 58 2. Existing definitions......................................6 59 3. Test Considerations.......................................7 60 3.1. Paths................................................7 61 3.1.1. Path............................................7 62 3.1.2. Working Path....................................8 63 3.1.3. Primary Path....................................8 64 3.1.4. Protected Primary Path..........................8 65 3.1.5. Backup Path.....................................9 66 3.1.6. Standby Backup Path.............................10 67 3.1.7. Dynamic Backup Path.............................10 68 3.1.8. Disjoint Paths..................................10 69 3.1.9. Point of Local repair (PLR).....................11 70 3.1.10. Shared Risk Link Group (SRLG)..................11 71 3.2. Protection Mechanisms................................12 72 3.2.1. Link Protection.................................12 73 3.2.2. Node Protection.................................12 74 3.2.3. Path Protection.................................12 75 3.2.4. Backup Span.....................................13 76 3.2.5. Local Link Protection...........................13 77 3.2.6. Redundant Node Protection.......................14 78 3.2.7 State Control Interface.........................14 79 3.2.8. Protected Interface.............................15 80 3.3. Protection Switching.................................15 81 3.3.1. Protection Switching System.....................15 82 3.3.2. Failover Event..................................15 83 3.3.3. Failure Detection...............................16 84 3.3.4. Failover........................................17 85 3.3.5. Restoration.....................................17 86 3.3.6. Reversion.......................................18 87 3.4. Nodes................................................18 88 3.4.1. Protection-Switching Node.......................18 89 3.4.2. Non-Protection Switching Node...................19 90 3.4.3. Headend Node....................................19 91 3.4.4. Backup Node.....................................19 92 3.4.5. 
Merge Node......................................20 93 3.4.6. Primary Node....................................20 94 3.4.7. Standby Node....................................21 95 3.5. Benchmarks...........................................21 96 3.5.1. Failover Packet Loss............................21 97 3.5.2. Reversion Packet Loss...........................22 98 3.5.3. Failover Time...................................22 99 3.5.4. Reversion Time..................................23 100 3.5.5. Additive Backup Delay...........................23 101 3.6 Failover Time Calculation Methods.....................24 102 3.6.1 Time-Based Loss Method...........................24 103 3.6.2 Packet-Loss Based Method.........................25 104 3.6.3 Timestamp-Based Method...........................25 105 4. Acknowledgments...........................................26 106 5. IANA Considerations.......................................26 107 6. Security Considerations...................................26 108 7. References................................................26 109 8. Authors' Addresses........................................27 110 Protection Performance 112 1. Introduction 114 The IP network layer provides route convergence to protect data 115 traffic against planned and unplanned failures in the Internet. Fast 116 convergence times are critical to maintain reliable network 117 connectivity and performance. Convergence Events [7] are recognized 118 at the IP Layer so that Route Convergence [7] occurs. Technologies 119 that function at sub-IP layers can be enabled to provide further 120 protection of IP traffic by providing failure recovery at the 121 sub-IP layers so that the outage is not observed at the IP layer. 122 Such sub-IP protection technologies include, but are not limited to, 123 High Availability (HA) stateful failover, Virtual Router Redundancy 124 Protocol (VRRP) [11], Automatic Protection Switching (APS) for SONET/SDH, 125 Resilient Packet Ring (RPR) for Ethernet, and Fast Reroute for 126 Multi-Protocol Label Switching (MPLS-FRR) [8]. 128 1.1 Scope 129 Benchmarking terminology was defined for IP-layer convergence in 130 [7]. Different terminology and methodologies specific to 131 benchmarking sub-IP layer protection mechanisms are required. The 132 metrics for benchmarking the performance of sub-IP protection 133 mechanisms are measured at the IP layer, so that the results are 134 always measured in reference to IP and independent of the specific 135 protection mechanism being used. The purpose of this document is 136 to provide a single terminology for benchmarking sub-IP protection 137 mechanisms. 139 A common terminology for sub-IP layer protection mechanism 140 benchmarking enables different implementations of a protection 141 mechanism to be benchmarked and evaluated. In addition, 142 implementations of different protection mechanisms can be 143 benchmarked and evaluated. It is intended that there can exist 144 unique methodology documents for each sub-IP protection mechanism 145 based upon this common terminology document. The terminology 146 can be applied to methodologies that benchmark sub-IP protection 147 mechanism performance with a single stream of traffic or 148 multiple streams of traffic. The traffic flow may be 149 uni-directional or bi-directional, as indicated in the 150 methodology. 152 1.2 General Model 153 The sequence of events to benchmark the performance of Sub-IP 154 Protection Mechanisms is as follows: 156 1. Failover Event - Primary Path fails 157 2.
Failure Detection - Failover Event is detected 158 3. Failover - Backup Path becomes the Working Path due to Failover 159 Event 160 4. Restoration - Primary Path recovers from a Failover Event 161 5. Reversion (optional) - Primary Path becomes the Working Path 163 These terms are further defined in this document. 165 Protection Performance 167 Figures 1 through 5 show models that MAY be used when benchmarking 168 Sub-IP Protection mechanisms, which MUST use a Protection Switching 169 System that consists of a minimum of two Protection-Switching Nodes: 170 an Ingress Node known as the Headend Node and an Egress Node known 171 as the Merge Node. The Protection Switching System MUST include 172 either a Primary Path and Backup Path, as shown in Figures 1 through 173 4, or a Primary Node and Standby Node, as shown in Figure 5. A 174 Protection Switching System may provide link protection, node 175 protection, path protection, local link protection, and high 176 availability, as shown in Figures 1 through 5, respectively. A 177 Failover Event occurs along the Primary Path or at the Primary Node. 178 The Working Path is the Primary Path prior to the Failover Event and 179 the Backup Path after the Failover Event. The Tester is situated outside 180 the two paths or nodes; it sends and receives IP traffic along the 181 Working Path. The Tester MUST record the IP packet sequence numbers, 182 departure time, and arrival time so that the metrics of Failover 183 Time, Additive Latency, Packet Reordering, Duplicate Packets, and 184 Reversion Time can be measured. The Tester may be a single device 185 or a test system. If Reversion is supported, then the Working Path is 186 the Primary Path after Restoration (Failure Recovery) of the Primary 187 Path. 189 Link Protection, as shown in Figure 1, provides protection when a 190 Failover Event occurs on the link between two nodes along the Primary 191 Path. Node Protection, as shown in Figure 2, provides protection 192 when a Failover Event occurs at a Node along the Primary Path. 193 Path Protection, as shown in Figure 3, provides protection against link 194 or node failures for multiple hops along the Primary Path. Local 195 Link Protection, as shown in Figure 4, provides Sub-IP Protection of 196 a link between two nodes, without a Backup Node. An example of such 197 a Sub-IP Protection mechanism is SONET APS. High Availability 198 Protection, as shown in Figure 5, provides protection of a Primary 199 Node with a redundant Standby Node. State Control is provided 200 between the Primary and Standby Nodes. Failure of the Primary Node 201 is detected at the Sub-IP layer to force traffic to switch to the 202 Standby Node, which has state maintained for zero or minimal packet 203 loss. 205 +-----------+ 206 +--------------| Tester |<-----------------------+ 207 | +-----------+ | 208 | IP Traffic | Failover IP Traffic | 209 | | Event | 210 | ------------ | ---------- | 211 +--->| Ingress/ | V | Egress/ |---+ 212 |Headend Node|------------------|Merge Node| Primary 213 ------------ ---------- Path 214 | ^ 215 | --------- | Backup 216 +--------| Backup |-------------+ Path 217 | Node | 218 --------- 219 Figure 1.
System Under Test (SUT) for Sub-IP Link Protection 220 Protection Performance 222 +-----------+ 223 +--------------------| Tester |<-----------------+ 224 | +-----------+ | 225 | IP Traffic | Failover IP Traffic | 226 | | Event | 227 | V | 228 | ------------ -------- ---------- | 229 +--->| Ingress/ | |MidPoint| | Egress/ |---+ 230 |Headend Node|----| Node |----|Merge Node| Primary 231 ------------ -------- ---------- Path 232 | ^ 233 | --------- | Backup 234 +--------| Backup |-------------+ Path 235 | Node | 236 --------- 238 Figure 2. System Under Test (SUT) for Sub-IP Node Protection 240 +-----------+ 241 +---------------------------| Tester |<----------------------+ 242 | +-----------+ | 243 | IP Traffic | Failover IP Traffic | 244 | | Event | 245 | Primary Path | | 246 | ------------ -------- | -------- ---------- | 247 +--->| Ingress/ | |MidPoint| V |Midpoint| | Egress/ |---+ 248 |Headend Node|----| Node |---| Node |---|Merge Node| 249 ------------ -------- -------- ---------- 250 | ^ 251 | --------- -------- | Backup 252 +--------| Backup |----| Backup |--------+ Path 253 | Node | | Node | 254 --------- -------- 256 Figure 3. System Under Test (SUT) for Sub-IP Path Protection 258 +-----------+ 259 +--------------------| Tester |<-------------------+ 260 | +-----------+ | 261 | IP Traffic | Failover IP Traffic | 262 | | Event | 263 | Primary | | 264 | +--------+ Path v +--------+ | 265 | | |------------------------>| | | 266 +--->| Ingress| | Egress |----+ 267 | Node |- - - - - - - - - - - - >| Node | 268 +--------+ Backup Path +--------+ 269 ^ ^ 270 | IP-Layer Forwarding | 271 +-------------------------------------------+ 273 Figure 4. System Under Test (SUT) for Sub-IP Local Link Protection 274 Protection Performance 276 +-----------+ 277 +-----------------| Tester |<--------------------+ 278 | +-----------+ | 279 | IP Traffic | Failover IP Traffic | 280 | | Event | 281 | V | 282 | --------- -------- ---------- | 283 +--->| Ingress | |Primary | | Egress/ |------+ 284 | Node |----| Node |----|Merge Node| Primary 285 --------- -------- ---------- Path 286 | State |Control ^ 287 | Interface |(Optional) | 288 | --------- | 289 +---------| Standby |---------+ 290 | Node | 291 --------- 293 Figure 5. System Under Test (SUT) for Sub-IP Redundant Node Protection 295 Some protection switching technologies may use a series of 296 steps that differ from the general model. The specific differences 297 SHOULD be highlighted in each technology-specific methodology. 298 Note that some protection switching technologies are endowed 299 with the ability to re-optimize the working path after a 300 node or link failure. 302 2. Existing definitions 303 This document uses existing terminology defined in other BMWG 304 work. Examples include, but are not limited to: 306 Latency [Ref.[2], section 3.8] 307 Frame Loss Rate [Ref.[2], section 3.6] 308 Throughput [Ref.[2], section 3.17] 309 Device Under Test (DUT) [Ref.[3], section 3.1.1] 310 System Under Test (SUT) [Ref.[3], section 3.1.2] 311 Offered Load [Ref.[3], section 3.5.2] 312 Out-of-order Packet [Ref.[4], section 3.3.2] 313 Duplicate Packet [Ref.[4], section 3.3.3] 314 Forwarding Delay [Ref.[4], section 3.2.4] 315 Jitter [Ref.[4], section 3.2.5] 316 Packet Loss [Ref.[7], Section 3.5] 317 Packet Reordering [Ref.[10], section 3.3] 319 This document has the following frequently used acronyms: 320 DUT Device Under Test 321 SUT System Under Test 323 This document adopts the definition format in Section 2 of RFC 1242 324 [2]. 
Terms defined in this document are capitalized when used 325 within this document. 327 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 328 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 329 document are to be interpreted as described in BCP 14, RFC 2119 [5]. 330 RFC 2119 defines the use of these key words to help make the 331 intent of standards track documents as clear as possible. While this 332 document uses these keywords, this document is not a standards track 333 document. 335 Protection Performance 337 3. Test Considerations 339 3.1. Paths 341 3.1.1. Path 343 Definition: 344 A unidirectional sequence of nodes <R1, ..., Rn> and links <L12, 345 ..., L(n-1)n> with the following properties: 347 a. R1 is the ingress node and forwards IP packets, which are input 348 into the DUT/SUT, to R2 as sub-IP frames over link L12. 350 b. Ri is a node that forwards data frames to R[i+1] over Link 351 Li[i+1] for all i, 1 < i < n.
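The sketch below is non-normative and illustrative only. It shows one way a Tester could represent the Path defined above and the per-packet records required by the General Model of Section 1.2 (sequence number, departure time, and arrival time), and how an approximate Failover Time could be derived from those records. The data layout, the function names, and the assumption of a constant offered load are inventions of this example; the authoritative benchmark definitions are those of Section 3.5, and the calculation methods are those named in Section 3.6. Python is used for the sketch.

   # Non-normative sketch: assumes a constant-rate, sequence-numbered,
   # unidirectional traffic stream offered by the Tester (Section 1.2).
   from dataclasses import dataclass
   from typing import List, Optional

   @dataclass
   class Path:
       """A unidirectional sequence of nodes R1..Rn and links
       L12..L(n-1)n, as defined in Section 3.1.1."""
       nodes: List[str]          # e.g., ["R1", "R2", "R3"]
       links: List[str]          # e.g., ["L12", "L23"]

   @dataclass
   class PacketRecord:
       """One Tester record per offered packet (Section 1.2)."""
       seq: int                  # IP packet sequence number
       tx_time: float            # departure time from the Tester, seconds
       rx_time: Optional[float]  # arrival time, or None if never received

   def failover_time_by_packet_loss(records: List[PacketRecord],
                                    offered_load_pps: float) -> float:
       """Approximate Failover Time as packets lost divided by the
       offered load, in the spirit of the Packet-Loss Based Method
       named in Section 3.6.2."""
       lost = sum(1 for r in records if r.rx_time is None)
       return lost / offered_load_pps

   def failover_time_by_timestamps(records: List[PacketRecord]) -> float:
       """Approximate Failover Time as the largest gap between
       consecutive arrival timestamps, in the spirit of the
       Timestamp-Based Method named in Section 3.6.3."""
       arrivals = sorted(r.rx_time for r in records if r.rx_time is not None)
       gaps = [later - earlier for earlier, later in zip(arrivals, arrivals[1:])]
       return max(gaps) if gaps else 0.0

For example, with an offered load of 1,000 packets per second, 250 packets lost around the Failover Event correspond to an approximate Failover Time of 0.25 seconds under the packet-loss view; the timestamp view reports the corresponding gap in arrival times, which also includes the nominal inter-packet gap.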