MPLS Working Group                                          I. Busi (Ed)
Internet Draft                                             Alcatel-Lucent
Intended status: Informational                     B. Niven-Jenkins (Ed)
                                                                       BT
                                                                       D.
Allan (Ed) 6 Ericsson 8 Expires: September 5, 2010 March 5, 2010 10 MPLS-TP OAM Framework 11 draft-ietf-mpls-tp-oam-framework-05.txt 13 Abstract 15 Multi-Protocol Label Switching (MPLS) Transport Profile (MPLS-TP) is 16 based on a profile of the MPLS and pseudowire (PW) procedures as 17 specified in the MPLS Traffic Engineering (MPLS-TE), pseudowire (PW) 18 and multi-segment PW (MS-PW) architectures complemented with 19 additional Operations, Administration and Maintenance (OAM) 20 procedures for fault, performance and protection-switching management 21 for packet transport applications that do not rely on the presence of 22 a control plane. 24 This document describes a framework to support a comprehensive set of 25 OAM procedures that fulfills the MPLS-TP OAM requirements [12]. 27 This document is a product of a joint Internet Engineering Task Force 28 (IETF) / International Telecommunications Union Telecommunications 29 Standardization Sector (ITU-T) effort to include an MPLS Transport 30 Profile within the IETF MPLS and PWE3 architectures to support the 31 capabilities and functionalities of a packet transport network as 32 defined by the ITU-T. 34 Status of this Memo 36 This Internet-Draft is submitted to IETF in full conformance with the 37 provisions of BCP 78 and BCP 79. 39 Internet-Drafts are working documents of the Internet Engineering 40 Task Force (IETF), its areas, and its working groups. Note that other 41 groups may also distribute working documents as Internet-Drafts. 43 Internet-Drafts are draft documents valid for a maximum of six months 44 and may be updated, replaced, or obsoleted by other documents at any 45 time. It is inappropriate to use Internet-Drafts as reference 46 material or to cite them other than as "work in progress". 48 The list of current Internet-Drafts can be accessed at 49 http://www.ietf.org/ietf/1id-abstracts.txt. 51 The list of Internet-Draft Shadow Directories can be accessed at 52 http://www.ietf.org/shadow.html. 54 This Internet-Draft will expire on September 5, 2010. 56 Copyright Notice 58 Copyright (c) 2010 IETF Trust and the persons identified as the 59 document authors. All rights reserved. 61 This document is subject to BCP 78 and the IETF Trust's Legal 62 Provisions Relating to IETF Documents 63 (http://trustee.ietf.org/license-info) in effect on the date of 64 publication of this document. Please review these documents 65 carefully, as they describe your rights and restrictions with respect 66 to this document. Code Components extracted from this document must 67 include Simplified BSD License text as described in Section 4.e of 68 the Trust Legal Provisions and are provided without warranty as 69 described in the BSD License. 71 Table of Contents 73 1. Introduction..................................................5 74 1.1. Contributing Authors.....................................5 75 2. Conventions used in this document.............................6 76 2.1. Terminology..............................................6 77 2.2. Definitions..............................................7 78 3. Functional Components.........................................8 79 3.1. Maintenance Entity and Maintenance Entity Group..........9 80 3.2. Nested MEGs: Path Segment Tunnels and Tandem Connection 81 Monitoring...................................................11 82 3.3. MEG End Points (MEPs)...................................12 83 3.4. MEG Intermediate Points (MIPs)..........................13 84 3.5. 
Server MEPs.............................................14 85 3.6. Configuration Considerations............................15 86 3.7. P2MP considerations.....................................15 87 4. Reference Model..............................................16 88 4.1. MPLS-TP Section Monitoring (SME)........................18 89 4.2. MPLS-TP LSP End-to-End Monitoring (LME).................19 90 4.3. MPLS-TP LSP Path Segment Tunnel Monitoring (LPSTME).....19 91 4.4. MPLS-TP PW Monitoring (PME).............................21 92 4.5. MPLS-TP MS-PW Path Segment Tunnel Monitoring (PPSTME)...21 93 5. OAM Functions for proactive monitoring.......................22 94 5.1. Continuity Check and Connectivity Verification..........23 95 5.1.1. Defects identified by CC-V.........................25 96 5.1.2. Consequent action..................................26 97 5.1.3. Configuration considerations.......................27 98 5.2. Remote Defect Indication................................28 99 5.2.1. Configuration considerations.......................29 100 5.3. Alarm Reporting.........................................29 101 5.4. Lock Reporting..........................................30 102 5.5. Packet Loss Measurement.................................31 103 5.5.1. Configuration considerations.......................32 104 5.6. Client Failure Indication...............................32 105 5.6.1. Configuration considerations.......................32 106 5.7. Packet Delay Measurement................................33 107 5.7.1. Configuration considerations.......................33 108 6. OAM Functions for on-demand monitoring.......................33 109 6.1. Connectivity Verification...............................34 110 6.1.1. Configuration considerations.......................35 111 6.2. Packet Loss Measurement.................................35 112 6.2.1. Configuration considerations.......................36 113 6.3. Diagnostic Tests........................................36 114 6.3.1. Throughput Estimation..............................36 115 6.3.2. Data plane Loopback................................37 116 6.4. Route Tracing...........................................37 117 6.4.1. Configuration considerations.......................38 119 6.5. Packet Delay Measurement...............................38 120 6.5.1. Configuration considerations......................38 121 6.6. Lock Instruct..........................................39 122 6.6.1. Locking a transport path..........................39 123 6.6.2. Unlocking a transport path........................39 124 7. Security Considerations.....................................40 125 8. IANA Considerations.........................................40 126 9. Acknowledgments.............................................40 127 10. References.................................................42 128 10.1. Normative References..................................42 129 10.2. Informative References................................42 131 Editors' Note: 133 This Informational Internet-Draft is aimed at achieving IETF 134 Consensus before publication as an RFC and will be subject to an IETF 135 Last Call. 137 [RFC Editor, please remove this note before publication as an RFC and 138 insert the correct Streams Boilerplate to indicate that the published 139 RFC has IETF Consensus.] 141 1. 
Introduction 143 As noted in [8], MPLS-TP defines a profile of the MPLS-TE and (MS-)PW 144 architectures defined in RFC 3031 [2], RFC 3985 [5] and [7] which is 145 complemented with additional OAM mechanisms and procedures for alarm, 146 fault, performance and protection-switching management for packet 147 transport applications. 149 In line with [13], existing MPLS OAM mechanisms will be used wherever 150 possible and extensions or new OAM mechanisms will be defined only 151 where existing mechanisms are not sufficient to meet the 152 requirements. 154 The MPLS-TP OAM framework defined in this document provides a 155 comprehensive set of OAM procedures that satisfy the MPLS-TP OAM 156 requirements [12]. In this regard, it defines similar OAM 157 functionality as for existing SONET/SDH and OTN OAM mechanisms (e.g. 158 [16]). 160 The MPLS-TP OAM framework is applicable to both LSPs and (MS-)PWs and 161 supports co-routed and bidirectional p2p transport paths as well as 162 unidirectional p2p and p2mp transport paths. 164 This document is a product of a joint Internet Engineering Task Force 165 (IETF) / International Telecommunications Union Telecommunications 166 Standardization Sector (ITU-T) effort to include an MPLS Transport 167 Profile within the IETF MPLS and PWE3 architectures to support the 168 capabilities and functionalities of a packet transport network as 169 defined by the ITU-T. 171 1.1. Contributing Authors 173 Dave Allan, Italo Busi, Ben Niven-Jenkins, Annamaria Fulignoli, 174 Enrique Hernandez-Valencia, Lieven Levrau, Dinesh Mohan, Vincenzo 175 Sestito, Nurit Sprecher, Huub van Helvoort, Martin Vigoureux, Yaacov 176 Weingarten, Rolf Winter 178 2. Conventions used in this document 180 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 181 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 182 document are to be interpreted as described in RFC-2119 [1]. 184 2.1. Terminology 186 AC Attachment Circuit 188 DBN Domain Border Node 190 FDI Forward Defect Indication 192 LER Label Edge Router 194 LME LSP Maintenance Entity 196 LSP Label Switched Path 198 LSR Label Switch Router 200 LPSTME LSP packet segment tunnel ME 202 ME Maintenance Entity 204 MEG Maintenance Entity Group 206 MEP Maintenance Entity Group End Point 208 MIP Maintenance Entity Group Intermediate Point 210 PHB Per-hop Behavior 212 PME PW Maintenance Entity 214 PPSTME PW path segment tunnel ME 216 PST Path Segment Tunnel 218 PSN Packet Switched Network 220 PW Pseudowire 222 SLA Service Level Agreement 224 SME Section Maintenance Entity 226 2.2. Definitions 228 Note - the definitions in this section are intended to be in line 229 with ITU-T recommendation Y.1731 in order to have a common, 230 unambiguous terminology. They do not however intend to imply a 231 certain implementation but rather serve as a framework to describe 232 the necessary OAM functions for MPLS-TP. 234 Data plane loopback: it is an out-of-service test where an interface 235 at either an intermediate or terminating node in a path is placed 236 into a data plane loopback state, such that it loops back all the 237 packets (including user data and OAM) it receives on a specific MPLS- 238 TP transport path. 240 Domain Border Node (DBN): An LSP intermediate MPLS-TP node (LSR) that 241 is at the boundary of an MPLS-TP OAM domain. Such a node may be 242 present on the edge of two domains or may be connected by a link to 243 an MPLS-TP node in another OAM domain. 
Loopback: see the data plane loopback and OAM loopback definitions.

Maintenance Entity (ME): Some portion of a transport path that requires management, bounded by two points, and the relationship between those points to which maintenance and monitoring operations apply (details in section 3.1).

Maintenance Entity Group (MEG): The set of one or more maintenance entities that maintain and monitor a transport path in an OAM domain.

MEP: A MEG end point (MEP) is capable of initiating (MEP Source) and terminating (MEP Sink) OAM messages for fault management and performance monitoring. MEPs reside at the boundaries of an ME (details in section 3.3).

MEP Source: A MEP acts as MEP source for an OAM message when it originates and inserts the message into the transport path for its associated MEG.

MEP Sink: A MEP acts as a MEP sink for an OAM message when it terminates and processes the messages received from its associated MEG.

MIP: A MEG intermediate point (MIP) terminates and processes OAM messages and may generate OAM messages in reaction to received OAM messages. It never generates unsolicited OAM messages itself. A MIP resides within an MEG between MEPs (details in section 3.4).

OAM domain: A domain, as defined in [11], whose entities are grouped for the purpose of keeping the OAM confined within that domain.

Note - within the rest of this document the term "domain" is used to indicate an "OAM domain".

OAM flow: The set of all OAM messages originating with a specific MEP that instrument one direction of a MEG.

OAM information element: An atomic piece of information exchanged between MEPs in a MEG and used by an OAM application.

OAM loopback: The capability of a node to intercept specific OAM packets and to generate a reply back to their sender. OAM loopback can work in-service and can support different OAM functions (e.g., bidirectional on-demand connectivity verification).

OAM Message: One or more OAM information elements that, when exchanged between MEPs or between MEPs and MIPs, perform some OAM functionality (e.g. connectivity verification).

OAM Packet: A packet that carries one or more OAM messages (i.e. OAM information elements).

Path: See Transport Path.

Signal Fail: A condition declared by a MEP when the data forwarding capability associated with a transport path has failed, e.g. loss of continuity.

Tandem Connection: A tandem connection is an arbitrary part of a transport path that can be monitored (via OAM) independently of the end-to-end monitoring (OAM). The tandem connection may also include the forwarding engine(s) of the node(s) at the boundaries of the tandem connection.

This document uses the terms defined in RFC 5654 [11].

This document uses the term 'Per-hop Behavior' as defined in [14].

3. Functional Components

MPLS-TP defines a profile of the MPLS and PW architectures ([2], [5] and [7]) that is required to transport service traffic where the characteristics of information transfer between the transport path endpoints can be demonstrated to comply with certain performance and quality guarantees.

In order to describe the required OAM functionality, this document introduces a set of high-level functional components.

3.1.
Maintenance Entity and Maintenance Entity Group 326 MPLS-TP OAM operates in the context of Maintenance Entities (MEs) 327 that are a relationship between two points of a point to point 328 transport path or a root and a leaf of a point to multipoint 329 transport path to which maintenance and monitoring operations apply. 330 These two points are called Maintenance Entity Group (MEG) End Points 331 (MEPs). In between these two points zero or more intermediate points, 332 called Maintenance Entity Group Intermediate Points (MIPs), MAY exist 333 and can be shared by more than one ME in a MEG. 335 The abstract reference model for an ME with MEPs and MIPs is 336 described in Figure 1 below: 338 +-+ +-+ +-+ +-+ 339 |A|----|B|----|C|----|D| 340 +-+ +-+ +-+ +-+ 342 Figure 1 ME Abstract Reference Model 344 The instantiation of this abstract model to different MPLS-TP 345 entities is described in section 4. In this model, nodes A, B, C and 346 D can be LER/LSR for an LSP or the {S|T}-PEs for a MS-PW. MEPs reside 347 in nodes A and D while MIPs reside in nodes B and C. The links 348 connecting adjacent nodes can be physical links, (sub-)layer 349 LSPs/PSTs, or serving layer paths. 351 This functional model defines the relationships between all OAM 352 entities from a maintenance perspective, to allow each Maintenance 353 Entity to monitor and manage the (sub-)layer network under its 354 responsibility and to localize problems efficiently. 356 Another OAM functional component is referred to as Maintenance Entity 357 Group, which is a collection of one or more MEs that belongs to the 358 same transport path and that are maintained and monitored as a group. 359 An MPLS-TP Maintenance Entity Group may be defined to monitor the 360 transport path for fault and/or performance management. 362 The MEPs that form an MEG are configured and managed to limit the 363 scope of an OAM flow within the MEG that the MEPs belong to (i.e. 364 within the domain of the transport path that is being monitored and 365 managed). A misbranching fault may cause OAM packets to be delivered 366 to a MEP that is not in the MEG of origin. 368 In case of unidirectional point-to-point transport paths, a single 369 unidirectional Maintenance Entity is defined to monitor it. 371 In case of associated bi-directional point-to-point transport paths, 372 two independent unidirectional Maintenance Entities are defined to 373 independently monitor each direction. This has implications for 374 transactions that terminate at or query a MIP as a return path from 375 MIP to source MEP does not necessarily exist in a unidirectional MEG. 377 In case of co-routed bi-directional point-to-point transport paths, a 378 single bidirectional Maintenance Entity is defined to monitor both 379 directions congruently. 381 In case of unidirectional point-to-multipoint transport paths, a 382 single unidirectional Maintenance entity for each leaf is defined to 383 monitor the transport path from the root to that leaf. 385 The reference model for the p2mp MEG is represented in Figure 2. 
                            +-+
                         /--|D|
                        /   +-+
                     +-+
                 /--|C|
      +-+    +-+/   +-+\    +-+
      |A|----|B|         \--|E|
      +-+    +-+\    +-+    +-+
                 \--|F|
                    +-+

              Figure 2 Reference Model for p2mp MEG

In case of p2mp transport paths, the OAM operations are independent for each ME (A-D, A-E and A-F):

o Fault conditions - some faults may impact more than one ME depending on where the failure is located;

o Packet loss - packet dropping may impact more than one ME depending on where the packets are lost;

o Packet delay - will be unique per ME.

Each leaf (i.e. D, E and F) terminates OAM flows to monitor the ME between itself and the root, while the root (i.e. A) generates OAM messages common to all the MEs of the p2mp MEG. Nodes B and C MAY implement a MIP in the corresponding MEG.

3.2. Nested MEGs: Path Segment Tunnels and Tandem Connection Monitoring

In order to verify and maintain performance and quality guarantees, there is a need to not only apply OAM functionality on a transport path granularity (e.g. LSP or MS-PW), but also on arbitrary parts of transport paths, defined as Tandem Connections, between any two arbitrary points along a transport path.

Path segment tunnels (PSTs), as defined in [8], are instantiated to provide monitoring of a portion of a set of co-routed transport paths (LSPs or MS-PWs). Path segment tunnels can also be employed to meet the requirement to provide tandem connection monitoring (TCM).

TCM for a given portion of a transport path is implemented by first creating a path segment tunnel that has a 1:1 association with the portion of the transport path that is to be uniquely monitored. This means there is a direct correlation between all FM and PM information gathered for the PST and the monitored portion of the E2E transport path. The PST is monitored using normal LSP monitoring.

There are a number of implications to this approach:

1) The PST would use the uniform model of TC code point copying between sub-layers for diffserv such that the E2E markings and PHB treatment for the transport path are preserved by the PST.

2) The PST would use the pipe model for TTL handling such that MIP addressing for the E2E entity would not be impacted by the presence of the PST.

3) PM statistics need to be adjusted for the encapsulation overhead of the additional PST sub-layer.

A PST is instantiated to create an MEG that monitors a segment of a transport path (LSP or PW). The endpoints of the PST are MEPs and limit the scope of an OAM flow within the MEG the MEPs belong to (i.e. within the domain of the PST that is being monitored and managed).

The following properties apply to all MPLS-TP MEGs:

o They can be nested but not overlapped, e.g. an MEG may cover a segment or a concatenated segment of another MEG, and may also include the forwarding engine(s) of the node(s) at the edge(s) of the segment or concatenated segment, but all its MEPs and MIPs are no longer part of the encompassing MEG.

o It is possible for MEPs of nested MEGs to reside on a single node.

o Each OAM flow is associated with a single Maintenance Entity Group.

o OAM packets that instrument a particular direction of a transport path are subject to the same forwarding treatment (i.e.
fate share) as the data traffic and in some cases may be required to have a common queuing discipline E2E with the class of traffic monitored. OAM packets can be distinguished from the data traffic using the GAL and ACH constructs [9] for LSP and Section or the ACH construct [6] and [9] for (MS-)PW.

3.3. MEG End Points (MEPs)

MEG End Points (MEPs) are the source and sink points of an MEG. In the context of an MPLS-TP LSP, only LERs can implement MEPs while in the context of a path segment tunnel (PST) both LERs and LSRs can implement MEPs that contribute to the overall monitoring infrastructure for the transport path. Regarding MPLS-TP PWs, only T-PEs can implement MEPs while for PSTs supporting a PW both T-PEs and S-PEs can implement MEPs. In the context of an MPLS-TP Section, any MPLS-TP LSR can implement a MEP.

MEPs are responsible for activating and controlling all of the OAM functionality for the MEG. A MEP is capable of originating and terminating OAM messages for fault management and performance monitoring. These OAM messages are encapsulated into an OAM packet using the G-ACh as defined in RFC 5586 [9]: in this case the G-ACh message is an OAM message and the channel type indicates an OAM message. A MEP terminates all the OAM packets it receives from the MEG it belongs to. The MEG an OAM packet belongs to is inferred from the MPLS or PW label or, in the case of an MPLS-TP Section, from the MPLS-TP port on which the OAM packet has been received with the GAL at the top of the label stack.

OAM packets may require the use of an available "out-of-band" return path (as defined in [8]). In such cases sufficient information is required in the originating transaction such that the OAM reply packet can be constructed (e.g. IP address).

Once an MEG is configured, the operator can configure which OAM functions to use on the MEG but the MEPs are always enabled. A node at the edge of an MEG always supports a MEP.

MEPs terminate all OAM packets received from the associated MEG. As the MEP corresponds to the termination of the forwarding path for an MEG at the given (sub-)layer, OAM packets never "leak" outside of a MEG in a fault-free implementation.

A MEP of an MPLS-TP transport path (Section, LSP or PW) coincides with the transport path termination and monitors it for failures or performance degradation (e.g. based on packet counts) in an end-to-end scope. Note that both MEP source and MEP sink coincide with the transport path's source and sink terminations.

The MEPs of a path segment tunnel are not necessarily coincident with the termination of the MPLS-TP transport path (LSP or PW) and monitor some portion of the transport path for failures or performance degradation (e.g. based on packet counts) only within the boundary of the MEG for the path segment tunnel.

An MPLS-TP MEP sink passes a fault indication to its client (sub-)layer network as a consequent action of fault detection.

It may occur that the MEPs of a path segment tunnel are set on both sides of the forwarding engine such that the MEG is entirely internal to the node.

Note that a MEP can only exist at the beginning and end of a layer, i.e. an LSP or PW. If there is a need to monitor some portion of that LSP or PW, a new sub-layer in the form of a path segment tunnel MUST be created, which permits MEPs and an associated MEG to be instantiated.
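The association of a received OAM packet with its MEG, as described earlier in this section, can be illustrated with a small, non-normative sketch. The table names (lsp_label_to_meg, pw_label_to_meg, port_to_section_meg) and the function itself are assumptions for illustration only, not a defined procedure.

   GAL = 13  # G-ACh Label (RFC 5586)

   def classify_oam_packet(label_stack, in_port, lsp_label_to_meg,
                           pw_label_to_meg, port_to_section_meg):
       """Illustrative sketch: infer the MEG an OAM packet belongs to.

       The MEG is inferred from the MPLS or PW label or, for an MPLS-TP
       Section, from the port on which the packet was received with the
       GAL at the top of the label stack (see section 3.3).
       """
       top = label_stack[0]
       if top == GAL:
           # Section OAM: GAL at the top of the stack, MEG bound to the port
           return port_to_section_meg.get(in_port)
       if top in pw_label_to_meg:
           return pw_label_to_meg[top]
       return lsp_label_to_meg.get(top)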
538 We have the case of an intermediate node sending msg to a MEP. To do 539 this it uses the LSP label - i.e. the top label of the stack at that 540 point. 542 3.4. MEG Intermediate Points (MIPs) 544 A MEG Intermediate Point (MIP) is a point between the MEPs of an MEG. 546 A MIP is capable of reacting to some OAM packets and forwarding all 547 the other OAM packets while ensuring fate sharing with data plane 548 packets. However, a MIP does not initiate unsolicited OAM packets, 549 but may be addressed by OAM packets initiated by one of the MEPs of 550 the MEG. A MIP can generate OAM packets only in response to OAM 551 packets that are sent on the MEG it belongs to. 553 An intermediate node within a MEG can either: 555 o support per-node MIP (i.e. a single MIP per node) 557 o support per-interface MIP (i.e. two or more MIPs per node on both 558 sides of the forwarding engine) 560 When sending an OAM packet to a MIP, the source MEP should set the 561 TTL field to indicate the number of hops necessary to reach the node 562 where the MIP resides. It is always assumed that the "pipe"/"short 563 pipe" model of TTL handling is used by the MPLS transport profile. 565 The source MEP should also include Target MIP information in the OAM 566 packets sent to a MIP to allow proper identification of the MIP 567 within the node. The MEG the OAM packet is associated with is 568 inferred from the MPLS label. 570 A node at the edge of an MEG can also support per-interface MEPs and 571 per-interface MIPs on either side of the forwarding engine. 573 Once an MEG is configured, the operator can enable/disable the MIPs 574 on the nodes within the MEG. All the intermediate nodes host MIP(s). 575 Local policy allows them to be enabled per function and per LSP. The 576 local policy is controlled by the management system, which may 577 delegate it to the control plane. 579 3.5. Server MEPs 581 A server MEP is a MEP of an MEG that is either: 583 o defined in a layer network that is "below", which is to say 584 encapsulates and transports the MPLS-TP layer network being 585 referenced, or 587 o defined in a sub-layer of the MPLS-TP layer network that is 588 "below" which is to say encapsulates and transports the sub-layer 589 being referenced. 591 A server MEP can coincide with a MIP or a MEP in the client (MPLS-TP) 592 (sub-)layer network. 594 A server MEP also interacts with the client/server adaptation 595 function between the client (MPLS-TP) (sub-)layer network and the 596 server (sub-)layer network. The adaptation function maintains state 597 on the mapping of MPLS-TP transport paths that are setup over that 598 server (sub-)layer's transport path. 600 For example, a server MEP can be either: 602 o A termination point of a physical link (e.g. 802.3), an SDH VC or 603 OTN ODU, for the MPLS-TP Section layer network, defined in section 604 4.1; 606 o An MPLS-TP Section MEP for MPLS-TP LSPs, defined in section 4.2; 608 o An MPLS-TP LSP MEP for MPLS-TP PWs, defined in section 4.4; 610 o An MPLS-TP PST MEP used for LSP segment monitoring, as defined in 611 section 4.3, for MPLS-TP LSPs or higher-level LSP PSTs; 613 o An MPLS-TP PST MEP used for PW segment monitoring, as defined in 614 section 4.5, for MPLS-TP PWs or higher-level PW PSTs. 616 The server MEP can run appropriate OAM functions for fault detection 617 within the server (sub-)layer network, and provides a fault 618 indication to its client MPLS-TP layer network. Server MEP OAM 619 functions are outside the scope of this document. 621 3.6. 
Configuration Considerations 623 When a control plane is not present, the management plane configures 624 these functional components. Otherwise they can be configured either 625 by the management plane or by the control plane. 627 Local policy allows to disable the usage of any available "out-of- 628 band" return path, as defined in [8], to generate OAM reply packets, 629 irrespectively on what is requested by the node originating the OAM 630 packet triggering the request. 632 PSTs are usually instantiated when the transport path is created by 633 either the management plane or by the control plane (if present). 634 Sometimes PST can be instantiated after the transport path is 635 initially created (e.g. PST). 637 3.7. P2MP considerations 639 All the traffic sent over a p2mp transport path, including OAM 640 packets generated by a MEP, is sent (multicast) from the root to all 641 the leaves. As a consequence: 643 o To send an OAM packet to all leaves, the source MEP can send a 644 single OAM packet that will be delivered by the forwarding plane 645 to all the leaves and processed by all the leaves. 647 o To send an OAM packet to a single leaf, the source MEP sends a 648 single OAM packet that will be delivered by the forwarding plane 649 to all the leaves but contains sufficient information to 650 identify a target leaf, and therefore is processed only by the 651 target leaf and ignored by the other leaves. 653 o In order to send an OAM packet to M leaves (i.e., a subset of 654 all the leaves), the source MEP sends M different OAM packets 655 targeted to each individual leaf in the group of M leaves. 656 Better mechanisms are outside the scope of this document. 658 P2MP paths are unidirectional, therefore any return path to a source 659 MEP for on demand transactions will be out of band. 661 A mechanism to scope the set of MEPs or MIPs expected to respond to a 662 given "on demand" transaction is useful as it relieves the source MEP 663 of the requirement to filter and discard undesired responses as 664 normally TTL exhaust will address all MIPs at a given distance from 665 the source, and failure to exhaust TTL will address all MEPs. 667 4. Reference Model 669 The reference model for the MPLS-TP framework builds upon the concept 670 of an MEG, and its associated MEPs and MIPs, to support the 671 functional requirements specified in [12]. 673 The following MPLS-TP MEGs are specified in this document: 675 o A Section Maintenance Entity Group (SME), allowing monitoring and 676 management of MPLS-TP Sections (between MPLS LSRs). 678 o A LSP Maintenance Entity Group (LME), allowing monitoring and 679 management of an end-to-end LSP (between LERs). 681 o A PW Maintenance Entity Group (PME), allowing monitoring and 682 management of an end-to-end SS/MS-PWs (between T-PEs). 684 o A LSP PST Maintenance Entity Group (LPSTME), allowing monitoring 685 and management of a path segment tunnel (between any LERs/LSRs 686 along an LSP). 688 o A MS-PW PST Maintenance Entity (PPSTME), allowing monitoring and 689 management of an MPLS-TP path segment tunnel (between any 690 T-PEs/S-PEs along the (MS-)PW). 692 The MEGs specified in this MPLS-TP framework are compliant with the 693 architecture framework for MPLS-TP MS-PWs [7] and LSPs [2]. 695 Hierarchical LSPs are also supported in the form of path segment 696 tunnels. 
In this case, each LSP Tunnel in the hierarchy is a 697 different sub-layer network that can be monitored, independently from 698 higher and lower level LSP tunnels in the hierarchy, on an end-to-end 699 basis (from LER to LER) by a PSTME. It is possible to monitor a 700 portion of a hierarchical LSP by instantiating a hierarchical PSTME 701 between any LERs/LSRs along the hierarchical LSP. 703 Native |<------------------- MS-PW1Z ------------------->| Native 704 Layer | | Layer 705 Service | |<-PSN13->| |<-PSN3X->| |<-PSNXZ->| | Service 706 (AC1) V V LSP V V LSP V V LSP V V (AC2) 707 +----+ +-+ +----+ +----+ +-+ +----+ 708 +----+ |TPE1| | | |SPE3| |SPEX| | | |TPEZ| +----+ 709 | | | |=========| |=========| |=========| | | | 710 | CE1|---|........PW13.......|...PW3X..|........PWXZ.......|---|CE2 | 711 | | | |=========| |=========| |=========| | | | 712 +----+ | 1 | |2| | 3 | | X | |Y| | Z | +----+ 713 +----+ +-+ +----+ +----+ +-+ +----+ 714 . . . . 715 | | | | 716 |<---- Domain 1 --->| |<---- Domain Z --->| 717 ^------------------- PW1Z PME -------------------^ 718 ^---- PW13 PPSTME---^ ^---- PWXZ PPSTME---^ 719 ^---------^ ^---------^ 720 PSN13 LME PSNXZ LME 721 ^---^ ^---^ ^---------^ ^---^ ^---^ 722 Sec12 Sec23 Sec3X SecXY SecYZ 723 SME SME SME SME SME 725 TPE1: Terminating Provider Edge 1 SPE2: Switching Provider Edge 3 726 TPEX: Terminating Provider Edge X SPEZ: Switching Provider Edge Z 728 ^---^ ME ^ MEP ==== LSP .... PW 730 Figure 3 Reference Model for the MPLS-TP OAM Framework 732 Figure 3 depicts a high-level reference model for the MPLS-TP OAM 733 framework. The figure depicts portions of two MPLS-TP enabled network 734 domains, Domain 1 and Domain Z. In Domain 1, LSR1 is adjacent to LSR2 735 via the MPLS Section Sec12 and LSR2 is adjacent to LSR3 via the MPLS 736 Section Sec23. Similarly, in Domain Z, LSRX is adjacent to LSRY via 737 the MPLS Section SecXY and LSRY is adjacent to LSRZ via the MPLS 738 Section SecYZ. In addition, LSR3 is adjacent to LSRX via the MPLS 739 Section 3X. 741 Figure 3 also shows a bi-directional MS-PW (PW1Z) between AC1 on TPE1 742 and AC2 on TPEZ. The MS-PW consists of three bi-directional PW 743 Segments: 1) PW13 segment between T-PE1 and S-PE3 via the bi- 744 directional PSN13 LSP, 2) PW3X segment between S-PE3 and S-PEX, via 745 the bi-directional PSN3X LSP, and 3) PWXZ segment between S-PEX and 746 T-PEZ via the bi-directional PSNXZ LSP. 748 The MPLS-TP OAM procedures that apply to an MEG are expected to 749 operate independently from procedures on other MEGs. Yet, this does 750 not preclude that multiple MEGs may be affected simultaneously by the 751 same network condition, for example, a fiber cut event. 753 Note that there are no constrains imposed by this OAM framework on 754 the number, or type (p2p, p2mp, LSP or PW), of MEGs that may be 755 instantiated on a particular node. In particular, when looking at 756 Figure 3, it should be possible to configure one or more MEPs on the 757 same node if that node is the endpoint of one or more MEGs. 759 Figure 3 does not describe a PW3X PPSTME because typically PSTs are 760 used to monitor an OAM domain (like PW13 and PWXZ PPSTMEs) rather 761 than the segment between two OAM domains. However the OAM framework 762 does not pose any constraints on the way PSTs are instantiated as 763 long as they are not overlapping. 765 The subsections below define the MEGs specified in this MPLS-TP OAM 766 architecture framework document. 
Unless otherwise stated, all references to domains, LSRs, MPLS Sections, LSPs, pseudowires and MEGs in this section are made in relation to those shown in Figure 3.

4.1. MPLS-TP Section Monitoring (SME)

An MPLS-TP Section ME (SME) is an MPLS-TP maintenance entity intended to monitor an MPLS Section as defined in [11]. An SME may be configured on any MPLS Section. SME OAM packets must fate share with the user data packets sent over the monitored MPLS Section.

An SME is intended to be deployed for applications where it is preferable to monitor the link between topologically adjacent (next hop in this layer network) MPLS (and MPLS-TP enabled) LSRs rather than monitoring the individual LSP or PW segments traversing the MPLS Section, and where the server layer technology does not provide adequate OAM capabilities.

Figure 3 shows 5 Section MEs configured in the network between AC1 and AC2:

1. Sec12 ME associated with the MPLS Section between LSR 1 and LSR 2,

2. Sec23 ME associated with the MPLS Section between LSR 2 and LSR 3,

3. Sec3X ME associated with the MPLS Section between LSR 3 and LSR X,

4. SecXY ME associated with the MPLS Section between LSR X and LSR Y, and

5. SecYZ ME associated with the MPLS Section between LSR Y and LSR Z.

4.2. MPLS-TP LSP End-to-End Monitoring (LME)

An MPLS-TP LSP ME (LME) is an MPLS-TP maintenance entity intended to monitor an end-to-end LSP between two LERs. An LME may be configured on any MPLS LSP. LME OAM packets must fate share with user data packets sent over the monitored MPLS-TP LSP.

An LME is intended to be deployed in scenarios where it is desirable to monitor an entire LSP between its LERs, rather than, say, monitoring individual PWs.

Figure 3 depicts 2 LMEs configured in the network between AC1 and AC2: 1) the PSN13 LME between LER 1 and LER 3, and 2) the PSNXZ LME between LER X and LER Z. Note that the presence of a PSN3X LME in such a configuration is optional, hence, not precluded by this framework. For instance, the SPs may prefer to monitor the MPLS-TP Section between the two LSRs rather than the individual LSPs.

4.3. MPLS-TP LSP Path Segment Tunnel Monitoring (LPSTME)

An MPLS-TP LSP Path Segment Tunnel ME (LPSTME) is an MPLS-TP maintenance entity intended to monitor an arbitrary part of an LSP between a given pair of LSRs independently from the end-to-end monitoring (LME). An LPSTME can monitor an LSP segment or concatenated segment and it may also include the forwarding engine(s) of the node(s) at the edge(s) of the segment or concatenated segment.

Multiple LPSTMEs MAY be configured on any LSP. The LSRs that terminate the LPSTME may or may not be immediately adjacent at the MPLS-TP layer. LPSTME OAM packets must fate share with the user data packets sent over the monitored LSP segment.

A LPSTME can be defined between the following entities:

o LER and any LSR of a given LSP.

o Any two LSRs of a given LSP.

An LPSTME is intended to be deployed in scenarios where it is preferable to monitor the behaviour of a part of an LSP or set of LSPs rather than the entire LSP itself, for example when there is a need to monitor a part of an LSP that extends beyond the administrative boundaries of an MPLS-TP enabled administrative domain.
842 |<--------------------- PW1Z -------------------->| 843 | | 844 | |<--------------PSN1Z LSP-------------->| | 845 | |<-PSN13->| |<-PSN3X->| |<-PSNXZ->| | 846 V V S-LSP V V S-LSP V V S-LSP V V 847 +----+ +-+ +----+ +----+ +-+ +----+ 848 +----+ | PE1| | | |DBN3| |DBNX| | | | PEZ| +----+ 849 | |AC1| |=======================================| |AC2| | 850 | CE1|---|......................PW1Z.......................|---|CE2 | 851 | | | |=======================================| | | | 852 +----+ | 1 | |2| | 3 | | X | |Y| | Z | +----+ 853 +----+ +-+ +----+ +----+ +-+ +----+ 854 . . . . 855 | | | | 856 |<---- Domain 1 --->| |<---- Domain Z --->| 858 ^---------^ ^---------^ 859 PSN13 LPSTME PSNXZ LPSTME 860 ^---------------------------------------^ 861 PSN1Z LME 863 DBN: Domain Border Node 865 Figure 4 MPLS-TP LSP Path Segment Tunnel ME (LPSTME) 867 Figure 4 depicts a variation of the reference model in Figure 3 where 868 there is an end-to-end PSN LSP (PSN1Z LSP) between PE1 and PEZ. PSN1Z 869 LSP consists of, at least, three LSP Concatenated Segments: PSN13, 870 PSN3X and PSNXZ. In this scenario there are two separate LPSTMEs 871 configured to monitor the PSN1Z LSP: 1) a LPSTME monitoring the PSN13 872 LSP Concatenated Segment on Domain 1 (PSN13 LPSTME), and 2) a LPSTME 873 monitoring the PSNXZ LSP Concatenated Segment on Domain Z (PSNXZ 874 LPSTME). 876 It is worth noticing that LPSTMEs can coexist with the LME monitoring 877 the end-to-end LSP and that LPSTME MEPs and LME MEPs can be 878 coincident in the same node (e.g. PE1 node supports both the PSN1Z 879 LME MEP and the PSN13 LPSTME MEP). 881 4.4. MPLS-TP PW Monitoring (PME) 883 An MPLS-TP PW ME (PME) is an MPLS-TP maintenance entity intended to 884 monitor a SS-PW or MS-PW between a pair of T-PEs. A PME MAY be 885 configured on any SS-PW or MS-PW. PME OAM packets must fate share 886 with the user data packets sent over the monitored PW. 888 A PME is intended to be deployed in scenarios where it is desirable 889 to monitor an entire PW between a pair of MPLS-TP enabled T-PEs 890 rather than monitoring the LSP aggregating multiple PWs between PEs. 892 |<------------------- MS-PW1Z ------------------->| 893 | | 894 | |<-PSN13->| |<-PSN3X->| |<-PSNXZ->| | 895 V V LSP V V LSP V V LSP V V 896 +----+ +-+ +----+ +----+ +-+ +----+ 897 +----+ |TPE1| | | |SPE3| |SPEX| | | |TPEZ| +----+ 898 | |AC1| |=========| |=========| |=========| |AC2| | 899 | CE1|---|........PW13.......|...PW3X..|........PWXZ.......|---|CE2 | 900 | | | |=========| |=========| |=========| | | | 901 +----+ | 1 | |2| | 3 | | X | |Y| | Z | +----+ 902 +----+ +-+ +----+ +----+ +-+ +----+ 904 ^---------------------PW1Z PME--------------------^ 906 Figure 5 MPLS-TP PW ME (PME) 908 Figure 5 depicts a MS-PW (MS-PW1Z) consisting of three segments: 909 PW13, PW3X and PWXZ and its associated end-to-end PME (PW1Z PME). 911 4.5. MPLS-TP MS-PW Path Segment Tunnel Monitoring (PPSTME) 913 An MPLS-TP MS-PW Path Segment Tunnel Monitoring ME (PPSTME) is an 914 MPLS-TP maintenance entity intended to monitor an arbitrary part of 915 an MS-PW between a given pair of PEs independently from the end-to- 916 end monitoring (PME). A PPSTME can monitor a PW segment or 917 concatenated segment and it may also include the forwarding engine(s) 918 of the node(s) at the edge(s) of the segment or concatenated segment. 920 Multiple PPSTMEs MAY be configured on any MS-PW. The PEs may or may 921 not be immediately adjacent at the MS-PW layer. 
PPSTME OAM packets 922 fate share with the user data packets sent over the monitored PW 923 Segment. 925 A PPSTME can be defined between the following entities: 927 o T-PE and any S-PE of a given MS-PW 928 o Any two S-PEs of a given MS-PW. It can span several PW segments. 930 A PPSTME is intended to be deployed in scenarios where it is 931 preferable to monitor the behaviour of a part of a MS-PW rather than 932 the entire end-to-end PW itself, for example to monitor an MS-PW 933 Segment within a given network domain of an inter-domain MS-PW. 935 |<------------------- MS-PW1Z ------------------->| 936 | | 937 | |<-PSN13->| |<-PSN3X->| |<-PSNXZ->| | 938 V V LSP V V LSP V V LSP V V 939 +----+ +-+ +----+ +----+ +-+ +----+ 940 +----+ |TPE1| | | |SPE3| |SPEX| | | |TPEZ| +----+ 941 | |AC1| |=========| |=========| |=========| |AC2| | 942 | CE1|---|........PW13.......|...PW3X..|........PWXZ.......|---|CE2 | 943 | | | |=========| |=========| |=========| | | | 944 +----+ | 1 | |2| | 3 | | X | |Y| | Z | +----+ 945 +----+ +-+ +----+ +----+ +-+ +----+ 947 ^---- PW1 PPSTME----^ ^---- PW5 PPSTME----^ 948 ^---------------------PW1Z PME--------------------^ 950 Figure 6 MPLS-TP MS-PW Path Segment Tunnel Monitoring (PPSTME) 952 Figure 6 depicts the same MS-PW (MS-PW1Z) between AC1 and AC2 as in 953 Figure 5. In this scenario there are two separate PPSTMEs configured 954 to monitor MS-PW1Z: 1) a PPSTME monitoring the PW13 MS-PW Segment on 955 Domain 1 (PW13 PPSTME), and 2) a PPSTME monitoring the PWXZ MS-PW 956 Segment on Domain Z with (PWXZ PPSTME). 958 It is worth noticing that PPSTMEs can coexist with the PME monitoring 959 the end-to-end MS-PW and that PPSTME MEPs and PME MEPs can be 960 coincident in the same node (e.g. TPE1 node supports both the PW1Z 961 PME MEP and the PW13 PPSTME MEP). 963 5. OAM Functions for proactive monitoring 965 In this document, proactive monitoring refers to OAM operations that 966 are either configured to be carried out periodically and continuously 967 or preconfigured to act on certain events such as alarm signals. 969 Proactive monitoring is frequently "in service" monitoring. The 970 control and measurement implications are: 972 1. Proactive monitoring for a MEG is typically configured at 973 transport path creation time. 975 2. The operational characteristics of in-band measurement 976 transactions (e.g., CV, LM etc.) are configured at the MEPs. 978 3. Server layer events are reported by transactions originating at 979 intermediate nodes. 981 4. The measurements resulting from proactive monitoring are typically 982 only reported outside of the MEG as unsolicited notifications for 983 "out of profile" events, such as faults or loss measurement 984 indication of excessive impairment of information transfer 985 capability. 987 5. The measurements resulting from proactive monitoring may be 988 periodically harvested by an EMS/NMS. 990 5.1. Continuity Check and Connectivity Verification 992 Proactive Continuity Check functions, as required in section 2.2.2 of 993 [12], are used to detect a loss of continuity defect (LOC) between 994 two MEPs in an MEG. 996 Proactive Connectivity Verification functions, as required in section 997 2.2.3 of [12], are used to detect an unexpected connectivity defect 998 between two MEGs (e.g. mismerging or misconnection), as well as 999 unexpected connectivity within the MEG with an unexpected MEP. 1001 Both functions are based on the (proactive) generation of OAM packets 1002 by the source MEP that are processed by the sink MEP. 
As a 1003 consequence these two functions are grouped together into Continuity 1004 Check and Connectivity Verification (CC-V) OAM packets. 1006 In order to perform pro-active Connectivity Verification function, 1007 each CC-V OAM packet MUST also include a globally unique Source MEP 1008 identifier. When used to perform only pro-active Continuity Check 1009 function, the CC-V OAM packet MAY not include any globally unique 1010 Source MEP identifier. Different formats of MEP identifiers are 1011 defined in [10] to address different environments. When MPLS-TP is 1012 deployed in transport network environments where IP addressing is not 1013 used in the forwarding plane, the ICC-based format for MEP 1014 identification is used. When MPLS-TP is deployed in IP-based 1015 environment, the IP-based MEP identification is used. 1017 As a consequence, it is not possible to detect misconnections between 1018 two MEGs monitored only for continuity as neither the OAM message 1019 type nor OAM message content provides sufficient information to 1020 disambiguate an invalid source. To expand: 1022 o For CC leaking into a CC monitored MEG - undetectable 1024 o For CV leaking into a CC monitored MEG - presence of additional 1025 Source MEP identifier allows detecting the fault 1027 o For CC leaking into a CV monitored MEG - lack of additional Source 1028 MEP identifier allows detecting the fault. 1030 o For CV leaking into a CV monitored MEG - different Source MEP 1031 identifier permits fault to be identified. 1033 CC-V OAM packets MUST be transmitted at a regular, operator's 1034 configurable, rate. The default CC-V transmission periods are 1035 application dependent (see section 5.1.3). 1037 Proactive CC-V OAM packets are transmitted with the "minimum loss 1038 probability PHB" within a single network operator. This PHB is 1039 configurable on network operator's basis. PHBs can be translated at 1040 the network borders by the same function that translates it for user 1041 data traffic. The implication is that CC-V fate shares with much of 1042 the forwarding implementation, but not all aspects of PHB processing 1043 are exercised. On demand tools are used for finer grained fault 1044 finding. 1046 In a bidirectional point-to-point transport path, when a MEP is 1047 enabled to generate pro-active CC-V OAM packets with a configured 1048 transmission rate, it also expects to receive pro-active CC-V OAM 1049 packets from its peer MEP at the same transmission rate as a common 1050 SLA applies to all components of the transport path. In a 1051 unidirectional transport path (either point-to-point or point-to- 1052 multipoint), only the source MEP is enabled to generate CC-V OAM 1053 packets and only the sink MEP is configured to expect these packets 1054 at the configured rate. 1056 MIPs, as well as intermediate nodes not supporting MPLS-TP OAM, are 1057 transparent to the pro-active CC-V information and forward these pro- 1058 active CC-V OAM packets as regular data packets. 1060 It is desirable to not generate spurious alarms during initialization 1061 or tear down; hence the following procedures are recommended. At 1062 initialization, the MEP source function (generating pro-active CC-V 1063 packets) should be enabled prior to the corresponding MEP sink 1064 function (detecting continuity and connectivity defects). When 1065 disabling the CC-V proactive functionality, the MEP sink function 1066 should be disabled prior to the corresponding MEP source function. 1068 5.1.1. 
Defects identified by CC-V 1070 Pro-active CC-V functions allow a sink MEP to detect the defect 1071 conditions described in the following sub-sections. For all of the 1072 described defect cases, the sink MEP SHOULD notify the equipment 1073 fault management process of the detected defect. 1075 5.1.1.1. Loss Of Continuity defect 1077 When proactive CC-V is enabled, a sink MEP detects a loss of 1078 continuity (LOC) defect when it fails to receive pro-active CC-V OAM 1079 packets from the peer MEP. 1081 o Entry criteria: if no pro-active CC-V OAM packets from the peer 1082 MEP (i.e. with the correct globally unique Source MEP identifier) 1083 are received within the interval equal to 3.5 times the receiving 1084 MEP's configured CC-V reception period. 1086 o Exit criteria: a pro-active CC-V OAM packet from the peer MEP 1087 (i.e. with the correct globally unique Source MEP identifier) is 1088 received. 1090 5.1.1.2. Mis-connectivity defect 1092 When a pro-active CC-V OAM packet is received, a sink MEP identifies 1093 a mis-connectivity defect (e.g. mismerge, misconnection or unintended 1094 looping) with its peer source MEP when the received packet carries an 1095 incorrect globally unique Source MEP identifier. 1097 o Entry criteria: the sink MEP receives a pro-active CC-V OAM packet 1098 with an incorrect globally unique Source MEP identifier. 1100 o Exit criteria: the sink MEP does not receive any pro-active CC-V 1101 OAM packet with an incorrect globally unique Source MEP identifier 1102 for an interval equal at least to 3.5 times the longest 1103 transmission period of the pro-active CC-V OAM packets received 1104 with an incorrect globally unique Source MEP identifier since this 1105 defect has been raised. This requires the OAM message to self 1106 identify the CC-V periodicity as not all MEPs can be expected to 1107 have knowledge of all MEGs. 1109 5.1.1.3. Period Misconfiguration defect 1111 If pro-active CC-V OAM packets are received with a correct globally 1112 unique Source MEP identifier but with a transmission period different 1113 than the locally configured reception period, then a CV period mis- 1114 configuration defect is detected. 1116 o Entry criteria: a MEP receives a CC-V pro-active packet with 1117 correct globally unique Source MEP identifier but with a Period 1118 field value different than its own CC-V configured transmission 1119 period. 1121 o Exit criteria: the sink MEP does not receive any pro-active CC-V 1122 OAM packet with a correct globally unique Source MEP identifier 1123 and an incorrect transmission period for an interval equal at 1124 least to 3.5 times the longest transmission period of the pro- 1125 active CC-V OAM packets received with a correct globally unique 1126 Source MEP identifier and an incorrect transmission period since 1127 this defect has been raised. 1129 5.1.2. Consequent action 1131 A sink MEP that detects one of the defect conditions defined in 1132 section 5.1.1 MUST perform the following consequent actions. 1134 If a MEP detects an unexpected globally unique Source MEP Identifier, 1135 it MUST block all the traffic (including also the user data packets) 1136 that it receives from the misconnected transport path. 1138 If a MEP detects LOC defect that is not caused by a period 1139 mis-configuration, it SHOULD block all the traffic (including also 1140 the user data packets) that it receives from the transport path, if 1141 this consequent action has been enabled by the operator. 
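The defect entry criteria of section 5.1.1 and the consequent actions of this section can be summarized in the following illustrative, non-normative sketch of a CC-V sink MEP. The class and field names (expected_mep_id, configured_period, block_on_loc) are placeholders, and the exit criteria of section 5.1.1 are omitted for brevity.

   import time

   class CcvSink:
       """Illustrative sketch of a CC-V sink MEP (sections 5.1.1 and 5.1.2)."""

       def __init__(self, expected_mep_id, configured_period, block_on_loc=True):
           self.expected_mep_id = expected_mep_id  # peer Source MEP identifier
           self.period = configured_period         # CC-V reception period (s)
           self.block_on_loc = block_on_loc        # operator-enabled consequent action
           self.last_good_rx = time.monotonic()
           self.defects = set()
           self.signal_fail = False

       def on_ccv_packet(self, src_mep_id, period_field):
           if src_mep_id != self.expected_mep_id:
               # Mis-connectivity defect: block all traffic from this path
               self.defects.add("MISCONNECTIVITY")
               self.block_traffic()
           else:
               self.last_good_rx = time.monotonic()
               self.defects.discard("LOC")
               if period_field != self.period:
                   self.defects.add("PERIOD_MISCONFIG")
           self.update_signal_fail()

       def on_timer(self):
           # LOC entry criterion: no packet from the peer MEP within 3.5 periods
           if time.monotonic() - self.last_good_rx > 3.5 * self.period:
               self.defects.add("LOC")
               if self.block_on_loc:
                   self.block_traffic()
           self.update_signal_fail()

       def update_signal_fail(self):
           # Any of the three defects implies a signal fail condition
           self.signal_fail = bool(self.defects)

       def block_traffic(self):
           pass  # placeholder: drop user data and OAM from the misconnected path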
It is worth noticing that the OAM requirements document [12] recommends that CC-V proactive monitoring be enabled on every MEG in order to reliably detect connectivity defects. However, CC-V proactive monitoring MAY be disabled by an operator on an MEG. In the event of a misconnection between a transport path that is pro-actively monitored for CC-V and a transport path which is not, the MEP of the former transport path will detect a LOC defect representing a connectivity problem (e.g. a misconnection with a transport path where CC-V proactive monitoring is not enabled) instead of a continuity problem, with consequent incorrect traffic delivery. For these reasons, the traffic block consequent action is applied even when a LOC condition occurs. This block consequent action MAY be disabled through configuration. This deactivation of the block action may be used for activating or deactivating the monitoring when it is not possible to synchronize the function activation of the two peer MEPs.

If a MEP detects a LOC defect (section 5.1.1.1), a mis-connectivity defect (section 5.1.1.2) or a period misconfiguration defect (section 5.1.1.3), it MUST declare a signal fail condition at the transport path level.

5.1.3. Configuration considerations

At all MEPs inside a MEG, the following configuration information needs to be configured when a proactive CC-V function is enabled:

o MEG ID; the MEG identifier to which the MEP belongs;

o MEP-ID; the MEP's own identity inside the MEG;

o list of peer MEPs inside the MEG. For a point-to-point MEG the list consists of the single peer MEP ID from which the OAM packets are expected. In case of the root MEP of a p2mp MEG, the list is composed of all the leaf MEP IDs inside the MEG. In case of a leaf MEP of a p2mp MEG, the list is composed of the root MEP ID (i.e. each leaf MUST know the root MEP ID from which it expects to receive the CC-V OAM packets).

o PHB; it identifies the per-hop behaviour of the CC-V packets. Proactive CC-V packets are transmitted with the "minimum loss probability PHB" previously configured within a single network operator. This PHB is configurable on a network operator's basis. PHBs can be translated at the network borders.

o transmission rate; the default CC-V transmission periods are application dependent (depending on whether they are used to support fault management, performance monitoring, or protection switching applications):

o Fault Management: default transmission period is 1s (i.e. transmission rate of 1 packet/second).

o Performance Monitoring: default transmission period is 100ms (i.e. transmission rate of 10 packets/second). Performance monitoring is only relevant when the transport path is defect free. CC-V contributes to the accuracy of PM statistics by permitting the defect free periods to be properly distinguished.

o Protection Switching: default transmission period is 3.33ms (i.e. transmission rate of 300 packets/second); in order to achieve sub-50ms protection, the CC-V defect entry criteria should resolve in less than 10ms, so that the protection switch can be completed within a subsequent period of 50ms.

It SHOULD be possible for the operator to configure these transmission rates for all applications, to satisfy their internal requirements.
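As an informal illustration of the configuration parameters listed above, the following sketch groups them into a single structure together with the application-dependent default transmission periods. The field and dictionary names are illustrative only and do not imply any particular data model.

   from dataclasses import dataclass
   from typing import List, Optional

   # Default CC-V transmission periods (seconds), per application (section 5.1.3)
   DEFAULT_CCV_PERIOD = {
       "fault-management": 1.0,         # 1 packet/second
       "performance-monitoring": 0.1,   # 10 packets/second
       "protection-switching": 0.00333  # ~300 packets/second
   }

   @dataclass
   class CcvConfig:
       meg_id: str                       # MEG the MEP belongs to
       mep_id: str                       # MEP's own identity inside the MEG
       peer_mep_ids: List[str]           # single peer (p2p), all leaves or the root (p2mp)
       phb: str = "min-loss-probability" # PHB used for proactive CC-V packets
       application: str = "fault-management"
       period: Optional[float] = None    # operator may override the default

       def __post_init__(self):
           if self.period is None:
               self.period = DEFAULT_CCV_PERIOD[self.application]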
1213 Note that the reception period is the same as the configured 1214 transmission period. 1216 For statically provisioned transport paths the above information is 1217 statically configured; for dynamically established transport paths 1218 the configuration information is signaled via the control plane. 1220 The operator SHOULD be able to enable/disable some of the consequent 1221 actions defined in section 5.1.2. 1223 5.2. Remote Defect Indication 1225 The Remote Defect Indication (RDI) function, as required in section 1226 2.2.9 of [12], is an indicator that is transmitted by a MEP to 1227 communicate to its peer MEPs that a signal fail condition exists. 1228 RDI is only used for bidirectional connections and is associated with 1229 proactive CC-V activation. The RDI indicator is piggy-backed onto the 1230 CC-V packet. 1232 When a MEP detects a signal fail condition (e.g. in case of a 1233 continuity or connectivity defect), it should begin transmitting an 1234 RDI indicator to its peer MEP. The RDI information will be included 1235 in all pro-active CC-V packets that it generates for the duration of 1236 the signal fail condition's existence. 1238 A MEP that receives packets with RDI information should 1239 determine that its peer MEP has encountered a defect condition 1240 associated with a signal fail. 1242 MIPs, as well as intermediate nodes not supporting MPLS-TP OAM, are 1243 transparent to the RDI indicator and forward proactive CC-V 1244 packets that include the RDI indicator as regular data packets, i.e. 1245 the MIP should neither examine the indicator nor perform any action on it. 1247 When the signal fail defect condition clears, the MEP should clear 1248 the RDI indicator from subsequent transmissions of pro-active CC-V 1249 packets. A MEP should clear the RDI defect upon reception of a pro- 1250 active CC-V packet from the source MEP with the RDI indicator 1251 cleared. 1253 5.2.1. Configuration considerations 1255 The RDI indication may be carried either in a unique OAM message 1256 or in an OAM information element embedded in a CV message. In the latter case, 1257 the RDI transmission rate and the PHB of the OAM packets carrying RDI 1258 should be the same as those configured for CC-V. 1260 5.3. Alarm Reporting 1262 The Alarm Reporting function, as required in section 2.2.8 of [12], 1263 relies upon an Alarm Indication Signal (AIS) message used to suppress 1264 alarms following detection of defect conditions at the server 1265 (sub-)layer. 1267 o A server MEP that detects a signal fail condition in the server 1268 (sub-)layer will notify the MPLS-TP client (sub-)layer adaptation 1269 function, which can generate packets with AIS information in a 1270 direction opposite to its peer MEPs to allow the suppression of 1271 secondary alarms at the MEP in the client (sub-)layer. 1273 A server MEP is responsible for notifying the MPLS-TP layer network 1274 adaptation function upon fault detection in the server layer network 1275 to which the server MEP is associated. 1277 Only the client layer adaptation function at an intermediate node 1278 will issue MPLS-TP packets with AIS information. Upon receiving 1279 notification of a signal fail condition, the adaptation function 1280 SHOULD immediately start transmitting periodic packets with AIS 1281 information. These periodic packets, with AIS information, continue 1282 to be transmitted until the signal fail condition is cleared.
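The following non-normative sketch illustrates the adaptation function behaviour described above: periodic packets with AIS information are generated while the server-layer signal fail condition persists, and transmission stops when the condition clears. The class and method names are hypothetical, and the AIS transmission period shown is an arbitrary illustrative value; this framework does not mandate one here.

   # Non-normative sketch of the client (sub-)layer adaptation function
   # behaviour described above. Names and the AIS period are illustrative.

   import threading

   class ClientLayerAdaptation:
       def __init__(self, ais_period_s=1.0):
           self.ais_period_s = ais_period_s
           self._timer = None
           self.signal_fail = False

       def on_server_signal_fail(self):
           # Notification from the server MEP: start periodic AIS.
           self.signal_fail = True
           self._send_ais_and_reschedule()

       def on_server_signal_fail_cleared(self):
           # Signal fail condition cleared: stop AIS transmission.
           self.signal_fail = False
           if self._timer:
               self._timer.cancel()

       def _send_ais_and_reschedule(self):
           if not self.signal_fail:
               return
           self.send_packet_with_ais_information()
           self._timer = threading.Timer(self.ais_period_s,
                                         self._send_ais_and_reschedule)
           self._timer.start()

       def send_packet_with_ais_information(self): ...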
1284 Upon receiving a packet with AIS information an MPLS-TP MEP enters an 1285 AIS defect condition and suppresses loss of continuity alarms 1286 associated with its peer MEP. A MEP resumes loss of continuity alarm 1287 generation upon detecting loss of continuity defect conditions in the 1288 absence of an AIS condition. 1290 For example, consider a fiber cut between LSR 1 and LSR 2 in 1291 the reference network of Figure 3. Assuming that all the MEGs 1292 described in Figure 3 have pro-active CC-V enabled, a LOC defect is 1293 detected by the MEPs of Sec12 SME, PSN13 LME, PW13 PPSTME and PW1Z 1294 PME; however, in a transport network only the alarm associated with the 1295 fiber cut needs to be reported to the NMS, while all these secondary 1296 alarms should be suppressed (i.e. not reported to the NMS or reported 1297 as secondary alarms). 1299 If the fiber cut is detected by the MEP in the physical layer (in 1300 LSR2), LSR2 can generate the proper alarm in the physical layer and 1301 suppress the secondary alarm associated with the LOC defect detected 1302 on Sec12 SME. As both MEPs reside within the same node, this process 1303 does not involve any external protocol exchange. Otherwise, if the 1304 physical layer does not have sufficient OAM capabilities to detect the fiber 1305 cut, the MEP of Sec12 SME in LSR2 will report a LOC alarm. 1307 In both cases, the MEP of Sec12 SME in LSR 2 notifies the adaptation 1308 function for PSN13 LME, which then generates AIS packets on the PSN13 1309 LME in order to allow its MEP in LSR3 to suppress the LOC alarm. LSR3 1310 can also suppress the secondary alarm on PW13 PPSTME because the MEP 1311 of PW13 PPSTME resides within the same node as the MEP of PSN13 LME. 1312 The MEP of PW13 PPSTME in LSR3 also notifies the adaptation function 1313 for PW1Z PME, which then generates AIS packets on PW1Z PME in order to 1314 allow its MEP in LSRZ to suppress the LOC alarm. 1316 The generation of AIS packets for each MEG in the client (sub-)layer 1317 is configurable (i.e. the operator can enable/disable the AIS 1318 generation). 1320 AIS packets are transmitted with the "minimum loss probability PHB" 1321 within a single network operator. This PHB is configurable on a per network 1322 operator basis. 1324 A MIP is transparent to packets with AIS information and therefore 1325 does not require any information to support AIS functionality. 1327 5.4. Lock Reporting 1329 The Lock Reporting function, as required in section 2.2.7 of [12], 1330 relies upon a Locked Report (LKR) message used to suppress alarms 1331 following an administrative locking action in the server (sub-)layer. 1333 A server MEP is responsible for notifying the MPLS-TP layer network 1334 adaptation function upon a locked condition being applied to the server layer 1335 network to which the server MEP is associated. 1337 Only the client layer adaptation function at an intermediate node 1338 will issue MPLS-TP packets with LKR information. Upon receiving 1339 notification of a locked condition, the adaptation function SHOULD 1340 immediately start transmitting periodic packets with LKR information. 1341 These periodic packets, with LKR information, will continue to be 1342 transmitted until the locked condition is cleared. 1344 Upon receiving a packet with LKR information an MPLS-TP MEP enters an 1345 LKR defect condition and suppresses loss of continuity alarms 1346 associated with its peer MEP.
A MEP resumes loss of continuity alarm 1347 generation upon detecting loss of continuity defect conditions in the 1348 absence of an LKR condition. 1350 The generation of LKR packets is configurable in the server 1351 (sub-)layer (i.e. the operator can enable/disable the LKR 1352 generation). 1354 LKR packets are transmitted with the "minimum loss probability PHB" 1355 within a single network operator. This PHB is configurable on a per network 1356 operator basis. 1358 A MIP is transparent to packets with LKR information and therefore 1359 does not require any information to support LKR functionality. 1361 5.5. Packet Loss Measurement 1363 Packet Loss Measurement (LM) is one of the capabilities supported by 1364 the MPLS-TP Performance Monitoring (PM) function in order to 1365 facilitate reporting of QoS information for a transport path as 1366 required in section 2.2.11 of [12]. LM is used to exchange counter 1367 values for the number of ingress and egress packets transmitted and 1368 received by the transport path monitored by a pair of MEPs. 1370 Proactive LM is performed by periodically sending LM OAM packets from 1371 a MEP to a peer MEP and by receiving LM OAM packets from the peer MEP 1372 (if a bidirectional transport path) during the lifetime of the 1373 transport path. Each MEP performs measurements of its transmitted and 1374 received packets. These measurements are then transactionally 1375 correlated with the peer MEP in the ME to derive the impact of packet 1376 loss on a number of performance metrics for the ME in the MEG. The LM 1377 transactions are issued such that the OAM packets will experience the 1378 same queuing discipline as the measured traffic while transiting 1379 between the MEPs in the ME. 1381 For a MEP, near-end packet loss refers to packet loss associated with 1382 incoming data packets (from the far-end MEP) while far-end packet 1383 loss refers to packet loss associated with egress data packets 1384 (towards the far-end MEP). 1386 5.5.1. Configuration considerations 1388 In order to support proactive LM, the transmission rate and PHB 1389 associated with the LM OAM packets originating from a MEP need to be 1390 configured as part of the LM provisioning procedures. LM OAM packets 1391 should be transmitted with the same PHB class that the LM is intended 1392 to measure. If that PHB is not an ordered aggregate, i.e. if 1393 packets with that PHB are not all delivered in 1394 order, LM can produce inconsistent results. 1396 5.6. Client Failure Indication 1398 The Client Failure Indication (CSF) function, as required in section 1399 2.2.10 of [12], is used to help process client defects and propagate 1400 a client signal defect condition from the process associated with the 1401 local attachment circuit where the defect was detected (typically the 1402 source adaptation function for the local client interface) to the 1403 process associated with the far-end attachment circuit (typically the 1404 source adaptation function for the far-end client interface) for the 1405 same transmission path, in case the client of the transport path does 1406 not support a native defect/alarm indication mechanism, e.g. AIS. 1408 A source MEP starts transmitting a CSF indication to its peer MEP 1409 when it receives a local client signal defect notification via its 1410 local CSF function. Mechanisms to detect local client signal fail 1411 defects are technology specific.
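The following non-normative sketch illustrates the source MEP behaviour described in the previous paragraph: CSF indications are transmitted towards the peer MEP for as long as a local client signal defect persists. All names and the transmission period are illustrative only.

   # Non-normative sketch of CSF transmission at the source MEP.
   # Names and the CSF period are illustrative only.

   class SourceMepCsf:
       def __init__(self, csf_period_s=1.0):
           self.csf_period_s = csf_period_s
           self.client_defect = False

       def on_local_client_signal_defect(self, present):
           # Notification from the local CSF function; the detection
           # mechanism itself is client-technology specific.
           self.client_defect = present

       def periodic_tick(self):
           # Called every csf_period_s by the MEP's scheduler.
           if self.client_defect:
               self.send_csf_indication_to_peer()

       def send_csf_indication_to_peer(self): ...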
1413 A sink MEP that has received a CSF indication reports this condition 1414 to its associated client process via its local CSF function. 1415 Consequent actions toward the client attachment circuit are 1416 technology specific. 1418 Either there needs to be a 1:1 correspondence between the client and 1419 the MEG, or, when multiple clients are multiplexed over a transport 1420 path, the CSF message requires additional information to permit the 1421 client instance to be identified. 1423 5.6.1. Configuration considerations 1425 In order to support CSF indication, the CSF transmission rate and PHB 1426 of the CSF OAM message/information element should be configured as 1427 part of the CSF configuration. 1429 5.7. Packet Delay Measurement 1431 Packet Delay Measurement (DM) is one of the capabilities supported by 1432 the MPLS-TP PM function in order to facilitate reporting of QoS 1433 information for a transport path as required in section 2.2.12 of 1434 [12]. Specifically, pro-active DM is used to measure the long-term 1435 packet delay and packet delay variation in the transport path 1436 monitored by a pair of MEPs. 1438 Proactive DM is performed by sending periodic DM OAM packets from a 1439 MEP to a peer MEP and by receiving DM OAM packets from the peer MEP 1440 (if a bidirectional transport path) during a configurable time 1441 interval. 1443 Pro-active DM can be operated in two ways: 1445 o One-way: a MEP sends a DM OAM packet to its peer MEP containing all 1446 the required information to facilitate one-way packet delay and/or 1447 one-way packet delay variation measurements at the peer MEP. Note 1448 that this requires synchronized precision time at both MEPs by 1449 means outside the scope of this framework. 1451 o Two-way: a MEP sends a DM OAM packet with a DM request to its peer 1452 MEP, which replies with a DM OAM packet as a DM response. The 1453 request/response DM OAM packets contain all the required 1454 information to facilitate two-way packet delay and/or two-way 1455 packet delay variation measurements from the viewpoint of the 1456 source MEP. 1458 5.7.1. Configuration considerations 1460 In order to support pro-active DM, the transmission rate and PHB 1461 associated with the DM OAM packets originating from a MEP need to be 1462 configured as part of the DM provisioning procedures. DM OAM packets 1463 should be transmitted with the PHB that yields the lowest packet loss 1464 performance among the PHB Scheduling Classes or Ordered Aggregates 1465 (see RFC 3260 [15]) in the monitored transport path for the relevant 1466 network domain(s). 1468 6. OAM Functions for on-demand monitoring 1470 In contrast to proactive monitoring, on-demand monitoring is 1471 initiated manually and for a limited amount of time, usually for 1472 operations such as diagnostics to investigate a defect 1473 condition. 1475 On-demand monitoring covers a combination of "in-service" and "out-of- 1476 service" monitoring functions. The control and measurement 1477 implications are: 1479 1. A MEG can be directed to perform an "on-demand" function at 1480 arbitrary times in the lifetime of a transport path. 1482 2. "Out-of-service" monitoring functions may require a priori 1483 configuration of both MEPs and intermediate nodes in the MEG 1484 (e.g., data plane loopback) and the issuance of notifications into 1485 client layers of the transport path being removed from service 1486 (e.g., lock reporting). 1488 3.
The measurements resulting from on-demand monitoring are typically 1489 harvested in real time, as these are frequently craftsperson 1490 initiated and attended. These do not necessarily require different 1491 harvesting mechanisms than those used for harvesting proactive 1492 monitoring telemetry. 1494 6.1. Connectivity Verification 1496 In order to preserve network resources, e.g. bandwidth and processing 1497 time at switches, it may be preferable not to use proactive CC-V. In 1498 order to perform fault management functions, network management may 1499 invoke periodic bursts of on-demand CV packets, as required 1500 in section 2.2.3 of [12]. 1502 Use of on-demand CV is dependent on the existence of either a bi- 1503 directional MEG, or the availability of an out-of-band return path, 1504 because it requires the ability for target MIPs and MEPs to direct 1505 responses to the originating MEPs. 1507 An additional use of on-demand CV would be to detect and locate a 1508 connectivity problem when a problem is suspected or known based on 1509 other tools. In this case the functionality will be triggered by 1510 network management in response to a status signal or alarm 1511 indication. 1513 On-demand CV is based upon generation of on-demand CV packets that 1514 should uniquely identify the MEG that is being checked. The on- 1515 demand functionality may be used to check either an entire MEG (end- 1516 to-end) or the segment between a MEP and a specific MIP. This functionality may 1517 not be available for associated bidirectional transport paths, as the 1518 MIP may not have a return path to the source MEP for the on-demand CV 1519 transaction. 1521 On-demand CV may generate a one-time burst of on-demand CV packets, 1522 or be used to invoke periodic, non-continuous, bursts of on-demand CV 1523 packets. The number of packets generated in each burst is 1524 configurable at the MEPs, and should take into account normal packet- 1525 loss conditions. 1527 When invoking a periodic check of the MEG, the source MEP should 1528 issue a burst of on-demand CV packets that uniquely identifies the 1529 MEG being verified. The number of packets and their transmission 1530 rate should be pre-configured and known to both the source MEP and 1531 the target MEP or MIP. The source MEP should use the mechanisms 1532 defined in sections 3.3 and 3.4 when sending an on-demand CV packet 1533 to a target MEP or target MIP respectively. The target MEP/MIP shall 1534 return a reply on-demand CV packet for each packet received. If the 1535 expected number of on-demand CV reply packets is not received at the 1536 source MEP, the LOC defect state is entered. 1538 On-demand CV should have the ability to carry padding such that a 1539 variety of MTU sizes can be originated to verify the MTU capacity of 1540 the transport path. 1542 6.1.1. Configuration considerations 1544 For on-demand CV the MEP should support the configuration of the 1545 number of packets to be transmitted/received in each burst of 1546 transmissions and their packet size. The transmission rate should be 1547 configured between the different nodes. 1549 In addition, when the CV packet is used to check connectivity toward 1550 a target MIP, the number of hops to reach the target MIP should be 1551 configured. 1553 The PHB of the on-demand CV packets should be configured as well. 1554 This permits the verification of correct operation of QoS queuing as 1555 well as connectivity.
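The following non-normative sketch illustrates a single burst of on-demand CV packets as described above: the source MEP sends a pre-configured number of packets of a given size (optionally with a hop count when targeting a MIP) and declares the LOC defect state if the expected number of replies is not received. Function and parameter names are hypothetical.

   # Non-normative sketch of one on-demand CV burst at the source MEP.
   # send_cv() and wait_for_reply() stand in for the actual packet
   # transmission and reception machinery; all names are illustrative.

   def on_demand_cv_burst(send_cv, wait_for_reply, packets_per_burst,
                          packet_size, expected_replies=None, ttl=None):
       """Return True if the burst succeeded, False if the LOC defect
       state is entered.

       send_cv(size, ttl)  -- transmit one on-demand CV packet; a hop
                              count / TTL is only used when the target
                              is a MIP.
       wait_for_reply()    -- block until a reply on-demand CV packet
                              arrives or a timeout expires; True on a
                              valid reply.
       """
       if expected_replies is None:
           # The expected number of replies should take normal
           # packet-loss conditions into account.
           expected_replies = packets_per_burst
       replies = 0
       for _ in range(packets_per_burst):
           send_cv(packet_size, ttl)
           if wait_for_reply():
               replies += 1
       # If too few replies were received, the source MEP enters the
       # LOC defect state.
       return replies >= expected_replies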
1557 6.2. Packet Loss Measurement 1559 On-demand Packet Loss Measurement (LM) is one of the capabilities 1560 supported by the MPLS-TP Performance Monitoring function in order to 1561 facilitate diagnosis of QoS performance for a transport path, as 1562 required in section 2.2.11 of [12]. As with proactive LM, on-demand LM is 1563 used to exchange counter values for the number of ingress and egress 1564 packets transmitted and received by the transport path monitored by a 1565 pair of MEPs. 1567 On-demand LM is performed by periodically sending LM OAM packets from 1568 a MEP to a peer MEP and by receiving LM OAM packets from the peer MEP 1569 (if a bidirectional transport path) during a pre-defined monitoring 1570 period. Each MEP performs measurements of its transmitted and 1571 received packets. These measurements are then correlated to evaluate the 1572 packet loss performance metrics of the transport path. 1574 6.2.1. Configuration considerations 1576 In order to support on-demand LM, the beginning and duration of the 1577 LM procedures, the transmission rate and PHB associated with the LM 1578 OAM packets originating from a MEP must be configured as part of the 1579 on-demand LM provisioning procedures. LM OAM packets should be 1580 transmitted with the PHB that yields the lowest packet loss 1581 performance among the PHB Scheduling Classes or Ordered Aggregates 1582 (see RFC 3260 [15]) in the monitored transport path for the relevant 1583 network domain(s). 1585 6.3. Diagnostic Tests 1587 6.3.1. Throughput Estimation 1589 Throughput estimation is an on-demand out-of-service function, as 1590 required in section 2.2.5 of [12], that allows verifying the 1591 bandwidth/throughput of an MPLS-TP transport path (LSP or PW) before 1592 it is put in service. 1594 Throughput estimation is performed between MEPs and can be performed 1595 in one-way or two-way modes. 1597 This test is performed by sending OAM test packets at an increasing rate 1598 (up to the theoretical maximum), graphing the percentage of OAM test 1599 packets received, and reporting the rate at which OAM test packets 1600 start being dropped. In general, this rate is dependent on the OAM 1601 test packet size. 1603 When configured to perform such tests, a source MEP inserts OAM test 1604 packets carrying test information with a specified throughput, packet size 1605 and transmission pattern. 1607 For a one-way test, the remote sink MEP receives the OAM test packets and 1608 calculates the packet loss. For a two-way test, the remote MEP 1609 loops the OAM test packets back to the originating MEP, and the local 1610 sink MEP calculates the packet loss. 1612 6.3.1.1. Configuration considerations 1614 Throughput estimation is an out-of-service tool. The diagnosed MEG 1615 should be put into a Lock status before the diagnostic test is 1616 started. 1618 A MEG can be put into a Lock status either via NMS action or using 1619 the Lock Instruct OAM tool as defined in section 6.6. 1621 At the transmitting MEP, provisioning is required for a test signal 1622 generator, which is associated with the MEP. At a receiving MEP, 1623 provisioning is required for a test signal detector, which is 1624 associated with the MEP. 1626 A MIP is transparent to the OAM test packets sent for throughput 1627 estimation and therefore does not require any provisioning to support 1628 MPLS-TP throughput estimation.
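The following non-normative sketch illustrates the two-way throughput estimation procedure described above: OAM test packets are sent at increasing rates, and the reported estimate is the highest rate at which no test packets were dropped for the given packet size. The rate-stepping strategy and all names are illustrative only.

   # Non-normative sketch of two-way throughput estimation between MEPs.
   # send_test_burst() stands in for the actual test signal generator
   # and detector; all names are illustrative.

   def estimate_throughput(send_test_burst, rates_pps, packet_size,
                           packets_per_rate=1000):
       """send_test_burst(rate, size, count) returns the number of test
       packets received back by the local sink MEP (two-way mode)."""
       last_lossless_rate = 0
       results = []
       for rate in rates_pps:                    # increasing rates
           received = send_test_burst(rate, packet_size, packets_per_rate)
           loss_pct = 100.0 * (packets_per_rate - received) / packets_per_rate
           results.append((rate, loss_pct))
           if loss_pct == 0.0:
               last_lossless_rate = rate
           else:
               break                             # test packets start being dropped
       # The estimate is specific to the configured packet size.
       return last_lossless_rate, results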
1630 6.3.2. Data plane Loopback 1632 Data plane loopback is an out-of-service function, as required in 1633 section 2.2.5 of [12], that permits traffic originated at the ingress 1634 of a transport path to be looped back to the point of origin by an 1635 interface at either an intermediate node or a terminating node. 1637 If the loopback function is to be performed at an intermediate node, 1638 it is only applicable to co-routed bi-directional paths. If the 1639 loopback is to be performed end to end, it is applicable to both co- 1640 routed bi-directional and associated bi-directional paths. 1642 Where a node implements the data plane loopback capability, and whether it 1643 implements it at more than one point, is implementation dependent. 1645 6.4. Route Tracing 1647 It is often necessary to trace the route covered by a MEG from a 1648 source MEP to the sink MEP, including all the MIPs in-between, 1649 e.g. after provisioning an MPLS-TP transport path or for troubleshooting 1650 purposes. 1652 The route tracing function, as required in section 2.2.4 of [12], 1653 provides this functionality. Based on the fate-sharing requirement 1654 of OAM flows, i.e. OAM packets receive the same forwarding treatment 1655 as data packets, route tracing is a basic means to perform 1656 connectivity verification and, to a much lesser degree, continuity 1657 check. For this function to work properly, a return path must be 1658 present. 1660 Route tracing might be implemented in different ways and this 1661 document does not preclude any of them. 1663 Route tracing should always discover the full list of MIPs and of the 1664 peer MEPs. In case a defect exists, the route trace function needs to 1665 be able to detect it and stop automatically, returning the incomplete 1666 list of OAM entities that it was able to trace. 1668 6.4.1. Configuration considerations 1670 The configuration of the route trace function must at least support 1671 the setting of the number of trace attempts before it gives up. 1673 6.5. Packet Delay Measurement 1675 Packet Delay Measurement (DM) is one of the capabilities supported by 1676 the MPLS-TP PM function in order to facilitate reporting of QoS 1677 information for a transport path, as required in section 2.2.12 of 1678 [12]. Specifically, on-demand DM is used to measure packet delay and 1679 packet delay variation in the transport path monitored by a pair of 1680 MEPs during a pre-defined monitoring period. 1682 On-demand DM is performed by sending periodic DM OAM packets from a 1683 MEP to a peer MEP and by receiving DM OAM packets from the peer MEP 1684 (if a bidirectional transport path) during a configurable time 1685 interval. 1687 On-demand DM can be operated in two ways: 1689 o One-way: a MEP sends a DM OAM packet to its peer MEP containing all 1690 the required information to facilitate one-way packet delay and/or 1691 one-way packet delay variation measurements at the peer MEP. 1693 o Two-way: a MEP sends a DM OAM packet with a DM request to its peer 1694 MEP, which replies with a DM OAM packet as a DM response. The 1695 request/response DM OAM packets contain all the required 1696 information to facilitate two-way packet delay and/or two-way 1697 packet delay variation measurements from the viewpoint of the 1698 source MEP, as illustrated by the sketch below.
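The following non-normative sketch shows one common way for the source MEP to compute the two-way packet delay, and a simple packet delay variation, from DM request/response timestamps. This framework does not define the timestamp fields of DM OAM packets; the four-timestamp form and all names below are illustrative assumptions.

   # Non-normative sketch of two-way delay and delay variation
   # computation at the source MEP. Names are illustrative only.

   def two_way_delay(t1, t2, t3, t4):
       """t1: DM request transmitted by the source MEP
          t2: DM request received by the peer MEP
          t3: DM response transmitted by the peer MEP
          t4: DM response received by the source MEP
       The peer MEP's processing time (t3 - t2) is removed, so no clock
       synchronization between the two MEPs is required."""
       return (t4 - t1) - (t3 - t2)

   def delay_variation(delays):
       """Packet delay variation computed as the difference between
       consecutive two-way delay samples (one simple, illustrative
       definition)."""
       return [abs(b - a) for a, b in zip(delays, delays[1:])]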
1700 6.5.1. Configuration considerations 1702 In order to support on-demand DM, the beginning and duration of the 1703 DM procedures, the transmission rate and PHB associated with the DM 1704 OAM packets originating from a MEP need to be configured as part of the 1705 DM provisioning procedures. DM OAM packets should be transmitted with 1706 the PHB that yields the lowest packet delay performance among the PHB 1707 Scheduling Classes or Ordered Aggregates (see RFC 3260 [15]) in the 1708 monitored transport path for the relevant network domain(s). 1710 In order to verify different performance between long and short 1711 packets (e.g., due to the processing time), it SHOULD be possible for 1712 the operator to configure the packet size of the on-demand DM OAM packets. 1714 6.6. Lock Instruct 1716 The Lock Instruct (LKI) function, as required in section 2.2.6 of [12], 1717 is a command allowing a MEP to instruct the peer MEP(s) to put the 1718 MPLS-TP transport path into a locked condition. 1720 This function allows single-side provisioning for administratively 1721 locking (and unlocking) an MPLS-TP transport path. 1723 Note that it is also possible to administratively lock (and unlock) 1724 an MPLS-TP transport path using two-side provisioning, where the NMS 1725 administratively puts both MEPs into an administrative lock condition. 1726 In this case, the LKI function is not required/used. 1728 6.6.1. Locking a transport path 1730 A MEP, upon receiving a single-side administrative lock command from 1731 the NMS, sends an LKI request OAM packet to its peer MEP(s). It also puts 1732 the MPLS-TP transport path into a locked condition and notifies its client 1733 (sub-)layer adaptation function of the locked condition. 1735 A MEP, upon receiving an LKI request from its peer MEP, may accept or 1736 reject the instruction and MUST reply to the peer MEP with an LKI reply 1737 OAM packet indicating whether or not it has accepted the instruction. 1739 If the lock instruction has been accepted, it also puts the MPLS-TP 1740 transport path into a locked condition and notifies its client (sub-)layer 1741 adaptation function of the locked condition. 1743 Note that if the client (sub-)layer is also MPLS-TP, Lock Reporting 1744 (LKR) generation at the client MPLS-TP (sub-)layer is started, as 1745 described in section 5.4. 1747 6.6.2. Unlocking a transport path 1749 A MEP, upon receiving a single-side administrative unlock command 1750 from the NMS, sends an LKI removal request OAM packet to its peer MEP(s). 1752 The peer MEP, upon receiving an LKI removal request, may accept or 1753 reject the removal instruction and MUST reply with an LKI removal reply 1754 OAM packet indicating whether or not it has accepted the instruction. 1756 If the lock removal instruction has been accepted, it also clears the 1757 locked condition on the MPLS-TP transport path and notifies its client 1758 (sub-)layer adaptation function of this event. 1760 The MEP that has initiated the LKI clear procedure, upon receiving a 1761 positive LKI removal reply, also clears the locked condition on the 1762 MPLS-TP transport path and notifies its client 1763 (sub-)layer adaptation function of this event. 1765 Note that if the client (sub-)layer is also MPLS-TP, Lock Reporting 1766 (LKR) generation at the client MPLS-TP (sub-)layer is terminated, as 1767 described in section 5.4. 1769 7. Security Considerations 1771 A number of security considerations are important in the context of 1772 OAM applications.
1774 OAM traffic can reveal sensitive information such as passwords, 1775 performance data and details about e.g. the network topology. The 1776 nature of OAM data therefore suggests that some form of 1777 authentication, authorization and encryption should be in place. This will 1778 prevent unauthorized access to vital equipment and it will prevent 1779 third parties from learning sensitive information about the 1780 transport network. 1782 Mechanisms that the framework does not specify might be subject to 1783 additional security considerations. 1785 8. IANA Considerations 1787 No new IANA considerations. 1789 9. Acknowledgments 1791 The authors would like to thank all members of the teams (the Joint 1792 Working Team, the MPLS Interoperability Design Team in IETF and the 1793 T-MPLS Ad Hoc Group in ITU-T) involved in the definition and 1794 specification of MPLS Transport Profile. 1796 The editors gratefully acknowledge the contributions of Adrian 1797 Farrel, Yoshinori Koike and Luca Martini for the per-interface MIPs and 1798 MEPs description. 1800 The editors gratefully acknowledge the contributions of Malcolm 1801 Betts, Yoshinori Koike, Xiao Min, and Maarten Vissers for the lock 1802 report and lock instruction description. 1804 The authors would also like to thank Malcolm Betts, Stewart Bryant, 1805 Rui Costa, Adrian Farrel, Liu Gouman, Feng Huang, Yoshinori Koike, 1806 Yuji Tochio, Maarten Vissers and Xuequin Wei for their comments and 1807 enhancements to the text. 1809 This document was prepared using 2-Word-v2.0.template.dot. 1811 10. References 1813 10.1. Normative References 1815 [1] Bradner, S., "Key words for use in RFCs to Indicate Requirement 1816 Levels", BCP 14, RFC 2119, March 1997 1818 [2] Rosen, E., Viswanathan, A., Callon, R., "Multiprotocol Label 1819 Switching Architecture", RFC 3031, January 2001 1821 [3] Rosen, E., et al., "MPLS Label Stack Encoding", RFC 3032, 1822 January 2001 1824 [4] Agarwal, P., Akyol, B., "Time To Live (TTL) Processing in 1825 Multi-Protocol Label Switching (MPLS) Networks", RFC 3443, 1826 January 2003 1828 [5] Bryant, S., Pate, P., "Pseudo Wire Emulation Edge-to-Edge 1829 (PWE3) Architecture", RFC 3985, March 2005 1831 [6] Nadeau, T., Pignataro, S., "Pseudowire Virtual Circuit 1832 Connectivity Verification (VCCV): A Control Channel for 1833 Pseudowires", RFC 5085, December 2007 1835 [7] Bocci, M., Bryant, S., "An Architecture for Multi-Segment 1836 Pseudo Wire Emulation Edge-to-Edge", draft-ietf-pwe3-ms-pw- 1837 arch-05 (work in progress), September 2008 1839 [8] Bocci, M., et al., "A Framework for MPLS in Transport 1840 Networks", draft-ietf-mpls-tp-framework-10 (work in progress), 1841 February 2010 1843 [9] Vigoureux, M., Bocci, M., Swallow, G., Ward, D., Aggarwal, R., 1844 "MPLS Generic Associated Channel", RFC 5586, June 2009 1846 [10] Swallow, G., Bocci, M., "MPLS-TP Identifiers", draft-ietf-mpls- 1847 tp-identifiers-00 (work in progress), November 2009 1849 10.2.
Informative References 1851 [11] Niven-Jenkins, B., Brungard, D., Betts, M., Sprecher, N., Ueno, 1852 S., "MPLS-TP Requirements", RFC 5654, September 2009 1854 [12] Vigoureux, M., Betts, M., Ward, D., "Requirements for OAM in 1855 MPLS Transport Networks", draft-ietf-mpls-tp-oam-requirements- 1856 06 (work in progress), March 2010 1858 [13] Sprecher, N., Nadeau, T., van Helvoort, H., Weingarten, Y., 1859 "MPLS-TP OAM Analysis", draft-ietf-mpls-tp-oam-analysis-01 1860 (work in progress), March 2010 1862 [14] Nichols, K., Blake, S., Baker, F., Black, D., "Definition of 1863 the Differentiated Services Field (DS Field) in the IPv4 and 1864 IPv6 Headers", RFC 2474, December 1998 1866 [15] Grossman, D., "New terminology and clarifications for 1867 Diffserv", RFC 3260, April 2002 1869 [16] ITU-T Recommendation G.707/Y.1322 (01/07), "Network node 1870 interface for the synchronous digital hierarchy (SDH)", January 1871 2007 1873 [17] ITU-T Recommendation G.805 (03/00), "Generic functional 1874 architecture of transport networks", March 2000 1876 [18] ITU-T Recommendation G.806 (01/09), "Characteristics of 1877 transport equipment - Description methodology and generic 1878 functionality", January 2009 1880 [19] ITU-T Recommendation G.826 (12/02), "End-to-end error 1881 performance parameters and objectives for international, 1882 constant bit-rate digital paths and connections", December 2002 1884 [20] ITU-T Recommendation G.7710 (07/07), "Common equipment 1885 management function requirements", July 2007 1887 [21] ITU-T Recommendation Y.2611 (12/06), "High-level architecture 1888 of future packet-based networks", December 2006 1890 Authors' Addresses 1892 Dave Allan (Editor) 1893 Ericsson 1895 Email: david.i.allan@ericsson.com 1897 Italo Busi (Editor) 1898 Alcatel-Lucent 1900 Email: Italo.Busi@alcatel-lucent.com 1901 Ben Niven-Jenkins (Editor) 1902 BT 1904 Email: benjamin.niven-jenkins@bt.com 1906 Contributing Authors' Addresses 1908 Annamaria Fulignoli 1909 Ericsson 1911 Email: annamaria.fulignoli@ericsson.com 1913 Enrique Hernandez-Valencia 1914 Alcatel-Lucent 1916 Email: Enrique.Hernandez@alcatel-lucent.com 1918 Lieven Levrau 1919 Alcatel-Lucent 1921 Email: Lieven.Levrau@alcatel-lucent.com 1923 Dinesh Mohan 1924 Nortel 1926 Email: mohand@nortel.com 1928 Vincenzo Sestito 1929 Alcatel-Lucent 1931 Email: Vincenzo.Sestito@alcatel-lucent.com 1933 Nurit Sprecher 1934 Nokia Siemens Networks 1936 Email: nurit.sprecher@nsn.com 1937 Huub van Helvoort 1938 Huawei Technologies 1940 Email: hhelvoort@huawei.com 1942 Martin Vigoureux 1943 Alcatel-Lucent 1945 Email: Martin.Vigoureux@alcatel-lucent.com 1947 Yaacov Weingarten 1948 Nokia Siemens Networks 1950 Email: yaacov.weingarten@nsn.com 1952 Rolf Winter 1953 NEC 1955 Email: Rolf.Winter@nw.neclab.eu