MPLS Working Group                                         I. Busi (Ed)
Internet Draft                                           Alcatel-Lucent
Intended status: Informational                            D. Allan (Ed)
                                                               Ericsson
Expires: April 7, 2011                                  October 7, 2010

        Operations, Administration and Maintenance Framework
                  for MPLS-based Transport Networks
               draft-ietf-mpls-tp-oam-framework-09.txt

Abstract

   The Transport Profile of Multi-Protocol Label Switching (MPLS-TP)
   is a packet-based transport technology based on the MPLS Traffic
   Engineering (MPLS-TE) and Pseudowire (PW) data plane
   architectures.

   This document describes a framework to support a comprehensive
   set of Operations, Administration and Maintenance (OAM)
   procedures that fulfill the MPLS-TP OAM requirements for fault,
   performance and protection-switching management and that do not
   rely on the presence of a control plane.

   This document is a product of a joint Internet Engineering Task
   Force (IETF) / International Telecommunication Union
   Telecommunication Standardization Sector (ITU-T) effort to
   include an MPLS Transport Profile within the IETF MPLS and PWE3
   architectures to support the capabilities and functionalities of
   a packet transport network as defined by the ITU-T.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-
   Drafts as reference material or to cite them other than as "work
   in progress".

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on April 7, 2011.
Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided
   without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Contributing Authors
   2. Conventions used in this document
      2.1. Terminology
      2.2. Definitions
   3. Functional Components
      3.1. Maintenance Entity and Maintenance Entity Group
      3.2. Nested MEGs: SPMEs and Tandem Connection Monitoring
      3.3. MEG End Points (MEPs)
      3.4. MEG Intermediate Points (MIPs)
      3.5. Server MEPs
      3.6. Configuration Considerations
      3.7. P2MP considerations
      3.8. Further considerations of enhanced segment monitoring
   4. Reference Model
      4.1. MPLS-TP Section Monitoring (SME)
      4.2. MPLS-TP LSP End-to-End Monitoring (LME)
      4.3. MPLS-TP PW Monitoring (PME)
      4.4. MPLS-TP LSP SPME Monitoring (LSME)
      4.5. MPLS-TP MS-PW SPME Monitoring (PSME)
      4.6. Fate sharing considerations for multilink
   5. OAM Functions for proactive monitoring
      5.1. Continuity Check and Connectivity Verification
         5.1.1. Defects identified by CC-V
         5.1.2. Consequent action
         5.1.3. Configuration considerations
      5.2. Remote Defect Indication
         5.2.1. Configuration considerations
      5.3. Alarm Reporting
      5.4. Lock Reporting
      5.5. Packet Loss Measurement
         5.5.1. Configuration considerations
         5.5.2. Sampling skew
         5.5.3. Multilink issues
      5.6. Packet Delay Measurement
         5.6.1. Configuration considerations
      5.7. Client Failure Indication
         5.7.1. Configuration considerations
   6. OAM Functions for on-demand monitoring
      6.1. Connectivity Verification
         6.1.1. Configuration considerations
      6.2. Packet Loss Measurement
         6.2.1. Configuration considerations
         6.2.2. Sampling skew
         6.2.3. Multilink issues
      6.3. Diagnostic Tests
         6.3.1. Throughput Estimation
         6.3.2. Data plane Loopback
      6.4. Route Tracing
         6.4.1. Configuration considerations
      6.5. Packet Delay Measurement
         6.5.1. Configuration considerations
   7. OAM Functions for administration control
      7.1. Lock Instruct
         7.1.1. Locking a transport path
         7.1.2. Unlocking a transport path
   8. Security Considerations
   9. IANA Considerations
   10. Acknowledgments
   11. References
      11.1. Normative References
      11.2. Informative References

Editors' Note:

   This Informational Internet-Draft is aimed at achieving IETF
   Consensus before publication as an RFC and will be subject to an
   IETF Last Call.

   [RFC Editor, please remove this note before publication as an RFC
   and insert the correct Streams Boilerplate to indicate that the
   published RFC has IETF Consensus.]

1. Introduction

   As noted in the MPLS Transport Profile (MPLS-TP) Framework RFCs
   (RFC 5921 [8] and [9]), MPLS-TP is a packet-based transport
   technology based on the MPLS Traffic Engineering (MPLS-TE) and
   Pseudowire (PW) data plane architectures defined in RFC 3031 [1],
   RFC 3985 [2] and RFC 5659 [4].

   MPLS-TP supports a comprehensive set of Operations,
   Administration and Maintenance (OAM) procedures for fault,
   performance and protection-switching management that do not rely
   on the presence of a control plane.
   In line with [14], existing MPLS OAM mechanisms will be used
   wherever possible, and extensions or new OAM mechanisms will be
   defined only where existing mechanisms are not sufficient to meet
   the requirements.  Extensions do not deprecate support for
   existing MPLS OAM capabilities.

   The MPLS-TP OAM framework defined in this document provides a
   comprehensive set of OAM procedures that satisfy the MPLS-TP OAM
   requirements of RFC 5860 [11].  In this regard, it defines OAM
   functionality similar to that of existing SONET/SDH and OTN OAM
   mechanisms (e.g. [18]).

   The MPLS-TP OAM framework is applicable to sections, LSPs and
   (MS-)PWs and supports co-routed and associated bidirectional p2p
   transport paths as well as unidirectional p2p and p2mp transport
   paths.

   This document is a product of a joint Internet Engineering Task
   Force (IETF) / International Telecommunication Union
   Telecommunication Standardization Sector (ITU-T) effort to
   include an MPLS Transport Profile within the IETF MPLS and PWE3
   architectures to support the capabilities and functionalities of
   a packet transport network as defined by the ITU-T.

1.1. Contributing Authors

   Dave Allan, Italo Busi, Ben Niven-Jenkins, Annamaria Fulignoli,
   Enrique Hernandez-Valencia, Lieven Levrau, Vincenzo Sestito,
   Nurit Sprecher, Huub van Helvoort, Martin Vigoureux, Yaacov
   Weingarten, Rolf Winter

2. Conventions used in this document

2.1. Terminology

   AC     Attachment Circuit
   AIS    Alarm indication signal
   CV     Connectivity Verification
   DBN    Domain Border Node
   LER    Label Edge Router
   LKR    Lock Report
   LM     Loss Measurement
   LME    LSP Maintenance Entity
   LMEG   LSP ME Group
   LSP    Label Switched Path
   LSR    Label Switching Router
   LSME   LSP SPME ME
   LSMEG  LSP SPME ME Group
   ME     Maintenance Entity
   MEG    Maintenance Entity Group
   MEP    Maintenance Entity Group End Point
   MIP    Maintenance Entity Group Intermediate Point
   PHB    Per-hop Behavior
   PM     Performance Monitoring
   PME    PW Maintenance Entity
   PMEG   PW ME Group
   PSME   PW SPME ME
   PSMEG  PW SPME ME Group
   PW     Pseudowire
   SLA    Service Level Agreement
   SME    Section Maintenance Entity Group
   SPME   Sub-path Maintenance Element

2.2. Definitions

   This document uses the terms defined in RFC 5654 [5].

   This document uses the term 'Per-hop Behavior' as defined in RFC
   2474 [15].

   This document uses the term LSP to indicate either a service LSP
   or a transport LSP (as defined in RFC 5921 [8]).

   This document uses the term Sub Path Maintenance Entity (SPME) as
   defined in RFC 5921 [8].

   Where appropriate, the following definitions are aligned with
   ITU-T recommendation Y.1731 [20] in order to have a common,
   unambiguous terminology.  They do not, however, intend to imply a
   certain implementation but rather serve as a framework to
   describe the necessary OAM functions for MPLS-TP.

   Adaptation function: The adaptation function is the interface
   between the client (sub-)layer and the server (sub-)layer.

   Data plane loopback: An out-of-service test where a transport
   path at either an intermediate or terminating node is placed into
   a data plane loopback state, such that all traffic (including
   both payload and OAM) received on the looped back interface is
   sent on the reverse direction of the transport path.
   Note - The only way to send an OAM packet to a node that has been
   put into data plane loopback mode is via TTL expiry, irrespective
   of whether the node is hosting MIPs or MEPs.

   Domain Border Node (DBN): An intermediate node in an MPLS-TP LSP
   that is at the boundary between two MPLS-TP OAM domains.  Such a
   node may be present on the edge of two domains or may be
   connected by a link to the DBN at the edge of another OAM domain.

   Down MEP: A MEP that receives OAM packets from, and transmits
   them towards, the direction of a server layer.

   In-Service: The administrative status of a transport path when it
   is unlocked.

   Intermediate Node: An intermediate node transits traffic for an
   LSP or a PW.  An intermediate node may originate OAM flows
   directed to downstream intermediate nodes or MEPs.

   Loopback: See the data plane loopback and OAM loopback
   definitions.

   Maintenance Entity (ME): Some portion of a transport path that
   requires management, bounded by two points (called MEPs), and the
   relationship between those points to which maintenance and
   monitoring operations apply (details in section 3.1).

   Maintenance Entity Group (MEG): The set of one or more
   maintenance entities that maintain and monitor a section or a
   transport path in an OAM domain.

   MEP: A MEG end point (MEP) is capable of initiating (MEP Source)
   and terminating (MEP Sink) OAM messages for fault management and
   performance monitoring.  MEPs define the boundaries of an ME
   (details in section 3.3).

   MEP Source: A MEP acts as a MEP source for an OAM message when it
   originates and inserts the message into the transport path for
   its associated MEG.

   MEP Sink: A MEP acts as a MEP sink for an OAM message when it
   terminates and processes the messages received from its
   associated MEG.
   MIP: A MEG intermediate point (MIP) terminates and processes OAM
   messages that are sent to this particular MIP and may generate
   OAM messages in reaction to received OAM messages.  It never
   generates unsolicited OAM messages itself.  A MIP resides within
   a MEG between MEPs (details in section 3.4).

   MPLS-TP Section: As defined in [8], a link that can be traversed
   by one or more MPLS-TP LSPs.

   OAM domain: A domain, as defined in [5], whose entities are
   grouped for the purpose of keeping the OAM confined within that
   domain.  An OAM domain contains zero or more MEGs.

   Note - within the rest of this document, the term "domain" is
   used to indicate an "OAM domain".

   OAM flow: The set of all OAM messages originating with a specific
   MEP source that instrument one direction of a MEG (or possibly
   both in the special case of data plane loopback).

   OAM information element: An atomic piece of information exchanged
   between MEPs and/or MIPs in a MEG, used by an OAM application.

   OAM loopback: The capability of a node to be directed by a
   received OAM message to generate a reply back to the sender.  OAM
   loopback can work in-service and can support different OAM
   functions (e.g., bidirectional on-demand connectivity
   verification).

   OAM Message: One or more OAM information elements that, when
   exchanged between MEPs or between MEPs and MIPs, perform some OAM
   functionality (e.g. connectivity verification).

   OAM Packet: A packet that carries one or more OAM messages (i.e.
   OAM information elements).

   Out-of-Service: The administrative status of a transport path
   when it is locked.  When a path is in a locked condition, it is
   blocked from carrying client traffic.

   Path Segment: Either a segment or a concatenated segment, as
   defined in RFC 5654 [5].
   Signal Degrade: A condition declared by a MEP when the data
   forwarding capability associated with a transport path has
   deteriorated, as determined by performance monitoring (PM).  See
   also ITU-T recommendation G.806 [13].

   Signal Fail: A condition declared by a MEP when the data
   forwarding capability associated with a transport path has
   failed, e.g. loss of continuity.  See also ITU-T recommendation
   G.806 [13].

   Tandem Connection: A tandem connection is an arbitrary part of a
   transport path that can be monitored (via OAM) independently of
   the end-to-end monitoring (OAM).  The tandem connection may also
   include the forwarding engine(s) of the node(s) at the boundaries
   of the tandem connection.  Tandem connections may be nested but
   cannot overlap.  See also ITU-T recommendation G.805 [19].

   Up MEP: A MEP that transmits OAM packets towards, and receives
   them from, the direction of the forwarding engine.

3. Functional Components

   MPLS-TP is a packet-based transport technology based on the MPLS
   and PW data plane architectures ([1], [2] and [4]) and is capable
   of transporting service traffic where the characteristics of
   information transfer between the transport path endpoints can be
   demonstrated to comply with certain performance and quality
   guarantees.

   In order to describe the required OAM functionality, this
   document introduces a set of functional components.

3.1. Maintenance Entity and Maintenance Entity Group

   MPLS-TP OAM operates in the context of Maintenance Entities (MEs)
   that define a relationship between two points of a transport path
   to which maintenance and monitoring operations apply.  The
   collection of one or more MEs that belong to the same transport
   path and that are maintained and monitored as a group is known as
   a Maintenance Entity Group (MEG).
   The two points that define a maintenance entity are called
   Maintenance Entity Group (MEG) End Points (MEPs).  In between
   these two points there are zero or more intermediate points,
   called Maintenance Entity Group Intermediate Points (MIPs).  MEPs
   and MIPs are associated with the MEG and can be shared by more
   than one ME in a MEG.

   An abstract reference model for an ME is illustrated in Figure 1
   below:

           +-+    +-+    +-+    +-+
           |A|----|B|----|C|----|D|
           +-+    +-+    +-+    +-+

           Figure 1 ME Abstract Reference Model

   The instantiation of this abstract model to different MPLS-TP
   entities is described in section 4.  In Figure 1, nodes A and D
   can be LERs for an LSP or the T-PEs for an MS-PW, while nodes B
   and C are LSRs for an LSP or S-PEs for an MS-PW.  MEPs reside in
   nodes A and D while MIPs reside in nodes B and C and may reside
   in A and D.  The links connecting adjacent nodes can be physical
   links, (sub-)layer LSPs/SPMEs, or server layer paths.

   This functional model defines the relationships between all OAM
   entities from a maintenance perspective; it allows each
   Maintenance Entity to monitor and manage the (sub-)layer network
   under its responsibility and to localize problems efficiently.

   An MPLS-TP Maintenance Entity Group may be defined to monitor the
   transport path for fault and/or performance management.

   The MEPs that form a MEG bound the scope of an OAM flow to the
   MEG (i.e. within the domain of the transport path that is being
   monitored and managed).  There are two exceptions to this:

   1) A misbranching fault may cause OAM packets to be delivered to
      a MEP that is not in the MEG of origin.

   2) An out-of-band return path may be used between a MIP or a MEP
      and the originating MEP.

   In case of unidirectional point-to-point transport paths, a
   single unidirectional Maintenance Entity is defined to monitor
   it.
   In case of associated bi-directional point-to-point transport
   paths, two independent unidirectional Maintenance Entities are
   defined to independently monitor each direction.  This has
   implications for transactions that terminate at or query a MIP,
   as a return path from MIP to source MEP does not necessarily
   exist in the MEG.

   In case of co-routed bi-directional point-to-point transport
   paths, a single bidirectional Maintenance Entity is defined to
   monitor both directions congruently.

   In case of unidirectional point-to-multipoint transport paths, a
   single unidirectional Maintenance Entity for each leaf is defined
   to monitor the transport path from the root to that leaf.

   In all cases, portions of the transport path may be monitored by
   the instantiation of SPMEs (see section 3.2).

   The reference model for the p2mp MEG is represented in Figure 2.

                               +-+
                           /---|D|
                          /    +-+
                       +-+
                   /---|C|
        +-+    +-+/    +-+\    +-+
        |A|----|B|         \---|E|
        +-+    +-+\    +-+     +-+
                   \---|F|
                       +-+

           Figure 2 Reference Model for p2mp MEG

   In case of p2mp transport paths, the OAM measurements are
   independent for each ME (A-D, A-E and A-F):

   o  Fault conditions - some faults may impact more than one ME,
      depending on where the failure is located;

   o  Packet loss - packet dropping may impact more than one ME,
      depending on where the packets are lost;

   o  Packet delay - will be unique per ME.

   Each leaf (i.e. D, E and F) terminates OAM flows to monitor the
   ME between itself and the root, while the root (i.e. A) generates
   OAM messages common to all the MEs of the p2mp MEG.  All nodes
   may implement a MIP in the corresponding MEG.

3.2. Nested MEGs: SPMEs and Tandem Connection Monitoring

   In order to verify and maintain performance and quality
   guarantees, there is a need to apply OAM functionality not only
   at the granularity of the transport path (e.g. LSP or MS-PW), but
   also on arbitrary parts of transport paths, defined as Tandem
   Connections, between any two arbitrary points along a transport
   path.

   Sub-path Maintenance Elements (SPMEs), as defined in [8], are
   instantiated to provide monitoring of a portion of a set of co-
   routed transport paths (LSPs or MS-PWs).  The operational aspects
   of instantiating SPMEs are out of scope of this memo.

   SPMEs can also be employed to meet the requirement to provide
   tandem connection monitoring (TCM).

   TCM for a given path segment of a transport path is implemented
   by creating an SPME that has a 1:1 association with the path
   segment of the transport path that is to be monitored.

   In the TCM case, this means that the SPME used to provide TCM can
   carry one and only one transport path, thus allowing direct
   correlation between all fault management and performance
   monitoring information gathered for the SPME and the monitored
   path segment of the end-to-end transport path.  The SPME is
   monitored using normal LSP monitoring.

   There are a number of implications to this approach:

   1) The SPME would use the uniform model [22] of TC code point
      copying between sub-layers for diffserv, such that the E2E
      markings and PHB treatment for the transport path are
      preserved by the SPMEs.

   2) The SPME normally would use the short-pipe model for TTL
      handling [6], such that MIP addressing for the E2E entity
      would not be impacted by the presence of the SPME, but it
      should be possible for an operator to specify use of the
      uniform model.

   3) PM statistics need to be adjusted for the encapsulation
      overhead of the additional SPME sub-layer.

   Note that points 1 and 2 above assume that the TTL copying mode
   and TC copying modes are independently configurable for an LSP.

   There are specific issues with the use of the uniform model of
   TTL copying for an SPME:

   1. Any MIP in the SPME sub-layer is not part of the transport
      path MEG; hence only an out-of-band return path might be
      available for OAM originating in the transport path MEG that
      addresses an SPME MIP.

   2. The instantiation of a lower level MEG or protection switching
      actions within a lower level MEG may change the TTL distances
      to MIPs in the higher level MEGs.

   The endpoints of the SPME are MEPs and limit the scope of an OAM
   flow within the MEG that the MEPs belong to (i.e. within the
   domain of the SPME that is being monitored and managed).

   When considering SPMEs, it is important to consider that the
   following properties apply to all MPLS-TP MEGs:

   o  They can be nested but not overlapped, e.g. a MEG may cover a
      segment or a concatenated segment of another MEG, and may also
      include the forwarding engine(s) of the node(s) at the edge(s)
      of the segment or concatenated segment.  However, when MEGs
      are nested, the MEPs and MIPs in the nested MEG are no longer
      part of the encompassing MEG.

   o  It is possible that MEPs of nested MEGs reside on a single
      node, but again implemented in such a way that they do not
      overlap.

   o  Each OAM flow is associated with a single MEG.

   o  OAM packets that instrument a particular direction of a
      transport path are subject to the same forwarding treatment
      (i.e. fate share) as the data traffic and in some cases may be
      required to have a common queuing discipline E2E with the
      class of traffic monitored.  OAM packets can be distinguished
      from the data traffic using the GAL and ACH constructs [7] for
      LSPs and Sections, or the ACH construct [3] and [7] for
      (MS-)PWs.

   o  When an SPME is instantiated after the transport path has been
      instantiated, the TTL addressing of the MIPs will change for
      the pipe model of TTL copying, and will change for the uniform
      model if the SPME is not co-routed with the original path.

3.3. MEG End Points (MEPs)

   MEG End Points (MEPs) are the source and sink points of a MEG.
   In the context of an MPLS-TP LSP, only LERs can implement MEPs,
   while in the context of an SPME, LSRs for the MPLS-TP LSP can be
   LERs for SPMEs that contribute to the overall monitoring
   infrastructure for the transport path.  Regarding PWs, only T-PEs
   can implement MEPs, while for SPMEs supporting one or more PWs
   both T-PEs and S-PEs can implement SPME MEPs.  Any MPLS-TP LSR
   can implement a MEP for an MPLS-TP Section.

   MEPs are responsible for activating and controlling all of the
   proactive and on-demand monitoring OAM functionality for the MEG.
   There is a separate class of notifications (such as Lock Report
   (LKR) and Alarm Indication Signal (AIS)) that are originated by
   intermediate nodes and triggered by server layer events.  A MEP
   is capable of originating and terminating OAM messages for fault
   management and performance monitoring.  These OAM messages are
   encapsulated into an OAM packet using the G-ACh with an
   appropriate channel type as defined in RFC 5586 [7].  A MEP
   terminates all the OAM packets it receives from the MEG it
   belongs to and silently discards those that belong to another MEG
   (note that in the particular case of Connectivity Verification
   (CV), processing a CV message from an incorrect MEG results in a
   mis-connectivity defect and further actions are taken).  The MEG
   an OAM packet belongs to is inferred from the MPLS or PW label
   or, in case of an MPLS-TP section, from the port on which the OAM
   packet was received with the GAL at the top of the label stack.

   OAM packets may require the use of an available "out-of-band"
   return path (as defined in [8]).  In such cases, sufficient
   information is required in the originating transaction such that
   the OAM reply packet can be constructed (e.g. an IP address).
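As an illustration only (not normative text), the MEP sink disposition rules above — terminate OAM from the local MEG, raise a mis-connectivity defect on a CV message from a foreign MEG, and silently discard any other foreign OAM — can be sketched as follows. All class and field names here are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class OamPacket:
    meg_id: str    # MEG inferred from the MPLS/PW label (or port + GAL)
    msg_type: str  # e.g. "CC", "CV", "LM", "DM"

class MepSink:
    """Illustrative model of how a MEP sink disposes of received OAM."""

    def __init__(self, meg_id: str):
        self.meg_id = meg_id
        self.misconnectivity_defect = False

    def receive(self, pkt: OamPacket) -> str:
        if pkt.meg_id == self.meg_id:
            return f"terminated: {pkt.msg_type}"  # process locally
        if pkt.msg_type == "CV":
            # A CV message from an incorrect MEG reveals mis-connectivity.
            self.misconnectivity_defect = True
            return "mis-connectivity defect"
        return "silently discarded"               # all other foreign OAM

mep = MepSink(meg_id="MEG-1")
assert mep.receive(OamPacket("MEG-1", "CC")) == "terminated: CC"
assert mep.receive(OamPacket("MEG-9", "LM")) == "silently discarded"
assert mep.receive(OamPacket("MEG-9", "CV")) == "mis-connectivity defect"
assert mep.misconnectivity_defect
```

The sketch deliberately omits the further consequent actions (e.g. RDI) that the framework attaches to a mis-connectivity defect in later sections.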
   Each OAM solution will further detail its applicability as a pro-
   active or on-demand mechanism as well as its usage when:

   o  The "in-band" return path exists and it is used;

   o  An "out-of-band" return path exists and it is used;

   o  No return path exists or none is used.

   Once a MEG is configured, the operator can configure which
   proactive OAM functions to use on the MEG, but the MEPs are
   always enabled.  A node at the edge of a MEG always supports a
   MEP.

   MEPs terminate all OAM packets received from the associated MEG.
   As the MEP corresponds to the termination of the forwarding path
   for a MEG at the given (sub-)layer, OAM packets never leak
   outside of a MEG in a properly configured fault-free
   implementation.

   A MEP of an MPLS-TP transport path coincides with the transport
   path termination and monitors it for failures or performance
   degradation (e.g. based on packet counts) in an end-to-end scope.
   Note that both the MEP source and MEP sink coincide with the
   transport path's source and sink terminations.

   The MEPs of an SPME are not necessarily coincident with the
   termination of the MPLS-TP transport path.  They are used to
   monitor a path segment of the transport path for failures or
   performance degradation (e.g. based on packet counts) only within
   the boundary of the MEG for the SPME.

   An MPLS-TP MEP sink passes a fault indication to its client
   (sub-)layer network as a consequent action of fault detection.

   A node at the edge of a MEG can support either a per-node MEP or
   per-interface MEP(s).  A per-node MEP resides in an unspecified
   location within the node, while a per-interface MEP resides on a
   specific side of the forwarding engine.  In particular, a per-
   interface MEP is called an "Up MEP" or a "Down MEP" depending on
   whether its location is upstream or downstream relative to the
   forwarding engine.
       Source node Up MEP               Destination node Up MEP
      ------------------------         ------------------------
      |                      |         |                      |
      |-----            -----|         |-----            -----|
      | MEP |          |     |         |     |          | MEP |
      |     |   ----   |     |         |     |   ----   |     |
      | In  |->-| FW |->-| Out|->-  ->-| In  |->-| FW |->-| Out|
      | i/f |   ----   | i/f |         | i/f |   ----   | i/f |
      |-----            -----|         |-----            -----|
      |                      |         |                      |
      ------------------------         ------------------------
                (1)                              (2)

       Source node Down MEP             Destination node Down MEP
      ------------------------         ------------------------
      |                      |         |                      |
      |-----            -----|         |-----            -----|
      |     |          | MEP |         | MEP |          |     |
      |     |   ----   |     |         |     |   ----   |     |
      | In  |->-| FW |->-| Out|->-  ->-| In  |->-| FW |->-| Out|
      | i/f |   ----   | i/f |         | i/f |   ----   | i/f |
      |-----            -----|         |-----            -----|
      |                      |         |                      |
      ------------------------         ------------------------
                (3)                              (4)

           Figure 3 Examples of per-interface MEPs

   Figure 3 describes four examples of per-interface MEPs: an Up
   Source MEP in a source node (case 1), an Up Sink MEP in a
   destination node (case 2), a Down Source MEP in a source node
   (case 3) and a Down Sink MEP in a destination node (case 4).

   The usage of per-interface Up MEPs extends the coverage of the ME
   for both fault and performance monitoring closer to the edge of
   the domain and allows a failure or performance degradation to be
   isolated as being within a node, or within either the link or the
   interfaces.

   Each OAM solution will further detail the implications when used
   with per-interface or per-node MEPs, if necessary.

   It may occur that the Up MEPs of an SPME are set on both sides of
   the forwarding engine such that the MEG is entirely internal to
   the node.

   It should be noted that an ME may span nodes that implement per-
   node MEPs and per-interface MEPs.
This guarantees backward 721 compatibility with most of the existing LSRs, which can implement 722 only a per-node MEP: in current implementations, label 723 operations are largely performed on the ingress interface, hence 724 the exposure of the GAL as the top label will occur at the ingress 725 interface. 727 Note that a MEP can only exist at the beginning and end of a 728 (sub-)layer in MPLS-TP. If there is a need to monitor some 729 portion of that LSP or PW, a new sub-layer in the form of an 730 SPME is created, which permits MEPs and associated MEGs to be 731 created. 733 In the case where an intermediate node sends a message to a MEP, 734 it uses the top label of the stack at that point. 736 3.4. MEG Intermediate Points (MIPs) 738 A MEG Intermediate Point (MIP) is a function located at a point 739 between the MEPs of a MEG for a PW, LSP or SPME. 741 A MIP is capable of reacting to some OAM packets and forwarding all 742 the other OAM packets while ensuring fate sharing with data plane 743 packets. However, a MIP does not initiate unsolicited OAM packets, 744 but may be addressed by OAM packets initiated by one of the MEPs of 745 the MEG. A MIP can generate OAM packets only in response to OAM 746 packets that it receives from the MEG it belongs to. The OAM messages 747 generated by the MIP are sent in the direction of the source MEP and 748 not forwarded to the sink MEP. 750 An intermediate node within a MEG can either: 752 o Support a per-node MIP (i.e. a single MIP per node in an 753 unspecified location within the node); 755 o Support per-interface MIPs (i.e. two or more MIPs per node, on 756 both sides of the forwarding engine).
758 Intermediate node 759 ------------------------ 760 | | 761 |----- -----| 762 | MIP | | MIP | 763 | | ---- | | 764 ->-| In |->-| FW |->-| Out |->- 765 | i/f | ---- | i/f | 766 |----- -----| 767 | | 768 ------------------------ 769 Figure 4 Example of per-interface MIPs 771 Figure 4 describes an example of two per-interface MIPs at an 772 intermediate node of a point-to-point MEG. 774 The usage of per-interface MIPs allows failures or performance 775 degradation to be isolated as being within a node, or within the 776 link or interfaces. 778 When sending an OAM packet to a MIP, the source MEP should set 779 the TTL field to indicate the number of hops necessary to reach 780 the node where the MIP resides. 782 The source MEP should also include Target MIP information in the 783 OAM packets sent to a MIP to allow proper identification of the 784 MIP within the node. The MEG the OAM packet is associated with 785 is inferred from the MPLS label. 787 A node at the edge of a MEG can also support per-interface Up 788 MEPs and per-interface MIPs on either side of the forwarding 789 engine. 791 Once a MEG is configured, the operator can enable/disable the 792 MIPs on the nodes within the MEG. All the intermediate nodes and 793 possibly the end nodes host MIP(s). Local policy allows them to 794 be enabled per function and per MEG. The local policy is 795 controlled by the management system, which may delegate it to 796 the control plane. 798 3.5. Server MEPs 800 A server MEP is a MEP of a MEG that is either: 802 o Defined in a layer network that is "below", which is to say 803 encapsulates and transports the MPLS-TP layer network being 804 referenced, or 806 o Defined in a sub-layer of the MPLS-TP layer network that is 807 "below", which is to say encapsulates and transports the 808 sub-layer being referenced. 810 A server MEP can coincide with a MIP or a MEP in the client 811 (MPLS-TP) (sub-)layer network.
813 A server MEP also provides server layer OAM indications to the 814 client/server adaptation function between the client (MPLS-TP) 815 (sub-)layer network and the server (sub-)layer network. The 816 adaptation function maintains state on the mapping of MPLS-TP 817 transport paths that are setup over that server (sub-)layer's 818 transport path. 820 For example, a server MEP can be either: 822 o A termination point of a physical link (e.g. 802.3), an SDH 823 VC or OTN ODU, for the MPLS-TP Section layer network, defined 824 in section 4.1; 826 o An MPLS-TP Section MEP for MPLS-TP LSPs, defined in section 827 4.2; 829 o An MPLS-TP LSP MEP for MPLS-TP PWs, defined in section 4.3; 831 o An MPLS-TP SPME MEP used for LSP path segment monitoring, as 832 defined in section 4.4, for MPLS-TP LSPs or higher-level 833 SPMEs providing LSP path segment monitoring; 835 o An MPLS-TP SPME MEP used for PW path segment monitoring, as 836 defined in section 4.5, for MPLS-TP PWs or higher-level SPMEs 837 providing PW path segment monitoring. 839 The server MEP can run appropriate OAM functions for fault detection 840 within the server (sub-)layer network, and provides a fault 841 indication to its client MPLS-TP layer network via the client/server 842 adaptation function. When the server layer is not MPLS-TP, server MEP 843 OAM functions are outside the scope of this document. 845 3.6. Configuration Considerations 847 When a control plane is not present, the management plane configures 848 these functional components. Otherwise they can be configured either 849 by the management plane or by the control plane. 851 Local policy allows disabling the usage of any available "out- 852 of-band" return path, as defined in [8], irrespective of what is 853 requested by the node originating the OAM packet. 855 SPMEs are usually instantiated when the transport path is 856 created by either the management plane or by the control plane 857 (if present). 
Sometimes an SPME can be instantiated after the 858 transport path is initially created. 860 3.7. P2MP considerations 862 All the traffic sent over a p2mp transport path, including OAM 863 packets generated by a MEP, is sent (multicast) from the root to 864 all the leaves. As a consequence: 866 o To send an OAM packet to all leaves, the source MEP can 867 send a single OAM packet that will be delivered by the 868 forwarding plane to all the leaves and processed by all the 869 leaves. Hence a single OAM packet can simultaneously 870 instrument all the MEs in a p2mp MEG. 872 o To send an OAM packet to a single leaf, the source MEP 873 sends a single OAM packet that will be delivered by the 874 forwarding plane to all the leaves but contains sufficient 875 information to identify a target leaf, and therefore is 876 processed only by the target leaf and ignored by the other 877 leaves. 879 o To send an OAM packet to a single MIP, the source MEP sends 880 a single OAM packet with the TTL field indicating the 881 number of hops necessary to reach the node where the MIP 882 resides. This packet will be delivered by the forwarding 883 plane to all intermediate nodes at the same TTL distance as 884 the target MIP and to any leaf that is located at a shorter 885 distance. The OAM message must contain sufficient 886 information to identify the target MIP and therefore is 887 processed only by the target MIP. 889 o In order to send an OAM packet to M leaves (i.e., a subset 890 of all the leaves), the source MEP sends M different OAM 891 packets, one targeted to each individual leaf in the group of M 892 leaves. Aggregation or subsetting mechanisms are outside 893 the scope of this document. 895 P2MP paths are unidirectional; therefore any return path to a 896 source MEP for on-demand transactions will be out-of-band.
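As an informal sketch of the targeting rules above (the packet fields and identifiers are hypothetical; no OAM packet format is implied by this framework):

```python
def leaf_should_process(packet: dict, my_leaf_id: str) -> bool:
    # A p2mp OAM packet reaches every leaf; a leaf processes it only
    # if it is addressed to all leaves or names this leaf as target.
    target = packet.get("target_leaf")  # None means "all leaves"
    return target is None or target == my_leaf_id

def mip_should_process(ttl_after_decrement: int,
                       target_mip: str, my_mip_id: str) -> bool:
    # TTL exhaustion delivers the packet to every node at the same
    # TTL distance; the Target MIP information selects the responder.
    return ttl_after_decrement == 0 and target_mip == my_mip_id
```

Note that, as the text says, a packet targeted at one leaf is still delivered to all leaves; the non-target leaves simply ignore it.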
A 897 mechanism to scope the set of MEPs or MIPs expected to respond 898 to a given "on-demand" transaction is useful, as it relieves the 899 source MEP of the requirement to filter and discard undesired 900 responses: normally, TTL exhaustion will address all MIPs at a 901 given distance from the source, and failure to exhaust the TTL will 902 address all MEPs. 904 3.8. Further considerations of enhanced segment monitoring 906 Segment monitoring in a transport network should meet the 907 following network objectives: 909 1. The monitoring and maintenance of existing transport paths has to 910 be conducted in service without traffic disruption. 912 2. The monitored or managed transport path condition has to be 913 exactly the same irrespective of any configurations necessary for 914 maintenance. 916 SPMEs defined in section 3.2 meet the above two objectives, when 917 they are pre-configured or pre-instantiated as exemplified in 918 section 3.6. However, pre-design and pre-configuration of all 919 the considered SPME patterns are sometimes not preferable in 920 real operation, due to the burden of the design work, the number 921 of MPLS headers consumed, bandwidth consumption and so on. 923 When SPMEs are configured or instantiated after the transport 924 path has been created, network objective (1) can be met, but 925 network objective (2) cannot be met due to new assignment of 926 MPLS labels. 928 Support for a more sophisticated segment monitoring mechanism 929 (temporal and hitless segment monitoring) to efficiently meet 930 the two network objectives may be necessary. 932 4. Reference Model 934 The reference model for the MPLS-TP framework builds upon the 935 concept of a MEG, and its associated MEPs and MIPs, to support 936 the functional requirements specified in RFC 5860 [11]. 938 The following MPLS-TP MEGs are specified in this document: 940 o A Section Maintenance Entity Group (SME), allowing monitoring 941 and management of MPLS-TP Sections (between MPLS LSRs).
943 o An LSP Maintenance Entity Group (LME), allowing monitoring 944 and management of an end-to-end LSP (between LERs). 946 o A PW Maintenance Entity Group (PME), allowing monitoring and 947 management of an end-to-end SS/MS-PWs (between T-PEs). 949 o An LSP SPME ME Group (LSMEG), allowing monitoring and 950 management of an SPME (between a given pair of LERs and/or 951 LSRs along an LSP). 953 o A PW SPME ME Group (PSMEG), allowing monitoring and 954 management of an SPME (between a given pair of T-PEs and/or 955 S-PEs along an (MS-)PW). 957 The MEGs specified in this MPLS-TP framework are compliant with 958 the architecture framework for MPLS-TP MS-PWs [4] and LSPs [1]. 960 Hierarchical LSPs are also supported in the form of SPMEs. In 961 this case, each LSP in the hierarchy is a different sub-layer 962 network that can be monitored, independently from higher and 963 lower level LSPs in the hierarchy, on an end-to-end basis (from 964 LER to LER) by a SPME. It is possible to monitor a portion of a 965 hierarchical LSP by instantiating a hierarchical SPME between 966 any LERs/LSRs along the hierarchical LSP. 968 Native |<------------------ MS-PW1Z ---------------->| Native 969 Layer | | Layer 970 Service | || |<-LSP3X->| || | Service 971 (AC1) V V LSP V V LSP V V LSP V V (AC2) 972 +----+ +-+ +----+ +----+ +-+ +----+ 973 +----+ |TPE1| | | |SPE3| |SPEX| | | |TPEZ| +----+ 974 | | | |=======| |=========| |=======| | | | 975 | CE1|--|.......PW13......|...PW3X..|......PWXZ.......|---|CE2 | 976 | | | |=======| |=========| |=======| | | | 977 +----+ | 1 | |2| | 3 | | X | |Y| | Z | +----+ 978 +----+ +-+ +----+ +----+ +-+ +----+ 979 . . . . 
980 | | | | 981 |<--- Domain 1 -->| |<--- Domain Z -->| 982 ^----------------- PW1Z PME -----------------^ 983 ^--- PW13 PSME ---^ ^--- PWXZ PSME ---^ 984 ^-------^ ^-------^ 985 LSP13 LME LSPXZ LME 986 ^--^ ^--^ ^---------^ ^--^ ^--^ 987 Sec12 Sec23 Sec3X SecXY SecYZ 988 SME SME SME SME SME 990 TPE1: Terminating Provider Edge 1 SPE3: Switching Provider Edge 991 3 992 TPEZ: Terminating Provider Edge Z SPEX: Switching Provider Edge 993 X 995 ^---^ ME ^ MEP ==== LSP .... PW 997 Figure 5 Reference Model for the MPLS-TP OAM Framework 999 Figure 5 depicts a high-level reference model for the MPLS-TP 1000 OAM framework. The figure depicts portions of two MPLS-TP 1001 enabled network domains, Domain 1 and Domain Z. In Domain 1, 1002 LSR1 is adjacent to LSR2 via the MPLS-TP Section Sec12 and LSR2 1003 is adjacent to LSR3 via the MPLS-TP Section Sec23. Similarly, in 1004 Domain Z, LSRX is adjacent to LSRY via the MPLS-TP Section SecXY 1005 and LSRY is adjacent to LSRZ via the MPLS-TP Section SecYZ. In 1006 addition, LSR3 is adjacent to LSRX via the MPLS-TP Section 3X. 1008 Figure 5 also shows a bi-directional MS-PW (PW1Z) between AC1 on 1009 TPE1 and AC2 on TPEZ. The MS-PW consists of three bi-directional 1010 PW path segments: 1) PW13 path segment between T-PE1 and S-PE3 1011 via the bi-directional LSP13 LSP, 2) PW3X path segment between 1012 S-PE3 and S-PEX, via the bi-directional LSP3X LSP, and 3) PWXZ 1013 path segment between S-PEX and T-PEZ via the bi-directional 1014 LSPXZ LSP. 1016 The MPLS-TP OAM procedures that apply to a MEG are expected to 1017 operate independently from procedures on other MEGs. Yet, this 1018 does not preclude that multiple MEGs may be affected 1019 simultaneously by the same network condition, for example, a 1020 fiber cut event. 1022 Note that there are no constraints imposed by this OAM framework 1023 on the number, or type (p2p, p2mp, LSP or PW), of MEGs that may 1024 be instantiated on a particular node.
In particular, when 1025 looking at Figure 5, it should be possible to configure one or 1026 more MEPs on the same node if that node is the endpoint of one 1027 or more MEGs. 1029 Figure 5 does not describe a PW3X PSME because typically SPMEs 1030 are used to monitor an OAM domain (like PW13 and PWXZ PSMEs) 1031 rather than the segment between two OAM domains. However the OAM 1032 framework does not pose any constraints on the way SPMEs are 1033 instantiated as long as they are not overlapping. 1035 The subsections below define the MEGs specified in this MPLS-TP 1036 OAM architecture framework document. Unless otherwise stated, 1037 all references to domains, LSRs, MPLS-TP Sections, LSPs, 1038 pseudowires and MEGs in this section are made in relation to 1039 those shown in Figure 5. 1041 4.1. MPLS-TP Section Monitoring (SME) 1043 An MPLS-TP Section ME (SME) is an MPLS-TP maintenance entity 1044 intended to monitor an MPLS-TP Section as defined in RFC 5654 1045 [5]. An SME may be configured on any MPLS-TP section. SME OAM 1046 packets must fate share with the user data packets sent over the 1047 monitored MPLS-TP Section. 1049 An SME is intended to be deployed for applications where it is 1050 preferable to monitor the link between topologically adjacent 1051 (next hop in this layer network) MPLS-TP LSRs rather than 1052 monitoring the individual LSP or PW path segments traversing the 1053 MPLS-TP Section and the server layer technology does not provide 1054 adequate OAM capabilities. 1056 Figure 5 shows five Section MEs configured in the network 1057 between AC1 and AC2: 1059 1. Sec12 ME associated with the MPLS-TP Section between LSR 1 1060 and LSR 2, 1062 2. Sec23 ME associated with the MPLS-TP Section between LSR 2 1063 and LSR 3, 1065 3. Sec3X ME associated with the MPLS-TP Section between LSR 3 1066 and LSR X, 1068 4. SecXY ME associated with the MPLS-TP Section between LSR X 1069 and LSR Y, and 1071 5. 
SecYZ ME associated with the MPLS-TP Section between LSR Y 1072 and LSR Z. 1074 4.2. MPLS-TP LSP End-to-End Monitoring (LME) 1076 An MPLS-TP LSP ME (LME) is an MPLS-TP maintenance entity 1077 intended to monitor an end-to-end LSP between two LERs. An LME 1078 may be configured on any MPLS LSP. LME OAM packets must fate 1079 share with user data packets sent over the monitored MPLS-TP 1080 LSP. 1082 An LME is intended to be deployed in scenarios where it is 1083 desirable to monitor an entire LSP between its LERs, rather 1084 than, say, monitoring individual PWs. 1086 Figure 5 depicts two LMEs configured in the network between AC1 1087 and AC2: 1) the LSP13 LME between LER 1 and LER 3, and 2) the 1088 LSPXZ LME between LER X and LER Z. Note that the presence of an 1089 LSP3X LME in such a configuration is optional, hence, not 1090 precluded by this framework. For instance, the SPs may prefer to 1091 monitor the MPLS-TP Section between the two LSRs rather than the 1092 individual LSPs. 1094 4.3. MPLS-TP PW Monitoring (PME) 1096 An MPLS-TP PW ME (PME) is an MPLS-TP maintenance entity intended 1097 to monitor an SS-PW or MS-PW between a pair of T-PEs. A PME can 1098 be configured on any SS-PW or MS-PW. PME OAM packets must fate 1099 share with the user data packets sent over the monitored PW. 1101 A PME is intended to be deployed in scenarios where it is 1102 desirable to monitor an entire PW between a pair of MPLS-TP 1103 enabled T-PEs rather than monitoring the LSP aggregating 1104 multiple PWs between PEs.
1106 |<----------------- MS-PW1Z ----------------->| 1107 | | 1108 | || |<-LSP3X->| || | 1109 V V LSP V V LSP V V LSP V V 1110 +----+ +-+ +----+ +----+ +-+ +----+ 1111 +---+ |TPE1| | | |SPE3| |SPEX| | | |TPEZ| +---+ 1112 | |AC1| |=======| |=========| |=======| |AC2| | 1113 |CE1|---|.......PW13......|...PW3X..|.......PWXZ......|---|CE2| 1114 | | | |=======| |=========| |=======| | | | 1115 +---+ | 1 | |2| | 3 | | X | |Y| | Z | +---+ 1116 +----+ +-+ +----+ +----+ +-+ +----+ 1117 ^-------------------PW1Z PME------------------^ 1119 Figure 6 MPLS-TP PW ME (PME) 1121 Figure 6 depicts an MS-PW (MS-PW1Z) consisting of three path 1122 segments (PW13, PW3X and PWXZ) and its associated end-to-end PME 1123 (PW1Z PME). 1125 4.4. MPLS-TP LSP SPME Monitoring (LSME) 1127 An MPLS-TP LSP SPME ME (LSME) is an MPLS-TP SPME with associated 1128 maintenance entity intended to monitor an arbitrary part of an 1129 LSP between the pair of MEPs instantiated for the SPME, 1130 independently of the end-to-end monitoring (LME). An LSME can 1131 monitor an LSP segment or concatenated segment and it may also 1132 include the forwarding engine(s) of the node(s) at the edge(s) 1133 of the segment or concatenated segment. 1135 When an SPME is established between non-adjacent LSRs, the edges of 1136 the SPME become adjacent at the LSP sub-layer network and any 1137 LSRs that were previously in between become LSRs for the SPME. 1139 Multiple hierarchical LSMEs can be configured on any LSP. LSME 1140 OAM packets must fate share with the user data packets sent over 1141 the monitored LSP path segment. 1143 An LSME can be defined between the following entities: 1145 o The end node and any intermediate node of a given LSP. 1147 o Any two intermediate nodes of a given LSP.
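The two allowed MEP placements can be expressed as a simple validity check. This is an illustrative sketch only (the helper is hypothetical; the node names are borrowed from Figure 7): both MEPs must be on nodes of the LSP, and a pair of two end nodes is end-to-end monitoring, i.e. an LME rather than an LSME.

```python
def valid_lsme_endpoints(lsp_nodes, a, b):
    """True if (a, b) is a legal LSME MEP pair per the two cases
    above: an end node with an intermediate node, or two
    intermediate nodes of the same LSP."""
    if a == b or a not in lsp_nodes or b not in lsp_nodes:
        return False
    ends = {lsp_nodes[0], lsp_nodes[-1]}
    # Both MEPs on the LSP's end nodes would be an LME, not an LSME.
    return not (a in ends and b in ends)

# Nodes of LSP1Z as drawn in Figure 7 (PE1 ... PEZ).
lsp1z = ["PE1", "2", "DBN3", "DBNX", "Y", "PEZ"]
```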
1149 An LSME is intended to be deployed in scenarios where it is 1150 preferable to monitor the behavior of a part of an LSP or set of 1151 LSPs rather than the entire LSP itself, for example when there 1152 is a need to monitor a part of an LSP that extends beyond the 1153 administrative boundaries of an MPLS-TP enabled administrative 1154 domain. 1156 |<-------------------- PW1Z ------------------->| 1157 | | 1158 | |<-------------LSP1Z LSP------------->| | 1159 | |<-LSP13->| || |<-LSPXZ->| | 1160 V V S-LSP V V S-LSP V V S-LSP V V 1161 +----+ +-+ +----+ +----+ +-+ +----+ 1162 +----+ | PE1| | | |DBN3| |DBNX| | | | PEZ| +----+ 1163 | |AC1| |=====================================| |AC2| | 1164 | CE1|---|.....................PW1Z......................|---|CE2 | 1165 | | | |=====================================| | | | 1166 +----+ | 1 | |2| | 3 | | X | |Y| | Z | +----+ 1167 +----+ +-+ +----+ +----+ +-+ +----+ 1168 . . . . 1169 | | | | 1170 |<---- Domain 1 --->| |<---- Domain Z --->| 1172 ^---------^ ^---------^ 1173 LSP13 LSME LSPXZ LSME 1174 ^-------------------------------------^ 1175 LSP1Z LME 1177 DBN: Domain Border Node 1179 Figure 7 MPLS-TP LSP SPME ME (LSME) 1181 Figure 7 depicts a variation of the reference model in Figure 5 1182 where there is an end-to-end LSP (LSP1Z) between PE1 and PEZ. 1183 LSP1Z consists of, at least, three LSP Concatenated Segments: 1184 LSP13, LSP3X and LSPXZ. In this scenario there are two separate 1185 LSMEs configured to monitor the LSP1Z: 1) a LSME monitoring the 1186 LSP13 Concatenated Segment on Domain 1 (LSP13 LSME), and 2) a 1187 LSME monitoring the LSPXZ Concatenated Segment on Domain Z 1188 (LSPXZ LSME). 1190 It is worth noticing that LSMEs can coexist with the LME 1191 monitoring the end-to-end LSP and that LSME MEPs and LME MEPs 1192 can be coincident in the same node (e.g. PE1 node supports both 1193 the LSP1Z LME MEP and the LSP13 LSME MEP). 1195 4.5. 
MPLS-TP MS-PW SPME Monitoring (PSME) 1197 An MPLS-TP MS-PW SPME Monitoring ME (PSME) is an MPLS-TP SPME 1198 with associated maintenance entity intended to monitor an 1199 arbitrary part of an MS-PW between the pair of MEPs instantiated 1200 for the SPME, independently of the end-to-end monitoring 1201 (PME). A PSME can monitor a PW segment or concatenated segment 1202 and it may also include the forwarding engine(s) of the node(s) 1203 at the edge(s) of the segment or concatenated segment. A PSME is 1204 no different from an SPME; it is simply named as such to discuss 1205 SPMEs specifically in a PW context. 1207 When an SPME is established between non-adjacent S-PEs, the edges 1208 of the SPME become adjacent at the MS-PW sub-layer network and 1209 any S-PEs that were previously in between become LSRs for the 1210 SPME. 1212 S-PE placement is typically dictated by considerations other 1213 than OAM. S-PEs will frequently reside at operational boundaries 1214 such as the transition from distributed (CP) to centralized 1215 (NMS) control or at a routing area boundary. As such the 1216 architecture would appear not to have the flexibility that 1217 arbitrary placement of SPME segments would imply. Support for an 1218 arbitrary placement of PSME would require the definition of 1219 additional PW sub-layering. 1220 Multiple hierarchical PSMEs can be configured on any MS-PW. PSME 1221 OAM packets must fate share with the user data packets sent over the 1222 monitored PW path Segment. 1224 A PSME does not add hierarchical components to the MPLS architecture; 1225 it defines the role of existing components for the purposes of 1226 discussing OAM functionality. 1228 A PSME can be defined between the following entities: 1230 o T-PE and any S-PE of a given MS-PW 1232 o Any two S-PEs of a given MS-PW.
1234 Note that, in line with the SPME description in section 3.2, when a 1235 PW SPME is instantiated after the MS-PW has been instantiated, the 1236 TTL addressing of the MIPs may change and MIPs in the nested MEG are 1237 no longer part of the encompassing MEG. This means that the S-PE 1238 nodes hosting these MIPs are no longer S-PEs but P nodes at the SPME 1239 LSP level. The consequences are that the S-PEs hosting the PSME MEPs 1240 become adjacent S-PEs. This is no different than the operation of 1241 SPMEs in general. 1243 A PSME is intended to be deployed in scenarios where it is 1244 preferable to monitor the behavior of a part of a MS-PW rather 1245 than the entire end-to-end PW itself, for example to monitor an 1246 MS-PW path segment within a given network domain of an inter- 1247 domain MS-PW. 1249 |<----------------- MS-PW1Z ------------------>| 1250 | | 1251 | || |<-LSP3X-->| || | 1252 V V LSP V V LSP V V LSP V V 1253 +----+ +-+ +----+ +----+ +-+ +----+ 1254 +---+ |TPE1| | | |SPE3| |SPEX| | | |TPEZ| +---+ 1255 | |AC1| |=======| |==========| |=======| |AC2| | 1256 |CE1|---|.......PW13......|...PW3X...|.......PWXZ......|---|CE2| 1257 | | | |=======| |==========| |=======| | | | 1258 +---+ | 1 | |2| | 3 | | X | |Y| | Z | +---+ 1259 +----+ +-+ +----+ +----+ +-+ +----+ 1261 ^--- PW13 PSME ---^ ^--- PWXZ PSME ----^ 1262 ^-------------------PW1Z PME-------------------^ 1264 Figure 8 MPLS-TP MS-PW SPME Monitoring (PSME) 1266 Figure 8 depicts the same MS-PW (MS-PW1Z) between AC1 and AC2 as 1267 in Figure 6. In this scenario there are two separate PSMEs 1268 configured to monitor MS-PW1Z: 1) a PSME monitoring the PW13 1269 MS-PW path segment on Domain 1 (PW13 PSME), and 2) a PSME 1270 monitoring the PWXZ MS-PW path segment on Domain Z (PWXZ 1271 PSME). 1273 It is worth noticing that PSMEs can coexist with the PME 1274 monitoring the end-to-end MS-PW and that PSME MEPs and PME MEPs 1275 can be coincident in the same node (e.g.
TPE1 node supports both 1276 the PW1Z PME MEP and the PW13 PSME MEP). 1278 4.6. Fate sharing considerations for multilink 1280 Multilink techniques are in use today and are expected to 1281 continue to be used in future deployments. These techniques 1282 include Ethernet Link Aggregation [21] and the use of Link 1283 Bundling for MPLS [17], where the option to spread traffic over 1284 component links is supported and enabled. While the use of Link 1285 Bundling can be controlled at the MPLS-TP layer, use of Link 1286 Aggregation (or any server layer specific multilink) is not 1287 necessarily under control of the MPLS-TP layer. Other techniques 1288 may emerge in the future. These techniques share the 1289 characteristic that an LSP may be spread over a set of component 1290 links and therefore be reordered, but no flow within the LSP is 1291 reordered (except when very infrequent and minimally disruptive 1292 load rebalancing occurs). 1294 The use of multilink techniques may be prohibited or permitted 1295 in any particular deployment. If multilink techniques are used, 1296 the deployment can be considered to be only partially MPLS-TP 1297 compliant; however, this is unlikely to prevent its use. 1299 The implication for OAM is that not all components of a 1300 multilink will be exercised; independent server layer OAM is 1301 required to exercise the aggregated link components. This has 1302 further implications for MIP and MEP placement, as per-interface 1303 MIPs or "down" MEPs on a multilink interface are akin to a layer 1304 violation, as they instrument at the granularity of the server 1305 layer. The implications for reduced OAM loss measurement 1306 functionality are documented in sections 5.5.3 and 6.2.3. 1308 5.
OAM Functions for proactive monitoring 1310 In this document, proactive monitoring refers to OAM operations 1311 that are either configured to be carried out periodically and 1312 continuously or preconfigured to act on certain events such as 1313 alarm signals. 1315 Proactive monitoring is usually performed "in-service". Such 1316 transactions are universally MEP to MEP in operation, while 1317 notifications can be node to node (e.g. some MS-PW transactions) 1318 or node to MEPs (e.g., AIS). The control and measurement 1319 considerations are: 1321 1. Proactive monitoring for a MEG is typically configured at 1322 transport path creation time. 1324 2. The operational characteristics of in-band measurement 1325 transactions (e.g., CV, Loss Measurement (LM) etc.) are 1326 configured at the MEPs. 1328 3. Server layer events are reported by OAM messages originating 1329 at intermediate nodes. 1331 4. The measurements resulting from proactive monitoring are 1332 typically reported outside of the MEG (e.g. to a management 1333 system) as notification events, such as fault notifications or loss 1334 measurement indications of excessive impairment of information 1335 transfer capability. 1337 5. The measurements resulting from proactive monitoring may be 1338 periodically harvested by an EMS/NMS. 1340 For statically provisioned transport paths the above information 1341 is statically configured; for dynamically established transport 1342 paths the configuration information is signaled via the control 1343 plane or configured via the management plane. 1345 The operator may enable/disable some of the consequent actions 1346 defined in section 5.1.1.4. 1348 5.1. Continuity Check and Connectivity Verification 1350 Proactive Continuity Check functions, as required in section 1351 2.2.2 of RFC 5860 [11], are used to detect a loss of continuity 1352 defect (LOC) between two MEPs in a MEG.
1354 Proactive Connectivity Verification functions, as required in 1355 section 2.2.3 of RFC 5860 [11], are used to detect an unexpected 1356 connectivity defect between two MEGs (e.g. mismerging or 1357 misconnection), as well as unexpected connectivity within the 1358 MEG with an unexpected MEP. 1360 Both functions are based on the (proactive) generation of OAM 1361 packets by the source MEP that are processed by the sink MEP. As 1362 a consequence these two functions are grouped together into 1363 Continuity Check and Connectivity Verification (CC-V) OAM 1364 packets. 1366 In order to perform pro-active Connectivity Verification, each 1367 CC-V OAM packet also includes a globally unique Source MEP 1368 identifier. When used to perform only pro-active Continuity 1369 Check, the CC-V OAM packet will not include any globally unique 1370 Source MEP identifier. Different formats of MEP identifiers are 1371 defined in [10] to address different environments. When MPLS-TP 1372 is deployed in transport network environments where IP 1373 addressing is not used in the forwarding plane, the ICC-based 1374 format for MEP identification is used. When MPLS-TP is deployed 1375 in an IP-based environment, the IP-based MEP identification is 1376 used. 1378 As a consequence, it is not possible to detect misconnections 1379 between two MEGs monitored only for continuity as neither the 1380 OAM message type nor OAM message content provides sufficient 1381 information to disambiguate an invalid source. To expand: 1383 o For CC leaking into a CC monitored MEG - undetectable 1385 o For CV leaking into a CC monitored MEG - presence of 1386 additional Source MEP identifier allows detecting the fault 1388 o For CC leaking into a CV monitored MEG - lack of additional 1389 Source MEP identifier allows detecting the fault. 1391 o For CV leaking into a CV monitored MEG - different Source MEP 1392 identifier permits fault to be identified. 
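The four leaking cases above reduce to a single rule: a leaked packet is indistinguishable from a legitimate one only when neither the leaking traffic nor the monitored MEG carries a globally unique Source MEP identifier. A minimal, illustrative sketch (not a defined procedure):

```python
def misconnection_detectable(leaking: str, monitored: str) -> bool:
    """leaking/monitored name the OAM mode of the leaking traffic
    and of the monitored MEG: 'CC' (no globally unique Source MEP
    identifier) or 'CV' (identifier present)."""
    assert leaking in ("CC", "CV") and monitored in ("CC", "CV")
    # Only CC leaking into a CC-monitored MEG is undetectable:
    # there is no identifier on either side to disambiguate.
    return not (leaking == "CC" and monitored == "CC")
```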
1394 CC-V OAM packets are transmitted at a regular, operator 1395 configurable, rate. The default CC-V transmission periods are 1396 application dependent (see section 5.1.3). 1398 Proactive CC-V OAM packets are transmitted with the "minimum 1399 loss probability PHB" within the transport path (LSP, PW) they 1400 are monitoring. This PHB is configurable by the network 1401 operator. PHBs can be translated at the network borders by the same 1402 function that translates them for user data traffic. The 1403 implication is that CC-V fate shares with much of the forwarding 1404 implementation, but not all aspects of PHB processing are 1405 exercised. Either on-demand tools are used for finer-grained 1406 fault finding, or an implementation may utilize a CC-V flow per 1407 PHB with the entire E-LSP fate sharing with any individual PHB. 1409 In a co-routed or associated, bidirectional point-to-point 1410 transport path, when a MEP is enabled to generate pro-active 1411 CC-V OAM packets with a configured transmission rate, it also 1412 expects to receive pro-active CC-V OAM packets from its peer MEP 1413 at the same transmission rate, as a common SLA applies to all 1414 components of the transport path. In a unidirectional transport 1415 path (either point-to-point or point-to-multipoint), only the 1416 source MEP is enabled to generate CC-V OAM packets and only the 1417 sink MEP is configured to expect these packets at the configured 1418 rate. 1420 MIPs, as well as intermediate nodes not supporting MPLS-TP OAM, 1421 are transparent to the pro-active CC-V information and forward 1422 these pro-active CC-V OAM packets as regular data packets. 1424 During path setup and tear down, situations arise where CC-V 1425 checks would give rise to alarms, as the path is not fully 1426 instantiated. In order to avoid these spurious alarms the 1427 following procedures are recommended.
At initialization, the MEP 1428 source function (generating pro-active CC-V packets) should be 1429 enabled prior to the corresponding MEP sink function (detecting 1430 continuity and connectivity defects). When disabling the CC-V 1431 proactive functionality, the MEP sink function should be 1432 disabled prior to the corresponding MEP source function. 1434 It should be noted that different encapsulations are possible 1435 for CC-V packets and therefore it is possible that in case of 1436 mis-configurations or mis-connectivity, CC-V packets are 1437 received with an unexpected encapsulation. 1439 There are practical limitations to detecting unexpected 1440 encapsulation. It is possible that there are mis-configuration 1441 or mis-connectivity scenarios where OAM packets can alias as 1442 payload, e.g., when a transport path can carry an arbitrary 1443 payload without a pseudowire. 1445 When CC-V packets are received with an unexpected encapsulation 1446 that can be parsed by the sink MEP, the CC-V packet is processed 1447 as if it were received with the correct encapsulation, and if it is 1448 not a manifestation of a mis-connectivity defect, a warning is 1449 raised (see section 5.1.1.4). Otherwise the CC-V packet may be 1450 silently discarded as unrecognized and a LOC defect may be 1451 detected (see section 5.1.1.1). 1453 The defect conditions are described in no specific order. 1455 5.1.1. Defects identified by CC-V 1457 Pro-active CC-V functions allow a sink MEP to detect the defect 1458 conditions described in the following sub-sections. For all of 1459 the described defect cases, the sink MEP should notify the 1460 equipment fault management process of the detected defect. 1462 5.1.1.1. Loss Of Continuity defect 1464 When proactive CC-V is enabled, a sink MEP detects a loss of 1465 continuity (LOC) defect when it fails to receive pro-active CC-V 1466 OAM packets from the source MEP.
o Entry criteria: No pro-active CC-V OAM packets from the source MEP (and, in the case of CV, this includes the requirement to have the expected globally unique Source MEP identifier) are received within an interval equal to 3.5 times the receiving MEP's configured CC-V reception period.

o Exit criteria: A pro-active CC-V OAM packet from the source MEP (and again, in the case of CV, with the expected globally unique Source MEP identifier) is received.

5.1.1.2. Mis-connectivity defect

When a pro-active CC-V OAM packet is received, a sink MEP identifies a mis-connectivity defect (e.g., mismerge, misconnection or unintended looping) when the received packet carries an unexpected globally unique Source MEP identifier.

o Entry criteria: The sink MEP receives a pro-active CC-V OAM packet with an unexpected globally unique Source MEP identifier, or receives a CC or CC/CV OAM packet with an unexpected encapsulation.

o Exit criteria: The sink MEP does not receive any pro-active CC-V OAM packet with an unexpected globally unique Source MEP identifier for an interval at least equal to 3.5 times the longest transmission period of the pro-active CC-V OAM packets received with an unexpected globally unique Source MEP identifier since this defect was raised. This requires the OAM message to self-identify the CC-V periodicity, as not all MEPs can be expected to have knowledge of all MEGs.

5.1.1.3. Period Misconfiguration defect

If pro-active CC-V OAM packets are received with the expected globally unique Source MEP identifier but with a transmission period different from the locally configured reception period, then a CV period mis-configuration defect is detected.
o Entry criteria: A MEP receives a pro-active CC-V packet with the expected globally unique Source MEP identifier but with a Period field value different from its own configured CC-V transmission period.

o Exit criteria: The sink MEP does not receive any pro-active CC-V OAM packet with the expected globally unique Source MEP identifier and an incorrect transmission period for an interval at least equal to 3.5 times the longest transmission period of the pro-active CC-V OAM packets received with the expected globally unique Source MEP identifier and an incorrect transmission period since this defect was raised.

5.1.1.4. Unexpected encapsulation defect

If pro-active CC-V OAM packets are received with the expected globally unique Source MEP identifier but with an unexpected encapsulation, then a CV unexpected encapsulation defect is detected.

It should be noted that there are practical limitations to detecting unexpected encapsulation (see section 5.1.1).

o Entry criteria: A MEP receives a pro-active CC-V packet with the expected globally unique Source MEP identifier but with an unexpected encapsulation.

o Exit criteria: The sink MEP does not receive any pro-active CC-V OAM packet with the expected globally unique Source MEP identifier and an unexpected encapsulation for an interval at least equal to 3.5 times the longest transmission period of the pro-active CC-V OAM packets received with the expected globally unique Source MEP identifier and an unexpected encapsulation since this defect was raised.

5.1.2. Consequent action

A sink MEP that detects any of the defect conditions defined in section 5.1.1 declares a defect condition and performs the following consequent actions.
If a MEP detects an unexpected globally unique Source MEP Identifier, it blocks all the traffic (including the user data packets) that it receives from the misconnected transport path.

If a MEP detects a LOC defect that is not caused by a period mis-configuration, it should block all the traffic (including the user data packets) that it receives from the transport path, if this consequent action has been enabled by the operator.

It is worth noticing that the OAM requirements document [11] recommends that CC-V proactive monitoring be enabled on every MEG in order to reliably detect connectivity defects. However, CC-V proactive monitoring can be disabled by an operator for a MEG. In the event of a misconnection between a transport path that is pro-actively monitored for CC-V and a transport path which is not, the MEP of the former transport path will detect a LOC defect that in fact represents a connectivity problem (e.g. a misconnection with a transport path where CC-V proactive monitoring is not enabled) rather than a continuity problem, with consequent incorrect traffic delivery. For these reasons, the traffic-block consequent action is applied even when a LOC condition occurs. This block consequent action can be disabled through configuration. This deactivation of the block action may be used for activating or deactivating the monitoring when it is not possible to synchronize the function activation of the two peer MEPs.

If a MEP detects a LOC defect (section 5.1.1.1) or a mis-connectivity defect (section 5.1.1.2), it declares a signal fail condition at the transport path level.

It is a matter of local policy whether a MEP that detects a period misconfiguration defect (section 5.1.1.3) declares a signal fail condition at the transport path level.
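The LOC entry/exit criteria of section 5.1.1.1 and the traffic-block consequent actions above can be sketched as a small state model. This is a hypothetical illustration only (the class, method names, and time representation are invented for this sketch), not a specified state machine:

```python
# Hypothetical sketch of a sink MEP's proactive CC-V defect handling.
# Times are in seconds; all names here are invented for illustration.

LOC_MULTIPLIER = 3.5  # LOC entry: 3.5 x configured CC-V reception period

class CcvSink:
    def __init__(self, expected_mep_id, reception_period, block_on_loc=True):
        self.expected_mep_id = expected_mep_id
        self.reception_period = reception_period
        self.block_on_loc = block_on_loc   # operator-configurable (5.1.2)
        self.last_rx = None                # time of last good CC-V packet
        self.loc = False
        self.misconnectivity = False
        self.block_traffic = False

    def on_ccv_packet(self, source_mep_id, now):
        if source_mep_id != self.expected_mep_id:
            # Unexpected globally unique Source MEP identifier:
            # mis-connectivity defect; traffic is always blocked.
            self.misconnectivity = True
            self.block_traffic = True
            return
        self.last_rx = now                 # LOC exit criterion
        self.loc = False
        if not self.misconnectivity:
            self.block_traffic = False

    def poll(self, now):
        # LOC entry criterion: no good CC-V packet within 3.5 periods.
        if self.last_rx is None or now - self.last_rx > LOC_MULTIPLIER * self.reception_period:
            self.loc = True
            if self.block_on_loc:          # block action can be disabled
                self.block_traffic = True
        return self.loc
```

The sketch deliberately models only the two defects with traffic-affecting consequent actions; period misconfiguration and unexpected encapsulation handling would hang off the same packet path.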
The detection of an unexpected encapsulation defect does not have any consequent action: it is just a warning for the network operator. An implementation able to detect an unexpected encapsulation but not able to verify the source MEP ID may choose to declare a mis-connectivity defect.

5.1.3. Configuration considerations

At all MEPs inside a MEG, the following information needs to be configured when a proactive CC-V function is enabled:

o MEG ID; the MEG identifier to which the MEP belongs;

o MEP-ID; the MEP's own identity inside the MEG;

o list of the other MEPs in the MEG. For a point-to-point MEG the list consists of the single MEP ID from which the OAM packets are expected. In case of the root MEP of a p2mp MEG, the list is composed of all the leaf MEP IDs inside the MEG. In case of a leaf MEP of a p2mp MEG, the list is composed of the root MEP ID (i.e. each leaf needs to know the root MEP ID from which it expects to receive the CC-V OAM packets).

o PHB; it identifies the per-hop behavior of the CC-V packets. Proactive CC-V packets are transmitted with the "minimum loss probability PHB" previously configured within a single network operator. This PHB is configurable on a network operator basis. PHBs can be translated at the network borders.

o transmission rate; the default CC-V transmission periods are application dependent (depending on whether they are used to support fault management, performance monitoring, or protection switching applications):

o Fault Management: default transmission period is 1s (i.e. transmission rate of 1 packet/second).

o Performance Monitoring: default transmission period is 100ms (i.e. transmission rate of 10 packets/second). Performance monitoring is only relevant when the transport path is defect free.
CC-V contributes to the accuracy of PM statistics by permitting the defect-free periods to be properly distinguished.

o Protection Switching: default transmission period is 3.33ms (i.e. transmission rate of 300 packets/second). In order to achieve sub-50ms protection, the CC-V defect entry criteria should resolve in less than 10ms, with the protection switch completed within a subsequent period of 50ms. It is also possible to lengthen the transmission period to 10ms (i.e. transmission rate of 100 packets/second): in this case the CC-V defect entry criteria are reached later (i.e. after 30ms).

It should be possible for the operator to configure these transmission rates for all applications, to satisfy their internal requirements.

Note that the reception period is the same as the configured transmission period.

For statically provisioned transport paths the above parameters are statically configured; for dynamically established transport paths the configuration information is signaled via the control plane.

The operator should be able to enable/disable some of the consequent actions. Which consequent actions can be enabled/disabled are described in section 5.1.2.

5.2. Remote Defect Indication

The Remote Defect Indication (RDI) function, as required in section 2.2.9 of RFC 5860 [11], is an indicator that is transmitted by a sink MEP to communicate to its source MEP that a signal fail condition exists. RDI is used only for co-routed and associated bidirectional transport paths and is associated with proactive CC-V. The RDI indicator can be piggy-backed onto the CC-V packet.

When a MEP detects a signal fail condition (e.g. in case of a continuity or connectivity defect), it should begin transmitting an RDI indicator to its peer MEP.
When incorporated into CC-V, the RDI information is included in all pro-active CC-V packets that the MEP generates for the duration of the signal fail condition's existence.

A MEP that receives packets from a peer MEP with the RDI information should determine that its peer MEP has encountered a defect condition associated with a signal fail.

MIPs as well as intermediate nodes not supporting MPLS-TP OAM are transparent to the RDI indicator and forward OAM packets that include the RDI indicator as regular data packets, i.e. the MIP should neither perform any action on, nor examine, the indicator.

When the signal fail defect condition clears, the MEP should stop transmitting the RDI indicator to its peer MEP. When incorporated into CC-V, the RDI indicator is cleared from subsequent transmissions of pro-active CC-V packets. A MEP should clear the RDI defect upon reception of an OAM packet with the RDI indicator cleared.

5.2.1. Configuration considerations

The RDI indication may be carried in a unique OAM message or in an OAM information element embedded in a CC-V message; in either case, the RDI transmission rate and the PHB of the OAM packets carrying RDI should be the same as those configured for CC-V.

5.3. Alarm Reporting

The Alarm Reporting function, as required in section 2.2.8 of RFC 5860 [11], relies upon an Alarm Indication Signal (AIS) message to suppress alarms following detection of defect conditions at the server (sub-)layer.

When a server MEP asserts signal fail, it notifies the co-located MPLS-TP client/server adaptation function, which then generates packets with AIS information in the downstream direction to allow the suppression of secondary alarms at the MPLS-TP MEP in the client (sub-)layer.

The generation of packets with AIS information starts immediately when the server MEP asserts signal fail.
These periodic packets, with AIS information, continue to be transmitted until the signal fail condition is cleared. It is assumed that, to avoid spurious alarm generation, a MEP detecting loss of continuity will wait for a hold-off interval prior to asserting an alarm to the management system.

Upon receiving a packet with AIS information, an MPLS-TP MEP enters an AIS defect condition and suppresses loss of continuity alarms associated with its peer MEP, but does not block traffic received from the transport path. A MEP resumes loss of continuity alarm generation upon detecting loss of continuity defect conditions in the absence of the AIS condition.

MIPs, as well as intermediate nodes, do not process AIS information and forward these AIS OAM packets as regular data packets.

For example, consider a fiber cut between LSR 1 and LSR 2 in the reference network of Figure 5. Assuming that all of the MEGs described in Figure 5 have pro-active CC-V enabled, a LOC defect is detected by the MEPs of Sec12 SME, LSP13 LME, PW1 PSME and PW1Z PME. However, in a transport network only the alarm associated with the fiber cut needs to be reported to an NMS, while all secondary alarms should be suppressed (i.e. not reported to the NMS or reported as secondary alarms).

If the fiber cut is detected by the MEP in the physical layer (in LSR2), LSR2 can generate the proper alarm in the physical layer and suppress the secondary alarm associated with the LOC defect detected on Sec12 SME. As both MEPs reside within the same node, this process does not involve any external protocol exchange. Otherwise, if the physical layer does not have sufficient OAM capabilities to detect the fiber cut, the MEP of Sec12 SME in LSR2 will report a LOC alarm.
In both cases, the MEP of Sec12 SME in LSR 2 notifies the adaptation function for LSP13 LME, which then generates AIS packets on the LSP13 LME in order to allow its MEP in LSR3 to suppress the LOC alarm. LSR3 can also suppress the secondary alarm on PW13 PSME because the MEP of PW13 PSME resides within the same node as the MEP of LSP13 LME. The MEP of PW13 PSME in LSR3 also notifies the adaptation function for PW1Z PME, which then generates AIS packets on PW1Z PME in order to allow its MEP in LSRZ to suppress the LOC alarm.

The generation of AIS packets for each MEG in the MPLS-TP client (sub-)layer is configurable (i.e. the operator can enable/disable the AIS generation).

AIS packets are transmitted with the "minimum loss probability PHB" within a single network operator. This PHB is configurable on a network operator basis.

The AIS condition is cleared if no AIS message has been received within 3.5 times the AIS transmission period.

5.4. Lock Reporting

The Lock Reporting function, as required in section 2.2.7 of RFC 5860 [11], relies upon a Locked Report (LKR) message used to suppress alarms following an administrative locking action in the server (sub-)layer.

When a server MEP is locked, the MPLS-TP client (sub-)layer adaptation function generates packets with LKR information in both directions to allow the suppression of secondary alarms at the MEPs in the client (sub-)layer. Again, it is assumed that there is a hold-off for any loss of continuity alarms in the client layer MEPs downstream of the node originating the locked report.

The generation of packets with LKR information starts immediately when the server MEP is locked. These periodic packets, with LKR information, continue to be transmitted until the locked condition is cleared.
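The AIS clearing rule above (and the analogous rule for LKR) amounts to a hold timer of 3.5 transmission periods. A minimal sketch, with invented names and seconds as the time unit:

```python
# Minimal sketch of the AIS/LKR clearing rule: the condition is entered
# on receipt of an AIS (or LKR) packet and cleared once no such packet
# has been received for 3.5 transmission periods.  Names are invented.

CLEAR_MULTIPLIER = 3.5

class SuppressionCondition:
    def __init__(self, transmission_period):
        self.transmission_period = transmission_period
        self.last_rx = None

    def on_packet(self, now):
        # While the condition is active, loss-of-continuity alarms are
        # suppressed; traffic is NOT blocked.
        self.last_rx = now

    def is_active(self, now):
        if self.last_rx is None:
            return False
        return now - self.last_rx <= CLEAR_MULTIPLIER * self.transmission_period
```

A MEP would consult `is_active()` before raising a LOC alarm toward the management system.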
Upon receiving a packet with LKR information, an MPLS-TP MEP enters an LKR defect condition and suppresses loss of continuity alarms associated with its peer MEP, but does not block traffic received from the transport path. A MEP resumes loss of continuity alarm generation upon detecting loss of continuity defect conditions in the absence of the LKR condition.

MIPs, as well as intermediate nodes, do not process the LKR information and forward these LKR OAM packets as regular data packets.

For example, consider the case where the MPLS-TP Section between LSR 1 and LSR 2 in the reference network of Figure 5 is administratively locked at LSR2 (in both directions).

Assuming that all the MEGs described in Figure 5 have pro-active CC-V enabled, a LOC defect is detected by the MEPs of LSP13 LME, PW1 PSME and PW1Z PME. However, in a transport network all these secondary alarms should be suppressed (i.e. not reported to the NMS or reported as secondary alarms).

The MEP of Sec12 SME in LSR 2 notifies the adaptation function for LSP13 LME, which then generates LKR packets on the LSP13 LME in order to allow its MEPs in LSR1 and LSR3 to suppress the LOC alarm. LSR3 can also suppress the secondary alarm on PW13 PSME because the MEP of PW13 PSME resides within the same node as the MEP of LSP13 LME. The MEP of PW13 PSME in LSR3 also notifies the adaptation function for PW1Z PME, which then generates LKR packets on PW1Z PME in order to allow its MEP in LSRZ to suppress the LOC alarm.

The generation of LKR packets for each MEG in the MPLS-TP client (sub-)layer is configurable (i.e. the operator can enable/disable the LKR generation).

LKR packets are transmitted with the "minimum loss probability PHB" within a single network operator. This PHB is configurable on a network operator basis.
The locked condition is cleared if no LKR packet has been received for 3.5 times the transmission period.

5.5. Packet Loss Measurement

Packet Loss Measurement (LM) is one of the capabilities supported by the MPLS-TP Performance Monitoring (PM) function in order to facilitate reporting of QoS information for a transport path, as required in section 2.2.11 of RFC 5860 [11]. LM is used to exchange counter values for the number of ingress and egress packets transmitted and received by the transport path monitored by a pair of MEPs.

Proactive LM is performed by periodically sending LM OAM packets from a MEP to a peer MEP and by receiving LM OAM packets from the peer MEP (if a co-routed or associated bidirectional transport path) during the lifetime of the transport path. Each MEP performs measurements of its transmitted and received packets. These measurements are then correlated in real time with the peer MEP in the ME to derive the impact of packet loss on a number of performance metrics for the ME in the MEG. The LM transactions are issued such that the OAM packets experience the same queuing discipline as the measured traffic while transiting between the MEPs in the ME.

For a MEP, near-end packet loss refers to packet loss associated with incoming data packets (from the far-end MEP), while far-end packet loss refers to packet loss associated with egress data packets (towards the far-end MEP).

MIPs, as well as intermediate nodes, do not process the LM information and forward these pro-active LM OAM packets as regular data packets.

5.5.1. Configuration considerations

In order to support proactive LM, the transmission rate and PHB class associated with the LM OAM packets originating from a MEP need to be configured as part of the LM provisioning.
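As a hypothetical illustration of the counter correlation described in section 5.5 (the counter names and sampling format are invented for this sketch, not taken from any MPLS-TP message definition), near-end and far-end loss over a measurement interval could be derived as:

```python
# Invented illustration: each MEP samples its transmit/receive counters
# at the start and end of a measurement interval; loss in one direction
# is the sender's transmit delta minus the receiver's receive delta.

def one_way_loss(tx_start, tx_end, rx_start, rx_end):
    return (tx_end - tx_start) - (rx_end - rx_start)

def losses_at_mep_a(a_tx, a_rx, b_tx, b_rx):
    """Each argument is a (start, end) counter sample pair.
    For MEP A: far-end loss is on its egress direction (A -> B);
    near-end loss is on its ingress direction (B -> A)."""
    far_end = one_way_loss(a_tx[0], a_tx[1], b_rx[0], b_rx[1])
    near_end = one_way_loss(b_tx[0], b_tx[1], a_rx[0], a_rx[1])
    return near_end, far_end
```

The sketch assumes the counter samples bracket the same set of packets at both MEPs; the sampling-skew and multilink caveats in sections 5.5.2 and 5.5.3 are precisely the cases where that assumption fails.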
LM OAM packets should be transmitted with the PHB that yields the lowest discard probability within the measured PHB Scheduling Class (see RFC 3260 [16]).

If that PHB class is not an ordered aggregate, where the ordering constraint is that all packets with the PHB class are delivered in order, LM can produce inconsistent results.

5.5.2. Sampling skew

If an implementation makes use of a hardware forwarding path which operates in parallel with an OAM processing path, whether hardware or software based, the packet and byte counts may be skewed if one or more packets can be processed before the OAM processing samples the counters. If OAM is implemented in software this error can be quite large.

5.5.3. Multilink issues

If multilink is used at the LSP ingress or egress, there may be no single packet-processing engine at which an LM packet can be injected or extracted as an atomic operation with which accurate packet and byte counts can be associated.

In the case where multilink is encountered in the LSP path, the reordering of packets within the LSP can cause inaccurate LM results.

5.6. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities supported by the MPLS-TP PM function in order to facilitate reporting of QoS information for a transport path, as required in section 2.2.12 of RFC 5860 [11]. Specifically, pro-active DM is used to measure the long-term packet delay and packet delay variation in the transport path monitored by a pair of MEPs.

Proactive DM is performed by sending periodic DM OAM packets from a MEP to a peer MEP and by receiving DM OAM packets from the peer MEP (if a co-routed or associated bidirectional transport path) during a configurable time interval.
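A sketch of how such periodic DM samples might be reduced to delay and delay-variation figures; the (tx, rx) pair representation is an assumption of this sketch, and a meaningful one-way delay requires synchronized clocks at the two MEPs:

```python
# Invented illustration: reduce periodic DM timestamp pairs to one-way
# delay and inter-packet delay variation.  One-way delay is meaningful
# only when the two MEPs' clocks are synchronized.

def one_way_delays(samples):
    """samples: list of (tx_timestamp, rx_timestamp) pairs."""
    return [rx - tx for tx, rx in samples]

def delay_variation(delays):
    """Differences between consecutive one-way delay measurements."""
    return [b - a for a, b in zip(delays, delays[1:])]
```

Delay variation has the practical advantage of cancelling any constant clock offset between the MEPs, which is why it can be reported even without synchronized time.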
Pro-active DM can be operated in two ways:

o One-way: a MEP sends a DM OAM packet to its peer MEP containing all the required information to facilitate one-way packet delay and/or one-way packet delay variation measurements at the peer MEP. Note that this requires synchronized precision time at both MEPs by means outside the scope of this framework.

o Two-way: a MEP sends a DM OAM packet with a DM request to its peer MEP, which replies with a DM OAM packet as a DM response. The request/response DM OAM packets contain all the required information to facilitate two-way packet delay and/or two-way packet delay variation measurements from the viewpoint of the source MEP.

MIPs, as well as intermediate nodes, do not process the DM information and forward these pro-active DM OAM packets as regular data packets.

5.6.1. Configuration considerations

In order to support pro-active DM, the transmission rate and PHB associated with the DM OAM packets originating from a MEP need to be configured as part of the DM provisioning. DM OAM packets should be transmitted with the PHB that yields the lowest discard probability within the measured PHB Scheduling Class (see RFC 3260 [16]).

5.7.
Client Failure Indication

The Client Failure Indication (CFI) function, as required in section 2.2.10 of RFC 5860 [11], is used to help process client defects and propagate a client signal defect condition from the process associated with the local attachment circuit where the defect was detected (typically the source adaptation function for the local client interface) to the process associated with the far-end attachment circuit (typically the source adaptation function for the far-end client interface) for the same transmission path, in cases where the client of the transport path does not support a native defect/alarm indication mechanism, e.g. AIS.

A source MEP starts transmitting a CFI indication to its peer MEP when it receives a local client signal defect notification via its local CSF function. Mechanisms to detect local client signal fail defects are technology specific. Similarly, mechanisms to determine when to cease originating the client signal fail indication are also technology specific.

A sink MEP that has received a CFI indication reports this condition to its associated client process via its local CFI function. Consequent actions toward the client attachment circuit are technology specific.

Either there needs to be a 1:1 correspondence between the client and the MEG, or, when multiple clients are multiplexed over a transport path, the CFI message requires additional information to permit the client instance to be identified.

MIPs, as well as intermediate nodes, do not process the CFI information and forward these pro-active CFI OAM packets as regular data packets.

5.7.1. Configuration considerations

In order to support CFI indication, the CFI transmission rate and the PHB of the CFI OAM message/information element should be configured as part of the CFI configuration.

6.
OAM Functions for on-demand monitoring

In contrast to proactive monitoring, on-demand monitoring is initiated manually and for a limited amount of time, usually for operations such as diagnostics to investigate a defect condition.

On-demand monitoring covers a combination of "in-service" and "out-of-service" monitoring functions. The control and measurement implications are:

1. A MEG can be directed to perform "on-demand" functions at arbitrary times in the lifetime of a transport path.

2. "Out-of-service" monitoring functions may require a-priori configuration of both MEPs and intermediate nodes in the MEG (e.g., data plane loopback) and the issuance of notifications into client layers of the transport path being removed from service (e.g., lock reporting).

3. The measurements resulting from on-demand monitoring are typically harvested in real time, as these are frequently initiated manually. They do not necessarily require different harvesting mechanisms than those used for proactive monitoring telemetry.

The functions that are exclusively out-of-service are those described in section 6.3. The remainder are applicable to both in-service and out-of-service transport paths.

6.1. Connectivity Verification

In order to preserve network resources, e.g. bandwidth and processing time at switches, it may be preferable not to use proactive CC-V. In order to perform fault management functions, network management may invoke periodic on-demand bursts of on-demand CV packets, as required in section 2.2.3 of RFC 5860 [11].

On-demand connectivity verification is a transaction that flows from the source MEP to a target MIP or MEP.
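One plausible shape for such a transaction, sketched with an invented send_cv() stand-in for the real OAM channel:

```python
# Invented sketch of an on-demand CV burst: send a configured number of
# CV packets toward the target MEP/MIP and compare replies received
# against the burst size.  send_cv() is a stand-in for the OAM channel
# and returns True when a reply on-demand CV packet comes back.

def run_cv_burst(send_cv, burst_size):
    replies = sum(1 for _ in range(burst_size) if send_cv())
    # Fewer replies than packets sent indicates that a connectivity
    # problem may exist (tolerating normal packet loss is a matter of
    # configuration, not modeled here).
    return replies, replies < burst_size
```

In practice the burst size, transmission rate, packet size (including padding for MTU verification), and target addressing are all pre-configured at the source MEP, as described below.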
Use of on-demand CV is dependent on the existence of either a bi-directional ME, or an associated return ME, or the availability of an out-of-band return path, because it requires the ability for target MIPs and MEPs to direct responses to the originating MEPs.

An additional use of on-demand CV would be to detect and locate a connectivity problem when a problem is suspected or known based on other tools. In this case the functionality will be triggered by network management in response to a status signal or alarm indication.

On-demand CV is based upon generation of on-demand CV packets that should uniquely identify the MEG that is being checked. The on-demand functionality may be used to check either an entire MEG (end-to-end) or between a source MEP and a specific MIP. This functionality may not be available for associated bidirectional transport paths or unidirectional paths, as the MIP may not have a return path to the source MEP for the on-demand CV transaction.

On-demand CV may generate a one-time burst of on-demand CV packets, or be used to invoke periodic, non-continuous, bursts of on-demand CV packets. The number of packets generated in each burst is configurable at the MEPs, and should take into account normal packet-loss conditions.

When invoking a periodic check of the MEG, the source MEP should issue a burst of on-demand CV packets that uniquely identifies the MEG being verified. The number of packets and their transmission rate should be pre-configured at the source MEP. The source MEP should use the mechanisms defined in sections 3.3 and 3.4 when sending an on-demand CV packet to a target MEP or target MIP respectively. The target MEP/MIP shall return a reply on-demand CV packet for each packet received.
If the expected number of on-demand CV reply packets is not received at the source MEP, this is an indication that a connectivity problem may exist.

On-demand CV should have the ability to carry padding such that a variety of MTU sizes can be originated to verify the MTU transport capability of the transport path.

MIPs that are not targeted by on-demand CV packets, as well as intermediate nodes, do not process the CV information and forward these on-demand CV OAM packets as regular data packets.

6.1.1. Configuration considerations

For on-demand CV the source MEP should support the configuration of the number of packets to be transmitted/received in each burst of transmissions and their packet size.

In addition, when the CV packet is used to check connectivity toward a target MIP, the number of hops to reach the target MIP should be configured.

The PHB of the on-demand CV packets should be configured as well. This permits the verification of correct operation of QoS queuing as well as connectivity.

6.2. Packet Loss Measurement

On-demand Packet Loss Measurement (LM) is one of the capabilities supported by the MPLS-TP Performance Monitoring function in order to facilitate the diagnosis of QoS performance for a transport path, as required in section 2.2.11 of RFC 5860 [11]. As with proactive LM, on-demand LM is used to exchange counter values for the number of ingress and egress packets transmitted and received by the transport path monitored by a pair of MEPs. LM is only performed between a pair of MEPs.

On-demand LM is performed by periodically sending LM OAM packets from a MEP to a peer MEP and by receiving LM OAM packets from the peer MEP (if a co-routed or associated bidirectional transport path) during a pre-defined monitoring period. Each MEP performs measurements of its transmitted and received packets.
These measurements are then correlated to evaluate the packet loss performance metrics of the transport path.

Use of packet loss measurement in an out-of-service transport path requires a traffic source such as a tester.

MIPs, as well as intermediate nodes, do not process the LM information and forward these on-demand LM OAM packets as regular data packets.

6.2.1. Configuration considerations

In order to support on-demand LM, the beginning and duration of the LM procedures, and the transmission rate and PHB associated with the LM OAM packets originating from a MEP, must be configured as part of the on-demand LM provisioning. LM OAM packets should be transmitted with the PHB that yields the lowest discard probability within the measured PHB Scheduling Class (see RFC 3260 [16]).

6.2.2. Sampling skew

If an implementation makes use of a hardware forwarding path which operates in parallel with an OAM processing path, whether hardware or software based, the packet and byte counts may be skewed if one or more packets can be processed before the OAM processing samples the counters. If OAM is implemented in software this error can be quite large.

6.2.3. Multilink issues

Multilink issues are as described in section 5.5.3.

6.3. Diagnostic Tests

Diagnostic tests are tests performed on a MEG that has been taken out-of-service.

6.3.1. Throughput Estimation

Throughput estimation is an on-demand out-of-service function, as required in section 2.2.5 of RFC 5860 [11], that allows the bandwidth/throughput of an MPLS-TP transport path (LSP or PW) to be verified before it is put in-service.

Throughput estimation is performed between MEPs, or between a MEP and a MIP, and can be performed in one-way or two-way modes.
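One-way throughput estimation of this kind can be sketched as a rate sweep: probe increasing rates and report the highest rate with no loss. measure_loss() below is an invented stand-in for a real test signal generator/detector pair:

```python
# Invented sketch of a rate sweep for one-way throughput estimation:
# probe increasing rates and return the highest rate at which no OAM
# test packets were dropped.  measure_loss(rate) stands in for a real
# generator/detector pair and returns the fraction of packets lost.

def estimate_throughput(measure_loss, rates):
    best = 0
    for rate in sorted(rates):
        if measure_loss(rate) > 0.0:
            break          # OAM test packets begin to drop here
        best = rate
    return best
```

A real test would also vary the OAM test packet size, since the zero-loss rate generally depends on it.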
According to RFC 2544 [12], this test is performed by sending OAM test packets at an increasing rate (up to the theoretical maximum), graphing the percentage of OAM test packets received, and reporting the rate at which OAM test packets begin to drop. In general, this rate is dependent on the OAM test packet size.

When configured to perform such tests, a MEP source inserts OAM test packets with a specified packet size and transmission pattern at a rate that exercises the throughput.

For a one-way test, the remote MEP sink receives the OAM test packets and calculates the packet loss. For a two-way test, the remote MEP loops the OAM test packets back to the original MEP, and the local MEP sink calculates the packet loss.

It is worth noting that two-way throughput estimation can only evaluate the minimum of the available throughputs of the two directions. In order to estimate the throughput of each direction uniquely, two one-way throughput estimation sessions have to be set up.

MIPs that are not targeted by on-demand test OAM packets, as well as intermediate nodes, do not process the throughput test information and forward these on-demand test OAM packets as regular data packets.

6.3.1.1. Configuration considerations

Throughput estimation is an out-of-service tool. The diagnosed MEG should be put into a Lock status before the diagnostic test is started.

A MEG can be put into a Lock status either via an NMS action or using the Lock Instruct OAM tool as defined in section 7.

At the transmitting MEP, provisioning is required for a test signal generator, which is associated with the MEP. At a receiving MEP, provisioning is required for a test signal detector, which is associated with the MEP.

6.3.1.2.
Limited OAM processing rate 2189 If an implementation is able to process payload at much higher 2190 data rates than OAM packets, then accurate measurement of 2191 throughput using OAM packets is not achievable. Whether OAM 2192 packets can be processed at the same rate as payload is 2193 implementation dependent. 2195 6.3.1.3. Multilink considerations 2197 If multilink is used, then it may not be possible to perform 2198 throughput measurement, as the throughput test may not have a 2199 mechanism for utilizing more than one component link of the 2200 aggregated link. 2202 6.3.2. Data plane Loopback 2204 Data plane loopback is an out-of-service function, as required 2205 in section 2.2.5 of RFC 5860 [11]. This function consists in 2206 placing a transport path, at either an intermediate or 2207 terminating node, into a data plane loopback state, such that 2208 all traffic (including both payload and OAM) received on the 2209 looped back interface is sent on the reverse direction of the 2210 transport path. The traffic is looped back unmodified other than 2211 normal per hop processing such as TTL decrement. 2213 The data plane loopback function requires that the MEG is locked 2214 such that user data traffic is prevented from entering/exiting 2215 that MEG. Instead, test traffic is inserted at the ingress of 2216 the MEG. This test traffic can be generated from an internal 2217 process residing within the ingress node or injected by external 2218 test equipment connected to the ingress node. 2220 It is also normal to disable proactive monitoring of the path as 2221 the source MEP will see all source MEP originated OAM messages 2222 returned to it. 2224 The only way to send an OAM packet to a node set in the data 2225 plane loopback mode is via TTL expiry, irrespectively on whether 2226 the node is hosting MIPs or MEPs. 
It should also be noted that 2227 MIPs can be addressed with more than one TTL value on a 2228 co-routed bi-directional path set into dataplane loopback. 2230 If the loopback function is to be performed at an intermediate 2231 node it is only applicable to co-routed bi-directional paths. If 2232 the loopback is to be performed end to end, it is applicable to 2233 both co-routed bi-directional or associated bi-directional 2234 paths. 2236 It should be noted that data plane loopback function itself is 2237 applied to data-plane loopback points that can resides on 2238 different interfaces from MIPs/MEPs. Where a node implements 2239 data plane loopback capability and whether it implements it in 2240 more than one point is implementation dependent. 2242 6.3.2.1. Configuration considerations 2244 Data plane loopback is an out-of-service tool. The MEG which 2245 defines a diagnosed transport path should be put into a locked 2246 state before the diagnostic test is started. However, a means is 2247 required to permit the originated test traffic to be inserted at 2248 ingress MEP when data plane loopback is performed. 2250 A transport path, at either an intermediate or terminating node, 2251 can be put into data plane loopback state via an NMS action or 2252 using an OAM tool for data plane loopback configuration. 2254 If the data plane loopback point is set somewhere at an 2255 intermediate point of a co-routed bidirectional transport path, 2256 the side of loop back function (one side or both side) needs to 2257 be configured. 2259 6.4. Route Tracing 2261 It is often necessary to trace a route covered by a MEG from a 2262 source MEP to the sink MEP including all the MIPs in-between, 2263 and may be conducted after provisioning an MPLS-TP transport 2264 path for, e.g., trouble shooting purposes such as fault 2265 localization. 2267 The route tracing function, as required in section 2.2.4 of RFC 2268 5860 [11], is providing this functionality. 
Based on the fate-sharing requirement of OAM flows, i.e., OAM
packets receive the same forwarding treatment as data packets,
route tracing is a basic means to perform connectivity
verification and, to a much lesser degree, continuity check.
For this function to work properly, a return path must be
present.

Route tracing might be implemented in different ways, and this
document does not preclude any of them.

Route tracing should always discover the full list of MIPs and
of the peer MEPs. In case a defect exists, the route trace
function will only be able to trace up to the defect, and it
needs to be able to return the incomplete list of OAM entities
that it was able to trace so that the fault can be localized.

6.4.1. Configuration considerations

The configuration of the route trace function must at least
support the setting of the number of trace attempts before it
gives up.

6.5. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities
supported by the MPLS-TP PM function in order to facilitate
reporting of QoS information for a transport path, as required
in section 2.2.12 of RFC 5860 [11]. Specifically, on-demand DM
is used to measure packet delay and packet delay variation in
the transport path monitored by a pair of MEPs during a
pre-defined monitoring period.

On-demand DM is performed by sending periodic DM OAM packets
from a MEP to a peer MEP and by receiving DM OAM packets from
the peer MEP (if a co-routed or associated bidirectional
transport path) during a configurable time interval.

On-demand DM can be operated in two modes:

o  One-way: a MEP sends a DM OAM packet to its peer MEP
   containing all the required information to facilitate
   one-way packet delay and/or one-way packet delay variation
   measurements at the peer MEP. Note that this requires
   synchronized precision time at both MEPs by means outside
   the scope of this framework.

o  Two-way: a MEP sends a DM OAM packet with a DM request to
   its peer MEP, which replies with a DM OAM packet as a DM
   response. The request/response DM OAM packets contain all
   the required information to facilitate two-way packet delay
   and/or two-way packet delay variation measurements from the
   viewpoint of the source MEP.

MIPs, as well as intermediate nodes, do not process the DM
information and forward these on-demand DM OAM packets as
regular data packets.

6.5.1. Configuration considerations

In order to support on-demand DM, the beginning and duration of
the DM procedures, and the transmission rate and PHB associated
with the DM OAM packets originating from a MEP, need to be
configured as part of the DM provisioning. DM OAM packets
should be transmitted with the PHB that yields the lowest
discard probability within the measured PHB Scheduling Class
(see RFC 3260 [16]).

In order to verify differences in performance between long and
short packets (e.g., due to the processing time), it should be
possible for the operator to configure the packet size of the
on-demand DM OAM packet.

7. OAM Functions for Administration Control

7.1. Lock Instruct

The Lock Instruct (LKI) function, as required in section 2.2.6
of RFC 5860 [11], is a command allowing a MEP to instruct its
peer MEP(s) to put the MPLS-TP transport path into a locked
condition.

This function allows single-side provisioning for
administratively locking (and unlocking) an MPLS-TP transport
path.

Note that it is also possible to administratively lock (and
unlock) an MPLS-TP transport path using two-side provisioning,
where the NMS administratively puts both MEPs into an
administrative lock condition.
In this case, the LKI function is not required/used.

MIPs, as well as intermediate nodes, do not process the lock
instruct information and forward these on-demand LKI OAM
packets as regular data packets.

7.1.1. Locking a transport path

A MEP, upon receiving a single-side administrative lock command
from an NMS, sends an LKI request OAM packet to its peer
MEP(s). It also puts the MPLS-TP transport path into a locked
state and notifies its client (sub-)layer adaptation function
of the locked condition.

A MEP, upon receiving an LKI request from its peer MEP, can
either accept or reject the instruction and replies to the peer
MEP with an LKI reply OAM packet indicating whether or not it
has accepted the instruction. This requires either an in-band
or an out-of-band return path.

If the lock instruction has been accepted, the MEP also puts
the MPLS-TP transport path into a locked state and notifies its
client (sub-)layer adaptation function of the locked condition.

Note that if the client (sub-)layer is also MPLS-TP, Lock
Reporting (LKR) generation at the client MPLS-TP (sub-)layer is
started, as described in section 5.4.

7.1.2. Unlocking a transport path

A MEP, upon receiving a single-side administrative unlock
command from an NMS, sends an LKI removal request OAM packet to
its peer MEP(s).

The peer MEP, upon receiving an LKI removal request, can either
accept or reject the removal instruction and replies with an
LKI removal reply OAM packet indicating whether or not it has
accepted the instruction.

If the lock removal instruction has been accepted, the peer MEP
also clears the locked condition on the MPLS-TP transport path
and notifies its client (sub-)layer adaptation function of this
event.
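The accept-or-reject behaviour of the peer MEP for both lock and
unlock can be sketched as a toy handler. The class, method and message
names below are illustrative inventions for this sketch only, not
protocol elements from any MPLS-TP specification:

```python
class PeerMep:
    """Toy model of a peer MEP reacting to LKI request/removal packets."""

    def __init__(self, accepts_lock=True):
        self.locked = False
        self.accepts_lock = accepts_lock
        self.client_events = []          # stand-in for client (sub-)layer notifications

    def on_lki_request(self):
        if not self.accepts_lock:
            return "LKI-reply(rejected)"
        self.locked = True               # put the transport path into locked state
        self.client_events.append("locked")
        return "LKI-reply(accepted)"

    def on_lki_removal_request(self):
        if not self.accepts_lock:
            return "LKI-removal-reply(rejected)"
        self.locked = False              # clear the locked condition
        self.client_events.append("unlocked")
        return "LKI-removal-reply(accepted)"
```

A MEP that accepts an instruction carries out both obligations
described in this section: it changes the state of the transport path
and notifies the client (sub-)layer adaptation function; a rejecting
MEP only returns the negative reply.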
The MEP that initiated the LKI clear procedure, upon receiving
a positive LKI removal reply, also clears the locked condition
on the MPLS-TP transport path and notifies its client
(sub-)layer adaptation function of this event.

Note that if the client (sub-)layer is also MPLS-TP, Lock
Reporting (LKR) generation at the client MPLS-TP (sub-)layer is
terminated, as described in section 5.4.

8. Security Considerations

A number of security considerations are important in the
context of OAM applications.

OAM traffic can reveal sensitive information such as passwords,
performance data and details about, e.g., the network topology.
The nature of OAM data therefore suggests that some form of
authentication, authorization and encryption be in place. This
will prevent unauthorized access to vital equipment and will
prevent third parties from learning sensitive information about
the transport network. However, it should be observed that the
combination of all permutations of unique MEP-to-MEP,
MEP-to-MIP and intermediate-system-originated transactions
militates against the practical establishment and maintenance
of a large number of security associations per MEG.

For this reason it is assumed that the network is physically
secured against man-in-the-middle attacks. Further, this
document describes OAM functions that, if a man-in-the-middle
attack were possible, could be exploited to significantly
disrupt proper operation of the network.

Mechanisms that the framework does not specify might be subject
to additional security considerations.

9. IANA Considerations

No new IANA considerations.

10. Acknowledgments

The authors would like to thank all members of the teams (the
Joint Working Team, the MPLS Interoperability Design Team in
the IETF and the Ad Hoc Group on MPLS-TP in the ITU-T) involved
in the definition and specification of the MPLS Transport
Profile.

The editors gratefully acknowledge the contributions of Adrian
Farrel, Yoshinori Koike, Luca Martini, Yuji Tochio and Manuel
Paul for the definition of per-interface MIPs and MEPs.

The editors gratefully acknowledge the contributions of Malcolm
Betts, Yoshinori Koike, Xiao Min, and Maarten Vissers for the
lock report and lock instruction description.

The authors would also like to thank Alessandro D'Alessandro,
Loa Andersson, Malcolm Betts, Stewart Bryant, Rui Costa, Xuehui
Dai, John Drake, Adrian Farrel, Dan Frost, Xia Liang, Liu
Gouman, Peng He, Feng Huang, Su Hui, Yoshinori Koike, George
Swallow, Yuji Tochio, Curtis Villamizar, Maarten Vissers and
Xuequin Wei for their comments and enhancements to the text.

This document was prepared using 2-Word-v2.0.template.dot.

11. References

11.1. Normative References

[1]  Rosen, E., Viswanathan, A., Callon, R., "Multiprotocol
     Label Switching Architecture", RFC 3031, January 2001

[2]  Bryant, S., Pate, P., "Pseudo Wire Emulation Edge-to-Edge
     (PWE3) Architecture", RFC 3985, March 2005

[3]  Nadeau, T., Pignataro, C., "Pseudowire Virtual Circuit
     Connectivity Verification (VCCV): A Control Channel for
     Pseudowires", RFC 5085, December 2007

[4]  Bocci, M., Bryant, S., "An Architecture for Multi-Segment
     Pseudowire Emulation Edge-to-Edge", RFC 5659, October 2009

[5]  Niven-Jenkins, B., Brungard, D., Betts, M., Sprecher, N.,
     Ueno, S., "Requirements of an MPLS Transport Profile",
     RFC 5654, September 2009

[6]  Agarwal, P., Akyol, B., "Time To Live (TTL) Processing in
     Multi-Protocol Label Switching (MPLS) Networks", RFC 3443,
     January 2003

[7]  Vigoureux, M., Bocci, M., Swallow, G., Ward, D., Aggarwal,
     R., "MPLS Generic Associated Channel", RFC 5586, June 2009

[8]  Bocci, M., et al., "A Framework for MPLS in Transport
     Networks", RFC 5921, July 2010

[9]  Bocci, M., et al., "MPLS Transport Profile User-to-Network
     and Network-to-Network Interfaces",
     draft-ietf-mpls-tp-uni-nni-00 (work in progress),
     August 2010

[10] Swallow, G., Bocci, M., "MPLS-TP Identifiers",
     draft-ietf-mpls-tp-identifiers-02 (work in progress),
     July 2010

[11] Vigoureux, M., Betts, M., Ward, D., "Requirements for
     Operations, Administration, and Maintenance (OAM) in MPLS
     Transport Networks", RFC 5860, May 2010

[12] Bradner, S., McQuaid, J., "Benchmarking Methodology for
     Network Interconnect Devices", RFC 2544, March 1999

[13] ITU-T Recommendation G.806 (01/09), "Characteristics of
     transport equipment - Description methodology and generic
     functionality", January 2009

11.2. Informative References

[14] Sprecher, N., Nadeau, T., van Helvoort, H., Weingarten,
     Y., "MPLS-TP OAM Analysis",
     draft-ietf-mpls-tp-oam-analysis-02 (work in progress),
     July 2010

[15] Nichols, K., Blake, S., Baker, F., Black, D., "Definition
     of the Differentiated Services Field (DS Field) in the
     IPv4 and IPv6 Headers", RFC 2474, December 1998

[16] Grossman, D., "New Terminology and Clarifications for
     Diffserv", RFC 3260, April 2002

[17] Kompella, K., Rekhter, Y., Berger, L., "Link Bundling in
     MPLS Traffic Engineering (TE)", RFC 4201, October 2005

[18] ITU-T Recommendation G.707/Y.1322 (01/07), "Network node
     interface for the synchronous digital hierarchy (SDH)",
     January 2007

[19] ITU-T Recommendation G.805 (03/00), "Generic functional
     architecture of transport networks", March 2000

[20] ITU-T Recommendation Y.1731 (02/08), "OAM functions and
     mechanisms for Ethernet based networks", February 2008

[21] IEEE Standard 802.1AX-2008, "IEEE Standard for Local and
     Metropolitan Area Networks - Link Aggregation",
     November 2008

[22] Le Faucheur, F., et al., "Multi-Protocol Label Switching
     (MPLS) Support of Differentiated Services", RFC 3270,
     May 2002
Authors' Addresses

Dave Allan
Ericsson
Email: david.i.allan@ericsson.com

Italo Busi
Alcatel-Lucent
Email: Italo.Busi@alcatel-lucent.com

Ben Niven-Jenkins
Velocix
Email: ben@niven-jenkins.co.uk

Annamaria Fulignoli
Ericsson
Email: annamaria.fulignoli@ericsson.com

Enrique Hernandez-Valencia
Alcatel-Lucent
Email: Enrique.Hernandez@alcatel-lucent.com

Lieven Levrau
Alcatel-Lucent
Email: Lieven.Levrau@alcatel-lucent.com

Vincenzo Sestito
Alcatel-Lucent
Email: Vincenzo.Sestito@alcatel-lucent.com

Nurit Sprecher
Nokia Siemens Networks
Email: nurit.sprecher@nsn.com

Huub van Helvoort
Huawei Technologies
Email: hhelvoort@huawei.com

Martin Vigoureux
Alcatel-Lucent
Email: Martin.Vigoureux@alcatel-lucent.com

Yaacov Weingarten
Nokia Siemens Networks
Email: yaacov.weingarten@nsn.com

Rolf Winter
NEC
Email: Rolf.Winter@nw.neclab.eu