MPLS Working Group                                        I. Busi (Ed)
Internet Draft                                          Alcatel-Lucent
Intended status: Informational                           D. Allan (Ed)
                                                              Ericsson

Expires: August 11, 2011                             February 11, 2011

        Operations, Administration and Maintenance Framework for
                     MPLS-based Transport Networks

               draft-ietf-mpls-tp-oam-framework-11.txt

Abstract

   The Transport Profile of Multi-Protocol Label Switching (MPLS-TP)
   is a packet-based transport technology based on the MPLS Traffic
   Engineering (MPLS-TE) and Pseudowire (PW) data plane architectures.
   This document describes a framework to support a comprehensive set
   of Operations, Administration and Maintenance (OAM) procedures that
   fulfill the MPLS-TP OAM requirements for fault, performance and
   protection-switching management and that do not rely on the
   presence of a control plane.

   This document is a product of a joint Internet Engineering Task
   Force (IETF) / International Telecommunication Union
   Telecommunication Standardization Sector (ITU-T) effort to include
   an MPLS Transport Profile within the IETF MPLS and PWE3
   architectures to support the capabilities and functionalities of a
   packet transport network as defined by the ITU-T.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress".

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on August 11, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.
   Please review these documents carefully, as they describe your
   rights and restrictions with respect to this document.  Code
   Components extracted from this document must include Simplified BSD
   License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Table of Contents

   1. Introduction..................................................5
      1.1. Contributing Authors.....................................7
   2. Conventions used in this document.............................7
      2.1. Terminology..............................................7
      2.2. Definitions..............................................9
   3. Functional Components........................................12
      3.1. Maintenance Entity and Maintenance Entity Group.........12
      3.2. MEG Nesting: SPMEs and Tandem Connection Monitoring.....14
      3.3. MEG End Points (MEPs)...................................16
      3.4. MEG Intermediate Points (MIPs)..........................20
      3.5. Server MEPs.............................................22
      3.6. Configuration Considerations............................23
      3.7. P2MP considerations.....................................23
      3.8. Further considerations of enhanced segment monitoring...24
   4. Reference Model..............................................26
      4.1. MPLS-TP Section Monitoring (SMEG).......................28
      4.2. MPLS-TP LSP End-to-End Monitoring Group (LMEG)..........29
      4.3. MPLS-TP PW Monitoring (PMEG)............................29
      4.4. MPLS-TP LSP SPME Monitoring (LSMEG).....................30
      4.5. MPLS-TP MS-PW SPME Monitoring (PSMEG)...................31
      4.6. Fate sharing considerations for multilink...............33
   5. OAM Functions for proactive monitoring.......................33
      5.1. Continuity Check and Connectivity Verification..........34
         5.1.1. Defects identified by CC-V.........................37
         5.1.2. Consequent action..................................39
         5.1.3. Configuration considerations.......................40
      5.2. Remote Defect Indication................................42
         5.2.1. Configuration considerations.......................43
      5.3. Alarm Reporting.........................................43
      5.4. Lock Reporting..........................................44
      5.5. Packet Loss Measurement.................................46
         5.5.1. Configuration considerations.......................47
         5.5.2. Sampling skew......................................48
         5.5.3. Multilink issues...................................48
      5.6. Packet Delay Measurement................................48
         5.6.1. Configuration considerations.......................49
      5.7. Client Failure Indication...............................49
         5.7.1. Configuration considerations.......................50
   6. OAM Functions for on-demand monitoring.......................50
      6.1. Connectivity Verification...............................51
         6.1.1. Configuration considerations.......................52
      6.2. Packet Loss Measurement.................................52
         6.2.1. Configuration considerations.......................53
         6.2.2. Sampling skew......................................53
         6.2.3. Multilink issues...................................53
      6.3. Diagnostic Tests........................................53
         6.3.1. Throughput Estimation..............................53
         6.3.2. Data plane Loopback................................55
      6.4. Route Tracing...........................................57
         6.4.1. Configuration considerations.......................57
      6.5. Packet Delay Measurement................................57
         6.5.1. Configuration considerations.......................58
   7. OAM Functions for administration control.....................58
      7.1. Lock Instruct...........................................58
         7.1.1. Locking a transport path...........................59
         7.1.2. Unlocking a transport path.........................59
   8. Security Considerations......................................60
   9. IANA Considerations..........................................61
   10. Acknowledgments.............................................61
   11. References..................................................62
      11.1. Normative References...................................62
      11.2. Informative References.................................63

Editors' Note:

   This Informational Internet-Draft is aimed at achieving IETF
   Consensus before publication as an RFC and will be subject to an
   IETF Last Call.

   [RFC Editor, please remove this note before publication as an RFC
   and insert the correct Streams Boilerplate to indicate that the
   published RFC has IETF Consensus.]

1. Introduction

   As noted in the Multiprotocol Label Switching Transport Profile
   (MPLS-TP) Framework RFCs (RFC 5921 [8] and [9]), MPLS-TP is a
   packet-based transport technology based on the MPLS Traffic
   Engineering (MPLS-TE) and Pseudowire (PW) data plane architectures
   defined in RFC 3031 [1], RFC 3985 [2] and RFC 5659 [4].

   MPLS-TP supports a comprehensive set of Operations, Administration
   and Maintenance (OAM) procedures for fault, performance and
   protection-switching management that do not rely on the presence of
   a control plane.

   In line with [15], existing MPLS OAM mechanisms will be used
   wherever possible, and extensions or new OAM mechanisms will be
   defined only where existing mechanisms are not sufficient to meet
   the requirements.  Some extensions discussed in this framework may
   end up as aspirational capabilities and may be determined to be not
   tractably realizable in some implementations.  Extensions do not
   deprecate support for existing MPLS OAM capabilities.
   The MPLS-TP OAM framework defined in this document provides a
   protocol-neutral description of the required OAM functions and of
   the data plane OAM architecture to support a comprehensive set of
   OAM procedures that satisfy the MPLS-TP OAM requirements of RFC
   5860 [11].  In this regard, it defines similar OAM functionality as
   for existing SONET/SDH and OTN OAM mechanisms (e.g. [19]).

   The MPLS-TP OAM framework is applicable to sections, Label Switched
   Paths (LSPs), Multi-Segment Pseudowires ((MS-)PWs) and Sub Path
   Maintenance Entities (SPMEs).  It supports co-routed and associated
   bidirectional p2p transport paths as well as unidirectional p2p and
   p2mp transport paths.

   OAM packets that instrument a particular direction of a transport
   path are subject to the same forwarding treatment (i.e. fate-share)
   as the user data packets and, in some cases where Explicitly
   TC-encoded-PSC LSPs (E-LSPs) are employed, may be required to have
   a common Per-hop Behavior (PHB) Scheduling Class (PSC) E2E with the
   class of traffic monitored.  In case of Label-Only-Inferred-PSC
   LSPs (L-LSPs), only one class of traffic needs to be monitored, and
   therefore the OAM packets have a common PSC with the monitored
   traffic class.

   OAM packets can be distinguished from the user data packets using
   the GAL and ACH constructs of RFC 5586 [7] for LSPs, SPMEs and
   Sections, or the ACH construct of RFC 5085 [3] and RFC 5586 [7]
   for (MS-)PWs.  OAM packets are never fragmented and are not
   combined with user data in the same packet payload.

   This framework makes certain assumptions as to the utility and
   frequency of different classes of measurement that naturally
   suggest different functions are implemented as distinct OAM flows
   or packets.  This is dictated by the combination of the class of
   problem being detected and the need for timeliness of network
   response to the problem.
   For example, fault detection is expected to operate on an entirely
   different time base than performance monitoring, which is also
   expected to operate on an entirely different time base than in-band
   management transactions.

   The remainder of this memo is structured as follows:

   Section 2 covers the definitions and terminology used in this memo.

   Section 3 describes the functional components that generate and
   process OAM packets.

   Section 4 describes the reference models for applying OAM functions
   to Sections, LSPs, MS-PWs and their SPMEs.

   Sections 5, 6 and 7 provide a protocol-neutral description of the
   OAM functions, defined in RFC 5860 [11], aimed at clarifying how
   the OAM protocol solutions will behave to achieve their functional
   objectives.

   Section 8 discusses the security implications of OAM protocol
   design in the MPLS-TP context.

   The OAM protocol solutions designed as a consequence of this
   document are expected to comply with the functional behavior
   described in sections 5, 6 and 7.  Alternative solutions to
   required functional behaviors may also be defined.

   OAM specifications following this OAM framework may be provided in
   different documents to cover distinct OAM functions.

   This document is a product of a joint Internet Engineering Task
   Force (IETF) / International Telecommunication Union
   Telecommunication Standardization Sector (ITU-T) effort to include
   an MPLS Transport Profile within the IETF MPLS and PWE3
   architectures to support the capabilities and functionalities of a
   packet transport network as defined by the ITU-T.

1.1. Contributing Authors

   Dave Allan, Italo Busi, Ben Niven-Jenkins, Annamaria Fulignoli,
   Enrique Hernandez-Valencia, Lieven Levrau, Vincenzo Sestito, Nurit
   Sprecher, Huub van Helvoort, Martin Vigoureux, Yaacov Weingarten,
   Rolf Winter

2. Conventions used in this document

2.1. Terminology

   AC     Attachment Circuit
   AIS    Alarm Indication Signal
   CC     Continuity Check
   CC-V   Continuity Check and/or Connectivity Verification
   CV     Connectivity Verification
   DBN    Domain Border Node
   E-LSP  Explicitly TC-encoded-PSC LSP
   ICC    ITU Carrier Code
   LER    Label Edge Router
   LKR    Lock Report
   L-LSP  Label-Only-Inferred-PSC LSP
   LM     Loss Measurement
   LME    LSP Maintenance Entity
   LMEG   LSP ME Group
   LSP    Label Switched Path
   LSR    Label Switching Router
   LSME   LSP SPME ME
   LSMEG  LSP SPME ME Group
   ME     Maintenance Entity
   MEG    Maintenance Entity Group
   MEP    Maintenance Entity Group End Point
   MIP    Maintenance Entity Group Intermediate Point
   NMS    Network Management System
   PE     Provider Edge
   PHB    Per-hop Behavior
   PM     Performance Monitoring
   PME    PW Maintenance Entity
   PMEG   PW ME Group
   PSC    PHB Scheduling Class
   PSME   PW SPME ME
   PSMEG  PW SPME ME Group
   PW     Pseudowire
   SLA    Service Level Agreement
   SME    Section Maintenance Entity
   SMEG   Section ME Group
   SPME   Sub-path Maintenance Element
   S-PE   Switching Provider Edge
   TC     Traffic Class
   T-PE   Terminating Provider Edge

2.2. Definitions

   This document uses the terms defined in RFC 5654 [5].

   This document uses the term 'Per-hop Behavior' as defined in RFC
   2474 [16].

   This document uses the term LSP to indicate either a service LSP or
   a transport LSP (as defined in RFC 5921 [8]).

   This document uses the term Sub Path Maintenance Element (SPME) as
   defined in RFC 5921 [8].

   This document uses the term traffic profile as defined in RFC 2475
   [13].

   Where appropriate, the following definitions are aligned with ITU-T
   Recommendation Y.1731 [21] in order to have a common, unambiguous
   terminology.  They do not, however, intend to imply a certain
   implementation but rather serve as a framework to describe the
   necessary OAM functions for MPLS-TP.
   Adaptation function: The adaptation function is the interface
   between the client (sub-)layer and the server (sub-)layer.

   Branch Node: A node along a point-to-multipoint transport path that
   is connected to more than one downstream node.

   Bud Node: A node along a point-to-multipoint transport path that is
   at the same time a branch node and a leaf node for this transport
   path.

   Data plane loopback: An out-of-service test where a transport path
   at either an intermediate or terminating node is placed into a data
   plane loopback state, such that all traffic (including both payload
   and OAM) received on the looped back interface is sent on the
   reverse direction of the transport path.

   Note - The only way to send an OAM packet to a node that has been
   put into data plane loopback mode is via TTL expiry, irrespective
   of whether the node is hosting MIPs or MEPs.

   Domain Border Node (DBN): An intermediate node in an MPLS-TP LSP
   that is at the boundary between two MPLS-TP OAM domains.  Such a
   node may be present on the edge of two domains or may be connected
   by a link to the DBN at the edge of another OAM domain.

   Down MEP: A MEP that receives OAM packets from, and transmits them
   towards, the direction of a server layer.

   Forwarding Engine: An abstract functional component, residing in an
   LSR, that forwards the packets from an ingress interface toward the
   egress interface(s).

   In-Service: The administrative status of a transport path when it
   is unlocked.

   Interface: An interface is the attachment point to a server
   (sub-)layer, e.g., MPLS-TP section or MPLS-TP tunnel.

   Intermediate Node: An intermediate node transits traffic for an LSP
   or a PW.  An intermediate node may originate OAM flows directed to
   downstream intermediate nodes or MEPs.

   Loopback: See data plane loopback and OAM loopback definitions.
   Maintenance Entity (ME): Some portion of a transport path that
   requires management, bounded by two points (called MEPs), and the
   relationship between those points to which maintenance and
   monitoring operations apply (details in section 3.1).

   Maintenance Entity Group (MEG): The set of one or more maintenance
   entities that maintain and monitor a section or a transport path in
   an OAM domain.

   MEP: A MEG end point (MEP) is capable of initiating (source MEP)
   and terminating (sink MEP) OAM packets for fault management and
   performance monitoring.  MEPs define the boundaries of an ME
   (details in section 3.3).

   MIP: A MEG intermediate point (MIP) terminates and processes OAM
   packets that are sent to this particular MIP and may generate OAM
   packets in reaction to received OAM packets.  It never generates
   unsolicited OAM packets itself.  A MIP resides within a MEG between
   MEPs (details in section 3.3).

   MPLS-TP Section: As defined in [8], it is a link that can be
   traversed by one or more MPLS-TP LSPs.

   OAM domain: A domain, as defined in [5], whose entities are grouped
   for the purpose of keeping the OAM confined within that domain.  An
   OAM domain contains zero or more MEGs.

   Note - Within the rest of this document, the term "domain" is used
   to indicate an "OAM domain".

   OAM flow: The set of all OAM packets originating with a specific
   source MEP that instrument one direction of a MEG (or possibly both
   in the special case of data plane loopback).

   OAM loopback: The capability of a node to be directed by a received
   OAM packet to generate a reply back to the sender.  OAM loopback
   can work in-service and can support different OAM functions (e.g.,
   bidirectional on-demand connectivity verification).

   OAM Packet: A packet that carries OAM information between MEPs
   and/or MIPs in a MEG to perform some OAM functionality (e.g.
   connectivity verification).
   Originating MEP: A MEP that originates an OAM transaction packet
   (toward a target MIP/MEP) and expects a reply, either in-band or
   out-of-band, from that target MIP/MEP.  The originating MEP always
   generates the OAM request packets in-band and expects and processes
   only OAM reply packets returned by the target MIP/MEP.

   Out-of-Service: The administrative status of a transport path when
   it is locked.  When a path is in a locked condition, it is blocked
   from carrying client traffic.

   Path Segment: Either a segment or a concatenated segment, as
   defined in RFC 5654 [5].

   Signal Degrade: A condition declared by a MEP when the data
   forwarding capability associated with a transport path has
   deteriorated, as determined by performance monitoring (PM).  See
   also ITU-T Recommendation G.806 [14].

   Signal Fail: A condition declared by a MEP when the data forwarding
   capability associated with a transport path has failed, e.g. loss
   of continuity.  See also ITU-T Recommendation G.806 [14].

   Sink MEP: A MEP acts as a sink MEP for an OAM packet when it
   terminates and processes the packets received from its associated
   MEG.

   Source MEP: A MEP acts as a source MEP for an OAM packet when it
   originates and inserts the packet into the transport path for its
   associated MEG.

   Tandem Connection: A tandem connection is an arbitrary part of a
   transport path that can be monitored (via OAM) independently of the
   end-to-end monitoring (OAM).  The tandem connection may also
   include the forwarding engine(s) of the node(s) at the boundaries
   of the tandem connection.  Tandem connections may be nested but
   cannot overlap.  See also ITU-T Recommendation G.805 [20].

   Target MEP/MIP: A MEP or a MIP that is targeted by OAM transaction
   packets and that replies to the originating MEP that initiated the
   OAM transactions.  The target MEP or MIP can reply either in-band
   or out-of-band.  The target sink MEP function always receives the
   OAM request packets in-band, while the target source MEP function
   only generates the OAM reply packets that are sent in-band.

   Up MEP: A MEP that transmits OAM packets towards, and receives them
   from, the direction of the forwarding engine.

3. Functional Components

   MPLS-TP is a packet-based transport technology based on the MPLS
   and PW data plane architectures ([1], [2] and [4]) and is capable
   of transporting service traffic where the characteristics of
   information transfer between the transport path endpoints can be
   demonstrated to comply with certain performance and quality
   guarantees.

   In order to describe the required OAM functionality, this document
   introduces a set of functional components.

3.1. Maintenance Entity and Maintenance Entity Group

   MPLS-TP OAM operates in the context of Maintenance Entities (MEs)
   that define a relationship between two points of a transport path
   to which maintenance and monitoring operations apply.  The two
   points that define a maintenance entity are called Maintenance
   Entity Group (MEG) End Points (MEPs).  The collection of one or
   more MEs that belong to the same transport path and that are
   maintained and monitored as a group is known as a Maintenance
   Entity Group (MEG).  In between MEPs, there are zero or more
   intermediate points, called Maintenance Entity Group Intermediate
   Points (MIPs).  MEPs and MIPs are associated with the MEG and can
   be shared by more than one ME in a MEG.

   An abstract reference model for an ME is illustrated in Figure 1
   below:

           +-+    +-+    +-+    +-+
           |A|----|B|----|C|----|D|
           +-+    +-+    +-+    +-+

              Figure 1 ME Abstract Reference Model

   The instantiation of this abstract model to different MPLS-TP
   entities is described in section 4.
   In Figure 1, nodes A and D can be LERs for an LSP or the
   Terminating Provider Edges (T-PEs) for an MS-PW; nodes B and C are
   LSRs for an LSP or Switching PEs (S-PEs) for an MS-PW.  MEPs reside
   in nodes A and D, while MIPs reside in nodes B and C and may reside
   in A and D.  The links connecting adjacent nodes can be physical
   links, (sub-)layer LSPs/SPMEs, or server layer paths.

   This functional model defines the relationships between all OAM
   entities from a maintenance perspective, and it allows each
   Maintenance Entity to provide monitoring and management for the
   (sub-)layer network under its responsibility and efficient
   localization of problems.

   An MPLS-TP Maintenance Entity Group may be defined to monitor the
   transport path for fault and/or performance management.

   The MEPs that form a MEG bound the scope of an OAM flow to the MEG
   (i.e. within the domain of the transport path that is being
   monitored and managed).  There are two exceptions to this:

   1) A misbranching fault may cause OAM packets to be delivered to a
      MEP that is not in the MEG of origin.

   2) An out-of-band return path may be used between a MIP or a MEP
      and the originating MEP.

   In case of unidirectional point-to-point transport paths, a single
   unidirectional Maintenance Entity is defined to monitor it.

   In case of associated bi-directional point-to-point transport
   paths, two independent unidirectional Maintenance Entities are
   defined to independently monitor each direction.  This has
   implications for transactions that terminate at or query a MIP, as
   a return path from MIP to originating MEP does not necessarily
   exist in the MEG.

   In case of co-routed bi-directional point-to-point transport paths,
   a single bidirectional Maintenance Entity is defined to monitor
   both directions congruently.
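   The point-to-point cases above can be sketched as follows.  This is
   an illustrative restatement only, not part of any OAM solution; the
   endpoint names A and Z and the path-type strings are invented for
   the example.

```python
def point_to_point_mes(path_type):
    """Return the (source, sink, bidirectional) MEs defined for a
    p2p transport path between endpoints A and Z, per the rules
    above."""
    if path_type == "unidirectional":
        # A single unidirectional ME monitors the path.
        return [("A", "Z", False)]
    if path_type == "associated-bidirectional":
        # Two independent unidirectional MEs, one per direction.
        return [("A", "Z", False), ("Z", "A", False)]
    if path_type == "co-routed-bidirectional":
        # A single bidirectional ME monitors both directions
        # congruently.
        return [("A", "Z", True)]
    raise ValueError(path_type)

assert len(point_to_point_mes("associated-bidirectional")) == 2
assert point_to_point_mes("co-routed-bidirectional") == [("A", "Z", True)]
```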
   In case of unidirectional point-to-multipoint transport paths, a
   single unidirectional Maintenance Entity for each leaf is defined
   to monitor the transport path from the root to that leaf.

   In all cases, portions of the transport path may be monitored by
   the instantiation of SPMEs (see section 3.2).

   The reference model for the p2mp MEG is represented in Figure 2.

                           +-+
                        /--|D|
                       /   +-+
                    +-+
                 /--|C|
      +-+    +-+/   +-+\   +-+
      |A|----|B|        \--|E|
      +-+    +-+\   +-+    +-+
                 \--|F|
                    +-+

           Figure 2 Reference Model for p2mp MEG

   In case of p2mp transport paths, the OAM measurements are
   independent for each ME (A-D, A-E and A-F):

   o  Fault conditions - some faults may impact more than one ME,
      depending on where the failure is located;

   o  Packet loss - packet dropping may impact more than one ME,
      depending on where the packets are lost;

   o  Packet delay - will be unique per ME.

   Each leaf (i.e. D, E and F) terminates OAM flows to monitor the ME
   between itself and the root, while the root (i.e. A) generates OAM
   packets common to all the MEs of the p2mp MEG.  All nodes may
   implement a MIP in the corresponding MEG.

3.2. MEG Nesting: SPMEs and Tandem Connection Monitoring

   In order to verify and maintain performance and quality guarantees,
   there is a need to not only apply OAM functionality on a transport
   path granularity (e.g. LSP or MS-PW), but also on arbitrary parts
   of transport paths, defined as Tandem Connections, between any two
   arbitrary points along a transport path.

   Sub-path Maintenance Elements (SPMEs), as defined in [8], are
   hierarchical LSPs instantiated to provide monitoring of a portion
   of a set of transport paths (LSPs or MS-PWs) that follow the same
   path between the ingress and the egress of the SPME.  The
   operational aspects of instantiating SPMEs are out of scope of this
   memo.
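   As the Tandem Connection definition in section 2.2 notes, tandem
   connections (and hence the SPMEs that monitor them) may be nested
   but cannot overlap.  A minimal sketch of that constraint, modeling
   monitored path segments as (first_hop, last_hop) index pairs along
   the transport path; the indices and function name are invented
   coordinates for illustration, not anything defined by the
   framework:

```python
def compatible(seg_a, seg_b):
    """True if two monitored segments are disjoint or properly
    nested; partial overlap is the forbidden case."""
    (a1, a2), (b1, b2) = sorted(seg_a), sorted(seg_b)
    disjoint = a2 <= b1 or b2 <= a1
    nested = (a1 <= b1 and b2 <= a2) or (b1 <= a1 and a2 <= b2)
    return disjoint or nested

assert compatible((0, 5), (1, 3))      # nested: allowed
assert compatible((0, 2), (2, 5))      # back-to-back: allowed
assert not compatible((0, 3), (2, 5))  # partial overlap: forbidden
```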
   SPMEs can also be employed to meet the requirement to provide
   tandem connection monitoring (TCM), as defined by ITU-T
   Recommendation G.805 [20].

   TCM for a given path segment of a transport path is implemented by
   creating an SPME that has a 1:1 association with the path segment
   of the transport path that is to be monitored.

   In the TCM case, this means that the SPME used to provide TCM can
   carry one and only one transport path, thus allowing direct
   correlation between all fault management and performance monitoring
   information gathered for the SPME and the monitored path segment of
   the end-to-end transport path.

   There are a number of implications to this approach:

   1) The SPME would use the uniform model [23] of Traffic Class (TC)
      code point copying between sub-layers for Diffserv, such that
      the E2E markings and PHB treatment for the transport path are
      preserved by the SPMEs.

   2) The SPME normally would use the short-pipe model for TTL
      handling [6] (no TTL copying between sub-layers), such that the
      TTL distance to the MIPs for the E2E entity would not be
      impacted by the presence of the SPME, but it should be possible
      for an operator to specify use of the uniform model.

   Note that points 1 and 2 above assume that the TTL copying mode and
   TC copying mode are independently configurable for an LSP.

   The TTL distance to the MIPs plays a critical role for delivering
   packets to these MIPs, as described in section 3.4.

   There are specific issues with the use of the uniform model of TTL
   copying for an SPME:

   1. A MIP in the SPME sub-layer is not part of the transport path
      MEG; hence, only an out-of-band return path for OAM originating
      in the transport path MEG that addressed an SPME MIP might be
      available.

   2. The instantiation of a lower level MEG or protection switching
      actions within a lower level MEG may change the TTL distances to
      MIPs in the higher level MEGs.

   The endpoints of the SPME are MEPs and limit the scope of an OAM
   flow within the MEG that the MEPs belong to (i.e. within the domain
   of the SPME that is being monitored and managed).

   When considering SPMEs, it is important to consider that the
   following properties apply to all MPLS-TP MEGs (regardless of
   whether they instrument LSPs, SPMEs or MS-PWs):

   o  They can be nested but not overlapped, e.g. a MEG may cover a
      path segment of another MEG, and may also include the forwarding
      engine(s) of the node(s) at the edge(s) of the path segment.
      However, when MEGs are nested, the MEPs and MIPs in the SPME are
      no longer part of the encompassing MEG.

   o  It is possible that MEPs of MEGs that are nested reside on a
      single node but again implemented in such a way that they do not
      overlap.

   o  Each OAM flow is associated with a single MEG.

   o  When an SPME is instantiated after the transport path has been
      instantiated, the TTL distance to the MIPs may change for the
      short-pipe model of TTL copying, and may change for the uniform
      model if the SPME is not co-routed with the original path.

3.3. MEG End Points (MEPs)

   MEG End Points (MEPs) are the source and sink points of a MEG.  In
   the context of an MPLS-TP LSP, only LERs can implement MEPs, while
   in the context of an SPME, any LSR of the MPLS-TP LSP can be an LER
   of SPMEs that contribute to the overall monitoring infrastructure
   of the transport path.  Regarding PWs, only T-PEs can implement
   MEPs, while for SPMEs supporting one or more PWs both T-PEs and
   S-PEs can implement SPME MEPs.  Any MPLS-TP LSR can implement a MEP
   for an MPLS-TP Section.
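   The MEP placement rules of the preceding paragraph can be
   summarized in a small table.  The dictionary below is an
   illustrative restatement with invented construct and role names,
   not a normative definition:

```python
# Which node roles may host a MEP, per construct (sketch only).
MEP_CAPABLE_ROLES = {
    "LSP":      {"LER"},                # only LERs implement LSP MEPs
    "LSP-SPME": {"LER", "LSR"},         # any LSR of the LSP can be an
                                        # LER of an SPME
    "PW":       {"T-PE"},               # only terminating PEs
    "PW-SPME":  {"T-PE", "S-PE"},       # S-PEs allowed for PW SPMEs
    "Section":  {"LER", "LSR"},         # any MPLS-TP LSR (LERs are
                                        # LSRs too)
}

def can_host_mep(construct, node_role):
    return node_role in MEP_CAPABLE_ROLES[construct]

assert can_host_mep("LSP", "LER") and not can_host_mep("LSP", "LSR")
assert can_host_mep("PW-SPME", "S-PE")
```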
   MEPs are responsible for originating almost all of the proactive
   and on-demand monitoring OAM functionality for the MEG.  There is a
   separate class of notifications (such as Lock Report (LKR) and
   Alarm Indication Signal (AIS)) that are originated by intermediate
   nodes and triggered by server layer events.  A MEP is capable of
   originating and terminating OAM packets for fault management and
   performance monitoring.  These OAM packets are carried within the
   G-ACh with the proper encapsulation and an appropriate channel type
   as defined in RFC 5586 [7].  A MEP terminates all the OAM packets
   it receives from the MEG it belongs to and silently discards OAM
   packets that do not belong to that MEG (note that in the particular
   case of Connectivity Verification (CV), processing a CV packet from
   an incorrect MEG will result in a mis-connectivity defect, and
   further actions are taken).  The MEG the OAM packet belongs to is
   associated with the MPLS or PW label.  Whether the MEG is inferred
   from the label or from the content of the OAM packet is an
   implementation choice.  In the case of an MPLS-TP Section, the MEG
   is inferred from the port on which an OAM packet was received with
   the GAL at the top of the label stack.

   OAM packets may require the use of an available "out-of-band"
   return path (as defined in [8]).  In such cases, sufficient
   information is required in the originating transaction such that
   the OAM reply packet can be constructed and properly forwarded to
   the originating MEP (e.g. IP address).

   Each OAM solution document will further detail the applicability of
   the tools it defines as a proactive or on-demand mechanism, as well
   as its usage when:

   o  The "in-band" return path exists and it is used;

   o  An "out-of-band" return path exists and it is used;

   o  Any return path does not exist or is not used.
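   As an illustration of the demultiplexing described above, the
   sketch below distinguishes Section, LSP/SPME and PW OAM using the
   GAL (label value 13) and the ACH first nibble (0001), both from RFC
   5586 [7].  The function and its inputs are invented for this
   example; a real implementation would consult considerably more
   context (e.g. whether a control word is in use on the PW):

```python
GAL = 13  # Generic Associated Channel Label, RFC 5586

def classify_oam(port, label_stack, payload_first_nibble):
    """Return the key used to look up the MEG for a received OAM
    packet, or None if the packet is ordinary user data."""
    if label_stack and label_stack[0] == GAL:
        # GAL at the top of the stack on a link: Section OAM; the
        # MEG is inferred from the receiving port.
        return ("section", port)
    if GAL in label_stack:
        # GAL below an LSP/SPME label: the MEG is associated with
        # the label immediately above the GAL.
        return ("lsp", label_stack[label_stack.index(GAL) - 1])
    if payload_first_nibble == 0x1:
        # PW case: an ACH (first nibble 0001) follows the PW label.
        return ("pw", label_stack[-1])
    return None

assert classify_oam("if0", [13], 0x1) == ("section", "if0")
assert classify_oam("if0", [100, 13], 0x1) == ("lsp", 100)
assert classify_oam("if0", [200], 0x1) == ("pw", 200)
assert classify_oam("if0", [200], 0x4) is None
```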
743 Once a MEG is configured, the operator can configure which 744 proactive OAM functions to use on the MEG, but the MEPs are 745 always enabled. 747 MEPs terminate all OAM packets received from the associated MEG. 748 As the MEP corresponds to the termination of the forwarding path 749 for a MEG at the given (sub-)layer, OAM packets never leak 750 outside of a MEG in a properly configured fault-free 751 implementation. 753 A MEP of an MPLS-TP transport path coincides with transport path 754 termination and monitors it for failures or performance 755 degradation (e.g. based on packet counts) in an end-to-end 756 scope. Note that both the source MEP and the sink MEP coincide with the 757 transport path's source and sink terminations. 759 The MEPs of an SPME are not necessarily coincident with the 760 termination of the MPLS-TP transport path. They are used to 761 monitor a path segment of the transport path for failures or 762 performance degradation (e.g. based on packet counts) only 763 within the boundary of the MEG for the SPME. 765 An MPLS-TP sink MEP passes a fault indication to its client 766 (sub-)layer network as a consequent action of fault detection. 767 When the client layer is not MPLS-TP, the consequent actions in 768 the client layer (e.g., ignore or generate client layer specific 769 OAM notifications) are outside the scope of this document. 771 A node hosting a MEP can support either a per-node MEP or per- 772 interface MEP(s). A per-node MEP resides in an unspecified 773 location within the node, while a per-interface MEP resides on a 774 specific side of the forwarding engine. In particular, a per- 775 interface MEP is called "Up MEP" or "Down MEP" depending on its 776 location relative to the forwarding engine. An "Up MEP" 777 transmits OAM packets towards, and receives them from, the 778 direction of the forwarding engine, while a "Down MEP" receives 779 OAM packets from, and transmits them towards, the direction of a 780 server layer.
782 Source node Up MEP Destination node Up MEP 783 ------------------------ ------------------------ 784 | | | | 785 |----- -----| |----- -----| 786 | MEP | | | | | | MEP | 787 | | ---- | | | | ---- | | 788 | In |->-| FW |->-| Out |->- ->-| In |->-| FW |->-| Out | 789 | i/f | ---- | i/f | | i/f | ---- | i/f | 790 |----- -----| |----- -----| 791 | | | | 792 ------------------------ ------------------------ 793 (1) (2) 795 Source node Down MEP Destination node Down MEP 796 ------------------------ ------------------------ 797 | | | | 798 |----- -----| |----- -----| 799 | | | MEP | | MEP | | | 800 | | ---- | | | | ---- | | 801 | In |->-| FW |->-| Out |->- ->-| In |->-| FW |->-| Out | 802 | i/f | ---- | i/f | | i/f | ---- | i/f | 803 |----- -----| |----- -----| 804 | | | | 805 ------------------------ ------------------------ 806 (3) (4) 808 Figure 3 Examples of per-interface MEPs 810 Figure 3 describes four examples of per-interface MEPs: an Up 811 Source MEP in a source node (case 1), an Up Sink MEP in a 812 destination node (case 2), a Down Source MEP in a source node 813 (case 3) and a Down Sink MEP in a destination node (case 4). 815 The usage of per-interface Up MEPs extends the coverage of the 816 ME for both fault and performance monitoring closer to the edge 817 of the domain and allows failures or performance degradation to 818 be isolated to within a node, a link, or an interface. 821 Each OAM solution document will further detail the implications 822 of the tools it defines when used with per-interface or per-node 823 MEPs, if necessary. 825 It may occur that multiple MEPs for the same MEG are on the same 826 node, and are all Up MEPs, each on one side of the forwarding 827 engine, such that the MEG is entirely internal to the node. 829 It should be noted that a ME may span nodes that implement per-node 830 MEPs and per-interface MEPs.
This guarantees backward 831 compatibility with most of the existing LSRs that can implement 832 only a per-node MEP, since in current implementations label 833 operations are largely performed on the ingress interface; hence 834 the exposure of the GAL as the top label occurs at the ingress 835 interface. 837 Note that a MEP can only exist at the beginning and end of a 838 (sub-)layer in MPLS-TP. If there is a need to monitor some 839 portion of that LSP or PW, a new sub-layer in the form of an 840 SPME must be created, which permits MEPs and associated MEGs to 841 be created. 843 In the case where an intermediate node sends an OAM packet to a 844 MEP, it uses the top label of the stack at that point. 846 3.4. MEG Intermediate Points (MIPs) 848 A MEG Intermediate Point (MIP) is a function located at a point 849 between the MEPs of a MEG for a PW, LSP or SPME. 851 A MIP is capable of reacting to some OAM packets and forwarding all 852 the other OAM packets while ensuring fate sharing with user data 853 packets. However, a MIP does not initiate unsolicited OAM packets, 854 but may be addressed by OAM packets initiated by one of the MEPs of 855 the MEG. A MIP can generate OAM packets only in response to OAM 856 packets that it receives from the MEG it belongs to. The OAM packets 857 generated by the MIP are sent to the originating MEP. 859 An intermediate node within a MEG can either: 861 o Support a per-node MIP (i.e. a single MIP per node in an 862 unspecified location within the node); or 864 o Support per-interface MIPs (i.e. two or more MIPs per node on 865 both sides of the forwarding engine). 867 Support of per-interface or per-node MIPs is an implementation 868 choice. It is also possible that a node supports per-interface 869 MIPs on some MEGs and per-node MIPs on other MEGs for which it 870 is a transit node.
872 Intermediate node 873 ------------------------ 874 | | 875 |----- -----| 876 | MIP | | MIP | 877 | | ---- | | 878 ->-| In |->-| FW |->-| Out |->- 879 | i/f | ---- | i/f | 880 |----- -----| 881 | | 882 ------------------------ 883 Figure 4 Example of per-interface MIPs 885 Figure 4 describes an example of two per-interface MIPs at an 886 intermediate node of a point-to-point MEG. 888 The usage of per-interface MIPs allows failures or performance 889 degradation to be isolated to within a node, a link, or an 890 interface. 892 When sending an OAM packet to a MIP, the source MEP should set 893 the TTL field to indicate the number of hops necessary to reach 894 the node where the MIP resides. 896 The source MEP should also include target MIP information in the 897 OAM packets sent to a MIP to allow proper identification of the 898 MIP within the node. The MEG the OAM packet belongs to is 899 associated with the MPLS label. Whether the MEG is inferred from the 900 label or from the content of the OAM packet is an 901 implementation choice. In the latter case, the MPLS label is checked 902 against the expected one. 904 The use of TTL expiry to deliver OAM packets to a specific MIP 905 is not a fully reliable delivery mechanism because the TTL 906 distance of a MIP from a MEP can change. Any MPLS-TP node 907 silently discards any OAM packet that is received with an expired TTL 908 and that is not addressed to any of its MIPs or MEPs. An 909 MPLS-TP node that does not support OAM is also expected to 910 silently discard any received OAM packet. 912 Packets directed to a MIP may not necessarily carry specific MIP 913 identification information beyond that of TTL distance. In this 914 case a MIP would promiscuously respond to all MEP queries on its 915 MEG.
This capability could be used for discovery functions 916 (e.g., route tracing as defined in section 6.4) or when it is 917 desirable to leave to the originating MEP the job of correlating 918 TTL and MIP identifiers and noting changes or irregularities 919 (via comparison with information previously extracted from the 920 network). 922 MIPs are associated with the MEG they belong to and their identity 923 is unique within the MEG. However, their identity is not 924 necessarily unique to the MEG: e.g. all nodal MIPs in a node can 925 have a common identity. 927 A node hosting a MEP can also support per-interface Up MEPs and 928 per-interface MIPs on either side of the forwarding engine. 930 Once a MEG is configured, the operator can enable/disable the 931 MIPs on the nodes within the MEG. All the intermediate nodes and 932 possibly the end nodes host MIP(s). Local policy allows them to 933 be enabled per function and per MEG. The local policy is 934 controlled by the management system, which may delegate it to 935 the control plane. A disabled MIP silently discards any received 936 OAM packets. 938 3.5. Server MEPs 940 A server MEP is a MEP of a MEG that is either: 942 o Defined in a layer network that is "below", which is to say it 943 encapsulates and transports the MPLS-TP layer network being 944 referenced, or 946 o Defined in a sub-layer of the MPLS-TP layer network that is 947 "below", which is to say it encapsulates and transports the 948 sub-layer being referenced. 950 A server MEP can coincide with a MIP or a MEP in the client 951 (MPLS-TP) (sub-)layer network. 953 A server MEP also provides server layer OAM indications to the 954 client/server adaptation function between the client (MPLS-TP) 955 (sub-)layer network and the server (sub-)layer network. The 956 adaptation function maintains state on the mapping of MPLS-TP 957 transport paths that are set up over that server (sub-)layer's 958 transport path.
960 For example, a server MEP can be either: 962 o A termination point of a physical link (e.g. 802.3), an SDH 963 VC or OTN ODU, for the MPLS-TP Section layer network, defined 964 in section 4.1; 966 o An MPLS-TP Section MEP for MPLS-TP LSPs, defined in section 967 4.2; 969 o An MPLS-TP LSP MEP for MPLS-TP PWs, defined in section 4.3; 971 o An MPLS-TP SPME MEP used for LSP path segment monitoring, as 972 defined in section 4.4, for MPLS-TP LSPs or higher-level 973 SPMEs providing LSP path segment monitoring; 975 o An MPLS-TP SPME MEP used for PW path segment monitoring, as 976 defined in section 4.5, for MPLS-TP PWs or higher-level SPMEs 977 providing PW path segment monitoring. 979 The server MEP can run appropriate OAM functions for fault detection 980 within the server (sub-)layer network, and provide a fault 981 indication to its client MPLS-TP layer network via the client/server 982 adaptation function. When the server layer is not MPLS-TP, server MEP 983 OAM functions are simply assumed to exist but are outside the scope 984 of this document. 986 3.6. Configuration Considerations 988 When a control plane is not present, the management plane configures 989 these functional components. Otherwise, they can be configured either 990 by the management plane or by the control plane. 992 Local policy allows disabling the usage of any available "out-of-band" 993 return path, as defined in [8], irrespective of what is 994 requested by the node originating the OAM packet. 996 SPMEs are usually instantiated when the transport path is 997 created, by either the management plane or the control plane 998 (if present). Sometimes an SPME can be instantiated after the 999 transport path is initially created. 1001 3.7. P2MP considerations 1003 All the traffic sent over a p2mp transport path, including OAM 1004 packets generated by a MEP, is sent (multicast) from the root to 1005 all the leaves.
As a consequence: 1007 o To send an OAM packet to all leaves, the source MEP can 1008 send a single OAM packet that will be delivered by the 1009 forwarding plane to all the leaves and processed by all the 1010 leaves. Hence a single OAM packet can simultaneously 1011 instrument all the MEs in a p2mp MEG. 1013 o To send an OAM packet to a single leaf, the source MEP 1014 sends a single OAM packet that will be delivered by the 1015 forwarding plane to all the leaves but contains sufficient 1016 information to identify a target leaf, and therefore is 1017 processed only by the target leaf and ignored by the other 1018 leaves. 1020 o To send an OAM packet to a single MIP, the source MEP sends 1021 a single OAM packet with the TTL field indicating the 1022 number of hops necessary to reach the node where the MIP 1023 resides. This packet will be delivered by the forwarding 1024 plane to all intermediate nodes at the same TTL distance as 1025 the target MIP and to any leaf that is located at a shorter 1026 distance. The OAM packet must contain sufficient 1027 information to identify the target MIP and therefore is 1028 processed only by the target MIP. 1030 o In order to send an OAM packet to M leaves (i.e., a subset 1031 of all the leaves), the source MEP sends M different OAM 1032 packets, each targeted to an individual leaf in the group of M 1033 leaves. Aggregation or subsetting mechanisms are outside 1034 the scope of this document. 1036 A bud node with a Down MEP or a per-node MEP will both terminate 1037 and relay OAM packets. Just as fault coverage is 1038 maximized by the explicit utilization of Up MEPs, the same is 1039 true for MEPs on a bud node. 1041 P2MP paths are unidirectional; therefore any return path to an 1042 originating MEP for on-demand transactions will be out-of-band.
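The targeting rules above can be sketched, non-normatively, as a function computing which OAM packets a root MEP emits for a given target. TTL and identifier handling are heavily simplified, and all names (including the "mip-id" placeholder and the 255 maximum TTL) are this sketch's assumptions:

```python
# Non-normative sketch of the p2mp targeting rules above: the
# forwarding plane delivers every packet toward all leaves, so
# per-leaf or per-MIP delivery relies on target information (and,
# for MIPs, TTL expiry) carried in the packet itself.
MAX_TTL = 255  # illustrative "do not expire in transit" value

def build_oam_packets(target, mip_distance=None):
    """Return the list of (ttl, target_id) OAM packets a root MEP
    emits. `target` is "all-leaves", "one-mip", a single leaf id,
    or a list of leaf ids (a subset of M leaves)."""
    if target == "all-leaves":
        return [(MAX_TTL, None)]            # one packet instruments all MEs
    if target == "one-mip":
        return [(mip_distance, "mip-id")]   # TTL expires at the MIP's node
    if isinstance(target, list):            # subset of M leaves
        return [(MAX_TTL, leaf) for leaf in target]  # M distinct packets
    return [(MAX_TTL, target)]              # a single identified leaf
```

Note how the subset case yields M packets, matching the last bullet: no aggregation mechanism is assumed.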
1043 A mechanism to target "on-demand" transactions to a single MEP 1044 or MIP is required, as it relieves the originating MEP of an 1045 arbitrarily large processing load and of the requirement to 1046 filter and discard undesired responses: normally, TTL 1047 exhaustion will address all MIPs at a given distance from the 1048 source, and failure to exhaust the TTL will address all MEPs. 1050 3.8. Further considerations of enhanced segment monitoring 1052 Segment monitoring, like any in-service monitoring, in a 1053 transport network should meet the following network objectives: 1055 1. The monitoring and maintenance of existing transport paths have to 1056 be conducted in service without traffic disruption. 1058 2. Segment monitoring must not modify the forwarding of the segment 1059 portion of the transport path. 1061 SPMEs defined in section 3.2 meet the above two objectives when 1062 they are pre-configured or pre-instantiated, as exemplified in 1063 section 3.6. However, pre-design and pre-configuration of all 1064 the considered patterns of SPMEs are sometimes not preferable in 1065 real operation due to the design burden, the number of labels 1066 consumed, bandwidth consumption, and so on. 1068 When SPMEs are configured or instantiated after the transport 1069 path has been created, network objective (1) can be met: 1070 application of an SPME to, and its removal from, a faultless monitored 1071 transport entity can be performed in such a way as not to 1072 introduce any loss of traffic, e.g., by using a non-disruptive 1073 "make-before-break" technique. 1075 However, network objective (2) cannot be met due to the new 1076 assignment of MPLS labels. As a consequence, generally speaking, 1077 the results of SPME monitoring are not necessarily correlated 1078 with the behaviour of traffic in the monitored entity when it 1079 does not use SPME.
For example, application of an SPME to a 1080 problematic/faulty monitored entity might "fix" the problem 1081 encountered by the latter - for as long as the SPME is applied. And 1082 vice versa, application of an SPME to a faultless monitored entity 1083 may result in making it faulty - again, as long as the SPME is 1084 applied. 1086 Support for a more sophisticated segment monitoring mechanism 1087 (temporal and hitless segment monitoring) to efficiently meet 1088 the two network objectives may be necessary. 1090 One possible option to instantiate non-intrusive segment 1091 monitoring without the use of SPMEs would require the MIPs 1092 selected as monitoring endpoints to implement enhanced 1093 functionality and state for the monitored transport path. 1095 For example, the MIPs need to be configured with the TTL distance 1096 to the peer or with the address of the peer, when out-of-band 1097 return paths are used. 1099 A further issue that would need to be considered is events that 1100 result in changing the TTL distance to the peer monitoring 1101 entity, such as protection events, that may temporarily invalidate 1102 OAM information gleaned from the use of this technique. 1104 Further considerations on this technique are outside the scope 1105 of this document. 1107 4. Reference Model 1109 The reference model for the MPLS-TP OAM framework builds upon 1110 the concept of a MEG, and its associated MEPs and MIPs, to 1111 support the functional requirements specified in RFC 5860 [11]. 1113 The following MPLS-TP MEGs are specified in this document: 1115 o A Section Maintenance Entity Group (SMEG), allowing 1116 monitoring and management of MPLS-TP Sections (between MPLS 1117 LSRs). 1119 o An LSP Maintenance Entity Group (LMEG), allowing monitoring 1120 and management of an end-to-end LSP (between LERs). 1122 o A PW Maintenance Entity Group (PMEG), allowing monitoring and 1123 management of an end-to-end SS/MS-PW (between T-PEs).
1125 o An LSP SPME ME Group (LSMEG), allowing monitoring and 1126 management of an SPME (between a given pair of LERs and/or 1127 LSRs along an LSP). 1129 o A PW SPME ME Group (PSMEG), allowing monitoring and 1130 management of an SPME (between a given pair of T-PEs and/or 1131 S-PEs along an (MS-)PW). 1133 The MEGs specified in this MPLS-TP OAM framework are compliant 1134 with the architecture framework for MPLS-TP [8] that includes 1135 both MS-PWs [4] and LSPs [1]. 1137 Hierarchical LSPs are also supported in the form of SPMEs. In 1138 this case, each LSP in the hierarchy is a different sub-layer 1139 network that can be monitored, independently from higher and 1140 lower level LSPs in the hierarchy, on an end-to-end basis (from 1141 LER to LER) by a SPME. It is possible to monitor a portion of a 1142 hierarchical LSP by instantiating a hierarchical SPME between 1143 any LERs/LSRs along the hierarchical LSP. 1145 Native |<------------------ MS-PW1Z ---------------->| Native 1146 Layer | | Layer 1147 Service | || |<-LSP3X->| || | Service 1148 (AC1) V V V V V V V V (AC2) 1149 +----+ +---+ +----+ +----+ +---+ +----+ 1150 +----+ |T-PE| |LSR| |S-PE| |S-PE| |LSR| |T-PE| +----+ 1151 | | | 1 | | 2 | | 3 | | X | | Y | | Z | | | 1152 | | | |=======| |=========| |=======| | | | 1153 | CE1|--|.......PW13......|...PW3X..|......PWXZ.......|---|CE2 | 1154 | | | |=======| |=========| |=======| | | | 1155 | | | | | | | | | | | | | | | | 1156 +----+ | | | | | | | | | | | | +----+ 1157 +----+ +---+ +----+ +----+ +---+ +----+ 1158 . . . . 1159 | | | | 1160 |<--- Domain 1 -->| |<--- Domain Z -->| 1161 ^----------------- PW1Z PMEG ----------------^ 1162 ^--- PW13 PSMEG --^ ^--- PWXZ PSMEG --^ 1163 ^-------^ ^-------^ 1164 LSP13 LMEG LSPXZ LMEG 1165 ^--^ ^--^ ^---------^ ^--^ ^--^ 1166 Sec12 Sec23 Sec3X SecXY SecYZ 1167 SMEG SMEG SMEG SMEG SMEG 1169 ^---^ ME 1170 ^ MEP 1171 ==== LSP 1172 .... 
PW 1174 T-PE1: Terminating Provider Edge 1 1175 LSR2: Label Switching Router 2 1176 S-PE3: Switching Provider Edge 3 1177 S-PEX: Switching Provider Edge X 1178 LSRY: Label Switching Router Y 1179 T-PEZ: Terminating Provider Edge Z 1181 Figure 5 Reference Model for the MPLS-TP OAM Framework 1183 Figure 5 depicts a high-level reference model for the MPLS-TP 1184 OAM framework. The figure depicts portions of two MPLS-TP 1185 enabled network domains, Domain 1 and Domain Z. In Domain 1, 1186 LSR1 is adjacent to LSR2 via the MPLS-TP Section Sec12 and LSR2 1187 is adjacent to LSR3 via the MPLS-TP Section Sec23. Similarly, in 1188 Domain Z, LSRX is adjacent to LSRY via the MPLS-TP Section SecXY 1189 and LSRY is adjacent to LSRZ via the MPLS-TP Section SecYZ. In 1190 addition, LSR3 is adjacent to LSRX via the MPLS-TP Section Sec3X. 1192 Figure 5 also shows a bi-directional MS-PW (PW1Z) between AC1 on 1193 T-PE1 and AC2 on T-PEZ. The MS-PW consists of three 1194 bi-directional PW path segments: 1) the PW13 path segment between 1195 T-PE1 and S-PE3 via the bi-directional LSP13 LSP, 2) the PW3X path 1196 segment between S-PE3 and S-PEX via the bi-directional LSP3X 1197 LSP, and 3) the PWXZ path segment between S-PEX and T-PEZ via the 1198 bi-directional LSPXZ LSP. 1200 The MPLS-TP OAM procedures that apply to a MEG are expected to 1201 operate independently from procedures on other MEGs. Yet, this 1202 does not preclude that multiple MEGs may be affected 1203 simultaneously by the same network condition, for example, a 1204 fiber cut event. 1206 Note that there are no constraints imposed by this OAM framework 1207 on the number, or type (p2p, p2mp, LSP or PW), of MEGs that may 1208 be instantiated on a particular node. In particular, when 1209 looking at Figure 5, it should be possible to configure one or 1210 more MEPs on the same node if that node is the endpoint of one 1211 or more MEGs.
1213 Figure 5 does not describe a PW3X PSMEG because typically SPMEs 1214 are used to monitor an OAM domain (like PW13 and PWXZ PSMEGs) 1215 rather than the segment between two OAM domains. However the OAM 1216 framework does not pose any constraints on the way SPMEs are 1217 instantiated as long as they are not overlapping. 1219 The subsections below define the MEGs specified in this MPLS-TP 1220 OAM architecture framework document. Unless otherwise stated, 1221 all references to domains, LSRs, MPLS-TP Sections, LSPs, 1222 pseudowires and MEGs in this section are made in relation to 1223 those shown in Figure 5. 1225 4.1. MPLS-TP Section Monitoring (SMEG) 1227 An MPLS-TP Section MEG (SMEG) is an MPLS-TP maintenance entity 1228 intended to monitor an MPLS-TP Section as defined in RFC 5654 1229 [5]. An SMEG may be configured on any MPLS-TP section. SMEG OAM 1230 packets must fate-share with the user data packets sent over the 1231 monitored MPLS-TP Section. 1233 An SMEG is intended to be deployed for applications where it is 1234 preferable to monitor the link between topologically adjacent 1235 (next hop in this layer network) MPLS-TP LSRs rather than 1236 monitoring the individual LSP or PW path segments traversing the 1237 MPLS-TP Section and the server layer technology does not provide 1238 adequate OAM capabilities. 1240 Figure 5 shows five Section MEGs configured in the network 1241 between AC1 and AC2: 1243 1. Sec12 MEG associated with the MPLS-TP Section between LSR 1 1244 and LSR 2, 1246 2. Sec23 MEG associated with the MPLS-TP Section between LSR 2 1247 and LSR 3, 1249 3. Sec3X MEG associated with the MPLS-TP Section between LSR 3 1250 and LSR X, 1252 4. SecXY MEG associated with the MPLS-TP Section between LSR X 1253 and LSR Y, and 1255 5. SecYZ MEG associated with the MPLS-TP Section between LSR Y 1256 and LSR Z. 1258 4.2. 
MPLS-TP LSP End-to-End Monitoring Group (LMEG) 1260 An MPLS-TP LSP MEG (LMEG) is an MPLS-TP maintenance entity group 1261 intended to monitor an end-to-end LSP between its LERs. An LMEG 1262 may be configured on any MPLS-TP LSP. LMEG OAM packets must 1263 fate-share with user data packets sent over the monitored MPLS- 1264 TP LSP. 1266 An LMEG is intended to be deployed in scenarios where it is 1267 desirable to monitor an entire LSP between its LERs, rather 1268 than, say, monitoring individual PWs. 1270 Figure 5 depicts two LMEGs configured in the network between AC1 1271 and AC2: 1) the LSP13 LMEG between LER 1 and LER 3, and 2) the 1272 LSPXZ LMEG between LER X and LER Z. Note that the presence of an 1273 LSP3X LMEG in such a configuration is optional, hence, not 1274 precluded by this framework. For instance, the service providers may prefer to 1275 monitor the MPLS-TP Section between the two LSRs rather than the 1276 individual LSPs. 1278 4.3. MPLS-TP PW Monitoring (PMEG) 1280 An MPLS-TP PW MEG (PMEG) is an MPLS-TP maintenance entity 1281 intended to monitor an SS-PW or MS-PW between its T-PEs. A PMEG 1282 can be configured on any SS-PW or MS-PW. PMEG OAM packets must 1283 fate-share with the user data packets sent over the monitored 1284 PW. 1286 A PMEG is intended to be deployed in scenarios where it is 1287 desirable to monitor an entire PW between a pair of MPLS-TP 1288 enabled T-PEs rather than monitoring the LSP aggregating 1289 multiple PWs between PEs. 1291 Figure 5 depicts an MS-PW (MS-PW1Z) consisting of three path 1292 segments: PW13, PW3X and PWXZ, and its associated end-to-end PMEG 1293 (PW1Z PMEG). 1295 4.4. MPLS-TP LSP SPME Monitoring (LSMEG) 1297 An MPLS-TP LSP SPME MEG (LSMEG) is an MPLS-TP SPME with an 1298 associated maintenance entity group intended to monitor an 1299 arbitrary part of an LSP between the MEPs instantiated for the 1300 SPME, independently of the end-to-end monitoring (LMEG).
An LSMEG 1301 can monitor an LSP path segment and it may also include the 1302 forwarding engine(s) of the node(s) at the edge(s) of the path 1303 segment. 1305 When an SPME is established between non-adjacent LSRs, the edges of 1306 the SPME become adjacent at the LSP sub-layer network and any 1307 LSRs that were previously in between become LSRs for the SPME. 1309 Multiple hierarchical LSMEGs can be configured on any LSP. LSMEG 1310 OAM packets must fate-share with the user data packets sent over 1311 the monitored LSP path segment. 1313 An LSMEG can be defined between the following entities: 1315 o The LER and LSR of a given LSP. 1317 o Any two LSRs of a given LSP. 1319 An LSMEG is intended to be deployed in scenarios where it is 1320 preferable to monitor the behavior of a part of an LSP or set of 1321 LSPs rather than the entire LSP itself, for example when there 1322 is a need to monitor a part of an LSP that extends beyond the 1323 administrative boundaries of an MPLS-TP enabled administrative 1324 domain. 1326 |<-------------------- PW1Z ------------------->| 1327 | | 1328 | |<-------------LSP1Z LSP------------->| | 1329 | |<-LSP13->| || |<-LSPXZ->| | 1330 V V V V V V V V 1331 +----+ +---+ +----+ +----+ +---+ +----+ 1332 +----+ | PE | |LSR| |DBN | |DBN | |LSR| | PE | +----+ 1333 | | | 1 | | 2 | | 3 | | X | | Y | | Z | | | 1334 | |AC1| |=====================================| |AC2| | 1335 | CE1|---|.....................PW1Z......................|---|CE2 | 1336 | | | |=====================================| | | | 1337 | | | | | | | | | | | | | | | | 1338 +----+ | | | | | | | | | | | | +----+ 1339 +----+ +---+ +----+ +----+ +---+ +----+ 1340 . . . .
1341 | | | | 1342 |<---- Domain 1 --->| |<---- Domain Z --->| 1344 ^---------^ ^---------^ 1345 LSP13 LSMEG LSPXZ LSMEG 1346 ^-------------------------------------^ 1347 LSP1Z LMEG 1349 DBN: Domain Border Node 1351 Figure 6 MPLS-TP LSP SPME MEG (LSMEG) 1353 Figure 6 depicts a variation of the reference model in Figure 5 1354 where there is an end-to-end LSP (LSP1Z) between PE1 and PEZ. 1355 LSP1Z consists of, at least, three LSP Concatenated Segments: 1356 LSP13, LSP3X and LSPXZ. In this scenario there are two separate 1357 LSMEGs configured to monitor the LSP1Z: 1) an LSMEG monitoring 1358 the LSP13 Concatenated Segment on Domain 1 (LSP13 LSMEG), and 2) 1359 an LSMEG monitoring the LSPXZ Concatenated Segment on Domain Z 1360 (LSPXZ LSMEG). 1362 It is worth noting that LSMEGs can coexist with the LMEG 1363 monitoring the end-to-end LSP and that LSMEG MEPs and LMEG MEPs 1364 can be coincident in the same node (e.g. the PE1 node supports both 1365 the LSP1Z LMEG MEP and the LSP13 LSMEG MEP). 1367 4.5. MPLS-TP MS-PW SPME Monitoring (PSMEG) 1369 An MPLS-TP MS-PW SPME Monitoring MEG (PSMEG) is an MPLS-TP SPME 1370 with an associated maintenance entity group intended to monitor 1371 an arbitrary part of an MS-PW between the MEPs instantiated for 1372 the SPME, independently of the end-to-end monitoring (PMEG). A 1373 PSMEG can monitor a PW path segment and it may also include the 1374 forwarding engine(s) of the node(s) at the edge(s) of the path 1375 segment. A PSMEG is no different from an SPME; it is simply 1376 named as such to discuss SPMEs specifically in a PW context. 1378 When an SPME is established between non-adjacent S-PEs, the edges 1379 of the SPME become adjacent at the MS-PW sub-layer network and 1380 any S-PEs that were previously in between become LSRs for the 1381 SPME. 1383 S-PE placement is typically dictated by considerations other 1384 than OAM.
S-PEs will frequently reside 1385 at operational boundaries such as the transition from a distributed control plane (CP) to 1386 centralized Network Management System (NMS) control, or at a 1387 routing area boundary. As such, the architecture would appear not 1388 to have the flexibility that arbitrary placement of SPME 1389 segments would imply. Support for an arbitrary placement of 1390 PSMEGs would require the definition of additional PW 1391 sub-layering. 1392 Multiple hierarchical PSMEGs can be configured on any MS-PW. 1393 PSMEG OAM packets must fate-share with the user data packets sent 1394 over the monitored PW path segment. 1396 A PSMEG does not add hierarchical components to the MPLS 1397 architecture; it defines the role of existing components for the 1398 purposes of discussing OAM functionality. 1400 A PSMEG can be defined between the following entities: 1402 o A T-PE and any S-PE of a given MS-PW. 1404 o Any two S-PEs of a given MS-PW. 1406 Note that, in line with the SPME description in section 3.2, when a 1407 PW SPME is instantiated after the MS-PW has been instantiated, the 1408 TTL distance to the MIPs may change, and MIPs in the PW SPME are no 1409 longer part of the encompassing MEG. This means that the S-PE nodes 1410 hosting these MIPs are no longer S-PEs but P nodes at the SPME LSP 1411 level. The consequence is that the S-PEs hosting the PSMEG MEPs 1412 become adjacent S-PEs. This is no different from the operation of 1413 SPMEs in general. 1415 A PSMEG is intended to be deployed in scenarios where it is 1416 preferable to monitor the behavior of a part of an MS-PW rather 1417 than the entire end-to-end PW itself, for example to monitor an 1418 MS-PW path segment within a given network domain of an 1419 inter-domain MS-PW.
1421 Figure 5 depicts an MS-PW (MS-PW1Z) consisting of three path 1422 segments: PW13, PW3X and PWXZ, with two separate PSMEGs: 1) a 1423 PSMEG monitoring the PW13 MS-PW path segment on Domain 1 (PW13 1424 PSMEG), and 2) a PSMEG monitoring the PWXZ MS-PW path segment on 1425 Domain Z (PWXZ PSMEG). 1427 It is worth noting that PSMEGs can coexist with the PMEG 1428 monitoring the end-to-end MS-PW and that PSMEG MEPs and PMEG 1429 MEPs can be coincident in the same node (e.g. the T-PE1 node 1430 supports both the PW1Z PMEG MEP and the PW13 PSMEG MEP). 1432 4.6. Fate sharing considerations for multilink 1434 Multilink techniques are in use today and are expected to 1435 continue to be used in future deployments. These techniques 1436 include Ethernet Link Aggregation [22] and the use of Link 1437 Bundling for MPLS [18] where the option to spread traffic over 1438 component links is supported and enabled. While the use of Link 1439 Bundling can be controlled at the MPLS-TP layer, use of Link 1440 Aggregation (or any server layer specific multilink) is not 1441 necessarily under the control of the MPLS-TP layer. Other techniques 1442 may emerge in the future. These techniques frequently share the 1443 characteristic that an LSP may be spread over a set of component 1444 links and therefore be reordered, but no flow within the LSP is 1445 reordered (except when very infrequent and minimally disruptive 1446 load rebalancing occurs). 1448 The use of multilink techniques may be prohibited or permitted 1449 in any particular deployment. If multilink techniques are used, 1450 the deployment can be considered to be only partially MPLS-TP 1451 compliant; however, this is unlikely to prevent its use. 1453 The implications for OAM are that not all components of a 1454 multilink will be exercised, and independent server layer OAM is 1455 required to exercise the aggregated link components.
This has 1456 further implications for MIP and MEP placement, as per-interface 1457 MIPs or "Down" MEPs on a multilink interface are akin to a layer 1458 violation, since they instrument at the granularity of the server 1459 layer. The implications for reduced OAM loss measurement 1460 functionality are documented in sections 5.5.3 and 6.2.3. 1462 5. OAM Functions for proactive monitoring 1464 In this document, proactive monitoring refers to OAM operations 1465 that are either configured to be carried out periodically and 1466 continuously or preconfigured to act on certain events such as 1467 alarm signals. 1469 Proactive monitoring is usually performed "in-service". Such 1470 transactions are universally MEP to MEP in operation, while 1471 notifications can be node to node (e.g. some MS-PW transactions) 1472 or node to MEPs (e.g., AIS). The control and measurement 1473 considerations are: 1475 1. Proactive monitoring for a MEG is typically configured at 1476 transport path creation time. 1478 2. The operational characteristics of in-band measurement 1479 transactions (e.g., CV, Loss Measurement (LM) etc.) are 1480 configured at the MEPs. 1482 3. Server layer events are reported by OAM packets originating 1483 at intermediate nodes. 1485 4. The measurements resulting from proactive monitoring are 1486 typically reported outside of the MEG (e.g. to a management 1487 system) as notifications of events such as faults or indications 1488 of performance degradation (such as signal degrade 1489 conditions). 1491 5. The measurements resulting from proactive monitoring may be 1492 periodically harvested by an NMS. 1494 Pro-active fault reporting is assumed to be subject to 1495 unreliable delivery and soft state, and needs to operate also in 1496 cases where a return path is not available or is faulty. Therefore, 1497 periodic repetition is assumed to be used for reliability, 1498 instead of handshaking.
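As a non-normative illustration of reliability through periodic repetition with soft state, the following sketch declares a loss-of-continuity condition when no packet has been received within a configurable number of transmission periods. The threshold of 3 periods is this sketch's assumption, not a value mandated by this framework:

```python
# Non-normative sketch: soft-state continuity monitoring based on
# periodic repetition rather than handshaking. The loss threshold
# (in transmission periods) is an illustrative assumption.
class CcvMonitor:
    def __init__(self, period_s, loss_threshold=3):
        self.period_s = period_s           # configured transmission period
        self.loss_threshold = loss_threshold
        self.last_rx = None                # soft state: time of last packet

    def on_ccv(self, now_s):
        """Refresh the soft state on each received CC-V packet."""
        self.last_rx = now_s

    def loc_defect(self, now_s):
        """True once the continuity soft state has expired, i.e. no
        packet for more than loss_threshold transmission periods."""
        if self.last_rx is None:
            return False                   # monitoring not yet started
        return (now_s - self.last_rx) > self.loss_threshold * self.period_s
```

Because each received packet independently refreshes the state, occasional packet loss does not raise a defect; only a sustained gap does, which is why repetition substitutes for acknowledged delivery.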
1500 Delay measurement requires periodic repetition also to allow 1501 estimation of the packet delay variation for the MEG. 1503 For statically provisioned transport paths the above information 1504 is statically configured; for dynamically established transport 1505 paths the configuration information is signaled via the control 1506 plane or configured via the management plane. 1508 The operator may enable/disable some of the consequent actions 1509 defined in section 5.1.2. 1511 5.1. Continuity Check and Connectivity Verification 1513 Proactive Continuity Check functions, as required in section 1514 2.2.2 of RFC 5860 [11], are used to detect a loss of continuity 1515 (LOC) defect between two MEPs in a MEG. 1517 Proactive Connectivity Verification functions, as required in 1518 section 2.2.3 of RFC 5860 [11], are used to detect an unexpected 1519 connectivity defect between two MEGs (e.g. mismerging or 1520 misconnection), as well as unexpected connectivity within the 1521 MEG with an unexpected MEP. 1523 Both functions are based on the (proactive) generation, at the 1524 same rate, of OAM packets by the source MEP that are processed 1525 by the peer sink MEP(s). As a consequence, in order to reduce 1526 OAM bandwidth consumption, CV, when used, is combined with CC into 1527 Continuity Check and Connectivity Verification (CC-V) OAM 1528 packets. 1530 In order to perform pro-active Connectivity Verification, each 1531 CC-V OAM packet also includes a globally unique Source MEP 1532 identifier, whose value needs to be configured on the source MEP 1533 and on the peer sink MEP(s). In some cases, to avoid the need to 1534 configure the globally unique Source MEP identifier, it is 1535 preferable to perform only pro-active Continuity Check. In this 1536 case, the CC-V OAM packet does not need to include any globally 1537 unique Source MEP identifier. Therefore, a MEG can be monitored 1538 only for CC or for both CC and CV.
CC-V OAM packets used for CC- 1539 only monitoring are called CC OAM packets, while CC-V OAM packets 1540 used for both CC and CV are called CV OAM packets. 1542 As a consequence, it is not possible to detect misconnections 1543 between two MEGs monitored only for continuity, as neither the 1544 OAM packet type nor the OAM packet content provides sufficient 1545 information to disambiguate an invalid source. To expand: 1547 o For a CC OAM packet leaking into a CC monitored MEG - 1548 undetectable. 1550 o For a CV OAM packet leaking into a CC monitored MEG - reception 1551 of CV OAM packets instead of CC OAM packets (e.g., with the 1552 additional Source MEP identifier) allows detecting the fault. 1554 o For a CC OAM packet leaking into a CV monitored MEG - reception 1555 of CC OAM packets instead of CV OAM packets (e.g., lack of the 1556 additional Source MEP identifier) allows detecting the fault. 1558 o For a CV OAM packet leaking into a CV monitored MEG - reception 1559 of CV OAM packets with a different Source MEP identifier 1560 permits the fault to be identified. 1562 Having a common packet format for CC-V OAM packets would 1563 simplify parsing in a sink MEP to properly detect all the 1564 mis-configuration cases described above. 1566 Different formats of MEP identifiers are defined in [10] to 1567 address different environments. When an alternative to IP 1568 addressing is desired (e.g., MPLS-TP is deployed in transport 1569 network environments where consistent operations with other 1570 transport technologies defined by the ITU-T are required), the 1571 ITU Carrier Code (ICC)-based format for MEP identification is 1572 used. When MPLS-TP is deployed in an environment where IP 1573 capabilities are available and desired for OAM, the IP-based MEP 1574 identification is used. 1576 CC-V OAM packets are transmitted at a regular, operator 1577 configurable, rate. The default CC-V transmission periods are 1578 application dependent (see section 5.1.3).
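The four leak cases above can be summarized as a classification routine at a sink MEP. The following is a minimal illustrative sketch, not a specified packet format; the function and field names are assumptions introduced for this example:

```python
# Hypothetical sketch of how a sink MEP could classify a received CC-V
# packet for the four leak cases above. "CC" packets carry no Source MEP
# identifier; "CV" packets carry a globally unique Source MEP identifier.

def classify(packet_kind, packet_src_mep, meg_mode, expected_src_mep):
    """Return the defect visible to the sink MEP, or None if undetectable/OK.

    packet_kind     : "CC" or "CV" (kind of received CC-V OAM packet)
    packet_src_mep  : Source MEP identifier carried by CV packets (None for CC)
    meg_mode        : "CC" or "CV" monitoring configured on this MEG
    expected_src_mep: Source MEP identifier configured at the sink (CV mode)
    """
    if meg_mode == "CC":
        if packet_kind == "CC":
            return None                # CC into a CC-monitored MEG: undetectable
        return "mis-connectivity"      # CV into a CC-monitored MEG: ID present
    # meg_mode == "CV"
    if packet_kind == "CC":
        return "mis-connectivity"      # CC into a CV-monitored MEG: ID missing
    if packet_src_mep != expected_src_mep:
        return "mis-connectivity"      # CV with an unexpected Source MEP ID
    return None                        # expected CV packet

assert classify("CC", None, "CC", None) is None
assert classify("CV", "mepB", "CC", None) == "mis-connectivity"
assert classify("CC", None, "CV", "mepA") == "mis-connectivity"
assert classify("CV", "mepB", "CV", "mepA") == "mis-connectivity"
assert classify("CV", "mepA", "CV", "mepA") is None
```

Note how the first case returns None: with CC-only monitoring on both MEGs, neither packet type nor content disambiguates the invalid source.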
1580 Proactive CC-V OAM packets are transmitted with the "minimum 1581 loss probability PHB" within the transport path (LSP, PW) they 1582 are monitoring. For E-LSPs, this PHB is configurable on a network 1583 operator's basis, while for L-LSPs this is determined as per RFC 1584 3270 [23]. PHBs can be translated at the network borders by the 1585 same function that translates them for user data traffic. The 1586 implication is that CC-V fate-shares with much of the forwarding 1587 implementation, but not all aspects of PHB processing are 1588 exercised. Either on-demand tools are used for finer grained 1589 fault finding or an implementation may utilize a CC-V flow per 1590 PHB to ensure that a CC-V flow fate-shares with each individual PHB. 1592 In a co-routed or associated bidirectional point-to-point 1593 transport path, when a MEP is enabled to generate pro-active 1594 CC-V OAM packets with a configured transmission rate, it also 1595 expects to receive pro-active CC-V OAM packets from its peer MEP 1596 at the same transmission rate, as a common SLA applies to all 1597 components of the transport path. In a unidirectional transport 1598 path (either point-to-point or point-to-multipoint), the source 1599 MEP is enabled only to generate CC-V OAM packets, while each sink 1600 MEP is configured to expect these packets at the configured 1601 rate. 1603 MIPs, as well as intermediate nodes not supporting MPLS-TP OAM, 1604 are transparent to the pro-active CC-V information and forward 1605 these pro-active CC-V OAM packets as regular data packets. 1607 During path setup and tear down, situations arise where CC-V 1608 checks would give rise to alarms, as the path is not fully 1609 instantiated. In order to avoid these spurious alarms, the 1610 following procedures are recommended.
At initialization, the 1611 source MEP function (generating pro-active CC-V packets) should 1612 be enabled prior to the corresponding sink MEP function 1613 (detecting continuity and connectivity defects). When disabling 1614 the CC-V proactive functionality, the sink MEP function should 1615 be disabled prior to the corresponding source MEP function. 1617 It should be noted that different encapsulations are possible 1618 for CC-V packets and therefore it is possible that, in case of 1619 mis-configuration or mis-connectivity, CC-V packets are 1620 received with an unexpected encapsulation. 1622 There are practical limitations to detecting unexpected 1623 encapsulation. It is possible that there are mis-configuration 1624 or mis-connectivity scenarios where OAM packets can alias as 1625 payload, e.g., when a transport path can carry an arbitrary 1626 payload without a pseudowire. 1628 When CC-V packets are received with an unexpected encapsulation 1629 that can be parsed by a sink MEP, the CC-V packet is processed 1630 as if it were received with the correct encapsulation and, if it is 1631 not a manifestation of a mis-connectivity defect, a warning is 1632 raised (see section 5.1.1.4). Otherwise the CC-V packet may be 1633 silently discarded as unrecognized and a LOC defect may be 1634 detected (see section 5.1.1.1). 1636 The defect conditions are described in no specific order. 1638 5.1.1. Defects identified by CC-V 1640 Pro-active CC-V functions allow a sink MEP to detect the defect 1641 conditions described in the following sub-sections. For all of 1642 the described defect cases, a sink MEP should notify the 1643 equipment fault management process of the detected defect. 1645 Consecutive loss of CC-V packets is considered 1646 indicative of an actual break and not congestive loss or 1647 physical layer degradation.
The loss of 3 packets in a row 1648 (implying a 3.5 insertion time detection interval) is 1649 interpreted as a true break and a condition that will not clear 1650 by itself. 1652 A CC-V OAM packet is considered to carry an unexpected globally 1653 unique Source MEP identifier if it is a CC OAM packet received 1654 by a sink MEP monitoring the MEG for CV; it is a CV OAM packet 1655 received by a sink MEP monitoring the MEG for CC; or it is a CV 1656 OAM packet received by a sink MEP monitoring the MEG for CV but 1657 carrying a unique Source MEP identifier that is different from 1658 the expected one. Conversely, the CC-V packet is considered to 1659 have an expected globally unique Source MEP identifier where it 1660 is a CC OAM packet received by a sink MEP monitoring the MEG for 1661 CC, or it is a CV OAM packet received by a sink MEP 1662 monitoring the MEG for CV and carrying a unique Source MEP 1663 identifier that is equal to the expected one. 1665 5.1.1.1. Loss Of Continuity defect 1667 When proactive CC-V is enabled, a sink MEP detects a loss of 1668 continuity (LOC) defect when it fails to receive pro-active CC-V 1669 OAM packets from the source MEP. 1671 o Entry criteria: If no pro-active CC-V OAM packets from the 1672 source MEP (and in the case of CV, this includes the 1673 requirement to have the expected globally unique Source MEP 1674 identifier) are received within the interval equal to 3.5 1675 times the receiving MEP's configured CC-V reception period. 1677 o Exit criteria: A pro-active CC-V OAM packet from the source 1678 MEP (and again in the case of CV, with the expected globally 1679 unique Source MEP identifier) is received. 1681 5.1.1.2. Mis-connectivity defect 1683 When a pro-active CC-V OAM packet is received, a sink MEP 1684 identifies a mis-connectivity defect (e.g. mismerge, 1685 misconnection or unintended looping) when the received packet 1686 carries an unexpected globally unique Source MEP identifier.
1688 o Entry criteria: The sink MEP receives a pro-active CC-V OAM 1689 packet with an unexpected globally unique Source MEP 1690 identifier or with an unexpected encapsulation. 1692 o Exit criteria: The sink MEP does not receive any pro-active 1693 CC-V OAM packet with an unexpected globally unique Source MEP 1694 identifier for an interval at least equal to 3.5 times the 1695 longest transmission period of the pro-active CC-V OAM 1696 packets received with an unexpected globally unique Source 1697 MEP identifier since this defect has been raised. This 1698 requires the OAM packet to self-identify the CC-V periodicity, 1699 as not all MEPs can be expected to have knowledge of all 1700 MEGs. 1702 5.1.1.3. Period Misconfiguration defect 1704 If pro-active CC-V OAM packets are received with the expected 1705 globally unique Source MEP identifier but with a transmission 1706 period different from the locally configured reception period, 1707 then a CC-V period mis-configuration defect is detected. 1709 o Entry criteria: A MEP receives a pro-active CC-V packet with 1710 the expected globally unique Source MEP identifier but with a 1711 transmission period different from its own configured CC-V 1712 transmission period. 1714 o Exit criteria: The sink MEP does not receive any pro-active 1715 CC-V OAM packet with the expected globally unique Source MEP 1716 identifier and an incorrect transmission period for an 1717 interval at least equal to 3.5 times the longest transmission 1718 period of the pro-active CC-V OAM packets received with the 1719 expected globally unique Source MEP identifier and an 1720 incorrect transmission period since this defect has been 1721 raised. 1723 5.1.1.4. Unexpected encapsulation defect 1725 If pro-active CC-V OAM packets are received with the expected 1726 globally unique Source MEP identifier but with an unexpected 1727 encapsulation, then a CC-V unexpected encapsulation defect is 1728 detected.
1730 It should be noted that there are practical limitations to 1731 detecting unexpected encapsulation (see section 5.1.1). 1733 o Entry criteria: A MEP receives a pro-active CC-V packet with 1734 the expected globally unique Source MEP identifier but with 1735 an unexpected encapsulation. 1737 o Exit criteria: The sink MEP does not receive any pro-active 1738 CC-V OAM packet with the expected globally unique Source MEP 1739 identifier and an unexpected encapsulation for an interval 1740 at least equal to 3.5 times the longest transmission period 1741 of the pro-active CC-V OAM packets received with the expected 1742 globally unique Source MEP identifier and an unexpected 1743 encapsulation since this defect has been raised. 1745 5.1.2. Consequent action 1747 A sink MEP that detects any of the defect conditions defined in 1748 section 5.1.1 declares a defect condition and performs the 1749 following consequent actions. 1751 If a MEP detects a mis-connectivity defect, it blocks all the 1752 traffic (including the user data packets) that it receives 1753 from the misconnected transport path. 1755 If a MEP detects a LOC defect that is not caused by a period 1756 mis-configuration, it should block all the traffic (including 1757 the user data packets) that it receives from the transport 1758 path, if this consequent action has been enabled by the 1759 operator. 1761 It is worth noting that the OAM requirements document [11] 1762 recommends that CC-V proactive monitoring be enabled on every 1763 MEG in order to reliably detect connectivity defects. However, 1764 CC-V proactive monitoring can be disabled by an operator for a 1765 MEG. In the event of a misconnection between a transport path 1766 that is pro-actively monitored for CC-V and a transport path 1767 which is not, the MEP of the former transport path will detect a 1768 LOC defect representing a connectivity problem (e.g.
a 1769 misconnection with a transport path where CC-V proactive 1770 monitoring is not enabled) instead of a continuity problem, with 1771 the consequent delivery of traffic to an incorrect destination. For these reasons, the 1772 traffic block consequent action is applied even when a LOC 1773 condition occurs. This block consequent action can be disabled 1774 through configuration. This deactivation of the block action may 1775 be used for activating or deactivating the monitoring when it is 1776 not possible to synchronize the function activation of the two 1777 peer MEPs. 1779 If a MEP detects a LOC defect (section 5.1.1.1) or a 1780 mis-connectivity defect (section 5.1.1.2), it declares a signal 1781 fail condition of the ME. 1783 It is a matter of local policy whether a MEP that detects a period 1784 misconfiguration defect (section 5.1.1.3) declares a signal fail 1785 condition of the ME. 1787 The detection of an unexpected encapsulation defect does not 1788 have any consequent action: it is just a warning for the network 1789 operator. An implementation able to detect an unexpected 1790 encapsulation but not able to verify the source MEP ID may 1791 choose to declare a mis-connectivity defect. 1793 5.1.3. Configuration considerations 1795 At all MEPs inside a MEG, the following configuration 1796 information needs to be configured when a proactive CC-V 1797 function is enabled: 1799 o MEG ID; the MEG identifier to which the MEP belongs; 1801 o MEP-ID; the MEP's own identity inside the MEG; 1802 o list of the other MEPs in the MEG. For a point-to-point MEG 1803 the list would consist of the single MEP ID from which the 1804 OAM packets are expected. In case of the root MEP of a p2mp 1805 MEG, the list is composed of all the leaf MEP IDs inside the 1806 MEG. In case of the leaf MEP of a p2mp MEG, the list is 1807 composed of the root MEP ID (i.e. each leaf needs to know the 1808 root MEP ID from which it expects to receive the CC-V OAM 1809 packets).
1811 o PHB for E-LSPs; it identifies the per-hop behavior of CC-V 1812 packets. Proactive CC-V packets are transmitted with the 1813 "minimum loss probability PHB" previously configured within a 1814 single network operator. This PHB is configurable on a network 1815 operator's basis. PHBs can be translated at the network 1816 borders. 1818 o transmission rate; the default CC-V transmission periods are 1819 application dependent (depending on whether they are used to 1820 support fault management, performance monitoring, or 1821 protection switching applications): 1823 o Fault Management: default transmission period is 1s (i.e. 1824 transmission rate of 1 packet/second). 1826 o Performance Management: default transmission period is 1827 100ms (i.e. transmission rate of 10 packets/second). CC-V 1828 contributes to the accuracy of performance monitoring 1829 (PM) statistics by permitting the defect free periods to 1830 be properly distinguished as described in sections 5.5.1 1831 and 5.6.1. 1833 o Protection Switching: If protection switching with CC-V 1834 defect entry criteria of 12ms is required (for example, 1835 in conjunction with the requirement to support 50ms 1836 recovery time as indicated in RFC 5654 [5]), then an 1837 implementation should use a default transmission period 1838 of 3.33ms (i.e., transmission rate of 300 1839 packets/second). Sometimes, the requirement of 50ms 1840 recovery time is associated with the requirement for a 1841 CC-V defect entry criteria period of 35ms: in these 1842 cases a transmission period of 10ms (i.e., transmission 1843 rate of 100 packets/second) can be used. Furthermore, 1844 when there is no need for such small CC-V defect entry 1845 criteria periods, a larger transmission period can be used. 1847 It should be possible for the operator to configure these 1848 transmission rates for all applications, to satisfy specific 1849 network requirements.
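The 3.5-times defect entry interval of section 5.1.1.1 ties these default transmission periods to the detection-time targets above. The following sketch checks that arithmetic and illustrates a minimal LOC timer; all names and the float-seconds representation are assumptions made for this example:

```python
# Illustrative check of the default CC-V periods against the 3.5x defect
# entry interval (section 5.1.1.1), plus a minimal LOC timer sketch.

DEFAULT_PERIODS = {
    "fault-management": 1.0,           # 1 s -> 1 packet/s
    "performance-management": 0.100,   # 100 ms -> 10 packets/s
    "protection-12ms-entry": 0.00333,  # 3.33 ms -> ~300 packets/s
    "protection-35ms-entry": 0.010,    # 10 ms -> 100 packets/s
}

def loc_entry_interval(period):
    # LOC is entered after 3.5 reception periods with no valid CC-V packet.
    return 3.5 * period

class LocTimer:
    def __init__(self, period):
        self.period = period
        self.deadline = None
        self.loc = False

    def on_valid_ccv(self, now):
        # Exit criterion: a valid pro-active CC-V packet is received.
        self.deadline = now + loc_entry_interval(self.period)
        self.loc = False

    def poll(self, now):
        # Entry criterion: the 3.5x interval expired without a valid packet.
        if self.deadline is None or now >= self.deadline:
            self.loc = True
        return self.loc

# 3.33 ms transmission meets the 12 ms entry criterion; 10 ms meets 35 ms.
assert loc_entry_interval(DEFAULT_PERIODS["protection-12ms-entry"]) < 0.012
assert abs(loc_entry_interval(DEFAULT_PERIODS["protection-35ms-entry"]) - 0.035) < 1e-9

t = LocTimer(DEFAULT_PERIODS["fault-management"])
t.on_valid_ccv(now=0.0)
assert t.poll(now=3.0) is False   # three packets lost: interval not yet expired
assert t.poll(now=3.5) is True    # 3.5 s without CC-V: LOC defect entered
```

This also shows why the 50 ms recovery requirement leads to a 3.33 ms default period: 3.5 x 3.33 ms is roughly 11.7 ms, within the 12 ms entry criterion.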
1851 Note that the reception period is the same as the configured 1852 transmission period. 1854 For management provisioned transport paths the above parameters 1855 are statically configured; for dynamically signaled transport 1856 paths the configuration information is distributed via the 1857 control plane. 1859 The operator should be able to enable/disable some of the 1860 consequent actions. The consequent actions that can be 1861 enabled/disabled are described in section 5.1.2. 1863 5.2. Remote Defect Indication 1865 The Remote Defect Indication (RDI) function, as required in 1866 section 2.2.9 of RFC 5860 [11], is an indicator that is 1867 transmitted by a sink MEP to communicate to its source MEP that 1868 a signal fail condition exists. In case of co-routed and 1869 associated bidirectional transport paths, RDI is associated with 1870 proactive CC-V and the RDI indicator can be piggy-backed onto 1871 the CC-V packet. In case of unidirectional transport paths, the 1872 RDI indicator can be sent only using an out-of-band return path, 1873 if it exists and its usage is enabled by policy actions. 1875 When a MEP detects a signal fail condition (e.g. in case of a 1876 continuity or connectivity defect), it should begin transmitting 1877 an RDI indicator to its peer MEP. When incorporated into CC-V, 1878 the RDI information will be included in all pro-active CC-V 1879 packets that it generates for the duration of the signal fail 1880 condition's existence. 1882 A MEP that receives packets from a peer MEP with the RDI 1883 information should determine that its peer MEP has encountered a 1884 defect condition associated with a signal fail condition. 1886 MIPs, as well as intermediate nodes not supporting MPLS-TP OAM, 1887 are transparent to the RDI indicator and forward OAM packets 1888 that include the RDI indicator as regular data packets, i.e. the 1889 MIP should not perform any actions nor examine the indicator.
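The piggy-backed RDI behavior for a bidirectional path can be sketched as follows. This is a minimal illustration; the class, method, and packet field names are assumptions, not a specified encoding:

```python
# Sketch of RDI piggy-backed on CC-V (co-routed/associated bidirectional
# paths): the source MEP sets the RDI indicator in every CC-V packet it
# generates for the duration of its signal fail condition; the peer MEP
# tracks the indicator in the packets it receives.

class Mep:
    def __init__(self):
        self.signal_fail = False   # local defect state (e.g. LOC detected)
        self.peer_rdi = False      # RDI defect learned from the peer MEP

    def build_ccv(self):
        # RDI indicator included while the signal fail condition exists.
        return {"type": "CC-V", "rdi": self.signal_fail}

    def on_ccv(self, packet):
        # RDI defect follows the indicator in received CC-V packets.
        self.peer_rdi = packet["rdi"]

a, b = Mep(), Mep()
b.signal_fail = True          # MEP B detects a signal fail condition
a.on_ccv(b.build_ccv())
assert a.peer_rdi is True     # A learns that B encountered a defect
b.signal_fail = False         # the condition clears at B
a.on_ccv(b.build_ccv())
assert a.peer_rdi is False    # A clears its RDI defect
```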
1891 When the signal fail condition clears, the MEP should stop 1892 transmitting the RDI indicator to its peer MEP. When 1893 incorporated into CC-V, the RDI indicator will be cleared from 1894 subsequent transmission of pro-active CC-V packets. A MEP 1895 should clear the RDI defect upon reception of a cleared RDI 1896 indicator. 1898 5.2.1. Configuration considerations 1900 The RDI indication may be 1901 carried in a unique OAM packet or may be embedded in a CC-V 1902 packet. The in-band RDI transmission rate and PHB of the OAM 1903 packets carrying RDI should be the same as those configured for 1904 CC-V, to allow both far-end and near-end defect conditions to be 1905 resolved in a timeframe of the same order of magnitude. 1906 This timeframe is application specific as described in section 1907 5.1.3. The methods of the out-of-band return paths will dictate how 1908 out-of-band RDI indications are transmitted. 1910 5.3. Alarm Reporting 1912 The Alarm Reporting function, as required in section 2.2.8 of 1913 RFC 5860 [11], relies upon an Alarm Indication Signal (AIS) 1914 packet to suppress alarms following detection of defect 1915 conditions at the server (sub-)layer. 1917 When a server MEP asserts a signal fail condition, it notifies 1918 the co-located MPLS-TP client/server adaptation function, 1919 which then generates OAM packets with AIS information in the 1920 downstream direction to allow the suppression of secondary 1921 alarms at the MPLS-TP MEP in the client (sub-)layer. 1923 The generation of packets with AIS information starts 1924 immediately when the server MEP asserts a signal fail condition. 1925 These periodic OAM packets, with AIS information, continue to be 1926 transmitted until the signal fail condition is cleared.
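The AIS generation rule just described can be sketched as a simple schedule: packets carry AIS information periodically from the moment the server MEP asserts signal fail until the condition clears. The function name and the 1-second period used here are illustrative assumptions:

```python
# Sketch of AIS generation at the server/client adaptation function:
# periodic packets with AIS information are emitted only while the
# server MEP asserts a signal fail condition.

def ais_packet_times(signal_fail_intervals, horizon, period=1.0):
    """Return the transmission times of AIS packets up to `horizon`.

    signal_fail_intervals: list of (start, end) times during which the
    server MEP asserts a signal fail condition.
    """
    times = []
    t = 0.0
    while t < horizon:
        if any(start <= t < end for start, end in signal_fail_intervals):
            times.append(t)
        t += period
    return times

# Signal fail asserted from t=2 to t=5: AIS packets at t=2, 3 and 4 only;
# generation starts immediately at assertion and stops when it clears.
assert ais_packet_times([(2.0, 5.0)], horizon=8.0) == [2.0, 3.0, 4.0]
```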
1928 It is assumed that, to avoid spurious alarm generation, a MEP 1929 detecting a loss of continuity defect (see section 5.1.1.1) will 1930 wait for a hold off interval prior to asserting an alarm to the 1931 management system. Therefore, upon receiving an OAM packet with 1932 AIS information, an MPLS-TP MEP enters an AIS defect condition 1933 and suppresses reporting of alarms to the NMS on the loss of 1934 continuity with its peer MEP, but does not block traffic received 1935 from the transport path. A MEP resumes loss of continuity alarm 1936 generation upon detecting loss of continuity defect conditions 1937 in the absence of the AIS condition. 1939 MIPs, as well as intermediate nodes, do not process AIS 1940 information and forward these AIS OAM packets as regular data 1941 packets. 1943 For example, let's consider a fiber cut between LSR 1 and LSR 2 1944 in the reference network of Figure 5. Assuming that all of the 1945 MEGs described in Figure 5 have pro-active CC-V enabled, a LOC 1946 defect is detected by the MEPs of Sec12 SMEG, LSP13 LMEG, PW13 1947 PSMEG and PW1Z PMEG; however, in a transport network only the 1948 alarm associated with the fiber cut needs to be reported to an NMS 1949 while all secondary alarms should be suppressed (i.e. not 1950 reported to the NMS or reported as secondary alarms). 1952 If the fiber cut is detected by the MEP in the physical layer 1953 (in LSR2), LSR2 can generate the proper alarm in the physical 1954 layer and suppress the secondary alarm associated with the LOC 1955 defect detected on Sec12 SMEG. As both MEPs reside within the 1956 same node, this process does not involve any external protocol 1957 exchange. Otherwise, if the physical layer does not have sufficient OAM 1958 capabilities to detect the fiber cut, the MEP of Sec12 SMEG in 1959 LSR2 will report a LOC alarm.
1961 In both cases, the MEP of Sec12 SMEG in LSR 2 notifies the 1962 adaptation function for LSP13 LMEG, which then generates AIS 1963 packets on the LSP13 LMEG in order to allow its MEP in LSR3 to 1964 suppress the LOC alarm. LSR3 can also suppress the secondary 1965 alarm on PW13 PSMEG because the MEP of PW13 PSMEG resides within 1966 the same node as the MEP of LSP13 LMEG. The MEP of PW13 PSMEG in 1967 LSR3 also notifies the adaptation function for PW1Z PMEG, which 1968 then generates AIS packets on PW1Z PMEG in order to allow its 1969 MEP in LSRZ to suppress the LOC alarm. 1971 The generation of AIS packets for each MEG in the MPLS-TP client 1972 (sub-)layer is configurable (i.e. the operator can 1973 enable/disable the AIS generation). 1975 The AIS condition is cleared if no AIS packet has been received in 1976 3.5 times the AIS transmission period. 1978 The AIS transmission period is traditionally one packet per second, but 1979 an option to configure longer periods would also be desirable. 1980 As a consequence, OAM packets need to self-identify the 1981 transmission period such that proper exit criteria can be 1982 established. 1984 AIS packets are transmitted with the "minimum loss probability 1985 PHB" within a single network operator. For E-LSPs, this PHB is 1986 configurable on a network operator's basis, while for L-LSPs, this 1987 is determined as per RFC 3270 [23]. 1989 5.4. Lock Reporting 1991 The Lock Reporting function, as required in section 2.2.7 of RFC 1992 5860 [11], relies upon a Locked Report (LKR) packet used to 1993 suppress alarms following administrative locking action in the 1994 server (sub-)layer. 1996 When a server MEP is locked, the MPLS-TP client (sub-)layer 1997 adaptation function generates packets with LKR information to 1998 allow the suppression of secondary alarms at the MEPs in the 1999 client (sub-)layer.
Again, it is assumed that there is a hold off 2000 for any loss of continuity alarms in the client layer MEPs 2001 downstream of the node originating the locked report. In case of 2002 client (sub-)layer co-routed bidirectional transport paths, the 2003 LKR information is sent in both directions. In case of client 2004 (sub-)layer unidirectional transport paths, the LKR information 2005 is sent only in the downstream direction. As a consequence, in 2006 case of client (sub-)layer point-to-multipoint transport paths, 2007 the LKR information is sent only to the MEPs that are downstream 2008 of the server (sub-)layer that has been administratively locked. 2009 Client (sub-)layer associated bidirectional transport paths 2010 behave like co-routed bidirectional transport paths if the 2011 server (sub-)layer that has been administratively locked is used 2012 by both directions; otherwise they behave like unidirectional 2013 transport paths. 2015 The generation of packets with LKR information starts 2016 immediately when the server MEP is locked. These periodic 2017 packets, with LKR information, continue to be transmitted until 2018 the locked condition is cleared. 2020 Upon receiving a packet with LKR information, an MPLS-TP MEP 2021 enters an LKR defect condition and suppresses the loss of continuity 2022 alarm associated with its peer MEP, but does not block traffic 2023 received from the transport path. A MEP resumes loss of 2024 continuity alarm generation upon detecting loss of continuity 2025 defect conditions in the absence of the LKR condition. 2027 MIPs, as well as intermediate nodes, do not process the LKR 2028 information and forward these LKR OAM packets as regular data 2029 packets. 2031 For example, let's consider the case where the MPLS-TP Section 2032 between LSR 1 and LSR 2 in the reference network of Figure 5 is 2033 administratively locked at LSR2 (in both directions).
2035 Assuming that all the MEGs described in Figure 5 have pro-active 2036 CC-V enabled, a LOC defect is detected by the MEPs of LSP13 2037 LMEG, PW13 PSMEG and PW1Z PMEG; however, in a transport network 2038 all these secondary alarms should be suppressed (i.e. not 2039 reported to the NMS or reported as secondary alarms). 2041 The MEP of Sec12 SMEG in LSR 2 notifies the adaptation function 2042 for LSP13 LMEG, which then generates LKR packets on the LSP13 LMEG 2043 in order to allow its MEPs in LSR1 and LSR3 to suppress the LOC 2044 alarm. LSR3 can also suppress the secondary alarm on PW13 PSMEG 2045 because the MEP of PW13 PSMEG resides within the same node as 2046 the MEP of LSP13 LMEG. The MEP of PW13 PSMEG in LSR3 also 2047 notifies the adaptation function for PW1Z PMEG, which then 2048 generates LKR packets on PW1Z PMEG in order to allow its MEP in 2049 LSRZ to suppress the LOC alarm. 2051 The generation of LKR packets for each MEG in the MPLS-TP client 2052 (sub-)layer is configurable (i.e. the operator can 2053 enable/disable the LKR generation). 2055 The locked condition is cleared if no LKR packet has been received 2056 for 3.5 times the transmission period. 2058 The LKR transmission period is traditionally one packet per second, but 2059 an option to configure longer periods would also be desirable. 2060 As a consequence, OAM packets need to self-identify the 2061 transmission period such that proper exit criteria can be 2062 established. 2064 LKR packets are transmitted with the "minimum loss probability 2065 PHB" within a single network operator. For E-LSPs, this PHB is 2066 configurable on a network operator's basis, while for L-LSPs, this 2067 is determined as per RFC 3270 [23]. 2069 5.5. Packet Loss Measurement 2071 Packet Loss Measurement (LM) is one of the capabilities 2072 supported by the MPLS-TP Performance Monitoring (PM) function in 2073 order to facilitate reporting of QoS information for a transport 2074 path as required in section 2.2.11 of RFC 5860 [11].
LM is used 2075 to exchange counter values for the number of ingress and egress 2076 packets transmitted and received by the transport path monitored 2077 by a pair of MEPs. 2079 Proactive LM is performed by periodically sending LM OAM packets 2080 from a MEP to a peer MEP and by receiving LM OAM packets from 2081 the peer MEP (if a co-routed or associated bidirectional 2082 transport path) during the life time of the transport path. Each 2083 MEP performs measurements of its transmitted and received user 2084 data packets. These measurements are then correlated in real 2085 time with the peer MEP in the ME to derive the impact of packet 2086 loss on a number of performance metrics for the ME in the MEG. 2087 The LM transactions are issued such that the OAM packets will 2088 experience the same PHB scheduling class as the measured traffic 2089 while transiting between the MEPs in the ME. 2091 For a MEP, near-end packet loss refers to packet loss associated 2092 with incoming data packets (from the far-end MEP) while far-end 2093 packet loss refers to packet loss associated with egress data 2094 packets (towards the far-end MEP). 2096 Pro-active LM can be operated in two ways: 2098 o One-way: a MEP sends an LM OAM packet to its peer MEP containing 2099 all the required information to facilitate near-end packet 2100 loss measurements at the peer MEP. 2102 o Two-way: a MEP sends an LM OAM packet with an LM request to its 2103 peer MEP, which replies with an LM OAM packet as an LM 2104 response. The request/response LM OAM packets contain all 2105 the required information to facilitate both near-end and 2106 far-end packet loss measurements from the viewpoint of the 2107 originating MEP. 2109 One-way LM is applicable to both unidirectional and 2110 bidirectional (co-routed or associated) transport paths, while 2111 two-way LM is applicable only to bidirectional (co-routed or 2112 associated) transport paths.
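One way the exchanged counter values can be used in a two-way transaction is sketched below, in the style of common transmit/receive counter schemes: loss over an interval is the difference between the delta of packets sent and the delta of packets received. The counter names (TxA, RxA, TxB, RxB) and the dictionary layout are assumptions for this illustration, not a specified packet format:

```python
# Illustrative two-way LM computation from counter snapshots. TxA/RxA
# are the originating MEP A's transmitted/received user-packet counters,
# TxB/RxB the peer MEP B's, each sampled at two successive LM
# transactions. Far-end loss is loss towards the peer; near-end loss is
# loss on traffic incoming from the peer (see the definitions above).

def two_way_loss(sample1, sample2):
    far_end = (sample2["TxA"] - sample1["TxA"]) - (sample2["RxB"] - sample1["RxB"])
    near_end = (sample2["TxB"] - sample1["TxB"]) - (sample2["RxA"] - sample1["RxA"])
    return {"far_end_loss": far_end, "near_end_loss": near_end}

s1 = {"TxA": 1000, "RxB": 1000, "TxB": 2000, "RxA": 2000}
s2 = {"TxA": 1500, "RxB": 1497, "TxB": 2400, "RxA": 2399}
loss = two_way_loss(s1, s2)
assert loss["far_end_loss"] == 3   # A sent 500 packets, B received 497
assert loss["near_end_loss"] == 1  # B sent 400 packets, A received 399
```

Because both deltas are taken between the same pair of LM transactions, the computation depends only on counter differences, not on when the counters started.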
2114 MIPs, as well as intermediate nodes, do not process the LM 2115 information and forward these pro-active LM OAM packets as 2116 regular data packets. 2118 5.5.1. Configuration considerations 2120 In order to support proactive LM, the transmission rate and, for 2121 E-LSPs, the PHB class associated with the LM OAM packets 2122 originating from a MEP need to be configured as part of the LM 2123 provisioning. LM OAM packets should be transmitted with the PHB 2124 that yields the lowest drop precedence within the measured PHB 2125 Scheduling Class (see RFC 3260 [17]), in order to maximize the 2126 reliability of measurement within the traffic class. 2128 If that PHB class is not an ordered aggregate, where the ordering 2129 constraint is that all packets with the PHB class are delivered in 2130 order, LM can produce inconsistent results. 2132 Performance monitoring (e.g., LM) is only relevant when the 2133 transport path is defect free. CC-V contributes to the accuracy 2134 of PM statistics by permitting the defect free periods to be 2135 properly distinguished. Therefore support of pro-active LM has 2136 implications on the CC-V transmission period (see section 2137 5.1.3). 2139 5.5.2. Sampling skew 2141 If an implementation makes use of a hardware forwarding path 2142 which operates in parallel with an OAM processing path, whether 2143 hardware or software based, the packet and byte counts may be 2144 skewed if one or more packets can be processed before the OAM 2145 processing samples counters. If OAM is implemented in software 2146 this error can be quite large. 2148 5.5.3. Multilink issues 2150 If multilink is used at the LSP ingress or egress, there may be 2151 no single packet processing engine at which to inject or extract an 2152 LM packet as an atomic operation with which accurate packet and 2153 byte counts can be associated.
In the case where multilink is encountered in the LSP path, the reordering of packets within the LSP can cause inaccurate LM results.

5.6. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities supported by the MPLS-TP PM function in order to facilitate reporting of QoS information for a transport path, as required in section 2.2.12 of RFC 5860 [11]. Specifically, pro-active DM is used to measure the long-term packet delay and packet delay variation in the transport path monitored by a pair of MEPs.

Proactive DM is performed by sending periodic DM OAM packets from a MEP to a peer MEP and by receiving DM OAM packets from the peer MEP (if a co-routed or associated bidirectional transport path) during a configurable time interval.

Pro-active DM can be operated in two ways:

o  One-way: a MEP sends a DM OAM packet to its peer MEP containing all the required information to facilitate one-way packet delay and/or one-way packet delay variation measurements at the peer MEP. Note that this requires precise time synchronisation at either MEP by means outside the scope of this framework.

o  Two-way: a MEP sends a DM OAM packet with a DM request to its peer MEP, which replies with a DM OAM packet as a DM response. The request/response DM OAM packets contain all the required information to facilitate two-way packet delay and/or two-way packet delay variation measurements from the viewpoint of the originating MEP.

One-way DM is applicable to both unidirectional and bidirectional (co-routed or associated) transport paths, while two-way DM is applicable only to bidirectional (co-routed or associated) transport paths.

MIPs, as well as intermediate nodes, do not process the DM information and forward these pro-active DM OAM packets as regular data packets.

5.6.1. Configuration considerations

In order to support pro-active DM, the transmission rate and, for E-LSPs, the PHB associated with the DM OAM packets originating from a MEP need to be configured as part of the DM provisioning. DM OAM packets should be transmitted with the PHB that yields the lowest drop precedence within the measured PHB Scheduling Class (see RFC 3260 [17]).

Performance monitoring (e.g., DM) is only relevant when the transport path is defect free. CC-V contributes to the accuracy of PM statistics by permitting the defect-free periods to be properly distinguished. Therefore, support of pro-active DM has implications on the CC-V transmission period (see section 5.1.3).

5.7. Client Failure Indication

The Client Failure Indication (CFI) function, as required in section 2.2.10 of RFC 5860 [11], is used to help process client defects and propagate a client signal defect condition from the process associated with the local attachment circuit where the defect was detected (typically the source adaptation function for the local client interface) to the process associated with the far-end attachment circuit (typically the source adaptation function for the far-end client interface) for the same transmission path, in case the client of the transport path does not support a native defect/alarm indication mechanism, e.g. AIS.

A source MEP starts transmitting a CFI indication to its peer MEP when it receives a local client signal defect notification via its local CSF function. Mechanisms to detect local client signal fail defects are technology specific. Similarly, mechanisms to determine when to cease originating the client signal fail indication are also technology specific.

A sink MEP that has received a CFI indication reports this condition to its associated client process via its local CFI function.
Consequent actions toward the client attachment circuit are technology specific.

Either there needs to be a 1:1 correspondence between the client and the MEG, or, when multiple clients are multiplexed over a transport path, the CFI packet requires additional information to permit the client instance to be identified.

MIPs, as well as intermediate nodes, do not process the CFI information and forward these pro-active CFI OAM packets as regular data packets.

5.7.1. Configuration considerations

In order to support CFI indication, the CFI transmission rate and, for E-LSPs, the PHB of the CFI OAM packets should be configured as part of the CFI configuration.

6. OAM Functions for on-demand monitoring

In contrast to proactive monitoring, on-demand monitoring is initiated manually and for a limited amount of time, usually for operations such as diagnostics to investigate a defect condition.

On-demand monitoring covers a combination of "in-service" and "out-of-service" monitoring functions. The control and measurement implications are:

1. A MEG can be directed to perform an "on-demand" function at arbitrary times in the lifetime of a transport path.

2. "Out-of-service" monitoring functions may require a-priori configuration of both MEPs and intermediate nodes in the MEG (e.g., data plane loopback) and the issuance of notifications into client layers of the transport path being removed from service (e.g., lock reporting).

3. The measurements resulting from on-demand monitoring are typically harvested in real time, as these are frequently initiated manually. These do not necessarily require different harvesting mechanisms than those used for proactive monitoring telemetry.

The functions that are exclusively out-of-service are those described in section 6.3.
The remainder are applicable to both in-service and out-of-service transport paths.

6.1. Connectivity Verification

The on-demand connectivity verification function, as required in section 2.2.3 of RFC 5860 [11], is a transaction that flows from the originating MEP to a target MIP or MEP to verify the connectivity between these points.

Use of on-demand CV is dependent on the existence of either a bi-directional ME, or an associated return ME, or the availability of an out-of-band return path, because it requires the ability for target MIPs and MEPs to direct responses to the originating MEPs.

One possible use of on-demand CV would be to perform fault management without using proactive CC-V, in order to preserve network resources, e.g. bandwidth and processing time at switches. In this case, network management periodically invokes on-demand CV.

An additional use of on-demand CV would be to detect and locate a connectivity problem when a problem is suspected or known based on other tools. In this case the functionality will be triggered by network management in response to a status signal or alarm indication.

On-demand CV is based upon the generation of on-demand CV packets that should uniquely identify the MEG that is being checked. The on-demand functionality may be used to check either an entire MEG (end-to-end) or between the originating MEP and a specific MIP. This functionality may not be available for associated bidirectional transport paths or unidirectional paths, as the MIP may not have a return path to the originating MEP for the on-demand CV transaction.

When on-demand CV is invoked, the originating MEP issues a sequence of on-demand CV packets that uniquely identifies the MEG being verified.
The number of packets and their transmission rate should be pre-configured at the originating MEP, to take into account normal packet-loss conditions. The source MEP should use the mechanisms defined in sections 3.3 and 3.4 when sending an on-demand CV packet to a target MEP or target MIP, respectively. The target MEP/MIP shall return a reply on-demand CV packet for each packet received. If the expected number of on-demand CV reply packets is not received at the originating MEP, this is an indication that a connectivity problem may exist.

On-demand CV should have the ability to carry padding such that a variety of MTU sizes can be originated to verify the MTU transport capability of the transport path.

MIPs that are not targeted by on-demand CV packets, as well as intermediate nodes, do not process the CV information and forward these on-demand CV OAM packets as regular data packets.

6.1.1. Configuration considerations

For on-demand CV, the originating MEP should support the configuration of the number of packets to be transmitted/received in each sequence of transmissions and their packet size.

In addition, when the CV packet is used to check connectivity toward a target MIP, the number of hops to reach the target MIP should be configured.

For E-LSPs, the PHB of the on-demand CV packets should be configured as well. This permits the verification of correct operation of QoS queuing as well as connectivity.

6.2. Packet Loss Measurement

On-demand Packet Loss Measurement (LM) is one of the capabilities supported by the MPLS-TP Performance Monitoring function in order to facilitate the diagnosis of QoS performance for a transport path, as required in section 2.2.11 of RFC 5860 [11].

On-demand LM is very similar to pro-active LM described in section 5.5.
This section focuses on the differences between on-demand and pro-active LM.

On-demand LM is performed by periodically sending LM OAM packets from a MEP to a peer MEP and by receiving LM OAM packets from the peer MEP (if a co-routed or associated bidirectional transport path) during a pre-defined monitoring period. Each MEP performs measurements of its transmitted and received user data packets. These measurements are then correlated to evaluate the packet loss performance metrics of the transport path.

Use of packet loss measurement in an out-of-service transport path requires a traffic source such as a test device that can inject synthetic traffic.

6.2.1. Configuration considerations

In order to support on-demand LM, the beginning and duration of the LM procedures, the transmission rate and, for E-LSPs, the PHB class associated with the LM OAM packets originating from a MEP must be configured as part of the on-demand LM provisioning. LM OAM packets should be transmitted with the PHB that yields the lowest drop precedence, as described in section 5.5.1.

6.2.2. Sampling skew

The same considerations described in section 5.5.2 for pro-active LM are also applicable to on-demand LM implementations.

6.2.3. Multilink issues

Multilink issues are as described in section 5.5.3.

6.3. Diagnostic Tests

Diagnostic tests are tests performed on a MEG that has been taken out-of-service.

6.3.1. Throughput Estimation

Throughput estimation is an on-demand out-of-service function, as required in section 2.2.5 of RFC 5860 [11], that allows verifying the bandwidth/throughput of an MPLS-TP transport path (LSP or PW) before it is put in service.

Throughput estimation is performed between MEPs and between a MEP and a MIP. It can be performed in one-way or two-way modes.
According to RFC 2544 [12], this test is performed by sending OAM test packets at an increasing rate (up to the theoretical maximum), computing the percentage of OAM test packets received, and reporting the rate at which OAM test packets begin to drop. In general, this rate is dependent on the OAM test packet size.

When configured to perform such tests, a source MEP inserts OAM test packets with a specified packet size and transmission pattern at a rate that exercises the throughput.

The throughput test can create congestion within the network, impacting other transport paths. However, the test traffic should comply with the traffic profile of the transport path under test, so the impact of the test will not be worse than the impact caused by the customers, whose traffic would be sent over that transport path, sending traffic at the maximum rate allowed by their traffic profiles. Therefore, throughput tests are not applicable to transport paths that do not have a defined traffic profile, such as, for instance, LSPs in a context where statistical multiplexing is leveraged for network capacity dimensioning.

For a one-way test, the remote sink MEP receives the OAM test packets and calculates the packet loss. For a two-way test, the remote MEP loops back the OAM test packets to the originating MEP, and the local sink MEP calculates the packet loss.

It is worth noting that two-way throughput estimation is only applicable to bidirectional (co-routed or associated) transport paths and can only evaluate the minimum of the available throughput of the two directions. In order to estimate the throughput of each direction uniquely, two one-way throughput estimation sessions have to be set up. One-way throughput estimation requires coordination between the transmitting and receiving test devices, as described in section 6 of RFC 2544 [12].
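The rate search described above can be sketched as a binary search for the highest trial rate that shows no loss, in the spirit of the RFC 2544 throughput procedure (a simplified sketch; `send_trial` is a hypothetical hook standing in for one measurement trial, and real tests also sweep packet sizes and apply trial durations):

```python
def estimate_throughput(send_trial, rate_max, tolerance=1.0):
    """Binary search for the highest rate (packets/s) at which a trial
    shows no loss.

    send_trial(rate) runs one trial at the given rate and returns the
    fraction of test packets received (1.0 means no loss).
    """
    lo, hi = 0.0, rate_max
    while hi - lo > tolerance:
        mid = (lo + hi) / 2
        if send_trial(mid) >= 1.0:   # no loss at this rate
            lo = mid                 # try a faster rate
        else:
            hi = mid                 # back off
    return lo


# Toy path that starts dropping packets above 400 packets/s:
trial = lambda rate: 1.0 if rate <= 400 else 0.9
rate = estimate_throughput(trial, rate_max=1000)
assert abs(rate - 400) <= 1.0
```

As the text notes, the result is only meaningful for the packet size tested and for the instant at which the measurement is made.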
It is also worth noting that if throughput estimation is performed on transport paths that transit oversubscribed links, the test may not produce comprehensive results if viewed in isolation, because the impact of the test on the surrounding traffic needs to also be considered. Moreover, the estimation will only reflect the bandwidth available at the moment when the measurement is made.

MIPs that are not targeted by on-demand test OAM packets, as well as intermediate nodes, do not process the throughput test information and forward these on-demand test OAM packets as regular data packets.

6.3.1.1. Configuration considerations

Throughput estimation is an out-of-service tool. The diagnosed MEG should be put into a Lock status before the diagnostic test is started.

A MEG can be put into a Lock status either via an NMS action or using the Lock Instruct OAM tool as defined in section 7.

At the transmitting MEP, provisioning is required for a test signal generator, which is associated with the MEP. At a receiving MEP, provisioning is required for a test signal detector, which is associated with the MEP.

In order to ensure accurate measurement, care needs to be taken to enable throughput estimation only if all the MEPs within the MEG can process OAM test packets at the same rate as the payload data rates (see section 6.3.1.2).

6.3.1.2. Limited OAM processing rate

If an implementation is able to process payload at much higher data rates than OAM test packets, then accurate measurement of throughput using OAM test packets is not achievable. Whether OAM packets can be processed at the same rate as payload is implementation dependent.

6.3.1.3. Multilink considerations

If multilink is used, then it may not be possible to perform throughput measurement, as the throughput test may not have a mechanism for utilizing more than one component link of the aggregated link.

6.3.2. Data plane Loopback

Data plane loopback is an out-of-service function, as required in section 2.2.5 of RFC 5860 [11]. This function consists in placing a transport path, at either an intermediate or terminating node, into a data plane loopback state, such that all traffic (including both payload and OAM) received on the looped-back interface is sent back in the reverse direction of the transport path. The traffic is looped back unmodified, other than normal per-hop processing such as TTL decrement.

The data plane loopback function requires that the MEG is locked such that user data traffic is prevented from entering/exiting that MEG. Instead, test traffic is inserted at the ingress of the MEG. This test traffic can be generated from an internal process residing within the ingress node or injected by external test equipment connected to the ingress node.

It is also normal to disable proactive monitoring of the path, as the MEP located upstream with respect to the node set in the data plane loopback mode will see all the OAM packets originated by itself, and this may interfere with other measurements.

The only way to send an OAM packet (e.g., to remove the data plane loopback state) to the MIPs or MEPs hosted by a node set in the data plane loopback mode is via TTL expiry. It should also be noted that MIPs can be addressed with more than one TTL value on a co-routed bi-directional path set into data plane loopback.

If the loopback function is to be performed at an intermediate node, it is only applicable to co-routed bi-directional paths.
If the loopback is to be performed end to end, it is applicable to both co-routed bi-directional and associated bi-directional paths.

It should be noted that the data plane loopback function itself is applied to data plane loopback points that can reside on different interfaces from MIPs/MEPs. Whether a node implements the data plane loopback capability, and whether it implements it at more than one point, is implementation dependent.

6.3.2.1. Configuration considerations

Data plane loopback is an out-of-service tool. The MEG which defines a diagnosed transport path should be put into a locked state before the diagnostic test is started. However, a means is required to permit the originated test traffic to be inserted at the ingress MEP when data plane loopback is performed.

A transport path, at either an intermediate or terminating node, can be put into the data plane loopback state via an NMS action or using an OAM tool for data plane loopback configuration.

If the data plane loopback point is set somewhere at an intermediate point of a co-routed bidirectional transport path, the side of the loopback function (one side or both sides) needs to be configured.

6.4. Route Tracing

It is often necessary to trace a route covered by a MEG from an originating MEP to the peer MEP(s), including all the MIPs in-between. This may be conducted after provisioning an MPLS-TP transport path for, e.g., troubleshooting purposes such as fault localization.

The route tracing function, as required in section 2.2.4 of RFC 5860 [11], provides this functionality. Based on the fate-sharing requirement of OAM flows, i.e. OAM packets receive the same forwarding treatment as data packets, route tracing is a basic means to perform connectivity verification and, to a much lesser degree, continuity check.
For this function to work properly, a return path must be present.

Route tracing might be implemented in different ways, and this document does not preclude any of them.

Route tracing should always discover the full list of MIPs and of the peer MEPs. In case a defect exists, the route trace function will only be able to trace up to the defect, and it needs to be able to return the incomplete list of OAM entities that it was able to trace, such that the fault can be localized.

6.4.1. Configuration considerations

The configuration of the route trace function must at least support the setting of the number of trace attempts before it gives up.

6.5. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities supported by the MPLS-TP PM function in order to facilitate reporting of QoS information for a transport path, as required in section 2.2.12 of RFC 5860 [11]. Specifically, on-demand DM is used to measure packet delay and packet delay variation in the transport path monitored by a pair of MEPs during a pre-defined monitoring period.

On-demand DM is performed by sending periodic DM OAM packets from a MEP to a peer MEP and by receiving DM OAM packets from the peer MEP (if a co-routed or associated bidirectional transport path) during a configurable time interval.

On-demand DM can be operated in two modes:

o  One-way: a MEP sends a DM OAM packet to its peer MEP containing all the required information to facilitate one-way packet delay and/or one-way packet delay variation measurements at the peer MEP. Note that this requires precise time synchronisation at either MEP by means outside the scope of this framework.

o  Two-way: a MEP sends a DM OAM packet with a DM request to its peer MEP, which replies with a DM OAM packet as a DM response.
The request/response DM OAM packets contain all the required information to facilitate two-way packet delay and/or two-way packet delay variation measurements from the viewpoint of the originating MEP.

MIPs, as well as intermediate nodes, do not process the DM information and forward these on-demand DM OAM packets as regular data packets.

6.5.1. Configuration considerations

In order to support on-demand DM, the beginning and duration of the DM procedures, the transmission rate and, for E-LSPs, the PHB associated with the DM OAM packets originating from a MEP need to be configured as part of the DM provisioning. DM OAM packets should be transmitted with the PHB that yields the lowest drop precedence within the measured PHB Scheduling Class (see RFC 3260 [17]).

In order to verify different performance between long and short packets (e.g., due to the processing time), it should be possible for the operator to configure the packet size of the on-demand OAM DM packet.

7. OAM Functions for administration control

7.1. Lock Instruct

The Lock Instruct (LKI) function, as required in section 2.2.6 of RFC 5860 [11], is a command allowing a MEP to instruct the peer MEP(s) to put the MPLS-TP transport path into a locked condition.

This function allows single-side provisioning for administratively locking (and unlocking) an MPLS-TP transport path.

Note that it is also possible to administratively lock (and unlock) an MPLS-TP transport path using two-side provisioning, where the NMS administratively puts both MEPs into an administrative lock condition. In this case, the LKI function is not required/used.

MIPs, as well as intermediate nodes, do not process the lock instruct information and forward these on-demand LKI OAM packets as regular data packets.

7.1.1. Locking a transport path

A MEP, upon receiving a single-side administrative lock command from an NMS, sends an LKI request OAM packet to its peer MEP(s). It also puts the MPLS-TP transport path into a locked state and notifies its client (sub-)layer adaptation function of the locked condition.

A MEP, upon receiving an LKI request from its peer MEP, can either accept or reject the instruction and replies to the peer MEP with an LKI reply OAM packet indicating whether or not it has accepted the instruction. This requires either an in-band or out-of-band return path. The LKI reply is needed to allow the MEP to properly report to the NMS the actual result of the single-side administrative lock command.

If the lock instruction has been accepted, it also puts the MPLS-TP transport path into a locked state and notifies its client (sub-)layer adaptation function of the locked condition.

Note that if the client (sub-)layer is also MPLS-TP, Lock Reporting (LKR) generation at the client MPLS-TP (sub-)layer is started, as described in section 5.4.

7.1.2. Unlocking a transport path

A MEP, upon receiving a single-side administrative unlock command from the NMS, sends an LKI removal request OAM packet to its peer MEP(s).

The peer MEP, upon receiving an LKI removal request, can either accept or reject the removal instruction and replies with an LKI removal reply OAM packet indicating whether or not it has accepted the instruction. The LKI removal reply is needed to allow the MEP to properly report to the NMS the actual result of the single-side administrative unlock command.

If the lock removal instruction has been accepted, it also clears the locked condition on the MPLS-TP transport path and notifies this event to its client (sub-)layer adaptation function.
The MEP that has initiated the LKI clear procedure, upon receiving a positive LKI removal reply, also clears the locked condition on the MPLS-TP transport path and notifies this event to its client (sub-)layer adaptation function.

Note that if the client (sub-)layer is also MPLS-TP, Lock Reporting (LKR) generation at the client MPLS-TP (sub-)layer is terminated, as described in section 5.4.

8. Security Considerations

A number of security considerations are important in the context of OAM applications.

OAM traffic can reveal sensitive information such as performance data and details about the current state of the network. Insertion of, or modifications to, OAM transactions can mask the true operational state of the network, and in the case of transactions for administration control, such as Lock or data plane loopback instructions, these can be used for explicit denial-of-service attacks. The effect of such attacks is mitigated only by the fact that, for in-band messaging, the managed entities whose state can be masked are limited to those that transit the point of malicious access to the network internals, due to the fate-sharing nature of OAM messaging. This is not true when an out-of-band return path is employed.

The sensitivity of OAM data therefore suggests that one solution is to have some form of authentication, authorization and encryption in place. This will prevent unauthorized access to vital equipment, and it will prevent third parties from learning sensitive information about the transport network.
However, it should be observed that the combination of the frequency of some OAM transactions, the need for timeliness of OAM transaction exchange, and all the permutations of unique MEP-to-MEP, MEP-to-MIP, and intermediate-system-originated transactions militates against the practical establishment and maintenance of a large number of security associations per MEG, either in advance or as required.

For this reason it is assumed that the internal links of the network are physically secured from malicious access, such that OAM transactions scoped to fault and performance management of individual MEGs are not encumbered with additional security. Further, it is assumed in multi-provider cases, where OAM transactions originate outside of an individual provider's trusted domain, that filtering mechanisms or further encapsulation will need to constrain the potential impact of malicious transactions. Mechanisms that the framework does not specify might be subject to additional security considerations.

In case of mis-configuration, some nodes can receive OAM packets that they cannot recognize. In such a case, these OAM packets should be silently discarded in order to avoid malfunctions whose effect may be similar to malicious attacks (e.g., degraded performance or even failure). Further considerations about data plane attacks via the G-ACh are provided in RFC 5921 [8].

9. IANA Considerations

This memo does not have any IANA considerations.

10. Acknowledgments

The authors would like to thank all members of the teams (the Joint Working Team, the MPLS Interoperability Design Team in IETF and the Ad Hoc Group on MPLS-TP in ITU-T) involved in the definition and specification of the MPLS Transport Profile.
The editors gratefully acknowledge the contributions of Adrian Farrel, Yoshinori Koike, Luca Martini, Yuji Tochio and Manuel Paul for the definition of per-interface MIPs and MEPs.

The editors gratefully acknowledge the contributions of Malcolm Betts, Yoshinori Koike, Xiao Min, and Maarten Vissers for the lock report and lock instruction description.

The authors would also like to thank Alessandro D'Alessandro, Loa Andersson, Malcolm Betts, Dave Black, Stewart Bryant, Rui Costa, Xuehui Dai, John Drake, Adrian Farrel, Dan Frost, Xia Liang, Liu Gouman, Peng He, Russ Housley, Feng Huang, Su Hui, Yoshinori Koike, Thomas Morin, George Swallow, Yuji Tochio, Curtis Villamizar, Maarten Vissers and Xuequin Wei for their comments and enhancements to the text.

This document was prepared using 2-Word-v2.0.template.dot.

11. References

11.1. Normative References

[1]  Rosen, E., Viswanathan, A., Callon, R., "Multiprotocol Label Switching Architecture", RFC 3031, January 2001

[2]  Bryant, S., Pate, P., "Pseudo Wire Emulation Edge-to-Edge (PWE3) Architecture", RFC 3985, March 2005

[3]  Nadeau, T., Pignataro, S., "Pseudowire Virtual Circuit Connectivity Verification (VCCV): A Control Channel for Pseudowires", RFC 5085, December 2007

[4]  Bocci, M., Bryant, S., "An Architecture for Multi-Segment Pseudo Wire Emulation Edge-to-Edge", RFC 5659, October 2009

[5]  Niven-Jenkins, B., Brungard, D., Betts, M., Sprecher, N., Ueno, S., "MPLS-TP Requirements", RFC 5654, September 2009

[6]  Agarwal, P., Akyol, B., "Time To Live (TTL) Processing in Multiprotocol Label Switching (MPLS) Networks", RFC 3443, January 2003

[7]  Vigoureux, M., Bocci, M., Swallow, G., Ward, D., Aggarwal, R., "MPLS Generic Associated Channel", RFC 5586, June 2009

[8]  Bocci, M., et al., "A Framework for MPLS in Transport Networks", RFC 5921, July 2010
[9]  Bocci, M., et al., "MPLS Transport Profile User-to-Network and Network-to-Network Interfaces", draft-ietf-mpls-tp-uni-nni-03 (work in progress), January 2011

[10] Swallow, G., Bocci, M., "MPLS-TP Identifiers", draft-ietf-mpls-tp-identifiers-03 (work in progress), October 2010

[11] Vigoureux, M., Betts, M., Ward, D., "Requirements for OAM in MPLS Transport Networks", RFC 5860, May 2010

[12] Bradner, S., McQuaid, J., "Benchmarking Methodology for Network Interconnect Devices", RFC 2544, March 1999

[13] Blake, S., Black, D., Carlson, M., Davies, E., Wang, Z., Weiss, W., "An Architecture for Differentiated Services", RFC 2475, December 1998

[14] ITU-T Recommendation G.806 (01/09), "Characteristics of transport equipment - Description methodology and generic functionality", January 2009

11.2. Informative References

[15] Sprecher, N., Nadeau, T., van Helvoort, H., Weingarten, Y., "MPLS-TP OAM Analysis", draft-ietf-mpls-tp-oam-analysis-03 (work in progress), January 2011

[16] Nichols, K., Blake, S., Baker, F., Black, D., "Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers", RFC 2474, December 1998

[17] Grossman, D., "New Terminology and Clarifications for Diffserv", RFC 3260, April 2002
[18] Kompella, K., Rekhter, Y., Berger, L., "Link Bundling in MPLS Traffic Engineering (TE)", RFC 4201, October 2005

[19] ITU-T Recommendation G.707/Y.1322 (01/07), "Network node interface for the synchronous digital hierarchy (SDH)", January 2007

[20] ITU-T Recommendation G.805 (03/00), "Generic functional architecture of transport networks", March 2000

[21] ITU-T Recommendation Y.1731 (02/08), "OAM functions and mechanisms for Ethernet based networks", February 2008

[22] IEEE Standard 802.1AX-2008, "IEEE Standard for Local and Metropolitan Area Networks - Link Aggregation", November 2008

[23] Le Faucheur, F., et al., "Multi-Protocol Label Switching (MPLS) Support of Differentiated Services", RFC 3270, May 2002

Authors' Addresses

Dave Allan
Ericsson
Email: david.i.allan@ericsson.com

Italo Busi
Alcatel-Lucent
Email: Italo.Busi@alcatel-lucent.com

Ben Niven-Jenkins
Velocix
Email: ben@niven-jenkins.co.uk

Annamaria Fulignoli
Ericsson
Email: annamaria.fulignoli@ericsson.com

Enrique Hernandez-Valencia
Alcatel-Lucent
Email: Enrique.Hernandez@alcatel-lucent.com

Lieven Levrau
Alcatel-Lucent
Email: Lieven.Levrau@alcatel-lucent.com

Vincenzo Sestito
Alcatel-Lucent
Email: Vincenzo.Sestito@alcatel-lucent.com

Nurit Sprecher
Nokia Siemens Networks
Email: nurit.sprecher@nsn.com

Huub van Helvoort
Huawei Technologies
Email: hhelvoort@huawei.com

Martin Vigoureux
Alcatel-Lucent
Email: Martin.Vigoureux@alcatel-lucent.com

Yaacov Weingarten
Nokia Siemens Networks
Email: yaacov.weingarten@nsn.com

Rolf Winter
NEC
Email: Rolf.Winter@nw.neclab.eu