MPLS Working Group                                          I. Busi (Ed)
Internet Draft                                            Alcatel-Lucent
Intended status: Informational                             D. Allan (Ed)
                                                                Ericsson
Expires: June 16, 2011                                 December 16, 2010

         Operations, Administration and Maintenance Framework
                   for MPLS-based Transport Networks
                 draft-ietf-mpls-tp-oam-framework-10.txt

Abstract

   The Transport Profile of Multi-Protocol Label Switching (MPLS-TP) is a packet-based transport technology based on the MPLS Traffic Engineering (MPLS-TE) and Pseudowire (PW) data plane architectures.

   This document describes a framework to support a comprehensive set of Operations, Administration and Maintenance (OAM) procedures that fulfill the MPLS-TP OAM requirements for fault, performance and protection-switching management and that do not rely on the presence of a control plane.

   This document is a product of a joint Internet Engineering Task Force (IETF) / International Telecommunication Union Telecommunication Standardization Sector (ITU-T) effort to include an MPLS Transport Profile within the IETF MPLS and PWE3 architectures to support the capabilities and functionalities of a packet transport network as defined by the ITU-T.

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress".

   The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

   The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on June 16, 2011.
Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (http://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.

Table of Contents

   1. Introduction
      1.1. Contributing Authors
   2. Conventions used in this document
      2.1. Terminology
      2.2. Definitions
   3. Functional Components
      3.1. Maintenance Entity and Maintenance Entity Group
      3.2. Nested MEGs: SPMEs and Tandem Connection Monitoring
      3.3. MEG End Points (MEPs)
      3.4. MEG Intermediate Points (MIPs)
      3.5. Server MEPs
      3.6. Configuration Considerations
      3.7. P2MP considerations
      3.8. Further considerations of enhanced segment monitoring
   4. Reference Model
      4.1. MPLS-TP Section Monitoring (SMEG)
      4.2. MPLS-TP LSP End-to-End Monitoring Group (LMEG)
      4.3. MPLS-TP PW Monitoring (PMEG)
      4.4. MPLS-TP LSP SPME Monitoring (LSMEG)
      4.5. MPLS-TP MS-PW SPME Monitoring (PSMEG)
      4.6. Fate sharing considerations for multilink
   5. OAM Functions for proactive monitoring
      5.1. Continuity Check and Connectivity Verification
           5.1.1. Defects identified by CC-V
           5.1.2. Consequent action
           5.1.3. Configuration considerations
      5.2. Remote Defect Indication
           5.2.1. Configuration considerations
      5.3. Alarm Reporting
      5.4. Lock Reporting
      5.5. Packet Loss Measurement
           5.5.1. Configuration considerations
           5.5.2. Sampling skew
           5.5.3. Multilink issues
      5.6. Packet Delay Measurement
           5.6.1. Configuration considerations
      5.7. Client Failure Indication
           5.7.1. Configuration considerations
   6. OAM Functions for on-demand monitoring
      6.1. Connectivity Verification
           6.1.1. Configuration considerations
      6.2. Packet Loss Measurement
           6.2.1. Configuration considerations
           6.2.2. Sampling skew
           6.2.3. Multilink issues
      6.3. Diagnostic Tests
           6.3.1. Throughput Estimation
           6.3.2. Data plane Loopback
      6.4. Route Tracing
           6.4.1. Configuration considerations
      6.5. Packet Delay Measurement
           6.5.1. Configuration considerations
   7. OAM Functions for administration control
      7.1. Lock Instruct
           7.1.1. Locking a transport path
           7.1.2. Unlocking a transport path
   8. Security Considerations
   9. IANA Considerations
   10. Acknowledgments
   11. References
      11.1. Normative References
      11.2. Informative References

Editors' Note:

   This Informational Internet-Draft is aimed at achieving IETF Consensus before publication as an RFC and will be subject to an IETF Last Call.

   [RFC Editor, please remove this note before publication as an RFC and insert the correct Streams Boilerplate to indicate that the published RFC has IETF Consensus.]

1. Introduction

   As noted in the Multi-Protocol Label Switching Transport Profile (MPLS-TP) framework RFCs (RFC 5921 [8] and [9]), MPLS-TP is a packet-based transport technology based on the MPLS Traffic Engineering (MPLS-TE) and Pseudowire (PW) data plane architectures defined in RFC 3031 [1], RFC 3985 [2] and RFC 5659 [4].

   MPLS-TP supports a comprehensive set of Operations, Administration and Maintenance (OAM) procedures for fault, performance and protection-switching management that do not rely on the presence of a control plane.
   In line with [14], existing MPLS OAM mechanisms will be used wherever possible, and extensions or new OAM mechanisms will be defined only where existing mechanisms are not sufficient to meet the requirements. Some extensions discussed in this framework may prove to be aspirational capabilities that are not tractably realizable in some implementations. Extensions do not deprecate support for existing MPLS OAM capabilities.

   The MPLS-TP OAM framework defined in this document provides a protocol-neutral description of the required OAM functions and of the data plane OAM architecture to support a comprehensive set of OAM procedures that satisfy the MPLS-TP OAM requirements of RFC 5860 [11]. In this regard, it defines OAM functionality similar to that of existing SONET/SDH and OTN OAM mechanisms (e.g. [18]).

   The MPLS-TP OAM framework is applicable to sections, Label Switched Paths (LSPs), Multi-Segment Pseudowires ((MS-)PWs) and Sub-Path Maintenance Entities (SPMEs). It supports co-routed and associated bidirectional p2p transport paths as well as unidirectional p2p and p2mp transport paths.

   OAM packets that instrument a particular direction of a transport path are subject to the same forwarding treatment (i.e. fate sharing) as the data traffic and, in some cases where Explicitly TC-encoded-PSC LSPs (E-LSPs) are employed, may be required to have a common Per-hop Behavior (PHB) Scheduling Class (PSC) end-to-end with the class of traffic monitored. In the case of a Label-Only-Inferred-PSC LSP (L-LSP), only one class of traffic needs to be monitored, and therefore the OAM packets have a common PSC with the monitored traffic class.

   OAM packets can be distinguished from the data traffic using the GAL and ACH constructs of RFC 5586 [7] for LSPs, SPMEs and Sections, or the ACH construct of RFC 5085 [3] and RFC 5586 [7] for (MS-)PWs.
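   As an illustration of the GAL/ACH encapsulation described above, the following sketch packs an MPLS label stack entry and the 4-byte Associated Channel Header of RFC 5586 (first nibble 0001), with the GAL (label value 13) at the bottom of stack. The LSP label 1000 and channel type 0x1234 are placeholder values for this sketch, not values defined by this document:

```python
import struct

GAL_LABEL = 13  # G-ACh Label (GAL), RFC 5586

def mpls_label_entry(label, tc=0, s=0, ttl=255):
    """Pack one 32-bit MPLS label stack entry: label(20) TC(3) S(1) TTL(8)."""
    return struct.pack("!I", (label << 12) | (tc << 9) | (s << 8) | ttl)

def ach_header(channel_type, version=0):
    """Pack the 4-byte ACH: first nibble 0001, 4-bit version,
    8 reserved bits, 16-bit channel type (RFC 5586)."""
    return struct.pack("!BBH", (0b0001 << 4) | version, 0, channel_type)

# OAM packet skeleton for an LSP: tunnel label, then GAL, then ACH,
# followed by the OAM message body (omitted here).
packet = (mpls_label_entry(1000, ttl=64)      # LSP tunnel label
          + mpls_label_entry(GAL_LABEL, s=1)  # GAL at bottom of stack
          + ach_header(0x1234))               # ACH precedes the OAM message
assert len(packet) == 12
```

   For (MS-)PWs, the PW label itself takes the place of the GAL and the ACH directly follows it, per RFC 5085.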
   This framework makes certain assumptions as to the utility and frequency of different classes of measurement that naturally suggest different functions be implemented as distinct OAM flows or messages. This is dictated by the combination of the class of problem being detected and the need for timeliness of the network response to the problem. For example, fault detection is expected to operate on an entirely different time base than performance monitoring, which in turn is expected to operate on an entirely different time base than in-band management transactions.

   Section 3 describes the functional components that generate and process OAM packets.

   Section 4 describes the reference models for applying OAM functions to Sections, LSPs, MS-PWs and their SPMEs.

   Sections 5, 6 and 7 provide a protocol-neutral description of the OAM functions, defined in RFC 5860 [11], aimed at clarifying how the OAM protocol solutions will behave to achieve their functional objectives.

   This document is a product of a joint Internet Engineering Task Force (IETF) / International Telecommunication Union Telecommunication Standardization Sector (ITU-T) effort to include an MPLS Transport Profile within the IETF MPLS and PWE3 architectures to support the capabilities and functionalities of a packet transport network as defined by the ITU-T.

1.1. Contributing Authors

   Dave Allan, Italo Busi, Ben Niven-Jenkins, Annamaria Fulignoli, Enrique Hernandez-Valencia, Lieven Levrau, Vincenzo Sestito, Nurit Sprecher, Huub van Helvoort, Martin Vigoureux, Yaacov Weingarten, Rolf Winter

2. Conventions used in this document

2.1. Terminology

   AC     Attachment Circuit
   AIS    Alarm Indication Signal
   CC     Continuity Check
   CC-V   Continuity Check and Connectivity Verification
   CV     Connectivity Verification
   DBN    Domain Border Node
   E-LSP  Explicitly TC-encoded-PSC LSP
   ICC    ITU Carrier Code
   LER    Label Edge Router
   LKR    Lock Report
   L-LSP  Label-Only-Inferred-PSC LSP
   LM     Loss Measurement
   LME    LSP Maintenance Entity
   LMEG   LSP ME Group
   LSP    Label Switched Path
   LSR    Label Switching Router
   LSME   LSP SPME ME
   LSMEG  LSP SPME ME Group
   ME     Maintenance Entity
   MEG    Maintenance Entity Group
   MEP    Maintenance Entity Group End Point
   MIP    Maintenance Entity Group Intermediate Point
   NMS    Network Management System
   PE     Provider Edge
   PHB    Per-hop Behavior
   PM     Performance Monitoring
   PME    PW Maintenance Entity
   PMEG   PW ME Group
   PSC    PHB Scheduling Class
   PSME   PW SPME ME
   PSMEG  PW SPME ME Group
   PW     Pseudowire
   SLA    Service Level Agreement
   SME    Section Maintenance Entity
   SMEG   Section ME Group
   SPME   Sub-path Maintenance Element
   S-PE   Switching Provider Edge
   TC     Traffic Class
   T-PE   Terminating Provider Edge

2.2. Definitions

   This document uses the terms defined in RFC 5654 [5].

   This document uses the term 'Per-hop Behavior' as defined in RFC 2474 [15].

   This document uses the term LSP to indicate either a service LSP or a transport LSP (as defined in RFC 5921 [8]).

   This document uses the term Sub Path Maintenance Element (SPME) as defined in RFC 5921 [8].

   Where appropriate, the following definitions are aligned with ITU-T Recommendation Y.1731 [20] in order to have a common, unambiguous terminology. They are not, however, intended to imply a particular implementation, but rather serve as a framework to describe the necessary OAM functions for MPLS-TP.

   Adaptation function: The adaptation function is the interface between the client (sub-)layer and the server (sub-)layer.
   Branch Node: A node along a point-to-multipoint transport path that is connected to more than one downstream node.

   Bud Node: A node along a point-to-multipoint transport path that is at the same time a branch node and a leaf node for this transport path.

   Data plane loopback: An out-of-service test in which a transport path at either an intermediate or terminating node is placed into a data plane loopback state, such that all traffic (including both payload and OAM) received on the looped-back interface is sent back in the reverse direction of the transport path.

   Note - The only way to send an OAM packet to a node that has been put into data plane loopback mode is via TTL expiry, irrespective of whether the node is hosting MIPs or MEPs.

   Domain Border Node (DBN): An intermediate node in an MPLS-TP LSP that is at the boundary between two MPLS-TP OAM domains. Such a node may be present on the edge of two domains or may be connected by a link to the DBN at the edge of another OAM domain.

   Down MEP: A MEP that receives OAM packets from, and transmits them towards, the direction of a server layer.

   In-Service: The administrative status of a transport path when it is unlocked.

   Interface: An interface is the attachment point to a server (sub-)layer, e.g., an MPLS-TP section or MPLS-TP tunnel.

   Intermediate Node: An intermediate node transits traffic for an LSP or a PW. An intermediate node may originate OAM flows directed to downstream intermediate nodes or MEPs.

   Loopback: See the data plane loopback and OAM loopback definitions.

   Maintenance Entity (ME): Some portion of a transport path that requires management, bounded by two points (called MEPs), together with the relationship between those points to which maintenance and monitoring operations apply (details in section 3.1).

   Maintenance Entity Group (MEG): The set of one or more Maintenance Entities that maintain and monitor a section or a transport path in an OAM domain.

   MEP: A MEG End Point (MEP) is capable of initiating (source MEP) and terminating (sink MEP) OAM messages for fault management and performance monitoring. MEPs define the boundaries of an ME (details in section 3.3).

   MIP: A MEG Intermediate Point (MIP) terminates and processes OAM messages that are sent to this particular MIP and may generate OAM messages in reaction to received OAM messages. It never generates unsolicited OAM messages itself. A MIP resides within a MEG between MEPs (details in section 3.4).

   MPLS-TP Section: As defined in [8], a link that can be traversed by one or more MPLS-TP LSPs.

   OAM domain: A domain, as defined in [5], whose entities are grouped for the purpose of keeping the OAM confined within that domain. An OAM domain contains zero or more MEGs.

   Note - Within the rest of this document the term "domain" is used to indicate an "OAM domain".

   OAM flow: The set of all OAM messages originating with a specific source MEP that instrument one direction of a MEG (or possibly both in the special case of data plane loopback).

   OAM information element: An atomic piece of information exchanged between MEPs and/or MIPs in a MEG, used by an OAM application.

   OAM loopback: The capability of a node to be directed by a received OAM message to generate a reply back to the sender. OAM loopback can work in-service and can support different OAM functions (e.g., bidirectional on-demand connectivity verification).

   OAM Message: One or more OAM information elements that, when exchanged between MEPs or between MEPs and MIPs, perform some OAM functionality (e.g. connectivity verification).

   OAM Packet: A packet that carries one or more OAM messages (i.e. OAM information elements).

   Originating MEP: A MEP that originates an OAM transaction message (toward a target MIP/MEP) and expects a reply, either in-band or out-of-band, from that target MIP/MEP. The originating source MEP function always generates the OAM request packets in-band, while the originating sink MEP function expects and processes only OAM reply packets that are sent in-band by the target MIP/MEP.

   Out-of-Service: The administrative status of a transport path when it is locked. When a path is in a locked condition, it is blocked from carrying client traffic.

   Path Segment: Either a segment or a concatenated segment, as defined in RFC 5654 [5].

   Signal Degrade: A condition declared by a MEP when the data forwarding capability associated with a transport path has deteriorated, as determined by performance monitoring (PM). See also ITU-T Recommendation G.806 [13].

   Signal Fail: A condition declared by a MEP when the data forwarding capability associated with a transport path has failed, e.g. loss of continuity. See also ITU-T Recommendation G.806 [13].

   Sink MEP: A MEP acts as a sink MEP for an OAM message when it terminates and processes the messages received from its associated MEG.

   Source MEP: A MEP acts as a source MEP for an OAM message when it originates and inserts the message into the transport path for its associated MEG.

   Tandem Connection: A tandem connection is an arbitrary part of a transport path that can be monitored (via OAM) independently of the end-to-end monitoring (OAM). The tandem connection may also include the forwarding engine(s) of the node(s) at the boundaries of the tandem connection. Tandem connections may be nested but cannot overlap. See also ITU-T Recommendation G.805 [19].
   Target MEP/MIP: A MEP or a MIP that is targeted by OAM transaction messages and that replies to the originating MEP that initiated the OAM transactions. The target MEP or MIP can reply either in-band or out-of-band. The target sink MEP function always receives the OAM request packets in-band, while the target source MEP function only generates the OAM reply packets that are sent in-band.

   Up MEP: A MEP that transmits OAM packets towards, and receives them from, the direction of the forwarding engine.

3. Functional Components

   MPLS-TP is a packet-based transport technology based on the MPLS and PW data plane architectures ([1], [2] and [4]) and is capable of transporting service traffic where the characteristics of information transfer between the transport path endpoints can be demonstrated to comply with certain performance and quality guarantees.

   In order to describe the required OAM functionality, this document introduces a set of functional components.

3.1. Maintenance Entity and Maintenance Entity Group

   MPLS-TP OAM operates in the context of Maintenance Entities (MEs) that define a relationship between two points of a transport path to which maintenance and monitoring operations apply. The two points that define a Maintenance Entity are called Maintenance Entity Group (MEG) End Points (MEPs). The collection of one or more MEs that belong to the same transport path and that are maintained and monitored as a group is known as a Maintenance Entity Group (MEG). In between MEPs, there are zero or more intermediate points, called Maintenance Entity Group Intermediate Points (MIPs). MEPs and MIPs are associated with the MEG and can be shared by more than one ME in a MEG.
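   The ME/MEG/MEP/MIP relationships above can be sketched as a small data model. This is purely illustrative (the class and field names are ours, not defined by this document); it uses the p2mp topology of Figure 2, where root A reaches leaves D, E and F:

```python
from dataclasses import dataclass, field

@dataclass
class MaintenanceEntity:
    """An ME is bounded by two MEPs, with zero or more MIPs in between."""
    mep_a: str
    mep_b: str
    mips: list = field(default_factory=list)

@dataclass
class MEG:
    """A MEG groups the MEs that monitor one transport path; MEPs and
    MIPs are associated with the MEG and may be shared by several MEs."""
    name: str
    entities: list = field(default_factory=list)

    def meps(self):
        return {p for me in self.entities for p in (me.mep_a, me.mep_b)}

    def mips(self):
        return {m for me in self.entities for m in me.mips}

# One ME per leaf: A-D and A-E transit B and C; A-F transits B only.
meg = MEG("p2mp", [MaintenanceEntity("A", "D", ["B", "C"]),
                   MaintenanceEntity("A", "E", ["B", "C"]),
                   MaintenanceEntity("A", "F", ["B"])])
assert meg.meps() == {"A", "D", "E", "F"}
assert meg.mips() == {"B", "C"}
```

   Note how B and C appear as MIPs in more than one ME of the same MEG, matching the sharing described above.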
   An abstract reference model for an ME is illustrated in Figure 1 below:

          +-+    +-+    +-+    +-+
          |A|----|B|----|C|----|D|
          +-+    +-+    +-+    +-+

                Figure 1 ME Abstract Reference Model

   The instantiation of this abstract model to different MPLS-TP entities is described in section 4. In Figure 1, nodes A and D can be LERs for an LSP or the Terminating Provider Edges (T-PEs) for an MS-PW, while nodes B and C are LSRs for an LSP or Switching PEs (S-PEs) for an MS-PW. MEPs reside in nodes A and D, while MIPs reside in nodes B and C and may reside in A and D. The links connecting adjacent nodes can be physical links, (sub-)layer LSPs/SPMEs, or server layer paths.

   This functional model defines the relationships between all OAM entities from a maintenance perspective and allows each Maintenance Entity to provide monitoring and management of the (sub-)layer network under its responsibility, as well as efficient localization of problems.

   An MPLS-TP Maintenance Entity Group may be defined to monitor the transport path for fault and/or performance management.

   The MEPs that form a MEG bound the scope of an OAM flow to the MEG (i.e. within the domain of the transport path that is being monitored and managed). There are two exceptions to this:

   1) A misbranching fault may cause OAM packets to be delivered to a MEP that is not in the MEG of origin.

   2) An out-of-band return path may be used between a MIP or a MEP and the originating MEP.

   In the case of unidirectional point-to-point transport paths, a single unidirectional Maintenance Entity is defined to monitor the path.

   In the case of associated bidirectional point-to-point transport paths, two independent unidirectional Maintenance Entities are defined to independently monitor each direction. This has implications for transactions that terminate at or query a MIP, as a return path from the MIP to the originating MEP does not necessarily exist in the MEG.

   In the case of co-routed bidirectional point-to-point transport paths, a single bidirectional Maintenance Entity is defined to monitor both directions congruently.

   In the case of unidirectional point-to-multipoint transport paths, a single unidirectional Maintenance Entity for each leaf is defined to monitor the transport path from the root to that leaf.

   In all cases, portions of the transport path may be monitored by the instantiation of SPMEs (see section 3.2).

   The reference model for the p2mp MEG is represented in Figure 2.

                                +-+
                            /--|D|
                           /   +-+
                        +-+
                    /--|C|
          +-+    +-+/   +-+\    +-+
          |A|----|B|        \--|E|
          +-+    +-+\    +-+    +-+
                     \--|F|
                        +-+

                  Figure 2 Reference Model for p2mp MEG

   In the case of p2mp transport paths, the OAM measurements are independent for each ME (A-D, A-E and A-F):

   o  Fault conditions - some faults may impact more than one ME depending on where the failure is located;

   o  Packet loss - packet dropping may impact more than one ME depending on where the packets are lost;

   o  Packet delay - will be unique per ME.

   Each leaf (i.e. D, E and F) terminates OAM flows to monitor the ME between itself and the root, while the root (i.e. A) generates OAM messages common to all the MEs of the p2mp MEG. All nodes may implement a MIP in the corresponding MEG.

3.2. Nested MEGs: SPMEs and Tandem Connection Monitoring

   In order to verify and maintain performance and quality guarantees, there is a need to apply OAM functionality not only at the granularity of a transport path (e.g. LSP or MS-PW), but also on arbitrary parts of transport paths, defined as Tandem Connections, between any two arbitrary points along a transport path.
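   As noted in the definition of tandem connections (section 2.2), they (and the MEGs that monitor them) may be nested but cannot overlap. A small sketch of that validity check, under the assumption that monitored segments are identified by (start, end) node indices along one transport path:

```python
def properly_nested(segments):
    """Check that monitored segments, given as (start, end) node indices
    along a path, are pairwise either disjoint or fully nested: tandem
    connections/MEGs may be nested but must not partially overlap.
    Segments sharing only a boundary node are treated as disjoint."""
    for i, (a1, b1) in enumerate(segments):
        for a2, b2 in segments[i + 1:]:
            disjoint = b1 <= a2 or b2 <= a1
            nested = (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)
            if not (disjoint or nested):
                return False
    return True

assert properly_nested([(0, 10), (2, 5), (5, 8)])  # nested and disjoint
assert not properly_nested([(0, 6), (4, 10)])      # partial overlap: invalid
```

   The same check expresses the MEG property listed in section 3.2: a MEG may cover a path segment of another MEG, but two MEGs may never partially overlap.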
   Sub-path Maintenance Elements (SPMEs), as defined in [8], are hierarchical LSPs instantiated to provide monitoring of a portion of a set of transport paths (LSPs or MS-PWs) that are co-routed within the OAM domain. The operational aspects of instantiating SPMEs are out of the scope of this memo.

   SPMEs can also be employed to meet the requirement to provide tandem connection monitoring (TCM), as defined by ITU-T Recommendation G.805 [19].

   TCM for a given path segment of a transport path is implemented by creating an SPME that has a 1:1 association with the path segment of the transport path that is to be monitored.

   In the TCM case, this means that the SPME used to provide TCM can carry one and only one transport path, thus allowing direct correlation between all fault management and performance monitoring information gathered for the SPME and the monitored path segment of the end-to-end transport path.

   There are a number of implications to this approach:

   1) The SPME would use the uniform model [22] of Traffic Class (TC) code point copying between sub-layers for Diffserv, such that the end-to-end markings and PHB treatment for the transport path are preserved by the SPMEs.

   2) The SPME normally would use the short-pipe model for TTL handling [6] (no TTL copying between sub-layers), such that the TTL distance to the MIPs for the end-to-end entity would not be impacted by the presence of the SPME, but it should be possible for an operator to specify use of the uniform model.

   Note that points 1 and 2 above assume that the TTL copying mode and the TC copying mode are independently configurable for an LSP.

   There are specific issues with the use of the uniform model of TTL copying for an SPME:

   1. A MIP in the SPME sub-layer is not part of the transport path MEG; hence only an out-of-band return path might be available for OAM originating in the transport path MEG that addresses an SPME MIP.

   2. The instantiation of a lower-level MEG or protection-switching actions within a lower-level MEG may change the TTL distances to MIPs in the higher-level MEGs.

   The endpoints of the SPME are MEPs and limit the scope of an OAM flow to within the MEG that the MEPs belong to (i.e. within the domain of the SPME that is being monitored and managed).

   When considering SPMEs, it is important to consider that the following properties apply to all MPLS-TP MEGs (regardless of whether they instrument LSPs, SPMEs or MS-PWs):

   o  They can be nested but not overlapped, e.g. a MEG may cover a path segment of another MEG, and may also include the forwarding engine(s) of the node(s) at the edge(s) of the path segment. However, when MEGs are nested, the MEPs and MIPs in the nested MEG are no longer part of the encompassing MEG.

   o  It is possible for MEPs of nested MEGs to reside on a single node, but again implemented in such a way that they do not overlap.

   o  Each OAM flow is associated with a single MEG.

   o  When an SPME is instantiated after the transport path has been instantiated, the TTL distance to the MIPs will change for the pipe model of TTL copying, and will change for the uniform model if the SPME is not co-routed with the original path.

3.3. MEG End Points (MEPs)

   MEG End Points (MEPs) are the source and sink points of a MEG. In the context of an MPLS-TP LSP, only LERs can implement MEPs, while in the context of an SPME, any LSR of the MPLS-TP LSP can be an LER of SPMEs that contribute to the overall monitoring infrastructure of the transport path.
   Regarding PWs, only T-PEs can implement MEPs, while for SPMEs supporting one or more PWs, both T-PEs and S-PEs can implement SPME MEPs. Any MPLS-TP LSR can implement a MEP for an MPLS-TP Section.

   MEPs are responsible for originating all of the proactive and on-demand monitoring OAM functionality for the MEG. There is a separate class of notifications (such as Lock Report (LKR) and Alarm Indication Signal (AIS)) that are originated by intermediate nodes and triggered by server layer events. A MEP is capable of originating and terminating OAM messages for fault management and performance monitoring. These OAM messages are encapsulated into an OAM packet using the G-ACh with an appropriate channel type as defined in RFC 5586 [7]. A MEP terminates all the OAM packets it receives that belong to its MEG and silently discards those that do not (note that in the particular case of Connectivity Verification (CV), processing a CV message from an incorrect MEG will result in a mis-connectivity defect, and further actions are taken). The MEG an OAM packet belongs to is inferred from the MPLS or PW label or, in the case of an MPLS-TP Section, from the port on which the OAM packet was received with the GAL at the top of the label stack.

   OAM packets may require the use of an available "out-of-band" return path (as defined in [8]). In such cases, sufficient information is required in the originating transaction such that the OAM reply packet can be constructed (e.g. an IP address).

   Each OAM solution document will further detail the applicability of the tools it defines as a proactive or on-demand mechanism, as well as its usage when:

   o  The "in-band" return path exists and is used;

   o  An "out-of-band" return path exists and is used;

   o  No return path exists or none is used.
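   The packet-disposition rule for a sink MEP described above (terminate packets of its own MEG, silently discard foreign ones, but raise a mis-connectivity defect on a CV message from another MEG) can be sketched as follows. The MEG identifiers and message-type strings are placeholders for illustration:

```python
OWN_MEG = "MEG-42"  # hypothetical identifier of the MEP's own MEG

def sink_mep_receive(pkt, own_meg=OWN_MEG):
    """Disposition of an OAM packet at a sink MEP: terminate and process
    packets of its own MEG; silently discard foreign ones, except that a
    CV message from another MEG raises a mis-connectivity defect."""
    if pkt["meg"] == own_meg:
        return "process"
    if pkt["type"] == "CV":
        return "mis-connectivity-defect"
    return "discard"

assert sink_mep_receive({"meg": "MEG-42", "type": "CC"}) == "process"
assert sink_mep_receive({"meg": "MEG-7", "type": "CV"}) == "mis-connectivity-defect"
assert sink_mep_receive({"meg": "MEG-7", "type": "LM"}) == "discard"
```

   In a real implementation the MEG would be inferred from the MPLS or PW label (or, for a Section, from the receiving port), as the text above explains, rather than carried as an explicit field.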
720 Once a MEG is configured, the operator can configure which 721 proactive OAM functions to use on the MEG, but the MEPs are 722 always enabled. A node at the edge of a MEG always supports a 723 MEP. 725 MEPs terminate all OAM packets received from the associated MEG. 726 As the MEP corresponds to the termination of the forwarding path 727 for a MEG at the given (sub-)layer, OAM packets never leak 728 outside of a MEG in a properly configured fault-free 729 implementation. 731 A MEP of an MPLS-TP transport path coincides with transport path 732 termination and monitors it for failures or performance 733 degradation (e.g. based on packet counts) in an end-to-end 734 scope. Note that both the source MEP and the sink MEP coincide 735 with the transport path's source and sink terminations. 737 The MEPs of an SPME are not necessarily coincident with the 738 termination of the MPLS-TP transport path. They are used to 739 monitor a path segment of the transport path for failures or 740 performance degradation (e.g. based on packet counts) only 741 within the boundary of the MEG for the SPME. 743 An MPLS-TP sink MEP passes a fault indication to its client 744 (sub-)layer network as a consequent action of fault detection. 745 When the client layer is not MPLS-TP, the consequent actions in 746 the client layer (e.g., ignore or generate client layer specific 747 OAM notifications) are outside the scope of this document. 749 A node at the edge of a MEG can support either a per-node MEP or 750 per-interface MEP(s). A per-node MEP resides in an unspecified 751 location within the node, while a per-interface MEP resides on a 752 specific side of the forwarding engine. In particular, a 753 per-interface MEP is called an "Up MEP" or a "Down MEP" depending 754 on its location relative to the forwarding engine.
An "Up MEP" 755 transmits OAM packets towards, and receives them from, the 756 direction of the forwarding engine, while a "Down MEP" receives 757 OAM packets from, and transmits them towards, the direction of a 758 server layer. 760 Conceptually, these "per-interface" MIP locations can be mapped 761 to the MPLS architecture by associating the MIP points with 762 FTN/ILM/NHLFE processing, such that the MIP positioning within a 763 node logically bookends the NHLFE processing step of how a 764 packet is handled by an LSR/LER (either prior to or post label 765 processing and packet forwarding). A nodal MIP makes no 766 representation as to where in a node's packet handling process a 767 MIP is located. 769 Source node Up MEP Destination node Up MEP 770 ------------------------ ------------------------ 771 | | | | 772 |----- -----| |----- -----| 773 | MEP | | | | | | MEP | 774 | | ---- | | | | ---- | | 775 | In |->-| FW |->-| Out |->- ->-| In |->-| FW |->-| Out | 776 | i/f | ---- | i/f | | i/f | ---- | i/f | 777 |----- -----| |----- -----| 778 | | | | 779 ------------------------ ------------------------ 780 (1) (2) 782 Source node Down MEP Destination node Down MEP 783 ------------------------ ------------------------ 784 | | | | 785 |----- -----| |----- -----| 786 | | | MEP | | MEP | | | 787 | | ---- | | | | ---- | | 788 | In |->-| FW |->-| Out |->- ->-| In |->-| FW |->-| Out | 789 | i/f | ---- | i/f | | i/f | ---- | i/f | 790 |----- -----| |----- -----| 791 | | | | 792 ------------------------ ------------------------ 793 (3) (4) 795 Figure 3 Examples of per-interface MEPs 797 Figure 3 describes four examples of per-interface MEPs: an Up 798 Source MEP in a source node (case 1), an Up Sink MEP in a 799 destination node (case 2), a Down Source MEP in a source node 800 (case 3) and a Down Sink MEP in a destination node (case 4).
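The four placements in Figure 3 can be restated as a small lookup, shown below as an illustrative sketch; the role/orientation vocabulary and the returned interface names are our own, not terminology from this framework.

```python
def mep_interface(role, orientation):
    """Interface hosting a per-interface MEP, per the four cases of
    Figure 3.  An Up MEP exchanges OAM packets with the forwarding
    engine (FW): it sits on the ingress interface at the source node
    (case 1) and on the egress interface at the destination node
    (case 2).  A Down MEP faces the server layer, giving the mirror
    placements (cases 3 and 4).
    """
    placements = {
        ("source", "up"): "in-interface",     # case (1)
        ("sink", "up"): "out-interface",      # case (2)
        ("source", "down"): "out-interface",  # case (3)
        ("sink", "down"): "in-interface",     # case (4)
    }
    return placements[(role, orientation)]
```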
802 The usage of per-interface Up MEPs extends the coverage of the 803 ME for both fault and performance monitoring closer to the edge 804 of the domain and allows failures or performance degradation to 805 be isolated as being within a node or within the link or 806 interfaces. 808 Each OAM solution document will further detail the implications 809 of the tools it defines when used with per-interface or per-node 810 MEPs, if necessary. 812 It may occur that multiple MEPs for the same MEG are on the same 813 node, and are all Up MEPs, each on one side of the forwarding 814 engine, such that the MEG is entirely internal to the node. 816 It should be noted that a ME may span nodes that implement 817 per-node MEPs and per-interface MEPs. This guarantees backward 818 compatibility with most of the existing LSRs that can implement 819 only a per-node MEP since, in current implementations, label 820 operations are largely performed on the ingress interface; 821 hence, the exposure of the GAL as top label will occur at the 822 ingress interface. 824 Note that a MEP can only exist at the beginning and end of a 825 (sub-)layer in MPLS-TP. If there is a need to monitor some 826 portion of that LSP or PW, a new sub-layer in the form of an 827 SPME is created, which permits MEPs and associated MEGs to be 828 created. 830 In the case where an intermediate node sends a message to a MEP, 831 it uses the top label of the stack at that point. 833 3.4. MEG Intermediate Points (MIPs) 835 A MEG Intermediate Point (MIP) is a function located at a point 836 between the MEPs of a MEG for a PW, LSP or SPME. 838 A MIP is capable of reacting to some OAM packets and forwarding all 839 the other OAM packets while ensuring fate sharing with data plane 840 packets. However, a MIP does not initiate unsolicited OAM packets, 841 but may be addressed by OAM packets initiated by one of the MEPs of 842 the MEG.
A MIP can generate OAM packets only in response to OAM 843 packets that it receives from the MEG it belongs to. The OAM messages 844 generated by the MIP are sent to the originating MEP. 846 An intermediate node within a MEG can either: 848 o Support a per-node MIP (i.e. a single MIP per node in an 849 unspecified location within the node); 851 o Support per-interface MIPs (i.e. two or more MIPs per node on 852 both sides of the forwarding engine). 854 Intermediate node 855 ------------------------ 856 | | 857 |----- -----| 858 | MIP | | MIP | 859 | | ---- | | 860 ->-| In |->-| FW |->-| Out |->- 861 | i/f | ---- | i/f | 862 |----- -----| 863 | | 864 ------------------------ 865 Figure 4 Example of per-interface MIPs 867 Figure 4 describes an example of two per-interface MIPs at an 868 intermediate node of a point-to-point MEG. 870 The usage of per-interface MIPs allows failures or performance 871 degradation to be isolated as being within a node or within the 872 link or interfaces. 874 When sending an OAM packet to a MIP, the source MEP should set 875 the TTL field to indicate the number of hops necessary to reach 876 the node where the MIP resides. 878 The source MEP should also include Target MIP information in the 879 OAM packets sent to a MIP to allow proper identification of the 880 MIP within the node. The MEG the OAM packet is associated with 881 is inferred from the MPLS label. 883 The use of TTL expiry to deliver OAM packets to a specific MIP 884 is not a fully reliable delivery mechanism because the TTL 885 distance of a MIP from a MEP can change. Any MPLS-TP node 886 silently discards any OAM packet received with an expired TTL 887 that is not addressed to any of its MIPs or MEPs. An 888 MPLS-TP node that does not support OAM is also expected to 889 silently discard any received OAM packet. 891 Messages directed to a MIP may not necessarily carry specific 892 MIP identification information beyond that of TTL distance.
In 893 this case a MIP would promiscuously respond to all MEP queries 894 with the correct MEG. This capability could be used for 895 discovery functions (e.g., route tracing as defined in section 896 6.4) or when it is desirable to leave to the originating MEP the 897 job of correlating TTL and MIP identifiers and noting changes or 898 irregularities (via comparison with information previously 899 extracted from the network). 901 MIPs are associated with the MEG they belong to and their identity 902 is unique within the MEG. However, their identity is not 903 necessarily unique to the MEG: e.g. all nodal MIPs in a node can 904 have a common identity. 906 A node at the edge of a MEG can also support per-interface Up 907 MEPs and per-interface MIPs on either side of the forwarding 908 engine. 910 Once a MEG is configured, the operator can enable/disable the 911 MIPs on the nodes within the MEG. All the intermediate nodes and 912 possibly the end nodes host MIP(s). Local policy allows them to 913 be enabled per function and per MEG. The local policy is 914 controlled by the management system, which may delegate it to 915 the control plane. A disabled MIP silently discards any received 916 OAM packets. 918 3.5. Server MEPs 920 A server MEP is a MEP of a MEG that is either: 922 o Defined in a layer network that is "below", which is to say it 923 encapsulates and transports the MPLS-TP layer network being 924 referenced, or 926 o Defined in a sub-layer of the MPLS-TP layer network that is 927 "below", which is to say it encapsulates and transports the 928 sub-layer being referenced. 930 A server MEP can coincide with a MIP or a MEP in the client 931 (MPLS-TP) (sub-)layer network. 933 A server MEP also provides server layer OAM indications to the 934 client/server adaptation function between the client (MPLS-TP) 935 (sub-)layer network and the server (sub-)layer network.
The 936 adaptation function maintains state on the mapping of MPLS-TP 937 transport paths that are set up over that server (sub-)layer's 938 transport path. 940 For example, a server MEP can be either: 942 o A termination point of a physical link (e.g. 802.3), an SDH 943 VC or OTN ODU, for the MPLS-TP Section layer network, defined 944 in section 4.1; 946 o An MPLS-TP Section MEP for MPLS-TP LSPs, defined in section 947 4.2; 949 o An MPLS-TP LSP MEP for MPLS-TP PWs, defined in section 4.3; 951 o An MPLS-TP SPME MEP used for LSP path segment monitoring, as 952 defined in section 4.4, for MPLS-TP LSPs or higher-level 953 SPMEs providing LSP path segment monitoring; 955 o An MPLS-TP SPME MEP used for PW path segment monitoring, as 956 defined in section 4.5, for MPLS-TP PWs or higher-level SPMEs 957 providing PW path segment monitoring. 959 The server MEP can run appropriate OAM functions for fault detection 960 within the server (sub-)layer network, and provide a fault 961 indication to its client MPLS-TP layer network via the client/server 962 adaptation function. When the server layer is not MPLS-TP, server MEP 963 OAM functions are outside the scope of this document. 965 3.6. Configuration Considerations 967 When a control plane is not present, the management plane configures 968 these functional components. Otherwise, they can be configured either 969 by the management plane or by the control plane. 971 Local policy allows disabling the usage of any available 972 "out-of-band" return path, as defined in [8], irrespective of what is 973 requested by the node originating the OAM packet. 975 SPMEs are usually instantiated when the transport path is 976 created by either the management plane or by the control plane 977 (if present). Sometimes an SPME can be instantiated after the 978 transport path is initially created. 980 3.7.
P2MP considerations 982 All the traffic sent over a p2mp transport path, including OAM 983 packets generated by a MEP, is sent (multicast) from the root to 984 all the leaves. As a consequence: 986 o To send an OAM packet to all leaves, the source MEP can 987 send a single OAM packet that will be delivered by the 988 forwarding plane to all the leaves and processed by all the 989 leaves. Hence a single OAM packet can simultaneously 990 instrument all the MEs in a p2mp MEG. 992 o To send an OAM packet to a single leaf, the source MEP 993 sends a single OAM packet that will be delivered by the 994 forwarding plane to all the leaves but contains sufficient 995 information to identify a target leaf, and therefore is 996 processed only by the target leaf and ignored by the other 997 leaves. 999 o To send an OAM packet to a single MIP, the source MEP sends 1000 a single OAM packet with the TTL field indicating the 1001 number of hops necessary to reach the node where the MIP 1002 resides. This packet will be delivered by the forwarding 1003 plane to all intermediate nodes at the same TTL distance as 1004 the target MIP and to any leaf that is located at a shorter 1005 distance. The OAM message must contain sufficient 1006 information to identify the target MIP and therefore is 1007 processed only by the target MIP. 1009 o In order to send an OAM packet to M leaves (i.e., a subset 1010 of all the leaves), the source MEP sends M different OAM 1011 packets, one targeted to each individual leaf in the group of 1012 M leaves. Aggregation or sub-setting mechanisms are outside 1013 the scope of this document. 1015 A bud node with a Down MEP or a per-node MEP will both terminate 1016 and relay OAM packets. Similar to how fault coverage is 1017 maximized by the explicit utilization of Up MEPs, the same is 1018 true for MEPs on a bud node. 1020 P2MP paths are unidirectional; therefore any return path to an 1021 originating MEP for on-demand transactions will be out-of-band.
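The four p2mp targeting cases above can be sketched as follows. The packet-field names and the default TTL of 255 are illustrative assumptions of this sketch, not values mandated by this framework.

```python
def p2mp_oam_packets(target, ttl_to_mip_node=None, max_ttl=255):
    """Build the header fields of the OAM packet(s) a root MEP sends
    on a p2mp path (sketch).  Every packet is multicast to all
    leaves, so selection is done by packet content: no target id
    addresses all leaves; a leaf id is ignored by the other leaves;
    a MIP is reached by TTL expiry plus a target MIP id (several
    nodes may sit at the same TTL distance); and a subset of M
    leaves costs M packets, one per leaf.
    """
    if target == ("all-leaves",):
        return [{"ttl": max_ttl, "target": None}]
    kind = target[0]
    if kind == "leaf":
        return [{"ttl": max_ttl, "target": target[1]}]
    if kind == "mip":
        if ttl_to_mip_node is None:
            raise ValueError("MIP targeting requires the TTL distance")
        return [{"ttl": ttl_to_mip_node, "target": target[1]}]
    if kind == "leaves":  # a subset of M leaves -> M packets
        return [{"ttl": max_ttl, "target": leaf} for leaf in target[1]]
    raise ValueError(f"unknown target kind: {kind}")
```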
1022 A mechanism to target "on-demand" transactions to a single MEP 1023 or MIP is required as it relieves the originating MEP of an 1024 arbitrarily large processing load and of the requirement to 1025 filter and discard undesired responses as normally TTL 1026 exhaustion will address all MIPs at a given distance from the 1027 source, and failure to exhaust TTL will address all MEPs. 1029 3.8. Further considerations of enhanced segment monitoring 1031 Segment monitoring, like any in-service monitoring, in a 1032 transport network should meet the following network objectives: 1034 1. The monitoring and maintenance of existing transport paths have to 1035 be conducted in service without traffic disruption. 1037 2. Segment monitoring must not modify the forwarding of the segment 1038 portion of the transport path. 1040 SPMEs defined in section 3.2 meet the above two objectives, when 1041 they are pre-configured or pre-instantiated as exemplified in 1042 section 3.6. However, pre-design and pre-configuration of all 1043 the considered patterns of SPMEs are sometimes not preferable in 1044 real operation due to the burden of design work, additional 1045 label stack headers, bandwidth consumption and so on. 1047 When SPMEs are configured or instantiated after the transport 1048 path has been created, network objective (1) can be met: 1049 application and removal of an SPME on a faultless monitored 1050 transport entity can be performed in such a way as not to 1051 introduce any loss of traffic, e.g., by using a non-disruptive 1052 "make-before-break" technique. 1054 However, network objective (2) cannot be met due to the new 1055 assignment of MPLS labels. As a consequence, generally speaking, 1056 the results of SPME monitoring are not necessarily correlated 1057 with the behaviour of traffic in the monitored entity when it 1058 does not use SPME.
For example, application of an SPME to a 1059 problematic/faulty monitored entity might "fix" the problem 1060 encountered by the latter for as long as the SPME is applied. 1061 Vice versa, application of an SPME to a faultless monitored entity 1062 may make it faulty, again for as long as the SPME is applied. 1065 Support for a more sophisticated segment monitoring mechanism 1066 (temporal and hitless segment monitoring) to efficiently meet 1067 the two network objectives may be necessary. 1069 One possible option to instantiate non-intrusive segment 1070 monitoring without the use of SPMEs would require the MIPs 1071 selected as monitoring endpoints to implement enhanced 1072 functionality and state for the monitored transport path. 1074 For example the MIPs need to be configured with the TTL distance 1075 to the peer or with the address of the peer, when out-of-band 1076 return paths are used. 1078 A further issue that would need to be considered is events that 1079 result in changing the TTL distance to the peer monitoring 1080 entity, such as protection events, that may temporarily invalidate 1081 OAM information gleaned from the use of this technique. 1083 Further considerations on this technique are outside the scope 1084 of this document. 1086 4. Reference Model 1088 The reference model for the MPLS-TP framework builds upon the 1089 concept of a MEG, and its associated MEPs and MIPs, to support 1090 the functional requirements specified in RFC 5860 [11]. 1092 The following MPLS-TP MEGs are specified in this document: 1094 o A Section Maintenance Entity Group (SMEG), allowing 1095 monitoring and management of MPLS-TP Sections (between MPLS 1096 LSRs). 1098 o An LSP Maintenance Entity Group (LMEG), allowing monitoring 1099 and management of an end-to-end LSP (between LERs). 1101 o A PW Maintenance Entity Group (PMEG), allowing monitoring and 1102 management of an end-to-end SS-PW or MS-PW (between T-PEs).
1104 o An LSP SPME ME Group (LSMEG), allowing monitoring and 1105 management of an SPME (between a given pair of LERs and/or 1106 LSRs along an LSP). 1108 o A PW SPME ME Group (PSMEG), allowing monitoring and 1109 management of an SPME (between a given pair of T-PEs and/or 1110 S-PEs along an (MS-)PW). 1112 The MEGs specified in this MPLS-TP OAM framework are compliant 1113 with the architecture framework for MPLS-TP [8] that includes 1114 both MS-PWs [4] and LSPs [1]. 1116 Hierarchical LSPs are also supported in the form of SPMEs. In 1117 this case, each LSP in the hierarchy is a different sub-layer 1118 network that can be monitored, independently from higher and 1119 lower level LSPs in the hierarchy, on an end-to-end basis (from 1120 LER to LER) by an SPME. It is possible to monitor a portion of a 1121 hierarchical LSP by instantiating a hierarchical SPME between 1122 any LERs/LSRs along the hierarchical LSP. 1124 Native |<------------------ MS-PW1Z ---------------->| Native 1125 Layer | | Layer 1126 Service | || |<-LSP3X->| || | Service 1127 (AC1) V V V V V V V V (AC2) 1128 +----+ +---+ +----+ +----+ +---+ +----+ 1129 +----+ |T-PE| |LSR| |S-PE| |S-PE| |LSR| |T-PE| +----+ 1130 | | | |=======| |=========| |=======| | | | 1131 | CE1|--|.......PW13......|...PW3X..|......PWXZ.......|---|CE2 | 1132 | | | |=======| |=========| |=======| | | | 1133 +----+ | 1 | | 2 | | 3 | | X | | Y | | Z | +----+ 1134 +----+ +---+ +----+ +----+ +---+ +----+ 1135 . . . . 1136 | | | | 1137 |<--- Domain 1 -->| |<--- Domain Z -->| 1138 ^----------------- PW1Z PMEG ----------------^ 1139 ^--- PW13 PSMEG---^ ^--- PWXZ PSMEG---^ 1140 ^-------^ ^-------^ 1141 LSP13 LMEG LSPXZ LMEG 1142 ^--^ ^--^ ^---------^ ^--^ ^--^ 1143 Sec12 Sec23 Sec3X SecXY SecYZ 1144 SMEG SMEG SMEG SMEG SMEG 1146 ^---^ ME 1147 ^ MEP 1148 ==== LSP 1149 ....
PW 1151 T-PE1: Terminating Provider Edge 1 1152 LSR2: Label Switching Router 2 1153 S-PE3: Switching Provider Edge 3 1154 S-PEX: Switching Provider Edge X 1155 LSRY: Label Switching Router Y 1156 T-PEZ: Terminating Provider Edge Z 1158 Figure 5 Reference Model for the MPLS-TP OAM Framework 1160 Figure 5 depicts a high-level reference model for the MPLS-TP 1161 OAM framework. The figure depicts portions of two MPLS-TP 1162 enabled network domains, Domain 1 and Domain Z. In Domain 1, 1163 LSR1 is adjacent to LSR2 via the MPLS-TP Section Sec12 and LSR2 1164 is adjacent to LSR3 via the MPLS-TP Section Sec23. Similarly, in 1165 Domain Z, LSRX is adjacent to LSRY via the MPLS-TP Section SecXY 1166 and LSRY is adjacent to LSRZ via the MPLS-TP Section SecYZ. In 1167 addition, LSR3 is adjacent to LSRX via the MPLS-TP Section 3X. 1169 Figure 5 also shows a bi-directional MS-PW (PW1Z) between AC1 on 1170 T-PE1 and AC2 on T-PEZ. The MS-PW consists of three 1171 bi-directional PW path segments: 1) the PW13 path segment between 1172 T-PE1 and S-PE3 via the bi-directional LSP13 LSP, 2) the PW3X path 1173 segment between S-PE3 and S-PEX, via the bi-directional LSP3X 1174 LSP, and 3) the PWXZ path segment between S-PEX and T-PEZ via the 1175 bi-directional LSPXZ LSP. 1177 The MPLS-TP OAM procedures that apply to a MEG are expected to 1178 operate independently from procedures on other MEGs. Yet, this 1179 does not preclude that multiple MEGs may be affected 1180 simultaneously by the same network condition, for example, a 1181 fiber cut event. 1183 Note that there are no constraints imposed by this OAM framework 1184 on the number, or type (p2p, p2mp, LSP or PW), of MEGs that may 1185 be instantiated on a particular node. In particular, when 1186 looking at Figure 5, it should be possible to configure one or 1187 more MEPs on the same node if that node is the endpoint of one 1188 or more MEGs.
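The "nested but not overlapped" property of section 3.2, as exhibited by the MEGs of Figure 5, can be checked with a simple interval test. The encoding below (nodes 1, 2, 3, X, Y, Z mapped to positions 0..5) is our own illustration, not part of the framework.

```python
def megs_properly_nested(megs):
    """Check the 'nested but not overlapped' rule for MEGs, each
    given as (name, start, end) over positions along a path.  Two
    MEGs may be disjoint, share an endpoint, or one may contain the
    other; a partial overlap is the forbidden configuration.
    """
    for i, (_, a1, b1) in enumerate(megs):
        for _, a2, b2 in megs[i + 1:]:
            disjoint = b1 <= a2 or b2 <= a1
            nested = (a1 <= a2 and b2 <= b1) or (a2 <= a1 and b1 <= b2)
            if not (disjoint or nested):
                return False
    return True
```

Applied to Figure 5, every SMEG, LMEG, PSMEG and the end-to-end PMEG either nests inside or sits disjoint from every other MEG, so the check passes; introducing an SPME that straddled a PSMEG boundary would fail it.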
1190 Figure 5 does not describe a PW3X PSMEG because typically SPMEs 1191 are used to monitor an OAM domain (like the PW13 and PWXZ PSMEGs) 1192 rather than the segment between two OAM domains. However, the OAM 1193 framework does not pose any constraints on the way SPMEs are 1194 instantiated as long as they are not overlapping. 1196 The subsections below define the MEGs specified in this MPLS-TP 1197 OAM architecture framework document. Unless otherwise stated, 1198 all references to domains, LSRs, MPLS-TP Sections, LSPs, 1199 pseudowires and MEGs in this section are made in relation to 1200 those shown in Figure 5. 1202 4.1. MPLS-TP Section Monitoring (SMEG) 1204 An MPLS-TP Section MEG (SMEG) is an MPLS-TP maintenance entity 1205 intended to monitor an MPLS-TP Section as defined in RFC 5654 1206 [5]. An SMEG may be configured on any MPLS-TP Section. SMEG OAM 1207 packets must fate share with the user data packets sent over the 1208 monitored MPLS-TP Section. 1210 An SMEG is intended to be deployed for applications where it is 1211 preferable to monitor the link between topologically adjacent 1212 (next hop in this layer network) MPLS-TP LSRs rather than 1213 monitoring the individual LSP or PW path segments traversing the 1214 MPLS-TP Section, and where the server layer technology does not 1215 provide adequate OAM capabilities. 1217 Figure 5 shows five Section MEGs configured in the network 1218 between AC1 and AC2: 1220 1. Sec12 MEG associated with the MPLS-TP Section between LSR 1 1221 and LSR 2, 1223 2. Sec23 MEG associated with the MPLS-TP Section between LSR 2 1224 and LSR 3, 1226 3. Sec3X MEG associated with the MPLS-TP Section between LSR 3 1227 and LSR X, 1229 4. SecXY MEG associated with the MPLS-TP Section between LSR X 1230 and LSR Y, and 1232 5. SecYZ MEG associated with the MPLS-TP Section between LSR Y 1233 and LSR Z. 1235 4.2.
MPLS-TP LSP End-to-End Monitoring Group (LMEG) 1237 An MPLS-TP LSP MEG (LMEG) is an MPLS-TP maintenance entity group 1238 intended to monitor an end-to-end LSP between its LERs. An LMEG 1239 may be configured on any MPLS LSP. LMEG OAM packets must fate 1240 share with user data packets sent over the monitored MPLS-TP 1241 LSP. 1243 An LMEG is intended to be deployed in scenarios where it is 1244 desirable to monitor an entire LSP between its LERs, rather 1245 than, say, monitoring individual PWs. 1247 Figure 5 depicts two LMEGs configured in the network between AC1 1248 and AC2: 1) the LSP13 LMEG between LER 1 and LER 3, and 2) the 1249 LSPXZ LMEG between LER X and LER Z. Note that the presence of an 1250 LSP3X LMEG in such a configuration is optional, hence, not 1251 precluded by this framework. For instance, the service providers 1252 may prefer to monitor the MPLS-TP Section between the two LSRs 1253 rather than the individual LSPs. 1255 4.3. MPLS-TP PW Monitoring (PMEG) 1257 An MPLS-TP PW MEG (PMEG) is an MPLS-TP maintenance entity 1258 intended to monitor an SS-PW or MS-PW between its T-PEs. A PMEG 1259 can be configured on any SS-PW or MS-PW. PMEG OAM packets must 1260 fate share with the user data packets sent over the monitored 1261 PW. 1263 A PMEG is intended to be deployed in scenarios where it is 1264 desirable to monitor an entire PW between a pair of MPLS-TP 1265 enabled T-PEs rather than monitoring the LSP aggregating 1266 multiple PWs between PEs. 1268 Figure 5 depicts an MS-PW (MS-PW1Z) consisting of three path 1269 segments: PW13, PW3X and PWXZ and its associated end-to-end PMEG 1270 (PW1Z PMEG). 1272 4.4. MPLS-TP LSP SPME Monitoring (LSMEG) 1274 An MPLS-TP LSP SPME MEG (LSMEG) is an MPLS-TP SPME with an 1275 associated maintenance entity group intended to monitor an 1276 arbitrary part of an LSP between the MEPs instantiated for the 1277 SPME independently of the end-to-end monitoring (LMEG).
An LSMEG 1278 can monitor an LSP path segment and it may also include the 1279 forwarding engine(s) of the node(s) at the edge(s) of the path 1280 segment. 1282 When an SPME is established between non-adjacent LSRs, the edges of 1283 the SPME become adjacent at the LSP sub-layer network and any 1284 LSR that was previously in between becomes an LSR for the SPME. 1286 Multiple hierarchical LSMEGs can be configured on any LSP. LSMEG 1287 OAM packets must fate share with the user data packets sent over 1288 the monitored LSP path segment. 1290 An LSMEG can be defined between the following entities: 1292 o The LER and any LSR of a given LSP. 1294 o Any two LSRs of a given LSP. 1296 An LSMEG is intended to be deployed in scenarios where it is 1297 preferable to monitor the behavior of a part of an LSP or set of 1298 LSPs rather than the entire LSP itself, for example, when there 1299 is a need to monitor a part of an LSP that extends beyond the 1300 administrative boundaries of an MPLS-TP enabled administrative 1301 domain. 1303 |<-------------------- PW1Z ------------------->| 1304 | | 1305 | |<-------------LSP1Z LSP------------->| | 1306 | |<-LSP13->| || |<-LSPXZ->| | 1307 V V V V V V V V 1308 +----+ +---+ +----+ +----+ +---+ +----+ 1309 +----+ | PE | |LSR| |DBN | |DBN | |LSR| | PE | +----+ 1310 | |AC1| |=====================================| |AC2| | 1311 | CE1|---|.....................PW1Z......................|---|CE2 | 1312 | | | |=====================================| | | | 1313 +----+ | 1 | | 2 | | 3 | | X | | Y | | Z | +----+ 1314 +----+ +---+ +----+ +----+ +---+ +----+ 1315 . . . . 1316 | | | | 1317 |<---- Domain 1 --->| |<---- Domain Z --->| 1319 ^---------^ ^---------^ 1320 LSP13 LSMEG LSPXZ LSMEG 1321 ^-------------------------------------^ 1322 LSP1Z LMEG 1324 DBN: Domain Border Node 1326 Figure 6 MPLS-TP LSP SPME MEG (LSMEG) 1328 Figure 6 depicts a variation of the reference model in Figure 5 1329 where there is an end-to-end LSP (LSP1Z) between PE1 and PEZ.
1330 LSP1Z consists of, at least, three LSP Concatenated Segments: 1331 LSP13, LSP3X and LSPXZ. In this scenario there are two separate 1332 LSMEGs configured to monitor the LSP1Z: 1) an LSMEG monitoring 1333 the LSP13 Concatenated Segment on Domain 1 (LSP13 LSMEG), and 2) 1334 an LSMEG monitoring the LSPXZ Concatenated Segment on Domain Z 1335 (LSPXZ LSMEG). 1337 It is worth noticing that LSMEGs can coexist with the LMEG 1338 monitoring the end-to-end LSP and that LSMEG MEPs and LMEG MEPs 1339 can be coincident in the same node (e.g. the PE1 node supports both 1340 the LSP1Z LMEG MEP and the LSP13 LSMEG MEP). 1342 4.5. MPLS-TP MS-PW SPME Monitoring (PSMEG) 1344 An MPLS-TP MS-PW SPME Monitoring MEG (PSMEG) is an MPLS-TP SPME 1345 with an associated maintenance entity group intended to monitor 1346 an arbitrary part of an MS-PW between the MEPs instantiated for 1347 the SPME independently of the end-to-end monitoring (PMEG). A 1348 PSMEG can monitor a PW path segment and it may also include the 1349 forwarding engine(s) of the node(s) at the edge(s) of the path 1350 segment. A PSMEG is no different from an SPME; it is simply 1351 named as such to discuss SPMEs specifically in a PW context. 1353 When an SPME is established between non-adjacent S-PEs, the edges 1354 of the SPME become adjacent at the MS-PW sub-layer network and 1355 any S-PE that was previously in between becomes an LSR for the 1356 SPME. 1358 S-PE placement is typically dictated by considerations other 1359 than OAM. S-PEs will frequently reside at operational boundaries 1360 such as the transition from distributed control plane (CP) to 1361 centralized Network Management System (NMS) control or at a 1362 routing area boundary. As such, the architecture would appear not 1363 to have the flexibility that arbitrary placement of SPME 1364 segments would imply. Support for an arbitrary placement of 1365 PSMEGs would require the definition of additional PW 1366 sub-layering.
1367 Multiple hierarchical PSMEGs can be configured on any MS-PW. 1368 PSMEG OAM packets fate share with the user data packets sent 1369 over the monitored PW path segment. 1371 A PSMEG does not add hierarchical components to the MPLS 1372 architecture; it defines the role of existing components for the 1373 purposes of discussing OAM functionality. 1375 A PSMEG can be defined between the following entities: 1377 o A T-PE and any S-PE of a given MS-PW. 1379 o Any two S-PEs of a given MS-PW. 1381 Note that, in line with the SPME description in section 3.2, when a 1382 PW SPME is instantiated after the MS-PW has been instantiated, the 1383 TTL distance of the MIPs may change and MIPs in the nested MEG are no 1384 longer part of the encompassing MEG. This means that the S-PE nodes 1385 hosting these MIPs are no longer S-PEs but P nodes at the SPME LSP 1386 level. The consequences are that the S-PEs hosting the PSMEG MEPs 1387 become adjacent S-PEs. This is no different than the operation of 1388 SPMEs in general. 1390 A PSMEG is intended to be deployed in scenarios where it is 1391 preferable to monitor the behavior of a part of an MS-PW rather 1392 than the entire end-to-end PW itself, for example to monitor an 1393 MS-PW path segment within a given network domain of an 1394 inter-domain MS-PW. 1396 Figure 5 depicts an MS-PW (MS-PW1Z) consisting of three path 1397 segments: PW13, PW3X and PWXZ with two separate PSMEGs: 1) a 1398 PSMEG monitoring the PW13 MS-PW path segment on Domain 1 (PW13 1399 PSMEG), and 2) a PSMEG monitoring the PWXZ MS-PW path segment on 1400 Domain Z (PWXZ PSMEG). 1402 It is worth noticing that PSMEGs can coexist with the PMEG 1403 monitoring the end-to-end MS-PW and that PSMEG MEPs and PMEG 1404 MEPs can be coincident in the same node (e.g. the T-PE1 node 1405 supports both the PW1Z PMEG MEP and the PW13 PSMEG MEP). 1407 4.6.
Fate sharing considerations for multilink 1409 Multilink techniques are in use today and are expected to 1410 continue to be used in future deployments. These techniques 1411 include Ethernet Link Aggregation [21] and the use of Link 1412 Bundling for MPLS [17], where the option to spread traffic over 1413 component links is supported and enabled. While the use of Link 1414 Bundling can be controlled at the MPLS-TP layer, use of Link 1415 Aggregation (or any server layer specific multilink) is not 1416 necessarily under control of the MPLS-TP layer. Other techniques 1417 may emerge in the future. These techniques share the 1418 characteristic that an LSP may be spread over a set of component 1419 links and therefore be reordered, but no flow within the LSP is 1420 reordered (except when very infrequent and minimally disruptive 1421 load rebalancing occurs). 1423 The use of multilink techniques may be prohibited or permitted 1424 in any particular deployment. If multilink techniques are used, 1425 the deployment can be considered to be only partially MPLS-TP 1426 compliant; however, this is unlikely to prevent its use. 1428 The implication for OAM is that not all components of a 1429 multilink will be exercised; independent server layer OAM is 1430 required to exercise the aggregated link components. This has 1431 further implications for MIP and MEP placement, as per-interface 1432 MIPs or Down MEPs on a multilink interface are akin to a layer 1433 violation, as they instrument at the granularity of the server 1434 layer. The implications for reduced OAM loss measurement 1435 functionality are documented in sections 5.5.3 and 6.2.3. 1437 5. OAM Functions for proactive monitoring 1439 In this document, proactive monitoring refers to OAM operations 1440 that are either configured to be carried out periodically and 1441 continuously or preconfigured to act on certain events such as 1442 alarm signals. 1444 Proactive monitoring is usually performed "in-service".
Such transactions are universally MEP to MEP in operation, while
notifications can be node to node (e.g. some MS-PW transactions)
or node to MEPs (e.g., AIS). The control and measurement
considerations are:

1. Proactive monitoring for a MEG is typically configured at
   transport path creation time.

2. The operational characteristics of in-band measurement
   transactions (e.g., CV, Loss Measurement (LM), etc.) are
   configured at the MEPs.

3. Server layer events are reported by OAM messages originating
   at intermediate nodes.

4. The measurements resulting from proactive monitoring are
   typically reported outside of the MEG (e.g. to a management
   system) as notification events such as faults or indications
   of performance degradation (such as excessive packet loss).

5. The measurements resulting from proactive monitoring may be
   periodically harvested by an NMS.

For statically provisioned transport paths the above information
is statically configured; for dynamically established transport
paths the configuration information is signaled via the control
plane or configured via the management plane.

The operator may enable/disable some of the consequent actions
defined in section 5.1.2.

5.1. Continuity Check and Connectivity Verification

Proactive Continuity Check functions, as required in section
2.2.2 of RFC 5860 [11], are used to detect a loss of continuity
(LOC) defect between two MEPs in a MEG.

Proactive Connectivity Verification functions, as required in
section 2.2.3 of RFC 5860 [11], are used to detect an unexpected
connectivity defect between two MEGs (e.g. mismerging or
misconnection), as well as unexpected connectivity within the
MEG with an unexpected MEP.

Both functions are based on the (proactive) generation of OAM
packets by the source MEP that are processed by the peer sink
MEP(s).
As a consequence, these two functions are grouped
together into Continuity Check and Connectivity Verification
(CC-V) OAM packets.

In order to perform pro-active Connectivity Verification, each
CC-V OAM packet also includes a globally unique Source MEP
identifier. When used to perform only pro-active Continuity
Check, the CC-V OAM packet does not include any globally unique
Source MEP identifier. Different formats of MEP identifiers are
defined in [10] to address different environments. When MPLS-TP
is deployed in transport network environments where IP
addressing is not used in the forwarding plane, the ITU Carrier
Code (ICC)-based format for MEP identification is used. When
MPLS-TP is deployed in an IP-based environment, the IP-based MEP
identification is used.

As a consequence, it is not possible to detect misconnections
between two MEGs monitored only for continuity, as neither the
OAM message type nor the OAM message content provides sufficient
information to disambiguate an invalid source. To expand:

o For CC leaking into a CC monitored MEG - undetectable.

o For CV leaking into a CC monitored MEG - the presence of the
  additional Source MEP identifier allows the fault to be
  detected.

o For CC leaking into a CV monitored MEG - the lack of the
  additional Source MEP identifier allows the fault to be
  detected.

o For CV leaking into a CV monitored MEG - the different Source
  MEP identifier permits the fault to be identified.

CC-V OAM packets are transmitted at a regular, operator
configurable, rate. The default CC-V transmission periods are
application dependent (see section 5.1.3).

Proactive CC-V OAM packets are transmitted with the "minimum
loss probability PHB" within the transport path (LSP, PW) they
are monitoring. For E-LSPs, this PHB is configurable on a
network operator's basis, while for L-LSPs this is determined as
per RFC 3270 [22].
PHBs can be translated at the network borders by the
same function that translates them for user data traffic. The
implication is that CC-V fate shares with much of the forwarding
implementation, but not all aspects of PHB processing are
exercised. Either on-demand tools are used for finer grained
fault finding, or an implementation may utilize a CC-V flow per
PHB to ensure that a CC-V flow fate shares with each individual
PHB.

In a co-routed or associated bidirectional point-to-point
transport path, when a MEP is enabled to generate pro-active
CC-V OAM packets with a configured transmission rate, it also
expects to receive pro-active CC-V OAM packets from its peer MEP
at the same transmission rate, as a common SLA applies to all
components of the transport path. In a unidirectional transport
path (either point-to-point or point-to-multipoint), the source
MEP is enabled only to generate CC-V OAM packets, while each
sink MEP is configured to expect these packets at the configured
rate.

MIPs, as well as intermediate nodes not supporting MPLS-TP OAM,
are transparent to the pro-active CC-V information and forward
these pro-active CC-V OAM packets as regular data packets.

During path setup and tear down, situations arise where CC-V
checks would give rise to alarms, as the path is not fully
instantiated. In order to avoid these spurious alarms the
following procedures are recommended. At initialization, the
source MEP function (generating pro-active CC-V packets) should
be enabled prior to the corresponding sink MEP function
(detecting continuity and connectivity defects). When disabling
the CC-V proactive functionality, the sink MEP function should
be disabled prior to the corresponding source MEP function.
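The recommended activation ordering can be sketched as follows; the `Mep` class and function names are illustrative, not part of any MPLS-TP specification:

```python
# Sketch of the recommended CC-V activation/deactivation ordering.
# The Mep object and its flags are hypothetical; the point is only
# the relative ordering of source and sink enablement.

class Mep:
    def __init__(self):
        self.source_enabled = False  # generates pro-active CC-V packets
        self.sink_enabled = False    # detects continuity/connectivity defects

def enable_ccv(source_mep, sink_mep):
    # Enable the source first so that CC-V packets are already flowing
    # by the time the sink starts checking (no spurious LOC alarm).
    source_mep.source_enabled = True
    sink_mep.sink_enabled = True

def disable_ccv(source_mep, sink_mep):
    # Disable the sink first so it stops checking before the CC-V
    # packet stream dries up.
    sink_mep.sink_enabled = False
    source_mep.source_enabled = False
```

The same ordering applies per direction of a bidirectional transport path, each MEP playing the source role for one direction and the sink role for the other.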
It should be noted that different encapsulations are possible
for CC-V packets and therefore it is possible that, in case of
mis-configuration or mis-connectivity, CC-V packets are
received with an unexpected encapsulation.

There are practical limitations to detecting unexpected
encapsulation. It is possible that there are mis-configuration
or mis-connectivity scenarios where OAM packets can alias as
payload, e.g., when a transport path can carry an arbitrary
payload without a pseudowire.

When CC-V packets are received with an unexpected encapsulation
that can be parsed by the sink MEP, the CC-V packet is processed
as if it were received with the correct encapsulation; if it is
not a manifestation of a mis-connectivity defect, a warning is
raised (see section 5.1.1.4). Otherwise, the CC-V packet may be
silently discarded as unrecognized and a LOC defect may be
detected (see section 5.1.1.1).

The defect conditions are described in no specific order.

5.1.1. Defects identified by CC-V

Pro-active CC-V functions allow a sink MEP to detect the defect
conditions described in the following sub-sections. For all of
the described defect cases, the sink MEP should notify the
equipment fault management process of the detected defect.

5.1.1.1. Loss Of Continuity defect

When proactive CC-V is enabled, a sink MEP detects a loss of
continuity (LOC) defect when it fails to receive pro-active CC-V
OAM packets from the source MEP.

o Entry criteria: No pro-active CC-V OAM packets from the
  source MEP (and, in the case of CV, this includes the
  requirement to have the expected globally unique Source MEP
  identifier) are received within an interval equal to 3.5
  times the receiving MEP's configured CC-V reception period.
o Exit criteria: A pro-active CC-V OAM packet from the source
  MEP (and again, in the case of CV, with the expected globally
  unique Source MEP identifier) is received.

5.1.1.2. Mis-connectivity defect

When a pro-active CC-V OAM packet is received, a sink MEP
identifies a mis-connectivity defect (e.g. mismerge,
misconnection or unintended looping) when the received packet
carries an unexpected globally unique Source MEP identifier.

o Entry criteria: The sink MEP receives a pro-active CC-V OAM
  packet with an unexpected globally unique Source MEP
  identifier or with an unexpected encapsulation.

o Exit criteria: The sink MEP does not receive any pro-active
  CC-V OAM packet with an unexpected globally unique Source MEP
  identifier for an interval equal at least to 3.5 times the
  longest transmission period of the pro-active CC-V OAM
  packets received with an unexpected globally unique Source
  MEP identifier since this defect has been raised. This
  requires the OAM message to self-identify the CC-V
  periodicity, as not all MEPs can be expected to have
  knowledge of all MEGs.

5.1.1.3. Period Misconfiguration defect

If pro-active CC-V OAM packets are received with the expected
globally unique Source MEP identifier but with a transmission
period different from the locally configured reception period,
then a CV period mis-configuration defect is detected.

o Entry criteria: A MEP receives a pro-active CC-V packet with
  the expected globally unique Source MEP identifier but with a
  Period field value different from its own configured CC-V
  transmission period.
o Exit criteria: The sink MEP does not receive any pro-active
  CC-V OAM packet with the expected globally unique Source MEP
  identifier and an incorrect transmission period for an
  interval equal at least to 3.5 times the longest transmission
  period of the pro-active CC-V OAM packets received with the
  expected globally unique Source MEP identifier and an
  incorrect transmission period since this defect has been
  raised.

5.1.1.4. Unexpected encapsulation defect

If pro-active CC-V OAM packets are received with the expected
globally unique Source MEP identifier but with an unexpected
encapsulation, then a CV unexpected encapsulation defect is
detected.

It should be noted that there are practical limitations to
detecting unexpected encapsulation (see section 5.1.1).

o Entry criteria: A MEP receives a pro-active CC-V packet with
  the expected globally unique Source MEP identifier but with
  an unexpected encapsulation.

o Exit criteria: The sink MEP does not receive any pro-active
  CC-V OAM packet with the expected globally unique Source MEP
  identifier and an unexpected encapsulation for an interval
  equal at least to 3.5 times the longest transmission period
  of the pro-active CC-V OAM packets received with the expected
  globally unique Source MEP identifier and an unexpected
  encapsulation since this defect has been raised.

5.1.2. Consequent action

A sink MEP that detects any of the defect conditions defined in
section 5.1.1 declares a defect condition and performs the
following consequent actions.

If a MEP detects a mis-connectivity defect, it blocks all the
traffic (including also the user data packets) that it receives
from the misconnected transport path.
If a MEP detects a LOC defect that is not caused by a period
mis-configuration, it should block all the traffic (including
also the user data packets) that it receives from the transport
path, if this consequent action has been enabled by the
operator.

It is worth noticing that the OAM requirements document [11]
recommends that CC-V proactive monitoring be enabled on every
MEG in order to reliably detect connectivity defects. However,
CC-V proactive monitoring can be disabled by an operator for a
MEG. In the event of a misconnection between a transport path
that is pro-actively monitored for CC-V and a transport path
which is not, the MEP of the former transport path will detect a
LOC defect representing a connectivity problem (e.g. a
misconnection with a transport path where CC-V proactive
monitoring is not enabled) instead of a continuity problem, with
consequent incorrect traffic delivery. For these reasons, the
traffic block consequent action is applied even when a LOC
condition occurs. This block consequent action can be disabled
through configuration. This deactivation of the block action may
be used for activating or deactivating the monitoring when it is
not possible to synchronize the function activation of the two
peer MEPs.

If a MEP detects a LOC defect (section 5.1.1.1) or a
mis-connectivity defect (section 5.1.1.2), it declares a signal
fail condition of the ME.

It is a matter of local policy whether a MEP that detects a
period misconfiguration defect (section 5.1.1.3) declares a
signal fail condition of the ME.

The detection of an unexpected encapsulation defect does not
have any consequent action: it is just a warning for the network
operator. An implementation able to detect an unexpected
encapsulation but not able to verify the source MEP ID may
choose to declare a mis-connectivity defect.
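The entry/exit criteria of section 5.1.1 and the blocking actions of section 5.1.2 can be sketched as a small state holder. All names are hypothetical, timestamps are in seconds, and the mis-connectivity exit criteria (3.5 times the longest unexpected period) are omitted for brevity:

```python
# Minimal sketch of CC-V LOC and mis-connectivity handling at a sink
# MEP. Not a wire format or a complete defect state machine.

class CcvSink:
    def __init__(self, expected_mep_id, period):
        self.expected_mep_id = expected_mep_id
        self.period = period            # configured CC-V reception period
        self.last_ok_rx = 0.0           # last packet from the expected MEP
        self.loc = False
        self.misconnectivity = False

    def on_packet(self, src_mep_id, now):
        if src_mep_id == self.expected_mep_id:
            self.last_ok_rx = now
            self.loc = False            # LOC exit criteria (5.1.1.1)
        else:
            self.misconnectivity = True # mis-connectivity entry (5.1.1.2)

    def on_timer(self, now):
        # LOC entry criteria: nothing from the expected source MEP for
        # 3.5 times the configured reception period.
        if now - self.last_ok_rx > 3.5 * self.period:
            self.loc = True

    def blocks_traffic(self):
        # Consequent action: always block on mis-connectivity; blocking
        # on LOC is assumed enabled by the operator in this sketch.
        return self.misconnectivity or self.loc
```

With a 1-second reception period, a silence of 4 seconds crosses the 3.5-period threshold and raises LOC, while a packet from an unexpected MEP raises the mis-connectivity defect immediately.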
5.1.3. Configuration considerations

At all MEPs inside a MEG, the following configuration
information needs to be configured when a proactive CC-V
function is enabled:

o MEG ID: the identifier of the MEG to which the MEP belongs;

o MEP-ID: the MEP's own identity inside the MEG;

o list of the other MEPs in the MEG. For a point-to-point MEG
  the list consists of the single MEP ID from which the OAM
  packets are expected. In case of the root MEP of a p2mp MEG,
  the list is composed of all the leaf MEP IDs inside the MEG.
  In case of a leaf MEP of a p2mp MEG, the list is composed of
  the root MEP ID (i.e. each leaf needs to know the root MEP ID
  from which it expects to receive the CC-V OAM packets);

o PHB for E-LSPs: it identifies the per-hop behavior of the
  CC-V packets. Proactive CC-V packets are transmitted with the
  "minimum loss probability PHB" previously configured within a
  single network operator. This PHB is configurable on a
  network operator's basis. PHBs can be translated at the
  network borders;

o transmission rate: the default CC-V transmission periods are
  application dependent (depending on whether they are used to
  support fault management, performance monitoring, or
  protection switching applications):

  o Fault Management: default transmission period is 1s (i.e.
    a transmission rate of 1 packet/second).

  o Performance Monitoring: default transmission period is
    100ms (i.e. a transmission rate of 10 packets/second).
    Performance monitoring is only relevant when the
    transport path is defect free. CC-V contributes to the
    accuracy of PM statistics by permitting the defect free
    periods to be properly distinguished.

  o Protection Switching: default transmission period is
    3.33ms (i.e. a transmission rate of 300 packets/second).
    CC-V defect entry criteria can resolve in less than 12ms,
    and a protection switch can complete within a subsequent
    period of 50ms. It is also possible to lengthen the
    transmission period to 10ms (i.e. a transmission rate of
    100 packets/second): in this case the CC-V defect entry
    criteria are reached later (i.e. after 35ms).

It should be possible for the operator to configure these
transmission rates for all applications, to satisfy its internal
requirements.

Note that the reception period is the same as the configured
transmission rate.

For management provisioned transport paths the above parameters
are statically configured; for dynamically signalled transport
paths the configuration information is distributed via the
control plane.

The operator should be able to enable/disable some of the
consequent actions. The consequent actions that can be
enabled/disabled are described in section 5.1.2.

5.2. Remote Defect Indication

The Remote Defect Indication (RDI) function, as required in
section 2.2.9 of RFC 5860 [11], is an indicator that is
transmitted by a sink MEP to communicate to its source MEP that
a signal fail condition exists. In case of co-routed and
associated bidirectional transport paths, RDI is associated with
proactive CC-V and the RDI indicator can be piggy-backed onto
the CC-V packet. In case of unidirectional transport paths, the
RDI indicator can be sent only using an out-of-band return path,
if it exists and its usage is enabled by policy actions.

When a MEP detects a signal fail condition (e.g. in case of a
continuity or connectivity defect), it should begin transmitting
an RDI indicator to its peer MEP. When incorporated into CC-V,
the RDI information will be included in all pro-active CC-V
packets that it generates for the duration of the signal fail
condition's existence.
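Piggy-backing the RDI indicator onto CC-V can be sketched as follows; the packet is modeled as a plain dictionary rather than a defined wire format, and all names are illustrative:

```python
# Sketch of carrying the RDI indicator in pro-active CC-V packets.
# While a signal fail condition exists at the generating MEP, every
# CC-V packet it builds carries RDI; once the condition clears, the
# indicator is cleared from subsequent packets.

class CcvSource:
    def __init__(self, mep_id):
        self.mep_id = mep_id
        self.signal_fail = False   # set on LOC/mis-connectivity, etc.

    def build_ccv_packet(self):
        # The RDI flag simply mirrors the current signal fail state.
        return {"src_mep_id": self.mep_id, "rdi": self.signal_fail}
```

The receiving peer MEP clears its RDI defect as soon as it receives a CC-V packet with the indicator cleared, matching the behavior described above.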
A MEP that receives packets from a peer MEP with the RDI
information should determine that its peer MEP has encountered a
defect condition associated with a signal fail condition.

MIPs as well as intermediate nodes not supporting MPLS-TP OAM
are transparent to the RDI indicator and forward OAM packets
that include the RDI indicator as regular data packets, i.e. the
MIP should not perform any actions nor examine the indicator.

When the signal fail condition clears, the MEP should stop
transmitting the RDI indicator to its peer MEP. When
incorporated into CC-V, the RDI indicator will be cleared from
subsequent transmissions of pro-active CC-V packets. A MEP
should clear the RDI defect upon reception of an RDI indicator
that has been cleared.

5.2.1. Configuration considerations

In order to support RDI indication, the indication may be a
unique OAM message or an OAM information element embedded in a
CV message. The in-band RDI transmission rate and PHB of the OAM
packets carrying RDI should be the same as those configured for
CC-V. The mechanisms of the out-of-band return paths will
dictate how out-of-band RDI indications are transmitted.

5.3. Alarm Reporting

The Alarm Reporting function, as required in section 2.2.8 of
RFC 5860 [11], relies upon an Alarm Indication Signal (AIS)
message to suppress alarms following detection of defect
conditions at the server (sub-)layer.

When a server MEP asserts a signal fail condition, it notifies
the co-located MPLS-TP client/server adaptation function, which
then generates OAM packets with AIS information in the
downstream direction to allow the suppression of secondary
alarms at the MPLS-TP MEP in the client (sub-)layer.

The generation of packets with AIS information starts
immediately when the server MEP asserts a signal fail condition.
These periodic OAM packets, with AIS information, continue to be
transmitted until the signal fail condition is cleared.

It is assumed that, to avoid spurious alarm generation, a MEP
detecting a loss of continuity defect (see section 5.1.1.1) will
wait for a hold off interval prior to asserting an alarm to the
management system. Therefore, upon receiving an OAM packet with
AIS information, an MPLS-TP MEP enters an AIS defect condition
and suppresses loss of continuity alarms associated with its
peer MEP, but does not block traffic received from the transport
path. A MEP resumes loss of continuity alarm generation upon
detecting loss of continuity defect conditions in the absence of
the AIS condition.

MIPs, as well as intermediate nodes, do not process AIS
information and forward these AIS OAM packets as regular data
packets.

For example, let's consider a fiber cut between LSR 1 and LSR 2
in the reference network of Figure 5. Assuming that all of the
MEGs described in Figure 5 have pro-active CC-V enabled, a LOC
defect is detected by the MEPs of Sec12 SMEG, LSP13 LMEG, PW13
PSMEG and PW1Z PMEG; however, in a transport network only the
alarm associated with the fiber cut needs to be reported to an
NMS, while all secondary alarms should be suppressed (i.e. not
reported to the NMS or reported as secondary alarms).

If the fiber cut is detected by the MEP in the physical layer
(in LSR2), LSR2 can generate the proper alarm in the physical
layer and suppress the secondary alarm associated with the LOC
defect detected on Sec12 SMEG. As both MEPs reside within the
same node, this process does not involve any external protocol
exchange. Otherwise, if the physical layer does not have
sufficient OAM capabilities to detect the fiber cut, the MEP of
Sec12 SMEG in LSR2 will report a LOC alarm.
In both cases, the MEP of Sec12 SMEG in LSR 2 notifies the
adaptation function for LSP13 LMEG, which then generates AIS
packets on the LSP13 LMEG in order to allow its MEP in LSR3 to
suppress the LOC alarm. LSR3 can also suppress the secondary
alarm on PW13 PSMEG because the MEP of PW13 PSMEG resides within
the same node as the MEP of LSP13 LMEG. The MEP of PW13 PSMEG in
LSR3 also notifies the adaptation function for PW1Z PMEG, which
then generates AIS packets on PW1Z PMEG in order to allow its
MEP in LSRZ to suppress the LOC alarm.

The generation of AIS packets for each MEG in the MPLS-TP client
(sub-)layer is configurable (i.e. the operator can
enable/disable the AIS generation).

AIS packets are transmitted with the "minimum loss probability
PHB" within a single network operator. For E-LSPs, this PHB is
configurable on a network operator's basis, while for L-LSPs,
this is determined as per RFC 3270 [22].

The AIS condition is cleared if no AIS message has been received
in 3.5 times the AIS transmission period.

5.4. Lock Reporting

The Lock Reporting function, as required in section 2.2.7 of RFC
5860 [11], relies upon a Locked Report (LKR) message used to
suppress alarms following an administrative locking action in
the server (sub-)layer.

When a server MEP is locked, the MPLS-TP client (sub-)layer
adaptation function generates packets with LKR information to
allow the suppression of secondary alarms at the MEPs in the
client (sub-)layer. Again, it is assumed that there is a hold
off for any loss of continuity alarms in the client layer MEPs
downstream of the node originating the locked report. In case of
client (sub-)layer co-routed bidirectional transport paths, the
LKR information is sent in both directions.
In case of client
(sub-)layer unidirectional transport paths, the LKR information
is sent only in the downstream direction. As a consequence, in
case of client (sub-)layer point-to-multipoint transport paths,
the LKR information is sent only to the MEPs that are downstream
of the server (sub-)layer that has been administratively locked.
Client (sub-)layer associated bidirectional transport paths
behave like co-routed bidirectional transport paths if the
server (sub-)layer that has been administratively locked is used
by both directions; otherwise they behave like unidirectional
transport paths.

The generation of packets with LKR information starts
immediately when the server MEP is locked. These periodic
packets, with LKR information, continue to be transmitted until
the locked condition is cleared.

Upon receiving a packet with LKR information, an MPLS-TP MEP
enters an LKR defect condition and suppresses the loss of
continuity alarm associated with its peer MEP, but does not
block traffic received from the transport path. A MEP resumes
loss of continuity alarm generation upon detecting loss of
continuity defect conditions in the absence of the LKR
condition.

MIPs, as well as intermediate nodes, do not process the LKR
information and forward these LKR OAM packets as regular data
packets.

For example, let's consider the case where the MPLS-TP Section
between LSR 1 and LSR 2 in the reference network of Figure 5 is
administratively locked at LSR2 (in both directions).

Assuming that all the MEGs described in Figure 5 have pro-active
CC-V enabled, a LOC defect is detected by the MEPs of LSP13
LMEG, PW13 PSMEG and PW1Z PMEG; however, in a transport network
all these secondary alarms should be suppressed (i.e. not
reported to the NMS or reported as secondary alarms).
The MEP of Sec12 SMEG in LSR 2 notifies the adaptation function
for LSP13 LMEG, which then generates LKR packets on the LSP13
LMEG in order to allow its MEPs in LSR1 and LSR3 to suppress the
LOC alarm. LSR3 can also suppress the secondary alarm on PW13
PSMEG because the MEP of PW13 PSMEG resides within the same node
as the MEP of LSP13 LMEG. The MEP of PW13 PSMEG in LSR3 also
notifies the adaptation function for PW1Z PMEG, which then
generates AIS packets on PW1Z PMEG in order to allow its MEP in
LSRZ to suppress the LOC alarm.

The generation of LKR packets for each MEG in the MPLS-TP client
(sub-)layer is configurable (i.e. the operator can
enable/disable the LKR generation).

LKR packets are transmitted with the "minimum loss probability
PHB" within a single network operator. For E-LSPs, this PHB is
configurable on a network operator's basis, while for L-LSPs,
this is determined as per RFC 3270 [22].

The locked condition is cleared if no LKR packet has been
received for 3.5 times the transmission period.

5.5. Packet Loss Measurement

Packet Loss Measurement (LM) is one of the capabilities
supported by the MPLS-TP Performance Monitoring (PM) function in
order to facilitate reporting of QoS information for a transport
path, as required in section 2.2.11 of RFC 5860 [11]. LM is used
to exchange counter values for the number of ingress and egress
packets transmitted and received by the transport path monitored
by a pair of MEPs.

Proactive LM is performed by periodically sending LM OAM packets
from a MEP to a peer MEP and by receiving LM OAM packets from
the peer MEP (if a co-routed or associated bidirectional
transport path) during the lifetime of the transport path. Each
MEP performs measurements of its transmitted and received
packets.
These measurements are then correlated in real time
with the peer MEP in the ME to derive the impact of packet loss
on a number of performance metrics for the ME in the MEG. The LM
transactions are issued such that the OAM packets will
experience the same PHB scheduling class as the measured traffic
while transiting between the MEPs in the ME.

For a MEP, near-end packet loss refers to packet loss associated
with incoming data packets (from the far-end MEP), while far-end
packet loss refers to packet loss associated with egress data
packets (towards the far-end MEP).

Pro-active LM can be operated in two ways:

o One-way: a MEP sends an LM OAM packet to its peer MEP
  containing all the required information to facilitate
  near-end packet loss measurements at the peer MEP.

o Two-way: a MEP sends an LM OAM packet with an LM request to
  its peer MEP, which replies with an LM OAM packet as an LM
  response. The request/response LM OAM packets contain all
  the required information to facilitate both near-end and
  far-end packet loss measurements from the viewpoint of the
  originating MEP.

One-way LM is applicable to both unidirectional and
bidirectional (co-routed or associated) transport paths, while
two-way LM is applicable only to bidirectional (co-routed or
associated) transport paths.

MIPs, as well as intermediate nodes, do not process the LM
information and forward these pro-active LM OAM packets as
regular data packets.

5.5.1. Configuration considerations

In order to support proactive LM, the transmission rate and PHB
class associated with the LM OAM packets originating from a MEP
need to be configured as part of the LM provisioning. LM OAM
packets should be transmitted with the PHB that yields the
lowest drop precedence within the measured PHB Scheduling Class
(see RFC 3260 [16]).
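As a sketch of the counter correlation described in section 5.5, one-way near-end loss over an interval can be computed from two consecutive LM messages; the counter names below are illustrative, not a defined MPLS-TP packet format:

```python
# Sketch of one-way near-end packet loss computation at the sink MEP,
# from two consecutive LM messages. Each message is summarized as a
# (tx_fc, rx_fc) pair: tx_fc is the far-end transmit counter carried
# in the LM packet, rx_fc the local receive counter sampled when that
# packet arrives. Counter wrap handling is omitted for brevity.

def near_end_loss(prev, curr):
    sent = curr[0] - prev[0]       # packets the far end transmitted
    received = curr[1] - prev[1]   # packets this MEP actually received
    return sent - received         # packets lost in this interval
```

For example, if the far end transmitted 100 packets in an interval during which the local counter advanced by only 95, the near-end loss for that interval is 5. Two-way LM applies the same difference arithmetic to the counters carried in the request/response pair to obtain both near-end and far-end loss.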
If that PHB class is not an ordered aggregate where the ordering
constraint is that all packets with the PHB class are delivered
in order, LM can produce inconsistent results.

5.5.2. Sampling skew

If an implementation makes use of a hardware forwarding path
which operates in parallel with an OAM processing path, whether
hardware or software based, the packet and byte counts may be
skewed if one or more packets can be processed before the OAM
processing samples the counters. If OAM is implemented in
software this error can be quite large.

5.5.3. Multilink issues

If multilink is used at the LSP ingress or egress, there may be
no single packet processing engine at which to inject or extract
an LM packet as an atomic operation with which accurate packet
and byte counts can be associated.

In the case where multilink is encountered in the LSP path, the
reordering of packets within the LSP can cause inaccurate LM
results.

5.6. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities
supported by the MPLS-TP PM function in order to facilitate
reporting of QoS information for a transport path, as required
in section 2.2.12 of RFC 5860 [11]. Specifically, pro-active DM
is used to measure the long-term packet delay and packet delay
variation in the transport path monitored by a pair of MEPs.

Proactive DM is performed by sending periodic DM OAM packets
from a MEP to a peer MEP and by receiving DM OAM packets from
the peer MEP (if a co-routed or associated bidirectional
transport path) during a configurable time interval.

Pro-active DM can be operated in two ways:

o One-way: a MEP sends a DM OAM packet to its peer MEP
  containing all the required information to facilitate one-way
  packet delay and/or one-way packet delay variation
  measurements at the peer MEP.
  Note that this requires precise time
  synchronisation at both MEPs, by means outside the scope of
  this framework.

o Two-way: a MEP sends a DM OAM packet with a DM request to its
  peer MEP, which replies with a DM OAM packet as a DM
  response. The request/response DM OAM packets contain all
  the required information to facilitate two-way packet delay
  and/or two-way packet delay variation measurements from the
  viewpoint of the originating MEP.

One-way DM is applicable to both unidirectional and
bidirectional (co-routed or associated) transport paths, while
two-way DM is applicable only to bidirectional (co-routed or
associated) transport paths.

MIPs, as well as intermediate nodes, do not process the DM
information and forward these pro-active DM OAM packets as
regular data packets.

5.6.1. Configuration considerations

In order to support pro-active DM, the transmission rate and,
for E-LSPs, the PHB associated with the DM OAM packets
originating from a MEP need to be configured as part of the DM
provisioning. DM OAM packets should be transmitted with the PHB
that yields the lowest drop precedence within the measured PHB
Scheduling Class (see RFC 3260 [16]).

5.7. Client Failure Indication

The Client Failure Indication (CFI) function, as required in
section 2.2.10 of RFC 5860 [11], is used to help process client
defects and propagate a client signal defect condition from the
process associated with the local attachment circuit where the
defect was detected (typically the source adaptation function
for the local client interface) to the process associated with
the far-end attachment circuit (typically the source adaptation
function for the far-end client interface) for the same
transmission path, in case the client of the transport path does
not support a native defect/alarm indication mechanism, e.g.
AIS.
A source MEP starts transmitting a CFI indication to its peer
MEP when it receives a local client signal defect notification
via its local CSF function. The mechanisms used to detect local
client signal fail defects are technology specific. Similarly,
the mechanisms used to determine when to cease originating the
client signal fail indication are also technology specific.

A sink MEP that has received a CFI indication reports this
condition to its associated client process via its local CFI
function. Consequent actions toward the client attachment
circuit are technology specific.

Either there needs to be a 1:1 correspondence between the client
and the MEG, or, when multiple clients are multiplexed over a
transport path, the CFI message requires additional information
to permit the client instance to be identified.

MIPs, as well as intermediate nodes, do not process the CFI
information and forward these proactive CFI OAM packets as
regular data packets.

5.7.1. Configuration considerations

In order to support CFI indication, the CFI transmission rate
and, for E-LSPs, the PHB of the CFI OAM message/information
element should be configured as part of the CFI configuration.

6. OAM Functions for on-demand monitoring

In contrast to proactive monitoring, on-demand monitoring is
initiated manually and for a limited amount of time, usually for
operations such as diagnostics to investigate a defect
condition.

On-demand monitoring covers a combination of "in-service" and
"out-of-service" monitoring functions. The control and
measurement implications are:

1. A MEG can be directed to perform an "on-demand" function at
   arbitrary times in the lifetime of a transport path.

2.
"out-of-service" monitoring functions may require a-priori 2162 configuration of both MEPs and intermediate nodes in the MEG 2163 (e.g., data plane loopback) and the issuance of notifications 2164 into client layers of the transport path being removed from 2165 service (e.g., lock-reporting) 2167 3. The measurements resulting from on-demand monitoring are 2168 typically harvested in real time, as these are frequently 2169 initiated manually. These do not necessarily require 2170 different harvesting mechanisms that for harvesting proactive 2171 monitoring telemetry. 2173 The functions that are exclusively out-of-service are those 2174 described in section 6.3. The remainder are applicable to both 2175 in-service and out-of-service transport paths. 2177 6.1. Connectivity Verification 2179 On demand connectivity verification function, as required in 2180 section 2.2.3 of RFC 5860 [11], is a transaction that flows from 2181 the originating MEP to a target MIP or MEP to verify the 2182 connectivity between these points. 2184 Use of on-demand CV is dependent on the existence of either a 2185 bi-directional ME, or an associated return ME, or the 2186 availability of an out-of-band return path because it requires 2187 the ability for target MIPs and MEPs to direct responses to the 2188 originating MEPs. 2190 In order to preserve network resources, e.g. bandwidth, 2191 processing time at switches, it may be preferable to not use 2192 proactive CC-V. In order to perform fault management functions, 2193 network management may invoke periodic on-demand bursts of on- 2194 demand CV packets. 2196 An additional use of on-demand CV would be to detect and locate 2197 a problem of connectivity when a problem is suspected or known 2198 based on other tools. In this case the functionality will be 2199 triggered by the network management in response to a status 2200 signal or alarm indication. 
On-demand CV is based upon the generation of on-demand CV
packets that should uniquely identify the MEG that is being
checked. The on-demand functionality may be used to check either
an entire MEG (end-to-end) or the segment between the
originating MEP and a specific MIP. This functionality may not
be available for associated bidirectional transport paths or
unidirectional paths, as the MIP may not have a return path to
the originating MEP for the on-demand CV transaction.

On-demand CV may generate a one-time burst of on-demand CV
packets, or be used to invoke periodic, non-continuous, bursts
of on-demand CV packets. The number of packets generated in each
burst is configurable at the MEPs, and should take into account
normal packet-loss conditions.

When invoking a periodic check of the MEG, the originating MEP
should issue a burst of on-demand CV packets that uniquely
identifies the MEG being verified. The number of packets and
their transmission rate should be pre-configured at the
originating MEP. The source MEP should use the mechanisms
defined in sections 3.3 and 3.4 when sending an on-demand CV
packet to a target MEP or target MIP, respectively. The target
MEP/MIP shall return a reply on-demand CV packet for each packet
received. If the expected number of on-demand CV reply packets
is not received at the originating MEP, this is an indication
that a connectivity problem may exist.

On-demand CV should have the ability to carry padding such that
a variety of MTU sizes can be originated to verify the MTU
transport capability of the transport path.

MIPs that are not targeted by on-demand CV packets, as well as
intermediate nodes, do not process the CV information and
forward these on-demand CV OAM packets as regular data packets.

6.1.1.
Configuration considerations

For on-demand CV, the originating MEP should support the
configuration of the number of packets to be
transmitted/received in each burst of transmissions and their
packet size.

In addition, when the CV packet is used to check connectivity
toward a target MIP, the number of hops needed to reach the
target MIP should be configured.

For E-LSPs, the PHB of the on-demand CV packets should be
configured as well. This permits the verification of correct
operation of QoS queuing as well as of connectivity.

6.2. Packet Loss Measurement

On-demand Packet Loss Measurement (LM) is one of the
capabilities supported by the MPLS-TP Performance Monitoring
function in order to facilitate the diagnosis of QoS performance
for a transport path, as required in section 2.2.11 of RFC 5860
[11]. As with proactive LM, on-demand LM is used to exchange
counter values for the number of ingress and egress packets
transmitted and received by the transport path monitored by a
pair of MEPs. LM is only performed between a pair of MEPs.

On-demand LM is performed by periodically sending LM OAM packets
from a MEP to a peer MEP and by receiving LM OAM packets from
the peer MEP (if a co-routed or associated bidirectional
transport path) during a pre-defined monitoring period. Each MEP
performs measurements of its transmitted and received packets.
These measurements are then correlated to evaluate the packet
loss performance metrics of the transport path.

Use of packet loss measurement in an out-of-service transport
path requires a traffic source such as a tester.

MIPs, as well as intermediate nodes, do not process the LM
information and forward these on-demand LM OAM packets as
regular data packets.

6.2.1.
Configuration considerations

In order to support on-demand LM, the beginning and duration of
the LM procedures, the transmission rate and, for E-LSPs, the
PHB associated with the LM OAM packets originating from a MEP
must be configured as part of the on-demand LM provisioning. LM
OAM packets should be transmitted with the PHB that yields the
lowest drop precedence within the measured PHB Scheduling Class
(see RFC 3260 [16]).

6.2.2. Sampling skew

The considerations described in section 5.5.2 for proactive LM
also apply to on-demand LM implementations.

6.2.3. Multilink issues

Multilink issues are as described in section 5.5.3.

6.3. Diagnostic Tests

Diagnostic tests are tests performed on a MEG that has been
taken out-of-service.

6.3.1. Throughput Estimation

Throughput estimation is an on-demand out-of-service function,
as required in section 2.2.5 of RFC 5860 [11], that allows the
bandwidth/throughput of an MPLS-TP transport path (LSP or PW) to
be verified before it is put into service.

Throughput estimation is performed between MEPs, or between a
MEP and a MIP. It can be performed in one-way or two-way modes.

According to RFC 2544 [12], this test is performed by sending
OAM test packets at an increasing rate (up to the theoretical
maximum), computing the percentage of OAM test packets received,
and reporting the rate at which OAM test packets begin to drop.
In general, this rate is dependent on the OAM test packet size.

When configured to perform such tests, a source MEP inserts OAM
test packets with a specified packet size and transmission
pattern at a rate that exercises the throughput.

For a one-way test, the remote sink MEP receives the OAM test
packets and calculates the packet loss.
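The RFC 2544 style rate ramp described above can be sketched as
follows. This is a hedged illustration only, not a defined test
procedure: send_burst() and count_received() are hypothetical
placeholders for the test signal generator and detector.

```python
# Hypothetical sketch of RFC 2544 style throughput estimation: step
# the offered rate upward and report the highest rate at which no OAM
# test packets were dropped. The result generally depends on the OAM
# test packet size, which is fixed for a given run.

def estimate_throughput(send_burst, count_received, rates_pps, burst_size):
    """Return the highest loss-free rate in packets/s, or None."""
    best = None
    for rate in sorted(rates_pps):
        send_burst(rate, burst_size)   # source MEP transmits at this rate
        received = count_received()    # sink MEP (or looped-back) count
        loss = (burst_size - received) / burst_size
        if loss == 0:
            best = rate                # still loss-free, try a higher rate
        else:
            break                      # OAM test packets begin to drop
    return best
```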
For a two-way test, the remote MEP loops the OAM test packets
back to the original MEP, and the local sink MEP calculates the
packet loss.

It is worth noting that two-way throughput estimation is only
applicable to bidirectional (co-routed or associated) transport
paths and can only evaluate the minimum of the available
throughputs of the two directions. In order to estimate the
throughput of each direction uniquely, two one-way throughput
estimation sessions have to be set up.

It is also worth noting that if throughput estimation is
performed on transport paths that transit oversubscribed links,
the test may not produce comprehensive results if viewed in
isolation, because the impact of the test on the surrounding
traffic also needs to be considered. Moreover, the estimation
will only reflect the bandwidth available at the moment the
measurement is made.

MIPs that are not targeted by on-demand test OAM packets, as
well as intermediate nodes, do not process the throughput test
information and forward these on-demand test OAM packets as
regular data packets.

6.3.1.1. Configuration considerations

Throughput estimation is an out-of-service tool. The diagnosed
MEG should be put into a Lock status before the diagnostic test
is started.

A MEG can be put into a Lock status either via an NMS action or
using the Lock Instruct OAM tool as defined in section 7.

At the transmitting MEP, provisioning is required for a test
signal generator associated with the MEP. At a receiving MEP,
provisioning is required for a test signal detector associated
with the MEP.

6.3.1.2. Limited OAM processing rate

If an implementation is able to process payload at much higher
data rates than OAM test packets, then accurate measurement of
throughput using OAM test packets is not achievable.
Whether OAM packets can be processed at the same rate as payload
is implementation dependent.

6.3.1.3. Multilink considerations

If multilink is used, then it may not be possible to perform
throughput measurement, as the throughput test may not have a
mechanism for utilizing more than one component link of the
aggregated link.

6.3.2. Data plane Loopback

Data plane loopback is an out-of-service function, as required
in section 2.2.5 of RFC 5860 [11]. This function consists of
placing a transport path, at either an intermediate or
terminating node, into a data plane loopback state, such that
all traffic (including both payload and OAM) received on the
looped-back interface is sent back in the reverse direction of
the transport path. The traffic is looped back unmodified, other
than normal per-hop processing such as TTL decrement.

The data plane loopback function requires that the MEG be locked
such that user data traffic is prevented from entering/exiting
that MEG. Instead, test traffic is inserted at the ingress of
the MEG. This test traffic can be generated by an internal
process residing within the ingress node or injected by external
test equipment connected to the ingress node.

It is also normal to disable proactive monitoring of the path,
as the sink MEP will see all the OAM messages originated by the
associated source MEP returned to it.

The only way to send an OAM packet (e.g., to remove the data
plane loopback state) to the MIPs or MEPs hosted by a node set
in data plane loopback mode is via TTL expiry. It should also be
noted that MIPs can be addressed with more than one TTL value on
a co-routed bi-directional path set into data plane loopback.

If the loopback function is to be performed at an intermediate
node, it is only applicable to co-routed bi-directional paths.
If the loopback is to be performed end to end, it is applicable
to both co-routed bi-directional and associated bi-directional
paths.

It should be noted that the data plane loopback function itself
is applied to data plane loopback points that can reside on
different interfaces from MIPs/MEPs. Whether a node implements
the data plane loopback capability, and whether it implements it
at more than one point, is implementation dependent.

6.3.2.1. Configuration considerations

Data plane loopback is an out-of-service tool. The MEG that
defines the diagnosed transport path should be put into a locked
state before the diagnostic test is started. However, a means is
required to permit the originated test traffic to be inserted at
the ingress MEP when data plane loopback is performed.

A transport path, at either an intermediate or terminating node,
can be put into the data plane loopback state via an NMS action
or using an OAM tool for data plane loopback configuration.

If the data plane loopback point is set at an intermediate point
of a co-routed bidirectional transport path, the direction of
the loopback function (one side or both sides) needs to be
configured.

6.4. Route Tracing

It is often necessary to trace the route covered by a MEG from
an originating MEP to the peer MEP(s), including all the MIPs in
between. Such a trace may be conducted after provisioning an
MPLS-TP transport path for, e.g., troubleshooting purposes such
as fault localization.

The route tracing function, as required in section 2.2.4 of RFC
5860 [11], provides this functionality. Based on the
fate-sharing requirement for OAM flows, i.e. that OAM packets
receive the same forwarding treatment as data packets, route
tracing is a basic means to perform connectivity verification
and, to a much lesser degree, continuity check.
For this function to work properly, a return path must be
present.

Route tracing might be implemented in different ways, and this
document does not preclude any of them.

Route tracing should always discover the full list of MIPs and
of the peer MEPs. In case a defect exists, the route trace
function will only be able to trace up to the defect, and it
needs to be able to return the incomplete list of OAM entities
that it was able to trace so that the fault can be localized.

6.4.1. Configuration considerations

The configuration of the route trace function must at least
support the setting of the number of trace attempts before it
gives up.

6.5. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities
supported by the MPLS-TP PM function in order to facilitate
reporting of QoS information for a transport path, as required
in section 2.2.12 of RFC 5860 [11]. Specifically, on-demand DM
is used to measure packet delay and packet delay variation in
the transport path monitored by a pair of MEPs during a
pre-defined monitoring period.

On-demand DM is performed by sending periodic DM OAM packets
from a MEP to a peer MEP and by receiving DM OAM packets from
the peer MEP (if a co-routed or associated bidirectional
transport path) during a configurable time interval.

On-demand DM can be operated in two modes:

o One-way: a MEP sends a DM OAM packet to its peer MEP
  containing all the required information to facilitate one-way
  packet delay and/or one-way packet delay variation
  measurements at the peer MEP. Note that this requires precise
  time synchronisation at either MEP by means outside the scope
  of this framework.

o Two-way: a MEP sends a DM OAM packet with a DM request to its
  peer MEP, which replies with a DM OAM packet as a DM response.
The request/response DM OAM packets contain all the required
  information to facilitate two-way packet delay and/or two-way
  packet delay variation measurements from the viewpoint of the
  originating MEP.

MIPs, as well as intermediate nodes, do not process the DM
information and forward these on-demand DM OAM packets as
regular data packets.

6.5.1. Configuration considerations

In order to support on-demand DM, the beginning and duration of
the DM procedures, the transmission rate and, for E-LSPs, the
PHB associated with the DM OAM packets originating from a MEP
need to be configured as part of the DM provisioning. DM OAM
packets should be transmitted with the PHB that yields the
lowest drop precedence within the measured PHB Scheduling Class
(see RFC 3260 [16]).

In order to verify performance differences between long and
short packets (e.g., due to processing time), it should be
possible for the operator to configure the packet size of the
on-demand OAM DM packet.

7. OAM Functions for administration control

7.1. Lock Instruct

The Lock Instruct (LKI) function, as required in section 2.2.6
of RFC 5860 [11], is a command allowing a MEP to instruct its
peer MEP(s) to put the MPLS-TP transport path into a locked
condition.

This function allows single-side provisioning for
administratively locking (and unlocking) an MPLS-TP transport
path.

Note that it is also possible to administratively lock (and
unlock) an MPLS-TP transport path using two-side provisioning,
where the NMS administratively puts both MEPs into an
administrative lock condition. In this case, the LKI function is
not required/used.

MIPs, as well as intermediate nodes, do not process the lock
instruct information and forward these on-demand LKI OAM packets
as regular data packets.

7.1.1.
Locking a transport path

A MEP, upon receiving a single-side administrative lock command
from an NMS, sends an LKI request OAM packet to its peer MEP(s).
It also puts the MPLS-TP transport path into a locked state and
notifies its client (sub-)layer adaptation function of the
locked condition.

A MEP, upon receiving an LKI request from its peer MEP, can
either accept or reject the instruction and replies to the peer
MEP with an LKI reply OAM packet indicating whether or not it
has accepted the instruction. This requires either an in-band or
out-of-band return path.

If the lock instruction has been accepted, it also puts the
MPLS-TP transport path into a locked state and notifies its
client (sub-)layer adaptation function of the locked condition.

Note that if the client (sub-)layer is also MPLS-TP, Lock
Reporting (LKR) generation at the client MPLS-TP (sub-)layer is
started, as described in section 5.4.

7.1.2. Unlocking a transport path

A MEP, upon receiving a single-side administrative unlock
command from an NMS, sends an LKI removal request OAM packet to
its peer MEP(s).

The peer MEP, upon receiving an LKI removal request, can either
accept or reject the removal instruction and replies with an LKI
removal reply OAM packet indicating whether or not it has
accepted the instruction.

If the lock removal instruction has been accepted, it also
clears the locked condition on the MPLS-TP transport path and
notifies its client (sub-)layer adaptation function of this
event.

The MEP that initiated the LKI clear procedure, upon receiving a
positive LKI removal reply, also clears the locked condition on
the MPLS-TP transport path and notifies its client (sub-)layer
adaptation function of this event.
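The accept/reject handling at the peer MEP described above can
be sketched as a small state machine. This is a hypothetical
illustration only, not a defined protocol API: send_reply() and
policy_accepts() are placeholders for implementation-specific
packet I/O and local policy.

```python
# Hypothetical sketch of LKI handling at a receiving MEP: accept or
# reject the lock/unlock instruction, answer with an LKI reply, and
# track the locked state of the transport path.

class LkiPeerMep:
    def __init__(self, send_reply, policy_accepts):
        self.send_reply = send_reply          # emits the LKI reply packet
        self.policy_accepts = policy_accepts  # local accept/reject decision
        self.locked = False

    def on_lki_request(self):
        accepted = self.policy_accepts("lock")
        if accepted:
            self.locked = True                # path enters the locked state
        self.send_reply("lock", accepted)     # requires a return path
        return accepted

    def on_lki_removal_request(self):
        accepted = self.policy_accepts("unlock")
        if accepted:
            self.locked = False               # locked condition cleared
        self.send_reply("unlock", accepted)
        return accepted
```

A rejected instruction leaves the locked state unchanged; only
the reply indicating rejection is returned.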
Note that if the client (sub-)layer is also MPLS-TP, Lock
Reporting (LKR) generation at the client MPLS-TP (sub-)layer is
terminated, as described in section 5.4.

8. Security Considerations

A number of security considerations are important in the context
of OAM applications.

OAM traffic can reveal sensitive information such as performance
data and details about the current state of the network.
Insertion of, or modifications to, OAM transactions can mask the
true operational state of the network, and in the case of
transactions for administration control, such as Lock or data
plane loopback instructions, they can be used to mount explicit
denial-of-service attacks. The effect of such attacks is
mitigated only by the fact that, due to the fate-sharing nature
of OAM messaging, the managed entities whose state can be masked
are limited to those that transit the point of malicious access
to the network internals.

The sensitivity of OAM data therefore suggests one solution in
which some form of authentication, authorization and encryption
is in place. This would prevent unauthorized access to vital
equipment and prevent third parties from learning sensitive
information about the transport network. However, it should be
observed that the combination of the need for timeliness of OAM
transaction exchange and all permutations of unique MEP-to-MEP,
MEP-to-MIP and intermediate-system-originated transactions
militates against the practical establishment and maintenance of
a large number of security associations per MEG, either in
advance or as required.

For this reason it is assumed that the internal links of the
network are physically secured from malicious access, such that
OAM transactions scoped to fault and performance management of
individual MEGs are not encumbered with additional security.
Mechanisms that the framework does not specify might be subject
to additional security considerations.

9. IANA Considerations

No new IANA considerations.

10. Acknowledgments

The authors would like to thank all members of the teams (the
Joint Working Team, the MPLS Interoperability Design Team in
IETF and the Ad Hoc Group on MPLS-TP in ITU-T) involved in the
definition and specification of the MPLS Transport Profile.

The editors gratefully acknowledge the contributions of Adrian
Farrel, Yoshinori Koike, Luca Martini, Yuji Tochio and Manuel
Paul for the definition of per-interface MIPs and MEPs.

The editors gratefully acknowledge the contributions of Malcolm
Betts, Yoshinori Koike, Xiao Min, and Maarten Vissers for the
lock report and lock instruction description.

The authors would also like to thank Alessandro D'Alessandro,
Loa Andersson, Malcolm Betts, Stewart Bryant, Rui Costa, Xuehui
Dai, John Drake, Adrian Farrel, Dan Frost, Xia Liang, Liu
Gouman, Peng He, Feng Huang, Su Hui, Yoshinori Koike, George
Swallow, Yuji Tochio, Curtis Villamizar, Maarten Vissers and
Xuequin Wei for their comments and enhancements to the text.

This document was prepared using 2-Word-v2.0.template.dot.

11. References

11.1.
Normative References

[1]  Rosen, E., Viswanathan, A., Callon, R., "Multiprotocol
     Label Switching Architecture", RFC 3031, January 2001

[2]  Bryant, S., Pate, P., "Pseudo Wire Emulation Edge-to-Edge
     (PWE3) Architecture", RFC 3985, March 2005

[3]  Nadeau, T., Pignataro, S., "Pseudowire Virtual Circuit
     Connectivity Verification (VCCV): A Control Channel for
     Pseudowires", RFC 5085, December 2007

[4]  Bocci, M., Bryant, S., "An Architecture for Multi-Segment
     Pseudo Wire Emulation Edge-to-Edge", RFC 5659, October 2009

[5]  Niven-Jenkins, B., Brungard, D., Betts, M., Sprecher, N.,
     Ueno, S., "MPLS-TP Requirements", RFC 5654, September 2009

[6]  Agarwal, P., Akyol, B., "Time To Live (TTL) Processing in
     Multiprotocol Label Switching (MPLS) Networks", RFC 3443,
     January 2003

[7]  Vigoureux, M., Bocci, M., Swallow, G., Ward, D., Aggarwal,
     R., "MPLS Generic Associated Channel", RFC 5586, June 2009

[8]  Bocci, M., et al., "A Framework for MPLS in Transport
     Networks", RFC 5921, July 2010

[9]  Bocci, M., et al., "MPLS Transport Profile User-to-Network
     and Network-to-Network Interfaces", draft-ietf-mpls-tp-uni-
     nni-02 (work in progress), December 2010

[10] Swallow, G., Bocci, M., "MPLS-TP Identifiers", draft-ietf-
     mpls-tp-identifiers-03 (work in progress), December 2010

[11] Vigoureux, M., Betts, M., Ward, D., "Requirements for OAM
     in MPLS Transport Networks", RFC 5860, May 2010

[12] Bradner, S., McQuaid, J., "Benchmarking Methodology for
     Network Interconnect Devices", RFC 2544, March 1999

[13] ITU-T Recommendation G.806 (01/09), "Characteristics of
     transport equipment - Description methodology and generic
     functionality", January 2009

11.2.
Informative References

[14] Sprecher, N., Nadeau, T., van Helvoort, H., Weingarten,
     Y., "MPLS-TP OAM Analysis", draft-ietf-mpls-tp-oam-
     analysis-02 (work in progress), July 2010

[15] Nichols, K., Blake, S., Baker, F., Black, D., "Definition
     of the Differentiated Services Field (DS Field) in the
     IPv4 and IPv6 Headers", RFC 2474, December 1998

[16] Grossman, D., "New Terminology and Clarifications for
     Diffserv", RFC 3260, April 2002

[17] Kompella, K., Rekhter, Y., Berger, L., "Link Bundling in
     MPLS Traffic Engineering (TE)", RFC 4201, October 2005

[18] ITU-T Recommendation G.707/Y.1322 (01/07), "Network node
     interface for the synchronous digital hierarchy (SDH)",
     January 2007

[19] ITU-T Recommendation G.805 (03/00), "Generic functional
     architecture of transport networks", March 2000

[20] ITU-T Recommendation Y.1731 (02/08), "OAM functions and
     mechanisms for Ethernet based networks", February 2008

[21] IEEE Standard 802.1AX-2008, "IEEE Standard for Local and
     Metropolitan Area Networks - Link Aggregation", November
     2008

[22] Le Faucheur, F., et al., "Multi-Protocol Label Switching
     (MPLS) Support of Differentiated Services", RFC 3270, May
     2002
Authors' Addresses

Dave Allan
Ericsson
Email: david.i.allan@ericsson.com

Italo Busi
Alcatel-Lucent
Email: Italo.Busi@alcatel-lucent.com

Ben Niven-Jenkins
Velocix
Email: ben@niven-jenkins.co.uk

Annamaria Fulignoli
Ericsson
Email: annamaria.fulignoli@ericsson.com

Enrique Hernandez-Valencia
Alcatel-Lucent
Email: Enrique.Hernandez@alcatel-lucent.com

Lieven Levrau
Alcatel-Lucent
Email: Lieven.Levrau@alcatel-lucent.com

Vincenzo Sestito
Alcatel-Lucent
Email: Vincenzo.Sestito@alcatel-lucent.com

Nurit Sprecher
Nokia Siemens Networks
Email: nurit.sprecher@nsn.com

Huub van Helvoort
Huawei Technologies
Email: hhelvoort@huawei.com

Martin Vigoureux
Alcatel-Lucent
Email: Martin.Vigoureux@alcatel-lucent.com

Yaacov Weingarten
Nokia Siemens Networks
Email: yaacov.weingarten@nsn.com

Rolf Winter
NEC
Email: Rolf.Winter@nw.neclab.eu