1 MPLS Working Group I. Busi (Ed)
2 Internet Draft Alcatel-Lucent
3 Intended status: Informational D. Allan (Ed)
4 Ericsson

6 Expires: March 17, 2011 September 17, 2010

8 Operations, Administration and Maintenance Framework for MPLS-
9 based Transport Networks
10 draft-ietf-mpls-tp-oam-framework-08.txt

12 Abstract

14 The Transport Profile of Multi-Protocol Label Switching
15 (MPLS-TP) is a packet-based transport technology based on the
16 MPLS Traffic Engineering (MPLS-TE) and Pseudowire (PW) data
17 plane architectures.

19 This document describes a framework to support a comprehensive
20 set of Operations, Administration and Maintenance (OAM)
21 procedures that fulfill the MPLS-TP OAM requirements for fault,
22 performance and protection-switching management and that do not
23 rely on the presence of a control plane.

25 This document is a product of a joint Internet Engineering Task
26 Force (IETF) / International Telecommunications Union
27 Telecommunication Standardization Sector (ITU-T) effort to
28 include an MPLS Transport Profile within the IETF MPLS and PWE3
29 architectures to support the capabilities and functionalities of
30 a packet transport network as defined by the ITU-T.

32 Status of this Memo

34 This Internet-Draft is submitted to IETF in full conformance
35 with the provisions of BCP 78 and BCP 79.

37 Internet-Drafts are working documents of the Internet
38 Engineering Task Force (IETF), its areas, and its working
39 groups. Note that other groups may also distribute working
40 documents as Internet-Drafts.

42 Internet-Drafts are draft documents valid for a maximum of six
43 months and may be updated, replaced, or obsoleted by other
44 documents at any time. It is inappropriate to use Internet-
45 Drafts as reference material or to cite them other than as "work
46 in progress".
48 The list of current Internet-Drafts can be accessed at 49 http://www.ietf.org/ietf/1id-abstracts.txt. 51 The list of Internet-Draft Shadow Directories can be accessed at 52 http://www.ietf.org/shadow.html. 54 This Internet-Draft will expire on March 17, 2011. 56 Copyright Notice 58 Copyright (c) 2010 IETF Trust and the persons identified as the 59 document authors. All rights reserved. 61 This document is subject to BCP 78 and the IETF Trust's Legal 62 Provisions Relating to IETF Documents 63 (http://trustee.ietf.org/license-info) in effect on the date of 64 publication of this document. Please review these documents 65 carefully, as they describe your rights and restrictions with 66 respect to this document. Code Components extracted from this 67 document must include Simplified BSD License text as described 68 in Section 4.e of the Trust Legal Provisions and are provided 69 without warranty as described in the Simplified BSD License. 71 Table of Contents 73 1. Introduction................................................5 74 1.1. Contributing Authors....................................6 75 2. Conventions used in this document............................6 76 2.1. Terminology............................................6 77 2.2. Definitions............................................7 78 3. Functional Components.......................................10 79 3.1. Maintenance Entity and Maintenance Entity Group.........10 80 3.2. Nested MEGs: SPMEs and Tandem Connection Monitoring.....12 81 3.3. MEG End Points (MEPs)..................................14 82 3.4. MEG Intermediate Points (MIPs).........................17 83 3.5. Server MEPs...........................................18 84 3.6. Configuration Considerations...........................19 85 3.7. P2MP considerations....................................19 86 4. Reference Model............................................20 87 4.1. MPLS-TP Section Monitoring (SME).......................23 88 4.2. MPLS-TP LSP End-to-End Monitoring (LME)................24 89 4.3. MPLS-TP PW Monitoring (PME)............................24 90 4.4. MPLS-TP LSP SPME Monitoring (LSME).....................25 91 4.5. MPLS-TP MS-PW SPME Monitoring (PSME)...................26 92 4.6. Fate sharing considerations for multilink..............28 93 5. OAM Functions for proactive monitoring......................29 94 5.1. Continuity Check and Connectivity Verification..........30 95 5.1.1. Defects identified by CC-V........................31 96 5.1.2. Consequent action.................................33 97 5.1.3. Configuration considerations......................34 98 5.2. Remote Defect Indication...............................36 99 5.2.1. Configuration considerations......................36 100 5.3. Alarm Reporting........................................37 101 5.4. Lock Reporting........................................38 102 5.5. Packet Loss Measurement................................39 103 5.5.1. Configuration considerations......................40 104 5.5.2. Sampling skew.....................................40 105 5.5.3. Multilink issues..................................40 106 5.6. Packet Delay Measurement...............................41 107 5.6.1. Configuration considerations......................41 108 5.7. Client Failure Indication..............................42 109 5.7.1. Configuration considerations......................42 110 6. OAM Functions for on-demand monitoring......................42 111 6.1. 
Connectivity Verification..............................43 112 6.1.1. Configuration considerations......................44 113 6.2. Packet Loss Measurement................................45 114 6.2.1. Configuration considerations......................45 115 6.2.2. Sampling skew.....................................45 116 6.2.3. Multilink issues..................................45 117 6.3. Diagnostic Tests.......................................46 118 6.3.1. Throughput Estimation.............................46 119 6.3.2. Data plane Loopback...............................47 120 6.4. Route Tracing.........................................48 121 6.4.1. Configuration considerations......................48 122 6.5. Packet Delay Measurement...............................48 123 6.5.1. Configuration considerations......................49 124 7. OAM Functions for administration control....................49 125 7.1. Lock Instruct.........................................49 126 7.1.1. Locking a transport path..........................50 127 7.1.2. Unlocking a transport path........................50 128 8. Security Considerations.....................................51 129 9. IANA Considerations........................................51 130 10. Acknowledgments...........................................51 131 11. References................................................53 132 11.1. Normative References..................................53 133 11.2. Informative References................................54 135 Editors' Note: 137 This Informational Internet-Draft is aimed at achieving IETF 138 Consensus before publication as an RFC and will be subject to an 139 IETF Last Call. 141 [RFC Editor, please remove this note before publication as an 142 RFC and insert the correct Streams Boilerplate to indicate that 143 the published RFC has IETF Consensus.] 145 1. Introduction 147 As noted in the multi-protocol label switching (MPLS-TP) Framework 148 RFCs (RFC 5921 [8] and [9]), MPLS-TP is a packet-based transport 149 technology based on the MPLS Traffic Engineering (MPLS-TE) and Pseudo 150 Wire (PW) data plane architectures defined in RFC 3031 [1], RFC 3985 151 [2] and RFC 5659 [4]. 153 MPLS-TP supports a comprehensive set of Operations, 154 Administration and Maintenance (OAM) procedures for fault, 155 performance and protection-switching management and that do not 156 rely on the presence of a control plane. 158 In line with [14], existing MPLS OAM mechanisms will be used 159 wherever possible and extensions or new OAM mechanisms will be 160 defined only where existing mechanisms are not sufficient to 161 meet the requirements. Extensions do not deprecate support for 162 existing MPLS OAM capabilities. 164 The MPLS-TP OAM framework defined in this document provides a 165 comprehensive set of OAM procedures that satisfy the MPLS-TP OAM 166 requirements of RFC 5860 [11]. In this regard, it defines 167 similar OAM functionality as for existing SONET/SDH and OTN OAM 168 mechanisms (e.g. [18]). 170 The MPLS-TP OAM framework is applicable to both LSPs and 171 (MS-)PWs and supports co-routed and associated bidirectional p2p 172 transport paths as well as unidirectional p2p and p2mp transport 173 paths. 
175 This document is a product of a joint Internet Engineering Task 176 Force (IETF) / International Telecommunication Union 177 Telecommunication Standardization Sector (ITU-T) effort to 178 include an MPLS Transport Profile within the IETF MPLS and PWE3 179 architectures to support the capabilities and functionalities of 180 a packet transport network as defined by the ITU-T. 182 1.1. Contributing Authors 184 Dave Allan, Italo Busi, Ben Niven-Jenkins, Annamaria Fulignoli, 185 Enrique Hernandez-Valencia, Lieven Levrau, Vincenzo Sestito, 186 Nurit Sprecher, Huub van Helvoort, Martin Vigoureux, Yaacov 187 Weingarten, Rolf Winter 189 2. Conventions used in this document 191 2.1. Terminology 193 AC Attachment Circuit 195 DBN Domain Border Node 197 LER Label Edge Router 199 LME LSP Maintenance Entity 201 LMEG LSP ME Group 203 LSP Label Switched Path 205 LSR Label Switching Router 207 LSME LSP SPME ME 209 LSMEG LSP SPME ME Group 211 ME Maintenance Entity 213 MEG Maintenance Entity Group 215 MEP Maintenance Entity Group End Point 217 MIP Maintenance Entity Group Intermediate Point 219 PHB Per-hop Behavior 221 PME PW Maintenance Entity 223 PMEG PW ME Group 225 PSME PW SPME ME 227 PSMEG PW SPME ME Group 229 PW Pseudowire 230 SLA Service Level Agreement 232 SME Section Maintenance Entity Group 234 SPME Sub-path Maintenance Element 236 2.2. Definitions 238 This document uses the terms defined in RFC 5654 [5]. 240 This document uses the term 'Per-hop Behavior' as defined in RFC 241 2474 [15]. 243 This document uses the term LSP to indicate either a service LSP 244 or a transport LSP (as defined in [8]). 246 Where appropriate, the following definitions are aligned with 247 ITU-T recommendation Y.1731 [20] in order to have a common, 248 unambiguous terminology. They do not however intend to imply a 249 certain implementation but rather serve as a framework to 250 describe the necessary OAM functions for MPLS-TP. 252 Adaptation function: The adaptation function is the interface 253 between the client (sub)-layer and the server (sub-layer). 255 Data plane loopback: An out-of-service test where an interface 256 at either an intermediate or terminating node in a path is 257 placed into a data plane loopback state, such that all traffic 258 (including user data and OAM) received on the looped back 259 interface is sent on the reverse direction of the transport 260 path. 262 Note - The only way to send an OAM packet to a node set in the data 263 plane loopback mode is via TTL expiry, irrespectively on whether the 264 node is hosting MIPs or MEPs. 266 Domain Border Node (DBN): An intermediate node in an MPLS-TP LSP 267 that is at the boundary between two MPLS-TP OAM domains. Such a 268 node may be present on the edge of two domains or may be 269 connected by a link to the DBN at the edge of another OAM 270 domain. 272 Down MEP: A MEP that receives OAM packets from, and transmits 273 them towards, the direction of a server layer. 275 In-Service: The administrative status of a transport path when 276 it is unlocked. 278 Intermediate Node: An intermediate node transits traffic for an 279 LSP or a PW. An intermediate node may originate OAM flows 280 directed to downstream intermediate nodes or MEPs. 282 Loopback: See data plane loopback and OAM loopback definitions. 
284 Maintenance Entity (ME): Some portion of a transport path that 285 requires management bounded by two points (called MEPs), and the 286 relationship between those points to which maintenance and 287 monitoring operations apply (details in section 3.1). 289 Maintenance Entity Group (MEG): The set of one or more 290 maintenance entities that maintain and monitor a transport path 291 in an OAM domain. 293 MEP: A MEG end point (MEP) is capable of initiating (MEP Source) 294 and terminating (MEP Sink) OAM messages for fault management and 295 performance monitoring. MEPs define the boundaries of an ME 296 (details in section 3.3). 298 MEP Source: A MEP acts as MEP source for an OAM message when it 299 originates and inserts the message into the transport path for 300 its associated MEG. 302 MEP Sink: A MEP acts as a MEP sink for an OAM message when it 303 terminates and processes the messages received from its 304 associated MEG. 306 MIP: A MEG intermediate point (MIP) terminates and processes OAM 307 messages that are sent to this particular MIP and may generate 308 OAM messages in reaction to received OAM messages. It never 309 generates unsolicited OAM messages itself. A MIP resides within 310 a MEG between MEPs (details in section 3.3). 312 MPLS-TP Section: As defined in [8], it is the link traversed by 313 an MPLS-TP LSP. 315 OAM domain: A domain, as defined in [5], whose entities are 316 grouped for the purpose of keeping the OAM confined within that 317 domain. 319 Note - within the rest of this document the term "domain" is 320 used to indicate an "OAM domain" 322 OAM flow: Is the set of all OAM messages originating with a 323 specific MEP source that instrument one direction of a MEG. 325 OAM information element: An atomic piece of information 326 exchanged between MEPs and/or MIPs in MEG used by an OAM 327 application. 329 OAM loopback: It is the capability of a node to be directed by a 330 received OAM message to generate a reply back to the sender. OAM 331 loopback can work in-service and can support different OAM 332 functions (e.g., bidirectional on-demand connectivity 333 verification). 335 OAM Message: One or more OAM information elements that when 336 exchanged between MEPs or between MEPs and MIPs performs some 337 OAM functionality (e.g. connectivity verification) 339 OAM Packet: A packet that carries one or more OAM messages (i.e. 340 OAM information elements). 342 Out-of-Service: The administrative status of a transport path 343 when it is locked. When a path is in a locked condition, it is 344 blocked from carrying client traffic. 346 Path Segment: It is either a segment or a concatenated segment, 347 as defined in RFC 5654 [5]. 349 Signal Degrade: A condition declared by a MEP when the data 350 forwarding capability associated with a transport path has 351 deteriorated, as determined by PM. See also ITU-T recommendation 352 G.806 [13]. 354 Signal Fail: A condition declared by a MEP when the data 355 forwarding capability associated with a transport path has 356 failed, e.g. loss of continuity. See also ITU-T recommendation 357 G.806 [13]. 359 Tandem Connection: A tandem connection is an arbitrary part of a 360 transport path that can be monitored (via OAM) independent of 361 the end-to-end monitoring (OAM). The tandem connection may also 362 include the forwarding engine(s) of the node(s) at the 363 boundaries of the tandem connection. Tandem connections may be 364 nested but cannot overlap. See also ITU-T recommendation G.805 365 [19]. 
367 Up MEP: A MEP that transmits OAM packets towards, and receives 368 them from, the direction of the forwarding engine. 370 3. Functional Components 372 MPLS-TP is a packet-based transport technology based on the MPLS 373 and PW data plane architectures ([1], [2] and [4]) and is 374 capable of transporting service traffic where the 375 characteristics of information transfer between the transport 376 path endpoints can be demonstrated to comply with certain 377 performance and quality guarantees. 379 In order to describe the required OAM functionality, this 380 document introduces a set of functional components. 382 3.1. Maintenance Entity and Maintenance Entity Group 384 MPLS-TP OAM operates in the context of Maintenance Entities 385 (MEs) that define a relationship between any two points of a 386 transport path to which maintenance and monitoring operations 387 apply. The collection of one or more MEs that belongs to the 388 same transport path and that are maintained and monitored as a 389 group are known as a maintenance entity group (MEG) and the two 390 points that define a maintenance entity are called Maintenance 391 Entity Group (MEG) End Points (MEPs). In between these two 392 points zero or more intermediate points, called Maintenance 393 Entity Group Intermediate Points (MIPs), can exist and can be 394 shared by more than one ME in a MEG. 396 An abstract reference model for an ME is illustrated in Figure 1 397 below: 399 +-+ +-+ +-+ +-+ 400 |A|----|B|----|C|----|D| 401 +-+ +-+ +-+ +-+ 403 Figure 1 ME Abstract Reference Model 405 The instantiation of this abstract model to different MPLS-TP 406 entities is described in section 4. In Figure 1, nodes A and D 407 can be LERs for an LSP or the T-PEs for a MS-PW, nodes B and C 408 are LSRs for a LSP or S-PEs for a MS-PW. MEPs reside in nodes A 409 and D while MIPs reside in nodes B and C and may reside in A and 410 D. The links connecting adjacent nodes can be physical links, 411 (sub-)layer LSPs/SPMEs, or serving layer paths. 413 This functional model defines the relationships between all OAM 414 entities from a maintenance perspective, to allow each 415 Maintenance Entity to monitor and manage the (sub-)layer network 416 under its responsibility and to localize problems efficiently. 418 An MPLS-TP Maintenance Entity Group may be defined to monitor 419 the transport path for fault and/or performance management. 421 The MEPs that form a MEG bound the scope of an OAM flows to the 422 MEG (i.e. within the domain of the transport path that is being 423 monitored and managed). There are two exceptions to this: 425 1) A misbranching fault may cause OAM packets to be delivered to 426 a MEP that is not in the MEG of origin. 428 2) An out-of-band return path may be used between a MIP or a MEP 429 and the originating MEP. 431 In case of unidirectional point-to-point transport paths, a 432 single unidirectional Maintenance Entity is defined to monitor 433 it. 435 In case of associated bi-directional point-to-point transport 436 paths, two independent unidirectional Maintenance Entities are 437 defined to independently monitor each direction. This has 438 implications for transactions that terminate at or query a MIP, 439 as a return path from MIP to source MEP does not necessarily 440 exist in the MEG. 442 In case of co-routed bi-directional point-to-point transport 443 paths, a single bidirectional Maintenance Entity is defined to 444 monitor both directions congruently. 
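The point-to-point cases above can be summarized, for illustration only, with the following sketch (Python is used purely as non-normative pseudo-code; the function and value names are hypothetical and not part of this framework):

   # Illustrative (non-normative) summary of the point-to-point cases
   # described above: how many MEs a MEG contains and how many directions
   # each ME monitors.  All names are hypothetical.

   def p2p_meg_composition(path_type):
       """Return the list of MEs instantiated for a p2p transport path."""
       if path_type == "unidirectional":
           # A single unidirectional ME monitors the path.
           return [{"me": "A->Z", "directions": 1}]
       if path_type == "associated-bidirectional":
           # Two independent unidirectional MEs, one per direction.
           return [{"me": "A->Z", "directions": 1},
                   {"me": "Z->A", "directions": 1}]
       if path_type == "co-routed-bidirectional":
           # A single bidirectional ME monitors both directions congruently.
           return [{"me": "A<->Z", "directions": 2}]
       raise ValueError("unknown point-to-point path type")

   assert len(p2p_meg_composition("associated-bidirectional")) == 2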
446 In case of unidirectional point-to-multipoint transport paths, a
447 single unidirectional Maintenance Entity for each leaf is
448 defined to monitor the transport path from the root to that
449 leaf.

451 In all cases, portions of the transport path may be monitored by
452 the instantiation of SPMEs (see section 3.2).

454 The reference model for the p2mp MEG is represented in Figure 2.

456 +-+
457 /--|D|
458 / +-+
459 +-+
460 /--|C|
461 +-+ +-+/ +-+\ +-+
462 |A|----|B| \--|E|
463 +-+ +-+\ +-+ +-+
464 \--|F|
465 +-+

467 Figure 2 Reference Model for p2mp MEG

469 In case of p2mp transport paths, the OAM measurements are
470 independent for each ME (A-D, A-E and A-F):

472 o Fault conditions - some faults may impact more than one ME
473 depending on where the failure is located;

475 o Packet loss - packet dropping may impact more than one ME
476 depending on where the packets are lost;

478 o Packet delay - will be unique per ME.

480 Each leaf (i.e. D, E and F) terminates OAM flows to monitor the
481 ME between itself and the root while the root (i.e. A) generates
482 OAM messages common to all the MEs of the p2mp MEG. All nodes
483 may implement a MIP in the corresponding MEG.

485 3.2. Nested MEGs: SPMEs and Tandem Connection Monitoring

487 In order to verify and maintain performance and quality
488 guarantees, there is a need to not only apply OAM functionality
489 on a transport path granularity (e.g. LSP or MS-PW), but also on
490 arbitrary parts of transport paths, defined as Tandem
491 Connections, between any two arbitrary points along a transport
492 path.

494 Sub-path Maintenance Elements (SPMEs), as defined in [8], are
495 instantiated to provide monitoring of a portion of a set of co-
496 routed transport paths (LSPs or MS-PWs). The operational aspects
497 of instantiating SPMEs are out of scope of this memo.

499 SPMEs can also be employed to meet the requirement to provide
500 tandem connection monitoring (TCM).

502 TCM for a given path segment of a transport path is implemented
503 by creating an SPME that has a 1:1 association with the path
504 segment of the transport path that is to be monitored.

506 In the TCM case, this means that the SPME used to provide TCM
507 can carry one and only one transport path, thus allowing
508 direct correlation between all fault management and performance
509 monitoring information gathered for the SPME and the monitored
510 path segment of the end-to-end transport path. The SPME is
511 monitored using normal LSP monitoring.

513 Where resiliency is required across an arbitrary portion of a
514 transport path, this may be implemented by more than one
515 diversely routed SPME with common end points, where only one
516 SPME is active at any given time.

518 There are a number of implications to this approach:

520 1) The SPME would use the uniform model of TC code point copying
521 between sub-layers for diffserv such that the E2E markings
522 and PHB treatment for the transport path are preserved by the
523 SPMEs.

525 2) The SPME normally would use the short-pipe model for TTL
526 handling [6] such that MIP addressing for the E2E entity
527 would not be impacted by the presence of the SPME, but it
528 should be possible for an operator to specify use of the
529 uniform model.

531 3) PM statistics need to be adjusted for the encapsulation
532 overhead of the additional SPME sub-layer.

534 Note that points 1 and 2 above assume that the TTL copying mode
535 and TC copying modes are independently configurable for an LSP.
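As a non-normative illustration of points 1 and 2 above, the following sketch shows how the TC and TTL fields of a pushed SPME label entry could be derived under the uniform model for TC and the short-pipe model for TTL (Python is used purely as illustrative pseudo-code; all names are hypothetical):

   # Illustrative (non-normative) sketch of points 1 and 2 above: TC is
   # copied from the E2E entry (uniform model, preserving E2E markings
   # and PHB treatment), while TTL follows the short-pipe model (the SPME
   # TTL is set independently and the E2E TTL is left untouched, so MIP
   # addressing on the E2E path is unaffected).  All names are hypothetical.

   def push_spme_label(e2e_entry, spme_label, spme_hops):
       """Return the label stack (top entry first) after SPME encapsulation.

       e2e_entry: dict with 'label', 'tc' and 'ttl' of the end-to-end entry
       spme_hops: number of hops spanned by the SPME (seeds the SPME TTL)
       """
       spme_entry = {
           "label": spme_label,
           "tc": e2e_entry["tc"],   # uniform model for TC code points
           "ttl": spme_hops,        # short-pipe model: independent of E2E TTL
       }
       return [spme_entry, dict(e2e_entry)]

   # Example: an E2E LSP label entry encapsulated into a 3-hop SPME
   stack = push_spme_label({"label": 2001, "tc": 5, "ttl": 200}, 3001, 3)
   assert stack[0]["tc"] == 5 and stack[1]["ttl"] == 200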
537 There are specific issues with the use of the uniform model of
538 TTL copying for an SPME:

540 1. Any MIP in the SPME sub-layer is not part of the transport path
541 MEG; hence only an out-of-band return path would be available.

543 2. The instantiation of a lower level MEG or protection switching
544 actions within a lower level MEG may change the TTL distances to
545 MIPs in the higher level MEGs.

547 The endpoints of the SPME are MEPs and limit the scope of an OAM
548 flow to the MEG the MEPs belong to (i.e. within the
549 domain of the SPME that is being monitored and managed).

551 When considering SPMEs, it is important to consider that the
552 following properties apply to all MPLS-TP MEGs:

554 o They can be nested but not overlapped, e.g. a MEG may cover a
555 segment or a concatenated segment of another MEG, and may
556 also include the forwarding engine(s) of the node(s) at the
557 edge(s) of the segment or concatenated segment. However, when
558 MEGs are nested, the MEPs and MIPs in the nested MEG are no
559 longer part of the encompassing MEG.

561 o It is possible that MEPs of nested MEGs reside on a single
562 node but, again, implemented in such a way that they do not
563 overlap.

565 o Each OAM flow is associated with a single MEG.

567 o OAM packets that instrument a particular direction of a
568 transport path are subject to the same forwarding treatment
569 (i.e. fate share) as the data traffic and in some cases may
570 be required to have a common queuing discipline E2E with the
571 class of traffic monitored. OAM packets can be distinguished
572 from the data traffic using the GAL and ACH constructs [7]
573 for LSP and Section or the ACH construct [3] and [7] for
574 (MS-)PW.

576 o When an SPME is instantiated after the transport path has been
577 instantiated, the TTL addressing of the MIPs will change.

579 3.3. MEG End Points (MEPs)

581 MEG End Points (MEPs) are the source and sink points of a MEG.
582 In the context of an MPLS-TP LSP, only LERs can implement MEPs,
583 while in the context of an SPME, LSRs for the MPLS-TP LSP can be
584 LERs for SPMEs that contribute to the overall monitoring
585 infrastructure for the transport path. Regarding PWs, only T-PEs
586 can implement MEPs, while for SPMEs supporting one or more PWs
587 both T-PEs and S-PEs can implement SPME MEPs. Any MPLS-TP LSR
588 can implement a MEP for an MPLS-TP Section.

590 MEPs are responsible for activating and controlling all of the
591 proactive and on-demand monitoring OAM functionality for the
592 MEG. There is a separate class of notifications (such as LKR and
593 AIS) that are originated by intermediate nodes and triggered by
594 server layer events. A MEP is capable of originating and
595 terminating OAM messages for fault management and performance
596 monitoring. These OAM messages are encapsulated into an OAM
597 packet using the G-ACh as defined in RFC 5586 [7]. In this case
598 the G-ACh message is an OAM message and the channel type
599 indicates an OAM message. A MEP terminates all the OAM packets
600 it receives from the MEG it belongs to and silently discards
601 those that do not belong to it (note that in the case of a
602 mis-connectivity defect further actions are taken). The MEG the
603 OAM packet belongs to is inferred from the MPLS or PW label or,
604 in case of an MPLS-TP Section, the MEG is inferred from the port
605 on which an OAM packet was received with the GAL at the top of
606 the label stack.
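The label-based association of a received OAM packet with its MEG, described above, can be illustrated by the following non-normative sketch (Python is used purely as illustrative pseudo-code; the data structures and names are hypothetical and no packet format is implied):

   # Illustrative (non-normative) sketch of the demultiplexing described
   # above: the MEG of a received OAM packet is inferred from the MPLS/PW
   # label or, for an MPLS-TP Section, from the receiving port when the
   # GAL is at the top of the label stack.  All names are hypothetical.

   GAL = 13  # G-ACh Label value (RFC 5586)

   def meg_of_oam_packet(label_stack, ingress_port, lsp_pw_megs, section_megs):
       """Return the MEG a received OAM packet belongs to, or None."""
       if label_stack[0] == GAL:
           # MPLS-TP Section: the MEG is inferred from the receiving port.
           return section_megs.get(ingress_port)
       # LSP or (MS-)PW: the MEG is inferred from the MPLS or PW label.
       return lsp_pw_megs.get(label_stack[0])

   def mep_sink_handle(own_meg, label_stack, ingress_port, lsp_pw_megs,
                       section_megs):
       meg = meg_of_oam_packet(label_stack, ingress_port, lsp_pw_megs,
                               section_megs)
       if meg == own_meg:
           return "process"   # terminate and process the OAM message
       return "discard"       # silent discard; mis-connectivity defect
                              # handling is described in section 5.1.1.2

   # Example: a MEP of the MEG "LME-A" receiving an OAM packet on label 1001
   assert mep_sink_handle("LME-A", [1001], "if0",
                          {1001: "LME-A"}, {"if0": "SME-1"}) == "process"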
608 OAM packets may require the use of an available "out-of-band" 609 return path (as defined in [8]). In such cases sufficient 610 information is required in the originating transaction such that 611 the OAM reply packet can be constructed (e.g. IP address). 613 Each OAM solution will further detail its applicability as a 614 pro-active or on-demand mechanism as well as its usage when: 616 o The "in-band" return path exists and it is used; 618 o An "out-of-band" return path exists and it is used; 620 o Any return path does not exist or is not used. 622 Once a MEG is configured, the operator can configure which 623 proactive OAM functions to use on the MEG but the MEPs are 624 always enabled. A node at the edge of a MEG always supports a 625 MEP. 627 MEPs terminate all OAM packets received from the associated MEG. 628 As the MEP corresponds to the termination of the forwarding path 629 for a MEG at the given (sub-)layer, OAM packets never leak 630 outside of a MEG in a properly configured fault-free 631 implementation. 633 A MEP of an MPLS-TP transport path coincides with transport path 634 termination and monitors it for failures or performance 635 degradation (e.g. based on packet counts) in an end-to-end 636 scope. Note that both MEP source and MEP sink coincide with 637 transport paths' source and sink terminations. 639 The MEPs of an SPME are not necessarily coincident with the 640 termination of the MPLS-TP transport path and monitor a path 641 segment of the transport path for failures or performance 642 degradation (e.g. based on packet counts) only within the 643 boundary of the MEG for the SPME. 645 An MPLS-TP MEP sink passes a fault indication to its client 646 (sub-)layer network as a consequent action of fault detection. 648 A node at the edge of a MEG can either support per-node MEP or 649 per-interface MEP(s). A per-node MEP resides in an unspecified 650 location within the node while a per-interface MEP resides on a 651 specific side of the forwarding engine. In particular a per- 652 interface MEP is called "Up MEP" or "Down MEP" depending on its 653 location relative to the forwarding engine. 655 Source node Destination node 656 ------------------------ ------------------------ 657 | | | | 658 |----- -----| |----- -----| 659 | MEP | | | | | | MEP | 660 | | ---- | | | | ---- | | 661 | In |->-| FW |->-| Out |->- ->-| In |->-| FW |->-| Out | 662 | i/f | ---- | i/f | | i/f | ---- | i/f | 663 |----- -----| |----- -----| 664 | | | | 665 ------------------------ ------------------------ 666 (1) (2) 668 Figure 3 Example of per-interface Up MEPs 670 Figure 3 describes two examples of per-interface Up MEPs: An Up 671 Source MEP in a source node (case 1) and an Up Sink MEP in a 672 destination node (case 2). 674 The usage of per-interface Up MEPs extends the coverage of the 675 ME for both fault and performance monitoring closer to the edge 676 of the domain and allows the isolation of failures or 677 performance degradation to being within a node or either the 678 link or interfaces. 680 Each OAM solution will further detail the implications when used 681 with per-interface or per-node MEPs, if necessary. 683 It may occur that the Up MEPs of an SPME are set on both sides 684 of the forwarding engine such that the MEG is entirely internal 685 to the node. 687 It should be noted that a ME may span nodes that implement per 688 node MEPs and per-interface MEPs. 
This guarantees backward 689 compatibility with most of the existing LSRs that can implement 690 only a per-node MEP as in current implementations label 691 operations are largely performed on the ingress interface, hence 692 the exposure of the GAL as top label will occur at the ingress 693 interface. 695 Note that a MEP can only exist at the beginning and end of a 696 (sub-)layer in MPLS-TP. If there is a need to monitor some 697 portion of that LSP or PW, a new sub-layer in the form of an 698 SPME is created which permits MEPs and associated MEGs to be 699 created. 701 In the case where an intermediate node sends a message to a MEP, 702 it uses the top label of the stack at that point. 704 3.4. MEG Intermediate Points (MIPs) 706 A MEG Intermediate Point (MIP) is a function located at a point 707 between the MEPs of a MEG for a PW, LSP or SPME. 709 A MIP is capable of reacting to some OAM packets and forwarding all 710 the other OAM packets while ensuring fate sharing with data plane 711 packets. However, a MIP does not initiate unsolicited OAM packets, 712 but may be addressed by OAM packets initiated by one of the MEPs of 713 the MEG. A MIP can generate OAM packets only in response to OAM 714 packets that are sent on the MEG it belongs to. The OAM messages 715 generated by the MIP are sent in the direction of the source MEP and 716 not forwarded to the sink MEP. 718 An intermediate node within a MEG can either: 720 o Support per-node MIP (i.e. a single MIP per node in an 721 unspecified location within the node); 723 o Support per-interface MIP (i.e. two or more MIPs per node on 724 both sides of the forwarding engine). 726 Intermediate node 727 ------------------------ 728 | | 729 |----- -----| 730 | MIP | | MIP | 731 | | ---- | | 732 ->-| In |->-| FW |->-| Out |->- 733 | i/f | ---- | i/f | 734 |----- -----| 735 | | 736 ------------------------ 737 Figure 4 Example of per-interface MIPs 739 Figure 4 describes an example of two per-interface MIPs at an 740 intermediate node of a point-to-point MEG. 742 The usage of per-interface MIPs allows the isolation of failures 743 or performance degradation to being within a node or either the 744 link or interfaces. 746 When sending an OAM packet to a MIP, the source MEP should set 747 the TTL field to indicate the number of hops necessary to reach 748 the node where the MIP resides. 750 The source MEP should also include Target MIP information in the 751 OAM packets sent to a MIP to allow proper identification of the 752 MIP within the node. The MEG the OAM packet is associated with 753 is inferred from the MPLS label. 755 A node at the edge of a MEG can also support per-interface Up 756 MEPs and per-interface MIPs on either side of the forwarding 757 engine. 759 Once a MEG is configured, the operator can enable/disable the 760 MIPs on the nodes within the MEG. All the intermediate nodes and 761 possibly the end nodes host MIP(s). Local policy allows them to 762 be enabled per function and per MEG. The local policy is 763 controlled by the management system, which may delegate it to 764 the control plane. 766 3.5. Server MEPs 768 A server MEP is a MEP of a MEG that is either: 770 o Defined in a layer network that is "below", which is to say 771 encapsulates and transports the MPLS-TP layer network being 772 referenced, or 774 o Defined in a sub-layer of the MPLS-TP layer network that is 775 "below" which is to say encapsulates and transports the sub- 776 layer being referenced. 
778 A server MEP can coincide with a MIP or a MEP in the client 779 (MPLS-TP) (sub-)layer network. 781 A server MEP also provides server layer OAM indications to the 782 client/server adaptation function between the client (MPLS-TP) 783 (sub-)layer network and the server (sub-)layer network. The 784 adaptation function maintains state on the mapping of MPLS-TP 785 transport paths that are setup over that server (sub-)layer's 786 transport path. 788 For example, a server MEP can be either: 790 o A termination point of a physical link (e.g. 802.3), an SDH 791 VC or OTN ODU, for the MPLS-TP Section layer network, defined 792 in section 4.1; 794 o An MPLS-TP Section MEP for MPLS-TP LSPs, defined in section 795 4.2; 797 o An MPLS-TP LSP MEP for MPLS-TP PWs, defined in section 4.3; 799 o An MPLS-TP SPME MEP used for LSP path segment monitoring, as 800 defined in section 4.4, for MPLS-TP LSPs or higher-level 801 SPMEs providing LSP path segment monitoring; 803 o An MPLS-TP SPME MEP used for PW path segment monitoring, as 804 defined in section 4.5, for MPLS-TP PWs or higher-level SPMEs 805 providing PW path segment monitoring. 807 The server MEP can run appropriate OAM functions for fault detection 808 within the server (sub-)layer network, and provides a fault 809 indication to its client MPLS-TP layer network via the client/server 810 adaptation function. When the server layer is not MPLS-TP, server MEP 811 OAM functions are outside the scope of this document. 813 3.6. Configuration Considerations 815 When a control plane is not present, the management plane configures 816 these functional components. Otherwise they can be configured either 817 by the management plane or by the control plane. 819 Local policy allows disabling the usage of any available "out- 820 of-band" return path, as defined in [8], irrespective of what is 821 requested by the node originating the OAM packet. 823 SPMEs are usually instantiated when the transport path is 824 created by either the management plane or by the control plane 825 (if present). Sometimes an SPME can be instantiated after the 826 transport path is initially created. 828 3.7. P2MP considerations 830 All the traffic sent over a p2mp transport path, including OAM 831 packets generated by a MEP, is sent (multicast) from the root to 832 all the leaves. As a consequence: 834 o To send an OAM packet to all leaves, the source MEP can 835 send a single OAM packet that will be delivered by the 836 forwarding plane to all the leaves and processed by all the 837 leaves. 839 o To send an OAM packet to a single leaf, the source MEP 840 sends a single OAM packet that will be delivered by the 841 forwarding plane to all the leaves but contains sufficient 842 information to identify a target leaf, and therefore is 843 processed only by the target leaf and ignored by the other 844 leaves. 846 o To send an OAM packet to a single MIP, the source MEP sends 847 a single OAM packet with the TTL field indicating the 848 number of hops necessary to reach the node where the MIP 849 resides. This packet will be delivered by the forwarding 850 plane to all intermediate nodes at the same TTL distance of 851 the target MIP and to any leaf that is located at a shorter 852 distance. The OAM message must contain sufficient 853 information to identify the target MIP and therefore is 854 processed only by the target MIP. 
856 o In order to send an OAM packet to M leaves (i.e., a subset 857 of all the leaves), the source MEP sends M different OAM 858 packets targeted to each individual leaf in the group of M 859 leaves. Aggregated or subsetting mechanisms are outside the 860 scope of this document. 862 P2MP paths are unidirectional, therefore any return path to a 863 source MEP for on-demand transactions will be out-of-band. A 864 mechanism to scope the set of MEPs or MIPs expected to respond 865 to a given "on-demand" transaction is useful as it relieves the 866 source MEP of the requirement to filter and discard undesired 867 responses as normally TTL exhaustion will address all MIPs at a 868 given distance from the source, and failure to exhaust TTL will 869 address all MEPs. 871 4. Reference Model 873 The reference model for the MPLS-TP framework builds upon the 874 concept of a MEG, and its associated MEPs and MIPs, to support 875 the functional requirements specified in RFC 5860 [11]. 877 The following MPLS-TP MEGs are specified in this document: 879 o A Section Maintenance Entity Group (SME), allowing monitoring 880 and management of MPLS-TP Sections (between MPLS LSRs). 882 o An LSP Maintenance Entity Group (LME), allowing monitoring 883 and management of an end-to-end LSP (between LERs). 885 o A PW Maintenance Entity Group (PME), allowing monitoring and 886 management of an end-to-end SS/MS-PWs (between T-PEs). 888 o An LSP SPME ME Group (LSMEG), allowing monitoring and 889 management of an SPME (between any LERs/LSRs along an LSP). 891 o A PW SPME ME Group (PSMEG), allowing monitoring and 892 management of an SPME (between any T-PEs/S-PEs along the 893 (MS-)PW). 895 The MEGs specified in this MPLS-TP framework are compliant with 896 the architecture framework for MPLS-TP MS-PWs [4] and LSPs [1]. 898 Hierarchical LSPs are also supported in the form of SPMEs. In 899 this case, each LSP in the hierarchy is a different sub-layer 900 network that can be monitored, independently from higher and 901 lower level LSPs in the hierarchy, on an end-to-end basis (from 902 LER to LER) by a SPME. It is possible to monitor a portion of a 903 hierarchical LSP by instantiating a hierarchical SPME between 904 any LERs/LSRs along the hierarchical LSP. 906 Native |<------------------ MS-PW1Z ---------------->| Native 907 Layer | | Layer 908 Service | || |<-LSP3X->| || | Service 909 (AC1) V V LSP V V LSP V V LSP V V (AC2) 910 +----+ +-+ +----+ +----+ +-+ +----+ 911 +----+ |TPE1| | | |SPE3| |SPEX| | | |TPEZ| +----+ 912 | | | |=======| |=========| |=======| | | | 913 | CE1|--|.......PW13......|...PW3X..|......PWXZ.......|---|CE2 | 914 | | | |=======| |=========| |=======| | | | 915 +----+ | 1 | |2| | 3 | | X | |Y| | Z | +----+ 916 +----+ +-+ +----+ +----+ +-+ +----+ 917 . . . . 918 | | | | 919 |<--- Domain 1 -->| |<--- Domain Z -->| 920 ^----------------- PW1Z PME -----------------^ 921 ^--- PW13 PSME ---^ ^--- PWXZ PSME ---^ 922 ^-------^ ^-------^ 923 LSP13 LME LSPXZ LME 924 ^--^ ^--^ ^---------^ ^--^ ^--^ 925 Sec12 Sec23 Sec3X SecXY SecYZ 926 SME SME SME SME SME 928 TPE1: Terminating Provider Edge 1 SPE2: Switching Provider Edge 929 3 930 TPEX: Terminating Provider Edge X SPEZ: Switching Provider Edge 931 Z 933 ^---^ ME ^ MEP ==== LSP .... PW 935 Figure 5 Reference Model for the MPLS-TP OAM Framework 937 Figure 5 depicts a high-level reference model for the MPLS-TP 938 OAM framework. The figure depicts portions of two MPLS-TP 939 enabled network domains, Domain 1 and Domain Z. 
In Domain 1, 940 LSR1 is adjacent to LSR2 via the MPLS-TP Section Sec12 and LSR2 941 is adjacent to LSR3 via the MPLS-TP Section Sec23. Similarly, in 942 Domain Z, LSRX is adjacent to LSRY via the MPLS-TP Section SecXY 943 and LSRY is adjacent to LSRZ via the MPLS-TP Section SecYZ. In 944 addition, LSR3 is adjacent to LSRX via the MPLS-TP Section 3X. 946 Figure 5 also shows a bi-directional MS-PW (PW1Z) between AC1 on 947 TPE1 and AC2 on TPEZ. The MS-PW consists of three bi-directional 948 PW path segments: 1) PW13 path segment between T-PE1 and S-PE3 949 via the bi-directional LSP13 LSP, 2) PW3X path segment between 950 S-PE3 and S-PEX, via the bi-directional LSP3X LSP, and 3) PWXZ 951 path segment between S-PEX and T-PEZ via the bi-directional 952 LSPXZ LSP. 954 The MPLS-TP OAM procedures that apply to a MEG are expected to 955 operate independently from procedures on other MEGs. Yet, this 956 does not preclude that multiple MEGs may be affected 957 simultaneously by the same network condition, for example, a 958 fiber cut event. 960 Note that there are no constrains imposed by this OAM framework 961 on the number, or type (p2p, p2mp, LSP or PW), of MEGs that may 962 be instantiated on a particular node. In particular, when 963 looking at Figure 5, it should be possible to configure one or 964 more MEPs on the same node if that node is the endpoint of one 965 or more MEGs. 967 Figure 5 does not describe a PW3X PSME because typically SPMEs 968 are used to monitor an OAM domain (like PW13 and PWXZ PSMEs) 969 rather than the segment between two OAM domains. However the OAM 970 framework does not pose any constraints on the way SPMEs are 971 instantiated as long as they are not overlapping. 973 The subsections below define the MEGs specified in this MPLS-TP 974 OAM architecture framework document. Unless otherwise stated, 975 all references to domains, LSRs, MPLS-TP Sections, LSPs, 976 pseudowires and MEGs in this section are made in relation to 977 those shown in Figure 5. 979 4.1. MPLS-TP Section Monitoring (SME) 981 An MPLS-TP Section ME (SME) is an MPLS-TP maintenance entity 982 intended to monitor an MPLS-TP Section as defined in RFC 5654 983 [5]. An SME may be configured on any MPLS-TP section. SME OAM 984 packets must fate share with the user data packets sent over the 985 monitored MPLS-TP Section. 987 An SME is intended to be deployed for applications where it is 988 preferable to monitor the link between topologically adjacent 989 (next hop in this layer network) MPLS-TP LSRs rather than 990 monitoring the individual LSP or PW path segments traversing the 991 MPLS-TP Section and the server layer technology does not provide 992 adequate OAM capabilities. 994 Figure 5 shows five Section MEs configured in the network 995 between AC1 and AC2: 997 1. Sec12 ME associated with the MPLS-TP Section between LSR 1 998 and LSR 2, 1000 2. Sec23 ME associated with the MPLS-TP Section between LSR 2 1001 and LSR 3, 1003 3. Sec3X ME associated with the MPLS-TP Section between LSR 3 1004 and LSR X, 1006 4. SecXY ME associated with the MPLS-TP Section between LSR X 1007 and LSR Y, and 1009 5. SecYZ ME associated with the MPLS-TP Section between LSR Y 1010 and LSR Z. 1012 4.2. MPLS-TP LSP End-to-End Monitoring (LME) 1014 An MPLS-TP LSP ME (LME) is an MPLS-TP maintenance entity 1015 intended to monitor an end-to-end LSP between two LERs. An LME 1016 may be configured on any MPLS LSP. LME OAM packets must fate 1017 share with user data packets sent over the monitored MPLS-TP 1018 LSP. 
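For illustration only, the MEGs of Figure 5 can also be represented as spans of MPLS-TP Sections along the node sequence 1-2-3-X-Y-Z; the following non-normative sketch (Python used purely as pseudo-code, with hypothetical data structures) shows these spans and that the MEGs nest without overlapping:

   # Illustrative (non-normative) representation of the MEGs of Figure 5
   # as spans of MPLS-TP Sections along the node sequence 1-2-3-X-Y-Z.
   # The data structures are hypothetical; they simply illustrate that
   # the MEGs of the reference model nest without overlapping.

   SECTIONS = ["Sec12", "Sec23", "Sec3X", "SecXY", "SecYZ"]

   MEG_SPANS = {
       "Sec12 SME": {"Sec12"}, "Sec23 SME": {"Sec23"}, "Sec3X SME": {"Sec3X"},
       "SecXY SME": {"SecXY"}, "SecYZ SME": {"SecYZ"},
       "LSP13 LME": {"Sec12", "Sec23"}, "LSPXZ LME": {"SecXY", "SecYZ"},
       "PW13 PSME": {"Sec12", "Sec23"}, "PWXZ PSME": {"SecXY", "SecYZ"},
       "PW1Z PME":  set(SECTIONS),
   }

   def nested_or_disjoint(a, b):
       """MEGs may be nested (one within the other) but must not overlap."""
       return a.isdisjoint(b) or a <= b or b <= a

   assert all(nested_or_disjoint(MEG_SPANS[x], MEG_SPANS[y])
              for x in MEG_SPANS for y in MEG_SPANS)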
1020 An LME is intended to be deployed in scenarios where it is
1021 desirable to monitor an entire LSP between its LERs, rather
1022 than, say, monitoring individual PWs.

1024 Figure 5 depicts two LMEs configured in the network between AC1
1025 and AC2: 1) the LSP13 LME between LER 1 and LER 3, and 2) the
1026 LSPXZ LME between LER X and LER Z. Note that the presence of an
1027 LSP3X LME in such a configuration is optional, hence, not
1028 precluded by this framework. For instance, the service providers
1029 may prefer to monitor the MPLS-TP Section between the two LSRs
1030 rather than the individual LSPs.

1032 4.3. MPLS-TP PW Monitoring (PME)

1034 An MPLS-TP PW ME (PME) is an MPLS-TP maintenance entity intended
1035 to monitor an SS-PW or MS-PW between a pair of T-PEs. A PME can
1036 be configured on any SS-PW or MS-PW. PME OAM packets must fate
1037 share with the user data packets sent over the monitored PW.

1039 A PME is intended to be deployed in scenarios where it is
1040 desirable to monitor an entire PW between a pair of MPLS-TP
1041 enabled T-PEs rather than monitoring the LSP aggregating
1042 multiple PWs between PEs.

1044 |<----------------- MS-PW1Z ----------------->|
1045 | |
1046 | || |<-LSP3X->| || |
1047 V V LSP V V LSP V V LSP V V
1048 +----+ +-+ +----+ +----+ +-+ +----+
1049 +---+ |TPE1| | | |SPE3| |SPEX| | | |TPEZ| +---+
1050 | |AC1| |=======| |=========| |=======| |AC2| |
1051 |CE1|---|.......PW13......|...PW3X..|.......PWXZ......|---|CE2|
1052 | | | |=======| |=========| |=======| | | |
1053 +---+ | 1 | |2| | 3 | | X | |Y| | Z | +---+
1054 +----+ +-+ +----+ +----+ +-+ +----+
1055 ^-------------------PW1Z PME------------------^

1057 Figure 6 MPLS-TP PW ME (PME)

1059 Figure 6 depicts an MS-PW (MS-PW1Z) consisting of three path
1060 segments: PW13, PW3X and PWXZ, and its associated end-to-end PME
1061 (PW1Z PME).

1063 4.4. MPLS-TP LSP SPME Monitoring (LSME)

1065 An MPLS-TP LSP SPME ME (LSME) is an MPLS-TP SPME with associated
1066 maintenance entity intended to monitor an arbitrary part of an
1067 LSP between the pair of MEPs instantiated for the SPME,
1068 independently from the end-to-end monitoring (LME). An LSME can
1069 monitor an LSP segment or concatenated segment and it may also
1070 include the forwarding engine(s) of the node(s) at the edge(s)
1071 of the segment or concatenated segment.

1073 When an SPME is established between non-adjacent LSRs, the edges
1074 of the SPME become adjacent at the LSP sub-layer network and any
1075 LSRs that were previously in between become LSRs for the SPME.

1077 Multiple hierarchical LSMEs can be configured on any LSP. LSME
1078 OAM packets must fate share with the user data packets sent over
1079 the monitored LSP path segment.

1081 An LSME can be defined between the following entities:

1083 o The end node and any intermediate node of a given LSP.

1085 o Any two intermediate nodes of a given LSP.

1087 An LSME is intended to be deployed in scenarios where it is
1088 preferable to monitor the behaviour of a part of an LSP or set
1089 of LSPs rather than the entire LSP itself, for example when
1090 there is a need to monitor a part of an LSP that extends beyond
1091 the administrative boundaries of an MPLS-TP enabled
1092 administrative domain.
1094 |<-------------------- PW1Z ------------------->|
1095 | |
1096 | |<-------------LSP1Z LSP------------->| |
1097 | |<-LSP13->| || |<-LSPXZ->| |
1098 V V S-LSP V V S-LSP V V S-LSP V V
1099 +----+ +-+ +----+ +----+ +-+ +----+
1100 +----+ | PE1| | | |DBN3| |DBNX| | | | PEZ| +----+
1101 | |AC1| |=====================================| |AC2| |
1102 | CE1|---|.....................PW1Z......................|---|CE2 |
1103 | | | |=====================================| | | |
1104 +----+ | 1 | |2| | 3 | | X | |Y| | Z | +----+
1105 +----+ +-+ +----+ +----+ +-+ +----+
1106 . . . .
1107 | | | |
1108 |<---- Domain 1 --->| |<---- Domain Z --->|

1110 ^---------^ ^---------^
1111 LSP13 LSME LSPXZ LSME
1112 ^-------------------------------------^
1113 LSP1Z LME

1115 DBN: Domain Border Node

1117 Figure 7 MPLS-TP LSP SPME ME (LSME)

1119 Figure 7 depicts a variation of the reference model in Figure 5
1120 where there is an end-to-end LSP (LSP1Z) between PE1 and PEZ.
1121 LSP1Z consists of, at least, three LSP Concatenated Segments:
1122 LSP13, LSP3X and LSPXZ. In this scenario there are two separate
1123 LSMEs configured to monitor the LSP1Z: 1) an LSME monitoring the
1124 LSP13 Concatenated Segment on Domain 1 (LSP13 LSME), and 2) an
1125 LSME monitoring the LSPXZ Concatenated Segment on Domain Z
1126 (LSPXZ LSME).

1128 It is worth noticing that LSMEs can coexist with the LME
1129 monitoring the end-to-end LSP and that LSME MEPs and LME MEPs
1130 can be coincident in the same node (e.g. PE1 node supports both
1131 the LSP1Z LME MEP and the LSP13 LSME MEP).

1133 4.5. MPLS-TP MS-PW SPME Monitoring (PSME)

1135 An MPLS-TP MS-PW SPME Monitoring ME (PSME) is an MPLS-TP SPME
1136 with associated maintenance entity intended to monitor an
1137 arbitrary part of an MS-PW between the pair of MEPs instantiated
1138 for the SPME, independently from the end-to-end monitoring
1139 (PME). A PSME can monitor a PW segment or concatenated segment
1140 and it may also include the forwarding engine(s) of the node(s)
1141 at the edge(s) of the segment or concatenated segment. A PSME is
1142 no different from an SPME; it is simply named as such to discuss
1143 SPMEs specifically in a PW context.

1145 When an SPME is established between non-adjacent S-PEs, the edges
1146 of the SPME become adjacent at the MS-PW sub-layer network and
1147 any S-PEs that were previously in between become LSRs for the
1148 SPME.

1150 S-PE placement is typically dictated by considerations other
1151 than OAM. S-PEs will frequently reside at operational boundaries
1152 such as the transition from distributed (CP) to centralized
1153 (NMS) control or at a routing area boundary. As such the
1154 architecture would appear not to have the flexibility that
1155 arbitrary placement of SPME segments would imply. Support for an
1156 arbitrary placement of PSME would require the definition of
1157 additional PW sub-layering.

1158 Multiple hierarchical PSMEs can be configured on any MS-PW. PSME
1159 OAM packets fate share with the user data packets sent over the
1160 monitored PW path segment.

1162 A PSME can be defined between the following entities:

1164 o A T-PE and any S-PE of a given MS-PW.

1166 o Any two S-PEs of a given MS-PW.

1168 Note that, in line with the SPME description in section 3.2, when a
1169 PW SPME is instantiated after the MS-PW has been instantiated, the
1170 TTL addressing of the MIPs may change and MIPs in the nested MEG are
1171 no longer part of the encompassing MEG.
This means that the S-PE
1172 nodes hosting these MIPs are no longer S-PEs but P nodes at the SPME
1173 LSP level. The consequences are that the S-PEs hosting the PSME MEPs
1174 become adjacent S-PEs. This is no different from the operation of
1175 SPMEs in general.

1177 A PSME is intended to be deployed in scenarios where it is
1178 preferable to monitor the behaviour of a part of an MS-PW rather
1179 than the entire end-to-end PW itself, for example to monitor an
1180 MS-PW path segment within a given network domain of an inter-
1181 domain MS-PW.

1183 |<----------------- MS-PW1Z ------------------>|
1184 | |
1185 | || |<-LSP3X-->| || |
1186 V V LSP V V LSP V V LSP V V
1187 +----+ +-+ +----+ +----+ +-+ +----+
1188 +---+ |TPE1| | | |SPE3| |SPEX| | | |TPEZ| +---+
1189 | |AC1| |=======| |==========| |=======| |AC2| |
1190 |CE1|---|.......PW13......|...PW3X...|.......PWXZ......|---|CE2|
1191 | | | |=======| |==========| |=======| | | |
1192 +---+ | 1 | |2| | 3 | | X | |Y| | Z | +---+
1193 +----+ +-+ +----+ +----+ +-+ +----+

1195 ^--- PW13 PSME ---^ ^--- PWXZ PSME ----^
1196 ^-------------------PW1Z PME-------------------^

1198 Figure 8 MPLS-TP MS-PW SPME Monitoring (PSME)

1200 Figure 8 depicts the same MS-PW (MS-PW1Z) between AC1 and AC2 as
1201 in Figure 6. In this scenario there are two separate PSMEs
1202 configured to monitor MS-PW1Z: 1) a PSME monitoring the PW13
1203 MS-PW path segment on Domain 1 (PW13 PSME), and 2) a PSME
1204 monitoring the PWXZ MS-PW path segment on Domain Z (PWXZ
1205 PSME).

1207 It is worth noticing that PSMEs can coexist with the PME
1208 monitoring the end-to-end MS-PW and that PSME MEPs and PME MEPs
1209 can be coincident in the same node (e.g. TPE1 node supports both
1210 the PW1Z PME MEP and the PW13 PSME MEP).

1212 4.6. Fate sharing considerations for multilink

1214 Multilink techniques are in use today and are expected to
1215 continue to be used in future deployments. These techniques
1216 include Ethernet Link Aggregation [21] and the use of Link
1217 Bundling for MPLS [17] where the option to spread traffic over
1218 component links is supported and enabled. While the use of Link
1219 Bundling can be controlled at the MPLS-TP layer, use of Link
1220 Aggregation (or any server layer specific multilink) is not
1221 necessarily under control of the MPLS-TP layer. Other techniques
1222 may emerge in the future. These techniques share the
1223 characteristic that an LSP may be spread over a set of component
1224 links and therefore be reordered, but no flow within the LSP is
1225 reordered (except when very infrequent and minimally disruptive
1226 load rebalancing occurs).

1228 The use of multilink techniques may be prohibited or permitted
1229 in any particular deployment. If multilink techniques are used,
1230 the deployment can be considered to be only partially MPLS-TP
1231 compliant; however, this is unlikely to prevent their use.

1233 The implication for OAM is that not all components of a
1234 multilink will be exercised, independent server layer OAM being
1235 required to exercise the aggregated link components. This has
1236 further implications for MIP and MEP placement, as per-interface
1237 MIPs or "down" MEPs on a multilink interface are akin to a layer
1238 violation, as they instrument at the granularity of the server
1239 layer. The implications for reduced OAM loss measurement
1240 functionality are documented in sections 5.5.3 and 6.2.3.

1242 5. OAM Functions for proactive monitoring
1244 In this document, proactive monitoring refers to OAM operations
1245 that are either configured to be carried out periodically and
1246 continuously or preconfigured to act on certain events such as
1247 alarm signals.

1249 Proactive monitoring is usually performed "in-service". Such
1250 transactions are universally MEP to MEP in operation while
1251 notifications emerging from the server layer are MIP to MEP or
1252 can be MIP to MIP. The control and measurement considerations
1253 are:

1255 1. Proactive monitoring for a MEG is typically configured at
1256 transport path creation time.

1258 2. The operational characteristics of in-band measurement
1259 transactions (e.g., CV, LM etc.) are configured at the MEPs.

1261 3. Server layer events are reported by transactions originating
1262 at intermediate nodes.

1264 4. The measurements resulting from proactive monitoring are
1265 typically only reported outside of the MEG as unsolicited
1266 notifications for "out of profile" events, such as faults or
1267 loss measurement indication of excessive impairment of
1268 information transfer capability.

1270 5. The measurements resulting from proactive monitoring may be
1271 periodically harvested by an EMS/NMS.

1273 For statically provisioned transport paths the above information
1274 is statically configured; for dynamically established transport
1275 paths the configuration information is signaled via the control
1276 plane or configured via the management plane.

1278 The operator enables/disables some of the consequent actions
1279 defined in section 5.1.2.

1281 5.1. Continuity Check and Connectivity Verification

1283 Proactive Continuity Check functions, as required in section
1284 2.2.2 of RFC 5860 [11], are used to detect a loss of continuity
1285 (LOC) defect between two MEPs in a MEG.

1287 Proactive Connectivity Verification functions, as required in
1288 section 2.2.3 of RFC 5860 [11], are used to detect an unexpected
1289 connectivity defect between two MEGs (e.g. mismerging or
1290 misconnection), as well as unexpected connectivity within the
1291 MEG with an unexpected MEP.

1293 Both functions are based on the (proactive) generation of OAM
1294 packets by the source MEP that are processed by the sink MEP. As
1295 a consequence, these two functions are grouped together into
1296 Continuity Check and Connectivity Verification (CC-V) OAM
1297 packets.

1299 In order to perform pro-active Connectivity Verification, each
1300 CC-V OAM packet also includes a globally unique Source MEP
1301 identifier. When used to perform only pro-active Continuity
1302 Check, the CC-V OAM packet will not include any globally unique
1303 Source MEP identifier. Different formats of MEP identifiers are
1304 defined in [10] to address different environments. When MPLS-TP
1305 is deployed in transport network environments where IP
1306 addressing is not used in the forwarding plane, the ICC-based
1307 format for MEP identification is used. When MPLS-TP is deployed
1308 in an IP-based environment, the IP-based MEP identification is
1309 used.

1311 As a consequence, it is not possible to detect misconnections
1312 between two MEGs monitored only for continuity as neither the
1313 OAM message type nor OAM message content provides sufficient
1314 information to disambiguate an invalid source.
To expand:

o For CC leaking into a CC monitored MEG - undetectable

o For CV leaking into a CC monitored MEG - presence of additional
  Source MEP identifier allows detecting the fault

o For CC leaking into a CV monitored MEG - lack of additional
  Source MEP identifier allows detecting the fault.

o For CV leaking into a CV monitored MEG - different Source MEP
  identifier permits the fault to be identified.

CC-V OAM packets are transmitted at a regular, operator-
configurable rate. The default CC-V transmission periods are
application dependent (see section 5.1.3).

Proactive CC-V OAM packets are transmitted with the "minimum loss
probability PHB" within the transport path (LSP, PW) they are
monitoring. This PHB is configurable on a network operator basis.
PHBs can be translated at the network borders by the same function
that translates them for user data traffic. The implication is
that CC-V fate shares with much of the forwarding implementation,
but not all aspects of PHB processing are exercised. Either
on-demand tools are used for finer grained fault finding or an
implementation may utilize a CC-V flow per PHB with the entire
E-LSP fate sharing with any individual PHB.

In a bidirectional point-to-point transport path, when a MEP is
enabled to generate pro-active CC-V OAM packets with a configured
transmission rate, it also expects to receive pro-active CC-V OAM
packets from its peer MEP at the same transmission rate, as a
common SLA applies to all components of the transport path. In a
unidirectional transport path (either point-to-point or
point-to-multipoint), only the source MEP is enabled to generate
CC-V OAM packets and only the sink MEP is configured to expect
these packets at the configured rate.

MIPs, as well as intermediate nodes not supporting MPLS-TP OAM,
are transparent to the pro-active CC-V information and forward
these pro-active CC-V OAM packets as regular data packets.

During path setup and tear down, situations arise where CC-V
checks would give rise to alarms, as the path is not fully
instantiated. In order to avoid these spurious alarms the
following procedures are recommended. At initialization, the MEP
source function (generating pro-active CC-V packets) should be
enabled prior to the corresponding MEP sink function (detecting
continuity and connectivity defects). When disabling the CC-V
proactive functionality, the MEP sink function should be disabled
prior to the corresponding MEP source function.

5.1.1. Defects identified by CC-V

Pro-active CC-V functions allow a sink MEP to detect the defect
conditions described in the following sub-sections. For all of the
described defect cases, the sink MEP should notify the equipment
fault management process of the detected defect.

5.1.1.1. Loss Of Continuity defect

When proactive CC-V is enabled, a sink MEP detects a loss of
continuity (LOC) defect when it fails to receive pro-active CC-V
OAM packets from the source MEP.

o Entry criteria: If no pro-active CC-V OAM packets from the
  source MEP with the correct encapsulation (and in the case of
  CV, this includes the requirement to have a correct globally
  unique Source MEP identifier) are received within the interval
  equal to 3.5 times the receiving MEP's configured CC-V reception
  period.

o Exit criteria: A pro-active CC-V OAM packet from the source MEP
  with the correct encapsulation (and again in the case of CV,
  with the correct globally unique Source MEP identifier) is
  received.
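A minimal Python sketch of the LOC entry and exit criteria above is
given below; it is illustrative only, the class and attribute names
are hypothetical, and the 3.5 multiplier and the reception period
are taken from the text.

   import time

   class CcvSink:
       """Illustrative LOC state of a sink MEP (not a normative algorithm)."""

       LOC_MULTIPLIER = 3.5

       def __init__(self, reception_period_s):
           self.reception_period_s = reception_period_s
           self.last_valid_rx = time.monotonic()
           self.loc_defect = False

       def on_ccv_packet(self, correct_encapsulation, correct_source_mep_id):
           # Exit criteria: a CC-V packet with the correct encapsulation
           # (and, for CV, the correct globally unique Source MEP
           # identifier) clears the LOC defect.
           if correct_encapsulation and correct_source_mep_id:
               self.last_valid_rx = time.monotonic()
               self.loc_defect = False

       def poll(self):
           # Entry criteria: no valid CC-V packet within 3.5 reception
           # periods.
           elapsed = time.monotonic() - self.last_valid_rx
           if elapsed > self.LOC_MULTIPLIER * self.reception_period_s:
               self.loc_defect = True
           return self.loc_defect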
5.1.1.2. Mis-connectivity defect

When a pro-active CC-V OAM packet is received, a sink MEP
identifies a mis-connectivity defect (e.g. mismerge, misconnection
or unintended looping) when the received packet carries an
incorrect globally unique Source MEP identifier.

o Entry criteria: The sink MEP receives a pro-active CC-V OAM
  packet with an incorrect globally unique Source MEP identifier
  or receives a CC or CC/CV OAM packet with an unexpected
  encapsulation.

o Exit criteria: The sink MEP does not receive any pro-active CC-V
  OAM packet with an incorrect globally unique Source MEP
  identifier for an interval at least equal to 3.5 times the
  longest transmission period of the pro-active CC-V OAM packets
  received with an incorrect globally unique Source MEP identifier
  since this defect has been raised. This requires the OAM message
  to self-identify the CC-V periodicity as not all MEPs can be
  expected to have knowledge of all MEGs.

5.1.1.3. Period Misconfiguration defect

If pro-active CC-V OAM packets are received with a correct
globally unique Source MEP identifier but with a transmission
period different from the locally configured reception period,
then a CV period mis-configuration defect is detected.

o Entry criteria: A MEP receives a CC-V pro-active packet with a
  correct globally unique Source MEP identifier but with a Period
  field value different from its own CC-V configured transmission
  period.

o Exit criteria: The sink MEP does not receive any pro-active CC-V
  OAM packet with a correct globally unique Source MEP identifier
  and an incorrect transmission period for an interval at least
  equal to 3.5 times the longest transmission period of the
  pro-active CC-V OAM packets received with a correct globally
  unique Source MEP identifier and an incorrect transmission
  period since this defect has been raised.

5.1.1.4. Unexpected encapsulation defect

If pro-active CC-V OAM packets are received with a correct
globally unique Source MEP identifier but with an unexpected
encapsulation, then a CV unexpected encapsulation defect is
detected.

o Entry criteria: A MEP receives a CC-V pro-active packet with a
  correct globally unique Source MEP identifier but with an
  unexpected encapsulation.

It should be noted that there are practical limitations to
detecting unexpected encapsulation. It is possible that there are
mis-connectivity scenarios where OAM frames can alias as payload
if a transport path can carry an arbitrary payload without a
pseudowire. In this case, the mis-connectivity defect cannot be
detected but a LOC defect may be detected instead.

o Exit criteria: The sink MEP does not receive any pro-active CC-V
  OAM packet with a correct globally unique Source MEP identifier
  and an unexpected encapsulation for an interval at least equal
  to 3.5 times the longest transmission period of the pro-active
  CC-V OAM packets received with a correct globally unique Source
  MEP identifier and an unexpected encapsulation since this defect
  has been raised.
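The entry criteria of sections 5.1.1.2, 5.1.1.3 and 5.1.1.4 can be
summarized, for the CV case where a Source MEP identifier is
present, by the following illustrative Python sketch. The field
names are hypothetical placeholders for the corresponding CC-V
packet information, and the exit criteria (3.5 times the longest
observed transmission period) are omitted.

   def classify_ccv_packet(pkt, expected_mep_id, expected_period,
                           expected_encap):
       """Return the defect suggested by a received CC-V packet, if any."""
       if pkt["source_mep_id"] != expected_mep_id:
           # Section 5.1.1.2: incorrect globally unique Source MEP id.
           return "mis-connectivity"
       if pkt["encapsulation"] != expected_encap:
           # Section 5.1.1.4: correct Source MEP identifier, unexpected
           # encapsulation.
           return "unexpected-encapsulation"
       if pkt["period"] != expected_period:
           # Section 5.1.1.3: correct Source MEP identifier, wrong
           # Period field value.
           return "period-misconfiguration"
       return None

   pkt = {"source_mep_id": "MEG1:MEP2", "period": 0.1,
          "encapsulation": "G-ACh"}
   print(classify_ccv_packet(pkt, "MEG1:MEP2", 1.0, "G-ACh"))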
5.1.2. Consequent action

A sink MEP that detects one of the defect conditions defined in
section 5.1.1 performs the following consequent actions.

If a MEP detects an unexpected globally unique Source MEP
Identifier, it blocks all the traffic (including the user data
packets) that it receives from the misconnected transport path.

If a MEP detects a LOC defect that is not caused by a period
mis-configuration, it should block all the traffic (including the
user data packets) that it receives from the transport path, if
this consequent action has been enabled by the operator.

It is worth noting that the OAM requirements document [11]
recommends that CC-V proactive monitoring be enabled on every MEG
in order to reliably detect connectivity defects. However, CC-V
proactive monitoring can be disabled by an operator for a MEG. In
the event of a misconnection between a transport path that is
pro-actively monitored for CC-V and a transport path which is not,
the MEP of the former transport path will detect a LOC defect
representing a connectivity problem (e.g. a misconnection with a
transport path where CC-V proactive monitoring is not enabled)
instead of a continuity problem, with consequent incorrect
delivery of traffic. For these reasons, the traffic block
consequent action is applied even when a LOC condition occurs.
This block consequent action can be disabled through
configuration. This deactivation of the block action may be used
for activating or deactivating the monitoring when it is not
possible to synchronize the function activation of the two peer
MEPs.

If a MEP detects a LOC defect (section 5.1.1.1) or a
mis-connectivity defect (section 5.1.1.2), it declares a signal
fail condition at the transport path level.

It is a matter of local policy whether a MEP detecting a period
misconfiguration defect (section 5.1.1.3) declares a signal fail
condition at the transport path level.

5.1.3. Configuration considerations

At all MEPs inside a MEG, the following configuration information
needs to be configured when a proactive CC-V function is enabled:

o MEG ID; the MEG identifier to which the MEP belongs;

o MEP-ID; the MEP's own identity inside the MEG;

o list of the other MEPs in the MEG. For a point-to-point MEG the
  list would consist of the single MEP ID from which the OAM
  packets are expected. In case of the root MEP of a p2mp MEG, the
  list is composed of all the leaf MEP IDs inside the MEG. In case
  of the leaf MEP of a p2mp MEG, the list is composed of the root
  MEP ID (i.e. each leaf needs to know the root MEP ID from which
  it expects to receive the CC-V OAM packets).

o PHB; it identifies the per-hop behaviour of the CC-V packets.
  Proactive CC-V packets are transmitted with the "minimum loss
  probability PHB" previously configured within a single network
  operator. This PHB is configurable on a network operator basis.
  PHBs can be translated at the network borders.

o transmission rate; the default CC-V transmission periods are
  application dependent (depending on whether they are used to
  support fault management, performance monitoring, or protection
  switching applications):

   o Fault Management: default transmission period is 1s (i.e.
     transmission rate of 1 packet/second).
   o Performance Monitoring: default transmission period is 100ms
     (i.e. transmission rate of 10 packets/second). Performance
     monitoring is only relevant when the transport path is defect
     free. CC-V contributes to the accuracy of PM statistics by
     permitting the defect free periods to be properly
     distinguished.

   o Protection Switching: default transmission period is 3.33ms
     (i.e. transmission rate of 300 packets/second). In order to
     achieve sub-50ms protection switching, the CC-V defect entry
     criteria should resolve in less than 10 msec and the
     protection switch should be completed within a subsequent
     period of 50 msec. It is also possible to lengthen the
     transmission period to 10ms (i.e. transmission rate of 100
     packets/second): in this case the CC-V defect entry criteria
     are reached later (i.e. 30 msec).

It should be possible for the operator to configure these
transmission rates for all applications, to satisfy their internal
requirements.

Note that the expected reception period is the same as the
configured transmission period.

For statically provisioned transport paths the above parameters
are statically configured; for dynamically established transport
paths the configuration information is signaled via the control
plane.

The operator should be able to enable/disable some of the
consequent actions. The consequent actions that can be
enabled/disabled are described in section 5.1.2.

5.2. Remote Defect Indication

The Remote Defect Indication (RDI) function, as required in
section 2.2.9 of RFC 5860 [11], is an indicator that is
transmitted by a sink MEP to communicate to its source MEP that a
signal fail condition exists. RDI is only used for bidirectional
connections and is associated with proactive CC-V. The RDI
indicator is piggy-backed onto the CC-V packet.

When a MEP detects a signal fail condition (e.g. in case of a
continuity or connectivity defect), it should begin transmitting
an RDI indicator to its peer MEP. The RDI information will be
included in all pro-active CC-V packets that it generates for the
duration of the signal fail condition's existence.

A MEP that receives packets from a peer MEP (as best can be
validated with the CC or CV tool in use) with the RDI information
should determine that its peer MEP has encountered a defect
condition associated with a signal fail.

MIPs, as well as intermediate nodes not supporting MPLS-TP OAM,
are transparent to the RDI indicator and forward these proactive
CC-V packets that include the RDI indicator as regular data
packets, i.e. the MIP should not perform any action nor examine
the indicator.

When the signal fail defect condition clears, the MEP should clear
the RDI indicator from subsequent transmission of pro-active CC-V
packets. A MEP should clear the RDI defect upon reception of a
pro-active CC-V packet from the source MEP with the RDI indicator
cleared.

5.2.1. Configuration considerations

The RDI indication may be carried in a unique OAM message or in an
OAM information element embedded in a CV message. In either case,
the RDI transmission rate and PHB of the OAM packets carrying RDI
should be the same as those configured for CC-V.
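As an informal illustration of the RDI piggy-backing described in
section 5.2, the Python sketch below sets and clears an RDI
indicator in the CC-V packets generated by a MEP; the packet
representation and names are hypothetical.

   class MepWithRdi:
       """Illustrative MEP behaviour for RDI carried in CC-V packets."""

       def __init__(self):
           self.signal_fail = False     # local signal fail (e.g. LOC defect)
           self.remote_defect = False   # RDI defect learned from the peer

       def build_ccv_packet(self):
           # The RDI indicator is included in every CC-V packet generated
           # for as long as the local signal fail condition exists.
           return {"type": "CC-V", "rdi": self.signal_fail}

       def on_ccv_packet(self, pkt):
           # RDI set means the peer MEP has a signal fail condition;
           # RDI cleared clears the RDI defect.
           self.remote_defect = bool(pkt.get("rdi"))

   mep_a, mep_b = MepWithRdi(), MepWithRdi()
   mep_a.signal_fail = True                       # A detects signal fail
   mep_b.on_ccv_packet(mep_a.build_ccv_packet())
   print(mep_b.remote_defect)                     # True: B learns of A's defect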
5.3. Alarm Reporting

The Alarm Reporting function, as required in section 2.2.8 of RFC
5860 [11], relies upon an Alarm Indication Signal (AIS) message to
suppress alarms following detection of defect conditions at the
server (sub-)layer.

When a server MEP asserts signal fail, it notifies the co-located
MPLS-TP client/server adaptation function, which then generates
packets with AIS information in the downstream direction to allow
the suppression of secondary alarms at the MPLS-TP MEP in the
client (sub-)layer.

The generation of packets with AIS information starts immediately
when the server MEP asserts signal fail. These periodic packets,
with AIS information, continue to be transmitted until the signal
fail condition is cleared. It is assumed that to avoid spurious
alarm generation a MEP detecting loss of continuity will wait for
a hold off interval prior to asserting an alarm to the management
system.

Upon receiving a packet with AIS information an MPLS-TP MEP enters
an AIS defect condition and suppresses loss of continuity alarms
associated with its peer MEP but does not block traffic received
from the transport path. A MEP resumes loss of continuity alarm
generation upon detecting loss of continuity defect conditions in
the absence of the AIS condition.

MIPs, as well as intermediate nodes, do not process AIS
information and forward these AIS OAM packets as regular data
packets.

For example, consider a fiber cut between LSR 1 and LSR 2 in the
reference network of Figure 5. Assuming that all the MEGs
described in Figure 5 have pro-active CC-V enabled, a LOC defect
is detected by the MEPs of Sec12 SME, LSP13 LME, PW13 PSME and
PW1Z PME; however, in a transport network only the alarm
associated with the fiber cut needs to be reported to an NMS while
all secondary alarms should be suppressed (i.e. not reported to
the NMS or reported as secondary alarms).

If the fiber cut is detected by the MEP in the physical layer (in
LSR2), LSR2 can generate the proper alarm in the physical layer
and suppress the secondary alarm associated with the LOC defect
detected on Sec12 SME. As both MEPs reside within the same node,
this process does not involve any external protocol exchange.
Otherwise, if the physical layer does not have enough OAM
capabilities to detect the fiber cut, the MEP of Sec12 SME in LSR2
will report a LOC alarm.

In both cases, the MEP of Sec12 SME in LSR 2 notifies the
adaptation function for LSP13 LME that then generates AIS packets
on the LSP13 LME in order to allow its MEP in LSR3 to suppress the
LOC alarm. LSR3 can also suppress the secondary alarm on PW13 PSME
because the MEP of PW13 PSME resides within the same node as the
MEP of LSP13 LME. The MEP of PW13 PSME in LSR3 also notifies the
adaptation function for PW1Z PME that then generates AIS packets
on PW1Z PME in order to allow its MEP in LSRZ to suppress the LOC
alarm.

The generation of AIS packets for each MEG in the MPLS-TP client
(sub-)layer is configurable (i.e. the operator can enable/disable
the AIS generation).

AIS packets are transmitted with the "minimum loss probability
PHB" within a single network operator. This PHB is configurable on
a network operator basis.

The AIS condition is cleared if no AIS packet has been received
for 3.5 times the AIS transmission period.
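The alarm suppression behaviour of this section could be modelled
as in the following illustrative Python sketch; the names are
hypothetical and the 3.5 multiplier for clearing the AIS condition
follows the text above.

   import time

   class ClientLayerMep:
       """Illustrative suppression of LOC alarms while AIS is present."""

       def __init__(self, ais_period_s):
           self.ais_period_s = ais_period_s
           self.last_ais_rx = None

       def on_ais_packet(self):
           self.last_ais_rx = time.monotonic()

       def ais_condition(self):
           # The AIS condition clears when no packet with AIS information
           # has been received for 3.5 times the AIS transmission period.
           if self.last_ais_rx is None:
               return False
           return time.monotonic() - self.last_ais_rx <= 3.5 * self.ais_period_s

       def raise_loc_alarm(self, loc_defect):
           # LOC alarms are suppressed, but traffic is not blocked, while
           # the AIS condition is present.
           return loc_defect and not self.ais_condition()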
5.4. Lock Reporting

The Lock Reporting function, as required in section 2.2.7 of RFC
5860 [11], relies upon a Locked Report (LKR) message used to
suppress alarms following administrative locking action in the
server (sub-)layer.

When a server MEP is locked, the MPLS-TP client (sub-)layer
adaptation function generates packets with LKR information in both
directions to allow the suppression of secondary alarms at the
MEPs in the client (sub-)layer. Again it is assumed that there is
a hold off for any loss of continuity alarms in the client layer
MEPs downstream of the node originating the locked report.

The generation of packets with LKR information starts immediately
when the server MEP is locked. These periodic packets, with LKR
information, continue to be transmitted until the locked condition
is cleared.

Upon receiving a packet with LKR information an MPLS-TP MEP enters
an LKR defect condition and suppresses loss of continuity alarms
associated with its peer MEP but does not block traffic received
from the transport path. A MEP resumes loss of continuity alarm
generation upon detecting loss of continuity defect conditions in
the absence of the LKR condition.

MIPs, as well as intermediate nodes, do not process the LKR
information and forward these LKR OAM packets as regular data
packets.

For example, consider the case where the MPLS-TP Section between
LSR 1 and LSR 2 in the reference network of Figure 5 is
administratively locked at LSR2 (in both directions).

Assuming that all the MEGs described in Figure 5 have pro-active
CC-V enabled, a LOC defect is detected by the MEPs of LSP13 LME,
PW13 PSME and PW1Z PME; however, in a transport network all these
secondary alarms should be suppressed (i.e. not reported to the
NMS or reported as secondary alarms).

The MEP of Sec12 SME in LSR 2 notifies the adaptation function for
LSP13 LME that then generates LKR packets on the LSP13 LME in
order to allow its MEPs in LSR1 and LSR3 to suppress the LOC
alarm. LSR3 can also suppress the secondary alarm on PW13 PSME
because the MEP of PW13 PSME resides within the same node as the
MEP of LSP13 LME. The MEP of PW13 PSME in LSR3 also notifies the
adaptation function for PW1Z PME that then generates AIS packets
on PW1Z PME in order to allow its MEP in LSRZ to suppress the LOC
alarm.

The generation of LKR packets for each MEG in the MPLS-TP client
(sub-)layer is configurable (i.e. the operator can enable/disable
the LKR generation).

LKR packets are transmitted with the "minimum loss probability
PHB" within a single network operator. This PHB is configurable on
a network operator basis.

The locked condition is cleared if no LKR packet has been received
for 3.5 times the transmission period.

5.5. Packet Loss Measurement

Packet Loss Measurement (LM) is one of the capabilities supported
by the MPLS-TP Performance Monitoring (PM) function in order to
facilitate reporting of QoS information for a transport path as
required in section 2.2.11 of RFC 5860 [11]. LM is used to
exchange counter values for the number of ingress and egress
packets transmitted and received by the transport path monitored
by a pair of MEPs.
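As an informal illustration of the counter correlation described
in the following paragraphs, the Python sketch below derives
near-end and far-end loss from the transmit and receive counters
exchanged by the two MEPs. The counter names are hypothetical, and
a real implementation would work on differences between successive
counter samples rather than absolute values.

   def packet_loss(local_tx, local_rx, peer_tx, peer_rx):
       """Derive loss figures from counters exchanged in LM OAM packets.

       local_tx/local_rx: packets sent/received by this MEP on the path
       peer_tx/peer_rx:   counters reported by the peer MEP
       """
       far_end_loss = local_tx - peer_rx    # loss towards the far-end MEP
       near_end_loss = peer_tx - local_rx   # loss on traffic from the far end
       return near_end_loss, far_end_loss

   # Example: 1000 packets sent each way; 3 lost towards the peer,
   # 1 lost in the other direction.
   print(packet_loss(local_tx=1000, local_rx=999, peer_tx=1000, peer_rx=997))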
Proactive LM is performed by periodically sending LM OAM packets
from a MEP to a peer MEP and by receiving LM OAM packets from the
peer MEP (if a bidirectional transport path) during the lifetime
of the transport path. Each MEP performs measurements of its
transmitted and received packets. These measurements are then
correlated with the peer MEP in the ME to derive the impact of
packet loss on a number of performance metrics for the ME in the
MEG. The LM transactions are issued such that the OAM packets will
experience the same queuing discipline as the measured traffic
while transiting between the MEPs in the ME.

For a MEP, near-end packet loss refers to packet loss associated
with incoming data packets (from the far-end MEP) while far-end
packet loss refers to packet loss associated with egress data
packets (towards the far-end MEP).

MIPs, as well as intermediate nodes, do not process the LM
information and forward these pro-active LM OAM packets as regular
data packets.

5.5.1. Configuration considerations

In order to support proactive LM, the transmission rate and PHB
class associated with the LM OAM packets originating from a MEP
need to be configured as part of the LM provisioning. LM OAM
packets should be transmitted with the PHB that yields the lowest
discard probability within the measured PHB Scheduling Class (see
RFC 3260 [16]).

If that PHB class is not an ordered aggregate, where the ordering
constraint is that all packets with the PHB class are delivered in
order, LM can produce inconsistent results.

5.5.2. Sampling skew

If an implementation makes use of a hardware forwarding path which
operates in parallel with an OAM processing path, whether hardware
or software based, the packet and byte counts may be skewed if one
or more packets can be processed before the OAM processing samples
counters. If OAM is implemented in software this error can be
quite large.

5.5.3. Multilink issues

If multilink is used at the LSP ingress or egress, there may be no
single packet processing engine at which an LM packet can be
injected or extracted as an atomic operation with which accurate
packet and byte counts can be associated.

In the case where multilink is encountered in the LSP path, the
reordering of packets within the LSP can cause inaccurate LM
results.

5.6. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities supported
by the MPLS-TP PM function in order to facilitate reporting of QoS
information for a transport path as required in section 2.2.12 of
RFC 5860 [11]. Specifically, pro-active DM is used to measure the
long-term packet delay and packet delay variation in the transport
path monitored by a pair of MEPs.

Proactive DM is performed by sending periodic DM OAM packets from
a MEP to a peer MEP and by receiving DM OAM packets from the peer
MEP (if a bidirectional transport path) during a configurable time
interval.

Pro-active DM can be operated in two ways:

o One-way: a MEP sends a DM OAM packet to its peer MEP containing
  all the required information to facilitate one-way packet delay
  and/or one-way packet delay variation measurements at the peer
  MEP. Note that this requires synchronized precision time at both
  MEPs by means outside the scope of this framework.
o Two-way: a MEP sends a DM OAM packet with a DM request to its
  peer MEP, which replies with a DM OAM packet as a DM response.
  The request/response DM OAM packets contain all the required
  information to facilitate two-way packet delay and/or two-way
  packet delay variation measurements from the viewpoint of the
  source MEP.

MIPs, as well as intermediate nodes, do not process the DM
information and forward these pro-active DM OAM packets as regular
data packets.

5.6.1. Configuration considerations

In order to support pro-active DM, the transmission rate and PHB
associated with the DM OAM packets originating from a MEP need to
be configured as part of the DM provisioning. DM OAM packets
should be transmitted with the PHB that yields the lowest discard
probability within the measured PHB Scheduling Class (see RFC 3260
[16]).

5.7. Client Failure Indication

The Client Failure Indication (CFI) function, as required in
section 2.2.10 of RFC 5860 [11], is used to help process client
defects and propagate a client signal defect condition from the
process associated with the local attachment circuit where the
defect was detected (typically the source adaptation function for
the local client interface) to the process associated with the
far-end attachment circuit (typically the source adaptation
function for the far-end client interface) for the same
transmission path in case the client of the transport path does
not support a native defect/alarm indication mechanism, e.g. AIS.

A source MEP starts transmitting a CFI indication to its peer MEP
when it receives a local client signal defect notification via its
local CSF function. Mechanisms to detect local client signal fail
defects are technology specific. Similarly, mechanisms to
determine when to cease originating the client signal fail
indication are also technology specific.

A sink MEP that has received a CFI indication reports this
condition to its associated client process via its local CFI
function. Consequent actions toward the client attachment circuit
are technology specific.

Either there needs to be a 1:1 correspondence between the client
and the MEG, or, when multiple clients are multiplexed over a
transport path, the CFI message requires additional information to
permit the client instance to be identified.

MIPs, as well as intermediate nodes, do not process the CFI
information and forward these pro-active CFI OAM packets as
regular data packets.

5.7.1. Configuration considerations

In order to support CFI indication, the CFI transmission rate and
PHB of the CFI OAM message/information element should be
configured as part of the CFI configuration.

6. OAM Functions for on-demand monitoring

In contrast to proactive monitoring, on-demand monitoring is
initiated manually and for a limited amount of time, usually for
operations such as diagnostics to investigate a defect condition.

On-demand monitoring covers a combination of "in-service" and
"out-of-service" monitoring functions. The control and measurement
implications are:

1. A MEG can be directed to perform an "on-demand" function at
   arbitrary times in the lifetime of a transport path.

2. "out-of-service" monitoring functions may require a priori
   configuration of both MEPs and intermediate nodes in the MEG
   (e.g., data plane loopback) and the issuance of notifications
   into client layers of the transport path being removed from
   service (e.g., lock-reporting).

3. The measurements resulting from on-demand monitoring are
   typically harvested in real time, as these are frequently
   initiated manually. These do not necessarily require different
   harvesting mechanisms than those used for harvesting proactive
   monitoring telemetry.

The functions that are exclusively out-of-service are those
described in section 6.3. The remainder are applicable to both
in-service and out-of-service transport paths.

6.1. Connectivity Verification

In order to preserve network resources, e.g. bandwidth and
processing time at switches, it may be preferable not to use
proactive CC-V. In order to perform fault management functions,
network management may invoke periodic bursts of on-demand CV
packets, as required in section 2.2.3 of RFC 5860 [11].

On-demand connectivity verification is a transaction that flows
from the source MEP to a target MIP or MEP.

Use of on-demand CV is dependent on the existence of either a
bi-directional ME, or an associated return ME, or the availability
of an out-of-band return path because it requires the ability for
target MIPs and MEPs to direct responses to the originating MEPs.

An additional use of on-demand CV would be to detect and locate a
problem of connectivity when a problem is suspected or known based
on other tools. In this case the functionality will be triggered
by the network management in response to a status signal or alarm
indication.

On-demand CV is based upon generation of on-demand CV packets that
should uniquely identify the MEG that is being checked. The
on-demand functionality may be used to check either an entire MEG
(end-to-end) or the path between a source MEP and a specific MIP.
This functionality may not be available for associated
bidirectional transport paths or unidirectional paths, as the MIP
may not have a return path to the source MEP for the on-demand CV
transaction.

On-demand CV may generate a one-time burst of on-demand CV
packets, or be used to invoke periodic, non-continuous, bursts of
on-demand CV packets. The number of packets generated in each
burst is configurable at the MEPs, and should take into account
normal packet-loss conditions.

When invoking a periodic check of the MEG, the source MEP should
issue a burst of on-demand CV packets that uniquely identifies the
MEG being verified. The number of packets and their transmission
rate should be pre-configured at the source MEP. The source MEP
should use the mechanisms defined in sections 3.3 and 3.4 when
sending an on-demand CV packet to a target MEP or target MIP
respectively. The target MEP/MIP shall return a reply on-demand CV
packet for each packet received. If the expected number of
on-demand CV reply packets is not received at the source MEP, this
is an indication that a connectivity problem may exist.

On-demand CV should have the ability to carry padding such that a
variety of MTU sizes can be originated to verify the MTU transport
capability of the transport path.
MIPs that are not targeted by on-demand CV packets, as well as
intermediate nodes, do not process the CV information and forward
these on-demand CV OAM packets as regular data packets.

6.1.1. Configuration considerations

For on-demand CV the source MEP should support the configuration
of the number of packets to be transmitted/received in each burst
of transmissions and their packet size.

In addition, when the CV packet is used to check connectivity
toward a target MIP, the number of hops to reach the target MIP
should be configured.

The PHB of the on-demand CV packets should be configured as well.
This permits the verification of correct operation of QoS queuing
as well as connectivity.

6.2. Packet Loss Measurement

On-demand Packet Loss Measurement (LM) is one of the capabilities
supported by the MPLS-TP Performance Monitoring function in order
to facilitate diagnostics of QoS performance for a transport path,
as required in section 2.2.11 of RFC 5860 [11]. As with proactive
LM, on-demand LM is used to exchange counter values for the number
of ingress and egress packets transmitted and received by the
transport path monitored by a pair of MEPs. LM is not performed
MEP to MIP or between a pair of MIPs.

On-demand LM is performed by periodically sending LM OAM packets
from a MEP to a peer MEP and by receiving LM OAM packets from the
peer MEP (if a bidirectional transport path) during a pre-defined
monitoring period. Each MEP performs measurements of its
transmitted and received packets. These measurements are then
correlated to evaluate the packet loss performance metrics of the
transport path.

Use of packet loss measurement in an out-of-service transport path
requires a traffic source such as a tester.

MIPs, as well as intermediate nodes, do not process the LM
information and forward these on-demand LM OAM packets as regular
data packets.

6.2.1. Configuration considerations

In order to support on-demand LM, the beginning and duration of
the LM procedures, the transmission rate and PHB associated with
the LM OAM packets originating from a MEP must be configured as
part of the on-demand LM provisioning. LM OAM packets should be
transmitted with the PHB that yields the lowest discard
probability within the measured PHB Scheduling Class (see RFC 3260
[16]).

6.2.2. Sampling skew

If an implementation makes use of a hardware forwarding path which
operates in parallel with an OAM processing path, whether hardware
or software based, the packet and byte counts may be skewed if one
or more packets can be processed before the OAM processing samples
counters. If OAM is implemented in software this error can be
quite large.

6.2.3. Multilink issues

Multilink issues are as described in section 5.5.3.

6.3. Diagnostic Tests

Diagnostic tests are tests performed on a MEG that has been taken
out-of-service.

6.3.1. Throughput Estimation

Throughput estimation is an on-demand out-of-service function, as
required in section 2.2.5 of RFC 5860 [11], that allows verifying
the bandwidth/throughput of an MPLS-TP transport path (LSP or PW)
before it is put in service.

Throughput estimation is performed between MEPs and can be
performed in one-way or two-way modes.
According to RFC 2544 [12], this test is performed by sending OAM
test packets at an increasing rate (up to the theoretical
maximum), graphing the percentage of OAM test packets received and
reporting the rate at which OAM test packets begin to drop. In
general, this rate is dependent on the OAM test packet size.

When configured to perform such tests, a MEP source inserts OAM
test packets with a specified packet size and transmission pattern
at a rate to exercise the throughput.

For a one-way test, the remote MEP sink receives the OAM test
packets and calculates the packet loss. For a two-way test, the
remote MEP loops the OAM test packets back to the originating MEP
and the local MEP sink calculates the packet loss. However, a
two-way test will return the minimum of the available throughput
in the two directions. Alternatively, it is possible to run two
individual one-way tests to get a distinct measurement in the two
directions.

It is worth noting that two-way throughput estimation can only
evaluate the minimum of the available throughput of the two
directions. In order to estimate the throughput of each direction
uniquely, two one-way throughput estimation sessions have to be
set up.

MIPs, as well as intermediate nodes, do not process the throughput
test information and forward these on-demand test OAM packets as
regular data packets.

6.3.1.1. Configuration considerations

Throughput estimation is an out-of-service tool. The diagnosed MEG
should be put into a Lock status before the diagnostic test is
started.

A MEG can be put into a Lock status either via an NMS action or
using the Lock Instruct OAM tool as defined in section 7.

At the transmitting MEP, provisioning is required for a test
signal generator, which is associated with the MEP. At a receiving
MEP, provisioning is required for a test signal detector which is
associated with the MEP.

6.3.1.2. Limited OAM processing rate

If an implementation is able to process payload at much higher
data rates than OAM packets, then accurate measurement of
throughput using OAM packets is not achievable. Whether OAM
packets can be processed at the same rate as payload is
implementation dependent.

6.3.1.3. Multilink considerations

If multilink is used, then it may not be possible to perform
throughput measurement, as the throughput test may not have a
mechanism for utilizing more than one component link of the
aggregated link.
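As an informal illustration of the RFC 2544 [12] style procedure
described in section 6.3.1, the Python sketch below steps the
offered OAM test packet rate upward until test packets begin to
drop. The measure_loss_ratio() primitive is a hypothetical
placeholder for a one-way or two-way OAM test packet exchange
between MEPs.

   def estimate_throughput(measure_loss_ratio, max_rate_pps, step_pps):
       """Increase the offered test packet rate until packets start to drop.

       measure_loss_ratio(rate) -- hypothetical primitive that sends OAM
       test packets at 'rate' packets/second for a trial period and
       returns the observed loss ratio.
       """
       best = 0
       rate = step_pps
       while rate <= max_rate_pps:
           if measure_loss_ratio(rate) > 0.0:
               break                    # OAM test packets begin to drop
           best = rate                  # highest loss-free rate so far
           rate += step_pps
       return best                      # estimate for this packet size

   # Toy example: the path forwards up to 800 packets/second without loss.
   print(estimate_throughput(lambda r: 0.0 if r <= 800 else 0.1, 1000, 100))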
6.3.2. Data plane Loopback

Data plane loopback is an out-of-service function, as required in
section 2.2.5 of RFC 5860 [11], that permits all traffic
(including user data and OAM, with the exception of the disable
loopback command) originated at the ingress of a transport path or
inserted by the test equipment to be looped back unmodified (other
than normal per hop processing such as TTL decrement) in the
direction of the point of origin by an interface at either an
intermediate node or a terminating node. TTL is decremented
normally during this process. It is also normal to disable
proactive monitoring of the path as the source MEP will see all
source MEP originated OAM messages returned to it.

If the loopback function is to be performed at an intermediate
node it is only applicable to co-routed bi-directional paths. If
the loopback is to be performed end to end, it is applicable to
both co-routed bi-directional and associated bi-directional paths.

Where a node implements the data plane loopback capability, and
whether it implements it at more than one point, is implementation
dependent.

6.4. Route Tracing

It is often necessary to trace a route covered by a MEG from a
source MEP to the sink MEP including all the MIPs in between; this
may be conducted after provisioning an MPLS-TP transport path for,
e.g., troubleshooting purposes such as fault localization.

The route tracing function, as required in section 2.2.4 of RFC
5860 [11], provides this functionality. Based on the fate sharing
requirement of OAM flows, i.e. OAM packets receive the same
forwarding treatment as data packets, route tracing is a basic
means to perform connectivity verification and, to a much lesser
degree, continuity check. For this function to work properly, a
return path must be present.

Route tracing might be implemented in different ways and this
document does not preclude any of them.

Route tracing should always discover the full list of MIPs and of
the peer MEPs. In case a defect exists, the route trace function
will only be able to trace up to the defect, and needs to be able
to return the incomplete list of OAM entities that it was able to
trace such that the fault can be localized.

6.4.1. Configuration considerations

The configuration of the route trace function must at least
support the setting of the number of trace attempts before it
gives up.

6.5. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities supported
by the MPLS-TP PM function in order to facilitate reporting of QoS
information for a transport path, as required in section 2.2.12 of
RFC 5860 [11]. Specifically, on-demand DM is used to measure
packet delay and packet delay variation in the transport path
monitored by a pair of MEPs during a pre-defined monitoring
period.

On-demand DM is performed by sending periodic DM OAM packets from
a MEP to a peer MEP and by receiving DM OAM packets from the peer
MEP (if a bidirectional transport path) during a configurable time
interval.

On-demand DM can be operated in two ways:

o One-way: a MEP sends a DM OAM packet to its peer MEP containing
  all the required information to facilitate one-way packet delay
  and/or one-way packet delay variation measurements at the peer
  MEP. Note that this requires synchronized precision time at both
  MEPs by means outside the scope of this framework.

o Two-way: a MEP sends a DM OAM packet with a DM request to its
  peer MEP, which replies with a DM OAM packet as a DM response.
  The request/response DM OAM packets contain all the required
  information to facilitate two-way packet delay and/or two-way
  packet delay variation measurements from the viewpoint of the
  source MEP.

MIPs, as well as intermediate nodes, do not process the DM
information and forward these on-demand DM OAM packets as regular
data packets.

6.5.1. Configuration considerations

In order to support on-demand DM, the beginning and duration of
the DM procedures, the transmission rate and PHB associated with
the DM OAM packets originating from a MEP need to be configured as
part of the DM provisioning. DM OAM packets should be transmitted
with the PHB that yields the lowest discard probability within the
measured PHB Scheduling Class (see RFC 3260 [16]).
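A minimal Python sketch of the delay computations is given below.
The timestamp names are hypothetical; one-way measurement assumes
synchronized precision time at both MEPs, as noted above.

   def one_way_delay(tx_timestamp, rx_timestamp):
       # Requires synchronized precision time at both MEPs.
       return rx_timestamp - tx_timestamp

   def two_way_delay(t1, t2, t3, t4):
       """Two-way delay as seen by the source MEP.

       t1: DM request transmitted by the source MEP
       t2: DM request received by the peer MEP
       t3: DM response transmitted by the peer MEP
       t4: DM response received by the source MEP
       The peer MEP's processing time (t3 - t2) is excluded.
       """
       return (t4 - t1) - (t3 - t2)

   def delay_variation(delays):
       # Packet delay variation as the spread of the measured delays.
       return max(delays) - min(delays)

   print(two_way_delay(0.000, 0.010, 0.012, 0.023))   # 0.021 seconds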
In order to verify different performance between long and short
packets (e.g., due to the processing time), it should be possible
for the operator to configure the packet size of the on-demand OAM
DM packet.

7. OAM Functions for administration control

7.1. Lock Instruct

The Lock Instruct (LKI) function, as required in section 2.2.6 of
RFC 5860 [11], is a command allowing a MEP to instruct the peer
MEP(s) to put the MPLS-TP transport path into a locked condition.

This function allows single-side provisioning for administratively
locking (and unlocking) an MPLS-TP transport path.

Note that it is also possible to administratively lock (and
unlock) an MPLS-TP transport path using two-side provisioning,
where the NMS administratively puts both MEPs into an
administrative lock condition. In this case, the LKI function is
not required/used.

MIPs, as well as intermediate nodes, do not process the lock
instruct information and forward these on-demand LKI OAM packets
as regular data packets.

7.1.1. Locking a transport path

A MEP, upon receiving a single-side administrative lock command
from an NMS, sends an LKI request OAM packet to its peer MEP(s).
It also puts the MPLS-TP transport path into a locked state and
notifies its client (sub-)layer adaptation function of the locked
condition.

A MEP, upon receiving an LKI request from its peer MEP, can accept
or reject the instruction and replies to the peer MEP with an LKI
reply OAM packet indicating whether or not it has accepted the
instruction. This requires either an in-band or out-of-band return
path.

If the lock instruction has been accepted, it also puts the
MPLS-TP transport path into a locked state and notifies its client
(sub-)layer adaptation function of the locked condition.

Note that if the client (sub-)layer is also MPLS-TP, Lock
Reporting (LKR) generation at the client MPLS-TP (sub-)layer is
started, as described in section 5.4.

7.1.2. Unlocking a transport path

A MEP, upon receiving a single-side administrative unlock command
from the NMS, sends an LKI removal request OAM packet to its peer
MEP(s).

The peer MEP, upon receiving an LKI removal request, can accept or
reject the removal instruction and replies with an LKI removal
reply OAM packet indicating whether or not it has accepted the
instruction.

If the lock removal instruction has been accepted, it also clears
the locked condition on the MPLS-TP transport path and notifies
its client (sub-)layer adaptation function of this event.

The MEP that has initiated the LKI clear procedure, upon receiving
a positive LKI removal reply, also clears the locked condition on
the MPLS-TP transport path and notifies its client (sub-)layer
adaptation function of this event.

Note that if the client (sub-)layer is also MPLS-TP, Lock
Reporting (LKR) generation at the client MPLS-TP (sub-)layer is
terminated, as described in section 5.4.
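The single-side locking exchange of section 7.1.1 could be
summarized by the following illustrative Python sketch; the
message handling primitive and the returned states are
hypothetical.

   def lki_lock(send_lki_request):
       """Single-side administrative lock via LKI (illustrative only).

       send_lki_request() -- hypothetical primitive that sends an LKI
       request OAM packet to the peer MEP(s) over an in-band or
       out-of-band return path and returns True if the LKI reply
       indicates acceptance.
       """
       # On the NMS lock command, the initiating MEP puts the transport
       # path into the locked state and notifies its client (sub-)layer
       # adaptation function.
       local_state = {"locked": True, "client_notified": True}

       # The peer MEP may accept or reject the instruction; acceptance
       # means it also enters the locked state (possibly starting LKR
       # generation in its client MPLS-TP (sub-)layer, see section 5.4).
       peer_accepted = send_lki_request()
       return local_state, peer_accepted

   print(lki_lock(lambda: True))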
8. Security Considerations

A number of security considerations are important in the context
of OAM applications.

OAM traffic can reveal sensitive information such as passwords,
performance data and details about e.g. the network topology. The
nature of OAM data therefore suggests having some form of
authentication, authorization and encryption in place. This will
prevent unauthorized access to vital equipment and it will prevent
third parties from learning sensitive information about the
transport network. However, it should be observed that the
combination of all permutations of unique MEP to MEP, MEP to MIP,
and intermediate system originated transactions militates against
the practical establishment and maintenance of a large number of
security associations per MEG.

For this reason it is assumed that the network is physically
secured against man-in-the-middle attacks. Further, this document
describes OAM functions that, if a man-in-the-middle attack were
possible, could be exploited to significantly disrupt proper
operation of the network.

Mechanisms that the framework does not specify might be subject to
additional security considerations.

9. IANA Considerations

No new IANA considerations.

10. Acknowledgments

The authors would like to thank all members of the teams (the
Joint Working Team, the MPLS Interoperability Design Team in IETF
and the Ad Hoc Group on MPLS-TP in ITU-T) involved in the
definition and specification of MPLS Transport Profile.

The editors gratefully acknowledge the contributions of Adrian
Farrel, Yoshinori Koike, Luca Martini, Yuji Tochio and Manuel Paul
for the definition of per-interface MIPs and MEPs.

The editors gratefully acknowledge the contributions of Malcolm
Betts, Yoshinori Koike, Xiao Min, and Maarten Vissers for the lock
report and lock instruction description.

The authors would also like to thank Alessandro D'Alessandro, Loa
Andersson, Malcolm Betts, Stewart Bryant, Rui Costa, Xuehui Dai,
John Drake, Adrian Farrel, Dan Frost, Xia Liang, Liu Gouman, Peng
He, Feng Huang, Su Hui, Yoshinori Koike, George Swallow, Yuji
Tochio, Curtis Villamizar, Maarten Vissers and Xuequin Wei for
their comments and enhancements to the text.

This document was prepared using 2-Word-v2.0.template.dot.

11. References

11.1.
Normative References 2346 [1] Rosen, E., Viswanathan, A., Callon, R., "Multiprotocol 2347 Label Switching Architecture", RFC 3031, January 2001 2349 [2] Bryant, S., Pate, P., "Pseudo Wire Emulation Edge-to-Edge 2350 (PWE3) Architecture", RFC 3985, March 2005 2352 [3] Nadeau, T., Pignataro, S., "Pseudowire Virtual Circuit 2353 Connectivity Verification (VCCV): A Control Channel for 2354 Pseudowires", RFC 5085, December 2007 2356 [4] Bocci, M., Bryant, S., "An Architecture for Multi-Segment 2357 Pseudo Wire Emulation Edge-to-Edge", RFC 5659, October 2358 2009 2360 [5] Niven-Jenkins, B., Brungard, D., Betts, M., sprecher, N., 2361 Ueno, S., "MPLS-TP Requirements", RFC 5654, September 2009 2363 [6] Agarwal, P., Akyol, B., "Time To Live (TTL) Processing in 2364 Multiprotocol Label Switching (MPLS) Networks", RFC 3443, 2365 January 2003 2367 [7] Vigoureux, M., Bocci, M., Swallow, G., Ward, D., Aggarwal, 2368 R., "MPLS Generic Associated Channel", RFC 5586, June 2009 2370 [8] Bocci, M., et al., "A Framework for MPLS in Transport 2371 Networks", RFC 5921, July 2010 2373 [9] Bocci, M., et al., " MPLS Transport Profile User-to-Network and 2374 Network-to-Network Interfaces", draft-ietf-mpls-tp-uni-nni-00 2375 (work in progress), August 2010 2377 [10] Swallow, G., Bocci, M., "MPLS-TP Identifiers", draft-ietf- 2378 mpls-tp-identifiers-02 (work in progress), July 2010 2380 [11] Vigoureux, M., Betts, M., Ward, D., "Requirements for OAM 2381 in MPLS Transport Networks", RFC 5860, May 2010 2383 [12] Bradner, S., McQuaid, J., "Benchmarking Methodology for 2384 Network Interconnect Devices", RFC 2544, March 1999 2386 [13] ITU-T Recommendation G.806 (01/09), "Characteristics of 2387 transport equipment - Description methodology and generic 2388 functionality ", January 2009 2390 11.2. Informative References 2392 [14] Sprecher, N., Nadeau, T., van Helvoort, H., Weingarten, 2393 Y., "MPLS-TP OAM Analysis", draft-ietf-mpls-tp-oam- 2394 analysis-02 (work in progress), July 2010 2396 [15] Nichols, K., Blake, S., Baker, F., Black, D., "Definition 2397 of the Differentiated Services Field (DS Field) in the 2398 IPv4 and IPv6 Headers", RFC 2474, December 1998 2400 [16] Grossman, D., "New terminology and clarifications for 2401 Diffserv", RFC 3260, April 2002. 
2403 [17] Kompella, K., Rekhter, Y., Berger, L., "Link Bundling in 2404 MPLS Traffic Engineering (TE)", RFC 4201, October 2005 2406 [18] ITU-T Recommendation G.707/Y.1322 (01/07), "Network node 2407 interface for the synchronous digital hierarchy (SDH)", 2408 January 2007 2410 [19] ITU-T Recommendation G.805 (03/00), "Generic functional 2411 architecture of transport networks", March 2000 2413 [20] ITU-T Recommendation Y.1731 (02/08), "OAM functions and 2414 mechanisms for Ethernet based networks", February 2008 2416 [21] IEEE Standard 802.1AX-2008, "IEEE Standard for Local and 2417 Metropolitan Area Networks - Link Aggregation", November 2418 2008 2420 Authors' Addresses 2422 Dave Allan 2423 Ericsson 2425 Email: david.i.allan@ericsson.com 2427 Italo Busi 2428 Alcatel-Lucent 2430 Email: Italo.Busi@alcatel-lucent.com 2431 Ben Niven-Jenkins 2432 Velocix 2434 Email: ben@niven-jenkins.co.uk 2436 Annamaria Fulignoli 2437 Ericsson 2439 Email: annamaria.fulignoli@ericsson.com 2441 Enrique Hernandez-Valencia 2442 Alcatel-Lucent 2444 Email: Enrique.Hernandez@alcatel-lucent.com 2446 Lieven Levrau 2447 Alcatel-Lucent 2449 Email: Lieven.Levrau@alcatel-lucent.com 2451 Vincenzo Sestito 2452 Alcatel-Lucent 2454 Email: Vincenzo.Sestito@alcatel-lucent.com 2456 Nurit Sprecher 2457 Nokia Siemens Networks 2459 Email: nurit.sprecher@nsn.com 2461 Huub van Helvoort 2462 Huawei Technologies 2464 Email: hhelvoort@huawei.com 2466 Martin Vigoureux 2467 Alcatel-Lucent 2469 Email: Martin.Vigoureux@alcatel-lucent.com 2470 Yaacov Weingarten 2471 Nokia Siemens Networks 2473 Email: yaacov.weingarten@nsn.com 2475 Rolf Winter 2476 NEC 2478 Email: Rolf.Winter@nw.neclab.eu