1  MPLS Working Group                                       I. Busi (Ed)
2  Internet Draft                                          Alcatel-Lucent
3  Intended status: Informational                    B. Niven-Jenkins (Ed)
4                                                                       BT
5                                                            D. Allan (Ed)
6                                                                 Ericsson

8  Expires: January 12, 2011                                 July 12, 2010

10                          MPLS-TP OAM Framework
11                  draft-ietf-mpls-tp-oam-framework-07.txt

13 Abstract

15    The Transport Profile of Multi-Protocol Label Switching
16    (MPLS-TP) is a packet-based transport technology based on the
17    MPLS Traffic Engineering (MPLS-TE) and Pseudowire (PW) data
18    plane architectures.

20    This document describes a framework to support a comprehensive
21    set of Operations, Administration and Maintenance (OAM)
22    procedures that fulfill the MPLS-TP OAM requirements for fault,
23    performance and protection-switching management and that do not
24    rely on the presence of a control plane.

26    This document is a product of a joint Internet Engineering Task
27    Force (IETF) / International Telecommunications Union
28    Telecommunication Standardization Sector (ITU-T) effort to
29    include an MPLS Transport Profile within the IETF MPLS and PWE3
30    architectures to support the capabilities and functionalities of
31    a packet transport network as defined by the ITU-T.

33 Status of this Memo

35    This Internet-Draft is submitted to IETF in full conformance
36    with the provisions of BCP 78 and BCP 79.

38    Internet-Drafts are working documents of the Internet
39    Engineering Task Force (IETF), its areas, and its working
40    groups. Note that other groups may also distribute working
41    documents as Internet-Drafts.

43    Internet-Drafts are draft documents valid for a maximum of six
44    months and may be updated, replaced, or obsoleted by other
45    documents at any time. It is inappropriate to use Internet-
46    Drafts as reference material or to cite them other than as "work
47    in progress".

49    The list of current Internet-Drafts can be accessed at
50    http://www.ietf.org/ietf/1id-abstracts.txt.
52 The list of Internet-Draft Shadow Directories can be accessed at 53 http://www.ietf.org/shadow.html. 55 This Internet-Draft will expire on January 12, 2011. 57 Copyright Notice 59 Copyright (c) 2010 IETF Trust and the persons identified as the 60 document authors. All rights reserved. 62 This document is subject to BCP 78 and the IETF Trust's Legal 63 Provisions Relating to IETF Documents 64 (http://trustee.ietf.org/license-info) in effect on the date of 65 publication of this document. Please review these documents 66 carefully, as they describe your rights and restrictions with 67 respect to this document. Code Components extracted from this 68 document must include Simplified BSD License text as described 69 in Section 4.e of the Trust Legal Provisions and are provided 70 without warranty as described in the BSD License. 72 Table of Contents 74 1. Introduction................................................5 75 1.1. Contributing Authors....................................6 76 2. Conventions used in this document............................6 77 2.1. Terminology............................................6 78 2.2. Definitions............................................7 79 3. Functional Components.......................................10 80 3.1. Maintenance Entity and Maintenance Entity Group.........10 81 3.2. Nested MEGs: SPMEs and Tandem Connection Monitoring.....12 82 3.3. MEG End Points (MEPs)..................................14 83 3.4. MEG Intermediate Points (MIPs).........................17 84 3.5. Server MEPs...........................................18 85 3.6. Configuration Considerations...........................19 86 3.7. P2MP considerations....................................20 87 4. Reference Model............................................21 88 4.1. MPLS-TP Section Monitoring (SME).......................23 89 4.2. MPLS-TP LSP End-to-End Monitoring (LME)................24 90 4.3. MPLS-TP PW Monitoring (PME)............................24 91 4.4. MPLS-TP LSP SPME Monitoring (LSME).....................25 92 4.5. MPLS-TP MS-PW SPME Monitoring (PSME)...................26 93 4.6. Fate sharing considerations for multilink..............28 94 5. OAM Functions for proactive monitoring......................29 95 5.1. Continuity Check and Connectivity Verification..........30 96 5.1.1. Defects identified by CC-V........................31 97 5.1.2. Consequent action.................................33 98 5.1.3. Configuration considerations......................34 99 5.2. Remote Defect Indication...............................35 100 5.2.1. Configuration considerations......................36 101 5.3. Alarm Reporting........................................36 102 5.4. Lock Reporting........................................38 103 5.5. Packet Loss Measurement................................39 104 5.5.1. Configuration considerations......................40 105 5.5.2. Sampling skew.....................................40 106 5.5.3. Multilink issues..................................40 107 5.6. Packet Delay Measurement...............................41 108 5.6.1. Configuration considerations......................41 109 5.7. Client Failure Indication..............................41 110 5.7.1. Configuration considerations......................42 111 6. OAM Functions for on-demand monitoring......................42 112 6.1. Connectivity Verification..............................43 113 6.1.1. Configuration considerations......................44 114 6.2. 
Packet Loss Measurement................................45 115 6.2.1. Configuration considerations......................45 116 6.2.2. Sampling skew.....................................45 117 6.2.3. Multilink issues..................................46 118 6.3. Diagnostic Tests.......................................46 119 6.3.1. Throughput Estimation.............................46 120 6.3.2. Data plane Loopback...............................47 121 6.4. Route Tracing.........................................48 122 6.4.1. Configuration considerations......................48 123 6.5. Packet Delay Measurement...............................48 124 6.5.1. Configuration considerations......................49 125 7. OAM Functions for administration control....................49 126 7.1. Lock Instruct.........................................49 127 7.1.1. Locking a transport path..........................50 128 7.1.2. Unlocking a transport path........................50 129 8. Security Considerations.....................................51 130 9. IANA Considerations........................................51 131 10. Acknowledgments...........................................52 132 11. References................................................53 133 11.1. Normative References..................................53 134 11.2. Informative References................................54 136 Editors' Note: 138 This Informational Internet-Draft is aimed at achieving IETF 139 Consensus before publication as an RFC and will be subject to an 140 IETF Last Call. 142 [RFC Editor, please remove this note before publication as an 143 RFC and insert the correct Streams Boilerplate to indicate that 144 the published RFC has IETF Consensus.] 146 1. Introduction 148 As noted in [8], the transport profile of multi-protocol label 149 switching (MPLS-TP) is a packet-based transport technology based on 150 the MPLS Traffic Engineering (MPLS-TE) and Pseudo Wire (PW) data 151 plane architectures defined in RFC 3031 [1], RFC 3985 [2] and RFC 152 5659 [4]. 154 MPLS-TP supports a comprehensive set of Operations, 155 Administration and Maintenance (OAM) procedures for fault, 156 performance and protection-switching management and that do not 157 rely on the presence of a control plane. 159 In line with [13], existing MPLS OAM mechanisms will be used 160 wherever possible and extensions or new OAM mechanisms will be 161 defined only where existing mechanisms are not sufficient to 162 meet the requirements. Extensions do not deprecate support for 163 existing MPLS OAM capabilities. 165 The MPLS-TP OAM framework defined in this document provides a 166 comprehensive set of OAM procedures that satisfy the MPLS-TP OAM 167 requirements of RFC 5860 [10]. In this regard, it defines 168 similar OAM functionality as for existing SONET/SDH and OTN OAM 169 mechanisms (e.g. [17]). 171 The MPLS-TP OAM framework is applicable to both LSPs and 172 (MS-)PWs and supports co-routed and associated bidirectional p2p 173 transport paths as well as unidirectional p2p and p2mp transport 174 paths. 176 This document is a product of a joint Internet Engineering Task 177 Force (IETF) / International Telecommunication Union 178 Telecommunication Standardization Sector (ITU-T) effort to 179 include an MPLS Transport Profile within the IETF MPLS and PWE3 180 architectures to support the capabilities and functionalities of 181 a packet transport network as defined by the ITU-T. 183 1.1. 
Contributing Authors

185   Dave Allan, Italo Busi, Ben Niven-Jenkins, Annamaria Fulignoli,
186   Enrique Hernandez-Valencia, Lieven Levrau, Vincenzo Sestito,
187   Nurit Sprecher, Huub van Helvoort, Martin Vigoureux, Yaacov
188   Weingarten, Rolf Winter

190 2. Conventions used in this document

192 2.1. Terminology

194   AC     Attachment Circuit

196   DBN    Domain Border Node

198   LER    Label Edge Router

200   LME    LSP Maintenance Entity

202   LMEG   LSP ME Group

204   LSP    Label Switched Path

206   LSR    Label Switching Router

208   LSME   LSP SPME ME

210   LSMEG  LSP SPME ME Group

212   ME     Maintenance Entity

214   MEG    Maintenance Entity Group

216   MEP    Maintenance Entity Group End Point

218   MIP    Maintenance Entity Group Intermediate Point

220   PHB    Per-hop Behavior

222   PME    PW Maintenance Entity

224   PMEG   PW ME Group

226   PSME   PW SPME ME

228   PSMEG  PW SPME ME Group
229   PW     Pseudowire

231   SLA    Service Level Agreement

233   SME    Section Maintenance Entity Group

235   SPME   Sub-path Maintenance Element

237 2.2. Definitions

239   This document uses the terms defined in RFC 5654 [5].

241   This document uses the term 'Per-hop Behavior' as defined in RFC
242   2474 [14].

244   This document uses the term LSP to indicate either a service LSP
245   or a transport LSP (as defined in [8]).

247   Where appropriate, the following definitions are aligned with
248   ITU-T recommendation Y.1731 [19] in order to have a common,
249   unambiguous terminology. They do not however intend to imply a
250   certain implementation but rather serve as a framework to
251   describe the necessary OAM functions for MPLS-TP.

253   Adaptation function: The adaptation function is the interface
254   between the client (sub-)layer and the server (sub-)layer.

256   Data plane loopback: An out-of-service test where an interface
257   at either an intermediate or terminating node in a path is
258   placed into a data plane loopback state, such that all traffic
259   (including user data and OAM) received on the looped back
260   interface is sent on the reverse direction of the transport
261   path.

263   Note - The only way to send an OAM packet to a node set in the data
264   plane loopback mode is via TTL expiry, irrespective of whether the
265   node is hosting MIPs or MEPs.

267   Domain Border Node (DBN): An intermediate node in an MPLS-TP LSP
268   that is at the boundary between two MPLS-TP OAM domains. Such a
269   node may be present on the edge of two domains or may be
270   connected by a link to the DBN at the edge of another OAM
271   domain.

273   Down MEP: A MEP that receives OAM packets from, and transmits
274   them towards, the direction of a server layer.

276   In-Service: The administrative status of a transport path when
277   it is unlocked.

279   Intermediate Node: An intermediate node transits traffic for an
280   LSP or a PW. An intermediate node may originate OAM flows
281   directed to downstream intermediate nodes or MEPs.

283   Loopback: See data plane loopback and OAM loopback definitions.

285   Maintenance Entity (ME): Some portion of a transport path that
286   requires management, bounded by two points (called MEPs), and the
287   relationship between those points to which maintenance and
288   monitoring operations apply (details in section 3.1).

290   Maintenance Entity Group (MEG): The set of one or more
291   maintenance entities that maintain and monitor a transport path
292   in an OAM domain.

294   MEP: A MEG end point (MEP) is capable of initiating (MEP Source)
295   and terminating (MEP Sink) OAM messages for fault management and
296   performance monitoring. MEPs define the boundaries of an ME
297   (details in section 3.3).
299 MEP Source: A MEP acts as MEP source for an OAM message when it 300 originates and inserts the message into the transport path for 301 its associated MEG. 303 MEP Sink: A MEP acts as a MEP sink for an OAM message when it 304 terminates and processes the messages received from its 305 associated MEG. 307 MIP: A MEG intermediate point (MIP) terminates and processes OAM 308 messages that are sent to this particular MIP and may generate 309 OAM messages in reaction to received OAM messages. It never 310 generates unsolicited OAM messages itself. A MIP resides within 311 a MEG between MEPs (details in section 3.3). 313 MPLS-TP Section: As defined in [8], it is the link traversed by 314 an MPLS-TP LSP. 316 OAM domain: A domain, as defined in [5], whose entities are 317 grouped for the purpose of keeping the OAM confined within that 318 domain. 320 Note - within the rest of this document the term "domain" is 321 used to indicate an "OAM domain" 322 OAM flow: Is the set of all OAM messages originating with a 323 specific MEP source that instrument one direction of a MEG. 325 OAM information element: An atomic piece of information 326 exchanged between MEPs and/or MIPs in MEG used by an OAM 327 application. 329 OAM loopback: It is the capability of a node to be directed by a 330 received OAM message to generate a reply back to the sender. OAM 331 loopback can work in-service and can support different OAM 332 functions (e.g., bidirectional on-demand connectivity 333 verification). 335 OAM Message: One or more OAM information elements that when 336 exchanged between MEPs or between MEPs and MIPs performs some 337 OAM functionality (e.g. connectivity verification) 339 OAM Packet: A packet that carries one or more OAM messages (i.e. 340 OAM information elements). 342 Out-of-Service: The administrative status of a transport path 343 when it is locked. When a path is in a locked condition, it is 344 blocked from carrying client traffic. 346 Path Segment: It is either a segment or a concatenated segment, 347 as defined in RFC 5654 [5]. 349 Signal Degrade: A condition declared by a MEP when the data 350 forwarding capability associated with a transport path has 351 deteriorated, as determined by PM. See also ITU-T recommendation 352 G.806 [12]. 354 Signal Fail: A condition declared by a MEP when the data 355 forwarding capability associated with a transport path has 356 failed, e.g. loss of continuity. See also ITU-T recommendation 357 G.806 [12]. 359 Tandem Connection: A tandem connection is an arbitrary part of a 360 transport path that can be monitored (via OAM) independent of 361 the end-to-end monitoring (OAM). The tandem connection may also 362 include the forwarding engine(s) of the node(s) at the 363 boundaries of the tandem connection. Tandem connections may be 364 nested but cannot overlap. See also ITU-T recommendation G.805 365 [18]. 367 Up MEP: A MEP that transmits OAM packets towards, and receives 368 them from, the direction of the forwarding engine. 370 3. Functional Components 372 MPLS-TP is a packet-based transport technology based on the MPLS 373 and PW data plane architectures ([1], [2] and [4]) and is 374 capable of transporting service traffic where the 375 characteristics of information transfer between the transport 376 path endpoints can be demonstrated to comply with certain 377 performance and quality guarantees. 379 In order to describe the required OAM functionality, this 380 document introduces a set of functional components. 382 3.1. 
Maintenance Entity and Maintenance Entity Group

384   MPLS-TP OAM operates in the context of Maintenance Entities
385   (MEs) that define a relationship between any two points of a
386   transport path to which maintenance and monitoring operations
387   apply. The collection of one or more MEs that belong to the
388   same transport path and that are maintained and monitored as a
389   group is known as a Maintenance Entity Group (MEG), and the two
390   points that define a maintenance entity are called Maintenance
391   Entity Group (MEG) End Points (MEPs). In between these two
392   points zero or more intermediate points, called Maintenance
393   Entity Group Intermediate Points (MIPs), can exist and can be
394   shared by more than one ME in a MEG.

396   An abstract reference model for an ME is illustrated in Figure 1
397   below:

399            +-+    +-+    +-+    +-+
400            |A|----|B|----|C|----|D|
401            +-+    +-+    +-+    +-+

403              Figure 1 ME Abstract Reference Model

405   The instantiation of this abstract model to different MPLS-TP
406   entities is described in section 4. In Figure 1, nodes A and D
407   can be LERs for an LSP or the T-PEs for an MS-PW, while nodes B
408   and C are LSRs for an LSP or S-PEs for an MS-PW. MEPs reside in
409   nodes A and D while MIPs reside in nodes B and C and may reside
410   in A and D. The links connecting adjacent nodes can be physical
411   links, (sub-)layer LSPs/SPMEs, or serving layer paths.

413   This functional model defines the relationships between all OAM
414   entities from a maintenance perspective, to allow each
415   Maintenance Entity to monitor and manage the (sub-)layer network
416   under its responsibility and to localize problems efficiently.

418   An MPLS-TP Maintenance Entity Group may be defined to monitor
419   the transport path for fault and/or performance management.

421   The MEPs that form a MEG bound the scope of OAM flows to the
422   MEG (i.e. within the domain of the transport path that is being
423   monitored and managed). There are two exceptions to this:

425   1) A misbranching fault may cause OAM packets to be delivered to
426      a MEP that is not in the MEG of origin.

428   2) An out-of-band return path may be used between a MIP or a MEP
429      and the originating MEP.

431   In the case of unidirectional point-to-point transport paths, a
432   single unidirectional Maintenance Entity is defined to monitor
433   it.

435   In the case of associated bi-directional point-to-point transport
436   paths, two independent unidirectional Maintenance Entities are
437   defined to independently monitor each direction. This has
438   implications for transactions that terminate at or query a MIP,
439   as a return path from MIP to source MEP does not necessarily
440   exist in the MEG.

442   In the case of co-routed bi-directional point-to-point transport
443   paths, a single bidirectional Maintenance Entity is defined to
444   monitor both directions congruently.

446   In the case of unidirectional point-to-multipoint transport
447   paths, a single unidirectional Maintenance Entity for each leaf
448   is defined to monitor the transport path from the root to that
449   leaf.

451   In all cases, portions of the transport path may be monitored by
452   the instantiation of SPMEs (see section 3.2).

454   The reference model for the p2mp MEG is represented in Figure 2.
456                           +-+
457                        /--|D|
458                       /   +-+
459                   +-+
460                /--|C|
461     +-+    +-+/   +-+\   +-+
462     |A|----|B|        \--|E|
463     +-+    +-+\   +-+    +-+
464                \--|F|
465                   +-+

467            Figure 2 Reference Model for p2mp MEG

469   In the case of p2mp transport paths, the OAM measurements are
470   independent for each ME (A-D, A-E and A-F):

472   o  Fault conditions - some faults may impact more than one ME
473      depending on where the failure is located;

475   o  Packet loss - packet dropping may impact more than one ME
476      depending on where the packets are lost;

478   o  Packet delay - will be unique per ME.

480   Each leaf (i.e. D, E and F) terminates OAM flows to monitor the
481   ME between itself and the root, while the root (i.e. A) generates
482   OAM messages common to all the MEs of the p2mp MEG. All nodes
483   may implement a MIP in the corresponding MEG.

485 3.2. Nested MEGs: SPMEs and Tandem Connection Monitoring

487   In order to verify and maintain performance and quality
488   guarantees, there is a need to apply OAM functionality not only
489   at the granularity of a transport path (e.g. LSP or MS-PW), but
490   also on arbitrary parts of transport paths, defined as Tandem
491   Connections, between any two arbitrary points along a transport
492   path.

494   Sub-path Maintenance Elements (SPMEs), as defined in [8], are
495   instantiated to provide monitoring of a portion of a set of co-
496   routed transport paths (LSPs or MS-PWs). The operational aspects
497   of instantiating SPMEs are outside the scope of this memo.

499   SPMEs can also be employed to meet the requirement to provide
500   tandem connection monitoring (TCM).

502   TCM for a given path segment of a transport path is implemented
503   by creating an SPME that has a 1:1 association with the path
504   segment of the transport path that is to be monitored.

506   In the TCM case, this means that the SPME used to provide TCM
507   can carry one and only one transport path, thus allowing
508   direct correlation between all fault management and performance
509   monitoring information gathered for the SPME and the monitored
510   path segment of the end-to-end transport path. The SPME is
511   monitored using normal LSP monitoring.

513   Where resiliency is required across an arbitrary portion of a
514   transport path, this may be implemented by multiple diversely
515   routed SPMEs with common end points, where only one SPME is
516   active at any given time.

518   There are a number of implications to this approach:

520   1) The SPME would use the uniform model of TC code point copying
521      between sub-layers for diffserv such that the E2E markings
522      and PHB treatment for the transport path are preserved by the
523      SPMEs.

525   2) The SPME normally would use the short-pipe model for TTL
526      handling [6] such that MIP addressing for the E2E entity
527      would not be impacted by the presence of the SPME, but it
528      should be possible for an operator to specify use of the
529      uniform model.

531   3) PM statistics need to be adjusted for the encapsulation
532      overhead of the additional SPME sub-layer.

534   Note that points 1 and 2 above assume that the TTL copying mode
535   and TC copying modes are independently configurable for an LSP.

537   There are specific issues with the use of the uniform model of
538   TTL copying for an SPME:

540   1. Any MIP in the SPME sub-layer is not part of the transport path
541      MEG; hence only an out-of-band return path would be available.

543   2. The instantiation of a lower level MEG or protection switching
544      actions within a lower level MEG may change the TTL distances to
545      MIPs in the higher level MEGs (illustrated by the sketch below).
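   The second issue can be illustrated with a non-normative Python
   sketch. It is a deliberately simplified model (the helper name and
   hop counts are invented for the example, and the detailed per-hop
   TTL decrement rules of [6] are not modelled): under the short-pipe
   model the SPME is seen as a single hop by the encompassing MEG, so
   the TTL distance to a MIP located beyond the SPME does not depend
   on how many hops the SPME traverses internally; under the uniform
   model it does, so a protection switch inside the SPME changes the
   TTL distances seen by the higher level MEG.

      # Non-normative sketch: TTL distance from an E2E source MEP to the
      # first MIP beyond an SPME, under the two TTL handling models.
      # Hop counts and the helper name are illustrative assumptions.

      def e2e_ttl_to_mip_beyond_spme(hops_before, spme_interior_hops,
                                     hops_after, model):
          if model == "short-pipe":
              visible = 1                   # SPME appears as a single hop
          elif model == "uniform":
              visible = spme_interior_hops  # interior hops consume E2E TTL
          else:
              raise ValueError(model)
          return hops_before + visible + hops_after

      for model in ("short-pipe", "uniform"):
          working = e2e_ttl_to_mip_beyond_spme(2, 2, 1, model)     # 2-hop SPME
          protection = e2e_ttl_to_mip_beyond_spme(2, 4, 1, model)  # 4-hop SPME
          print(f"{model:10s}  working={working}  protection={protection}")
      # short-pipe  working=4  protection=4  -> MIP addressing unaffected
      # uniform     working=5  protection=7  -> TTL distances change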
547   The endpoints of the SPME are MEPs, which limit the scope of the
548   OAM flows within each MEG to the MEG they belong to (i.e. within
549   the domain of the SPME that is being monitored and managed).

551   When considering SPMEs, it is important to consider that the
552   following properties apply to all MPLS-TP MEGs:

554   o  They can be nested but not overlapped, e.g. a MEG may cover a
555      segment or a concatenated segment of another MEG, and may
556      also include the forwarding engine(s) of the node(s) at the
557      edge(s) of the segment or concatenated segment. However, when
558      MEGs are nested, the MEPs and MIPs in the nested MEG are no
559      longer part of the encompassing MEG.

561   o  It is possible that MEPs of nested MEGs reside on a single
562      node but, again, implemented in such a way that they do not
563      overlap.

565   o  Each OAM flow is associated with a single MEG.

567   o  OAM packets that instrument a particular direction of a
568      transport path are subject to the same forwarding treatment
569      (i.e. fate share) as the data traffic and in some cases may
570      be required to have common queuing discipline E2E with the
571      class of traffic monitored. OAM packets can be distinguished
572      from the data traffic using the GAL and ACH constructs [7]
573      for LSPs and Sections, or the ACH construct [3] and [7] for
574      (MS-)PWs.

576   o  When an SPME is instantiated after the transport path has been
577      instantiated, the addressing of the MIPs will change.

579 3.3. MEG End Points (MEPs)

581   MEG End Points (MEPs) are the source and sink points of a MEG.
582   In the context of an MPLS-TP LSP, only LERs can implement MEPs,
583   while in the context of an SPME, LSRs of the MPLS-TP LSP can be
584   LERs of SPMEs that contribute to the overall monitoring
585   infrastructure for the transport path. Regarding PWs, only T-PEs
586   can implement MEPs, while for SPMEs supporting one or more PWs
587   both T-PEs and S-PEs can implement SPME MEPs. Any MPLS-TP LSR
588   can implement a MEP for an MPLS-TP Section.

590   MEPs are responsible for activating and controlling all of the
591   proactive and on-demand monitoring OAM functionality for the
592   MEG. There is a separate class of notifications (such as LKR and
593   AIS) that are originated by intermediate nodes and triggered by
594   server layer events. A MEP is capable of originating and
595   terminating OAM messages for fault management and performance
596   monitoring. These OAM messages are encapsulated into an OAM
597   packet using the G-ACh as defined in RFC 5586 [7]. In this case
598   the G-ACh message is an OAM message and the channel type
599   indicates an OAM message. A MEP terminates all the OAM packets
600   it receives from the MEG it belongs to and silently discards
601   those that do not (note that in the case of a mis-connectivity
602   defect there are further actions taken). The MEG an OAM packet
603   belongs to is inferred from the MPLS or PW label or, in the case
604   of an MPLS-TP Section, from the port on which the OAM packet was
605   received with the GAL at the top of the label stack.

608   OAM packets may require the use of an available "out-of-band"
609   return path (as defined in [8]). In such cases sufficient
610   information is required in the originating transaction such that
611   the OAM reply packet can be constructed (e.g. an IP address).
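   The demultiplexing described above (terminate OAM packets belonging
   to one of the MEP's own MEGs, silently discard the rest, and infer
   the MEG from the label or, for a Section, from the receiving port
   when the GAL is at the top of the stack) can be summarised by the
   following non-normative sketch. The data structure and mapping
   table names are invented for the example; only the GAL value
   (label 13, assigned by RFC 5586 [7]) is taken from the referenced
   specifications.

      # Non-normative sketch of MEP sink demultiplexing.

      from dataclasses import dataclass

      GAL = 13  # Generic Associated Channel Label, assigned by RFC 5586

      @dataclass
      class OamPacket:
          in_port: str     # port on which the packet was received
          top_label: int   # label at the top of the received label stack

      def mep_sink_action(pkt, own_megs, meg_by_label, meg_by_section_port):
          """Decide what a MEP does with a received OAM packet."""
          if pkt.top_label == GAL:
              # MPLS-TP Section: the GAL is at the top of the stack, so the
              # MEG is inferred from the port the packet was received on.
              meg = meg_by_section_port.get(pkt.in_port)
          else:
              # LSP or PW: the MEG is inferred from the MPLS or PW label.
              meg = meg_by_label.get(pkt.top_label)

          if meg is not None and meg in own_megs:
              return f"terminate and process OAM message for {meg}"
          # Packets that do not belong to one of this MEP's MEGs are
          # silently discarded; mis-connectivity defect handling is a
          # separate consequent action (see section 5.1).
          return "silently discard"

      # Example: an OAM packet received under an LSP label mapped to a MEG.
      print(mep_sink_action(OamPacket("if0", 1001),
                            own_megs={"LSP13 LME"},
                            meg_by_label={1001: "LSP13 LME"},
                            meg_by_section_port={}))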
613 Each OAM solution will further detail its applicability as a 614 pro-active or on-demand mechanism as well as its usage when: 616 o The "in-band" return path exists and it is used; 618 o An "out-of-band" return path exists and it is used; 620 o Any return path does not exist or is not used. 622 Once a MEG is configured, the operator can configure which 623 proactive OAM functions to use on the MEG but the MEPs are 624 always enabled. A node at the edge of a MEG always supports a 625 MEP. 627 MEPs terminate all OAM packets received from the associated MEG. 628 As the MEP corresponds to the termination of the forwarding path 629 for a MEG at the given (sub-)layer, OAM packets never leak 630 outside of a MEG in a properly configured fault-free 631 implementation. 633 A MEP of an MPLS-TP transport path coincides with transport path 634 termination and monitors it for failures or performance 635 degradation (e.g. based on packet counts) in an end-to-end 636 scope. Note that both MEP source and MEP sink coincide with 637 transport paths' source and sink terminations. 639 The MEPs of an SPME are not necessarily coincident with the 640 termination of the MPLS-TP transport path and monitor a path 641 segment of the transport path for failures or performance 642 degradation (e.g. based on packet counts) only within the 643 boundary of the MEG for the SPME. 645 An MPLS-TP MEP sink passes a fault indication to its client 646 (sub-)layer network as a consequent action of fault detection. 648 A node at the edge of a MEG can either support per-node MEP or 649 per-interface MEP(s). A per-node MEP resides in an unspecified 650 location within the node while a per-interface MEP resides on a 651 specific side of the forwarding engine. In particular a per- 652 interface MEP is called "Up MEP" or "Down MEP" depending on its 653 location relative to the forwarding engine. 655 Source node Destination node 656 ------------------------ ------------------------ 657 | | | | 658 |----- -----| |----- -----| 659 | MEP | | | | | | MEP | 660 | | ---- | | | | ---- | | 661 | In |->-| FW |->-| Out |->- ->-| In |->-| FW |->-| Out | 662 | i/f | ---- | i/f | | i/f | ---- | i/f | 663 |----- -----| |----- -----| 664 | | | | 665 ------------------------ ------------------------ 666 (1) (2) 668 Figure 3 Example of per-interface Up MEPs 670 Figure 3 describes two examples of per-interface Up MEPs: An Up 671 Source MEP in a source node (case 1) and an Up Sink MEP in a 672 destination node (case 2). 674 The usage of per-interface Up MEPs extends the coverage of the 675 ME for both fault and performance monitoring closer to the edge 676 of the domain and allows the isolation of failures or 677 performance degradation to being within a node or either the 678 link or interfaces. 680 Each OAM solution will further detail the implications when used 681 with per-interface or per-node MEPs, if necessary. 683 It may occur that the Up MEPs of an SPME are set on both sides 684 of the forwarding engine such that the MEG is entirely internal 685 to the node. 687 It should be noted that a ME may span nodes that implement per 688 node MEPs and per-interface MEPs. This guarantees backward 689 compatibility with most of the existing LSRs that can implement 690 only a per-node MEP as in current implementations label 691 operations are largely performed on the ingress interface, hence 692 the exposure of the GAL as top label will occur at the ingress 693 interface. 
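   As a non-normative aside, the benefit of per-interface MEP
   placement for fault localization can be illustrated with a toy
   comparison of measurement points on the two sides of the forwarding
   engine. The placement names, counter names and localization rule
   below are assumptions made for this sketch and are not defined by
   this framework.

      # Illustrative only: with measurement points on both sides of the
      # forwarding engine, loss that first appears at the egress side
      # points inside the node, while loss already visible at the
      # ingress side points at the upstream link or interface.

      from enum import Enum

      class MepPlacement(Enum):
          PER_NODE = "single MEP, unspecified location within the node"
          PER_INTERFACE_DOWN = "on an interface, facing the server layer"
          PER_INTERFACE_UP = "on an interface, facing the forwarding engine"

      def localize_loss(expected_from_link, counted_at_in_interface,
                        counted_at_out_interface):
          if counted_at_in_interface < expected_from_link:
              return "upstream link or ingress interface"
          if counted_at_out_interface < counted_at_in_interface:
              return "within the node (forwarding engine)"
          return "no loss localized at this node"

      print(localize_loss(expected_from_link=1000,
                          counted_at_in_interface=1000,
                          counted_at_out_interface=990))
      # -> "within the node (forwarding engine)"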
695 Note that a MEP can only exist at the beginning and end of a 696 (sub-)layer in MPLS-TP. If there is a need to monitor some 697 portion of that LSP or PW, a new sub-layer in the form of an 698 SPME is created which permits MEPs and associated MEGs to be 699 created. 701 In the case where an intermediate node sends a message to a MEP, 702 it uses the top label of the stack at that point. 704 3.4. MEG Intermediate Points (MIPs) 706 A MEG Intermediate Point (MIP) is a function located at a point 707 between the MEPs of a MEG for a PW, LSP or SPME. 709 A MIP is capable of reacting to some OAM packets and forwarding all 710 the other OAM packets while ensuring fate sharing with data plane 711 packets. However, a MIP does not initiate unsolicited OAM packets, 712 but may be addressed by OAM packets initiated by one of the MEPs of 713 the MEG. A MIP can generate OAM packets only in response to OAM 714 packets that are sent on the MEG it belongs to. The OAM messages 715 generated by the MIP are sent in the direction of the source MEP and 716 not forwarded to the sink MEP. 718 An intermediate node within a MEG can either: 720 o Support per-node MIP (i.e. a single MIP per node in an 721 unspecified location within the node); 723 o Support per-interface MIP (i.e. two or more MIPs per node on 724 both sides of the forwarding engine). 726 Intermediate node 727 ------------------------ 728 | | 729 |----- -----| 730 | MIP | | MIP | 731 | | ---- | | 732 ->-| In |->-| FW |->-| Out |->- 733 | i/f | ---- | i/f | 734 |----- -----| 735 | | 736 ------------------------ 737 Figure 4 Example of per-interface MIPs 739 Figure 4 describes an example of two per-interface MIPs at an 740 intermediate node of a point-to-point MEG. 742 The usage of per-interface MIPs allows the isolation of failures 743 or performance degradation to being within a node or either the 744 link or interfaces. 746 When sending an OAM packet to a MIP, the source MEP should set 747 the TTL field to indicate the number of hops necessary to reach 748 the node where the MIP resides. It is always assumed that the 749 "short pipe" model of TTL handling is used by the MPLS transport 750 profile. 752 The source MEP should also include Target MIP information in the 753 OAM packets sent to a MIP to allow proper identification of the 754 MIP within the node. The MEG the OAM packet is associated with 755 is inferred from the MPLS label. 757 A node at the edge of a MEG can also support per-interface Up 758 MEPs and per-interface MIPs on either side of the forwarding 759 engine. 761 Once a MEG is configured, the operator can enable/disable the 762 MIPs on the nodes within the MEG. All the intermediate nodes and 763 possibly the end nodes host MIP(s). Local policy allows them to 764 be enabled per function and per MEG. The local policy is 765 controlled by the management system, which may delegate it to 766 the control plane. 768 3.5. Server MEPs 770 A server MEP is a MEP of a MEG that is either: 772 o Defined in a layer network that is "below", which is to say 773 encapsulates and transports the MPLS-TP layer network being 774 referenced, or 776 o Defined in a sub-layer of the MPLS-TP layer network that is 777 "below" which is to say encapsulates and transports the sub- 778 layer being referenced. 780 A server MEP can coincide with a MIP or a MEP in the client 781 (MPLS-TP) (sub-)layer network. 
783 A server MEP also interacts with the client/server adaptation 784 function between the client (MPLS-TP) (sub-)layer network and 785 the server (sub-)layer network. The adaptation function 786 maintains state on the mapping of MPLS-TP transport paths that 787 are setup over that server (sub-)layer's transport path. 789 For example, a server MEP can be either: 791 o A termination point of a physical link (e.g. 802.3), an SDH 792 VC or OTN ODU, for the MPLS-TP Section layer network, defined 793 in section 4.1; 795 o An MPLS-TP Section MEP for MPLS-TP LSPs, defined in section 796 4.2; 798 o An MPLS-TP LSP MEP for MPLS-TP PWs, defined in section 4.3; 800 o An MPLS-TP SPME MEP used for LSP path segment monitoring, as 801 defined in section 4.4, for MPLS-TP LSPs or higher-level 802 SPMEs providing LSP path segment monitoring; 804 o An MPLS-TP SPME MEP used for PW path segment monitoring, as 805 defined in section 4.5, for MPLS-TP PWs or higher-level SPMEs 806 providing PW path segment monitoring. 808 The server MEP can run appropriate OAM functions for fault 809 detection within the server (sub-)layer network, and provides a 810 fault indication to its client MPLS-TP layer network. Server MEP 811 OAM functions are outside the scope of this document. 813 3.6. Configuration Considerations 815 When a control plane is not present, the management plane 816 configures these functional components. Otherwise they can be 817 configured either by the management plane or by the control 818 plane. 820 Local policy allows disabling the usage of any available "out- 821 of-band" return path, as defined in [8], irrespective of what is 822 requested by the node originating the OAM packet. 824 SPMEs are usually instantiated when the transport path is 825 created by either the management plane or by the control plane 826 (if present). Sometimes an SPME can be instantiated after the 827 transport path is initially created. 829 3.7. P2MP considerations 831 All the traffic sent over a p2mp transport path, including OAM 832 packets generated by a MEP, is sent (multicast) from the root to 833 all the leaves. As a consequence: 835 o To send an OAM packet to all leaves, the source MEP can 836 send a single OAM packet that will be delivered by the 837 forwarding plane to all the leaves and processed by all the 838 leaves. 840 o To send an OAM packet to a single leaf, the source MEP 841 sends a single OAM packet that will be delivered by the 842 forwarding plane to all the leaves but contains sufficient 843 information to identify a target leaf, and therefore is 844 processed only by the target leaf and ignored by the other 845 leaves. 847 o To send an OAM packet to a single MIP, the source MEP sends 848 a single OAM packet with the TTL field indicating the 849 number of hops necessary to reach the node where the MIP 850 resides. This packet will be delivered by the forwarding 851 plane to all intermediate nodes at the same TTL distance of 852 the target MIP and to any leaf that is located at a shorter 853 distance. The OAM message must contain sufficient 854 information to identify the target MIP and therefore is 855 processed only by the target MIP. 857 o In order to send an OAM packet to M leaves (i.e., a subset 858 of all the leaves), the source MEP sends M different OAM 859 packets targeted to each individual leaf in the group of M 860 leaves. Aggregated or subsetting mechanisms are outside the 861 scope of this document. 
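   The receiver-side filtering implied by the list above can be
   sketched, non-normatively, as follows. The field and helper names
   (target, my_id, is_leaf) are invented for the example; the
   framework only requires that a targeted packet carry sufficient
   information to identify the intended MIP or leaf.

      # Non-normative sketch of how a node handles a p2mp OAM packet.

      def p2mp_receiver_action(my_id, is_leaf, ttl_expired, target=None):
          """target is the identifier of the intended MIP or leaf carried
          in the packet, or None when the packet is addressed to all
          leaves."""
          if target is not None:
              # A targeted packet reaches every node at the chosen TTL
              # distance and every leaf at a shorter distance, but only
              # the node identified in the packet processes it.
              if target == my_id:
                  return "process"
              return "ignore" if (ttl_expired or is_leaf) else "forward"
          # An untargeted packet is addressed to all leaves; intermediate
          # nodes simply forward it.
          return "process" if is_leaf else "forward"

      def send_to_leaf_subset(send_one_packet, leaves):
          # Addressing M leaves means sending M individually targeted
          # packets; aggregation mechanisms are outside the scope of this
          # document.
          for leaf in leaves:
              send_one_packet(target=leaf)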
863   P2MP paths are unidirectional; therefore any return path to a
864   source MEP for on-demand transactions will be out-of-band. A
865   mechanism to scope the set of MEPs or MIPs expected to respond
866   to a given "on-demand" transaction is useful as it relieves the
867   source MEP of the requirement to filter and discard undesired
868   responses, as normally TTL exhaustion will address all MIPs at a
869   given distance from the source, and failure to exhaust TTL will
870   address all MEPs.

872 4. Reference Model

874   The reference model for the MPLS-TP framework builds upon the
875   concept of a MEG, and its associated MEPs and MIPs, to support
876   the functional requirements specified in RFC 5860 [10].

878   The following MPLS-TP MEGs are specified in this document:

880   o  A Section Maintenance Entity Group (SME), allowing monitoring
881      and management of MPLS-TP Sections (between MPLS LSRs).

883   o  An LSP Maintenance Entity Group (LME), allowing monitoring
884      and management of an end-to-end LSP (between LERs).

886   o  A PW Maintenance Entity Group (PME), allowing monitoring and
887      management of an end-to-end SS-PW or MS-PW (between T-PEs).

889   o  An LSP SPME ME Group (LSMEG), allowing monitoring and
890      management of an SPME (between any LERs/LSRs along an LSP).

892   o  A PW SPME ME Group (PSMEG), allowing monitoring and
893      management of an SPME (between any T-PEs/S-PEs along the
894      (MS-)PW).

896   The MEGs specified in this MPLS-TP framework are compliant with
897   the architecture framework for MPLS-TP MS-PWs [4] and LSPs [1].

899   Hierarchical LSPs are also supported in the form of SPMEs. In
900   this case, each LSP in the hierarchy is a different sub-layer
901   network that can be monitored, independently from higher and
902   lower level LSPs in the hierarchy, on an end-to-end basis (from
903   LER to LER) by an SPME. It is possible to monitor a portion of a
904   hierarchical LSP by instantiating a hierarchical SPME between
905   any LERs/LSRs along the hierarchical LSP.

907   Native     |<----------------- MS-PW1Z ---------------->|  Native
908   Layer      |                                            |  Layer
909   Service    |           ||  |<--LSP3X--->|  ||           |  Service
910     (AC1)   V V    LSP     V V    LSP     V V    LSP     V V  (AC2)
911            +----+   +-+   +----+         +----+   +-+   +----+
912   +----+   |TPE1|   | |   |SPE3|         |SPEX|   | |   |TPEZ|   +----+
913   |    |   |    |=========|    |=========|    |=========|    |   |    |
914   | CE1|---|......PW13......|.....PW3X.....|......PWXZ.......|---|CE2 |
915   |    |   |    |=========|    |=========|    |=========|    |   |    |
916   +----+   |  1 |   |2|   |  3 |         |  X |   |Y|   |  Z |   +----+
917            +----+   +-+   +----+         +----+   +-+   +----+
918              .              .              .              .
919              |              |              |              |
920              |<- Domain 1 ->|              |<- Domain Z ->|
921              ^----------------- PW1Z PME -----------------^
922              ^- PW13 PSME --^              ^- PWXZ PSME --^
923              ^--------------^              ^--------------^
924                 LSP13 LME                     LSPXZ LME
925              ^------^ ^----^  ^----------^  ^-----^ ^-----^
926               Sec12    Sec23      Sec3X       SecXY    SecYZ
927                SME      SME        SME         SME      SME

929   TPE1: Terminating Provider Edge 1    SPE3: Switching Provider Edge 3
931   TPEZ: Terminating Provider Edge Z    SPEX: Switching Provider Edge X

934   ^---^ ME    ^ MEP    ==== LSP    .... PW

936        Figure 5 Reference Model for the MPLS-TP OAM Framework

938   Figure 5 depicts a high-level reference model for the MPLS-TP
939   OAM framework. The figure depicts portions of two MPLS-TP
940   enabled network domains, Domain 1 and Domain Z. In Domain 1,
941   LSR1 is adjacent to LSR2 via the MPLS-TP Section Sec12 and LSR2
942   is adjacent to LSR3 via the MPLS-TP Section Sec23. Similarly, in
943   Domain Z, LSRX is adjacent to LSRY via the MPLS-TP Section SecXY
944   and LSRY is adjacent to LSRZ via the MPLS-TP Section SecYZ.
In 945 addition, LSR3 is adjacent to LSRX via the MPLS-TP Section 3X. 947 Figure 5 also shows a bi-directional MS-PW (PW1Z) between AC1 on 948 TPE1 and AC2 on TPEZ. The MS-PW consists of three bi-directional 949 PW path segments: 1) PW13 path segment between T-PE1 and S-PE3 950 via the bi-directional LSP13 LSP, 2) PW3X path segment between 951 S-PE3 and S-PEX, via the bi-directional LSP3X LSP, and 3) PWXZ 952 path segment between S-PEX and T-PEZ via the bi-directional 953 LSPXZ LSP. 955 The MPLS-TP OAM procedures that apply to a MEG are expected to 956 operate independently from procedures on other MEGs. Yet, this 957 does not preclude that multiple MEGs may be affected 958 simultaneously by the same network condition, for example, a 959 fiber cut event. 961 Note that there are no constrains imposed by this OAM framework 962 on the number, or type (p2p, p2mp, LSP or PW), of MEGs that may 963 be instantiated on a particular node. In particular, when 964 looking at Figure 5, it should be possible to configure one or 965 more MEPs on the same node if that node is the endpoint of one 966 or more MEGs. 968 Figure 5 does not describe a PW3X PSME because typically SPMEs 969 are used to monitor an OAM domain (like PW13 and PWXZ PSMEs) 970 rather than the segment between two OAM domains. However the OAM 971 framework does not pose any constraints on the way SPMEs are 972 instantiated as long as they are not overlapping. 974 The subsections below define the MEGs specified in this MPLS-TP 975 OAM architecture framework document. Unless otherwise stated, 976 all references to domains, LSRs, MPLS-TP Sections, LSPs, 977 pseudowires and MEGs in this section are made in relation to 978 those shown in Figure 5. 980 4.1. MPLS-TP Section Monitoring (SME) 982 An MPLS-TP Section ME (SME) is an MPLS-TP maintenance entity 983 intended to monitor an MPLS-TP Section as defined in RFC 5654 984 [5]. An SME may be configured on any MPLS-TP section. SME OAM 985 packets must fate share with the user data packets sent over the 986 monitored MPLS-TP Section. 988 An SME is intended to be deployed for applications where it is 989 preferable to monitor the link between topologically adjacent 990 (next hop in this layer network) MPLS-TP LSRs rather than 991 monitoring the individual LSP or PW path segments traversing the 992 MPLS-TP Section and the server layer technology does not provide 993 adequate OAM capabilities. 995 Figure 5 shows five Section MEs configured in the network 996 between AC1 and AC2: 998 1. Sec12 ME associated with the MPLS-TP Section between LSR 1 999 and LSR 2, 1001 2. Sec23 ME associated with the MPLS-TP Section between LSR 2 1002 and LSR 3, 1004 3. Sec3X ME associated with the MPLS-TP Section between LSR 3 1005 and LSR X, 1007 4. SecXY ME associated with the MPLS-TP Section between LSR X 1008 and LSR Y, and 1010 5. SecYZ ME associated with the MPLS-TP Section between LSR Y 1011 and LSR Z. 1013 4.2. MPLS-TP LSP End-to-End Monitoring (LME) 1015 An MPLS-TP LSP ME (LME) is an MPLS-TP maintenance entity 1016 intended to monitor an end-to-end LSP between two LERs. An LME 1017 may be configured on any MPLS LSP. LME OAM packets must fate 1018 share with user data packets sent over the monitored MPLS-TP 1019 LSP. 1021 An LME is intended to be deployed in scenarios where it is 1022 desirable to monitor an entire LSP between its LERs, rather 1023 than, say, monitoring individual PWs. 
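   Returning to the reference model: as a non-normative illustration
   of the observation in section 4 that a single node may host MEPs
   for several MEGs at once, the MEGs of Figure 5 can be written down
   as simple data. The tuple layout and helper name below are invented
   for the example; the MEG names and end points are those shown in
   Figure 5.

      # Non-normative sketch: the MEGs of Figure 5 expressed as data.

      MEGS = [
          # (MEG name,     kind,          nodes hosting its MEPs)
          ("Sec12 SME",   "Section",      ("1", "2")),
          ("Sec23 SME",   "Section",      ("2", "3")),
          ("Sec3X SME",   "Section",      ("3", "X")),
          ("SecXY SME",   "Section",      ("X", "Y")),
          ("SecYZ SME",   "Section",      ("Y", "Z")),
          ("LSP13 LME",   "LSP",          ("1", "3")),
          ("LSPXZ LME",   "LSP",          ("X", "Z")),
          ("PW13 PSME",   "PW segment",   ("1", "3")),
          ("PWXZ PSME",   "PW segment",   ("X", "Z")),
          ("PW1Z PME",    "MS-PW",        ("1", "Z")),
      ]

      def megs_terminated_at(node):
          """All MEGs for which the given node hosts a MEP."""
          return [name for name, _kind, ends in MEGS if node in ends]

      print(megs_terminated_at("3"))
      # -> ['Sec23 SME', 'Sec3X SME', 'LSP13 LME', 'PW13 PSME']
      #    (one node, four MEPs)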
1025   Figure 5 depicts two LMEs configured in the network between AC1
1026   and AC2: 1) the LSP13 LME between LER 1 and LER 3, and 2) the
1027   LSPXZ LME between LER X and LER Z. Note that the presence of an
1028   LSP3X LME in such a configuration is optional, hence not
1029   precluded by this framework. For instance, the service providers
1030   may prefer to monitor the MPLS-TP Section between the two LSRs
1031   rather than the individual LSPs.

1033 4.3. MPLS-TP PW Monitoring (PME)

1035   An MPLS-TP PW ME (PME) is an MPLS-TP maintenance entity intended
1036   to monitor an SS-PW or MS-PW between a pair of T-PEs. A PME can
1037   be configured on any SS-PW or MS-PW. PME OAM packets must fate
1038   share with the user data packets sent over the monitored PW.

1040   A PME is intended to be deployed in scenarios where it is
1041   desirable to monitor an entire PW between a pair of MPLS-TP
1042   enabled T-PEs rather than monitoring the LSP aggregating
1043   multiple PWs between PEs.

1045             |<----------------- MS-PW1Z ---------------->|
1046             |                                            |
1047             |           ||  |<--LSP3X--->|  ||           |
1048            V V    LSP     V V    LSP     V V    LSP     V V
1049           +----+   +-+   +----+         +----+   +-+   +----+
1050   +---+   |TPE1|   | |   |SPE3|         |SPEX|   | |   |TPEZ|   +---+
1051   |   |AC1|    |=========|    |=========|    |=========|    |AC2|   |
1052   |CE1|---|......PW13......|.....PW3X.....|......PWXZ.......|---|CE2|
1053   |   |   |    |=========|    |=========|    |=========|    |   |   |
1054   +---+   |  1 |   |2|   |  3 |         |  X |   |Y|   |  Z |   +---+
1055           +----+   +-+   +----+         +----+   +-+   +----+
1056             ^------------------PW1Z PME------------------^

1058                     Figure 6 MPLS-TP PW ME (PME)

1060   Figure 6 depicts an MS-PW (MS-PW1Z) consisting of three path
1061   segments: PW13, PW3X and PWXZ, and its associated end-to-end
1062   PME (PW1Z PME).

1064 4.4. MPLS-TP LSP SPME Monitoring (LSME)

1066   An MPLS-TP LSP SPME ME (LSME) is an MPLS-TP SPME with an
1067   associated maintenance entity intended to monitor an arbitrary
1068   part of an LSP between the pair of MEPs instantiated for the
1069   SPME, independently of the end-to-end monitoring (LME). An LSME
1070   can monitor an LSP segment or concatenated segment and it may
1071   also include the forwarding engine(s) of the node(s) at the
1072   edge(s) of the segment or concatenated segment.

1074   Multiple LSMEs can be configured on any LSP. The LSRs that
1075   terminate the LSME may or may not be immediately adjacent at the
1076   MPLS-TP layer. LSME OAM packets must fate share with the user
1077   data packets sent over the monitored LSP path segment.

1079   An LSME can be defined between the following entities:

1081   o  The end node and any intermediate node of a given LSP.

1083   o  Any two intermediate nodes of a given LSP.

1085   An LSME is intended to be deployed in scenarios where it is
1086   preferable to monitor the behaviour of a part of an LSP or set
1087   of LSPs rather than the entire LSP itself, for example when
1088   there is a need to monitor a part of an LSP that extends beyond
1089   the administrative boundaries of an MPLS-TP enabled
1090   administrative domain.

1092              |<------------------ PW1Z ------------------>|
1093              |                                            |
1094              | |<---------------LSP1Z LSP-------------->| |
1095              | |<--LSP13--->|     ||       |<--LSPXZ--->| |
1096             V V   S-LSP    V V   S-LSP    V V   S-LSP    V V
1097            +----+   +-+   +----+         +----+   +-+   +----+
1098   +----+   | PE1|   | |   |DBN3|         |DBNX|   | |   | PEZ|   +----+
1099   |    |AC1|    |=======================================|    |AC2|    |
1100   | CE1|---|......................PW1Z.......................|---|CE2 |
1101   |    |   |    |=======================================|    |   |    |
1102   +----+   |  1 |   |2|   |  3 |         |  X |   |Y|   |  Z |   +----+
1103            +----+   +-+   +----+         +----+   +-+   +----+
1104              .              .              .              .
1105              |              |              |              |
1106              |<-- Domain 1 -->|            |<-- Domain Z -->|

1108              ^--------------^              ^--------------^
1109                 LSP13 LSME                    LSPXZ LSME
1110              ^--------------------------------------------^
1111                                LSP1Z LME

1113   DBN: Domain Border Node

1115                Figure 7 MPLS-TP LSP SPME ME (LSME)

1117   Figure 7 depicts a variation of the reference model in Figure 5
1118   where there is an end-to-end LSP (LSP1Z) between PE1 and PEZ.
1119   LSP1Z consists of, at least, three LSP Concatenated Segments:
1120   LSP13, LSP3X and LSPXZ. In this scenario there are two separate
1121   LSMEs configured to monitor LSP1Z: 1) an LSME monitoring the
1122   LSP13 Concatenated Segment in Domain 1 (LSP13 LSME), and 2) an
1123   LSME monitoring the LSPXZ Concatenated Segment in Domain Z
1124   (LSPXZ LSME).

1126   It is worth noticing that LSMEs can coexist with the LME
1127   monitoring the end-to-end LSP and that LSME MEPs and LME MEPs
1128   can be coincident in the same node (e.g. the PE1 node supports
1129   both the LSP1Z LME MEP and the LSP13 LSME MEP).

1131 4.5. MPLS-TP MS-PW SPME Monitoring (PSME)

1133   An MPLS-TP MS-PW SPME Monitoring ME (PSME) is an MPLS-TP
1134   maintenance entity intended to monitor an arbitrary part of an
1135   MS-PW between a given pair of PEs, independently of the end-to-
1136   end monitoring (PME). A PSME can monitor a PW segment or
1137   concatenated segment and it may also include the forwarding
1138   engine(s) of the node(s) at the edge(s) of the segment or
1139   concatenated segment.

1141   S-PE placement is typically dictated by considerations other
1142   than OAM. S-PEs will frequently reside at operational boundaries
1143   such as the transition from distributed (CP) to centralized
1144   (NMS) control or at a routing area boundary. As such, the
1145   architecture would superficially appear not to have the
1146   flexibility that arbitrary placement of SPME segments would
1147   imply. More arbitrary placement of MEs for a PW would require
1148   additional hierarchical components beyond the SPMEs between PEs.

1149   Multiple PSMEs can be configured on any MS-PW. The PEs may or
1150   may not be immediately adjacent at the MS-PW layer. PSME OAM
1151   packets fate share with the user data packets sent over the
1152   monitored PW path segment.

1154   A PSME can be defined between the following entities:

1156   o  A T-PE and any S-PE of a given MS-PW.

1158   o  Any two S-PEs of a given MS-PW. It can span several PW
1159      segments.

1161   Note that, in line with the SPME description in section 3.2, when a
1162   PW SPME is instantiated after the MS-PW has been instantiated, the
1163   addressing of the MIPs will change and MIPs in the nested MEG are no
1164   longer part of the encompassing MEG. This means that the S-PE nodes
1165   hosting these MIPs are no longer S-PEs but P nodes at the SPME LSP
1166   level. The consequence is that the S-PEs hosting the PSME MEPs
1167   become adjacent S-PEs.

1169   A PSME is intended to be deployed in scenarios where it is
1170   preferable to monitor the behaviour of a part of an MS-PW rather
1171   than the entire end-to-end PW itself, for example to monitor an
1172   MS-PW path segment within a given network domain of an inter-
1173   domain MS-PW.
1175             |<----------------- MS-PW1Z ---------------->|
1176             |                                            |
1177             |           ||  |<--LSP3X--->|  ||           |
1178            V V    LSP     V V    LSP     V V    LSP     V V
1179           +----+   +-+   +----+         +----+   +-+   +----+
1180   +---+   |TPE1|   | |   |SPE3|         |SPEX|   | |   |TPEZ|   +---+
1181   |   |AC1|    |=========|    |=========|    |=========|    |AC2|   |
1182   |CE1|---|......PW13......|.....PW3X.....|......PWXZ.......|---|CE2|
1183   |   |   |    |=========|    |=========|    |=========|    |   |   |
1184   +---+   |  1 |   |2|   |  3 |         |  X |   |Y|   |  Z |   +---+
1185           +----+   +-+   +----+         +----+   +-+   +----+

1187             ^- PW13 PSME --^              ^- PWXZ PSME --^
1188             ^------------------PW1Z PME------------------^

1190            Figure 8 MPLS-TP MS-PW SPME Monitoring (PSME)

1192   Figure 8 depicts the same MS-PW (MS-PW1Z) between AC1 and AC2 as
1193   in Figure 6. In this scenario there are two separate PSMEs
1194   configured to monitor MS-PW1Z: 1) a PSME monitoring the PW13 MS-
1195   PW path segment in Domain 1 (PW13 PSME), and 2) a PSME
1196   monitoring the PWXZ MS-PW path segment in Domain Z (PWXZ
1197   PSME).

1199   It is worth noticing that PSMEs can coexist with the PME
1200   monitoring the end-to-end MS-PW and that PSME MEPs and PME MEPs
1201   can be coincident in the same node (e.g. the TPE1 node supports
1202   both the PW1Z PME MEP and the PW13 PSME MEP).

1204 4.6. Fate sharing considerations for multilink

1206   Multilink techniques are in use today and are expected to
1207   continue to be used in future deployments. These techniques
1208   include Ethernet Link Aggregation [20] and the use of Link
1209   Bundling for MPLS [16] where the option to spread traffic over
1210   component links is supported and enabled. While the use of Link
1211   Bundling can be controlled at the MPLS-TP layer, use of Link
1212   Aggregation (or any server layer specific multilink) is not
1213   necessarily under control of the MPLS-TP layer. Other techniques
1214   may emerge in the future. These techniques share the
1215   characteristic that an LSP may be spread over a set of component
1216   links and therefore be reordered, but no flow within the LSP is
1217   reordered (except when very infrequent and minimally disruptive
1218   load rebalancing occurs).

1220   The use of multilink techniques may be prohibited or permitted
1221   in any particular deployment. If multilink techniques are used,
1222   the deployment can be considered to be only partially MPLS-TP
1223   compliant; however, this is unlikely to prevent its use.

1225   The implication for OAM is that not all components of a
1226   multilink will be exercised; independent server layer OAM is
1227   required to exercise the aggregated link components. This has
1228   further implications for MIP and MEP placement, as per-interface
1229   MIPs or "down" MEPs on a multilink interface are akin to a layer
1230   violation, as they instrument at the granularity of the server
1231   layer. The implications for reduced OAM loss measurement
1232   functionality are documented in sections 5.5.3 and 6.2.3.

1234 5. OAM Functions for proactive monitoring

1236   In this document, proactive monitoring refers to OAM operations
1237   that are either configured to be carried out periodically and
1238   continuously or preconfigured to act on certain events such as
1239   alarm signals.

1241   Proactive monitoring is usually performed "in-service". Such
1242   transactions are universally MEP to MEP in operation, while
1243   notifications emerging from the serving layer are MIP to MEP or
1244   can be MIP to MIP. The control and measurement considerations
1245   are:

1247   1. Proactive monitoring for a MEG is typically configured at
1248      transport path creation time.

1250   2.
The operational characteristics of in-band measurement 1251 transactions (e.g., CV, LM etc.) are configured at the MEPs. 1253 3. Server layer events are reported by transactions originating 1254 at intermediate nodes. 1256 4. The measurements resulting from proactive monitoring are 1257 typically only reported outside of the MEG as unsolicited 1258 notifications for "out of profile" events, such as faults or 1259 loss measurement indication of excessive impairment of 1260 information transfer capability. 1262 5. The measurements resulting from proactive monitoring may be 1263 periodically harvested by an EMS/NMS. 1265 For statically provisioned transport paths the above information 1266 is statically configured; for dynamically established transport 1267 paths the configuration information is signaled via the control 1268 plane or configured via the management plane. 1270 The operator enables/disables some of the consequent actions 1271 defined in section 5.1.2. 1273 5.1. Continuity Check and Connectivity Verification 1275 Proactive Continuity Check functions, as required in section 1276 2.2.2 of RFC 5860 [10], are used to detect a loss of continuity 1277 defect (LOC) between two MEPs in a MEG. 1279 Proactive Connectivity Verification functions, as required in 1280 section 2.2.3 of RFC 5860 [10], are used to detect an unexpected 1281 connectivity defect between two MEGs (e.g. mismerging or 1282 misconnection), as well as unexpected connectivity within the 1283 MEG with an unexpected MEP. 1285 Both functions are based on the (proactive) generation of OAM 1286 packets by the source MEP that are processed by the sink MEP. As 1287 a consequence these two functions are grouped together into 1288 Continuity Check and Connectivity Verification (CC-V) OAM 1289 packets. 1291 In order to perform pro-active Connectivity Verification, each 1292 CC-V OAM packet also includes a globally unique Source MEP 1293 identifier. When used to perform only pro-active Continuity 1294 Check, the CC-V OAM packet will not include any globally unique 1295 Source MEP identifier. Different formats of MEP identifiers are 1296 defined in [9] to address different environments. When MPLS-TP 1297 is deployed in transport network environments where IP 1298 addressing is not used in the forwarding plane, the ICC-based 1299 format for MEP identification is used. When MPLS-TP is deployed 1300 in an IP-based environment, the IP-based MEP identification is 1301 used. 1303 As a consequence, it is not possible to detect misconnections 1304 between two MEGs monitored only for continuity as neither the 1305 OAM message type nor OAM message content provides sufficient 1306 information to disambiguate an invalid source. To expand: 1308 o For CC leaking into a CC monitored MEG - undetectable 1310 o For CV leaking into a CC monitored MEG - presence of 1311 additional Source MEP identifier allows detecting the fault 1313 o For CC leaking into a CV monitored MEG - lack of additional 1314 Source MEP identifier allows detecting the fault. 1316 o For CV leaking into a CV monitored MEG - different Source MEP 1317 identifier permits fault to be identified. 1319 CC-V OAM packets are transmitted at a regular, operator's 1320 configurable, rate. The default CC-V transmission periods are 1321 application dependent (see section 5.1.3). 1323 Proactive CC-V OAM packets are transmitted with the "minimum 1324 loss probability PHB" within the transport path (LSP, PW) they 1325 are monitoring. This PHB is configurable on network operator's 1326 basis. 
PHBs can be translated at the network borders by the same 1327 function that translates it for user data traffic. The 1328 implication is that CC-V fate shares with much of the forwarding 1329 implementation, but not all aspects of PHB processing are 1330 exercised. Either on-demand tools are used for finer grained 1331 fault finding or an implementation may utilize a CC-V flow per 1332 PHB with the entire E-LSP fate sharing with any individual PHB. 1334 In a bidirectional point-to-point transport path, when a MEP is 1335 enabled to generate pro-active CC-V OAM packets with a 1336 configured transmission rate, it also expects to receive pro- 1337 active CC-V OAM packets from its peer MEP at the same 1338 transmission rate as a common SLA applies to all components of 1339 the transport path. In a unidirectional transport path (either 1340 point-to-point or point-to-multipoint), only the source MEP is 1341 enabled to generate CC-V OAM packets and only the sink MEP is 1342 configured to expect these packets at the configured rate. 1344 MIPs, as well as intermediate nodes not supporting MPLS-TP OAM, 1345 are transparent to the pro-active CC-V information and forward 1346 these pro-active CC-V OAM packets as regular data packets. 1348 During path setup and tear down, situations arise where CC-V 1349 checks would give rise to alarms, as the path is not fully 1350 instantiated. In order to avoid these spurious alarms the 1351 following procedures are recommended. At initialization, the MEP 1352 source function (generating pro-active CC-V packets) should be 1353 enabled prior to the corresponding MEP sink function (detecting 1354 continuity and connectivity defects). When disabling the CC-V 1355 proactive functionality, the MEP sink function should be 1356 disabled prior to the corresponding MEP source function. 1358 5.1.1. Defects identified by CC-V 1360 Pro-active CC-V functions allow a sink MEP to detect the defect 1361 conditions described in the following sub-sections. For all of 1362 the described defect cases, the sink MEP should notify the 1363 equipment fault management process of the detected defect. 1365 5.1.1.1. Loss Of Continuity defect 1367 When proactive CC-V is enabled, a sink MEP detects a loss of 1368 continuity (LOC) defect when it fails to receive pro-active CC-V 1369 OAM packets from the source MEP. 1371 o Entry criteria: If no pro-active CC-V OAM packets from the 1372 source MEP with the correct encapsulation (and in the case of 1373 CV, this includes the requirement to have a correct globally 1374 unique Source MEP identifier) are received within the 1375 interval equal to 3.5 times the receiving MEP's configured 1376 CC-V reception period. 1378 o Exit criteria: A pro-active CC-V OAM packet from the source 1379 MEP with the correct encapsulation (and again in the case of 1380 CV, with the correct globally unique Source MEP identifier) 1381 is received. 1383 5.1.1.2. Mis-connectivity defect 1385 When a pro-active CC-V OAM packet is received, a sink MEP 1386 identifies a mis-connectivity defect (e.g. mismerge, 1387 misconnection or unintended looping) when the received packet 1388 carries an incorrect globally unique Source MEP identifier. 1390 o Entry criteria: The sink MEP receives a pro-active CC-V OAM 1391 packet with an incorrect globally unique Source MEP 1392 identifier or receives a CC or CC/CV OAM packet with an 1393 unexpected encapsulation. 1395 It should be noted that there are practical limitations to 1396 detecting unexpected encapsulation. 
It is possible that there are mis-connectivity scenarios where OAM frames can alias as payload if a transport path can carry an arbitrary payload without a pseudowire.

o Exit criteria: The sink MEP does not receive any pro-active CC-V OAM packet with an incorrect globally unique Source MEP identifier for an interval equal to at least 3.5 times the longest transmission period of the pro-active CC-V OAM packets received with an incorrect globally unique Source MEP identifier since this defect has been raised. This requires the OAM message to self-identify the CC-V periodicity, as not all MEPs can be expected to have knowledge of all MEGs.

5.1.1.3. Period Misconfiguration defect

If pro-active CC-V OAM packets are received with a correct globally unique Source MEP identifier but with a transmission period different from the locally configured reception period, then a CC-V period mis-configuration defect is detected.

o Entry criteria: A MEP receives a pro-active CC-V packet with a correct globally unique Source MEP identifier but with a Period field value different from its own configured CC-V transmission period.

o Exit criteria: The sink MEP does not receive any pro-active CC-V OAM packet with a correct globally unique Source MEP identifier and an incorrect transmission period for an interval equal to at least 3.5 times the longest transmission period of the pro-active CC-V OAM packets received with a correct globally unique Source MEP identifier and an incorrect transmission period since this defect has been raised.

5.1.2. Consequent action

A sink MEP that detects one of the defect conditions defined in section 5.1.1 performs the following consequent actions.

If a MEP detects an unexpected globally unique Source MEP Identifier, it blocks all the traffic (including the user data packets) that it receives from the misconnected transport path.

If a MEP detects a LOC defect that is not caused by a period mis-configuration, it should block all the traffic (including the user data packets) that it receives from the transport path, if this consequent action has been enabled by the operator.

It is worth noting that the OAM requirements document [10] recommends that CC-V proactive monitoring be enabled on every MEG in order to reliably detect connectivity defects. However, CC-V proactive monitoring can be disabled by an operator for a MEG. In the event of a misconnection between a transport path that is pro-actively monitored for CC-V and a transport path which is not, the MEP of the former transport path will detect a LOC defect representing a connectivity problem (e.g. a misconnection with a transport path where CC-V proactive monitoring is not enabled) instead of a continuity problem, with consequent incorrect delivery of traffic. For these reasons, the traffic block consequent action is applied even when a LOC condition occurs. This block consequent action can be disabled through configuration. This deactivation of the block action may be used for activating or deactivating the monitoring when it is not possible to synchronize the function activation of the two peer MEPs.

If a MEP detects a LOC defect (section 5.1.1.1) or a mis-connectivity defect (section 5.1.1.2), it declares a signal fail condition at the transport path level.

It is a matter of local policy whether a MEP detecting a period misconfiguration defect (section 5.1.1.3) declares a signal fail condition at the transport path level.
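The defect detection and consequent-action behaviour described in sections 5.1.1 and 5.1.2 can be summarized by the following informative sketch (Python). It is illustrative only: the class, method and field names are not defined by this framework, the operator-enabled block-on-LOC action is modeled as a simple flag, and the mis-connectivity exit criteria are simplified.

   import time

   class CcvSinkMep:
       """Illustrative CC-V sink MEP behaviour (sections 5.1.1/5.1.2)."""

       def __init__(self, expected_peer_id, rx_period_s, block_on_loc=True):
           self.expected_peer_id = expected_peer_id  # configured peer MEP-ID
           self.rx_period_s = rx_period_s            # configured CC-V period
           self.block_on_loc = block_on_loc          # operator-enabled action
           self.last_good_rx = time.monotonic()
           self.misconnectivity = False

       def on_ccv_packet(self, src_mep_id):
           if src_mep_id != self.expected_peer_id:
               # Mis-connectivity defect: traffic is always blocked.
               self.misconnectivity = True
           else:
               # Valid CC-V packet: LOC (and, simplified, mis-connectivity)
               # exit criteria are satisfied.
               self.last_good_rx = time.monotonic()
               self.misconnectivity = False

       def status(self):
           # LOC entry criteria: no valid CC-V within 3.5 times the period.
           loc = (time.monotonic() - self.last_good_rx) > 3.5 * self.rx_period_s
           if self.misconnectivity:
               return "block-traffic", "signal-fail"
           if loc:
               action = "block-traffic" if self.block_on_loc else "forward-traffic"
               return action, "signal-fail"
           return "forward-traffic", None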
5.1.3. Configuration considerations

At all MEPs inside a MEG, the following configuration information needs to be configured when a proactive CC-V function is enabled:

o MEG ID; the MEG identifier to which the MEP belongs;

o MEP-ID; the MEP's own identity inside the MEG;

o list of the other MEPs in the MEG. For a point-to-point MEG the list would consist of the single MEP ID from which the OAM packets are expected. In case of the root MEP of a p2mp MEG, the list is composed of all the leaf MEP IDs inside the MEG. In case of the leaf MEP of a p2mp MEG, the list is composed of the root MEP ID (i.e. each leaf needs to know the root MEP ID from which it expects to receive the CC-V OAM packets).

o PHB; it identifies the per-hop behaviour of the CC-V packets. Proactive CC-V packets are transmitted with the "minimum loss probability PHB" previously configured within a single network operator. This PHB is configurable on a network operator basis. PHBs can be translated at the network borders.

o transmission rate; the default CC-V transmission periods are application dependent (depending on whether they are used to support fault management, performance monitoring, or protection switching applications):

o Fault Management: default transmission period is 1s (i.e. transmission rate of 1 packet/second).

o Performance Monitoring: default transmission period is 100ms (i.e. transmission rate of 10 packets/second). Performance monitoring is only relevant when the transport path is defect free. CC-V contributes to the accuracy of PM statistics by permitting the defect-free periods to be properly distinguished.

o Protection Switching: default transmission period is 3.33ms (i.e. transmission rate of 300 packets/second). In order to achieve sub-50ms recovery, the CC-V defect entry criteria should resolve in less than 10 ms, so that a protection switch can be completed within a subsequent period of 50 ms. It is also possible to lengthen the transmission period to 10ms (i.e. transmission rate of 100 packets/second): in this case the CC-V defect entry criteria are reached later (i.e. 30 ms).

It should be possible for the operator to configure these transmission rates for all applications, to satisfy internal requirements.

Note that the reception period is the same as the configured transmission period.

For statically provisioned transport paths the above parameters are statically configured; for dynamically established transport paths the configuration information is signaled via the control plane.

The operator should be able to enable/disable some of the consequent actions. The consequent actions that can be enabled/disabled are described in section 5.1.2.
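As an informative illustration only (not a data model or protocol element), the per-application defaults and the 3.5x detection rule above could be captured as follows; all names are hypothetical.

   # Hypothetical representation of the CC-V defaults in section 5.1.3.
   CCV_DEFAULT_PERIOD_S = {
       "fault-management": 1.0,          # 1 packet/second
       "performance-monitoring": 0.1,    # 10 packets/second
       "protection-switching": 0.00333,  # ~300 packets/second
   }

   def loc_detection_interval(tx_period_s):
       """LOC entry criteria: 3.5 times the configured CC-V period."""
       return 3.5 * tx_period_s

   ccv_config = {
       "meg_id": "example-meg",    # MEG the MEP belongs to (illustrative)
       "mep_id": 1,                # this MEP's own identity inside the MEG
       "peer_mep_ids": [2],        # single peer MEP for a point-to-point MEG
       "phb": "minimum-loss-probability",
       "tx_period_s": CCV_DEFAULT_PERIOD_S["fault-management"],
   }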
5.2. Remote Defect Indication

The Remote Defect Indication (RDI) function, as required in section 2.2.9 of RFC 5860 [10], is an indicator that is transmitted by a sink MEP to communicate to its source MEP that a signal fail condition exists. RDI is only used for bidirectional connections and is associated with proactive CC-V. The RDI indicator is piggy-backed onto the CC-V packet.

When a MEP detects a signal fail condition (e.g. in case of a continuity or connectivity defect), it should begin transmitting an RDI indicator to its peer MEP. The RDI information will be included in all pro-active CC-V packets that it generates for the duration of the signal fail condition's existence.

A MEP that receives packets from a peer MEP (as best as can be validated with the CC or CV tool in use) with the RDI information should determine that its peer MEP has encountered a defect condition associated with a signal fail.

MIPs, as well as intermediate nodes not supporting MPLS-TP OAM, are transparent to the RDI indicator and forward these proactive CC-V packets that include the RDI indicator as regular data packets, i.e. the MIP should neither act on nor examine the indicator.

When the signal fail defect condition clears, the MEP should clear the RDI indicator from subsequent transmissions of pro-active CC-V packets. A MEP should clear the RDI defect upon reception of a pro-active CC-V packet from the source MEP with the RDI indicator cleared.

5.2.1. Configuration considerations

The RDI indication may be carried either in a unique OAM message or as an OAM information element embedded in a CC-V message. In the latter case, the RDI transmission rate and PHB of the OAM packets carrying RDI are the same as those configured for CC-V.
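A minimal, informative sketch of the RDI piggy-backing described above (Python); the packet representation is purely illustrative and implies no encoding.

   class RdiState:
       """Illustrative RDI handling at a MEP (section 5.2)."""

       def __init__(self):
           self.local_signal_fail = False   # set/cleared by CC-V defect logic
           self.peer_reports_rdi = False    # learned from received CC-V packets

       def build_ccv_packet(self, ccv_fields):
           # RDI is piggy-backed onto every pro-active CC-V packet for as
           # long as the local signal fail condition persists.
           return {"ccv": ccv_fields, "rdi": self.local_signal_fail}

       def on_ccv_packet(self, packet):
           # The RDI defect follows the peer's flag: raised when set,
           # cleared when a CC-V packet arrives with the flag cleared.
           self.peer_reports_rdi = bool(packet.get("rdi"))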
5.3. Alarm Reporting

The Alarm Reporting function, as required in section 2.2.8 of RFC 5860 [10], relies upon an Alarm Indication Signal (AIS) message to suppress alarms following detection of defect conditions at the server (sub-)layer.

When a server MEP asserts signal fail, the co-located MPLS-TP client (sub-)layer adaptation function generates packets with AIS information in the downstream direction to allow the suppression of secondary alarms at the MEP in the client (sub-)layer.

The generation of packets with AIS information starts immediately when the server MEP asserts signal fail. These periodic packets, with AIS information, continue to be transmitted until the signal fail condition is cleared. It is assumed that, to avoid race conditions, a MEP detecting loss of continuity will wait for a hold-off interval prior to asserting an alarm to the management system.

Upon receiving a packet with AIS information an MPLS-TP MEP enters an AIS defect condition and suppresses loss of continuity alarms associated with its peer MEP, but does not block traffic received from the transport path. A MEP resumes loss of continuity alarm generation upon detecting loss of continuity defect conditions in the absence of the AIS condition.

MIPs, as well as intermediate nodes, do not process AIS information and forward these AIS OAM packets as regular data packets.

For example, let's consider a fiber cut between LSR 1 and LSR 2 in the reference network of Figure 5.

Assuming that all the MEGs described in Figure 5 have pro-active CC-V enabled, a LOC defect is detected by the MEPs of Sec12 SME, LSP13 LME, PW13 PSME and PW1Z PME; however, in a transport network only the alarm associated with the fiber cut needs to be reported to an NMS, while all secondary alarms should be suppressed (i.e. not reported to the NMS or reported as secondary alarms).

If the fiber cut is detected by the MEP in the physical layer (in LSR2), LSR2 can generate the proper alarm in the physical layer and suppress the secondary alarm associated with the LOC defect detected on Sec12 SME. As both MEPs reside within the same node, this process does not involve any external protocol exchange. Otherwise, if the physical layer does not have sufficient OAM capabilities to detect the fiber cut, the MEP of Sec12 SME in LSR2 will report a LOC alarm.

In both cases, the MEP of Sec12 SME in LSR 2 notifies the adaptation function for LSP13 LME that then generates AIS packets on the LSP13 LME in order to allow its MEP in LSR3 to suppress the LOC alarm. LSR3 can also suppress the secondary alarm on PW13 PSME because the MEP of PW13 PSME resides within the same node as the MEP of LSP13 LME. The MEP of PW13 PSME in LSR3 also notifies the adaptation function for PW1Z PME that then generates AIS packets on PW1Z PME in order to allow its MEP in LSRZ to suppress the LOC alarm.

The generation of AIS packets for each MEG in the MPLS-TP client (sub-)layer is configurable (i.e. the operator can enable/disable the AIS generation).

AIS packets are transmitted with the "minimum loss probability PHB" within a single network operator. This PHB is configurable on a network operator basis.

The AIS condition is cleared if no AIS message has been received within an interval of 3.5 times the AIS transmission period.
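The AIS defect condition described above can be summarized with the following informative sketch (Python). The same 3.5x clearing rule also applies to the LKR condition in section 5.4; all names are illustrative.

   import time

   class AisCondition:
       """Illustrative AIS defect condition at a client-layer MEP
       (section 5.3): entered on receipt of AIS, cleared when no AIS has
       been received for 3.5 times the AIS transmission period."""

       def __init__(self, ais_period_s):
           self.ais_period_s = ais_period_s
           self.last_ais_rx = None

       def on_ais_packet(self):
           self.last_ais_rx = time.monotonic()

       def active(self):
           if self.last_ais_rx is None:
               return False
           return time.monotonic() - self.last_ais_rx <= 3.5 * self.ais_period_s

       def report_loc_alarm(self, loc_detected):
           # LOC alarms are suppressed while the AIS condition is active;
           # traffic received from the transport path is not blocked.
           return loc_detected and not self.active()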
5.4. Lock Reporting

The Lock Reporting function, as required in section 2.2.7 of RFC 5860 [10], relies upon a Locked Report (LKR) message used to suppress alarms following an administrative locking action in the server (sub-)layer.

When a server MEP is locked, the MPLS-TP client (sub-)layer adaptation function generates packets with LKR information in both directions to allow the suppression of secondary alarms at the MEPs in the client (sub-)layer. Again it is assumed that there is a hold-off for any loss of continuity alarms in the client layer MEPs downstream of the node originating the locked report.

The generation of packets with LKR information starts immediately when the server MEP is locked. These periodic packets, with LKR information, continue to be transmitted until the locked condition is cleared.

Upon receiving a packet with LKR information an MPLS-TP MEP enters an LKR defect condition and suppresses loss of continuity alarms associated with its peer MEP, but does not block traffic received from the transport path. A MEP resumes loss of continuity alarm generation upon detecting loss of continuity defect conditions in the absence of the LKR condition.

MIPs, as well as intermediate nodes, do not process the LKR information and forward these LKR OAM packets as regular data packets.

For example, let's consider the case where the MPLS-TP Section between LSR 1 and LSR 2 in the reference network of Figure 5 is administratively locked at LSR2 (in both directions).

Assuming that all the MEGs described in Figure 5 have pro-active CC-V enabled, a LOC defect is detected by the MEPs of LSP13 LME, PW13 PSME and PW1Z PME; however, in a transport network all these secondary alarms should be suppressed (i.e. not reported to the NMS or reported as secondary alarms).

The MEP of Sec12 SME in LSR 2 notifies the adaptation function for LSP13 LME that then generates LKR packets on the LSP13 LME in order to allow its MEPs in LSR1 and LSR3 to suppress the LOC alarm. LSR3 can also suppress the secondary alarm on PW13 PSME because the MEP of PW13 PSME resides within the same node as the MEP of LSP13 LME. The MEP of PW13 PSME in LSR3 also notifies the adaptation function for PW1Z PME that then generates LKR packets on PW1Z PME in order to allow its MEP in LSRZ to suppress the LOC alarm.

The generation of LKR packets for each MEG in the MPLS-TP client (sub-)layer is configurable (i.e. the operator can enable/disable the LKR generation).

LKR packets are transmitted with the "minimum loss probability PHB" within a single network operator. This PHB is configurable on a network operator basis.

The locked condition is cleared if no LKR packet has been received for an interval of 3.5 times the transmission period.

5.5. Packet Loss Measurement

Packet Loss Measurement (LM) is one of the capabilities supported by the MPLS-TP Performance Monitoring (PM) function in order to facilitate reporting of QoS information for a transport path as required in section 2.2.11 of RFC 5860 [10]. LM is used to exchange counter values for the number of ingress and egress packets transmitted and received by the transport path monitored by a pair of MEPs.

Proactive LM is performed by periodically sending LM OAM packets from a MEP to a peer MEP and by receiving LM OAM packets from the peer MEP (if a bidirectional transport path) during the lifetime of the transport path. Each MEP performs measurements of its transmitted and received packets. These measurements are then correlated with the peer MEP in the ME to derive the impact of packet loss on a number of performance metrics for the ME in the MEG. The LM transactions are issued such that the OAM packets will experience the same queuing discipline as the measured traffic while transiting between the MEPs in the ME.

For a MEP, near-end packet loss refers to packet loss associated with incoming data packets (from the far-end MEP) while far-end packet loss refers to packet loss associated with egress data packets (towards the far-end MEP).

MIPs, as well as intermediate nodes, do not process the LM information and forward these pro-active LM OAM packets as regular data packets.
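The counter correlation described above can be illustrated with the following informative sketch (Python). The counter names are hypothetical; the actual LM message contents are outside the scope of this framework.

   def interval_loss(local_tx, local_rx, peer_tx, peer_rx,
                     prev_local_tx, prev_local_rx, prev_peer_tx, prev_peer_rx):
       """Illustrative near-end/far-end loss over one LM interval
       (section 5.5), computed at the local MEP from its own counters
       and those reported by the peer MEP."""
       # Far-end loss: packets this MEP sent that the peer did not receive.
       far_end_loss = (local_tx - prev_local_tx) - (peer_rx - prev_peer_rx)
       # Near-end loss: packets the peer sent that this MEP did not receive.
       near_end_loss = (peer_tx - prev_peer_tx) - (local_rx - prev_local_rx)
       return near_end_loss, far_end_loss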
5.5.1. Configuration considerations

In order to support proactive LM, the transmission rate and PHB class associated with the LM OAM packets originating from a MEP need to be configured as part of the LM provisioning. LM OAM packets should be transmitted with the PHB that yields the lowest discard probability within the measured PHB Scheduling Class (see RFC 3260 [15]).

If that PHB class does not form an ordered aggregate, i.e. if it is not guaranteed that all packets with that PHB class are delivered in order, LM can produce inconsistent results.

5.5.2. Sampling skew

If an implementation makes use of a hardware forwarding path which operates in parallel with an OAM processing path, whether hardware or software based, the packet and byte counts may be skewed if one or more packets can be processed before the OAM processing samples the counters. If OAM is implemented in software this error can be quite large.

5.5.3. Multilink issues

If multilink is used at the LSP ingress or egress, there may be no single packet processing engine at which an LM packet can be injected or extracted as an atomic operation with which accurate packet and byte counts can be associated.

In the case where multilink is encountered in the LSP path, the reordering of packets within the LSP can cause inaccurate LM results.

5.6. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities supported by the MPLS-TP PM function in order to facilitate reporting of QoS information for a transport path as required in section 2.2.12 of RFC 5860 [10]. Specifically, pro-active DM is used to measure the long-term packet delay and packet delay variation in the transport path monitored by a pair of MEPs.

Proactive DM is performed by sending periodic DM OAM packets from a MEP to a peer MEP and by receiving DM OAM packets from the peer MEP (if a bidirectional transport path) during a configurable time interval.

Pro-active DM can be operated in two ways:

o One-way: a MEP sends a DM OAM packet to its peer MEP containing all the required information to facilitate one-way packet delay and/or one-way packet delay variation measurements at the peer MEP. Note that this requires synchronized precision time at both MEPs by means outside the scope of this framework.

o Two-way: a MEP sends a DM OAM packet with a DM request to its peer MEP, which replies with a DM OAM packet as a DM response. The request/response DM OAM packets contain all the required information to facilitate two-way packet delay and/or two-way packet delay variation measurements from the viewpoint of the source MEP.

MIPs, as well as intermediate nodes, do not process the DM information and forward these pro-active DM OAM packets as regular data packets.

5.6.1. Configuration considerations

In order to support pro-active DM, the transmission rate and PHB associated with the DM OAM packets originating from a MEP need to be configured as part of the DM provisioning. DM OAM packets should be transmitted with the PHB that yields the lowest discard probability within the measured PHB Scheduling Class (see RFC 3260 [15]).
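As an informative illustration of the one-way and two-way measurements described above (Python): the timestamp names are hypothetical, and removing the responder's residence time in the two-way case is one common approach rather than a requirement of this framework.

   def one_way_delay(tx_timestamp, rx_timestamp):
       """One-way delay (section 5.6); meaningful only if the two MEPs'
       clocks are synchronized."""
       return rx_timestamp - tx_timestamp

   def two_way_delay(t1, t2, t3, t4):
       """Two-way delay measured at the querying MEP, with the responder's
       residence time (t3 - t2) removed. t1/t4 are taken at the querier,
       t2/t3 at the responder; clock synchronization is not required."""
       return (t4 - t1) - (t3 - t2)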
5.7. Client Failure Indication

The Client Failure Indication (CFI) function, as required in section 2.2.10 of RFC 5860 [10], is used to help process client defects and propagate a client signal defect condition from the process associated with the local attachment circuit where the defect was detected (typically the source adaptation function for the local client interface) to the process associated with the far-end attachment circuit (typically the source adaptation function for the far-end client interface) for the same transmission path, in case the client of the transport path does not support a native defect/alarm indication mechanism, e.g. AIS.

A source MEP starts transmitting a CFI indication to its peer MEP when it receives a local client signal defect notification via its local CSF function. Mechanisms to detect local client signal fail defects are technology specific. Similarly, mechanisms to determine when to cease originating the client signal fail indication are also technology specific.

A sink MEP that has received a CFI indication reports this condition to its associated client process via its local CFI function. Consequent actions toward the client attachment circuit are technology specific.

Either there needs to be a 1:1 correspondence between the client and the MEG, or, when multiple clients are multiplexed over a transport path, the CFI message requires additional information to permit the client instance to be identified.

MIPs, as well as intermediate nodes, do not process the CFI information and forward these pro-active CFI OAM packets as regular data packets.

5.7.1. Configuration considerations

In order to support CFI indication, the CFI transmission rate and PHB of the CFI OAM message/information element should be configured as part of the CFI configuration.

6. OAM Functions for on-demand monitoring

In contrast to proactive monitoring, on-demand monitoring is initiated manually and for a limited amount of time, usually for operations such as diagnostics to investigate a defect condition.

On-demand monitoring covers a combination of "in-service" and "out-of-service" monitoring functions. The control and measurement implications are:

1. A MEG can be directed to perform "on-demand" functions at arbitrary times in the lifetime of a transport path.

2. "Out-of-service" monitoring functions may require a-priori configuration of both MEPs and intermediate nodes in the MEG (e.g., data plane loopback) and the issuance of notifications into client layers of the transport path being removed from service (e.g., lock reporting).

3. The measurements resulting from on-demand monitoring are typically harvested in real time, as these are frequently initiated manually. These do not necessarily require different harvesting mechanisms than those used for proactive monitoring telemetry.

The functions that are exclusively out-of-service are those described in section 6.3. The remainder are applicable to both in-service and out-of-service transport paths.
6.1. Connectivity Verification

In order to preserve network resources, e.g. bandwidth and processing time at switches, it may be preferable not to use proactive CC-V. In order to perform fault management functions, network management may invoke periodic bursts of on-demand CV packets, as required in section 2.2.3 of RFC 5860 [10].

On-demand connectivity verification is a transaction that flows from the source MEP to a target MIP or MEP.

Use of on-demand CV is dependent on the existence of either a bi-directional ME, or an associated return ME, or the availability of an out-of-band return path, because it requires the ability for target MIPs and MEPs to direct responses to the originating MEPs.

An additional use of on-demand CV would be to detect and locate a problem of connectivity when a problem is suspected or known based on other tools. In this case the functionality will be triggered by the network management in response to a status signal or alarm indication.

On-demand CV is based upon generation of on-demand CV packets that should uniquely identify the MEG that is being checked. The on-demand functionality may be used to check either an entire MEG (end-to-end) or between a source MEP and a specific MIP. This functionality may not be available for associated bidirectional transport paths or unidirectional paths, as the MIP may not have a return path to the source MEP for the on-demand CV transaction.

On-demand CV may generate a one-time burst of on-demand CV packets, or be used to invoke periodic, non-continuous, bursts of on-demand CV packets. The number of packets generated in each burst is configurable at the MEPs, and should take into account normal packet-loss conditions.

When invoking a periodic check of the MEG, the source MEP should issue a burst of on-demand CV packets that uniquely identifies the MEG being verified. The number of packets and their transmission rate should be pre-configured and known to both the source MEP and the target MEP or MIP. The source MEP should use the mechanisms defined in sections 3.3 and 3.4 when sending an on-demand CV packet to a target MEP or target MIP respectively. The target MEP/MIP shall return a reply on-demand CV packet for each packet received. If the expected number of on-demand CV reply packets is not received at the source MEP, the LOC defect state is entered.

On-demand CV should have the ability to carry padding such that a variety of MTU sizes can be originated to verify the MTU transport capability of the transport path.

MIPs that are not targeted by on-demand CV packets, as well as intermediate nodes, do not process the CV information and forward these on-demand CV OAM packets as regular data packets.

6.1.1. Configuration considerations

For on-demand CV the MEP should support the configuration of the number of packets to be transmitted/received in each burst of transmissions and their packet size. The transmission rate should be configured between the different nodes.

In addition, when the CV packet is used to check connectivity toward a target MIP, the number of hops to reach the target MIP should be configured.

The PHB of the on-demand CV packets should be configured as well. This permits the verification of correct operation of QoS queuing as well as connectivity.
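The burst behaviour described above can be illustrated with the following informative sketch (Python); 'send_cv_and_wait_reply' is a hypothetical callable, not an interface defined by this framework, and the expected number of replies is taken here to be the full burst.

   def on_demand_cv_burst(send_cv_and_wait_reply, burst_size):
       """Illustrative on-demand CV burst (section 6.1): send a burst of
       CV packets and count the replies. The callable is assumed to
       return True if a reply was received for that packet."""
       replies = sum(1 for _ in range(burst_size) if send_cv_and_wait_reply())
       # If the expected number of reply packets is not received at the
       # source MEP, the LOC defect state is entered.
       return "ok" if replies == burst_size else "loc-defect"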
6.2. Packet Loss Measurement

On-demand Packet Loss Measurement (LM) is one of the capabilities supported by the MPLS-TP Performance Monitoring function in order to facilitate diagnostics of QoS performance for a transport path, as required in section 2.2.11 of RFC 5860 [10]. As with proactive LM, on-demand LM is used to exchange counter values for the number of ingress and egress packets transmitted and received by the transport path monitored by a pair of MEPs. LM is not performed MEP to MIP or between a pair of MIPs.

On-demand LM is performed by periodically sending LM OAM packets from a MEP to a peer MEP and by receiving LM OAM packets from the peer MEP (if a bidirectional transport path) during a pre-defined monitoring period. Each MEP performs measurements of its transmitted and received packets. These measurements are then correlated to evaluate the packet loss performance metrics of the transport path.

Use of packet loss measurement in an out-of-service transport path requires a traffic source such as a tester.

MIPs, as well as intermediate nodes, do not process the LM information and forward these on-demand LM OAM packets as regular data packets.

6.2.1. Configuration considerations

In order to support on-demand LM, the beginning and duration of the LM procedures, the transmission rate and the PHB associated with the LM OAM packets originating from a MEP must be configured as part of the on-demand LM provisioning. LM OAM packets should be transmitted with the PHB that yields the lowest discard probability within the measured PHB Scheduling Class (see RFC 3260 [15]).

6.2.2. Sampling skew

If an implementation makes use of a hardware forwarding path which operates in parallel with an OAM processing path, whether hardware or software based, the packet and byte counts may be skewed if one or more packets can be processed before the OAM processing samples the counters. If OAM is implemented in software this error can be quite large.

6.2.3. Multilink issues

Multilink issues are as described in section 5.5.3.

6.3. Diagnostic Tests

Diagnostic tests are tests performed on a MEG that has been taken out-of-service.

6.3.1. Throughput Estimation

Throughput estimation is an on-demand out-of-service function, as required in section 2.2.5 of RFC 5860 [10], that allows the bandwidth/throughput of an MPLS-TP transport path (LSP or PW) to be verified before it is put in-service.

Throughput estimation is performed between MEPs and can be performed in one-way or two-way modes.

According to RFC 2544 [11], this test is performed by sending OAM test packets at an increasing rate (up to the theoretical maximum), graphing the percentage of OAM test packets received and reporting the rate at which OAM test packets begin to drop. In general, this rate is dependent on the OAM test packet size.

When configured to perform such tests, a MEP source inserts OAM test packets with a specified packet size and transmission pattern at a rate to exercise the throughput.

For a one-way test, the remote MEP sink receives the OAM test packets and calculates the packet loss. For a two-way test, the remote MEP loops the OAM test packets back to the original MEP and the local MEP sink calculates the packet loss.

Note that a two-way test can only evaluate the minimum of the available throughput of the two directions. In order to estimate the throughput of each direction uniquely, two individual one-way throughput estimation sessions have to be set up.

MIPs, as well as intermediate nodes, do not process the throughput test information and forward these on-demand test OAM packets as regular data packets.
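A simplified, informative sketch of the RFC 2544-style rate sweep described above (Python); the trial-execution callable is hypothetical, and an implementation may equally use a binary search or another strategy.

   def estimate_throughput(run_trial_at_rate, rates_pps):
       """Illustrative throughput estimation sweep (section 6.3.1): step
       through increasing rates and report the highest rate at which no
       OAM test packets are dropped. 'run_trial_at_rate(rate)' is assumed
       to run one trial and return the fraction of test packets received."""
       best_rate = 0
       for rate in sorted(rates_pps):
           if run_trial_at_rate(rate) >= 1.0:   # no loss at this rate
               best_rate = rate
           else:
               break                            # test packets begin to drop
       return best_rate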
6.3.1.1. Configuration considerations

Throughput estimation is an out-of-service tool. The diagnosed MEG should be put into a Lock status before the diagnostic test is started.

A MEG can be put into a Lock status either via an NMS action or using the Lock Instruct OAM tool as defined in section 7.

At the transmitting MEP, provisioning is required for a test signal generator, which is associated with the MEP. At a receiving MEP, provisioning is required for a test signal detector, which is associated with the MEP.

6.3.1.2. Limited OAM processing rate

If an implementation is able to process payload at much higher data rates than OAM packets, then accurate measurement of throughput using OAM packets is not achievable. Whether OAM packets can be processed at the same rate as payload is implementation dependent.

6.3.1.3. Multilink considerations

If multilink is used, then it may not be possible to perform throughput measurement, as the throughput test may not have a mechanism for utilizing more than one component link of the aggregated link.

6.3.2. Data plane Loopback

Data plane loopback is an out-of-service function, as required in section 2.2.5 of RFC 5860 [10], that permits all traffic (including user data and OAM, with the exception of the disable loopback command) originated at the ingress of a transport path or inserted by the test equipment to be looped back unmodified (other than normal per-hop processing such as TTL decrement) in the direction of the point of origin by an interface at either an intermediate node or a terminating node. TTL is decremented normally during this process. It is also normal to disable proactive monitoring of the path, as the source MEP will see all source MEP originated OAM messages returned to it.

If the loopback function is to be performed at an intermediate node, it is only applicable to co-routed bi-directional paths. If the loopback is to be performed end to end, it is applicable to both co-routed bi-directional and associated bi-directional paths.

Where a node implements the data plane loopback capability, and whether it implements it at more than one point, is implementation dependent.

6.4. Route Tracing

It is often necessary to trace a route covered by a MEG from a source MEP to the sink MEP, including all the MIPs in-between, after, e.g., provisioning an MPLS-TP transport path or for troubleshooting purposes such as fault localization.

The route tracing function, as required in section 2.2.4 of RFC 5860 [10], provides this functionality. Based on the fate sharing requirement of OAM flows, i.e. OAM packets receive the same forwarding treatment as data packets, route tracing is a basic means to perform connectivity verification and, to a much lesser degree, continuity check. For this function to work properly, a return path must be present.

Route tracing might be implemented in different ways and this document does not preclude any of them.

Route tracing should always discover the full list of MIPs and of the peer MEPs. In case a defect exists, the route trace function needs to be able to detect it and to stop automatically, returning the incomplete list of OAM entities that it was able to trace.

6.4.1. Configuration considerations

The configuration of the route trace function must at least support the setting of the number of trace attempts before it gives up.
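Since this document does not mandate any particular route tracing mechanism, the following is only one possible, purely illustrative strategy (Python), sketched under the assumption of a TTL-expiry-style hop-by-hop probe; the probe callable and the reply structure are hypothetical.

   def trace_route(send_probe_with_ttl, max_hops, attempts_per_hop=3):
       """Illustrative route trace (section 6.4): probe hop by hop, stop
       at the first hop that never answers (a defect), and return the
       possibly incomplete list of discovered OAM entities.
       'send_probe_with_ttl(ttl)' is assumed to return a small dict such
       as {"id": ..., "is_mep": bool}, or None on timeout."""
       discovered = []
       for ttl in range(1, max_hops + 1):
           reply = None
           for _ in range(attempts_per_hop):
               reply = send_probe_with_ttl(ttl)
               if reply is not None:
                   break
           if reply is None:
               return discovered            # defect: incomplete list returned
           discovered.append(reply)
           if reply.get("is_mep"):
               return discovered            # reached the peer (sink) MEP
       return discovered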
6.5. Packet Delay Measurement

Packet Delay Measurement (DM) is one of the capabilities supported by the MPLS-TP PM function in order to facilitate reporting of QoS information for a transport path, as required in section 2.2.12 of RFC 5860 [10]. Specifically, on-demand DM is used to measure packet delay and packet delay variation in the transport path monitored by a pair of MEPs during a pre-defined monitoring period.

On-demand DM is performed by sending periodic DM OAM packets from a MEP to a peer MEP and by receiving DM OAM packets from the peer MEP (if a bidirectional transport path) during a configurable time interval.

On-demand DM can be operated in two ways:

o One-way: a MEP sends a DM OAM packet to its peer MEP containing all the required information to facilitate one-way packet delay and/or one-way packet delay variation measurements at the peer MEP. Note that this requires synchronized precision time at both MEPs by means outside the scope of this framework.

o Two-way: a MEP sends a DM OAM packet with a DM request to its peer MEP, which replies with a DM OAM packet as a DM response. The request/response DM OAM packets contain all the required information to facilitate two-way packet delay and/or two-way packet delay variation measurements from the viewpoint of the source MEP.

MIPs, as well as intermediate nodes, do not process the DM information and forward these on-demand DM OAM packets as regular data packets.

6.5.1. Configuration considerations

In order to support on-demand DM, the beginning and duration of the DM procedures, the transmission rate and the PHB associated with the DM OAM packets originating from a MEP need to be configured as part of the DM provisioning. DM OAM packets should be transmitted with the PHB that yields the lowest discard probability within the measured PHB Scheduling Class (see RFC 3260 [15]).

In order to verify whether performance differs between long and short packets (e.g., due to processing time), it should be possible for the operator to configure the packet size of the on-demand DM OAM packets.

7. OAM Functions for administration control

7.1. Lock Instruct

The Lock Instruct (LKI) function, as required in section 2.2.6 of RFC 5860 [10], is a command allowing a MEP to instruct the peer MEP(s) to put the MPLS-TP transport path into a locked condition.

This function allows single-side provisioning for administratively locking (and unlocking) an MPLS-TP transport path.

Note that it is also possible to administratively lock (and unlock) an MPLS-TP transport path using two-side provisioning, where the NMS administratively puts both MEPs into an administrative lock condition. In this case, the LKI function is not required/used.

MIPs, as well as intermediate nodes, do not process the lock instruct information and forward these on-demand LKI OAM packets as regular data packets.
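The request/reply exchange used by LKI, detailed in sections 7.1.1 and 7.1.2 below, can be summarized in the following informative sketch (Python); the callables are hypothetical and no packet format is implied.

   def lki_initiator(send_lki_request, lock_path, notify_adaptation):
       """Illustrative initiating MEP behaviour on a single-side
       administrative lock command (section 7.1.1)."""
       send_lki_request("lock")      # LKI request towards the peer MEP(s)
       lock_path()                   # put the transport path into a locked state
       notify_adaptation("locked")   # inform the client (sub-)layer adaptation

   def lki_responder(accept, lock_path, notify_adaptation, send_lki_reply):
       """Illustrative peer MEP behaviour: the instruction may be accepted
       or rejected, and an LKI reply is returned either way."""
       if accept:
           lock_path()
           notify_adaptation("locked")
       send_lki_reply("accepted" if accept else "rejected")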
7.1.1. Locking a transport path

A MEP, upon receiving a single-side administrative lock command from an NMS, sends an LKI request OAM packet to its peer MEP(s). It also puts the MPLS-TP transport path into a locked state and notifies its client (sub-)layer adaptation function of the locked condition.

A MEP, upon receiving an LKI request from its peer MEP, can accept or reject the instruction and replies to the peer MEP with an LKI reply OAM packet indicating whether or not it has accepted the instruction.

If the lock instruction has been accepted, it also puts the MPLS-TP transport path into a locked state and notifies its client (sub-)layer adaptation function of the locked condition.

Note that if the client (sub-)layer is also MPLS-TP, Lock Reporting (LKR) generation at the client MPLS-TP (sub-)layer is started, as described in section 5.4.

7.1.2. Unlocking a transport path

A MEP, upon receiving a single-side administrative unlock command from an NMS, sends an LKI removal request OAM packet to its peer MEP(s).

The peer MEP, upon receiving an LKI removal request, can accept or reject the removal instruction and replies with an LKI removal reply OAM packet indicating whether or not it has accepted the instruction.

If the lock removal instruction has been accepted, it also clears the locked condition on the MPLS-TP transport path and notifies its client (sub-)layer adaptation function of this event.

The MEP that has initiated the LKI clear procedure, upon receiving a positive LKI removal reply, also clears the locked condition on the MPLS-TP transport path and notifies its client (sub-)layer adaptation function of this event.

Note that if the client (sub-)layer is also MPLS-TP, Lock Reporting (LKR) generation at the client MPLS-TP (sub-)layer is terminated, as described in section 5.4.

8. Security Considerations

A number of security considerations are important in the context of OAM applications.

OAM traffic can reveal sensitive information such as passwords, performance data and details about, e.g., the network topology. The nature of OAM data therefore suggests having some form of authentication, authorization and encryption in place. This will prevent unauthorized access to vital equipment and prevent third parties from learning sensitive information about the transport network. However, it should be observed that the combination of all permutations of unique MEP to MEP, MEP to MIP, and intermediate-system-originated transactions militates against the practical establishment and maintenance of a large number of security associations per MEG.

For this reason it is assumed that the network is physically secured against man-in-the-middle attacks.
Further, this 2278 document describes OAM functions that, if a man-in-the-middle 2279 attack was possible, could be exploited to significantly disrupt 2280 proper operation of the network. 2282 Mechanisms that the framework does not specify might be subject 2283 to additional security considerations. 2285 9. IANA Considerations 2287 No new IANA considerations. 2289 10. Acknowledgments 2291 The authors would like to thank all members of the teams (the 2292 Joint Working Team, the MPLS Interoperability Design Team in 2293 IETF and the Ad Hoc Group on MPLS-TP in ITU-T) involved in the 2294 definition and specification of MPLS Transport Profile. 2296 The editors gratefully acknowledge the contributions of Adrian 2297 Farrel, Yoshinori Koike, Luca Martini, Yuji Tochio and Manuel 2298 Paul for the definition of per-interface MIPs and MEPs. 2300 The editors gratefully acknowledge the contributions of Malcolm 2301 Betts, Yoshinori Koike, Xiao Min, and Maarten Vissers for the 2302 lock report and lock instruction description. 2304 The authors would also like to thank Alessandro D'Alessandro, 2305 Loa Andersson, Malcolm Betts, Stewart Bryant, Rui Costa, Xuehui 2306 Dai, John Drake, Adrian Farrel, Dan Frost, Liu Gouman, Peng He, 2307 Feng Huang, Su Hui, Yoshionori Koike, George Swallow, Yuji 2308 Tochio, Curtis Villamizar, Maarten Vissers and Xuequin Wei for 2309 their comments and enhancements to the text. 2311 This document was prepared using 2-Word-v2.0.template.dot. 2313 11. References 2315 11.1. Normative References 2317 [1] Rosen, E., Viswanathan, A., Callon, R., "Multiprotocol 2318 Label Switching Architecture", RFC 3031, January 2001 2320 [2] Bryant, S., Pate, P., "Pseudo Wire Emulation Edge-to-Edge 2321 (PWE3) Architecture", RFC 3985, March 2005 2323 [3] Nadeau, T., Pignataro, S., "Pseudowire Virtual Circuit 2324 Connectivity Verification (VCCV): A Control Channel for 2325 Pseudowires", RFC 5085, December 2007 2327 [4] Bocci, M., Bryant, S., "An Architecture for Multi-Segment 2328 Pseudo Wire Emulation Edge-to-Edge", RFC 5659, October 2329 2009 2331 [5] Niven-Jenkins, B., Brungard, D., Betts, M., sprecher, N., 2332 Ueno, S., "MPLS-TP Requirements", RFC 5654, September 2009 2334 [6] Agarwal, P., Akyol, B., "Time To Live (TTL) Processing in 2335 Multiprotocol Label Switching (MPLS) Networks", RFC 3443, 2336 January 2003 2338 [7] Vigoureux, M., Bocci, M., Swallow, G., Ward, D., Aggarwal, 2339 R., "MPLS Generic Associated Channel", RFC 5586, June 2009 2341 [8] Bocci, M., et al., "A Framework for MPLS in Transport 2342 Networks", RFC 5921, July 2010 2344 [9] Swallow, G., Bocci, M., "MPLS-TP Identifiers", draft-ietf- 2345 mpls-tp-identifiers-01 (work in progress), April 2010 2347 [10] Vigoureux, M., Betts, M., Ward, D., "Requirements for OAM 2348 in MPLS Transport Networks", RFC 5860, May 2010 2350 [11] Bradner, S., McQuaid, J., "Benchmarking Methodology for 2351 Network Interconnect Devices", RFC 2544, March 1999 2353 [12] ITU-T Recommendation G.806 (01/09), "Characteristics of 2354 transport equipment - Description methodology and generic 2355 functionality ", January 2009 2357 11.2. 
Informative References 2359 [13] Sprecher, N., Nadeau, T., van Helvoort, H., Weingarten, 2360 Y., "MPLS-TP OAM Analysis", draft-ietf-mpls-tp-oam- 2361 analysis-02 (work in progress), July 2010 2363 [14] Nichols, K., Blake, S., Baker, F., Black, D., "Definition 2364 of the Differentiated Services Field (DS Field) in the 2365 IPv4 and IPv6 Headers", RFC 2474, December 1998 2367 [15] Grossman, D., "New terminology and clarifications for 2368 Diffserv", RFC 3260, April 2002. 2370 [16] Kompella, K., Rekhter, Y., Berger, L., "Link Bundling in 2371 MPLS Traffic Engineering (TE)", RFC 4201, October 2005 2373 [17] ITU-T Recommendation G.707/Y.1322 (01/07), "Network node 2374 interface for the synchronous digital hierarchy (SDH)", 2375 January 2007 2377 [18] ITU-T Recommendation G.805 (03/00), "Generic functional 2378 architecture of transport networks", March 2000 2380 [19] ITU-T Recommendation Y.1731 (02/08), "OAM functions and 2381 mechanisms for Ethernet based networks", February 2008 2383 [20] IEEE Standard 802.1AX-2008, "IEEE Standard for Local and 2384 Metropolitan Area Networks - Link Aggregation", November 2385 2008 2387 Authors' Addresses 2389 Dave Allan 2390 Ericsson 2392 Email: david.i.allan@ericsson.com 2394 Italo Busi 2395 Alcatel-Lucent 2397 Email: Italo.Busi@alcatel-lucent.com 2398 Ben Niven-Jenkins 2399 BT 2401 Email: benjamin.niven-jenkins@bt.com 2403 Annamaria Fulignoli 2404 Ericsson 2406 Email: annamaria.fulignoli@ericsson.com 2408 Enrique Hernandez-Valencia 2409 Alcatel-Lucent 2411 Email: Enrique.Hernandez@alcatel-lucent.com 2413 Lieven Levrau 2414 Alcatel-Lucent 2416 Email: Lieven.Levrau@alcatel-lucent.com 2418 Vincenzo Sestito 2419 Alcatel-Lucent 2421 Email: Vincenzo.Sestito@alcatel-lucent.com 2423 Nurit Sprecher 2424 Nokia Siemens Networks 2426 Email: nurit.sprecher@nsn.com 2428 Huub van Helvoort 2429 Huawei Technologies 2431 Email: hhelvoort@huawei.com 2433 Martin Vigoureux 2434 Alcatel-Lucent 2436 Email: Martin.Vigoureux@alcatel-lucent.com 2437 Yaacov Weingarten 2438 Nokia Siemens Networks 2440 Email: yaacov.weingarten@nsn.com 2442 Rolf Winter 2443 NEC 2445 Email: Rolf.Winter@nw.neclab.eu