Network Working Group                                    Hing-Kam Lam
Internet Draft                                          Alcatel-Lucent
Expires: March 18, 2010                                Scott Mansfield
Intended Status: Standards Track                             Eric Gray
                                                              Ericsson
                                                    September 18, 2009

              MPLS-TP Network Management Requirements
                  draft-ietf-mpls-tp-nm-req-05.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on March 18, 2010.

Abstract

   This document specifies the requirements for the management of
   equipment used in networks supporting an MPLS Transport Profile
   (MPLS-TP).  The requirements are defined for specification of
   network management aspects of protocol mechanisms and procedures
   that constitute the building blocks out of which the MPLS
   transport profile is constructed.
That is, these requirements 44 indicate what management capabilities need to be available in 45 MPLS for use in managing the MPLS-TP. This document is intended 46 to identify essential network management capabilities, not to 47 specify what functions any particular MPLS implementation 48 supports. 50 Table of Contents 52 1. Introduction..............................................3 53 1.1. Terminology..........................................4 54 2. Management Interface Requirements.........................6 55 3. Management Communication Channel (MCC) Requirements.......6 56 4. Management Communication Network (MCN) Requirements.......6 57 5. Fault Management Requirements.............................8 58 5.1. Supervision Function.................................8 59 5.2. Validation Function..................................9 60 5.3. Alarm Handling Function.............................10 61 5.3.1. Alarm Severity Assignment......................10 62 5.3.2. Alarm Suppression .............................10 63 5.3.3. Alarm Reporting................................11 64 5.3.4. Alarm Reporting Control........................11 65 6. Configuration Management Requirements....................11 66 6.1. System Configuration................................12 67 6.2. Control Plane Configuration.........................12 68 6.3. Path Configuration..................................12 69 6.4. Protection Configuration............................13 70 6.5. OAM Configuration...................................13 71 7. Performance Management Requirements......................14 72 7.1. Path Characterization Performance Metrics...........14 73 7.2. Performance Measurement Instrumentation.............16 74 7.2.1. Measurement Frequency..........................16 75 7.2.2. Measurement Scope .............................16 76 8. Security Management Requirements.........................16 77 8.1. Management Communication Channel Security...........17 78 8.2. Signaling Communication Channel Security............17 79 8.3. Distributed Denial of Service ......................17 80 9. Security Considerations..................................18 81 10. IANA Considerations.....................................18 82 11. Acknowledgments.........................................18 83 12. References..............................................18 84 12.1. Normative References...............................18 85 12.2. Informative References.............................19 86 Author's Addresses..........................................21 87 Copyright Statement.........................................21 88 Acknowledgment..............................................22 89 Appendix A - Communication Channel (CCh) Examples...........23 91 1. Introduction 93 This document specifies the requirements for the management of 94 equipment used in networks supporting an MPLS Transport Profile 95 (MPLS-TP). The requirements are defined for specification of 96 network management aspects of protocol mechanisms and procedures 97 that constitute the building blocks out of which the MPLS 98 transport profile is constructed. That is, these requirements 99 indicate what management capabilities need to be available in 100 MPLS for use in managing the MPLS-TP. This document is intended 101 to identify essential network management capabilities, not to 102 specify what functions any particular MPLS implementation 103 supports. 
105 This document also leverages management requirements specified 106 in ITU-T G.7710/Y.1701 [1] and RFC 4377 [2], and attempts to 107 comply with best common practice as defined in [14]. 109 ITU-T G.7710/Y.1701 defines generic management requirements for 110 transport networks. RFC 4377 specifies the OAM requirements, 111 including OAM-related network management requirements, for MPLS 112 networks. 114 This document is a product of a joint ITU-T and IETF effort to 115 include an MPLS Transport Profile (MPLS-TP) within the IETF MPLS 116 and PWE3 architectures to support capabilities and functionality 117 of a transport network as defined by ITU-T. 119 The requirements in this document derive from two sources: 121 1) MPLS and PWE3 architectures as defined by IETF, and 123 2) packet transport networks as defined by ITU-T. 125 Requirements for management of equipment in MPLS-TP networks are 126 defined herein. Related functions of MPLS and PWE3 are defined 127 elsewhere (and are out of scope in this document). 129 This document expands on the requirements in [1] and [2] to 130 cover fault, configuration, performance, and security management 131 for MPLS-TP networks, and the requirements for object and 132 information models needed to manage MPLS-TP Networks and Network 133 Elements. 135 In writing this document, the authors assume the reader is 136 familiar with references [12] and [15]. 138 1.1. Terminology 140 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 141 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 142 document are to be interpreted as described in RFC 2119 [5]. Although 143 this document is not a protocol specification, the use of this 144 language clarifies the instructions to protocol designers producing 145 solutions that satisfy the requirements set out in this document. 147 Anomaly: The smallest discrepancy which can be observed between 148 actual and desired characteristics of an item. The occurrence of 149 a single anomaly does not constitute an interruption in ability 150 to perform a required function. Anomalies are used as the input 151 for the Performance Monitoring (PM) process and for detection of 152 defects (from [21], 3.7). 154 Communication Channel (CCh): A logical channel between network 155 elements (NEs) that can be used - e.g. - for management or 156 control plane applications. The physical channel supporting the 157 CCh is technology specific. See Appendix A. 159 Data Communication Network (DCN): A network that supports Layer 160 1 (physical layer), Layer 2 (data-link layer), and Layer 3 161 (network layer) functionality for distributed management 162 communications related to the management plane, for distributed 163 signaling communications related to the control plane, and other 164 operations communications (e.g., order-wire/voice 165 communications, software downloads, etc.). 167 Defect: The density of anomalies has reached a level where the 168 ability to perform a required function has been interrupted. 169 Defects are used as input for performance monitoring, the 170 control of consequent actions, and the determination of fault 171 cause (from [21], 3.24). 173 Failure: The fault cause persisted long enough to consider the 174 ability of an item to perform a required function to be 175 terminated. The item may be considered as failed; a fault has 176 now been detected (from [21], 3.25). 178 Fault: A fault is the inability of a function to perform a 179 required action. 
This does not include an inability due to 180 preventive maintenance, lack of external resources, or planned 181 actions (from [21], 3.26). 183 Fault Cause: A single disturbance or fault may lead to the 184 detection of multiple defects. A fault cause is the result of a 185 correlation process which is intended to identify the defect 186 that is representative of the disturbance or fault that is 187 causing the problem (from [21], 3.27). 189 Fault Cause Indication (FCI): An indication of a fault cause. 191 Management Communication Channel (MCC): A CCh dedicated for 192 management plane communications. 194 Management Communication Network (MCN): A DCN supporting 195 management plane communication is referred to as a Management 196 Communication Network (MCN). 198 MPLS-TP NE: A network element (NE) that supports the functions 199 of MPLS necessary to participate in an MPLS-TP based transport 200 service. See [7] for further information on functionality 201 required to support MPLS-TP. 203 MPLS-TP network: a network in which MPLS-TP NEs are deployed. 205 OAM, On-Demand and Proactive: One feature of OAM that is largely 206 a management issue is control of OAM; on-demand and proactive 207 are modes of OAM mechanism operation defined - for example - in 208 Y.1731 ([22] - 3.45 and 3.44 respectively) as: 210 . On-demand OAM - OAM actions which are initiated via manual 211 intervention for a limited time to carry out diagnostics. 212 On-demand OAM can result in singular or periodic OAM 213 actions during the diagnostic time interval. 215 . Proactive OAM - OAM actions which are carried on 216 continuously to permit timely reporting of fault and/or 217 performance status. 219 (Note that it is possible for specific OAM mechanisms to only 220 have a sensible use in either on-demand or proactive mode.) 222 Operations System (OS): A system that performs the functions 223 that support processing of information related to operations, 224 administration, maintenance, and provisioning (OAM&P) for the 225 networks, including surveillance and testing functions to 226 support customer access maintenance. 228 Signaling Communication Channel (SCC): A CCh dedicated for 229 control plane communications. The SCC may be used for GMPLS/ASON 230 signaling and/or other control plane messages (e.g., routing 231 messages). 233 Signaling Communication Network (SCN): A DCN supporting control 234 plane communication is referred to as a Signaling Communication 235 Network (SCN). 237 2. Management Interface Requirements 239 This document does not specify which management interface 240 protocol should be used as the standard protocol for managing 241 MPLS-TP networks. Managing an end-to-end connection across 242 multiple operator domains where one domain is managed (for 243 example) via NETCONF/XML ([16]) or SNMP/SMI ([17]), and another 244 domain via CORBA/IDL ([18]), is allowed. 246 For the management interface to the management system, an MPLS- 247 TP NE MAY actively support more than one management protocol in 248 any given deployment. For example, an MPLS-TP NE may use one 249 protocol for configuration and another for monitoring. The 250 protocols to be supported are at the discretion of the operator. 252 3. Management Communication Channel (MCC) Requirements 254 Specifications SHOULD define support for management connectivity 255 with remote MPLS-TP domains and NEs, as well as with termination 256 points located in NEs under the control of a third party network 257 operator. 
   See ITU-T G.8601 [23] for example scenarios in multi-carrier
   multi-transport-technology environments.

   For management purposes, every MPLS-TP NE MUST connect to an OS.
   The connection MAY be direct (e.g. - via a software, hardware or
   proprietary protocol connection) or indirect (via another MPLS-TP
   NE).  In this document, any management connection that is not via
   another MPLS-TP NE is a direct management connection.  When an
   MPLS-TP NE is connected indirectly to an OS, an MCC MUST be
   supported between that MPLS-TP NE and any MPLS-TP NE(s) used to
   provide the connection to an OS.

4. Management Communication Network (MCN) Requirements

   Entities of the MPLS-TP management plane communicate via a DCN,
   or more specifically via the MCN.  The MCN connects management
   systems with management systems, management systems with MPLS-TP
   NEs, and (in the indirect connectivity case discussed in section
   3) MPLS-TP NEs with MPLS-TP NEs.

   RFC 5586 [13] defines a Generic Associated Channel (G-ACh) to
   enable the realization of a communication channel (CCh) between
   adjacent MPLS-TP NEs for management and control.  Reference [8]
   describes how the G-ACh may be used to provide infrastructure
   that forms part of the MCN and SCN.  It also explains how MCN and
   SCN messages are encapsulated, carried on the G-ACh, and
   decapsulated for delivery to management or signaling/routing
   control plane components on a label switching router (LSR).

   ITU-T G.7712/Y.1703 [6], section 7, describes the transport DCN
   architecture and requirements.  The MPLS-TP MCN MUST support the
   requirements (in reference [6]) for:

   - CCh access functions specified in section 7.1.1;

   - MPLS-TP SCC data-link layer termination functions specified in
     section 7.1.2.3;

   - MPLS-TP MCC data-link layer termination functions specified in
     section 7.1.2.4;

   - Network layer PDU into CCh data-link frame encapsulation
     functions specified in section 7.1.3;

   - Network layer PDU forwarding (7.1.6), interworking (7.1.7) and
     encapsulation (7.1.8) functions, as well as tunneling (7.1.9)
     and routing (7.1.10) functions specified in [6].

   As a practical matter, MCN connections will typically have
   addresses.  See the section on addressing in [15] for further
   information.

   In order to have the MCN operate properly, a number of management
   functions for the MCN are needed, including:

   . Retrieval of DCN network parameters to ensure compatible
     functioning, e.g. packet size, timeouts, quality of service,
     window size, etc.;

   . Establishment of message routing between DCN nodes;

   . Management of DCN network addresses;

   . Retrieval of operational status of the DCN at a given node;

   . Capability to enable/disable access by an NE to the DCN.  Note
     that this allows a malfunctioning NE to be isolated so that it
     does not impact the rest of the network.

5. Fault Management Requirements

   The Fault Management functions within an MPLS-TP NE enable the
   supervision, detection, validation, isolation, correction, and
   reporting of abnormal operation of the MPLS-TP network and its
   environment.

5.1. Supervision Function

   The supervision function analyses the actual occurrence of a
   disturbance or fault for the purpose of providing an appropriate
   indication of performance and/or detected fault condition to
   maintenance personnel and operations systems.
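   As a purely illustrative, non-normative sketch, the following
   Python fragment shows one way an NE implementation might organize
   the supervision sources enumerated in the requirements that
   follow, polling each source and forwarding Fault Cause
   Indications (FCIs) toward a validation stage (section 5.2) and,
   ultimately, maintenance personnel or an OS.  All class, field,
   and function names here are hypothetical and are not part of
   these requirements.

      # Illustrative only: a toy model of an NE-side supervision
      # function that polls fault sources and hands Fault Cause
      # Indications (FCIs) to a validation stage.
      from dataclasses import dataclass
      from datetime import datetime, timezone
      from typing import Callable, List

      @dataclass
      class FaultCauseIndication:
          source: str        # e.g. "oam-cc", "software", "hardware"
          entity: str        # e.g. an LSP identifier or equipment unit
          cause: str         # e.g. "loss-of-continuity", "fan-failure"
          raised: bool       # True = cause present, False = cleared
          timestamp: datetime

      class SupervisionFunction:
          """Polls registered supervision sources and forwards FCIs."""

          def __init__(self, report: Callable[[FaultCauseIndication], None]):
              self._sources: List[Callable[[], List[FaultCauseIndication]]] = []
              self._report = report   # e.g. hand-off to validation (5.2)

          def register_source(self, poll):
              self._sources.append(poll)

          def run_once(self):
              for poll in self._sources:
                  for fci in poll():
                      self._report(fci)

      def example_oam_source() -> List[FaultCauseIndication]:
          # A real implementation would query proactive OAM state
          # (e.g. continuity check); here a single indication is
          # fabricated for illustration.
          return [FaultCauseIndication("oam-cc", "lsp-42",
                                       "loss-of-continuity", True,
                                       datetime.now(timezone.utc))]

      if __name__ == "__main__":
          sup = SupervisionFunction(report=lambda fci: print("FCI:", fci))
          sup.register_source(example_oam_source)
          sup.run_once()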
339 The MPLS-TP NE MUST support supervision of the OAM mechanisms 340 that are deployed for supporting the OAM requirements defined in 341 [1]. 343 The MPLS-TP NE MUST support the following data-plane forwarding 344 path supervision functions: 346 . Supervision of loop-checking functions used to detect loops 347 in the data-plane forwarding path (which result in non- 348 delivery of traffic, wasting of forwarding resources and 349 unintended self-replication of traffic); 351 . Supervision of failure detection; 353 The MPLS-TP NE MUST support the capability to configure data- 354 plane forwarding path related supervision mechanisms to perform 355 on-demand or proactively. 357 The MPLS-TP NE MUST support supervision for software processing 358 e.g., processing faults, storage capacity, version mismatch, 359 corrupted data and out of memory problems, etc. 361 The MPLS-TP NE MUST support hardware-related supervision for 362 interchangeable and non-interchangeable unit, cable, and power 363 problems. 365 The MPLS-TP NE SHOULD support environment-related supervision 366 for temperature, humidity, etc. 368 5.2. Validation Function 370 Validation is the process of integrating Fault Cause indications 371 into Failures. A Fault Cause Indication (FCI) indicates a 372 limited interruption of the required transport function. A Fault 373 Cause is not reported to maintenance personnel because it might 374 exist only for a very short time. Note that some of these events 375 are summed up in the Performance Monitoring process (see section 376 7), and when this sum exceeds a configured value, a threshold 377 crossing alert (report) can be generated. 379 When the Fault Cause lasts long enough, an inability to perform 380 the required transport function arises. This failure condition 381 is subject to reporting to maintenance personnel and/or an OS 382 because corrective action might be required. Conversely, when 383 the Fault Cause ceases after a certain time, clearing of the 384 Failure condition is also subject to reporting. 386 The MPLS-TP NE MUST perform persistency checks on fault causes 387 before it declares a fault cause a failure. 389 The MPLS-TP NE SHOULD provide a configuration capability for 390 control parameters associated with performing the persistency 391 checks described above. 393 An MPLS-TP NE MAY provide configuration parameters to control 394 reporting, and clearing, of failure conditions. 396 A data-plane forwarding path failure MUST be declared if the 397 fault cause persists continuously for a configurable time (Time- 398 D). The failure MUST be cleared if the fault cause is absent 399 continuously for a configurable time (Time-C). 401 Note: As an example, the default time values might be as 402 follows: 404 Time-D = 2.5 +/- 0.5 seconds 406 Time-C = 10 +/- 0.5 seconds 408 These time values are as defined in G.7710 [1]. 410 MIBs - or other object management semantics specifications - 411 defined to enable configuration of these timers SHOULD 412 explicitly provide default values and MAY provide guidelines on 413 ranges and value determination methods for scenarios where the 414 default value chosen might be inadequate. In addition, such 415 specifications SHOULD define the level of granularity at which 416 tables of these values are to be defined. 418 Implementations MUST provide the ability to configure the 419 preceding set of timers, and SHOULD provide default values to 420 enable rapid configuration. 
   Suitable default values, timer ranges, and level of granularity
   are out of scope in this document and form part of the
   specification of fault management details.  Timers SHOULD be
   configurable per NE for broad categories (for example, defects
   and/or fault causes), and MAY be configurable per-interface on an
   NE and/or per individual defect/fault cause.

   The failure declaration and clearing MUST be time stamped.  The
   time-stamp MUST indicate the time at which the fault cause is
   activated at the input of the fault cause persistency (i.e.
   defect-to-failure integration) function, and the time at which
   the fault cause is deactivated at the input of the fault cause
   persistency function.

5.3. Alarm Handling Function

5.3.1. Alarm Severity Assignment

   Failures can be categorized to indicate the severity or urgency
   of the fault.

   An MPLS-TP NE SHOULD support the ability to assign severity
   (e.g., Critical, Major, Minor, Warning) to alarm conditions via
   configuration.

   See G.7710 [1], section 7.2.2 for more detail on alarm severity
   assignment.

5.3.2. Alarm Suppression

   Alarms can be generated from many sources, including OAM, device
   status, etc.

   An MPLS-TP NE MUST support suppression of alarms based on
   configuration.

5.3.3. Alarm Reporting

   Alarm Reporting is concerned with the reporting of relevant
   events and conditions that occur in the network (including the
   NE, incoming signal, and external environment).

   Local reporting is concerned with automatic alarming by means of
   audible and visual indicators near the failed equipment.

   An MPLS-TP NE MUST support local reporting of alarms.

   The MPLS-TP NE MUST support reporting of alarms to an OS.  These
   reports are either autonomous reports (notifications) or reports
   on request by maintenance personnel.  The MPLS-TP NE SHOULD
   report local (environmental) alarms to a network management
   system.

   An MPLS-TP NE supporting one or more other networking
   technologies (e.g. - Ethernet, SDH/SONET, MPLS) over MPLS-TP MUST
   be capable of translating MPLS-TP defects into failure conditions
   that are meaningful to the client layer, as described in RFC 4377
   [2], section 4.7.

5.3.4. Alarm Reporting Control

   Alarm Reporting Control (ARC) supports an automatic in-service
   provisioning capability.  Alarm reporting can be turned off on a
   per-managed entity (e.g., LSP) basis to allow sufficient time for
   customer service testing and other maintenance activities in an
   "alarm free" state.  Once a managed entity is ready, alarm
   reporting is automatically turned on.

   An MPLS-TP NE SHOULD support the Alarm Reporting Control function
   for controlling the reporting of alarm conditions.

   See G.7710 [1] (section 7.1.3.2) and RFC 3878 [24] for more
   information about ARC.

6. Configuration Management Requirements

   Configuration Management provides functions to identify, collect
   data from, provide data to, and control NEs.  Specific
   configuration tasks requiring network management support include
   hardware and software configuration, configuration of NEs to
   support transport paths (including required working and
   protection paths), and configuration of required path
   integrity/connectivity and performance monitoring (i.e. - OAM).

6.1.
System Configuration 507 The MPLS-TP NE MUST support the configuration requirements 508 specified in G.7710 [1] section 8.1 for hardware. 510 The MPLS-TP NE MUST support the configuration requirements 511 specified in G.7710 [1] section 8.2 for software. 513 The MPLS-TP NE MUST support the configuration requirements 514 specified in G.7710 [1] section 8.13.2.1 for local real time 515 clock functions. 517 The MPLS-TP NE MUST support the configuration requirements 518 specified in G.7710 [1] section 8.13.2.2 for local real time 519 clock alignment with external time reference. 521 The MPLS-TP NE MUST support the configuration requirements 522 specified in G.7710 [1] section 8.13.2.3 for performance 523 monitoring of the clock function. 525 6.2. Control Plane Configuration 527 If a control plane is supported in an implementation of MPLS-TP, 528 the MPLS-TP NE MUST support the configuration of MPLS-TP control 529 plane functions by the management plane. Further detailed 530 requirements will be provided along with progress in defining 531 the MPLS-TP control plane in appropriate specifications. 533 6.3. Path Configuration 535 In addition to the requirement to support static provisioning of 536 transport paths (defined in [7], section 2.1 - General 537 Requirements - requirement 18), an MPLS-TP NE MUST support the 538 configuration of required path performance characteristic 539 thresholds (e.g. - Loss Measurement , Delay Measurement 540 thresholds) necessary to support performance monitoring of the 541 MPLS-TP service(s). 543 In order to accomplish this, an MPLS-TP NE MUST support 544 configuration of LSP information (such as an LSP identifier of 545 some kind) and/or any other information needed to retrieve LSP 546 status information, performance attributes, etc. 548 If a control plane is supported, and that control plane includes 549 support for control-plane/management-plane hand-off for LSP 550 setup/maintenance, the MPLS-TP NE MUST support management of the 551 hand-off of Path control. See, for example, references [19] and 552 [20]. 554 Further detailed requirements will be provided along with 555 progress in defining the MPLS-TP control plane in appropriate 556 specifications. 558 If MPLS-TP transport paths cannot be statically provisioned 559 using MPLS LSP and pseudo-wire management tools (either already 560 defined in standards or under development), further management 561 specifications MUST be provided as needed. 563 6.4. Protection Configuration 565 The MPLS-TP NE MUST support configuration of required path 566 protection information as follows: 568 . designate specifically identified LSPs as working or 569 protecting LSPs; 571 . define associations of working and protecting paths; 573 . operate/release manual protection switching; 575 . operate/release force protection switching; 577 . operate/release protection lockout; 579 . set/retrieve Automatic Protection Switching (APS) 580 parameters, including - 582 o Wait to Restore time, 584 o Protection Switching threshold information. 586 6.5. OAM Configuration 588 The MPLS-TP NE MUST support configuration of the OAM entities 589 and functions specified in [3]. 591 The MPLS-TP NE MUST support the capability to choose which OAM 592 functions are enabled. 594 For enabled OAM functions, the MPLS-TP NE MUST support the 595 ability to associate OAM functions with specific maintenance 596 entities. 
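   As a non-normative illustration of the association between
   maintenance entities and enabled OAM functions described above,
   the following Python fragment shows one way such configuration
   might be represented.  The field and function names (cc, lm, dm,
   mode, threshold, etc.) are examples only and do not define a
   management information model.

      # Illustrative only: per-maintenance-entity OAM configuration,
      # recording which OAM functions are enabled, their mode, and
      # any associated thresholds (see section 6.3).
      from dataclasses import dataclass, field
      from typing import Dict, Optional

      @dataclass
      class OamFunctionConfig:
          enabled: bool = False
          mode: str = "proactive"            # "proactive" or "on-demand"
          period_ms: Optional[int] = None    # transmission period, if any
          threshold: Optional[float] = None  # e.g. loss-ratio or delay

      @dataclass
      class MaintenanceEntityOam:
          entity_id: str                     # e.g. an LSP identifier
          functions: Dict[str, OamFunctionConfig] = field(default_factory=dict)

          def enable(self, name: str, **kwargs):
              self.functions[name] = OamFunctionConfig(enabled=True, **kwargs)

          def disable(self, name: str):
              if name in self.functions:
                  self.functions[name].enabled = False

      # Example: enable proactive continuity check (CC) and loss
      # measurement (LM) with a threshold on one maintenance entity,
      # leaving delay measurement (DM) for on-demand use.
      me = MaintenanceEntityOam("lsp-42")
      me.enable("cc", mode="proactive", period_ms=1000)
      me.enable("lm", mode="proactive", threshold=0.001)
      me.enable("dm", mode="on-demand")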
598 The MPLS-TP NE MUST support the capability to configure the OAM 599 entities/functions as part of LSP setup and tear-down, including 600 co-routed bidirectional point-to-point, associated bidirectional 601 point-to-point, and uni-directional (both point-to-point and 602 point-to-multipoint) connections. 604 The MPLS-TP NE MUST support the configuration of maintenance 605 entity identifiers (e.g. MEP ID and MIP ID) for the purpose of 606 LSP connectivity checking. 608 The MPLS-TP NE MUST support configuration of OAM parameters to 609 meet their specific operational requirements, such as whether - 611 1) one-time on-demand immediately or 613 2) one-time on-demand pre-scheduled or 615 3) on-demand periodically based on a specified schedule or 617 4) proactive on-going. 619 The MPLS-TP NE MUST support the enabling/disabling of the 620 connectivity check processing. The connectivity check process of 621 the MPLS-TP NE MUST support provisioning of the identifiers to 622 be transmitted and the expected identifiers. 624 7. Performance Management Requirements 626 Performance Management provides functions for the purpose of 627 Maintenance, Bring-into-service, Quality of service, and 628 statistics gathering. 630 This information could be used, for example, to compare behavior 631 of the equipment, MPLS-TP NE or network at different moments in 632 time to evaluate changes in network performance. 634 ITU-T Recommendation G.7710 [1] provides transport performance 635 monitoring requirements for packet-switched and circuit-switched 636 transport networks with the objective of providing coherent and 637 consistent interpretation of the network behavior in a multi- 638 technology environment. The performance management requirements 639 specified in this document are driven by such an objective. 641 7.1. Path Characterization Performance Metrics 643 It MUST be possible to determine when an MPLS-TP based transport 644 service is available and when it is unavailable. 646 From a performance perspective, a service is unavailable if 647 there is an indication that performance has degraded to the 648 extent that a configurable performance threshold has been 649 crossed and the degradation persists long enough (i.e. - the 650 indication persists for some amount of time - which is either 651 configurable, or well-known) to be certain it is not a 652 measurement anomaly. 654 Methods, mechanisms and algorithms for exactly how 655 unavailability is to be determined - based on collection of raw 656 performance data - are out of scope for this document. 658 The MPLS-TP NE MUST support collection and reporting of raw 659 performance data that MAY be used in determining the 660 unavailability of a transport service. 662 MPLS-TP MUST support the determination of the unavailability of 663 the transport service. The result of this determination MUST be 664 available via the MPLS-TP NE (at service termination points), 665 and determination of unavailability MAY be supported by the 666 MPLS-TP NE directly. To support this requirement, the MPLS-TP NE 667 management information model MUST include objects corresponding 668 to availability-state of services. 670 Transport network unavailability is based on Severely Errored 671 Seconds (SES) and Unavailable Seconds (UAS). ITU-T is 672 establishing definitions of unavailability generically 673 applicable to packet transport technologies, including MPLS-TP, 674 based on SES and UAS. 
   Note that SES and UAS are already defined for Ethernet transport
   networks in ITU-T Recommendation Y.1563 [25].

   The MPLS-TP NE MUST support collection of loss measurement (LM)
   statistics.

   The MPLS-TP NE MUST support collection of delay measurement (DM)
   statistics.

   The MPLS-TP NE MUST support reporting of performance degradation
   via fault management for corrective actions.  "Reporting" in this
   context could mean:

   . reporting to an autonomous protection component to trigger
     protection switching,

   . reporting via a craft interface to allow replacement of a
     faulty component (or similar manual intervention),

   . etc.

   The MPLS-TP NE MUST support reporting of performance statistics
   on request from a management system.

7.2. Performance Measurement Instrumentation

7.2.1. Measurement Frequency

   For performance measurement mechanisms that support both
   proactive and on-demand modes, the MPLS-TP NE MUST support the
   capability to be configured to operate on-demand or proactively.

7.2.2. Measurement Scope

   On measurement of packet loss and loss ratio:

   . For bidirectional (both co-routed and associated) P2P
     connections:

     o on-demand single-ended measurement of packet loss and loss
       ratio is REQUIRED;

     o proactive measurement of packet loss and loss ratio for each
       direction is REQUIRED.

   . For unidirectional (P2P and P2MP) connections, proactive
     measurement of packet loss and loss ratio is REQUIRED.

   On measurement of delay:

   . For unidirectional (P2P and P2MP) connections, on-demand
     measurement of delay is REQUIRED.

   . For co-routed bidirectional (P2P) connections, on-demand
     measurement of one-way and two-way delay is REQUIRED.

   . For associated bidirectional (P2P) connections, on-demand
     measurement of one-way delay is REQUIRED.

8. Security Management Requirements

   The MPLS-TP NE MUST support secure management and control planes.

8.1. Management Communication Channel Security

   Secure communication channels MUST be supported for all network
   traffic and protocols used to support management functions.  This
   MUST include, at least, protocols used for configuration,
   monitoring, configuration backup, logging, time synchronization,
   authentication, and routing.  The MCC MUST support application
   protocols that provide confidentiality and data integrity
   protection.

   The MPLS-TP NE MUST support the following:

   - Use of open cryptographic algorithms (See RFC 3871 [4])

   - Authentication - allow management connectivity only from
     authenticated entities.

   - Authorization - allow management activity only when originated
     by an authorized entity, using (for example) an Access Control
     List (ACL).

   - Port Access Control - allow management activity only when
     received on an authorized (management) port.

8.2. Signaling Communication Channel Security

   Security requirements for the SCC are driven by considerations
   similar to MCC requirements described in section 8.1.

   Security requirements for the control plane are out of scope for
   this document and are expected to be defined in the appropriate
   control plane specifications.

   Management of control plane security MUST also be defined at that
   time.

8.3. Distributed Denial of Service
   A Denial of Service (DoS) attack is an attack that tries to
   prevent a target from performing an assigned task, or providing
   its intended service(s), through any means.  A Distributed DoS
   (DDoS) can multiply attack severity (possibly by an arbitrary
   amount) by using multiple (potentially compromised) systems to
   act as topologically (and potentially geographically) distributed
   attack sources.  It is possible to lessen the impact and
   potential for DoS and DDoS by using secure protocols, turning off
   unnecessary processes, logging and monitoring, and ingress
   filtering.  RFC 4732 [26] provides background on DoS in the
   context of the Internet.

   An MPLS-TP NE MUST support secure management protocols and SHOULD
   do so in a manner that reduces the potential impact of a DoS
   attack.

   An MPLS-TP NE SHOULD support additional mechanisms that mitigate
   a DoS (or DDoS) attack against the management component while
   allowing the NE to continue to perform its primary functions.

9. Security Considerations

   Section 8 includes a set of security requirements that apply to
   MPLS-TP network management.

   Solutions MUST provide mechanisms to prevent unauthorized and/or
   unauthenticated access to management capabilities and private
   information by network elements, systems or users.

   Performance of diagnostic functions and path characterization
   involves extracting a significant amount of information about
   network construction that the network operator might consider
   private.

10. IANA Considerations

   There are no IANA actions associated with this document.

11. Acknowledgments

   The authors/editors gratefully acknowledge the thoughtful review,
   comments and explanations provided by Adrian Farrel, Alexander
   Vainshtein, Andrea Maria Mazzini, Ben Niven-Jenkins, Bernd
   Zeuner, Dan Romascanu, Daniele Ceccarelli, Diego Caviglia, Dieter
   Beller, He Jia, Leo Xiao, Maarten Vissers, Neil Harrison, Rolf
   Winter, Yoav Cohen and Yu Liang.

12. References

12.1. Normative References

   [1]  ITU-T Recommendation G.7710/Y.1701, "Common equipment
        management function requirements", July 2007.

   [2]  Nadeau, T., et al, "Operations and Management (OAM)
        Requirements for Multi-Protocol Label Switched (MPLS)
        Networks", RFC 4377, February 2006.

   [3]  Vigoureux, M., et al, "Requirements for OAM in MPLS
        Transport Networks", draft-ietf-mpls-tp-oam-requirements,
        work in progress.

   [4]  Jones, G., "Operational Security Requirements for Large
        Internet Service Provider (ISP) IP Network Infrastructure",
        RFC 3871, September 2004.

   [5]  Bradner, S., "Key words for use in RFCs to Indicate
        Requirement Levels", RFC 2119, March 1997.

   [6]  ITU-T Recommendation G.7712/Y.1703, "Architecture and
        specification of data communication network", June 2008.

   [7]  Niven-Jenkins, B. et al, "MPLS-TP Requirements",
        draft-ietf-mpls-tp-requirements, work in progress.

12.2. Informative References

   [8]  Beller, D., et al, "An Inband Data Communication Network For
        the MPLS Transport Profile", draft-ietf-mpls-tp-gach-dcn,
        work in progress.

   [9]  Chisholm, S. and D. Romascanu, "Alarm Management Information
        Base (MIB)", RFC 3877, September 2004.

   [10] ITU-T Recommendation M.20, "Maintenance philosophy for
        telecommunication networks", October 1992.
864 [11] Telcordia, "Network Maintenance: Network Element and 865 Transport Surveillance Messages" (GR-833-CORE), Issue 5, 866 August 2004. 868 [12] Bocci, M. et al, "A Framework for MPLS in Transport 869 Networks", draft-ietf-mpls-tp-framework, work in progress. 871 [13] Bocci, M. et al, "MPLS Generic Associated Channel", RFC 872 5586, June 2009. 874 [14] Harrington, D., "Guidelines for Considering Operations and 875 Management of New Protocols and Protocol Extensions", 876 draft-ietf-opsawg-operations-and-management, work in 877 progress. 879 [15] Mansfield, S. et al, "MPLS-TP Network Management 880 Framework", draft-ietf-mpls-tp-nm-framework, work in 881 progress. 883 [16] Enns, R. et al, "NETCONF Configuration Protocol", 884 draft-ietf-netconf-4741bis, work in progress. 886 [17] McCloghrie, K. et al, "Structure of Management Information 887 Version 2 (SMIv2)", RFC 2578, April 1999. 889 [18] OMG Document formal/04-03-12, "The Common Object Request 890 Broker: Architecture and Specification", Revision 3.0.3. 891 March 12, 2004. 893 [19] Caviglia, D. et al, "Requirements for the Conversion 894 between Permanent Connections and Switched Connections in 895 a Generalized Multiprotocol Label Switching (GMPLS) 896 Network", RFC 5493, April 2009. 898 [20] Caviglia, D. et al, "RSVP-TE Signaling Extension For The 899 Conversion Between Permanent Connections And Soft 900 Permanent Connections In A GMPLS Enabled Transport 901 Network", draft-ietf-ccamp-pc-spc-rsvpte-ext, work in 902 progress. 904 [21] ITU-T Recommendation G.806, "Characteristics of transport 905 equipment - Description methodology and generic 906 functionality", January, 2009. 908 [22] ITU-T Recommendation Y.1731, "OAM functions and mechanisms 909 for Ethernet based networks", February, 2008. 911 [23] ITU-T Recommendation G.8601, "Architecture of service 912 management in multi bearer, multi carrier environment", 913 June 2006. 915 [24] Lam, H., et al, "Alarm Reporting Control Management 916 Information Base (MIB)", RFC 3878, September 2004. 918 [25] ITU-T Recommendation Y.1563, "Ethernet frame transfer and 919 availability performance", January 2009. 921 [26] Handley, M., et al, "Internet Denial-of-Service 922 Considerations", RFC 4732, November 2006. 924 Editors' Addresses 926 Eric Gray 927 Ericsson 928 900 Chelmsford Street 929 Lowell, MA, 01851 930 Phone: +1 978 275 7470 931 Email: Eric.Gray@Ericsson.com 933 Scott Mansfield 934 Ericsson 935 250 Holger Way 936 San Jose CA, 95134 937 +1 724 931 9316 938 EMail: Scott.Mansfield@Ericsson.com 940 Hing-Kam (Kam) Lam 941 Alcatel-Lucent 942 600-700 Mountain Ave 943 Murray Hill, NJ, 07974 944 Phone: +1 908 582 0672 945 Email: hklam@Alcatel-Lucent.com 947 Contributor's Address 949 Adrian Farrel 950 Old Dog Consulting 951 Email: adrian@olddog.co.uk 953 Copyright Statement 955 Copyright (c) 2009 IETF Trust and the persons identified as the 956 document authors. All rights reserved. 958 This document is subject to BCP 78 and the IETF Trust's Legal 959 Provisions Relating to IETF Documents in effect on the date of 960 publication of this document (http://trustee.ietf.org/license- 961 info). Please review these documents carefully, as they 962 describe your rights and restrictions with respect to this 963 document. 965 Acknowledgment 967 Funding for the RFC Editor function is currently provided by the 968 Internet Society. 970 Appendix A- Communication Channel (CCh) Examples 972 A CCh may be realized in a number of ways. 974 1. 
   The CCh may be provided by a link in a physically distinct
   network.  That is, a link that is not part of the transport
   network that is being managed.  For example, the nodes in the
   transport network may be interconnected in two distinct physical
   networks: the transport network and the DCN.

   This is a "physically distinct out-of-band CCh".

   2. The CCh may be provided by a link in the transport network
   that is terminated at the ends of the CCh and which is capable of
   encapsulating and terminating packets of the management
   protocols.  For example, in MPLS-TP a single-hop LSP might be
   established between two adjacent nodes, and that LSP might be
   capable of carrying IP traffic.  Management traffic can then be
   inserted into the link in an LSP parallel to the LSPs that carry
   user traffic.

   This is a "physically shared out-of-band CCh".

   3. The CCh may be supported as its native protocol on the
   interface alongside the transported traffic.  For example, if an
   interface is capable of sending and receiving both MPLS-TP and
   IP, the IP-based management traffic can be sent as native IP
   packets on the interface.

   This is a "shared interface out-of-band CCh".

   4. The CCh may use overhead bytes available on a transport
   connection.  For example, in TDM networks there are overhead
   bytes associated with a data channel, and these can be used to
   provide a CCh.  It is important to note that the use of overhead
   bytes does not reduce the capacity of the associated data
   channel.

   This is an "overhead-based CCh".

   This alternative is not available in MPLS-TP because there is no
   overhead available.

   5. The CCh may be provided by a dedicated channel associated with
   the data link.  For example, the generic associated label (GAL)
   [13] may be used to label CCh traffic being exchanged on a data
   link between adjacent transport nodes, potentially in the absence
   of any data LSP between those nodes.

   This is a "data link associated CCh".

   It is very similar to case 2, and by its nature can only span a
   single hop in the transport network.

   6. The CCh may be provided by a dedicated channel associated with
   a data channel.  For example, in MPLS-TP the GAL [13] may be
   imposed under the top label in the label stack for an MPLS-TP LSP
   to create a channel associated with the LSP that may carry
   management traffic.  This CCh requires the receiver to be capable
   of demultiplexing management traffic from user traffic carried on
   the same LSP by use of the GAL.

   This is a "data channel associated CCh".

   7. The CCh may be provided by mixing the management traffic with
   the user traffic such that it is indistinguishable on the link
   without deep-packet inspection.  In MPLS-TP this could arise if
   there is a data-carrying LSP between two nodes, and management
   traffic is inserted into that LSP.  This approach requires that
   the termination point of the LSP is able to demultiplex the
   management and user traffic.  This might be possible in MPLS-TP
   if the MPLS-TP LSP were carrying IP user traffic.

   This is an "in-band CCh".

   These realizations may be categorized as:

   A. Out-of-fiber, out-of-band (type 1)
   B. In-fiber, out-of-band (types 2, 3, 4, and 5)
   C. In-band (types 6 and 7)

   The MCN and SCN are logically separate networks and may be
   realized by the same DCN or as separate networks.
   In practice, that means that, between any pair of nodes, the MCC
   and SCC may be the same link or separate links.

   It is also important to note that the MCN and SCN do not need to
   be categorised as in-band, out-of-band, etc.  This categorization
   applies only to individual links, and it is possible for some
   nodes to be connected in the MCN or SCN by one type of link, and
   other nodes by other types of link.  Furthermore, a pair of
   adjacent nodes may be connected by multiple links of different
   types.

   Lastly, note that how DCN traffic is divided among the links
   between a pair of adjacent nodes is purely an implementation
   choice.  Parallel links may be deployed for DCN resilience or
   load sharing.  Links may be designated for specific uses: for
   example, some links may carry management traffic while others
   carry control plane traffic, or some links may carry signaling
   protocol traffic while others carry routing protocol traffic.

   It should be noted that the DCN may be a routed network with
   forwarding capabilities, but that this is not a requirement.  The
   ability to support forwarding of management or control traffic
   within the DCN may substantially simplify the topology of the DCN
   and improve its resilience, but does increase the complexity of
   operating the DCN.

   See also RFC 3877 [9], ITU-T M.20 [10], and Telcordia document
   GR-833-CORE [11] for further information.
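   As a non-normative illustration of the GAL-based realizations in
   items 5 and 6 above, the following Python fragment sketches how a
   receiver might distinguish CCh traffic from user traffic by
   examining the label stack.  Only the GAL value (13, from RFC 5586
   [13]) is taken from the references; the function name and the
   treatment of a label stack as a simple list are simplifications
   for illustration only.

      # Illustrative only: classify a received MPLS label stack
      # according to the CCh realizations sketched in Appendix A.
      GAL = 13  # Generic Associated Channel Label, RFC 5586 [13]

      def classify_cch(label_stack):
          """Return a rough CCh category for a label stack
          (top of stack first)."""
          if not label_stack:
              return "not MPLS"
          if label_stack[0] == GAL:
              # GAL directly on the data link (item 5): a CCh between
              # adjacent nodes, with no data LSP involved.
              return "data link associated CCh (type 5)"
          if GAL in label_stack[1:]:
              # GAL carried under an LSP label (item 6): a CCh
              # associated with that LSP.
              return "data channel associated CCh (type 6)"
          # Anything else is ordinary LSP traffic; an in-band CCh
          # (item 7) could only be separated from user traffic by
          # inspecting the payload, which this sketch does not do.
          return "user traffic (or in-band CCh, type 7)"

      print(classify_cch([GAL]))        # type 5
      print(classify_cch([100, GAL]))   # type 6
      print(classify_cch([100, 200]))   # user traffic / possibly type 7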