idnits 2.17.1 draft-ietf-teas-actn-poi-applicability-01.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year == Line 291 has weird spacing: '...nd into a Net...' -- The document date (November 2, 2020) is 1269 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- == Missing Reference: 'RFC 8309' is mentioned on line 443, but not defined == Missing Reference: 'PATH-COMPUTE' is mentioned on line 519, but not defined == Missing Reference: 'RFC8527' is mentioned on line 683, but not defined == Missing Reference: 'RFC8342' is mentioned on line 685, but not defined == Missing Reference: 'RFC8650' is mentioned on line 721, but not defined == Missing Reference: 'RFC8641' is mentioned on line 724, but not defined == Missing Reference: 'RFC7923' is mentioned on line 881, but not defined == Missing Reference: 'UNI-TOPO' is mentioned on line 818, but not defined == Missing Reference: 'TE TOPO' is mentioned on line 947, but not defined == Unused Reference: 'RFC4364' is defined on line 1097, but no explicit reference was found in the text == Unused Reference: 'RFC4761' is defined on line 1100, but no explicit reference was found in the text == Unused Reference: 'RFC6074' is defined on line 1104, but no explicit reference was found in the text == Unused Reference: 'RFC6624' is defined on line 1108, but no explicit reference was found in the text == Unused Reference: 'RFC7209' is defined on line 1112, but no explicit reference was found in the text == Unused Reference: 'RFC7432' is defined on line 1116, but no explicit reference was found in the text == Unused Reference: 'RFC7436' is defined on line 1119, but no explicit reference was found in the text == Unused Reference: 'RFC8214' is defined on line 1122, but no explicit reference was found in the text == Unused Reference: 'RFC8299' is defined on line 1126, but no explicit reference was found in the text == Unused Reference: 'RFC8466' is defined on line 1132, but no explicit reference was found in the text == Unused Reference: 'ACTN-PM' is defined on line 1153, but no explicit reference was found in the text == Unused Reference: 'BGP-L3VPN' is defined on line 1158, but no explicit reference was found in the text Summary: 0 errors (**), 0 flaws (~~), 23 warnings (==), 1 comment (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 1 TEAS Working Group Fabio Peruzzini 2 Internet Draft TIM 3 Intended status: Informational Jean-Francois Bouquier 4 Vodafone 5 Italo Busi 6 Huawei 7 Daniel King 8 Old Dog Consulting 9 Daniele Ceccarelli 10 Ericsson 12 Expires: May 2021 November 2, 2020 14 Applicability of Abstraction and Control of Traffic Engineered 15 Networks (ACTN) to Packet Optical Integration (POI) 17 draft-ietf-teas-actn-poi-applicability-01 19 Status of this Memo 21 This Internet-Draft is submitted in full conformance with the 22 provisions of BCP 78 and BCP 79. 24 Internet-Drafts are working documents of the Internet Engineering 25 Task Force (IETF), its areas, and its working groups. Note that 26 other groups may also distribute working documents as Internet- 27 Drafts. 29 Internet-Drafts are draft documents valid for a maximum of six 30 months and may be updated, replaced, or obsoleted by other documents 31 at any time. It is inappropriate to use Internet-Drafts as 32 reference material or to cite them other than as "work in progress." 34 The list of current Internet-Drafts can be accessed at 35 http://www.ietf.org/ietf/1id-abstracts.txt 37 The list of Internet-Draft Shadow Directories can be accessed at 38 http://www.ietf.org/shadow.html 40 This Internet-Draft will expire on April 9, 2021. 42 Copyright Notice 44 Copyright (c) 2020 IETF Trust and the persons identified as the 45 document authors. All rights reserved. 47 This document is subject to BCP 78 and the IETF Trust's Legal 48 Provisions Relating to IETF Documents 49 (http://trustee.ietf.org/license-info) in effect on the date of 50 publication of this document. Please review these documents 51 carefully, as they describe your rights and restrictions with 52 respect to this document. Code Components extracted from this 53 document must include Simplified BSD License text as described in 54 Section 4.e of the Trust Legal Provisions and are provided without 55 warranty as described in the Simplified BSD License. 57 Abstract 59 This document considers the applicability of Abstraction and Control 60 of TE Networks (ACTN) architecture to Packet Optical Integration 61 (POI)in the context of IP/MPLS and Optical internetworking, 62 identifying the YANG data models being defined by the IETF to 63 support this deployment architecture as well as specific scenarios 64 relevant for Service Providers. 66 Existing IETF protocols and data models are identified for each 67 multi-layer (packet over optical) scenario with particular focus on 68 the MPI (Multi-Domain Service Coordinator to Provisioning Network 69 Controllers Interface)in the ACTN architecture. 71 Table of Contents 73 1. Introduction...................................................3 74 2. Reference architecture and network scenario....................4 75 2.1. L2/L3VPN Service Request in North Bound of MDSC...........8 76 2.2. Service and Network Orchestration........................10 77 2.2.1. Hard Isolation......................................12 78 2.2.2. Shared Tunnel Selection.............................13 79 2.3. IP/MPLS Domain Controller and NE Functions...............13 80 2.4. Optical Domain Controller and NE Functions...............15 81 3. Interface protocols and YANG data models for the MPIs.........15 82 3.1. RESTCONF protocol at the MPIs............................15 83 3.2. YANG data models at the MPIs.............................16 84 3.2.1. Common YANG data models at the MPIs.................16 85 3.2.2. YANG models at the Optical MPIs.....................16 86 3.2.3. YANG data models at the Packet MPIs.................18 87 4. Multi-layer and multi-domain services scenarios...............19 88 4.1. Scenario 1: network and service topology discovery.......19 89 4.1.1. Inter-domain link discovery.........................20 90 4.1.2. IP Link Setup Procedure.............................21 91 4.2. L2VPN/L3VPN establishment................................22 92 5. Security Considerations.......................................22 93 6. Operational Considerations....................................23 94 7. IANA Considerations...........................................23 95 8. References....................................................23 96 8.1. Normative References.....................................23 97 8.2. Informative References...................................24 98 Appendix A. Multi-layer and multi-domain resiliency...........27 99 A.1. Maintenance Window......................................27 100 A.2. Router port failure.....................................27 101 Acknowledgments..................................................28 102 Contributors.....................................................28 103 Authors' Addresses...............................................29 105 1. Introduction 107 The full automation of the management and control of Service 108 Providers transport networks (IP/MPLS, Optical and also Microwave) 109 is key for achieving the new challenges coming now with 5G as well 110 as with the increased demand in terms of business agility and 111 mobility in a digital world. ACTN architecture, by abstracting the 112 network complexity from Optical and IP/MPLS networks towards MDSC 113 and then from MDSC towards OSS/BSS or Orchestration layer through 114 the use of standard interfaces and data models, is allowing a wide 115 range of transport connectivity services that can be requested by 116 the upper layers fulfilling almost any kind of service level 117 requirements from a network perspective (e.g. physical diversity, 118 latency, bandwidth, topology etc.) 120 Packet Optical Integration (POI) is an advanced use case of traffic 121 engineering. In wide area networks, a packet network based on the 122 Internet Protocol (IP) and possibly Multiprotocol Label Switching 123 (MPLS) is typically realized on top of an optical transport network 124 that uses Dense Wavelength Division Multiplexing (DWDM)(and 125 optionally an Optical Transport Network (OTN)layer). In many 126 existing network deployments, the packet and the optical networks 127 are engineered and operated independently of each other. There are 128 technical differences between the technologies (e.g., routers vs. 129 optical switches) and the corresponding network engineering and 130 planning methods (e.g., inter-domain peering optimization in IP vs. 131 dealing with physical impairments in DWDM, or very different time 132 scales). In addition, customers needs can be different between a 133 packet and an optical network, and it is not uncommon to use 134 different vendors in both domains. Last but not least, state-of-the- 135 art packet and optical networks use sophisticated but complex 136 technologies, and for a network engineer it may not be trivial to be 137 a full expert in both areas. As a result, packet and optical 138 networks are often operated in technical and organizational silos. 140 This separation is inefficient for many reasons. Both capital 141 expenditure (CAPEX) and operational expenditure (OPEX) could be 142 significantly reduced by better integrating the packet and the 143 optical network. Multi-layer online topology insight can speed up 144 troubleshooting (e.g., alarm correlation) and network operation 145 (e.g., coordination of maintenance events), multi-layer offline 146 topology inventory can improve service quality (e.g., detection of 147 diversity constraint violations) and multi-layer traffic engineering 148 can use the available network capacity more efficiently (e.g., 149 coordination of restoration). In addition, provisioning workflows 150 can be simplified or automated as needed across layers (e.g, to 151 achieve bandwidth on demand, or to perform maintenance events). 153 ACTN framework enables this complete multi-layer and multi-vendor 154 integration of packet and optical networks through MDSC and packet 155 and optical PNCs. 157 In this document, key scenarios for Packet Optical Integration (POI) 158 are described from the packet service layer perspective. The 159 objective is to explain the benefit and the impact for both the 160 packet and the optical layer, and to identify the required 161 coordination between both layers. Precise definitions of scenarios 162 can help with achieving a common understanding across different 163 disciplines. The focus of the scenarios are IP/MPLS networks 164 operated as client of optical DWDM networks. The scenarios are 165 ordered by increasing level of integration and complexity. For each 166 multi-layer scenario, the document analyzes how to use the 167 interfaces and data models of the ACTN architecture. 169 Understanding the level of standardization and the possible gaps 170 will help to better assess the feasibility of integration between IP 171 and Optical DWDM domain (and optionally OTN layer), in an end-to-end 172 multi-vendor service provisioning perspective. 174 2. Reference architecture and network scenario 176 This document analyses a number of deployment scenarios for Packet 177 and Optical Integration (POI) in which ACTN hierarchy is deployed to 178 control a multi-layer and multi-domain network, with two Optical 179 domains and two Packet domains, as shown in Figure 1: 181 +----------+ 182 | MDSC | 183 +-----+----+ 184 | 185 +-----------+-----+------+-----------+ 186 | | | | 187 +----+----+ +----+----+ +----+----+ +----+----+ 188 | P-PNC 1 | | O-PNC 1 | | O-PNC 2 | | P-PNC 2 | 189 +----+----+ +----+----+ +----+----+ +----+----+ 190 | | | | 191 | \ / | 192 +-------------------+ \ / +-------------------+ 193 CE / PE BR \ | / / BR PE \ CE 194 o--/---o o---\-|-------|--/---o o---\--o 195 \ : : / | | \ : : / 196 \ : PKT Domain 1 : / | | \ : PKT Domain 2 : / 197 +-:---------------:-+ | | +-:---------------:--+ 198 : : | | : : 199 : : | | : : 200 +-:---------------:------+ +-------:---------------:--+ 201 / : : \ / : : \ 202 / o...............o \ / o...............o \ 203 \ Optical Domain 1 / \ Optical Domain 2 / 204 \ / \ / 205 +------------------------+ +--------------------------+ 207 Figure 1 - Reference Scenario 209 The ACTN architecture, defined in [RFC8453], is used to control this 210 multi-domain network where each Packet PNC (P-PNC) is responsible 211 for controlling its IP domain, which can be either an Autonomous 212 System (AS), [RFC1930], or an IGP area within the same operator 213 network, and each Optical PNC (O-PNC) is responsible for controlling 214 its Optical Domain. 216 The routers between IP domains can be either AS Boundary Routers 217 (ASBR) or Area Border Router (ABR): in this document the generic 218 term Border Router (BR) is used to represent either an ASBR or a 219 ABR. 221 The MDSC is responsible for coordinating the whole multi-domain 222 multi-layer (Packet and Optical) network. A specific standard 223 interface (MPI) permits MDSC to interact with the different 224 Provisioning Network Controller (O/P-PNCs). 226 The MPI interface presents an abstracted topology to MDSC hiding 227 technology-specific aspects of the network and hiding topology 228 details depending on the policy chosen regarding the level of 229 abstraction supported. The level of abstraction can be obtained 230 based on P-PNC and O-PNC configuration parameters (e.g. provide the 231 potential connectivity between any PE and any BR in an MPLS-TE 232 network). 234 In the network scenario of Figure 1, it is assumed that: 236 o The domain boundaries between the IP and Optical domains are 237 congruent. In other words, one Optical domain supports 238 connectivity between Routers in one and only one Packet Domain; 240 o Inter-domain links exist only between Packet domains (i.e., 241 between BR routers) and between Packet and Optical domains (i.e., 242 between routers and Optical NEs). In other words, there are no 243 inter-domain links between Optical domains; 245 o The interfaces between the Routers and the Optical NEs are 246 "Ethernet" physical interfaces; 248 o The interfaces between the Border Routers (BRs) are "Ethernet" 249 physical interfaces. 251 This version of the document assumes that the IP Link supported by 252 the Optical network are always intra-AS (PE-BR, intra-domain BR-BR, 253 PE-P, BR-P, or P-P) and that the BRs are co-located and connected by 254 an IP Link supported by an Ethernet physical link. 256 The possibility to setup inter-AS/inter-area IP Links (e.g., 257 inter-domain BR-BR or PE-PE), supported by Optical network, is for 258 further study. 260 Therefore, if inter-domain links between the Optical domains exist, 261 they would be used to support multi-domain Optical services, which 262 are outside the scope of this document. 264 The Optical NEs within the optical domains can be ROADMs or OTN 265 switches, with or without a ROADM. 267 The MDSC in Figure 1 is responsible for multi-domain and multi-layer 268 coordination across multiple Packet and Optical domains, as well as 269 to provide L2/L3VPN services. 271 Although the new technologies (e.g. QSFP-DD ZR 400G) are making 272 convenient to fit the DWDM pluggable interfaces on the Routers, the 273 deployment of those pluggable is not yet widely adopted by the 274 operators. The reason is that most of operators are not yet ready to 275 manage Packet and Transport networks in a unified single domain. As 276 a consequence, this draft is not addressing the unified scenario. 277 This matter will be described in a different draft. 279 From an implementation perspective, the functions associated with 280 MDSC and described in [RFC8453] may be grouped in different ways. 282 1. Both the service- and network-related functions are collapsed into 283 a single, monolithic implementation, dealing with the end customer 284 service requests, received from the CMI (Customer MDSC Interface), 285 and the adaptation to the relevant network models. Such case is 286 represented in Figure 2 of [RFC8453] 287 2. An implementation can choose to split the service-related and the 288 network-related functions in different functional entities, as 289 described in [RFC8309] and in section 4.2 of [RFC8453]. In this 290 case, MDSC is decomposed into a top-level Service Orchestrator, 291 interfacing the customer via the CMI, and into a Network 292 Orchestrator interfacing at the southbound with the PNCs. The 293 interface between the Service Orchestrator and the Network 294 Orchestrator is not specified in [RFC8453]. 295 3. Another implementation can choose to split the MDSC functions 296 between an H-MDSC responsible for packet-optical multi-layer 297 coordination, interfacing with one Optical L-MDSC, providing 298 multi-domain coordination between the O-PNCs and one Packet 299 L-MDSC, providing multi-domain coordination betweeh the P-PNCs 300 (see for example Figure 9 of [RFC8453]). 301 4. Another implementation can also choose to combine the MDSC and the 302 P-PNC functions together. 304 Please note that in current service provider's network deployments, 305 at the North Bound of the MDSC, instead of a CNC, typically there is 306 an OSS/Orchestration layer. In this case, the MDSC would implement 307 only the Network Orchestration functions, as in [RFC8309] and 308 described in point 2 above. In this case, the MDSC is dealing with 309 the network services requests received from the OSS/Orchestration 310 layer. 312 [Editors'note:] Check for a better term to define the network 313 services. It may be worthwhile defining what are the customer and 314 network services. 316 The OSS/Orchestration layer is a key part of the architecture 317 framework for a service provider: 319 o to abstract (through MDSC and PNCs) the underlying transport 320 network complexity to the Business Systems Support layer 322 o to coordinate NFV, Transport (e.g. IP, Optical and Microwave 323 networks), Fixed Acess, Core and Radio domains enabling full 324 automation of end-to-end services to the end customers. 326 o to enable catalogue-driven service provisioning from external 327 applications (e.g. Customer Portal for Enterprise Business 328 services) orchestrating the design and lifecycle management of 329 these end-to-end transport connectivity services, consuming IP 330 and/or Optical transport connectivity services upon request. 332 The functionality of the OSS/Orchestration layer as well as the 333 interface toward the MDSC are usually operator-specific and outside 334 the scope of this draft. This document assumes that the 335 OSS/Orchestrator requests MDSC to setup L2VPN/L3VPN services through 336 mechanisms which are outside the scope of the draft. 338 There are two main cases when MDSC coordination of underlying PNCs 339 in POI context is initiated: 341 o Initiated by a request from the OSS/Orchestration layer to setup 342 L2VPN/L3VPN services that requires multi-layer/multi-domain 343 coordination. 345 o Initiated by the MDSC itself to perform multi-layer/multi-domain 346 optimizations and/or maintenance works, beyond discovery (e.g. 347 rerouting LSPs with their associated services when putting a 348 resource, like a fibre, in maintenance mode during a maintenance 349 window). Different to service fulfillment, the workflows then are 350 not related at all to a service provisioning request being 351 received from the OSS/Orchestration layer. 353 Above two MDSC workflow cases are in the scope of this draft or in 354 future versions. 356 2.1. L2/L3VPN Service Request in North Bound of MDSC 358 As explained in section 2, the OSS/Orchestration layer can request 359 the MDSC to setup of L2/L3VPN services (with or without TE 360 requirements). 362 Although the interface between the OSS/Orchestration layer is 363 usually operator-specific, ideally it would be using a RESTCONF/YANG 364 interface with more abstracted version of the MPI YANG data models 365 used for network configuration (e.g. L3NM, L2NM). 367 Figure 2 shows an example of a possible control flow between the 368 OSS/Orchestration layer and the MDSC to instantiate L2/L3VPN 369 services, using the YANG models under definition in [VN], [L2NM], 370 [L3NM] and [TSM]. 372 +-------------------------------------------+ 373 | | 374 | OSS/Orchestration layer | 375 | | 376 +-----------------------+-------------------+ 377 | 378 1.VN 2. L2/L3NM & | ^ 379 | TSM | | 380 | | | | 381 | | | | 382 v v | 3. Update VN 383 | 384 +-----------------------+-------------------+ 385 | | 386 | MDSC | 387 | | 388 +-------------------------------------------+ 390 Figure 2 Service Request Process 392 o The VN YANG model [VN], whose primary focus is the CMI, can also 393 be used to provide VN Service configuration from a orchestrated 394 connectivity service point of view, when the L2/L3VPN service has 395 TE requirements. This model is not used to setup L2/L3VPN service 396 with no TE requirements. 398 o It provides the profile of VN in terms of VN members, each of 399 which corresponds to an edge-to-edge link between customer 400 end-points (VNAPs). It also provides the mappings between the 401 VNAPs with the LTPs and between the connectivity matrix with 402 the VN member from which the associated traffic matrix (e.g., 403 bandwidth, latency, protection level, etc.) of VN member is 404 expressed (i.e., via the TE-topology's connectivity matrix). 406 o The model also provides VN-level preference information 407 (e.g., VN member diversity) and VN-level admin-status and 408 operational-status. 410 o The L2NM YANG model [L2NM], whose primary focus is the MPI, can 411 also be used to provide L2VPN service configuration and site 412 information, from a orchestrated connectivity service point of 413 view. 415 o The L3NM YANG model [L3NM], whose primary focus is the MPI, can 416 also be used to provide all L3VPN service configuration and site 417 information, from a orchestrated connectivity service point of 418 view. 420 o The TE & Service Mapping YANG model [TSM] provides TE-service 421 mapping as well as site mapping. 423 o TE-service mapping provides the mapping between a L2/L3VPN 424 instance and the corresponding VN instances. 426 o The TE-service mapping also provides the service mapping 427 requirement type as to how each L2/L3VPN/VN instance is 428 created with respect to the underlay TE tunnels (e.g., 429 whether they require a new and isolated set of TE underlay 430 tunnels or not). See Section 2.2 for detailed discussion on 431 the mapping requirement types. 433 o Site mapping provides the site reference information across 434 L2/L3VPN Site ID, VN Access Point ID, and the LTP of the 435 access link. 437 2.2. Service and Network Orchestration 439 From a functional standpoint, MDSC represented in Figure 2 440 interfaces with the OSS/Orchestration layer and decouples L2/L3VPN 441 service configuration functions from network configuration 442 functions. Therefore in this document the MDSC performs the 443 functions of the Network Orchestrator, as defined in [RFC 8309]. 445 One of the important MDSC functions is to identify which TE Tunnels 446 should carry the L2/L3VPN traffic (e.g., from TE & Service Mapping 447 configuration) and to relay this information to the P-PNCs, to 448 ensure the PEs' forwarding tables (e.g., VRF) are properly 449 populated, according to the TE binding requirement for the L2/L3VPN. 451 TE binding requirement types [TSM] are: 453 1. Hard Isolation with deterministic latency: The L2/L3VPN service 454 requires a set of dedicated TE Tunnels providing deterministic 455 latency performances and that cannot be not shared with other 456 services, nor compete for bandwidth with other Tunnels. 458 2. Hard Isolation: This is similar to the above case without 459 deterministic latency requirements. 461 3. Soft Isolation: The L2/L3VPN service requires a set of dedicated 462 MPLS-TE tunnels which cannot be shared with other services, but 463 which could compete for bandwidth with other Tunnels. 465 4. Sharing: The L2/L3VPN service allows sharing the MPLS-TE Tunnels 466 supporting it with other services. 468 For the first three types, there could be additional TE binding 469 requirements with respect to different VN members of the same VN (on 470 how different VN members, belonging to the same VN, can share or not 471 network resources). For the first two cases, VN members can be 472 hard-isolated, soft-isolated, or shared. For the third case, VN 473 members can be soft-isolated or shared. 475 In order to fulfill the the L2/L3VPN end-to-end TE requirements, 476 including the TE binding requirements, the MDSC needs to perform 477 multi-layer/multi-domain path computation to select the BRs, the 478 intra-domain MPLS-TE Tunnels and the intra-domain Optical Tunnels. 480 Depending on the knowledge that MDSC has of the topology and 481 configuration of the underlying network domains, three models for 482 performing path computation are possible: 484 1. Summarization: MDSC has an abstracted TE topology view of all of 485 the underlying domains, both packet and optical. MDSC does not 486 have enough TE topology information to perform 487 multi-layer/multi-domain path computation. Therefore MDSC 488 delegates the P-PNCs and O-PNCs to perform a local path 489 computation within their controlled domains and it uses the 490 information returned by the P-PNCs and O-PNCs to compute the 491 optimal multi-domain/multi-layer path. 492 This model presents an issue to P-PNC, which does not have the 493 capability of performing a single-domain/multi-layer path 494 computation (that is, P-PNC does not have any possibility to 495 retrieve the topology/configuration information from the Optical 496 controller). A possible solution could be to include a CNC 497 function in the P-PNC to request the MDSC multi-domain Optical 498 path computation, as shown in Figure 10 of [RFC8453]. 500 2. Partial summarization: MDSC has full visibility of the TE 501 topology of the packet network domains and an abstracted view of 502 the TE topology of the optical network domains. 503 MDSC then has only the capability of performing multi- 504 domain/single-layer path computation for the packet layer (the 505 path can be computed optimally for the two packet domains). 506 Therefore MDSC still needs to delegate the O-PNCs to perform 507 local path computation within their respective domains and it 508 uses the information received by the O-PNCs, together with its TE 509 topology view of the multi-domain packet layer, to perform 510 multi-layer/multi-domain path computation. 511 The role of P-PNC is minimized, i.e. is limited to management. 513 3. Full knowledge: MDSC has the complete and enough detailed view of 514 the TE topology of all the network domains (both optical and 515 packet). In such case MDSC has all the information needed to 516 perform multi-domain/multi-layer path computation, without 517 relying on PNCs. 518 This model may present, as a potential drawback, scalability 519 issues and, as discussed in section 2.2. of [PATH-COMPUTE], 520 performing path computation for optical networks in the MDSC is 521 quite challenging because the optimal paths depend also on 522 vendor-specific optical attributes (which may be different in the 523 two domains if they are provided by different vendors). 525 The current version of this draft assumes that MDSC supports at 526 least model #2 (Partial summarization). 528 [Note: check with opeerators for some references on real deployment] 530 2.2.1. Hard Isolation 532 For example, when "Hard Isolation with or w/o deterministic latency" 533 TE binding requirement is applied for a L2/L3VPN, new Optical 534 Tunnels need to be setup to support dedicated IP Links between PEs 535 and BRs. 537 The MDSC needs to identify the set of IP/MPLS domains and their BRs. 538 This requires the MDSC to request each O-PNC to compute the 539 intra-domain optical paths between each PEs/BRs pairs. 541 When requesting optical path computation to the O-PNC, the MDSC 542 needs to take into account the inter-layer peering points, such as 543 the interconnections between the PE/BR nodes and the edge Optical 544 nodes (e.g., using the inter-layer lock or the transitional link 545 information, defined in [RFC8795]). 547 When the optimal multi-layer/multi-domain path has been computed, 548 the MDSC requests each O-PNC to setup the selected Optical Tunnels 549 and P-PNC to setup the intra-domain MPLS-TE Tunnels, over the 550 selected Optical Tunnels. MDSC also properly configures its BGP 551 speakers and PE/BR forwarding tables to ensure that the VPN traffic 552 is properly forwarded. 554 2.2.2. Shared Tunnel Selection 556 In case of shared tunnel selection, the MDSC needs to check if there 557 is multi-domain path which can support the L2/L3VPN end-to-end TE 558 service requirements (e.g., bandwidth, latency, etc.) using existing 559 intra-domain MPLS-TE tunnels. 561 If such a path is found, the MDSC selects the optimal path from the 562 candidate pool and request each P-PNC to setup the L2/L3VPN service 563 using the selected intra-domain MPLS-TE tunnel, between PE/BR nodes. 565 Otherwise, the MDSC should detect if the multi-domain path can be 566 setup using existing intra-domain MPLS-TE tunnels with modifications 567 (e.g., increasing the tunnel bandwidth) or setting up new intra- 568 domain MPLS-TE tunnel(s). 570 The modification of an existing MPLS-TE Tunnel as well as the setup 571 of a new MPLS-TE Tunnel may also require multi-layer coordination 572 e.g., in case the available bandwidth of underlying Optical Tunnels 573 is not sufficient. Based on multi-domain/multi-layer path 574 computation, the MDSC can decide for example to modify the bandwidth 575 of an existing Optical Tunnel (e.g., ODUflex bandwidth increase) or 576 to setup new Optical Tunnels to be used as additional LAG members of 577 an existing IP Link or as new IP Links to re-route the MPLS-TE 578 Tunnel. 580 In all the cases, the labels used by the end-to-end tunnel are 581 distributed in the PE and BR nodes by BGP. The MDSC is responsible 582 to configure the BGP speakeers in each P-PNC, if needed. 584 2.3. IP/MPLS Domain Controller and NE Functions 586 IP/MPLS networks are assumed to have multiple domains, where each 587 domain, corresponding to either an IGP area or an Autonomous System 588 (AS) within the same operator network, is controlled by an IP/MPLS 589 domain controller (P-PNC). 591 Among the functions of the P-PNC, there are the setup or 592 modification of the intra-domain MPLS-TE Tunnels, between PEs and 593 BRs, and the configuration of the VPN services, such as the VRF in 594 the PE nodes, as shown in Figure 3: 596 +------------------+ +------------------+ 597 | | | | 598 | P-PNC1 | | P-PNC2 | 599 | | | | 600 +--|-----------|---+ +--|-----------|---+ 601 | 1.Tunnel | 2.VPN | 1.Tunnel | 2.VPN 602 | Config | Provisioning | Config | Provisioning 603 V V V V 604 +---------------------+ +---------------------+ 605 CE / PE tunnel 1 BR\ / BR tunnel 2 PE \ CE 606 o--/---o..................o--\-----/--o..................o---\--o 607 \ / \ / 608 \ Domain 1 / \ Domain 2 / 609 +---------------------+ +---------------------+ 611 End-to-end tunnel 612 <-------------------------------------------------> 614 Figure 3 IP/MPLS Domain Controller & NE Functions 616 It is assumed that BGP is running in the inter-domain IP/MPLS 617 networks for L2/L3VPN and that the P-PNC controller is also 618 responsible for configuring the BGP speakers within its control 619 domain, if necessary. 621 The BGP would be responsible for the label distribution of the 622 end-to-end tunnel on PE and BR nodes. The MDSC is responsible for 623 the selection of the BRs and of the intra-domain MPLS-TE Tunnels 624 between PE/BR nodes. 626 If new MPLS-TE Tunnels are needed or mofications (e.g., bandwidth 627 ingrease) to existing MPLS_TE Tunnels are needed, as outlined in 628 section 2.2, the MDSC would request their setup or modifications to 629 the P-PNCs (step 1 in Figure 3). Then the MDSC would request the 630 P-PNC to configure the VPN, including the selection of the 631 intra-domain TE Tunnel (step 2 in Figure 3). 633 The P-PNC should configure, using mechanisms outside the scope of 634 this document, the ingress PE forwarding table, e.g., the VRF, to 635 forward the VPN traffic, received from the CE, with the following 636 three labels: 638 o VPN label: assigned by the egress PE and distributed by BGP; 640 o end-to-end LSP label: assigned by the egress BR, selected by the 641 MDSC, and distributed by BGP; 643 o MPLS-TE tunnel label, assigned by the next hop P node of the 644 tunnel selected by the MDSC and distributed by mechanism internal 645 to the IP/MPLS domain (e.g., RSVP-TE). 647 2.4. Optical Domain Controller and NE Functions 649 Optical network provides the underlay connectivity services to 650 IP/MPLS networks. The coordination of Packet/Optical multi-layer is 651 done by the MDSC, as shown in Figure 1. 653 The O-PNC is responsible to: 655 o provide to the MDSC an abstract TE topology view of its 656 underlying optical network resources; 658 o perform single-domain local path computation, when requested by 659 the MDSC; 661 o perform Optical Tunnel setup, when requested by the MDSC. 663 The mechanisms used by O-PNC to perform intra-domain topology 664 discovery and path setup are usually vendor-speicific and outside 665 the scope of this document. 667 Depending on the type of optical network, TE topology abstraction, 668 path compution and path setup can be single-layer (either OTN or 669 WDM) or multi-layer OTN/WDM. In the latter case, the multi-layer 670 coordination between the OTN and WDM layers is performed by the 671 O-PNC. 673 3. Interface protocols and YANG data models for the MPIs 675 This section describes general assumptions which are applicable at 676 all the MPI interfaces, between each PNC (Optical or Packet) and the 677 MDSC, and also to all the scenarios discussed in this document. 679 3.1. RESTCONF protocol at the MPIs 681 The RESTCONF protocol, as defined in [RFC8040], using the JSON 682 representation, defined in [RFC7951], is assumed to be used at these 683 interfaces. Extensions to RESTCONF, as defined in [RFC8527], to be 684 compliant with Network Management Datastore Architecture (NMDA) 685 defined in [RFC8342], are assumed to be used as well at these MPI 686 interfaces and also at CMI interfaces. 688 3.2. YANG data models at the MPIs 690 The data models used on these interfaces are assumed to use the YANG 691 1.1 Data Modeling Language, as defined in [RFC7950]. 693 3.2.1. Common YANG data models at the MPIs 695 As required in [RFC8040], the "ietf-yang-library" YANG module 696 defined in [RFC8525] is used to allow the MDSC to discover the set 697 of YANG modules supported by each PNC at its MPI. 699 Both Optical and Packet PNCs use the following common topology YANG 700 models at the MPI to report their abstract topologies: 702 o The Base Network Model, defined in the "ietf-network" YANG module 703 of [RFC8345] 705 o The Base Network Topology Model, defined in the "ietf-network- 706 topology" YANG module of [RFC8345], which augments the Base 707 Network Model 709 o The TE Topology Model, defined in the "ietf-te-topology" YANG 710 module of [RFC8795], which augments the Base Network Topology 711 Model with TE specific information. 713 These common YANG models are generic and augmented by technology- 714 specific YANG modules as described in the following sections. 716 Both Optical and Packet PNCs must use the following common 717 notifications YANG models at the MPI so that any network changes can 718 be reported almost in real-time to MDSC by the PNCs: 720 o Dynamic Subscription to YANG Events and Datastores over RESTCONF 721 as defined in [RFC8650] 723 o Subscription to YANG Notifications for Datastores updates as 724 defined in [RFC8641] 726 PNCs and MDSCs must be compliant with subscription requirements as 727 stated in [RFC7923]. 729 3.2.2. YANG models at the Optical MPIs 731 The Optical PNC also uses at least the following technology-specific 732 topology YANG models, providing WDM and Ethernet technology-specific 733 augmentations of the generic TE Topology Model: 735 o The WSON Topology Model, defined in the "ietf-wson-topology" YANG 736 modules of [WSON-TOPO], or the Flexi-grid Topology Model, defined 737 in the "ietf-flexi-grid-topology" YANG module of [Flexi-TOPO]. 739 o Optionally, when the OTN layer is used, the OTN Topology Model, 740 as defined in the "ietf-otn-topology" YANG module of [OTN-TOPO]. 742 o The Ethernet Topology Model, defined in the "ietf-eth-te- 743 topology" YANG module of [CLIENT-TOPO]. 745 o Optionally, when the OTN layer is used, the network data model 746 for L1 OTN services (e.g. an Ethernet transparent service) as 747 defined in "ietf-trans-client-service" YANG module of draft-ietf- 748 ccamp-client-signal-yang [CLIENT-SIGNAL]. 750 o The WSON Topology Model or, alternatively, the Flexi-grid 751 Topology model is used to report the DWDM network topology (e.g., 752 ROADMs and links) depending on whether the DWDM optical network 753 is based on fixed grid or flexible-grid. 755 The Ethernet Topology is used to report the access links between the 756 IP routers and the edge ROADMs. 758 The optical PNC uses at least the following YANG models: 760 o The TE Tunnel Model, defined in the "ietf-te" YANG module of 761 [TE-TUNNEL] 763 o The WSON Tunnel Model, defined in the "ietf-wson-tunnel" YANG 764 modules of [WSON-TUNNEL], or the Flexi-grid Media Channel Model, 765 defined in the "ietf-flexi-grid-media-channel" YANG module of 766 [Flexi-MC] 768 o Optionally, when the OTN layer is used, the OTN Tunnel Model, 769 defined in the "ietf-otn-tunnel" YANG module of [OTN-TUNNEL]. 771 o The Ethernet Client Signal Model, defined in the "ietf-eth-tran- 772 service" YANG module of [CLIENT-SIGNAL]. 774 The TE Tunnel model is generic and augmented by technology-specific 775 models such as the WSON Tunnel Model and the Flexi-grid Media 776 Channel Model. 778 The WSON Tunnel Model or, alternatively, the Flexi-grid Media 779 Channel Model are used to setup connectivity within the DWDM network 780 depending on whether the DWDM optical network is based on fixed grid 781 or flexible-grid. 783 The Ethernet Client Signal Model is used to configure the steering 784 of the Ethernet client traffic between Ethernet access links and TE 785 Tunnels, which in this case could be either WSON Tunnels or 786 Flexi-Grid Media Channels. This model is generic and applies to any 787 technology-specific TE Tunnel: technology-specific attributes are 788 provided by the technology-specific models which augment the generic 789 TE-Tunnel Model. 791 3.2.3. YANG data models at the Packet MPIs 793 The Packet PNC also uses at least the following technology-specific 794 topology YANG models, providing IP and Ethernet technology-specific 795 augmentations of the generic Topology Models described in section 796 3.2.1: 798 o The L3 Topology Model, defined in the "ietf-l3-unicast-topology" 799 YANG modules of [RFC8346], which augments the Base Network 800 Topology Model 802 o The L3 specific data model including extended TE attributes (e.g. 803 performance derived metrics like latency), defined in "ietf-l3- 804 te-topology" and in "ietf-te-topology-packet" in draft-ietf-teas- 805 l3-te-topo [L3-TE-TOPO] 807 o The Ethernet Topology Model, defined in the "ietf-eth-te- 808 topology" YANG module of [CLIENT-TOPO], which augments the TE 809 Topology Model 811 The Ethernet Topology Model is used to report the access links 812 between the IP routers and the edge ROADMs as well as the 813 inter-domain links between ASBRs, while the L3 Topology Model is 814 used to report the IP network topology (e.g., IP routers and links). 816 o The User Network Interface (UNI) Topology Model, being defined in 817 the "ietf-uni-topology" module of the draft-ogondio-opsawg-uni- 818 topology [UNI-TOPO] which augment "ietf-network" module defined 819 in [RFC8345] adding service attachment points to the nodes to 820 which L2VPN/L3VPN IP/MPLS services can be attached. 822 o L3VPN network data model defined in "ietf-l3vpn-ntw" module of 823 draft-ietf-opsawg-l3sm-l3nm [L3NM] used for non-ACTN MPI for 824 L3VPN service provisioning 826 o L2VPN network data model defined in "ietf-l2vpn-ntw" module of 827 draft-ietf-barguil-opsawg-l2sm-l2nm [L2NM] used for non-ACTN MPI 828 for L2VPN service provisioning 830 [Editor's note:] Add YANG models used for tunnel and service 831 configuration. 833 4. Multi-layer and multi-domain services scenarios 835 Multi-layer and multi-domain scenarios, based on reference network 836 described in section 2, and very relevant for Service Providers, are 837 described in the next sections. For each scenario existing IETF 838 protocols and data models are identified with particular focus on 839 the MPI in the ACTN architecture. Non ACTN IETF data models required 840 for L2/L3VPN service provisioning between MDSC and IP PNCs are also 841 identified. 843 4.1. Scenario 1: network and service topology discovery 845 In this scenario, the MSDC needs to discover through the underlying 846 PNCs, the network topology, at both WDM and IP layers, in terms of 847 nodes (NEs) and links, including inter AS domain links as well as 848 cross-layer links but also in terms of tunnels (MPLS or SR paths in 849 IP layer and OCh and optionally ODUk tunnels in optical layer).MDSC 850 discovers also the IP/MPLS transport services (L2VPN/L3VPN) 851 deployed, both intra-domain and inter-domain wise. 853 Each PNC provides to the MDSC an abstracted or full topology view of 854 the WDM or the IP topology of the domain it controls. This topology 855 can be abstracted in the sense that some detailed NE information is 856 hidden at the MPI, and all or some of the NEs and related physical 857 links are exposed as abstract nodes and logical (virtual) links, 858 depending on the level of abstraction the user requires. This 859 information is key to understand both the inter-AS domain links 860 (seen by each controller as UNI interfaces but as I-NNI interfaces 861 by the MDSC) as well as the cross-layer mapping between IP and WDM 862 layer. 864 The MDSC also maintains an up-to-date network database of both IP 865 and WDM layers (and optionally OTN layer) through the use of IETF 866 notifications through MPI with the PNCs when any topology change 867 occurs. It should be possible also to correlate information coming 868 from IP and WDM layers (e.g.: which port, lambda/OTSi, direction is 869 used by a specific IP service on the WDM equipment) 870 In particular, For the cross-layer links it is key for MDSC to be 871 able to correlate automatically the information from the PNC network 872 databases about the physical ports from the routers (single link or 873 bundle links for LAG) to client ports in the ROADM. 875 It should be possible at MDSC level to easily correlate WDM and IP 876 layers alarms to speed-up troubleshooting 878 Alarms and event notifications are required between MDSC and PNCs so 879 that any network changes are reported almost in real-time to the MDSC 880 (e.g. NE or link failure, MPLS tunnel switched from main to backup 881 path etc.). As specified in [RFC7923] MDSC must be able to subscribe 882 to specific objects from PNC YANG datastores for notifications. 884 4.1.1. Inter-domain link discovery 886 In the reference network of Figure 1, there are two types of 887 inter-domain links: 889 o Links between two IP domains (ASes) 891 o Links between an IP router and a ROADM 893 Both types of links are Ethernet physical links. 895 The inter-domain link information is reported to the MDSC by the two 896 adjacent PNCs, controlling the two ends of the inter-domain link. 897 The MDSC needs to understa how to merge the these inter-domain 898 Ethernet links together. 900 This document considers the following two options for discovering 901 inter-domain links: 903 1. Static configuration 905 2. LLDP [IEEE 802.1AB] automatic discovery 907 Other options are possible but not described in this document. 909 The MDSC can understand how to merge these inter-domain links 910 together using the plug-id attribute defined in the TE Topology 911 Model [RFC8795], as described in as described in section 4.3 of 912 [RFC8795]. 914 A more detailed description of how the plug-id can be used to 915 discover inter-domain link is also provided in section 5.1.4 of 916 [TNBI]. 918 Both types of inter-domain links are discovered using the plug-id 919 attributes reported in the Ethernet Topologies exposed by the two 920 adjacent PNCs. The MDSC can also discover an inter-domain IP 921 link/adjacency between the two IP LTPs, reported in the IP 922 Topologies exposed by the two adjacent P-PNCs, supported by the two 923 ETH LTPs of an Ethernet Link discovered between these two P-PNCs. 925 The static configuration requires an administrative burden to 926 configure network-wide unique identifiers: it is therefore more 927 viable for inter-AS links. For the links between the IP routers and 928 the Optical NEs, the automatic discovery solution based on LLDP 929 snooping is preferable when LLDP snooping is supported by the 930 Optical NEs. 932 As outlined in [TNBI], the encoding of the plug-id namespace as well 933 as of the LLDP information within the plug-id value is 934 implementation specific and needs to be consistent across all the 935 PNCs. 937 4.1.2. IP Link Setup Procedure 939 The MDSC requires the O-PNC to setup a WDM Tunnel (either a WSON 940 Tunnel or a Flexi grid Tunnel) within the DWDM network between the 941 two Optical Transponders (OTs) associated with the two access links. 943 The Optical Transponders are reported by the O-PNC as Trail 944 Termination Points (TTPs), defined in [TE TOPO], within the WDM 945 Topology. The association between the Ethernet access link and the 946 WDM TTP is reported by the Inter Layer Lock (ILL) identifiers, 947 defined in [TE TOPO], reported by the O PNC within the Ethernet 948 Topology and WDM Topology. 950 The MDSC also requires the O-PNC to steer the Ethernet client 951 traffic between the two access Ethernet Links over the WDM Tunnel. 953 After the WDM Tunnel has been setup and the client traffic steering 954 configured, the two IP routers can exchange Ethernet packets between 955 themselves, including LLDP messages. 957 If LLDP [IEEE 802.1AB] is used between the two routers, the P PNC 958 can automatically discover the IP Link being set up by the MDSC. The 959 IP LTPs terminating this IP Link are supported by the ETH LTPs 960 terminating the two access links. 962 Otherwise, the MDSC needs to require the P PNC to configure an IP 963 Link between the two routers: the MDSC also configures the two ETH 964 LTPs which support the two IP LTPs terminating this IP Link. 966 4.2. L2VPN/L3VPN establishment 968 To be added 970 [Editor's Note] What mechanism would convey on the interface to the 971 IP/MPLS domain controllers as well as on the SBI (between IP/MPLS 972 domain controllers and IP/MPLS PE routers) the TE binding policy 973 dynamically for the L3VPN? Typically, VRF is the function of the 974 device that participate MP-BGP in MPLS VPN. With current MP-BGP 975 implementation in MPLS VPN, the VRF's BGP next hop is the 976 destination PE and the mapping to a tunnel (either an LDP or a BGP 977 tunnel) toward the destination PE is done by automatically without 978 any configuration. It is to be determined the impact on the PE VRF 979 operation when the tunnel is an optical bypass tunnel which does not 980 participate either LDP or BGP. 982 New text to answer the yellow part: 984 The MDSC Network-related function will then coordinate with the PNCs 985 involved in the process to provide the provisioning information 986 through ACTN MDSC to PNC (MPI) interface. The relevant data models 987 used at the MPI may be in the form of L3NM, L2NM or others and are 988 exchanged through MPI API calls. Through this process MDSC Network- 989 related functions provide the configuration information to realize a 990 VPN service to PNCs. For example, this process will inform PNCs on 991 what PE routers compose a L3VPN, the topology requested, the VPN 992 attributes, etc. 994 At the end of the process PNCs will deliver the actual configuration 995 to the devices (either physical or virtual), through the ACTN 996 Southbound Interface (SBI). In this case the configuration policies 997 may be exchanged using a Netconf session delivering configuration 998 commands associated to device-specific data models (e.g. BGP[], QOS 999 [], etc.). 1001 Having the topology information of the network domains under their 1002 control, PNCs will deliver all the information necessary to create, 1003 update, optimize or delete the tunnels connecting the PE nodes as 1004 requested by the VPN instantiation. 1006 5. Security Considerations 1008 Several security considerations have been identified and will be 1009 discussed in future versions of this document. 1011 6. Operational Considerations 1013 Telemetry data, such as the collection of lower-layer networking 1014 health and consideration of network and service performance from POI 1015 domain controllers, may be required. These requirements and 1016 capabilities will be discussed in future versions of this document. 1018 7. IANA Considerations 1020 This document requires no IANA actions. 1022 8. References 1024 8.1. Normative References 1026 [RFC7950] Bjorklund, M. et al., "The YANG 1.1 Data Modeling 1027 Language", RFC 7950, August 2016. 1029 [RFC7951] Lhotka, L., "JSON Encoding of Data Modeled with YANG", RFC 1030 7951, August 2016. 1032 [RFC8040] Bierman, A. et al., "RESTCONF Protocol", RFC 8040, January 1033 2017. 1035 [RFC8345] Clemm, A., Medved, J. et al., "A Yang Data Model for 1036 Network Topologies", RFC8345, March 2018. 1038 [RFC8346] Clemm, A. et al., "A YANG Data Model for Layer 3 1039 Topologies", RFC8346, March 2018. 1041 [RFC8453] Ceccarelli, D., Lee, Y. et al., "Framework for Abstraction 1042 and Control of TE Networks (ACTN)", RFC8453, August 2018. 1044 [RFC8525] Bierman, A. et al., "YANG Library", RFC 8525, March 2019. 1046 [RFC8795] Liu, X. et al., "YANG Data Model for Traffic Engineering 1047 (TE) Topologies", RFC8795, August 2020. 1049 [IEEE 802.1AB] IEEE 802.1AB-2016, "IEEE Standard for Local and 1050 metropolitan area networks - Station and Media Access 1051 Control Connectivity Discovery", March 2016. 1053 [WSON-TOPO] Lee, Y. et al., " A YANG Data Model for WSON (Wavelength 1054 Switched Optical Networks)", draft-ietf-ccamp-wson-yang, 1055 work in progress. 1057 [Flexi-TOPO] Lopez de Vergara, J. E. et al., "YANG data model for 1058 Flexi-Grid Optical Networks", draft-ietf-ccamp-flexigrid- 1059 yang, work in progress. 1061 [OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical 1062 Transport Network Topology", draft-ietf-ccamp-otn-topo- 1063 yang, work in progress. 1065 [CLIENT-TOPO] Zheng, H. et al., "A YANG Data Model for Client-layer 1066 Topology", draft-zheng-ccamp-client-topo-yang, work in 1067 progress. 1069 [L3-TE-TOPO] Liu, X. et al., "YANG Data Model for Layer 3 TE 1070 Topologies", draft-ietf-teas-yang-l3-te-topo, work in 1071 progress. 1073 [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic 1074 Engineering Tunnels and Interfaces", draft-ietf-teas-yang- 1075 te, work in progress. 1077 [WSON-TUNNEL] Lee, Y. et al., "A Yang Data Model for WSON Tunnel", 1078 draft-ietf-ccamp-wson-tunnel-model, work in progress. 1080 [Flexi-MC] Lopez de Vergara, J. E. et al., "YANG data model for 1081 Flexi-Grid media-channels", draft-ietf-ccamp-flexigrid- 1082 media-channel-yang, work in progress. 1084 [OTN-TUNNEL] Zheng, H. et al., "OTN Tunnel YANG Model", draft- 1085 ietf-ccamp-otn-tunnel-model, work in progress. 1087 [CLIENT-SIGNAL] Zheng, H. et al., "A YANG Data Model for Transport 1088 Network Client Signals", draft-ietf-ccamp-client-signal- 1089 yang, work in progress. 1091 8.2. Informative References 1093 [RFC1930] J. Hawkinson, T. Bates, "Guideline for creation, 1094 selection, and registration of an Autonomous System (AS)", 1095 RFC 1930, March 1996. 1097 [RFC4364] E. Rosen and Y. Rekhter, "BGP/MPLS IP Virtual Private 1098 Networks (VPNs)", RFC 4364, February 2006. 1100 [RFC4761] K. Kompella, Ed., Y. Rekhter, Ed., "Virtual Private LAN 1101 Service (VPLS) Using BGP for Auto-Discovery and 1102 Signaling", RFC 4761, January 2007. 1104 [RFC6074] E. Rosen, B. Davie, V. Radoaca, and W. Luo, "Provisioning, 1105 Auto-Discovery, and Signaling in Layer 2 Virtual Private 1106 Networks (L2VPNs)", RFC 6074, January 2011. 1108 [RFC6624] K. Kompella, B. Kothari, and R. Cherukuri, "Layer 2 1109 Virtual Private Networks Using BGP for Auto-Discovery and 1110 Signaling", RFC 6624, May 2012. 1112 [RFC7209] A. Sajassi, R. Aggarwal, J. Uttaro, N. Bitar, W. 1113 Henderickx, and A. Isaac, "Requirements for Ethernet VPN 1114 (EVPN)", RFC 7209, May 2014. 1116 [RFC7432] A. Sajassi, Ed., et al., "BGP MPLS-Based Ethernet VPN", 1117 RFC 7432, February 2015. 1119 [RFC7436] H. Shah, E. Rosen, F. Le Faucheur, and G. Heron, "IP-Only 1120 LAN Service (IPLS)", RFC 7436, January 2015. 1122 [RFC8214] S. Boutros, A. Sajassi, S. Salam, J. Drake, and J. 1123 Rabadan, "Virtual Private Wire Service Support in Ethernet 1124 VPN", RFC 8214, August 2017. 1126 [RFC8299] Q. Wu, S. Litkowski, L. Tomotaki, and K. Ogaki, "YANG Data 1127 Model for L3VPN Service Delivery", RFC 8299, January 2018. 1129 [RFC8309] Q. Wu, W. Liu, and A. Farrel, "Service Model Explained", 1130 RFC 8309, January 2018. 1132 [RFC8466] G. Fioccola, ed., "A YANG Data Model for Layer 2 Virtual 1133 Private Network (L2VPN) Service Delivery", RFC8466, 1134 October 2018. 1136 [TNBI] Busi, I., Daniel, K. et al., "Transport Northbound 1137 Interface Applicability Statement", draft-ietf-ccamp- 1138 transport-nbi-app-statement, work in progress. 1140 [VN] Y. Lee, et al., "A Yang Data Model for ACTN VN Operation", 1141 draft-ietf-teas-actn-vn-yang, work in progress. 1143 [L2NM] S. Barguil, et al., "A Layer 2 VPN Network YANG Model", 1144 draft-ietf-opsawg-l2nm, work in progress. 1146 [L3NM] S. Barguil, et al., "A Layer 3 VPN Network YANG Model", 1147 draft-ietf-opsawg-l3sm-l3nm, work in progress. 1149 [TSM] Y. Lee, et al., "Traffic Engineering and Service Mapping 1150 Yang Model", draft-ietf-teas-te-service-mapping-yang, work 1151 in progress. 1153 [ACTN-PM] Y. Lee, et al., "YANG models for VN & TE Performance 1154 Monitoring Telemetry and Scaling Intent Autonomics", 1155 draft-lee-teas-actn-pm-telemetry-autonomics, work in 1156 progress. 1158 [BGP-L3VPN] D. Jain, et al. "Yang Data Model for BGP/MPLS L3 VPNs", 1159 draft-ietf-bess-l3vpn-yang, work in progress. 1161 Appendix A. Multi-layer and multi-domain resiliency 1163 A.1. Maintenance Window 1165 Before planned maintenance operation on DWDM network takes place, IP 1166 traffic should be moved hitless to another link. 1168 MDSC must reroute IP traffic before the events takes place. It 1169 should be possible to lock IP traffic to the protection route until 1170 the maintenance event is finished, unless a fault occurs on such 1171 path. 1173 A.2. Router port failure 1175 The focus is on client-side protection scheme between IP router and 1176 reconfigurable ROADM. Scenario here is to define only one port in 1177 the routers and in the ROADM muxponder board at both ends as back-up 1178 ports to recover any other port failure on client-side of the ROADM 1179 (either on router port side or on muxponder side or on the link 1180 between them). When client-side port failure occurs, alarms are 1181 raised to MDSC by IP-PNC and O-PNC (port status down, LOS etc.). 1182 MDSC checks with OP-PNC(s) that there is no optical failure in the 1183 optical layer. 1185 There can be two cases here: 1187 a) LAG was defined between the two end routers. MDSC, after checking 1188 that optical layer is fine between the two end ROADMs, triggers 1189 the ROADM configuration so that the router back-up port with its 1190 associated muxponder port can reuse the OCh that was already in 1191 use previously by the failed router port and adds the new link to 1192 the LAG on the failure side. 1194 While the ROADM reconfiguration takes place, IP/MPLS traffic is 1195 using the reduced bandwidth of the IP link bundle, discarding 1196 lower priority traffic if required. Once backup port has been 1197 reconfigured to reuse the existing OCh and new link has been 1198 added to the LAG then original Bandwidth is recovered between the 1199 end routers. 1201 Note: in this LAG scenario let assume that BFD is running at LAG 1202 level so that there is nothing triggered at MPLS level when one 1203 of the link member of the LAG fails. 1205 b) If there is no LAG then the scenario is not clear since a router 1206 port failure would automatically trigger (through BFD failure) 1207 first a sub-50ms protection at MPLS level :FRR (MPLS RSVP-TE 1208 case) or TI-LFA (MPLS based SR-TE case) through a protection 1209 port. At the same time MDSC, after checking that optical network 1210 connection is still fine, would trigger the reconfiguration of 1211 the back-up port of the router and of the ROADM muxponder to re- 1212 use the same OCh as the one used originally for the failed router 1213 port. Once everything has been correctly configured, MDSC Global 1214 PCE could suggest to the operator to trigger a possible re- 1215 optimisation of the back-up MPLS path to go back to the MPLS 1216 primary path through the back-up port of the router and the 1217 original OCh if overall cost, latency etc. is improved. However, 1218 in this scenario, there is a need for protection port PLUS back- 1219 up port in the router which does not lead to clear port savings. 1221 Acknowledgments 1223 This document was prepared using 2-Word-v2.0.template.dot. 1225 Some of this analysis work was supported in part by the European 1226 Commission funded H2020-ICT-2016-2 METRO-HAUL project (G.A. 761727). 1228 Contributors 1230 Sergio Belotti 1231 Nokia 1233 Email: sergio.belotti@nokia.com 1235 Gabriele Galimberti 1236 Cisco 1238 Email: ggalimbe@cisco.com 1240 Zheng Yanlei 1241 China Unicom 1243 Email: zhengyanlei@chinaunicom.cn 1245 Anton Snitser 1246 Sedona 1247 Email: antons@sedonasys.com 1249 Washington Costa Pereira Correia 1250 TIM Brasil 1252 Email: wcorreia@timbrasil.com.br 1254 Michael Scharf 1255 Hochschule Esslingen - University of Applied Sciences 1257 Email: michael.scharf@hs-esslingen.de 1259 Young Lee 1260 Sung Kyun Kwan University 1262 Email: younglee.tx@gmail.com 1264 Jeff Tantsura 1265 Apstra 1267 Email: jefftant.ietf@gmail.com 1269 Paolo Volpato 1270 Huawei 1272 Email: paolo.volpato@huawei.com 1274 Authors' Addresses 1276 Fabio Peruzzini 1277 TIM 1279 Email: fabio.peruzzini@telecomitalia.it 1281 Jean-Francois Bouquier 1282 Vodafone 1283 Email: jeff.bouquier@vodafone.com 1285 Italo Busi 1286 Huawei 1288 Email: Italo.busi@huawei.com 1290 Daniel King 1291 Old Dog Consulting 1293 Email: daniel@olddog.co.uk 1295 Daniele Ceccarelli 1296 Ericsson 1298 Email: daniele.ceccarelli@ericsson.com