TEAS Working Group                                       Fabio Peruzzini
Internet Draft                                                       TIM
Intended status: Informational                    Jean-Francois Bouquier
                                                                Vodafone
                                                              Italo Busi
                                                                  Huawei
                                                             Daniel King
                                                      Old Dog Consulting
                                                      Daniele Ceccarelli
                                                                Ericsson

Expires: January 2022                                      July 12, 2021

    Applicability of Abstraction and Control of Traffic Engineered
         Networks (ACTN) to Packet Optical Integration (POI)

               draft-ietf-teas-actn-poi-applicability-03

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on January 12, 2022.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include Simplified
   BSD License text as described in Section 4.e of the Trust Legal
   Provisions and are provided without warranty as described in the
   Simplified BSD License.

Abstract

   This document considers the applicability of the Abstraction and
   Control of TE Networks (ACTN) architecture to Packet Optical
   Integration (POI) in the context of IP/MPLS and Optical
   internetworking.  It identifies the YANG data models being defined
   by the IETF to support this deployment architecture and specific
   scenarios relevant for Service Providers.

   Existing IETF protocols and data models are identified for each
   multi-layer (packet over optical) scenario, with a specific focus on
   the MPI (Multi-Domain Service Coordinator to Provisioning Network
   Controllers Interface) in the ACTN architecture.

Table of Contents

   1. Introduction
   2. Reference architecture and network scenario
      2.1. L2/L3VPN Service Request North Bound of MDSC
      2.2. Service and Network Orchestration
           2.2.1. Hard Isolation
           2.2.2. Shared Tunnel Selection
      2.3. IP/MPLS Domain Controller and NE Functions
      2.4. Optical Domain Controller and NE Functions
   3. Interface protocols and YANG data models for the MPIs
      3.1. RESTCONF protocol at the MPIs
      3.2. YANG data models at the MPIs
           3.2.1. Common YANG data models at the MPIs
           3.2.2. YANG models at the Optical MPIs
           3.2.3. YANG data models at the Packet MPIs
      3.3. PCEP
   4. Multi-layer and multi-domain services scenarios
      4.1.
           Scenario 1: inventory, service and network topology
           discovery
           4.1.1. Inter-domain link discovery
           4.1.2. Multi-layer IP Link discovery
           4.1.3. Inventory discovery
           4.1.4. SR-TE paths discovery
      4.2. Establishment of L2VPN/L3VPN with TE requirements
           4.2.1. Optical Path Computation
           4.2.2. Multi-layer IP Link Setup and Update
           4.2.3. SR-TE Path Setup and Update
   5. Security Considerations
   6. Operational Considerations
   7. IANA Considerations
   8. References
      8.1. Normative References
      8.2. Informative References
   Appendix A. Multi-layer and multi-domain resiliency
      A.1. Maintenance Window
      A.2. Router port failure
   Acknowledgments
   Contributors
   Authors' Addresses

1. Introduction

   The complete automation of the management and control of Service
   Providers' transport networks (IP/MPLS, optical, and microwave
   transport networks) is vital for meeting the emerging demand for
   high-bandwidth use cases, including 5G and fiber connectivity
   services.
   The Abstraction and Control of TE Networks (ACTN) architecture and
   interfaces facilitate the automation and operation of complex
   Optical and IP/MPLS networks through standard interfaces and data
   models.  This allows a wide range of transport connectivity services
   to be requested by the upper layers, fulfilling almost any kind of
   service-level requirement from a network perspective (e.g., physical
   diversity, latency, bandwidth, and topology).

   Packet Optical Integration (POI) is an advanced use case of traffic
   engineering.  In wide-area networks, a packet network based on the
   Internet Protocol (IP), and often Multiprotocol Label Switching
   (MPLS), is typically realized on top of an optical transport network
   that uses Dense Wavelength Division Multiplexing (DWDM) (and
   optionally an Optical Transport Network (OTN) layer).

   In many existing network deployments, the packet and the optical
   networks are engineered and operated independently.  As a result,
   there are technical differences between the technologies (e.g.,
   routers compared to optical switches) and the corresponding network
   engineering and planning methods (e.g., inter-domain peering
   optimization in IP, versus dealing with physical impairments in
   DWDM, or very different time scales).  In addition, customers' needs
   can be different between a packet and an optical network, and it is
   not uncommon to use different vendors in both domains.  The
   operation of these complex packet and optical networks is often
   siloed, as these technology domains require specific skill sets.

   This separation of packet and optical network deployment and
   operation is inefficient for many reasons.  Both capital expenditure
   (CAPEX) and operational expenditure (OPEX) could be significantly
   reduced by integrating the packet and the optical network.
   Multi-layer online topology insight can speed up troubleshooting
   (e.g., alarm correlation) and network operation (e.g., coordination
   of maintenance events), multi-layer offline topology inventory can
   improve service quality (e.g., detection of diversity constraint
   violations), and multi-layer traffic engineering can use the
   available network capacity more efficiently (e.g., coordination of
   restoration).  In addition, provisioning workflows can be simplified
   or automated as needed across layers (e.g., to achieve bandwidth-on-
   demand or to perform maintenance events).

   The ACTN framework enables this complete multi-layer and
   multi-vendor integration of packet and optical networks through the
   MDSC and the packet and optical PNCs.

   This document describes critical scenarios for POI from the packet
   service layer perspective and identifies the required coordination
   between the packet and optical layers to improve POI deployment and
   operation.  Precise definitions of scenarios can help with achieving
   a common understanding across different disciplines.  The scenarios
   focus on IP/MPLS networks operated as clients of optical DWDM
   networks and are ordered by increasing level of integration and
   complexity.  For each multi-layer scenario, the document analyzes
   how to use the interfaces and data models of the ACTN architecture.

   Understanding the level of standardization and the possible gaps
   will help assess the feasibility of integration between the IP and
   Optical DWDM domains (and optionally the OTN layer) from an
   end-to-end multi-vendor service provisioning perspective.
2. Reference architecture and network scenario

   This document analyses several deployment scenarios for Packet and
   Optical Integration (POI) in which the ACTN hierarchy is deployed to
   control a multi-layer and multi-domain network, with two Optical
   domains and two Packet domains, as shown in Figure 1:

                               +----------+
                               |   MDSC   |
                               +-----+----+
                                     |
               +-----------+-----+------+-----------+
               |           |            |           |
          +----+----+ +----+----+ +----+----+ +----+----+
          | P-PNC 1 | | O-PNC 1 | | O-PNC 2 | | P-PNC 2 |
          +----+----+ +----+----+ +----+----+ +----+----+
               |           |            |           |
               |            \          /            |
    +-------------------+    \        /    +-------------------+
CE1 / PE1          BR1   \    |      |    /   BR2          PE2 \ CE2
 o--/---o           o---\-----|------|---/---o            o---\--o
    \ :             : /       |      |      \ :             : /
     \ : PKT Domain 1 : /     |      |       \ : PKT Domain 2 : /
      +-:---------------:-+   |      |        +-:---------------:--+
        :               :     |      |          :               :
        :               :     |      |          :               :
    +-:---------------:------+        +-------:---------------:--+
   /  :               :       \      /        :               :    \
  /   o...............o        \    /         o...............o     \
  \      Optical Domain 1      /    \          Optical Domain 2     /
   \                          /      \                             /
    +------------------------+        +--------------------------+

                     Figure 1 - Reference Scenario

   The ACTN architecture, defined in [RFC8453], is used to control this
   multi-domain network, where each Packet PNC (P-PNC) is responsible
   for controlling its IP domain, which can be either an Autonomous
   System (AS) [RFC1930] or an IGP area within the same operator
   network.  Each Optical PNC (O-PNC) in the above topology is
   responsible for controlling its Optical Domain.

   The routers between IP domains can be either AS Boundary Routers
   (ASBRs) or Area Border Routers (ABRs): in this document, the generic
   term Border Router (BR) is used to represent either an ASBR or an
   ABR.

   The MDSC is responsible for coordinating the whole multi-domain
   multi-layer (Packet and Optical) network.
   A specific standard interface (MPI) permits the MDSC to interact
   with the different Provisioning Network Controllers (O/P-PNCs).

   The MPI presents an abstracted topology to the MDSC, hiding
   technology-specific aspects of the network and hiding topology
   details according to the chosen abstraction policy.  The level of
   abstraction can be configured through P-PNC and O-PNC configuration
   parameters (e.g., provide the potential connectivity between any PE
   and any BR in an MPLS-TE network).

   In the network scenario of Figure 1, it is assumed that:

   o  The domain boundaries between the IP and Optical domains are
      congruent.  In other words, one Optical domain supports
      connectivity between Routers in one and only one Packet Domain;

   o  Inter-domain links exist only between Packet domains (i.e.,
      between BR routers) and between Packet and Optical domains
      (i.e., between routers and Optical NEs).  In other words, there
      are no inter-domain links between Optical domains;

   o  The interfaces between the Routers and the Optical NEs are
      "Ethernet" physical interfaces;

   o  The interfaces between the Border Routers (BRs) are "Ethernet"
      physical interfaces.

   This version of the document assumes that the IP Links supported by
   the Optical network are always intra-AS (PE-BR, intra-domain BR-BR,
   PE-P, BR-P, or P-P) and that the BRs are co-located and connected by
   an IP Link supported by an Ethernet physical link.

   The possibility of setting up inter-AS/inter-area IP Links (e.g.,
   inter-domain BR-BR or PE-PE) supported by the optical network is for
   further study.

   Therefore, if inter-domain links between the Optical domains exist,
   they would be used to support multi-domain Optical services, which
   are outside the scope of this document.

   The Optical NEs within the optical domains can be ROADMs or OTN
   switches, with or without a ROADM.
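The scenario assumptions above can be expressed as simple checks over an abstract multi-domain topology.  The following sketch is purely illustrative: the domain names and link list are hypothetical and are not part of any IETF data model.

```python
# Hypothetical sketch: checking the reference-scenario assumptions of
# Figure 1 on a toy multi-domain topology.  All names are illustrative.

PACKET, OPTICAL = "packet", "optical"

domains = {
    "PKT1": PACKET, "PKT2": PACKET,
    "OPT1": OPTICAL, "OPT2": OPTICAL,
}

# Inter-domain links as (domain-A, domain-B) pairs.
inter_domain_links = [
    ("PKT1", "PKT2"),   # BR-BR Ethernet link between the Packet domains
    ("PKT1", "OPT1"),   # router-to-Optical-NE links
    ("PKT2", "OPT2"),
]

def check_assumptions(domains, links):
    """Return True if the Figure 1 scenario assumptions hold."""
    served = {}  # Optical domain -> set of Packet domains it connects to
    for a, b in links:
        ta, tb = domains[a], domains[b]
        # No inter-domain links between two Optical domains.
        if ta == OPTICAL and tb == OPTICAL:
            return False
        if OPTICAL in (ta, tb):
            opt = a if ta == OPTICAL else b
            pkt = b if ta == OPTICAL else a
            served.setdefault(opt, set()).add(pkt)
    # Congruent boundaries: each Optical domain serves exactly one
    # Packet domain.
    return all(len(p) == 1 for p in served.values())

print(check_assumptions(domains, inter_domain_links))  # True for Figure 1
```

Violating either assumption, for instance adding an OPT1-OPT2 link, makes the check fail, mirroring the restriction that multi-domain Optical services are out of scope.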
   The MDSC in Figure 1 is responsible for multi-domain and multi-layer
   coordination across multiple Packet and Optical domains, as well as
   for providing L2/L3VPN services.

   Although new optical technologies (e.g., QSFP-DD ZR 400G) provide
   DWDM pluggable interfaces on the Routers, the deployment of those
   pluggable optics is not yet widely adopted by operators.  The reason
   is that most operators are not yet ready to manage Packet and
   Transport networks in a single unified domain.  As a consequence,
   this draft does not address the unified scenario; that use case will
   be described in a separate draft.

   From an implementation perspective, the functions associated with
   the MDSC and described in [RFC8453] may be grouped in different
   ways:

   1. Both the service- and network-related functions are collapsed
      into a single, monolithic implementation, dealing with the end
      customer service requests received from the CMI (Customer MDSC
      Interface) and adapting the relevant network models.  An example
      is represented in Figure 2 of [RFC8453].

   2. An implementation can choose to split the service-related and the
      network-related functions into different functional entities, as
      described in [RFC8309] and in section 4.2 of [RFC8453].  In this
      case, the MDSC is decomposed into a top-level Service
      Orchestrator, interfacing the customer via the CMI, and into a
      Network Orchestrator interfacing at the southbound with the PNCs.
      The interface between the Service Orchestrator and the Network
      Orchestrator is not specified in [RFC8453].
   3. Another implementation can choose to split the MDSC functions
      between an H-MDSC, responsible for packet-optical multi-layer
      coordination, interfacing with one Optical L-MDSC, providing
      multi-domain coordination between the O-PNCs, and one Packet
      L-MDSC, providing multi-domain coordination between the P-PNCs
      (see, for example, Figure 9 of [RFC8453]).

   4. Another implementation can also choose to combine the MDSC and
      the P-PNC functions together.

   Note that in current service providers' network deployments there is
   typically an OSS/Orchestration layer, rather than a CNC, at the
   northbound of the MDSC.  In this case, the MDSC would implement only
   the Network Orchestration functions, as in [RFC8309] and described
   in point 2 above, and would deal with the network service requests
   received from the OSS/Orchestration layer.

   [Editors' note: Check for a better term to define the network
   services.  It may be worthwhile to define what the customer and
   network services are.]

   The OSS/Orchestration layer is a vital part of the architecture
   framework for a service provider:

   o  to abstract (through the MDSC and PNCs) the underlying transport
      network complexity to the Business Systems Support layer;

   o  to coordinate NFV, Transport (e.g., IP, Optical and Microwave
      networks), Fixed Access, Core, and Radio domains, enabling full
      automation of end-to-end services to the end customers;

   o  to enable catalogue-driven service provisioning from external
      applications (e.g., a Customer Portal for Enterprise Business
      services), orchestrating the design and lifecycle management of
      these end-to-end transport connectivity services, consuming IP
      and/or Optical transport connectivity services upon request.

   The functionality of the OSS/Orchestration layer and the interface
   toward the MDSC are usually operator-specific and outside the scope
   of this draft.
   For example, this document assumes that the OSS/Orchestrator
   requests the MDSC to set up L2VPN/L3VPN services through mechanisms
   that are outside the scope of this document.

   There are two prominent cases when MDSC coordination of the
   underlying PNCs for POI networking is initiated:

   o  Initiated by a request from the OSS/Orchestration layer to set up
      L2VPN/L3VPN services that require multi-layer/multi-domain
      coordination;

   o  Initiated by the MDSC itself to perform multi-layer/multi-domain
      optimizations and/or maintenance activities (e.g., rerouting LSPs
      with their associated services when putting a resource, like a
      fibre, in maintenance mode during a maintenance window).  Unlike
      service fulfillment, these workflows are not related to a service
      provisioning request being received from the OSS/Orchestration
      layer.

   The two aforementioned MDSC workflow cases are in the scope of this
   draft.  The workflow initiation is transparent at the MPI.

2.1. L2/L3VPN Service Request North Bound of MDSC

   As explained in section 2, the OSS/Orchestration layer can request
   the MDSC to set up L2/L3VPN services (with or without TE
   requirements).

   Although the OSS/Orchestration layer interface is usually operator-
   specific, it would typically use a RESTCONF/YANG interface with a
   more abstracted version of the MPI YANG data models used for network
   configuration (e.g., L3NM, L2NM).

   Figure 2 shows an example of a possible control flow between the
   OSS/Orchestration layer and the MDSC to instantiate L2/L3VPN
   services, using the YANG data models being defined in [VN], [L2NM],
   [L3NM] and [TSM].

         +-------------------------------------------+
         |                                           |
         |          OSS/Orchestration layer          |
         |                                           |
         +-----------------------+-------------------+
                                 |
            1.VN   2. L2/L3NM &  |   ^
             |         TSM       |   |
             |          |        |   |
             |          |        |   |
             v          v        |   3.
                                     Update VN
                                 |
         +-----------------------+-------------------+
         |                                           |
         |                   MDSC                    |
         |                                           |
         +-------------------------------------------+

                  Figure 2 - Service Request Process

   o  The VN YANG model [VN], whose primary focus is the CMI, can also
      provide VN Service configuration from an orchestrated
      connectivity service point of view when the L2/L3VPN service has
      TE requirements.  However, this model is not used to set up
      L2/L3VPN services with no TE requirements.

      o  It provides the profile of a VN in terms of VN members, each
         of which corresponds to an edge-to-edge link between customer
         end-points (VNAPs).  It also provides the mapping between the
         VNAPs and the LTPs and the association between the VN members
         and the connectivity matrix.  The traffic matrix (e.g.,
         bandwidth, latency, protection level) associated with each VN
         member is expressed via the TE topology's connectivity matrix.

      o  The model also provides VN-level preference information
         (e.g., VN member diversity) and VN-level admin-status and
         operational-status.

   o  The L2NM YANG model [L2NM], whose primary focus is the MPI, can
      also be used to provide L2VPN service configuration and site
      information, from an orchestrated connectivity service point of
      view.

   o  The L3NM YANG model [L3NM], whose primary focus is the MPI, can
      also be used to provide all L3VPN service configuration and site
      information, from an orchestrated connectivity service point of
      view.

   o  The TE & Service Mapping YANG model [TSM] provides TE-service
      mapping as well as site mapping.

      o  TE-service mapping provides the mapping between an L2/L3VPN
         instance and the corresponding VN instances.

      o  The TE-service mapping also provides the service mapping
         requirement type as to how each L2/L3VPN/VN instance is
         created with respect to the underlay TE tunnels (e.g., whether
         or not they require a new and isolated set of TE underlay
         tunnels).
         See Section 2.2 for a detailed discussion of the mapping
         requirement types.

      o  Site mapping provides the site reference information across
         the L2/L3VPN Site ID, the VN Access Point ID, and the LTP of
         the access link.

2.2. Service and Network Orchestration

   From a functional standpoint, the MDSC represented in Figure 2
   interfaces with the OSS/Orchestration layer and decouples L2/L3VPN
   service configuration functions from network configuration
   functions.  Therefore, in this document, the MDSC performs the
   functions of the Network Orchestrator, as defined in [RFC8309].

   One of the important MDSC functions is to identify which TE Tunnels
   should carry the L2/L3VPN traffic (e.g., from the TE & Service
   Mapping configuration) and to relay this information to the P-PNCs,
   to ensure that the PEs' forwarding tables (e.g., VRF) are properly
   populated, according to the TE binding requirement for the L2/L3VPN.

   The TE binding requirement types [TSM] are:

   1. Hard Isolation with deterministic latency: The L2/L3VPN service
      requires a set of dedicated TE Tunnels providing deterministic
      latency performance, which cannot be shared with other services
      nor compete for bandwidth with other Tunnels.

   2. Hard Isolation: This is similar to the above case but without
      deterministic latency requirements.

   3. Soft Isolation: The L2/L3VPN service requires a set of dedicated
      MPLS-TE tunnels that cannot be shared with other services, but
      which could compete for bandwidth with other Tunnels.

   4. Sharing: The L2/L3VPN service allows sharing the MPLS-TE Tunnels
      supporting it with other services.

   There could be additional TE binding requirements for the first
   three types with respect to different VN members of the same VN
   (i.e., whether or not different VN members belonging to the same VN
   can share network resources).  For the first two cases, VN members
   can be hard-isolated, soft-isolated, or shared.
   For the third case, VN members can be soft-isolated or shared.

   In order to fulfil the L2/L3VPN end-to-end TE requirements,
   including the TE binding requirements, the MDSC needs to perform
   multi-layer/multi-domain path computation to select the BRs, the
   intra-domain MPLS-TE Tunnels, and the intra-domain Optical Tunnels.

   Depending on the knowledge that the MDSC has of the topology and
   configuration of the underlying network domains, three models for
   performing path computation are possible:

   1. Summarization: The MDSC has an abstracted TE topology view of all
      the underlying domains, both packet and optical.  The MDSC does
      not have enough TE topology information to perform
      multi-layer/multi-domain path computation.  Therefore, the MDSC
      delegates the P-PNCs and O-PNCs to perform local path computation
      within their controlled domains, and it uses the information
      returned by the P-PNCs and O-PNCs to compute the optimal
      multi-domain/multi-layer path.
      This model presents an issue for the P-PNC, which does not have
      the capability of performing single-domain/multi-layer path
      computation (that is, the P-PNC has no means to retrieve
      topology/configuration information from the Optical controller).
      A possible solution could be to include a CNC function in the
      P-PNC to request multi-domain Optical path computation from the
      MDSC, as shown in Figure 10 of [RFC8453].

   2. Partial summarization: The MDSC has full visibility of the TE
      topology of the packet network domains and an abstracted view of
      the TE topology of the optical network domains.
      The MDSC then has only the capability of performing
      multi-domain/single-layer path computation for the packet layer
      (the path can be computed optimally for the two packet domains).
      Therefore, the MDSC still needs to delegate the O-PNCs to perform
      local path computation within their respective domains, and it
      uses the information received from the O-PNCs, together with its
      TE topology view of the multi-domain packet layer, to perform
      multi-layer/multi-domain path computation.
      The role of the P-PNC is minimized, i.e., it is limited to
      management.

   3. Full knowledge: The MDSC has a complete and sufficiently detailed
      view of the TE topology of all the network domains (both optical
      and packet).  In this case, the MDSC has all the information
      needed to perform multi-domain/multi-layer path computation,
      without relying on the PNCs.

      This model may present scalability issues as a potential drawback
      and, as discussed in section 2.2 of [PATH-COMPUTE], performing
      path computation for optical networks in the MDSC is quite
      challenging because the optimal paths also depend on
      vendor-specific optical attributes (which may be different in the
      two domains if they are provided by different vendors).

   The current version of this draft assumes that the MDSC supports at
   least model #2 (Partial summarization).

   [Note: check with operators for some references on real deployments]

2.2.1. Hard Isolation

   For example, when the "Hard Isolation" TE binding requirement, with
   or without deterministic latency, is applied for an L2/L3VPN, new
   Optical Tunnels need to be set up to support dedicated IP Links
   between the PEs and BRs.

   The MDSC needs to identify the set of IP/MPLS domains and their BRs.
   This requires the MDSC to request each O-PNC to compute the
   intra-domain optical paths between each PE/BR pair.
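The BR selection step described above can be sketched as follows; the node names and latency figures are hypothetical, and a real implementation would combine the O-PNC results through the TE topology and tunnel models rather than plain dictionaries.

```python
# Illustrative sketch (all names and figures hypothetical): the MDSC
# combines the intra-domain path computation results returned by each
# O-PNC to select the BR pair minimizing end-to-end latency PE1 -> PE2.

# Latency (ms) of candidate intra-domain optical paths, as if returned
# by O-PNC 1 and O-PNC 2 for each PE/BR pair.
opnc1_paths = {("PE1", "BR1a"): 3.0, ("PE1", "BR1b"): 4.5}
opnc2_paths = {("BR2a", "PE2"): 2.0, ("BR2b", "PE2"): 1.5}

# Inter-domain BR-BR Ethernet links and their latency (ms).
br_links = {("BR1a", "BR2a"): 0.1, ("BR1b", "BR2b"): 0.1}

def select_brs(d1, d2, links):
    """Pick the BR pair minimizing total PE1 -> PE2 latency."""
    best = None
    for (br1, br2), link_lat in links.items():
        # Best intra-domain optical path towards each BR.
        lat1 = min(v for (_, br), v in d1.items() if br == br1)
        lat2 = min(v for (br, _), v in d2.items() if br == br2)
        total = lat1 + link_lat + lat2
        if best is None or total < best[0]:
            best = (total, br1, br2)
    return best

print(select_brs(opnc1_paths, opnc2_paths, br_links))
# BR1a/BR2a give the lowest total latency in this toy example
```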
   When requesting optical path computation from the O-PNC, the MDSC
   needs to take into account the inter-layer peering points, such as
   the interconnections between the PE/BR nodes and the edge Optical
   nodes (e.g., using the inter-layer lock or the transitional link
   information defined in [RFC8795]).

   When the optimal multi-layer/multi-domain path has been computed,
   the MDSC requests each O-PNC to set up the selected Optical Tunnels
   and each P-PNC to set up the intra-domain MPLS-TE Tunnels over the
   selected Optical Tunnels.  The MDSC also ensures that the BGP
   speakers and the PE/BR forwarding tables are properly configured so
   that the VPN traffic is properly forwarded.

2.2.2. Shared Tunnel Selection

   In the case of shared tunnel selection, the MDSC needs to check
   whether there is a multi-domain path that can support the L2/L3VPN
   end-to-end TE service requirements (e.g., bandwidth, latency, etc.)
   using existing intra-domain MPLS-TE tunnels.

   If such a path is found, the MDSC selects the optimal path from the
   candidate pool and requests each P-PNC to set up the L2/L3VPN
   service using the selected intra-domain MPLS-TE tunnels between the
   PE/BR nodes.

   Otherwise, the MDSC should detect whether the multi-domain path can
   be set up using existing intra-domain MPLS-TE tunnels with
   modifications (e.g., increasing the tunnel bandwidth) or by setting
   up new intra-domain MPLS-TE tunnel(s).

   The modification of an existing MPLS-TE Tunnel and the setup of a
   new MPLS-TE Tunnel may also require multi-layer coordination, e.g.,
   in case the available bandwidth of the underlying Optical Tunnels is
   not sufficient.  Based on multi-domain/multi-layer path computation,
   the MDSC can decide, for example, to modify the bandwidth of an
   existing Optical Tunnel (e.g., an ODUflex bandwidth increase) or to
   set up new Optical Tunnels to be used as additional LAG members of
   an existing IP Link or as new IP Links to re-route the MPLS-TE
   Tunnel.
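The shared-tunnel decision process described in this section can be sketched as follows.  The tunnel records and bandwidth figures are illustrative assumptions, not drawn from any YANG model.

```python
# Hedged sketch of the shared-tunnel selection logic: reuse an existing
# intra-domain MPLS-TE tunnel if it fits, otherwise modify one,
# otherwise set up a new one (possibly with multi-layer coordination).

def select_tunnel(tunnels, required_bw):
    """Return (action, tunnel) for a 'Sharing' TE binding request.

    tunnels: list of dicts with 'name', 'avail_bw' (currently available
             bandwidth) and 'max_bw' (what the tunnel could be resized
             to, e.g., by an ODUflex bandwidth increase of the
             underlying Optical Tunnel).
    """
    # 1. Prefer an existing tunnel that already fits the request.
    fitting = [t for t in tunnels if t["avail_bw"] >= required_bw]
    if fitting:
        return ("reuse", max(fitting, key=lambda t: t["avail_bw"]))
    # 2. Otherwise, try modifying an existing tunnel (bandwidth increase).
    resizable = [t for t in tunnels if t["max_bw"] >= required_bw]
    if resizable:
        return ("modify", max(resizable, key=lambda t: t["max_bw"]))
    # 3. Otherwise, a new intra-domain MPLS-TE tunnel must be set up,
    #    possibly triggering new Optical Tunnels or new LAG members.
    return ("setup-new", None)

tunnels = [{"name": "t1", "avail_bw": 2, "max_bw": 10},
           {"name": "t2", "avail_bw": 6, "max_bw": 8}]
print(select_tunnel(tunnels, 5)[0])   # reuse (t2 already fits)
print(select_tunnel(tunnels, 9)[0])   # modify (t1 can grow to 10)
print(select_tunnel(tunnels, 20)[0])  # setup-new
```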
   In all cases, the labels used by the end-to-end tunnel are
   distributed to the PE and BR nodes by BGP.  The MDSC is responsible,
   if needed, for configuring the BGP speakers through each P-PNC.

2.3. IP/MPLS Domain Controller and NE Functions

   IP/MPLS networks are assumed to have multiple domains.  Each domain,
   corresponding to either an IGP area or an Autonomous System (AS)
   within the same operator network, is controlled by an IP/MPLS domain
   controller (P-PNC).

   The functions of the P-PNC include the setup or modification of the
   intra-domain MPLS-TE Tunnels between PEs and BRs, and the
   configuration of the VPN services, such as the VRFs in the PE nodes,
   as shown in Figure 3:

       +------------------+          +------------------+
       |                  |          |                  |
       |      P-PNC1      |          |      P-PNC2      |
       |                  |          |                  |
       +--|-----------|---+          +--|-----------|---+
          | 1.Tunnel  | 2.VPN           | 1.Tunnel  | 2.VPN
          | Config    | Provisioning    | Config    | Provisioning
          V           V                 V           V
     +---------------------+      +---------------------+
  CE / PE   tunnel 1    BR  \    /  BR   tunnel 2    PE \  CE
  o--/---o..................o--\-----/--o..................o---\--o
     \                     /      \                     /
      \     Domain 1      /        \     Domain 2      /
       +---------------------+      +---------------------+

                        End-to-end tunnel
     <------------------------------------------------->

          Figure 3 - IP/MPLS Domain Controller & NE Functions

   It is assumed that BGP is running in the inter-domain IP/MPLS
   networks for the L2/L3VPNs.  The P-PNC is also responsible for
   configuring the BGP speakers within its control domain, if
   necessary.

   BGP is responsible for distributing the end-to-end tunnel labels to
   the PE and BR nodes.  The MDSC is responsible for selecting the BRs
   and the intra-domain MPLS-TE Tunnels between the PE/BR nodes.
632 If new MPLS-TE Tunnels, or modifications (e.g., bandwidth 633 increase) to existing MPLS-TE Tunnels, are needed, as outlined in 634 section 2.2, the MDSC would request their setup or modification to 635 the P-PNCs (step 1 in Figure 3). Then the MDSC would request the 636 P-PNC to configure the VPN, including selecting the intra-domain TE 637 Tunnel (step 2 in Figure 3). 639 The P-PNC should configure, using mechanisms outside the scope of 640 this document, the ingress PE forwarding table, e.g., the VRF, to 641 forward the VPN traffic, received from the CE, with the following 642 three labels: 644 o VPN label: assigned by the egress PE and distributed by BGP; 646 o end-to-end LSP label: assigned by the egress BR, selected by the 647 MDSC, and distributed by BGP; 649 o MPLS-TE tunnel label: assigned by the next hop P node of the 650 tunnel selected by the MDSC and distributed by mechanisms internal 651 to the IP/MPLS domain (e.g., RSVP-TE). 653 2.4. Optical Domain Controller and NE Functions 655 The optical network provides the underlay connectivity services to 656 IP/MPLS networks. Packet/Optical multi-layer coordination is 657 performed by the MDSC, as shown in Figure 1. 659 The O-PNC is responsible for: 661 o providing the MDSC with an abstract TE topology view of its 662 underlying optical network resources; 664 o performing single-domain local path computation, when requested by 665 the MDSC; 667 o performing Optical Tunnel setup, when requested by the MDSC. 669 The mechanisms used by the O-PNC to perform intra-domain topology 670 discovery and path setup are usually vendor-specific and outside the 671 scope of this document. 673 Depending on the type of optical network, TE topology abstraction, 674 path computation and path setup can be single-layer (either OTN or 675 WDM) or multi-layer OTN/WDM. In the latter case, the multi-layer 676 coordination between the OTN and WDM layers is performed by the 677 O-PNC. 679 3.
Interface protocols and YANG data models for the MPIs 681 This section describes general assumptions applicable at all the MPI 682 interfaces, between each PNC (Optical or Packet) and the MDSC, and 683 all the scenarios discussed in this document. 685 3.1. RESTCONF protocol at the MPIs 687 The RESTCONF protocol, as defined in [RFC8040], using the JSON 688 representation defined in [RFC7951], is assumed to be used at these 689 interfaces. In addition, extensions to RESTCONF, as defined in 690 [RFC8527], to be compliant with Network Management Datastore 691 Architecture (NMDA) defined in [RFC8342], are assumed to be used as 692 well at these MPI interfaces and also at CMI interfaces. 694 3.2. YANG data models at the MPIs 696 The data models used on these interfaces are assumed to use the YANG 697 1.1 Data Modeling Language, as defined in [RFC7950]. 699 3.2.1. Common YANG data models at the MPIs 701 As required in [RFC8040], the "ietf-yang-library" YANG module 702 defined in [RFC8525] is used to allow the MDSC to discover the set 703 of YANG modules supported by each PNC at its MPI. 705 Both Optical and Packet PNCs use the following common topology YANG 706 models at the MPI to report their abstract topologies: 708 o The Base Network Model, defined in the "ietf-network" YANG module 709 of [RFC8345]; 711 o The Base Network Topology Model, defined in the "ietf-network- 712 topology" YANG module of [RFC8345], which augments the Base 713 Network Model; 715 o The TE Topology Model, defined in the "ietf-te-topology" YANG 716 module of [RFC8795], which augments the Base Network Topology 717 Model with TE specific information. 719 These common YANG models are generic and augmented by technology- 720 specific YANG modules as described in the following sections. 
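As an illustration of how these models nest, the following Python sketch parses a minimal, hand-written JSON instance containing a link from the "ietf-network-topology" module inside a network from the "ietf-network" module [RFC8345]. The network and node identifiers are invented for the example, and the fragment omits the "ietf-te-topology" augmentations a real PNC would also expose.

```python
import json

# Illustrative RESTCONF/JSON topology fragment (not from a real PNC).
TOPOLOGY_JSON = """
{
  "ietf-network:networks": {
    "network": [
      {
        "network-id": "p-pnc1-topology",
        "node": [
          {"node-id": "PE13"},
          {"node-id": "P16"}
        ],
        "ietf-network-topology:link": [
          {
            "link-id": "PE13-P16",
            "source": {"source-node": "PE13"},
            "destination": {"dest-node": "P16"}
          }
        ]
      }
    ]
  }
}
"""


def list_links(doc):
    """Extract (source, destination) node pairs from a topology instance."""
    network = doc["ietf-network:networks"]["network"][0]
    return [
        (l["source"]["source-node"], l["destination"]["dest-node"])
        for l in network.get("ietf-network-topology:link", [])
    ]


topology = json.loads(TOPOLOGY_JSON)
print(list_links(topology))
```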
722 Both Optical and Packet PNCs must use the following common 723 notification YANG models at the MPI so that any network changes can 724 be reported almost in real-time to the MDSC by the PNCs: 726 o Dynamic Subscription to YANG Events and Datastores over RESTCONF, 727 as defined in [RFC8650]; 729 o Subscription to YANG Notifications for Datastore Updates, as 730 defined in [RFC8641]. 732 PNCs and MDSCs must be compliant with the subscription requirements 733 stated in [RFC7923]. 735 3.2.2. YANG models at the Optical MPIs 737 The Optical PNC also uses at least the following technology-specific 738 topology YANG models, providing WDM and Ethernet technology-specific 739 augmentations of the generic TE Topology Model: 741 o The WSON Topology Model, defined in the "ietf-wson-topology" YANG 742 module of [WSON-TOPO], or the Flexi-grid Topology Model, defined 743 in the "ietf-flexi-grid-topology" YANG module of [Flexi-TOPO]; 745 o Optionally, when the OTN layer is used, the OTN Topology Model, 746 as defined in the "ietf-otn-topology" YANG module of [OTN-TOPO]; 748 o The Ethernet Topology Model, defined in the "ietf-eth-te- 749 topology" YANG module of [CLIENT-TOPO]; 751 o Optionally, when the OTN layer is used, the network data model 752 for L1 OTN services (e.g., an Ethernet transparent service), as 753 defined in the "ietf-trans-client-service" YANG module of draft-ietf- 754 ccamp-client-signal-yang [CLIENT-SIGNAL]. 756 The WSON Topology Model or, alternatively, the Flexi-grid 757 Topology Model is used to report the DWDM network topology (e.g., 758 ROADMs and links), depending on whether the DWDM optical network 759 is based on fixed grid or flexible grid. 761 The Ethernet Topology Model is used to report the access links between the 762 IP routers and the edge ROADMs.
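Returning to the notification models listed at the beginning of this section, the sketch below shows the kind of dynamic subscription request an MDSC might send to a PNC: an "establish-subscription" RPC (defined in the "ietf-subscribed-notifications" module of RFC 8639 and augmented for datastore updates by [RFC8641]), carried over RESTCONF as per [RFC8650]. The chosen filter and the on-change trigger are illustrative assumptions, not requirements of this document; no request is actually sent.

```python
import json

# Hypothetical RESTCONF operation resource and RPC body for a dynamic
# datastore-update subscription covering the whole topology tree.
RPC_URL = ("/restconf/operations/"
           "ietf-subscribed-notifications:establish-subscription")

RPC_BODY = {
    "ietf-subscribed-notifications:input": {
        # Augmentations from "ietf-yang-push" for datastore subscriptions:
        "ietf-yang-push:datastore": "ietf-datastores:operational",
        "ietf-yang-push:datastore-xpath-filter": "/ietf-network:networks",
        "ietf-yang-push:on-change": {}
    }
}

print(RPC_URL)
print(json.dumps(RPC_BODY))
```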
764 The Optical PNC also uses at least the following YANG models: 766 o The TE Tunnel Model, defined in the "ietf-te" YANG module of 767 [TE-TUNNEL]; 769 o The WSON Tunnel Model, defined in the "ietf-wson-tunnel" YANG 770 modules of [WSON-TUNNEL], or the Flexi-grid Media Channel Model, 771 defined in the "ietf-flexi-grid-media-channel" YANG module of 772 [Flexi-MC]; 774 o Optionally, when the OTN layer is used, the OTN Tunnel Model, 775 defined in the "ietf-otn-tunnel" YANG module of [OTN-TUNNEL]; 777 o The Ethernet Client Signal Model, defined in the "ietf-eth-tran- 778 service" YANG module of [CLIENT-SIGNAL]. 780 The TE Tunnel Model is generic and augmented by technology-specific 781 models such as the WSON Tunnel Model and the Flexi-grid Media 782 Channel Model. 784 The WSON Tunnel Model, or the Flexi-grid Media Channel Model, may be 785 used to set up connectivity within the DWDM network, depending on 786 whether the DWDM optical network is based on fixed grid or flexible 787 grid. 789 The Ethernet Client Signal Model is used to configure the steering 790 of the Ethernet client traffic between Ethernet access links and TE 791 Tunnels, which in this case could be either WSON Tunnels or 792 Flexi-Grid Media Channels. This model is generic and applies to any 793 technology-specific TE Tunnel: technology-specific attributes are 794 provided by the technology-specific models which augment the generic 795 TE Tunnel Model. 797 3.2.3. YANG data models at the Packet MPIs 799 The Packet PNC also uses at least the following technology-specific 800 topology YANG models, providing IP and Ethernet technology-specific 801 augmentations of the generic Topology Models described in section 802 3.2.1: 804 o The L3 Topology Model, defined in the "ietf-l3-unicast-topology" 805 YANG module of [RFC8346], which augments the Base Network 806 Topology Model; 808 o The L3-specific data model including extended TE attributes (e.g., 809 performance-derived metrics like latency), defined in the "ietf-l3- 810 te-topology" and "ietf-te-topology-packet" YANG modules of 811 [L3-TE-TOPO]; 813 o When SR-TE is used, the SR Topology Model, defined in the "ietf- 814 sr-mpls-topology" YANG module of [SR-TE-TOPO]: this YANG module 815 is used together with other YANG modules to provide the SR-TE 816 topology view as described in figure 2 of [SR-TE-TOPO]; 818 o The Ethernet Topology Model, defined in the "ietf-eth-te- 819 topology" YANG module of [CLIENT-TOPO], which augments the TE 820 Topology Model. 822 The Ethernet Topology Model is used to report the access links 823 between the IP routers and the edge ROADMs as well as the 824 inter-domain links between ASBRs, while the L3 Topology Model is 825 used to report the IP network topology (e.g., IP routers and links). 827 o The User Network Interface (UNI) Topology Model, defined in 828 the "ietf-uni-topology" module of draft-ogondio-opsawg-uni- 829 topology [UNI-TOPO], which augments the "ietf-network" module defined 830 in [RFC8345], adding service attachment points to the nodes to 831 which L2VPN/L3VPN IP/MPLS services can be attached; 833 o The L3VPN Network Model, defined in the "ietf-l3vpn-ntw" module of 834 draft-ietf-opsawg-l3sm-l3nm [L3NM], used at the non-ACTN MPI for 835 L3VPN service provisioning; 837 o The L2VPN Network Model, defined in the "ietf-l2vpn-ntw" module of 838 draft-ietf-opsawg-l2nm [L2NM], used at the non-ACTN MPI 839 for L2VPN service provisioning. 841 [Editor's note:] Add YANG models used for tunnel and service 842 configuration. 844 3.3. PCEP 846 [RFC8637] examines the applicability of a Path Computation Element 847 (PCE) [RFC5440] and the PCE Communication Protocol (PCEP) to the ACTN 848 framework. It further describes how the PCE architecture applies to 849 ACTN and lists the PCEP extensions that are needed to use PCEP as an 850 ACTN interface.
The stateful PCE [RFC8231], PCE-Initiation 851 [RFC8281], stateful Hierarchical PCE (H-PCE) [RFC8751], and PCE as a 852 central controller (PCECC) [RFC8283] are some of the key extensions 853 that enable the use of PCE/PCEP for ACTN. 855 Since PCEP supports path computation in both packet and optical 856 networks, it is well suited for inter-layer path computation. 857 [RFC5623] describes a framework for applying the PCE-based 858 architecture to inter-layer (G)MPLS traffic engineering. Furthermore, 859 section 6.1 of [RFC8751] describes the applicability of H-PCE to 860 inter-layer path computation and POI. 862 [RFC8637] lists various PCEP extensions that apply to ACTN. It also 863 lists the PCEP extensions for optical networks and POI. 865 Note that PCEP can be used in conjunction with the YANG models 866 described in the rest of this document. Depending on whether ACTN is 867 deployed in a greenfield or brownfield scenario, two options are possible: 869 1. The MDSC uses a single RESTCONF/YANG interface towards each PNC 870 to discover all the TE information and request TE tunnels. It may 871 either perform full multi-layer path computation or delegate path 872 computation to the underlying PNCs. 874 This approach is desirable for operators from a multi-vendor 875 integration perspective as it is simple: only one 876 type of interface (RESTCONF) is needed, with the relevant YANG data 877 models used depending on the operator use case considered. The benefits of 878 having only one protocol for the MPI between the MDSC and the PNCs have 879 already been highlighted in [PATH-COMPUTE]. 881 2. The MDSC uses the RESTCONF/YANG interface towards each PNC to 882 discover all the TE information and requests the creation of TE 883 tunnels. However, it uses PCEP for hierarchical path computation.
885 As mentioned in Option 1, from an operator perspective, this 886 option adds integration complexity, since two protocols are used 887 instead of one, unless the RESTCONF/YANG interface is added to 888 an existing PCEP deployment (brownfield scenario). 890 Section 4 of this draft analyses the case where a single 891 RESTCONF/YANG interface is deployed at the MPI (i.e., option 1 892 above). 894 4. Multi-layer and multi-domain services scenarios 896 Multi-layer and multi-domain scenarios, based on the reference network 897 described in section 2 and highly relevant for Service Providers, are 898 described in the next sections. For each scenario, existing IETF 899 protocols and data models are identified, with particular focus on 900 the MPI in the ACTN architecture. Non-ACTN IETF data models required 901 for L2/L3VPN service provisioning between the MDSC and the packet PNCs are 902 also identified. 904 4.1. Scenario 1: inventory, service and network topology discovery 906 In this scenario, the MDSC needs to discover, through the underlying 907 PNCs, the network topology at both the WDM and IP layers, in terms of 908 nodes and links (including inter-AS domain links as well as cross- 909 layer links) but also in terms of tunnels (MPLS or SR paths in the IP 910 layer; OCh and, optionally, ODUk tunnels in the optical layer). 912 In addition, the MDSC should discover the deployed IP/MPLS transport services 913 (L2VPN/L3VPN), both intra-domain and inter-domain. 915 The O-PNC and P-PNC could discover and report the inventory 916 information of their equipment, which is used by the different 917 management layers. In the context of POI, the inventory information 918 of IP and WDM equipment can complement the topology views and 919 facilitate the IP-Optical multi-layer view. 921 The MDSC could also discover the whole inventory information of both 922 IP and WDM equipment and correlate this information with the links 923 reported in the network topology.
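As a minimal illustration of the discovery step above, the sketch below builds the RESTCONF [RFC8040] URI an MDSC could use to retrieve a PNC's topology. The controller hostname is a hypothetical example and no request is actually sent here.

```python
# Hypothetical PNC RESTCONF endpoint (illustrative hostname).
PNC_BASE = "https://p-pnc1.example.com/restconf"


def topology_url(datastore="data"):
    # Per RFC 8040, data resources live under {+restconf}/data and a
    # top-level container is addressed by its module-qualified name.
    return f"{PNC_BASE}/{datastore}/ietf-network:networks"


# RESTCONF JSON encoding is requested with this media type (RFC 8040).
ACCEPT_HEADER = {"Accept": "application/yang-data+json"}

print(topology_url())
```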
925 Each PNC provides to the MDSC an abstracted or full topology view of 926 the WDM or the IP topology of the domain it controls. This topology 927 can be abstracted in the sense that some detailed NE information is 928 hidden at the MPI. All or some of the NEs and related physical links 929 are exposed as abstract nodes and logical (virtual) links, depending 930 on the level of abstraction the user requires. This information is 931 key to understanding both the inter-AS domain links (seen by each 932 controller as UNI interfaces but as I-NNI interfaces by the MDSC) 933 and the cross-layer mapping between the IP and WDM layers. 935 The MDSC should also maintain up-to-date inventory, service and 936 network topology databases of both the IP and WDM layers (and optionally 937 the OTN layer) through the use of IETF notifications over the MPI with 938 the PNCs whenever any inventory/topology/service change occurs. 940 It should also be possible to correlate information coming from the IP 941 and WDM layers (e.g., which port, lambda/OTSi, and direction are 942 used by a specific IP service on the WDM equipment). 944 In particular, for the cross-layer links, it is key for the MDSC to 945 automatically correlate the information from the PNC network 946 databases about the physical ports from the routers (single link or 947 bundled links for LAG) to client ports in the ROADM. 949 It should be possible at the MDSC level to easily correlate WDM and IP 950 layer alarms to speed up troubleshooting. 952 Alarms and event notifications are required between the MDSC and the PNCs so 953 that any network changes are reported almost in real-time to the MDSC 954 (e.g., NE or link failure, an MPLS tunnel switched from primary to back- 955 up path, etc.). As specified in [RFC7923], the MDSC must subscribe to 956 specific objects from the PNC YANG datastores for notifications. 958 4.1.1.
Inter-domain link discovery 960 In the reference network of Figure 1, there are two types of 961 inter-domain links: 963 o Links between two IP domains (ASes); 965 o Links between an IP router and a ROADM. 967 Both types of links are Ethernet physical links. 969 The inter-domain link information is reported to the MDSC by the two 970 adjacent PNCs, controlling the two ends of the inter-domain link. 971 The MDSC needs to understand how to merge these inter-domain 972 Ethernet links. 974 This document considers the following two options for discovering 975 inter-domain links: 977 1. Static configuration; 979 2. LLDP [IEEE 802.1AB] automatic discovery. 981 Other options are possible but not described in this document. 983 The MDSC can understand how to merge these inter-domain links using 984 the plug-id attribute defined in the TE Topology Model [RFC8795], as 985 described in section 4.3 of [RFC8795]. 987 A more detailed description of how the plug-id can be used to 988 discover inter-domain links is also provided in section 5.1.4 of 989 [TNBI]. 991 Both types of inter-domain links are discovered using the plug-id 992 attributes reported in the Ethernet Topologies exposed by the two 993 adjacent PNCs. In addition, the MDSC can also discover an 994 inter-domain IP link/adjacency between the two IP LTPs, reported in 995 the IP Topologies exposed by the two adjacent P-PNCs, which is supported by 996 the two ETH LTPs of an Ethernet link discovered between these two 997 P-PNCs. 999 Static configuration imposes an administrative burden to 1000 configure network-wide unique identifiers: it is therefore more 1001 viable for inter-AS links. For the links between the IP routers and 1002 the Optical NEs, the automatic discovery solution based on LLDP 1003 snooping is preferable, when LLDP snooping is supported by the 1004 Optical NEs.
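The plug-id based merging described above can be sketched as follows. The LTP records and plug-id values are invented for the example and do not follow the exact structure of the [RFC8795] YANG model; the point is only that two LTPs reporting the same plug-id value are inferred to be the two ends of one inter-domain link.

```python
# LTPs reported by two adjacent PNCs, each carrying a plug-id value
# (illustrative records, not actual YANG instance data).
PNC1_LTPS = [{"ltp": "BR11-eth1", "plug-id": "AS1-AS2-link-1"}]
PNC2_LTPS = [{"ltp": "BR21-eth5", "plug-id": "AS1-AS2-link-1"},
             {"ltp": "BR22-eth2", "plug-id": "AS1-AS2-link-2"}]


def merge_inter_domain_links(ltps_a, ltps_b):
    """Return inter-domain links inferred from matching plug-id values."""
    by_plug = {ltp["plug-id"]: ltp["ltp"] for ltp in ltps_b}
    return [
        (ltp["ltp"], by_plug[ltp["plug-id"]])
        for ltp in ltps_a
        if ltp["plug-id"] in by_plug
    ]


print(merge_inter_domain_links(PNC1_LTPS, PNC2_LTPS))
```

As noted above, whether the plug-id value is statically configured or derived from snooped LLDP information, the encoding must be consistent across all the PNCs for this matching to work.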
1006 As outlined in [TNBI], the encoding of the plug-id namespace and the 1007 LLDP information within the plug-id value is implementation-specific 1008 and needs to be consistent across all the PNCs. 1010 4.1.2. Multi-layer IP Link discovery 1012 All the intra-domain IP links are discovered by the P-PNC, using LLDP 1013 [IEEE 802.1AB] or any other mechanisms, which are outside the scope 1014 of this document, and reported at the MPI within the L3 Topology. 1016 In case of a multi-layer IP link, the P-PNC also reports the two 1017 inter-domain ETH LTPs that support the two IP LTPs terminating the 1018 multi-layer IP link. 1020 The MDSC can therefore discover which Ethernet access link supports 1021 the multi-layer IP link, as described in section 4.1.1. 1023 The Optical Transponders, or the OTN access cards, are reported by 1024 the O-PNC as Trail Termination Points (TTPs), defined in [RFC8795], 1025 within the Optical Topology. The association between the Ethernet 1026 access link and the Optical TTP is reported using the Inter-Layer 1027 Lock (ILL) identifiers, defined in [RFC8795], within the Ethernet 1028 Topology and Optical Topology exposed by the O-PNC. 1030 The MDSC can discover through the MPI the Optical Tunnels set 1031 up by each O-PNC and, in particular, which Optical Tunnel has been 1032 set up between the two TTPs associated with the two Ethernet access 1033 links supporting an inter-domain IP Link. 1035 4.1.3. Inventory discovery 1037 There are no YANG data models in the IETF that could be used to report at 1038 the MPI the whole inventory information discovered by a PNC. 1040 [RFC8345] has foreseen some work on inventory as an augmentation of 1041 the network model, but no YANG data model has been developed so far. 1043 There are also no YANG data models in the IETF that could be used to 1044 correlate topology information, e.g., a link termination point 1045 (LTP), with inventory information, e.g., the physical port 1046 supporting an LTP, if any.
1048 Inventory information through the MPI and its correlation with topology 1049 information is identified as a gap requiring further work and 1050 outside of the scope of this draft. 1052 4.1.4. SR-TE paths discovery 1054 This version of the draft assumes that discovery of existing SR-TE 1055 paths, including their bandwidth, at the MPI is done using the 1056 generic TE tunnel YANG data model, defined in [TE-TUNNEL], with 1057 SR-TE specific augmentations, as also outlined in section 1 of 1058 [TE-TUNNEL]. 1060 To enable the MDSC to discover the full end-to-end SR-TE path 1061 configuration, the SR-TE specific augmentation of the [TE-TUNNEL] 1062 model should allow the P-PNC to report the SID list assigned to an SR-TE 1063 path within its domain. 1065 [Editors' note:] Need to check if SR-TE specific augmentation is 1066 required for SR-TE path discovery. 1068 For example, considering the L3VPN in Figure 4, the PE13-P16-PE14 1069 SR-TE path and the SR-TE path in the reverse direction (between PE14 1070 and PE13) could be reported by P-PNC1 to the MDSC as TE paths of 1071 the same TE tunnel instance. The bandwidth of these TE paths 1072 represents the bandwidth allocated by P-PNC1 to the two SR-TE 1073 paths, which can be symmetric or asymmetric in the two directions. 1075 4.2. Establishment of L2VPN/L3VPN with TE requirements 1077 In this scenario the MDSC needs to set up a multi-domain L2VPN or 1078 L3VPN with some SLA requirements. 1080 Figure 4 provides an example of a hub & spoke L3VPN with three PEs, 1081 where the hub PE (PE13) and one spoke PE (PE14) are within the same 1082 packet domain and the other spoke PE (PE23) is within a different 1083 packet domain. 1085 ------ 1086 | CE13 |___________________ 1087 ------ ) __________________ 1088 ( | ) ( ) 1089 ( | PE13 P15 BR11 ) ( BR21 P24 ) 1090 ( ____ ___ ____ ) ( ____ ___ ) 1091 ( / H \ _ _ _ / \ _ _ / \ _)_ _ _(_ / \ _ _ _ / \ ) 1092 ( \____/... \___/ \____/ ) ( \____/ \___/ ) 1093 ( :.....
) ( | ) 1094 ( ____ :__ ____ ) ( ____ _|__ ) 1095 ( / S \...../ \._._./ \__________/ \._._._._./ S \ ) 1096 ( \____/ \___/ \____/ ) ( \____/ \____/ ) 1097 ( | ) ( | ) 1098 ( | PE14 P16 BR12 ) ( BR22 PE23 | ) 1099 ( | ) ( | ) 1100 ------ ) ( ------ 1101 | CE14 | ___________________) (_____________| CE23 | 1102 ------ ------ 1104 _____________________________ ___________________ 1105 ( ) ( ) 1106 ( ____ ____ ) ( ____ ) 1107 ( / \ __ _ _ _ _ / \ ) ( / \ _ _ ) 1108 ( \____/.. \____/ ) ( \____/ \ ) 1109 ( | :..... ...: \ ) ( / \ ) 1110 ( _|__ :__: \____ ) ( ___/ __\_ ) 1111 ( / \_ _ / \ _ _ _ / \ ) ( / \ _ _ _ / \ ) 1112 ( \____/ \____/ \____/ ) ( \____/ \____/ ) 1113 ( ) ( ) 1114 (_____________________________) (___________________) 1116 Optical Domain 1 Optical Domain 2 1118 H / S = Hub VRF / Spoke VRF 1119 ____ = Inter-domain interconnections 1120 ..... = SR policy Path 1 1121 _ _ _ = SR policy Path 2 1123 Figure 4 Multi-domain L3VPN example 1125 [Editors' note:] Update the SR policy paths to show the intra-domain 1126 PE13-P16-P14 and inter-domain PE13-BR11-BR12-P24-PE23 paths. No need 1127 to show the TI-LFA in this figure. Remove also the intra-domain TI- 1128 LFA. 1130 There are many options to implement multi-domain L3VPN, including: 1132 1. BGP-LU (seamless MPLS) 1133 2. Inter-domain RSVP-TE 1134 3. Inter-domain SR-TE 1136 This version of the draft provides an analysis of the inter-domain 1137 SR-TE option. A future update of this draft will provide a high- 1138 level analysis of the BGP-LU option. 1140 It is assumed that each packet domain in Figure 4 is implementing 1141 SR-TE and the stitching between two domains is done using end-to- 1142 end/multi-domain SR-TE. It is assumed that the bandwidth of each 1143 intra-domain SR-TE path is managed by its respective P-PNC and that 1144 binding SID is used for the end-to-end SR-TE path stitching. 
It is 1145 assumed that each packet domain in Figure 4 is using TI-LFA, with 1146 SRLG awareness, for local protection within each domain. 1148 [Editor's note:] Analyze how TI-LFA can take into account multi- 1149 layer SRLG disjointness, provided that SRLG information is passed 1150 by the O-PNCs to the P-PNC through the MDSC. 1152 It is assumed that the MDSC adopts the partial summarization model, 1153 described in section 2.2, having full visibility of the packet layer 1154 TE topology and an abstract view of the underlay optical layer TE 1155 topology. 1157 The MDSC needs to translate the L3VPN SLA requirements into TE 1158 requirements (e.g., bandwidth, TE metric bounds, SRLG disjointness, 1159 node/link/domain inclusion/exclusion) and find the SR-TE paths 1160 between PE13 (hub PE) and, respectively, PE23 and PE14 (spoke PEs) 1161 that meet these TE requirements. 1163 For each SR-TE path required to support the L3VPN, it is possible 1164 that: 1166 1. An SR-TE path that meets the TE requirements already exists in the 1167 network. 1169 2. An existing SR-TE path could be modified (e.g., through bandwidth 1170 increase) to meet the TE requirements: 1172 a. The SR-TE path characteristics can be modified only in the 1173 packet layer. 1175 b. One or more new underlay Optical Tunnels need to be set up to 1176 support the requested changes of the overlay SR-TE paths 1177 (multi-layer coordination is required). 1179 3. A new SR-TE path needs to be set up: 1181 a. The new SR-TE path reuses existing underlay Optical Tunnels; 1183 b. One or more new underlay Optical Tunnels need to be set up to 1184 support the setup of the new SR-TE path (multi-layer 1185 coordination is required).
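The cases enumerated above can be summarized as a simple decision function. The inputs and the returned labels are illustrative assumptions for exposition; a real MDSC would derive them from the TE topology and tunnel state learned via the MPIs.

```python
# Hedged sketch of the per-path decision the MDSC applies:
#   case 1  - reuse an existing SR-TE path;
#   case 2a - modify an existing path in the packet layer only;
#   case 2b - modify an existing path, needing new optical tunnels;
#   case 3a - set up a new path over existing optical tunnels;
#   case 3b - set up a new path, needing new optical tunnels.
def classify_sr_te_action(existing_path, packet_headroom, optical_headroom):
    """existing_path: None or a dict with a 'meets_te_reqs' flag."""
    if existing_path and existing_path["meets_te_reqs"]:
        return "reuse existing SR-TE path"                      # case 1
    if existing_path:
        if packet_headroom:
            return "modify SR-TE path (packet layer only)"      # case 2a
        return "modify SR-TE path (new optical tunnels)"        # case 2b
    if optical_headroom:
        return "new SR-TE path over existing optical tunnels"   # case 3a
    return "new SR-TE path with new optical tunnels"            # case 3b


print(classify_sr_te_action({"meets_te_reqs": False}, False, True))
```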
1187 For example, considering the L3VPN in Figure 4, the MDSC discovers 1188 that: 1190 o a PE13-P16-PE14 SR-TE path already exists but does not have enough 1191 bandwidth to support the new L3VPN, as described in section 1192 4.1.4; 1194 o the IP link(s) between P16 and PE14 do not have enough bandwidth to 1195 support increasing the bandwidth of that SR-TE path, as described 1196 in section 4.1; 1198 o a new underlay optical tunnel could be set up to increase the 1199 bandwidth of the IP link(s) between P16 and PE14, to support increasing 1200 the bandwidth of that overlay SR-TE path, as described in section 1201 4.2.1. The dimensioning of the underlay optical tunnel is decided 1202 by the MDSC based on the bandwidth requested by the SR-TE path 1203 and on its multi-layer optimization policy, which is an internal 1204 MDSC implementation issue. 1206 The MDSC would therefore request: 1208 o the O-PNC1 to set up a new optical tunnel between the ROADMs 1209 connected to P16 and PE14, as described in section 4.2.2; 1211 o the P-PNC1 to update the configuration of the existing IP link, 1212 in case of LAG, or to configure a new IP link, in case of ECMP, 1213 between P16 and PE14, as described in section 4.2.2; 1215 o the P-PNC1 to update the bandwidth of the selected SR-TE path 1216 between PE13 and PE14, as described in section 4.2.3. 1218 For example, considering the L3VPN in Figure 4, the MDSC can also 1219 decide that a new multi-domain SR-TE path needs to be set up between 1220 PE13 and PE23. 1222 As described in section 2.2, with partial summarization, the MDSC 1223 will use the TE topology information provided by the P-PNCs and the 1224 results of the path computation requests sent to the O-PNCs, as 1225 described in section 4.2.1, to compute the multi-layer/multi-domain 1226 path between PE13 and PE23.
1228 For example, the multi-layer/multi-domain path computation performed by the 1229 MDSC could require the setup of: 1231 o a new underlay optical tunnel between PE13 and BR11, supporting a 1232 new IP link, as described in section 4.2.2; 1234 o a new underlay optical tunnel between BR21 and P24, to increase 1235 the bandwidth of the IP link(s) between BR21 and P24, as 1236 described in section 4.2.2. 1238 After that, the MDSC requests P-PNC2 to set up an SR-TE path between 1239 BR21 and PE23, with an explicit path (BR21, P24, PE23), as described 1240 in section 4.2.3. P-PNC2, knowing the node and adjacency 1241 SIDs assigned within its domain, can install the proper SR policy, 1242 or hierarchical policies, within BR21 and return to the MDSC the 1243 assigned binding SID. 1245 [Editor's Note] Further investigation is needed for the SR specific 1246 extensions to the TE tunnel model. 1248 The MDSC then requests P-PNC1 to set up an SR-TE path between PE13 and BR11, 1249 with an explicit path (PE13, BR11), specifying the inter-domain link 1250 toward BR21 and the binding SID to be used for the end-to-end SR-TE 1251 path stitching, as described in section 4.2.3. P-PNC1, also knowing 1252 the node and adjacency SIDs assigned within its domain and 1253 the EPE SID assigned by BR11 to the inter-domain link toward BR21, 1254 installs the proper policy, or policies, within PE13. 1256 Once the SR-TE paths have been selected and, if needed, 1257 set up or modified, the MDSC can request both P-PNCs to configure the 1258 L3VPN and its binding with the selected SR-TE paths using the 1259 [L3NM] and [TSM] YANG models. 1261 [Editor's Note] Further investigation is needed to understand how 1262 the binding between an L3VPN and this new end-to-end SR-TE path can 1263 be configured. 1265 4.2.1. Optical Path Computation 1267 As described in section 2.2, optical path computation is usually 1268 performed by the Optical PNC.
1270 When performing multi-layer/multi-domain path computation, the MDSC 1271 can delegate single-domain optical path computation to the Optical 1272 PNCs. 1274 As discussed in [PATH-COMPUTE], there are two options to request an 1275 Optical PNC to perform optical path computation: either via a 1276 "compute-only" TE tunnel path, using the generic TE tunnel YANG data 1277 model defined in [TE-TUNNEL], or via the path computation RPC defined 1278 in [PATH-COMPUTE]. 1280 This draft assumes that the path computation RPC is used. 1282 There are no YANG data models in the IETF that could be used to augment 1283 the generic path computation RPC with technology-specific 1284 attributes. 1286 Optical technology-specific augmentation of the path computation 1287 RPC is identified as a gap requiring further work, outside of this 1288 draft's scope. 1290 4.2.2. Multi-layer IP Link Setup and Update 1292 The MDSC requests the O-PNC to set up an Optical Tunnel (either a 1293 WSON Tunnel, a Flexi-grid Tunnel, or an OTN Tunnel) within the 1294 Optical network between the two Optical Transponders (OTs), in case 1295 of a DWDM network, or the two OTN access cards, in case of an OTN 1296 network, associated with the two access links. 1298 The MDSC also requests the O-PNC to steer the Ethernet client 1299 traffic between the two access Ethernet links over the Optical 1300 Tunnel. 1302 After the Optical Tunnel has been set up and the client traffic 1303 steering configured, the two IP routers can exchange Ethernet 1304 packets between themselves, including LLDP messages. 1306 If LLDP [IEEE 802.1AB] is used between the two routers, the P-PNC 1307 can automatically discover the IP Link being set up by the MDSC. The 1308 IP LTPs terminating this IP Link are supported by the ETH LTPs 1309 terminating the two access links.
1311 Otherwise, the MDSC needs to request the P-PNC to configure an IP 1312 Link between the two routers; the MDSC also configures the two ETH 1313 LTPs which support the two IP LTPs terminating this IP Link. 1315 [Editor's Note] Add text for IP link update and clarify that the IP 1316 link bandwidth increase can be done either by LAG or by ECMP. Both 1317 options are valid and widely deployed, and more or less equivalent from 1318 a POI perspective. 1320 4.2.3. SR-TE Path Setup and Update 1322 This version of the draft assumes that SR-TE path setup and update 1323 at the MPI could be done using the generic TE tunnel YANG data 1324 model, defined in [TE-TUNNEL], with SR-TE specific augmentations, as 1325 also outlined in section 1 of [TE-TUNNEL]. 1327 The MDSC can use the [TE-TUNNEL] model to request the P-PNC to set up 1328 TE paths, specifying the explicit path to force the P-PNC to set up 1329 the actual path computed by the MDSC. 1331 The [TE-TUNNEL] model supports requesting the setup of both end- 1332 to-end and segment TE paths (within one domain). 1334 In the latter case, SR-TE specific augmentations of the [TE-TUNNEL] 1335 model should be defined to allow the MDSC to configure the binding 1336 SIDs to be used for the end-to-end SR-TE path stitching and to allow 1337 the P-PNC to report the binding SID assigned to the segment TE 1338 paths. 1340 The assigned binding SID should be persistent across router or P- 1341 PNC reboots. 1343 The MDSC can also use the [TE-TUNNEL] model to request the P-PNC to 1344 increase the bandwidth allocated to an existing TE path and, if 1345 needed, to its reverse TE path. The [TE-TUNNEL] model supports 1346 both symmetric and asymmetric bandwidth configuration in the two 1347 directions. 1349 SR-TE path setup and update (e.g., bandwidth increase) through the MPI 1350 is identified as a gap requiring further work, which is outside of 1351 the scope of this draft. 1353 5.
Security Considerations 1355 Several security considerations have been identified and will be 1356 discussed in future versions of this document. 1358 6. Operational Considerations 1360 Telemetry data, such as collecting lower-layer networking health and 1361 consideration of network and service performance from POI domain 1362 controllers, may be required. These requirements and capabilities 1363 will be discussed in future versions of this document. 1365 7. IANA Considerations 1367 This document requires no IANA actions. 1369 8. References 1371 8.1. Normative References 1373 [RFC7950] Bjorklund, M. et al., "The YANG 1.1 Data Modeling 1374 Language", RFC 7950, August 2016. 1376 [RFC7951] Lhotka, L., "JSON Encoding of Data Modeled with YANG", RFC 1377 7951, August 2016. 1379 [RFC8040] Bierman, A. et al., "RESTCONF Protocol", RFC 8040, January 1380 2017. 1382 [RFC8345] Clemm, A., Medved, J. et al., "A YANG Data Model for 1383 Network Topologies", RFC 8345, March 2018. 1385 [RFC8346] Clemm, A. et al., "A YANG Data Model for Layer 3 1386 Topologies", RFC 8346, March 2018. 1388 [RFC8453] Ceccarelli, D., Lee, Y. et al., "Framework for Abstraction 1389 and Control of TE Networks (ACTN)", RFC 8453, August 2018. 1391 [RFC8525] Bierman, A. et al., "YANG Library", RFC 8525, March 2019. 1393 [RFC8795] Liu, X. et al., "YANG Data Model for Traffic Engineering 1394 (TE) Topologies", RFC 8795, August 2020. 1396 [IEEE 802.1AB] IEEE 802.1AB-2016, "IEEE Standard for Local and 1397 metropolitan area networks - Station and Media Access 1398 Control Connectivity Discovery", March 2016. 1400 [WSON-TOPO] Lee, Y. et al., "A YANG Data Model for WSON (Wavelength 1401 Switched Optical Networks)", draft-ietf-ccamp-wson-yang, 1402 work in progress. 1404 [Flexi-TOPO] Lopez de Vergara, J. E. et al., "YANG data model for 1405 Flexi-Grid Optical Networks", draft-ietf-ccamp-flexigrid- 1406 yang, work in progress. 1408 [OTN-TOPO] Zheng, H.
et al., "A YANG Data Model for Optical 1409 Transport Network Topology", draft-ietf-ccamp-otn-topo- 1410 yang, work in progress. 1412 [CLIENT-TOPO] Zheng, H. et al., "A YANG Data Model for Client-layer 1413 Topology", draft-zheng-ccamp-client-topo-yang, work in 1414 progress. 1416 [L3-TE-TOPO] Liu, X. et al., "YANG Data Model for Layer 3 TE 1417 Topologies", draft-ietf-teas-yang-l3-te-topo, work in 1418 progress. 1420 [SR-TE-TOPO] Liu, X. et al., "YANG Data Model for SR and SR TE 1421 Topologies on MPLS Data Plane", draft-ietf-teas-yang-sr- 1422 te-topo, work in progress. 1424 [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic 1425 Engineering Tunnels and Interfaces", draft-ietf-teas-yang- 1426 te, work in progress. 1428 [WSON-TUNNEL] Lee, Y. et al., "A YANG Data Model for WSON Tunnel", 1429 draft-ietf-ccamp-wson-tunnel-model, work in progress. 1431 [Flexi-MC] Lopez de Vergara, J. E. et al., "YANG data model for 1432 Flexi-Grid media-channels", draft-ietf-ccamp-flexigrid- 1433 media-channel-yang, work in progress. 1435 [OTN-TUNNEL] Zheng, H. et al., "OTN Tunnel YANG Model", draft- 1436 ietf-ccamp-otn-tunnel-model, work in progress. 1438 [PATH-COMPUTE] Busi, I., Belotti, S. et al., "YANG Model for 1439 Requesting Path Computation", draft-ietf-teas-yang-path- 1440 computation, work in progress. 1442 [CLIENT-SIGNAL] Zheng, H. et al., "A YANG Data Model for Transport 1443 Network Client Signals", draft-ietf-ccamp-client-signal- 1444 yang, work in progress. 1446 [L2NM] Barguil, S. et al., "A Layer 2 VPN Network YANG Model", 1447 draft-ietf-opsawg-l2nm, work in progress. 1449 [L3NM] Barguil, S. et al., "A Layer 3 VPN Network YANG Model", 1450 draft-ietf-opsawg-l3sm-l3nm, work in progress. 1452 [TSM] Lee, Y. et al., "Traffic Engineering and Service Mapping 1453 YANG Model", draft-ietf-teas-te-service-mapping-yang, work 1454 in progress. 1456 8.2. Informative References 1458 [RFC1930] Hawkinson, J. and T.
Bates, "Guidelines for creation, 1459 selection, and registration of an Autonomous System (AS)", 1460 RFC 1930, March 1996. 1462 [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private 1463 Networks (VPNs)", RFC 4364, February 2006. 1465 [RFC4761] Kompella, K., Ed. and Y. Rekhter, Ed., "Virtual Private 1466 LAN Service (VPLS) Using BGP for Auto-Discovery and 1467 Signaling", RFC 4761, January 2007. 1469 [RFC6074] Rosen, E., Davie, B., Radoaca, V., and W. Luo, 1470 "Provisioning, Auto-Discovery, and Signaling in Layer 2 1471 Virtual Private Networks (L2VPNs)", RFC 6074, January 2011. 1473 [RFC6624] Kompella, K., Kothari, B., and R. Cherukuri, "Layer 2 1474 Virtual Private Networks Using BGP for Auto-Discovery and 1475 Signaling", RFC 6624, May 2012. 1477 [RFC7209] Sajassi, A., Aggarwal, R., Uttaro, J., Bitar, N., 1478 Henderickx, W., and A. Isaac, "Requirements for Ethernet 1479 VPN (EVPN)", RFC 7209, May 2014. 1481 [RFC7432] Sajassi, A., Ed., et al., "BGP MPLS-Based Ethernet VPN", 1482 RFC 7432, February 2015. 1484 [RFC7436] Shah, H., Rosen, E., Le Faucheur, F., and G. Heron, 1485 "IP-Only LAN Service (IPLS)", RFC 7436, January 2015. 1487 [RFC8214] Boutros, S., Sajassi, A., Salam, S., Drake, J., and J. 1488 Rabadan, "Virtual Private Wire Service Support in Ethernet 1489 VPN", RFC 8214, August 2017. 1491 [RFC8299] Wu, Q., Litkowski, S., Tomotaki, L., and K. Ogaki, "YANG 1492 Data Model for L3VPN Service Delivery", RFC 8299, January 2018. 1494 [RFC8309] Wu, Q., Liu, W., and A. Farrel, "Service Models Explained", 1495 RFC 8309, January 2018. 1497 [RFC8466] Fioccola, G., Ed., "A YANG Data Model for Layer 2 Virtual 1498 Private Network (L2VPN) Service Delivery", RFC 8466, 1499 October 2018. 1501 [TNBI] Busi, I., King, D. et al., "Transport Northbound 1502 Interface Applicability Statement", draft-ietf-ccamp- 1503 transport-nbi-app-statement, work in progress. 1505 [VN] Lee, Y. et al., "A YANG Data Model for ACTN VN Operation", 1506 draft-ietf-teas-actn-vn-yang, work in progress.
1508 [ACTN-PM] Lee, Y. et al., "YANG models for VN & TE Performance 1509 Monitoring Telemetry and Scaling Intent Autonomics", 1510 draft-lee-teas-actn-pm-telemetry-autonomics, work in 1511 progress. 1513 [BGP-L3VPN] Jain, D. et al., "YANG Data Model for BGP/MPLS L3 VPNs", 1514 draft-ietf-bess-l3vpn-yang, work in progress. 1516 Appendix A. Multi-layer and multi-domain resiliency 1518 A.1. Maintenance Window 1520 Before a planned maintenance operation on the DWDM network takes 1521 place, IP traffic should be moved hitlessly to another link. 1523 The MDSC must reroute IP traffic before the event takes place. It 1524 should be possible to lock IP traffic to the protection route until 1525 the maintenance event is finished, unless a fault occurs on that 1526 path. 1528 A.2. Router port failure 1530 The focus is on a client-side protection scheme between the IP 1531 router and the reconfigurable ROADM. The scenario here is to define 1532 only one port in the routers and in the ROADM muxponder board at 1533 both ends as a back-up port to recover any other port failure on the 1534 client side of the ROADM (either on the router port side, on the 1535 muxponder side, or on the link between them). When a client-side 1536 port failure occurs, alarms are raised to the MDSC by the IP-PNC and 1537 O-PNC (port status down, LOS, etc.). 1538 The MDSC checks with the O-PNC(s) that there is no optical failure 1539 in the optical layer. 1540 There can be two cases here: 1542 a) A LAG was defined between the two end routers. The MDSC, after 1543 checking that the optical layer is fine between the two end ROADMs, 1544 triggers the ROADM configuration so that the router back-up port 1545 with its associated muxponder port can reuse the OCh that was 1546 already in use by the failed router port, and adds the new link to 1547 the LAG on the failed side. 1549 While the ROADM reconfiguration takes place, IP/MPLS traffic uses 1550 the reduced bandwidth of the IP link bundle, discarding 1551 lower-priority traffic if required.
Once the back-up port has been 1552 reconfigured to reuse the existing OCh and the new link has been 1553 added to the LAG, the original bandwidth is recovered between the 1554 end routers. 1556 Note: in this LAG scenario, it is assumed that BFD runs at the LAG 1557 level, so that nothing is triggered at the MPLS level when one of 1558 the member links of the LAG fails. 1560 b) If there is no LAG, the scenario is less clear, since a router 1561 port failure would automatically trigger (through BFD failure) 1562 first a sub-50ms protection at the MPLS level: FRR (MPLS RSVP-TE 1563 case) or TI-LFA (MPLS-based SR-TE case) through a protection 1564 port. At the same time, the MDSC, after checking that the optical 1565 network connection is still fine, would trigger the 1566 reconfiguration of the back-up port of the router and of the ROADM 1567 muxponder to reuse the same OCh as the one used originally for the 1568 failed router port. Once everything has been correctly configured, 1569 the MDSC Global PCE could suggest that the operator trigger a re- 1570 optimization of the back-up MPLS path to go back to the MPLS 1571 primary path through the back-up port of the router and the 1572 original OCh, if the overall cost, latency, etc. are improved. 1573 However, in this scenario, there is a need for a protection port 1574 PLUS a back-up port in the router, which does not lead to clear 1575 port savings. 1576 Acknowledgments 1578 This document was prepared using 2-Word-v2.0.template.dot. 1580 Some of this analysis work was supported in part by the European 1581 Commission funded H2020-ICT-2016-2 METRO-HAUL project (G.A. 761727).
1583 Contributors 1585 Sergio Belotti 1586 Nokia 1588 Email: sergio.belotti@nokia.com 1590 Gabriele Galimberti 1591 Cisco 1593 Email: ggalimbe@cisco.com 1595 Zheng Yanlei 1596 China Unicom 1598 Email: zhengyanlei@chinaunicom.cn 1599 Anton Snitser 1600 Sedona 1602 Email: antons@sedonasys.com 1604 Washington Costa Pereira Correia 1605 TIM Brasil 1607 Email: wcorreia@timbrasil.com.br 1609 Michael Scharf 1610 Hochschule Esslingen - University of Applied Sciences 1612 Email: michael.scharf@hs-esslingen.de 1614 Young Lee 1615 Sung Kyun Kwan University 1617 Email: younglee.tx@gmail.com 1619 Jeff Tantsura 1620 Apstra 1622 Email: jefftant.ietf@gmail.com 1624 Paolo Volpato 1625 Huawei 1627 Email: paolo.volpato@huawei.com 1629 Brent Foster 1630 Cisco 1632 Email: brfoster@cisco.com 1634 Authors' Addresses 1636 Fabio Peruzzini 1637 TIM 1639 Email: fabio.peruzzini@telecomitalia.it 1641 Jean-Francois Bouquier 1642 Vodafone 1644 Email: jeff.bouquier@vodafone.com 1646 Italo Busi 1647 Huawei 1649 Email: Italo.busi@huawei.com 1651 Daniel King 1652 Old Dog Consulting 1654 Email: daniel@olddog.co.uk 1656 Daniele Ceccarelli 1657 Ericsson 1659 Email: daniele.ceccarelli@ericsson.com