TEAS Working Group                                      Fabio Peruzzini
Internet Draft                                                      TIM
Intended status: Informational                   Jean-Francois Bouquier
                                                               Vodafone
                                                             Italo Busi
                                                                 Huawei
                                                            Daniel King
                                                     Old Dog Consulting
                                                     Daniele Ceccarelli
                                                               Ericsson

Expires: November 2021                                     May 14, 2021

     Applicability of Abstraction and Control of Traffic Engineered
           Networks (ACTN) to Packet Optical Integration (POI)

                draft-ietf-teas-actn-poi-applicability-02

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on November 14, 2021.

Copyright Notice

   Copyright (c) 2021 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Abstract

   This document considers the applicability of the Abstraction and
   Control of TE Networks (ACTN) architecture to Packet Optical
   Integration (POI) in the context of IP/MPLS and optical
   internetworking.  It identifies the YANG data models being defined
   by the IETF to support this deployment architecture, as well as
   specific scenarios relevant for Service Providers.

   Existing IETF protocols and data models are identified for each
   multi-layer (packet over optical) scenario, with a particular focus
   on the MPI (Multi-Domain Service Coordinator to Provisioning Network
   Controllers Interface) in the ACTN architecture.

Table of Contents

   1. Introduction
   2. Reference architecture and network scenario
      2.1. L2/L3VPN Service Request in North Bound of MDSC
      2.2. Service and Network Orchestration
         2.2.1. Hard Isolation
         2.2.2. Shared Tunnel Selection
      2.3. IP/MPLS Domain Controller and NE Functions
      2.4. Optical Domain Controller and NE Functions
   3. Interface protocols and YANG data models for the MPIs
      3.1. RESTCONF protocol at the MPIs
      3.2. YANG data models at the MPIs
         3.2.1. Common YANG data models at the MPIs
         3.2.2. YANG models at the Optical MPIs
         3.2.3. YANG data models at the Packet MPIs
      3.3. PCEP
   4. Multi-layer and multi-domain services scenarios
      4.1. Scenario 1: inventory, service and network topology
           discovery
         4.1.1. Inter-domain link discovery
         4.1.2. IP Link Setup Procedure
         4.1.3. Inventory discovery
      4.2. L2VPN/L3VPN establishment
   5. Security Considerations
   6. Operational Considerations
   7. IANA Considerations
   8. References
      8.1. Normative References
      8.2. Informative References
   Appendix A. Multi-layer and multi-domain resiliency
      A.1. Maintenance Window
      A.2. Router port failure
   Acknowledgments
   Contributors
   Authors' Addresses

1. Introduction

   The full automation of the management and control of Service
   Providers' transport networks (IP/MPLS, optical, and also microwave)
   is key to meeting the new challenges that come with 5G, as well as
   the increased demand for business agility and mobility in a digital
   world.  By abstracting the complexity of the optical and IP/MPLS
   networks towards the MDSC, and then from the MDSC towards the
   OSS/BSS or orchestration layer, through the use of standard
   interfaces and data models, the ACTN architecture enables a wide
   range of transport connectivity services that can be requested by
   the upper layers, fulfilling almost any kind of service level
   requirement from a network perspective (e.g., physical diversity,
   latency, bandwidth, topology, etc.).

   Packet Optical Integration (POI) is an advanced use case of traffic
   engineering.  In wide-area networks, a packet network based on the
   Internet Protocol (IP), and possibly Multiprotocol Label Switching
   (MPLS), is typically realized on top of an optical transport network
   that uses Dense Wavelength Division Multiplexing (DWDM) (and
   optionally an Optical Transport Network (OTN) layer).  In many
   existing network deployments, the packet and the optical networks
   are engineered and operated independently of each other.  There are
   technical differences between the technologies (e.g., routers vs.
   optical switches) and the corresponding network engineering and
   planning methods (e.g., inter-domain peering optimization in IP vs.
   dealing with physical impairments in DWDM, or very different time
   scales).  In addition, customers' needs can be different between a
   packet and an optical network, and it is not uncommon to use
   different vendors in both domains.
   Last but not least, state-of-the-art packet and optical networks use
   sophisticated but complex technologies, and for a network engineer
   it may not be trivial to be a full expert in both areas.  As a
   result, packet and optical networks are often operated in technical
   and organizational silos.

   This separation is inefficient for many reasons.  Both capital
   expenditure (CAPEX) and operational expenditure (OPEX) could be
   significantly reduced by better integrating the packet and the
   optical network.  Multi-layer online topology insight can speed up
   troubleshooting (e.g., alarm correlation) and network operation
   (e.g., coordination of maintenance events), multi-layer offline
   topology inventory can improve service quality (e.g., detection of
   diversity constraint violations), and multi-layer traffic
   engineering can use the available network capacity more efficiently
   (e.g., coordination of restoration).  In addition, provisioning
   workflows can be simplified or automated as needed across layers
   (e.g., to achieve bandwidth on demand, or to perform maintenance
   events).

   The ACTN framework enables this complete multi-layer and
   multi-vendor integration of packet and optical networks through the
   MDSC and the packet and optical PNCs.

   In this document, key scenarios for Packet Optical Integration (POI)
   are described from the packet service layer perspective.  The
   objective is to explain the benefit and the impact for both the
   packet and the optical layer, and to identify the required
   coordination between both layers.  Precise definitions of scenarios
   can help with achieving a common understanding across different
   disciplines.  The focus of the scenarios is on IP/MPLS networks
   operated as clients of optical DWDM networks.  The scenarios are
   ordered by increasing level of integration and complexity.  For each
   multi-layer scenario, the document analyzes how to use the
   interfaces and data models of the ACTN architecture.

   Understanding the level of standardization and the possible gaps
   will help to better assess the feasibility of integration between
   the IP and optical DWDM domains (and optionally the OTN layer), from
   an end-to-end multi-vendor service provisioning perspective.
2. Reference architecture and network scenario

   This document analyses a number of deployment scenarios for Packet
   and Optical Integration (POI) in which the ACTN hierarchy is
   deployed to control a multi-layer and multi-domain network, with two
   optical domains and two packet domains, as shown in Figure 1:

                              +----------+
                              |   MDSC   |
                              +-----+----+
                                    |
               +-----------+-------+-----+-----------+
               |           |             |           |
          +----+----+ +----+----+   +----+----+ +----+----+
          | P-PNC 1 | | O-PNC 1 |   | O-PNC 2 | | P-PNC 2 |
          +----+----+ +----+----+   +----+----+ +----+----+
               |           |             |           |
               |            \           /            |
    +-------------------+    \         /    +-------------------+
CE1 / PE1           BR1 \     |       |    / BR2           PE2 \ CE2
 o--/---o            o---\----|-------|---/---o            o---\--o
    \ :               : /     |       |      \ :               : /
     \ : PKT Domain 1 : /     |       |       \ : PKT Domain 2 : /
      +-:-------------:-+     |       |        +-:-------------:-+
        :             :       |       |          :             :
        :             :       |       |          :             :
    +---:-------------:-------+       +----------:-------------:--+
   /    :             :        \     /           :             :   \
  /     o.............o         \   /            o.............o    \
  \       Optical Domain 1      /   \             Optical Domain 2  /
   \                           /     \                              /
    +-------------------------+       +-----------------------------+

                     Figure 1 - Reference Scenario

   The ACTN architecture, defined in [RFC8453], is used to control this
   multi-domain network, where each Packet PNC (P-PNC) is responsible
   for controlling its IP domain, which can be either an Autonomous
   System (AS) [RFC1930] or an IGP area within the same operator
   network, and each Optical PNC (O-PNC) is responsible for controlling
   its optical domain.

   The routers between IP domains can be either AS Boundary Routers
   (ASBRs) or Area Border Routers (ABRs): in this document, the generic
   term Border Router (BR) is used to represent either an ASBR or an
   ABR.

   The MDSC is responsible for coordinating the whole multi-domain
   multi-layer (packet and optical) network.  A specific standard
   interface (MPI) permits the MDSC to interact with the different
   Provisioning Network Controllers (O/P-PNCs).

   The MPI presents an abstracted topology to the MDSC, hiding
   technology-specific aspects of the network and hiding topology
   details depending on the policy chosen regarding the level of
   abstraction supported.  The level of abstraction can be obtained
   based on P-PNC and O-PNC configuration parameters (e.g., provide the
   potential connectivity between any PE and any BR in an MPLS-TE
   network).

   In the network scenario of Figure 1, it is assumed that:

   o  The domain boundaries between the IP and optical domains are
      congruent.  In other words, one optical domain supports
      connectivity between routers in one and only one packet domain;

   o  Inter-domain links exist only between packet domains (i.e.,
      between BRs) and between packet and optical domains (i.e.,
      between routers and optical NEs).  In other words, there are no
      inter-domain links between optical domains;

   o  The interfaces between the routers and the optical NEs are
      "Ethernet" physical interfaces;

   o  The interfaces between the Border Routers (BRs) are "Ethernet"
      physical interfaces.

   This version of the document assumes that the IP links supported by
   the optical network are always intra-AS (PE-BR, intra-domain BR-BR,
   PE-P, BR-P, or P-P) and that the BRs are co-located and connected by
   an IP link supported by an Ethernet physical link.
   The possibility of setting up inter-AS/inter-area IP links (e.g.,
   inter-domain BR-BR or PE-PE) supported by the optical network is for
   further study.

   Therefore, if inter-domain links between the optical domains exist,
   they would be used to support multi-domain optical services, which
   are outside the scope of this document.

   The optical NEs within the optical domains can be ROADMs or OTN
   switches, with or without a ROADM.

   The MDSC in Figure 1 is responsible for multi-domain and multi-layer
   coordination across multiple packet and optical domains, as well as
   for providing L2/L3VPN services.

   Although new technologies (e.g., QSFP-DD ZR 400G) are making it
   convenient to fit DWDM pluggable interfaces on the routers, the
   deployment of those pluggables is not yet widely adopted by
   operators.  The reason is that most operators are not yet ready to
   manage packet and transport networks in a unified single domain.  As
   a consequence, this draft does not address the unified scenario.
   This matter will be described in a different draft.

   From an implementation perspective, the functions associated with
   the MDSC and described in [RFC8453] may be grouped in different
   ways.

   1. Both the service- and network-related functions are collapsed
      into a single, monolithic implementation, dealing with the end
      customer service requests, received from the CMI (Customer MDSC
      Interface), and the adaptation to the relevant network models.
      Such a case is represented in Figure 2 of [RFC8453].

   2. An implementation can choose to split the service-related and the
      network-related functions into different functional entities, as
      described in [RFC8309] and in section 4.2 of [RFC8453].  In this
      case, the MDSC is decomposed into a top-level Service
      Orchestrator, interfacing the customer via the CMI, and into a
      Network Orchestrator interfacing at the southbound with the PNCs.
      The interface between the Service Orchestrator and the Network
      Orchestrator is not specified in [RFC8453].

   3. Another implementation can choose to split the MDSC functions
      between an H-MDSC, responsible for packet-optical multi-layer
      coordination, interfacing with one optical L-MDSC, providing
      multi-domain coordination between the O-PNCs, and one packet
      L-MDSC, providing multi-domain coordination between the P-PNCs
      (see for example Figure 9 of [RFC8453]).

   4. Another implementation can also choose to combine the MDSC and
      the P-PNC functions together.

   Please note that in current service providers' network deployments,
   at the north bound of the MDSC, instead of a CNC there is typically
   an OSS/Orchestration layer.  In this case, the MDSC would implement
   only the Network Orchestration functions, as in [RFC8309] and as
   described in point 2 above.  In this case, the MDSC deals with the
   network service requests received from the OSS/Orchestration layer.

   [Editors' note:] Check for a better term to define the network
   services.  It may be worthwhile defining what the customer and
   network services are.

   The OSS/Orchestration layer is a key part of the architecture
   framework for a service provider:

   o  to abstract (through the MDSC and PNCs) the underlying transport
      network complexity to the Business Systems Support layer;
   o  to coordinate NFV, Transport (e.g., IP, optical, and microwave
      networks), Fixed Access, Core, and Radio domains, enabling full
      automation of end-to-end services to the end customers;

   o  to enable catalogue-driven service provisioning from external
      applications (e.g., a customer portal for enterprise business
      services), orchestrating the design and lifecycle management of
      these end-to-end transport connectivity services, consuming IP
      and/or optical transport connectivity services upon request.

   The functionality of the OSS/Orchestration layer, as well as the
   interface toward the MDSC, is usually operator-specific and outside
   the scope of this draft.  This document assumes that the
   OSS/Orchestrator requests the MDSC to set up L2VPN/L3VPN services
   through mechanisms which are outside the scope of the draft.

   There are two main cases when MDSC coordination of the underlying
   PNCs in a POI context is initiated:

   o  Initiated by a request from the OSS/Orchestration layer to set up
      L2VPN/L3VPN services that require multi-layer/multi-domain
      coordination.

   o  Initiated by the MDSC itself to perform multi-layer/multi-domain
      optimizations and/or maintenance works, beyond discovery (e.g.,
      rerouting LSPs with their associated services when putting a
      resource, like a fibre, into maintenance mode during a
      maintenance window).  Unlike service fulfillment, these workflows
      are not related to a service provisioning request received from
      the OSS/Orchestration layer.

   Both MDSC workflow cases above are in the scope of this draft.  The
   workflow initiation is transparent at the MPI.

2.1. L2/L3VPN Service Request in North Bound of MDSC

   As explained in section 2, the OSS/Orchestration layer can request
   the MDSC to set up L2/L3VPN services (with or without TE
   requirements).

   Although the interface between the OSS/Orchestration layer and the
   MDSC is usually operator-specific, ideally it would use a
   RESTCONF/YANG interface with a more abstracted version of the MPI
   YANG data models used for network configuration (e.g., L3NM, L2NM).

   Figure 2 shows an example of a possible control flow between the
   OSS/Orchestration layer and the MDSC to instantiate L2/L3VPN
   services, using the YANG models under definition in [VN], [L2NM],
   [L3NM] and [TSM].

          +-------------------------------------------+
          |                                           |
          |          OSS/Orchestration layer          |
          |                                           |
          +-----------------------+-------------------+
                                  |
              1.VN  2. L2/L3NM &  |       ^
                       TSM        |       |
                |       |         |       |
                |       |         |       |
                v       v         |  3. Update VN
                                  |
          +-----------------------+-------------------+
          |                                           |
          |                   MDSC                    |
          |                                           |
          +-------------------------------------------+

                  Figure 2 Service Request Process

   o  The VN YANG model [VN], whose primary focus is the CMI, can also
      be used to provide VN Service configuration from an orchestrated
      connectivity service point of view, when the L2/L3VPN service has
      TE requirements.  This model is not used to set up L2/L3VPN
      services with no TE requirements.

      o  It provides the profile of the VN in terms of VN members, each
         of which corresponds to an edge-to-edge link between customer
         end-points (VNAPs).  It also provides the mappings between the
         VNAPs and the LTPs, and between the connectivity matrix and
         the VN member, from which the associated traffic matrix (e.g.,
         bandwidth, latency, protection level, etc.) of the VN member
         is expressed (i.e., via the TE topology's connectivity
         matrix).

      o  The model also provides VN-level preference information
         (e.g., VN member diversity) and VN-level admin-status and
         operational-status.

   o  The L2NM YANG model [L2NM], whose primary focus is the MPI, can
      also be used to provide L2VPN service configuration and site
      information from an orchestrated connectivity service point of
      view.

   o  The L3NM YANG model [L3NM], whose primary focus is the MPI, can
      also be used to provide all L3VPN service configuration and site
      information from an orchestrated connectivity service point of
      view.

   o  The TE & Service Mapping YANG model [TSM] provides TE-service
      mapping as well as site mapping (see the sketch after this
      list).

      o  TE-service mapping provides the mapping between an L2/L3VPN
         instance and the corresponding VN instances.

      o  The TE-service mapping also provides the service mapping
         requirement type as to how each L2/L3VPN/VN instance is
         created with respect to the underlay TE tunnels (e.g.,
         whether they require a new and isolated set of TE underlay
         tunnels or not).  See Section 2.2 for a detailed discussion
         of the mapping requirement types.

      o  Site mapping provides the site reference information across
         the L2/L3VPN Site ID, the VN Access Point ID, and the LTP of
         the access link.
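   As an illustration only, the following sketch shows how a TE-service
   mapping for an L3VPN with an isolation requirement might look in
   RESTCONF/JSON.  The container and leaf names below are indicative:
   [TSM] is work in progress and its module structure may differ, and
   all identifiers are hypothetical.

      {
        "te-service-mapping": {
          "vpn-id": "l3vpn-acme-01",
          "vn-ref": "vn-acme-01",
          "mapping-type": "hard-isolation",
          "site-mapping": [
            {
              "site-id": "site-milan",
              "vn-ap-ref": "vnap-1",
              "ltp-ref": "pe1-et-0/0/1"
            }
          ]
        }
      }

   In this sketch, "mapping-type" carries the TE binding requirement
   discussed in Section 2.2, while "site-mapping" correlates the VPN
   site with the VN Access Point and the access link LTP.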
2.2. Service and Network Orchestration

   From a functional standpoint, the MDSC represented in Figure 2
   interfaces with the OSS/Orchestration layer and decouples L2/L3VPN
   service configuration functions from network configuration
   functions.  Therefore, in this document the MDSC performs the
   functions of the Network Orchestrator, as defined in [RFC8309].

   One of the important MDSC functions is to identify which TE Tunnels
   should carry the L2/L3VPN traffic (e.g., from the TE & Service
   Mapping configuration) and to relay this information to the P-PNCs,
   to ensure that the PEs' forwarding tables (e.g., VRF) are properly
   populated, according to the TE binding requirement for the L2/L3VPN.

   The TE binding requirement types [TSM] are:

   1. Hard Isolation with deterministic latency: The L2/L3VPN service
      requires a set of dedicated TE Tunnels providing deterministic
      latency performance, which cannot be shared with other services,
      nor compete for bandwidth with other Tunnels.

   2. Hard Isolation: This is similar to the above case but without
      deterministic latency requirements.

   3. Soft Isolation: The L2/L3VPN service requires a set of dedicated
      MPLS-TE tunnels which cannot be shared with other services, but
      which could compete for bandwidth with other Tunnels.

   4. Sharing: The L2/L3VPN service allows sharing the MPLS-TE Tunnels
      supporting it with other services.

   For the first three types, there could be additional TE binding
   requirements with respect to different VN members of the same VN (on
   how different VN members, belonging to the same VN, can or cannot
   share network resources).  For the first two cases, VN members can
   be hard-isolated, soft-isolated, or shared.  For the third case, VN
   members can be soft-isolated or shared.

   In order to fulfill the L2/L3VPN end-to-end TE requirements,
   including the TE binding requirements, the MDSC needs to perform
   multi-layer/multi-domain path computation to select the BRs, the
   intra-domain MPLS-TE Tunnels, and the intra-domain Optical Tunnels.
   Depending on the knowledge that the MDSC has of the topology and
   configuration of the underlying network domains, three models for
   performing path computation are possible:

   1. Summarization: The MDSC has an abstracted TE topology view of all
      of the underlying domains, both packet and optical.  The MDSC
      does not have enough TE topology information to perform
      multi-layer/multi-domain path computation.  Therefore, the MDSC
      delegates the P-PNCs and O-PNCs to perform local path computation
      within their controlled domains, and it uses the information
      returned by the P-PNCs and O-PNCs to compute the optimal
      multi-domain/multi-layer path.

      This model presents an issue for the P-PNC, which does not have
      the capability of performing a single-domain/multi-layer path
      computation (that is, the P-PNC has no means to retrieve the
      topology/configuration information from the optical controller).
      A possible solution could be to include a CNC function in the
      P-PNC to request from the MDSC a multi-domain optical path
      computation, as shown in Figure 10 of [RFC8453].  Another
      possible solution could be to rely on the MDSC recursive
      hierarchy, as defined in section 4.1 of [RFC8453], where, for
      each domain, a "lower-level MDSC" (L-MDSC) provides the essential
      multi-layer correlation and the "higher-level MDSC" (H-MDSC)
      provides the multi-domain coordination.

   2. Partial summarization: The MDSC has full visibility of the TE
      topology of the packet network domains and an abstracted view of
      the TE topology of the optical network domains.

      The MDSC then has only the capability of performing
      multi-domain/single-layer path computation for the packet layer
      (the path can be computed optimally for the two packet domains).
      Therefore, the MDSC still needs to delegate the O-PNCs to perform
      local path computation within their respective domains, and it
      uses the information received from the O-PNCs, together with its
      TE topology view of the multi-domain packet layer, to perform
      multi-layer/multi-domain path computation.

      The role of the P-PNC is minimized, i.e., it is limited to
      management.

   3. Full knowledge: The MDSC has a complete and sufficiently detailed
      view of the TE topology of all the network domains (both optical
      and packet).  In this case, the MDSC has all the information
      needed to perform multi-domain/multi-layer path computation,
      without relying on the PNCs.

      This model may present scalability issues as a potential drawback
      and, as discussed in section 2.2 of [PATH-COMPUTE], performing
      path computation for optical networks in the MDSC is quite
      challenging, because the optimal paths also depend on
      vendor-specific optical attributes (which may differ in the two
      domains if they are provided by different vendors).

   The current version of this draft assumes that the MDSC supports at
   least model #2 (partial summarization).

   [Note: check with operators for some references on real deployment]

2.2.1. Hard Isolation

   For example, when the "Hard Isolation with or without deterministic
   latency" TE binding requirement applies to an L2/L3VPN, new Optical
   Tunnels need to be set up to support dedicated IP links between PEs
   and BRs.

   The MDSC needs to identify the set of IP/MPLS domains and their BRs.
   This requires the MDSC to request each O-PNC to compute the
   intra-domain optical paths between each PE/BR pair, as sketched
   below.
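   A minimal, hypothetical sketch of such a delegation follows, loosely
   modeled on the path computation RPC being defined in [PATH-COMPUTE].
   The RPC name, attribute names, and identifiers below are
   illustrative only, since that document is work in progress:

      POST /restconf/operations/ietf-te-path-computation:path-compute

      {
        "input": {
          "path-request": [
            {
              "source": "ot-pe1",
              "destination": "ot-br1",
              "bandwidth": "10GE",
              "optimization-metric": "latency"
            }
          ]
        }
      }

   The O-PNC would return the characteristics (e.g., metrics and
   available bandwidth) of the feasible intra-domain optical paths,
   which the MDSC then combines with its packet-layer TE topology view
   to perform end-to-end multi-layer/multi-domain path computation.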
   When requesting optical path computation from the O-PNC, the MDSC
   needs to take into account the inter-layer peering points, such as
   the interconnections between the PE/BR nodes and the edge optical
   nodes (e.g., using the inter-layer lock or the transitional link
   information defined in [RFC8795]).

   When the optimal multi-layer/multi-domain path has been computed,
   the MDSC requests each O-PNC to set up the selected Optical Tunnels
   and each P-PNC to set up the intra-domain MPLS-TE Tunnels over the
   selected Optical Tunnels.  The MDSC also properly configures its BGP
   speakers and the PE/BR forwarding tables to ensure that the VPN
   traffic is properly forwarded.

2.2.2. Shared Tunnel Selection

   In the case of shared tunnel selection, the MDSC needs to check
   whether there is a multi-domain path which can support the L2/L3VPN
   end-to-end TE service requirements (e.g., bandwidth, latency, etc.)
   using existing intra-domain MPLS-TE tunnels.

   If such a path is found, the MDSC selects the optimal path from the
   candidate pool and requests each P-PNC to set up the L2/L3VPN
   service using the selected intra-domain MPLS-TE tunnels between the
   PE/BR nodes.

   Otherwise, the MDSC should determine whether the multi-domain path
   can be set up using existing intra-domain MPLS-TE tunnels with
   modifications (e.g., increasing the tunnel bandwidth) or by setting
   up new intra-domain MPLS-TE tunnel(s).

   The modification of an existing MPLS-TE Tunnel, as well as the setup
   of a new MPLS-TE Tunnel, may also require multi-layer coordination,
   e.g., in case the available bandwidth of the underlying Optical
   Tunnels is not sufficient.  Based on multi-domain/multi-layer path
   computation, the MDSC can decide, for example, to modify the
   bandwidth of an existing Optical Tunnel (e.g., an ODUflex bandwidth
   increase) or to set up new Optical Tunnels to be used as additional
   LAG members of an existing IP link or as new IP links to re-route
   the MPLS-TE Tunnel.

   In all cases, the labels used by the end-to-end tunnel are
   distributed to the PE and BR nodes by BGP.  The MDSC is responsible
   for configuring the BGP speakers in each P-PNC, if needed.

2.3. IP/MPLS Domain Controller and NE Functions

   IP/MPLS networks are assumed to have multiple domains, where each
   domain, corresponding to either an IGP area or an Autonomous System
   (AS) within the same operator network, is controlled by an IP/MPLS
   domain controller (P-PNC).
   Among the functions of the P-PNC are the setup or modification of
   the intra-domain MPLS-TE Tunnels between PEs and BRs, and the
   configuration of the VPN services, such as the VRF in the PE nodes,
   as shown in Figure 3:

        +------------------+            +------------------+
        |                  |            |                  |
        |      P-PNC1      |            |      P-PNC2      |
        |                  |            |                  |
        +--|-----------|---+            +--|-----------|---+
           | 1.Tunnel  | 2.VPN             | 1.Tunnel  | 2.VPN
           |   Config  | Provisioning      |   Config  | Provisioning
           V           V                   V           V
      +---------------------+         +---------------------+
   CE / PE   tunnel 1    BR \         / BR   tunnel 2    PE \  CE
   o--/---o..................o--\-----/--o..................o---\--o
      \                     /         \                     /
       \      Domain 1     /           \      Domain 2     /
        +-----------------+             +-----------------+

                        End-to-end tunnel
      <------------------------------------------------->

          Figure 3 IP/MPLS Domain Controller & NE Functions

   It is assumed that BGP is running in the inter-domain IP/MPLS
   networks for L2/L3VPN and that the P-PNC is also responsible for
   configuring the BGP speakers within its control domain, if
   necessary.

   BGP is responsible for the label distribution of the end-to-end
   tunnel on the PE and BR nodes.  The MDSC is responsible for the
   selection of the BRs and of the intra-domain MPLS-TE Tunnels between
   the PE/BR nodes.

   If new MPLS-TE Tunnels are needed, or if modifications (e.g., a
   bandwidth increase) to existing MPLS-TE Tunnels are needed, as
   outlined in section 2.2, the MDSC would request their setup or
   modification from the P-PNCs (step 1 in Figure 3).  Then the MDSC
   would request the P-PNC to configure the VPN, including the
   selection of the intra-domain TE Tunnel (step 2 in Figure 3).

   The P-PNC should configure, using mechanisms outside the scope of
   this document, the ingress PE forwarding table, e.g., the VRF, to
   forward the VPN traffic, received from the CE, with the following
   three labels:

   o  VPN label: assigned by the egress PE and distributed by BGP;

   o  end-to-end LSP label: assigned by the egress BR, selected by the
      MDSC, and distributed by BGP;

   o  MPLS-TE tunnel label: assigned by the next-hop P node of the
      tunnel selected by the MDSC and distributed by mechanisms
      internal to the IP/MPLS domain (e.g., RSVP-TE).

2.4. Optical Domain Controller and NE Functions

   The optical network provides the underlay connectivity services to
   the IP/MPLS networks.  The packet/optical multi-layer coordination
   is done by the MDSC, as shown in Figure 1.

   The O-PNC is responsible for:

   o  providing to the MDSC an abstract TE topology view of its
      underlying optical network resources;

   o  performing single-domain local path computation, when requested
      by the MDSC;

   o  performing Optical Tunnel setup, when requested by the MDSC.

   The mechanisms used by the O-PNC to perform intra-domain topology
   discovery and path setup are usually vendor-specific and outside the
   scope of this document.

   Depending on the type of optical network, TE topology abstraction,
   path computation, and path setup can be single-layer (either OTN or
   WDM) or multi-layer OTN/WDM.  In the latter case, the multi-layer
   coordination between the OTN and WDM layers is performed by the
   O-PNC.
3. Interface protocols and YANG data models for the MPIs

   This section describes general assumptions which are applicable to
   all the MPI interfaces, between each PNC (optical or packet) and the
   MDSC, and to all the scenarios discussed in this document.

3.1. RESTCONF protocol at the MPIs

   The RESTCONF protocol, as defined in [RFC8040], using the JSON
   representation defined in [RFC7951], is assumed to be used at these
   interfaces.  Extensions to RESTCONF, as defined in [RFC8527], to be
   compliant with the Network Management Datastore Architecture (NMDA)
   defined in [RFC8342], are assumed to be used as well at these MPI
   interfaces and also at the CMI interfaces.

3.2. YANG data models at the MPIs

   The data models used on these interfaces are assumed to use the YANG
   1.1 Data Modeling Language, as defined in [RFC7950].

3.2.1. Common YANG data models at the MPIs

   As required in [RFC8040], the "ietf-yang-library" YANG module
   defined in [RFC8525] is used to allow the MDSC to discover the set
   of YANG modules supported by each PNC at its MPI.

   Both optical and packet PNCs use the following common topology YANG
   models at the MPI to report their abstract topologies:

   o  The Base Network Model, defined in the "ietf-network" YANG module
      of [RFC8345];

   o  The Base Network Topology Model, defined in the "ietf-network-
      topology" YANG module of [RFC8345], which augments the Base
      Network Model;

   o  The TE Topology Model, defined in the "ietf-te-topology" YANG
      module of [RFC8795], which augments the Base Network Topology
      Model with TE-specific information.

   These common YANG models are generic and augmented by technology-
   specific YANG modules, as described in the following sections.

   Both optical and packet PNCs must use the following common
   notification YANG models at the MPI, so that any network changes can
   be reported almost in real-time to the MDSC by the PNCs:

   o  Dynamic Subscription to YANG Events and Datastores over RESTCONF,
      as defined in [RFC8650];

   o  Subscription to YANG Notifications for Datastore Updates, as
      defined in [RFC8641].

   PNCs and MDSCs must be compliant with the subscription requirements
   stated in [RFC7923].
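   As an informal illustration of how these common models appear at the
   MPI, the MDSC could retrieve the abstract topologies exposed by a
   PNC with a single RESTCONF request, which would return, for example
   (the network-id, node-id, and tp-id values below are hypothetical):

      GET /restconf/data/ietf-network:networks
      Accept: application/yang-data+json

      {
        "ietf-network:networks": {
          "network": [
            {
              "network-id": "optical-domain-1",
              "network-types": {
                "ietf-te-topology:te-topology": {}
              },
              "node": [
                {
                  "node-id": "roadm-1",
                  "ietf-network-topology:termination-point": [
                    { "tp-id": "1-0-1" }
                  ]
                }
              ]
            }
          ]
        }
      }

   The presence container "ietf-te-topology:te-topology" under
   "network-types" marks the network as a TE topology [RFC8795];
   technology-specific augmentations (e.g., WSON or L3) are reported
   within the same structure, as described in the following sections.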
3.2.2. YANG models at the Optical MPIs

   The optical PNC also uses at least the following technology-specific
   topology YANG models, providing WDM and Ethernet technology-specific
   augmentations of the generic TE Topology Model:

   o  The WSON Topology Model, defined in the "ietf-wson-topology" YANG
      module of [WSON-TOPO], or the Flexi-grid Topology Model, defined
      in the "ietf-flexi-grid-topology" YANG module of [Flexi-TOPO];

   o  Optionally, when the OTN layer is used, the OTN Topology Model,
      as defined in the "ietf-otn-topology" YANG module of [OTN-TOPO];

   o  The Ethernet Topology Model, defined in the "ietf-eth-te-
      topology" YANG module of [CLIENT-TOPO];

   o  Optionally, when the OTN layer is used, the network data model
      for L1 OTN services (e.g., an Ethernet transparent service), as
      defined in the "ietf-trans-client-service" YANG module of
      [CLIENT-SIGNAL].

   The WSON Topology Model or, alternatively, the Flexi-grid Topology
   Model is used to report the DWDM network topology (e.g., ROADMs and
   links), depending on whether the DWDM optical network is based on
   fixed-grid or flexible-grid.

   The Ethernet Topology Model is used to report the access links
   between the IP routers and the edge ROADMs.

   The optical PNC uses at least the following YANG models:

   o  The TE Tunnel Model, defined in the "ietf-te" YANG module of
      [TE-TUNNEL];

   o  The WSON Tunnel Model, defined in the "ietf-wson-tunnel" YANG
      module of [WSON-TUNNEL], or the Flexi-grid Media Channel Model,
      defined in the "ietf-flexi-grid-media-channel" YANG module of
      [Flexi-MC];

   o  Optionally, when the OTN layer is used, the OTN Tunnel Model,
      defined in the "ietf-otn-tunnel" YANG module of [OTN-TUNNEL];

   o  The Ethernet Client Signal Model, defined in the "ietf-eth-tran-
      service" YANG module of [CLIENT-SIGNAL].

   The TE Tunnel Model is generic and augmented by technology-specific
   models such as the WSON Tunnel Model and the Flexi-grid Media
   Channel Model.

   The WSON Tunnel Model or, alternatively, the Flexi-grid Media
   Channel Model is used to set up connectivity within the DWDM
   network, depending on whether the DWDM optical network is based on
   fixed-grid or flexible-grid.

   The Ethernet Client Signal Model is used to configure the steering
   of the Ethernet client traffic between Ethernet access links and TE
   Tunnels, which in this case could be either WSON Tunnels or
   Flexi-grid Media Channels.  This model is generic and applies to any
   technology-specific TE Tunnel: technology-specific attributes are
   provided by the technology-specific models which augment the generic
   TE Tunnel Model.

3.2.3. YANG data models at the Packet MPIs

   The packet PNC also uses at least the following technology-specific
   topology YANG models, providing IP and Ethernet technology-specific
   augmentations of the generic Topology Models described in section
   3.2.1:

   o  The L3 Topology Model, defined in the "ietf-l3-unicast-topology"
      YANG module of [RFC8346], which augments the Base Network
      Topology Model;

   o  The L3-specific data model including extended TE attributes
      (e.g., performance-derived metrics like latency), defined in the
      "ietf-l3-te-topology" and "ietf-te-topology-packet" YANG modules
      of [L3-TE-TOPO];

   o  The Ethernet Topology Model, defined in the "ietf-eth-te-
      topology" YANG module of [CLIENT-TOPO], which augments the TE
      Topology Model.

   The Ethernet Topology Model is used to report the access links
   between the IP routers and the edge ROADMs, as well as the
   inter-domain links between ASBRs, while the L3 Topology Model is
   used to report the IP network topology (e.g., IP routers and links).

   The packet PNC also uses the following models:

   o  The User Network Interface (UNI) Topology Model, being defined in
      the "ietf-uni-topology" module of [UNI-TOPO], which augments the
      "ietf-network" module defined in [RFC8345], adding service
      attachment points to the nodes, to which L2VPN/L3VPN IP/MPLS
      services can be attached;

   o  The L3VPN network data model, defined in the "ietf-l3vpn-ntw"
      module of [L3NM], used as a non-ACTN MPI model for L3VPN service
      provisioning;

   o  The L2VPN network data model, defined in the "ietf-l2vpn-ntw"
      module of [L2NM], used as a non-ACTN MPI model for L2VPN service
      provisioning.

   [Editor's note:] Add YANG models used for tunnel and service
   configuration.
3.3. PCEP

   [RFC8637] examines the applicability of a Path Computation Element
   (PCE) [RFC5440] and the PCE Communication Protocol (PCEP) to the
   ACTN framework.  It further describes how the PCE architecture is
   applicable to ACTN and lists the PCEP extensions that are needed to
   use PCEP as an ACTN interface.  The stateful PCE [RFC8231],
   PCE-Initiation [RFC8281], stateful Hierarchical PCE (H-PCE)
   [RFC8751], and PCE as a central controller (PCECC) [RFC8283] are
   some of the key extensions that enable the use of PCE/PCEP for ACTN.

   Since PCEP supports path computation in packet as well as optical
   networks, PCEP is well suited for inter-layer path computation.
   [RFC5623] describes a framework for applying the PCE-based
   architecture to inter-layer (G)MPLS traffic engineering.  Further,
   section 6.1 of [RFC8751] states the H-PCE applicability for
   inter-layer or POI.

   [RFC8637] lists various PCEP extensions that are applicable to ACTN.
   It also lists the PCEP extensions for optical networks and POI.

   Note that PCEP can be used in conjunction with the YANG models
   described in the rest of this document.  Depending on whether ACTN
   is deployed in a greenfield or brownfield, two options are possible:

   1. The MDSC uses a single RESTCONF/YANG interface towards each PNC
      to discover all the TE information and to request the creation of
      TE tunnels.  It may either perform full multi-layer path
      computation or delegate path computation to the underlying PNCs.

      This approach is very attractive for operators from a
      multi-vendor integration perspective, as it is simple and
      requires only one type of interface (RESTCONF) with the relevant
      YANG data models, depending on the operator use case considered.
      The benefits of having only one protocol for the MPI between the
      MDSC and the PNCs have already been highlighted in
      [PATH-COMPUTE].

   2. The MDSC uses the RESTCONF/YANG interface towards each PNC to
      discover all the TE information and to request the creation of TE
      tunnels, but it uses PCEP for hierarchical path computation.

      As mentioned in option 1, from an operator perspective this
      option can add integration complexity by requiring two protocols
      instead of one, unless the RESTCONF/YANG interface is added to an
      existing PCEP deployment (brownfield scenario).

   Section 4 of this draft analyses the case where a single
   RESTCONF/YANG interface is deployed at the MPI (i.e., option 1
   above).

4. Multi-layer and multi-domain services scenarios

   Multi-layer and multi-domain scenarios, based on the reference
   network described in section 2 and highly relevant for Service
   Providers, are described in the next sections.  For each scenario,
   existing IETF protocols and data models are identified, with a
   particular focus on the MPI in the ACTN architecture.  Non-ACTN IETF
   data models required for L2/L3VPN service provisioning between the
   MDSC and the IP PNCs are also identified.

4.1. Scenario 1: inventory, service and network topology discovery

   In this scenario, the MDSC needs to discover, through the underlying
   PNCs, the network topology at both the WDM and IP layers, in terms
   of nodes and links, including inter-AS domain links as well as
   cross-layer links, but also in terms of tunnels (MPLS or SR paths in
   the IP layer, and OCh and optionally ODUk tunnels in the optical
   layer).

   In addition, the MDSC should discover the IP/MPLS transport services
   (L2VPN/L3VPN) deployed, both intra-domain and inter-domain wise.

   The O-PNC and P-PNC could discover and report the inventory
   information of their equipment that is used by the different
   management layers.  In the context of POI, the inventory information
   of IP and WDM equipment can complement the topology views and
   facilitate the IP-optical multi-layer view.

   The MDSC could also discover the whole inventory information of both
   IP and WDM equipment and be able to correlate this information with
   the links reported in the network topology.

   Each PNC provides to the MDSC an abstracted or full topology view of
   the WDM or IP topology of the domain it controls.  This topology can
   be abstracted in the sense that some detailed NE information is
   hidden at the MPI, and all or some of the NEs and related physical
   links are exposed as abstract nodes and logical (virtual) links,
   depending on the level of abstraction the user requires.  This
   information is key to understanding both the inter-AS domain links
   (seen by each controller as UNI interfaces, but as I-NNI interfaces
   by the MDSC) and the cross-layer mapping between the IP and WDM
   layers.

   The MDSC should also maintain up-to-date inventory, service, and
   network topology databases of both the IP and WDM layers (and
   optionally the OTN layer), through the use of IETF notifications
   through the MPI with the PNCs, whenever any inventory, topology, or
   service change occurs.

   It should also be possible to correlate information coming from the
   IP and WDM layers (e.g., which port, lambda/OTSi, and direction are
   used by a specific IP service on the WDM equipment).

   In particular, for the cross-layer links it is key for the MDSC to
   be able to automatically correlate the information from the PNC
   network databases about the physical ports from the routers (single
   links or bundled links for LAG) to the client ports in the ROADM.

   It should be possible at the MDSC level to easily correlate WDM and
   IP layer alarms to speed up troubleshooting.

   Alarms and event notifications are required between the MDSC and the
   PNCs, so that any network changes are reported almost in real-time
   to the MDSC (e.g., NE or link failure, an MPLS tunnel switched from
   the main to the backup path, etc.).  As specified in [RFC7923], the
   MDSC must be able to subscribe to specific objects from the PNC YANG
   datastores for notifications, as sketched below.
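   As an informal example, the MDSC could establish an on-change
   subscription to the operational topology datastore of a PNC using
   the dynamic subscription RPC of [RFC8650] with the YANG-push
   parameters of [RFC8641]; the XPath filter below is just one possible
   choice:

      POST /restconf/operations/
           ietf-subscribed-notifications:establish-subscription

      {
        "ietf-subscribed-notifications:input": {
          "ietf-yang-push:datastore": "ietf-datastores:operational",
          "ietf-yang-push:datastore-xpath-filter":
            "/ietf-network:networks",
          "ietf-yang-push:on-change": {}
        }
      }

   With such a subscription in place, the PNC pushes "push-change-
   update" notifications whenever the subscribed part of its topology
   changes, which allows the MDSC to keep its topology databases
   up-to-date almost in real-time.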
4.1.1. Inter-domain link discovery

   In the reference network of Figure 1, there are two types of
   inter-domain links:

   o  Links between two IP domains (ASes);

   o  Links between an IP router and a ROADM.

   Both types of links are Ethernet physical links.

   The inter-domain link information is reported to the MDSC by the two
   adjacent PNCs, controlling the two ends of the inter-domain link.
   The MDSC needs to understand how to merge these inter-domain
   Ethernet links together.

   This document considers the following two options for discovering
   inter-domain links:

   1. Static configuration

   2. LLDP [IEEE 802.1AB] automatic discovery

   Other options are possible but not described in this document.

   The MDSC can understand how to merge these inter-domain links
   together using the plug-id attribute defined in the TE Topology
   Model [RFC8795], as described in section 4.3 of [RFC8795].
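   For illustration, a PNC could report the plug-id on the Ethernet LTP
   terminating its side of an inter-domain link as sketched below.  The
   tp-id, te-tp-id, and the base64-encoded plug-id value are
   hypothetical; two LTPs reporting the same plug-id value are merged
   by the MDSC into the same inter-domain link:

      {
        "ietf-network-topology:termination-point": [
          {
            "tp-id": "et-0/0/1",
            "ietf-te-topology:te-tp-id": 101,
            "ietf-te-topology:te": {
              "inter-domain-plug-id": "c2l0ZS1hLWxpbmstMQ=="
            }
          }
        ]
      }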
   A more detailed description of how the plug-id can be used to
   discover inter-domain links is also provided in section 5.1.4 of
   [TNBI].

   Both types of inter-domain links are discovered using the plug-id
   attributes reported in the Ethernet Topologies exposed by the two
   adjacent PNCs.  The MDSC can also discover an inter-domain IP
   link/adjacency between two IP LTPs, reported in the IP Topologies
   exposed by the two adjacent P-PNCs, supported by the two ETH LTPs of
   an Ethernet link discovered between these two P-PNCs.

   The static configuration requires an administrative burden to
   configure network-wide unique identifiers: it is therefore more
   viable for inter-AS links.  For the links between the IP routers and
   the optical NEs, the automatic discovery solution based on LLDP
   snooping is preferable, when LLDP snooping is supported by the
   optical NEs.

   As outlined in [TNBI], the encoding of the plug-id namespace, as
   well as of the LLDP information within the plug-id value, is
   implementation-specific and needs to be consistent across all the
   PNCs.

4.1.2. IP Link Setup Procedure

   The MDSC requires the O-PNC to set up a WDM Tunnel (either a WSON
   Tunnel or a Flexi-grid Tunnel) within the DWDM network between the
   two Optical Transponders (OTs) associated with the two access links.

   The Optical Transponders are reported by the O-PNC as Tunnel
   Termination Points (TTPs), defined in [RFC8795], within the WDM
   Topology.  The association between the Ethernet access link and the
   WDM TTP is reported by the Inter-Layer Lock (ILL) identifiers,
   defined in [RFC8795], reported by the O-PNC within the Ethernet
   Topology and the WDM Topology.

   The MDSC also requires the O-PNC to steer the Ethernet client
   traffic between the two access Ethernet links over the WDM Tunnel
   (see the sketch after this section).

   After the WDM Tunnel has been set up and the client traffic steering
   configured, the two IP routers can exchange Ethernet packets between
   themselves, including LLDP messages.

   If LLDP [IEEE 802.1AB] is used between the two routers, the P-PNC
   can automatically discover the IP link being set up by the MDSC.
   The IP LTPs terminating this IP link are supported by the ETH LTPs
   terminating the two access links.

   Otherwise, the MDSC needs to request the P-PNC to configure an IP
   link between the two routers: the MDSC also configures the two ETH
   LTPs which support the two IP LTPs terminating this IP link.
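   A minimal sketch of the two O-PNC requests follows: one creating the
   WDM Tunnel with the generic TE Tunnel Model of [TE-TUNNEL], and one
   steering the Ethernet client traffic onto it with the Ethernet
   Client Signal Model of [CLIENT-SIGNAL].  Both models are work in
   progress, so the attribute names and values below are indicative
   only:

      POST /restconf/data/ietf-te:te/tunnels

      {
        "ietf-te:tunnel": [
          {
            "name": "wdm-tunnel-pe1-br1",
            "source": "ot-pe1",
            "destination": "ot-br1"
          }
        ]
      }

      POST /restconf/data/ietf-eth-tran-service:etht-svc

      {
        "ietf-eth-tran-service:etht-svc-instances": [
          {
            "etht-svc-name": "ip-link-pe1-br1",
            "access-ports": [
              { "access-node-id": "roadm-1", "access-ltp-id": "1-0-1" },
              { "access-node-id": "roadm-2", "access-ltp-id": "1-0-1" }
            ],
            "te-tunnel": "wdm-tunnel-pe1-br1"
          }
        ]
      }

   The WSON Tunnel Model or the Flexi-grid Media Channel Model would
   augment the first request with the technology-specific attributes
   (e.g., the central frequency), while the second request binds the
   two Ethernet access links to the WDM Tunnel just created.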
4.1.3. Inventory discovery

   There are no YANG data models in the IETF that could be used to
   report at the MPI the whole inventory information discovered by a
   PNC.

   [RFC8345] has foreseen some work for inventory as an augmentation of
   the network model, but no YANG data model has been developed so far.

   There are also no YANG data models in the IETF that could be used to
   correlate topology information, e.g., a link termination point
   (LTP), with inventory information, e.g., the physical port
   supporting an LTP, if any.

   Inventory information through the MPI, and its correlation with
   topology information, is identified as a gap requiring further work,
   which is outside of the scope of this draft.

4.2. L2VPN/L3VPN establishment

   To be added.

   [Editor's Note] What mechanism would convey on the interface to the
   IP/MPLS domain controllers, as well as on the SBI (between IP/MPLS
   domain controllers and IP/MPLS PE routers), the TE binding policy
   dynamically for the L3VPN?  Typically, the VRF is the function of
   the device that participates in MP-BGP in an MPLS VPN.  With the
   current MP-BGP implementation in MPLS VPNs, the VRF's BGP next hop
   is the destination PE, and the mapping to a tunnel (either an LDP or
   a BGP tunnel) toward the destination PE is done automatically,
   without any configuration.  The impact on the PE VRF operation when
   the tunnel is an optical bypass tunnel, which does not participate
   in either LDP or BGP, is to be determined.

   [Editors' note: New text addressing the question above:]

   The MDSC network-related functions will then coordinate with the
   PNCs involved in the process to provide the provisioning information
   through the ACTN MDSC-to-PNC (MPI) interface.  The relevant data
   models used at the MPI may be in the form of L3NM, L2NM, or others,
   and are exchanged through MPI API calls.  Through this process, the
   MDSC network-related functions provide the configuration information
   needed to realize a VPN service to the PNCs.  For example, this
   process will inform the PNCs of which PE routers compose an L3VPN,
   the topology requested, the VPN attributes, etc. (see the sketch at
   the end of this section).

   At the end of the process, the PNCs will deliver the actual
   configuration to the devices (either physical or virtual), through
   the ACTN Southbound Interface (SBI).  In this case, the
   configuration policies may be exchanged using a NETCONF session
   delivering configuration commands associated with device-specific
   data models (e.g., BGP [], QoS [], etc.).

   Having the topology information of the network domains under their
   control, the PNCs will deliver all the information necessary to
   create, update, optimize, or delete the tunnels connecting the PE
   nodes, as requested by the VPN instantiation.
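   As an informal illustration, the MDSC could request a P-PNC to
   configure its portion of an L3VPN using the L3NM [L3NM].  That
   module is work in progress, so the structure below is indicative
   only, and all identifiers are hypothetical:

      POST /restconf/data/ietf-l3vpn-ntw:l3vpn-ntw/vpn-services

      {
        "ietf-l3vpn-ntw:vpn-service": [
          {
            "vpn-id": "l3vpn-acme-01",
            "vpn-service-topology": "any-to-any",
            "vpn-nodes": {
              "vpn-node": [
                {
                  "vpn-node-id": "vrf-acme-pe1",
                  "ne-id": "pe1"
                }
              ]
            }
          }
        ]
      }

   A similar request would be sent to the other P-PNC for its own PEs;
   the binding of the VPN to the selected intra-domain MPLS-TE tunnels
   would follow the TE binding requirements discussed in section 2.2.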
5. Security Considerations

   Several security considerations have been identified and will be
   discussed in future versions of this document.

6. Operational Considerations

   Telemetry data, such as the collection of lower-layer networking
   health and consideration of network and service performance from POI
   domain controllers, may be required.  These requirements and
   capabilities will be discussed in future versions of this document.

7. IANA Considerations

   This document requires no IANA actions.

8. References

8.1. Normative References

   [RFC7923] Voit, E., Clemm, A., and A. Gonzalez Prieto, "Requirements
             for Subscription to YANG Datastores", RFC 7923, June 2016.

   [RFC7950] Bjorklund, M., Ed., "The YANG 1.1 Data Modeling Language",
             RFC 7950, August 2016.

   [RFC7951] Lhotka, L., "JSON Encoding of Data Modeled with YANG",
             RFC 7951, August 2016.

   [RFC8040] Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF
             Protocol", RFC 8040, January 2017.

   [RFC8342] Bjorklund, M., Schoenwaelder, J., Shafer, P., Watsen, K.,
             and R. Wilton, "Network Management Datastore Architecture
             (NMDA)", RFC 8342, March 2018.

   [RFC8345] Clemm, A., Medved, J., et al., "A YANG Data Model for
             Network Topologies", RFC 8345, March 2018.

   [RFC8346] Clemm, A., et al., "A YANG Data Model for Layer 3
             Topologies", RFC 8346, March 2018.

   [RFC8453] Ceccarelli, D., Lee, Y., et al., "Framework for
             Abstraction and Control of TE Networks (ACTN)", RFC 8453,
             August 2018.

   [RFC8525] Bierman, A., et al., "YANG Library", RFC 8525, March 2019.

   [RFC8527] Bjorklund, M., Schoenwaelder, J., Shafer, P., Watsen, K.,
             and R. Wilton, "RESTCONF Extensions to Support the Network
             Management Datastore Architecture", RFC 8527, March 2019.

   [RFC8641] Clemm, A. and E. Voit, "Subscription to YANG Notifications
             for Datastore Updates", RFC 8641, September 2019.

   [RFC8650] Voit, E., Rahman, R., Nilsen-Nygaard, E., Clemm, A., and
             A. Bierman, "Dynamic Subscription to YANG Events and
             Datastores over RESTCONF", RFC 8650, November 2019.

   [RFC8795] Liu, X., et al., "YANG Data Model for Traffic Engineering
             (TE) Topologies", RFC 8795, August 2020.

   [IEEE 802.1AB] IEEE 802.1AB-2016, "IEEE Standard for Local and
             metropolitan area networks - Station and Media Access
             Control Connectivity Discovery", March 2016.

   [WSON-TOPO] Lee, Y., et al., "A YANG Data Model for WSON (Wavelength
             Switched Optical Networks)", draft-ietf-ccamp-wson-yang,
             work in progress.

   [Flexi-TOPO] Lopez de Vergara, J. E., et al., "YANG data model for
             Flexi-Grid Optical Networks", draft-ietf-ccamp-flexigrid-
             yang, work in progress.

   [OTN-TOPO] Zheng, H., et al., "A YANG Data Model for Optical
             Transport Network Topology", draft-ietf-ccamp-otn-topo-
             yang, work in progress.

   [CLIENT-TOPO] Zheng, H., et al., "A YANG Data Model for Client-layer
             Topology", draft-zheng-ccamp-client-topo-yang, work in
             progress.

   [L3-TE-TOPO] Liu, X., et al., "YANG Data Model for Layer 3 TE
             Topologies", draft-ietf-teas-yang-l3-te-topo, work in
             progress.

   [TE-TUNNEL] Saad, T., et al., "A YANG Data Model for Traffic
             Engineering Tunnels and Interfaces", draft-ietf-teas-yang-
             te, work in progress.

   [WSON-TUNNEL] Lee, Y., et al., "A YANG Data Model for WSON Tunnel",
             draft-ietf-ccamp-wson-tunnel-model, work in progress.

   [Flexi-MC] Lopez de Vergara, J. E., et al., "YANG data model for
             Flexi-Grid media-channels", draft-ietf-ccamp-flexigrid-
             media-channel-yang, work in progress.

   [OTN-TUNNEL] Zheng, H., et al., "OTN Tunnel YANG Model",
             draft-ietf-ccamp-otn-tunnel-model, work in progress.

   [CLIENT-SIGNAL] Zheng, H., et al., "A YANG Data Model for Transport
             Network Client Signals", draft-ietf-ccamp-client-signal-
             yang, work in progress.

8.2. Informative References

   [RFC1930] Hawkinson, J. and T. Bates, "Guidelines for creation,
             selection, and registration of an Autonomous System (AS)",
             RFC 1930, March 1996.

   [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [RFC4761] Kompella, K., Ed., and Y. Rekhter, Ed., "Virtual Private
             LAN Service (VPLS) Using BGP for Auto-Discovery and
             Signaling", RFC 4761, January 2007.

   [RFC5440] Vasseur, JP., Ed., and JL. Le Roux, Ed., "Path Computation
             Element (PCE) Communication Protocol (PCEP)", RFC 5440,
             March 2009.

   [RFC5623] Oki, E., Takeda, T., Le Roux, JL., and A. Farrel,
             "Framework for PCE-Based Inter-Layer MPLS and GMPLS
             Traffic Engineering", RFC 5623, September 2009.

   [RFC6074] Rosen, E., Davie, B., Radoaca, V., and W. Luo,
             "Provisioning, Auto-Discovery, and Signaling in Layer 2
             Virtual Private Networks (L2VPNs)", RFC 6074, January
             2011.

   [RFC6624] Kompella, K., Kothari, B., and R. Cherukuri, "Layer 2
             Virtual Private Networks Using BGP for Auto-Discovery and
             Signaling", RFC 6624, May 2012.

   [RFC7209] Sajassi, A., Aggarwal, R., Uttaro, J., Bitar, N.,
             Henderickx, W., and A. Isaac, "Requirements for Ethernet
             VPN (EVPN)", RFC 7209, May 2014.

   [RFC7432] Sajassi, A., Ed., et al., "BGP MPLS-Based Ethernet VPN",
             RFC 7432, February 2015.

   [RFC7436] Shah, H., Rosen, E., Le Faucheur, F., and G. Heron,
             "IP-Only LAN Service (IPLS)", RFC 7436, January 2015.

   [RFC8214] Boutros, S., Sajassi, A., Salam, S., Drake, J., and J.
             Rabadan, "Virtual Private Wire Service Support in Ethernet
             VPN", RFC 8214, August 2017.

   [RFC8231] Crabbe, E., Minei, I., Medved, J., and R. Varga, "Path
             Computation Element Communication Protocol (PCEP)
             Extensions for Stateful PCE", RFC 8231, September 2017.

   [RFC8281] Crabbe, E., Minei, I., Sivabalan, S., and R. Varga, "Path
             Computation Element Communication Protocol (PCEP)
             Extensions for PCE-Initiated LSP Setup in a Stateful PCE
             Model", RFC 8281, December 2017.

   [RFC8283] Farrel, A., Ed., Zhao, Q., Ed., Li, Z., and C. Zhou, "An
             Architecture for Use of PCE and the PCE Communication
             Protocol (PCEP) in a Network with Central Control",
             RFC 8283, December 2017.

   [RFC8299] Wu, Q., Litkowski, S., Tomotaki, L., and K. Ogaki, "YANG
             Data Model for L3VPN Service Delivery", RFC 8299, January
             2018.

   [RFC8309] Wu, Q., Liu, W., and A. Farrel, "Service Models
             Explained", RFC 8309, January 2018.

   [RFC8466] Wen, B., Fioccola, G., Ed., Xie, C., and L. Jalil, "A YANG
             Data Model for Layer 2 Virtual Private Network (L2VPN)
             Service Delivery", RFC 8466, October 2018.

   [RFC8637] Dhody, D., Lee, Y., and D. Ceccarelli, "Applicability of
             the Path Computation Element (PCE) to the Abstraction and
             Control of TE Networks (ACTN)", RFC 8637, July 2019.

   [RFC8751] Dhody, D., Lee, Y., Ceccarelli, D., Shin, J., and D. King,
             "Hierarchical Stateful Path Computation Element (PCE)",
             RFC 8751, March 2020.
   [VN]       Lee, Y., et al., "A YANG Data Model for ACTN VN
              Operation", draft-ietf-teas-actn-vn-yang, work in
              progress.

   [L2NM]     Barguil, S., et al., "A Layer 2 VPN Network YANG
              Model", draft-ietf-opsawg-l2nm, work in progress.

   [L3NM]     Barguil, S., et al., "A Layer 3 VPN Network YANG
              Model", draft-ietf-opsawg-l3sm-l3nm, work in progress.

   [TSM]      Lee, Y., et al., "Traffic Engineering and Service
              Mapping YANG Model",
              draft-ietf-teas-te-service-mapping-yang, work in
              progress.

   [ACTN-PM]  Lee, Y., et al., "YANG models for VN & TE Performance
              Monitoring Telemetry and Scaling Intent Autonomics",
              draft-lee-teas-actn-pm-telemetry-autonomics, work in
              progress.

   [BGP-L3VPN]  Jain, D., et al., "YANG Data Model for BGP/MPLS L3
              VPNs", draft-ietf-bess-l3vpn-yang, work in progress.

Appendix A. Multi-layer and multi-domain resiliency

A.1. Maintenance Window

   Before a planned maintenance operation on the DWDM network takes
   place, IP traffic should be moved hitlessly to another link.

   The MDSC must reroute IP traffic before the event takes place.
   It should be possible to lock IP traffic to the protection route
   until the maintenance event is finished, unless a fault occurs on
   that path.  A sketch of how the MDSC might take the optical
   tunnel out of service once traffic has been moved is shown at the
   end of this appendix.

A.2. Router port failure

   The focus is on a client-side protection scheme between the IP
   router and the reconfigurable ROADM.  The scenario is to define
   only one port in the routers, and in the ROADM muxponder board at
   both ends, as a back-up port to recover from any other port
   failure on the client side of the ROADM (either on the router
   port side, on the muxponder side, or on the link between them).
   When a client-side port failure occurs, alarms are raised to the
   MDSC by the IP-PNC and the O-PNC (port status down, LOS, etc.).
   The MDSC checks with the O-PNC(s) that there is no failure in the
   optical layer.

   There can be two cases here:

   a) A LAG was defined between the two end routers.  The MDSC,
      after checking that the optical layer is fine between the two
      end ROADMs, triggers the ROADM configuration so that the
      router back-up port, with its associated muxponder port, can
      reuse the OCh that was already in use by the failed router
      port, and adds the new link to the LAG on the failure side.

      While the ROADM reconfiguration takes place, IP/MPLS traffic
      uses the reduced bandwidth of the IP link bundle, discarding
      lower-priority traffic if required.  Once the back-up port has
      been reconfigured to reuse the existing OCh and the new link
      has been added to the LAG, the original bandwidth is recovered
      between the end routers.

      Note: in this LAG scenario, it is assumed that BFD runs at the
      LAG level, so that nothing is triggered at the MPLS level when
      one of the member links of the LAG fails.

   b) If there is no LAG, the scenario is less clear-cut, since a
      router port failure would automatically trigger (through BFD
      failure) a sub-50ms protection at the MPLS level first: FRR
      (MPLS RSVP-TE case) or TI-LFA (MPLS-based SR-TE case) through
      a protection port.  At the same time, the MDSC, after checking
      that the optical network connection is still fine, would
      trigger the reconfiguration of the back-up port of the router
      and of the ROADM muxponder to reuse the same OCh as the one
      originally used by the failed router port.  Once everything
      has been correctly configured, the MDSC Global PCE could
      suggest that the operator trigger a re-optimization of the
      back-up MPLS path, moving traffic back to the MPLS primary
      path through the back-up port of the router and the original
      OCh, if the overall cost, latency, etc., are improved.
      However, this scenario requires a protection port plus a
      back-up port in the router, which does not lead to clear port
      savings.
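   As referenced in A.1, the following is a minimal sketch of how
   the MDSC might gracefully take the optical tunnel out of service
   through the MPI once IP traffic has been moved away.  It assumes
   the TE tunnel model of [TE-TUNNEL] and the administrative state
   identities of the ietf-te-types module; the tunnel name
   "och-pe1-pe2" and the O-PNC address are hypothetical:

   PATCH /restconf/data/ietf-te:te/tunnels/tunnel=och-pe1-pe2
   Host: o-pnc.example.com
   Content-Type: application/yang-data+json

   {
     "ietf-te:tunnel": [
       {
         "name": "och-pe1-pe2",
         "admin-state": "ietf-te-types:tunnel-admin-state-down"
       }
     ]
   }

   Once the maintenance event is finished, the MDSC would set the
   tunnel back to "tunnel-admin-state-up", allowing IP traffic to be
   re-optimized onto the original path.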
Acknowledgments

   This document was prepared using 2-Word-v2.0.template.dot.

   Some of this analysis work was supported in part by the European
   Commission funded H2020-ICT-2016-2 METRO-HAUL project (G.A.
   761727).

Contributors

   Sergio Belotti
   Nokia
   Email: sergio.belotti@nokia.com

   Gabriele Galimberti
   Cisco
   Email: ggalimbe@cisco.com

   Zheng Yanlei
   China Unicom
   Email: zhengyanlei@chinaunicom.cn

   Anton Snitser
   Sedona
   Email: antons@sedonasys.com

   Washington Costa Pereira Correia
   TIM Brasil
   Email: wcorreia@timbrasil.com.br

   Michael Scharf
   Hochschule Esslingen - University of Applied Sciences
   Email: michael.scharf@hs-esslingen.de

   Young Lee
   Sung Kyun Kwan University
   Email: younglee.tx@gmail.com

   Jeff Tantsura
   Apstra
   Email: jefftant.ietf@gmail.com

   Paolo Volpato
   Huawei
   Email: paolo.volpato@huawei.com

Authors' Addresses

   Fabio Peruzzini
   TIM
   Email: fabio.peruzzini@telecomitalia.it

   Jean-Francois Bouquier
   Vodafone
   Email: jeff.bouquier@vodafone.com

   Italo Busi
   Huawei
   Email: Italo.busi@huawei.com

   Daniel King
   Old Dog Consulting
   Email: daniel@olddog.co.uk

   Daniele Ceccarelli
   Ericsson
   Email: daniele.ceccarelli@ericsson.com