TEAS Working Group                                       Fabio Peruzzini
Internet Draft                                                       TIM
Intended status: Informational                                Italo Busi
                                                                  Huawei
                                                             Daniel King
                                                      Old Dog Consulting
                                                          Sergio Belotti
                                                                   Nokia
                                                     Gabriele Galimberti
                                                                   Cisco

Expires: September 2020                                    March 9, 2020

    Applicability of Abstraction and Control of Traffic Engineered
           Networks (ACTN) to Packet Optical Integration (POI)

              draft-peru-teas-actn-poi-applicability-03.txt

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on September 9, 2020.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.
Abstract

   This document considers the applicability of the IETF Abstraction
   and Control of Traffic Engineered Networks (ACTN) framework to
   Packet Optical Integration (POI), i.e., to the internetworking of
   IP and Optical DWDM domains.

   It highlights the IETF protocols and YANG data models that may be
   used for the ACTN-based control of POI networks, with a particular
   focus on the interfaces between the MDSC (Multi-Domain Service
   Coordinator) and the underlying Packet and Optical Domain
   Controllers (P-PNC and O-PNC) to support POI use cases.

Table of Contents

   1. Introduction
   2. Reference Scenario
      2.1. Generic Assumptions
   3. Multi-Layer Topology Coordination
      3.1. Discovery of existing OCh, ODU, IP links, IP tunnels and
           IP services
         3.1.1. Common YANG Models used at the MPI
            3.1.1.1. YANG models used at the Optical MPIs
            3.1.1.2. Required YANG models at the Packet MPIs
         3.1.2. Inter-domain Link Discovery
      3.2. Provisioning of an IP Link/LAG over DWDM
         3.2.1. YANG models used at the MPIs
            3.2.1.1. YANG models used at the Optical MPIs
            3.2.1.2. Required YANG models at the Packet MPIs
         3.2.2. IP Link Setup Procedure
      3.3. Provisioning of an IP link/LAG over DWDM with path
           constraints
         3.3.1. YANG models used at the MPIs
      3.4. Provisioning Link Members to an existing LAG
         3.4.1. YANG Models used at the MPIs
   4. Multi-Layer Recovery Coordination
      4.1. Ensuring Network Resiliency during Maintenance Events
      4.2. Router Port Failure
   5. Service Coordination for Multi-Layer Network
      5.1. L2/L3VPN/VN Service Request by the Customer
      5.2. Service and Network Orchestration
      5.3. IP/MPLS Domain Controller and NE Functions
         5.3.1. Scenario A: Shared Tunnel Selection
            5.3.1.1. Domain Tunnel Selection
            5.3.1.2. VPN/VRF Provisioning for L3VPN
            5.3.1.3. VSI Provisioning for L2VPN
            5.3.1.4. Inter-domain Links Update
            5.3.1.5. End-to-end Tunnel Management
         5.3.2. Scenario B: Isolated VN/Tunnel Establishment
      5.4. Optical Domain Controller and NE Functions
      5.5. Orchestrator-Controllers-NEs Communication Protocol Flows
   6. Security Considerations
   7. Operational Considerations
   8. IANA Considerations
   9. References
      9.1. Normative References
      9.2. Informative References
   10. Acknowledgments
   11. Authors' Addresses
1. Introduction

   Packet Optical Integration (POI) is an advanced use case of traffic
   engineering. In wide-area networks, a packet network based on the
   Internet Protocol (IP) and possibly Multiprotocol Label Switching
   (MPLS) is typically deployed on top of an optical transport network
   that uses Dense Wavelength Division Multiplexing (DWDM). In many
   existing network deployments, the packet and the optical networks
   are engineered and operated independently of each other. There are
   technical differences between the technologies (e.g., routers
   versus optical switches) and the corresponding network engineering
   and planning methods (e.g., inter-domain peering optimization in IP
   vs. dealing with physical impairments in DWDM, or very different
   time scales). In addition, customers and customer needs vary
   between a packet and an optical network, and it is not uncommon to
   use different vendors in both domains. Last but not least,
   state-of-the-art packet and optical networks use sophisticated but
   complex technologies, and for a network engineer, it may not be
   trivial to be a full expert in both areas. As a result, packet and
   optical networks are often managed by different technical and
   organizational silos.

   This separation is inefficient for many reasons. Both capital
   expenditure (CAPEX) and operational expenditure (OPEX) could be
   significantly reduced by better integrating the packet and the
   optical network. Multi-layer online topology insight can speed up
   troubleshooting (e.g., alarm correlation) and network operation
   (e.g., coordination of maintenance events), multi-layer offline
   topology inventory can improve service quality (e.g., detection of
   diversity constraint violations), and multi-layer traffic
   engineering can use the available network capacity more efficiently
   (e.g., coordination of restoration). In addition, provisioning
   workflows can be simplified or automated as needed across layers
   (e.g., to achieve bandwidth on demand, or to perform maintenance
   events).

   Fully leveraging these benefits requires integration between the
   management and control of the packet and the optical network. The
   Abstraction and Control of TE Networks (ACTN) framework outlines
   the functional components and interfaces between a Multi-Domain
   Service Coordinator (MDSC) and Provisioning Network Controllers
   (PNCs) that can be used for coordinating the packet and optical
   layers.

   This document describes critical use cases for Packet Optical
   Integration (POI). It outlines how, and with what information, the
   packet and the optical layers need to interact to set up and
   operate services. The IP networks are operated as clients of the
   optical networks. The use cases are ordered by increasing level of
   integration and complexity. For each multi-layer use case, the
   document analyzes how to use the interfaces and data models of the
   ACTN architecture.

   The document also captures the current issues with ACTN and POI
   deployment. Understanding the level of standardization and the
   potential gaps helps to assess the feasibility of integration
   between the IP and optical DWDM domains in an end-to-end,
   multi-vendor network.

2. Reference Scenario

   This document uses "Reference Scenario 1" with multiple Optical
   domains and multiple Packet domains.
Figure 1 shows this scenario in the case of two Optical domains and
   two Packet domains:

                               +----------+
                               |   MDSC   |
                               +-----+----+
                                     |
                 +-----------+-------+------+-----------+
                 |           |              |           |
            +----+----+ +----+----+   +----+----+ +----+----+
            | P-PNC 1 | | O-PNC 1 |   | O-PNC 2 | | P-PNC 2 |
            +----+----+ +----+----+   +----+----+ +----+----+
                 |           |              |           |
                 |            \            /            |
       +---------+---------+   \          /   +---------+---------+
    CE /  PE          ASBR  \   |        |   /  ASBR          PE  \ CE
    o--/---o           o---\-+--+        +--+-/---o           o---\--o
       \   :           :   /    |        |    \   :           :   /
        \  : AS Domain 1:  /    |        |     \  : AS Domain 2:  /
         +-:-----------:--+     |        |      +-:-----------:--+
           :           :        |        |        :           :
           :           :        |        |        :           :
      +----:-----------:--------+        +--------:-----------:----+
     /     :           :         \      /         :           :     \
    /      o...........o          \    /          o...........o      \
    \        Optical              /    \            Optical          /
     \       Domain 1            /      \           Domain 2        /
      +-------------------------+        +--------------------------+

                     Figure 1 - Reference Scenario 1

   The ACTN architecture, defined in [RFC8453], is used to control
   this multi-domain network, where each Packet PNC (P-PNC) is
   responsible for controlling its IP domain (AS) and each Optical PNC
   (O-PNC) is responsible for controlling its Optical domain.

   The MDSC is responsible for coordinating the whole multi-domain,
   multi-layer (Packet and Optical) network. A standard interface
   (MPI) allows the MDSC to interact with the different Provisioning
   Network Controllers (O/P-PNCs).

   The MPI presents an abstracted topology to the MDSC, hiding
   technology-specific aspects of the network and hiding topology
   details depending on the policy chosen regarding the level of
   abstraction supported. The level of abstraction can be obtained
   based on P-PNC and O-PNC configuration parameters (e.g., provide
   the potential connectivity between any PE and any ASBR in an
   MPLS-TE network).

   The MDSC in Figure 1 is responsible for multi-domain and
   multi-layer coordination across multiple Packet and Optical
   domains, as well as for providing IP services to different CNCs at
   its CMIs, using YANG-based service models (e.g., L2SM [RFC8466]
   and L3SM [RFC8299]).

   The multi-domain coordination mechanisms for the IP tunnels
   supporting these IP services are described in Section 5. In some
   cases, the MDSC could also rely on the multi-layer POI mechanisms
   described in this draft to support multi-layer optimizations for
   these IP services and tunnels.

   In the network scenario of Figure 1, it is assumed that:

   o  The domain boundaries between the IP and Optical domains are
      congruent. In other words, one Optical domain supports
      connectivity between routers in one and only one Packet domain;

   o  Inter-domain links exist only between Packet domains (i.e.,
      between ASBR routers) and between Packet and Optical domains
      (i.e., between routers and ROADMs). In other words, there are
      no inter-domain links between Optical domains;

   o  The interfaces between the routers and the ROADMs are Ethernet
      physical interfaces;

   o  The interfaces between the ASBR routers are Ethernet physical
      interfaces.

2.1. Generic Assumptions

   This section describes general assumptions which are applicable to
   all the MPI interfaces between each PNC (Optical or Packet) and
   the MDSC, and to all the scenarios discussed in this document.

   The data models used on these interfaces are assumed to use the
   YANG 1.1 Data Modeling Language, as defined in [RFC7950].

   The RESTCONF protocol, as defined in [RFC8040], using the JSON
   representation defined in [RFC7951], is assumed to be used at
   these interfaces.

   As required by [RFC8040], the "ietf-yang-library" YANG module
   defined in [RFC8525] is used to allow the MDSC to discover the set
   of YANG modules supported by each PNC at its MPI.
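   For illustration only, the MDSC could discover the capabilities of
   an O-PNC with a RESTCONF request like the one sketched below. The
   controller address and the reported modules and revisions are
   hypothetical; the actual list depends on the PNC implementation:

      GET /restconf/data/ietf-yang-library:yang-library HTTP/1.1
      Host: o-pnc-1.example.com
      Accept: application/yang-data+json

      HTTP/1.1 200 OK
      Content-Type: application/yang-data+json

      {
        "ietf-yang-library:yang-library": {
          "module-set": [
            {
              "name": "mpi-modules",
              "module": [
                {"name": "ietf-network",
                 "revision": "2018-02-26"},
                {"name": "ietf-network-topology",
                 "revision": "2018-02-26"},
                {"name": "ietf-te-topology",
                 "revision": "2019-02-07"}
              ]
            }
          ]
        }
      }

   The MDSC can then verify that the modules listed in the following
   sections are supported before attempting any topology retrieval or
   provisioning operation.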
3. Multi-Layer Topology Coordination

   In this scenario, the MDSC needs to discover the network topology
   at both the WDM and IP layers, in terms of nodes (NEs) and links,
   including inter-AS domain links as well as cross-layer links.

   Each PNC provides to the MDSC an abstract topology view of the WDM
   or IP topology of the domain it controls. This topology is
   abstracted in the sense that some detailed NE information is
   hidden at the MPI, and all or some of the NEs and related physical
   links are exposed as abstract nodes and logical (virtual) links,
   depending on the level of abstraction the user requires. This
   detailed information is vital to understand both the inter-AS
   domain links (seen by each controller as UNI interfaces, but by
   the MDSC as I-NNI interfaces) and the cross-layer mapping between
   the IP and WDM layers.

   The MDSC also maintains an up-to-date network inventory of both
   the IP and WDM layers, using the IETF notifications received
   through the MPIs from the PNCs.

   For the cross-layer links, the MDSC needs to be capable of
   automatically correlating physical port information from the
   routers (single links, or bundled links for link aggregation
   groups - LAGs) with client ports in the ROADMs.

3.1. Discovery of existing OCh, ODU, IP links, IP tunnels and IP
     services

   Typically, an MDSC must be able to automatically discover the
   network topology of both the WDM and IP layers (NEs, links, and
   links between the two domains). This assumes the following:

   o  An abstract view of the WDM and IP topology must be available;

   o  The MDSC must keep an up-to-date network inventory of both the
      IP and WDM layers, and it should be possible to correlate such
      information (e.g., which port, lambda/OTSi and direction are
      used by a specific IP service on the WDM equipment);

   o  It should be possible at the MDSC level to easily correlate WDM
      and IP layer alarms to speed up troubleshooting.

3.1.1. Common YANG Models used at the MPI

   Both Optical and Packet PNCs use the following common topology
   YANG models at the MPI to report their abstract topologies:

   o  The Base Network Model, defined in the "ietf-network" YANG
      module of [RFC8345];

   o  The Base Network Topology Model, defined in the "ietf-network-
      topology" YANG module of [RFC8345], which augments the Base
      Network Model;

   o  The TE Topology Model, defined in the "ietf-te-topology" YANG
      module of [TE-TOPO], which augments the Base Network Topology
      Model.

   These IETF YANG models are generic and are augmented by
   technology-specific YANG modules, as described in the following
   sections.
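   For illustration only, the MDSC could retrieve the abstract
   topology exposed by a PNC with a request like the one sketched
   below (the network and node identifiers, and the abridged
   response, are hypothetical):

      GET /restconf/data/ietf-network:networks HTTP/1.1
      Host: o-pnc-1.example.com
      Accept: application/yang-data+json

      HTTP/1.1 200 OK
      Content-Type: application/yang-data+json

      {
        "ietf-network:networks": {
          "network": [
            {
              "network-id": "wdm-domain-1",
              "network-types": {
                "ietf-te-topology:te-topology": {}
              },
              "node": [
                {"node-id": "roadm-1"},
                {"node-id": "roadm-2"}
              ],
              "ietf-network-topology:link": [
                {"link-id": "roadm-1,roadm-2"}
              ]
            }
          ]
        }
      }

   The "network-types" presence container indicates which
   technology-specific augmentations (e.g., WSON, flexi-grid or
   Ethernet) apply to the reported topology.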
3.1.1.1. YANG models used at the Optical MPIs

   The Optical PNC also uses at least the following technology-
   specific topology YANG models, providing WDM and Ethernet
   technology-specific augmentations of the generic TE Topology
   Model:

   o  The WSON Topology Model, defined in the "ietf-wson-topology"
      YANG module of [WSON-TOPO], or the Flexi-grid Topology Model,
      defined in the "ietf-flexi-grid-topology" YANG module of
      [Flexi-TOPO];

   o  The Ethernet Topology Model, defined in the "ietf-eth-te-
      topology" YANG module of [CLIENT-TOPO].

   The WSON Topology Model or, alternatively, the Flexi-grid Topology
   Model is used to report the fixed-grid or, respectively, the
   flexible-grid DWDM network topology (e.g., ROADMs and OMS links).

   The Ethernet Topology Model is used to report the Ethernet access
   links on the edge ROADMs.

3.1.1.2. Required YANG models at the Packet MPIs

   The Packet PNC also uses at least the following technology-
   specific topology YANG models, providing IP and Ethernet
   technology-specific augmentations of the generic Topology Models:

   o  The L3 Topology Model, defined in the "ietf-l3-unicast-
      topology" YANG module of [RFC8346], which augments the Base
      Network Topology Model;

   o  The Ethernet Topology Model, defined in the "ietf-eth-te-
      topology" YANG module of [CLIENT-TOPO], which augments the TE
      Topology Model;

   o  The L3-TE Topology Model, defined in the "ietf-l3-te-topology"
      YANG module of [L3-TE-TOPO], which augments the L3 Topology
      Model.

   The Ethernet Topology Model is used to report the Ethernet links
   between the IP routers and the edge ROADMs, as well as the
   inter-domain links between ASBRs, while the L3 Topology Model is
   used to report the IP network topology (e.g., IP routers and IP
   links).

   The L3-TE Topology Model reports the relationship between the IP
   routers and LTPs provided by the L3 Topology Model and the
   underlying Ethernet nodes and LTPs provided by the Ethernet
   Topology Model.

3.1.2. Inter-domain Link Discovery

   In the reference network of Figure 1, there are two types of
   inter-domain links:

   o  Links between two IP domains/ASBRs (ASes);

   o  Links between an IP router and a ROADM.

   Both types of links are Ethernet physical links.

   The inter-domain link information is reported to the MDSC by the
   two adjacent PNCs, controlling the two ends of the inter-domain
   link, using the Ethernet Topology Model defined in [CLIENT-TOPO].

   The MDSC can understand how to merge these inter-domain Ethernet
   links together using the plug-id attribute defined in the TE
   Topology Model [TE-TOPO], as described in section 4.3 of
   [TE-TOPO].

   A more detailed description of how the plug-id can be used to
   discover inter-domain links is also provided in section 5.1.4 of
   [TNBI].

   Both types of inter-domain Ethernet links are discovered using the
   plug-id attributes reported in the Ethernet Topologies exposed by
   the two adjacent PNCs.

   The MDSC, when discovering an Ethernet inter-domain link between
   two Ethernet LTPs which are associated with two IP LTPs, reported
   in the IP Topologies exposed by the two adjacent P-PNCs, can also
   discover an inter-domain IP link/adjacency between these two IP
   LTPs.

   Two options are possible to discover these inter-domain Ethernet
   links:

   1. Static configuration;

   2. LLDP [IEEE 802.1AB] automatic discovery.

   Since static configuration imposes the administrative burden of
   configuring network-wide unique identifiers, the automatic
   discovery solution based on LLDP is preferable when LLDP is
   supported.

   As outlined in [TNBI], the encoding of the plug-id namespace, as
   well as of the LLDP information within the plug-id value, is
   implementation-specific and needs to be consistent across all the
   PNCs.
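   For illustration only, the fragment below sketches how a PNC might
   expose its end of an inter-domain Ethernet link; the link-id and
   the plug-id value are hypothetical:

      {
        "ietf-network-topology:link": [
          {
            "link-id": "pe1-eth-to-roadm1",
            "ietf-te-topology:te": {
              "te-link-attributes": {
                "external-domain": {
                  "plug-id": 83886081
                }
              }
            }
          }
        ]
      }

   A link reported by the adjacent PNC with the same plug-id value
   (83886081) would be recognized by the MDSC as the other end of the
   same inter-domain link.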
3.2. Provisioning of an IP Link/LAG over DWDM

   In this scenario, the MDSC needs to coordinate the creation of an
   IP link, or a LAG, between two routers through a DWDM network.

   It is assumed that the MDSC has already discovered the whole
   network topology, as described in section 3.1.

3.2.1. YANG models used at the MPIs

3.2.1.1. YANG models used at the Optical MPIs

   The Optical PNC uses at least the following YANG models:

   o  The TE Tunnel Model, defined in the "ietf-te" YANG module of
      [TE-TUNNEL];

   o  The WSON Tunnel Model, defined in the "ietf-wson-tunnel" YANG
      module of [WSON-TUNNEL], or the Flexi-grid Media Channel Model,
      defined in the "ietf-flexi-grid-media-channel" YANG module of
      [Flexi-MC];

   o  The Ethernet Client Signal Model, defined in the "ietf-eth-
      tran-service" YANG module of [CLIENT-SIGNAL].

   The TE Tunnel Model is generic and is augmented by technology-
   specific models such as the WSON Tunnel Model and the Flexi-grid
   Media Channel Model.

   The WSON Tunnel Model or, alternatively, the Flexi-grid Media
   Channel Model is used to set up connectivity within the DWDM
   network, depending on whether the DWDM optical network is based on
   fixed-grid or flexible-grid.

   The Ethernet Client Signal Model is used to configure the steering
   of the Ethernet client traffic between Ethernet access links and
   TE Tunnels, which in this case could be either WSON Tunnels or
   Flexi-grid Media Channels. This model is generic and applies to
   any technology-specific TE Tunnel: technology-specific attributes
   are provided by the technology-specific models which augment the
   generic TE Tunnel Model.

3.2.1.2. Required YANG models at the Packet MPIs

   The Packet PNC uses at least the following topology YANG models:

   o  The Base Network Model, defined in the "ietf-network" YANG
      module of [RFC8345] (see section 3.1.1);

   o  The Base Network Topology Model, defined in the "ietf-network-
      topology" YANG module of [RFC8345] (see section 3.1.1);

   o  The L3 Topology Model, defined in the "ietf-l3-unicast-
      topology" YANG module of [RFC8346] (see section 3.1.1.2).

   If, as discussed in section 3.2.2, IP links created over DWDM can
   be automatically discovered by the P-PNC, the IP Topology is
   needed only to report these IP links after they have been
   discovered by the P-PNC.

   The IP Topology can also be used to configure the IP links created
   over DWDM.

3.2.2. IP Link Setup Procedure

   The MDSC requests the O-PNC to set up a WDM Tunnel (either a WSON
   Tunnel or a Flexi-grid Tunnel) within the DWDM network between the
   two Optical Transponders (OTs) associated with the two access
   links.

   The Optical Transponders are reported by the O-PNC as Tunnel
   Termination Points (TTPs), defined in [TE-TOPO], within the WDM
   Topology. The association between the Ethernet access link and the
   WDM TTP is reported by the Inter-Layer Lock (ILL) identifiers,
   defined in [TE-TOPO], reported by the O-PNC within both the
   Ethernet Topology and the WDM Topology.

   The MDSC also requests the O-PNC to steer the Ethernet client
   traffic between the two Ethernet access links over the WDM Tunnel.

   After the WDM Tunnel has been set up and the client traffic
   steering has been configured, the two IP routers can exchange
   Ethernet packets between themselves, including LLDP messages.

   If LLDP [IEEE 802.1AB] is used between the two routers, the P-PNC
   can automatically discover the IP link being set up by the MDSC.
   The IP LTPs terminating this IP link are supported by the ETH LTPs
   terminating the two access links.
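   For illustration only, the MDSC could request the WDM Tunnel with
   a RESTCONF request like the one sketched below. The tunnel name
   and the TE node identifiers of the two ROADMs are hypothetical,
   and the WSON-specific attributes (omitted here) would be added
   through the [WSON-TUNNEL] augmentations:

      POST /restconf/data/ietf-te:te/tunnels HTTP/1.1
      Host: o-pnc-1.example.com
      Content-Type: application/yang-data+json

      {
        "ietf-te:tunnel": [
          {
            "name": "wdm-tunnel-ot1-ot2",
            "source": "10.1.0.1",
            "destination": "10.1.0.2"
          }
        ]
      }

   A similar request, using the "ietf-eth-tran-service" module of
   [CLIENT-SIGNAL], would then configure the steering of the Ethernet
   client traffic from the two access links onto this WDM Tunnel.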
   Otherwise, the MDSC needs to request the P-PNC to configure an IP
   link between the two routers: the MDSC also configures the two ETH
   LTPs which support the two IP LTPs terminating this IP link.

3.3. Provisioning of an IP link/LAG over DWDM with path constraints

   The MDSC must be able to provision an IP link with either a fixed
   maximum latency constraint or a minimum available latency
   constraint, within each domain as well as across domains when
   required (e.g., when triggered by monitoring the traffic KPI
   trends for this IP link). Through the O-PNC, a fixed-latency or
   minimum-latency path is chosen between the PE and the ASBR in each
   optical domain. The MDSC then needs to select the inter-AS domain
   link with the lowest latency (in case there are several
   interconnection links), so that the low-latency constraint is
   fulfilled end-to-end across the domains. A sketch of a
   latency-constrained tunnel request is shown at the end of this
   section.

   The MDSC must be able to automatically create two IP links between
   two routers, over the DWDM network, with physical path diversity
   (avoiding the SRLGs communicated by the O-PNCs to the MDSC).

   The MDSC is responsible for routing each of these IP links through
   different inter-AS domain links, so that the end-to-end IP links
   are fully disjoint.

   The corresponding optical connectivity must be set up by the MDSC
   through the O-PNCs.
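   For illustration only, a latency bound on the underlying WDM
   Tunnel could be expressed with the path constraints of the generic
   TE Tunnel Model, as sketched below. The tunnel name and the bound
   of 5000 microseconds are hypothetical, and the container names
   follow a recent revision of [TE-TUNNEL], so they may differ:

      {
        "ietf-te:tunnel": [
          {
            "name": "wdm-tunnel-pe1-asbr1-low-latency",
            "p2p-primary-paths": {
              "p2p-primary-path": [
                {
                  "name": "primary",
                  "path-constraints": {
                    "path-metric-bounds": {
                      "path-metric-bound": [
                        {
                          "metric-type":
                            "ietf-te-types:path-metric-delay-average",
                          "upper-bound": "5000"
                        }
                      ]
                    }
                  }
                }
              ]
            }
          }
        ]
      }

   SRLG diversity between the two IP links could similarly be
   requested through the exclusion and disjointness attributes of the
   same model.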
3.3.1. YANG models used at the MPIs

   This section is for further study.

3.4. Provisioning Link Members to an existing LAG

   When adding a new link member to a LAG between two routers, with
   or without path latency/diversity constraints, the MDSC must be
   able to force the additional optical connection to use the same
   physical path in the optical domain where the LAG capacity
   increase is required.

3.4.1. YANG Models used at the MPIs

   This section is for further study.

4. Multi-Layer Recovery Coordination

4.1. Ensuring Network Resiliency during Maintenance Events

   Before a planned maintenance operation on the DWDM network takes
   place, IP traffic should be moved hitlessly to another link.

   The MDSC must reroute the IP traffic before the event takes place.
   It should be possible to lock the IP traffic to the protection
   route until the maintenance event is finished, unless a fault
   occurs on that path.

4.2. Router Port Failure

   The focus is on a client-side protection scheme between the IP
   router and the reconfigurable ROADM. The scenario is to define
   only one port in the routers and in the ROADM muxponder boards at
   both ends as a backup port, to recover from any other port failure
   on the client side of the ROADM (either on the router port side,
   on the muxponder side, or on the link between them). When a
   client-side port failure occurs, alarms are raised to the MDSC by
   the P-PNC and the O-PNC (port status down, LOS, etc.). The MDSC
   checks with the O-PNC(s) that there is no optical failure in the
   optical layer.

   There can be two cases here:

   a) A LAG was defined between the two end routers. The MDSC, after
      checking that the optical layer is fine between the two end
      ROADMs, triggers the ROADM configuration so that the router
      backup port, with its associated muxponder port, can reuse the
      OCh that was previously in use by the failed router port, and
      adds the new link to the LAG on the failure side.

      While the ROADM reconfiguration takes place, the IP/MPLS
      traffic uses the reduced bandwidth of the IP link bundle,
      discarding lower-priority traffic if required. Once the backup
      port has been reconfigured to reuse the existing OCh and the
      new link has been added to the LAG, the original bandwidth is
      restored between the end routers.

      Note: in this LAG scenario, it is assumed that BFD is running
      at the LAG level, so that nothing is triggered at the MPLS
      level when one of the LAG link members fails.

   b) If there is no LAG, the scenario is less clear, since a router
      port failure would first automatically trigger (through BFD
      failure detection) a sub-50ms protection at the MPLS level:
      FRR (MPLS RSVP-TE case) or TI-LFA (MPLS-based SR-TE case)
      through a protection port. At the same time, the MDSC, after
      checking that the optical network connection is still fine,
      would trigger the reconfiguration of the backup port of the
      router and of the ROADM muxponder to reuse the same OCh as the
      one originally used by the failed router port. Once everything
      has been correctly configured, the MDSC Global PCE could
      suggest that the operator trigger a re-optimization of the
      backup MPLS path, to move the traffic back to the MPLS primary
      path through the backup port of the router and the original
      OCh, if the overall cost, latency, etc. is improved. However,
      in this scenario, the router needs a protection port plus a
      backup port, which does not lead to clear port savings.

5. Service Coordination for Multi-Layer Network

   [Editors' Note] This text has been taken from section 2 of
   draft-lee-teas-actn-poi-applicability-00 and needs to be
   reconciled with the other sections (the introduction in
   particular) of this document.

   This section provides a number of deployment scenarios for Packet
   and Optical Integration (POI). Specifically, it describes a
   deployment scenario in which the ACTN hierarchy is deployed to
   control a multi-layer and multi-domain network via two IP/MPLS
   PNCs and two Optical PNCs, in coordination with an L-MDSC. This
   scenario is in the context of an upper-layer service configuration
   (e.g., L3VPN) across two AS domains, which is transported by two
   underlay transport domains (e.g., OTN).

   The provisioning of the L3VPN service is outside the ACTN scope,
   but it is worth showing how L3VPN service provisioning is
   integrated for end-to-end service fulfilment in the ACTN context.
   An example of the service configuration function in the
   Service/Network Orchestrator is discussed in [BGP-L3VPN].

   Figure 2 shows the ACTN POI Reference Architecture, depicting the
   ACTN components as well as the non-ACTN components that are
   necessary for end-to-end service fulfilment. Both the IP/MPLS and
   Optical networks are multi-domain. Each IP/MPLS domain network is
   controlled by its domain controller, and all the optical domains
   are controlled by a hierarchy of optical domain controllers. The
   L-MDSC function of the optical domain controllers provides an
   abstract view of the whole optical network to the Service/Network
   Orchestrator. It is assumed that all these components of the
   network belong to a single network operator domain, under the
   control of the Service/Network Orchestrator.
         Customer
         +-------------------------------+
         |  +-----+      +------------+  |
         |  | CNC |------| Service Op.|  |
         |  +-----+      +------------+  |
         +-----|------------------|------+
               | ACTN interface   | Non-ACTN interface
               | CMI              | (Customer Service model)
    Service/   |                  |
    Network    |                  |
    Orchestrator                  |
    +----------|------------------|------------------------+
    |  +----------------------------------+                |
    |  |MDSC TE & Service Mapping Function|                |
    |  +----------------------------------+                |
    |      |                      |                        |
    |  +------------------+   +---------------------+      |
    |  | MDSC NP Function |---|Service Config. Func.|      |
    |  +------------------+   +---------------------+      |
    +------|----------------------|------------------------+
      MPI  |                      | Non-ACTN interface
           |                      | (Service configuration,
           |                      |  to the Serv. functions of
           |                      |  the two IP/MPLS Domain
           |                      |  Controllers)
      +----+---------+--------+   |
      |              |        |   |
      v              v        v   v
    IP/MPLS       Optical      IP/MPLS
    Domain 1      Domain       Domain 2
    Controller    Controller   Controller
    +-----------------+ +-----------+ +-------------------+
    | +-----+ +-----+ | |  +-----+  | | +------+ +------+ |
    | |PNC1 | |Serv.| | |  | PNC |  | | | PNC2 | |Serv.| |
    | +-----+ +-----+ | |  +-----+  | | +------+ +------+ |
    +--------+--------+ +-----+-----+ +---------+---------+
       SBI   |                |                 |  SBI
             v                |                 v
    +------------------+      |       +------------------+
   /  IP/MPLS Network   \     |      /  IP/MPLS Network   \
   +---------------------+    | SBI +---------------------+
                              v
            +-------------------------------+
           /        Optical Network          \
           +---------------------------------+

              Figure 2 - ACTN POI Reference Architecture

   Figure 2 depicts:

   o  CMI (CNC-MDSC Interface): the ACTN interface between the CNC
      and the MDSC function in the Service/Network Orchestrator. This
      is where the TE & Service Mapping model [TSM] and either the
      ACTN VN model [ACTN-VN] or the TE Topology model [TE-TOPO] are
      exchanged.

   o  Customer Service Model Interface: a non-ACTN interface between
      the Customer Portal and the Service Configuration Function of
      the Service/Network Orchestrator. This is the interface where
      L3SM information is exchanged.

   o  MPI (MDSC-PNC Interface): the ACTN interface between the MDSC
      and the IP/MPLS Domain Controllers and Optical Domain
      Controllers.

   o  Service Configuration Interface: a non-ACTN interface between
      the Service/Network Orchestrator and the IP/MPLS Domain
      Controllers, used to coordinate the L2/L3VPN multi-domain
      service configuration. This is where service-specific
      information, such as the VPN and the VPN binding policy (e.g.,
      new underlay tunnel creation for isolation), is conveyed.

   o  SBI (South Bound Interface): a non-ACTN interface between each
      domain controller and the network elements in its domain.

   Note that the MPI and the Service Configuration Interface can be
   implemented as the same interface providing the two different
   capabilities: the split is functional and does not necessarily
   imply two separate logical interfaces.

   The following sections describe the key functions that are
   necessary for the vertical as well as horizontal end-to-end
   service fulfilment of POI.

5.1. L2/L3VPN/VN Service Request by the Customer

   A customer can request L3VPN services with TE requirements using
   the ACTN CMI models (i.e., the ACTN VN YANG and TE & Service
   Mapping YANG models) together with non-ACTN customer service
   models such as the L2SM/L3SM YANG models.
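   For illustration only, the creation of an L3VPN service could be
   requested over the Customer Service Model Interface with an L3SM
   [RFC8299] request like the one sketched below (the vpn-id is
   hypothetical, and most of the service details, e.g., the sites,
   are omitted):

      POST /restconf/data/ietf-l3vpn-svc:l3vpn-svc/vpn-services
           HTTP/1.1
      Host: orchestrator.example.com
      Content-Type: application/yang-data+json

      {
        "ietf-l3vpn-svc:vpn-service": [
          {
            "vpn-id": "l3vpn-acme-001",
            "vpn-service-topology": "any-to-any"
          }
        ]
      }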
   Figure 3 shows the detailed control flow between the customer and
   the Service/Network Orchestrator to instantiate an L2/L3VPN/VN
   service request.

         Customer
         +-------------------------------------------+
         |  +-----+                  +------------+  |
         |  | CNC |------------------| Service Op.|  |
         |  +-----+                  +------------+  |
         +-----|-^-------------------------|---------+
               | |                         |
    2. VN &    | | 3. Update VN            | 1. L2/3SM
       TE/Svc  | |    & TE/Svc             |
       Mapping v |    mapping              v
    Service/Network                        |
    Orchestrator                           |
    +----------|---------------------------|-------------+
    |  +----------------------------------+|             |
    |  |MDSC TE & Service Mapping Function||             |
    |  +----------------------------------+|             |
    |      |                               |             |
    |  +------------------+   +---------------------+    |
    |  | MDSC NP Function |---|Service Config. Func.|    |
    |  +------------------+   +---------------------+    |
    +------|----------------------------------|----------+

    NP: Network Provisioning

               Figure 3 - Service Request Process

   o  The ACTN VN YANG model provides the VN Service configuration,
      as specified in [ACTN-VN].

      o  It provides the profile of the VN in terms of VN members,
         each of which corresponds to an edge-to-edge link between
         customer end-points (VNAPs). It also provides the mappings
         between the VNAPs and the LTPs, and between the connectivity
         matrix and the VN members, from which the associated traffic
         matrix (e.g., bandwidth, latency, protection level, etc.) of
         each VN member is expressed (i.e., via the TE Topology's
         connectivity matrix).

      o  The model also provides VN-level preference information
         (e.g., VN member diversity) and the VN-level administrative
         and operational states.

   o  The L2SM YANG model [RFC8466] provides all the L2VPN service
      configuration and site information from a customer/service
      point of view.

   o  The L3SM YANG model [RFC8299] provides all the L3VPN service
      configuration and site information from a customer/service
      point of view.

   o  The TE & Service Mapping YANG model [TSM] provides the
      TE-service mapping as well as the site mapping.

      o  The TE-service mapping provides the mapping of an L3VPN
         instance from [RFC8299] to the corresponding ACTN VN
         instance.

      o  The TE-service mapping also provides the service mapping
         requirement type, describing how each L2/L3VPN/VN instance
         is created with respect to the underlay TE tunnels (e.g.,
         whether the L3VPN requires a new and isolated set of TE
         underlay tunnels or not). See Section 5.2 for a detailed
         discussion of the mapping requirement types.

      o  The site mapping provides the site reference information
         across the L2/L3VPN Site ID, the ACTN VN Access Point ID,
         and the LTP of the access link.
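   For illustration only, a VN with a single VN member between two
   access points could be conveyed over the CMI with a fragment like
   the one sketched below. This is loosely based on [ACTN-VN], which
   is work in progress, so the module and node names, as well as the
   identifiers, are assumptions:

      {
        "ietf-vn:vn": {
          "vn-list": [
            {
              "vn-id": "vn-acme-001",
              "vn-member-list": [
                {
                  "vn-member-id": 1,
                  "src": {"src-ap-id": "ap-ce1-pe1"},
                  "dest": {"dest-ap-id": "ap-ce2-pe2"}
                }
              ]
            }
          ]
        }
      }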
5.2. Service and Network Orchestration

   The Service/Network Orchestrator shown in Figure 2 interfaces the
   customer and decouples the ACTN MDSC functions from the customer
   service configuration functions.

   An implementation can choose to split the Service/Network
   Orchestration functions, as described in [RFC8309] and in section
   4.2 of [RFC8453], between a top-level Service Orchestrator
   interfacing the customer and two lower-level Network
   Orchestrators, one controlling a multi-domain IP/MPLS network and
   the other controlling the Optical networks.

   Another implementation can choose to combine the L-MDSC functions
   of the Optical hierarchical controller, providing multi-domain
   coordination of the Optical network, together with the MDSC
   functions in the Service/Network Orchestrator.

   Without loss of generality, it is assumed that the Service/Network
   Orchestrator depicted in Figure 2 includes all the functionalities
   that would be required in a hierarchical orchestration case.

   One of the important service functions the Service/Network
   Orchestrator performs is to identify which TE Tunnels should carry
   the L3VPN traffic (from the TE & Service Mapping model) and to
   relay this information to the IP/MPLS Domain Controllers, via the
   non-ACTN interface, to ensure that the proper IP/VRF forwarding
   tables are populated according to the TE binding requirement for
   the L3VPN.

   [Editor's Note] What mechanism would convey, on the interface to
   the IP/MPLS Domain Controllers as well as on the SBI (between the
   IP/MPLS Domain Controllers and the IP/MPLS PE routers), the TE
   binding policy dynamically for the L3VPN? Typically, the VRF is a
   function of the devices that participate in MP-BGP in an MPLS VPN.
   With current MP-BGP implementations in MPLS VPNs, the VRF's BGP
   next hop is the destination PE, and the mapping to a tunnel
   (either an LDP or a BGP tunnel) toward the destination PE is done
   automatically, without any configuration. The impact on the PE VRF
   operation when the tunnel is an optical bypass tunnel, which
   participates in neither LDP nor BGP, is to be determined.

   Figure 4 shows the Service/Network Orchestrator's interactions
   with the various domain controllers to instantiate the tunnel
   provisioning as well as the service configuration:

   +------|--------------------------------|-----------------+
   |  +----------------------------------+ |                 |
   |  |MDSC TE & Service Mapping Function| |                 |
   |  +----------------------------------+ |                 |
   |      |                                |                 |
   |  +------------------+   +---------------------+         |
   |  | MDSC NP Function |---|Service Config. Func.|         |
   |  +------------------+   +---------------------+         |
   +------|------------------------------|--------------------+
          |                              |
          |  2. Inter-layer              |  3. VPN service
          |     tunnel binding           |     provisioning
          |  1. Optical tunnel           |
          |     creation                 |
     +----+---------+----------+    +----+---------------+
     |              |          |    |                    |
     v              v          v    v                    v
   +------------------+  +---------+   +------------------+
   | +-----+ +-----+  |  | +-----+ |   | +-----+ +-----+  |
   | |PNC1 | |Serv.|  |  | | PNC | |   | |PNC2 | |Serv.|  |
   | +-----+ +-----+  |  | +-----+ |   | +-----+ +-----+  |
   +------------------+  +---------+   +------------------+

       Figure 4 - Service and Network Orchestration Process

   The TE binding requirement types [TSM] are:

   1. Hard Isolation with deterministic latency: The customer
      requests an L3VPN service [RFC8299] using a set of TE Tunnels
      with a deterministic latency requirement, which can neither be
      shared with other L3VPN services nor compete for bandwidth with
      other tunnels.

   2. Hard Isolation: This is similar to the above case, but without
      the deterministic latency requirement.

   3. Soft Isolation: The customer requests an L3VPN service using a
      set of MPLS-TE tunnels which cannot be shared with other L3VPN
      services.

   4. Sharing: The customer accepts sharing the MPLS-TE Tunnels
      supporting its L3VPN service with other services.

   For the first three types, there could be additional TE binding
   requirements with respect to different VN members of the same VN
   associated with an L3VPN service. For the first two types, VN
   members can be hard-isolated, soft-isolated, or shared. For the
   third type, VN members can be soft-isolated or shared.
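   For illustration only, the TE binding requirement could be
   conveyed over the CMI with a TE & Service Mapping fragment like
   the one sketched below. [TSM] is work in progress, so the module
   name, the node names and the identity value used here are
   assumptions:

      {
        "ietf-l3sm-te-service-mapping:te-service-mapping": {
          "mapping": {
            "map-type": "hard-isolation",
            "vn-ref": "vn-acme-001"
          }
        }
      }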
   o  When the "Hard Isolation with or without deterministic latency"
      TE binding requirement (i.e., the first or the second type) is
      applied for an L3VPN, a new optical-layer tunnel has to be
      created (Step 1 in Figure 4). This operation requires the
      following control-level mechanisms:

      o  The MDSC function of the Service/Network Orchestrator
         identifies the domains in the IP/MPLS layer through which
         the VPN traffic needs to be forwarded.

      o  Once the IP/MPLS layer domains are determined, the MDSC
         function of the Service/Network Orchestrator needs to
         identify the set of optical ingress and egress points of the
         underlay optical tunnels providing connectivity between the
         IP/MPLS layer domains.

      o  Once both the IP/MPLS layer domains and the optical layer
         are determined, the MDSC needs to identify the inter-layer
         peering points in both IP/MPLS domains as well as in the
         optical domain(s). This implies that the L3VPN traffic will
         be forwarded to an MPLS-TE tunnel that starts at the ingress
         PE (in one IP/MPLS domain) and terminates at the egress PE
         (in another IP/MPLS domain), via a dedicated underlay
         optical tunnel.

   o  The MDSC function of the Service/Network Orchestrator needs to
      first request the optical L-MDSC to instantiate an optical
      tunnel between the optical ingress and egress points. This is
      referred to as optical tunnel creation (Step 1 in Figure 4).
      Note that it is the L-MDSC's responsibility to perform the
      multi-domain optical coordination with its underlying optical
      PNCs when setting up a multi-domain optical tunnel.

   o  Once the optical tunnel is established, the MDSC function of
      the Service/Network Orchestrator needs to coordinate with the
      PNC functions of the IP/MPLS Domain Controllers (to which the
      ingress and egress PEs belong) the setup of a multi-domain
      MPLS-TE Tunnel between the ingress and egress PEs. This tunnel
      is carried by the underlay optical tunnel created before
      (Step 2 in Figure 4).

   o  It is the responsibility of the Service Configuration Function
      of the Service/Network Orchestrator to identify the
      interfaces/labels on both the ingress and egress PEs and to
      convey this information to both IP/MPLS Domain Controllers (to
      which the ingress and egress PEs belong) for the proper
      configuration of the L3VPN (BGP and VRF functions of the PEs)
      in their domain networks (Step 3 in Figure 4). A sketch of the
      resulting sequence of operations is shown below.
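   For illustration only, the three steps of Figure 4 could map to a
   sequence of operations like the one sketched below. The URLs,
   tunnel names and service identifiers are hypothetical, and the
   interface used for Step 3 is still an open issue, as noted in the
   Editor's Note above:

      1. Optical tunnel creation (MPI towards the L-MDSC):
         POST /restconf/data/ietf-te:te/tunnels
         body: WSON tunnel "wdm-tunnel-pe1-pe2" between the
               inter-layer interfaces of the ingress and egress PEs

      2. Inter-layer tunnel binding (MPI towards each P-PNC):
         POST /restconf/data/ietf-te:te/tunnels
         body: MPLS-TE tunnel "mpls-tunnel-pe1-pe2" whose first hop
               is the IP link supported by "wdm-tunnel-pe1-pe2"

      3. VPN service provisioning (Service Configuration Interface):
         body: L3VPN "l3vpn-acme-001" with a binding policy that
               constrains the VPN traffic to "mpls-tunnel-pe1-pe2"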
5.3. IP/MPLS Domain Controller and NE Functions

   The IP/MPLS networks are assumed to have multiple domains, where
   each domain is controlled by an IP/MPLS domain controller in which
   both the ACTN PNC functions and the non-ACTN service functions are
   performed.

   Among the functions of the IP/MPLS domain controller is the
   provisioning of the VPN service aspects, such as the VRF control
   and management for VPN services. It is assumed that BGP is running
   in the inter-domain IP/MPLS networks for the L2/L3VPN, and that
   the IP/MPLS domain controller is also responsible for configuring
   the BGP speakers within its control domain, if necessary.

   Depending on the TE binding requirement types discussed in Section
   5.2, there are two possible deployment scenarios.

5.3.1. Scenario A: Shared Tunnel Selection

   When the L2/L3VPN does not require isolation (either hard or
   soft), it can select an existing MPLS-TE and Optical tunnel
   between the ingress and egress PEs, without creating any new TE
   tunnels. Figure 5 shows this scenario.

       IP/MPLS Domain 1                 IP/MPLS Domain 2
       Controller                       Controller

       +------------------+            +------------------+
       | +-----+ +-----+  |            | +-----+ +-----+  |
       | |PNC1 | |Serv.|  |            | |PNC2 | |Serv.|  |
       | +-----+ +-----+  |            | +-----+ +-----+  |
       +--|-----------|---+            +--|-----------|---+
          | 1.Tunnel  | 2.VPN/VRF         | 1.Tunnel  | 2.VPN/VRF
          | Selection | Provisioning      | Selection | Provisioning
          V           V                   V           V
       +---------------------+         +---------------------+
    CE /  PE   tunnel 1  ASBR \       / ASBR  tunnel 2   PE   \  CE
    o--/---o.............o-----\-----/-----o.............o-----\--o
       \                      /       \                       /
        \     AS Domain 1    /         \     AS Domain 2     /
         +------------------+           +-------------------+

                            End-to-end tunnel
      <------------------------------------------------------->

         Figure 5 - IP/MPLS Domain Controller & NE Functions

   How the VPN is disseminated across the network is out of the scope
   of this document. It is assumed that MP-BGP is running in the
   IP/MPLS networks and that the VPN is made known to the ASBRs and
   PEs by each IP/MPLS domain controller. See [RFC4364] for a
   detailed description of how MP-BGP works.

   There are several functions the IP/MPLS domain controllers need to
   provide in order to facilitate tunnel selection for the VPN, at
   both the domain level and the end-to-end level.

5.3.1.1. Domain Tunnel Selection

   Each domain IP/MPLS controller is responsible for selecting its
   domain-level tunnel for the L3VPN. First, it needs to determine
   which existing tunnels fit the L2/L3VPN requirements allotted to
   the domain by the Service/Network Orchestrator (e.g., tunnel
   binding, bandwidth, latency, etc.). If there are existing tunnels
   that can satisfy the L3VPN requirements, the IP/MPLS domain
   controller selects the optimal tunnel from the candidate pool.
   Otherwise, an MPLS tunnel with modified bandwidth or a new MPLS
   tunnel needs to be set up. Note that with no isolation requirement
   for the L3VPN, an existing MPLS tunnel can be selected. With a
   soft isolation requirement for the L3VPN, an optical tunnel can be
   shared with other L2/L3VPN services, while with a hard isolation
   requirement for the L2/L3VPN, a dedicated MPLS-TE tunnel and a
   dedicated optical tunnel must be provisioned for the L2/L3VPN.

5.3.1.2. VPN/VRF Provisioning for L3VPN

   Once the domain-level tunnel is selected for a domain, the Service
   Function of the IP/MPLS domain controller maps the L3VPN to the
   selected MPLS-TE tunnel and assigns a label (e.g., an MPLS label)
   with the PE. The PE then creates a new entry for the VPN in the
   VRF forwarding table, so that when a VPN packet arrives at the PE,
   it is able to direct it to the right interface and push the label
   assigned to the VPN. When the PE forwards a VPN packet, it pushes
   the VPN label signaled by BGP and, in the case of options A and B
   [RFC4364], it also pushes the LSP label assigned to the configured
   MPLS-TE Tunnel to reach the ASBR next hop, and forwards the packet
   to the MPLS next hop of this MPLS-TE Tunnel.

   In the case of option C [RFC4364], the PE pushes one MPLS LSP
   label signaled by BGP to reach the destination PE and a second
   MPLS LSP label assigned to the configured MPLS-TE Tunnel to reach
   the ASBR next hop, and forwards the packet to the MPLS next hop of
   this MPLS-TE Tunnel.
   With Option C, the ASBR of the first domain interfacing the next
   domain should keep the VPN label intact towards the ASBR of the
   next domain, so that the ASBR in the next domain sees the VPN
   packets as if they were coming from a CE. With Option B, the VPN
   label is swapped. With Option A, the VPN label is removed.

   With Options A and B, the ASBR of the second domain performs the
   same procedure, which includes the VPN/VRF tunnel mapping and the
   interface/label assignment with the IP/MPLS domain controller.
   With Option A, the ASBR operations are the same as those of the
   PEs. With Option B, the ASBR operates with VPN labels, so it can
   see the VPN the traffic belongs to. With Option C, the ASBR
   operates with the end-to-end tunnel labels, so it may not be aware
   of the VPN the traffic belongs to.

   This process is repeated in each domain. The PE of the last
   domain, interfacing the destination CE, should recognize the VPN
   label when the VPN packets arrive, and thus pop the VPN label and
   forward the packets to the CE.

5.3.1.3. VSI Provisioning for L2VPN

   The VSI provisioning for L2VPN is similar to the VPN/VRF
   provisioning for L3VPN. L2VPN service types include:

   o  Point-to-point Virtual Private Wire Services (VPWSs) that use
      LDP-signaled Pseudowires or L2TP-signaled Pseudowires
      [RFC6074];

   o  Multipoint Virtual Private LAN Services (VPLSs) that use
      LDP-signaled Pseudowires or L2TP-signaled Pseudowires
      [RFC6074];

   o  Multipoint Virtual Private LAN Services (VPLSs) that use a
      Border Gateway Protocol (BGP) control plane, as described in
      [RFC4761] and [RFC6624];

   o  IP-Only LAN-Like Services (IPLSs) that are a functional subset
      of VPLS services [RFC7436];

   o  BGP MPLS-based Ethernet VPN Services, as described in [RFC7432]
      and [RFC7209];

   o  Ethernet VPN VPWS, as specified in [RFC8214] and [RFC7432].

5.3.1.4. Inter-domain Links Update

   In order to facilitate the use of the inter-domain links for the
   VPN, it is assumed that the Service/Network Orchestrator knows the
   inter-domain link status and resource information (e.g., available
   bandwidth, protection/restoration policy, etc.) via some
   mechanisms that are beyond the scope of this document. It is also
   assumed that the inter-domain links are pre-configured prior to
   service instantiation.

5.3.1.5. End-to-end Tunnel Management

   It is foreseen that the Service/Network Orchestrator should
   control and manage the end-to-end tunnels for the VPNs, according
   to the VPN policy.

   As discussed in [ACTN-PM], the Orchestrator is responsible for
   collecting domain LSP-level performance monitoring data from the
   domain controllers, and for deriving and reporting end-to-end
   tunnel performance monitoring information to the customer.

5.3.2. Scenario B: Isolated VN/Tunnel Establishment

   When the L3VPN requires the establishment of hard-isolated
   tunnels, binding the optical-layer tunnel with the IP/MPLS layer
   is necessary. As such, the following functions are needed.
   o  The IP/MPLS Domain Controller of Domain 1 needs to send the VRF
      instruction to the PE:

      o  To the ingress PE of AS Domain 1: the configuration, for
         each L3VPN destination IP address (in this case, the remote
         CE's IP address for the VPN, or any customer IP addresses
         reachable through a remote CE), of the associated VPN label
         assigned by the egress PE and of the MPLS-TE Tunnel to be
         used to reach the egress PE, so that the proper VRF table is
         populated to forward the VPN traffic to the inter-layer
         optical interface with the VPN label.

   o  The egress PE, upon the discovery of a new IP address, needs to
      send the mapping information (i.e., VPN to IP address) to its
      IP/MPLS Domain Controller of Domain 2, which sends it, in turn,
      to the Service Orchestrator. The Service Orchestrator would
      then propagate this mapping information to the IP/MPLS Domain
      Controller of Domain 1, which sends it, in turn, to the ingress
      PE, so that it may override the VPN/VRF forwarding or the VSI
      forwarding, respectively for L3VPN and L2VPN. As a result, when
      packets arrive at the ingress PE with that destination IP
      address, the ingress PE forwards them to the inter-layer
      optical interface. A hypothetical sketch of such a binding
      instruction is shown below.

   [Editor's Note] In the case of a hard-isolated tunnel required for
   the VPN, a separate MPLS-TE tunnel needs to be created, and the
   MPLS packets of this MPLS tunnel need to be encapsulated into the
   ODU, so that the optical NE routes this MPLS tunnel onto an
   optical tunnel separate from the other tunnels.
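   For illustration only, a tunnel binding instruction towards an
   IP/MPLS Domain Controller could look like the sketch below. This
   is purely hypothetical: as noted in the Editor's Notes, the actual
   mechanism and data model for conveying the TE binding policy are
   still open issues, so the RPC name and all the parameters are
   assumptions:

      POST /restconf/operations/example-poi:tunnel-bind HTTP/1.1
      Host: p-pnc-1.example.com
      Content-Type: application/yang-data+json

      {
        "example-poi:input": {
          "tunnel-id": "wdm-tunnel-pe1-pe2",
          "vpn-id": "l3vpn-acme-001",
          "ingress-interface": "eth-1/0/1"
        }
      }

   This mirrors the "Tunnel Bind (Tunnel ID, VPN, Ingr if)" message
   shown in the communication flows of Figure 7 (Section 5.5).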
5.4. Optical Domain Controller and NE Functions

   The Optical network provides the underlay connectivity services to
   the IP/MPLS networks. The multi-domain optical network
   coordination is performed by the L-MDSC function shown in Figure
   2, so that the whole multi-domain optical network appears to the
   Service/Network Orchestrator as one optical network. The
   coordination of the Packet/Optical multi-layer aspects and of the
   IP/MPLS multi-domain aspects is done by the Service/Network
   Orchestrator, which interfaces the two IP/MPLS domain controllers
   and the optical L-MDSC.

   Figure 6 shows how the Optical Domain Controllers create a new
   optical tunnel, and the related interactions with the IP/MPLS
   domain controllers and the NEs to bind the optical tunnel with the
   proper forwarding instructions, so that a VPN requiring hard
   isolation can be fulfilled.

    IP/MPLS Domain 1        Optical Domain       IP/MPLS Domain 2
    Controller              Controller           Controller

    +------------------+    +---------+    +------------------+
    | +-----+ +-----+  |    | +-----+ |    | +-----+ +-----+  |
    | |PNC1 | |Serv.|  |    | | PNC | |    | |PNC2 | |Serv.|  |
    | +-----+ +-----+  |    | +-----+ |    | +-----+ +-----+  |
    +--|-----------|---+    +----|----+    +--|-----------|---+
       | 2.Tunnel  | 3.VPN/VRF   |            | 2.Tunnel  | 3.VPN/VRF
       | Binding   | Provision.  |            | Binding   | Provision.
       V           V             |            V           V
    +-------------------+        |         +-------------------+
 CE /  PE         ASBR   \       |        /  ASBR          PE   \ CE
 o--/---o          o---\--+------+-------+--/---o          o---\--o
    \   :              /         |          \              :   /
     \  : AS Domain 1 /          |           \ AS Domain 2 :  /
      +-:------------+           |            +------------:-+
        :                        | 1. Optical              :
        :                        |    Tunnel               :
        :                        |    Creation             :
        :                        v                         :
    +---:------------------------------------------------- :--+
   /    :                                                  :   \
  /     o..................................................o    \
  |                       Optical Tunnel                        |
  \                                                             /
   \                      Optical Domain                       /
    +----------------------------------------------------------+

           Figure 6 - Domain Controller & NE Functions
                      (Isolated Optical Tunnel)

   As discussed in Section 5.2, in case the VPN requires the
   establishment of a hard-isolated tunnel, the Service/Network
   Orchestrator coordinates across the IP/MPLS domain controllers and
   the Optical L-MDSC to ensure the creation of a new optical tunnel
   for the VPN, in the proper sequence. Figure 6 shows this scenario:

   o  The MDSC of the Service/Network Orchestrator requests the
      L-MDSC to set up an optical tunnel providing connectivity
      between the inter-layer interfaces at the ingress and egress
      PEs, and requests the two IP/MPLS domain controllers to set up
      an inter-domain IP link between these interfaces.

   o  The MDSC of the Service/Network Orchestrator should then
      provide the ingress IP/MPLS domain controller with the routing
      instruction for the VPN, so that the ingress IP/MPLS domain
      controller can help its ingress PE populate the forwarding
      table. Packets carrying the VPN label should be forwarded to
      the optical interface indicated by the MDSC.

   o  The ingress Optical Domain PE needs to recognize the MPLS-TE
      label on its ingress interface from the IP/MPLS domain PE and
      to encapsulate the MPLS packets of this MPLS-TE Tunnel into the
      ODU.

   [Editor's Note] It is assumed that the Optical PE is an LSR.

   o  The egress Optical Domain PE needs to pop the ODU encapsulation
      before sending the packets (with the MPLS-TE label kept intact
      at the top of the stack) to the egress PE of the IP/MPLS Domain
      to which the packets are destined.

   [Editor's Note] If there are two VPNs with the same destination CE
   requiring optical tunnels that are not shared with each other,
   this case needs to be explained, with a possible need for an
   additional label to differentiate the VPNs.

5.5. Orchestrator-Controllers-NEs Communication Protocol Flows

   This section provides the generic communication protocol flows
   across the orchestrator, the controllers and the NEs that
   facilitate the POI scenarios discussed in Section 5.3.2 for
   dynamic optical tunnel establishment. Figure 7 shows the
   communication flows.
5.5. Orchestrator-Controllers-NEs Communication Protocol Flows

   This section describes generic communication protocol flows
   across the orchestrator, the controllers, and the NEs that
   facilitate the POI scenario discussed in Section 5.3.2 for
   dynamic optical tunnel establishment.  Figure 7 shows the
   communication flows.

 +---------+ +-------+ +------+ +------+ +------+ +------+
 |Orchestr.| |Optical| |Packet| |Packet| |Ing.PE| |Egr.PE|
 |         | | Ctr.  | |Ctr-D1| |Ctr-D2| |  D1  | |  D2  |
 +---------+ +-------+ +------+ +------+ +------+ +------+
      |          |        |        |        |        |
      |          |        |        |        |<-BGP-->|
      |          |        |        |  VPN Update     |
      |          |        |        |<----------------|
      |          |        |        |  (Dest, VPN)    |
      |  VPN Update       |        |        |        |
      |<---------------------------|        |        |
      |  (Dest, VPN)     |        |         |        |
      |  Tunnel Create   |        |         |        |
      |--------->|        |        |        |        |
      | (VPN, Ingr/Egr if)|        |        |        |
      |          |        |        |        |        |
      |  Tunnel Confirm  |        |         |        |
      |<---------|        |        |        |        |
      |  (Tunnel ID)     |        |         |        |
      |          |        |        |        |        |
      |  Tunnel Bind     |        |         |        |
      |------------------>|        |        |        |
      | (Tunnel ID, VPN, Ingr if)  |        |        |
      |          |        | Forward. Mapping|        |
      |          |        |---------------->| (1)    |
      |          |        | (Dest, VPN, Ingr if)     |
      |  Tunnel Bind Confirm       |        |        |
      |<------------------|        |        |        |
      |          |        |        |        |        |
      |  Tunnel Bind     |        |         |        |
      |--------------------------->|        |        |
      | (Tunnel ID, VPN, Egr if)   |        |        |
      |          |        |        | Forward. Mapping|
      |          |        |        |---------------->| (2)
      |          |        |        | (Dest, VPN, Egr if)
      |  Tunnel Bind Confirm       |        |        |
      |<---------------------------|        |        |
      |          |        |        |        |        |

      Figure 7 Communication Flows for Optical Tunnel
               Establishment and Binding

   When the Packet Controller of Domain 1 sends the forwarding
   mapping information, as indicated by (1) in Figure 7, the Ingress
   PE in Domain 1 needs to provision its VRF forwarding table based
   on the information it receives (see the detailed procedure in
   Section 5.3.1.2).  A similar procedure is performed at the Egress
   PE in Domain 2, as indicated by (2).  A sketch of the forwarding
   mapping message is provided below.
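   The Python fragment below is a minimal sketch of a possible JSON
   rendering of the forwarding mapping message (1) of Figure 7, sent
   by the Packet Controller of Domain 1 to the Ingress PE.  The
   module and all attribute and instance names are invented for
   illustration; no published YANG model defines this message.

      # Illustrative sketch only: a hypothetical rendering of the
      # "Forward. Mapping (Dest, VPN, Ingr if)" message (1) of
      # Figure 7.  All names below are invented.
      import json

      forwarding_mapping = {
          "example-poi:forwarding-mapping": {
              "vpn-id": "VPN-A",
              # Destination learned from the VPN Update (Figure 7)
              "destination-prefix": "192.0.2.0/24",
              # Inter-layer optical interface on the Ingress PE,
              # as selected by the MDSC in the Tunnel Bind request
              "ingress-interface": "et-0/0/1",
              # VPN label assigned by the Egress PE (Section
              # 5.3.1.2)
              "vpn-label": 20017
          }
      }

      print(json.dumps(forwarding_mapping, indent=2))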
6. Security Considerations

   Several security considerations have been identified and will be
   discussed in future versions of this document.

7. Operational Considerations

   Telemetry data, such as lower-layer network health indicators and
   network and service performance measurements collected from the
   POI domain controllers, may be required.  These requirements and
   capabilities will be discussed in future versions of this
   document.

8. IANA Considerations

   This document requires no IANA actions.

9. References

9.1. Normative References

   [RFC7950] Bjorklund, M. et al., "The YANG 1.1 Data Modeling
             Language", RFC 7950, August 2016.

   [RFC7951] Lhotka, L., "JSON Encoding of Data Modeled with YANG",
             RFC 7951, August 2016.

   [RFC8040] Bierman, A. et al., "RESTCONF Protocol", RFC 8040,
             January 2017.

   [RFC8345] Clemm, A., Medved, J. et al., "A YANG Data Model for
             Network Topologies", RFC 8345, March 2018.

   [RFC8346] Clemm, A. et al., "A YANG Data Model for Layer 3
             Topologies", RFC 8346, March 2018.

   [RFC8453] Ceccarelli, D., Lee, Y. et al., "Framework for
             Abstraction and Control of TE Networks (ACTN)",
             RFC 8453, August 2018.

   [RFC8525] Bierman, A. et al., "YANG Library", RFC 8525, March
             2019.

   [IEEE 802.1AB] IEEE 802.1AB-2016, "IEEE Standard for Local and
             metropolitan area networks - Station and Media Access
             Control Connectivity Discovery", March 2016.

   [TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies",
             draft-ietf-teas-yang-te-topo, work in progress.

   [WSON-TOPO] Lee, Y. et al., "A YANG Data Model for WSON
             (Wavelength Switched Optical Networks)",
             draft-ietf-ccamp-wson-yang, work in progress.

   [Flexi-TOPO] Lopez de Vergara, J. E. et al., "YANG data model for
             Flexi-Grid Optical Networks",
             draft-ietf-ccamp-flexigrid-yang, work in progress.

   [CLIENT-TOPO] Zheng, H. et al., "A YANG Data Model for
             Client-layer Topology",
             draft-zheng-ccamp-client-topo-yang, work in progress.

   [L3-TE-TOPO] Liu, X. et al., "YANG Data Model for Layer 3 TE
             Topologies", draft-ietf-teas-yang-l3-te-topo, work in
             progress.

   [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
             Engineering Tunnels and Interfaces",
             draft-ietf-teas-yang-te, work in progress.

   [WSON-TUNNEL] Lee, Y. et al., "A YANG Data Model for WSON
             Tunnel", draft-ietf-ccamp-wson-tunnel-model, work in
             progress.

   [Flexi-MC] Lopez de Vergara, J. E. et al., "YANG data model for
             Flexi-Grid media-channels",
             draft-ietf-ccamp-flexigrid-media-channel-yang, work in
             progress.

   [CLIENT-SIGNAL] Zheng, H. et al., "A YANG Data Model for
             Transport Network Client Signals",
             draft-ietf-ccamp-client-signal-yang, work in progress.

9.2. Informative References

   [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
             Networks (VPNs)", RFC 4364, February 2006.

   [RFC4761] Kompella, K., Ed. and Y. Rekhter, Ed., "Virtual Private
             LAN Service (VPLS) Using BGP for Auto-Discovery and
             Signaling", RFC 4761, January 2007.

   [RFC6074] Rosen, E., Davie, B., Radoaca, V., and W. Luo,
             "Provisioning, Auto-Discovery, and Signaling in Layer 2
             Virtual Private Networks (L2VPNs)", RFC 6074, January
             2011.

   [RFC6624] Kompella, K., Kothari, B., and R. Cherukuri, "Layer 2
             Virtual Private Networks Using BGP for Auto-Discovery
             and Signaling", RFC 6624, May 2012.

   [RFC7209] Sajassi, A., Aggarwal, R., Uttaro, J., Bitar, N.,
             Henderickx, W., and A. Isaac, "Requirements for
             Ethernet VPN (EVPN)", RFC 7209, May 2014.

   [RFC7432] Sajassi, A., Ed., et al., "BGP MPLS-Based Ethernet
             VPN", RFC 7432, February 2015.

   [RFC7436] Shah, H., Rosen, E., Le Faucheur, F., and G. Heron,
             "IP-Only LAN Service (IPLS)", RFC 7436, January 2015.

   [RFC8214] Boutros, S., Sajassi, A., Salam, S., Drake, J., and J.
             Rabadan, "Virtual Private Wire Service Support in
             Ethernet VPN", RFC 8214, August 2017.

   [RFC8299] Wu, Q., Litkowski, S., Tomotaki, L., and K. Ogaki,
             "YANG Data Model for L3VPN Service Delivery", RFC 8299,
             January 2018.

   [RFC8309] Wu, Q., Liu, W., and A. Farrel, "Service Models
             Explained", RFC 8309, January 2018.

   [RFC8466] Fioccola, G., Ed., "A YANG Data Model for Layer 2
             Virtual Private Network (L2VPN) Service Delivery",
             RFC 8466, October 2018.

   [TNBI]    Busi, I., King, D. et al., "Transport Northbound
             Interface Applicability Statement",
             draft-ietf-ccamp-transport-nbi-app-statement, work in
             progress.

   [ACTN-VN] Lee, Y. et al., "A YANG Data Model for ACTN VN
             Operation", draft-ietf-teas-actn-vn-yang, work in
             progress.

   [TSM]     Lee, Y. et al., "Traffic Engineering and Service
             Mapping YANG Model",
             draft-ietf-teas-te-service-mapping-yang, work in
             progress.

   [ACTN-PM] Lee, Y. et al., "YANG models for VN & TE Performance
             Monitoring Telemetry and Scaling Intent Autonomics",
             draft-lee-teas-actn-pm-telemetry-autonomics, work in
             progress.

   [BGP-L3VPN] Jain, D. et al., "Yang Data Model for BGP/MPLS L3
             VPNs", draft-ietf-bess-l3vpn-yang, work in progress.

10. Acknowledgments

   This document was prepared using 2-Word-v2.0.template.dot.

   Some of this analysis work was supported in part by the European
   Commission funded H2020-ICT-2016-2 METRO-HAUL project (G.A.
   761727).

11. Authors' Addresses

   Fabio Peruzzini
   TIM

   Email: fabio.peruzzini@telecomitalia.it

   Italo Busi
   Huawei

   Email: italo.busi@huawei.com

   Daniel King
   Old Dog Consulting

   Email: daniel@olddog.co.uk

   Sergio Belotti
   Nokia

   Email: sergio.belotti@nokia.com

   Gabriele Galimberti
   Cisco

   Email: ggalimbe@cisco.com

   Zheng Yanlei
   China Unicom

   Email: zhengyanlei@chinaunicom.cn

   Washington Costa Pereira Correia
   TIM Brasil

   Email: wcorreia@timbrasil.com.br

   Jean-Francois Bouquier
   Vodafone

   Email: jeff.bouquier@vodafone.com

   Michael Scharf
   Hochschule Esslingen - University of Applied Sciences

   Email: michael.scharf@hs-esslingen.de

   Young Lee
   Sung Kyun Kwan University

   Email: younglee.tx@gmail.com

   Daniele Ceccarelli
   Ericsson

   Email: daniele.ceccarelli@ericsson.com

   Jeff Tantsura
   Apstra

   Email: jefftant.ietf@gmail.com