TEAS Working Group                                      Fabio Peruzzini
Internet Draft                                                      TIM
Intended status: Informational                   Jean-Francois Bouquier
                                                               Vodafone
                                                             Italo Busi
                                                                 Huawei
                                                            Daniel King
                                                     Old Dog Consulting
                                                      Daniele Ceccarelli
                                                               Ericsson

Expires: March 2021                                  September 28, 2020

     Applicability of Abstraction and Control of Traffic Engineered
           Networks (ACTN) to Packet Optical Integration (POI)

                draft-ietf-teas-actn-poi-applicability-00

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on March 28, 2021.

Copyright Notice

   Copyright (c) 2020 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.
   Code Components extracted from this document must include
   Simplified BSD License text as described in Section 4.e of the
   Trust Legal Provisions and are provided without warranty as
   described in the Simplified BSD License.

Abstract

   This document considers the applicability of the IETF Abstraction
   and Control of Traffic Engineered Networks (ACTN) framework to
   Packet Optical Integration (POI), that is, to the internetworking
   of IP and optical DWDM domains.

   This document highlights the IETF protocols and YANG data models
   that may be used for the ACTN-based control of POI networks, with
   particular focus on the interfaces between the MDSC (Multi-Domain
   Service Coordinator) and the underlying Packet and Optical Domain
   Controllers (P-PNC and O-PNC) to support POI use cases.

Table of Contents

   1. Introduction...................................................3
   2. Reference Scenario.............................................5
      2.1. Generic Assumptions.......................................6
   3. Multi-Layer Topology Coordination..............................7
      3.1. Discovery of existing OCh, ODU, IP links, IP tunnels and
           IP services...............................................7
           3.1.1. Common YANG Models used at the MPI.................8
                  3.1.1.1. YANG models used at the Optical MPIs......8
                  3.1.1.2. Required YANG models at the Packet MPIs...9
           3.1.2. Inter-domain link Discovery........................9
      3.2. Provisioning of an IP Link/LAG over DWDM.................10
           3.2.1. YANG models used at the MPIs......................10
                  3.2.1.1. YANG models used at the Optical MPIs.....10
                  3.2.1.2. Required YANG models at the Packet MPIs..11
           3.2.2. IP Link Setup Procedure...........................12
      3.3. Provisioning of an IP link/LAG over DWDM with path
           constraints..............................................12
           3.3.1. YANG models used at the MPIs......................13
      3.4. Provisioning Link Members to an existing LAG.............13
           3.4.1. YANG Models used at the MPIs......................13
   4. Multi-Layer Recovery Coordination.............................13
      4.1. Ensuring Network Resiliency during Maintenance Events....13
      4.2. Router Port Failure......................................13
   5. Service Coordination for Multi-Layer network..................14
      5.1. L2/L3VPN/VN Service Request by the Customer..............17
      5.2. Service and Network Orchestration........................19
      5.3. IP/MPLS Domain Controller and NE Functions...............23
           5.3.1. Scenario A: Shared Tunnel Selection...............23
                  5.3.1.1. Domain Tunnel Selection..................24
                  5.3.1.2. VPN/VRF Provisioning for L3VPN...........25
                  5.3.1.3. VSI Provisioning for L2VPN...............26
                  5.3.1.4. Inter-domain Links Update................26
                  5.3.1.5. End-to-end Tunnel Management.............26
           5.3.2. Scenario B: Isolated VN/Tunnel Establishment......27
      5.4. Optical Domain Controller and NE Functions...............27
      5.5. Orchestrator-Controllers-NEs Communication Protocol
           Flows....................................................29
   6. Security Considerations.......................................31
   7. Operational Considerations....................................31
   8. IANA Considerations...........................................31
   9. References....................................................31
      9.1. Normative References.....................................31
      9.2. Informative References...................................32
   Acknowledgments..................................................34
   Contributors.....................................................34
   Authors' Addresses...............................................35

1. Introduction

   Packet Optical Integration (POI) is an advanced use case of traffic
   engineering. In wide-area networks, a packet network based on the
   Internet Protocol (IP) and possibly Multiprotocol Label Switching
   (MPLS) is typically deployed on top of an optical transport network
   that uses Dense Wavelength Division Multiplexing (DWDM). In many
   existing network deployments, the packet and the optical networks
   are engineered and operated independently of each other. There are
   technical differences between the technologies (e.g., routers
   versus optical switches) and between the corresponding network
   engineering and planning methods (e.g., inter-domain peering
   optimization in IP versus dealing with physical impairments in
   DWDM, or very different time scales). In addition, customers and
   customer needs vary between a packet and an optical network, and it
   is not uncommon to use different vendors in both domains. Last but
   not least, state-of-the-art packet and optical networks use
   sophisticated but complex technologies, and it may not be trivial
   for a network engineer to be a full expert in both areas. As a
   result, packet and optical networks are often managed by different
   technical and organizational silos.

   This separation is inefficient for many reasons. Both capital
   expenditure (CAPEX) and operational expenditure (OPEX) could be
   significantly reduced by better integrating the packet and the
   optical network. Multi-layer online topology insight can speed up
   troubleshooting (e.g., alarm correlation) and network operation
   (e.g., coordination of maintenance events), multi-layer offline
   topology inventory can improve service quality (e.g., detection of
   diversity constraint violations), and multi-layer traffic
   engineering can use the available network capacity more
   efficiently (e.g., coordination of restoration). In addition,
   provisioning workflows can be simplified or automated as needed
   across layers (e.g., to achieve bandwidth on demand, or to perform
   maintenance events).

   Fully leveraging these benefits requires integration between the
   management and control of the packet and the optical network. The
   Abstraction and Control of TE Networks (ACTN) framework outlines
   the functional components and interfaces between a Multi-Domain
   Service Coordinator (MDSC) and Provisioning Network Controllers
   (PNCs) that can be used for coordinating the packet and optical
   layers.

   This document describes critical use cases for Packet Optical
   Integration (POI) and outlines how the packet and the optical
   layers need to interact to set up and operate services. The IP
   networks are operated as clients of the optical networks. The use
   cases are ordered by increasing level of integration and
   complexity. For each multi-layer use case, the document analyzes
   how to use the interfaces and data models of the ACTN
   architecture.

   The document also captures the current issues with ACTN and POI
   deployment.
   Understanding the level of standardization and the potential gaps
   helps to assess the feasibility of integration between the IP and
   optical DWDM domains in an end-to-end multi-vendor network.

2. Reference Scenario

   This document uses "Reference Scenario 1" with multiple Optical
   domains and multiple Packet domains. Figure 1 shows this scenario
   in the case of two Optical domains and two Packet domains:

                               +----------+
                               |   MDSC   |
                               +-----+----+
                                     |
               +-----------+--------+-------+-----------+
               |           |                |           |
          +----+----+ +----+----+      +----+----+ +----+----+
          | P-PNC 1 | | O-PNC 1 |      | O-PNC 2 | | P-PNC 2 |
          +----+----+ +----+----+      +----+----+ +----+----+
               |           |                |           |
               |            \              /            |
       +-------+-----------+ \            / +-----------+-------+
    CE /  PE          ASBR  \ |          | /  ASBR          PE   \ CE
    o--/---o           o---\-|----------|-/---o           o---\--o
       \   :           :   /  |          |  \   :           :   /
        \  : AS Domain 1:  /   |          |   \  : AS Domain 2:  /
         +-:-----------:--+    |          |    +-:-----------:--+
           :           :       |          |      :           :
           :           :       |          |      :           :
       +---:-----------:-------+          +------:-----------:---+
      /    :           :        \        /       :           :    \
     /     o...........o         \      /        o...........o     \
     \      Optical Domain 1     /      \        Optical Domain 2  /
      \                         /        \                        /
       +-----------------------+          +----------------------+

                    Figure 1 - Reference Scenario 1

   The ACTN architecture, defined in [RFC8453], is used to control
   this multi-domain network, where each Packet PNC (P-PNC) is
   responsible for controlling its IP domain (AS) and each Optical
   PNC (O-PNC) is responsible for controlling its Optical Domain.

   The MDSC is responsible for coordinating the whole multi-domain
   multi-layer (Packet and Optical) network. A specific standard
   interface (MPI) permits the MDSC to interact with the different
   Provisioning Network Controllers (O/P-PNCs).

   The MPI presents an abstracted topology to the MDSC, hiding
   technology-specific aspects of the network and hiding topology
   details depending on the policy chosen regarding the level of
   abstraction supported. The level of abstraction can be obtained
   based on P-PNC and O-PNC configuration parameters (e.g., provide
   the potential connectivity between any PE and any ASBR in an
   MPLS-TE network).

   The MDSC in Figure 1 is responsible for multi-domain and
   multi-layer coordination across multiple Packet and Optical
   domains, as well as for providing IP services to different CNCs at
   its CMIs, using YANG-based service models (e.g., L2SM [RFC8466]
   and L3SM [RFC8299]).

   The multi-domain coordination mechanisms for the IP tunnels
   supporting these IP services are described in Section 5. In some
   cases, the MDSC could also rely on the multi-layer POI mechanisms,
   described in this draft, to support multi-layer optimizations for
   these IP services and tunnels.

   In the network scenario of Figure 1, it is assumed that:

   o  The domain boundaries between the IP and Optical domains are
      congruent. In other words, one Optical domain supports
      connectivity between Routers in one and only one Packet Domain;

   o  Inter-domain links exist only between Packet domains (i.e.,
      between ASBR routers) and between Packet and Optical domains
      (i.e., between routers and ROADMs). In other words, there are
      no inter-domain links between Optical domains;

   o  The interfaces between the routers and the ROADMs are
      "Ethernet" physical interfaces;

   o  The interfaces between the ASBR routers are "Ethernet" physical
      interfaces.
2.1. Generic Assumptions

   This section describes general assumptions that apply to all the
   MPIs, between each PNC (optical or packet) and the MDSC, and to
   all the scenarios discussed in this document.

   The data models used on these interfaces are assumed to use the
   YANG 1.1 Data Modeling Language, as defined in [RFC7950].

   The RESTCONF protocol, as defined in [RFC8040], using the JSON
   representation defined in [RFC7951], is assumed to be used at
   these interfaces.

   As required by [RFC8040], the "ietf-yang-library" YANG module
   defined in [RFC8525] is used to allow the MDSC to discover the set
   of YANG modules supported by each PNC at its MPI.
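   For example, the MDSC could discover the modules supported by an
   O-PNC with a RESTCONF GET on the YANG library. The exchange below
   is a minimal illustrative sketch: the host name, the module-set
   name, the revision dates, and the "content-id" value are
   hypothetical, and the actual list of modules depends on the PNC
   implementation:

      GET /restconf/data/ietf-yang-library:yang-library HTTP/1.1
      Host: o-pnc-1.example.com
      Accept: application/yang-data+json

      HTTP/1.1 200 OK
      Content-Type: application/yang-data+json

      {
        "ietf-yang-library:yang-library": {
          "module-set": [
            {
              "name": "mpi-modules",
              "module": [
                {
                  "name": "ietf-network",
                  "revision": "2018-02-26",
                  "namespace":
                    "urn:ietf:params:xml:ns:yang:ietf-network"
                },
                {
                  "name": "ietf-network-topology",
                  "revision": "2018-02-26",
                  "namespace":
                    "urn:ietf:params:xml:ns:yang:ietf-network-topology"
                },
                {
                  "name": "ietf-te-topology",
                  "revision": "2019-02-07",
                  "namespace":
                    "urn:ietf:params:xml:ns:yang:ietf-te-topology"
                }
              ]
            }
          ],
          "content-id": "c2f3"
        }
      }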
3. Multi-Layer Topology Coordination

   In this scenario, the MDSC needs to discover the network topology
   at both WDM and IP layers, in terms of nodes (NEs) and links,
   including inter-AS domain links as well as cross-layer links.

   Each PNC provides to the MDSC an abstract topology view of the WDM
   or IP topology of the domain it controls. This topology is
   abstracted in the sense that some detailed NE information is
   hidden at the MPI, and all or some of the NEs and related physical
   links are exposed as abstract nodes and logical (virtual) links,
   depending on the level of abstraction the user requires. Even at
   this abstracted level, the topology information is vital for the
   MDSC to understand both the inter-AS domain links (seen by each
   controller as UNI interfaces but as I-NNI interfaces by the MDSC)
   and the cross-layer mapping between the IP and WDM layers.

   The MDSC also maintains an up-to-date network inventory of both
   the IP and WDM layers, through the use of IETF notifications
   received from the PNCs through the MPIs.

   For the cross-layer links, the MDSC needs to be capable of
   automatically correlating physical port information from the
   routers (single links, or bundles of links for link aggregation
   groups - LAGs) with client ports in the ROADMs.

3.1. Discovery of existing OCh, ODU, IP links, IP tunnels and IP
     services

   Typically, an MDSC must be able to automatically discover the
   network topology of both the WDM and IP layers (NEs, links, and
   links between the two domains). This assumes the following:

   o  An abstract view of the WDM and IP topology must be available;

   o  The MDSC must keep an up-to-date network inventory of both the
      IP and WDM layers, and it should be possible to correlate such
      information (e.g., which port, lambda/OTSi, and direction are
      used by a specific IP service on the WDM equipment);

   o  It should be possible at the MDSC level to easily correlate
      WDM-layer and IP-layer alarms to speed up troubleshooting.

3.1.1. Common YANG Models used at the MPI

   Both optical and packet PNCs use the following common topology
   YANG models at the MPI to report their abstract topologies:

   o  The Base Network Model, defined in the "ietf-network" YANG
      module of [RFC8345];

   o  The Base Network Topology Model, defined in the "ietf-network-
      topology" YANG module of [RFC8345], which augments the Base
      Network Model;

   o  The TE Topology Model, defined in the "ietf-te-topology" YANG
      module of [TE-TOPO], which augments the Base Network Topology
      Model.

   These IETF YANG models are generic and augmented by technology-
   specific YANG modules, as described in the following sections.

3.1.1.1. YANG models used at the Optical MPIs

   The optical PNC also uses at least the following technology-
   specific topology YANG models, providing WDM and Ethernet
   technology-specific augmentations of the generic TE Topology
   Model:

   o  The WSON Topology Model, defined in the "ietf-wson-topology"
      YANG module of [WSON-TOPO], or the Flexi-grid Topology Model,
      defined in the "ietf-flexi-grid-topology" YANG module of
      [Flexi-TOPO];

   o  The Ethernet Topology Model, defined in the "ietf-eth-te-
      topology" YANG module of [CLIENT-TOPO].

   The WSON Topology Model or, alternatively, the Flexi-grid Topology
   Model is used to report the fixed-grid or, respectively, the
   flexible-grid DWDM network topology (e.g., ROADMs and OMS links).

   The Ethernet Topology Model is used to report the Ethernet access
   links on the edge ROADMs.

3.1.1.2. Required YANG models at the Packet MPIs

   The packet PNC also uses at least the following technology-
   specific topology YANG models, providing IP and Ethernet
   technology-specific augmentations of the generic Topology Models:

   o  The L3 Topology Model, defined in the "ietf-l3-unicast-
      topology" YANG module of [RFC8346], which augments the Base
      Network Topology Model;

   o  The Ethernet Topology Model, defined in the "ietf-eth-te-
      topology" YANG module of [CLIENT-TOPO], which augments the TE
      Topology Model;

   o  The L3-TE Topology Model, defined in the "ietf-l3-te-topology"
      YANG module of [L3-TE-TOPO], which augments the L3 Topology
      Model.

   The Ethernet Topology Model is used to report the Ethernet links
   between the IP routers and the edge ROADMs as well as the
   inter-domain links between ASBRs, while the L3 Topology Model is
   used to report the IP network topology (e.g., IP routers and IP
   links).

   The L3-TE Topology Model reports the relationship between the IP
   routers and LTPs provided by the L3 Topology Model and the
   underlying Ethernet nodes and LTPs provided by the Ethernet
   Topology Model.
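   The sketch below illustrates how such a cross-layer relationship
   might be reported, using the generic "supporting" references of
   [RFC8345] and the JSON encoding of [RFC7951]. The topology, node,
   and termination point identifiers are hypothetical, and
   [L3-TE-TOPO] defines additional, more specific mapping attributes
   that are omitted here:

      {
        "ietf-network:networks": {
          "network": [
            {
              "network-id": "eth-topology",
              "node": [
                {
                  "node-id": "pe1-eth",
                  "ietf-network-topology:termination-point": [
                    { "tp-id": "eth-1/0/1" }
                  ]
                }
              ]
            },
            {
              "network-id": "l3-topology",
              "supporting-network": [
                { "network-ref": "eth-topology" }
              ],
              "node": [
                {
                  "node-id": "pe1",
                  "supporting-node": [
                    {
                      "network-ref": "eth-topology",
                      "node-ref": "pe1-eth"
                    }
                  ],
                  "ietf-network-topology:termination-point": [
                    {
                      "tp-id": "to-asbr1",
                      "supporting-termination-point": [
                        {
                          "network-ref": "eth-topology",
                          "node-ref": "pe1-eth",
                          "tp-ref": "eth-1/0/1"
                        }
                      ]
                    }
                  ]
                }
              ]
            }
          ]
        }
      }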
3.1.2. Inter-domain link Discovery

   In the reference network of Figure 1, there are two types of
   inter-domain links:

   o  Links between two IP domains/ASBRs (ASes);

   o  Links between an IP router and a ROADM.

   Both types of links are Ethernet physical links.

   The inter-domain link information is reported to the MDSC by the
   two adjacent PNCs, controlling the two ends of the inter-domain
   link, using the Ethernet Topology Model defined in [CLIENT-TOPO].

   The MDSC can understand how to merge these inter-domain Ethernet
   links together using the plug-id attribute defined in the TE
   Topology Model [TE-TOPO], as described in Section 4.3 of
   [TE-TOPO].

   A more detailed description of how the plug-id can be used to
   discover inter-domain links is also provided in Section 5.1.4 of
   [TNBI].

   Both types of inter-domain Ethernet links are discovered using the
   plug-id attributes reported in the Ethernet Topologies exposed by
   the two adjacent PNCs.

   The MDSC, when discovering an Ethernet inter-domain link between
   two Ethernet LTPs which are associated with two IP LTPs, reported
   in the IP Topologies exposed by the two adjacent P-PNCs, can also
   discover an inter-domain IP link/adjacency between these two IP
   LTPs.

   Two options are possible to discover these inter-domain Ethernet
   links:

   1. Static configuration

   2. LLDP [IEEE 802.1AB] automatic discovery

   Since static configuration requires an administrative burden to
   configure network-wide unique identifiers, the automatic discovery
   solution based on LLDP is preferable when LLDP is supported.

   As outlined in [TNBI], the encoding of the plug-id namespace as
   well as of the LLDP information within the plug-id value is
   implementation specific and needs to be consistent across all the
   PNCs.
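   The fragment below sketches how a PNC might report its end of an
   inter-domain Ethernet link with a plug-id, using the
   "external-domain" container of [TE-TOPO]. The link and node
   identifiers and the plug-id value are hypothetical; the only
   requirement is that the two adjacent PNCs report the same plug-id
   value (e.g., derived from the same LLDP information) so that the
   MDSC can match the two halves of the link:

      {
        "ietf-network-topology:link": [
          {
            "link-id": "asbr1-to-remote-domain",
            "source": {
              "source-node": "asbr1-eth",
              "source-tp": "eth-1/0/3"
            },
            "ietf-te-topology:te": {
              "te-link-attributes": {
                "external-domain": {
                  "plug-id": 1001
                }
              }
            }
          }
        ]
      }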
3.2. Provisioning of an IP Link/LAG over DWDM

   In this scenario, the MDSC needs to coordinate the creation of an
   IP link, or a LAG, between two routers through a DWDM network.

   It is assumed that the MDSC has already discovered the whole
   network topology, as described in Section 3.1.

3.2.1. YANG models used at the MPIs

3.2.1.1. YANG models used at the Optical MPIs

   The optical PNC uses at least the following YANG models:

   o  The TE Tunnel Model, defined in the "ietf-te" YANG module of
      [TE-TUNNEL];

   o  The WSON Tunnel Model, defined in the "ietf-wson-tunnel" YANG
      module of [WSON-TUNNEL], or the Flexi-grid Media Channel Model,
      defined in the "ietf-flexi-grid-media-channel" YANG module of
      [Flexi-MC];

   o  The Ethernet Client Signal Model, defined in the "ietf-eth-
      tran-service" YANG module of [CLIENT-SIGNAL].

   The TE Tunnel Model is generic and augmented by technology-
   specific models such as the WSON Tunnel Model and the Flexi-grid
   Media Channel Model.

   The WSON Tunnel Model or, alternatively, the Flexi-grid Media
   Channel Model is used to set up connectivity within the DWDM
   network, depending on whether the DWDM optical network is based on
   fixed grid or flexible grid.

   The Ethernet Client Signal Model is used to configure the steering
   of the Ethernet client traffic between Ethernet access links and
   TE Tunnels, which in this case could be either WSON Tunnels or
   Flexi-grid Media Channels. This model is generic and applies to
   any technology-specific TE Tunnel: technology-specific attributes
   are provided by the technology-specific models which augment the
   generic TE Tunnel Model.

3.2.1.2. Required YANG models at the Packet MPIs

   The packet PNC uses at least the following topology YANG models:

   o  The Base Network Model, defined in the "ietf-network" YANG
      module of [RFC8345] (see Section 3.1.1);

   o  The Base Network Topology Model, defined in the "ietf-network-
      topology" YANG module of [RFC8345] (see Section 3.1.1);

   o  The L3 Topology Model, defined in the "ietf-l3-unicast-
      topology" YANG module of [RFC8346] (see Section 3.1.1.2).

   If, as discussed in Section 3.2.2, IP links created over DWDM can
   be automatically discovered by the P-PNC, the IP Topology is
   needed only to report these IP links after they have been
   discovered by the P-PNC.

   The IP Topology can also be used to configure the IP links created
   over DWDM.

3.2.2. IP Link Setup Procedure

   The MDSC requests the O-PNC to set up a WDM Tunnel (either a WSON
   Tunnel or a Flexi-grid Tunnel) within the DWDM network between the
   two Optical Transponders (OTs) associated with the two access
   links.

   The Optical Transponders are reported by the O-PNC as Trail
   Termination Points (TTPs), defined in [TE-TOPO], within the WDM
   Topology. The association between the Ethernet access link and the
   WDM TTP is reported by the Inter-Layer Lock (ILL) identifiers,
   defined in [TE-TOPO], reported by the O-PNC within the Ethernet
   Topology and the WDM Topology.

   The MDSC also requests the O-PNC to steer the Ethernet client
   traffic between the two access Ethernet links over the WDM Tunnel.

   After the WDM Tunnel has been set up and the client traffic
   steering configured, the two IP routers can exchange Ethernet
   packets between themselves, including LLDP messages.

   If LLDP [IEEE 802.1AB] is used between the two routers, the P-PNC
   can automatically discover the IP link being set up by the MDSC.
   The IP LTPs terminating this IP link are supported by the ETH LTPs
   terminating the two access links.

   Otherwise, the MDSC needs to request the P-PNC to configure an IP
   link between the two routers: the MDSC also configures the two ETH
   LTPs which support the two IP LTPs terminating this IP link.
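   The two requests towards the O-PNC described above could be
   sketched as follows, assuming RESTCONF with JSON encoding. The
   tunnel request uses the generic TE Tunnel Model; the WSON or
   Flexi-grid augmentations, as well as most attributes, are omitted
   for brevity. All names, addresses, and identifiers are
   hypothetical, and the exact attribute names may change, since
   [TE-TUNNEL] and [CLIENT-SIGNAL] are works in progress:

      POST /restconf/data/ietf-te:te/tunnels HTTP/1.1
      Host: o-pnc-1.example.com
      Content-Type: application/yang-data+json

      {
        "ietf-te:tunnel": [
          {
            "name": "wdm-tunnel-1",
            "source": "192.0.2.11",
            "destination": "192.0.2.12"
          }
        ]
      }

   The client traffic steering request then associates the two
   Ethernet access links with this WDM Tunnel:

      {
        "ietf-eth-tran-service:etht-svc": {
          "etht-svc-instances": [
            {
              "etht-svc-name": "ip-link-pe1-pe2",
              "etht-svc-end-points": [
                {
                  "etht-svc-end-point-name": "pe1-side",
                  "etht-svc-access-points": [
                    {
                      "access-point-id": "1",
                      "access-node-id": "roadm-1",
                      "access-ltp-id": "client-1/1"
                    }
                  ]
                },
                {
                  "etht-svc-end-point-name": "pe2-side",
                  "etht-svc-access-points": [
                    {
                      "access-point-id": "2",
                      "access-node-id": "roadm-2",
                      "access-ltp-id": "client-2/1"
                    }
                  ]
                }
              ],
              "svc-tunnels": [
                { "tunnel-name": "wdm-tunnel-1" }
              ]
            }
          ]
        }
      }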
3.3. Provisioning of an IP link/LAG over DWDM with path constraints

   The MDSC must be able to provision an IP link either with a fixed
   maximum latency constraint or with a minimum-latency constraint,
   not only within each domain but also across domains when required
   (e.g., driven by the monitoring of traffic KPI trends for this IP
   link). Through each O-PNC, a fixed-latency or minimum-latency path
   is selected between the PE and the ASBR in each optical domain.
   The MDSC then needs to select the inter-AS domain link with the
   lowest latency (in case there are several interconnection links),
   so that the latency constraint is fulfilled end-to-end across
   domains.

   The MDSC must be able to automatically create two IP links between
   two routers, over the DWDM network, with physical path diversity
   (avoiding the SRLGs communicated by the O-PNCs to the MDSC).

   The MDSC is responsible for routing each of these IP links through
   different inter-AS domain links, so that the end-to-end IP links
   are fully disjoint.

   Optical connectivity must be set up accordingly by the MDSC
   through the O-PNCs.
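   Pending the study in Section 3.3.1, the fragment below sketches
   how a latency bound might be expressed on the tunnel request,
   reusing the metric types and path constraints defined for the
   generic TE Tunnel Model. The example is purely illustrative: the
   exact location of the constraint in the [TE-TUNNEL] tree, the
   metric-type identity, and the units (assumed here to be
   microseconds) should be checked against the model revisions in
   use. SRLG disjointness between the two IP links could similarly be
   requested through the SRLG exclusion and association attributes of
   [TE-TUNNEL], omitted here:

      {
        "ietf-te:tunnel": [
          {
            "name": "ip-link-low-latency",
            "source": "192.0.2.11",
            "destination": "192.0.2.21",
            "path-constraints": {
              "path-metric-bounds": {
                "path-metric-bound": [
                  {
                    "metric-type":
                      "ietf-te-types:path-metric-delay-average",
                    "upper-bound": "5000"
                  }
                ]
              }
            }
          }
        ]
      }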
3.3.1. YANG models used at the MPIs

   This section is for further study.

3.4. Provisioning Link Members to an existing LAG

   When adding a new link member to a LAG between two routers, with
   or without path latency/diversity constraints, the MDSC must be
   able to force the additional optical connection to use the same
   physical path in the optical domain where the LAG capacity
   increase is required.

3.4.1. YANG Models used at the MPIs

   This is for further study.

4. Multi-Layer Recovery Coordination

4.1. Ensuring Network Resiliency during Maintenance Events

   Before a planned maintenance operation on the DWDM network takes
   place, IP traffic should be moved hitless to another link.

   The MDSC must reroute the IP traffic before the event takes place.
   It should be possible to lock the IP traffic to the protection
   route until the maintenance event is finished, unless a fault
   occurs on that path.

4.2. Router Port Failure

   The focus is on a client-side protection scheme between the IP
   router and the reconfigurable ROADM. The scenario here is to
   define only one port in the routers and in the ROADM muxponder
   board at both ends as a back-up port, to recover from any other
   port failure on the client side of the ROADM (either on the router
   port side, on the muxponder side, or on the link between them).
   When a client-side port failure occurs, alarms are raised to the
   MDSC by the P-PNC and the O-PNC (port status down, LOS, etc.). The
   MDSC checks with the O-PNC(s) that there is no optical failure in
   the optical layer.

   There can be two cases here:

   a) A LAG was defined between the two end routers. The MDSC, after
      checking that the optical layer is fine between the two end
      ROADMs, triggers the ROADM configuration so that the router
      back-up port, with its associated muxponder port, can reuse the
      OCh that was already in use by the failed router port, and adds
      the new link to the LAG on the failed side.

      While the ROADM reconfiguration takes place, the IP/MPLS
      traffic uses the reduced bandwidth of the IP link bundle,
      discarding lower-priority traffic if required. Once the back-up
      port has been reconfigured to reuse the existing OCh and the
      new link has been added to the LAG, the original bandwidth is
      recovered between the end routers.

      Note: in this LAG scenario, it is assumed that BFD is running
      at the LAG level, so that nothing is triggered at the MPLS
      level when one of the link members of the LAG fails.

   b) If there is no LAG, the scenario is less clear, since a router
      port failure would automatically trigger (through BFD failure)
      first a sub-50ms protection at the MPLS level: FRR (MPLS
      RSVP-TE case) or TI-LFA (MPLS-based SR-TE case) through a
      protection port. At the same time, the MDSC, after checking
      that the optical network connection is still fine, would
      trigger the reconfiguration of the back-up port of the router
      and of the ROADM muxponder to reuse the same OCh as the one
      originally used by the failed router port. Once everything has
      been correctly configured, the MDSC Global PCE could suggest to
      the operator to trigger a re-optimization of the back-up MPLS
      path, to go back to the MPLS primary path through the back-up
      port of the router and the original OCh, if the overall cost,
      latency, etc. is improved. However, in this scenario, there is
      a need for a protection port PLUS a back-up port in the router,
      which does not lead to clear port savings.

5. Service Coordination for Multi-Layer network

   [Editors' Note] This text has been taken from section 2 of
   draft-lee-teas-actn-poi-applicability-00 and needs to be
   reconciled with the other sections (the introduction in
   particular) of this document.

   This section provides a number of deployment scenarios for packet
   and optical integration (POI). Specifically, it provides a
   deployment scenario in which the ACTN hierarchy is deployed to
   control a multi-layer and multi-domain network via two IP/MPLS
   PNCs and two Optical PNCs, with coordination by an L-MDSC. This
   scenario is in the context of an upper-layer service configuration
   (e.g., L3VPN) across two AS domains, which is transported by two
   transport underlay domains (e.g., OTN).

   The provisioning of the L3VPN service is outside ACTN scope, but
   it is worth showing how the L3VPN service provisioning is
   integrated for the end-to-end service fulfilment in the ACTN
   context. An example of the service configuration function in the
   Service/Network Orchestrator is discussed in [BGP-L3VPN].

   Figure 2 shows an ACTN POI Reference Architecture, including the
   ACTN components as well as the non-ACTN components that are
   necessary for end-to-end service fulfilment. Both the IP/MPLS and
   Optical networks are multi-domain. Each IP/MPLS domain network is
   controlled by its own domain controller, and all the optical
   domains are controlled by a hierarchy of optical domain
   controllers. The L-MDSC function of the optical domain controllers
   provides an abstract view of the whole optical network to the
   Service/Network Orchestrator. It is assumed that all these
   components of the network belong to one single network operator
   domain, under the control of the service/network orchestrator.

      Customer
      +-------------------------------+
      | +-----+       +------------+  |
      | | CNC |-------| Service Op.|  |
      | +-----+       +------------+  |
      +-----|-----------------|-------+
            | ACTN interface  | Non-ACTN interface
            | (CMI)           | (Customer Service model)
   Service/Network            |
   Orchestrator               |
   +--------|-----------------|------------------------+
   | +----------------------------------+              |
   | |MDSC TE & Service Mapping Function|              |
   | +----------------------------------+              |
   |        |                 |                        |
   | +------------------+  +---------------------+     |
   | | MDSC NP Function |--|Service Config. Func.|     |
   | +------------------+  +---------------------+     |
   +--------|-------------------------|----------------+
        MPI |                         | Non-ACTN interface
            |                         | (Service configuration)
      +-----+-------+---------+-------+--------+
      |             |         |                |
   IP/MPLS       Optical      |             IP/MPLS
   Domain 1      Domain       |             Domain 2
   Controller    Controller   |             Controller
   +-----------------+ +---------+ +-------------------+
   | +-----+ +-----+ | | +-----+ | | +------+ +------+ |
   | |PNC1 | |Serv.| | | | PNC | | | | PNC2 | | Serv.| |
   | +-----+ +-----+ | | +-----+ | | +------+ +------+ |
   +-----------------+ +---------+ +-------------------+
        SBI |               |                | SBI
            v               |                v
    +------------------+    |      +------------------+
   /  IP/MPLS Network   \   | SBI /  IP/MPLS Network   \
   +---------------------+  v    +---------------------+
         +-------------------------------+
        /        Optical Network          \
       +-----------------------------------+

              Figure 2 ACTN POI Reference Architecture

   Figure 2 depicts:

   o  CMI (CNC-MDSC Interface): the ACTN interface between the CNC
      and the MDSC function in the Service/Network Orchestrator. This
      is where the TE & Service Mapping [TSM] and either the ACTN VN
      [ACTN-VN] or the TE Topology [TE-TOPO] model are exchanged.

   o  Customer Service Model Interface: a non-ACTN interface between
      the Customer Portal and the Service/Network Orchestrator's
      Service Configuration Function. This is the interface where
      L3SM information is exchanged.

   o  MPI (MDSC-PNC Interface): the ACTN interface between the MDSC
      and the IP/MPLS Domain Controllers and Optical Domain
      Controllers.

   o  Service Configuration Interface: a non-ACTN interface between
      the Service/Network Orchestrator and the IP/MPLS Domain
      Controllers, used to coordinate the L2/L3VPN multi-domain
      service configuration. This is where service-specific
      information, such as the VPN and the VPN binding policy (e.g.,
      new underlay tunnel creation for isolation), is conveyed.

   o  SBI (South Bound Interface): a non-ACTN interface between each
      domain controller and the network elements in its domain.

   Please note that the MPI and the Service Configuration Interface
   can be implemented as the same interface supporting the two
   different capabilities. The split is functional and does not have
   to be also logical.

   The following sections describe the key functions that are
   necessary for the vertical as well as horizontal end-to-end
   service fulfilment of POI.
5.1. L2/L3VPN/VN Service Request by the Customer

   A customer can request L3VPN services with TE requirements using
   the ACTN CMI models (i.e., the ACTN VN YANG and TE & Service
   Mapping YANG models) together with non-ACTN customer service
   models such as the L2SM/L3SM YANG models. Figure 3 shows the
   detailed control flow between the customer and the
   Service/Network Orchestrator to instantiate an L2/L3VPN/VN service
   request.

        Customer
        +-------------------------------------------+
        | +-----+                    +------------+ |
        | | CNC |--------------------| Service Op.| |
        | +-----+                    +------------+ |
        +-----|--|--------------------------|-------+
              |  ^                          |
   2. VN &    |  | 3. Update VN             | 1. L2/3SM
      TE/Svc  |  |    & TE/Svc              |
      Mapping |  |    mapping               |
              v  |                          v
   Service/Network
   Orchestrator
   +----------|--|--------------------------|--------+
   | +----------------------------------+   |        |
   | |MDSC TE & Service Mapping Function|   |        |
   | +----------------------------------+   |        |
   |          |                             |        |
   | +------------------+  +---------------------+   |
   | | MDSC NP Function |--|Service Config. Func.|   |
   | +------------------+  +---------------------+   |
   +----------|-----------------------------|--------+

   NP: Network Provisioning

                 Figure 3 Service Request Process

   o  The ACTN VN YANG model provides the VN Service configuration,
      as specified in [ACTN-VN].

      o  It provides the profile of the VN in terms of VN members,
         each of which corresponds to an edge-to-edge link between
         customer end-points (VNAPs). It also provides the mappings
         between the VNAPs and the LTPs, and between the connectivity
         matrix and the VN members, from which the associated traffic
         matrix (e.g., bandwidth, latency, protection level, etc.) of
         each VN member is expressed (i.e., via the TE Topology's
         connectivity matrix).

      o  The model also provides VN-level preference information
         (e.g., VN member diversity) and the VN-level admin-status
         and operational-status.

   o  The L2SM YANG model [RFC8466] provides all the L2VPN service
      configuration and site information from a customer/service
      point of view.

   o  The L3SM YANG model [RFC8299] provides all the L3VPN service
      configuration and site information from a customer/service
      point of view.

   o  The TE & Service Mapping YANG model [TSM] provides TE-service
      mapping as well as site mapping.

      o  The TE-service mapping provides the mapping of an L3VPN
         instance from [RFC8299] to the corresponding ACTN VN
         instance.

      o  The TE-service mapping also provides the service mapping
         requirement type as to how each L2/L3VPN/VN instance is
         created with respect to the underlay TE tunnels (e.g.,
         whether the L3VPN requires a new and isolated set of TE
         underlay tunnels or not). See Section 5.2 for a detailed
         discussion of the mapping requirement types.

      o  The site mapping provides the site reference information
         across the L2/L3VPN Site ID, the ACTN VN Access Point ID,
         and the LTP of the access link.
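   As an illustration, an L3VPN service request conveyed over the
   Customer Service Model Interface could look like the abridged L3SM
   [RFC8299] fragment below. The vpn-id, site identifiers, and
   site-role are hypothetical, and most mandatory site attributes are
   omitted for brevity:

      {
        "ietf-l3vpn-svc:l3vpn-svc": {
          "vpn-services": {
            "vpn-service": [
              {
                "vpn-id": "l3vpn-blue",
                "vpn-service-topology": "any-to-any"
              }
            ]
          },
          "sites": {
            "site": [
              {
                "site-id": "site-1",
                "site-network-accesses": {
                  "site-network-access": [
                    {
                      "site-network-access-id": "sna-1",
                      "vpn-attachment": {
                        "vpn-id": "l3vpn-blue",
                        "site-role": "any-to-any-role"
                      }
                    }
                  ]
                }
              }
            ]
          }
        }
      }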
5.2. Service and Network Orchestration

   The Service/Network Orchestrator shown in Figure 2 interfaces with
   the customer and decouples the ACTN MDSC functions from the
   customer service configuration functions.

   An implementation can choose to split the Service/Network
   orchestration functions, as described in [RFC8309] and in Section
   4.2 of [RFC8453], between a top-level Service Orchestrator
   interfacing with the customer and two lower-level Network
   Orchestrators, one controlling a multi-domain IP/MPLS network and
   the other controlling the Optical networks.

   Another implementation can choose to combine the L-MDSC functions
   of the Optical hierarchical controller, providing multi-domain
   coordination of the Optical network, together with the MDSC
   functions in the Service/Network Orchestrator.

   Without loss of generality, this document assumes that the
   Service/Network Orchestrator, as depicted in Figure 2, includes
   all the functionalities required in a hierarchical orchestration
   case.

   One of the important service functions the Service/Network
   Orchestrator performs is to identify which TE Tunnels should carry
   the L3VPN traffic (from the TE & Service Mapping Model) and to
   relay this information to the IP/MPLS domain controllers, via a
   non-ACTN interface, to ensure that the proper IP/VRF forwarding
   tables are populated according to the TE binding requirement for
   the L3VPN.

   [Editor's Note] What mechanism would convey, on the interface to
   the IP/MPLS domain controllers as well as on the SBI (between the
   IP/MPLS domain controllers and the IP/MPLS PE routers), the TE
   binding policy dynamically for the L3VPN? Typically, the VRF is a
   function of the device that participates in MP-BGP in an MPLS VPN.
   With current MP-BGP implementations in MPLS VPNs, the VRF's BGP
   next hop is the destination PE, and the mapping to a tunnel
   (either an LDP or a BGP tunnel) toward the destination PE is done
   automatically without any configuration. The impact on the PE VRF
   operation when the tunnel is an optical bypass tunnel, which
   participates in neither LDP nor BGP, is to be determined.

   Figure 4 shows the Service/Network Orchestrator's interactions
   with the various domain controllers to instantiate tunnel
   provisioning as well as service configuration.

   +-------|----------------------------------|-----------+
   |  +----------------------------------+    |           |
   |  |MDSC TE & Service Mapping Function|    |           |
   |  +----------------------------------+    |           |
   |          |                               |           |
   |  +------------------+  +---------------------+       |
   |  | MDSC NP Function |--|Service Config. Func.|       |
   |  +------------------+  +---------------------+       |
   +-------|------------------------------|---------------+
           |                              |
           |  2. Inter-layer tunnel       |  3. VPN Serv.
           |     binding                  |     provision
           |  1. Optical tunnel           |
           |     creation                 |
     +-----+------+----------+------------+--------+
     |            |          |                     |
     v            v          v                     v
   +------------------+  +----------+  +------------------+
   | +-----+  +-----+ |  | +------+ |  | +-----+  +-----+ |
   | |PNC1 |  |Serv.| |  | | PNC  | |  | |PNC2 |  |Serv.| |
   | +-----+  +-----+ |  | +------+ |  | +-----+  +-----+ |
   +------------------+  +----------+  +------------------+
    IP/MPLS Domain 1      Optical       IP/MPLS Domain 2
    Controller            Domain        Controller
                          Controller

        Figure 4 Service and Network Orchestration Process

   The TE binding requirement types [TSM] are:

   1. Hard Isolation with deterministic latency: The customer
      requests an L3VPN service [RFC8299] using a set of TE Tunnels
      with a deterministic latency requirement, which cannot be
      shared with other L3VPN services nor compete for bandwidth with
      other Tunnels.

   2. Hard Isolation: This is similar to the above case, but without
      the deterministic latency requirements.

   3. Soft Isolation: The customer requests an L3VPN service using a
      set of MPLS-TE tunnels which cannot be shared with other L3VPN
      services.

   4. Sharing: The customer accepts sharing the MPLS-TE Tunnels
      supporting its L3VPN service with other services.

   For the first three types, there could be additional TE binding
   requirements with respect to the different VN members of the same
   VN associated with an L3VPN service. For the first two cases, VN
   members can be hard-isolated, soft-isolated, or shared. For the
   third case, VN members can be soft-isolated or shared.
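   The exact encoding of these requirement types is defined in [TSM].
   The fragment below is only a paraphrase, with hypothetical module
   and attribute names (marked by the "example-" prefix), showing the
   intent of binding an L3VPN instance to a VN with a hard-isolation
   requirement:

      {
        "example-te-service-mapping:te-service-mapping": {
          "mapping": [
            {
              "map-id": 1,
              "map-type": "hard-isolation",
              "l3vpn-svc-ref": "l3vpn-blue",
              "vn-ref": "vn-blue"
            }
          ]
        }
      }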
   o  When the "Hard Isolation with or without deterministic latency"
      TE binding requirement (i.e., the first or the second type) is
      applied for an L3VPN, a new optical-layer tunnel has to be
      created (Step 1 in Figure 4). This operation requires the
      following control-level mechanisms:

      o  The MDSC function of the Service/Network Orchestrator
         identifies the domains in the IP/MPLS layer in which the VPN
         needs to be forwarded.

      o  Once the IP/MPLS-layer domains are determined, the MDSC
         function of the Service/Network Orchestrator needs to
         identify the set of optical ingress and egress points of the
         underlay optical tunnels providing connectivity between the
         IP/MPLS-layer domains.

      o  Once both the IP/MPLS layers and the optical layer are
         determined, the MDSC needs to identify the inter-layer
         peering points in both IP/MPLS domains as well as in the
         optical domain(s). This implies that the L3VPN traffic will
         be forwarded to an MPLS-TE tunnel that starts at the ingress
         PE (in one IP/MPLS domain) and terminates at the egress PE
         (in another IP/MPLS domain), via a dedicated underlay
         optical tunnel.

   o  The MDSC function of the Service/Network Orchestrator needs to
      first request the optical L-MDSC to instantiate an optical
      tunnel between the optical ingress and egress points. This is
      referred to as optical tunnel creation (Step 1 in Figure 4).
      Note that it is the L-MDSC's responsibility to perform the
      multi-domain optical coordination with its underlying optical
      PNCs for setting up a multi-domain optical tunnel.

   o  Once the optical tunnel is established, the MDSC function of
      the Service/Network Orchestrator needs to coordinate with the
      PNC functions of the IP/MPLS Domain Controllers (to which the
      ingress and egress PEs belong) the setup of a multi-domain
      MPLS-TE Tunnel between the ingress and egress PEs. This tunnel
      is carried by the underlay optical tunnel created in Step 1
      (Step 2 in Figure 4), as sketched after this list.

   o  It is the responsibility of the Service Configuration Function
      of the Service/Network Orchestrator to identify the
      interfaces/labels on both the ingress and egress PEs and to
      convey this information to both IP/MPLS Domain Controllers (to
      which the ingress and egress PEs belong) for the proper
      configuration of the L3VPN (the BGP and VRF functions of the
      PEs) in their domain networks (Step 3 in Figure 4).
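   For Step 2, the request from the MDSC to each P-PNC could be
   sketched as the RESTCONF POST below, which creates the
   intra-domain segment of the MPLS-TE tunnel from the ingress PE
   towards its ASBR. As in the previous examples, this is a minimal
   illustrative sketch using the generic TE Tunnel Model: the host
   name and addresses are hypothetical, and the MPLS-specific
   attributes of [TE-TUNNEL] are omitted:

      POST /restconf/data/ietf-te:te/tunnels HTTP/1.1
      Host: p-pnc-1.example.com
      Content-Type: application/yang-data+json

      {
        "ietf-te:tunnel": [
          {
            "name": "mpls-te-tunnel-pe1-pe2-segment1",
            "source": "192.0.2.1",
            "destination": "192.0.2.9"
          }
        ]
      }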
5.3. IP/MPLS Domain Controller and NE Functions

   The IP/MPLS networks are assumed to have multiple domains, and
   each domain is controlled by an IP/MPLS domain controller, which
   performs both the ACTN PNC functions and the non-ACTN service
   functions.

   The functions of the IP/MPLS domain controller include the
   provisioning of the VPN service aspects, such as the VRF control
   and management for VPN services. It is assumed that BGP is running
   in the inter-domain IP/MPLS networks for the L2/L3VPN and that the
   IP/MPLS domain controller is also responsible for configuring the
   BGP speakers within its control domain, if necessary.

   Depending on the TE binding requirement types discussed in Section
   5.2, there are two possible deployment scenarios.

5.3.1. Scenario A: Shared Tunnel Selection

   When the L2/L3VPN does not require isolation (either hard or
   soft), it can select an existing MPLS-TE and Optical tunnel
   between the ingress and egress PEs, without creating any new TE
   tunnels. Figure 5 shows this scenario.

     IP/MPLS Domain 1                    IP/MPLS Domain 2
     Controller                          Controller

     +------------------+               +------------------+
     | +-----+ +-----+  |               | +-----+ +-----+  |
     | |PNC1 | |Serv.|  |               | |PNC2 | |Serv.|  |
     | +-----+ +-----+  |               | +-----+ +-----+  |
     +--|-----------|---+               +--|-----------|---+
        |1.Tunnel   |2.VPN/VRF             |1.Tunnel   |2.VPN/VRF
        |  Selection|  Provisioning        |  Selection|  Provisioning
        v           v                      v           v
      +---------------------+            +---------------------+
   CE / PE   tunnel 1   ASBR \          / ASBR   tunnel 2   PE  \ CE
   o--/--o...............o---\--------/---o...............o--\---o
      \                      /          \                       /
       \     AS Domain 1    /            \     AS Domain 2     /
        +-------------------+             +--------------------+

                          End-to-end tunnel
      <------------------------------------------------------->

          Figure 5 IP/MPLS Domain Controller & NE Functions

   How the VPN is disseminated across the network is out of the scope
   of this document. It is assumed that MP-BGP is running in the
   IP/MPLS networks and that the VPN is made known to the ASBRs and
   PEs by each IP/MPLS domain controller. See [RFC4364] for a
   detailed description of how MP-BGP works.

   There are several functions the IP/MPLS domain controllers need to
   provide in order to facilitate tunnel selection for the VPN, at
   both the domain level and the end-to-end level.

5.3.1.1. Domain Tunnel Selection

   Each domain IP/MPLS controller is responsible for selecting its
   domain-level tunnel for the L3VPN. First, it needs to determine
   which existing tunnels fit the L2/L3VPN requirements allotted to
   the domain by the Service/Network Orchestrator (e.g., tunnel
   binding, bandwidth, latency, etc.). If there are existing tunnels
   that satisfy the L3VPN requirements, the IP/MPLS domain controller
   selects the optimal tunnel from the candidate pool. Otherwise, an
   MPLS tunnel with modified bandwidth or a new MPLS Tunnel needs to
   be set up. Note that with no isolation requirement for the L3VPN,
   an existing MPLS tunnel can be selected. With a soft isolation
   requirement for the L3VPN, an optical tunnel can be shared with
   other L2/L3VPN services, while with a hard isolation requirement
   for the L2/L3VPN, a dedicated MPLS-TE tunnel and a dedicated
   optical tunnel MUST be provisioned for the L2/L3VPN.

5.3.1.2. VPN/VRF Provisioning for L3VPN

   Once the domain-level tunnel is selected for a domain, the Service
   Function of the IP/MPLS domain controller maps the L3VPN to the
   selected MPLS-TE tunnel and assigns, together with the PE, a label
   (e.g., an MPLS label) for the VPN. The PE then creates a new entry
   for the VPN in the VRF forwarding table, so that when a VPN packet
   arrives at the PE, the PE can direct it to the right interface and
   push the label assigned to the VPN.
   When the PE forwards a VPN packet, it pushes the VPN label
   signaled by BGP and, in the case of options A and B [RFC4364], it
   also pushes the LSP label assigned to the configured MPLS-TE
   Tunnel to reach the ASBR next hop, and forwards the packet to the
   MPLS next-hop of this MPLS-TE Tunnel.

   In the case of option C [RFC4364], the PE pushes one MPLS LSP
   label signaled by BGP to reach the destination PE and a second
   MPLS LSP label assigned to the configured MPLS-TE Tunnel to reach
   the ASBR next-hop, and forwards the packet to the MPLS next-hop of
   this MPLS-TE Tunnel.

   With option C, the ASBR of the first domain interfacing the next
   domain should pass the VPN label intact to the ASBR of the next
   domain, so that the ASBR in the next domain sees the VPN packets
   as if they were coming from a CE. With option B, the VPN label is
   swapped. With option A, the VPN label is removed.

   With options A and B, the ASBR of the second domain performs the
   same procedure, which includes the VPN/VRF tunnel mapping and the
   interface/label assignment with the IP/MPLS domain controller.
   With option A, the ASBR operations are the same as those of the
   PEs. With option B, the ASBR operates with VPN labels, so it can
   see the VPN the traffic belongs to. With option C, the ASBR
   operates with the end-to-end tunnel labels, so it may not be aware
   of the VPN the traffic belongs to.

   This process is repeated in each domain. The PE of the last
   domain, interfacing the destination CE, should recognize the VPN
   label when the VPN packets arrive, and thus pop the VPN label and
   forward the packets to the CE.

5.3.1.3. VSI Provisioning for L2VPN

   The VSI provisioning for an L2VPN is similar to the VPN/VRF
   provisioning for an L3VPN. L2VPN service types include:

   o  Point-to-point Virtual Private Wire Services (VPWSs) that use
      LDP-signaled Pseudowires or L2TP-signaled Pseudowires
      [RFC6074];

   o  Multipoint Virtual Private LAN Services (VPLSs) that use
      LDP-signaled Pseudowires or L2TP-signaled Pseudowires
      [RFC6074];

   o  Multipoint Virtual Private LAN Services (VPLSs) that use a
      Border Gateway Protocol (BGP) control plane, as described in
      [RFC4761] and [RFC6624];

   o  IP-Only LAN-Like Services (IPLSs) that are a functional subset
      of VPLS services [RFC7436];

   o  BGP MPLS-based Ethernet VPN Services, as described in [RFC7432]
      and [RFC7209];

   o  Ethernet VPN VPWS, as specified in [RFC8214] and [RFC7432].
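   For instance, a point-to-point VPWS could be requested through the
   Service Configuration Interface with an abridged L2SM [RFC8466]
   fragment such as the one below; the vpn-id and customer name are
   hypothetical, and the site and attachment details are omitted:

      {
        "ietf-l2vpn-svc:l2vpn-svc": {
          "vpn-services": {
            "vpn-service": [
              {
                "vpn-id": "l2vpn-red",
                "customer-name": "customer-1",
                "vpn-svc-type": "vpws"
              }
            ]
          }
        }
      }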
5.3.1.4. Inter-domain Links Update

   In order to facilitate the use of the inter-domain links for the
   VPN, it is assumed that the Service/Network Orchestrator knows the
   inter-domain link status and resource information (e.g., available
   bandwidth, protection/restoration policy, etc.) via some
   mechanisms that are beyond the scope of this document. It is also
   assumed that the inter-domain links are pre-configured prior to
   service instantiation.

5.3.1.5. End-to-end Tunnel Management

   It is foreseen that the Service/Network Orchestrator should
   control and manage the end-to-end tunnels for the VPNs, according
   to the VPN policy.

   As discussed in [ACTN-PM], the Orchestrator is responsible for
   collecting the domain LSP-level performance monitoring data from
   the domain controllers, and for deriving and reporting the
   end-to-end tunnel performance monitoring information to the
   customer.

5.3.2. Scenario B: Isolated VN/Tunnel Establishment

   When the L3VPN requires the establishment of a hard-isolated
   Tunnel, the binding of the optical-layer tunnel with the IP/MPLS
   layer is necessary. As such, the following functions are needed:

   o  The IP/MPLS Domain Controller of Domain 1 needs to send the VRF
      instruction to the PE:

      o  To the Ingress PE of AS Domain 1: the configuration, for
         each L3VPN destination IP address (in this case, the remote
         CE's IP address for the VPN, or any customer IP addresses
         reachable through a remote CE), of the associated VPN label
         assigned by the Egress PE and of the MPLS-TE Tunnel to be
         used to reach the Egress PE, so that the proper VRF table is
         populated to forward the VPN traffic to the inter-layer
         optical interface with the VPN label.

   o  The Egress PE, upon the discovery of a new IP address, needs to
      send the mapping information (i.e., VPN to IP address) to its
      IP/MPLS Domain Controller of Domain 2, which sends it, in turn,
      to the Service Orchestrator. The Service Orchestrator would
      then propagate this mapping information to the IP/MPLS Domain
      Controller of Domain 1, which sends it, in turn, to the ingress
      PE, so that it may override the VPN/VRF forwarding or the VSI
      forwarding, for the L3VPN and L2VPN respectively. As a result,
      when packets with that destination IP address arrive at the
      ingress PE, the ingress PE forwards them to the inter-layer
      optical interface.

   [Editor's Note] In case a hard-isolated tunnel is required for the
   VPN, a separate MPLS-TE tunnel needs to be created, and the MPLS
   packets of this MPLS Tunnel need to be encapsulated into the ODU,
   so that the optical NE routes this MPLS Tunnel onto an optical
   tunnel separate from the other tunnels.
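   No model is cited in this document for conveying this VRF
   instruction (see also the Editor's Note in Section 5.2). Purely to
   make the information elements concrete, a hypothetical encoding
   (marked by the "example-" prefix) could carry, per destination
   prefix, the VPN label assigned by the egress PE and the MPLS-TE
   tunnel, itself carried over the dedicated optical tunnel:

      {
        "example-vpn-binding:vrf-forwarding": [
          {
            "vrf-name": "l3vpn-blue",
            "prefix": "198.51.100.0/24",
            "vpn-label": 20017,
            "out-tunnel": "mpls-te-tunnel-pe1-pe2"
          }
        ]
      }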
5.4. Optical Domain Controller and NE Functions

   The optical network provides the underlay connectivity services to
   the IP/MPLS networks. The multi-domain optical network
   coordination is performed by the L-MDSC function shown in Figure
   2, so that the whole multi-domain optical network appears to the
   Service/Network Orchestrator as one optical network. The
   coordination of the Packet/Optical multi-layer aspects and of the
   IP/MPLS multi-domain aspects is done by the Service/Network
   Orchestrator, which interfaces with the two IP/MPLS domain
   controllers and with the optical L-MDSC.

   Figure 6 shows how the Optical Domain Controller creates a new
   optical tunnel, and the related interactions with the IP/MPLS
   domain controllers and the NEs to bind the optical tunnel with the
   proper forwarding instructions, so that a VPN requiring hard
   isolation can be fulfilled.

   IP/MPLS Domain 1        Optical Domain       IP/MPLS Domain 2
   Controller              Controller           Controller

   +------------------+    +---------+    +------------------+
   | +-----+ +-----+  |    | +-----+ |    | +-----+ +-----+  |
   | |PNC1 | |Serv.|  |    | |PNC  | |    | |PNC2 | |Serv.|  |
   | +-----+ +-----+  |    | +-----+ |    | +-----+ +-----+  |
   +--|-----------|---+    +----|----+    +--|----------|----+
      |2.Tunnel   |3.VPN/VRF    |            |2.Tunnel  |3.VPN/VRF
      |  Binding  |  Provisioning            |  Binding |  Provisioning
      v           v             |            v          v
   +-------------------+        |        +-------------------+
CE / PE          ASBR   \       |       /  ASBR          PE   \ CE
o--/---o            o--\--------|------/--o            o---\--o
   \   :                /       |      \                :    /
    \  :  AS Domain 1  /        |       \  AS Domain 2  :   /
     +-:--------------+         |        +--------------:--+
       :                        | 1. Optical            :
       :                        |    Tunnel Creation    :
       :                        v                       :
   +---:------------------------------------------------:--+
  /    :                                                :    \
 /     o................................................o     \
 |                      Optical Tunnel                        |
  \                                                          /
   \                     Optical Domain                     /
    +------------------------------------------------------+

      Figure 6 Domain Controller & NE Functions (Isolated Optical
                                Tunnel)

   As discussed in Section 5.2, in case the VPN has a requirement for
   the establishment of a hard-isolated tunnel, the Service/Network
   Orchestrator coordinates across the IP/MPLS domain controllers and
   the Optical L-MDSC to ensure the creation of a new optical tunnel
   for the VPN in the proper sequence. Figure 6 shows this scenario.

   o  The MDSC of the Service/Network Orchestrator requests the
      L-MDSC to set up an Optical tunnel providing connectivity
      between the inter-layer interfaces at the ingress and egress
      PEs, and requests the two IP/MPLS domain controllers to set up
      an inter-domain IP link between these interfaces.

   o  The MDSC of the Service/Network Orchestrator then provides the
      ingress IP/MPLS domain controller with the routing instruction
      for the VPN, so that the ingress IP/MPLS domain controller can
      help its ingress PE to populate the forwarding table. The
      packets with the VPN label should be forwarded to the optical
      interface the MDSC provided.

   o  The Ingress Optical Domain PE needs to recognize the MPLS-TE
      label on its ingress interface from the IP/MPLS domain PE and
      to encapsulate the MPLS packets of this MPLS-TE Tunnel into the
      ODU.

   [Editor's Note] It is assumed that the Optical PE is an LSR.

   o  The Egress Optical Domain PE needs to pop the ODU label before
      sending the packets (with the MPLS-TE label kept intact at the
      top level) to the Egress PE in the IP/MPLS Domain to which the
      packets are destined.

   [Editor's Note] If there are two VPNs with the same destination CE
   requiring optical tunnels that are not shared with each other,
   this case needs to be explained, with a need for an additional
   label to differentiate the VPNs.
1224 5.5. Orchestrator-Controllers-NEs Communication Protocol Flows

1226    This section provides generic communication protocol flows across
1227    the orchestrator, controllers, and NEs to facilitate the POI
1228    scenario discussed in Section 5.3.2 for dynamic optical tunnel
1229    establishment. Figure 7 shows the communication flows.

1231     +---------+ +-------+ +------+ +------+ +------+ +------+
1232     |Orchestr.| |Optical| |Packet| |Packet| |Ing.PE| |Egr.PE|
1233     |         | | Ctr.  | |Ctr-D1| |Ctr-D2| |  D1  | |  D2  |
1234     +---------+ +-------+ +------+ +------+ +------+ +------+
1235          |          |         |        |        |        |
1236          |          |         |        |        |<--BGP-->|
1237          |          |         |        |VPN Update       |
1238          |          |         |VPN Update<---------------|
1239          |<---------------------------|(Dest, VPN)       |
1240          |          |         |(Dest, VPN)      |        |
1241          | Tunnel Create      |        |        |        |
1242          |--------->|         |        |        |        |
1243          |(VPN,Ingr/Egr if)   |        |        |        |
1244          |          |         |        |        |        |
1245          | Tunnel Confirm     |        |        |        |
1246          |<---------|         |        |        |        |
1247          | (Tunnel ID)        |        |        |        |
1248          |          |         |        |        |        |
1249          | Tunnel Bind        |        |        |        |
1250          |------------------->|        |        |        |
1251          | (Tunnel ID, VPN, Ingr if)   | Forward. Mapping|
1252          |          |         |---------------->| (1)    |
1253          |          |         | (Dest, VPN, Ingr if)     |
1254          | Tunnel Bind Confirm|        |        |        |
1255          |<-------------------|        |        |        |
1256          |          |         |        |        |        |
1257          | Tunnel Bind        |        |        |        |
1258          |---------------------------->|        |        |
1259          | (Tunnel ID, VPN, Egr if)    | Forward. Mapping|
1260          |          |         |        |---------------->| (2)
1262          |          |         |        | (Dest, VPN, Egr if)
1263          | Tunnel Bind Confirm         |        |        |
1264          |<----------------------------|        |        |
1265          |          |         |        |        |        |

1267    Figure 7 Communication Flows for Optical Tunnel Establishment and
1268             Binding

1270    When the Packet Controller of Domain 1 sends the forwarding
1271    mapping information, as indicated in (1) in Figure 7, the Ingress
1272    PE in Domain 1 needs to provision the VRF forwarding table based
1273    on the information it receives. See the detailed procedure in
1274    Section 5.3.1.2. A similar procedure applies at the Egress PE in
1275    Domain 2, as indicated in (2).
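   The message ordering of Figure 7 can also be read as an event-driven
   procedure at the orchestrator: a VPN Update received via the Domain 2
   packet controller triggers a Tunnel Create towards the Optical
   Controller, followed by Tunnel Bind requests towards both packet
   controllers. The Python sketch below simulates this ordering with
   stub controllers; the class, method, and interface names are
   illustrative stand-ins, not an actual controller API.

      # Minimal simulation of the Figure 7 message ordering. Classes and
      # message names are illustrative stand-ins, not real APIs.

      class OpticalController:
          def tunnel_create(self, vpn, ingress_if, egress_if):
              print(f"Optical Ctr: creating tunnel for {vpn}")
              return "tunnel-42"            # Tunnel Confirm (Tunnel ID)

      class PacketController:
          def __init__(self, name):
              self.name = name
          def tunnel_bind(self, tunnel_id, vpn, interface):
              # Triggers the Forward. Mapping push (Dest, VPN, if) to
              # its PE, per the procedure in Section 5.3.1.2.
              print(f"{self.name}: bound {tunnel_id} to {vpn} "
                    f"on {interface}")

      def on_vpn_update(dest, vpn, optical, ctr_d1, ctr_d2):
          # Orchestrator behaviour: create the optical tunnel first,
          # then bind it at the ingress and egress ends, in that order.
          tunnel_id = optical.tunnel_create(vpn, "ingr-if", "egr-if")
          ctr_d1.tunnel_bind(tunnel_id, vpn, "ingr-if")  # (1) in Fig. 7
          ctr_d2.tunnel_bind(tunnel_id, vpn, "egr-if")   # (2) in Fig. 7

      on_vpn_update("192.0.2.0/24", "vpn-blue",
                    OpticalController(),
                    PacketController("Packet Ctr-D1"),
                    PacketController("Packet Ctr-D2"))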
1277 6. Security Considerations

1279    Several security considerations have been identified and will be
1280    discussed in future versions of this document.

1282 7. Operational Considerations

1284    Telemetry data, such as the collection of lower-layer networking
1285    health and consideration of network and service performance from
1286    POI domain controllers, may be required. These requirements and
1287    capabilities will be discussed in future versions of this document.

1289 8. IANA Considerations

1291    This document requires no IANA actions.

1293 9. References

1295 9.1. Normative References

1297    [RFC7950] Bjorklund, M. et al., "The YANG 1.1 Data Modeling
1298              Language", RFC 7950, August 2016.

1300    [RFC7951] Lhotka, L., "JSON Encoding of Data Modeled with YANG",
1301              RFC 7951, August 2016.

1303    [RFC8040] Bierman, A. et al., "RESTCONF Protocol", RFC 8040,
1304              January 2017.

1306    [RFC8345] Clemm, A., Medved, J. et al., "A YANG Data Model for
1307              Network Topologies", RFC 8345, March 2018.

1309    [RFC8346] Clemm, A. et al., "A YANG Data Model for Layer 3
1310              Topologies", RFC 8346, March 2018.

1312    [RFC8453] Ceccarelli, D., Lee, Y. et al., "Framework for
1313              Abstraction and Control of TE Networks (ACTN)", RFC 8453,
1314              August 2018.

1315    [RFC8525] Bierman, A. et al., "YANG Library", RFC 8525, March 2019.

1317    [IEEE 802.1AB] IEEE 802.1AB-2016, "IEEE Standard for Local and
1318              metropolitan area networks - Station and Media Access
1319              Control Connectivity Discovery", March 2016.

1321    [TE-TOPO] Liu, X. et al., "YANG Data Model for TE Topologies",
1322              draft-ietf-teas-yang-te-topo, work in progress.

1324    [WSON-TOPO] Lee, Y. et al., "A YANG Data Model for WSON (Wavelength
1325              Switched Optical Networks)", draft-ietf-ccamp-wson-yang,
1326              work in progress.

1328    [Flexi-TOPO] Lopez de Vergara, J. E. et al., "YANG Data Model for
1329              Flexi-Grid Optical Networks", draft-ietf-ccamp-flexigrid-
1330              yang, work in progress.

1332    [CLIENT-TOPO] Zheng, H. et al., "A YANG Data Model for Client-layer
1333              Topology", draft-zheng-ccamp-client-topo-yang, work in
1334              progress.

1336    [L3-TE-TOPO] Liu, X. et al., "YANG Data Model for Layer 3 TE
1337              Topologies", draft-ietf-teas-yang-l3-te-topo, work in
1338              progress.

1340    [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic
1341              Engineering Tunnels and Interfaces", draft-ietf-teas-
1342              yang-te, work in progress.

1344    [WSON-TUNNEL] Lee, Y. et al., "A YANG Data Model for WSON Tunnel",
1345              draft-ietf-ccamp-wson-tunnel-model, work in progress.

1347    [Flexi-MC] Lopez de Vergara, J. E. et al., "YANG Data Model for
1348              Flexi-Grid Media Channels", draft-ietf-ccamp-flexigrid-
1349              media-channel-yang, work in progress.

1351    [CLIENT-SIGNAL] Zheng, H. et al., "A YANG Data Model for Transport
1352              Network Client Signals", draft-ietf-ccamp-client-signal-
1353              yang, work in progress.

1355 9.2. Informative References

1357    [RFC4364] Rosen, E. and Y. Rekhter, "BGP/MPLS IP Virtual Private
1358              Networks (VPNs)", RFC 4364, February 2006.

1360    [RFC4761] Kompella, K., Ed. and Y. Rekhter, Ed., "Virtual Private
1361              LAN Service (VPLS) Using BGP for Auto-Discovery and
1362              Signaling", RFC 4761, January 2007.

1364    [RFC6074] Rosen, E., Davie, B., Radoaca, V., and W. Luo,
1365              "Provisioning, Auto-Discovery, and Signaling in Layer 2
1366              Virtual Private Networks (L2VPNs)", RFC 6074,
1367              January 2011.

1368    [RFC6624] Kompella, K., Kothari, B., and R. Cherukuri, "Layer 2
1369              Virtual Private Networks Using BGP for Auto-Discovery and
1370              Signaling", RFC 6624, May 2012.

1372    [RFC7209] Sajassi, A., Aggarwal, R., Uttaro, J., Bitar, N.,
1373              Henderickx, W., and A. Isaac, "Requirements for Ethernet
1374              VPN (EVPN)", RFC 7209, May 2014.

1376    [RFC7432] Sajassi, A., Ed., et al., "BGP MPLS-Based Ethernet VPN",
1377              RFC 7432, February 2015.

1379    [RFC7436] Shah, H., Rosen, E., Le Faucheur, F., and G. Heron,
1380              "IP-Only LAN Service (IPLS)", RFC 7436, January 2015.

1382    [RFC8214] Boutros, S., Sajassi, A., Salam, S., Drake, J., and J.
1383              Rabadan, "Virtual Private Wire Service Support in
1384              Ethernet VPN", RFC 8214, August 2017.

1386    [RFC8299] Wu, Q., Litkowski, S., Tomotaki, L., and K. Ogaki, "YANG
1387              Data Model for L3VPN Service Delivery", RFC 8299,
1388              January 2018.

1389    [RFC8309] Wu, Q., Liu, W., and A. Farrel, "Service Models
1390              Explained", RFC 8309, January 2018.

1392    [RFC8466] Fioccola, G., Ed., "A YANG Data Model for Layer 2 Virtual
1393              Private Network (L2VPN) Service Delivery", RFC 8466,
1394              October 2018.

1396    [TNBI] Busi, I., King, D. et al., "Transport Northbound Interface
1397              Applicability Statement", draft-ietf-ccamp-transport-nbi-
1398              app-statement, work in progress.

1400    [ACTN-VN] Lee, Y. et al., "A YANG Data Model for ACTN VN
1401              Operation", draft-ietf-teas-actn-vn-yang, work in
1402              progress.

1403    [TSM] Lee, Y. et al., "Traffic Engineering and Service Mapping
1404              YANG Model", draft-ietf-teas-te-service-mapping-yang,
1405              work in progress.

1407    [ACTN-PM] Lee, Y. et al., "YANG models for VN & TE Performance
1408              Monitoring Telemetry and Scaling Intent Autonomics",
1409              draft-lee-teas-actn-pm-telemetry-autonomics, work in
1410              progress.

1412    [BGP-L3VPN] Jain, D. et al., "YANG Data Model for BGP/MPLS L3
1413              VPNs", draft-ietf-bess-l3vpn-yang, work in progress.

1415 Acknowledgments

1417    This document was prepared using 2-Word-v2.0.template.dot.

1419    Some of this analysis work was supported in part by the European
1420    Commission funded H2020-ICT-2016-2 METRO-HAUL project
1421    (G.A. 761727).
1422 Contributors 1424 Sergio Belotti 1425 Nokia 1427 Email: sergio.belotti@nokia.com 1429 Gabriele Galimberti 1430 Cisco 1432 Email: ggalimbe@cisco.com 1434 Zheng Yanlei 1435 China Unicom 1437 Email: zhengyanlei@chinaunicom.cn 1439 Anton Snitser 1440 Sedona 1442 Email: antons@sedonasys.com 1444 Washington Costa Pereira Correia 1445 TIM Brasil 1447 Email: wcorreia@timbrasil.com.br 1448 Michael Scharf 1449 Hochschule Esslingen - University of Applied Sciences 1451 Email: michael.scharf@hs-esslingen.de 1453 Young Lee 1454 Sung Kyun Kwan University 1456 Email: younglee.tx@gmail.com 1458 Jeff Tantsura 1459 Apstra 1461 Email: jefftant.ietf@gmail.com 1463 Authors' Addresses 1465 Fabio Peruzzini 1466 TIM 1468 Email: fabio.peruzzini@telecomitalia.it 1470 Jean-Francois Bouquier 1471 Vodafone 1473 Email: jeff.bouquier@vodafone.com 1475 Italo Busi 1476 Huawei 1478 Email: Italo.busi@huawei.com 1480 Daniel King 1481 Old Dog Consulting 1483 Email: daniel@olddog.co.uk 1484 Daniele Ceccarelli 1485 Ericsson 1487 Email: daniele.ceccarelli@ericsson.com