1 TEAS Working Group Fabio Peruzzini 2 Internet Draft TIM 3 Intended status: Informational Jean-Francois Bouquier 4 Vodafone 5 Italo Busi 6 Huawei 7 Daniel King 8 Old Dog Consulting 9 Daniele Ceccarelli 10 Ericsson 12 Expires: September 2022 March 7, 2022 14 Applicability of Abstraction and Control of Traffic Engineered 15 Networks (ACTN) to Packet Optical Integration (POI) 17 draft-ietf-teas-actn-poi-applicability-06 19 Abstract 21 This document considers the applicability of the Abstraction and Control 22 of TE Networks (ACTN) architecture to Packet Optical Integration 23 (POI) in the context of IP/MPLS and optical internetworking. 
It 24 identifies the YANG data models being defined by the IETF to support 25 this deployment architecture and specific scenarios relevant for 26 Service Providers. 28 Existing IETF protocols and data models are identified for each 29 multi-layer (packet over optical) scenario with a specific focus on 30 the MPI (Multi-Domain Service Coordinator to Provisioning Network 31 Controllers Interface) in the ACTN architecture. 33 Status of this Memo 35 This Internet-Draft is submitted in full conformance with the 36 provisions of BCP 78 and BCP 79. 38 Internet-Drafts are working documents of the Internet Engineering 39 Task Force (IETF), its areas, and its working groups. Note that 40 other groups may also distribute working documents as Internet- 41 Drafts. 43 Internet-Drafts are draft documents valid for a maximum of six months 44 and may be updated, replaced, or obsoleted by other documents at any 45 time. It is inappropriate to use Internet-Drafts as reference 46 material or to cite them other than as "work in progress." 47 The list of current Internet-Drafts can be accessed at 48 http://www.ietf.org/ietf/1id-abstracts.txt 50 The list of Internet-Draft Shadow Directories can be accessed at 51 http://www.ietf.org/shadow.html 53 This Internet-Draft will expire on September 8, 2022. 55 Copyright Notice 57 Copyright (c) 2022 IETF Trust and the persons identified as the 58 document authors. All rights reserved. 60 This document is subject to BCP 78 and the IETF Trust's Legal 61 Provisions Relating to IETF Documents 62 (http://trustee.ietf.org/license-info) in effect on the date of 63 publication of this document. Please review these documents 64 carefully, as they describe your rights and restrictions with respect 65 to this document. Code Components extracted from this document must 66 include Simplified BSD License text as described in Section 4.e of 67 the Trust Legal Provisions and are provided without warranty as 68 described in the Simplified BSD License. 
70 Table of Contents 72 1. Introduction...................................................3 73 1.1. Terminology...............................................5 74 2. Reference network architecture.................................7 75 2.1. Multi-domain Service Coordinator (MDSC) functions.........9 76 2.1.1. Multi-domain L2/L3 VPN network services.............11 77 2.1.2. Multi-domain and multi-layer path computation.......14 78 2.2. IP/MPLS Domain Controller and NE Functions...............17 79 2.3. Optical Domain Controller and NE Functions...............19 80 3. Interface protocols and YANG data models for the MPIs.........19 81 3.1. RESTCONF protocol at the MPIs............................19 82 3.2. YANG data models at the MPIs.............................20 83 3.2.1. Common YANG data models at the MPIs.................20 84 3.2.2. YANG models at the Optical MPIs.....................21 85 3.2.3. YANG data models at the Packet MPIs.................21 86 3.3. PCEP.....................................................22 87 4. Inventory, service and network topology discovery.............23 88 4.1. Optical topology discovery...............................25 89 4.2. Optical path discovery...................................26 90 4.3. Packet topology discovery................................27 91 4.4. SR-TE path discovery.....................................27 92 4.5. Inter-domain link discovery..............................28 93 4.5.1. Cross-layer link discovery..........................29 94 4.5.2. Inter-domain IP link discovery......................31 95 4.6. Multi-layer IP link discovery............................33 96 4.6.1. Single-layer intra-domain IP links..................36 97 4.7. LAG discovery............................................38 98 4.8. L2/L3 VPN network services discovery.....................38 99 4.9. Inventory discovery......................................38 100 5. 
Establishment of L2/L3 VPN network services with TE requirements 101 .................................................................38 102 5.1. Optical Path Computation.................................40 103 5.2. Multi-layer IP link Setup................................41 104 5.3. SR-TE Path Setup and Update..............................42 105 6. Conclusions...................................................43 106 7. Security Considerations.......................................44 107 8. Operational Considerations....................................44 108 9. IANA Considerations...........................................44 109 10. References...................................................44 110 10.1. Normative References....................................44 111 10.2. Informative References..................................46 112 Appendix A. OSS/Orchestration Layer...........................49 113 A.1. MDSC NBI................................................49 114 Appendix B. Multi-layer and multi-domain resiliency...........52 115 B.1. Maintenance Window......................................52 116 B.2. Router port failure.....................................52 117 Acknowledgments..................................................53 118 Contributors.....................................................53 119 Authors' Addresses...............................................55 121 1. Introduction 123 The complete automation of the management and control of Service 124 Providers' transport networks (IP/MPLS, optical, and microwave 125 transport networks) is vital for meeting emerging demand for high- 126 bandwidth use cases, including 5G and fiber connectivity services. 127 The Abstraction and Control of TE Networks (ACTN) architecture and 128 interfaces facilitate the automation and operation of complex optical 129 and IP/MPLS networks through standard interfaces and data models. 
130 This allows a wide range of network services to be requested 131 by the upper layers, fulfilling almost any kind of service-level 132 requirement from a network perspective (e.g., physical diversity, 133 latency, bandwidth, topology, etc.). 135 Packet Optical Integration (POI) is an advanced use case of traffic 136 engineering. In wide-area networks, a packet network based on the 137 Internet Protocol (IP), and often Multiprotocol Label Switching 138 (MPLS) or Segment Routing (SR), is typically realized on top of an 139 optical transport network that uses Dense Wavelength Division 140 Multiplexing (DWDM) (and optionally an Optical Transport Network 141 (OTN) layer). 143 In many existing network deployments, the packet and the optical 144 networks are engineered and operated independently. As a result, 145 there are technical differences between the technologies (e.g., 146 routers compared to optical switches) and the corresponding network 147 engineering and planning methods (e.g., inter-domain peering 148 optimization in IP, versus dealing with physical impairments in DWDM, 149 or very different time scales). In addition, customers' needs can be 150 different between a packet and an optical network, and it is not 151 uncommon to use different vendors in both domains. The operation of 152 these complex packet and optical networks is often siloed, as these 153 technology domains require specific skill sets. 155 This separation of packet and optical network deployment and operation is 156 inefficient for many reasons. Both capital expenditure (CAPEX) and 157 operational expenditure (OPEX) could be significantly reduced by 158 integrating the packet and the optical networks. 
Multi-layer online 159 topology insight can speed up troubleshooting (e.g., alarm 160 correlation) and network operation (e.g., coordination of maintenance 161 events); multi-layer offline topology inventory can improve service 162 quality (e.g., detection of diversity constraint violations); and 163 multi-layer traffic engineering can use the available network 164 capacity more efficiently (e.g., coordination of restoration). In 165 addition, provisioning workflows can be simplified or automated as 166 needed across layers (e.g., to achieve bandwidth-on-demand or to 167 perform activities during maintenance windows). 169 The ACTN framework enables this complete multi-layer and multi-vendor 170 integration of packet and optical networks through a Multi-Domain 171 Service Coordinator (MDSC) and packet and optical Provisioning 172 Network Controllers (PNCs). 174 This document describes critical scenarios for POI from the 175 packet service layer perspective and identifies the required 176 coordination between the packet and optical layers to improve POI 177 deployment and operation. Precise definitions of scenarios can help 178 with achieving a common understanding across different disciplines. 179 The focus of the scenarios is multi-domain packet networks operated 180 as a client of optical networks. 182 This document analyses the case where the packet networks support 183 multi-domain SR-TE paths and the optical networks could be either a 184 DWDM network, an OTN network (without a DWDM layer), or a multi-layer 185 OTN/DWDM network. DWDM networks could be either fixed-grid or 186 flexible-grid. 188 Multi-layer and multi-domain scenarios, based on the reference network 189 described in section 2 and very relevant for Service Providers, are 190 described in section 4 and in section 5. 192 For each scenario, existing IETF protocols and data models, 193 identified in section 3.1 and section 3.2, are analysed with 194 particular focus on the MPI in the ACTN architecture. 
196 For each multi-layer scenario, the document analyses how to use the 197 interfaces and data models of the ACTN architecture. 199 A summary of the gaps identified in this analysis is provided in 200 section 6. 202 Understanding the level of standardization and the possible gaps will 203 help assess the feasibility of integration between packet and optical 204 DWDM domains (and optionally an OTN layer) from an end-to-end multi-vendor 205 service provisioning perspective. 207 1.1. Terminology 209 This document uses the ACTN terminology defined in [RFC8453]. 211 In addition, this document uses the following terminology. 213 Customer service: 215 the end-to-end service from CE to CE 217 Network service: 219 the PE to PE configuration including both the network service layer 220 (VRFs, RT import/export policies configuration) and the network 221 transport layer (e.g. RSVP-TE LSPs). This includes the 222 configuration (on the PE side) of the interface towards the CE 223 (e.g. VLAN, IP address, routing protocol, etc.) 
225 Port: 227 the physical entity that transmits and receives physical signals 229 Interface: 231 a physical or logical entity that transmits and receives traffic 233 Link: 235 an association between two interfaces that can exchange traffic 236 directly 238 Ethernet link: 240 a link between two Ethernet interfaces 242 IP link: 244 a link between two IP interfaces 246 Cross-layer link: 248 an Ethernet link between an Ethernet interface on a router and an 249 Ethernet interface on an optical NE 251 Intra-domain single-layer Ethernet link: 253 an Ethernet link between two Ethernet interfaces on 254 physically adjacent routers that belong to the same P-PNC domain 256 Intra-domain single-layer IP link: 258 an IP link supported by an intra-domain single-layer Ethernet link 260 Inter-domain single-layer Ethernet link: 262 an Ethernet link between two Ethernet interfaces on 263 physically adjacent routers which belong to different P-PNC domains 265 Inter-domain single-layer IP link: 267 an IP link supported by an inter-domain single-layer Ethernet link. 269 Intra-domain multi-layer Ethernet link: 271 an Ethernet link supported by two cross-layer links and an optical 272 tunnel in between 274 Intra-domain multi-layer IP link: 276 an IP link supported by an intra-domain multi-layer Ethernet link 278 2. 
Reference network architecture 280 This document analyses several deployment scenarios for Packet and 281 Optical Integration (POI) in which the ACTN hierarchy is deployed to 282 control a multi-layer and multi-domain network, with two optical 283 domains and two packet domains, as shown in Figure 1: 285 +----------+ 286 | MDSC | 287 +-----+----+ 288 | 289 +-----------+-----+------+-----------+ 290 | | | | 291 +----+----+ +----+----+ +----+----+ +----+----+ 292 | P-PNC 1 | | O-PNC 1 | | O-PNC 2 | | P-PNC 2 | 293 +----+----+ +----+----+ +----+----+ +----+----+ 294 | | | | 295 | \ / | 296 +-------------------+ \ / +-------------------+ 297 CE1 / PE1 BR1 \ | / / BR2 PE2 \ CE2 298 o--/---o o---\-|-------|--/---o o---\--o 299 \ : : / | | \ : : / 300 \ : PKT domain 1 : / | | \ : PKT domain 2 : / 301 +-:---------------:-+ | | +-:---------------:--+ 302 : : | | : : 303 : : | | : : 304 +-:---------------:------+ +-------:---------------:--+ 305 / : : \ / : : \ 306 / o...............o \ / o...............o \ 307 \ optical domain 1 / \ optical domain 2 / 308 \ / \ / 309 +------------------------+ +--------------------------+ 311 Figure 1 - Reference Network 313 The ACTN architecture, defined in [RFC8453], is used to control this 314 multi-layer and multi-domain network where each Packet PNC (P-PNC) is 315 responsible for controlling its packet domain and where each Optical 316 PNC (O-PNC) in the above topology is responsible for controlling its 317 optical domain. The packet domains controlled by the P-PNCs can be 318 Autonomous Systems (ASes), defined in [RFC1930], or IGP areas, within 319 the same operator network. 321 The routers between the packet domains can be either AS Boundary 322 Routers (ASBRs) or Area Border Routers (ABRs): in this document, the 323 generic term Border Router (BR) is used to represent either an ASBR 324 or an ABR. 326 The MDSC is responsible for coordinating the whole multi-domain 327 multi-layer (packet and optical) network. 
A specific standard 328 interface (MPI) permits the MDSC to interact with the different 329 Provisioning Network Controllers (O/P-PNCs). 331 The MPI presents an abstracted topology to the MDSC, hiding 332 technology-specific aspects of the network and hiding topology 333 details, depending on the policy chosen for the level of 334 abstraction supported. The level of abstraction can be obtained based 335 on P-PNC and O-PNC configuration parameters (e.g., provide the 336 potential connectivity between any PE and any BR in an SR-TE 337 network). 339 In the reference network of Figure 1, it is assumed that: 341 o The domain boundaries between the packet and optical domains are 342 congruent. In other words, one optical domain supports 343 connectivity between routers in one and only one packet domain; 345 o There are no inter-domain physical links between optical domains. 346 Inter-domain physical links exist only: 348 o between packet domains (i.e., between BRs belonging to 349 different packet domains): these links are called inter-domain 350 Ethernet or IP links within this document; 352 o between packet and optical domains (i.e., between routers and 353 optical NEs): these links are called cross-layer links within 354 this document; 356 o between customer sites and the packet network (i.e., between 357 CE devices and PE routers): these links are called access 358 links within this document. 360 o All the physical interfaces at inter-domain links are Ethernet 361 physical interfaces. 363 Although new optical technologies (e.g., QSFP-DD ZR 400G) allow 364 DWDM pluggable interfaces to be provided on routers, the deployment of 365 those pluggable optics is not yet widely adopted by operators. 366 The reason is that most operators are not yet ready to manage packet 367 and optical networks in a single unified domain. The analysis of the 368 unified use case is outside the scope of this draft. 
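As an illustration of the level of abstraction that can be applied at the MPI, the following fragment, based on the ietf-network and ietf-network-topology data models of [RFC8345], sketches how a PNC could expose the potential connectivity between a PE and a BR as a single abstract link, hiding the underlying nodes and links (all identifiers are purely illustrative and not mandated by this document):

```json
{
  "ietf-network:networks": {
    "network": [
      {
        "network-id": "pkt-domain-1-abstract",
        "node": [
          { "node-id": "PE1" },
          { "node-id": "BR1" }
        ],
        "ietf-network-topology:link": [
          {
            "link-id": "PE1-BR1-potential",
            "source": { "source-node": "PE1" },
            "destination": { "dest-node": "BR1" }
          }
        ]
      }
    ]
  }
}
```

Only the advertised potential connectivity is visible to the MDSC; the intra-domain nodes and links traversed by this abstract link remain hidden, in line with the abstraction policy configured on the PNC.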
370 This document analyses scenarios where all the multi-layer IP links, 371 supported by the optical network, are intra-domain (intra-AS/intra- 372 area), such as PE-BR, PE-P, BR-P, P-P IP links. Therefore, the inter- 373 domain IP links are always single-layer links supported by Ethernet 374 physical links. 376 The analysis of scenarios with multi-layer inter-domain IP links is 377 outside the scope of this document. 379 Therefore, if inter-domain links between the optical domains exist, 380 they would be used to support multi-domain optical services, which 381 are outside the scope of this document. 383 The optical network elements (NEs) within the optical domains can be 384 ROADMs or OTN switches, with or without an integrated ROADM function. 386 2.1. Multi-domain Service Coordinator (MDSC) functions 388 The MDSC in Figure 1 is responsible for multi-domain and multi-layer 389 coordination across multiple packet and optical domains, as well as 390 for providing multi-layer/multi-domain L2/L3 VPN network services 391 requested by an OSS/Orchestration layer. 393 From an implementation perspective, the functions associated with the 394 MDSC and described in [RFC8453] may be grouped in different ways. 396 1. Both the service- and network-related functions are collapsed into 397 a single, monolithic implementation, dealing with the end customer 398 service requests received from the CMI (Customer MDSC Interface) 399 and adapting the relevant network models. An example is represented 400 in Figure 2 of [RFC8453]. 401 2. An implementation can choose to split the service-related and the 402 network-related functions into different functional entities, as 403 described in [RFC8309] and in section 4.2 of [RFC8453]. In this 404 case, the MDSC is decomposed into a top-level Service Orchestrator, 405 interfacing with the customer via the CMI, and a Network 406 Orchestrator interfacing at the southbound with the PNCs. 
The 407 interface between the Service Orchestrator and the Network 408 Orchestrator is not specified in [RFC8453]. 410 3. Another implementation can choose to split the MDSC functions 411 between a "higher-level MDSC" (MDSC-H) responsible for packet and 412 optical multi-layer coordination, interfacing with one Optical 413 "lower-level MDSC" (MDSC-L), providing multi-domain coordination 414 between the O-PNCs, and one Packet MDSC-L, providing multi-domain 415 coordination between the P-PNCs (see for example Figure 9 of 416 [RFC8453]). 417 4. Another implementation can also choose to combine the MDSC and the 418 P-PNC functions together. 420 In current service providers' network deployments, there is typically an 421 OSS/Orchestration layer at the northbound of the MDSC, instead of a 422 CNC. In this case, the MDSC would implement only 423 the Network Orchestration functions, as in [RFC8309] and described in 424 point 2 above. Therefore, the MDSC deals with the network 425 service requests received from the OSS/Orchestration layer. 427 The functionality of the OSS/Orchestration layer and the interface 428 toward the MDSC are usually operator-specific and outside the scope 429 of this draft. Therefore, this document assumes that the 430 OSS/Orchestrator requests the MDSC to set up L2/L3 VPN network 431 services through mechanisms that are outside the scope of this 432 document. 434 There are two prominent workflow cases when the MDSC multi-layer 435 coordination is initiated: 437 o Initiated by a request from the OSS/Orchestration layer to set up 438 L2/L3 VPN network services that require multi-layer/multi-domain 439 coordination; 441 o Initiated by the MDSC itself to perform multi-layer/multi-domain 442 optimizations and/or maintenance activities (e.g. rerouting LSPs 443 with their associated services when putting a resource, like a 444 fibre, in maintenance mode during a maintenance window). 
445 Unlike service fulfillment, these workflows are not related to a 446 network service provisioning request being received from 447 the OSS/Orchestration layer. 449 The latter workflow cases are outside the scope of this document. 451 This document analyses the use cases where multi-layer coordination 452 is triggered by a network service request received from the 453 OSS/Orchestration layer. 455 2.1.1. Multi-domain L2/L3 VPN network services 457 Figure 2 provides an example of a hub & spoke multi-domain L2/L3 VPN 458 with three PEs where the hub PE (PE13) and one spoke PE (PE14) are 459 within the same packet domain and the other spoke PE (PE23) is within 460 a different packet domain. 462 ------ 463 | CE13 |___________________ 464 ------ ) __________________ 465 ( | ) ( ) 466 ( | PE13 P15 BR11 ) ( BR21 P24 ) 467 ( ____ ___ ____ ) ( ____ ___ ) 468 ( / H \ _ _ _ / \ _ _ / \ _)_ _ _(_ / \ _ _ _ / \ ) 469 ( \____/... \___/ \____/ ) ( \____/ \___/ ) 470 ( :..... ) ( | ) 471 ( ____ :__ ____ ) ( ____ _|__ ) 472 ( / S \...../ \._._./ \__________/ \._._._._./ S \ ) 473 ( \____/ \___/ \____/ ) ( \____/ \____/ ) 474 ( | ) ( | ) 475 ( | PE14 P16 BR12 ) ( BR22 PE23 | ) 476 ( | ) ( | ) 477 ------ ) ( ------ 478 | CE14 | ___________________) (_____________| CE23 | 479 ------ ------ 481 _____________________________ ___________________ 482 ( ) ( ) 483 ( ____ ____ ) ( ____ ) 484 ( /NE11\ __ _ _ _ _ /NE12\ ) ( /NE21\ _ _ ) 485 ( \____/.. \____/ ) ( \____/ \ ) 486 ( | :..... ...: \ ) ( / \ ) 487 ( _|__ :__: \____ ) ( ___/ __\_ ) 488 ( /NE13\_ _ /NE14\ _ _ _ /NE15\ ) ( /NE22\ _ _ _ /NE23\ ) 489 ( \____/ \____/ \____/ ) ( \____/ \____/ ) 490 ( ) ( ) 491 (_____________________________) (___________________) 493 optical domain 1 optical domain 2 495 H / S = Hub VRF / Spoke VRF 496 ____ = Inter-domain interconnections 497 ..... 
= SR policy Path 1 498 _ _ _ = SR policy Path 2 500 Figure 2 - Multi-domain L3VPN example 502 There are many options to implement multi-domain L2/L3 VPNs, 503 including: 505 1. BGP-LU (seamless MPLS) 506 2. Inter-domain RSVP-TE 507 3. Inter-domain SR-TE 509 This document provides an analysis of the inter-domain SR-TE option. 510 The analysis of other options is outside the scope of this draft. 512 It is also assumed that: 514 o each packet domain in Figure 2 is implementing SR-TE and the 515 stitching between two domains is done using end-to-end/multi- 516 domain SR-TE; 518 o the bandwidth of each intra-domain SR-TE path is managed by its 519 respective P-PNC; 521 o binding SID is used for the end-to-end SR-TE path stitching; 523 o each packet domain in Figure 2 is using TI-LFA, with SRLG 524 awareness, for local protection within each domain. 526 In this scenario, one of the key MDSC functions is to identify the 527 multi-domain/multi-layer SR-TE paths to be used to carry the L2/L3 528 VPN traffic between PEs belonging to different packet domains and to 529 relay this information to the P-PNCs, to ensure that the PEs' 530 forwarding tables (e.g., VRF) are properly configured to steer the 531 L2/L3 VPN traffic over the intended multi-domain/multi-layer SR-TE 532 paths. 534 The selection of the SR-TE path should take into account the TE 535 requirements and the binding requirements for the L2/L3 VPN network 536 service. 538 In general, the binding requirements for a network service (e.g., L2/L3 539 VPN) can be summarized in the following cases: 541 1. The customer asks for VPN isolation, dynamically creating 542 and binding tunnels to the service such that they are not shared 543 by other services (e.g., other VPNs). 
544 The level of isolation can be different: 545 a) Hard isolation with deterministic latency, meaning an L2/L3 546 VPN requiring a set of dedicated TE Tunnels (neither 547 shared with other services nor competing for bandwidth 548 with other tunnels) providing deterministic latency 549 performance 550 b) Hard isolation but without deterministic characteristics 551 c) Soft isolation, meaning the tunnels associated with the L2/L3 552 VPN are dedicated to that VPN but can compete for bandwidth 553 with other tunnels. 554 2. The customer does not ask for isolation, and could request a VPN 555 service where associated tunnels can be shared across multiple 556 VPNs. 558 For each SR-TE path required to support the L2/L3 VPN network 559 service, it is possible that: 561 1. An SR-TE path that meets the TE and binding requirements already 562 exists in the network. 564 2. An existing SR-TE path could be modified (e.g., through bandwidth 565 increase) to meet the TE and binding requirements: 567 a. The SR-TE path characteristics can be modified only in the 568 packet layer. 570 b. One or more new underlay optical tunnels need to be set up to 571 support the requested changes of the overlay SR-TE paths 572 (multi-layer coordination is required). 574 3. A new SR-TE path needs to be set up to meet the TE and binding 575 requirements: 577 a. The new SR-TE path reuses existing underlay optical tunnels; 579 b. One or more new underlay optical tunnels need to be set up to 580 support the setup of the new SR-TE path (multi-layer 581 coordination is required). 583 2.1.2. Multi-domain and multi-layer path computation 585 When a new SR-TE path needs to be set up, the MDSC is also responsible 586 for coordinating the multi-layer/multi-domain path computation. 588 Depending on the knowledge that the MDSC has of the topology and 589 configuration of the underlying network domains, three approaches for 590 performing multi-layer/multi-domain path computation are possible: 592 1. 
Full Summarization: In this approach, the MDSC has an abstracted 593 TE topology view of all of its underlying domains, both packet and 594 optical. 596 In this case, the MDSC does not have enough TE topology 597 information to perform multi-layer/multi-domain path computation. 598 Therefore, the MDSC delegates the P-PNCs and O-PNCs to perform 599 local path computation within their respective controlled domains 600 and it uses the information returned by the P-PNCs and O-PNCs to 601 compute the optimal multi-domain/multi-layer path. 603 This approach presents an issue for the P-PNC, which does not have the 604 capability of performing a single-domain/multi-layer path 605 computation, since it cannot retrieve the topology information 606 from the O-PNCs nor delegate the O-PNCs to perform optical path 607 computation. 609 A possible solution could be to include a CNC function within the 610 P-PNC to request from the MDSC a multi-domain optical path computation, 611 as shown in Figure 10 of [RFC8453]. 613 Another solution could be to rely on the recursive MDSC hierarchy, 614 as defined in section 4.1 of [RFC8453], where, for each IP and 615 optical domain pair, a "lower-level MDSC" (MDSC-L) provides the 616 essential multi-layer correlation and the "higher-level MDSC" 617 (MDSC-H) provides the multi-domain coordination. 618 In this case, the MDSC-H can get an abstract view of the underlying 619 multi-layer domain topologies from its underlying MDSC-Ls. Each 620 MDSC-L gets the full view of the IP domain topology from the P-PNC and 621 can get an abstracted view of the optical domain topology from its 622 underlying O-PNC. In other words, topology abstraction is possible 623 at the MPIs between MDSC-L and O-PNC and between MDSC-L and 624 MDSC-H. 626 2. Partial summarization: In this approach, the MDSC has full 627 visibility of the TE topology of the packet network domains and an 628 abstracted view of the TE topology of the optical network domains. 
630 The MDSC then has only the capability of performing multi- 631 domain/single-layer path computation for the packet layer (the 632 path can be computed optimally for the two packet domains). 634 Therefore, the MDSC still needs to delegate the O-PNCs to perform 635 local path computation within their respective domains and it uses 636 the information received from the O-PNCs, together with its TE 637 topology view of the multi-domain packet layer, to perform multi- 638 layer/multi-domain path computation. 640 3. Full knowledge: In this approach, the MDSC has a complete and 641 sufficiently detailed view of the TE topology of all the network domains 642 (both optical and packet). 644 In this case, the MDSC has all the information needed to perform multi- 645 domain/multi-layer path computation, without relying on PNCs. 647 This approach may present scalability issues as a potential drawback 648 and, as discussed in section 2.2 of [PATH-COMPUTE], 649 performing path computation for optical networks in the MDSC is 650 quite challenging because the optimal paths also depend on 651 vendor-specific optical attributes (which may be different in the 652 two domains if they are provided by different vendors). 654 This document analyses scenarios where the MDSC uses the partial 655 summarization approach to coordinate multi-domain/multi-layer path 656 computation. 658 Typically, the O-PNCs are responsible for the optical path 659 computation of services across their respective single domains. 660 Therefore, when setting up the network service, they must consider 661 the connection requirements such as bandwidth, amplification, 662 wavelength continuity, and non-linear impairments that may affect the 663 network service path. 665 The methods and types of path requirements and impairments, such as 666 those detailed in [OIA-TOPO], used by the O-PNC for optical path 667 computation are not exposed at the MPI and are therefore out of scope for 668 this document. 670 2.2. 
IP/MPLS Domain Controller and NE Functions 672 As highlighted in section 2.1.1, SR-TE is used in the packet domain. 673 Each domain, corresponding to either an IGP area or an Autonomous 674 System (AS) within the same operator network, is controlled by a 675 packet domain controller (P-PNC). 677 P-PNCs are responsible for setting up the SR-TE paths between any two PEs 678 or BRs in their respective controlled domains, as requested by the MDSC, 679 and for providing topology information to the MDSC. 681 With reference to Figure 2, a bidirectional SR-TE path from PE13 in 682 domain 1 to PE23 in domain 2 requires the MDSC to coordinate the 683 actions of: 685 o P-PNC1 to push a SID list to PE13 including the Binding SID 686 associated with the SR-TE path in Domain 2 with PE23 as the target 687 destination (forward direction); 689 o P-PNC2 to push a SID list to PE23 including the Binding SID 690 associated with the SR-TE path in Domain 1 with PE13 as the target 691 destination (reverse direction). 693 With reference to Figure 3, P-PNCs are then responsible: 695 1. To expose to the MDSC their respective detailed TE topology; 697 2. To perform single-layer single-domain local SR-TE path 698 computation, when requested by the MDSC, between two PEs (for a single- 699 domain end-to-end SR-TE path) or between PEs and BRs for an inter- 700 domain SR-TE path selected by the MDSC; 702 3. To configure the ingress PE or BR router in their respective 703 domain with the SID list associated with an SR-TE path; 705 4. Finally, to configure the VRF and PE-CE interfaces (service access 706 points) of the intra-domain and inter-domain network services 707 requested by the MDSC. 
709 +------------------+ +------------------+
710 | | | |
711 | P-PNC1 | | P-PNC2 |
712 | | | |
713 +--|-----------|---+ +--|-----------|---+
714 | 1.SR-TE | 2.VPN | 1.SR-TE | 2.VPN
715 | Policy | Provisioning | Policy | Provisioning
716 | Config | | Config |
717 V V V V
718 +---------------------+ +---------------------+
719 CE / PE SR-TE path 1 BR\ / BR SR-TE path 2 PE \ CE
720 o--/---o..................o--\-----/--o..................o---\--o
721 \ / \ /
722 \ Domain 1 / \ Domain 2 /
723 +---------------------+ +---------------------+

725 End-to-end SR-TE path
726 <------------------------------------------------->

728 Figure 3 Domain Controller & NE Functions

730 When requesting the setup of a new SR-TE path, the MDSC provides the
731 P-PNCs with the explicit path to be created or modified. In other
732 words, the MDSC can communicate to the P-PNCs the full list of nodes
733 involved in the path (strict mode). In this case, the P-PNC is only
734 responsible for pushing to the headend PE or BR the list of SIDs that
735 creates that explicit SR-TE path.

737 For scalability purposes, in large packet domains where multiple
738 engineered paths are available between any two nodes, the MDSC can
739 request a loose path, together with per-domain TE constraints, to
740 allow the P-PNC to select the intra-domain SR-TE path meeting these
741 constraints.

743 In such a case, it is mandatory that the P-PNC signals back to the
744 MDSC which path it has chosen, so that the MDSC keeps track of the
745 relevant resource utilization.

747 An example of that comes from Figure 2. The SR-TE path requested by
748 the MDSC touches PE13 - P16 - BR12 - BR21 - PE23. P-PNC2 knows of two
749 possible paths with the same topology metric, e.g. BR21 - P24 - PE23
750 and BR21 - BR22 - PE23, but with different load. It may then prefer
751 to steer the traffic onto the latter because it is less loaded.
753 This exception is mentioned here for the sake of completeness.
754 Since the network considered in this document does not fall within
755 this scenario, in the rest of this document the assumption is that the
756 MDSC always provides the explicit list of SIDs to the P-PNCs to set up
757 or modify the SR-TE path.

759 2.3. Optical Domain Controller and NE Functions

761 The optical network provides the underlay connectivity services to
762 IP/MPLS networks. The packet and optical multi-layer coordination is
763 done by the MDSC, as shown in Figure 1.

765 The O-PNC is responsible for:

767 o providing to the MDSC an abstract TE topology view of its underlying
768 optical network resources;

770 o performing single-domain local path computation, when requested by
771 the MDSC;

773 o performing optical tunnel setup, when requested by the MDSC.

775 The mechanisms used by the O-PNC to perform intra-domain topology
776 discovery and path setup are usually vendor-specific and outside the
777 scope of this document.

779 Depending on the type of optical network, TE topology abstraction,
780 path computation and path setup can be single-layer (either OTN or
781 WDM) or multi-layer OTN/WDM. In the latter case, the multi-layer
782 coordination between the OTN and WDM layers is performed by the
783 O-PNC.

785 3. Interface protocols and YANG data models for the MPIs

787 This section describes general assumptions applicable at all the MPI
788 interfaces, between each PNC (Optical or Packet) and the MDSC, to
789 support the scenarios discussed in this document.

791 3.1. RESTCONF protocol at the MPIs

793 The RESTCONF protocol, as defined in [RFC8040], using the JSON
794 representation defined in [RFC7951], is assumed to be used at these
795 interfaces.
In addition, extensions to RESTCONF, as defined in 796 [RFC8527], to be compliant with Network Management Datastore 797 Architecture (NMDA) defined in [RFC8342], are assumed to be used as 798 well at these MPI interfaces and also at MDSC NBI interfaces. 800 3.2. YANG data models at the MPIs 802 The data models used on these interfaces are assumed to use the YANG 803 1.1 Data Modeling Language, as defined in [RFC7950]. 805 3.2.1. Common YANG data models at the MPIs 807 As required in [RFC8040], the "ietf-yang-library" YANG module defined 808 in [RFC8525] is used to allow the MDSC to discover the set of YANG 809 modules supported by each PNC at its MPI. 811 Both Optical and Packet PNCs use the following common topology YANG 812 data models at the MPI: 814 o The Base Network Model, defined in the "ietf-network" YANG module 815 of [RFC8345]; 817 o The Base Network Topology Model, defined in the "ietf-network- 818 topology" YANG module of [RFC8345], which augments the Base 819 Network Model; 821 o The TE Topology Model, defined in the "ietf-te-topology" YANG 822 module of [RFC8795], which augments the Base Network Topology 823 Model. 825 Both Optical and Packet PNCs use the common TE Tunnel Model, defined 826 in the "ietf-te" YANG module of [TE-TUNNEL], at the MPI. 828 All the common YANG data models are generic and augmented by 829 technology-specific YANG modules, as described in the following 830 sections. 832 Both Optical and Packet PNCs also use the Ethernet Topology Model, 833 defined in the "ietf-eth-te-topology" YANG module of [CLIENT-TOPO], 834 which augments the TE Topology Model with Ethernet technology- 835 specific information. 837 Both Optical and Packet PNCs use the following common notifications 838 YANG data models at the MPI: 840 o Dynamic Subscription to YANG Events and Datastores over RESTCONF 841 as defined in [RFC8650]; 843 o Subscription to YANG Notifications for Datastores updates as 844 defined in [RFC8641]. 
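As a simple illustration of the notification models listed above, the
following Python sketch builds the JSON body of a RESTCONF dynamic
subscription request ([RFC8650]) for on-change YANG-Push updates
([RFC8641]). The XPath filter value is only an example; the RESTCONF
root discovery and transport details are omitted.

```python
import json

# Hypothetical sketch of the establish-subscription RPC input an MDSC
# could POST to a PNC to receive YANG-Push on-change updates whenever
# the PNC's operational topology changes. The XPath filter value is an
# example only.


def establish_subscription_body(xpath="/ietf-network:networks"):
    """Build the establish-subscription RPC input as a JSON string."""
    body = {
        "ietf-subscribed-notifications:input": {
            # Subscribe to the operational datastore...
            "ietf-yang-push:datastore": "ietf-datastores:operational",
            # ...filtered down to the subtree of interest.
            "ietf-yang-push:datastore-xpath-filter": xpath,
            # Request a notification on every change (on-change trigger).
            "ietf-yang-push:on-change": {}
        }
    }
    return json.dumps(body, indent=2)
```

Such a body would be POSTed to the PNC's
"/restconf/operations/ietf-subscribed-notifications:establish-subscription"
resource, assuming the default "/restconf" root.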
846 PNCs and MDSCs are compliant with subscription requirements as stated
847 in [RFC7923].

849 3.2.2. YANG data models at the Optical MPIs

851 The Optical PNC uses at least one of the following technology-
852 specific topology YANG data models, which augment the generic TE
853 Topology Model:

855 o The WSON Topology Model, defined in the "ietf-wson-topology" YANG
856 module of [RFC9094];

858 o the Flexi-grid Topology Model, defined in the "ietf-flexi-grid-
859 topology" YANG module of [Flexi-TOPO];

861 o the OTN Topology Model, as defined in the "ietf-otn-topology" YANG
862 module of [OTN-TOPO].

864 The optical PNC uses at least one of the following technology-
865 specific tunnel YANG data models, which augment the generic TE
866 Tunnel Model:

868 o The WSON Tunnel Model, defined in the "ietf-wson-tunnel" YANG
869 module of [WSON-TUNNEL];

871 o the Flexi-grid Tunnel Model, defined in the "ietf-flexi-grid-
872 tunnel" YANG module of [Flexi-TUNNEL];

874 o the OTN Tunnel Model, defined in the "ietf-otn-tunnel" YANG module
875 of [OTN-TUNNEL].

877 The optical PNC can optionally use the generic Path Computation YANG
878 RPC, defined in the "ietf-te-path-computation" YANG module of
879 [PATH-COMPUTE].

881 Note that technology-specific augmentations of the generic path
882 computation RPC for WSON, Flexi-grid and OTN path computation
883 have been identified as a gap.

885 The optical PNC uses the Ethernet Client Signal Model, defined in the
886 "ietf-eth-tran-service" YANG module of [CLIENT-SIGNAL].

888 3.2.3. YANG data models at the Packet MPIs

890 The Packet PNC also uses at least the following technology-specific
891 topology YANG data models:

893 o The L3 Topology Model, defined in the "ietf-l3-unicast-topology"
894 YANG module of [RFC8346], which augments the Base Network Topology
895 Model;

897 o the L3 specific data model including extended TE attributes (e.g.
898 performance-derived metrics like latency), defined in the "ietf-l3-te-
899 topology" and "ietf-te-topology-packet" YANG modules of [L3-TE-
900 TOPO];

902 o the SR Topology Model, defined in the "ietf-sr-mpls-topology" YANG
903 module of [SR-TE-TOPO].

905 The need for and applicability of the "ietf-l3-te-topology" model in
906 this scenario still needs to be checked, since it is not described in
[SR-TE-TOPO].

908 The packet PNC uses at least the following YANG data models:

910 o L3VPN Network Model (L3NM), defined in the "ietf-l3vpn-ntw" YANG
911 module of [RFC9182];

913 o L3NM TE Service Mapping, defined in the "ietf-l3nm-te-service-
914 mapping" YANG module of [TSM];

916 o L2VPN Network Model (L2NM), defined in the "ietf-l2vpn-ntw" YANG
917 module of [L2NM];

919 o L2NM TE Service Mapping, defined in the "ietf-l2nm-te-service-
920 mapping" YANG module of [TSM].

922 3.3. PCEP

924 [RFC8637] examines the applicability of a Path Computation Element
925 (PCE) [RFC5440] and the PCE Communication Protocol (PCEP) to the ACTN
926 framework. It further describes how the PCE architecture applies to
927 ACTN and lists the PCEP extensions that are needed to use PCEP as an
928 ACTN interface. The stateful PCE [RFC8231], PCE-Initiation
929 [RFC8281], stateful Hierarchical PCE (H-PCE) [RFC8751], and PCE as a
930 central controller (PCECC) [RFC8283] are some of the key extensions
931 that enable the use of PCE/PCEP for ACTN.

933 Since PCEP supports path computation in both packet and optical
934 networks, it is well suited for inter-layer path computation.
935 [RFC5623] describes a framework for applying the PCE-based
936 architecture to interlayer (G)MPLS traffic engineering. Furthermore,
937 section 6.1 of [RFC8751] describes the applicability of H-PCE to
938 inter-layer path computation and POI.

940 [RFC8637] lists various PCEP extensions that apply to ACTN. It also
941 lists the PCEP extensions for optical networks and POI.
943 Note that PCEP can be used in conjunction with the YANG data
944 models described in the rest of this document. Depending on whether
945 ACTN is deployed in a greenfield or brownfield scenario, two options
946 are possible:

948 1. The MDSC uses a single RESTCONF/YANG interface towards each PNC to
949 discover all the TE information and request TE tunnels. It may
950 either perform full multi-layer path computation or delegate path
951 computation to the underlying PNCs.

953 This approach is desirable for operators from a multi-vendor
954 integration perspective as it is simple: only one type of
955 interface (RESTCONF) is needed, used with the relevant YANG data
956 models depending on the operator use case considered. The benefits
957 of having only one protocol for the MPI between the MDSC and the
958 PNCs have already been highlighted in [PATH-COMPUTE].

960 2. The MDSC uses the RESTCONF/YANG interface towards each PNC to
961 discover all the TE information and request the creation of TE
962 tunnels, but it uses PCEP for hierarchical path computation.

964 In contrast with option 1, from an operator perspective, this
965 option adds the integration complexity of having two protocols
966 instead of one, unless the RESTCONF/YANG interface is added to an
967 existing PCEP deployment (brownfield scenario).

969 Sections 4 and 5 of this document analyse the case where a single
970 RESTCONF/YANG interface is deployed at the MPI (i.e., option 1
971 above).

973 4.
Inventory, service and network topology discovery

975 In this scenario, the MDSC needs to discover, through the underlying
976 PNCs:

978 o the network topology, at both optical and IP layers, in terms of
979 nodes and links, including the access links, inter-domain IP links
980 as well as cross-layer links;

982 o the optical tunnels supporting multi-layer intra-domain IP links;

984 o both intra-domain and inter-domain L2/L3 VPN network services
985 deployed within the network;

987 o the SR-TE paths supporting those L2/L3 VPN network services;

989 o the hardware inventory information of IP and optical equipment.

991 The O-PNC and P-PNC could discover and report the hardware network
992 inventory information of their equipment that is used by the
993 different management layers. In the context of POI, the inventory
994 information of IP and optical equipment can complement the topology
995 views and facilitate the packet/optical multi-layer view, e.g., by
996 providing a mapping between the lowest-level LTPs in the topology
997 view and the corresponding physical ports in the network inventory view.

999 The MDSC could also discover the entire network inventory information
1000 of both IP and optical equipment and correlate this information with
1001 the links reported in the network topology.

1003 Reporting the entire inventory and detailed topology information of
1004 packet and optical networks to the MDSC may present, as a potential
1005 drawback, scalability issues. The analysis of the scalability of this
1006 approach and mechanisms to address potential issues is outside the
1007 scope of this document.

1009 Each PNC provides to the MDSC the topology view of the domain it
1010 controls, as described in sections 4.1 and 4.3. The MDSC uses this
1011 information to discover the complete topology view of the multi-layer
1012 multi-domain network it controls.
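The LTP-to-physical-port correlation mentioned above can be sketched
as follows. The flat record layouts are purely hypothetical, since no
IETF YANG inventory model exists yet (as discussed in section 4.9).

```python
# Hypothetical sketch of the correlation between lowest-level LTPs in
# the topology view and physical ports in the inventory view. The
# record layouts are invented for illustration; they are not IETF YANG
# structures.


def correlate_ltps_with_inventory(ltp_to_port, inventory_ports):
    """Map each lowest-level LTP onto its physical port record."""
    ports_by_name = {port["name"]: port for port in inventory_ports}
    # An LTP with no matching inventory record maps to None.
    return {ltp: ports_by_name.get(port_name)
            for ltp, port_name in ltp_to_port.items()}
```

For example, correlating the (hypothetical) LTP "PE13:5-0" with an
inventory record for port "1/1/1" would let the MDSC attach serial
number and shelf/slot details to the topology view.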
1014 The MDSC should also maintain up-to-date inventory, service and
1015 network topology databases of both IP and optical layers, through the
1016 use of IETF notifications received from the PNCs over the MPIs when
1017 any network inventory/topology/service change occurs.

1019 It should also be possible to correlate information coming from IP
1020 and optical layers (e.g., which port, lambda/OTSi, and direction is
1021 used by a specific IP service on the WDM equipment).

1023 In particular, for the cross-layer links, it is key for the MDSC to
1024 automatically correlate the information from the PNC network
1025 databases about the physical ports of the routers (single links or
1026 bundled links for LAG) with the client ports of the ROADMs.

1028 The analysis of multi-layer fault management is outside the scope of
1029 this document. However, the discovered information should be
1030 sufficient for the MDSC to easily correlate optical and IP layer
1031 alarms to speed up troubleshooting.

1033 Alarms and event notifications are required between the MDSC and the
1034 PNCs so that any network changes are reported almost in real-time to
1035 the MDSC (e.g., NE or link failure). As specified in [RFC7923], the
1036 MDSC must subscribe to specific objects of the PNC YANG datastores for
1037 notifications.

1039 4.1. Optical topology discovery

1041 The WSON Topology Model or, alternatively, the Flexi-grid Topology
1042 Model is used to report the DWDM network topology (e.g., ROADM nodes
1043 and links), depending on whether the DWDM optical network is based on
1044 fixed-grid or flexible-grid.

1046 The OTN Topology Model is used to report the OTN network topology
1047 (e.g., OTN switching nodes and links), when the OTN switching layer
1048 is deployed within the optical domain.
1050 In order to allow the MDSC to discover the complete multi-layer and
1051 multi-domain network topology and to correlate it with the hardware
1052 inventory information, the O-PNCs report an abstract optical network
1053 topology where:

1055 o one TE node is reported for each optical NE deployed within the
1056 optical network domain; and

1058 o one TE link is reported for each OMS link and, optionally, for
1059 each OTN link.

1061 The Ethernet Topology Model is used to report the Ethernet client
1062 LTPs that terminate the cross-layer links: one Ethernet client LTP is
1063 reported for each Ethernet client interface on the optical NEs.

1065 Since the MDSC delegates optical path computation to its underlay O-
1066 PNCs, the following information can be abstracted and not reported at
1067 the MPI:

1069 o the optical parameters required for optical path computation, such
1070 as those detailed in [OIA-TOPO];

1072 o the underlay OTS links and ILAs of OMS links;

1074 o the physical connectivity between the optical transponders and the
1075 ROADMs.

1077 The optical transponders and, optionally, the OTN access cards, are
1078 abstracted at the MPI by the O-PNC as Trail Termination Points (TTPs),
1079 defined in [RFC8795], within the optical network topology. This
1080 abstraction is valid regardless of whether the optical transponders
1081 are physically integrated within the same WDM node or are physically
1082 located on a device external to the WDM node, since in
1083 both cases the optical transponders and the WDM node are under the
1084 control of the same O-PNC.

1086 The association between the Ethernet LTPs terminating the Ethernet
1087 cross-layer links and the optical TTPs is reported using the Inter
1088 Layer Lock (ILL) identifiers, defined in [RFC8795].
1090 All the optical links are intra-domain and they are discovered by O- 1091 PNCs, using mechanisms which are outside the scope of this document, 1092 and reported at the MPIs within the optical network topology. 1094 In case of a multi-layer DWDM/OTN network domain, multi-layer intra- 1095 domain OTN links are supported by underlay DWDM tunnels, which can be 1096 either WSON tunnels or, alternatively, Flexi-grid tunnels, depending 1097 on whether the DWDM optical network is based on fixed grid or 1098 flexible-grid. This relationship is reported by the mechanisms 1099 described in section 4.2. 1101 4.2. Optical path discovery 1103 The WSON Tunnel Model or, alternatively, the Flexi-grid Tunnel model, 1104 depending on whether the DWDM optical network is based on fixed grid 1105 or flexible-grid, is used to report all the DWDM tunnels established 1106 within the optical network. 1108 When the OTN switching layer is deployed within the optical domain, 1109 the OTN Tunnel Model is used to report all the OTN tunnels 1110 established within the optical network. 1112 The Ethernet client signal Model is used to report all the Ethernet 1113 connectivity provided by the underlay optical tunnels between 1114 Ethernet client LTPs. The underlay optical tunnels can be either DWDM 1115 tunnels or, when the optional OTN switching layer is deployed, OTN 1116 tunnels. 1118 The DWDM tunnels can be used as underlay tunnels to support either 1119 Ethernet client connectivity or multi-layer intra-domain OTN links. 1120 In the latter case, the hierarchical-link container, defined in [TE- 1121 TUNNEL], is used to reference which multi-layer intra-domain OTN 1122 links are supported by the underlay DWDM tunnels. 
1124 The O-PNCs report in their operational datastores all the Ethernet
1125 client connectivities and all the optical tunnels deployed within
1126 their optical domain, regardless of the mechanisms used to set
1127 them up, such as the mechanisms described in section 5.2 as well as
1128 other mechanisms (e.g., static configuration) which are outside the
1129 scope of this document.

1131 4.3. Packet topology discovery

1133 The L3 Topology Model, SR Topology Model, TE Topology Model and the
1134 TE Packet Topology Model are used together to report the SR-TE
1135 network topology, as described in figure 2 of [SR-TE-TOPO].

1137 In order to allow the MDSC to discover the complete multi-layer and
1138 multi-domain network topology, to correlate it with the hardware
1139 inventory information and to perform multi-domain SR-TE path
1140 computation, the P-PNCs report the full SR-TE network topology,
1141 including all the information that is required by the MDSC to perform
1142 SR-TE path computation. In particular, one TE node is reported for
1143 each router and one TE link is reported for each intra-domain IP link.
1144 The SR-TE topology also reports the IP LTPs terminating the
1145 inter-domain IP links.

1147 All the intra-domain IP links are discovered by the P-PNCs, using
1148 mechanisms, such as LLDP [IEEE 802.1AB], which are outside the scope
1149 of this document, and reported at the MPIs within the SR-TE network
1150 topology.

1152 The Ethernet Topology Model is used to report the intra-domain
1153 Ethernet links supporting the intra-domain IP links as well as the
1154 Ethernet LTPs that might terminate cross-layer links, inter-domain
1155 Ethernet links or access links, as described in detail in sections 4.5
1156 and 4.6.

1158 4.4.
SR-TE path discovery

1160 This version of the draft assumes that discovery of existing SR-TE
1161 paths, including their bandwidth, at the MPI is done using the
1162 generic TE tunnel YANG data model, defined in [TE-TUNNEL], with SR-TE
1163 specific augmentations, as outlined in section 1 of [TE-TUNNEL].

1165 Note that technology-specific augmentations of the generic TE
1166 tunnel model for SR-TE path setup and discovery have been identified
1167 as a gap.

1169 To enable the MDSC to discover the full end-to-end SR-TE path
1170 configuration, the SR-TE specific augmentation of the [TE-TUNNEL]
1171 model should allow the P-PNC to report the SID list assigned to an
1172 SR-TE path within its domain.

1174 For example, considering the L3VPN in Figure 2, the PE13-P16-PE14 SR-
1175 TE path and the SR-TE path in the reverse direction (between PE14 and
1176 PE13) could be reported by P-PNC1 to the MDSC as TE paths of the
1177 same TE tunnel instance. The bandwidth of these TE paths represents
1178 the bandwidth allocated by P-PNC1 to the two SR-TE paths, which can be
1179 symmetric or asymmetric in the two directions.

1181 The P-PNCs use the TE tunnel model to report, at the MPI, all the SR-
1182 TE paths established within their packet domain regardless of the
1183 mechanism being used to set them up. In other words, the TE tunnel
1184 data model reports within the operational datastore both the SR-TE
1185 paths set up by the MDSC at the MPI, using the mechanisms
1186 described in section 5.3, as well as the SR-TE paths set up by
1187 other means, such as static configuration, which are outside the
1188 scope of this document.

1190 4.5.
Inter-domain link discovery

1192 In the reference network of Figure 1, there are three types of
1193 inter-domain links:

1195 o Inter-domain Ethernet links supporting inter-domain IP links
1196 between two adjacent IP domains;

1198 o Cross-layer links between an IP domain and an adjacent optical
1199 domain;

1201 o Access links between a CE device and a PE router.

1203 All three types of links are Ethernet links.

1205 It is worth noting that the P-PNC may not be aware whether an
1206 Ethernet interface terminates a cross-layer link, an inter-domain
1207 Ethernet link or an access link.

1209 It is not yet clear which model should be used to report the access
1210 links between CEs and PEs (e.g., the Ethernet Topology Model
1211 defined in [CLIENT-TOPO] or the SAP Model defined in
1212 [SAP]). This has been identified as a gap.

1214 The inter-domain Ethernet links and cross-layer links are discovered
1215 by the MDSC using the plug-id attribute, as described in section 4.3
1216 of [RFC8795].

1218 A more detailed description of how the plug-id can be used to discover
1219 inter-domain links is also provided in section 5.1.4 of [TNBI].

1221 This document considers the following two options for discovering
1222 inter-domain links:

1224 1. Static configuration;

1226 2. LLDP [IEEE 802.1AB] automatic discovery.

1228 Other options are possible but not described in this document.

1230 As outlined in [TNBI], the encoding of the plug-id namespace and the
1231 specific LLDP information reported within the plug-id value, such as
1232 the Chassis ID and Port ID mandatory TLVs, is implementation-specific
1233 and needs to be consistent across all the PNCs within the network.

1235 Static configuration requires an administrative burden to
1236 configure network-wide unique identifiers: it is therefore more
1237 viable for inter-domain Ethernet links.
For the cross-layer links, the
1238 automatic discovery solution based on LLDP snooping is preferable
1239 when possible.

1241 The routers exchange standard LLDP packets as defined in [IEEE
1242 802.1AB] and the optical NEs snoop the LLDP packets received from the
1243 local Ethernet interface and report to the O-PNCs the extracted
1244 information, such as the Chassis ID, Port ID and System Name TLVs.

1246 Note that the optical NEs do not actively participate in the LLDP
1247 packet exchange and do not send any LLDP packets.

1249 4.5.1. Cross-layer link discovery

1251 The MDSC can discover a cross-layer link by matching the plug-id
1252 values of the two Ethernet LTPs reported by the two adjacent O-PNC and
1253 P-PNC: in case LLDP snooping is used, the P-PNC reports the LLDP
1254 information sent by the corresponding Ethernet interface on the
1255 router while the O-PNC reports the LLDP information received by the
1256 corresponding Ethernet interface on the optical NE, e.g., between LTP
1257 5-0 on PE13 and LTP 7-0 on NE11, as shown in Figure 4.
1259 +-----------------------------------------------------------+
1260 / Ethernet Topology (P-PNC) /
1261 / +-------------+ +-------------+ /
1262 / | PE13 | | BR11 | /
1263 / | (5-1)O O(6-1) | /
1264 / | (5-0) |\ /| (6-0) | /
1265 / +------O------+|(*) (*)|+------O------+ /
1266 / {PE13,5} ^\<-----+ +----->/^ {BR11,6} /
1267 +-----------------:------------------------------:----------+
1268 : :
1269 : :
1270 : :
1271 : :
1272 +--------:------------------------------:------------------+
1273 / : : /
1274 / {PE13,5} v v {BR11,6} /
1275 / +------O------+ +------O------+ /
1276 / | (7-0) | | (8-0) | /
1277 / | | | | /
1278 / | NE11 | | NE12 | /
1279 / +-------------+ +-------------+ /
1280 / Ethernet Topology (O-PNC) /
1281 +----------------------------------------------------------+

1283 Notes:
1284 =====
1285 (*) Supporting LTP

1287 Legend:
1288 ========
1289 O LTP
1290 ----> Supporting LTP
1291 <...> Link discovered by the MDSC
1292 { } LTP Plug-id reported by the PNC

1294 Figure 4 - Cross-layer link discovery

1296 It is worth noting that the discovery of cross-layer links is based
1297 only on the LLDP information sent by the Ethernet interfaces of the
1298 routers and received by the Ethernet interfaces of the optical NEs.
1299 Therefore, the MDSC can discover these links even before overlay
1300 multi-layer IP links are set up.

1302 4.5.2. Inter-domain IP link discovery

1304 The MDSC can discover an inter-domain Ethernet link, which supports an
1305 inter-domain IP link, by matching the plug-id values of the two
1306 Ethernet LTPs reported by the two adjacent P-PNCs: the two P-PNCs
1307 report the LLDP information being sent and received from the
1308 corresponding Ethernet interfaces, e.g., between the Ethernet LTP 3-1
1309 on BR11 and the Ethernet LTP 4-1 on BR21 shown in Figure 5.
1311 +--------------------------+ +-------------------------+
1312 / IP Topology (P-PNC 1) / / IP Topology (P-PNC 2) /
1313 / +-------------+ / / +-------------+ /
1314 / | BR11 | / / | BR21 | /
1315 / | (3-2)O<................>O(4-2) | /
1316 / | |\ / / /| | /
1317 / +-------------+| / / |+-------------+ /
1318 / | / / | /
1319 +------------------------|-+ +-------------------------+
1320 | |
1321 Supporting LTP | | Supporting LTP
1322 | |
1323 | |
1324 +--------------|----------+ +|------------------------+
1325 / V / / V /
1326 / +-------------+/ / / \+-------------+ /
1327 / | {1}(3-1)O.................>O(4-1){1} | /
1328 / | |\ / / /| | /
1329 / | BR11 |V(*) / / (*)V| BR21 | /
1330 / | |/ / / \| | /
1331 / | {2}(3-0)O<~~~~~~~~~~~~~~~~>O(4-0){2} | /
1332 / +-------------+ / / +-------------+ /
1333 / Eth. Topology (P-PNC 1) / / Eth. Topology (P-PNC 2) /
1334 +-------------------------+ +-------------------------+

1336 Notes:
1337 =====
1338 (*) Supporting LTP
1339 {1} {BR11,3,BR21,4}
1340 {2} {BR11,3}

1342 Legend:
1343 ========
1344 O LTP
1345 ----> Supporting LTP
1346 <...> Link discovered by the MDSC
1347 <~~~> Link inferred by the MDSC
1348 { } LTP Plug-id reported by the PNC

1350 Figure 5 - Inter-domain Ethernet and IP link discovery

1352 Different information is required to be encoded within the plug-id
1353 attribute of the Ethernet LTPs to discover cross-layer links and
1354 inter-domain Ethernet links.
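The plug-id matching performed by the MDSC can be sketched as follows.
This is a simplified illustration: the encoding of a plug-id as a
plain tuple is hypothetical, since real encodings carry LLDP Chassis
ID / Port ID TLVs and are implementation specific.

```python
# Simplified sketch of the MDSC matching the plug-id values reported
# by two PNCs to discover cross-layer or inter-domain Ethernet links.
# LTP names and plug-id encodings below are invented for illustration.


def discover_links(ltps_domain_a, ltps_domain_b):
    """Return (ltp_a, ltp_b) name pairs whose plug-id values match."""
    by_plug_id = {ltp["plug-id"]: ltp for ltp in ltps_domain_a}
    links = []
    for ltp in ltps_domain_b:
        peer = by_plug_id.get(ltp["plug-id"])
        if peer is not None:
            links.append((peer["name"], ltp["name"]))
    return links


# Cross-layer case of Figure 4: the P-PNC reports the LLDP information
# sent by PE13 and the O-PNC reports the LLDP information received by
# NE11, so the two plug-id values coincide.
p_pnc_ltps = [{"name": "PE13:5-0", "plug-id": ("PE13", "5")}]
o_pnc_ltps = [{"name": "NE11:7-0", "plug-id": ("PE13", "5")}]
```

The same matching logic applies to the inter-domain case of Figure 5,
where both P-PNCs encode the sent and received LLDP information in the
plug-id of the logical Ethernet LTPs.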
1356 If the P-PNC does not know a priori whether an Ethernet interface on
1357 a router terminates a cross-layer link or an inter-domain Ethernet
1358 link, it has to report at the MPI two Ethernet LTPs representing the
1359 same Ethernet interface, e.g., both the Ethernet LTP 3-0 and the
1360 Ethernet LTP 3-1, supported by LTP 3-0, shown in Figure 5:

1362 o The physical Ethernet LTP is used to represent the physical
1363 adjacency between the router Ethernet interface and either the
1364 adjacent router Ethernet interface (in case of a single-layer
1365 Ethernet link) or the optical NE Ethernet interface (in case of a
1366 multi-layer Ethernet link). Therefore, this LTP reports, within
1367 the plug-id attribute, the LLDP information sent by the
1368 corresponding router Ethernet interface;

1370 o The logical Ethernet LTP, supported by a physical Ethernet LTP, is
1371 used to discover the logical adjacency between router Ethernet
1372 interfaces, which can be either single-layer or multi-layer.
1373 Therefore, this LTP reports, within the plug-id attribute, the
1374 LLDP information sent and received by the corresponding router
1375 Ethernet interface.

1377 It is worth noting that in case of an inter-domain single-layer
1378 Ethernet link, the physical adjacency between the two router Ethernet
1379 interfaces cannot be discovered by the MDSC using the LLDP
1380 information reported in the plug-id attributes, as shown in Figure 5.
1381 However, the MDSC may infer these links if it knows a priori, using
1382 mechanisms which are outside the scope of this document, that inter-
1383 domain Ethernet links are always single-layer, e.g., as shown in
1384 Figure 5.

1386 The P-PNC can omit reporting the physical Ethernet LTPs when it
1387 knows, by mechanisms which are outside the scope of this document,
1388 that the corresponding router Ethernet interfaces terminate single-
1389 layer inter-domain Ethernet links.
1391 The MDSC can then discover an inter-domain IP link between the two IP
1392 LTPs that are supported by the two Ethernet LTPs terminating an
1393 inter-domain Ethernet link, discovered as described in section 4.5.2,
1394 e.g., between the IP LTP 3-2 on BR11 and the IP LTP 4-2 on BR21,
1395 supported respectively by the Ethernet LTP 3-1 on BR11 and by the
1396 Ethernet LTP 4-1 on BR21, as shown in Figure 5.

1398 4.6. Multi-layer IP link discovery

1400 A multi-layer intra-domain IP link and its supporting multi-layer
1401 intra-domain Ethernet link are discovered by the P-PNC like any other
1402 intra-domain IP and Ethernet links, as described in section 4.3, and
1403 reported at the MPI within the SR-TE and Ethernet network topologies,
1404 e.g., as shown in Figure 6.

1406 +-----------------------------------------------------------+
1407 / IP Topology (P-PNC 1) /
1408 / +---------+ +---------+ /
1409 / | PE13 | | BR11 | /
1410 / | (5-2)O<======================>O(6-2) | /
1411 / | | | | | /
1412 / +---------+ | +---------+ /
1413 / | /
1414 +-----------------------------------|-----------------------+
1415 |
1416 | Supporting Link
1417 |
1418 +---------------------------|-------------------------------+
1419 / Ethernet Topology (P-PNC1) | /
1420 / +-------------+ | +-------------+ /
1421 / | PE13 | V | BR11 | /
1422 / | (5-1)O<==============>O(6-1) | /
1423 / | (5-0) |\ /| (6-0) | /
1424 / +------O------+|(*) (*)|+------O------+ /
1425 / ^ \<----+ +----->/^ /
1426 +-----------------:------------------------------:----------+
1427 : :
1428 : :
1429 : :
1430 +---------:------------------------------:------------------+
1431 / V Ethernet Topology (O-PNC 1) V /
1432 / +------O------+ +------O------+ /
1433 / | (7-0) |Eth.
client sig.| (8-0) | /
1434 / | X----------+-------------------X | /
1435 / | NE11 | | | NE12 | /
1436 / +-------------+ | +-------------+ /
1437 / | /
1438 +----------------------------|------------------------------+
1439 | Underlay
1440 | tunnel
1441 |
1442 +-----------------------------------------------------------+
1443 / __ | __ /
1444 / +-----\/------+ v +------\/-----+ /
1445 / | X======|================|======X | /
1446 / | NE11 | Opt. Tunnel | NE12 | /
1447 / | | | | /
1448 / +-------------+ +-------------+ /
1449 / Optical Topology (O-PNC 1) /
1450 +-----------------------------------------------------------+
1451 Notes:
1452 =====
1453 (*) Supporting LTP

1455 Legend:
1456 ========
1457 O LTP
1458 ----> Supporting LTP or Supporting Link or Underlay tunnel
1459 <===> Link discovered by the PNC and reported at the MPI
1460 <...> Link discovered by the MDSC
1461 <~~~> Link inferred by the MDSC
1462 x---x Ethernet client signal
1463 X===X Optical tunnel

1465 Figure 6 - Multi-layer intra-domain Ethernet and IP link discovery

1467 The P-PNC does not report any plug-id information on the Ethernet
1468 LTPs terminating intra-domain Ethernet links since these links are
1469 discovered by the P-PNC.

1471 In addition, the P-PNC also reports the physical Ethernet LTPs that
1472 terminate the cross-layer links supporting the multi-layer intra-
1473 domain Ethernet links, e.g., the Ethernet LTP 5-0 on PE13 and the
1474 Ethernet LTP 6-0 on BR11, shown in Figure 6.

1476 The MDSC discovers, using the mechanisms described in section 4.5,
1477 which Ethernet cross-layer links support the multi-layer intra-domain
1478 Ethernet links, e.g. as shown in Figure 6.

1480 The MDSC also discovers, from the information provided by the O-PNC
1481 and described in section 4.2, which optical tunnels support the
1482 multi-layer intra-domain IP links and therefore the path within the
1483 optical network that supports a multi-layer intra-domain IP link,
1484 e.g., as shown in Figure 6.
1486 4.6.1. Single-layer intra-domain IP links
1488 It is worth noting that the P-PNC may not be aware of whether an
1489 Ethernet interface on the router terminates a multi-layer or a
1490 single-layer intra-domain Ethernet link.
1492 In this case, the P-PNC always reports two Ethernet LTPs for each
1493 Ethernet interface on the router, e.g., the Ethernet LTPs 1-0 and 1-1
1494 on PE13, shown in Figure 7.
1496 +-----------------------------------------------------------+
1497 / IP Topology (P-PNC 1) /
1498 / +---------+ +---------+ /
1499 / | PE13 | | P16 | /
1500 / | (1-2)O<======================>O(2-2) | /
1501 / | | | | | /
1502 / +---------+ | +---------+ /
1503 / | /
1504 +---------------------------------|-------------------------+
1505 |
1506 | Supporting Link
1507 |
1508 |
1509 +------------------------|--------------------------------+
1510 / | /
1511 / +---------+ v +---------+ /
1512 / | (1-1)O<======================>O(2-1) | /
1513 / | |\ /| | /
1514 / | PE13 |V(*) (*)V| P16 | /
1515 / | |/ \| | /
1516 / | {1}(1-0)O<~~~~~~~~~~~~~~~~~~~~~~>O(2-0){2} | /
1517 / +---------+ +---------+ /
1518 / Ethernet Topology (P-PNC 1) /
1519 +---------------------------------------------------------+
1521 Notes:
1522 =====
1523 (*) Supporting LTP
1524 {1} {PE13,1}
1525 {2} {P16,2}
1527 Legend:
1528 =======
1529 O LTP
1530 ----> Supporting LTP
1531 <===> Link discovered by the PNC and reported at the MPI
1532 <~~~> Link inferred by the MDSC
1533 { } LTP Plug-id reported by the PNC
1535 Figure 7 - Single-layer intra-domain Ethernet and IP link discovery
1537 In this case, the MDSC, using the plug-id information reported in the
1538 physical Ethernet LTPs, does not discover any cross-layer link being
1539 terminated by the corresponding Ethernet interface.
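A minimal sketch of the plug-id matching logic an MDSC might apply (the encoding of the plug-id values is illustrative, not normative): LTPs reported with the same plug-id value are paired into an inferred link, while LTPs whose plug-id matches no other LTP, as in Figure 7, yield no discovered cross-layer link:

```python
# Sketch of plug-id based link inference at the MDSC (section 4.5);
# the plug-id encodings below are illustrative only.
from collections import defaultdict

def infer_links(ltps):
    """ltps: list of (node, ltp-id, plug-id) tuples reported by the PNCs.
    Returns (inferred link pairs, LTPs whose plug-id matched nothing)."""
    by_plug = defaultdict(list)
    for node, ltp, plug in ltps:
        if plug is not None:
            by_plug[plug].append((node, ltp))
    links, unmatched = [], []
    for plug, ends in by_plug.items():
        if len(ends) == 2:          # two LTPs share a plug-id: inferred link
            links.append(tuple(ends))
        else:                        # Figure 7 case: no matching peer
            unmatched.extend(ends)
    return links, unmatched

# Figure 7: PE13 LTP 1-0 reports {PE13,1} and P16 LTP 2-0 reports
# {P16,2}; the values differ, so no cross-layer link is discovered.
links, unmatched = infer_links([
    ("PE13", "1-0", "{PE13,1}"),
    ("P16", "2-0", "{P16,2}"),
])
print(links)      # []
print(unmatched)  # [('PE13', '1-0'), ('P16', '2-0')]
```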
The MDSC may
1540 infer the physical intra-domain Ethernet link, e.g., between LTP 1-0
1541 on PE13 and LTP 2-0 on P16, as shown in Figure 7, if it knows a
1542 priori, by mechanisms which are outside the scope of this document,
1543 that all the Ethernet interfaces on the routers terminate either a
1544 cross-layer link or a single-layer intra-domain Ethernet link.
1546 The P-PNC can omit reporting the physical Ethernet LTP if it knows,
1547 by mechanisms which are outside the scope of this document, that the
1548 intra-domain Ethernet link is single-layer.
1550 4.7. LAG discovery
1552 TBA
1554 4.8. L2/L3 VPN network services discovery
1556 TBA
1558 4.9. Inventory discovery
1560 There are no YANG data models in the IETF that could be used to report
1561 at the MPI the whole inventory information discovered by a PNC.
1563 [RFC8345] foresaw some work for inventory as an augmentation of
1564 the network model, but no YANG data model has been developed so far.
1566 There are also no YANG data models in the IETF that could be used to
1567 correlate topology information, e.g., a link termination point (LTP),
1568 with inventory information, e.g., the physical port supporting an
1569 LTP, if any.
1571 Reporting inventory information through the MPI and correlating it
1572 with topology information is identified as a gap requiring further
1573 work, which is outside the scope of this draft.
1575 5. Establishment of L2/L3 VPN network services with TE requirements
1577 In this scenario the MDSC needs to set up a multi-domain L2VPN or a
1578 multi-domain L3VPN with some SLA requirements.
1580 The MDSC receives the request to set up an L2/L3 VPN network service
1581 from the OSS/Orchestration layer (see Appendix A).
1583 The MDSC translates the L2/L3 VPN SLA requirements into TE
1584 requirements (e.g., bandwidth, TE metric bounds, SRLG disjointness,
1585 nodes/links/domains inclusion/exclusion) and finds the SR-TE paths
1586 that meet these TE requirements (see section 2.1.1).
1588 For example, considering the L3VPN in Figure 2, the MDSC finds that:
1590 o a PE13-P16-PE14 SR-TE path already exists but does not have enough
1591 bandwidth to support the new L3VPN, as described in section 4.4;
1593 o the IP link(s) between P16 and PE14 do not have enough bandwidth
1594 to support increasing the bandwidth of that SR-TE path, as described
1595 in section 4.3;
1597 o a new underlay optical tunnel could be set up to increase the
1598 bandwidth of the IP link(s) between P16 and PE14 to support increasing
1599 the bandwidth of that overlay SR-TE path, as described in section
1600 5.2. The dimensioning of the underlay optical tunnel is decided by
1601 the MDSC based on the bandwidth requested by the SR-TE path and on
1602 its multi-layer optimization policy, which is an internal MDSC
1603 implementation issue.
1605 Considering for example the L3VPN in Figure 2, the MDSC can also
1606 decide that a new multi-domain SR-TE path needs to be set up between
1607 PE13 and PE23, e.g., either because the existing SR-TE paths between
1608 PE13 and PE23 are not able to meet the TE and binding requirements of
1609 the L2/L3 VPN service or because there is no SR-TE path between PE13
1610 and PE23.
1612 As described in section 2.1.2, with partial summarization, the MDSC
1613 will use the TE topology information provided by the P-PNCs and the
1614 results of the path computation requests sent to the O-PNCs, as
1615 described in section 5.1, to compute the multi-layer/multi-domain
1616 path between PE13 and PE23.
1618 For example, the multi-layer/multi-domain path computation performed
1619 by the MDSC could require the setup of:
1621 o a new underlay optical tunnel between PE13 and BR11, supporting a
1622 new IP link, as described in section 5.2;
1624 o a new underlay optical tunnel between BR21 and P24 to increase the
1625 bandwidth of the IP link(s) between BR21 and P24, as described in
1626 section 5.2.
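The bandwidth checks in the example above can be sketched with illustrative logic and numbers; the real decision is an internal MDSC multi-layer optimization policy, as noted above:

```python
# Illustrative decision logic for the checks in the Figure 2 example:
# all bandwidth figures (Gbps) and action strings are hypothetical.

def plan_for_demand(path_avail_bw, link_headroom, demand):
    """Return the list of actions the MDSC would take for a new VPN demand."""
    if path_avail_bw >= demand:
        # Section 4.4 check: the existing SR-TE path can carry the demand.
        return ["reuse existing SR-TE path"]
    extra = demand - path_avail_bw
    if link_headroom >= extra:
        # Section 4.3 check: the underlay IP links can absorb the increase.
        return ["increase SR-TE path bandwidth over existing IP links"]
    # Section 5.2 case: add underlay optical capacity first; the tunnel
    # dimensioning itself is an MDSC multi-layer policy choice.
    return ["setup new optical tunnel to add IP link capacity",
            "increase SR-TE path bandwidth"]

# E.g., a 10 Gbps demand, 6 Gbps spare on the PE13-P16-PE14 path and
# only 1 Gbps headroom on the P16-PE14 IP link(s):
print(plan_for_demand(path_avail_bw=6, link_headroom=1, demand=10))
```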
1628 When the setup of the L2/L3 VPN network service requires multi-domain
1629 and multi-layer coordination, the MDSC is also responsible for
1630 coordinating the network configuration required to realize the
1631 requested network service across the appropriate optical and packet
1632 domains.
1634 The MDSC would therefore request:
1636 o the O-PNC1 to set up a new optical tunnel between the ROADMs
1637 connected to P16 and PE14, as described in section 5.2;
1639 o the P-PNC1 to update the configuration of the existing IP link, in
1640 case of LAG, or configure a new IP link, in case of ECMP, between
1641 P16 and PE14, as described in section 5.2;
1643 o the P-PNC1 to update the bandwidth of the selected SR-TE path
1644 between PE13 and PE14, as described in section 5.3.
1646 After that, the MDSC requests P-PNC2 to set up an SR-TE path between
1647 BR21 and PE23, with an explicit path (BR21, P24, PE23) to constrain
1648 this new SR-TE path to use the new underlay optical tunnel set up
1649 between BR21 and P24, as described in section 5.3. The P-PNC2,
1650 knowing the node and the adjacency SIDs assigned within its domain,
1651 can install the proper SR policy, or hierarchical policies, within
1652 BR21 and return to the MDSC the binding SID it has assigned to this
1653 policy in BR21.
1655 Then the MDSC requests P-PNC1 to set up an SR-TE path between PE13 and
1656 BR11, with an explicit path (PE13, BR11) to constrain this new SR-TE
1657 path to use the new underlay optical tunnel set up between PE13 and
1658 BR11, also specifying which inter-domain link should be used to send
1659 traffic to BR21 and the binding SID that has been assigned by P-PNC2
1660 to the corresponding SR policy in BR21, to be used for the end-to-end
1661 SR-TE path stitching, as described in section 5.3.
The P-PNC1,
1662 which knows the node and the adjacency SIDs assigned within its
1663 domain, the EPE SID it has assigned to the inter-domain link
1664 between BR11 and BR21, and the binding SID assigned by P-PNC2,
1665 installs the proper policy, or policies, within PE13.
1667 Once the SR-TE paths have been selected and, if needed,
1668 set up or modified, the MDSC can request both P-PNCs to configure the
1669 L3VPN and its binding with the selected SR-TE paths using the
1670 [RFC9182] and [TSM] YANG data models.
1672 [Editor's Note] Further investigation is needed to understand how the
1673 binding between a L3VPN and this new end-to-end SR-TE path can be
1674 configured.
1676 5.1. Optical Path Computation
1678 As described in section 2.1.2, the optical path computation is
1679 usually performed by the O-PNCs.
1681 When performing multi-layer/multi-domain path computation, the MDSC
1682 can delegate single-domain optical path computation to the O-PNCs.
1684 As discussed in [PATH-COMPUTE], there are two options to request an
1685 O-PNC to perform optical path computation: either via a "compute-
1686 only" TE tunnel path, using the generic TE tunnel YANG data model
1687 defined in [TE-TUNNEL], or via the path computation RPC defined in
1688 [PATH-COMPUTE].
1690 This draft assumes that the path computation RPC is used.
1692 As described in sections 4.1 and 4.5, there is a one-to-one
1693 relationship between the router ports, the cross-layer links and the
1694 optical TTPs. Therefore, the properties of an optical path between
1695 two optical TTPs, as computed by the O-PNC, can be used by the MDSC
1696 to infer the properties of the multi-layer single-domain IP link
1697 between the router ports associated with the two optical TTPs.
1699 There are no YANG data models in the IETF that could be used to
1700 augment the generic path computation RPC with technology-specific
attributes.
1702 Optical technology-specific augmentation for the path computation RPC
1703 is identified as a gap requiring further work outside of this draft's
1704 scope.
1706 5.2. Multi-layer IP link Setup
1708 To set up a new multi-layer IP link between two router ports, the MDSC
1709 requests the O-PNC to set up an optical tunnel (a WSON Tunnel,
1710 a Flexi-grid Tunnel or an OTN Tunnel) within the optical network
1711 between the two TTPs associated, as described in section 5.1, with
1712 these two router Ethernet interfaces.
1714 The MDSC also requests the O-PNC to steer the Ethernet client traffic
1715 between the two cross-layer links over the optical tunnel using the
1716 Ethernet Client Signal Model.
1718 After the optical tunnel has been set up and the client traffic
1719 steering configured, the two IP routers can exchange Ethernet packets
1720 between themselves, including LLDP messages.
1722 If LLDP [IEEE 802.1AB], or any other discovery mechanism which is
1723 outside the scope of this document, is used over the adjacency
1724 between the two routers' ports, the P-PNC can automatically discover
1725 the underlay multi-layer single-domain Ethernet link being set up by
1726 the MDSC and report it to the MDSC.
1728 Otherwise, if there are no automatic discovery mechanisms, the MDSC
1729 can configure this multi-layer single-domain Ethernet link at the MPI
1730 of the P-PNC.
1732 The two Ethernet LTPs terminating this multi-layer single-domain
1733 Ethernet link are supported by the two underlay Ethernet LTPs
1734 terminating the two cross-layer links, e.g., as shown in Figure 6.
1736 After the multi-layer single-domain Ethernet link has been
1737 configured, the corresponding multi-layer single-domain IP link can
1738 also be configured either by the MDSC or by the P-PNC.
1740 This document assumes that this IP link is configured by the P-PNC,
1741 when the underlying multi-layer single-domain Ethernet link is either
1742 discovered by the P-PNC or configured by the MDSC at the MPI.
1744 [Editor's Note] Add text for IP link update in case of LAG either
1745 here or in a new section.
1747 [Editor's Note] Add text about the configuration of multi-layer SRLG
1748 information (issue #45).
1750 It is worth noting that the list of SRLGs for a multi-layer IP link
1751 can be quite long. The MDSC or the O-PNC can implement mechanisms to
1752 summarize the SRLGs of an optical tunnel. These mechanisms are
1753 implementation-specific and have no impact on the YANG data models
1754 nor on the interoperability at the MPI, but care has to be taken to
1755 avoid losing information.
1757 5.3. SR-TE Path Setup and Update
1759 This version of the draft assumes that SR-TE path setup and update at
1760 the MPI could be done using the generic TE tunnel YANG data model,
1761 defined in [TE-TUNNEL], with SR-TE specific augmentations, as also
1762 outlined in section 1 of [TE-TUNNEL].
1764 When a new SR-TE path needs to be set up, the MDSC can use the [TE-
1765 TUNNEL] model to request the P-PNC to set up TE paths, properly
1766 specifying the path constraints, such as the explicit path, to force
1767 the P-PNC to set up an SR-TE path that meets the end-to-end TE and
1768 binding constraints and uses the optical tunnels set up by the MDSC
1769 for the purpose of supporting this new SR-TE path.
1771 The [TE-TUNNEL] model supports requesting the setup of both end-
1772 to-end and segment TE tunnels (within one domain).
1774 In the latter case, SR-TE specific augmentations of the [TE-TUNNEL]
1775 model should be defined to allow the MDSC to configure the binding
1776 SIDs to be used for the end-to-end SR-TE path stitching and to allow
1777 the P-PNC to report the binding SID assigned to the segment TE paths.
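The stitching role of the binding SID can be sketched as follows, with all SID values hypothetical: the label stack imposed at the head-end combines the intra-domain path in the first domain, the EPE SID of the inter-domain link, and the binding SID that the second P-PNC assigned to its segment TE path:

```python
# Sketch of the end-to-end SR-TE path stitching described in section 5
# (all SID values are hypothetical, for illustration only).

def stitch_sid_list(domain1_sids, epe_sid, domain2_binding_sid):
    """Compose the SID list installed at the head-end (e.g., PE13)."""
    return domain1_sids + [epe_sid, domain2_binding_sid]

sid_list = stitch_sid_list(
    domain1_sids=[16011],       # e.g., node SID of BR11 (hypothetical)
    epe_sid=24001,              # EPE SID of the BR11->BR21 link (hypothetical)
    domain2_binding_sid=15999,  # binding SID returned by P-PNC2 (hypothetical)
)
print(sid_list)  # [16011, 24001, 15999]
```

Packets steered on this policy reach BR21 via the EPE SID, where the binding SID expands into the (BR21, P24, PE23) segment installed by P-PNC2.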
1779 The assigned binding SID should be persistent in case of router or
1780 P-PNC reboot.
1782 The MDSC can also use the [TE-TUNNEL] model to request the P-PNC to
1783 increase the bandwidth allocated to an existing TE path and, if
1784 needed, also on its reverse TE path. The [TE-TUNNEL] model supports
1785 both symmetric and asymmetric bandwidth configuration in the two
1786 directions.
1788 [Editor's Note:] Add some text about the protection options (to
1789 further discuss whether to put this text here or in section 4.2.2).
1791 The MDSC also requests the P-PNC to configure TI-LFA local protection:
1792 the mechanisms to request the configuration of TI-LFA local protection
1793 for SR-TE paths using the [TE-TUNNEL] model are a gap in the current
1794 YANG models.
1796 The TI-LFA local protection within the P-PNC domain is configured by
1797 the P-PNC through implementation-specific mechanisms which are
1798 outside the scope of this document. The P-PNC takes into account the
1799 multi-layer SRLG information, configured by the MDSC as described in
1800 section 5.2, when computing the TI-LFA post-convergence path for
1801 multi-layer single-domain IP links.
1803 SR-TE path setup and update (e.g., bandwidth increase) through the MPI
1804 is identified as a gap requiring further work, which is outside of the
1805 scope of this draft.
1807 6. Conclusions
1809 The analysis provided in this document has shown that the IETF YANG
1810 models described in section 3.2 provide useful support for Packet
1811 Optical Integration (POI) scenarios for resource discovery (network
1812 topology, service, tunnels and network inventory discovery) as well
1813 as for supporting multi-layer/multi-domain L2/L3 VPN network services.
1815 A few gaps have been identified to be addressed by the relevant IETF
1816 Working Groups:
1818 o network inventory model: this gap has been identified in section
1819 4.9 and the solution in [NETWORK-INVENTORY] has been proposed to
1820 resolve it;
1822 o technology-specific augmentations of the path computation RPC,
1823 defined in [PATH-COMPUTE], for optical networks: this gap has been
1824 identified in section 5.1 and the solution in [OPTICAL-PATH-
1825 COMPUTE] has been proposed to resolve it;
1827 o the relationship between a common discovery mechanism applicable to
1828 access links, inter-domain IP links and cross-layer links and the
1829 UNI topology discovery mechanism defined in [SAP]: this gap has
1830 been identified in section 4.3;
1832 o a mechanism applicable to the P-PNC NBI to configure the SR-TE
1833 paths. Technology-specific augmentations of the TE Tunnel model,
1834 defined in [TE-TUNNEL], are foreseen in section 1 of [TE-TUNNEL]
1835 but not yet defined: this gap has been identified in section 5.3.
1837 7. Security Considerations
1839 Several security considerations have been identified and will be
1840 discussed in future versions of this document.
1842 8. Operational Considerations
1844 Telemetry data, such as lower-layer network health and network and
1845 service performance information collected from POI domain
1846 controllers, may be required. These requirements and capabilities
1847 will be discussed in future versions of this document.
1849 9. IANA Considerations
1851 This document requires no IANA actions.
1853 10. References
1855 10.1. Normative References
1857 [RFC7923] Voit, E. et al., "Requirements for Subscription to YANG
1858 Datastores", RFC 7923, June 2016.
1860 [RFC7950] Bjorklund, M. et al., "The YANG 1.1 Data Modeling
1861 Language", RFC 7950, August 2016.
1863 [RFC7951] Lhotka, L., "JSON Encoding of Data Modeled with YANG", RFC
1864 7951, August 2016.
1866 [RFC8040] Bierman, A.
et al., "RESTCONF Protocol", RFC 8040, January 1867 2017. 1869 [RFC8342] Bjorklund, M. et al., "Network Management Datastore 1870 Architecture (NMDA)", RFC 8342, March 2018. 1872 [RFC8345] Clemm, A., Medved, J. et al., "A Yang Data Model for 1873 Network Topologies", RFC8345, March 2018. 1875 [RFC8346] Clemm, A. et al., "A YANG Data Model for Layer 3 1876 Topologies", RFC8346, March 2018. 1878 [RFC8453] Ceccarelli, D., Lee, Y. et al., "Framework for Abstraction 1879 and Control of TE Networks (ACTN)", RFC8453, August 2018. 1881 [RFC8525] Bierman, A. et al., "YANG Library", RFC 8525, March 2019. 1883 [RFC8527] Bjorklund, M. et al., "RESTCONF Extensions to Support the 1884 Network Management Datastore Architecture", RFC 8527, March 1885 2019. 1887 [RFC8641] Clemm, A. and E. Voit, "Subscription to YANG Notifications 1888 for Datastore Updates", RFC 8641, September 2019. 1890 [RFC8650] Voit, E. et al., "Dynamic Subscription to YANG Events and 1891 Datastores over RESTCONF", RFC 8650, November 2019. 1893 [RFC8795] Liu, X. et al., "YANG Data Model for Traffic Engineering 1894 (TE) Topologies", RFC8795, August 2020. 1896 [RFC9094] Zheng H., Lee, Y. et al., "A YANG Data Model for Wavelength 1897 Switched Optical Networks (WSONs)", RFC 9094, August 2021. 1899 [IEEE 802.1AB] IEEE 802.1AB-2016, "IEEE Standard for Local and 1900 metropolitan area networks - Station and Media Access 1901 Control Connectivity Discovery", March 2016. 1903 [Flexi-TOPO] Lopez de Vergara, J. E. et al., "YANG data model for 1904 Flexi-Grid Optical Networks", draft-ietf-ccamp-flexigrid- 1905 yang, work in progress. 1907 [OTN-TOPO] Zheng, H. et al., "A YANG Data Model for Optical Transport 1908 Network Topology", draft-ietf-ccamp-otn-topo-yang, work in 1909 progress. 1911 [CLIENT-TOPO] Zheng, H. et al., "A YANG Data Model for Client-layer 1912 Topology", draft-zheng-ccamp-client-topo-yang, work in 1913 progress. 1915 [L3-TE-TOPO] Liu, X. 
et al., "YANG Data Model for Layer 3 TE 1916 Topologies", draft-ietf-teas-yang-l3-te-topo, work in 1917 progress. 1919 [SR-TE-TOPO] Liu, X. et al., "YANG Data Model for SR and SR TE 1920 Topologies on MPLS Data Plane", draft-ietf-teas-yang-sr-te- 1921 topo, work in progress. 1923 [TE-TUNNEL] Saad, T. et al., "A YANG Data Model for Traffic 1924 Engineering Tunnels and Interfaces", draft-ietf-teas-yang- 1925 te, work in progress. 1927 [WSON-TUNNEL] Lee, Y. et al., "A Yang Data Model for WSON Tunnel", 1928 draft-ietf-ccamp-wson-tunnel-model, work in progress. 1930 [Flexi-TUNNEL] Lopez de Vergara, J. E. et al., "A YANG Data Model for 1931 Flexi-Grid Tunnels ", draft-ietf-ccamp-flexigrid-tunnel- 1932 yang, work in progress. 1934 [OTN-TUNNEL] Zheng, H. et al., "OTN Tunnel YANG Model", draft-ietf- 1935 ccamp-otn-tunnel-model, work in progress. 1937 [PATH-COMPUTE] Busi, I., Belotti, S. et al, "Yang model for 1938 requesting Path Computation", draft-ietf-teas-yang-path- 1939 computation, work in progress. 1941 [CLIENT-SIGNAL] Zheng, H. et al., "A YANG Data Model for Transport 1942 Network Client Signals", draft-ietf-ccamp-client-signal- 1943 yang, work in progress. 1945 10.2. Informative References 1947 [RFC1930] J. Hawkinson, T. Bates, "Guideline for creation, selection, 1948 and registration of an Autonomous System (AS)", RFC 1930, 1949 March 1996. 1951 [RFC5440] Vasseur, JP. et al., "Path Computation Element (PCE) 1952 Communication Protocol (PCEP)", RFC 5440, March 2009. 1954 [RFC5623] Oki, E. et al., "Framework for PCE-Based Inter-Layer MPLS 1955 and GMPLS Traffic Engineering", RFC 5623, September 2009. 1957 [RFC8231] Crabbe, E. et al., "Path Computation Element Communication 1958 Protocol (PCEP) Extensions for Stateful PCE", RFC 8231, 1959 September 2017. 1961 [RFC8281] Crabbe, E. et al., "Path Computation Element Communication 1962 Protocol (PCEP) Extensions for PCE-Initiated LSP Setup in a 1963 Stateful PCE Model", RFC 8281, December 2017. 1965 [RFC8283] Farrel, A. 
et al., "An Architecture for Use of PCE and the 1966 PCE Communication Protocol (PCEP) in a Network with Central 1967 Control", RFC 8283, December 2017. 1969 [RFC8309] Q. Wu, W. Liu, and A. Farrel, "Service Model Explained", 1970 RFC 8309, January 2018. 1972 [RFC8637] Dhody, D. et al., "Applicability of the Path Computation 1973 Element (PCE) to the Abstraction and Control of TE Networks 1974 (ACTN)", RFC 8637, July 2019. 1976 [RFC8751] Dhody, D. et al., "Hierarchical Stateful Path Computation 1977 Element (PCE)", RFC 8751, March 2020. 1979 [RFC9182] S. Barguil, et al., "A YANG Network Data Model for Layer 1980 3 VPNs", RFC 9182, February 2022. 1982 [L2NM] S. Barguil, et al., "A Layer 2 VPN Network YANG Model", 1983 draft-ietf-opsawg-l2nm, work in progress. 1985 [TSM] Y. Lee, et al., "Traffic Engineering and Service Mapping 1986 Yang Model", draft-ietf-teas-te-service-mapping-yang, work 1987 in progress. 1989 [TNBI] Busi, I., Daniel, K. et al., "Transport Northbound 1990 Interface Applicability Statement", draft-ietf-ccamp- 1991 transport-nbi-app-statement, work in progress. 1993 [VN] Y. Lee, et al., "A Yang Data Model for ACTN VN Operation", 1994 draft-ietf-teas-actn-vn-yang, work in progress. 1996 [OIA-TOPO] Lee Y. et al., "A YANG Data Model for Optical Impairment- 1997 aware Topology", draft-ietf-ccamp-optical-impairment- 1998 topology-yang, work in progress. 2000 [SAP] Gonzalez de Dios O. et al., "A Network YANG Model for 2001 Service Attachment Points (SAPs)", draft-ietf-opsawg-sap, 2002 work in progress. 2004 [NETWORK-INVENTORY] Yu C. et al., "A YANG Data Model for Optical 2005 Network Inventory", draft-yg3bp-ccamp-optical-inventory- 2006 yang, work in progress. 2008 [OPTICAL-PATH-COMPUTE] Busi I. et al., "YANG Data Models for 2009 requesting Path Computation in Optical Networks", draft- 2010 gbb-ccamp-optical-path-computation-yang, work in progress. 2012 Appendix A. 
OSS/Orchestration Layer
2014 The OSS/Orchestration layer is a vital part of the architecture
2015 framework for a service provider:
2017 o to abstract (through MDSC and PNCs) the underlying transport
2018 network complexity to the Business Systems Support layer;
2020 o to coordinate NFV, Transport (e.g. IP, optical and microwave
2021 networks), Fixed Access, Core and Radio domains, enabling full
2022 automation of end-to-end services to the end customers;
2024 o to enable catalogue-driven service provisioning from external
2025 applications (e.g. a Customer Portal for Enterprise Business
2026 services), orchestrating the design and lifecycle management of
2027 these end-to-end transport connectivity services, consuming IP
2028 and/or optical transport connectivity services upon request.
2030 As discussed in section 2.1, in this document, the MDSC interfaces
2031 with the OSS/Orchestration layer and, therefore, performs the
2032 functions of the Network Orchestrator defined in [RFC8309].
2034 The OSS/Orchestration layer requests the MDSC to create a network
2035 service, specifying its end-points (PEs and the interfaces
2036 towards the CEs) as well as the network service SLA, and then proceeds
2037 to configure accordingly the end-to-end customer service between
2038 the CEs in the case of an operator-managed service.
2040 A.1. MDSC NBI
2042 As explained in section 2, the OSS/Orchestration layer can request
2043 the MDSC to set up L2/L3 VPN network services (with or without TE
2044 requirements).
2046 Although the OSS/Orchestration layer interface is usually operator-
2047 specific, it would typically use a RESTCONF/YANG interface with
2048 a more abstracted version of the MPI YANG data models used for
2049 network configuration (e.g. L3NM, L2NM).
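As an illustration, a request towards such a RESTCONF/YANG NBI might carry an L3NM payload like the minimal sketch below, built with the ietf-l3vpn-ntw module defined in [RFC9182]; the target URL and all leaf values are hypothetical, and only a few leaves are shown:

```python
# Sketch of a minimal L3NM (RFC 9182, ietf-l3vpn-ntw) payload an OSS
# might send to the MDSC NBI; server name and values are hypothetical.
import json

payload = {
    "ietf-l3vpn-ntw:vpn-service": [{
        "vpn-id": "l3vpn-example-1",            # hypothetical service id
        "customer-name": "customer-a",           # hypothetical customer
        "vpn-service-topology": "ietf-vpn-common:any-to-any",
    }]
}

# Target RESTCONF resource (hypothetical MDSC endpoint):
url = ("https://mdsc.example.com/restconf/data/"
       "ietf-l3vpn-ntw:l3vpn-ntw/vpn-services")

body = json.dumps(payload, indent=2)
print(body)
```

In practice the OSS payload would carry many more leaves (vpn-nodes, vpn-network-accesses, SLA-related parameters), possibly in a more abstracted, operator-specific form, as noted above.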
2051 Figure 8 shows an example of a possible control flow between the
2052 OSS/Orchestration layer and the MDSC to instantiate L2/L3 VPN network
2053 services, using the YANG data models defined in [VN],
2054 [L2NM], [RFC9182] and [TSM].
2056 +-------------------------------------------+
2057 | |
2058 | OSS/Orchestration layer |
2059 | |
2060 +-----------------------+-------------------+
2061 |
2062 1.VN 2. L2/L3NM & | ^
2063 | TSM | |
2064 | | | |
2065 | | | |
2066 v v | 3. Update VN
2067 |
2068 +-----------------------+-------------------+
2069 | |
2070 | MDSC |
2071 | |
2072 +-------------------------------------------+
2074 Figure 8 Service Request Process
2076 o The VN YANG data model, defined in [VN], whose primary focus is
2077 the CMI, can also provide VN Service configuration from an
2078 orchestrated network service point of view when the L2/L3 VPN
2079 network service has TE requirements. However, this model is not
2080 used to set up L2/L3 VPN services with no TE requirements.
2082 o It provides the profile of the VN in terms of VN members, each of
2083 which corresponds to an edge-to-edge link between customer
2084 end-points (VNAPs). It also provides the mapping of the
2085 VNAPs to the LTPs and of the VN members to the connectivity
2086 matrix. The associated traffic matrix (e.g., bandwidth,
2087 latency, protection level) of each VN member is expressed
2088 via the TE topology's connectivity matrix.
2090 o The model also provides VN-level preference information (e.g.,
2091 VN member diversity) and VN-level admin-status and
2092 operational-status.
2094 o The L2NM and L3NM YANG data models, defined in [L2NM] and
2095 [RFC9182], whose primary focus is the MPI, can also be used to
2096 provide L2VPN and L3VPN network service configuration from an
2097 orchestrated connectivity service point of view.
2099 o The TE & Service Mapping YANG data model [TSM] provides TE-service
2100 mapping.
2102 o TE-service mapping provides the mapping between a L2/L3 VPN
2103 instance and the corresponding VN instances.
2105 o The TE-service mapping also provides the binding requirements
2106 as to how each L2/L3 VPN/VN instance is created with respect to
2107 the underlay TE tunnels (e.g., whether they require a new and
2108 isolated set of TE underlay tunnels or not).
2110 o Site mapping provides the site reference information across the
2111 L2/L3 VPN Site ID, the VN Access Point ID, and the LTP of the
2112 access link.
2114 Appendix B. Multi-layer and multi-domain resiliency
2116 B.1. Maintenance Window
2118 Before a planned maintenance operation on the DWDM network takes
2119 place, IP traffic should be moved hitlessly to another link.
2121 The MDSC must reroute IP traffic before the event takes place. It
2122 should be possible to lock the IP traffic to the protection route
2123 until the maintenance event is finished, unless a fault occurs on
2124 that path.
2125 B.2. Router port failure
2127 The focus is on the client-side protection scheme between the IP
2128 router and the reconfigurable ROADM. The scenario here is to define
2129 only one port in the routers and in the ROADM muxponder board at both
2130 ends as a back-up port to recover from any other port failure on the
2131 client side of the ROADM (either on the router port side, on the
2132 muxponder side or on the link between them). When a client-side port
2133 failure occurs, alarms are raised to the MDSC by the P-PNC and the
2134 O-PNC (port status down, LOS, etc.). The MDSC checks with the
2135 O-PNC(s) that there is no optical failure in the optical layer.
2137 There can be two cases here:
2139 a) A LAG was defined between the two end routers. The MDSC, after
2140 checking that the optical layer is fine between the two end ROADMs,
2141 triggers the ROADM configuration so that the router back-up port
2142 with its associated muxponder port can reuse the OCh that was
2143 already in use by the failed router port, and adds the new link to
2144 the LAG on the failure side.
2146 While the ROADM reconfiguration takes place, the IP/MPLS traffic
2147 uses the reduced bandwidth of the IP link bundle, discarding
2148 lower-priority traffic if required. Once the back-up port has been
2149 reconfigured to reuse the existing OCh and the new link has been
2150 added to the LAG, the original bandwidth is recovered between the
2151 end routers.
2153 Note: in this LAG scenario it is assumed that BFD is running at the
2154 LAG level so that nothing is triggered at the MPLS level when one of
2155 the link members of the LAG fails.
2157 b) If there is no LAG, the scenario is less clear, since a router
2158 port failure would automatically trigger (through BFD failure)
2159 first a sub-50ms protection at the MPLS level: FRR (MPLS RSVP-TE
2160 case) or TI-LFA (MPLS-based SR-TE case) through a protection port.
2161 At the same time, the MDSC, after checking that the optical network
2162 connection is still fine, would trigger the reconfiguration of the
2163 back-up port of the router and of the ROADM muxponder to reuse the
2164 same OCh as the one used originally for the failed router port. Once
2165 everything has been correctly configured, the MDSC Global PCE could
2166 suggest that the operator trigger a possible re-optimization of
2167 the back-up MPLS path to go back to the MPLS primary path through
2168 the back-up port of the router and the original OCh, if the overall
2169 cost, latency, etc. are improved. However, in this scenario, there
2170 is a need for a protection port PLUS a back-up port in the router,
2171 which does not lead to clear port savings.
2173 Acknowledgments
2175 This document was prepared using 2-Word-v2.0.template.dot.
2177 Some of this analysis work was supported in part by the European
2178 Commission funded H2020-ICT-2016-2 METRO-HAUL project (G.A. 761727).
2180 Contributors 2182 Sergio Belotti 2183 Nokia 2185 Email: sergio.belotti@nokia.com 2187 Gabriele Galimberti 2188 Cisco 2190 Email: ggalimbe@cisco.com 2192 Zheng Yanlei 2193 China Unicom 2195 Email: zhengyanlei@chinaunicom.cn 2196 Anton Snitser 2197 Sedona 2199 Email: antons@sedonasys.com 2201 Washington Costa Pereira Correia 2202 TIM Brasil 2204 Email: wcorreia@timbrasil.com.br 2206 Michael Scharf 2207 Hochschule Esslingen - University of Applied Sciences 2209 Email: michael.scharf@hs-esslingen.de 2211 Young Lee 2212 Sung Kyun Kwan University 2214 Email: younglee.tx@gmail.com 2216 Jeff Tantsura 2217 Apstra 2219 Email: jefftant.ietf@gmail.com 2221 Paolo Volpato 2222 Huawei 2224 Email: paolo.volpato@huawei.com 2226 Brent Foster 2227 Cisco 2229 Email: brfoster@cisco.com 2231 Authors' Addresses 2233 Fabio Peruzzini 2234 TIM 2236 Email: fabio.peruzzini@telecomitalia.it 2238 Jean-Francois Bouquier 2239 Vodafone 2241 Email: jeff.bouquier@vodafone.com 2243 Italo Busi 2244 Huawei 2246 Email: Italo.busi@huawei.com 2248 Daniel King 2249 Old Dog Consulting 2251 Email: daniel@olddog.co.uk 2253 Daniele Ceccarelli 2254 Ericsson 2256 Email: daniele.ceccarelli@ericsson.com