1 CCAMP Working Group I. Busi (Ed.) 2 Internet Draft Huawei 3 Intended status: Informational D. King 4 Expires: April 2018 Lancaster University 5 October 09, 2017 7 Transport Northbound Interface Applicability Statement and Use Cases 8 draft-ietf-ccamp-transport-nbi-use-cases-00 10 Status of this Memo 12 This Internet-Draft is submitted in full conformance with the 13 provisions of BCP 78 and BCP 79. 15 Internet-Drafts are working documents of the Internet Engineering 16 Task Force (IETF), its areas, and its working groups. Note that 17 other groups may also distribute working documents as Internet- 18 Drafts. 20 Internet-Drafts are draft documents valid for a maximum of six 21 months and may be updated, replaced, or obsoleted by other documents 22 at any time. It is inappropriate to use Internet-Drafts as 23 reference material or to cite them other than as "work in progress." 25 The list of current Internet-Drafts can be accessed at 26 http://www.ietf.org/ietf/1id-abstracts.txt 28 The list of Internet-Draft Shadow Directories can be accessed at 29 http://www.ietf.org/shadow.html 31 This Internet-Draft will expire on April 10, 2018. 33 Copyright Notice 35 Copyright (c) 2017 IETF Trust and the persons identified as the 36 document authors. All rights reserved. 38 This document is subject to BCP 78 and the IETF Trust's Legal 39 Provisions Relating to IETF Documents 40 (http://trustee.ietf.org/license-info) in effect on the date of 41 publication of this document. Please review these documents 42 carefully, as they describe your rights and restrictions with 43 respect to this document. Code Components extracted from this 44 document must include Simplified BSD License text as described in 45 Section 4.e of the Trust Legal Provisions and are provided without 46 warranty as described in the Simplified BSD License. 48 Abstract 50 Transport network domains, including Optical Transport Network (OTN) 51 and Wavelength Division Multiplexing (WDM) networks, are typically 52 deployed based on a single vendor or technology platform. They are 53 often managed using proprietary interfaces to dedicated Element 54 Management Systems (EMS), Network Management Systems (NMS) and 55 increasingly Software Defined Network (SDN) controllers.
57 A well-defined open interface to each domain management system or 58 controller is required for network operators to facilitate control 59 automation and orchestrate end-to-end services across multi-domain 60 networks. These functions may be enabled using standardized data 61 models (e.g., YANG) and an appropriate protocol (e.g., RESTCONF). 63 This document describes the key use cases and requirements for 64 transport network control and management. It reviews proposed and 65 existing IETF transport network data models and their applicability, 66 and highlights gaps and requirements. 68 Table of Contents 70 1. Introduction ................................................3 71 1.1. Scope of this document .................................4 72 2. Terminology .................................................4 73 3. Conventions used in this document............................4 74 3.1. Topology and traffic flow processing ...................4 75 4. Use Case 1: Single-domain with single-layer .................5 76 4.1. Reference Network ......................................5 77 4.1.1. Single Transport Domain - OTN Network .............5 78 4.2. Topology Abstractions ..................................8 79 4.3. Service Configuration ..................................9 80 4.3.1. ODU Transit .......................................9 81 4.3.2. EPL over ODU ......................................10 82 4.3.3. Other OTN Client Services .........................10 83 4.3.4. EVPL over ODU .....................................11 84 4.3.5. EVPLAN and EVPTree Services .......................12 85 4.4. Multi-functional Access Links ..........................13 86 4.5. Protection Requirements ................................14 87 4.5.1. Linear Protection .................................15 88 5. Use Case 2: Single-domain with multi-layer ..................15 89 5.1. Reference Network ......................................15 90 5.2. Topology Abstractions ..................................16 91 5.3. Service Configuration ..................................16 92 6. Use Case 3: Multi-domain with single-layer ..................16 93 6.1. Reference Network ......................................16 94 6.2. Topology Abstractions ..................................19 95 6.3. Service Configuration ..................................19 96 6.3.1. ODU Transit .......................................20 97 6.3.2. EPL over ODU ......................................20 98 6.3.3. Other OTN Client Services .........................21 99 6.3.4. EVPL over ODU .....................................21 100 6.3.5. EVPLAN and EVPTree Services .......................21 101 6.4. Multi-functional Access Links ..........................22 102 6.5. Protection Scenarios ...................................22 103 6.5.1. Linear Protection (end-to-end) ....................23 104 6.5.2. Segmented Protection ..............................23 105 7. Use Case 4: Multi-domain and multi-layer ....................24 106 7.1. Reference Network ......................................24 107 7.2. Topology Abstractions ..................................25 108 7.3. Service Configuration ..................................25 109 8. Security Considerations .....................................25 110 9. IANA Considerations .........................................26 111 10. References .................................................26 112 10.1. Normative References ..................................26 113 10.2.
Informative References ................................26 114 11. Acknowledgments ............................................27 116 1. Introduction 118 Transport of packet services is critical for a wide range of 119 applications and services, including: data center and LAN 120 interconnects, Internet service backhauling, mobile backhaul and 121 enterprise Carrier Ethernet Services. These services are typically 122 set up using stovepipe NMS and EMS platforms, often requiring 123 proprietary management platforms and legacy management interfaces. A 124 clear goal for operators is to automate the setup of transport 125 services across multiple transport technology domains. 127 A common open interface (API) to each domain controller and/or 128 management system is a prerequisite for network operators to control 129 multi-vendor and multi-domain networks and also to enable service 130 provisioning coordination and automation. This can be achieved using 131 standardized YANG models together with an appropriate protocol 132 (e.g., [RESTCONF]). 134 This document describes key use cases for analyzing the 135 applicability of the existing models defined by the IETF for 136 transport networks. The intention of this document is to become an 137 applicability statement that provides detailed descriptions of how 138 IETF transport models are applied to solve the described use cases 139 and requirements. 141 1.1. Scope of this document 143 This document assumes a reference architecture, including 144 interfaces, based on the Abstraction and Control of Traffic- 145 Engineered Networks (ACTN), defined in [ACTN-Frame]. 147 The focus of this document is on the MPI (the interface between the 148 Multi Domain Service Coordinator (MDSC) and a Physical Network 149 Controller (PNC), controlling a transport network domain). 151 The relationship between the current IETF YANG models and the type 152 of ACTN interfaces can be found in [ACTN-YANG]. 154 The ONF Technical Recommendations for Functional Requirements for 155 the transport API in [ONF TR-527] and the ONF transport API multi- 156 layer examples in [ONF GitHub] have been considered as an input for 157 this work. 159 Considerations about the CMI (the interface between the Customer Network 160 Controller (CNC) and the MDSC) are outside the scope of this 161 document. 163 2. Terminology 165 E-LINE: Ethernet Line 167 EPL: Ethernet Private Line 169 EVPL: Ethernet Virtual Private Line 171 OTH: Optical Transport Hierarchy 173 OTN: Optical Transport Network 175 3. Conventions used in this document 177 3.1. Topology and traffic flow processing 179 The traffic flow between different nodes is specified as an ordered 180 list of nodes, separated with commas, indicating within the brackets 181 the processing within each node: 183 <node> (<processing>) {, <node> (<processing>)} 185 The order represents the order of traffic flow being forwarded 186 through the network. 188 The processing can be either an adaptation of a client layer into a 189 server layer "(client -> server)" or switching at a given layer 190 "([switching])". Multi-layer switching is indicated by two-layer 191 switching with client/server adaptation: "([client] -> [server])". 193 For example, consider the following traffic flow: 195 C-R1 (|PKT| -> ODU2), S3 (|ODU2|), S5 (|ODU2|), S6 (|ODU2|), 196 C-R3 (ODU2 -> |PKT|) 198 Node C-R1 is switching at the packet (PKT) layer and mapping packets 199 into an ODU2 before transmission to node S3.
Nodes S3, S5 and S6 are 200 switching at the ODU2 layer: S3 sends the ODU2 traffic to S5 which 201 then sends it to S6 which finally sends it to C-R3. Node C-R3 202 terminates the ODU2 from S6 before switching at the packet (PKT) 203 layer. 205 The paths of working and protection transport entities are specified 206 as an ordered list of nodes, separated with commas: 208 <node> {, <node>} 210 The order represents the order of traffic flow being forwarded 211 through the network in the forward direction. In case of 212 bidirectional paths, the forward and backward directions are 213 selected arbitrarily, but the convention is consistent between 214 working/protection path pairs as well as across multiple domains. 216 4. Use Case 1: Single-domain with single-layer 218 4.1. Reference Network 220 The current considerations discussed in this document are based on 221 the following reference networks: 223 - single transport domain: OTN network 225 4.1.1. Single Transport Domain - OTN Network 227 As shown in Figure 1, the network physical topology is composed of a 228 single-domain transport network providing transport services to an 229 IP network through five access links. 231 ................................................ 232 : IP domain : 233 : .............................. : 234 : : ........................ : : 235 : : : : : : 236 : : : S1 -------- S2 ------ C-R4 : 237 : : : / | : : : 238 : : : / | : : : 239 : C-R1 ------ S3 ----- S4 | : : : 240 : : : \ \ | : : : 241 : : : \ \ | : : : 242 : : : S5 \ | : : : 243 : C-R2 -----+ / \ \ | : : : 244 : : : \ / \ \ | : : : 245 : : : S6 ---- S7 ---- S8 ------ C-R5 : 246 : : : / : : : 247 : C-R3 -----+ : : : 248 : : : Transport domain : : : 249 : : : : : : 250 :........: :......................: :........: 251 Figure 1 Reference network for Use Case 1 253 The IP and transport (OTN) domains are respectively composed of five 254 routers C-R1 to C-R5 and of eight ODU switches S1 to S8. The 255 transport domain acts as a transit network providing connectivity 256 for IP layer services. 258 The behavior of the transport domain is the same whether the 259 ingress or egress service nodes in the IP domain are only attached 260 to the transport domain, or whether there are other routers in between 261 the ingress or egress nodes of the IP domain not also attached to 262 the transport domain. In other words, the behavior of the transport 263 network does not depend on whether C-R1, C-R2, ..., C-R5 are PE or P 264 routers for the IP services. 266 The transport domain control plane architecture follows the ACTN 267 architecture and framework document [ACTN-Frame], with the following 268 functional components: 270 o the Customer Network Controller (CNC) acts as a client with respect to 271 the Multi-Domain Service Coordinator (MDSC) via the CNC-MDSC 272 Interface (CMI); 274 o the MDSC is connected to a plurality of Physical Network Controllers 275 (PNCs), one for each domain, via an MDSC-PNC Interface (MPI). Each 276 PNC is responsible only for the control of its domain and the 277 MDSC is the only entity capable of multi-domain functionalities 278 as well as of managing the inter-domain links. 280 The ACTN framework facilitates the detachment of the network and 281 service control from the underlying technology and helps the customer 282 express the network as desired by business needs. Therefore, care 283 must be taken to keep minimal dependency on the CMI (or no 284 dependency at all) with respect to the network domain technologies.
285 The MPI instead requires some specialization according to the domain 286 technology. 288 +-----+ 289 | CNC | 290 +-----+ 291 | 292 |CMI I/F 293 | 294 +-----------------------+ 295 | MDSC | 296 +-----------------------+ 297 | 298 |MPI I/F 299 | 300 +-------+ 301 | PNC | 302 +-------+ 303 | 304 ----- 305 ( ) 306 ( OTN ) 307 ( Physical ) 308 ( Network ) 309 ( ) 310 ----- 312 Figure 2 Controlling Hierarchy for Use Case 1 314 Once the service request is processed by the MDSC, the mapping of the 315 client IP traffic between the routers (across the transport network) 316 is made in the IP routers only and is not controlled by the 317 transport PNC, and is therefore transparent to the transport nodes. 319 4.2. Topology Abstractions 321 Abstraction provides a selective method for representing 322 connectivity information within a domain. There are multiple methods 323 to abstract a network topology. This document assumes the 324 abstraction method defined in [RFC7926]: 326 "Abstraction is the process of applying policy to the available TE 327 information within a domain, to produce selective information that 328 represents the potential ability to connect across the domain. 329 Thus, abstraction does not necessarily offer all possible 330 connectivity options, but presents a general view of potential 331 connectivity according to the policies that determine how the 332 domain's administrator wants to allow the domain resources to be 333 used." 335 [TE-Topo] describes a YANG base model for TE topology without any 336 technology-specific parameters. Moreover, it defines how to abstract 337 TE network topologies. 339 [ACTN-Frame] provides the context of topology abstraction in the 340 ACTN architecture and discusses a few alternatives for the 341 abstraction methods for both packet and optical networks. This is an 342 important consideration since the choice of the abstraction method 343 impacts protocol design and the information it carries. According 344 to [ACTN-Frame], there are three types of topology: 346 o White topology: This is a case where the Physical Network 347 Controller (PNC) provides the actual network topology to the 348 Multi-Domain Service Coordinator (MDSC) without any hiding or 349 filtering. In this case, the MDSC has the full knowledge of the 350 underlying network topology; 352 o Black topology: The entire domain network is abstracted as a 353 single virtual node with the access/egress links, without 354 disclosing any internal node connectivity information; 356 o Grey topology: This abstraction level is between black topology 357 and white topology from a granularity point of view. This is an 358 abstraction of TE tunnels for all pairs of border nodes. We may 359 further differentiate based on how the 360 internal TE resources between the pairs of border nodes are abstracted: 362 - Grey topology type A: border nodes with TE links between 363 them in a full mesh fashion; 365 - Grey topology type B: border nodes with some internal 366 abstracted nodes and abstracted links. 368 For the single-domain, single-layer use case, the white topology may 369 be disseminated from the PNC to the MDSC in most cases. There may be 370 exceptions where the underlay network 371 has complex optical parameters, which do not warrant the 372 distribution of such details to the MDSC. In such a case, the topology 373 disseminated from the PNC to the MDSC may not carry the entire TE 374 information but a streamlined version of it. This case would require 375 an additional action from the MDSC when provisioning a path. 376 The MDSC may issue a path computation request to the PNC to verify the 377 feasibility of the estimated path before making the final 378 provisioning request to the PNC, as outlined in [Path-Compute].
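   As a purely illustrative sketch, the following JSON fragment gives
   an idea of the white topology information that the PNC could expose
   to the MDSC at the MPI in this use case. The object and attribute
   names are hypothetical and do not reproduce the structure of
   [TE-Topo] or of any other specific YANG module; an actual
   implementation would use the TE topology model, possibly with OTN
   technology-specific augmentations, which is a subject for the gap
   analysis.

      {
        "topology": {
          "topology-id": "otn-domain-1",
          "abstraction": "white",
          "node": [
            {"node-id": "S3", "access-links": ["S3-to-C-R1"]},
            {"node-id": "S5", "access-links": []},
            {"node-id": "S6", "access-links": ["S6-to-C-R2", "S6-to-C-R3"]}
          ],
          "link": [
            {"link-id": "S3-S5", "source": "S3", "destination": "S5",
             "available-odu-rates": ["ODU0", "ODU2", "ODUflex"]},
            {"link-id": "S5-S6", "source": "S5", "destination": "S6",
             "available-odu-rates": ["ODU0", "ODU2", "ODUflex"]}
          ]
        }
      }

   In the white topology case all eight nodes and all internal links
   would be reported in this way; the fragment above is truncated to
   the nodes and links used by the C-R1 to C-R3 services.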
380 Topology abstraction for the CMI is for further study (to be 381 addressed in future revisions of this document). 383 4.3. Service Configuration 385 In the following use cases, the Multi Domain Service Coordinator 386 (MDSC) needs to be capable of requesting service connectivity from the 387 transport Physical Network Controller (PNC) to support IP router 388 connectivity. The type of services could depend on the type of 389 physical links (e.g., OTN, ETH or SDH links) between the 390 routers and the transport network. 392 As described in section 4.1.1, the control of the different adaptations 393 inside the IP routers, C-Ri (PKT -> foo) and C-Rj (foo -> PKT), is 394 assumed to be performed by means that are not under the control of, 395 and not visible to, the transport PNC. Therefore, these mechanisms are 396 outside the scope of this document. 398 4.3.1. ODU Transit 400 This use case assumes that the physical links interconnecting the IP 401 routers and the transport network are OTN links. The 402 physical/optical interconnection below the ODU layer is supposed to 403 be pre-configured and not exposed at the MPI to the MDSC. 405 To set up a 10Gb IP link between C-R1 and C-R3, an ODU2 end-to-end 406 data plane connection needs to be created between C-R1 and C-R3, 407 crossing transport nodes S3, S5, and S6. 409 The traffic flow between C-R1 and C-R3 can be summarized as: 411 C-R1 (|PKT| -> ODU2), S3 (|ODU2|), S5 (|ODU2|), S6 (|ODU2|), 412 C-R3 (ODU2 -> |PKT|) 414 The MDSC should be capable, via the MPI, of requesting the setup of an 415 ODU2 transit service with enough information to enable the 416 transport PNC to instantiate and control the ODU2 data plane 417 connection segment through nodes S3, S5, S6. 419 4.3.2. EPL over ODU 421 This use case assumes that the physical links interconnecting the IP 422 routers and the transport network are Ethernet links. 424 In order to set up a 10Gb IP link between C-R1 and C-R3, an EPL 425 service needs to be created between C-R1 and C-R3, supported by an 426 ODU2 end-to-end connection between S3 and S6, crossing transport 427 node S5. 429 The traffic flow between C-R1 and C-R3 can be summarized as: 431 C-R1 (|PKT| -> ETH), S3 (ETH -> |ODU2|), S5 (|ODU2|), 432 S6 (|ODU2| -> ETH), C-R3 (ETH -> |PKT|) 434 The MDSC should be capable, via the MPI, of requesting the setup of an 435 EPL service with enough information to permit the transport 436 PNC to instantiate and control the ODU2 end-to-end data plane 437 connection through nodes S3, S5, S6, as well as the adaptation 438 functions inside S3 and S6: S3&S6 (ETH -> ODU2) and S3&S6 (ODU2 -> 439 ETH).
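   The following JSON fragment is a purely illustrative sketch of the
   minimum information that such an MPI request would need to convey
   for the EPL service described above. The attribute names are
   hypothetical and do not reproduce the structure of any specific
   IETF YANG module; identifying which IETF models actually carry this
   information is part of the applicability analysis.

      {
        "connectivity-service": {
          "name": "EPL-CR1-CR3",
          "service-type": "EPL",
          "end-point": [
            {"node": "S3", "access-link": "S3-to-C-R1"},
            {"node": "S6", "access-link": "S6-to-C-R3"}
          ],
          "odu-connection": {"odu-type": "ODU2", "protection": "none"}
        }
      }

   An equivalent request for the ODU Transit service of section 4.3.1
   would reference the OTN access links directly and would not require
   any Ethernet adaptation information.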
441 4.3.3. Other OTN Client Services 443 [ITU-T G.709-2016] defines mappings of different client layers into 444 ODU. Most of them are used to provide Private Line services over 445 an OTN transport network supporting a variety of types of physical 446 access links (e.g., Ethernet, SDH STM-N, Fibre Channel, InfiniBand, 447 etc.). 449 This use case assumes that the physical links interconnecting the IP 450 routers and the transport network are any one of these possible 451 options. 453 In order to set up a 10Gb IP link between C-R1 and C-R3 using, for 454 example, STM-64 physical links between the IP routers and the 455 transport network, an STM-64 Private Line service needs to be 456 created between C-R1 and C-R3, supported by an ODU2 end-to-end data 457 plane connection between S3 and S6, crossing transport node S5. 459 The traffic flow between C-R1 and C-R3 can be summarized as: 461 C-R1 (|PKT| -> STM-64), S3 (STM-64 -> |ODU2|), S5 (|ODU2|), 462 S6 (|ODU2| -> STM-64), C-R3 (STM-64 -> |PKT|) 464 The MDSC should be capable, via the MPI, of requesting the setup of an 465 STM-64 Private Line service with enough information to permit 466 the transport PNC to instantiate and control the ODU2 end-to-end 467 connection through nodes S3, S5, S6, as well as the adaptation 468 functions inside S3 and S6: S3&S6 (STM-64 -> ODU2) and S3&S6 (ODU2 469 -> STM-64). 471 4.3.4. EVPL over ODU 473 This use case assumes that the physical links interconnecting the IP 474 routers and the transport network are Ethernet links and that 475 different Ethernet services (e.g., EVPL) can share the same physical 476 link using different VLANs. 478 In order to set up two 1Gb IP links, between C-R1 and C-R3 and between 479 C-R1 and C-R4, two EVPL services need to be created, supported by 480 two ODU0 end-to-end connections respectively between S3 and S6, 481 crossing transport node S5, and between S3 and S2, crossing 482 transport node S1. 484 Since the two EVPL services are sharing the same Ethernet physical 485 link between C-R1 and S3, different VLAN IDs are associated with the 486 different EVPL services: for example, VLAN IDs 10 and 20, 487 respectively. 489 The traffic flow between C-R1 and C-R3 can be summarized as: 491 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |ODU0|), S5 (|ODU0|), 492 S6 (|ODU0| -> VLAN), C-R3 (VLAN -> |PKT|) 494 The traffic flow between C-R1 and C-R4 can be summarized as: 496 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |ODU0|), S1 (|ODU0|), 497 S2 (|ODU0| -> VLAN), C-R4 (VLAN -> |PKT|) 499 The MDSC should be capable, via the MPI, of requesting the setup of these 500 EVPL services with enough information to permit the transport 501 PNC to instantiate and control the ODU0 end-to-end data plane 502 connections as well as the adaptation functions on the boundary 503 nodes: S3&S2&S6 (VLAN -> ODU0) and S3&S2&S6 (ODU0 -> VLAN).
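   As before, the following is a purely illustrative and hypothetical
   sketch of the information needed at the MPI for one of the two EVPL
   services (the one between C-R1 and C-R3); the request for the
   second EVPL service would differ only in the far-end endpoint (S2,
   toward C-R4) and in the VLAN ID (20).

      {
        "connectivity-service": {
          "name": "EVPL-CR1-CR3",
          "service-type": "EVPL",
          "end-point": [
            {"node": "S3", "access-link": "S3-to-C-R1", "vlan-id": 10},
            {"node": "S6", "access-link": "S6-to-C-R3", "vlan-id": 10}
          ],
          "odu-connection": {"odu-type": "ODU0", "protection": "none"}
        }
      }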
507 4.3.5. EVPLAN and EVPTree Services 509 This use case assumes that the physical links interconnecting the IP 510 routers and the transport network are Ethernet links and that 511 different Ethernet services (e.g., EVPL, EVPLAN and EVPTree) can 512 share the same physical link using different VLANs. 514 Note - it is assumed that EPLAN and EPTree services can be supported 515 by configuring EVPLAN and EVPTree with port mapping. 517 In order to set up an IP subnet between C-R1, C-R2, C-R3 and C-R4, an 518 EVPLAN/EVPTree service needs to be created, supported by two ODUflex 519 end-to-end connections respectively between S3 and S6, crossing 520 transport node S5, and between S3 and S2, crossing transport node 521 S1. 523 In order to support this EVPLAN/EVPTree service, some Ethernet 524 Bridging capabilities are required on some nodes at the edge of the 525 transport network: for example, Ethernet Bridging capabilities can be 526 configured in nodes S3 and S6 but not in node S2. 528 Since this EVPLAN/EVPTree service can share the same Ethernet 529 physical links between the IP routers and the transport nodes (e.g., with 530 the EVPL services described in section 4.3.4), a different VLAN ID 531 (e.g., 30) can be associated with this EVPLAN/EVPTree service. 533 In order to support an EVPTree service instead of an EVPLAN, 534 additional configuration of the Ethernet Bridging capabilities on 535 the nodes at the edge of the transport network is required. 537 The MAC bridging function in node S3 is needed to select, based on 538 the MAC Destination Address, whether the Ethernet frames from C-R1 539 should be sent to the ODUflex terminating on node S6 or to the other 540 ODUflex terminating on node S2. 542 The MAC bridging function in node S6 is needed to select, based on 543 the MAC Destination Address, whether the Ethernet frames received 544 from the ODUflex should be sent to C-R2 or C-R3, as well as whether 545 the Ethernet frames received from C-R2 (or C-R3) should be sent to 546 C-R3 (or C-R2) or to the ODUflex. 548 For example, the traffic flow between C-R1 and C-R3 can be 549 summarized as: 551 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |MAC| -> |ODUflex|), 552 S5 (|ODUflex|), S6 (|ODUflex| -> |MAC| -> VLAN), 553 C-R3 (VLAN -> |PKT|) 555 The MAC bridging function in node S3 is also needed to select, based 556 on the MAC Destination Address, whether the Ethernet frames received from one 557 ODUflex should be sent to C-R1 or to the other ODUflex. 559 For example, the traffic flow between C-R3 and C-R4 can be 560 summarized as: 562 C-R3 (|PKT| -> VLAN), S6 (VLAN -> |MAC| -> |ODUflex|), 563 S5 (|ODUflex|), S3 (|ODUflex| -> |MAC| -> |ODUflex|), 564 S1 (|ODUflex|), S2 (|ODUflex| -> VLAN), C-R4 (VLAN -> |PKT|) 566 In node S2 there is no need for any MAC bridging function since all 567 the Ethernet frames received from C-R4 should be sent to the ODUflex 568 toward S3, and vice versa. 570 The traffic flow between C-R1 and C-R4 can be summarized as: 572 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |MAC| -> |ODUflex|), 573 S1 (|ODUflex|), S2 (|ODUflex| -> VLAN), C-R4 (VLAN -> |PKT|) 575 The MDSC should be capable, via the MPI, of requesting the setup of this 576 EVPLAN/EVPTree service with enough information to permit the 577 transport PNC to instantiate and control the ODUflex end-to-end data 578 plane connections as well as the Ethernet Bridging and adaptation 579 functions on the boundary nodes: S3&S6 (VLAN -> MAC -> ODUflex), S3&S6 580 (ODUflex -> MAC -> VLAN), S2 (VLAN -> ODUflex) and S2 (ODUflex -> VLAN). 582 4.4. Multi-functional Access Links 584 This use case assumes that some physical links interconnecting the 585 IP routers and the transport network can be configured in different 586 modes, e.g., as OTU2 or STM-64 or 10GE. 588 This configuration can be done a priori by means outside the scope 589 of this document. In this case, these links will appear at the MPI 590 either as an ODU Link or as an STM-64 Link or as a 10GE Link 591 (depending on the a priori configuration) and will be controlled at 592 the MPI as discussed in section 4.3. 594 It is also possible not to configure these links a priori and to leave 595 the decision on how to configure them, based on the service 596 configuration, to be controlled at the MPI.
598 For example, if the physical link between C-R1 and S3 is a multi- 599 functional access link while the physical links between C-R3 and S6 600 and between C-R4 and S2 are STM-64 and 10GE physical links 601 respectively, it is possible at the MPI to configure either an STM- 602 64 Private Line service between C-R1 and C-R3 or an EPL service 603 between C-R1 and C-R4. 605 The traffic flow between C-R1 and C-R3 can be summarized as: 607 C-R1 (|PKT| -> STM-64), S3 (STM-64 -> |ODU2|), S5 (|ODU2|), 608 S6 (|ODU2| -> STM-64), C-R3 (STM-64 -> |PKT|) 610 The traffic flow between C-R1 and C-R4 can be summarized as: 612 C-R1 (|PKT| -> ETH), S3 (ETH -> |ODU2|), S1 (|ODU2|), 613 S2 (|ODU2| -> ETH), C-R4 (ETH -> |PKT|) 615 The MDSC should be capable, via the MPI, of requesting the setup of 616 either service with enough information to permit the transport 617 PNC to instantiate and control the ODU2 end-to-end data plane 618 connection as well as the adaptation functions inside S3 and S2 or 619 S6. 621 4.5. Protection Requirements 623 Protection switching provides a pre-allocated survivability 624 mechanism, typically realized via linear protection methods, and 625 can be configured to operate as 1+1 unidirectional (the most 626 common OTN protection method), 1+1 bidirectional or 1:n 627 bidirectional. This ensures fast and simple service survivability. 629 The MDSC needs to be capable of requesting that the transport PNC 630 configure protection when requesting the setup of the connectivity 631 services described in section 4.3. 633 Since in this use case switching within the 634 transport network domain is assumed to be performed in one layer only, 635 protection switching within the transport network domain can also only be 636 provided at the OTN ODU layer, for all the services defined in 637 section 4.3. 639 It may be necessary to consider not only protection, but also 640 restoration functions in the future. Restoration methods would 641 provide the capability to reroute traffic and restore connectivity 642 around network faults, without the network penalty imposed by 643 dedicated 1+1 protection schemes. 645 4.5.1. Linear Protection 647 It is possible to protect any service defined in section 4.3 from 648 failures within the OTN transport domain by configuring OTN linear 649 protection in the data plane between nodes S3 and S6. 651 It is assumed that the OTN linear protection is configured with the 652 1+1 unidirectional protection switching type, as defined in [ITU-T 653 G.808.1-2014] and [ITU-T G.873.1-2014], as well as in [RFC4427]. 655 In these scenarios, a working transport entity and a protection 656 transport entity, as defined in [ITU-T G.808.1-2014] (or a working 657 LSP and a protection LSP, as defined in [RFC4427]), should be 658 configured in the data plane, for example: 660 Working transport entity: S3, S5, S6 662 Protection transport entity: S3, S4, S8, S7, S6 664 The transport PNC should be capable of reporting to the MDSC which 665 transport entity, as defined in [ITU-T G.808.1-2014], is active in 666 the data plane. 668 Given the fast dynamics of protection switching operations in the 669 data plane (50ms recovery time), this reporting is not expected to 670 be in real-time. 672 It is also worth noting that with unidirectional protection 673 switching, e.g., 1+1 unidirectional protection switching, the active 674 transport entity may be different in the two directions.
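   The following hypothetical JSON fragment sketches the additional
   protection information that the MDSC would need to convey at the
   MPI, and that the transport PNC would need to report back, for a
   1+1 unidirectional protected service in this use case. As for the
   previous sketches, the attribute names are illustrative only and do
   not correspond to any specific IETF YANG module.

      {
        "protection": {
          "protection-type": "1+1-unidirectional",
          "working-transport-entity": ["S3", "S5", "S6"],
          "protection-transport-entity": ["S3", "S4", "S8", "S7", "S6"],
          "reported-active-transport-entity": {
            "forward": "working",
            "backward": "protection"
          }
        }
      }

   The active transport entity is reported per direction since, as
   noted above, with 1+1 unidirectional protection switching the two
   directions may select different transport entities.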
676 5. Use Case 2: Single-domain with multi-layer 678 5.1. Reference Network 680 The current considerations discussed in this document are based on 681 the following reference network: 683 - single transport domain: OTN and OCh multi-layer network 685 In this use case, the same reference network shown in Figure 1 is 686 considered. The only difference is that all the transport nodes are 687 capable of switching at the ODU as well as at the OCh layer. 689 All the physical links within the transport network are therefore 690 assumed to be OCh links. With the exception of the access 691 links, no internal ODU link exists before an OCh end-to-end data 692 plane connection is created within the network. 694 The controlling hierarchy is the same as described in Figure 2. 696 The interface within the scope of this document is the Transport MPI, 697 which should be capable of controlling both the OTN and OCh layers. 699 5.2. Topology Abstractions 701 A grey topology type B abstraction is assumed: the abstract nodes and 702 links exposed at the MPI correspond 1:1 with the physical nodes and 703 links controlled by the PNC, but the PNC abstracts/hides at least 704 some optical parameters to be used within the OCh layer. 706 5.3. Service Configuration 708 The same service scenarios, as described in section 4.3, are also 709 applicable to this use case, with the only difference that end-to- 710 end OCh data plane connections will need to be set up before the ODU data 711 plane connections. 713 6. Use Case 3: Multi-domain with single-layer 715 6.1. Reference Network 717 In this section we focus on a multi-domain reference network with 718 homogeneous technologies: 720 - multiple transport domains: OTN networks 722 Figure 3 shows the network physical topology, composed of three 723 transport network domains providing transport services to an IP 724 customer network through eight access links: 726 ........................ 727 .......... : : 728 : : : Network domain 1 : ............. 729 :Customer: : : : : 730 :domain 1: : S1 -------+ : : Network : 731 : : : / \ : : domain 3 : .......... 732 : C-R1 ------- S3 ----- S4 \ : : : : : 733 : : : \ \ S2 --------+ : :Customer: 734 : : : \ \ | : : \ : :domain 3: 735 : : : S5 \ | : : \ : : : 736 : C-R2 ------+ / \ \ | : : S31 --------- C-R7 : 737 : : : \ / \ \ | : : / \ : : : 738 : : : S6 ---- S7 ---- S8 ------ S32 S33 ------ C-R8 : 739 : : : / | | : : / \ / : :........: 740 : C-R3 ------+ | | : :/ S34 : 741 : : :..........|.......|...: / / : 742 :........: | | /:.../.......: 743 | | / / 744 ...........|.......|..../..../... 745 : | | / / : .......... 746 : Network | | / / : : : 747 : domain 2 | | / / : :Customer: 748 : S11 ---- S12 / : :domain 2: 749 : / | \ / : : : 750 : S13 S14 | S15 ------------- C-R4 : 751 : | \ / \ | \ : : : 752 : | S16 \ | \ : : : 753 : | / S17 -- S18 --------- C-R5 : 754 : | / \ / : : : 755 : S19 ---- S20 ---- S21 ------------ C-R6 : 756 : : : : 757 :...............................: :........: 759 Figure 3 Reference network for Use Case 3 761 It is worth noting that network domain 1 is identical to the 762 transport domain shown in Figure 1. 764 -------------- 765 | Client | 766 | Controller | 767 -------------- 768 | 769 ....................|....................... 770 | 771 ---------------- 772 | | 773 | MDSC | 774 | | 775 ---------------- 776 / | \ 777 / | \ 778 ............../.....|......\................
779 / | \ 780 / ---------- \ 781 / | PNC2 | \ 782 / ---------- \ 783 ---------- | \ 784 | PNC1 | ----- \ 785 ---------- ( ) ---------- 786 | ( ) | PNC3 | 787 ----- ( Network ) ---------- 788 ( ) ( Domain 2 ) | 789 ( ) ( ) ----- 790 ( Network ) ( ) ( ) 791 ( Domain 1 ) ----- ( ) 792 ( ) ( Network ) 793 ( ) ( Domain 3 ) 794 ----- ( ) 795 ( ) 796 ----- 798 Figure 4 Controlling Hierarchy for Use Case 3 800 In this section we address the case where the CNC controls the 801 customer IP network and requests transport connectivity among IP 802 routers, via the CMI, to an MDSC which coordinates, via three MPIs, 803 the control of a multi-domain transport network through three PNCs. 805 The interfaces within the scope of this document are the three MPIs, 806 while the interface between the CNC and the IP routers, as well as 807 considerations about the CMI, are outside the scope of this 808 document. 810 6.2. Topology Abstractions 812 Each PNC should provide the MDSC with a topology abstraction of its 813 domain's network topology. 815 Each PNC provides the topology abstraction of its own domain 816 independently of the others; it is therefore possible that 817 different PNCs provide different types of topology abstraction. 819 As an example, we can assume that: 821 o PNC1 provides a white topology abstraction (as in use case 1, 822 described in section 4.2); 824 o PNC2 provides a type A grey topology abstraction; 826 o PNC3 provides a type B grey topology abstraction, with two 827 abstract nodes (AN31 and AN32). They abstract respectively nodes 828 S31+S33 and nodes S32+S34. At the MPI, only the abstract nodes 829 should be reported: the mapping between the abstract nodes (AN31 830 and AN32) and the physical nodes (S31, S32, S33 and S34) should 831 be done internally by the PNC (see the illustrative sketch below). 833 The MDSC should be capable of gluing together these different abstract 834 topologies to build its own view of the multi-domain network 835 topology. This might require proper administrative configuration or 836 other mechanisms (to be defined/analysed).
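   As a purely illustrative sketch of the type B grey topology
   abstraction provided by PNC3, the following hypothetical JSON
   fragment shows what could be exposed at the MPI: only the abstract
   nodes AN31 and AN32, the abstract link between them and the access
   and inter-domain links are reported, while the physical nodes S31,
   S32, S33 and S34 remain hidden inside the PNC. The attribute names,
   as well as the naming of the access and inter-domain links, are
   illustrative only.

      {
        "topology": {
          "topology-id": "otn-domain-3-abstract",
          "abstraction": "grey-type-B",
          "node": [
            {"node-id": "AN31",
             "access-links": ["AN31-to-C-R7", "AN31-to-C-R8"]},
            {"node-id": "AN32", "access-links": []}
          ],
          "link": [
            {"link-id": "AN31-AN32", "source": "AN31",
             "destination": "AN32"}
          ],
          "inter-domain-link": [
            {"link-id": "S2-to-AN31"},
            {"link-id": "S8-to-AN32"},
            {"link-id": "AN32-to-S15"}
          ]
        }
      }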
838 6.3. Service Configuration 840 In the following use cases, it is assumed that the CNC is capable of 841 requesting service connectivity from the MDSC to support IP router 842 connectivity. 844 The same service scenarios, as described in section 4.3, are also 845 applicable to this use case, with the only difference that the two 846 IP routers to be interconnected are attached to transport nodes 847 which belong to different PNC domains and are under the control of 848 the CNC. 850 As with the service scenarios in section 4.3, the type of services 851 could depend on the type of physical links (e.g., OTN, ETH 852 or SDH links) between the customer's routers and the multi-domain 853 transport network. The configuration of the different adaptations 854 inside the IP routers is performed by means that are outside the scope 855 of this document, because it is neither under the control of nor visible to the 856 MDSC or the PNCs. It is assumed that the CNC is capable of 857 requesting the proper configuration of the different adaptation 858 functions inside the customer's IP routers, by means which are 859 outside the scope of this document. 861 It is also assumed that the CNC is capable, via the CMI, of requesting 862 from the MDSC the setup of these services with enough information to 863 enable the MDSC to coordinate the different PNCs to instantiate and 864 control the ODU2 data plane connection through nodes S3, S1, S2, 865 S31, S33, S34, S15 and S18, as well as the adaptation functions 866 inside nodes S3 and S18, when needed. 868 As described in section 6.2, the MDSC should have its own view of 869 the end-to-end network topology and use it for its own path 870 computation to understand that it needs to coordinate with PNC1, 871 PNC2 and PNC3 the setup and control of a multi-domain ODU2 data 872 plane connection. 874 6.3.1. ODU Transit 876 In order to set up a 10Gb IP link between C-R1 and C-R5, an ODU2 end- 877 to-end data plane connection needs to be created between C-R1 and C-R5, 878 crossing transport nodes S3, S1, S2, S31, S33, S34, S15 and S18, 879 which belong to different PNC domains. 881 The traffic flow between C-R1 and C-R5 can be summarized as: 883 C-R1 (|PKT| -> ODU2), S3 (|ODU2|), S1 (|ODU2|), S2 (|ODU2|), 884 S31 (|ODU2|), S33 (|ODU2|), S34 (|ODU2|), 885 S15 (|ODU2|), S18 (|ODU2|), C-R5 (ODU2 -> |PKT|) 887 6.3.2. EPL over ODU 889 In order to set up a 10Gb IP link between C-R1 and C-R5, an EPL 890 service needs to be created between C-R1 and C-R5, supported by an 891 ODU2 end-to-end data plane connection between transport nodes S3 and 892 S18, crossing transport nodes S1, S2, S31, S33, S34 and S15, which 893 belong to different PNC domains. 895 The traffic flow between C-R1 and C-R5 can be summarized as: 897 C-R1 (|PKT| -> ETH), S3 (ETH -> |ODU2|), S1 (|ODU2|), 898 S2 (|ODU2|), S31 (|ODU2|), S33 (|ODU2|), S34 (|ODU2|), 899 S15 (|ODU2|), S18 (|ODU2| -> ETH), C-R5 (ETH -> |PKT|) 901 6.3.3. Other OTN Client Services 903 In order to set up a 10Gb IP link between C-R1 and C-R5 using, for 904 example, SDH physical links between the IP routers and the transport 905 network, an STM-64 Private Line service needs to be created between 906 C-R1 and C-R5, supported by an ODU2 end-to-end data plane connection 907 between transport nodes S3 and S18, crossing transport nodes S1, S2, 908 S31, S33, S34 and S15, which belong to different PNC domains. 910 The traffic flow between C-R1 and C-R5 can be summarized as: 912 C-R1 (|PKT| -> STM-64), S3 (STM-64 -> |ODU2|), S1 (|ODU2|), 913 S2 (|ODU2|), S31 (|ODU2|), S33 (|ODU2|), S34 (|ODU2|), 914 S15 (|ODU2|), S18 (|ODU2| -> STM-64), C-R5 (STM-64 -> |PKT|) 916 6.3.4. EVPL over ODU 918 In order to set up two 1Gb IP links, between C-R1 and C-R3 and between 919 C-R1 and C-R5, two EVPL services need to be created, supported by 920 two ODU0 end-to-end connections respectively between S3 and S6, 921 crossing transport node S5, and between S3 and S18, crossing 922 transport nodes S1, S2, S31, S33, S34 and S15, which belong to 923 different PNC domains. 925 The VLAN configuration on the access links is the same as described 926 in section 4.3.4. 928 The traffic flow between C-R1 and C-R3 is the same as described in 929 section 4.3.4. 931 The traffic flow between C-R1 and C-R5 can be summarized as: 933 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |ODU0|), S1 (|ODU0|), 934 S2 (|ODU0|), S31 (|ODU0|), S33 (|ODU0|), S34 (|ODU0|), 935 S15 (|ODU0|), S18 (|ODU0| -> VLAN), C-R5 (VLAN -> |PKT|)
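   To illustrate the multi-domain coordination performed by the MDSC,
   the following hypothetical sketch shows how the end-to-end ODU2
   data plane connection of sections 6.3.1 to 6.3.3 could be
   decomposed into three per-domain segment requests, one toward each
   PNC. How segments, access links and inter-domain links are actually
   identified at the MPI is part of the applicability and gap
   analysis; the names below are illustrative only.

      [
        {"pnc": "PNC1",
         "segment": {"odu-type": "ODU2", "ingress": "S3-to-C-R1",
                     "egress": "S2-to-AN31"}},
        {"pnc": "PNC3",
         "segment": {"odu-type": "ODU2", "ingress": "S2-to-AN31",
                     "egress": "AN32-to-S15"}},
        {"pnc": "PNC2",
         "segment": {"odu-type": "ODU2", "ingress": "AN32-to-S15",
                     "egress": "S18-to-C-R5"}}
      ]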
937 6.3.5. EVPLAN and EVPTree Services 939 In order to set up an IP subnet between C-R1, C-R2, C-R3 and C-R5, an 940 EVPLAN/EVPTree service needs to be created, supported by two ODUflex 941 end-to-end connections respectively between S3 and S6, crossing 942 transport node S5, and between S3 and S18, crossing transport nodes 943 S1, S2, S31, S33, S34 and S15, which belong to different PNC domains. 945 The VLAN configuration on the access links is the same as described 946 in section 4.3.5. 948 The configuration of the Ethernet Bridging capabilities on nodes S3 949 and S6 is the same as described in section 4.3.5, while the 950 configuration on node S18 is similar to the configuration of node S2 951 described in section 4.3.5. 953 The traffic flow between C-R1 and C-R3 is the same as described in 954 section 4.3.5. 956 The traffic flow between C-R1 and C-R5 can be summarized as: 958 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |MAC| -> |ODUflex|), 959 S1 (|ODUflex|), S2 (|ODUflex|), S31 (|ODUflex|), 960 S33 (|ODUflex|), S34 (|ODUflex|), 961 S15 (|ODUflex|), S18 (|ODUflex| -> VLAN), C-R5 (VLAN -> |PKT|) 963 6.4. Multi-functional Access Links 965 The same considerations as in section 4.4 apply, with the only 966 difference that the ODU data plane connections could be set up across 967 multiple PNC domains. 969 For example, if the physical link between C-R1 and S3 is a multi- 970 functional access link while the physical links between C-R7 and S31 971 and between C-R5 and S18 are STM-64 and 10GE physical links 972 respectively, it is possible to configure either an STM-64 Private 973 Line service between C-R1 and C-R7 or an EPL service between C-R1 974 and C-R5. 976 The traffic flow between C-R1 and C-R7 can be summarized as: 978 C-R1 (|PKT| -> STM-64), S3 (STM-64 -> |ODU2|), S1 (|ODU2|), 979 S2 (|ODU2|), S31 (|ODU2| -> STM-64), C-R7 (STM-64 -> |PKT|) 981 The traffic flow between C-R1 and C-R5 can be summarized as: 983 C-R1 (|PKT| -> ETH), S3 (ETH -> |ODU2|), S1 (|ODU2|), 984 S2 (|ODU2|), S31 (|ODU2|), S33 (|ODU2|), S34 (|ODU2|), 985 S15 (|ODU2|), S18 (|ODU2| -> ETH), C-R5 (ETH -> |PKT|) 987 6.5. Protection Scenarios 989 The MDSC needs to be capable of coordinating the different PNCs to 990 configure protection switching when requesting the setup of the 991 connectivity services described in section 6.3. 993 Since in this use case switching within the 994 transport network domain is assumed to be performed in one layer only, 995 protection switching within the transport network domain can also only be 996 provided at the OTN ODU layer, for all the services defined in 997 section 6.3. 999 6.5.1. Linear Protection (end-to-end) 1001 In order to protect any service defined in section 6.3 from failures 1002 within the OTN multi-domain transport network, the MDSC should be 1003 capable of coordinating the different PNCs to configure and control OTN 1004 linear protection in the data plane between nodes S3 and S18. 1006 The considerations in section 4.5.1 are also applicable here, with 1007 the only difference that the MDSC needs to coordinate with the different 1008 PNCs the setup and control of the OTN linear protection as well as 1009 of the working and protection transport entities (working and 1010 protection LSPs). 1012 Two cases can be considered.
1014 In one case, the working and protection transport entities pass 1015 through the same PNC domains: 1017 Working transport entity: S3, S1, S2, 1018 S31, S33, S34, 1019 S15, S18 1021 Protection transport entity: S3, S4, S8, 1022 S32, 1023 S12, S17, S18 1025 In another case, the working and protection transport entities can 1026 pass through different PNC domains: 1028 Working transport entity: S3, S5, S7, 1029 S11, S12, S17, S18 1031 Protection transport entity: S3, S1, S2, 1032 S31, S33, S34, 1033 S15, S18 1035 6.5.2. Segmented Protection 1037 In order to protect any service defined in section 6.3 from failures 1038 within the OTN multi-domain transport network, the MDSC should be 1039 capable of requesting that each PNC configure OTN intra-domain protection 1040 when requesting the setup of the ODU2 data plane connection segment. 1042 If linear protection is used within a domain, the considerations in 1043 section 4.5.1 are also applicable here, but only for the PNC controlling 1044 the domain where intra-domain linear protection is provided. 1046 If PNC1 provides linear protection, the working and protection 1047 transport entities could be: 1049 Working transport entity: S3, S1, S2 1051 Protection transport entity: S3, S4, S8, S2 1053 If PNC2 provides linear protection, the working and protection 1054 transport entities could be: 1056 Working transport entity: S15, S18 1058 Protection transport entity: S15, S12, S17, S18 1060 If PNC3 provides linear protection, the working and protection 1061 transport entities could be: 1063 Working transport entity: S31, S33, S34 1065 Protection transport entity: S31, S32, S34 1067 7. Use Case 4: Multi-domain and multi-layer 1069 7.1. Reference Network 1071 The current considerations discussed in this document are based on 1072 the following reference network: 1074 - multiple transport domains: OTN and OCh multi-layer networks 1076 In this use case, the reference network shown in Figure 3 is used. 1077 The only difference is that all the transport nodes are capable of 1078 switching at either the ODU or the OCh layer. 1080 All the physical links within each transport network domain are 1081 therefore assumed to be OCh links, while the inter-domain links are 1082 assumed to be ODU links as described in section 6.1 (multi-domain 1083 with single layer - OTN network). 1085 Therefore, with the exception of the access and inter-domain links, 1086 no ODU link exists within each domain before an OCh single-domain 1087 end-to-end data plane connection is created within the network. 1089 The controlling hierarchy is the same as described in Figure 4. 1091 The interfaces within the scope of this document are the three MPIs, 1092 which should be capable of controlling both the OTN and OCh layers 1093 within each PNC domain. 1095 7.2. Topology Abstractions 1097 Each PNC should provide the MDSC with a topology abstraction of its own 1098 network topology, as described in section 5.2. 1100 As an example, it is assumed that: 1102 o PNC1 provides a type A grey topology abstraction (as in use 1103 case 2, described in section 5.2); 1105 o PNC2 provides a type B grey topology abstraction (as in use 1106 case 3, described in section 6.2); 1108 o PNC3 provides a type B grey topology abstraction with two 1109 abstract nodes, as in use case 3 (described in section 6.2), 1110 and hiding at least some optical parameters to be used within the 1111 OCh layer, as in use case 2 (described in section 5.2). 1113 7.3.
Service Configuration 1115 The same service scenarios, as described in section 6.3, are also 1116 applicable to this use case, with the only difference that single- 1117 domain end-to-end OCh data plane connections need to be set up 1118 before the ODU data plane connections. 1120 8. Security Considerations 1122 Typically, OTN networks ensure a high level of security and data 1123 privacy through hard partitioning of traffic onto isolated circuits. 1125 There may be additional security considerations applied to specific 1126 use cases, but common security considerations do exist and these 1127 must be considered when controlling the underlying infrastructure to 1128 deliver transport services: 1130 o use of RESTCONF and the need to reuse security between RESTCONF 1131 components; 1133 o use of authentication and policy to govern which transport 1134 services may be requested by the user or application; 1136 o how secure and isolated connectivity may also be requested as an 1137 element of a service and mapped down to the OTN level. 1139 9. IANA Considerations 1141 This document requires no IANA actions. 1143 10. References 1145 10.1. Normative References 1147 [RFC7926] Farrel, A. et al., "Problem Statement and Architecture for 1148 Information Exchange between Interconnected Traffic- 1149 Engineered Networks", BCP 206, RFC 7926, July 2016. 1151 [RFC4427] Mannie, E., Papadimitriou, D., "Recovery (Protection and 1152 Restoration) Terminology for Generalized Multi-Protocol 1153 Label Switching (GMPLS)", RFC 4427, April 2006. 1155 [ACTN-Frame] Ceccarelli, D., Lee, Y. et al., "Framework for 1156 Abstraction and Control of Transport Networks", draft- 1157 ietf-teas-actn-framework, work in progress. 1159 [ITU-T G.709-2016] ITU-T Recommendation G.709 (06/16), "Interfaces 1160 for the optical transport network", June 2016. 1162 [ITU-T G.808.1-2014] ITU-T Recommendation G.808.1 (05/14), "Generic 1163 protection switching - Linear trail and subnetwork 1164 protection", May 2014. 1166 [ITU-T G.873.1-2014] ITU-T Recommendation G.873.1 (05/14), "Optical 1167 transport network (OTN): Linear protection", May 2014. 1169 10.2. Informative References 1171 [TE-Topo] Liu, X. et al., "YANG Data Model for TE Topologies", 1172 draft-ietf-teas-yang-te-topo, work in progress. 1174 [ACTN-YANG] Zhang, X. et al., "Applicability of YANG models for 1175 Abstraction and Control of Traffic Engineered Networks", 1176 draft-zhang-teas-actn-yang, work in progress. 1178 [Path-Compute] Busi, I., Belotti, S. et al., "Yang model for 1179 requesting Path Computation", draft-busibel-teas-yang- 1180 path-computation, work in progress. 1182 [RESTCONF] Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF 1183 Protocol", RFC 8040, DOI 10.17487/RFC8040, January 2017, 1184 <https://www.rfc-editor.org/info/rfc8040>. 1186 [ONF TR-527] ONF Technical Recommendation TR-527, "Functional 1187 Requirements for Transport API", June 2016. 1189 [ONF GitHub] ONF Open Transport (SNOWMASS) 1190 https://github.com/OpenNetworkingFoundation/Snowmass- 1191 ONFOpenTransport 1193 11. Acknowledgments 1195 The authors would like to thank all members of the Transport NBI 1196 Design Team involved in the definition of use cases, gap analysis 1197 and guidelines for using the IETF YANG models at the Northbound 1198 Interface (NBI) of a Transport SDN Controller.
1200 The authors would like to thank Xian Zhang, Anurag Sharma, Sergio 1201 Belotti, Tara Cummings, Michael Scharf, Karthik Sethuraman, Oscar 1202 Gonzalez de Dios, Hans Bjursrom and Italo Busi for having initiated 1203 the work on gap analysis for transport NBI and having provided 1204 foundations work for the development of this document. 1206 This document was prepared using 2-Word-v2.0.template.dot. 1208 Authors' Addresses 1210 Italo Busi (Editor) 1211 Huawei 1212 Email: italo.busi@huawei.com 1214 Daniel King (Editor) 1215 Lancaster University 1216 Email: d.king@lancaster.ac.uk 1218 Sergio Belotti 1219 Nokia 1220 Email: sergio.belotti@nokia.com 1222 Gianmarco Bruno 1223 Ericsson 1224 Email: gianmarco.bruno@ericsson.com 1226 Young Lee 1227 Huawei 1228 Email: leeyoung@huawei.com 1230 Victor Lopez 1231 Telefonica 1232 Email: victor.lopezalvarez@telefonica.com 1234 Carlo Perocchio 1235 Ericsson 1236 Email: carlo.perocchio@ericsson.com 1238 Haomian Zheng 1239 Huawei 1240 Email: zhenghaomian@huawei.com