1 CCAMP Working Group I. Busi (Ed.)
2 Internet Draft Huawei
3 Intended status: Informational D. King
4 Lancaster University
6 Expires: March 2018 September 20, 2017
8 Transport Northbound Interface Applicability Statement and Use Cases
9 draft-tnbidt-ccamp-transport-nbi-use-cases-03
11 Status of this Memo
13 This Internet-Draft is submitted in full conformance with the
14 provisions of BCP 78 and BCP 79.
16 Internet-Drafts are working documents of the Internet Engineering
17 Task Force (IETF), its areas, and its working groups. Note that
18 other groups may also distribute working documents as
19 Internet-Drafts.
21 Internet-Drafts are draft documents valid for a maximum of six
22 months and may be updated, replaced, or obsoleted by other documents
23 at any time. It is inappropriate to use Internet-Drafts as
24 reference material or to cite them other than as "work in progress."
26 The list of current Internet-Drafts can be accessed at
27 http://www.ietf.org/ietf/1id-abstracts.txt
29 The list of Internet-Draft Shadow Directories can be accessed at
30 http://www.ietf.org/shadow.html
32 This Internet-Draft will expire on March 20, 2018.
34 Copyright Notice
36 Copyright (c) 2017 IETF Trust and the persons identified as the
37 document authors. All rights reserved.
39 This document is subject to BCP 78 and the IETF Trust's Legal
40 Provisions Relating to IETF Documents
41 (http://trustee.ietf.org/license-info) in effect on the date of
42 publication of this document. Please review these documents
43 carefully, as they describe your rights and restrictions with
44 respect to this document.
46 Abstract
48 Transport network domains, including Optical Transport Network (OTN)
49 and Wavelength Division Multiplexing (WDM) networks, are typically
50 deployed based on single-vendor or single-technology platforms. They are
51 often managed using proprietary interfaces to dedicated Element
52 Management Systems (EMS), Network Management Systems (NMS) and,
53 increasingly, Software Defined Network (SDN) controllers.
55 A well-defined open interface to each domain management system or
56 controller is required for network operators to facilitate control
57 automation and orchestrate end-to-end services across multi-domain
58 networks. These functions may be enabled using standardized data
59 models (e.g., YANG) and an appropriate protocol (e.g., RESTCONF).
61 This document describes the key use cases and requirements for
62 transport network control and management. It reviews proposed and
63 existing IETF transport network data models, their applicability,
64 and highlights gaps and requirements.
66 Table of Contents 68 1. Introduction ................................................3 69 1.1. Scope of this document .................................4 70 2. Terminology .................................................4 71 3. Conventions used in this document............................4 72 3.1. Topology and traffic flow processing ...................4 73 4. Use Case 1: Single-domain with single-layer .................5 74 4.1. Reference Network ......................................5 75 4.1.1. Single Transport Domain - OTN Network .............5 76 4.2. Topology Abstractions ..................................8 77 4.3. Service Configuration ..................................9 78 4.3.1. ODU Transit .......................................9 79 4.3.2. EPL over ODU ......................................10 80 4.3.3. Other OTN Client Services .........................10 81 4.3.4. EVPL over ODU .....................................11 82 4.3.5. EVPLAN and EVPTree Services .......................12 83 4.4. Multi-functional Access Links ..........................13 84 4.5. Protection Requirements ................................14 85 4.5.1. Linear Protection .................................15 86 5. Use Case 2: Single-domain with multi-layer ..................15 87 5.1. Reference Network ......................................15 88 5.2. Topology Abstractions ..................................16 89 5.3. Service Configuration ..................................16 90 6. Use Case 3: Multi-domain with single-layer ..................16 91 6.1. Reference Network ......................................16 92 6.2. Topology Abstractions ..................................19 93 6.3. Service Configuration ..................................19 94 6.3.1. ODU Transit .......................................20 95 6.3.2. EPL over ODU ......................................20 96 6.3.3. Other OTN Client Services .........................21 97 6.3.4. 
EVPL over ODU .....................................21
98 6.3.5. EVPLAN and EVPTree Services .......................21
99 6.4. Multi-functional Access Links ..........................22
100 6.5. Protection Scenarios ...................................22
101 6.5.1. Linear Protection (end-to-end) ....................23
102 6.5.2. Segmented Protection ..............................23
103 7. Use Case 4: Multi-domain and multi-layer ....................24
104 7.1. Reference Network ......................................24
105 7.2. Topology Abstractions ..................................25
106 7.3. Service Configuration ..................................25
107 8. Security Considerations .....................................25
108 9. IANA Considerations .........................................26
109 10. References .................................................26
110 10.1. Normative References ..................................26
111 10.2. Informative References ................................26
112 11. Acknowledgments ............................................27
114 1. Introduction
116 Transport of packet services is critical for a wide range of
117 applications and services, including: data center and LAN
118 interconnects, Internet service backhauling, mobile backhaul and
119 enterprise Carrier Ethernet Services. These services are typically
120 set up using stovepipe NMS and EMS platforms, often requiring
121 proprietary management platforms and legacy management interfaces. A
122 clear goal for operators is to automate the setup of transport
123 services across multiple transport technology domains.
125 A common open interface (API) to each domain controller and/or
126 management system is a prerequisite for network operators to control
127 multi-vendor and multi-domain networks and also enable service
128 provisioning coordination/automation.
This can be achieved by using
129 standardized YANG models, used together with an appropriate protocol
130 (e.g., [RESTCONF]).
132 This document describes key use cases for analyzing the
133 applicability of the existing models defined by the IETF for
134 transport networks. The intention of this document is to become an
135 applicability statement that provides detailed descriptions of how
136 IETF transport models are applied to solve the described use cases
137 and requirements.
139 1.1. Scope of this document
141 This document assumes a reference architecture, including
142 interfaces, based on the Abstraction and Control of Traffic-
143 Engineered Networks (ACTN), defined in [ACTN-Frame].
145 The focus of this document is on the MPI (the interface between the
146 Multi-Domain Service Coordinator (MDSC) and a Physical Network
147 Controller (PNC) controlling a transport network domain).
149 The relationship between the current IETF YANG models and the types
150 of ACTN interfaces can be found in [ACTN-YANG].
152 The ONF Technical Recommendations for Functional Requirements for
153 the transport API in [ONF TR-527] and the ONF transport API multi-
154 layer examples in [ONF GitHub] have been considered as an input for
155 this work.
157 Considerations about the CMI (the interface between the Customer Network
158 Controller (CNC) and the MDSC) are outside the scope of this
159 document.
161 2. Terminology
163 E-LINE: Ethernet Line
165 EPL: Ethernet Private Line
167 EVPL: Ethernet Virtual Private Line
169 OTH: Optical Transport Hierarchy
171 OTN: Optical Transport Network
173 3. Conventions used in this document
175 3.1. Topology and traffic flow processing
177 The traffic flow between different nodes is specified as an ordered
178 list of nodes, separated by commas, indicating within the brackets
179 the processing within each node:
181 <node> (<processing>) {, <node> (<processing>)}
183 The order represents the order of the traffic flow being forwarded
184 through the network.
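As an informal illustration (not part of the draft), the notation above can be modeled as an ordered list of (node, processing) pairs; the example renders the ODU2 transit flow used later in section 4.3.1:

```python
# Illustrative sketch only: model a traffic flow as an ordered list of
# (node, processing) pairs and render it in the draft's notation
# "<node> (<processing>) {, <node> (<processing>)}".

def render_flow(hops):
    """Render the ordered hops as a comma-separated flow string."""
    return ", ".join(f"{node} ({processing})" for node, processing in hops)

# ODU2 transit flow from section 4.3.1 of this draft.
odu2_transit = [
    ("C-R1", "|PKT| -> ODU2"),
    ("S3",   "|ODU2|"),
    ("S5",   "|ODU2|"),
    ("S6",   "|ODU2|"),
    ("C-R3", "ODU2 -> |PKT|"),
]

print(render_flow(odu2_transit))
```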
186 The processing can be either an adaptation of a client layer into a
187 server layer "(client -> server)" or switching at a given layer
188 "([switching])". Multi-layer switching is indicated by two-layer
189 switching with client/server adaptation: "([client] -> [server])".
191 For example, consider the following traffic flow:
193 C-R1 (|PKT| -> ODU2), S3 (|ODU2|), S5 (|ODU2|), S6 (|ODU2|),
194 C-R3 (ODU2 -> |PKT|)
196 Node C-R1 is switching at the packet (PKT) layer and mapping packets
197 into an ODU2 before transmission to node S3. Nodes S3, S5 and S6 are
198 switching at the ODU2 layer: S3 sends the ODU2 traffic to S5, which
199 then sends it to S6, which finally sends it to C-R3. Node C-R3
200 terminates the ODU2 from S6 before switching at the packet (PKT)
201 layer.
203 The paths of working and protection transport entities are specified
204 as an ordered list of nodes, separated by commas:
206 <node> {, <node>}
208 The order represents the order of the traffic flow being forwarded
209 through the network in the forward direction. In the case of
210 bidirectional paths, the forward and backward directions are
211 selected arbitrarily, but the convention is consistent between
212 working/protection path pairs as well as across multiple domains.
214 4. Use Case 1: Single-domain with single-layer
216 4.1. Reference Network
218 The considerations discussed in this document are based on
219 the following reference networks:
221 - single transport domain: OTN network
223 4.1.1. Single Transport Domain - OTN Network
225 As shown in Figure 1, the network physical topology is composed of a
226 single-domain transport network providing transport services to an
227 IP network through five access links.
229 ................................................
230 : IP domain :
231 : .............................. :
232 : : ........................
: :
233 : : : : : :
234 : : : S1 -------- S2 ------ C-R4 :
235 : : : / | : : :
236 : : : / | : : :
237 : C-R1 ------ S3 ----- S4 | : : :
238 : : : \ \ | : : :
239 : : : \ \ | : : :
240 : : : S5 \ | : : :
241 : C-R2 -----+ / \ \ | : : :
242 : : : \ / \ \ | : : :
243 : : : S6 ---- S7 ---- S8 ------ C-R5 :
244 : : : / : : :
245 : C-R3 -----+ : : :
246 : : : Transport domain : : :
247 : : : : : :
248 :........: :......................: :........:
249 Figure 1 Reference network for Use Case 1
251 The IP and transport (OTN) domains are composed of five
252 routers, C-R1 to C-R5, and eight ODU switches, S1 to S8,
253 respectively. The transport domain acts as a transit network
providing connectivity
254 for IP layer services.
256 The behavior of the transport domain is the same whether the
257 ingress or egress service nodes in the IP domain are only attached
258 to the transport domain, or whether there are other routers in between
259 the ingress or egress nodes of the IP domain that are not also attached to
260 the transport domain. In other words, the behavior of the transport
261 network does not depend on whether C-R1, C-R2, ..., C-R5 are PE or P
262 routers for the IP services.
264 The transport domain control plane architecture follows the ACTN
265 architecture and framework document [ACTN-Frame], and its functional
266 components:
268 o The Customer Network Controller (CNC) acts as a client with respect to
269 the Multi-Domain Service Coordinator (MDSC) via the CNC-MDSC
270 Interface (CMI);
272 o The MDSC is connected to a plurality of Physical Network Controllers
273 (PNCs), one for each domain, via an MDSC-PNC Interface (MPI). Each
Each 274 PNC is responsible only for the control of its domain and the 275 MDSC is the only entity capable of multi-domain functionalities 276 as well as of managing the inter-domain links; 278 The ACTN framework facilitates the detachment of the network and 279 service control from the underlying technology and help the customer 280 express the network as desired by business needs. Therefore, care 281 must be taken to keep minimal dependency on the CMI (or no 282 dependency at all) with respect to the network domain technologies. 283 The MPI instead requires some specialization according to the domain 284 technology. 286 +-----+ 287 | CNC | 288 +-----+ 289 | 290 |CMI I/F 291 | 292 +-----------------------+ 293 | MDSC | 294 +-----------------------+ 295 | 296 |MPI I/F 297 | 298 +-------+ 299 | PNC | 300 +-------+ 301 | 302 ----- 303 ( ) 304 ( OTN ) 305 ( Physical ) 306 ( Network ) 307 ( ) 308 ----- 310 Figure 2 Controlling Hierarchy for Use Case 1 312 Once the service request is processed by the MDSC the mapping of the 313 client IP traffic between the routers (across the transport network) 314 is made in the IP routers only and is not controlled by the 315 transport PNC, and therefore transparent to the transport nodes. 317 4.2. Topology Abstractions 319 Abstraction provides a selective method for representing 320 connectivity information within a domain. There are multiple methods 321 to abstract a network topology. This document assumes the 322 abstraction method defined in [RFC7926]: 324 "Abstraction is the process of applying policy to the available TE 325 information within a domain, to produce selective information that 326 represents the potential ability to connect across the domain. 327 Thus, abstraction does not necessarily offer all possible 328 connectivity options, but presents a general view of potential 329 connectivity according to the policies that determine how the 330 domain's administrator wants to allow the domain resources to be 331 used." 
333 [TE-Topo] describes a base YANG model for TE topology without any
334 technology-specific parameters. Moreover, it defines how to abstract
335 TE network topologies.
337 [ACTN-Frame] provides the context of topology abstraction in the
338 ACTN architecture and discusses a few alternatives for the
339 abstraction methods for both packet and optical networks. This is an
340 important consideration since the choice of the abstraction method
341 impacts protocol design and the information it carries. According
342 to [ACTN-Frame], there are three types of topology:
344 o White topology: This is a case where the Physical Network
345 Controller (PNC) provides the actual network topology to the
346 Multi-Domain Service Coordinator (MDSC) without any hiding or
347 filtering. In this case, the MDSC has full knowledge of the
348 underlying network topology;
350 o Black topology: The entire domain network is abstracted as a
351 single virtual node with the access/egress links, without
352 disclosing any node internal connectivity information;
354 o Grey topology: This abstraction level is between black topology
355 and white topology from a granularity point of view. This is an
356 abstraction of TE tunnels for all pairs of border nodes. We may
357 further differentiate based on how the internal TE resources between
358 the pairs of border nodes are abstracted:
360 - Grey topology type A: border nodes with TE links between
361 them in a full-mesh fashion;
363 - Grey topology type B: border nodes with some internal
364 abstracted nodes and abstracted links.
366 For the single-domain, single-layer use case, the white topology may
367 be disseminated from the PNC to the MDSC in most cases. There may be
368 some exceptions in cases where the underlay network has
369 complex optical parameters, which do not warrant the
370 distribution of such details to the MDSC.
In such a case, the topology
371 disseminated from the PNC to the MDSC may not contain the entire TE
372 information but rather streamlined TE information. This case incurs
373 an additional step from the MDSC's standpoint when provisioning a path.
374 The MDSC may make a path computation request to the PNC to verify the
375 feasibility of the estimated path before making the final
376 provisioning request to the PNC, as outlined in [Path-Compute].
378 Topology abstraction for the CMI is for further study (to be
379 addressed in future revisions of this document).
381 4.3. Service Configuration
383 In the following use cases, the Multi-Domain Service Coordinator
384 (MDSC) needs to be able to request service connectivity from the
385 transport Physical Network Controller (PNC) to support IP router
386 connectivity. The type of services may depend on the type of
387 physical links (e.g., OTN, ETH or SDH links) between the
388 routers and the transport network.
390 As described in section 4.1.1, the control of the different adaptations
391 inside the IP routers, C-Ri (PKT -> foo) and C-Rj (foo -> PKT), is
392 assumed to be performed by means that are not under the control of,
393 and not visible to, the transport PNC. Therefore, these mechanisms are
394 outside the scope of this document.
396 4.3.1. ODU Transit
398 This use case assumes that the physical links interconnecting the IP
399 routers and the transport network are OTN links. The
400 physical/optical interconnection below the ODU layer is assumed to
401 be pre-configured and not exposed at the MPI to the MDSC.
403 To set up a 10Gb IP link between C-R1 and C-R3, an ODU2 end-to-end
404 data plane connection needs to be created between C-R1 and C-R3,
405 crossing transport nodes S3, S5, and S6.
407 The traffic flow between C-R1 and C-R3 can be summarized as:
409 C-R1 (|PKT| -> ODU2), S3 (|ODU2|), S5 (|ODU2|), S6 (|ODU2|),
410 C-R3 (ODU2 -> |PKT|)
412 The MDSC should be able, via the MPI, to request the setup of an
413 ODU2 transit service with enough information to enable the
414 transport PNC to instantiate and control the ODU2 data plane
415 connection segment through nodes S3, S5, S6.
417 4.3.2. EPL over ODU
419 This use case assumes that the physical links interconnecting the IP
420 routers and the transport network are Ethernet links.
422 In order to set up a 10Gb IP link between C-R1 and C-R3, an EPL
423 service needs to be created between C-R1 and C-R3, supported by an
424 ODU2 end-to-end connection between S3 and S6, crossing transport
425 node S5.
427 The traffic flow between C-R1 and C-R3 can be summarized as:
429 C-R1 (|PKT| -> ETH), S3 (ETH -> |ODU2|), S5 (|ODU2|),
430 S6 (|ODU2| -> ETH), C-R3 (ETH -> |PKT|)
432 The MDSC should be able, via the MPI, to request the setup of an
433 EPL service with enough information to permit the transport
434 PNC to instantiate and control the ODU2 end-to-end data plane
435 connection through nodes S3, S5, S6, as well as the adaptation
436 functions inside S3 and S6: S3&S6 (ETH -> ODU2) and S3&S6 (ODU2 ->
437 ETH).
439 4.3.3. Other OTN Client Services
441 [ITU-T G.709-2016] defines mappings of different client layers into
442 ODU. Most of them are used to provide Private Line services over
443 an OTN transport network supporting a variety of types of physical
444 access links (e.g., Ethernet, SDH STM-N, Fibre Channel, InfiniBand,
445 etc.).
447 This use case assumes that the physical links interconnecting the IP
448 routers and the transport network are any one of these possible
449 options.
451 In order to set up a 10Gb IP link between C-R1 and C-R3 using, for
452 example, STM-64 physical links between the IP routers and the
453 transport network, an STM-64 Private Line service needs to be
454 created between C-R1 and C-R3, supported by an ODU2 end-to-end data
455 plane connection between S3 and S6, crossing transport node S5.
457 The traffic flow between C-R1 and C-R3 can be summarized as:
459 C-R1 (|PKT| -> STM-64), S3 (STM-64 -> |ODU2|), S5 (|ODU2|),
460 S6 (|ODU2| -> STM-64), C-R3 (STM-64 -> |PKT|)
462 The MDSC should be able, via the MPI, to request the setup of an
463 STM-64 Private Line service with enough information to permit
464 the transport PNC to instantiate and control the ODU2 end-to-end
465 connection through nodes S3, S5, S6, as well as the adaptation
466 functions inside S3 and S6: S3&S6 (STM-64 -> ODU2) and S3&S6 (ODU2
467 -> STM-64).
469 4.3.4. EVPL over ODU
471 This use case assumes that the physical links interconnecting the IP
472 routers and the transport network are Ethernet links and that
473 different Ethernet services (e.g., EVPL) can share the same physical
474 link using different VLANs.
476 In order to set up two 1Gb IP links between C-R1 and C-R3 and between
477 C-R1 and C-R4, two EVPL services need to be created, supported by
478 two ODU0 end-to-end connections, respectively between S3 and S6,
479 crossing transport node S5, and between S3 and S2, crossing
480 transport node S1.
482 Since the two EVPL services share the same Ethernet physical
483 link between C-R1 and S3, different VLAN IDs are associated with the
484 different EVPL services: for example, VLAN IDs 10 and 20,
485 respectively.
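As an informal illustration (encodings invented, not taken from any IETF model), the VLAN-based sharing described above can be sketched as a classification of frames on the shared C-R1/S3 link into the two ODU0 connections:

```python
# Illustrative sketch: two EVPL services share the C-R1<->S3 Ethernet
# link and are demultiplexed by VLAN ID, each VLAN being mapped to its
# own ODU0 end-to-end connection.

evpl_services = {
    # (access link, VLAN ID) -> nodes of the supporting ODU0 connection
    ("C-R1<->S3", 10): ["S3", "S5", "S6"],  # towards C-R3
    ("C-R1<->S3", 20): ["S3", "S1", "S2"],  # towards C-R4
}

def classify(link, vlan_id):
    """Select the ODU0 connection serving a frame, by link and VLAN ID."""
    return evpl_services.get((link, vlan_id))

print(classify("C-R1<->S3", 10))
```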
487 The traffic flow between C-R1 and C-R3 can be summarized as:
489 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |ODU0|), S5 (|ODU0|),
490 S6 (|ODU0| -> VLAN), C-R3 (VLAN -> |PKT|)
492 The traffic flow between C-R1 and C-R4 can be summarized as:
494 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |ODU0|), S1 (|ODU0|),
495 S2 (|ODU0| -> VLAN), C-R4 (VLAN -> |PKT|)
497 The MDSC should be able, via the MPI, to request the setup of these
498 EVPL services with enough information to permit the transport
499 PNC to instantiate and control the ODU0 end-to-end data plane
500 connections as well as the adaptation functions on the boundary
501 nodes: S3&S2&S6 (VLAN -> ODU0) and S3&S2&S6 (ODU0 -> VLAN).
503 4.3.5. EVPLAN and EVPTree Services
505 This use case assumes that the physical links interconnecting the IP
506 routers and the transport network are Ethernet links and that
507 different Ethernet services (e.g., EVPL, EVPLAN and EVPTree) can
508 share the same physical link using different VLANs.
510 Note - it is assumed that EPLAN and EPTree services can be supported
511 by configuring EVPLAN and EVPTree with port mapping.
513 In order to set up an IP subnet between C-R1, C-R2, C-R3 and C-R4, an
514 EVPLAN/EVPTree service needs to be created, supported by two ODUflex
515 end-to-end connections, respectively between S3 and S6, crossing
516 transport node S5, and between S3 and S2, crossing transport node
517 S1.
519 In order to support this EVPLAN/EVPTree service, some Ethernet
520 Bridging capabilities are required on some nodes at the edge of the
521 transport network: for example, Ethernet Bridging capabilities can be
522 configured in nodes S3 and S6 but not in node S2.
524 Since this EVPLAN/EVPTree service can share the same Ethernet
525 physical links between the IP routers and the transport nodes (e.g., with
526 the EVPL services described in section 4.3.4), a different VLAN ID
527 (e.g., 30) can be associated with this EVPLAN/EVPTree service.
529 In order to support an EVPTree service instead of an EVPLAN,
530 additional configuration of the Ethernet Bridging capabilities on
531 the nodes at the edge of the transport network is required.
533 The MAC bridging function in node S3 is needed to select, based on
534 the MAC Destination Address, whether the Ethernet frames from C-R1
535 should be sent to the ODUflex terminating on node S6 or to the other
536 ODUflex terminating on node S2.
538 The MAC bridging function in node S6 is needed to select, based on
539 the MAC Destination Address, whether the Ethernet frames received
540 from the ODUflex should be sent to C-R2 or C-R3, as well as whether
541 the Ethernet frames received from C-R2 (or C-R3) should be sent to
542 C-R3 (or C-R2) or to the ODUflex.
544 For example, the traffic flow between C-R1 and C-R3 can be
545 summarized as:
547 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |MAC| -> |ODUflex|),
548 S5 (|ODUflex|), S6 (|ODUflex| -> |MAC| -> VLAN),
549 C-R3 (VLAN -> |PKT|)
551 The MAC bridging function in node S3 is also needed to select, based
552 on the MAC Destination Address, whether the Ethernet frames received
553 from one ODUflex should be sent to C-R1 or to the other ODUflex.
555 For example, the traffic flow between C-R3 and C-R4 can be
556 summarized as:
558 C-R3 (|PKT| -> VLAN), S6 (VLAN -> |MAC| -> |ODUflex|),
559 S5 (|ODUflex|), S3 (|ODUflex| -> |MAC| -> |ODUflex|),
560 S1 (|ODUflex|), S2 (|ODUflex| -> VLAN), C-R4 (VLAN -> |PKT|)
562 In node S2 there is no need for any MAC bridging function, since all
563 the Ethernet frames received from C-R4 should be sent to the ODUflex
564 toward S3, and vice versa.
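As an informal illustration of the S3 bridging decision described above (the MAC addresses and port names are invented; the draft only requires that such a MAC bridging function exist in S3):

```python
# Illustrative sketch: node S3 forwards frames by MAC Destination
# Address between the C-R1 access port and the two ODUflex connections
# (towards S6 and towards S2).

s3_fdb = {
    # MAC Destination Address -> egress (learned or provisioned)
    "mac-C-R2": "ODUflex-to-S6",
    "mac-C-R3": "ODUflex-to-S6",
    "mac-C-R4": "ODUflex-to-S2",
    "mac-C-R1": "port-to-C-R1",
}

def s3_forward(ingress, dst_mac):
    """Return the egress for a frame, or None if it would loop back."""
    egress = s3_fdb.get(dst_mac)
    return None if egress == ingress else egress

# A frame from C-R1 towards C-R4 is bridged onto the ODUflex to S2.
print(s3_forward("port-to-C-R1", "mac-C-R4"))
```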
566 The traffic flow between C-R1 and C-R4 can be summarized as:
568 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |MAC| -> |ODUflex|),
569 S1 (|ODUflex|), S2 (|ODUflex| -> VLAN), C-R4 (VLAN -> |PKT|)
571 The MDSC should be able, via the MPI, to request the setup of this
572 EVPLAN/EVPTree service with enough information to permit the
573 transport PNC to instantiate and control the ODUflex end-to-end data
574 plane connections as well as the Ethernet Bridging and adaptation
575 functions on the boundary nodes: S3&S6 (VLAN -> MAC -> ODUflex), S3&S6
576 (ODUflex -> MAC -> VLAN), S2 (VLAN -> ODUflex) and S2 (ODUflex -> VLAN).
578 4.4. Multi-functional Access Links
580 This use case assumes that some of the physical links interconnecting
581 the IP routers and the transport network can be configured in different
582 modes, e.g., as OTU2, STM-64 or 10GE.
584 This configuration can be done a priori by means outside the scope
585 of this document. In this case, these links will appear at the MPI
586 either as an ODU link, an STM-64 link or a 10GE link
587 (depending on the a priori configuration) and will be controlled at
588 the MPI as discussed in section 4.3.
590 It is also possible not to configure these links a priori and to let
591 the MDSC, via the MPI, decide, based on the service
592 configuration, how to configure them.
594 For example, if the physical link between C-R1 and S3 is a
595 multi-functional access link, while the physical links between C-R3 and S6
596 and between C-R4 and S2 are STM-64 and 10GE physical links,
597 respectively, it is possible at the MPI to configure either an
598 STM-64 Private Line service between C-R1 and C-R3 or an EPL service
599 between C-R1 and C-R4.
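As an informal illustration of this choice (the service-type names are invented; only the link modes OTU2, STM-64 and 10GE come from the text), the mode of a multi-functional access link can be derived from the requested service:

```python
# Illustrative sketch: a multi-functional access link is configured in
# the mode implied by the requested service type.

SERVICE_TO_LINK_MODE = {
    "ODU2-transit": "OTU2",
    "STM-64-private-line": "STM-64",
    "EPL": "10GE",
}

def configure_access_link(link, service_type):
    """Pick the access-link mode implied by the requested service."""
    mode = SERVICE_TO_LINK_MODE.get(service_type)
    if mode is None:
        raise ValueError(f"unsupported service type: {service_type}")
    return {"link": link, "mode": mode}

# Configuring the C-R1<->S3 multi-functional link for an STM-64
# Private Line service towards C-R3:
print(configure_access_link("C-R1<->S3", "STM-64-private-line"))
```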
601 The traffic flow between C-R1 and C-R3 can be summarized as:
603 C-R1 (|PKT| -> STM-64), S3 (STM-64 -> |ODU2|), S5 (|ODU2|),
604 S6 (|ODU2| -> STM-64), C-R3 (STM-64 -> |PKT|)
606 The traffic flow between C-R1 and C-R4 can be summarized as:
608 C-R1 (|PKT| -> ETH), S3 (ETH -> |ODU2|), S1 (|ODU2|),
609 S2 (|ODU2| -> ETH), C-R4 (ETH -> |PKT|)
611 The MDSC should be able, via the MPI, to request the setup of
612 either service with enough information to permit the transport
613 PNC to instantiate and control the ODU2 end-to-end data plane
614 connection as well as the adaptation functions inside S3 and S2 or
615 S6.
617 4.5. Protection Requirements
619 Protection switching provides a pre-allocated survivability
620 mechanism, typically provided via linear protection methods, and
621 would be configured to operate as 1+1 unidirectional (the most
622 common OTN protection method), 1+1 bidirectional or 1:n
623 bidirectional. This ensures fast and simple service survivability.
625 The MDSC needs to be able to request the transport PNC to
626 configure protection when requesting the setup of the connectivity
627 services described in section 4.3.
629 Since in this use case it is assumed that switching within the
630 transport network domain is performed only in one layer, protection
631 switching within the transport network domain can likewise only be
632 provided at the OTN ODU layer, for all the services defined in
633 section 4.3.
635 It may be necessary to consider not only protection, but also
636 restoration functions in the future. Restoration methods would
637 provide the capability to reroute traffic and restore connectivity
638 around network faults, without the network penalty imposed by
639 dedicated 1+1 protection schemes.
641 4.5.1. Linear Protection
643 It is possible to protect any service defined in section 4.3 from
644 failures within the OTN transport domain by configuring OTN linear
645 protection in the data plane between node S3 and node S6.
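As an informal illustration of 1+1 unidirectional protection state (the entity paths follow Figure 1; the field names are invented, not taken from any IETF model), traffic is permanently bridged onto both transport entities and each direction selects its active entity independently, so the two directions may differ:

```python
# Illustrative sketch: per-direction selector state for a 1+1
# unidirectional protected service, as it might be reported by the
# PNC to the MDSC.

protected_service = {
    "working":    ["S3", "S5", "S6"],
    "protection": ["S3", "S4", "S8", "S7", "S6"],
    # per-direction selector state
    "active": {"forward": "working", "backward": "working"},
}

def on_defect(service, direction):
    """A defect on the currently active entity switches the selector
    of that direction only (simplified: no hold-off or wait-to-restore)."""
    current = service["active"][direction]
    service["active"][direction] = (
        "protection" if current == "working" else "working")
    return service["active"][direction]

on_defect(protected_service, "forward")
print(protected_service["active"])  # the two directions now differ
```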
647 It is assumed that the OTN linear protection is configured with the
648 1+1 unidirectional protection switching type, as defined in [ITU-T
649 G.808.1-2014] and [ITU-T G.873.1-2014], as well as in [RFC4427].
651 In these scenarios, a working transport entity and a protection
652 transport entity, as defined in [ITU-T G.808.1-2014] (or a working
653 LSP and a protection LSP, as defined in [RFC4427]), should be
654 configured in the data plane, for example:
656 Working transport entity: S3, S5, S6
658 Protection transport entity: S3, S4, S8, S7, S6
660 The transport PNC should be able to report to the MDSC which
661 transport entity, as defined in [ITU-T G.808.1-2014], is active in
662 the data plane.
664 Given the fast dynamics of protection switching operations in the
665 data plane (50ms recovery time), this reporting is not expected to
666 be in real-time.
668 It is also worth noting that with unidirectional protection
669 switching, e.g., 1+1 unidirectional protection switching, the active
670 transport entity may be different in the two directions.
672 5. Use Case 2: Single-domain with multi-layer
674 5.1. Reference Network
676 The considerations discussed in this document are based on
677 the following reference network:
679 - single transport domain: OTN and OCh multi-layer network
681 In this use case, the same reference network shown in Figure 1 is
682 considered. The only difference is that all the transport nodes are
683 capable of switching at the ODU as well as the OCh layer.
685 All the physical links within the transport network are therefore
686 assumed to be OCh links. As a consequence, with the exception of the
687 access links, no internal ODU link exists before an OCh end-to-end
688 data plane connection is created within the network.
690 The controlling hierarchy is the same as described in Figure 2.
692 The interface within the scope of this document is the Transport MPI,
693 which should be capable of controlling both the OTN and OCh layers.
695 5.2. Topology Abstractions
697 A grey topology type B abstraction is assumed: the abstract nodes and
698 links exposed at the MPI correspond 1:1 with the physical nodes and
699 links controlled by the PNC, but the PNC abstracts/hides at least
700 some optical parameters to be used within the OCh layer.
702 5.3. Service Configuration
704 The same service scenarios, as described in section 4.3, are also
705 applicable to this use case, with the only difference that end-to-end
706 OCh data plane connections need to be set up before ODU data
707 plane connections.
709 6. Use Case 3: Multi-domain with single-layer
711 6.1. Reference Network
713 In this section we focus on a multi-domain reference network with
714 homogeneous technologies:
716 - multiple transport domains: OTN networks
718 Figure 3 shows the network physical topology, composed of three
719 transport network domains providing transport services to an IP
720 customer network through eight access links:
722 ........................
723 .......... : :
724 : : : Network domain 1 : .............
725 :Customer: : : : :
726 :domain 1: : S1 -------+ : : Network :
727 : : : / \ : : domain 3 : ..........
728 : C-R1 ------- S3 ----- S4 \ : : : : :
729 : : : \ \ S2 --------+ : :Customer:
730 : : : \ \ | : : \ : :domain 3:
731 : : : S5 \ | : : \ : : :
732 : C-R2 ------+ / \ \ | : : S31 --------- C-R7 :
733 : : : \ / \ \ | : : / \ : : :
734 : : : S6 ---- S7 ---- S8 ------ S32 S33 ------ C-R8 :
735 : : : / | | : : / \ / : :........:
736 : C-R3 ------+ | | : :/ S34 :
737 : : :..........|.......|...: / / :
738 :........: | | /:.../.......:
739 | | / /
740 ...........|.......|..../..../...
741 : | | / / : ..........
742 : Network | | / / : : : 743 : domain 2 | | / / : :Customer: 744 : S11 ---- S12 / : :domain 2: 745 : / | \ / : : : 746 : S13 S14 | S15 ------------- C-R4 : 747 : | \ / \ | \ : : : 748 : | S16 \ | \ : : : 749 : | / S17 -- S18 --------- C-R5 : 750 : | / \ / : : : 751 : S19 ---- S20 ---- S21 ------------ C-R6 : 752 : : : : 753 :...............................: :........: 755 Figure 3 Reference network for Use Case 3 757 It is worth noting that network domain 1 is identical to the 758 transport domain shown in Figure 1. 760 -------------- 761 | Client | 762 | Controller | 763 -------------- 764 | 765 ....................|....................... 766 | 767 ---------------- 768 | | 769 | MDSC | 770 | | 771 ---------------- 772 / | \ 773 / | \ 774 ............../.....|......\................ 775 / | \ 776 / ---------- \ 777 / | PNC2 | \ 778 / ---------- \ 779 ---------- | \ 780 | PNC1 | ----- \ 781 ---------- ( ) ---------- 782 | ( ) | PNC3 | 783 ----- ( Network ) ---------- 784 ( ) ( Domain 2 ) | 785 ( ) ( ) ----- 786 ( Network ) ( ) ( ) 787 ( Domain 1 ) ----- ( ) 788 ( ) ( Network ) 789 ( ) ( Domain 3 ) 790 ----- ( ) 791 ( ) 792 ----- 794 Figure 4 Controlling Hierarchy for Use Case 3 796 In this section we address the case where the CNC controls the 797 customer IP network and requests transport connectivity among IP 798 routers, via the CMI, to an MDSC which coordinates, via three MPIs, 799 the control of a multi-domain transport network through three PNCs. 801 The interfaces within the scope of this document are the three MPIs, 802 while the interface between the CNC and the IP routers, as well as 803 considerations about the CMI, are outside the scope of this 804 document. 806 6.2. Topology Abstractions 808 Each PNC should provide the MDSC with a topology abstraction of its 809 own domain's network topology.
811 Each PNC provides the topology abstraction of its own domain 812 independently of the others, and therefore it is possible that 813 different PNCs provide different types of topology abstraction. 815 As an example, we can assume that: 817 o PNC1 provides a white topology abstraction (as in use case 1, 818 described in section 4.2) 820 o PNC2 provides a type A grey topology abstraction 822 o PNC3 provides a type B grey topology abstraction, with two 823 abstract nodes (AN31 and AN32), which abstract nodes 824 S31+S33 and nodes S32+S34, respectively. At the MPI, only the abstract nodes 825 should be reported: the mapping between the abstract nodes (AN31 826 and AN32) and the physical nodes (S31, S32, S33 and S34) should 827 be done internally by the PNC. 829 The MDSC should be capable of gluing together these different abstract 830 topologies to build its own view of the multi-domain network 831 topology. This might require proper administrative configuration or 832 other mechanisms (to be defined/analysed). 834 6.3. Service Configuration 836 In the following use cases, it is assumed that the CNC is capable of 837 requesting service connectivity from the MDSC to support IP router 838 connectivity. 840 The same service scenarios, as described in section 4.3, are also 841 applicable to this use case, with the only difference that the two 842 IP routers to be interconnected are attached to transport nodes 843 which belong to different PNC domains and are under the control of 844 the CNC. 846 As with the service scenarios in section 4.3, the type of service 847 could depend on the type of physical links (e.g., OTN link, ETH link 848 or SDH link) between the customer's routers and the multi-domain 849 transport network. The configuration of the different adaptations 850 inside the IP routers is performed by means that are outside the scope 851 of this document, because they are neither under the control of, nor 852 visible to, the MDSC or the PNCs.
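As an illustration only, the "gluing" of per-domain abstract topologies described in section 6.2 could be sketched as follows. This is a hypothetical Python sketch: the function name, the dictionary structures, and the node-naming convention are all assumptions for illustration and are not part of any IETF data model.

```python
# Hypothetical sketch of how an MDSC might "glue" per-domain abstract
# topologies (section 6.2) into one multi-domain view. All names and
# data structures are illustrative assumptions only.

def merge_topologies(domain_topologies, inter_domain_links):
    """Build a single multi-domain view from per-PNC abstract topologies.

    domain_topologies: dict mapping PNC name -> {"nodes": set, "links": set}
    inter_domain_links: iterable of (node_a, node_b) tuples, with node
        names already prefixed by their PNC (administrative configuration)
    """
    view = {"nodes": set(), "links": set()}
    for pnc, topo in domain_topologies.items():
        # Prefix node names with the PNC identifier to keep them unique
        view["nodes"] |= {f"{pnc}:{n}" for n in topo["nodes"]}
        view["links"] |= {(f"{pnc}:{a}", f"{pnc}:{b}")
                          for a, b in topo["links"]}
    # Inter-domain links are known to the MDSC, not to any single PNC
    view["links"] |= set(inter_domain_links)
    return view
```

Whether the MDSC learns the inter-domain links administratively or by other mechanisms is, as noted above, still to be defined/analysed.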
It is assumed that the CNC is capable of 853 requesting the proper configuration of the different adaptation 854 functions inside the customer's IP routers, by means which are 855 outside the scope of this document. 857 It is also assumed that the CNC is capable, via the CMI, of requesting 858 that the MDSC set up these services, with enough information to 859 enable the MDSC to coordinate the different PNCs to instantiate and 860 control the ODU2 data plane connection through nodes S3, S1, S2, 861 S31, S33, S34, S15 and S18, as well as the adaptation functions 862 inside nodes S3 and S18, when needed. 864 As described in section 6.2, the MDSC should have its own view of 865 the end-to-end network topology and use it for its own path 866 computation to understand that it needs to coordinate with PNC1, 867 PNC2 and PNC3 the setup and control of a multi-domain ODU2 data 868 plane connection. 870 6.3.1. ODU Transit 872 In order to set up a 10Gb IP link between C-R1 and C-R5, an ODU2 end- 873 to-end data plane connection needs to be created between C-R1 and C-R5, 874 crossing transport nodes S3, S1, S2, S31, S33, S34, S15 and S18, 875 which belong to different PNC domains. 877 The traffic flow between C-R1 and C-R5 can be summarized as: 879 C-R1 (|PKT| -> ODU2), S3 (|ODU2|), S1 (|ODU2|), S2 (|ODU2|), 880 S31 (|ODU2|), S33 (|ODU2|), S34 (|ODU2|), 881 S15 (|ODU2|), S18 (|ODU2|), C-R5 (ODU2 -> |PKT|) 883 6.3.2. EPL over ODU 885 In order to set up a 10Gb IP link between C-R1 and C-R5, an EPL 886 service needs to be created between C-R1 and C-R5, supported by an 887 ODU2 end-to-end data plane connection between transport nodes S3 and 888 S18, crossing transport nodes S1, S2, S31, S33, S34 and S15, which 889 belong to different PNC domains. 891 The traffic flow between C-R1 and C-R5 can be summarized as: 893 C-R1 (|PKT| -> ETH), S3 (ETH -> |ODU2|), S1 (|ODU2|), 894 S2 (|ODU2|), S31 (|ODU2|), S33 (|ODU2|), S34 (|ODU2|), 895 S15 (|ODU2|), S18 (|ODU2| -> ETH), C-R5 (ETH -> |PKT|) 897 6.3.3.
Other OTN Client Services 899 In order to set up a 10Gb IP link between C-R1 and C-R5 using, for 900 example, SDH physical links between the IP routers and the transport 901 network, an STM-64 Private Line service needs to be created between 902 C-R1 and C-R5, supported by an ODU2 end-to-end data plane connection 903 between transport nodes S3 and S18, crossing transport nodes S1, S2, 904 S31, S33, S34 and S15, which belong to different PNC domains. 906 The traffic flow between C-R1 and C-R5 can be summarized as: 908 C-R1 (|PKT| -> STM-64), S3 (STM-64 -> |ODU2|), S1 (|ODU2|), 909 S2 (|ODU2|), S31 (|ODU2|), S33 (|ODU2|), S34 (|ODU2|), 910 S15 (|ODU2|), S18 (|ODU2| -> STM-64), C-R5 (STM-64 -> |PKT|) 912 6.3.4. EVPL over ODU 914 In order to set up two 1Gb IP links between C-R1 and C-R3 and between 915 C-R1 and C-R5, two EVPL services need to be created, supported by 916 two ODU0 end-to-end connections, respectively between S3 and S6, 917 crossing transport node S5, and between S3 and S18, crossing 918 transport nodes S1, S2, S31, S33, S34 and S15, which belong to 919 different PNC domains. 921 The VLAN configuration on the access links is the same as described 922 in section 4.3.4. 924 The traffic flow between C-R1 and C-R3 is the same as described in 925 section 4.3.4. 927 The traffic flow between C-R1 and C-R5 can be summarized as: 929 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |ODU0|), S1 (|ODU0|), 930 S2 (|ODU0|), S31 (|ODU0|), S33 (|ODU0|), S34 (|ODU0|), 931 S15 (|ODU0|), S18 (|ODU0| -> VLAN), C-R5 (VLAN -> |PKT|) 933 6.3.5. EVPLAN and EVPTree Services 935 In order to set up an IP subnet between C-R1, C-R2, C-R3 and C-R7, an 936 EVPLAN/EVPTree service needs to be created, supported by two ODUflex 937 end-to-end connections, respectively between S3 and S6, crossing 938 transport node S5, and between S3 and S18, crossing transport nodes 939 S1, S2, S31, S33, S34 and S15, which belong to different PNC domains.
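The per-node traffic-flow notation used throughout section 6.3 (a layer wrapped in |bars| is the layer being carried, and "X -> |Y|" / "|Y| -> X" mark the adaptation into and out of layer Y) lends itself to a simple consistency check. The following is a hypothetical Python sketch, not part of any IETF model; the parsing convention is an assumption for illustration.

```python
# Hypothetical consistency check for the traffic-flow notation of
# section 6.3: verify that the server layer entered at the ingress
# adaptation is carried unchanged by every transit node and removed
# at the egress adaptation. Illustrative sketch only.

def check_flow(hops):
    """hops: list of adaptation strings, one per transport node.

    Returns True if ingress, transit and egress all agree on the
    server layer (e.g. "|ODU2|"); False otherwise.
    """
    ingress, *transit, egress = hops
    layer = ingress.split("->")[1].strip()      # layer entered at ingress
    if not all(h.strip() == layer for h in transit):
        return False
    return egress.split("->")[0].strip() == layer

# EPL-over-ODU flow from section 6.3.2 (transport nodes S3..S18 only)
epl = ["ETH -> |ODU2|", "|ODU2|", "|ODU2|", "|ODU2|", "|ODU2|",
       "|ODU2|", "|ODU2|", "|ODU2| -> ETH"]
```

Running `check_flow(epl)` returns True, while a flow whose transit nodes carry a different ODU container than the one entered at the ingress would fail the check.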
941 The VLAN configuration on the access links is the same as described 942 in section 4.3.5. 944 The configuration of the Ethernet Bridging capabilities on nodes S3 945 and S6 is the same as described in section 4.3.5, while the 946 configuration on node S18 is similar to the configuration of node S2 947 described in section 4.3.5. 949 The traffic flow between C-R1 and C-R3 is the same as described in 950 section 4.3.5. 952 The traffic flow between C-R1 and C-R5 can be summarized as: 954 C-R1 (|PKT| -> VLAN), S3 (VLAN -> |MAC| -> |ODUflex|), 955 S1 (|ODUflex|), S2 (|ODUflex|), S31 (|ODUflex|), 956 S33 (|ODUflex|), S34 (|ODUflex|), 957 S15 (|ODUflex|), S18 (|ODUflex| -> VLAN), C-R5 (VLAN -> |PKT|) 959 6.4. Multi-functional Access Links 961 The same considerations as in section 4.4 apply, with the only 962 difference that the ODU data plane connections could be set up across 963 multiple PNC domains. 965 For example, if the physical link between C-R1 and S3 is a multi- 966 functional access link while the physical links between C-R7 and S31 967 and between C-R5 and S18 are STM-64 and 10GE physical links 968 respectively, it is possible to configure either an STM-64 Private 969 Line service between C-R1 and C-R7 or an EPL service between C-R1 970 and C-R5. 972 The traffic flow between C-R1 and C-R7 can be summarized as: 974 C-R1 (|PKT| -> STM-64), S3 (STM-64 -> |ODU2|), S1 (|ODU2|), 975 S2 (|ODU2|), S31 (|ODU2| -> STM-64), C-R7 (STM-64 -> |PKT|) 977 The traffic flow between C-R1 and C-R5 can be summarized as: 979 C-R1 (|PKT| -> ETH), S3 (ETH -> |ODU2|), S1 (|ODU2|), 980 S2 (|ODU2|), S31 (|ODU2|), S33 (|ODU2|), S34 (|ODU2|), 981 S15 (|ODU2|), S18 (|ODU2| -> ETH), C-R5 (ETH -> |PKT|) 983 6.5. Protection Scenarios 985 The MDSC needs to be capable of coordinating the different PNCs to 986 configure protection switching when requesting the setup of the 987 connectivity services described in section 6.3.
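Before the MDSC can coordinate the PNCs as described in section 6.5, it must split an end-to-end transport entity into the per-domain segments each PNC controls. A hypothetical Python sketch of that step is shown below; the node-to-PNC mapping is taken from Figure 3, but the function itself and its data structures are illustrative assumptions only.

```python
# Hypothetical sketch of how an MDSC might split an end-to-end
# transport entity (section 6.5.1) into per-PNC segments before
# requesting each PNC to set up its own portion. Illustrative only.

# Node-to-domain mapping assumed from Figure 3 (subset of nodes)
NODE_DOMAIN = {
    "S1": "PNC1", "S2": "PNC1", "S3": "PNC1", "S4": "PNC1",
    "S5": "PNC1", "S6": "PNC1", "S7": "PNC1", "S8": "PNC1",
    "S11": "PNC2", "S12": "PNC2", "S15": "PNC2",
    "S17": "PNC2", "S18": "PNC2",
    "S31": "PNC3", "S32": "PNC3", "S33": "PNC3", "S34": "PNC3",
}

def split_by_domain(path):
    """Group consecutive nodes of an end-to-end path by controlling PNC."""
    segments = []
    for node in path:
        domain = NODE_DOMAIN[node]
        if segments and segments[-1][0] == domain:
            segments[-1][1].append(node)
        else:
            segments.append((domain, [node]))
    return segments

# Working transport entity from section 6.5.1
working = ["S3", "S1", "S2", "S31", "S33", "S34", "S15", "S18"]
```

For the working entity above, the split yields a PNC1 segment (S3, S1, S2), a PNC3 segment (S31, S33, S34) and a PNC2 segment (S15, S18), matching the per-domain grouping shown in section 6.5.1.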
989 Since in this use case it is assumed that switching within the 990 transport network domains is performed only in one layer, protection 991 switching within the transport network can likewise only be 992 provided at the OTN ODU layer, for all the services defined in 993 section 6.3. 995 6.5.1. Linear Protection (end-to-end) 997 In order to protect any service defined in section 6.3 from failures 998 within the OTN multi-domain transport network, the MDSC should be 999 capable of coordinating the different PNCs to configure and control OTN 1000 linear protection in the data plane between nodes S3 and S18. 1002 The considerations in section 4.5.1 are also applicable here, with 1003 the only difference that the MDSC needs to coordinate with the different 1004 PNCs the setup and control of the OTN linear protection, as well as 1005 of the working and protection transport entities (working and 1006 protection LSPs). 1008 Two cases can be considered. 1010 In one case, the working and protection transport entities pass 1011 through the same PNC domains: 1013 Working transport entity: S3, S1, S2, 1014 S31, S33, S34, 1015 S15, S18 1017 Protection transport entity: S3, S4, S8, 1018 S32, 1019 S12, S17, S18 1021 In another case, the working and protection transport entities can 1022 pass through different PNC domains: 1024 Working transport entity: S3, S5, S7, 1025 S11, S12, S17, S18 1027 Protection transport entity: S3, S1, S2, 1028 S31, S33, S34, 1029 S15, S18 1031 6.5.2. Segmented Protection 1033 In order to protect any service defined in section 6.3 from failures 1034 within the OTN multi-domain transport network, the MDSC should be 1035 capable of requesting each PNC to configure OTN intra-domain protection 1036 when requesting the setup of the ODU2 data plane connection segment.
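As an illustration of the segmented-protection coordination of section 6.5.2, the MDSC can be thought of as issuing one intra-domain protection request per PNC. The sketch below is hypothetical Python: the request fields and their names are assumptions for illustration and do not correspond to any real MPI data model.

```python
# Hypothetical sketch of the per-PNC requests implied by segmented
# protection (section 6.5.2): each PNC is asked to protect only the
# segment of the ODU2 connection within its own domain. The request
# structure below is an illustrative assumption, not a real MPI model.

def build_protection_requests(segments):
    """segments: dict PNC name -> (working_nodes, protection_nodes).

    Returns one request per PNC asking for intra-domain linear
    protection between the segment's end nodes.
    """
    return [
        {
            "pnc": pnc,
            "protection-type": "1+1-unidirectional",
            "working-transport-entity": working,
            "protection-transport-entity": protection,
        }
        for pnc, (working, protection) in segments.items()
    ]

# Per-domain working/protection entities from section 6.5.2
segments = {
    "PNC1": (["S3", "S1", "S2"], ["S3", "S4", "S8", "S2"]),
    "PNC2": (["S15", "S18"], ["S15", "S12", "S17", "S18"]),
    "PNC3": (["S31", "S33", "S34"], ["S31", "S32", "S34"]),
}
```

Each request covers only one domain, so a failure inside a domain is recovered by that domain's PNC without end-to-end coordination.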
1038 If linear protection is used within a domain, the considerations in 1039 section 4.5.1 are also applicable here, but only for the PNC controlling 1040 the domain where intra-domain linear protection is provided. 1042 If PNC1 provides linear protection, the working and protection 1043 transport entities could be: 1045 Working transport entity: S3, S1, S2 1047 Protection transport entity: S3, S4, S8, S2 1049 If PNC2 provides linear protection, the working and protection 1050 transport entities could be: 1052 Working transport entity: S15, S18 1054 Protection transport entity: S15, S12, S17, S18 1056 If PNC3 provides linear protection, the working and protection 1057 transport entities could be: 1059 Working transport entity: S31, S33, S34 1061 Protection transport entity: S31, S32, S34 1063 7. Use Case 4: Multi-domain and multi-layer 1065 7.1. Reference Network 1067 The considerations discussed in this document are based on 1068 the following reference network: 1070 - multiple transport domains: OTN and OCh multi-layer networks 1072 In this use case, the reference network shown in Figure 3 is used. 1073 The only difference is that all the transport nodes are capable of 1074 switching in either the ODU or the OCh layer. 1076 All the physical links within each transport network domain are 1077 therefore assumed to be OCh links, while the inter-domain links are 1078 assumed to be ODU links, as described in section 6.1 (multi-domain 1079 with single layer - OTN network). 1081 Therefore, with the exception of the access and inter-domain links, 1082 no ODU link exists within each domain before an OCh single-domain 1083 end-to-end data plane connection is created within the network. 1085 The controlling hierarchy is the same as described in Figure 4. 1087 The interfaces within the scope of this document are the three MPIs, 1088 which should be capable of controlling both the OTN and OCh layers 1089 within each PNC domain. 1091 7.2.
Topology Abstractions 1093 Each PNC should provide the MDSC with a topology abstraction of its own 1094 network topology, as described in section 5.2. 1096 As an example, it is assumed that: 1098 o PNC1 provides a type A grey topology abstraction (as in use 1099 case 2, described in section 5.2) 1101 o PNC2 provides a type B grey topology abstraction (as in use 1102 case 3, described in section 6.2) 1104 o PNC3 provides a type B grey topology abstraction with two 1105 abstract nodes, as in use case 3 described in section 6.2, 1106 hiding at least some of the optical parameters to be used within the 1107 OCh layer, as in use case 2 described in section 5.2. 1109 7.3. Service Configuration 1111 The same service scenarios, as described in section 6.3, are also 1112 applicable to this use case, with the only difference that single- 1113 domain end-to-end OCh data plane connections need to be set up 1114 before ODU data plane connections. 1116 8. Security Considerations 1118 Typically, OTN networks ensure a high level of security and data 1119 privacy through hard partitioning of traffic onto isolated circuits. 1121 There may be additional security considerations applied to specific 1122 use cases, but common security considerations do exist and these 1123 must be considered for controlling the underlying infrastructure to 1124 deliver transport services: 1126 o use of RESTCONF and the need to reuse security between RESTCONF 1127 components; 1129 o use of authentication and policy to govern which transport 1130 services may be requested by the user or application; 1132 o how secure and isolated connectivity may also be requested as an 1133 element of a service and mapped down to the OTN level. 1135 9. IANA Considerations 1137 This document requires no IANA actions. 1139 10. References 1141 10.1. Normative References 1143 [RFC7926] Farrel, A.
et al., "Problem Statement and Architecture for 1144 Information Exchange between Interconnected Traffic- 1145 Engineered Networks", BCP 206, RFC 7926, July 2016. 1147 [RFC4427] Mannie, E. and D. Papadimitriou, "Recovery (Protection and 1148 Restoration) Terminology for Generalized Multi-Protocol 1149 Label Switching (GMPLS)", RFC 4427, March 2006. 1151 [ACTN-Frame] Ceccarelli, D., Lee, Y. et al., "Framework for 1152 Abstraction and Control of Transport Networks", draft- 1153 ietf-teas-actn-framework, work in progress. 1155 [ITU-T G.709-2016] ITU-T Recommendation G.709 (06/16), "Interfaces 1156 for the optical transport network", June 2016. 1158 [ITU-T G.808.1-2014] ITU-T Recommendation G.808.1 (05/14), "Generic 1159 protection switching - Linear trail and subnetwork 1160 protection", May 2014. 1162 [ITU-T G.873.1-2014] ITU-T Recommendation G.873.1 (05/14), "Optical 1163 transport network (OTN): Linear protection", May 2014. 1165 10.2. Informative References 1167 [TE-Topo] Liu, X. et al., "YANG Data Model for TE Topologies", 1168 draft-ietf-teas-yang-te-topo, work in progress. 1170 [ACTN-YANG] Zhang, X. et al., "Applicability of YANG models for 1171 Abstraction and Control of Traffic Engineered Networks", 1172 draft-zhang-teas-actn-yang, work in progress. 1174 [Path-Compute] Busi, I., Belotti, S. et al., "Yang model for 1175 requesting Path Computation", draft-busibel-teas-yang- 1176 path-computation, work in progress. 1178 [RESTCONF] Bierman, A., Bjorklund, M., and K. Watsen, "RESTCONF 1179 Protocol", RFC 8040, DOI 10.17487/RFC8040, January 2017. 1182 [ONF TR-527] ONF Technical Recommendation TR-527, "Functional 1183 Requirements for Transport API", June 2016. 1185 [ONF GitHub] ONF Open Transport (SNOWMASS) 1186 https://github.com/OpenNetworkingFoundation/Snowmass- 1187 ONFOpenTransport 1189 11.
Acknowledgments 1191 The authors would like to thank all members of the Transport NBI 1192 Design Team involved in the definition of use cases, gap analysis 1193 and guidelines for using the IETF YANG models at the Northbound 1194 Interface (NBI) of a Transport SDN Controller. 1196 The authors would like to thank Xian Zhang, Anurag Sharma, Sergio 1197 Belotti, Tara Cummings, Michael Scharf, Karthik Sethuraman, Oscar 1198 Gonzalez de Dios, Hans Bjursrom and Italo Busi for having initiated 1199 the work on gap analysis for transport NBI and having provided 1200 foundations work for the development of this document. 1202 This document was prepared using 2-Word-v2.0.template.dot. 1204 Authors' Addresses 1206 Italo Busi (Editor) 1207 Huawei 1208 Email: italo.busi@huawei.com 1210 Daniel King (Editor) 1211 Lancaster University 1212 Email: d.king@lancaster.ac.uk 1214 Sergio Belotti 1215 Nokia 1216 Email: sergio.belotti@nokia.com 1218 Gianmarco Bruno 1219 Ericsson 1220 Email: gianmarco.bruno@ericsson.com 1222 Young Lee 1223 Huawei 1224 Email: leeyoung@huawei.com 1226 Victor Lopez 1227 Telefonica 1228 Email: victor.lopezalvarez@telefonica.com 1230 Carlo Perocchio 1231 Ericsson 1232 Email: carlo.perocchio@ericsson.com 1234 Haomian Zheng 1235 Huawei 1236 Email: zhenghaomian@huawei.com