CCAMP Working Group                                       I. Busi (Ed.)
Internet Draft                                                   Huawei
Intended status: Informational                                  D. King
                                                    Lancaster University
Expires: December 2017                                     July 03, 2017


              Transport Northbound Interface Use Cases
            draft-tnbidt-ccamp-transport-nbi-use-cases-02

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on December 28, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Abstract

   Transport network domains, including Optical Transport Network
   (OTN) and Wavelength Division Multiplexing (WDM) networks, are
   typically deployed based on single-vendor or single-technology
   platforms.  They are often managed using proprietary interfaces to
   dedicated Element Management Systems (EMS), Network Management
   Systems (NMS) and, increasingly, Software Defined Networking (SDN)
   controllers.

   A well-defined open interface to each domain management system or
   controller is required for network operators to facilitate control
   automation and orchestrate end-to-end services across multi-domain
   networks.
   These functions may be enabled using standardized data models
   (e.g., YANG) and an appropriate protocol (e.g., RESTCONF).

   This document describes the key use cases and requirements for
   transport network control and management.  It reviews proposed and
   existing IETF transport network data models and their
   applicability, and highlights gaps and requirements.

Table of Contents

   1. Introduction
   2. Conventions used in this document
   3. Use Case 1: Single-domain with single-layer
      3.1. Reference Network
         3.1.1. Single Transport Domain - OTN Network
      3.2. Topology Abstractions
      3.3. Service Configuration
         3.3.1. ODU Transit
         3.3.2. EPL over ODU
         3.3.3. Other OTN Client Services
         3.3.4. EVPL over ODU
         3.3.5. EVPLAN and EVPTree Services
         3.3.6. Virtual Network Services
      3.4. Multi-functional Access Links
      3.5. Protection Scenarios
         3.5.1. Linear Protection
   4. Use Case 2: Single-domain with multi-layer
   5. Use Case 3: Multi-domain with single-layer
   6. Use Case 4: Multi-domain and multi-layer
   7. Security Considerations
   8. IANA Considerations
   9. References
      9.1. Normative References
      9.2. Informative References
   10. Acknowledgments

1. Introduction

   A common open interface to each domain controller and management
   system is a prerequisite for network operators to control multi-
   vendor and multi-domain networks and to enable coordination and
   automation of service provisioning.  This can be achieved by using
   standardized YANG models together with an appropriate protocol
   (e.g., RESTCONF).

   This document assumes a reference architecture, including
   interfaces, based on the Abstraction and Control of Traffic-
   Engineered Networks (ACTN), defined in [ACTN-Frame].

   The focus of the current version is on the MPI, the interface
   between the Multi-Domain Service Coordinator (MDSC) and a Physical
   Network Controller (PNC) controlling a transport network domain.

   The relationship between the current IETF YANG models and the
   types of ACTN interfaces can be found in [ACTN-YANG].

   The ONF Technical Recommendation on the functional requirements
   for the transport API can be found in [ONF TR-527].  Furthermore,
   ONF transport API multi-layer examples can be found in
   [ONF GitHub].

   This document describes use cases that could be used for analyzing
   the applicability of the existing models defined by the IETF for
   transport networks.

   Considerations about the CMI (interface between the Customer
   Network Controller (CNC) and the MDSC) are for further study.
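
   As a purely illustrative sketch (and not a requirement of this
   document), the following Python fragment shows how an MDSC
   implementation might retrieve topology information from a PNC over
   a RESTCONF-based MPI.  The PNC address, the credentials and the
   choice of the exported module are assumptions made only for this
   example.

      # Illustrative only: a minimal RESTCONF GET towards a PNC at
      # the MPI.  The endpoint, credentials and exported YANG module
      # are assumptions of this sketch, not requirements.
      import requests

      PNC_URL = "https://pnc.example.net/restconf/data"

      def get_topologies():
          # Retrieve the topology subtree in JSON encoding.
          response = requests.get(
              PNC_URL + "/ietf-network:networks",
              headers={"Accept": "application/yang-data+json"},
              auth=("mdsc", "example-password"),  # placeholder credentials
              timeout=10,
          )
          response.raise_for_status()
          return response.json()

      if __name__ == "__main__":
          print(get_topologies())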

2. Conventions used in this document

   For discussion in future revisions of this document.

3. Use Case 1: Single-domain with single-layer

3.1. Reference Network

   The considerations discussed in the current version of this
   document are based on the following reference networks:

   - single transport domain: OTN network

   It is expected that future revisions of the document will include
   additional reference networks.

3.1.1. Single Transport Domain - OTN Network

   Figure 1 shows the physical network topology, composed of a
   single-domain transport network providing transport services to an
   IP network through five access links.

      .................................................
      :                   IP domain                   :
      :             ......................            :
      :             :                    :            :
      :             :    S1 -------- S2 ------ C-R4   :
      :             :   /             |  :            :
      :             :  /              |  :            :
      :  C-R1 ------- S3 ---- S4      |  :            :
      :             :   \       \     |  :            :
      :             :    \       \    |  :            :
      :             :     S5      \   |  :            :
      :  C-R2 -------+   /  \      \  |  :            :
      :             : \ /    \      \ |  :            :
      :             :  S6 --- S7 --- S8 ------ C-R5   :
      :             : /                  :            :
      :  C-R3 -------+                   :            :
      :             :  Transport domain  :            :
      :             :....................:            :
      :...............................................:

             Figure 1 Reference network for Use Case 1

   The IP and transport (OTN) domains are composed, respectively, of
   five routers (C-R1 to C-R5) and eight ODU switches (S1 to S8).
   The transport domain acts as a transit domain providing
   connectivity to the IP layer.

   The behavior of the transport domain is the same whether the
   ingress/egress nodes of the IP domain supporting an IP service are
   directly attached to the transport domain, or other routers sit
   between those ingress/egress nodes and the routers directly
   attached to the transport network.

                  +-----+
                  | CNC |
                  +-----+
                     |
                     |CMI I/F
                     |
         +-----------------------+
         |          MDSC         |
         +-----------------------+
                     |
                     |MPI I/F
                     |
                 +-------+
                 |  PNC  |
                 +-------+
                     |
                   -----
                 (       )
                (   OTN   )
               (  Physical )
               (  Network  )
                 (       )
                   -----

          Figure 2 Controlling Hierarchy for Use Case 1

   The mapping of client IP traffic onto the physical links between
   the routers and the transport network is performed in the IP
   routers only; it is not controlled by the transport PNC and is
   transparent to the transport nodes.

   The control plane architecture follows the ACTN architecture and
   framework document [ACTN-Frame].  The Customer Network Controller
   (CNC) acts as a client with respect to the Multi-Domain Service
   Coordinator (MDSC) via the CNC-MDSC Interface (CMI).  The MDSC is
   connected to a plurality of Physical Network Controllers (PNCs),
   one for each domain, via the MDSC-PNC Interface (MPI).  Each PNC
   is responsible only for the control of its own domain, while the
   MDSC is the only entity capable of multi-domain functions as well
   as of managing the inter-domain links.  The key point of the whole
   ACTN framework is to detach the network and service control from
   the underlying technology and to help the customer express the
   network as desired by business needs.  Therefore, care must be
   taken to keep the dependency of the CMI on the network domain
   technologies minimal (or to remove it entirely).  The MPI,
   instead, requires some specialization according to the domain
   technology.

   In this section, we address the case of an IP PNC and a Transport
   PNC, having respectively an IP MPI and a Transport MPI.  The
   interface within the scope of this document is the Transport MPI;
   the IP MPI is out of scope, and considerations about the CMI are
   for further study.

3.2. Topology Abstractions

   There are multiple methods to abstract a network topology.  This
   document assumes the abstraction method defined in [RFC7926]:

      Abstraction is the process of applying policy to the available
      TE information within a domain, to produce selective
      information that represents the potential ability to connect
      across the domain.  Thus, abstraction does not necessarily
      offer all possible connectivity options, but presents a general
      view of potential connectivity according to the policies that
      determine how the domain's administrator wants to allow the
      domain resources to be used.

   [TE-Topo] describes a base YANG model for TE topology without any
   technology-specific parameters.  Moreover, it defines how TE
   network topologies can be abstracted.

   [ACTN-Abstraction] provides the context of topology abstraction in
   the ACTN architecture and discusses a few alternatives for the
   abstraction methods for both packet and optical networks.  This is
   an important consideration since the choice of the abstraction
   method impacts protocol design and the information it carries.
   According to [ACTN-Abstraction], there are three types of
   topology:

   o  White topology: this is the case where the Physical Network
      Controller (PNC) provides the actual network topology to the
      Multi-Domain Service Coordinator (MDSC) without any hiding or
      filtering.  In this case, the MDSC has full knowledge of the
      underlying network topology.

   o  Black topology: the entire domain network is abstracted as a
      single virtual node with its access/egress links, without
      disclosing any information about the internal connectivity of
      the domain.

   o  Grey topology: this abstraction level lies between the black
      and the white topology from a granularity point of view: the
      internal domain resources are abstracted as TE tunnels between
      all pairs of border nodes.  Two sub-types can be further
      distinguished, depending on how the internal TE resources
      between the pairs of border nodes are abstracted:

      -  Grey topology type A: border nodes connected by abstract TE
         links in a full mesh.

      -  Grey topology type B: border nodes with some internal
         abstracted nodes and abstracted links.

   For the single-domain, single-layer use case, the white topology
   may be disseminated from the PNC to the MDSC in most cases.  An
   exception is the case where the underlying network has complex
   optical parameters that do not warrant the distribution of such
   details to the MDSC.  In that case, the topology disseminated from
   the PNC to the MDSC may carry streamlined TE information rather
   than the entire TE information.  This case requires an additional
   action from the MDSC's standpoint when provisioning a path: the
   MDSC may issue a path computation request to the PNC to verify the
   feasibility of the estimated path before making the final
   provisioning request to the PNC, as outlined in [Path-Compute].

   Topology abstraction for the CMI is for further study (to be
   addressed in future revisions of this document).
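
   The following Python sketch illustrates, in purely informal terms,
   the black topology abstraction described above: a detailed domain
   view is collapsed into a single abstract node that retains only
   the access links.  The dictionary layout is a simplified stand-in
   invented for this example; it is not the instance data format of
   [TE-Topo] or of any other published model.

      # Informal illustration of the black topology abstraction: the
      # whole domain is collapsed into one abstract node and only the
      # access (inter-domain) links are retained.  The data layout is
      # invented for this sketch and is not a published YANG encoding.

      def black_abstraction(white_topology, abstract_node="transport-domain"):
          access_links = [
              link for link in white_topology["links"]
              if link.get("access", False)
          ]
          return {
              "nodes": [{"node-id": abstract_node}],
              "links": [
                  {
                      "link-id": link["link-id"],
                      # Re-home the domain-internal end of each access
                      # link onto the single abstract node.
                      "source": abstract_node,
                      "destination": link["destination"],
                  }
                  for link in access_links
              ],
          }

      # In the network of Figure 1, the five access links would be
      # kept while nodes S1-S8 and their internal links disappear
      # from the abstracted view.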

3.3. Service Configuration

   In the following use cases, the Multi-Domain Service Coordinator
   (MDSC) needs to be able to request service connectivity from the
   transport Physical Network Controller (PNC) to support
   connectivity between the IP routers.  The type of service depends
   on the type of physical links (e.g., OTN, Ethernet or SDH links)
   between the routers and the transport network.

   As described in section 3.1.1, the different adaptations inside
   the IP routers, C-Ri (PKT -> foo) and C-Rj (foo -> PKT), are
   assumed to be controlled by means that are not under the control
   of, and not visible to, the transport PNC.  Therefore, these
   mechanisms are outside the scope of this document.

3.3.1. ODU Transit

   This use case assumes that the physical links interconnecting the
   IP routers and the transport network are OTN links.

   The physical/optical interconnection is assumed to be pre-
   configured and is not exposed via the MPI to the MDSC.

   If we consider the case of a 10Gb IP link between C-R1 and C-R3,
   we need to instantiate an ODU2 end-to-end connection between C-R1
   and C-R3, crossing transport nodes S3, S5 and S6.

   The traffic flow between C-R1 and C-R3 can be summarized as:

      C-R1 (PKT -> ODU2), S3 (ODU2), S5 (ODU2), S6 (ODU2),
      C-R3 (ODU2 -> PKT)

   The MDSC should be capable, via the MPI, of requesting the setup
   of an ODU2 transit service with enough information to permit the
   transport PNC to instantiate and control the ODU2 segment through
   nodes S3, S5 and S6.

3.3.2. EPL over ODU

   This use case assumes that the physical links interconnecting the
   IP routers and the transport network are Ethernet links.

   If we consider the case of a 10Gb IP link between C-R1 and C-R3,
   we need to instantiate an Ethernet Private Line (EPL) service
   between C-R1 and C-R3, supported by an ODU2 end-to-end connection
   between S3 and S6, crossing transport node S5.

   The traffic flow between C-R1 and C-R3 can be summarized as:

      C-R1 (PKT -> ETH), S3 (ETH -> ODU2), S5 (ODU2),
      S6 (ODU2 -> ETH), C-R3 (ETH -> PKT)

   The MDSC should be capable, via the MPI, of requesting the setup
   of an EPL service with enough information to permit the transport
   PNC to instantiate and control the ODU2 end-to-end connection
   through nodes S3, S5 and S6, as well as the adaptation functions
   inside S3 and S6: S3&S6 (ETH -> ODU2) and S6&S3 (ODU2 -> ETH).
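
   As a purely hypothetical illustration of the kind of request the
   MDSC could issue over a RESTCONF-based MPI, the following Python
   sketch posts an EPL-over-ODU2 service request to the transport
   PNC.  The resource name and the body structure are invented for
   this example and do not correspond to any published YANG model;
   identifying the right model (or the gaps in existing ones) is
   precisely the purpose of this document.

      # Hypothetical only: the URI and payload below are invented to
      # make the information content of the request concrete; they do
      # not correspond to any published YANG model.
      import requests

      PNC_URL = "https://pnc.example.net/restconf/data"  # placeholder

      epl_over_odu2_request = {
          "service-name": "C-R1_C-R3_EPL",
          "service-type": "EPL",
          "server-layer": "ODU2",
          "endpoints": [
              {"node": "S3", "access-link": "link-to-C-R1"},
              {"node": "S6", "access-link": "link-to-C-R3"},
          ],
      }

      response = requests.post(
          PNC_URL + "/example-services:service",  # hypothetical resource
          json=epl_over_odu2_request,
          headers={"Content-Type": "application/yang-data+json"},
          auth=("mdsc", "example-password"),
          timeout=10,
      )
      response.raise_for_status()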

3.3.3. Other OTN Client Services

   [ITU-T G.709-2016] defines mappings of different client layers
   into ODU.  Most of them are used to provide Private Line services
   over an OTN transport network supporting a variety of types of
   physical access links (e.g., Ethernet, SDH STM-N, Fibre Channel,
   InfiniBand).

   This use case assumes that the physical links interconnecting the
   IP routers and the transport network can be any of these options.

   If we consider the case of a 10Gb IP link between C-R1 and C-R3
   using SDH physical links, we need to instantiate an STM-64 Private
   Line service between C-R1 and C-R3, supported by an ODU2 end-to-
   end connection between S3 and S6, crossing transport node S5.

   The traffic flow between C-R1 and C-R3 can be summarized as:

      C-R1 (PKT -> STM-64), S3 (STM-64 -> ODU2), S5 (ODU2),
      S6 (ODU2 -> STM-64), C-R3 (STM-64 -> PKT)

   The MDSC should be capable, via the MPI, of requesting the setup
   of an STM-64 Private Line service with enough information to
   permit the transport PNC to instantiate and control the ODU2 end-
   to-end connection through nodes S3, S5 and S6, as well as the
   adaptation functions inside S3 and S6: S3&S6 (STM-64 -> ODU2) and
   S6&S3 (ODU2 -> STM-64).

3.3.4. EVPL over ODU

   For future revision.

3.3.5. EVPLAN and EVPTree Services

   For future revision.

3.3.6. Virtual Network Services

   For future revision.

3.4. Multi-functional Access Links

   For future revision.

3.5. Protection Scenarios

   The MDSC needs to be able to request that the transport PNC
   configure protection when requesting the setup of the connectivity
   services described in section 3.3.

   [Editor's note (for DT discussion):] Should we describe only
   protection or also restoration scenarios?

   Since in this use case switching is assumed to be performed only
   in one layer (the OTN ODU layer) for all the services defined in
   section 3.3, protection can only be provided at that same layer.

   Resiliency mechanisms on the access links are considered outside
   the scope of this use case.

   [Editor's note (for DT discussion):] I think that scenarios with
   access link resiliency could be seen as being multi-domain and/or
   multi-layer.  For further discussion with DT members.

3.5.1. Linear Protection

   It is possible to protect any service defined in section 3.3 from
   failures within the OTN transport domain by configuring a linear
   protection group, as defined in [ITU-T G.808.1], in the data plane
   between node S3 and node S6.

   [Editor's note:] Check for IETF references about protection
   definitions.

   The OTN linear protection group can be configured to operate as
   1+1 unidirectional, 1+1 bidirectional (to check) or 1:n
   bidirectional, as defined in [ITU-T G.808.1] and [ITU-T G.873.1].

   [Editor's note (for DT discussion):] The most common protection
   mechanism used in OTN networks is 1+1 unidirectional.  Should we
   also consider the other cases?

   In these scenarios, a working transport entity and a protection
   transport entity, as defined in [ITU-T G.808.1], should be
   configured in the data plane:

      Working transport entity:     S3 -> S5 -> S6

      Protection transport entity:  S3 -> S4 -> S8 -> S7 -> S6

   Requirements about how the MDSC can request the transport PNC to
   configure the protection group are for further study.

   [Editor's note:] Need to discuss whether the MDSC should decide
   that linear protection is needed, and whether it should be 1+1 or
   1:1, or whether the transport PNC can take this decision based on
   other information provided by the MDSC (or whether both options
   are possible).

   The transport PNC should be able to report to the MDSC which
   transport entity, as defined in [ITU-T G.808.1], is the active one
   in the data plane.

   Given the fast dynamics of protection switching operations in the
   data plane (50 ms recovery time), this reporting is not expected
   to be in real time.

   It is also worth noting that with unidirectional protection
   switching, e.g., 1+1 unidirectional, the active transport entity
   may be different in the two directions.
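
   To make the open question in the editor's note above more
   concrete, the following hypothetical Python fragments show two
   alternative ways the protection requirement could be expressed
   over the MPI; neither structure corresponds to a published YANG
   model.

      # Hypothetical alternatives for expressing protection over the
      # MPI; neither structure corresponds to a published YANG model.

      # Alternative 1: the MDSC explicitly selects the protection
      # type (and, optionally, the transport entities).
      explicit_protection = {
          "protection": {
              "type": "1+1-unidirectional",
              "working-entity": ["S3", "S5", "S6"],
              "protection-entity": ["S3", "S4", "S8", "S7", "S6"],
          }
      }

      # Alternative 2: the MDSC only states a resiliency objective
      # and the transport PNC decides whether and how to protect.
      delegated_protection = {
          "resiliency-objective": {
              "protection-required": True,
              "recovery-time-ms": 50,
          }
      }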

4. Use Case 2: Single-domain with multi-layer

   For future revision.

5. Use Case 3: Multi-domain with single-layer

   For future revision.

6. Use Case 4: Multi-domain and multi-layer

   For future revision.

7. Security Considerations

   For further study.

8. IANA Considerations

   This document requires no IANA actions.

9. References

9.1. Normative References

   [RFC7926]  Farrel, A. et al., "Problem Statement and Architecture
              for Information Exchange between Interconnected
              Traffic-Engineered Networks", BCP 206, RFC 7926, July
              2016.

   [ITU-T G.709-2016]  ITU-T Recommendation G.709 (06/16),
              "Interfaces for the optical transport network", June
              2016.

   [ITU-T G.808.1]  ITU-T Recommendation G.808.1, "Generic protection
              switching - Linear trail and subnetwork protection",
              May 2014.

   [ITU-T G.873.1]  ITU-T Recommendation G.873.1, "Optical transport
              network (OTN): Linear protection", May 2014.

   [ACTN-Frame]  Ceccarelli, D., Lee, Y. et al., "Framework for
              Abstraction and Control of Transport Networks",
              draft-ietf-teas-actn-framework, work in progress.

   [ACTN-Abstraction]  Lee, Y. et al., "Abstraction and Control of TE
              Networks (ACTN) Abstraction Methods",
              draft-lee-teas-actn-abstraction, work in progress.

9.2. Informative References

   [TE-Topo]  Liu, X. et al., "YANG Data Model for TE Topologies",
              draft-ietf-teas-yang-te-topo, work in progress.

   [ACTN-YANG]  Zhang, X. et al., "Applicability of YANG models for
              Abstraction and Control of Traffic Engineered
              Networks", draft-zhang-teas-actn-yang, work in
              progress.

   [Path-Compute]  Busi, I., Belotti, S. et al., "Yang model for
              requesting Path Computation",
              draft-busibel-teas-yang-path-computation, work in
              progress.

   [ONF TR-527]  ONF Technical Recommendation TR-527, "Functional
              Requirements for Transport API", June 2016.

   [ONF GitHub]  ONF Open Transport (SNOWMASS),
              https://github.com/OpenNetworkingFoundation/Snowmass-
              ONFOpenTransport

10. Acknowledgments

   The authors would like to thank all members of the Transport NBI
   Design Team involved in the definition of use cases, gap analysis
   and guidelines for using the IETF YANG models at the Northbound
   Interface (NBI) of a Transport SDN Controller.

   The authors would like to thank Xian Zhang, Anurag Sharma, Sergio
   Belotti, Tara Cummings, Michael Scharf, Karthik Sethuraman, Oscar
   Gonzalez de Dios, Hans Bjursrom and Italo Busi for having
   initiated the work on gap analysis for transport NBI and having
   provided foundation work for the development of this document.

   This document was prepared using 2-Word-v2.0.template.dot.

Authors' Addresses

   Italo Busi (Editor)
   Huawei
   Email: italo.busi@huawei.com

   Daniel King (Editor)
   Lancaster University
   Email: d.king@lancaster.ac.uk

   Sergio Belotti
   Nokia
   Email: sergio.belotti@nokia.com

   Gianmarco Bruno
   Ericsson
   Email: gianmarco.bruno@ericsson.com

   Young Lee
   Huawei
   Email: leeyoung@huawei.com

   Victor Lopez
   Telefonica
   Email: victor.lopezalvarez@telefonica.com

   Carlo Perocchio
   Ericsson
   Email: carlo.perocchio@ericsson.com

   Haomian Zheng
   Huawei
   Email: zhenghaomian@huawei.com