1 CCAMP Working Group I. Busi (Ed.) 2 Internet Draft Huawei 3 Intended status: Informational D.
King 4 Lancaster University 6 Expires: April 2018 October 30, 2017 8 Transport Northbound Interface Applicability Statement and Use Cases 9 draft-ietf-ccamp-transport-nbi-use-cases-01 11 Status of this Memo 13 This Internet-Draft is submitted in full conformance with the 14 provisions of BCP 78 and BCP 79. 16 Internet-Drafts are working documents of the Internet Engineering 17 Task Force (IETF), its areas, and its working groups. Note that 18 other groups may also distribute working documents as Internet- 19 Drafts. 21 Internet-Drafts are draft documents valid for a maximum of six 22 months and may be updated, replaced, or obsoleted by other documents 23 at any time. It is inappropriate to use Internet-Drafts as 24 reference material or to cite them other than as "work in progress." 26 The list of current Internet-Drafts can be accessed at 27 http://www.ietf.org/ietf/1id-abstracts.txt 29 The list of Internet-Draft Shadow Directories can be accessed at 30 http://www.ietf.org/shadow.html 32 This Internet-Draft will expire on April 30, 2018. 34 Copyright Notice 36 Copyright (c) 2017 IETF Trust and the persons identified as the 37 document authors. All rights reserved. 39 This document is subject to BCP 78 and the IETF Trust's Legal 40 Provisions Relating to IETF Documents 41 (https://trustee.ietf.org/license-info) in effect on the date of 42 publication of this document. Please review these documents 43 carefully, as they describe your rights and restrictions with respect 44 to this document. Code Components extracted from this document must 45 include Simplified BSD License text as described in Section 4.e of 46 the Trust Legal Provisions and are provided without warranty as 47 described in the Simplified BSD License. 49 Abstract 51 Transport network domains, including Optical Transport Network (OTN) 52 and Wavelength Division Multiplexing (WDM) networks, are typically 53 deployed based on single-vendor or single-technology platforms.
They are 54 often managed using proprietary interfaces to dedicated Element 55 Management Systems (EMS), Network Management Systems (NMS) and, 56 increasingly, Software Defined Network (SDN) controllers. 58 A well-defined open interface to each domain management system or 59 controller is required for network operators to facilitate control 60 automation and orchestrate end-to-end services across multi-domain 61 networks. These functions may be enabled using standardized data 62 models (e.g., YANG) and an appropriate protocol (e.g., RESTCONF). 64 This document describes the key use cases and requirements to be 65 used as the basis for applicability statements analyzing how IETF 66 data models can be used for transport network control and 67 management. 69 Table of Contents 71 1. Introduction ................................................ 3 72 1.1. Scope of this document ................................. 4 73 2. Terminology ................................................. 4 74 3. Conventions used in this document ........................... 4 75 3.1. Topology and traffic flow processing ................... 4 76 4. Use Case 1: Single-domain with single-layer ................. 5 77 4.1. Reference Network ...................................... 5 78 4.1.1. Single Transport Domain - OTN Network ............. 5 79 4.2. Topology Abstractions .................................. 8 80 4.3. Service Configuration .................................. 9 81 4.3.1. ODU Transit ....................................... 9 82 4.3.2. EPL over ODU ..................................... 10 83 4.3.3. Other OTN Client Services ........................ 10 84 4.3.4. EVPL over ODU .................................... 11 85 4.3.5. EVPLAN and EVPTree Services ...................... 12 86 4.4. Multi-functional Access Links ......................... 13 87 4.5. Protection Requirements ............................... 14 88 4.5.1. Linear Protection ................................ 15 89 5. Use Case 2: Single-domain with multi-layer ................. 15 90 5.1.
Reference Network ..................................... 15 91 5.2. Topology Abstractions ................................. 16 92 5.3. Service Configuration ................................. 16 93 6. Use Case 3: Multi-domain with single-layer ................. 16 94 6.1. Reference Network ..................................... 16 95 6.2. Topology Abstractions ................................. 19 96 6.3. Service Configuration ................................. 19 97 6.3.1. ODU Transit ...................................... 20 98 6.3.2. EPL over ODU ..................................... 20 99 6.3.3. Other OTN Client Services ........................ 21 100 6.3.4. EVPL over ODU .................................... 21 101 6.3.5. EVPLAN and EVPTree Services ...................... 21 102 6.4. Multi-functional Access Links ......................... 22 103 6.5. Protection Scenarios .................................. 22 104 6.5.1. Linear Protection (end-to-end) ................... 23 105 6.5.2. Segmented Protection ............................. 23 106 7. Use Case 4: Multi-domain and multi-layer ................... 24 107 7.1. Reference Network ..................................... 24 108 7.2. Topology Abstractions ................................. 25 109 7.3. Service Configuration ................................. 25 110 8. Security Considerations .................................... 25 111 9. IANA Considerations ........................................ 26 112 10. References ................................................ 26 113 10.1. Normative References ................................. 26 114 10.2. Informative References ............................... 26 115 11. Acknowledgments ........................................... 27 117 1. 
Introduction 119 Transport of packet services is critical for a wide range of 120 applications and services, including: data center and LAN 121 interconnects, Internet service backhauling, mobile backhaul and 122 enterprise Carrier Ethernet Services. These services are typically 123 set up using stovepipe NMS and EMS platforms, often requiring 124 proprietary management platforms and legacy management interfaces. A 125 clear goal for operators is to automate the setup of transport 126 services across multiple transport technology domains. 128 A common open interface (API) to each domain controller and/or 129 management system is a prerequisite for network operators to control 130 multi-vendor and multi-domain networks and also enable service 131 provisioning coordination and automation. This can be achieved by using 132 standardized YANG models, used together with an appropriate protocol 133 (e.g., [RESTCONF]). 135 This document describes key use cases for analyzing the 136 applicability of the models defined by the IETF for transport 137 networks. The intention of this document is to provide the base 138 reference scenarios for applicability statements that will describe 139 in detail how IETF transport models are applied to solve the 140 described use cases and requirements. 142 1.1. Scope of this document 144 This document assumes a reference architecture, including 145 interfaces, based on the Abstraction and Control of Traffic- 146 Engineered Networks (ACTN), defined in [ACTN-Frame]. 148 The focus of this document is on the MPI (interface between the 149 Multi Domain Service Coordinator (MDSC) and a Physical Network 150 Controller (PNC), controlling a transport network domain). 152 The relationship between the current IETF YANG models and the type 153 of ACTN interfaces can be found in [ACTN-YANG].
155 The ONF Technical Recommendations for Functional Requirements for 156 the transport API in [ONF TR-527] and the ONF transport API multi- 157 layer examples in [ONF GitHub] have been considered as input for 158 this work. 160 Considerations about the CMI (interface between the Customer Network 161 Controller (CNC) and the MDSC) are outside the scope of this 162 document. 164 2. Terminology 166 E-LINE: Ethernet Line 168 EPL: Ethernet Private Line 170 EVPL: Ethernet Virtual Private Line 172 OTH: Optical Transport Hierarchy 174 OTN: Optical Transport Network 176 3. Conventions used in this document 178 3.1. Topology and traffic flow processing 180 The traffic flow between different nodes is specified as an ordered 181 list of nodes, separated by commas, indicating within the brackets 182 the processing within each node: 184 <node> (<processing>) {, <node> (<processing>)} 186 The order represents the order of traffic flow being forwarded 187 through the network. 189 The processing can be either an adaptation of a client layer into a 190 server layer "(client -> server)" or switching at a given layer 191 "([switching])". Multi-layer switching is indicated by two-layer 192 switching with client/server adaptation: "([client] -> [server])". 194 For example, the following traffic flow: 196 C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S5 ([ODU2]), S6 ([ODU2]), 197 C-R3 (ODU2 -> [PKT]) 199 Node C-R1 is switching at the packet (PKT) layer and mapping packets 200 into an ODU2 before transmission to node S3. Nodes S3, S5 and S6 are 201 switching at the ODU2 layer: S3 sends the ODU2 traffic to S5 which 202 then sends it to S6 which finally sends it to C-R3. Node C-R3 203 terminates the ODU2 from S6 before switching at the packet (PKT) 204 layer. 206 The paths of working and protection transport entities are specified 207 as an ordered list of nodes, separated by commas: 209 <node> {, <node>} 211 The order represents the order of traffic flow being forwarded 212 through the network in the forward direction.
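The traffic-flow notation can be captured in a small helper; the following sketch (Python used purely for illustration, with node and layer names taken from the examples in this document) renders a list of (node, processing) hops in the convention above:

```python
# Minimal sketch of the traffic-flow notation: each hop is a
# (node, processing) pair, where processing follows the convention
# "client -> server" for adaptation and "[layer]" for switching.

def format_flow(hops):
    """Render an ordered list of (node, processing) pairs as a flow string."""
    return ", ".join(f"{node} ({proc})" for node, proc in hops)

flow = [
    ("C-R1", "[PKT] -> ODU2"),   # packet switching, then mapping into ODU2
    ("S3",   "[ODU2]"),          # ODU2 switching
    ("S5",   "[ODU2]"),
    ("S6",   "[ODU2]"),
    ("C-R3", "ODU2 -> [PKT]"),   # ODU2 termination, then packet switching
]

print(format_flow(flow))
# C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S5 ([ODU2]), S6 ([ODU2]), C-R3 (ODU2 -> [PKT])
```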
In the case of 213 bidirectional paths, the forward and backward directions are 214 selected arbitrarily, but the convention is consistent between 215 working/protection path pairs as well as across multiple domains. 217 4. Use Case 1: Single-domain with single-layer 219 4.1. Reference Network 221 The current considerations discussed in this document are based on 222 the following reference networks: 224 - single transport domain: OTN network 226 4.1.1. Single Transport Domain - OTN Network 228 As shown in Figure 1, the network physical topology is composed of a 229 single-domain transport network providing transport services to an 230 IP network through five access links. 232 ................................................ 233 : IP domain : 234 : .............................. : 235 : : ........................ : : 236 : : : : : : 237 : : : S1 -------- S2 ------ C-R4 : 238 : : : / | : : : 239 : : : / | : : : 240 : C-R1 ------ S3 ----- S4 | : : : 241 : : : \ \ | : : : 242 : : : \ \ | : : : 243 : : : S5 \ | : : : 244 : C-R2 -----+ / \ \ | : : : 245 : : : \ / \ \ | : : : 246 : : : S6 ---- S7 ---- S8 ------ C-R5 : 247 : : : / : : : 248 : C-R3 -----+ : : : 249 : : : Transport domain : : : 250 : : : : : : 251 :........: :......................: :........: 252 Figure 1 Reference network for Use Case 1 254 The IP and transport (OTN) domains are composed respectively of five 255 routers, C-R1 to C-R5, and of eight ODU switches, S1 to S8. The 256 transport domain acts as a transit network providing connectivity 257 for IP layer services. 259 The behavior of the transport domain is the same whether the 260 ingress or egress service nodes in the IP domain are only attached 261 to the transport domain, or if there are other routers in between 262 the ingress or egress nodes of the IP domain not also attached to 263 the transport domain. In other words, the behavior of the transport 264 network does not depend on whether C-R1, C-R2, ..., C-R5 are PE or P 265 routers for the IP services.
267 The transport domain control plane architecture follows the ACTN 268 architecture and framework document [ACTN-Frame] and its functional 269 components: 271 o The Customer Network Controller (CNC) acts as a client with respect to 272 the Multi-Domain Service Coordinator (MDSC) via the CNC-MDSC 273 Interface (CMI); 275 o The MDSC is connected to a plurality of Physical Network Controllers 276 (PNCs), one for each domain, via an MDSC-PNC Interface (MPI). Each 277 PNC is responsible only for the control of its domain and the 278 MDSC is the only entity capable of multi-domain functionalities 279 as well as of managing the inter-domain links; 281 The ACTN framework facilitates the detachment of the network and 282 service control from the underlying technology and helps the customer 283 express the network as desired by business needs. Therefore, care 284 must be taken to keep minimal dependency on the CMI (or no 285 dependency at all) with respect to the network domain technologies. 286 The MPI instead requires some specialization according to the domain 287 technology. 289 +-----+ 290 | CNC | 291 +-----+ 292 | 293 |CMI I/F 294 | 295 +-----------------------+ 296 | MDSC | 297 +-----------------------+ 298 | 299 |MPI I/F 300 | 301 +-------+ 302 | PNC | 303 +-------+ 304 | 305 ----- 306 ( ) 307 ( OTN ) 308 ( Physical ) 309 ( Network ) 310 ( ) 311 ----- 313 Figure 2 Controlling Hierarchy for Use Case 1 315 Once the service request is processed by the MDSC, the mapping of the 316 client IP traffic between the routers (across the transport network) 317 is made in the IP routers only and is not controlled by the 318 transport PNC, and is therefore transparent to the transport nodes. 320 4.2. Topology Abstractions 322 Abstraction provides a selective method for representing 323 connectivity information within a domain. There are multiple methods 324 to abstract a network topology.
This document assumes the 325 abstraction method defined in [RFC7926]: 327 "Abstraction is the process of applying policy to the available TE 328 information within a domain, to produce selective information that 329 represents the potential ability to connect across the domain. 330 Thus, abstraction does not necessarily offer all possible 331 connectivity options, but presents a general view of potential 332 connectivity according to the policies that determine how the 333 domain's administrator wants to allow the domain resources to be 334 used." 336 [TE-Topo] describes a base YANG model for TE topology without any 337 technology-specific parameters. Moreover, it defines how to abstract 338 TE network topologies. 340 [ACTN-Frame] provides the context of topology abstraction in the 341 ACTN architecture and discusses a few alternatives for the 342 abstraction methods for both packet and optical networks. This is an 343 important consideration since the choice of the abstraction method 344 impacts protocol design and the information it carries. According 345 to [ACTN-Frame], there are three types of topology: 347 o White topology: This is a case where the Physical Network 348 Controller (PNC) provides the actual network topology to the 349 Multi-Domain Service Coordinator (MDSC) without any hiding or 350 filtering. In this case, the MDSC has full knowledge of the 351 underlying network topology; 353 o Black topology: The entire domain network is abstracted as a 354 single virtual node with the access/egress links without 355 disclosing any node internal connectivity information; 357 o Grey topology: This abstraction level is between black topology 358 and white topology from a granularity point of view. This is an 359 abstraction of TE tunnels for all pairs of border nodes.
We may 360 further differentiate from a perspective of how to abstract 361 internal TE resources between the pairs of border nodes: 363 - Grey topology type A: border nodes with TE links between 364 them in a full-mesh fashion; 366 - Grey topology type B: border nodes with some internal 367 abstracted nodes and abstracted links. 369 For the single-domain with single-layer use case, the white topology may 370 be disseminated from the PNC to the MDSC in most cases. There may be 371 some exceptions in cases where the underlay network has 372 complex optical parameters that do not warrant the 373 distribution of such details to the MDSC. In such cases, the topology 374 disseminated from the PNC to the MDSC may not carry the entire TE 375 information but only streamlined TE information. This case would incur 376 another action from the MDSC's standpoint when provisioning a path. 377 The MDSC may make a path compute request to the PNC to verify the 378 feasibility of the estimated path before making the final 379 provisioning request to the PNC, as outlined in [Path-Compute]. 381 Topology abstraction for the CMI is for further study (to be 382 addressed in future revisions of this document). 384 4.3. Service Configuration 386 In the following use cases, the Multi Domain Service Coordinator 387 (MDSC) needs to be capable of requesting service connectivity from the 388 transport Physical Network Controller (PNC) to support IP router 389 connectivity. The type of services could depend on the type of 390 physical links (e.g., OTN link, ETH link or SDH link) between the 391 routers and the transport network. 393 As described in section 4.1.1, the control of different adaptations 394 inside IP routers, C-Ri (PKT -> foo) and C-Rj (foo -> PKT), is 395 assumed to be performed by means that are not under the control of, 396 and not visible to, the transport PNC. Therefore, these mechanisms are 397 outside the scope of this document. 399 4.3.1.
ODU Transit 401 This use case assumes that the physical links interconnecting the IP 402 routers and the transport network are OTN links. The 403 physical/optical interconnection below the ODU layer is supposed to 404 be pre-configured and not exposed at the MPI to the MDSC. 406 To set up a 10Gb IP link between C-R1 and C-R3, an ODU2 end-to-end 407 data plane connection needs to be created between C-R1 and C-R3, 408 crossing transport nodes S3, S5, and S6. 410 The traffic flow between C-R1 and C-R3 can be summarized as: 412 C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S5 ([ODU2]), S6 ([ODU2]), 413 C-R3 (ODU2 -> [PKT]) 415 The MDSC should be capable, via the MPI, of requesting the setup of an 416 ODU2 transit service with enough information to enable the 417 transport PNC to instantiate and control the ODU2 data plane 418 connection segment through nodes S3, S5, S6. 420 4.3.2. EPL over ODU 422 This use case assumes that the physical links interconnecting the IP 423 routers and the transport network are Ethernet links. 425 In order to set up a 10Gb IP link between C-R1 and C-R3, an EPL 426 service needs to be created between C-R1 and C-R3, supported by an 427 ODU2 end-to-end connection between S3 and S6, crossing transport 428 node S5. 430 The traffic flow between C-R1 and C-R3 can be summarized as: 432 C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S5 ([ODU2]), 433 S6 ([ODU2] -> ETH), C-R3 (ETH -> [PKT]) 435 The MDSC should be capable, via the MPI, of requesting the setup of an 436 EPL service with enough information to permit the transport 437 PNC to instantiate and control the ODU2 end-to-end data plane 438 connection through nodes S3, S5, S6, as well as the adaptation 439 functions inside S3 and S6: S3&S6 (ETH -> ODU2) and S6&S3 (ODU2 -> 440 ETH). 442 4.3.3. Other OTN Client Services 444 [ITU-T G.709-2016] defines mappings of different client layers into 445 ODU.
Most of them are used to provide Private Line services over 446 an OTN transport network supporting a variety of types of physical 447 access links (e.g., Ethernet, SDH STM-N, Fibre Channel, InfiniBand, 448 etc.). 450 This use case assumes that the physical links interconnecting the IP 451 routers and the transport network are any one of these possible 452 options. 454 In order to set up a 10Gb IP link between C-R1 and C-R3 using, for 455 example, STM-64 physical links between the IP routers and the 456 transport network, an STM-64 Private Line service needs to be 457 created between C-R1 and C-R3, supported by an ODU2 end-to-end data 458 plane connection between S3 and S6, crossing transport node S5. 460 The traffic flow between C-R1 and C-R3 can be summarized as: 462 C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S5 ([ODU2]), 463 S6 ([ODU2] -> STM-64), C-R3 (STM-64 -> [PKT]) 465 The MDSC should be capable, via the MPI, of requesting the setup of an 466 STM-64 Private Line service with enough information to permit 467 the transport PNC to instantiate and control the ODU2 end-to-end 468 connection through nodes S3, S5, S6, as well as the adaptation 469 functions inside S3 and S6: S3&S6 (STM-64 -> ODU2) and S6&S3 (ODU2 470 -> STM-64). 472 4.3.4. EVPL over ODU 474 This use case assumes that the physical links interconnecting the IP 475 routers and the transport network are Ethernet links and that 476 different Ethernet services (e.g., EVPL) can share the same physical 477 link using different VLANs. 479 In order to set up two 1Gb IP links between C-R1 and C-R3 and between 480 C-R1 and C-R4, two EVPL services need to be created, supported by 481 two ODU0 end-to-end connections respectively between S3 and S6, 482 crossing transport node S5, and between S3 and S2, crossing 483 transport node S1.
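The relationship between the two EVPL services, their shared access link and their ODU0 connections can be sketched as follows (a purely illustrative grouping in Python, not the IETF YANG model; service names, link identifiers and the per-link VLAN check are assumptions for the example):

```python
# Illustrative sketch: two EVPL services sharing the C-R1 - S3 access
# link, each supported by its own ODU0 connection and disambiguated on
# the shared link by its VLAN ID (IDs follow the example in the text).

evpl_services = [
    {"name": "evpl-1", "vlan": 10, "access": ("C-R1/S3", "C-R3/S6"),
     "odu0_path": ["S3", "S5", "S6"]},
    {"name": "evpl-2", "vlan": 20, "access": ("C-R1/S3", "C-R4/S2"),
     "odu0_path": ["S3", "S1", "S2"]},
]

def vlans_on_link(services, link):
    """Collect the VLAN IDs of every service using a given access link."""
    return [s["vlan"] for s in services if link in s["access"]]

# The shared access link must carry a distinct VLAN ID per EVPL service.
shared = vlans_on_link(evpl_services, "C-R1/S3")
assert len(shared) == len(set(shared)), "VLAN clash on shared access link"
print(shared)  # [10, 20]
```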
485 Since the two EVPL services are sharing the same Ethernet physical 486 link between C-R1 and S3, different VLAN IDs are associated with 487 different EVPL services: for example, VLAN IDs 10 and 20, 488 respectively. 490 The traffic flow between C-R1 and C-R3 can be summarized as: 492 C-R1 ([PKT] -> VLAN), S3 (VLAN -> [ODU0]), S5 ([ODU0]), 493 S6 ([ODU0] -> VLAN), C-R3 (VLAN -> [PKT]) 495 The traffic flow between C-R1 and C-R4 can be summarized as: 497 C-R1 ([PKT] -> VLAN), S3 (VLAN -> [ODU0]), S1 ([ODU0]), 498 S2 ([ODU0] -> VLAN), C-R4 (VLAN -> [PKT]) 500 The MDSC should be capable, via the MPI, of requesting the setup of these 501 EVPL services with enough information to permit the transport 502 PNC to instantiate and control the ODU0 end-to-end data plane 503 connections as well as the adaptation functions on the boundary 504 nodes: S3&S2&S6 (VLAN -> ODU0) and S3&S2&S6 (ODU0 -> VLAN). 506 4.3.5. EVPLAN and EVPTree Services 508 This use case assumes that the physical links interconnecting the IP 509 routers and the transport network are Ethernet links and that 510 different Ethernet services (e.g., EVPL, EVPLAN and EVPTree) can 511 share the same physical link using different VLANs. 513 Note - it is assumed that EPLAN and EPTree services can be supported 514 by configuring EVPLAN and EVPTree with port mapping. 516 In order to set up an IP subnet between C-R1, C-R2, C-R3 and C-R4, an 517 EVPLAN/EVPTree service needs to be created, supported by two ODUflex 518 end-to-end connections respectively between S3 and S6, crossing 519 transport node S5, and between S3 and S2, crossing transport node 520 S1. 522 In order to support this EVPLAN/EVPTree service, some Ethernet 523 Bridging capabilities are required on some nodes at the edge of the 524 transport network: for example, Ethernet Bridging capabilities can be 525 configured in nodes S3 and S6 but not in node S2.
527 Since this EVPLAN/EVPTree service can share the same Ethernet 528 physical links between IP routers and transport nodes (e.g., with 529 the EVPL services described in section 4.3.4), a different VLAN ID 530 (e.g., 30) can be associated with this EVPLAN/EVPTree service. 532 In order to support an EVPTree service instead of an EVPLAN, 533 additional configuration of the Ethernet Bridging capabilities on 534 the nodes at the edge of the transport network is required. 536 The MAC bridging function in node S3 is needed to select, based on 537 the MAC Destination Address, whether the Ethernet frames from C-R1 538 should be sent to the ODUflex terminating on node S6 or to the other 539 ODUflex terminating on node S2. 541 The MAC bridging function in node S6 is needed to select, based on 542 the MAC Destination Address, whether the Ethernet frames received 543 from the ODUflex should be sent to C-R2 or C-R3, as well as whether 544 the Ethernet frames received from C-R2 (or C-R3) should be sent to 545 C-R3 (or C-R2) or to the ODUflex. 547 For example, the traffic flow between C-R1 and C-R3 can be 548 summarized as: 550 C-R1 ([PKT] -> VLAN), S3 (VLAN -> [MAC] -> [ODUflex]), 551 S5 ([ODUflex]), S6 ([ODUflex] -> [MAC] -> VLAN), 552 C-R3 (VLAN -> [PKT]) 554 The MAC bridging function in node S3 is also needed to select, based 555 on the MAC Destination Address, whether the Ethernet frames received 556 from one ODUflex should be sent to C-R1 or to the other ODUflex. 558 For example, the traffic flow between C-R3 and C-R4 can be 559 summarized as: 561 C-R3 ([PKT] -> VLAN), S6 (VLAN -> [MAC] -> [ODUflex]), 562 S5 ([ODUflex]), S3 ([ODUflex] -> [MAC] -> [ODUflex]), 563 S1 ([ODUflex]), S2 ([ODUflex] -> VLAN), C-R4 (VLAN -> [PKT]) 565 In node S2 there is no need for any MAC bridging function since all 566 the Ethernet frames received from C-R4 should be sent to the ODUflex 567 toward S3 and vice versa.
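The MAC-based selection described above for node S3 can be sketched as a simple forwarding table (a hedged illustration: the port names and MAC addresses are hypothetical, and a real bridge would learn addresses dynamically rather than use a static table):

```python
# Sketch of the MAC bridging decision in node S3: frames are steered,
# by destination MAC, to the access port towards C-R1, to the ODUflex
# towards S6, or to the ODUflex towards S2. All names are illustrative.

S3_PORTS = {"access-C-R1", "oduflex-to-S6", "oduflex-to-S2"}

s3_mac_table = {
    "00:00:00:00:00:01": "access-C-R1",    # C-R1 side
    "00:00:00:00:00:03": "oduflex-to-S6",  # towards C-R2/C-R3 behind S6
    "00:00:00:00:00:04": "oduflex-to-S2",  # towards C-R4 behind S2
}

def s3_forward(dst_mac, in_port):
    """Pick the output port for a frame; flood on unknown destination."""
    out = s3_mac_table.get(dst_mac)
    if out is None:
        # Unknown destination: flood on every port except the ingress one.
        return sorted(S3_PORTS - {in_port})
    return out

assert s3_forward("00:00:00:00:00:04", "access-C-R1") == "oduflex-to-S2"
```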
569 The traffic flow between C-R1 and C-R4 can be summarized as: 571 C-R1 ([PKT] -> VLAN), S3 (VLAN -> [MAC] -> [ODUflex]), 572 S1 ([ODUflex]), S2 ([ODUflex] -> VLAN), C-R4 (VLAN -> [PKT]) 574 The MDSC should be capable, via the MPI, of requesting the setup of this 575 EVPLAN/EVPTree service with enough information to permit the 576 transport PNC to instantiate and control the ODUflex end-to-end data 577 plane connections as well as the Ethernet Bridging and adaptation 578 functions on the boundary nodes: S3&S6 (VLAN -> MAC -> ODUflex), S3&S6 579 (ODUflex -> MAC -> VLAN), S2 (VLAN -> ODUflex) and S2 (ODUflex -> VLAN). 581 4.4. Multi-functional Access Links 583 This use case assumes that some physical links interconnecting the 584 IP routers and the transport network can be configured in different 585 modes, e.g., as OTU2 or STM-64 or 10GE. 587 This configuration can be done a priori by means outside the scope 588 of this document. In this case, these links will appear at the MPI 589 either as an ODU Link or as an STM-64 Link or as a 10GE Link 590 (depending on the a priori configuration) and will be controlled at 591 the MPI as discussed in section 4.3. 593 It is also possible not to configure these links a priori and to let 594 the mode be decided via the MPI, based on the service 595 configuration. 597 For example, if the physical link between C-R1 and S3 is a multi- 598 functional access link while the physical links between C-R3 and S6 599 and between C-R4 and S2 are STM-64 and 10GE physical links 600 respectively, it is possible at the MPI to configure either an STM- 601 64 Private Line service between C-R1 and C-R3 or an EPL service 602 between C-R1 and C-R4.
604 The traffic flow between C-R1 and C-R3 can be summarized as: 606 C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S5 ([ODU2]), 607 S6 ([ODU2] -> STM-64), C-R3 (STM-64 -> [PKT]) 609 The traffic flow between C-R1 and C-R4 can be summarized as: 611 C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S1 ([ODU2]), 612 S2 ([ODU2] -> ETH), C-R4 (ETH -> [PKT]) 614 The MDSC should be capable, via the MPI, of requesting the setup of 615 either service with enough information to permit the transport 616 PNC to instantiate and control the ODU2 end-to-end data plane 617 connection as well as the adaptation functions inside S3 and S2 or 618 S6. 620 4.5. Protection Requirements 622 Protection switching provides a pre-allocated survivability 623 mechanism, typically provided via linear protection methods, and 624 would be configured to operate as 1+1 unidirectional (the most 625 common OTN protection method), 1+1 bidirectional or 1:n 626 bidirectional. This ensures fast and simple service survivability. 628 The MDSC needs to be capable of requesting the transport PNC to 629 configure protection when requesting the setup of the connectivity 630 services described in section 4.3. 632 Since in this use case it is assumed that switching within the 633 transport network domain is performed only in one layer, 634 protection switching within the transport network domain can likewise only be 635 provided at the OTN ODU layer, for all the services defined in 636 section 4.3. 638 It may be necessary to consider not only protection, but also 639 restoration functions in the future. Restoration methods would 640 provide the capability to reroute and restore traffic 641 around network faults, without the network penalty imposed by 642 dedicated 1+1 protection schemes. 644 4.5.1. Linear Protection 646 It is possible to protect any service defined in section 4.3 from 647 failures within the OTN transport domain by configuring OTN linear 648 protection in the data plane between node S3 and node S6.
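Two disjoint transport entities between S3 and S6 can be derived directly from the Figure 1 topology. The following is a minimal sketch: the adjacency list is reconstructed approximately from the figure, and a plain breadth-first search stands in for a real path computation:

```python
from collections import deque

# ODU-layer adjacency reconstructed (approximately) from Figure 1.
TOPO = {
    "S1": {"S2", "S3"},
    "S2": {"S1", "S8"},
    "S3": {"S1", "S4", "S5"},
    "S4": {"S3", "S8"},
    "S5": {"S3", "S6", "S7"},
    "S6": {"S5", "S7"},
    "S7": {"S5", "S6", "S8"},
    "S8": {"S2", "S4", "S7"},
}

def shortest_path(topo, src, dst, banned=frozenset()):
    """BFS shortest path from src to dst, avoiding 'banned' nodes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dst:
            return path
        for nxt in sorted(topo[node]):
            if nxt not in seen and nxt not in banned:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Working entity, then a node-disjoint protection entity.
working = shortest_path(TOPO, "S3", "S6")
protection = shortest_path(TOPO, "S3", "S6", banned=set(working[1:-1]))
print(working)     # ['S3', 'S5', 'S6']
print(protection)  # ['S3', 'S4', 'S8', 'S7', 'S6']
```

The two paths found match the working and protection transport entities listed in this section.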
650 It is assumed that the OTN linear protection is configured with the 651 1+1 unidirectional protection switching type, as defined in [ITU-T 652 G.808.1-2014] and [ITU-T G.873.1-2014], as well as in [RFC4427]. 654 In these scenarios, a working transport entity and a protection 655 transport entity, as defined in [ITU-T G.808.1-2014], (or a working 656 LSP and a protection LSP, as defined in [RFC4427]) should be 657 configured in the data plane, for example: 659 Working transport entity: S3, S5, S6 661 Protection transport entity: S3, S4, S8, S7, S6 663 The Transport PNC should be capable of reporting to the MDSC which 664 transport entity, as defined in [ITU-T G.808.1-2014], is active in 665 the data plane. 667 Given the fast dynamics of protection switching operations in the 668 data plane (50ms recovery time), this reporting is not expected to 669 be in real-time. 671 It is also worth noting that with unidirectional protection 672 switching, e.g., 1+1 unidirectional protection switching, the active 673 transport entity may be different in the two directions. 675 5. Use Case 2: Single-domain with multi-layer 677 5.1. Reference Network 679 The current considerations discussed in this document are based on 680 the following reference network: 682 - single transport domain: OTN and OCh multi-layer network 684 In this use case, the same reference network shown in Figure 1 is 685 considered. The only difference is that all the transport nodes are 686 capable of switching in the ODU as well as in the OCh layer. 688 All the physical links within the transport network are therefore 689 assumed to be OCh links. Therefore, with the exception of the access 690 links, no ODU internal link exists before an OCh end-to-end data 691 plane connection is created within the network. 693 The controlling hierarchy is the same as described in Figure 2.
695 The interface within the scope of this document is the Transport MPI
696 which should be capable of controlling both the OTN and OCh layers.
698 5.2. Topology Abstractions
700 A grey topology type B abstraction is assumed: abstract nodes and
701 links exposed at the MPI correspond 1:1 with the physical nodes and
702 links controlled by the PNC, but the PNC abstracts/hides at least
703 some optical parameters to be used within the OCh layer.
705 5.3. Service Configuration
707 The same service scenarios, as described in section 4.3, are also
708 applicable to this use case, with the only difference that end-to-
709 end OCh data plane connections will need to be set up before ODU
710 data plane connections.
712 6. Use Case 3: Multi-domain with single-layer
714 6.1. Reference Network
716 In this section we focus on a multi-domain reference network with
717 homogeneous technologies:
719    - multiple transport domains: OTN networks
721 Figure 3 shows the network physical topology composed of three
722 transport network domains providing transport services to an IP
723 customer network through eight access links:
725 ........................
726 .......... : :
727 : : : Network domain 1 : .............
728 :Customer: : : : :
729 :domain 1: : S1 -------+ : : Network :
730 : : : / \ : : domain 3 : ..........
731 : C-R1 ------- S3 ----- S4 \ : : : : :
732 : : : \ \ S2 --------+ : :Customer:
733 : : : \ \ | : : \ : :domain 3:
734 : : : S5 \ | : : \ : : :
735 : C-R2 ------+ / \ \ | : : S31 --------- C-R7 :
736 : : : \ / \ \ | : : / \ : : :
737 : : : S6 ---- S7 ---- S8 ------ S32 S33 ------ C-R8 :
738 : : : / | | : : / \ / : :........:
739 : C-R3 ------+ | | : :/ S34 :
740 : : :..........|.......|...: / / :
741 :........: | | /:.../.......:
742 | | / /
743 ...........|.......|..../..../...
744 : | | / / : ..........
745 : Network | | / / : : :
746 : domain 2 | | / / : :Customer:
747 : S11 ---- S12 / : :domain 2:
748 : / | \ / : : :
749 : S13 S14 | S15 ------------- C-R4 :
750 : | \ / \ | \ : : :
751 : | S16 \ | \ : : :
752 : | / S17 -- S18 --------- C-R5 :
753 : | / \ / : : :
754 : S19 ---- S20 ---- S21 ------------ C-R6 :
755 : : : :
756 :...............................: :........:
758 Figure 3 Reference network for Use Case 3
760 It is worth noting that the network domain 1 is identical to the
761 transport domain shown in Figure 1.
763 --------------
764 | Client |
765 | Controller |
766 --------------
767 |
768 ....................|.......................
769 |
770 ----------------
771 | |
772 | MDSC |
773 | |
774 ----------------
775 / | \
776 / | \
777 ............../.....|......\................
778 / | \
779 / ---------- \
780 / | PNC2 | \
781 / ---------- \
782 ---------- | \
783 | PNC1 | ----- \
784 ---------- ( ) ----------
785 | ( ) | PNC3 |
786 ----- ( Network ) ----------
787 ( ) ( Domain 2 ) |
788 ( ) ( ) -----
789 ( Network ) ( ) ( )
790 ( Domain 1 ) ----- ( )
791 ( ) ( Network )
792 ( ) ( Domain 3 )
793 ----- ( )
794 ( )
795 -----
797 Figure 4 Controlling Hierarchy for Use Case 3
799 In this section we address the case where the CNC controls the
800 customer IP network and requests transport connectivity among IP
801 routers, via the CMI, from an MDSC which coordinates, via three MPIs,
802 the control of a multi-domain transport network through three PNCs.
804 The interfaces within the scope of this document are the three MPIs,
805 while the interface between the CNC and the IP routers, as well as
806 considerations about the CMI, are outside the scope of this
807 document.
809 6.2. Topology Abstractions
811 Each PNC should provide the MDSC with a topology abstraction of the
812 domain's network topology.
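One way to picture how the MDSC could glue per-domain abstract topologies into a single multi-domain view is sketched below. The link and node representations are illustrative assumptions only; the abstract nodes AN31/AN32 follow the PNC3 abstraction example given in this section.

```python
# Sketch of how the MDSC could glue per-domain abstract topologies into a
# single multi-domain view. Link and node representations are illustrative
# assumptions, not any defined topology model.
def merge_topologies(domain_topologies, inter_domain_links):
    """domain_topologies: dict mapping a PNC name to its set of links.
    inter_domain_links: links interconnecting nodes of different domains."""
    multi_domain_view = set()
    for links in domain_topologies.values():
        multi_domain_view |= set(links)
    multi_domain_view |= set(inter_domain_links)
    return multi_domain_view

topology = merge_topologies(
    {"PNC1": {("S3", "S1"), ("S1", "S2")},        # white abstraction
     "PNC3": {("AN31", "AN32")}},                 # abstract nodes only
    [("S2", "AN31")],  # inter-domain link, as correlated by the MDSC
)
```

As noted in the text, correlating the inter-domain link endpoints exposed by different PNCs might require administrative configuration or other mechanisms still to be defined.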
814 Each PNC provides the topology abstraction of its own domain
815 independently of the others; it is therefore possible that
816 different PNCs provide different types of topology abstractions.
818 As an example, we can assume that:
820    o PNC1 provides a white topology abstraction (as in use case 1,
821      described in section 4.2)
823    o PNC2 provides a type A grey topology abstraction
825    o PNC3 provides a type B grey topology abstraction, with two
826      abstract nodes (AN31 and AN32), abstracting respectively nodes
827      S31+S33 and nodes S32+S34. At the MPI, only the abstract nodes
828      should be reported: the mapping between the abstract nodes (AN31
829      and AN32) and the physical nodes (S31, S32, S33 and S34) should
830      be done internally by the PNC.
832 The MDSC should be capable of gluing together these different
833 abstract topologies to build its own view of the multi-domain
834 network topology. This might require proper administrative
835 configuration or other mechanisms (to be defined/analysed).
837 6.3. Service Configuration
839 In the following use cases, it is assumed that the CNC is capable of
840 requesting service connectivity from the MDSC to support IP router
841 connectivity.
843 The same service scenarios, as described in section 4.3, are also
844 applicable to this use case, with the only difference that the two
845 IP routers to be interconnected are attached to transport nodes
846 which belong to different PNC domains and are under the control of
847 the CNC.
849 As with the service scenarios in section 4.3, the type of service
850 could depend on the type of physical links (e.g. OTN link, ETH link
851 or SDH link) between the customer's routers and the multi-domain
852 transport network, and the configuration of the different adaptations
853 inside IP routers is performed by means that are outside the scope
854 of this document because they are neither under the control of, nor
855 visible to, the MDSC or the PNCs.
It is assumed that the CNC is capable of
856 requesting the proper configuration of the different adaptation
857 functions inside the customer's IP routers, by means which are
858 outside the scope of this document.
860 It is also assumed that the CNC is capable, via the CMI, of
861 requesting that the MDSC set up these services with enough
862 information to enable the MDSC to coordinate the different PNCs to
863 instantiate and control the ODU2 data plane connection through nodes
864 S3, S1, S2, S31, S33, S34, S15 and S18, as well as the adaptation
865 functions inside nodes S3 and S18, when needed.
867 As described in section 6.2, the MDSC should have its own view of
868 the end-to-end network topology and use it for its own path
869 computation to understand that it needs to coordinate with PNC1,
870 PNC2 and PNC3 the setup and control of a multi-domain ODU2 data
871 plane connection.
873 6.3.1. ODU Transit
875 In order to set up a 10Gb IP link between C-R1 and C-R5, an ODU2
876 end-to-end data plane connection needs to be created between C-R1
877 and C-R5, crossing transport nodes S3, S1, S2, S31, S33, S34, S15
878 and S18, which belong to different PNC domains.
880 The traffic flow between C-R1 and C-R5 can be summarized as:
882    C-R1 ([PKT] -> ODU2), S3 ([ODU2]), S1 ([ODU2]), S2 ([ODU2]),
883    S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
884    S15 ([ODU2]), S18 ([ODU2]), C-R5 (ODU2 -> [PKT])
886 6.3.2. EPL over ODU
888 In order to set up a 10Gb IP link between C-R1 and C-R5, an EPL
889 service needs to be created between C-R1 and C-R5, supported by an
890 ODU2 end-to-end data plane connection between transport nodes S3 and
891 S18, crossing transport nodes S1, S2, S31, S33, S34 and S15 which
892 belong to different PNC domains.
894 The traffic flow between C-R1 and C-R5 can be summarized as:
896    C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S1 ([ODU2]),
897    S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
898    S15 ([ODU2]), S18 ([ODU2] -> ETH), C-R5 (ETH -> [PKT])
900 6.3.3.
Other OTN Client Services
902 In order to set up a 10Gb IP link between C-R1 and C-R5 using, for
903 example, SDH physical links between the IP routers and the transport
904 network, an STM-64 Private Line service needs to be created between
905 C-R1 and C-R5, supported by an ODU2 end-to-end data plane connection
906 between transport nodes S3 and S18, crossing transport nodes S1, S2,
907 S31, S33, S34 and S15 which belong to different PNC domains.
909 The traffic flow between C-R1 and C-R5 can be summarized as:
911    C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S1 ([ODU2]),
912    S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
913    S15 ([ODU2]), S18 ([ODU2] -> STM-64), C-R5 (STM-64 -> [PKT])
915 6.3.4. EVPL over ODU
917 In order to set up two 1Gb IP links between C-R1 and C-R3 and between
918 C-R1 and C-R5, two EVPL services need to be created, supported by
919 two ODU0 end-to-end connections respectively between S3 and S6,
920 crossing transport node S5, and between S3 and S18, crossing
921 transport nodes S1, S2, S31, S33, S34 and S15 which belong to
922 different PNC domains.
924 The VLAN configuration on the access links is the same as described
925 in section 4.3.4.
927 The traffic flow between C-R1 and C-R3 is the same as described in
928 section 4.3.4.
930 The traffic flow between C-R1 and C-R5 can be summarized as:
932    C-R1 ([PKT] -> VLAN), S3 (VLAN -> [ODU0]), S1 ([ODU0]),
933    S2 ([ODU0]), S31 ([ODU0]), S33 ([ODU0]), S34 ([ODU0]),
934    S15 ([ODU0]), S18 ([ODU0] -> VLAN), C-R5 (VLAN -> [PKT])
936 6.3.5. EVPLAN and EVPTree Services
938 In order to set up an IP subnet between C-R1, C-R2, C-R3 and C-R5, an
939 EVPLAN/EVPTree service needs to be created, supported by two ODUflex
940 end-to-end connections respectively between S3 and S6, crossing
941 transport node S5, and between S3 and S18, crossing transport nodes
942 S1, S2, S31, S33, S34 and S15 which belong to different PNC domains.
944 The VLAN configuration on the access links is the same as described
945 in section 4.3.5.
947 The configuration of the Ethernet Bridging capabilities on nodes S3
948 and S6 is the same as described in section 4.3.5, while the
949 configuration on node S18 is similar to the configuration of node S2
950 described in section 4.3.5.
952 The traffic flow between C-R1 and C-R3 is the same as described in
953 section 4.3.5.
955 The traffic flow between C-R1 and C-R5 can be summarized as:
957    C-R1 ([PKT] -> VLAN), S3 (VLAN -> [MAC] -> [ODUflex]),
958    S1 ([ODUflex]), S2 ([ODUflex]), S31 ([ODUflex]),
959    S33 ([ODUflex]), S34 ([ODUflex]),
960    S15 ([ODUflex]), S18 ([ODUflex] -> VLAN), C-R5 (VLAN -> [PKT])
962 6.4. Multi-functional Access Links
964 The same considerations as in section 4.4 apply, with the only
965 difference that the ODU data plane connections could be set up
966 across multiple PNC domains.
968 For example, if the physical link between C-R1 and S3 is a multi-
969 functional access link while the physical links between C-R7 and S31
970 and between C-R5 and S18 are STM-64 and 10GE physical links
971 respectively, it is possible to configure either an STM-64 Private
972 Line service between C-R1 and C-R7 or an EPL service between C-R1
973 and C-R5.
975 The traffic flow between C-R1 and C-R7 can be summarized as:
977    C-R1 ([PKT] -> STM-64), S3 (STM-64 -> [ODU2]), S1 ([ODU2]),
978    S2 ([ODU2]), S31 ([ODU2] -> STM-64), C-R7 (STM-64 -> [PKT])
980 The traffic flow between C-R1 and C-R5 can be summarized as:
982    C-R1 ([PKT] -> ETH), S3 (ETH -> [ODU2]), S1 ([ODU2]),
983    S2 ([ODU2]), S31 ([ODU2]), S33 ([ODU2]), S34 ([ODU2]),
984    S15 ([ODU2]), S18 ([ODU2] -> ETH), C-R5 (ETH -> [PKT])
986 6.5. Protection Scenarios
988 The MDSC needs to be capable of coordinating different PNCs to
989 configure protection switching when requesting the setup of the
990 connectivity services described in section 6.3.
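To coordinate several PNCs, the MDSC must first determine which portion of an end-to-end path falls under which PNC. This step can be sketched as follows; the node-to-domain mapping is taken from Figure 3, while the function and its output format are illustrative assumptions.

```python
# Sketch: before coordinating the PNCs, the MDSC splits an end-to-end path
# into per-domain segments. The node-to-domain mapping follows Figure 3;
# the function itself is an illustrative assumption.
NODE_TO_PNC = {}
NODE_TO_PNC.update({"S%d" % i: "PNC1" for i in range(1, 9)})    # domain 1
NODE_TO_PNC.update({"S%d" % i: "PNC2" for i in range(11, 22)})  # domain 2
NODE_TO_PNC.update({"S%d" % i: "PNC3" for i in range(31, 35)})  # domain 3

def split_by_domain(path):
    """Group consecutive nodes of an end-to-end path by controlling PNC."""
    segments = []
    for node in path:
        pnc = NODE_TO_PNC[node]
        if segments and segments[-1][0] == pnc:
            segments[-1][1].append(node)
        else:
            segments.append((pnc, [node]))
    return segments

# The multi-domain ODU2 connection used by the services in section 6.3:
segments = split_by_domain(["S3", "S1", "S2", "S31", "S33", "S34",
                            "S15", "S18"])
# -> [("PNC1", ["S3", "S1", "S2"]), ("PNC3", ["S31", "S33", "S34"]),
#     ("PNC2", ["S15", "S18"])]
```

Each resulting segment corresponds to one MPI interaction: the MDSC asks the owning PNC to instantiate and control its portion of the multi-domain data plane connection.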
992 Since in this use case switching within the transport network
993 domain is assumed to be performed in one layer only, protection
994 switching within the transport network domain can also only be
995 provided at the OTN ODU layer, for all the services defined in
996 section 6.3.
998 6.5.1. Linear Protection (end-to-end)
1000 In order to protect any service defined in section 6.3 from failures
1001 within the OTN multi-domain transport network, the MDSC should be
1002 capable of coordinating different PNCs to configure and control OTN
1003 linear protection in the data plane between node S3 and node S18.
1005 The considerations in section 4.5.1 are also applicable here, with
1006 the only difference that the MDSC needs to coordinate with different
1007 PNCs the setup and control of the OTN linear protection as well as
1008 of the working and protection transport entities (working and
1009 protection LSPs).
1011 Two cases can be considered.
1013 In one case, the working and protection transport entities pass
1014 through the same PNC domains:
1016    Working transport entity: S3, S1, S2,
1017                              S31, S33, S34,
1018                              S15, S18
1020    Protection transport entity: S3, S4, S8,
1021                                 S32,
1022                                 S12, S17, S18
1024 In another case, the working and protection transport entities can
1025 pass through different PNC domains:
1027    Working transport entity: S3, S5, S7,
1028                              S11, S12, S17, S18
1030    Protection transport entity: S3, S1, S2,
1031                                 S31, S33, S34,
1032                                 S15, S18
1034 6.5.2. Segmented Protection
1036 In order to protect any service defined in section 6.3 from failures
1037 within the OTN multi-domain transport network, the MDSC should be
1038 capable of requesting that each PNC configure OTN intra-domain
1039 protection when requesting the setup of the ODU2 data plane
1040 connection segment.
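The per-domain requests the MDSC could derive for segmented protection, one per PNC, can be sketched as below. The request structure is an illustrative assumption rather than a defined model; the working and protection entities used in the example match the per-domain examples given in this section.

```python
# Sketch of the per-domain requests the MDSC could derive for segmented
# protection, one per PNC. The request structure is an illustrative
# assumption; the entities match the per-domain examples in this section.
def segment_protection_requests(per_domain_entities):
    """per_domain_entities: dict PNC -> (working_nodes, protection_nodes)."""
    return [{"pnc": pnc,
             "working-entity": working,
             "protection-entity": protection,
             "protection-type": "linear"}
            for pnc, (working, protection) in per_domain_entities.items()]

requests = segment_protection_requests({
    "PNC1": (["S3", "S1", "S2"], ["S3", "S4", "S8", "S2"]),
    "PNC2": (["S15", "S18"], ["S15", "S12", "S17", "S18"]),
    "PNC3": (["S31", "S33", "S34"], ["S31", "S32", "S34"]),
})
```

In contrast with the end-to-end case, here each PNC protects only its own segment, so no cross-PNC coordination of the protection switching itself is needed.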
1041 If linear protection is used within a domain, the considerations in
1042 section 4.5.1 are also applicable here, but only for the PNC
1043 controlling the domain where intra-domain linear protection is
1044 provided.
1045 If PNC1 provides linear protection, the working and protection
1046 transport entities could be:
1048    Working transport entity: S3, S1, S2
1050    Protection transport entity: S3, S4, S8, S2
1052 If PNC2 provides linear protection, the working and protection
1053 transport entities could be:
1055    Working transport entity: S15, S18
1057    Protection transport entity: S15, S12, S17, S18
1059 If PNC3 provides linear protection, the working and protection
1060 transport entities could be:
1062    Working transport entity: S31, S33, S34
1064    Protection transport entity: S31, S32, S34
1066 7. Use Case 4: Multi-domain and multi-layer
1068 7.1. Reference Network
1070 The current considerations discussed in this document are based on
1071 the following reference network:
1073    - multiple transport domains: OTN and OCh multi-layer networks
1075 In this use case, the reference network shown in Figure 3 is used.
1076 The only difference is that all the transport nodes are capable of
1077 switching in either the ODU or the OCh layer.
1079 All the physical links within each transport network domain are
1080 therefore assumed to be OCh links, while the inter-domain links are
1081 assumed to be ODU links as described in section 6.1 (multi-domain
1082 with single layer - OTN network).
1084 Therefore, with the exception of the access and inter-domain links,
1085 no ODU link exists within each domain before an OCh single-domain
1086 end-to-end data plane connection is created within the network.
1088 The controlling hierarchy is the same as described in Figure 4.
1090 The interfaces within the scope of this document are the three MPIs
1091 which should be capable of controlling both the OTN and OCh layers
1092 within each PNC domain.
1094 7.2.
Topology Abstractions
1096 Each PNC should provide the MDSC with a topology abstraction of its
1097 own network topology, as described in section 5.2.
1099 As an example, it is assumed that:
1101    o PNC1 provides a type A grey topology abstraction (as in use
1102      case 2, described in section 5.2)
1104    o PNC2 provides a type B grey topology abstraction (as in use
1105      case 3, described in section 6.2)
1107    o PNC3 provides a type B grey topology abstraction with two
1108      abstract nodes, as in use case 3 described in section 6.2,
1109      and hiding at least some optical parameters to be used within
1110      the OCh layer, as in use case 2 described in section 5.2.
1112 7.3. Service Configuration
1114 The same service scenarios, as described in section 6.3, are also
1115 applicable to this use case, with the only difference that single-
1116 domain end-to-end OCh data plane connections need to be set up
1117 before ODU data plane connections.
1119 8. Security Considerations
1121 Typically, OTN networks ensure a high level of security and data
1122 privacy through hard partitioning of traffic onto isolated circuits.
1124 There may be additional security considerations applied to specific
1125 use cases, but common security considerations do exist and these
1126 must be considered when controlling the underlying infrastructure to
1127 deliver transport services:
1129    o use of RESTCONF and the need to reuse security between RESTCONF
1130      components;
1132    o use of authentication and policy to govern which transport
1133      services may be requested by the user or application;
1135    o how secure and isolated connectivity may also be requested as an
1136      element of a service and mapped down to the OTN level.
1138 9. IANA Considerations
1140 This document requires no IANA actions.
1142 10. References
1144 10.1. Normative References
1146 [RFC7926] Farrel, A.
et al., "Problem Statement and Architecture for
1147 Information Exchange between Interconnected Traffic-
1148 Engineered Networks", BCP 206, RFC 7926, July 2016.
1150 [RFC4427] Mannie, E., Papadimitriou, D., "Recovery (Protection and
1151 Restoration) Terminology for Generalized Multi-Protocol
1152 Label Switching (GMPLS)", RFC 4427, March 2006.
1154 [ACTN-Frame] Ceccarelli, D., Lee, Y. et al., "Framework for
1155 Abstraction and Control of Transport Networks", draft-
1156 ietf-teas-actn-framework, work in progress.
1158 [ITU-T G.709-2016] ITU-T Recommendation G.709 (06/16), "Interfaces
1159 for the optical transport network", June 2016.
1161 [ITU-T G.808.1-2014] ITU-T Recommendation G.808.1 (05/14), "Generic
1162 protection switching - Linear trail and subnetwork
1163 protection", May 2014.
1165 [ITU-T G.873.1-2014] ITU-T Recommendation G.873.1 (05/14), "Optical
1166 transport network (OTN): Linear protection", May 2014.
1168 10.2. Informative References
1170 [TE-Topo] Liu, X. et al., "YANG Data Model for TE Topologies",
1171 draft-ietf-teas-yang-te-topo, work in progress.
1173 [ACTN-YANG] Zhang, X. et al., "Applicability of YANG models for
1174 Abstraction and Control of Traffic Engineered Networks",
1175 draft-zhang-teas-actn-yang, work in progress.
1177 [Path-Compute] Busi, I., Belotti, S. et al., "YANG Model for
1178 Requesting Path Computation", draft-busibel-teas-yang-
1179 path-computation, work in progress.
1181 [ONF TR-527] ONF Technical Recommendation TR-527, "Functional
1182 Requirements for Transport API", June 2016.
1184 [ONF GitHub] ONF Open Transport (SNOWMASS)
1185 https://github.com/OpenNetworkingFoundation/Snowmass-
1186 ONFOpenTransport
1188 11. Acknowledgments
1190 The authors would like to thank all members of the Transport NBI
1191 Design Team involved in the definition of use cases, gap analysis
1192 and guidelines for using the IETF YANG models at the Northbound
1193 Interface (NBI) of a Transport SDN Controller.
1195 The authors would like to thank Xian Zhang, Anurag Sharma, Sergio 1196 Belotti, Tara Cummings, Michael Scharf, Karthik Sethuraman, Oscar 1197 Gonzalez de Dios, Hans Bjursrom and Italo Busi for having initiated 1198 the work on gap analysis for transport NBI and having provided 1199 foundations work for the development of this document. 1201 This document was prepared using 2-Word-v2.0.template.dot. 1203 Authors' Addresses 1205 Italo Busi (Editor) 1206 Huawei 1207 Email: italo.busi@huawei.com 1209 Daniel King (Editor) 1210 Lancaster University 1211 Email: d.king@lancaster.ac.uk 1213 Sergio Belotti 1214 Nokia 1215 Email: sergio.belotti@nokia.com 1217 Gianmarco Bruno 1218 Ericsson 1219 Email: gianmarco.bruno@ericsson.com 1221 Young Lee 1222 Huawei 1223 Email: leeyoung@huawei.com 1225 Victor Lopez 1226 Telefonica 1227 Email: victor.lopezalvarez@telefonica.com 1229 Carlo Perocchio 1230 Ericsson 1231 Email: carlo.perocchio@ericsson.com 1233 Haomian Zheng 1234 Huawei 1235 Email: zhenghaomian@huawei.com