INTERNET-DRAFT
Document: draft-ietf-ipo-carrier-requirements-05.txt           Yong Xue
Category: Informational                                        (Editor)
Expiration Date: June 2003                                WorldCom, Inc.

                                                           December 2002

                  Optical Network Service Requirements

Status of This Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC 2026. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas,
and its working groups. Note that other groups may also distribute
working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or rendered obsolete by other documents
at any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress".

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Abstract

This Internet-Draft describes the major carriers' service requirements
for Automatically Switched Optical Networks (ASON), from both an
end-user's and an operator's perspective. Its focus is on the
description of the service building blocks and the service-related
control plane functional requirements. The management functions for
the optical services and their underlying networks are beyond the
scope of this document.

Table of Contents
1. Introduction
   1.1. Conventions Used in This Document
   1.2. Value Statement
   1.3. Scope of This Document
2. Contributing Authors
3. Acronyms
4. General Requirements
   4.1. Separation of Networking Functions
   4.2. Separation of Call and Connection Control
   4.3. Network and Service Scalability
   4.4. Transport Network Technology
   4.5. Service Building Blocks
5. Service Models and Applications
   5.1. Service and Connection Types
   5.2. Examples of Common Service Models
6. Network Reference Model
   6.1. Optical Networks and Sub-networks
   6.2. Control Domains and Interfaces
   6.3. Intra-Carrier Network Model
   6.4. Inter-Carrier Network Model
   6.5. Implied Control Constraints
7. Optical Service User Requirements
   7.1. Common Optical Services
   7.2. Bearer Interface Types
   7.3. Optical Service Invocation
   7.4. Optical Connection Granularity
   7.5. Other Service Parameters and Requirements
8. Optical Service Provider Requirements
   8.1. Service Access Methods to Optical Networks
   8.2. Dual Homing and Network Interconnections
   8.3. Inter-domain Connectivity
   8.4. Names and Address Management
   8.5. Policy-Based Service Management Framework
9. Control Plane Functional Requirements for Optical Services
   9.1. Control Plane Capabilities and Functions
   9.2. Signaling Communication Network (SCN)
   9.3. Control Plane Interface to Data Plane
   9.4. Management Plane Interface to Data Plane
   9.5. Control Plane Interface to Management Plane
   9.6. IP and Optical Control Plane Interconnection
10. Requirements for Signaling, Routing and Discovery
   10.1. Requirements for Information Sharing over UNI, I-NNI and E-NNI
   10.2. Signaling Functions
   10.3. Routing Functions
   10.4. Requirements for Path Selection
   10.5. Discovery Functions
11. Requirements for Service and Control Plane Resiliency
   11.1. Service Resiliency
   11.2. Control Plane Resiliency
12. Security Considerations
   12.1. Optical Network Security Concerns
   12.2. Service Access Control
13. Acknowledgements
14. References
Authors' Addresses
Appendix: Interconnection of Control Planes

1. Introduction

Optical transport networks are evolving from the current TDM-based
SONET/SDH optical networks, as defined by ANSI T1.105 and ITU Rec.
G.803 [ansi-sonet, itu-sdh], to emerging WDM-based optical transport
networks (OTN), as defined by ITU Rec. G.872 [itu-otn]. Therefore, in
the near future, carrier optical transport networks are expected to
consist of a mixture of SONET/SDH-based sub-networks and WDM-based
wavelength- or fiber-switched OTN sub-networks. The OTN networks can
be either transparent or opaque, depending upon whether O-E-O
functions are utilized within the optical networks. Optical networking
encompasses the functions for the establishment, transmission,
multiplexing and switching of optical connections carrying a wide
range of user signals of varying formats and bit rates. The optical
connections in this document include switched optical paths using TDM
channels, WDM wavelengths, or fiber links.

Some of the challenges for the carriers are efficient bandwidth
management and fast service provisioning in a multi-technology and
possibly multi-vendor networking environment.
The emerging and rapidly evolving Automatically Switched Optical
Network (ASON) technology [itu-astn, itu-ason] is aimed at providing
optical networks with intelligent networking functions and
capabilities in the control plane, to enable rapid optical connection
provisioning, dynamic rerouting, and multiplexing and switching at
different granularity levels, including fiber, wavelength and TDM
channels. The ASON control plane should not only enable new networking
functions and capabilities for the emerging OTN networks, but also
significantly enhance the service provisioning capabilities of the
existing SONET/SDH networks.

The ultimate goals should be to allow the carriers to automate network
resource and topology discovery, to quickly and dynamically provision
network resources and circuits, and to support assorted network
survivability using ring- and mesh-based protection and restoration
techniques. The carriers expect that this new networking platform will
create tremendous business opportunities for network operators and
service providers to offer new services to the market, in the long run
reducing their network operation costs (OpEx savings) and improving
their network utilization efficiency (CapEx savings).

1.1. Conventions Used in This Document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

1.2. Value Statement

By deploying ASON technology, a carrier expects to achieve the
following benefits from both technical and business perspectives:

- Automated Discovery: ASON technology will enable automatic network
inventory management and topology and resource discovery, eliminating
the manual or semi-manual processes for maintaining the network
information databases that exist in most carrier environments.

- Rapid Circuit Provisioning: ASON technology will enable dynamic
end-to-end provisioning of optical connections across the optical
network by using standard routing and signaling protocols.

- Enhanced Protection and Restoration: ASON technology will enable the
network to dynamically reroute an optical connection in case of
failure using mesh-based network protection and restoration
techniques, which greatly improves cost-effectiveness compared to the
current line and ring protection schemes in SONET/SDH networks.

- Service Flexibility: ASON technology will support provisioning of an
assortment of existing and new services, such as protocol- and
bit-rate-independent transparent network services and
bandwidth-on-demand services.

- Enhanced Interoperability: ASON technology will use a control plane
utilizing industry and international standards-based architectures and
protocols, which facilitates interoperability of optical network
equipment from different vendors.

In addition, the ASON control plane may offer the following potential
value-added benefits:

- Reactive traffic engineering at the optical layer, which allows
network resources to be dynamically allocated to traffic flows.
- Reduced need for service providers to develop new operational
support system (OSS) software for network control and new service
provisioning on the optical network, thus speeding up the deployment
of optical network technology and reducing software development and
maintenance costs.

- Potential development of a unified control plane that can be used
for different transport technologies, including OTN, SONET/SDH, ATM
and PDH.

1.3. Scope of This Document

This document is intended to provide, from the carriers' perspective,
a service framework and associated requirements for the optical
transport services to be offered in the next-generation optical
transport networking environment, together with their service control
and management functions. As such, this document concentrates on the
requirements driving the work towards the realization of automatically
switched optical networks. This document is intended to be
protocol-neutral, but its specific goals include providing
requirements to guide control protocol development and enhancement
within the IETF, in terms of reuse of IP-centric control protocols in
the optical transport network.

Every carrier's needs are different. The objective of this document is
NOT to define specific service models. Instead, major service building
blocks are identified that the carriers can use to create the service
platform best suited to their business model. These building blocks
include generic service types, service-enabling control mechanisms,
and service control and management functions.

The Optical Internetworking Forum (OIF) carrier group has developed a
comprehensive set of control plane requirements for both the UNI and
the NNI [oif-carrier, oif-nnireq], and these have been used as
baseline input to this document.

The fundamental principles and basic set of requirements for the
control plane of automatically switched optical networks have been
provided in a series of ITU Recommendations under the umbrella of the
ITU ASTN/ASON architectural and functional requirements, as listed
below:

Architecture:
- ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the Automatic
Switched Transport Network (ASTN) [itu-astn]
- ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the Automatic
Switched Optical Network (ASON) [itu-ason]

Signaling:
- ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and Connection
Management (DCM) [itu-dcm]

Routing:
- ITU-T Draft Rec. G.7715/Y.1706 (2002), Architecture and Requirements
for Routing in the Automatically Switched Optical Network [itu-rtg]

Discovery:
- ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic Discovery
[itu-disc]

Signaling Communication Network:
- ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification of
Data Communication Network [itu-dcn]

This document provides further detailed requirements based on the
ASTN/ASON framework. In addition, even though for IP over Optical we
consider IP a major client of the optical network in this document,
the same requirements and principles should be equally applicable to
non-IP clients such as SONET/SDH, ATM, ITU G.709, Ethernet, etc. The
general architecture for IP over Optical is described in the IP over
Optical framework document [ipo-frame].
2. Contributing Authors

This document was the combined effort of the editor and the following
authors who contributed to it:

Monica Lazer
Jennifer Yates
Dongmei Wang
Ananth Nagarajan
Hirokazu Ishimatsu
Olga Aparicio
Steven Wright

3. Acronyms

APON    ATM Passive Optical Network
ASON    Automatically Switched Optical Network
ASTN    Automatic Switched Transport Network
CAC     Connection Admission Control
CI      Control Interface
CNE     Customer/Client Network Element
E-NNI   External NNI
EPON    Ethernet Passive Optical Network
ESCON   Enterprise Systems Connection
FC      Fiber Channel
FICON   Fiber Connection
I-NNI   Internal NNI
NE      Network Element
NNI     Node-to-Node Interface
OADM    Optical Add-Drop Multiplexer
OLS     Optical Line System
ONE     Optical Network Element
OTN     Optical Transport Network
OXC     Optical Cross-Connect
PDH     Plesiochronous Digital Hierarchy
PI      Physical Interface
SCN     Signaling Communication Network
SDH     Synchronous Digital Hierarchy
SLA     Service Level Agreement
SONET   Synchronous Optical Network
SRG     Shared Risk Group
SRLG    Shared Risk Link Group
TNA     Transport Network Assigned (address)
UNI     User-to-Network Interface

4. General Requirements

In order to provide the carriers with flexibility and control of their
optical networks, the following set of architectural requirements is
essential.

4.1. Separation of Networking Functions

A fundamental architectural principle of the ASON network is to
segregate the networking functions within each layer network into
three logical functional planes: the control plane, the data plane and
the management plane. They are responsible for providing network
control functions, data transmission functions and network management
functions, respectively. The crux of the ASON network is the
networking intelligence that contains automatic routing, signaling and
discovery functions to automate the network control functions.

Control Plane: includes the functions related to networking control
capabilities, such as routing, signaling and policy control, as well
as resource and service discovery. These functions are automated.

Data Plane (Transport Plane): includes the functions related to bearer
channels and signal transmission.

Management Plane: includes the functions related to the management of
network elements, networks, and network resources and services. These
functions are less automated compared to control plane functions.

Each plane consists of a set of interconnected functional or control
entities, physical or logical, responsible for providing the
networking or control functions defined for that network layer.

Each plane has clearly defined functional responsibilities. However,
the management plane is responsible for the management of both the
control and data planes, thus playing an authoritative role in the
overall control and management functions, as discussed in Section 9.

The separation of the control plane from both the data and management
planes is beneficial to the carriers in that it:

- Allows equipment vendors to have a modular system design that will
be more reliable and maintainable.

- Allows carriers the flexibility to choose a third-party vendor's
control plane software system as the control plane solution for their
switched optical networks.
- Allows carriers to deploy a unified control plane and OSS/management
systems to manage and control the different types of transport
networks they own.

- Allows carriers to use a separate control network specially designed
and engineered for control plane communications.

The separation of control, management and transport functions is
required, and it shall accommodate both logical and physical
separation. Logical separation refers to functional separation, while
physical separation refers to the case where the control, management
and transport functions physically reside in different equipment or
locations.

Note that this is in contrast to the IP network, where the control
messages and user traffic are routed and switched based on the same
network topology, due to the associated in-band signaling nature of
the IP network.

When physical separation is allowed between the control and data
planes, a standardized interface and control protocol (e.g., GSMP
[ietf-gsmp]) should be supported.

4.2. Separation of Call and Connection Control

To support many enhanced optical services, such as scheduled bandwidth
on demand, diverse circuit provisioning and bundled connections, a
call model based on the separation of call control and connection
control is essential.

Call control is responsible for end-to-end session negotiation, call
admission control and call state maintenance, while connection control
is responsible for setting up the connections associated with a call
across the network. A call can correspond to zero, one or more
connections, depending upon the number of connections needed to
support the call.

The existence of a connection depends upon the existence of its
associated call session; a connection can be deleted and
re-established while still keeping the call session up.

Call control shall be provided at an ingress port or gateway port to
the network, such as the UNI and E-NNI (see Section 6 for
definitions). Connection control is provided at the originating node
of the circuit, as well as on each link along the path.

The control plane shall support the separation of call control from
connection control.

The control plane shall support call admission control on call setup
and connection admission control on connection setup.
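The following non-normative Python sketch (all class and method names
are hypothetical, not defined by this document) illustrates this call
model: a call owns zero or more connections, and a connection can be
deleted and re-established while the call session stays up:

   # Non-normative sketch of the call/connection separation.
   # All names are hypothetical and illustrative only.

   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class Connection:
       """A single network connection supporting a call."""
       conn_id: str
       path: List[str]          # nodes traversed by the connection
       active: bool = True

   @dataclass
   class Call:
       """An end-to-end call session; owns zero or more connections."""
       call_id: str             # network-unique Call-ID (Section 7.3.3)
       src: str
       dst: str
       connections: List[Connection] = field(default_factory=list)

       def add_connection(self, conn: Connection) -> None:
           # Connection admission control would run here, on
           # connection setup (call admission ran at call setup).
           self.connections.append(conn)

       def drop_connection(self, conn_id: str) -> None:
           # A connection can be deleted while the call stays up.
           self.connections = [c for c in self.connections
                               if c.conn_id != conn_id]

   # The call survives re-establishment of its connections.
   call = Call(call_id="C-0001", src="UNI-A", dst="UNI-B")
   call.add_connection(Connection("K-1", path=["OXC1", "OXC2", "OXC3"]))
   call.drop_connection("K-1")   # call still up, zero connections
   call.add_connection(Connection("K-2", path=["OXC1", "OXC4", "OXC3"]))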
4.3. Network and Service Scalability

Although some specific applications or networks may be on a small
scale, the control plane protocols and functional capabilities shall
support large-scale networks.

In terms of the scale and complexity of future optical networks, the
following assumptions can be made when considering the scalability and
performance required of the optical control and management functions:

- There may be up to thousands of OXC nodes, and the same or a higher
order of magnitude of OADMs, per carrier network.

- There may be up to thousands of terminating ports/wavelengths per
OXC node.

- There may be up to hundreds of parallel fibers between a pair of OXC
nodes.

- There may be up to hundreds of wavelength channels transmitted on
each fiber.

As for the frequency and duration of the optical connections:

- The expected end-to-end connection setup/teardown time should be on
the order of seconds, preferably less.

- The expected connection holding times should be on the order of
minutes or greater.

- There may be up to millions of simultaneous optical connections
switched across a single carrier network.

4.4. Transport Network Technology

Optical services can be offered over different types of underlying
optical transport technologies, including both TDM-based SONET/SDH
networks and WDM-based OTN networks.

Standards-based transport technologies, i.e., SONET/SDH as defined in
ITU Rec. G.803 and OTN framing as defined in ITU Rec. G.709
[itu-g709], shall be supported.

Note that service characteristics such as bandwidth granularity and
signal framing hierarchy will to a large degree be determined by the
capabilities and constraints of the server layer network.

4.5. Service Building Blocks

One of the goals of this document is to identify a set of basic
service building blocks that the carriers can use to create the
service models best suited to their business needs.

The service building blocks are comprised of a well-defined set of
capabilities and a basic set of control and management functions.
These capabilities and functions should support a basic set of
services and enable a carrier to build enhanced services through
extensions and customizations. Examples of the building blocks include
connection types, provisioning methods, control interfaces, policy
control functions, and domain internetworking mechanisms.

5. Service Models and Applications

A carrier's optical network supports multiple types of service models.
Each service model may have its own service operations, target
markets, and service management requirements.

5.1. Service and Connection Types

The optical network primarily offers optical paths, which are
fixed-bandwidth connections between two client network elements, such
as IP routers or ATM switches, established across the optical network.
A connection is also defined by its demarcation from the ingress
access point, across the optical network, to the egress access point
of the optical network.

The following connection capability topologies must be supported:

- Bi-directional point-to-point connection

- Uni-directional point-to-point connection

- Uni-directional point-to-multipoint connection

Point-to-point connections are the primary concern of the carriers. In
this case, the following three types of network connections, based on
different connection set-up control methods, shall be supported (a
sketch contrasting them follows the list):

- Permanent connection (PC): established hop-by-hop directly on each
ONE along a specified path, without relying on the network routing and
signaling capability. The connection has two fixed end-points and a
fixed cross-connect configuration along the path, and stays up until
it is deleted. This is similar to the concept of a PVC in ATM, and
there is no automatic re-routing capability.

- Switched connection (SC): established through the UNI signaling
interface; the connection is dynamically established by the network
using the network routing and signaling functions. This is similar to
the concept of an SVC in ATM.

- Soft permanent connection (SPC): established by provisioning PCs at
the two end-points and letting the network dynamically establish an SC
connection in between. This is similar to the SPVC concept in ATM.
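As a rough, non-normative illustration (names are hypothetical), the
three connection types differ mainly in which entity initiates set-up
and which plane performs it:

   # Non-normative mapping of connection types to their set-up
   # control methods (hypothetical names; see Section 5.1).

   from enum import Enum

   class ConnectionType(Enum):
       PC = "permanent"        # provisioned via the management plane
       SPC = "soft permanent"  # PCs at the end-points, signaled middle
       SC = "switched"         # requested via UNI signaling

   SETUP_METHOD = {
       ConnectionType.PC:
           "management plane configures each ONE along a fixed path",
       ConnectionType.SPC:
           "management plane provisions end-points; "
           "network signals the connection in between",
       ConnectionType.SC:
           "client (or proxy) signals over the UNI; "
           "network routes and establishes dynamically",
   }

   for ctype in ConnectionType:
       print(f"{ctype.name}: {SETUP_METHOD[ctype]}")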
The PC and SPC connections should be provisioned via the management
plane through a control interface, and the SC connection should be
provisioned via the signaled UNI interface.

Note that even though automated rapid optical connection provisioning
is required, the carriers expect the majority of provisioned circuits,
at least in the short term, to have a long lifespan, ranging from
months to years.

In terms of service provisioning, some carriers may choose to perform
testing prior to turning a circuit over to the customer.

5.2. Examples of Common Service Models

Each carrier may define its own service model based on its business
strategy and environment. The following are example service models
that carriers may use.

5.2.1. Provisioned Bandwidth Service (PBS)

The PBS model provides enhanced leased/private line services
provisioned via a service management interface (MI), using either the
PC or the SPC type of connection. The provisioning can be real-time or
near real-time. It has the following characteristics:

- Connection requests go through a well-defined management interface.

- There is a client/server relationship between the clients and the
optical network.

- Clients have no optical network visibility and depend on the network
intelligence or the operator for optical connection setup.

5.2.2. Bandwidth on Demand Service (BDS)

The BDS model provides bandwidth-on-demand dynamic connection services
via a signaled user-network interface (UNI). The provisioning is
real-time and uses the SC type of optical connection. It has the
following characteristics:

- Signaled connection requests come via the UNI, directly from the
user or from its proxy.

- The customer has no or limited network visibility, depending upon
the control interconnection model used and the network administrative
policy.

- It relies on network or client intelligence for connection set-up,
depending upon the control plane interconnection model used.

5.2.3. Optical Virtual Private Network (OVPN)

The OVPN model provides a virtual private network at the optical layer
between a specified set of user sites. It has the following
characteristics:

- Customers contract for a specific set of network resources, such as
optical connection ports, wavelengths, etc.

- The Closed User Group (CUG) concept is supported, as in a normal
VPN.

- The optical connection can be of the PC, SPC or SC type, depending
upon the provisioning method used.

- An OVPN site can request dynamic reconfiguration of the connections
between sites within the same CUG.

- A customer may have visibility and control of network resources up
to the extent allowed by the customer service contract.

At a minimum, the PBS, BDS and OVPN service models described above
shall be supported by the control functions. The sketch below
summarizes them.
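As a non-normative summary (field names are illustrative only), the
three example service models differ mainly in how connections are
invoked and which connection types they use:

   # Non-normative summary of the example service models in
   # Section 5.2. All field names are hypothetical.

   SERVICE_MODELS = {
       "PBS": {   # Provisioned Bandwidth Service (Section 5.2.1)
           "invocation": "service management interface (MI)",
           "connection_types": ("PC", "SPC"),
           "client_visibility": "none; relies on network/operator",
       },
       "BDS": {   # Bandwidth on Demand Service (Section 5.2.2)
           "invocation": "signaled UNI, directly or via proxy",
           "connection_types": ("SC",),
           "client_visibility": "none or limited, per policy",
       },
       "OVPN": {  # Optical Virtual Private Network (Section 5.2.3)
           "invocation": "depends on provisioning method used",
           "connection_types": ("PC", "SPC", "SC"),
           "client_visibility": "up to the extent allowed by contract",
       },
   }

   for name, profile in SERVICE_MODELS.items():
       print(name, "->", profile["connection_types"])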
6. Network Reference Model

This section discusses the major architectural and functional
components of a generic carrier optical network, which provide a
reference model for describing the requirements for the control and
management of carrier optical services.

6.1. Optical Networks and Sub-networks

As mentioned before, there are two main types of optical networks
currently under consideration: the SDH/SONET network as defined in ITU
Rec. G.803, and the OTN as defined in ITU Rec. G.872.

In the current SONET/SDH-based optical network, digital cross-connects
(DXC), add-drop multiplexers (ADM) and line multiplexer terminals
(LMT) are connected in ring or linear topologies. Similarly, we assume
an OTN is composed of a set of optical cross-connects (OXC) and
optical add-drop multiplexers (OADM) which are interconnected in a
general mesh topology using DWDM optical line systems (OLS).

For ease of discussion and description, it is often convenient to
treat an optical network as a sub-network cloud, in which the details
of the network become less important; instead, the focus is on the
functions and the interfaces the optical network provides. In general,
a subnetwork can be defined as a set of access points on the network
boundary together with a set of point-to-point optical connections
between those access points.

6.2. Control Domains and Interfaces

A generic carrier network reference model describes a multi-carrier
network environment. Each individual carrier network can be further
partitioned into sub-networks or administrative domains for
administrative, technological or architectural reasons. This
partitioning can be recursive. Similarly, a network can be partitioned
into control domains that match the administrative domains and are
each controlled by a single administrative policy. The control domains
can be recursively divided into sub-domains to form a control
hierarchy for scalability. The control domain concept can be applied
to routing, signaling, and protection and restoration, to form an
autonomous control function domain.

The demarcation between domains can be either logical or physical and
consists of a set of reference points identifiable in the optical
network. From the control plane perspective, these reference points
define a set of control interfaces in terms of optical control and
management functionality, as illustrated in Figure 1.

                      +---------------------------------------+
                      |        Single carrier network         |
     +------------+   |                                       |
     | Customer   |   |  +------------+       +------------+  |
     | IP         |   |  |            |       |            |  |
     | Network    +-UNI-+  Optical    +--UNI--+ Carrier's  |  |
     |            |   |  | Subnetwork |       | IP network |  |
     +------------+   |  | (Domain A) +--+    |            |  |
                      |  +------+-----+  |    +------+-----+  |
                      |         |        |           |        |
                      |       I-NNI    E-NNI        UNI       |
     +------------+   |         |        |           |        |
     | Customer   |   |  +------+-----+  |    +------+-----+  |
     | IP         +-UNI-+  Optical    |  +----+  Optical   |  |
     | Network    |   |  | Subnetwork +-E-NNI-+ Subnetwork |  |
     |            |   |  | (Domain A) |       | (Domain B) |  |
     +------------+   |  +------+-----+       +------+-----+  |
                      |         |                    |        |
                      +---------------------------------------+
                               UNI                 E-NNI
                                |                    |
                       +--------+-----+     +--------+-------+
                       |              |     |                |
                       | Other Client |     | Other Carrier  |
                       | Network      |     | Network        |
                       | (ATM/SONET)  |     |                |
                       +--------------+     +----------------+

          Figure 1: Generic Carrier Network Reference Model

The network interfaces encompass two aspects of the networking
functions: the user data plane interface and the control plane
interface. The former concerns user data transmission across the
physical network interface, and the latter concerns the control
message exchange across the network interface, such as signaling and
routing. We call the former the physical interface (PI) and the latter
the control interface (CI). Unless otherwise stated, the CI is assumed
in the remainder of this document.
6.2.1. Control Plane Interfaces

The control interface defines the relationship between the two
connected network entities on either side of the interface. For each
control interface, we need to define the architectural function that
each side plays and the controlled set of information that can be
exchanged across the interface. The information flowing over this
logical interface may include, but is not limited to:

- Interface endpoint names and addresses

- Reachability/summarized network address information

- Topology/routing information

- Authentication and connection admission control information

- Connection management signaling messages

- Network resource control information

Different types of interfaces can be defined for network control and
architectural purposes, and can be used as network reference points in
the control plane. In this document, the following set of interfaces
is defined, as shown in Figure 1:

- User-Network Interface (UNI): a bi-directional control interface
between service requester and service provider control entities. The
service requester control entity resides outside the carrier network
control domain.

- Network-Network/Node-Node Interface (NNI): a bi-directional
signaling interface between two optical network elements or
sub-networks.

We differentiate between the internal NNI (I-NNI) and the external NNI
(E-NNI) as follows:

- E-NNI: an NNI between two control plane entities belonging to
different control domains.

- I-NNI: an NNI between two control plane entities within the same
control domain in the carrier network.

Different types of interfaces, internal vs. external, have different
implied trust relationships for security and access control purposes.
The trust relationship is not binary; instead, a policy-based control
mechanism needs to be in place to restrict the type and amount of
information that can flow across each type of interface, depending on
the carrier's service and business requirements.

Generally, two networks have a fully trusted relationship if they
belong to the same administrative domain. In this case, the control
information exchanged across the control interface between them can be
unrestricted. Otherwise, the type and amount of control information
that can cross the interface should be constrained by administrative
policy.

An example of a fully trusted interface is an I-NNI between two
optical network elements in a single control domain. Non-trusted
interface examples include an E-NNI between two different carriers, or
a UNI between a carrier optical network and its customers. The trust
level can differ for a non-trusted UNI or E-NNI interface depending on
whether or not it is within the carrier; in general, an intra-carrier
E-NNI has a higher trust level than an inter-carrier E-NNI.

The control plane shall support the UNI and NNI interfaces described
above; the interfaces shall be configurable in terms of the type and
amount of control information exchanged, and their behavior shall be
consistent with the configuration (i.e., external versus internal
interfaces). A sketch of such policy-based filtering follows.
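The following non-normative sketch (interface and information-type
names are illustrative) shows one possible shape of such a policy
filter; the per-interface sets would be configurable by the carrier,
consistent with the trust relationships above and the constraints of
Section 6.5:

   # Non-normative policy filter for control information crossing an
   # interface (hypothetical names). Full topology is confined to
   # I-NNIs; summarized topology may cross intra-carrier interfaces.

   from enum import Enum, auto

   class Interface(Enum):
       UNI = auto()
       I_NNI = auto()
       E_NNI_INTRA = auto()   # between domains of the same carrier
       E_NNI_INTER = auto()   # between different carriers

   # Information types permitted to flow, by interface type
   # (policy-configurable, not fixed by this document).
   ALLOWED_INFO = {
       Interface.I_NNI: {"signaling", "reachability", "topology",
                         "resource"},
       Interface.E_NNI_INTRA: {"signaling", "reachability",
                               "summarized-topology"},
       Interface.E_NNI_INTER: {"signaling", "reachability"},
       Interface.UNI: {"signaling", "registration"},
   }

   def may_cross(info: str, iface: Interface) -> bool:
       """Return True if this information type may cross the interface."""
       return info in ALLOWED_INFO[iface]

   assert may_cross("topology", Interface.I_NNI)
   assert not may_cross("topology", Interface.E_NNI_INTER)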
6.3. Intra-Carrier Network Model

The intra-carrier network model concerns the network service control
and management issues within networks owned by a single carrier.

6.3.1. Multiple Sub-networks

Without loss of generality, the optical network owned by a carrier
service operator can be depicted as consisting of one or more optical
sub-networks interconnected by direct optical links. There may be many
different reasons for having more than one optical sub-network. It may
be the result of hierarchical layering, of different technologies
across access, metro and long-haul networks (as discussed below), of
business mergers and acquisitions, or of incremental optical network
technology deployment by the carrier using different vendors or
technologies.

A sub-network may be a single-vendor and single-technology network,
but in general the carrier's optical network is heterogeneous in terms
of the equipment vendors and the technology utilized in each
sub-network.

6.3.2. Access, Metro and Long-haul Networks

Few carriers have end-to-end ownership of the optical networks. Even
when they do, the access, metro and long-haul networks often belong to
different administrative divisions as separate optical sub-networks.
Therefore, inter-(sub)network interconnection is essential for
supporting end-to-end optical service provisioning and management. The
access, metro and long-haul networks may use different technologies
and architectures, and as such may have different network properties.

In general, end-to-end optical connectivity may easily cross multiple
sub-networks, with the following possible scenarios:

   Access -- Metro -- Access
   Access -- Metro -- Long Haul -- Metro -- Access

6.4. Inter-Carrier Network Model

The inter-carrier model focuses on the service and control aspects
between different carrier networks and describes the internetworking
relationship between them. The inter-carrier interconnection is often
constrained not only by technical and business requirements, but by
government regulations as well.

Inter-carrier interconnection provides for connectivity between
optical network operators. To provide globally reachable end-to-end
optical services, optical service control and management between
different carrier networks become essential. For example, it is
possible to support distributed peering within the IP client layer
network, where the connectivity between two distant IP routers can be
achieved via an inter-carrier optical transport connection.

6.5. Implied Control Constraints

The intra-carrier and inter-carrier models have different implied
control constraints. For example, in the intra-carrier model, the
addresses for routing and signaling need only be unique within the
carrier, while the inter-carrier model requires the addresses to be
globally unique.

In the intra-carrier network model, the network itself forms the
largest control domain within the carrier network. This domain is
usually partitioned into multiple sub-domains, either flat or
hierarchical. The UNI and E-NNI interfaces are internal to the carrier
network, and therefore a higher trust level is assumed. Because of
this, direct signaling between domains, and summarized topology and
resource information exchange, can be allowed across the internal UNI
or intra-carrier E-NNI interfaces.

In the inter-carrier network model, each carrier's optical network is
a separate administrative domain.
Both the UNI interface between the user and the carrier network and
the NNI interface between two carriers' networks cross a carrier's
administrative boundary, and are therefore external interfaces by
definition.

In terms of control information exchange, topology information shall
not be allowed to cross either the E-NNI or the UNI interface.

7. Optical Service User Requirements

This section describes the user requirements for optical services,
which in turn impose requirements on service control and management
for the network operators. The user requirements reflect the
perception of the optical service from a user's point of view.

7.1. Common Optical Services

The basic unit of an optical transport service is fixed-bandwidth
optical connectivity between applications. However, different services
are created based on the supported signal characteristics (format, bit
rate, etc.), the service invocation methods, and possibly the
associated Service Level Agreement (SLA) provided by the service
provider.

At present, the following are the major optical services provided in
the industry:

- SONET/SDH, with different degrees of transparency

- Optical wavelength services, transparent or opaque

- Ethernet at 10 Mb/s, 100 Mb/s, 1 Gb/s and 10 Gb/s

- Storage Area Networks (SANs), based on Fiber Connection (FICON),
Enterprise Systems Connection (ESCON) and Fiber Channel (FC)

Optical wavelength service refers to transport services where the
signal framing is negotiated between the client and the network
operator (framing and bit-rate dependent), and only the payload is
carried transparently. SONET/SDH transport is most widely used for
network-wide transport. Different levels of transparency can be
achieved in SONET/SDH transmission.

Ethernet services, specifically 1 Gb/s and 10 Gb/s Ethernet services,
are gaining popularity due to the lower cost of the customer premises
equipment and their simplified management requirements (compared to
SONET or SDH).

Ethernet services may be carried over either SONET/SDH (using GFP
mapping) or WDM networks. Ethernet service requests will require some
service-specific parameters: priority class, VLAN ID/tag, and traffic
aggregation parameters.

ESCON and FICON are proprietary versions of the SAN service, while
Fiber Channel is the standard alternative. As is the case with
Ethernet services, SAN services may be carried over either SONET/SDH
(using GFP mapping) or WDM networks.

The control plane shall provide the carrier with the capability to
provision, control and manage all the services listed above.

7.2. Bearer Interface Types

All the bearer interfaces implemented in the ONE shall be supported by
the control plane and the associated signaling protocols.

The signaling protocol shall support the following interface types:

- SDH/SONET
- Ethernet
- FC-N, for Fiber Channel services
- OTN (G.709)
- PDH (Plesiochronous Digital Hierarchy)
- Passive Optical Network (PON), based on ATM (APON) or Ethernet
(EPON)
- ESCON and FICON

7.3. Optical Service Invocation

As mentioned earlier, the methods of service invocation play an
important role in defining different services.
7.3.1. Provider-Initiated Service Provisioning

In this scenario, users forward their service requests to the provider
via a well-defined service management interface. All connection
management operations, including set-up, release, query and
modification, shall be invoked from the management plane. This
provisioning method is used for PC and SPC connections.

7.3.2. User-Initiated Service Provisioning

In this scenario, users forward their service requests to the provider
via a well-defined UNI in the control plane (including proxy
signaling). All connection management operation requests, including
set-up, release, query and modification, shall be invoked from
directly connected user devices or their signaling proxies. This
provisioning method is used for SC connections.

7.3.3. Call Set-up Requirements

In summary, the following requirements for the control plane have been
identified (a sketch of a call set-up handler follows the list):

- The control plane shall support action result codes as responses to
any requests over the control interfaces.

- The control plane shall support requests for call set-up, subject to
the policies in effect between the user and the network.

- The control plane shall support the destination client device's
decision to accept or reject call set-up requests from the source
client's device.

- The control plane shall support requests for call set-up and
deletion across multiple (sub)networks.

- NNI signaling shall support requests for call set-up, subject to the
policies in effect between the (sub)networks.

- Call set-up shall be supported for both uni-directional and
bi-directional connections.

- Upon call request initiation, the control plane shall generate a
network-unique Call-ID associated with the connection, to be used for
information retrieval or other activities related to that connection.

- CAC shall be provided as part of the call control functionality. It
is the role of the CAC function to determine whether the call can be
allowed to proceed, based on resource availability and authentication.

- Negotiation of call set-up for multiple service level options shall
be supported.

- The policy management system must determine what kinds of call
set-up requests can be authorized.

- The control plane elements need the ability to rate-limit (or pace)
call set-up attempts into the network.

- The control plane shall report to the management plane the success
or failure of a call request.

- Upon a connection request failure, the control plane shall report to
the management plane a cause code identifying the reason for the
failure, and all allocated resources shall be released. A negative
acknowledgment shall be returned to the source.

- Upon a connection request success, a positive acknowledgment shall
be returned to the source when the connection has been successfully
established.

- The control plane shall support requests for call release by
Call-ID.

- The control plane shall allow any end point or any intermediate node
to initiate call release procedures.

- Upon call release completion, all resources associated with the call
shall become available for new requests.

- The management plane shall be able to release calls or connections
established by the control plane, both gracefully and forcibly, on
demand.

- Partially deleted calls or connections shall not remain within the
network.

- End-to-end acknowledgments shall be used for connection deletion
requests.

- Connection deletion shall not result in either restoration or
protection being initiated.

- The control plane shall support management plane and neighboring
device requests for status query.

- The UNI shall support initial registration and updates of the client
with the network via the control plane.
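The following non-normative sketch (all names and result codes are
hypothetical) strings several of these requirements together: CAC,
destination acceptance, network-unique Call-ID generation, resource
release on failure, and reporting to the management plane:

   # Non-normative sketch of a call set-up handler. All names,
   # codes and collaborator interfaces are hypothetical.

   import itertools

   class SetupError(Exception):
       """Raised when connection establishment fails (cause in str)."""

   _call_ids = itertools.count(1)

   def handle_call_setup(request, network, mgmt_plane):
       """Process a call set-up request arriving over the UNI/E-NNI."""
       # Call admission control: admit based on resource availability
       # and authentication, subject to the policies in effect.
       if not network.cac_admit(request):
           mgmt_plane.report("call-setup", result="FAIL",
                             cause="CAC_REJECTED")
           return {"result": "NACK", "cause": "CAC_REJECTED"}

       # The destination client device may accept or reject the call.
       if not network.destination_accepts(request):
           mgmt_plane.report("call-setup", result="FAIL",
                             cause="DEST_REJECTED")
           return {"result": "NACK", "cause": "DEST_REJECTED"}

       call_id = f"CALL-{next(_call_ids):06d}"   # network-unique Call-ID

       try:
           network.establish_connections(call_id, request)
       except SetupError as err:
           # On failure: release all allocated resources, report a
           # cause code to the management plane, and NACK the source.
           network.release_resources(call_id)
           mgmt_plane.report("call-setup", result="FAIL", cause=str(err))
           return {"result": "NACK", "cause": str(err)}

       # On success: positive acknowledgment back to the source.
       mgmt_plane.report("call-setup", result="SUCCESS", call_id=call_id)
       return {"result": "ACK", "call_id": call_id}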
7.4. Optical Connection Granularity

The service granularity is determined by the specific technology,
framing and bit rate of the physical interface between the ONE and the
client at the edge, and by the capabilities of the ONE. The control
plane needs to support signaling and routing for all the services
supported by the ONE. In general, there should not be a one-to-one
correspondence imposed between the granularity of the service provided
and the maximum capacity of the interface to the user.

The control plane shall support the ITU Rec. G.709 connection
granularity for the OTN network.

The control plane shall support the SDH/SONET connection granularity.

The optical control plane shall support sub-rate interfaces, such as
VT/TU granularity (as low as 1.5 Mb/s).

The following Fiber Channel interfaces shall be supported by the
control plane if the given interfaces are available on the equipment:

- FC-12
- FC-50
- FC-100
- FC-200

The encoding of service types in the protocols used shall be such that
new service types can be added by adding new code point values or
objects.

7.5. Other Service Parameters and Requirements

7.5.1. Classes of Service

We use "service level" to describe the priority-related
characteristics of connections, such as holding priority, set-up
priority and restoration priority. The current intent is to allow each
carrier to define the actual service levels in terms of priority,
protection and restoration options. Therefore, individual carriers
will determine the mapping of individual service levels to a specific
set of quality features.

The control plane shall be capable of mapping individual service
classes into specific priority or protection and restoration options.

7.5.2. Diverse Routing Attributes

Diversity refers to the requirement that a disjoint set of network
resources (links and nodes) be utilized to provision multiple parallel
optical connections terminated between a pair of ingress and egress
ports. There are different levels of diversity, based on link, node or
administrative policy, as described below. In the simple node and link
diversity cases:

- Two optical connections are said to be node-disjoint diverse if the
two connections do not share any node along the path, except the
ingress and egress nodes.

- Two optical connections are said to be link-disjoint diverse if the
two connections do not share any link along the path.

A more general concept of diversity is the Shared Risk Group (SRG),
which is based on a risk-sharing model and allows the definition of
administrative policy-based diversity. An SRG is defined as a group of
links or nodes that share a common risk component, whose failure can
potentially cause the failure of all the links or nodes in the group.
When the SRG concept is applied to link resources, it is referred to
as a Shared Risk Link Group (SRLG). For example, all fiber links that
go through a common conduit under the ground belong to the same SRLG,
because the conduit is a shared risk component whose failure, such as
a cut, may cause all fibers in the conduit to break. Note that SRLG is
a relation defined within a group of links based upon a specific risk
factor, which can be defined on various technical or administrative
grounds, such as "sharing a conduit" or "within 10 miles of distance
proximity". Please see ITU-T G.7715 for more discussion [itu-rtg].

Therefore, two optical connections are said to be SRG-disjoint diverse
if the two connections do not have any links or nodes that belong to
the same SRG along the path.

The ability to route service paths diversely is a required control
plane feature. Diverse routing is one of the connection parameters and
is specified at the time of connection creation.

The control plane routing algorithms shall be able to route an optical
connection diversely from a previously routed connection in terms of
link-disjoint, node-disjoint and SRG-disjoint paths. A sketch of an
SRG-disjointness check follows.
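The following non-normative sketch (names are hypothetical) shows an
SRG-disjointness check for the link case: each link is mapped to the
set of SRLGs it belongs to, and two paths are diverse if their SRLG
sets do not intersect; node SRGs would be handled analogously:

   # Non-normative SRG-disjoint diversity check for two routed
   # connections (hypothetical names; link case only).

   from typing import Dict, List, Set

   def srgs_of_path(path_links: List[str],
                    srlg_map: Dict[str, Set[int]]) -> Set[int]:
       """Collect all SRLGs touched by the links of a path."""
       srgs: Set[int] = set()
       for link in path_links:
           srgs |= srlg_map.get(link, set())
       return srgs

   def srg_disjoint(path_a: List[str], path_b: List[str],
                    srlg_map: Dict[str, Set[int]]) -> bool:
       """True if the two paths share no SRLG."""
       return not (srgs_of_path(path_a, srlg_map)
                   & srgs_of_path(path_b, srlg_map))

   # Links L1 and L3 share conduit 7, so these paths are not diverse.
   srlg_map = {"L1": {7}, "L2": {9}, "L3": {7, 11}, "L4": {13}}
   print(srg_disjoint(["L1", "L2"], ["L3", "L4"], srlg_map))  # False
   print(srg_disjoint(["L2"], ["L4"], srlg_map))              # True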
8. Optical Service Provider Requirements

This section discusses specific service control and management
requirements from the service provider's point of view.

8.1. Service Access Methods to Optical Networks

In order to have access to the optical network service, a customer
needs to be physically connected to the service provider network on
the transport plane. A control plane connection may or may not be
required, depending upon the service invocation model provided to the
customer: provisioned vs. signaled. For signaled invocation, either
direct or indirect signaling methods can be used, depending upon
whether a UNI proxy is utilized on the client side. A detailed
discussion of the UNI signaling methods is given in [oif-uni].

The following access methods shall be supported:

- Cross-office access (CNE co-located with the ONE)

- Direct remote access (dedicated links to the user)

- Remote access via an access sub-network (via a
multiplexing/distribution sub-network)

8.2. Dual Homing and Network Interconnections

Dual homing is a special case of the access network. Client devices
can be dual-homed to the same or different hubs, the same or different
access networks, the same or different core networks, and the same or
different carriers. The different levels of dual-homing connectivity
result in many different combinations of configurations. The main
objective of dual homing is enhanced survivability.

Dual homing must be supported. Dual homing shall not require the use
of multiple addresses for the same client device, as the sketch below
illustrates.
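A non-normative sketch of this constraint (names are hypothetical): a
dual-homed client device carries a single address but two or more
independent access attachments:

   # Non-normative model of a dual-homed client: one client address,
   # several independent access links (hypothetical names).

   from dataclasses import dataclass
   from typing import List

   @dataclass
   class AccessLink:
       hub: str        # ONE or access sub-network attachment point
       carrier: str

   @dataclass
   class ClientDevice:
       address: str                # a single client address
       links: List[AccessLink]     # two or more for dual homing

   cne = ClientDevice(
       address="client-42",
       links=[AccessLink(hub="ONE-A", carrier="carrier-1"),
              AccessLink(hub="ONE-B", carrier="carrier-2")],
   )
   assert len(cne.links) >= 2   # survivability via diverse attachments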
8.3. Inter-domain Connectivity

A domain is a portion of a network, or an entire network, that is
controlled by a single control plane entity. This section discusses
the various requirements for connecting domains.

8.3.1. Multi-Level Hierarchy

Traditionally, transport networks are divided into core inter-city
long-haul networks, regional intra-city metro networks, and access
networks. Due to the differences in transmission technologies,
service, and multiplexing needs, the three types of networks are
served by different types of network elements and often have different
capabilities. The network hierarchy is usually implemented through the
control domain hierarchy.

When control domains exist for routing and signaling purposes, there
will be intra-domain routing/signaling and inter-domain
routing/signaling. In general, domain-based routing/signaling autonomy
is desired, and the intra-domain routing/signaling and the
inter-domain routing/signaling should be agnostic to each other.

Routing and signaling for multi-level hierarchies shall be supported,
to allow carriers to configure their networks as needed.

8.3.2. Network Interconnections

Sub-networks may have multiple points of interconnection. All relevant
NNI functions, such as routing, reachability information exchange and
interconnection topology discovery, must recognize and support
multiple points of interconnection between subnetworks. Dual
interconnection is often used as a survivable architecture.

The control plane shall provide support for routing and signaling for
subnetworks having multiple points of interconnection.

8.4. Names and Address Management

8.4.1. Address Space Separation

To ensure the scalability of, and a smooth migration toward, the
switched optical network, the separation of three address spaces is
required, as discussed in [oif-addr]:

- Internal transport network addresses: used for routing control plane
messages within the transport network. For example, if GMPLS is used,
then IP addresses should be used.

- Transport Network Assigned (TNA) addresses: routable addresses in
the optical transport network, assigned by the network.

- Client addresses: addresses that have significance in the client
layer. For example, if the clients are ATM switches, NSAP addresses
can be used; if the clients are IP routers, then IP addresses should
be used.

8.4.2. Directory Services

Directory services shall support address resolution and translation
between the various user/client device names or addresses and the
corresponding TNA addresses. The UNI shall use the user naming schemes
for connection requests. The directory service is essential for the
implementation of the overlay model.
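The following non-normative sketch (names and address formats are
illustrative only) ties Sections 8.4.1 and 8.4.2 together: the three
address spaces are kept separate, and the directory service resolves a
client name to the corresponding TNA address:

   # Non-normative sketch of the three separated address spaces and
   # a directory lookup (hypothetical names; see [oif-addr]).

   from dataclasses import dataclass

   @dataclass(frozen=True)
   class Addresses:
       internal: str   # internal transport network address
       tna: str        # Transport Network Assigned address
       client: str     # client-layer address (e.g., IP or NSAP)

   directory = {
       # client name -> network addressing record
       "router-west.example": Addresses(internal="10.0.0.5",
                                        tna="tna:0102:0001",
                                        client="192.0.2.1"),
   }

   def resolve_tna(client_name: str) -> str:
       """Directory service: client name to its TNA address."""
       return directory[client_name].tna

   print(resolve_tna("router-west.example"))   # tna:0102:0001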
Requirements:

- Service and network policies related to configuration and
provisioning, admission control, and support of Service Level
Agreements (SLAs) must be flexible, and at the same time simple and
scalable.

- The policy-based management framework must be based on
standards-based policy systems (e.g., IETF COPS [rfc2748]).

- In addition, the IPO service management system must support and be
backwards compatible with legacy service management systems.

9. Control Plane Functional Requirements for Optical Services

This section addresses the requirements for the optical control plane
in support of service provisioning.

The scope of the control plane includes the control of the interfaces
and network resources within an optical network and the interfaces
between the optical network and its client networks. In other words,
it should include both NNI and UNI aspects.

9.1. Control Plane Capabilities and Functions

The control capabilities are supported by the underlying control
functions and protocols built into the control plane.

9.1.1. Network Control Capabilities

The following capabilities are required in the network control plane
to successfully deliver automated provisioning for optical services:

- Network resource discovery

- Address assignment and resolution

- Routing information propagation and dissemination

- Path calculation and selection

- Connection management

These capabilities may be supported by a combination of functions
across the control and the management planes.

9.1.2. Control Plane Functions for Network Control

The following are essential functions needed to support network
control capabilities:

- Signaling
- Routing
- Automatic resource, service and neighbor discovery

Specific requirements for signaling, routing and discovery are
addressed in Section 10.

The general requirements for the control plane functions to support
optical networking and service functions include:

- The control plane must have the capability to establish, tear down
and maintain end-to-end connections, and the hop-by-hop connection
segments between any two end-points (a minimal sketch of this
lifecycle follows this list).

- The control plane must have the capability to support optical
traffic-engineering requirements (e.g., wavelength management),
including resource discovery and dissemination, constraint-based
routing and path computation.

- The control plane shall support network status or action result code
responses to any requests over the control interfaces.

- The control plane shall support call admission control on the UNI
and connection admission control on the NNI.

- The control plane shall support graceful release of the network
resources associated with a connection after a successful connection
teardown or a failed connection setup.

- The control plane shall support management plane requests for
connection attribute/status queries.

- The control plane must have the capability to support various
protection and restoration schemes.

- Control plane failures shall not affect active connections and shall
not adversely impact the transport and data planes.

- The control plane should support separation of control function
entities, including routing, signaling and discovery, and should allow
different control distributions of those functionalities, including
centralized, distributed or hybrid.

- The control plane should support physical separation of the control
plane from the transport plane, to support either tightly coupled or
loosely coupled control plane solutions.

- The control plane should support routing and signaling proxies that
participate in the normal routing and signaling message exchange and
processing.

- Resilience and security are crucial issues for the control plane and
are addressed in Sections 11 and 12 of this document, respectively.
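As a rough illustration of the lifecycle requirements above (admission
control at setup, graceful release of resources on teardown or failed
setup), consider the following Python sketch. All class and method
names, and the bandwidth-based admission rule, are invented for
illustration.

   class ResourcePool:
       """Toy pool tracking free bandwidth units on one link."""
       def __init__(self, capacity: int):
           self.free = capacity

       def reserve(self, bandwidth: int):
           if bandwidth > self.free:
               return None          # connection admission fails (NNI)
           self.free -= bandwidth
           return bandwidth

       def release(self, bandwidth: int):
           self.free += bandwidth   # graceful release back to the pool

   class ConnectionController:
       def __init__(self, admit, pool: ResourcePool):
           self.admit = admit       # call admission policy (UNI)
           self.pool = pool
           self.active = {}         # connection id -> reserved bandwidth

       def setup(self, conn_id: str, bandwidth: int) -> str:
           if not self.admit(bandwidth):
               return "REJECTED: call admission control"
           reserved = self.pool.reserve(bandwidth)
           if reserved is None:
               return "REJECTED: insufficient resources"
           self.active[conn_id] = reserved
           return "ESTABLISHED"

       def teardown(self, conn_id: str) -> str:
           # Resources are released whether setup completed or failed.
           reserved = self.active.pop(conn_id, None)
           if reserved is not None:
               self.pool.release(reserved)
           return "RELEASED"

   ctrl = ConnectionController(admit=lambda bw: bw <= 48,
                               pool=ResourcePool(100))
   assert ctrl.setup("conn-1", 48) == "ESTABLISHED"
   assert ctrl.setup("conn-2", 64) == "REJECTED: call admission control"
   assert ctrl.teardown("conn-1") == "RELEASED"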
9.2. Signaling Communication Network (SCN)

The signaling communication network is a transport network for control
plane messages; it consists of a set of control channels that
interconnect the nodes within the control plane. Therefore, the
signaling communication network must be accessible by each of the
communicating nodes (e.g., OXCs). If an out-of-band IP-based control
message transport network is an overlay network built on top of the IP
data network using tunneling technologies, these tunnels must be
standards-based, such as IPsec, GRE, etc.

- The signaling communication network must terminate at each of the
nodes in the transport plane.

- The signaling communication network shall not be assumed to have the
same topology as the data plane, nor shall the data plane and control
plane traffic be assumed to be congruently routed.

A control channel is the communication path for transporting control
messages between network nodes, and over the UNI (i.e., between the
UNI entity on the user side and the UNI entity on the network side).
The control messages include signaling messages, routing information
messages, and other control maintenance protocol messages such as
neighbor and service discovery.

The following three types of signaling in the control channel shall be
supported:

- In-band signaling: The signaling messages are carried over a logical
communication channel embedded in the data-carrying optical link or
channel. For example, using the overhead bytes in SONET data framing
as a logical communication channel falls into the in-band signaling
category.

- In-fiber, out-of-band signaling: The signaling messages are carried
over a dedicated communication channel separate from the optical
data-bearing channels, but within the same fiber. For example, a
dedicated wavelength or TDM channel may be used within the same fiber
as the data channels.

- Out-of-fiber signaling: The signaling messages are carried over a
dedicated communication channel or path within different fibers to
those used by the optical data-bearing channels. For example,
dedicated optical fiber links and communication paths via a separate
and independent IP-based network infrastructure are both classified as
out-of-fiber signaling.

The UNI control channel and proxy signaling defined in the OIF UNI 1.0
specification [oif-uni] shall be supported.

The signaling communication network provides communication mechanisms
between entities in the control plane.

- The signaling communication network shall support reliable message
transfer.

- The signaling communication network shall have its own OAM
mechanisms.

- The signaling communication network shall use protocols that support
congestion control mechanisms.

In addition, the signaling communication network should support
message priorities. Message prioritization allows time-critical
messages, such as those used for restoration, to have priority over
other messages, such as other connection signaling messages and
topology and resource discovery messages, as illustrated in the sketch
at the end of this section.

The signaling communication network shall be highly reliable and shall
implement failure recovery.
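The message-priority requirement above might be realized with a
priority queue in which restoration messages drain before other
signaling and discovery messages. A minimal sketch, assuming a
three-class priority ordering of our own invention:

   import heapq
   import itertools

   # Lower value = higher priority; restoration messages drain first.
   PRIORITY = {"restoration": 0, "signaling": 1, "discovery": 2}

   class ScnQueue:
       def __init__(self):
           self._heap = []
           self._seq = itertools.count()  # FIFO tie-break within a class

       def enqueue(self, msg_class: str, payload: str):
           heapq.heappush(self._heap,
                          (PRIORITY[msg_class], next(self._seq), payload))

       def dequeue(self) -> str:
           return heapq.heappop(self._heap)[2]

   q = ScnQueue()
   q.enqueue("discovery", "LSA update")
   q.enqueue("restoration", "switch-over for conn 42")
   assert q.dequeue() == "switch-over for conn 42"  # restoration preempts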
9.3. Control Plane Interface to Data Plane

In situations where the control plane and data plane are decoupled,
this interface needs to be standardized. Requirements for a standard
control-data plane interface are under study; the specification of a
control plane interface to the data plane is outside the scope of this
document.

The control plane should support a standards-based interface for
configuring switching fabrics and port functions via the management
plane.

The data plane shall monitor and detect failures (LOL, LOS, etc.) and
quality degradation (high BER, etc.) of the signals, and shall be able
to provide signal-failure and signal-degrade alarms to the control
plane accordingly, to trigger proper mitigation actions in the control
plane.

9.4. Management Plane Interface to Data Plane

The management plane shall be responsible for network resource
management in the data plane. It should be able to partition the
network resources and control the allocation and deallocation of
resources for use by the control plane.

The data plane shall monitor and detect failures and quality
degradation of the signals, and shall be able to provide signal-failure
and signal-degrade alarms, plus associated detailed fault information,
to the management plane to trigger and enable fault location and
repair by the management plane.

Management plane failures shall not affect the normal operation of a
configured and operational control plane or data plane.
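A hedged sketch of the alarm reporting described in Sections 9.3 and
9.4: the data plane raises signal-failure and signal-degrade alarms
toward the control plane (to trigger mitigation) and toward the
management plane (with detail, for fault location). The Plane class
and notify interface are assumptions made for illustration.

   from enum import Enum

   class Alarm(Enum):
       SIGNAL_FAILURE = "SF"   # e.g. loss of signal / loss of light
       SIGNAL_DEGRADE = "SD"   # e.g. bit error rate above threshold

   class Plane:
       """Stand-in for a control or management plane alarm sink."""
       def __init__(self, name: str):
           self.name = name
           self.received = []

       def notify(self, port, alarm, detail=None):
           self.received.append((port, alarm, detail))

   def report_alarm(port, alarm, detail, control_plane, management_plane):
       control_plane.notify(port, alarm)             # triggers mitigation
       management_plane.notify(port, alarm, detail)  # enables fault location

   cp, mp = Plane("control"), Plane("management")
   report_alarm("port-7", Alarm.SIGNAL_DEGRADE,
                "BER 1e-5 above 1e-6 threshold", cp, mp)
   assert mp.received[0][2] is not None and cp.received[0][2] is None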
9.5. Control Plane Interface to Management Plane

The control plane is considered a managed entity within a network.
Therefore, it is subject to management requirements, just as other
managed entities in the network are subject to such requirements.

The control plane should be able to service requests from the
management plane for end-to-end connection provisioning (e.g., SPC
connections) and for control plane database information queries (e.g.,
of the topology database).

The control plane shall report all control plane faults to the
management plane with detailed fault information.

The control, management and transport planes each have well-defined
network functions. Those functions are orthogonal to each other.
However, this does not imply total independence. Since the management
plane is responsible for the management of both the control plane and
the transport plane, the management plane plays an authoritative role.

In general, the management plane shall have authority over the control
plane. The management plane should be able to configure the routing,
signaling and discovery control parameters, such as hold-down timers,
hello intervals, etc., to affect the behavior of the control plane.

In the case of network failure, both the management plane and the
control plane need fault information at the same priority. The control
plane shall be responsible for providing the necessary statistical
data, such as call counts and traffic statistics, to the management
plane; these should be available upon query from the management plane.
The management plane shall be able to tear down connections
established by the control plane, both gracefully and forcibly, on
demand.

9.6. IP and Optical Control Plane Interconnection

The control plane interconnection model defines how two control
networks can be interconnected in terms of the controlling
relationship and the control information flow allowed between them.
There are three basic types of control plane network interconnection
models: overlay, peer and hybrid, which are defined in the IETF IPO WG
framework document [ipo-frw]. See Appendix A for more discussion.

Choosing the level of coupling depends upon a number of different
factors, some of which are:

- Variety of clients using the optical network

- Relationship between the client and optical network

- Operating model of the carrier

The overlay model (UNI-like model) shall be supported for client to
optical control plane interconnection.

Other models are optional for client to optical control plane
interconnection.

For optical to optical control plane interconnection, all three models
shall be supported. In general, the priority for support of
interconnection models should be overlay, hybrid and peer, in
decreasing order.

10. Requirements for Signaling, Routing and Discovery

10.1. Requirements for Information Sharing over UNI, I-NNI and E-NNI

Different types of interfaces impose different requirements and
functionality due to their different trust relationships.
Specifically:

- Topology information shall not be exchanged across the inter-carrier
E-NNI or the UNI.

- The control plane shall allow the carrier to configure the type and
extent of control information exchanged across the various interfaces.

- Address resolution exchange over the UNI is needed if an addressing
directory service is not available.

10.2. Signaling Functions

Call and connection control and management signaling messages are used
for the establishment, modification, status query and release of an
end-to-end optical connection. Unless otherwise specified, the word
"signaling" refers to both inter-domain and intra-domain signaling.

- The inter-domain signaling protocol shall be agnostic to the
intra-domain signaling protocol for all the domains within the
network.

- Signaling shall support both strict and loose routing.

- Signaling shall support individual as well as groups of connection
requests.

- Signaling shall support fault notifications.

- Inter-domain signaling shall support per-connection, globally unique
identifiers for all connection management primitives, based on a
well-defined naming scheme.

- Inter-domain signaling shall support crank-back and rerouting.
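To illustrate the crank-back requirement, the sketch below simulates a
setup that is blocked at a downstream domain, cranks back, and retries
an operator-supplied alternate route. The route and admission
representations are invented for illustration.

   def setup_with_crankback(route, admits, alternate_routes):
       """Attempt setup along `route`; on blocking, crank back and try
       an alternate end-to-end route. Returns the route used, or None."""
       for hop in route:
           if not admits(hop):
               # Crank-back: release downstream attempt, then reroute.
               for alt in alternate_routes:
                   if all(admits(h) for h in alt):
                       return alt
               return None   # no alternative admits the connection
       return route

   # Domain C blocks the setup; crank-back reroutes via domain D.
   blocked = {"C"}
   admits = lambda d: d not in blocked
   assert setup_with_crankback(["A", "B", "C"], admits,
                               [["A", "B", "D"]]) == ["A", "B", "D"]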
10.3. Routing Functions

Routing includes reachability information propagation, network
topology/resource information dissemination and path computation.
Network topology/resource information dissemination provides each node
in the network with information about the carrier network, such that a
single node is able to support constraint-based path selection. A
mixture of hop-by-hop routing, explicit/source routing and
hierarchical routing will likely be used within future transport
networks.

All three mechanisms (hop-by-hop routing, explicit/source-based
routing and hierarchical routing) must be supported. Messages crossing
untrusted boundaries must not contain information regarding the
details of an internal network topology.

Requirements for routing information dissemination:

- The inter-domain routing protocol shall be agnostic to the
intra-domain routing protocol within any of the domains within the
network.

- The exchange of the following types of information shall be
supported by inter-domain routing protocols:
  - Inter-domain topology
  - Per-domain topology abstraction
  - Per-domain reachability summarization

The major concerns for routing protocol performance are scalability
and stability, which impose the following requirements on the routing
protocols:

- The routing protocol shall scale with the size of the network.

The routing protocols shall support the following requirements:

- The routing protocol shall support hierarchical routing information
dissemination, including topology information aggregation and
summarization.

- The routing protocol shall minimize global information and keep
information locally significant as much as possible (e.g., information
local to a node, a sub-network, a domain, etc.). Over external
interfaces, only reachability information, next routing hop and
service capability information should be exchanged; any other
network-related information shall not leak out to other networks. For
example, a single optical node may have thousands of ports; ports with
common characteristics need not be advertised individually.

- The routing protocol shall distinguish between static routing
information and dynamic routing information, and shall update them
differently: only dynamic routing information shall be updated in real
time.

- The routing protocol shall be able to control the dynamic
information update frequency through different types of thresholds.
Two types of thresholds could be defined: absolute thresholds and
relative thresholds (see the sketch at the end of this section).

- The routing protocol shall support both trigger-based and
timeout-based information updates.

- The inter-domain routing protocol shall support policy-based routing
information exchange.

- The routing protocol shall be able to support different levels of
protection/restoration and other resiliency requirements. These are
discussed in Section 11.

All of these scalability techniques impact the accuracy of the network
resource representation. The tradeoff between the accuracy of the
routing information and the scalability of the routing protocol is an
important consideration to be made by network operators.
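The absolute and relative thresholds mentioned above can be
illustrated as a damping test applied before re-advertising a dynamic
attribute such as available bandwidth. The threshold values below are
illustrative only.

   def should_advertise(last_advertised: float, current: float,
                        absolute_threshold: float = 5.0,
                        relative_threshold: float = 0.10) -> bool:
       """Re-advertise only when the change is large enough."""
       delta = abs(current - last_advertised)
       if delta >= absolute_threshold:                    # absolute test
           return True
       if last_advertised and delta / last_advertised >= relative_threshold:
           return True                                    # relative test
       return False

   assert not should_advertise(100.0, 98.0)   # 2% change: suppressed
   assert should_advertise(100.0, 80.0)       # 20% change: re-advertised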
10.4. Requirements for Path Selection

The following are functional requirements for path selection:

- Path selection shall support shortest-path routing.

- Path selection shall also support constraint-based routing. At least
the following constraints shall be supported:
  - Cost
  - Link utilization
  - Diversity
  - Service class

- Path selection shall be able to include/exclude specific network
resources, based on policy.

- Path selection shall be able to support different levels of
diversity, including node, link, SRLG and SRG.

- Path selection algorithms shall provide carriers the ability to
support a wide range of services and multiple levels of service
classes. Parameters such as service type, transparency, bandwidth,
latency, bit error rate, etc. may be relevant.

Constraint-based routing in the optical network is significantly more
complex than in the IP network. There are many optical layer
constraints to consider, such as wavelength, diversity, optical layer
impairments, etc. A detailed discussion of the routing constraints at
the optical layer is in [ipo-olr].
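As an illustration of constraint-based path selection, the following
sketch prunes links that violate a utilization constraint or a policy
exclusion before running a shortest-path search. The graph encoding
and the constraint values are assumptions; real optical path
computation would add wavelength, diversity and impairment
constraints.

   import heapq

   def constrained_shortest_path(graph, src, dst,
                                 max_utilization=0.8,
                                 exclude=frozenset()):
       """graph: node -> list of (neighbor, cost, utilization, link_id)."""
       heap, seen = [(0, src, [src])], set()
       while heap:
           cost, node, path = heapq.heappop(heap)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for nbr, link_cost, util, link_id in graph.get(node, []):
               # Constraint pruning: policy exclusions, utilization cap.
               if link_id in exclude or util > max_utilization:
                   continue
               heapq.heappush(heap, (cost + link_cost, nbr, path + [nbr]))
       return None

   graph = {
       "A": [("B", 1, 0.9, "L1"), ("C", 2, 0.3, "L2")],
       "B": [("D", 1, 0.2, "L3")],
       "C": [("D", 2, 0.4, "L4")],
   }
   # L1 is over-utilized, so the cheaper A-B-D path is pruned.
   assert constrained_shortest_path(graph, "A", "D") == (4, ["A", "C", "D"])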
10.5. Discovery Functions

The discovery functions include neighbor, resource and service
discovery. The control plane shall support both manual configuration
and automatic discovery.

10.5.1. Neighbor Discovery

Neighbor discovery can be described as an instance of auto-discovery
that is used for associating two network entities within a layer
network based on a specified adjacency relation.

The control plane shall support the following neighbor discovery
capabilities, as described in [itu-disc]:

- Physical media adjacency, which detects and verifies the physical
layer network connectivity between two connected network element
ports.

- Logical network adjacency, which detects and verifies the logical
network layer connection, above the physical layer, between network
layer specific ports.

- Control adjacency, which detects and verifies the logical
neighboring relation between two control entities associated with data
plane network elements that form either a physical or a logical
adjacency.

The control plane shall support manual neighbor adjacency
configuration to either override or supplement the automatic neighbor
discovery function.

10.5.2. Resource Discovery

Resource discovery is concerned with the ability to verify physical
connectivity between two ports on adjacent network elements, improve
inventory management of network resources, detect configuration
mismatches between adjacent ports, associate port characteristics of
adjacent network elements, etc. Resource discovery shall be supported.

Resource discovery can be achieved through either manual provisioning
or automated procedures. The procedures are generic, while the
specific mechanisms and control information can be technology
dependent.

After neighbor discovery, resource verification and monitoring must be
performed periodically to verify physical attributes and ensure
compatibility.

10.5.3. Service Discovery

Service discovery can be described as an instance of auto-discovery
that is used for verifying and exchanging the service capabilities of
a network. Service discovery can only happen after neighbor discovery.
Since the service capabilities of a network can change dynamically,
service discovery may need to be repeated. Service discovery is
required for all the optical services supported.
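The sequencing above (service discovery only after neighbor discovery,
repeated as capabilities change) can be illustrated as follows; the
message fields and capability names are invented for illustration.

   def neighbor_discovery(local_id: str, hello_from_peer: dict) -> bool:
       """Two-way connectivity check: the peer's hello must echo our id."""
       return hello_from_peer.get("seen") == local_id

   def service_discovery(adjacency_up: bool, peer_capabilities: set):
       if not adjacency_up:
           return None   # service discovery requires neighbor discovery
       return sorted(peer_capabilities)

   hello = {"sender": "node-B", "seen": "node-A"}
   up = neighbor_discovery("node-A", hello)
   assert service_discovery(up, {"STS-3c", "1GbE"}) == ["1GbE", "STS-3c"]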
However, the mapping of individual service levels to a specific set of
protection/restoration options and connection priorities will be
determined by individual carriers; a sketch of one such mapping
appears at the end of Section 11.

In order for the network to support multiple grades of service, the
control plane must support differing protection and restoration
options on a per-connection basis.

In order for the network to support multiple grades of service, the
control plane must also support setup priority, restoration priority
and holding priority on a per-connection basis.

In general, the following protection schemes shall be considered for
all protection cases within the network:

- Dedicated protection: 1+1 and 1:1
- Shared protection: 1:N and M:N
- Unprotected

The control plane shall support an "extra-traffic" capability, which
allows unprotected traffic to be transmitted on the protection
circuit.

The control plane shall support both trunk-side and drop-side
protection switching.

The following restoration schemes should be supported:

- Restorable
- Un-restorable

Protection and restoration shall be supported on both an end-to-end
basis and a link-by-link basis.

Protection and restoration configuration should be software-based
only.

The control plane shall allow the modification of protection and
restoration attributes on a per-connection basis.

The control plane shall support mechanisms for reserving bandwidth
resources for restoration.

The control plane shall support mechanisms for normalizing connection
routing (reversion) after failure repair.

Normal connection management operations (e.g., connection deletion)
shall not result in protection/restoration being initiated.

11.2. Control Plane Resiliency

The control plane may be affected by failures in signaling network
connectivity and by software failures (e.g., in the signaling or
topology and resource discovery modules).

The control plane should implement signaling message priorities to
ensure that restoration messages receive preferential treatment,
resulting in faster restoration.

The optical control plane signaling network shall support protection
and restoration options that enable it to be self-healing in case of
failures within the control plane.

Control network failure detection mechanisms shall distinguish between
control channel failures and software process failures.

A control plane failure shall impact only the capability to provision
new services.

Fault localization techniques for the isolation of failed control
resources shall be supported.

Recovery from control plane failures shall result in complete recovery
and re-synchronization of the network.

There shall not be a single point of failure in the control plane
systems design.

Partial or total failure of the control plane shall not affect
existing established connections; only the capability to accept new
connection requests shall be lost.
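To make the service-level mapping of Section 11.1 concrete, a
carrier-defined table might map each service level to
protection/restoration options and per-connection priorities. The
level names and values below are illustrative, not standardized.

   # Hypothetical carrier policy:
   #  level -> (protection, restorable, setup_prio, holding_prio)
   SERVICE_LEVELS = {
       "platinum":    ("1+1",         True,  0, 0),  # not pre-emptable
       "gold":        ("1:N",         True,  1, 1),
       "bronze":      ("unprotected", True,  2, 2),
       "best-effort": ("unprotected", False, 3, 3),  # pre-emptable
   }

   def connection_parameters(service_level: str) -> dict:
       protection, restorable, setup_prio, holding_prio = \
           SERVICE_LEVELS[service_level]
       return {
           "protection": protection,
           "restorable": restorable,
           "setup_priority": setup_prio,
           "holding_priority": holding_prio,
       }

   assert connection_parameters("gold")["protection"] == "1:N"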
12. Security Considerations

This section describes security considerations and requirements for
optical services and the associated control plane.

12.1. Optical Network Security Concerns

Since the optical service is directly related to the physical network,
which is fundamental to a telecommunications infrastructure, stringent
security assurance mechanisms should be implemented in optical
networks.

In terms of security, an optical connection has two aspects. One is
the security of the data plane, to which an optical connection itself
belongs; the other is the security of the control plane.

12.1.1. Data Plane Security

- Misconnection shall be avoided in order to keep the user's data
confidential. For enhancing the integrity and confidentiality of data,
it may be helpful to support scrambling of data at layer 2 or
encryption of data at a higher layer.

12.1.2. Control Plane Security

It is desirable to decouple the control plane from the data plane
physically.

Restoration shall not result in misconnections (connections
established to a destination other than that intended), even for short
periods of time (e.g., during contention resolution). For example,
signaling messages used to restore connectivity after a failure should
not be forwarded by a node before contention has been resolved.

Additional security mechanisms should be provided to guard against
intrusions on the signaling network. Some of these may be realized
with the help of the management plane.

- Network information shall not be advertised across external
interfaces (UNI or E-NNI). The advertisement of network information
across the E-NNI shall be controlled and limited in a configurable,
policy-based fashion. The advertisement of network information shall
be isolated and managed separately by each administration.

- The signaling network itself shall be secure, blocking all
unauthorized access. The signaling network topology and addresses
shall not be advertised outside a carrier's domain of trust.

- Identification, authentication and access control shall be
rigorously used by network operators for providing access to the
control plane.

- Discovery information, including neighbor discovery, service
discovery, resource discovery and reachability information, should be
exchanged in a secure way.

- Information on security-relevant events occurring in the control
plane, or on security-relevant operations performed or attempted in
the control plane, shall be logged in the management plane.

- The management plane shall be able to analyze and exploit the logged
data in order to check whether they violate or threaten the security
of the control plane.

- The control plane shall be able to generate alarm notifications
about security-related events to the management plane in an adjustable
and selectable fashion.

- The control plane shall support recovery from successful and
attempted intrusion attacks.
12.2. Service Access Control

From a security perspective, network resources should be protected
from unauthorized access and should not be used by unauthorized
entities. Service access control is the mechanism that limits and
controls the access of entities trying to use network resources.
Especially on the UNI and E-NNI, Connection Admission Control (CAC)
functions should also support the following security features:

- CAC should be applied to any entity that tries to access network
resources through the UNI (or E-NNI). CAC should include an
authentication function for an entity in order to prevent masquerade
(spoofing). Masquerade is the fraudulent use of network resources by
pretending to be a different entity. An authenticated entity should be
given a service access level on a configurable policy basis.

- The UNI and NNI should provide optional mechanisms to ensure origin
authentication and message integrity for connection management
requests, such as set-up, tear-down and modify, and for connection
signaling messages. This is important in order to prevent
denial-of-service attacks. The UNI and E-NNI should also include
mechanisms, such as usage-based billing based on CAC, to ensure
non-repudiation of connection management messages.

- Each entity should be authorized to use network resources according
to the administrative policy set by the operator.
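One way to provide the origin authentication and message integrity
suggested above is an HMAC computed over each connection management
request with a key shared per UNI or E-NNI. The sketch below uses
Python's standard hmac module; the message format and key handling are
deliberately simplified assumptions.

   import hashlib
   import hmac

   def sign(message: bytes, key: bytes) -> bytes:
       """MAC over a connection management request."""
       return hmac.new(key, message, hashlib.sha256).digest()

   def verify(message: bytes, mac: bytes, key: bytes) -> bool:
       """Constant-time check of origin authenticity and integrity."""
       return hmac.compare_digest(sign(message, key), mac)

   key = b"per-UNI shared secret"
   request = b"SETUP conn-42 src=TNA:1.2.3 dst=TNA:4.5.6"
   mac = sign(request, key)
   assert verify(request, mac, key)              # accepted
   assert not verify(request + b"X", mac, key)   # tampered, rejected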
13. Acknowledgements

The authors of this document would like to extend special appreciation
to John Strand for his initial contributions to the carrier
requirements. We also want to acknowledge the valuable inputs from
Yangguang Xu, Zhiwei Lin, Eve Verma, Daniel Awduche, James Luciani,
Deborah Brunhard, Lynn Neir, Wesam Alanqar, Tammy Ferris, and Mark
Jones.

14. References

14.1. Normative References

[rfc2026] S. Bradner, "The Internet Standards Process -- Revision 3",
BCP 9, RFC 2026, IETF, October 1996.

[rfc2119] S. Bradner, "Key words for use in RFCs to Indicate
Requirement Levels", BCP 14, RFC 2119, 1997.

[itu-astn] ITU-T Rec. G.8070/Y.1301 (2001), "Requirements for the
Automatic Switched Transport Network (ASTN)".

[itu-ason] ITU-T Rec. G.8080/Y.1304 (2001), "Architecture of the
Automatic Switched Optical Network (ASON)".

[itu-dcm] ITU-T Rec. G.7713/Y.1704 (2001), "Distributed Call and
Connection Management (DCM)".

[itu-rtg] ITU-T Rec. G.7715/Y.1706 (2002), "Architecture and
Requirements for Routing in the Automatic Switched Optical Networks".

[itu-disc] ITU-T Rec. G.7714/Y.1705 (2001), "Generalized Automatic
Discovery Techniques".

14.2. Informative References

[itu-otn] ITU-T Rec. G.872 (2000), "Architecture of Optical Transport
Networks".

[itu-g709] ITU-T Rec. G.709 (2001), "Network Node Interface for the
Optical Transport Network".

[itu-sdh] ITU-T Rec. G.803 (2000), "Architecture of Transport Networks
based on the Synchronous Digital Hierarchy".

[ipo-frw] B. Rajagopalan, et al., "IP over Optical Networks: A
Framework", work in progress, IETF.

[oif-addr] M. Lazer, "High Level Requirements on Optical Network
Addressing", oif2001.196, 2001.

[oif-carrier] Y. Xue and M. Lazer, et al., "Carrier Optical Service
Framework and Associated Requirements for UNI", OIF2000.155, 2000.

[oif-nnireq] M. Lazer, et al., "Carrier NNI Requirements",
OIF2002.229, 2002.

[ipo-olr] A. Chiu and J. Strand, et al., "Impairments and Other
Constraints on Optical Layer Routing", work in progress, IETF.

[ietf-gsmp] A. Doria, et al., "General Switch Management Protocol V3",
work in progress, IETF, 2002.

[rfc2748] D. Durham, et al., "The COPS (Common Open Policy Service)
Protocol", RFC 2748, January 2000.

[oif-uni] Optical Internetworking Forum (OIF), "UNI 1.0 Signaling
Specification", December 2001.

[ansi-sonet] ANSI T1.105-2001, "Synchronous Optical Network (SONET) -
Basic Description including Multiplex Structure, Rates and Formats",
2001.

[itu-dcn] ITU-T Rec. G.7712/Y.1703 (2001), "Architecture and
Specification of Data Communication Network".

15. Authors' Addresses

Yong Xue
UUNET/WorldCom
22001 Loudoun County Parkway
Ashburn, VA 20147
Email: yxue@cox.net

Monica Lazer
AT&T
900 Route 202/206N, PO Box 752
Bedminster, NJ 07921-0000
mlazer@att.com

Jennifer Yates
AT&T Labs
180 Park Ave, P.O. Box 971
Florham Park, NJ 07932-0000
jyates@research.att.com

Dongmei Wang
AT&T Labs
Room B180, Building 103
180 Park Avenue
Florham Park, NJ 07932
mei@research.att.com

Ananth Nagarajan
Sprint
6220 Sprint Parkway
Overland Park, KS 66251, USA
ananth.nagarajan@mail.sprint.com

Hirokazu Ishimatsu
Japan Telecom Co., LTD
2-9-1 Hatchobori, Chuo-ku,
Tokyo 104-0032 Japan
Phone: +81 3 5540 8493
Fax: +81 3 5540 8485
hirokazu.ishimatsu@japan-telecom.co.jp

Olga Aparicio
Cable & Wireless Global
11700 Plaza America Drive
Reston, VA 20191
Phone: 703-292-2022
Email: olga.aparicio@cwusa.com

Steven Wright
Science & Technology
BellSouth Telecommunications
41G70 BSC
675 West Peachtree St. NE
Atlanta, GA 30375
Phone: +1 (404) 332-2194
Email: steven.wright@snt.bellsouth.com

Appendix A: Interconnection of Control Planes

The interconnection of the IP router (client) and optical control
planes can be realized in a number of ways, depending on the required
level of coupling. The control planes can be loosely or tightly
coupled. Loose coupling is generally referred to as the overlay model,
and tight coupling is referred to as the peer model. Additionally,
there is the augmented model, which lies somewhere between the other
two models but is more akin to the peer model. The model selected
determines the following:

- The details of the topology, resource and reachability information
advertised between the client and optical networks

- The level of control IP routers can exercise in selecting paths
across the optical network

The next three sections discuss these models in more detail, and the
last section describes the coupling requirements from a carrier's
perspective.

Peer Model (I-NNI-like model)

Under the peer model, the IP router clients act as peers of the
optical transport network, such that a single routing protocol
instance runs over both the IP and optical domains. In this regard,
the optical network elements are treated just like any other router as
far as the control plane is concerned. The peer model, although not
strictly an internal NNI, behaves like an I-NNI in the sense that
there is sharing of resource and topology information.

Presumably a common IGP such as OSPF or IS-IS, with appropriate
extensions, will be used to distribute topology information. One tacit
assumption here is that a common addressing scheme will also be used
for the optical and IP networks. A common address space can be
trivially realized by using IP addresses in both the IP and optical
domains. Thus, the optical network elements become IP-addressable
entities.
The obvious advantage of the peer model is the seamless
interconnection between the client and optical transport networks. The
tradeoff is the tight integration and the optical-specific routing
information that must be known to the IP clients.

The discussion above has focused on the client to optical control
plane interconnection. The discussion applies equally well to
interconnecting two optical control planes.

Overlay Model (UNI-like model)

Under the overlay model, the IP client routing, topology distribution,
and signaling protocols are independent of the routing, topology
distribution, and signaling protocols at the optical layer. This model
is conceptually similar to the classical IP-over-ATM model, but
applied to an optical sub-network directly.

Though the overlay model dictates that the client and optical networks
are independent, this still allows the optical network to re-use IP
layer protocols to perform the routing and signaling functions.

In addition to the protocols being independent, the addressing schemes
used by the client and the optical network must be independent in the
overlay model. That is, the use of IP layer addressing in the clients
must not place any specific requirement upon the addressing used
within the optical control plane.

The overlay model would provide a UNI to the client networks, through
which the clients could request the addition, deletion or modification
of optical connections. The optical network would additionally provide
reachability information to the clients, but no topology information
would be provided across the UNI.

Augmented Model (E-NNI-like model)

Under the augmented model, there are actually separate routing
instances in the IP and optical domains, but information from one
routing instance is passed through the other routing instance. For
example, external IP addresses could be carried within the optical
routing protocols to allow reachability information to be passed to IP
clients. A typical implementation would use BGP between the IP client
and the optical network.

The augmented model, although not strictly an external NNI, behaves
like an E-NNI in that there is limited sharing of information.

Generally, in a carrier environment there will be more than just IP
routers connected to the optical network; other examples of clients
could be ATM switches or SONET ADM equipment. This may drive the
decision towards loose coupling, to prevent undue burdens upon
non-IP-router clients. Also, loose coupling would ensure that future
clients are not hampered by legacy technologies.

Additionally, a carrier may, for business reasons, want a separation
between the client and optical networks. For example, the ISP business
unit may not want to be tightly coupled with the optical network
business unit. Another reason for separation might be simply the
politics that play out in a large carrier. That is, it would seem
unlikely that the optical transport network could be forced to run the
same set of protocols as the IP router networks. Also, forcing the
same set of protocols in both networks ties the evolution of the two
networks directly together.
That is, it would seem one could not upgrade the optical transport
network protocols without taking into consideration the impact on the
IP router network (and vice versa).

Operating models also play a role in deciding the level of coupling.
Four main operating models are envisioned for an optical transport
network:

Category 1: ISP owning all of its own infrastructure (i.e., including
fiber and duct to the customer premises)

Category 2: ISP leasing some or all of its capacity from a third party

Category 3: Carrier's carrier providing layer 1 services

Category 4: Service provider offering multiple layer 1, 2, and 3
services over a common infrastructure

Although relatively few, if any, ISPs fall into category 1, it would
seem the most likely of the four to use the peer model. The other
operating models would more likely lend themselves to an overlay
model. Most carriers would fall into category 4 and thus would most
likely choose an overlay model architecture.

Full Copyright Statement

Copyright (C) The Internet Society (2002). All Rights Reserved.

This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published and
distributed, in whole or in part, without restriction of any kind,
provided that the above copyright notice and this paragraph are
included on all such copies and derivative works. However, this
document itself may not be modified in any way, such as by removing
the copyright notice or references to the Internet Society or other
Internet organizations, except as needed for the purpose of developing
Internet standards in which case the procedures for copyrights defined
in the Internet Standards process must be followed, or as required to
translate it into languages other than English.

The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT
NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN
WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.