INTERNET-DRAFT
Document: draft-ietf-ipo-carrier-requirements-04.txt       Yong Xue
Category: Informational                                    (Editor)
Expiration Date: May, 2003                                  WorldCom, Inc

                                                            Monica Lazer
                                                            Jennifer Yates
                                                            Dongmei Wang
                                                            AT&T

                                                            Ananth Nagarajan
                                                            Sprint

                                                            Hirokazu Ishimatsu
                                                            Japan Telecom Co., LTD

                                                            Olga Aparicio
                                                            Cable & Wireless Global

                                                            Steven Wright
                                                            Bellsouth

                                                            November 2002

                   Carrier Optical Service Requirements

Status of This Memo

This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC 2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or rendered obsolete by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Abstract

This Internet-Draft describes major carriers' optical service requirements for Automatically Switched Optical Networks (ASON) from both an end-user's and an operator's perspective. Its focus is on the description of the service building blocks and service-related control plane functional requirements. The management functions for the optical services and their underlying networks are beyond the scope of this document and will be addressed in a separate document.

Table of Contents

1. Introduction                                              3
1.1 Justification                                             4
1.2 Conventions used in this document                         4
1.3 Value Statement                                           4
1.4 Scope of This Document                                    5
2. Abbreviations                                              6
3. General Requirements                                       7
3.1 Separation of Networking Functions                        7
3.2 Separation of Call and Connection Control                 8
3.3 Network and Service Scalability                           9
3.4 Transport Network Technology                              9
3.5 Service Building Blocks                                  10
4. Service Models and Applications                           10
4.1 Service and Connection Types                             10
4.2 Examples of Common Service Models                        11
5. Network Reference Model                                   12
5.1 Optical Networks and Subnetworks                         13
5.2 Network Interfaces                                       13
5.3 Intra-Carrier Network Model                              15
5.4 Inter-Carrier Network Model                              16
5.5 Implied Control Constraints                              16
6. Optical Service User Requirements                         17
6.1 Common Optical Services                                  17
6.2 Bearer Interface Types                                   18
6.3 Optical Service Invocation                               18
6.4 Optical Connection Granularity                           20
6.5 Other Service Parameters and Requirements                21
7. Optical Service Provider Requirements                     22
7.1 Access Methods to Optical Networks                       22
7.2 Dual Homing and Network Interconnections                 22
7.3 Inter-domain connectivity                                23
7.4 Names and Address Management                             23
7.5 Policy-Based Service Management Framework                24
8.
Control Plane Functional Requirements for Optical 92 Services 25 93 8.1 Control Plane Capabilities and Functions 25 94 8.2 Control Message Transport Network 27 95 8.3 Control Plane Interface to Data Plane 28 96 8.4 Management Plane Interface to Data Plane 28 97 8.5 Control Plane Interface to Management Plane 29 98 8.6 IP and Optical Control Plane Interconnection 29 99 9. Requirements for Signaling, Routing and Discovery 30 100 9.1 Requirements for information sharing over UNI, 101 I-NNI and E-NNI 30 102 9.2 Signaling Functions 30 103 9.3 Routing Functions 31 104 9.4 Requirements for path selection 32 105 9.5 Discovery Functions 33 106 10. Requirements for service and control plane 108 Y. Xue et al 110 resiliency 34 111 10.1 Service resiliency 35 112 10.2 Control plane resiliency 37 113 11. Security Considerations 37 114 11.1 Optical Network Security Concerns 37 115 11.2 Service Access Control 39 116 12. Acknowledgements 39 117 13. References 39 118 Authors' Addresses 41 119 Appendix: Interconnection of Control Planes 42 121 1. Introduction 123 Optical transport networks are evolving from the current TDM-based SONET/SDH 124 optical networks as defined by ANSI T1.105 and ITU Rec. G.803[ansi-sonet, itu- 125 sdh] to emerging WDM-based optical transport networks (OTN) as defined by ITU 126 Rec. G.872 in [itu-otn]. Therefore in the near future, carrier optical transport 127 networks are expected to consist of a mixture of the SONET/SDH-based sub- 128 networks and the WDM-based wavelength or fiber switched OTN sub-networks. The 129 OTN networks can be either transparent or opaque depending upon if O-E-O 130 functions are utilized within the optical networks. Optical networking 131 encompasses the functionalities for the establishment, transmission, 132 multiplexing and switching of optical connections carrying a wide range of user 133 signals of varying formats and bit rate. The optical connections in this 134 document include switched optical path using TDM channel, WADM wavelength or 135 fiber links. 137 Some of the challenges for the carriers are efficient bandwidth management and 138 fast service provisioning in a multi-technology and possibly multi-vendor 139 networking environment. The emerging and rapidly evolving Automatically Switched 140 Optical Network (ASON) technology [itu-astn, itu-ason] is aimed at providing 141 optical networks with intelligent networking functions and capabilities in its 142 control plane to enable rapid optical connection provisioning, dynamic rerouting 143 as well as multiplexing and switching at different granularity levels, including 144 fiber, wavelength and TDM channel. The ASON control plane should not only enable 145 the new networking functions and capabilities for the emerging OTN networks, but 146 significantly enhance the service provisioning capabilities for the existing 147 SONET/SDH networks as well. 149 The ultimate goals should be to allow the carriers to automate network resource 150 and topology discovery, to quickly and dynamically provision network resources 151 and circuits, and to support assorted network survivability using ring and 152 mesh-based protection and restoration techniques. The carriers see that this new 153 networking platform will create tremendous business opportunities for the 154 network operators and service providers to offer new services to the market, and 155 in the long run to reduce their network operation cost (OpEx saving), and to 156 improve their network utilization efficiency (CapEx saving). 158 1.1. 
Justification 159 Y. Xue et al 161 The charter of the IPO WG calls for a document on "Carrier Optical Service 162 Requirements" for IP over Optical networks. This document addresses that aspect 163 of the IPO WG charter. Furthermore, this document was accepted as an IPO WG 164 document by unanimous agreement at the IPO WG meeting held on March 19, 2001, in 165 Minneapolis, MN, USA. It presents a carrier as well as an end-user perspective 166 on optical network services and requirements. 168 1.2. Conventions used in this document 170 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT","SHOULD", 171 "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be 172 interpreted as described in RFC 2119. 174 1.3. Value Statement 176 By deploying ASON technology, a carrier expects to achieve the following 177 benefits from both technical and business perspectives: 178 Automated Discovery: ASON technology will enable automatic network Inventory 179 management, topology and resource discovery which eliminates the manual or semi- 180 manual process for maintaining the network information database that exist in 181 most carrier environment. 183 Rapid Circuit Provisioning: ASON technology will enable the dynamic end-to-end 184 provisioning of the optical connections across the optical network by using 185 standard routing and signaling protocols. 187 Enhanced Protection and Restoration: ASON technology will enable the network to 188 dynamically reroute an optical connection in case of failure using mesh-based 189 network protection and restoration techniques, which greatly improves the cost- 190 effectiveness compared to the current line and ring protection schemes in the 191 SONET/SDH network. 193 - Service Flexibility: ASON technology will support provisioning of 194 an assortment of existing and new services such as protocol and bit- 195 rate independent transparent network services, and bandwidth-on- 196 demand services. 198 - Enhanced Interoperability: ASON technology will use a control plane 199 utilizing industry and international standards-based architecture and 200 protocols, which facilitate the interoperability of the optical 201 network equipment from different vendors. 203 In addition, the ASON control plane may offer the following potential 204 value-added benefits: 206 - Reactive traffic engineering at optical layer that allows network 207 resources to be dynamically allocated to traffic flow. 209 - Reduce the need for service providers to develop new operational 210 support systems (OSS) software for the network control and new service 211 Y. Xue et al 213 provisioning on the optical network, thus speeding up the deployment 214 of the optical network technology and reducing the software 215 development and maintenance cost. 217 - Potential development of a unified control plane that can be used 218 for different transport technologies including OTN, SONET/SDH, ATM 219 and PDH. 221 1.4. Scope of this document 223 This document is intended to provide, from the carriers perspective, 224 a service framework and some associated requirements in relation to 225 the optical transport services to be offered in the next generation optical 226 transport networking environment and their service control and 227 management functions. As such, this document concentrates on the 228 requirements driving the work towards realization of the automatic 229 switched optical networks. 
This document is intended to be protocol- 230 neutral, but the specific goals include providing the requirements to 231 guide the control protocol development and enhancement within IETF in 232 terms of reuse of IP-centric control protocols in the optical 233 transport network. 235 Every carrier's needs are different. The objective of this document 236 is NOT to define some specific service models. Instead, some major 237 service building blocks are identified that will enable the carriers 238 to use them in order to create the best service platform most 239 suitable to their business model. These building blocks include 240 generic service types, service enabling control mechanisms and 241 service control and management functions. 243 OIF carrier group has developed a comprehensive set of control plane 244 requirements for both UNI and NNI [oif-carrier, oif-nnireq] and they 245 have been used as the base line input to this document. 247 The fundamental principles and basic set of requirements for the 248 control plane of the automatic switched optical networks have been 249 provided in a series of ITU Recommendations under the umbrella of 250 ITU ASTN/ASON architectural and functional requirements as listed 251 below: 253 Architecture: 254 - ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the Automatic 255 Switched Transport Network (ASTN)[itu-astn] 257 - ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the Automatic 258 Switched Optical Network (ASON)[itu-ason] 260 Signaling: 261 - ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and Connection 262 Management (DCM)[itu-dcm] 263 Y. Xue et al 265 Routing: 266 - ITU-T Draft Rec. G.7715/Y.1706 (2002), Architecture and Requirements for 267 Routing in the Automatically Switched Optical Network [itu-rtg] 269 Discovery: 270 - ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic Discovery 271 [itu-disc] 273 Link Management: 274 - ITU-T Rec. G.7716/Y.1707 (2003), Link Resource Management for ASON 275 (work in progress)[itu-lm] 277 Signaling Communication Network: 278 - ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification of 279 Data Communication Network [itu-dcn] 281 This document provides further detailed requirements based on the ASTN/ASON 282 framework. In addition, even though for IP over Optical we consider IP as a 283 major client to the optical network in this document, the same requirements 284 and principles should be equally applicable to non-IP clients such as 285 SONET/SDH, ATM, ITU G.709, Ethernet, etc. The general architecture for IP over 286 Optical is described in the IP over Optical framework document [ipo-frame] 288 2. Abbreviations 290 ASON Automatic Switched Optical Networking 291 ASTN Automatic Switched Transport Network 292 CAC Connection Admission Control 293 NNI Node-to-Node Interface 294 UNI User-to-Network Interface 295 I-NNI Internal NNI 296 E-NNI External NNI 297 NE Network Element 298 OTN Optical Transport Network 299 CNE Customer/Client Network Element 300 ONE Optical Network Element 301 OLS Optical Line System 302 PI Physical Interface 303 SLA Service Level Agreement 304 SCN Signaling Communication Network 306 3. General Requirements 307 In order to provide the carriers with flexibility and control of the optical 308 networks, the following set of architectural requirements are essential. 310 3.1. 
Separation of Networking Functions 312 A fundamental architectural principle of the ASON network 313 is to segregate the networking functions within 314 each layer network into three logical functional planes: control 315 plane, data plane and management plane. They are responsible for 316 Y. Xue et al 318 providing network control functions, data transmission functions and 319 network management functions respectively. The crux of the ASON 320 network is the networking intelligence that contains automatic 321 routing, signaling and discovery functions to automate the network 322 control functions. 324 Control Plane: includes the functions related to networking control 325 capabilities such as routing, signaling, and policy control, as well 326 as resource and service discovery. These functions are automated. 328 Data Plane (Transport Plane): includes the functions related to 329 bearer channels and signal transmission. 331 Management Plane: includes the functions related to the management 332 functions of network element, networks and network resources and 333 services. These functions are less automated as compared to control 334 plane functions. 336 Each plane consists of a set of interconnected functional or control 337 entities, physical or logical, responsible for providing the 338 networking or control functions defined for that network layer. 340 Each plane has clearly defined functional responsibilities. However, the 341 management plane is responsible for the management of both control and data 342 planes, thus playing an authoritative role in overall control and management 343 functions as discussed in Section 8. 345 The separation of the control plane from both the data and management 346 plane is beneficial to the carriers in that it: 348 - Allows equipment vendors to have a modular system design that will 349 be more reliable and maintainable. 351 - Allows carriers to have the flexibility to choose a third party 352 vendor control plane software systems as the control plane solution 353 for its switched optical network. 355 - Allows carriers to deploy a unified control plane and 356 OSS/management systems to manage and control different types of 357 transport networks it owns. 359 - Allows carriers to use a separate control network specially 360 designed and engineered for the control plane communications. 362 The separation of control, management and transport function is 363 required and it shall accommodate both logical and physical level 364 separation. The logical separation refers to functional separation while 365 physical separation refers to the case where the control, management and 366 transport functions physically reside in different equipment or locations. 368 Y. Xue et al 370 Note that it is in contrast to the IP network where the control 371 messages and user traffic are routed and switched based on the same 372 network topology due to the associated in-band signaling nature of 373 the IP network. 375 When the physical separation is allowed between the control and data plane, a 376 standardized interface and control protocol (e.g. GSMP [ietf-gsmp]) should be 377 supported. 379 3.2. Separation of call and connection control 381 To support many enhanced optical services, such as scheduled 382 bandwidth on demand, diverse circuit provisioning and bundled connections, a 383 call model based on the separation of call control and connection control is 384 essential. 
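As a purely illustrative, non-normative sketch of this separation (the class and field names below are invented for illustration and are not defined by ASON or by any protocol specification), a call can be modeled as a session object that owns zero or more connection objects, so that individual connections can be torn down and re-established without ending the call:

   # Illustrative Python sketch of call/connection separation.
   # All names are hypothetical; this is not a normative data model.
   import uuid
   from dataclasses import dataclass, field
   from typing import List

   @dataclass
   class Connection:
       # A single cross-network circuit supporting a call; it can be
       # deleted and re-established while the call session stays up.
       connection_id: str
       ingress: str
       egress: str
       state: str = "DOWN"

   @dataclass
   class Call:
       # End-to-end session state maintained by call control at an
       # ingress or gateway port (e.g., UNI or E-NNI).
       call_id: str = field(default_factory=lambda: uuid.uuid4().hex)
       connections: List[Connection] = field(default_factory=list)

       def attach(self, conn: Connection) -> None:
           # Connection control may associate zero, one or more
           # connections with the same call.
           self.connections.append(conn)

       def replace(self, old: Connection, new: Connection) -> None:
           # Re-establish a connection without tearing down the call.
           self.connections.remove(old)
           self.connections.append(new)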
386 The call control is responsible for the end-to-end session 387 negotiation, call admission control and call state maintenance while 388 connection control is responsible for setting up the connections 389 associated with a call across the network. A call can correspond to 390 zero, one or more connections depending upon the number of 391 connections needed to support the call. 393 The existence of the connection depends upon the existence of its 394 associated call session and connection can be deleted and re- 395 established while still keeping the call session up. 397 The call control shall be provided at an ingress port or gateway port 398 to the network such as UNI and E-NNI [ see Section 5 for definition]. 399 The connection control is provided at the originating node of the circuit as 400 well as on each link along the path. 402 The control plane shall support the separation of the call control 403 from the connection control. 405 The control plane shall support call admission control on call setup 406 and connection admission control on connection setup. 408 3.3. Network and Service Scalability 410 Although some specific applications or networks may be on a small 411 scale, the control plane protocol and functional capabilities shall 412 support large-scale networks. 414 In terms of the scale and complexity of the future optical network, 415 the following assumption can be made when considering the scalability 416 and performance that are required of the optical control and 417 management functions. 419 - There may be up to thousands of OXC nodes and the same or higher 420 order of magnitude of OADMs per carrier network. 422 Y. Xue et al 424 - There may be up to thousands of terminating ports/wavelength per 425 OXC node. 427 - There may be up to hundreds of parallel fibers between a pair of 428 OXC nodes. 430 - There may be up to hundreds of wavelength channels transmitted on 431 each fiber. 433 As for the frequency and duration of the optical connections: 435 - The expected end-to-end connection setup/teardown time should be in 436 the order of seconds, preferably less. 438 - The expected connection holding times should be in the order of 439 minutes or greater. 441 - There may be up to millions of simultaneous optical connections 442 switched across a single carrier network. 444 3.4. Transport Network Technology 446 Optical services can be offered over different types of underlying 447 optical transport technologies including both TDM-based SONET/SDH 448 network and WDM-based OTN networks. 450 For this document, standards-based transport technologies SONET/SDH 451 as defined in the ITU Rec. G.803 and OTN implementation framing as 452 defined in ITU Rec. G.709 [itu-g709] shall be supported. 454 Note that the service characteristics such as bandwidth granularity 455 and signaling framing hierarchy to a large degree will be determined 456 by the capabilities and constraints of the server layer network. 458 3.5. Service Building Blocks 460 The primary goal of this document is to identify a set of basic 461 service building blocks the carriers can use to create the best 462 suitable service models that serve their business needs. 464 The service building blocks are comprised of a well-defined set of 465 capabilities and a basic set of control and management functions. 466 These capabilities and functions should support a basic set of 467 services and enable a carrier to build enhanced services through 468 extensions and customizations. 
Examples of the building blocks include the connection types, provisioning methods, control interfaces, policy control functions, and domain internetworking mechanisms.

4. Service Models and Applications

A carrier's optical network supports multiple types of service models. Each service model may have its own service operations, target markets, and service management requirements.

4.1. Service and Connection Types

The optical network primarily offers optical paths, which are fixed-bandwidth connections between two client network elements, such as IP routers or ATM switches, established across the optical network. A connection is also defined by its demarcation: from an ingress access point, across the optical network, to an egress access point of the optical network.

The following connection capability topologies must be supported:

- Bi-directional point-to-point connection

- Uni-directional point-to-point connection

- Uni-directional point-to-multipoint connection

Point-to-point connections are the primary concern of the carriers. In this case, the following three types of network connections, based on different connection set-up control methods, shall be supported:

- Permanent connection (PC): Established hop-by-hop directly on each ONE along a specified path without relying on the network routing and signaling capability. The connection has two fixed end-points and a fixed cross-connect configuration along the path, and stays in place until it is deleted. This is similar to the concept of a PVC in ATM, and there is no automatic re-routing capability.

- Switched connection (SC): Established through the UNI signaling interface; the connection is dynamically established by the network using the network routing and signaling functions. This is similar to the concept of an SVC in ATM.

- Soft permanent connection (SPC): Established by specifying two PCs at the end-points and letting the network dynamically establish an SC connection in between. This is similar to the SPVC concept in ATM.

The PC and SPC connections should be provisioned via the management-plane-to-control-plane interface, and the SC connection should be provisioned via the signaled UNI interface.

Note that even though automated rapid optical connection provisioning is required, the carriers expect the majority of provisioned circuits, at least in the short term, to have a long lifespan ranging from months to years.

In terms of service provisioning, some carriers may choose to perform testing prior to turning a circuit over to the customer.

4.2. Examples of Common Service Models

Each carrier may define its own service model based on its business strategy and environment. The following are three example service models that carriers may use.

4.2.1. Provisioned Bandwidth Service (PBS)

The PBS model provides enhanced leased/private line services provisioned via a service management interface (MI) using either the PC or SPC type of connection. The provisioning can be real-time or near real-time. It has the following characteristics:

- Connection requests go through a well-defined management interface.

- There is a client/server relationship between the clients and the optical network.

- Clients have no optical network visibility and depend on network intelligence or the operator for optical connection setup.

4.2.2.
Bandwidth on Demand Service (BDS)

The BDS model provides bandwidth-on-demand dynamic connection services via a signaled user-network interface (UNI). The provisioning is real-time and uses the SC type of optical connection. It has the following characteristics:

- Signaled connection requests via the UNI directly from the user or its proxy.

- The customer has no or limited network visibility, depending upon the control interconnection model used and the network administrative policy.

- Relies on network or client intelligence for connection set-up, depending upon the control plane interconnection model used.

4.2.3. Optical Virtual Private Network (OVPN)

The OVPN model provides a virtual private network at the optical layer between a specified set of user sites. It has the following characteristics:

- Customers contract for a specific set of network resources such as optical connection ports, wavelengths, etc.

- The Closed User Group (CUG) concept is supported as in a normal VPN.

- Optical connections can be of the PC, SPC or SC type, depending upon the provisioning method used.

- An OVPN site can request dynamic reconfiguration of the connections between sites within the same CUG.

- A customer may have visibility and control of network resources up to the extent allowed by the customer service contract.

At a minimum, the PBS, BDS and OVPN service models described above shall be supported by the control functions.

5. Network Reference Model

This section discusses the major architectural and functional components of a generic carrier optical network, which will provide a reference model for describing the requirements for the control and management of carrier optical services.

5.1. Optical Networks and Sub-networks

As mentioned before, there are two main types of optical networks currently under consideration: the SDH/SONET network as defined in ITU Rec. G.803, and the OTN as defined in ITU Rec. G.872.

In the current SONET/SDH-based optical network, digital cross-connects (DXCs), add-drop multiplexers (ADMs) and line multiplexer terminals (LMTs) are connected in ring or linear topologies. Similarly, we assume an OTN is composed of a set of optical cross-connects (OXCs) and optical add-drop multiplexers (OADMs) interconnected in a general mesh topology using DWDM optical line systems (OLSs).

For ease of discussion and description, it is often convenient to treat an optical network as a sub-network cloud, in which the details of the network become less important; instead, the focus is on the functions and the interfaces that the optical network provides. In general, a subnetwork can be defined as a set of access points on the network boundary and a set of point-to-point optical connections between those access points.

5.2. Control Domains and Interfaces

A generic carrier network reference model describes a multi-carrier network environment. Each individual carrier network can be further partitioned into sub-networks or administrative domains for administrative, technological or architectural reasons. This partitioning can be recursive. Similarly, a network can be partitioned into control domains that match the administrative domains and are controlled by a single administrative policy. The control domains can be recursively divided into sub-domains to form a control hierarchy for scalability.
The control domain concept can be applied to routing, signaling and protection & restoration to form an autonomous control function domain.

The demarcation between domains can be either logical or physical and consists of a set of reference points identifiable in the optical network. From the control plane perspective, these reference points define a set of control interfaces in terms of optical control and management functionality. Figure 1 is an illustrative diagram of this.

                      +---------------------------------------+
                      |        single carrier network         |
   +--------------+   |                                       |
   |Customer      |   |   +------------+      +------------+  |
   |IP            |   |   |            |      |            |  |
   |Network       +--UNI--+  Optical   +-UNI--+Carrier's IP|  |
   |              |   |   | Subnetwork |      |  network   |  |
   +--------------+   |   | (Domain A) +--+   |            |  |
                      |   +------+-----+  |   +------+-----+  |
                      |          |        |          |        |
                      |        I-NNI    E-NNI       UNI       |
   +--------------+   |          |        |          |        |
   |Customer      |   |   +------+-----+  |   +------+-----+  |
   |IP            +--UNI--+            +--+   |            |  |
   |Network       |   |   |  Optical   |      |  Optical   |  |
   |              |   |   | Subnetwork +-E-NNI+ Subnetwork |  |
   +--------------+   |   | (Domain A) |      | (Domain B) |  |
                      |   +------+-----+      +------+-----+  |
                      |          |                   |        |
                      +---------------------------------------+
                                UNI                E-NNI
                                 |                   |
                         +-------+------+    +-------+--------+
                         |              |    |                |
                         | Other Client |    | Other Carrier  |
                         | Network      |    | Network        |
                         | (ATM/SONET)  |    |                |
                         +--------------+    +----------------+

              Figure 1  Generic Carrier Network Reference Model

The network interfaces encompass two aspects of the networking functions: the user data plane interface and the control plane interface. The former concerns user data transmission across the physical network interface, and the latter concerns the control message exchange across the network interface, such as signaling, routing, etc. We call the former the physical interface (PI) and the latter the control interface. Unless otherwise stated, the control interface is assumed in the remainder of this document.

5.2.1. Control Plane Interfaces

A control interface defines a relationship between two connected network entities on both sides of the interface. For each control interface, we need to define the architectural function that each side plays and a controlled set of information that can be exchanged across the interface. The information flowing over this logical interface may include, but is not limited to:

- Interface endpoint name and address

- Reachability/summarized network address information

- Topology/routing information

- Authentication and connection admission control information

- Connection management signaling messages

- Network resource control information

Different types of interfaces can be defined for network control and architectural purposes and can be used as network reference points in the control plane. In this document, the following set of interfaces is defined, as shown in Figure 1.

User-Network Interface (UNI): a bi-directional control interface between service requester and service provider control entities. The service requester control entity resides outside the carrier network control domain.

Network-Network/Node-Node Interface (NNI): a bi-directional signaling interface between two optical network elements or sub-networks.
We differentiate between the internal NNI (I-NNI) and the external NNI (E-NNI) as follows:

- E-NNI: An NNI interface between two control plane entities belonging to different control domains.

- I-NNI: An NNI interface between two control plane entities within the same control domain in the carrier network.

Different types of interface, internal vs. external, have different implied trust relationships for security and access control purposes. The trust relationship is not binary; instead, a policy-based control mechanism needs to be in place to restrict the type and amount of information that can flow across each type of interface, depending on the carrier's service and business requirements. Generally, two networks have a fully trusted relationship if they belong to the same administrative domain, in which case the control information exchange across the control interface between them can be unrestricted. Otherwise, the type and amount of control information that can cross the interface should be constrained by administrative policy.

An example of a fully trusted interface is an I-NNI between two optical network elements in a single control domain. Non-trusted interface examples include an E-NNI between two different carriers or a UNI interface between a carrier optical network and its customers. The trust level can differ for non-trusted UNI or E-NNI interfaces depending upon whether the interface is within the carrier or not. In general, an intra-carrier E-NNI has a higher trust level than an inter-carrier E-NNI.

The control plane shall support the UNI and NNI interfaces described above, and the interfaces shall be configurable in terms of the type and amount of control information exchanged; their behavior shall be consistent with the configuration (i.e., external versus internal interfaces).

5.3. Intra-Carrier Network Model

The intra-carrier network model concerns the network service control and management issues within networks owned by a single carrier.

5.3.1. Multiple Sub-networks

Without loss of generality, the optical network owned by a carrier service operator can be depicted as consisting of one or more optical sub-networks interconnected by direct optical links. There may be many different reasons for more than one optical sub-network. It may be the result of using hierarchical layering, different technologies across access, metro and long-haul (as discussed below), or a result of business mergers and acquisitions or incremental optical network technology deployment by the carrier using different vendors or technologies.

A sub-network may be a single-vendor and single-technology network. But in general, the carrier's optical network is heterogeneous in terms of equipment vendor and the technology utilized in each sub-network.

5.3.2. Access, Metro and Long-haul networks

Few carriers have end-to-end ownership of the optical networks. Even if they do, access, metro and long-haul networks often belong to different administrative divisions as separate optical sub-networks. Therefore, inter-(sub)network interconnection is essential for supporting end-to-end optical service provisioning and management. The access, metro and long-haul networks may use different technologies and architectures, and as such may have different network properties.
783 In general, end-to-end optical connectivity may easily cross multiple 784 sub-networks with the following possible scenarios: 785 Access -- Metro -- Access 786 Access - Metro -- Long Haul -- Metro - Access 787 Y. Xue et al 789 5.4. Inter-Carrier Network Model 791 The inter-carrier model focuses on the service and control aspects 792 between different carrier networks and describes the internetworking 793 relationship between them. 795 Inter-carrier interconnection provides for connectivity between 796 optical network operators. To provide the global reach end-to-end 797 optical services, optical service control and management between 798 different carrier networks becomes essential. It is possible to 799 support distributed peering within the IP client layer network where 800 the connectivity between two distant IP routers can be achieved via 801 an optical transport network. 803 5.5. Implied Control Constraints 805 The intra-carrier and inter-carrier models have different implied control 806 constraints. For example, in the intra-carrier model, the address for routing 807 and signaling only need to be unique with the carrier while the inter-carrier 808 model requires the address to be globally unique. 810 In the intra-carrier network model, the network itself forms the largest control 811 domain within the carrier network. This domain is usually partitioned into 812 multiple sub-domains, either flat or in hierarchy. The UNI and E-NNI interfaces 813 are internal to the carrier network, therefore higher trust level is assumed. 814 Because of this, direct signaling between domains and summarized topology and 815 resource information exchanged can be allowed across the internal UNI or intra- 816 carrier E-NNI interfaces. 818 In the inter-carrier network model, each carrier's optical network is 819 a separate administrative domain. Both the UNI interface between the 820 user and the carrier network and the NNI interface between two 821 carrier's networks are crossing the carrier's administrative boundary 822 and therefore are by definition external interfaces. 824 In terms of control information exchange, the topology information 825 shall not be allowed to cross both E-NNI and UNI interfaces. 827 6. Optical Service User Requirements 829 This section describes the user requirements for optical services, 830 which in turn impose the requirements on service control and 831 management for the network operators. The user requirements reflect 832 the perception of the optical service from a user's point of view. 834 6.1. Common Optical Services 836 The basic unit of an optical transport service is fixed-bandwidth 837 optical connectivity between applications. However different services are 838 Y. Xue et al 840 created based on its supported signal characteristics (format, bit 841 rate, etc), the service invocation methods and possibly the 842 associated Service Level Agreement (SLA) provided by the service 843 provider. 845 At present, the following are the major optical services provided in 846 the industry: 848 - SONET/SDH, with different degrees of transparency 850 - Optical wavelength services, transparent or opaque 852 - Ethernet at 10Mbps, 100Mbps, 1 Gbps and 10 Gbps 854 - Storage Area Networks (SANs) based on FICON, ESCON and Fiber 855 Channel 857 Optical Wavelength Service refers to transport services where signal 858 framing is negotiated between the client and the network operator 859 (framing and bit-rate dependent), and only the payload is carried 860 transparently. 
SONET/SDH transport is most widely used for network- 861 wide transport. Different levels of transparency can be achieved in 862 the SONET/SDH transmission. 864 Ethernet Services, specifically 1Gb/s and 10Gbs Ethernet services, 865 are gaining more popularity due to the lower costs of the customers' 866 premises equipment and its simplified management requirements 867 (compared to SONET or SDH). 869 Ethernet services may be carried over either SONET/SDH (GFP mapping) 870 or WDM networks. The Ethernet service requests will require some 871 service specific parameters: priority class, VLAN Id/Tag, traffic 872 aggregation parameters. 874 Storage Area Network (SAN) Services. ESCON and FICON are proprietary 875 versions of the service, while Fiber Channel is the standard 876 alternative. As is the case with Ethernet services, SAN services may 877 be carried over either SONET/SDH (using GFP mapping) or WDM networks. 879 The control plane shall provide the carrier with the capability 880 functionality to provision, control and manage all the services 881 listed above. 883 6.2. Bearer Interface Types 885 All the bearer interfaces implemented in the ONE shall be supported 886 by the control plane and associated signaling protocols. 888 The signaling shall support the following interface types 889 protocol: 890 - SDH/SONET 891 Y. Xue et al 893 - Ethernet 894 - FC-N for Fiber Channel services 895 - OTN (G.709) 896 - PDH 897 - APON and EPON 898 - ESCON and FICON 900 6.3. Optical Service Invocation 901 As mentioned earlier, the methods of service invocation play an 902 important role in defining different services. 904 6.3.1. Provider-Initiated Service Provisioning 906 In this scenario, users forward their service request to the provider 907 via a well-defined service management interface. All connection 908 management operations, including set-up, release, query, or 909 modification shall be invoked from the management plane. This provisioning 910 method is for PC and SPC connections. 912 6.3.2. User-Initiated Service Provisioning 914 In this scenario, users forward their service request to the provider 915 via a well-defined UNI interface in the control plane (including 916 proxy signaling). All connection management operation requests, 917 including set-up, release, query, or modification shall be invoked 918 from directly connected user devices, or its signaling proxy. 919 This provisioning method is for SC connection. 921 6.3.3. Call set-up requirements 922 In summary the following requirements for the control plane have been 923 identified: 924 - The control plane shall support action result codes as responses to 925 any requests over the control interfaces. 927 - The control plane shall support requests for call set-up, subject 928 to policies in effect between the user and the network. 930 - The control plane shall support the destination client device's 931 decision to accept or reject call set-up requests from the source 932 client's device. 934 - The control plane shall support requests for call set-up and 935 deletion across multiple (sub)networks. 937 - NNI signaling shall support requests for call set-up, subject to 938 policies in effect between the (sub)networks. 940 - Call set-up shall be supported for both uni-directional and bi- 941 directional connections. 943 - Upon call request initiation, the control plane shall generate a 944 Y. 
Xue et al 946 network unique Call-ID associated with the connection, to be used for 947 information retrieval or other activities related to that connection. 949 - CAC shall be provided as part of the call control functionality. It 950 is the role of the CAC function to determine if the call can be 951 allowed to proceed based on resource availability and authentication. 953 - Negotiation for call set-up for multiple service level options 954 shall be supported. 956 - The policy management system must determine what kinds of call setup 957 requests can be authorized. 959 - The control plane elements need the ability to rate limit (or pace) 960 call setup attempts into the network. 962 - The control plane shall report to the management plane, the 963 success/failures of a call request. 965 - Upon a connection request failure, the control plane shall report 966 to the management plane a cause code identifying the reason for the 967 failure and all allocated resources shall be released. A negative 968 acknowledgment shall be returned to the source. 970 - Upon a connection request success a positive acknowledgment shall 971 be returned to the source when a connection has been successfully 972 established. 974 - The control plane shall support requests for call release by Call- 975 ID. 977 - The control plane shall allow any end point or any intermediate 978 node to initiate call release procedures. 980 - Upon call release completion all resources associated with the call 981 shall become available for access for new requests. 983 - The management plane shall be able to release calls or connections 984 established by the control plane both gracefully and forcibly on 985 demand. 987 - Partially deleted calls or connections shall not remain within the 988 network. 990 - End-to-end acknowledgments shall be used for connection deletion 991 requests. 993 - Connection deletion shall not result in either restoration or 994 protection being initiated. 996 - The control plane shall support management plane and neighboring 997 Y. Xue et al 999 device requests for status query. 1001 - The UNI shall support initial registration and updates of the UNI-C 1002 with the network via the control plane. 1004 6.4. Optical Connection granularity 1006 The service granularity is determined by the specific technology, 1007 framing and bit rate of the physical interface between the ONE and 1008 the client at the edge and by the capabilities of the ONE. The 1009 control plane needs to support signaling and routing for all the 1010 services supported by the ONE. In general, there should not be a one- 1011 to-one correspondence imposed between the granularity of the service 1012 provided and the maximum capacity of the interface to the user. 1014 The control plane shall support the ITU Rec. G.709 connection 1015 granularity for the OTN network. 1017 The control plane shall support the SDH/SONET connection granularity. 1019 The optical control plane shall support sub-rate interfaces 1020 such as VT /TU granularity (as low as 1.5 Mb/s). 1022 The following fiber channel interfaces shall be supported by the 1023 control plane if the given interfaces are available on the equipment: 1025 - FC-12 1026 - FC-50 1027 - FC-100 1028 - FC-200 1030 Encoding of service types in the protocols used shall be such that 1031 new service types can be added by adding new code point values or 1032 objects. 1034 6.5. Other Service Parameters and Requirements 1036 6.5.1. 
Classes of Service

We use "service level" to describe priority-related characteristics of connections, such as holding priority, set-up priority, or restoration priority. The intent currently is to allow each carrier to define the actual service level in terms of priority, protection, and restoration options. Therefore, individual carriers will determine the mapping of individual service levels to a specific set of quality features.

The control plane shall be capable of mapping individual service classes into specific priority or protection and restoration options.

6.5.2. Diverse Routing Attributes

Diversity refers to the fact that a disjoint set of network resources (links and nodes) is utilized to provision multiple parallel optical connections terminated between a pair of ingress and egress ports. There are different levels of diversity based on link, node or administrative policy, as described below. In the simple node and link diversity case:

- Two optical connections are said to be node-disjoint diverse if the two connections do not share any node along the path except the ingress and egress nodes.

- Two optical connections are said to be link-disjoint diverse if the two connections do not share any link along the path.

A more general concept of diversity is the Shared Risk Group (SRG), which is based on a risk-sharing model and allows the definition of administrative policy-based diversity. An SRG is defined as a group of links or nodes that share a common risk component, whose failure can potentially cause the failure of all the links or nodes in the group. When the SRG concept is applied to the link resource, it is referred to as a shared risk link group (SRLG). For example, all fiber links that go through a common conduit under the ground belong to the same SRLG, because the conduit is a shared risk component whose failure, such as a cut, may cause all fibers in the conduit to break. Note that SRLG is a relation defined within a group of links based upon a specific risk factor that can be defined on various technical or administrative grounds, such as "sharing a conduit" or "within 10 miles of distance proximity", etc. Please see ITU-T G.7715 for more discussion [itu-rtg].

Therefore, two optical connections are said to be SRG-disjoint diverse if the two connections do not have any links or nodes that belong to the same SRG along the path.

The ability to route service paths diversely is a required control feature. Diverse routing is one of the connection parameters and is specified at the time of connection creation.

The control plane routing algorithms shall be able to route an optical connection diversely from a previously routed connection in terms of link-disjoint path, node-disjoint path and SRG-disjoint path.

7. Optical Service Provider Requirements

This section discusses specific service control and management requirements from the service provider's point of view.

7.1. Service Access Methods to Optical Networks

In order to have access to the optical network service, a customer needs to be physically connected to the service provider network on the transport plane. The control plane connection may or may not be required depending upon the service invocation model provided to the customer: provisioned vs. signaled.
For the signaled case, either direct or indirect signaling methods can be used, depending upon whether a UNI proxy is utilized on the client side. A detailed discussion of the UNI signaling methods can be found in [oif-uni].

The multiple access methods below shall be supported:

- Cross-office access (CNE co-located with ONE)

- Direct remote access (dedicated links to the user)

- Remote access via access sub-network (via a multiplexing/distribution sub-network)

7.2. Dual Homing and Network Interconnections

Dual homing is a special case of the access network. Client devices can be dual-homed to the same or different hubs, the same or different access networks, the same or different core networks, or the same or different carriers. The different levels of dual homing connectivity result in many different combinations of configurations. The main objective of dual homing is enhanced survivability.

Dual homing must be supported. Dual homing shall not require the use of multiple addresses for the same client device.

7.3. Inter-domain connectivity

A domain is a portion of a network, or an entire network, that is controlled by a single control plane entity. This section discusses the various requirements for connecting domains.

7.3.1. Multi-Level Hierarchy

Traditionally, transport networks are divided into core inter-city long-haul networks, regional intra-city metro networks and access networks. Due to the differences in transmission technologies, service, and multiplexing needs, the three types of networks are served by different types of network elements and often have different capabilities. The network hierarchy is usually implemented through the control domain hierarchy.

When control domains exist for routing and signaling purposes, there will be intra-domain routing/signaling and inter-domain routing/signaling. In general, domain-based routing/signaling autonomy is desired, and the intra-domain routing/signaling and the inter-domain routing/signaling should be agnostic to each other.

Routing and signaling for multi-level hierarchies shall be supported to allow carriers to configure their networks as needed.

7.3.2. Network Interconnections

Sub-networks may have multiple points of inter-connection. All relevant NNI functions, such as routing, reachability information exchanges, and inter-connection topology discovery, must recognize and support multiple points of inter-connection between subnetworks. Dual inter-connection is often used as a survivable architecture.

The control plane shall provide support for routing and signaling for subnetworks having multiple points of interconnection.

7.4. Names and Address Management

7.4.1. Address Space Separation

To ensure the scalability of, and smooth migration toward, the optical switched network, the separation of three address spaces is required, as discussed in [oif-addr]:

- Internal transport network addresses: These are used for routing control plane messages within the transport network. For example, if GMPLS is used, then IP addresses should be used.

- Transport Network Assigned (TNA) address: This is a routable address in the optical transport network and is assigned by the network.

- Client addresses: This address has significance in the client layer.
For example, if the clients are ATM switches, NSAP addresses can be used. If the clients are IP routers, then IP addresses should be used.

7.4.2. Directory Services

Directory services shall support address resolution and translation between the various user/client device names or addresses and the corresponding TNA addresses. The UNI shall use the user naming schemes for connection requests. The directory service is essential for the implementation of the overlay model.

7.4.3. Network Element Identification

Each control domain and each network element within a carrier network shall be uniquely identifiable. Similarly, all the service access points shall be uniquely identifiable.

7.5. Policy-Based Service Management Framework

The optical service must be supported by a robust policy-based management system that is able to make important decisions.

Examples of policy decisions include:

- What types of connections can be set up for a given UNI?

- What information can be shared and what information must be restricted in automatic discovery functions?

- What are the security policies over signaling interfaces?

- What routing policies should be applied in the path selection, e.g., the definition of link diversity?

Requirements:

- Service and network policies related to configuration and provisioning, admission control, and support of Service Level Agreements (SLAs) must be flexible, and at the same time simple and scalable.

- The policy-based management framework must be based on standards-based policy systems (e.g., IETF COPS [rfc2784]).

- In addition, the IPO service management system must support and be backwards compatible with legacy service management systems.

8. Control Plane Functional Requirements for Optical Services

This section addresses the requirements for the optical control plane in support of service provisioning.

The scope of the control plane includes the control of the interfaces and network resources within an optical network and the interfaces between the optical network and its client networks. In other words, it should include both NNI and UNI aspects.

8.1. Control Plane Capabilities and Functions

The control capabilities are supported by the underlying control functions and protocols built into the control plane.

8.1.1. Network Control Capabilities

The following capabilities are required in the network control plane to successfully deliver automated provisioning for optical services:

- Network resource discovery

- Address assignment and resolution

- Routing information propagation and dissemination

- Path calculation and selection

- Connection management

These capabilities may be supported by a combination of functions across the control and the management planes.

8.1.2. Control Plane Functions for Network Control

The following are essential functions needed to support network control capabilities:

- Signaling
- Routing
- Automatic resource, service and neighbor discovery

Specific requirements for signaling, routing and discovery are addressed in Section 9.
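As an illustration of how the policy decisions listed in Section 7.5 might be combined with the call admission control function discussed in Section 6.3.3, the following minimal sketch (in Python; the policy table, UNI identifiers and thresholds are invented for illustration and do not come from this document) checks a UNI call request against both administrative policy and resource availability:

   # Hypothetical per-UNI policy table (cf. Section 7.5): which
   # connection types and bandwidths a given UNI may request.
   POLICY = {
       "uni-customer-a": {"types": {"SC"}, "max_mbps": 2488},
       "uni-customer-b": {"types": {"SC", "SPC"}, "max_mbps": 9953},
   }

   def admit_call(uni_id: str, conn_type: str, requested_mbps: int,
                  available_mbps: int) -> bool:
       """Return True if the call may proceed.

       Combines a policy decision (is this request authorized on this
       UNI?) with admission control (is the resource available?).
       """
       rule = POLICY.get(uni_id)
       if rule is None:
           return False                     # unknown UNI: reject
       if conn_type not in rule["types"]:
           return False                     # connection type not authorized
       if requested_mbps > rule["max_mbps"]:
           return False                     # exceeds contracted bandwidth
       return requested_mbps <= available_mbps   # resource availability check

   # Example: a 622 Mb/s switched connection on "uni-customer-a" would be
   # admitted as long as 622 Mb/s of capacity is available:
   # admit_call("uni-customer-a", "SC", 622, 10000)  -> True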
The general requirements for the control plane functions to support optical networking and service functions include:

- The control plane must have the capability to establish, tear down and maintain end-to-end connections, and the hop-by-hop connection segments, between any two end-points.

- The control plane must have the capability to support optical traffic-engineering (e.g., wavelength management) requirements, including resource discovery and dissemination, constraint-based routing and path computation.

- The control plane shall support network status or action result code responses to any requests over the control interfaces.

- The control plane shall support call admission control on the UNI and connection admission control on the NNI.

- The control plane shall support graceful release of network resources associated with the connection after a successful connection teardown or a failed connection.

- The control plane shall support management plane requests for connection attribute/status queries.

- The control plane must have the capability to support various protection and restoration schemes.

- Control plane failures shall not affect active connections and shall not adversely impact the transport and data planes.

- The control plane should support separation of control function entities, including routing, signaling and discovery, and should allow different distributions of those functions, including centralized, distributed or hybrid.

- The control plane should support physical separation of the control plane from the transport plane to support either tightly coupled or loosely coupled control plane solutions.

- The control plane should allow routing and signaling proxies to participate in the normal routing and signaling message exchange and processing.

- Security and resilience are crucial issues for the control plane and will be addressed in Sections 10 and 11 of this document.

8.2. Signaling Communication Network (SCN)

The signaling communication network is a transport network for control plane messages, and it consists of a set of control channels that interconnect the nodes within the control plane. Therefore, the signaling communication network must be accessible by each of the communicating nodes (e.g., OXCs). If an out-of-band IP-based control message transport network is an overlay network built on top of the IP data network using tunneling technologies, these tunnels must be standards-based (e.g., IPsec, GRE).

- The signaling communication network must terminate at each of the nodes in the transport plane.

- The signaling communication network shall not be assumed to have the same topology as the data plane, nor shall the data plane and control plane traffic be assumed to be congruently routed.

A control channel is the communication path for transporting control messages between network nodes, and over the UNI (i.e., between the UNI entity on the user side (UNI-C) and the UNI entity on the network side (UNI-N)). The control messages include signaling messages, routing information messages, and other control maintenance protocol messages such as neighbor and service discovery.
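The point that the SCN topology need not mirror the data plane topology can be pictured with a small, hypothetical Python sketch; the node names and the reachability check below are invented for illustration only and assume an out-of-fiber control network.

   # Illustrative sketch only: the SCN and the data plane are modeled as two
   # independent adjacency maps, reflecting the requirement that they need not
   # share the same topology or be congruently routed.
   def reachable(adjacency, src, dst):
       """Return True if dst can be reached from src over the given topology."""
       seen, stack = {src}, [src]
       while stack:
           node = stack.pop()
           if node == dst:
               return True
           for nbr in adjacency.get(node, ()):
               if nbr not in seen:
                   seen.add(nbr)
                   stack.append(nbr)
       return False

   # Data plane: fiber adjacencies between OXCs.
   data_plane = {"OXC-A": ["OXC-B"], "OXC-B": ["OXC-A", "OXC-C"], "OXC-C": ["OXC-B"]}

   # SCN: out-of-fiber control channels routed over a separate IP network,
   # deliberately not congruent with the data plane links.
   scn = {"OXC-A": ["RTR-1"], "RTR-1": ["OXC-A", "OXC-B", "OXC-C"],
          "OXC-B": ["RTR-1"], "OXC-C": ["RTR-1"]}

   # Every transport node must still be reachable over the SCN.
   assert all(reachable(scn, "OXC-A", node) for node in ("OXC-B", "OXC-C"))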
The following three types of signaling in the control channel shall be supported:

- In-band signaling: The signaling messages are carried over a logical communication channel embedded in the data-carrying optical link or channel. For example, using the overhead bytes in SONET data framing as a logical communication channel falls into the in-band signaling methods.

- In-fiber, out-of-band signaling: The signaling messages are carried over a dedicated communication channel separate from the optical data-bearing channels, but within the same fiber. For example, a dedicated wavelength or TDM channel may be used within the same fiber as the data channels.

- Out-of-fiber signaling: The signaling messages are carried over a dedicated communication channel or path within different fibers from those used by the optical data-bearing channels. For example, dedicated optical fiber links or a communication path via a separate and independent IP-based network infrastructure are both classified as out-of-fiber signaling.

The UNI control channel and proxy signaling defined in the OIF UNI 1.0 [oif-uni] shall be supported.

The signaling communication network provides communication mechanisms between entities in the control plane.

- The signaling communication network shall support reliable message transfer.

- The signaling communication network shall have its own OAM mechanisms.

- The signaling communication network shall use protocols that support congestion control mechanisms.

In addition, the signaling communication network should support message priorities. Message prioritization allows time-critical messages, such as those used for restoration, to have priority over other messages, such as other connection signaling messages and topology and resource discovery messages.

The signaling communication network shall be highly reliable and implement failure recovery.

8.3. Control Plane Interface to Data Plane

In the situation where the control plane and data plane are decoupled, this interface needs to be standardized. Requirements for a standard control-data plane interface are under study. The specification of a control plane interface to the data plane is outside the scope of this document.

The control plane should support a standards-based interface to configure switching fabrics and port functions via the management plane.

The data plane shall monitor and detect signal failures (LOL, LOS, etc.) and quality degradation (high BER, etc.) and be able to provide signal-failure and signal-degrade alarms to the control plane accordingly, to trigger proper mitigation actions in the control plane.

8.4. Management Plane Interface to Data Plane

The management plane shall be responsible for network resource management in the data plane. It should be able to partition the network resources and control the allocation and deallocation of resources for use by the control plane.

The data plane shall monitor and detect signal failures and quality degradation and be able to provide signal-failure and signal-degrade alarms, plus associated detailed fault information, to the management plane to trigger and enable fault localization and repair.
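To make the alarm-reporting requirements of Sections 8.3 and 8.4 concrete, the following hypothetical Python sketch shows data plane alarms being dispatched both to the control plane (which only needs enough information to trigger mitigation) and to the management plane (which receives the detailed fault information). The alarm types and strings are illustrative assumptions, not normative definitions.

   # Illustrative sketch only: hypothetical dispatch of data plane alarms.
   from dataclasses import dataclass

   @dataclass
   class Alarm:
       kind: str     # "signal-failure" (e.g., LOS/LOL) or "signal-degrade" (e.g., high BER)
       port: str
       detail: str   # technology-specific fault information

   def notify_control_plane(alarm: Alarm):
       # The control plane only needs enough to trigger mitigation actions.
       print(f"control plane: {alarm.kind} on {alarm.port} -> trigger protection/restoration")

   def notify_management_plane(alarm: Alarm):
       # The management plane receives the detailed fault information.
       print(f"management plane: {alarm.kind} on {alarm.port} ({alarm.detail}) -> fault localization")

   def report(alarm: Alarm):
       notify_control_plane(alarm)
       notify_management_plane(alarm)

   report(Alarm("signal-failure", "OXC-A/port-7", "LOS detected on receive fiber"))
   report(Alarm("signal-degrade", "OXC-A/port-7", "BER above configured threshold"))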
Management plane failures shall not affect the normal operation of a configured and operational control plane or data plane.

8.5. Control Plane Interface to Management Plane

The control plane is considered a managed entity within a network. Therefore, it is subject to management requirements just as other managed entities in the network are subject to such requirements.

The control plane should be able to service requests from the management plane for end-to-end connection provisioning (e.g., SPC connections) and control plane database information queries (e.g., of the topology database).

The control plane shall report all control plane faults to the management plane with detailed fault information.

The control, management and transport planes each have their own well-defined network functions. Those functions are orthogonal to each other. However, this does not imply total independence. Since the management plane is responsible for the management of both the control plane and the transport plane, the management plane plays an authoritative role.

In general, the management plane shall have authority over the control plane. The management plane should be able to configure routing, signaling and discovery control parameters, such as hold-down timers and hello intervals, to affect the behavior of the control plane.

In the case of network failure, both the management plane and the control plane need fault information at the same priority. The control plane shall be responsible for providing necessary statistical data, such as call counts and traffic counts, to the management plane. These should be available upon query from the management plane. The management plane shall be able to tear down connections established by the control plane both gracefully and forcibly on demand.

8.6. IP and Optical Control Plane Interconnection

The control plane interconnection model defines how two control networks can be interconnected in terms of the controlling relationship and the control information flow allowed between them. There are three basic types of control plane network interconnection models: overlay, peer and hybrid, which are defined in the IETF IPO WG document [ipo-frw]. See Appendix A for more discussion.

Choosing the level of coupling depends upon a number of different factors, some of which are:

- Variety of clients using the optical network

- Relationship between the client and optical network

- Operating model of the carrier

The overlay model (UNI-like model) shall be supported for client-to-optical control plane interconnection.

Other models are optional for client-to-optical control plane interconnection.

For optical-to-optical control plane interconnection, all three models shall be supported. In general, the priority for support of interconnection models should be overlay, hybrid and peer, in decreasing order.

9. Requirements for Signaling, Routing and Discovery

9.1. Requirements for information sharing over UNI, I-NNI and E-NNI

Different types of interfaces impose different requirements and functionality due to their different trust relationships. Specifically:

- Topology information shall not be exchanged across the inter-carrier E-NNI and the UNI.
- The control plane shall allow the carrier to configure the type and extent of control information exchange across various interfaces.

- Address resolution exchange over the UNI is needed if an addressing directory service is not available.

9.2. Signaling Functions

Call and connection control and management signaling messages are used for the establishment, modification, status query and release of an end-to-end optical connection. Unless otherwise specified, the word "signaling" refers to both inter-domain and intra-domain signaling.

- The inter-domain signaling protocol shall be agnostic to the intra-domain signaling protocol for all the domains within the network.

- Signaling shall support both strict and loose routing.

- Signaling shall support individual as well as groups of connection requests.

- Signaling shall support fault notifications.

- Inter-domain signaling shall support per-connection, globally unique identifiers for all connection management primitives based on a well-defined naming scheme.

- Inter-domain signaling shall support crank-back and rerouting.

9.3. Routing Functions

Routing includes reachability information propagation, network topology/resource information dissemination and path computation. Network topology/resource information dissemination is to provide each node in the network with information about the carrier network such that a single node is able to support constraint-based path selection. A mixture of hop-by-hop routing, explicit/source routing and hierarchical routing will likely be used within future transport networks.

All three mechanisms (hop-by-hop routing, explicit/source-based routing and hierarchical routing) must be supported. Messages crossing untrusted boundaries must not contain information regarding the details of an internal network topology.

Requirements for routing information dissemination:

- The inter-domain routing protocol shall be agnostic to the intra-domain routing protocol within any of the domains within the network.

- The exchange of the following types of information shall be supported by inter-domain routing protocols:
  - Inter-domain topology
  - Per-domain topology abstraction
  - Per-domain reachability summarization

Major concerns for routing protocol performance are scalability and stability, which impose the following requirement on the routing protocols:

- The routing protocol shall scale with the size of the network.

The routing protocols shall support the following requirements:

- The routing protocol shall support hierarchical routing information dissemination, including topology information aggregation and summarization.

- The routing protocol(s) shall minimize global information and keep information locally significant as much as possible. Over external interfaces, only reachability information, next routing hop and service capability information should be exchanged. Any other network-related information shall not leak out to other networks.

- The routing protocol shall be able to keep information locally significant (e.g., information local to a node, a sub-network, a domain, etc.) rather than advertising it globally. For example, a single optical node may have thousands of ports.
The ports with common characteristics need not be advertised individually.

- The routing protocol shall distinguish between static routing information and dynamic routing information. The routing protocol operation shall update dynamic and static routing information differently. Only dynamic routing information shall be updated in real time.

- The routing protocol shall be able to control the dynamic information update frequency through different types of thresholds. Two types of thresholds could be defined: absolute thresholds and relative thresholds.

- The routing protocol shall support trigger-based and timeout-based information updates.

- The inter-domain routing protocol shall support policy-based routing information exchange.

- The routing protocol shall be able to support different levels of protection/restoration and other resiliency requirements. These are discussed in Section 10.

All the scalability techniques will impact the accuracy of the network resource representation. The tradeoff between the accuracy of the routing information and the routing protocol scalability is an important consideration to be made by network operators.

9.4. Requirements for path selection

The following are functional requirements for path selection:

- Path selection shall support shortest path routing.

- Path selection shall also support constraint-based routing. At least the following constraints shall be supported:
  - Cost
  - Link utilization
  - Diversity
  - Service Class

- Path selection shall be able to include/exclude some specific network resources, based on policy.

- Path selection shall be able to support different levels of diversity, including node, link, SRLG and SRG.

- Path selection algorithms shall provide carriers the ability to support a wide range of services and multiple levels of service classes. Parameters such as service type, transparency, bandwidth, latency, bit error rate, etc. may be relevant.

Constraint-based routing in the optical network is significantly more complex compared to the IP network. There are many optical layer constraints to consider, such as wavelength, diversity, optical layer impairments, etc. A detailed discussion of the routing constraints at the optical layer is in [ipo-olr].

9.5. Discovery Functions

The discovery functions include neighbor, resource and service discovery. The control plane shall support both manual configuration and automatic discovery.

9.5.1. Neighbor discovery

Neighbor discovery can be described as an instance of auto-discovery that is used for associating two network entities within a layer network based on a specified adjacency relation.

The control plane shall support the following neighbor discovery capabilities, as described in [itu-disc]:

- Physical media adjacency, which detects and verifies the physical layer network connectivity between two connected network element ports.

- Logical network adjacency, which detects and verifies the logical network layer connection above the physical layer between network-layer-specific ports.

- Control adjacency, which detects and verifies the logical neighboring relation between two control entities associated with data plane network elements that form either a physical or a logical adjacency.
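The following Python sketch gives a simplified, hypothetical illustration of physical adjacency verification: each far end reports its identity over the link under test, and a mismatch against the locally configured expectation flags a mis-cabling. It is not the discovery procedure defined in [itu-disc], only an aid to intuition; all node and port names are invented.

   # Illustrative sketch only: hypothetical port-identity exchange used to
   # verify physical adjacency and detect configuration mismatches.
   def verify_physical_adjacency(local, remote, expected):
       """
       local/remote: (node_id, port_id) actually connected by the fiber under test.
       expected:     (node_id, port_id) the local node believes is at the far end.
       Returns True if the configured expectation matches what the far end reports.
       """
       reported = remote   # the far end echoes back its own identity over the link
       if reported != expected:
           print(f"adjacency mismatch on {local}: expected {expected}, got {reported}")
           return False
       print(f"adjacency verified: {local} <-> {reported}")
       return True

   # Correct cabling: the far end is the port we expected.
   verify_physical_adjacency(("OXC-A", "port-1"), ("OXC-B", "port-9"), ("OXC-B", "port-9"))
   # Mis-cabling: configuration says OXC-B/port-9, but the fiber lands on OXC-C/port-2.
   verify_physical_adjacency(("OXC-A", "port-2"), ("OXC-C", "port-2"), ("OXC-B", "port-9"))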
The control plane shall support manual neighbor adjacency configuration to either overwrite or supplement the automatic neighbor discovery function.

9.5.2. Resource Discovery

Resource discovery is concerned with the ability to verify physical connectivity between two ports on adjacent network elements, improve inventory management of network resources, detect configuration mismatches between adjacent ports, associate port characteristics of adjacent network elements, etc. Resource discovery shall be supported.

Resource discovery can be achieved through either manual provisioning or automated procedures. The procedures are generic, while the specific mechanisms and control information can be technology dependent.

After neighbor discovery, resource verification and monitoring must be performed periodically to verify physical attributes and ensure compatibility.

9.5.3. Service Discovery

Service discovery can be described as an instance of auto-discovery that is used for verifying and exchanging the service capabilities of a network. Service discovery can only happen after neighbor discovery. Since the service capabilities of a network can change dynamically, service discovery may need to be repeated.

Service discovery is required for all the optical services supported.

10. Requirements for service and control plane resiliency

Resiliency is the capability of a network to continue its operations under failure conditions within the network. The automatic switched optical network assumes the separation of the control plane and the data plane. Therefore, the failures in the network can be divided into those affecting the data plane and those affecting the control plane. To provide enhanced optical services, resiliency measures in both the data plane and the control plane should be implemented. The following failure-handling principles shall be supported.

The control plane shall provide optical service failure detection and recovery functions such that failures in the data plane within the control plane coverage can be quickly mitigated.

The failure of the control plane shall not in any way adversely affect the normal functioning of existing optical connections in the data plane.

In general, there shall be no single point of failure for any major control plane function, including signaling, routing, etc. The control plane shall provide reliable transfer of signaling messages and flow control mechanisms for easing any congestion within the control plane.

10.1. Service resiliency

In circuit-switched transport networks, the quality and reliability of the established optical connections in the transport plane can be enhanced by the protection and restoration mechanisms provided by the control plane functions. Rapid recovery is required by transport network providers to protect service and also to support stringent Service Level Agreements (SLAs) that dictate high reliability and availability for customer connectivity.

Protection and restoration are closely related techniques for repairing network node and link failures.
Protection is a collection of failure recovery techniques meant to rehabilitate failed connections by pre-provisioning dedicated protection network connections and switching to the protection circuit once the failure is detected. Restoration is a collection of reactive techniques used to rehabilitate failed connections by dynamically rerouting the failed connection around the network failures using shared network resources.

Protection switching is characterized by a shorter recovery time at the cost of dedicated network resources, while dynamic restoration is characterized by a longer recovery time with efficient resource sharing. Furthermore, protection and restoration can be performed either on a per-link/span basis or on an end-to-end connection path basis. The former is called local repair, initiated at a node closest to the failure, and the latter is called global repair, initiated from the ingress node.

Protection and restoration actions are usually taken in reaction to failures in the network. However, during network maintenance affecting protected connections, a network operator needs to proactively force the traffic on the protected connections to switch to their protection connections.

Failures and signal degradation in the transport plane are usually technology specific and therefore shall be monitored and detected by the transport plane.

The transport plane shall report both physical-level failures and signal degradation to the control plane in the form of signal-failure and signal-degrade alarms.

The control plane shall support both alarm-triggered and hold-down-timer-based protection switching and dynamic restoration for failure recovery.

Clients will have different requirements for connection availability. These requirements can be expressed in terms of the "service level", which can be mapped to different restoration and protection options and priority-related connection characteristics, such as holding priority (e.g., pre-emptable or not), set-up priority, or restoration priority. However, the mapping of individual service levels to a specific set of protection/restoration options and connection priorities will be determined by individual carriers.

In order for the network to support multiple grades of service, the control plane must support differing protection and restoration options on a per-connection basis.

In order for the network to support multiple grades of service, the control plane must support setup priority, restoration priority and holding priority on a per-connection basis.

In general, the following protection schemes shall be considered for all protection cases within the network:
- Dedicated protection: 1+1 and 1:1
- Shared protection: 1:N and M:N
- Unprotected

The control plane shall support "extra-traffic" capability, which allows unprotected traffic to be transmitted on the protection circuit.

The control plane shall support both trunk-side and drop-side protection switching.

The following restoration schemes should be supported:
- Restorable
- Un-restorable

Protection and restoration can be done on an end-to-end basis per connection. It can also be done on a per-span or per-link basis between two adjacent network nodes.
These schemes should be supported.

As noted above, protection and restoration actions are usually triggered by failures in the network. However, during network maintenance affecting protected connections, a network operator needs to proactively force the traffic on the protected connections to switch to their protection connections. Therefore, in order to support easy network maintenance, it is required that management-initiated protection and restoration be supported.

Protection and restoration configuration should be software-based only.

The control plane shall allow the modification of protection and restoration attributes on a per-connection basis.

The control plane shall support mechanisms for reserving bandwidth resources for restoration.

The control plane shall support mechanisms for normalizing connection routing (reversion) after failure repair.

Normal connection management operations (e.g., connection deletion) shall not result in protection/restoration being initiated.

10.2. Control plane resiliency

The control plane may be affected by failures in signaling network connectivity and by software failures (e.g., in signaling, topology and resource discovery modules).

The signaling control plane should implement signaling message priorities to ensure that restoration messages receive preferential treatment, resulting in faster restoration.

The optical control plane signaling network shall support protection and restoration options to enable it to self-heal in case of failures within the control plane.

Control network failure detection mechanisms shall distinguish between control channel failures and software process failures.

A control plane failure shall only impact the capability to provision new services.

Fault localization techniques for the isolation of failed control resources shall be supported.

Recovery from control plane failures shall result in complete recovery and re-synchronization of the network.

There shall not be a single point of failure in the control plane systems design.

Partial or total failure of the control plane shall not affect the existing established connections. It should only result in the loss of the capability to accept new connection requests.

11. Security Considerations

In this section, security considerations and requirements for optical services and the associated control plane are described.

11.1. Optical Network Security Concerns

Since optical service is directly related to the physical network, which is fundamental to a telecommunications infrastructure, stringent security assurance mechanisms should be implemented in optical networks.

In terms of security, an optical connection has two aspects. One is the security of the data plane, where the optical connection itself belongs, and the other is the security of the control plane.

11.1.1. Data Plane Security

- Misconnection shall be avoided in order to keep the user's data confidential. For enhancing the integrity and confidentiality of data, it may be helpful to support scrambling of data at layer 2 or encryption of data at a higher layer.

11.1.2. Control Plane Security

It is desirable to decouple the control plane from the data plane physically.
Restoration shall not result in mis-connections (connections established to a destination other than that intended), even for short periods of time (e.g., during contention resolution). For example, signaling messages used to restore connectivity after a failure should not be forwarded by a node before contention has been resolved.

Additional security mechanisms should be provided to guard against intrusions on the signaling network. Some of these may be provided with the help of the management plane.

- Network information shall not be advertised across external interfaces (UNI or E-NNI). The advertisement of network information across the E-NNI shall be controlled and limited in a configurable, policy-based fashion. The advertisement of network information shall be isolated and managed separately by each administration.

- The signaling network itself shall be secure, blocking all unauthorized access. The signaling network topology and addresses shall not be advertised outside a carrier's domain of trust.

- Identification, authentication and access control shall be rigorously used by network operators for providing access to the control plane.

- Discovery information, including neighbor discovery, service discovery, resource discovery and reachability information, should be exchanged in a secure way.

- Information on security-relevant events occurring in the control plane, or security-relevant operations performed or attempted in the control plane, shall be logged in the management plane.

- The management plane shall be able to analyze and exploit logged data in order to check whether they violate or threaten the security of the control plane.

- The control plane shall be able to generate alarm notifications about security-related events to the management plane in an adjustable and selectable fashion.

- The control plane shall support recovery from successful and attempted intrusion attacks.

11.2. Service Access Control

From a security perspective, network resources should be protected from unauthorized access and should not be used by unauthorized entities. Service access control is the mechanism that limits and controls entities trying to access network resources. Especially on the UNI and E-NNI, Connection Admission Control (CAC) functions should also support the following security features:

- CAC should be applied to any entity that tries to access network resources through the UNI (or E-NNI). CAC should include an authentication function for an entity in order to prevent masquerade (spoofing). Masquerade is the fraudulent use of network resources by pretending to be a different entity. An authenticated entity should be given a service access level on a configurable policy basis.

- The UNI and NNI should provide optional mechanisms to ensure origin authentication and message integrity for connection management requests, such as set-up, tear-down and modify, and for connection signaling messages. This is important in order to prevent Denial of Service attacks. The UNI and E-NNI should also include mechanisms, such as usage-based billing based on CAC, to ensure non-repudiation of connection management messages.
- Each entity should be authorized to use network resources according to the administrative policy set by the operator.

12. Acknowledgements

The authors of this document would like to extend their special appreciation to John Strand for his initial contributions to the carrier requirements. We also want to acknowledge the valuable inputs from Yangguang Xu, Zhiwei Lin, Eve Verma, Daniel Awduche, James Luciani, Deborah Brunhard, Lynn Neir, Wesam Alanqar, Tammy Ferris and Mark Jones.

13. References

[rfc2026] S. Bradner, "The Internet Standards Process -- Revision 3", BCP 9, RFC 2026, IETF, October 1996.

[rfc2119] S. Bradner, "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, March 1997.

[itu-otn] ITU-T Rec. G.872 (2000), Architecture of Optical Transport Networks.

[itu-g709] ITU-T Rec. G.709 (2001), Network Node Interface for the Optical Transport Network.

[itu-sdh] ITU-T Rec. G.803 (2000), Architecture of Transport Networks based on the Synchronous Digital Hierarchy.

[ipo-frw] B. Rajagopalan, et al., "IP over Optical Networks: A Framework", work in progress, IETF, 2002.

[oif-addr] M. Lazer, "High Level Requirements on Optical Network Addressing", oif2001.196, 2001.

[oif-carrier] Y. Xue and M. Lazer, et al., "Carrier Optical Service Framework and Associated Requirements for UNI", OIF2000.155, 2000.

[oif-nnireq] M. Lazer, et al., "Carrier NNI Requirements", OIF2002.229, 2002.

[ipo-olr] A. Chiu and J. Strand, et al., "Impairments and Other Constraints on Optical Layer Routing", work in progress, IETF, 2002.

[ccamp-req] J. Jiang, et al., "Common Control and Measurement Plane Framework and Requirements", work in progress, IETF, 2001.

[ietf-gsmp] A. Doria, et al., "General Switch Management Protocol V3", work in progress, IETF, 2002.

[id-freeland] D. Freeland, et al., "Consideration on the development of an optical control plane", Nov. 2000.

[rfc2748] D. Durham, et al., "The COPS (Common Open Policy Service) Protocol", RFC 2748, Jan. 2000.

[oif-uni] Optical Internetworking Forum (OIF), "UNI 1.0 Signaling Specification", December 2001.

[itu-astn] ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the Automatic Switched Transport Network (ASTN).

[itu-ason] ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the Automatic Switched Optical Network (ASON).

[itu-dcm] ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and Connection Management (DCM).

[itu-rtg] ITU-T Draft Rec. G.7715/Y.1706 (2002), Architecture and Requirements for Routing in the Automatic Switched Optical Networks.

[itu-lm] ITU-T Draft Rec. G.7716/Y.1706 (2002), Link Resource Management for ASON Networks. (work in progress)

[itu-disc] ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic Discovery Techniques.

[itu-dcn] ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification of Data Communication Network.

[ansi-sonet] ANSI T1.105-2001, Synchronous Optical Network (SONET) - Basic Description including Multiplex Structure, Rates and Formats.

14. Authors' Addresses

Yong Xue
UUNET/WorldCom
22001 Loudoun County Parkway
Ashburn, VA 20147
Email: yxue@ieee.org

Monica Lazer
AT&T
900 ROUTE 202/206N PO BX 752
BEDMINSTER, NJ 07921-0000
mlazer@att.com

Jennifer Yates
AT&T Labs
180 PARK AVE, P.O.
BOX 971
FLORHAM PARK, NJ 07932-0000
jyates@research.att.com

Dongmei Wang
AT&T Labs
Room B180, Building 103
180 Park Avenue
Florham Park, NJ 07932
mei@research.att.com

Ananth Nagarajan
Sprint
6220 Sprint Parkway
Overland Park, KS 66251, USA
Email: ananth.nagarajan@mail.sprint.com

Hirokazu Ishimatsu
Japan Telecom Co., LTD
2-9-1 Hatchobori, Chuo-ku,
Tokyo 104-0032 Japan
Phone: +81 3 5540 8493
Fax: +81 3 5540 8485
EMail: hirokazu@japan-telecom.co.jp

Olga Aparicio
Cable & Wireless Global
11700 Plaza America Drive
Reston, VA 20191
Phone: 703-292-2022
Email: olga.aparicio@cwusa.com

Steven Wright
Science & Technology
BellSouth Telecommunications
41G70 BSC
675 West Peachtree St. NE.
Atlanta, GA 30375
Phone: +1 (404) 332-2194
Email: steven.wright@snt.bellsouth.com

Appendix A: Interconnection of Control Planes

The interconnection of the IP router (client) and optical control planes can be realized in a number of ways depending on the required level of coupling. The control planes can be loosely or tightly coupled. Loose coupling is generally referred to as the overlay model, and tight coupling is referred to as the peer model. Additionally, there is the augmented model, which is somewhat in between the other two models but more akin to the peer model. The model selected determines the following:

- The details of the topology, resource and reachability information advertised between the client and optical networks

- The level of control IP routers can exercise in selecting paths across the optical network

The next three sections discuss these models in more detail, and the last section describes the coupling requirements from a carrier's perspective.

Peer Model (I-NNI-like model)

Under the peer model, the IP router clients act as peers of the optical transport network, such that a single routing protocol instance runs over both the IP and optical domains. In this regard, the optical network elements are treated just like any other router as far as the control plane is concerned. The peer model, although not strictly an internal NNI, behaves like an I-NNI in the sense that there is sharing of resource and topology information.

Presumably a common IGP such as OSPF or IS-IS, with appropriate extensions, will be used to distribute topology information. One tacit assumption here is that a common addressing scheme will also be used for the optical and IP networks. A common address space can be trivially realized by using IP addresses in both the IP and optical domains. Thus, the optical network elements become IP-addressable entities.

The obvious advantage of the peer model is the seamless interconnection between the client and optical transport networks. The tradeoff is the tight integration and the optical-specific routing information that must be known to the IP clients.

The discussion above has focused on the client-to-optical control plane inter-connection. The discussion applies equally well to inter-connecting two optical control planes.
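As a conceptual illustration only, the short Python sketch below models the peer case: routers and optical network elements appear in one shared topology with IP addresses, so an IP client can compute a path across the optical domain itself. All names and addresses are invented for the example and do not imply any particular IGP or extension.

   # Illustrative sketch only: one shared routing view under the peer model.
   topology = {
       # node              neighbors (all nodes are IP-addressable entities)
       "rtr1 (10.0.0.1)": ["oxc1 (10.0.1.1)"],
       "oxc1 (10.0.1.1)": ["rtr1 (10.0.0.1)", "oxc2 (10.0.1.2)"],
       "oxc2 (10.0.1.2)": ["oxc1 (10.0.1.1)", "rtr2 (10.0.0.2)"],
       "rtr2 (10.0.0.2)": ["oxc2 (10.0.1.2)"],
   }

   # Because the IP clients see the optical topology, they can compute a path
   # across the optical domain themselves (here, a trivial hop count).
   def hops(topo, src, dst, seen=None):
       seen = (seen or set()) | {src}
       if src == dst:
           return 0
       best = None
       for nbr in topo[src]:
           if nbr not in seen:
               sub = hops(topo, nbr, dst, seen)
               if sub is not None and (best is None or sub + 1 < best):
                   best = sub + 1
       return best

   print(hops(topology, "rtr1 (10.0.0.1)", "rtr2 (10.0.0.2)"))  # -> 3

Under the overlay model described next, the client would see none of this topology; it would only receive reachability information across the UNI.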
Overlay Model (UNI-like model)

Under the overlay model, the IP client routing, topology distribution, and signaling protocols are independent of the routing, topology distribution, and signaling protocols at the optical layer. This model is conceptually similar to the classical IP over ATM model, but applied to an optical sub-network directly.

Though the overlay model dictates that the client and optical network are independent, this still allows the optical network to re-use IP layer protocols to perform the routing and signaling functions.

In addition to the protocols being independent, the addressing scheme used between the client and optical network must be independent in the overlay model. That is, the use of IP layer addressing in the clients must not place any specific requirement upon the addressing used within the optical control plane.

The overlay model would provide a UNI to the client networks through which the clients could request to add, delete or modify optical connections. The optical network would additionally provide reachability information to the clients, but no topology information would be provided across the UNI.

Augmented Model (E-NNI-like model)

Under the augmented model, there are actually separate routing instances in the IP and optical domains, but information from one routing instance is passed through the other routing instance. For example, external IP addresses could be carried within the optical routing protocols to allow reachability information to be passed to IP clients. A typical implementation would use BGP between the IP client and the optical network.

The augmented model, although not strictly an external NNI, behaves like an E-NNI in that there is limited sharing of information.

Generally, in a carrier environment, there will be more than just IP routers connected to the optical network. Some other examples of clients could be ATM switches or SONET ADM equipment. This may drive the decision towards loose coupling to prevent undue burdens upon non-IP router clients. Also, loose coupling would ensure that future clients are not hampered by legacy technologies.

Additionally, a carrier may, for business reasons, want a separation between the client and optical networks. For example, the ISP business unit may not want to be tightly coupled with the optical network business unit. Another reason for separation might be simply the politics that play out in a large carrier. That is, it would seem unlikely that the optical transport network could be forced to run the same set of protocols as the IP router networks. Also, by forcing the same set of protocols in both networks, the evolution of the networks is directly tied together. That is, it would seem you could not upgrade the optical transport network protocols without taking into consideration the impact on the IP router network (and vice versa).

Operating models also play a role in deciding the level of coupling.
[id-freeland] gives four main operating models envisioned for an optical transport network:

Category 1: ISP owning all of its own infrastructure (i.e., including fiber and duct to the customer premises)

Category 2: ISP leasing some or all of its capacity from a third party

Category 3: Carrier's carrier providing layer 1 services

Category 4: Service provider offering multiple layer 1, 2, and 3 services over a common infrastructure

Although relatively few, if any, ISPs fall into category 1, it would seem the most likely of the four to use the peer model. The other operating models would more likely lend themselves to an overlay model. Most carriers would fall into category 4 and thus would most likely choose an overlay model architecture.

Full Copyright Statement

Copyright (C) The Internet Society (2002). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.