2 INTERNET-DRAFT 3 Document: draft-ietf-ipo-carrier-requirements-03.txt Yong Xue 4 Category: Informational (Editor) 5 Expiration Date: December, 2002 WorldCom, Inc 7 Monica Lazer 8 Jennifer Yates 9 Dongmei Wang 10 AT&T 12 Ananth Nagarajan 13 Sprint 15 Hirokazu Ishimatsu 16 Japan Telecom Co., LTD 18 Olga Aparicio 19 Cable & Wireless Global 21 Steven Wright 22 Bellsouth 24 June 2002 26 Carrier Optical Services Requirements 28 Status of This Memo 29 This document is an Internet-Draft and is in full conformance with 30 all provisions of Section 10 of RFC2026. Internet-Drafts are working 31 documents of the Internet Engineering Task Force (IETF), its areas, 32 and its working groups. Note that other groups may also distribute 33 working documents as Internet-Drafts. 35 Internet-Drafts are draft documents valid for a maximum of six months 36 and may be updated, replaced, or rendered obsolete by other documents 37 at any time. It is inappropriate to use Internet-Drafts as reference 38 material or to cite them other than as "work in progress." 40 The list of current Internet-Drafts can be accessed at 41 http://www.ietf.org/ietf/1id-abstracts.txt. 43 The list of Internet-Draft Shadow Directories can be accessed at 44 http://www.ietf.org/shadow.html. 46 Abstract 47 This Internet-Draft describes major carriers' service requirements for the 48 automatic switched optical networks (ASON) from both an end-user's and an 49 operator's perspective. Its focus is on the description of the service building 50 blocks and service-related control plane functional requirements. The management 51 functions for the optical services and their underlying networks are beyond the 52 scope of this document and will be addressed in a separate document. 54 Y. Xue et al 56 Table of Contents 57 1. Introduction 3 58 1.1 Justification 4 59 1.2 Conventions used in this document 4 60 1.3 Value Statement 4 61 1.4 Scope of This Document 5 62 2. Abbreviations 6 63 3. General Requirements 7 64 3.1 Separation of Networking Functions 7 65 3.2 Separation of Call and Connection Control 8 66 3.3 Network and Service Scalability 9 67 3.4 Transport Network Technology 9 68 3.5 Service Building Blocks 10 69 4. Service Models and Applications 10 70 4.1 Service and Connection Types 10 71 4.2 Examples of Common Service Models 11 72 5. Network Reference Model 12 73 5.1 Optical Networks and Subnetworks 13 74 5.2 Network Interfaces 13 75 5.3 Intra-Carrier Network Model 15 76 5.4 Inter-Carrier Network Model 16 77 5.5 Implied Control Constraints 16 78 6.
Optical Service User Requirements 17 79 6.1 Common Optical Services 17 80 6.2 Bearer Interface Types 18 81 6.3 Optical Service Invocation 18 82 6.4 Optical Connection Granularity 20 83 6.5 Other Service Parameters and Requirements 21 84 7. Optical Service Provider Requirements 22 85 7.1 Access Methods to Optical Networks 22 86 7.2 Dual Homing and Network Interconnections 22 87 7.3 Inter-domain connectivity 23 88 7.4 Names and Address Management 23 89 7.5 Policy-Based Service Management Framework 24 90 8. Control Plane Functional Requirements for Optical 91 Services 25 92 8.1 Control Plane Capabilities and Functions 25 93 8.2 Control Message Transport Network 27 94 8.3 Control Plane Interface to Data Plane 28 95 8.4 Management Plane Interface to Data Plane 28 96 8.5 Control Plane Interface to Management Plane 29 97 8.6 IP and Optical Control Plane Interconnection 29 98 9. Requirements for Signaling, Routing and Discovery 30 99 9.1 Requirements for information sharing over UNI, 100 I-NNI and E-NNI 30 101 9.2 Signaling Functions 30 102 9.3 Routing Functions 31 103 9.4 Requirements for path selection 32 104 9.5 Discovery Functions 33 105 10. Requirements for service and control plane 106 resiliency 34 108 Y. Xue et al 110 10.1 Service resiliency 35 111 10.2 Control plane resiliency 37 112 11. Security Considerations 37 113 11.1 Optical Network Security Concerns 37 114 11.2 Service Access Control 39 115 12. Acknowledgements 39 116 13. References 39 117 Authors' Addresses 41 118 Appendix: Interconnection of Control Planes 42 120 1. Introduction 122 Optical transport networks are evolving from the current TDM-based 123 SONET/SDH optical networks as defined by ITU Rec. G.803 [itu-sdh] to 124 the emerging WDM-based optical transport networks (OTN) as defined by 125 the ITU Rec. G.872 in [itu-otn]. Therefore in the near future, 126 carrier optical transport networks will consist of a mixture of the 127 SONET/SDH-based sub-networks and the WDM-based wavelength or fiber 128 switched OTN sub-networks. The OTN networks can be either transparent 129 or opaque depending upon if O-E-O functions are utilized within the 130 sub-networks. Optical networking encompasses the functionalities for 131 the establishment, transmission, multiplexing, switching of optical 132 connections carrying a wide range of user signals of varying formats 133 and bit rate. 135 Some of the challenges for the carriers are bandwidth management and fast 136 service provisioning in such a multi-technology and possibly multi-vendor 137 networking environment. The emerging and rapidly evolving automatic 138 switched optical networks or ASON technology [itu-astn, itu-ason] is 139 aimed at providing optical networks with intelligent networking 140 functions and capabilities in its control plane to enable rapid 141 optical connection provisioning, dynamic rerouting as well as 142 multiplexing and switching at different granularity level, including 143 fiber, wavelength and TDM time slots. The ASON control plane should 144 not only enable the new networking functions and capabilities for the 145 emerging OTN networks, but significantly enhance the service 146 provisioning capabilities for the existing SONET/SDH networks as 147 well. 149 The ultimate goals should be to allow the carriers to quickly and 150 dynamically provision network resources and to support network 151 survivability using ring and mesh-based protection and restoration 152 techniques. 
The carriers see that this new networking platform will 153 create tremendous business opportunities for the network operators 154 and service providers to offer new services to the market, reduce 155 their network operation cost (OpEx saving), and 156 improve their network utilization efficiency (CapEx saving). 158 1.1. Justification 160 The charter of the IPO WG calls for a document on "Carrier Optical 161 Y. Xue et al 163 Services Requirements" for IP over Optical networks. This document 164 addresses that aspect of the IPO WG charter. Furthermore, this 165 document was accepted as an IPO WG document by unanimous agreement at 166 the IPO WG meeting held on March 19, 2001, in Minneapolis, MN, USA. 167 It presents a carrier and end-user perspective on optical network 168 services and requirements. 170 1.2. Conventions used in this document 172 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 173 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 174 document are to be interpreted as described in RFC 2119. 176 1.3. Value Statement 178 By deploying ASON technology, a carrier expects to achieve the 179 following benefits from both technical and business perspectives: 180 - Automated Discovery: ASON technology will enable automatic network 181 inventory, topology and resource discovery and maintenance, 182 which eliminates the manual or semi-manual process for 183 maintaining the network information database that exists in most 184 carrier environments. 186 - Rapid Circuit Provisioning: ASON technology will enable the dynamic 187 end-to-end provisioning of the optical connections across the optical 188 network by using standard routing and signaling protocols. 190 - Enhanced Survivability: ASON technology will enable the network to 191 dynamically reroute an optical connection in case of a failure using 192 mesh-based network protection and restoration techniques, which 193 greatly improves the cost-effectiveness compared to the current line 194 and ring protection schemes in the SONET/SDH network. 196 - Service Flexibility: ASON technology will support provisioning of 197 an assortment of existing and new services such as protocol and bit- 198 rate independent transparent network services, and bandwidth-on- 199 demand services. 201 - Enhanced Interoperability: ASON technology will use a control plane 202 utilizing industry and international standards architecture and 203 protocols, which facilitates the interoperability of the optical 204 network equipment from different vendors. 206 In addition, the introduction of a standards-based control plane 207 offers the following potential benefits: 209 - Reactive traffic engineering at the optical layer that allows network 210 resources to be dynamically allocated to traffic flows. 212 Y. Xue et al 214 - Reduced need for service providers to develop new operational 215 support systems software for the network control and new service 216 provisioning on the optical network, thus speeding up the deployment 217 of the optical network technology and reducing the software 218 development and maintenance cost. 220 - Potential development of a unified control plane that can be used 221 for different transport technologies including OTN, SONET/SDH, ATM 222 and PDH. 224 1.4.
Scope of this document 226 This document is intended to provide, from the carriers perspective, 227 a service framework and some associated requirements in relation to 228 the optical transport services to be offered in the next generation optical 229 transport networking environment and their service control and 230 management functions. As such, this document concentrates on the 231 requirements driving the work towards realization of the automatic 232 switched optical networks. This document is intended to be protocol- 233 neutral, but the specific goals include providing the requirements to 234 guide the control protocol development and enhancement within IETF in 235 terms of reuse of IP-centric control protocols in the optical 236 transport network. 238 Every carrier's needs are different. The objective of this document 239 is NOT to define some specific service models. Instead, some major 240 service building blocks are identified that will enable the carriers 241 to use them in order to create the best service platform most 242 suitable to their business model. These building blocks include 243 generic service types, service enabling control mechanisms and 244 service control and management functions. 246 OIF carrier group has developed a comprehensive set of control plane 247 requirements for both UNI and NNI [oif-carrier, oif-nnireq] and they 248 have been used as the base line input to this document. 250 The fundamental principles and basic set of requirements for the 251 control plane of the automatic switched optical networks have been 252 provided in a series of ITU Recommendations under the umbrella of the 253 ITU ASTN/ASON architectural and functional requirements as listed 254 below: 256 Architecture: 257 - ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the Automatic 258 Switched Transport Network (ASTN)[itu-astn] 260 - ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the Automatic 261 Switched Optical Network (ASON)[itu-ason] 263 Signaling: 265 Y. Xue et al 267 - ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and Connection 268 Management (DCM)[itu-dcm] 270 Routing: 271 - ITU-T Draft Rec. G.7715/Y.1706 (2002), Architecture and Requirements for 272 Routing in the Automatically Switched Optical Network [itu-rtg] 274 Discovery: 275 - ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic Discovery 276 [itu-disc] 278 Link Management: 279 - ITU-T Rec. G.7716/Y.1707 (2003), Link Resource Management for ASON 280 (work in progress)[itu-lm] 282 Signaling Communication Network: 283 - ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification of 284 Data Communication Network [itu-dcn] 286 This document provides further detailed requirements based on this ASTN/ASON 287 framework. In addition, even though we consider IP a major 288 client to the optical network in this document, the same requirements 289 and principles should be equally applicable to non-IP clients such as 290 SONET/SDH, ATM, ITU G.709, Ethernet, etc. The general architecture for IP over 291 Optical is described in the IP over Optical framework document [ipo-frame] 293 2. 
Abbreviations 295 ASON Automatic Switched Optical Networking 296 ASTN Automatic Switched Transport Network 297 CAC Connection Admission Control 298 NNI Node-to-Node Interface 299 UNI User-to-Network Interface 300 I-NNI Internal NNI 301 E-NNI External NNI 302 NE Network Element 303 OTN Optical Transport Network 304 CNE Customer/Client Network Element 305 ONE Optical Network Element 306 OLS Optical Line System 307 PI Physical Interface 308 SLA Service Level Agreement 309 SCN Signaling Communication Network 311 3. General Requirements 313 In this section, a number of generic requirements related to the 314 service control and management functions are discussed. 316 3.1. Separation of Networking Functions 317 Y. Xue et al 319 A fundamental architectural principle of the ASON network 320 is to segregate the networking functions within 321 each layer network into three logical functional planes: control 322 plane, data plane and management plane. They are responsible for 323 providing network control functions, data transmission functions and 324 network management functions respectively. The crux of the ASON 325 network is the networking intelligence that contains automatic 326 routing, signaling and discovery functions to automate the network 327 control functions. 329 Control Plane: includes the functions related to networking control 330 capabilities such as routing, signaling, and policy control, as well 331 as resource and service discovery. These functions are automated. 333 Data Plane (transport plane): includes the functions related to 334 bearer channels and signal transmission. 336 Management Plane: includes the functions related to the management 337 functions of network element, networks and network resources and 338 services. These functions are less automated as compared to control 339 plane functions. 341 Each plane consists of a set of interconnected functional or control 342 entities, physical or logical, responsible for providing the 343 networking or control functions defined for that network layer. 345 Each plane has clearly defined functional responsibilities. However, the 346 management plane is responsible for the management of both control and data 347 planes, thus playing an authoritative role in overall control and management 348 functions as discussed in Section 8. 350 The separation of the control plane from both the data and management 351 plane is beneficial to the carriers in that it: 353 - Allows equipment vendors to have a modular system design that will 354 be more reliable and maintainable thus reducing the overall systems 355 ownership and operation cost. 357 - Allows carriers to have the flexibility to choose a third party 358 vendor control plane software systems as its control plane solution 359 for its switched optical network. 361 - Allows carriers to deploy a unified control plane and 362 OSS/management systems to manage and control different types of 363 transport networks it owns. 365 - Allows carriers to use a separate control network specially 366 designed and engineered for the control plane communications. 368 The separation of control, management and transport function is 369 Y. Xue et al 371 required and it shall accommodate both logical and physical level 372 separation. 374 Note that it is in contrast to the IP network where the control 375 messages and user traffic are routed and switched based on the same 376 network topology due to the associated in-band signaling nature of 377 the IP network. 
379 When the physical separation is allowed between the control and data plane, a 380 standardized interface and control protocol (e.g. GSMP [ietf-gsmp]) should be 381 supported. 383 3.2. Separation of call and connection control 385 To support many enhanced optical services, such as scheduled 386 bandwidth on demand, diversity circuit provisioning and bundled connections, a 387 call model based on the separation of the call control and connection control is 388 essential. 390 The call control is responsible for the end-to-end session 391 negotiation, call admission control and call state maintenance while 392 connection control is responsible for setting up the connections 393 associated with a call across the network. A call can correspond to 394 zero, one or more connections depending upon the number of 395 connections needed to support the call. 397 The existence of the connection depends upon the existence of its 398 associated call session, and a connection can be deleted and re- 399 established while still keeping the call session up. 401 The call control shall be provided at an ingress port or gateway port 402 to the network such as UNI and E-NNI [see Section 5 for definitions]. 404 The control plane shall support the separation of the call control 405 from the connection control. 407 The control plane shall support call admission control on call setup 408 and connection admission control on connection setup.
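As an informal illustration of the call and connection separation described above (not a normative model), the following Python sketch shows a call session that owns zero or more connections; a connection can be released and re-established while the call remains up. All class and field names are hypothetical.

   from dataclasses import dataclass, field
   from typing import List
   import itertools

   _call_ids = itertools.count(1)

   @dataclass
   class Connection:
       """A single provisioned connection supporting a call."""
       conn_id: int
       path: List[str]          # ONEs traversed, e.g. ["OXC-A", "OXC-B"]
       state: str = "UP"

   @dataclass
   class Call:
       """End-to-end call session; may own zero or more connections."""
       src: str
       dst: str
       call_id: int = field(default_factory=lambda: next(_call_ids))
       connections: List[Connection] = field(default_factory=list)
       state: str = "ACTIVE"

       def add_connection(self, conn: Connection) -> None:
           # Connection control: attach a new connection to the call.
           self.connections.append(conn)

       def drop_connection(self, conn_id: int) -> None:
           # A connection can be deleted while the call session stays up.
           self.connections = [c for c in self.connections
                               if c.conn_id != conn_id]

   # Example: a call survives re-establishment of its only connection.
   call = Call(src="ClientNE-1", dst="ClientNE-2")
   call.add_connection(Connection(1, ["OXC-A", "OXC-B", "OXC-C"]))
   call.drop_connection(1)                     # connection deleted...
   call.add_connection(Connection(2, ["OXC-A", "OXC-D", "OXC-C"]))
   assert call.state == "ACTIVE"               # ...call remains active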
410 3.3. Network and Service Scalability 412 Although some specific applications or networks may be on a small 413 scale, the control plane protocol and functional capabilities shall 414 support large-scale networks. 416 In terms of the scale and complexity of the future optical network, 417 the following assumptions can be made when considering the scalability 418 and performance that are required of the optical control and 419 management functions. 421 Y. Xue et al 423 - There may be up to thousands of OXC nodes and the same or higher 424 order of magnitude of OADMs per carrier network. 426 - There may be up to thousands of terminating ports/wavelengths per 427 OXC node. 429 - There may be up to hundreds of parallel fibers between a pair of 430 OXC nodes. 432 - There may be up to hundreds of wavelength channels transmitted on 433 each fiber. 435 In relation to the frequency and duration of the optical connections: 437 - The expected end-to-end connection setup/teardown time should be in 438 the order of seconds, preferably less. 440 - The expected connection holding times should be in the order of 441 minutes or greater. 443 - There may be up to millions of simultaneous optical connections 444 switched across a single carrier network. 446 3.4. Transport Network Technology 448 Optical services can be offered over different types of underlying 449 optical transport technologies including both TDM-based SONET/SDH 450 networks and WDM-based OTN networks. 452 For this document, the standards-based transport technologies SONET/SDH 453 as defined in the ITU Rec. G.803 and OTN implementation framing as 454 defined in ITU Rec. G.709 [itu-g709] shall be supported. 456 Note that the service characteristics such as bandwidth granularity 457 and signal framing hierarchy to a large degree will be determined 458 by the capabilities and constraints of the server layer network. 460 3.5. Service Building Blocks 462 The primary goal of this document is to identify a set of basic 463 service building blocks the carriers can use to create the best 464 suitable service models that serve their business needs. 466 The service building blocks are comprised of a well-defined set of 467 capabilities and a basic set of control and management functions. 468 These capabilities and functions should support a basic set of 469 services and enable a carrier to build enhanced services through 470 extensions and customizations. Examples of the building blocks 471 include the connection types, provisioning methods, control 472 interfaces, policy control functions, and domain internetworking 473 mechanisms, etc. 475 Y. Xue et al 477 4. Service Model and Applications 479 A carrier's optical network supports multiple types of service 480 models. Each service model may have its own service operations, 481 target markets, and service management requirements. 483 4.1. Service and Connection Types 485 The optical network primarily offers optical paths that are 486 fixed bandwidth connections between two client network elements, such 487 as IP routers or ATM switches, established across the optical 488 network. A connection is also defined by its demarcation from the ingress 489 access point, across the optical network, to the egress access point of 490 the optical network. 492 The following connection capability topologies must be supported: 494 - Bi-directional point-to-point connection 496 - Uni-directional point-to-point connection 498 - Uni-directional point-to-multipoint connection 500 The point-to-point connections are the primary concern of the carriers. In this 501 case, the following three types of network 502 connections based on different connection set-up control methods 503 shall be supported: 505 - Permanent connection (PC): Established hop-by-hop directly on each 506 ONE along a specified path without relying on the network routing and 507 signaling capability. The connection has two fixed end-points and 508 a fixed cross-connect configuration along the path and will stay in 509 place until it is deleted. This is similar to the concept of 510 PVC in ATM. 512 - Switched connection (SC): Established through the UNI signaling 513 interface; the connection is dynamically established by the network 514 using the network routing and signaling functions. This is similar to 515 the concept of SVC in ATM. 517 - Soft permanent connection (SPC): Established by specifying two PCs 518 at the end-points and letting the network dynamically establish an SC 519 connection in between. This is similar to the SPVC concept in ATM. 521 The PC and SPC connections should be provisioned via the management 522 plane to control plane interface and the SC connection should be provisioned via 523 the signaled UNI interface. 525 Note that even though automated rapid optical connection provisioning 526 is required, the carriers expect the majority of provisioned 527 Y. Xue et al 529 circuits, at least in the short term, to have a long lifespan ranging 530 from months to years. 532 In terms of service provisioning, some carriers may choose to perform 533 testing prior to turning the service over to the customer.
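As an illustration of the three connection set-up control methods above, the following sketch represents them as a simple data structure; it is not normative, and the enumeration values and attributes are hypothetical.

   from dataclasses import dataclass
   from enum import Enum

   class SetupMethod(Enum):
       PC = "permanent"        # provisioned hop-by-hop via the management plane
       SPC = "soft-permanent"  # provisioned end-points, network-signaled middle
       SC = "switched"         # requested dynamically over the UNI

   class Directionality(Enum):
       UNI_P2P = "unidirectional point-to-point"
       BI_P2P = "bidirectional point-to-point"
       UNI_P2MP = "unidirectional point-to-multipoint"

   @dataclass
   class ConnectionRequest:
       ingress: str                 # ingress access point of the optical network
       egress: str                  # egress access point
       bandwidth: str               # e.g. "STS-48c" or "ODU2"
       setup: SetupMethod
       topology: Directionality = Directionality.BI_P2P

   # A switched connection (SC) requested over the UNI, analogous to an ATM SVC:
   req = ConnectionRequest(ingress="TNA-1", egress="TNA-2",
                           bandwidth="STS-48c", setup=SetupMethod.SC)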
535 4.2. Examples of Common Service Models 537 Each carrier may define its own service model based on its business 538 strategy and environment. The following are three example service 539 models that carriers may use. 541 4.2.1. Provisioned Bandwidth Service (PBS) 543 The PBS model provides enhanced leased/private line services 544 provisioned via a service management interface (MI) using either the PC or 545 SPC type of connection. The provisioning can be real-time or near 546 real-time. It has the following characteristics: 548 - Connection request goes through a well-defined management interface 550 - Client/Server relationship between clients and optical network. 552 - Clients have no optical network visibility and depend on network 553 intelligence or operator for optical connection setup. 555 4.2.2. Bandwidth on Demand Service (BDS) 557 The BDS model provides bandwidth-on-demand dynamic connection 558 services via a signaled user-network interface (UNI). The provisioning 559 is real-time and uses the SC type of optical connection. It has the 560 following characteristics: 562 - Signaled connection request via UNI directly from the user or its 563 proxy. 565 - Customer has no or limited network visibility depending upon the 566 control interconnection model used and network administrative policy. 568 - Relies on network or client intelligence for connection set-up 569 depending upon the control plane interconnection model used. 571 4.2.3. Optical Virtual Private Network (OVPN) 573 The OVPN model provides a virtual private network at the optical layer 574 between a specified set of user sites. It has the following 575 characteristics: 577 - Customers contract for a specific set of network resources such as 578 Y. Xue et al 580 optical connection ports, wavelengths, etc. 582 - The Closed User Group (CUG) concept is supported as in a normal VPN. 584 - Optical connections can be of PC, SPC or SC type depending upon the 585 provisioning method used. 587 - An OVPN site can request dynamic reconfiguration of the connections 588 between sites within the same CUG. 590 - A customer may have visibility and control of network resources up 591 to the extent allowed by the customer service contract. 593 At a minimum, the PBS, BDS and OVPN service models described above 594 shall be supported by the control functions. 596 5. Network Reference Model 598 This section discusses major architectural and functional components 599 of a generic carrier optical network, which will provide a reference 600 model for describing the requirements for the control and management 601 of carrier optical services. 603 5.1. Optical Networks and Subnetworks 605 As mentioned before, there are two main types of optical networks 606 that are currently under consideration: the SDH/SONET network as defined 607 in ITU Rec. G.803, and the OTN as defined in ITU Rec. G.872. 609 In the current SONET/SDH-based optical network, digital cross-connects (DXC), 610 add-drop multiplexers (ADM) and line multiplexer terminals (LMT) are connected in 611 a ring or linear topology. Similarly, we assume an OTN is composed of a set of 612 optical cross-connects (OXC) and optical add-drop multiplexers (OADM) which are 613 interconnected in a general mesh topology using DWDM optical line systems (OLS). 615 It is often convenient for easy discussion and description to treat 616 an optical network as a subnetwork cloud, in which the details of 617 the network become less important; instead, the focus is on the functions 618 and the interfaces the optical network provides. In general, a 619 subnetwork can be defined as a set of access points on the network 620 boundary and a set of point-to-point optical connections between 621 those access points. 623 5.2.
Network Interfaces 624 A generic carrier network reference model describes a multi-carrier 625 network environment. Each individual carrier network can be further 626 partitioned into domains or sub-networks based on administrative, 627 technological or architectural reasons. The demarcation between 628 (sub)networks can be either logical or physical and consists of a 629 set of reference points identifiable in the optical network. From the 630 control plane perspective, these reference points define a set of 631 Y. Xue et al 633 control interfaces in terms of optical control and management 634 functionality. The figure 1 is an illustrative diagram for this. 636 +---------------------------------------+ 637 | single carrier network | 638 +--------------+ | | 639 | | | +------------+ +------------+ | 640 |IP | | | | | | | 641 |Network +--UNI | + Optical +--UNI--+ Carrier IP | | 642 | | | | Subnetwork | | network | | 643 +--------------+ | | (Domain A) +--+ | | | 644 | +------+-----+ | +------+-----+ | 645 | | | | | 646 | I-NNI E-NNI UNI | 647 +--------------+ | | | | | 648 | | | +------+-----+ | +------+-----+ | 649 |IP +--UNI | + | +----+ | | 650 |Network | | | Optical | | Optical | | 651 | | | | Subnetwork +-E-NNI-+ Subnetwork | | 652 +--------------+ | | (Domain A) | | (Domain B) | | 653 | +------+-----+ +------+-----+ | 654 | | | | 655 +---------------------------------------+ 656 UNI E-NNI 657 | | 658 +------+-------+ +-------+--------+ 659 | | | | 660 | Other Client | | Other Carrier | 661 |Network | | Network | 662 | (ATM/SONET) | | | 663 +--------------+ +----------------+ 665 Figure 1 Generic Carrier Network Reference Model 667 A network can be partitioned into control domains that match the administrative 668 domains and is controlled under a single administrative policy. The control 669 domains can be recursively divided into sub-domains to form control hierarchy 670 for scalability. The control domain concept can be applied to routing, signaling 671 and protection & restoration to form an autonomous control function domain. 673 The network interfaces encompass two aspects of the networking 674 functions: user data plane interface and control plane interface. The 675 former concerns about user data transmission across the physical 676 network interface and the latter concerns about the control message 677 exchange across the network interface such as signaling, routing, 678 etc. We call the former physical interface (PI) and the latter 679 control interface. Unless otherwise stated, the control 680 interface is assumed in the remaining of this document. 682 5.2.1. Control Plane Interfaces 683 Y. Xue et al 685 Control interface defines a relationship between two connected 686 network entities on both sides of the interface. For each control 687 interface, we need to define the architectural function that each side 688 plays and a controlled set of information that can be exchanged 689 across the interface. The information flowing over this logical 690 interface may include, but not limited to: 692 - Endpoint name and address 694 - Reachability/summarized network address information 696 - Topology/routing information 698 - Authentication and connection admission control information 700 - Connection management signaling messages 702 - Network resource control information 704 Different types of the interfaces can be defined for the network 705 control and architectural purposes and can be used as the network 706 reference points in the control plane. 
In this document, the 707 following set of interfaces is defined as shown in Figure 1. The 708 User-Network Interface (UNI) is a bi-directional control interface 709 between service requester and service provider control entities. The 710 service requester control entity resides outside the carrier network 711 control domain. 713 The Network-Network/Node-Node Interface (NNI) is a bi-directional signaling 714 interface between two optical network elements or sub-networks. 716 We differentiate between internal NNI (I-NNI) and external NNI (E-NNI) as 717 follows: 719 - E-NNI: A NNI interface between two control plane entities belonging 720 to different control domains. 722 - I-NNI: A NNI interface between two control plane entities within 723 the same control domain in the carrier network. 725 It should be noted that it is quite common to use E-NNI between two 726 sub-networks within the same carrier network if they belong to 727 different control domains. Different types of interface, internal vs. 728 external, have different implied trust relationships for security and 729 access control purposes. The trust relationship is not binary; instead, a 730 policy-based control mechanism needs to be in place to restrict the 731 type and amount of information that can flow across each type of 732 interface depending on the carrier's service and business requirements. 733 Generally, two networks have a fully trusted relationship if they belong to 734 the same administrative domain; in this case, the control information exchange 735 across the control interface between them should be unlimited. Otherwise, the 736 Y. Xue et al 738 type and amount of the control information that can go across the interface 739 should be constrained by the administrative policy. 741 An example of a fully trusted interface is an I-NNI between two optical 742 network elements in a single control domain. Non-trusted interface 743 examples include an E-NNI between two different carriers or a UNI 744 interface between a carrier optical network and its customers. The trust level 745 can be different for the non-trusted UNI or E-NNI interface depending upon whether 746 it is within the carrier or not. In general, an intra-carrier E-NNI has a higher trust 747 level than an inter-carrier E-NNI; similarly a UNI internal to the carrier (private 748 UNI) has a higher trust level than a UNI external to the carrier (public UNI). 750 The control plane shall support the UNI and NNI interfaces described 751 above and the interfaces shall be configurable in terms of the type 752 and amount of control information exchange, and their behavior shall 753 be consistent with the configuration (i.e., external versus internal 754 interfaces).
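To make the interface-dependent restriction of control information concrete, the sketch below filters the kinds of information listed earlier according to the interface type. The specific policy table is purely illustrative and not a recommended or normative policy; real policies are carrier-specific and configurable.

   # Illustrative only: which control information a policy might allow
   # across each interface type.
   ALLOWED_INFO = {
       "I-NNI":        {"endpoint-names", "reachability", "topology",
                        "signaling", "resource-control"},
       "E-NNI-intra":  {"endpoint-names", "reachability",
                        "summarized-topology", "signaling"},
       "E-NNI-inter":  {"endpoint-names", "reachability", "signaling"},
       "UNI-private":  {"endpoint-names", "summarized-topology", "signaling"},
       "UNI-public":   {"endpoint-names", "signaling"},
   }

   def may_send(interface_type: str, info_kind: str) -> bool:
       """Return True if the given information kind may cross the interface."""
       return info_kind in ALLOWED_INFO.get(interface_type, set())

   assert may_send("I-NNI", "topology")            # full trust inside a domain
   assert not may_send("UNI-public", "topology")   # no topology across a public UNI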
756 5.3. Intra-Carrier Network Model 758 The intra-carrier network model concerns the network service control and 759 management issues within networks owned by a single carrier. 761 5.3.1. Multiple Sub-networks 763 Without loss of generality, the optical network owned by a carrier 764 service operator can be depicted as consisting of one or more optical 765 sub-networks interconnected by direct optical links. There may be 766 many different reasons for more than one optical sub-network. It may 767 be the result of using hierarchical layering, different technologies 768 across access, metro and long haul (as discussed below), or a result 769 of business mergers and acquisitions or incremental optical network 770 technology deployment by the carrier using different vendors or 771 technologies. 773 A sub-network may be a single vendor and single technology network. 774 But in general, the carrier's optical network is heterogeneous in 775 terms of equipment vendor and the technology utilized in each sub- 776 network. 778 5.3.2. Access, Metro and Long-haul networks 780 Few carriers have end-to-end ownership of the optical networks. Even 781 if they do, access, metro and long-haul networks often belong to 782 different administrative divisions as separate optical sub-networks. 783 Therefore, inter-(sub)network interconnection is essential in terms 784 of supporting the end-to-end optical service provisioning and 785 management. The access, metro and long-haul networks may use 786 different technologies and architectures, and as such may have 787 different network properties. 789 Y. Xue et al 791 In general, end-to-end optical connectivity may easily cross multiple 792 sub-networks with the following possible scenarios: 793 Access -- Metro -- Access 794 Access -- Metro -- Long Haul -- Metro -- Access 796 5.4. Inter-Carrier Network Model 798 The inter-carrier model focuses on the service and control aspects 799 between different carrier networks and describes the internetworking 800 relationship between them. 802 Inter-carrier interconnection provides for connectivity between 803 optical network operators. To provide global-reach end-to-end 804 optical services, optical service control and management between 805 different carrier networks becomes essential. It is possible to 806 support distributed peering within the IP client layer network where 807 the connectivity between two distant IP routers can be achieved via 808 an optical transport network. 810 5.5. Implied Control Constraints 812 The intra-carrier and inter-carrier models have different implied control 813 constraints. For example, in the intra-carrier model, the addresses for routing 814 and signaling only need to be unique within the carrier, while the inter-carrier 815 model requires the addresses to be globally unique. 817 In the intra-carrier network model, the network itself forms the largest control 818 domain within the carrier network. This domain is usually partitioned into 819 multiple sub-domains, either flat or in a hierarchy. The UNI and E-NNI interfaces 820 are internal to the carrier network, therefore a higher trust level is assumed. 821 Because of this, direct signaling between domains and summarized topology and 822 resource information exchange can be allowed across the private UNI or intra- 823 carrier E-NNI interfaces. 825 In the inter-carrier network model, each carrier's optical network is 826 a separate administrative domain. Both the UNI interface between the 827 user and the carrier network and the NNI interface between two 828 carriers' networks cross the carrier's administrative boundary 829 and therefore are by definition external interfaces. 831 In terms of control information exchange, the topology information 832 shall not be allowed to cross either the E-NNI or the UNI interfaces. 834 6. Optical Service User Requirements 836 This section describes the user requirements for optical services, 837 which in turn impose the requirements on service control and 838 management for the network operators. The user requirements reflect 839 the perception of the optical service from a user's point of view. 841 Y. Xue et al 843 6.1. Common Optical Services 845 The basic unit of an optical transport service is fixed-bandwidth 846 optical connectivity between parties.
However, different services are 847 created based on the supported signal characteristics (format, bit 848 rate, etc.), the service invocation methods and possibly the 849 associated Service Level Agreement (SLA) provided by the service 850 provider. 852 At present, the following are the major optical services provided in 853 the industry: 855 - SONET/SDH, with different degrees of transparency 857 - Optical wavelength services, transparent or opaque 859 - Ethernet at 10 Mbps, 100 Mbps, 1 Gbps and 10 Gbps 861 - Storage Area Networks (SANs) based on FICON, ESCON and Fiber 862 Channel 864 Optical Wavelength Service refers to transport services where signal 865 framing is negotiated between the client and the network operator 866 (framing and bit-rate dependent), and only the payload is carried 867 transparently. SONET/SDH transport is most widely used for network- 868 wide transport. Different levels of transparency can be achieved in 869 the SONET/SDH transmission. 871 Ethernet Services, specifically 1 Gb/s and 10 Gb/s Ethernet services, 872 are gaining more popularity due to the lower costs of the customers' 873 premises equipment and their simplified management requirements 874 (compared to SONET or SDH). 876 Ethernet services may be carried over either SONET/SDH (GFP mapping) 877 or WDM networks. The Ethernet service requests will require some 878 service specific parameters: priority class, VLAN Id/Tag, traffic 879 aggregation parameters. 881 Storage Area Network (SAN) Services: ESCON and FICON are proprietary 882 versions of the service, while Fiber Channel is the standard 883 alternative. As is the case with Ethernet services, SAN services may 884 be carried over either SONET/SDH (using GFP mapping) or WDM networks. 886 The control plane shall provide the carrier with the capability 887 to provision, control and manage all the services 888 listed above. 890 6.2. Bearer Interface Types 892 All the bearer interfaces implemented in the ONE shall be supported 893 by the control plane and associated signaling protocols. 895 Y. Xue et al 897 The following interface types shall be supported by the signaling 898 protocol: 899 - SDH/SONET 900 - 1 Gb Ethernet, 10 Gb Ethernet (WAN mode) 901 - 10 M/100 M/1 G/10 Gb (LAN mode) Ethernet 902 - FC-N (N = 12, 50, 100, or 200) for Fiber Channel services 903 - OTN (G.709) 904 - PDH 905 - APON, E-PON 906 - ESCON and FICON 908 6.3. Optical Service Invocation 909 As mentioned earlier, the methods of service invocation play an 910 important role in defining different services. 911 6.3.1. Provider-Controlled Service Provisioning 913 In this scenario, users forward their service request to the provider 914 via a well-defined service management interface. All connection 915 management operations, including set-up, release, query, or 916 modification, shall be invoked from the management plane. 918 6.3.2. User-Initiated Service Provisioning 920 In this scenario, users forward their service request to the provider 921 via a well-defined UNI interface in the control plane (including 922 proxy signaling). All connection management operation requests, 923 including set-up, release, query, or modification, shall be invoked 924 from directly connected user devices, or their signaling representative 925 (such as a signaling proxy). 927 6.3.3.
Call set-up requirements 928 In summary, the following requirements for the control plane have been 929 identified: 930 - The control plane shall support action result codes as responses to 931 any requests over the control interfaces. 933 - The control plane shall support requests for call set-up, subject 934 to policies in effect between the user and the network. 936 - The control plane shall support the destination client device's 937 decision to accept or reject call set-up requests from the source 938 client's device. 940 - The control plane shall support requests for call set-up and 941 deletion across multiple (sub)networks. 943 - NNI signaling shall support requests for call set-up, subject to 944 policies in effect between the (sub)networks. 946 - Call set-up shall be supported for both uni-directional and bi- 947 Y. Xue et al 949 directional connections. 951 - Upon call request initiation, the control plane shall generate a 952 network unique Call-ID associated with the connection, to be used for 953 information retrieval or other activities related to that connection. 955 - CAC shall be provided as part of the call control functionality. It 956 is the role of the CAC function to determine if the call can be 957 allowed to proceed based on resource availability and authentication. 959 - Negotiation for call set-up for multiple service level options 960 shall be supported. 962 - The policy management system must determine what kinds of call setup 963 requests can be authorized. 965 - The control plane elements need the ability to rate limit (or pace) 966 call setup attempts into the network. 968 - The control plane shall report to the management plane the 969 success or failure of a call request. 971 - Upon a connection request failure, the control plane shall report 972 to the management plane a cause code identifying the reason for the 973 failure and all allocated resources shall be released. A negative 974 acknowledgment shall be returned to the source. 976 - Upon a connection request success, a positive acknowledgment shall 977 be returned to the source when the connection has been successfully 978 established, and the management plane shall be notified. 980 - The control plane shall support requests for call release by Call- 981 ID. 983 - The control plane shall allow any end point or any intermediate 984 node to initiate call release procedures. 986 - Upon call release completion, all resources associated with the call 987 shall become available for new requests. 989 - The management plane shall be able to release calls or connections 990 established by the control plane both gracefully and forcibly on 991 demand. 993 - Partially deleted calls or connections shall not remain within the 994 network. 996 - End-to-end acknowledgments shall be used for connection deletion 997 requests. 999 - Connection deletion shall not result in either restoration or 1000 Y. Xue et al 1002 protection being initiated. 1004 - The control plane shall support management plane and neighboring 1005 device requests for status query. 1007 - The UNI shall support initial registration and updates of the UNI-C 1008 with the network via the control plane.
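A minimal sketch of how a control plane element might honor several of the requirements above (result codes, network-unique Call-IDs, CAC on set-up, and management plane notification) is shown below. It is only illustrative; the function names and the simple resource check are hypothetical.

   import itertools

   _next_call_id = itertools.count(1000)

   def admit(bandwidth, available_capacity, authenticated):
       # CAC: admit the call only if resources are available and the
       # requester is authenticated.
       return authenticated and bandwidth <= available_capacity

   def handle_call_setup(request, network, mgmt_log):
       """Return (result_code, call_id); notify the management plane."""
       if not admit(request["bw"], network["free_bw"], request["auth_ok"]):
           mgmt_log.append(("CALL_SETUP_FAILED", request["src"], "CAC_REJECT"))
           return "REJECTED", None            # negative acknowledgment
       call_id = next(_next_call_id)          # network-unique Call-ID
       network["free_bw"] -= request["bw"]
       mgmt_log.append(("CALL_SETUP_OK", call_id))
       return "ACCEPTED", call_id             # positive acknowledgment

   mgmt_log = []
   network = {"free_bw": 10}
   code, cid = handle_call_setup(
       {"src": "UNI-C-1", "dst": "UNI-C-2", "bw": 4, "auth_ok": True},
       network, mgmt_log)
   assert code == "ACCEPTED" and cid is not None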
1010 6.4. Optical Connection Granularity 1012 The service granularity is determined by the specific technology, 1013 framing and bit rate of the physical interface between the ONE and 1014 the client at the edge and by the capabilities of the ONE. The 1015 control plane needs to support signaling and routing for all the 1016 services supported by the ONE. In general, there should not be a one- 1017 to-one correspondence imposed between the granularity of the service 1018 provided and the maximum capacity of the interface to the user. 1020 The control plane shall support the ITU Rec. G.709 connection 1021 granularity for the OTN network. 1023 The control plane shall support the SDH/SONET connection granularity. 1025 Sub-rate interfaces shall be supported by the optical control plane, 1026 such as VT/TU granularity (as low as 1.5 Mb/s). 1028 The following Fiber Channel interfaces shall be supported by the 1029 control plane if the given interfaces are available on the equipment: 1031 - FC-12 1032 - FC-50 1033 - FC-100 1034 - FC-200 1036 Encoding of service types in the protocols used shall be such that 1037 new service types can be added by adding new code point values or 1038 objects. 1040 6.5. Other Service Parameters and Requirements 1042 6.5.1. Classes of Service 1044 We use "service level" to describe priority-related characteristics 1045 of connections, such as holding priority, set-up priority, or 1046 restoration priority. The intent currently is to allow each carrier 1047 to define the actual service level in terms of priority, protection, 1048 and restoration options. Therefore, individual carriers will 1049 determine the mapping of individual service levels to a specific set of 1050 quality features. 1052 The control plane shall be capable of mapping individual service 1053 Y. Xue et al 1055 classes into specific priority or protection and restoration options. 1057 6.5.2. Diverse Routing Attributes 1059 Diversity refers to the fact that a disjoint set of network resources (links 1060 and nodes) is utilized to provision multiple parallel optical connections 1061 terminated between a pair of ingress and egress ports. There are different 1062 levels of diversity based on link, node or administrative policy as described 1063 below. In the simple node and link diversity case: 1064 - Two optical connections are said to be node-disjoint diverse if the two 1065 connections do not share any node along the path except the ingress and 1066 egress nodes. 1067 - Two optical connections are said to be link-disjoint diverse if the two 1068 connections do not share any link along the path. 1070 A more general concept of diversity is the Shared Risk Group (SRG), which is based 1071 on a risk-sharing model and allows the definition of administrative policy-based 1072 diversity. An SRG is defined as a group of links or nodes that share a common 1073 risk component, whose failure can potentially cause the failure of all the links 1074 or nodes in the group. When the SRG concept is applied to the link resource, it is 1075 referred to as a shared risk link group (SRLG). For example, all fiber links that 1076 go through a common conduit under the ground belong to the same SRLG, 1077 because the conduit is a shared risk component whose failure, such as a cut, may 1078 cause all fibers in the conduit to break. Note that an SRLG is a relation defined 1079 within a group of links based upon a specific risk factor that can be defined 1080 based on various technical or administrative grounds such as "sharing a 1081 conduit", "within 10 miles of distance proximity", etc. Please see ITU-T G.7715 1082 for more discussion [itu-rtg]. 1084 Therefore, two optical connections are said to be SRG-disjoint diverse if the 1085 two connections do not have any links or nodes that belong to the same SRG along 1086 the path.
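The following sketch checks whether two candidate paths are link-, node- and SRG-disjoint in the sense defined above. The topology, the SRLG assignments and the helper names are hypothetical and only illustrate the definitions.

   def node_disjoint(path_a, path_b):
       # Paths are node-disjoint if they share no node except the
       # ingress and egress nodes.
       interior_a = set(path_a[1:-1])
       interior_b = set(path_b[1:-1])
       return not (interior_a & interior_b)

   def links(path):
       return {tuple(sorted(hop)) for hop in zip(path, path[1:])}

   def link_disjoint(path_a, path_b):
       return not (links(path_a) & links(path_b))

   def srg_disjoint(path_a, path_b, srlg_of_link):
       # srlg_of_link maps a link to the set of SRLG identifiers it
       # belongs to (e.g. a shared conduit).
       srgs_a = set().union(*(srlg_of_link.get(l, set()) for l in links(path_a)))
       srgs_b = set().union(*(srlg_of_link.get(l, set()) for l in links(path_b)))
       return not (srgs_a & srgs_b)

   # Two parallel connections between the same ingress/egress pair:
   p1 = ["A", "B", "C", "Z"]
   p2 = ["A", "D", "E", "Z"]
   srlg = {("A", "B"): {"conduit-7"}, ("A", "D"): {"conduit-7"}}  # shared conduit
   assert node_disjoint(p1, p2) and link_disjoint(p1, p2)
   assert not srg_disjoint(p1, p2, srlg)    # they share SRLG "conduit-7"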
1088 The ability to route service paths diversely is a required control 1089 feature. Diverse routing is one of the connection parameters and is 1090 specified at the time of connection creation. 1092 The control plane routing algorithms shall be able to route a single 1093 demand diversely from N previously routed demands in terms of link- 1094 disjoint path, node-disjoint path and SRLG-disjoint path. 1096 7. Optical Service Provider Requirements 1098 This section discusses specific service control and management 1099 requirements from the service provider's point of view. 1101 7.1. Service Access Methods to Optical Networks 1103 In order to have access to the optical network service, a customer needs to be 1104 physically connected to the service provider network on the transport plane. The 1105 control plane connection may or may not be required depending upon the service 1106 Y. Xue et al 1108 invocation model provided to the customer: provisioned vs. signaled. For the 1109 signaled case, either direct or indirect signaling methods can be used depending upon 1110 whether a UNI proxy is utilized on the client side. The detailed discussion on the 1111 UNI signaling methods is in [oif-uni]. 1113 The multiple access methods below shall be supported: 1115 - Cross-office access (CNE co-located with ONE) 1117 - Direct remote access (dedicated links to the user) 1119 - Remote access via access sub-network (via a 1120 multiplexing/distribution sub-network) 1122 7.2. Dual Homing and Network Interconnections 1124 Dual homing is a special case of the access network. Client devices 1125 can be dual homed to the same or different hubs, the same or different 1126 access networks, the same or different core networks, or the same or 1127 different carriers. The different levels of dual homing connectivity 1128 result in many different combinations of configurations. The main 1129 objective of dual homing is enhanced survivability. 1131 Dual homing must be supported. Dual homing shall not require the use 1132 of multiple addresses for the same client device. 1134 7.3. Inter-domain connectivity 1136 A domain is a portion of a network, or an entire network, that is 1137 controlled by a single control plane entity. This section discusses 1138 the various requirements for connecting domains. 1140 7.3.1. Multi-Level Hierarchy 1142 Traditionally, transport networks are divided into core inter- 1143 city long haul networks, regional intra-city metro networks and 1144 access networks. Due to the differences in transmission technologies, 1145 services, and multiplexing needs, the three types of networks are 1146 served by different types of network elements and often have 1147 different capabilities. The network hierarchy is usually implemented through 1148 the control domain hierarchy. 1150 When control domains exist for routing and signaling purposes, there will be 1151 intra-domain routing/signaling and inter-domain routing/signaling. In general, 1152 domain-based routing/signaling autonomy is desired, and the intra-domain 1153 routing/signaling and the inter-domain routing/signaling should be agnostic to 1154 each other. 1156 Routing and signaling for multi-level hierarchies shall be supported 1157 to allow carriers to configure their networks as needed. 1159 Y. Xue et al 1161 7.3.2. Network Interconnections 1163 Subnetworks may have multiple points of inter-connection. All 1164 relevant NNI functions, such as routing, reachability information 1165 exchanges, and inter-connection topology discovery, must recognize and 1166 support multiple points of inter-connection between subnetworks. 1167 Dual inter-connection is often used as a survivable architecture. 1169 The control plane shall provide support for routing and signaling for 1170 subnetworks having multiple points of interconnection. 1172 7.4. Names and Address Management 1174 7.4.1. Address Space Separation 1176 To ensure the scalability of, and smooth migration toward, the 1177 switched optical network, the separation of three address spaces is 1178 required as discussed in [oif-addr]: 1180 - Internal transport network addresses: These are used for routing 1181 control plane messages within the transport network. For example, 1182 if GMPLS is used, then IP addresses should be used. 1184 - Transport Network Assigned (TNA) address: This is a routable 1185 address in the optical transport network and is assigned by the 1186 network. 1188 - Client addresses: These addresses have significance in the client layer. 1189 For example, if the clients are ATM switches, NSAP addresses can be used. 1190 If the clients are IP routers, then IP addresses should be used. 1192 7.4.2. Directory Services 1194 Directory Services shall support address resolution and translation 1195 between various user/client device names or addresses and the corresponding TNA 1196 addresses. The UNI shall use the user naming schemes for connection requests. The 1197 directory service is essential for the implementation of the overlay model. 1199 7.4.3. Network Element Identification 1201 Each control domain and each network element within a carrier network shall be 1202 uniquely identifiable. Similarly, all the service access points shall be uniquely 1203 identifiable. 1205 7.5. Policy-Based Service Management Framework 1207 The IPO service must be supported by a robust policy-based management 1208 system to be able to make important decisions. 1210 Examples of policy decisions include: 1212 Y. Xue et al 1214 - What types of connections can be set up for a given UNI? 1216 - What information can be shared and what information must be 1217 restricted in automatic discovery functions? 1219 - What are the security policies over signaling interfaces? 1221 - What routing policies should be applied in the path selection, e.g., 1222 the definition of link diversity? 1224 Requirements: 1226 - Service and network policies related to configuration and 1227 provisioning, admission control, and support of Service Level 1228 Agreements (SLAs) must be flexible, and at the same time simple and 1229 scalable. 1231 - The policy-based management framework must be based on standards- 1232 based policy systems (e.g., IETF COPS [rfc2784]). 1234 - In addition, the IPO service management system must support and be 1235 backwards compatible with legacy service management systems.
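As a purely illustrative sketch (not a normative framework), a policy decision of the first kind listed above, namely what types of connections can be set up for a given UNI, might be evaluated as follows. The rule structure and all values are hypothetical.

   # Hypothetical per-UNI policy rules; a real framework would be driven by
   # a standards-based policy system and the customer's service contract.
   UNI_POLICY = {
       "uni-customer-42": {
           "allowed_setup":   {"SC"},                  # switched connections only
           "allowed_service": {"SDH/SONET", "1GbE"},
           "max_bandwidth_gbps": 2.5,
       },
   }

   def authorize_setup(uni_id, setup_type, service, bandwidth_gbps):
       """Return True if this UNI may request the given connection."""
       rule = UNI_POLICY.get(uni_id)
       if rule is None:
           return False                                # unknown UNI: reject
       return (setup_type in rule["allowed_setup"]
               and service in rule["allowed_service"]
               and bandwidth_gbps <= rule["max_bandwidth_gbps"])

   assert authorize_setup("uni-customer-42", "SC", "1GbE", 1.0)
   assert not authorize_setup("uni-customer-42", "SPC", "1GbE", 1.0)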
1237 8. Control Plane Functional Requirements for Optical Services 1239 This section addresses the requirements for the optical control plane 1240 in support of service provisioning. 1242 The scope of the control plane includes the control of the interfaces 1243 and network resources within an optical network and the interfaces 1244 between the optical network and its client networks. In other words, 1245 it should include both NNI and UNI aspects. 1247 8.1.
8.1. Control Plane Capabilities and Functions

The control capabilities are supported by the underlying control functions and protocols built into the control plane.

8.1.1. Network Control Capabilities

The following capabilities are required in the network control plane to successfully deliver automated provisioning for optical services:

- Network resource discovery

- Address assignment and resolution

- Routing information propagation and dissemination

- Path calculation and selection

- Connection management

These capabilities may be supported by a combination of functions across the control and the management planes.

8.1.2. Control Plane Functions for Network Control

The following are essential functions needed to support network control capabilities:

- Signaling
- Routing
- Automatic resource, service and neighbor discovery

Specific requirements for signaling, routing and discovery are addressed in Section 9.

The general requirements for the control plane functions to support optical networking and service functions include:

- The control plane must have the capability to establish, tear down and maintain end-to-end connections, and the hop-by-hop connection segments between any two end-points.

- The control plane must have the capability to support optical traffic-engineering requirements (e.g., wavelength management), including resource discovery and dissemination, constraint-based routing and path computation.

- The control plane shall support network status or action result code responses to any requests over the control interfaces.

- The control plane shall support call admission control on the UNI and connection admission control on the NNI.

- The control plane shall support graceful release of the network resources associated with a connection after a successful connection teardown or a failed connection attempt.

- The control plane shall support management plane requests for connection attribute/status queries.

- The control plane must have the capability to support various protection and restoration schemes.

- Control plane failures shall not affect active connections and shall not adversely impact the transport and data planes.

- The control plane should support separation of control function entities, including routing, signaling and discovery, and should allow different control distributions of those functionalities, including centralized, distributed or hybrid.

- The control plane should support physical separation of the control plane from the transport plane, to support either tightly coupled or loosely coupled control plane solutions.

- The control plane should support routing and signaling proxies that participate in the normal routing and signaling message exchange and processing.

- Security and resilience are crucial issues for the control plane and are addressed in Sections 10 and 11 of this document.

8.2. Signaling Communication Network (SCN)

The signaling communication network is a transport network for control plane messages and consists of a set of control channels that interconnect the nodes within the control plane. Therefore, the signaling communication network must be accessible by each of the communicating nodes (e.g., OXCs).
If an out-of-band IP-based control message transport network is an overlay network built on top of the IP data network using tunneling technologies, these tunnels must be standards-based, such as IPSec or GRE.

- The signaling communication network must terminate at each of the nodes in the transport plane.

- The signaling communication network shall not be assumed to have the same topology as the data plane, nor shall the data plane and control plane traffic be assumed to be congruently routed.

A control channel is the communication path for transporting control messages between network nodes, and over the UNI (i.e., between the UNI entity on the user side (UNI-C) and the UNI entity on the network side (UNI-N)). The control messages include signaling messages, routing information messages, and other control maintenance protocol messages such as neighbor and service discovery.

The following three types of signaling in the control channel shall be supported:

- In-band signaling: The signaling messages are carried over a logical communication channel embedded in the data-carrying optical link or channel. For example, using the overhead bytes in SONET data framing as a logical communication channel falls into the in-band signaling methods.

- In-fiber, out-of-band signaling: The signaling messages are carried over a dedicated communication channel separate from the optical data-bearing channels, but within the same fiber. For example, a dedicated wavelength or TDM channel may be used within the same fiber as the data channels.

- Out-of-fiber signaling: The signaling messages are carried over a dedicated communication channel or path within different fibers from those used by the optical data-bearing channels. For example, dedicated optical fiber links or a communication path via a separate and independent IP-based network infrastructure are both classified as out-of-fiber signaling.

The UNI control channel and proxy signaling defined in the OIF UNI 1.0 [oif-uni] shall be supported.

The signaling communication network provides communication mechanisms between entities in the control plane.

- The signaling communication network shall support reliable message transfer.

- The signaling communication network shall have its own OAM mechanisms.

- The signaling communication network shall use protocols that support congestion control mechanisms.

In addition, the signaling communication network should support message priorities. Message prioritization allows time-critical messages, such as those used for restoration, to have priority over other messages, such as other connection signaling messages and topology and resource discovery messages.

The signaling communication network shall be highly reliable and implement failure recovery.

8.3. Control Plane Interface to Data Plane

In the situation where the control plane and data plane are decoupled, this interface needs to be standardized. Requirements for a standard control-data plane interface are under study. The specification of a control plane interface to the data plane is outside the scope of this document.

The control plane should support a standards-based interface to configure and control switching fabrics and port functions.
The data plane shall monitor and detect signal failures (LOL, LOS, etc.) and signal quality degradation (high BER, etc.) and be able to provide signal-failure and signal-degrade alarms to the control plane accordingly, to trigger proper mitigation actions in the control plane.

8.4. Management Plane Interface to Data Plane

The management plane shall be responsible for network resource management in the data plane. It should be able to partition the network resources and control the allocation and deallocation of the resources for use by the control plane.

The data plane shall monitor and detect signal failures and signal quality degradation and be able to provide signal-failure and signal-degrade alarms, plus associated detailed fault information, to the management plane to trigger and enable fault location and repair by the management plane.

Management plane failures shall not affect the normal operation of a configured and operational control plane or data plane.

8.5. Control Plane Interface to Management Plane

The control plane is considered a managed entity within a network. Therefore, it is subject to management requirements just as other managed entities in the network are subject to such requirements.

The control plane should be able to service requests from the management plane for end-to-end connection provisioning (e.g., SPC connections) and for control plane database information queries (e.g., topology database).

The control plane shall report all control plane faults to the management plane with detailed fault information.

The control, management and transport planes each have their own well-defined network functions. Those functions are orthogonal to each other. However, this does not imply total independence: since the management plane is responsible for the management of both the control plane and the transport plane, the management plane plays an authoritative role.

In general, the management plane shall have authority over the control plane. The management plane should be able to configure the routing, signaling and discovery control parameters, such as hold-down timers, hello intervals, etc., to affect the behavior of the control plane.

In the case of a network failure, both the management plane and the control plane need fault information at the same priority. The control plane shall be responsible for providing necessary statistical data, such as call counts and traffic counts, to the management plane. These should be available upon query from the management plane. The management plane shall be able to tear down connections established by the control plane both gracefully and forcibly on demand.

8.6. IP and Optical Control Plane Interconnection

The control plane interconnection model defines how two control networks can be interconnected, in terms of the controlling relationship and the control information flow allowed between them. There are three basic types of control plane network interconnection models: overlay, peer and hybrid, which are defined in the IETF IPO WG framework document [ipo-frw]. See Appendix A for more discussion.
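As a purely illustrative example (not a normative requirement), the sketch below shows one way a carrier might represent the per-interface information-sharing policy implied by these interconnection models and by the information sharing requirements of Section 9.1. The interface labels and information categories are assumptions made for this example only; real policies are carrier specific and must remain configurable.

   # Python sketch: a hypothetical per-interface information-sharing
   # policy table. The entries are illustrative assumptions; the only
   # facts taken from this document are that topology must not cross
   # the UNI or the inter-carrier E-NNI, and that sharing is fullest
   # over the I-NNI (peer-style interconnection).
   SHARING_POLICY = {
       "UNI":                 {"reachability"},
       "inter-carrier E-NNI": {"reachability"},
       "intra-carrier E-NNI": {"reachability", "abstracted-topology"},
       "I-NNI":               {"reachability", "topology", "resource-state"},
   }

   def may_advertise(interface, category):
       # True if this example policy allows the given information
       # category to be advertised across the given interface type.
       return category in SHARING_POLICY.get(interface, set())

   assert not may_advertise("UNI", "topology")   # overlay: no topology leak
   assert may_advertise("I-NNI", "topology")     # peer: topology is shared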
Choosing the level of coupling depends upon a number of different factors, some of which are:

- Variety of clients using the optical network

- Relationship between the client and optical network

- Operating model of the carrier

The overlay model (UNI-like model) shall be supported for client to optical control plane interconnection.

Other models are optional for client to optical control plane interconnection.

For optical to optical control plane interconnection, all three models shall be supported. In general, the priority for support of interconnection models should be overlay, hybrid and peer, in decreasing order.

9. Requirements for Signaling, Routing and Discovery

9.1. Requirements for Information Sharing over UNI, I-NNI and E-NNI

Different types of interfaces impose different requirements and functionality due to their different trust relationships. Specifically:

- Topology information shall not be exchanged across the inter-carrier E-NNI or the UNI.

- The control plane shall allow the carrier to configure the type and extent of control information exchanged across the various interfaces.

- Address resolution exchange over the UNI is needed if an addressing directory service is not available.

9.2. Signaling Functions

Call and connection control and management signaling messages are used for the establishment, modification, status query and release of an end-to-end optical connection. Unless otherwise specified, the word "signaling" refers to both inter-domain and intra-domain signaling.

- The inter-domain signaling protocol shall be agnostic to the intra-domain signaling protocol for all the domains within the network.

- Signaling shall support both strict and loose routing.

- Signaling shall support individual as well as groups of connection requests.

- Signaling shall support fault notifications.

- Inter-domain signaling shall support per-connection, globally unique identifiers for all connection management primitives, based on a well-defined naming scheme.

- Inter-domain signaling shall support crank-back and rerouting.

9.3. Routing Functions

Routing includes reachability information propagation, network topology/resource information dissemination and path computation. Network topology/resource information dissemination provides each node in the network with information about the carrier network such that a single node is able to support constraint-based path selection. A mixture of hop-by-hop routing, explicit/source routing and hierarchical routing will likely be used within future transport networks.

All three mechanisms (hop-by-hop routing, explicit/source-based routing and hierarchical routing) must be supported. Messages crossing untrusted boundaries must not contain information regarding the details of an internal network topology.

Requirements for routing information dissemination:

- The inter-domain routing protocol shall be agnostic to the intra-domain routing protocol within any of the domains within the network.
- The exchange of the following types of information shall be supported by inter-domain routing protocols:

  - Inter-domain topology
  - Per-domain topology abstraction
  - Per-domain reachability summarization

The major concerns for routing protocol performance are scalability and stability, which impose the following requirement on the routing protocols:

- The routing protocol shall scale with the size of the network.

The routing protocols shall support the following requirements:

- The routing protocol shall support hierarchical routing information dissemination, including topology information aggregation and summarization.

- The routing protocol(s) shall minimize global information and keep information locally significant as much as possible (e.g., information local to a node, a sub-network, a domain, etc.). For example, a single optical node may have thousands of ports; ports with common characteristics need not be advertised individually. Over external interfaces, only reachability information, the next routing hop and service capability information should be exchanged; any other network-related information shall not leak out to other networks.

- The routing protocol shall distinguish static routing information from dynamic routing information, and the routing protocol operation shall update the two differently. Only dynamic routing information shall be updated in real time.

- The routing protocol shall be able to control the dynamic information update frequency through different types of thresholds. Two types of thresholds could be defined: absolute thresholds and relative thresholds.

- The routing protocol shall support trigger-based and timeout-based information updates.

- The inter-domain routing protocol shall support policy-based routing information exchange.

- The routing protocol shall be able to support different levels of protection/restoration and other resiliency requirements. These are discussed in Section 10.

All of these scalability techniques will impact the accuracy of the network resource representation. The tradeoff between the accuracy of the routing information and the scalability of the routing protocol is an important consideration to be made by network operators.

9.4. Requirements for Path Selection

The following are functional requirements for path selection:

- Path selection shall support shortest path routing.

- Path selection shall also support constraint-based routing. At least the following constraints shall be supported:

  - Cost
  - Link utilization
  - Diversity
  - Service class

- Path selection shall be able to include or exclude specific network resources, based on policy.

- Path selection shall be able to support different levels of diversity, including node, link, SRLG and SRG diversity.

- Path selection algorithms shall provide carriers the ability to support a wide range of services and multiple levels of service classes. Parameters such as service type, transparency, bandwidth, latency, bit error rate, etc. may be relevant.
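As an informal illustration of the requirements above, the following sketch computes a constrained shortest path over a small example topology, excluding specified nodes and SRLGs so that a second path can be kept diverse from a previously routed demand, in line with the diverse routing requirements earlier in this document. The topology, costs and SRLG numbering are invented for the example; it is not intended as a complete path computation method.

   # Python sketch: Dijkstra shortest path with simple exclusion
   # constraints (nodes and SRLGs). Illustrative only.
   import heapq

   def constrained_shortest_path(graph, src, dst,
                                 excluded_srlgs=frozenset(),
                                 excluded_nodes=frozenset()):
       # graph: node -> list of (neighbor, cost, set_of_srlg_ids)
       pq = [(0.0, src, [src])]
       visited = set()
       while pq:
           cost, node, path = heapq.heappop(pq)
           if node == dst:
               return cost, path
           if node in visited:
               continue
           visited.add(node)
           for nbr, link_cost, srlgs in graph.get(node, []):
               if nbr in excluded_nodes or (srlgs & excluded_srlgs):
                   continue          # constraint: skip excluded resources
               heapq.heappush(pq, (cost + link_cost, nbr, path + [nbr]))
       return float("inf"), []       # no path satisfies the constraints

   # Invented 4-node example: find a working path, then a second path
   # that avoids the SRLGs used by the working path (SRLG-disjoint).
   example = {
       "A": [("B", 1.0, {1}), ("C", 1.0, {2})],
       "B": [("D", 1.0, {1})],
       "C": [("D", 2.0, {2})],
       "D": [],
   }
   _, working = constrained_shortest_path(example, "A", "D")
   _, protect = constrained_shortest_path(example, "A", "D",
                                          excluded_srlgs={1})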
Constraint-based routing in the optical network is significantly more complex than in the IP network. There are many optical layer constraints to consider, such as wavelength, diversity, optical layer impairments, etc. A detailed discussion of the routing constraints at the optical layer can be found in [ipo-olr].

9.5. Discovery Functions

The discovery functions include neighbor, resource and service discovery. The control plane shall support both manual configuration and automatic discovery.

9.5.1. Neighbor Discovery

Neighbor discovery can be described as an instance of auto-discovery that is used for associating two network entities within a layer network based on a specified adjacency relation.

The control plane shall support the following neighbor discovery capabilities, as described in [itu-disc]:

- Physical media adjacency, which detects and verifies the physical layer network connectivity between two connected network element ports.

- Logical network adjacency, which detects and verifies the logical network layer connection above the physical layer between network layer specific ports.

- Control adjacency, which detects and verifies the logical neighboring relation between two control entities associated with data plane network elements that form either a physical or a logical adjacency.

The control plane shall support manual neighbor adjacency configuration to either overwrite or supplement the automatic neighbor discovery function.

9.5.2. Resource Discovery

Resource discovery is concerned with the ability to verify physical connectivity between two ports on adjacent network elements, improve inventory management of network resources, detect configuration mismatches between adjacent ports, associate port characteristics of adjacent network elements, etc. Resource discovery shall be supported.

Resource discovery can be achieved through either manual provisioning or automated procedures. The procedures are generic, while the specific mechanisms and control information can be technology dependent.

After neighbor discovery, resource verification and monitoring must be performed periodically to verify physical attributes and ensure compatibility.

9.5.3. Service Discovery

Service discovery can be described as an instance of auto-discovery that is used for verifying and exchanging the service capabilities of a network. Service discovery can only happen after neighbor discovery. Since the service capabilities of a network can change dynamically, service discovery may need to be repeated.

Service discovery is required for all the optical services supported.

10. Requirements for Service and Control Plane Resiliency

Resiliency is a network capability to continue its operations under the condition of failures within the network. The automatic switched optical network assumes the separation of the control plane and the data plane; therefore, failures in the network can be divided into those affecting the data plane and those affecting the control plane. To provide enhanced optical services, resiliency measures in both the data plane and the control plane should be implemented. The following failure handling principles shall be supported.
The control plane shall provide optical service failure detection and recovery functions such that failures in the data plane within the control plane coverage can be quickly mitigated.

The failure of the control plane shall not in any way adversely affect the normal functioning of existing optical connections in the data plane.

In general, there shall be no single point of failure for any major control plane function, including signaling, routing, etc. The control plane shall provide reliable transfer of signaling messages and flow control mechanisms for easing any congestion within the control plane.

10.1. Service Resiliency

In circuit-switched transport networks, the quality and reliability of the established optical connections in the transport plane can be enhanced by the protection and restoration mechanisms provided by the control plane functions. Rapid recovery is required by transport network providers to protect services and also to support stringent Service Level Agreements (SLAs) that dictate high reliability and availability for customer connectivity.

Protection and restoration are closely related techniques for repairing network node and link failures. Protection is a collection of failure recovery techniques meant to rehabilitate failed connections by pre-provisioning dedicated protection network connections and switching to the protection circuit once a failure is detected. Restoration is a collection of reactive techniques used to rehabilitate failed connections by dynamically rerouting the failed connection around the network failure using shared network resources.

Protection switching is characterized by a shorter recovery time at the cost of dedicated network resources, while dynamic restoration is characterized by a longer recovery time with efficient resource sharing. Furthermore, protection and restoration can be performed either on a per-link/span basis or on an end-to-end connection path basis. The former is called local repair, initiated by the node closest to the failure, and the latter is called global repair, initiated from the ingress node.

Protection and restoration actions are usually taken in reaction to failures in the network. However, during network maintenance affecting protected connections, a network operator needs to proactively force the traffic on the protected connections to switch to their protection connections.

The failures and signal degradations in the transport plane are usually technology specific and therefore shall be monitored and detected by the transport plane.

The transport plane shall report both physical-level failures and signal degradations to the control plane in the form of signal failure alarms and signal degrade alarms.

The control plane shall support both alarm-triggered and hold-down-timer-based protection switching and dynamic restoration for failure recovery.

Clients will have different requirements for connection availability. These requirements can be expressed in terms of a "service level", which can be mapped to different restoration and protection options and priority-related connection characteristics, such as holding priority (e.g., pre-emptable or not), set-up priority, or restoration priority.
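For illustration only, the sketch below shows how such a service level might be represented as per-connection protection and restoration attributes. The service level names and the particular values are assumptions made for this example; as noted next, the actual mapping is left to individual carriers.

   # Python sketch: hypothetical per-connection resiliency attributes.
   # The service level names and mappings below are illustrative
   # assumptions, not values defined by this document.
   from dataclasses import dataclass

   @dataclass
   class ResiliencyAttributes:
       protection: str           # "1+1", "1:1", "1:N", "M:N" or "unprotected"
       restorable: bool          # restorable vs. un-restorable
       setup_priority: int       # lower value = higher priority
       holding_priority: int     # governs whether the connection may be pre-empted
       restoration_priority: int

   EXAMPLE_SERVICE_LEVELS = {
       "premium":     ResiliencyAttributes("1+1", True, 0, 0, 0),
       "protected":   ResiliencyAttributes("1:N", True, 1, 1, 1),
       "best-effort": ResiliencyAttributes("unprotected", False, 3, 3, 3),
   }

   def attributes_for(service_level):
       # Resolve a requested service level to per-connection attributes.
       return EXAMPLE_SERVICE_LEVELS[service_level]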
However, the mapping of individual service levels to a specific set of protection/restoration options and connection priorities will be determined by individual carriers.

In order for the network to support multiple grades of service, the control plane must support differing protection and restoration options on a per-connection basis.

In order for the network to support multiple grades of service, the control plane must support setup priority, restoration priority and holding priority on a per-connection basis.

In general, the following protection schemes shall be considered for all protection cases within the network:

- Dedicated protection: 1+1 and 1:1
- Shared protection: 1:N and M:N
- Unprotected

The control plane shall support an "extra-traffic" capability, which allows unprotected traffic to be transmitted on the protection circuit.

The control plane shall support both trunk-side and drop-side protection switching.

The following restoration schemes should be supported:

- Restorable
- Un-restorable

Protection and restoration can be done on an end-to-end basis per connection. They can also be done on a per-span or per-link basis between two adjacent network nodes. These schemes should be supported.

Protection and restoration actions are usually triggered by failures in the network. However, during network maintenance affecting protected connections, a network operator needs to proactively force the traffic on the protected connections to switch to their protection connections. Therefore, in order to support easy network maintenance, it is required that management-initiated protection and restoration be supported.

Protection and restoration configuration should be based on software only.

The control plane shall allow the modification of protection and restoration attributes on a per-connection basis.

The control plane shall support mechanisms for reserving bandwidth resources for restoration.

The control plane shall support mechanisms for normalizing connection routing (reversion) after failure repair.

Normal connection management operations (e.g., connection deletion) shall not result in protection/restoration being initiated.

10.2. Control Plane Resiliency

The control plane may be affected by failures in signaling network connectivity and by software failures (e.g., in the signaling, topology and resource discovery modules).

The signaling control plane should implement signaling message priorities to ensure that restoration messages receive preferential treatment, resulting in faster restoration (an illustrative sketch appears later in this section).

The optical control plane signaling network shall support protection and restoration options to enable self-healing in the case of failures within the control plane.

Control network failure detection mechanisms shall distinguish between control channel failures and software process failures.

A control plane failure shall only impact the capability to provision new services.

Fault localization techniques for the isolation of failed control resources shall be supported.

Recovery from control plane failures shall result in complete recovery and re-synchronization of the network.

There shall not be a single point of failure in the control plane systems design.
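Referring back to the signaling message priority requirement above, the following sketch shows one simple way such preferential treatment could be realized. The two message classes and the queueing discipline are assumptions made for this example only.

   # Python sketch: an assumed two-level priority scheme in which
   # restoration-related signaling messages are dequeued before other
   # control messages. Illustrative only.
   import heapq
   import itertools

   RESTORATION, NORMAL = 0, 1        # lower number = higher priority
   _seq = itertools.count()          # tie-breaker keeps FIFO order per class
   queue = []

   def enqueue(message, priority=NORMAL):
       heapq.heappush(queue, (priority, next(_seq), message))

   def dequeue():
       return heapq.heappop(queue)[2] if queue else None

   enqueue("PATH setup for a new connection")
   enqueue("restoration switch-over request", RESTORATION)
   assert dequeue() == "restoration switch-over request"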
Partial or total failure of the control plane shall not affect existing established connections; it should only result in the loss of the capability to accept new connection requests.

11. Security Considerations

This section describes security considerations and requirements for optical services and the associated control plane.

11.1. Optical Network Security Concerns

Since the optical service is directly related to the physical network, which is fundamental to a telecommunications infrastructure, stringent security assurance mechanisms should be implemented in optical networks.

In terms of security, an optical connection consists of two aspects. One is the security of the data plane to which the optical connection itself belongs, and the other is the security of the control plane.

11.1.1. Data Plane Security

- Misconnection shall be avoided in order to keep the user's data confidential. For enhancing the integrity and confidentiality of data, it may be helpful to support scrambling of data at layer 2 or encryption of data at a higher layer.

11.1.2. Control Plane Security

It is desirable to decouple the control plane from the data plane physically.

Restoration shall not result in mis-connections (connections established to a destination other than that intended), even for short periods of time (e.g., during contention resolution). For example, signaling messages used to restore connectivity after a failure should not be forwarded by a node before contention has been resolved.

Additional security mechanisms should be provided to guard against intrusions on the signaling network. Some of these may be implemented with the help of the management plane.

- Network information shall not be advertised across external interfaces (UNI or E-NNI). The advertisement of network information across the E-NNI shall be controlled and limited in a configurable, policy-based fashion. The advertisement of network information shall be isolated and managed separately by each administration.

- The signaling network itself shall be secure, blocking all unauthorized access. The signaling network topology and addresses shall not be advertised outside a carrier's domain of trust.

- Identification, authentication and access control shall be rigorously used by network operators for providing access to the control plane.

- Discovery information, including neighbor discovery, service discovery, resource discovery and reachability information, should be exchanged in a secure way.

- Information on security-relevant events occurring in the control plane, or on security-relevant operations performed or attempted in the control plane, shall be logged in the management plane.

- The management plane shall be able to analyze and exploit the logged data in order to check whether they violate or threaten the security of the control plane.

- The control plane shall be able to generate alarm notifications about security-related events to the management plane in an adjustable and selectable fashion.

- The control plane shall support recovery from successful and attempted intrusion attacks.
11.2. Service Access Control

From a security perspective, network resources should be protected from unauthorized access and should not be used by unauthorized entities. Service access control is the mechanism that limits and controls entities attempting to access network resources. Especially on the UNI and E-NNI, Connection Admission Control (CAC) functions should also support the following security features:

- CAC should be applied to any entity that tries to access network resources through the UNI (or E-NNI). CAC should include an authentication function for an entity in order to prevent masquerade (spoofing). Masquerade is the fraudulent use of network resources by pretending to be a different entity. An authenticated entity should be given a service access level on a configurable policy basis.

- The UNI and NNI should provide optional mechanisms to ensure origin authentication and message integrity for connection management requests, such as set-up, tear-down and modify, and for connection signaling messages. This is important in order to prevent Denial of Service attacks. The UNI and E-NNI should also include mechanisms, such as usage-based billing based on CAC, to ensure non-repudiation of connection management messages.

- Each entity should be authorized to use network resources according to the administrative policy set by the operator.

12. Acknowledgements

The authors of this document would like to extend their special appreciation to John Strand for his initial contributions to the carrier requirements. We also want to acknowledge the valuable inputs from Yangguang Xu, Zhiwei Lin, Eve Verma, Daniel Awduche, James Luciani, Deborah Brunhard, Lynn Neir, Wesam Alanqar, Tammy Ferris and Mark Jones.

13. References

[rfc2026] S. Bradner, "The Internet Standards Process -- Revision 3", BCP 9, RFC 2026, IETF, October 1996.

[rfc2119] S. Bradner, "Key words for use in RFCs to Indicate Requirement Levels", BCP 14, RFC 2119, 1997.

[itu-otn] ITU-T Rec. G.872 (2000), Architecture of Optical Transport Networks.

[itu-g709] ITU-T Rec. G.709 (2001), Network Node Interface for the Optical Transport Network.

[itu-sdh] ITU-T Rec. G.803 (2000), Architecture of Transport Networks based on the Synchronous Digital Hierarchy.

[ipo-frw] B. Rajagopalan, et al., "IP over Optical Networks: A Framework", work in progress, IETF, 2002.

[oif-addr] M. Lazer, "High Level Requirements on Optical Network Addressing", oif2001.196, 2001.

[oif-carrier] Y. Xue and M. Lazer, et al., "Carrier Optical Service Framework and Associated Requirements for UNI", OIF2000.155, 2000.

[oif-nnireq] M. Lazer, et al., "Carrier NNI Requirements", OIF2002.229, 2002.

[ipo-olr] A. Chiu and J. Strand, et al., "Impairments and Other Constraints on Optical Layer Routing", work in progress, IETF, 2002.

[ccamp-req] J. Jiang, et al., "Common Control and Measurement Plane Framework and Requirements", work in progress, IETF, 2001.

[ietf-gsmp] A. Doria, et al., "General Switch Management Protocol V3", work in progress, IETF, 2002.

[id-freeland] D. Freeland, et al., "Consideration on the development of an optical control plane", Nov. 2000.

[rfc2748] D. Durham, et al., "The COPS (Common Open Policy Service) Protocol", RFC 2748, Jan. 2000.
[oif-uni] Optical Internetworking Forum (OIF), "UNI 1.0 Signaling Specification", December 2001.

[itu-astn] ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the Automatic Switched Transport Network (ASTN).

[itu-ason] ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the Automatic Switched Optical Network (ASON).

[itu-dcm] ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and Connection Management (DCM).

[itu-rtg] ITU-T Draft Rec. G.7715/Y.1706 (2002), Architecture and Requirements for Routing in the Automatic Switched Optical Networks.

[itu-lm] ITU-T Draft Rec. G.7716/Y.1706 (2002), Link Resource Management for ASON Networks (work in progress).

[itu-disc] ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic Discovery Techniques.

[itu-dcn] ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification of Data Communication Network.

14. Authors' Addresses

Yong Xue
UUNET/WorldCom
22001 Loudoun County Parkway
Ashburn, VA 20147
Email: yxue@ieee.org

Monica Lazer
AT&T
900 Route 202/206N, PO Box 752
Bedminster, NJ 07921-0000
mlazer@att.com

Jennifer Yates
AT&T Labs
180 Park Avenue, P.O. Box 971
Florham Park, NJ 07932-0000
jyates@research.att.com

Dongmei Wang
AT&T Labs
Room B180, Building 103
180 Park Avenue
Florham Park, NJ 07932
mei@research.att.com

Ananth Nagarajan
Sprint
6220 Sprint Parkway
Overland Park, KS 66251, USA
Email: ananth.nagarajan@mail.sprint.com

Hirokazu Ishimatsu
Japan Telecom Co., LTD
2-9-1 Hatchobori, Chuo-ku,
Tokyo 104-0032 Japan
Phone: +81 3 5540 8493
Fax: +81 3 5540 8485
Email: hirokazu@japan-telecom.co.jp

Olga Aparicio
Cable & Wireless Global
11700 Plaza America Drive
Reston, VA 20191
Phone: 703-292-2022
Email: olga.aparicio@cwusa.com

Steven Wright
Science & Technology
BellSouth Telecommunications
41G70 BSC
675 West Peachtree St. NE.
Atlanta, GA 30375
Phone: +1 (404) 332-2194
Email: steven.wright@snt.bellsouth.com

Appendix A: Interconnection of Control Planes

The interconnection of the IP router (client) and optical control planes can be realized in a number of ways depending on the required level of coupling. The control planes can be loosely or tightly coupled. Loose coupling is generally referred to as the overlay model and tight coupling is referred to as the peer model. Additionally, there is the augmented model, which is somewhat in between the other two models but more akin to the peer model. The model selected determines the following:

- The details of the topology, resource and reachability information advertised between the client and optical networks

- The level of control IP routers can exercise in selecting paths across the optical network

The next three sections discuss these models in more detail, and the last section describes the coupling requirements from a carrier's perspective.

Peer Model (I-NNI like model)

Under the peer model, the IP router clients act as peers of the optical transport network, such that a single routing protocol instance runs over both the IP and optical domains.
In this regard, the optical network elements are treated just like any other router as far as the control plane is concerned. The peer model, although not strictly an internal NNI, behaves like an I-NNI in the sense that there is sharing of resource and topology information.

Presumably a common IGP such as OSPF or IS-IS, with appropriate extensions, will be used to distribute topology information. One tacit assumption here is that a common addressing scheme will also be used for the optical and IP networks. A common address space can be trivially realized by using IP addresses in both the IP and optical domains. Thus, the optical network elements become IP-addressable entities.

The obvious advantage of the peer model is the seamless interconnection between the client and optical transport networks. The tradeoff is the tight integration and the optical-specific routing information that must be known to the IP clients.

The discussion above has focused on the client to optical control plane inter-connection. The discussion applies equally well to inter-connecting two optical control planes.

Overlay (UNI-like model)

Under the overlay model, the IP client routing, topology distribution, and signaling protocols are independent of the routing, topology distribution, and signaling protocols at the optical layer. This model is conceptually similar to the classical IP over ATM model, but applied to an optical sub-network directly.

Though the overlay model dictates that the client and optical network are independent, this still allows the optical network to re-use IP layer protocols to perform the routing and signaling functions.

In addition to the protocols being independent, the addressing schemes used by the client and the optical network must be independent in the overlay model. That is, the use of IP layer addressing in the clients must not place any specific requirement upon the addressing used within the optical control plane.

The overlay model would provide a UNI to the client networks through which the clients could request the addition, deletion or modification of optical connections. The optical network would additionally provide reachability information to the clients, but no topology information would be provided across the UNI.

Augmented model (E-NNI like model)

Under the augmented model, there are actually separate routing instances in the IP and optical domains, but information from one routing instance is passed through the other routing instance. For example, external IP addresses could be carried within the optical routing protocols to allow reachability information to be passed to IP clients. A typical implementation would use BGP between the IP client and the optical network.

The augmented model, although not strictly an external NNI, behaves like an E-NNI in that there is limited sharing of information.

Generally, in a carrier environment there will be more than just IP routers connected to the optical network. Some other examples of clients could be ATM switches or SONET ADM equipment. This may drive the decision towards loose coupling to prevent undue burdens upon non-IP router clients. Also, loose coupling would ensure that future clients are not hampered by legacy technologies.
Additionally, a carrier may, for business reasons, want a separation between the client and optical networks. For example, the ISP business unit may not want to be tightly coupled with the optical network business unit. Another reason for separation might be the pure politics that play out in a large carrier. That is, it would seem unlikely that the optical transport network could be forced to run the same set of protocols as the IP router networks. Also, by forcing the same set of protocols in both networks, the evolution of the networks is directly tied together; it would seem you could not upgrade the optical transport network protocols without taking into consideration the impact on the IP router network (and vice versa).

Operating models also play a role in deciding the level of coupling. [id-freeland] gives four main operating models envisioned for an optical transport network:

Category 1: ISP owning all of its own infrastructure (i.e., including fiber and duct to the customer premises)

Category 2: ISP leasing some or all of its capacity from a third party

Category 3: Carrier's carrier providing layer 1 services

Category 4: Service provider offering multiple layer 1, 2, and 3 services over a common infrastructure

Although relatively few, if any, ISPs fall into category 1, it would seem the most likely of the four to use the peer model. The other operating models would more likely lend themselves to an overlay model. Most carriers fall into category 4 and thus would most likely choose an overlay model architecture.

Full Copyright Statement

Copyright (C) The Internet Society (2002). All Rights Reserved.

This document and translations of it may be copied and furnished to others, and derivative works that comment on or otherwise explain it or assist in its implementation may be prepared, copied, published and distributed, in whole or in part, without restriction of any kind, provided that the above copyright notice and this paragraph are included on all such copies and derivative works. However, this document itself may not be modified in any way, such as by removing the copyright notice or references to the Internet Society or other Internet organizations, except as needed for the purpose of developing Internet standards in which case the procedures for copyrights defined in the Internet Standards process must be followed, or as required to translate it into languages other than English.

The limited permissions granted above are perpetual and will not be revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.