INTERNET-DRAFT                                            Yong Xue
Document: draft-ietf-ipo-carrier-requirements-01.txt      Worldcom Inc.
Category: Informational                                   (Editor)
Expiration Date: September, 2002
                                                          Monica Lazer
                                                          Jennifer Yates
                                                          Dongmei Wang
                                                          AT&T

                                                          Ananth Nagarajan
                                                          Sprint

                                                          Hirokazu Ishimatsu
                                                          Japan Telecom Co., LTD

                                                          Steven Wright
                                                          Bellsouth

                                                          Olga Aparicio
                                                          Cable & Wireless Global
March, 2002

                  Carrier Optical Services Requirements

Status of this Memo

This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or rendered obsolete by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Abstract

This Internet-Draft describes the major carriers' service requirements for automatic switched optical networks (ASON) from both an end-user's and an operator's perspective. Its focus is on the description of the service building blocks and service-related control plane functional requirements. The management functions for the optical services and their underlying networks are beyond the scope of this document and will be addressed in a separate document.

Table of Contents

1. Introduction
1.1 Justification
1.2 Conventions used in this document
1.3 Value Statement
1.4 Scope of This Document
2. Abbreviations
3. General Requirements
3.1 Separation of Networking Functions
3.2 Network and Service Scalability
3.3 Transport Network Technology
3.4 Service Building Blocks
4. Service Model and Applications
4.1 Service and Connection Types
4.2 Examples of Common Service Models
5. Network Reference Model
5.1 Optical Networks and Subnetworks
5.2 Network Interfaces
5.3 Intra-Carrier Network Model
5.4 Inter-Carrier Network Model
6. Optical Service User Requirements
6.1 Common Optical Services
6.2 Optical Service Invocation
6.3 Bundled Connection
6.4 Levels of Transparency
6.5 Optical Connection Granularity
6.6 Other Service Parameters and Requirements
7. Optical Service Provider Requirements
7.1 Access Methods to Optical Networks
7.2 Dual Homing and Network Interconnections
7.3 Inter-domain Connectivity
7.4 Bearer Interface Types
7.5 Names and Address Management
7.6 Policy-Based Service Management Framework
7.7 Support of Hierarchical Routing and Signaling
8. Control Plane Functional Requirements for Optical Services
8.1 Control Plane Capabilities and Functions
8.2 Signaling Network
8.3 Control Plane Interface to Data Plane
8.4 Management Plane Interface to Data Plane
8.5 Control Plane Interface to Management Plane
8.6 Control Plane Interconnection
9. Requirements for Signaling, Routing and Discovery
9.1 Requirements for information sharing over UNI, I-NNI and E-NNI
9.2 Signaling Functions
9.3 Routing Functions
9.4 Requirements for path selection
9.5 Automatic Discovery Functions
10. Requirements for service and control plane resiliency
10.1 Service resiliency
10.2 Control plane resiliency
11. Security Considerations
11.1 Optical Network Security Concerns
11.2 Service Access Control
12. Acknowledgements

1. Introduction

The next generation WDM-based optical transport network (OTN) will consist of optical cross-connects (OXC), DWDM optical line systems (OLS) and optical add-drop multiplexers (OADM), based on the architecture defined by ITU Rec. G.872 [G.872]. The OTN is bounded by a set of optical channel access points and has a layered structure consisting of optical channel, multiplex section and transmission section sub-layer networks. Optical networking encompasses the functionality for the establishment, transmission, multiplexing and switching of optical connections carrying a wide range of user signals of varying formats and bit rates.

The ultimate goal is to enhance the OTN with an intelligent optical layer control plane to dynamically provision network resources and to provide network survivability using ring and mesh-based protection and restoration techniques. The resulting intelligent networks are called automatic switched optical networks, or ASON [G.8080].

The emerging and rapidly evolving ASON technologies are aimed at providing optical networks with intelligent networking functions and capabilities in the control plane to enable wavelength switching, rapid optical connection provisioning and dynamic rerouting. The same technology will also be able to control TDM-based SONET/SDH optical transport networks as defined by ITU Rec. G.803 [G.803]. This new networking platform will create tremendous business opportunities for network operators and service providers to offer new services to the market.

1.1. Justification

The charter of the IPO WG calls for a document on "Carrier Optical Services Requirements" for IP/Optical networks. This document addresses that aspect of the IPO WG charter. Furthermore, this document was accepted as an IPO WG document by unanimous agreement at the IPO WG meeting held on March 19, 2001, in Minneapolis, MN, USA. It presents a carrier and end-user perspective on optical network services and requirements.

1.2. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

1.3. Value Statement

By deploying ASON technology, a carrier expects to achieve the following benefits from both technical and business perspectives:

- Rapid Circuit Provisioning: ASON technology will enable the dynamic end-to-end provisioning of optical connections across the optical network by using standard routing and signaling protocols.
- Enhanced Survivability: ASON technology will enable the network to dynamically reroute an optical connection in case of a failure using mesh-based network protection and restoration techniques, which greatly improves cost-effectiveness compared to the current line and ring protection schemes in the SONET/SDH network.

- Cost Reduction: ASON networks will enable the carrier to better utilize the optical network, thus achieving significant unit cost reduction per megabit due to the cost-effective nature of the optical transmission technology, a simplified network architecture and reduced operation cost.

- Service Flexibility: ASON technology will support provisioning of an assortment of existing and new services, such as protocol- and bit-rate-independent transparent network services and bandwidth-on-demand services.

- Enhanced Interoperability: ASON technology will use a control plane based on industry and international standard architectures and protocols, which facilitates the interoperability of optical network equipment from different vendors.

In addition, the introduction of a standards-based control plane offers the following potential benefits:

- Reactive traffic engineering at the optical layer, which allows network resources to be dynamically allocated to traffic flows.

- Reduced need for service providers to develop new operational support system software for network control and new service provisioning on the optical network, thus speeding up the deployment of optical network technology and reducing software development and maintenance costs.

- Potential development of a unified control plane that can be used for different transport technologies, including OTN, SONET/SDH, ATM and PDH.

1.4. Scope of This Document

This document is aimed at providing, from the carrier's perspective, a service framework and associated requirements for the optical services to be offered in the next generation optical networking environment and for their service control and management functions. As such, this document concentrates on the requirements driving the work towards realization of ASON. This document is intended to be protocol-neutral.

Every carrier's needs are different. The objective of this document is NOT to define specific service models. Instead, some major service building blocks are identified that will enable carriers to mix and match them in order to create the service platform best suited to their business model. These building blocks include generic service types, service-enabling control mechanisms, and service control and management functions. The ultimate goal is to provide the requirements to guide the control protocol development within the IETF in terms of IP over optical technology.

In this document, we consider IP a major client to the optical network, but the same requirements and principles should be equally applicable to non-IP clients such as SONET/SDH, ATM, ITU G.709, etc.
2. Abbreviations

ASON   Automatic Switched Optical Networking
ASTN   Automatic Switched Transport Network
CAC    Connection Admission Control
E-NNI  Exterior NNI
E-UNI  Exterior UNI
IWF    Inter-Working Function
I-NNI  Interior NNI
I-UNI  Interior UNI
NNI    Node-to-Node Interface
NE     Network Element
OTN    Optical Transport Network
OLS    Optical Line System
PI     Physical Interface
SLA    Service Level Agreement
UNI    User-to-Network Interface

3. General Requirements

In this section, a number of generic requirements related to the service control and management functions are discussed.

3.1. Separation of Networking Functions

It makes logical sense to segregate the networking functions within each layer network into three logical functional planes: control plane, data plane and management plane. They are responsible for providing network control functions, data transmission functions and network management functions, respectively. The crux of the ASON network is the networking intelligence that contains automatic routing, signaling and discovery functions to automate the network control functions.

Control Plane: includes the functions related to networking control capabilities such as routing, signaling and policy control, as well as resource and service discovery. These functions are automated.

Data Plane (transport plane): includes the functions related to bearer channels and signal transmission.

Management Plane: includes the functions related to the management of network elements, networks, and network resources and services. These functions are less automated compared to control plane functions.

Each plane consists of a set of interconnected functional or control entities, physical or logical, responsible for providing the networking or control functions defined for that network layer.

The separation of the control plane from both the data and management planes is beneficial to the carriers in that it:

- Allows equipment vendors to have a modular system design that will be more reliable and maintainable, thus reducing the overall system ownership and operation cost.

- Allows carriers the flexibility to choose a third-party vendor's control plane software system as the control plane solution for their switched optical networks.

- Allows carriers to deploy a unified control plane and OSS/management systems to manage and control the different types of transport networks they own.

- Allows carriers to use a separate control network specially designed and engineered for control plane communications.

The separation of control, management and transport functions is required, and it shall accommodate both logical and physical level separation.

Note that this is in contrast to the IP network, where control messages and user traffic are routed and switched based on the same network topology due to the associated in-band signaling nature of the IP network.
3.2. Network and Service Scalability

Although specific applications or networks may be on a small scale, the control plane protocol and functional capabilities shall not limit large-scale networks.

In terms of the scale and complexity of the future optical network, the following assumptions can be made when considering the scalability and performance required of the optical control and management functions:

- There may be up to hundreds of OXC nodes and the same order of magnitude of OADMs per carrier network.

- There may be up to thousands of terminating ports/wavelengths per OXC node.

- There may be up to hundreds of parallel fibers between a pair of OXC nodes.

- There may be up to hundreds of wavelength channels transmitted on each fiber.

In relation to the frequency and duration of the optical connections:

- The expected end-to-end connection setup/teardown time should be on the order of seconds.

- The expected connection holding times should be on the order of minutes or greater.

- The expected number of connection attempts at the UNI should be on the order of hundreds.

- There may be up to millions of simultaneous optical connections switched across a single carrier network.

Note that even though automated rapid optical connection provisioning is required, carriers expect the majority of provisioned circuits, at least in the short term, to have a long lifespan ranging from months to years.

3.3. Transport Network Technology

Optical services can be offered over different types of underlying optical transport technologies, including both TDM-based SONET/SDH networks and WDM-based OTN networks.

For this document, the standards-based transport technologies SONET/SDH, as defined in ITU Rec. G.803, and OTN framing, as defined in ITU Rec. G.709, shall be supported.

Note that service characteristics such as bandwidth granularity and signal framing hierarchy will to a large degree be determined by the capabilities and constraints of the server layer network.

3.4. Service Building Blocks

The primary goal of this document is to identify a set of basic service building blocks that carriers can mix and match to create the service models that best serve their business needs.

The service building blocks are comprised of a well-defined set of service capabilities and a basic set of service control and management functions, which offer a basic set of services and additionally enable a carrier to define enhanced services through extensions and customizations. Examples of the building blocks include connection types, provisioning methods, control interfaces, policy control functions, and domain internetworking mechanisms, etc.

4. Service Model and Applications

A carrier's optical network supports multiple types of service models. Each service model may have its own service operations, target markets, and service management requirements.

4.1. Service and Connection Types

The optical network primarily offers high-bandwidth connectivity in the form of connections, where a connection is defined to be a fixed-bandwidth connection between two client network elements, such as IP routers or ATM switches, established across the optical network.
A connection is also defined by its demarcation points: from an ingress access point, across the optical network, to an egress access point of the optical network.

The following connection capability types must be supported:

- Uni-directional point-to-point connection

- Bi-directional point-to-point connection

- Uni-directional point-to-multipoint connection

For point-to-point connections, the following three types of network connections, based on different connection set-up control methods, shall be supported:

- Permanent connection (PC): Established hop-by-hop directly on each ONE along a specified path, without relying on the network routing and signaling capability. The connection has two fixed end-points and a fixed cross-connect configuration along the path, and stays in place permanently until it is deleted. This is similar to the concept of a PVC in ATM.

- Switched connection (SC): Established through the UNI signaling interface; the connection is dynamically established by the network using the network routing and signaling functions. This is similar to the concept of an SVC in ATM.

- Soft permanent connection (SPC): Established by provisioning PCs at the two end-points and letting the network dynamically establish an SC connection in between. This is similar to the SPVC concept in ATM.

The PC and SPC connections should be provisioned via the management plane to control plane interface, and the SC connection should be provisioned via the signaled UNI interface.

4.2. Examples of Common Service Models

Each carrier can define its own service model based on its business strategy and environment. The following are three service models that carriers may use:

4.2.1. Provisioned Bandwidth Service (PBS)

The PBS model provides enhanced leased/private line services provisioned via a service management interface (MI) using either the PC or SPC type of connection. The provisioning can be real-time or near real-time. It has the following characteristics:

- Connection requests go through a well-defined management interface.

- Client/server relationship between clients and the optical network.

- Clients have no optical network visibility and depend on network intelligence or an operator for optical connection setup.

4.2.2. Bandwidth on Demand Service (BDS)

The BDS model provides bandwidth-on-demand dynamic connection services via the signaled user-network interface (UNI). The provisioning is real-time and uses the SC type of optical connection. It has the following characteristics:

- Signaled connection requests via the UNI directly from the user or its proxy.

- The customer has no or limited network visibility, depending upon the control interconnection model used and the network administrative policy.

- Relies on network or client intelligence for connection set-up, depending upon the control plane interconnection model used.

4.2.3. Optical Virtual Private Network (OVPN)

The OVPN model provides a virtual private network at the optical layer between a specified set of user sites. It has the following characteristics:

- Customers contract for a specific set of network resources such as optical connection ports, wavelengths, etc.

- The Closed User Group (CUG) concept is supported as in a normal VPN.

- Optical connections can be of PC, SPC or SC type, depending upon the provisioning method used.

- An OVPN site can request dynamic reconfiguration of the connections between sites within the same CUG.

- Customers may have limited or full visibility and control of contracted network resources, depending upon the customer service contract.

At minimum, the PBS, BDS and OVPN service models described above shall be supported by the control functions.
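The connection and service types above can be summarized in a small data model. The following Python sketch is illustrative only and is not part of the requirements; all class and field names are hypothetical, but it shows how a provisioning system might distinguish the PC/SPC/SC connection types and the PBS/BDS/OVPN service models.

   from dataclasses import dataclass
   from enum import Enum

   class ConnectionType(Enum):
       PC = "permanent"        # provisioned hop-by-hop via the management plane
       SPC = "soft-permanent"  # PC end segments, SC established by the network
       SC = "switched"         # requested dynamically over the signaled UNI

   class ServiceModel(Enum):
       PBS = "provisioned-bandwidth"   # management-interface provisioning (PC/SPC)
       BDS = "bandwidth-on-demand"     # UNI-signaled provisioning (SC)
       OVPN = "optical-vpn"            # CUG of sites over contracted resources

   @dataclass
   class ConnectionRequest:
       source: str            # name/address of the ingress access point
       destination: str       # name/address of the egress access point
       bandwidth: str         # e.g. "STS-48c", "VC-4", "ODU2", "1GbE"
       directionality: str    # "uni" or "bi"
       conn_type: ConnectionType
       service_model: ServiceModel

   # A BDS request signaled over the UNI would therefore carry conn_type=SC:
   request = ConnectionRequest("client-A", "client-B", "ODU2", "bi",
                               ConnectionType.SC, ServiceModel.BDS)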
5. Network Reference Model

This section discusses the major architectural and functional components of a generic carrier optical network, which provide a reference model for describing the requirements for the control and management of carrier optical services.

5.1. Optical Networks and Subnetworks

As mentioned before, there are two main types of optical networks currently under consideration: the SDH/SONET network as defined in ITU Rec. G.803, and the OTN as defined in ITU Rec. G.872.

We assume an OTN is composed of a set of optical cross-connects (OXC) and optical add-drop multiplexers (OADM) which are interconnected in a general mesh topology using DWDM optical line systems (OLS).

For ease of discussion and description, it is often convenient to treat an optical network as a subnetwork cloud, in which the details of the network become less important; instead, the focus is on the functions and the interfaces the optical network provides. In general, a subnetwork can be defined as a set of access points on the network boundary and a set of point-to-point optical connections between those access points.

5.2. Network Interfaces

A generic carrier network reference model describes a multi-carrier network environment. Each individual carrier network can be further partitioned into domains or sub-networks for administrative, technological or architectural reasons. The demarcation between (sub)networks can be either logical or physical and consists of a set of reference points identifiable in the optical network. From the control plane perspective, these reference points define a set of control interfaces in terms of optical control and management functionality. Figure 5.1 is an illustrative diagram of this.

                      +---------------------------------------+
                      |        single carrier network         |
   +--------------+   |                                       |
   |              |   |  +------------+       +------------+  |
   |      IP      |   |  |            |       |            |  |
   |   Network    +-EUNI+  Optical    +-I-UNI-+ Carrier IP |  |
   |              |   |  | Subnetwork |       |  network   |  |
   +--------------+   |  |            +--+    |            |  |
                      |  +------+-----+  |    +------+-----+  |
                      |         |        |           |        |
                      |       I-NNI    I-NNI       I-UNI      |
   +--------------+   |         |        |           |        |
   |              |   |  +------+-----+  |    +------+-----+  |
   |      IP      +-EUNI+            |  +-----+            |  |
   |   Network    |   |  |  Optical   |       |  Optical   |  |
   |              |   |  | Subnetwork +-I-NNI-+ Subnetwork |  |
   +--------------+   |  |            |       |            |  |
                      |  +------+-----+       +------+-----+  |
                      |         |                    |        |
                      +---------------------------------------+
                              E-UNI                E-NNI
                                |                    |
                      +---------+----+      +--------+-------+
                      |              |      |                |
                      | Other Client |      | Other Carrier  |
                      |   Network    |      |    Network     |
                      |  (ATM/SONET) |      |                |
                      +--------------+      +----------------+

           Figure 5.1 Generic Carrier Network Reference Model

The network interfaces encompass two aspects of the networking functions: the user data plane interface and the control plane interface. The former concerns user data transmission across the physical network interface, and the latter concerns control message exchange across the network interface, such as signaling, routing, etc. We call the former the physical interface (PI) and the latter the control plane interface. Unless otherwise stated, the control interface is assumed in the remainder of this document.
5.2.1. Control Plane Interfaces

A control interface defines a relationship between the two connected network entities on either side of the interface. For each control interface, we need to define the architectural function each side plays and a controlled set of information that can be exchanged across the interface. The information flowing over this logical interface may include, but is not limited to:

- Endpoint name and address

- Reachability/summarized network address information

- Topology/routing information

- Authentication and connection admission control information

- Connection management signaling messages

- Network resource control information

Different types of interfaces can be defined for network control and architectural purposes and can be used as network reference points in the control plane. In this document, the following set of interfaces is defined, as shown in Figure 5.1:

The User-Network Interface (UNI) is a bi-directional signaling interface between service requester and service provider control entities. We further differentiate between the interior UNI (I-UNI) and the exterior UNI (E-UNI) as follows:

- E-UNI: A UNI interface for which the service requester control entity resides outside the carrier network control domain.

- I-UNI: A UNI interface for which the service requester control entity resides within the carrier network control domain.

The reason for doing so is to differentiate a class of UNI where there is a trust relationship between the client equipment and the optical network. This private type of UNI may have functionality similar to the NNI in that it may allow controlled routing information to cross the UNI. Specifics of the I-UNI are currently under study.

The Network-Network Interface (NNI) is a bi-directional signaling interface between two optical network elements or sub-networks. We differentiate between the interior (I-NNI) and exterior (E-NNI) NNI as follows:

- E-NNI: An NNI interface between two control plane entities belonging to different control domains.

- I-NNI: An NNI interface between two control plane entities within the same control domain in the carrier network.

It should be noted that it is quite common to use an E-NNI between two sub-networks within the same carrier network if they belong to different control domains. Different types of interfaces, interior vs. exterior, have different implied trust relationships for security and access control purposes. The trust relationship is not binary; instead, a policy-based control mechanism needs to be in place to restrict the type and amount of information that can flow across each type of interface, depending on the carrier's service and business requirements. Generally, two networks have a trust relationship if they belong to the same administrative domain.
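To illustrate the policy-based restriction of control information per interface type, the following Python sketch shows one way a control plane element might filter the information exchanged across each interface class. The interface classes and information categories are taken from this section; the table contents themselves are an example consistent with Sections 5.4.2 and 7.7, not a normative rule.

   # Which control information may cross each interface class is a policy
   # decision; this example keeps topology inside the control domain.
   ALLOWED_INFO = {
       "I-NNI": {"reachability", "topology", "signaling", "resource-control"},
       "E-NNI": {"reachability", "signaling"},   # summarized reachability only
       "I-UNI": {"reachability", "signaling"},   # controlled routing info possible
       "E-UNI": {"signaling"},                   # endpoint signaling only
   }

   def may_advertise(interface_type: str, info_category: str) -> bool:
       """Return True if policy allows this information category to cross
       the given interface type."""
       return info_category in ALLOWED_INFO.get(interface_type, set())

   assert may_advertise("I-NNI", "topology")
   assert not may_advertise("E-NNI", "topology")   # topology must not leak over E-NNI
   assert not may_advertise("E-UNI", "topology")   # nor over E-UNI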
Examples of interior interfaces include an I-NNI between two optical network elements in a single control domain, or an I-UNI between the optical transport network and an IP client network owned by the same carrier. Examples of exterior interfaces include an E-NNI between two different carriers, or an E-UNI between a carrier optical network and its customers.

The control plane shall support the UNI and NNI interfaces described above. The interfaces shall be configurable in terms of the type and amount of control information exchanged, and their behavior shall be consistent with the configuration (i.e., exterior versus interior interfaces).

5.3. Intra-Carrier Network Model

The intra-carrier network model is concerned with the network service control and management issues within networks owned by a single carrier.

5.3.1. Multiple Sub-networks

Without loss of generality, the optical network owned by a carrier service operator can be depicted as consisting of one or more optical sub-networks interconnected by direct optical links. There may be many different reasons for having more than one optical sub-network. It may be the result of hierarchical layering, different technologies across access, metro and long-haul networks (as discussed below), business mergers and acquisitions, or incremental optical network technology deployment by the carrier using different vendors or technologies.

A sub-network may be a single-vendor and single-technology network. But in general, the carrier's optical network is heterogeneous in terms of equipment vendors and the technology utilized in each sub-network.

5.3.2. Access, Metro and Long-haul networks

Few carriers have end-to-end ownership of the optical networks. Even if they do, access, metro and long-haul networks often belong to different administrative divisions as separate optical sub-networks. Therefore, inter-(sub)network interconnection is essential for supporting end-to-end optical service provisioning and management. The access, metro and long-haul networks may use different technologies and architectures, and as such may have different network properties.

In general, an end-to-end optical connection may easily cross multiple sub-networks, with the following possible scenarios:

   Access -- Metro -- Access
   Access -- Metro -- Long Haul -- Metro -- Access

5.3.3. Implied Control Constraints

The carrier's optical network is in general treated as a trusted domain, which is defined as a network under a single technical administration with an implied trust relationship. Within a trusted domain, all the optical network elements and sub-networks are considered to be secure and trusted by each other at a defined level. In the intra-carrier model, interior interfaces (I-NNI and I-UNI) are generally assumed.

One business application for the interior UNI is the case where a carrier service operator offers data services such as IP, ATM and Frame Relay over its optical core network. Data service network elements such as routers and ATM switches are considered to be internal optical service client devices. The topology information for the carrier optical network may be shared with the internal client data networks.
5.4. Inter-Carrier Network Model

The inter-carrier model focuses on the service and control aspects between different carrier networks and describes the internetworking relationship between them.

5.4.1. Carrier Network Interconnection

Inter-carrier interconnection provides for connectivity among different optical network operators. To provide global-reach end-to-end optical services, optical service control and management between different carrier networks becomes essential. The normal connectivity between carriers may include:

Private Peering: Two carriers set up a dedicated connection between them via a private arrangement.

Public Peering: Two carriers set up a point-to-point connection between them at a public optical network access point (ONAP).

Due to the nature of the automatic switched optical network, it is also possible to support distributed peering for the IP client layer network, where two distant IP routers can be connected via an optical connection.

5.4.2. Implied Control Constraints

In the inter-carrier network model, each carrier's optical network is a separate administrative domain. Both the UNI interface between the user and the carrier network and the NNI interface between two carriers' networks cross the carrier's administrative boundary, and are therefore by definition exterior interfaces.

In terms of control information exchange, topology information shall not be allowed to cross either the E-NNI or the E-UNI interface.

6. Optical Service User Requirements

This section describes the user requirements for optical services, which in turn impose requirements on service control and management for the network operators. The user requirements reflect the perception of the optical service from a user's point of view.

6.1. Common Optical Services

The basic unit of an optical service is a fixed-bandwidth optical connection between the connected parties. However, different services are created based on the supported signal characteristics (format, bit rate, etc.), the service invocation methods, and possibly the associated Service Level Agreement (SLA) provided by the service provider.

At present, the following are the major optical services provided in the industry:

- SONET/SDH, with different degrees of transparency

- Optical wavelength services: opaque or transparent

- Ethernet at 1 Gb/s and 10 Gb/s

- Storage Area Networks (SANs) based on FICON, ESCON and Fiber Channel

The services mentioned above shall be provided by the optical transport layer of the network and be provisioned using the same management, control and data planes.

Opaque Service refers to transport services where the signal framing is negotiated between the client and the network operator (framing- and bit-rate-dependent), and only the payload is carried transparently. SONET/SDH transport is most widely used for network-wide transport. Different levels of transparency can be achieved in SONET/SDH transmission, as discussed in Section 6.4.

Transparent Service assumes protocol and rate independence. However, since any optical connection is associated with a signal bandwidth, for transparent optical services knowledge of the maximum bandwidth is required.
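As an illustration of how the characteristics above (framing, transparency, maximum bit rate, invocation method) combine to define distinct offerings, the following sketch shows a hypothetical service catalog. The entries and field names are assumptions for illustration only and are not requirements.

   from dataclasses import dataclass

   @dataclass
   class ServiceOffering:
       name: str          # carrier-facing service name
       framing: str       # "SONET/SDH", "OTN (G.709)", "Ethernet", "FC-N", ...
       transparency: str  # "opaque", "partially transparent", "transparent"
       max_rate: str      # maximum signal bandwidth the connection must carry
       invocation: str    # "management interface" (PBS) or "UNI signaling" (BDS)

   CATALOG = [
       ServiceOffering("Private line OC-48", "SONET/SDH", "opaque", "2.5 Gb/s",
                       "management interface"),
       ServiceOffering("Wavelength service", "transparent", "transparent", "10 Gb/s",
                       "UNI signaling"),
       ServiceOffering("GbE over GFP", "Ethernet", "opaque", "1 Gb/s",
                       "UNI signaling"),
   ]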
Ethernet Services, specifically 1 Gb/s and 10 Gb/s Ethernet services, are gaining popularity due to the lower cost of the customer premises equipment and their simplified management requirements (compared to SONET or SDH).

Ethernet services may be carried over either SONET/SDH (GFP mapping) or WDM networks. Ethernet service requests will require some service-specific parameters: priority class, VLAN Id/Tag, and traffic aggregation parameters.

Storage Area Network (SAN) Services: ESCON and FICON are proprietary versions of the service, while Fiber Channel is the standard alternative. As is the case with Ethernet services, SAN services may be carried over either SONET/SDH (using GFP mapping) or WDM networks.

Currently SAN services require only point-to-point connections, but it is envisioned that in the future they may also require multicast connections.

The control plane shall provide the carrier with the functionality to provision, control and manage all the services listed above.

6.2. Optical Service Invocation

As mentioned earlier, the methods of service invocation play an important role in defining different services.

6.2.1. Invocation via the Management Plane

In this scenario, users forward their service request to the provider via a well-defined service management interface. All connection management operations, including set-up, release, query, or modification, shall be invoked from the management plane.

6.2.2. Invocation via the UNI (Control Plane)

In this scenario, users forward their service request to the provider via a well-defined UNI interface in the control plane (including proxy signaling). All connection management operation requests, including set-up, release, query, or modification, shall be invoked from directly connected user devices, or from their signaling representative (such as a signaling proxy).

In summary, the following requirements for the control plane have been identified (a sketch of a connection set-up handler embodying several of these requirements follows this list):

- The control plane shall support action result codes as responses to any requests over the control interfaces.

- The control plane shall support requests for connection set-up, subject to policies in effect between the user and the network.

- The control plane shall support the destination client device's decision to accept or reject connection creation requests from the initiating client's device.

- The control plane shall support requests for connection set-up across multiple subnetworks over both interior and exterior network interfaces.

- NNI signaling shall support requests for connection set-up, subject to policies in effect between the subnetworks.

- Connection set-up shall be supported for both uni-directional and bi-directional connections.

- Upon connection request initiation, the control plane shall generate a network-unique Connection-ID associated with the connection, to be used for information retrieval or other activities related to that connection.

- CAC shall be provided as part of the control plane functionality. It is the role of the CAC function to determine if there is sufficient free resource available downstream to allow a new connection.

- When a connection request is received across the NNI, it is necessary to ensure that the resources exist within the downstream subnetwork to establish the connection.

- If sufficient resources are available, the CAC may permit the connection request to proceed.
- If sufficient resources are not available, the CAC shall send an appropriate notification upstream towards the originator of the connection request that the request has been denied.

- Negotiation of multiple service level options for connection set-up shall be supported across the NNI.

- The policy management system must determine what kinds of connections can be set up across a given NNI.

- The control plane elements need the ability to rate-limit (or pace) call setup attempts into the network.

- The control plane shall report to the management plane the success or failure of a connection request.

- Upon a connection request failure:

  - The control plane shall report to the management plane a cause code identifying the reason for the failure.

  - A negative acknowledgment shall be returned across the NNI.

  - Allocated resources shall be released.

- Upon a connection request success:

  - A positive acknowledgment shall be returned when a connection has been successfully established.

  - The positive acknowledgment shall be transmitted both downstream and upstream, over the NNI, to inform both source and destination clients of when they may start transmitting data.

- The control plane shall support the client's request for connection tear-down.

- NNI signaling shall support requests for connection tear-down by Connection-ID.

- The control plane shall allow either end to initiate connection release procedures.

- NNI signaling flows shall allow any end point or any intermediate node to initiate the connection release over the NNI.

- Upon connection tear-down completion, all resources associated with the connection shall become available for new requests.

- The management plane shall be able to tear down connections established by the control plane, both gracefully and forcibly, on demand.

- Partially deleted connections shall not remain within the network.

- End-to-end acknowledgments shall be used for connection deletion requests.

- Connection deletion shall not result in either restoration or protection being initiated.

- Connection deletion shall at a minimum use a two-pass signaling process, removing the cross-connection only after the first signaling pass has completed.

- The control plane shall support management plane and client device requests for connection attribute or status queries.

- The control plane shall support management plane and neighboring device (client or intermediate node) requests for connection attribute or status queries.

- The control plane shall support action result code responses to any requests over the control interfaces.

- The management plane shall be able to query the status of a connection on demand.

- The UNI shall support initial registration of the UNI-C with the network via the control plane.

- The UNI shall support registration and updates by the UNI-C entity of the clients and user interfaces that it controls.

- The UNI shall support network queries of the client devices.

- The UNI shall support detection of client device or edge ONE failure.
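The following Python sketch illustrates how several of the requirements above (policy check, CAC, Connection-ID generation, result codes, and resource release on failure) might fit together in a connection set-up handler. It is a minimal illustration under assumed objects (request, downstream_resources, policy), not a normative procedure.

   import uuid
   from enum import Enum

   class ResultCode(Enum):
       SUCCESS = "success"
       CAC_REJECT = "insufficient-resources"   # reported upstream with a cause code
       POLICY_REJECT = "policy-denied"

   def handle_setup_request(request, downstream_resources, policy):
       """Sketch of a control plane connection set-up handler.

       'request' carries source, destination, bandwidth and service level;
       'downstream_resources' is an assumed view of free downstream capacity;
       'policy' decides what may be set up over this interface (Section 7.6)."""
       if not policy.permits(request):
           return None, ResultCode.POLICY_REJECT

       # CAC: verify sufficient free downstream resources before proceeding.
       if not downstream_resources.can_carry(request.bandwidth):
           # A negative acknowledgment flows upstream toward the originator.
           return None, ResultCode.CAC_REJECT

       # Generate a network-unique Connection-ID for later queries/tear-down.
       connection_id = str(uuid.uuid4())
       try:
           downstream_resources.reserve(connection_id, request.bandwidth)
       except Exception:
           # On failure, allocated resources are released and a cause code
           # is reported to the management plane.
           downstream_resources.release(connection_id)
           return None, ResultCode.CAC_REJECT

       # A positive acknowledgment is sent both upstream and downstream once
       # the cross-connections are in place (not shown).
       return connection_id, ResultCode.SUCCESS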
6.3. Bundled Connection

Bundled connections differ from simple basic connections in that a connection request may generate multiple parallel connections bundled together as one virtual connection.

Multiple point-to-point connections may be managed by the network so as to appear as a single compound connection to the end-points. Examples of such bundled connections are connections based on virtual concatenation, diverse routing, or restorable connections.

The actions required to manage compound connections are the same as the ones outlined for the management of basic connections.

6.4. Levels of Transparency

Opaque connections are framing- and bit-rate-dependent: the exact signal framing is known or needs to be negotiated between the network operator and its clients. However, there may be multiple levels of transparency for individual framing types. Current transport networks are mostly based on SONET/SDH technology. Therefore, multiple levels have to be considered when defining specific optical services.

The example below shows multiple levels of transparency applicable to SONET/SDH transport:

- Bit transparency in the SONET/SDH frames. This means that the OXCs will not terminate any byte in the SONET OH bytes.

- SONET line and section OH (SDH multiplex and regenerator section OH) are normally terminated, and the network can monitor a large set of parameters. However, if this level of transparency is used, the TOH will be tunneled in unused bytes of the frames and will be recovered at the terminating ONE with its original values.

- Line and section OH are forwarded transparently, keeping their integrity, thus providing the customer the ability to better determine where a failure has occurred; this is very helpful when the connection traverses several carrier networks.

- G.709 OTN signals

6.5. Optical Connection Granularity

The service granularity is determined by the specific technology, framing and bit rate of the physical interface between the ONE and the client at the edge, and by the capabilities of the ONE. The control plane needs to support signaling and routing for all the services supported by the ONE.

The physical connection is characterized by the nominal optical interface rate and other properties such as the protocols supported. However, the consumable attribute is bandwidth. In general, there should not be a one-to-one correspondence imposed between the granularity of the service provided and the maximum capacity of the interface to the user. The bandwidth utilized by the client becomes the logical connection, for which the customer will be charged.

In addition, sub-rate interfaces, such as VT/TU granularity (as low as 1.5 Mb/s), shall be supported by the optical control plane.

The control plane shall support the ITU Rec. G.709 connection granularity for the OTN network.

The control plane shall support the SDH and SONET connection granularity.

In addition, 1 Gb and 10 Gb granularity shall be supported for 1 Gb/s and 10 Gb/s (WAN mode) Ethernet framing types, if implemented in the hardware.

For SAN services the following interfaces have been defined and shall be supported by the control plane if the given interfaces are available on the equipment:

- FC-12
- FC-50
- FC-100
- FC-200

Therefore, sub-rate fabric granularity shall support VT-x/TU-n granularity down to VT1.5/TU-11, consistent with the hardware.

Encoding of service types in the protocols used shall be such that new service types can be added by adding new code point values or objects.
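The extensibility requirement above can be illustrated with a small registry sketch: a new service type is added as a new code point without changing the handling of existing ones. The code point values shown are invented for illustration and are not defined by any protocol.

   # Hypothetical service-type code point registry (values illustrative only).
   SERVICE_TYPE_CODEPOINTS = {
       1: "SONET/SDH",
       2: "OTN (G.709)",
       3: "Ethernet (1 Gb/s)",
       4: "Ethernet (10 Gb/s WAN)",
       5: "FC-100",
   }

   def register_service_type(codepoint: int, name: str) -> None:
       """Adding a new service type is a pure extension: a new code point is
       registered and existing code points keep their meaning."""
       if codepoint in SERVICE_TYPE_CODEPOINTS:
           raise ValueError("code point already assigned")
       SERVICE_TYPE_CODEPOINTS[codepoint] = name

   register_service_type(6, "FC-200")   # new service added without touching old ones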
6.6. Other Service Parameters and Requirements

6.6.1. Classes of Service

We use "service level" to describe priority-related characteristics of connections, such as holding priority, set-up priority, or restoration priority. The intent currently is to allow each carrier to define the actual service level in terms of priority, protection, and restoration options. Therefore, individual carriers will determine the mapping of individual service levels to a specific set of quality features.

Specific protection and restoration options are discussed in Section 10. However, it should be noted that while high-grade services may require allocation of protection or restoration facilities, there may be an application for a low grade of service for which preemptable facilities may be used.

Multiple service level options shall be supported, and the user shall have the option of selecting, over the UNI, a service level for an individual connection.

The control plane shall be capable of mapping individual service classes into specific protection and/or restoration options.

6.6.2. Connection Latency

Connection latency is a parameter required for the support of time-sensitive services such as Fiber Channel services. Connection latency is dependent on the circuit length, and as such, for these services it is essential that shortest-path algorithms are used and that end-to-end latency is verified before acknowledging circuit availability.

The control plane shall support a latency-based routing constraint (such as distance) as a path selection parameter.

6.6.3. Diverse Routing Attributes

The ability to route service paths diversely is a highly desirable feature. Diverse routing is one of the connection parameters and is specified at the time of connection creation. The following provides a basic set of requirements for diverse routing support.

Diversity between two links being used for routing should be defined in terms of link disjointness, node disjointness, or Shared Risk Link Groups (SRLG), where an SRLG is defined as a group of links that share some risk-prone resource, such as a specific sequence of conduits or a specific office. An SRLG is a relationship between links that should be characterized by two parameters:

- Type of Compromise: Examples would be shared fiber cable, shared conduit, shared right-of-way (ROW), shared link on an optical ring, shared office (no power sharing), etc.

- Extent of Compromise: For compromised outside plant, this would be the length of the sharing.

The control plane routing algorithms shall be able to route a single demand diversely from N previously routed demands in terms of link-disjoint, node-disjoint and SRLG-disjoint paths.
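As an illustration of the SRLG-disjointness requirement, the following sketch checks whether a candidate path shares any risk group with a set of previously routed demands. The data structures are assumptions for illustration; a real implementation would draw SRLG membership from the routing database, and link and node disjointness would be checked in an analogous way.

   def srlgs_of_path(path, link_srlgs):
       """Union of the SRLG identifiers of every link on a path.
       'link_srlgs' maps a link id to the set of SRLGs it belongs to."""
       groups = set()
       for link in path:
           groups |= link_srlgs.get(link, set())
       return groups

   def is_srlg_disjoint(candidate_path, existing_paths, link_srlgs):
       """True if the candidate path shares no SRLG with any previously
       routed demand."""
       candidate_groups = srlgs_of_path(candidate_path, link_srlgs)
       for path in existing_paths:
           if candidate_groups & srlgs_of_path(path, link_srlgs):
               return False
       return True

   # Example: two links riding in the same conduit belong to SRLG "conduit-17",
   # so a path over L1 is not SRLG-disjoint from a path over L2.
   link_srlgs = {"L1": {"conduit-17"}, "L2": {"conduit-17"}, "L3": {"row-9"}}
   assert not is_srlg_disjoint(["L1"], [["L2"]], link_srlgs)
   assert is_srlg_disjoint(["L3"], [["L2"]], link_srlgs)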
7. Optical Service Provider Requirements

This section discusses specific service control and management requirements from the service provider's point of view.

7.1. Access Methods to Optical Networks

Multiple access methods shall be supported:

- Cross-office access (user NE co-located with the ONE)

In this scenario the user edge device resides in the same office as the ONE and has one or more physical connections to the ONE. Some of these access connections may be in use, while others may be idle pending a new connection request.

- Direct remote access

In this scenario the user edge device is remotely located from the ONE and has inter-location connections to the ONE over multiple fiber pairs or via a DWDM system. Some of these connections may be in use, while others may be idle pending a new connection request.

- Remote access via an access sub-network

In this scenario remote user edge devices are connected to the ONE via a multiplexing/distribution sub-network. Several levels of multiplexing may be assumed in this case. This scenario is applicable to metro/access subnetworks carrying signals from multiple users, of which only a subset have connectivity to the ONE.

All of the above access methods must be supported.

7.2. Dual Homing and Network Interconnections

Dual homing is a special case of the access network. Client devices can be dual-homed to the same or different hubs, the same or different access networks, the same or different core networks, or the same or different carriers. The different levels of dual homing connectivity result in many different combinations of configurations. The main objective of dual homing is enhanced survivability.

The different configurations of dual homing will have a great impact on admission control, reachability information exchange, authentication, and neighbor and service discovery across the interface.

Dual homing must be supported.

7.3. Inter-domain connectivity

A domain is a portion of a network, or an entire network, that is controlled by a single control plane entity. This section discusses the various requirements for connecting domains.

7.3.1. Multi-Level Hierarchy

Traditionally, transport networks are divided into core inter-city long-haul networks, regional intra-city metro networks and access networks. Due to the differences in transmission technologies, service, and multiplexing needs, the three types of networks are served by different types of network elements and often have different capabilities. The diagram below shows an example three-level hierarchical network.

                       +--------------+
                       |  Core Long   |
        +--------------+     Haul     +--------------+
        |              |  Subnetwork  |              |
        |              +--------------+              |
+-------+------+                             +-------+------+
|              |                             |              |
|   Regional   |                             |   Regional   |
|  Subnetwork  |                             |  Subnetwork  |
+-------+------+                             +-------+------+
        |                                            |
+-------+------+                             +-------+------+
|              |                             |              |
| Metro/Access |                             | Metro/Access |
|  Subnetwork  |                             |  Subnetwork  |
+--------------+                             +--------------+

              Figure 2 Multi-level hierarchy example

Functionally, we can often see a clear split among the three types of networks: the core long-haul network deals primarily with facility transport and switching, and SONET signals at STS-1 and higher rates constitute the units of transport.
Regional networks will be more closely tied to service support, and VT-level signals also need to be switched. As an example of this interaction, a device switching DS1 signals interfaces to other such devices over the long-haul network via STS-1 links. Regional networks will also groom traffic from the metro networks, which generally have direct interfaces to clients and support a highly varied mix of services. It should be noted that, although not shown in Figure 2, metro/access subnetworks may have interfaces to the core network without having to go through a regional network.

Routing and signaling for multi-level hierarchies shall be supported to allow carriers to configure their networks as needed.

7.3.2. Network Interconnections

Subnetworks may have multiple points of interconnection. All relevant NNI functions, such as routing, reachability information exchange, and interconnection topology discovery, must recognize and support multiple points of interconnection between subnetworks. Dual interconnection is often used as a survivable architecture.

Such an interconnection is a special case of a mesh network, especially if the subnetworks are connected via an I-NNI, i.e., they are within the same administrative domain. In this case the control plane requirements described in Section 8 will also apply to the interconnected subnetworks, and are therefore not discussed here.

However, there are additional requirements if the interconnection is across different domains, via an E-NNI. These additional requirements include the communication of failure handling functions, routing, load sharing, etc., while adhering to pre-negotiated agreements on these functions across the boundary nodes of the multiple domains. Subnetwork interconnection may alternatively be achieved via a separate subnetwork. In this case, the above requirements stay the same, but need to be communicated over the interconnecting subnetwork, similar to the E-NNI scenario described above.

7.4. Bearer Interface Types

All the bearer interfaces implemented in the ONE shall be supported by the control plane and the associated signaling protocols.

The following interface types shall be supported by the signaling protocol:

- SDH
- SONET
- 1 Gb Ethernet, 10 Gb Ethernet (WAN mode)
- 10 Gb Ethernet (LAN mode)
- FC-N (N = 12, 50, 100, or 200) for Fiber Channel services
- OTN (G.709)
- PDH
- Transparent optical

7.5. Names and Address Management

7.5.1. Address Space Separation

To ensure the scalability of, and a smooth migration toward, the switched optical network, the separation of three address spaces is required:

- Internal transport network addresses
- Transport Network Assigned (TNA) addresses
- Client addresses

7.5.2. Directory Services

Directory services shall be supported to enable an operator to query the optical network for the optical network address of a specified user. Address resolution and translation between the various user edge device names and the corresponding optical network addresses shall be supported. The UNI shall use the user naming schemes for connection requests.
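A minimal sketch of the directory function described above is shown below, keeping the three address spaces of Section 7.5.1 separate. The mapping contents and helper names are hypothetical and purely illustrative.

   # Three separate address spaces: internal transport addresses stay inside
   # the network, TNA addresses identify attachment points, and client names
   # are what UNI requests may carry.
   CLIENT_NAME_TO_TNA = {
       "router-nyc-01": "TNA:192.0.2.17/port-3",
       "router-lax-02": "TNA:192.0.2.41/port-1",
   }
   TNA_TO_INTERNAL = {
       "TNA:192.0.2.17/port-3": "oxc-7/shelf-2/port-12",   # never exposed over E-UNI
       "TNA:192.0.2.41/port-1": "oxc-3/shelf-1/port-4",
   }

   def resolve_client_name(client_name: str) -> str:
       """Directory lookup used when a UNI connection request names the
       destination by its client name rather than by a TNA address."""
       try:
           return CLIENT_NAME_TO_TNA[client_name]
       except KeyError:
           raise LookupError(f"no TNA address registered for {client_name}")

   assert resolve_client_name("router-lax-02") == "TNA:192.0.2.41/port-1"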
The identifiers may be re-used across multiple domains. 1262 In that case, unique identification of a network element is achieved 1263 by associating its local identity with the global identity of its 1264 domain. 1266 7.6. Policy-Based Service Management Framework 1268 The IPO service must be supported by a robust policy-based management 1269 system that is able to make important policy decisions. 1271 Examples of policy decisions include: - What types of connections can 1272 be set up for a given UNI? 1274 - What information can be shared and what information must be 1275 restricted in automatic discovery functions? 1277 - What are the security policies over signaling interfaces? 1278 - Which border nodes should be used for routing? This may depend on factors 1279 including, but not limited to, the source and destination addresses, border 1280 node loading, and the time of the connection request. 1282 Requirements: - Service and network policies related to configuration 1283 and provisioning, admission control, and support of Service Level 1284 Agreements (SLAs) must be flexible, and at the same time simple and 1285 scalable. 1287 - The policy-based management framework must be based on standards- 1288 based policy systems (e.g., IETF COPS). 1290 - In addition, the IPO service management system must support and be 1291 backwards compatible with legacy service management systems. 1293 7.7. Support of Hierarchical Routing and Signaling 1295 The routing protocol(s) shall support hierarchical routing 1296 information dissemination, including topology information aggregation 1297 and summarization. 1299 The routing protocol(s) shall minimize global information and keep 1300 information locally significant as much as possible. 1302 Over external interfaces, only reachability information, next routing 1303 hop and service capability information should be exchanged. Any other 1304 network-related information shall not leak out to other networks. 1306 8. Control Plane Functional Requirements for Optical Services 1308 This section addresses the requirements for the optical control plane 1309 in support of service provisioning. 1311 The scope of the control plane includes the control of the interfaces 1312 and network resources within an optical network and the interfaces 1313 between the optical network and its client networks. In other words, it 1314 includes both NNI and UNI aspects. 1316 8.1. Control Plane Capabilities and Functions 1318 The control capabilities are supported by the underlying control 1319 functions and protocols built into the control plane. 1321 8.1.1. Network Control Capabilities 1323 The following capabilities are required in the network control plane 1324 to successfully deliver automated provisioning for optical services: 1325 - Neighbor, service and topology discovery 1327 - Address assignment and resolution 1329 - Routing information propagation and dissemination 1331 - Path calculation and selection 1333 - Connection management 1335 These capabilities may be supported by a combination of functions 1336 across the control and the management planes. 1338 8.1.2. Control Plane Functions for Network Control 1340 The following are essential functions needed to support network 1341 control capabilities: 1342 - Signaling 1343 - Routing 1344 - Automatic resource, service and neighbor discovery 1346 Specific requirements for signaling, routing and discovery are 1347 addressed in Section 9.
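As an illustration only (not part of the requirements), the following Python sketch shows how the control functions listed above -- discovery, routing information dissemination, path calculation and signaling -- might be composed to serve a single connection request. All class, method and node names are hypothetical.

   # Illustrative sketch, not from this draft: composing discovery, routing
   # information, path calculation and signaling to serve one request.
   class ControlPlaneNode:
       def __init__(self, node_id):
           self.node_id = node_id
           self.neighbors = {}    # filled by automatic neighbor discovery
           self.topology = {}     # filled by routing information dissemination

       def discover_neighbors(self, discovered):
           """Record adjacencies learned via neighbor/service discovery."""
           self.neighbors.update(discovered)

       def update_topology(self, advertisements):
           """Absorb topology/resource routing advertisements."""
           self.topology.update(advertisements)

       def compute_path(self, src, dst):
           """Path calculation and selection over the learned topology.
           A real implementation would apply constraint-based routing
           (see Section 9.4); here the route is simply looked up."""
           return self.topology.get((src, dst))

       def signal_connection(self, route):
           """Connection management: drive per-hop signaling along the route."""
           return {"route": route,
                   "status": "ESTABLISHED" if route else "REJECTED"}

   # Usage: one connection request exercises the capabilities in turn.
   node = ControlPlaneNode("OXC-1")
   node.update_topology({("A", "B"): ["OXC-1", "OXC-2", "OXC-3"]})
   print(node.signal_connection(node.compute_path("A", "B")))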
1349 The general requirements for the control plane functions to support 1350 optical networking and service functions include: - The control plane 1351 must have the capability to establish, teardown and maintain the end- 1352 to-end connection, and the hop-by-hop connection segments between any 1353 two end-points. 1355 - The control plane must have the capability to support traffic- 1356 engineering requirements including resource discovery and 1357 dissemination, constraint-based routing and path computation. 1359 - The control plane shall support network status or action result 1360 code responses to any requests over the control interfaces. 1362 - The control plane shall support resource allocation on both UNI and 1363 NNI. 1365 - Upon successful connection teardown all resources associated with 1366 the connection shall become available for access for new requests. 1368 - The control plane shall support management plane request for 1369 connection attributes/status query. 1371 - The control plane must have the capability to support various 1372 protection and restoration schemes for the optical channel 1373 establishment. 1375 - Control plane failures shall not affect active connections. 1377 - The control plane shall be able to trigger restoration based on 1378 alarms or other indications of failure. 1380 8.2. Signaling Network 1382 The signaling network consists of a set of signaling channels that 1383 interconnect the nodes within the control plane. Therefore, the 1384 signaling network must be accessible by each of the communicating 1385 nodes (e.g., OXCs). 1387 - The signaling network must terminate at each of the nodes in the 1388 transport plane. 1390 - The signaling network shall not be assumed to have the same 1391 topology as the data plane, nor shall the data plane and control 1392 plane traffic be assumed to be congruently routed. A signaling 1393 channel is the communication path for transporting control messages 1394 between network nodes, and over the UNI (i.e., between the UNI entity 1395 on the user side (UNI-C) and the UNI entity on the network side (UNI- 1396 N)). The control messages include signaling messages, routing 1397 information messages, and other control maintenance protocol messages 1398 such as neighbor and service discovery. There are three different 1399 types of signaling methods depending on the way the signaling channel 1400 is constructed: - In-band signaling: The signaling messages are 1401 carried over a logical communication channel embedded in the data- 1402 carrying optical link or channel. For example, using the overhead 1403 bytes in SONET data framing as a logical communication channel falls 1404 into the in-band signaling methods. 1406 - In fiber, Out-of-band signaling: The signaling messages are carried 1407 over a dedicated communication channel separate from the optical 1408 data-bearing channels, but within the same fiber. For example, a 1409 dedicated wavelength or TDM channel may be used within the same fiber 1410 as the data channels. 1412 - Out-of-fiber signaling: The signaling messages are carried over a 1413 dedicated communication channel or path within different fibers to 1414 those used by the optical data-bearing channels. For example, 1415 dedicated optical fiber links or communication path via separate and 1416 independent IP-based network infrastructure are both classified as 1417 out-of-fiber signaling. 1419 In-band signaling may be used over a UNI interface, where there are 1420 relatively few data channels. 
Proxy signaling is also important over 1421 the UNI interface, as it is useful to support users unable to signal 1422 to the optical network via a direct communication channel. In this 1423 situation, a third-party system containing the UNI-C entity will 1424 initiate and process the information exchange on behalf of the user 1425 device. The UNI-C entities in this case reside outside of the user device, in 1426 separate signaling systems. 1428 In-fiber, out-of-band and out-of-fiber signaling channel alternatives 1429 are usually used for NNI interfaces, which generally have significant 1430 numbers of channels per link. Signaling messages relating to all of 1431 the different channels can then be aggregated over a single or small 1432 number of signaling channels. 1434 The signaling network forms the basis of the transport network 1435 control plane. - The signaling network shall support reliable 1436 message transfer. 1438 - The signaling network shall have its own OAM mechanisms. 1440 - The signaling network shall use protocols that support congestion 1441 control mechanisms. 1443 In addition, the signaling network should support message priorities. 1444 Message prioritization allows time-critical messages, such as those 1445 used for restoration, to have priority over other messages, such as 1446 other connection signaling messages and topology and resource 1447 discovery messages. 1449 The signaling network must be highly scalable, with minimal 1450 performance degradation as the number of nodes and node sizes 1451 increase. 1453 The signaling network shall be highly reliable and implement failure 1454 recovery. 1456 Security and resilience are crucial issues for the signaling network 1457 and are addressed in Sections 10 and 11 of this document. 1459 8.3. Control Plane Interface to Data Plane 1461 In the situation where the control plane and data plane are provided 1462 by different suppliers, this interface needs to be standardized. 1463 Requirements for a standard control-data plane interface are under 1464 study. The control plane interface to the data plane is outside the scope 1465 of this document. 1467 8.4. Management Plane Interface to Data Plane 1469 The management plane is responsible for identifying which network 1470 resources the control plane may use to carry out its control 1471 functions. Additional resources may be allocated or existing 1472 resources deallocated over time. 1474 Resources that can be allocated to the control plane for 1475 control plane functions include resources involved in setting up and 1476 tearing down calls and control-plane-specific resources. Resources 1477 allocated to the control plane for the purpose of setting up and 1478 tearing down calls include access groups (a set of access points) and 1479 connection point groups (a set of connection points). Resources 1480 allocated to the control plane for the operation of the control plane 1481 itself may include protected and protecting control channels. 1483 Resources allocated to the control plane by the management plane 1484 shall be able to be de-allocated from the control plane on management 1485 plane request. 1487 If resources are supporting an active connection and the resources 1488 are requested to be de-allocated by the management plane, the control 1489 plane shall reject the request. The management plane must either 1490 wait until the resources are no longer in use or tear down the 1491 connection before the resources can be de-allocated from the control 1492 plane.
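The allocation and de-allocation rules above can be illustrated with a small Python sketch. This is a non-normative illustration; the class, method and resource names are invented for the example.

   # Illustrative sketch of the resource allocation rules described above:
   # the management plane allocates resources to the control plane, and a
   # de-allocation request is rejected while the resource still supports an
   # active connection.
   class ControlPlaneResourcePool:
       def __init__(self):
           self.allocated = set()   # resources handed over by the management plane
           self.in_use = {}         # resource -> connection id when carrying traffic

       def allocate(self, resource):
           """Management plane allocates a resource to the control plane."""
           self.allocated.add(resource)

       def bind(self, resource, connection_id):
           """Control plane uses an allocated resource for an active connection."""
           if resource not in self.allocated:
               raise ValueError("resource not allocated to the control plane")
           self.in_use[resource] = connection_id

       def unbind(self, resource):
           """Connection torn down; the resource is free again."""
           self.in_use.pop(resource, None)

       def deallocate(self, resource):
           """Management plane requests de-allocation; reject if still in use."""
           if resource in self.in_use:
               return False   # management plane must wait or tear the connection down
           self.allocated.discard(resource)
           return True

   pool = ControlPlaneResourcePool()
   pool.allocate("port-7/1")
   pool.bind("port-7/1", "conn-42")
   assert pool.deallocate("port-7/1") is False   # rejected: active connection
   pool.unbind("port-7/1")                       # connection torn down first
   assert pool.deallocate("port-7/1") is True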
Management plane failures shall not affect active connections. 1494 Management plane failures shall not affect the normal operation of a 1495 configured and operational control plane or data plane. 1497 8.5. Control Plane Interface to Management Plane 1499 The control plane is considered a managed entity within a network. 1500 Therefore, it is subject to management requirements just as other 1501 managed entities in the network are. 1503 8.5.1. Soft Permanent Connections (Point-and-Click Provisioning) 1505 In the case of SPCs, the management plane requests the control plane 1506 to set up or tear down a connection, just as a client can do over a 1507 UNI. 1509 The management plane shall be able to query on demand the status of 1510 the connection request. The control plane shall report to the 1511 management plane the success or failure of a connection request. Upon 1512 a connection request failure, the control plane shall report to the 1513 management plane a cause code identifying the reason for the failure. 1515 8.5.2. Resource Contention Resolution Since resources are allocated to 1516 the control plane for use, there should not be contention between the 1517 management plane and the control plane for connection set-up. Only 1518 the control plane can establish connections for allocated resources. 1519 However, in general, the management plane shall have authority over 1520 the control plane. 1522 The control plane shall not assume authority over management plane 1523 provisioning functions. 1525 In the case of network failure, both the management plane and the 1526 control plane need fault information at the same priority. 1528 The control plane needs fault information in order to perform its 1529 restoration function (in the event that the control plane is 1530 providing this function). However, the control plane needs less 1531 granular information than that required by the management plane. For 1532 example, the control plane only needs to know whether the resource is 1533 good or bad. The management plane would additionally need to know if a 1534 resource was degraded or failed, the reason for the failure, the 1535 time the failure occurred, and so on. 1537 The control plane shall not assume authority over the management plane 1538 for its management functions (FCAPS). 1540 The control plane shall be responsible for providing necessary 1541 statistical data, such as call counts and traffic counts, to the management 1542 plane. These data should be available upon query from the management 1543 plane. 1545 The control plane shall support a policy-based CAC function either within 1546 the control plane or provide an interface to a policy server outside 1547 the network. 1549 Topological information learned in the discovery process shall be 1550 able to be queried on demand from the management plane. 1552 The management plane shall be able to tear down connections 1553 established by the control plane both gracefully and forcibly on 1554 demand. 1556 8.6. Control Plane Interconnection 1558 When two (sub)networks are interconnected at the transport plane level, 1559 the two corresponding control networks should likewise be interconnected at 1560 the control plane level. The control plane interconnection model defines how two 1561 control networks can be interconnected, in terms of the controlling 1562 relationship and the control information flow allowed between them. 1564 8.6.1.
Interconnection Models 1566 There are three basic types of control plane network interconnection 1567 models: overlay, peer and hybrid, which are defined by the IETF IPO 1568 WG document [IPO_frame]. 1570 Choosing the level of coupling depends upon a number of different 1571 factors, some of which are: 1573 - Variety of clients using the optical network 1575 - Relationship between the client and optical network 1577 - Operating model of the carrier 1579 The overlay model (UNI-like model) shall be supported for client-to-optical 1580 control plane interconnection. 1582 Other models are optional for client-to-optical control plane 1583 interconnection. 1585 For optical-to-optical control plane interconnection, all three models 1586 shall be supported. 1588 9. Requirements for Signaling, Routing and Discovery 1590 9.1. Requirements for Information Sharing over UNI, I-NNI and E-NNI 1592 There are three types of interfaces where routing information 1593 dissemination may occur: UNI, I-NNI and E-NNI. Different types of 1594 interfaces impose different requirements and functionality due 1595 to their different trust relationships. Over the UNI, the user network 1596 and the transport network form a client-server relationship. 1597 Therefore, the transport network topology shall not be disseminated 1598 from the transport network to the user network. 1600 Information flows expected over the UNI shall support the following: 1601 - Call Control 1602 - Resource Discovery 1603 - Connection Control 1604 - Connection Selection 1606 Address resolution exchange over the UNI is needed if an addressing 1607 directory service is not available. 1609 Information flows over the I-NNI shall support the following: 1610 - Resource Discovery 1611 - Connection Control 1612 - Connection Selection 1613 - Connection Routing 1615 Information flows over the E-NNI shall support the following: 1617 - Call Control 1618 - Resource Discovery 1619 - Connection Control 1620 - Connection Selection 1621 - Connection Routing 1623 9.2. Signaling Functions 1625 Call and connection control and management signaling messages are 1626 used for the establishment, modification, status query and release of 1627 an end-to-end optical connection. 1629 9.2.1. Call and Connection Control 1631 To support many enhanced optical services, such as scheduled 1632 bandwidth on demand and bundled connections, a call model based on 1633 the separation of call control and connection control is 1634 essential. The call control is responsible for end-to-end session 1635 negotiation, call admission control and call state maintenance, while 1636 connection control is responsible for setting up the connections 1637 associated with a call. A call can correspond to zero, one or more 1638 connections, depending upon the number of connections needed to 1639 support the call. 1641 This call model has the advantage of reducing redundant call control 1642 information at intermediate (relay) connection control nodes, thereby 1643 removing the burden of decoding and interpreting the entire message 1644 and its parameters, since the call control is provided at the ingress 1645 to the network or at gateways and network boundaries. As such, the 1646 relay bearer needs only provide the procedures to support switching 1647 connections. 1649 Call control is a signaling association between one or more user 1650 applications and the network to control the set-up, release, 1651 modification and maintenance of sets of connections.
Call control is 1652 used to maintain the association between parties and a call may 1653 embody any number of underlying connections, including zero, at any 1654 instance of time. 1656 Call control may be realized by one of the following methods: 1658 - Separation of the call information into parameters carried by a 1659 single call/connection protocol 1661 - Separation of the state machines for call control and connection 1662 control, whilst signaling information in a single call/connection 1663 protocol 1665 - Separation of information and state machines by providing separate 1666 signaling protocols for call control and connection control 1668 Call admission control is a policy function invoked by an 1669 Originating role in a Network and may involve cooperation with the 1670 Terminating role in the Network. Note that a call being allowed to 1671 proceed only indicates that the call may proceed to request one or 1672 more connections. It does not imply that any of those connection 1673 requests will succeed. Call admission control may also be invoked at 1674 other network boundaries. 1676 Connection control is responsible for the overall control of 1677 individual connections. Connection control may also be considered to 1678 be associated with link control. The overall control of a connection 1679 is performed by the protocol undertaking the set-up and release 1680 procedures associated with a connection and the maintenance of the 1681 state of the connection. 1683 Connection admission control is essentially a process that determines 1684 if there are sufficient resources to admit a connection (or re- 1685 negotiates resources during a call). This is usually performed on a 1686 link-by-link basis, based on local conditions and policy. Connection 1687 admission control may refuse the connection request. 1689 Control plane shall support the separation of call control and 1690 connection control. 1692 Control plane shall support proxy signaling. 1694 Inter-domain signaling shall comply with g.8080 and g.7713 (ITU). 1696 The inter-domain signaling protocol shall be agnostic to the intra- 1697 domain signaling protocol within any of the domains within the 1698 network. 1700 Inter-domain signaling shall support both strict and loose routing. 1702 Inter-domain signaling shall not be assumed necessarily congruent 1703 with routing. 1705 It should not be assumed that the same exact nodes are handling both 1706 signaling and routing in all situations. 1708 Inter-domain signaling shall support all call management primitives: 1709 - Per individual connections 1711 - Per groups of connections 1713 Inter-domain signaling shall support inter-domain notifications. 1715 Inter-domain signaling shall support per connection global connection 1716 identifier for all connection management primitives. 1718 Inter-domain signaling shall support both positive and negative 1719 responses for all requests, including the cause, when applicable. 1721 Inter-domain signaling shall support all the connection attributes 1722 representative to the connection characteristics of the individual 1723 connections in scope. 1725 Inter-domain signaling shall support crank-back and rerouting. 1727 Inter-domain signaling shall support graceful deletion of connections 1728 including of failed connections, if needed. 1730 9.3. Routing Functions 1732 Routing includes reachability information propagation, network 1733 topology/resource information dissemination and path computation. 
In 1734 optical network, each connection involves two user endpoints. When 1735 user endpoint A requests a connection to user endpoint B, the optical 1736 network needs the reachability information to select a path for the 1737 connection. If a user endpoint is unreachable, a connection request 1738 to that user endpoint shall be rejected. Network topology/resource 1739 information dissemination is to provide each node in the network with 1740 stabilized and consistent information about the carrier network such 1741 that a single node is able to support constrain-based path selection. 1742 A mixture of hop-by-hop routing, explicit/source routing and 1743 hierarchical routing will likely be used within future transport 1744 networks. Using hop-by-hop message routing, each node within a 1745 network makes routing decisions based on the message destination, and 1746 the network topology/resource information or the local routing tables 1747 if available. However, achieving efficient load balancing and 1748 establishing diverse connections are impractical using hop-by-hop 1749 routing. Instead, explicit (or source) routing may be used to send 1750 signaling messages along a route calculated by the source. This 1751 route, described using a set of nodes/links, is carried within the 1752 signaling message, and used in forwarding the message. 1754 Hierarchical routing supports signaling across NNIs. It allows 1755 conveying summarized information across I-NNIs, and avoids conveying 1756 topology information across trust boundaries. Each signaling message 1757 contains a list of the domains traversed, and potentially details of 1758 the route within the domain being traversed. 1760 All three mechanisms (Hop-by-hop routing, explicit / source-based 1761 routing and hierarchical routing) must be supported. Messages 1762 crossing trust boundaries must not contain information regarding the 1763 details of an internal network topology. This is particularly 1764 important in traversing E-UNIs and E-NNIs. Connection routes and 1765 identifiers encoded using topology information (e.g., node 1766 identifiers) must also not be conveyed over these boundaries. 1768 Requirements for routing information dissemination: 1770 Routing protocols must propagate the appropriate information 1771 efficiently to network nodes. 1772 The following requirements apply: 1774 The inter-domain routing protocol shall comply with G.8080 (ITU). 1776 The inter-domain routing protocol shall be agnostic to the intra- 1777 domain routing protocol within any of the domains within the network. 1779 The inter-domain routing protocol shall not impede any of the 1780 following routing paradigms within individual domains: 1782 - Hierarchical routing 1784 - Step-by-step routing 1786 - Source routing 1788 The exchange of the following types of information shall be supported 1789 by inter-domain routing protocols 1791 - Inter-domain topology 1793 - Per-domain topology abstraction 1795 - Per domain reachability information 1797 - Metrics for routing decisions supporting load sharing, a range of 1798 service granularity and service types, restoration capabilities, 1799 diversity, and policy. 1801 Inter-domain routing protocols shall support per domain topology and 1802 resource information abstraction. 1804 Inter-domain protocols shall support reachability information 1805 aggregation. 
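As a non-normative illustration of the per-domain topology abstraction and reachability aggregation required above, the following Python sketch exports only an abstract node and aggregated client reachability over the E-NNI, while internal links and node identifiers are withheld. The data structures and the use of IP prefixes as reachability entries are assumptions made for the example.

   # Hypothetical sketch: summarize a domain for export across a trust
   # boundary, hiding internal topology and aggregating reachability.
   from ipaddress import ip_network, collapse_addresses

   def abstract_domain(domain_id, internal_links, client_prefixes):
       """Build the advertisement a domain might export over the E-NNI."""
       # Internal links and node identifiers are deliberately dropped.
       return {
           "abstract_node": domain_id,
           # aggregate contiguous client prefixes into the fewest covering routes
           "reachability": [str(p) for p in collapse_addresses(
               ip_network(p) for p in client_prefixes)],
       }

   print(abstract_domain(
       "domain-A",
       internal_links=[("oxc1", "oxc2"), ("oxc2", "oxc3")],   # never exported
       client_prefixes=["192.0.2.0/25", "192.0.2.128/25"],
   ))
   # -> {'abstract_node': 'domain-A', 'reachability': ['192.0.2.0/24']}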
1807 A major concern for routing protocol performance is scalability and 1808 stability issues, which impose following requirements on the routing 1809 protocols: 1811 - The routing protocol performance shall not largely depend on the 1812 scale of the network (e.g. the number of nodes, the number of links, 1813 end user etc.). The routing protocol design shall keep the network 1814 size effect as small as possible. 1816 - The routing protocols shall support following scalability 1817 techniques: 1819 1. Routing protocol shall support hierarchical routing information 1820 dissemination, including topology information aggregation and 1821 summarization. 1823 2. The routing protocol shall be able to minimize global information 1824 and keep information locally significant as much as possible (e.g., 1825 information local to a node, a sub-network, a domain, etc). For 1826 example, a single optical node may have thousands of ports. The ports 1827 with common characteristics need not to be advertised individually. 1829 3. Routing protocol shall distinguish static routing information and 1830 dynamic routing information. Static routing information does not 1831 change due to connection operations, such as neighbor relationship, 1832 link attributes, total link bandwidth, etc. On the other hand, 1833 dynamic routing information updates due to connection operations, 1834 such as link bandwidth availability, link multiplexing fragmentation, 1835 etc. 1837 4. The routing protocol operation shall update dynamic and static 1838 routing information differently. Only dynamic routing information 1839 shall be updated in real time. 1841 5. Routing protocol shall be able to control the dynamic information 1842 updating frequency through different types of thresholds. Two types 1843 of thresholds could be defined: absolute threshold and relative 1844 threshold. The dynamic routing information will not be disseminated 1845 if its difference is still inside the threshold. When an update has 1846 not been sent for a specific time (this time shall be configurable 1847 the carrier), an update is automatically sent. Default time could be 1848 30 minutes. 1850 All the scalability techniques will impact the network resource 1851 representation accuracy. The tradeoff between accuracy of the routing 1852 information and the routing protocol scalability should be well 1853 studied. A routing protocol shall allow the network operators to 1854 adjust the balance according to their networks' specific 1855 characteristics. 1857 9.4. Requirements for path selection 1859 The path selection algorithm must be able to compute the path, which 1860 satisfies a list of service parameter requirements, such as service 1861 type requirements, bandwidth requirements, protection requirements, 1862 diversity requirements, bit error rate requirements, latency 1863 requirements, including/excluding area requirements. The 1864 characteristics of a path are those of the weakest link. For example, 1865 if one of the links does not have link protection capability, the 1866 whole path should be declared as having no link-based protection. The 1867 following are functional requirements on path selection. 1869 - Path selection shall support shortest path as well as constraint- 1870 based routing. 
1872 - Various constraints may be required for constraint based path 1873 selection, including but not limited to: 1874 - Cost 1875 - Load Sharing 1876 - Diversity 1877 - Service Class 1879 - Path selection shall be able to include/exclude some specific 1880 locations, based on policy. 1882 - Path selection shall be able to support protection/restoration 1883 capability. Section 10 discusses this subject in more detail. 1885 - Path selection shall be able to support different levels of 1886 diversity, including diversity routing and protection/restoration 1887 diversity. 1889 - Path selection algorithms shall provide carriers the ability to 1890 support a wide range of services and multiple levels of service 1891 classes. Parameters such as service type, transparency, bandwidth, 1892 latency, bit error rate, etc. may be relevant. 1894 - Path selection algorithms shall support a set of requested routing 1895 constraints, and constraints of the networks. Some of the network 1896 constraints are technology specific, such as the constraints in all- 1897 optical networks addressed in [John_Angela_IPO_draft]. The requested 1898 constraints may include bandwidth requirement, diversity 1899 requirements, path specific requirements, as well as restoration 1900 requirements. 1902 9.5. Automatic Discovery Functions 1904 This section describes the requirements for automatic discovery to 1905 aid distributed connection management (DCM) in the context of 1906 automatically switched transport networks (ASTN/ASON), as specified 1907 in ITU-T recommendation G.807. Auto-discovery is applicable to the 1908 User-to-Network Interface (UNI), Network-Node Interfaces (NNI) and to 1909 the Transport Plane Interfaces (TPI) of the ASTN. 1911 Automatic discovery functions include neighbor, resource and service 1912 discovery. 1914 9.5.1. Neighbor discovery 1916 This section provides the requirements for the automatic neighbor 1917 discovery for the UNI and NNI and TPI interfaces. This requirement 1918 does not preclude specific manual configurations that may be required 1919 and in particular does not specify any mechanism that may be used for 1920 optimizing network management. 1922 Neighbor Discovery can be described as an instance of auto-discovery 1923 that is used for associating two subnet points that form a trail or a 1924 link connection in a particular layer network. The association 1925 created through neighbor discovery is valid so long as the trail or 1926 link connection that forms the association is capable of carrying 1927 traffic. This is referred to as transport plane neighbor discovery. 1928 In addition to transport plane neighbor discovery, auto-discovery can 1929 also be used for distributed subnet controller functions to establish 1930 adjacencies. This is referred to as control plane neighbor 1931 discovery. It should be noted that the Sub network points that are 1932 associated, as part of neighbor discovery do not have to be contained 1933 in network elements with physically adjacent ports. Thus neighbor 1934 discovery is specific to the layer in which connections are to be 1935 made and consequently is principally useful only when the network has 1936 switching capability at this layer. Further details on neighbor 1937 discovery can be obtained from ITU-T draft recommendations G.7713 and 1938 G.7714. 1940 Both control plane and transport plane neighbor discovery shall be 1941 supported. 1943 9.5.2. 
Resource Discovery 1945 Resource discovery can be described as an instance of auto-discovery 1946 that is used for verifying the physical connectivity between two 1947 ports on adjacent network elements in the network. Resource 1948 discovery is also concerned with the ability to improve inventory 1949 management of network resources, detect configuration mismatches 1950 between adjacent ports, associating port characteristics of adjacent 1951 network elements, etc. 1953 Resource discovery happens between neighbors. A mechanism designed 1954 for a technology domain can be applied to any pair of NEs 1955 interconnected through interfaces of the same technology. However, 1956 because resource discovery means certain information disclosure 1957 between two business domains, it is under the service providers' 1958 security and policy control. In certain network scenario, a service 1959 provider who owns the transport network may not be willing to 1960 disclose any internal addressing scheme to its client. So a client NE 1961 may not have the neighbor NE address and port ID in its NE level 1962 resource table. 1964 Interface ports and their characteristics define the network element 1965 resources. Each network can store its resources in a local table that 1966 could include switching granularity supported by the network element, 1967 ability to support concatenated services, range of bandwidths 1968 supported by adaptation, physical attributes signal format, 1969 transmission bit rate, optics type, multiplexing structure, 1970 wavelength, and the direction of the flow of information. Resource 1971 discovery can be achieved through either manual provisioning or 1972 automated procedures. The procedures are generic while the specific 1973 mechanisms and control information can be technology dependent. 1975 Resource discovery can be achieved in several methods. One of the 1976 methods is the self-resource discovery by which the NE populates its 1977 resource table with the physical attributes and resources. Neighbor 1978 discovery is another method by which NE discovers the adjacencies in 1979 the transport plane and their port association and populates the 1980 neighbor NE. After neighbor discovery resource verification and 1981 monitoring must be performed to verify physical attributes to ensure 1982 compatibility. Resource monitoring must be performed periodically 1983 since neighbor discovery and port association are repeated 1984 periodically. Further information can be found in [GMPLS-ARCH]. 1986 Resource discovery shall be supported. 1988 9.5.3. Service Discovery 1990 Service Discovery can be described as an instance of auto-discovery 1991 that is used for verifying and exchanging service capabilities that 1992 are supported by a particular link connection or trail. It is 1993 assumed that service discovery would take place after two Sub Network 1994 Points within the layer network are associated through neighbor 1995 discovery. However, since service capabilities of a link connection 1996 or trail can dynamically change, service discovery can take place at 1997 any time after neighbor discovery and any number of times as may be 1998 deemed necessary. 2000 Service discovery is required for all the optical services supported. 2002 10. Requirements for service and control plane resiliency 2004 Resiliency is a network capability to continue its operations under 2005 the condition of failures within the network. 
The automatically switched 2006 optical network assumes the separation of the control plane and data 2007 plane. Therefore, the failures in the network can be divided into 2008 those affecting the data plane and those affecting the control plane. 2009 To provide enhanced optical services, resiliency measures in both the 2010 data plane and the control plane should be implemented. The following 2011 failure handling principles shall be supported. 2013 The control plane shall provide the failure detection and recovery 2014 functions such that the failures in the data plane within the control 2015 plane coverage can be quickly mitigated. 2017 The failure of the control plane shall not in any way adversely affect 2018 the normal functioning of existing optical connections in the data 2019 plane. 2021 10.1. Service Resiliency 2023 In circuit-switched transport networks, the quality and reliability 2024 of the established optical connections in the transport plane can be 2025 enhanced by the protection and restoration mechanisms provided by the 2026 control plane functions. Rapid recovery is required by transport 2027 network providers to protect service and also to support stringent 2028 Service Level Agreements (SLAs) that dictate high reliability and 2029 availability for customer connectivity. 2031 The choice of a protection/restoration mechanism is a tradeoff 2032 between network resource utilization (cost) and service interruption 2033 time. Clearly, minimizing service interruption time is desirable, but 2034 schemes achieving this usually do so at the expense of network 2035 resources, resulting in increased cost to the provider. Different 2036 protection/restoration schemes differ in their spare capacity 2037 requirements and service interruption time. 2039 In light of these tradeoffs, transport providers are expected to 2040 support a range of different levels of service offerings, 2041 characterized by the recovery speed in the event of network failures. 2042 For example, a provider's highest offered service level would 2043 generally ensure the most rapid recovery from network failures. 2044 However, such schemes (e.g., 1+1, 1:1 protection) generally use a 2045 large amount of spare restoration capacity, and are thus not cost 2046 effective for most customer applications. Significant reductions in 2047 spare capacity can be achieved by protection and restoration using 2048 shared network resources. 2050 Clients will have different requirements for connection availability. 2051 These requirements can be expressed in terms of the "service level", 2052 which can be mapped to different restoration and protection options 2053 and priority-related connection characteristics, such as holding 2054 priority (e.g., pre-emptable or not), set-up priority, or restoration 2055 priority. However, the mapping of individual service levels to a 2056 specific set of protection/restoration options and connection 2057 priorities will be determined by individual carriers. 2059 In order for the network to support multiple grades of service, the 2060 control plane must support differing protection and restoration 2061 options on a per-connection basis. 2063 In order for the network to support multiple grades of service, the 2064 control plane must support setup priority, restoration priority and 2065 holding priority on a per-connection basis.
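Since the mapping of service levels to protection/restoration options and connection priorities is left to individual carriers, the following Python sketch shows only one hypothetical mapping; the level names and priority values are illustrative assumptions, not recommendations.

   # Illustrative sketch only: one possible carrier-defined mapping of service
   # levels to protection/restoration options and per-connection priorities
   # (set-up, restoration and holding priority).
   from dataclasses import dataclass

   @dataclass(frozen=True)
   class ServiceLevel:
       protection: str          # e.g. "1+1", "1:N", "shared-restoration", "unprotected"
       setup_priority: int      # lower number = higher priority
       restoration_priority: int
       holding_priority: int
       preemptable: bool

   SERVICE_LEVELS = {
       "premium":     ServiceLevel("1+1",                0, 0, 0, preemptable=False),
       "protected":   ServiceLevel("1:N",                1, 1, 1, preemptable=False),
       "restorable":  ServiceLevel("shared-restoration", 2, 2, 2, preemptable=True),
       "best-effort": ServiceLevel("unprotected",        3, 3, 3, preemptable=True),
   }

   def options_for(service_level_name):
       """Return the per-connection protection/restoration options and
       priorities the control plane should apply for this service level."""
       return SERVICE_LEVELS[service_level_name]

   print(options_for("restorable"))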
2067 In general, the following protection schemes shall be considered for 2068 all protection cases within the network: 2069 - Dedicated protection: 1+1 and 1:1 2070 - Shared protection: 1:N and M:N 2071 - Unprotected 2073 In general, the following restoration schemes should be considered 2074 for all restoration cases within the network: 2075 - Shared restoration capacity 2076 - Un-restorable 2078 Protection and restoration can be done on an end-to-end basis per 2079 connection. It can also be done on a per-span or per-link basis between 2080 two adjacent network nodes. Specifically, the link can be a network 2081 link between two nodes within the network, where the P&R scheme 2082 operates across an NNI interface, or a drop-side link between the edge 2083 device and a switch node, where the P&R scheme operates across a UNI 2084 interface. End-to-end path protection and restoration schemes operate 2085 between access points across all NNI and UNI interfaces supporting 2086 the connection. 2088 In order for the network to support multiple grades of service, the 2089 control plane must support differing protection and restoration 2090 options on a per-link or per-span basis within the network. 2092 In order for the network to support multiple grades of service, the 2093 control plane must support differing protection and restoration 2094 options on a per-link or per-span basis for dropped customer connections. 2096 Protection and restoration actions are usually triggered by 2097 failures in the network. However, during network maintenance 2098 affecting protected connections, a network operator needs to 2099 proactively force the traffic on a protected connection to switch 2100 to its protection connection. Therefore, in order to support easy 2101 network maintenance, it is required that management-initiated protection 2102 and restoration be supported. 2104 To support the protection/restoration options: The control plane 2105 shall support configurable protection and restoration options via 2106 software commands (as opposed to needing hardware reconfigurations) 2107 to change the protection/restoration mode. 2109 The control plane shall support mechanisms to establish primary and 2110 protection paths. 2112 The control plane shall support mechanisms to modify protection 2113 assignments, subject to service protection constraints. 2115 The control plane shall support methods for fault notification to the 2116 nodes responsible for triggering restoration/protection (note that 2117 the transport plane is designed to provide the needed information 2118 between termination points; this information is expected to be 2119 utilized as appropriate). 2121 The control plane shall support mechanisms for signaling rapid re- 2122 establishment of connection connectivity after failure. 2124 The control plane shall support mechanisms for reserving bandwidth 2125 resources for restoration. 2127 The control plane shall support mechanisms for normalizing connection 2128 routing (reversion) after failure repair. 2130 The signaling control plane should implement signaling message 2131 priorities to ensure that restoration messages receive preferential 2132 treatment, resulting in faster restoration. 2134 Normal connection management operations (e.g., connection deletion) 2135 shall not result in protection/restoration being initiated.
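The recommendation above that restoration messages receive preferential treatment could be realized with a simple priority queue, as in the following non-normative Python sketch; the message classes and priority values are assumptions.

   # Minimal sketch (an assumption, not a protocol definition) of signaling
   # message prioritization: restoration messages are dequeued ahead of
   # ordinary connection signaling and topology/resource discovery messages.
   import heapq
   import itertools

   PRIORITY = {"restoration": 0, "connection-signaling": 1, "discovery": 2}
   _counter = itertools.count()      # tie-breaker keeps FIFO order within a class

   signaling_queue = []

   def enqueue(msg_class, payload):
       heapq.heappush(signaling_queue, (PRIORITY[msg_class], next(_counter), payload))

   def dequeue():
       """Return the highest-priority pending signaling message."""
       return heapq.heappop(signaling_queue)[2]

   enqueue("discovery", "hello OXC-7")
   enqueue("connection-signaling", "setup conn-42")
   enqueue("restoration", "switchover conn-17")   # arrives last ...
   print(dequeue())                               # ... but is processed first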
2137 Restoration shall not result in misconnections (connections 2138 established to a destination other than that intended), even for 2139 short periods of time (e.g., during contention resolution). For 2140 example, signaling messages used to restore connectivity after 2141 failure should not be forwarded by a node before contention has been 2142 resolved. 2144 In the event that there is insufficient bandwidth available to 2145 restore all connections, restoration priorities/pre-emption should 2146 be used to determine which connections should be allocated the 2147 available capacity. 2149 The amount of restoration capacity reserved on the restoration paths 2150 determines the robustness of the restoration scheme to failures. For 2151 example, a network operator may choose to reserve sufficient capacity 2152 to ensure that all shared restorable connections can be recovered in 2153 the event of any single failure event (e.g., a conduit being cut). A 2154 network operator may instead reserve more or less capacity than 2155 required to handle any single failure event, or may alternatively 2156 choose to reserve only a fixed pool independent of the number of 2157 connections requiring this capacity (i.e., not reserve capacity for 2158 each individual connection). 2160 10.2. Control Plane Resiliency 2162 The control plane may be affected by failures in signaling network 2163 connectivity and by software failures (e.g., signaling, topology and 2164 resource discovery modules). 2166 Fast detection and recovery from failures in the control plane are 2167 important to allow normal network operation to continue in the event 2168 of signaling channel failures. 2170 The optical control plane signaling network shall support protection and 2171 restoration options to enable self-healing in case of failures 2172 within the control plane. The control plane shall support the 2173 necessary options to ensure that no service-affecting module of the 2174 control plane (software modules or control plane communications) is a 2175 single point of failure. The control plane shall provide reliable 2176 transfer of signaling messages and flow control mechanisms for easing 2177 any congestion within the control plane. Control plane failures 2178 shall not cause failure of established data plane connections. 2179 Control network failure detection mechanisms shall distinguish 2180 between control channel and software process failures. 2182 When there are multiple channels (optical fibers or multiple 2183 wavelengths) between network elements and/or client devices, 2184 failure of the control channel will have a much bigger impact on 2185 service availability than in the single-channel case. It is therefore 2186 recommended to support a certain level of protection of the control 2187 channel. Control channel failures may be recovered from either by using 2188 dedicated protection of control channels, or by re-routing control 2189 traffic within the control plane (e.g., using the self-healing 2190 properties of IP). Achieving this requires rapid failure detection 2191 and recovery mechanisms. For dedicated control channel protection, 2192 signaling traffic may be switched onto a backup control channel 2193 between the same adjacent pair of nodes. Such mechanisms protect 2194 against control channel failure, but not against node failure.
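The two recovery alternatives described in this subsection (dedicated backup control channel versus re-routing of control traffic) are illustrated by the following Python sketch; the function and identifiers are hypothetical.

   # Illustrative sketch of control channel recovery: switch to a dedicated
   # backup channel between the same adjacent nodes if one exists, otherwise
   # re-route control traffic around the failed link or node.
   def recover_control_channel(failed_channel, backup_channels, reroute):
       """Return the action taken to restore control plane connectivity."""
       backup = backup_channels.get(failed_channel)
       if backup is not None:
           # Dedicated protection: protects against channel failure,
           # but not against failure of the node itself.
           return ("switchover", backup)
       # No dedicated backup (or a node failure): rely on re-routing within
       # the control plane, e.g. the self-healing properties of an IP network.
       return ("reroute", reroute(failed_channel))

   action = recover_control_channel(
       failed_channel=("OXC-1", "OXC-2"),
       backup_channels={},                              # no dedicated backup here
       reroute=lambda ch: [ch[0], "OXC-3", ch[1]],      # alternate control path
   )
   print(action)   # ('reroute', ['OXC-1', 'OXC-3', 'OXC-2'])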
2196 If a dedicated backup control channel is not available between 2197 adjacent nodes, or if a node failure has occurred, then signaling 2198 messages should be re-routed around the failed link / node. 2200 Fault localization techniques for the isolation of failed control 2201 resources shall be supported. 2203 Recovery from signaling process failures can be achieved by switching 2204 to a standby module, or by re-launching the failed signaling module. 2206 Recovery from software failures shall result in complete recovery of 2207 network state. 2209 Control channel failures may occur during connection establishment, 2210 modification or deletion. If this occurs, then the control channel 2211 failure must not result in partially established connections being 2212 left dangling within the network. Connections affected by a control 2213 channel failure during the establishment process must be removed from 2214 the network, re-routed (cranked back) or continued once the failure 2215 has been resolved. In the case of connection deletion requests 2216 affected by control channel failures, the connection deletion process 2217 must be completed once the signaling network connectivity is 2218 recovered. 2220 Connections shall not be left partially established as a result of a 2221 control plane failure. Connections affected by a control channel 2222 failure during the establishment process must be removed from the 2223 network, re-routed (cranked back) or continued once the failure has 2224 been resolved. Partial connection creations and deletions must be 2225 completed once the control plane connectivity is recovered. 2227 11. Security Considerations 2229 In this section, security considerations and requirements for optical 2230 services and associated control plane requirements are described. 2231 11.1 Optical Network Security Concerns Since optical service is 2232 directly related to the physical network which is fundamental to a 2233 telecommunications infrastructure, stringent security assurance 2234 mechanism should be implemented in optical networks. When designing 2235 equipment, protocols, NMS, and OSS that participate in optical 2236 service, every security aspect should be considered carefully in 2237 order to avoid any security holes that potentially cause dangers to 2238 an entire network, such as Denial of Service (DoS) attack, 2239 unauthorized access, masquerading, etc. 2241 In terms of security, an optical connection consists of two aspects. 2242 One is security of the data plane where an optical connection itself 2243 belongs, and the other is security of the control plane. 2245 11.0.1. Data Plane Security 2247 - Misconnection shall be avoided in order to keep the user's data 2248 confidential. For enhancing integrity and confidentiality of data, 2249 it may be helpful to support scrambling of data at layer 2 or 2250 encryption of data at a higher layer. 2252 11.0.2. Control Plane Security 2254 It is desirable to decouple the control plane from the data plane 2255 physically. 2257 Additional security mechanisms should be provided to guard against 2258 intrusions on the signaling network. Some of these may be done with 2259 the help of the management plane. 2261 - Network information shall not be advertised across exterior 2262 interfaces (E-UNI or E-NNI). The advertisement of network information 2263 across the E-NNI shall be controlled and limited in a configurable 2264 policy based fashion. 
The advertisement of network information shall 2265 be isolated and managed separately by each administration. 2267 - The signaling network itself shall be secure, blocking all 2268 unauthorized access. The signaling network topology and addresses 2269 shall not be advertised outside a carrier's domain of trust. 2271 - Identification, authentication and access control shall be 2272 rigorously used for providing access to the control plane. 2274 - Discovery information, including neighbor discovery, service 2275 discovery, resource discovery and reachability information should be 2276 exchanged in a secure way. This is an optional NNI requirement. 2278 - UNI shall support ongoing identification and authentication of the 2279 UNI-C entity (i.e., each user request shall be authenticated). 2281 - The UNI and NNI should provide optional mechanisms to ensure origin 2282 authentication and message integrity for connection management 2283 requests such as set-up, tear-down and modify and connection 2284 signaling messages. This is important in order to prevent Denial of 2285 Service attacks. The NNI (especially E-NNI) should also include 2286 mechanisms to ensure non-repudiation of connection management 2287 messages. 2289 - Information on security-relevant events occurring in the control 2290 plane or security-relevant operations performed or attempted in the 2291 control plane shall be logged in the management plane. 2293 - The management plane shall be able to analyze and exploit logged 2294 data in order to check if they violate or threat security of the 2295 control plane. 2297 - The control plane shall be able to generate alarm notifications 2298 about security related events to the management plane in an 2299 adjustable and selectable fashion. 2301 - The control plane shall support recovery from successful and 2302 attempted intrusion attacks. 2304 - The desired level of security depends on the type of interfaces and 2305 accounting relation between the two adjacent sub-networks or domains. 2306 Typically, in-band control channels are perceived as more secure than 2307 out-of-band, out-of-fiber channels, which may be partly colocated 2308 with a public network. 2310 11.1. Service Access Control 2312 From a security perspective, network resources should be protected 2313 from unauthorized accesses and should not be used by unauthorized 2314 entities. Service Access Control is the mechanism that limits and 2315 controls entities trying to access network resources. Especially on 2316 the public UNI, Connection Admission Control (CAC) functions should 2317 also support the following security features: 2319 - CAC should be applied to any entity that tries to access network 2320 resources through the public UNI (or E-UNI). CAC should include an 2321 authentication function of an entity in order to prevent masquerade 2322 (spoofing). Masquerade is fraudulent use of network resources by 2323 pretending to be a different entity. An authenticated entity should 2324 be given a service access level in a configurable policy basis. 2326 - Each entity should be authorized to use network resources according 2327 to the service level given. 2329 - With help of CAC, usage based billing should be realized. CAC and 2330 usage based billing should be enough stringent to avoid any 2331 repudiation. Repudiation means that an entity involved in a 2332 communication exchange subsequently denies the fact. 2334 12. 
Acknowledgements 2335 The authors of this document would like to acknowledge the 2336 valuable inputs from John Strand, Yangguang Xu, 2337 Deborah Brunhard, Daniel Awduche, Jim Luciani, Lynn Neir, Wesam 2338 Alanqar, Tammy Ferris, Mark Jones and Gerry Ash. 2340 References 2342 [carrier-framework] Y. Xue et al., Carrier Optical Services 2343 Framework and Associated UNI requirements", draft-many-carrier- 2344 framework-uni-00.txt, IETF, Nov. 2001. 2346 [G.807] ITU-T Recommendation G.807 (2001), "Requirements for the 2347 Automatic Switched Transport Network (ASTN)". 2349 [G.dcm] ITU-T New Recommendation G.dcm, "Distributed Connection 2350 Management (DCM)". 2352 [G.8080] ITU-T New recommendation G.ason, "Architecture for the 2353 Automatically Switched Optical Network (ASON)". 2355 [oif2001.196.0] M. Lazer, "High Level Requirements on Optical 2356 Network Addressing", oif2001.196.0. 2358 [oif2001.046.2] J. Strand and Y. Xue, "Routing For Optical Networks 2359 With Multiple Routing Domains", oif2001.046.2. 2361 [ipo-impairements] J. Strand et al., "Impairments and Other 2362 Constraints on Optical Layer Routing", draft-ietf-ipo- 2363 impairments-00.txt, work in progress. 2365 [ccamp-gmpls] Y. Xu et al., "A Framework for Generalized Multi- 2366 Protocol Label Switching (GMPLS)", draft-many-ccamp-gmpls- 2367 framework-00.txt, July 2001. 2369 [mesh-restoration] G. Li et al., "RSVP-TE extensions for shared mesh 2370 restoration in transport networks", draft-li-shared-mesh- 2371 restoration-00.txt, July 2001. 2373 [sis-framework] Yves T'Joens et al., "Service Level 2374 Specification and Usage Framework", 2375 draft-manyfolks-sls-framework-00.txt, IETF, Oct. 2000. 2377 [control-frmwrk] G. Bernstein et al., "Framework for MPLS-based 2378 control of Optical SDH/SONET Networks", draft-bms-optical-sdhsonet- 2379 mpls-control-frmwrk-00.txt, IETF, Nov. 2000. 2381 [ccamp-req] J. Jiang et al., "Common Control and Measurement 2382 Plane Framework and Requirements", draft-walker-ccamp-req-00.txt, 2383 CCAMP, August, 2001. 2385 [tewg-measure] W. S. Lai et al., "A Framework for Internet Traffic 2386 Engineering Neasurement", draft-wlai-tewg-measure-01.txt, IETF, May, 2387 2001. 2389 [ccamp-g.709] A. Bellato, "G. 709 Optical Transport Networks GMPLS 2390 Control Framework", draft-bellato-ccamp-g709-framework-00.txt, CCAMP, 2391 June, 2001. 2393 [onni-frame] D. Papadimitriou, "Optical Network-to-Network Interface 2394 Framework and Signaling Requirements", draft-papadimitriou-onni- 2395 frame-01.txt, IETF, Nov. 2000. 2397 [oif2001.188.0] R. Graveman et al.,"OIF Security requirement", 2398 oif2001.188.0.a` 2399 Author's Addresses 2401 Yong Xue 2402 UUNET/WorldCom 2403 22001 Loudoun County Parkway 2404 Ashburn, VA 20147 2405 Phone: +1 (703) 886-5358 2406 Email: yong.xue@wcom.com 2408 Monica Lazer 2409 AT&T 2410 900 ROUTE 202/206N PO BX 752 2411 BEDMINSTER, NJ 07921-0000 2412 mlazer@att.com 2414 Jennifer Yates, 2415 AT&T Labs 2416 180 PARK AVE, P.O. 
BOX 971 2417 FLORHAM PARK, NJ 07932-0000 2418 jyates@research.att.com 2420 Dongmei Wang 2421 AT&T Labs 2422 Room B180, Building 103 2423 180 Park Avenue 2424 Florham Park, NJ 07932 2425 mei@research.att.com 2427 Ananth Nagarajan 2428 Sprint 2429 9300 Metcalf Ave 2430 Overland Park, KS 66212, USA 2431 ananth.nagarajan@mail.sprint.com 2433 Hirokazu Ishimatsu 2434 Japan Telecom Co., LTD 2435 2-9-1 Hatchobori, Chuo-ku, 2436 Tokyo 104-0032 Japan 2437 Phone: +81 3 5540 8493 2438 Fax: +81 3 5540 8485 2439 EMail: hirokazu@japan-telecom.co.jp 2441 Olga Aparicio 2442 Cable & Wireless Global 2443 11700 Plaza America Drive 2444 Reston, VA 20191 2445 Phone: 703-292-2022 2446 Email: olga.aparicio@cwusa.com 2448 Steven Wright 2449 Science & Technology 2450 BellSouth Telecommunications 2451 41G70 BSC 2452 675 West Peachtree St. NE. 2453 Atlanta, GA 30375 2454 Phone +1 (404) 332-2194 2455 Email: steven.wright@snt.bellsouth.com 2457 Appendix A Commonly Required Signal Rate 2459 The table below outlines the different signal rates and granularities 2460 for the SONET and SDH signals. 2461 SDH SONET Transported signal 2462 name name 2463 RS64 STS-192 STM-64 (STS-192) signal without 2464 Section termination of any OH. 2465 RS16 STS-48 STM-16 (STS-48) signal without 2466 Section termination of any OH. 2467 MS64 STS-192 STM-64 (STS-192); termination of 2468 Line RSOH (section OH) possible. 2469 MS16 STS-48 STM-16 (STS-48); termination of 2470 Line RSOH (section OH) possible. 2471 VC-4- STS-192c- VC-4-64c (STS-192c-SPE); 2472 64c SPE termination of RSOH (section OH), 2473 MSOH (line OH) and VC-4-64c TCM OH 2474 possible. 2475 VC-4- STS-48c- VC-4-16c (STS-48c-SPE); 2476 16c SPE termination of RSOH (section OH), 2477 MSOH (line OH) and VC-4-16c TCM 2478 OH possible. 2479 VC-4-4c STS-12c- VC-4-4c (STS-12c-SPE); termination 2480 SPE of RSOH (section OH), MSOH (line 2481 OH) and VC-4-4c TCM OH possible. 2482 VC-4 STS-3c- VC-4 (STS-3c-SPE); termination of 2483 SPE RSOH (section OH), MSOH (line OH) 2484 and VC-4 TCM OH possible. 2485 VC-3 STS-1-SPE VC-3 (STS-1-SPE); termination of 2486 RSOH (section OH), MSOH (line OH) 2487 and VC-3 TCM OH possible. 2488 Note: In SDH it could be a higher 2489 order or lower order VC-3, this is 2490 identified by the sub-addressing 2491 scheme. In case of a lower order 2492 VC-3 the higher order VC-4 OH can 2493 be terminated. 2494 VC-2 VT6-SPE VC-2 (VT6-SPE); termination of 2495 RSOH (section OH), MSOH (line OH), 2496 higher order VC-3/4 (STS-1-SPE) OH 2497 and VC-2 TCM OH possible. 2498 - VT3-SPE VT3-SPE; termination of section 2499 OH, line OH, higher order STS-1- 2500 SPE OH and VC3-SPE TCM OH 2501 possible. 2502 VC-12 VT2-SPE VC-12 (VT2-SPE); termination of 2503 RSOH (section OH), MSOH (line OH), 2504 higher order VC-3/4 (STS-1-SPE) OH 2505 and VC-12 TCM OH possible. 2506 VC-11 VT1.5-SPE VC-11 (VT1.5-SPE); termination of 2507 RSOH (section OH), MSOH (line OH), 2508 higher order VC-3/4 (STS-1-SPE) OH 2509 and VC-11 TCM OH possible. 2510 The tables below outline the different signals, rates and 2511 granularities that have been defined for the OTN in G.709. 2513 OTU type OTU nominal bit rate OTU bit rate tolerance 2514 OTU1 255/238 * 2 488 320 kbit/s 20 ppm 2515 OTU2 255/237 * 9 953 280 kbit/s 2516 OTU3 255/236 * 39 813 120 kbit/s 2518 NOTE - The nominal OTUk rates are approximately: 2,666,057.143 kbit/s 2519 (OTU1), 10,709,225.316 kbit/s (OTU2) and 43,018,413.559 kbit/s 2520 (OTU3). 
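The approximate OTUk rates quoted in the NOTE above follow directly from the multipliers in the table; the short Python check below reproduces them (illustrative only).

   # Reproduce the nominal OTUk rates from the table's multipliers.
   from fractions import Fraction

   OTU_RATES = {
       "OTU1": Fraction(255, 238) * 2_488_320,     # kbit/s
       "OTU2": Fraction(255, 237) * 9_953_280,
       "OTU3": Fraction(255, 236) * 39_813_120,
   }

   for name, rate in OTU_RATES.items():
       print(f"{name}: {float(rate):,.3f} kbit/s")
   # OTU1: 2,666,057.143 kbit/s
   # OTU2: 10,709,225.316 kbit/s
   # OTU3: 43,018,413.559 kbit/s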
Appendix B: Protection and Restoration Schemes

   For the purposes of this discussion, the following protection and
   restoration definitions are provided:

   Reactive Protection: This is a function performed by equipment
   management functions and/or the transport plane (depending on
   whether it is equipment protection, facility protection, and so
   on) in response to failures or degraded conditions.  Thus, if the
   control plane and/or management plane is disabled, the reactive
   protection function can still be performed.  Reactive protection
   requires that protecting resources be configured and reserved
   (i.e., they cannot be used for other services).  The time to
   exercise the protection is technology specific and designed to
   protect against service interruption.

   Proactive Protection: In this form of protection, protection
   events are initiated in response to planned engineering works
   (often from a centralized operations center).  Protection events
   may be triggered manually via operator request or based on a
   schedule supported by a soft scheduling function.  This soft
   scheduling function may be performed by either the management
   plane or the control plane, but could also be part of the
   equipment management functions.  If the control plane and/or
   management plane is disabled and that is where the soft scheduling
   function is performed, the proactive protection function cannot be
   performed.  [Note that in the case of a hierarchical model of
   subnetworks, some protection may remain available after a partial
   failure (i.e., failure of a single subnetwork control plane or
   management plane controller), since such a failure affects only
   the entities below the failed subnetwork controller, not its
   parents or peers.]  Proactive protection requires that protecting
   resources be configured and reserved (i.e., they cannot be used
   for other services) prior to the protection exercise.  The time to
   exercise the protection is technology specific and designed to
   protect against service interruption.

   Reactive Restoration: This is a function performed by either the
   management plane or the control plane.  Thus, if the control plane
   and/or management plane is disabled, the restoration function
   cannot be performed.  [Note that in the case of a hierarchical
   model of subnetworks, some restoration may remain available after
   a partial failure (i.e., failure of a single subnetwork control
   plane or management plane controller), since such a failure
   affects only the entities below the failed subnetwork controller,
   not its parents or peers.]  Restoration capacity may be shared
   among multiple demands.
   A restoration path is created after the failure is detected.  Path
   selection can be done either off-line or on-line.  The path
   selection algorithms may also be executed in real time or non-real
   time, depending upon their computational complexity,
   implementation, and the specific network context.

   - Off-line computation may be facilitated by simulation and/or
     network planning tools.  Off-line computation can help provide
     guidance to subsequent real-time computations.

   - On-line computation may be done whenever a connection request is
     received.

   Off-line and on-line path selection may be used together to make
   network operation more efficient.  Operators could use on-line
   computation to handle a subset of path selection decisions and use
   off-line computation for complicated traffic engineering and
   policy related issues such as demand planning, service scheduling,
   cost modeling and global optimization.

   Proactive Restoration: This is a function performed by either the
   management plane or the control plane.  Thus, if the control plane
   and/or management plane is disabled, the restoration function
   cannot be performed.  [Note that in the case of a hierarchical
   model of subnetworks, some restoration may remain available after
   a partial failure (i.e., failure of a single subnetwork control
   plane or management plane controller), since such a failure
   affects only the entities below the failed subnetwork controller,
   not its parents or peers.]  Restoration capacity may be shared
   among multiple demands.  Part or all of the restoration path is
   created before the failure is detected, depending on the
   algorithms used, the types of restoration options supported (e.g.,
   shared restoration/connection pool, dedicated restoration pool),
   whether the end-to-end call is protected or only the UNI or NNI
   segment, available resources, and so on.  In the event that the
   restoration path is fully pre-allocated, a protection switch must
   occur upon failure, similarly to the reactive protection switch.
   The main difference between the options in this case is that the
   switch occurs through actions of the control plane rather than the
   transport plane.  Path selection can be done either off-line or
   on-line.  The path selection algorithms may also be executed in
   real time or non-real time, depending upon their computational
   complexity, implementation, and the specific network context.

   - Off-line computation may be facilitated by simulation and/or
     network planning tools.  Off-line computation can help provide
     guidance to subsequent real-time computations.

   - On-line computation may be done whenever a connection request is
     received.

   Off-line and on-line path selection may be used together to make
   network operation more efficient.  Operators could use on-line
   computation to handle a subset of path selection decisions and use
   off-line computation for complicated traffic engineering and
   policy related issues such as demand planning, service scheduling,
   cost modeling and global optimization.

   Control channel and signaling software failures shall not cause
   disruptions in established connections within the data plane, and
   signaling messages affected by control plane outages should not
   result in partially established connections remaining within the
   network.

   Control channel and signaling software failures shall not cause
   management plane failures.
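   The restoration variants above differ mainly in when the
   restoration path is selected.  As a purely illustrative,
   non-normative sketch (all function, variable and topology names
   below are hypothetical), the Python fragment contrasts reactive
   restoration, where a path is computed on-line only after the
   failure is detected, with proactive restoration, where a
   pre-planned path (for example, from off-line computation) is
   switched to when the failure occurs.

      from collections import deque

      # Hypothetical topology: adjacency list of bidirectional links.
      TOPOLOGY = {
          "A": ["B", "C"],
          "B": ["A", "D"],
          "C": ["A", "D"],
          "D": ["B", "C"],
      }

      def compute_path(topology, src, dst, failed_links=frozenset()):
          """On-line path selection: hop-count BFS avoiding failed links."""
          queue = deque([[src]])
          visited = {src}
          while queue:
              path = queue.popleft()
              node = path[-1]
              if node == dst:
                  return path
              for nbr in topology[node]:
                  link = frozenset((node, nbr))
                  if nbr not in visited and link not in failed_links:
                      visited.add(nbr)
                      queue.append(path + [nbr])
          return None  # no restoration path available

      # Proactive restoration: paths selected before any failure,
      # e.g. by an off-line planning tool, and only switched to later.
      PREPLANNED_RESTORATION = {("A", "D"): ["A", "C", "D"]}

      def restore(src, dst, failed_link, proactive=True):
          if proactive and (src, dst) in PREPLANNED_RESTORATION:
              # Pre-allocated path: behaves like a protection switch.
              return PREPLANNED_RESTORATION[(src, dst)]
          # Reactive restoration: compute a path only after the failure.
          return compute_path(TOPOLOGY, src, dst,
                              failed_links={frozenset(failed_link)})

      if __name__ == "__main__":
          # Working path A-B-D fails on link B-D; compare behaviours.
          print("proactive:", restore("A", "D", ("B", "D"), proactive=True))
          print("reactive :", restore("A", "D", ("B", "D"), proactive=False))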
Appendix C Interconnection of Control Planes

   The interconnection of the IP router (client) and optical control
   planes can be realized in a number of ways depending on the
   required level of coupling.  The control planes can be loosely or
   tightly coupled.  Loose coupling is generally referred to as the
   overlay model and tight coupling is referred to as the peer model.
   Additionally, there is the augmented model, which sits somewhere
   between the other two models but is more akin to the peer model.
   The model selected determines the following:

   - The details of the topology, resource and reachability
     information advertised between the client and optical networks

   - The level of control IP routers can exercise in selecting paths
     across the optical network

   The next three sections discuss these models in more detail, and
   the final part of this appendix describes the coupling
   requirements from a carrier's perspective.

C.1. Peer Model (I-NNI like model)

   Under the peer model, the IP router clients act as peers of the
   optical transport network, such that a single routing protocol
   instance runs over both the IP and optical domains.  In this
   regard the optical network elements are treated just like any
   other router as far as the control plane is concerned.  The peer
   model, although not strictly an internal NNI, behaves like an
   I-NNI in the sense that there is sharing of resource and topology
   information.

   Presumably a common IGP such as OSPF or IS-IS, with appropriate
   extensions, will be used to distribute topology information.  One
   tacit assumption here is that a common addressing scheme will also
   be used for the optical and IP networks.  A common address space
   can be trivially realized by using IP addresses in both IP and
   optical domains.  Thus, the optical network elements become IP
   addressable entities.

   The obvious advantage of the peer model is the seamless
   interconnection between the client and optical transport networks.
   The tradeoff is the tight integration and the optical-specific
   routing information that must be known to the IP clients.

   The discussion above has focused on the client to optical control
   plane interconnection.  The discussion applies equally well to
   interconnecting two optical control planes.

C.2. Overlay (UNI-like model)

   Under the overlay model, the IP client routing, topology
   distribution, and signaling protocols are independent of the
   routing, topology distribution, and signaling protocols at the
   optical layer.  This model is conceptually similar to the
   classical IP over ATM model, but applied to an optical sub-network
   directly.

   Though the overlay model dictates that the client and optical
   network are independent, it still allows the optical network to
   re-use IP layer protocols to perform the routing and signaling
   functions.

   In addition to the protocols being independent, the addressing
   schemes used in the client and optical networks must be
   independent in the overlay model.  That is, the use of IP layer
   addressing in the clients must not place any specific requirement
   upon the addressing used within the optical control plane.
   The overlay model would provide a UNI to the client networks
   through which the clients could request to add, delete or modify
   optical connections.  The optical network would additionally
   provide reachability information to the clients, but no topology
   information would be provided across the UNI.

C.3. Augmented model (E-NNI like model)

   Under the augmented model, there are actually separate routing
   instances in the IP and optical domains, but information from one
   routing instance is passed through the other routing instance.
   For example, external IP addresses could be carried within the
   optical routing protocols to allow reachability information to be
   passed to IP clients.  A typical implementation would use BGP
   between the IP client and the optical network.

   The augmented model, although not strictly an external NNI,
   behaves like an E-NNI in that there is limited sharing of
   information.

   Generally, in a carrier environment there will be more than just
   IP routers connected to the optical network.  Other examples of
   clients could be ATM switches or SONET ADM equipment.  This may
   drive the decision towards loose coupling to prevent undue burdens
   upon non-IP router clients.  Also, loose coupling would ensure
   that future clients are not hampered by legacy technologies.

   Additionally, a carrier may for business reasons want a separation
   between the client and optical networks.  For example, the ISP
   business unit may not want to be tightly coupled with the optical
   network business unit.  Another reason for separation might simply
   be organizational politics within a large carrier; that is, it
   seems unlikely that the optical transport network could be forced
   to run the same set of protocols as the IP router networks.  Also,
   forcing the same set of protocols in both networks ties the
   evolution of the two networks directly together: the optical
   transport network protocols could not be upgraded without
   considering the impact on the IP router network (and vice versa).

   Operating models also play a role in deciding the level of
   coupling.  [Freeland] gives four main operating models envisioned
   for an optical transport network:

   1. ISP owning all of its own infrastructure (i.e., including fiber
      and duct to the customer premises)

   2. ISP leasing some or all of its capacity from a third party

   3. Carrier's carrier providing layer 1 services

   4. Service provider offering multiple layer 1, 2, and 3 services
      over a common infrastructure

   Although relatively few, if any, ISPs fall into category 1, that
   category seems the most likely of the four to use the peer model.
   The other operating models would more likely lend themselves to an
   overlay model.  Most carriers fall into category 4 and thus would
   most likely choose an overlay model architecture.
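   To summarize the three interconnection models discussed in this
   appendix, the following Python sketch is offered purely as an
   illustration.  The data structure and field values are a
   non-authoritative reading of the text above, not definitions from
   any standard; in particular, treating the augmented model's
   addressing as independent is an assumption.

      from dataclasses import dataclass

      @dataclass
      class CouplingModel:
          """Hypothetical summary of what crosses the client/optical boundary."""
          name: str
          interface_style: str           # UNI-like, I-NNI-like or E-NNI-like
          shares_reachability: bool      # client learns which endpoints it can reach
          shares_topology: bool          # client sees optical topology/resource detail
          common_routing_instance: bool  # one routing protocol instance spans both
          independent_addressing: bool   # client and optical address spaces decoupled

      MODELS = [
          CouplingModel("peer",      "I-NNI-like", True, True,  True,  False),
          # Addressing independence for the augmented model is an
          # assumption, not stated explicitly in the text above.
          CouplingModel("augmented", "E-NNI-like", True, False, False, True),
          CouplingModel("overlay",   "UNI-like",   True, False, False, True),
      ]

      def models_allowing(predicate):
          """Filter models by an arbitrary property when weighing coupling options."""
          return [m.name for m in MODELS if predicate(m)]

      if __name__ == "__main__":
          # A carrier requiring addressing independence (see C.2) would shortlist:
          print(models_allowing(lambda m: m.independent_addressing))
          # -> ['augmented', 'overlay']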