INTERNET-DRAFT                                                 Yong Xue
Document: draft-ietf-ipo-carrier-requirements-02.txt      Worldcom Inc.
Category: Informational                                        (Editor)
Expiration Date: September, 2002
                                                           Monica Lazer
                                                         Jennifer Yates
                                                           Dongmei Wang
                                                                   AT&T

                                                       Ananth Nagarajan
                                                                 Sprint

                                                     Hirokazu Ishimatsu
                                                 Japan Telecom Co., LTD

                                                          Steven Wright
                                                              Bellsouth

                                                          Olga Aparicio
                                                Cable & Wireless Global

                                                            March, 2002

                 Carrier Optical Services Requirements

Status of this Memo

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC2026. Internet-Drafts are working
documents of the Internet Engineering Task Force (IETF), its areas, and
its working groups. Note that other groups may also distribute working
documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or rendered obsolete by other documents
at any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.

Abstract

This Internet-Draft describes the major carriers' service requirements
for automatic switched optical networks (ASON), from both an end-user's
and an operator's perspective. Its focus is on the description of
service building blocks and service-related control plane functional
requirements. The management functions for optical services and their
underlying networks are beyond the scope of this document and will be
addressed in a separate document.

Table of Contents

   1. Introduction
   1.1 Justification
   1.2 Conventions used in this document
   1.3 Value Statement
   1.4 Scope of This Document
   2. Abbreviations
   3. General Requirements
   3.1 Separation of Networking Functions
   3.2 Separation of Call and Connection Control
   3.3 Network and Service Scalability
   3.4 Transport Network Technology
   3.5 Service Building Blocks
   4. Service Models and Applications
   4.1 Service and Connection Types
   4.2 Examples of Common Service Models
   5. Network Reference Model
   5.1 Optical Networks and Subnetworks
   5.2 Network Interfaces
   5.3 Intra-Carrier Network Model
   5.4 Inter-Carrier Network Model
   6. Optical Service User Requirements
   6.1 Common Optical Services
   6.2 Bearer Interface Types
   6.3 Optical Service Invocation
   6.4 Optical Connection Granularity
   6.5 Other Service Parameters and Requirements
   7. Optical Service Provider Requirements
   7.1 Access Methods to Optical Networks
   7.2 Dual Homing and Network Interconnections
   7.3 Inter-domain connectivity
   7.4 Names and Address Management
   7.5 Policy-Based Service Management Framework
   8. Control Plane Functional Requirements for Optical Services
   8.1 Control Plane Capabilities and Functions
   8.2 Control Message Transport Network
   8.3 Control Plane Interface to Data Plane
   8.4 Management Plane Interface to Data Plane
   8.5 Control Plane Interface to Management Plane
   8.6 Control Plane Interconnection
   9. Requirements for Signaling, Routing and Discovery
   9.1 Requirements for information sharing over UNI, I-NNI and E-NNI
   9.2 Signaling Functions
   9.3 Routing Functions
   9.4 Requirements for path selection
   9.5 Automatic Discovery Functions
   10. Requirements for service and control plane resiliency
   10.1 Service resiliency
   10.2 Control plane resiliency
   11. Security Considerations
   11.1 Optical Network Security Concerns
   11.2 Service Access Control
   12. Acknowledgements
   13. References
   Authors' Addresses
   Appendix: Interconnection of Control Planes

1. Introduction

Optical transport networks are evolving from the current TDM-based
SONET/SDH optical networks, as defined by ITU Rec. G.803 [ITU-G803], to
the emerging WDM-based optical transport networks (OTN), as defined by
ITU Rec. G.872 [ITU-G872]. In the near future, therefore, carrier
optical transport networks will consist of a mixture of SONET/SDH-based
sub-networks and WDM-based wavelength- or fiber-switched OTN
sub-networks. The OTN networks can be either transparent or opaque,
depending upon whether O-E-O functions are utilized within the
sub-networks. Optical networking encompasses the functions for the
establishment, transmission, multiplexing and switching of optical
connections carrying a wide range of user signals of varying formats
and bit rates.

Some of the biggest challenges for the carriers are bandwidth
management and fast service provisioning in such a multi-technology
networking environment. The emerging and rapidly evolving automatic
switched optical network (ASON) technology [ITU-G8080, ITU-G807] is
aimed at providing optical networks with intelligent networking
functions and capabilities in the control plane, to enable rapid
optical connection provisioning, dynamic rerouting, and multiplexing
and switching at different granularity levels, including fiber,
wavelength and TDM time slots. The ASON control plane should not only
enable new networking functions and capabilities for the emerging OTN
networks, but also significantly enhance the service provisioning
capabilities of the existing SONET/SDH networks.

The ultimate goal is to allow the carriers to quickly and dynamically
provision network resources and to enhance network survivability using
ring- and mesh-based protection and restoration techniques. The
carriers expect that this new networking platform will create
tremendous business opportunities for network operators and service
providers to offer new services to the market, reduce their capital and
operational expenses (CAPEX and OPEX), and improve their network
efficiency.

1.1. Justification

The charter of the IPO WG calls for a document on "Carrier Optical
Services Requirements" for IP/Optical networks. This document addresses
that aspect of the IPO WG charter. Furthermore, this document was
accepted as an IPO WG document by unanimous agreement at the IPO WG
meeting held on March 19, 2001, in Minneapolis, MN, USA. It presents a
carrier and end-user perspective on optical network services and
requirements.
1.2. Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.

1.3. Value Statement

By deploying ASON technology, a carrier expects to achieve the
following benefits from both technical and business perspectives:

- Rapid Circuit Provisioning: ASON technology will enable dynamic
end-to-end provisioning of optical connections across the optical
network by using standard routing and signaling protocols.

- Enhanced Survivability: ASON technology will enable the network to
dynamically reroute an optical connection in case of a failure using
mesh-based network protection and restoration techniques, which greatly
improves cost-effectiveness compared to the current line and ring
protection schemes in the SONET/SDH network.

- Cost Reduction: ASON networks will enable the carrier to better
utilize the optical network, thus achieving significant unit cost
reduction per megabit due to the cost-effective nature of optical
transmission technology, a simplified network architecture and reduced
operations cost.

- Service Flexibility: ASON technology will support the provisioning of
an assortment of existing and new services, such as protocol- and
bit-rate-independent transparent network services and
bandwidth-on-demand services.

- Enhanced Interoperability: ASON technology will use a control plane
built on industry and international standard architectures and
protocols, which facilitates interoperability of optical network
equipment from different vendors.

In addition, the introduction of a standards-based control plane offers
the following potential benefits:

- Reactive traffic engineering at the optical layer, which allows
network resources to be dynamically allocated to traffic flows.

- Reduced need for service providers to develop new operational support
system software for network control and new service provisioning on the
optical network, thus speeding up the deployment of optical network
technology and reducing software development and maintenance costs.

- Potential development of a unified control plane that can be used for
different transport technologies, including OTN, SONET/SDH, ATM and
PDH.

1.4. Scope of this document

This document is intended to provide, from the carriers' perspective, a
service framework and associated requirements for the optical services
to be offered in the next-generation optical transport networking
environment and for their service control and management functions. As
such, this document concentrates on the requirements driving the work
towards realization of automatic switched optical networks. This
document is intended to be protocol-neutral, but its specific goals
include providing requirements to guide control protocol development
and enhancement within the IETF, in terms of reuse of IP-centric
control protocols in the optical transport network.

Every carrier's needs are different. The objective of this document is
NOT to define specific service models.
Instead, major service building blocks are identified that carriers can
use to create the service platform best suited to their business model.
These building blocks include generic service types, service-enabling
control mechanisms, and service control and management functions.

The fundamental principles and a basic set of requirements for the
control plane of automatic switched optical networks have been provided
in a series of ITU Recommendations under the umbrella of the ITU
ASTN/ASON architectural and functional requirements, as listed below:

Architecture:

- ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the Automatic
Switched Transport Network (ASTN) [ASTN]

- ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the Automatic
Switched Optical Network (ASON) [ASON]

Signaling:

- ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and Connection
Management (DCM) [DCM]

Routing:

- ITU-T Draft Rec. G.7715/Y.1706 (2002), Routing Architecture and
Requirements for ASON Networks (work in progress) [ASONROUTING]

Discovery:

- ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic Discovery
[DISC]

Control Transport Network:

- ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification of
Data Communication Network [DCN]

This document provides further detailed requirements based on this
ASTN/ASON framework. In addition, even though this document considers
IP a major client of the optical network, the same requirements and
principles should be equally applicable to non-IP clients such as
SONET/SDH, ATM, ITU G.709, etc.

2. Abbreviations

   ASON    Automatic Switched Optical Network
   ASTN    Automatic Switched Transport Network
   CAC     Connection Admission Control
   NNI     Node-to-Node Interface
   UNI     User-to-Network Interface
   IWF     Inter-Working Function
   I-NNI   Interior NNI
   E-NNI   Exterior NNI
   NE      Network Element
   OTN     Optical Transport Network
   OLS     Optical Line System
   PI      Physical Interface
   SLA     Service Level Agreement

3. General Requirements

In this section, a number of generic requirements related to the
service control and management functions are discussed.

3.1. Separation of Networking Functions

It makes logical sense to segregate the networking functions within
each layer network into three logical functional planes: the control
plane, the data plane and the management plane. They are responsible
for providing network control functions, data transmission functions
and network management functions, respectively. The crux of the ASON
network is the networking intelligence that contains automatic routing,
signaling and discovery functions to automate the network control
functions.

Control Plane: includes the functions related to networking control
capabilities, such as routing, signaling and policy control, as well as
resource and service discovery. These functions are automated.

Data Plane (transport plane): includes the functions related to bearer
channels and signal transmission.

Management Plane: includes the functions related to the management of
network elements, networks, and network resources and services. These
functions are less automated as compared to control plane functions.
Each plane consists of a set of interconnected functional or control
entities, physical or logical, responsible for providing the networking
or control functions defined for that network layer.

The separation of the control plane from both the data and management
planes is beneficial to the carriers in that it:

- Allows equipment vendors to have a modular system design that will be
more reliable and maintainable, thus reducing overall system ownership
and operations costs.

- Allows carriers the flexibility to choose a third-party vendor's
control plane software as the control plane solution for their switched
optical networks.

- Allows carriers to deploy a unified control plane and OSS/management
systems to manage and control the different types of transport networks
they own.

- Allows carriers to use a separate control network specially designed
and engineered for control plane communications.

The separation of control, management and transport functions is
required, and it shall accommodate both logical and physical
separation.

Note that this is in contrast to the IP network, where control messages
and user traffic are routed and switched based on the same network
topology due to the associated in-band signaling nature of the IP
network.

3.2. Separation of call and connection control

To support many enhanced optical services, such as scheduled bandwidth
on demand and bundled connections, a call model based on the separation
of call control and connection control is essential.

Call control is responsible for end-to-end session negotiation, call
admission control and call state maintenance, while connection control
is responsible for setting up the connections associated with a call
across the network. A call can correspond to zero, one or more
connections, depending upon the number of connections needed to support
the call.

The existence of a connection depends upon the existence of its
associated call session; a connection can be deleted and re-established
while the call session is kept up (illustrated in the sketch at the end
of this section).

Call control shall be provided at an ingress port or gateway port to
the network, such as a UNI or E-NNI.

The control plane shall support the separation of call control from
connection control.

The control plane shall support call admission control on call set-up
and connection admission control on connection set-up.
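As a non-normative illustration of this call model, the sketch below
(Python, with invented names; not drawn from any ITU or IETF
specification) shows a call that owns zero or more connections, where a
connection can be torn down and re-established while the call session
stays up:

   # Illustrative sketch only; class and field names are invented
   # and not taken from any ASON specification.

   class Connection:
       def __init__(self, conn_id, path):
           self.conn_id = conn_id   # connection identifier
           self.path = path         # cross-connects along the route

   class Call:
       def __init__(self, call_id, src, dst):
           self.call_id = call_id   # end-to-end call session state
           self.src, self.dst = src, dst
           self.connections = []    # zero, one or more connections

       def replace_connection(self, old_conn, new_path):
           # Connection control: tear down and re-establish a
           # connection without touching the call session itself.
           self.connections.remove(old_conn)
           new_conn = Connection(old_conn.conn_id, new_path)
           self.connections.append(new_conn)
           return new_conn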
3.3. Network and Service Scalability

Although some specific applications or networks may be small in scale,
the control plane protocols and functional capabilities shall support
large-scale networks.

In terms of the scale and complexity of future optical networks, the
following assumptions can be made when considering the scalability and
performance required of the optical control and management functions:

- There may be up to thousands of OXC nodes, and the same or a higher
order of magnitude of OADMs, per carrier network.

- There may be up to thousands of terminating ports/wavelengths per OXC
node.

- There may be up to hundreds of parallel fibers between a pair of OXC
nodes.

- There may be up to hundreds of wavelength channels transmitted on
each fiber.

In relation to the frequency and duration of the optical connections:

- The expected end-to-end connection set-up/teardown time should be on
the order of seconds, preferably less.

- The expected connection holding times should be on the order of
minutes or greater.

- There may be up to millions of simultaneous optical connections
switched across a single carrier network.

Note that even though automated rapid optical connection provisioning
is required, the carriers expect the majority of provisioned circuits,
at least in the short term, to have a long lifespan, ranging from
months to years.

In terms of service provisioning, some carriers may choose to perform
testing prior to turning a circuit over to the customer.

3.4. Transport Network Technology

Optical services can be offered over different types of underlying
optical transport technologies, including both TDM-based SONET/SDH
networks and WDM-based OTN networks.

For this document, the standards-based transport technologies SONET/SDH,
as defined in ITU Rec. G.803, and OTN implementation framing, as
defined in ITU Rec. G.709, shall be supported.

Note that service characteristics such as bandwidth granularity and
signal framing hierarchy will, to a large degree, be determined by the
capabilities and constraints of the server layer network.

3.5. Service Building Blocks

The primary goal of this document is to identify a set of basic service
building blocks that carriers can use to create the service models best
suited to their business needs.

The service building blocks are comprised of a well-defined set of
capabilities and a basic set of control and management functions. These
capabilities and functions should support a basic set of services and
enable a carrier to build enhanced services through extensions and
customizations. Examples of the building blocks include connection
types, provisioning methods, control interfaces, policy control
functions, and domain internetworking mechanisms.

4. Service Model and Applications

A carrier's optical network supports multiple types of service models.
Each service model may have its own service operations, target markets,
and service management requirements.

4.1. Service and Connection Types

The optical network primarily offers high-bandwidth connectivity in the
form of connections, where a connection is defined as a fixed-bandwidth
connection between two client network elements, such as IP routers or
ATM switches, established across the optical network. A connection is
also defined by its demarcation, from the ingress access point, across
the optical network, to the egress access point of the optical network.

The following connection capability topologies must be supported:

- Bi-directional point-to-point connection

- Uni-directional point-to-point connection

- Uni-directional point-to-multipoint connection

For point-to-point connections, the following three types of network
connections, based on different connection set-up control methods,
shall be supported:

- Permanent connection (PC): Established hop-by-hop directly on each
ONE along a specified path, without relying on the network routing and
signaling capability. The connection has two fixed end-points and a
fixed cross-connect configuration along the path, and it stays in place
permanently until it is deleted. This is similar to the concept of a
PVC in ATM.
- Switched connection (SC): Established through the UNI signaling
interface; the connection is dynamically established by the network
using the network routing and signaling functions. This is similar to
the concept of an SVC in ATM.

- Soft permanent connection (SPC): Established by provisioning a PC at
each of the two end-points and letting the network dynamically
establish an SC connection in between. This is similar to the SPVC
concept in ATM.

PC and SPC connections should be provisioned via the management
plane-to-control plane interface, and SC connections should be
provisioned via the signaled UNI interface.

4.2. Examples of Common Service Models

Each carrier may define its own service model based on its business
strategy and environment. The following are three example service
models that carriers may use.

4.2.1. Provisioned Bandwidth Service (PBS)

The PBS model provides enhanced leased/private line services
provisioned via a service management interface (MI) using the PC or SPC
type of connection. The provisioning can be real-time or near
real-time. It has the following characteristics:

- The connection request goes through a well-defined management
interface.

- There is a client/server relationship between the clients and the
optical network.

- Clients have no optical network visibility and depend on network
intelligence or the operator for optical connection set-up.

4.2.2. Bandwidth on Demand Service (BDS)

The BDS model provides bandwidth-on-demand dynamic connection services
via a signaled user-network interface (UNI). The provisioning is
real-time and uses the SC type of optical connection. It has the
following characteristics:

- The connection request is signaled via the UNI directly from the user
or its proxy.

- The customer has no or limited network visibility, depending upon the
control interconnection model used and network administrative policy.

- It relies on network or client intelligence for connection set-up,
depending upon the control plane interconnection model used.

4.2.3. Optical Virtual Private Network (OVPN)

The OVPN model provides a virtual private network at the optical layer
between a specified set of user sites. It has the following
characteristics:

- Customers contract for a specific set of network resources, such as
optical connection ports, wavelengths, etc.

- The Closed User Group (CUG) concept is supported, as in a normal VPN.

- An optical connection can be of the PC, SPC or SC type, depending
upon the provisioning method used.

- An OVPN site can request dynamic reconfiguration of the connections
between sites within the same CUG.

- A customer may have visibility and control of network resources up to
the extent allowed by the customer service contract.

At a minimum, the PBS, BDS and OVPN service models described above
shall be supported by the control functions.
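The relationships above can be summarized in a small, purely
illustrative sketch of how a provisioning system might constrain each
example service model to its permitted connection types and
provisioning interfaces; all names below are assumptions rather than
defined identifiers:

   # Hypothetical mapping, for illustration only.

   PROVISIONING_INTERFACE = {
       "PC":  "management-plane",  # permanent, hop-by-hop provisioned
       "SPC": "management-plane",  # end-points provisioned, middle signaled
       "SC":  "UNI-signaling",     # fully signaled, on demand
   }

   SERVICE_MODEL_CONNECTIONS = {
       "PBS":  {"PC", "SPC"},       # provisioned bandwidth service
       "BDS":  {"SC"},              # bandwidth on demand
       "OVPN": {"PC", "SPC", "SC"}, # depends on provisioning method
   }

   def validate_request(service_model, conn_type):
       # Reject combinations outside the service definition.
       return conn_type in SERVICE_MODEL_CONNECTIONS[service_model]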
5. Network Reference Model

This section discusses the major architectural and functional
components of a generic carrier optical network, which provides a
reference model for describing the requirements for the control and
management of carrier optical services.

5.1. Optical Networks and Subnetworks

As mentioned before, there are two main types of optical networks
currently under consideration: the SDH/SONET network, as defined in ITU
Rec. G.803, and the OTN, as defined in ITU Rec. G.872.

We assume an OTN is composed of a set of optical cross-connects (OXC)
and optical add-drop multiplexers (OADM) which are interconnected in a
general mesh topology using DWDM optical line systems (OLS).

For ease of discussion and description, it is often convenient to treat
an optical network as a subnetwork cloud, in which the details of the
network become less important; the focus instead is on the functions
and interfaces the optical network provides. In general, a subnetwork
can be defined as a set of access points on the network boundary and a
set of point-to-point optical connections between those access points.

5.2. Network Interfaces

A generic carrier network reference model describes a multi-carrier
network environment. Each individual carrier network can be further
partitioned into domains or sub-networks for administrative,
technological or architectural reasons. The demarcation between
(sub)networks can be either logical or physical and consists of a set
of reference points identifiable in the optical network. From the
control plane perspective, these reference points define a set of
control interfaces in terms of optical control and management
functionality. Figure 5.1 is an illustrative diagram of this.

                    +---------------------------------------+
                    |        single carrier network         |
  +--------------+  |                                       |
  |              |  |  +------------+      +------------+   |
  |      IP      |  |  |            |      |            |   |
  |   Network    +--UNI+  Optical   +--UNI-+ Carrier IP |   |
  |              |  |  | Subnetwork |      |  network   |   |
  +--------------+  |  | (Domain A) +--+   |            |   |
                    |  +------+-----+  |   +------+-----+   |
                    |         |        |          |         |
                    |       I-NNI    E-NNI       UNI        |
  +--------------+  |         |        |          |         |
  |              |  |  +------+-----+  |   +------+-----+   |
  |      IP      +--UNI+            |  +---+            |   |
  |   Network    |  |  |  Optical   |      |  Optical   |   |
  |              |  |  | Subnetwork +-E-NNI+ Subnetwork |   |
  +--------------+  |  | (Domain A) |      | (Domain B) |   |
                    |  +------+-----+      +------+-----+   |
                    |         |                   |         |
                    +---------------------------------------+
                              |                   |
                             UNI                E-NNI
                              |                   |
                    +---------+----+      +-------+--------+
                    |              |      |                |
                    | Other Client |      | Other Carrier  |
                    |   Network    |      |    Network     |
                    | (ATM/SONET)  |      |                |
                    +--------------+      +----------------+

          Figure 5.1  Generic Carrier Network Reference Model

The network interfaces encompass two aspects of the networking
functions: the user data plane interface and the control plane
interface. The former concerns user data transmission across the
physical network interface, while the latter concerns the control
message exchange across the network interface, such as signaling and
routing. We call the former the physical interface (PI) and the latter
the control plane interface. Unless otherwise stated, the control
interface is assumed in the remainder of this document.

5.2.1. Control Plane Interfaces

A control interface defines a relationship between two connected
network entities on either side of the interface. For each control
interface, we need to define the architectural function each side plays
and a controlled set of information that can be exchanged across the
interface.
The information flowing over this logical interface may include, but is
not limited to:

- Endpoint name and address

- Reachability/summarized network address information

- Topology/routing information

- Authentication and connection admission control information

- Connection management signaling messages

- Network resource control information

Different types of interfaces can be defined for network control and
architectural purposes and can be used as network reference points in
the control plane. In this document, the following set of interfaces is
defined, as shown in Figure 5.1.

The User-Network Interface (UNI) is a bi-directional signaling
interface between service requester and service provider control
entities. The service requester control entity resides outside the
carrier network control domain.

The Network-Network Interface (NNI) is a bi-directional signaling
interface between two optical network elements or sub-networks.

We differentiate between interior (I-NNI) and exterior (E-NNI) NNIs as
follows:

- E-NNI: An NNI interface between two control plane entities belonging
to different control domains.

- I-NNI: An NNI interface between two control plane entities within the
same control domain in the carrier network.

It should be noted that it is quite common to use an E-NNI between two
sub-networks within the same carrier network if they belong to
different control domains. The different types of interface, interior
vs. exterior, have different implied trust relationships for security
and access control purposes. A trust relationship is not binary;
instead, a policy-based control mechanism needs to be in place to
restrict the type and amount of information that can flow across each
type of interface, depending on the carrier's service and business
requirements. Generally, two networks have a trust relationship if they
belong to the same administrative domain.

An example of an interior interface is an I-NNI between two optical
network elements in a single control domain. Exterior interface
examples include an E-NNI between two different carriers or a UNI
interface between a carrier optical network and its customers.

The control plane shall support the UNI and NNI interfaces described
above, and the interfaces shall be configurable in terms of the type
and amount of control information exchanged; their behavior shall be
consistent with the configuration (i.e., exterior versus interior
interfaces).
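As a non-normative sketch of such policy-based restriction, the table
below encodes, per interface type, which classes of control information
may cross it; the information classes and function names are
illustrative only:

   # Sketch, with assumed names: a per-interface-type policy table
   # restricting which control information may cross UNI, I-NNI and
   # E-NNI, reflecting their different trust relationships.

   ALLOWED_INFO = {
       "I-NNI": {"topology", "reachability", "signaling", "discovery"},
       "E-NNI": {"reachability", "signaling", "discovery"},  # no topology
       "UNI":   {"signaling", "discovery"},        # no topology/routing
   }

   def may_send(interface_type, info_type):
       # Policy check applied before a control message is forwarded.
       return info_type in ALLOWED_INFO[interface_type]

   assert may_send("I-NNI", "topology")
   assert not may_send("E-NNI", "topology")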
5.3. Intra-Carrier Network Model

The intra-carrier network model concerns the network service control
and management issues within networks owned by a single carrier.

5.3.1. Multiple Sub-networks

Without loss of generality, the optical network owned by a carrier
service operator can be depicted as consisting of one or more optical
sub-networks interconnected by direct optical links. There may be many
different reasons for having more than one optical sub-network: it may
be the result of hierarchical layering, of different technologies
across the access, metro and long-haul networks (as discussed below),
of business mergers and acquisitions, or of incremental optical network
technology deployment by the carrier using different vendors or
technologies.

A sub-network may be a single-vendor and single-technology network. In
general, however, the carrier's optical network is heterogeneous in
terms of the equipment vendors and the technology utilized in each
sub-network.

5.3.2. Access, Metro and Long-haul networks

Few carriers have end-to-end ownership of the optical networks. Even if
they do, the access, metro and long-haul networks often belong to
different administrative divisions as separate optical sub-networks.
Therefore, inter-(sub)network interconnection is essential for
supporting end-to-end optical service provisioning and management. The
access, metro and long-haul networks may use different technologies and
architectures, and as such may have different network properties.

In general, end-to-end optical connectivity may easily cross multiple
sub-networks, with the following possible scenarios:

   Access -- Metro -- Access
   Access -- Metro -- Long Haul -- Metro -- Access

5.4. Inter-Carrier Network Model

The inter-carrier model focuses on the service and control aspects
between different carrier networks and describes the internetworking
relationship between them.

5.4.1. Carrier Network Interconnection

Inter-carrier interconnection provides for connectivity between optical
network operators. To provide global-reach end-to-end optical services,
optical service control and management between different carrier
networks become essential. It is possible to support distributed
peering within the IP client layer network, where the connectivity
between two distant IP routers can be achieved via an optical transport
network.

5.4.2. Implied Control Constraints

In the inter-carrier network model, each carrier's optical network is a
separate administrative domain. Both the UNI interface between the user
and the carrier network and the NNI interface between two carriers'
networks cross the carrier's administrative boundary and are therefore,
by definition, exterior interfaces.

In terms of control information exchange, topology information shall
not be allowed to cross either the E-NNI or the UNI interface.

6. Optical Service User Requirements

This section describes the user requirements for optical services,
which in turn impose requirements on service control and management for
the network operators. The user requirements reflect the perception of
the optical service from a user's point of view.

6.1. Common Optical Services

The basic unit of an optical transport service is fixed-bandwidth
optical connectivity between parties. However, different services are
created based on the supported signal characteristics (format, bit
rate, etc.), the service invocation methods and possibly the associated
Service Level Agreement (SLA) provided by the service provider.

At present, the following are the major optical services provided in
the industry:

- SONET/SDH, with different degrees of transparency

- Optical wavelength services

- Ethernet at 1 Gb/s and 10 Gb/s

- Storage Area Networks (SANs) based on FICON, ESCON and Fiber Channel

Optical wavelength service refers to transport services where the
signal framing is negotiated between the client and the network
operator (framing and bit-rate dependent), and only the payload is
carried transparently. SONET/SDH transport is most widely used for
network-wide transport.
Different levels of transparency can be achieved in SONET/SDH
transmission.

Ethernet services, specifically 1 Gb/s and 10 Gb/s Ethernet services,
are gaining popularity due to the lower costs of the customer premises
equipment and their simplified management requirements (compared to
SONET or SDH).

Ethernet services may be carried over either SONET/SDH (GFP mapping) or
WDM networks. Ethernet service requests will require some
service-specific parameters: priority class, VLAN Id/tag, and traffic
aggregation parameters.

Storage Area Network (SAN) services: ESCON and FICON are proprietary
versions of the service, while Fiber Channel is the standard
alternative. As is the case with Ethernet services, SAN services may be
carried over either SONET/SDH (using GFP mapping) or WDM networks.

The control plane shall provide the carrier with the functionality to
provision, control and manage all the services listed above.

6.2. Bearer Interface Types

All the bearer interfaces implemented in the ONE shall be supported by
the control plane and associated signaling protocols.

The following interface types shall be supported by the signaling
protocol:

- SDH/SONET
- 1 Gb Ethernet, 10 Gb Ethernet (WAN mode)
- 10 Gb Ethernet (LAN mode)
- FC-N (N = 12, 50, 100, or 200) for Fiber Channel services
- OTN (G.709)
- PDH

6.3. Optical Service Invocation

As mentioned earlier, the methods of service invocation play an
important role in defining different services.

6.3.1. Provider-Controlled Service Provisioning

In this scenario, users forward their service requests to the provider
via a well-defined service management interface. All connection
management operations, including set-up, release, query and
modification, shall be invoked from the management plane.

6.3.2. User-Controlled Service Provisioning

In this scenario, users forward their service requests to the provider
via a well-defined UNI interface in the control plane (including proxy
signaling). All connection management operation requests, including
set-up, release, query and modification, shall be invoked from directly
connected user devices or their signaling representatives (such as a
signaling proxy).

6.3.3. Call set-up requirements

In summary, the following requirements for the control plane have been
identified (a non-normative sketch of the call set-up flow follows this
list):

- The control plane shall support action result codes as responses to
any requests over the control interfaces.

- The control plane shall support requests for call set-up, subject to
the policies in effect between the user and the network.

- The control plane shall support the destination client device's
decision to accept or reject call set-up requests from the source
client's device.

- The control plane shall support requests for call set-up and deletion
across multiple (sub)networks.

- NNI signaling shall support requests for call set-up, subject to the
policies in effect between the (sub)networks.

- Call set-up shall be supported for both uni-directional and
bi-directional connections.

- Upon call request initiation, the control plane shall generate a
network-unique Call-ID associated with the connection, to be used for
information retrieval or other activities related to that connection.

- CAC shall be provided as part of the call control functionality. It
is the role of the CAC function to determine whether the call can be
allowed to proceed, based on resource availability and authentication.
- Negotiation of call set-up for multiple service level options shall
be supported.

- The policy management system must determine what kinds of calls can
be set up.

- The control plane elements need the ability to rate-limit (or pace)
call set-up attempts into the network.

- The control plane shall report to the management plane the success or
failure of a call request.

- Upon a connection request failure, the control plane shall report to
the management plane a cause code identifying the reason for the
failure, and all allocated resources shall be released. A negative
acknowledgment shall be returned to the source.

- Upon a connection request success, a positive acknowledgment shall be
returned to the source when the connection has been successfully
established, and the control plane shall be notified.

- The control plane shall support requests for call release by Call-ID.

- The control plane shall allow any end point or any intermediate node
to initiate call release procedures.

- Upon call release completion, all resources associated with the call
shall become available for new requests.

- The management plane shall be able to release calls or connections
established by the control plane, both gracefully and forcibly, on
demand.

- Partially deleted calls or connections shall not remain within the
network.

- End-to-end acknowledgments shall be used for connection deletion
requests.

- Connection deletion shall not result in either restoration or
protection being initiated.

- The control plane shall support management plane and neighboring
device requests for status query.

- The UNI shall support initial registration and updates of the UNI-C
with the network via the control plane.
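The following non-normative sketch (invented names throughout)
illustrates the call set-up behavior implied by the requirements above:
a network-unique Call-ID is generated, CAC is applied, a result code is
returned, and on failure all allocated resources are released before a
negative acknowledgment is sent:

   # Minimal sketch; "cac" and "network" stand in for hypothetical
   # call admission and connection control components.

   import uuid

   def handle_call_setup(request, cac, network):
       if not cac.admit(request):        # resource + authentication
           return {"result": "NACK", "cause": "CAC-rejected"}
       call_id = uuid.uuid4().hex        # network-unique Call-ID
       try:
           network.establish(call_id, request)
       except Exception as err:
           network.release(call_id)      # no partially deleted calls
           return {"result": "NACK", "cause": str(err),
                   "call_id": call_id}
       return {"result": "ACK", "call_id": call_id}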
6.4. Optical Connection Granularity

The service granularity is determined by the specific technology,
framing and bit rate of the physical interface between the ONE and the
client at the edge, and by the capabilities of the ONE. The control
plane needs to support signaling and routing for all the services
supported by the ONE. In general, there should not be a one-to-one
correspondence imposed between the granularity of the service provided
and the maximum capacity of the interface to the user.

The control plane shall support the ITU Rec. G.709 connection
granularity for the OTN network.

The control plane shall support the SDH/SONET connection granularity.

Sub-rate interfaces, such as VT/TU granularity (as low as 1.5 Mb/s),
shall be supported by the optical control plane.

In addition, 1 Gb and 10 Gb granularity shall be supported for the
1 Gb/s and 10 Gb/s (WAN mode) Ethernet framing types, if implemented in
the hardware.

The following Fiber Channel interfaces shall be supported by the
control plane if the given interfaces are available on the equipment:

- FC-12
- FC-50
- FC-100
- FC-200

Encoding of service types in the protocols used shall be such that new
service types can be added by adding new code point values or objects.

6.5. Other Service Parameters and Requirements

6.5.1. Classes of Service

We use "service level" to describe the priority-related characteristics
of connections, such as holding priority, set-up priority and
restoration priority. The current intent is to allow each carrier to
define its actual service levels in terms of priority, protection and
restoration options. Therefore, individual carriers will determine the
mapping of individual service levels to a specific set of quality
features.

The control plane shall be capable of mapping individual service
classes into specific protection and/or restoration options.

6.5.2. Diverse Routing Attributes

The ability to route service paths diversely is a highly desirable
feature. Diverse routing is one of the connection parameters and is
specified at the time of connection creation. The following provides a
basic set of requirements for diverse routing support.

The control plane routing algorithms shall be able to route a single
demand diversely from N previously routed demands, in terms of
link-disjoint, node-disjoint and SRLG-disjoint paths.
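A non-normative sketch of the diversity check is given below; it
assumes each link is annotated with the set of SRLGs it belongs to, and
the data structures (paths with .links, .nodes, .src, .dst attributes)
are illustrative:

   # Accept a candidate path only if it is link-, node- and
   # SRLG-disjoint from previously routed demands.

   def is_diverse(candidate, existing_paths):
       c_links = set(candidate.links)
       c_nodes = set(candidate.nodes) - {candidate.src, candidate.dst}
       c_srlgs = set().union(*(l.srlgs for l in candidate.links))
       for p in existing_paths:
           if c_links & set(p.links):
               return False              # shares a link
           if c_nodes & (set(p.nodes) - {p.src, p.dst}):
               return False              # shares a transit node
           if c_srlgs & set().union(*(l.srlgs for l in p.links)):
               return False              # shares a risk group
       return True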
7. Optical Service Provider Requirements

This section discusses specific service control and management
requirements from the service provider's point of view.

7.1. Access Methods to Optical Networks

Multiple access methods shall be supported:

- Cross-office access (user NE co-located with the ONE)

- Direct remote access (dedicated links to the user)

- Remote access via an access sub-network (via a
multiplexing/distribution sub-network)

All of the above access methods must be supported.

7.2. Dual Homing and Network Interconnections

Dual homing is a special case of the access network. Client devices can
be dual-homed to the same or different hubs, the same or different
access networks, the same or different core networks, or the same or
different carriers. The different levels of dual-homing connectivity
result in many different combinations of configurations. The main
objective of dual homing is enhanced survivability.

Dual homing must be supported. Dual homing shall not require the use of
multiple addresses for the same client device.

7.3. Inter-domain connectivity

A domain is a portion of a network, or an entire network, that is
controlled by a single control plane entity. This section discusses the
various requirements for connecting domains.

7.3.1. Multi-Level Hierarchy

Traditionally, transport networks are divided into core inter-city
long-haul networks, regional intra-city metro networks and access
networks. Due to differences in transmission technologies, services and
multiplexing needs, the three types of networks are served by different
types of network elements and often have different capabilities. The
diagram below shows an example three-level hierarchical network.

                     +--------------+
                     |  Core Long   |
           +---------+     Haul     +---------+
           |         |  Subnetwork  |         |
           |         +--------------+         |
   +-------+------+                   +-------+------+
   |              |                   |              |
   |   Regional   |                   |   Regional   |
   |  Subnetwork  |                   |  Subnetwork  |
   +-------+------+                   +-------+------+
           |                                  |
   +-------+------+                   +-------+------+
   |              |                   |              |
   | Metro/Access |                   | Metro/Access |
   |  Subnetwork  |                   |  Subnetwork  |
   +--------------+                   +--------------+

             Figure 2  Multi-level hierarchy example

Routing and signaling for multi-level hierarchies shall be supported to
allow carriers to configure their networks as needed.

7.3.2. Network Interconnections

Subnetworks may have multiple points of interconnection. All relevant
NNI functions, such as routing, reachability information exchange and
interconnection topology discovery, must recognize and support multiple
points of interconnection between subnetworks. Dual interconnection is
often used as a survivable architecture.

The control plane shall provide support for routing and signaling for
subnetworks having multiple points of interconnection.

7.4. Names and Address Management

7.4.1. Address Space Separation

To ensure the scalability of, and a smooth migration toward, the
optical switched network, the separation of three address spaces is
required:

- Internal transport network addresses: used for routing control plane
messages within the transport network.

- Transport Network Assigned (TNA) addresses: routable addresses in the
optical transport network.

- Client addresses: addresses that have significance in the client
layer.

7.4.2. Directory Services

Directory services shall support address resolution and translation
between the various user edge device names and the corresponding
optical network addresses. The UNI shall use the user naming schemes
for connection requests.

7.4.3. Network Element Identification

Each control domain, and each network element within it, shall be
uniquely identifiable.

7.5. Policy-Based Service Management Framework

The IPO service must be supported by a robust policy-based management
system in order to make important decisions. Examples of policy
decisions include the following (a sketch of such a policy check
follows the requirements below):

- What types of connections can be set up for a given UNI?

- What information can be shared, and what information must be
restricted, in automatic discovery functions?

- What are the security policies over signaling interfaces?

- What border nodes should be used when routing depends on factors
including, but not limited to, the source and destination addresses,
border node loading, and the time of the connection request?

Requirements:

- Service and network policies related to configuration and
provisioning, admission control, and support of Service Level
Agreements (SLAs) must be flexible, and at the same time simple and
scalable.

- The policy-based management framework must be based on
standards-based policy systems (e.g., IETF COPS).

- In addition, the IPO service management system must support, and be
backwards compatible with, legacy service management systems.
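As a non-normative sketch, a policy decision point for the first
example above might look like the following; the database layout and
all names are assumptions for illustration:

   # Hypothetical policy store: which connection types a given
   # customer may request over a given UNI.

   POLICY_DB = {
       # (customer, interface) -> permitted connection types
       ("cust-a", "UNI-1"): {"SC"},
       ("cust-b", "UNI-7"): {"SC", "SPC"},
   }

   def policy_permits(customer, interface, conn_type):
       # Consulted by CAC before a call request proceeds.
       return conn_type in POLICY_DB.get((customer, interface), set())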
8. Control Plane Functional Requirements for Optical Services

This section addresses the requirements for the optical control plane
in support of service provisioning.

The scope of the control plane includes the control of the interfaces
and network resources within an optical network and of the interfaces
between the optical network and its client networks. In other words, it
should include both NNI and UNI aspects.

8.1. Control Plane Capabilities and Functions

The control capabilities are supported by the underlying control
functions and protocols built into the control plane.

8.1.1. Network Control Capabilities

The following capabilities are required in the network control plane to
successfully deliver automated provisioning of optical services:

- Network resource discovery

- Address assignment and resolution

- Routing information propagation and dissemination

- Path calculation and selection

- Connection management

These capabilities may be supported by a combination of functions
across the control and management planes.

8.1.2. Control Plane Functions for Network Control

The following are essential functions needed to support the network
control capabilities:

- Signaling
- Routing
- Automatic resource, service and neighbor discovery

Specific requirements for signaling, routing and discovery are
addressed in Section 9.

The general requirements for the control plane functions to support
optical networking and service functions include:

- The control plane must have the capability to establish, tear down
and maintain end-to-end connections, and the hop-by-hop connection
segments, between any two end-points.

- The control plane must have the capability to support
traffic-engineering requirements, including resource discovery and
dissemination, constraint-based routing, and path computation.

- The control plane shall support network status or action result code
responses to any requests over the control interfaces.

- The control plane shall support call admission control on the UNI and
connection admission control on the NNI.

- The control plane shall support graceful release of the network
resources associated with a connection upon successful connection
teardown or upon connection failure.

- The control plane shall support management plane requests for
connection attribute/status queries.

- The control plane must have the capability to support various
protection and restoration schemes.

- Control plane failures shall not affect active connections and shall
not adversely impact the transport and data planes.

- The control plane should allow separation of the major control
function entities, including routing, signaling and discovery, and
should allow different control distributions of those functions,
including centralized, distributed or hybrid.

- The control plane should allow physical separation of the control
plane from the transport plane, to support either tightly coupled or
loosely coupled control plane solutions.

- The control plane should allow routing and signaling proxies to
participate in the normal routing and signaling message exchange and
processing.

- Security and resilience are crucial issues for the control plane and
are addressed in Sections 10 and 11 of this document.
8.2. Control Message Transport Network

The control message transport network is a transport network for
control plane messages; it consists of a set of control channels that
interconnect the nodes within the control plane. Therefore, the control
message transport network must be accessible by each of the
communicating nodes (e.g., OXCs). If an out-of-band IP-based control
message transport network is an overlay network built on top of the IP
data network using tunneling technologies, these tunnels must be
standards-based, such as IPSec, GRE, etc.

- The control message transport network must terminate at each of the
nodes in the transport plane.

- The control message transport network shall not be assumed to have
the same topology as the data plane, nor shall the data plane and
control plane traffic be assumed to be congruently routed.

A control channel is the communication path for transporting control
messages between network nodes, and over the UNI (i.e., between the UNI
entity on the user side (UNI-C) and the UNI entity on the network side
(UNI-N)). The control messages include signaling messages, routing
information messages, and other control maintenance protocol messages,
such as those for neighbor and service discovery.

The following three types of signaling over the control channel shall
be supported:

- In-band signaling: The signaling messages are carried over a logical
communication channel embedded in the data-carrying optical link or
channel. For example, using the overhead bytes in SONET data framing as
a logical communication channel falls into the in-band signaling
category.

- In-fiber, out-of-band signaling: The signaling messages are carried
over a dedicated communication channel separate from the optical
data-bearing channels, but within the same fiber. For example, a
dedicated wavelength or TDM channel may be used within the same fiber
as the data channels.

- Out-of-fiber signaling: The signaling messages are carried over a
dedicated communication channel or path within fibers different from
those used by the optical data-bearing channels. For example, dedicated
optical fiber links or a communication path via a separate and
independent IP-based network infrastructure are both classified as
out-of-fiber signaling.

The UNI control channel and proxy signaling defined in OIF UNI 1.0
[OIFUNI] shall be supported.

The control message transport network provides communication mechanisms
between entities in the control plane:

- The control message transport network shall support reliable message
transfer.

- The control message transport network shall have its own OAM
mechanisms.

- The control message transport network shall use protocols that
support congestion control mechanisms.

In addition, the control message transport network should support
message priorities. Message prioritization allows time-critical
messages, such as those used for restoration, to have priority over
other messages, such as other connection signaling messages and
topology and resource discovery messages (see the sketch at the end of
this section).

The control message transport network shall be highly reliable and
shall implement failure recovery.
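A non-normative sketch of such message prioritization is shown below;
the message classes and priority values are assumptions for
illustration:

   # Restoration messages pre-empt ordinary signaling, which in turn
   # precedes topology/resource dissemination.

   import heapq, itertools

   PRIORITY = {"restoration": 0, "signaling": 1, "topology": 2}
   _seq = itertools.count()
   queue = []

   def enqueue(msg_type, msg):
       # Lower number = higher priority; the counter keeps FIFO order
       # within a priority level and avoids comparing message bodies.
       heapq.heappush(queue, (PRIORITY[msg_type], next(_seq), msg))

   def next_message():
       return heapq.heappop(queue)[2] if queue else None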
8.3. Control Plane Interface to Data Plane

In the situation where the control plane and data plane are provided
by different suppliers, this interface needs to be standardized.
Requirements for a standard control-data plane interface are under
study; the specification of such an interface is outside the scope of
this document.

The control plane should support a standards-based interface for
configuring switching fabrics and port functions.

The data plane shall monitor and detect signal failures (LOL, LOS,
etc.) and signal quality degradation (high BER, etc.), and shall
provide signal-failure and signal-degrade alarms to the control plane
to trigger the appropriate mitigation actions in the control plane.

8.4. Management Plane Interface to Data Plane

The management plane shall be responsible for network resource
management in the data plane. It should be able to partition the
network resources and to control their allocation and deallocation
for use by the control plane.

The data plane shall monitor and detect signal failures and signal
quality degradation, and shall provide signal-failure and
signal-degrade alarms, together with the associated detailed fault
information, to the management plane, in order to trigger and enable
fault location and repair.

Management plane failures shall not affect the normal operation of a
configured and operational control plane or data plane.
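The alarm flows described in Sections 8.3 and 8.4 can be summarized
in a non-normative Python sketch: the same data plane alarm is fanned
out as a terse notification to the control plane (to trigger
mitigation) and as a detailed fault record to the management plane
(to enable fault location and repair). The field names and the BER
figure are illustrative assumptions.

   from dataclasses import dataclass, field

   @dataclass
   class ControlPlane:
       notifications: list = field(default_factory=list)

       def notify(self, port, kind):
           # Just enough to trigger protection/restoration for the port.
           self.notifications.append((port, kind))

   @dataclass
   class ManagementPlane:
       fault_log: list = field(default_factory=list)

       def log_fault(self, port, kind, detail):
           # Full record, used for fault location and repair.
           self.fault_log.append((port, kind, detail))

   def dispatch_alarm(port, kind, detail, cp, mp):
       # kind: "signal-fail" (e.g., LOS/LOL) or "signal-degrade" (high BER)
       cp.notify(port, kind)
       mp.log_fault(port, kind, detail)

   cp, mp = ControlPlane(), ManagementPlane()
   dispatch_alarm("oxc1/p3", "signal-degrade", {"ber": 1e-5}, cp, mp)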
8.5. Control Plane Interface to Management Plane

The control plane is considered a managed entity within a network.
Therefore, it is subject to management requirements just as other
managed entities in the network are.

The control plane should be able to service requests from the
management plane for end-to-end connection provisioning (e.g., SPC
connections) and for control plane database queries (e.g., of the
topology database).

The control plane shall report all control plane faults to the
management plane, with detailed fault information.

In general, the management plane shall have authority over the
control plane. The management plane should be able to configure the
routing, signaling and discovery control parameters, such as
hold-down timers and hello intervals, to affect the behavior of the
control plane. In the case of a network failure, both the management
plane and the control plane need fault information at the same
priority. The control plane shall be responsible for providing the
necessary statistics, such as call counts and traffic counts, to the
management plane; these shall be available upon query from the
management plane. The management plane shall be able to tear down
connections established by the control plane, both gracefully and
forcibly, on demand.

8.6. Control Plane Interconnection

When two (sub)networks are interconnected at the transport plane
level, the two corresponding control networks should likewise be
interconnected at the control plane level. The control plane
interconnection model defines how two control networks can be
interconnected, in terms of the controlling relationship and the
control information flow allowed between them.

8.6.1. Interconnection Models

There are three basic types of control plane interconnection models:
overlay, peer and hybrid. They are defined in the IETF IPO WG
framework document [IPO_frame] and discussed in the Appendix.

Choosing the level of coupling depends upon a number of factors,
including:

- The variety of clients using the optical network

- The relationship between the client and optical networks

- The operating model of the carrier

The overlay model (a UNI-like model) shall be supported for
client-to-optical control plane interconnection.

Other models are optional for client-to-optical control plane
interconnection.

For optical-to-optical control plane interconnection, all three
models shall be supported. In general, the priority for support of
the interconnection models should be overlay, hybrid and peer, in
decreasing order.

9. Requirements for Signaling, Routing and Discovery

9.1. Requirements for Information Sharing over UNI, I-NNI and E-NNI

Different types of interfaces impose different requirements and
functionality because of their different trust relationships.
Specifically:

- Topology information shall not be exchanged across the E-NNI or the
UNI.

- The control plane shall allow the carrier to configure the type and
extent of control information exchanged across the various
interfaces.

- Address resolution exchange over the UNI is needed if an addressing
directory service is not available.

9.2. Signaling Functions

Call and connection control and management signaling messages are
used for the establishment, modification, status query and release of
an end-to-end optical connection. Unless otherwise specified, the
word "signaling" refers to both inter-domain and intra-domain
signaling.

- The inter-domain signaling protocol shall be agnostic to the
intra-domain signaling protocols used within the domains of the
network.

- Signaling shall support both strict and loose routing.

- Signaling shall support individual as well as groups of connection
requests.

- Signaling shall support fault notifications.

- Inter-domain signaling shall support per-connection, globally
unique identifiers for all connection management primitives, based on
a well-defined naming scheme.

- Inter-domain signaling shall support crank-back and rerouting.

9.3. Routing Functions

Routing includes reachability information propagation, network
topology/resource information dissemination and path computation.
Network topology/resource information dissemination provides each
node in the network with information about the carrier network so
that a single node can perform constraint-based path selection. A
mixture of hop-by-hop routing, explicit/source routing and
hierarchical routing will likely be used within future transport
networks.

All three mechanisms (hop-by-hop, explicit/source-based and
hierarchical routing) must be supported. Messages crossing untrusted
boundaries must not contain information regarding the details of an
internal network topology.
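The boundary rule above can be stated compactly in a non-normative
sketch: whatever an advertisement contains internally, the version
sent across an untrusted interface (UNI or E-NNI) is reduced to
reachability, next-routing-hop and service capability information.
The field names are illustrative assumptions.

   def externalize(internal_ad):
       # Whitelist of what may cross an untrusted boundary.
       allowed = {"reachable_prefixes", "next_routing_hop",
                  "service_capabilities"}
       return {k: v for k, v in internal_ad.items() if k in allowed}

   ad = {
       "reachable_prefixes": ["10.1.0.0/16"],
       "next_routing_hop": "border-oxc-1",
       "service_capabilities": ["OC-48", "OC-192"],
       "internal_links": [("oxc-3", "oxc-7")],  # must never leave the domain
   }
   assert "internal_links" not in externalize(ad)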
Requirements for routing information dissemination:

- The inter-domain routing protocol shall be agnostic to the
intra-domain routing protocols used within the domains of the
network.

- The exchange of the following types of information shall be
supported by inter-domain routing protocols:

  - Inter-domain topology
  - Per-domain topology abstraction
  - Per-domain reachability information

- Metrics for routing decisions shall support load sharing, a range
of service granularities and service types, restoration capabilities,
diversity, and policy.

The major concerns for routing protocol performance are scalability
and stability, which impose the following requirements on the routing
protocols:

1. The routing protocol shall scale with the size of the network.

2. The routing protocol shall support hierarchical routing
information dissemination, including topology information aggregation
and summarization.

3. The routing protocol shall minimize global information and keep
information locally significant (e.g., local to a node, a sub-network
or a domain) as much as possible. For example, a single optical node
may have thousands of ports; ports with common characteristics need
not be advertised individually. Over external interfaces, only
reachability, next-routing-hop and service capability information
should be exchanged; other network-related information shall not leak
out to other networks.

4. The routing protocol shall distinguish static from dynamic routing
information and shall update the two kinds differently; only dynamic
routing information shall be updated in real time.

5. The routing protocol shall be able to control the update frequency
of dynamic information through thresholds; both absolute and relative
thresholds could be defined.

6. The routing protocol shall support both trigger-based and
timeout-based information updates.

7. The inter-domain routing protocol shall support policy-based
routing information exchange.

8. The routing protocol shall be able to support different levels of
protection/restoration and other resiliency requirements. These are
discussed in Section 10.

All of these scalability techniques reduce the accuracy of the
network resource representation. The tradeoff between the accuracy of
the routing information and the scalability of the routing protocol
is an important consideration for network operators.

9.4. Requirements for Path Selection

The following are functional requirements for path selection (a
non-normative sketch follows this list):

- Path selection shall support shortest-path routing.

- Path selection shall also support constraint-based routing. At
least the following constraints shall be supported:

  - Cost
  - Link utilization
  - Diversity
  - Service class

- Path selection shall be able to include or exclude specific network
resources, based on policy.

- Path selection shall be able to support different levels of
diversity, including node, link, SRLG and SRG.

- Path selection algorithms shall provide carriers the ability to
support a wide range of services and multiple levels of service
classes. Parameters such as service type, transparency, bandwidth,
latency and bit error rate may be relevant.
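As a non-normative illustration of shortest-path selection with
policy-based exclusion of named resources, the following Python
sketch runs a Dijkstra-style search over a toy topology. Real path
selection must also handle diversity (SRLG/SRG), utilization and
service-class constraints; the graph and names here are illustrative
assumptions.

   import heapq

   def select_path(graph, src, dst, excluded=frozenset()):
       # graph: node -> list of (neighbor, link_cost)
       heap, seen = [(0, src, [src])], set()
       while heap:
           cost, node, path = heapq.heappop(heap)
           if node == dst:
               return cost, path
           if node in seen:
               continue
           seen.add(node)
           for nbr, link_cost in graph.get(node, []):
               if nbr not in excluded and nbr not in seen:
                   heapq.heappush(heap,
                                  (cost + link_cost, nbr, path + [nbr]))
       return None  # no feasible path under the given constraints

   graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 1)], "C": []}
   print(select_path(graph, "A", "C"))                  # (2, ['A','B','C'])
   print(select_path(graph, "A", "C", excluded={"B"}))  # (4, ['A','C'])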
9.5. Automatic Discovery Functions

The automatic discovery functions include neighbor, resource and
service discovery.

9.5.1. Neighbor Discovery

Neighbor discovery can be described as an instance of auto-discovery
that is used for associating two network entities within a layer
network, based on a specified adjacency relation.

The control plane shall support the following neighbor discovery
capabilities, as described in [ITU-g7714]:

- Physical media adjacency, which detects and verifies the
physical-layer connectivity between two connected network element
ports.

- Logical network adjacency, which detects and verifies the logical
network-layer connection, above the physical layer, between
network-layer-specific ports.

- Control adjacency, which detects and verifies the logical
neighboring relation between the two control entities associated with
data plane network elements that form a physical or logical
adjacency.

The control plane shall support manual neighbor adjacency
configuration, to either override or supplement the automatic
neighbor discovery function.

9.5.2. Resource Discovery

Resource discovery is concerned with the ability to verify physical
connectivity between two ports on adjacent network elements, to
improve the inventory management of network resources, to detect
configuration mismatches between adjacent ports, and to associate the
port characteristics of adjacent network elements. Resource discovery
shall be supported.

Resource discovery can be achieved through either manual provisioning
or automated procedures. The procedures are generic, while the
specific mechanisms and control information may be
technology-dependent.

After neighbor discovery, resource verification and monitoring must
be performed periodically to verify physical attributes and ensure
compatibility.

9.5.3. Service Discovery

Service discovery can be described as an instance of auto-discovery
that is used for verifying and exchanging the service capabilities of
a network. Service discovery can only take place after neighbor
discovery. Since the service capabilities of a network can change
dynamically, service discovery may need to be repeated.

Service discovery is required for all the optical services supported.
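A non-normative sketch ties these discovery functions together: a
hello message names the sending node and port (neighbor discovery),
an adjacency is recorded only after two-way verification, and the
neighbor's advertised capabilities are stored (service discovery), so
that re-running the exchange picks up capability changes. The message
layout is an illustrative assumption, not the [ITU-g7714] encoding.

   def process_hello(local_port, hello, neighbors):
       # hello = (remote_node, remote_port, saw_us, capabilities)
       remote_node, remote_port, saw_us, capabilities = hello
       if saw_us:  # the far end has already seen our own hello
           neighbors[local_port] = {
               "adjacency": (remote_node, remote_port),  # verified two-way
               "services": set(capabilities),            # service discovery
           }

   neighbors = {}
   process_hello("oxc1/p3", ("oxc2", "p7", True, ["OC-48", "OC-192"]),
                 neighbors)
   print(neighbors["oxc1/p3"]["services"])  # {'OC-48', 'OC-192'} (any order)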
10. Requirements for Service and Control Plane Resiliency

Resiliency is the capability of a network to continue operating in
the presence of failures within the network. The automatic switched
optical network assumes the separation of the control plane and the
data plane; failures in the network can therefore be divided into
those affecting the data plane and those affecting the control plane.
To provide enhanced optical services, resiliency measures should be
implemented in both the data plane and the control plane. The
following failure handling principles shall be supported.

The control plane shall provide optical service failure detection and
recovery functions such that failures in the data plane, within the
control plane's coverage, can be quickly mitigated.

A failure of the control plane shall not in any way adversely affect
the normal functioning of existing optical connections in the data
plane.

In general, there shall be no single point of failure for any of the
major control plane functions, including signaling and routing. The
control plane shall provide reliable transfer of signaling messages
and flow control mechanisms for easing any congestion within the
control plane.

10.1. Service Resiliency

In circuit-switched transport networks, the quality and reliability
of the established optical connections in the transport plane can be
enhanced by the protection and restoration mechanisms provided by the
control plane functions. Rapid recovery is required by transport
network providers to protect services and to support stringent
Service Level Agreements (SLAs) that dictate high reliability and
availability for customer connectivity.

Protection and restoration are closely related techniques for
repairing network node and link failures. Protection is a collection
of failure recovery techniques that rehabilitate failed connections
by pre-provisioning dedicated protection connections and switching to
the protection circuit once a failure is detected. Restoration is a
collection of reactive techniques that rehabilitate failed
connections by dynamically rerouting them around the failure using
shared network resources.

Protection switching is characterized by a shorter recovery time, at
the cost of dedicated network resources, while dynamic restoration is
characterized by a longer recovery time with more efficient resource
sharing. Furthermore, protection and restoration can be performed
either on a per-link/span basis or on an end-to-end connection path
basis. The former is called local repair, initiated by a node closest
to the failure; the latter is called global repair, initiated from
the ingress node.

Failures and signal degradation in the transport plane are usually
technology-specific and therefore shall be monitored and detected by
the transport plane.

The transport plane shall report both physical-level failures and
signal degradation to the control plane, in the form of
signal-failure and signal-degrade alarms.

The control plane shall support both alarm-triggered and
hold-down-timer-based protection switching and dynamic restoration
for failure recovery.

Clients will have different requirements for connection availability.
These requirements can be expressed in terms of a "service level",
which can be mapped to different restoration and protection options
and priority-related connection characteristics, such as holding
priority (e.g., preemptable or not), set-up priority, or restoration
priority. How individual service levels map to a specific set of
protection/restoration options and connection priorities will,
however, be determined by individual carriers (a hypothetical example
is sketched below).

In order for the network to support multiple grades of service, the
control plane must support differing protection and restoration
options on a per-connection basis.
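For one hypothetical carrier, the service-level mapping discussed
above might look like the following non-normative table in code; the
class names, options and priority values are entirely illustrative,
since the text leaves the mapping to individual carriers.

   # Hypothetical per-carrier mapping from service level to
   # protection/restoration options and connection priorities.
   SERVICE_LEVELS = {
       "platinum":    {"protection": "1+1", "restorable": True,
                       "setup_prio": 0, "holding_prio": 0},
       "gold":        {"protection": "1:N", "restorable": True,
                       "setup_prio": 1, "holding_prio": 1},
       "bronze":      {"protection": None, "restorable": True,
                       "setup_prio": 2, "holding_prio": 2},
       "best-effort": {"protection": None, "restorable": False,
                       "setup_prio": 3, "holding_prio": 3},
   }

   def connection_options(service_level):
       # Resolved once at connection setup; applied per connection.
       return SERVICE_LEVELS[service_level]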
In order for the network to support multiple grades of service, the
control plane must also support setup priority, restoration priority
and holding priority on a per-connection basis.

In general, the following protection schemes shall be considered for
all protection cases within the network:

- Dedicated protection: 1+1 and 1:1

- Shared protection: 1:N and M:N

- Unprotected

The control plane shall support an "extra traffic" capability, which
allows unprotected traffic to be transmitted on the protection
circuit.

The control plane shall support both trunk-side and drop-side
protection switching.

The following restoration schemes should be supported:

- Restorable

- Non-restorable

Protection and restoration can be performed on an end-to-end basis
per connection, or on a per-span or per-link basis between two
adjacent network nodes. Both schemes should be supported.

Protection and restoration actions are usually triggered by failures
in the network. However, during network maintenance affecting
protected connections, a network operator needs to proactively force
the traffic on a protected connection to switch to its protection
connection. Therefore, in order to support network maintenance,
management-initiated protection and restoration shall be supported.

Protection and restoration configuration should be achievable through
software alone.

The control plane shall allow the modification of protection and
restoration attributes on a per-connection basis.

The control plane shall support mechanisms for reserving bandwidth
resources for restoration.

The control plane shall support mechanisms for normalizing connection
routing (reversion) after failure repair.

Normal connection management operations (e.g., connection deletion)
shall not result in protection/restoration being initiated.

10.2. Control Plane Resiliency

The control plane may be affected by failures in signaling network
connectivity and by software failures (e.g., in the signaling,
topology and resource discovery modules).

The signaling control plane should implement signaling message
priorities to ensure that restoration messages receive preferential
treatment, resulting in faster restoration.

The optical control plane signaling network shall support protection
and restoration options that enable it to self-heal in case of
failures within the control plane.

Control network failure detection mechanisms shall distinguish
between control channel failures and software process failures.

A control plane failure shall impact only the capability to provision
new services.

Fault localization techniques for the isolation of failed control
resources shall be supported.

Recovery from control plane failures shall result in complete
recovery and re-synchronization of the network.
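The requirement to distinguish control channel failures from software
process failures could be met, for example, by watching two
independent liveness signals, as in this non-normative sketch; the
timeout values are illustrative assumptions.

   import time

   HELLO_TIMEOUT = 3.0      # seconds without a hello from the neighbor
   HEARTBEAT_TIMEOUT = 1.0  # seconds without a local process heartbeat

   def classify_failure(last_hello, last_heartbeat, now=None):
       # A missing local heartbeat points at a software process failure;
       # a missing neighbor hello, with the local process still alive,
       # points at a control channel failure.
       now = time.monotonic() if now is None else now
       if now - last_heartbeat > HEARTBEAT_TIMEOUT:
           return "software-process-failure"
       if now - last_hello > HELLO_TIMEOUT:
           return "control-channel-failure"
       return None  # both liveness signals are current

   print(classify_failure(last_hello=0.0, last_heartbeat=9.5, now=10.0))
   # -> control-channel-failure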
11. Security Considerations

This section describes the security considerations and requirements
for optical services and the associated control plane.

11.1. Optical Network Security Concerns

Since optical services are provided directly over the physical
network, which is fundamental to a telecommunications infrastructure,
stringent security assurance mechanisms should be implemented in
optical networks.

In terms of security, an optical connection has two aspects: the
security of the data plane, to which the optical connection itself
belongs, and the security of the control plane.

11.1.1. Data Plane Security

- Misconnection shall be avoided in order to keep the user's data
confidential. To enhance the integrity and confidentiality of data,
it may be helpful to support scrambling of data at layer 2 or
encryption of data at a higher layer.

11.1.2. Control Plane Security

It is desirable to decouple the control plane from the data plane
physically.

Restoration shall not result in misconnections (connections
established to a destination other than that intended), even for
short periods of time (e.g., during contention resolution). For
example, signaling messages used to restore connectivity after a
failure should not be forwarded by a node before contention has been
resolved.

Additional security mechanisms should be provided to guard against
intrusions on the signaling network. Some of these may be realized
with the help of the management plane.

- Network information shall not be advertised across exterior
interfaces (UNI or E-NNI). The advertisement of network information
across the E-NNI shall be controlled and limited in a configurable,
policy-based fashion. The advertisement of network information shall
be isolated and managed separately by each administration.

- The signaling network itself shall be secure, blocking all
unauthorized access. The signaling network topology and addresses
shall not be advertised outside a carrier's domain of trust.

- Identification, authentication and access control shall be
rigorously used by network operators for providing access to the
control plane.

- Discovery information, including neighbor discovery, service
discovery, resource discovery and reachability information, should be
exchanged in a secure way.

- Information on security-relevant events occurring in the control
plane, and on security-relevant operations performed or attempted in
the control plane, shall be logged in the management plane.

- The management plane shall be able to analyze and exploit the
logged data in order to detect violations of, or threats to, the
security of the control plane.

- The control plane shall be able to generate alarm notifications
about security-related events to the management plane, in an
adjustable and selectable fashion.

- The control plane shall support recovery from successful and
attempted intrusion attacks.
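One way to provide origin authentication and message integrity for
signaling messages is a keyed message authentication code over each
message, sketched below in non-normative Python. The shared key, its
out-of-band provisioning and the absence of replay protection are all
simplifying assumptions; this is not the mechanism mandated by any
UNI/NNI specification.

   import hashlib
   import hmac

   def sign(message: bytes, key: bytes) -> bytes:
       return hmac.new(key, message, hashlib.sha256).digest()

   def verify(message: bytes, tag: bytes, key: bytes) -> bool:
       # Constant-time comparison avoids timing side channels.
       return hmac.compare_digest(sign(message, key), tag)

   key = b"shared-uni-key"           # illustrative; provisioned out of band
   msg = b"SETUP conn-42 OC-48 A->Z"
   tag = sign(msg, key)
   assert verify(msg, tag, key)                               # accepted
   assert not verify(b"SETUP conn-42 OC-192 A->Z", tag, key)  # tampered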
11.2. Service Access Control

From a security perspective, network resources should be protected
from unauthorized access and should not be usable by unauthorized
entities. Service access control is the mechanism that limits and
controls the entities attempting to access network resources. On the
UNI and E-NNI especially, Connection Admission Control (CAC)
functions should also support the following security features:

- CAC should be applied to any entity that tries to access network
resources through the UNI (or E-NNI). CAC should include entity
authentication in order to prevent masquerade (spoofing), the
fraudulent use of network resources by pretending to be a different
entity. An authenticated entity should be given a service access
level on a configurable policy basis.

- The UNI and NNI should provide optional mechanisms to ensure origin
authentication and message integrity for connection management
requests, such as set-up, tear-down and modification requests, and
for connection signaling messages. This is important in order to
prevent denial-of-service attacks. The UNI and E-NNI should also
include mechanisms, such as usage-based billing based on CAC, to
ensure non-repudiation of connection management messages.

- Each entity should be authorized to use network resources according
to the service level it has been given.

12. Acknowledgements

The authors of this document would like to acknowledge the valuable
inputs from John Strand, Yangguang Xu, Deborah Brunhard, Daniel
Awduche, Jim Luciani, Lynn Neir, Wesam Alanqar, Tammy Ferris, Mark
Jones and Jerry Ash.

13. References

[carrier-framework] Y. Xue et al., "Carrier Optical Services
Framework and Associated UNI Requirements",
draft-many-carrier-framework-uni-00.txt, IETF, Nov. 2001.

[oif2001.196.0] M. Lazer, "High Level Requirements on Optical Network
Addressing", oif2001.196.0.

[oif2001.046.2] J. Strand and Y. Xue, "Routing For Optical Networks
With Multiple Routing Domains", oif2001.046.2.

[ipo-impairements] J. Strand et al., "Impairments and Other
Constraints on Optical Layer Routing", Work in Progress, IETF.

[ccamp-gmpls] Y. Xu et al., "A Framework for Generalized
Multi-Protocol Label Switching (GMPLS)", Work in Progress, IETF.

[mesh-restoration] G. Li et al., "RSVP-TE Extensions for Shared Mesh
Restoration in Transport Networks", Work in Progress, IETF.

[sls-framework] Y. T'Joens et al., "Service Level Specification and
Usage Framework", Work in Progress, IETF.

[control-frmwrk] G. Bernstein et al., "Framework for MPLS-Based
Control of Optical SDH/SONET Networks", Work in Progress, IETF.

[ccamp-req] J. Jiang et al., "Common Control and Measurement Plane
Framework and Requirements", Work in Progress, IETF.

[tewg-measure] W. S. Lai et al., "A Framework for Internet Traffic
Engineering Measurement", Work in Progress, IETF.

[ccamp-g.709] A. Bellato, "G.709 Optical Transport Networks GMPLS
Control Framework", Work in Progress, IETF.

[onni-frame] D. Papadimitriou, "Optical Network-to-Network Interface
Framework and Signaling Requirements", Work in Progress, IETF.

[oif2001.188.0] R. Graveman et al., "OIF Security Requirements",
oif2001.188.0.a.

[ASTN] ITU-T Rec. G.8070/Y.1301 (2001), Requirements for the
Automatic Switched Transport Network (ASTN).

[ASON] ITU-T Rec. G.8080/Y.1304 (2001), Architecture of the Automatic
Switched Optical Network (ASON).

[DCM] ITU-T Rec. G.7713/Y.1704 (2001), Distributed Call and
Connection Management (DCM).

[ASONROUTING] ITU-T Draft Rec. G.7715/Y.1706 (2002), Routing
Architecture and Requirements for ASON Networks (work in progress).

[DISC] ITU-T Rec. G.7714/Y.1705 (2001), Generalized Automatic
Discovery.

[DCN] ITU-T Rec. G.7712/Y.1703 (2001), Architecture and Specification
of Data Communication Network.
Authors' Addresses

Yong Xue
UUNET/WorldCom
22001 Loudoun County Parkway
Ashburn, VA 20147
Email: yong.xue@wcom.com

Monica Lazer
AT&T
900 Route 202/206N, PO Box 752
Bedminster, NJ 07921-0000
Email: mlazer@att.com

Jennifer Yates
AT&T Labs
180 Park Ave, P.O. Box 971
Florham Park, NJ 07932-0000
Email: jyates@research.att.com

Dongmei Wang
AT&T Labs
Room B180, Building 103
180 Park Avenue
Florham Park, NJ 07932
Email: mei@research.att.com

Ananth Nagarajan
Sprint
9300 Metcalf Ave
Overland Park, KS 66212, USA
Email: ananth.nagarajan@mail.sprint.com

Hirokazu Ishimatsu
Japan Telecom Co., LTD
2-9-1 Hatchobori, Chuo-ku,
Tokyo 104-0032 Japan
Phone: +81 3 5540 8493
Fax: +81 3 5540 8485
Email: hirokazu@japan-telecom.co.jp

Olga Aparicio
Cable & Wireless Global
11700 Plaza America Drive
Reston, VA 20191
Phone: 703-292-2022
Email: olga.aparicio@cwusa.com

Steven Wright
Science & Technology
BellSouth Telecommunications
41G70 BSC
675 West Peachtree St. NE.
Atlanta, GA 30375
Phone: +1 (404) 332-2194
Email: steven.wright@snt.bellsouth.com

Appendix: Interconnection of Control Planes

The interconnection of the IP router (client) and optical control
planes can be realized in a number of ways, depending on the required
level of coupling. The control planes can be loosely or tightly
coupled. Loose coupling is generally referred to as the overlay
model, and tight coupling as the peer model. Additionally, there is
the augmented model, which lies somewhere between the other two but
is more akin to the peer model. The model selected determines the
following:

- The details of the topology, resource and reachability information
advertised between the client and optical networks

- The level of control IP routers can exercise in selecting paths
across the optical network

The next three sections discuss these models in more detail; the last
section describes the coupling requirements from a carrier's
perspective.

Peer Model (I-NNI-like model)

Under the peer model, the IP router clients act as peers of the
optical transport network, such that a single routing protocol
instance runs over both the IP and optical domains. In this regard,
the optical network elements are treated just like any other router
as far as the control plane is concerned. The peer model, although
not strictly an internal NNI, behaves like an I-NNI in the sense that
there is sharing of resource and topology information.

Presumably a common IGP such as OSPF or IS-IS, with appropriate
extensions, will be used to distribute topology information. One
tacit assumption here is that a common addressing scheme will also be
used for the optical and IP networks. A common address space can be
trivially realized by using IP addresses in both the IP and optical
domains; the optical network elements then become IP-addressable
entities.

The obvious advantage of the peer model is the seamless
interconnection between the client and optical transport networks.
The tradeoff is the tight integration required, and the
optical-specific routing information that must be known to the IP
clients.

The discussion above has focused on the client-to-optical control
plane interconnection.
The discussion applies equally well to interconnecting two optical
control planes.

Overlay Model (UNI-like model)

Under the overlay model, the IP client routing, topology distribution
and signaling protocols are independent of the routing, topology
distribution and signaling protocols at the optical layer. This model
is conceptually similar to the classical IP-over-ATM model, but
applied directly to an optical sub-network.

Though the overlay model dictates that the client and optical
networks be independent, it still allows the optical network to reuse
IP-layer protocols to perform its routing and signaling functions.

In addition to the protocols being independent, the addressing
schemes used by the client and the optical network must be
independent in the overlay model. That is, the use of IP-layer
addressing in the clients must not place any specific requirement
upon the addressing used within the optical control plane.

The overlay model would provide a UNI to the client networks, through
which the clients could request the addition, deletion or
modification of optical connections. The optical network would
additionally provide reachability information to the clients, but no
topology information would be provided across the UNI.

Augmented Model (E-NNI-like model)

Under the augmented model, there are separate routing instances in
the IP and optical domains, but information from one routing instance
is passed through the other routing instance. For example, external
IP addresses could be carried within the optical routing protocols to
allow reachability information to be passed to IP clients. A typical
implementation would use BGP between the IP client and the optical
network.

The augmented model, although not strictly an external NNI, behaves
like an E-NNI in that there is limited sharing of information.

Generally, in a carrier environment there will be more than just IP
routers connected to the optical network; other examples of clients
include ATM switches and SONET ADM equipment. This may drive the
decision towards loose coupling, to prevent undue burdens upon
non-IP-router clients. Loose coupling would also ensure that future
clients are not hampered by legacy technologies.

Additionally, a carrier may, for business reasons, want a separation
between the client and optical networks. For example, the ISP
business unit may not want to be tightly coupled with the optical
network business unit. Another reason for separation might simply be
organizational politics within a large carrier: it seems unlikely
that the optical transport network could be forced to run the same
set of protocols as the IP router networks. Moreover, forcing the
same set of protocols on both networks ties their evolution directly
together; the optical transport network protocols could not be
upgraded without considering the impact on the IP router network (and
vice versa).

Operating models also play a role in deciding the level of coupling.
[Freeland] gives four main operating models envisioned for an optical
transport network:

- An ISP owning all of its own infrastructure (i.e., including fiber
and duct to the customer premises)

- An ISP leasing some or all of its capacity from a third party

- A carrier's carrier providing layer 1 services

- A service provider offering multiple layer 1, 2 and 3 services over
a common infrastructure

Although relatively few, if any, ISPs fall into the first category,
it would seem the most likely of the four to use the peer model. The
other operating models lend themselves more naturally to the overlay
model. Most carriers fall into the fourth category and thus would
most likely choose an overlay model architecture.

Full Copyright Statement

Copyright (C) The Internet Society (2002). All Rights Reserved.

This document and translations of it may be copied and furnished to
others, and derivative works that comment on or otherwise explain it
or assist in its implementation may be prepared, copied, published
and distributed, in whole or in part, without restriction of any
kind, provided that the above copyright notice and this paragraph are
included on all such copies and derivative works. However, this
document itself may not be modified in any way, such as by removing
the copyright notice or references to the Internet Society or other
Internet organizations, except as needed for the purpose of
developing Internet standards in which case the procedures for
copyrights defined in the Internet Standards process must be
followed, or as required to translate it into languages other than
English.

The limited permissions granted above are perpetual and will not be
revoked by the Internet Society or its successors or assigns.

This document and the information contained herein is provided on an
"AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING
BUT NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION
HEREIN WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.