INTERNET-DRAFT
Document: draft-ietf-ipo-carrier-requirements-00.txt
Category: Informational
Expiration Date: January, 2002

                                                     Yong Xue (Editor)
                                                        UUNET/Worldcom

                                                          Monica Lazer
                                                           John Strand
                                                        Jennifer Yates
                                                          Dongmei Wang
                                                                  AT&T

                                                      Ananth Nagarajan
                                                             Lynn Neir
                                                         Wesam Alanqar
                                                          Tammy Ferris
                                                                Sprint

                                                    Hirokazu Ishimatsu
                                                Japan Telecom Co., LTD

                                                         Steven Wright
                                                             Bellsouth

                                                         Olga Aparicio
                                               Cable & Wireless Global

                 Carrier Optical Services Requirements

Status of this Memo

This document is an Internet-Draft and is in full conformance with all provisions of Section 10 of RFC2026. Internet-Drafts are working documents of the Internet Engineering Task Force (IETF), its areas, and its working groups. Note that other groups may also distribute working documents as Internet-Drafts.

Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or rendered obsolete by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."

The list of current Internet-Drafts can be accessed at http://www.ietf.org/ietf/1id-abstracts.txt.

The list of Internet-Draft Shadow Directories can be accessed at http://www.ietf.org/shadow.html.

Abstract

This contribution describes a carrier's optical services framework and associated requirements for the optical network. As such, this document concentrates on the requirements driving the work towards realization of ASON. This document is intended to be protocol-neutral.

Table of Contents

1. Introduction....................................................3
   1.1 Justification................................................3
   1.2 Conventions used in this document............................3
   1.3 Background...................................................3
   1.4 Value Statement..............................................4
   1.5 Scope of This Document.......................................5
2. Definitions and Terminology.....................................5
3. General Requirements............................................6
   3.1 Separation of Networking Functions...........................6
   3.2 Network and Service Scalability..............................7
   3.3 Transport Network Technology.................................7
   3.4 Service Building Blocks......................................8
4. Service Model and Applications..................................8
5. Network Reference Model........................................11
   5.1 Optical Networks and Subnetworks............................11
   5.2 Network Interfaces..........................................11
   5.3 Intra-Carrier Network Model.................................15
   5.4 Inter-Carrier Network Model.................................16
6. Optical Service User Requirements..............................17
   6.1 Connection Management.......................................17
   6.2 Optical Services............................................20
   6.3 Levels of Transparency......................................21
   6.4 Optical Connection granularity..............................21
   6.5 Other Service Parameters and Requirements...................23
7. Optical Service Provider Requirements..........................25
   7.1 Access Methods to Optical Networks..........................25
   7.2 Bearer Interface Types......................................26
   7.3 Names and Address Management................................26
   7.4 Link Identification.........................................29
   7.5 Policy-Based Service Management Framework...................29
   7.6 Multiple Hierarchies........................................32
8. Control Plane Functional Requirements for Optical Services.....32
   8.1 Control Plane Capabilities and Functions....................32
   8.2 Signaling Network...........................................34
   8.3 Control Plane Interface to Data Plane.......................36
   8.4 Control Plane Interface to Management Plane.................36
   8.5 Control Plane Interconnection...............................41
9. Requirements for Signaling, Routing and Discovery..............43
   9.1 Signaling Functions.........................................44
   9.2 Routing Functions...........................................46
   9.3 Automatic Discovery Functions...............................49
10. Requirements for service and control plane resiliency.........54
   10.1 Service resiliency.........................................54
   10.2 Control plane resiliency...................................58
11. Security concerns and requirements............................58
   11.1 Data Plane Security and Control Plane Security.............58
   11.2 Service Access Control.....................................59
   11.3 Optical Network Security Concerns..........................62

1. Introduction

1.1 Justification

The charter of the IPO WG calls for a document on "Carrier Optical Services Requirements" for IP/Optical networks. This document addresses that aspect of the IPO WG charter. Furthermore, this document was accepted as an IPO WG document by unanimous agreement at the IPO WG meeting held on March 19, 2001, in Minneapolis, MN, USA. It presents a carrier and end-user perspective on optical network services and requirements.

1.2 Conventions used in this document

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

1.3 Background

The next generation optical transport network (OTN) will consist of optical cross-connects (OXC), DWDM optical line systems (OLS) and optical add-drop multiplexers (OADM), based on the architecture defined in ITU-T Recommendation G.872 [G.872]. The OTN is an optical transport network bounded by a set of optical channel access points and has a layered structure consisting of optical channel, multiplex section and transmission section sub-layer networks. Optical networking encompasses the functionality for establishment, transmission, multiplexing, switching, protection, and restoration of optical connections carrying a wide range of user signals of varying formats and bit rates.

It is an emerging trend to enhance the OTN with an intelligent optical layer control plane to dynamically provision network resources and to provide network survivability using mesh-based protection and restoration techniques. The resulting intelligent networks are called automatic switched optical networks, or ASON.

The emerging and rapidly evolving automatic switched optical networking (ASON) technologies [G.ASON] are aimed at providing optical networks with intelligent networking functions and capabilities in the control plane to enable wavelength switching, rapid optical connection provisioning and dynamic rerouting. This new networking platform will create tremendous business opportunities for network operators and service providers to offer new services to the market.

1.4 Value Statement

By deploying ASON technology, a carrier expects to achieve the following benefits from both technical and business perspectives:

Rapid Circuit Provisioning: ASON technology will enable the dynamic end-to-end provisioning of optical connections across the optical network by using standard routing and signaling protocols.

Enhanced Survivability: ASON technology will enable the network to dynamically reroute an optical connection in case of a failure using mesh-based network protection and restoration techniques, which greatly improves cost-effectiveness compared to the current line and ring protection schemes in SONET/SDH networks.

Cost Reduction: ASON networks will enable the carrier to better utilize the optical network, thus achieving significant unit cost reduction per megabit due to the cost-effective nature of the optical transmission technology, simplified network architecture and reduced operation cost.

Service Flexibility: ASON technology will support provisioning of an assortment of existing and new services such as protocol- and bit-rate-independent transparent network services, and bandwidth-on-demand services.

Editor's Note: The next revision will make this more explicit with respect to the relationship with the ASON control plane.

Enhanced Interoperability: ASON technology will use a control plane based on industry and international standard architectures and protocols, which facilitates the interoperability of optical network equipment from different vendors.

In addition, the introduction of a standards-based control plane offers the following potential benefits:
- Reactive traffic engineering at the optical layer, which allows network resources to be dynamically allocated to traffic flows.
- Reduced need for service providers to develop new operational support system software for network control and new service provisioning on the optical network, thus speeding up the deployment of optical network technology and reducing software development and maintenance costs.
- Potential development of a unified control plane that can be used for different transport technologies including OTN, SONET/SDH, ATM and PDH.

1.5 Scope of This Document

This IPO working group (WG) document is aimed at providing, from the carrier's perspective, a service framework and associated requirements in relation to the optical services to be offered in the next generation optical networking environment and the service control and management functions.

As such, this document concentrates on the requirements driving the work towards realization of ASON. This document is intended to be protocol-neutral.

Note: It is recognized by the carriers writing this document that some features and requirements are not supported by protocols being developed in the IETF.
However, the purpose of this document is to specify generic carrier functional requirements.

Editor's Note - We may add a statement that these are not all-inclusive requirements, and keep it until future revisions make this an all-inclusive list of requirements.

Every carrier's needs are different. The objective of this document is NOT to define specific service models. Instead, some major service building blocks are identified that will enable carriers to mix and match in order to create the service platform best suited to their business model. These building blocks include generic service types, service enabling control mechanisms and service control and management functions. The ultimate goal is to provide the requirements to guide the control protocol developments within the IETF in terms of IP over optical technology.

In this document, we consider IP a major client of the optical network, but the same requirements and principles should be equally applicable to non-IP clients such as SONET/SDH, ATM, ITU G.709, etc.

2. Definitions and Terminology

Optical Transport Network (OTN)
SONET/SDH Network
Automatic Switched Transport Network (ASTN)
Optical Service Carriers
Transparent and Opaque Network
Other Terminology
Bearer channels

Abbreviations

ASON  Automatic Switched Optical Networking
ASTN  Automatic Switched Transport Network
AD    Administrative Domain
AND   Automatic Neighbor Discovery
ASD   Automatic Service Discovery
CAC   Connection Admission Control
DCM   Distributed Connection Management
E-NNI Exterior NNI
IWF   InterWorking Function
I-NNI Interior NNI
IrDI  Inter-Domain Interface
IaDI  Intra-Domain Interface
INC   Intra-network Connection
NNI   Node-to-Node Interface
NE    Network Element
OTN   Optical Transport Network
OLS   Optical Line System
OCC   Optical Connection Controller
PI    Physical Interface
SLA   Service Level Agreement
UNI   User-to-Network Interface

3. General Requirements

In this section, a number of generic requirements related to the service control and management functions are discussed.

3.1 Separation of Networking Functions

It makes logical sense to segregate the networking functions within each layer network into three logical functional network planes: the control plane, the data plane and the management plane. They are responsible for providing network control functions, data transmission functions and network element management functions, respectively.

Control Plane: includes the functions related to networking control capabilities such as routing, signaling, and policy control, as well as resource and service discovery.

Data Plane (transport plane): includes the functions related to bearer channels and transmission.

Management Plane: includes the functions related to the management of network elements, networks and network services.

Each plane consists of a set of interconnected functional or control entities responsible for providing the networking or control functions defined for that network layer.

The crux of the ASON network is the networking intelligence that contains automatic routing, signaling and discovery functions to automate the network control functions; these automatic control functions are collectively called the control plane functions.
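
As an informal illustration of the plane separation described above, the sketch below models the three functional planes and the control plane functions that ASON automates. This is a minimal Python sketch under assumed names (e.g., NetworkPlane); it is not part of the requirements and does not correspond to any protocol or management data model.

   # Illustrative sketch only: modeling the separation of networking
   # functions into control, data and management planes (Section 3.1).
   # All names used here are hypothetical.
   from dataclasses import dataclass, field
   from typing import Set

   @dataclass
   class NetworkPlane:
       name: str
       functions: Set[str] = field(default_factory=set)

   control_plane = NetworkPlane("control", {
       "routing", "signaling", "policy control",
       "resource discovery", "service discovery"})
   data_plane = NetworkPlane("data (transport)", {
       "bearer channels", "transmission"})
   management_plane = NetworkPlane("management", {
       "network element management", "network management",
       "service management"})

   # The ASON control plane automates routing, signaling and discovery:
   assert {"routing", "signaling"} <= control_plane.functions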

The separation of the control plane from both the data and management planes is beneficial to carriers in that it:
- Allows equipment vendors to have a modular system design that will be more reliable and maintainable, thus reducing the overall systems ownership and operation cost.
- Allows carriers the flexibility to choose a third-party vendor's control plane software system as the control plane solution for their switched optical network.
- Allows carriers to deploy a unified control plane and OSS systems to manage and control the different types of transport networks they own.
- Allows carriers to use a separate control network specially designed and engineered for control plane communications.

Requirement 1. The control traffic and user data traffic shall not be assumed to be congruently routed under the same topology, because the control transport network topology may very well be different from that of the data transport network.

Note: This is in contrast to the IP network, where control messages and user traffic are routed and switched based on the same network topology due to the associated in-band signaling nature of the IP network.

3.2 Network and Service Scalability

In terms of the scale and complexity of the future optical network, the following assumptions can be made when considering the scalability and performance requirements of the optical control and management functions.

Within one operator subnetwork:
- There may be hundreds of OXC nodes
- There may be thousands of terminating ports/wavelengths per OXC node
- There may be hundreds of parallel fibers between a pair of OXC nodes
- There may be hundreds of wavelength channels transmitted on each fiber.

The number of optical connections on a network varies depending upon the size of the network.

Requirement 2. Although specific applications may be on a small scale, the protocol itself shall not limit large-scale networks.

3.3 Transport Network Technology

Optical services can be offered over different types of underlying optical technologies. The underlying transport technology will to a certain degree determine the features and constraints of the services.

This document assumes standards-based transport technologies such as SONET/SDH and OTN (G.709).

3.4 Service Building Blocks

The ultimate goal of this document is to identify a set of basic service building blocks that carriers can mix and match to create the service models best suited to their business needs.

Editor's Note: May need list of building blocks in view of document content.

4. Service Model and Applications

A carrier's optical network supports multiple types of service models. Each service model may have its own service operations, target markets, and service management requirements.

4.1 Static Provisioned Bandwidth Service (SPB)

Static Provisioned Bandwidth Service creates Soft Permanent Connections. Soft Permanent Connections are those connections initiated from the management plane, but completed through the control plane and its interactions with the management plane. These connections traditionally fall within the category of circuit provisioning and are characterized by very long holding times.

Requirement 3. The control plane shall allow the management plane control of network resources for network management including, but not limited to, management of soft permanent connections.

Service Concept: The SPB supports enhanced leased line and private line services. The network operator provides connection provisioning at the customer's request through the carrier's network operation center. Provisioning could take some time, and the provisioning process could be manual or semi-manual. The specific functionalities of SPB offered by a carrier may be carrier specific, but any network capability that can be invoked by, say, signaling across the UNI shall also be directly accessible by the network operator's network provisioning and network management work centers. This is basically the "point and click" type of provisioning service currently proposed by many vendors. The connections established in this way are so-called permanent or soft-permanent connections.

Service Operation: During the provisioning process multiple network resources are reserved and dedicated to the specific path. The control interface is either human (e.g., a customer calls a customer service representative) or via a customer network management system (e.g., a customer may make its request over a secure web site or by logging into a specialized OSS). Any provisioned bandwidth service facility is tracked. The path is stored as a data object (or structure) containing information relating to the connection attributes and the physical entities used in creating the path (e.g., ingress and egress, NE ports, cross-office and inter-office facilities). This information is used to reserve network resources at provisioning time, to track performance parameters, and to perform maintenance functions. An end-to-end managed service may involve multiple networks, e.g., both access networks and an intercity network. In this case provisioning may be initiated by whichever network has primary service responsibility.

Target Market: SPB service focuses on customers unable to request connections using direct signaling to the network, customers with complex engineering requirements that cannot be handled autonomously by the operator's optical layer control plane, customers requiring connections to off-net locations, and customers who need an end-to-end managed service offered by (or out-sourced to) carriers.

Service Management: SPB service involves the carrier's management system. The connections provided by SPB may be under the control of value-added network management services, such as specific path selection, complex engineering requirements, or customer-required monitoring functions. The connection should be deleted only at the customer's request. Billing of SPB will be based on the bandwidth, service duration, quality-of-service, and other characteristics of the connection. In the SPB model, the user shall not have any information about the optical network; however, information on the health of the provisioned connection and other technical aspects of this connection may be provided to the user as part of the service agreement.

4.2 Bandwidth-on-Demand Service (BOD)

Bandwidth on Demand Service supports management of switched connections. Switched connections are those connections initiated by the user edge device over the UNI and completed through the control plane.
These connections may be more dynamic than soft permanent connections and have much shorter holding times.

Service Concept: In the SPB model, the user is required to pay the cost of the connection independent of the usage of the connection. In current data private line services, the average utilization rate is very low and most of the bits are unused. This is mainly due to time-of-day and day-of-week reasons. Even though businesses close down at night and over the weekend, the user still needs to pay for SPB connections. In the BOD model, there shall be the potential of tearing down a user's connection when the business is closed and giving it back to the user again when the business day begins. This is the service model of bandwidth on demand. In the BOD service model, connections are established and reconfigured in real time, and are so-called switched optical connections. Signaling between the user NE and the optical layer control plane initiates all necessary network activities. A real-time commitment for a future connection may also be established. A standard set of "branded" service options is available. The functionality available is a proper subset of that available to SPB Service users and is constrained by the requirement for real-time provisioning, among other things. Availability of the requested connection is contingent on resource availability.

Service Operation: This service provides support for real-time creation of bandwidth between two end-points. The time needed to set up bandwidth on demand shall be on the order of seconds, preferably sub-seconds. To support dynamic connection establishment, the end terminals shall already be physically connected to the network with adequate capacity. Ingress into the network needs to be pre-provisioned for point-to-point ingress facilities. Also, the necessary cross-connects throughout the network shall be set up automatically upon service request. To provide BOD services, UNI signaling between the user edge device and the network edge device is required for all connection end-points. The BOD service request shall be completed if and only if the request is consistent with the relevant SLAs, the network can support the requested connection, and the user edge device at the other end point accepts the connection.

Target Market: BOD service focuses on customers, such as ISPs, large intranets, and other data and SDH/SONET networks, requiring large point-to-point capacities and having very dynamic demands, as well as customers supporting UNI functions in their edge devices.

Service Management: BOD service provides customers the possibility of rapid provisioning and high service utilization. Since connection establishment is not part of the functions of the network management system, connection management may be offered as a value-added service according to SLAs. Also, connection admission control shall be provided at connection request time. The connection shall be deleted at the customer's request from either the source endpoint or the destination endpoint. Billing of BOD shall be based on the bandwidth, service duration, quality-of-service, and other characteristics of the connection.
In the BOD model, the user shall not have any information about the optical network; however, information on the health of the provisioned connection and other technical aspects of this connection may be provided to the user via the UNI connection request.

4.3 Optical Virtual Private Network (OVPN)

Service Concept: The customer may contract for some specific network resources (capacity between OXCs, OXC ports, OXC switching resources) such that the customer is able to control these resources to reconfigure the optical cross-connections and establish, delete, and maintain connections. In effect the customer would have a dedicated optical sub-network under its own control.

Service Operations: For future study.

Target Market: OVPN service focuses on customers, such as ISPs, large intranets, carriers, and other networks requiring large point-to-point capacities and having variable demands who wish to integrate the control of their service and optical layers, as well as business-to-business broadband solution assemblers.

Service Management: OVPN service provides the customer the possibility of loaning some optical network resources such that the customer is able to maintain its own sub-network. Since OVPN connection maintenance is no longer part of the functions of the network management system, connection management may provide some value-added services according to SLAs. In the OVPN model, there is no connection admission control from the carrier and the customer is free to reconfigure its network resources. Billing of OVPN shall be based on the network resources contracted. Network connection acceptance shall involve only a simple check to ensure that the request is in conformance with the capacities and constraints specified in the OVPN service agreement.

Requirement 4. In the OVPN model, real-time information about the state of all resources contracted for shall be made available to the customer. Depending on the service agreement, this may include information on both in-effect and spare resources accessible to the customer.

5. Network Reference Model

This section discusses the major architectural and functional components of a generic carrier optical network, which should provide a reference model for describing the requirements for carrier optical services.

5.1 Optical Networks and Subnetworks

There are two main types of optical networks that are currently under consideration: the SDH/SONET network as defined in ITU G.707 and T1.105, and the OTN network as defined in ITU G.872.

We assume an optical transport network (OTN) is composed of a set of optical cross-connects (OXC) and optical add-drop multiplexers (OADM) which are interconnected in a general mesh topology using DWDM optical line systems (OLS).

For ease of discussion and description, it is often convenient to treat an optical network as an opaque subnetwork, in which the details of the network become less important; instead, the focus is on the functions and interfaces the optical network provides. In general, an opaque subnetwork can be defined as a set of access points on the network boundary and a set of point-to-point optical connections between those access points.

5.2 Network Interfaces

A generic carrier network reference model describes a multi-carrier network environment.
Each individual carrier network can be further 544 partitioned into domains or sub-networks based on administrative, 545 technological or architectural reasons. The demarcation between 546 (sub)networks can be either logical or physical and consists of a set 547 of reference points identifiable in the optical network. From the 548 control plane perspective, these reference points define a set of 549 control interfaces in terms of optical control and management 550 functionality. The following is an illustrative diagram for this. 552 +---------------------------------------+ 553 | | 554 +--------------+ | | 555 | | | +------------+ +------------+ | 556 | IP | | | | | | | 557 | Network +-E-UNI-+ Optical +-I-UNI--+ Carrier IP | | 558 | | | | Subnetwork | | network | | 559 +--------------+ | | +--+ | | | 560 | +------+-----+ | +------+-----+ | 561 | | | | | 562 | I-NNI I-NNI E-UNI | 563 +--------------+ | | | | | 564 | | | +------+-----+ | +------+-----+ | 565 | IP +-E-UNI-| | +-----+ | | 566 | Network | | | Optical | | Optical | | 567 | | | | Subnetwork +-I-NNI--+ Subnetwork | | 568 +--------------+ | | | | | | 569 | +------+-----+ +------+-----+ | 570 | | | | 571 +---------------------------------------+ 572 I-UNI E-NNI 573 | | 574 +------+-------+ +----------------+ 575 | | | | 576 | Other Client | | Other Carrier | 577 | Network | | Network | 578 | (ATM/SONET) | | | 579 +--------------+ +----------------+ 580 Figure 5.1 Generic Carrier Network Reference Model 582 The network interfaces encompass two aspects of the networking 583 functions: user data plane interface and control plane interface. The 584 former concerns about user data transmission across the network 585 interface and the latter concerns about the control message exchange 586 across the network interface such as signaling, routing, etc. We call 587 the former physical interface (PI) and the latter control plane 588 interface. Unless otherwise stated, the control interface is assumed in 589 the remaining of this document. 591 Control Plane Interfaces 593 Control interface defines a relationship between two connected network 594 entities on both side of the interface. For each control interface, we 595 need to define an architectural function each side plays and a 596 controlled set of information, which can be exchanged across the 597 interface. The information flowing over this logical interface may 598 include: 599 - Endpoint name and address 601 - Reachability/summarized network address information 602 - Topology/routing information 603 - Authentication and connection admission control information 604 - Connection service messages 605 - Network resource control information (I-NNI only) 607 Different types of the interfaces can be defined for the network 608 control and architectural purposes and can be used as the network 609 reference points in the control plane. 610 The User-Network Interface (UNI) is a bi-directional signaling 611 interface between service requester and service provider control plane 612 entities. 613 We differentiate between interior (I-UNI) and exterior (E-UNI) UNI as 614 follows: 615 E-UNI: A bi-directional signaling interface between service requester 616 and control plane entities belonging to different domains. Information 617 flows include support of connection flows and address resolution. 618 I-UNI: A bi-directional signaling interface between service requester 619 and control plane entities belonging to one or more domains having a 620 trusted relationship. 

Editor's Note: Details of the I-UNI have to be worked out.

The Network-Network Interface (NNI) is the interface between two optical networks or sub-networks, specifically between the two directly linked edge ONEs of the two interconnected networks.

We differentiate between interior (I-NNI) and exterior (E-NNI) NNI as follows:

E-NNI: A bi-directional signaling interface between control plane entities belonging to different domains. Information flows include support of connection flows and also reachability information exchanges.

I-NNI: A bi-directional signaling interface between control plane entities belonging to one or more domains having a trusted relationship. Information flows over the I-NNI also include topology information.

It should be noted that it is quite possible to use the E-NNI even between subnetworks with a trust relationship, in order to keep topology information exchanges only within the subnetworks.

Generally, two networks have a trust relationship if they belong to the same administrative domain, and do not have a trust relationship if they belong to different administrative domains.

Generally speaking, the following levels of trust interfaces shall be supported:

Interior interface: an interface is interior when there is a trusted relationship between the two connected networks.

Exterior interface: an interface is exterior when there is no trusted relationship between the two connected networks.

Interior interface examples include an I-NNI between two optical sub-networks belonging to a single carrier or an I-UNI interface between the optical transport network and an IP network owned by the same carrier. Exterior interface examples include an E-NNI between two different carriers or an E-UNI interface between a carrier optical network and its customers.

The two types of interfaces may define different architectural functions and distinctive levels of access, security and trust relationship.

Editor's Note: More work is needed in defining specific functions on interior and exterior interfaces.

Requirement 5. The control plane interfaces shall be configurable and their behavior shall be consistent with the configuration (i.e., exterior versus interior interfaces).

5.3 Intra-Carrier Network Model

The carrier's optical network is treated as a trusted domain, which is defined as a network under a single technical administration with a full trust relationship within the network. Within a trusted domain, all the optical network elements and sub-networks are considered to be secure and trusted by each other. A highly simplified optical networking environment consists of an optical transport network and a set of interconnected client networks of various types such as IP, ATM and SONET.

In the intra-carrier model, within a carrier-owned network, generally interior interfaces (I-NNI and I-UNI) are assumed.

The interfaces between the carrier-owned network equipment and the optical network are an interior UNI, and the interfaces between optical sub-networks within a carrier's administrative domain are an interior NNI; while the interfaces between the carrier's optical network and its users are an exterior UNI, and the interfaces between optical networks of different operators are an exterior NNI.
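
To illustrate the intent of the interface classification above, the following is a minimal Python sketch of how an implementation might record which information flows (from the list in Section 5.2) are permitted across each reference point. The exact partitioning of flows shown here is an assumption for illustration only; it is not a normative mapping defined by this document.

   # Illustrative sketch only: permitted information flows per reference
   # point. The flow names and partitioning are assumptions.
   ALLOWED_FLOWS = {
       # exterior UNI: untrusted client-to-network signaling
       "E-UNI": {"endpoint names/addresses", "connection service messages",
                 "authentication/CAC information"},
       # interior UNI: trusted client within the carrier
       "I-UNI": {"endpoint names/addresses", "connection service messages",
                 "authentication/CAC information", "reachability summaries"},
       # exterior NNI: between carriers; reachability but no topology detail
       "E-NNI": {"connection service messages", "reachability summaries",
                 "authentication/CAC information"},
       # interior NNI: trusted subnetworks; topology and resource information
       "I-NNI": {"connection service messages", "reachability summaries",
                 "topology/routing information", "network resource control"},
   }

   def may_exchange(interface: str, flow: str) -> bool:
       """Return True if the given flow is permitted over the interface."""
       return flow in ALLOWED_FLOWS.get(interface, set())

   assert may_exchange("I-NNI", "topology/routing information")
   assert not may_exchange("E-NNI", "topology/routing information")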

One business application for the interior UNI is the case where a carrier service operator offers data services such as IP, ATM and Frame Relay over its optical core network. Data service network elements such as routers and ATM switches are considered to be internal optical service client devices. The interconnection topology among the carrier NEs should be completely transparent to the users of the data services.

5.3.1 Multiple Sub-networks

Without loss of generality, the optical network owned by a carrier service operator can be depicted as consisting of one or more optical sub-networks interconnected by direct optical links. There may be many different reasons for having more than one optical sub-network: it may be the result of using hierarchical layering, different technologies across access, metro and long haul (as discussed below), or a result of business mergers and acquisitions or incremental optical network technology deployment by the carrier using different vendors or technologies.

A sub-network may be a single-vendor and single-technology network, but in general the carrier's optical network is heterogeneous in terms of equipment vendors and the technology utilized in each sub-network. There are four possible scenarios:

- Single vendor and single technology
- Single vendor and multiple technologies
- Multiple vendors and single technology
- Multiple vendors and multiple technologies.

5.3.2 Access, Metro and Long-haul networks

Few carriers have end-to-end ownership of the optical networks. Even if they do, the access, metro and long-haul networks often belong to different administrative divisions, and each of them forms an optical sub-network. Therefore inter-(sub)network interconnection is essential in terms of supporting end-to-end optical service provisioning and management. The access, metro and long-haul networks may use different technologies and architectures, and as such may have different network properties.

In general, an end-to-end optical connection may easily cross multiple sub-networks with the following possible scenarios:

Access -- Metro -- Access
Access -- Metro -- Long Haul -- Metro -- Access

Editor's Note: More details will be added in a later revision of this draft.

5.4 Inter-Carrier Network Model

The inter-carrier model focuses on the service and control aspects between different carrier networks and describes the internetworking relationship between the different carriers' optical networks. In the inter-carrier network model, each carrier's optical network is a separate administrative domain. Both the UNI interface between the user and the carrier network and the NNI interface between two carriers' networks cross the carriers' administrative boundaries and are therefore by definition exterior interfaces.

Carrier Network Interconnection

Inter-carrier interconnection provides for connectivity among different optical network operators. Just as the success and scalability of the Internet has in large part been attributed to inter-domain routing protocols like BGP, so will be the future success of the optical network. The normal connectivity between carriers may include:

Private Peering: Two carriers set up a dedicated connection between them via a private arrangement.

Public Peering: Two carriers set up a point-to-point connection between them at a public optical network access point (ONAP).

Due to the nature of the automatically switched optical network, it is also possible to have distributed peering, where two distant ONEs are interconnected via an optical connection.

6. Optical Service User Requirements

An optical connection will traverse two UNI interfaces and zero or more NNI interfaces, depending on whether it is between two client network users crossing a single carrier's network or between two client network users crossing multiple carriers' networks.

6.1 Connection Management

6.1.1 Basic Connection Management

In a connection-oriented transport network a connection must be established before data can be transferred. This requires, as a minimum, that the following connection management actions shall be supported:

Set-up Connection is initiated by the management plane on behalf of an end-user or by the end-user signaling device. The results are as follows: If set-up of the connection is successful, then the optical circuit, resources, or required bandwidth is dedicated to the associated end-points. Dedicated resources may include active resources as well as protection or restoration resources in accordance with the class of service indicated by the user. If set-up of the connection is not successful, a negative response is returned to the initiating entity and any partial allocation of resources is de-allocated.

Editor's Note - may need to mention the ACK from the user on connection create confirmation.

Teardown Connection is initiated by the management plane on behalf of an end-user or by the end-user signaling device. The results are as follows: the optical circuit, resources or the required bandwidth are freed up for later use. Dedicated resources are also freed. Shared resources are only freed if there are no active connections sharing the same protection or restoration resources. If tear down is not successful, a negative response shall be returned to the end-user.

Query Connection is initiated by the management plane on behalf of an end-user or by the end-user signaling device. A status report is returned to the querying entity.

Accept/Reject Connection is initiated by the end-user signaling device. This command is relevant in the context of switched connections only. The destination end-user shall have the opportunity to accept or reject new connection requests or connection modifications.

Furthermore, the following requirements need to be considered:

Requirement 6. The control plane shall support result code responses to any requests over the control interfaces.

Requirement 7. The control plane shall support requests for connection set-up, subject to policies in effect between the user and the network.

Requirement 8. The control plane shall support the destination user edge device's decision to accept or reject connection creation requests from the initiating user edge device.

Requirement 9. The control plane shall support the user request for connection tear down.

Requirement 10. The control plane shall support management plane and user edge device requests for connection attributes or status query.
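
The basic connection management actions and result-code requirements above (Requirements 6 through 10) can be pictured as a small request/response message set. The following Python sketch is illustrative only; the message and field names are assumptions and do not correspond to any specific UNI or NNI signaling protocol.

   # Illustrative sketch only: a hypothetical model of the basic
   # connection management actions and result codes described above.
   from dataclasses import dataclass
   from enum import Enum
   from typing import Optional

   class Action(Enum):
       SETUP = "setup"
       TEARDOWN = "teardown"
       QUERY = "query"
       ACCEPT = "accept"    # destination edge device accepts the connection
       REJECT = "reject"    # destination edge device rejects the connection

   class ResultCode(Enum):          # Requirement 6: result codes on requests
       SUCCESS = 0
       INSUFFICIENT_RESOURCES = 1   # CAC denial (Requirement 17)
       POLICY_VIOLATION = 2         # request outside the policy in effect
       REJECTED_BY_DESTINATION = 3  # Requirement 8

   @dataclass
   class ConnectionRequest:
       action: Action
       source_endpoint: str         # client-layer name, e.g. IP or NSAP based
       destination_endpoint: str
       service_class: Optional[str] = None  # maps to protection/restoration
       bandwidth: Optional[str] = None      # e.g. "STS-48c-SPE" / "VC-4-16c"

   @dataclass
   class ConnectionResponse:
       result: ResultCode
       connection_id: Optional[str] = None  # returned on successful set-up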

In addition, there are several actions that need to be supported which are not directly related to an individual connection, but are necessary for establishing healthy interfaces. The requirements below show some of these actions:

Requirement 11. UNI shall support initial registration of the UNI-C with the network.

Requirement 12. UNI shall support registration and updates by the UNI-C entity of the edge devices and user interfaces that it controls.

Requirement 13. UNI shall support network queries of the user edge devices.

Requirement 14. UNI shall support detection of user edge device or of edge ONE failure.

In addition, connection admission control (CAC) is necessary for authentication of the user and controlling access to network resources.

Requirement 15. CAC shall be provided as part of the control plane functionality. It is the role of the CAC function to determine if there is sufficient free resource available to allow a new connection.

Requirement 16. If there is sufficient resource available, the CAC may permit the connection request to proceed.

Requirement 17. If there is not sufficient resource available, the CAC shall notify the originator of the connection request that the request has been denied.

6.2 Enhanced Connection Management

6.2.1 Compound Connections

Multiple point-to-point connections may be managed by the network so as to appear as a single compound connection to the end-points. Examples of such compound connections are connections based on virtual concatenation, diverse routing, or restorable connections.

Compound connections are distinguished from basic connections in that a UNI request will generate multiple parallel NNI signaling sessions.

Connection Restoration
The control plane should provide the signaling and routing capabilities to permit connection restoration based on the user's request for its assigned service class.

Diverse Routing
The control plane should provide the signaling and routing capabilities to permit a user to request diversely routed connections from a carrier who supports this functionality.

Multicast Connections
The control plane should provide the signaling and routing capabilities to permit a user to request multicast connections from a carrier who supports this functionality.

6.2.2 Supplemental Services

Requirement 18. The control plane shall provide support for the development of supplementary services that are independent of the bearer service.

Where these are carried across networks using a range of protocols, it is necessary to ensure that the protocol interworking provides a consistent service as viewed by the user regardless of the network implementation.

Requirement 19. The control plane shall support closed user groups. This allows a user group to create, for example, a virtual private network.

Supplementary services may not be required or possible for soft permanent connections.

6.2.3 Optical VPNs

In optical virtual private networks, the customer contracts for specific network resources (capacity between OXCs, OXC ports, OXC switching resources) and is able to control these resources to establish, disconnect, and reconfigure optical connections.

Requirement 20. The control plane should provide the signaling and routing capabilities to permit a user to request optical virtual private networks from a carrier who supports this functionality.

6.3 Optical Services

Optical services embody a large range of transport services. Currently, most transport systems are SONET/SDH based; however, innovations in optical technology such as photonic switching bring about the distinct possibility of support for pure optical transport services, while the proliferation of Ethernet coupled with advancements in the technology to support 1 Gb/s and 10 Gb/s interfaces are drivers to make this service class widely available.

Transparent Service assumes that the user requires optical transport without the network being aware of the framing. However, since transmission systems and the engineering rules that apply have dependencies on the signal bandwidth, even for transparent optical services, knowledge of the bandwidth requirements is essential.

Opaque Service refers to transport services where signal framing is negotiated between the user and the network operator, and only the payload is carried transparently. SONET/SDH transport is most widely used for network-wide transport, and as such is discussed in most detail in the following sections.

As stated above, Ethernet Services, specifically 1 Gb/s and 10 Gb/s Ethernet services, are gaining more and more popularity due to the lower costs of the customers' premises equipment and its simplified management requirements (compared to SONET or SDH). Therefore, more and more network customers have expressed a high level of interest in support of these transport services.

Ethernet services may be carried over either SONET/SDH or photonic networks. As discussed in subsequent sections, Ethernet service requests require some service-specific parameters: priority class, VLAN Id/Tag, and traffic aggregation parameters.

Also gaining ground in the industry are Storage Area Network (SAN) services. ESCON and FICON are proprietary versions of the service, while Fiber Channel is the standard alternative. As discussed in subsequent sections, Fiber Channel service may require a latency parameter, since the protocol between the service clients and the server may be dependent on the transmission delays (the service is sensitive to delays in the range of hundreds of microseconds). As is the case with Ethernet services, SAN services may be carried over either SONET/SDH (using GFP mapping) or photonic networks. Currently SAN services require only point-to-point connections, but it is envisioned that in the future they may also require multicast connections.

6.4 Levels of Transparency

Bitstream connections are framing aware - the exact signal framing is known or needs to be negotiated between the network operator and the user. However, there may be multiple levels of transparency for individual framing types. Current transport networks are mostly based on SONET/SDH technology. Therefore, multiple levels have to be considered when defining specific optical services.

The example below shows multiple levels of transparency applicable to SONET/SDH transport.
- SONET Line and section OH (SDH multiplex and regenerator section OH) are normally terminated and a large set of parameters can be monitored by the network.
- Line and section OH are carried transparently
- Non-SONET/SDH transparent bit stream

6.5 Optical Connection granularity

The service granularity is determined by the specific technology, framing and bit rate of the physical interface between the ONE and the user edge device and by the capabilities of the ONE. The control plane needs to support signaling and routing for all the services supported by the ONE. Connection granularity is defined by a combination of framing (e.g., SONET or SDH) and bandwidth of the signal carried over the network for the user. The connection and associated properties may define the physical characteristics of the optical connection. However, the consumable attribute is bandwidth. In general, there should not be a one-to-one correspondence imposed between the granularity of the service provided and the maximum capacity of the interface to the user.

Requirement 21. The SDH and SONET connection granularity, shown in the table below, shall be supported by the control plane. Any specific NE's control plane implementation needs to support only the subset consistent with its hardware.

Editor's Note: An OTN table for service granularity will be added.

   SDH name   SONET name        Transported signal
   --------   ---------------   ------------------------------------------
   RS64       STS-192 Section   STM-64 (STS-192) signal without termination of any OH.
   RS16       STS-48 Section    STM-16 (STS-48) signal without termination of any OH.
   MS64       STS-192 Line      STM-64 (STS-192); termination of RSOH (section OH) possible.
   MS16       STS-48 Line       STM-16 (STS-48); termination of RSOH (section OH) possible.
   VC-4-64c   STS-192c-SPE      VC-4-64c (STS-192c-SPE); termination of RSOH (section OH), MSOH (line OH) and VC-4-64c TCM OH possible.
   VC-4-16c   STS-48c-SPE       VC-4-16c (STS-48c-SPE); termination of RSOH (section OH), MSOH (line OH) and VC-4-16c TCM OH possible.
   VC-4-4c    STS-12c-SPE       VC-4-4c (STS-12c-SPE); termination of RSOH (section OH), MSOH (line OH) and VC-4-4c TCM OH possible.
   VC-4       STS-3c-SPE        VC-4 (STS-3c-SPE); termination of RSOH (section OH), MSOH (line OH) and VC-4 TCM OH possible.
   VC-3       STS-1-SPE         VC-3 (STS-1-SPE); termination of RSOH (section OH), MSOH (line OH) and VC-3 TCM OH possible.
                                Note: In SDH this could be a higher order or lower order VC-3, identified by the sub-addressing scheme. In case of a lower order VC-3 the higher order VC-4 OH can be terminated.
   VC-2       VT6-SPE           VC-2 (VT6-SPE); termination of RSOH (section OH), MSOH (line OH), higher order VC-3/4 (STS-1-SPE) OH and VC-2 TCM OH possible.
   -          VT3-SPE           VT3-SPE; termination of section OH, line OH, higher order STS-1-SPE OH and VT3-SPE TCM OH possible.
   VC-12      VT2-SPE           VC-12 (VT2-SPE); termination of RSOH (section OH), MSOH (line OH), higher order VC-3/4 (STS-1-SPE) OH and VC-12 TCM OH possible.
   VC-11      VT1.5-SPE         VC-11 (VT1.5-SPE); termination of RSOH (section OH), MSOH (line OH), higher order VC-3/4 (STS-1-SPE) OH and VC-11 TCM OH possible.

Requirement 22. In addition, 1 Gb and 10 Gb granularity shall be supported for 1 Gb/s and 10 Gb/s (WAN mode) Ethernet framing types, if implemented in the hardware.

Requirement 23. For SAN services the following interfaces have been defined and shall be supported by the control plane if the given interfaces are available on the equipment:
- FC-12
- FC-50
- FC-100
- FC-200

In addition, extensions of the intelligent optical network functionality towards the edges of the network in support of sub-rate interfaces (as low as 1.5 Mb/s) will require support of VT/TU granularity.

Requirement 24. Therefore, sub-rate extensions in ONEs supporting sub-rate fabric granularity shall support VT-x/TU-1n granularity down to VT1.5/TU-11, consistent with the hardware.

Requirement 25. The connection types supported by the control plane shall be consistent with the service granularity and interface types supported by the ONE.

The control plane and its associated protocols should be extensible to support new services as needed.

Requirement 26. Encoding of service types in the protocols used shall be such that new service types can be added by adding new codepoint values or objects.

Note: Additional attributes may be required to ensure proper connectivity between endpoints.

6.6 Other Service Parameters and Requirements

6.6.1 Classes of Service

We use "service level" to describe priority-related characteristics of connections, such as holding priority, set-up priority, or restoration priority. The intent currently is to allow each carrier to define the actual service level in terms of priority, protection, and restoration options. Therefore, mapping of individual service levels to a specific set of priorities will be determined by individual carriers.

Requirement 27. Multiple service level options shall be supported and the user shall have the option of selecting over the UNI a service level for an individual connection.

However, in order for the network to support multiple grades of restoration, the control plane must identify, assign, and track multiple protection and restoration options.

Requirement 28. Therefore, the control plane shall map individual service classes into specific protection and/or restoration options.

Specific protection and restoration options are discussed in Section 10. However, it should be noted that while high grade services may require allocation of protection or restoration facilities, there may be an application for a low grade of service for which pre-emptable facilities may be used.

Individual carriers will select appropriate options for protection and/or restoration in support of their specific network plans.

6.6.2 Connection Latency

Connection latency is a parameter required for support of Fiber Channel services. Connection latency is dependent on the circuit length, and as such for these services it is essential that shortest path algorithms are used and end-to-end latency is verified before acknowledging circuit availability.

Editor's Note: more detail may be required here.

6.6.3 Diverse Routing Attributes

The ability to route service paths diversely is a highly desirable feature. Diverse routing is one of the connection parameters and is specified at the time of connection creation. The following provides a basic set of requirements for diverse routing support.
1125 - Diversity compromises between two links being used for routing should 1126 be defined in terms of Shared Risk Link Groups (SRLG - see 1127 [draft-chaudhuri-ip-olxc-control-00.txt]), a group of links which share 1128 some resource, such as a specific sequence of conduits or a specific 1129 office. An SRLG is a relationship between the links that should be 1130 characterized by two parameters:
1131 - Type of Compromise: Examples would be shared fiber cable, shared 1132 conduit, shared right-of-way (ROW), shared link on an optical ring, 1133 shared office with no power sharing, etc.
1134 - Extent of Compromise: For compromised outside plant, this would be 1135 the length of the sharing.

1137 Requirement 29. The control plane routing algorithms shall be able 1138 to route a single demand diversely from N previously routed demands, 1139 where diversity would be defined to mean that no more than K demands 1140 (previously routed plus the new demand) should fail in the event of a 1141 single covered failure.

1143 7. Optical Service Provider Requirements

1145 7.1 Access Methods to Optical Networks

1147 Multiple access methods shall be supported:
1148 - Cross-office access (User NE co-located with ONE) 1149 In this scenario the user edge device resides in the same office 1150 as the ONE and has one or more physical connections to the ONE. 1151 Some of these access connections may be in use, while others may 1152 be idle pending a new connection request.
1153 - Direct remote access 1154 In this scenario the user edge device is remotely located from the 1155 ONE and has inter-location connections to the ONE over multiple 1156 fiber pairs or via a DWDM system. Some of these connections may be 1157 in use, while others may be idle pending a new connection request.
1158 - Remote access via access sub-network 1159 In this scenario remote user edge devices are connected to the ONE 1160 via a multiplexing/distribution sub-network. Several levels of 1161 multiplexing may be assumed in this case. This scenario is 1162 applicable to metro/access subnetworks that aggregate signals from multiple 1163 users, out of which only a subset have connectivity to the ONE.

1165 Requirement 30. All access methods must be supported.

1167 7.1.1 Dual Homing

1169 Dual homing is a special case of the access network. Dual homing may 1170 take different flavors, and as such affects interface design in more 1171 than one way:
1172 - A client device may be dual homed on the same subnetwork
1173 - A client device may be dual homed on different subnetworks within the 1174 same administrative domain (and the same domain as the core 1175 subnetwork)
1176 - A client device may be dual homed on different subnetworks within the 1177 same administrative domain (but a different domain from the core 1178 subnetwork)
1179 - A client device may be dual homed on different subnetworks of 1180 different administrative domains.
1181 - A metro subnetwork may be dual homed on the same core subnetwork, 1182 within the same administrative domain
1184 - A metro subnetwork may be dual homed on the same core subnetwork, of 1185 a different administrative domain
1186 - A metro network may be dual homed to separate core subnetworks, of 1187 different administrative domains.

1188 The different flavors of dual homing will have a great impact on 1189 admission control, reachability information exchanges, authentication, and 1190 neighbor and service discovery across the interface.

1192 Requirement 31. Dual homing must be supported.
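Whether demands are routed diversely (Requirement 29 above) or a client is dual homed for survivability, the diversity actually achieved is ultimately evaluated against SRLGs. The following sketch, in Python with purely hypothetical names and data structures, is an informative illustration (not part of the requirements) of how a path computation function might check that accepting a new demand leaves no single covered failure able to take down more than K demands:

      # Illustrative sketch only; names and data structures are hypothetical.
      # Each routed demand is represented by the set of SRLG identifiers
      # that its path traverses.

      def violates_diversity(new_srlgs, routed_demands, k):
          """Return True if adding a demand whose path traverses 'new_srlgs'
          would let more than k demands (previously routed plus the new one)
          fail on a single covered failure, i.e. on a single SRLG."""
          for srlg in new_srlgs:
              # Count previously routed demands already exposed to this SRLG.
              exposed = sum(1 for srlgs in routed_demands if srlg in srlgs)
              if exposed + 1 > k:      # +1 accounts for the new demand
                  return True
          return False

      # Example with two routed demands and K = 1 (full diversity required).
      routed = [{"conduit-17", "office-A"}, {"row-5"}]
      print(violates_diversity({"conduit-17"}, routed, k=1))  # True: shares conduit-17
      print(violates_diversity({"conduit-42"}, routed, k=1))  # False: fully diverse

A complete implementation would also weigh the type and extent of each compromise, as described above.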
1194 7.2 Bearer Interface Types

1196 Requirement 32. All the bearer interfaces implemented in the ONE 1197 shall be supported by the control plane and associated signaling 1198 protocols.

1200 The following interface types shall be supported by the signaling 1201 protocol:
1202 - SDH
1203 - SONET
1204 - 1 Gb Ethernet, 10 Gb Ethernet (WAN mode)
1205 - 10 Gb Ethernet (LAN mode)
1206 - FC-N (N = 12, 50, 100, or 200) for Fibre Channel services
1207 - OTN (G.709)
1208 - PDH
1209 - Transparent optical

1211 7.3 Names and Address Management

1213 In this section addressing refers to optical layer addressing; it is 1214 an identifier required by the routing and signaling protocols within the 1215 optical network. Identification used by other logical entities outside 1216 the optical network control plane (such as higher layer services 1217 addressing schemes or a management plane addressing scheme) may be used 1218 as naming schemes by the optical network. Recognizing that multiple 1219 types of higher layer services need to be supported by the optical 1220 network, multiple user edge device naming schemes must be supported, 1221 including at a minimum IP and NSAP naming schemes.
1222 The control plane shall use the higher layer service address as a name 1223 rather than as a routable address. The control plane must know what 1224 internal addressing scheme is used within the control plane domain. 1225 Optical layer addresses shall be provisionable for each connection 1226 point managed by the control plane. Dynamic address assignment schemes 1227 are desirable in the control plane; however, if the assignment 1228 is not dynamic, then connection point addresses need to be configurable 1229 from the management plane. In either case, the management system must 1230 be able to query the currently assigned value.

1232 While IP-centric services are considered by many as one of the drivers 1233 for optical network services, it is also widely recognized that the 1234 optical network will be used in support of a large array of both data 1235 and voice services. In order to achieve real-time provisioning for all 1236 services supported by the optical network while minimizing OSS 1237 development by carriers, it is essential for the network to support a 1238 UNI definition that does not exclude non-IP services.

1240 Requirement 33. For this reason, multiple naming schemes shall be 1241 supported to allow network intelligence to grow towards the edges.

1243 One example of naming is the use of physical entity naming. 1245 Carrier Network Elements identify individual ports by their location 1246 using a "CO/NE/bay/shelf/slot/port" addressing scheme. 1247 Similarly, facilities are identified by a 1248 "route id/fiber/wavelength/timeslot" scheme.
1249 Mapping of Physical Entity addressing to Optical Network addressing 1250 shall be supported. Name-to-address translation, similar to DNS, should 1251 be supported.
1252 To realize fast provisioning and bandwidth-on-demand services in 1253 response to router requests, it is essential to support IP naming.

1255 Requirement 34. Mapping of higher layer user IP naming to Optical 1256 Network Addressing shall be supported.
1257 European carriers use NSAP naming for private lines, and many US data-centric 1258 applications, including ATM-based services, also use NSAP 1259 addresses. As such, it is important that NSAP naming be 1260 supported.

1262 Requirement 35. Mapping of higher layer NSAP naming to Optical 1263 Network Addressing shall be supported.
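The following sketch, written in Python with hypothetical names and interfaces, is an informative illustration (not a definition) of the DNS-like mapping implied by Requirements 33-35: higher layer names of any scheme are treated as opaque keys that resolve to optical network addresses, which alone are meaningful to the optical control plane:

      # Illustrative sketch only; the registry interface, name formats and the
      # "ona:" address format are assumptions, not defined by this document.

      class OpticalNameRegistry:
          """Maps higher layer names (IP, NSAP, physical entity) to Optical
          Network Addresses (ONAs). Names are opaque keys to the optical
          network; only the ONA is used for routing in the control plane."""

          def __init__(self):
              self._by_name = {}   # (scheme, name) -> ONA

          def register(self, scheme, name, ona):
              self._by_name[(scheme, name)] = ona

          def resolve(self, scheme, name):
              # Returns the ONA, or None if the name is not registered.
              return self._by_name.get((scheme, name))

      registry = OpticalNameRegistry()
      registry.register("ip", "192.0.2.1", "ona:0007.0021")
      registry.register("nsap", "47.0005.80.005a00", "ona:0007.0042")
      registry.register("phys", "CO-A/NE-3/bay2/shelf1/slot4/port8", "ona:0007.0042")
      print(registry.resolve("ip", "192.0.2.1"))   # ona:0007.0021

Note that the same ONA may be reached through more than one naming scheme, while the internal switch addresses behind it remain hidden from the user, consistent with the ONA requirements that follow.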
1265 Requirement 36. Listed below are additional Optical Network 1266 Address (ONA) requirements:
1267 1) There shall be at least one globally unique address associated with 1268 each user device. A user device may have one or more ports connected 1269 to the network.
1270 2) The address space shall support connection management across multiple 1271 networks, both within one administrative domain and across multiple 1272 administrative domains.
1273 3) Address hierarchies shall be supported.
1274 4) Address aggregation and summarization shall be supported. (This is 1275 actually an NNI requirement.)
1276 5) Dual homing shall allow, but not require, the use of multiple 1277 addresses, whether within the same administrative domain or across 1278 multiple administrative domains.
1280 6) An international body is needed to administer the address space. Note that 1281 this need is independent of what addressing scheme is used, and it 1282 concerns both the user and the network operator communities.
1283 7) The size of the Optical Network Address shall be sufficient to avoid 1284 address exhaustion within the next 50 years. The address space shall 1285 scale up to a large base of customers and to a large number of 1286 operators.
1287 8) Internal switch addresses shall not be derivable from ONAs and shall 1288 not be advertised to the customer.
1289 9) The ONA shall not imply network characteristics (port numbers, port 1290 granularity, etc.).
1291 10) ONA reachability deals with connectivity and not with the user 1292 device being powered up (reachability updates are triggered by 1293 registration and deregistration, not by client device reboots; name 1294 registration persists for as long as the user retains the same ONA, 1295 i.e., until de-registration).
1296 11) ONAs shall be independent of user names, higher layer services 1297 (i.e., they should support IP, ATM, PL, etc.) and optical network internal 1298 routing addresses. User names are opaque to the optical network. User 1299 equipment and other optical carriers have no knowledge of optical 1300 network internal routing addresses, including port information.
1301 12) The client (user) name should not make assumptions about what 1302 capabilities are offered by the server (service provider) name, and 1303 thus the semantics of the two name spaces should be separate and 1304 distinct. This does not place any constraints on the syntax of client 1305 and server layer name spaces, or of the user and service provider 1306 name spaces (G.astn draft).
1307 13) The addressing scheme shall not impede use of either the client-server 1308 or the peer model within an operator's network.
1309 14) There should be a single standard, fixed space of addresses to 1310 which names will be mapped from a wide range of higher layer 1311 services.

1313 7.3.1 Address Space Separation

1315 Requirement 37. The control plane must support all types of client 1316 addressing.
1317 Requirement 38. The control plane must use the client address as a 1318 name rather than as a routable address.
1319 Requirement 39. The control plane must know what internal 1320 addressing scheme is used within the control plane domain.

1322 7.3.2 Directory Services

1324 Requirement 40. Directory Services shall be supported to enable an 1325 operator to query the optical network for the optical network address 1326 of a specified user.
1327 Requirement 41. Address resolution and translation between the various 1328 user edge device names and the corresponding optical network addresses shall 1329 be supported.
1330 Requirement 42.
UNI shall use the user naming schemes for 1331 connection request. 1333 7.4 Link Identification 1335 Optical devices might have thousands of incoming and outgoing 1336 connections. This will be of concern when trying to provide globally 1337 unique addresses to all optical nodes in an optical network. 1338 Requirement 43. The control plane should be able to address NE 1339 connection points with addresses that are locally defined. 1340 Requirement 44. The control plane should be able to advertise and 1341 signal for locally defined and non-unique addresses that have only 1342 local significance. This would allow for re-use of the addressing 1343 space. 1344 There is the issue of providing addresses for the optical nodes or 1345 devices that form the ASON/ASTN. The other issue is providing addresses 1346 for the incoming and outgoing connections/ports within each optical 1347 node/device. The first issue is not a problem, since the optical 1348 devices/nodes can use the standard IP or NSAP address space. Providing 1349 locally defined address space that can be re-used in other optical 1350 nodes within the domain can solve providing address space for the 1351 ports/connections within each node. So, the optical nodes within a 1352 domain or multiple domains in the network can communicate with each 1353 other using the standard address space like IP or NSAP. The switching & 1354 forwarding within each optical node can be based on locally defined 1355 addresses. 1357 7.5 Policy-Based Service Management Framework 1359 The IPO service must be supported by a robust policy-based management 1360 system to be able to make important decisions. 1361 Examples of policy decisions include: 1362 - What types of connections can be set up for a given UNI? 1363 - What information can be shared and what information must be 1364 restricted in automatic discovery functions? 1365 - What are the security policies over signaling interfaces? 1367 Requirement 45. Service and network policies related to 1368 configuration and provisioning, admission control, and support of 1370 Service Level Agreements (SLAs) must be flexible, and at the same 1371 time simple and scalable. 1373 Requirement 46. The policy-based management framework must be based 1374 on standards-based policy systems (e.g. IETF COPS). 1375 Requirement 47. In addition, the IPO service management system must 1376 support and be backwards compatible with legacy service management 1377 systems. 1379 7.5.1 Admission control 1381 Connection admission functionality required must include authentication 1382 of client, verification of services, and control of access to network 1383 resources. 1385 Requirement 48. The policy management system must determine what 1386 kind of connections can be set up for a given UNI. 1387 Connection Admission Control (CAC) is required for authentication of 1388 users (security), verification of connection service level parameters 1389 and for controlling access to network resources. The CAC policy should 1390 determine if there are adequate network resources available within the 1391 carrier to support each new connection. CAC policies are outside the 1392 scope of standardization. 1394 Requirement 49. When a connection request is received by the 1395 control plane, it is necessary to ensure that the resources exist 1396 within the optical transport network to establish the connection. 1397 Requirement 50. 
In addition to the above, the control plane 1398 elements need the ability to rate limit (or pace) call setup attempts 1399 into the network. 1401 This is an attempt to prevent overload of the control plane processors. 1402 In application to SPC type connections this might mean that the setup 1403 message would be slowed or buffered in order to handle the current 1404 load. 1406 Another aspect of admission control is security. 1408 Requirement 51. The policy-based management system must be able to 1409 authenticate and authorize a client requesting the given service. The 1410 management system must also be able to administer and maintain 1411 various security policies over signaling interfaces. 1413 7.5.2 SLA Support 1415 Requirement 52. The service management system should employ 1416 features to ensure client SLAs. 1418 In addition to setting up connections based on resource availability to 1419 meet SLAs, the management system must periodically monitor connections 1420 for the maintenance of SLAs. Complex SLAs, such as time-of-day or 1421 multiple-service-class based SLAs, should also be satisfied. In order 1422 to do this, the policy-based service management system should support 1423 automated SLA monitoring systems that may be embedded in the management 1424 system or may be separate entities. Mechanisms to report events of not 1425 meeting SLAs, or a customer repeatedly using more than the SLA, should 1426 be supported by the SLA monitoring system. Other off-line mechanisms 1427 to forecast network traffic growth and congestion via simulation and 1428 modeling systems, may be provided to aid in efficient SLA management. 1429 Another key aspect to SLA management is SLA translation. 1431 Requirement 53. In particular, policy-based Class of Service 1432 management schemes that accurately translate customer SLAs to 1433 parameters that the underlying mechanisms and protocols in the 1434 optical transport network can understand, must be supported. 1436 Consistent interpretation and satisfaction of SLAs is especially 1437 important when an IPO spans multiple domains or service providers. 1439 7.6 Inter-Carrier Connectivity 1441 Inter-carrier connectivity has specific implications on the admission 1442 control and SLA support aspects of the policy-based service management 1443 system. 1444 Multiple peering interfaces may be used between two carriers, whilst 1445 any given carrier is likely to peer with multiple other carriers. These 1446 peering interfaces must support all of the functions defined in section 1447 9, although each of these functions has a special flavor when applied 1448 to this interface. 1450 Carriers will not allow other carriers control over their network 1451 resources, or visibility of their topology or resources. Therefore, 1452 topology and resource discovery should not be supported between 1453 carriers. There may of course be instances where there is high degree 1454 of trust between carriers, allowing topology and resource discovery, 1455 but this would be a rare exception. 1457 Requirement 54. Inter-carrier connectivity shall be based on E-NNI. 1458 To provide connectivity between clients connected to different carriers 1459 requires that client reachability information be exchanged between 1460 carriers. Additional information regarding network peering points and 1461 summarized network topology and resource information will also have to 1462 be conveyed beyond the bounds of a single carrier. 
This information is 1463 required to make route selections for connections traversing multiple 1464 carriers. 1466 Given that detailed topology and resource information is not available 1467 outside a carrier's trust boundary, routing of connections over 1469 multiple carriers will involve selection of the autonomous systems 1470 (ASs) traversed. This can be defined using a series of peering points. 1471 More detailed route selection is then performed on a per carrier basis, 1472 as the signaling requests are received at each carrier's peering 1473 points. The detailed connection routing information should not be 1474 conveyed across the carrier trust boundary. 1476 CAC, as described above, is necessary at each trust interface, 1477 including those between carriers (see Section 11.2 for security 1478 considerations). 1480 Similar to dual homing it is possible to have inter-carrier 1481 connectivity over multiple diverse routes. These connectivity models 1482 support multi hosting. 1484 Editor's Note: further discussion on this will be added in a later 1485 revision. 1487 7.7 Multiple Hierarchies 1489 Transport networks are built in a tiered, hierarchal architecture. 1490 Also, by applying control plane support to service and facilities 1491 management, separate and distinct network layers may need to be 1492 supported across the same inter-domain interface. Furthermore, for 1493 large networks, it may be required to support multiple levels of 1494 routing domains. 1496 Requirement 55. Multi level hierarchy must be supported. 1498 Editor's Note: more details will be added as required. 1500 Network layer hierarchies 1501 Services (IP, SAN, Ethernet) 1502 Transport: SONET/SDH/Ethernet 1503 DWDM, Optics 1504 Address space hierarchies 1505 Geographical hierarchies 1506 Functional hierarchies 1507 Network Topology hierarchies 1508 Access, metro, inter-city, long haul - as routing areas. Any one 1509 large routing area may need to be decomposed in sub-areas. 1511 8. Control Plane Functional Requirements for Optical Services 1513 8.1 Control Plane Capabilities and Functions 1515 8.1.1 Network Control Capabilities 1517 The following capabilities are required in the network control plane to 1518 successfully deliver automated provisioning: 1519 - Neighbor discovery 1520 - Address assignment 1521 - Connection topology discovery 1522 - Address resolution 1524 - Reachability information dissemination 1525 - Connection Management 1526 These capabilities may be supported by a combination of functions 1527 across the control and the management planes. 1529 8.1.2 Control Plane Functions 1531 The following are essential functions needed to support network control 1532 capabilities: 1533 - Signaling 1534 - Routing 1535 - Resource and Service discovery 1537 Signaling is the process of control message exchange using a well- 1538 defined signaling protocol to achieve communication between the 1539 controlling functional entities connected through a specified 1540 communication channel. It is often used for dynamic connection set-up 1541 across a network. Signaling is used to disseminate information between 1542 network entities in support of all network control capabilities. 1543 Routing is a distributed networking process within the network for 1544 dynamic dissemination and propagation of the network information among 1545 all the routing entities based on a well-defined routing protocol. It 1546 enables the routing entity to compute the best path from one point to 1547 another. 
1549 Resource and service discovery is the automatic process between the 1550 connected network devices using a resource/service discovery protocol 1551 to determine the available services and identify connection state 1552 information. 1554 Requirement 56. The general requirements for the control plane 1555 functions to support optical networking functions include: 1556 1. The control plane must have the capability to establish, 1557 teardown and maintain the end-to-end connection. 1558 2. The control plane must have the capability to establish, 1559 teardown and maintain the hop-by-hop connection segments 1560 between two end-points. 1561 3. The control plane must have the capability to support traffic- 1562 engineering requirements including resource discovery and 1563 dissemination, constraint-based routing and path computation. 1564 4. The control plane must have the capability to support 1565 reachability information dissemination. 1566 5. The control plane shall support network status or action 1567 result code responses to any requests over the control 1568 interfaces. 1569 6. The control plane shall support resource allocation on both UNI 1570 and NNI. 1572 7. Upon successful connection teardown all resources associated 1573 with the connection shall become available for access for new 1574 requests. 1575 8. The control plane shall ensure that there will not be unused, 1576 frozen network resources. 1577 9. The control plane shall ensure periodic or on demand clean-up 1578 of network resources. 1579 10. The control plane shall support management plane request for 1580 connection attributes/status query. 1581 11. The control plane must have the capability to support various 1582 protection and restoration schemes for the optical channel 1583 establishment. 1584 12. Control plane failures shall not affect active connections. 1585 13. The control plane shall be able to trigger restoration based 1586 on alarms or other indications of failure. 1588 8.2 Signaling Network 1590 The signaling network consists of a set of signaling channels that 1591 interconnect the nodes within the control plane. Therefore, the 1592 signaling network must be accessible by each of the communicating nodes 1593 (e.g., OXCs). 1594 Requirement 57. The signaling network must terminate at each of the 1595 communicating nodes. 1596 Requirement 58. The signaling network shall not be assumed to have 1597 the same physical connectivity as the data plane, nor shall the data 1598 plane and control plane traffic be assumed to be congruently routed. 1599 A signaling channel is the communication path for transporting 1600 signaling messages between network nodes, and over the UNI (i.e., 1601 between the UNI entity on the user side (UNI-C) and the UNI entity on 1602 the network side (UNI-N)). There are three different types of signaling 1603 methods depending on the way the signaling channel is constructed: 1604 . In-band signaling: The signaling messages are carried over a logical 1605 communication channel embedded in the data-carrying optical link or 1606 channel. For example, using the overhead bytes in SONET data framing 1607 as a logical communication channel falls into the in-band signaling 1608 methods. 1609 . In fiber, Out-of-band signaling: The signaling messages are carried 1610 over a dedicated communication channel separate from the optical 1611 data-bearing channels, but within the same fiber. For example, a 1612 dedicated wavelength or TDM channel may be used within the same fiber 1613 as the data channels. 
1614 . Out-of-fiber signaling: The signaling messages are carried over a 1615 dedicated communication channel or path within different fibers to 1617 those used by the optical data-bearing channels. For example, 1618 dedicated optical fiber links or communication path via separate and 1619 independent IP-based network infrastructure are both classified as 1620 out-of-fiber signaling. 1622 In-band signaling is particularly important over a UNI interface, where 1623 there are relatively few data channels. Proxy signaling is also 1624 important over the UNI interface, as it is useful to support users 1625 unable to signal to the optical network via a direct communication 1626 channel. In this situation a third party system containing the UNI-C 1627 entity will initiate and process the information exchange on behalf of 1628 the user device. The UNI-C entities in this case reside outside of the 1629 user in separate signaling systems. 1631 In-fiber, out-of-band and out-of-fiber signaling channel alternatives 1632 are particularly important for NNI interfaces, which generally have 1633 significant numbers of channels per link. Signaling messages relating 1634 to all of the different channels can then be aggregated over a single 1635 or small number of signaling channels. 1637 The signaling network forms the basis of the transport network control 1638 plane. To achieve reliable signaling, the control plane needs to 1639 provide reliable transfer of signaling messages, its own OAM mechanisms 1640 and flow control mechanisms for restricting the transmission of 1641 signaling packets where appropriate. 1643 Requirement 59. The signaling protocol shall support reliable 1644 message transfer. 1645 Requirement 60. The signaling network shall have its own OAM 1646 mechanisms. 1647 Requirement 61. The signaling protocol shall support congestion 1648 control mechanisms. 1650 In addition, the signaling network should support message priorities. 1651 Message prioritization allows time critical messages, such as those 1652 used for restoration, to have priority over other messages, such as 1653 other connection signaling messages and topology and resource discovery 1654 messages. 1656 Requirement 62. The signaling network should support message 1657 priorities. 1658 The signaling network must be highly scalable, with minimal performance 1659 degradations as the number of nodes and node sizes increase. 1660 Requirement 63. The signaling network shall be highly scalable. 1662 The signaling network must also be highly reliable, implementing 1663 mechanisms for failure recovery. Furthermore, failure of signaling 1664 links or of the signaling software must not impact established 1665 connections or cause partially established connections, nor should they 1666 impact any elements of the management plane. 1668 Requirement 64. The signaling network shall be highly reliable and 1669 implement failure recovery. 1671 Requirement 65. Control channel and signaling software failures 1672 shall not cause disruptions in established connections within the 1673 data plane, and signaling messages affected by control plane outages 1674 should not result in partially established connections remaining 1675 within the network. 1677 Requirement 66. Control channel and signaling software failures 1678 shall not cause management plane failures. 1679 Security is also a crucial issue for the signaling network. Transport 1680 networks are generally expected to carry large traffic loads and high 1681 bandwidth connections. 
The consequence is a significant economic impact 1682 should hackers disrupt network operation, using techniques such as the 1683 recent denial-of-service attacks seen within the Internet.

1685 Requirement 67. The signaling network shall be secure, blocking all 1686 unauthorized access.

1688 Requirement 68. The signaling network topology and signaling node 1689 addresses shall not be advertised outside a carrier's domain of 1690 trust.

1692 8.3 Control Plane Interface to Data Plane

1694 In the situation where the control plane and data plane are provided by 1695 different suppliers, this interface needs to be standardized. 1696 Requirements for a standard control plane-data plane interface are under 1697 study. The control plane interface to the data plane is outside the scope 1698 of this document.

1700 8.4 Control Plane Interface to Management Plane

1702 The control plane is considered a managed entity within a network. 1703 Therefore, it is subject to management requirements just as other 1704 managed entities in the network are subject to such requirements.

1706 8.4.1 Allocation of resources

1708 The management plane is responsible for identifying which network 1709 resources the control plane may use to carry out its control 1711 functions. Additional resources may be allocated or existing resources 1712 deallocated over time.

1714 Requirement 69. Resources shall be able to be allocated to the 1715 control plane for control plane functions; these include resources involved 1716 in setting up and tearing down calls and control plane specific 1717 resources. Resources allocated to the control plane for the purpose 1718 of setting up and tearing down calls include access groups (a set of 1719 access points) and connection point groups (a set of connection points). 1720 Resources allocated to the control plane for the operation of the 1721 control plane itself may include protected and protecting control 1722 channels.
1723 Requirement 70. Resources allocated to the control plane by the 1724 management plane shall be able to be de-allocated from the control 1725 plane on management plane request.
1726 Requirement 71. If resources are supporting an active connection 1727 and the resources are requested to be de-allocated from the control 1728 plane, the control plane shall reject the request. The management 1729 plane must either wait until the resources are no longer in use or 1730 tear down the connection before the resources can be de-allocated 1731 from the control plane. Management plane failures shall not affect 1732 active connections.
1733 Requirement 72. Management plane failures shall not affect the 1734 normal operation of a configured and operational control plane or 1735 data plane.

1737 8.4.2 Soft Permanent Connections (Point-and-click provisioning)

1739 In the case of SPCs, the management plane requests the control plane to 1740 set up/tear down a connection rather than the request coming over a UNI.

1742 Requirement 73. The management plane shall be able to query on 1743 demand the status of a connection request.
1744 Requirement 74. The control plane shall report to the management 1745 plane the success or failure of a connection request.
1746 Requirement 75. Upon a connection request failure, the control 1747 plane shall report to the management plane a cause code identifying 1748 the reason for the failure.
1749 Requirement 76. In a connection set-up request, the management 1750 plane shall be able to specify the service class that is required for 1751 the connection.
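As an informal illustration of the SPC interactions above (Requirements 73-76), the following Python sketch shows the kind of request, result and status-query structures the management plane and control plane might exchange. The field names, status values and cause codes are hypothetical and are not defined by this document:

      # Illustrative sketch only; message fields and cause codes are assumptions.
      from dataclasses import dataclass
      from enum import Enum
      from typing import Optional

      class SpcStatus(Enum):
          PENDING = "pending"
          ESTABLISHED = "established"
          FAILED = "failed"

      @dataclass
      class SpcSetupRequest:
          source_endpoint: str       # e.g. an ONA or port identifier
          dest_endpoint: str
          service_class: str         # Requirement 76: service class is specified

      @dataclass
      class SpcResult:
          status: SpcStatus          # Requirement 74: success/failure is reported
          cause_code: Optional[str]  # Requirement 75: cause code on failure

      # Requirement 73: the management plane may query status on demand.
      def query_status(connection_id: str, state_table: dict) -> SpcResult:
          return state_table.get(connection_id,
                                 SpcResult(SpcStatus.FAILED, "UNKNOWN_CONNECTION"))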
1753 8.4.3 Resource Contention resolution 1755 Since resources are allocated to the control plane for use, there 1756 should not be contention between the management plane and the control 1758 plane for connection set-up. Only the control plane can establish 1759 connections for allocated resources. However, in general, the 1760 management plane shall have authority over the control plane. 1762 Requirement 77. The control plane shall not assume authority over 1763 management plane provisioning functions. 1764 In the case of fault management, both the management plane and the 1765 control plane need fault information at the same priority. 1766 Requirement 78. The control plane shall not interfere with the 1767 speed or priority at which the management plane would receive alarm 1768 information from the NE or the transport plane in the absence of a 1769 control plane. 1771 The control plane needs fault information in order to perform its 1772 restoration function (in the event that the control plane is providing 1773 this function). However, the control plane needs less granular 1774 information than that required by the management plane. For example, 1775 the control plane only needs to know whether the resource is good/bad. 1776 The management plane would additionally need to know if a resource was 1777 degraded or failed and the reason for the failure, the time the failure 1778 occurred and so on. 1780 Requirement 79. Accounting information shall be provided by the 1781 control plane to the management plane. Again, there is no 1782 contention. This is addressed in the billing section.[open issue - 1783 what happens to accounting data histories when resource moved from 1784 control plane to management plane?] 1786 Performance management shall be a management plane function only. 1787 Again, there is no contention between the management plane and the 1788 control plane. 1790 Requirement 80. The control plane shall not assume authority over 1791 management plane performance management functions. 1793 8.4.4 MIBs 1795 Requirement 81. A standards based MIB shall be used for control 1796 plane management. 1797 Requirement 82. The standards based MIB definition shall support 1798 all management functionality required to manage the control plane. 1799 Requirement 83. The standards based MIB definition should support 1800 all optional management functionality desired to manage the control 1801 plane. 1803 8.4.5 Alarms 1805 The control plane is not responsible for monitoring and reporting 1806 problems in the transport plane or in the NE that are independent of 1808 the control plane. It is responsible, however for monitoring and 1809 reporting control plane alarms. The requirements in this section are 1810 applicable to the monitoring and reporting of control plane alarms. 1812 Requirement 84. The Control Plane shall not lose alarms. Alarms 1813 lost due to transmission errors between the Control Plane and the 1814 Management Plane shall be able to be recovered through Management 1815 Plane queries to the alarm notification log. 1816 Requirement 85. Alarms must take precedence over all other message 1817 types for transmission to the Management Plane. 1818 Requirement 86. Controls issued by the Management Plane must be 1819 able to interrupt an alarm stream coming from the Control Plane. 1820 Requirement 87. The alarm cause shall be based on the probableCause 1821 list in M.3100. 1822 Requirement 88. 
Detailed alarm information shall be included in the 1823 alarm notification including: the location of the alarm, the time the 1824 alarm occurred, and the perceived severity of the alarm. 1825 Requirement 89. The Control Plane shall send clear notifications 1826 for Critical, Major, and Minor alarms when the cleared condition is 1827 detected. 1828 Requirement 90. The Control Plane shall support Autonomous Alarm 1829 Reporting. 1830 Requirement 91. The Control Plane shall support Alarm Reporting 1831 Control (See M.3100, Amendment 3). 1832 Requirement 92. The Control Plane shall support the ability to 1833 configure and query the management plane applications that Autonomous 1834 Alarm Reporting will be sent. 1835 Requirement 93. The Control Plane shall support the ability to 1836 retrieve all or a subset of the Currently Active Alarms. 1837 Requirement 94. The Control Plane shall support Alarm Report 1838 Logging. 1839 Requirement 95. The Control Plane should support the ability to 1840 Buffer Alarm Reports separately for each management plane application 1841 that an Alarm Report is destined (See X.754, Enhanced Event Control 1842 Function). 1844 Requirement 96. The Control Plane shall support the ability to 1845 cancel a request to retrieve all or a subset of the Currently Active 1846 Alarms (See Q.821, Enhanced Current Alarm Summary Control). 1847 Requirement 97. The Control Plane should support the ability to 1848 Set/Get Alarm Severity Assignment per object instance and per Alarm 1849 basis. 1850 Requirement 98. The Control Plane shall log autonomous Alarm Event 1851 Reports / Notifications. 1852 Requirement 99. The Control Plane shall not report the symptoms of 1853 control plane problems as alarms (For example, an LOF condition shall 1854 not be reported when the problem is a supporting facility LOS). 1856 8.4.6 Status/State 1858 Requirement 100. The management plane shall be able to query the 1859 operational state of all control plane resources. 1860 Requirement 101. In addition, the control plane shall provide a log 1861 of current period and historical counts for call attempts and call 1862 blocks and capacity data for both UNI and NNI interfaces. 1864 3. The management plane shall be able to query current period and 1865 historical logs. 1867 8.4.7 Billing/Traffic and Network Engineering Support 1869 Requirement 102. The control plane shall record usage per UNI and 1870 per link connection. 1871 Requirement 103. Usage information shall be able to be queried by 1872 the management plane. 1874 8.4.8 Policy Information 1876 Requirement 104. In support of CAC, the management plane shall be 1877 able to configure multiple service classes and identify protection 1878 and or restoration allocations required for each service class, and 1879 then assign services classes on a per UNI basis. 1881 8.4.9 Control Plane Provisioning 1883 Requirement 105. Topological information learned in the discovery 1884 process shall be able to be queried on demand from the management 1885 plane. 1886 Requirement 106. The management plane shall be able to configure UNI 1887 and NNI protection groups. 1889 Requirement 107. The management plane shall be able to prohibit the 1890 control plane from using certain transport resources not currently 1891 being used for a connection for new connection set-up requests. 1892 There are various reasons for the management plane needing to do this 1893 including maintenance actions. 1894 Requirement 108. 
The management plane shall be able to tear down 1895 connections established by the control plane both gracefully and 1896 forcibly on demand.

1898 8.5 Control Plane Interconnection

1900 The interconnection of the IP router (client) and optical control 1901 planes can be realized in a number of ways depending on the required 1902 level of coupling. The control planes can be loosely or tightly 1903 coupled. Loose coupling is generally referred to as the overlay model 1904 and tight coupling is referred to as the peer model. Additionally 1905 there is the augmented model that is somewhat in between the other two 1906 models but more akin to the peer model. The model selected determines 1907 the following:
1908 - The details of the topology, resource and reachability information 1909 advertised between the client and optical networks
1910 - The level of control IP routers can exercise in selecting paths 1911 across the optical network
1912 The next three sections discuss these models in more detail and the 1913 last section describes the coupling requirements from a carrier's 1914 perspective.

1916 8.5.1 Peer Model (I-NNI-like model)

1918 Under the peer model, the IP router clients act as peers of the optical 1919 transport network, such that a single routing protocol instance runs over 1920 both the IP and optical domains. In this regard the optical network 1921 elements are treated just like any other router as far as the control 1922 plane is concerned. The peer model, although not strictly an internal 1923 NNI, behaves like an I-NNI in the sense that there is sharing of 1924 resource and topology information.

1926 Presumably a common IGP such as OSPF or IS-IS, with appropriate 1927 extensions, will be used to distribute topology information. One tacit 1928 assumption here is that a common addressing scheme will also be used 1929 for the optical and IP networks. A common address space can be 1930 trivially realized by using IP addresses in both IP and optical 1931 domains. Thus, the optical network elements become IP-addressable 1932 entities.

1934 The obvious advantage of the peer model is the seamless interconnection 1935 between the client and optical transport networks. The tradeoff is 1937 the tight integration and the optical-specific routing information 1938 that must be known to the IP clients.
1939 The discussion above has focused on the client to optical control plane 1940 interconnection. The discussion applies equally well to 1941 interconnecting two optical control planes.

1943 8.5.2 Overlay (UNI-like model)

1945 Under the overlay model, the IP client routing, topology distribution, 1946 and signaling protocols are independent of the routing, topology 1947 distribution, and signaling protocols at the optical layer. This model 1948 is conceptually similar to the classical IP over ATM model, but applied 1949 to an optical sub-network directly.

1951 Though the overlay model dictates that the client and optical network 1952 are independent, this still allows the optical network to re-use IP 1953 layer protocols to perform the routing and signaling functions.
1954 In addition to the protocols being independent, the addressing scheme 1955 used between the client and optical network must be independent in the 1956 overlay model. That is, the use of IP layer addressing in the clients 1957 must not place any specific requirement upon the addressing used within 1958 the optical control plane.
1960 The overlay model would provide a UNI to the client networks through 1961 which the clients could request to add, delete or modify optical 1962 connections. The optical network would additionally provide 1963 reachability information to the clients but no topology information 1964 would be provided across the UNI.

1966 8.5.3 Augmented model (E-NNI-like model)

1968 Under the augmented model, there are actually separate routing 1969 instances in the IP and optical domains, but information from one 1970 routing instance is passed through the other routing instance. For 1971 example, external IP addresses could be carried within the optical 1972 routing protocols to allow reachability information to be passed to IP 1973 clients. A typical implementation would use BGP between the IP client 1974 and optical network.

1976 The augmented model, although not strictly an external NNI, behaves 1977 like an E-NNI in that there is limited sharing of information.

1979 8.5.4 Carrier Control Plane Coupling Requirements

1981 Choosing the level of coupling depends upon a number of different 1982 factors, some of which are:
1983 - Variety of clients using the optical network
1984 - Relationship between the client and optical network
1985 - Operating model of the carrier

1987 Generally in a carrier environment there will be more than just IP 1988 routers connected to the optical network. Some other examples of 1989 clients could be ATM switches or SONET ADM equipment. This may drive 1990 the decision towards loose coupling to prevent undue burdens upon non-1991 IP router clients. Also, loose coupling would ensure that future 1992 clients are not hampered by legacy technologies.
1993 Additionally, a carrier may for business reasons want a separation 1994 between the client and optical networks. For example, the ISP business 1995 unit may not want to be tightly coupled with the optical network 1996 business unit. Another reason for separation might be simply the 1997 organizational politics that play out in a large carrier. That is, it seems 1998 unlikely that the optical transport network could be forced to run the same set of 1999 protocols as the IP router networks. Also, forcing the same set of 2000 protocols in both networks ties the evolution of the networks directly 2001 together: the optical transport 2002 network protocols could not be upgraded without taking into consideration 2003 the impact on the IP router network (and vice versa).

2005 Operating models also play a role in deciding the level of coupling. 2006 [Freeland] gives four main operating models envisioned for an optical 2007 transport network:
2009 - ISP owning all of its own infrastructure (i.e., including fiber and 2010 duct to the customer premises)
2011 - ISP leasing some or all of its capacity from a third party
2012 - Carrier's carrier providing layer 1 services
2013 - Service provider offering multiple layer 1, 2, and 3 services over a 2014 common infrastructure

2016 Although relatively few, if any, ISPs fall into category 1, it would 2017 seem the most likely of the four to use the peer model. The other 2018 operating models lend themselves more naturally to an overlay 2019 model. Most carriers would fall into category 4 and thus would most 2020 likely choose an overlay model architecture.
2022 In the context of the client and optical network control plane 2023 interconnection, the discussion here leads to the conclusion that the 2024 overlay model is required and the other two models (peer and augmented) 2025 are optional.

2027 Requirement 109. The overlay model (UNI-like model) shall be supported 2028 for client to optical control plane interconnection.
2029 Requirement 110. Other models are optional for client to optical 2030 control plane interconnection.
2031 Requirement 111. For optical to optical control plane 2032 interconnection, all three models shall be supported.

2034 9. Requirements for Signaling, Routing and Discovery

2036 9.1 Signaling Functions

2038 Connection management signaling messages are used for connection 2039 establishment and deletion. These signaling messages must be 2040 transported across UNIs, between nodes within a single carrier's 2041 domain, and over I-NNIs and E-NNIs.

2043 A mixture of hop-by-hop routing, explicit/source routing and 2044 hierarchical routing will likely be used within future transport 2045 networks, so all three mechanisms must be supported by the control 2046 plane. Using hop-by-hop message routing, each node within a network 2047 makes routing decisions based on the message destination and the local 2048 routing tables. However, achieving efficient load balancing and 2049 establishing diverse connections are impractical using hop-by-hop 2050 routing. Instead, explicit (or source) routing may be used to send 2051 signaling messages along a route calculated by the source. This route, 2052 described using a set of nodes/links, is carried within the signaling 2053 message, and used in forwarding the message.

2055 Finally, network topology information must not be conveyed outside a 2056 trust domain. Thus, hierarchical routing is required to support 2057 signaling across multiple domains. Each signaling message should 2058 contain a list of the domains traversed, and potentially details of the 2059 route within the domain being traversed.

2061 Signaling messages crossing trust boundaries must not contain 2062 information regarding the details of an internal network topology. This 2063 is particularly important in traversing E-UNIs and E-NNIs. Connection 2064 routes and identifiers encoded using topology information (e.g., node 2065 identifiers) must also not be conveyed over these boundaries.

2067 9.1.1 Connection establishment

2069 Connection establishment is achieved by sending signaling messages 2070 between the source and destination. If inadequate resources are 2071 encountered in establishing a connection, a negative acknowledgment 2072 shall be returned and allocated resources shall be released. A positive 2073 acknowledgment shall be used to acknowledge successful establishment of 2074 a connection (including confirmation of successful cross-connection). 2075 For connections requested over a UNI, a positive acknowledgment shall 2076 be used to inform both source and destination clients of when they may 2077 start transmitting data.

2079 The transport network signaling shall be able to support both uni-2080 directional and bi-directional connections. Contention may occur 2081 between two bi-directional connections, or between uni-directional and 2082 bi-directional connections. There shall be at least one and at 2083 most N attempts at contention resolution before returning a negative 2084 acknowledgment, where N is a configurable parameter with a default value 2085 of 3.
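The following Python sketch is an informative illustration of the acknowledgment and contention-resolution behavior just described; the helper functions (try_allocate_path, release) and the result fields are hypothetical stand-ins for real control plane operations, not part of any protocol defined here:

      # Illustrative sketch only; try_allocate_path() and release() stand in
      # for real resource allocation and release operations.

      def establish_connection(request, try_allocate_path, release, max_attempts=3):
          """Attempt connection setup with contention resolution.
          Returns ("ACK", path) on success or ("NACK", reason) on failure,
          releasing any allocated resources before a negative acknowledgment."""
          for _ in range(max_attempts):
              result = try_allocate_path(request)
              if result.ok:
                  return ("ACK", result.path)        # positive acknowledgment
              release(result.partial_resources)      # free anything allocated
              if result.reason != "CONTENTION":
                  return ("NACK", result.reason)     # e.g. inadequate resources
              # contention: retry, up to max_attempts (default N = 3) times
          return ("NACK", "CONTENTION_UNRESOLVED")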
2087 9.1.2 Connection deletion

2089 When a connection is no longer required, connectivity to the client 2090 shall be removed and network resources shall be released. 2091 Partially deleted connections are a serious concern. As a result, 2092 signaling network failures shall not result in partially deleted 2093 connections remaining in the network. An end-to-end deletion signaling 2094 message acknowledgment is required to avoid such situations.
2095 Many signaling protocols use a single message pass to delete a 2096 connection. However, in all-optical networks, loss of light will 2097 propagate faster than the deletion message. Thus, downstream cross-2098 connects will detect loss of light and potentially trigger protection 2099 or restoration. Such behavior is not acceptable.
2100 Instead, connection deletion in all-optical networks shall involve a 2101 signaling message sent in the forward direction that shall take the 2102 connection out of service, de-allocating the resources, but not 2103 removing the cross-connection. Upon receipt of this message, the last 2104 network node must respond by sending a message in the reverse direction 2105 to remove the cross-connect at each node.

2107 Requirement 112. The following requirements are imposed on 2108 signaling:
2109 - Hop-by-hop routing, explicit/source-based routing and hierarchical 2110 routing shall all be supported.
2111 - A negative acknowledgment shall be returned if inadequate resources 2112 are encountered in establishing a connection, and allocated resources 2113 shall be released.
2114 - A positive acknowledgment shall be returned when a connection has 2115 been successfully established.
2116 - For connections requested over a UNI, a positive acknowledgment shall 2117 be used to inform both source and destination clients of when they 2118 may start transmitting data.
2119 - Signaling shall be supported for both uni-directional and bi-2120 directional connections.
2121 - When contention occurs in establishing bi-directional connections, 2122 there shall be at least one and at most N attempts at 2123 contention resolution before returning a negative acknowledgment, 2124 where N is a configurable parameter with a default value of 3.
2125 - Partially deleted connections shall not remain within the network.
2126 - End-to-end acknowledgments shall be used for connection deletion 2127 requests.
2128 - Connection deletion shall not result in either restoration or 2129 protection being invoked.
2130 - Connection deletion shall at a minimum use a two-pass signaling 2131 process, removing the cross-connection only after the first signaling 2132 pass has completed.
2134 - Signaling shall not progress through the network with unresolved 2135 label contention left behind.
2136 - Acknowledgments of any requests shall not be sent until all 2137 necessary steps to ensure request fulfillment have been successful.
2138 - Label contention resolution attempts shall not result in infinite 2139 loops.
2140 Signaling for connection protection and restoration is addressed in a 2141 later section.

2143 9.2 Routing Functions

2145 9.2.1 General Description

2147 Routing is an important component of the control plane. It includes 2148 neighbor discovery, reachability information propagation, network 2149 topology information dissemination, and service capability discovery. The 2150 objective of neighbor discovery is to provide the information needed to 2151 identify the neighbor relationship and neighbor connectivity over each 2152 link.
Neighbor discovery may be realized via manual configuration or 2153 automatic protocol-based identification, such as the Link Management Protocol 2154 (LMP). Neighbor discovery applies across the user network to optical 2155 network interface, the network node to network node interface, and the network to 2156 network interface. In an optical network, each connection involves two 2157 user endpoints. When user endpoint A requests a connection to user 2158 endpoint B, the optical network needs the reachability information to 2159 select a path for the connection. If a user endpoint is unreachable, a 2160 connection request to that user endpoint shall be rejected. Network 2161 topology information dissemination provides each node in the 2162 network with stable and consistent information about the carrier 2163 network such that a single node is able to support constraint-based path 2164 selection. Service capability discovery is strongly related to routing 2165 functions. Specific services of the optical network require specific 2166 network resource information. Routing functions support service 2167 capabilities.

2169 9.2.2 I-UNI, E-UNI, I-NNI and E-NNI

2171 There are four types of interfaces where routing information 2172 dissemination may occur: I-UNI, E-UNI, I-NNI and E-NNI. Different types 2173 of interfaces impose different requirements and functionality due 2174 to their different trust relationships.
2175 Due to business, geographical, technology and economic considerations, the 2176 global optical network is usually partitioned into several carrier 2177 autonomous systems (AS). Inside each carrier AS, the optical network 2178 may be separated into several routing domains. In each routing domain, 2179 the routing protocol may or may not be the same.

2181 While the I-UNI assumes a trust relationship, the user network and the 2182 transport network form a client-server relationship. Therefore, the 2183 benefits of dissemination of routing information from the transport 2184 network to the user network should be studied carefully. Sufficient, 2185 but only the necessary, information should be disseminated across the I-2186 UNI. Over the E-UNI, neighbor discovery, reachability information and service 2187 capability discovery are allowed to cross the interface, but any 2188 information related to network resources or topology shall not be 2189 exchanged.

2191 Any network topology and network resource information may be 2192 exchanged across the I-NNI. The routing protocol may exchange sufficient 2193 network topology and resource information.

2195 Requirement 113. However, to support scalability requirements, only 2196 the information necessary for optimized path selection shall be 2197 exchanged.

2199 Requirement 114. Over the E-NNI, only reachability information, next 2200 routing hop and service capability information should be exchanged. 2201 Any other network-related information shall not leak out to other 2202 networks. Policy-based routing should be applied to disseminate 2203 carrier-specific network information.

2205 9.2.3 Requirements for routing information dissemination

2207 Routing protocols must propagate the appropriate information 2208 efficiently to network nodes. Major concerns for routing protocol 2209 performance are scalability and stability. Scalability requires 2210 that the routing protocol performance shall not largely depend on the 2211 scale of the network (e.g., the number of nodes, links and 2212 end users).

2214 Requirement 115.
The routing protocol design shall keep the network 2215 size effect as small as possible.

2217 Different scalability techniques should be considered.

2219 Requirement 116. The routing protocol shall support hierarchical routing 2220 information dissemination, including topology information aggregation 2221 and summarization.

2223 This technique is widely used in conventional networks, such as OSPF 2224 routing for IP networks and PNNI for ATM networks. However, the tradeoff 2225 between the number of hierarchy levels and the degree of network information 2226 accuracy should be considered carefully: too much aggregation may lose 2227 network topology information.
2228 - Optical transport switches may contain thousands of physical ports. 2229 The detailed link state information for a network element could be 2230 huge.

2232 Requirement 117. The routing protocol shall be able to minimize 2233 global information and keep information locally significant as much 2234 as possible.

2236 There is another tradeoff between the accuracy of the network 2237 topology information and the routing protocol scalability.

2239 Requirement 118. The routing protocol shall distinguish static routing 2240 information from dynamic routing information.

2242 Static routing information, such as neighbor relationships, link attributes and 2244 total link bandwidth, does not change due to connection operations. On the other hand, dynamic routing 2245 information, such as link bandwidth availability and link multiplexing fragmentation, 2246 is updated as a result of connection operations. 2247 The routing protocol operation shall take into account the difference between 2248 these two types of routing information.

2250 Requirement 119. Only dynamic routing information needs to be 2251 updated in real time.

2253 Requirement 120. The routing protocol shall be able to control the 2254 dynamic information update frequency through different types of 2255 thresholds. Two types of thresholds could be defined: an absolute 2256 threshold and a relative threshold. Dynamic routing information 2257 will not be disseminated if the change is still inside the 2258 threshold. When an update has not been sent for a specific time (this 2259 time shall be configurable by the carrier), an update is automatically 2260 sent. A default time could be 30 minutes.

2262 All these techniques will impact the accuracy of the network resource 2263 representation. The tradeoff between the accuracy of the routing information 2264 and the routing protocol scalability should be well studied. A well-2265 designed routing protocol should provide the flexibility such that 2266 network operators are able to adjust the balance according to 2267 their networks' specific characteristics.

2269 9.2.4 Requirements for path selection

2271 The optical network provides connection services to its clients. Path 2272 selection requirements may be determined by service parameters. However, 2273 path selection abilities are determined by routing information 2274 dissemination. In this section, we focus on path selection 2275 requirements. Service capabilities, such as service type requirements, 2276 bandwidth requirements, protection requirements, diversity 2277 requirements, bit error rate requirements, latency requirements and 2278 area inclusion/exclusion requirements, can be satisfied via constraint-2279 based path calculation.
9.2.4 Requirements for path selection

The optical network provides connection services to its clients. Path selection requirements may be determined by service parameters; path selection capabilities, however, are determined by the routing information that is disseminated. In this section we focus on path selection requirements. Service requirements, such as service type, bandwidth, protection, diversity, bit error rate and latency requirements, as well as requirements to include or exclude particular areas, can be satisfied via constraint-based path calculation. Since a specific path selection is performed within a single network element, the specific path selection algorithm and its interaction with the routing protocol are not discussed in this document. Note that a path consists of a series of links, and the characteristics of a path are those of its weakest link. For example, if one of the links does not have link protection capability, the whole path should be declared as having no link-based protection.

Requirement 121. Path selection shall support shortest-path as well as constraint-based routing. Constraint-based path selection shall consider overall network performance and provide traffic engineering capability.
- A carrier will want to operate its network as efficiently as possible, for example by increasing network throughput and decreasing the network blocking probability. Possible approaches include shortest-path calculation and load balancing under congestion conditions.

Requirement 122. Path selection shall be able to include or exclude specific locations, based on policy.

Requirement 123. Path selection shall be able to support protection/restoration capability. Section 10 discusses this subject in more detail.

Requirement 124. Path selection shall be able to support different levels of diversity, including diverse routing and protection/restoration diversity. The simplest form of diversity is link diversity. More complete notions of diversity can be addressed by logical attributes such as shared risk link groups (SRLG).

Requirement 125. Path selection algorithms shall provide carriers the ability to support a wide range of services and multiple levels of service classes. Parameters such as service type, transparency, bandwidth, latency, bit error rate, etc. may be relevant.

The inputs for path selection include the connection end addresses, a set of requested routing constraints, and the constraints of the networks. Some of the network constraints are technology specific, such as the constraints in all-optical networks addressed in [John_Angela_IPO_draft]. The requested constraints may include bandwidth requirements, diversity requirements, path-specific requirements, as well as restoration requirements.
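The constraint-based selection called for in Requirements 121 through 125 can be pictured as pruning the advertised topology against the requested constraints and then running an ordinary shortest-path computation over what remains. The sketch below is illustrative only; the link attributes and constraint names are assumptions for this example and not part of this document.

   import heapq

   def select_path(links, src, dst, min_bw=0,
                   excluded_srlgs=frozenset(), need_protection=False):
       # links: (a, b, cost, {"bw": .., "srlgs": set(), "protected": bool})
       adj = {}
       for a, b, cost, attr in links:
           if attr["bw"] < min_bw:
               continue                      # bandwidth constraint
           if attr["srlgs"] & excluded_srlgs:
               continue                      # SRLG diversity constraint
           if need_protection and not attr["protected"]:
               continue                      # link protection constraint
           adj.setdefault(a, []).append((b, cost))
           adj.setdefault(b, []).append((a, cost))

       # plain Dijkstra over the pruned topology
       dist, prev, heap = {src: 0}, {}, [(0, src)]
       while heap:
           d, u = heapq.heappop(heap)
           if u == dst:
               break
           if d > dist.get(u, float("inf")):
               continue
           for v, c in adj.get(u, []):
               if d + c < dist.get(v, float("inf")):
                   dist[v], prev[v] = d + c, u
                   heapq.heappush(heap, (d + c, v))

       if dst not in dist:
           return None       # no path satisfies the constraints
       path, node = [dst], dst
       while node != src:
           node = prev[node]
           path.append(node)
       return list(reversed(path))

Because every link that violates a constraint is removed before the shortest-path step, the resulting path cannot be better than its weakest remaining link, which is consistent with the "weakest link" observation made above.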
9.3 Automatic Discovery Functions

This section describes the requirements for automatic discovery in support of distributed connection management (DCM) in the context of automatically switched transport networks (ASTN/ASON), as specified in ITU-T Rec. G.807. Auto-discovery is applicable to the User-to-Network Interface (UNI), the Network-Node Interface (NNI) and the Transport Plane Interfaces (TPI) shown in the ASTN reference model.

Neighbor discovery can be described as an instance of auto-discovery that is used for associating two subnetwork points that form a trail or a link connection in a particular layer network. The association created through neighbor discovery is valid as long as the trail or link connection that forms the association is capable of carrying traffic. This is referred to as transport plane neighbor discovery. In addition to transport plane neighbor discovery, auto-discovery can also be used by distributed subnetwork controller functions to establish adjacencies; this is referred to as control plane neighbor discovery.

It is worth mentioning that the subnetwork points associated through neighbor discovery do not have to be contained in network elements with physically adjacent ports. Neighbor discovery is therefore specific to the layer in which connections are to be made, and consequently is principally useful only when the network has switching capability at that layer.

Service discovery can be described as an instance of auto-discovery that is used for verifying and exchanging the service capabilities supported by a particular link connection or trail. It is assumed that service discovery takes place after two subnetwork points within the layer network have been associated through neighbor discovery. However, since the service capabilities of a link connection or trail can change dynamically, service discovery can take place at any time after neighbor discovery, and as many times as deemed necessary.

Resource discovery can be described as an instance of auto-discovery that is used for verifying the physical connectivity between two ports on adjacent network elements. Resource discovery is also concerned with improving the inventory management of network resources, detecting configuration mismatches between adjacent ports, associating the port characteristics of adjacent network elements, etc.

Automatic discovery runs over UNI, NNI and TPI interfaces [reference to g.disc].

9.3.1 Neighbor discovery

This section provides the requirements for automatic neighbor discovery over the UNI, the NNI and the Physical Interface (PI). These requirements do not preclude specific manual configuration where it may be required and, in particular, do not specify any mechanism that may be used for optimizing network management.

Neighbor discovery is primarily concerned with the automated discovery of port connectivity between the network elements that form the transport plane; it also involves connectivity verification and the bootstrapping of control plane channels for carrying discovery information between elements in the transport plane. This applies equally to the discovery of port connectivity across a UNI between elements in the user network and the transport plane. The information that is learnt is subject to various policy restrictions between administrative domains.

Given that Automatic Neighbor Discovery (AND) is applicable across the whole network, it is important that AND be protocol independent and be specified so that it can be mapped easily into multiple protocol specifications. The actual implementation of AND depends on the protocols used for the purpose of automatic neighbor discovery.

As mentioned earlier, AND runs over both UNI and NNI type interfaces in the control plane. Given that port connectivity discovery and connectivity verification (e.g., fiber connectivity verification) are to be performed at the transport plane, PI interfaces (IrDI and IaDI) are also considered AND interfaces. Further information is available in Draft ITU-T G.ndisc.
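As a minimal sketch of the transport plane neighbor discovery described above, the fragment below records the port associations learnt from in-band test messages and checks them against provisioned expectations. The message fields and class names are assumptions for this example; they are not taken from LMP or any other specific protocol.

   from dataclasses import dataclass

   @dataclass
   class TestMessage:
       remote_ne: str       # sender's network element identifier
       remote_port: str     # sender's port identifier

   class NeighborTable:
       def __init__(self, local_ne):
           self.local_ne = local_ne
           self.adjacencies = {}  # local port -> (remote NE, remote port)

       def on_test_message(self, local_port, msg):
           """Record the port association learnt in-band on local_port."""
           self.adjacencies[local_port] = (msg.remote_ne, msg.remote_port)

       def mismatches(self, provisioned):
           """Compare discovered connectivity with provisioned data."""
           return {p: (provisioned.get(p), got)
                   for p, got in self.adjacencies.items()
                   if provisioned.get(p) != got}

   # NE "A" learns that its port "1/1" terminates on NE "B", port "7/3".
   table = NeighborTable("A")
   table.on_test_message("1/1", TestMessage("B", "7/3"))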
Although the minimal set of parameters for discovery includes the SP and user NE names, several policy restrictions apply when exchanging these names across untrusted boundaries. Several security requirements on the information exchanged also need to be considered. In addition, there are security and reliability requirements on the actual control plane communications channels. These requirements are out of scope for this document; Draft ITU-T Rec. G.dcn discusses them in detail.

9.3.2 Resource Discovery

Resource discovery happens between neighbors. A mechanism designed for a technology domain can be applied to any pair of NEs interconnected through interfaces of the same technology. However, because resource discovery implies a certain amount of information disclosure between two business domains, it is under the service providers' security and policy control. In certain network scenarios, a service provider who owns the transport network may not be willing to disclose any internal addressing scheme to its clients, so a client NE may not have the neighbor NE address and port ID in its NE-level resource table.

Interface ports and their characteristics define the network element resources. Each network element can store its resources in a local table that could include the switching granularity supported by the network element, the ability to support concatenated services, the range of bandwidths supported by adaptation, and physical attributes such as signal format, transmission bit rate, optics type, multiplexing structure, wavelength, and the direction of the flow of information. Resource discovery can be achieved through either manual provisioning or automated procedures. The procedures are generic, while the specific mechanisms and control information can be technology dependent.

Resource discovery can be achieved through several methods. One method is self-resource discovery, by which the NE populates its resource table with its own physical attributes and resources. Neighbor discovery is another method, by which the NE discovers its adjacencies in the transport plane and their port associations and populates the neighbor NE information. After neighbor discovery, resource verification and monitoring must be performed to verify physical attributes and ensure compatibility. Resource monitoring must be performed periodically, since neighbor discovery and port association are repeated periodically. Further information can be found in [GMPLS-ARCH].
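The NE-level resource table described above might be pictured as follows. This is an illustration only; the field names and values are assumptions made for this example, not a defined data model.

   from dataclasses import dataclass, field
   from typing import Optional, Tuple

   @dataclass
   class PortResource:
       port_id: str
       signal_format: str           # e.g. "SONET", "OTN"
       bit_rate_gbps: float
       granularity: str             # e.g. "STS-1", "lambda"
       concatenation: bool
       direction: str               # "tx", "rx" or "bidirectional"
       neighbor: Optional[Tuple[str, str]] = None  # (remote NE, port)

   @dataclass
   class ResourceTable:
       ne_id: str
       ports: dict = field(default_factory=dict)

       def self_discover(self, port):
           """Self-resource discovery: record the port's own attributes."""
           self.ports[port.port_id] = port

       def learn_neighbor(self, port_id, remote_ne, remote_port):
           """Neighbor discovery result; may be withheld across business
           boundaries under provider policy."""
           self.ports[port_id].neighbor = (remote_ne, remote_port)

   table = ResourceTable("NE-A")
   table.self_discover(PortResource("1/1", "SONET", 2.5, "STS-1",
                                    True, "bidirectional"))
   table.learn_neighbor("1/1", "NE-B", "7/3")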
10. Requirements for service and control plane resiliency

A range of failures can occur within a network, including node failures (e.g. office outages, natural disasters), link failures (e.g. fiber cuts, or failures arising from diverse circuits traversing shared facilities such as a cut conduit) and channel failures (e.g. laser failures).

Failures may be divided into those affecting the data plane and those affecting the control plane.

Requirement 126. The ASON architecture and associated protocols shall include redundancy/protection options such that any single failure event shall not impact the data plane or the control plane.

10.1 Service resiliency

Rapid protection/restoration from data plane failures is a crucial aspect of current and future transport networks. Rapid recovery is required by transport network providers to protect service and also to support stringent Service Level Agreements (SLAs) that dictate high reliability and availability for customer connectivity.

The choice of a protection/restoration policy is a tradeoff between network resource utilization (cost) and service interruption time.

Clearly, minimized service interruption time is desirable, but schemes achieving this usually do so at the expense of network resource utilization, resulting in increased cost to the provider. Different protection/restoration schemes operate with different tradeoffs between spare capacity requirements and service interruption time.

In light of these tradeoffs, transport providers are expected to support a range of different service offerings, with a strong differentiating factor between these offerings being the service interruption time in the event of network failures. For example, a provider's highest offered service level would generally ensure the most rapid recovery from network failures. However, such schemes (e.g., 1+1, 1:1 protection) generally use a large amount of spare restoration capacity, and are thus not cost effective for most customer applications. Significant reductions in spare capacity can be achieved by instead sharing this capacity across multiple independent failures.

Clients will have different requirements for connection availability. These requirements can be expressed in terms of the "service level", which describes restoration/protection options and priority-related connection characteristics such as holding priority (e.g. pre-emptable or not), set-up priority, and restoration priority. The mapping of individual service levels to a specific set of protection/restoration options and connection priorities will therefore be determined by individual carriers.

Requirement 127. In order for the network to support multiple grades of service, the control plane must identify, assign, and track multiple protection and restoration options.

For the purposes of this discussion, the following protection/restoration definitions are provided:

Reactive Protection: This is a function performed by equipment management functions and/or the transport plane (depending on whether it is equipment protection, facility protection, and so on) in response to failures or degraded conditions. Thus, if the control plane and/or management plane is disabled, the reactive protection function can still be performed. Reactive protection requires that protecting resources be configured and reserved (i.e. they cannot be used for other services). The time to exercise the protection is technology specific and designed to protect against service interruption.

Proactive Protection: In this form of protection, protection events are initiated in response to planned engineering works (often from a centralized operations center). Protection events may be triggered manually via operator request or based on a schedule supported by a soft scheduling function. This soft scheduling function may be performed by either the management plane or the control plane, but could also be part of the equipment management functions.
If the control plane and/or management plane is disabled and that is where the soft scheduling function is performed, the proactive protection function cannot be performed. [Note that in the case of a hierarchical model of subnetworks, some protection may remain available under a partial failure: the failure of a single subnetwork control plane or management plane controller affects only the entities below the failed subnetwork controller, not its parents or peers.] Proactive protection requires that protecting resources be configured and reserved (i.e. they cannot be used for other services) prior to the protection exercise. The time to exercise the protection is technology specific and designed to protect against service interruption.

Reactive Restoration: This is a function performed by either the management plane or the control plane. Thus, if the control plane and/or management plane is disabled, the restoration function cannot be performed. [Note that in the case of a hierarchical model of subnetworks, some restoration may remain available under a partial failure: the failure of a single subnetwork control plane or management plane controller affects only the entities below the failed subnetwork controller, not its parents or peers.] Restoration capacity may be shared among multiple demands. A restoration path is created after the failure is detected. Path selection could be done either off-line or on-line, and the path selection algorithms may be executed in real time or non-real time depending upon their computational complexity, implementation, and specific network context.
. Off-line computation may be facilitated by simulation and/or network planning tools, and can help provide guidance to subsequent real-time computations.
. On-line computation may be done whenever a connection request is received.
Off-line and on-line path selection may be used together to make network operation more efficient. Operators could use on-line computation to handle a subset of path selection decisions and use off-line computation for complicated traffic engineering and policy-related issues such as demand planning, service scheduling, cost modeling and global optimization.

Proactive Restoration: This is a function performed by either the management plane or the control plane. Thus, if the control plane and/or management plane is disabled, the restoration function cannot be performed. [Note that in the case of a hierarchical model of subnetworks, some restoration may remain available under a partial failure: the failure of a single subnetwork control plane or management plane controller affects only the entities below the failed subnetwork controller, not its parents or peers.] Restoration capacity may be shared among multiple demands. Part or all of the restoration path is created before the failure is detected, depending on the algorithms used, the types of restoration options supported (e.g. shared restoration/connection pool, dedicated restoration pool), whether the end-to-end call is protected or just the UNI part or NNI part, available resources, and so on.
In the event that the restoration path is fully pre-allocated, a protection switch must occur upon failure, similarly to the reactive protection switch. The main difference between the options in this case is that the switch occurs through actions of the control plane rather than the transport plane. Path selection could be done either off-line or on-line, and the path selection algorithms may be executed in real time or non-real time depending upon their computational complexity, implementation, and specific network context.
. Off-line computation may be facilitated by simulation and/or network planning tools, and can help provide guidance to subsequent real-time computations.
. On-line computation may be done whenever a connection request is received.

Off-line and on-line path selection may be used together to make network operation more efficient. Operators could use on-line computation to handle a subset of path selection decisions and use off-line computation for complicated traffic engineering and policy-related issues such as demand planning, service scheduling, cost modeling and global optimization.

Multiple protection/restoration options are required in the network to support the range of offered services. NNI protection/restoration schemes operate between two adjacent nodes and involve switching to a protection/restoration connection when a failure occurs. UNI protection schemes operate between the edge device and a switch node (i.e. at the access or drop). End-to-end path protection/restoration schemes operate between access points (i.e. connections are protected/restored across all NNI and UNI interfaces supporting the call).

In general, the following protection schemes should be considered for all protection cases within the network:
. Dedicated protection (e.g., 1+1, 1:1)
. Shared protection (e.g., 1:N, M:N). This allows the network to ensure high quality service for customers, while still managing its physical resources efficiently.
. Unprotected

In general, the following restoration schemes should be considered for all restoration cases within the network:
. Dedicated restoration capacity
. Shared restoration capacity. This allows the network to ensure high quality of service for customers, while still managing its physical resources efficiently.
. Un-restorable

To support the protection/restoration options:

Requirement 128. The control plane shall support multiple options for access (UNI), span (NNI), and end-to-end path protection/restoration.

Requirement 129. The control plane shall support configurable protection/restoration options via software commands (as opposed to needing hardware reconfigurations) to change the protection/restoration mode.

Requirement 130. The control plane shall support mechanisms to establish primary and protection paths.

Requirement 131. The control plane shall support mechanisms to modify protection assignments, subject to service protection constraints.

Requirement 132. The control plane shall support methods for fault notification to the nodes responsible for triggering restoration/protection. (Note that the transport plane is designed to provide the needed information between termination points;
this information is expected to be utilized as appropriate.)

Requirement 133. The control plane shall support mechanisms for signaling rapid re-establishment of connection connectivity after failure.

Requirement 134. The control plane shall support mechanisms for reserving restoration bandwidth.

Requirement 135. The control plane shall support mechanisms for normalizing connection routing after failure repair.

Requirement 136. The signaling control plane should implement signaling message priorities to ensure that restoration messages receive preferential treatment, resulting in faster restoration.

Requirement 137. Normal connection operations (e.g., connection deletion) shall not result in protection/restoration being initiated.

Requirement 138. Restoration shall not result in misconnections (connections established to a destination other than that intended), even for short periods of time (e.g., during contention resolution). For example, signaling messages used to restore connectivity after a failure should not be forwarded by a node before contention has been resolved.

Requirement 139. In the event that there is insufficient bandwidth available to restore all connections, restoration priorities/pre-emption should be used to determine which connections are allocated the available capacity.

The amount of restoration capacity reserved on the restoration paths determines the robustness of the restoration scheme to failures. For example, a network operator may choose to reserve sufficient capacity to ensure that all shared restorable connections can be recovered in the event of any single failure event (e.g., a conduit being cut). A network operator may instead reserve more or less capacity than is required to handle any single failure event, or may alternatively choose to reserve only a fixed pool independent of the number of connections requiring this capacity (i.e., not reserve capacity for each individual connection).

10.2 Control plane resiliency

Requirement 140. The optical control plane network shall support protection and restoration options to enable it to be robust to failures.

Requirement 141. The control plane shall support the necessary options to ensure that no service-affecting module of the control plane (software modules or control plane communications) is a single point of failure.

Requirement 142. The control plane should support options to enable it to be self-healing.

Requirement 143. The control plane shall provide reliable transfer of signaling messages and flow control mechanisms for restricting the transmission of signaling packets where appropriate.

The control plane may be affected by failures in signaling network connectivity and by software failures (e.g., in the signaling, topology and resource discovery modules).

Requirement 144. Control plane failures shall not cause failure of established data plane connections.

Fast detection of and recovery from failures in the control plane are important to allow normal network operation to continue in the event of signaling channel failures.

Requirement 145. Control network failure detection mechanisms shall distinguish between control channel failures and software process failures.
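As a purely illustrative sketch of the distinction drawn in Requirement 145, the monitor below treats loss of control-channel hellos while the local signaling process is still alive as a channel failure, and loss of the process heartbeat as a software failure. Timer values and all names are assumptions for this example only.

   import time

   HELLO_DEAD_INTERVAL = 3.0     # no hello for 3 s => channel suspect
   PROCESS_DEAD_INTERVAL = 3.0   # no heartbeat for 3 s => process suspect

   class ControlPlaneMonitor:
       def __init__(self):
           self.last_hello = time.time()      # from the control channel
           self.last_heartbeat = time.time()  # from the signaling process

       def on_hello(self):
           self.last_hello = time.time()

       def on_heartbeat(self):
           self.last_heartbeat = time.time()

       def diagnose(self):
           now = time.time()
           if now - self.last_heartbeat > PROCESS_DEAD_INTERVAL:
               # restart the module or switch to a standby (Req. 148)
               return "software-process-failure"
           if now - self.last_hello > HELLO_DEAD_INTERVAL:
               # switch to a backup channel or re-route (Req. 146)
               return "control-channel-failure"
           return "ok"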
Different recovery techniques are initiated for the different types of failure. When there are multiple channels (optical fibers or multiple wavelengths) between network elements and/or client devices, a failure of the control channel will have a much bigger impact on service availability than in the single-channel case. It is therefore recommended to support a certain level of protection of the control channel. Control channel failures may be recovered from either by using dedicated protection of control channels, or by re-routing control traffic within the control plane (e.g., using the self-healing properties of IP). Achieving this requires rapid failure detection and recovery mechanisms. For dedicated control channel protection, signaling traffic may be switched onto a backup control channel between the same adjacent pair of nodes. Such mechanisms protect against control channel failure, but not against node failure.

Requirement 146. If a dedicated backup control channel is not available between adjacent nodes, or if a node failure has occurred, then signaling messages should be re-routed around the failed link/node.

Requirement 147. Fault localization techniques for the isolation of failed control resources shall be supported.

Recovery from signaling process failures can be achieved by switching to a standby module, or by re-launching the failed signaling module.

Requirement 148. Recovery from software failures shall result in complete recovery of network state.

Control channel failures may occur during connection establishment, modification or deletion. If this occurs, the control channel failure must not result in partially established connections being left dangling within the network. Connections affected by a control channel failure during the establishment process must be removed from the network, re-routed (cranked back) or continued once the failure has been resolved. In the case of connection deletion requests affected by control channel failures, the connection deletion process must be completed once signaling network connectivity is recovered.

Requirement 149. Connections shall not be left partially established as a result of a control plane failure.

Requirement 150. Connections affected by a control channel failure during the establishment process must be removed from the network, re-routed (cranked back) or continued once the failure has been resolved.

Requirement 151. Partial connection creations and deletions must be completed once the control plane connectivity is recovered.
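Tying together the recovery options described above (a dedicated backup channel where one exists, re-routing within the control plane otherwise, and completion of partially executed operations per Requirements 149 through 151), the following is a purely illustrative sketch; all class and attribute names are assumptions for this example and do not describe any particular protocol.

   class ControlChannel:
       def __init__(self, name, up=True):
           self.name, self.up = name, up

   class SignalingAdjacency:
       def __init__(self, primary, backup=None, reroute_path=None):
           self.primary = primary            # ControlChannel
           self.backup = backup              # ControlChannel or None
           self.reroute_path = reroute_path  # e.g. list of node names
           self.active = primary
           self.pending_ops = []             # setups/deletions in progress

       def on_channel_failure(self):
           """Pick a recovery option, then replay in-progress operations."""
           if isinstance(self.active, ControlChannel):
               self.active.up = False
           if self.backup is not None and self.backup.up:
               self.active = self.backup         # dedicated protection
           elif self.reroute_path:
               self.active = self.reroute_path   # re-route around failure
           else:
               return None                       # wait for channel repair
           # Replay partly executed creations/deletions so no connection
           # is left dangling (Requirements 149-151).
           replayed, self.pending_ops = self.pending_ops, []
           return replayed

   adj = SignalingAdjacency(ControlChannel("ch-primary"),
                            backup=ControlChannel("ch-backup"),
                            reroute_path=["node-B", "node-C"])
   adj.pending_ops.append("setup: conn-42")
   adj.on_channel_failure()   # switches to ch-backup, replays the setup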
11. Security concerns and requirements

This section describes the security concerns and requirements for optical connections.

11.1 Data Plane Security and Control Plane Security

In terms of security, an optical connection has two aspects: one is the security of the data plane to which the optical connection itself belongs, and the other is the security of the control plane by which the optical connection is controlled.

11.1.1 Data Plane Security

Requirement 152. Misconnection shall be avoided in order to keep users' data confidential.

Requirement 153. To enhance the integrity and confidentiality of data, it may be helpful to support scrambling of data at layer 2 or encryption of data at a higher layer.

11.1.2 Control Plane Security

It is desirable to physically decouple the control plane from the data plane.

Additional security mechanisms should be provided to guard against intrusions on the signaling network.

Requirement 154. Network information shall not be advertised across exterior interfaces (E-UNI or E-NNI). The advertisement of network information across the E-NNI shall be controlled and limited in a configurable, policy-based fashion. The advertisement of network information shall be isolated and managed separately by each administration.

Requirement 155. Identification, authentication and access control shall be rigorously used for providing access to the control plane.

Requirement 156. The UNI shall support ongoing identification and authentication of the UNI-C entity (i.e., each user request shall be authenticated).

Editor's Note: The control plane shall have an audit trail and a log with timestamps recording access.

11.2 Service Access Control

From a security perspective, network resources should be protected from unauthorized access and should not be used by unauthorized entities. Service Access Control is the mechanism that limits and controls entities trying to access network resources. On the public UNI in particular, Connection Admission Control (CAC) should be implemented and support the following features:

Requirement 157. CAC should be applied to any entity that tries to access network resources through the public UNI. CAC should include an entity authentication function in order to prevent masquerade (spoofing). Masquerade is the fraudulent use of network resources by pretending to be a different entity. An authenticated entity should be given a service access level on a configurable policy basis.

Requirement 158. Each entity should be authorized to use network resources according to the service level given.

Requirement 159. With the help of CAC, usage-based billing should be realized. CAC and usage-based billing should be stringent enough to avoid any repudiation. Repudiation means that an entity involved in a communication exchange subsequently denies having taken part in it.

11.3 Optical Network Security Concerns

Since optical services are directly related to the layer 1 network that is fundamental to the telecom infrastructure, stringent security assurance mechanisms should be implemented in optical networks. When designing equipment, protocols, NMSs and OSSs that participate in optical services, every security aspect should be considered carefully in order to avoid security holes that could endanger an entire network, for example through denial-of-service (DoS) attacks or unauthorized access.

Acknowledgements

The authors of this document would like to acknowledge the valuable inputs from Yangguang Xu, Deborah Brunhard, Daniel Awduche, Jim Luciani, Mark Jones and Gerry Ash.

References

[carrier-framework] Y. Xue et al., "Carrier Optical Services Framework and Associated UNI requirements", draft-many-carrier-framework-uni-00.txt, IETF, Nov. 2001.
[G.807] ITU-T Recommendation G.807 (2001), "Requirements for the Automatic Switched Transport Network (ASTN)".
[G.dcm] ITU-T New Recommendation G.dcm, "Distributed Connection Management (DCM)".
[G.ason] ITU-T New Recommendation G.ason, "Architecture for the Automatically Switched Optical Network (ASON)".
[oif2001.196.0] M. Lazer, "High Level Requirements on Optical Network Addressing", oif2001.196.0.
[oif2001.046.2] J. Strand and Y. Xue, "Routing For Optical Networks With Multiple Routing Domains", oif2001.046.2.
[ipo-impairements] J. Strand et al., "Impairments and Other Constraints on Optical Layer Routing", draft-ietf-ipo-impairments-00.txt, work in progress.
[ccamp-gmpls] Y. Xu et al., "A Framework for Generalized Multi-Protocol Label Switching (GMPLS)", draft-many-ccamp-gmpls-framework-00.txt, July 2001.
[mesh-restoration] G. Li et al., "RSVP-TE Extensions for Shared Mesh Restoration in Transport Networks", draft-li-shared-mesh-restoration-00.txt, July 2001.
[sis-framework] Y. T'Joens et al., "Service Level Specification and Usage Framework", draft-manyfolks-sls-framework-00.txt, IETF, Oct. 2000.
[control-frmwrk] G. Bernstein et al., "Framework for MPLS-based Control of Optical SDH/SONET Networks", draft-bms-optical-sdhsonet-mpls-control-frmwrk-00.txt, IETF, Nov. 2000.
[ccamp-req] J. Jiang et al., "Common Control and Measurement Plane Framework and Requirements", draft-walker-ccamp-req-00.txt, CCAMP, August 2001.
[tewg-measure] W. S. Lai et al., "A Framework for Internet Traffic Engineering Measurement", draft-wlai-tewg-measure-01.txt, IETF, May 2001.
[ccamp-g.709] A. Bellato, "G.709 Optical Transport Networks GMPLS Control Framework", draft-bellato-ccamp-g709-framework-00.txt, CCAMP, June 2001.
[onni-frame] D. Papadimitriou, "Optical Network-to-Network Interface Framework and Signaling Requirements", draft-papadimitriou-onni-frame-01.txt, IETF, Nov. 2000.
[oif2001.188.0] R. Graveman et al., "OIF Security Requirement", oif2001.188.0.

Authors' Addresses

Yong Xue
UUNET/WorldCom
22001 Loudoun County Parkway
Ashburn, VA 20147
Phone: +1 (703) 886-5358
Email: yxue@uu.net

John Strand
AT&T Labs
100 Schulz Dr.,
Rm 4-212 Red Bank,
NJ 07701, USA
Phone: +1 (732) 345-3255
Email: jls@att.com

Monica Lazer
AT&T
900 Route 202/206N, PO Box 752
Bedminster, NJ 07921-0000
Email: mlazer@att.com

Jennifer Yates
AT&T Labs
180 Park Ave, P.O. Box 971
Florham Park, NJ 07932-0000
Email: jyates@research.att.com

Dongmei Wang
AT&T Labs
Room B180, Building 103
180 Park Avenue
Florham Park, NJ 07932
Email: mei@research.att.com

Ananth Nagarajan
Wesam Alanqar
Lynn Neir
Tammy Ferris
Sprint
9300 Metcalf Ave
Overland Park, KS 66212, USA
Email: ananth.nagarajan@mail.sprint.com
Email: wesam.alanqar@mail.sprint.com
Email: lynn.neir@mail.sprint.com
Email: tammy.ferris@mail.sprint.com

Hirokazu Ishimatsu
Japan Telecom Co., LTD
2-9-1 Hatchobori, Chuo-ku,
Tokyo 104-0032 Japan
Phone: +81 3 5540 8493
Fax: +81 3 5540 8485
Email: hirokazu@japan-telecom.co.jp

Olga Aparicio
Cable & Wireless Global
11700 Plaza America Drive
Reston, VA 20191
Phone: +1 (703) 292-2022
Email: olga.aparicio@cwusa.com

Steven Wright
Science & Technology
BellSouth Telecommunications
41G70 BSC
675 West Peachtree St. NE.
Atlanta, GA 30375
Phone: +1 (404) 332-2194
Email: steven.wright@snt.bellsouth.com