INTERNET-DRAFT
Document: draft-ietf-ipo-carrier-requirements-00.txt
Category: Informational
Expiration Date: January, 2002

                                                Yong Xue (Editor)
                                                UUNET/Worldcom

                                                Monica Lazer
                                                John Strand
                                                Jennifer Yates
                                                Dongmei Wang
                                                AT&T

                                                Ananth Nagarajan
                                                Lynn Neir
                                                Wesam Alanqar
                                                Tammy Ferris
                                                Sprint

                                                Hirokazu Ishimatsu
                                                Japan Telecom Co., LTD

                                                Steven Wright
                                                Bellsouth

                                                Olga Aparicio
                                                Cable & Wireless Global

               Carrier Optical Services Requirements

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or rendered obsolete by other
   documents at any time. It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.

draft-ietf-ipo-carrier-requirements-00.txt                      [Page 1]

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Abstract

   This contribution describes a carrier's optical services framework
   and associated requirements for the optical network. As such, this
   document concentrates on the requirements driving the work towards
   realization of ASON. This document is intended to be
   protocol-neutral.

Table of Contents

   1. Introduction....................................................3
   1.1 Justification................................................3
   1.2 Conventions used in this document............................3
   1.3 Background...................................................3
   1.4 Value Statement..............................................4
   1.5 Scope of This Document.......................................5
   2. 
Definitions and Terminology.....................................5
   3. General Requirements............................................6
   3.1 Separation of Networking Functions...........................6
   3.2 Network and Service Scalability..............................7
   3.3 Transport Network Technology.................................7
   3.4 Service Building Blocks......................................8
   4. Service Model and Applications..................................8
   5. Network Reference Model........................................11
   5.1 Optical Networks and Subnetworks............................11
   5.2 Network Interfaces..........................................11
   5.3 Intra-Carrier Network Model.................................15
   5.4 Inter-Carrier Network Model.................................16
   6. Optical Service User Requirements..............................17
   6.1 Connection Management.......................................17
   6.2 Optical Services............................................20
   6.3 Levels of Transparency......................................21
   6.4 Optical Connection Granularity..............................21
   6.5 Other Service Parameters and Requirements...................23
   7. Optical Service Provider Requirements..........................25
   7.1 Access Methods to Optical Networks..........................25
   7.2 Bearer Interface Types......................................26
   7.3 Names and Address Management................................26
   7.4 Link Identification.........................................29
   7.5 Policy-Based Service Management Framework...................29
   7.6 Multiple Hierarchies........................................32
   8. 
Control Plane Functional Requirements for Optical Services.....32
   8.1 Control Plane Capabilities and Functions....................32
   8.2 Signaling Network...........................................34
   8.3 Control Plane Interface to Data Plane.......................36
   8.4 Control Plane Interface to Management Plane.................36
   8.5 Control Plane Interconnection...............................41
   9. Requirements for Signaling, Routing and Discovery..............43
   9.1 Signaling Functions.........................................44
   9.2 Routing Functions...........................................46
   9.3 Automatic Discovery Functions...............................49
   10. Requirements for Service and Control Plane Resiliency.........54
   10.1 Service Resiliency.........................................54
   10.2 Control Plane Resiliency...................................58
   11. Security Concerns and Requirements............................58
   11.1 Data Plane Security and Control Plane Security.............58
   11.2 Service Access Control.....................................59
   11.3 Optical Network Security Concerns..........................62

1. Introduction

1.1 Justification

   The charter of the IPO WG calls for a document on "Carrier Optical
   Services Requirements" for IP/Optical networks. This document
   addresses that aspect of the IPO WG charter. Furthermore, this
   document was accepted as an IPO WG document by unanimous agreement
   at the IPO WG meeting held on March 19, 2001, in Minneapolis, MN,
   USA. It presents a carrier and end-user perspective on optical
   network services and requirements.

1.2 Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in
   this document are to be interpreted as described in RFC 2119.
1.3 Background

   The next-generation optical transport network (OTN) will consist of
   optical cross-connects (OXC), DWDM optical line systems (OLS) and
   optical add-drop multiplexers (OADM) based on the architecture
   defined in ITU-T Recommendation G.872 [G.872]. The OTN is an
   optical transport network bounded by a set of optical channel
   access points and has a layered structure consisting of optical
   channel, multiplex section and transmission section sub-layer
   networks.

   Optical networking encompasses the functionality for establishment,
   transmission, multiplexing, switching, protection, and restoration
   of optical connections carrying a wide range of user signals of
   varying formats and bit rates.

   It is an emerging trend to enhance the OTN with an intelligent
   optical layer control plane to dynamically provision network
   resources and to provide network survivability using mesh-based
   protection and restoration techniques. The resulting intelligent
   networks are called automatic switched optical networks, or ASON.

   The emerging and rapidly evolving automatic switched optical
   network (ASON) technologies [G.ASON] are aimed at providing optical
   networks with intelligent networking functions and capabilities in
   the control plane to enable wavelength switching, rapid optical
   connection provisioning and dynamic rerouting. This new networking
   platform will create tremendous business opportunities for network
   operators and service providers to offer new services to the
   market.

1.4 Value Statement

   By deploying ASON technology, a carrier expects to achieve the
   following benefits from both technical and business perspectives:

   Rapid Circuit Provisioning: ASON technology will enable the dynamic
   end-to-end provisioning of optical connections across the optical
   network by using standard routing and signaling protocols.
Enhanced Survivability: ASON technology will enable the network to
   dynamically reroute an optical connection in case of a failure
   using mesh-based network protection and restoration techniques,
   which greatly improves cost-effectiveness compared to the current
   line and ring protection schemes in SONET/SDH networks.

   Cost Reduction: ASON networks will enable the carrier to better
   utilize the optical network, thus achieving significant unit cost
   reduction per megabit due to the cost-effective nature of optical
   transmission technology, a simplified network architecture and
   reduced operations cost.

   Service Flexibility: ASON technology will support provisioning of
   an assortment of existing and new services such as protocol- and
   bit-rate-independent transparent network services, and
   bandwidth-on-demand services.

   Editor's Note: The next revision will make this more explicit with
   respect to the relationship with the ASON control plane.

   Enhanced Interoperability: The ASON control plane will utilize
   industry and international standards-based architecture and
   protocols, which facilitate the interoperability of optical network
   equipment from different vendors.

   In addition, the introduction of a standards-based control plane
   offers the following potential benefits:

   - Reactive traffic engineering at the optical layer, which allows
     network resources to be dynamically allocated to traffic flows.

   - Reduced need for service providers to develop new operational
     support system software for network control and new service
     provisioning on the optical network, thus speeding up the
     deployment of optical network technology and reducing software
     development and maintenance costs.

   - Potential development of a unified control plane that can be used
     for different transport technologies including OTN, SONET/SDH,
     ATM and PDH.
1.5 Scope of This Document

   This IPO working group (WG) document is aimed at providing, from
   the carrier's perspective, a service framework and associated
   requirements for the optical services to be offered in the
   next-generation optical networking environment and for the service
   control and management functions. As such, this document
   concentrates on the requirements driving the work towards
   realization of ASON. This document is intended to be
   protocol-neutral.

   Note: It is recognized by the carriers writing this document that
   some features and requirements are not supported by protocols being
   developed in the IETF. However, the purpose of this document is to
   specify generic carrier functional requirements.

   Editor's Note: We may add a statement that these are not
   all-inclusive requirements, and keep it until future revisions make
   it an all-inclusive list of requirements. Every carrier's needs are
   different.

   The objective of this document is NOT to define specific service
   models. Instead, some major service building blocks are identified
   that will enable carriers to mix and match in order to create the
   service platform best suited to their business model. These
   building blocks include generic service types, service-enabling
   control mechanisms, and service control and management functions.
   The ultimate goal is to provide the requirements to guide the
   control protocol development within the IETF in terms of IP over
   optical technology.

   In this document, we consider IP a major client of the optical
   network, but the same requirements and principles should be equally
   applicable to non-IP clients such as SONET/SDH, ATM, ITU G.709,
   etc.

2. 
Definitions and Terminology

   Optical Transport Network (OTN)
   SONET/SDH Network
   Automatic Switched Transport Network (ASTN)
   Optical Service
   Carriers
   Transparent and Opaque Network
   Other Terminology
   Bearer channels

   Abbreviations

   ASON   Automatic Switched Optical Network
   ASTN   Automatic Switched Transport Network
   AD     Administrative Domain
   AND    Automatic Neighbor Discovery
   ASD    Automatic Service Discovery
   CAC    Connection Admission Control
   DCM    Distributed Connection Management
   E-NNI  Exterior NNI
   IWF    InterWorking Function
   I-NNI  Interior NNI
   IrDI   Inter-Domain Interface
   IaDI   Intra-Domain Interface
   INC    Intra-network Connection
   NNI    Network-to-Network Interface
   NE     Network Element
   OTN    Optical Transport Network
   OLS    Optical Line System
   OCC    Optical Connection Controller
   PI     Physical Interface
   SLA    Service Level Agreement
   UNI    User-to-Network Interface

3. General Requirements

   In this section, a number of generic requirements related to the
   service control and management functions are discussed.

3.1 Separation of Networking Functions

   It makes logical sense to segregate the networking functions within
   each layer network into three logical functional network planes:
   the control plane, the data plane and the management plane. They
   are responsible for providing network control functions, data
   transmission functions and network element management functions,
   respectively.

   Control Plane: includes the functions related to networking control
   capabilities such as routing, signaling, and policy control, as
   well as resource and service discovery.

   Data Plane (transport plane): includes the functions related to
   bearer channels and transmission.

   Management Plane: includes the functions related to the management
   of network elements, networks and network services.

   Each plane consists of a set of interconnected functional or
   control entities responsible for providing the networking or
   control functions defined for that network layer.
The crux of the ASON network is the networking intelligence that
   contains automatic routing, signaling and discovery functions to
   automate the network control functions; these automatic control
   functions are collectively called the control plane functions.

   The separation of the control plane from both the data and
   management planes is beneficial to carriers in that it:

   - Allows equipment vendors to have a modular system design that
     will be more reliable and maintainable, thus reducing the overall
     system ownership and operation cost.

   - Allows carriers the flexibility to choose a third-party vendor's
     control plane software as the control plane solution for their
     switched optical networks.

   - Allows carriers to deploy a unified control plane and OSS systems
     to manage and control the different types of transport networks
     they own.

   - Allows carriers to use a separate control network specially
     designed and engineered for control plane communications.

   Requirement 1. The control traffic and user data traffic shall not
   be assumed to be congruently routed under the same topology,
   because the control transport network topology may very well be
   different from that of the data transport network.

   Note: This is in contrast to the IP network, where control messages
   and user traffic are routed and switched based on the same network
   topology due to the associated in-band signaling nature of the IP
   network.

3.2 Network and Service Scalability

   In terms of the scale and complexity of the future optical network,
   the following assumptions can be made when considering the
   scalability and performance requirements of the optical control and
   management functions.
Within one operator subnetwork:

   - There may be hundreds of OXC nodes
   - There may be thousands of terminating ports/wavelengths per OXC
     node
   - There may be hundreds of parallel fibers between a pair of OXC
     nodes
   - There may be hundreds of wavelength channels transmitted on each
     fiber.

   The number of optical connections on a network varies depending
   upon the size of the network.

   Requirement 2. Although specific applications may be on a small
   scale, the protocol itself shall not preclude large-scale networks.

3.3 Transport Network Technology

   Optical services can be offered over different types of underlying
   optical technologies. The underlying technology will, to a certain
   degree, determine the features and constraints of the services.

   This document assumes standards-based transport technologies such
   as SONET/SDH and OTN (G.709).

3.4 Service Building Blocks

   The ultimate goal of this document is to identify a set of basic
   service building blocks that carriers can mix and match to create
   the service models best suited to their business needs.

   Editor's Note: May need a list of building blocks in view of
   document content.

4. Service Model and Applications

   A carrier's optical network supports multiple types of service
   models. Each service model may have its own service operations,
   target markets, and service management requirements.

4.1 Static Provisioned Bandwidth Service (SPB)

   Static Provisioned Bandwidth Service creates soft permanent
   connections. Soft permanent connections are those connections
   initiated from the management plane, but completed through the
   control plane and its interactions with the management plane. These
   connections traditionally fall within the category of circuit
   provisioning and are characterized by very long holding times.

   Requirement 3. 
The control plane shall allow the management plane control of
   network resources for network management, including, but not
   limited to, management of soft permanent connections.

   Service Concept: The SPB service supports enhanced leased line and
   private line services. The network operator provides connection
   provisioning at the customer's request through the carrier's
   network operations center. Provisioning could take some time, and
   the provisioning process could be manual or semi-manual. The
   specific SPB functionality offered by a carrier may be carrier
   specific, but any network capability that can be invoked by, say,
   signaling across the UNI shall also be directly accessible by the
   network operator's network provisioning and network management work
   centers. This is basically the "point and click" type of
   provisioning service currently proposed by many vendors. The
   connections established in this way are so-called permanent or
   soft-permanent connections.

   Service Operation: During the provisioning process, multiple
   network resources are reserved and dedicated to the specific path.
   The control interface is either human (e.g., a customer calls a
   customer service representative) or via a customer network
   management system (e.g., the customer may make its request over a
   secure web site or by logging into a specialized OSS). Any
   provisioned bandwidth service facility is tracked. The path is
   stored in a database as an object (or structure) containing
   information relating to the connection attributes and the physical
   entities used in creating the path (e.g., ingress and egress, NE
   ports, cross-office and inter-office facilities). This information
   is used to reserve network resources at provisioning time, to track
   performance parameters, and to perform maintenance functions. An
   end-to-end managed service may involve multiple networks, e.g.,
   both access networks and an intercity network.
In this case provisioning may be initiated by whichever network has
   primary service responsibility.

   Target Market: The SPB service focuses on customers unable to
   request connections using direct signaling to the network,
   customers with complex engineering requirements that cannot be
   handled autonomously by the operator's optical layer control plane,
   customers requiring connections to off-net locations, and customers
   who need an end-to-end managed service offered by (or out-sourced
   to) carriers.

   Service Management: The SPB service involves the carrier's
   management system. The connections provided by SPB may be under the
   control of value-added network management services, such as
   specific path selection, complex engineering requirements, or
   customer-required monitoring functions. The connection should be
   deleted only at the customer's request. Billing of SPB will be
   based on the bandwidth, service duration, quality of service, and
   other characteristics of the connection. In the SPB model, the user
   shall not have any information about the optical network; however,
   information on the health of the provisioned connection and other
   technical aspects of this connection may be provided to the user as
   part of the service agreement.

4.2 Bandwidth-on-Demand Service (BOD)

   Bandwidth-on-Demand Service supports management of switched
   connections. Switched connections are those connections initiated
   by the user edge device over the UNI and completed through the
   control plane. These connections may be more dynamic than soft
   permanent connections and have much shorter holding times.

   Service Concept: In the SPB model, the user is required to pay the
   cost of the connection independent of the usage of the connection.
   In current data private line services, the average utilization rate
   is very low and most of the bits are unused. This is mainly due to
   time-of-day and day-of-week reasons.
Even though businesses close down at night and over the weekend, the
   user still needs to pay for SPB connections. In the BOD model,
   there shall be the potential of tearing down a user's connection
   when the business is closed and giving it back to the user again
   when the business day begins. This is the service model of
   bandwidth on demand.

   In the BOD service model, connections are established and
   reconfigured in real time, and are so-called switched optical
   connections. Signaling between the user NE and the optical layer
   control plane initiates all necessary network activities. A
   real-time commitment for a future connection may also be
   established. A standard set of "branded" service options is
   available. The functionality available is a proper subset of that
   available to SPB Service users and is constrained by the
   requirement for real-time provisioning, among other things.
   Availability of the requested connection is contingent on resource
   availability.

   Service Operation: This service provides support for real-time
   creation of bandwidth between two end-points. The time needed to
   set up bandwidth on demand shall be on the order of seconds,
   preferably sub-second. To support dynamic connection establishment,
   the end terminals shall already be physically connected to the
   network with adequate capacity. Ingress into the network needs to
   be pre-provisioned for point-to-point ingress facilities. Also,
   necessary cross-connects throughout the network shall be set up
   automatically upon service request. To provide BOD services, UNI
   signaling between the user edge device and the network edge device
   is required for all connection end-points. The BOD service request
   shall be completed if and only if the request is consistent with
   the relevant SLAs, the network can support the requested
   connection, and the user edge device at the other end-point accepts
   the connection.
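   The three-part BOD admission rule above (SLA conformance, network
   capacity, far-end acceptance) can be sketched as follows. This is
   an illustrative, protocol-neutral sketch only; the function and
   field names are assumptions, not part of any defined interface.

```python
# Hypothetical sketch of the BOD admission rule: a request completes
# if and only if it conforms to the SLA, the network can support it,
# and the far-end edge device accepts it. Names are illustrative only.
def admit_bod_request(request, sla, network_can_support, far_end_accepts):
    if request["bandwidth_gbps"] > sla["max_bandwidth_gbps"]:
        return False  # inconsistent with the relevant SLA
    if not network_can_support(request):
        return False  # no resources for the requested connection
    if not far_end_accepts(request):
        return False  # destination edge device declined
    return True       # all three conditions hold

sla = {"max_bandwidth_gbps": 10}                  # assumed contracted ceiling
request = {"bandwidth_gbps": 2, "dst": "node-b"}  # assumed request format
accepted = admit_bod_request(request, sla, lambda r: True, lambda r: True)
```

   Note that the three checks are deliberately independent: the SLA
   test is local policy, while the capacity and far-end tests stand in
   for network state and UNI signaling to the other end-point.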
Target Market: The BOD service focuses on customers, such as ISPs,
   large intranets, and other data and SDH/SONET networks, requiring
   large point-to-point capacities and having very dynamic demands,
   and on customers supporting UNI functions in their edge devices.

   Service Management: The BOD service provides customers the
   possibility of rapid provisioning and high service utilization.
   Since connection establishment is not part of the functions of the
   network management system, connection management may be offered as
   a value-added service according to SLAs. Also, connection admission
   control shall be provided at connection request time. The
   connection shall be deleted at the customer's request from either
   the source end-point or the destination end-point. Billing of BOD
   shall be based on the bandwidth, service duration, quality of
   service, and other characteristics of the connection. In the BOD
   model, the user shall not have any information about the optical
   network; however, information on the health of the provisioned
   connection and other technical aspects of this connection may be
   provided to the user via the UNI connection request.

4.3 Optical Virtual Private Network (OVPN)

   Service Concept: The customer may contract for some specific
   network resources (capacity between OXCs, OXC ports, OXC switching
   resources) such that the customer is able to control these
   resources to reconfigure the optical cross-connections and to
   establish, delete and maintain connections. In effect, the customer
   would have a dedicated optical sub-network under its control.

   Service Operations: For future study.

   Target Market: The OVPN service focuses on customers, such as ISPs,
   large intranets, carriers, and other networks requiring large
   point-to-point capacities and having variable demands, who wish to
   integrate the control of their service and optical layers, and on
   business-to-business broadband solution assemblers.
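   The OVPN concept above, in which a customer freely reconfigures a
   contracted pool of resources, implies that the carrier only needs
   to check each customer-initiated connection against the contract.
   A minimal sketch, assuming a hypothetical contract structure (the
   field names are illustrative, not defined by this document):

```python
# Illustrative OVPN conformance check: the carrier verifies only that
# a customer-initiated connection stays within the contracted
# resources; there is no further admission control. Assumed fields.
def ovpn_request_conforms(request, contract):
    ports_needed = contract["ports_in_use"] + request["ports"]
    return (ports_needed <= contract["contracted_ports"]
            and request["bandwidth_gbps"] <= contract["capacity_gbps"])

contract = {"contracted_ports": 8, "ports_in_use": 6, "capacity_gbps": 10}
ok = ovpn_request_conforms({"ports": 2, "bandwidth_gbps": 10}, contract)
too_big = ovpn_request_conforms({"ports": 4, "bandwidth_gbps": 10}, contract)
```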
Service Management: The OVPN service provides the customer the
   possibility of leasing some optical network resources such that the
   customer is able to maintain its own sub-network. Since OVPN
   connection maintenance is no longer part of the functions of the
   network management system, connection management may be offered as
   a value-added service according to SLAs. In the OVPN model, there
   is no connection admission control from the carrier and the
   customer is free to reconfigure its network resources. Billing of
   OVPN shall be based on the network resources contracted. Network
   connection acceptance shall involve only a simple check to ensure
   that the request is in conformance with the capacities and
   constraints specified in the OVPN service agreement.

   Requirement 4. In the OVPN model, real-time information about the
   state of all resources contracted for shall be made available to
   the customer. Depending on the service agreement, this may include
   information on both in-effect and spare resources accessible to the
   customer.

5. Network Reference Model

   This section discusses major architectural and functional
   components of a generic carrier optical network, which should
   provide a reference model for describing the requirements for
   carrier optical services.

5.1 Optical Networks and Subnetworks

   There are two main types of optical networks currently under
   consideration: the SDH/SONET network as defined in ITU G.707 and
   T1.105, and the OTN as defined in ITU G.872. We assume an optical
   transport network (OTN) is composed of a set of optical
   cross-connects (OXC) and optical add-drop multiplexers (OADM) which
   are interconnected in a general mesh topology using DWDM optical
   line systems (OLS).

   For ease of discussion and description, it is often convenient to
   treat an optical network as an opaque subnetwork, in which the
   details of the network become less important; instead, the focus is
   on the functions and the interfaces the optical network provides.
In general, an opaque subnetwork can be defined as a set of access
   points on the network boundary and a set of point-to-point optical
   connections between those access points.

5.2 Network Interfaces

   A generic carrier network reference model describes a multi-carrier
   network environment. Each individual carrier network can be further
   partitioned into domains or sub-networks for administrative,
   technological or architectural reasons. The demarcation between
   (sub)networks can be either logical or physical and consists of a
   set of reference points identifiable in the optical network. From
   the control plane perspective, these reference points define a set
   of control interfaces in terms of optical control and management
   functionality, as illustrated in the following diagram.

                     +-----------------------------------------+
                     |                                         |
   +------------+    |   +------------+       +------------+   |
   |     IP     |    |   |  Optical   |       | Carrier IP |   |
   |  Network   +--E-UNI-+ Subnetwork +-I-UNI-+  network   |   |
   +------------+    |   +-----+------+       +-----+------+   |
                     |         |                    |          |
                     |       I-NNI                I-NNI        |
   +------------+    |         |                    |          |
   |     IP     |    |   +-----+------+       +-----+------+   |
   |  Network   +--E-UNI-+  Optical   +-I-NNI-+  Optical   |   |
   +------------+    |   | Subnetwork |       | Subnetwork |   |
                     |   +-----+------+       +-----+------+   |
                     +---------|--------------------|----------+
                             I-UNI                E-NNI
                               |                    |
                     +---------+----+       +-------+--------+
                     | Other Client |       | Other Carrier  |
                     |   Network    |       |    Network     |
                     | (ATM/SONET)  |       |                |
                     +--------------+       +----------------+

          Figure 5.1 Generic Carrier Network Reference Model

   The network interfaces encompass two aspects of the networking
   functions: the user data plane interface and the control plane
   interface.
The former concerns user data transmission across the network
   interface, while the latter concerns the control message exchange
   across the network interface, such as signaling and routing. We
   call the former the physical interface (PI) and the latter the
   control plane interface. Unless otherwise stated, the control
   interface is assumed in the remainder of this document.

   Control Plane Interfaces

   A control interface defines a relationship between two connected
   network entities on both sides of the interface. For each control
   interface, we need to define the architectural function each side
   plays and a controlled set of information that can be exchanged
   across the interface. The information flowing over this logical
   interface may include:

   - Endpoint name and address
   - Reachability/summarized network address information
   - Topology/routing information
   - Authentication and connection admission control information
   - Connection service messages
   - Network resource control information (I-NNI only)

   Different types of interfaces can be defined for network control
   and architectural purposes and can be used as the network reference
   points in the control plane.

   The User-Network Interface (UNI) is a bi-directional signaling
   interface between service requester and service provider control
   plane entities. We differentiate between the interior (I-UNI) and
   exterior (E-UNI) UNI as follows:

   E-UNI: A bi-directional signaling interface between service
   requester and control plane entities belonging to different
   domains. Information flows include support of connection flows and
   address resolution.

   I-UNI: A bi-directional signaling interface between service
   requester and control plane entities belonging to one or more
   domains having a trusted relationship.

   Editor's Note: Details of the I-UNI have to be worked out.
The Network-Network Interface (NNI) is the interface between two
   optical networks or sub-networks, specifically between the two
   directly linked edge ONEs of the two interconnected networks. We
   differentiate between the interior (I-NNI) and exterior (E-NNI) NNI
   as follows:

   E-NNI: A bi-directional signaling interface between control plane
   entities belonging to different domains. Information flows include
   support of connection flows and also reachability information
   exchanges.

   I-NNI: A bi-directional signaling interface between control plane
   entities belonging to one or more domains having a trusted
   relationship. Information flows over the I-NNI also include
   topology information.

   It should be noted that it is quite possible to use the E-NNI even
   between subnetworks with a trust relationship, in order to keep
   topology information exchanges within the subnetworks. Generally,
   two networks have a trust relationship if they belong to the same
   administrative domain, and do not have a trust relationship if they
   belong to different administrative domains.

   Generally speaking, the following levels of trust interfaces shall
   be supported:

   Interior interface: an interface is interior when there is a
   trusted relationship between the two connected networks.

   Exterior interface: an interface is exterior when there is no
   trusted relationship between the two connected networks.

   Interior interface examples include an I-NNI between two optical
   sub-networks belonging to a single carrier or an I-UNI interface
   between the optical transport network and an IP network owned by
   the same carrier. Exterior interface examples include an E-NNI
   between two different carriers or an E-UNI interface between a
   carrier optical network and its customers. The two types of
   interfaces may define different architectural functions and
   distinct levels of access, security and trust relationship.
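   The interior/exterior distinction above reduces to a trust test
   between the two connected networks. A hypothetical sketch follows;
   the trust model used here (same administrative domain, or an
   explicitly configured trusted pair) is an assumption for
   illustration only:

```python
# Illustrative classification of a control interface as interior or
# exterior: interior when the two sides have a trusted relationship
# (typically the same administrative domain), exterior otherwise.
def classify_interface(kind, domain_a, domain_b, trusted_pairs=frozenset()):
    trusted = (domain_a == domain_b
               or frozenset((domain_a, domain_b)) in trusted_pairs)
    return ("I-" if trusted else "E-") + kind  # e.g. "I-NNI", "E-UNI"

# Two sub-networks of the same carrier -> interior NNI
nni = classify_interface("NNI", "carrier-1", "carrier-1")
# A customer attaching to a carrier -> exterior UNI
uni = classify_interface("UNI", "customer-x", "carrier-1")
```

   The explicit `trusted_pairs` parameter reflects the observation in
   the text that trust does not strictly follow domain boundaries
   (e.g., an E-NNI may be used even between trusted subnetworks).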
Editor's Note: More work is needed in defining the specific functions
   of the interior and exterior interfaces.

   Requirement 5. The control plane interfaces shall be configurable
   and their behavior shall be consistent with the configuration
   (i.e., exterior versus interior interfaces).

5.3 Intra-Carrier Network Model

   The carrier's optical network is treated as a trusted domain, which
   is defined as a network under a single technical administration
   with a full trust relationship within the network. Within a trusted
   domain, all the optical network elements and sub-networks are
   considered to be secure and trusted by each other.

   A highly simplified optical networking environment consists of an
   optical transport network and a set of interconnected client
   networks of various types such as IP, ATM and SONET. In the
   intra-carrier model, interior interfaces (I-NNI and I-UNI) are
   generally assumed within a carrier-owned network. The interfaces
   between the carrier-owned network equipment and the optical network
   are interior UNIs, and the interfaces between optical sub-networks
   within a carrier's administrative domain are interior NNIs; the
   interfaces between the carrier's optical network and its users are
   exterior UNIs, and the interfaces between the optical networks of
   different operators are exterior NNIs.

   One business application for the interior UNI is the case where a
   carrier service operator offers data services such as IP, ATM and
   Frame Relay over its optical core network. Data services network
   elements such as routers and ATM switches are considered to be
   internal optical service client devices. The interconnection
   topology among the carrier NEs should be completely transparent to
   the users of the data services.

5.3.1 Multiple Sub-networks

   Without loss of generality, the optical network owned by a carrier
   service operator can be depicted as consisting of one or more
   optical sub-networks interconnected by direct optical links.
There may be many different reasons for having more than one optical sub-network. This may be the result of hierarchical layering; of different technologies across access, metro and long-haul networks (as discussed below); of business mergers and acquisitions; or of incremental optical network technology deployment by the carrier using different vendors or technologies.

A sub-network may be a single-vendor and single-technology network, but in general the carrier's optical network is heterogeneous in terms of equipment vendors and the technology utilized in each sub-network. There are four possible scenarios:

--- Single vendor and single technology
--- Single vendor and multiple technologies
--- Multiple vendors and single technology
--- Multiple vendors and multiple technologies

5.3.2 Access, Metro and Long-haul networks

Few carriers have end-to-end ownership of the optical networks. Even if they do, access, metro and long-haul networks often belong to different administrative divisions, and each forms an optical sub-network. Therefore, inter-(sub)network interconnection is essential for supporting end-to-end optical service provisioning and management.

The access, metro and long-haul networks may use different technologies and architectures, and as such may have different network properties. In general, an end-to-end optical connection may easily cross multiple sub-networks with the following possible scenarios:

Access -- Metro -- Access
Access -- Metro -- Long Haul -- Metro -- Access

Editor's Note: More details will be added in a later revision of this draft.

5.4 Inter-Carrier Network Model

The inter-carrier model focuses on the service and control aspects between different carrier networks and describes the internetworking relationship between the different carriers' optical networks. In the inter-carrier network model, each carrier's optical network is a separate administrative domain.
Both the UNI interface between the user and the carrier network and the NNI interface between two carriers' networks cross carrier administrative boundaries and are therefore by definition exterior interfaces.

Carrier Network Interconnection

Inter-carrier interconnection provides for connectivity among different optical network operators. Just as the success and scalability of the Internet has in large part been attributable to inter-domain routing protocols such as BGP, so will be the future success of the optical network. The normal connectivity between the carriers may include:

Private Peering: Two carriers set up a dedicated connection between them via a private arrangement.

Public Peering: Two carriers set up a point-to-point connection between them at a public optical network access point (ONAP).

Due to the nature of the automatic switched optical network, it is also possible to have distributed peering, where two distant ONEs are interconnected via an optical connection.

6. Optical Service User Requirements

An optical connection will traverse two UNI interfaces and zero or more NNI interfaces, depending on whether it is between two client network users crossing a single carrier's network or between two client network users crossing multiple carriers' networks.

6.1 Connection Management

6.1.1 Basic Connection Management

In a connection-oriented transport network, a connection must be established before data can be transferred. This requires, as a minimum, that the following connection management actions shall be supported:

Set-up

Connection set-up is initiated by the management plane on behalf of an end-user or by the end-user signaling device. The results are as follows: If set-up of the connection is successful, then the optical circuit, resources, or required bandwidth is dedicated to the associated end-points.
Dedicated resources may include active resources as well as protection or restoration resources in accordance with the class of service indicated by the user. If set-up of the connection is not successful, a negative response is returned to the initiating entity and any partial allocation of resources is de-allocated.

Editor's Note - may need to mention the ACK from the user on connection create confirmation.

Teardown

Connection teardown is initiated by the management plane on behalf of an end-user or by the end-user signaling device. The results are as follows: the optical circuit, resources or the required bandwidth are freed up for future use. Dedicated resources are also freed. Shared resources are only freed if there are no active connections sharing the same protection or restoration resources. If teardown is not successful, a negative response shall be returned to the end-user.

Query

A connection query is initiated by the management plane on behalf of an end-user or by the end-user signaling device. A status report is returned to the querying entity.

Accept/Reject

Connection accept/reject is initiated by the end-user signaling device. This command is relevant in the context of switched connections only. The destination end-user shall have the opportunity to accept or reject new connection requests or connection modifications.

Furthermore, the following requirements need to be considered:

Requirement 6. The control plane shall support action result code responses to any requests over the control interfaces.

Requirement 7. The control plane shall support requests for connection set-up, subject to policies in effect between the user and the network.

Requirement 8. The control plane shall support the destination user edge device's decision to accept or reject connection creation requests from the initiating user edge device.

Requirement 9. The control plane shall support the user request for connection teardown.

Requirement 10.
The control plane shall support management plane and user edge device requests for connection attribute or status queries.

In addition, there are several actions that need to be supported which are not directly related to an individual connection, but are necessary for establishing healthy interfaces. The requirements below show some of these actions:

Requirement 11. The UNI shall support initial registration of the UNI-C with the network.

Requirement 12. The UNI shall support registration and updates by the UNI-C entity of the edge devices and user interfaces that it controls.

Requirement 13. The UNI shall support network queries of the user edge devices.

Requirement 14. The UNI shall support detection of user edge device or of edge ONE failure.

In addition, connection admission control (CAC) is necessary for authentication of the user and controlling access to network resources.

Requirement 15. CAC shall be provided as part of the control plane functionality. It is the role of the CAC function to determine if there are sufficient free resources available to allow a new connection.

Requirement 16. If there are sufficient resources available, the CAC may permit the connection request to proceed.

Requirement 17. If there are not sufficient resources available, the CAC shall notify the originator of the connection request that the request has been denied.

6.2 Enhanced Connection Management

6.2.1 Compound Connections

Multiple point-to-point connections may be managed by the network so as to appear as a single compound connection to the end-points. Examples of such compound connections are connections based on virtual concatenation, diverse routing, or restorable connections. Compound connections are distinguished from basic connections in that a UNI request will generate multiple parallel NNI signaling sessions.
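The CAC behavior of Requirements 15-17 can be sketched as follows (a minimal illustration; the function name, authorized-user set and single free-bandwidth figure are assumptions of this sketch, not taken from any specification):

```python
def connection_admission_control(user, requested_bw, authorized_users, free_bw):
    """Sketch of CAC: authenticate the requesting user, then admit
    the connection only if sufficient free resources exist; otherwise
    the originator is notified that the request has been denied."""
    if user not in authorized_users:
        # Authentication failure: access to network resources is refused.
        return {"admitted": False, "reason": "authentication failed"}
    if requested_bw > free_bw:
        # Requirement 17: insufficient resources, originator is notified.
        return {"admitted": False, "reason": "insufficient resources"}
    # Requirement 16: sufficient resources, request may proceed.
    return {"admitted": True, "reason": None}
```

A real CAC policy would consult per-UNI policy and service-level parameters rather than a flat user set; as noted later in this document, CAC policies themselves are outside the scope of standardization.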
Connection Restoration

The control plane should provide the signaling and routing capabilities to permit connection restoration based on the user's request for its assigned service class.

Diverse Routing

The control plane should provide the signaling and routing capabilities to permit a user to request diversely routed connections from a carrier who supports this functionality.

Multicast Connections

The control plane should provide the signaling and routing capabilities to permit a user to request multicast connections from a carrier who supports this functionality.

6.2.2 Supplemental Services

Requirement 18. The control plane shall provide support for the development of supplementary services that are independent of the bearer service. Where these are carried across networks using a range of protocols, it is necessary to ensure that the protocol interworking provides a consistent service as viewed by the user, regardless of the network implementation.

Requirement 19. The control plane shall support closed user groups. This allows a user group to create, for example, a virtual private network.

Supplementary services may not be required or possible for soft permanent connections.

6.2.3 Optical VPNs

In optical virtual private networks, the customer contracts for specific network resources (capacity between OXCs, OXC ports, OXC switching resources) and is able to control these resources to establish, disconnect, and reconfigure optical connections.

Requirement 20. The control plane should provide the signaling and routing capabilities to permit a user to request optical virtual private networks from a carrier who supports this functionality.

6.3 Optical Services

Optical services embody a large range of transport services.
Currently, most transport systems are SONET/SDH based; however, innovations in optical technology such as photonic switching bring about the distinct possibility of support for pure optical transport services, while the proliferation of Ethernet, coupled with advancements in the technology to support 1 Gb/s and 10 Gb/s interfaces, is a driver to make this service class widely available.

Transparent Service assumes that the user requires optical transport without the network being aware of the framing. However, since transmission systems and the engineering rules that apply have dependencies on the signal bandwidth, even for transparent optical services, knowledge of the bandwidth requirements is essential.

Opaque Service refers to transport services where signal framing is negotiated between the user and the network operator, and only the payload is carried transparently.

SONET/SDH transport is most widely used for network-wide transport, and as such is discussed in most detail in the following sections.

As stated above, Ethernet services, specifically 1 Gb/s and 10 Gb/s Ethernet services, are gaining more and more popularity due to the lower costs of the customers' premises equipment and its simplified management requirements (compared to SONET or SDH). Therefore, more and more network customers have expressed a high level of interest in support of these transport services. Ethernet services may be carried over either SONET/SDH or photonic networks. As discussed in subsequent sections, Ethernet service requests require some service-specific parameters: priority class, VLAN Id/Tag, and traffic aggregation parameters.

Also gaining ground in the industry are Storage Area Network (SAN) services. ESCON and FICON are proprietary versions of the service, while Fiber Channel is the standard alternative.
As discussed in subsequent sections, Fiber Channel service may require a latency parameter, since the protocol between the service clients and the server may be dependent on the transmission delays (the service is sensitive to delays in the range of hundreds of microseconds). As is the case with Ethernet services, SAN services may be carried over either SONET/SDH (using GFP mapping) or photonic networks. Currently SAN services require only point-to-point connections, but it is envisioned that in the future they may also require multicast connections.

6.4 Levels of Transparency

Bitstream connections are framing aware - the exact signal framing is known or needs to be negotiated between the network operator and the user. However, there may be multiple levels of transparency for individual framing types. Current transport networks are mostly based on SONET/SDH technology. Therefore, multiple levels have to be considered when defining specific optical services. The example below shows multiple levels of transparency applicable to SONET/SDH transport.

- SONET line and section OH (SDH multiplex and regenerator section OH) are normally terminated and a large set of parameters can be monitored by the network.
- Line and section OH are carried transparently.
- Non-SONET/SDH transparent bit stream.

6.5 Optical Connection Granularity

The service granularity is determined by the specific technology, framing and bit rate of the physical interface between the ONE and the user edge device, and by the capabilities of the ONE. The control plane needs to support signaling and routing for all the services supported by the ONE. Connection granularity is defined by a combination of framing (e.g., SONET or SDH) and bandwidth of the signal carried over the network for the user. The connection and associated properties may define the physical characteristics of the optical connection. However, the consumable attribute is bandwidth.
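The framing-plus-bandwidth view of granularity, with bandwidth as the consumable attribute, can be sketched as follows (the table of rates is approximate and purely illustrative, as are the names used):

```python
# Approximate payload rates in Mb/s for two illustrative granularities.
GRANULARITY_MBPS = {
    "VC-3/STS-1-SPE": 48.4,    # approximate value, for illustration only
    "VC-4/STS-3c-SPE": 149.8,  # approximate value, for illustration only
}

def consume_bandwidth(framing: str, free_mbps: float):
    """Return the remaining free bandwidth after dedicating one unit
    of the requested granularity, or None if the framing is unknown
    or insufficient bandwidth remains."""
    needed = GRANULARITY_MBPS.get(framing)
    if needed is None or needed > free_mbps:
        return None
    return free_mbps - needed
```

The point of the sketch is that the framing identifies the granularity unit, while admission and accounting operate on bandwidth.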
In general, there should not be a one-to-one correspondence imposed between the granularity of the service provided and the maximum capacity of the interface to the user.

Requirement 21. The SDH and SONET connection granularity, shown in the table below, shall be supported by the control plane. Any specific NE's control plane implementation needs to support only the subset consistent with its hardware.

Editor's Note: An OTN table for service granularity will be added.

SDH name  SONET name    Transported signal
--------  ------------  ------------------------------------
RS64      STS-192       STM-64 (STS-192) signal without
                        termination of any OH.
RS16      STS-48        STM-16 (STS-48) signal without
                        termination of any OH.
MS64      STS-192       STM-64 (STS-192); termination of
                        RSOH (section OH) possible.
MS16      STS-48        STM-16 (STS-48); termination of
                        RSOH (section OH) possible.
VC-4-64c  STS-192c-SPE  VC-4-64c (STS-192c-SPE);
                        termination of RSOH (section OH),
                        MSOH (line OH) and VC-4-64c TCM OH
                        possible.
VC-4-16c  STS-48c-SPE   VC-4-16c (STS-48c-SPE);
                        termination of RSOH (section OH),
                        MSOH (line OH) and VC-4-16c TCM OH
                        possible.
VC-4-4c   STS-12c-SPE   VC-4-4c (STS-12c-SPE); termination
                        of RSOH (section OH), MSOH (line
                        OH) and VC-4-4c TCM OH possible.
VC-4      STS-3c-SPE    VC-4 (STS-3c-SPE); termination of
                        RSOH (section OH), MSOH (line OH)
                        and VC-4 TCM OH possible.
VC-3      STS-1-SPE     VC-3 (STS-1-SPE); termination of
                        RSOH (section OH), MSOH (line OH)
                        and VC-3 TCM OH possible.
                        Note: In SDH it could be a higher
                        order or lower order VC-3; this is
                        identified by the sub-addressing
                        scheme. In case of a lower order
                        VC-3 the higher order VC-4 OH can
                        be terminated.
VC-2      VT6-SPE       VC-2 (VT6-SPE); termination of
                        RSOH (section OH), MSOH (line OH),
                        higher order VC-3/4 (STS-1-SPE) OH
                        and VC-2 TCM OH possible.
-         VT3-SPE       VT3-SPE; termination of section
                        OH, line OH, higher order STS-1-
                        SPE OH and VT3-SPE TCM OH possible.
VC-12     VT2-SPE       VC-12 (VT2-SPE); termination of
                        RSOH (section OH), MSOH (line OH),
                        higher order VC-3/4 (STS-1-SPE) OH
                        and VC-12 TCM OH possible.
VC-11     VT1.5-SPE     VC-11 (VT1.5-SPE); termination of
                        RSOH (section OH), MSOH (line OH),
                        higher order VC-3/4 (STS-1-SPE) OH
                        and VC-11 TCM OH possible.

Requirement 22. In addition, 1 Gb and 10 Gb granularity shall be supported for 1 Gb/s and 10 Gb/s (WAN mode) Ethernet framing types, if implemented in the hardware.

Requirement 23. For SAN services the following interfaces have been defined and shall be supported by the control plane if the given interfaces are available on the equipment:

- FC-12
- FC-50
- FC-100
- FC-200

In addition, extensions of the intelligent optical network functionality towards the edges of the network in support of sub-rate interfaces (as low as 1.5 Mb/s) will require support of VT/TU granularity.

Requirement 24. Therefore, sub-rate extensions in ONEs supporting sub-rate fabric granularity shall support VT-x/TU-1n granularity down to VT1.5/TU-11, consistent with the hardware.

Requirement 25. The connection types supported by the control plane shall be consistent with the service granularity and interface types supported by the ONE. The control plane and its associated protocols should be extensible to support new services as needed.

Requirement 26. Encoding of service types in the protocols used shall be such that new service types can be added by adding new codepoint values or objects.

Note: Additional attributes may be required to ensure proper connectivity between endpoints.

6.6 Other Service Parameters and Requirements

6.6.1 Classes of Service

We use "service level" to describe priority-related characteristics of connections, such as holding priority, set-up priority, or restoration priority.
The intent currently is to allow each carrier to define the actual service level in terms of priority, protection, and restoration options. Therefore, mapping of individual service levels to a specific set of priorities will be determined by individual carriers.

Requirement 27. Multiple service level options shall be supported, and the user shall have the option of selecting over the UNI a service level for an individual connection.

However, in order for the network to support multiple grades of restoration, the control plane must identify, assign, and track multiple protection and restoration options.

Requirement 28. Therefore, the control plane shall map individual service classes into specific protection and/or restoration options.

Specific protection and restoration options are discussed in Section 10. However, it should be noted that while high grade services may require allocation of protection or restoration facilities, there may be an application for a low grade of service for which pre-emptable facilities may be used. Individual carriers will select appropriate options for protection and/or restoration in support of their specific network plans.

6.6.2 Connection Latency

Connection latency is a parameter required for support of Fiber Channel services. Connection latency is dependent on the circuit length, and as such, for these services it is essential that shortest path algorithms are used and that end-to-end latency is verified before acknowledging circuit availability.

Editor's Note: more detail may be required here.

6.6.3 Diverse Routing Attributes

The ability to route service paths diversely is a highly desirable feature. Diverse routing is one of the connection parameters and is specified at the time of the connection creation. The following provides a basic set of requirements for diverse routing support.
- Diversity compromises between two links being used for routing should be defined in terms of Shared Risk Link Groups (SRLGs - see [draft-chaudhuri-ip-olxc-control-00.txt]), a group of links which share some resource, such as a specific sequence of conduits or a specific office. An SRLG is a relationship between the links that should be characterized by two parameters:

- Type of Compromise: Examples would be shared fiber cable, shared conduit, shared right-of-way (ROW), shared link on an optical ring, shared office with no power sharing, etc.

- Extent of Compromise: For compromised outside plant, this would be the length of the sharing.

Requirement 29. The control plane routing algorithms shall be able to route a single demand diversely from N previously routed demands, where diversity would be defined to mean that no more than K demands (previously routed plus the new demand) should fail in the event of a single covered failure.

7. Optical Service Provider Requirements

7.1 Access Methods to Optical Networks

Multiple access methods shall be supported:

- Cross-office access (user NE co-located with ONE): In this scenario the user edge device resides in the same office as the ONE and has one or more physical connections to the ONE. Some of these access connections may be in use, while others may be idle pending a new connection request.

- Direct remote access: In this scenario the user edge device is remotely located from the ONE and has inter-location connections to the ONE over multiple fiber pairs or via a DWDM system. Some of these connections may be in use, while others may be idle pending a new connection request.

- Remote access via access sub-network: In this scenario remote user edge devices are connected to the ONE via a multiplexing/distribution sub-network. Several levels of multiplexing may be assumed in this case.
This scenario is applicable to metro/access subnetworks carrying signals from multiple users, out of which only a subset have connectivity to the ONE.

Requirement 30. All access methods must be supported.

7.1.1 Dual Homing

Dual homing is a special case of the access network. Dual homing may take different flavors, and as such may affect interface designs in more than one way:

- A client device may be dual homed on the same subnetwork
- A client device may be dual homed on different subnetworks within the same administrative domain (and the same domain as the core subnetwork)
- A client device may be dual homed on different subnetworks within the same administrative domain (but a different domain from the core subnetwork)
- A client device may be dual homed on different subnetworks of different administrative domains
- A metro subnetwork may be dual homed on the same core subnetwork, within the same administrative domain
- A metro subnetwork may be dual homed on the same core subnetwork, of a different administrative domain
- A metro network may be dual homed to separate core subnetworks, of different administrative domains

The different flavors of dual homing will have great impact on admission control, reachability information exchanges, authentication, and neighbor and service discovery across the interface.

Requirement 31. Dual homing must be supported.

7.2 Bearer Interface Types

Requirement 32. All the bearer interfaces implemented in the ONE shall be supported by the control plane and associated signaling protocols.
The following interface types shall be supported by the signaling protocol:

- SDH
- SONET
- 1 Gb Ethernet, 10 Gb Ethernet (WAN mode)
- 10 Gb Ethernet (LAN mode)
- FC-N (N = 12, 50, 100, or 200) for Fiber Channel services
- OTN (G.709)
- PDH
- Transparent optical

7.3 Names and Address Management

In this section, addressing refers to optical-layer addressing: an identifier required for routing and signaling protocols within the optical network. Identification used by other logical entities outside the optical network control plane (such as higher layer services addressing schemes or a management plane addressing scheme) may be used as naming schemes by the optical network.

Recognizing that multiple types of higher layer services need to be supported by the optical network, multiple user edge device naming schemes must be supported, including at the minimum IP and NSAP naming schemes. The control plane shall use the higher layer service address as a name rather than as a routable address. The control plane must know what internal addressing scheme is used within the control plane domain.

Optical layer addresses shall be provisionable for each connection point managed by the control plane. Dynamic address assignment schemes are desirable in the control plane; however, in the event the assignment is not dynamic, connection point addresses need to be configurable from the management plane. In either case, the management system must be able to query the currently assigned value.

While IP-centric services are considered by many as one of the drivers for optical network services, it is also widely recognized that the optical network will be used in support of a large array of both data and voice services.
In order to achieve real-time provisioning for all services supported by the optical network while minimizing OSS development by carriers, it is essential for the network to support a UNI definition that does not exclude non-IP services.

Requirement 33. For this reason, multiple naming schemes shall be supported to allow network intelligence to grow towards the edges.

One example of naming is the use of physical entity naming. Carrier network elements identify individual ports by their location using a "CO/NE/bay/shelf/slot/port" naming scheme. Similarly, facilities are identified by "route id/fiber/wavelength/timeslot". Mapping of physical entity naming to optical network addressing shall be supported. Name-to-address translation should be supported, similar to DNS.

To realize fast provisioning and bandwidth-on-demand services in response to router requests, it is essential to support IP naming.

Requirement 34. Mapping of higher layer user IP naming to Optical Network Addressing shall be supported.

European carriers use NSAP naming for private lines, and many US data-centric applications, including ATM-based services, also use NSAP addresses. As such, it is important that NSAP naming be supported.

Requirement 35. Mapping of higher layer NSAP naming to Optical Network Addressing shall be supported.

Requirement 36. Listed below are additional Optical Network Address (ONA) requirements:

1) There shall be at least one globally unique address associated with each user device. A user device may have one or more ports connected to the network.

2) The address space shall support connection management across multiple networks, both within one administrative domain and across multiple administrative domains.

3) Address hierarchies shall be supported.

4) Address aggregation and summarization shall be supported. (This is actually an NNI requirement.)
5) Dual homing shall allow, but not require, the use of multiple addresses, whether within the same administrative domain or across multiple administrative domains.

6) An international body is needed to administer the address space. Note that this need is independent of what addressing scheme is used, and it concerns both the user and the network operator communities.

7) The size of the Optical Network Address shall be sufficient to avoid address exhaustion within the next 50 years. The address space shall scale up to a large base of customers and to a large number of operators.

8) Internal switch addresses shall not be derivable from ONAs and shall not be advertised to the customer.

9) The ONA shall not imply network characteristics (port numbers, port granularity, etc.).

10) ONA reachability deals with connectivity and not with the user device being powered up (reachability updates are triggered by registration and deregistration, not by client device reboots; name registration persists for as long as the user retains the same ONA - until de-registration).

11) ONAs shall be independent of user names, higher layer services (i.e., they should support IP, ATM, PL, etc.) and optical network internal routing addresses. User names are opaque to the optical network. User equipment and other optical carriers have no knowledge of optical network internal routing addresses, including port information.

12) The client (user) name should not make assumptions on what capabilities are offered by the server (service provider) name, and thus the semantics of the two name spaces should be separate and distinct. This does not place any constraints on the syntax of client and server layer name spaces, or of the user and service provider name spaces (G.astn draft).

13) The addressing scheme shall not impede use of either the client-server or the peer model within an operator's network.
14) There should be a single standard, fixed space of addresses to which names will be mapped from a wide range of higher layer services.

7.3.1 Address Space Separation

Requirement 37. The control plane must support all types of client addressing.

Requirement 38. The control plane must use the client address as a name rather than as a routable address.

Requirement 39. The control plane must know what internal addressing scheme is used within the control plane domain.

7.3.2 Directory Services

Requirement 40. Directory services shall be supported to enable an operator to query the optical network for the optical network address of a specified user.

Requirement 41. Address resolution and translation between the various user edge device names and the corresponding optical network addresses shall be supported.

Requirement 42. The UNI shall use the user naming schemes for connection requests.

7.4 Link Identification

Optical devices might have thousands of incoming and outgoing connections. This is a concern when trying to provide globally unique addresses to all optical nodes in an optical network.

Requirement 43. The control plane should be able to address NE connection points with addresses that are locally defined.

Requirement 44. The control plane should be able to advertise and signal for locally defined and non-unique addresses that have only local significance. This would allow for re-use of the addressing space.

One issue is providing addresses for the optical nodes or devices that form the ASON/ASTN; the other is providing addresses for the incoming and outgoing connections/ports within each optical node/device. The first issue is not a problem, since the optical devices/nodes can use the standard IP or NSAP address space. The second can be solved by providing a locally defined address space for the ports/connections that can be re-used in other optical nodes within the domain.
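The directory service of Requirements 40-42, combined with locally significant port addresses in the sense of Requirements 43-44, can be sketched as follows (the class and method names, and the idea of keying ports by node, are illustrative assumptions of this sketch):

```python
class OpticalDirectory:
    """DNS-like directory mapping client names (e.g., IP or NSAP
    names) to optical network addresses.  Names are treated as opaque
    keys, never as routable addresses; port identifiers are locally
    significant and may be re-used across nodes."""

    def __init__(self):
        self._names = {}   # client name -> optical network address
        self._ports = {}   # (node, local port id) -> client name

    def register(self, client_name, optical_address, node, local_port):
        # Local port ids only need to be unique within a single node,
        # so the same id can appear on different nodes.
        self._names[client_name] = optical_address
        self._ports[(node, local_port)] = client_name

    def resolve(self, client_name):
        """Name-to-address resolution per Requirement 41; returns
        None when the name is not registered."""
        return self._names.get(client_name)
```

Note that resolution goes from an opaque client name to an optical network address only; internal switch addresses never appear in the directory, consistent with ONA requirement 8 above.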
Thus, the optical nodes within a domain, or multiple domains in the network, can communicate with each other using a standard address space like IP or NSAP, while the switching and forwarding within each optical node can be based on locally defined addresses.

7.5 Policy-Based Service Management Framework

The IPO service must be supported by a robust policy-based management system to be able to make important decisions. Examples of policy decisions include:

- What types of connections can be set up for a given UNI?
- What information can be shared and what information must be restricted in automatic discovery functions?
- What are the security policies over signaling interfaces?

Requirement 45. Service and network policies related to configuration and provisioning, admission control, and support of Service Level Agreements (SLAs) must be flexible, and at the same time simple and scalable.

Requirement 46. The policy-based management framework must be based on standards-based policy systems (e.g., IETF COPS).

Requirement 47. In addition, the IPO service management system must support and be backwards compatible with legacy service management systems.

7.5.1 Admission Control

The connection admission functionality required must include authentication of the client, verification of services, and control of access to network resources.

Requirement 48. The policy management system must determine what kinds of connections can be set up for a given UNI.

Connection Admission Control (CAC) is required for authentication of users (security), verification of connection service level parameters, and for controlling access to network resources. The CAC policy should determine if there are adequate network resources available within the carrier to support each new connection. CAC policies are outside the scope of standardization.

Requirement 49.
When a connection request is received by the control plane, it is necessary to ensure that the resources exist within the optical transport network to establish the connection.

Requirement 50. In addition to the above, the control plane elements need the ability to rate limit (or pace) call set-up attempts into the network, in order to prevent overload of the control plane processors. For SPC-type connections this might mean that the set-up message would be slowed or buffered in order to handle the current load.

Another aspect of admission control is security.

Requirement 51. The policy-based management system must be able to authenticate and authorize a client requesting the given service. The management system must also be able to administer and maintain various security policies over signaling interfaces.

7.5.2 SLA Support

Requirement 52. The service management system should employ features to ensure client SLAs.

In addition to setting up connections based on resource availability to meet SLAs, the management system must periodically monitor connections for the maintenance of SLAs. Complex SLAs, such as time-of-day or multiple-service-class based SLAs, should also be satisfied. In order to do this, the policy-based service management system should support automated SLA monitoring systems that may be embedded in the management system or may be separate entities. Mechanisms to report SLA violations, or a customer repeatedly exceeding its SLA, should be supported by the SLA monitoring system. Other off-line mechanisms to forecast network traffic growth and congestion, via simulation and modeling systems, may be provided to aid in efficient SLA management.

Another key aspect of SLA management is SLA translation.

Requirement 53.
In particular, policy-based Class of Service management schemes that accurately translate customer SLAs into parameters that the underlying mechanisms and protocols in the optical transport network can understand must be supported. Consistent interpretation and satisfaction of SLAs is especially important when an IPO service spans multiple domains or service providers.

7.6 Inter-Carrier Connectivity

Inter-carrier connectivity has specific implications for the admission control and SLA support aspects of the policy-based service management system. Multiple peering interfaces may be used between two carriers, and any given carrier is likely to peer with multiple other carriers. These peering interfaces must support all of the functions defined in section 9, although each of these functions has a special flavor when applied to this interface.

Carriers will not allow other carriers control over their network resources, or visibility of their topology or resources. Therefore, topology and resource discovery should not be supported between carriers. There may of course be instances where a high degree of trust between carriers allows topology and resource discovery, but these would be rare exceptions.

Requirement 54. Inter-carrier connectivity shall be based on the E-NNI.

Providing connectivity between clients connected to different carriers requires that client reachability information be exchanged between carriers. Additional information regarding network peering points, and summarized network topology and resource information, will also have to be conveyed beyond the bounds of a single carrier. This information is required to make route selections for connections traversing multiple carriers. Given that detailed topology and resource information is not available outside a carrier's trust boundary, routing of connections over multiple carriers will involve selection of the autonomous systems (ASs) traversed.
This can be defined using a series of peering points. More detailed route selection is then performed on a per-carrier basis, as the signaling requests are received at each carrier's peering points. Detailed connection routing information should not be conveyed across the carrier trust boundary. CAC, as described above, is necessary at each trust interface, including those between carriers (see Section 11.2 for security considerations).

As with dual homing, it is possible to have inter-carrier connectivity over multiple diverse routes. These connectivity models support multi-homing.

Editor's Note: further discussion on this will be added in a later revision.

7.7 Multiple Hierarchies

Transport networks are built in a tiered, hierarchical architecture. Also, by applying control plane support to service and facilities management, separate and distinct network layers may need to be supported across the same inter-domain interface. Furthermore, for large networks, it may be necessary to support multiple levels of routing domains.

Requirement 55. Multi-level hierarchy must be supported.

Editor's Note: more details will be added as required.

Network layer hierarchies:
- Services (IP, SAN, Ethernet)
- Transport: SONET/SDH/Ethernet
- DWDM, Optics

Address space hierarchies

Geographical hierarchies

Functional hierarchies

Network topology hierarchies:
- Access, metro, inter-city, long haul - as routing areas. Any one large routing area may need to be decomposed into sub-areas.

8.
Control Plane Functional Requirements for Optical Services

8.1 Control Plane Capabilities and Functions

8.1.1 Network Control Capabilities

The following capabilities are required in the network control plane to successfully deliver automated provisioning:

- Neighbor discovery
- Address assignment
- Connection topology discovery
- Address resolution
- Reachability information dissemination
- Connection management

These capabilities may be supported by a combination of functions across the control and management planes.

8.1.2 Control Plane Functions

The following are essential functions needed to support network control capabilities:

- Signaling
- Routing
- Resource and service discovery

Signaling is the process of control message exchange, using a well-defined signaling protocol, between controlling functional entities connected through a specified communication channel. It is often used for dynamic connection set-up across a network. Signaling is used to disseminate information between network entities in support of all network control capabilities.

Routing is a distributed networking process for the dynamic dissemination and propagation of network information among all the routing entities, based on a well-defined routing protocol. It enables a routing entity to compute the best path from one point to another.

Resource and service discovery is the automatic process by which connected network devices, using a resource/service discovery protocol, determine the available services and identify connection state information.

Requirement 56. The general requirements for the control plane functions to support optical networking include:

1. The control plane must have the capability to establish, tear down and maintain end-to-end connections.

2.
The control plane must have the capability to establish, tear down and maintain hop-by-hop connection segments between two end-points.

3. The control plane must have the capability to support traffic-engineering requirements, including resource discovery and dissemination, constraint-based routing and path computation.

4. The control plane must have the capability to support reachability information dissemination.

5. The control plane shall support network status or action result code responses to any requests over the control interfaces.

6. The control plane shall support resource allocation on both UNI and NNI.

7. Upon successful connection teardown, all resources associated with the connection shall become available for new requests.

8. The control plane shall ensure that there will be no unused, frozen network resources.

9. The control plane shall ensure periodic or on-demand clean-up of network resources.

10. The control plane shall support management plane requests for connection attribute/status queries.

11. The control plane must have the capability to support various protection and restoration schemes for optical channel establishment.

12. Control plane failures shall not affect active connections.

13. The control plane shall be able to trigger restoration based on alarms or other indications of failure.

8.2 Signaling Network

The signaling network consists of a set of signaling channels that interconnect the nodes within the control plane. Therefore, the signaling network must be accessible by each of the communicating nodes (e.g., OXCs).

Requirement 57. The signaling network must terminate at each of the communicating nodes.

Requirement 58. The signaling network shall not be assumed to have the same physical connectivity as the data plane, nor shall the data plane and control plane traffic be assumed to be congruently routed.
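Items 7 through 9 above describe resource-release behavior that can be illustrated with a small sketch. The class and resource names below are hypothetical and serve only to show the intent: released resources become reusable immediately, and a periodic sweep reclaims allocations whose owning connection no longer exists.

```python
import time

class ResourcePool:
    """Hypothetical per-node resource tracker illustrating items 7-9."""

    def __init__(self, resources):
        self.free = set(resources)
        self.in_use = {}  # resource -> (connection_id, allocation_time)

    def allocate(self, resource, connection_id):
        if resource not in self.free:
            raise ValueError("resource unavailable")
        self.free.remove(resource)
        self.in_use[resource] = (connection_id, time.monotonic())

    def release_connection(self, connection_id):
        # Item 7: on teardown, every resource of the connection is freed.
        for res, (conn, _) in list(self.in_use.items()):
            if conn == connection_id:
                del self.in_use[res]
                self.free.add(res)

    def sweep(self, active_connections):
        # Items 8-9: periodic clean-up so no resource stays frozen by a
        # connection the control plane no longer knows about.
        for res, (conn, _) in list(self.in_use.items()):
            if conn not in active_connections:
                del self.in_use[res]
                self.free.add(res)

pool = ResourcePool({"och-1", "och-2"})
pool.allocate("och-1", "conn-A")
pool.release_connection("conn-A")
assert "och-1" in pool.free           # item 7: immediately reusable
pool.allocate("och-2", "conn-B")
pool.sweep(active_connections=set())  # conn-B vanished without teardown
assert "och-2" in pool.free           # items 8-9: no frozen resources
```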
A signaling channel is the communication path for transporting signaling messages between network nodes, and over the UNI (i.e., between the UNI entity on the user side (UNI-C) and the UNI entity on the network side (UNI-N)). There are three different types of signaling method, depending on the way the signaling channel is constructed:

. In-band signaling: The signaling messages are carried over a logical communication channel embedded in the data-carrying optical link or channel. For example, using the overhead bytes in SONET data framing as a logical communication channel falls into the in-band signaling category.

. In-fiber, out-of-band signaling: The signaling messages are carried over a dedicated communication channel separate from the optical data-bearing channels, but within the same fiber. For example, a dedicated wavelength or TDM channel may be used within the same fiber as the data channels.

. Out-of-fiber signaling: The signaling messages are carried over a dedicated communication channel or path within different fibers from those used by the optical data-bearing channels. For example, dedicated optical fiber links and communication paths via a separate and independent IP-based network infrastructure are both classified as out-of-fiber signaling.

In-band signaling is particularly important over a UNI interface, where there are relatively few data channels. Proxy signaling is also important over the UNI interface, as it is useful for supporting users unable to signal to the optical network via a direct communication channel. In this situation, a third-party system containing the UNI-C entity initiates and processes the information exchange on behalf of the user device. The UNI-C entities in this case reside outside of the user device, in separate signaling systems.
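The three signaling channel alternatives above reduce to two properties of the channel's construction: whether it shares the fiber with the data channels, and whether it is embedded in the data channel itself. The following sketch captures that classification; the descriptor names are illustrative, not drawn from any protocol specification.

```python
from enum import Enum

class SignalingType(Enum):
    IN_BAND = "in-band"                    # embedded in the data channel (e.g., SONET overhead bytes)
    IN_FIBER_OUT_OF_BAND = "in-fiber/oob"  # dedicated wavelength/TDM channel, same fiber
    OUT_OF_FIBER = "out-of-fiber"          # separate fiber, or independent IP network

def classify(same_fiber_as_data, embedded_in_data_channel):
    """Classify a signaling channel from the way it is constructed."""
    if embedded_in_data_channel:
        return SignalingType.IN_BAND
    if same_fiber_as_data:
        return SignalingType.IN_FIBER_OUT_OF_BAND
    return SignalingType.OUT_OF_FIBER

# A dedicated wavelength in the same fiber as the data channels:
assert classify(same_fiber_as_data=True,
                embedded_in_data_channel=False) is SignalingType.IN_FIBER_OUT_OF_BAND
```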
In-fiber, out-of-band and out-of-fiber signaling channel alternatives are particularly important for NNI interfaces, which generally have significant numbers of channels per link. Signaling messages relating to all of the different channels can then be aggregated over a single signaling channel, or a small number of signaling channels.

The signaling network forms the basis of the transport network control plane. To achieve reliable signaling, the control plane needs to provide reliable transfer of signaling messages, its own OAM mechanisms, and flow control mechanisms for restricting the transmission of signaling packets where appropriate.

Requirement 59. The signaling protocol shall support reliable message transfer.

Requirement 60. The signaling network shall have its own OAM mechanisms.

Requirement 61. The signaling protocol shall support congestion control mechanisms.

In addition, the signaling network should support message priorities. Message prioritization allows time-critical messages, such as those used for restoration, to take priority over other messages, such as connection signaling messages and topology and resource discovery messages.

Requirement 62. The signaling network should support message priorities.

The signaling network must be highly scalable, with minimal performance degradation as the number of nodes and node sizes increase.

Requirement 63. The signaling network shall be highly scalable.

The signaling network must also be highly reliable, implementing mechanisms for failure recovery. Furthermore, failure of signaling links or of the signaling software must not impact established connections or cause partially established connections, nor should it impact any elements of the management plane.

Requirement 64. The signaling network shall be highly reliable and implement failure recovery.

Requirement 65.
Control channel and signaling software failures shall not cause disruptions to established connections within the data plane, and signaling messages affected by control plane outages shall not result in partially established connections remaining within the network.

Requirement 66. Control channel and signaling software failures shall not cause management plane failures.

Security is also a crucial issue for the signaling network. Transport networks are generally expected to carry large traffic loads and high-bandwidth connections. The consequence is significant economic impact should attackers disrupt network operation, for example using denial of service attacks such as those recently seen within the Internet.

Requirement 67. The signaling network shall be secure, blocking all unauthorized access.

Requirement 68. The signaling network topology and signaling node addresses shall not be advertised outside a carrier's domain of trust.

8.3 Control Plane Interface to Data Plane

Where the control plane and data plane are provided by different suppliers, this interface needs to be standardized. Requirements for a standard control-data plane interface are under study. The control plane interface to the data plane is outside the scope of this document.

8.4 Control Plane Interface to Management Plane

The control plane is considered a managed entity within a network. Therefore, it is subject to management requirements just as other managed entities in the network are.

8.4.1 Allocation of resources

The management plane is responsible for identifying the network resources that the control plane may use to carry out its control functions. Additional resources may be allocated, or existing resources deallocated, over time.

Requirement 69.
Resources shall be able to be allocated to the control plane for control plane functions, including resources involved in setting up and tearing down calls, and control plane specific resources. Resources allocated to the control plane for the purpose of setting up and tearing down calls include access groups (sets of access points) and connection point groups (sets of connection points). Resources allocated to the control plane for the operation of the control plane itself may include protected and protecting control channels.

Requirement 70. Resources allocated to the control plane by the management plane shall be able to be de-allocated from the control plane on management plane request.

Requirement 71. If resources are supporting an active connection and a request is made to de-allocate them from the control plane, the control plane shall reject the request. The management plane must either wait until the resources are no longer in use, or tear down the connection, before the resources can be de-allocated from the control plane.

Management plane failures shall not affect active connections.

Requirement 72. Management plane failures shall not affect the normal operation of a configured and operational control plane or data plane.

8.4.2 Soft Permanent Connections (Point-and-click provisioning)

In the case of SPCs, the management plane requests the control plane to set up/tear down a connection, rather than the request coming over a UNI.

Requirement 73. The management plane shall be able to query on demand the status of a connection request.

Requirement 74. The control plane shall report to the management plane the success/failure of a connection request.

Requirement 75. Upon a connection request failure, the control plane shall report to the management plane a cause code identifying the reason for the failure.

Requirement 76. In a connection set-up request, the management plane shall be able to specify the service class required for the connection.
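A minimal sketch of the SPC request flow described in Requirements 73 through 76 follows. The cause codes, class names and field names are illustrative placeholders, not taken from any standard.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SPCRequest:
    connection_id: str
    service_class: str             # Req 76: management plane specifies the class
    status: str = "pending"        # queried on demand (Req 73)
    cause_code: Optional[str] = None  # set on failure (Req 75)

class ControlPlane:
    """Illustrative control plane accepting SPC set-up requests from the
    management plane instead of over a UNI."""

    def __init__(self, known_classes):
        self.known_classes = known_classes
        self.requests = {}

    def setup(self, request):
        self.requests[request.connection_id] = request
        if request.service_class not in self.known_classes:
            request.status = "failed"                 # Req 74: failure reported
            request.cause_code = "UNSUPPORTED_CLASS"  # Req 75: cause identified
        else:
            request.status = "established"            # Req 74: success reported
        return request.status

    def query(self, connection_id):
        # Req 73: on-demand status query by the management plane.
        return self.requests[connection_id].status

cp = ControlPlane(known_classes={"gold", "silver"})
cp.setup(SPCRequest("spc-1", service_class="gold"))
assert cp.query("spc-1") == "established"
bad = SPCRequest("spc-2", service_class="platinum")
cp.setup(bad)
assert cp.query("spc-2") == "failed" and bad.cause_code == "UNSUPPORTED_CLASS"
```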
8.4.3 Resource Contention Resolution

Since resources are allocated to the control plane for its use, there should be no contention between the management plane and the control plane for connection set-up: only the control plane can establish connections using allocated resources. In general, however, the management plane shall have authority over the control plane.

Requirement 77. The control plane shall not assume authority over management plane provisioning functions.

In the case of fault management, both the management plane and the control plane need fault information at the same priority.

Requirement 78. The control plane shall not interfere with the speed or priority at which the management plane would receive alarm information from the NE or the transport plane in the absence of a control plane.

The control plane needs fault information in order to perform its restoration function (in the event that the control plane is providing this function). However, the control plane needs less granular information than the management plane does. For example, the control plane only needs to know whether a resource is good or bad. The management plane additionally needs to know whether a resource is degraded or failed, the reason for the failure, the time the failure occurred, and so on.

Requirement 79. Accounting information shall be provided by the control plane to the management plane. Again, there is no contention. This is addressed in the billing section. [Open issue: what happens to accounting data histories when a resource is moved from the control plane to the management plane?]

Performance management shall be a management plane function only. Again, there is no contention between the management plane and the control plane.

Requirement 80. The control plane shall not assume authority over management plane performance management functions.

8.4.4 MIBs

Requirement 81.
A standards-based MIB shall be used for control plane management.

Requirement 82. The standards-based MIB definition shall support all management functionality required to manage the control plane.

Requirement 83. The standards-based MIB definition should support all optional management functionality desired to manage the control plane.

8.4.5 Alarms

The control plane is not responsible for monitoring and reporting problems in the transport plane, or in the NE, that are independent of the control plane. It is responsible, however, for monitoring and reporting control plane alarms. The requirements in this section apply to the monitoring and reporting of control plane alarms.

Requirement 84. The control plane shall not lose alarms. Alarms lost due to transmission errors between the control plane and the management plane shall be recoverable through management plane queries to the alarm notification log.

Requirement 85. Alarms must take precedence over all other message types for transmission to the management plane.

Requirement 86. Controls issued by the management plane must be able to interrupt an alarm stream coming from the control plane.

Requirement 87. The alarm cause shall be based on the probableCause list in M.3100.

Requirement 88. Detailed alarm information shall be included in the alarm notification, including: the location of the alarm, the time the alarm occurred, and the perceived severity of the alarm.

Requirement 89. The control plane shall send clear notifications for Critical, Major, and Minor alarms when the cleared condition is detected.

Requirement 90. The control plane shall support autonomous alarm reporting.

Requirement 91. The control plane shall support Alarm Reporting Control (see M.3100, Amendment 3).

Requirement 92. The control plane shall support the ability to configure and query the management plane applications to which autonomous alarm reports will be sent.
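A sketch of the alarm-log behavior behind Requirements 84, 88 and 89 follows. The field names and severity strings are illustrative; actual alarm causes would come from the M.3100 probableCause list.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Alarm:
    alarm_id: str
    location: str        # Req 88: where the alarm occurred
    time: float          # Req 88: when it occurred
    severity: str        # Req 88: perceived severity ("Critical"/"Major"/"Minor")
    cleared: bool = False

class AlarmLog:
    """Illustrative control plane alarm log: every notification is logged so
    the management plane can recover lost alarms (Req 84) and retrieve the
    currently active subset."""

    def __init__(self):
        self.log = []

    def raise_alarm(self, alarm):
        self.log.append(alarm)

    def clear(self, alarm_id):
        # Req 89: a clear notification when the cleared condition is detected.
        for a in self.log:
            if a.alarm_id == alarm_id:
                a.cleared = True

    def active(self, severity: Optional[str] = None):
        # All, or a severity-filtered subset, of the currently active alarms.
        return [a for a in self.log
                if not a.cleared and (severity is None or a.severity == severity)]

log = AlarmLog()
log.raise_alarm(Alarm("a1", location="node-7/cc-1", time=0.0, severity="Major"))
log.raise_alarm(Alarm("a2", location="node-9/cc-2", time=1.0, severity="Minor"))
log.clear("a2")
assert [a.alarm_id for a in log.active()] == ["a1"]
assert log.active(severity="Minor") == []  # a2 was cleared
assert len(log.log) == 2                   # cleared alarms stay queryable (Req 84)
```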
Requirement 93. The control plane shall support the ability to retrieve all, or a subset, of the currently active alarms.

Requirement 94. The control plane shall support alarm report logging.

Requirement 95. The control plane should support the ability to buffer alarm reports separately for each management plane application for which an alarm report is destined (see X.754, Enhanced Event Control Function).

Requirement 96. The control plane shall support the ability to cancel a request to retrieve all or a subset of the currently active alarms (see Q.821, Enhanced Current Alarm Summary Control).

Requirement 97. The control plane should support the ability to set/get the alarm severity assignment per object instance and per alarm.

Requirement 98. The control plane shall log autonomous alarm event reports/notifications.

Requirement 99. The control plane shall not report the symptoms of control plane problems as alarms (for example, an LOF condition shall not be reported when the problem is a supporting facility LOS).

8.4.6 Status/State

Requirement 100. The management plane shall be able to query the operational state of all control plane resources.

Requirement 101. In addition, the control plane shall provide a log of current-period and historical counts for call attempts and call blocks, and capacity data, for both UNI and NNI interfaces. The management plane shall be able to query current-period and historical logs.

8.4.7 Billing/Traffic and Network Engineering Support

Requirement 102. The control plane shall record usage per UNI and per link connection.

Requirement 103. Usage information shall be able to be queried by the management plane.

8.4.8 Policy Information

Requirement 104. In support of CAC, the management plane shall be able to configure multiple service classes, identify the protection and/or restoration allocations required for each service class, and then assign service classes on a per-UNI basis.
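The per-UNI service-class configuration of Requirement 104 can be sketched as a simple lookup. The class names, protection values and UNI identifier below are invented for illustration only.

```python
# Illustrative service-class table: each class names its protection and/or
# restoration allocation; classes are then assigned per UNI (Req 104).
SERVICE_CLASSES = {
    "gold":   {"protection": "1+1",  "restoration": True},
    "silver": {"protection": "none", "restoration": True},
    "bronze": {"protection": "none", "restoration": False},
}

UNI_CLASSES = {"uni-42": {"gold", "silver"}}  # classes admitted on this UNI

def admit(uni, requested_class):
    """CAC check: is the requested class configured, and allowed on this UNI?"""
    return (requested_class in SERVICE_CLASSES
            and requested_class in UNI_CLASSES.get(uni, set()))

assert admit("uni-42", "gold")
assert not admit("uni-42", "bronze")  # class exists but is not assigned to this UNI
```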
8.4.9 Control Plane Provisioning

Requirement 105. Topological information learned in the discovery process shall be able to be queried on demand by the management plane.

Requirement 106. The management plane shall be able to configure UNI and NNI protection groups.

Requirement 107. The management plane shall be able to prohibit the control plane from using, for new connection set-up requests, certain transport resources not currently carrying a connection. There are various reasons for the management plane to do this, including maintenance actions.

Requirement 108. The management plane shall be able to tear down connections established by the control plane, both gracefully and forcibly, on demand.

8.5 Control Plane Interconnection

The interconnection of the IP router (client) and optical control planes can be realized in a number of ways, depending on the required level of coupling. The control planes can be loosely or tightly coupled. Loose coupling is generally referred to as the overlay model, and tight coupling as the peer model. Additionally, there is the augmented model, which sits between the other two but is more akin to the peer model. The model selected determines the following:

- The details of the topology, resource and reachability information advertised between the client and optical networks
- The level of control IP routers can exercise in selecting paths across the optical network

The next three sections discuss these models in more detail, and the last section describes the coupling requirements from a carrier's perspective.

8.5.1 Peer Model (I-NNI like model)

Under the peer model, the IP router clients act as peers of the optical transport network, such that a single routing protocol instance runs over both the IP and optical domains. In this regard, the optical network elements are treated just like any other router as far as the control plane is concerned.
The peer model, although not strictly an internal NNI, behaves like an I-NNI in the sense that resource and topology information is shared. Presumably a common IGP such as OSPF or IS-IS, with appropriate extensions, will be used to distribute topology information. One tacit assumption here is that a common addressing scheme will also be used for the optical and IP networks. A common address space can be trivially realized by using IP addresses in both the IP and optical domains; the optical network elements then become IP-addressable entities.

The obvious advantage of the peer model is the seamless interconnection between the client and optical transport networks. The tradeoff is the tight integration required, and the optical-specific routing information that must be known to the IP clients.

The discussion above has focused on the client-to-optical control plane interconnection. It applies equally well to interconnecting two optical control planes.

8.5.2 Overlay (UNI-like model)

Under the overlay model, the IP client routing, topology distribution, and signaling protocols are independent of the routing, topology distribution, and signaling protocols at the optical layer. This model is conceptually similar to the classical IP-over-ATM model, but applied directly to an optical sub-network. Though the overlay model dictates that the client and optical networks are independent, the optical network may still re-use IP layer protocols to perform its routing and signaling functions.

In addition to the protocols being independent, the addressing schemes used in the client and optical networks must be independent in the overlay model. That is, the use of IP layer addressing in the clients must not place any specific requirement upon the addressing used within the optical control plane.
The overlay model would provide a UNI to the client networks, through which the clients could request the addition, deletion or modification of optical connections. The optical network would additionally provide reachability information to the clients, but no topology information would be provided across the UNI.

8.5.3 Augmented model (E-NNI like model)

Under the augmented model, there are separate routing instances in the IP and optical domains, but information from one routing instance is passed through the other routing instance. For example, external IP addresses could be carried within the optical routing protocols to allow reachability information to be passed to IP clients. A typical implementation would use BGP between the IP client and the optical network. The augmented model, although not strictly an external NNI, behaves like an E-NNI in that there is limited sharing of information.

8.5.4 Carrier Control Plane Coupling Requirements

The appropriate level of coupling depends upon a number of factors, including:

- The variety of clients using the optical network
- The relationship between the client and optical networks
- The operating model of the carrier

Generally, in a carrier environment, more than just IP routers will be connected to the optical network; other examples of clients include ATM switches and SONET ADM equipment. This may drive the decision towards loose coupling, to prevent undue burdens upon non-IP-router clients. Loose coupling would also ensure that future clients are not hampered by legacy technologies.

Additionally, a carrier may, for business reasons, want a separation between the client and optical networks. For example, the ISP business unit may not want to be tightly coupled with the optical network business unit. Another reason for separation might simply be the politics that play out in a large carrier.
That is, it seems unlikely that the optical transport network could be forced to run the same set of protocols as the IP router networks. Moreover, forcing the same set of protocols on both networks ties their evolution directly together: the optical transport network protocols could not be upgraded without considering the impact on the IP router network (and vice versa).

Operating models also play a role in deciding the level of coupling. [Freeland] gives four main operating models envisioned for an optical transport network:

1. ISP owning all of its own infrastructure (i.e., including fiber and ducts to the customer premises)
2. ISP leasing some or all of its capacity from a third party
3. Carrier's carrier providing layer 1 services
4. Service provider offering multiple layer 1, 2, and 3 services over a common infrastructure

Although relatively few, if any, ISPs fall into category 1, it would seem the most likely of the four to use the peer model. The other operating models would more likely choose an overlay model. Most carriers fall into category 4 and thus would most likely choose an overlay model architecture.

In the context of the client and optical network control plane interconnection, this discussion leads to the conclusion that the overlay model is required and the other two models (peer and augmented) are optional.

Requirement 109. The overlay model (UNI-like model) shall be supported for client-to-optical control plane interconnection.

Requirement 110. Other models are optional for client-to-optical control plane interconnection.

Requirement 111. For optical-to-optical control plane interconnection, all three models shall be supported.

9. Requirements for Signaling, Routing and Discovery

9.1 Signaling Functions

Connection management signaling messages are used for connection establishment and deletion.
These signaling messages must be transported across UNIs, between nodes within a single carrier's domain, over I-NNIs, and over E-NNIs. A mixture of hop-by-hop routing, explicit/source routing and hierarchical routing will likely be used within future transport networks, so all three mechanisms must be supported by the control plane.

Using hop-by-hop message routing, each node within a network makes routing decisions based on the message destination and the local routing tables. However, achieving efficient load balancing and establishing diverse connections are impractical using hop-by-hop routing. Instead, explicit (or source) routing may be used to send signaling messages along a route calculated by the source. This route, described using a set of nodes/links, is carried within the signaling message and used in forwarding the message.

Finally, network topology information must not be conveyed outside a trust domain. Thus, hierarchical routing is required to support signaling across multiple domains. Each signaling message should contain a list of the domains traversed, and potentially details of the route within the domain currently being traversed. Signaling messages crossing trust boundaries must not contain information regarding the details of an internal network topology. This is particularly important when traversing E-UNIs and E-NNIs. Connection routes and identifiers encoded using topology information (e.g., node identifiers) must also not be conveyed over these boundaries.

9.1.1 Connection establishment

Connection establishment is achieved by sending signaling messages between the source and destination. If inadequate resources are encountered in establishing a connection, a negative acknowledgment shall be returned and any allocated resources shall be released. A positive acknowledgment shall be used to acknowledge successful establishment of a connection (including confirmation of successful cross-connection).
For connections requested over a UNI, a positive acknowledgment shall be used to inform both source and destination clients of when they may start transmitting data.

The transport network signaling shall be able to support both uni-directional and bi-directional connections. Contention may occur between two bi-directional connections, or between uni-directional and bi-directional connections. There shall be at least one attempt, and at most N attempts, at contention resolution before returning a negative acknowledgment, where N is a configurable parameter with a default value of 3.

9.1.2 Connection deletion

When a connection is no longer required, connectivity to the client shall be removed and network resources shall be released. Partially deleted connections are a serious concern. As a result, signaling network failures shall not result in partially deleted connections remaining in the network. An end-to-end deletion signaling message acknowledgment is required to avoid such situations.

Many signaling protocols use a single message pass to delete a connection. However, in all-optical networks, loss of light will propagate faster than the deletion message. Thus, downstream cross-connects would detect loss of light and potentially trigger protection or restoration. Such behavior is not acceptable. Instead, connection deletion in all-optical networks shall involve a signaling message sent in the forward direction that takes the connection out of service and de-allocates the resources, but does not remove the cross-connections. Upon receipt of this message, the last network node must respond by sending a message in the reverse direction to remove the cross-connect at each node.

Requirement 112. The following requirements are imposed on signaling:

- Hop-by-hop routing, explicit/source-based routing and hierarchical routing shall all be supported.
- A negative acknowledgment shall be returned if inadequate resources are encountered in establishing a connection, and allocated resources shall be released.
- A positive acknowledgment shall be returned when a connection has been successfully established.
- For connections requested over a UNI, a positive acknowledgment shall be used to inform both source and destination clients of when they may start transmitting data.
- Signaling shall be supported for both uni-directional and bi-directional connections.
- When contention occurs in establishing bi-directional connections, there shall be at least one and at most N attempts at contention resolution before returning a negative acknowledgment, where N is a configurable parameter with a default value of 3.
- Partially deleted connections shall not remain within the network.
- End-to-end acknowledgments shall be used for connection deletion requests.
- Connection deletion shall not result in either restoration or protection being invoked.
- Connection deletion shall at a minimum use a two-pass signaling process, removing the cross-connection only after the first signaling pass has completed.
- Signaling shall not progress through the network with unresolved label contention left behind.
- Acknowledgments of any requests shall not be sent until all necessary steps to ensure request fulfillment have been successful.
- Label contention resolution attempts shall not result in infinite loops.

Signaling for connection protection and restoration is addressed in a later section.

9.2 Routing Functions

9.2.1 General Description

Routing is an important component of the control plane. It includes neighbor discovery, reachability information propagation, network topology information dissemination, and service capability discovery.
The objective of neighbor discovery is to provide the information needed to identify the neighbor relationship and neighbor connectivity over each link. Neighbor discovery may be realized via manual configuration or via automatic identification using a protocol such as the Link Management Protocol (LMP). Neighbor discovery applies at the interface between a user network and the optical network, between network nodes within a network, and between networks.

In an optical network, each connection involves two user endpoints. When user endpoint A requests a connection to user endpoint B, the optical network needs the reachability information to select a path for the connection. If a user endpoint is unreachable, a connection request to that user endpoint shall be rejected.

Network topology information dissemination provides each node in the network with stable and consistent information about the carrier network, such that a single node is able to support constraint-based path selection.

Service capability discovery is strongly related to routing functions. Specific services of the optical network require specific network resource information. Routing functions support service capabilities.

9.2.2 I-UNI, E-UNI, I-NNI and E-NNI

There are four types of interfaces over which routing information dissemination may occur: I-UNI, E-UNI, I-NNI and E-NNI. Different types of interfaces impose different requirements and functionality due to their different trust relationships.

Due to business, geographical, technological and economic considerations, the global optical network is usually partitioned into several carrier autonomous systems (ASs). Inside each carrier AS, the optical network may be separated into several routing domains, each of which may or may not run the same routing protocol.

While the I-UNI assumes a trust relationship, the user network and the transport network form a client-server relationship.
Therefore, the benefits of dissemination of routing information from the transport network to the user network should be studied carefully. Sufficient, but only the necessary, information should be disseminated across the I-UNI.

Across the E-UNI, neighbor discovery, reachability information and service capability discovery are allowed to cross the interface, but any information related to network resources or topology shall not be exchanged.

Network topology and network resource information may be exchanged across the I-NNI. The routing protocol may exchange sufficient network topology and resource information.

Requirement 113. However, to support scalability requirements, only the information necessary for optimized path selection shall be exchanged.

Requirement 114. Over the E-NNI, only reachability information, next routing hop and service capability information should be exchanged. Any other network-related information shall not leak out to other networks. Policy-based routing should be applied to disseminate carrier-specific network information.

9.2.3 Requirements for routing information dissemination

Routing protocols must propagate the appropriate information efficiently to network nodes. Major concerns for routing protocol performance are scalability and stability. Scalability requires that the routing protocol performance not depend strongly on the scale of the network (e.g., the number of nodes, the number of links, the number of end users, etc.).

Requirement 115. The routing protocol design shall keep the network size effect as small as possible. Different scalability techniques should be considered.

Requirement 116. The routing protocol shall support hierarchical routing information dissemination, including topology information aggregation and summarization. This technique is widely used in conventional networks, such as OSPF routing for IP networks and PNNI for ATM networks.
However, the tradeoff between the number of hierarchies and the degree of network information accuracy should be considered carefully. Too much aggregation may lose network topology information.
- Optical transport switches may contain thousands of physical ports. The detailed link state information for a network element could be huge.

Requirement 117. The routing protocol shall be able to minimize global information and keep information locally significant as much as possible. There is another tradeoff between the accuracy of the network topology information and routing protocol scalability.

Requirement 118. The routing protocol shall distinguish static routing information from dynamic routing information. Static routing information does not change due to connection operations; examples include the neighbor relationship, link attributes, total link bandwidth, etc. Dynamic routing information, on the other hand, is updated by connection operations; examples include link bandwidth availability, link multiplexing fragmentation, etc. The routing protocol operation shall take the difference between these two types of routing information into account.

Requirement 119. Only dynamic routing information needs to be updated in real time.

Requirement 120. The routing protocol shall be able to control the dynamic information updating frequency through different types of thresholds. Two types of thresholds could be defined: absolute thresholds and relative thresholds. Dynamic routing information is not disseminated as long as the change remains within the threshold. When an update has not been sent for a specific time (this time shall be configurable by the carrier), an update is automatically sent; the default time could be 30 minutes.

All these techniques affect the accuracy of the network resource representation. The tradeoff between accuracy of the routing information and routing protocol scalability should be well studied.
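The threshold-controlled update scheme of Requirement 120 can be sketched as follows. This is a minimal illustration; the parameter names and default values (other than the 30-minute refresh mentioned above) are assumptions, not part of any specification:

```python
# Hypothetical sketch of threshold-controlled dissemination of dynamic
# routing information (e.g., available link bandwidth): an update is
# flooded only when the absolute or relative change since the last
# advertisement exceeds a threshold, or when a refresh timer expires.

def should_advertise(last_advertised, current, elapsed_s,
                     abs_threshold=10.0,      # illustrative bandwidth units
                     rel_threshold=0.2,       # 20% relative change
                     max_silence_s=30 * 60):  # periodic refresh (30 min)
    if elapsed_s >= max_silence_s:
        return True  # refresh even if the value barely changed
    delta = abs(current - last_advertised)
    if delta >= abs_threshold:
        return True
    if last_advertised and delta / abs(last_advertised) >= rel_threshold:
        return True
    return False  # change still inside both thresholds: stay quiet

assert not should_advertise(100.0, 95.0, elapsed_s=60)       # small change
assert should_advertise(100.0, 70.0, elapsed_s=60)           # large change
assert should_advertise(100.0, 99.0, elapsed_s=30 * 60 + 1)  # timer expiry
```

Raising the thresholds reduces update traffic at the cost of resource representation accuracy, which is exactly the tradeoff discussed above.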
A well-designed routing protocol should provide the flexibility for network operators to adjust this balance according to their networks' specific characteristics.

9.2.4 Requirements for path selection

The optical network provides connection services to its clients. Path selection requirements may be determined by service parameters. However, path selection abilities are determined by routing information dissemination. In this section, we focus on path selection requirements. Service capabilities, such as service type requirements, bandwidth requirements, protection requirements, diversity requirements, bit error rate requirements, latency requirements, and area inclusion/exclusion requirements, can be satisfied via constraint-based path computation. Since a specific path selection is done in a single network element, the specific path selection algorithm and its interaction with the routing protocol are not discussed in this document.

Note that a path consists of a series of links. The characteristics of a path are those of its weakest link. For example, if one of the links does not have link protection capability, the whole path should be declared as having no link-based protection.

Requirement 121. Path selection shall support shortest-path as well as constraint-based routing. Constraint-based path selection shall consider the performance of the network as a whole and provide traffic engineering capability.
- A carrier will want to operate its network as efficiently as possible, for example by increasing network throughput and decreasing network blocking probability. Possible solutions include shortest-path computation or load balancing under congestion conditions.

Requirement 122. Path selection shall be able to include/exclude specific locations, based on policy.

Requirement 123. Path selection shall be able to support protection/restoration capability. Section 10 discusses this subject in more detail.
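Requirements 121 and 122 above, together with the "weakest link" rule, can be sketched as a shortest-path computation that honors node exclusions and reports the path's protection capability as the conjunction of its links' capabilities. The graph encoding and attributes are illustrative assumptions:

```python
# Hypothetical sketch of constraint-based path selection: Dijkstra-style
# shortest path with policy-based node exclusion; the resulting path is
# "protected" only if every link on it is (weakest-link rule).
import heapq

def shortest_path(links, src, dst, exclude=frozenset()):
    """links: dict (u, v) -> (cost, protected). Returns (path, protected)."""
    adj = {}
    for (u, v), (cost, prot) in links.items():
        adj.setdefault(u, []).append((v, cost, prot))
        adj.setdefault(v, []).append((u, cost, prot))
    heap = [(0, src, [src], True)]
    seen = set()
    while heap:
        cost, node, path, prot = heapq.heappop(heap)
        if node == dst:
            return path, prot
        if node in seen or node in exclude:
            continue
        seen.add(node)
        for nxt, c, p in adj.get(node, []):
            if nxt not in seen and nxt not in exclude:
                # A path is protected only if every link on it is.
                heapq.heappush(heap, (cost + c, nxt, path + [nxt], prot and p))
    return None, False

links = {("A", "B"): (1, True), ("B", "C"): (1, False),
         ("A", "D"): (2, True), ("D", "C"): (2, True)}
path, prot = shortest_path(links, "A", "C")
assert path == ["A", "B", "C"] and not prot  # weakest link: unprotected
path, prot = shortest_path(links, "A", "C", exclude={"B"})
assert path == ["A", "D", "C"] and prot      # policy exclusion of node B
```

Additional constraints (bit error rate, latency, diversity) would be enforced the same way: either pruning links before the search or accumulating path attributes during it.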
Requirement 124. Path selection shall be able to support different levels of diversity, including diverse routing and protection/restoration diversity. The simplest form of diversity is link diversity. More complete notions of diversity can be addressed by logical attributes such as shared risk link groups (SRLGs).

Requirement 125. Path selection algorithms shall provide carriers the ability to support a wide range of services and multiple levels of service classes. Parameters such as service type, transparency, bandwidth, latency, bit error rate, etc. may be relevant.

The inputs for path selection include the connection end addresses, a set of requested routing constraints, and constraints of the networks. Some of the network constraints are technology specific, such as the constraints in all-optical networks addressed in [John_Angela_IPO_draft]. The requested constraints may include bandwidth requirements, diversity requirements, path-specific requirements, as well as restoration requirements.

9.3 Automatic Discovery Functions

This section describes the specifications for automatic discovery to aid distributed connection management (DCM) in the context of Automatically Switched Transport Networks (ASTN/ASON), as specified in ITU-T Rec. G.807. Auto-discovery is applicable to the User-to-Network Interface (UNI), Network-Node Interfaces (NNI) and the Transport Plane Interfaces (TPI) shown in the ASTN reference model.

Neighbor discovery can be described as an instance of auto-discovery that is used for associating two subnetwork points that form a trail or a link connection in a particular layer network. The association created through neighbor discovery is valid as long as the trail or link connection that forms the association is capable of carrying traffic. This is referred to as transport plane neighbor discovery.
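A transport plane neighbor-discovery exchange of the kind described above can be sketched minimally as follows. The message fields and names here are hypothetical, not drawn from any specific discovery protocol:

```python
# Hypothetical sketch of transport plane neighbor discovery: each end of
# a link announces its (NE name, port ID); hearing an announcement
# associates the two subnetwork points. The association is kept only
# while the link can carry traffic.

def discover_neighbors(announcements):
    """announcements: iterable of (local_ne, local_port, remote_ne,
    remote_port, link_up) tuples, one per link heard. Returns the set
    of currently valid port associations."""
    assoc = set()
    for local_ne, local_port, remote_ne, remote_port, link_up in announcements:
        if link_up:  # association valid only while the link carries traffic
            assoc.add(((local_ne, local_port), (remote_ne, remote_port)))
    return assoc

heard = [("NE1", "p1", "NE2", "p4", True),
         ("NE1", "p2", "NE3", "p1", False)]  # link down: no association
assert discover_neighbors(heard) == {(("NE1", "p1"), ("NE2", "p4"))}
```

In practice such announcements would be carried in-band or over a control channel and repeated periodically, so that associations age out when a trail stops carrying traffic.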
In addition to transport plane neighbor discovery, auto-discovery can also be used by distributed subnetwork controller functions to establish adjacencies. This is referred to as control plane neighbor discovery.

It is worth mentioning that the subnetwork points that are associated as part of neighbor discovery do not have to be contained in network elements with physically adjacent ports. Thus neighbor discovery is specific to the layer in which connections are to be made and consequently is principally useful only when the network has switching capability at this layer.

Service discovery can be described as an instance of auto-discovery that is used for verifying and exchanging the service capabilities supported by a particular link connection or trail. It is assumed that service discovery would take place after two subnetwork points within the layer network are associated through neighbor discovery. However, since the service capabilities of a link connection or trail can change dynamically, service discovery can take place at any time after neighbor discovery, and any number of times as may be deemed necessary.

Resource discovery can be described as an instance of auto-discovery that is used for verifying the physical connectivity between two ports on adjacent network elements in the network. Resource discovery is also concerned with improving the inventory management of network resources, detecting configuration mismatches between adjacent ports, associating port characteristics of adjacent network elements, etc.

Automatic discovery runs over the UNI, NNI and TPI interfaces [reference to g.disc].

9.3.1 Neighbor discovery

This section provides the requirements for automatic neighbor discovery for the UNI, NNI and Physical Interface (PI). These requirements do not preclude specific manual configurations that may be required and, in particular, do not specify any mechanism that may be used for optimizing network management.
Neighbor discovery is primarily concerned with the automated discovery of port connectivity between network elements that form the transport plane; it also involves connectivity verification and the bootstrapping of control plane channels for carrying discovery information between elements in the transport plane. This applies to discovery of port connectivity across a UNI between elements in the user network and the transport plane. The information that is learnt is subject to various policy restrictions between administrative domains.

Given that Automatic Neighbor Discovery (AND) is applicable across the whole network, it is important that AND be protocol independent and be specified to allow ease of mapping into multiple protocol specifications. The actual implementation of AND depends on the protocols that are used for the purpose of automatic neighbor discovery. As mentioned earlier, AND runs over both UNI and NNI type interfaces in the control plane. Given that port connectivity discovery and connectivity verification (e.g., fiber connectivity verification) are to be performed at the transport plane, PI interfaces (IrDI and IaDI) are also considered AND interfaces. Further information is available in Draft ITU-T G.ndisc.

Although the minimal set of parameters for discovery includes the SP and User NE names, there are several policy restrictions to be considered when exchanging these names across untrusted boundaries. Several security requirements on the information exchanged need to be considered. In addition, there are other security/reliability requirements on the actual control plane communications channels. These requirements are out of the scope of this document; Draft ITU-T Rec. G.dcn discusses them in detail.

9.3.2 Resource Discovery

Resource discovery happens between neighbors.
A mechanism designed for a technology domain can be applied to any pair of NEs interconnected through interfaces of the same technology. However, because resource discovery implies a certain amount of information disclosure between two business domains, it is under the service providers' security and policy control. In certain network scenarios, a service provider who owns the transport network may not be willing to disclose any internal addressing scheme to its clients, so a client NE may not have the neighbor NE address and port ID in its NE-level resource table.

Interface ports and their characteristics define the network element resources. Each network element can store its resources in a local table that could include the switching granularity supported by the network element, the ability to support concatenated services, the range of bandwidths supported by adaptation, physical attributes (signal format, transmission bit rate, optics type, multiplexing structure, wavelength), and the direction of the flow of information.

Resource discovery can be achieved through either manual provisioning or automated procedures. The procedures are generic, while the specific mechanisms and control information can be technology dependent. Resource discovery can be achieved by several methods. One method is self-resource discovery, by which the NE populates its resource table with its own physical attributes and resources. Neighbor discovery is another method, by which the NE discovers its adjacencies in the transport plane and their port associations and populates the neighbor NE information. After neighbor discovery, resource verification and monitoring must be performed to verify physical attributes and ensure compatibility. Resource monitoring must be performed periodically, since neighbor discovery and port association are repeated periodically. Further information can be found in [GMPLS-ARCH].

10. Requirements for service and control plane resiliency

There is a range of failures that can occur within a network, including node failures (e.g., office outages, natural disasters), link failures (e.g., fiber cuts, failures arising from diverse circuits traversing shared facilities such as conduit cuts) and channel failures (e.g., laser failures). Failures may be divided into those affecting the data plane and those affecting the control plane.

Requirement 126. The ASON architecture and associated protocols shall include redundancy/protection options such that no single failure event impacts the data plane or the control plane.

10.1 Service resiliency

Rapid protection/restoration from data plane failures is a crucial aspect of current and future transport networks. Rapid recovery is required by transport network providers to protect service and also to support stringent Service Level Agreements (SLAs) that dictate high reliability and availability for customer connectivity. The choice of a protection/restoration policy is a tradeoff between network resource utilization (cost) and service interruption time. Clearly, minimized service interruption time is desirable, but schemes achieving this usually do so at the expense of network resource utilization, resulting in increased cost to the provider. Different protection/restoration schemes operate with different tradeoffs between spare capacity requirements and service interruption time.

In light of these tradeoffs, transport providers are expected to support a range of different service offerings, with a strong differentiating factor between these offerings being the service interruption time in the event of network failures. For example, a provider's highest offered service level would generally ensure the most rapid recovery from network failures. However, such schemes (e.g., 1+1, 1:1 protection) generally use a large amount of spare restoration capacity, and are thus not cost effective for most customer applications.
Significant reductions in spare capacity can be achieved by instead sharing this capacity across multiple independent failures.

Clients will have different requirements for connection availability. These requirements can be expressed in terms of the "service level", which describes restoration/protection options and priority-related connection characteristics, such as holding priority (e.g., pre-emptable or not), set-up priority, or restoration priority. The mapping of individual service levels to a specific set of protection/restoration options and connection priorities will therefore be determined by individual carriers.

Requirement 127. In order for the network to support multiple grades of service, the control plane must identify, assign, and track multiple protection and restoration options.

For the purposes of this discussion, the following protection/restoration definitions are provided:

Reactive Protection: This is a function performed by equipment management functions and/or the transport plane (depending on whether it is equipment protection or facility protection, and so on) in response to failures or degraded conditions. Thus if the control plane and/or management plane is disabled, the reactive protection function can still be performed. Reactive protection requires that protecting resources be configured and reserved (i.e., they cannot be used for other services). The time to exercise the protection is technology specific and designed to protect from service interruption.

Proactive Protection: In this form of protection, protection events are initiated in response to planned engineering works (often from a centralized operations center). Protection events may be triggered manually via operator request or based on a schedule supported by a soft scheduling function.
This soft scheduling function may be performed by either the management plane or the control plane, but could also be part of the equipment management functions. If the control plane and/or management plane that performs the soft scheduling function is disabled, the proactive protection function cannot be performed. [Note that in the case of a hierarchical model of subnetworks, some protection may remain available after a partial failure: the failure of a single subnetwork control plane or management plane controller affects only the entities below the failed subnetwork controller, but not its parents or peers.] Proactive protection requires that protecting resources be configured and reserved (i.e., they cannot be used for other services) prior to the protection exercise. The time to exercise the protection is technology specific and designed to protect from service interruption.

Reactive Restoration: This is a function performed by either the management plane or the control plane. Thus if the control plane and/or management plane is disabled, the restoration function cannot be performed. [Note that in the case of a hierarchical model of subnetworks, some restoration may remain available after a partial failure: the failure of a single subnetwork control plane or management plane controller affects only the entities below the failed subnetwork controller, but not its parents or peers.] Restoration capacity may be shared among multiple demands. A restoration path is created after detecting the failure. Path selection could be done either off-line or on-line. The path selection algorithms may also be executed in real time or non-real time depending upon their computational complexity, implementation, and specific network context.
- Off-line computation may be facilitated by simulation and/or network planning tools.
Off-line computation can help provide guidance to subsequent real-time computations.
- On-line computation may be done whenever a connection request is received.

Off-line and on-line path selection may be used together to make network operation more efficient. Operators could use on-line computation to handle a subset of path selection decisions and use off-line computation for complicated traffic engineering and policy-related issues such as demand planning, service scheduling, cost modeling and global optimization.

Proactive Restoration: This is a function performed by either the management plane or the control plane. Thus if the control plane and/or management plane is disabled, the restoration function cannot be performed. [Note that in the case of a hierarchical model of subnetworks, some restoration may remain available after a partial failure: the failure of a single subnetwork control plane or management plane controller affects only the entities below the failed subnetwork controller, but not its parents or peers.] Restoration capacity may be shared among multiple demands. Part or all of the restoration path is created before detecting the failure, depending on the algorithms used, the types of restoration options supported (e.g., shared restoration/connection pool, dedicated restoration pool), whether the end-to-end call is protected or just the UNI part or NNI part, the available resources, and so on. In the event that the restoration path is fully pre-allocated, a protection switch must occur upon failure, similarly to the reactive protection switch. The main difference between the options in this case is that the switch occurs through actions of the control plane rather than the transport plane. Path selection could be done either off-line or on-line. The path selection algorithms may also be executed in real time or non-real time depending upon their computational complexity, implementation, and specific network context.
- Off-line computation may be facilitated by simulation and/or network planning tools. Off-line computation can help provide guidance to subsequent real-time computations.
- On-line computation may be done whenever a connection request is received.

Off-line and on-line path selection may be used together to make network operation more efficient. Operators could use on-line computation to handle a subset of path selection decisions and use off-line computation for complicated traffic engineering and policy-related issues such as demand planning, service scheduling, cost modeling and global optimization.

Multiple protection/restoration options are required in the network to support the range of offered services. NNI protection/restoration schemes operate between two adjacent nodes, switching to a protection/restoration connection when a failure occurs. UNI protection schemes operate between the edge device and a switch node (i.e., at the access or drop). End-to-end path protection/restoration schemes operate between access points (i.e., connections are protected/restored across all NNI and UNI interfaces supporting the call).

In general, the following protection schemes should be considered for all protection cases within the network:
- Dedicated protection (e.g., 1+1, 1:1)
- Shared protection (e.g., 1:N, M:N). This allows the network to ensure high quality service for customers, while still managing its physical resources efficiently.
- Unprotected

In general, the following restoration schemes should be considered for all restoration cases within the network:
- Dedicated restoration capacity
- Shared restoration capacity. This allows the network to ensure high quality of service for customers, while still managing its physical resources efficiently.
- Un-restorable

To support the protection/restoration options:

Requirement 128.
The control plane shall support multiple options for access (UNI), span (NNI), and end-to-end path protection/restoration.

Requirement 129. The control plane shall support configurable protection/restoration options via software commands (as opposed to needing hardware reconfigurations) to change the protection/restoration mode.

Requirement 130. The control plane shall support mechanisms to establish primary and protection paths.

Requirement 131. The control plane shall support mechanisms to modify protection assignments, subject to service protection constraints.

Requirement 132. The control plane shall support methods for fault notification to the nodes responsible for triggering restoration/protection. (Note that the transport plane is designed to provide the needed information between termination points. This information is expected to be utilized as appropriate.)

Requirement 133. The control plane shall support mechanisms for signaling rapid re-establishment of connection connectivity after failure.

Requirement 134. The control plane shall support mechanisms for reserving restoration bandwidth.

Requirement 135. The control plane shall support mechanisms for normalizing connection routing after failure repair.

Requirement 136. The signaling control plane should implement signaling message priorities to ensure that restoration messages receive preferential treatment, resulting in faster restoration.

Requirement 137. Normal connection operations (e.g., connection deletion) shall not result in protection/restoration being initiated.

Requirement 138. Restoration shall not result in mis-connections (connections established to a destination other than that intended), even for short periods of time (e.g., during contention resolution). For example, signaling messages used to restore connectivity after failure should not be forwarded by a node before contention has been resolved.

Requirement 139.
In the event that there is insufficient bandwidth available to restore all connections, restoration priorities/pre-emption should be used to determine which connections are allocated the available capacity.

The amount of restoration capacity reserved on the restoration paths determines the robustness of the restoration scheme to failures. For example, a network operator may choose to reserve sufficient capacity to ensure that all shared restorable connections can be recovered in the event of any single failure event (e.g., a conduit being cut). A network operator may instead reserve more or less capacity than that required to handle any single failure event, or may alternatively choose to reserve only a fixed pool independent of the number of connections requiring this capacity (i.e., not reserve capacity for each individual connection).

10.2 Control plane resiliency

Requirement 140. The optical control plane network shall support protection and restoration options to enable it to be robust to failures.

Requirement 141. The control plane shall support the necessary options to ensure that no service-affecting module of the control plane (software modules or control plane communications) is a single point of failure.

Requirement 142. The control plane should support options to enable it to be self-healing.

Requirement 143. The control plane shall provide reliable transfer of signaling messages and flow control mechanisms for restricting the transmission of signaling packets where appropriate.

The control plane may be affected by failures in signaling network connectivity and by software failures (e.g., in the signaling, topology and resource discovery modules).

Requirement 144. Control plane failures shall not cause failure of established data plane connections.
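The priority-based allocation of Requirement 139 above (restoration priorities deciding which connections get scarce restoration capacity) can be sketched as follows. Priorities, names and capacity units are illustrative assumptions:

```python
# Hypothetical sketch of Requirement 139: when restoration capacity is
# insufficient, restoration priority determines which connections are
# allocated the available capacity.

def allocate_restoration(connections, capacity):
    """connections: list of (name, priority, bandwidth), where a lower
    priority value means more important. Returns the names restored."""
    restored = []
    for name, _prio, bw in sorted(connections, key=lambda c: c[1]):
        if bw <= capacity:
            capacity -= bw
            restored.append(name)
    return restored

conns = [("gold", 0, 4), ("silver", 1, 4), ("bronze", 2, 4)]
assert allocate_restoration(conns, 8) == ["gold", "silver"]  # bronze waits
```

A pre-emption variant would additionally tear down already-restored low-priority connections to free capacity for higher-priority ones; the greedy ordering above is the common core of both.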
Fast detection of and recovery from failures in the control plane are important to allow normal network operation to continue in the event of signaling channel failures.

Requirement 145. Control network failure detection mechanisms shall distinguish between control channel failures and software process failures, since different recovery techniques are initiated for the different failures.

When there are multiple channels (optical fibers or multiple wavelengths) between network elements and/or client devices, failure of the control channel will have a much bigger impact on service availability than in the single-channel case. It is therefore recommended to support a certain level of protection of the control channel. Control channel failures may be recovered from either by using dedicated protection of control channels, or by re-routing control traffic within the control plane (e.g., using the self-healing properties of IP). Achieving this requires rapid failure detection and recovery mechanisms. For dedicated control channel protection, signaling traffic may be switched onto a backup control channel between the same adjacent pair of nodes. Such mechanisms protect against control channel failure, but not against node failure.

Requirement 146. If a dedicated backup control channel is not available between adjacent nodes, or if a node failure has occurred, then signaling messages should be re-routed around the failed link/node.

Requirement 147. Fault localization techniques for the isolation of failed control resources shall be supported.

Recovery from signaling process failures can be achieved by switching to a standby module, or by re-launching the failed signaling module.

Requirement 148. Recovery from software failures shall result in complete recovery of network state.

Control channel failures may occur during connection establishment, modification or deletion.
If this occurs, the control channel failure must not result in partially established connections being left dangling within the network. Connections affected by a control channel failure during the establishment process must be removed from the network, re-routed (cranked back), or continued once the failure has been resolved. For connection deletion requests affected by control channel failures, the deletion process must be completed once signaling network connectivity is recovered.

Requirement 149. Connections shall not be left partially established as a result of a control plane failure.

Requirement 150. Connections affected by a control channel failure during the establishment process must be removed from the network, re-routed (cranked back) or continued once the failure has been resolved.

Requirement 151. Partial connection creations and deletions must be completed once control plane connectivity is recovered.

11. Security concerns and requirements

In this section, security concerns and requirements for optical connections are described.

11.1 Data Plane Security and Control Plane Security

In terms of security, an optical connection has two aspects: security of the data plane to which the optical connection itself belongs, and security of the control plane by which the optical connection is controlled.

11.1.1 Data Plane Security

Requirement 152. Misconnection shall be avoided in order to keep user data confidential.

Requirement 153. To enhance the integrity and confidentiality of data, it may be helpful to support scrambling of data at layer 2 or encryption of data at a higher layer.

11.1.2 Control Plane Security

It is desirable to physically decouple the control plane from the data plane. Additional security mechanisms should be provided to guard against intrusions on the signaling network.

Requirement 154.
Network information shall not be advertised across exterior interfaces (E-UNI or E-NNI). The advertisement of network information across the E-NNI shall be controlled and limited in a configurable, policy-based fashion. The advertisement of network information shall be isolated and managed separately by each administration.

Requirement 155. Identification, authentication and access control shall be rigorously used for providing access to the control plane.

Requirement 156. The UNI shall support ongoing identification and authentication of the UNI-C entity (i.e., each user request shall be authenticated).

Editor's Note: The control plane shall have an audit trail and a log recording access, with timestamps.

11.2 Service Access Control

From a security perspective, network resources should be protected from unauthorized access and should not be used by unauthorized entities. Service Access Control is the mechanism that limits and controls access by entities attempting to use network resources. In particular, at the public UNI, Connection Admission Control (CAC) should be implemented and should support the following features:

Requirement 157. CAC should be applied to any entity that attempts to access network resources through the public UNI. CAC should include an authentication function in order to prevent masquerade (spoofing), i.e., the fraudulent use of network resources by pretending to be a different entity. An authenticated entity should be given a service access level on a configurable policy basis.

Requirement 158. Each entity should be authorized to use network resources according to the service access level given.

Requirement 159. With the help of CAC, usage-based billing should be realized. CAC and usage-based billing should be sufficiently stringent to prevent repudiation, i.e., an entity involved in a communication exchange subsequently denying that it took place.
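A minimal sketch of the CAC behaviour described above: a request is first authenticated (guarding against masquerade) and then authorized against the policy-configured service access level. The credential table, level names and operation names are invented for the illustration; a real implementation would also log each decision for the audit trail and billing.

```python
# Hypothetical Connection Admission Control at a public UNI.
# All tables below are toy, policy-configured data for illustration.

CREDENTIALS = {"client-a": "secret-a"}           # assumed credential store
ACCESS_LEVEL = {"client-a": "gold"}              # policy-assigned access level
LEVEL_ALLOWS = {                                 # operations allowed per level
    "gold": {"setup", "modify", "delete"},
    "bronze": {"setup"},
}

def admit(entity, secret, operation):
    """Return True only if the entity authenticates and is authorized."""
    if CREDENTIALS.get(entity) != secret:        # authentication (anti-spoofing)
        return False
    level = ACCESS_LEVEL.get(entity)
    return operation in LEVEL_ALLOWS.get(level, set())

print(admit("client-a", "secret-a", "delete"))   # authenticated and authorized
print(admit("client-a", "wrong-secret", "setup"))  # fails authentication
```

Non-repudiation, as required above, would additionally demand that each admitted request be bound to the authenticated identity in a tamper-evident record (e.g., signed log entries), which is outside this sketch.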
11.3 Optical Network Security Concerns

Since optical services operate directly on the layer 1 network that is fundamental to the telecom infrastructure, stringent security assurance mechanisms should be implemented in optical networks. When designing equipment, protocols, NMSs, and OSSs that take part in optical services, every security aspect should be considered carefully in order to avoid security holes that could endanger an entire network, such as DoS attacks or unauthorized access.

Acknowledgements

The authors of this document would like to acknowledge the valuable inputs from Yangguang Xu, Deborah Brunhard, Daniel Awduche, Jim Luciani, Mark Jones and Gerry Ash.

References

[carrier-framework] Y. Xue et al., "Carrier Optical Services Framework and Associated UNI Requirements", draft-many-carrier-framework-uni-00.txt, IETF, Nov. 2001.

[G.807] ITU-T Recommendation G.807 (2001), "Requirements for the Automatic Switched Transport Network (ASTN)".

[G.dcm] ITU-T New Recommendation G.dcm, "Distributed Connection Management (DCM)".

[G.ason] ITU-T New Recommendation G.ason, "Architecture for the Automatically Switched Optical Network (ASON)".

[oif2001.196.0] M. Lazer, "High Level Requirements on Optical Network Addressing", oif2001.196.0.

[oif2001.046.2] J. Strand and Y. Xue, "Routing For Optical Networks With Multiple Routing Domains", oif2001.046.2.

[ipo-impairments] J. Strand et al., "Impairments and Other Constraints on Optical Layer Routing", draft-ietf-ipo-impairments-00.txt, work in progress.

[ccamp-gmpls] Y. Xu et al., "A Framework for Generalized Multi-Protocol Label Switching (GMPLS)", draft-many-ccamp-gmpls-framework-00.txt, July 2001.

[mesh-restoration] G. Li et al., "RSVP-TE Extensions for Shared Mesh Restoration in Transport Networks", draft-li-shared-mesh-restoration-00.txt, July 2001.
[sis-framework] Y. T'Joens et al., "Service Level Specification and Usage Framework", draft-manyfolks-sls-framework-00.txt, IETF, Oct. 2000.

[control-frmwrk] G. Bernstein et al., "Framework for MPLS-based Control of Optical SDH/SONET Networks", draft-bms-optical-sdhsonet-mpls-control-frmwrk-00.txt, IETF, Nov. 2000.

[ccamp-req] J. Jiang et al., "Common Control and Measurement Plane Framework and Requirements", draft-walker-ccamp-req-00.txt, CCAMP, August 2001.

[tewg-measure] W. S. Lai et al., "A Framework for Internet Traffic Engineering Measurement", draft-wlai-tewg-measure-01.txt, IETF, May 2001.

[ccamp-g.709] A. Bellato, "G.709 Optical Transport Networks GMPLS Control Framework", draft-bellato-ccamp-g709-framework-00.txt, CCAMP, June 2001.

[onni-frame] D. Papadimitriou, "Optical Network-to-Network Interface Framework and Signaling Requirements", draft-papadimitriou-onni-frame-01.txt, IETF, Nov. 2000.

[oif2001.188.0] R. Graveman et al., "OIF Security Requirements", oif2001.188.0.

Author's Addresses

Yong Xue
UUNET/WorldCom
22001 Loudoun County Parkway
Ashburn, VA 20147
Phone: +1 (703) 886-5358
Email: yxue@uu.net

John Strand
AT&T Labs
100 Schulz Dr., Rm 4-212
Red Bank, NJ 07701, USA
Phone: +1 (732) 345-3255
Email: jls@att.com

Monica Lazer
AT&T
900 Route 202/206N
PO Box 752
Bedminster, NJ 07921-0000
Email: mlazer@att.com

Jennifer Yates
AT&T Labs
180 Park Ave, P.O. Box 971
Florham Park, NJ 07932-0000
Email: jyates@research.att.com

Dongmei Wang
AT&T Labs
Room B180, Building 103
180 Park Avenue
Florham Park, NJ 07932
Email: mei@research.att.com

Ananth Nagarajan
Wesam Alanqar
Lynn Neir
Tammy Ferris
Sprint
9300 Metcalf Ave
Overland Park, KS 66212, USA
Email: ananth.nagarajan@mail.sprint.com
Email: wesam.alanqar@mail.sprint.com
Email: lynn.neir@mail.sprint.com
Email: tammy.ferris@mail.sprint.com

Hirokazu Ishimatsu
Japan Telecom Co., LTD
2-9-1 Hatchobori, Chuo-ku,
Tokyo 104-0032 Japan
Phone: +81 3 5540 8493
Fax: +81 3 5540 8485
Email: hirokazu@japan-telecom.co.jp

Olga Aparicio
Cable & Wireless Global
11700 Plaza America Drive
Reston, VA 20191
Phone: +1 (703) 292-2022
Email: olga.aparicio@cwusa.com

Steven Wright
Science & Technology
BellSouth Telecommunications
41G70 BSC
675 West Peachtree St. NE.
Atlanta, GA 30375
Phone: +1 (404) 332-2194
Email: steven.wright@snt.bellsouth.com