Contribution Number: oif2002.229.42
Working Group: Architecture, Carrier, Signaling, OAM&P
TITLE: Work in Progress - NNI requirements
SOURCE:
AT&T              Monica A. Lazer, Jennifer Yates
Cable & Wireless  Olga Aparicio
Global Crossing   Kalyani Bogineni
Interoute         Nazik Andonian
T-Systems         Hans-Martin Foisel
Verizon           Vishnu Shukla
WorldCom          Curtis Brownmiller, Yong Xue
Williams          Lucy Yong
DATE: April 23, 2002
ABSTRACT: This contribution provides a current draft of the NNI 1.0 requirements.
Notice: This contribution has been created to assist the Optical Internetworking Forum (OIF). This document is offered to the OIF solely as a basis for discussion and is not a binding proposal on the companies listed as resources above. Each company in the source list, and the OIF, reserves the right at any time to add, amend, or withdraw statements contained herein.
This Working Text represents work in progress by the OIF, and must not be construed as an official OIF Technical Report. Nothing in this document is in any way binding on the OIF or any of its members. The document is offered as a basis for discussion and communication, both within and outside the OIF.
For additional information contact:
The Optical Internetworking Forum, 39355 California Street, Suite 307, Fremont, CA 94538
Phone: 510-608-5990  Email: info@oiforum.com
© 1998 Optical Internetworking Forum
Inter-Domain Control Plane Prioritized Requirements
2 Introduction and Assumptions
4 Carrier services requirements over NNI
4.1 NNI And Carrier Service Concepts
4.1.1 General Service Concepts
5 Control Plane invocation requirements
5.1.1 Service Specific Invocation Methods
5.1.1.1 Management Plane Invocation/Control (provisioned)
5.1.1.2 User or Proxy Invocation/Control (signaled)
5.1.1.3 Hybrid Service Control
6 Connection Management Requirements
6.6 Call and Connection Characteristics
7.1 Internal Network Addressing
7.2 Signaling and control routable addresses for clients
7.3 External Client Addressing (TNA)
7.4 Client routable addresses for control and signaling messages (Node_id)
7.5.1 Intra-carrier Subnetwork Identification
7.6 Address Resolution and Directory Services
9.2.2 Segmentation/Aggregation of Domains
9.2.2.1 Support for Segmentation of Domains
9.2.2.2 Support for Aggregation of Domains
9.2.3 Routing Protocol Scalability
9.2.4 Signaling Protocol Scalability
10.2 Support of Hierarchical Routing and Signaling
11.1 Routing Protocol Stability
12 Additional Signaling Requirements
13 Additional Routing Requirements
14.1.1 Information Exchange Security
14.1.2 Connection Management Security
14.1.3 Control channel implications
15 Management & Network Operation
15.1.1 Control Plane to Management Plane
15.1.2 Control Plane to Transport Network Element
15.2.1 Control Plane to Equipment Management Function
15.2.2 Control Plane to Transport Plane
16.1.1 Applicability of UNI 1.0 attributes to the NNI
Inter-Domain Control Plane Prioritized Requirements
This contribution provides input to the OIF Intra-Carrier NNI project by giving a high level review of the applications for NNI 1.0 and proposing a prioritization of the requirements specified. It provides some background information, assumptions, application scenarios and specific requirements. This project is defined as an E-NNI (ITU reference G.807) as applied to intra-carrier networks. This document focuses on a specific application – intra-carrier inter-domain interface.
This contribution provides input to the OIF Intra-Carrier NNI Project [3]. It provides background on two of the specific potential NNI applications specified for the project and suggests some assumptions and requirements for consideration. The intent is to describe in application terms the distinguishing features of each. No attempt is made to suggest complete requirements; only aspects of the applications that might not be known to architects and protocol designers who have not worked in these specific areas are discussed.
The services to be supported over the NNI are outlined in Sec. 1. Three broad groups are discussed:
· Static Provisioned Bandwidth Service (SPB) implemented as Permanent Connection (PC) (out of scope) or as Soft Permanent Connection (SPC) – see section 4.1.1.
· Switched Connection (SC), addressed as Bandwidth-on-Demand Service (BOD) in UNI documents; see section 4.1.1.
· Optical Virtual Private Network (OVPN)
Ed Note: Global change to E-NNI. If we find a problem, we will deal with it at that point.(Curtis).
Shehzad (agree on requirements, come back to use of definitions): Visit the applications, do the mapping there. Issue: constrain by application vs. constraints by ITU definitions. To be worked by Curtis, Shehzad, Monica, Hans.
For NNI 1.0 we are proposing that we focus on the first two only. OVPN would be left for further study and a later NNI release.
In what follows, “multi-domain” is used to indicate that the control plane is partitioned; the control plane partitions are called “control domains” in this document, or “domains”.
It is assumed that all the control domains are part of a single carrier network. However, there are multiple flavors of E-NNI that need to be addressed. There may be instances of E-NNI supporting abstract topology exchanges, but also instances where only reachability information exchanges may be allowed. The extent of information exchanged across this interface is dependent on policies within carriers.
We shall call a node that is physically connected to a node in a different control domain a “border node”.
We propose that the following assumptions, valid throughout this document, also be adopted as project assumptions and requirements. (Note that these are NOT intended to be complete.)
Proposed Assumptions:
A1. Scope is as defined above.
A2. Each control domain is isolated by 3R regeneration.
A3. Any single NE will not be required to participate in the control plane of more than one control domain.
A4. Domains are agnostic to each other's signaling and routing protocols.
A5. Inter-domain signaling and routing protocols are agnostic of individual domains' routing and signaling protocols.
A6. Both end-points of an inter-domain link run the same protocol on the inter-domain link.
A7. This document addresses resilience of the E-NNI interconnection, not the resilience of the transport plane.
A8. SRLG existence or weighting cannot always be assumed across control domains.
A9. Connection Modification is not supported in NNI 1.0.
We have used the following ratings throughout the document:
Guiding Principle – an essential principle; however, the requirement is not seen as directly reflected in protocols.
Phase 1 – requirement seen as essential for NNI 1.0. Needs to be supported.
Phase 2 – requirement seen as important, but applicable to later NNI work.
TBD – this rating shows up in only a few places; the authors believe that more discussion is needed for assessment.
Ed note: additional reference model discussion will be added for data plane, management plane.
More details on the specifics of the metro/core application can be found in Reference [3]. Additional applications considered high priority by the Carrier WG are also detailed in this reference.
Call and connection control need to be treated separately. This has the advantage of reducing redundant call control information at intermediate (relay) connection control nodes, thereby removing the burden of decoding and interpreting the entire message and its parameters. Call control can therefore be provided at the ingress to the network or at gateways and network boundaries, and relay nodes need only provide the procedures to support switching connections.
Call control is a signalling association between one or more user applications and the network to control the set-up, release, modification and maintenance of sets of connections. Call control is used to maintain the association between parties, and a call may embody any number of underlying connections, including zero, at any instance of time.
Call control may be realised by one of the following methods:
- Separation of the call information into parameters carried by a single call/connection protocol
- Separation of the state machines for call control and connection control, whilst signalling the information in a single call/connection protocol
- Separation of information and state machines by providing separate signalling protocols for call control and connection control
Call control functionality is found in G.8080.
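As a rough illustration of the state-machine separation listed above, the following sketch keeps call state and connection state in independent machines; the class names and states are illustrative assumptions, not G.8080 definitions.

```python
# Illustrative sketch: separate state machines for call control and
# connection control (simplified states; G.8080 defines the real model).

class CallController:
    """Maintains the call association between parties; a call may
    embody zero or more underlying connections at any instant."""
    def __init__(self):
        self.state = "IDLE"
        self.connections = []          # may legitimately be empty

    def setup(self):
        self.state = "ACTIVE"          # call admitted; no connection yet

    def add_connection(self, conn):
        assert self.state == "ACTIVE"  # connections exist only within a call
        self.connections.append(conn)

class ConnectionController:
    """Handles set-up/release of one connection; relay nodes run only
    this machine and never inspect call-level parameters."""
    def __init__(self):
        self.state = "DOWN"

    def connect(self):
        self.state = "UP"

call = CallController()
call.setup()                 # call established with zero connections
conn = ConnectionController()
conn.connect()
call.add_connection(conn)
print(call.state, len(call.connections))  # ACTIVE 1
```

Because the two machines share no state, a relay node can instantiate only `ConnectionController`, which is the saving the text describes.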
Call admission control is a policy function invoked by an Originating role in a Network and may involve cooperation with the Terminating role in the Network. Note that a call being allowed to proceed only indicates that the call may proceed to request one or more connections. It does not imply that any of those connection requests will succeed. Call admission control may also be invoked at other network boundaries.
- Support to call admission control shall conform to ITU-T recommendation G.8080.
Connection control is responsible for the overall control of individual connections. Connection control may also be considered to be associated with link control. The overall control of a connection is performed by the protocol undertaking the set-up and release procedures associated with a connection and the maintenance of the state of the connection.
Connection admission control is essentially a process that determines if there are sufficient resources to admit a connection (or re-negotiates resources during a call). This is usually performed on a link-by-link basis, based on local conditions and policy. Connection admission control may refuse the connection request.
- Support of connection control and connection admission control shall conform to ITU-T recommendation G.8080
Carriers have been evolving their vision of the broad types of optical layer services they would like to provide in a series of documents, starting with [1] and most recently in [2] submitted in IETF. ITU-T documents on transport technologies and automated switched networks are also used extensively.
ITU standards (G.807) define three basic connection types according to the distribution of connection management functionality between the control and the management planes. The following connection types have been identified:
Start quote from G.807
3.14 PC: Permanent Connection: A PC is a connection type that is provisioned by the management system.
3.15 SC: Switched Connection: An SC is any connection that is established, as a result of a request from the end user, between connection end points using a signalling/control plane, and involves the dynamic exchange of signalling information between signalling elements within the control plane(s).
3.16 SPC: Soft Permanent Connection: An SPC is a user-to-user connection whereby the user-to-network portion of the end-to-end connection is established by the network management system as a PC. The network portion of the end-to-end connection is established as a switched connection using the control plane. In the network portion of the connection, requests for establishment of the connection are initiated by the management plane and set up by the control plane.
End quote from G.807
Furthermore, the same recommendation identifies the relationship between the connection type and the connection management functionality distribution as follows:
Permanent connections are provisioned, being established by provisioning the network elements along the path. This may be done automatically by the EMS or NMS, or manually by technicians.
Switched connections are signaled, being established on demand between the communicating end-points within the control plane, using dynamic message exchanges, flowing across interfaces.
Soft permanent connections are managed in a hybrid environment, being established based on functionality distributed between the control and the management planes, using signaling and routing protocols within a carrier’s network, and using the management plane on the edge connections. Since this document focuses on control plane functionality, more specifically on NNI requirements, it focuses on support of switched and soft permanent connections.
Carriers have been evolving their vision of the broad types of optical layer services they would like to provide in a series of documents, starting with [1] and most recently in [2]. For the most part the service-model specific requirements in [2] translate smoothly to the NNI context. Service concepts identified in [2] are at the basis of control plane invocation methods.
1. Management Invocation: This provides Permanent and Soft Permanent Connections - those connections initiated from the management plane, but completed through the control plane and its interactions with the management plane. SPCs require that the management plane be able to initiate and control connection establishment. To be able to do this effectively it will need to be a participant in the NNI signaling/advertisement processes. It will need to receive the appropriate state information and be able to initiate end-to-end provisioning actions that may involve specific path selection, complex engineering requirements, or customer required monitor functions. An end-to-end managed service may involve multiple networks, e.g., both access networks and an intercity network. In this case provisioning may be initiated by whichever network has primary service responsibility.
2. UNI Invocation: This supports switched connections - those connections initiated by the user edge device over the UNI and completed through the control plane. These connections may be more dynamic than soft permanent connections and have much shorter holding times. The service is targeted at customers supporting UNI functions in their edge devices that require large point-to-point capacities and have very dynamic demands. Rapid provisioning time, preferably sub-second, is desired for this service. This will require an allocation of the connect time SLA across the IGPs involved and to the NNI.
Each of the services discussed has some specific requirements regarding service/activity invocation.
R 1 Connection management activities, including set-up, release, query, or modification shall be invokable from the EMF. Guiding Principle
R 2 The multi-domain control plane shall support requests from the management plane for either end-to-end, loose or explicit routing. Phase 1
R 3 The EMF shall receive topology and utilization information from the control plane. Guiding Principle
R 4 The EMF shall have the ability to set any connection characteristics. Guiding Principle (priorities to be assigned to specific attributes)
R 5 The control plane shall support queries for status and PM information at any point of an established connection from the EMF. Guiding Principle
R 6 The control plane may support customer management of the facilities directly associated with the service (e.g., optical VPN). Guiding Principle
Guiding principle – it is the intent of this group to ensure that all service functionality supported over the UNI will also be supported by the network.
R 7 All connection management activities, including set-up, release, query, or modification shall be invokable from a user edge device, or its signaling representative (such as a signaling proxy). Guiding Principle
R 8 The control plane receiving a connection request from a user edge device, or its signaling representative (such as a signaling proxy) shall have the ability to forward all the information to the management plane or some other domain control plane. Guiding Principle
R 9 The control plane shall collect and forward to the management plane sufficient information about each connection to allow billing based on end points, service characteristics (e.g., bandwidth, format type, service class), customer identity, facilities traversed end-to-end, connect and disconnect times/dates. Guiding Principle
R 10 The control plane may support customer management of the facilities directly associated with the service (e.g., optical VPN). Guiding Principle
The following actions must be supported over the NNI:
· Set-up
· Release
· Query of attributes
· Attribute modification
· Restoration
Subsections of this section address individual call and connection management actions and their related requirements.
The result of connection set-up is a connection with specified attributes (by a user, policy manager, or other OS) established between two or more end-points. The following requirements apply to connection set-up.
R 11 The control plane shall support requests for connection set-up across multiple subnetworks. Phase 1
R 12 NNI signaling shall support requests for connection set-up, subject to policies in effect between the subnetworks. Phase 2
R 13 Connection set-up shall be supported for bi-directional connections. Phase 1
R 14 Upon connection request initiation, the control plane shall generate a network unique Connection-ID associated with the connection, to be used for information retrieval or other activities related to that connection. Phase 1
R 15 CAC shall be provided as part of the control plane functionality. It is the role of the CAC function to determine if there is sufficient free resource available downstream to allow a new connection. Guiding Principle
R 16 When a connection request is received across the NNI, it is necessary to ensure that the resources exist within the downstream subnetwork to establish the connection. Guiding Principle
R 17 If there is sufficient resource available, the CAC may permit the connection request to proceed. Guiding Principle
R 18 If there is not sufficient resource available, the CAC shall send an appropriate notification upstream towards the originator of the connection request that the request has been denied. Phase 1
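The CAC behavior in R 15 through R 18 amounts to a simple admit-or-deny check on downstream resources; a minimal sketch, with hypothetical resource bookkeeping and message names:

```python
def connection_admission_control(request_bw, free_bw):
    """Hypothetical CAC check for a downstream subnetwork (R 15-R 18):
    admit the request if sufficient free resource exists, otherwise
    return a denial notification to be propagated upstream (R 18)."""
    if request_bw <= free_bw:
        # R 17: sufficient resource, connection request may proceed
        return {"admitted": True, "remaining": free_bw - request_bw}
    # R 18: insufficient resource, notify the request originator
    return {"admitted": False,
            "notification": "CONNECTION_REJECTED",
            "cause": "insufficient downstream resources"}

print(connection_admission_control(2, 10))   # admitted, resource reserved
print(connection_admission_control(20, 10))  # denied, notification upstream
```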
R 19 Connection set-up for multiple service level options shall be supported across the NNI. Phase 1 (signaling and routing)
R 20 Connection Management shall be governed by carrier policies. Phase 2
R 21 The control plane elements need the ability to rate limit (or pace) call setup attempts into the network. TBD
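The rate limiting (pacing) called for in R 21 is commonly realized as a token bucket; the following sketch is one illustrative way to do it, with assumed rate and class names:

```python
class SetupPacer:
    """Hypothetical token-bucket pacer for call set-up attempts (R 21):
    at most `rate` attempts per second are admitted on average, with
    bursts of up to `burst` attempts."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = 0.0

    def allow(self, now):
        # refill tokens for elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True        # admit this set-up attempt
        return False           # pace: defer or reject the attempt

pacer = SetupPacer(rate=2, burst=2)   # 2 attempts/s, burst of 2
results = [pacer.allow(0.0), pacer.allow(0.0), pacer.allow(0.0)]
print(results)  # [True, True, False]
```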
R 22 The control plane shall report to the EMF, the Success/Failures of a connection request. Guiding Principle
R 23 Upon a connection request failure:
R.23.a The control plane shall report to the EMF a cause code(s) identifying the reason for the failure Guiding Principle
R.23.b A negative acknowledgment with appropriate error codes shall be returned across the NNI Phase 1
R.23.c All allocated resources shall be released. Phase 1
R.23.d The cause code shall have sufficient information to allow corrective action if needed (e.g., resource limits).
R 24 Upon a connection request success:
R.24.a A positive acknowledgment shall be returned when a connection has been successfully established. Guiding Principle
R.24.b The positive acknowledgment shall be transmitted both downstream and upstream, over the NNI, or to the EMF if the connection request originated there. Phase 1
R 25 If contention occurs in establishing connections, there shall be at least one attempt and at most N attempts at contention resolution before returning a negative acknowledgment where N is a configurable parameter with default value of 3. TBD
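The retry behavior in R 25 can be sketched as a bounded retry loop with a configurable attempt count defaulting to 3; the function names and the contended-resource model are illustrative:

```python
def resolve_contention(attempt_setup, max_attempts=3):
    """R 25 sketch: attempt contention resolution at least once and at
    most N times (N configurable, default 3) before returning a
    negative acknowledgment."""
    for attempt in range(1, max_attempts + 1):
        if attempt_setup(attempt):
            return {"ack": True, "attempts": attempt}
    return {"ack": False, "attempts": max_attempts}  # negative acknowledgment

# Hypothetical contended resource that frees up on the second attempt
succeeds_on = 2
result = resolve_contention(lambda n: n >= succeeds_on)
print(result)  # {'ack': True, 'attempts': 2}
```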
R 26 NNI signaling plane shall support requests for connection tear down by connection-ID. Phase 1
R 27 The control plane shall allow the EMF to initiate connection release procedures on any connection, regardless of how the connection was established. Phase 1
R 28 For switched connections, the control plane shall allow either end to initiate connection release procedures. Phase 1
R 29 NNI signaling flows shall allow any end point or any intermediate node (e.g., in case of failure) to initiate the connection release over the NNI. Phase 1
R 30 Upon connection teardown completion all resources associated with the connection shall become available for access for new requests. Phase 1
R 31 The EMF shall be able to tear down completely connections established by the control plane. Phase 1
R 32 The EMF shall be able to tear down connections established by the control plane forcibly, on demand. Phase 1
R 33 Partially deleted connections shall not remain within the network. Phase 1
R 34 End-to-end acknowledgments shall be used for connection deletion requests. Phase 1
R 35 Connection deletion shall not result in either restoration or protection being initiated. Phase 1
R 36 Connection deletion shall use a two pass signaling process, removing the cross-connection only after the first signaling pass has completed. Phase 1
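The two-pass deletion in R 36 can be sketched as follows; the per-node dictionary representation is an illustrative assumption:

```python
def two_pass_delete(nodes):
    """R 36 sketch: delete a connection in two signaling passes.
    Pass 1 marks the connection for deletion along the whole path;
    only after pass 1 completes does pass 2 remove the
    cross-connections, so a mid-flight failure never leaves traffic
    cut on a partially deleted path (supporting R 33)."""
    for node in nodes:                      # pass 1: mark end to end
        node["pending_delete"] = True
    if not all(n["pending_delete"] for n in nodes):
        return False                        # pass 1 incomplete: abort
    for node in nodes:                      # pass 2: remove cross-connects
        node["cross_connected"] = False
    return True

path = [{"cross_connected": True, "pending_delete": False} for _ in range(3)]
print(two_pass_delete(path))                    # True
print(any(n["cross_connected"] for n in path))  # False
```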
Figure 1 illustrates the different types of failures considered in this document, i.e. inter-domain link, border node and intra-domain failures. The figure also depicts the configuration of metro-core dual interconnection, which refers to having multiple node interconnections between the metro and core subnetworks and needs to be supported for recovery from border node failures.
Figure 1. Example of inter-domain and intra-domain failures
Intra-domain failures may be handled by intra-domain recovery within the affected domain, or by inter-domain recovery. Inter-domain link failures may be handled by link protection or inter-domain recovery. Finally, border node failures need to be handled by inter-domain recovery. End-to-end recovery, or re-provisioning, may be used when other means of recovery fail.
This section presents a non-exhaustive list of functional requirements, i.e. independent of underlying technology, to be considered when implementing inter-domain as well as intra-domain and end-to-end restoration methods.
R 37 NNI 1.0 restoration shall support autonomous intra-domain restoration with inter-domain coordination and restoration of the inter-domain links.
R 38 Support for inter-domain link, border node and intra-domain recovery shall be considered in protocol development. Guiding Principle
R 39 At a minimum restoration shall support recovery from single failures. Phase 1
R 40 Recovery from multiple failures should be supported. (Recovery from multiple failures, while not 100% successful must behave in a predictable way such that the operator can determine what traffic is restored and what is not). Guiding Principle
R 41 Restoration mechanisms shall provide functional solutions independent of client signal. Phase 1
R 42 Network service and connection attributes, such as maximum restoration time and reversion strategy shall be factored in when assigning restoration priorities. Guiding Principle
R 43 Restoration methods shall address the impact of failures affecting multiple connections simultaneously.
R 44 Bulk restoration options shall be supported by the control plane. Phase 2
R 45 Restoration shall utilize a robust signaling mechanism, capable of ensuring that restoration-related information flows survive multiple failures. Phase 1
R 46 Restoration signaling mechanisms must support message prioritization, if needed, to achieve service objectives. Phase 1
R 47 The control plane must support a set of protection and restoration capabilities to support recovery of service due to transport plane link failures such as fiber cuts. Guiding Principle
R 48 Inter-domain protection schemes for consideration should include 1+1/1:1/1:n link protection. Phase 1
R 49 Restoration schemes for consideration should be capable of providing the desired level of reliability while taking advantage of shared backup resources. Guiding Principle
R 50 The control plane shall map individual service classes into specific protection and/or restoration options. Guiding Principle
R 51 The control plane must identify, assign, and track multiple protection and restoration options. Phase 1
R 52 During the restoration process, restoration shall take priority over new connection set-ups of the same priority level. Phase 1
R 53 Multiple levels of restoration priority shall be supported. Higher priority service class restoration shall take precedence over lower priority service class restoration. Phase 1
R 54 Restoration priority shall be supported within each restoration scheme. Phase 2
R 55 High priority services shall be conveyed over transport links that have high priority restoration mechanisms in place (e.g. dedicated restoration methods). Phase 1
R 56 Connections that are not restored shall be released. A mechanism shall exist to send notifications both upstream and downstream that the connection cannot be restored. This should lead to the release and re-provisioning of the entire path. Phase 1
R 57 If full recovery of the service is not possible, then the control plane must maintain connection state information until the failure is repaired. Policy and the service level of the connection may modify this behavior; e.g., where a “best-effort” connection is provided, the connection may be released due to link failure. TBD
R 58 There shall not be any partial connections left in the network as a result of unsuccessful restoration attempts (A mechanism shall be supported to undo partially made connections when insufficient restoration capacity exists). Phase 1
R 59 Normal connection operations across the NNI shall not result in protection/restoration being initiated by either subnetwork on the boundary of the NNI. Phase 1
R 60 The control plane shall support mechanisms for normalizing connection routing after failure repair. Phase 2
R 61 Normalizing shall be based on the “make before break” concept.
R 62 Multi-layer recovery shall be coordinated so that: Guiding Principle
R.62.a The responsibility of each survivable layer may be delineated
R.62.b Assuming a failure may be first detected by the layer closest to the failure, a mechanism shall be provided to determine which recovery mechanism is to act first according to the type of failure. Escalation strategies should be used to avoid having multiple layers respond to a single failure.
R 63 If a failure occurs on a path traversing multiple domains and no inter-domain link or border node is affected, intra-domain recovery shall take precedence over inter-domain or end-to-end recovery. Guiding Principle
R 64 A domain executing restoration of a given connection should notify adjacent domains (within the affected connection span) that a restoration process is in progress. Phase 1
R 65 If an inter-domain link fails and it is link protected (e.g. 1:1), then this protection mechanism shall precede any other restoration mechanisms. Guiding Principle
R 66 The control plane of each of the inter-connected subnetworks shall support Dual interconnection at the NNI in order to deal with potential border node failures. Phase 1
R 67 Border nodes should have the option to allow usage of available restoration resources for carrying low priority traffic[1]. Guiding Principle
R 68 Border nodes shall support different restoration priorities to support multiple grades of service. Phase 1
R 69 When a failure affects an inter-domain link or border node, information regarding the surviving inter-domain connectivity shall be propagated. Phase 1
R 70 Inter-domain restoration should support mechanisms (such as a hold-off timer or notifications capabilities), so that intra-domain restoration can take place first, when appropriate.
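The interplay of R 63, R 65 and R 70 (intra-domain restoration first, with a hold-off before inter-domain escalation) can be sketched as a simple decision function; the scope names and timer value are illustrative assumptions:

```python
def choose_recovery(failure_scope, intra_holdoff_ms, elapsed_ms):
    """Sketch of R 63/R 70: give intra-domain restoration a hold-off
    window before escalating. Inter-domain link and border node
    failures go straight to inter-domain recovery, since intra-domain
    restoration cannot repair them."""
    if failure_scope in ("inter-domain-link", "border-node"):
        return "inter-domain-recovery"
    if failure_scope == "intra-domain" and elapsed_ms < intra_holdoff_ms:
        return "wait-for-intra-domain-restoration"   # R 70 hold-off
    return "inter-domain-recovery"                   # hold-off expired

print(choose_recovery("intra-domain", 50, 10))   # still in hold-off window
print(choose_recovery("intra-domain", 50, 60))   # escalate to inter-domain
print(choose_recovery("border-node", 50, 0))     # inter-domain at once
```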
R 71 The control plane and its associated protocols shall have the capability to release all unused connection segments in all involved domains upon successful recovery.
R 72 The control plane and its associated protocols shall have the capability to record and alternatively to flag all resources needed in all involved domains for reversion.
R 73 The operation of revertive actions across all involved domains shall be supported automatically and invoked from the management plane.
R 74 Upon recovery failure, all unused connection segments in all involved domains, whether part of the initial connection or of the attempted recovery connection shall be released.
R 75 Both success and failure of restoration attempts shall be reported to the EMF.
There may be situations when end-to-end recovery is appropriate. One aspect of end-to-end recovery is often referred to as “re-provisioning after a failure”. Support of re-provisioning is needed in NNI 1.0. A comprehensive specification for a fuller range of end-to-end recovery options is outside the scope of NNI 1.0. The following requirements are considered essential for support of re-provisioning after a failure.
R 76 Dual ended operation, which refers to a protection method that takes switching action at both ends of a protected entity [G.805], shall be supported even in the case of unidirectional failures. Phase 1
R 77 Dual interconnection shall be supported in order to implement end-to-end inter-domain recovery over the NNI. Phase 1
R 78 When end-to-end recovery is made possible, a mechanism shall exist to tear down the existing (failed) connection. Phase 1
This section contains requirements related to specific connection related queries.
R 79 The control plane shall support EMF and neighboring device (client or intermediate node) request for connection attributes or status query. Phase 1
R 80 The control plane shall support action results code responses to any requests over the control interfaces. Phase 1
R 81 The EMF shall be able to query on demand the status of the connection. Guiding Principle
This section addresses requirements for NNI Phase 2.
Connection modification refers to modification of a specific connection attribute of an established connection. Support of this primitive allows service providers to enable new services and makes optical transport networks more appropriate to serve data centric traffic as well as traditional services.
Attribute modification shall not cause service or network disruption. This limits modification to exclude attributes such as encoding type, transparency, logical port identifier, or end-points. Modifiable attributes include bandwidth, service class, and restoration priority.
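The non-destructive constraint above can be sketched as a simple validation step; the attribute names restate the lists in the text and are otherwise illustrative:

```python
# Sketch of the non-destructive modification rule: only attributes whose
# change does not disrupt the service may be modified in place.
MODIFIABLE = {"bandwidth", "service_class", "restoration_priority"}
IMMUTABLE = {"encoding_type", "transparency", "logical_port_id", "end_points"}

def validate_modification(attribute):
    """Reject modification requests that would disrupt the service or
    the network; such changes require connection re-establishment."""
    if attribute in IMMUTABLE:
        return False
    return attribute in MODIFIABLE

print(validate_modification("bandwidth"))    # True: modifiable in place
print(validate_modification("end_points"))   # False: destructive change
```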
R 82 Only non-destructive attribute modification requests are acceptable.
R 83 Modification of any connection attribute shall be supported as a network configurable action, subject to established policies and SLAs.
R 84 Attribute modification shall not cause failure of the connection.
R 85 The CAC function shall determine if there is sufficient free resource available in the downstream subnetwork to allow the modification operation to proceed.
R 86 If there is sufficient resource available, the CAC will permit the modification request to proceed.
R 87 If there is not sufficient resource available, the CAC shall notify upstream towards the modification request originator that the request has been denied.
R 88 The control plane shall report to the EMF, the Success/Failures of a connection modification request.
R 89 Upon a connection modification failure:
R.89.a The control plane shall report to the EMF a cause code identifying the reason for the failure
R.89.b A negative acknowledgment shall be returned across the NNI
R.89.c Allocated modified resources shall be released.
R.89.d Connection shall maintain its initial attributes.
R 90 A positive acknowledgment shall be returned across the NNI when a connection has been successfully modified.
R 91 Attribute modification shall not result in protection or restoration being initiated within an optical transport network.
R 92 The control plane shall update resource availability upon attribute modification if it affects resource allocation.
R 93 Attribute modification shall be treated as a service offered by an optical transport network. Service discovery protocol shall support the capability to discover a network’s support for attribute modification.
R 94 Attribute modification shall be treated as a billable event and the control plane shall pass all the relevant information to the EMF.
Some carriers have expressed interest in obtaining the ability to support customer requests for modification of connection bandwidth. Since these are not requirements endorsed by all carriers, they should be considered as potentially attractive options.
R 95 The bandwidth modification shall support bandwidth increase and decrease capability by supporting modification requests for “Number of Virtual components” (NVC), or “Multiplier” fields.
R 96 The bandwidth modification shall only modify the “Number of Virtual components” (NVC), or “Multiplier” connection attributes.
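The admission flow described in R 85 through R 90 can be sketched as follows. This is a minimal illustration only; the data model, capacity units, and function names are hypothetical and do not represent any agreed NNI message format.

```python
# Hypothetical sketch of the CAC decision for a connection-modification
# request (R 85-R 90). All names and structures are illustrative.

def handle_modification(cac_free_capacity, required_capacity, connection):
    """Return (accepted, notification) following the CAC rules."""
    if required_capacity <= cac_free_capacity:
        # R 86: sufficient resources -> permit the modification to proceed.
        connection["pending"] = required_capacity
        return True, "positive-ack"      # R 90: positive ack across the NNI
    # R 87: insufficient resources -> deny and notify upstream.
    # R 89: the connection keeps its initial attributes; nothing is allocated.
    return False, "negative-ack: insufficient downstream resources"

conn = {"bandwidth": 2, "pending": None}
ok, msg = handle_modification(cac_free_capacity=5, required_capacity=3,
                              connection=conn)
assert ok and msg == "positive-ack"
```

A failed request leaves the connection untouched, matching R.89.d: the original attributes remain in force until a positive acknowledgment is returned.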
It is the intention of this section to address support of connection characteristics consistently with the UNI 1.0 attributes and with NNI 1.0 required functionality.
The following service granularity requirements need to be supported over the NNI:
R 97 Connection management of all allowable SONET/SDH payloads (as supported by the hardware) shall be supported in the control plane. Phase 1
R 98 The NNI shall support 1 Gb/s and 10 Gb/s granularity for 100 Mb/s, 1 Gb/s and 10 Gb/s Ethernet framing types, if implemented in the hardware. Phase 2
R 99 Connection management at sub-STS1 rate among subnetworks shall be supported. Extension of the intelligent optical network functionality towards the edges of the network in support of sub-rate interfaces (as low as 1.5 Mb/s) will require support of VT/TU granularity. Phase 2
R 100 Connection management at wavelength granularity shall be supported. Phase 2
R 101 SAN service support will be required. For SAN services the following interfaces have been defined and shall be supported by the control plane if the given interfaces are available on the equipment: TBD
R.101.a FC-12
R.101.b FC-50
R.101.c FC-100
R.101.d FC-200
R 102 Encoding of service types in the protocols used shall be such that new service types can be added by adding new codepoint values or objects. Guiding Principle
R 103 The NNI shall support interworking between domains with different granularities. Phase 1 for signaling, Low priority for routing
R 104 NNI shall support connection management for bundled connections. Phase 2
R 105 Connection Management shall support options for connections at different transport layers.
R.105.a Section overhead transport. Phase 1
R.105.b Line overhead transport. Phase 1
R.105.c Bit-rate transparent transport. Phase 2
R.105.d Bit-rate specific (within a range, e.g., 200 Mb/s to 1500 Mb/s) transport. Phase 2
R.105.e Bit-rate dependent (exact bit rate, e.g., 2.48832 Gb/s, 9.95328 Gb/s, etc.) transport. Phase 2
R.105.f Transparent wavelength transport. Phase 2
R.105.g Format independent transport (independent of SONET/SDH, GigE, 10bE-LAN, etc.) Phase 2
Connections may be point-to-point or multicast. Most applications use point-to-point bi-directional connections. However, there are many applications for multicast unidirectional connections.
R 106 NNI signaling and routing shall support connection management for point-to-point connections. Phase 1
R 107 NNI signaling and routing should support connection management for multicast connections, as needed by some applications. Phase 2
For NNI 1.0 it is expected that each domain is responsible for diversity within itself; in addition, multiple points of domain interconnectivity are used to support diversity between domains.
R 108 NNI information flows shall support the network’s ability to route diversely with respect to node and link diversity. Phase 1
R 109 Diverse routing support across the NNI shall be based on use of dual internetworking capabilities, if available, and shall allow each network to generate diverse internal paths. Guiding Principle
R 110 NNI information flows may support the network’s ability to route diversely with respect to SRLG. Phase 2
R 111 The control plane routing algorithms shall be able to route a single demand diversely from N previously routed demands, where diversity would be defined to mean that no more than K demands (previously routed plus the new demand) should fail in the event of a single covered failure. Phase 2
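The diversity constraint in R 111 can be expressed compactly: across the N previously routed demands plus the new one, no single covered failure may take down more than K demands. The sketch below models covered failures as shared-risk identifiers; all names are assumptions for illustration.

```python
# Illustrative check for R 111: no single covered failure (modeled as a
# shared-risk identifier) may affect more than k demands, counting both
# previously routed demands and the candidate.

from collections import Counter

def diverse_enough(existing_paths, candidate_path, k):
    """existing_paths: list of sets of risk ids; candidate_path: set of risk ids."""
    hits = Counter()
    for path in existing_paths + [candidate_path]:
        for risk in path:
            hits[risk] += 1
    # A single failure of `risk` takes down hits[risk] demands.
    return all(count <= k for count in hits.values())

existing = [{"L1", "L2"}, {"L2", "L3"}]
assert diverse_enough(existing, {"L4", "L5"}, k=2)   # no risk shared by > 2
assert not diverse_enough(existing, {"L2"}, k=2)     # L2 would hit 3 demands
```

In practice the routing algorithm would use such a predicate to prune candidate paths during constrained path computation, rather than checking after the fact.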
The UNI 1.0 specification [oif2000125.7] has already described the following addressable entities.
1. Internal transport network addresses
2. UNI-N and UNI-C Node Identifier (Node ID)
3. IP Control Channel Identifier (CCID)
4. Transport Network Assigned (TNA) address
5. Client addresses.
Node Id, TNA addresses, and CCID, as well as their usage in signaling and discovery messages are discussed at some length in the UNI document. Node ID is used as a source/destination identifier for messages between entities crossing administrative boundaries.
Internal transport network addresses (INAs) were not discussed in the document, since they were out of its scope. This section gives some high level requirements on addresses relevant to the NNI.
Transport network elements need addresses for routing and signaling, referred to as internal network addresses (to differentiate them from the externally available transport network assigned addresses used for user/client devices). This section explicitly addresses the routing of signaling and control messages within the control plane communications structure. The addressing method and the administration of internal addresses are the responsibility of the carrier. The following requirements apply to these internal network addresses:
R 112 There shall be at least one locally unique address associated with each transport network element. “Locally unique” means that the address should be unique within a routing domain. Guiding Principle
R 113 The inter-domain protocols shall support multiple address formats (IPv4, IPv6, NSAP).
R 114 Address hierarchies shall be supported. Phase 1
R 115 Address aggregation and summarization shall be supported.[2] Phase 1
R 116 Dual domain inter-connectivity shall not require the use of multiple internal addresses per control entity (e.g., E-NNI node). Phase 1
R 117 The size of the address space shall be sufficient to avoid address exhaustion. It is desirable to have support for 128 bits or higher address length. Phase 1
R 118 The addressing shall not mandate revealing the internal network addresses externally, outside of a carrier's network boundaries. Guiding Principle
R 119 The internal network addresses shall not imply network characteristics (port numbers, port granularity, etc). Guiding Principle
R 120 End user equipment and other transport carriers shall not be able to obtain knowledge of internal network addresses, including port information, from information exchanged across an NNI. Guiding Principle
R 121 Connection management between specific drops at the edges of subnetwork shall be supported over the NNI. A drop may be identified by specifying the TNA of the client device or the TID of the appropriate timeslot on the drop interface on a given NE (the CTP or equivalent information of the end points). Phase 1
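The hierarchy and aggregation requirements above (R 114, R 115) have a familiar analogue in prefix summarization. The sketch below illustrates the idea with IPv4 prefixes and the Python standard library; actual NNI summarization would be applied to whatever address format the carrier deploys, under carrier policy.

```python
# Minimal illustration of address aggregation/summarization (R 114, R 115):
# contiguous prefixes reachable in a domain are collapsed into a single
# summary route before being advertised across the E-NNI.

import ipaddress

reachable = [
    ipaddress.ip_network("10.1.0.0/24"),
    ipaddress.ip_network("10.1.1.0/24"),
    ipaddress.ip_network("10.1.2.0/24"),
    ipaddress.ip_network("10.1.3.0/24"),
]

# collapse_addresses merges adjacent prefixes, reducing the volume of
# reachability information exchanged between domains.
summary = list(ipaddress.collapse_addresses(reachable))
assert summary == [ipaddress.ip_network("10.1.0.0/22")]
```

Summarization of this kind also serves the confidentiality requirements (R 118-R 120), since the advertised summary reveals nothing about individual internal elements.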
The following requirements refer to addresses used to identify transport network clients or end users. These addresses may be carried in messages advertising reachability information between different domains, and are therefore relevant to the design and operation of the NNI. TNAs are specified in detail in the UNI 1.0 document. In addition, the following requirements expand on TNA use for the E-NNI.
R 122 The inter-domain protocols shall support multiple address formats (IPv4, IPv6, NSAP).
R 123 Address hierarchies shall be supported. Phase 1
R 124 Address aggregation and summarization shall be supported. Phase 1
R 125 TNA reachability deals with connectivity and shall not be changed when the user device is not available (reachability updates are triggered by registration and deregistration, not by client device reboots) (Name registration persists for as long as the user retains the same TNA – until de-registration). Phase 1
These addresses are used by the UNI-N to send UNI messages to UNI-C. They are out of scope for the NNI work and are discussed at some length in the UNI 1.0 agreement.
R 126 Each control domain shall be uniquely identifiable within a network. Phase 1
Address resolution is needed to translate between names and addresses of end-devices.
R 127 Address resolution shall be supported. Address resolution may be supported as part of connection set-up or as a separate network service. Phase 1
Note: Carriers consider this a Phase 1 work item. Vendor input on support of these functions will be requested.
This section addresses resilience and reliability for the control plane.
Resilience refers to the ability of the control plane to continue operating under failure conditions. Reliability refers to the ability of the control plane to recover its operation after failure conditions. It should be noted that the control plane may not be able to recover or continue operating under all failure conditions; however, it should be designed to handle the most common failures.
Common occurrences of failures may include (this is not an exhaustive list):
- Signaling/control channel failure
- Control plane component failure
- Control plane controller failure
R 128 The control plane shall provide reliable transfer of signaling messages and flow control mechanisms for restricting the transmission of signaling packets where appropriate. Phase 1
R 129 Control plane must be able to recover from single failures, e.g., single disruption of control plane between two control plane controllers. Phase 1
R 130 The control plane shall be capable of operating in network environments where the transport plane has underlying recovery capability. Phase 1
R 131 The control plane recovery mechanism must support co-existence with existing transport plane recovery mechanisms. Phase 1
R 132 The control plane recovery mechanism must establish a linkage with recovery mechanisms at higher layers (e.g., IP, ATM, MPLS) such that multiple recovery mechanisms co-exist without clashing. Guiding Principle
R 133 The control plane must provide the capability to support recovery of transport plane node failures, e.g., route around a single transport plane node failure. Phase 1
R 134 The control plane shall support the necessary options to ensure that no service-affecting module of the control plane (software modules or control plane communications) is a single point of failure. Existing (established) connections must not be affected by any failures (either hardware or software module) within the control plane of any node. (Standard protection methods for signaling channels are necessary). Phase 1
R 135 Existing (established) connections must not be affected (i.e., no service affecting interruptions) even during complete control plane failure (i.e., failure of all control plane components) within any node. Phase 1
R 136 Existing (established) connections shall not be affected (i.e., no service affecting interruptions) by signaling or control channel failures. Phase 1
R 137 The control plane should support options to enable it to be self-healing. Phase 1
R 138 Control plane failure detection mechanisms shall distinguish between control channel and software process failures. Guiding Principle
R 139 Fault localization techniques for the isolation of failed control resources (including signaling channels and software modules) shall be supported. Guiding Principle
R 140 Recovery from hardware and software failures shall result in complete recovery of network state. Phase 1
R 141 In the event a control plane failure occurs, recovery mechanisms shall be provided such that Phase 1
R.141.a Connections should not be left partially established as a result of a control plane failure.
R.141.b Connections affected by a control channel failure during the establishment process must be removed from the network, re-routed (cranked back), or continued once the failure has been resolved.
R.141.c If partial connections are detected upon control plane recovery, those partial connections shall be removed once the control plane connectivity is recovered.
R.141.d The control plane shall ensure that there will not be unused, frozen network resources.
R 142 The control plane shall provide to the EMF the information necessary for the management plane mechanisms that determine asset utilization and support periodic and on-demand clean-up of network resources. Guiding Principle
R 143 Signaling messages affected by control plane outages should not result in partially established connections remaining within the network. Phase 1
R 144 If a signaling channel or control channel has been detected as failed, new connection requests or requests in the process of being completed may be dropped. Guiding Principle
R 145 The control plane shall ensure that signaling messages are routed around failure(s). This may be accomplished via use of alternate or back-up channels. Phase 1.
R 146 Control channel and signaling software failures shall not cause management plane failures. Guiding Principle
R 147 Recovery from control plane failures shall result in complete recovery of network state. Upon recovery from failure, existing connection states must be recovered. For connections whose states changed during the failure (such as due to a user request to tear down a connection), the states of these connections must be synchronized. Phase 1
R 148 If control plane component misbehaviors are detectable, a misbehaving component must not affect existing connections. Guiding Principle
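The clean-up rule in R 141 through R 143 amounts to a simple invariant: after recovery, established connections survive untouched and any partially established connections are released so that no frozen resources remain. A hedged sketch, with a purely illustrative data model:

```python
# Sketch of post-recovery clean-up per R 141: keep established
# connections, release anything left partially set up. The connection
# records here are hypothetical.

def cleanup_after_recovery(connections):
    """Split connections into (kept, released_ids) per R 141.c/d."""
    kept, released = [], []
    for conn in connections:
        if conn["state"] == "established":
            kept.append(conn)            # R 135/R 141: traffic is untouched
        else:
            released.append(conn["id"])  # R 141.c/d: free the resources
    return kept, released

conns = [{"id": 1, "state": "established"},
         {"id": 2, "state": "partial"},
         {"id": 3, "state": "establishing"}]
kept, released = cleanup_after_recovery(conns)
assert [c["id"] for c in kept] == [1] and released == [2, 3]
```

In a real implementation this pass would run as part of the state resynchronization of R 147, with the released resources reported to the EMF per R 142.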
In order to allow the switched transport services to expand into a global service, and in order to support different client signals by the switched transport network, the NNI signaling and routing mechanisms must be scalable and should provide fast execution of the requested connection services. At the same time, protection and restoration schemes should be fast enough to ensure that service disruptions are within acceptable levels.
Scalability refers to the ability of the control plane to support ever-increasing requests and different clients on an existing switched infrastructure. Performance refers to the ability of the control plane to complete connection requests within the same (or a similar) timeframe as the network grows.
While initial applications will run on smaller networks, considering the number of POPs and the number of small nodes in metro areas, an example of network size is given below:
· 1000 nodes in the core network
· 100000 nodes in metro and access networks.
ED note: global change to use “transport plane” in occurrences of “data plane”. (after ITU meeting)
R 149 The signaling network shall not be assumed to have the same physical connectivity as the data plane, nor shall the data plane and control plane traffic be assumed to be congruently routed. Phase 1
R 150 Neither the signaling nor the routing protocols shall assume that any node in the transport network belongs to more than one domain at the same layer or at the same hierarchical level. Phase 1
R 151 The control plane addressing structure defined for the NNI must allow support for control plane growth in the future. Phase 1
R 152 The control plane must support transport plane growth in the future. Phase 1
R 153 Control plane topology for supporting communications must be scalable while maintaining the performance objectives specified. Guiding Principle
As the switched network grows, different network domains may be created, leading to both aggregation and segmentation of existing domains based on business-related activities. This segmentation and aggregation of domains (which may include areas or ASs) may be performed manually, with many implications for updating routing tables. In order to support a switched network that automatically reconfigures its domain information to better align with the scalability and performance objectives, the segmentation and aggregation function may be automated.
R 154 The control plane should be able to support segmentation of operating control domains into a set of sub-domains. Phase 1
R 155 Such segmentation operation may be supported via manually provisioned operation or via an automated operation. Guiding Principle
R 156 This operation (domain segmentation) shall be supported without affecting established connections.
R 157 The control plane should be able to support aggregation of multiple operating domains into a domain. Phase 2
R 158 Such aggregation operation may be supported via manually provisioned operation or via an automated operation. Guiding Principle
R 159 This operation (domain aggregation) shall be supported without affecting established connections.
R 160 The routing protocol shall be scalable without changing the protocol to support transport network growth in the areas of link capacity, number of nodes, number of hierarchical levels and number of networks. Additions of NEs, links, customers, or domains shall not require an overhaul of the routing protocol. Phase 1
R 161 The routing protocol must satisfy its performance objectives with increasing network size. Phase 1
R 162 The routing protocol design shall keep the network size effect as small as possible. Guiding Principle
R 163 The routing protocol(s) shall be able to minimize global information and keep information locally significant as much as possible. Guiding Principle
R 164 Topology summarization and connectivity abstraction shall be supported to ensure network scalability as specified above. Phase 1
R 165 The signaling protocol shall be scalable without changing the protocol to support transport network growth in the areas of link capacity, number of nodes, number of hierarchical levels and number of networks. Additions of NEs, links, customers, or domains shall not require an overhaul of the signaling protocol. Phase 1
R 166 The signaling protocol must satisfy its performance objectives with increasing network size. Phase 1
R 167 Control plane communications should attempt to avoid overload under failure/overload conditions, by ensuring that critical messages do not get locked out and that control messages do not overwhelm control plane operations.
R 168 Control Plane communications networks shall support the message stream increases due to failure/overload conditions (including both routing updates and restoration signaling). Guiding Principle
R 169 Control communications shall support assignment of priority levels for routing updates and individual types of signaling messages. Phase 1
R 170 The control plane shall ensure that there will not be unaccountable network resources. Phase 1
R 171 The control plane shall enable periodic or on demand clean-up of network resources, as requested and administered by the management plane. Phase 1
Due to existing limitations of routing protocols, it may be necessary to establish routing domains within individual subnetworks. Furthermore, transport networks may be built in a tiered, hierarchical architecture. Also, by applying control plane support to service and facilities management, separate and distinct network layers may need to be supported across the same inter-domain interface.
For example, transport networks may be divided into global, continental, core, regional, metro, and access networks. Due to differences in transmission technologies, services, and multiplexing needs, the different types of networks are served by different types of network elements and often have different capabilities. The diagram below shows an example three-level hierarchical network.
Figure 1 3-level hierarchy example
R 172 Multi-level hierarchies shall be supported to allow carriers to configure their networks as needed. Phase 1
R 173 NNI 1.0 shall support a minimum of 4 hierarchy levels. Phase 1
R 174 The routing protocol(s) shall support options for hierarchical routing information dissemination, including abstract topology. Phase 1
R 175 The routing protocol(s) shall minimize global information and keep information locally significant as much as possible. Guiding Principle
R 176 Over an E-NNI crossing trust boundaries, only reachability information, next routing hop, and service capability information should be exchanged. Other network-related information shall not leak out to other networks. Phase 2
Subnetworks may have multiple points of inter-connection. All relevant NNI functions, such as routing, reachability information exchanges, and inter-connection topology discovery, must recognize and support multiple points of inter-connection between subnetworks.
Dual inter-connection is often used as a survivable architecture.
Figure 2 shows an example of dual inter-connection. Subnetworks X and Y have points of inter-connection A and B.
Figure 2 Dual Inter-connection Example
We identify network elements by their generic name and office location.
It is assumed that traffic crossing inter-connection point A needs to be protected by the point of interconnection B in case of network failures as follows:
R 177 If control plane communication is completely lost to a border node, or an internetworking border node’s control plane fails: Phase 1
R.177.a Established connections shall not be affected.
R.177.b The connected NE over an NNI shall realize it has lost its control communication link to its counterpart and shall advertise within its subnetwork, that the NNI interface has failed.
R.177.c Connection management activities interrupted by the failure are handled per the requirements in section 8.
R.177.d Appropriate alarms are issued and recovery of the control plane is handled per requirements in section 8.
R 178 In the case where all connectivity to a border node is lost (i.e., both control and data plane connectivity) due to multiple link failures or to complete border node failure:
R.178.a Neighboring nodes shall advertise the loss of the interconnection point between the two domains. Phase 1
R.178.b Failed connections that were using that border node to cross multiple domains will be restored using an alternate path via the alternate domain inter-connection border node. Phase 2
R.178.cAppropriate alarms shall be issued. Phase 1
Under normal conditions it is highly desirable that traffic can be routed over either of the inter-connection points, to avoid unequal loading of the nodes.
R 179 Routing algorithms used for transport networks shall include support of multiple points of interconnection between two domains. Phase 1
Another potentially desirable architecture, currently under investigation, uses connectivity through a different subnetwork as an alternative to dual interconnection. As an example, a metro subnetwork without the possibility of being dual-homed on a core or regional network may have a connection to another metro network, to be used in case its connecting node to the core subnetwork fails. The requirements listed above still apply, but in this example alternate routes will cross a different set of subnetworks.
Figure 3 Dual interconnection via a different subnetwork
An excessive rate of advertised topology, resource, and reachability updates, together with high routing convergence times, may cause control network instability, contributing to the network's potential inability to provide correct and loop-free routing. This is particularly relevant when hop-by-hop routing is employed.
R 180 The duration of unstable operational conditions should be minimized and mechanisms to speed-up convergence and to reduce the number of route flaps after link-state updates shall be supported. Guiding Principle
R 181 To minimize the amount of information flow during updates only changes should be advertised periodically on a short time scale. Phase 1
R 182 Major state changes should be advertised as they occur. Phase 1
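R 181 and R 182 together define a two-tier advertisement policy: minor changes are batched and flooded periodically on a short time scale, while major state changes are flooded as they occur. A hedged sketch of the decision, where the change kinds and thresholds are assumptions:

```python
# Illustrative advertisement policy for R 181/R 182: batch minor resource
# deltas for the next periodic flood; flood major state changes (e.g.,
# link up/down) immediately. Change kinds here are hypothetical.

MAJOR_CHANGES = {"link-down", "link-up"}

def should_advertise_now(change, pending_batch):
    """Return True if the update must be flooded immediately."""
    if change["kind"] in MAJOR_CHANGES:   # R 182: major change -> advertise now
        return True
    pending_batch.append(change)          # R 181: hold for the periodic flood
    return False

batch = []
assert should_advertise_now({"kind": "link-down"}, batch)
assert not should_advertise_now({"kind": "bandwidth-delta"}, batch)
assert len(batch) == 1
```

Damping of this kind directly serves R 180, since it bounds the rate of link-state updates and so reduces route flaps after each change.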
After transitory control plane failures, it might be desirable to pick up a pre-established adjacency rather than to create a new one, which would force former adjacencies to be removed from the link-state database.
R 183 Resynchronization shall be supported after transitory failures of the control plane. Phase 1
It is likely that multiple links will be bundled into one logical link with an associated control channel. However, it is also possible that many control channels will still be required between neighboring nodes.
R 184 Flooding optimizations shall be supported on a per-neighbor basis in order to avoid overheads when multiple links exist between neighboring nodes. Guiding Principle
R 185 Mechanisms to ensure loop-free flooding should be provided. Guiding Principle
In order to ensure reliable and loop-free flooding the following requirements need to be observed.
R 186 Routing protocols shall be reliable and resilient to many types of failure. Inter-domain link-state update messages should reach every single node within the flooding scope limits. Phase 1
R 187 Expired link-state information shall be removed from every node independently. Phase 1
Finally, another aspect limiting quick convergence is link-state database overflow.
R 188 Link state database overflow should be avoided. Guiding Principle
Contention is a problem that arises when two independent requests for the same resource arrive at the same time. Unresolved contention in the control plane may cause call blocking (see G.7713 for details).
In this section we focus on general requirements for contention resolution in optical networks.
R 189 Contention avoidance support is preferable to contention resolution. However, when contention occurs in establishing connections over the NNI, there shall be at least one and at most N attempts at contention resolution before returning a negative acknowledgment, where N is a configurable parameter with a default value of 3. Guiding Principle
R 190 Signaling shall not progress through the network with unresolved label contention left behind. Phase 1
R 191 The control plane shall not allow cross-connections with unresolved label contention on the outgoing link. Phase 1
R 192 Acknowledgements of any requests shall not be sent until all necessary steps to ensure request fulfillment have been successful. Phase 1
R 193 Contention resolution attempts shall not result in infinite loops. Phase 1
R 194 Contention resolution mechanisms must minimize control signaling and latency overheads. Guiding Principle
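The bounded retry rule of R 189, together with the no-infinite-loop rule of R 193, can be sketched as a simple loop. The resolution function here is a hypothetical stand-in for a real label-reassignment procedure; only the control flow is the point.

```python
# Sketch of R 189/R 193: at least one and at most n_attempts tries at
# contention resolution (default 3) before a negative acknowledgment.
# try_alternate_label is a placeholder for a real resolution procedure.

def resolve_contention(try_alternate_label, n_attempts=3):
    """Return a resolved label, or None (negative ack) after n_attempts."""
    for attempt in range(n_attempts):       # R 193: the loop is finite
        label = try_alternate_label(attempt)
        if label is not None:
            return label                    # contention resolved
    return None                             # R 189: negative ack upstream

# Example: only the third alternate label is free.
free = {2: "lambda-7"}
assert resolve_contention(lambda i: free.get(i)) == "lambda-7"
assert resolve_contention(lambda i: None) is None
```

Per R 192, a positive acknowledgment would be sent only after the resolved label is actually committed, and per R 190 no unresolved contention is left behind when signaling progresses.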
R 195 Inter-domain signaling shall comply with G.8080 and G.7713 (ITU). Phase 1
R 196 The inter-domain signaling protocol shall be agnostic to the intra-domain signaling protocol within any of the domains within the network. Phase 1
R 197 Inter-domain signaling shall be independent of the UNI signaling.
R 198 Inter-domain signaling shall allow for different UNI signaling protocols at the edges of the connections (shall allow for clients using different protocols at the UNI).
R 199 Inter-domain signaling shall allow for lack of UNI signaling protocols at the edges of the connections (i.e., shall not be dependent on client devices at both edges of the connections support of the UNI signaling)
R 200 Inter-domain signaling shall support both strict and loose routing. Phase 1
R 201 Inter-domain signaling shall not be assumed necessarily congruent with routing. It should not be assumed that the same exact nodes are handling both signaling and routing in all situations. Phase 1
R 202 Inter-domain signaling shall support all connection management actions:
R.202.a Per individual connections Phase 1
R.202.b Per groups of connections Phase 2
R 203 Inter-domain signaling shall support inter-domain notifications.[3] Phase 1
R 204 Inter-domain signaling shall support per connection global connection identifier for all connection management actions within a carrier’s network. Phase 1
R 205 Inter-domain signaling shall support both positive and negative responses for all requests, including the cause, when applicable. Phase 1
R 206 Inter-domain signaling shall support all the connection attributes representative of the connection characteristics of the individual connections in scope. Phase 1
R 207 Inter-domain signaling shall support crank-back and rerouting.[4] Phase 1
R 208 Inter-domain signaling shall support graceful deletion of connections including of failed connections, if needed. Phase 1
R 209 The inter-domain routing protocol shall comply with G.8080 (ITU). Phase 1
R 210 The inter-domain routing protocol shall be agnostic to the intra-domain routing protocol within any of the domains within the network. Phase 1
R 211 The inter-domain routing protocol shall not impede any of the following routing paradigms within individual domains: Phase 1
R.211.a Hierarchical routing
R.211.b Step-by-step routing
R.211.c Source routing
R 212 The exchange of the following types of information shall be supported by inter-domain routing protocols: Phase 1
R.212.a Inter-domain topology
R.212.b Per-domain topology abstraction
R.212.c Per-domain reachability information
R.212.d Metrics for routing decisions supporting load sharing, a range of service granularity and service types, restoration capabilities, diversity, and policy.[5]
R 213 Inter-domain routing protocols shall support per domain topology and resource information abstraction. Phase 1
R 214 Inter-domain protocols shall support reachability information aggregation.
R 215 The routing protocol shall provide sufficient information so as to allow path computation by various constraints: Phase 1
R.215.a Cost
R.215.b Load sharing
R.215.c Diversity[6]
R.215.d Service class
R 216 The inter-domain routing protocol shall support policy-based constraints. Phase 1
R 218 These mechanisms should protect against both malicious attacks against the optical network as well as unintentionally malfunctioning control entities (for example, due to software errors). To reduce implementation cost, improve manageability, enhance interoperability, reduce risk of errors, and provide compatibility with other protocols, these mechanisms should be based on a minimal set of comprehensive key management and network or transport layer security solutions. Guiding Principle
These Guiding Principles are intended to reduce the number of different security measures from what were specified in UNI 1.0, provide options for more extended coverage, include a common method to secure additional protocols, allow compatibility with UNI 2.0 security, and reduce the need for manual intervention.
R 219 Network information (except reachability of network clients) shall not be advertised across administrative boundaries. The advertisement of network information outside a carrier’s network shall be controlled and limited in a configurable, policy-based fashion. Additionally, private network information may be protected with a confidentiality mechanism (see R 217). Phase 1
R 220 The signaling network shall be capable of deploying security services so that all unauthorized access can be blocked, and parties may optionally be authenticated (see R 217). Phase 1
R 221 The signaling network topology and the signaling node addresses shall not be advertised outside a carrier’s domain of trust. Additionally, network topology and node addresses may be protected with a confidentiality mechanism (see R 217). Phase 1
The NNI should provide (optional) mechanisms in support of the following requirements:
R 222 Neighbor discovery should be capable of supporting integrity and authentication security services. Guiding Principle
R 223 Service discovery should be capable of supporting integrity and authentication security services. Guiding Principle
R 224 Routing information exchanges should be capable of supporting integrity and authentication security services. Phase 1
The NNI should provide optional mechanisms to ensure origin authentication, message integrity, and confidentiality for connection management requests, such as connection set-up and teardown. Additionally, the NNI may include mechanisms for ensuring the non-repudiation of connection management messages. These optional mechanisms may be employed across an interface, depending on the interface characteristics, peering relation of the domains, type of control network (DCN) used, etc. Typically these mechanisms may be more applicable to an E-NNI interface.
R 225 The NNI should provide optional mechanisms for origin authentication, message integrity, and confidentiality of connection management (signaling) messages. Phase 1
R 226 The NNI should provide optional non-repudiation mechanisms for signaling messages. Phase 2
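The origin-authentication and message-integrity mechanisms of R 225 can be illustrated with a keyed-hash tag attached to each signaling message. This is only a sketch under assumptions: the pre-provisioned shared key, the message format, and the choice of HMAC-SHA-256 are hypothetical illustrations, not mechanisms defined by this document.

```python
import hashlib
import hmac

# Hypothetical pre-provisioned key shared between two adjacent domains.
SHARED_KEY = b"pre-provisioned-inter-domain-key"

def sign_message(payload: bytes, key: bytes = SHARED_KEY) -> bytes:
    """Produce an origin-authentication / integrity tag for a signaling payload."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_message(payload: bytes, tag: bytes, key: bytes = SHARED_KEY) -> bool:
    """Verify the tag on receipt; constant-time compare avoids timing leaks."""
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Illustrative connection set-up message (fields are made up).
msg = b"CONNECTION_SETUP src=TNA-A dst=TNA-B bw=STS-3c"
tag = sign_message(msg)
assert verify_message(msg, tag)             # authentic message accepted
assert not verify_message(msg + b"X", tag)  # tampered message rejected
```

Confidentiality (encryption) and non-repudiation (R 226, which requires asymmetric signatures) would need additional mechanisms beyond this integrity sketch.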
The desired level of security depends, in part, on the type of interface and the accounting relation between the two adjacent sub-networks or domains. Typically, in-band control channels are perceived as more secure than out-of-fiber channels, parts of which may be co-located with a public network.
Control plane capabilities must support and enhance current management plane capabilities: flow-through provisioning and support processes; the ability to support multiple grades of service guarantees; rapid restoration; and the ability to provide the management plane with the information necessary for root cause analysis across sub-network boundaries and for determining capacity growth requirements.
This section is not intended to be a detailed requirements document; rather, it is intended to provide guidance on OAM&P-related control plane functionality. The focus of this section is to give guidelines for the relationship between the control plane, the transport plane and the EMF, and to describe the information flow and control functions that are related to the control plane.
The following requirements are OAM&P centered, but they reflect on control plane functionality. This section also addresses the need for the control plane to advertise its health and the health of its sub-components to the EMF.
This section is intended to be consistent with the requirements of ITU-T recommendation G.7710. Within the ITU-T suite of recommendations that address control plane architectures and functionalities, management requirements are also identified.
ED Note: explanatory text will be added after the May ITU meeting. (Curtis)
R 227 The control plane interfaces shall be configurable and their behavior shall be consistent with the configuration (i.e., exterior versus interior interfaces). Guiding Principle
R 228 Control plane failures shall not affect the normal operation of a configured and operational management plane or data plane. Guiding Principle
R 229 Management plane failures shall not affect the normal operation of a configured and operational control plane or data plane. Guiding Principle
R 230 The control plane shall not affect the performance of the management plane communications. Guiding Principle
This section is intended to be consistent with the requirements of ITU-T recommendation G.7710, and each supported technology is also required to meet the technology-specific requirements of recommendations such as G.784 for SDH and G.874 for OTN.
R 231 If the control plane is physically separate from the network element, then the EMF shall be able to configure the control plane physical interfaces to the network element in accordance with security and data transfer requirements. Guiding Principle
G.8080 provides the general architectural expectations for management of the control plane as a system, while each of the function-specific recommendations (e.g., G.7712 for the Data Communications Network, G.7713 for the signaling function, G.7714 for discovery functions, G.7715 for the routing functions, and G.7716 for link management functions) identifies the management requirements and interactions for that function. This section is intended to be consistent with the requirements of ITU-T recommendation G.7710 for information exchange requirements.
All of the requirements in this section apply to the communications required between the control plane and equipment management function (EMF) in support of control plane operation. The requirements in this section DO NOT apply to the communications between the EMF and the transport atomic function. (See G.7710)
R 232 The EMF shall be able to configure NNI protection groups. Guiding Principle
R 233 The EMF shall have the ability to either assign or remove resources under the NNI control plane. Guiding Principle
R 234 If resources are supporting an active connection and the resources are requested to be de-allocated from the control plane, the control plane shall transfer control of the connection to the EMF, while ensuring that this process will not cause a disruption of the established connection. Guiding Principle
R 235 The EMF shall be able to prohibit the control plane from using certain transport resources, not currently being used for a connection, for new connection set-up requests. (There are various reasons for the EMF needing to do this, such as maintenance activities.) Guiding Principle
R 236 The EMF shall be allowed to prohibit the use of any existing resource for new connections. This is to allow connections to clear either normally or by a forced clear from the management plane, and to prevent a race condition that would put more connections on the resource.
R 237 The EMF shall be able to query on demand the status of the connection request. Guiding Principle
R 238 The control plane shall report to the EMF the success or failure of a connection request. Guiding Principle
R 239 Upon a connection request failure, the control plane shall report to the EMF a cause code identifying the reason for the failure. Guiding Principle
R 240 Recurring contention on the same resource shall be reported to the EMF. Guiding Principle
R 241 The control plane shall not assume authority over EMF provisioning functions. Guiding Principle
R 242 The control plane shall not assume authority over EMF performance management functions. Guiding Principle
R 243 Detailed alarm information shall be included in the alarm notification for each of the control plane components including: the location of the alarm, the time the alarm occurred, the probable cause, and the perceived severity of the alarm. Guiding Principle
R 244 The Control Plane shall support Autonomous Alarm Reporting for each of the control plane components. Guiding Principle
R 245 The Control Plane shall support the ability to retrieve all or a subset of the Currently Active Alarms for each of the control plane components. Guiding Principle
R 246 The Control Plane shall not lose alarms for any of the control plane components. Alarms lost due to transmission errors between the Control Plane and the EMF shall be able to be recovered through EMF queries to the alarm notification log. Guiding Principle
R 247 The Control Plane should supply the equipment management function (EMF) the management information (MI signals) required to Set/Get Alarm Severity Assignment on a per-object-instance and per-alarm basis. Guiding Principle
R 248 The Control Plane shall report only the root cause alarm; lower-order alarms shall be suppressed. Guiding Principle
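The alarm record fields of R 243 and the root-cause suppression of R 248 can be sketched as follows. This is an illustration only: the field names mirror R 243 (location, time, probable cause, perceived severity), but the correlation rule (a lower-order alarm references the probable cause of the alarm that induced it) and all example values are assumptions.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Alarm:
    location: str          # where the alarm occurred (R 243)
    time: datetime         # when the alarm occurred (R 243)
    probable_cause: str    # probable cause (R 243)
    severity: str          # perceived severity (R 243)
    caused_by: Optional[str] = None  # cause of the higher-order alarm, if any

def root_cause_alarms(alarms):
    """Per R 248: report only root-cause alarms, suppressing lower-order ones."""
    reported_causes = {a.probable_cause for a in alarms}
    return [a for a in alarms
            if a.caused_by is None or a.caused_by not in reported_causes]

now = datetime.now(timezone.utc)
alarms = [
    Alarm("node-1/port-3", now, "LOS", "critical"),
    Alarm("node-1/port-3", now, "AIS", "major", caused_by="LOS"),
]
# Only the root-cause LOS alarm is reported; the consequent AIS is suppressed.
assert [a.probable_cause for a in root_cause_alarms(alarms)] == ["LOS"]
```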
R 249 The EMF shall be able to query the operational state of all control plane resources. Guiding Principle
R 250 The control plane shall provide notifications of current-period and historical counts of call attempts, call set-up failures, successes, and contention occurrences across the NNI interface. Guiding Principle
R 251 The EMF shall be able to query the call attempts, call set-up failures, successes, and contention occurrences. Guiding Principle
R 252 Topological information learned in the discovery process shall be able to be queried on demand from the EMF. Guiding Principle
R 253 The EMF shall be able to tear down connections established by the control plane both gracefully and forcibly on demand. (also under deletion process). Phase 1
R 254 The EMF shall be able to query connection routing information from the control plane by connection ID. Phase 1
R 255 The following PM capabilities are considered important: Guiding Principle
R.255.a Monitoring the number of routing table updates
R.255.b Monitoring the latency of the control plane component communications.
R.255.c Monitoring the number of component restarts.
R 256 The control plane shall maintain the current state for each of the following items:
R.256.a Calls under its control
R.256.b Connections under its control
R.256.c Links under its control
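The state maintenance of R 256, together with the EMF's ability to query operational state (R 249), can be sketched as a simple per-entity state store. All names and state values here are hypothetical assumptions; the requirement itself does not prescribe any data model.

```python
# Illustrative sketch of the state the control plane maintains per R 256,
# with an EMF-style query (cf. R 249). Entity kinds follow R.256.a-c.
class ControlPlaneState:
    def __init__(self):
        # One state table per entity kind: calls, connections, links.
        self._state = {"call": {}, "connection": {}, "link": {}}

    def set_state(self, kind: str, ident: str, state: str) -> None:
        self._state[kind][ident] = state

    def query(self, kind: str, ident: str) -> str:
        """EMF query of the current state of a single entity."""
        return self._state[kind].get(ident, "unknown")

cp = ControlPlaneState()
cp.set_state("call", "call-42", "active")
cp.set_state("connection", "conn-42-1", "established")
cp.set_state("link", "link-7", "in-service")
assert cp.query("call", "call-42") == "active"
assert cp.query("connection", "conn-99") == "unknown"
```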
R 257 Accounting information shall be provided by the control plane to the management plane. Phase 1
R 258 The control plane shall record network usage per individual client/connection. Phase 1
R 259 Usage information shall be able to be queried by the management plane. Guiding Principle
Ed note: this section may belong somewhere else in the document and will be revised; it may belong in a separate document. Move to the end.
Connection attributes have been defined within the OIF UNI 1.0 signaling specification []. These attributes are used to specify the characteristics of a connection, as requested by a client. Importantly, as no topology or reachability information is exchanged over the UNI interface, these attributes have only been defined for connection-related events (i.e., connection signaling). However, the NNI must implement both connection-related signaling and routing (I-NNI) / reachability (E-NNI) information exchange.
In this section, we examine the applicability of each of the UNI 1.0 signaling message attributes for the NNI.
Source and destination TNA addresses, port identifiers, and multiplex channel identifier: This information will need to be conveyed across an NNI when signaling for connection-related functions. In addition, the TNA addresses must be conveyed across NNIs, whether via advertisements or via directory servers. Phase 1
Local connection ID: each connection will have to be identified over a UNI using a local connection ID. This ID is distinct from any identifier used within each (sub-)network. N/A
Encoding type, SONET/SDH traffic parameters, directionality, and GPID: all of these parameters must be signaled across the NNI for each requested connection. In addition, summarized network capabilities that include supported link encoding types and SONET/SDH traffic parameters will need to be advertised across the NNI. Phase 1
Service level: Service level parameters will need to be conveyed across NNIs. If the definition of a given service level is service provider specific (i.e., not standardized), then a connection may require that different service levels be specified for each different carrier through which the connection is routed. This issue does not arise over an I-NNI, assuming that a service provider uses a consistent service level representation across all sub-networks. However, in traversing across multiple service providers, either a standard service level representation must be developed, or a conversion between service levels should be supported across each E-NNI (e.g., gold service for one provider may correspond to silver service for another – this conversion should be done across the E-NNI). Phase 1
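The E-NNI service-level conversion described above (e.g., one carrier's gold corresponding to another's silver) can be sketched as a per-boundary mapping table. The carrier names, level names, and the mapping itself are hypothetical examples; in practice the table would reflect the bilateral agreement between the two providers.

```python
# Hypothetical bilateral mapping applied at an E-NNI boundary:
# (originating carrier, its service level) -> equivalent level in the next carrier.
SERVICE_LEVEL_MAP = {
    ("carrier-A", "gold"): "silver",
    ("carrier-A", "silver"): "bronze",
}

def convert_service_level(from_carrier: str, level: str) -> str:
    """Translate a service level when a connection request crosses an E-NNI."""
    try:
        return SERVICE_LEVEL_MAP[(from_carrier, level)]
    except KeyError:
        # No agreed equivalence: the request cannot be honored as specified.
        raise ValueError(f"no agreed mapping for {level!r} from {from_carrier}")

assert convert_service_level("carrier-A", "gold") == "silver"
```

Note that over an I-NNI this conversion is unnecessary, assuming the provider uses a consistent service-level representation across all of its sub-networks.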
Diversity: diversity requests will need to be conveyed across NNI boundaries (see section ?? for more detailed diversity requirements). In addition, if SRLG-diversity is supported across an NNI boundary, then SRLG-related information must also be advertised and/or signaled across these boundaries. Phase 1 for signaling TBD for routing
Contract ID: The contract ID is a service provider assigned identifier that is used to describe the contract between a client and a service provider. A client should only have to establish a contract with the service providers to which it is directly connected. Thus, the contract ID will not have to be conveyed over either I-NNIs or E-NNIs. N/A to signaling or routing
Connection status and error codes: The connection status and error codes will have to be conveyed in connection-related messages over all interfaces, including NNIs. However, connection status is clearly not relevant to topology and resource discovery / reachability information. Phase 1
There will be a number of parameters that need to be either signaled or advertised over an NNI that are not required for the UNI. The following is by no means a complete list of additional parameters that should be supported by the NNI but were not supported by UNI 1.0.
· Global Call ID: this needs to be unique within a carrier's network. It shall be generated by the network. Phase 1
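One way a network could generate a call ID that is unique within a carrier's network is to combine a carrier-scoped prefix with a locally generated unique value. The format below (carrier prefix plus a UUID) is purely illustrative; the requirement only states that the network generates the ID and that it is unique within the carrier's network.

```python
import uuid

def generate_global_call_id(carrier_prefix: str) -> str:
    """Return a call ID unique within the carrier identified by the prefix.

    Hypothetical format: "<carrier-prefix>:<uuid4>".
    """
    return f"{carrier_prefix}:{uuid.uuid4()}"

call_id = generate_global_call_id("carrier-A")
assert call_id.startswith("carrier-A:")
# Successive IDs are distinct (UUID collisions are negligible in practice).
assert generate_global_call_id("carrier-A") != generate_global_call_id("carrier-A")
```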
· Traffic parameters for other encoding types: UNI 1.0 supports only SONET/SDH encoding – however, the NNI must be able to support other encoding types (e.g., Ethernet – see section ??). Encoding specific parameters for these other encoding types (e.g., bandwidth, transparency) must be included in both connection-related messages (i.e., signaling) and routing / reachability information. Phase 2
· SRLG information: SRLG-related information would need to be advertised for determination of physically diverse routes. In addition, as strict end-to-end routes are not calculated over NNI boundaries, SRLG information may also need to be conveyed within signaling messages. However, service providers may find it very unattractive to provide this information over E-NNIs (trust boundaries). In addition, SRLGs are unlikely to have meaning across these trust boundaries, and potentially even over I-NNIs. TBD
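The SRLG-diversity determination mentioned above reduces to checking that two candidate routes share no risk group. The sketch below assumes a per-link SRLG assignment is available (which, as noted, may not hold across trust boundaries); the link names and SRLG numbers are made up.

```python
# Hypothetical per-link SRLG assignment within one routing domain.
SRLG_OF_LINK = {
    "link-1": {10, 11},
    "link-2": {12},
    "link-3": {10},   # shares a conduit (SRLG 10) with link-1
}

def srlgs_of_route(route):
    """Union of the SRLGs of every link on the route."""
    return set().union(*(SRLG_OF_LINK[link] for link in route))

def srlg_diverse(route_a, route_b) -> bool:
    """Two routes are SRLG-diverse iff their SRLG sets do not intersect."""
    return srlgs_of_route(route_a).isdisjoint(srlgs_of_route(route_b))

assert srlg_diverse(["link-1"], ["link-2"])
assert not srlg_diverse(["link-1"], ["link-3"])  # common SRLG 10
```

The difficulty flagged in the text is precisely that SRLG numbering is local: across an E-NNI, SRLG 10 in one domain need not mean anything in the other, so this check cannot be applied naively across trust boundaries.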
The authors would like to acknowledge all the authors of the NNI study document for the large set of requirements provided there, John Strand for the work on applications, and the authors of the separate restoration contribution that was used. The authors would like to thank Richard Graveman, Dimitrios Pendarakis, Ronald Roman, Douglas Zuckerman, and …….
References
[1] OIF Carrier Group, Yong Xue and Monica Lazer (eds.), “Carrier Optical Services Framework and Associated Requirements for UNI”, Optical Internetworking Forum oif2000.155, Sept. 13, 2000. Also available as working documents in T1X1.5 and the IETF.
[2] Y. Xue (ed.), “Carrier Optical Services Requirements”, IETF Work In Progress, draft-ietf-ipo-carrier-requirements-00.txt, July, 2001.
[3] J. Strand, “Application-Driven Assumptions And Requirements For the NNI”, oif2001.639
[4] NNI Study Group – Document Editors: Greg Bernstein (Ciena), Dongmei Wang (AT&T), “NNI Study Group Requirements & Framework – Draft”, OIF2001.535
[5] Luis Aguirre-Torres (ed.), “Carrier Requirements for Restoration over NNI”, OIF2002.050
[1] Some carriers may wish to support low priority services to be discarded if capacity is required by high priority services.
[2] This actually is an NNI requirement
[3] Needed for restoration across domains.
[4] Needed for contention, restoration, and to deal with situations where re-routing is necessary to support the requests.
[5] Specific metrics need to be specified
[6] Link, node and SRLG diversity should be considered