This draft new Recommendation G.7715.1 “ASON Routing Architecture and Requirements for Link-State Protocols” provides requirements for a link-state instantiation of G.7715. A link-state G.7715 routing instantiation supports both hierarchical and source-routed path computation functions.
This recommendation provides a mapping from the relevant ASON components to distributed link-state routing functions. The mapping is one realization of the ASON routing architecture.
Recommendations G.807 and G.8080 together specify the requirements and architecture for a dynamic optical network in which optical services are established using a control plane. Recommendation G.7715 contains the detailed architecture and requirements for routing in ASON, which in conjunction with the routing architecture defined in G.8080 allows for different implementations of the routing functions. It should be noted that the various routing functions can be instantiated in a variety of ways including distributed, co-located, and centralized mechanisms.
Support of hierarchical routing levels is a key element built into this instantiation of G.7715, realized through a number of hierarchy-related link-state attributes defined within this document. This document complies with the G.7715 requirement that routing protocols at different hierarchical levels need not be homogeneous.
As described in G.807 and G.8080, the routing function is applied at the I-NNI and E-NNI reference points and supports the path computation requirements of connection management at those same reference points. Support of packet forwarding within the control plane using this routing protocol is outside the scope of this recommendation.
– ITU-T Rec. G.7713/Y.1704 (2001), Distributed Connection Management (DCM)
– ITU-T Rec. G.803 (2000), Architecture of Transport Networks based on the Synchronous Digital Hierarchy
– ITU-T Rec. G.805 (2000), Generic Functional Architecture of Transport Networks
– ITU-T Rec. G.807/Y.1301 (2001), Requirements for the Automatic Switched Transport Network (ASTN)
– ITU-T Rec. G.8080/Y.1304, Architecture of the Automatic Switched Optical Network (ASON)
– ITU-T Rec. G.7715/Y.1706, Architecture and Requirements for Routing in the Automatically Switched Optical Network
RA - Routing Area (G.8080)
RP - Routing Performer (G.7715)
RC - Routing Controller (G.8080)
RCD - Routing Control Domain (G.7715)
RDB - Routing Database (G.7715)
RA ID - RA Identifier
RC ID - RC Identifier
RCD ID - RCD Identifier
LRM - Link Resource Manager (G.8080)
TAP – Termination and Adaptation Performer
The routing architecture defined in G.8080 and G.7715 allows for different distributions of the routing functions. These may be instantiated in a variety of ways such as distributed, co-located, and centralized.
Characteristics of the routing protocol described in this document are:
1. It is a link state routing protocol.
2. It operates for multiple layers.
3. It is hierarchical in the G.7715 sense. That is, it can participate in a G.7715 hierarchy. This hierarchy follows G.805 subnetwork structure through the nesting of G.8080 RAs.
4. Source routed path computation functions may be supported. This implies that topology information necessary to support source routing must be made available.
The choice of source routing for path computation has some advantages for supporting connection management in transport networks. It is similar to the manner in which many transport network management systems select paths today.
To accommodate these characteristics the following instantiation of the G.7715 architecture is defined. Hence a compliant link-state routing protocol is expected to locate and assign routing functions in the following way:
1. In a given RA, the RP is composed of a set of RCs. These RCs co-operate and exchange information via the routing protocol controller.
2. At the lowest level of the hierarchy, each matrix has a corresponding RC that performs topology distribution. At higher levels of the hierarchy, RCs representing lower-level RAs also perform topology distribution within their level.
3. Path computation functions may exist in each RC, on selected RCs within the same RA, or could be centralized for the RA. Path computation on one RC is not dependent on the RDBs in other RCs in the RA. If path computation is centralized, any of the RDBs in the RA (or any instance) could be used.
4. The RDB is replicated at each RC within the same area, where the RC uses a distribution interface to maintain synchronization of the RDBs.
5. The RDB may contain information about multiple layers.
6. The RDB contains information from higher and lower routing levels.
7. The protocol controller is a single type (link state) and is used to exchange information between RCs within a RA. The protocol controller can pass information for multiple layers and conceptually interact with various RCs at different layers. Layer information is, however, not exchanged between RCs at different layers.
8. When a protocol controller is used for multiple layers, the LRMs associated with the protocol controllers of the RCs it interacts with must share a common TAP. This means that the LRMs share a common locality.
The scenario where an RC does not have an associated path computation function may exist when there are no UNIs associated with that RC, i.e., no connection controller queries that RC.
It must be possible to distinguish between two RCs within the same RA; this requires an RC identifier (RC ID). It should be noted that the notion of an RCD identifier is equivalent to that of an RC ID.
Before two RCs start communicating with each other they should check that they are in the same RA, particularly when a hierarchical network is assumed. Therefore an identifier for the RA (RA ID) is also defined, delimiting the scope within which one or more RCs may participate.
Both RC-ID and RA-ID are separate concepts in a hierarchical network. However, as the RA-ID is used to identify and work through different hierarchical levels, the RC-ID MUST be unique within its containing RA. Such a situation is illustrated in Figure 1, where the RC-IDs at hierarchy “level 2” overlap with those used within some of the different “Level 1” RAs.
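The uniqueness rule can be expressed as a small check. The following sketch is a hypothetical illustration, assuming RC-IDs are held per RA; the data layout and names are not normative.

```python
def rc_ids_valid(ra_to_rc_ids):
    # RC-IDs must be unique within each containing RA; reuse of the
    # same value in a different RA is permitted.
    return all(len(ids) == len(set(ids))
               for ids in ra_to_rc_ids.values())

# RC-ID 1 reused across two Level 1 RAs: allowed.
assert rc_ids_valid({"RA-A": [1, 2], "RA-B": [1, 3]})
# Duplicate RC-ID inside a single RA: not allowed.
assert not rc_ids_valid({"RA-A": [1, 1]})
```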
Another distinction between RA identifiers and RC identifiers is that RA identifiers are associated with a transport plane name space whereas RC identifiers are associated with a control plane name space.
Figure 1. Example network where RC identifiers within one RA are reused within another RA.
In the process of running an ASON network, it is anticipated that the containment relationships of RAs may need to change from time to time motivated by unforeseen events such as mergers, acquisitions, and divestitures.
The type of operations that may be performed on a RA include:
· Splitting and merging
· Adding a new RA between levels or at the top of the hierarchy
Splitting and merging of areas are best handled by allowing an RA to have multiple synonymous RA identifiers.
The process of splitting can be accomplished in the following way:
1. Adding the second identifier to all RCs that will make up the new area
2. Establishing a separate parent/child RC adjacency for the new RA identifier to at least one RC that will be in the new area
3. At a specified time, dropping the original RA identifier from the nodes being placed in the new RA. This would be done first on the nodes that are adjacent to the RCs that are staying in the old area.
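The splitting steps above can be sketched as follows. This is an illustrative model only, assuming each RC holds a set of synonymous RA identifiers; the adjacency establishment of step 2 and the per-node ordering of step 3 are not modelled.

```python
def split_ra(rcs, moving_rcs, old_ra_id, new_ra_id):
    # rcs: dict mapping an RC name to its set of synonymous RA IDs.
    # Step 1: add the second identifier to the RCs forming the new area.
    for rc in moving_rcs:
        rcs[rc].add(new_ra_id)
    # Step 2 (not modelled): establish a parent/child adjacency for
    # new_ra_id to at least one RC in the new area.
    # Step 3: at the specified time, drop the original identifier.
    for rc in moving_rcs:
        rcs[rc].discard(old_ra_id)
    return rcs

rcs = {"rc1": {"RA-OLD"}, "rc2": {"RA-OLD"}, "rc3": {"RA-OLD"}}
split_ra(rcs, ["rc2", "rc3"], "RA-OLD", "RA-NEW")
assert rcs["rc1"] == {"RA-OLD"}   # stays in the old area
assert rcs["rc2"] == {"RA-NEW"}   # moved to the new area
```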
The process of merging can be accomplished in the following way:
1. The RA identifier for the merged area is selected from the two areas being merged
2. The RA identifier for the merged area is added to the RCs in the RA being deprecated that are adjacent to RCs in the area that the merged area identifier is taken from
3. The RA identifier for the merged area is added to all other RCs in the RA being deprecated
4. The RA identifier for the merged area is added to any parent/child RC adjacencies that are supporting the RA identifier being deprecated
5. The RA identifier being deprecated is now removed from the RCs that came from the area being deprecated.
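The merging steps can be sketched in the same illustrative model, again assuming each RC holds a set of synonymous RA identifiers; the adjacency ordering of steps 2 and 4 is not modelled.

```python
def merge_ras(rcs, surviving_ra_id, deprecated_ra_id):
    # rcs: dict mapping an RC name to its set of synonymous RA IDs.
    deprecated = [rc for rc, ids in rcs.items() if deprecated_ra_id in ids]
    # Steps 2-3: add the surviving identifier to the RCs in the
    # deprecated RA (adjacent RCs first in practice; not modelled here).
    for rc in deprecated:
        rcs[rc].add(surviving_ra_id)
    # Step 4 (parent/child adjacencies) is outside this sketch.
    # Step 5: remove the deprecated identifier.
    for rc in deprecated:
        rcs[rc].discard(deprecated_ra_id)
    return rcs

rcs = {"a1": {"RA-1"}, "b1": {"RA-2"}, "b2": {"RA-2"}}
merge_ras(rcs, "RA-1", "RA-2")
assert rcs == {"a1": {"RA-1"}, "b1": {"RA-1"}, "b2": {"RA-1"}}
```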
As mentioned above, an RA MUST be able to support multiple synonymous RA Identifiers. Before merging two areas, it must be ensured that their RA Identifiers are unique.
Adding a new area at the top of the hierarchy or between two existing areas in the hierarchy can be accomplished using similar methods as those explained above for splitting and merging of RAs. However, the extent of reconfiguration needed depends on how a RA is uniquely identified. Two different approaches exist for defining an RA identifier:
1. RA identifiers are scoped by the containing RA. Consequently, unique RA "names" consist of a string of RAs identifiers starting at the root of the hierarchy. The parent/child relationship that exists between two RAs is implicit in the RA "name".
2. RA identifiers are global in scope. Consequently, a RA will always uniquely be named by just using its RA identifier. The parent/child relationship that exists between two RAs needs to be explicitly declared.
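The two approaches can be contrasted in a small sketch. All identifier values and data structures below are hypothetical illustrations, not normative formats.

```python
# Approach 1: RA identifiers scoped by the containing RA. The unique
# "name" is the string of identifiers starting at the root; parentage
# is implicit in the name itself.
scoped_name = ("root", "ra-east", "ra-metro7")

def parent_of_scoped(name):
    # The parent's name is simply the prefix of the child's name.
    return name[:-1]

# Approach 2: globally unique RA identifiers; the parent/child
# relationship must be declared explicitly.
declared_parents = {"ra-metro7": "ra-east", "ra-east": "root"}

def parent_of_global(ra_id):
    return declared_parents.get(ra_id)

assert parent_of_scoped(scoped_name) == ("root", "ra-east")
assert parent_of_global("ra-metro7") == "ra-east"
```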
Since RCs need to use the RA Identifier to identify if an adjacent RC is located in the same RA, the RA Identifier will need to be known prior to bringing up adjacencies.
If the first method is used, then insertion of a new area will require all RCs in all areas below the point of insertion to have the new RA identifier provisioned into them before the new area can be inserted. Likewise, once the new area has been inserted, the old RA identifier will need to be removed from the configuration active in these RCs. As the point of insertion is moved up in the hierarchy, the number of nodes that will need to be reconfigured will grow exponentially.
However, if RA identifiers are globally unique, then the amount of reconfiguration is greatly reduced. Instead of all RCs in areas below the point of insertion needing to be reconfigured, only the RCs involved in parent/child relationships modified by the insertion need to be reconfigured.
The ASON Routing component has identifiers whose values are drawn from several address spaces. Addressing issues that affect routing protocol requirements include maintaining separation of spaces, understanding what other components use the same space that routing uses, and what mappings are needed between spaces.
There are four broad categories of addresses used in ASON.
1. Transport plane addresses. These describe G.805 resources and multiple name spaces can exist to do this. Each space has an application that needs a particular organization or view of those resources, hence the different address spaces. For routing, there are two spaces to consider:
a. SNPP addresses. These addresses give a routing context to SNPs and were introduced in G.8080. They are used by the control plane to identify transport plane resources. However, they are not control plane addresses but are a (G.805) recursive subnetwork context for SNPs. The G.8080 architecture allows multiple SNPP name spaces to exist for the same resources. An SNPP name consists of a set of RA names, an optional subnetwork name, and link contexts.
b. UNI Transport Resource Addresses [term from G.8080]. These addresses are used to identify transport resources at a UNI reference point if they exist (SNPP links do not have to be present at reference points). From the point of view of Call and Connection Controllers in Access Group Containers, these are names. Control plane components and management plane applications use these addresses.
2. Control plane addresses for components. As per G.8080, the control plane consists of a number of components such as connection management and routing. Components may be instantiated differently from each other for a given ASON network. For example, one can have centralized routing with distributed signalling. Separate addresses are thus needed for:
a. Routing Controllers (RCs)
b. Network Call Controllers (NCCs)
c. Connection Controllers (CCs)
Additionally, components have Protocol Controllers (PCs) that are used for protocol specific communication. These also have addresses that are separate from the (abstract) components like RCs.
3. DCN addresses. To enable control plane components to communicate with each other, the DCN is used. DCN addresses are thus needed by the Protocol Controllers that instantiate control plane communication functions (generating and processing messages in protocol specific formats).
4. Management Plane Addresses. These addresses are used to identify management entities that are located in EMS, NMS, and OSS systems.
For the ASON routing function, there are:
o Identifiers for the RC itself. These are from the control plane address space.
o Identifiers for the RC Protocol Controller. These are from the control plane address space.
o Identifiers for communicating with RC PCs. These are from the DCN address space.
o Identifiers for transport resources that the RC represents. These are from the SNPP name space.
o Identifier for a management application to configure and monitor the routing function. This is from the control plane address space.
It is important to distinguish between the address spaces used for identifiers so that functional separation can be maintained. For example, it should be possible to change the addresses used for communication between RC PCs (from the DCN address space) without affecting the contents of the routing database.
This separation of name spaces does not mean that identical formats cannot be used. For example, an IPv4 address format could be used for multiple name spaces. However, they have different semantics depending on the name space they are used in. This means that an identical value can be used for identifiers that have the same format but are in different name spaces.
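The point that an identical value can recur in different name spaces can be made concrete with a small sketch; the namespace labels and values below are hypothetical.

```python
from collections import namedtuple

# A name-space-qualified identifier: the same IPv4-formatted value can
# appear in several name spaces without being the same identifier.
Identifier = namedtuple("Identifier", ["namespace", "value"])

rc_id = Identifier("control-plane", "10.0.0.1")   # names an RC
dcn_addr = Identifier("dcn", "10.0.0.1")          # names a DCN endpoint

assert rc_id.value == dcn_addr.value   # identical format and value
assert rc_id != dcn_addr               # but different identifiers
```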
The SNPP name space is one space that is used by routing, signalling, and management functions. In order for the path computation function of an RC to provide a path to a connection controller (CC) that is meaningful, they must use the same SNPP name space. For interactions between routing and signalling, common encodings of the name spaces are needed. For example, the path computation function should return a path that CCs can understand. Because SNPP name constituents can vary, any RC and CC co-ordination requires common constituents and semantics. For example, link contexts should be the same: if an RC returns, say, a card context for links, then the CC needs to be able to understand it. Similarly, crankback/feedback information given to RCs from a CC should be encoded in a form that the RC PC can understand.
The SNPP name that an NCC resolves a UNI Transport Address to must be in the same SNPP name space that both RC and CC understand. This resolution function resides in the control plane and other control plane identifiers may be associated with this function.
G.8080 does not restrict how many SNPs can be used for a CP. This means that there can be multiple SNPP name spaces for the same subnetwork. An important design consideration in routing hierarchy can be posed as a question of whether one or multiple SNPP name spaces are used. The following options exist:
1. Use a separate SNPP name space per level in a routing hierarchy. This requires a mapping to be maintained between each level. However, level insertion is much easier with this approach.
2. Use a common SNPP name space for all levels in a routing hierarchy. A hierarchical naming format could be used (e.g., PNNI addressing) which enables a subnetwork name at a given level to be easily related to SNPP names used within that subnetwork at the level below. If a hierarchical name is not used, a mapping is required between names used at different levels.
SNPP names consist of RA names, an optional subnetwork id, and link contexts. The RA name space is used by routing to represent the scope of an RC. This recommendation considers only the use of fixed-length RA identifiers. The format can be drawn from any address space that is global in scope. This includes IPv4, IPv6, and NSAP addresses.
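The composition of an SNPP name from these constituents can be sketched as follows; the "/" separator and identifier values are illustrative only, not a normative encoding.

```python
def make_snpp_name(ra_names, link_context, subnetwork=None):
    # An SNPP name: a set of RA names, an optional subnetwork name,
    # and a link context.
    parts = list(ra_names)
    if subnetwork is not None:
        parts.append(subnetwork)
    parts.append(link_context)
    return "/".join(parts)

assert make_snpp_name(["RA505", "RA7"], "link3", subnetwork="sn1") == \
    "RA505/RA7/sn1/link3"
# The subnetwork name is optional:
assert make_snpp_name(["RA505"], "link9") == "RA505/link9"
```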
The subnetwork id and link contexts are shared by routing and signalling functions. They need to have common semantics.
In this section we look at the flow of routing information up and down the hierarchy, and the relationship between routing and call control at various levels within a hierarchy.
At level N in a routing hierarchy under a link state paradigm we are primarily interested in the links (data plane) between the RCDs represented by the cooperating RCs at level N. Note, however, that in general the “node” properties of an RC are derived from the corresponding level N-1 (next lower level) RA. Note that links (data plane) between level N-1 RAs are actually level N RA links (or higher), as shown in Figure 2. In addition, in some cases it may be very useful for an RC to offer some approximate representation of the internal topology of its corresponding RCD. It is important to assume that the next lower level RA may implement a different routing protocol than the link state protocol described in this recommendation; information from lower levels is still needed. Such information flow is shown in Figure 2 between, e.g., level N-1, RC 11, of RA 505 and level N, RC 12 of RA 1313.
Figure 2. Routing hierarchy with upward flow of information from RCs.
1) Although summarization of information could be done across this interface, the lower level RC is not in a good position to understand the scope of the higher level RA and its desires with respect to summarization; hence initially this interface will convey similar link state information as a peer (same level) RC interface. This leaves the summarization functionality to the higher level RC. Hence we have a control adjacency (but no data plane adjacency) between these RCs. Also their relationship is of a hierarchical nature rather than peer.
For 2)/3) above: The physical locations of the two RCs, their relationship, and their communication protocol are not currently standardized; however they are considered two separate RCs, belonging to two separate RAs. It should be noted that no data plane or control plane adjacency exists between them.
Information is exchanged by an RC with (a) other RCs within its own routing area; (b) parent RCs in the routing area immediately higher; and (c) child RCs in any routing areas immediately below (i.e., supporting subnetworks within its routing area).
It is assumed that the RC uses a link-state routing protocol within its own routing area, so that it exchanges reachability and topology information with other RCs within the area.
However, information that is passed between levels may go through a transformation prior to being passed.
-- transformation may involve operations such as filtering, modification (change of value) and summarization (abstraction, aggregation)
This specification defines information elements for Level N to Level N+1/N-1 information exchange.
Possible styles of interaction with parent and child RCs include: (a) request/response and (b) flooding, i.e., flow up and flow down.
[Editor's note: more text may be needed on request/response]
Information that flows up and down between the RC and its parent and child RCs may include reachability and node and link topology.
-- multiple producer RCs within a routing area may be transforming and then passing information to receiving RCs at a different level; however in this case the resulting information at the receiving level must be self-consistent, i.e., coordination must be done among the producer RCs
-- the goal is that information elements should be capable of supporting interworking of different routing paradigms at the different levels, e.g., centralized at one level and link state at another. We will focus on a subset of cases: passing of reachability information; passing of topology information. A minimum amount of information might be the address of an RC in an adjacent level that can help to resolve an address.
In order to implement multi-level hierarchical routing, two issues must be resolved:
· How do routing functions within a level communicate and what information should be exchanged?
· How do routing functions at different levels communicate and what information should be exchanged?
In the process of answering these questions, the following model will be used:
Figure 3: Area Containment Hierarchy
For this model, Levels are relative, and numbered from bottom up. So, Area A and Area B are at Level n while Area C is at Level n+1.
The numbers shown in the model represent different Intermediate Systems located within the various areas, and will be referenced in the following sections.
The communication between levels describes the interface between a routing function in an aggregation area, and the routing function(s) operating in a contained area.
The following potential cases are identified:
Parent RA info received:
· Full topology. Example: different routing protocols used in different areas at the same level; routing information must be exchanged through a mutual parent area or areas. Note: local path computation has flexibility as to the detail of the route specified beyond the local area.
· Abstracted topology. Abstracted topology is received for some area(s). Note: local path computation cannot result in the full path and further route resolution will occur at a later point.
· Minimal or None. Minimal topology information may support the selection of a particular egress point. If no topology information is available then all egress points are considered equivalent for routing.
· Reachability, summarized. Reachability information is provided in the form of summarized addresses. Local path computation must be done assuming that the address is resolvable.
· Reachability, none. It is also possible that reachability is not provided for a particular address. In this case no path can be computed.
Child RA info received:
· Topology. Same comments as above on topology.
· Minimal or None. Path computation server approach.
Note: not all cases are considered useful or will be addressed.
The information flowing upward (i.e. Level n to Level n+1) and the information flowing downward (i.e. Level n+1 to Level n) are used for similar purposes -- namely, the exchange of reachability information and summarized topology for endpoints outside of an area. However, different methods may be used. The next two sections describe this further.
[More detailed text is needed in this section regarding what summarized topology information needs to be fed up/down the hierarchy. This needs to be considered in conjunction with the configuration procedure and routing attributes described later in this document.]
[Editor's note: text needs to be updated to include exchange of topology information and full/partial/minimal cases described in the table above]
Two different approaches exist for upward communications. In the first approach the Level n+1 routing function is statically configured with the endpoints located in Level n. This information may be represented by an address prefix to facilitate scalability, or it may be an actual list of the endpoints in the area.
In the second approach, the Level n+1 routing function listens to the routing protocol exchange occurring in each contained Level n area and retrieves the endpoints being announced by the Level n routing instance(s). This information may be summarized into one or more prefixes to facilitate scalability.
Some implementations have extended the weakly associated address approach. Instead of using a static table of prefixes, they listen to the endpoint announcements in the Level n area and dynamically export the endpoints reachable (either individually or as part of a prefix summary) into the Level n+1 area.
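The dynamic export step can be sketched as a prefix summarization, using the standard library `ipaddress` module; the IPv4 format and /24 summary length are illustrative choices, not requirements of the protocol.

```python
import ipaddress

def summarize_endpoints(addresses, prefix_len=24):
    # Collapse individual endpoint announcements heard in the Level n
    # area into prefixes before exporting them to the Level n+1 area.
    prefixes = {ipaddress.ip_network(f"{a}/{prefix_len}", strict=False)
                for a in addresses}
    return sorted(str(p) for p in prefixes)

# Three endpoints collapse to two exported prefixes:
assert summarize_endpoints(["192.0.2.1", "192.0.2.9", "198.51.100.7"]) == \
    ["192.0.2.0/24", "198.51.100.0/24"]
```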
Some of the benefits that result from this dynamic approach are:
· It allows address formats to be independent of the area ID semantics used by the routing protocol. This allows a Service Provider to choose one of the common addressing schemes in use today (IPv4, IPv6, NSAP address, etc.), and allows new address formats to be easily introduced in the future.
· It allows for an endpoint to be attached to multiple switches located in different areas in the service provider's network and use the same address.
For Multi-level, the lower area routing function needs to provide the upper level routing function with information on the endpoints contained within the lower area. Any of these approaches may be used. However, a dynamic approach is preferable for the reasons mentioned above.
[Editor's note: text needs to be updated to include exchange of topology information and full/partial/minimal cases described above]
Four different approaches exist for downward communications. In the first approach, switches in an area at Level n that are attached to Level n+1 will announce that they are a border switch, and know how to get to endpoints outside of the area. When another switch within the area is presented with the need to develop a route to endpoints outside of the area, it can simply find a route to the closest border switch.
The second approach has the Level n+1 routing function determine the endpoints reachable from the different Level n border switches, and provide that information to the Level n routing function so it can be advertised into the Level n area. These advertisements are then used by non-border switches at Level n to determine which border switch would be preferable for reaching a destination.
When compared to the first approach the second approach increases the amount of information that needs to be shared within the Level n area. However, being able to determine which border switch is closer to the destination causes the route thus generated to be of "higher quality".
The third approach has the Level n+1 routing function provide the Level n routing function with all reachability and topology information visible at Level n+1. Since the information visible at Level n+1 includes the information visible at Levels n+2, n+3, and so on to the root of the hierarchy tree, the amount of information introduced into Level n is significant.
However, as with the second approach, this further increases the quality of the route generated. Unfortunately, the lower levels will never have the need for most of the information propagated. This approach has the highest "overhead cost".
A fourth approach is not to communicate any routing information downward from Level n+1 to Level n. Instead, the border switches provide other switches in the area with the address of a Path Computation Server (PCS) that can develop routes at Level n+1. When a switch operating in an area at Level n needs to develop a route to a destination located outside that area, the PCS at Level n+1 is consulted. The PCS can then determine the route to the destination at Level n+1. If this PCS is also unable to determine the route because the endpoint is located outside of the PCS's area, then it can consult the PCS operating at Level n+2. This recursion continues until the PCS responsible for the area at the lowest level that contains both the source and destination endpoints is reached.
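The recursive consultation can be sketched as follows; the structure of a PCS and its scope, and the returned route value, are hypothetical illustrations of the recursion only.

```python
def compute_route(pcs_by_level, level, src, dst):
    # Consult the PCS at this level; if its scope does not contain
    # both endpoints, recurse to the PCS one level up.
    pcs = pcs_by_level.get(level)
    if pcs is None:
        return None  # ran off the top of the hierarchy
    if src in pcs["scope"] and dst in pcs["scope"]:
        return f"route@level{level}"
    return compute_route(pcs_by_level, level + 1, src, dst)

pcs_by_level = {
    1: {"scope": {"A", "B"}},
    2: {"scope": {"A", "B", "C", "D"}},
}
assert compute_route(pcs_by_level, 1, "A", "B") == "route@level1"
assert compute_route(pcs_by_level, 1, "A", "D") == "route@level2"
```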
For Multi-level, any of these approaches may be used. The second and fourth approaches are preferable as they provide high-quality routes with the least amount of overhead.
Almost all combinations of upward (Level n to Level n+1) and downward (Level n+1 to Level n) communications approaches described in this document will work without any problems. However, when both the upward and downward communication interfaces contain endpoint reachability information, a feedback loop is created. Consequently, this combination must include a method to prevent re-introduction of information propagated into the Level n area from the Level n+1 area back into the Level n+1 area, and vice versa.
Two methods that may be used to deal with this problem are as follows. The first method requires a static list of endpoint addresses or endpoint summaries to be defined in all machines participating in Level n to Level n+1 communications. This list is then used to validate if that piece of endpoint reachability information should be propagated into the Level n+1 area.
The second approach attaches an attribute to the information propagated from the Level n+1 area to the Level n area. Since endpoint information that was originated by the Level n area (or a contained area) will not have this attribute, the routing function can break the feedback loop by only propagating upward information where this attribute is appropriately set.
For the second approach, it is necessary to make certain that the area at Level n does not utilize the information received from Level n+1 when the endpoint is actually located within the Level n area or any area contained by Level n. This can be accomplished by establishing the following preference order for endpoints based on how an endpoint is reached. Specifically, the following preference order would be used:
1) Endpoint is reached through a node at Level n or below
2) Endpoint is reached through a node above Level n
The second approach is preferred as it allows for dynamic introduction of new prefixes into an area.
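The attribute-based loop break and the preference order can be sketched together; the attribute name and record layout below are hypothetical, not protocol field definitions.

```python
def should_export_upward(entry):
    # Break the feedback loop: only re-advertise upward what did NOT
    # arrive from above (attribute name is illustrative).
    return not entry.get("from_higher_level", False)

def prefer(entries, n):
    # Preference order: reachability learned at Level n or below wins
    # over reachability learned from above Level n.
    local = [e for e in entries if e["level"] <= n]
    return local or [e for e in entries if e["level"] > n]

learned = [{"dest": "X", "level": 3, "from_higher_level": True},
           {"dest": "X", "level": 2}]
# Only locally-originated information flows back up:
assert [e for e in learned if should_export_upward(e)] == \
    [{"dest": "X", "level": 2}]
# Level 2 information is preferred over information from above:
assert prefer(learned, 2) == [{"dest": "X", "level": 2}]
```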
Two approaches exist for handling Level n to Level n+1 communications. The first approach places an instance of a Level n routing function and an instance of a Level n+1 routing function in the same system. The communications interface is now under control of a single vendor, meaning its implementation does not need to be an open protocol. However, there are downsides to this approach. Since both routing functions are competing for the same system resources (memory, and CPU), it is possible for one routing function to be starved, causing it to not perform effectively. Therefore, each system will need to be analyzed to identify the load it can support without affecting operations of the routing protocol.
The second approach places the Level n routing function on a separate system from the Level n+1 routing function. For this approach, two different methods exist to determine that a Level n to Level n+1 adjacency exists: static configuration, and automatic discovery. Static configuration relies on the network administrator configuring the two systems with their peer, and their specific role as parent (i.e. Level n+1 routing function) or child (i.e. Level n routing function).
For automatic discovery, the system will need to be configured with the RA ID(s) for its area, as well as the RA ID(s) of the "containing" area. The RA IDs will then be conveyed by the system in its neighbor discovery (i.e. Hello) messages. This in turn allows the system in the parent RA to identify its neighbor as a system participating in a child RA, and vice versa.
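The relationship inference from Hello-carried RA IDs can be sketched as follows; the message fields and return labels are hypothetical, not protocol encodings.

```python
def classify_neighbor(my_ra, my_parent_ra, hello_ra, hello_parent_ra):
    # Compare our configured RA IDs with those carried in the
    # neighbor's Hello to infer the hierarchical relationship.
    if hello_ra == my_ra:
        return "peer"       # same RA: normal link-state adjacency
    if hello_parent_ra == my_ra:
        return "child"      # neighbor's containing RA is ours
    if hello_ra == my_parent_ra:
        return "parent"     # neighbor is in our containing RA
    return "unrelated"

assert classify_neighbor("RA1", "RA0", "RA1", "RA0") == "peer"
assert classify_neighbor("RA1", "RA0", "RA2", "RA1") == "child"
assert classify_neighbor("RA1", "RA0", "RA0", None) == "parent"
```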
One of the responsibilities of the LRM is to provide the RC with information regarding the type and availability of resources on a link, and any changes to those resources.
This requires the following basic functions between the LRM and the RC:
1) RC query to LRM of current link capabilities and available resources
2) LRM notification to RC when a significant change occurs
3) LRM procedure to determine when a change is considered significant
4) LRM procedure to limit notification frequency
The initialization process for the RC must first query the LRM to determine what resources are available and to populate its topology database with the information it is responsible for sourcing into the network. The RC is then responsible for advertising this information to adjacent RCs and ensuring that other RCs can distinguish between current and stale information.
After the initialization process, the LRM is responsible for notifying the RC when any changes occur to the information it provided. The LRM must implement procedures that prevent overloading the RC with rapid changes.
The first procedure that must be performed is the determination of when a change is significant enough to notify the RC. This procedure will be dependent on the type of transport technology. For example, the allocation of a single VC11 or VC12 may not be deemed significant, but the allocation of a single wavelength on a DWDM system may be significant.
The second procedure that must be performed is a pacing of the messages sent to the RC. The rate at which the RC is notified of a change to a specific parameter must be limited (e.g. once per second).
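The two procedures above (significance filtering and notification pacing) could be sketched as follows. This is a non-normative illustration; the class, callback, and threshold values are all hypothetical, and the significance rule is technology- and operator-specific:

```python
import time

# Illustrative sketch of the two LRM procedures: a technology-dependent
# significance test and per-parameter pacing of notifications to the RC.

class LrmNotifier:
    def __init__(self, rc_notify, significance_fn, min_interval_s=1.0):
        self.rc_notify = rc_notify          # callback delivering updates to the RC
        self.is_significant = significance_fn
        self.min_interval = min_interval_s  # e.g. at most once per second
        self._last_sent = {}                # parameter -> time of last notification
        self._pending = {}                  # parameter -> latest unsent value

    def on_change(self, param, old, new, now=None):
        now = time.monotonic() if now is None else now
        if not self.is_significant(param, old, new):
            return                          # filter insignificant changes
        last = self._last_sent.get(param)
        if last is not None and now - last < self.min_interval:
            self._pending[param] = new      # pace: coalesce into a later notify
            return
        self._last_sent[param] = now
        self.rc_notify(param, new)

# Example significance rule (hypothetical thresholds): a single VC11/VC12
# allocation is not significant, a wavelength allocation is.
def sdh_significant(param, old, new):
    return param == "wavelengths" or abs(new - old) >= 10
```

In practice a paced implementation would also flush coalesced pending values once the interval expires; that detail is omitted here for brevity.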
The physical separation of the LRM and the RC is a new capability not supported by existing routing protocols.
The required interaction is similar to the distribution of topology information between adjacent RCs, except that the flow of information is unidirectional from the LRM to the RC.
This interaction can be performed using a modified lightweight version of an existing routing protocol. The initial query from the RC to the LRM can reuse the database summary and LSA request used during synchronization of the link-state database. Updates from the LRM to the RC can use normal link-state database update messages.
The LRM would not need to implement any procedures for the reception of link-state information, flooding, topology database, etc.
[Editor's note: text in this section needs to be made protocol-independent]
[Ed. The following text is still in draft form and to be discussed further]
1. I think we all agree we don’t want to use the level indicator as in PNNI when working on the protocol. The benefits of not having it include the flexibility of inserting a level between two existing ones, grouping two existing RAs/RCs into one RA, etc., without worrying about level violations and complexity. We can still use “level” literally, but only in a relative sense and without a defined code point.
2. [Ed. Keep this paragraph as a comment for now but it won’t make it to the final version] All nodes assigned the same RA ID will be in the same RA running the link-state protocol. We need to say how the control channels are defined and verified via their communications, or whether they are completely auto-discovered. This is required at each hierarchy level.
3. [Ed. the way we instantiate the function that provides the interaction with the higher level needs to be decided] Within an RA, one or more RCs are required to function as “Peer Group Leader” to perform additional duties, including summarizing addresses, aggregating data plane topology, etc., within the RA. The information is then communicated to one or more RCs at the next higher-level RA. The summarization and aggregation can occur automatically but can also be accomplished via configuration. However, the “relationship” between the RC at level N and the RC at level N+1 needs to be described. Note that in PNNI, the two RCs are generally realized as two logical RCs on the same switch with internal IPC as their communication channel; shall we assume this, leave it blank (as in the PNNI spec), or something else?
4. Traffic types such as the following may need to be distinguished on the packet-based control channels: [Ed. need to work on this part]
a) Packets between peer-RC in the same RA. These packets should carry the same RA ID.
b) Packets received by the same switch but possibly destined for different RCs on that switch; these should carry different RA IDs and/or different RC IDs. Note these packets may have different destination IPv4/IPv6/NSAP addresses, but this could be optional, to save address space – an RA ID or RC ID costs nothing.
5. Information feed-up:
a) For reachable addresses, the information is always fed up one level at a time as-is, without additional information attached. This feed-up occurs recursively level-by-level upwards, with possible further summarization at any level.
b) For aggregated data plane topology (such as border-to-border TE links), the information is always fed up one level at a time as-is, without additional information attached.
c) Some TE links that are fed up may need to include the “ancestor RC ID”, so that the information is fed upwards until the ancestor RC receives it.
The RC at level N+1 should have enough information to avoid feeding this information back down.
6. Information feed-down:
The RC at level N+1 should filter out routing information fed up from below during the feed-down operation; that is, the RC at level N+1 only feeds down information it learned from other RCs in the same RA (at level N+1), which appears to the RC at level N as information from other RAs.
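The feed-down filtering rule above can be sketched as follows; this is a non-normative illustration, and the database entry fields (`learnt_from`, `origin_ra`) are hypothetical names:

```python
# Sketch of the feed-down filtering rule: the level N+1 RC re-advertises
# downwards only information it learnt from other (peer) RCs in its own
# level N+1 RA, never information that was fed up from the child RA itself.

def feed_down(level_n1_db, own_child_ra):
    """Select entries to feed down to a child RC in RA `own_child_ra`."""
    return [entry for entry in level_n1_db
            if entry["learnt_from"] == "peer"          # from another level N+1 RC
            and entry["origin_ra"] != own_child_ra]    # not our own fed-up data

db = [
    {"prefix": "192.0.2.0/28", "origin_ra": "A.1", "learnt_from": "feed-up"},
    {"prefix": "198.51.100.0/24", "origin_ra": "A.2", "learnt_from": "peer"},
]
# Only the peer-learnt entry originating outside RA "A.1" is fed down to "A.1".
```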
[Editor's note: e.g., between RCs across a lower level area boundary]
[Editor's note: between parent and child RCs when in different systems]
Given data plane connectivity between two different RCDs that we wish to have cooperate within an RA, we have two choices: (a) configure the corresponding RCs with information concerning their peers, or (b) discover the suitable corresponding RC on the basis of information shared via some type of enhanced NNI discovery procedure.
One fairly straightforward approach is for each side to share information concerning its RA containment hierarchy, along with the addresses of the appropriate protocol controller for the RC within each of these RAs.
The architecture of optical networks is structured in layers to reflect technology differences and/or switching granularity. This architecture follows a recursive model as described in Recommendation G.805. The control plane is consistent with this model and thus enables optical networks to meet client signal requirements such as service type (e.g., VC-3 for VPNs), a specific quality of service, and specific layer adaptations. Thus an ASON link is defined to be capable of carrying only a single layer of switched traffic. Because an ASON link is single-layer, all layers can be treated in the same way from the point of view of signalling, routing and discovery. This requires that layers are treated separately, with a layer-specific instance of the signalling, routing and discovery protocols running. From the routing point of view, it means that path computation needs to be able to find a layer-specific path.
The hierarchical model of routing in G.7715 leads to several instances of the Routing protocol (e.g., instantiation of several hierarchies) operating over a single layer. Therefore, a topology may be structured with several routing levels of the hierarchy within a layer before the layer general topology is distributed. Hence a model is needed to enable effective routing on a layered transport network.
Additionally, transport layer adaptations are structured within an adaptation hierarchy which requires explicit indication of layer relationships for routing purposes. This is illustrated in Figure 1.
Figure 1: Layer structure in SDH
In transport networks, a server layer trail may support different adaptations at the same time, which creates dependency between the layers. This makes it necessary that the variable adaptation information be distinguishable at each layer (e.g., VC-3 supporting n-VC-12c and m-VC-11c). A specific example is a server layer trail VC-3 supporting VC-11 and VC-12 client layers. In this case, a specific attribute like bandwidth can be supported in different ways over the same common server layer through the use of concatenation. If VC-11c is chosen to support the VC-3, the availability of the VC-12 is affected; this information needs to be known by routing. Each of these two client layers also has specific constraints (e.g., cost) that routing needs to understand on a per-layer basis.
Furthermore, routing for transport networks is done today by layer, where each layer may use a particular routing paradigm (one for the DWDM layer and a different one for the VC layer). This layer separation requires that attribute information also be handled separately by layer.
In heterogeneous networks, some NEs do not support the same set of layers (a case that also applies to GMPLS). Even if an NE does not support a specific layer, it should be able to know whether another NE in the network supports an adaptation that would enable that unsupported layer to be used.
[Editor's note: example needed]
Separate advertisement of the layer attributes may be chosen, but this may lead to unnecessary duplication since some attributes can be derived from client-server relationships. These are inheritable attributes, a property that can be used to avoid unnecessary duplication of information advertisement. To be able to determine inherited attributes, the relationships between layers need to be advertised. Protection and diversity are examples of attributes inherited across different layers. Both inherited and layer-specific attributes need to be supported.
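The inheritance described above can be sketched as a lookup that falls back along the advertised client-server relationship. This is an illustrative sketch only; the layer names, attribute names, and the set of inheritable attributes are hypothetical:

```python
# Sketch of attribute inheritance: an attribute absent at a client layer is
# resolved by walking the client->server relationship, so it need not be
# advertised redundantly at every layer. All names are illustrative.

def resolve_attribute(layer, attr, layer_attrs, server_of, inheritable):
    """Look up `attr` for `layer`, falling back to server layers if inheritable."""
    current = layer
    while current is not None:
        value = layer_attrs.get(current, {}).get(attr)
        if value is not None:
            return value
        if attr not in inheritable:
            return None                    # layer-specific attribute: no fallback
        current = server_of.get(current)   # follow the client-server relationship
    return None

layer_attrs = {"VC-4": {"protection": "1+1", "capacity": 64},
               "VC-12": {"capacity": 63}}
server_of = {"VC-12": "VC-4"}              # hypothetical client-server relation
inheritable = {"protection", "diversity"}
# The VC-12 client layer inherits protection "1+1" from its server layer,
# while capacity remains layer specific.
```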
The inter-layer information advertisement is achieved through the coordination of the LRMs responsible for SNPPs at each layer. Some of the attributes to be exchanged between layers reside on the Discovery Agent, where they have been provisioned or determined through the layer adjacency discovery process. To get this information, the LRMs access DA information through the TAP, as allowed by G.8080 component relationships.
In view of the fact that not all NEs support all layers today but may do so in the future, representation of attributes for routing needs to allow for new layers to be accommodated between existing layers. (This is much like the notion of the generalized label).
As per G.805, a network at a specific layer can be partitioned to reflect the internal structure of that layer network or the way that it will be managed. It is also possible that a subset of attributes is commonly supported by subsets of SNPP links located in different links (G.7715 sub-partition of SNPPs). This means that it should be possible to organize link layers based on attributes, and that routing needs to be able to differentiate attributes at specific layers. For example, an attribute may apply to a single link at a layer, or it may apply to a set of links at the same layer.
Following the above architectural principles, attributes can be organized according to the following categories:
- Attributes related to a node or to a link
- Provisioned or negotiated. Some attributes like Ownership and Protection are provisioned by the customer while adaptation can be configured as part of an automatic discovery process.
- Inherited and layer specific attributes. Client layers can inherit some attributes from the Server layer while others attributes like Link Capacity are specified by layer.
- Attributes used by a specific Plane or Function: Some attributes are relevant only to the transport topology while others are relevant to the control plane and furthermore, they are specific to a control plane function like Signalling, Routing or Discovery (e.g. Cost for routing). The Transport Discovery Process can be used to exchange control plane related attributes that are unrelated to transport plane attributes. The way that the exchange is done is out of the scope of this recommendation.
- While a set of attributes can apply to both planes, others have meaning only when a control plane exists (e.g., SRLG and delay for SNPPs).
SNPP links, as per G.8080, are configured by the operator through grouping of SNP links between the same two routing areas within the same layer. These two routing areas may be linked by one or more SNPP links. Multiple SNPP links may be required when SNP links are not equivalent for routing purposes with respect to the routing areas to which they are attached, or to the containing routing area, or when smaller groupings are required for administrative purposes. Grouping of SNP links into SNPP links can be based on different criteria (e.g., diversity, protection, cost).
Some attributes are used for the generic purpose of building topology. These basic attributes are exchanged as part of the transport discovery process. Some of these attributes are inherent to the transport discovery process (adaptation potential) and others are inferred from higher-level applications (e.g., diversity, protection).
Attributes used only by the control plane can be provisioned/determined as part of the Control Plane Discovery process.
Several configurations are possible to organize the SNPPs required for the control plane. Configuration includes:
- Provisioning of link attributes
- Provisioning of SNPPs based on the attributes of the different SNPP components (i.e., routing, cost, etc.)
- Provisioning of specific attributes that are relevant only to SNPPs
Configuration can be done at each layer of the network, but this may lead to unnecessary repetition. The inheritance property of attributes can also be used to optimize the configuration process.
For practical purposes we further differentiate between two types of topology information: topology between RCDs, and topology internal to an RCD. Recall that the internal structure of a control domain is not required to be revealed. However, since an entire RA is represented as an RCD (with a corresponding RC) at the next level up in the hierarchy, there are a number of reasons (some of which are detailed below) to reveal additional information. At a given level of the hierarchy we may choose to represent a given RCD by a single node, or we may represent it (or part of it) as a graph consisting of nodes and links. This is the process of topology/resource summarization; how this process is accomplished is not subject to standardization.
Per the approach of G.7715 we categorize our routing attributes into those pertaining to nodes and those pertaining to links. When we speak of nodes and links in this manner we are treating them as topological entities, i.e., in a graph theoretic manner.
Other information that could be advertised about an RA to the next level up could be aggregate characteristic properties. For example, the probabilities of setting up a connection between all pairs of gateways to the RA. SRG information about the RA could also be sent but without detailed topology information. [ Editor's note: placement of this paragraph tbd]
All nodes represented in the graph representation of the network belong to a RA, hence the RA ID can be considered an attribute of all nodes.
When a node is advertised representing an entire subnetwork then it will have the following attributes:
· RC ID (mandatory) – This number must be unique within an RA.
· Address of RC (mandatory) – This is the SCN address of the RC where routing protocol messages get sent.
· Subnetwork ID
· Client Reachability Information (mandatory)
· Hierarchy relationships
· Node SRG (optional) – The shared risk group information for the node.
· Recovery (Protection/Restoration) Support – Does the domain offer any protection or restoration services? Do we want to advertise them here? Could be useful in coordinating restoration…?
· General Characteristics: Transit connections, switching and branching capability, RCD dedicated path protection (e.g., 1:1, 1+1). (mandatory) [Ed: This is really to express the idea that you really can’t get across this domain, either due to policy reasons or blocking properties. Needed?][Editor's Note: could be replaced by a link-end transit attribute]
The nodes used to represent internal Subnetwork topology can be advertised using the NNI routing protocol. These nodes may correspond to physical nodes, such as border nodes or internal physical nodes, or logical or abstract nodes. Each node advertised within the NNI routing is identified by:
· RC ID (mandatory) – Identifies the RCD to which this intra-domain node belongs.
· Intra-Domain Node ID (mandatory) – Used to uniquely identify the intra domain abstract node within the RCD.
· Client Reachability Information (optional) – Usually would want this information for diverse source and destination services.
· Node SRG (optional) – The shared risk group information for the node.
In the case of physical nodes, the node ID is simply the node ID of the physical node itself. Otherwise the node is at some level of the routing hierarchy and can be named with an RA ID.
It may be possible to extract the node information from link state advertisements, thus it may not be necessary to explicitly advertise this information.
The type of restoration and protection mechanisms supported within a control domain is represented by this attribute. N.B. This is a control domain attribute and does not necessarily apply only to intra-RCD nodes. The protection and restoration options specified include:
[Ed. this may be changed to a general field rather than being specific as it is right now]
· Link protection
· RCD dedicated path protection (e.g., 1:1, 1+1)
· Dynamic restoration (i.e., re-provisioning after failure)
· Shared mesh restoration
Other carrier specific protection and restoration schemes may also be supported.
Reachability information describes the set of end systems that are directly connected to a given control domain. One technique is to use a directory service to determine the control domain and/or the specific node to which a given client is physically connected. The alternative is to advertise client reachability through the NNI routing protocol. This is an optional capability of the routing protocol.
There are multiple ways to advertise client reachability information. Speaker nodes may advertise client reachability on a per-domain basis if path selection within the client's domain is not desired. Speaker nodes may alternatively advertise client reachability in a more detailed fashion so that more optimized route selection can be performed within a connection's destination control domain. Ideally, the network operator has allocated end-system addresses in a manner that can be summarized, so that only a small amount of reachability information needs to be advertised.
Client reachability is advertised as a set of clients directly connected to each domain. In this case, the attributes related to the reachability advertisement include:
– RA ID, RC ID, and possibly inter-RCD node ID
– List of UNI Transport Resource addresses or address prefixes
– List of SNPP Ids (Editor's note: can have variable context depth as in G.8080 Amend. 1)
Note that it is possible for client addresses to be connected by more than one RC or internal node. In that case, multiple RC ID, RA ID, etc. should be associated with those client addresses.
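The reachability attributes above, including the multi-homed case where the same client addresses are reachable via more than one RC, can be sketched as a longest-prefix lookup. This is a non-normative illustration; the advertisement structure and example addresses are hypothetical:

```python
import ipaddress

# Sketch of resolving a client UNI Transport Resource address against
# advertised reachability: each advertisement maps an address prefix to the
# advertising (RA ID, RC ID). A prefix may be advertised by more than one
# RC (multi-homing), so all longest-prefix matches are returned.

def lookup_reachability(adverts, client_addr):
    addr = ipaddress.ip_address(client_addr)
    matches = [(net, ra, rc) for net, ra, rc in adverts
               if addr in ipaddress.ip_network(net)]
    if not matches:
        return []
    best = max(ipaddress.ip_network(n).prefixlen for n, _, _ in matches)
    return [(ra, rc) for n, ra, rc in matches
            if ipaddress.ip_network(n).prefixlen == best]

adverts = [("192.0.2.0/24", "A", "RC-1"),
           ("192.0.2.0/24", "A", "RC-2"),   # same prefix, multi-homed client
           ("198.51.100.0/24", "B", "RC-7")]
```

Summarizable address allocation keeps the `adverts` table small, which is the point made in the text above.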
UNI Transport Resource Addresses are assigned by the service provider to one or more UNI transport links (see Recommendations G.7713.x). The UNI Transport Resource Address may be an IPv4, IPv6 or an NSAP address.
The links between RCDs are sometimes called external links. The connectivity between and within control domains is represented by intra-RCD and inter-RCD (external) links.
The advertisement of the external links is crucial for supporting following service functionalities:
(1) Load balancing: by advertising external links to other domains, it is possible to achieve load balancing among external links between neighbouring control domains via an available load balancing scheme. Lack of load balancing on external links may result in uneven loads on intra-domain nodes and links.
(2) Fast end-to-end dynamic restoration: domain-by-domain restoration may be used for intra-domain link and node failures, but cannot be used to recover from border node failures. Fast end-to-end restoration can instead be used to recover from border node failures. With knowledge of the external links, source nodes can identify external links through alternate border nodes to achieve rapid restoration with minimal crankback.
(3) End-to-end diverse routing: to achieve node and SRLG diversity for connections across multiple domains, we require selection of different border nodes / SRLG disjoint external links. This could be achieved using crankback mechanisms to “search” for physically diverse routes. However, advertising external links would significantly reduce the crankback required, particularly if physically diverse paths are available within control domains.
Representations of links internal to an RCD that are advertised outside the RCD are called intra-RCD abstract links. Intra-RCD abstract links are important when efficient routing is required across or within an RCD (e.g., in core/long-distance networks). Efficient routing can be achieved by advertising intra-RCD abstract links with metrics (costs) assigned to them, so long as the metrics are common across all RCDs.
The methods for summarizing control domain topologies to form an abstract topology trade off network scalability against routing efficiency. In addition, abstract topology allows treatment of vendor/domain-specific constraints (technological or others). These methods can be a carrier/vendor-specific function, allowing different carriers to make different tradeoffs. Resulting abstract topologies can vary from the full topology, to a limited set of abstract links, to hub-and-spoke topologies. Note that in addition to the intra-RCD abstract links, we may use intra-RCD abstract nodes in the representation of an RCD’s internal topology. These abstract nodes are similar to the complex node representation described in the PNNI specification. Abstract links are similar to the exception bypass links of PNNI complex nodes. A metric of the intra-RCD abstract link could be used to represent some potential capacity between the two border nodes (on either end of the link) in that RCD.
Note that intra-RCD links advertised using the NNI may be either physical or logical links; however, from the perspective of the NNI speakers in other control domains, this distinction is not relevant. This is because path selection computes the end-to-end path based on the advertised links and associated link state information, whether physical or logical. Although each of these links has a different location with respect to control domains, the attributes associated with them are essentially the same. We thus define the same set of attributes for all types of RCD links.
[Editor's note: List below to be further refined and descriptions provided]
[Editor's note: need to consider types of path computation to be supported and use of configuration as an alternative means to flooding to supply some information]
The following Link Attributes are defined:
- Connectivity Supported (e.g. CTP, TTP, etc)
- Bandwidth Encoding: Describes how the capacity is supported and its availability, e.g., 48 Mbit/s supported on a VC-3 circuit with the specific VC-3 structure (another possibility is concatenation of VC-12s). It can advertise potential numbers of connections (192 STS-1 in an OC-192). (mandatory)
- Client-Server Relationship (clients and server(s) of this layer, including signal and encoding type)
- Attributes with client-server relationships (e.g., SRLG, server adaptation)
- Colour (e.g. for VPNs)
Link Inherited Attributes that can be determined by client-server layer relationships:
- SNPP name components (mandatory)
o Subnetwork (matrix)
§ Source Subnetwork/NE ID
§ Remote Subnetwork/NE ID
o Link Context
§ e.g., "Bundle ID"
- Encoding (e.g., SONET/SDH, OTN)
- Recovery (Protection and restoration) support
o Link protection
o Dynamic restoration (i.e., re-provisioning after failure)
o Shared mesh restoration
- Signalling (INNI, ENNI, UNI) [Editor's note: not clear why this is needed]
- Ownership (OSS, ASTN). It may be used as a selection criterion for CP points that can be represented by SNPs
- Status: A blocked state could be temporary and needs to be advertised [Editor's note: seems to have some overlap with bandwidth encoding, needs to be resolved]
- Lack of Diversity [Editor's note: needs further explanation. One suggestion is that an abstract link representing subnetwork connectivity between border points might indicate a level of diversity, as an alternative to advertising all of the SRLGs]
For nodes representing RCDs (i.e., RCs), the link source/destination subnetwork/NE IDs consist of the RA ID and up to two 32-bit numbers to uniquely identify the interface off the RCD. However, if desired for simplicity in setting unique values, one could use a triple such as (RA ID, Border Node IPv4 address, IfIndex). Note that the RA ID sets the transport context.
For intra-RCD nodes, the link source/destination end IDs consist of the triple (RA ID, Node ID, IfIndex), where the Node ID was defined in the section on intra-RCD node attributes. One possible way to determine whether a link is internal to an RCD is to check whether the RA IDs of the source and destination ends of the link are the same.
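The naming and the internal-link check above can be sketched as follows; this is a non-normative illustration with hypothetical identifier values:

```python
from collections import namedtuple

# Sketch of link-end naming: an intra-RCD link end is the triple
# (RA ID, Node ID, IfIndex), and a link can be classified as internal to
# an RCD by comparing the RA IDs of its two ends.

LinkEnd = namedtuple("LinkEnd", ["ra_id", "node_id", "if_index"])

def is_intra_rcd(src: LinkEnd, dst: LinkEnd) -> bool:
    """A link is internal when both ends carry the same RA ID."""
    return src.ra_id == dst.ra_id

a = LinkEnd("A.1", "node-3", 17)
b = LinkEnd("A.1", "node-9", 4)
c = LinkEnd("A.2", "node-1", 2)
# is_intra_rcd(a, b) is True; is_intra_rcd(a, c) is False.
```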
Following the G.7715 model for routing, links are layer specific and link properties are reported to the RC via the respective LRM. Link capacity is characterized in terms of a certain number of link connections. Link connections represent the indivisible unit of bandwidth for a link in a particular layer network, e.g., a VC3 Link may contain ‘n’ VC3 link connections. The RC uses the link information obtained from the LRM in its route computation.
Interesting cases of link bandwidth accounting arise when equipment supporting flexible (or variable) adaptation is present in the network. Flexible (or variable) adaptation refers to the ability to tune adaptation functions in the element to flexibly provide a variety of link connections from a common trail; for example, equipment supporting a flexible adaptation of an OC-192 into 192 STS-1, 64 STS-3c, 16 STS-12c, 4 STS-48c or 1 STS-192c link connections. In this case, there are potentially different types of links containing their respective types of link connections. However, given that all these link connections are supported by a common trail, the allocation of resources (link connections) from one link is reflected as a “busy” state in a set of link connections in other links. In the above example, if one STS-3c is allocated, then the link states change as follows: 189 STS-1, 63 STS-3c, 15 STS-12c, 3 STS-48c, 0 STS-192c. Note that flexible (or variable) adaptation involves interactions between CTPs and TTPs and is handled via interactions between the LRM and TAP. Thus, the routing process does not concern itself with the “adaptation management”, but only receives and uses information obtained from the LRM.
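The OC-192 accounting in the example above can be reproduced with a simple shared-pool model, in which all link types draw on one pool of STS-1-equivalent capacity on the common trail. This is a deliberately simplified, non-normative sketch; in particular it ignores timeslot contiguity and alignment constraints:

```python
# Sketch of flexible-adaptation bandwidth accounting: all link types share
# one pool of STS-1-equivalent capacity, so an allocation on one link
# changes the advertised availability of every link type.

STS1_EQUIV = {"STS-1": 1, "STS-3c": 3, "STS-12c": 12,
              "STS-48c": 48, "STS-192c": 192}

class FlexibleTrail:
    def __init__(self, total_sts1=192):       # e.g. an OC-192 trail
        self.free = total_sts1

    def allocate(self, signal):
        need = STS1_EQUIV[signal]
        if need > self.free:
            raise ValueError("insufficient capacity")
        self.free -= need

    def available(self):
        """Per-link-type availability derived from the shared free pool."""
        return {s: self.free // n for s, n in STS1_EQUIV.items()}

trail = FlexibleTrail()
trail.allocate("STS-3c")
# trail.available() -> {'STS-1': 189, 'STS-3c': 63, 'STS-12c': 15,
#                       'STS-48c': 3, 'STS-192c': 0}
```

As the text notes, this bookkeeping belongs to the LRM/TAP interaction; the routing process only consumes the resulting counts.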
Thus far, we have considered the link state in terms of “available capacity” vs. “used capacity” (i.e., link connections in the busy state). In addition to the notion of “available capacity”, we introduce the notion of “potential capacity” or “planned capacity”. This refers to the amount of link capacity that is planned (via some form of network engineering/planning). This type of capacity consideration is useful in the context of long-term route planning (or quasi-static route planning).
We have considered the interaction between the LRM and the RC in terms of the link capacity attributes. We now consider the problem of how this information is represented in the protocol LSAs/LSPs in a canonical fashion. We observe that there are two possible architectural models for canonical representations. The first model employs a protocol controller which performs session multiplexing. In this case, we have layer-specific RCs that communicate with each other through sessions that are multiplexed via the protocol controller. The other model is to have information multiplexing in the protocol PDU. This is shown in Figure 6.
Figure 6: (a) Protocol controller employing session multiplexing; (b) protocol controller employing information element multiplexing
In Figure 6, depending on whether model (a) or (b) is employed in the protocol design, two types of protocol encoding choices arise.
Regardless of whether scenario (a) or (b) is used, for a given link type we would send the following information:
· Signal Type: This indicates the link type, for example a VC3/STS1 path
· Transit cost: This is a metric specific to this signal type
· Connection Types: Three choices: transit, source or sink. This indicator allows the routing process to identify whether the remote endpoint is flexibly or inflexibly connected to a TTP or CTP. [Editor's note: does this require a 'transit' attribute for link ends? if so, does this conflict with the 'transit' attribute for nodes?]
· Link capacity count fields:
o Available count: The number of available, non-failed link connections
o Installed count: The number of available + unavailable (i.e. failed) link connections
o Planned count: The number of available + unavailable + uninstalled link connections
The number of link connections in a specific state is really more a function of the current "alarm" being seen: if it is a facility failure, then the link connection is included in the installed count; if it is an equipage issue (i.e., card not installed), then the link connection would be included in the planned count. Therefore, the information does not necessarily need to be hand-configured.
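Deriving the three count fields from per-connection state, as described above, can be sketched as follows; the state names are hypothetical labels for the alarm conditions mentioned in the text:

```python
# Sketch of deriving the three count fields from per-link-connection state
# rather than hand configuration: a facility failure leaves the connection
# "installed", while missing equipment leaves it merely "planned".

def capacity_counts(connections):
    """connections: list of states 'ok', 'failed' (facility alarm),
    or 'uninstalled' (equipage issue, e.g. card not installed)."""
    available = sum(1 for s in connections if s == "ok")
    installed = available + sum(1 for s in connections if s == "failed")
    planned = len(connections)    # available + failed + uninstalled
    return {"available": available, "installed": installed, "planned": planned}

counts = capacity_counts(["ok"] * 45 + ["failed"] * 2 + ["uninstalled"] * 1)
# counts == {"available": 45, "installed": 47, "planned": 48}
```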
Each link is assigned a set of metrics. The metrics used are service-provider specific, and are used in route selection to select the preferred route among multiple choices. Examples of metrics include:
1. A static administrative weight
2. A dynamic attribute reflecting the available bandwidth on the links along with the least cost route (e.g., increasing the cost as bandwidth becomes scarce.) If the metric is a dynamic attribute, nodes may limit the rate at which it is advertised, e.g., using hysteresis.
3. An additive metric that could be minimized (e.g., dB loss)
4. A metric on which a path computation may perform Boolean AND operations.
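Example metric 2 above, a dynamic cost that rises as bandwidth becomes scarce, re-advertised under hysteresis to limit the advertisement rate, could be sketched as follows. The cost formula and threshold are hypothetical, service-provider-specific choices:

```python
# Sketch of a dynamic link cost with hysteresis-limited advertisement.

def dynamic_cost(available, capacity, base_cost=10, scale=100):
    """Cost grows as the link approaches saturation (hypothetical formula)."""
    utilization = 1.0 - available / capacity
    return base_cost + int(scale * utilization)

class HysteresisAdvertiser:
    def __init__(self, threshold=5):
        self.threshold = threshold
        self.last_advertised = None

    def maybe_advertise(self, cost):
        if (self.last_advertised is None
                or abs(cost - self.last_advertised) >= self.threshold):
            self.last_advertised = cost
            return True       # significant move: flood a new advertisement
        return False          # suppress minor fluctuations
```

Hysteresis keeps small bandwidth fluctuations from triggering network-wide floods, at the price of slightly stale costs.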
Several protection parameters are associated with a link. They include:
· A protection switch time. This indicates how long the protection action will take. If there is no protection, this value would be represented by an infinite value.
· An availability measure. This represents the degree of resource reservation supporting the protection characteristic. For example, a 1+1 linear protected link has 100% of reserved resources for its protection action. A 1:n protected link has less than 100%.
An intra-domain abstract link could be representative of a connection within that domain. Protection characteristics of the connection would then be used by that link. For example the connection could be protected with 1+1 trails (working and fully reserved protection trails). When a link can serve higher layers (i.e., be multiply adapted), those higher layer links "inherit" the protection characteristic of the (server) link.
A Shared Risk Link Group (SRLG) is an abstraction defined by the network operator referring to a group of links that may be subject to a common failure. Each link may be in multiple SRLGs. SRLGs are used when calculating physically diverse paths through the network, such as for restoration/protection, or for routing of diverse customer connections. Note that globally consistent SRLG information is not always available across multiple control domains, even within a single carrier.
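The use of SRLG attributes in diverse-path calculation described above can be sketched as a disjointness check; the link names and SRLG identifiers are hypothetical:

```python
# Sketch of SRLG-based diversity: two paths are physically diverse (with
# respect to advertised SRLGs) when no SRLG appears on links of both paths.

def path_srlgs(path, link_srlgs):
    """Union of SRLG IDs over all links of a path."""
    srlgs = set()
    for link in path:
        srlgs |= set(link_srlgs.get(link, ()))   # a link may be in several SRLGs
    return srlgs

def srlg_disjoint(path_a, path_b, link_srlgs):
    return not (path_srlgs(path_a, link_srlgs) & path_srlgs(path_b, link_srlgs))

link_srlgs = {"L1": {101}, "L2": {101, 205}, "L3": {307}}
# Paths whose links share SRLG 101 (e.g. a common duct) are not diverse.
```

Where SRLG information is not globally consistent across domains, such a check can only be applied within the scope where the identifiers are meaningful.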
Transport networks are built from the bottom up, starting with the lowest layer (the physical layer) supported by the network. Following this architecture, some attributes that are provisioned at the lowest layers apply to upper layers, i.e., are inherited. The inherited attributes can be inferred from client-server relationships and do not need to be flooded between layers, thereby optimizing the information advertisement. Inherited attributes are tied to attributes with common applicability to several layers.
Other attributes are layer specific; these cannot be inferred from client-server relationships and therefore need to be flooded between layers. These attributes are determined through the Layer Adjacency Discovery process or provisioning. They are then passed between layers through the interaction of the Discovery Agent, TAPs and LRMs.
Diversity is an attribute that can be inherited and/or layer specific. It can be provisioned at the physical layer or per SNPP link. Lack of diversity could be inherited; however, the different diversity values are layer specific and need to be distinguishable by layer. In all cases it is necessary that attributes can be represented on a per-layer basis.
Based on the principles and taxonomy explained above, the information advertised for routing in Optical networks has the following format:
<PDU Identifiers, <Layer Specific Node PDU>* >
[Editor's Note: further discussion needed on the attributes below]
<Layer Specific Node PDU> = <RC ID address>
< RC PC Communication Address>
<Downwards and Upwards Client Reachability>
<Relationship to attributes: Linkages to other layers>
<Hierarchy relationships >
<Inheritable>* = <Recovery Support>
<PDU Identifiers, <Layer Specific Link PDU>* >
[Editor's Note: Further discussion needed on the attributes below]
<Layer Specific Link PDU> = < Adaptation>
< Attributes with crosslayer relationships>
<SNPP utilization> (1) [Editor's note: to be further refined]
<Inheritable>* = <Local SNPP name>
<Remote SNPP name>
< Recovery Support>
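Purely for illustration, the advertisement format sketched above could be modelled as data structures such as the following; the field names mirror the draft element names and the types are assumptions, not a normative encoding:

```python
# Illustrative modelling of the advertised routing information; field names
# follow the draft element names above. Types are assumed, not normative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LayerSpecificNodePdu:
    rc_id: str
    rc_pc_comm_address: str
    client_reachability: List[str] = field(default_factory=list)
    layer_linkages: List[str] = field(default_factory=list)
    hierarchy_relationships: List[str] = field(default_factory=list)
    recovery_support: str = ""          # inheritable

@dataclass
class LayerSpecificLinkPdu:
    adaptation: str
    snpp_utilization: float
    local_snpp: str                     # inheritable
    remote_snpp: str                    # inheritable
    recovery_support: str = ""          # inheritable

@dataclass
class RoutingPdu:
    pdu_id: str
    node_pdus: List[LayerSpecificNodePdu] = field(default_factory=list)
    link_pdus: List[LayerSpecificLinkPdu] = field(default_factory=list)
```

One RoutingPdu carries the PDU identifiers together with zero or more layer-specific node and link PDUs, matching the `<PDU Identifiers, <...>*>` structure above.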
Outline of Appendix text (some points may need expansion)
a. As specified in G.805, an ASON layer network can be recursively partitioned into subnetworks. Subnetworks are management policy artifacts and are not created from protocol actions or necessarily from protocol concerns such as scalability. Partitioning is related to policies associated with different parts of a carrier network. Examples include workforce organization, coexistence of legacy and newer equipment, etc. Subnetworks are defined to be completely contained within higher level subnetworks.
Figure A.I-1: Distinction between Layering and partitioning
b. Partitioning in the transport plane leads to a multiplicity of routing areas in the control plane. Recursive partitioning using the G.805 principles leads to a hierarchical organization of routing areas into multiple levels.
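The containment rule above (each routing area completely contained within its parent) can be sketched as a simple tree; the area names below are invented examples:

```python
# Sketch of recursive routing-area containment: each routing area is
# completely contained within its parent. Area names are illustrative.

class RoutingArea:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def ancestors(self):
        """Containing areas, from immediate parent up to the top level."""
        node, result = self.parent, []
        while node:
            result.append(node.name)
            node = node.parent
        return result

top = RoutingArea("Carrier")
metro = RoutingArea("Metro-East", parent=top)
ra = RoutingArea("RA-1", parent=metro)
print(ra.ancestors())  # ['Metro-East', 'Carrier']
```

Because containment is strict, every routing area has exactly one chain of ancestors, which is what allows hierarchical levels to be defined unambiguously.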
Figure A.I-2: Hierarchical Organization of Routing Areas
[Editor's note: figure needs to be understandable in black and white]
c. Some relevant characteristics of ASON network topologies: there is typically not a single backbone. Traffic cannot be forced to go through a backbone, as this is inconsistent with the topology; in addition, regulatory concerns may impose requirements for local call routing policies.
d. Routing areas follow the organization of subnetworks. Routing area organization must support the ASON network topology, e.g., there should not be a requirement for a single backbone area, and the containment relationship must be followed.
Figure A.I-3 contains an example of a transport network consisting of two carrier backbone networks. The metro transport networks connected to the backbones have the choice of which backbone to use to reach other metro networks. Also, adjacent metro networks can support connections between them without those connections traversing either backbone network.
Figure A.I-3: Example Network Topology
e. The protocol should attempt to minimize the amount of communication between RCs, e.g., by passing multiple levels of information together. Typically, because of scoping, only two or three levels might be passed (the local level, its child levels, and its parent levels). The further away the destination, the more abstraction may be used to reduce the amount of information that must be passed.
f. The internal topology of a subnetwork is completely opaque to the outside. For routing purposes, the subnetwork may appear as a node (reachability only), or may be transformed to appear as some set of nodes and links, in which case the subnetwork is not visible as a distinct entity. Methods of transforming subnetwork structure to improve routing performance will likely depend on subnetwork topology and may evolve over time.
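The simplest transformation described above (a subnetwork appearing externally as a single node) can be sketched on an edge-list topology; the node and subnetwork names are illustrative:

```python
# Sketch: abstracting a subnetwork as a single node for external routing,
# on a simple undirected edge-list topology. Names are illustrative.

def abstract_subnetwork(edges, members, abstract_name):
    """Collapse all nodes in `members` into one abstract node; links internal
    to the subnetwork disappear, external links re-attach to the abstract node."""
    result = set()
    for a, b in edges:
        a2 = abstract_name if a in members else a
        b2 = abstract_name if b in members else b
        if a2 != b2:  # drop links internal to the subnetwork
            result.add(tuple(sorted((a2, b2))))
    return sorted(result)

edges = [("X", "S1"), ("S1", "S2"), ("S2", "Y")]
print(abstract_subnetwork(edges, {"S1", "S2"}, "SN"))
# [('SN', 'X'), ('SN', 'Y')]
```

Richer transformations (representing the subnetwork as a set of nodes and links) would preserve more internal structure at the cost of advertising more information.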
Some literature contains statements of the form "An OC192 link containing 192 STS-1 link connections". This is technically inaccurate, as a link and the link connections it contains have to be in the same layer network. This is more accurately stated as "An OC192 trail supporting 192 STS-1 link connections".