                                                            Angela Chiu
                                                            John Strand
                                                                   AT&T
Internet Draft
Document: draft-chiu-strand-unique-olcp-01.txt             Robert Tkach
Expiration Date: May 2001                               Celion Networks

    Unique Features and Requirements for The Optical Layer Control Plane

Status of this Memo

   This document is an Internet-Draft and is in full conformance with
   all provisions of Section 10 of RFC2026.  Internet-Drafts are
   working documents of the Internet Engineering Task Force (IETF), its
   areas, and its working groups.  Note that other groups may also
   distribute working documents as Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt.
   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

Abstract

   Advances in the Optical Layer control plane are critical to ensuring
   that the tremendous amount of bandwidth generated by DWDM technology
   is delivered to upper layer services in a timely, reliable, and
   cost-effective fashion.  This document describes some unique
   features and requirements of the Optical Layer control plane that
   protocol designers need to take into consideration.

1. Introduction

   The confluence of technical advances and service needs has focused
   intense interest on optical networking.  Dense Wave Division
   Multiplexing (DWDM) is allowing unprecedented growth in raw optical
   bandwidth; new cross-connect technologies promise the ability to
   establish very high bandwidth connections within milliseconds; and
   the insatiable appetite of the Internet for high capacity "pipes"
   has caused transport network operators to tear up their forecasts
   and add optical capacity as fast as they can.

   Critical to these advances are improvements to the "Optical Layer
   Control Plane" - the software used to determine routings and to
   establish and maintain connections.  Traditional centralized
   transport operations systems (OSs) are widely acknowledged to be
   incapable of scaling to meet exploding demand or of establishing
   connections as rapidly as needed.  Consequently much attention has
   been paid recently to new control plane architectures based on data
   networking protocols such as MPLS and OSPF/IS-IS.  These
   architectures feature distributed routing and control logic,
   auto-discovery and self-inventorying, and many other advantages.
   OSPF/IS-IS provides a constraint-based routing capability that takes
   bandwidth availability into account.

   The potential of these new architectures for optical networking is
   enormous; however, to be successful they need to be adapted to the
   specific technological, service, and business context characteristic
   of optical networking.
   This document attempts to describe several aspects of optical
   networking which differ from those in the data networking
   environment inspiring these new architectures:

   - Section 2 describes some distinctive technological and networking
     aspects of optical networking that will constrain routing in an
     optical network, and

   - Section 3 gives a transport network operator's perspective on
     business and operational realities that optical networks are
     likely to face which are unlike those in data networking.

   We most definitely are not claiming that these differences are fatal
   to these new architectures, only that the new architectures must be
   built upon a detailed appreciation of the unique characteristics of
   the optical world.

2. Constraints On Routing

   Optical Layer routing is less insulated from details of physical
   implementation than routing in higher layers.  In this section we
   give examples of constraints arising from the design of network
   elements, from the accumulation of signal impairments, and from the
   need to guarantee the physical diversity of some circuits.

2.1 Reconfigurable Network Elements

   Control plane architectural discussions (e.g., [Awduche99]) usually
   assume that the only software-reconfigurable network element is an
   optical layer cross-connect (OLXC).  There are however other
   software-reconfigurable elements on the horizon, specifically
   tunable lasers and receivers and reconfigurable optical add-drop
   multiplexers (OADMs).  These elements are illustrated in the
   following simple example, which is modeled on announced Optical
   Transport System (OTS) products:

          +                                                    +
   ---+---+  |\                                            /|  +---+---
   ---| A |--|D|   X                                  Y   |D|--| A |---
   ---+---+  |W|  +--------+                +--------+    |W|  +---+---
       :     |D|--|  OADM  |----------------|  OADM  |----|D|     :
   ---+---+  |M|  +--------+                +--------+    |M|  +---+---
   ---| A |--| |    |    |                    |    |      | |--| A |---
   ---+---+  |/     |    |                    |    |       \|  +---+---
          +       +---+ +---+              +---+ +---+         +
        D         | A | | A |              | A | | A |         E
                  +---+ +---+              +---+ +---+
                   | |   | |                | |   | |

        Figure 2-1: An OTS With OADM's - Functional Architecture

   In Fig. 2-1, the portion on the inner side of the boxes labeled "A"
   defines an all-optical subnetwork.  From a routing perspective two
   aspects are critical:

   - Adaptation: These are the functions done at the edges of the
     subnetwork that transform the incoming optical channel into the
     physical wavelength to be transported through the subnetwork.

   - Connectivity: This defines which pairs of edge Adaptation
     functions can be interconnected through the subnetwork.

   In Fig. 2-1, D and E are DWDMs and X and Y are OADMs.  The boxes
   labeled "A" are adaptation functions.  They map one or more input
   optical channels, assumed to be standard short reach signals, into a
   long reach (LR) wavelength or wavelength group which will pass
   transparently to a distant adaptation function.  Adaptation
   functionality which affects routing includes:

   - Multiplexing: Either electrical or optical TDM may be used to
     combine the input channels into a single wavelength.  This is done
     to increase effective capacity: a typical DWDM might be able to
     handle 100 signals at 2.5 Gb/sec (250 Gb/sec total) or 50 at 10
     Gb/sec (500 Gb/sec total); combining the 2.5 Gb/sec signals
     together thus effectively doubles capacity.
     After multiplexing, the combined signal must be routed as a group
     to the distant adaptation function.

   - Adaptation Grouping: In this technique, groups of k (e.g., 4)
     wavelengths are managed as a group within the system and must be
     added/dropped as a group.  We will call such a group an
     "adaptation grouping".

   - Laser Tunability: The lasers producing the LR wavelengths may have
     a fixed frequency, may be tunable over a limited range, or may be
     tunable over the entire range of wavelengths supported by the
     DWDM.  Tunability speeds may also vary.

   Connectivity between adaptation functions may also be limited:

   - As pointed out above, TDM multiplexing and/or adaptation grouping
     by the adaptation function forces groups of input channels to be
     delivered together to the same distant adaptation function.

   - Only adaptation functions whose lasers/receivers are tunable to
     compatible frequencies can be connected.

   - The switching capability of the OADMs may also be constrained.
     For example:
     o There may be some wavelengths that can not be dropped at all.
     o There may be a fixed relationship between the frequency dropped
       and the physical port on the OADM to which it is dropped.
     o OADM physical design may put an upper bound on the number of
       adaptation groupings dropped at any single OADM.

   For a fixed configuration of the OADMs and adaptation functions,
   connectivity will be fixed: each input port will essentially be
   hard-wired to some specific distant port.  However this connectivity
   can be changed by changing the configurations of the OADMs and
   adaptation functions.  For example, an additional adaptation
   grouping might be dropped at an OADM, or a tunable laser retuned.
   In each case the port-to-port connectivity is changed.

   This capability can be expected to be under software control.  Today
   the control would rest in the vendor-supplied Element Management
   System (EMS), which in turn would be controlled by the operator's
   OSs.  However in principle the EMS could participate in the routing
   process.  The constraints on reconfiguration are likely to be quite
   complex, dependent on the vendor design and also on exactly what
   line cards have been deployed.  Thus the state information needed
   for routing is likely to be voluminous and possibly vendor-specific.
   It is nevertheless very desirable to solve these issues, possibly by
   advertising only an abstraction of the complex configuration options
   to the external world via the control plane.
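   As an illustration only, the following sketch shows one way such an
   abstraction might be represented for a subnetwork like Fig. 2-1.  It
   is not drawn from any vendor EMS or existing protocol; all names,
   fields, and grouping rules are assumptions made here for exposition
   (in Python).

      # Hypothetical abstraction of an all-optical subnetwork's
      # adaptation and connectivity constraints, as an EMS might export
      # it to an external routing process.  Illustrative only.

      from dataclasses import dataclass
      from typing import List, Tuple

      @dataclass
      class AdaptationGroup:
          """Client ports that must be routed together (TDM or
          adaptation grouping), plus the LR wavelengths reachable by
          the group's tunable laser."""
          ports: List[str]        # client ports multiplexed together
          tunable_to: List[int]   # reachable LR wavelength indices

      @dataclass
      class SubnetworkAbstraction:
          groups: List[AdaptationGroup]
          # Pairs of group indices the current (or reachable) OADM/DWDM
          # configuration can interconnect.
          connectable: List[Tuple[int, int]]

          def can_connect(self, g_in: int, g_out: int) -> bool:
              """True if the two edge groups share a wavelength and the
              subnetwork can interconnect them."""
              shared = (set(self.groups[g_in].tunable_to) &
                        set(self.groups[g_out].tunable_to))
              return bool(shared) and ((g_in, g_out) in self.connectable
                                       or (g_out, g_in) in self.connectable)

      # Example: two edge groups sharing wavelength 7 and connectable.
      abstraction = SubnetworkAbstraction(
          groups=[AdaptationGroup(["D/1", "D/2"], [5, 7]),
                  AdaptationGroup(["E/1", "E/2"], [7, 9])],
          connectable=[(0, 1)],
      )
      assert abstraction.can_connect(0, 1)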
2.2 Wavelength Routed All-Optical Networks

   The optical networks presently being deployed may be called "opaque"
   ([Tkach98]): each link is optically isolated by transponders doing
   O/E/O conversions.  These transponders are quite expensive and they
   also constrain the rapid evolution to new services - for example,
   they tend to be bit rate and format specific.  Thus there are strong
   motivators to introduce "domains of transparency" - all-optical
   subnetworks - larger than an OTS.

   The routing of lightpaths through an all-optical network has
   received extensive attention (see [Yates99] or [Ramaswami98]).  When
   discussing routing in an all-optical network it is usually assumed
   that all routes have adequate signal quality.  This may be ensured
   by limiting all-optical networks to subnetworks of limited
   geographic size which are optically isolated from other parts of the
   optical layer by transponders.  This approach is very practical and
   has been applied to date, e.g. when determining the maximum length
   of an Optical Transport System (OTS).  Furthermore, operational
   considerations like fault isolation also make limiting the size of
   domains of transparency attractive.

   There are however reasons to consider contained domains of
   transparency in which not all routes have adequate signal quality.
   From a demand perspective, maximum bit rates have rapidly increased
   from DS3 to OC-192 and soon OC-768 (40 Gb/sec).  As bit rates
   increase it is necessary to increase power, which makes impairments
   and nonlinearities more troublesome.  From a supply perspective,
   optical technology is advancing very rapidly, making ever-larger
   domains possible.  In this section we assume that these
   considerations will lead to the deployment of a domain of
   transparency that is too large to ensure that all potential routes
   have adequate signal quality for all circuits.  Our goal is to
   understand the impacts of the various types of impairments in this
   environment.

2.2.1 Problem Formulation

   We consider a single domain of transparency.  We wish to route a
   unidirectional circuit from ingress client node X to egress client
   node Y.  At both X and Y, the circuit goes through an O/E/O
   conversion which optically isolates the portion within our domain.
   We assume that we know the bit rate of the circuit.  Also, we assume
   that the adaptation function at X applies some Forward Error
   Correction (FEC) method to the circuit.  We also assume we know the
   launch power of the laser at X.

   Impairments can be classified into two categories, linear and
   nonlinear (see [Tkach98] for more on impairment constraints).
   Linear effects are independent of signal power and affect
   wavelengths individually.  Amplifier spontaneous emission (ASE),
   polarization mode dispersion (PMD), and chromatic dispersion are
   examples.  Nonlinearities are significantly more complex: they
   generate not only distortion for a given channel, but also crosstalk
   between channels.

   In the remainder of this section we first outline how two key linear
   impairments (PMD and ASE) might be handled by a set of analytical
   formulae as additional constraints on routing.  We next discuss how
   the remaining constraints might be approached.  Finally we take a
   broader perspective and discuss the implications of such constraints
   on control plane architecture and also on broader constrained domain
   of transparency architecture issues.

2.2.2 Polarization Mode Dispersion

   For a transparent fiber segment, the general rule for the PMD
   requirement is that the time-averaged differential time delay
   between two orthogonal states of polarization should be less than a%
   of the bit duration.  (A typical value for a is 10 [ITU].  More
   aggressive designs that compensate for PMD may allow higher than
   10%.  This would be a system parameter known to the routing
   process.)  This results in an upper bound on the maximum length of
   an M-fiber-span transparent segment, which is inversely proportional
   to the square of the bit rate and to the square of the fiber PMD
   parameter, where a fiber span in a transparent network refers to a
   segment between two optical amplifiers.  (The detailed equation is
   omitted due to the format constraint.)  For typical fibers with a
   PMD parameter of 0.5 picoseconds per square root of km, the
   constraint implies that the maximum length of the transparent
   segment should not exceed 400 km and 25 km for bit rates of 10 Gb/s
   and 40 Gb/s, respectively.  With newer fibers whose PMD parameter
   equals 0.1 picoseconds per square root of km, the maximum length of
   the transparent segment should not exceed 10000 km and 625 km for
   bit rates of 10 Gb/s and 40 Gb/s, respectively.  In general, the PMD
   requirement is not an issue for most types of fibers at 10 Gb/s or
   lower bit rates, but it will become an issue at bit rates of 40 Gb/s
   and higher.
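   To make the omitted bound concrete, the following sketch (ours, not
   part of the original analysis) computes the length limit implied by
   the rule above, L_max = (a*T/D_pmd)^2 with T the bit period and
   D_pmd the fiber PMD parameter; it reproduces the 400/25 km and
   10000/625 km figures quoted above.  The 10% default is the typical
   value cited; everything else is straightforward arithmetic.

      # Rough PMD length bound: mean DGD = D_pmd * sqrt(L) must stay
      # below a% of the bit period T = 1/B, so L_max = (a*T/D_pmd)**2.
      # Values below reproduce the examples in Section 2.2.2.

      def pmd_max_length_km(bit_rate_gbps, d_pmd_ps_per_sqrt_km,
                            a_fraction=0.10):
          bit_period_ps = 1000.0 / bit_rate_gbps       # ps per bit
          allowed_dgd_ps = a_fraction * bit_period_ps
          return (allowed_dgd_ps / d_pmd_ps_per_sqrt_km) ** 2

      for d_pmd in (0.5, 0.1):
          for rate in (10, 40):
              print(d_pmd, rate, pmd_max_length_km(rate, d_pmd))
      # -> 400 km and 25 km for 0.5 ps/sqrt(km);
      #    10000 km and 625 km for 0.1 ps/sqrt(km).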
2.2.3 Amplifier Spontaneous Emission

   ASE degrades the signal to noise ratio.  An acceptable optical SNR
   level (SNRmin), which depends on the bit rate and the transmitter-
   receiver technology (e.g., FEC), needs to be maintained at the
   receiver.  In order to satisfy this requirement, vendors often
   provide some general engineering rule in terms of the maximum length
   of the transparent segment and the number of spans.  For example,
   current transmission systems are often limited to at most 6 spans of
   80 km each.  Startups have announced ultra long haul systems that
   are claimed to be able to support reaches of up to thousands of km.
   Although these general rules are helpful in network planning, more
   detailed information on the SNR reduction in each component should
   be used to determine whether the SNR level through a given
   transparent segment is within the required value.  This would
   provide flexibility in provisioning or restoring a lightpath through
   a transparent subnetwork.  Here, we assume that the average optical
   power launched at the transmitter, P, is known.  The lightpath from
   the transmitter to the receiver goes through M optical amplifiers,
   each introducing some noise power.  A constraint on the maximum
   number of spans can be obtained [Kaminow97] which is proportional to
   P and inversely proportional to SNRmin, the optical bandwidth B, the
   amplifier gain (more precisely, G-1), and the spontaneous emission
   factor n of the optical amplifier.  (Again, the detailed equation is
   omitted due to the format constraint.)  Let's take a typical
   example.  Assuming P=4dBm, SNRmin=20dB with FEC, B=12.5GHz, n=2.5,
   and G=25dB, the constraint gives a maximum of about 10 spans.
   However, without FEC, where the requirement on SNRmin becomes 25dB,
   the maximum number of spans drops to 3.
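   The span-count bound can be made similarly concrete.  The sketch
   below is our own illustration of the proportionality just described;
   the 1550 nm carrier frequency and the factor of two accounting for
   both ASE polarization modes are assumptions on our part.  With the
   example values it yields roughly 10 spans with FEC and roughly 3
   without, matching the figures above.

      # ASE-limited span count (illustrative).  Each amplifier adds
      # roughly 2 * n_sp * h * nu * (G - 1) * B of noise power (both
      # polarizations, an assumption); the launch power P must exceed
      # SNRmin times the noise accumulated over all spans.

      H_PLANCK = 6.626e-34   # J*s
      NU_1550 = 1.934e14     # carrier frequency near 1550 nm, Hz (assumed)

      def max_spans(p_dbm, snr_min_db, bw_hz, n_sp, gain_db):
          p_w = 1e-3 * 10 ** (p_dbm / 10.0)
          snr_min = 10 ** (snr_min_db / 10.0)
          gain = 10 ** (gain_db / 10.0)
          ase_per_amp = 2.0 * n_sp * H_PLANCK * NU_1550 * (gain - 1.0) * bw_hz
          return p_w / (snr_min * ase_per_amp)

      print(max_spans(4, 20, 12.5e9, 2.5, 25))   # about 10 spans (with FEC)
      print(max_spans(4, 25, 12.5e9, 2.5, 25))   # about 3 spans (without FEC)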
2.2.4 Other Impairments

   Other Polarization Dependent Impairments: Other polarization-
   dependent effects besides PMD influence system performance.  For
   example, many components have polarization-dependent loss (PDL)
   [Ramaswami98], which accumulates in a system with many components on
   the transmission path.  The state of polarization fluctuates with
   time, and it is generally required to keep the total PDL on the path
   within some acceptable limit.

   Chromatic Dispersion: For reasonably linear systems, there are
   reasons to believe that this impairment can be adequately (but not
   optimally) compensated for on a per-link basis.

   Nonlinear Impairments: It seems unlikely that these can be dealt
   with explicitly in a routing algorithm, because they lead to
   constraints that can couple routes together and lead to complex
   dependencies, e.g. on the order in which specific fiber types are
   traversed.  A full treatment of the nonlinear constraints would
   likely require very detailed knowledge of the physical
   infrastructure, including measured dispersion values for each span,
   fiber core area and composition, as well as knowledge of subsystem
   details such as dispersion compensation technology.  This
   information would need to be combined with knowledge of the current
   loading of optical signals on the links of interest to determine the
   level of nonlinear impairment.  Alternatively, one could assume that
   nonlinear impairments are bounded and increase the required OSNR
   level SNRmin of Section 2.2.3 by X dB, where X for performance
   reasons would be limited to 1 or 2 dB, consequently setting a limit
   on route length.  For the approach described here to be useful, it
   is desirable for this length limit to be longer than that imposed by
   the constraints which can be treated explicitly.  Further work is
   required to determine the validity of this approach.  However, it is
   possible that there could be an advantage in designing systems which
   are less aggressive with respect to nonlinearities, and therefore
   somewhat sub-optimal, in exchange for improved scalability,
   simplicity and flexibility in routing and control plane design.

2.2.5 Implications For Routing and Control Plane Design

   - Additional state information will be required by the routing
     algorithm for each type of impairment that has the potential of
     being limiting for some routes.

   - It is likely that the physical layer parameters do not change
     value rapidly and could be stored in some database; however these
     are physical layer parameters that today are frequently not known
     at the granularity required.  If the ingress node of a lightpath
     does path selection, these parameters would need to be available
     at this node.

   - The specific constraints required in a given situation will depend
     on the design and engineering of the domain of transparency; for
     example it will be important to know whether chromatic dispersion
     has been dealt with on a per-link basis, and whether the domain is
     operating in a linear or nonlinear regime.

   - In situations where only PMD and/or ASE impairments are
     potentially binding, the optimal routing problem becomes a
     two-constraint problem, and OSPF algorithm enhancements will be
     needed.  However, it is likely that relatively simple heuristics
     could be used in practice.

   In addition, routing in an all-optical network without wavelength
   conversion raises several further issues; a sketch of the second
   point follows this list.

   - Since the route selected must have the chosen wavelength available
     on all links, this information needs to be considered in the
     routing process.  This is discussed in [Chaudhuri00], where it is
     concluded that advertising detailed wavelength availabilities on
     each link is not likely to scale.  Instead the authors propose an
     alternative method which probes along a chosen path to determine
     which wavelengths (if any) are available.  This would require a
     significant addition to the routing logic normally used in OSPF.

   - Choosing a path first and then a wavelength along the path is
     known to give adequate results in simple topologies such as rings
     and trees ([Yates99]).  This does not appear to be true in large
     mesh networks under realistic provisioning scenarios, however.
     Instead, significantly better results are achieved if wavelength
     and route are chosen simultaneously.  This approach would however
     also have a significant effect on OSPF.
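   As a rough illustration of the second bullet (a sketch of ours, not
   the method of [Chaudhuri00] or [Yates99]), one simple way to choose
   wavelength and route simultaneously is to run a per-wavelength
   shortest-path search over only the links on which that wavelength is
   free, then keep the best result.  The topology and availability data
   structures below are hypothetical.

      # Simultaneous route-and-wavelength selection (illustrative).
      import heapq
      from typing import Dict, List, Optional, Set, Tuple

      # links[(u, v)] = (length_km, free_wavelengths); directed for brevity.
      Links = Dict[Tuple[str, str], Tuple[float, Set[int]]]

      def shortest_path_on_wavelength(links: Links, src: str, dst: str,
                                      wl: int):
          """Dijkstra restricted to links where wavelength wl is free."""
          dist = {src: 0.0}
          prev: Dict[str, str] = {}
          heap = [(0.0, src)]
          while heap:
              d, u = heapq.heappop(heap)
              if u == dst:
                  path = [dst]
                  while path[-1] != src:
                      path.append(prev[path[-1]])
                  return d, path[::-1]
              if d > dist.get(u, float("inf")):
                  continue
              for (a, b), (length, free) in links.items():
                  if a == u and wl in free and d + length < dist.get(b, float("inf")):
                      dist[b] = d + length
                      prev[b] = u
                      heapq.heappush(heap, (d + length, b))
          return None

      def route_and_wavelength(links: Links, src: str, dst: str,
                               wavelengths) -> Optional[Tuple[float, List[str], int]]:
          best = None
          for wl in wavelengths:
              found = shortest_path_on_wavelength(links, src, dst, wl)
              if found and (best is None or found[0] < best[0]):
                  best = (found[0], found[1], wl)
          return best   # (length_km, node path, wavelength index) or None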
2.3 Diversity

   "Diversity" is a relationship between lightpaths.  Two lightpaths
   are said to be diverse if they have no single point of failure.  In
   traditional telephony the dominant transport failure mode is a
   failure in the interoffice plant, such as a fiber cut inflicted by a
   backhoe.

   To determine whether two lightpath routings are diverse it is
   necessary to identify single points of failure in the interoffice
   plant.  To do so we will use the following terms: A fiber cable is a
   uniform group of fibers contained in a sheath.  An Optical Transport
   System will occupy fibers in a sequence of fiber cables.  Each fiber
   cable will be placed in a sequence of conduits - buried honeycomb
   structures through which fiber cables may be pulled - or buried in a
   right of way (ROW).  A ROW is land in which the network operator has
   the right to install its conduit or fiber cable.  It is worth noting
   that for economic reasons, ROWs are frequently obtained from
   railroads, pipeline companies, or thruways.  It is frequently the
   case that several carriers may lease ROW from the same source; this
   makes it common to have a number of carriers' fiber cables in close
   proximity to each other.  Similarly, in a metropolitan network,
   several carriers might be leasing duct space in the same RBOC
   conduit.  There are also "carrier's carriers" - optical networks
   which provide fibers to multiple carriers, all of whom could be
   affected by a single failure in the "carrier's carrier" network.

   In a typical intercity facility network there might be on the order
   of 100 offices that are candidates for OLXCs.  To represent the
   inter-office fiber network accurately, a network with an order of
   magnitude more nodes is required.  In addition to Optical Amplifier
   (OA) sites, these additional nodes include:

   - Places where fiber cables enter/leave a conduit or right of way;
   - Locations where fiber cables cross;
   - Locations where fiber splices are used to interchange fibers
     between fiber cables.

   An example of the first might be:

                                     A             B
      A-------------B                 \           /
                                       \         /
                                        X-----Y
                                       /         \
      C-------------D                 /           \
                                     C             D

      (a) Fiber Cable Topology     (b) Right-Of-Way/Conduit Topology

              Figure 2-2: Fiber Cable vs. ROW Topologies

   Here the A-B fiber cable would be physically routed A-X-Y-B and the
   C-D cable would be physically routed C-X-Y-D.  This topology might
   arise because of some physical bottleneck: X-Y might be the Lincoln
   Tunnel, for example, or the Bay Bridge.
   Fiber route crossing (the second case) is really a special case of
   this, where X and Y coincide.  In this case the crossing point may
   not even be a manhole; the fiber routes might just be buried at
   different depths.

   Fiber splicing (the third case) often occurs when a major fiber
   route passes near to a small office.  To avoid the expense and
   additional transmission loss, only a small number of fibers are
   spliced out of the major route into a smaller route going to the
   small office.  This might well occur in a manhole or hut.  An
   example is shown in Fig. 2-3(a), where A-X-B is the major route, X
   the manhole, and C the smaller office.  The actual fiber topology
   would then look like Fig. 2-3(b), where there would typically be
   many more A-B fibers than A-C or C-B fibers, and where A-C and C-B
   might have different numbers of fibers.  (One of the latter might
   even be missing.)

             C                             C
             |                            / \
             |                           /   \
             |                          /     \
      A------X------B            A---------------B

      (a) Fiber Cable Topology       (b) Fiber Topology

           Figure 2-3: Fiber Cable vs. Fiber Topologies

   The imminent deployment of ultra-long (>1000 km) Optical Transport
   Systems introduces a further complexity: two OTS's could interact a
   number of times.  To make up a hypothetical example: a New York -
   Atlanta OTS and a Philadelphia - Orlando OTS might ride on the same
   right of way for x miles in Maryland and then again for y miles in
   Georgia.  They might also cross at Raleigh or some other
   intermediate node without sharing right of way.

   Diversity is often equated to routing two lightpaths between a
   single pair of points, or different pairs of points, so that no
   single route failure will disrupt them both.  This is too
   simplistic, for a number of reasons:

   - A sophisticated client of an optical network will want to derive
     diversity needs from his/her end customers' availability
     requirements.  These often lead to more complex diversity
     requirements than simply providing diversity between two
     lightpaths.  For example, a common requirement is that no single
     failure should isolate a node or nodes.  If a node A has single
     lightpaths to nodes B and C, this requires A-B and A-C to be
     diverse.  In real applications, a large data network with N
     lightpaths between its routers might describe its needs in an NxN
     matrix, where entry (i,j) defines whether lightpaths i and j must
     be diverse.

   - Two circuits that might be considered diverse for one application
     might not be considered diverse in another situation.  Diversity
     is usually thought of as a reaction to interoffice route failures.
     High reliability applications may require other types of failures
     to be taken into account.  Some examples:

     o Office Outages: Although less frequent than route failures,
       fires, power outages, and floods do occur.  Many network
       managers require that diverse routes have no (intermediate)
       nodes in common.  In other cases an intermediate node might be
       acceptable as long as there is power diversity within the
       office.

     o Shared Rings: Many applications are willing to allow "diverse"
       circuits to share a SONET ring-protected link; presumably they
       would allow the same for optical layer rings.

     o Disasters: Earthquakes and floods can cause failures over an
       extended area.
       Defense Department circuits might need to be routed with nuclear
       damage radii taken into account.

     o Conversely, some networks may be willing to take somewhat larger
       risks.  Taking route failures as an example: such a network
       might be willing to consider two fiber cables in heavy duty
       concrete conduit as having a low enough chance of simultaneous
       failure to be considered "diverse".  They might also be willing
       to view two fiber cables buried on opposite sides of a railroad
       track as being diverse, because there is minimal danger of a
       single backhoe disrupting them both, even though a bad train
       wreck might jeopardize them both.

   These considerations strongly suggest that the routing algorithm
   should be sensitive to the types of threat considered unacceptable
   by the requester.

   [Chaudhuri00] introduced the term "Shared Risk Link Group" (SRLG) to
   describe the relationship between two non-diverse links.  The above
   discussion suggests that an SRLG should be characterized by two
   parameters:

   - Type of Compromise: Examples would be shared fiber cable, shared
     conduit, shared ROW, shared optical ring, shared office without
     power sharing, etc.
   - Extent of Compromise: For compromised outside plant, this would be
     the length of the sharing.

   Two links could be related by many SRLGs: AT&T's experience
   indicates that a link may belong to over 100 SRLGs, each
   corresponding to a separate fiber group, and each SRLG might relate
   a single link to many other links.  For the optical layer, similar
   situations can be expected where a link is an ultra-long (e.g., 3000
   km) OTS.  The mapping between links and different types of SRLGs is
   in general defined by network operators based on the definition of
   each SRLG type.  Since SRLG information is not yet ready to be
   discoverable by a network element and does not change dynamically,
   it need not be advertised with other resource availability
   information by network elements.  It could be configured in some
   central database and be distributed to or retrieved by the nodes, or
   advertised by network elements at the topology discovery stage.  On
   the other hand, in order to be able to perform distributed path
   selection at each node that satisfies a given diversity criterion,
   each network element may need to propagate information on the number
   of channels available for each channel type (e.g., OC48, OC192) in
   each channel group, where a channel group is defined as a set of
   channels that are routed identically and is given a unique
   identification.  Each channel group can be mapped into a sequence of
   fiber cables, while each fiber cable can belong to multiple SRLGs
   based on their definitions.
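   As a small illustration of how such SRLG information might be used
   (a sketch under assumed data structures, not a defined protocol
   element), the check below declares two routes diverse only when
   their links share no SRLG, with each SRLG carrying the type and
   extent attributes suggested above and with any compromise types the
   requester tolerates filtered out.

      # Illustrative SRLG-based diversity check.  The data model (an
      # SRLG id with a compromise type and extent, and a per-link SRLG
      # set) follows the characterization suggested in Section 2.3 but
      # is otherwise assumed.
      from dataclasses import dataclass
      from typing import Dict, FrozenSet, List, Set

      @dataclass(frozen=True)
      class SRLG:
          ident: int
          kind: str          # e.g. "fiber-cable", "conduit", "ROW", "office"
          extent_km: float   # length of the shared exposure, 0 for offices

      def shared_risks(path_a: List[str], path_b: List[str],
                       link_srlgs: Dict[str, Set[SRLG]],
                       ignored_kinds: FrozenSet[str] = frozenset()) -> Set[SRLG]:
          """Return the SRLGs shared by two routes, ignoring compromise
          types the requester is willing to tolerate."""
          risks_a = set().union(*(link_srlgs[l] for l in path_a))
          risks_b = set().union(*(link_srlgs[l] for l in path_b))
          return {s for s in risks_a & risks_b if s.kind not in ignored_kinds}

      # The Fig. 2-2 situation: cables A-B and C-D both ride the X-Y conduit.
      xy = SRLG(1, "conduit", 5.0)
      link_srlgs = {"A-B": {SRLG(10, "fiber-cable", 80.0), xy},
                    "C-D": {SRLG(11, "fiber-cable", 75.0), xy}}
      assert shared_risks(["A-B"], ["C-D"], link_srlgs) == {xy}   # not diverse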
2.4 Other Unique Features of Optical Networks

   There are other major differences between optical networks and IP
   networks that have significant impacts on the design of the Optical
   Layer control plane.  They include the following two areas.

   - Bi-directionality: In an IP network, Label Switched Paths (LSPs)
     are inherently unidirectional.  However, current transport
     networks are bi-directionally oriented, mostly due to the
     evolution of two-way transmission in the Public Switched Telephone
     Network and to SONET/SDH line protection schemes [Doverspike00].
     This often requires the bi-directional connections provided by the
     optical layer to use the same numbered channel in each direction.
     As a result, a channel contention problem may occur between two
     bi-directional requests traveling in opposite directions.
     Signaling mechanisms have been proposed to resolve this type of
     contention [Ashwood00].

   - Protection and restoration: In an IP network, when a backup LSP is
     pre-established to protect against failure(s) on a working LSP,
     the backup LSP does not occupy any physical resources before a
     failure occurs.  However, in an optical network, a pre-established
     optical connection for backup does occupy the ports and channels
     on the path of the connection.  This can be used for 1+1
     protection, but not for shared mesh protection.  With shared mesh
     protection, the backup path is instead pre-selected, with or
     without the associated channels being chosen prior to any failure,
     and the ports/channels are physically cross-connected only after a
     failure on the working path has been detected.  See [Doverspike00]
     for more detailed discussions of various protection/restoration
     schemes.

2.5 Discussion and Summary

   Dealing with diversity seems to be an unavoidable requirement on
   optical layer routing.  It requires dealing with additional
   constraints in the routing process, but most importantly it requires
   additional state information to be available to the routing process.

   The physical constraints of optical technology apply inside an all-
   optical "domain of transparency".  Today's OTS is a simple "domain
   of transparency" consisting of WDM Mux/Demuxers and Optical
   Amplifiers.  Because an OTS is not easily reconfigurable, these
   constraints are dealt with at the time of installation and don't
   complicate routing and the control plane.

   As domains of transparency become both larger and software-
   reconfigurable, as discussed earlier, these physical constraints on
   connectivity and transmission quality become increasingly of concern
   to the control plane.  It is important to note that at present this
   evolution is largely technology driven: vendors pushing the
   technology envelope are competing fiercely to provide solutions
   which have higher capacity, can go further all-optically, are more
   reconfigurable, and are more cost-effective.  Routing constraints,
   which are essentially a by-product of this competitive dynamic, may
   well become more complex.  As vendors pursue their diverse visions
   it is quite plausible that the optical layer of the future will be
   made up of heterogeneous technologies which differ significantly in
   their routing implications.

   What are the control plane architecture choices in such an
   eventuality?  Alternative approaches that deserve consideration are:

   - Per-Domain Routing: In this approach each domain could have its
     own tuned approach to routing.  Inter-domain routing would be
     handled by a multi-domain or hierarchical protocol that allowed
     the hiding of local complexity.  Single-vendor domains might have
     proprietary intra-domain routing strategies.
   - Enforced Homogeneity: The capabilities of the control plane would
     impose constraints on system design and network engineering.  As
     examples: if control plane protocols did not deal with non-linear
     impairments, carriers would require their vendors to provide
     transport systems where these constraints were never binding.
     Transmission engineers could be required to deploy only domains
     where every possible route met all constraints not handled
     explicitly by the control plane, even if the cost penalties were
     severe.

   - Additional Regeneration: At (selected) OLXCs within a domain of
     transparency, the control plane could insert O/E/O regeneration
     into routes with transmission problems.  This might make all
     routes feasible again, but at the price of additional cost and
     complexity and with some loss of rate and format transparency.

   - Standardized Intra-Domain Routing Protocol: The examples discussed
     in Section 2 suggest that a single standardized protocol which
     tries to deal with the full range of possible topological and
     transmission constraints will be extremely complex and will
     require a lot of state information.  However, when combined with
     limited application of the two previous approaches it might be
     more plausible.

   Given the complexity of physical and connectivity impairments and
   diversity requirements, a valid question to ask is whether a
   centralized routing model, where routing is done centrally using a
   centralized database with a global network view, would be better
   than the distributed model favored in the Internet.  Here, we
   provide some pros and cons of each model.

   To the extent that the per-domain routing approach just discussed is
   used, the choice of model might be different depending on the
   characteristics of the domain.  For example, in a domain like Fig.
   2-1 it seems likely that a centralized model is more appropriate,
   because network elements like tunable lasers and reconfigurable
   OADMs seem on the surface to be unlikely peers to much more complex
   devices like OXCs or routers.  On the other hand, a purely "opaque"
   domain where impairment constraints play no role in routing would
   appear to be an excellent candidate for the distributed model.

   In the context of the complexities discussed in this paper, a
   centralized model has some advantages:

   - Information such as SRLGs and performance parameters, which change
     infrequently and are unlikely to be amenable to self-discovery,
     could reside in a central database and would not need to be
     advertised.

   - Routing dependencies among circuits (to ensure diversity, for
     example) are more easily handled centrally when the circuits do
     not share terminals, since the necessary state information should
     be more easily accessible in a centralized model.

   - Pre-computation of restoration paths and other computations that
     can benefit from the use of global state information may also
     benefit from centralization.

   There are, of course, significant disadvantages to the centralized
   model when compared to a distributed model:

   - If rapid restoration is required, it is not possible to rely on a
     centralized routing system to compute a recovery path for each
     failed lightpath on demand after a failure has been detected.
     The distributed model arguably will not have this problem.

   - The centralized approach is not consistent with the distributed
     routing philosophy prevalent in the Internet.  The reasons which
     drove the Internet's architecture - scalability, the inherent
     problems with hard state information, etc. - are largely relevant
     to optical networking.  In addition there is the major
     disadvantage that a centralized approach would seem to preclude
     integrated routing across the IP and optical boundary.

   A related issue is whether routes should be pre-computed.  It has
   been suggested, for example, that all routes (or at least a large
   number) be pre-computed and stored in a central database.  This
   potentially might allow more sophisticated algorithms to be used to
   filter out the routes violating transmission constraints.  There are
   however serious disadvantages (in addition to the disadvantages of
   the centralized model given above):

   - In a large national network there are just too many routes that
     might be needed, by orders of magnitude.  This is particularly
     true when diversity constraints and restoration routing may force
     weird routings.
   - Every time any parameter changes anywhere in the network, all
     routes using the impacted resource will need to be reexamined.

3. Business and Operational Realities

   The Internet technologies being applied to define the new Optical
   Layer control plane evolved in a very different business and
   operational environment than that of today's transport network
   provider.  The differences need to be clearly understood and dealt
   with if the new control plane is going to be a success.  The Optical
   Internetworking Forum, one of the principal standards groups in this
   area, has recently formed a Carrier Subgroup to provide guidance
   from this perspective for their standards activities.

   In this section we touch on two aspects of this problem: business
   models and the management of the introduction of new technology.

3.1 Business Models

   The cost of providing gigabit connections is expected to drop
   rapidly, but will still require dedicated use of expensive and
   periodically scarce capacity and equipment.  Therefore the ability
   to control network access, and to measure and bill for usage, will
   be critical.  Also, lightpath connections are expected to have quite
   long holding times (weeks to months) compared to LSPs in an IP
   network.  Therefore the collection of usage data and the nature of
   the connection establishment process have very different
   characteristics in the Optical Network than in an IP network.

   In addition, industry revenues from legacy services (voice and
   private line) are expected to dwarf those from IP transport for the
   next few years.  Meeting the needs of these services and migrating
   them to the operator's newer service platforms will also be a
   critical need for operators with extensive embedded revenues.  Thus
   the needs of services based on SONET/SDH, Ethernet, ATM, etc. will
   need to be given attention.  In addition, most operators hope that
   they will have many different ISPs and intranets as customers.  Thus
   the customer base for most operators will be quite diverse.
   Another area of prime concern is Operations Systems (OSs).  The
   opportunity to create a thinner and more nimble network management
   plane by off-loading many provisioning and data-basing functions
   onto a vendor-provided control plane and/or Element Management
   System (EMS) holds the promise of large and immediate benefits to
   operators in the form of reduced software development and more rapid
   deployment of new functionality.  This is a critical area for
   achieving scalability.

   In the short term the principal benefits of the proposed control
   plane are two: rapid provisioning and a reduction in the cost and
   complexity of OSs and operations.  Both of these benefits require
   that circuits be controlled end-to-end by the new control plane;
   otherwise provisioning times will be determined by those of the
   older, much slower segments, and OS costs and operations complexity
   may actually go up because of the need to interwork the old and the
   new worlds.  To avoid this, the capabilities of the new control
   plane need to be available end-to-end as soon as possible.  This
   will put a premium on the rapid development of standards for
   interworking across trust boundaries, for example between Local
   Exchange Carriers and national networks.

3.2 Managing The Introduction Of New Technology

   We expect optical layer hardware technology to continue to evolve
   very rapidly, with a very real possibility of additional
   "disruptive" advances.  The analog nature of optical technology
   compounds this problem for the control plane, because these advances
   are likely to be accompanied by complex technology-specific
   constraints on routing and functionality.  (Sections 2.1 and 2.2
   above provide examples of this.)  An architecture which allows the
   gradual and seamless introduction of new technologies into the
   network, without time-consuming and costly changes to embedded
   technologies and especially control planes, is highly desirable.

   When compared to the IP experience, several distinctions stand out:

   - The optical layer control plane seems more likely to be buffeted
     by hardware changes than is the IP control plane.
   - Optical layer innovations are currently being driven by start-up
     companies, with product innovation well ahead of the standards
     process.  Efforts at control plane standardization are much less
     mature than comparable IP efforts.  This is a matter of
     considerable concern, because neither rapid provisioning nor the
     desired operational improvements are likely if each vendor has a
     proprietary control plane, with interworking between vendors (and
     hence between networks, in most cases) left as a problem for
     operators' OSs to solve.

3.3 Service Framework Suggestions

   For the reasons given above and others, we expect that the best
   model for an optical layer control plane within a trust domain is
   one that pays heavy attention to the management of heterogeneous
   technologies and associated service capabilities.  This might be
   done by hiding complexities in subnetworks.  These subnetworks would
   then advertise only a standardized abstraction of their
   connectivity, capacity, and functionality capabilities.
   Hopefully this would allow even disruptive technologies such as all-
   optical subnetworks to be introduced with a minimum of impact on
   preexisting parts of the trust domain.

   Each network operator will have a need to define "branded" services
   - bundles of service functionality and SLAs with a specific price
   structure.  In a heterogeneous network it will be necessary to map a
   customer request for such a "branded" service onto the specific
   capabilities of each subnetwork.  This suggests a hierarchical model
   in which decisions about these mappings, about policies for peering
   with other networks, and about the overall management of the service
   offerings available to specific customers are made centrally, while
   the application of these policies is handled at the local or
   subnetwork level.

4. Security Considerations

   The solution developed to address the requirements defined in this
   document must address security aspects.

5. Acknowledgments

   This document has benefited from discussions with Michael Eiselt,
   Mark Shtaif, and our other AT&T colleagues.

References

   [Ashwood00]    Ashwood-Smith, P., et al., "MPLS Optical/Switching
                  Signaling Functional Description", Work in Progress,
                  draft-ashwood-generalized-mpls-signaling-00.txt.

   [Awduche99]    Awduche, D. O., Rekhter, Y., Drake, J., and Coltun,
                  R., "Multi-Protocol Lambda Switching: Combining MPLS
                  Traffic Engineering Control With Optical
                  Crossconnects", Work in Progress,
                  draft-awduche-mpls-te-optical-01.txt.

   [Chaudhuri00]  Chaudhuri, S., Hjalmtysson, G., and Yates, J.,
                  "Control of Lightpaths in an Optical Network", Work
                  in Progress, draft-chaudhuri-ip-olxc-control-00.txt.

   [Doverspike00] Doverspike, R. and Yates, J., "Challenges For MPLS
                  Protocols in the Optical Network Control Plane",
                  submitted for journal publication, June 2000 (online
                  at http://www.research.att.com/~rdd/).

   [ITU]          ITU-T Doc. G.663, Optical Fibers and Amplifiers,
                  Section II.4.1.2.

   [Kaminow97]    Kaminow, I. P. and Koch, T. L., editors, Optical
                  Fiber Telecommunications IIIA, Academic Press, 1997.

   [Moy98]        Moy, John T., OSPF: Anatomy of an Internet Routing
                  Protocol, Addison-Wesley, 1998.

   [Ramaswami98]  Ramaswami, R. and Sivarajan, K. N., Optical Networks:
                  A Practical Perspective, Morgan Kaufmann Publishers,
                  1998.

   [Tkach98]      Tkach, R., Goldstein, E., Nagel, J., and Strand, J.,
                  "Fundamental Limits of Optical Transparency", Optical
                  Fiber Communication Conf., Feb. 1998, pp. 161-162.

   [Yates99]      Yates, J. M., Rumsewicz, M. P., and Lacey, J. P. R.,
                  "Wavelength Converters in Dynamically-Reconfigurable
                  WDM Networks", IEEE Communications Surveys, 2Q1999
                  (online at
                  www.comsoc.org/pubs/surveys/2q99issue/yates.html).

Authors' Addresses:

   Angela Chiu
   AT&T Labs
   200 Laurel Ave., Rm A5-1F06
   Middletown, NJ 07748
   Phone: (732) 420-9057
   Email: alchiu@att.com

   John Strand
   AT&T Labs
   200 Laurel Ave., Rm A5-1D06
   Middletown, NJ 07748
   Phone: (732) 420-9036
   Email: jls@att.com

   Robert Tkach
   Celion Networks
   1 Shiela Dr., Suite 2
   Tinton Falls, NJ 07733
   Phone: (732) 747-9909
   Email: bob.tkach@celion.com