Network Working Group                                     Greg Bernstein
Internet Draft                                         Grotto Networking
Intended status: Informational                                 Young Lee
                                                                  Huawei

                                                           July 16, 2012

    Use Cases for High Bandwidth Query and Control of Core Networks

           draft-bernstein-alto-large-bandwidth-cases-02.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.
   Note that other groups may also distribute working documents as
   Internet-Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time. It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on January 16, 2013.

Copyright Notice

   Copyright (c) 2012 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Abstract

   This draft describes two generic use cases that illustrate
   application layer traffic optimization applied to high bandwidth
   core networks. The types of information and interactions needed to
   perform various optimizations are described. In addition, extensions
   to the existing ALTO protocol that are widely applicable to high
   bandwidth applications are suggested. These include bandwidth
   constraint representations for a diverse range of control and data
   plane technologies, as well as advanced filtering based on
   constraints.

Table of Contents

   1. Introduction
      1.1. Computing Clouds, Data Centers, and End Systems
   2. End System Aggregate Networking
      2.1. Aggregated Bandwidth Scaling
      2.2. Cross Stratum Optimization Example
      2.3. Data Center and Network Faults and Recovery
   3. Data Center to Data Center Networking
      3.1. Cross Stratum Optimization Examples
      3.2. Network and Data Center Faults and Reliability
   4. Cross Stratum Control Interfaces
   5. Potential ALTO Protocol Extensions
   6. Bandwidth Constraint Information
      6.1. Introduction
         6.1.1. Example Network: Provider's View
      6.2. Data and Control Plane Path Choices
      6.3. ALTO Extensions
         6.3.1. Mutually Constrained Paths
            6.3.1.1. Simple IP Network Example
            6.3.1.2. TDM Network Example
            6.3.1.3. JSON Encoding
         6.3.2. Cost-Capacity Graphs
            6.3.2.1. Simple TDM Example with Graph Reduction
            6.3.2.2. Ethernet MSTP Example with Multiple Graphs
            6.3.2.3. JSON Encoding
   7. Constraint Based Filtering
   8. Conclusion
   9. Security Considerations
   10. IANA Considerations
   11. References
      11.1. Informative References
   Author's Addresses
   Intellectual Property Statement
   Disclaimer of Validity

1. Introduction

   Cloud computing, network applications, software as a service (SaaS),
   platform as a service (PaaS), and infrastructure as a service (IaaS)
   are just a few of the terms used to describe situations where
   multiple computation entities interact with one another across a
   network. When the communication resources consumed by these
   interacting entities are significant compared with link or network
   capacity, opportunities may exist for more efficient utilization of
   available computation and network resources if both the computation
   and network stratums cooperate in some way. The application layer
   traffic optimization (ALTO) working group is tackling the similar
   problem of "better-than-random peer selection" for distributed
   applications based on peer-to-peer (P2P) or client-server
   architectures [1]. In addition, such optimization is important in
   content distribution networks (CDNs), as illustrated in [2].

   In the network stratum, particularly at the lower layers such as
   MPLS and optical, there are many restoration and recovery mechanisms
   to deal with network faults. The emergence of network based
   applications and cloud based disaster recovery/business recovery
   brings a new dimension to fault management, but also opportunities
   to deliver higher levels of reliability more efficiently. For
   example, the reliability requirements for mission critical
   applications are typically quantified by two key time parameters.
   The first is the Recovery Time Objective (RTO), the time to get the
   application back up and functioning, which is similar to network
   recovery time notions. The second is the Recovery Point Objective
   (RPO), which quantifies, in terms of time, the amount of data loss
   that can be tolerated when a disaster occurs. Different applications
   and organizations can have greatly different demands, ranging from
   milliseconds to 12 hours.
   In addition, the amount of data that may need to be transferred to
   meet these objectives can vary greatly among different application
   types. With recovery point objectives of, say, an hour or more, a
   dynamic optical network layer could be shared very efficiently so as
   to reduce the overall cost of achieving a given level of
   reliability. However, doing so requires cooperation between the
   application and network stratums.

   Generalized Multi-Protocol Label Switching (GMPLS) [3] can be and is
   being applied to various core networking technologies such as
   SONET/SDH and wavelength division multiplexing (WDM) [4]. GMPLS
   provides dynamic network topology and resource information, and the
   capability to dynamically allocate resources (provision label
   switched paths). Furthermore, the path computation element (PCE) [5]
   provides for traffic engineered path optimization.

   However, neither GMPLS nor PCE provides interfaces that are
   appropriate for an application layer entity to use, for the
   following reasons:

   . GMPLS routing exposes full network topology information, which
     tends to be proprietary to a carrier or to require specialized
     knowledge and techniques to make use of, e.g., the routing and
     wavelength assignment (RWA) problem in WDM networks [4].

   . Core networks typically consist of two or more layers, while
     applications typically only know about the IP layer and above.
     Hence applications would not be able to make direct use of PCE
     capabilities.

   . GMPLS signaling interfaces are defined for either peer GMPLS
     nodes or a user network interface (UNI) [6]. Neither of these is
     appropriate for direct use by an application entity.

   In this paper we discuss two general use cases that can generate
   core network flows that have significant bandwidth and may vary
   significantly over time. The "cross stratum optimization" problems
   generated by these use cases are discussed.
   Finally, we look at interfaces between the application and network
   "stratums" that can enable these types of optimizations, and how
   they can be created via extensions to the current ALTO protocol [7].

1.1. Computing Clouds, Data Centers, and End Systems

   While the definition of cloud computing or compute clouds is
   somewhat nebulous (or "foggy" if you will) [8], the physical
   instantiation of compute resources with network connectivity is very
   real and bounded by physical and logical constraints. For the
   purposes of this draft, we will call any network connected compute
   resource a data center if its network connectivity is significant
   compared either to the bandwidth of an individual WDM wavelength or
   with respect to the network links in which it is located. Hence we
   include in our definition very large data centers that feature
   multiple fiber accesses and consume more than 10 MW of power,
   moderate to large content distribution network (CDN) installations
   located in or near major internet exchange points, medium sized
   business centers, etc.

   We will refer to those computational entities that don't meet our
   bandwidth criteria for a data center as "end systems".

2. End System Aggregate Networking

   In this section we consider the fundamental use case of end systems
   communicating with data centers, as shown in Figure 1. In this
   figure the "clients" are end systems with relatively small access
   bandwidth compared to a WDM wavelength, e.g., under 100 Mbps. We
   show these clients roughly partitioned into three network related
   end user regions ("A", "B", and "C"). Given a particular network
   application, in a static situation each client in a region would be
   associated with a particular data center.
   [Figure: ASCII art showing clients in end user regions "A" (A1..AN),
   "B" (B1..BM), and "C" (C1..CK) connected through a network cloud to
   Data Centers 1, 2, and 3.]

         Figure 1. End system to data center communications.

2.1. Aggregated Bandwidth Scaling

   One of the simplest examples where the aggregation of end system
   bandwidth can quickly become significant to the "network" is video
   on demand (VoD) streaming services. Unlike a live streaming service,
   where IP or lower layer multicast techniques can generally be
   applied, in VoD the transmissions are unique between the data center
   and clients. For regular quality VoD we'll use an estimate of
   1.5 Mbps per stream (assuming H.264 coding); for HD VoD we'll use an
   estimate of 10 Mbps per stream. To fill a 10 Gbps capacity optical
   wavelength requires either 6,666 or 1,000 clients for regular or
   high definition respectively. Note that special multicasting
   techniques such as those discussed in [9] and peer assistance
   techniques such as provided in some commercial systems [10] can
   reduce the overall network bandwidth requirements.

   With current high speed internet deployment such numbers of clients
   are easily achieved; in addition, demand for VoD services can vary
   significantly over time, e.g., with new video releases, inclement
   weather (which increases the number of viewers), etc.

2.2. Cross Stratum Optimization Example

   In an ideal world both data centers and networks would have
   unlimited capacity; in actuality both can have constraints and
   marginal costs that vary with load or time of day. For example,
   suppose that in Figure 1 Data Center 3 has been primarily serving
   VoD to region "C" but that it has, at a particular period in time,
   run out of computation capacity to serve all the client requests
   coming from region "C". At this point we have a fundamental cross
   stratum optimization (CSO) problem. We want to see if we can
   accommodate additional client requests from region "C" by using a
   different data center than the fully utilized Data Center 3. To
   answer this question we need to know (a) the available capacity on
   other data centers to meet a request, (b) the marginal (incremental)
   cost of servicing the request on a particular data center with spare
   capacity, (c) the ability of the network to provide bandwidth
   between region "C" and a data center, and (d) the incremental cost
   of bandwidth from region "C" to a data center.

   [Figure: the network of Figure 1 with aggregated flows (marked with
   X characters) carrying region "C" client traffic across the network
   to Data Center 2.]

     Figure 2. Aggregated flows between end systems and data centers.

   In Figure 2 we show a possible result of solving the previously
   mentioned CSO problem. Here the additional client requests from
   region "C" are serviced by Data Center 2 across the network.
   Figure 2 also illustrates the possibility of setting up "express"
   routes across the network at the MPLS level or below. Such
   techniques, known as "optical grooming" or "optical bypass"
   [11], [12] at the optical layer, can result in significant equipment
   and power savings for the network by "bypassing" higher level
   routers and switches.

2.3. Data Center and Network Faults and Recovery

   Data center failures, whether partial or complete, can have a major
   impact on revenues in the VoD example previously described. If there
   is excess capacity in other data centers within the network
   associated with the same application, then clients could be
   redirected to those other centers if the network has the capacity.
   Moreover, MPLS and GMPLS controlled networks have the ability to
   reroute traffic very quickly while preserving QoS. As with general
   network recovery techniques [13], various combinations of
   pre-planning and "on the fly" approaches can be used to trade off
   between recovery time and the excess network capacity needed for
   recovery.

   In the case of network failures there is the potential for clients
   to be redirected to other data centers to avoid failed or over
   utilized links.

3. Data Center to Data Center Networking

   There are a number of motivations for data center to data center
   communications: on demand capacity expansion ("cloud bursting"),
   cooperative exchanges between business partners, offsite data
   backup, "rent before building", etc.
   In Figure 3 we show an example where a number of businesses, each
   with an "internal data center", contract with a large external data
   center for additional computational (which may include storage)
   capacity. The data centers may connect to each other via IP transit
   type services or, more typically, via some type of Ethernet virtual
   private line or LAN service.

   [Figure: Business #1..#N internal data centers connected through a
   network cloud to a large external data center.]

        Figure 3. Basic data center to data center networking.

3.1. Cross Stratum Optimization Examples

   In the DC-to-DC example of Figure 3 we can have computational
   constraints/limits at both local and remote data centers; fixed and
   marginal computational costs at local and remote data centers; and
   network bandwidth costs and constraints between data centers. Note
   that computing costs could vary by time of day along with the cost
   of power and demand. Some cloud providers have quite sophisticated
   compute pricing models, including reserved, on demand, and spot
   (auction) variants.

   In addition to possibly dynamically changing pricing, traffic loads
   between data centers can be quite dynamic. Data movement between
   data centers is another source of large network usage variation.
   Such peaks can be due to scheduled daily or weekly offsite data
   backup, bulk VM migration to a new data center, periodic virtual
   machine migration, etc.

3.2. Network and Data Center Faults and Reliability

   For networked applications that require high levels of
   reliability/availability, the network diagram of Figure 3 could be
   enhanced with redundant business locations and external data centers
   as shown in Figure 4. For example, cell phone subscriber databases
   and financial transactions generally require what is called
   geographic database replication, which results in extra
   communication between the sites supporting high availability. For
   example, if business #1 in Figure 4 required a highly available
   database related service then there would be additional
   communication flows from data center "1a" to data center "1b".
   Furthermore, if business #1 has outsourced some of its computation
   and storage needs to independent data center X, then for resilience
   it may want/need to replicate (hot-hot redundancy) this information
   at independent data center Y.

   [Figure: as Figure 3, with redundancy: each business has data
   centers "a" and "b" (e.g., Business #1 DC-a and DC-b), and
   Independent Data Centers X and Y are attached to the network.]

    Figure 4. Data center to data center networking with redundancy.

4. Cross Stratum Control Interfaces

   Two types of load balancing techniques are currently utilized in
   cloud computing. The first is load balancing within a data center
   and is sometimes referred to as local load balancing.
   Here one is concerned with distributing requests to appropriate
   machines (or virtual machines) in a pool based on current machine
   utilization. The second type of load balancing, known as global
   load balancing, is used to assign clients to a particular data
   center out of a choice of more than one within the network, and is
   our concern here. A number of commercial vendors offer both local
   and global load balancing products. Currently, global load
   balancing systems have very little knowledge of the underlying
   network. To make better assignments of clients to data centers,
   many of these systems use geographic information based on IP
   addresses. Hence we see that current systems are attempting to
   perform cross stratum optimization, albeit with very coarse network
   information. A more complete interface for CSO in the client
   aggregation case, which is also applicable in the "data center to
   data center" case, would be:

   1. A Network Query Interface - where the global load balancer can
      inquire as to the bandwidth availability between "client
      regions" and data centers.

   2. A Network Resource Reservation Interface - where the global load
      balancer can make explicit requests for bandwidth between client
      regions and data centers.

   3. A Fault Recovery Interface - for the global load balancer to
      make requests for expedited bulk rerouting of client traffic
      from one data center to another, or for the network layer to
      make requests to the application to help deal with network
      faults.

   The network query interface can be considered a superset of the
   functionality supported by the current ALTO protocol [7]. Potential
   extensions to ALTO for this purpose are given in the next section.

5. Potential ALTO Protocol Extensions

   This section discusses the applicability of the ALTO protocol and
   the extensions necessary to support a network query interface
   suitable for high bandwidth consuming applications. Before doing so
   we discuss general properties of the high bandwidth scenarios that
   may differ significantly from other uses of the ALTO protocol.

   The first has to do with scope and scale. The consumer of high
   bandwidth ALTO extensions is typically some type of application
   controller within a data center, as opposed to an individual end
   user. The number of such entities with a need for the high
   bandwidth related information is orders of magnitude smaller than,
   say, peer to peer networking users or applications closer to the
   end user. Since a network provider may consider this information
   sensitive, there may be a desire to limit its distribution to a
   "pre-registered" set of entities. Hence these extensions would be
   applicable to controlled or partially controlled environments.

   Secondly, there is the notion of time scales. In cloud services we
   already see variants such as "on demand" compute instances and
   "reserved" compute instances. For network resource queries we may
   be concerned with (a) current bandwidth availability, (b) bandwidth
   availability at a future time, or (c) bandwidth for a bulk data
   transfer of a given amount that must take place within a given time
   window.

   Time-dependent bandwidth information can be, and typically is,
   considered in network planning and provisioning systems. For
   example, a VoD provider knows ahead of time when the latest
   "blockbuster" film will be available via its service and can
   estimate, based on historical data, the bandwidth it will need to
   handle the subsequent demand. The following discussions, however,
   are restricted to "current time" for now.
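   Case (c) above reduces to simple arithmetic: moving a given volume
   within a given window implies a minimum average rate. A small
   sketch of that check (the function name and units are ours, for
   illustration only; they are not part of ALTO):

```python
def required_rate_gbps(volume_gbytes: float, window_s: float) -> float:
    """Minimum average rate, in Gbps, needed to move volume_gbytes
    gigabytes of data within a window of window_s seconds."""
    return volume_gbytes * 8 / window_s

# e.g., a 4,500 GB offsite backup that must complete within one hour
# requires a 10 Gbps average rate, i.e., a full 10G wavelength.
print(required_rate_gbps(4500, 3600))
```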
   Finally, another goal in the design of an interface between the
   application and networking stratums is to minimize the need for
   either stratum to know too much about the inner workings of the
   other. Hence, as much as possible, it is desirable to insulate the
   application stratum from technology specifics of the network. That
   said, data centers providing IaaS may prefer to specify flows and
   connectivity at a layer below IP, such as Ethernet.

   The key ALTO extensions useful for querying the network for high
   bandwidth consuming applications are:

   (a) Bandwidth Constraint Information
   (b) Constraint Based Filtering
   (c) Multi-cost information [MultiCost]
   (d) Endpoint Access Bandwidth Capacity (a new endpoint property)

   In the following sections we discuss (a) and (b).

6. Bandwidth Constraint Information

6.1. Introduction

   The amount of bandwidth available between two entities, or two sets
   of entities, can be of prime interest to applications that have
   stringent bandwidth requirements relative to a network's capacity.
   Such entities can be communicating across a WAN, a metro area, a
   LAN, or even within a compute cluster.

   One may want to query the network as to the available bandwidth in
   a number of different cases:

   (a) Bandwidth available between a single source-destination pair

   (b) Bandwidth between one particular source and several other
       destinations

   (c) Bandwidth between one set of sources and another set of
       destinations

   Case (a), bandwidth between two points, is well defined; however,
   in cases (b) and (c) there is some ambiguity. In cases (b) and (c)
   one may want to query for the bandwidth available to a single
   "flow" at a time, or for multiple simultaneous "flows" between
   sources and destinations.
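   The source/destination pairs covered by the three query shapes can
   be enumerated mechanically; a minimal sketch (the function name is
   ours, and a real ALTO service would return costs or bandwidths for
   the pairs, not just the pairs themselves):

```python
def query_pairs(sources, destinations):
    """Pairs covered by a bandwidth query.
    Case (a): one source and one destination give a single pair.
    Case (b): one source, several destinations.
    Case (c): full cross product of a source set and a destination set
    (a real service might instead accept an explicit pair list)."""
    return [(s, d) for s in sources for d in destinations]

print(query_pairs(["N0"], ["N2", "N3"]))  # case (b)
```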
   If the bandwidth query is for potentially simultaneous flows then
   there is the possibility that the flows of interest would (or
   could) share network resources, e.g., link capacity. Such a
   situation leads to what is known as a multi-commodity flow problem
   [NetOpt]. General formulations of this problem [NetOpt] allow for
   arbitrary path selection and can permit splitting of user demands
   across multiple paths if inverse multiplexing like techniques are
   available. Alternative formulations of multi-commodity flow
   problems exist [RWA] when path choices between a source and
   destination are restricted to an explicit list of paths (or a
   single path). In both formulations link capacities form a key
   optimization constraint.

   To perform better application layer traffic optimization, the
   presence and capacity of such "mutual bottleneck" links would need
   to be considered by "large bandwidth applications". This draft
   shows how a combination of abstract path link vectors and/or
   constrained cost graphs can be used to enable enhanced application
   layer traffic optimization. These techniques are illustrated with
   connectionless technologies such as IP and Ethernet, as well as
   MPLS and circuit switched technologies that can be controlled via
   GMPLS.

6.1.1. Example Network: Provider's View

   In Figure 1 we show an example network consisting of five nodes and
   six links. This is the network provider's view of the network and
   not necessarily information to be shared in detail with
   applications. We will use this same network to illustrate bandwidth
   constraint representations for different technologies. For
   illustrative purposes we consider only a single weight (cost) and
   bandwidth constraint per link. The units of bandwidth could be
   Mbps, Gbps, or wavelengths, depending upon the technology.
   These costs and constraints are from the network provider's
   perspective and may or may not be the sole guidance in path
   selection; e.g., non-shortest paths may be chosen depending upon
   the data and control plane technologies. However, when considering
   a path between a source and destination across this network, we sum
   the weights of each link along the path to obtain the total cost
   for the path.

   +----+  L0 Wt=10,BW=50                     +----+
   | N0 |-------------------------------------| N3 |
   +----+`.                                   +----+
     |     `.  L4 Wt=7                          |
     |       `-. BW=40                          |
     |          `. +----+                       |
     |            `| N4 |                       |
     |  L1       .'+----+                       |
     |  Wt=10   /                        L2     |
     |  BW=45  /                         Wt=12  |
     |        /L5 Wt=10                  BW=30  |
     |      .'    BW=45                         |
     |     /                                    |
     |    /                                     |
   +----+.'        L3 Wt=15 BW=42             +----+
   | N1 |.....................................| N2 |
   +----+                                     +----+

            Figure 1 Generic Constrained Network Example

6.2. Data and Control Plane Path Choices

   In this section we survey common data and control plane
   technologies with respect to the path choices that they may allow,
   as well as the methods one can use to infer available paths.
   Methods for inferring paths influence how efficiently the network
   layer can convey cost and constraint information to the application
   layer; i.e., even if the control plane limits us to a single fixed
   path between a source and destination, if we need many paths
   between many sources and destinations it can be very efficient if
   such information can be derived from a simple graph representation.

   Technologies that allow arbitrary placement of paths across a
   network include: circuit switched technologies (WDM, TDM), strictly
   connection oriented packet technologies (MPLS, ATM, and Frame
   Relay), and connection oriented modes of multi-purpose protocols
   such as InfiniBand's CO service.
   In these cases a network provider can furnish a graph
   representation of the network suitable for the application
   optimizer to choose routes. In some cases, for example in WDM
   networks due to optical impairments, the usable paths may be
   restricted in a way not readily discerned from a simple graph
   representation. In such a case a list of possible paths would need
   to be furnished.

   For IP, a connectionless technology, one typically thinks of a
   single path between each source and destination (not considering
   equal cost multipath). Although no choice in path selection is
   available, in the case of single area OSPF the paths can be derived
   from a graph, while BGP [BGP4] uses techniques based on policies
   and path vectors (AS_PATH) as part of its route selection process,
   and these are not derived from graphs. Multi-Topology Routing
   enhancements to OSPF [MT-OSPF] can allow multiple path choices
   between a source and destination, and such paths could be derived
   from their corresponding graphs.

   Ethernet switching offers the greatest variety of path selection
   capabilities, depending upon the control plane employed. The basic
   Ethernet bridge specification, 802.1D [802.1D], utilizes a single
   tree structure as the communication backbone between all nodes.
   Hence one has no choice of path between nodes, and the paths can be
   easily derived from a graph of the spanning tree. We will also see
   that such graphs are easy to reduce. IEEE 802.1Q [802.1Q] includes
   virtual LANs (VLANs) and allows for multiple spanning trees. The
   multiple spanning tree protocol (MSTP) allows for the assignment of
   VLANs to trees. Hence we have more than one choice of path, but all
   flows within the same VLAN have to share the same tree. Note that
   trees can be given as graphs, so this is a case where we may want
   multiple graphs.
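   To illustrate how easily paths fall out of a spanning tree graph:
   in a single-tree bridged network the path between any two nodes is
   unique, so a simple walk of the tree recovers it. A sketch (the
   example topology is invented for illustration and is not one of the
   figures in this draft):

```python
from collections import deque

def tree_path(tree, src, dst):
    """Unique src->dst path in a spanning tree given as an
    adjacency dict mapping each node to its neighbor list."""
    parent = {src: None}
    q = deque([src])
    while q:                      # breadth-first walk of the tree
        u = q.popleft()
        for v in tree[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    path, node = [], dst          # walk parents back from dst
    while node is not None:
        path.append(node)
        node = parent[node]
    return path[::-1]

# Invented example tree: N0-N4, N1-N4, N3-N4, N2-N3
tree = {"N0": ["N4"], "N4": ["N0", "N1", "N3"],
        "N1": ["N4"], "N3": ["N4", "N2"], "N2": ["N3"]}
print(tree_path(tree, "N0", "N2"))  # ['N0', 'N4', 'N3', 'N2']
```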
645 OpenFlow [OpenFlow] capable switches permit general forwarding
646 behavior based on general packet header matching. These can include
647 Ethernet destination and source addresses, IP destination and source
648 addresses, as well as other protocol related fields. Since both
649 source and destination information can be utilized in forwarding,
650 OpenFlow can enable traffic engineering like a connection oriented
651 packet switching technology. Hence arbitrary path selection based on
652 a graph is possible.

654 6.3. ALTO Extensions

656 In this section we give two different models for representing
657 bandwidth constraints, give several examples of both approaches, and
658 furnish an initial JSON encoding for both approaches. We end this
659 section with a discussion of which approach a network provider may
660 want to choose within a given context.

662 6.3.1. Mutually Constrained Paths

664 As discussed in section 6.2, the network's data or control plane may
665 dictate the paths taken between a source and destination. Even if
666 such paths could be derived from a graph, the network provider may
667 choose to provide information about the paths to promote information
668 hiding or to minimize the amount of information needed to be
669 transferred via ALTO. For example, if the application is asking for
670 cost/capacity information between a few sources and destinations,
671 providing path information for these few paths may take much less
672 space than a corresponding graph.

674 In the following we give examples of paths with shared link
675 bandwidth constraints for two different technologies, then we provide
676 a tentative JSON encoding for use with the ALTO protocol.

678 6.3.1.1. Simple IP Network Example

680 Consider Figure 1 as a single OSPF area with N0 representing a large
681 data center and nodes N2 and N3 as potential clients.
The
682 corresponding path link vectors with their cost (sum
683 of weights) and link bandwidth constraints are:

685    Path  Src-Dest   Path Vector   Path Cost
686    P1    N0-N2:     {L0, L2}      22
687    P2    N0-N3:     {L0}          10
688    ----------------------------------
689    Link  Bandwidth
690    L0    50
691    L2    30

693 Table 1. Path Vectors for paths P1 and P2, and used link capacities.

695 From an optimization perspective each (capacitated) link is a
696 potential traffic constraint. From Table 1, since the paths from N0-
697 N2 and N0-N3 share a common link, L0, the sum of their bandwidth
698 flows must be less than the capacity of L0 (50 units). In addition,
699 the capacity constraint on link L2 tells us that the bandwidth of the
700 traffic from N0-N2 must be less than 30 units. This information, as
701 well as the total costs of the two paths, is all that is needed for
702 a constrained joint optimization to proceed. Detailed information on
703 link costs (as seen by the network) is not necessary, nor is
704 information on unused links.

706 6.3.1.2. TDM Network Example

708 Now suppose the network of Figure 1 is a TDM network controlled by
709 GMPLS. Once again N0 represents a large data center and nodes N2
710 and N3 are potential clients. However, in this case the network
711 provider offers an additional path, P3, for getting from N0-N2.

713    Path  Src-Dest   Path Vector   Path Cost
714    P1    N0-N2      {L0, L2}      22
715    P2    N0-N3      {L0}          10
716    P3    N0-N2      {L1, L3}      25
717    ----------------------------------
718    Link  Bandwidth
719    L0    50
720    L1    45
721    L2    30
722    L3    42

724 Table 2. Path Vectors for P1-P3 and used link capacities.

726 Once again no information in addition to that shown in Table 2 is
727 required to perform a constrained optimization. However, path P3 is
728 the only path using links L1 and L3. Link L3's capacity is 42 units
729 and is less than link L1's capacity of 45 units.
Satisfying link
730 L3's capacity constraint (for the set of paths P1-P3) implies that
731 link L1's capacity constraint is always satisfied, and hence no
732 information on link L1 needs to be sent from the network. In
733 particular, the network could send the information shown in Table 3,
734 where we have replaced links L1 and L3 with an "abstract link"
735 (AL13) with capacity equal to that of link L3.

737    Path  Src-Dest   Path Vector   Path Cost
738    P1    N0-N2      {L0, L2}      22
739    P2    N0-N3      {L0}          10
740    P3    N0-N2      {AL13}        25
741    ----------------------------------
742    Link  Bandwidth
743    L0    50
744    L2    30
745    AL13  42

747 Table 3. Path Vectors for P1-P3 and abstract link capacities.

749 Note that simplifications such as the preceding can frequently be
750 performed and can result in significant information savings. Also,
751 this constraint information reduction was performed without the
752 network provider having knowledge of the application layer's traffic
753 demands. Methods for performing these reductions may be specific to
754 service providers and not subject to standardization.

756 6.3.1.3. JSON Encoding

758 In some cases there may be more than one path given between a source
759 and destination. In this case the network needs to furnish, with
760 each path, the following information: (source, destination), (path id
761 if more than one between source and destination), costs, overall
762 path constraint (if any), and the list of mutual abstract links for
763 this path. In addition, we need to furnish capacities for all mutual
764 abstract links mentioned.
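For concreteness, the information just enumerated could be carried for the Table 3 example roughly as sketched below. The member names anticipate the tentative PathData and SharedAbstractLink objects given next and are assumptions for illustration, not a normative encoding:

```python
import json

# Hypothetical serialization of the Table 3 information: per-path cost
# and mutual links, plus capacities of all mentioned (abstract) links.
network_path_data = {
    "paths": {
        "P1": {"source": "N0", "dest": "N2", "wt": 22,
               "mutual-links": ["L0", "L2"]},
        "P2": {"source": "N0", "dest": "N3", "wt": 10,
               "mutual-links": ["L0"]},
        "P3": {"source": "N0", "dest": "N2", "wt": 25,
               "mutual-links": ["AL13"]},
    },
    "shared-links": {"L0": {"bw": 50}, "L2": {"bw": 30},
                     "AL13": {"bw": 42}},
}
print(json.dumps(network_path_data, indent=1))
```

Note how little the network reveals: three path costs, the shared-link memberships, and three capacities, rather than the full topology.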
766    object {
767      PIDName source;
768      PIDName dest;
769      JSONNumber wt;     //A numerical path cost
770      JSONNumber delay;  //A numerical path latency, optional
771      JSONNumber bw;     //A numerical bandwidth constraint, optional
772      LIDName mutual-links<1..*>; //shared constrained links, optional
773    } PathData;

775 Note that "mutual-links" is a JSON array that contains the names of
776 the shared links that this path depends upon (may be empty). Note
777 that all costs are associated with path entities, while constraints
778 may be associated with paths or links.

780    object {
781      JSONNumber bw;  //A numerical bandwidth constraint, optional
782    } SharedAbstractLink;

784 Note that the shared abstract link only contains capacity
785 information. This is quite different from the case where a graph is
786 shared.

788    object {
789      PathData [pathname]<0..*>;           // The individual path info
790      SharedAbstractLink [linkname]<0..*>; //Shared link info

792    } NetworkPathData;

794 6.3.2. Cost-Capacity Graphs

796 As discussed in section 6.2, the network's data or control plane may
797 allow arbitrary path selection, and hence a cost-capacity graph
798 representation would be needed for the optimization to fully take
799 advantage of this network flexibility.

801 In the case where path choice is limited, but the paths can be
802 derived from a graph, it may be useful for the network to supply a
803 graph to reduce the amount of information transferred via the ALTO
804 protocol. Suppose the application is interested in many source-
805 destination pairs. In this case the amount of path information,
806 including abstract link constraints, could significantly exceed the
807 information size of a graph.
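Graph reductions of the sort used in the following examples often amount to merging two links in series at a node that is not of interest: the weights add, while the available bandwidth is limited by the smaller link. A minimal sketch (the (weight, bandwidth) tuple representation is an assumption for illustration):

```python
# Series reduction: a pass-through node can be removed by replacing its
# two incident links with one abstract link whose weight is the sum and
# whose bandwidth is the minimum of the originals.
# Links are represented as (weight, bandwidth) tuples.

def series_merge(link_a, link_b):
    wt_a, bw_a = link_a
    wt_b, bw_b = link_b
    return (wt_a + wt_b, min(bw_a, bw_b))

# Merging L0 (Wt=10, BW=50) and L2 (Wt=12, BW=30) of Figure 1 across
# their shared node gives an abstract link with Wt=22 and BW=30.
print(series_merge((10, 50), (12, 30)))  # (22, 30)
```

Repeating this merge (and dropping dangling links to nodes of no interest) is one way a provider could mechanically produce reduced graphs, though, as noted, such methods need not be standardized.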
809 In the following we give examples of cost-capacity graphs for a
810 technology (TDM) that can offer arbitrary path choice, and for a
811 technology (MSTP Ethernet) that offers limited path choice but where
812 specifying graphs can result in significant efficiencies. We then
813 provide a tentative JSON encoding of cost-capacity graphs for use
814 with the ALTO protocol.

816 6.3.2.1. Simple TDM Example with Graph Reduction

818 Consider again the case where Figure 1 represents a TDM network, and
819 in this case the provider will permit the application to make path
820 choices. Suppose that the application only involves nodes N0, N1, and
821 N2, and not N3 or N4. By studying the structure of the graph of
822 Figure 1 one can derive the reduced graph shown in Figure 2, which
823 maintains all relevant cost and capacity information from the point
824 of view of nodes N0, N1, and N2. In particular we were able to remove
825 nodes N3 and N4, substitute abstract link AL0M2 for links L0 and L2,
826 and substitute abstract link AL4M5 for links L4 and L5. Note that any
827 such reductions, approximate or exact, are at the network provider's
828 discretion.

830    +----+
831    | N0 |-------------------------------------------+
832    +----+ `.              AL0M2                      |
833      |      `.            Wt=22,BW=30                |
834      |        `-.                                    |
835      |           `.                                  |
836      |             |      AL4M5                      |
837      |  L1         .      Wt=17,BW=40                |
838      |  Wt=10     /                                  |
839      |  BW=45    /                                   |
840      |          /                                    |
841      |        .'                                     |
842      |       /                                       |
843      |      /                                        |
844    +----+ .'    L3 Wt=15 BW=42                     +----+
845    | N1 |.........................................| N2 |
846    +----+                                          +----+
847    Figure 2. Reduced graph of Figure 1 from the perspective of
848    nodes N0, N1, and N2.

850 The resulting information to be conveyed concerning this reduced
851 graph is shown in Table 4.

853    Link   End Nodes   Bandwidth   Cost
854    AL0M2  (N0, N2)    30          22
855    L1     (N0, N1)    45          10
856    L3     (N1, N2)    42          15
857    AL4M5  (N0, N1)    40          17

859 Table 4. Representation of the graph of Figure 2.

861 6.3.2.2.
Ethernet MSTP Example with Multiple Graphs

863 Consider the Ethernet network shown in Figure 3 running the MSTP
864 with three multiple spanning tree instances defined. Suppose the
865 application is interested in connectivity between nodes N1, N3, N5,
866 N6, and N7. In Figures 4-6 we show the spanning tree instances along
867 with a high fidelity graph reduction that removes nodes that are not
868 of interest and abstracts links as needed.

870 Let's compare these reduced graph representations with that of a
871 path representation. Since we have n=5 communicating nodes of
872 interest, this leads to n*(n-1)/2 = 10 potential paths per MSTI for
873 which the network would need to furnish cost and constraint
874 information, as in section 6.3.1. In the case of graphs reduced for
875 the nodes of interest from tree structures, it can be shown that the
876 number of links in the graph is equal to (n-1); e.g., here the
877 reduced graph consists of 5 nodes and 4 links.

879       +----+ L4
880      /| N3 |..______              +----+
881     |  +----+       `````----| N4 |..__   L6
882     /    .-'+----+             ``--.__   +----+
883    /  .-'      |                   ``--..| N7 |
884   |  L2 .-'    |                         +----+
885   /  .-'       /                      .'   |
886  /  .'        |                     /     /
887 |  .-'        /                   .'     |
888 /  .-'  L9   |                  .'      |
889 +-+--+ .-'   |        L11     /        /
890 | N2 |.-'  L5 /             .'        |
891 +----+      |             /         /L8
892   \         |           .'         |
893    \  L1    /         .'          |
894     \      |         /           /
895      \     /       .'           |
896    +----+ |      .'            /
897    | N1 |.__ L3  |            /     +----+
898    +----+   `--._  /        .'  __..| N6 |
899          ``-.._  +----+ __..--''   +----+
900               ``-.| N5 |.--''  L7
901                   +----+
902    Figure 3. Ethernet Network supporting MSTP.

904          L4                            AL4M6
905         +--+                            +--+
906  +--+ __..--|N4|`.             +--+ __..--|N7|
907  |N3|--'    +--+  \ L6         |N3|--'    +--+
908  +--+             `.  +--+       |
909    /                `.  /         \
910  L2 /                 +--+  /       |
911    .'                 |N7|  .'AL1M2  \  L8
912   /                   +--+  /         |
913  +--+   MSTI #1         /   +--+       \
914  |N2|                  /    |N1|        |
915  +--+             L8  |     +--+         \
916    \      (a)        /       (b)         +--+
917     |   L1          /                  .'|N6|
918      \    +--+     +--+              .'  +--+
919       \ .'|N6|     |N5|.'  L7
920  +--+    +--+   .'  +--+               +--+
921  |N1|      |N5|.'  L7
922  +--+      +--+
923  Figure 4.
(a) Spanning tree instance #1, (b) Reduced graph from the
924 perspective of nodes N1, N3, N5, N6, N7.

926                 +--+
927       +--+ L4_..-|N4|           +--+
928       |N3|.--''   +--+          |N3||
929       +--+ .-'      |           +--+\
930           .-'       /                |
931        _.-'        |    +--+     \      +--+
932      .-'  L9       |    |N7|      |     |N7|
933    .-'            /     +--+       \    +--+
934  +--+            |       +   AL4M5  \    +
935  |N2|     L5    /        |           |   |
936  +--+ MSTI #2  |     L8 /             \  L8 /
937    |          /        |                  /
938   (a)        /        /         (b)  \   /
939    |        +--+     |                 +--+
940    L3      /  .'|N6|  \              .'|N6|
941  +--+    +--+ .' +--+     +--+ L3 +--+ .' +--+
942  |N1|-------|N5|.' L7     |N1|-------|N5|.' L7
943  +--+       +--+          +--+       +--+
944  Figure 5. (a) Spanning tree instance #2, (b) Reduced graph from the
945  perspective of nodes N1, N3, N5, N6, N7.

947                 +--+
948       +--+ L4 __.|N4|`.        +--+  AL4M6
949       |N3|---'   +--+  \L6     |N3|.__
950       +--+              `. +--+     ``--...__
951         /                 `.             ``--..
952       L2 /                  +--+            +--+
953       .'     MSTI #3       /|N7|           /|N7|
954      /                  .' +--+         .' +--+
955    +--+        L11    /     |    L11  /     |
956    |N2|              /     /         /     /
957    +--+   (a)      .'   L8/   (b)  .'   L8/
958      /            |      /        |      /
959     /            /      /        /      /
960   .'           +--+   .'       +--+
961   /            |N6|   /        |N6|
962  +--+  L3   +--+ +--+    +--+  L3   +--+ +--+
963  |N1|.......|N5|         |N1|.......|N5|
964  +--+       +--+         +--+       +--+
965  Figure 6. (a) Spanning tree instance #3, (b) Reduced graph from the
966  perspective of nodes N1, N3, N5, N6, N7.

968 In many data center applications, all communicating virtual machines
969 (VM) need to be placed within the same VLAN. MSTP allows the
970 assignment of VLANs to MSTIs; hence a reduced graph representation
971 can provide a very good mechanism for determining an optimum fit
972 between communicating VM traffic patterns and MSTI VLAN assignment.

974 6.3.2.3. JSON Encoding

976 Like the current ALTO filtered cost map, a request for a cost-
977 capacity graph would take source and destination PIDs as inputs. In
978 JSON notation we could represent the return graph or graphs as a
979 JSON object containing link objects. As we saw in the Ethernet case,
980 it may be useful to supply more than one graph.
In addition,
981 restrictions on routing may apply, such as the requirement that only
982 the shortest path between source and destination is a valid route
983 (e.g., OSPF routing for IP) or that all routes come from the same
984 graph (e.g., VLAN assignment to an MSTI in MSTP Ethernet).

986 Hence we are led to a tentative JSON encoding that includes named
987 link objects, named graph objects, and a versioned container for
988 holding graphs and any other general information such as the
989 previously mentioned restrictions.

991    object {
992      NIDName aend;      // Node ids are similar to PIDs but
993      NIDName zend;      // may not have end points
994      JSONNumber wt;     //A numerical routing cost
995      JSONNumber delay;  //A numerical latency cost, optional
996      JSONNumber bw;     //A numerical bandwidth "cost", optional
997      // Other costs, private or experimental, could be added,
998      // for example, attributes related to reliability or economic
999      // cost. Only one cost of each type would be permitted.
1000     // Note a multi-cost-like mechanism could be used.
1001   } LinkData

1003   // Collection of links each identified by link id (LID) name.
1004   object {
1005     LinkData [lidname]<0..*>; // Link id (LID) would be an identifier
1006     ...                       // similar to a PID or NID and
1007                               // identifies the link
1008   } NetworkGraphData;
1009   // Finally, multiple graph encapsulation and versioning

1011   object {
1012     VersionTag map-vtag;
1013     NetworkGraphData [graphname]<1..*>; //named graphs
1014     ... // other information such as graph choice restrictions
1015         // or routing restrictions.

1017   } InfoResourceNetwork;

1019 A graph name is formatted like a PIDName but names a graph.

1021 7. Constraint Based Filtering

1023 Young's stuff here.

1025 8. Conclusion

1027 In this draft we have discussed two generic use cases that motivate
1028 the usefulness of general interfaces for cross stratum optimization
1029 in the network core.
In our first use case, network resource usage
1030 became significant due to the aggregation of many individually
1031 unique client demands, while in the second use case, where data
1032 centers were communicating with each other, bandwidth usage was
1033 already significant enough to warrant the use of private line/LAN
1034 type network services.

1036 Both use cases result in optimization problems that trade off
1037 computational versus network costs and constraints. Both featured
1038 scenarios where advanced reservation, on demand, and recovery type
1039 service interfaces could prove beneficial. In the later sections of
1040 this document we showed how ALTO concepts [1] and the ALTO protocol
1041 could be used and extended to support joint application network
1042 optimization for applications consuming large amounts of bandwidth.

1044 9. Security Considerations

1046 TBD

1048 10. IANA Considerations

1050 This informational document does not make any requests for IANA
1051 action.

1053 11. References

1055 11.1. Informative References

1057 [1]  "draft-ietf-alto-reqs-09." [Online]. Available:
1058      http://datatracker.ietf.org/doc/draft-ietf-alto-reqs/. [Accessed:
1059      17-May-2011].
1060 [2]  J. Medved, N. Bitar, S. Previdi, B. Niven-Jenkins, and G. Watson,
1061      "Use Cases for ALTO within CDNs." [Online]. Available:
1062      http://tools.ietf.org/html/draft-jenkins-alto-cdn-use-cases-02.
1063      [Accessed: 06-Mar-2012].
1064 [3]  E. Mannie, Ed., "Generalized Multi-Protocol Label Switching (GMPLS)
1065      Architecture, RFC 3945." Oct-2004.
1066 [4]  Y. Lee, G. Bernstein, and W. Imajuku, Eds., "Framework for GMPLS
1067      and PCE Control of Wavelength Switched Optical Networks (WSON), RFC
1068      6163." Apr-2011.
1069 [5]  A. Farrel, J. P. Vasseur, and J. Ash, "A Path Computation Element
1070      (PCE)-Based Architecture, RFC 4655." Aug-2006.
1071 [6]  G. Swallow, J. Drake, H. Ishimatsu, Y.
Rekhter, "Generalized
1072      Multiprotocol Label Switching (GMPLS) User-Network Interface (UNI):
1073      Resource ReserVation Protocol-Traffic Engineering (RSVP-TE) Support
1074      for the Overlay Model, RFC 4208," Oct-2005.
1075 [7]  Y. R. Yang, R. Alimi, and R. Penno, "ALTO Protocol." [Online].
1076      Available: http://tools.ietf.org/html/draft-ietf-alto-protocol-10.
1077      [Accessed: 05-Mar-2012].
1078 [8]  M. Armbrust, A. Fox, R. Griffith, A. D. Joseph, R. Katz, A.
1079      Konwinski, G. Lee, D. Patterson, A. Rabkin, I. Stoica, and M.
1080      Zaharia, "A view of cloud computing," Commun. ACM, vol. 53, pp. 50-
1081      58, Apr. 2010.
1082 [9]  K. A. Hua and S. Sheu, "Skyscraper broadcasting: a new broadcasting
1083      scheme for metropolitan video-on-demand systems," in Proceedings of
1084      the ACM SIGCOMM '97 conference on Applications, technologies,
1085      architectures, and protocols for computer communication, Cannes,
1086      France, 1997, pp. 89-100.
1087 [10] "Adobe Flash Media Server 4.0 * Building peer-assisted networking
1088      applications." [Online]. Available:
1089      http://help.adobe.com/en_US/flashmediaserver/devguide/WSa4cb07693d12
1090      3884520b86f312a354ba36d-8000.html. [Accessed: 13-May-2011].

1092 [11] Rudra Dutta and George N. Rouskas, "Traffic grooming in WDM
1093      networks: Past and future," IEEE Network, vol. 16, no. 6, pp. 46-
1094      56, 2002.
1095 [12] Keyao Zhu and B. Mukherjee, "Traffic grooming in an optical WDM
1096      mesh network," IEEE Journal on Selected Areas in Communications,
1097      vol. 20, no. 1, pp. 122-133, 2002.
1098 [13] G. Bernstein, B. Rajagopalan, and D. Saha, Optical Network
1099      Control: Architecture, Protocols, and Standards. Addison-Wesley
1100      Professional, 2003.
1101 [14] B. Awerbuch and Y. Shavitt, "Topology aggregation for directed
1102      graphs," IEEE/ACM Transactions on Networking, vol. 9, no. 1, pp.
1103      82-90, 2001.
1104 [15] S. Uludag, K.-S. Lui, K. Nahrstedt, and G.
Brewster, "Analysis of
1105      Topology Aggregation techniques for QoS routing," ACM Comput. Surv.,
1106      vol. 39, Sep. 2007.
1107 [16] K. Nichols, D. L. Black, S. Blake, and F. Baker, "Definition of
1108      the Differentiated Services Field (DS Field) in the IPv4 and IPv6
1109      Headers." RFC2474. Available: http://tools.ietf.org/html/rfc2474.
1110 [17] D. O. Awduche and J. Agogbua, "Requirements for Traffic
1111      Engineering Over MPLS." RFC2702. Available:
1112      http://tools.ietf.org/html/rfc2702.

1114 Author's Addresses

1116    Greg M. Bernstein
1117    Grotto Networking
1118    Fremont, California, USA
1119    Phone: (510) 573-2237
1120    Email: gregb@grotto-networking.com

1122    Young Lee
1123    Huawei Technologies
1124    1700 Alma Drive, Suite 500
1125    Plano, TX 75075
1126    USA
1127    Phone: (972) 509-5599
1128    Email: ylee@huawei.com

1130 Intellectual Property Statement

1132    The IETF Trust takes no position regarding the validity or scope of
1133    any Intellectual Property Rights or other rights that might be
1134    claimed to pertain to the implementation or use of the technology
1135    described in any IETF Document or the extent to which any license
1136    under such rights might or might not be available; nor does it
1137    represent that it has made any independent effort to identify any
1138    such rights.

1140    Copies of Intellectual Property disclosures made to the IETF
1141    Secretariat and any assurances of licenses to be made available, or
1142    the result of an attempt made to obtain a general license or
1143    permission for the use of such proprietary rights by implementers or
1144    users of this specification can be obtained from the IETF on-line
1145    IPR repository at http://www.ietf.org/ipr

1147    The IETF invites any interested party to bring to its attention any
1148    copyrights, patents or patent applications, or other proprietary
1149    rights that may cover technology that may be required to implement
1150    any standard or specification contained in an IETF Document.
Please
1151    address the information to the IETF at ietf-ipr@ietf.org.

1153 Disclaimer of Validity

1155    All IETF Documents and the information contained therein are
1156    provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION
1157    HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY,
1158    THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
1159    WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
1160    WARRANTY THAT THE USE OF THE INFORMATION THEREIN WILL NOT INFRINGE
1161    ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
1162    FOR A PARTICULAR PURPOSE.

1164 Acknowledgment

1166    Funding for the RFC Editor function is currently provided by the
1167    Internet Society.