Network Working Group                         Greg Bernstein (Grotto)
Internet Draft                                      Young Lee (Huawei)
Intended status: Informational                           June 28, 2011

    Use Cases for High Bandwidth Query and Control of Core Networks

           draft-bernstein-alto-large-bandwidth-cases-00.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with
   the provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups.  Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html.

   This Internet-Draft will expire on December 28, 2011.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.

Abstract

   This draft describes two generic use cases that illustrate
   application layer traffic optimization concepts applied to high
   bandwidth core networks.  For the purposes of this document, high
   bandwidth means bandwidth that is significant with respect to the
   capacity of a wavelength in a wavelength division multiplexed (WDM)
   optical transport system, e.g., 10-40 Gbps or more.
   For each of these generic use cases, we present a generic
   optimization problem, look at the type of information needed (query
   interface) to perform the optimization, investigate a reservation
   interface to request network resources, and also consider enhanced
   availability and recovery scenarios.

Table of Contents

   1. Introduction
      1.1. Computing Clouds, Data Centers, and End Systems
   2. End System Aggregate Networking
      2.1. Aggregated Bandwidth Scaling
      2.2. Cross Stratum Optimization Example
      2.3. Data Center and Network Faults and Recovery
      2.4. Cross Stratum Control Interfaces
   3. Data Center to Data Center Networking
      3.1. Cross Stratum Optimization Examples
      3.2. Network and Data Center Faults and Reliability
      3.3. Cross Stratum Control Interfaces
   4. Conclusion
   5. Security Considerations
   6. IANA Considerations
   7. References
      7.1. Informative References
   Authors' Addresses
   Intellectual Property Statement
   Disclaimer of Validity

1. Introduction

   Cloud computing, network applications, software as a service (SaaS),
   platform as a service (PaaS), and infrastructure as a service (IaaS)
   are just a few of the terms used to describe situations where
   multiple computation entities interact with one another across a
   network.  When the communication resources consumed by these
   interacting entities are significant compared with link or network
   capacity, opportunities may exist for more efficient utilization of
   available computation and network resources if the computation and
   network strata cooperate in some way.  The Application-Layer Traffic
   Optimization (ALTO) working group is tackling the similar problem of
   "better-than-random peer selection" for distributed applications
   based on peer-to-peer (P2P) or client-server architectures [16].  In
   addition, such optimization is important in content distribution
   networks (CDNs), as illustrated in [17].

   Generalized Multi-Protocol Label Switching (GMPLS) [18] can be, and
   is being, applied to various core networking technologies such as
   SONET/SDH [19] and wavelength division multiplexing (WDM) [20].
   GMPLS provides dynamic network topology and resource information,
   and the capability to dynamically allocate resources (provision
   label switched paths).  Furthermore, the Path Computation Element
   (PCE) [21] provides for traffic engineered path optimization.

   However, neither GMPLS nor PCE provides interfaces that are
   appropriate for an application layer entity to use, for the
   following reasons:

   . GMPLS routing exposes full network topology information, which
     tends to be proprietary to a carrier or to require specialized
     knowledge and techniques to make use of, e.g., the routing and
     wavelength assignment (RWA) problem in WDM networks [20].
   . Core networks typically consist of two or more layers, while
     applications typically know only about the IP layer and above.
     Hence applications would not be able to make direct use of PCE
     capabilities.

   . GMPLS signaling interfaces are defined either for peer GMPLS nodes
     or via a user network interface (UNI) [22].  Neither of these is
     appropriate for direct use by an application entity.

   In this document we discuss two general use cases that can generate
   core network flows with significant bandwidth that may also vary
   significantly over time.  The "cross stratum optimization" problems
   generated by these use cases are discussed.  Finally, we look at
   interfaces between the application and network "strata" that can
   enable overall optimization.

1.1. Computing Clouds, Data Centers, and End Systems

   While the definition of cloud computing or compute clouds is
   somewhat nebulous (or "foggy" if you will) [1], the physical
   instantiation of compute resources with network connectivity is very
   real and bounded by physical and logical constraints.  For the
   purposes of this document we will call any network connected compute
   resource a data center if its network connectivity is significant
   compared either to the bandwidth of an individual WDM wavelength or
   with respect to the links of the network in which it is located.
   Hence we include in our definition very large data centers that
   feature multiple fiber access links and consume more than 10 MW of
   power [2], moderate to large content distribution network (CDN)
   installations located in or near major Internet exchange points [3],
   medium sized business centers, etc.

   We will refer to computational entities that do not meet our
   bandwidth criterion for a data center as "end systems".

2. End System Aggregate Networking

   In this section we consider the fundamental use case of end systems
   communicating with data centers, as shown in Figure 1.  In this
   figure the "clients" are end systems with relatively small access
   bandwidth compared to a WDM wavelength, e.g., under 100 Mbps.  We
   show these clients roughly partitioned into three network related
   regions ("A", "B", and "C").  Given a particular network
   application, in a static situation each client in a region would be
   associated with a particular data center.

   [Figure not reproduced: an ASCII-art diagram showing clients A1-AN,
   B1-BM, and C1-CK in regions "A", "B", and "C" attached through a
   core network to Data Centers 1, 2, and 3.]

   Figure 1. End system to data center communications.

2.1. Aggregated Bandwidth Scaling

   One of the simplest examples where the aggregation of end system
   bandwidth can quickly become significant to the "network" is video
   on demand (VoD) streaming service.  Unlike a live streaming service,
   where IP or lower layer multicast techniques can generally be
   applied, in VoD the transmissions are unique between the data center
   and each client.  For regular quality VoD we use an estimate of
   1.5 Mbps per stream (assuming H.264 coding); for HD VoD we use an
   estimate of 10 Mbps per stream.  Filling a 10 Gbps optical
   wavelength then requires either 6,666 or 1,000 clients for regular
   or high definition, respectively.  Note that special multicasting
   techniques such as those discussed in [4] and peer assistance
   techniques such as those provided in some commercial systems [5] can
   reduce the overall network bandwidth requirements.

   With current high speed Internet deployment such numbers of clients
   are easily reached; in addition, demand for VoD services can vary
   significantly over time, e.g., with new video releases, inclement
   weather (which increases the number of viewers), etc.
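   To make the scaling concrete, the short sketch below redoes the
   arithmetic above for an arbitrary wavelength capacity.  The
   per-stream rates are simply the estimates used in this section, not
   measured values.

      # Back-of-the-envelope sizing for aggregated VoD bandwidth.
      # The per-stream rates are the estimates used in this section;
      # real codec rates and transport overhead will shift the results.

      WAVELENGTH_GBPS = 10.0            # capacity of one WDM wavelength

      STREAM_MBPS = {
          "regular (H.264)": 1.5,       # regular-quality stream estimate
          "HD": 10.0,                   # high-definition stream estimate
      }

      for quality, mbps in STREAM_MBPS.items():
          clients = int(WAVELENGTH_GBPS * 1000 / mbps)
          print(f"{quality}: about {clients} concurrent streams fill "
                f"a {WAVELENGTH_GBPS:.0f} Gbps wavelength")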
2.2. Cross Stratum Optimization Example

   In an ideal world both data centers and networks would have
   unlimited capacity; in actuality both can have constraints and
   marginal costs that vary with load or time of day.  For example,
   suppose that in Figure 1 Data Center 3 has been primarily serving
   VoD to region "C" but that it has, at a particular point in time,
   run out of computation capacity to serve all the client requests
   coming from region "C".  At this point we have a fundamental cross
   stratum optimization (CSO) problem: we want to see whether we can
   accommodate additional client requests from region "C" by using a
   data center other than the fully utilized Data Center 3.  To answer
   this question we need to know (a) the available capacity on other
   data centers to meet a request, (b) the marginal (incremental) cost
   of servicing the request on a particular data center with spare
   capacity, (c) the ability of the network to provide bandwidth
   between region "C" and a data center, and (d) the incremental cost
   of bandwidth from region "C" to a data center.

   [Figure not reproduced: the diagram of Figure 1 with aggregated
   traffic flows highlighted across the network from region "C" to
   Data Center 2.]

   Figure 2. Aggregated flows between end systems and data centers.

   Figure 2 shows a possible result of solving the CSO problem just
   described: the additional client requests from region "C" are
   serviced by Data Center 2 across the network.  Figure 2 also
   illustrates the possibility of setting up "express" routes across
   the network at the MPLS level or below.  Such techniques, known as
   "traffic grooming" or "optical bypass" [6], [7] at the optical
   layer, can result in significant equipment and power savings for the
   network by bypassing higher level routers and switches.
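   A minimal sketch of the selection step in this CSO problem, using
   the four inputs (a)-(d) just listed, is shown below.  The candidate
   data centers, capacities, and costs are entirely hypothetical, and a
   realistic formulation would optimize over many regions and demands
   jointly rather than greedily picking a single data center.

      # Minimal sketch of the CSO selection step described above,
      # using inputs (a)-(d).  All names, capacities, and costs are
      # hypothetical.

      candidates = [
          # name, spare requests, cost per request,
          # spare Gbps from region C, cost per Gbps
          ("DC-1", 2000, 0.040, 5.0, 1.00),
          ("DC-2", 5000, 0.050, 8.0, 0.80),
          ("DC-3",    0, 0.030, 2.0, 0.20),  # fully utilized in the example
      ]

      extra_requests = 1500                  # additional clients in region C
      extra_gbps = extra_requests * 1.5e-3   # at 1.5 Mbps per VoD stream

      feasible = [dc for dc in candidates
                  if dc[1] >= extra_requests and dc[3] >= extra_gbps]
      best = min(feasible,
                 key=lambda dc: extra_requests * dc[2] + extra_gbps * dc[4])
      print("Serve the additional region C demand from", best[0])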
2.3. Data Center and Network Faults and Recovery

   Data center failures, whether partial or complete, can have a major
   impact on revenues in the VoD example previously described.  If
   there is excess capacity in other data centers within the network
   associated with the same application, then clients could be
   redirected to those other centers, provided the network has the
   capacity.  Moreover, MPLS and GMPLS controlled networks have the
   ability to reroute traffic very quickly while preserving QoS.  As
   with general network recovery techniques [8], various combinations
   of pre-planning and "on the fly" approaches can be used to trade off
   recovery time against the excess network capacity needed for
   recovery.

   In the case of network failures there is the potential for clients
   to be redirected to other data centers to avoid failed or over
   utilized links.

2.4. Cross Stratum Control Interfaces

   Two types of load balancing techniques are currently utilized in
   cloud computing.  The first is load balancing within a data center,
   sometimes referred to as local load balancing.  Here one is
   concerned with distributing requests to appropriate machines (or
   virtual machines) in a pool based on current machine utilization.
   The second type, known as global load balancing, is used to assign
   clients to a particular data center out of a choice of more than one
   within the network, and is our concern here.  A number of commercial
   vendors offer both local and global load balancing products (F5,
   Brocade, Coyote Point Systems).  Currently, global load balancing
   systems have very little knowledge of the underlying network.  To
   make better assignments of clients to data centers, many of these
   systems use geographic information based on IP addresses [9].  Hence
   current systems are already attempting to perform cross stratum
   optimization, albeit with very coarse network information.  A more
   elaborate set of interfaces for CSO in the client aggregation case
   would be:

   1. A network query interface, where the global load balancer can
      inquire about the bandwidth availability between "client regions"
      and data centers.

   2. A network resource reservation interface, where the global load
      balancer can make explicit requests for bandwidth between client
      regions and data centers.

   3. A fault recovery interface, for the global load balancer to make
      requests for expedited bulk rerouting of client traffic from one
      data center to another.

   The network query interface can be considered a superset of the
   functionality proposed for the ALTO (Application-Layer Traffic
   Optimization) servers being standardized in [10].  Note that in the
   network query and reservation interfaces it would be worthwhile to
   consider both current resources and resources at a future time,
   i.e., scheduled resources.  Although scheduled reservations are not
   supported directly by technologies such as MPLS and GMPLS, they can
   be accommodated in network planning and provisioning systems.  For
   example, a VoD provider knows ahead of time when the latest
   "blockbuster" film will be available via its service and can
   estimate, based on historical data, the bandwidth it will need to
   handle the subsequent demand.
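   As a purely illustrative sketch of the kind of information the three
   interfaces above would carry, the structures below mimic possible
   request payloads.  All field names, values, and the data center and
   region identifiers are hypothetical; they are not taken from the
   ALTO protocol or any other existing specification.

      # Hypothetical request payloads for the three interfaces listed
      # above.  Field names and values are illustrative only.

      bandwidth_query = {                  # 1. network query interface
          "request": "bandwidth-availability",
          "src-region": "region-C",        # aggregate of client endpoints
          "dst-data-centers": ["DC-1", "DC-2"],
          "window": {"start": "now", "duration-hours": 4},
      }

      bandwidth_reservation = {            # 2. network resource reservation
          "request": "bandwidth-reservation",
          "src-region": "region-C",
          "dst-data-center": "DC-2",
          "bandwidth-gbps": 2.5,
          "window": {"start": "2011-07-01T20:00Z", "duration-hours": 4},
      }

      bulk_reroute = {                     # 3. fault recovery interface
          "request": "expedited-bulk-reroute",
          "from-data-center": "DC-3",      # failed or overloaded center
          "to-data-center": "DC-2",
          "client-regions": ["region-C"],
      }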
3. Data Center to Data Center Networking

   There are a number of motivations for data center to data center
   communications: on demand capacity expansion ("cloud bursting")
   [11], cooperative exchanges between business partners, offsite data
   backup, "rent before building" [12], etc.  Figure 3 shows an example
   in which a number of businesses, each with an "internal data
   center", contract with a large external data center for additional
   computational (which may include storage) capacity.  The data
   centers may connect to each other via IP transit type services or,
   more typically, via some type of Ethernet virtual private line or
   LAN service.

   [Figure not reproduced: an ASCII-art diagram showing business data
   centers #1 through #N and a large external data center attached to
   a common network.]

   Figure 3. Basic data center to data center networking.

3.1. Cross Stratum Optimization Examples

   In the DC-to-DC example of Figure 3 we can have computational
   constraints/limits at both local and remote data centers; fixed and
   marginal computational costs at local and remote data centers; and
   network bandwidth costs and constraints between data centers.  Note
   that computing costs can vary by time of day along with the cost of
   power and with demand.  Some cloud providers such as Amazon [13]
   have quite sophisticated compute pricing models, including reserved,
   on demand, and spot (auction) variants.

   In addition to possibly dynamically changing pricing, traffic loads
   between data centers can be quite dynamic.  Data movement between
   data centers is another source of large variation in network usage.
   Such peaks can be due to scheduled daily or weekly offsite data
   backup, bulk migration to a new data center, periodic virtual
   machine migration [14], etc.

3.2. Network and Data Center Faults and Reliability

   For networked applications that require high levels of
   reliability/availability, the network diagram of Figure 3 could be
   enhanced with redundant business locations and external data
   centers, as shown in Figure 4.  For example, cell phone subscriber
   databases and financial transactions generally require what is
   called geographic database replication [15], which results in extra
   communication between the sites supporting high availability.  If
   business #1 in Figure 4 required a highly available database related
   service, there would be additional communication flows from data
   center "1a" to data center "1b".  Furthermore, if business #1 has
   outsourced some of its computation and storage needs to independent
   data center X, then for resilience it may want or need to replicate
   (hot-hot redundancy) this information at independent data center Y.

   [Figure not reproduced: the diagram of Figure 3 extended with
   redundant business data centers (#1 DC-a/DC-b through #N DC-a/DC-b)
   and two independent data centers X and Y attached to the network.]

   Figure 4. Data center to data center networking with redundancy.

3.3. Cross Stratum Control Interfaces

   As in the end system aggregation case, we can decompose the cross
   stratum interfaces into three general types: (a) network query, (b)
   network reservation, and (c) recovery.  However, for DC-to-DC
   interfaces we are interested in network resources between data
   centers rather than between "client regions" and data centers.

   For network resource queries we may be concerned with (a) current
   bandwidth availability, (b) bandwidth availability at a future time,
   or (c) bandwidth for a bulk data transfer of a given amount that
   must take place within a given time window.  A network reservation
   interface with both current and advance reservation capability would
   complement the query interface; a sizing sketch for such a bulk
   transfer appears at the end of this section.

   A simple recovery interface for data center based faults could be
   based on backup paths between data centers that are reserved but not
   activated until the application stratum requests that recovery
   action be taken.
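   The sketch below illustrates case (c) above: sizing an advance
   reservation for a bulk DC-to-DC transfer that must complete within a
   fixed time window.  The data volume, window, endpoint names, and
   payload fields are all hypothetical.

      # Sizing an advance reservation for a bulk data transfer that
      # must finish within a fixed window (case (c) above).  All
      # numbers and field names are hypothetical.

      data_terabytes = 50.0        # e.g., a scheduled offsite backup
      window_hours = 6.0           # transfer must complete in this window

      required_gbps = data_terabytes * 8 * 1000 / (window_hours * 3600)
      print(f"Sustained rate needed: {required_gbps:.1f} Gbps")

      advance_reservation = {
          "request": "bandwidth-reservation",
          "src-data-center": "Business-1-DC-a",
          "dst-data-center": "Independent-DC-X",
          "bandwidth-gbps": round(required_gbps, 1),
          "window": {"start": "2011-07-02T01:00Z", "duration-hours": 6},
      }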
4. Conclusion

   In this draft we have discussed two generic use cases that motivate
   the usefulness of general interfaces for cross stratum optimization
   in the network core.  In the first use case, network resource usage
   becomes significant due to the aggregation of many individually
   unique client demands.  In the second use case, where data centers
   communicate with each other, bandwidth usage is already significant
   enough to warrant the use of private line/LAN types of network
   services.

   Both use cases result in optimization problems that trade off
   computational versus network costs and constraints.  Both feature
   scenarios where advance reservation, on demand, and recovery type
   service interfaces could prove beneficial.  Many concepts from
   recent standardization work at the IETF [10], such as location
   identifiers and endpoint properties, could be reused in defining
   such interfaces.

5. Security Considerations

   TBD

6. IANA Considerations

   This informational document does not make any requests for IANA
   action.

7. References

7.1. Informative References

   [1]  M. Armbrust et al., "A view of cloud computing", Communications
        of the ACM, vol. 53, pp. 50-58, April 2010.

   [2]  "Location Information | DuPont Fabros Technology", online:
        http://www.dft.com/data-centers/location-information.

   [3]  "Amazon CloudFront", online: http://aws.amazon.com/cloudfront/.

   [4]  K. A. Hua and S. Sheu, "Skyscraper broadcasting: a new
        broadcasting scheme for metropolitan video-on-demand systems",
        in Proceedings of ACM SIGCOMM '97, Cannes, France, 1997,
        pp. 89-100.

   [5]  "Adobe Flash Media Server 4.0 * Building peer-assisted
        networking applications", online:
        http://help.adobe.com/en_US/flashmediaserver/devguide/WSa4cb076
        93d123884520b86f312a354ba36d-8000.html.

   [6]  R. Dutta and G. N. Rouskas, "Traffic grooming in WDM networks:
        Past and future", IEEE Network, vol. 16, no. 6, pp. 46-56,
        2002.
   [7]  K. Zhu and B. Mukherjee, "Traffic grooming in an optical WDM
        mesh network", IEEE Journal on Selected Areas in
        Communications, vol. 20, no. 1, pp. 122-133, 2002.

   [8]  G. Bernstein, B. Rajagopalan, and D. Saha, Optical Network
        Control: Architecture, Protocols, and Standards.  Addison-
        Wesley Professional, 2003.

   [9]  "Our IP Geolocation Products | Quova, Inc.", online:
        http://www.quova.com/what/products/.

   [10] "Application-Layer Traffic Optimization (ALTO) Requirements",
        work in progress, draft-ietf-alto-reqs-09.  Online:
        http://datatracker.ietf.org/doc/draft-ietf-alto-reqs/.

   [11] "Cloud Computing's Tipping Point -- InformationWeek", online:
        http://www.informationweek.com/news/government/cloud-
        saas/229401691.

   [12] "Lessons From FarmVille: How Zynga Uses The Cloud --
        InformationWeek", online:
        http://www.informationweek.com/news/global-
        cio/interviews/229402805#.

   [13] "Amazon EC2 Pricing", online:
        http://aws.amazon.com/ec2/pricing/.

   [14] "Dynamic Workload Balancing with EMC VPLEX and Ciena
        Networking", EMC, 2010.

   [15] "MySQL :: MySQL Cluster Features", online:
        http://www.mysql.com/products/cluster/features.html#geo.

   [16] Seedorf, J. and E. Burger, "Application-Layer Traffic
        Optimization (ALTO) Problem Statement", RFC 5693, October 2009.

   [17] Niven-Jenkins, B., Ed., Watson, G., Bitar, N., Medved, J., and
        S. Previdi, "Use Cases for ALTO within CDNs", work in progress,
        draft-jenkins-alto-cdn-use-cases.

   [18] Mannie, E., Ed., "Generalized Multi-Protocol Label Switching
        (GMPLS) Architecture", RFC 3945, October 2004.

   [19] Bernstein, G., Mannie, E., Sharma, V., and E. Gray, "Framework
        for Generalized Multi-Protocol Label Switching (GMPLS)-based
        Control of Synchronous Digital Hierarchy/Synchronous Optical
        Networking (SDH/SONET) Networks", RFC 4257, December 2005.

   [20] Lee, Y., Ed., Bernstein, G., Ed., and W. Imajuku, "Framework
        for GMPLS and Path Computation Element (PCE) Control of
        Wavelength Switched Optical Networks (WSONs)", RFC 6163, April
        2011.

   [21] Farrel, A., Vasseur, J.-P., and J. Ash, "A Path Computation
        Element (PCE)-Based Architecture", RFC 4655, August 2006.

   [22] Swallow, G., Drake, J., Ishimatsu, H., and Y. Rekhter,
        "Generalized Multiprotocol Label Switching (GMPLS) User-Network
        Interface (UNI): Resource ReserVation Protocol-Traffic
        Engineering (RSVP-TE) Support for the Overlay Model", RFC 4208,
        October 2005.

Authors' Addresses

   Greg M. Bernstein
   Grotto Networking
   Fremont, California, USA

   Phone: (510) 573-2237
   Email: gregb@grotto-networking.com

   Young Lee
   Huawei Technologies
   1700 Alma Drive, Suite 500
   Plano, TX 75075
   USA

   Phone: (972) 509-5599
   Email: ylee@huawei.com

Intellectual Property Statement

   The IETF Trust takes no position regarding the validity or scope of
   any Intellectual Property Rights or other rights that might be
   claimed to pertain to the implementation or use of the technology
   described in any IETF Document or the extent to which any license
   under such rights might or might not be available; nor does it
   represent that it has made any independent effort to identify any
   such rights.
   Copies of Intellectual Property disclosures made to the IETF
   Secretariat and any assurances of licenses to be made available, or
   the result of an attempt made to obtain a general license or
   permission for the use of such proprietary rights by implementers or
   users of this specification can be obtained from the IETF on-line
   IPR repository at http://www.ietf.org/ipr

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   any standard or specification contained in an IETF Document.  Please
   address the information to the IETF at ietf-ipr@ietf.org.

Disclaimer of Validity

   All IETF Documents and the information contained therein are
   provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION
   HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET SOCIETY,
   THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE DISCLAIM ALL
   WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO ANY
   WARRANTY THAT THE USE OF THE INFORMATION THEREIN WILL NOT INFRINGE
   ANY RIGHTS OR ANY IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS
   FOR A PARTICULAR PURPOSE.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.