MPLS Working Group                                          L. Dunbar
Internet Draft                                                 Huawei
Intended status: Standards Track                             S. Hares
Expires: January 2011                                          Huawei
                                                         July 2, 2010

   Scalable Address Resolution for Large Data Center Problem Statements
          draft-dunbar-arp-for-large-dc-problem-statement-00.txt

Status of this Memo

   This Internet-Draft is submitted to IETF in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF), its areas, and its working groups. Note that
   other groups may also distribute working documents as Internet-
   Drafts.

   Internet-Drafts are draft documents valid for a maximum of six months
   and may be updated, replaced, or obsoleted by other documents at any
   time. It is inappropriate to use Internet-Drafts as reference
   material or to cite them other than as "work in progress."

   The list of current Internet-Drafts can be accessed at
   http://www.ietf.org/ietf/1id-abstracts.txt

   The list of Internet-Draft Shadow Directories can be accessed at
   http://www.ietf.org/shadow.html

   This Internet-Draft will expire on January 2, 2011.

Copyright Notice

   Copyright (c) 2010 IETF Trust and the persons identified as the
   document authors. All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document. Please review these documents
   carefully, as they describe your rights and restrictions with respect
   to this document. Code Components extracted from this document must
   include Simplified BSD License text as described in Section 4.e of
   the Trust Legal Provisions and are provided without warranty as
   described in the BSD License.
Abstract

   Virtual machines, or virtualized servers, allow one physical server
   to support multiple hosts (20, 30, or even hundreds). As virtual
   machines are introduced into data centers, the number of hosts
   within one data center can grow dramatically, which can have a
   tremendous impact on networks and hosts.

   This document describes the reasons why it is still desirable for
   virtual machines in a data center to be in one Layer 2 network, and
   the potential problems this type of Layer 2 network will face. The
   goal is to justify why it is necessary for the IETF to create a
   working group to work on interoperable and scalable solutions for
   data centers with large numbers of virtual machines.

Conventions used in this document

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
   "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
   document are to be interpreted as described in RFC 2119.

Table of Contents

   1. Introduction
   2. Terminology
   3. Reasons for Virtual Machines in Data Center to stay in Layer 2
      3.1. Load balance requires group of hosts on same Layer 2
      3.2. Redundancy requires both active and standby VM on same Layer 2
      3.3. VM mobility requires them on same Layer 2
   4. Cloud Computing Service
   5. Problems facing Layer 2 with large number of hosts
      5.1. Address Resolution creates significant burden to hosts
      5.2. Large amount of MAC addresses to be learnt by intermediate switches
      5.3. More chances of unknown broadcast
   6. Why dividing one Layer 2 into many smaller subnets is not enough?
   7. Why IETF needs to develop solutions instead of relying on IEEE802
   8. Conclusion and Recommendation
   9. Manageability Considerations
   10. Security Considerations
   11. IANA Considerations
   12. Acknowledgments
   13. References
   Authors' Addresses
   Intellectual Property Statement
   Disclaimer of Validity

1. Introduction

   Virtual machines are created by server virtualization, which allows
   the sharing of the underlying physical machine (server) resources
   among different virtual machines, each running its own operating
   system. Server virtualization is the key enabler for Cloud Computing
   services, such as Amazon's EC2 service. Virtual machines also make
   virtual desktop services possible, allowing servers in data centers
   to provide virtual desktops to millions of end users.

   Server virtualization provides numerous benefits, including higher
   utilization rates, improved IT productivity, increased data
   security, reduced user downtime, significant power conservation, and
   the promise of a more flexible and dynamic computing environment. As
   a result, many organizations are highly motivated to incorporate
   server virtualization technologies into their data centers. In fact,
   ESG research indicates that virtualization is being widely adopted
   in production environments.
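   As a rough illustration of the scale that server virtualization
   introduces into a data center network, here is a back-of-envelope
   sketch. The figures (20-port ToR switches, a 256-port aggregation
   switch, 100 VMs per physical server) are illustrative assumptions,
   not measurements:

```python
# Back-of-envelope host counts for a tree-structured data center
# network, before and after server virtualization. All topology
# figures are illustrative assumptions.

TOR_DOWNLINKS = 20     # physical servers per Top-of-Rack switch
AGG_PORTS = 256        # ToR switches per aggregation switch
VMS_PER_SERVER = 100   # hypothetical VMs hosted on one server

physical_hosts = TOR_DOWNLINKS * AGG_PORTS
virtual_hosts = physical_hosts * VMS_PER_SERVER

print(physical_hosts)  # 5120 hosts without virtualization
print(virtual_hosts)   # 512000 hosts once servers are virtualized
```

   The switch hardware is unchanged in both cases; only the number of
   addressable hosts behind each port grows.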
   While server virtualization is a great technology for flexible
   management of server resources, it does pose great challenges to the
   networks which interconnect all the servers in data centers.

   Consider a typical tree-structured Layer 2 network, with one or two
   aggregation switches connected to a group of Top of Rack (ToR)
   switches and each ToR switch connected to a group of physical
   servers (hosts). The number of servers connected in this network is
   limited by the switches' port counts. If a ToR switch has 20
   downstream ports, only 20 servers (hosts) can be connected to it. If
   the aggregation switch has 256 ports connecting to ToR switches,
   there can be up to 20*256=5120 hosts connected to one aggregation
   switch when the servers are not virtualized.

   When virtual machines are introduced on servers, one server can
   support hundreds of VMs. Hypothetically, if one server supports up
   to 100 VMs, the same ToR and aggregation switches as above can
   support up to 512,000 hosts. Even if there is enough bandwidth on
   the links to carry the traffic volume from all those VMs, other
   issues associated with Layer 2, such as frequent ARP broadcasts by
   hosts and unknown-destination broadcasts, create many challenges for
   the network and the hosts.

2. Terminology

   Bridge:  IEEE802.1Q compliant device. In this draft, Bridge is used
            interchangeably with Layer 2 switch.

   FDB:     Filtering Database for Bridge or Layer 2 switch

   ToR:     Top of Rack Switch. It is also known as access switch.

   Aggregation switch: a Layer 2 switch which connects a group of ToR
            switches. It is also known as End of Row switch in data
            center.

   VM:      Virtual Machine

3.
Reasons for Virtual Machines in Data Center to stay in Layer 2

   Two application scenarios for virtual machine deployment are
   considered here:

   - One data center with a large number of virtual machines

   - A Cloud Computing service (Infrastructure as a Service) which
     requires a large number of virtual hosts.

3.1. Load balance requires group of hosts on same Layer 2

   Server load balancing is a technique to distribute workload evenly
   across two or more servers, in order to achieve optimal resource
   utilization, minimize response time, and avoid overload. Using
   multiple servers with load balancing, instead of a single one, also
   increases reliability through redundancy. One of the most common
   applications of load balancing is to provide a single Internet
   service from multiple servers, sometimes known as a server farm.
   Commonly load-balanced systems include popular web sites, large
   Internet Relay Chat networks, high-bandwidth File Transfer Protocol
   sites, NNTP servers, and DNS servers.

   The load balancer typically sits in-line between the client and the
   hosts that provide the services the client wants to use. Some load
   balancers require hosts to have a return route that points back to
   the load balancer, so that return traffic is processed through it on
   its way back to the client.

   However, for applications with a relatively small amount of traffic
   going into the servers and a relatively large amount of traffic
   coming from the servers, it is desirable to let reply data from
   servers go directly to clients without passing through the load
   balancer. In this kind of design, called Direct Server Return, all
   servers in the cluster have the same IP address as the load
   balancer. External requests from clients are sent to the load
   balancer, which distributes each request to an appropriate host in
   the cluster based on their load.
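   The distribution step can be sketched as follows. This is a minimal,
   hypothetical illustration (the Server record, its fields, and the
   least-connections policy are assumptions for the example, not a
   prescribed design): because every server answers with the same
   service IP, the balancer can only steer requests at Layer 2, by MAC
   address.

```python
# Minimal sketch of a Direct Server Return distribution step:
# every server shares the service IP, so the balancer selects a
# target by load and steers the request via its MAC address.
# The Server class and its fields are hypothetical.

from dataclasses import dataclass

SERVICE_IP = "192.0.2.10"   # shared by the balancer and all servers

@dataclass
class Server:
    mac: str                # servers are distinguished by MAC, not IP
    active_connections: int

def pick_server(cluster):
    """Pick the least-loaded server; only its MAC differs."""
    return min(cluster, key=lambda s: s.active_connections)

cluster = [
    Server(mac="02:00:00:00:00:01", active_connections=42),
    Server(mac="02:00:00:00:00:02", active_connections=7),
    Server(mac="02:00:00:00:00:03", active_connections=19),
]

# The balancer rewrites only the destination MAC; the chosen server
# replies to the client directly, sourcing the shared SERVICE_IP.
print(pick_server(cluster).mac)   # 02:00:00:00:00:02
```

   This is why the whole cluster and the load balancer must share one
   Layer 2 network: MAC addresses are the only handles that
   distinguish the servers.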
   Any of those servers sends its reply directly to clients, who see
   the same IP address regardless of which server handles the request.
   Under this design it is necessary for the load balancer and the
   cluster of hosts to be on the same Layer 2 network so that they can
   communicate with each other via their MAC addresses.

3.2. Redundancy requires both active and standby VM on same Layer 2

   For redundant servers (or VMs) serving the same application, the
   Active and Standby servers (VMs) need to exchange keep-alive
   messages. Since the Active and Standby servers (VMs) might have the
   same IP address, the only way to achieve this is via Layer 2 keep-
   alives, because the Active and Standby will have different MAC
   addresses.

   VRRP (Virtual Router Redundancy Protocol, RFC 3768) is designed to
   increase the availability of the default gateway servicing hosts on
   the same subnet. Two or more physical routers are configured to
   stand for the virtual router, with only one doing the actual
   routing at any given time. It is necessary for the group of
   physical routers serving one virtual router to be on the same
   subnet as their hosts.

3.3. VM mobility requires them on same Layer 2

   VM mobility refers to moving virtual machines from one server to
   another. To fully realize the benefits of a virtualized server
   environment, the ability to seamlessly move VMs within a resource
   pool while still guaranteeing application performance and security
   is a must. Mobility adds tremendous value because it enables
   organizations to:

   - Rapidly scale out new applications - Creating golden copies of a
     VM allows organizations to introduce new applications into a
     resource pool in dramatically less time. This accelerates the
     time to market for new applications.
   - Balance workloads - The ability to dynamically adjust workloads
     within a resource pool enables companies to optimize performance
     and to minimize power and cooling costs during off hours by
     dynamically adjusting VM locations.

   - Deliver high levels of availability - Mobility guarantees that
     even in the event of physical infrastructure failure,
     applications can be quickly moved to another physical resource
     within that pool, dramatically minimizing downtime and
     eliminating the need for dedicated redundant infrastructure.

   - Recover from a site disaster - This involves the ability to
     quickly migrate VMs to a remote secondary location in order for
     operations to resume. Recovering VMs significantly reduces the
     amount of time required compared to manually reloading servers
     from bare metal.

   One of the key requirements for VM mobility is that the VM maintain
   the same IP address after moving to the new location, so that its
   operation can continue in the new location.

   For a VM to maintain the same IP address when moving from Server A
   to Server B, it is necessary for both servers to be on the same
   subnet. If Server A and Server B are on different subnets, they
   will have different gateway routers. In the subnet where Server A
   resides, e.g. Subnet A, the VM sends ARP broadcast requests for
   target IP addresses in the same subnet. For target IP addresses not
   in Subnet A, the VM sends data frames to the default gateway
   router. When this VM is moved to Server B, if Server B is in a
   different subnet than Server A, e.g. Subnet B, then this VM cannot
   even forward data to its default gateway and cannot find MAC
   addresses for other hosts in Subnet A.

   That is why most VM mobility systems, such as VMware's vMotion,
   require all hosts to be in one Layer 2 network.

4.
Cloud Computing Service

   A Cloud Computing service, like Amazon's Elastic Compute Cloud
   (Amazon EC2), allows users (clients) to create their own virtual
   servers and virtual subnets. There are many potential services
   which Cloud Computing providers could offer to their clients. Here
   are just some examples of those services:

   - A client can request a group of virtual servers and their
     associated virtual subnet,

   - A client can specify policies among the multiple subnets they
     have purchased,

   - A client can specify preferred geographic locations for some of
     the virtual servers they have purchased,

   - A client can specify redundancy criteria, like two virtual
     servers on two different physical servers or in two different
     locations, etc.

   In order to efficiently support those services, the network has to
   support virtual subnets, i.e. one Layer 2 network spanning multiple
   locations, in addition to a large number of hosts in Layer 2.

   It is possible for a Cloud Computing service to have a network
   design in which each virtual subnet is mapped to one independent
   Layer 2 network. But this kind of design would require a huge
   amount of administrative and planning work to properly partition
   servers and switches into the appropriate Layer 2 networks. The
   problem is that the virtual servers and virtual subnets purchased
   by clients change all the time, and it is a lot of administrative
   work to change the Layer 2 network partitioning each time there is
   a client request. Having a large number of virtual machines in one
   Layer 2 network can simplify some aspects of Cloud Computing
   service design and management.

5. Problems facing Layer 2 with large number of hosts

   In a Layer 2 network, hosts can be attached and re-attached at any
   location on the network. Hosts use ARP (Address Resolution
   Protocol) to find the corresponding MAC address of a target host.
   ARP is a protocol that uses the Ethernet broadcast service for
   discovering a host's MAC address from its IP address. For host A to
   find the MAC address of a host B on the same subnet with IP address
   B-IP, host A broadcasts an ARP query packet containing B-IP as well
   as its own IP address (A-IP) on its Ethernet interface. All hosts
   in the same subnet receive the packet. Host B, whose IP address is
   B-IP, replies (via unicast) to inform A of its MAC address. A also
   records the mapping between B-IP and B-MAC.

   Even though all hosts maintain their learnt MAC to target IP
   address mappings locally, to avoid repetitive ARP broadcast
   messages for the same target IP address, all hosts age out these
   mappings very frequently. For Microsoft Windows (versions XP and
   Server 2003), the default ARP cache policy is to discard entries
   that have not been used in at least two minutes, and for cache
   entries that are in use, to retransmit an ARP request every 10
   minutes. So hosts send out ARP requests very frequently.

   In addition to broadcast messages sent by hosts, Layer 2 switches
   also broadcast received packets if the destination address is
   unknown.

   All Layer 2 switches learn the MAC addresses of data frames which
   traverse them. Layer 2 switches also age out their learnt MAC
   addresses in order to limit the number of entries in their
   Filtering Database (FDB). When a switch receives a packet with an
   unknown destination MAC address, it broadcasts this packet to all
   ports which are enabled for the corresponding VLAN.

   This flooding and broadcast worked well in the past, when Layer 2
   networks were limited to a smaller size. Most Layer 2 networks
   limit the number of hosts to less than 200, so that broadcast
   storms and flooding are kept in a small domain.

5.1.
Address Resolution creates significant burden to hosts

   When a Layer 2 network has tens of thousands of hosts, the frequent
   ARP broadcast messages from all those hosts clearly present a
   significant burden, especially to the hosts themselves. Many of
   today's Layer 2 switches, even with hundreds of ports, can forward
   10G traffic at line rate; they do not need to process ARP requests,
   they just forward them. It is the hosts that need to process every
   ARP message that circulates in the network.

   [Scaling Ethernet] from Carnegie Mellon studied the number of ARP
   queries received at a workstation on CMU's School of Computer
   Science LAN over a 12 hour period on August 9, 2004. At peak, the
   host received 1150 ARPs per second, and on average, the host
   received 89 ARPs per second. During the data collection, 2,456
   hosts were observed sending ARP queries. [Scaling Ethernet] expects
   the amount of ARP traffic to scale linearly with the number of
   hosts on the LAN. For 1 million hosts, 468,240 ARPs per second, or
   239 Mbps of ARP traffic, is expected to arrive at each host at
   peak, which is more than enough to overwhelm a standard 100 Mbps
   LAN connection. Even ignoring the link capacity, forcing hosts to
   handle an extra half million packets per second to inspect each ARP
   packet would impose a prohibitive computational burden.

   To detect address conflicts and refresh host addresses in a Layer 2
   network, many types of hosts, and almost all virtual machines, send
   out gratuitous ARPs on a regular basis. A gratuitous ARP can be
   either a gratuitous ARP request or a gratuitous ARP reply.
   Gratuitous in this case means a request or reply that is not
   normally needed according to the ARP specification (RFC 826) but
   could be used in some cases.
   A gratuitous ARP request is an Address Resolution Protocol request
   packet where the source and destination IP are both set to the IP
   of the machine issuing the packet and the destination MAC is the
   broadcast address ff:ff:ff:ff:ff:ff. Ordinarily, no reply packet
   will occur. A gratuitous ARP reply is a reply to which no request
   has been made.

   All these gratuitous ARP messages also need to be processed by all
   hosts in the Layer 2 domain.

   Handling 1000 to 2000 ARP requests per second is close to the upper
   limit for most hosts. With more than 20K hosts in one Layer 2
   domain, the amount of ARP broadcast messages, plus other broadcast
   messages such as DHCP, can create too great a burden for hosts to
   handle.

5.2. Large amount of MAC addresses to be learnt by intermediate switches

   Ethernet's non-hierarchical, flat Layer 2 MAC addressing makes any
   type of address summarization impossible. MAC addresses, plus their
   VLAN IDs, have to be placed in a switch's FDB without any
   abbreviation, unlike IP addresses, which only need a proper prefix
   to be stored in a router's forwarding table.

   One advantage of Ethernet switches is that they can forward traffic
   for many more addresses than they have FDB entries. When a data
   frame's destination address is not present in a switch's FDB, the
   switch just floods this data frame to all ports which are enabled
   for the corresponding VLAN. That is why Ethernet switches can have
   a relatively small FDB, which is one of the key reasons that
   Ethernet switches can be built at much lower cost than routers. To
   improve the efficiency of the FDB, Ethernet switches frequently age
   out learnt MAC addresses which have not been in use for a while,
   and replace older MAC entries with newly learnt MACs when the FDB
   is full.

   When the servers in a data center are virtualized, each server can
   host tens or hundreds of virtual machines.
   Each virtual machine can be a host to an application, with its own
   IP address and MAC address. With the same type and number of
   network equipment, i.e. ToR switches and aggregation switches, the
   number of hosts can grow dramatically in this network. When the
   number of hosts grows, the number of MAC addresses to be learnt by
   Layer 2 switches grows too. For example, in a tree-shaped Layer 2
   network with one core switch connected to 3 aggregation switches,
   each aggregation switch connected to 25 ToR switches, and each ToR
   switch connected to 25 physical servers, if each server supports 50
   virtual machines, there will be 50*25*25*3 = 93750 hosts in this
   network.

   Typical bridges support on the order of 16K to 32K MAC addresses,
   with some supporting 64K. With external memory (TCAM), bridges can
   support up to 512K to 1M MAC addresses. But TCAM is expensive,
   which defeats the low-cost advantage of Layer 2 switches. This
   problem is especially severe for Top of Rack switches, which are
   supposed to be very low cost.

   In summary, low-cost ToR switches usually do not have enough FDB
   entries for all the VMs' MAC addresses in the domain.

5.3. More chances of unknown broadcast

   When the number of hosts (MAC addresses) exceeds a switch's FDB
   size, learnt MAC addresses in the FDB are aged out faster. This
   increases the chance that the switch's FDB has no entry for a
   received packet's destination address, which in turn causes the
   packet to be flooded.

   When the spanning tree topology changes (e.g. when a link fails), a
   bridge clears its cached station location information, because a
   topology change could lead to a change in the spanning tree, and
   packets from a given source may then arrive on a different port of
   the bridge.
   As a result, during periods of network convergence, network
   capacity drops significantly as the bridges fall back to flooding
   for all hosts.

6. Why dividing one Layer 2 into many smaller subnets is not enough?

   Subnets (VLANs) can partition one Layer 2 network into many virtual
   Layer 2 domains, and all broadcast messages are confined within one
   subnet (VLAN). Subnets (VLANs) have worked well when each server
   serves one single host: the server does not receive broadcast
   messages from hosts in other subnets (VLANs).

   When one physical server supports 100-plus virtual machines, i.e.
   100-plus hosts, the virtual hosts on one server are most likely on
   different subnets (VLANs). If there are 50 subnets (VLANs) enabled
   on the switch port to the server, the server has to handle all the
   ARP broadcast messages on all 50 subnets (VLANs). When virtual
   hosts are added to or deleted from a server, the switch port to the
   server may end up enabling more VLANs than the number of subnets
   actually active on the server. Therefore, the amount of ARP
   messages to be processed by each server is still too great.

   For a Cloud Computing service, the number of virtual hosts and
   virtual subnets can be very high. It might not be possible to limit
   the number of virtual hosts in each subnet.

7. Why IETF needs to develop solutions instead of relying on IEEE802

   Here are the reasons why the IETF needs to develop solutions:

   - A client of Cloud Computing services may request redundancy
     across two geographic locations. They may want two VMs in one
     virtual subnet to be in two locations -> most likely it is
     IP/MPLS networks which interconnect the multiple locations.

   - The two redundant VMs may have the same IP address, with one
     being Active and the other being Standby. The Active and Standby
     need to exchange keep-alive messages between them.
     The only way to achieve this is via Layer 2 keep-alives, because
     the Active and Standby will have different MAC addresses ->
     requires the two VMs to be on the same Layer 2.

   - It is desirable for all hosts (VMs) of one virtual subnet to be
     in one Layer 2 network for efficient multicast and broadcast
     among them.

   - A client may request that the hosts (VMs) in one virtual subnet
     be in different locations, to get faster response for their
     applications -> requires IP/MPLS to interconnect.

   - Hosts can be added to one virtual subnet at different times. It
     is possible that newly added hosts have to be placed at a
     different site due to computing and storage resource availability
     -> requires IP/MPLS to interconnect.

8. Conclusion and Recommendation

   For a data center with tens of thousands of VMs, we have concluded
   that:

   - It is necessary to restrain the ARP storms and broadcast storms
     initiated by (unpredictable) servers and applications to a
     confined domain.

   - It is necessary to have a way to keep Layer 2 switches from
     having to learn tens of thousands of MAC addresses.

   - It is necessary to reduce the number of unknown destination
     addresses arriving at any Layer 2 switch, to prevent large
     amounts of unknown-destination broadcast in one Layer 2 network.

   For Cloud services which offer virtual hosts and virtual subnets,
   we have concluded that:

   - It is necessary to have a more scalable address resolution
     protocol for a Layer 2 network which spans multiple locations.

   - It is desirable to keep the MAC addresses in each site from being
     learnt by other sites. This allows traditional Layer 2 switches,
     which have a limited amount of address space for their forwarding
     tables, to function properly, and minimizes unknown-destination
     broadcasts by those switches.
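   The scale behind these conclusions can be reproduced from the
   figures given in Section 5: the CMU measurement of 1150 peak
   ARPs/second across 2,456 observed hosts, and the example topology
   of 3 aggregation switches x 25 ToR switches x 25 servers x 50 VMs.
   The 64-byte minimum Ethernet frame used to convert packets/second
   into Mbps is an assumption of this sketch:

```python
# Reproducing the back-of-envelope figures from Section 5. Inputs
# come from the draft itself; the 64-byte minimum frame size is an
# assumption used to convert packet rate into bit rate.

# ARP load, scaled linearly from the CMU observation.
PEAK_ARPS = 1150           # ARPs/second observed at one host
OBSERVED_HOSTS = 2456      # hosts seen sending ARP queries
TARGET_HOSTS = 1_000_000

peak_arps_per_host = PEAK_ARPS * TARGET_HOSTS // OBSERVED_HOSTS
MIN_FRAME_BITS = 64 * 8    # minimum-size Ethernet frame, in bits
arp_mbps = peak_arps_per_host * MIN_FRAME_BITS / 1e6

print(peak_arps_per_host)  # ~468,000 ARPs/second per host at peak
print(int(arp_mbps))       # ~239 Mbps, swamping a 100 Mbps link

# Host count vs. FDB capacity in the Section 5.2 example topology.
hosts = 3 * 25 * 25 * 50   # agg switches * ToRs * servers * VMs
print(hosts)               # 93750, far above a 16K-64K entry FDB
assert hosts > 64 * 1024
```

   Both numbers sit well beyond what a single host NIC or a low-cost
   ToR switch FDB is provisioned for, which is what motivates the
   recommendations that follow.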
   Therefore, we recommend that the IETF create a working group:

   - To develop scalable address resolution protocols for data centers
     with large numbers of hosts and Layer 2 networks spanning
     multiple locations,

   - To develop mechanisms to scope broadcast messages, beyond ARP and
     DHCP, to minimize the impact on each Layer 2 domain of broadcast
     storms from other Layer 2 domains,

   - To have a scalable inter-Layer 2 domain protocol, like BGP, for
     each domain's gateways to exchange hosts' reachability
     information with each other, and

   - To identify mechanisms for proper handling of multicast messages
     among hosts in one virtual subnet which spans multiple locations.

9. Manageability Considerations

   This document does not add additional manageability considerations.

10. Security Considerations

   This document is a problem statement and does not itself introduce
   new security considerations.

11. IANA Considerations

   A future revision of this document will present requests to IANA
   for codepoint allocation.

12. Acknowledgments

   This document was prepared using 2-Word-v2.0.template.dot.

13. References

   [ARP]              Plummer, D.C., "An Ethernet Address Resolution
                      Protocol", RFC 826, November 1982.

   [Microsoft Window] "Microsoft Windows Server 2003 TCP/IP
                      implementation details",
                      http://www.microsoft.com/technet/prodtechnol/windowsserver2003/technologies/networking/tcpip03.mspx,
                      June 2003.

   [Scaling Ethernet] Myers et al., "Rethinking the Service Model:
                      Scaling Ethernet to a Million Nodes", Carnegie
                      Mellon University and Rice University.

Authors' Addresses

   Linda Dunbar
   Huawei Technologies
   1700 Alma Drive, Suite 500
   Plano, TX 75075, USA
   Phone: (972) 543 5849
   Email: ldunbar@huawei.com

   Sue Hares
   Huawei Technologies
   2330 Central Expressway
   Santa Clara, CA 95050, USA
   Email: shares@huawei.com

Intellectual Property Statement

   The IETF Trust takes no position regarding the validity or scope of
   any Intellectual Property Rights or other rights that might be
   claimed to pertain to the implementation or use of the technology
   described in any IETF Document or the extent to which any license
   under such rights might or might not be available; nor does it
   represent that it has made any independent effort to identify any
   such rights.

   Copies of Intellectual Property disclosures made to the IETF
   Secretariat and any assurances of licenses to be made available, or
   the result of an attempt made to obtain a general license or
   permission for the use of such proprietary rights by implementers
   or users of this specification can be obtained from the IETF
   on-line IPR repository at http://www.ietf.org/ipr

   The IETF invites any interested party to bring to its attention any
   copyrights, patents or patent applications, or other proprietary
   rights that may cover technology that may be required to implement
   any standard or specification contained in an IETF Document. Please
   address the information to the IETF at ietf-ipr@ietf.org.
Disclaimer of Validity

   All IETF Documents and the information contained therein are
   provided on an "AS IS" basis and THE CONTRIBUTOR, THE ORGANIZATION
   HE/SHE REPRESENTS OR IS SPONSORED BY (IF ANY), THE INTERNET
   SOCIETY, THE IETF TRUST AND THE INTERNET ENGINEERING TASK FORCE
   DISCLAIM ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT NOT
   LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION THEREIN
   WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
   MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Acknowledgment

   Funding for the RFC Editor function is currently provided by the
   Internet Society.