idnits 2.17.1 draft-bensley-tcpm-dctcp-05.txt: Checking boilerplate required by RFC 5378 and the IETF Trust (see https://trustee.ietf.org/license-info): ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/1id-guidelines.txt: ---------------------------------------------------------------------------- No issues found here. Checking nits according to https://www.ietf.org/id-info/checklist : ---------------------------------------------------------------------------- No issues found here. Miscellaneous warnings: ---------------------------------------------------------------------------- == The copyright year in the IETF Trust and authors Copyright Line does not match the current year -- The document date (July 7, 2015) is 3215 days in the past. Is this intentional? Checking references for intended status: Informational ---------------------------------------------------------------------------- ** Obsolete normative reference: RFC 793 (Obsoleted by RFC 9293) -- Obsolete informational reference (is this intentional?): RFC 6982 (Obsoleted by RFC 7942) Summary: 1 error (**), 0 flaws (~~), 1 warning (==), 2 comments (--). Run idnits with the --verbose option for more detailed information about the items above. -------------------------------------------------------------------------------- 2 Network Working Group S. Bensley 3 Internet-Draft Microsoft 4 Intended status: Informational L. Eggert 5 Expires: January 8, 2016 NetApp 6 D. Thaler 7 P. Balasubramanian 8 Microsoft 9 G. Judd 10 Morgan Stanley 11 July 7, 2015 13 Microsoft's Datacenter TCP (DCTCP): 14 TCP Congestion Control for Datacenters 15 draft-bensley-tcpm-dctcp-05 17 Abstract 19 This memo describes Datacenter TCP (DCTCP), an improvement to TCP 20 congestion control for datacenter traffic. DCTCP uses improved 21 Explicit Congestion Notification (ECN) processing to estimate the 22 fraction of bytes that encounter congestion, rather than simply 23 detecting that some congestion has occurred. DCTCP then scales the 24 TCP congestion window based on this estimate. This method achieves 25 high burst tolerance, low latency, and high throughput with shallow- 26 buffered switches. 28 Status of This Memo 30 This Internet-Draft is submitted in full conformance with the 31 provisions of BCP 78 and BCP 79. 33 Internet-Drafts are working documents of the Internet Engineering 34 Task Force (IETF). Note that other groups may also distribute 35 working documents as Internet-Drafts. The list of current Internet- 36 Drafts is at http://datatracker.ietf.org/drafts/current/. 38 Internet-Drafts are draft documents valid for a maximum of six months 39 and may be updated, replaced, or obsoleted by other documents at any 40 time. It is inappropriate to use Internet-Drafts as reference 41 material or to cite them other than as "work in progress." 43 This Internet-Draft will expire on January 8, 2016. 45 Copyright Notice 47 Copyright (c) 2015 IETF Trust and the persons identified as the 48 document authors. All rights reserved. 50 This document is subject to BCP 78 and the IETF Trust's Legal 51 Provisions Relating to IETF Documents 52 (http://trustee.ietf.org/license-info) in effect on the date of 53 publication of this document. Please review these documents 54 carefully, as they describe your rights and restrictions with respect 55 to this document. Code Components extracted from this document must 56 include Simplified BSD License text as described in Section 4.e of 57 the Trust Legal Provisions and are provided without warranty as 58 described in the Simplified BSD License. 60 Table of Contents 62 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . 2 63 2. Terminology . . . . . . . . . . . . . . . . . . . . . . . . . 4 64 3. DCTCP Algorithm . . . . . . . . . . . . . . . . . . . . . . . 4 65 3.1. Marking Congestion on the Switch . . . . . . . . . . . . 4 66 3.2. Echoing Congestion Information on the Receiver . . . . . 4 67 3.3. Processing Congestion Indications on the Sender . . . . . 5 68 3.4. Handling of SYN, SYN-ACK, RST Packets . . . . . . . . . . 7 69 4. Implementation Issues . . . . . . . . . . . . . . . . . . . . 7 70 5. Deployment Issues . . . . . . . . . . . . . . . . . . . . . . 8 71 6. Known Issues . . . . . . . . . . . . . . . . . . . . . . . . 9 72 7. Implementation Status . . . . . . . . . . . . . . . . . . . . 10 73 8. Security Considerations . . . . . . . . . . . . . . . . . . . 10 74 9. IANA Considerations . . . . . . . . . . . . . . . . . . . . . 10 75 10. Acknowledgements . . . . . . . . . . . . . . . . . . . . . . 10 76 11. References . . . . . . . . . . . . . . . . . . . . . . . . . 11 77 11.1. Normative References . . . . . . . . . . . . . . . . . . 11 78 11.2. Informative References . . . . . . . . . . . . . . . . . 11 79 Authors' Addresses . . . . . . . . . . . . . . . . . . . . . . . 12 81 1. Introduction 83 Large datacenters necessarily need a large number of network switches 84 to interconnect the servers in the datacenter. Therefore, a 85 datacenter can greatly reduce its capital expenditure by leveraging 86 low cost switches. However, low cost switches tend to have limited 87 queue capacities and thus are more susceptible to packet loss due to 88 congestion. 90 Network traffic in the datacenter is often a mix of short and long 91 flows, where the short flows require low latency and the long flows 92 require high throughput. Datacenters also experience incast bursts, 93 where many endpoints send traffic to a single server at the same 94 time. For example, this is a natural consequence of MapReduce 95 algorithms. The worker nodes complete at approximately the same 96 time, and all reply to the master node concurrently. 98 These factors place some conflicting demands on the queue occupancy 99 of a switch: 101 o The queue must be short enough that it does not impose excessive 102 latency on short flows. 104 o The queue must be long enough to buffer sufficient data for the 105 long flows to saturate the path bandwidth. 107 o The queue must be short enough to absorb incast bursts without 108 excessive packet loss. 110 Standard TCP congestion control [RFC5681] relies on segment loss to 111 detect congestion. This does not meet the demands described above. 112 First, the short flows will start to experience unacceptable 113 latencies before packet loss occurs. Second, by the time TCP 114 congestion control kicks in on the sender, most of the incast burst 115 has already been dropped. 117 [RFC3168] describes a mechanism for using Explicit Congestion 118 Notification (ECN) from the switch for early detection of congestion, 119 rather than waiting for segment loss to occur. However, this method 120 only detects the presence of congestion, not the extent. In the 121 presence of mild congestion, the TCP congestion window is reduced too 122 aggressively and unnecessarily affects the throughput of long flows. 124 Datacenter TCP (DCTCP) improvises upon traditional ECN processing by 125 estimating the fraction of bytes that encounter congestion, rather 126 than simply detecting that some congestion has occurred. DCTCP then 127 scales the TCP congestion window based on this estimate. This method 128 achieves high burst tolerance, low latency, and high throughput with 129 shallow-buffered switches. 131 It is recommended that DCTCP be deployed in a datacenter environment 132 where the endpoints and the switching fabric are under a single 133 administrative domain. Deployment issues around coexistence of DCTCP 134 and conventional TCP, and lack of a negotiating mechanism between 135 sender and receiver, and possible mitigations are also discussed. 137 2. Terminology 139 The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", 140 "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 141 document are to be interpreted as described in [RFC2119]. 143 3. DCTCP Algorithm 145 There are three components involved in the DCTCP algorithm: 147 o The switch (or other intermediate device on the network) detects 148 congestion and sets the Congestion Encountered (CE) codepoint in 149 the IP header. 151 o The receiver echoes the congestion information back to the sender 152 using the ECN-Echo (ECE) flag in the TCP header. 154 o The sender reacts to the congestion indication by reducing the TCP 155 congestion window (cwnd). 157 3.1. Marking Congestion on the Switch 159 The switch indicates congestion to the end nodes by setting the CE 160 codepoint in the IP header as specified in Section 5 of [RFC3168]. 161 For example, the switch may be configured with a congestion 162 threshold. When a packet arrives at the switch and its queue length 163 is greater than the congestion threshold, the switch sets the CE 164 codepoint in the packet. For example, Section 3.4 of [DCTCP10] 165 suggests threshold marking with a threshold K > (RTT * C)/7, where C 166 is the sending rate in packets per second. However, the actual 167 algorithm for marking congestion is an implementation detail of the 168 switch and will generally not be known to the sender and receiver. 169 Therefore, sender and receiver MUST NOT assume that a particular 170 marking algorithm is implemented by the switching fabric. 172 3.2. Echoing Congestion Information on the Receiver 174 According to Section 6.1.3 of [RFC3168], the receiver sets the ECE 175 flag if any of the packets being acknowledged had the CE code point 176 set. The receiver then continues to set the ECE flag until it 177 receives a packet with the Congestion Window Reduced (CWR) flag set. 178 However, the DCTCP algorithm requires more detailed congestion 179 information. In particular, the sender must be able to determine the 180 number of sent bytes that encountered congestion. Thus, the scheme 181 described in [RFC3168] does not suffice. 183 One possible solution is to ACK every packet and set the ECE flag in 184 the ACK if and only if the CE code point was set in the packet being 185 acknowledged. However, this prevents the use of delayed ACKs, which 186 are an important performance optimization in datacenters. 188 Instead, DCTCP introduces a new Boolean TCP state variable, DCTCP 189 Congestion Encountered (DCTCP.CE), which is initialized to false and 190 stored in the Transmission Control Block (TCB). When sending an ACK, 191 the ECE flag MUST be set if and only if DCTCP.CE is true. When 192 receiving packets, the CE codepoint MUST be processed as follows: 194 1. If the CE codepoint is set and DCTCP.CE is false, send an ACK for 195 any previously unacknowledged packets and set DCTCP.CE to true. 197 2. If the CE codepoint is not set and DCTCP.CE is true, send an ACK 198 for any previously unacknowledged packets and set DCTCP.CE to 199 false. 201 3. Otherwise, ignore the CE codepoint. 203 3.3. Processing Congestion Indications on the Sender 205 The sender estimates the fraction of sent bytes that encountered 206 congestion. The current estimate is stored in a new TCP state 207 variable, DCTCP.Alpha, which is initialized to 1 and MUST be updated 208 as follows: 210 DCTCP.Alpha = DCTCP.Alpha * (1 - g) + g * M 212 where 214 o g is the estimation gain, a real number between 0 and 1. The 215 selection of g is left to the implementation. See Section 4 for 216 further considerations. 218 o M is the fraction of sent bytes that encountered congestion during 219 the previous observation window, where the observation window is 220 chosen to be approximately the Round Trip Time (RTT). In 221 particular, an observation window ends when all the sent bytes in 222 flight at the beginning of the window have been acknowledged. 224 In order to update DCTCP.Alpha, the TCP state variables defined in 225 [RFC0793] are used, and three additional TCP state variables are 226 introduced: 228 o DCTCP.WindowEnd: The TCP sequence number threshold for beginning a 229 new observation window; initialized to SND.UNA. 231 o DCTCP.BytesSent: The number of bytes sent during the current 232 observation window; initialized to zero. 234 o DCTCP.BytesMarked: The number of bytes sent during the current 235 observation window that encountered congestion; initialized to 236 zero. 238 The congestion estimator on the sender MUST process acceptable ACKs 239 as follows: 241 1. Compute the bytes acknowledged (TCP SACK options [RFC2018] are 242 ignored): 244 BytesAcked = SEG.ACK - SND.UNA 246 2. Update the bytes sent: 248 DCTCP.BytesSent += BytesAcked 250 3. If the ECE flag is set, update the bytes marked: 252 DCTCP.BytesMarked += BytesAcked 254 4. If the sequence number is less than or equal to DCTCP.WindowEnd, 255 then stop processing. Otherwise, the end of the observation 256 window was reached, so proceed to update the congestion estimate 257 as follows: 259 5. Compute the congestion level for the current observation window: 261 M = DCTCP.BytesMarked / DCTCP.BytesSent 263 6. Update the congestion estimate: 265 DCTCP.Alpha = DCTCP.Alpha * (1 - g) + g * M 267 7. Determine the end of the next observation window: 269 DCTCP.WindowEnd = SND.NXT 271 8. Reset the byte counters: 273 DCTCP.BytesSent = DCTCP.BytesMarked = 0 275 Rather than always halving the congestion window as described in 276 [RFC3168], when the sender receives an indication of congestion 277 (ECE), the sender MUST update cwnd as follows: 279 cwnd = cwnd * (1 - DCTCP.Alpha / 2) 281 Thus, when no sent byte experienced congestion, DCTCP.Alpha equals 282 zero, and cwnd is left unchanged. When all sent bytes experienced 283 congestion, DCTCP.Alpha equals one, and cwnd is reduced by half. 284 Lower levels of congestion will result in correspondingly smaller 285 reductions to cwnd. 287 Just as specified in [RFC3168], TCP should not react to congestion 288 indications more than once every window of data. The setting of the 289 "Congestion Window Reduced" (CWR) bit is also exactly as per 290 [RFC3168]. 292 3.4. Handling of SYN, SYN-ACK, RST Packets 294 [RFC3168] requires that compliant TCP MUST NOT set ECT on SYN or SYN- 295 ACK packets. [RFC5562] proposes setting ECT on SYN-ACK packets, but 296 maintains the restriction of no ECT on SYN packets. Both these RFCs 297 prohibit ECT in SYN packets due to security concerns regarding 298 malicious SYN packets with ECT set. These RFCs, however, are 299 intended for general Internet use, and do not directly apply to a 300 controlled datacenter deployment. The switching fabric can drop TCP 301 packets that do not have the ECT set in the IP header. If SYN and 302 SYN-ACK packets for DCTCP connections are non-ECT they will be 303 dropped with high probability. For DCTCP connections the sender 304 SHOULD set ECT for SYN, SYN-ACK and RST packets. 306 4. Implementation Issues 308 As noted in Section 3.3, the implementation must choose a suitable 309 estimation gain. [DCTCP10] provides a theoretical basis for 310 selecting the gain. However, it may be more practical to use 311 experimentation to select a suitable gain for a particular network 312 and workload. The Microsoft implementation of DCTCP in Windows 313 Server 2012 uses a fixed estimation gain of 1/16. 315 The implementation must also decide when to use DCTCP. Datacenter 316 servers may need to communicate with endpoints outside the 317 datacenter, where DCTCP is unsuitable or unsupported. Thus, a global 318 configuration setting to enable DCTCP will generally not suffice. 319 DCTCP may be configured based on the IP address of the remote 320 endpoint. Microsoft Windows Server 2012 also supports automatic 321 selection of DCTCP if the estimated RTT is less than 10 msec and ECN 322 is successfully negotiated, under the assumption that if the RTT is 323 low, then the two endpoints are likely on the same datacenter 324 network. 326 To prevent incast throughput collapse the minimum RTO (MinRTO) used 327 by TCP should be lowered significantly. The default value of MinRTO 328 in Windows is 300 msec which is much greater than the maximum 329 latencies inside a datacenter. Server 2012 onwards the MinRTO value 330 is configurable allowing values as low as 10 msec on a per subnet or 331 per TCP port basis or even globally. A lower MinRTO value requires 332 corresponding a lower delayed ACK timeout on the receiver. It is 333 recommended that the implementation allow configuration of lower 334 timeouts for DCTCP connections. 336 In the same vein, it is also recommended that the implementation 337 allow configuration of restarting the cwnd of idle DCTCP connections 338 as described in [RFC5681] since network conditions change rapidly in 339 the datacenter. The implementation can also allow configuration for 340 discarding the value of DCTCP.Alpha after cwnd restart and timeouts. 342 [RFC3168] forbids the ECN-marking of pure ACK packets because of the 343 inability of TCP to mitigate ACK-path congestion and protocol-wise 344 preferential treatment by routers. However dropping pure ACKs rather 345 than ECN marking them is disadvantageous in traffic scenarios typical 346 in the datacenter. Because of the prevalence of bursty traffic 347 patterns which involve transient congestion, the dropping of ACKS 348 causes subsequent retransmission. It is recommended that the 349 implementation a configuration knob that forces ECT on TCP pure ACK 350 packets. 352 5. Deployment Issues 354 DCTCP and conventional TCP congestion control does not coexist well. 355 In DCTCP, the marking threshold is set very low value to reduce 356 queueing delay, thus a relatively small amount of congestion will 357 exceed the marking threshold. During such periods of congestion, 358 conventional TCP will suffer packet losses and quickly scale back 359 cwnd. DCTCP, on the other hand, will use the fraction of marked 360 packets to scale back cwnd. Thus rate reduction in DCTCP will be 361 much lower than that of conventional TCP, and DCTCP traffic will 362 dominate conventional TCP traffic traversing the same link. Hence if 363 the traffic in the datacenter is a mix of conventional TCP and DCTCP, 364 it is recommended that DCTCP traffic be segregated from conventional 365 TCP traffic. [MORGANSTANLEY] describes a deployment that uses IP 366 DSCP bits where AQM is applied to DCTCP traffic, while TCP traffic is 367 managed via drop-tail queueing. 369 Today's commodity switches allow configuration of a different 370 marking/drop profile for non-TCP and non-IP packets. Non-TCP and 371 non-IP packets should be able to pass through the switch unless the 372 switch is really out of buffers. If the traffic in the datacenter 373 consists of such traffic (including UDP), one possible mitigation 374 would be to mark IP packets as ECT even when there is no transport 375 that is reacting to the marking. 377 Since DCTCP relies on congestion marking by the switch, DCTCP can 378 only be deployed in datacenters where the network infrastructure 379 supports ECN. The switches may also support configuration of the 380 congestion threshold used for marking. The proposed parameterization 381 can be configured with switches that implement RED. [DCTCP10] 382 provides a theoretical basis for selecting the congestion threshold, 383 but as with estimation gain, it may be more practical to rely on 384 experimentation or simply to use the default configuration of the 385 device. DCTCP will degrade to loss-based congestion control when 386 transiting a congested drop-tail link. 388 DCTCP requires changes on both the sender and the receiver, so both 389 endpoints must support DCTCP. Furthermore, DCTCP provides no 390 mechanism for negotiating its use, so both endpoints must be 391 configured through some out-of-band mechanism to use DCTCP. A 392 variant of DCTCP that can be deployed unilaterally and only requires 393 standard ECN behavior has been described in [ODCTCP][BSDCAN], but 394 requires additional experimental evaluation. 396 6. Known Issues 398 DCTCP relies on the sender's ability to reconstruct the stream of CE 399 codepoints received by the remote endpoint. To accomplish this, 400 DCTCP avoids using a single ACK packet to acknowledge segments 401 received both with and without the CE codepoint set. However, if one 402 or more ACK packets are dropped, it is possible that a subsequent ACK 403 will cumulatively acknowledge a mix of CE and non-CE segments. This 404 will, of course, result in a less accurate congestion estimate. 405 There are some potential mitigations: 407 o Even with a degraded congestion estimate, DCTCP may still perform 408 better than [RFC3168]. 410 o If the estimation gain is small relative to the packet loss rate, 411 the estimate may not be degraded much. 413 o If packet losses mostly occur under heavy congestion, most drops 414 will occur during an unbroken string of CE packets, and the 415 estimate will be unaffected. 417 However, the affect of packet drops on DCTCP under real world 418 conditions has not been analyzed. 420 DCTCP provides no mechanism for negotiating its use. Thus, there is 421 additional management and configuration overhead required to ensure 422 that DCTCP is not used with non-DCTCP endpoints. The affect of using 423 DCTCP with a standard ECN endpoint has been analyzed in 424 [ODCTCP][BSDCAN]. Furthermore, it is possible that other 425 implementations may also modify [RFC3168] behavior without 426 negotiation, causing further interoperability issues. 428 Much like standard TCP, DCTCP is biased against flows with longer 429 RTTs. A method for improving the fairness of DCTCP has been proposed 430 in [ADCTCP], but requires additional experimental evaluation. 432 7. Implementation Status 434 This section documents the implementation status of the specification 435 in this document, as recommended by [RFC6982]. 437 This document describes DCTCP as implemented in Microsoft Windows 438 Server 2012. Since publication of the first versions of this 439 document, the Linux [LINUX] and FreeBSD [FREEBSD] operating systems 440 have also implemented support for DCTCP in a way that is believed to 441 follow this document. 443 8. Security Considerations 445 DCTCP enhances ECN and thus inherits the security considerations 446 discussed in [RFC3168]. The processing changes introduced by DCTCP 447 do not exacerbate these considerations or introduce new ones. In 448 particular, with either algorithm, the network infrastructure or the 449 remote endpoint can falsely report congestion and thus cause the 450 sender to reduce cwnd. However, this is no worse than what can be 451 achieved by simply dropping packets. 453 9. IANA Considerations 455 This document has no actions for IANA. 457 10. Acknowledgements 459 The DCTCP algorithm was originally proposed and analyzed in [DCTCP10] 460 by Mohammad Alizadeh, Albert Greenberg, Dave Maltz, Jitu Padhye, 461 Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, and Murari 462 Sridharan. 464 Lars Eggert has received funding from the European Union's Horizon 465 2020 research and innovation program 2014-2018 under grant agreement 466 No. 644866 ("SSICLOPS"). This document reflects only the authors' 467 views and the European Commission is not responsible for any use that 468 may be made of the information it contains. 470 11. References 472 11.1. Normative References 474 [RFC0793] Postel, J., "Transmission Control Protocol", STD 7, RFC 475 793, September 1981. 477 [RFC2018] Mathis, M., Mahdavi, J., Floyd, S., and A. Romanow, "TCP 478 Selective Acknowledgment Options", RFC 2018, October 1996. 480 [RFC2119] Bradner, S., "Key words for use in RFCs to Indicate 481 Requirement Levels", BCP 14, RFC 2119, March 1997. 483 [RFC3168] Ramakrishnan, K., Floyd, S., and D. Black, "The Addition 484 of Explicit Congestion Notification (ECN) to IP", RFC 485 3168, September 2001. 487 11.2. Informative References 489 [RFC5681] Allman, M., Paxson, V., and E. Blanton, "TCP Congestion 490 Control", RFC 5681, September 2009. 492 [RFC5562] Kuzmanovic, A., Mondal, A., Floyd, S., and K. 493 Ramakrishnan, "Adding Explicit Congestion Notification 494 (ECN) Capability to TCP's SYN/ACK Packets", RFC 5562, June 495 2009. 497 [RFC6982] Sheffer, Y. and A. Farrel, "Improving Awareness of Running 498 Code: The Implementation Status Section", RFC 6982, July 499 2013. 501 [DCTCP10] Alizadeh, M., Greenberg, A., Maltz, D., Padhye, J., Patel, 502 P., Prabhakar, B., Sengupta, S., and M. Sridharan, "Data 503 Center TCP (DCTCP)", Proc. ACM SIGCOMM 2010 Conference 504 (SIGCOMM 10), August 2010, 505 . 507 [ODCTCP] Kato, M., "Improving Transmission Performance with One- 508 Sided Datacenter TCP", M.S. Thesis, Keio University, 2014, 509 . 511 [BSDCAN] Kato, M., Eggert, L., Zimmermann, A., van Meter, R., and 512 H. Tokuda, "Extensions to FreeBSD Datacenter TCP for 513 Incremental Deployment Support", BSDCan 2015, June 2015, 514 . 516 [ADCTCP] Alizadeh, M., Javanmard, A., and B. Prabhakar, "Analysis 517 of DCTCP: Stability, Convergence, and Fairness", Proc. ACM 518 SIGMETRICS Joint International Conference on Measurement 519 and Modeling of Computer Systems (SIGMETRICS 11), June 520 2011, . 522 [LINUX] Borkmann, D. and F. Westphal, "Linux DCTCP patch", 2014, 523 . 527 [FREEBSD] Kato, M. and H. Panchasara, "DCTCP (Data Center TCP) 528 implementation", 2015, 529 . 532 [MORGANSTANLEY] 533 Judd, G., "Attaining the Promise and Avoiding the Pitfalls 534 of TCP in the Datacenter", Proc. 12th USENIX Symposium on 535 Networked Systems Design and Implementation (NSDI 15), May 536 2015, . 539 Authors' Addresses 541 Stephen Bensley 542 Microsoft 543 One Microsoft Way 544 Redmond, WA 98052 545 USA 547 Phone: +1 425 703 5570 548 Email: sbens@microsoft.com 550 Lars Eggert 551 NetApp 552 Sonnenallee 1 553 Kirchheim 85551 554 Germany 556 Phone: +49 151 120 55791 557 Email: lars@netapp.com 558 URI: http://eggert.org/ 559 Dave Thaler 560 Microsoft 562 Phone: +1 425 703 8835 563 Email: dthaler@microsoft.com 565 Praveen Balasubramanian 566 Microsoft 568 Phone: +1 425 538 2782 569 Email: pravb@microsoft.com 571 Glenn Judd 572 Morgan Stanley 574 Phone: +1 973 979 6481 575 Email: glenn.judd@morganstanley.com