Network Working Group                                         S. Bensley
Internet-Draft                                                 D. Thaler
Intended status: Informational                       P. Balasubramanian
Expires: September 28, 2017                                    Microsoft
                                                               L. Eggert
                                                                  NetApp
                                                                 G. Judd
                                                          Morgan Stanley
                                                          March 27, 2017


     Datacenter TCP (DCTCP): TCP Congestion Control for Datacenters
                         draft-ietf-tcpm-dctcp-05

Abstract

   This informational memo describes Datacenter TCP (DCTCP), an
   improvement to TCP congestion control for datacenter traffic.
   DCTCP uses improved Explicit Congestion Notification (ECN)
   processing to estimate the fraction of bytes that encounter
   congestion, rather than simply detecting that some congestion has
   occurred.  DCTCP then scales the TCP congestion window based on
   this estimate.  This method achieves high burst tolerance, low
   latency, and high throughput with shallow-buffered switches.  This
   memo also discusses deployment issues related to the coexistence
   of DCTCP and conventional TCP and to the lack of a negotiating
   mechanism between sender and receiver, and presents some possible
   mitigations.  DCTCP as described in this draft is applicable to
   deployments in controlled environments like datacenters, but it
   MUST NOT be deployed over the public Internet without additional
   measures, as detailed in Section 5.

Status of This Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other
   documents at any time.  It is inappropriate to use Internet-Drafts
   as reference material or to cite them other than as "work in
   progress."

   This Internet-Draft will expire on September 28, 2017.

Copyright Notice

   Copyright (c) 2017 IETF Trust and the persons identified as the
   document authors.  All rights reserved.
   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  DCTCP Algorithm
     3.1.  Marking Congestion on the L3 Switches and Routers
     3.2.  Echoing Congestion Information on the Receiver
     3.3.  Processing Echoed Congestion Indications on the Sender
     3.4.  Handling of Packet Loss
     3.5.  Handling of SYN, SYN-ACK, RST Packets
   4.  Implementation Issues
   5.  Deployment Issues
   6.  Known Issues
   7.  Implementation Status
   8.  Security Considerations
   9.  IANA Considerations
   10. Acknowledgements
   11. References
     11.1.  Normative References
     11.2.  Informative References
   Authors' Addresses

1.  Introduction

   Large datacenters need many network switches to interconnect their
   many servers.  A datacenter can therefore greatly reduce its
   capital expenditure by leveraging low-cost switches.  However,
   such low-cost switches tend to have limited queue capacities and
   are thus more susceptible to packet loss due to congestion.

   Network traffic in a datacenter is often a mix of short and long
   flows, where the short flows require low latency and the long
   flows require high throughput.  Datacenters also experience incast
   bursts, where many servers send traffic to a single server at the
   same time.  For example, this traffic pattern is a natural
   consequence of a MapReduce workload: the worker nodes complete at
   approximately the same time, and all reply to the master node
   concurrently.

   These factors place conflicting demands on the queue occupancy of
   a switch:

   o  The queue must be short enough that it does not impose
      excessive latency on short flows.

   o  The queue must be long enough to buffer sufficient data for the
      long flows to saturate the path capacity.

   o  The queue must be long enough to absorb incast bursts without
      excessive packet loss.

   Standard TCP congestion control [RFC5681] relies on packet loss to
   detect congestion.  This approach does not meet the demands
   described above.  First, short flows will start to experience
   unacceptable latencies before packet loss occurs.  Second, by the
   time TCP congestion control kicks in on the senders, most of the
   incast burst has already been dropped.
   [RFC3168] describes a mechanism for using Explicit Congestion
   Notification (ECN) from the switches for detection of congestion.
   However, this method only detects the presence of congestion, not
   its extent.  In the presence of mild congestion, the TCP
   congestion window is reduced too aggressively, which unnecessarily
   reduces the throughput of long flows.

   Datacenter TCP (DCTCP) improves traditional ECN processing by
   estimating the fraction of bytes that encounter congestion, rather
   than simply detecting that some congestion has occurred.  DCTCP
   then scales the TCP congestion window based on this estimate.
   This method achieves high burst tolerance, low latency, and high
   throughput with shallow-buffered switches.  DCTCP is a
   modification to the processing of ECN by a conventional TCP and
   requires that standard TCP congestion control be used for handling
   packet loss.

   It is recommended that DCTCP be deployed only in a datacenter
   environment where the endpoints and the switching fabric are under
   a single administrative domain.  DCTCP MUST NOT be deployed over
   the public Internet without additional measures, as detailed in
   Section 5.

2.  Terminology

   The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL
   NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL"
   in this document are to be interpreted as described in [RFC2119].
   Normative language is used to describe how necessary the various
   aspects of the Microsoft implementation are for interoperability;
   however, even compliant implementations that omit the measures in
   Sections 4-6 would still only be safe to deploy in controlled
   environments.

3.  DCTCP Algorithm

   There are three components involved in the DCTCP algorithm:

   o  The switches (or other intermediate devices in the network)
      detect congestion and set the Congestion Encountered (CE)
      codepoint in the IP header.

   o  The receiver echoes the congestion information back to the
      sender, using the ECN-Echo (ECE) flag in the TCP header.

   o  The sender computes a congestion estimate and reacts by
      reducing the TCP congestion window (cwnd) accordingly.

3.1.  Marking Congestion on the L3 Switches and Routers

   The L3 switches and routers in a datacenter fabric indicate
   congestion to the end nodes by setting the CE codepoint in the IP
   header as specified in Section 5 of [RFC3168].  For example, the
   switches may be configured with a congestion threshold.  When a
   packet arrives at a switch whose queue length exceeds the
   congestion threshold, the switch sets the CE codepoint in the
   packet.  For example, Section 3.4 of [DCTCP10] suggests threshold
   marking with a threshold K > (RTT * C)/7, where C is the link rate
   in packets per second and RTT is in seconds, so that K is
   expressed in packets.  However, the actual algorithm for marking
   congestion is an implementation detail of the switch and will
   generally not be known to the sender and receiver.  Therefore, the
   sender and receiver should not assume that a particular marking
   algorithm is implemented by the switching fabric.
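   As a purely illustrative, non-normative example of the guideline
   above, the following C fragment computes a candidate threshold for
   an assumed 10 Gbps link, 1500-byte packets, and a 100 usec fabric
   RTT; all names and values here are hypothetical:

      #include <stdio.h>

      /* Candidate marking threshold K (in packets) per the
       * guideline K > (RTT * C)/7 in Section 3.4 of [DCTCP10]. */
      static double marking_threshold(double rtt_sec, double link_bps,
                                      double pkt_bytes)
      {
          double c_pps = link_bps / (pkt_bytes * 8);  /* packets/sec */
          return (rtt_sec * c_pps) / 7.0;
      }

      int main(void)
      {
          double k = marking_threshold(100e-6, 10e9, 1500);
          printf("mark above ~%.1f packets (~%.0f KB)\n",
                 k, k * 1500 / 1024);  /* ~11.9 packets, ~17 KB */
          return 0;
      }

   In practice, such a value would only be a starting point for
   experimentation, since the switch's actual marking behavior is
   generally unknown to the endpoints.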
3.2.  Echoing Congestion Information on the Receiver

   According to Section 6.1.3 of [RFC3168], the receiver sets the ECE
   flag if any of the packets being acknowledged had the CE codepoint
   set.  The receiver then continues to set the ECE flag until it
   receives a packet with the Congestion Window Reduced (CWR) flag
   set.  However, the DCTCP algorithm requires more detailed
   congestion information.  In particular, the sender must be able to
   determine the number of bytes sent that encountered congestion.
   Thus, the scheme described in [RFC3168] does not suffice.

   One possible solution is to ACK every packet and set the ECE flag
   in the ACK if and only if the CE codepoint was set in the packet
   being acknowledged.  However, this prevents the use of delayed
   ACKs, which are an important performance optimization in
   datacenters.  If the delayed ACK frequency is m, then an ACK is
   generated every m packets.  The typical value of m is 2, but it
   could be affected by ACK throttling or packet coalescing
   techniques designed to improve performance.

   Instead, DCTCP introduces a new Boolean TCP state variable, "DCTCP
   Congestion Encountered" (DCTCP.CE), which is initialized to false
   and stored in the Transmission Control Block (TCB).  When sending
   an ACK, the ECE flag MUST be set if and only if DCTCP.CE is true.
   When receiving packets, the CE codepoint MUST be processed as
   follows:

   1.  If the CE codepoint is set and DCTCP.CE is false, set DCTCP.CE
       to true and send an immediate ACK.

   2.  If the CE codepoint is not set and DCTCP.CE is true, set
       DCTCP.CE to false and send an immediate ACK.

   3.  Otherwise, ignore the CE codepoint.

   Since the immediate ACK reflects the new DCTCP.CE state, it may
   acknowledge packets that were received in the old state, which can
   lead to an incorrect DCTCP.Alpha computation at the sender per
   Section 3.3.  To avoid this, an implementation may choose to send
   two ACKs: one for previously unacknowledged packets and another
   acknowledging the most recently received packet.

   Receiver handling of the "Congestion Window Reduced" (CWR) bit is
   also per [RFC3168], including [RFC3168-ERRATA3639].  That is, on
   receipt of a segment with both the CE and CWR bits set, CWR is
   processed first and then ECE is processed.

                          Send immediate
                          ACK with ECE=0
                  .----.  .--------------.  .---.
     Send 1 ACK  /      v v              |  |    \
     for every  |     .------.        .------.    |  Send 1 ACK
     m packets  |     | CE=0 |        | CE=1 |    |  for every
     with ECE=0 |     '------'        '------'    |  m packets
                 \      |  |            ^  ^     /   with ECE=1
                  '-----'  '------------'  '----'
                          Send immediate
                          ACK with ECE=1

   Figure 1: ACK generation state machine.  DCTCP.CE abbreviated as
             CE.
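   The receiver-side processing above maps naturally onto a small
   amount of code.  The following non-normative C sketch illustrates
   the three-case CE handling and the optional second ACK that
   flushes segments received in the old state; the type and function
   names (tcp_conn, send_ack, DELACK_M) are hypothetical stand-ins
   for a real stack's internals:

      #include <stdbool.h>
      #include <stdio.h>

      #define DELACK_M 2          /* delayed ACK frequency m */

      struct tcp_conn {
          bool dctcp_ce;          /* DCTCP.CE, initialized to false */
          int  held_segs;         /* segments awaiting a delayed ACK */
      };

      /* Stand-in for the stack's ACK transmission path. */
      static void send_ack(bool ece)
      {
          printf("ACK sent, ECE=%d\n", ece);
      }

      void dctcp_rcv_segment(struct tcp_conn *c, bool ce_set)
      {
          if (ce_set != c->dctcp_ce) {
              /* Cases 1 and 2: the CE state changed.  First flush
               * an ACK covering segments received in the old state,
               * so the sender's estimate stays accurate. */
              if (c->held_segs > 0) {
                  send_ack(c->dctcp_ce);
                  c->held_segs = 0;
              }
              c->dctcp_ce = ce_set;
              send_ack(c->dctcp_ce);  /* immediate ACK, new state */
              return;
          }
          /* Case 3: no state change; fall back to delayed ACKs,
           * setting ECE if and only if DCTCP.CE is true. */
          if (++c->held_segs >= DELACK_M) {
              send_ack(c->dctcp_ce);
              c->held_segs = 0;
          }
      }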
3.3.  Processing Echoed Congestion Indications on the Sender

   The sender estimates the fraction of bytes sent that encountered
   congestion.  The current estimate is stored in a new TCP state
   variable, DCTCP.Alpha, which is initialized to 1 and SHOULD be
   updated as follows:

      DCTCP.Alpha = DCTCP.Alpha * (1 - g) + g * M

   where

   o  g is the estimation gain, a real number between 0 and 1.  The
      selection of g is left to the implementation.  See Section 4
      for further considerations.

   o  M is the fraction of bytes sent that encountered congestion
      during the previous observation window, where the observation
      window is chosen to be approximately the Round-Trip Time (RTT).
      In particular, an observation window ends when all bytes in
      flight at the beginning of the window have been acknowledged.

   In order to update DCTCP.Alpha, the TCP state variables defined in
   [RFC0793] are used, and three additional TCP state variables are
   introduced:

   o  DCTCP.WindowEnd: The TCP sequence number threshold for
      beginning a new observation window; initialized to SND.UNA.

   o  DCTCP.BytesAcked: The number of sent bytes acknowledged during
      the current observation window; initialized to zero.

   o  DCTCP.BytesMarked: The number of bytes sent during the current
      observation window that encountered congestion; initialized to
      zero.

   The congestion estimator on the sender SHOULD process acceptable
   ACKs as follows:

   1.  Compute the bytes acknowledged (TCP SACK options [RFC2018] are
       ignored for this computation):

          BytesAcked = SEG.ACK - SND.UNA

   2.  Update the bytes acknowledged:

          DCTCP.BytesAcked += BytesAcked

   3.  If the ECE flag is set, update the bytes marked:

          DCTCP.BytesMarked += BytesAcked

   4.  If the acknowledgment number is less than or equal to
       DCTCP.WindowEnd, stop processing.  Otherwise, the end of the
       observation window has been reached, so proceed to update the
       congestion estimate as follows:

   5.  Compute the congestion level for the current observation
       window:

          M = DCTCP.BytesMarked / DCTCP.BytesAcked

   6.  Update the congestion estimate:

          DCTCP.Alpha = DCTCP.Alpha * (1 - g) + g * M

   7.  Determine the end of the next observation window:

          DCTCP.WindowEnd = SND.NXT

   8.  Reset the byte counters:

          DCTCP.BytesAcked = DCTCP.BytesMarked = 0

   9.  Rather than always halving the congestion window as described
       in [RFC3168], the sender SHOULD update cwnd as follows:

          cwnd = cwnd * (1 - DCTCP.Alpha / 2)

   Thus, when no bytes sent experienced congestion, DCTCP.Alpha
   equals zero, and cwnd is left unchanged.  When all sent bytes
   experienced congestion, DCTCP.Alpha equals one, and cwnd is
   reduced by half.  Lower levels of congestion will result in
   correspondingly smaller reductions to cwnd.

   Just as specified in [RFC3168], DCTCP does not react to congestion
   indications more than once for every window of data.  The setting
   of the "Congestion Window Reduced" (CWR) bit is also as per
   [RFC3168].  This is required for interoperability with classic ECN
   receivers that may be present due to misconfiguration.
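   The following non-normative C sketch transcribes steps 1-8 above,
   using floating point for clarity (Section 4 shows how an
   implementation can avoid it).  The structure and function names
   are hypothetical; a real stack would operate on its TCB instead:

      #include <stdbool.h>
      #include <stdint.h>

      #define DCTCP_G 0.0625      /* example estimation gain g = 1/16 */

      struct dctcp_sender {
          double   alpha;         /* DCTCP.Alpha, initialized to 1    */
          uint32_t window_end;    /* DCTCP.WindowEnd, init to SND.UNA */
          uint64_t bytes_acked;   /* DCTCP.BytesAcked                 */
          uint64_t bytes_marked;  /* DCTCP.BytesMarked                */
      };

      void dctcp_process_ack(struct dctcp_sender *s, uint32_t seg_ack,
                             uint32_t snd_una, uint32_t snd_nxt,
                             bool ece)
      {
          uint32_t acked = seg_ack - snd_una;        /* step 1 */

          s->bytes_acked += acked;                   /* step 2 */
          if (ece)
              s->bytes_marked += acked;              /* step 3 */

          /* Step 4: serial-number comparison, robust to wrap. */
          if ((int32_t)(seg_ack - s->window_end) <= 0)
              return;

          /* Steps 5 and 6: update the congestion estimate. */
          double m = (double)s->bytes_marked / (double)s->bytes_acked;
          s->alpha = s->alpha * (1 - DCTCP_G) + DCTCP_G * m;

          /* Steps 7 and 8: begin the next observation window. */
          s->window_end = snd_nxt;
          s->bytes_acked = s->bytes_marked = 0;
      }

   Step 9 is then applied once per window of data upon an ECE
   indication, e.g., cwnd = cwnd * (1 - s->alpha / 2).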
3.4.  Handling of Packet Loss

   A DCTCP sender MUST react to loss episodes in the same way as
   conventional TCP.  For cases where the packet loss is inferred
   rather than explicitly signaled by ECN, the cwnd and other state
   variables like ssthresh must be changed in the same way that a
   conventional TCP would have changed them.  As with ECN, a DCTCP
   sender will reduce the cwnd only once per window of data across
   all loss signals.  Just as specified in [RFC5681], upon a timeout,
   the cwnd MUST be set to no more than the loss window (1 full-sized
   segment), regardless of previous cwnd reductions in a given window
   of data.

3.5.  Handling of SYN, SYN-ACK, RST Packets

   If SYN, SYN-ACK, and RST packets for DCTCP connections have ECT
   set in the IP header, they will receive the same treatment as
   other DCTCP packets when forwarded by a switching fabric under
   load.  Lack of ECT in these packets may result in a higher drop
   rate, depending on the switching fabric configuration.  Hence, for
   DCTCP connections, the sender SHOULD set ECT for SYN, SYN-ACK, and
   RST packets.

4.  Implementation Issues

   As noted in Section 3.3, the implementation will need to choose a
   suitable estimation gain.  [DCTCP10] provides a theoretical basis
   for selecting the gain.  However, it may be more practical to use
   experimentation to select a suitable gain for a particular network
   and workload.  The Microsoft implementation of DCTCP in Windows
   Server 2012 uses a fixed estimation gain of 1/16.

   The implementation must also decide when to use DCTCP.  Datacenter
   servers may need to communicate with endpoints outside the
   datacenter, where DCTCP is unsuitable or unsupported.  Thus, a
   global configuration setting to enable DCTCP will generally not
   suffice.  DCTCP provides no mechanism for negotiating its use.
   Thus, additional management and configuration overhead is required
   to ensure that DCTCP is not used with non-DCTCP endpoints.

   Potential solutions rely on either configuration or heuristics.
   Heuristics need to allow endpoints to individually enable DCTCP,
   to ensure that a DCTCP sender is always paired with a DCTCP
   receiver.  One approach is to enable DCTCP based on the IP address
   of the remote endpoint.  Another approach is to detect connections
   that transmit within the bounds of a datacenter.  For example,
   Microsoft Windows Server 2012 (and later versions) supports
   automatic selection of DCTCP if the estimated RTT is less than
   10 msec and ECN is successfully negotiated, under the assumption
   that if the RTT is low, the two endpoints are likely in the same
   datacenter network.

   [RFC3168] forbids the ECN-marking of pure ACK packets, because of
   the inability of TCP to mitigate ACK-path congestion.  RFC 3168
   also forbids ECN-marking of retransmissions, window probes, and
   RSTs.  However, dropping all these control packets, rather than
   ECN-marking them, has considerable performance disadvantages.  It
   is RECOMMENDED that an implementation provide a configuration knob
   that will cause ECT to be set on such control packets, which can
   be used in environments where such concerns do not apply.  See
   [ECN-EXPERIMENTATION] for details.

   It would be useful to implement DCTCP as additional actions on top
   of an existing congestion control algorithm like NewReno.  The
   DCTCP implementation MAY also allow configuration of resetting the
   value of DCTCP.Alpha as part of processing any loss episodes.

   The DCTCP.Alpha calculation per the formula in Section 3.3
   involves fractions.  An efficient kernel implementation MAY scale
   the DCTCP.Alpha value for efficient computation using shift
   operations.  For example, if the implementation chooses g as 1/16,
   multiplications of DCTCP.Alpha by g become right-shifts by 4.  A
   scaling implementation SHOULD ensure that DCTCP.Alpha is able to
   reach zero once it falls below the smallest shifted value (16 in
   the above example).  At the other extreme, a scaled update MUST
   also ensure that DCTCP.Alpha does not exceed the scaling factor,
   which would be equivalent to more than 100% congestion.  So,
   DCTCP.Alpha MUST be clamped after an update.

   This results in the following computations replacing steps 5 and 6
   in Section 3.3, where SCF is the chosen scaling factor (65536 in
   the example) and SHF is the shift factor (4 in the example):

   1.  Compute the congestion level for the current observation
       window:

          ScaledM = SCF * DCTCP.BytesMarked / DCTCP.BytesAcked

   2.  Update the congestion estimate:

          if (DCTCP.Alpha >> SHF) == 0 then DCTCP.Alpha = 0

          DCTCP.Alpha += (ScaledM >> SHF) - (DCTCP.Alpha >> SHF)

          if DCTCP.Alpha > SCF then DCTCP.Alpha = SCF
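   A non-normative C version of the scaled update may make the
   integer arithmetic concrete.  The function below assumes it is
   called only at the end of an observation window (so
   DCTCP.BytesAcked is nonzero); the function name is hypothetical:

      #include <stdint.h>

      #define SCF 65536u          /* scaling factor from the example */
      #define SHF 4               /* shift factor; g = 1/16          */

      /* alpha is DCTCP.Alpha scaled by SCF, kept in [0, SCF]. */
      uint32_t dctcp_update_alpha(uint32_t alpha,
                                  uint64_t bytes_marked,
                                  uint64_t bytes_acked)
      {
          uint32_t scaled_m =
              (uint32_t)((SCF * bytes_marked) / bytes_acked);

          /* Let the estimate decay all the way to zero once it
           * falls below the smallest shifted value (16 here). */
          if ((alpha >> SHF) == 0)
              alpha = 0;

          /* alpha = alpha * (1 - g) + g * M, in fixed point. */
          alpha = alpha - (alpha >> SHF) + (scaled_m >> SHF);

          /* Clamp: alpha must not exceed the scaling factor. */
          if (alpha > SCF)
              alpha = SCF;
          return alpha;
      }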
5.  Deployment Issues

   DCTCP and conventional TCP congestion control do not coexist well
   in the same network.  In DCTCP, the marking threshold is set to a
   very low value to reduce queueing delay, so a relatively small
   amount of congestion will exceed the marking threshold.  During
   such periods of congestion, conventional TCP will suffer packet
   loss and quickly and drastically reduce cwnd.  DCTCP, on the other
   hand, will use the fraction of marked packets to reduce cwnd more
   gradually.  Thus, the rate reduction in DCTCP will be much slower
   than that of conventional TCP, and DCTCP traffic will gain a
   larger share of the capacity compared to conventional TCP traffic
   traversing the same path.  If the traffic in the datacenter is a
   mix of conventional TCP and DCTCP, it is RECOMMENDED that DCTCP
   traffic be segregated from conventional TCP traffic.
   [MORGANSTANLEY] describes a deployment that uses the IP DSCP bits
   to segregate the network such that Active Queue Management (AQM)
   is applied to DCTCP traffic, whereas TCP traffic is managed via
   drop-tail queueing.

   Deployments should take into account segregation of non-TCP
   traffic as well.  Today's commodity switches allow configuration
   of different marking/drop profiles for non-TCP and non-IP packets.
   Non-TCP and non-IP packets should be able to pass through such
   switches, unless the switch actually runs out of buffer space.

   Since DCTCP relies on congestion marking by the switches, DCTCP's
   potential can only be realized in datacenters where the entire
   network infrastructure supports ECN.  The switches may also
   support configuration of the congestion threshold used for
   marking.  The proposed parameterization can be configured on
   switches that implement RED.  [DCTCP10] provides a theoretical
   basis for selecting the congestion threshold, but as with the
   estimation gain, it may be more practical to rely on
   experimentation or simply to use the default configuration of the
   device.  DCTCP will revert to loss-based congestion control when
   packet loss is experienced (e.g., when transiting a congested
   drop-tail link or a link with an AQM drop behavior).

   DCTCP requires changes on both the sender and the receiver, so
   both endpoints must support DCTCP.  Furthermore, DCTCP provides no
   mechanism for negotiating its use, so both endpoints must be
   configured through some out-of-band mechanism to use DCTCP.  A
   variant of DCTCP that can be deployed unilaterally and requires
   only standard ECN behavior has been described in [ODCTCP][BSDCAN],
   but it requires additional experimental evaluation.
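   As a non-normative illustration of endpoint-side segregation, an
   application or stack could tag DCTCP connections with a dedicated
   DSCP, which the fabric then maps to a queue with ECN marking
   enabled.  The setsockopt() call below is standard; the codepoint
   value is a hypothetical choice for this example:

      #include <netinet/in.h>
      #include <netinet/ip.h>
      #include <sys/socket.h>

      /* Tag a DCTCP socket with a dedicated DSCP so that switches
       * can steer it to a queue with ECN marking enabled. */
      int tag_dctcp_socket(int fd)
      {
          int tos = 0x04 << 2;   /* hypothetical DSCP; ECN bits zero */
          return setsockopt(fd, IPPROTO_IP, IP_TOS, &tos,
                            sizeof(tos));
      }

   The corresponding switch configuration (which queues mark CE and
   which are drop-tail) is vendor specific and out of scope here.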
6.  Known Issues

   DCTCP relies on the sender's ability to reconstruct the stream of
   CE codepoints received by the remote endpoint.  To accomplish
   this, DCTCP avoids using a single ACK packet to acknowledge
   segments received both with and without the CE codepoint set.
   However, if one or more ACK packets are dropped, it is possible
   that a subsequent ACK will cumulatively acknowledge a mix of CE
   and non-CE segments.  This will, of course, result in a less
   accurate congestion estimate.  There are some mitigating
   considerations:

   o  Even with an inaccurate congestion estimate, DCTCP may still
      perform better than [RFC3168].

   o  If the estimation gain is small relative to the packet loss
      rate, the estimate may not be too inaccurate.

   o  If ACK packet loss mostly occurs under heavy congestion, most
      drops will occur during an unbroken string of CE packets, and
      the estimate will be unaffected.

   However, the effect of packet drops on DCTCP under real-world
   conditions has not been analyzed.

   DCTCP provides no mechanism for negotiating its use.  The effect
   of using DCTCP with a standard ECN endpoint has been analyzed in
   [ODCTCP][BSDCAN].  Furthermore, it is possible that other
   implementations may also modify [RFC3168] behavior without
   negotiation, causing further interoperability issues.

   Much like standard TCP, DCTCP is biased against flows with longer
   RTTs.  A method for improving the RTT fairness of DCTCP has been
   proposed in [ADCTCP], but it requires additional experimental
   evaluation.

7.  Implementation Status

   This section documents the implementation status of the
   specification in this document, as recommended by [RFC7942].

   This document describes DCTCP as implemented in Microsoft Windows
   Server 2012.  Since publication of the first versions of this
   document, the Linux [LINUX] and FreeBSD [FREEBSD] operating
   systems have also implemented support for DCTCP in a way that is
   believed to follow this document.

8.  Security Considerations

   DCTCP enhances ECN and thus inherits the security considerations
   discussed in [RFC3168].  The processing changes introduced by
   DCTCP do not exacerbate these considerations or introduce new
   ones.  In particular, with either algorithm, the network
   infrastructure or the remote endpoint can falsely report
   congestion and thus cause the sender to reduce cwnd.  However,
   this is no worse than what can be achieved by simply dropping
   packets.

   [RFC3168] requires that a compliant TCP must not set ECT on SYN or
   SYN-ACK packets.  [RFC5562] proposes setting ECT on SYN-ACK
   packets but maintains the restriction of no ECT on SYN packets.
   Both of these RFCs prohibit ECT in SYN packets due to security
   concerns regarding malicious SYN packets with ECT set.  These
   RFCs, however, are intended for general Internet use and do not
   directly apply to a controlled datacenter environment.  The
   security concerns addressed by both these RFCs might not apply in
   controlled environments like datacenters, and it might not be
   necessary to account for the presence of non-ECN servers.  Since
   most datacenter servers run virtualized, additional security can
   be imposed at the physical servers to intercept and drop traffic
   resembling an attack.

9.  IANA Considerations

   This document has no actions for IANA.

10.  Acknowledgements

   The DCTCP algorithm was originally proposed and analyzed in
   [DCTCP10] by Mohammad Alizadeh, Albert Greenberg, Dave Maltz, Jitu
   Padhye, Parveen Patel, Balaji Prabhakar, Sudipta Sengupta, and
   Murari Sridharan.

   We would like to thank Andrew Shewmaker for identifying the
   problem of clamping DCTCP.Alpha and proposing a solution for it.

   Lars Eggert has received funding from the European Union's Horizon
   2020 research and innovation program 2014-2018 under grant
   agreement No. 644866 ("SSICLOPS").  This document reflects only
   the authors' views, and the European Commission is not responsible
   for any use that may be made of the information it contains.

11.  References
11.1.  Normative References

   [RFC0793]  Postel, J., "Transmission Control Protocol", STD 7,
              RFC 793, DOI 10.17487/RFC0793, September 1981,
              <https://www.rfc-editor.org/info/rfc793>.

   [RFC2018]  Mathis, M., Mahdavi, J., Floyd, S., and A. Romanow,
              "TCP Selective Acknowledgment Options", RFC 2018,
              DOI 10.17487/RFC2018, October 1996,
              <https://www.rfc-editor.org/info/rfc2018>.

   [RFC2119]  Bradner, S., "Key words for use in RFCs to Indicate
              Requirement Levels", BCP 14, RFC 2119,
              DOI 10.17487/RFC2119, March 1997,
              <https://www.rfc-editor.org/info/rfc2119>.

   [RFC3168]  Ramakrishnan, K., Floyd, S., and D. Black, "The
              Addition of Explicit Congestion Notification (ECN) to
              IP", RFC 3168, DOI 10.17487/RFC3168, September 2001,
              <https://www.rfc-editor.org/info/rfc3168>.

   [RFC5562]  Kuzmanovic, A., Mondal, A., Floyd, S., and K.
              Ramakrishnan, "Adding Explicit Congestion Notification
              (ECN) Capability to TCP's SYN/ACK Packets", RFC 5562,
              DOI 10.17487/RFC5562, June 2009,
              <https://www.rfc-editor.org/info/rfc5562>.

   [RFC5681]  Allman, M., Paxson, V., and E. Blanton, "TCP Congestion
              Control", RFC 5681, DOI 10.17487/RFC5681, September
              2009, <https://www.rfc-editor.org/info/rfc5681>.

11.2.  Informative References

   [ADCTCP]   Alizadeh, M., Javanmard, A., and B. Prabhakar,
              "Analysis of DCTCP: Stability, Convergence, and
              Fairness", DOI 10.1145/1993744.1993753, Proc. ACM
              SIGMETRICS Joint International Conference on
              Measurement and Modeling of Computer Systems
              (SIGMETRICS 11), June 2011.

   [BSDCAN]   Kato, M., Eggert, L., Zimmermann, A., van Meter, R.,
              and H. Tokuda, "Extensions to FreeBSD Datacenter TCP
              for Incremental Deployment Support", BSDCan 2015, June
              2015.

   [DCTCP10]  Alizadeh, M., Greenberg, A., Maltz, D., Padhye, J.,
              Patel, P., Prabhakar, B., Sengupta, S., and M.
              Sridharan, "Data Center TCP (DCTCP)",
              DOI 10.1145/1851182.1851192, Proc. ACM SIGCOMM 2010
              Conference (SIGCOMM 10), August 2010.

   [ECN-EXPERIMENTATION]
              Black, D., "Explicit Congestion Notification (ECN)
              Experimentation", Work in Progress, 2017.

   [FREEBSD]  Kato, M. and H. Panchasara, "DCTCP (Data Center TCP)
              implementation", 2015.

   [LINUX]    Borkmann, D. and F. Westphal, "Linux DCTCP patch",
              2014.

   [MORGANSTANLEY]
              Judd, G., "Attaining the Promise and Avoiding the
              Pitfalls of TCP in the Datacenter", Proc. 12th USENIX
              Symposium on Networked Systems Design and
              Implementation (NSDI 15), May 2015.

   [ODCTCP]   Kato, M., "Improving Transmission Performance with One-
              Sided Datacenter TCP", M.S. Thesis, Keio University,
              2014.

   [RFC3168-ERRATA3639]
              Scheffenegger, R., "RFC3168 Errata ID 3639", 2013.

   [RFC7942]  Sheffer, Y. and A. Farrel, "Improving Awareness of
              Running Code: The Implementation Status Section",
              BCP 205, RFC 7942, DOI 10.17487/RFC7942, July 2016,
              <https://www.rfc-editor.org/info/rfc7942>.

Authors' Addresses

   Stephen Bensley
   Microsoft
   One Microsoft Way
   Redmond, WA 98052
   USA

   Phone: +1 425 703 5570
   Email: sbens@microsoft.com


   Dave Thaler
   Microsoft

   Phone: +1 425 703 8835
   Email: dthaler@microsoft.com


   Praveen Balasubramanian
   Microsoft

   Phone: +1 425 538 2782
   Email: pravb@microsoft.com


   Lars Eggert
   NetApp
   Sonnenallee 1
   Kirchheim  85551
   Germany

   Phone: +49 151 120 55791
   Email: lars@netapp.com
   URI:   http://eggert.org/


   Glenn Judd
   Morgan Stanley

   Phone: +1 973 979 6481
   Email: glenn.judd@morganstanley.com