Minutes for TSVArea at IETF 74, San Francisco
Wednesday, 25th of March, 13:00-14:00

The Transport Area directors thank Scott Brim and Matt Zekauskas for
taking notes during the session.

State of the Area
-----------------

The session started with Lars Eggert and Magnus Westerlund presenting
the state of the area. The only new thing is the last two lines in the
plot: 14 documents approved, which is pretty good, and 7 published,
which is okay. 16 documents are in the RFC Editor queue; most are
waiting for the Channel Binding draft, and the count will go down by 8
or so once it is published. Looks good, but could still use some work.

Liaison activities: one from SG11 to PCN in response to earlier
statements, and one from SG11 to NSIS/TSV ADs/IESG/IAB on a protocol
for support of flow-state-aware transport technology; see NSIS
tomorrow afternoon.

The area has two BOFs this meeting: SHARA and STORM.

Rechartered: IPPM (TWAMP extensions, other), NFSv4 (federated file
systems). Winding down: ROHC (header compression in IPsec tunnels ->
publication requested); RSERPOOL concludes after the MIB document is
published.

Potentially on the horizon: P2P media streaming, with BAR BOFs at the
last and this IETF; there seems to be a community of interest. The
work, if taken on, could be in RAI or APPS. Those interested should
join ppsp@ietf.org. Multipath TCP: Mark Handley spoke at IETF 72,
Iljitsch van Beijnum this time. We may see a BOF proposal in the near
future.

The Transport Area directors have office hours today at 15:00,
Continental 7/8.

Multipath TCP Opportunities and Pitfalls
----------------------------------------

Iljitsch van Beijnum presented "Multipath TCP Opportunities and
Pitfalls". Usually multiple paths exist, but even when routing
protocols don't hide them, transport will send packets over just one
path. If multiple paths could be used simultaneously, this could
provide better speed and utilization, and better robustness: if one
path fails, other paths will still be up. Suppose you round-robin over
paths: you may be limited by the slowest path, and reordering triggers
fast retransmit and cwnd/2. So MIP, Shim6 and SCTP don't give us
usable multipath.

Question, Michael Tuexen: Concurrent multipath transfer for load
sharing is a proposed SCTP extension that has been implemented and has
one solution for congestion control.

Path selection: multi-address (src/dst pair); first-hop selection by
the host (multiple NICs or default routers); and "path selector
value" (a value in the packet selects the path).

Q, Joe Touch: changing the src/dst pair means it's a different
connection. Need to do something above TCP, otherwise everything that
depends on TCP needs changing too. Iljitsch: the proposal is to change
both TCP and the upper layer. Mark Handley: yes, allow more than one
TCP subflow bound to the same socket buffer, so TCP changes but the
socket interface looks the same. The issue needs more discussion, for
example at a BOF.

Kinds of multipath: paths may merge toward the destination (they are
not always independent); if you modify routing, paths may diverge
somewhere in the middle of the network.

Receive buffer issues: what if a path breaks but packets keep flowing
over another path? The receive buffer fills up. This needs to be
solved.

Fairness and performance: a single multipath flow shouldn't be more OR
less aggressive than TCP, but multipath will either be more aggressive
than a single flow or be restricted in its usage of the available
resources.

Mailing list for further discussion:
www.ietf.org/mailman/listinfo/multipathtcp

Questions. Michael Tuexen: one problem that hasn't been solved: how
can we detect whether paths towards a peer share a bottleneck? You
could be fighting with yourself. Joe Touch: how is the state of all
those connections coupled together? Thinks the corner cases are a
mess. Iljitsch: there are many options here. Mark Handley: just on
congestion control, see work by Frank Kelly and Peter Key; there are a
number of options you can use so you don't care whether bottlenecks
are the same or different. When you increase your window, you scale
the increase based on the total window of all flows, and back off
similarly.
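The coupled increase/back-off just described can be sketched roughly
as follows. This is a hypothetical illustration of the idea from the
discussion, not the actual Kelly/Key algorithm or any implementation;
all names and the clamping constant are made up for the example.

```python
class Subflow:
    """One TCP subflow of a multipath bundle (illustrative only)."""
    def __init__(self, cwnd=1.0):
        self.cwnd = cwnd  # congestion window in segments

def on_ack(subflows, f):
    """Coupled additive increase: the per-ACK increase is scaled by the
    TOTAL window of all subflows, so the bundle as a whole grows by
    roughly one segment per RTT, like a single TCP flow would."""
    total = sum(s.cwnd for s in subflows)
    f.cwnd += 1.0 / total

def on_loss(subflows, f):
    """Coupled back-off: on loss, the affected subflow backs off by
    half the aggregate window (clamped to a 1-segment floor), so
    traffic drains away from lossy paths."""
    total = sum(s.cwnd for s in subflows)
    f.cwnd = max(1.0, f.cwnd - total / 2.0)
```

Because losses are more likely on the more loaded path, and the
back-off there is scaled by the whole bundle, this kind of coupling
shifts traffic toward the less congested subflow.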
Tends to push traffic onto the less loaded path. Michael Tuexen: is
there a description of the algorithm? Mark: yes, the theory is in
papers, but we can give you code.

Marcelo Bagnulo: multiple addresses and IPsec ... what's the problem?
You can have multiple CoAs in MIP, Shim6 and so on; IPsec can deal
with that. Joe Touch: if you have a single endpoint connection
identifier and map it onto another set of connection identifiers, then
it will work. But what you are doing is taking TCP and breaking it,
instead of taking SCTP, which is already broken for IPsec -- people
already know it is and don't try to use it. Lars: IPsec is not the
biggest problem at this time.

Joao Taveira: how can this work with multiple flow windows where the
receiver is ignorant? Mark: our proposal requires subflows to have
individual flow windows, which implies a compliant sender and
receiver. I don't understand how you could have a sender-only
implementation without breaking congestion control. Iljitsch: flow
control needs to be session-wide; that is not something you can avoid.

Rethinking TCP-Friendly
-----------------------

Matt Mathis presented "Rethinking TCP-Friendly". The definition of
fairness isn't enough. Have the network itself do something --
endpoints fill the system to capacity. "Relentless" TCP: packets are
sent in response to packets arriving, and the only window reduction is
due to loss. You always have exactly the window that the network held
on the previous round trip. Additive increase only after a lossless
RTT with flight size == cwnd. But it's very aggressive. Queue
controller: segregate flows, monitor.

David Black: it looks like the example you're using is in the
large-window area of TCP operation, but there are other proposals in
that range that people are working on. All the high-speed window
people have been here. A: I'm bailing on that range; in the gigabit
range this is better. Bob Briscoe: the reason is that everything gets
mushier, and the answer is a 1/p controller, not 1/sqrt(p). Rate is ~
1/loss_rate; one loss every 3 RTTs.
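The scaling difference behind the 1/p vs. 1/sqrt(p) discussion can be
checked numerically. This is a generic illustration of the two
response functions, with an arbitrary constant k; it is not taken from
the presentation.

```python
def loss_for_rate_sqrt(rate, k=1.0):
    """Standard-TCP-style response: rate ~ k / sqrt(p),
    so the loss rate needed for a given rate is p = (k / rate)**2."""
    return (k / rate) ** 2

def loss_for_rate_inv(rate, k=1.0):
    """1/p-style response: rate ~ k / p, so p = k / rate."""
    return k / rate

# Doubling the sending rate:
#   - with rate ~ 1/sqrt(p), the loss rate must drop by a factor of 4
#   - with rate ~ 1/p, it only needs to drop by a factor of 2
ratio_sqrt = loss_for_rate_sqrt(1.0) / loss_for_rate_sqrt(2.0)  # 4.0
ratio_inv = loss_for_rate_inv(1.0) / loss_for_rate_inv(2.0)     # 2.0
```

Equivalently, at any given rate a 1/p controller tolerates a much
higher equilibrium loss rate than standard TCP does.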
Vastly higher equilibrium loss rates than TCP with its 1/sqrt(p),
where each doubling in bandwidth results in a factor-4 required
reduction in packet loss. David Black: I think you just assumed no
mice. A: he had assumed not changing the timeout behavior. Just used
standard additive increase; thinks one could do better with things
like delay sensing ... later.

The implementation works but hammers hard on SACK and the recovery
code; the quality of the recovery code has gotten a lot better during
this work. It hammers on the network too -- you see components
failing. Implementation: flight size vs. cwnd. The current philosophy
is to protect the network and other flows from bursts. Noise in the
system pulls down cwnd frequently. It would be better to let the
network itself protect other flows. A paper was submitted to PFLDnet;
more information at: http://staff.psc.edu/mathis/relentless

Questions. Mark Handley: to clarify, convergence towards fairness
comes out of router activity? A: yes, I believe it converges to
window-fair in the short term. This is based on the probability of
loss: a large flow has a higher probability of losses. For multiple
flows we haven't done that in real life, only on a poor simulator;
this needs more investigation. Bob Briscoe: doesn't want per-flow
scheduling; the problem is that we're thinking of fairness per box.
Remove that and we don't have to do per-flow handling. A: "we will
keep working on it."

How can we move beyond one-size-fits-all congestion control?
TCP-friendly works pretty well, although there are some problems, but
it forbids Relentless and other advances. Can traffic management work
at Internet scales? Protect other people: release it marked "less
effort" (LE). A lot of LE-marked traffic --> networks will either do
LE well or sequester it. The LE definition is extended to include RFC
2581, and the requirement that non-AIMD traffic use LE is relaxed.
Soliciting help: this is undoing 20+ years of IETF legacy.

Hums ... do we need to move beyond TCP-friendly? All hummed.
Bob Briscoe: OK, so we want to move, but is 1/p the right direction?
Mark Handley: a ton of solutions have been proposed; we don't have
time here to figure out the right question. How many people are
interested in supporting this draft in some way? A few hums, none
against.
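For reference, the Relentless window behavior discussed in the
presentation -- reduce cwnd by exactly the segments lost rather than
halving, so the window tracks what the network delivered in the
previous round trip -- can be sketched as below. This is one reading
of the description in these minutes, not Matt Mathis's code; the
function name and 1-segment floor are illustrative.

```python
def relentless_update(cwnd, lost, flight_size):
    """One RTT's worth of Relentless-style window update:
    - on loss, subtract exactly the number of lost segments
      (instead of TCP's multiplicative cwnd/2 decrease);
    - additive increase by one segment only after a lossless RTT
      in which the flight filled the window;
    - otherwise (application-limited), hold the window."""
    if lost > 0:
        return max(1, cwnd - lost)
    if flight_size == cwnd:
        return cwnd + 1
    return cwnd
```

Compared with standard TCP, a single loss shrinks the window by one
segment rather than half, which is what makes the scheme so much more
aggressive at equilibrium.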