ICCRG meeting minutes, IETF 86, Orlando, Tuesday, March 12 2013, 13:00-15:00, room Caribbean 6
================================================================

Note taker: Pasi Sarolahti, pasi.sarolahti@iki.fi

* Announcement for the Berlin 87th IETF: we plan to organize a session on "congestion control beyond the IETF TSV area: in DTN, in ICN..."

Gorry Fairhurst & Bob Briscoe: "Advice on network buffering - a suggested update to RFC 3819", 10 min
---------------------------------------------------------------

* Bob Briscoe: disagrees with flow isolation.
* Vijay Subramanian: there are some reasons for the original recommendations. What is behind your recommendations vs. the earlier ones?
* Gorry: from the buffer sizing point of view, there is still some research to be done.
* Matt Mathis: all the talks are around the same subject. We realize there is a problem here. History: all TCP implementations were (...missed...). It matters how much queuing there is in the core, which was not the case a few years ago.
* Spencer Dawkins: historical point: earlier there was little understanding of what buffer sizes should be.

Rong Pan: "A follow-up on the PIE queue management algorithm", 30 min
---------------------------------------------------------------------

* Lars Eggert: are you implementing PIE before it is standardized, or waiting for standardization?
* Rong: we are discussing it tomorrow [in the TSVAREA meeting].
* Fred Baker: we are not standardizing queue algorithms; the purpose of standardization is interoperability.
* Scott Whyte: we can't use RED because everyone's implementation is different. A standard would be nice.
* Rong: I don't think we can standardize an implementation.
* Bob: what do you mean exactly by a standard implementation? What would be the same / different for each packet?
* Scott: I get the same [drop/mark] probabilities on the same profiles.
* Bob: why do you want them to be the same?
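For context on the PIE discussion above: PIE periodically adjusts a drop/mark probability from the estimated queueing delay. A minimal sketch of that control law follows; the constants `ALPHA`, `BETA`, and `TARGET` are illustrative values, not the exact parameters from the draft or any implementation.

```python
# Sketch of PIE's periodic drop-probability update. Parameter values here
# are illustrative; real implementations also scale alpha/beta with the
# magnitude of drop_prob and handle burst allowance, omitted here.
TARGET = 0.020   # target queueing delay in seconds (20 ms, as mentioned above)
ALPHA = 0.125    # weight on deviation of current delay from the target
BETA = 1.25      # weight on the delay trend (is delay rising or falling?)

def pie_update(drop_prob, qdelay, qdelay_old):
    """One periodic update of PIE's drop probability.

    drop_prob grows when queueing delay exceeds the target or is trending
    upward, and shrinks when delay is below target or falling.
    """
    drop_prob += ALPHA * (qdelay - TARGET) + BETA * (qdelay - qdelay_old)
    return min(max(drop_prob, 0.0), 1.0)  # clamp to a valid probability
```

For example, with delay at 40 ms and rising, the probability increases; with delay at 5 ms and falling, it decreases toward zero.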
* Lars: comment on what was said earlier: we are going to standardize a congestion control algorithm in RMCAT.
* Michael Welzl: last time I got the impression you would replace CoDel because PIE is better?
* Rong: not replacing; some details in CoDel are unclear, we just wanted to compare them.
* Jim Gettys: CoDel doesn't work well in datacenter environments, because inside a datacenter RTTs are much smaller; one size does not fit all.
* Bob: 1) To Rong: it would be interesting to compare with Datacenter TCP (DCTCP). Answer: DCTCP would probably help. 2) To Jim: DCTCP does not assume RTTs; that's why it works well.

Toke Hoeiland-Joergensen: "The State of the Art in Bufferbloat Testing and Reduction on Linux", 30 min
----------------------------------------------------------------------

* Vijay: about the topology -- how does TCP play a role in this setup? Where is the bottleneck? Where do you run the qdisc?
* Toke: all of them. Netem is not used.
* Rong: the fq_codel results are similar to what I got with separate queues.
* Toke: this is not priorities, just fairness queueing.
* Dave Taht: fq_codel is a tweak to the original algorithm; we believe it reduces latency.
* Rong: if I have just one UDP stream, would fq_codel behave the same as CoDel?
* Toke: yes.
* Jim: this is data from running code, not simulation.
* Vijay: (asking for clarification: when a new TCP flow arrives, what happens with the old flows)
* Toke: (explaining the queueing logic details)
* Bob: the results show delay benefits; you need to show utilization also.
* Toke: utilization with fq_codel is higher than with the default.

Greg White: "Simulation study of AQM performance in DOCSIS 3.0", 10 min
-------------------------------------------------------------------

* Michael: there are different target values for delay between the different presentations; why is that?
* Greg: a 5 ms target for CoDel would be trying to achieve a latency that is impossible for it to meet in DOCSIS.
* Jim Gettys: there are a couple of queues involved.
  Some queues are underneath what CoDel can do. There is an implementation issue with what Linux can do; the world is not perfect yet. When playing with different links, you need to tweak parameters.
* Rong: we set the delay reference to 20 ms to fit the wide area.
* David Ros: why are you using different values for the "target" parameter in CoDel and in PIE?
* Greg: CoDel and PIE use "target" for different purposes. We played with the target for PIE; it works best with a value lower than we had thought it would.
* Rong Pan: the 5 ms minimum latency is because of the request rate of the link. Why does sfq not have that?
* Greg: with sfq a packet may arrive at the modem buffer. By the time the request-grant loop completes, new packets may have arrived and stolen the grant from the original packet.
* Rong: if we run it multiple times, sfq may shift to the other side.
* Matt: fq_codel does well with two-level queueing; you don't get the normal packet-train behavior. It changes the dynamics.
* Michael ??: it looks like most of this small gaming traffic is stealing capacity from other traffic.
* Bob: you can't tell if this is good or bad without seeing the other packet losses.
* Greg: there are more slides to come that show that.
* Rong: in our environment we don't see this behavior with PIE. Something is lost in translation. We will debug it later and send to the mailing list.
* Kevin Fall: did you do FQ by itself or only with AQM?
* Greg: not the most scalable option; doing something smaller is the thing to do.
* Lars Eggert: in the BitTorrent case, was it TCP or UDP?
* Greg: it was using the Linux implementation of LEDBAT.
* Yuchung Cheng: fq_codel has good results. Are we deploying fq_codel? If there are no significant concerns, let's just do it. We would like to be beta testers.
* Greg: there is a lot of work going on. This is pretty new.
* Jana Iyengar: I would like to see experiments with just sfq.
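For context on the fq_codel vs. sfq discussion above: fq_codel hashes each flow's 5-tuple to one of a fixed number of sub-queues, each with its own CoDel instance. A minimal sketch of that hashing step follows; the real qdisc also does deficit-round-robin scheduling and new-flow prioritization, which are omitted here, and the hash function shown is illustrative rather than the one Linux uses.

```python
import zlib

# Simplified sketch of fq_codel's flow separation: a 5-tuple hash selects
# one of NUM_QUEUES sub-queues (1024 is the Linux qdisc's default count).
# Each sub-queue would run its own CoDel instance; DRR scheduling between
# queues is omitted in this sketch.
NUM_QUEUES = 1024

def flow_queue(src, dst, sport, dport, proto):
    """Map a flow 5-tuple to a sub-queue index in [0, NUM_QUEUES)."""
    key = f"{src}:{sport}-{dst}:{dport}/{proto}".encode()
    return zlib.crc32(key) % NUM_QUEUES
```

This also illustrates the exchange earlier in the minutes: a single UDP stream always hashes to the same sub-queue, so with one flow fq_codel degenerates to plain CoDel on that queue.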
* Matt: the algorithms are so much better today than drop tail that we should roll them out even if they are not yet perfect.
* Rong: congestion caused by UDP should be evaluated, not left out. Do we have a bias with different TCPs?
* Lee Howard: we have been waiting to see the competing strategies; they might be ready to go.
* Bob: these tests are not testing for other pathologies. We are testing for what we are better at.
* Dave Taht: CoDel is deployed in Linux and has been tested in many places.

Matt Mathis: "Drawing the line between transport and network requirements", 30 min
------------------------------------------------------------

* Vijay: does the run length depend on the size of the buffer?
* Matt: will talk about it later. The important point is that the actual parameters need to be published in some way.
* Kevin Fall: MPEG-DASH dynamically adapts bitrate selection, (... missing rest...)
* Matt: the Windows XP bottleneck is the workstation.