ICCRG meeting minutes, IETF 88, Vancouver, BC, Canada
TUESDAY, November 5, 2013, 16:10-18:40, Regency B

Shahid Akhtar, An Evaluation of Various AQM techniques on Access Networks with Realistic Internet Traffic, 30 min
----------------------------------

-- Slide: network topology

Michael Ramalho: RTT is lower in new environments. RTT is measured by the actual session, which doesn't involve bufferbloat. Did your simulations include bufferbloat?
Shahid: Absolutely. Bottleneck link queue increases are visible to TCP.

-- Slide: conclusions

Stuart Cheshire: You identify the higher packet loss rate with CoDel as a disadvantage, but it goes together with lower latency. If you want to keep the queue short, you'll need to knock TCP down more often. ECN helps to avoid packet losses; without ECN, resources are wasted on dropped packets.
Rong Pan: Can you elaborate on the QoE score, the combination of losses and latency?
Shahid: There are three factors. Video startup takes some time. We only look at the average bit rate. All losses are recovered.
Rong: RED is hard to tune. A question about max_p: effectively it is tail drop.
Shahid: The loss rate never goes up to 5%.
Rong: How about Google QUIC, which does FEC that covers losses of up to 5%? And how about the RED averaging parameter? It depends heavily on the scenario and the queue draining rate.
Shahid: The worst-case RTT is 250 ms; we use that number to set the weight.
???: Video downloading is not an interactive real-time application. For consumers, real-time means SIP and Skype.

Naeem Khademi, Evaluating CoDel, FQ_CoDel and PIE: how good are they really?, 30 min
--------------------------------------

-- Slide: FQ_CoDel: blending SFQ and AQM

???: Why would FQ make any difference with one flow?
Naeem: There are four flows, 1 flow per sender, and we have 4 senders.

-- Slide: ECN

Jim Gettys: WiFi is broken; there is a huge amount of buffering in the WiFi stack that AQM can't control. In your graph there are 400 ms, most of which shouldn't be there. Underlying stack details matter; it depends on the device driver and other parts of the system. We must collaborate on the test beds; these are complex systems. The proof is in your slides, the 400 ms result: it is almost all buffering. Queues are not managed in a uniform manner.
Tim Shepard: Fascinating work. What is your definition of delay, and how do you measure it?
Naeem: TCP's RTT. The statistics are over all packets, captured with tcpdump.
Tim: I don't care how much delay 64 TCP flows have. When I run a ping experiment or Skype, I see none of it, because my packets go around the queue, because the bottleneck is running fq_codel.
Tim: If you didn't run out of hash buckets, then it would work better.
Naeem: The 64 flows are going to be hashed to different buckets.
Tim: You ran out of hash? So my ping traffic goes to...
Naeem: It will be hashed to one of the sub-queues.
Tim: Is it correct that your statistic for fq_codel was over all packets?
Naeem: Yes, all packets!
Tim: If you did not RUN OUT of hash buckets...
Naeem: Well, you have 1024 hash buckets in fq_codel, and here we use 64 of them at maximum.
Dave Taht: We chose 1024 (??? missed something ???) as the default, and recommend that people stick to it.
Andrew McGregor: It takes a while to converge. One of the problems of not performing well on the uplink: the available bandwidth drops by more than the fraction taken from the downlink (??? missed some here ???). What happens if you account for that in the algorithm?
Answer: Take it offline.
Dave: Look at ARED. I require you to enable SFQ simultaneously with AQM.
Shahid: With mixed flows of different types, AQM behaves differently.
Naeem: That's on our todo list.
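A minimal sketch of the sub-queue hashing discussed above, for illustration only: it assumes a plain 5-tuple hash, whereas the actual Linux fq_codel code uses the kernel's flow dissector and a perturbed Jenkins hash. With far fewer active flows than buckets, a sparse flow such as a ping or a Skype call usually lands in a sub-queue of its own.

    #include <stdint.h>

    #define FQ_CODEL_QUEUES 1024   /* fq_codel's default bucket count */

    struct flow_key {
        uint32_t src_ip, dst_ip;      /* IPv4 source/destination */
        uint16_t src_port, dst_port;
        uint8_t  protocol;
    };

    /* Mix the 5-tuple into one of the sub-queues. With ~64 bulk flows
     * and 1024 buckets, collisions are rare, so sparse traffic gets
     * its own queue and is not stuck behind the bulk TCP flows. */
    static unsigned int fq_classify(const struct flow_key *k)
    {
        uint32_t h = k->src_ip * 2654435761u;  /* multiplicative mix */
        h ^= k->dst_ip * 2246822519u;
        h ^= (((uint32_t)k->src_port << 16) | k->dst_port) * 3266489917u;
        h ^= (uint32_t)k->protocol * 374761393u;
        h ^= h >> 16;                          /* final avalanche */
        return h % FQ_CODEL_QUEUES;
    }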
Philippe Cadro, Packet oriented QoS management model for a wireless Access Point, 15 min
http://tools.ietf.org/html/draft-jobert-iccrg-ip-aware-ap-00
-------------------------------------------

Bob Briscoe: How many flows do you have per bearer, and how rapidly does the rate change?
Philippe: It depends on the radio scheduler. In the current model this calculation does not change much.
Bob: Fair queueing between bearers? (Philippe clarifying on slides)
Bob: No per-flow scheduling? (A: right)
Andrew McGregor: The results apply to WiFi, with some differences.

Yuchung Cheng, Recent advancements in Linux TCP congestion control, 25 min
----------------------------------------------

-- Slides: reordering resilience

Tim: Did you turn off fast retransmit?
Yuchung: No, Linux automatically detects reordering.
Yoshifumi Nishida: You will see dupacks; why is congestion control completely agnostic to them?
Yuchung: Congestion control decides how much to reduce based on the information in the SACKs.
Jana Iyengar: The dupack threshold is set dynamically. When it is increased, how does TCP figure out that a retransmission was spurious?
Yuchung: Initially it is 3; TCP uses timestamps to detect reordering.
Jim: I believe reordering has encouraged people to use large receive-side buffers. Receivers block everything in the buffers. This is very important work; I hope other OSes take it on.
Shahid Akhtar: Did you consider the interaction between paced TCP and non-paced TCP?
Yuchung: Here we only pace. With video we see a 40% improvement in loss rate.
Matt Mathis: Thanks; this was all done after I moved to another group.
Jana: Matt's draft lists many RFCs that need to be touched (regarding Laminar).
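A minimal sketch of the timestamp check behind the answer above (essentially Eifel detection, RFC 3522), assuming hypothetical structure and field names rather than the actual Linux code:

    #include <stdbool.h>
    #include <stdint.h>

    struct tcp_ca_state {
        uint32_t retrans_tsval;  /* TSval stamped on our retransmission */
        uint32_t reordering;     /* reordering degree; starts at 3, the */
                                 /* classic dupack threshold            */
    };

    /* Called when an ACK covers a retransmitted segment. If the echoed
     * timestamp is older than the one on the retransmission, the
     * receiver must have gotten the original copy, so the "loss" was
     * really reordering: report it as spurious and raise the threshold
     * so reordering of this depth no longer triggers fast retransmit. */
    static bool retransmit_was_spurious(struct tcp_ca_state *tp,
                                        uint32_t echoed_tsval,
                                        uint32_t observed_reordering)
    {
        if ((int32_t)(echoed_tsval - tp->retrans_tsval) < 0) {
            if (observed_reordering > tp->reordering)
                tp->reordering = observed_reordering;
            return true;
        }
        return false;
    }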
-- Slide: TCP buffer autotuning

Jana: How common are losses of retransmissions?
Yuchung: 10% of retransmits could be lost.

-- Slide: other notable changes

Matt: Bursts are far larger than 10.

-- Slide: calling for attention

Zaheduzzaman Sarker, Congestion control issues in Real-time Communication - "Sprout" as an example, 20 min
----------------------------------------------------

-- Slides: evaluation

Michael Ramalho: The video is bursting everything at line rate, waiting for the next segment, and then bursting again.

-- Slide: conclusions

Andrew McGregor: When we were doing WiFi, we learned that in the absence of new information, you should assume no delta from the previous information.

Michael Welzl, Transport Services BOF plan presentation / discussion, 30 min
Activity website: https://sites.google.com/site/transportprotocolservices
Problem statement draft: draft-moncaster-tsvwg-transport-services-00
-----------------------------------------------------

-- Slides: what real problems does this solve?

Yuchung: I don't understand the problem. If I want a transport that is completely bulletproof against bufferbloat, what then?
Michael W: Services like unordered delivery, multipath, deadlines, ... The idea is to have an API for the service.

-- Slide: test result

Tim: Isn't the scheduler the real magic there, not TCP?
Michael W: It gives priorities. It is the common congestion control, the part of SCTP that allows us to do that.
Jana: The scheduler is helpful.

-- Slide: plans for a TAPS WG-to-be

Matt: Early on in SCTP, in the late 1990s, there was a long wish list from applications. You should try to resurrect that document.
Jana: I'll second what Matt said. We also need to find out what applications want, and how to map the services to that.
Andrew: Should this also cover transport services provided by game libraries, etc.?
Michael W: Yes.
Spencer Dawkins: We are going to talk about related material in the tsvarea meeting. The RUTS BOF in Orlando was related.
Lars Eggert: Middlebox measurements showed that performance sucks for UDP flows in middleboxes. This is going to be a problem for QUIC and other UDP bulk data. Only a small percentage was a problem, but it was still a problem.
Michael W: Start by implementing SCTP over UDP, but figure out other cases along the way.
Lars: How do developers migrate to that? How do we deploy it? These are the hard questions; without answers to them, why would we do this?
Michael W: I agree these are good questions. We will look at the APIs that applications use.
Jim Roskind: It is nice to think of transport taking steps higher and higher. There are issues with negotiation layers between SSL, etc. You will throw the baby out with the bathwater unless you think of specific applications and how they can be worked out.
Michael W: Agreed, but how do we do it? Advice welcome.
Jana: Things are vertically integrated. A bunch of applications use APIs that exist in user-space libraries.
Michael W: Today these libraries are only using TCP and UDP.
Kevin Lahey: How does this relate to the multiple-interfaces work?
Michael W: MPTCP is definitely a part of this. The MPTCP API RFC has some related discussion.

---- end of meeting