Notes: RMCAT at IETF-90 (Toronto)
Chairs: Lars Eggert, Karen Neilson
Notetaker: Gorry Fairhurst

SESSION 1 (updates and new proposals): Thursday 1300-1500

RMCAT CC requirements

Colin Perkins: Low delay is a requirement; high delay variance would also be an issue, since it increases the jitter buffer - hence low jitter is also an issue.
Karen: Randall proposed text that seemed to say that low delay is a requirement.
Colin: You could subsume jitter into delay and make one item.
Mo: The desire for low delay is part of the sensitivity of the candidate solutions - it is a requirement for some solutions. If jitter induces delay due to buffering, this is also part of the delay requirement.
Lars: Please propose an edit on the list to talk about delay.
Karen: RTCweb terminology isn't really being used, so we need to edit to fix this. We need to define terms such as "flow".
Colin: There is a draft in the AVTEXT work that can help.
Zahed: There are Diffserv implications with regard to flows. I do not understand the requirements any more with regard to DSCPs.
Karen: The use of DSCPs impacts several things; we'll take this later.

Evaluation Criteria

Mo: Is there going to be some media quality metric? This is now in an appendix.
Lars: The first goal is to seek experimentation for safe use; we could later focus on performance metrics.
Mo: We want to allow a simpler comparison between candidates. Some candidates have different approaches that could deal with media quality when they experience rate variations.
Lars: One issue is to establish test cases that others can use. I'm trying to prioritise time - I do see this as important output from this WG.
Karen: We noted that the charter says we should state the deficiencies of existing CCs, and we would like to hear from the ADs.
Lars: Should we suggest removing the milestone, or do we need to do this?
Varun: Google CC uses TFRC, and others use LEDBAT at least as a basis.
Zahed: I think we should keep this as an educational document.
Michael Ramalho: I think CCs in candidate proposals should say this - I think perhaps we should rethink the method.
Michael W: Google uses the TCP equation, but not the TFRC method. I would scratch this.
Jana: +1 to Michael. LEDBAT as specified is a scavenger-class method.
Mo: I do not think we need a separate document. We could add some explanatory sentences saying some methods are not viable, as a simple edit.
Colin: We could ask that each CC proposal say why a new method was needed.
Xiaoqing: We could ask them to evaluate against existing methods also, and this will show the deficiencies.
Lars: I heard equal support for adding this to the requirements or omitting the document.
Spencer (AD): I was hearing this could be short. I do not want to make work to produce a new document; you could put this in the requirements so people know in future years why this was done.
Matt: I am interested to know if there are any algorithms that meet the goals for the protocol (low delay) and also send so much that it causes concern.
Lars: Are we ready to take forward evaluation of the methods?
Varun: I think we are OK to proceed with evaluation.
Lars: There is a more difficult discussion later when we look at standards adoption.
Mo: Does our approach include morphing techniques to produce a good candidate?

There are IPR disclosures for one draft (relating to work at Nokia). There is thought to also be an Ericsson IPR disclosure on the Google CC.

Note the work in DART and TSVWG on the use of DSCPs.
Varun: If browsers add markings then the CC ...
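As context for the marking point above, a minimal sketch (assuming the sender marks its own UDP media socket; the AF41 codepoint is purely an example) of how an application could set a DSCP on outgoing packets:

    import socket

    AF41_DSCP = 34  # example codepoint for interactive video; illustrative only

    def open_marked_media_socket():
        # The DSCP occupies the upper six bits of the (former) IPv4 TOS byte,
        # so the value passed to IP_TOS is the DSCP shifted left by two bits.
        # Whether the network honours the mark end-to-end is a separate question.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, AF41_DSCP << 2)
        return sock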
Gorry: I suggest people keep an eye on the DART WG; it is a combination of the transport and RAI areas to look at these issues - this topic is their only task. Look at the next version of their draft and see how this impacts the work here. At present it expects media to be sent in one AF class, which appears end-to-end as if all traffic shared the same network queue.
Mo: This discussion is at the OS level. I do not know what a CC can do with the outbound DSCP marks.
Michael Ramalho: I agree that this is only drop precedence. Our algorithms see the same queue, but the drop precedence is harder - this does impact differences between drop and mark, and this impacts quality metrics.
-: I disagree; this will make a difference to CC.

Using RTCP Feedback for Unicast Multimedia Congestion Control - Colin Perkins, draft-perkins-rmcat-rtp-cc-feedback-01 (milestone: rtcp-requirements)

Jana: Why can RTCP not be sent per packet?
Colin: There is no way of piggybacking feedback - the level of overhead is likely to be costly and could induce congestion.
Mo: Would you be able to say whether we need RTCP extensions?
Colin: We have lots of metrics in RTCP already. If we continue, we should look at what is actually needed - one suggestion is to add a receiver-generated bit-rate field.
Joerg Ott: This is useful (3 others agreed).
Lars: This is pretty favourable - who would adopt this as Informational? (12 + 2 via Jabber in favour; none against)

Update on Test Cases for Evaluating RMCAT Proposals - Zahed Sarker, draft-sarker-rmcat-eval-test-01 (milestone: eval-criteria)

Matt M: Note there were proposals in TCPM to make TCP more robust to reordering.
Ted: There was an RTT below which he found the method was ineffective. Do we have some way to know if this impacts multiple methods? Can we make sure these scenarios are tested for other techniques also?
David T: I expect the capacities mentioned in this draft are too small; say, look at an asymmetric 5 Mbps upload / 22 Mbps download case with a cable modem. The difference in bit rate can be "interesting", especially when dealing with download traffic at this rate: you get a lot more ACKs in the pipe than you'd expect from a symmetric network.
Zahed: Yes, we could look at a range of scenarios. Then we need to think about how to compare results.
Who thinks it should be adopted? (4) Who thinks it should not be adopted? (1)
Ted: I think there are things to be added (bandwidth, RTT), and jitter needs to be decided.
David T: I think it needs to bake more.
Lars: Please comment on the list to show what is missing, and we can then take this on as a WG item.

RMCAT Application Interaction - Mo Zanaty, draft-zanaty-rmcat-app-interaction-01 (milestone: app-interactions)

Zahed: I think the FEC/quality metrics need more thought.
Lars: This is to identify interactions that may be used to help populate an API. Is this the model to use? Does RTCweb agree with this model?
Varun: Could we take this model and look at each CC?
Lars: I think this needs to be decided on the list.
Lars: Is this a good starting point for this work? (5 in favour; 0 against)

Coupled congestion control for RTP media - Michael Welzl, draft-welzl-rmcat-coupled-cc-03 (milestone: group-cc)

Matt: If I have two flows using different codepoints, that may cause interactions. A lower drop precedence will incur loss on the other.
Jana: This can be easy when this is not the case.
Michael: Yes, with measurements it's possible to support multiple receivers, and then this works.
Zahed: I do not see an advantage for SCReAM.
Michael: This is an implementation point.
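As an aside on the coupled-CC item: the draft groups flows believed to share a bottleneck and splits their combined congestion-controlled rate among them by priority. A minimal sketch of that sharing step follows; it is illustrative only, not the actual algorithm in draft-welzl-rmcat-coupled-cc.

    # Illustrative only - not the algorithm in draft-welzl-rmcat-coupled-cc.
    # Flows registered against the same bottleneck share one aggregate rate,
    # split in proportion to their relative priorities.
    def share_aggregate_rate(aggregate_rate_bps, priorities):
        # priorities: mapping of flow id -> relative priority (> 0)
        total = sum(priorities.values())
        return {flow: aggregate_rate_bps * prio / total
                for flow, prio in priorities.items()}

    # Example: 2 Mbps shared between a higher-priority video flow and a
    # lower-priority screen-share flow.
    print(share_aggregate_rate(2_000_000, {"video": 2, "screenshare": 1}))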
Colin: I would be surprised if there were unrelated apps across the same 5-tuple.
Michael: It could be other tuples.
Colin: I think we can say it works with cooperating applications.
Varun: What happens at time 25 in the slides?
Michael: That is an artifact of the trace, which sends less at the end.
Jana: This is pretty complicated as a mechanism for bottleneck detection?
Karen: The detection of shared bottlenecks is a separate goal.
Zahed: There is an inherent dependency - this needs the other method.
Michael: No, not when it relies on the 6-tuple.

Adaptive FEC for Congestion Control - Varun Singh, draft-singh-rmcat-adaptive-fec-00 (milestone: cc-cand)
There is an IPR disclosure for this work.

Ted: If we did not see agreed text by August, we would not specify FEC.
Gorry: The draft starts by mentioning UEP - but is bit-error repair a part of the method?
Varun: This is to come later.
Michael Ramalho: The feedback can also go to the codec - this influences the codec.
Varun: Yes, this was intended also. We want to combine the error correction into the algorithm; we may want to add burst-loss protection with pacing and use probing at the same time.
Jana: I am not quite sure about the proposal on the table. The choice of FEC seems like a design choice for later.
Varun: There are many.
Jana: I think this needs an interface between layers, and that is the main thing.
Mo: This is useful and often done. How do you separate this from the core CC? Some parts of the CC can be pulled out of the main CC.
Michael Ramalho: You need to signal the presence of CC - this seems a vendor decision. I am having a hard time knowing what the system needs to signal - the set of FEC schemes supported? The rest seems like an issue for vendor differentiation.
Jim R: I think the confusion is about integration - this seems not a part of CC, more a part of jitter reduction.
Lars: This is used when you increase the codec rate, so you can hide the impact of a failed capacity probe.

SESSION 2 (evaluation results): Thursday 1730-1830

A self-clocked based algorithm as a RMCAT candidate solution - Zahed Sarker, draft-johansson-rmcat-scream-cc-02 (milestone: cc-cand)

Colin: Frame blanking puts the video on hold? Does this generate a burst?
Zahed: That's handled by pacing, so it sends faster for a short time.
Michael Thornburgh: ACK clocking seems like TCP; what happens after a "blanking" interval? Do you slow start?
Zahed: There's no ACK clock, but we resume from where we were.
Lars: The 32 bits are the ACK vector; why 32 packets?
Zahed: We started with 32, and we were happy with the answer.
Colin: So are you dropping queued RTP packets or input video frames? The former is going to create a "loss" from the view of RTCP.
Mo: It should drop payloads, not numbered RTP frames.
-: This is dropping payloads (input), so there are no sequence number gaps.
Lars: What is the base LTE delay?
Zahed: 20 ms base delay, 40 ms RTT.
Michael Ramalho: There are several principles here that need to be done in an RMCAT protocol. Not all protocols need to do it the same way, but I like this. I think if you have an empty room - then if you "proxy" TCP, is the mechanism counting packets in flight or bits in flight? Do they map?
Zahed: I think packets and bytes in flight can be converted.
Michael Ramalho: This isn't the same - it depends on video content changes.
Zahed: If you don't see loss, then you start up.
Lars: TCP has two flavours, bytes or packets.
Zahed: The current queue is based on packet counts.
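On the packets-versus-bytes exchange above, a small illustrative tracker (hypothetical code, not SCReAM's implementation) shows why the two counts only map onto each other through the sizes of the packets actually outstanding, which for video vary with content:

    # Hypothetical sketch: tracking both packets and bytes in flight.
    # The two only "map" via the per-packet sizes, which vary with the video.
    class FlightTracker:
        def __init__(self):
            self.outstanding = {}  # RTP sequence number -> payload size (bytes)

        def on_send(self, seq, size_bytes):
            self.outstanding[seq] = size_bytes

        def on_ack(self, seq):
            self.outstanding.pop(seq, None)

        def packets_in_flight(self):
            return len(self.outstanding)

        def bytes_in_flight(self):
            return sum(self.outstanding.values())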
Michael Thornburgh: Adobe has some experience of doing this sort of protocol in the network. We often see small packets, so we use byte-based counting.

Evaluation of SCReAM and GCC - Zahed Sarker, draft-johansson-rmcat-scream-cc-02 (milestone: eval-results)

Lars: Did you start with the GCC code, or did you code from the spec?
Zahed: The spec, but we needed some extra information to complete the implementation.
Varun: Our previous results were also based on the spec rather than the code.
Zahed: We also have a version that is CUBIC-like and ramps more slowly after probing, which reduces the jitter bursts.
Michael Ramalho: RTT fairness, lower right quadrant (video frame delay in seconds?) - if this all goes through the same queue, do all the spikes have the same peaks?
Zahed: I will check the data.
Xiaoqing: On the RTT slides it looks like TCP has yet to stabilise (the plot is too short).
Zahed: I think we have a longer run; this wasn't plotted here.
Mo: When I first read this, I thought of SPROUT. In your model you react to ACKs, but SPROUT goes one step further to look at the trend. Could SPROUT help with prediction?
Zahed: SPROUT also has much more detailed feedback, down to microseconds. We tried, via a Master's student project, to explore SPROUT, but could not find benefit in this.
Colin: Delay spikes make me think the video is going into a playout de-jitter buffer (there may be many models, so we may not be able to agree on a single model), but we do need to understand the implications of the jitter on the media.
Lars: In the DCCP work, we used an "oracle" playout buffer analysis, in which we looked at all possible playout buffers and then chose the best possible algorithm for setting the video.
Mo: The frame-skipping scheme is one way, but there are more elastic encoders that use methods that could make bit-rate changes - so this is just one possibility.
Lars: We may be able to abstract some of this to help define things in the evaluation criteria.
Randall: On slide 7 - has this been tried with lower limits on delay? 100 ms is high.

Update on NADA and Evaluation Results of Test Cases - Xiaoqing Zhu, draft-zhu-rmcat-nada-03 (milestone: eval-results)

The NADA algorithm has been updated.
Michael Welzl: What are the variables in the control equation?
Xiaoqing: R = rate; D = queue delay; theta = dynamic range; delta = interval between updates ...
Colin: How do you calculate queue delay?
Xiaoqing: This is a simple simulation. In future, we could use a long-term delay. A single flow assumes zero delay at startup.
Lars: In the single-flow case, does this acquire the same bandwidth as two sharing flows would in the same case?
Lars: Is ramp-up slow (i.e. in seconds)?
Xiaoqing: Yes.
Randall: It seems the build-up is still quite slow - it's hard to tell from the plots.
Lars: What is the sharing traffic? Slide 16 - a 1 Mbps flow on the backward channel; this shows the forward channel.
Richard Scheffenegger: Were all queues drop-tail?
Xiaoqing: Yes, in these tests.
Zahed: The results are similar, but you do much longer test runs - why? Because you ramp up slowly?
Xiaoqing: Yes, but ramp-up is another algorithm.
Michael Ramalho: Both of the two behaviours could be changed: if we ramp up and do not see queue delays, then we can ramp up faster; conversely, if delay grows quickly with the rate, then we can reduce the growth. This seems like a second-order effect. We can add new mechanisms to do this and control ramp-up. The method is currently predicted to be linear at ramp-up, and it does this.
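For context on the control-equation questions above, a simplified delay-based rate update in the spirit of what was described; this is a sketch only, with illustrative parameter names, and not the exact equations in draft-zhu-rmcat-nada. The reference rate is nudged up when the aggregate congestion signal (dominated by queuing delay) is below a target, and down when it is above it.

    # Simplified sketch, NOT the exact NADA update from draft-zhu-rmcat-nada.
    # r: current reference rate (bps); x: aggregate congestion signal (ms),
    # dominated by queuing delay; x_target: target value of x; kappa: gain;
    # delta: time since last update (ms); tau: reference interval (ms).
    def gradual_rate_update(r, x, x_target, r_min, r_max,
                            kappa=0.5, delta=100.0, tau=500.0):
        r_new = r + kappa * (delta / tau) * ((x_target - x) / tau) * r_max
        # Clamp to the configured operating range of the codec.
        return max(r_min, min(r_max, r_new))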
We'll address ramp-up later.
Lars: I also see that this can be resolved by looking harder at the test cases. Next time, you should not be scheduled at the end (remind me of this next time).
Lars: One thing to ask is whether a circuit breaker would trip?
Michael Ramalho: Yes, we have lots of data; we could look at this.

[Close of Working Group meeting at 18:39]
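For reference on the circuit-breaker question above: the RTP circuit breaker idea (draft-ietf-avtcore-rtp-circuit-breakers) compares the media sending rate against a TCP-friendly rate computed from RTCP-reported RTT and loss, and trips only if the media rate stays well above it. A rough sketch, with illustrative rather than normative constants:

    from math import sqrt

    def tcp_friendly_rate_bps(packet_size_bytes, rtt_s, loss_fraction):
        # Simplified TCP throughput equation (RFC 5348, b = 1, t_RTO = 4*RTT),
        # converted from bytes/s to bits/s.
        s, R, p = packet_size_bytes, rtt_s, loss_fraction
        t_rto = 4.0 * R
        return 8.0 * s / (R * sqrt(2.0 * p / 3.0)
                          + t_rto * 3.0 * sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p * p))

    def circuit_breaker_trips(media_rate_bps, packet_size_bytes, rtt_s,
                              loss_fraction, multiplier=10.0):
        # Illustrative trigger: sustained sending far above the TCP-friendly rate.
        if loss_fraction <= 0.0:
            return False  # no reported loss: equation undefined, do not trip
        return media_rate_bps > multiplier * tcp_friendly_rate_bps(
            packet_size_bytes, rtt_s, loss_fraction)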