Minutes for RTCWEB WG in London - Session One
Tuesday 9:00-11:30, 2014-03-04
Note Takers: Roni Even, Patrick McMannis
Jabber Scribe: Hadriel Kaplan

Chair Admin (chairs) - 10 min
=============================
WG chair Magnus Westerlund presented the Note Well, followed by the session's agenda. The chairs then presented the stepping-down area director, Gonzalo Camarillo, with some presents in the form of tea.

The survey of peer-to-peer streaming built on WebRTC technology, to be discussed in the PPSP WG, was announced, followed by an announcement of the upcoming interim meeting, including a plea for a host. The current list of RTCWEB dependencies on other WGs was then presented. Keith Drage (AVTEXT WG chair) reminded the WG of the need to review and, where appropriate, use draft-ietf-avtext-grouping-taxonomy.

Data Channel (Michael Tüxen) - 80 min
=====================================
draft-ietf-rtcweb-data-channel
draft-ietf-rtcweb-data-protocol

Michael provided an update on the changes in the latest version of draft-ietf-rtcweb-data-protocol-03 (slide 2), including that the protocol and label fields are now UTF-8 encoded and that the protocol is now named the "Data Channel Establishment Protocol (DCEP)". This was followed by the updates in draft-ietf-rtcweb-data-channel-07 (slide 3). He then went through the open issues.

Slide 4 - open issue 1 - message size limit
-------------------------------------------
Michael introduced the issue and proposed that the receiver-side limit for the intermediate step be removed.

Martin Thomson commented that we should do as suggested. There are practical limitations, but let's use errors to indicate when such a limitation is reached. Michael commented that WebSockets doesn't have any limits either.

Cullen Jennings (as individual) had no issue with requiring NDATA as MUST implement, as they will complete in similar timeframes, but expressed strong concern about the need to have an infinite buffer on the receiving device. This is not workable for Internet of Things devices, since they have small memory sizes (a WebSocket issue as well), but more capable devices also have limitations; thus the specification is not implementable without handling the limitation. We should not reopen this question, which was discussed at an earlier IETF meeting.

Michael commented that he originally thought a limitation was required, but then talked to the WebSocket people, who said that they can handle the concept of a large message. Cullen responded that on a limited device this doesn't work; just because WebSocket did the wrong thing, we shouldn't do the same. Applications fail in random ways when encountering this limitation.

Randell Jesup stated that we should do this according to the comments on the list.

Martin Thomson commented that the way to handle this issue, when you are unable to handle the buffer requirements of what is being sent, is to close the connection, i.e. reset the SCTP stream.

Harald Alvestrand stated that what he remembers of the discussion last time was that either one needs a limit on the sender side, so that the receiver can know the sender to be out of bounds, or one needs a way for the receiver to send an error stating "You sent more data than I can handle". There was strong push-back against defining an error protocol in addition to the channel establishment. His opinion is that he wants to know what fails, or to know that it is handled.

Eric Rescorla noted that several people had asked him what the issue is. His view is that it is a system design problem. There is a send API that allows sending objects of unlimited size.
The receive API does not allow dealing with less than the whole object. Thus there is no way to deal with objects larger than what the receiver can handle, including storing them somewhere other than in memory. The concerns would be limited if there were a streaming API, but the current situation is a bad one where one has to guess and then retry if the guess was wrong. Why is this good?

Jörg Ott commented that you don't really avoid the need for an error code by having a limit; an Internet of Things device should simply not talk to peers that send unlimited-size data.

Randell Jesup commented that you actually don't have to store it in memory; you can stream the data to disk as it is being received. The receiver limit might not be static, as it might change as free storage varies.

Justin Uberti commented that as an application developer he would rather deal with an error ahead of time than after having spent hours transferring multiple gigabytes of data that end up discarded. What we are building with the receiver-side limit is an escape valve.

Jonathan Lennox commented that this appears to be a W3C API issue; the problem goes away if one supports partial receives in the API. Eric Rescorla supported this and sees the issue as not having visibility into what goes on: the data goes in and you only get it after the whole object has been passed. There are also upper limits on how much of the receiver's resources can be consumed.

Martin Thomson commented that, to a large extent, this can be pushed to the W3C. There is actually ongoing work on a streaming API, so we should take the opportunity to work with them. Randell Jesup commented that the W3C strongly favors "send(blob)" simply working.

Hadriel Kaplan commented that we had an agreement, and that was that there would be an indication of the limit, or a signal.

The chairs prepared a hum, and a significant number of people indicated that they understood the issue. After a first hum attempt showed uncertainty, the hum alternatives were changed to:
1. Retain the receiver-side limit
2. Drop the receiver-side limit
The hum was very clear for retaining the receiver-side limit (all in the room for retaining, two on Jabber for dropping it).

Conclusion:
- Will keep the signaling of receiver-side size limits

Slide 5 - open issue 2 - reaction on IP address change
------------------------------------------------------
Matthew Kaufman thought this issue was related to issue 4, which was therefore reviewed before continuing the discussion. Matthew then stated that the second statement on the slide doesn't make sense when the congestion control mechanism doesn't have a congestion window. It should therefore be stated in another way, such as resetting to the initial path conditions. Michael agreed and added that we can be explicit for the case when TCP-based congestion control is used.

Conclusion:
- Add a general statement on congestion control and a specific one for TCP window-based congestion control

Slide 7 - open issue 4 - Alternate SCTP congestion control
----------------------------------------------------------
Matthew Kaufman noted that the current SCTP congestion control algorithm does not allow us to meet the specified use cases that include the data channel at all, for example use case number 4, where a file transfer is done in parallel with audio and video. The fact that there is no alternative congestion control is a big issue; not only do we need to specify one, we also need to specify that the current one is not usable.
Bernard Aboba asked: if one has a sender-side congestion control, one doesn't need to negotiate it, one can simply use it? Michael commented that this is true if it is purely sender side; the alternative would be something like doing LEDBAT for SCTP, which would require new chunks. Michael is positive towards new congestion control work, but that would be a separate document. This issue is only about how we specify this in the context of the data-channel document, especially considering that we know what the alternative will be.

Matthew Kaufman commented that the document does not meet its own use cases. Magnus Westerlund (as chair) commented that the use cases are what is desirable to achieve in the end, but we know we will not meet all of them initially. The WG can adopt a solution when a solution exists. Matthew responded that we could remove the data channel completely and go even faster.

David Black (as TSVWG co-chair) asked Matthew to clarify what is broken: is it the congestion control or other QoS aspects? Matthew explained that if one has both a window-based congestion control and a rate-based one, then the window-based one will starve out the rate-based one, and one only receives the file, not the audio and video.

David Black stated that a congestion control negotiation framework should go in from the start. The reason is that the lower-than-best-effort CS1 Diffserv marking is unreliable; by putting in LEDBAT one can protect oneself from that case. Michael commented that this still doesn't resolve the issue of what to do in the data channel. David agreed that LEDBAT for SCTP should be done in TSVWG; the data-channel document is not the right place. But some negotiation framework should be considered from the start.

Cullen Jennings asked Matthew for clarification: was the main concern the delay-based congestion control's interaction with any TCP-like congestion control? Matthew clarified that he would like to be able to do better than that when all flows are under the same control, as opposed to an external application, like an email client downloading some large emails in parallel with your WebRTC session. Given that we have this use case in the document, and we know that it is possible to share congestion state between the implementations, we must be able to do something.

Harald Alvestrand: in the RTCWEB WG we don't change SCTP. Engineering experience shows that if you add extension points without requirements you get them wrong. A single-ended congestion control can always send less than what the congestion control tells you that you are allowed to send. This doesn't need to be negotiated, and it will be done to address the starvation issue. The negotiation of congestion control in SCTP is a TSVWG matter, and the RTCWEB WG should not depend on it.

Randell Jesup stated that large transfers could starve RTP, but the browser can apply its own limits to SCTP's sending rate outside of SCTP if it wants to. LEDBAT (per Vancouver, from the RMCAT discussions) can starve delay-based RTP congestion control. Most uses of data channels don't involve large file transfers and won't starve RTP, and some uses of data channels don't use audio/video channels at all. We also stated that we wanted to merge the congestion controls (or coordinate them) between data channels and RTP (see the RMCAT discussions).

Ted Hardie (as individual) suggested removing the congestion control negotiation and instead explaining the issue and the need for an endpoint to do congestion control across all its flows, i.e. RTP and SCTP. Michael commented that the first part is good, but the second part will be jumped on as being underspecified. Ted volunteered to take an action to craft text for such a statement and send it to the list for discussion.

Conclusion: remove the congestion control negotiation from the document.
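Several speakers above (Harald, Randell) noted that a sender can always transmit less than its congestion controller permits, with no negotiation needed. Below is a minimal sketch, in C, of such a sender-side cap applied above the SCTP stack; the structure, function names and values are purely illustrative and are not taken from any of the drafts discussed.

    #include <stddef.h>
    #include <time.h>

    /* Minimal sketch of a sender-side cap applied above the SCTP stack, so a
     * browser can keep data-channel traffic below what SCTP's congestion
     * controller would otherwise allow. All names and values are illustrative;
     * nothing here comes from the drafts under discussion. */
    struct rate_cap {
        double bytes_per_sec;   /* cap chosen by the implementation  */
        double bucket;          /* bytes currently available to send */
        double bucket_max;      /* burst allowance in bytes          */
        double last;            /* time of last refill, in seconds   */
    };

    static double now_sec(void)
    {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return (double)ts.tv_sec + ts.tv_nsec / 1e9;
    }

    /* Returns nonzero if 'len' bytes may be handed to the SCTP stack now;
     * otherwise the caller queues the data and retries later. */
    static int rate_cap_allow(struct rate_cap *rc, size_t len)
    {
        double t = now_sec();

        rc->bucket += (t - rc->last) * rc->bytes_per_sec;
        if (rc->bucket > rc->bucket_max)
            rc->bucket = rc->bucket_max;
        rc->last = t;
        if (rc->bucket < (double)len)
            return 0;               /* over the cap: hold the message back */
        rc->bucket -= (double)len;
        return 1;
    }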
Slide 6 - open issue 3 - SCTP parameters
----------------------------------------
Michael reviewed the issue with the SCTP parameters, which stems from the default parameters being designed to handle multi-path. If path.max.retrans = association.max.retrans, one avoids a state (the dormant state) which is underspecified. Karen Nielsen commented that there is ongoing work in TSVWG to clarify the dormant state. Michael agreed, but that would require another normative dependency on work in progress. Karen further commented that it appears the intention is to use the default parameters, for example the RTO values specified in RFC 4960; the deployments she knows of usually use other values. Michael commented that this is the big Internet. Karen replied that she is simply stating that she is surprised that these are used for WebRTC. Michael said we can discuss these other parameters, likely on the TSVWG mailing list.

Martin Thomson moved the topic onto the relation between ICE, consent and the SCTP association timeout. They can be independent. ICE can perform a path change under the SCTP association, which continues, given that the timeout is longer than it takes to re-establish the path. This is not an issue as long as each is considered in the context of the other mechanism; it will be the shorter of the two that matters. Michael commented that having it long does ensure that ICE has time to perform fail-over.

Paul Kyzivat asked if the two ends of the SCTP association need to have the same values, and thus have these negotiated. Michael responded no, each side can have different values.

Magnus Westerlund (as chair) asked if we need to specify this at all. Why not simply leave it at the default values? Matthew Kaufman agreed: if one lets ICE run, it may find a path, and keeping the SCTP association around has benefits. In some applications and environments even 300 seconds is usable.

Before the discussion got back into the max.retrans parameters, Cullen Jennings (as chair) stopped it with the motivation that this is not the right room to ask. Michael said he will bring it up in the context of the DTLS-SCTP document in TSVWG.

Michael went on to the heartbeats question. Matthew stated that he agrees we are likely not the right people for that discussion. Martin Thomson said he didn't particularly care, but had an inclination to turn the heartbeats off, as they are redundant. Cullen (as individual) said this should go back to TSVWG, and it really is a question of why one uses them. Michael added that they tell you that one's peer, i.e. the SCTP stack at the peer, is still there. Cullen added that in that case let's do what SCTP does by default.

Conclusion: The max.retrans parameters will be taken up in TSVWG in the context of the DTLS encapsulation. The data-channel specification will be silent on the remaining parameters.
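For readers unfamiliar with the parameters being discussed, the sketch below shows how an implementation using the RFC 6458 SCTP socket API (mirrored by usrsctp, which browsers use for DTLS-encapsulated SCTP) could set path.max.retrans equal to association.max.retrans, the configuration Michael described as avoiding the underspecified dormant state. The option and field names come from RFC 6458; the value passed in is purely illustrative.

    #include <stdint.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>   /* RFC 6458 SCTP socket API definitions */

    /* Sketch: set path.max.retrans equal to association.max.retrans on an SCTP
     * socket 'fd', using the RFC 6458 socket options.  A zeroed spp_address
     * changes the association-level default, i.e. every path.  The RFC 4960
     * defaults are Path.Max.Retrans = 5 and Association.Max.Retrans = 10, so
     * equalizing them is an explicit deviation; the value passed in here is
     * illustrative only.  Heartbeats (also discussed above) are controlled via
     * spp_flags / spp_hbinterval in the same structure. */
    static int equalize_max_retrans(int fd, uint16_t max_retrans)
    {
        struct sctp_paddrparams paddr;
        struct sctp_assocparams assoc;

        memset(&paddr, 0, sizeof(paddr));      /* zeroed fields stay unchanged */
        paddr.spp_pathmaxrxt = max_retrans;    /* path.max.retrans             */
        if (setsockopt(fd, IPPROTO_SCTP, SCTP_PEER_ADDR_PARAMS,
                       &paddr, sizeof(paddr)) < 0)
            return -1;

        memset(&assoc, 0, sizeof(assoc));
        assoc.sasoc_asocmaxrxt = max_retrans;  /* association.max.retrans      */
        return setsockopt(fd, IPPROTO_SCTP, SCTP_ASSOCINFO,
                          &assoc, sizeof(assoc));
    }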
Slide 8 - open issue 5 - Error handling
---------------------------------------
Matthew Kaufman commented that he definitely doesn't want callbacks into the JS due to malformed messages; either of the other options is fine. Martin agreed with Matthew, and noted for completeness that there is a fourth option of delivering the message, but that shouldn't be done. Justin Uberti added that in most other cases we are resetting the channel; rather than tolerating programming errors we should blow up.

Tim Panton wanted it clarified that resetting the channel actually shows up to the JS as an unnamed error, which was confirmed. Eric Rescorla commented that these errors are caused by an error in the data-channel implementation, or by talking to an older version; they can't be generated by the JavaScript. Therefore it is appropriate to reset the channel. Randell Jesup commented that he is fine with resetting the channel, and that it is up to the negotiation to ensure that the peers are using the agreed set of parameters.

Michael went on to say that there are cases where a receiver can detect the sender misbehaving. Matthew Kaufman commented that either you can deliver the message, or something bad happens and you are forced to reset the channel; why attempt to be smarter than that? Michael agreed and said that we can include a general statement of that nature.

Paul Kyzivat commented that he thinks this issue is a manifestation of the ambiguity of whether the data channel is a protocol. What is missing is that the data channel is a protocol, a thin one, but still a protocol. Michael commented that there is no extra data, you are doing what SCTP does, thus there is no protocol. Paul argued that there are procedures associated with the handling, thus this is a protocol, and it would be simpler to actually call it a protocol. If we called it a protocol we could also have a PPID to send error codes back to the sender before closing the channel.

Ted Hardie (as individual) commented that one should consider these statements as jelly bracelets stating what SCTP would do anyway.

Jonathan Lennox asked whether it is the SCTP stack or the code above the SCTP stack that deals with this; wouldn't an unknown PPID be passed on to the layer above?

Henning (W3C) commented that "report the error" would be terrible from a security perspective. There was an agreeing mumble from the crowd, and so far everyone had been arguing for resetting the channel.

Randell Jesup commented that this is mostly a result of erroneous negotiation; otherwise he agrees with Matthew Kaufman and Ted Hardie.

Tim Panton asked for clarification whether this means resetting both SCTP streams. Michael answered yes. Tim commented that this really is an action for the protocol that we refuse to name.

After that the topic was concluded, as everyone supported resetting the channel.

Conclusion: the next version of the draft will have text specifying that the channel is reset.
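As background for "reset the channel": in SCTP terms this is the stream reconfiguration mechanism of RFC 6525. The sketch below uses the RFC 6525 socket API extension to reset the outgoing stream carrying a channel; per the exchange above, the peer is expected to reset its own outgoing stream in turn, so both directions end up reset. It is only an illustration and assumes stream reconfiguration was enabled on the socket beforehand (SCTP_ENABLE_STREAM_RESET).

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <netinet/sctp.h>   /* RFC 6525 stream reconfiguration socket API */

    /* Sketch: "reset the channel" for the data channel carried on SCTP stream
     * 'sid'.  Only the outgoing stream is reset here; error handling is kept
     * minimal for brevity. */
    static int reset_data_channel(int fd, uint16_t sid)
    {
        struct sctp_reset_streams *srs;
        size_t len = sizeof(*srs) + sizeof(uint16_t);
        int ret;

        srs = calloc(1, len);
        if (srs == NULL)
            return -1;
        srs->srs_flags = SCTP_STREAM_RESET_OUTGOING;
        srs->srs_number_streams = 1;
        srs->srs_stream_list[0] = sid;
        ret = setsockopt(fd, IPPROTO_SCTP, SCTP_RESET_STREAMS,
                         srs, (socklen_t)len);
        free(srs);
        return ret;
    }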
Slide 9 - open issue 6 - Protocol
---------------------------------
Michael introduced this as wanting to verify that the "Sub Protocol" field is rightly specified. Martin Thomson said this exists to mimic WebSockets. It is a ridiculous feature that shouldn't have been specified, but it is a string, and there is no need to make this complicated. The registry is specified in RFC 6455 and is first come, first served. David Black commented that if there are uniqueness requirements, then UTF-8 is a tar pit; please talk to him off-line if that applies. Christer Holmberg asked what information should be in the reference if someone registers; this is not for IANA but should be in the document. Martin Thomson commented this will be as useful as it is for WebSocket. Randell Jesup commented: do as WebSocket does. Peter Saint-Andre commented that he doesn't like this: what if someone proposes something wacky for the data channel that can't be done over WebSocket? Magnus Westerlund commented that it is only a namespace. Paul Kyzivat added that if you have a protocol that requires unreliable and unordered delivery, which is impossible over WebSockets, then you only define it for the data channel.

Dan Druta noted that this is an informational attribute for the JS application; let's not complicate this. Jonathan Lennox commented that he disagrees that the registry is useful in the context of negotiating the data channel using SDP. Cullen Jennings (as individual) added that there are cases where an application implementer would like to use the same mechanism over both data channel and WebSocket, and that is a strong argument for having a common namespace.

Conclusion: use the WebSocket namespace for registration.

Slide 10 - open issue 7
-----------------------
Skipped due to lack of time.

Slide 11 - open issue 8 - Support of DCEP
-----------------------------------------
Michael reviewed the issue. Christer asked for clarification that it is currently MUST support, not MUST use. Martin Thomson asked if this was asking for a statement "You must implement this RFC" (in that RFC)? It was clarified that the question is where we mandate the implementation of DCEP. Jonathan Lennox commented that for WebRTC, DCEP is clearly a MUST implement, but there may be usages, like CLUE, where the implementation and usage of the data channel does not require DCEP. Magnus Westerlund commented that for WebRTC the right place might be draft-ietf-rtcweb-transports. Paul Kyzivat said that when negotiating this we need to know whether DCEP is available or not; this means either a word somewhere, or, if it is optional, some words. Ted Hardie clarified that for WebRTC this goes into a system-level document.

Conclusion: This text should not be in the data channel document but in another RTCWEB document.

Slide 12 - open issue 9 - U-C 7: proxy browsing
-----------------------------------------------
Magnus Westerlund (as individual) restated his issue, namely that proxy browsing can be implemented in JS using WebRTC and has severe privacy concerns. Matthew Kaufman commented that yes, U-C 7 has security implications, but there are a lot of other things that have security implications. If this is not a required implementation feature for a browser, then we don't need to discuss it. Also, the security considerations would likely end up longer than the rest of the document. Martin Thomson added that he doesn't see a need to make our use cases iron-clad. Randell Jesup commented that he is okay with documenting it, but don't try to draft text at the mic; he would be even happier with leaving it out. Harald added that we don't include security statements about peer-to-peer networking, which is another use, either. People are going to do things that are possibly illegal, immoral and fattening using this technology. So leave it out. Magnus Westerlund (as individual) stated that he can live with doing nothing, but fears this will come back to haunt us in a later review step.

Conclusion: do nothing.
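For reference, since DCEP recurs throughout this agenda item: the DATA_CHANNEL_OPEN message in draft-ietf-rtcweb-data-protocol carries the channel type, priority, reliability parameter, and the UTF-8 label and sub-protocol fields mentioned in the update at the start of the section. The packed C struct below is only a reader's sketch of that layout; the draft version in use is authoritative for the field definitions and message-type values.

    #include <stdint.h>

    /* Illustration of the DCEP DATA_CHANNEL_OPEN message layout from
     * draft-ietf-rtcweb-data-protocol.  All multi-byte fields are in network
     * byte order; the label and protocol strings (UTF-8, not NUL-terminated,
     * per the update noted above) follow the fixed header back to back with
     * no padding.  The packed struct is a sketch for readability only. */
    #define DCEP_DATA_CHANNEL_ACK  0x02
    #define DCEP_DATA_CHANNEL_OPEN 0x03

    struct dcep_open {
        uint8_t  message_type;        /* DCEP_DATA_CHANNEL_OPEN                */
        uint8_t  channel_type;        /* reliable / partially reliable, etc.   */
        uint16_t priority;            /* relative priority of the channel      */
        uint32_t reliability_param;   /* retransmit or lifetime limit, if any  */
        uint16_t label_length;        /* length of the UTF-8 label             */
        uint16_t protocol_length;     /* length of the UTF-8 sub-protocol name */
        /* followed by: label (label_length bytes), protocol (protocol_length) */
    } __attribute__((packed));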
Transports (Harald Alvestrand) - 45 min
=======================================
draft-ietf-rtcweb-transports

Harald reviewed the purpose of the document, including that it mostly should be a pointer to other documents.

Firewall Friendly Features
--------------------------
Harald quickly reviewed what appears non-controversial to say:
- TCP to the TURN server MUST be implemented
- TLS to the TURN server MUST be implemented

What Harald thought required more discussion was the question of whether one should be able to establish TCP directly peer to peer. This works well if the peer has a public address; it also works in some 30-60% of the cases. But this then requires TCP ICE candidates, as well as a specification of how to send the RTP packets and the DTLS/SCTP packets over TCP. For RTP a likely answer for this framing is RFC 4571; another would be TURN framing. For DTLS, Harald asked for feedback.

Eric Rescorla stated that DTLS packets are self-contained and thus don't need framing; they could be sent consecutively over the TCP connection. Harald asked if that can be combined with RTP using RFC 4571 framing. Eric said he had to think about that. Justin Uberti stated that Chrome does send both RTP and DTLS in RFC 4571 framing. Eric responded that doing so definitely works, as it falls back to the regular demultiplexing. Harald concluded that he will add to the document that framing of RTP and DTLS within TCP is done using RFC 4571.
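Since RFC 4571 framing came up repeatedly above, here is a minimal sketch of what it amounts to on the sending side: each RTP/RTCP (or, per Harald's conclusion, DTLS) packet is preceded by a 16-bit length in network byte order before being written to the TCP connection. The helper names are illustrative.

    #include <stddef.h>
    #include <stdint.h>
    #include <errno.h>
    #include <unistd.h>

    /* Write exactly 'len' bytes to 'fd', retrying on short writes. */
    static int write_all(int fd, const uint8_t *buf, size_t len)
    {
        while (len > 0) {
            ssize_t n = write(fd, buf, len);
            if (n < 0) {
                if (errno == EINTR)
                    continue;
                return -1;
            }
            buf += n;
            len -= (size_t)n;
        }
        return 0;
    }

    /* RFC 4571 framing: each packet sent over the TCP connection is preceded
     * by a 16-bit big-endian length field.  Per the conclusion above, the same
     * framing would carry both RTP/RTCP and DTLS packets. */
    static int send_framed(int fd, const uint8_t *pkt, size_t len)
    {
        uint8_t hdr[2];

        if (len > 0xFFFF)           /* the length field is only 16 bits wide */
            return -1;
        hdr[0] = (uint8_t)(len >> 8);
        hdr[1] = (uint8_t)(len & 0xFF);
        if (write_all(fd, hdr, sizeof(hdr)) < 0)
            return -1;
        return write_all(fd, pkt, len);
    }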
The next question is what the requirement level for support of ICE TCP candidates in a WebRTC implementation should be: MUST, SHOULD or MAY?

Eric Rescorla commented that he does not agree with MUST, and is on record as not really caring about the difference in endorsement between SHOULD and MAY, but thinks MAY is appropriate. Justin Uberti commented that the TCP-to-TCP case is not interesting when both peers are browsers; the interesting case is with gateways. Although TURN over TCP could be used, having support for TCP allows the gateway to select using TCP directly and avoid one more moving part. Therefore Justin would like to see this at least at SHOULD strength, preferably MUST.

Cullen Jennings (as individual) stated that if this is going to be MUST strength, then the framing should be TURN framing, as that avoids having to implement two framings in the browser; it can as easily be implemented in a gateway as RFC 4571. The only case where TCP works directly, and UDP does not, is when you talk to a server in the middle of the network. Why should we implement two different framings at MUST strength?

Uwe Rauschenbach argued that allowing ICE TCP candidates would be good to improve gateway support. When it comes to framing, he asked Cullen if he would put only RTP in the TURN framing, or also UDP, and thus require specification work. Cullen responded that you would do the same thing as in regular TURN, you are just doing TURN; instead of forwarding the content of the TURN framing, the server would locally consume it. It is simply an encapsulation called TURN. Harald reflected that to do that, you actually don't have to support ICE TCP candidates; you can simply announce the server as a TURN server and use regular TURN to it.

Adam Roach wanted to push back on Justin's desire to make this MUST. One can create an almost arbitrarily long list of mechanisms which will further increase the chance of getting through NATs/firewalls, but each with more and more diminishing returns. Adam would not like to go down a road where we add more mechanisms, like NAT-PMP, PCP, SOCKS, Teredo, etc. We have to draw the line somewhere, and this proposal is well on the diminishing-return side. Justin responded that this is well on the single-digit-percent side. As for using TURN directly: then you have to handle the allocate and all these things at the gateway/server, rather than just putting a two-byte header on each packet. Using ICE also ensures that if UDP works, it will be picked over TCP.

Eric Rescorla commented that in the browser-to-browser case this is of no value whatsoever. In the other case, talking to a server, there clearly is some benefit, and we are debating whether it should be TURN or ICE TCP. With TURN it is the server folks that have to implement things; with ICE TCP, the browser side. Eric wanted at this point to revise his remarks on MAY vs. SHOULD: unless the endpoints all support ICE TCP, the servers will still have to run a TURN server, and thus they could simply just do the TURN server. So it is either MUST, or we should simply not recommend it at all.

Jonathan Lennox commented that the alternatives are either ICE TCP candidates, which do use RFC 4571 framing as specified by RFC 6544, or co-location of TURN. Uwe wondered if using ICE TCP can avoid assigning relay candidates; using TCP candidates would mean host candidates and would speed up completion, whereas with TURN you would have to connect, then run the TURN allocate, and then test the candidates using ICE. Cullen Jennings (as individual) commented that Eric Rescorla really clarified the picture: this is just going to be a little extra work for the browser people and will really simplify things for everyone else, so let's make it a MUST.

The chairs called a hum between the alternatives:
1) TCP ICE candidates are MUST implement
2) TCP ICE candidates are SHOULD/MAY implement
3) TCP ICE candidates will not be discussed in the document
The hum indicated very strong support for 1).

HTTP Connect Method
-------------------
Harald stated his view on the issue of HTTP CONNECT through web proxies. The main questions are:
- Definition of proxy discovery/configuration
- Whether the browser should identify itself to the proxy as sending WebRTC traffic, if it cares
- The need for an encapsulation

Martin Thomson stated that we don't want to enter into the problem-filled area of proxy discovery. Regarding the second item, we don't need to provide anything additional, as you would use TURN; the HTTP CONNECT method is simply a request to establish a TCP connection through the proxy to a specified address. For the third item, TURN resolves this. We have all the tools we need, with the exception of proxy discovery.

Cullen Jennings (as chair) commented that this forgets about the changes to the SOCKS protocol to run the TURN server on the ports for web servers. The chairs have worked with people to establish a mailing list external to the WG to discuss this issue with the security people. Testing has shown that doing HTTP CONNECT for TURN traffic has broken some proxies, including crashing them. Doing what is suggested here does require discussion with the people that deal with these protocols, both from an operational standpoint and from a security one. This is easily seen as circumventing the will and policies of firewall administrators. If one could get the consent of the proxy for connecting through, for example by having a way to ask for it, that would change the perception of this. That is why the external mailing list was established, because this does need discussion. There is a concern that doing this without consent may simply result in firewall administrators blocking WebRTC outright. So before causing irreparable damage we do need to resolve this.

Harald commented that he put this into his slides to determine whether it should go into the rtcweb-transports document or the firewall document, especially as that mailing list isn't a WG and doesn't have a slot. Cullen stated that he doesn't think it is appropriate in any WG document at this time. Harald responded that he is fine with delaying this discussion until after the presentation on firewalls.

Jonathan Lennox commented on the main questions raised by Harald. On the first, browsers do know how to find proxies and we don't need to specify this.
On the encapsulation, it depends on what the browser is connecting to: if it is TURN or ICE TCP, use their respective formats. It would be good to enable a smart proxy to determine that this is WebRTC.

Justin Uberti agreed with Cullen's comments. Regarding the encapsulation, it is well understood; the proxy is only doing a TCP connect through it. Andrew Hutton commented that he has been talking to people, and the general trend of the feedback has been that this is already happening, for example for WebSockets; this is not necessarily something vastly different. He wants to discuss these issues in the firewall document, which is intended to be discussed in tomorrow's slot. Tim Panton asked whether we have any sense of whether this is useful. Justin Uberti: it works just as well as any other TCP proxy, which is better than one would think; some proxies do chop the connection after a minute or so.

QoS Discussions
---------------
draft-dhesikan is in TSVWG; eventually we will get some recommendations from them. There is one controversial line in the transports document, namely that one can have multiple DSCP values on the same transport flow (5-tuple). The question is how much discussion we need in this document of why you need or should do certain things, like BUNDLE. Or what about other QoS models?

David Black (TSVWG co-chair) commented that draft-dhesikan will be called for adoption this week in TSVWG. The draft is in quite good shape, but someone dropped the ball on it after the Berlin meeting. This has been resolved after Vancouver, but he wished it had been done earlier. There was a meeting of the minds last Sunday, which has resulted in the DART WG charter proposal that has gone out on Dispatch. This will be a short-lived WG that will produce a document regarding per-packet marking in UDP: everything that you ought to know about transport when working up the RAI stack. Be very careful about doing this for anything other than UDP, as mixing of DSCPs can cause reordering. Please take a look at the charter.

Harald commented that he thinks it is good that we finally found a place to provide this guidance. David responded that a definition of what a WebRTC flow/stream is will be needed, and that is likely required to be done in the transports document.

Dan Druta commented that he is happy to finally see this getting traction. QoS is aspirational; there are some use cases where it is more appropriate than others. The transports document does need to contain requirements on the browsers on what to do, and should be explicit about when this makes sense, including turning off bundle. This is related to the API addTrack methods and constraints on them, which will be part of the solution for how we define the relevant flow.

Cullen Jennings wanted to "plus 1" David Black's statements, and added that Dan York has volunteered to hold a pen on the DART document. Regarding the last two bullets on the slides: as long as we can get from the stats API how the different flows (markings, 5-tuple, etc.) are identified. Harald rephrased this as: as long as we require an API upwards to identify the flows, we don't need to put anything else in the browser.

Charles Eckel added that if there are cases where DSCP is not sufficient, there is a draft proposal in the TRAM WG called DISCUSS about a STUN message usage. Please review that document to see if it would provide something useful.
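The "multiple DSCP values on the same 5-tuple" point can be made concrete with a small sketch: the DSCP ends up in the upper six bits of the IP TOS/traffic-class byte, and one common approach is to update the socket option before each send. The DSCP values below are purely illustrative and are not the recommendations being worked on in draft-dhesikan.

    #include <stdint.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    /* Sketch of per-packet DSCP marking on a single UDP socket (one 5-tuple).
     * The DSCP sits in the upper six bits of the IPv4 TOS byte (IPv6 would use
     * IPV6_TCLASS instead).  The values below are illustrative only. */
    enum { DSCP_AUDIO = 46, DSCP_VIDEO = 36, DSCP_DATA = 0 };

    static ssize_t send_with_dscp(int fd, const void *pkt, size_t len,
                                  int dscp,
                                  const struct sockaddr *dst, socklen_t dstlen)
    {
        int tos = dscp << 2;   /* DSCP occupies bits 7..2 of the TOS byte */

        /* Update the socket's TOS right before sending; packets already
         * queued keep whatever marking was in effect for them. */
        if (setsockopt(fd, IPPROTO_IP, IP_TOS, &tos, sizeof(tos)) < 0)
            return -1;
        return sendto(fd, pkt, len, 0, dst, dstlen);
    }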
Congestion Control Issues
-------------------------
Harald reviewed the issue: we have circuit breakers for RTP (hoping for RMCAT), and we have SCTP for data. Based on the earlier discussion regarding the data channel, it is clear that we need prioritization in the client, i.e. a discussion of the distribution of the available bit-rate between components. This is likely the thing that needs the most words in this document. The question is still: what do we need to say?

Michael Tüxen commented that the issue is that the W3C does not define anything about what the priority is. Harald commented that it is on the requirements list, but not yet defined in the API.

Dan Druta commented that one can flip this question around: rather than expressing what is available, express what is desired. This needs work to marry it together with the W3C work on prioritization.

David Black commented that draft-dhesikan is a QoS interface that interacts with congestion control; thus, try to say as little as you can get away with. Harald reflected that yes, the question is how little he will get away with.

Martin Thomson asked whether there is any requirement other than that the implementation tries to do its best with the information it has. Will there be interoperability? Harald commented that what he doesn't want to see is running the same application in two different browsers resulting in one thrashing the audio and the other the video. There needs to be consistent behavior to avoid browser sniffing. Martin commented that someone must provide a draft of some proposal for how to deal with this. Harald agreed and stated that he has the responsibility to provide at least one.

Matthew Kaufman really hopes that we can write down what the client should do; if that is not possible, at least write down that you need to think about this before implementing, because the user experience will be bad if no thought is given to it.

Conclusion: Prioritization needs to be discussed in this document. Harald has the responsibility to provide a proposal for this. Martin asked that a proposal be provided for discussion, rather than just added to the document. Harald responded that any new proposal is open for discussion. Dan Druta commented that it needs a clear reference to the API work.

Consent (Martin Thomson)
========================
draft-ietf-rtcweb-stun-consent-freshness-00
draft-thomson-rtcweb-consent-00

Martin introduced the issue as a coin toss. There are two options for consent: the previously agreed STUN-based one, and, with the mandate on DTLS-SRTP, the potential to use DTLS heartbeats to confirm consent.

Matthew Kaufman stated that the DTLS approach feels more secure and is clearly clever; however, he is a strong proponent of using ICE. Down the road we would like to get better and better transport behavior in the face of changing network paths. This will include doing ICE path rediscovery, which will include convincing implementations to continue to run their ICE engine during the session. Getting there will be easier when using ICE for consent. Justin Uberti agreed with Matthew. Martin Thomson and Eric Rescorla were okay with that.

Conclusion: The ICE direction - draft-ietf-rtcweb-stun-consent-freshness - is preferred.

Eric Rescorla asked what will be done to move draft-ietf-rtcweb-stun-consent-freshness-00 forward. The chairs responded that they will arrange some targeted reviews before any WG last call.
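As background for the choice above: the STUN-based approach boils down to sending periodic ICE connectivity checks (STUN Binding requests) and treating consent as expired, and therefore ceasing to send, once too long has passed without a verified response. A minimal bookkeeping sketch follows; the intervals are illustrative rather than the values in draft-ietf-rtcweb-stun-consent-freshness, and building and parsing the actual STUN messages is assumed to be handled by the ICE stack.

    #include <stdbool.h>
    #include <time.h>

    /* Bookkeeping sketch for STUN-based consent freshness.  The interval and
     * expiry values are illustrative, not the numbers in the draft. */
    struct consent_state {
        time_t last_check_sent;
        time_t last_response;
    };

    enum {
        CONSENT_CHECK_INTERVAL = 5,   /* seconds between checks (illustrative) */
        CONSENT_EXPIRY         = 30   /* seconds without a response            */
    };

    /* Call on a timer tick.  Returns true if a new connectivity check
     * (STUN Binding request) should be sent now. */
    static bool consent_should_send_check(struct consent_state *cs, time_t now)
    {
        if (now - cs->last_check_sent >= CONSENT_CHECK_INTERVAL) {
            cs->last_check_sent = now;
            return true;
        }
        return false;
    }

    /* Call when the ICE stack verifies a response to one of our checks. */
    static void consent_on_response(struct consent_state *cs, time_t now)
    {
        cs->last_response = now;
    }

    /* Media and data may only be sent while this returns true. */
    static bool consent_is_fresh(const struct consent_state *cs, time_t now)
    {
        return (now - cs->last_response) < CONSENT_EXPIRY;
    }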
RTCWEB Minutes March 5, 2014
Chairs: Magnus Westerlund, Cullen Jennings, Ted Hardie
Note takers: Dan Burnett, Matt Miller

JSEP - Justin Uberti
--------------------
Issue 1: MSID and direction interactions

(Justin's slides say:) Two cases:
1. X offers video, Y wants to reject but add its own
2. Video flowing in both directions, X wants to stop remote video

The key is the permanence of inactivity -- this is not a pause, but a termination that allows the media stream resources (transports and candidate sets) to be recovered/collected and m-lines to be reused. The suggestion is to add a new "a=msid-control: stop" attribute. In the first case it would be Y that includes it, in the second case X.

Questions arose around the name of this attribute (msid-control does not refer to the local msid), effects on RTCP (none), the semantic difference between restarting and newly creating a media stream, concerns with legacy behavior, and whether this is really a W3C API issue.

Justin noted that the actual proposal for the msid-control attribute will come in MMUSIC but is being discussed here for completeness.

Consensus call:
* How many people understand the problem? (few hands)
* How many want MMUSIC to solve this issue? (some hums)
* How many do not want MMUSIC to solve this issue? (none)

Issue 2: ptime

Cullen Jennings (CJ): We should put in a max ptime of whatever you are going to send. It will help with interoperability. I don't care if it's 60 or 3000, but you should put in the max ptime for whatever your browser supports. None of this applies to Opus.
Martin Thomson (MT): I agree.
Justin Uberti (JU): This is about what values should be used when sending audio. What you send should only be one of these: 20, 30, or 60.
Speaker: When we have a gateway, I don't want a requirement to strip out values. If the payload wants to specify something, let them.
CJ: I want to receive anything, and FF and Chrome will. 0 is an invalid value.
JL: The 20/30/60 should be in the codecs draft, not here.
JU: So what I've heard is to send the browser's value for maxptime, and for ptime send one of the 20/30/60 frame sizes.
CJ: The max is the critical thing there. I think we should be able to receive anything, but what the browser chooses to send is one of the three values.
JL: It's not known what happens with ptime across BUNDLEs (but that is not for this group).
MT: You probably want to say what ptime can be when it's Opus. It's mostly a case for PCMU, and what's the max in Opus (120). This means the m-line will indicate 120. It's the maximum-maximum.
CJ: It's going to be the minimum-maximum of all the codecs. There are hardware-based codecs that only support 20 or 30.
Speaker: Is this only for the MTI codecs? Why haven't we analyzed all of the codecs?
Chairs: I think we should start with the latest and try to include it.
W3C: I think the codec question is a larger one. There is some discussion around HTML5. Do you intend to liaise over this? It would be good to be aligned.
Chairs: That comment is outside the scope of this discussion.
Speaker: I think we should deal with these things in payload.
CJ: Let's say the payload said that the ptime is 20. This is about putting that in the SDP. maxptime is well defined, so we'd be violating various SDP specifications.
Tim P: Would these values be malleable between create and set?
JU: It would depend on what the browser supports.
TP: When we are adding things, we should think about how this goes up into the app.
Roni: If the offer included both Opus and PCMU, and then PCMU was taken away, you could just check the max ptime.
Consensus call:
* Do you support including a maxptime? (moderate hums)
* Do you not support including it? (very few hums)
Chairs: Someone needs to write the codec draft proposal and the ptime/maxptime proposal and take them to the list.

Issue 3: CNAMEs

The proposal around synchronization is to use the same CNAME for all MediaStreamTracks (MSTs), even though the behavior of non-WebRTC endpoints will be implementation dependent. Some remember a different decision from IETF 88, namely that we agreed to use lip-sync groups. Justin will conform with the RTP usage guidelines and will look into lip-sync groups. Martin expressed concern about using the same CNAME in different PeerConnections. Justin agreed that the question about linkability across PeerConnections is a good one to consider here, and he will take it to the list.

Outcome: More list discussion needed.

Issue 4: Same-Port Bundle Policy

The proposal is to add a new bundle policy value (such as "force-bundle") to allow the initial offer to use the same (non-zero) ports when support for BUNDLE is known a priori, allowing the sender to skip sending a revised offer to clean up the ports after BUNDLE is accepted.

Questions arose concerning what happens if the a priori knowledge is wrong, and whether this is an IETF issue at all (as opposed to a W3C issue). Justin split this into two questions: should we allow rtcpmux-only, and is it helpful to have API support to skip the fix-up offer? On the first question Justin will make a proposal on the list. For the second, the chairs hummed: in favor, only one; against, a few more. Very little support for this.

EKR brought up a question about candidate pooling and trickle ICE; the question is whether you can force no trickle ICE. Justin answered that candidates will arrive asynchronously anyway. Eric will look at the implications for Firefox and maybe submit a proposal. Justin will send how it works today to the list.

Consensus call:
* Anyone see the need for the fix-up offer optimization? (few hums)
* No need? (more hums)

RMCAT update
------------
Bernard, Justin, and Harald agreed to review the RMCAT congestion control requirements.

Security / Security Architecture
--------------------------------
Eric proposed preferring (i.e., SHOULD) PFS cipher suites over non-PFS cipher suites, and asked whether this should be MUST. The belief is that the MTI ciphers for WebRTC will include PFS options, so implementations will offer them.

Consensus call:
* For WebRTC implementations, they MUST offer/select PFS over non-PFS? (large hums)
* Opposed? (none)
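As an illustration of what this consensus implies for an implementation: with OpenSSL, for example, the DTLS context used for DTLS-SRTP and the data channel could be restricted or ordered so that ephemeral (EC)DH suites are offered and selected ahead of static-RSA ones. The cipher string below is only a sketch of the idea, not a vetted recommendation, and would need review against the MTI cipher suites chosen for WebRTC.

    #include <openssl/ssl.h>

    /* Sketch: prefer (here, require) forward-secret key exchange for the
     * DTLS context.  The cipher string is illustrative only. */
    static int prefer_pfs(SSL_CTX *ctx)
    {
        /* Only ephemeral (EC)DH key exchange, no anonymous or NULL ciphers. */
        if (SSL_CTX_set_cipher_list(ctx, "EECDH:EDH:!aNULL:!eNULL") != 1)
            return -1;
        /* When acting as the DTLS server, honour our own preference order. */
        SSL_CTX_set_options(ctx, SSL_OP_CIPHER_SERVER_PREFERENCE);
        return 0;
    }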
Identity changes slides (Martin)
--------------------------------
No comments on Issue 1.

Issue 2: EKR said this needs guidance to apps on what to do with it: either render it in a separate window, or treat it as a click from the perspective of a popup blocker.

Issue 3: EKR said this can occur in browsers. Martin proposed that we do option 3, including multiple fingerprints. He also proposed making a=identity a session-level attribute and having it cover all the fingerprints in use. The chairs asked if anyone wanted to speak against option 3; no one did.

Issue 4: No one stood up to object to his proposal, but it went by really fast.

Issue 5: Martin proposed being able to validate via HTTP POST on servers rather than requiring browser validation. There were questions, so Martin will send this pull request to the list for review there.

Issue 6: The question is how to preserve identity-based stream isolation beyond the local browser and the peer connection, and into the receiving browser, so that it remains isolated there. This is about protecting against the JavaScript running on both ends. A reasonable number of questions came up in discussion. We all agree this needs more discussion.

CHAIRS: Consensus seems to be option #3 (assertion includes multiple fingerprints).

RTP usage
---------
Open issue about encrypting all RTP header extensions (per RFC 6904), rather than just the client-to-mixer and mixer-to-client audio level information. Colin's proposal is SHOULD. Cullen wants control at the JavaScript level of the existing audio level headers and then might consider others. Harald wants the ability to control which headers are *not* encrypted.

CHAIRS: If anyone wants to change the recommendation, do so in WGLC.
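For context, the mechanism on the table (RFC 6904) is signalled in SDP by prefixing the extension URI with the "encrypt" URI in the a=extmap attribute. An illustrative line for the client-to-mixer audio level extension might look as follows; the extension ID value 1 is arbitrary.

    a=extmap:1 urn:ietf:params:rtp-hdrext:encrypt urn:ietf:params:rtp-hdrext:ssrc-audio-level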