Transport Area Open Meeting (TSVAREA)
IETF-88, Vancouver
THURSDAY, November 7, 2013
0900-1130 PST
Evolving the TSV AD role
Emphasize: we are talking about changes over the next year.
The 2013 NomCom already has the TSV AD description.
The 2013 NomCom is already talking to willing nominees.
IESG-level changes: more shepherd involvement, on telechats
Recruiting a TSVarea secretary
We are discussing starting a TSV-ART and soliciting a coordinator for it.
What we need to hear:
What makes you unwilling/unable to serve?
Time commitment, travel commitment, review commitment, management commitment?
We need to make changes to enable more people to volunteer.
- Reducing Internet Latency workshop report (15 minutes + 5 minutes Q&A)
Mat Ford (Internet Society, co-chair)
-- http://www.internetsociety.org/latency2013
Summary of 2 days discussion in London (Sept 2013).
30 submissions
Scope:
Surveys of latency across all layers
Major goal: identify a metric for (access) network latency; develop an action plan to educate industry, identify gaps
Taxonomy:
Sources of latency
Delay between a physical event and availability of data; transmission; processing; multiplexing; grouping
Mitigating sources of latency
Relocation, speedup, dedicating resources,
Discussion:
Be careful and look in a lot of places for potential optimizations and potential conflicts.
LOLA use case: musicians play in different places and are seamlessly integrated together.
Discoveries with LOLA
Only works over high-bandwidth/low-latency links
Incentivizing the market
What is the latency budget
What to measure:
Minimum upload latency
Some suggestions:
Simple, easily defined tests
Quantifying queue-related latency
More discussion is clearly needed:
What is fair anyway?
The need for low latency is application dependent and applies to all streams
What about ECN?
Potential benefits, but the current semantics are not good enough.
Delay based congestion control:
Discussion:
Fundamentals:
Local content hosting
Peering
More tools in the hands of end users, generating more data, is already preferable
Roles for regulators?
Bob Briscoe:
“A dedicated network is better for latency” is not a good message for the public.
Larry Masinter: the measurements you are talking about do not align with protocol design
Jim Gettys:
remember when the network was only one element of the system?
Now latency comes from everywhere
Roberta (Google):
The measurement doesn’t say anything about user experience.
??: please watch the video
Matt Mathis:
Bandwidth doesn’t always help latency. An optimized CPU doesn’t help the network, and can actually hurt it.
- Google QUIC protocol (25 minutes + 10 minutes Q&A)
Jim Roskind
- effectively replaces TCP and TLS out from under SPDY (the predecessor of HTTP/2.0)
- pushes the transport all the way into application space
Overview: predecessor of HTTP/2.0
Why is SPDY fast?
It is all about latency (time till response)
SPDY multiplexes requests over one TCP connection
Send multiple requests
Issues: SPDY runs over TCP
Lose one SPDY packet and all the streams wait: head-of-line (HOL) blocking
SPDY may be slow to connect
TCP connect may cost one round-trip time (RTT)
When one packet is lost, all sessions stall
TCP connection: RTTs in some countries are 400 ms. A lot of time goes by before any data can be exchanged.
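The HOL-blocking point above can be illustrated with a toy model (illustrative only, not QUIC or SPDY code; the segment layout is made up): a TCP-like receiver delivers data strictly in sequence order, so one lost segment stalls every stream multiplexed behind it, while per-stream ordering stalls only the stream that lost data.

```python
# Toy model of head-of-line blocking: segments are (seq, stream_id) pairs.
# A TCP-like receiver delivers strictly in sequence order, so one lost
# segment stalls every multiplexed stream; with independent per-stream
# ordering, only the stream that lost data stalls.

def unblocked_streams(segments, lost_seq, per_stream_order=False):
    """Return the streams that still get all their data despite the loss."""
    lost_stream = dict(segments)[lost_seq]
    blocked = set()
    for seq, stream in segments:
        if seq <= lost_seq:
            continue  # arrived before the hole, already delivered
        if per_stream_order and stream != lost_stream:
            continue  # independent ordering: other streams keep flowing
        blocked.add(stream)  # single ordered pipe: stuck behind the hole
    return {s for _, s in segments} - blocked

segs = [(0, "a"), (1, "b"), (2, "c"), (3, "a"), (4, "b"), (5, "c")]
# Segment 1 (stream "b") is lost:
print(sorted(unblocked_streams(segs, 1)))                         # []
print(sorted(unblocked_streams(segs, 1, per_stream_order=True)))  # ['a', 'c']
```

With one shared ordered byte stream, no stream makes progress past the hole; with per-stream ordering only stream "b" waits for the retransmission.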
QUIC goals:
Deploy in today’s internet
When one stream blocks, you don’t want other streams to be blocked.
Mobile is taking over the world.
QUIC Success criteria:
Two paths:
QUIC makes headway reducing latency
Field Data, plus applications needs drive development
Lars: Your QUIC is a closed system. Are you going to bring the protocol to the IETF?
Jim: It is implemented in Chrome. It is open source. To be discussed with Firefox.
??: happy to see it being presented. I hope it will come to the IETF.
Lars: that makes me happy.
Can we really deploy a UDP-based protocol in a similar fashion?
Stewart:
Jim: we have to go to a reserved port
Stuart Cheshire (Apple): Chrome typically runs on home computers, not on mobile devices.
Jim: It is hard to dictate which browser people use, but there are people using this on mobile.
Michael Wetzel: UDP
Jim:
NAT unbinding: how unfriendly is it?
How much idle time until unbinding?
Peak around 80%.
QUIC times out after 30 seconds.
Lars: have you done it for TCP?
Jim: no, I haven’t done it for TCP.
How does QUIC achieve 0-RTT connection cost?
Congestion avoidance via packet pacing:
We worried about melting the internet.
TCP CUBIC is the baseline
Working on pacing “plus” TCP CUBIC
Also monitors inter-packet spacing, to measure queueing in the network.
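The pacing and spacing measurements described above can be sketched with toy arithmetic (the numbers and variable names are hypothetical, not QUIC's actual algorithm): a paced sender spaces packets at packet_size / rate, and if the receiver observes consistently wider gaps, the extra spacing approximates delay being added along the path.

```python
# Toy pacing arithmetic (illustrative values, not QUIC's actual algorithm):
# the sender spaces packets evenly at packet_size / target_rate; wider
# inter-arrival gaps at the receiver suggest queueing along the path.

PACKET_BYTES = 1350       # assumed UDP payload size
RATE_BPS = 10_000_000     # assumed 10 Mbit/s target send rate

send_gap_s = PACKET_BYTES * 8 / RATE_BPS       # 1.08 ms between sends
recv_gaps_s = [0.00108, 0.00131, 0.00126]      # hypothetical receiver deltas

# Extra spacing beyond the send gap approximates queue-induced delay.
extra = [max(0.0, g - send_gap_s) for g in recv_gaps_s]
queue_signal_s = sum(extra) / len(extra)

print(f"send gap {send_gap_s * 1000:.2f} ms, "
      f"queue signal {queue_signal_s * 1000:.2f} ms")
```

The point of the sketch is only the shape of the computation: evenly spaced sends give the receiver a baseline against which added network delay becomes measurable.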
Does packet pacing really reduce packet loss?
We see recognizably less packet loss.
Loss is predominantly determined by the access link.
Jim Gettys: it encourages good behavior on the internet
How might QUIC connection survive a mobile network change?
Possible to use second IP address.
QUIC can handle mobility better than traditional TCP does.
??: QUIC is more like 4 tuple
?? it is not globally unique
Jim:
??: can you talk about the trade-off if it is only 4-tuples?
Jim: it is nice to return to the same site.
How can a Forward Error Correction (FEC) packet help?
The 5% overhead of a 1-in-20-packet FEC is more than compensated for by the reduction in retransmissions.
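The arithmetic here is simply one parity packet per 20 data packets (1/20 = 5%). A minimal XOR-parity sketch (a generic FEC illustration, deliberately simpler than whatever scheme QUIC actually uses) shows how one such packet lets the receiver rebuild a single lost packet without waiting a retransmission RTT:

```python
# Toy XOR-parity FEC: one parity packet per group of 20 data packets
# (1/20 = 5% overhead) recovers any single lost packet in the group.
from functools import reduce

GROUP = 20

def xor_parity(packets):
    """FEC packet: byte-wise XOR of every packet in the group."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover(received, parity):
    """Rebuild the single missing packet by XOR-ing parity with survivors."""
    return xor_parity(received + [parity])

data = [bytes([i] * 4) for i in range(GROUP)]  # 20 dummy 4-byte packets
parity = xor_parity(data)
lost = data.pop(7)                             # simulate losing packet #7
assert recover(data, parity) == lost           # restored, no retransmit RTT
print(f"overhead: {1 / GROUP:.0%}")            # -> overhead: 5%
```

The trade is constant 5% extra bandwidth against the full round trip a retransmission would cost whenever exactly one packet in the group is lost.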
Soliciting people to contribute: proto-quic@chromium.org
Matt:
I was invited to participate in this project, but I turned it down. I think the protocol should be independent of applications. At the end of the day, those things come back to TCP. I support this project because it is an experiment for the internet, not a replacement for TCP.
Dan Frost (Cisco): this work is very interesting. My concern is your intention for how to develop it. Do you intend to publish this as an IETF draft?
Jim: being a newcomer, I am not familiar with the process. The answer should be yes.
??: I hope we can learn from this. Here are the experts.
Dan Frost (Cisco): there are a lot of avenues for you to get feedback on this protocol. You can publish it as an independent experimental ID.
Jim: we have a server side demo version.
Michael Ramalho (Cisco): I am the one who asked Google to present this. What you presented has been developed. My main concerns: this is a massive layer violation, but it is beneficial to end users. Second: I am scared of the congestion control. I encourage you to work with other WGs.
Karen Nielsen (Ericsson): everything presented here has been discussed in some IETF WGs. Maybe you should consider referencing the items being standardized by the IETF.
??: we are not claiming that what we do is new. However,
Jim: we did look at HTTP; we believe we can prototype something faster than bringing a new protocol to a standards body.
Karen Nielsen (Ericsson): bring your protocol to the IETF to have more eyes check it.
- Evolution of IETF Transport Protocols (60 minutes) by Spencer D
-- http://www.ietf.org/mail-archive/web/tsv-area/current/msg00973.html
Purpose: check whether the current IETF transport protocols are up to speed with what is needed by today’s hosts:
The RUTS BoF at IETF 43:
Michael: congestion control becomes more important in the transport layer
Karen Nielsen (Ericsson): SCTP is just TCP without the header. It is a streaming concept. It is TCP with congestion control.
??: SCTP is more than TCP with congestion control.
Matt:
Eric Rescorla (EKR): WebRTC uses SCTP over DTLS for data channels
Proposed optimization: reduce inter-layer redundancy; combine inter-layer messages (piggybacking).
Work Items: TLS
Matt: Layering optimizes for clarity; mixing layers is for optimizing performance.
Wes Eddy: Saratoga Update
Saratoga is in operational use for disaster monitoring: downloading pictures from satellites.
400 Mbit/s downloads
Cisco has an implementation of Saratoga; they funded adding congestion control to Saratoga.
Background to Saratoga: a file transfer protocol.
NASA and Cisco co-developed the protocol.
Characteristics relevant to evolution of IETF transport protocols.
Make it work on more paths.
Works with high bandwidth asymmetry.
It is orders of magnitude faster than TCP.
You can scale up to triple computers.
Evolving IETF transport protocols:
TCP, UDP, SCTP, DCCP: it is not about the protocol but about what is in the protocol, the end-to-end capability; it is not about layering.
Saratoga is not a clean transport-layer protocol, but it has great performance.
Scaling to high throughput and low delay is possible, as Saratoga shows.
Carsten Bormann: there is a whole WG (RMT) working on transport protocols. A lot of the concepts developed there can be building blocks.
Jana Iyengar: Some thoughts from a disarrayed mind on the evolution of transport abstractions
What is our purview: new features or bug fixing?
Do Apps need new abstractions?
Apps (things that humans interact with) today
What is in the gap?
Common design patterns
Performance optimizations
New transport
We do a lot of work in mechanisms, as building blocks.
Karen Nielsen (Ericsson): what do you mean by the second bullet (“Improvements to congestion control seem hidden under TCP’s bytestream API”)?
Jana: this is to point out that most of the work is all around TCP.
Matt: there is a fundamental limitation in TCP: there is no timestamp. A lot of little details in TCP can’t be fixed.
Jana:
??: looking at the stack, I can say that a lot of the problems are that we build excellent mechanisms but hide how applications use them. The API to those mechanisms should be the fundamental thing to look at. We invent something new, but applications can’t see it.
Yuchung Cheng: the congestion control we build is ill-fitted for applications. There is no good congestion control mechanism (algorithm) that browsers can use.
Jana: Our network stack is hiding things, because we focus so much on the bits on the wire.
Roberto Peon: There are a lot of streams, hundreds of them. Our current scheme doesn’t consider them. Hiding the mechanism forces us to create other mechanisms.
Jim: There is no way for people to test whether things are done correctly. A lot of designs are not done in a unified way.
Tcpcrypt: Andrea Bittau (Stanford):
Maximize security for everything.
Zero configuration, works with NATs
Integrates with app-level authentication
High performance
Avoids double encryption
Does it matter if I open thousands of connections?
??: If this is good, people won’t need to use TLS.
Andrea: this is not intended to replace TLS. If
Brian Trammell: how many lines of code do I need to change to use it?
Andrea: just one line
Dan Frost: Is there an attempt to produce an IETF draft for this?
Andrea: the intention is to have an IETF draft. We want to
Dan Frost: we would like to see what you have done and have the IETF work on it.
Dan F: What does it look like from the implementation point of view? Is it a kernel implementation?
Andrea: you don’t have to change the Apps.
??: in the Multipath TCP group, we are working on security. This came up as a good candidate. We would like to work on this.
Andrea: I would like to hear more.
Philip (BT):
Yuchung Cheng: Can you compare this with QUIC? When we talk about low latency, QUIC is a good example.
Andrea: QUIC is a clean slate, but we are working with existing applications, i.e. using TCP.
Rut B: the intent of QUIC is to make it work. That is why we implement it and try it. We don’t want to leave it as just an experiment.
Jim: I believe that if more information is given to the transport layer, it can do better. It might be possible for transport to switch to a different port depending on the underlying network and its performance.
Rut B: when people ask “why don’t you use this?”: the intent of everyone here is to enhance the user experience. There is also a need for work from transport down to the link layer. We don’t have well-established mechanisms across layers; e.g., some transport layers don’t require the lower layer to retransmit.
Yuchung Cheng: it is all about passing information across layers. If we have that information, the transport layer can be much faster.
Brian Trammell: more on interfaces from applications to the transport layer, and from the transport layer to lower layers. We need to take on that work.
Brian Trammell volunteered to bring in more work in this domain.
??: there is a lot of emphasis on zero-RTT startup. That problem might not be as important as it was.
Jana: We should eliminate the latency. Don’t be afraid that congestion control will melt the internet. People do congestion control. TCP CUBIC is the congestion control running today. If we document those, then we are doing our job.
Lars: CUBIC is a terrible example because there is still no document describing it. It is quite exciting to see so many things happening in the transport area. Just dropping the code is not a solution; as a community, we need to do it together. It is OK to experiment in Chrome. How fast can you go without burning CPU cycles? Think about 100G.
Jim: preliminary experiments with UDP don’t show a significant difference from TCP.
David Black: would like the ADs to look at the process. The problem has been what interoperability of an API means; this should go to the IESG. That general issue needs IESG attention, but specific API work can go on in WGs while the IESG sorts out the process concern.
Spencer: make sure to continue the discussion on email and in the hallways on these topics.