Agenda:
1. Welcome (Olaf Kolkman)
Good evening. Just a reminder that the note well applies as usual.
Welcome to the technical plenary. We have jabber for remote
participants, and the plenary materials have been posted to the
IETF site. When asking questions, please say who you are, and
speak slowly.
An observation - outside in the meeting hall you see this mosaic,
and if you zoom in, something weird there... (see slides)
We will start with the IRTF chair report, then have the IAB report,
and then presentations on the network neutrality debate viewed in a
technical, IETF context. What are the technical things we can learn
that influence IETF work? Marcelo will be the moderator for the open
mic session, and we will stop at 10 past seven for the regular IAB
open mic.
2. IRTF Chair's report (Aaron Falk)
Hi, I'm Aaron Falk, the chair of the Internet Research Task Force.
There were five research groups that met this week (see slides).
The DTN RG is having a disconnect-a-thon, where they are working on
interoperability. They have a hodgepodge of different
implementations there. This morning the DTN RG chairs met with the
IAB to review the status of the research group. One topic that
came up is whether to standardize this work in the IETF. There is
not an answer yet to this question, but the topic came up.
Some IRTF status - we are working on getting the IRTF RFC stream
defined and the copyrights defined. The goal is to maximize
commonalities with the IETF, but also permit derivative works if
that is what authors want. We are still trying to define that for
the non-IETF document streams. Also, there are a few drafts about
document formatting that are part of a 5 document cluster, which
will hopefully be cleared before the next IETF.
There is a new research group on PKI, which Paul Hoffman is
chairing. They plan to look at what have we learned in PKI, and
what might we do differently. The charter page is up, and the group
expects to kick off in the next few weeks.
There is some interest in network virtualization, and there have
been bar BOFs in this area for about 1.5 years. A link is in
previous IRTF reports.
As for the overall health of RGs, most are pretty active. The E2E
RG is so quiet they weren't on the list last time. Like last
plenary, rather than give a list of all RG bullets, I will look at a
few in more detail so folks can get a sense of what these RGs are
doing.
The HIP RG is focused on how the IP address serves two roles in the
architecture today: identifier and locator. That has some
benefits, but a shortcoming is that TCP and some applications bind
to the address, so it makes mobility hard. There are several
different IETF activities looking at this problem.
HIP does something different, in that it has a host ID. Transport
protocols or applications bind to the HID instead, which eliminates
name overloading. There is an IETF WG working on this, and HIP is
now starting to ship in some commercial products. The WG was
chartered to standardize the well understood parts, while the RG is
looking at more open issues, NAT traversal, APIs, support for
legacy apps, etc.
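The identifier/locator split described above can be illustrated with a small sketch. This is not the HIP wire protocol; the names and structure are invented for illustration, but it shows the key point: transport state binds to a stable host identifier (HID), so a change of locator (IP address) does not disturb the connection.

```python
# Illustrative sketch only (not the HIP protocol): transports bind to a
# stable host identifier (HID); the locator (IP address) can change freely.
class HostMapping:
    def __init__(self):
        self._locators = {}  # HID -> current IP locator

    def register(self, hid, locator):
        self._locators[hid] = locator

    def move(self, hid, new_locator):
        # Mobility: only the HID->locator binding changes; transport
        # state keyed by the HID is unaffected.
        self._locators[hid] = new_locator

    def resolve(self, hid):
        return self._locators[hid]

m = HostMapping()
m.register("hid:alice", "192.0.2.1")
conn_key = ("hid:alice", 443)        # transport binds to the HID, not the IP
m.move("hid:alice", "198.51.100.7")  # host moves to a new network
assert m.resolve(conn_key[0]) == "198.51.100.7"  # same connection key resolves
```

With address-bound transports, the move would have broken the connection; here only the mapping layer changes.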
They have some RFCs published, and are also looking at possible
extensions. It is a broader solution than any one particular
problem. The WG is fairly narrow, but the RG is taking a broader
view and working on experiment reports. A little status: they are
looking at how HIP can be used for the internet of things, sensor
networks, HIP and DHTs, mobile routers. These are documents that
are in progress right now. There is an RFC on NAT / firewall
traversal and HIP in different environments that provides some
insights and lessons learned. There is some interest in helping
HIP progress along the standards track. At this point there are
several implementations, and you can download and run HIP on your
own machines.
Turning to the ICCRG - it is well understood and long discussed
that TCP congestion control is not ideal for all environments, or
all applications. There is lots of work on high-speed transfer, to
better make use of high capacity links. There are modifications to
the Van Jacobson algorithms that have been developed. This is the
area of work of the ICCRG. It is an expert community that helps to
inform the IETF when making changes to the standard congestion
control. They have tried to pull together the set of congestion
control RFCs and summarize them. The other thing is to identify
the known issues with existing congestion control mechanisms and
catalogue them.
There is also a design team that is looking at alternative models
for what the goals of congestion control should be, that is, what
*should* sharing look like beyond TCP fairness. This is rooted in
the realization that if any new proposal has to compete fairly with
TCP, then you can never do better than TCP. Is there a better
answer for how resources can be shared? This week they talked
about implementation reports, alternative algorithms, and a report
from the TCP friendly design team.
3. IAB Chair's report (Olaf Kolkman)
This is the IAB report for IETF 75. Some information about the
IAB: we are a chartered activity, and you can find information on
the home page which also has an RSS feed. These are some of the
docs we publish, and we maintain meeting minutes - those are
public. We have invested a great deal of energy getting up to date
with minutes, and Dow Street spends a lot of time getting
this done.
Document activity...there is Principles of Internet Host
Configuration, Design Choices when Expanding DNS, and other docs
that have been submitted. Finishing up Streams, Headers, and
Boilerplates and the RFC Editor Model.
Some ongoing work...there is Thoughts on IPv6 NAT, which has not
seen any updates yet, but we are actively working on that. The
P2P Architectures document is in a call for comments, which is
equivalent to "last call" for IAB stream so please provide
feedback. Another active doc is draft-iab-iana. A doc that has
not had internal attention yet is the IP Model Evolution draft.
A new draft is the Uncoordinated Protocol Development Considered
Harmful draft. The goal is to demonstrate the importance of
coordination between SDOs, and there is a second goal of describing
MPLS-TP. The text contains warnings about using T-MPLS in
production networks. There is a call for comments on that as well.
Many do not realize that this is similar to "last call" - we have
extended the call for comments until after this IETF meeting. I
expect a new rev of this doc shortly based on the comments we have
received.
We had our retreat in Ashburn VA, which is the middle of nowhere,
but near to *hot* Reston. At the retreat all IAB members wrote
down what their interests were in the IAB context, and we spent 1.5
days on these various proposals. From those we identified three
things for an IAB work plan, and expect to produce output in those
areas. That output could be more concentrating on motivating work
in the IETF context, it could be documents, or in the form of
plenary sessions or workshops.
The three items are:
1. v4/v6 co-existence
2. security of the routing and routing control plane
3. internationalization issues with DNS and applications
This last one is not IDNAbis, but more in applications and how they
interact with the DNS and use various encodings.
In the area of inter-organizational coordination... one IAB
responsibility is to appoint an ISOC BoT member. We confirmed Eric
Burger for this, and he replaced Patrik Faltstrom. And since
Patrik now has more time on his hands, we asked him to be liaison
to the ITU-T, and we are grateful for that role. Patrik replaces
Scott Bradner, who has defined this relationship, and been the
liaison for as long as the role existed. Barry Leiba has been
appointed as liaison for MAAWG.
Other inter-org stuff... we responded to the NTIA NOI on upcoming
expiration of JPA with ICANN. What is most important is the role
of the IETF in the context of the JPA and the IANA protocol
parameters. There is a link on the slide to more information.
We also sent two liaisons to the ITU-T SG2 related to ENUM: a heads
up on Infrastructure ENUM and a status liaison. There was a request
to make that a permanent recommendation. So, we spend a lot of
time on organizational bits.
The RFC Editor Model defines that the IAOC is responsible for
selecting the RFC production center and publisher, the IAB is
responsible for appointment of the RSE and the ISE, and a body
(RSAG) will assist with that. We have appointed those bodies, and
we are looking for nominations for the RSE and ISE.
Here are the members of the advisory committees (see slides). We
had a call for nominations, and got fairly little response, so the
call for nominations has been extended to 8/15. I urge you to look
at the materials, and think who could take on these positions.
There is a lot of material on the links.
We have had no appeals, and that is it.
4. Network Neutrality; placing the debate in IETF context
Olaf: This is a session to inform the IAB and community on the
issues surrounding network neutrality, and that is why IAB will be
in the audience. The IAB as such does not have an opinion as of
now, we are being informed.
4.1. Introduction (IAB - Marcelo Bagnulo)
Marcelo: We're going to have a debate on the network neutrality
topic. The goal is to present the debate to the community - what
are the issues... The main goal is to try and understand how this
affects the work that we are doing here. Can the IETF do anything
to help in this area?
There are certain non-goals for this session. It is not a goal to
try and find out if network neutrality is a good or bad thing, or
determine legislation. It is more to find out what the IETF can
do.
Barbara will give the overview, Mark will discuss some implications
on IETF work, and then we will have 40 minutes of moderated
discussion. We are looking for questions for the presenters about
how the network neutrality debate affects protocol development.
What we don't want to hear is opinions of whether network
neutrality is a "good thing" or "bad thing".
As for the role of the IAB - we are currently trying to understand the
problem. The IAB as such does not have a position on this (as a
body).
4.2. Overview of the network neutrality debate (Barbara van
Schewick)
Barbara: Hi, I am glad to be here. The goal is to figure out how
network neutrality affects what the IETF is doing. But what is
network neutrality? You get very different answers. For example,
if a network provider can offer QOS, or if content
providers can be charged for better transport, etc.
It originally emerged as a result of technological change, first
based on the end-to-end argument. In the last 10 years, providers
now have greater control over the traffic on the network. This is
a change in the balance of power from the user, and the debate is
largely about if this shift in control is good or bad.
In the beginning it was an easy debate - should providers be allowed
to block telephony, or should AOL / Time Warner be allowed to block
different portals? Then the debate evolved into more and more sub-
debates. That is why there are so many views, and the positions
themselves have become much more diverse.
For example, a network neutrality proponent would say "you
shouldn't block", but today it is more complex, a mix and match of
positions in these different questions. This is nice for academics,
but it makes it difficult to have a debate. The first goal of this
talk is to give us a framework for having the debate, to help make
sense of network neutrality proposals. I use this framework, and I
hope that it will be useful to you when thinking about what a new
proposal may mean.
For the IETF, you need to understand the main positions, and the
regulatory proposals, but the goal here is not to make a value
judgement. This talk is meant to be a description about the
debate.
This is the framework... The first question is "does this proposal
prohibit network providers from blocking applications?" This is
really at the core, the glue that holds the network neutrality
proponents together. Networks are or aren't allowed to block
traffic on their network. In this case, "networks" are defined in
different ways. For example, Vodafone over telecom
infrastructure. There are different views of whether these rules
should apply to access networks, or backbone networks, etc. The
rule is sometimes framed as "user rights" - for example, the
internet policy statement from the US in 2005. That sort of implied
that if I can access the lawful content of my choice, then the
provider cannot block my content. There is the word "legal" in the
policy statement. That is a deliberate choice, at least in the US
people only talk about blocking of "legal" content.
There is a second constraint where blocking is motivated by a
network provider interest, but not blocking for political reasons.
In the great scheme of things, the state depends on this (i.e., free
exchange of ideas). But the debate right now is really about
blocking of legal stuff for ISP interests.
Do we really need such a rule? There are two questions... do
network providers have an incentive to block? If not, then you
probably don't need regulation. If they do have an incentive, if
it's beneficial, then you don't want them to block.
There are different views. Do they have an incentive to block?
Not generally, since more applications make the network more
attractive. They can raise the price of internet service and get
more customers. But, there are exceptions to this general rule. A
provider may want to block certain applications to increase their
profits. This is pretty obvious, ones that threaten their
traditional revenue (e.g., traditional telephony). There are lots
of examples of this kind of blocking, in the US, Mexico, and Saudi Arabia.
This is a current issue in Europe, where providers prohibit
internet telephony over mobile broadband. Some may think this is
yesterday's story, but it is similar to cable and video over IP.
Now the telephony providers move into that space, it threatens the
ability to make profits there. We haven't seen a lot of that in
practice. There are some cases in Korea, but many expect that this
will be the next big thing in this space.
And then there are the less obvious cases, such as advertising
revenue, so you can get more people onto your portal (e.g., in AOL /
time warner merger) - using technical measures to keep people on
their portal longer. An insight is that you don't have to
monopolize the application market: even if you block and a competing
application stays on the market, blocking can still be profitable.
Providers might block because it lets them segment the market. For
example, Comcast offered basic service (no VPNs), and also premium
service that allows you to use VPNs. So you pay more for "the
right" to use the VPN. These are often professionals who
telecommute, so they may be willing to pay more. Structuring it
this way allows for higher profits. The downside is that a regular
user cannot use VPN for private reasons.
Two more cases - the provider may take issue with certain content,
e.g., content that threatens their business interest. The most famous
example of this was Telus (Canada). The company was in a labor dispute
with their union, and they blocked the union website. They said
it was threatening employees and did not want this to happen.
Another example would be a provider who adopts a content policy,
such as the Apple iPhone store, where they reject applications they
feel are controversial.
And finally, providers block traffic to manage bandwidth on their network.
Under the flat rate, people use the network more, increasing
congestion, raising the cost for a provider but with no more
revenue. It may be cheaper to block rather than increase capacity.
There is lots of debate on this.
There is a dispute as to whether all of these apply. Network
providers may be more or less open on their incentives to block.
Even if they want to block, they might think about it differently.
Net neutrality proponents think about developing new applications,
and the risk of being blocked reduces the incentive to innovate.
From the user perspective, users may not be able to use apps if
they are blocked. People think about application innovation - the
provider gets an advantage because their own applications are not
blocked.
Free speech is the final consideration. We like that everyone can
get access to whatever content they like, a variety of sources
which should improve democratic discourse. A provider can reduce
this.
For the net neutrality opponents' perspective on content blocking,
one camp says "never do that", while another camp says "maybe we
will, but that is a good thing". Sometimes described as similar
to the editorial function of cable provider or newspaper.
"Ultimately there is so much info, it is fine if the provider
serves an editorial function."
Back to the three motivations - they may block to increase profit,
they may say "there is only a subset of this behavior that is
problematic" - that is, anti-competitive in an anti-trust sense. And
even if a competitor is harmed, that is not necessarily "anti-
competitive" since you usually need a dangerous probability of
monopolizing before legal boundaries are crossed. In other words,
anti-trust rules may say it is still ok.
This leads to an important question. For the people in favor of
network neutrality, this is important - there is a position that
even if something is not fully excluded, it is still bad even if
competition as an "abstract value" is still alive.
So, why treat the internet differently? How is the situation
different than the supermarket that only stocks some kinds of
chocolate? The idea is that the internet *is indeed different*.
It is a technology like electricity, or the steam engine, that can
be applied across the economy. We know that these are an engine
for growth, but not just in and of themselves - it is what you
can do with the network.
So, applications are the tool to create the value. You can show
that finding out how to use a general purpose technology is really
hard, so this hinges on people finding new uses for the internet.
This is what innovation is about.
That is one of the arguments. The other is that there is a
difference in the way people use the internet. For example, an
entry on wikipedia may help someone who I have never met. So
there is potential value that people create when they use the
internet that they may not themselves capture. That is one of the
other stories in this space.
Even if you think the internet is different, one would still need
to go to a whole cost-benefit analysis. Some people say, if there
is competition, then do we need regulation? For example, let's say
BT starts blocking a website, then a user can switch to a different
provider. This means BT will not block in the first place, since
competition will discipline providers. If you have choice, where
physical network providers are required to offer their network to
different providers, then this might not be bad (like in Europe).
Or if you can add competition, that might be an alternative to
regulation. But this might not always work, since all ISPs may
block (e.g., all French ISPs block IP telephony).
The second factor is switching cost. You can go to a different
supermarket, but it is harder to change your ISP. It depends on
how easy it is to switch. For example, think of switching your
email address. In 2005, AOL still had people who paid $15 just to
keep their previous email address even though they were not using
the service. What you think about this issue can drive whether you
think there needs to be regulation. If you favor competition, you
could require disclosure. If you think competition is not that
effective, you might favor regulation.
For the stuff most relevant to the IETF... once we know about
blocking, there is the question about discrimination. I see
network providers providing QOS, blocking spam, handling DOS
attacks, etc. Another group might say "some of this appeals to us
as well". There is another part of the debate that says you can't
block, but you can slow down traffic. It becomes easy to get
around the blocking, since you can't tell where the degradation
comes from. Some degradation might be good, some bad.
So, how do you figure out which is which without impacting internet
evolution in the process? There might be unforeseen consequences,
then you might say "let's not touch discrimination". But if you
think you can get it right, then you might try to come up with a
rule that tries to differentiate the good from the bad.
The first rule is QOS - sell as a service, and you do that even
when there is no congestion (keep congestion management separate).
Everybody who wants to come up with a rule finds it is hard. The
definition of QOS is treating packets differently - is this
discrimination? There are people who say you begin discriminating
as soon as you start treating packets differently. There is money
in the US stimulus that says you can't offer QOS (if this
legislation is broadly adopted).
Others say that this kind of stuff is needed, and don't want to
constrain the evolution more than necessary, and they try to come
up with a balance. There are two approaches - "like" treatment
where email is different than telephony, but Skype is not treated
differently than Vonage. The second is more subtle - where users
signal the kind of service they want. The approaches offer
different trade-offs.
The first protects application competition, since everyone has the
same chance to get QOS, but the choice the provider makes may not
always be the right choice for the user. Sometimes I may really
care about the quality of my internet telephony, but other times
not really care. However, many net neutrality people who support
QOS would probably think either of these would be fine.
The next question is, if I offer QOS, can the provider charge for
it? Some say no, and the basic idea is that people are worried
that if you charge for it, the provider would then degrade the
baseline service. The problem is that if you can't make money
from it, why make the investment? Maybe there are other solutions,
for example, the EU says "if this doesn't work, then regulators
would set minimum standards" to control the baseline service.
A second option is to charge only the end users, but not the
content providers. That seems kind of counter-intuitive. In the
past, people have not charged content providers for QOS if not
charging their own customers. Maybe that is a good model for the
future - the cost of a newspaper is shared between advertisers and
customers.
One problem is looking at the history of the internet, the
innovators are students, people in their garage. They were not
necessarily able to get funding. So what happens if you increase
the barrier to entry? An established provider may be able and willing
to pay, but not others. Non-profits are also concerned about this. How
do you protect low-cost innovators?
There are proposed exceptions. The first is always for security.
It is interesting to me that there is a divide between the policy
people - "you can block if it is security", but network people say
it "might be hard to tell the difference". Should users have a say?
This is an area where network engineers can contribute to the
dialogue of regulators.
Finally there is congestion. If networks are evolving quickly, it
is hard to come up with rules that provide equality for all
customers, and so regulators should not try to do this.
Competition supporters would say that competition would take care
of this. In Canada, some testify that customers don't care about
network management, so a provider could disclose, but the customer
won't care. Maybe they would care about blocking or slowing down,
but compared to different reasons for blocking, the end result is
not really different. If I want access to content, it does not
really matter if the blocking is due to congestion management or
other reasons. So, trying to find the middle point that maintains
user choice. Within this view, these are the practices that are
"clearly ok".
There are lots of trade-offs. I have seen that providers will
usually argue "if I can't do this I will make smaller profits, so I
will deploy less infrastructure." The other case is the limiting
of network innovation - this argument vs. application and user
control. The choice will end up with very different results, and
the trade-off needs to be viewed in relation to the choices made
beforehand.
We have seen the framework. Some net neutrality proposals would
make QOS impossible. There are usually exceptions for
security. A key question - do we want regulation, or is
competition sufficient? Could we use over-subscription ratios?
And the final question - what can the IETF do about it? I have
seen people who try to protect non-discrimination, but how do you do
that without being too restrictive? Should like apps be treated
alike? It can be difficult to determine if two apps are similar
enough.
On the other side, user choice looks very attractive. For example,
Juniper technology that allows users to signal what they want.
Some regulators (Canada) are interested in this. Does the IETF
offer technology that would allow this kind of regulation, or is
this just proprietary right now?
4.3. Implications for protocol design (Mark Handley)
Mark: I am trying to figure out why the IAB asked *me* to talk
about network neutrality. It may be that I simply didn't run
away fast enough, or perhaps because I am well known for thinking
that net neutrality is not that interesting.
However, while working on this I realized there are a bunch of
technical issues that I care about deeply that have an impact
here. What can we do to make the debate go away? Why should the
IETF care about network neutrality?
Much of the debate concerns economics and legal elements - we are
not good at this. We have both sides of the debate present here.
The issues are completely different in different countries, and our
technologies have to work everywhere in the world. So there are
lots of reasons why the IETF shouldn't care, but some reasons we
should.
If you are designing network protocols, you should have read the
paper "Tussle in Cyberspace". It is about how to design protocols
when users will have differing views.
We have seen the tussle between users and application writers and
those who run the network. Accommodating this tussle is crucial to
evolution. This paper is about how to design stuff.
First, there is no such thing as a value-neutral design. When as
an engineer you design a protocol, you are shaping the way the
legal, economic, and political space can play out. Don't assume you
can design the answer - you are designing the playing field.
An example, when we started on SIP in 1996, it used proxies for
user location. We didn't specify what the proxies did, just
specified enough to allow for interoperability. We now find that
SIP proxies are used for all sorts of stuff. Proxies became a
control point where this tussle plays out, with different ones in
different points.
The good thing about this design is it allowed the tussle to play
out, and is a good example of designing a playing field. It was a
design that accommodates different policies in different places.
On net neutrality, and Deep Packet Inspection vs. banning things, we
don't want to be at either end of this space:
- blocking/limiting from/to certain destinations
- blocking/limiting from/to different applications
Destination neutrality is usually not an IETF issue. We have BGP
policy. The other part is in the area of security, and we are
pretty bad at this. We don't have a good story for DDOS, spam,
etc. But this is not likely to be network-neutral, and we would
need the freedom to solve these problems in the future.
Also governments might block content because it is "illegal". It
is not clear that this is a technical question. There is
technology like TOR, but I am not sure this is an IETF focus (or
should be).
The other part is application neutrality, and this is firmly in the
IETF. The network is not just "all packets", and it hasn't been
for a long time. There is deep packet inspection, prioritization,
etc. There is a lot out there already which is not treated "just
as packets".
Why is all this stuff out there? Have we actually provided the
right tools or effective building blocks? I am not confident that
we have.
The technical issues... providers usually block for security and
congestion reasons, creating a vicious cycle.
The result is that there is a ton of state in the network "trying
to find the good stuff", leading to lots of unpredictable behavior.
ISPs have lots of DPI infrastructure, and it is tempting to use it
for other stuff.
Some say "DPI? I don't see that in my country." Then come to
Britain. It seems to be the most common where there are cost
pressures, great competition, and no one can make money. It is a
problem already.
The outcome is the vicious cycle, which makes it hard to innovate,
or the regulators step in. They are not likely to get it right, at
least for the long term. We don't want either of these places.
I used P2P as an example, but that may not be the biggest problem. TV
is more so. There is a huge shift in usage patterns, putting a lot
of new traffic on the network. The iPlayer is really popular,
driving a lot more traffic, and no more money for providers. Also
games, VR, etc. Even if it is not TV that pushes things over the
edge, something will. So we are primarily talking about congestion,
and trying to make money.
We have always used TCP, and that has brought us a long way. It
does a pretty good job, but doesn't seem to be sufficient anymore.
Looking at categories of applications, the mix of apps is very
diverse in terms of demand, and we probably don't want to treat
them all the same. But users are not interested in paying
separately for different apps.
One way to classify is to say there are a bunch that are limited by
latency. For example, Gmail has a 15k transfer before displaying my
email. Then there are others that are more tolerant of latency.
It is a huge design space, and we need to think about how to make
all this stuff play well together. For example, TCP does a good
job of filling up the router buffers. This is bad if you are VoIP.
We need to figure out how to split these apart.
For large transfers - these are pretty tolerant of latency. While
those are transferring, if I prioritize the short ones, I won't
make the long ones much longer. So prioritizing the short
transfers over the long ones would make everyone happier. This is
an example of where we can do better than we are doing at the
moment.
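The claim above can be checked with a toy calculation. The numbers and the single-link model are invented for illustration: serving short transfers before a long one barely delays the long transfer, while the short transfers finish far sooner.

```python
# Toy model: transfers served back-to-back on a link of fixed capacity.
# Compare FIFO (long transfer first) against shortest-first scheduling.
def completion_times(sizes, capacity=1.0):
    """Return the finish time of each transfer, served in the given order."""
    t, out = 0.0, []
    for s in sizes:
        t += s / capacity
        out.append(t)
    return out

flows = [100.0, 1.0, 1.0, 1.0]           # one long transfer, three short ones
fifo = completion_times(flows)            # long one happens to go first
short_first = completion_times(sorted(flows))

# The long transfer finishes at t=103 under both orders, but the short
# transfers finish at t=1,2,3 instead of t=101,102,103.
assert fifo[-1] == short_first[-1] == 103.0
assert sum(short_first) < sum(fifo)
```

This is the intuition behind "prioritize the short transfers": total work is unchanged, but almost everyone waits less.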
In some places, this is starting to be done using DPI. This is a
deeply flawed approach, creating a conflict between privacy and
providing service. If you use IPsec to hide a voice flow, you
will get terrible latency.
The second problem is the arms race, which is only a race to the
bottom. And a third results in lock in for today's applications.
So, this is not where we want to go.
If we don't try to address these, we will end up in one of these
bad positions. Maybe IPTV will force the issue, or something else.
What could the IETF do to help, so as not to end up with
non-neutral techniques, or regulation (that will hurt everything).
Here are some ideas. First is multi-path TCP, where you move
traffic away from the congested path. You get similar behavior out
of multi-server HTTP. There is LEDBAT, where one is happy to give
up network bandwidth when needed by something else. We need to be doing
this in the IETF.
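The LEDBAT idea mentioned above can be sketched in a few lines. This is a simplification of the actual algorithm, with illustrative constants: a background transfer grows its window while queuing delay is below a target, and backs off when delay rises, yielding to foreground traffic.

```python
# Minimal sketch of delay-based yielding in the spirit of LEDBAT
# (simplified; TARGET and GAIN are illustrative, not the spec values).
TARGET = 0.025   # target queuing delay in seconds
GAIN = 1.0

def adjust_window(cwnd, queuing_delay):
    # Grow when under the delay target, shrink when over it,
    # proportionally to the distance from the target.
    off_target = (TARGET - queuing_delay) / TARGET
    return max(1.0, cwnd + GAIN * off_target)

cwnd = 10.0
cwnd = adjust_window(cwnd, 0.005)   # network idle: window grows
assert cwnd > 10.0
cwnd = adjust_window(cwnd, 0.100)   # buffers filling: yield capacity
assert cwnd < 10.0
```

The effect is that the background transfer deliberately loses to anything that pushes queuing delay up, which is exactly the "happy to give up bandwidth" behavior described above.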
There is a less-than-best-effort diffserv class, but I haven't heard
of anyone deploying it. We could be an advocate here. Another
thing is to try and improve the visibility of congestion. ISPs
don't want to throttle based only on the application; it is more
like congestion vs. value of the application. Bob Briscoe has
been touting re-ECN; this is one example that could really help.
You mark the amount of congestion in the packets. That is
interesting because it is the enabler you need for sane economics,
to capture the cost of the congestion. This could really help us
manage congestion sensibly.
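One way an endpoint can opt into the lower-effort diffserv class mentioned above is by setting the DSCP through the standard IP_TOS socket option. A minimal sketch, assuming Linux/POSIX socket semantics and the CS1 codepoint that RFC 3662 suggests for lower-effort traffic (whether any network on the path honours the marking is up to the operators):

```python
import socket

# RFC 3662 suggests CS1 (DSCP 8) for a lower-effort per-domain behavior.
# The DSCP occupies the upper six bits of the old TOS byte, so CS1 is
# 8 << 2 = 0x20 on the wire.
CS1_TOS = 8 << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, CS1_TOS)
tos = sock.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(f"TOS byte now 0x{tos:02x}")
sock.close()
```

The point is that the endpoint-side mechanism already exists and is trivial; the missing piece the speaker identifies is deployment and advocacy, not protocol machinery.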
We could also stop putting large buffers in routers. There
are places where you want this, but not many. We could design
mechanisms to defend against DDOS attacks. That would help shut
out unwanted traffic. We could get more extreme and encrypt
everything. In that case DPI can't work, and that would force the
issue; or we could otherwise design the protocols to make life hard
for DPI boxes.
I'm not saying these are good ideas, but they are things the IETF
could do that would force the issue. Maybe it is a good thing for
middleboxes to see things, if they are helping the quality of the
network. These are just some ideas, some examples.
So, some overall IETF goals are to devise mechanisms to make
congestion work better, and economics of congestion need to make
sense. There is a bunch of theory on this, and most of it says to
charge for congestion, since this is the only place where traffic is
displacing others' traffic. But end customers don't want this, and
don't want a variable bill. So we need to figure out how to make
this make sense. It could be an indirect link and the mechanisms
aren't there currently.
ISPs are in a difficult position in that they don't have the tools
they need, so we should be thinking about how to let them manage
things without breaking the architecture. This takes us back to
the tussle. We should not try to design a specific set of
economics, but rather, design the tools to let it play out in
different places. Net neutrality is mostly an economics problem,
but the IETF has not given the ISPs the tools needed to manage
things effectively. The outcomes are bad, either ubiquitous DPI or
regulation. We need some path down the middle.
This is not entirely a technical debate, since even with better
tools we may need legislation, but I think there are some technical
things that we can help with. So the question is what should the
IETF do in this space?
4.4. Moderated discussion
Ted Hardie: One thing that was not summarized - are you sure we did
not have this discussion before, around the Raven discussions? I
remember a debate about a similar topic - it was about liberty and
regulations, and designing protocols to allow DPI in the context of
law enforcement. How is the discussion about government or
politicians who want to change the network for their benefits any
different from the discussion about enabling operators to maximise
profit? Is this really a different debate? I agree that we can't
have a value neutral design, and I think in the past we chose
liberty. We made a conscious choice to protect the end-to-end
network, not just the consumers, but originators of content. If we
have to screw up latency, maybe it is time to not let either
profit or governments control the net.
Peter Lothberg: Mark talked about congestion and management of the
network, but I think that operators just did not put in enough
capacity. There are people in this room who get paid to make the
network more complicated. Instead you could fire people and just
add bandwidth.
If the operators were to simplify their networks and, instead of
deploying all sorts of boxes to maintain their staff and its
organizational structure, build a highly optimized network with
modern technology to deliver packets, they would have a
reasonable profit and we would remove the congestion in the
network.
Most of the work in the IETF today is about "special use" boxes and
technology to create jobs, or to make money for the box vendors,
not to make a better Internet for the future.
Any "on path" support in the network for an application, will limit
the use of the network and will over longer time periods just drive
the cost up, as the lifetime of such things are too short and it
limits the flexibility of the network.
Maybe the IETF should spend some time making the Internet bigger,
mobile at layer 3, and more cost-effective per transferred
general-purpose gigabit, rather than inventing technologies to
emulate a synchronous TDM network.
Mark Handley: I agree, except that it has already happened, and
the boxes are already out there. I want to get rid of them, but
they are already there.
Peter: Can we not define boxes and protocols that do nothing?
Doug Otis: I think we're sowing the seeds for the demise of
freedom. If you look at the exceptions that are made when you give
up neutrality, there are a lot of security concerns. We need to
show how we can make the protocols more robust. We have some, but
they are not getting much play. As governments and bad guys force
us to beef things up, like DNSSEC... we are not going to see an
answer. Yet we have proprietary solutions with 9 patents, and so
they are not free for people to use.
We are worried about Network Neutrality, but we are giving away
the store by not worrying about how to make things more robust
(e.g., fight against abuse). We haven't held the providers' feet
to the fire.
Marcelo: So, what should the IETF do?
Doug: There are simple things you can do. We are authorising
e-mail but we are not saying who is authorised. They are the ones
pushing out the junk. If you don't have that in the protocol, who
do you hold responsible? Protocols against DDOS attacks, things
like SCTP provide better defense without spending a fortune on
equipment.
Tom Vest: To Mark, can you please clarify what you meant by
saying pricing was not an economic challenge, but a technological
one?
And to Barbara, I would like to hear your opinion on the following:
It looks like we will not have an intra-enterprise problem
(with QoS etc.), but rather an inter-generation neutrality problem.
We are running out of IPv4 and there are a lot of people that are
working on transitional mechanisms to IPv6. But for a long time
those mechanisms will provide less than best effort for IPv4
services.
Mark: Regarding congestion pricing, there is no marginal cost
unless traffic is displacing other traffic. At the moment we don't
have a way to hold people accountable for the congestion their
traffic is causing.
The simplest way is to charge for congestion, but we don't have the
technology to even figure out what congestion is being caused
downstream. The economics could play out in many ways, but right
now we don't have even the technology to make it visible.
Tom: I was talking about the cost multiplier. If I know that user
X is responsible for 45% of congestion... that is an endless
debate.
Barbara: If you ask lawyers whether they have thought about it? No,
they haven't. Is it necessary? Not sure. You want to have a rule
that is broad enough to capture stuff, whether it is because of the
IP address or other things; that shouldn't really matter. As a
regulator, I don't really care. Your point raises an important
problem: it is sometimes difficult to separate an accidental side
effect from a deliberate disruption. Ed Felten asks how do you
distinguish between the causes? This is a hard part of the problem,
but it is possible you might be able to detect the effect. A non-
discriminatory solution is not usually prescriptive of a particular
solution. It is more like "if there is discrimination, it should
be for an important goal and be as minimally intrusive as possible."
Spencer Dawkins: To Mark, you described a scenario where we have a
rich conversation about the characteristics of traffic you plan to
put into the network. Are you thinking about changes to IPv6 in
that context?
Mark: I am not thinking about anything in particular, just
outlining things that are happening that might help. I am not a
proponent of a particular approach, just pointing out things in
general that we might do. The IETF should not ignore the debate,
but should have a discussion ourselves about technological
possibilities that can help. Otherwise we abdicate control and
influence.
Leslie Daigle: Wanted to emphasise a few things that both
presenters said: Note that the definition of the Internet, and its
network protocols, have never included a global definition of
service. In many cases, packets are delivered on a "best effort"
basis, where "best" is in the eye of the deliverer. Technically,
it has never been "neutral" -- some links are better than others,
and routers make choices about which path a given packet takes.
While, in principle, any given endpoint has the ability to use all
bandwidth from its connection ("fill the pipe"), this is not
actually enshrined in technical specification, and there are many
good reasons why reasonable network management practices would
throttle that (e.g., DOS detection and mitigation).
In ISOC's experience, working in both the policy and technical
worlds, the most important thing is to educate regulators and
policy makers about the implications of heavy-handed, rigid
regulation that focuses on current network technologies: forcing
neutrality could be as detrimental as promoting bias; focusing on
technologies would require rewriting the policy every 6 months.
The sweet spot is to get regulators to the point of understanding
the definition of "good" and "bad" behavior lies outside the
technical realm and in the land of appropriate competition and
fairness.
From our perspective, it is important that technical work stays
focused on building specifications that are about the structure and
transmission of packets for a global network that supports
innovation and the development and deployment of new applications.
QUOTE RAVEN: "The IETF, an international standards body, believes
itself to be the wrong forum for designing protocol or equipment
features that address needs arising from the laws of individual
countries, because these laws vary widely across the areas that
IETF standards are deployed in. Bodies whose scope of authority
correspond to a single regime of jurisdiction are more appropriate
for this task."
People should go back and read RFC 2804 - it is worth re-visiting in
this context.
James Woodyatt: A comment about application neutrality vs.
technology neutrality. I am the editor for an I-D in v6ops that
provides guidelines for simple security for residential customer
equipment, and how IPv6 residential gateways should block incoming
flows by default. There is an exception that is non-controversial
in v6ops: inbound IPsec, key exchange, etc. are allowed inbound by
default. This encourages the use of IPsec, possibly with BTNS,
mainly for the purpose of making the network difficult to observe
traffic-wise.
Mark: What we are doing there is to encourage everything to be
encrypted.
James: The intent was to keep the RFC 4864 recommendation. We didn't
want to make IPsec useless. The net result is this application
neutrality impact. Is this what we want to do?
Mark: A good observation of "no value-neutral design".
Dave Crocker: I think there are a substantial number of real
issues that need substantial changes, and that some of those are
technical. I am pretty sure we are not looking at this in the way
that will lead to the right technical solutions.
Both presentations were very well constructed from their
respective perspectives. My biggest takeaway is to understand the
perspectives better. Both seem to contain a magic bullet though:
one was to protect user choice, the other to fix congestion.
In many places these will not solve the problem of what to do when
you try to move the 101st person into a 100-person room. The tussle
paper is one of my favorite papers, because it lists problems, but
not solutions.
The first tussle we should look at is: how can we talk about this
topic in a way that is constructive? How do we find ways to go that
are not simplistic? And what are we doing to go forward with this?
Eve Varma: I thought the tussle paper was a powerful paper. You
spoke about congestion, but I think the ability to separate
concerns is important. It gives some good ideas here. It is kind
of like Object Oriented Design - what makes a good object? It hits
multiple areas, like ID-loc separation, and cuts across a whole
range. Are there places in the architecture where we could better
allow stuff to play out?
Alex Zimmerman: Research says that cooperative congestion control
gives the best fairness, and we have used that ever since.
Unfortunately, congestion control is susceptible to predators.
P2P is a good example. Is there an approach that doesn't fall prey
as easily? That is what network un-neutral operators are doing.
Should we re-think congestion control methods?
Mark: I think those are the right questions to ask.
Wes Hardaker: I think the best thing we can do is to ignore the
issue to a large extent. When you look at everything the IETF
produces, it all can be used for good or bad. TCP SYN, UDP out of
control. All those technologies were developed for one good: the
end user. Huge DPI is a bad thing, but the better solution is to
move away from the service - the market will eventually decide.
Like cell phones - none let you use VoIP on their network because
it is competition.
We should at the IETF optimise for the user - e.g., provide them
with authentication and a better means to filter out the bad guys
in the middle.
Michael Behringer: You are buying a service and if there is
congestion outside your access line then you are not getting the
service you bought. We need to visualise this to the end-user. We
need to give users a dashboard that shows them the actual
performance of the providers in the middle.
Mark: Do you think that if fiber to the home is ubiquitous, the
edge link will no longer be the bottleneck?
Michael: There is not enough competition on the access links. I
live in a country with big competition and hardly any congestion in
the core networks. You need more competition in the last mile. We
used to have this problem 10 years ago, but with competition it went
away. You can show end users if they get bad service, and if they
are not getting what they paid for.
Mark: Most users would not see limiting unless they are congesting
the access links.
Bob Briscoe: I am glad that others clapped before at the idea that
we shouldn't design a network for the provider's profit. Mark was
not saying that, but rather saying to deal with the cost. The hard
part is dealing with the cost without helping the provider make a
profit above cost. The reason is that this is the cost of one user
over another. It is nice to clap when someone says "don't design to
make profits", but that is not actually what Mark said.
Olaf: Thanks to Barbara and Mark for facilitating the discussion.
IAB open mic session:
Mark Handley: What is the IAB going to do about Network Neutrality?
(laughter)
Olaf: As I introduced, this was an informative session for the IAB
just as much as for the audience. It was good to be reminded of
previous IETF statements, glad that came out in the discussion. We
need to think about this and see if there are action items for the
IAB.
Jon Peterson: Mark pointed out a number of things that the IETF is
doing and that is highly appreciated, but the list doesn't end
there. There is the ALTO work, and there were two ad-hoc sessions
along the same lines. I think the IAB does its best work when
fostering activity, and that is up to the community as well. The
IAB will not solve this alone.
Stuart Cheshire: Someone said that we do engineering and not
economics. I think that all the engineering we do is economics.
Product developers and users game the system. The economics could
be to throw away packets, but without incentives there is no way to
shape properly. We create a playing field, not the outcomes.
Dave Oran: One of the authors of the Tussle paper said that we
should all give up our careers and go back to school to become
economists, because that is where the future is. Our challenge is
to look at the interaction of economics and the protocol
architecture for the Internet as a whole. We are doing the
Internet, that is, global optima, not local ones.
Aaron Falk: The tussle paper is well cited because this is about
notions of fairness, balance. In the ICCRG at the IRTF
there is a discussion of moving beyond TCP fairness. Not an answer
yet, but exploration of the issue - working on congestion control
and resource allocation in Internet protocols. My sensitivity was
raised as to how that plays out in different realms. It is an
important conversation.
Andy Malis: One major conclusion was that ISPs have one hand tied
behind their back, and don't have the tools that they need. One
thing we can do is think about what the IETF can do to help here.
Bob Hinden: I wanted to thank the IAB for doing this plenary. It
was a great topic, and I learned a lot, and encourage the IAB
to keep bringing up topics like this. This has been more helpful
than some other sessions in the past.
Gregory Lebovitz: What part of it was most helpful for you?
Bob: It is an important and relevant topic, and we should all think
about it. It is important for the Internet. The IETF, when we
design protocols we give policy. This topic, the effect of what we
do, is good to understand.
Olaf: Barry Leiba proposed this topic. Marcelo Bagnulo did a lot to
put this session together. Also received help from Patrik
Faltstrom. Thanks to everyone for their contributions.
Bob Briscoe: There is a better, more readable version of the
Tussle paper from 2005 if anyone is interested.
Olaf: Any other questions?